Major tech companies including OpenAI, Microsoft, and Amazon came together Tuesday at the Seoul Safety Summit to establish an agreement on the safe development of their advanced artificial intelligence (AI) models, according to reports. The agreement sets a precedent for collaboration on AI safety among multinational corporations based in the U.S., China, Canada, the U.K., France, and other countries.
According to a report by CNBC, AI model makers have committed to publishing safety frameworks outlining security protocols for their AI systems, targeting risks like automated cyberattacks and bioweapons threats.
The commitment made at the summit applies to frontier models, meaning advanced AI models that could pose significant risks to the public. In the event of unmanageable risks, the companies reportedly plan to implement a "kill switch" that would halt the operation of their AI systems.
These safety commitments, including the "kill switch" implementation, signify a step forward in global efforts to ensure that advanced AI systems are developed and used responsibly.