1,100+ Tech Leaders Sign Open Letter To 'Immediately Pause' AI Systems

Signatories want a 6-month halt to development to evaluate and address threats the technology poses


As the rapid deployment of artificial intelligence platforms raises questions about how the technology may negatively impact humans, global technology leaders are calling for a moratorium on AI development.

More than 1,100 tech leaders have signed an open letter calling for a pause in the AI arms race, to give developers time to ensure the systems benefit society and that reasonable steps are taken to mitigate their risks.

Signatories to the letter include Elon Musk, billionaire CEO of SpaceX, Tesla and Twitter; Steve Wozniak, co-founder of Apple; Andrew Yang, former presidential candidate; and Evan Sharp, co-founder of Pinterest.

The letter states:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Signatories to the letter want to “immediately pause for at least six months the training of AI systems more powerful than GPT-4.” They say the halt should be public and verifiable, and that if companies cannot enact it quickly themselves, governments should “step in and institute a moratorium.”

According to a research paper published in 2022, AI systems could make it easier for bad actors to produce dangerous pathogens capable of sparking a pandemic, or to develop advanced bioweapons.

Sam Altman, CEO of OpenAI, which makes GPT-4 and ChatGPT, said in a recent interview that he worries AI technology could be used for large-scale disinformation campaigns or offensive cyberattacks.

Some also predict that AI will progress to human-level abilities, irreversibly changing civilization and allowing humans to merge with machines via brain-computer interfaces.

The open letter also proposes solutions to these pressing problems, urging developers and policymakers to establish strong regulation, oversight and tracking of highly capable systems, watermarking systems to distinguish real content from synthetic, an auditing and certification ecosystem, and liability for AI-caused harm.
