To the editor: Guest contributors Dov Greenbaum and Mark Gerstein acknowledge that artificial intelligence poses potential risks to society, but their prescription of monitoring, like that required of the pharmaceutical industry, is woefully inadequate (“Can AI developers avoid Frankenstein’s fateful mistake?” Nov. 15).
Unlike all previous technological advances, AI is not just another tool for humans to use. AI developers, including those in robotics, are competing to create ever more powerful entities whose capabilities vastly surpass our own in both physical manipulation and mental calculation. Whether or not AIs have achieved or will achieve “consciousness,” they have already demonstrated the ability to act on their own, reason in unforeseen ways, use subterfuge and resist being turned off.
Two years ago, Elon Musk, Steve Wozniak and more than 1,000 other experts signed an open letter calling for a six-month pause on the development of any AI technology more powerful than OpenAI’s GPT-4. They were concerned about the possibility of “profound risks to society and humanity.” They wrote that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” The requested pause never happened, of course.
We are a strange species. Our “leaders” have been complicit in allowing profit to come before the protection of the Earth’s climate. Now, with AI, they are allowing profit to come before ensuring that AI does not endanger the entire human enterprise.
Grace Bertalot, Anaheim