John Smyth
2024-10-11 23:29:35 UTC
Could Take Over the World.'
<https://archive.is/VuJ4L#selection-2403.0-2597.172>
'The newly minted Nobel laureate Geoffrey Hinton has a message about the
artificial-intelligence systems he helped create: get more serious about
safety or they could endanger humanity.
“I think we’re at a kind of bifurcation point in history where, in the
next few years, we need to figure out if there’s a way to deal with that
threat,” Hinton said in an interview Tuesday with a Nobel Prize official
that mixed pride in his life’s work with warnings about the growing
danger it poses.
The 76-year-old Hinton resigned from Google last year in part so he
could talk more about the possibility that AI systems could escape human
control and influence elections or power dangerous robots. Along with
other experienced AI researchers, he has called on such companies as
OpenAI, Meta Platforms and Alphabet-owned Google to devote more
resources to the safety of the advanced systems that they are competing
against each other to develop as quickly as possible.
Hinton’s Nobel win has provided a new platform for his doomsday warnings
at the same time it celebrates his critical role in advancing the
technologies fueling them. Hinton has argued that advanced AI systems
are capable of understanding their outputs, a controversial view in
research circles.
“Hopefully, it will make me more credible when I say these things really
do understand what they’re saying,” he said of the prize.
Hinton’s views have pitted him against factions of the AI community that
believe dwelling on doomsday scenarios needlessly slows technological
progress or distracts from more immediate harms, such as discrimination
against minority groups.
[Photo caption: The Stockholm announcement of John Hopfield and Geoffrey Hinton as this year's Nobel Prize winners in physics.]
“I think that he’s a smart guy, but I think a lot of people have way
overhyped the risk of these things, and that’s really convinced a lot of
the general public that this is what we should be focusing on, not the
more immediate harms of AI,” said Melanie Mitchell, a professor at the
Santa Fe Institute, during a panel last year.
Hinton visited Google’s Silicon Valley headquarters Tuesday for an
informal celebration, and some of the company’s top AI executives
congratulated him on social media.
On Wednesday, other prominent Googlers specializing in AI were also
awarded a Nobel Prize. Demis Hassabis, chief executive of Google
DeepMind, and John M. Jumper, director at the AI lab, were part of a
group of three scientists who won the chemistry prize for their work on
predicting the shape of proteins.
Thinking like people
Hinton is sharing the Nobel Prize in physics with John Hopfield of
Princeton University for their work since the 1980s on neural networks
that process information in ways inspired by the human brain. That work
is the basis for many of the AI technologies in use today, from
ChatGPT’s humanlike conversations to Google Photos’ ability to recognize
who is in every picture you take.
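(For a concrete sense of what such a brain-inspired network does, here is a minimal sketch of a Hopfield network, the 1980s model behind Hopfield's share of the prize: it stores a pattern in its weights, then pulls a corrupted input back toward that pattern, a rough analogue of associative memory. Everything in the sketch, including the toy pattern, is illustrative rather than taken from the article.)

```python
import numpy as np

def train(patterns):
    # Hebbian rule: weights are the (scaled) sum of pattern outer products.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # neurons have no self-connections
    return W

def recall(W, state, steps=10):
    # Repeated sign updates pull the state toward a stored pattern.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

stored = np.array([[1, -1, 1, -1, 1, -1]], dtype=float)  # toy memory
W = train(stored)
noisy = np.array([1, -1, -1, -1, 1, -1], dtype=float)    # one bit flipped
print(recall(W, noisy))  # recovers [ 1. -1.  1. -1.  1. -1.]
```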
“Their contributions to connect fundamental concepts in physics with
concepts in biology, not just AI—these concepts are still with us
today,” said Yoshua Bengio, an AI researcher at the University of
Montreal.
[Photo caption: John Hopfield of Princeton University is sharing in the Nobel Prize for physics.]
In 2012, Hinton worked with two of his University of Toronto graduate
students, Alex Krizhevsky and Ilya Sutskever, on a neural network called
AlexNet programmed to recognize images in photos. Until that point,
computer algorithms had often been unable to tell that a picture of a
dog was really a dog and not a cat or a car.
AlexNet’s blowout victory at a 2012 contest for image-recognition
technology was a pivotal moment in the development of the modern AI
boom, as it proved the power of neural nets over other approaches.
That same year, Hinton started a company with Krizhevsky and Sutskever
that turned out to be short-lived. Google acquired it in 2013 in an
auction against competitors including Baidu and Microsoft, paying $44
million essentially to hire the three men, according to the book “Genius
Makers.” Hinton began splitting time between the University of Toronto
and Google, where he continued research on neural networks.
Hinton is widely revered as a mentor for the current generation of top
AI researchers including Sutskever, who co-founded OpenAI before leaving
this spring to start a company called Safe Superintelligence.
Hinton received the 2018 Turing Award, a computer-science prize, for his
work on neural networks alongside Bengio and a fellow AI researcher,
Yann LeCun. The three are often referred to as the modern “godfathers of
AI.”
Warnings of disaster
By 2023, Hinton had become alarmed about the consequences of building
more powerful artificial intelligence. He began talking about the
possibility that AI systems could escape the control of their creators
and cause catastrophic harm to humanity. In doing so, he aligned himself
with a vocal movement of people concerned about the existential risks of
the technology.
“We’re in a situation that most people can’t even conceive of, which is
that these digital intelligences are going to be a lot smarter than us,
and if they want to get stuff done, they’re going to want to take
control,” Hinton said in an interview last year.
Hinton announced he was leaving Google in spring 2023, saying he wanted
to be able to freely discuss the dangers of AI without worrying about
consequences for the company. Google had acted “very responsibly,” he
said in an X post.
In the subsequent months, Hinton has spent much of his time speaking to
policymakers and tech executives, including Elon Musk, about AI risks.
Hinton cosigned a paper last year saying companies doing AI work should
allocate at least one-third of their research and development resources
to ensuring the safety and ethical use of their systems.
“One thing governments can do is force the big companies to spend a lot
more of their resources on safety research, so that for example
companies like OpenAI can’t just put safety research on the back
burner,” Hinton said in the Nobel interview.
An OpenAI spokeswoman said the company is proud of its safety work.
With Bengio and other researchers, Hinton supported an
artificial-intelligence safety bill passed by the California Legislature
this summer that would have required developers of large AI systems to
take a number of steps to ensure they can’t cause catastrophic damage.
Gov. Gavin Newsom recently vetoed the bill, which was opposed by most
big tech companies including Google.
Hinton’s increased activism has put him in opposition to other respected
researchers who believe his warnings are fantastical because AI is far
from having the capability to cause serious harm.
“Their complete lack of understanding of the physical world and lack of
planning abilities put them way below cat-level intelligence, never mind
human-level,” LeCun wrote in a response to Hinton on X last year.