
Geoffrey Hinton’s warning about the dangers of artificial intelligence


Dr. Geoffrey Hinton, dubbed the “Godfather of Artificial Intelligence,” has issued another warning about dangers of artificial intelligence that scientists and researchers are ignoring.

In April, hundreds of top tech experts signed an open letter calling on AI labs to pause development, warning that the labs were “locked in an out-of-control race” to develop and deploy ever more powerful digital minds that no one can understand, predict, or reliably control.

Following that stern warning, Dr. Hinton spoke to The New York Times about it.

In an interview, he admitted that “future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.”

In 2012, Dr. Hinton and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that became the basis for generative AI products like ChatGPT.

In April, Hinton quit his job at Google after more than a decade with the company, saying that part of him regretted the work he had spent so much of his life doing.

Recently, Dr. Hinton spoke at the Collision Technology Conference in Toronto and shared more of his concerns.

“They still can’t match us, but they’re getting closer,” Hinton said, according to The Debrief, pointing to the speed at which AI is advancing and its growing ability to mimic humans.

During the appearance, Hinton admitted something surprising about how close the latest language models have come to matching human capabilities.

“I don’t really understand why they do it, but they have a little bit of common sense,” Hinton said.

Consider Hinton’s words for a moment: one of the leading innovators in the field of artificial intelligence admits that he “doesn’t really understand why” the latest language models seem to have a bit of reasoning ability comparable to human logic.

“We’re just machines,” Hinton explained. “We’re a wonderful, incredibly complicated machine, but we’re a big neural network,” he added, “and there’s no reason why an artificial neural network can’t do everything we can do.”

“But if they are smarter than us and perhaps have goals of their own, which is very likely, I think we have to take seriously the possibility that they will develop the goal of taking control,” he said.

Dr. Hinton is particularly concerned about the military’s use of artificial intelligence.

“If defense departments use [AI] to make robots for war, it would be a very ugly and scary thing,” Hinton said, adding that they don’t have to be “super intelligent” or “self-motivated” to have disastrous results.

“For example, it makes it very easy for rich countries to invade poor countries,” Hinton said. “There’s a disincentive to invading poor countries these days: you bring dead citizens back home. If it’s just dead robots, that’s great; the military-industrial complex would love that.”


