  • Google CEO Sundar Pichai suggested in an interview on CBS’ “60 Minutes” that aired Sunday that society is not ready for the rapid development of AI.
  • “It’s not for one company to decide alone,” Pichai said.
  • Warning about the consequences, he said AI would affect “every company’s product”.

Google CEO Sundar Pichai speaks on a panel at the CEO Summit of the Americas hosted by the U.S. Chamber of Commerce in Los Angeles, California.

Anna Moneymaker | Getty Images

Google and Alphabet CEO Sundar Pichai said that “every company’s product” will be affected by the rapid development of AI, and warned that society needs to brace for technologies like the ones his company has already launched.

In an interview that aired Sunday on CBS’ “60 Minutes,” interviewer Scott Pelley said that trying out several of Google’s AI projects left him “speechless” and felt “unsettling,” citing the human-like capabilities of products such as Google’s chatbot Bard.

“We need to adapt as a society for it,” Pichai told Pelley, explaining that the jobs disrupted by AI will include “knowledge workers,” among them writers, accountants, architects and, ironically, software engineers.

“This is going to impact every product across every company,” Pichai said. “For example, you could be a radiologist. If you think about five to ten years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through. It may say, ‘These are the most serious cases you need to look at first.’”

Pelley was also shown other advanced AI projects, including robots at Google’s DeepMind that taught themselves to play soccer rather than learning it from humans. Another segment showed robots that recognized items on a table and fetched the apple Pelley asked for.

While warning about the consequences of AI, Pichai said that the scale of the problem of disinformation and fake news and images “will only get bigger” and “could cause harm.”

Last month, CNBC reported that internally, Pichai told staff that the success of the newly launched Bard program now hinges on public testing, adding that “things will go wrong.”

Google released its AI chatbot Bard to the public last month as an experimental product. It followed ChatGPT, which gained worldwide attention after its launch in late 2022, and Microsoft’s January announcement that its search engine Bing would incorporate OpenAI’s GPT technology.

However, fears about the consequences of this rapid progress have also reached the public and critics in recent weeks. In March, Elon Musk, Steve Wozniak and dozens of academics signed an open letter calling for an immediate pause in training “experiments” connected to large language models that are “more powerful than GPT-4,” OpenAI’s flagship LLM. More than 25,000 people have signed the letter since.

“Competitive pressure among giants like Google and startups you’ve never heard of is propelling humanity into the future, ready or not,” Pelley commented in the episode.

Google has released a document outlining “recommendations for regulating AI,” but Pichai said society must quickly adapt, with regulation, laws to punish abuse, and treaties among nations to make AI safer for the world and “consistent with human values, including ethics.”

“It’s not for one company to decide alone,” Pichai said. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on.”

Asked whether society is prepared for AI technology like Bard, Pichai answered: “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, seems mismatched.”

However, he added that he is optimistic because, compared with other technologies in the past, “the number of people who have started worrying about the implications” did so earlier.

From a six-word prompt by Pelley, Bard created a tale with invented characters and plot, including a man whose wife could not conceive and a stranger grieving after a miscarriage and longing for closure. “I am rarely speechless,” Pelley said. “The humanity at superhuman speed was a shock.”

Pelley said he asked Bard why it helps people, and it replied “because it makes me happy,” which Pelley said shocked him. “Bard appears to be thinking,” he told James Manyika, whom Google hired last year as head of “technology and society.” Manyika replied that Bard is not sentient and is not aware of itself, but that it can “behave like” it is.

Pichai also acknowledged that Bard is prone to “hallucinations”: Pelley asked Bard about inflation and promptly received a response recommending five books that, it was later verified, do not exist.

Pelley appeared concerned when Pichai said there is a “black box” aspect to chatbots, in which “you don’t fully understand” why or how they come up with certain responses.

“You don’t fully understand how it works, and yet you’ve turned it loose on society?” Pelley asked.

“Let me put it this way, I don’t think we fully understand how the human mind works,” Pichai replied.