The hottest thing in technology is a tiny sliver of silicon closely related to the chips that power video game graphics. It is an artificial intelligence chip, designed specifically to make building AI systems such as ChatGPT faster and cheaper.
Such chips are suddenly at the center of what some experts see as an AI revolution that could reshape the tech industry, and possibly the world with it. Shares of Nvidia, the leading designer of AI chips, rose nearly 25 percent on Thursday after analysts said sales of its products were surging. The company was briefly valued at more than US$1 trillion earlier this week.
So what are AI chips, anyway?
That question is not easy to answer. “There is no fully agreed-upon definition of AI chips,” said Hannah Dohmen, a research analyst at the Center for Security and Emerging Technology.
In general, however, the term encompasses computing hardware specialized to handle AI workloads, for example by “training” AI systems to tackle difficult problems that can choke conventional computers.
The video game connection
Three entrepreneurs founded Nvidia in 1993 to push the boundaries of computer graphics. Within a few years, the company had developed a new chip called the Graphics Processing Unit, or GPU, which could perform many complex graphics calculations at once, greatly speeding up the development and play of video games.
That technique, formally known as parallel processing, proved key to the development of both games and AI. Two graduate students at the University of Toronto used a GPU-based neural network to win a 2012 AI competition called ImageNet, identifying images in photographs at a much lower error rate than their competitors.
The win sparked interest in AI-related parallel processing, which opened up new business opportunities for Nvidia and its rivals while giving researchers powerful tools to explore the frontiers of AI development.
Modern AI chips
Eleven years later, Nvidia is a leading supplier of chips for building and updating AI systems. One of its latest products, the H100 GPU, packs in 80 billion transistors — about 13 billion more than Apple’s latest high-end processor for the MacBook Pro laptop. Unsurprisingly, this technology is not cheap: at one online retailer, the H100 lists for US$30,000.
Nvidia doesn’t manufacture these complex GPU chips itself, a task that would require enormous investment in new factories. Instead, it relies on Asian chipmakers such as Taiwan Semiconductor Manufacturing Co. and South Korea’s Samsung Electronics.
Some of the biggest customers for AI chips are cloud computing services such as those run by Amazon and Microsoft. By renting out their AI computing power, those services make it possible for smaller companies and groups that could not afford to build their own AI systems from scratch to use cloud-based tools for tasks ranging from drug discovery to customer management.
Other uses and competition
Parallel processing has many uses outside of AI. A few years ago, for example, Nvidia graphics cards were in short supply because cryptocurrency miners, who set up banks of computers to solve thorny math problems for bitcoin rewards, had snapped up most of them. That problem faded when the cryptocurrency market crashed in early 2022.
Analysts say Nvidia will inevitably face tougher competition. One potential rival is Advanced Micro Devices, which already competes with Nvidia in the market for computer graphics chips. AMD has recently taken steps to bolster its own lineup of AI chips.
Nvidia is based in Santa Clara, California. Co-founder Jensen Huang remains the company’s president and CEO.