
How Nvidia created a competitive moat around AI chips


Naveen Rao, a neuroscientist turned tech entrepreneur, once tried to compete with Nvidia, the world’s leading maker of chips designed for artificial intelligence.

At a start-up that was later bought by the semiconductor giant Intel, Mr. Rao worked on chips intended to replace Nvidia’s graphics processing units, components adapted for AI tasks such as machine learning. But while Intel moved slowly, Nvidia swiftly upgraded its products with new AI features that countered what he was developing, Mr. Rao said.

After leaving Intel to lead a software start-up, MosaicML, Mr. Rao used Nvidia’s chips and evaluated them against those of rivals. He found that Nvidia had differentiated itself beyond the chips themselves by creating a large community of AI programmers who continually innovate using the company’s technology.

“Everyone builds on Nvidia first,” Mr. Rao said. “If you come out with a new piece of hardware, you’re racing to catch up.”

Over more than 10 years, Nvidia has built a nearly unassailable lead in producing chips that can perform complex AI tasks such as image, facial and speech recognition, as well as generating text for chatbots like ChatGPT. The onetime industry upstart achieved that dominance by recognizing AI trends early, tailoring its chips to those tasks and then developing key pieces of software that aid AI development.

Jensen Huang, Nvidia’s co-founder and chief executive, has kept raising the bar ever since. To maintain its leading position, his company has also offered customers access to specialized computers, computing services and other tools of their budding trade. That has turned Nvidia, for all intents and purposes, into a one-stop shop for AI development.

While Google, Amazon, Meta, IBM and others also produce AI chips, Nvidia today accounts for more than 70 percent of AI chip sales and holds an even bigger position in training generative AI models, according to the research firm Omdia.

The company’s status as the AI revolution’s clearest winner became evident in May, when it forecast a 64 percent jump in quarterly revenue, far more than Wall Street had expected. On Wednesday, Nvidia, which has exceeded $1 trillion in market capitalization to become the world’s most valuable chip maker, is expected to confirm those record results and provide more signals about booming AI demand.

“Customers will have to wait 18 months to buy an Nvidia system instead of buying an available, off-the-shelf chip from a start-up or another competitor,” said Daniel Newman, an analyst at Futurum Group. “It’s amazing.”

Mr. Huang, 60, who is known for his trademark black leather jacket, had talked up AI for years before becoming one of the movement’s best-known faces. He has publicly said that computing is going through its biggest shift since IBM defined how most systems and software operate 60 years ago. Now, he says, GPUs and other special-purpose chips are replacing standard microprocessors, and AI chatbots are replacing complex software coding.

“What we realized is that this is a reinvention of how computing is done,” Mr. Huang said in an interview. “And we built everything from the ground up, from the processor all the way up.”

Mr. Huang helped start Nvidia in 1993 to make chips that render images in video games. While standard microprocessors excel at performing complex calculations sequentially, the company’s GPUs perform many simple tasks at once.

In 2006, Mr. Huang took that a step further, announcing a software technology called CUDA that helped program the GPUs for new tasks, turning them from single-purpose chips into more general-purpose ones that could take on other jobs in fields such as physics and chemical simulation.
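To make that shift concrete, here is a minimal, illustrative CUDA program, a sketch rather than Nvidia’s own code: thousands of GPU threads each perform one simple addition at once, where a standard microprocessor would walk through the array sequentially.

// Illustrative sketch: each GPU thread adds one pair of elements,
// instead of a CPU loop handling the array one element at a time.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;                          // about a million elements
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));       // memory visible to CPU and GPU
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);    // launch roughly a million parallel threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    // prints 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}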

A major breakthrough came in 2012, when researchers used GPUs to achieve humanlike accuracy in tasks such as recognizing a cat in an image, a precursor to more recent developments such as generating images from text prompts.

Nvidia responded by adapting “every aspect of our company to advance this new area,” Mr. Huang said recently in a commencement speech at National Taiwan University.

The effort, which is estimated to have cost the company more than $30 billion over a decade, has made Nvidia more than a component supplier. In addition to collaborating with leading scientists and start-ups, the company built a team that directly participates in AI activities such as building and training language models.

Early insight into what AI practitioners would need prompted Nvidia to develop many layers of key software beyond CUDA. Those include hundreds of prebuilt pieces of code, called libraries, that save labor for programmers.
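To give a rough sense of the labor such libraries save, the illustrative sketch below calls cuBLAS, one of Nvidia’s prebuilt linear-algebra libraries, to run a standard vector update on the GPU without hand-writing any GPU kernel; the values are invented for the example.

// Illustrative only: the cuBLAS library computes y = alpha*x + y on
// the GPU, so the programmer never writes a kernel by hand.
// Compile with: nvcc example.cu -lcublas
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>

int main(void) {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 3.0f;
    cublasSaxpy(handle, n, &alpha, x, 1, y, 1); // one library call replaces a hand-written kernel
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // 3*1 + 2 = 5
    cublasDestroy(handle);
    cudaFree(x); cudaFree(y);
    return 0;
}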

In hardware, Nvidia earned a reputation for consistently delivering faster chips every two years. In 2017, it began adjusting its GPUs to handle specific AI calculations.

That same year, Nvidia, which typically sold chips or circuit boards for other companies’ systems, also began selling complete computers to perform AI tasks more efficiently. Some of its systems are now the size of supercomputers, which it assembles and operates using proprietary networking technology and thousands of GPUs. It can take weeks for such hardware to train the latest AI models.

“This type of computing doesn’t allow you to just build a chip and let customers use it,” Mr. Huang said in the interview. “You have to build out the entire data center.”

Last September, Nvidia announced production of new chips called H100, which it had enhanced to handle so-called transformer operations. Such calculations turned out to be the foundation of services like ChatGPT, which have prompted what Mr. Huang has called the “iPhone moment” of generative AI.
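For a sense of what a transformer operation involves, here is a deliberately naive, illustrative kernel, far simpler than anything that actually ships on an H100: the heart of transformer attention is a scaled matrix product, and each entry of the resulting score matrix can be computed by its own GPU thread.

// Naive illustration of the matrix math behind transformer attention:
// S = Q * K^T / sqrt(d). Each thread computes one score; production
// code uses fused, mixed-precision kernels instead.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void attention_scores(const float *Q, const float *K,
                                 float *S, int n, int d) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;   // query index
    int col = blockIdx.x * blockDim.x + threadIdx.x;   // key index
    if (row < n && col < n) {
        float dot = 0.0f;
        for (int i = 0; i < d; i++)
            dot += Q[row * d + i] * K[col * d + i];
        S[row * n + col] = dot * rsqrtf((float)d);     // scale by 1/sqrt(d)
    }
}

int main(void) {
    const int n = 64, d = 32;                          // toy sequence length and head size
    float *Q, *K, *S;
    cudaMallocManaged(&Q, n * d * sizeof(float));
    cudaMallocManaged(&K, n * d * sizeof(float));
    cudaMallocManaged(&S, n * n * sizeof(float));
    for (int i = 0; i < n * d; i++) { Q[i] = 0.001f * i; K[i] = 0.02f; }

    dim3 threads(16, 16);
    dim3 blocks((n + 15) / 16, (n + 15) / 16);
    attention_scores<<<blocks, threads>>>(Q, K, S, n, d);
    cudaDeviceSynchronize();

    printf("S[0] = %f\n", S[0]);                       // first attention score
    cudaFree(Q); cudaFree(K); cudaFree(S);
    return 0;
}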

To further expand its influence, Nvidia has also recently forged partnerships with big tech companies and invested in high-profile AI start-ups that use its chips. One was Inflection AI, which in June announced $1.3 billion in funding from Nvidia and others. The money helped finance the purchase of 22,000 H100 chips.

Mustafa Suleyman, Inflection’s chief executive, said there was no obligation to use Nvidia’s products but that competitors offered no viable alternative. “None of them came close,” he said.

Nvidia has also directed cash and scarce H100s to fledgling cloud services such as CoreWeave, which allow companies to rent time on computers rather than buying their own. CoreWeave, which will operate Inflection’s hardware and owns more than 45,000 Nvidia chips, raised $2.3 billion in debt this month to help buy more.

Given the demand for its chips, Nvidia has to decide who will get how many of them. That power makes some tech executives uncomfortable.

“It’s really important that hardware not become a barrier to AI or a gatekeeper to AI,” said Clément Delangue, chief executive of Hugging Face, an online repository for language models that collaborates with Nvidia and its competitors.

Some rivals said it was hard to compete with a company that sells computers, software, cloud services and trained AI models, as well as processors.

“Unlike any other chip company, they are willing to compete openly with their customers,” said Andrew Feldman, chief executive of Cerebras, a start-up that develops AI chips.

But few customers, at least publicly, are complaining. Even Google, which began making competing AI chips more than a decade ago, relies on Nvidia’s GPUs for some of its work.

The demand for Google’s own chips is “tremendous,” said Amin Vahdat, Google’s vice president and general manager of compute infrastructure. But, he added, “we work really closely with Nvidia.”

Nvidia doesn’t discuss prices or chip allocation policies, but industry executives and analysts said each H100 costs $15,000 to more than $40,000, depending on the packaging and other factors, roughly two to three times more than the predecessor A100 chip.

“Pricing is one place where Nvidia leaves a lot of room for other people to compete,” said David Brown, vice president of Amazon’s cloud unit. He argued that Amazon’s own AI chips are a bargain compared with Nvidia’s.

Mr. Huang said the improved performance of his chips saved customers money. “If you can cut the training time in half on a $5 billion data center, the savings are more than the cost of all the chips,” he said. “We are the lowest cost solution in the world.”

Mr. Huang has also begun promoting a new product, Grace Hopper, which combines GPUs with internally developed microprocessors, countering chips that rivals say use less energy to run AI services.

Still, more competition seems inevitable. One of the most promising entrants in the race is a GPU sold by Advanced Micro Devices, said Mr. Rao, whose start-up was recently bought by the data and AI company Databricks.

“No matter how much anybody wants to say it’s all done, it’s not all done,” said Lisa Su, AMD’s chief executive.

Cade Metz contributed reporting.


