SAN FRANCISCO, Aug 27 (Reuters) – Cerebras Systems Inc on Tuesday unveiled tools that let AI developers run applications on its oversized chips, offering what the company said is a much cheaper option than the industry-standard processors from Nvidia (NVDA.O).
Access to Nvidia's graphics processing units (GPUs), often via cloud computing providers, can be hard to come by and expensive, both for training large-scale artificial intelligence models and for running them to serve applications such as OpenAI's ChatGPT – the latter a process developers call inference.
“We're delivering performance that GPUs just can't deliver,” Cerebras CEO Andrew Feldman told Reuters in an interview. “We're delivering it with the highest accuracy and at the lowest price.”
The inference portion of the AI market is expected to grow rapidly and become increasingly attractive, eventually reaching tens of billions of dollars in value as consumers and businesses adopt AI tools.
The Sunnyvale, California-based company plans to offer several inference products through developer keys and its own cloud, and also plans to sell AI systems to customers who want to run their own data centers.
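The article does not describe the interface itself; as a rough illustration, developer-key access to a hosted inference service typically looks like the sketch below. The endpoint URL, model name, and environment variable are hypothetical placeholders, not details confirmed by Cerebras.

```python
# Minimal sketch of calling a hosted inference service with a developer key.
# The endpoint, model name, and environment variable are hypothetical
# placeholders -- not confirmed Cerebras API details.
import os
import requests

API_KEY = os.environ["INFERENCE_API_KEY"]  # hypothetical developer key

response = requests.post(
    "https://api.example-inference.com/v1/chat/completions",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-llm",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 100,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```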
Cerebras' chips, each about the size of a dinner plate and called “wafer-scale engines,” sidestep one of the bottlenecks of AI data processing: the data processed by the large-scale models that power AI applications typically does not fit on a single chip and can require hundreds or even thousands of chips chained together.
That means Cerebras' chips can deliver faster performance, Feldman said.
The company plans to charge users as little as 10 cents per million tokens, a unit companies use to measure the amount of output generated by large-scale models.
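To make that pricing concrete, here is a short sketch of the cost arithmetic. The 10-cent rate comes from the article; the token counts are illustrative examples, not figures from Cerebras.

```python
# Illustrative cost arithmetic for per-token pricing. The rate is from the
# article; the token counts below are made-up examples.
PRICE_PER_MILLION_TOKENS = 0.10  # dollars, per the article

def inference_cost(output_tokens: int) -> float:
    """Return the dollar cost of generating `output_tokens` tokens."""
    return output_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"${inference_cost(500_000):.2f}")     # $0.05 for half a million tokens
print(f"${inference_cost(10_000_000):.2f}")  # $1.00 for ten million tokens
```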
Cerebras is seeking to go public and filed a confidential prospectus with the Securities and Exchange Commission this month, the company said.
Reporting by Max Charney in San Francisco; Editing by Edwina Gibbs