SambaNova now offers a bundle of generative AI models


SambaNova, an AI chip startup that’s raised over $1.1 billion in VC money to date, is gunning for OpenAI — and rivals — with a new generative AI product geared toward enterprise customers.

SambaNova today announced Samba-1, an AI-powered system designed for tasks like text rewriting, coding, language translation and more. The company’s calling the architecture a “composition of experts” — a jargony name for a bundle of generative open source AI models, 56 in total.

Rodrigo Liang, SambaNova’s co-founder and CEO, says that Samba-1 allows companies to fine-tune models for multiple AI use cases while avoiding the challenges of implementing AI systems ad hoc.

“Samba-1 is fully modular, enabling companies to asynchronously add new models … without eliminating their previous investment,” Liang told TechCrunch in an interview. “Similarly, they’re iterative, extensible and easy to update, giving our customers room to adjust as new models are integrated.”

Liang’s a good salesperson, and what he says sounds promising. But is Samba-1 really superior to the many, many other AI systems for business tasks out there, not least OpenAI’s models?

It depends on the use case.

The ostensible main advantage of Samba-1 is that, because it’s a collection of models trained independently rather than a single large model, customers have control over how prompts and requests are routed. A request made to a large model like GPT-4 travels one direction — through GPT-4. But a request made to Samba-1 travels one of 56 directions (to one of the 56 models making up Samba-1), depending on the rules and policies a customer specifies.
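To make the routing idea concrete, here’s a minimal sketch of a rule-based dispatcher in the spirit Liang describes. It is not SambaNova’s actual API; the expert names and keyword rules below are hypothetical stand-ins for whatever policies a customer would define.

```python
# Hypothetical sketch of customer-defined routing across a bundle of expert models.
# Model names and rules are illustrative, not SambaNova's configuration.

EXPERTS = {
    "code": "code-expert",              # hypothetical coding model in the bundle
    "translate": "translation-expert",  # hypothetical translation model
    "default": "general-chat-expert",   # hypothetical general-purpose fallback
}

def route_prompt(prompt: str, task_hint: str | None = None) -> str:
    """Send each request down exactly one path, per the customer's policy."""
    if task_hint in EXPERTS:
        return EXPERTS[task_hint]
    # A crude keyword rule standing in for whatever policy the customer writes.
    if any(token in prompt for token in ("def ", "class ", "import ")):
        return EXPERTS["code"]
    return EXPERTS["default"]

print(route_prompt("Translate this paragraph into French", task_hint="translate"))
# -> translation-expert
```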

This multi-model strategy also reduces the cost of fine-tuning on a customer’s data, Liang claims, because customers only have to fine-tune individual models or small groups of them rather than one massive model. And — in theory — it could result in more reliable (i.e. less hallucination-prone) responses to prompts, he says, because answers from one model can be compared against the answers from the others — albeit at the cost of added compute.
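And here’s a similarly hedged sketch of the cross-checking Liang alludes to: ask several experts the same question and flag disagreement, at the price of extra inference calls. The query_model function is a stub for illustration, not a real endpoint.

```python
# Hypothetical sketch of comparing answers across experts to catch likely hallucinations.
# query_model is a stand-in stub; a real deployment would call each model's endpoint.
from collections import Counter

def query_model(model: str, prompt: str) -> str:
    canned = {"expert-a": "Paris", "expert-b": "Paris", "expert-c": "Lyon"}
    return canned[model]  # canned answers keep the example self-contained

def cross_check(prompt: str, models: list[str]) -> tuple[str, bool]:
    """Return the majority answer and whether every expert agreed."""
    answers = [query_model(m, prompt) for m in models]
    best, count = Counter(answers).most_common(1)[0]
    return best, count == len(models)  # the extra calls are the added compute cost

answer, unanimous = cross_check("What is the capital of France?",
                                ["expert-a", "expert-b", "expert-c"])
print(answer, unanimous)  # -> Paris False, so the answer might deserve a second look
```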

“With this … architecture, you don’t have to break bigger tasks into smaller ones and so you can train many smaller models,” Liang said, adding that Samba-1 can be deployed on-premises or in a hosted environment depending on a customer’s needs. “With one big model, your compute per [request] is higher so the cost of training is higher. [Samba-1’s] architecture collapses the cost of training.”

I’d counter that plenty of vendors, including OpenAI, offer attractive pricing for fine-tuning large generative models, and that several startups, including Martian and Credal, provide tools to route prompts among third-party models based on manually programmed or automated rules.

But what SambaNova’s selling isn’t novelty per se. Rather, it’s a set-it-and-forget-it package: a full-stack solution, AI chips included, for building AI applications. And to some enterprises, that might be more appealing than what else is on the table.

“Samba-1 gives every enterprise their own custom GPT model, ‘privatized’ on their data and customized for their organization’s needs,” Liang said. “The models are trained on our customers’ private data, hosted on a single [server] rack, with one-tenth the cost of alternative solutions.”
