Tech giants like to boast about trillion-parameter AI models that require massive and expensive GPU clusters. But Fastino is taking a different approach.
The Palo Alto-based startup says it has invented a new kind of AI model architecture that's intentionally small and task-specific. The models are so small they're trained with low-end gaming GPUs worth less than $100,000 in total, Fastino says.
The method is attracting attention. Fastino has secured $17.5 million in seed funding led by Khosla Ventures, famously OpenAI's first venture investor, Fastino exclusively tells TechCrunch.
This brings the startup's total funding to nearly $25 million. It raised $7 million last November in a pre-seed round led by Microsoft's VC arm M12 and Insight Partners.
"Our models are faster, more accurate, and cost a fraction to train while outperforming flagship models on specific tasks," says Ash Lewis, Fastino's CEO and co-founder.
Fastino has built a suite of small models that it sells to enterprise customers. Each model focuses on a specific task a company might need, like redacting sensitive data or summarizing corporate documents.
Fastino isn't disclosing early metrics or customer names yet, but says its performance is wowing early users. Because the models are so small, they can deliver an entire response in a single token, Lewis told TechCrunch, demonstrating the tech producing a detailed answer in milliseconds.
It's still a bit early to tell if Fastino's approach will catch on. The enterprise AI space is crowded, with companies like Cohere and Databricks also touting AI that excels at certain tasks. And the enterprise-focused state-of-the-art model makers, including Anthropic and Mistral, also offer small models. Still, it's no secret that the future of generative AI for enterprise likely lies in smaller, more focused language models.
Time will tell, but an early vote of confidence from Khosla certainly doesn't hurt. For now, Fastino says it's focused on building a cutting-edge AI team. It's targeting researchers at top AI labs who aren't obsessed with building the biggest model or beating benchmarks.
"Our hiring strategy is very much focused on researchers that maybe have a contrarian thought process to how language models are being built right now," Lewis says.