Frontlines.io | Where B2B Founders Talk GTM.
Beyond Demo-Grade AI: Building Enterprise Language Models That Actually Work
In an era where AI demos dazzle but often fail to deliver in production, one founder is taking a radically different approach to enterprise language models. In a recent episode of Category Visionaries, Douwe Kiela, CEO of Contextual AI, shared how his team is tackling the fundamental challenges that keep large language models from reaching their full potential in enterprise environments.
The Enterprise AI Gap
While ChatGPT sparked mainstream excitement about AI’s potential, enterprise adoption faces significant hurdles. As Douwe explains, “Everybody can see that they’re going to change the world… But at the same time there’s a lot of frustration, I think, especially in enterprises where you can build very nice demos. But to get these models to actually be production grade, so enterprise grade for a production use case, that requires a lot more work.”
This gap between demos and production-ready AI stems from several critical challenges: hallucination, attribution, data privacy, and cost-quality tradeoffs. While most companies try to patch these issues individually, Contextual AI is rebuilding the foundation.
A Scientific Approach to Enterprise AI
Drawing inspiration from Jensen Huang’s first-principles thinking at NVIDIA, Contextual AI is approaching these challenges systematically. “What we’re doing is building RAG 2.0 contextual language models where everything is completely trained end to end for working on enterprise data,” Douwe shares.
This builds on his team’s pioneering work in retrieval augmented generation (RAG) at Facebook AI Research in 2019-2020. Unlike current implementations that bolt RAG onto existing models, Contextual AI trains the entire system holistically to address enterprise needs.
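Contextual AI has not published the internals of RAG 2.0, but the "bolted-on" pattern it departs from is easy to illustrate. The sketch below is a toy, assumption-laden version of conventional RAG: a frozen retriever ranks documents against a query, and the top passages are simply stuffed into a prompt for a separately trained generator. The bag-of-words retriever, the corpus, and the `generate` stub are all hypothetical stand-ins, not Contextual AI's system; the point is that the two components never see each other during training.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; real retrievers use a trained dense encoder.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in a)
    denom = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / denom if denom else 0.0

def retrieve(query, corpus, k=2):
    # Frozen retriever: rank documents by similarity to the query.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query, context):
    # Stand-in for a frozen LLM call: retrieved passages are simply pasted
    # into the prompt. Retriever and generator are trained separately --
    # the "bolted-on" coupling the passage describes.
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

# Hypothetical mini-corpus of enterprise documents.
corpus = [
    "The quarterly report shows revenue grew 12 percent.",
    "Employee onboarding takes two weeks on average.",
    "Revenue growth was driven by the enterprise segment.",
]
docs = retrieve("What drove revenue growth?", corpus)
prompt = generate("What drove revenue growth?", docs)
print(prompt)
```

In an end-to-end system of the kind the passage describes, the retriever and generator would instead be optimized jointly, so retrieval errors can be corrected by the training signal rather than silently propagated into the prompt.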
Specialized Intelligence Over AGI
While major AI companies chase artificial general intelligence (AGI), Contextual AI is taking a different path. Douwe argues that the real transformation will come from specialized solutions: “AI is going to change a lot of things in our lives, but the thing it is going to change the most substantially is the way we work. It is literally going to change the way the world works.”
His vision centers on empowering individual workers to become “CEOs of their own little teams of AI coworkers,” focusing on specialized intelligence rather than trying to solve everything at once.
Strategic Positioning in a Noisy Market
Despite the AI hype cycle, Contextual AI has found success by leveraging deep expertise and strategic focus. “We’re in a very fortunate position where we’re basically not doing any outreach and folks are coming to us with their problems,” Douwe notes. Their approach targets companies that have moved beyond basic AI experimentation and understand their specific needs.
Looking ahead to 2024, Douwe anticipates a cooling of the AI hype cycle: “The tide is going to run out and a bunch of people are going to get caught swimming naked.” His team is already positioning to emerge stronger when this happens.
A Fresh Take on AI Regulation
On the regulatory front, Douwe offers a compelling perspective: “We also didn’t regulate like the Linux kernel… or like a browser. And I think a language model is very similar to those pieces of technology where it can be used for good things and for bad things.” Instead of regulating the foundational technology, he argues for focusing on specific applications.
This pragmatic approach exemplifies Contextual AI's broader strategy: focusing on concrete, solvable problems rather than getting caught up in the industry's wilder aspirations. It's an approach that might just bridge the gap between AI's promise and its practical implementation in the enterprise world.
Key Lessons for Founders
Douwe draws on the browser and Linux kernel analogies when weighing the appropriate level of regulation for AI models, arguing that the focus should be on regulating applications rather than the underlying technology itself. Founders should look to similar industries for insights on how to navigate complex regulatory landscapes.
By building on top of open-source models like Meta's LLaMA, Contextual AI was able to focus on contextualizing and fine-tuning rather than training from scratch, accelerating their time to market. Founders should actively seek out opportunities to collaborate with and build upon open-source initiatives in their respective fields.
When faced with a broad potential market, Douwe recommends prioritizing companies that have a clear understanding of their AI use cases, success criteria, and overall strategy. These tech-forward early adopters are more likely to be receptive to innovative solutions and can serve as valuable design partners and references.
Contextual AI's inbound interest from Fortune 500 companies is largely driven by the strength of their team's pedigree and the connections provided by their investor network. Founders should invest in building a team with deep domain expertise and seek out investors who can provide strategic introductions and support.
Douwe advises founders to be authentic and self-aware in the fundraising process, rather than exaggerating or bluffing their way through difficult questions. By staying close to their own strengths and being honest about areas for improvement, founders can maximize their chances of raising on their own terms.