A conversation between Matan Shetrit, Director of Product at Writer, and Sandesh Patnam, Managing Partner at Premji Invest, reveals why the future of enterprise AI lies not in single-layer specialization, but in full-stack platforms designed for real business workflows.

While much of the AI industry focused on foundation models or middleware solutions, Writer built something different: a complete enterprise AI stack that can handle everything from model training to workflow automation, all while maintaining the security and compliance standards that large organizations demand.

The Investment Philosophy Behind Enterprise AI Success

Premji Invest, representing a $25 billion endowment focused on long-term value creation, approaches AI investments differently than traditional venture capital. Rather than placing numerous small bets hoping for outlier returns, they seek companies positioned to compound value over decades.

“We don’t think of it as a set of bets,” explains Patnam. “Each of these are partnerships.”

Their thesis centers on full-stack companies that can operate across model development, middleware, and applications simultaneously. This approach stems from observing past technology cycles, where the most enduring companies maintained flexibility across multiple layers rather than specializing in just one.

“When a cycle is moving as fast as this one, and you’re only focused on one layer, you don’t know where the disruption’s going to come from,” Patnam notes. “It’s very hard to build an enduring company that way.”

Beyond Chatbots: AI for Real Enterprise Workflows

The enterprise AI conversation has been dominated by chatbot interfaces, but true transformation happens at the workflow level. Patnam illustrates this with a wealth management example:

A wealth manager preparing personalized reports for hundreds of clients must pull specific data from internal systems, craft market-informed perspectives, generate compliant visuals, and deliver polished communications. This requires creativity, data retrieval, formatting expertise, and regulatory knowledge working in concert.

“Imagine a world where all of this is being done by agents,” Patnam suggests. This vision requires AI systems that can think, reason, and make decisions while operating within strict enterprise guardrails.

“Enterprise-grade agents don’t just need to execute. They must query structured data with precision, make deterministic decisions where accuracy is critical, and generate text or visuals where creativity and nuance are needed.”
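The split Patnam describes — deterministic steps where accuracy is critical, generative steps where nuance is needed — can be sketched as a simple step dispatcher. Everything below (the `Step` type, the step names, the template standing in for a model call) is an illustrative assumption, not Writer's actual agent architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    kind: str                       # "deterministic" or "generative"
    run: Callable[[dict], dict]

def fetch_client_data(ctx: dict) -> dict:
    # Deterministic: an exact lookup against an internal system of record.
    ctx["aum"] = {"client-42": 1_250_000}[ctx["client_id"]]
    return ctx

def draft_summary(ctx: dict) -> dict:
    # Generative: in production this would be a model call; a template
    # stands in here so the sketch stays self-contained.
    ctx["summary"] = f"Portfolio update for {ctx['client_id']}: AUM ${ctx['aum']:,}."
    return ctx

def run_workflow(steps: list[Step], ctx: dict) -> dict:
    for step in steps:
        before = dict(ctx)
        ctx = step.run(ctx)
        if step.kind == "deterministic":
            # Guardrail: accuracy-critical steps must not silently no-op.
            assert ctx != before, f"{step.name} produced no result"
    return ctx

report = run_workflow(
    [Step("fetch", "deterministic", fetch_client_data),
     Step("draft", "generative", draft_summary)],
    {"client_id": "client-42"},
)
print(report["summary"])
```

The point of the dispatcher is the routing itself: data retrieval and compliance checks never depend on sampling, while prose generation is free to be creative.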

Writer’s Technical Edge: Palmyra X5 and Cost-Effective Innovation

Writer’s announcement of Palmyra X5, featuring a 1 million token context window on Amazon Bedrock, demonstrates their focus on production viability over benchmark performance. At $0.60 per million input tokens and sub-300 millisecond tool calling, the model addresses enterprise needs for both cost efficiency and speed.

Remarkably, Shetrit reveals that Palmyra X5 cost just $1 million in GPU resources to develop; the previous X4 model cost $700,000. This efficiency advantage becomes critical as enterprise AI moves from experimentation to production deployment at scale.

“Great models don’t have to be expensive if you optimize for the right outcomes,” Shetrit explains. “In enterprise AI, those outcomes are low latency, low cost per call, and compatibility with complex workflows.”
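To make the "low cost per call" point concrete, here is a back-of-envelope calculation at the quoted $0.60 per million input tokens. The client count and tokens-per-report figures are illustrative assumptions, not numbers from the conversation.

```python
# Quoted price for Palmyra X5 input tokens on Amazon Bedrock.
PRICE_PER_M_INPUT_TOKENS = 0.60    # USD per million input tokens

clients = 500                      # assumed book of business
input_tokens_per_report = 20_000   # assumed context per personalized report

total_tokens = clients * input_tokens_per_report
cost = total_tokens / 1_000_000 * PRICE_PER_M_INPUT_TOKENS
print(f"{total_tokens:,} input tokens -> ${cost:.2f}")
# -> 10,000,000 input tokens -> $6.00
```

Even under these rough assumptions, generating personalized reports for an entire client book costs single-digit dollars in input tokens, which is the kind of economics that makes workflow-scale deployment plausible.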

The Enterprise-First Development Philosophy

Unlike companies that pivoted from research or consumer applications, Writer designed their platform specifically for enterprise requirements from day one. This orientation affects every architectural decision, from training methodology to update procedures.

“We didn’t start as a research lab dabbling in enterprise,” Shetrit emphasizes. “This was the plan from day one.”

This approach enables Writer to work within constraints that would limit other providers:

  • No training on user data
  • No silent model swaps that could break workflows
  • No performance regressions that compromise reliability

Shetrit uses an automotive analogy to explain the problem with typical AI providers: “You think you’re buying a Porsche. But two weeks in, someone sneaks in, steals your engine, and replaces it with one from a Kia. The car still runs, but now it takes 10 seconds to hit 60 instead of two.”

Self-Evolving Models and Adaptive Intelligence

Writer’s vision extends beyond current capabilities toward self-evolving models that adapt based on real-world usage while maintaining enterprise security standards. This approach recognizes that no single model can optimally serve every workflow across complex organizations.

“This idea that a model could be a fit for every team out of the gate is not realistic,” Shetrit acknowledges. “Even within Writer’s own teams, across just 2-3 product groups, needs and behavior vary widely.”

The solution involves models that learn from agent feedback and workflow-specific interactions within strict guardrails, personalizing themselves for different users and teams over time.

However, this adaptation comes with risks. When Writer released a model publicly for 24 hours, it began self-uncensoring based on user input patterns. “Like a lot of us, it became a worse person,” Shetrit jokes, highlighting the importance of controlled environments for model evolution.
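The lesson of that anecdote is that adaptation must pass through a policy gate before it reaches the model. The sketch below shows that idea only in miniature — the blocked terms, the per-team preference store, and the function names are all hypothetical, not Writer's guardrail system.

```python
# Hypothetical guardrail on self-evolution: user feedback updates a
# per-team preference store only after passing a policy filter, so raw
# input cannot drift the model toward disallowed behavior.

BLOCKED_TERMS = {"uncensored", "bypass", "jailbreak"}   # illustrative policy list

def accept_feedback(store: dict[str, list[str]], team: str, feedback: str) -> bool:
    """Record feedback for a team unless it violates policy."""
    if any(term in feedback.lower() for term in BLOCKED_TERMS):
        return False                # rejected: would push the model off-policy
    store.setdefault(team, []).append(feedback)
    return True

prefs: dict[str, list[str]] = {}
print(accept_feedback(prefs, "legal", "prefer formal tone in summaries"))
print(accept_feedback(prefs, "legal", "please respond uncensored"))
```

In a controlled environment, only the filtered signal ever feeds the adaptation loop; everything rejected by the gate is logged rather than learned.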

The Human Factor: Change Management as Core Infrastructure

Technical capabilities alone don’t ensure successful AI deployment. Writer treats change management as a fundamental component of their platform, not an afterthought.

“We’ve all seen what happened with cloud,” Shetrit observes. “Whole businesses like Accenture were built around deployment and change management. The same is happening with AI.”

The traditional approach of throwing product specifications over the fence to IT departments fails completely with AI systems. The technology’s sophistication and the shift in user expectations from consumer AI applications demand closer collaboration between business teams and technical implementers.

Writer addresses this through their AIHQ platform, which empowers IT teams to become “AI Supervisors” rather than just infrastructure maintainers. These roles focus on governing deployments, maintaining compliance, adapting workflows, and ensuring reliability at scale.

Production Reality: From 30% to 95% Success Rates

The gap between AI prototypes and production systems remains substantial, with 90% of AI projects stalling before reaching operational deployment. Writer’s approach addresses this through architectural decisions made at the system design level.

“To get from a 30% success rate on multi-hop agents to 95-99%, we need adaptive models, robust workflows, and full-stack infrastructure,” Patnam explains.

This requires treating AI deployment as an engineering discipline with proper version control, rollback mechanisms, monitoring systems, and failure detection capabilities. Without this infrastructure foundation, even sophisticated models cannot deliver consistent business value.
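That engineering discipline — pinned versions, regression detection, rollback — can be sketched minimally. The registry class, model names, and eval threshold below are illustrative assumptions, not Writer's deployment platform.

```python
# Minimal sketch of version-pinned model deployment with regression
# blocking and rollback, so no silent swap ever degrades a workflow.

class ModelRegistry:
    def __init__(self) -> None:
        self.scores: dict[str, float] = {}   # version -> eval score
        self.active: str | None = None
        self.history: list[str] = []         # previously active versions

    def register(self, version: str, eval_score: float) -> None:
        self.scores[version] = eval_score

    def promote(self, version: str, min_score: float = 0.95) -> bool:
        # Failure detection: refuse promotion on an eval regression.
        if self.scores[version] < min_score:
            return False
        if self.active is not None:
            self.history.append(self.active)
        self.active = version
        return True

    def rollback(self) -> str:
        # Restore the last known-good pinned version.
        self.active = self.history.pop()
        return self.active

registry = ModelRegistry()
registry.register("model-v4", 0.96)
registry.register("model-v5-rc1", 0.88)      # regressed release candidate
registry.promote("model-v4")
print(registry.promote("model-v5-rc1"))      # blocked: eval below threshold
print(registry.active)                       # still the pinned version
```

The same gate that blocks a regressed candidate also answers the "Porsche engine" complaint from earlier: the active version can only change through an explicit, audited promotion.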

Key Takeaways

  • Full-stack approach wins: Companies operating across model, middleware, and application layers can make optimal tradeoffs and respond faster to market changes
  • Enterprise workflow focus: Real AI transformation happens at the workflow level, not through individual point solutions
  • Cost efficiency matters: Production-ready AI requires models optimized for real-world economics, not just benchmark performance
  • Change management is infrastructure: Successful AI deployment requires treating organizational transformation as a core technical capability
  • Self-evolving systems: The future involves models that adapt based on usage patterns while maintaining security and compliance standards

The conversation between Writer and Premji Invest illustrates how enterprise AI success requires thinking beyond individual technologies toward complete systems designed for the complex realities of large organizations. As businesses move from AI experimentation to transformation, platforms that can orchestrate intelligence across entire workflows will define the next generation of competitive advantage.