Microsoft's homegrown AI models

PLUS: OpenAI's new realtime voice model and Anthropic's significant privacy change

Microsoft is taking a major step toward AI independence from its key partner, OpenAI, by debuting its first in-house foundational models. The company unveiled two new systems, one focused on speech generation and the other on text-based instruction following.

The pivot introduces a new competitive dynamic to its partnership, but will the new models live up to the hype? With bold claims from leadership but no public benchmarks to back them up, the true performance of these systems against rivals remains unverified.

Today in AI:
  • Microsoft’s new in-house AI models

  • OpenAI’s new realtime voice model

  • Anthropic’s significant privacy change

Turn AI Into Your Income Stream

The AI economy is booming, and smart entrepreneurs are already profiting. Subscribe to Mindstream and get instant access to 200+ proven strategies to monetize AI tools like ChatGPT, Midjourney, and more. From content creation to automation services, discover actionable ways to build your AI-powered income. No coding required, just practical strategies that work.

What’s new? Microsoft has unveiled its first in-house AI models, MAI-Voice-1 and MAI-1-preview, marking a major step toward developing foundational AI independent of its key partner, OpenAI.

What matters?

  • The new MAI-Voice-1 model focuses on speech generation. It can produce a minute of audio in under one second and is already integrated into Microsoft products.

  • MAI-1-preview is a text-based model designed for instruction following and is currently being tested on LM Arena, with developers able to request API access now.

  • Microsoft AI CEO Mustafa Suleyman claimed the model is "up there with some of the best," though public benchmarks have not been released, leaving its performance relative to rivals unverified.

Why it matters?

Microsoft's development of its own foundational models introduces a new competitive dynamic to its partnership with OpenAI. This strategic pivot allows the company to reduce dependency and better control its own AI roadmap.

What’s new? OpenAI has officially launched its Realtime API, introducing the new gpt-realtime model. This update enables developers to build more natural, low-latency voice agents that can even understand image inputs.

What matters?

  • The new model achieves 82.8% accuracy on audio reasoning benchmarks, a significant leap from the 65.6% score of its predecessor.

  • gpt-realtime can now process image inputs like photos and screenshots, and it can detect nonverbal cues for more fluid conversations.

  • OpenAI added support for the Model Context Protocol (MCP), which lets voice agents connect to external tools and data sources without custom integrations.

Why it matters?

These enhancements make building more responsive, human-like voice agents more accessible for developers. The update paves the way for a new wave of voice-first applications in customer support, personal assistance, and beyond.
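The Realtime API is WebSocket-based: a client opens a connection and exchanges JSON events with the model. As a minimal sketch, here is how a developer might build the session-configuration event for a gpt-realtime voice agent. The event shape, field names, and the "marin" voice follow OpenAI's published conventions, but treat the exact schema as an assumption and check the official API reference before relying on it.

```python
import json

def build_session_update(voice: str = "marin",
                         instructions: str = "You are a helpful voice agent.") -> str:
    """Build a session.update event enabling audio and text for gpt-realtime.

    The field names here mirror OpenAI's Realtime API conventions; the exact
    schema is an assumption and may differ from the current API reference.
    """
    event = {
        "type": "session.update",
        "session": {
            "model": "gpt-realtime",
            "voice": voice,
            "instructions": instructions,
            # Allow the model to both hear and speak, and to emit text.
            "modalities": ["audio", "text"],
        },
    }
    return json.dumps(event)

# In a real agent, this JSON string would be sent over an authenticated
# WebSocket connection to OpenAI's realtime endpoint immediately after
# the session opens.
payload = build_session_update()
print(payload)
```

The point of the `session.update` step is that latency-sensitive settings (voice, modalities, instructions) are negotiated once up front, so subsequent audio frames stream without per-turn configuration overhead.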

Keep This Stock Ticker on Your Watchlist

They’re a private company, but Pacaso just reserved the Nasdaq ticker “$PCSO.”

No surprise the same firms that backed Uber, eBay, and Venmo already invested in Pacaso. What is unique is that Pacaso is giving the same opportunity to everyday investors. And 10,000+ people have already joined them.

Created by a former Zillow exec who sold his first venture for $120M, Pacaso brings co-ownership to the $1.3T vacation home industry.

They’ve generated $1B+ worth of luxury home transactions across 2,000+ owners. That’s good for more than $110M in gross profit since inception, including 41% YoY growth last year alone.

And you can join them today for just $2.90/share. But don’t wait too long. Invest in Pacaso before the opportunity ends September 18.

Paid advertisement for Pacaso’s Regulation A offering. Read the offering circular at invest.pacaso.com. Reserving a ticker symbol is not a guarantee that the company will go public. Listing on the NASDAQ is subject to approvals.

What’s new? Anthropic announced a major policy shift, moving from a privacy-first stance to an opt-out model. It will now use consumer chat data from its Claude models for training unless users explicitly decline.

What matters?

  • This change impacts users on Claude's Free, Pro, and Max plans, who have until September 28, 2025, to opt out before the new policy takes full effect.

  • For users who consent, Anthropic will extend data retention from 30 days to five years, which it says will support longer AI development and safety improvement cycles.

  • Commercial accounts, including Claude for Work and Education, are unaffected by this change and will continue operating under their existing privacy agreements.

Why it matters?

Anthropic is trading its key privacy differentiator for the data needed to compete directly with models from OpenAI and Google. This move aligns its practices with the industry standard, making user data a crucial resource for future AI advancements.

Everything else in AI

LMSYS detailed an open-source replication of DeepSeek's inference system using SGLang, achieving up to 22.3k output tokens per second per node on a 96-GPU cluster.

xAI released Grok Code Fast 1, a new advanced coding model designed to offer high performance at a very low cost for agentic coding tasks.

Krea introduced a waitlist for its new Realtime Video feature, which will allow users to generate and edit video from text, canvas painting, or live webcam feeds.

Taco Bell is reconsidering its use of voice AI at the drive-through after viral videos highlighted comical and frustrating ordering mistakes.
