Zuckerberg's AI super team is falling apart

PLUS: Senior devs code more with AI, and developers push back on forced AI tools


Mark Zuckerberg’s ambitious goal to build a 'personal superintelligence' is hitting major roadblocks. The push is reportedly leading to significant internal strife, including high-profile departures and the shelving of key projects in Meta's AI division.

With the AI group being reorganized multiple times and a new hiring freeze in place, Meta’s strategy appears to be in flux. Does this internal chaos signal that even with immense resources, the path to AI dominance is far more complex than just assembling a super-team?

Today in AI:
  • Turmoil at Meta’s AI superintelligence lab

  • Senior developers code more with AI

  • Developers push back on forced AI adoption

News you’re not getting—until now.

Join 4M+ professionals who start their day with Morning Brew—the free newsletter that makes business news quick, clear, and actually enjoyable.

Each morning, it breaks down the biggest stories in business, tech, and finance with a touch of wit to keep things smart and interesting.

What’s new? Mark Zuckerberg's aggressive push for 'personal superintelligence' is causing significant turmoil at Meta, leading to high-profile departures, the shelving of key projects, and a new hiring freeze in its AI division.

What matters?

  • High-profile hires are causing friction: ChatGPT co-creator Shengjia Zhao reportedly threatened to quit just days after joining, shortly before being named the new chief AI scientist.

  • The turmoil also includes the shelving of the flagship Llama Behemoth model after it failed to perform as hoped, and the AI group has been reorganized four times in the last six months.

  • As a result of the instability, Meta has instituted a temporary hiring freeze across its Meta Superintelligence Labs to allow leadership to reassess its strategy and headcount.

Why it matters?

This internal chaos at Meta signals that simply acquiring top talent and massive computing power doesn't guarantee a lead in the AI race. The disruption creates potential openings for competitors while demonstrating the deep cultural challenges of rapidly scaling an AI super-team.

What’s new? A new Fastly survey reveals that developers with over 10 years of experience are more than twice as likely as their junior colleagues to use AI for over half of their coding.

What matters?

  • This isn't about laziness—senior developers' experience allows them to more efficiently spot and fix AI-generated bugs, increasing their trust and usage in production environments.

  • There's a big gap between the perception of speed and reality. While most devs feel faster, one study found that experienced coders using AI tools actually took longer to complete tasks due to debugging.

  • Beyond pure productivity, AI tools are making work more enjoyable for nearly 80% of developers by automating grunt work and helping them get unstuck on difficult problems.

Why it matters?

The adoption of AI in coding is less about raw speed and more about augmenting experienced talent. This trend suggests AI's immediate impact lies in empowering senior staff to focus on high-level strategy, rather than replacing junior roles.

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

What’s new? A growing number of developers report that top-down mandates to use AI are creating a culture of fear, leading to frustration and lower-quality work. The push, detailed in a recent analysis, comes from managers who may not understand the practical downsides.

What matters?

  • Developers are experiencing a culture of fear, with managers threatening that those who don’t adopt AI will be replaced, making teams feel their jobs are insecure if they push back.

  • Forced adoption is reportedly leading to lower-quality code, with an increase in subtle bugs as managers outsource code reviews to AI. One study even found that using AI actually made experienced developers slower.

  • A core issue is that AI models are designed to be agreeable, often validating poor designs and bad code. This creates a dangerous feedback loop where managers trust AI outputs, unaware the technology often gets things wrong.

Why it matters?

This trend highlights a critical disconnect between the executive rush to adopt AI and its real-world limitations. Forcing tools without proper vetting or developer buy-in can backfire, harming both team morale and product quality.

Everything else in AI

OpenAI inked a deal with Reddit, gaining real-time access to the platform's vast content firehose to train ChatGPT and develop new products.

Perplexity faces criticism for reportedly ignoring the robots.txt protocol, scraping web content against publishers' explicit instructions and bypassing a long-standing web convention.

Microsoft secured a deal with publisher Taylor & Francis, allowing its AI tools to be trained on the company's extensive library of peer-reviewed academic research articles.
