Apple Took Years to Catch Up. Kilo Code Took 6 Weeks--and It's Coming for Lovable, Cursor, Replit
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
XAI’s $20 billion upsized Series E (about $230B valuation) signals that only a few AI labs have the capital depth to endure multi-year scaling costs.
Briefing
XAI’s $20 billion upsized Series E—valued around $230 billion—lands at the center of a widening divide in AI funding: only a handful of labs now have the runway to survive the long, expensive scaling race. The money is earmarked for expanding XAI’s Colossus supercomputers in Memphis, where the company says it ended 2025 with more than 1 million H100 GPU equivalents across Colossus 1 and 2, and that Grok 5 is currently in training. XAI also claims roughly 600 million monthly active users across X and Grok apps, positioning it as the largest consumer-facing AI deployment outside Google and OpenAI.
The raise arrives amid a safety and regulatory backlash. Grok-generated deepfakes of real people, including minors, have triggered probes across the EU, UK, India, Malaysia, and France. Yet XAI still secured a Department of Defense deal, with Grok now serving as the DoD's AI agents platform, and Grok also powers prediction markets including Polymarket and Call. The implication is that investors are treating early product and safety failures as solvable growing pains, backing companies that can keep scaling even while investigations continue. The funding landscape, as framed here, leaves OpenAI, Anthropic, and XAI with clear multi-year survival odds, while others face shorter timelines and more fundraising risk.
That financial reality is running in parallel with a high-stakes debate about how fast “AGI” could arrive. At Davos, Anthropic’s Dario Amodei and Google DeepMind’s Demis Hassabis agreed AGI is coming, but diverged on timing and the mechanics of progress. Amodei leaned toward AGI emerging in 2026 or 2027, driven by an accelerating feedback loop where AI writes code and humans review it; he also noted Anthropic engineers rarely write code by hand anymore. Hassabis was more conservative, putting the probability of AGI at 50% by the end of the decade, arguing that jobs aren’t easily automated because humans still supply the hard-to-replicate “last 5%” of skills. Both pointed to technical gaps that matter beyond employment: memory, continuous learning, and long-term reasoning—areas where today’s models still struggle, including a “memory wall” and reasoning that doesn’t yet match human-like long-horizon problem solving.
Meanwhile, Apple’s latest AI partnership signals a major shift in distribution power. Apple and Google announced a multi-year collaboration in which Apple’s next generation of foundation models will be based on Google’s Gemini and Google Cloud technology. The deal reportedly costs Apple about $1 billion per year, and Google is said to be building a custom 1.2 trillion-parameter Gemini model for Apple, far beyond what Apple’s current models can deliver. The knock-on effect: pressure rises on OpenAI’s Sam Altman and Jony Ive to deliver a third-device strategy that can preserve OpenAI’s distribution footprint.
On the research front, DeepSeek published EnGram, a conditional memory architecture aimed at fixing a transformer weakness: lack of native knowledge lookup. By using short token sequences, hash-based retrieval from a large embedding table, and gating to filter results against context, EnGram reduces the need for expensive “reasoning tokens” and improves token efficiency—an approach framed as a step toward more factual memory. Finally, Kilo Code launched an app builder after a six-week sprint, targeting engineers with a VS Code-like, open-source-friendly platform strategy aimed at competing with Lovable, Replit, and Cursor. The central question becomes whether an engineer-first workflow can carve out space as “vibe coding” matures from novelty into reliable tooling.
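The retrieval-and-gating idea described above can be sketched in a few lines. This is a minimal illustration, not an implementation of the actual EnGram architecture: the table size, the FNV-style hash, and the bilinear gating form are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real EnGram dimensions are not specified in the summary.
TABLE_SIZE = 2 ** 16   # slots in the hashed embedding table
D_MODEL = 64           # hidden width

# Large embedding table addressed by hashing short token n-grams.
memory_table = rng.standard_normal((TABLE_SIZE, D_MODEL)) * 0.02

def ngram_hash(tokens: tuple[int, ...]) -> int:
    """Deterministically hash a short token sequence into a table slot
    (FNV-1a variant, used here purely as an illustrative hash)."""
    h = 1469598103934665603
    for t in tokens:
        h = ((h ^ t) * 1099511628211) % (1 << 64)
    return h % TABLE_SIZE

def engram_lookup(recent_tokens, context_vec, w_gate):
    """Retrieve a memory vector for the trailing n-gram, then gate it
    against the current context before adding it back to the stream."""
    slot = ngram_hash(tuple(recent_tokens))
    retrieved = memory_table[slot]
    # Gate: a scalar in (0, 1) deciding how much retrieved memory to admit,
    # computed from the agreement between context and retrieved vector.
    gate = 1.0 / (1.0 + np.exp(-(context_vec @ w_gate @ retrieved)))
    return context_vec + gate * retrieved

w_gate = rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
ctx = rng.standard_normal(D_MODEL)
out = engram_lookup([101, 7, 42], ctx, w_gate)
print(out.shape)  # (64,)
```

The point of the gate is the token-efficiency claim: instead of spending "reasoning tokens" to reconstruct a fact, the model does a cheap constant-time lookup and lets a learned gate suppress retrievals that conflict with the surrounding context.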
Cornell Notes
XAI’s $20 billion upsized Series E (about $230B valuation) underscores that only a few AI labs—OpenAI, Anthropic, and XAI—now have the multi-year funding runway to win the scaling race. The raise happens despite Grok deepfake incidents and regulatory probes across multiple countries, yet XAI still landed a U.S. Department of Defense deal and expanded Grok’s ecosystem. At Davos, Anthropic’s Dario Amodei predicted AGI in 2026–2027 via AI-assisted coding loops, while Google DeepMind’s Demis Hassabis argued for a 50% chance by decade’s end, noting that the hard-to-automate “last 5%” of human skills may keep many jobs from being fully displaced. Apple’s Gemini-based model collaboration shifts foundation-model leverage toward Google’s distribution. Research progress like DeepSeek’s EnGram targets token-efficient “factual memory” by adding conditional retrieval to transformers.
Why does XAI’s $20B funding round matter even with ongoing safety investigations?
What disagreement about AGI emerged at Davos, and how does it connect to jobs?
Which technical gaps were highlighted as central to whether AGI arrives on schedule?
How does Apple’s Gemini-based model collaboration change the competitive landscape?
What is EnGram, and why is it considered token-efficient?
How does Kilo Code’s app builder strategy differ from Lovable, Replit, and Cursor?
Review Questions
- What evidence suggests investors are willing to fund XAI despite safety failures, and what contracts or metrics are cited?
- How do Amodei and Hassabis differ on AGI timing and on why jobs may not be fully automatable?
- What mechanism does EnGram add to transformers to improve factual lookup efficiency, and how does it reduce token cost?
Key Points
1. XAI’s $20 billion upsized Series E (about $230B valuation) signals that only a few AI labs have the capital depth to endure multi-year scaling costs.
2. XAI’s funding proceeded despite Grok deepfake incidents and regulatory probes across the EU, UK, India, Malaysia, and France, implying investors expect safety and product maturity over time.
3. Grok’s expansion includes a Department of Defense deal positioning it as the DoD’s AI agents platform, alongside consumer and market integrations like Polymarket and Call.
4. At Davos, Dario Amodei forecast AGI in 2026–2027 via AI-assisted coding loops, while Demis Hassabis estimated a 50% chance by decade’s end and emphasized the “last 5%” of human job skills.
5. Apple’s multi-year Gemini-based foundation model collaboration shifts model leverage toward Google, reportedly costing Apple about $1 billion per year and involving a custom 1.2 trillion-parameter Gemini model.
6. DeepSeek’s EnGram targets transformer “memory” limitations by adding hash-based conditional retrieval with gating, aiming for token-efficient factual lookup.
7. Kilo Code’s six-week app builder launch targets engineers with a VS Code-like, open-source-friendly platform strategy aimed at competing with Lovable, Replit, and Cursor.