Nothing captures the energy of 2025 quite like governments throwing billions at AI while companies quietly lose their safety chiefs and swear everything’s “totally fine.” The UK is splashing £2.5B on a shiny new Sovereign AI Unit as the EU backpedals on regulating Big Tech under a little diplomatic side-eye from the US. Meanwhile, Google wants to double its AI capacity every six months, Amazon’s hackathons are patching bugs faster than humans can blink, and Roblox’s CEO is sweating through questions about child safety for 70 million daily players.

Video pick: How to Calculate Customer Lifetime Value the RIGHT Way

-🕶️

🎙 Catch the punchline hidden in this week’s headlines 🎙

Nine bullets of updates

  1. 🤖 UK commits £2.5B to AI, creating a Sovereign AI Unit and boosting national innovation in the sector.

  2. 🇪🇺 EU regulators scrapped nearly half of 2023’s proposed Big Tech rules after US diplomatic pressure on digital policy.

  3. 🎮 Roblox’s CEO bristles as interviewers press about age verification and child safety for 70M daily players.

  4. 🤖 Google targets doubling its AI serving capacity every 6 months with help from custom silicon and major cloud upgrades.

  5. 🛡️ An internal hackathon led to AI agents that automatically spot and patch 100+ security flaws weekly across Amazon’s platforms.

  6. 🎬 Short video creators can now tap AI for real-time insights and content ideas, tracking what works across TikTok, Reels, and Shorts.

  7. 🛡️ America’s biggest banks rush to trace stolen client info after hackers breach a New York financial tech firm.

  8. 🧑‍⚖️ New AI benchmark rates chatbots by how well they protect user wellbeing, not just how smart they are.

  9. ✂️ Ex-members fear prosecution as Trump disbands Musk's cost-cutting team, with concerns over actions taken during audits.

£2M says InvenireX can see diseases before they even happen

UK biotech startup InvenireX has raised £2M to commercialise a new disease-detection platform that uses programmable DNA nanostructures to identify molecular markers current tests miss. Spun out of Newcastle University in 2023, the company claims its “Nanites” can detect disease signals at up to 200× the sensitivity of qPCR while cutting testing time and costs in half.

The tech could enable ultra-early detection of cancers, improve vaccine manufacturing checks, and reveal previously undetectable biological markers. With backing from DSW Ventures, XTX Ventures, Cambridge Technology Capital, Innovate UK, and biotech angels, InvenireX is now scaling pilot programmes—including its first instrument sale—as it positions itself as the UK’s next major biotech breakthrough.

How to Calculate Customer Lifetime Value the RIGHT Way

Most startups calculate their Customer Lifetime Value wrong. In this episode, Caya breaks down how to actually measure CLTV (and CAC) like a pro, using real templates, realistic churn, and actual payback periods. We'll cover the most common mistakes in LTV math, how to adapt it for SaaS, e-commerce, and marketplaces, and what your numbers really mean for growth.
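The core math the episode covers can be sketched in a few lines. This is a minimal illustration, assuming the standard SaaS formulas (margin-adjusted monthly revenue divided by monthly churn for LTV, and CAC divided by margin-adjusted monthly revenue for payback); the function names and the sample figures are our own, not from the episode.

```python
def cltv(arpa_monthly, gross_margin, monthly_churn):
    """Lifetime value: margin-adjusted monthly revenue per account,
    multiplied by average customer lifetime (1 / monthly churn)."""
    return arpa_monthly * gross_margin / monthly_churn

def cac_payback_months(cac, arpa_monthly, gross_margin):
    """Months of margin-adjusted revenue needed to recover the
    customer acquisition cost."""
    return cac / (arpa_monthly * gross_margin)

# Hypothetical example: $100/month plan, 80% gross margin,
# 2% monthly churn, $1,200 to acquire a customer.
ltv = cltv(100, 0.80, 0.02)                     # 4000.0
payback = cac_payback_months(1200, 100, 0.80)   # 15.0 months
ltv_to_cac = ltv / 1200                         # ~3.3
```

Note that using gross margin (not raw revenue) and realistic churn is exactly where most of the "wrong" LTV numbers come from: skipping the margin adjustment here would inflate LTV by 25%.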

Your LinkedIn & Email Outreach Growth Engine

Salesflow helps business owners, founders, and teams automate LinkedIn & email outreach — so you can focus on closing deals, not chasing leads.

Join 10,000+ professionals who use Salesflow to scale faster.

Now up to 40% off — our lowest price of the year.

Reliable. Easy to use. Built for closers.

  1. 🤖 AI won't replace PMs yet—today's tools cover under 50% of PM tasks, leaving humans to coach and steer.

  2. 🔮 EU logged 70+ deals (€2.3B); founders should track 2026's seven shifts across AI, alternative funding, solo founders, and sustainability.

  3. 🙏 Holiday gratitude: turning bad bosses into 5 leadership lessons that sharpen hiring, feedback, and trust.

Investor Data Room Checklist

An investor data room is a storage space, digital or physical, where companies keep the information relevant to due diligence. We've compiled a FREE template/checklist of every item your data room should include, along with resources and tools for obtaining them.

Nothing says ‘We’re fine’ like your Safety Chief quitting

Andrea Vallone—OpenAI’s head of model policy, the team responsible for setting behavior guidelines and managing crisis response for its AI models—is quietly leaving by the end of the year. Her team has been folded under the safety systems chief, an unusual moment for a leadership reshuffle. Wired also reports that OpenAI has recently detected hundreds of thousands of weekly users showing possible manic or psychotic symptoms, along with more than a million chats containing suicidal indicators.

GPT-5 updates have reduced harmful responses by 65–80%, but legal pressure is increasing. The takeaway: safety policy is becoming a core product focus and a competitive moat, so expect larger budgets for evaluations, crisis playbooks, and governance while a successor is named.

How did we do?

Your feedback fuels us.
