AI Weekly News Update: 11/24/2025

AI Strategist News: Navigating the transformative world of AI for your business

This Week

Bottom Line Up Front

  • Two new AI models, Gemini 3.0 and ChatGPT-5.1, were compared head-to-head in a series of tests to determine which performs better.

  • The tests consisted of 11 rounds, evaluating tasks such as image interpretation, coding, creative writing, and mathematical reasoning.

  • Gemini 3.0 won 7 out of 11 rounds, demonstrating superior performance in creative constraint-following, UX design thinking, critical analysis, and cross-domain integration, while ChatGPT-5.1 excelled in mathematical reasoning and coding logic.

Business Use Cases

Bottom Line Up Front

  • There are two equal and opposite errors that can be made with AI: disbelieving in its transformative impact and believing in it too much.

  • The current moment is a crossroads at which CIOs and AI leaders can either fall into one of these errors or help put their organizations on a path to greatness.

  • The common perception that AI is replacing jobs is not entirely accurate, with only 1% of headcount reductions directly due to AI, and the focus should be on job chaos and redesign rather than job loss.

  • A value remix strategy, which involves using AI to cut backlogs, reduce fraud, and grow revenue through human empathy, is a more effective approach than a talent remix strategy.

Why is this important

  • Layoff announcements have increased due to generative AI and economic pressure, with companies cutting middle management and entry-level roles that can be replaced by AI.

  • The use of AI may save costs in the short term but could lead to a lack of skilled workers in the future, as it disrupts the traditional process of skill-building and career growth.

  • The challenge is that companies may be reluctant to invest in training young people, instead relying on cheaper AI, which could lead to a market failure and a collapse of the talent pipeline.

Why is this important

  • The Internal Revenue Service (IRS) is implementing a Salesforce artificial intelligence (AI) agent program after a workforce reduction of at least 25% earlier this year.

  • The AI agents, called Agentforce, will be deployed in multiple divisions, including the Office of Chief Counsel, Taxpayer Advocate Services, and the Office of Appeals.

  • The AI agents are designed to help overworked IRS staff process customer requests more quickly and efficiently, but are not authorized to make final decisions or disburse funds.

Bottom Line Up Front

  • Health insurers are using artificial intelligence (AI) to process claims, driving a rise in denials: insurers denied roughly 73 million in-network claims for Americans on Affordable Care Act plans in 2023.

  • To fight back, software companies are using AI to create detailed appeal letters for patients, making it easier for them to appeal denied claims.

  • Professor Jennifer Oliva of Indiana University's Maurer School of Law expresses concern about an "AI arms race" where insurers may use AI to deny more claims, and calls for robust regulation to ensure AI tools make accurate and transparent decisions based on medical necessity.

Bottom Line Up Front

  • Google's AI infrastructure boss, Amin Vahdat, stated that the company must double its AI serving capacity every 6 months to meet demand.

  • The goal is to provide infrastructure that is more reliable, performant, and scalable, with the aim of delivering 1,000 times more capability at the same cost and energy level.

  • Alphabet CEO Sundar Pichai acknowledged the intense competition in AI and the pressure to meet cloud and compute demand, while also addressing concerns about a potential AI bubble and the company's strategy for long-term sustainability and profitability.
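The doubling and 1,000x figures above can be sanity-checked with a little arithmetic. The inputs (capacity doubling every 6 months, a 1,000x capability target) are as reported; the implied timeline is our inference, not a stated Google roadmap:

```python
import math

# Assumed figures from the article.
doubling_period_months = 6
target_multiple = 1000

# Number of doublings needed so that 2**n >= 1000.
doublings = math.ceil(math.log2(target_multiple))

# Implied time at one doubling per period.
years = doublings * doubling_period_months / 12

print(doublings, years)  # 10 doublings, i.e. about 5 years at that pace
```

In other words, sustaining a six-month doubling cadence would take roughly a decade of doublings, about five years, to reach the 1,000x goal.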

Bottom Line Up Front

  • The AI Act defines 4 levels of risk for AI systems: unacceptable risk, high risk, transparency risk, and minimal or no risk.

  • The Act prohibits 8 practices considered unacceptable risks, including harmful AI-based manipulation and deception, and biometric categorisation based on sensitive characteristics.

  • High-risk AI systems, such as those used in critical infrastructures, education, and law enforcement, are subject to strict obligations, including risk assessment, data quality, and human oversight, with rules coming into effect in August 2026 and August 2027.

PRESENTED BY HeyReach

10x your LinkedIn outbound. Unlimited senders, one fixed cost

For agencies, sales teams, and GTM experts who want to automate LinkedIn outreach, reach 1000+ leads weekly, and book more meetings.

Things to Pay Attention to

Bottom Line Up Front

  • The UK's services-heavy economy may benefit from AI adoption, with potential efficiency improvements and productivity gains.

  • Companies like Moore Kingston Smith are already using AI to speed up work, resulting in increased profit margins and reduced processing times.

  • Economists believe AI could help the UK escape its productivity problem; ratings agency Moody's suggests the UK could gain more than other countries from AI advances, a view echoed by experts such as Becky Shields, head of digital transformation at Moore Kingston Smith, and Bart van Ark, head of the University of Manchester's Productivity Institute.

Bottom Line Up Front

  • Concerns about an AI bubble are growing, with some experts warning that the industry's massive spending on data centers and AI technologies may not be sustainable.

  • Industry leaders such as Nvidia CEO Jensen Huang and White House AI adviser David Sacks insist the AI boom is not a bubble. Critics, including venture capitalist Paul Kedrosky and economist Daron Acemoglu, counter that the pace of improvement in AI technology has slowed and that the huge infusion of capital into the industry is largely speculative.

  • Tech companies like Amazon, Google, Meta, and Microsoft are collectively spending hundreds of billions of dollars on AI, with some relying on debt and complex financial arrangements to fund their investments, raising concerns about a potential financial crisis if the AI market cools down.

Bottom Line Up Front

  • NATO and Google Cloud have signed a multi-million-dollar deal for an AI-enabled sovereign cloud.

  • The partnership will provide NATO with highly secure, sovereign cloud capabilities using Google Distributed Cloud (GDC) air-gapped.

  • The deal aims to support NATO's digital modernization, strengthen its data governance, and enable the secure use of cutting-edge cloud and AI capabilities.

Bottom Line Up Front

  • The current panic over artificial intelligence and cheating in schools is misplaced, as it exposes existing issues with academic honesty policies.

  • Students are seeking clear guidelines on using generative AI in their coursework, rather than looking for loopholes.

  • National surveys show that over half of high school and college students want institutional guidance on using AI, due to confusion over current policies.
