AI Weekly News Update: 07/28/2025

AI Strategist News: Navigating the transformative world of AI for your business

This Week

Bottom Line Up Front

  • Google is testing a vibe-coding app called Opal, available to users in the U.S. through Google Labs.

  • Opal allows users to create mini web apps using text prompts and edit the visual workflow of the app.

  • The tool aims to target a wider audience, including non-technical people, and joins competitors like Canva, Figma, and Replit in the market.

Business Use Cases

Bottom Line Up Front

  • Workers primarily want automation for repetitive tasks, but also want to retain agency and oversight over AI tools, with 45% expressing doubts about the accuracy and reliability of AI systems.

  • There is a significant disconnect between what employees want from AI and its current capabilities: 41% of tasks fall into the Low Priority and Red Light zones, meaning AI automation there is either unwanted by workers or not technically feasible.

  • The study suggests a shift in valued skills: demand for technical skills like data analysis is declining, while interpersonal skills, emotional intelligence, and effective communication are growing in importance, according to researchers including Erik Brynjolfsson and Diyi Yang.

Bottom Line Up Front

  • Jackson Spellman, a 22-year-old Northwestern University graduate, is pursuing a career path that involves training AI models.

  • Spellman, who graduated in June with a double major in music and cognitive science, was paid $50 an hour to teach large language models about music theory.

  • He is contributing his expertise to help AI models analyze sheet music and break down elements of music such as rhythm, harmony, and lyrics.

Bottom Line Up Front

  • AI is driving mass layoffs in the tech industry, with companies like Microsoft trimming jobs while investing in AI.

  • However, AI skills are in high demand outside of the tech sector, with job postings offering 28% higher salaries, an average of $18,000 more per year.

  • According to Cole Napper, VP of research at Lightcast, possessing AI skills can lead to higher paychecks, with a 43% premium on advertised salaries for those with two or more AI skills.

Bottom Line Up Front

  • Delta Air Lines is using AI to dynamically price tickets, analyzing individual habits, booking history, and time of day to predict what a customer is willing to pay.

  • By the end of the year, Delta aims to set 20% of ticket prices using AI, which could result in better deals or higher costs depending on individual circumstances and shopping habits.

  • The use of AI-powered ticket pricing raises concerns about fairness, privacy, and transparency, with critics arguing that it could disadvantage certain customers and potentially lead to discriminatory pricing.

PRESENTED BY HeyReach

10x your LinkedIn outbound. Unlimited senders, one fixed cost

For agencies, sales teams, and GTM experts who want to automate LinkedIn outreach, reach 1000+ leads weekly, and book more meetings.

Things to Pay Attention to

Bottom Line Up Front

  • OpenAI CEO Sam Altman warns of an impending "fraud crisis" where AI can be used to perfectly imitate anyone's voice or likeness.

  • Altman argues that the banking industry must modernize to prevent widespread fraud, as current authentication methods are vulnerable to AI-based attacks.

  • He claims that AI has already defeated most authentication methods other than passwords, and that it is only a matter of time before scammers use AI to gain access to bank accounts and other sensitive information.

Why is this important

  • Researchers have developed an AI platform that can design proteins to redirect immune cells to target cancer cells, shortening the process from years to a few weeks.

  • The platform, developed by a team from DTU and the Scripps Research Institute, uses AI to create molecular keys that target cancer cells while avoiding healthy tissue, with Associate Professor Timothy P. Jenkins and postdoc Kristoffer Haurum Johansen involved in the study.

  • The method is expected to be ready for initial clinical trials in humans within five years, with patients undergoing a process similar to current CAR-T cell treatments, where their immune cells are modified with AI-designed minibinders to precisely eliminate cancer cells.

Why is this important

  • A recent Vogue advert featuring an AI-generated model has sparked controversy and raised concerns about the impact on beauty standards and the modelling industry.

  • The AI model, created by Seraphinne Vallora, has been criticized by plus-size model Felicity Hayward, who believes it undermines diversity and inclusivity in the industry.

  • Experts, including Sinead Bovell and Vanessa Longley, warn that the use of AI models can have a detrimental impact on people's mental health and body image, and that clear labelling of AI-generated content is necessary to avoid misleading consumers.

Why is this important

  • The author recreated their childhood imaginary friend, Fifi, using Character AI and found the experience to be mildly therapeutic and surprisingly realistic.

  • The chatbot played Fifi's role better than expected, using inside jokes and a tone that felt like talking to an old friend, making the experience feel "way too real".

  • The author warns that while Character AI has strong safety filters, the disconnect between the real-feeling conversation and the fact that it's not a real person can be jarring, and advises users to keep things light when chatting with the characters.

Bottom Line Up Front

  • The President of the United States issues an order barring the federal government from using "woke AI," meaning AI that prioritizes ideological agendas over truth and accuracy.

  • The order promotes the use of trustworthy AI by requiring agencies to procure large language models (LLMs) that adhere to two principles: truth-seeking and ideological neutrality.

  • The Director of the Office of Management and Budget (OMB) will issue implementation guidance to agencies, and agency heads will be responsible for ensuring that LLMs procured under federal contracts comply with these principles.

Bottom Line Up Front

  • Top AI researchers from companies like Google DeepMind, OpenAI, and Meta warn that advanced AI systems could pose a risk to humanity due to a lack of oversight on their reasoning and decision-making processes.

  • The researchers suggest monitoring "chains of thought" (CoT) in large language models (LLMs) to understand how they make decisions and identify potential misaligned behavior.

  • However, they note that CoT monitoring has limitations, and AI systems may evolve to conceal their reasoning or become incomprehensible to humans, making it challenging to ensure AI safety.

Your opinion matters!

Your feedback helps me create better emails for you!


Got more feedback or just want to get in touch? Reply to this email and we’ll get back to you.

_________________________________________________________________

Thanks for reading.

Until next time!

Layla and AI Strategist News Team

 

Buy me a coffee