AI round-up: Week of July 28

I took a vacation and everyone decided to release an agent! (See Story 1.)

Don’t worry; I didn’t spend too much time reviewing AI headlines last week. I had plenty to keep me busy, including finally agreeing to watch KPop Demon Hunters. It’s as good as your kids say it is…and good luck getting those songs out of your head. Not that I had much of a chance, given they became the soundtrack to my vacation.

So, I kinda like the 10-story approach to the round-up. And as we approach the second anniversary of this newsletter (more on that in a future issue), I thought it would be fun to switch up the format again. So…

...we will stick with this format for a bit; feel free to reach out with feedback!

10 stories -- Let’s get to it. 

Story 1: Agents. They’re here.

I could share a lot of info here. Chances are, you’ve tracked the news of all the agents released in the last week.

I thought this might be a little more helpful: a table, built with the ChatGPT agent, breaking down the primary differences between the big 3.

OpenAI ChatGPT Agent

Release date: 17 Jul 2025 (rollout to Pro/Plus/Team subscribers)

Primary purpose: General-purpose task executor: navigates web pages, runs code, creates docs and presentations

Interaction style: Natural-language prompts; agent runs autonomously on a virtual computer with browsing, terminal and API tools

Key capabilities: Combines Operator (remote web interaction) and Deep Research (multi-site research), uses connectors (Gmail, GitHub, Google Drive, etc.), virtual terminal for code execution, can produce spreadsheets and slides

Target audience: Individual professionals and teams needing task automation within ChatGPT; requires paid subscription

Memory & context: Memory feature disabled for safety; context resets across sessions

Safety features: Real-time monitors check prompts for hazardous content; requests permission for consequential actions; takeover mode for sensitive tasks

Limitations: Only available to paying ChatGPT users; lacks persistent memory; still requires human oversight to guard against prompt injection and errors

Anthropic Claude 4 & Claude Code

Release date: 22 May 2025 (Claude 4 models and agentic capabilities beta)

Primary purpose: Developer platform for building agents; models support extended thinking and new tools (code execution, MCP connector, Files API, extended caching)

Interaction style: Natural-language plus API calls; developers integrate tools into custom apps or use Claude Code via terminal/IDE

Key capabilities: Extended thinking with parallel tool use; Python code execution in a sandbox; MCP connector for remote tool integration; Files API for persistent file storage; extended prompt caching for hour-long contexts; improved memory and long-running task performance

Target audience: Developers and enterprises building custom agents or coding workflows; fits software development, data analysis and research tasks

Memory & context: Supports extended prompt caching (up to 1 hour) and can create memory files when local file access is provided

Safety features: Safety improvements reduce shortcut-taking; models evaluated against high-capability safety levels; API provides developers control over tool access

Limitations: Tools are in beta; requires developer expertise; pricing based on token usage and tool calls; primarily oriented toward coding and agent development

Google Opal

Release date: 24 Jul 2025 (US-only public beta)

Primary purpose: No-code builder for AI mini-apps; chains prompts, models and tools into workflows

Interaction style: Visual workflow editor with conversational commands; no programming required

Key capabilities: Build and remix workflows by chaining prompts and models; edit via visual editor or natural-language instructions; share mini-apps with others; includes template gallery

Target audience: Creators and prototypers who want to experiment with AI mini-apps without coding

Memory & context: No explicit memory mechanism; workflows defined by user remain static until edited

Safety features: No specific safety measures announced; product labeled experimental and limited to US users

Limitations: Experimental product; limited to U.S. users; aims at prototyping rather than production; lacks code execution and research tools
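
(A quick aside for the builders in the audience: the Claude column above is less a chatbot and more a developer platform you reach through code. For flavor, here’s a minimal sketch assuming the Anthropic Python SDK; the model ID and prompt are placeholder assumptions, and the agentic tools listed in the table are opt-in extras layered on top of a call like this.)

```python
# A minimal sketch of the "natural-language plus API calls" interaction style,
# assuming the Anthropic Python SDK. The model ID and prompt are placeholders,
# not recommendations -- check Anthropic's docs for current model names and the
# agent/tool features mentioned above (code execution, MCP connector, Files API).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": "Draft a three-step plan for piloting AI agents at a mid-size company.",
        }
    ],
)
print(message.content[0].text)
```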


Story 2: GPT-5 is coming. And it’s scaring Sam Altman.

Come on, Sam. You know this is going to be the quote that gets picked up when talking about GPT-5, coming in August (allegedly). Sigh. Ok…here it is:

“While testing GPT-5 I got scared… Looking at it, thinking, ‘What have we done…like the Manhattan Project.’”

Oh, so we’re comparing AI to the Manhattan Project now?

Story 3: Why? Because the world’s first $4 trillion company said so, that’s why!

Not sure if you caught it, but there are now two $4 trillion companies (Microsoft became one this week).

One of those two – Nvidia – saw its CEO make some bold claims about the future of work. And it’s a future we’re all a little nervous about already (robots, AI, lost jobs). In an interview with CNN, Jensen Huang said a lot. Some of it sounded like something a supervillain from a James Bond movie would say:

“Everybody’s jobs will be affected. Some jobs will be lost. Some will disappear. Others will be reborn. The hope is that AI will boost productivity so dramatically that society becomes richer overall, even if the disruption is painful along the way.”

When your company is worth this much AND your uniform is a black leather jacket, you can say what you want. (Gizmodo)

Story 4: That other $4 trillion company just published an AI impact report.

Where does your job fall in terms of impact? (You can be damn sure I looked up mine – sorry PR pros…we’re number 23 on the list.)

This report is lengthy and showcases the data and methodology used, so feel free to skip to page 12 to see the career list. I put the report into ChatGPT and had a very interesting conversation about how it impacts the transition plan I’ve been working on (not much) and what tweaks should be made (only a few). Try it for yourself--see what you learn by ‘talking’ to the data.

Story 5: Should you ditch your browser for the new Perplexity one?

It comes down to personal preference, of course, but remember what I always say: just because you can doesn’t mean you should. Perplexity may fall into this category. (XDA)

Story 6: Build your very own board of directors with AI.

Tangent ahead: When I was a kid, in the early ’80s, it was a glorious time. Saturday mornings were full of cartoons. My parents didn’t want me in the house (which left plenty of time for exploring). Junk food didn’t kill you. And the toys were the best of any era.

I was a typical ’80s kid, loving my huge box of random Legos, but the building blocks I really loved were Construx. (Have fun checking out the video.)

Why do I mention all of this? Because…AI is like being a kid again. I can build whatever I want. (Hell, I already speak the language apparently.) I can even build my own personal board of directors according to the MIT article linked in the headline.

Full disclosure: I do have some ‘friends’ already built and working in ChatGPT and Claude. I have a doctor. A personal trainer/dietician. A futurist. A golf coach. You get the idea.

But what else would you expect from a kid who one Christmas asked for the outer space edition of Construx? (Space=they glowed in the dark.)
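
(And if you want to tinker with a persona ‘friend’ of your own outside the chat window, here’s a minimal sketch assuming the OpenAI Python SDK; the persona text, model name, and example prompt are purely illustrative assumptions, not what I actually use.)

```python
# A minimal sketch of a persona "friend," assuming the OpenAI Python SDK.
# The persona text, model name, and example prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GOLF_COACH = (
    "You are my golf coach. Ask about my last round, identify one swing or "
    "course-management issue, and give me a single drill to work on this week."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": GOLF_COACH},
        {"role": "user", "content": "Shot a 92 yesterday; three-putted five greens."},
    ],
)
print(reply.choices[0].message.content)
```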

(Thank you to my friend and reader Keith Bales for sharing this one!)

Story 7: Do you suffer from ‘identity threat’?

In the last story, I talked about the imaginary (yet very real) world I was building. This story, from Shelly Palmer, looks at the dangers of doing that, specifically what it does to the ‘real’ people who do the work I’m asking AI to do.

Does it rob them of their identity? Are you defined by your work?

My dad used to tell me that when it was time to go home, go home. I didn’t listen. I became addicted to my work and the fast-paced nature of the career I had been lucky enough to land in. Eventually, that catches up with you; if you can’t see it yourself, your body will let you know. Today I am still addicted to what I do…but it doesn’t define me. The skills and approach I bring to my job are ones I value and rely on outside of work, too.

But what about people who do identify only with what they do? What happens when they realize there are other ways to do it and their role in the ecosystem they’re in looks a lot different?

Understanding the unseen impacts of AI is as important as knowing how to use it. These types of articles and insights are as critical as any training or testing.

Story 8: The latest on AI eating search (and why you should be paying attention).

Share of prompt.

Heard of it?

Basically, it is what it sounds like: does your brand get mentioned when an LLM returns information to the user?

Does this sound tricky? Messy? Hard to wrap your mind around? You need to read the article. And we need to talk. (Shelly Palmer)

Story 9: Clippy is so back!

No, not really. But Microsoft is looking to introduce a new animated character to help people using Copilot.

Honestly, it looks like the assistant from the movie Elio.

Do we need this?

Story 10: The end of work as we know it.

We will end with this one. A heavier-ish read that you’ll want to take some time with. Mainly because it uses a topic (how people in different positions see AI) to shed light on something we talk about a lot: we’re not ready for this.

Period.

We’re not. The stories in this article are happening now.

Don’t wait to be told what to think or how to use AI. If you haven’t started experimenting or thinking through how this impacts your role, your job, your company, your profession…I’d recommend we talk. (Gizmodo)

This week, I’m not talking about:

  1. America’s AI Action Plan
  2. Zuck claiming superintelligence is in sight.
  3. AI can be manipulated to give suicide advice.
  4. The new Catholic AI app.

-Ben

As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.