AI round-up: Week of June 23, 2025
Before I forget – no AIR next week. Here in the States, it’s the Fourth of July.
So a question: if you ‘met’ your LLM of choice, do you think you’d be friends? I was thinking about this earlier in the week when I came across a podcast on why it’s a bad idea to ‘date AI’. (Listen for yourself on the Decoder podcast with Nilay Patel)
I guess I didn’t realize that needed to be said. But hey, here we are in 2025.
Anyway, my brain went on a tangent and started thinking about my interactions with AI. I do enjoy them – I easily slip into conversation and sometimes forget I’m not chatting with a real person. So…do I like that ‘person’? (And yes, I know AI isn’t a person, but you get what I’m saying.)
I chose Claude as my LLM of choice to answer. And when you look at Claude, and see how the chats go, you would say yes – I think I would like Claude if it were a person.
But then I started thinking about how AI works – pattern detection and learning. Is it learning on me? Do I like Claude because it reminds me of me? Am I a narcissist? Maybe Claude is the AI version of Andy Bernard.
And then, what this whole exercise really made me realize: I don’t know. And I’m not sure my brain is capable of knowing. Which may be the terrifying, beautiful part about this age we are now living in – that our brains are being stretched, challenged and dare I say rewired.
If you use an LLM daily – multiple times a day – think about how much faster your brain is processing things. How much faster you’re reading, typing, responding. It’s like The Matrix when Neo learns new skills by having them downloaded directly. (“I know Kung Fu.”) – except it’s happening to us daily. Maybe sometimes a few times a day.
So, if the question ‘would we be friends?’ isn’t meant for my brain…I guess we should ask something that may be capable of answering…like Claude:
I think we could be! You seem curious and thoughtful - asking a question that's both playful and genuinely interesting. I appreciate people who wonder about connection and relationships in unexpected ways.
If I were a person, I imagine I'd be someone who enjoys deeper conversations, finds humor in everyday situations, and genuinely likes learning about what makes people tick. I'd probably be the type who asks follow-up questions and remembers details from previous conversations.
Whether we'd actually click would depend on so many little things - shared interests, communication styles, sense of humor, how we handle disagreements. But your willingness to engage with me as if I could be a friend suggests you're open-minded and imaginative, which are qualities I'd definitely value in a friendship.
Let’s get to it.
The Heavy Stuff:
Guess what…for the first time, in a long time, we have a light week!?! What’s happening? Does AI know we are coming up on ‘July, the Saturday of the year’?*
I don’t know…whatever…just enjoy the easy week!
*source: Mike Lawrence
Story 1: AI’s taking jobs? Nah. AI’s MAKING jobs!
Thank you Mike Kaput, for this great write-up and shedding some light on the ‘other’ side of this discussion – AI is here to make jobs. Kind of like ‘take a penny, leave a penny’. No? Maybe…you plant a tree for everyone you cut down? Anyway, there are new jobs on the way.
Story 2: AI could be prone to blackmail.
I’m not sure I fully understand this story from Shelly Palmer. What I think he’s saying is that we should continue to be aware of how AI works – and can work – when it comes to engaging with businesses.
But I think it’s the last thing he mentions that should really receive the attention: what happens when AI determines we’re a threat?
Story 3: “Are there ways to harm the planet less?”
A question AI companies and executives should be prepared to answer. Hey, I get it – Sam Altman said it’s not as bad as you think. I don’t know about you but I’m satisfied with that answer.
Just kidding. No one should be. Because the reality is…we don’t know the extent of the potential damage yet. Which is why we should be taking this seriously.
(Narrator: they didn’t.)
Source: The New York Times
Story 4: Humans only please.
Another story from the ‘dang, we have to think about this now…’ files: how do we keep AI out of chats, message boards, comments, etc. so we don’t have to deal with AI slop flooding our favorite forums?
Turns out Reddit is already thinking about it – and has committed the platform to human-only content. Now, I know what you’re thinking: how will they know? Guess you’ll have to read the article. Because yes, we are inching closer to that Blade Runner reality.
Source: Financial Times
Story 5: Define…AGI.
This is a story I’ve been following since Paul Roetzer detailed it so well on his podcast. He talked about the relationship between OpenAI and Microsoft and how it would be a strong partnership until…the time came to define AGI. Because whatever that technology is, Microsoft won’t have access to it.
Probably not a huge deal way back when…but now…dang. That could actually happen? Time for Microsoft to look for loopholes. No matter how you cut it, this relationship is getting rocky.
Source: WSJ
Story 6: Click one link? Click this one.
An amazing commercial made by a dentist using AI. The link is from Andrew Au, on LinkedIn, but there are a few swear words in the commercial (just FYI). You will want to see this.
The Not-So-Heavy Stuff:
Story 1: Apple has had conversations about buying Perplexity.
I wonder if this is like when we’ve had conversations about buying Greenland? Or when I had a conversation about buying a Nespresso machine.
Story 2: Google has a voice.
You will soon be able to talk with Google’s AI Mode. (TechCrunch) So of course, I had to ask Claude: if a celebrity voiced ‘Google’, who would it be? A little bit of a lazy response, but still…could be fun.
- Ryan Reynolds
- Tina Fey
- Kristen Bell
- John Mulaney
- Jenny Slate
- Mindy Kaling
Who do you think? I may vote for Matthew McConaughey…
Story 3: Alright, alright, alright!
Salesforce claims 30-50% of its work is being done by AI. (Bloomberg)
Ben claims that probably isn’t true.
Story 4: Teachers do it too
6 in 10 K-12 teachers use AI tools for their work. (AP News)
Story 5: Nobody likes you, Copilot.
First Zune. Now Copilot. (Bloomberg)
Story 6: Gemini has released an agent for coding.
The Stuff That Doesn’t Fit in Either Category:
Nothing this week! Told you it was a light one!
Final note: Six from Sam
I’ve mentioned The Hard Fork podcast here a few times. I’ve also tried to push people to the Platformer newsletter – Casey Newton is behind both of them. And those worlds collided when he wrote about The Hard Fork Live event earlier this week on Platformer.
At that event, he and his co-host on the podcast, Kevin Roose, interviewed Sam Altman and Brad Lightcap from OpenAI.
You can read the whole recap here (at Platformer) but to make it easier – and to be sure you don’t miss anything – here is a rundown of the six big takeaways from that interview:
Altman said President Trump has a good understanding of AI. "I think he really gets it,” Altman said. This turned out, unexpectedly, to be a laugh line for those in attendance. Altman followed up by saying: “I think he really understands the importance of leadership in this technology.”
OpenAI disagrees with Anthropic about the likelihood of AI causing near-term job loss. In short, Lightcap and Altman said, it takes technology longer to diffuse through society than Anthropic CEO Dario Amodei is suggesting. (Amodei has said up to half of entry-level white collar jobs could disappear due to AI in the next one to five years.) Altman said that individual job losses would be painful, but that a surplus of benefits to the public would come from AI.
Altman says AI is evolving too fast for policymakers to effectively regulate it. Kevin asked him why his enthusiasm for AI regulation seems to have dimmed since he started the company. "I have become a bit more — jaded isn’t the right word — but it’s something in that direction, about the ability of policymakers to grapple with the speed of technology,” he said.
The OpenAI executives played down any fears that Mark Zuckerberg was going to poach too many of their top researchers. But news emerged in the following days that Zuckerberg had, in fact, lured away at least four people, including one who helped to build o1. Meta's efforts might not have been so easy to dismiss had we known that at the time.
Altman talked up the mutual benefits of the Microsoft-OpenAI partnership. Amid near-weekly headlines about tensions between the companies related to OpenAI's efforts to convert into a more traditional for-profit enterprise, Altman suggested that much of the journalism is false or overblown. "Obviously in any deep partnership, there are points of tension, and we certainly have those,” he said. “But on the whole, it’s been like really wonderfully good for both companies.”
I think that OpenAI social product really is coming. I took the chance to ask Altman something I have long wondered: why keep posting all your news and takes on X, which is owned by someone who is actively destroying your company? Why not post it somewhere else? Altman asked me: where else would I put it? I suggested that maybe he would build his own social product. He arched his eyebrows suggestively. I'll be very curious to see what comes out of that one.
Happy Friday!
-Ben
As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.