AI round-up: Week of March 25, 2024

Do you have two hours of free time? Of course you do! So why not give the Lex Fridman podcast a listen?

Last week he released his interview with Sam Altman. To Lex’s credit, he asked a lot of the hard questions, and, to Sam’s credit, he gave what I felt were fairly sincere responses. I wasn’t expecting that.

Some key takeaways for me:

  • He was really, really upset and hurt over the OpenAI firing. He said it may make him more skeptical that plans will unfold the way he expects, because he’s now more prone to thinking people won’t act the way he assumes they will.
  • He doesn’t think there’s a 0% chance he gets shot at some point; that’s how important what they’re working on is.
  • He feels Elon Musk is a big part of our future, and we should all want to root for him and maybe grow up a bit (my words, not his).
  • Is he afraid of AGI and the things that could happen if AI becomes “all-powerful”? Not exactly. He’s more nervous and worried about the dramatics around potential AI stories and how they could be sensationalized and politicized.
  • GPT-5? Coming at some point this year.
  • Does he use GPT-4? Yes, but he thinks it sucks. He uses it for brainstorming, kicking off any type of task, and as a research assistant.

Ok, if you don’t have two hours of free time … join the club! Just kidding. (I listened to it on a long walk over the weekend at 1.25x speed.) Paul and Mike do you a solid and break down the interview on this week’s episode of The AI Show. (Episode 89)

The Big 5 (The Lex Fridman podcast is our number 1)

2. NVIDIA announces a foundation model for humanoid robots.
This is big, no doubt, and you’ll want to read about it. But come on, NVIDIA. The robot’s name is GR00T. You also announced a new computer for these robots called Jetson Thor. Is this the best you can do? I think we should expect a little more creative effort in your product naming.

3. 10% of U.S. workers are in jobs most exposed to AI, according to the White House.
Well, there you have it. The White House said it, so it’s true. Take that, Sam Altman. 95%??? Try 10. Or 20% if you want to talk about high-exposure jobs. So, maybe it’s like … 30% total?

I have to say, I’m not following this report, guys. And maybe we’re just still too close to COVID, but I definitely wasn’t feeling the comparison of AI exposure to a virus. Hire a PR firm next time.

4. “We opted you into SGE. You’re welcome.” – Google
Did you happen to hear (or see) that everyone is now seeing AI Overviews in search results? Even if you didn’t opt in?

Learn a little

If you’re interested in learning more about Sora, you will want to see these amazing videos put out by … Sora. They have been working with “the creative community” to see what Sora can do. And judging by what the community made, Sora can do some amazing things. Have fun with this one.

Did you hear about…

…the White House released guidelines for how federal agencies use AI.

…you may want to read a summary of those guidelines vs. the actual fact sheet.

…Stability AI’s CEO stepped down, and this means something?

…the UK could see job losses of 8M from AI … or an AI-inspired economic boom? TBD!

…Amazon’s investment in Anthropic is up to $4B.

…AI is replacing a ‘Mamma Mia!’ star in the upcoming BBC production of the hit musical.

Bonus … from Paul Roetzer

Pulled from his LinkedIn post this morning regarding Hume:

Hume wants to be “your empathetic AI voice.” According to Hume’s X account:

“EVI is the first conversational AI with emotional intelligence. EVI understands the user’s tone of voice, which adds meaning to every word, and uses it to guide its own language and speech.

“EVI has a number of unique empathic capabilities

1. Responds with human-like tones of voice based on your expressions

2. Reacts to your expressions with language that addresses your needs and maximizes satisfaction

3. EVI knows when to speak, because it uses your tone of voice for state of the art end-of-turn detection

4. Stops when interrupted, but can always pick up where it left off

5. Learns to make you happy by applying your reactions to self-improve over time”

You can certainly imagine a near future (maybe 1–2 years) in which all AI chatbots have this kind of synthetic empathy. Imagine talking to Siri and having it respond differently based on your emotions.

You can try Hume at

Must read/must discuss:

AI uses as much energy as a small country. And we’re just in the beginning phases.

But we can do something about it. As long as you trust people enough to think they’ll use AI responsibly.

I was just talking with our AI council about this topic. I was surprised there hadn’t been more written about this. Every aspect of AI consumes energy and takes a toll on the environment … yet, people had largely been quiet on the subject. Not now. I think we’re going to hear a lot more about this in the coming months.


An AI March Madness update.

ChatGPT went 5-4 on its 10 picks. So why only nine results? Did anyone notice … it put McNeese St. in twice?

The five upset teams it picked: NC State (a Sweet 16 team); James Madison; Grand Canyon; Duquesne; Washington St.

Always here to talk or brainstorm!

Thanks for reading! Feel free to share!


As a reminder, this is a round-up of the biggest stories, often hitting multiple newsletters I receive/review. The sources are many … which I’m happy to read on your behalf. Let me know if there’s one you’d like me to track or have questions about a topic you’re not seeing here.