Why we overestimate what generative AI can do

When we apply human bias to a non-human construct.

Photo Credit: Unsplash/Josh Applegate

Generative AI is impressive. But our very humanity may be causing us to overestimate what it can do.

This observation was made by MIT robotics pioneer Rodney Brooks in an interview with TechCrunch.

Here are some nuggets he shared that caught my interest.

  1. We extrapolate using human bias

When we see an AI system perform a task, we subconsciously generalise from it, based on similar experiences.

We then estimate the AI system's competence, not just at the task at hand but overall.

  2. LLMs are not suited for everything

Rodney spoke of well-meaning advice that he should leverage LLMs at his robotics startup, his third, which makes warehouse robots.

Yet LLMs won’t help when 10,000 orders must be shipped in two hours, he says. “Language… is just going to slow things down.”

  3. All technologies plateau eventually

I wrote recently about how OpenAI intends to double down on even larger LLMs, seeing no signs of the scaling paradigm plateauing.

And when we look at the improvements that came with GPT-4 and GPT-4o, we tend to imagine the same leap with GPT-5, GPT-6, and GPT-7.

However, Rodney argues that all technologies plateau eventually. He cited the iPod digital music player, which debuted with 10GB of built-in storage.

While its storage capacity doubled with each of the first few iterations, it tapered off at 256GB with the final model almost two decades later.

In this case, it turned out that few people needed that much storage.
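To see how quickly the extrapolation runs away from reality, here is a minimal sketch comparing naive "capacity doubles every generation" thinking with the iPod's actual 256GB ceiling. The generation count and cadence are assumptions for illustration, not Apple's actual release schedule.

```python
# Naive exponential extrapolation vs. the iPod's actual plateau.
# Assumption: one hypothetical doubling per generation, ~13 generations
# over two decades (illustrative only, not Apple's release history).

ACTUAL_PEAK_GB = 256  # largest storage any iPod model ever shipped with

capacity_gb = 10  # the debut model's built-in storage
for generation in range(1, 14):
    capacity_gb *= 2  # the bias: assume the doubling never stops
    print(f"gen {generation:2d}: projected {capacity_gb:>6,} GB "
          f"(actual ceiling: {ACTUAL_PEAK_GB} GB)")

# By generation 13 the projection exceeds 80,000 GB (80TB), more than
# 300x what the market actually wanted.
```

Our intuition rides the exponential, while real demand for on-device storage flattened out two orders of magnitude earlier.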

What do you think?

Will AI plateau due to insufficient data to train on, or a global energy crunch?

Can you think of other technologies that have plateaued?

Mainstream broadband in Singapore, for example, stopped at 1Gbps for the longest time, only recently moving to 10Gbps. The reason? Most home users are happy with 1Gbps.

Read my CDOTrends story here.