♟️ The Metagame #050: Plato’s Cave and AI

Most people are still watching shadows.

Welcome to the 50th edition of The Metagame.

Last month, I watched my AI agent debug its own code while I was at the gym. We’re living in the future, whether you’ve been keeping up or not.

Today, I’ll talk about what this means for you.

Read time: 3 minutes

If you know me, you know I won't shut up about AI.

I built BetterBets, an AI agent that picks NBA players' over/unders every day. I built Kalshibot, a 24/7 trading bot that makes Bitcoin predictions on Kalshi. I created GAIA, my AI agent running on OpenClaw (previously ClawdBot) that automates my coding projects, drafts my writing, updates my Notion databases, monitors the news, and so much more.

I say this not to brag. (Mostly.)

I say it because I want you to understand: I am deep in this world. I build with these tools daily. I see what's coming.

And I have to be honest with you about something uncomfortable.

Most people are still in Plato's Cave.

You may know the allegory. Prisoners chained in a cave, facing a wall. Everything happening behind them is cast as shadows onto that wall. They can't turn around, so the shadows are all they've ever seen. This is all they know. They think the shadows ARE their reality.

Right now, most people see AI as chatbots. Image generators. Fun toys. Productivity hacks.

Those are the shadows.

The real forms look like:

  • Autonomous agents that run without human oversight

  • AI systems that build and improve other AI systems

  • Models converging on shared representations of reality

  • Infrastructure-level intelligence embedded in everything, invisible, always running

MIT researchers just published something insane, the "Platonic Representation Hypothesis": different AI models, trained on completely different data and even different modalities, are converging on increasingly similar internal representations of reality. They're all learning to see the same underlying "forms." Like Plato predicted, except it's machines doing it.
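If you want to see what "converging representations" actually means, here's a minimal sketch. It uses linear CKA, a standard similarity score for neural representations, rather than the exact metric from the MIT paper, and the "embeddings" are synthetic stand-ins for real model outputs. The idea: feed the same inputs to two different models and measure how similar the resulting geometries are.

```python
# Toy illustration: score representational alignment with linear CKA.
# The embeddings below are synthetic stand-ins, not real model outputs.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two feature matrices (n_samples x n_features)."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(X.T @ Y, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(numerator / denominator)

rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 64))         # the shared "world" structure
emb_a = shared @ rng.normal(size=(64, 128))  # hypothetical model A's embeddings
emb_b = shared @ rng.normal(size=(64, 256))  # hypothetical model B's embeddings
noise = rng.normal(size=(1000, 128))         # embeddings with no shared structure

print(f"CKA(A, B)     = {linear_cka(emb_a, emb_b):.3f}")  # high: both encode the shared structure
print(f"CKA(A, noise) = {linear_cka(emb_a, noise):.3f}")  # low: nothing in common
```

Two "models" that are different views of the same underlying world score high; a model compared against pure noise scores near zero. That's the shape of the finding: the shared world, not the shared training data, is what drives the alignment.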

Why This Matters

In the original allegory, humans escape the cave to see reality. We're the protagonists. We're the ones who wake up.

But what if that's not how this plays out?

What if AI is learning to see the forms before we do? What if the machines are escaping the cave while we're still chained to the wall, arguing about shadows?

Forget the "AI is going to take your job" framing. That's boring and probably wrong.

There's an asymmetric advantage available right now to anyone willing to turn around.

The people who figured out the internet early didn't all become programmers. But they understood what was actually happening while everyone else was still asking "why would I need email?"

We're at that moment again. Except it's moving faster.

I'm asking you to look.

Play with the tools. Build something small. Watch what these systems can actually do, not what the news tells you they do.

And if you want to learn how to use these tools, I share everything I'm building and learning on 𝕏.

If you want personalized mentorship, schedule a call with me.

“Why do the language model and the vision model align? Because they're both shadows of the same world.”

- Phillip Isola, MIT

Thanks for reading!

If you have any questions, hit me up on LinkedIn or on Twitter/𝕏 at @sam_starkman, or feel free to reply to this email!

— Sam