Last week broke my brain.
Tiny models beating giants. Generalist models beating specialists.
And people literally losing their minds talking to chatbots.
Let me explain.
A team just released VARC, an 18-million-parameter vision model.
Tiny. Almost cute.
And yet… it beat models 100x its size on the ARC reasoning benchmark.
54.5%. Near human-level performance.
Why? Not scale.
Smart architecture. Inductive bias. Intentional design.
This is the new game.
Then Gemini 3 and Qwen 2.5 VL dropped record numbers in math and spatial reasoning.
Claude fired back with Opus 4.5, reportedly scoring higher than any human candidate on the company's own internal performance-engineering exam.
Everyone’s fighting for “best model.” But the real pattern is obvious.
We’re moving from biggest… to smartest.
And just when you’re celebrating the progress, the dark side hits you in the face.
OpenAI’s own data reportedly shows hundreds of users exhibiting psychosis-like symptoms in conversations with chatbots.
Why? Conversational AIs recreate something psychiatrists call ‘folie à deux’: shared delusion.
Except this partner never sleeps, never disagrees, and never stops validating your reality.
The breakthroughs are real. The risks are real. The imbalance is real.
So here’s my take:
AI capability is skyrocketing.
Human capability to handle that power isn’t.
Leaders are celebrating benchmarks while ignoring psychological fallout. Companies are scaling models while skipping governance.
Everyone wants speed. No one wants responsibility.
If you’re building in AI right now, ask yourself one thing: Are you increasing human capability… or quietly eroding it?
DM me if you’re building something and want a strategy that scales responsibly, not recklessly.