Vivienne Ming on Building Robot-Proof Humans

This conversation is one I’ve been looking forward to for literally years. I first became aware of “mad scientist” Vivienne Ming when she was doing fascinating work on the tax on being different. She was in the world of AI long before it was a thing, and in her new book she offers a take on what AI will bring us that is a far cry from the usual.

Thanks for reading Thought Sparks! Subscribe for free to receive new posts and support my work.

I don’t use the term “fan girl” lightly, but I’ll make an exception for my guest on tomorrow’s episode of the Thought Sparks Podcast: Vivienne Ming.

Vivienne is a computational neuroscientist, entrepreneur, and author of the new book Robot Proof: When Machines Have All the Answers, Build Better People. She’s spent thirty years at the intersection of human and machine intelligence, and her take on AI is unlike anything you’ll hear from the usual chorus of either utopians or doomsayers.

If everything humanity has ever learned is essentially free on your phone, what’s left for us? Vivienne’s answer: everything we don’t know. The unknown is infinite — and that’s where humans become irreplaceable.

It turns out your grandmother was right – it isn’t what you know, it’s who you know and how

We’ve historically organized hiring, education, and career development around the idea that expertise is what sets people apart. You know things. You have skills. You have credentials. That’s how you create value.

Vivienne’s research — built on data from 122 million people — refutes this assumption. At Guild, an education benefits platform where she served as chief scientist, her team looked at what actually predicted who would be great at a job. Not adequate. Great. Skills and knowledge barely showed up.

What did instead? Social intelligence. Perspective-taking. The ability to understand why other people do what they do. These qualities predicted the quality of code written by software developers just as strongly as they predicted deal volume for salespeople. Writing software, it turns out, is a social enterprise. You’re doing it with people, for people, and the people who understand people do it better.

Now fast-forward to the AI era. A recent paper — replicated by an independent research group — found that the people who get the most out of AI are disproportionately high in perspective-taking. The same quality that makes us better at understanding other people makes us better at using AI. We’ve spent decades believing that knowing things and being an expert was a differentiator, worthy of higher pay and appreciated for its scarcity. Now, the knowing part is in everyone’s pocket. What remains, and what turns out to have always mattered, is the deeply human ability to forge relationships and understand others.

Automators, validators, and cyborgs

This sets up Vivienne’s three-way framework for how people actually engage with AI.

Automators hand the question to the AI, take the answer, and submit it as their own. They don’t do worse than the AI, but they don’t do better either. As Vivienne says, “Here’s the golden rule of AI tutors: if they ever give students the answer, they never learn anything.” Citing some research published by Anthropic, she notes that “people using Claude Code got worse at coding. They were better when they used it, but they got worse — worse at understanding the problem, worse at strategic thinking around coding.”

And she points out, “When you offload your cognition into the machine, not only do you not learn while you’re doing it, your cognition atrophies. And it goes beyond just traditional straight cognition. If you offload your emotional stability, your trust, your social skills to a machine, then you have a serious problem.”

Validators do the work of forming a hypothesis — and then ask the AI to confirm it. The AI obliges enthusiastically. These people actually do worse than the AI alone, because they’ve used a powerful tool to reinforce whatever they already believed.

Then there are the cyborgs — a small group (perhaps 5% of the population) who treat AI as a genuine intellectual sparring partner. They go back and forth, push back, get challenged, revise. You can’t easily tell which ideas came from the human and which from the machine. This group consistently outperforms both humans and AI working alone — and remarkably, it holds even when they’re using small open-source models. The difference isn’t the AI. It’s the human capital they bring to the collaboration.

Productive friction

This might be my favorite concept from the whole conversation. Vivienne argues we’re selling AI entirely wrong. “Efficiency, efficiency, efficiency” is the pitch — but if AI just handles the routine work without engaging you on the creative work, you don’t get less routine work. You get more. The goal, she says, should be using AI not to make our work easier, but to make it harder in the ways that make us better. She even used this philosophy to write the book itself — prompting AI to be her “worst enemy,” finding every flaw in her argument, rather than a collaborator who tells her she’s brilliant.

There’s also a remarkable personal story woven through all of this. Vivienne’s path from a childhood in John Steinbeck’s California, through homelessness in the ‘90s, to joint appointments at Berkeley and Stanford, to literally wiring an AI into someone’s brain via cochlear implants — it’s the kind of journey that makes everything she says about resilience land with particular weight.

Tomorrow’s episode is timely, provocative and, true to the spirit of the podcast, should spark your thinking! Have a look or listen and consider buying her book.