I Looked Up the Odds of AI Replacing Me, Then I Closed the Tab.
Why the most viral tool on the internet is training you to rehearse your own obsolescence, and what the smartest people are doing instead.
TLDR: There's a tool going viral right now that tells you the exact percentage chance AI takes your job, and millions of people are using it to rehearse their own obsolescence without realizing that's what they're doing. Checking your replaceability score feels like preparation, but it's actually just fear wearing a data costume. The people who will thrive in the next decade aren't the ones obsessing over their odds, they're the ones doubling down on the one thing no algorithm on earth can replicate.
I was sitting at my desk this morning with a cup of Cometeer coffee that had gone cold twenty minutes ago, staring at a search bar that wasn’t Google and wasn’t ChatGPT.
It was a brand-new tool built by Action Network using research data from Anthropic, the company behind Claude, and it’s going viral right now for a very specific reason:
you type in your job title, and it spits out the “implied odds” that artificial intelligence will replace you within the next few years.
Seven hundred and fifty-six occupations in the database, ranked from most vulnerable to least, and yours is almost certainly one of them.
So I typed in “Consultant,” watched the little loading animation spin, and for about four seconds I felt that familiar heavy drop in my chest, the same one you get when your boss sends “can we chat?” at 4:47 on a Friday, or when you see a missed call from your kid’s school in the middle of a meeting. (The second one is far worse than the first.)
What if the number is 80 percent?
What if it’s higher?
And then, before the results could load, I did something that genuinely surprised me.
I closed the tab.
I didn’t look.
And in that moment, I realized something important about what we are collectively doing to ourselves right now that nobody seems to be talking about.
Because right this second, as you’re reading this, millions of professionals are sitting in climate-controlled offices and home offices and coffee shops, typing their livelihoods into a search bar, and waiting for an algorithm to tell them whether or not they still matter.
Computer Programmers came back at 45% odds of being replaced, Risk Rank number 1 out of 756.
Customer Service Representatives at 42%.
Data Entry at 40%.
The tool is everywhere: LinkedIn feeds, group chats, Slack channels. Your uncle probably texted it to you this morning. And we are treating it like some kind of crystal ball that can see our future.
But here’s what’s actually happening, and it’s far more dangerous than most people realize: we are rehearsing our own obsolescence.
I wrote a few weeks ago about how the human brain’s default setting isn’t optimism, it’s threat detection.
Two hundred thousand years of evolution hardwired us to scan every horizon for something that might kill us, and the ancestors who heard rustling in the bushes and assumed everything was fine didn’t tend to leave many descendants.
That wiring kept us alive on the savannah, or in Midtown…and it served us well for a very long time.
But now there are no lions and there are no bushes.
There’s just a search bar and a percentage.
And that same ancient wiring that once saved our lives is now convincing millions of smart, capable people to sit in their metaphorical cars and mentally rehearse the scenario where they lose, over and over and over again, before anything has actually happened.
I call this The Odds Trap, and once you see it, you can’t unsee it.
It works like this: we feel anxious about the future of work, so we seek out data to give ourselves a sense of control.
We look up the odds…we read the reports…we scroll through headlines about Jensen Huang standing on stage at Nvidia GTC yesterday predicting a trillion dollars in AI infrastructure spending, or Jack Dorsey laying off thousands at Block because, in his words, AI can do the work and he’d rather “be honest about where things are headed.”
And all of that scrolling and searching and calculating feels productive, it feels like we’re preparing ourselves for what’s coming.
But we’re not preparing.
We’re spiraling, and the data proves it.
This week, the Wall Street Journal reported on one of the largest studies ever conducted on how AI is actually affecting work habits, not in theory, not in a TED talk, but in practice, across 164,000 real workers.
And what they found was the opposite of what every productivity guru has been promising for the last three years: AI isn’t lightening workloads. It’s making them dramatically more intense. Two findings in particular really shocked me:
Time spent on email has doubled.
Focused, deep-work sessions have dropped by nine percent.
People aren’t being freed up to do their best thinking, they’re drowning in more dashboards, more micro-decisions, more tools layered on top of tools, each one demanding a little more of their attention.
Harvard researchers have already coined a term for it, “AI Brain Fry”: the mental fatigue that comes from constantly overseeing, prompting, and correcting machines that were supposed to be handling things for us.
We built the tools to free us, and then we became their full-time supervisors.
But here’s where it gets really interesting, because there was another study published this week that didn’t get nearly the attention it deserved, and it might be the most important piece of research I’ve read all year.
Researchers at the University of British Columbia took 300 lonely college students and split them into groups. One group was given a highly supportive, empathetic AI chatbot, running on GPT-4o mini, specifically instructed to “listen actively and show empathy,” and told to text it every day for two weeks.
This was, by design, the most caring, most supportive conversational partner you could possibly build with today’s technology.
The other group was paired with a random human stranger, not a therapist, not a friend, not someone they chose, just another first-year student they’d never met, assigned completely at random, and told to text them every day.
After two weeks, the students who texted the chatbot experienced a two percent reduction in loneliness, which, by the way, was statistically identical to the group that just journaled one sentence a day.
The students who texted a random stranger?
A nine percent reduction.
Let that land for a second: a random human being, communicating over text message, was four and a half times more effective at reducing loneliness than a machine that was specifically engineered to be empathetic.
The researchers described chatbots as “social junk food”: they make you feel good in the moment, the same way a candy bar gives you a quick hit of energy, but over time they don’t nourish you.
And a separate twelve-month study from the same lab found something even more troubling: higher chatbot use was consistently linked to higher loneliness later on, suggesting a negative feedback loop where isolation drives people toward AI companionship, which then deepens the isolation.
And this is the whole game, this is the thing that most people are completely missing while they’re busy looking up their odds on that viral tool.
The real dividing line in the AI economy isn’t your education level, and it isn’t your salary, and it isn’t even whether your work happens on a screen or out in the physical world.
The dividing line is humanity.
If your job is fundamentally about moving information from one box to another, reformatting, sorting, copying, pasting, summarizing, then yes, the odds are real, and you should take them seriously.
But if your job requires trust?
If it requires sitting across a kitchen table from someone and saying “I understand what this house means to your family”?
If it requires looking a scared employee in the eye and saying “I believe in you,” not “we believe in you,” because, as I wrote last month, nobody runs through a wall for “we”?
If your work requires the messy, inefficient, deeply irreplaceable friction of one human being fully present with another human being, then the odds of a machine replacing you aren’t 45% or 42% or any other number.
The odds are zero.
Because nobody has ever run through a wall for an algorithm.
Nobody has ever felt truly seen by a language model. And nobody has ever trusted a machine with the biggest, most consequential decision of their life.
The Brookings Institution published a study this month that examined 37.1 million American workers in jobs with high AI exposure, and what they found should give you a tremendous amount of hope if you’re willing to hear it: 26.5 million of those workers, the vast majority, have what the researchers call “above-median adaptive capacity,” meaning they already possess the skills and flexibility to evolve alongside the technology rather than be consumed by it.
They can adapt.
They can find the thing the tool can’t do and become indispensable at it, which is exactly what humans have done every single time the tools have changed throughout history.
Read that twice.
The six million who face real risk?
They’re overwhelmingly in roles that were already purely mechanical, roles where the human was essentially functioning as a machine long before the machine showed up to do it faster.
The companies (and entrepreneurs) that win the next decade will not be the ones who replace their people with bots.
They will be the ones who use bots to free their people to do the things that only people can do: to build trust, to show empathy, to make the judgment calls that require wisdom and not just data, to be fully and irreplaceably human in a world that is becoming more automated by the day.
That is the second story.
The one nobody is telling you while you’re busy looking up your odds.
So here’s my challenge for you, and I mean this week, not someday, not eventually, this week.
You are going to see the link to that job replacement tool.
Someone is going to post it on LinkedIn with a breathless caption.
A friend is going to text it to your group chat.
You might even feel your thumb hovering over the search bar right now, ready to type in your title and see what comes back.
Don’t do it. Close the tab.
Take the ten minutes you would have spent doomscrolling your own replaceability and spend them doing the one thing that no machine on earth can do.
Call a client you haven’t spoken to in a while, not to sell them anything, just to ask how they’re doing and actually listen to the answer.
Walk over to a colleague’s desk and ask how their week is really going, and then stay long enough to hear the real answer.
Go home tonight, leave your phone in your office, close the door, sit down with the people who matter most to you, and be completely, fully, undeniably present.
Stop asking a machine to calculate your value.
Start proving it instead.
That tool can tell you the odds of being replaced. But it will never, not in a million years of training data, be able to tell you the odds of being irreplaceable.
Those odds are entirely up to you.
PS: I created a real assessment you can take here. Full transparency: it was built with Manus AI, but I genuinely think it’s a good judge of whether AI will take your job.