
AI Probably Isn't Taking Your Job

  • Writer: Barry Wolfe
  • Aug 22
  • 3 min read

We’ve all heard the predictions of our imminent AI-dominated future: killer robots, Mark Zuckerberg richer than the Caesars (if he isn’t already), and the rest of us panhandling on street corners. Scary stuff, but don’t draw up that “Will Work for Food” sign just yet.

New developments confront timeless truth

According to a stunning MIT report released in July, despite businesses spending $30-40 billion on generative AI, 95% of the organizations studied are seeing no return. The GenAI Divide: State of AI in Business 2025 notes that workers are assigning only simple tasks to AI tools while relying on co-workers for more complex jobs.


Don’t scroll to the Comments to type-scream “TECH DENIER!!” just yet, because it gets worse. Investors are pouring those gargantuan sums into AI in the belief that the more resources an AI model has, the better its output will be. It turns out that’s not what’s happening. Anthropic just released research showing that AI models actually get dumber the longer they try to “think”; the technical term is “inverse scaling.” Speculation is growing that this tech stampede may prove to be the latest tech bubble.


I don’t claim prescience, but I am confident that, while specialized artificial intelligence will undoubtedly replace some human workers, the all-impoverishing, dystopian killer-robot economy is not on the horizon. Here’s why.

Psychology began seeking a scientific understanding of human intelligence in the late nineteenth century. Then, in 1917, Harvard psychologist Robert Yerkes convinced the US Army to use intelligence testing to “weed out the mentally unfit.” After World War I, Yerkes’ tests garnered great interest among the public, including employers.


However, psychologists reviewing Yerkes’ intelligence tests at a 1922 symposium couldn’t agree on what his tests were actually testing. Nor could psychologists reach a consensus on what a “psychological test” actually was or, perhaps worst of all, on what should constitute a scientific definition of intelligence.


A century on, psychologists continue to ask, “What is intelligence?” And we’re no closer to an answer. Is it Raymond Cattell’s model of fluid (learning ability) and crystallized (factual) intelligence? Robert Sternberg’s triarchic model of analytical, creative, and practical intelligence? How about Howard Gardner’s nine types of multiple intelligences? Something else?


A question that has bedeviled psychologists in the abstract is now tormenting business in tangible reality. After three years as the relentless herald of AGI, OpenAI CEO Sam Altman recently conceded that “AGI,” used to connote a human-level general intelligence, is “not a super useful term.” His reason: different companies are using different definitions of intelligence.

Maybe you, like me, have only the vaguest grasp of tokens, large language models, and the like. But even we can see that AI’s recent setbacks are underscoring a timeless truth: you can’t build what you don’t understand. So until that truth is disproved, or someone finally solves this what-is-intelligence thing, I’ll keep doing my job, and you can probably keep doing yours.



References:

Ramel, David. “MIT Report Finds Most AI Business Investments Fail, Reveals ‘GenAI Divide.’” Virtualization & Cloud Review, August 19, 2025. https://virtualizationreview.com/articles/2025/08/19/mit-report-finds-most-ai-business-investments-fail-reveals-genai-divide.aspx.

Gema, Aryo Pradipta, Alexander Hagee, Runjin Chen, Andy Arditi, Jacob Goldman-Wetzler, Kit Fraser-Taliente, Henry Sleight, Linda Petrini, Julian Michael, Beatrice Alex, Pasquale Minervini, Yanda Chen, Joe Benton, and Ethan Perez. “Inverse Scaling in Test-Time Compute.” Anthropic Alignment Science Blog, July 22, 2025. https://alignment.anthropic.com/2025/inverse-scaling/.

Baritz, Loren. The Servants of Power: A History of the Use of Social Science in American Industry. Middletown: Wesleyan University Press, 1960, 62-67. Internet Archive.

Browne, Ryan. “Sam Altman now says AGI, or human-level AI, is ‘not a super useful term’ – and he’s not alone.” CNBC, August 11, 2025. https://www.cnbc.com/2025/08/11/sam-altman-says-agi-is-a-pointless-term-experts-agree.html.

