Overview
There's been much ado about the AI bubble bursting (ref, ref). I sit in the middle ground. There has certainly been a bubble in how AI has been marketed. But take away that overblown hype, and what I don't want to lose is the recognition of the notable human achievement that our current AI models represent.
From Artificial Intelligence's beginnings (ref), the field has been haunted by the specter of overblown promises (ref). The backlash was withdrawn funding, as when DARPA and the National Research Council (and the British Government) cut off funding for undirected AI research in the 1960s.
The resulting "AI Winter" (ref) meant that the field progressed only at a snail's pace, and researchers needed strong conviction, or at least genuine scientific curiosity, to pursue it.
Different approaches to AI
"The Master Algorithm" by Pedro Domingos (ref) gave me a good orientation on the most promising approaches in the field. One of those approaches was the idea of emulating the brain via "Neural Networks". And the progression of research for neural networks began around the "Perceptron" (ref).
Through many iterations, we've arrived at "Large Language Models" (LLMs), our current state of the art. For quite a few years, Google was the leader in AI research, and it was Google's attention models (ref) that arguably made the research breakthrough leading to the current LLM architectures.
But that was in a growing and competitive field. And of course the release of ChatGPT was a watershed moment (ref, ref).
Impact on Society
Decline of Reason
Numerous Cassandras have warned about Artificial Intelligence more broadly and its predicted impact on society. It's easy to notice an accelerating waning of human reasoning and critical-thinking skills. I'd argue this began with search engines in general, and then Google in particular, because people could offload a great deal of learning and recall to cloud services. Knowledge became much more easily searchable. And now there's ChatGPT (ref, ref).
Dating, Mating, and Sexual Selection
Technological tools' dumbing down of our society is now compounded by growing fractures in love and human relationships, a divide that AI is increasingly filling. I never would have imagined Artificial Intelligences becoming actual companions in real life, even though the film "Her" foretold this (ref, ref), and in a sense "Blade Runner 2049"'s Joi alluded to it as well (ref).
Now, in the real world, an Idaho man says ChatGPT sparked a ‘spiritual awakening’ (ref), and his wife says it threatens their marriage. Other people are simply losing their minds outright: AI Psychosis (ref)... Chilling indeed.
Leaving aside the separate Western issues of political polarization and the divide between the sexes, it's hard not to think that life is becoming a Black Mirror episode, a real-life incarnation of "Her". I thought that would be a generation away, if it came at all. Even at the time, I couldn't fathom how prescient the film actually was, nor how shocked I would be at the speed of the change.
Job Loss
Indeed, AI is just the tip of the iceberg. Get ready for selective gene editing too (ref, ref); CRISPR's been around long enough. But I don't want to go too far afield.
More topical, there's been justifiable alarm about the nature of work and anticipated job loss across many industries. From back office drudgery (ref) to full-blown actors (ref), the career promise that so many young people banked on is quickly dwindling. Even prestigious software development roles are reportedly on the decline (ref). And we are hearing of "strong evidence" that AI's impact is already slowing the job market for early careers (ref).
Business Case
Balancing i. the societal impact against ii. the overblown hype is aided by iii. considering the business case of LLMs, i.e. how sustainable is this? As it stands, ChatGPT (and other providers) are subsidizing their users: serving costs exceed what the providers charge (ref). Which recalls the old joke, "We lose money on every sale, but we make it up on volume" (ref, ref).
Investors pouring billions into bootstrapping an industry wouldn't be feasible without that overblown hype... Almost as if the hype helps attract all that funding.
Middle Ground
And all that noise is starting to drown out the huge human achievement that LLMs represent. Keeping in mind that neural networks were just one of the approaches itemized in "The Master Algorithm", we're very far from Artificial General Intelligence. But I think even brief, sober reflection lets us see that this is a huge step in how we humans think about solving AI. I.e. all the talk about an AI bubble obscures the fact that AI is still a seismic technological shift, like the Industrial Revolution or the Internet.
I think this sentiment is distinct from predicting AI's imminent decline; I don't think we'll have another AI Winter. Like heat conduction in metals (ref), I anticipate that all things in nature (AI development included) move towards their equilibrium.
Letters
I was recently exchanging letters with a friend and former colleague, Tianxiang Xiong (ref). I still remember TX beginning as an AI skeptic overall, then becoming an optimist, and recently seeming to return to the ranks of the skeptics.
One of our discussions (ref), germane to job loss following AI in the workplace, was the currently collapsing junior-to-senior software developer pipeline. And besides launching one's own startup or contributing to open source, I'm not sure what the solution is.
We discussed it, and he clarified that his latest missive is critical of the overblown hype. So it seems we bridged the nuance between our two positions; i.e. while marveling at the huge leap that LLMs represent, I take his point. And when you scratch a little below the surface, I'm seeing similar sentiments from programming thought leaders. Stephen Diehl's "It Would Be Good if the AI Bubble Burst" (ref) similarly sees the utility in the most advanced language models, but understands that they are just probabilistic token generators... I.e. they are just tools, and you have to have the requisite expertise to wield them effectively.
TL;DR: the bursting AI bubble seems to be the trending theme. But if you're not the kind of organization that knows how to leverage LLMs, you need to become that kind of organization. LLMs are just technologies (and value propositions), like all before them. Marketing hype shouldn't obscure or diminish the profound human achievement they represent.
After jousting for a bit, I thought our exchanges would be interesting to record and share with a wider audience.
Jul 10, 2025
TX
"https://x.com/metr_evals/status/1943360399220388093
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers.
The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't."
Tim
"Ermm, even anecdotally I would question this study.
I'd like to know how they found a baseline of experienced developers' knowledge and capabilities around LLMs.
Even experienced developers need to completely rebuild their tooling workflow and mental model. And that's for
i. a moving target - LLMs' constantly accelerating capabilities.
ii. new paradigms (Prompts, Tool-calling, Agents (MCP), etc)
And funny enough re Grok, right after we finished messaging yesterday, I saw this antisemitism story pop up in my feed. https://www.youtube.com/watch?v=3JCRkZRlzbI
So on the first story, I suspect the evaluators aren't themselves developers,
and aren't using the right metrics to capture productivity.
And Elon Musk does grok the paradigm shift (sorry :).
His focus is just too diffuse, and he seems to be losing control of everything. Classic mistake."
TX
"I think the funny thing is that all the developers thought they were more productive, when in fact they were less"
"It's possible that they could become more productive with further training"
"But right now they're living in La La Land "
Tim
"Well that's just it... I'm skeptical that they've correctly identified those developers were less productive.
Think of that old maxim of the 10x developer typing less than a regular developer. I think something like that is happening with those devs who have fleshed out their tooling."
Jul 18, 2025
"Ha ha, is that what vibe coding is? No wonder it's been going so viral. Sorry guys, the sword only swings for those who can wield it!
We talked about this before. The productivity of senior devs who know how to use this... will skyrocket. But the developer pipeline is going to crater. How will new blood acquire the skills?!"
Jul 13, 2025
TX
"https://www.pcgamer.com/gaming-industry/ai-is-no-longer-optional-microsoft-is-allegedly-pressuring-employees-to-use-ai-tools-through-manager-evaluations/"
Tim
"Well that's a great, and I'd argue necessary development. Like linters or unit tests, LLMs will become indispensable in the development cycle.
...
In fact I just watched this video where the presenter asserted that the most valuable skill will be specifying the problem domain and goals of the solution. And he's completely right. https://www.youtube.com/watch?v=8rABwKRsec4"
TX
"A tool so useful you have to force people to use it"
"The more management starts dipping their toes into the technical stuff, the worse the results"
Tim
"If management forces unit tests over formal verification tools, is that good or bad ¯_(ツ)_/¯
Meh, I think the free market will sort it out."
Jul 23, 2025
TX
"In the future the judge, jury, and executioner will also be AI 😬
https://m.youtube.com/watch?v=He8-DkJnXik
93% match? Good enough!"
Tim
"I'm not going to let you live down the fact that you were so uninterested in AI just a few years ago ☝🏿😆"
TX
"A few trillion dollars would change anyone's mind 😂"
Tim
"I still remember a talk Ray Kurzweil gave in Toronto. And this was before I left for Funding Circle in SF.
Even back then he was telling us to watch for the geometric growth that will happen in AI, and thus technology in general.
All that being the precursor to that technological "Singularity" phrase I'm sure you've heard bandied about. That meme was from Kurzweil. https://betakit.com/ray-kurzweil-shares-strategies-to-create-the-future
"Our brains see the future as linear, as flat progression. Exponential growth starts off the same as geometric growth, appearing flat until it reaches “that knee in the curve” at which point it rises vertically by doubling at regular points in time.""
"So strap in. Barring some geopolitical breakdown, it's only going to get crazier."
Sept 8, 2025
TX
"Pretty good rant on AI
https://m.youtube.com/watch?v=7dm_UY8jCGQ"
Tim
"I think the people getting frustrated with AI chatbots don't understand technology enough to i. understand these are just tools which should be used for the appropriate purpose. And ii. if you don't have the training or experience to understand how to apply these things, I can see how the reality doesn't seem to live up to the hype.
I'm of the opposite mind. LLMs mean everyone has a personalized oracle at their fingertips. And this represents a sea change in generalized problem solving, applicable to education, science research, programming, etc... everything that requires deep cognitive work.
Remember the O'Reilly technology adoption grid with concentric circles? I thought something similar should happen with AI... Well lo and behold they still run it. But the report is now paid. Something like this for the general public can help orient people to the right tools for the right tasks.
https://www.oreilly.com/pub/pr/3465
https://www.oreilly.com/radar/is-ai-a-normal-technology"
TX
"The proof needs to be in the pudding IMO. Stop claiming AI will revolutionize productivity and demonstrate it.
I built a web app in much less time w/ a coding assistant than it would have taken me otherwise. There's value there, but there are also trade-offs. Most people never even get to this point of using AI to do real work.
"Talk is cheap. Show me the code!"
https://www.tianxiangxiong.com/2025/08/21/will-ai-replace-programmers.html"
Tim
"Whoa lol. Strong opinions, strongly held :)
(TX) "Pushing the coding assistant hard also exposed its weaknesses, however. While it was great at following specific instructions–make this sidebar collapsible, for example–it didn’t have a good grasp of the overall project goals. ... Perhaps as a natural consequence of eagerly following Current Instruction™, the model paid virtually zero attention to good programming practices like Don’t Repeat Yourself.
...
In less than two days, I had a working application with a nicer UI than I could have built in two weeks."
I'll refer back to Ray Kurzweil's anecdote that once computers started beating humans at chess and Go, commentators started dismissing the achievement as no big thing. Your own example demonstrates an LLM that can build even a moderately sized project, and only falls short on higher-level software design principles. Think of what that means for human achievement. That software can understand your text description and build anything... That's incredible!
Lol, I'm reminded of the Louis CK joke that everyone flying in a plane should be stupefied into a constant state of shock that humans have figured out a way to... "fly in the air"!! My point of agreement in your post is the dysfunction of the hype cycle.
(TX) "The set of problems for which modern AI can deliver outsized gains is small... and most firms lack the sophistication to even recognize what they are."
I think that's more like it.
That "Is AI a “Normal Technology”?" O'Reilly article I shared makes the same observation.
(TX) "Models like o3, they’re actually very useful. They can do a lot of things that nonreasoning models can’t... It’s exactly an illustration of the point that diffusion—changes to user workflows, learning new skills, those kinds of things—are the real bottleneck."
"Yet, many who try to apply AI have had little success. Businesses are reporting meagre payoff, with only 5% seeing value from their projects.
...
One study suggests that, contrary to expectations, developers with AI tools were slower than those without."
Remember Phil Hagelberg's famous quote (I think originally Magnar's)... "If you are not the kind of person who can deal with paredit or smartparens, you need to become that kind of person."... It's like that, but with LLMs. I suspect that the teams seeing those meagre payoffs have not restructured their tooling, workflow, or thinking to fully use the leverage that LLMs give them.
My opinion is tempered by my experience building a feedforward backprop neural network many years ago, and by reflections like Stephen Wolfram's "What Is ChatGPT Doing … and Why Does It Work?". My takeaway on its impact on society is to always look at the technology's fundamentals and first principles. Marketing hype does exist, but it shouldn't overshadow the core value proposition of a given technology. And in that sense, yes, prognosticators should stick to a technology's core provable value, and avoid the overblown claims we're seeing now. I.e. I think the only thing we've lost are our illusions about LLMs. They are still just technologies, like all before them. But that shouldn't diminish the profound human achievement they represent."
TX
"Sorry, I didn't mean you
Just reread that and realized it seemed very targeted 🎯
I meant the AI hype men we see in the media
Most of which barely understand what a neural network is, much less have done any useful task w/ AI"
Tim
"Oh of course. I didn't take it as such.
This is just my broader philosophical reflections on the point.
(TX) Most of which barely understand what a neural network is, much less have done any useful task w/ AI
Exaaactly. That's what I think is going on."