AI: Has the bubble burst?

AI is the talk of Davos. Is it time to sell? 

So asks James Mackintosh in the WSJ. The illustrious gathering has a troubled record when it comes to foresight (case in point: 2023's doom-laden predictions of an imminent US recession). Mackintosh wonders whether this January's wall-to-wall AI coverage represents a similar misstep. The AI bull market, he notes, ended last June: while Nvidia motors on, others like Symbotic (down 35 percent since July) and SoundHound AI (down 60 percent since June) are struggling.


It’s not just stock prices. There is a noticeable change in sentiment. Even Davos darling Sam Altman is toning down the rhetoric. He said human-level AI will change the world and jobs “much less than we all think.” 

Last year's frenzied atmosphere painted generative AI (large language models, or LLMs, like ChatGPT) rather more dramatically. Egged on by some of the more outlandish pronouncements of Messrs Musk and Altman, and the doom-laden prophecies of Yudkowsky & co, "AI influencers" were quick to declare this a transformational moment.


But now 'everything is about to change' feels a little underwhelming. To most of us, the world still looks much as it did in 2022. Janan Ganesh suggests the hype is a symptom of overly abstract thinking. He contrasts his week in Dubai with conversations in Davos:

“You wouldn’t know from the abstraction of the discourse there how much of modern life still hinges on the safe passage of tangible objects across water (or on the distribution of mineral deposits). It is easier in Dubai to see what makes the world go round because of, not despite, its de-intellectualisation of things.”

Ganesh gets at the fact that our world is still full of practical considerations, and this is what the AI discourse misses. 2023 was a great year for futurists who could simply tell companies they needed to do more AI. No one dared to ask how, where or why.


There is a lull as these questions surface. Gartner puts generative AI at the top of its hype cycle monitor, the “peak of inflated expectations”. The “trough of disillusionment” awaits. This is when early implementations fail to deliver and investment dries up. Comical but memorable examples are appearing. DPD’s new AI chatbot started swearing at a customer before acting the part of a highly disgruntled employee, saying, “DPD is the worst delivery firm in the world. They are slow, unreliable and their customer service is terrible. I would never recommend them to anyone.”     

Teething problems are inevitable and proponents argue later iterations will turn up more loyal customer service reps. But it speaks to a sense that these things take longer to bed in than we initially think. The Economist compares this to the tractor. Tractors promised an agricultural revolution in 1900 but by 1920 just 4 percent of farms had one and only 23 percent by 1940. Eventually tractors took over but the change was far more incremental than early evangelisers expected. 


And archaic practices persist. Tom Goodwin offers the signature as an example. "Random squiggles" still prove our identity on a DocuSign, even though passwords, touch or face ID are readily available. And that's the cutting edge. I recently had to have documents couriered back to Hong Kong because a wet ink signature was required.


A recent BCG study suggests executives recognise this longer road ahead. Ninety percent are taking a "wait and see" approach, either holding off on generative AI or experimenting with it only in minor ways. Technologist Jeffrey Funk adds that "they are concerned about the large number of hallucinations and weaknesses that generative AI exhibits."


And there is not yet any clear playbook for using the technology in a meaningful way. Early adopters marvelled at ChatGPT’s content creation abilities. Reid Hoffman boasted it was his co-author on “Impromptu”. But my experience is that it reads like a waffling undergraduate trying to hit 2,000 words. Its plodding prose is easy to spot and hardly helps a brand stand out. 

It's why copywriters have survived. After initial scare stories that ChatGPT was putting them out of business, copywriter job postings reportedly rose 29 percent in Q3 last year. Businesses realised they didn't need more content; it's already abundant. What they needed was succinct and insightful writing. That remains a human endeavour.


Funk also references the issue of hallucinations, when generative AI confidently states inaccurate information. The New York Times reports ChatGPT does this 3 percent of the time, while Google's PaLM records a weighty 27 percent. Its defenders will quite justifiably say this is better than your average sales rep. But imagine the legal and reputational consequences of generative AI hallucinating in industries like healthcare or finance. Reporting the wrong figures or erring in a diagnosis would have profound implications.

The MIT economist Daron Acemoglu is sceptical there is any quick fix to this problem. Developers are looking to supervised learning, where models are taught to stay away from questionable sources or statements. But Acemoglu argues that the inherent architecture of these models is based on predicting the next words in a sequence, making it "exceedingly difficult to have these predictions anchored to known truths." Acemoglu finishes damningly:

“Some people will start recognising that it was always a pipe dream to reach anything resembling human cognition on the basis of predicting words.” 


Meta’s Yann LeCun agrees. He doubts whether LLMs are a momentous breakthrough in the quest for human-level AI: 

“They are only trained on language and most of human knowledge has nothing to do with language.”  

In other words, only a small portion of human knowledge can ever be captured by LLMs. They are fundamentally limited.

While criticising LeCun's laissez-faire attitude to the more tangible dangers of generative AI (misinformation, security bugs, tech monopolies), NYU professor Gary Marcus concurs. He believes LLMs are a distraction:

“We are wasting funding and bright young minds on an approach that probably isn’t the right path.” 


This is not to underestimate the huge role AI will play over the next 50 years. Eight of the top 10 S&P stocks by index weight (including Tesla) are tech companies, and they are all engaged in something akin to an AI arms race. But we should appreciate AI's influence without exaggerating its enormity.

Apple is the second-best-performing stock of the last 30 years. It has changed our lives in practical and very visible ways, but not unrecognisably so. The increasingly abandoned WFH experiment shows technology has not freed us of practical considerations. Nvidia may be the best-performing stock of the next 30 years, but its influence will be similarly incremental.

As that realisation dawns, expect a pause in the AI hype.    
