Artificial Intelligence is likely to end up as the 21st-century equivalent of the South Sea bubble. In the early 1700s, the South Sea Company, initially formed to manage some of Britain's national debt, offered investors a guaranteed 6% return. The company later began lending to investors to buy its own shares, driving up the price until the business collapsed, and everyone who had invested was ruined. AI has also attracted phenomenal amounts of cash. Billions and billions of dollars have been invested in the hope of seeing great returns at some point in the future, which I think will never materialise.
I must confess I am an AI sceptic. I used ChatGPT some time ago and realised it was drawing on The Sun and several other dubious sources to provide the answer I was seeking. That sowed doubt in my mind, and more recently it has become clear that AI sometimes ‘hallucinates’ and simply makes up its answers.
The US National Institutes of Health (NIH) was recently found to have produced a report containing references to scientific papers that didn’t actually exist. AI was blamed. You can find a non-exhaustive list of AI's errors and howlers HERE.
The systems now marketed as AI were originally referred to as Large Language Models (LLMs), which is probably the more accurate description. They are simply vast repositories of data, some of it right, some of it not. But crucially, the models can’t tell the difference.
There have been several cases where false information has been disseminated and, when asked to verify it, AI has used that same erroneous information to construct its answer. It simply reinforces the fake news. I always thought that, to be useful, AI should draw only on peer-reviewed papers, but it uses anything and everything that appears on the web. I really can’t see why you would use AI to do anything important or useful if you have to check the output for errors yourself.
Despite its obvious limitations, there are many disciples, and a lot of people are working on AI systems like ChatGPT, DeepSeek, xAI's Grok, Google Gemini, Microsoft Copilot and so on, attracting a lot of money from investors who don't really understand the technology.
Modern computers work at phenomenal speeds, perhaps around 5 GHz. That’s 5 billion clock cycles per second. To think about this in human terms: at 5 GHz, there are as many clock cycles in one second as there are seconds in roughly 158 years. A million seconds is about 11.5 days. A billion is a little over 31 years.
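To check that arithmetic (the 5 GHz figure is just a rough, illustrative assumption about a modern CPU), here is a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope arithmetic for the clock-speed comparison.
# The 5 GHz figure is illustrative; real CPUs vary.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25        # roughly 31.6 million seconds

clock_hz = 5_000_000_000                        # 5 GHz, i.e. 5 billion cycles per second

print(f"Cycles in one second equal the seconds in {clock_hz / SECONDS_PER_YEAR:.0f} years")  # ~158
print(f"A million seconds is about {1_000_000 / 86_400:.1f} days")                           # ~11.6
print(f"A billion seconds is about {1_000_000_000 / SECONDS_PER_YEAR:.1f} years")            # ~31.7
```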
A computer can do a lot of work in a second, and that’s why computers often appear smart when they’re not. They are just fast. When ChatGPT, or any of its rivals, takes a second or two to produce a response, it has had an awful lot of time to review data and cobble together something that resembles an answer.
The FT has published an extended interview with Emily Bender, an expert in how computers model human language. She studied at Berkeley and Stanford and worked for YY Technologies, a natural language processing company, before becoming a professor of linguistics at the University of Washington. She has now co-written a book, The AI Con. The FT article uses one of her phrases for the title: The Emperor has no clothes.
From the FT:
"Her thesis is that the whizzy chatbots and image-generation tools created by OpenAI and rivals Anthropic, Elon Musk’s xAI, Google and Meta are little more than 'stochastic parrots', a term that she coined in a 2021 paper. A stochastic parrot, she wrote, is a system 'for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning'."
She dismisses the idea that AI models could extend the boundaries of human understanding and cognitive ability and describes the technology as “a fancy wrapper around some spreadsheets”.
AI appears to understand things and to convey meaning, but only in the same way that a tame parrot repeats what it has heard. Does it actually 'understand'? I don't think so, and neither does Ms Bender.
By the way, to save you the trouble, stochastic means “having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely.” AI works by predicting the next word in a sentence, based on the statistical likelihood of certain words occurring alongside others. There is "no magic and no emergent mind", says Bender.
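To make the 'stochastic parrot' idea concrete, here is a minimal toy sketch, a simple bigram model rather than anything resembling a real LLM, which stitches words together purely from observed co-occurrence statistics and with no reference to meaning:

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": record which words follow which in the training
# text, then generate by sampling the next word from those observed counts.
# Real LLMs use enormous neural networks, but the principle is the same:
# predict the next token from the statistics of the training data.

training_text = "the cat sat on the mat the dog sat on the rug"
words = training_text.split()

follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, words))  # fall back to any word if unseen
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug the dog sat": fluent-looking, meaning-free
```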
It seems to me that AI is very good at producing plausible cliches. That, essentially, is what it does, at massive cost in processing power and energy. It might automate some repetitive tasks, but it isn’t going to transform civilisation as some people think.
You might find a test conducted in this piece on the National Centre for AI website of interest. When asked about a topic with an abundance of information on the web, AI summarises it well. But ask it about something obscure, and it struggles. In the test, the author first asks ChatGPT 3 about AI, and the response is good because there's a lot of data out there. But then he asks about himself. The answer again looks pretty impressive, until he explains that none of it is true. ChatGPT simply made it all up. But you and I wouldn't know it.
AI seems to believe it must provide a comprehensive answer even if the facts are all wrong. The highly respected science journal Nature even has an article finding that if you give AI models data generated by other AI models, they quickly begin to "spew nonsense".
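As a caricature of that finding (this is a toy numeric sketch, not a reproduction of the Nature experiment), here is how quickly a simple statistical model degrades when each generation is fitted only to the previous generation's output:

```python
import random
import statistics

# Toy illustration of "model collapse": fit a model to data, generate synthetic
# data from it, refit on that synthetic data, and repeat. Because sampling
# favours typical outputs, the rare "tails" of the original data are lost and
# the model's range of outputs shrinks generation by generation.

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]     # the original "real" data

for generation in range(1, 6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # The next generation sees only what the previous model produced,
    # keeping the 80% most typical samples and discarding the tails.
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    data = sorted(samples, key=lambda x: abs(x - mu))[:800]
    print(f"generation {generation}: spread (std dev) = {statistics.stdev(data):.2f}")
```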
Bear this in mind when reading this tweet by Elon Musk:
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data.
— Elon Musk (@elonmusk) June 21, 2025
Feeding his AI model the output from AI systems is exactly what he intends to do.
Ms Bender says AI can be useful in finding patterns in data that might help to advance medical knowledge or perhaps fight crime. But this is only automating laborious tasks, something we can already do.
Iran
Trump has now effectively put a match to the blue touch paper and entered the Israel/Iran war on the side of the aggressor. The BBC are reporting live on attacks by the US Air Force on three nuclear enrichment sites in Iran, and Trump has given a nationwide TV address about the action, flanked by Rubio, Hegseth and JD Vance; he has also tweeted about it. He described the operation as a “spectacular military success”, although nobody knows whether it has achieved its aims.
The man who came into office on the pledge to bring peace has now bombed a sovereign nation without Congressional approval, and expressly against the UN Charter. As far as I can see, Republicans are keeping very quiet about it.
It is, as the BBC report, a huge gamble that the Tehran regime will capitulate and come to the negotiating table. I daresay that is what will happen - eventually. Most conflicts end that way. Unfortunately, not before an awful lot of blood has been spilt.
As someone has pointed out, nobody attacks North Korea because it has nuclear weapons. A lot of states will take note. And Iran will redouble its efforts.