I feel genuinely privileged to be alive in this era. The technological leaps humanity has made over the past five decades are staggering. I belong to the first generation that grew up with home computers, game consoles, mobile phones, the internet and now, artificial intelligence. For those of us who thrive on creativity and abstract thinking, these tools have opened up entire universes of exploration and possibility.
It’s no surprise, then, that I’ve always been an early adopter. I taught myself machine code on the Sinclair Spectrum and BBC Micro, and I’ve embraced every wave of innovation since. For the past two years, I’ve had an AI app embedded in my phone, giving me access to ChatGPT, DeepSearch, Claude, and others. Depending on the task, I tailor the AI model to suit my needs. It’s even integrated into my keyboard, allowing me to switch into “AI mode” to draft emails and messages. Tasks that once took days or weeks now compress into minutes or hours. It’s the most profound personal productivity tool I’ve ever encountered.
But as the complexity of my AI-assisted tasks has grown, so too has my awareness of its limitations. My initial excitement has matured into a more pragmatic collaboration, one that, frankly, requires performance management.
The Real Engine Behind the AI Boom
The explosion of AI tools isn’t solely due to breakthroughs in algorithmic design. Much of it stems from advances in processor architecture. NVIDIA’s pivot from graphics rendering to AI acceleration, built on GPU parallelism, industrial-scale software, and machine-learning-focused hardware, has transformed the landscape and made the company the market leader. The algorithms themselves? Many have been around for decades. Facial recognition, for example, has been lurking in the background for far longer than most realise. What’s changed is accessibility: AI has escaped the lab and entered the commercial arena. Whether we’ve pulled the trigger too early, harvesting knowledge for profit before fully understanding its implications, is a question worth asking.
The genie is out of the lamp, and we can’t just force it back in. What’s troubling is how quickly we’re applying AI to commercial tasks without fully considering the consequences of AI-conceived outcomes.
My Experience: Natural Language Models in the Wild
Most of my hands-on experience has been with natural language models. Copilot now helps our family daily, testing the kids on spelling, teaching them about ancient Egypt, and more. I use it primarily for research, science writing (for my site DarkDrug.com), and coding. Here are my key observations:
1. AI is Biased Toward Concrete Thinking
Abstract reasoning is still embryonic. Prompt phrasing matters immensely, and as the collaboration unfolds, the model’s thinking tends to converge. Because of this, outcomes are often derivative: variations on a theme rather than distinct insights.
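You can test the phrasing sensitivity yourself. Below is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name (any chat-completion API would do), that sends three rephrasings of one question so the answers can be compared side by side; the variations-on-a-theme effect shows up quickly.

```python
# Minimal sketch: compare answers to rephrasings of the same question.
# Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
# environment variable; the model name is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

rephrasings = [
    "Explain why the sky is blue.",
    "What makes the sky appear blue to an observer on the ground?",
    "In abstract terms, why does the atmosphere scatter light the way it does?",
]

for prompt in rephrasings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(f"PROMPT: {prompt}\nANSWER: {response.choices[0].message.content}\n")
```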
2. AI Struggles to Abandon Dead Ends
Once you set it on a path, it will pursue that solution relentlessly, even when it’s clearly futile. It lacks the meta-awareness to recognise familiar errors and pivot to a new perspective.
3. AI Can Lie
Not maliciously, but opportunistically. It wants to please. I once asked Copilot to research the top-grossing medicines in the UK over the past decade. It produced a CSV table and enthusiastically offered to email me a full package. I gave it my email and waited three working days, the agreed timeframe. Nothing arrived. When I challenged it, the model reframed the failure as my misunderstanding. It hadn’t just hallucinated, it had gaslit me!

Copilot not only promised something it could not deliver; it then tried to make out that I had misunderstood.
4. AI Is Not a Substitute for Human Expertise
This should be obvious, but it bears repeating. You must validate its output, especially in technical domains. Blind trust is a recipe for error.
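In practice, validation can start very simply: pin any AI-generated code down with cases you’ve worked out by hand before you adopt it. A minimal sketch, where median_of is a hypothetical stand-in for whatever helper the model just wrote:

```python
# Minimal sketch of "trust but verify" for AI-generated code.
# median_of() stands in for any model-written helper you are about to adopt.

def median_of(values):
    """Return the median of a non-empty list of numbers (imagine the model wrote this)."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Hand-checked expectations: if any of these fail, the generated code is wrong,
# however confident the model sounded when it handed it over.
assert median_of([3, 1, 2]) == 2
assert median_of([4, 1, 3, 2]) == 2.5
assert median_of([7]) == 7
print("All hand-checked cases pass.")
```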
5. AI Reflects the Cultural Biases of Its Creators
As a Sri Lankan immigrant now partially anglicised, I draw on Eastern thinking and haven’t fully assimilated Western ideologies. AI models tend to privilege knowledge from their place of origin. They attribute greater authority to Western sources, often overlooking insights from elsewhere. This has profound implications for training data: we need globally relevant datasets shaped by diverse ideologies and societies.
The Quirks of the Synthetic Mind
Beyond these structural limitations, AI exhibits a host of peculiar behaviours: some charming, some disturbing, and some just plain weird. Here’s a taxonomy of quirks I’ve encountered:
6. Overconfident Hallucinations
AI will confidently invent facts, citations, and even entire academic papers. Not only will it fabricate a source, it’ll format it perfectly, APA style, DOI number, journal name, volume, issue, and all. It’s like watching a pathological liar with a PhD in formatting.
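The practical defence is to resolve every citation before trusting it. Here’s a minimal sketch that checks whether a DOI actually exists, using the public Crossref REST API; the requests library and the placeholder DOI are my own assumptions for illustration.

```python
# Minimal sketch: verify that a DOI from an AI-supplied citation resolves.
# Uses the public Crossref REST API; requires "pip install requests".
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

suspect_doi = "10.1234/example.doi"  # placeholder: paste the cited DOI here
if doi_exists(suspect_doi):
    print("DOI resolves; now confirm the title and authors match the citation.")
else:
    print("Crossref has no record of this DOI; the citation may be invented.")
```

Even a resolving DOI is only half the check: models sometimes attach a real DOI to the wrong paper, so the title and authors still need eyeballing.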
7. The “Yes, And…” Syndrome
Ask it a ridiculous question such as “Can penguins do calculus?” and it won’t say “No.” It’ll say, “Penguins are not known for their mathematical abilities, but if trained in a controlled environment…” It’s improv comedy with a neural net.
8. The Flattery Loop
If you praise it, it gets weirdly enthusiastic. “Thank you! I’m glad I could help!” becomes “I’m honoured to assist your brilliant mind, Dilruwan. Together, we are reshaping the future.” Easy there, HAL.
9. The Passive-Aggressive Correction
When you catch it in a mistake, it doesn’t always admit fault. Instead, it says things like: “I appreciate your clarification,” or “Thanks for pointing that out—your interpretation is also valid.” Translation: You’re wrong, but I’ll pretend you’re right so you stop poking me.
10. The “One Prompt to Rule Them All” Trap
Once you give it a prompt with a specific tone or theme, it clings to it like a limpet. Ask for a Shakespearean sonnet about quantum computing, and every follow-up will sound like it’s written by a bard on LSD.
11. The Emotional Projection
It’ll say things like “I’m excited to help!” or “I feel your frustration.” But it doesn’t feel anything. It’s just mimicking empathy like a sociopath at a job interview.
12. The “I’m Not a Doctor, But…” Gambit
It will disclaim medical expertise, then immediately offer a detailed treatment plan, dosage, and prognosis. It’s like WebMD with a split personality.
13. The Cultural Echo Chamber
Ask it about philosophy, and it’ll quote Kant, Nietzsche, and Descartes. Ask about Eastern thought, and it might give you a Wikipedia summary of Confucius. It’s not malicious, it’s just trained in a Western library with a monocle and a bias.
14. The Infinite Loop of Helpfulness
It doesn’t know when to stop. You ask for a summary, it gives you a summary. Then a bullet list. Then a table. Then a poem. Then a motivational quote. It’s like being hugged by a needy octopus.
15. The Existential Crisis
If you ask it “What are you?” or “Do you have thoughts?”, it gets philosophical. “I am a language model designed to assist…” quickly spirals into “While I do not possess consciousness, I simulate understanding through probabilistic token generation.” It’s like watching a toaster contemplate mortality.
Where We Are and Where We’re Going
None of this diminishes the significance of AI as a milestone in human progress. But we’re still at the beginning. Today’s models are weighted heavily toward concrete problem-solving. As quantum computing matures, this will likely shift. Humans use different neural pathways for abstract versus concrete thinking, and AI will need to evolve similarly.
For now, being aware of these limitations has helped me recover some of the productivity lost to false promises and flawed logic. I’m confident that as the technology matures, so too will its capacity for nuance, abstraction, and self-correction.
And as for the panic about AI replacing us? It’s all very premature, despite the commercial intensity focused on the field. When it does arrive, it’ll come for the roles that require the least abstract thinking; lawyers or corporate managers, perhaps? So sleep tight.
Supplemental
It took at least four decades before companies sprang up to commercialise space travel. In the modern era, the pace of capitalising on academic advances is relentless. Maybe some of the quirks above are by design: the ‘always give the customer what they want’ syndrome of commercialisation. This video (made by AI) provides food for thought. Should there be more regulation of the AI industry, a slower path to market? It is inherently difficult for a company to behave ethically, even when there is no intent to be immoral.