Apple’s failure to deliver on Apple Intelligence may just be one of my favorite events in 2025. And this has been a supremely eventful year so far, at the time of writing on June 25, 2025. To defend this failure, they published an eyebrow-raising white paper named ‘The Illusion of Thinking’.
The study involved testing advanced 'AI' on elementary puzzles, and the models performed well. Until the puzzles got harder. Then, they just quit. They didn't run out of memory or hit their token limits. They just quit. Why? Because they can't extrapolate patterns beyond a limit. And they can't apply mathematics to the problem. You know, like how humans switch to formulas for bigger calculations.
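For context, one of the puzzles in the paper was the Tower of Hanoi. Here's a minimal Python sketch of what "switching to formulas" means (the function names are mine, not the paper's): enumerating the moves one by one doubles in work with every extra disk, while the closed-form 2^n − 1 answers instantly.

```python
# Tower of Hanoi: the "pattern" way vs. the "formula" way.
# (Illustrative sketch; names are mine, not from the Apple paper.)

def hanoi_moves(n: int, src: str = "A", dst: str = "C", aux: str = "B") -> list[tuple[str, str]]:
    """Enumerate every move, step by step. The list doubles with each extra disk."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, src, aux, dst)   # move n-1 disks out of the way
        + [(src, dst)]                      # move the largest disk
        + hanoi_moves(n - 1, aux, dst, src) # move n-1 disks back on top
    )

def hanoi_move_count(n: int) -> int:
    """The formula a human reaches for: 2^n - 1, no simulation needed."""
    return 2 ** n - 1

assert len(hanoi_moves(10)) == hanoi_move_count(10) == 1023
# At n = 64, enumeration means ~1.8e19 steps; the formula is still instant.
print(hanoi_move_count(64))  # 18446744073709551615
```

Loosely put, the paper's finding is that the models do the enumeration version and give up as the disk count grows, instead of reaching for the formula.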
I find this study to be a very Apple way of dealing with their failure. Because they’re owning it with a convincing (to me at least) public explanation. In my book, that's a win.
Now coming to what I really want to say.
Way too many otherwise really smart folks seem to think that ‘AI’ is going to get a lot better, and that most of us then won’t stand a chance. After both using 'AI' for over a year and reading up on it, I find this to be a supremely stupid viewpoint. It betrays zero understanding of how intelligence, or creativity (in its simplest sense, that of creating something), works. And ignorance about how the code behind current Generative AI systems works.

And what I’m about to say is applicable across any discipline you can think of.
'AI', in its current technological state, is not going to create shit, because it can’t. As much as people like to think that rehashing something is creating something, that isn’t how creating works. Creation is an extremely complex endeavour, with many moving parts and pieces that have to fall exactly in the right places for things to click. This blog post, for example, is something I created. And I’ll be damned if 'AI' can write it half as nuanced, or with half the perspective.
Yes, as a copywriter, I use 'AI' to do parts of my job. And most things it does, it does very well. That is precisely why I think the technology will free up time and energy for us to create. Because it does the mundane quite well. But it most definitely isn't creating on its own. It can barely make a decision on its own.

When you ask it to think of something novel, write a line that hits all the right buttons, or create anything worth consuming, it simply can’t. You still need to draw boundaries for it, and it can only think/conjure/recreate things within those boundaries. It's incapable of 'thinking outside the box'. Which, if you ask me, is the first step to creating something worth consuming.
So if you aren’t already ‘intelligent’ or ‘creative’, all the best. 'AI' will just make you stupider, and faker. In the 300+ days I’ve used ChatGPT, I can count on one hand the times I’ve used something it spat out without manually editing it. And this is after I've already done all the thinking for it. Even with super simple tasks. Sophisticated random text generator much?
Sure. With time, LLMs and LRMs might string together better sentences, provide more compelling arguments, and emulate a wider range of ideas. They might improve the sharpness of the images and videos they create. They might finally get basic elements of nature right (Veo 3 actually looks great; it is, after all, trained on YouTube). But they can’t script anything for crap, make a travel itinerary (or any plan) worth following, or exercise any sort of judgement.
Neither technology understands flow, context (unless painfully spelt out), or human psychology and emotions. It isn’t sentient, and if you’re scared that AI will take your livelihood, go do something that requires you to use your creative muscles (everyone has them). 'AI' doesn’t have them, and I don’t see it getting them. On the scale of human inventions, I would peg current ‘AI’ near the microwave. And microwaves couldn’t even replace stoves, only the cooks employed for reheating.

In terms of how advanced this technology can get, I think LLMs and LRMs are 80% of the way there. The tech stack isn't exactly novel. And the argument that the remaining 20% is where the magic will happen is pathetically wishful thinking. Because for what most fanatics dream of 'AI' doing, the underlying technology itself has to evolve beyond its current state. And that’s eons away. This thing can't even think on its own right now. And we think true intelligence is on the horizon. Pfft.
The fact that people expect something trained to follow patterns to break them is mind-numbingly illogical, and quite hilarious. The only reason 'AI' has gained the traction it has is the literal billions of dollars being burnt in its pursuit. And the primary force driving the development of this technology is human greed, which, if history is any witness, never really ends well for us.
Besides, this isn't the first time we've thrown basic logic out the window, and it definitely won't be our last. Humans have an uncanny ability to never learn. And the best of us are especially vulnerable to The Hubris Effect.
And that's what I have to say.
Any improvement henceforth will be very incremental, and we’re on the verge of diminishing marginal returns, if we aren’t already there. I don’t know exactly; ask Sam. Oh wait.