Further to my thoughts on the dangers of generative AI not just to our livelihoods but to how we relate to Nature in its broadest sense, I also want to flag up a smaller issue I have with this new technology. The biggest impact of AI in my working life so far has been improved captioning, and spelling and grammar checks. They’re still quite* inaccurate, but as someone who learned to do a lot of things the long way because there were no other options (or no other affordable options), I appreciate how hugely time-saving they are. I appreciate how much they’re improving. The inaccuracy matters, though.
Most generative AI suggestions are great reminders that fundamentally, what this tech is doing is taking an input, comparing it to other similar things, and suggesting an option that’s closer to commonly used patterns. You end up with a grammatically correct piece of text that, at best, has had its individuality sanded away and, at worst, says something that flatly contradicts what you’re trying to say. In the case of Grammarly, for example, it offers to ‘improve’ your text, but really, it’s a kind of normalisation.
For transparency, I should say that I use Grammarly regularly, and its spelling and grammar checks are the best I’ve found, though I reckon I only accept 75-80% of the suggestions, and I’ve never been happy with the generative AI suggestions.
It’s the latter I want to highlight. I tried re-writing a paragraph from my previous post with Grammarly’s generative AI. Here’s the original:
Our overall direction of travel in this new world, with AI mediating so many of our connections to external reality, looks more and more like an accelerating slide towards a Matrix-like disembodied dystopia that swallows all aspects of our lives, distracting us with hyper-Debordian spectacle until there’s nothing of us or the rest of the world left to extract.
Here’s the ‘persuasive’ version:
Our trajectory in this emerging world, with AI transforming our connections to reality, increasingly resembles an accelerating plunge into a Matrix-like dystopia. This disconnected existence engulfs every facet of our lives, ensnaring us with overwhelming spectacle until we find ourselves and the world around us reduced to mere shadows, stripped of substance and meaning.
I don’t hate most of this, though splitting that long sentence into two feels very vanilla and loses the slippery-slope feel I hoped to convey. Also, I’m not suggesting we become shadows, but rather that our experience becomes disconnected from our bodies and the rest of the physical world. But the most interesting change is that the concept of extraction is gone, effectively depoliticising the point I’m trying to make. I don’t think this is deliberate; it’s just a machine doing its best to sound normal.
How about asking it to be ‘inspirational’? This gets even more interesting:
Our journey in this new world—where AI reshapes our connections to reality—offers both challenges and opportunities. While it may seem like an accelerating slide towards a Matrix-like dystopia that seeks to consume our lives, we have the power to reclaim our narrative and connect meaningfully. By staying present and aware, we can rise above the distractions and create a vibrant reality that honors both ourselves and the world around us.
Well. This not only deletes the idea of extractivism and undermines the whole argument with that tricksy word ‘seem’, but it also places the agency (and therefore future blame) firmly with us. As it happens, I think we do have the agency to free ourselves, but that’s not what I was saying in that particular text. And where did it get those ‘opportunities’ from? The Ladybird Book of Platitudes?
Someone less confident in their command of language and communication skills may well accept those changes, believing the AI to know better than them, thus unwittingly (in this case, at least) reducing righteous wrath to some kind of self-help, positivity vibe.
More generally, it pushes ideas back into the Overton window, creating a kind of monoculture with less space for intellectual diversity and radicalism.
*Grammarly suggested changing ‘quite’ to ‘entirely’, which highlights just how little linguistic nuance AI is able to detect.

Note on the images: both had the same prompt in Stable Diffusion: ‘human evolution, but please make it less Neuromancer and more Children of Compost’. The top one had the added style ‘anime’, while the one directly above, which really has a lot going on, was given the style ‘comic book’.
This is the second of two blog posts on the dangers of AI from an ecological perspective. The first is here.