AGAINST SUMMARY
opinion
by Barbora Horská (Curator/Editor-in-Chief of Improper Dose)
AND SO THE RANT BEGINS
There are countless issues with generative AI, from its development to its usage, be they ethical, environmental, or cognitive. The scope of this article can’t cover them all, nor am I equipped to comment on all of them with any relevance. What I can talk about, though, speaking from within the field of publishing and communication, is the rapid decline in the quality of the texts being produced and the wholesale mental shift in what is considered good or interesting writing. At least from the ‘leadership’ perspective, anyway. Speaking of those ‘in charge,’ have you noticed how OpenAI’s ChatGPT boom not only enabled even more incompetent people than before to rise into positions they would never have reached if they had to rely on their own knowledge, expertise, and effort, but also created a more toxic environment, where the speed of artificially generated results sets the precedent for an ‘appropriate’ working tempo? I mean, using your own brain? You’re WAY behind schedule.
Now, I know this is not as important or urgent as the environmental impact of every prompt sent to an LLM,[1] or the health risks to the communities living near the massive AI data centers it requires to function, its use to further sexploit and dehumanize women and children, the way it’s worsening the mental health crisis across genders and age groups, the data protection and copyright issues, or how its development and continuation is fully dependent on colonialism.[2] Full stop, no arguments there. However, we now have preliminary data to support the hypothesis that relying on content-generating bots for thinking and creating lowers the brain’s ability to engage in those activities itself.[3] Like a muscle that needs to be worked out regularly so it doesn’t lose volume and tone, our cognitive abilities need to be regularly exercised to keep functioning.
Once again, we’ve been given a tool that doesn’t take over the mundane labor so that people can be freed from work and instead invest their effort and time into whatever brings them the most joy. Instead, the tool is used by those with capital to discard workers with more ease—made possible, of course, only by first training the generative models on those workers’ stolen work.
IF THIS WERE AI-GENERATED TEXT, HERE WOULD BE AN EXPLANATORY SUBTITLE
As an artist originally trained in transmedia, I was quite a fan of generative art. Or at least of what generative art meant historically, until about 10–15 years ago. I used to love experimenting with multimedia data corruption, learning to code automated data visualization in Processing,[4] or building upon randomized results through computer interaction. In a nutshell, generative art used to be fun. Why? Because it contained an element of surprise, of imperfection, an accident that could inspire more creativity; it fueled the process instead of eliminating it. And what do we have now? An uncanny-valley-meets-kitsch-on-steroids aesthetic, patched together from the stolen work of original creators—without consent or remuneration—and the haunting doubt that the cute animal video that interrupted your day with a precious spark of joy was, in fact, not real. Though I will admit, if we absolutely must use GenAI for anything, anthropomorphized cats are not the worst thing I can think of.[5]
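To make the contrast concrete for anyone who never touched it, here is roughly the kind of toy Processing sketch I’m nostalgic about (a minimal illustration of my own, purely hypothetical, not any historic piece): every run scatters circles with random positions, sizes, and colours, so no two results are ever identical, and the surprise is the whole point.

// A minimal, illustrative Processing sketch: each frame drops one translucent
// circle at a random position with a random size and colour, so every run
// produces a slightly different image.
void setup() {
  size(400, 400);
  background(255);
  noStroke();
}

void draw() {
  fill(random(255), random(255), random(255), 40); // random colour, low opacity
  float d = random(5, 60);                          // random diameter
  ellipse(random(width), random(height), d, d);     // random position
}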
So often, any criticism of LLMs’ popular use immediately triggers someone to bring up other technological tools and their impact on society, like the invention of the telephone, the camera, or, more recently, software like Photoshop. But the comparison doesn’t stand, because while all of these had a significant impact on societal development, none of them were built to think for us or to make decisions for us. The camera didn’t replace painting as a medium, and while Photoshop helped uphold unrealistic beauty standards and sprinkled doubt into media authenticity, it did so on nowhere near the scale of the currently available AI deepfake apps. Even with malicious intent, you at least had to learn something complex first in order to use it. If you still want to compare it with a tech feature, try social media, which went from a safe meeting space for socially awkward people to an endless stream of addictive brain rot and personalized advertisements. The comparison with the phone is the one I find the most bizarre, though, because it somehow implies that GenAI is here to connect us, rather than to replace us, both as a workforce and in real human relationships, which require effort and a willingness to stay with inner discomfort that many people are simply not willing to undergo. The correlation between women globally rejecting unfulfilling or straight-up harmful relationships with men and the rising accessibility of all-affirming artificial companions is one that shouldn’t be overlooked.
But why am I telling you all this in what is supposed to be a piece about writing? Because it’s important to keep reminding ourselves that generative AI is not just another tech toy, and that whatever ‘advantages’ it possibly offers are not worth the human and environmental costs of its existence and further development.
Now that we’ve got that out of the way, let’s talk about this ‘magical’ writing we’ve been promised by Altman et al. Except that, regardless of the marketing teams’ relentless work to convince us their chatbots are supreme beings, hijacking one of my favourite ✨emojis✨ in their latest attempt, the fact remains that by design, all of these models will only ever generate the most average and agreeable results based on their training data and your user history, guessing the most probable version of whatever will make you want to keep using them. I understand how this might sound like the bitter cry of someone whose job is among the first on the line, but perhaps exactly because of that, I’m aware of the technology’s actual abilities, as I deal with its output on a daily basis.[6]
“REMEMBER, THE FIREMEN ARE RARELY NECESSARY”
During the process of writing this article, I came across Andrew Berardini’s “Let’s Talk About Artificial Intelligence Art English” piece in Mousse Magazine.[7] I will say, with gratitude, that it fully satisfied my need to see my frustration with GenAI’s abhorrent syntax and vomit-inducing vocabulary, which forces every sentence into an outrageously empty, illogical, wannabe-edgy statement, reflected back by the world. Berardini reminds us that a major reason generative AI sounds so hollow is that, long before AI chatbots, artspeak wasn’t too keen on substance to begin with. Departing from his analysis, I ask myself whether people would be so eager to accept shortened and simplified writing, first produced for social media to accommodate technical limitations and, in time, reduced attention spans, and now supercharged by generative AI, if the baseline of academic and institutional communication weren’t so elitist. We could debate the nuances of accessible language vs. language as an art form for a very long time, likely without consensus or a definitive answer on which one has the higher ground. I would argue both are equally relevant, perhaps just not at the same time and in every situation. What enrages me, however, is when GenAI is framed as an accessibility tool, especially in education, to accommodate children with learning disorders or neurodivergence. Tell me again how a biased machine, designed to make things up when unsure, a thief of cognitive abilities, will help struggling kids learn better? It won’t. But what it will do is make them less of a disruption for teachers and parents—a strategy fully in line with how we diagnose these conditions in the first place.[8] I understand it is difficult, if not impossible, to properly support each child’s unique needs, but pretending that an artificially generated summary of learning materials can change that is a major step back for disability justice within and beyond the school environment.
Undeniably, it is easy to fall into the time-saving trap of the now omnipresent summarizing feature. The problem is, all these pseudo-accommodations only make us complacent and reinforce the stigma around diverse needs for learning and information processing. Who wants to go through the discomfort of voicing their needs, advocating for more inclusive education and workplace settings, and pushing for systemic changes in working culture, when faking the ability to participate in the exploitative and ableist version of reality has never been easier? Instead of real changes, we simply take another shortcut to survive the system. In times when most of us sacrifice our health and the quality time spent with loved ones, or joyfully by ourselves, just to financially cover the necessities, the allure of ‘saving time’ at work can feel irresistible. But here comes the catch again. Even if we are willing to pay the price of cognitive decline, the promised extra time to spend as we choose doesn’t come, because the AI assistant is not designed to save our time. It exists to, well, steal our data for further training and surveillance, but also to push us into more production and profit. For our boss, and our boss’s boss, and their boss’s boss, all the way to the tech-bro hordes from hell.
Beyond the linguistic issues, the cognitive debt, and the worsening productivity obsession, AI speak is already measurably affecting the way we ourselves talk. Through repeated exposure, which will only grow as the models keep learning from an internet that is simultaneously being flooded with AI slop, researchers have observed a plausible pathway for AI linguistic patterns to seep into the human language system, even when a word or expression doesn’t reflect the speaker’s own preference. Two main issues arise with this phenomenon: the indeterminacy of authorship, and the effect a misaligned or malicious model can, over time, have on users’ political and social beliefs.[9] Because that is exactly the power of writing: its effect on our beliefs, ideas, creativity, openness. We get to know the world and ourselves through it, through the work of humans made for other humans out of sheer desire for self-expression and connection. The machine won’t ever build a clever narrative or believable characters whose stories are worth accompanying. Its ‘art’ will never truly move us. Or connect with us. So where are all the readers demanding an end to this literary butchery? All the art lovers, curators, and self-proclaimed radical thinkers? Why do I hear only the authors, editors, and artists complaining? The ability to read and access diverse literature is still a privilege in this world, yet we who have it are giving it up freely and at the speed of light. No governmental oppression necessary. Just as Ray Bradbury predicted, the book-burning “firemen are rarely necessary. The public itself stopped reading of its own accord.”[10]
THE RESISTANCE IS NEVER FUTILE
Even within my own bubble of leftist, or at least left-leaning, folks, fully aware and considerate of the human and environmental costs, the idea persists that there isn’t much we can do about the presence of a harmful tool none of us really asked for. Like we were simply thrown to the far end of the galaxy and assimilated by the Borg. Leaving a critical analysis of Star Trek for another time, I beg you to shut that inner drone voice down! Resistance is never futile, though its shape and timing do matter, so for inspiration, here is a non-exhaustive list of things I try to abide by and recommend, if you want to practice everyday resistance against generative AI:
First of all: Instead of worrying about becoming tech illiterate if you don’t embrace it, learn how it actually functions. How it’s being developed, and what the difference is between generative AI and the other AI types already in use in medicine, climate prediction, and other scientific fields. The field changes constantly, so this really is an everyday practice. And maybe most importantly, learn who the people developing and funding this technology are. The ones on a quest to bring Artificial General Intelligence into the world, regardless of the price, while defining ‘intelligence’ only through an increase in economic productivity.
Next: Resist the convenience. With all the tech that has been available to us until now, speeding processes up, how much time have you actually saved? None, if I were to guess, because productivity—the prerequisite for the efficiency race—is not, as we discussed before, a law of physics. We don’t need a tool to hack it; we need to dismantle the system where our self-worth and survival are tied to it, and decreasing our discomfort tolerance is not going to help us do that. I am not suggesting anyone risk potentially existential threats, like losing a job, by challenging their boss’s AI craze too much if they really can’t afford it, but before we all mindlessly jump on the “just ask the chatbot if you don’t have time” bandwagon, let’s at least check if there is an option to first remind everyone involved that 👏things 👏take 👏 time.👏
Use your brain like your life depends on it, because, well… it kinda does. Duh. Read. Write. Create. Think. Critically AND for fun. Which brings me to:
Resist the simplification. The dumbing down of everything from multi-page PDFs to text messages. No, Slack, I don’t need your new AI assistant to “explain message” to me (picture me as the Captain Picard Facepalm meme, because words just aren’t enough here). This also means complaining. Drop a message or a comment when you see obviously non-human text in any business or institutional communication, because the only way the decision-makers there will listen is if they hear it from their audience.
Resist the AI dictate. Literally. Have you noticed how I keep using em-dashes, the supposed dead giveaway of AI-generated text? It’s because I—the human with complex thoughts—know how to use them. And you probably do, too. It’s not that complicated, but it is impactful. So much so that all the LLMs trained for text generation learned to overuse them, because they were present in the training data in quantities large enough to convince the models that this is an average punctuation mark. As a neurodivergent person incapable of producing a concise thought even if someone held a gun to my head, I am a particular fan of segment isolation. Of course, soon after the initial backlash, most of the models stopped using the dash in their generated answers, and some grammar checkers, like Grammarly, even started to mark its use as an error. However, the connection has been made, and people who didn’t know an em-dash was a thing before 2022 now use it to shame people who can actually utilize it in their own original writing. So I say—Long live the em-dash queen! Because editing myself to NOT sound like an AI bot is almost as bad as embracing the AI speak.
Embrace the human experience. And I don’t mean only the creative one. I’ve noticed how people who feel they can’t express themselves in ways that immediately do justice to their inner worlds, or that meet the conservative standard of how one ‘should’ express oneself, sometimes resort to AI chatbots for help. When we are taught by everyone from parents to teachers and some neoliberal therapists that only full emotional regulation and coherent verbal expression, with clear and concise vocabulary, is worth being taken seriously, reaching for a tool that can compose pretty sentences based on a prompt seems only natural. But as a friend, I would so much more appreciate the honest, emotional, and messy human voice in a conversation where we need to engage in active listening to learn and understand each other, without the pressure of perfection. I wish for freedom from therapy speak, tone policing, and words that can’t be taken back, no matter how distressed or simply neurodivergent you are. We are not machines, and yet even with machines we understand that the fan runs louder or the whole system freezes when put under too much stress, so why insist on suppressing the same mechanisms within ourselves?
And on the other hand, call the machine a machine and call out the humans operating it. Because Grok didn’t strip women and children on the social media platform that can’t be named. The men using it did. Some people enabled it. Many developed it. And a few funded it. Remember that the next time you see a passive headline in the news, because those people, unlike a machine, can be held accountable.
And if you need a simple starting point: Keep learning new things. Active (self-)education increases neuroplasticity, builds resilience, and keeps the mind open to other perspectives. Literally no downsides except the inevitable radicalization against all forms of oppression.
Speaking of which: learn about your rights and your options to participate in the democratization of technological development and implementation. Sign petitions. Protest. Look for ways to reach your representatives at the local and international levels, especially if you are a citizen of an EU country—annoy them until they listen. Let them know you demand legislation that doesn’t prioritize billionaires’ interests. Or, you know, direct action has historically worked pretty effectively, too.
To conclude: besides the obvious capitalist hellscape forcing us every day into more elaborate self-exploitation, as long as education and intellectualism are weaponized to define someone’s worth, generative AI as we know it will always find its customers. That being said, maybe just as relevant as the quest for knowledge is changing our perspective on what we don’t know. But more on that next time.
RANT OVER
Notes
[1] “A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation.” As found on Wikipedia on December 10, 2025. See HERE
[2] The GenAI industry is entirely dependent on the global exploitation of material resources: to keep building more hardware, to house the massive data centers that run your favourite chatbot, and to supply the cheap labour that trains the language models—a job that not only exploits the workers’ economic situation but also leaves them with significant mental health issues due to exposure to the worst and most violent of what humanity has to offer, of course without any support—all to give us, the potentially paying customers in the Global North, a promise of safety. For details see, for example, Karen Hao’s “Empire of AI,” featuring several chapters of her own investigations into the conditions of workers in Venezuela, Kenya, and Chile.
[3] Kosmyna, Nataliya; Hauptmann, Eugene; Yuan, Ye; Situ, Jessica; Liao, Xian-Hao; Beresnitzky, Ashly; Braunstein, Iris; Maes, Pattie (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” See HERE
[4] In their website copywriter’s words: “Processing is a flexible software sketchbook and a language for learning how to code, promoting software literacy within the visual arts and visual literacy within technology.”
[5] But also, Katarina Zimmer explains for Atmos how “AI Generated Animals Are Distancing Us From Nature.” See HERE
[6] And I’m not the only one. Read more about the real effects of AI-generated workslop in Jennifer Liu’s piece for CNBC HERE
[7] Seriously, give it a read! Available HERE
[8] The diagnostic criteria for ADHD and Autism focus almost exclusively on outward expression over internal experience. It’s not about how you feel but how you act, more specifically, how disruptive your behaviour is to the social order and to what extent it aligns with the stereotype of your assigned gender. This approach is directly responsible for generations of un-, mis-, or late-diagnosed women who learn to mask their symptoms in socially acceptable ways but struggle internally throughout their lives.
[9] Learn more about why this is a problem in: Anderson, Bryce; Galpin, Riley; Juzek, Tom S. “Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English.” August 1, 2025. Available HERE
[10] Ray Bradbury’s “Fahrenheit 451” is a critique of censorship and of the intellectual and moral decline accompanying mass media’s takeover of our daily lives. Of everything that has emerged since its release in 1953, generative AI finally seems to be the tool to catapult us into his dystopian future. I only wonder, after coming across some of his remarks from the ’90s on ‘political correctness,’ whether he would be wearing a MAGA hat today or finally pick up a book written by someone other than a white man.
All images are courtesy of the author.
Barbora Horská is an editor and cultural worker based in Vienna. She studied transmedia art with a focus on participation and has been part of the Improper Walls collective as a curator since 2020, and since 2021 also the editor-in-chief of Improper Dose magazine. Her practice centers on drawing attention to socially engaged and ecological issues, with a particular focus on education and mental health.