Ok fine, let’s talk about AI…



Julie Souza

I feel cynical about our relationship with technology. Perhaps the dystopian media I grew up with planted the seeds, or perhaps the negative effects of having the smartest computer ever created glued to everyone’s hand have encouraged this cynicism. I am fed constant information, sometimes contradictory, about how technology impacts not only humans but the environment we live in. If I have learned anything about cynicism, it’s that it’s normal for it to arise, and navigating it requires open-mindedness and shame-free education. So, let me share what I learned, and I hope we can keep an open mind together…


AI, or artificial intelligence, refers to “computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems” (Coursera, 2024). It’s a relatively broad term that encompasses a lot, but it mostly includes machine learning, deep learning, and natural language processing (NLP).


Machine learning, which is probably the most commonly used form of AI, is an algorithm trained on a particular data set to build models for performing certain tasks. In other words, anytime you listen to music and get a song recommendation, or shop online and are shown other products you may like, that’s machine learning at work. Deep learning is a branch of machine learning that attempts (key word: attempts) to mimic human neural processes by layering the data processing. In plainer terms, a deep learning model repeats a task over many training passes, adjusting itself until its output is accurate, so when you use ChatGPT it’s relying on deep learning to give you more accurate answers. Let’s keep in mind, though, that this technology cannot replace all human intelligence, and I will touch more on this later. Finally, NLP does exactly what the term says: it’s a form of AI that attempts to understand human speech and writing. This is what search engine summaries use, for example.
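To make the song-recommendation example a little more concrete, here is a toy sketch of one common approach: represent each song as a list of numeric features and recommend whatever is most similar to what you already listen to. Everything here (the feature names, the numbers, the song labels) is made up purely for illustration; real recommendation systems are far more elaborate.

```python
import math

def cosine_similarity(a, b):
    """Score how similar two feature vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Each song is a vector of hypothetical features: [tempo, energy, acousticness].
library = {
    "song_a": [0.9, 0.8, 0.1],
    "song_b": [0.2, 0.3, 0.9],
    "song_c": [0.85, 0.75, 0.2],
}

# A rough profile of what you've been playing lately.
listened = [0.9, 0.8, 0.15]

# "Recommend" the library song closest to your listening profile.
best = max(library, key=lambda s: cosine_similarity(listened, library[s]))
print(best)  # song_a
```

The point is just that there’s no magic: the model has learned (or been handed) a numeric description of items, and “you may also like…” is a similarity calculation over that data.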


AI has a long history that I won’t go into here, but it’s clear this form of technology has advanced to great heights. It’s not quite the dystopian Terminator reality we see in so many films, since AI does not have the capability to recognize other entities in the world, nor does it have any self-awareness. An important note: this technology pulls from the data we give it. It does not necessarily generate anything “new,” but this seems to be getting lost in many debates about AI.


AI has to be trained on a set of data to become accurate, and in the case of image generators, that data is the work of real artists. There have been numerous lawsuits attempting to hold tech companies accountable, with some headway in the form of transparency requirements in EU law, but there are still unknown consequences. For example, the popular U.S. arts and crafts store Michaels was recently under fire for selling AI-generated art. Which ultimately raises the question: when companies use under-regulated technologies that steal art to minimize labor costs, and then try to sell that art right back to you, can we imagine technology outside the lens of capitalism?

I firmly believe that to achieve the sustainable futures that we hope for requires that we envision them together. To envision them we need to acknowledge the problems in our present and past that prevent us from getting there. Something I want to bring up at this point is an idea that many Silicon Valley types are enamored with, Techno-utopianism.

Techno-utopianism—the idea that technology, and technology alone, will create a more egalitarian, democratic society—has been around since we have had tools to make labor easier. But this utopia, like all utopias, doesn’t really exist… Technology can never occupy a space outside of capitalism. With rare exceptions, every application, company, or innovation will have a funding source, a board, or a bottom line; and in all cases the logic of capitalism will eventually supersede and control technical tools. What we identify as “technology” is just capitalism, but faster and worse. (Pepi, 2025, pp. 3, 7)

Mike Pepi’s Against Platforms: Surviving Digital Utopia explains where the digital world we have has gone wrong and how we could get it right. Pepi argues that technology as we know it today is just capitalism. Capitalism, especially late-stage capitalism, has exacerbated our systemic issues. This is why we have billionaires dismantling our democracy, but before I get too cynical, let’s bring it back a bit. Prejudices stem from biases that create our systemic issues. We can’t eliminate biases, but we can interrogate them and our prejudices.

Technology is biased and perpetuates the same prejudices as our social world because it’s being fed data from us, biased and prejudiced humans. Safiya Noble, internet studies scholar and professor of gender studies and African American studies at UCLA, shares her perspective in an NPR article regarding how AI is used to determine criminal sentencing:

Black people who were charged with crimes were more than four times [as] likely to be sentenced to very severe punishment… What is used to determine these kinds of predictive AIs are things like histories of arrests in a certain zip code. So if you live in a zip code that has been overpoliced historically, you are going to have overarresting… That has nothing to do with you. That has to do with the history of structural racism in policing in the United States (Scott, Chang, Woods, and Noble, 2023).

Activists and civil rights advocates call for a human framework in evaluating this technology. Evaluating and filtering this technology is key, since there are concerns about AI models collapsing when they are trained on AI-generated content, otherwise known as AI cannibalism. Research is ongoing, but these uncertainties are fear-inducing to me when there is limited regulation on the technology’s current implementation. And not everyone shares my hesitancy about this tech, because 86% of students are already using AI in their studies. But when there are concerns about AI’s accuracy, lack of source-citing, and widespread usage, what does this mean for the future of education?

Approximately 40% of students across the nation cannot read at a basic level. Almost 70% of low-income fourth grade students cannot read at a basic level (National Literacy Institute, 2022–2023).

When you have a bunch of kids who can’t read and generations of people with poor access to education (did I mention literacy rates have been down for decades?), solutions like AI come along and sell you convenience when you are already navigating a world that overworks you to survive. You don’t have the time or the learned skills to imagine a different life, so why bother when AI can do your homework for you, even bypassing the AI detectors? This phenomenon is best described as anti-intellectualism, “a social attitude that systematically undermines science-based facts, academic and institutional authorities, and the pursuit of theory and knowledge” (Huang, Dorsey, Mosteller, Chen, 2021). (FYI, I highly recommend this read to better understand anti-intellectualism.) And the truth is, anti-intellectualism is super intentional. Corporations benefit, since less questioning from the public means more lobbying, fewer restrictions, and more money. Fascist governments benefit, since less education makes populations easier to control and exploit. And when you’re tired because work is stressful, everything is expensive, and the world is on fire, the appeal of anti-intellectualism starts to make sense. Regulation for AI won’t catch up without someone making money off of us first, so I’m not holding my breath, but what do we do now?

AI is not a problem; it’s a symptom. Kids are using the tools available to them (especially when books continue to be banned) to pass through the school year because the education system is unjust. The same injustices we see in our criminal justice system that over-arrests Black and brown Americans. The same injustices that inform the data we feed AI. To teach kids not to rely on AI for fundamental learning, to abolish the school-to-prison pipeline, and to avoid the pitfalls of techno-utopianism is to (drumroll please) leave capitalism in our past!

Easier said than done, but the reason I harp on this same chord over and over again is that capitalism urges us to move at the speed of light, and generative AI is, just as Mike Pepi put it, capitalism in a trench coat. To slow down, and I mean really slow down, means to notice the little ways capitalism has disguised itself in our lives. When was the last time you typed an email without using predictive text? Or bought something online not because you saw an ad, but because you asked a stranger where they got their shirt? Or only engaged on social media with the people you follow instead of constantly being shown content because your algorithm “curated” it for you? I am not saying throw your phone and computer away (although, with the way I feel, I very well may do just that soon), but there are forms of resistance that just involve slowing down and sacrificing productivity for the sake of your sanity.

@commonsearth and @lawfullybenjamin

And slowing down sometimes means acknowledging the privilege to do so, and equipping myself with the knowledge of what AI’s impact really is. “Data centres accounted for around 1.5% of the world’s electricity consumption in 2024, or 415 terawatt-hours (TWh)… [and] account for around one-tenth of global electricity demand growth to 2030” (IEA, 2025). The way we engage with our technology is a product of our over-consumption. Even when these data centers run on renewable energy, we need efficient cooling systems to keep them running at the speed we demand. Those cooling systems use a lot of water, and that water can’t necessarily be reused. So, to actively avoid using AI is to lower the demand on these data centers and minimize water waste.

And yet I don’t vilify folks who do want to embrace this technology. I think it’s fair for us to envision a future where technology exists outside of capitalism and doesn’t hurt our planet (big shout-out to Burntout from Humans for their work). I also think it’s fair to critique generative AI and choose to avoid it, especially as a means to protest the harmful effects it currently has, or just to reject the ways capitalism hurts our brains. I personally fall into the category of trying to avoid reliance on AI as part of a larger effort to resist capitalism. It’s not perfect, there is no clear path, and I do mess up. I use predictive text. I read AI summaries. I like my music recommendations. And I toggle do-not-disturb so I’m not available 24/7 via email, scroll past AI summaries to train my brain to actually look for information, and put on a record instead of playing music from my phone. I do both as an imperfect human because I can’t exist outside of this capitalist system, but I can try these ways of slowing down that help me envision sustainable pathways forward.
