[Illustration: mountains, cactus, people, and digital details like a cursor]

Digitized Divides

Revealing the trade-offs of a tech-dependent world


Examining just a few of the threads in recent technological developments reveals a tapestry of interwoven troubles. Tools like artificial intelligence systematize pre-existing struggles in society and on the planet. In this multi-part series, you’ll find out what’s really beneath the shiny surface of technology.

Executive Summary

This essay is part of Digitized Divides, a multi-part series about technology and crisis. This executive summary was written by Safa and co-developed through discussions, research, framing, and editing by Safa, Louise Hisayasu, Dominika Knoblochová, Christy Lange, Mo R., Helderyse Rendall, and Marek Tuszynski. Image by Safa and Liz Carrigan, with visual elements from Yiorgos Bagakis and Alessandro Cripsta.
In 2023, the Tactical Tech Studio team set out to create a new intervention for teenagers and educators of young people about technology in a project called the Media Literacy Case for Educators1. This came after months of dedicated research and preparation, including an assessment of educators and young people across several countries2. The project gave the team the chance to conduct research, work with experts3, and build partnerships with community leaders who could conduct co-creation sessions with young people and educators around the world. The project culminated in the 2024 release of “Everywhere, All the Time,” a playful exhibition for teens about technologies such as AI, packaged in a ready-to-go format that educators can blend seamlessly into their youth engagement programs4. During the process of creating these materials, considerations of age-appropriateness, care for readers’ wellbeing, and word limits meant that a lot was left unsaid.
The team had gone down research rabbit holes about AI-fueled discrimination and abuse, inhumane labor conditions, and the environmental impacts of technologies – issues that disproportionately affect people from traditionally marginalized backgrounds, and more specifically people living in the so-called Global South, also known as the ‘Global Majority’. Then, as more documentation surfaced about the extent of Israel’s use of AI-powered weapons against Palestinians in Gaza, many of them funded and backed by US government agencies and US-headquartered tech companies like Google, Amazon, and Microsoft5, part of the GAFAM Empire6, it became apparent that physical, real-world issues are interconnected with virtual technologies, and that these connections are not one-time flukes; rather, they are rooted in long-standing systems of domination and oppression. There was a lot that the team wanted to unpack, shed light on, and tell the world about, so they took to co-creating this series to document their research findings in more detail for adult readers. This series also follows on from Tactical Tech’s outdoor public exhibition in Berlin in 2021, Everything Will Be Fine, which explored how people respond in a crisis, and how technologies have the power to both mitigate and amplify our worst dilemmas.7
This series is very Western-centric, and that’s somewhat inevitable – Big Tech companies are largely based in the West (specifically the US), but their technologies have tangible effects on the rest of the world, namely the Global Majority.
Technology is omnipresent, and over time it will only become more deeply embedded in all aspects of life. Of course technology can help connect us, and can provide pathways to organize civic engagement, publicize events, and stay informed at the touch of a screen. Technology has also allowed medical and scientific advancements to speed ahead at incredible rates. But with these advantages come drawbacks. In nearly the entire life cycle of technology – from resource extraction to managing waste – as well as in the applications of technological tools, human rights abuses and environmental damage have been well documented, a few examples of which are covered in this series.
In the project “100 Pandemic Technologies: Technologies of Hope and Fear”, which served as a time capsule documenting 100 technologies during the COVID-19 pandemic, Tactical Tech co-founders Stephanie Hankey and Marek Tuszynski wrote: “With the promise of sustaining our health and our societies, we have faced trade-offs: safety versus surveillance, care versus control, fear versus freedom. [...] The dilemma we as societies face is that our technological response to this planetary-scale crisis may not offer greater control and understanding of the virus, but rather greater control and understanding of ourselves.”8 This series picks up on these same dualities, highlighting both sides of the same coin through a variety of case studies and crises beyond the COVID-19 pandemic. While these essays illustrate their points with many real-world examples, they are in no way exhaustive; rather, they are focused through the vantage point of each author’s key interests.
Indeed, the dualities and trade-offs are constant, though not always fully informed, understood, or consented to. Even when a website’s or app’s Terms and Conditions and Privacy Policy are not as long and confusing as Amazon Kindle's famous 2016 terms, which took nine hours to read9, they still tend to take over an hour to read10, and even then people may not fully comprehend the potential harms or real-world consequences. “I don’t think that the average person likely reads that whole document,” said Mark Zuckerberg in his 2018 testimony to Congress, referring to Facebook's Terms and Conditions11.
In 2016, Microsoft released “Tay”, an AI chatbot on Twitter; within 24 hours, the company had to shut it down because users had trained the bot to post obscenely hateful and derogatory content12. Why Twitter users corrupted the bot is another conversation, but it was an emblematic case study of the potential for disaster and of the safeguarding measures that developers, designers, policy makers, and many others along the line need to put into place to reduce the impact of such potential (and in this case, actual) harms. In 2024, Google’s Gemini AI chatbot responded to a college student who was researching how to detect and prevent elder abuse: “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”13 Google responded to the report by saying: "Large language models can sometimes respond with nonsensical responses, and this is an example of that.”14 However, the words uttered by the chatbot aren’t ‘nonsense’ in the literal sense – they are perfectly legible and comprehensible to the reader.
New technologies often perpetuate age-old societal issues, such as bias, discrimination, misinformation, scams, sextortion, and information chaos. What is new is that powerful tools like AI have supercharged these issues, making them more widespread, harder to verify, and instantaneous. But why would people create and commercially release systems that have the potential to exacerbate these deep societal harms? Profit is likely a contributing factor, judging by the 2025 market valuations of companies like Apple (3.52 trillion USD15), Microsoft (3.07 trillion USD16), Amazon (2.3 trillion USD17), Alphabet (2.3 trillion USD18), Meta (1.57 trillion USD19), and Tesla (1.29 trillion USD20). Mark Zuckerberg’s infamous motto “move fast and break things” illustrates how Facebook and Big Tech companies of the same ilk did not prioritize care – neither care for their workforce, nor care for the people affected by their tools. They instead favored an approach that would benefit their bottom line – an ethos shared by many of the ‘disruptors’ in Silicon Valley.
This multi-part series examines some of the ways in which technologies have exacerbated, and continue to exacerbate, harms in society, providing snapshots into who makes technologies such as AI, who benefits most and who is harmed, and how these tools are being used out in the field. As activist and professor Kimberlé Crenshaw has said: “What kind of movement can you base on the experience of one person? To understand a social problem we have to confront all aspects of it.”21 And that is what this series aims to do. The fact that so much can be left out of a series this long illustrates just how widespread these issues are. The information contained in these essays is far from comprehensive, but it aims to give a glimpse into some of the most important aspects to consider.
This series sheds light on dire conditions and real-world abuses. We didn’t want to sugar-coat them, so we suggest that readers tread cautiously with this in mind (that is to say: consider this a content warning for the entire series).
Notice: This work is licensed under a Creative Commons Attribution 4.0 International Licence.

Endnotes

4 What the Future Wants. “Everywhere, All the Time.” Tactical Tech, 2024.
6 Tuszynski, Marek; et al. “The GAFAM Empire.” Tactical Tech, 2022.
7 Hankey, Stephanie; et al. “Everything Will Be Fine.” Tactical Tech, 2022.
8 Hankey, Stephanie; et al. “100 Pandemic Technologies: Technologies of Hope and Fear.” Tactical Tech, 2020.
13 Gemini conversation: “Challenges and Solutions for Aging Adults.” 2024.
15 Stock analysis: Apple. Retrieved January 14, 2025.
16 Stock analysis: Microsoft. Retrieved February 6, 2025.
17 Stock analysis: Amazon. Retrieved January 14, 2025.
18 Stock analysis: Alphabet. Retrieved February 6, 2025.
19 Stock analysis: Meta. Retrieved January 23, 2025.
20 Stock analysis: Tesla. Retrieved January 14, 2025.
21 Spike, Carlett quoting Crenshaw, Kimberlé. “The intersectional struggle of black women.” UDaily, University of Delaware, 2018.