Grindr, AI, & Space-Time
Queering and trans-ing AI, amen. And one of those academic papers I still can't help but occasionally write.

After 20-odd years as an academic, occasionally I cannot help myself and I put out an academic paper. This is a little ditty on AI that called to me; the full text is below. We need more queer-trans work on AI, like the work many of the folks who are also part of the Liberatory + AI Think Tank at MIT are doing.

  • Feel free to jump to the full text of the article, have a free PDF here, or read the abstract,
  • or just read the David Livingstone archival gossip,
  • or! first/only read why I co-wrote this short but poignant riff because I always think that is so fascinating too. Here's those four reasons:
  1. I have so much to say and share about LGBTQ+ dating and hookup experiences that I want the academic world to think on. I feel confident that the bridging of these apps to AI will surely only f*ck over queers, trans, hets, and cis alike. And apps are a space so many of us spend wayyyy too much time on, lured by promises of connection they often can't deliver.
  2. My partner is COO of various internet social justice think tanks so we chat about AI on the biweekly to daily (but never in bed, bc sanctuary).
  3. I so much wanted to co-write with my friendcolleague, the utterly wonderful Nikko Stevens. They write on the history of databases through the lens of the prison system. Damnnnnn that's on point.
  4. And the fantastic co-editors Jess McLean and co. invited me to write as part of a special collection in the U.K.'s Royal Geographical Society's (RGS) journal, Transactions of the Institute of British Geographers. I think these co-editors are dope (hi, dear Sneha!) and publishing in Transactions has been a lifelong goal. Because, y'all, this is the journal the British geogs can get in the mail as members of the RGS! Notably, the RGS has more geographers (~16k) than the U.S.-based American Association of Geographers (~10k) because empire. The RGS is actually a wacky place to visit as you have to be a member and you walk into a private, members-only, lush rolling green field within stunning British academic buildings, after coming through a brick wall near a flipping set of statues of Ernest Shackleton and David Livingstone. Woah. In sum: it's so cool to be in chats with those you admire and to be delivered in the mail.

Epic side note or archival gossip re geography and empire: it's so wild that David Livingstone would be on this building! I once found a note from him when I worked in the Union Theological Seminary Archives. I vividly recall that it said something like: "No, I will not come speak in England. No, I will not be coming back ever. Please stop writing me. I am staying here in Africa and I prefer not to speak English anymore." Now, that's a way to refuse empire.

Oh, empire, we see you and your architecture. C/o the RGS.

The full text of our submitted manuscript is below. Would that I could share the copyedited text here, but them's the copyright rules of powerful publishers.

You can also download a free PDF here - YAY!

"A Reply on Responsibility & Repair in AI: Lessons from Outer Space, Agriculture, and Gay Hookup Apps"

Nikko Stevens and Jack Jen Gieseking 

ABSTRACT :: Academic digital studies focused especially on big data in the 2010s, unusually in step with corporations, policy spheres, and their marketing machines. Now corporate, government, advertising, tech, etc., industries are pitching us another version of technology to drive capitalist investment: AI, aka artificial intelligence. Just like the supposed magic that big data was to provide, AI is promoted as ideal, utopian, and ultra-solution-oriented, even as its implications are violent, depressing, anxiety-inducing, and possibly authoritarian- and fascist-supporting. For a collection on digital geographies of responsibility and repair, it is then poignant that each piece in this collection (inevitably?) wound up focusing on AI. The insights of these papers highlight the colonial and racist physical, economic, and political infrastructures that support AI across a range of geographies: from the agricultural fields of the Global South to outer space. Whatever then should the responsibility to and repair of our digital worlds be?

When attempting to define AI, it is easy to succumb to using AI as a floating signifier without a stable referent. There’s an “uncontroversial thingness” to AI whereby we presume its power without understanding its composition or scope (Suchman, 2023). The current, rapid, and uncritical adoption of AI reinforces what Lucy Suchman describes as AI’s “strategic vagueness”: AI’s power is maximized through ontological vagueness, which endows AI with the capacity to shape spacetime to its coders’, marketers’, and/or corporate overlords’ desires–whether racist, colonial, capitalistic, and/or otherwise. Strategic vagueness also enables the deployment of AI to harmful ends, even under the guise of responsibility. Without engaging with a stable conception (or particular instance) of AI/ML, our understanding of repair or responsibility is suspect at best.

Specificity is therefore essential when discussing AI, and all the more so when tying AI to human-centered concepts like responsibility and repair. As a starting point, IBM (2025), a purported leader in AI, defines artificial intelligence as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” Notably, the focus is often on human-cum-artificial “intelligence.” Emotional intelligences like care, responsibility, and repair are often less prioritized in the creation and promotion of AI.

The denial of emotional intelligence in AI is bound to colonial and racist roots. As James Muldoon & Boxi A. Wu write: “The global economic and political power imbalances in AI production are inextricably linked to the continuities of historical colonialism, constituting the colonial supply chain of AI” (2023, 1). The AI supply chain runs from its massive data gathering and cleaning regimes through hardware and software, all of which is an excessive drain on power supplies and resources. We need to engage with the AI supply chain in a way that enables individual AI developers’ access to responsibility. Such engagement requires that those developers and administrators be engaged in their own networks of care and accountability, largely outside of the supply chain (Widder & Nafus, 2023).

Critical engagement with the AI supply chain further means that we must all resist AI systems' seductive panoptic vision, a purpose at the heart of this edited paper collection. Yet, as queer transfeminist killjoys, we can also easily imagine many scholars wondering how “repair” and “responsibility” could be mustered in AI. Where is the repair in the bundles of algorithms that supposedly will and, in many ways, already do run our lives? Where is the responsibility in the environmentally-degrading server farms storing the information these algorithms process?

The authors of these commentaries offer three possible ways forward. First, their work demonstrates that the concepts of responsibility, repair, and AI may seem disjointed, but that's exactly why they need to be in conversation. What at first seems like a sharp juxtaposition deepens the theoretical work in both digital geographies and the geographies of care and responsibility (see also Walker, et al., 2021). Second, all of the authors offer a skillful entrance to discussion around AI for the novice, while advancing the work of digital geographies in their own research.

The third and final contribution of these interventions is that digital geographies are never just about AI—or big data, data centres, or even apps, for that matter. Rather, all digitalia are imbricated in surveilling and structuring our daily lives. These three short essays from Katarzyna Cieslik, Yung Au, and Margath Walker and Jaime Winders are a useful and needed compendium for those geographers out there wondering what to do next digitally, both in our research and in our teaching. This collection is especially important as digital geographies has become a central but still small part of the field, while this material dominates our and our students’ everyday lives. In this essay we add another intimate scale to this conversation: dating and hookup apps, and the (at times very limited) role of Trust & Safety workers in the responsibility and repair of social apps and sites.

While “data colonialism” has wrought itself upon lands both near and far, from outer space to global agricultural lands (Thatcher, et al., 2016; Couldry & Mejias, 2019), these days, what we can clearly term “AI colonialism” is upon us too. Yung Au lays out the frameworks and protests against the military-industrial complex “perpetuating a political economy of war and conquest,” a group which includes Amazon and Cisco in lockstep with the likes of Lockheed Martin. Insightfully, she highlights the ways scale and the geographical imagination intertwine to produce “a disembodied rhetoric” used to justify “the rapid developments of data infrastructures in both terrestrial and extra-terrestrial environments: they benefit everyone” (her emphasis). In the face of this colonial, militarized “space expansionism,” she points to Indigenous and feminist geographies of care frameworks, and to abolition and worker movements, for strategies and tactics to engage otherwise with AI. As an example, Au lists Space Science in Context’s call to end weaponization in outer space as on Earth, where such weapons target “Indigenous people and people of the Global South, including Palestine.” Au’s description of the infrastructures for “interplanetary computing”—including the absurdity of “how to stream Netflix on Mars”—shows these energy-sucking, labor-deflating spaces are deeply colonial projects that seek to lay “claim to outer space” for “us all.” When “all” and “everyone” remain undefined, it’s never about the actual human me and you–and “we” become tools of empire.

Paired together, Au and Cieslik draw our attention to the deep weight, havoc, and damage that AI and big data, as social and physical tools of technological empire, wreak upon our physical environments, from the stratosphere to the farms of our planet–by which she means the farms of food and not those of servers. While AI can only thrive through resource-hungry infrastructures, Cieslik cogently describes a “dismal harvest”: how most of us will be hungry if we are forced to depend on such technology. Her research examines the “current trends towards externally controlled, intensive, monoculture megafarms,” as agricultural AI is owned and pushed upon farmers by big agribusiness. The Global North further subjugates the Global South by retaining ever more power through “data processing hubs of power and control.” She writes:

The effective delinking of the farming knowledge and decision making from the farm is both spatial and geographically grounded, as it underscores the subjugation of the places where food is produced (the Global South) to the expert data processing hubs of power and control (Global North).

The concept of repair extolled by Big Agricultural AI (or Big Ag AI) is used with ill will: farmers are painted as “backwards” and “environmental predators,” but mined for local knowledge to feed AI, which in turn reifies “human-nature separation embodied in colonial fantasies of domination and control” (citing Huff and Brock 2023:2116). These violent processes fuel the decline of agricultural knowledge and experience, amp up deskilling, and reproduce colonial, racist, and patriarchal rule. Cieslik poignantly argues that “these new dimensions of care for AI introduce new forms of labour exploitation, whereby caring for the land is displaced by caring for the technology that exploits the land” (her emphasis).

To bring us back to responsibility and repair, Margath Walker and Jaime Winders offer a more theoretical intervention that calls in the underlying risk-averse frameworks that deny responsibility among AI creators and purveyors. For a decade, queer feminist theory has been enamored of Eve Sedgwick’s call for a “reparative reading,” which, in the words of Robyn Wiegman, is “about learning how to build small worlds of sustenance that cultivate a different present and future for the losses that one has suffered” (2014, 11). But academics (ourselves included) are experts of critique and not always people of action. Responsibility is then a bit of a wiggly object. Thus, Walker and Winders write bluntly, helpfully, and poignantly in their essay in regard to AI’s creators: “What understanding of responsibility, however, are they mobilizing?” They appropriately come down hard on any sort of responsibility that derives from “avoiding the catastrophic and existential risk . . . not enacting an ethical or moral framework.” Indeed, there is always a loss to be recovered from, a small world to be built and survived from and upon, and the eternal ethical debate about how to bring that into being.

Walker and Winders astutely write that the creators of AI “capitalize on the doom and gloom they helped produce,” doom and gloom based upon a now famous “origin story” claimed by AI’s predominantly white, cisgender male “creators.” The general public’s—and academia’s—fear of technology delays our accountability to study, write about, and act upon how AI is (not) regulated. Yet, Walker and Winders argue, it is women of color and trans people of color like Alex Hanna (Hanna, et al., 2020; Paullada, et al., 2021) and Timnit Gebru (Buolamwini & Gebru, 2018; Raji, et al., 2020), and femme of color scholars like Safiya Noble (2018), Lisa Nakamura (2014), and Ruha Benjamin (2019a, 2019b) who “mobilize, not the origin, responsibility narrative, but narratives of obligation, duty, and burden. Their attention to responsibility often morphs into a language of repair.” (Even Rolling Stone agrees, with a recent article titled “These Women Tried to Warn Us About AI” (O’Neil, 2023).) The authors point to the ways the AI of the future will not offer us the respect or do the care work we need. Instead we must start from a model of “responsibility, not risk.”

Across these three interventions, we must look beyond and intentionally reject the white, colonial, patriarchal, and cis-norming models of technology and geography in order to create the worlds we need and desire. For repair and responsibility to be enacted, we must not succumb to the dominant technological imagination and the dominant authoritarian world it reproduces.

But we think there’s something Walker and Winders are getting at that the other pieces’ approach leaves out: the human aspect of AI.

Given the examples of AI laid out by Cieslik, Au, and Walker and Winders, we felt there are other intimate aspects of AI that would further their ideas, and we wanted to look at the actual people within the corporate and research-group coding machines who are tasked with responsibility and repair: Trust & Safety. Trust & Safety (T&S) is the tech industry’s term for the small human teams that keep users safe from one another, and these teams are an acknowledgement of the responsibility that platforms have to their users. In other words, these teams create practices to structure algorithms to keep users safe from one another and from the worst of the internet: sexual trafficking, animal trafficking, drug trafficking, Ponzi schemes, fraud, trolling, doxing, and so on. T&S teams create formalizations of the company’s policies. One way to understand them is through a capitalist lens: Trust & Safety teams frustrate the money-making arm of the corporation as they frequently lessen profits, but bolster the money-keeping arm of the corporation by preventing expensive lawsuits. At times, T&S professionals offer the work of repair as well, but solely in a technological vein: they are the teams deep in the bowels of our apps and websites who mend the algorithms in responsible ways.

While the relationship between T&S (their teams, tools, and approaches) and AI (its models, limits, and outputs) is complex, we want to look at the most human element of small and large platforms through the lens of T&S to make further sense of the possibilities between responsibility and repair around AI. As authors who are both trans and queer, nothing feels more human than the desire to connect with others. Thus, we briefly discuss one of the most intimate geographies of the modern age: dating and hookup apps, namely Grindr. While 3 out of 10 people in the U.S. have ever used a dating app, over 50% of LGBTQ people in the U.S. have used them (Vogels and McClain, 2023); all of these apps or versions of them are internationally distributed. Given their profound reach, it’s all the more important to think about the way AI will affect an app that has significant effects on feelings about our bodies, ability to connect, and self-worth (Zervoulis, et al., 2020).

Grindr is a popular gay hookup app that has over 13 million monthly users. For scale, this is larger than the number of subscribers on Tinder, the world’s largest dating app. For gay men and many trans people who are seeking romantic and/or sexual connections, the app is a go-to site of networking, sexual connection, and community-building. All popular dating apps were at first based on the success of Grindr, meaning their social norms, data structures, and other design choices hail back to this app. All social connection platforms struggle with maintaining safe communities, and Grindr is no exception. Grindr is developing an “AI wingman” and its 2025 product roadmap is focused on leveraging AI to drive users’ ability to safely connect with each other (Rogers, 2025). Wired reports, “Whether users want them or not, it’s all part of a continuing barrage of AI features being added by developers to most dating apps, from Hinge deciding whether profile answers are a slog using AI, to Tinder soon rolling out AI-powered matches.” Bumble is already using AI to test its matches’ compatibility (Pandey, 2024). In other words, our dating and hookup digital connections will be and already are further bound to AI.

A few years ago, Alice Hunsberger led Grindr’s T&S team to add protections for trans people. Hunsberger, a cis woman with a trans daughter, saw the hurt and harm that trans people were experiencing on Grindr and she fought to create algorithmic and content moderation protections (Oasis, 2022). If 13 million people touch this app every month, that is 13 million people around the world who are supported and/or forced toward practices of respect and kindness toward trans people.

Grindr is thus enacting responsibility for safer conditions for its users–but these T&S adjustments don’t repair any harm. Grindr’s failure to halt transphobia, racism, xenophobia, ableism, sizeism, and even fascism is constant. On the popular DouchebagsofGrindr.com site, for over 14 years Grindr users have continued to submit the profiles of those who reproduce the very harms T&S seeks to upend. As DouchebagsofGrindr.com evidences, attempts to leverage technology to solve social problems are not always effective. The actual wingman, wingwoman, or wingtrans who accompanies us to an IRL social space to interact with and meet others provides counsel and support. They’re also there for us when we are in need of emotional repair in times of rejection, loss, and confusion. When AI builds on the already existing harms of a dataset, we can only expect the ranks of DouchebagsofGrindr.com and other accountability sites to grow. And without formal ways to keep one another safe, we resort to creating sites to call out–rather than the practice of calling in (Calliham, 2025)–and the people who suffer from these oppressive isms are left to mind the work of responsibility and repair in ways that may effect more harm.

Beyond what ChatGPT, OpenAI, Wingman, and the like have in store for us, the intimate effects of AI do not stop at our food supply, apps, or connection to the cosmos. More complex algorithmic solutions like AI or Bitcoin mining require more intense computing resources, which in turn exacerbate AI’s harmful environmental impacts. As platforms deploy more AI-powered software, global demands for these resources increase and the harms are concentrated among the most vulnerable of us. Because of data centers’ drain on energy grids, Indigenous nations live with unreliable access to electricity, and already drought-prone areas are burdened by data centers’ water demands (Lally, et al., 2019; Li, et al., 2023; Verma, 2025).

Further, the AI supply chain is reliant on vast data stores in order to function, and those data are supplied by complex networks of human laborers (Tubaro, et al., 2020). Most commonly, the tedious work of collating, cleaning, annotating, and verifying data is done by exploited workers in the Global South (Posada, 2021). Workers not only experience difficult working conditions of long hours and exhausted eyes, necks, backs, and minds, but are exposed to large volumes of violent and disturbing content that takes an immense personal toll (BBC, 2023). These examples point to how AI, “having formed in the largely white military-industrial-academic complex, [...] has served the dominant interests of that world” (Katz, 2020:8). Writing about justice for these data workers, Lily Irani (2016) asks: “What would computer science look like if it did not see human-algorithmic partnerships as an embarrassment, but rather as an ethical project where the humans were as, or even more, important than the algorithms?” What role, then, does AI have in the repair of the harms it causes? We would argue, and we read the authors of these papers as arguing the same: AI has no role in the repair of the harms it causes because the systems causing harm must not be held up as the agents of their repair (see also McLean, 2024).

We close by thinking again about the four themes, but now reconfigured. When we resist making AI the solution, we are then able to consider ways that AI can be a tool towards our hopeful, ethical, and possibility-filled visions for the future. Catherine D’Ignazio (2024) has outlined a “restorative/transformative data science”: a process by which “more people and groups may undertake data science grounded in the restoration of life, living, vitality, rights, and dignity and the transformation of the structural conditions that produce inequality in the first place” (D’Ignazio, 2024, p. 23). As these commentaries have pointed out, AI–the computational and algorithmic formalization of some data science practices–could also be approached through the lens of restoration and transformation. Putting AI in its place further facilitates grounding AI-as-tool in a specific place and time. For example, Wonyoung So is using reparative algorithms to engage with discriminatory housing practices in the United States (So, et al., 2022). The Data Against Feminicide project has built AI-based tools, co-designed with Latin American activists, to combat local gender-based violence (D’Ignazio, 2024). These, and many other projects, directly engage in the transformation of inequitable structural conditions, much as Au lays out for us. They demonstrate a relationship to AI that is ethical, responsible, situated, and, for us, fundamentally hopeful. We are inspired by scholars like Blair Attard-Frost (2025), who is pushing us to think about transfeminist AI governance; Jason Edward Lewis, et al. (2024), who locate AI within Indigenous knowledge production; and Ngozi Okidegbe (2024), who positions algorithms as a central matter of Black feminist concern. While AI does not afford responsibility and repair for the harm it causes, humans always will and must.

 

References

Attard-Frost, B. (2025) Transfeminist AI governance. First Monday, 30(4). Available from: https://doi.org/10.5210/fm.v30i4.14121 [Accessed 6th August 2025].

BBC. (2023) AI and data labelling: 'I felt like my life ended'. 16 August 2023. Available from: https://www.bbc.com/news/av/world-africa-66514287 [Accessed 6th August 2025].

Benjamin, R. (Ed.). (2019a) Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Duke University Press Books.

Benjamin, R. (2019b) Race After Technology: Abolitionist Tools for the New Jim Code (1st edition). Polity.

Buolamwini, J. & Gebru, T. (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. Available from: https://proceedings.mlr.press/v81/buolamwini18a.html

Calliham, C. (2025) Calling In: Author Q&A with Loretta Ross. The Public Eye, Winter/Spring 2025, 25–27. Available from: https://politicalresearch.org/2025/02/04/calling [Accessed 6th August 2025].

Couldry, N. & Mejias, U. A. (2019) The Costs of Connection: How Data is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.

D’Ignazio, C. (2024) Counting feminicide: Data feminism in action. The MIT Press. Available from: https://doi.org/10.7551/mitpress/14671.001.0001

Hanna, A., Denton, E., Smart, A. & Smith-Loud, J. (2020) Towards a critical race methodology in algorithmic fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 501–512. Available from: https://doi.org/10.1145/3351095.3372826

Huff, A. & Brock, A. (2023) Introduction: Accumulation by restoration and political ecologies of repair. Environment and Planning E: Nature and Space, 6(4), 2113-2133. Available from: https://doi.org/10.1177/25148486231168393

IBM. (2025) Artificial Intelligence. 16 April. Available from: https://www.ibm.com/think/topics/artificial-intelligence [Accessed 6th August 2025].

Irani, L. (2016) The hidden faces of automation. XRDS: Crossroads, The ACM Magazine for Students, 23(2), 34-37. Available from: https://doi.org/10.1145/3014390

Katz, Y. (2020) Artificial whiteness: Politics and ideology in artificial intelligence. Columbia University Press.

Lally, N., Kay, K. & Thatcher, J. (2019) Computational parasites and hydropower: A political ecology of Bitcoin mining on the Columbia River. Environment and Planning E, 5(1), 18-38. Available from: https://doi.org/10.1177/2514848619867608

Lewis, J. E., Whaanga, H. & Yolgörmez, C. (2024) Abundant intelligences: placing AI within Indigenous knowledge frameworks. AI & Society. Available from: https://doi.org/10.1007/s00146-024-02099-4

Li, P., Yang, J., Islam, M. A. & Ren, S. (2023) Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv preprint arXiv:2304.03271. Available from: https://doi.org/10.48550/arXiv.2304.03271

McLean, J. (2024) Responsibility, care and repair in/of AI: Extinction threats and more-than-real worlds. Environment and Planning F, 3(1-2), 3-16. Available from: https://doi.org/10.1177/26349825241228586

McPherson, T. (2012) Why Are the Digital Humanities So White? Or Thinking the Histories of Race and Computation. In Gold, M. K. (Ed.), Debates in the Digital Humanities (pp. 139–160). University of Minnesota Press.

Nakamura, L. (2014) Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture. American Quarterly, 66(4), 919–941. Available from: https://doi.org/10.1353/aq.2014.0070

Noble, S. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

Oasis Consortium. (2022) Gender Inclusivity: A Recipe for a Thriving and Safe Dating Community with Alice Hunsberger - Grindr. Available from: https://www.youtube.com/watch?v=9xaLOrBJ_yY [Accessed 6th August 2025].

Okidegbe, N. (2024) Revisioning Algorithms as a Black Feminist Project. In Jones, M. L. & Levendowski, A. (Eds.), Feminist Cyberlaw (1st ed., pp. 200–209). University of California Press. Available from: https://doi.org/10.1525/luminos.190

O’Neil, L. (2023) These Women Tried to Warn Us About AI. Rolling Stone. 12 August. Available from: https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/ [Accessed 6th August 2025].

Pandey, N. (2024) AI Is Reshaping Romance: Virtual Avatars To Go On Dates On Your Behalf. NDTV. 16 May. Available from: www.ndtv.com/offbeat/ai-is-reshaping-romance-virtual-avatars-to-go-on-dates-on-your-behalf-5673578 [Accessed 6th August 2025].

Paullada, A., Raji, I. D., Bender, E. M., Denton, E. & Hanna, A. (2021) Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns, 2(11). Available from: https://doi.org/10.1016/j.patter.2021.100336

Posada, J. (2021) The coloniality of data work in Latin America. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 277-278). Available from: https://doi.org/10.1145/3461702.3462471

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D. & Barnes, P. (2020) Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33–44. Available from: https://doi.org/10.1145/3351095.3372873

Rogers, R. (2025) I Took Grindr’s AI Wingman for a Spin. Here’s a Glimpse of Your Dating Future. Wired. Feb. 11, 2025. Available from: https://www.wired.com/story/hands-on-with-grindr-ai-wingman/ [Accessed 6th August 2025].

Ross, L. J. (2025) Calling In: How to Start Making Change with Those You'd Rather Cancel. Simon and Schuster.

So, W., Lohia, P., Pimplikar, R., Hosoi, A. E. & D’Ignazio, C. (2022) Beyond fairness: Reparative algorithms to address historical injustices of housing discrimination in the US. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 988–1004. Available from: https://doi.org/10.1145/3531146.3533160

Suchman, L. (2023) The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2). Available from: https://doi.org/10.1177/20539517231206794

Thatcher, J., O’Sullivan, D. & Mahmoudi, D. (2016) Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment and Planning D: Society and Space, 34(6), 990-1006. Available from: https://doi.org/10.1177/0263775816633195

Tubaro, P., Casilli, A. A. & Coville, M. (2020) The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence. Big Data & Society, 7(1), 2053951720919776. Available from: https://doi.org/10.1177/2053951720919776

Verma, P. (2025) In the shadows of Arizona’s data center boom, thousands live without power. Washington Post. Available from: https://www.washingtonpost.com/technology/2024/12/23/arizona-data-centers-navajo-power-aps-srp [Accessed 6th August 2025].

Vogels, E. & McClain, C. (2023) Key findings about online dating in the U.S. Pew Research Center. Available from: https://pewresearch.org/short-reads/2023/02/02/key-findings-about-online-dating-in-the-u-s [Accessed 6th August 2025].

Walker, M., Winders, J. & Frimpong Boamah, E. (2021) Locating Artificial Intelligence: a Research Agenda. Space and Polity, 25, 202-219. Available from: https://doi.org/10.1080/13562576.2021.1985868

Widder, D. G. & Nafus, D. (2023) Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility. Big Data & Society, 10(1). Available from: https://doi.org/10.1177/20539517231177620

Wiegman, R. (2014) The times we’re in: Queer feminist criticism and the reparative ‘turn.’ Feminist Theory, 15(1), 4–25. Available from: https://doi.org/10.1177/1464700113513081a

Zervoulis, K., Smith, D. S., Reed, R. & Dinos, S. (2020) Use of ‘gay dating apps’ and its relationship with individual wellbeing and sense of community in men who have sex with men. Psychology and Sexuality, 11(1–2), 88–102. Available from: https://doi.org/10.1080/19419899.2019.1607404