On Assange, Social AI, and the Growing Importance of Digital Minimalism
Suren Pahlevan reflects on Assange’s freedom from Belmarsh Prison, Social AI and its potential consequences, and how digital minimalism could be the best way to counter Big Tech’s obsession with AI
Julian Assange was freed from Belmarsh Prison in June this year after agreeing to a plea deal with the US Department of Justice. A month earlier, in May, after a long legal battle, the High Court had granted Assange the right to appeal his US extradition – a major turning point in his case, and one which ultimately led to the deal being reached.
Chris Hedges’ article titled “You Saved Julian Assange” beautifully highlights the crucial role of the political pressure created by mass protests in London and worldwide in bringing about Assange’s freedom.
Less than two weeks ago, Assange made his first public appearance and speech since his return to Australia, at the Parliamentary Assembly of the Council of Europe. He spoke in detail about the plot to murder him in the Ecuadorian Embassy in London, the legal battle for his freedom that lasted more than a decade, and his psychological state at Belmarsh, where he spent 22 hours a day in solitary confinement. Assange also used the speech to re-emphasise the important work done by WikiLeaks, and to explain why he agreed to the deal with the US for his freedom:
“I eventually chose freedom over unrealisable justice, after being detained for years and facing a 175-year sentence with no effective remedy. (...) The US government insisted in writing into its plea agreement that I cannot file a case at the European Court of Human Rights or even a freedom of information act request over what it did to me as a result of its extradition request.
I want to be totally clear. I am not free today because the system worked. I am free today because, after years of incarceration, I pleaded guilty to journalism. I pleaded guilty to seeking information from a source. I pleaded guilty to obtaining information from a source. And I pleaded guilty to informing the public what that information was. I did not plead guilty to anything else. I hope my testimony today can serve to highlight the weaknesses of the existing safeguards and to help those whose cases are less visible but who are equally vulnerable.”
“As I emerge from prison, I see that artificial intelligence is being used for mass assassinations. Where before there was a difference between assassination and warfare, now the two are conjoined. Where many, perhaps the majority, of targets in Gaza are bombed as a result of artificial intelligence targeting.
The connection between artificial intelligence and surveillance is important: artificial intelligence needs information to come up with targets or ideas or propaganda. When we’re talking about the use of artificial intelligence to conduct mass assassinations, surveillance data from telephones and the internet is key to training those algorithms.” – Assange, speaking to the Council of Europe, 1st October 2024.
The following day, the Council of Europe voted in favour of recognising that Assange had been held as a political prisoner. The result is a small beam of justice and hope in an often unjust world. We should use it as a reminder of what is possible when millions come together in support of the truth, the freedom of the press, and the sacrifices made by journalists with incorruptible principles and values.
I want, however, to focus on one area of Assange’s testimony which I find highly pertinent both to research at Cambridge and to conversations across wider society: artificial intelligence as a tool for war.
“New ‘Social AI’ products, in their current form, are a key example of this misguided goldrush, and are often portrayed as instruments of ‘progress’.”
Even back in 2017, in a livestreamed talk from the Ecuadorian Embassy in London, Assange warned about the growing dangers of AI and the profit (and glory) incentives driving the Big Tech companies developing it: “The dystopian consequences [of the work being done by Silicon Valley AI engineers] is not what is most present in their mind.”
Social AI for Social Good?
Not only are companies and startups looking to profit off the newly emerging market for ‘Social AI’ – plenty of academic research is being done into its benefits and harms. ‘Social AI’ refers to AI chatbots created and designed for social connection: AI ‘friends’ and ‘assistants’, and AI ‘girlfriends’ and ‘boyfriends’ that provide you with romantic messages, discussion about your day and your hobbies, companionship, and sexts. Services such as Replika, Character.ai, Nomi.ai, and even Claude and ChatGPT are increasingly promoting themselves as replacements for (or, more often, supplements to) human interaction. ChatGPT’s business model has been described as a “disaster in the making” – handing our psychological need for a ‘social life’ over to a company like this is not a smart move.
Whilst I was writing this article, news broke of a lawsuit alleging that a 14-year-old boy, Sewell Setzer III, “killed himself after becoming obsessed with an artificial-intelligence chatbot”, according to his mother.
Companies and researchers alike are exploring how we might rethink and reinvent social connection in the near future with the aid of AI. The idea is that we can all now talk to a virtual AI-powered friend or ‘partner’ whenever we want, from our phones, about anything. We can even personalise their traits and personalities to suit exactly ‘who’ we want to talk to. This is society going in the wrong direction.
I have found that AI ethicists are well aware of the risks of Social AI tools encouraging or causing suicide and self-harm, or spreading misinformation. What I find most curious about this area of academic research, however, is how little sustained attention some scholars give to the potential of these technologies for data farming, use in warfare, and societal manipulation by bad actors.
There are, of course, promising benefits to these technologies which should not be ignored. For instance, AI-powered voice assistants that can speak to lonely elderly people for hours longer than any family member or carer could reasonably be argued to be a net benefit. However, the increasingly digitised, ‘enhanced’ and ‘improved’ future that such companies promise with their services could simply continue the Silicon-Valley-induced dystopia that Assange and WikiLeaks have warned us about for the past decade. A normalisation of Social AI could be disastrous.
We have already seen the damage and manipulation caused by Big Tech over the past decade, and we’ve mostly ignored it or brushed it off as inevitable – the Cambridge Analytica scandal being a prime example. Even as we scroll through short clips and posts, we have become the new techno-feudal worker class for the companies that own these platforms, and we are evidently okay with it. Byung-Chul Han puts it more poetically:
“The smartphone is the cult object of digital domination. As a subjugation device, it acts like a rosary and its beads; this is how we keep a smartphone constantly at hand. The ‘like’ is a digital “amen.” We keep going to confession. We undress by choice. But we don’t ask for forgiveness: instead, we call out for attention.”
The Paradox of the Digital
Furthermore, we live in an era where information is constant and bombards us; it is not hidden. But if a big story can’t capture our attention in a short TikTok video or YouTube clip, it often goes unnoticed. If it does not make it to the front of our algorithmically selected feeds, few people will seek it out on their own. If big news outlets do not report on it, it will often be ignored.
This is where I am somewhat of a hypocrite towards these double-edged platforms. I would not have found out about the Assange protests in February this year without X (formerly known as Twitter). Without these platforms, we could not as easily witness the horrors coming out of Gaza and Ukraine, or follow the upcoming US presidential election.
“Both these social media platforms and the mainstream media require a certain level of user self-regulation and education when it comes to their usage.”
Mainstream media often cannot match the utility of these social media platforms, and it is here that important stories often surface. Good, independent journalism relies, to an extent, on these sites – for now. I can’t offer an alternative way to communicate without the infrastructures owned by Big Tech, but it is clear that those infrastructures are riddled with issues of their own. Both these social media platforms and the mainstream media require a certain level of user self-regulation and education when it comes to their usage.
In late October, Matt Kennard, investigative journalist and co-founder of the news outlet Declassified UK, highlighted the failures of the mainstream media in a debate on British democracy at the Cambridge Union:
“Just the other day, the Council of Europe, the highest human rights body in Europe voted that [Julian Assange] was held as a political prisoner for five years. Not one single British newspaper has ever written a word about it.”
Listen to Julian
AI may yet revolutionise medicine and science, and lower our workloads; but when it comes to human connection, socialising, and relaxation with friends and family, we should adopt the philosophy of Digital Minimalism, rejecting over-reliance on these new inventions and their capabilities. Knowing where to draw healthy limits on the use of these tools will be key going forward, and, as Assange points out, we should treat Silicon Valley’s obsession with AI with caution.
If we don’t opt for increased Digital Minimalism in our lives, ‘Social AI’ products, platforms and tools could end up being the next frontier for private companies to influence our purchasing habits, our views and beliefs, and our emotions. Real human connection could be increasingly replaced, and perhaps increasingly devalued. Social AI researchers are, of course, mindful of this risk to a certain extent, but it remains unclear how large corporations will design and develop their products.
“If we don’t opt for increased Digital Minimalism in our lives, ‘Social AI’ products, platforms and tools could end up being the next frontier for private companies to influence our purchasing habits, our views and beliefs, and our emotions.”
The environmental cost of these technologies is another major concern, which only compounds the damage they could cause. Additionally, sociologist Ruha Benjamin’s 2019 book “Race After Technology” urges us not to forget the racial and social inequalities exacerbated by such tools. AI ethicists such as Jude Browne, Stephen Cave, Eleanor Drage, and Kerry McInerney are also challenging the development of these tools from different angles in their 2023 edited collection “Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines”. From analysing changing cultural depictions of AI engineers in films over the past century, to explaining why predictive-policing AI tools “cannot be successfully employed for feminist ends”, the book emphasises the vital range of perspectives we must account for as we adopt AI as a society.
Whether the concern is surveillance, warfare, or social, environmental, racial and gender harms, regulation in each of these areas is fundamental and necessary. It will be interesting to see how lawmakers across the world respond to the increasing usage and availability of these new tools over the next few years. Given the variety of concerns from all angles, these technologies should not be adopted without deep consideration.
Assange’s imprisonment and fight for freedom may be quickly forgotten by the media – and eventually by us. This is to our own peril. Assange is now free, and we should cherish this victory. But if we choose to ignore his warnings and forget the work for which he sacrificed over a decade of his life, we could be headed for a much less free future.
Suggested Reading:
Cal Newport – Digital Minimalism (2019).
Aylsworth, T., & Castro, C. (2021). Is there a Duty to Be a Digital Minimalist? Journal of Applied Philosophy, 38(4), 662-673. https://doi.org/10.1111/japp.12498.
AP News: WikiLeaks’ Julian Assange says he pleaded ‘guilty to journalism’ in order to be freed.
Depounti, I., Saukko, P., & Natale, S. (2023). Ideal technologies, ideal women: AI and gender imaginaries in Redditors’ discussions on the Replika bot girlfriend. Media, Culture & Society, 45(4), 720-736. https://doi.org/10.1177/01634437221119021.
Byung-Chul Han – Psychopolitics: Neoliberalism and New Technologies of Power (2014).
Jude Browne, Stephen Cave, Eleanor Drage, and Kerry McInerney (eds.) – Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023).