Reflections on our daring future: AI at the center of a focus program at the 2025 Thessaloniki International Documentary Festival
‘Success in creating AI would be the biggest advance in human history. Unfortunately, it might also be the last one.’ (Stephen Hawking)
Part #1
The Automatic Extinction of Humanity
This year’s edition of the Thessaloniki International Documentary Festival devoted extensive space to one of the most challenging topics of our time: Artificial Intelligence (AI). Dimitris Kerkinos compiled a short, AI-themed documentary film program comprising 17 works that address the topic from a variety of perspectives.
Arguably the most threatening vision of an apocalyptic end for humanity involves the emerging interplay of weapons systems and AI. Daniel Andrey Wunderer’s analysis of the current situation in Flash Wars: Autonomous Weapons, A.I. and the Future of Warfare (2024) begins with a reference to newly developed, autonomously operating and interconnected drone squadrons that are almost impossible to intercept due to their sheer numbers. ‘Autonomous’ means that once activated, these systems no longer depend on instructions or confirmation from military personnel and automatically follow their programming to detect and destroy enemy targets. In doing so, any ethical proportionality between collateral human casualties and real threat is lost. At the same time, such systems are becoming increasingly efficient through their use of all the data that circulates on the internet, including private communications and social media messages. Here, the possible influence of hackers cannot be ruled out.
An even greater danger attached to hacking can be summed up in the formula ‘Hack the Minds’. Manipulation is nothing new, given the systematic and propagandistic fabrication of ‘news’, but the increasingly sophisticated analysis of news consumers’ needs and expectations makes it ever more possible to create not just successful individual fake news messages, but also more extensive and self-sustaining news bubbles. People see and hear what they want to see and hear; when combined with ever more perfected deepfake technologies, such urges make identity forgery ever easier. Not only can faces be swapped, but they can also be lip-synched to seemingly utter statements which the ostensible speaker never in fact gave voice to. While detailed analysis can still expose such forgeries, soon this will no longer be possible. In addition, the significance and status of documentary film are being nullified.
The Current War is Being Fought by Gaining Decision-making Power
Another real problem associated with the technological advances Flash Wars explores is that even if military agents still hold decision-making authority, they can no longer oversee the rapidly mounting wealth of data computers provide them with, yet are required to react quickly to it, a challenge which can be decisive in the outcome of any contemporary war. For years now, even the arms industry itself has, along with activists and scientists, vainly demanded that politicians incorporate safety mechanisms into autonomous weapons systems. However, this demand seems illusory, since such safety mechanisms would inevitably introduce significant time delays and consequently lead to military losses.
The scenario becomes most threatening in the face of nuclear weapons, where actual deployment might be a case of only computers reacting to other computers. The strategy of deterrence, based on the idea that every nuclear attack could be answered with equal destructive power, is today reaching its limits, due to sophisticated sensor technology combined with advancing AI capabilities. Historically speaking, nuclear powers’ ability to retaliate against a peer attack has mostly been guaranteed by previously untraceable, nuclear-armed submarines. However, if such technology succeeds in making the ocean transparent and these units visible, then the chances (and, thus, also the probability) of a successful nuclear first strike increase significantly. Andrey Wunderer interviews military specialists, political advisors, scientists and industrialists in order to illustrate this threatening scenario. Ultimately, his informative film concludes with the remark that we don’t yet know what AI’s impact will be. However, given the facts Flash Wars presents, this uncertainty constitutes more a helpless, rather than consolatory, form of hope.
Part #2
The Dark Side of AI: Social and Ecological Consequences
Elsewhere in Thessaloniki’s AI program, Henri Poulain’s In the Belly of AI (2025) contrasts the perspective of AI as an autonomous, self-generating process with its technological dependencies. For example, thousands of minimum-wage earners, mostly young people from the southern hemisphere, feed AI with data every day. These exploited people work in isolation and, under threat of imprisonment and fines, are not allowed to disclose their tasks or working conditions. Many of them are traumatized by being constantly confronted with the most shocking still and moving images imaginable, depicting real-life violence, murder, torture, child abuse, and every conceivable perversion. Yet they are provided with no psychological support and largely work overtime. Only a few dare to appear, even anonymously, in front of Poulain’s camera. The reality of these workers is cynically counterpointed by tech companies that present them in their fresh work clothes as examples of human escape from poverty. Yet such employment is a dead end, as these data workers waste the most productive phase of their lives on horrific work that leaves no opportunity to pursue other professional training, and so consigns them to a state of permanent dependency.
Another fundamental downside of AI is the very high energy consumption required to query ever-larger databases. The resultant environmental damage, along with its human consequences, is systematically ignored. Extraction of the relevant natural resources leads to land degradation and water shortages for local populations. In view of the urgent need for action on climate change, we can say that it is a suicidal project that is being staged here. The techno-industry usually responds to such accusations with the argument that it is working not for the immediate future, but for a distant one. It celebrates the ideology of transhumanism, which promises to make human life free of physical drawbacks and limitations and even holds out the prospect of possible immortality. Bodies will be reshaped; brains will become many times more intelligent. Artificial general intelligence (AGI) appears, in this logic, to be the best thing human beings have ever produced. The move toward colonizing the known universe is celebrated as the species’ desired future. With these fantasies of colonization, capitalism once again reproduces its disastrous central mechanism.
What becomes clear here is that, within a contemporary moment following on from widespread global collapses of religious belief (a still-unresolved ‘Death of God’), a new god-like promise is being proclaimed, with AGI invoked as its placeholder. The promises of transhumanity replace the painful loss of previous human visions of religious ‘paradise’. Contemporary humanity gains its self-confidence and sense of worth only as a worthy transmitter to a new divine entity, one that justifies every sacrifice of nature and of the possible diversity of human life forms.
A debate about this new, future-focussed ideology has never properly begun. The current technological power blocs that profit from AGI not only have no interest in debate, but use their power to nip it in the bud. Vast sums are being invested in an incalculable future, money that would be needed now to enable a dignified life in an intact natural environment. Tech companies have the necessary means to instrumentalize political decisions worldwide for their own interests: workplace rules, laws, conditions, safety regulations, and protective measures are barely considered in eroding democracies. In the Belly of AI gives the victims of this destructive ideology of ‘The Future’ a voice and summarizes the human costs of AI. Henri Poulain’s film should find its place in schools and universities in order to stimulate discussion about possible forms of resistance against this sell-out of humankind’s future.
Part #3
AI and the Forms of Learning
Also included in Thessaloniki’s AI-themed film program was a work that dealt with a topic of ostensibly secondary relevance – learning – precisely in order to demonstrate just how central it in fact is to reflections on AI. In The Hexagon Hive and the Mouse in a Maze (2024), co-directors Tilda Swinton and Bartek Dziadosz unify a panoramic range of perspectives on the phenomenon of learning, diverse forms of which are perceived within a wide variety of cultural and social contexts. It is precisely learning’s interfacing with the most diverse fields of reference and divergent human intentions that can make it so diverse a phenomenon. This contradicts the misguided ideology of human learning and knowledge being in any way reducible to a possible simple statement of ‘facts’ and a closely associated, one-dimensional construction of ‘reality’ as something generated in and from individual items of information. However, there are no such things as isolated ‘facts’ – there are only contexts that influence humans’ interpretations of each individual aspect of reality. Since such contexts are open, it is also inaccurate to speak of determinations through contexts. In fact, contexts influence every human thought and action, and every human perception of details, which are always interpreted in the specific environment within which they are encountered and experienced.
Learning as human beings, in full awareness of the multitude of forms that human learning can and does take, is the decisive project that will shape our culture and future. The ideological position that ‘information’ can be stored in computers, thus enabling us to capture and reproduce ‘reality’, is one of the most fundamental fallacies of our time. It is not only an ideal path to an empty, one-dimensional interpretation of reality. It also entails the destruction of essential human abilities to live with contradictions, including: the ability to enjoy diversity; to exist with multiple identities; to play; and to create open spaces for thought and action. We don’t know how AI learns, experts repeatedly say. Ultimately, AI also interprets data inputs by creating contexts for its acts of interpretation. Such contextualization of inputs is crucial for the consequences of AI’s interpretations of them. The likelihood that AI’s decisive interpretive pattern will be similarity matching reinforces this technology’s tendency to negate differences and alternatives.
In addition to the climate crisis, there exists a crisis based on the suppression of human abilities. We make little use of our talents: people go through life unaware of their true potential and engaged in work processes that not only don’t bring them joy, but seem meaningless to them. When asked in one survey whether their job makes a meaningful contribution to society, 37% of respondents answered ‘no’ and a further 13% were unsure. Other people, however, are productively concerned with what they do and can’t imagine any other life: such individuals see their work as an authentic expression of their human self.
Education often distances people from their true abilities. Individuals’ resources are usually hidden and only reveal themselves in certain circumstances. However, there is no widely available form of education to recognize one’s own abilities. On the contrary, we can clearly see how abilities, especially those of children, are forgotten and destroyed by mainstream schools that measure success in learning solely by the ability to repeat suggested information (learning units). There is no room here for new thoughts, questions and perceptions that change perspectives.
For those reasons, The Hexagon Hive and the Mouse in a Maze’s exploration of a multitude of other forms of learning is crucial if we are to counteract the interpretation of reality produced and perfected by AI, and the associated control over the emergence of possible alternatives to it. With this film, the concept and practice of learning is applied to and within a diverse range of human activities and contexts, including, for example, body language and craft techniques. The overarching goal of the alternative educational practices and philosophies the film depicts relates to the insight that successful learning is measured not on an individual, but on a collective level. True success involves providing something for the community, and thus helping to create a viable, humane civilization. Yet, while the most urgent contemporary learning goals relate to the enablement of communicative and collective forms of human action, AI’s function, in contrast, is already interpreted primarily as involving the perfection of processes geared towards satisfying individual needs.
Learning is based on communication, characterized by the effort to appear approximately ‘correct’ with our statements and behavior within any given context. This tendency to adapt is a result of human beings’ uncertainty regarding whether other people perceive shared environments and their details in the same way as we do. The principle of practical approximation is the most effective learning impetus, a fact which presents both a danger and an opportunity. AI seems to proceed no differently here. It calculates how words and arguments relate to one another, with every AI statement being a result of this probability calculation. Indeed, any given statement doesn’t even make sense to a machine while it is in the process of uttering it.
At the end of their multifaceted film, Tilda Swinton and Bartek Dziadosz therefore give space to a voice that reminds us that scientific methodology is not neutral but, rather, based on the belief that we can consistently and permanently separate ‘false’ from ‘true’ in order to create ‘facts’. Yet, between the ostensible epistemological poles of ‘black’ and ‘white’, data itself often insists on the gray of indistinguishability.
Part #4
AI as the End of Human Culture and Way of Life
Dimitris Kerkinos’s decision to include Tonje Hessen Schei’s iHuman (2019) in Thessaloniki’s AI program is entirely understandable, because this film offers alarming extrapolations of the diverse future scenarios we can expect from AI’s various applications. Who are we? What is our place in the world? What is intelligence? These questions are urgently posed within our current confrontation with AI. Are the human brain’s neurological processes reproducible by silicon atoms in computers? Are neural networks also fabricated in AI machines? And, if AI programs AI itself, will this be humankind’s last invention, since, from that moment on, we will, as a species, no longer be able to understand what is happening around and to us?
What forms of desire and behavioral structures will we try to program into AI? Such questions are vital to consider, because robots will undoubtedly become more capable of dialogue and learning and will act smarter and faster than humans. When they are incorporated into the instruments of the military and police services, the question of regulation arises, but the latter’s protective capacity may well prove illusory. On one hand, scientific curiosity’s irrepressible striving to put into practice everything possible will resist attempts to regulate AI. On the other hand, no one in the military-industrial complex will accept any form of regulation that could reduce the efficiency of weapons systems within a hostile competitive environment, one that makes true forms of regulation-based human control of AI unthinkable.
AI is collecting an ever-increasing amount of data, creating comprehensive human behavioral profiles even in the most intimate areas. Human behavior and ways of thinking are becoming calculable and controllable. By analyzing each specific desire structure, it is possible to target individuals precisely with manipulative content. Digital avatars can be created from the analysis of genomes, emotional and behavioral patterns, and observed in test situations. Human behavior is becoming more predictable. The end result of such advances is perfected techniques and tools of individual and mass manipulation. Today, then, we must ask ourselves: How will we live in a post-privacy age?
Despite the consensus regarding the obvious risks of excessively rapid production and auto-production of AI, the de facto forms and overarching extent of necessary international regulatory collaboration remain unthinkable, given the current international situation. For instance, the near-future prospect of Earth’s surface covered with solar cells and data centers is becoming more likely, given that increasingly complex forms of data require ever more energy in their creation, communication and storage. Yet real resistance to the consequently looming erosion of the natural environment is not even being addressed at a politically relevant level. Instead, contemporarily dominant ideologies misguidedly celebrate the fact that humanity is creating something that transcends itself. As a self-programming form of AI, AGI is about to learn how to learn and create its own, henceforth unpredictable, tasks. The similarity between the respective patterns of biological and technological evolution is becoming apparent: technology itself is becoming a force of nature.
According to Ilya Sutskever, an Israeli-Canadian computer scientist and co-founder of OpenAI, the central problem we face involves aligning the intentions of AI with human interests. If humankind succeeds in this, a better way of life in the interest of the common good would be conceivable. But if we fail, the sophistication of ‘fake news’ will increase infinitely, assisted by cyberattacks and AI weapons systems. In Sutskever’s words: ‘AI has the ability to create an infinitely stable dictatorship.’ Elsewhere, Michal Kosinski, Professor of Organizational Behavior and computational psychologist at Stanford University, predicts that the new global information economy will perfect the ability to lock people into echo chambers, polarizing them and inducing extremist outbursts. Machine intelligence will also replace millions of individuals within the present-day workforce, rendering all kinds of jobs economically useless. Human activity will become obsolete.
Yet another of AI’s applications, the interpretation of facial recognition data, is also not without controversy. If patterns are programmed that predict, on a physiological basis, the probabilities of individual tendencies toward, for example, criminality, resistant behavior, homosexuality, depression, and suicide, the succeeding step to disastrous mechanisms of preemptive restriction and control seems all-too-predictable. In such a dystopian near-future, human resistance to social injustice could end up enacting a self-fulfilling prophecy: AI-enabled control of resistance would further exacerbate social inequality. Added to this scenario would be perfected forms of surveillance technology that make it possible to reconstruct the social network of a resistant individual in reverse: the age of total social control would be ushered in. We still believe we control AI technologies. But how will AGIs decide on their application in the immediate future?
At the end of iHuman, director Tonje Hessen Schei returns once again to present-day AI’s most threatening aspect: autonomous weapons systems. While major tech companies refused to cooperate with the military complex for decades, Google, Amazon, IBM and others are now beginning to make their data available, for example, to enable total forms of aerial surveillance. Such companies’ data is also increasingly being made available to autonomous weapons systems. Technology is remaking human beings. The necessary forms of adaptation we undertake in response have already led to our increasing dependence on technology. The human habitat has already become a testing ground for other forms and orders of intelligence. And this is just the beginning…
Flash Wars: Autonomous Weapons, A.I. and the Future of Warfare
by Daniel Andrey Wunderer
Austria / 2024 / 94 min
In the Belly of AI
by Henri Poulain
France / 2025 / 75 min
The Hexagon Hive and the Mouse in a Maze
by Tilda Swinton, Bartek Dziadosz
UK / 2024 / 92 min
iHuman
by Tonje Hessen Schei
Norway, Denmark / 2019 / 99 min
Text by Dieter Wieczorek
Edited by Jonny Murray
Copyright FIPRESCI