Algorithmic Regimes – Abstracts

Matthew Fuller and Graham Harwood

Algorithms are not Angels

The term “algorithm” means a logically described, step-by-step procedure. It has recently become a popular, if not dominant, term for describing the power that computational processes exert in contemporary forms of life. Algorithms are used in all systems that involve computers, from traffic control to rituals of mate-selection and war-fighting. They select targets and enact processes. Certain kinds of algorithm extract novel readings or predictions from aggregates of data. Understanding their actions and bringing them under ethical scrutiny is important. However, algorithms do not act alone. They require numerous other factors to operate, not least an organised relation to data and data-structures. Indeed, the recent critical attention to algorithms should be seen as only a useful step towards a more substantial understanding of computational systems in culture, politics and everyday life.
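
To make the opening definition concrete – a minimal, hypothetical sketch, not an example from the talk – consider how little of a “decision” lives in the procedure itself: the toy selection routine below simply ranks whatever records it is handed, and everything consequential is fixed upstream, in how those records were structured, encoded and scored.

```python
# A toy "selection" algorithm: rank records by a pre-computed score.
# The procedure is trivial; the consequential choices live upstream,
# in how the records were structured, encoded and valued.

def select_targets(records, score_field="score", top_n=3):
    """Return the top_n records, ordered by their score field."""
    return sorted(records, key=lambda r: r[score_field], reverse=True)[:top_n]

records = [
    {"id": "a", "score": 0.91},
    {"id": "b", "score": 0.42},
    {"id": "c", "score": 0.77},
]

print(select_targets(records, top_n=2))  # records "a" and "c", in that order
```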

This talk will examine algorithmic processes in the context of the wider socio-technical ensembles in which they operate. We will look at two specific cases: firstly, the Afghan War Diaries files released by Chelsea Manning; secondly, an addiction services database, through which we will examine how healthcare services are modulated by the structures of databases. In both cases, significant decisions are made not by algorithms alone, but by forms of data, command structures, processes of valuation, law, and means of encoding and storing data, among other factors. The recent turn to algorithmic ethics needs to understand the material contexts in which algorithms operate if it is not to idealise forms of effective procedure – like some kind of mathematical angels – as having primary agency. Understanding the wider ecology in which algorithms operate is essential to distinguishing those cases where particular kinds of power are held and enacted by algorithms. This talk aims to provide some grounds for such work.

Matthew Fuller is the author of the forthcoming ‘How to Sleep, in art, biology and culture’ (Bloomsbury). Other titles include ‘Media Ecologies, materialist energies in art and technoculture’ (MIT), ‘Behind the Blip, essays on the culture of software’ and ‘Elephant & Castle’ (both Autonomedia). With Usman Haque, he is co-author of ‘Urban Versioning System v1.0’ (ALNY), and with Andrew Goffey, of ‘Evil Media’ (MIT). Editor of ‘Software Studies, a lexicon’ (MIT) and co-editor of the journal Computational Culture, he is involved in a number of projects in art, media and software. He is Professor of Cultural Studies and Director of the Centre for Cultural Studies, Goldsmiths, University of London.

Graham Harwood is well known for his engagement with controversial projects, sensitively bringing conflicting views together through his work with secure hospitals, asylum seekers, local authorities, museums and the National Health Service. Over 25 years, his work with his partner Matsuko Yokokoji has entangled their collaboration YoHa in the ethics of media systems, representation, anonymity and highly sensitive data. His current research explores how art as a method of enquiry can be used to reveal logics as a fluid strategy of power. Graham is the convenor of the MA Interactive Media at the Centre for Cultural Studies, Goldsmiths, University of London, and a co-editor of Computational Culture.


Peter Purgathofer

Software will take over the World

Software, and the algorithms it encapsulates, turns out to be the ultimate meta-technology. It can be argued that everything will become software, even beyond simple algorithmic thinking. The emergent, post-algorithmic behavior of software is already so complex that it can no longer be explained by simple logic. Without much caution, researchers are developing software that may one day take over the world. This presentation will discuss several cases and applications to illustrate what software can do today.

Peter Purgathofer is associate professor at the Institute of Design and Assessment of Technology, Vienna University of Technology. His work focuses on the intersection of design and technology, most prominently interaction design, user experience and game design, as well as the areas of tension between society and technology. He is coordinator of the media informatics master program and, as a part-time artist, won an Award of Distinction at the Prix Ars Electronica.


Thomas Sturm

Rationality, Reason, and Formal Rules: Reflections from the Cold War

What can it mean to be rational, especially in a world that seemed to be on the brink of thermonuclear destruction? During the Cold War, this fundamental question engaged the sharpest minds. Which theories of rationality could be invoked to explain human behavior, especially in the domains of international relations, war, and military strategy? Could theories also be developed for the resolution of political problems, thus providing clear normative guidance? Could the rules be given an axiomatic structure and applied to various domains of science and society in a strictly determinate fashion? Today’s fragmentation in the study of rationality undermines the Cold War hope for a unified concept of rationality. It also reveals how more traditional notions of “reason”, understood as mindful deliberation over when and how to apply a rule, have returned to the scene.

Thomas Sturm is ICREA Research Professor at the Department of Philosophy and the Center for the History of Science at the Autonomous University of Barcelona, and Associate Research Fellow at the Wilhelm Wundt Center for Philosophy & History of Psychology, Universidade Federal de Juiz de Fora (Brazil). Sturm’s research centers on Immanuel Kant’s philosophy, the philosophy of mind and psychology, and the relations between the history and philosophy of science, with emphasis on the topic of rationality. He is co-author (with L. Daston, M. Gordin, P. Erickson, J. Klein, and R. Lemov) of “How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality” (University of Chicago Press, 2013), and has published numerous articles in leading journals.


Antoinette Rouvroy

Algorithmic Governmentality

Algorithmic governmentality constitutes an unprecedented mode of government which addresses individuals not as self-willed subjects, but through their profiles – behavioral patterns produced on a purely inductive basis, derived from infra-personal, meaningless but quantifiable and relatable signals (raw data and metadata). This new mode appears as personalization (marketing), augmented reality (entertainment) or flexible security (law enforcement and fraud detection). Algorithmic governance knows (in)dividuals not as persons endowed with (real or supposed) capabilities of will and understanding, but only as temporary aggregates of data, exploitable and manageable on an industrial scale.

Antoinette Rouvroy is a doctor of Laws of the European University Institute, a permanent research associate at the Belgian National Fund for Scientific Research and senior researcher at the Research Centre Information, Law and Society, Law Faculty, University of Namur.


Reinhard Kreissl

All-go-rithmic, No-go-rithmic: Problems of Predictive Policing

The idea of using historical data about crime and other social processes to improve the resource allocation of police forces has been one of the hot topics in policing discourse in recent years. While system providers present their success stories, pointing to decreasing crime rates in areas where local police forces implemented their tools, the evidence is not convincing. Problems of predictive policing will be discussed from a technical and data perspective as well as from a theoretical and conceptual perspective, to demonstrate the limits and unintended side effects of this new approach to police work.
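
As a schematic of the approach under discussion (an assumption about how such systems typically work, not any vendor’s actual product): historical incident counts per map cell are read as a forecast of future risk. The sketch below also marks where the feedback problem enters, since patrols sent to “hot” cells record more incidents there, reinforcing the ranking.

```python
from collections import Counter

# Hypothetical hotspot forecast: score each grid cell by its
# historical incident count and patrol the top-ranked cells.
# Note the feedback loop: patrolled cells produce more recorded
# incidents, which raises their score in the next iteration.

past_incidents = ["cell_3", "cell_7", "cell_3", "cell_1", "cell_3", "cell_7"]

def hotspot_ranking(incidents, top_n=2):
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(hotspot_ranking(past_incidents))  # ['cell_3', 'cell_7']
```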

Reinhard Kreissl holds a PhD in sociology and is the director of VICESSE (www.vicesse.eu). His main areas of research relevant to this topic are security studies, policing, sociology of law, and criminology. He has coordinated major European and national research projects and acted as a consultant to the EC and national governments. His most recent publication is “Surveillance in Europe” (London: Routledge, 2015), edited together with David Wright.


Btihaj Ajana

Projective Cultures: The case of the Quantified Self and Health 2.0

Recently, the use of algorithms, data and metric technologies has invaded many spheres of production, knowledge and expertise. From marketing and advertising to healthcare and bioinformatics, various fields are currently exploring the possible benefits and challenges of collecting and using large data sets for different purposes and in different contexts. In my contribution I will consider some examples in which data and their analytics are deployed to predict certain activities and pre-empt future events. In doing so, I will discuss some of the ethical issues raised by such data-driven practices, focusing on categorization and profiling, the projective and predictive nature of data science and its approach to the future, and the implications for understandings and practices of identity. Finally, I invoke the philosophy of Jean-Luc Nancy to offer alternative signposts for reconceiving the future beyond technocracy and prediction.

Btihaj Ajana is a Senior Lecturer in the Departments of Media, Culture and Creative Industries, and Digital Humanities at King’s College London. Her teaching and research interests are concerned with the areas of culture and identity, ethics and politics, and the philosophy of digital media. She has published on ICT-based surveillance and is the author of “Governing through Biometrics: The Biopolitics of Identity” (Palgrave, 2013), a book that provides a critical and multi-level analysis of the various socio-political and ethical implications of identity systems in relation to the field of immigration and citizenship. She is currently conducting further research in this area, looking at the application of Big Data analytics in the governance of mobility, in addition to researching the ontological and ethical aspects of the Quantified Self movement.


Francesca Musiani

Omnipresence, Invisibility and Classification: The Power and Politics of Making Data Intelligible

The issue of information classification and organization has perhaps never been as relevant as in our current times of “information overload” and internet-mediated access to the vast majority of the information surrounding us. The algorithms embedded in the information and communication technologies we use daily are (also) artefacts of governance, arrangements of power and “politics by other means” (Latour). Examples drawn from the field of Internet services and from recent attempts at “regulation by algorithms” (e.g. the French loi sur le renseignement) will serve as empirical points of entry into a discussion of the power and politics inherent in “making data intelligible” in the era of information pervasiveness, in terms both of institutions’ regulation of algorithms and of algorithms’ regulation of our society.

Francesca Musiani is a researcher at the Institute for Communication Sciences (ISCC, CNRS/Paris-Sorbonne/UPMC) and an associate researcher at the Centre for the Sociology of Innovation of MINES ParisTech-PSL, France. Francesca’s research focuses on Internet governance, from an interdisciplinary perspective blending information and communication sciences with Science and Technology Studies (STS). Francesca is a former Yahoo! Fellow in Residence at Georgetown University, the recipient of the 2013 Informatique et Libertés Award of the French Privacy and Data Protection Commission, and a member of the French Parliamentary Commission on Law and Liberties in the Digital Age.


Olga Goriunova

Data Subjects

This talk will explore the return, or birth, of the subject in the context of the computational, which produces something that can be addressed as a digital subject. I propose to focus on some of the processes through which digital subjects are constructed out of data by patterning and modelling, put in relation to each other, acted upon, and become enactive themselves. In particular, I plan to focus on Facebook’s look-alike model to explore some of the emerging machines of indexicality.
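
By way of illustration, a hypothetical sketch of look-alike modelling in general – not Facebook’s actual system: seed users are averaged into a profile vector and other users are ranked by their similarity to it, so that what gets addressed is an aggregate pattern rather than a person.

```python
import math

# Hypothetical look-alike sketch: average the seed users' feature
# vectors into a centroid, then rank candidate users by cosine
# similarity to that centroid. The "subject" addressed is the
# pattern, not any individual.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def look_alikes(seed_vectors, candidates, top_n=2):
    dim = len(seed_vectors[0])
    centroid = [sum(vec[i] for vec in seed_vectors) / len(seed_vectors)
                for i in range(dim)]
    ranked = sorted(candidates.items(),
                    key=lambda item: cosine(item[1], centroid),
                    reverse=True)
    return [user for user, _ in ranked[:top_n]]

seeds = [[1.0, 0.0, 0.5], [0.8, 0.1, 0.6]]
candidates = {"u1": [0.9, 0.0, 0.5], "u2": [0.0, 1.0, 0.0], "u3": [0.7, 0.2, 0.4]}
print(look_alikes(seeds, candidates))  # ['u1', 'u3']
```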

Olga Goriunova is Associate Professor at Warwick University. She is a co-editor of “Computational Culture, A Journal of Software Studies” (http://computationalculture.net), editor of “Fun and Software: Exploring Pain, Pleasure and Paradox in Computing” (Bloomsbury, 2014), and author of “Art Platforms and Cultural Production on the Internet” (Routledge, 2012), among other things. She has also been involved in organizing a number of exhibitions and projects related to software art and digital culture more broadly.


Paolo Ruffino

Acephalous Algorithms: Alternative Forms of Life in Video Games

The gaming industry has recently started using algorithms to generate 3D game environments. The game “No Man’s Sky”, not yet released, has attracted attention since its first announcement in 2014 because of its procedurally generated online environment. I will draw on the work of Roger Caillois, who looked at the ways in which we make sense of the relation between nature and ourselves. His ‘diagonal’ approach to the rationalization of nature and life, which led him to work within the Surrealist movement in late-1930s France, could provide an alternative perspective on the practice of using algorithms to reproduce living environments.
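
For readers unfamiliar with the technique, a minimal sketch of seeded procedural generation (an illustrative assumption, not the engine of “No Man’s Sky”): the same seed always re-derives the same “world”, so vast environments need never be stored, only regenerated on demand.

```python
import random

# Minimal procedural generation: a deterministic seed expands into
# a strip of terrain. Nothing is stored; the "world" is re-derived
# on demand from the seed, which is how vast environments stay cheap.

def generate_terrain(seed, width=10):
    rng = random.Random(seed)  # local RNG, so the same seed is reproducible
    tiles = ["water", "sand", "grass", "rock"]
    return [rng.choice(tiles) for _ in range(width)]

print(generate_terrain(42))                          # same seed, same strip
print(generate_terrain(42) == generate_terrain(42))  # True
```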

Paolo Ruffino is a Lecturer at Goldsmiths and London South Bank University. His research focuses on video game studies, media and cultural studies, art criticism, semiotics and the philosophy of language. With the artist group IOCOSE he explores alternative narratives surrounding new media and technologies. paoloruffino.com


Gerald Raunig

NO FUTURE. Dividual lines against the appropriation of our present becoming by algorithmic futures

“Those who fell prey to the future seek advice from the soothsayers”, wrote a leftist intellectual in 1940 – and one could add today: advice from brokers, economists, and analysts. At stake is not just the colonization of our future by today’s soothsayers, but a brutal future of algorithms colonizing our past and our present. And as the algorithms traverse individuals with their dividual lines, we have to invent dividual lines too, and bundle them into molecular revolution.

Gerald Raunig is a philosopher who works at the Zürcher Hochschule der Künste and at the eipcp (European Institute for Progressive Cultural Policies). His books have been translated into English, Serbian, Spanish, Slovenian, Russian, Italian, and Turkish. Recent books in English include “A Thousand Machines”, 2010; “Factories of Knowledge, Industries of Creativity”, 2013, and “DIVIDUUM. Machinic Capitalism and Molecular Revolution, Vol.1”, 2016, all translated by Aileen Derieg and published by Semiotext(e)/MIT Press.


Felix Stalder

Felix Stalder is a Professor for Digital Culture at the Zurich University of the Arts, a senior researcher at the World Information Institute in Vienna and, since 1995, a moderator of nettime, a critical nexus of the discourse on net culture. His work focuses on the intersection of cultural, political and technological dynamics, in particular on new modes of commons-based production, copyright, new spatial patterns, and the transformation of subjectivity.


Konrad Becker

Konrad Becker is an interdisciplinary researcher and artist. Director of World-Information.Net, a cultural intelligence agency, he is associated with several renowned projects of advanced cultural practice, including Public Netbase (1994 to 2006). Having initiated many international conferences and exhibitions on technology, arts and culture, he has published a large number of audiovisual productions, articles and books in several languages. A pioneering hypermedia wizard, he has been active in electronic media as an artist, author and composer, as well as curator, producer and organizer.