
Crypto 2.0 and DAWCs: Dawn of Decentralized Autonomous Workers Councils.

Many of Bitcoin’s most vocal proponents, and many of its detractors, agree that the way the cryptocurrency operates technologically determines the form of the economy, and therefore the society, that uses it. That society would be anarcho-capitalist: lacking state institutions (anarcho-) but enforcing commodity property law (capitalist). If this is true then Bitcoin has the potential to achieve a far greater political effect than financial engineering efforts like the Euro or quantitative easing, and with far fewer resources. Perhaps variations on this technology can create alternatives to Bitcoin that determine, or at least afford, different socioeconomic orders.

Bitcoin is already more than half a decade old, and “Crypto 2.0” systems that build on its underlying blockchain technology (the blockchain is a network-wide shared database built by consensus; Bitcoin uses it for its ledger) are starting to emerge. The most advanced allow the creation of entire organizations, and systems of organization, on the blockchain, as Decentralized Autonomous Organizations (DAOs). We can use them to help create those different socioeconomic orders.

Workers’ Councils are a libertarian socialist system of organization. Rather than implementing Soviet-style centralized command economies, workers’ councils are decentralized and democratic. Workers in a particular workplace decide what their objectives are, then appoint temporary (and instantly revocable) delegates to be responsible for them. Workplaces appoint representatives to local councils, local councils appoint representatives to regional councils, and so on, always temporarily and revocably. It is a system of face-to-face socialisation and political representation rather than top-down control.

This system emerged at various times in Europe, South America and the Middle East throughout the Twentieth Century. It is a very human method of governance, in stark contrast to the “trustless” code of Bitcoin as well as to the centralized politics of the Soviets. That said, technology can assist organization as easily as it can support material production. In the 1970s the cordones of Chile interfaced with the Allende government’s Project Cybersyn network, and contemporary online workers collectives can use the Internet to co-ordinate.

A DAO is a blockchain-based program that implements an organization’s governance and controls its resources using code rather than law. There can be a fetishistic quality to the idea of cold, hard, unyielding software, perfect in its unambiguous transparency and incapable of human failing in its decision making. There can be similar fetishistic qualities to legal and political organizational perfectionism. This doesn’t disqualify any of their subjects as useful ideals, but they need to be tempered pragmatically.

Using the public code and records of a DAO can help with the well-known problem of structurelessness, and can store information more efficiently and reliably than a human being with a pen and paper. The much vaunted trustlessness of cryptocurrency and smart contract systems can help build trust in communication within and between groups – cryptographically signed minutes are relatively hard to forge, although the ambiguity of language is impossible to avoid even in the mathematics of software.
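The point about signed minutes can be made concrete. The sketch below (Python, using a shared-secret HMAC purely for illustration; a real DAO would use public-key signatures anchored on a blockchain) shows how any edit to the recorded minutes breaks verification:

```python
import hashlib
import hmac

def sign_minutes(minutes: str, key: bytes) -> str:
    """Produce a tamper-evident signature over the minutes text."""
    return hmac.new(key, minutes.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_minutes(minutes: str, key: bytes, signature: str) -> bool:
    """Check the minutes against a previously issued signature."""
    return hmac.compare_digest(sign_minutes(minutes, key), signature)

# Illustrative values, not any real council's records.
key = b"council-shared-secret"
minutes = "Meeting: delegate D charged with negotiating workshop space."
sig = sign_minutes(minutes, key)
```

Verification succeeds only on the exact recorded text; forging altered minutes requires forging the signature, which the signed record makes evident.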

The delegates of a workers’ council can be efficiently and transparently voted on, identified, and recalled using a DAO. This makes even more sense for distributed groups of workers: groups that share a common cause but lack a geographic centre. Delegates can even be implemented as smart contracts, code written to control resource allocation and evaluate performance in the pursuit of their objective (unless recalled by the council that created them).
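As a thought experiment, the appointment-and-recall mechanism might look something like the minimal Python sketch below. Every name, and the simple-majority recall rule, is an illustrative assumption, not a description of any existing DAO platform:

```python
from dataclasses import dataclass, field

@dataclass
class Delegate:
    name: str
    objective: str
    recalled: bool = False

@dataclass
class CouncilDAO:
    members: set = field(default_factory=set)
    delegates: dict = field(default_factory=dict)
    votes: dict = field(default_factory=dict)   # delegate name -> set of voters

    def appoint(self, name, objective):
        """Appoint a temporary delegate charged with one objective."""
        self.delegates[name] = Delegate(name, objective)
        self.votes[name] = set()

    def vote_recall(self, voter, name):
        """Record a recall vote; recall is instant once a majority agrees."""
        if voter in self.members and name in self.delegates:
            self.votes[name].add(voter)
            if len(self.votes[name]) * 2 > len(self.members):
                self.delegates[name].recalled = True

council = CouncilDAO(members={"ana", "ben", "chi"})
council.appoint("dara", "negotiate shared workshop space")
council.vote_recall("ana", "dara")
council.vote_recall("ben", "dara")   # 2 of 3 members: majority reached, recalled
```

On an actual blockchain the vote tally and recall flag would live in contract state, visible to all members; the logic is the same.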

Entire councils, and inter-council organisation, can be supported by or implemented as DAOs. Support includes communication and record keeping. Implementation includes control of resources, running delegates as code, and even setting objectives for delegates programmatically.

Image: The blockchain is a new model of governance

The latter finally brings the concept of DAOs into direct conflict with the spirit of the Workers’ Council. Councils exist to allow individual human beings to express and agree on their objectives, not to have them imposed from above. Being controlled by code is no better than political or economic control. It is the nature of this relationship to code, politics or the economy that is positive or negative: writing code to charge someone or something with seeing that a task is undertaken is no different from writing it in the minutes, and makes more explicit that organization is production, as the subject of work in itself. A democratic, recallable DAO that sets objectives is very different from a blob of capital with unchangeable orders to maximise its profits online.

The resources that a DAO controls need not be monetary (or tokenized). A DAO that controls access to property, energy or other resources can contribute to avoiding the pricing problem that conventional economics regarded as a showstopper for the Soviet cybernetic economic planning of “Red Plenty”. DAOs need not even be created to represent human organization – “deodands” can represent environmental commons as economic actors. These can then interact with workers council DAOs, representing environmental factors as social and economic peers and avoiding the neoliberal economic problems both of externalities and privatisation.

Workers’ Council DAOs – Decentralized Autonomous Workers Councils (DAWCs) – are science fiction, but only just. Workers’ councils have existed and been plugged in to the network; structurelessness and scalability are problems, and DAOs exist and can help with them. Simply tokenizing “sharing economy” (actually rentier economy) forms – for example replacing Uber’s taxi sharing with La’zooz – while maintaining the exploitative logic of disintermediation isn’t enough.

If we are unable or unwilling to accelerate the social and productive forces of technology to take us to the moon, we can at least embrace and extend them in a more human direction.

The text of this article is licensed under the Creative Commons BY-SA 4.0 Licence.

Being in the uncomfortable middle and the continued need for physical space

About a year ago Eleanor Greenhalgh started her project The Dissolute Image (TDI), a speculative, poetic image hosting technique. By splitting images into individual pixels and distributing them, it enables banned content to be secretly posted on corporate social platforms. TDI enables users to post a single pixel on their own social media page. All the entries are tracked by TDI and each pixel will re-appear on a dedicated website, eventually re-forming the image. I asked Eleanor about her motivation and interest in censorship and hosting issues.
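The mechanics described above can be illustrated with a toy sketch: an image dissolved into individually addressable pixel fragments, distributed in a scrambled order, and partially reassembled as fragments are “adopted”. This is a speculative reconstruction in Python, not Greenhalgh’s actual code:

```python
import random

def dissolve(image, seed=42):
    """Split a 2D image into ((x, y), value) fragments in a scrambled order."""
    fragments = [((x, y), value)
                 for y, row in enumerate(image)
                 for x, value in enumerate(row)]
    random.Random(seed).shuffle(fragments)
    return fragments

def reassemble(fragments, width, height, placed):
    """Rebuild a partial image from however many fragments have been adopted."""
    canvas = [[None] * width for _ in range(height)]
    for (x, y), value in fragments[:placed]:
        canvas[y][x] = value
    return canvas

image = [[1, 2], [3, 4]]                        # stand-in for pixel values
fragments = dissolve(image)
partial = reassemble(fragments, 2, 2, placed=2)  # only two pixels adopted so far
full = reassemble(fragments, 2, 2, placed=4)     # all fragments returned home
```

At TDI’s real scale (tens of thousands of pixels, each hosted on a different social media page), the `placed` count grows one volunteer at a time, which is the point.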

Annet Dekker (AD): Could you tell me a little bit about your background?

Eleanor Greenhalgh (EG): I did a fine art BA at Oxford Brookes in the UK, where I started working on participatory projects. I consider myself somewhere between a curator and a facilitator, but it is a role that I haven’t quite worked out. Through being involved in environmental activism, I became really fascinated by the way these kinds of groups organized themselves: non-hierarchical groups that tried to avoid replicating the types of hierarchies they were opposing. It is a really fascinating process because it doesn’t always go so well.

AD: Could you give an example of such conflict?

EG: Facebook, for example, hides its ideological biases behind fluffy language about wanting to make it a community space that’s safe for everybody, so you’re not going to come across offensive material. Whereas if you talk to an anarchist hosting collective, they will be honest and tell you that they’re not hosting stuff that they disagree with, because they consider it part of their activism and they’re not going to give resources to a cause that they disagree with. So how does that relate to the demand for solidarity? It’s a recurring problem. If you believe in building some kind of alternative, solidarity is essential. But where does your desire to show solidarity conflict with your own values, your own autonomy?

I think this is a source of deep ambivalence. On the one hand being autonomous, while at the same time being deeply vulnerable to the collective – whether relying on others to host your data, to back you up on a demonstration, or just look after each other on a very physical level. I want to expose this vulnerability, this ambivalence, which you find between the two extremes of total autonomy or total solidarity. Rather than choosing one of them, I’m interested in looking in the middle and asking, why is it that being in the middle is so uncomfortable, and why is there this temptation to flee to one of these two extremes?

AD: Why is being in the middle uncomfortable? Isn’t that the place that most people choose to be in?

EG: I think social life puts us in the middle, whether we choose it or not. To give an example from a campaign I’m involved with, for abortion rights: we use the rhetoric of ‘bodily autonomy’. Yet, this ‘autonomy’ relies upon medical care given by others. It can only exist because of other people. What’s uncomfortable about this fact is that it confronts us with our own vulnerability. The fantasy of an asocial autonomy is seductive (and dangerous); the idea that we could be self-sustaining, without the need to do politics.

AD: During your time at Piet Zwart Institute in Rotterdam you focused a lot on the issues of censorship and hosting, both in The Dissolute Image (TDI) and in Volunteer Hosts, where you asked people to physically host files which they didn’t know the contents of. What is your interest in hosting?

EG: I became interested in hosting from two angles. Firstly from a social angle, and the power dynamics of who hosts what and why. Secondly from a physical angle, the fact that data needs to live somewhere and our reliance on hardware and services that we don’t own and have little control over. I’m interested in how that relates to people who are trying to articulate an alternative. Also asking the question of whether hosting something is the same as endorsing it.

I have been watching for example the work that Freedom Box has been doing, developing a small server that you can carry with you. The emphasis is that this is on your body and it is in your house, which makes it harder to seize data because different laws apply when something is in your house. I am exploring or even arguing for the beauty of this kind of approach – the beauty of the continued need for physical space. Hakim Bey (who is good at rhetoric if not politics) said that ‘the question of land refuses to go away’. Meaning, if we want to build an alternative then it has to live somewhere – somewhere physical. And yes, that includes ‘the cloud’ – another rhetorical device which obscures this fact. I don’t think this need for embodiment is a weakness or an inconvenience, as feminists have long argued (Karen A. Franck’s early critique of virtual reality comes to mind). The fantasy of disembodiment – whether geographical, sexual, technological – usually serves those who want to avoid discussions of how these spaces and bodies are governed.

AD: What were the reactions of people on TDI?

EG: People found it really fun, which I didn’t expect. Although, the project is still in a very early stage so not many pixels have been adopted, and it’s not possible to see what the image is. If and when the image ever is completed I wonder how those people will feel, if they will retract their ‘vote’, or whether it will just be like so many other things online where you click OK and then you forget about it. But in the early stages people seem to be very engaged by it and they like this rhetoric of showing solidarity and being part of it.

AD: How did you select the image(s)?

EG: The question of motivation is what interested me in choosing an image, because it’s very easy to stand up and say, ‘I disagree with censorship’, but I don’t think it is as simple as to censor or not to censor. By asking people to adopt part of an image I’m trying to ask them where they draw their own limits: whether they will host something purely in the name of solidarity, or whether they need to know a little bit more about it before they are willing to give their resources or their endorsement to it. Without giving away too much, I have tried to choose images that would challenge the audience, so that people who are likely to say ‘yeah, that’s great, I’m against censorship’ would stop and think a little about what they are willing to give a platform to.

AD: What is the role of people who participate in your projects?

EG: The question I came up against with Volunteer Hosts was figuring out what the investment would be for the people participating in it, and also how to keep track of the files. Would this count as an archive if there is no way of tracking the files that have been put in it? I think that could be quite nice as a gesture: you create an archive which is then scattered, and you have no way of knowing whether those USB keys have just been immediately wiped and had more interesting things put on them, or whether people really have faithfully held on to them. Maybe that is where gathering feedback becomes really important. It seems quite important to know, although there is also a beauty, maybe, in not knowing and somehow just surrendering your files.

I’m trying to experiment with how much you can remove something from its context while it still has enough meaning to be engaging. There is something really beautiful for me about single pixels where you really have no idea what the image could be. TDI has over 95,000 pixels, so it is highly unlikely that this image will ever be completed, and that’s obviously built into the design of the project itself. The fact that it would take so long, and so many people, is for me a source of beauty. I think it can be fine to use a game-like mentality to engage people, if you get them to think about it. If you agree to participate in something without really knowing what it is, you are probably going to be quite interested in finding out what it is as that thing gradually emerges. It’s the inherent excitement of thinking you have a stake in something, and that therefore perhaps it will affect you. It is fascinating to take a whole and break it into lots of tiny pieces, or take tiny pieces and bring them together.

AD: So your main interest is in the conversation or a discussion?

EG: Yes, I think it is important to have some form in which that conversation happens. I try to capture the reactions of people who participate in the things that I make. That is maybe part of my background in facilitation, and my interest in counselling. The truth of something is in the feelings that it provokes. It is in trying to find out what the subject position is, or feels like, of somebody who is called upon to transmit the content of other people. What are the investments in there, why would you do it? What are the dilemmas that they face? At what point will you stop doing it, or under what conditions? Would you either withdraw your agreement, or put more conditions on it?

I’d like to argue for the value of simply reflecting, but also acknowledging my stake in it as an individual. I feel I have to try and resist a pragmatic attempt to somehow merely utilize the information, or to identify it as an activist act; there doesn’t necessarily need to be an outcome. It can be quite uncomfortable to admit that maybe you don’t have all the answers, or maybe there are contradictions in your approach. I’m trying to get to that point where those anxieties and uncomfortable feelings sit.

AD: How do you relate that back to yourself, what is your role?

EG: I am heavily influenced by my training as a facilitator, using an anarchist model where the facilitator is not the boss or the chair of the meeting but they are really in service to the group. In this model there are two qualities needed by a facilitator: being assertive and being neutral. It is an immensely powerful way of thinking, that you could be really assertive in, for example, designing a project and setting boundaries (kicking out spammers, people who are dominating the group), but at the same time being completely neutral in a sort of psychoanalytic way, while looking at the content. Anything that comes in, you hold in that space. On the other hand it is a complete contradiction; how can you be assertive and yet also neutral? You are always making decisions about what counts as spam versus what counts as a valid input. Perhaps it is a parallel dilemma to the one I mentioned, between solidarity and autonomy. These are the difficult and interesting questions of doing radical politics. Or doing any kind of democracy. So, while it is contradictory in many ways, I have seen this technique of neutral facilitation being used to incredible effect, and it’s one that I adopt. I think not having the answers, not determining the outcome, and being vulnerable to other people are beautiful ethical positions.

—-
+ For more information about Eleanor Greenhalgh: http://eleanorg.org/

Algorithms and Control

Featured image: The Simplex Algorithm

Algorithms have become a hot topic of political lament in the last few years. The literature is expansive: Christopher Steiner’s upcoming book Automate This: How Algorithms Came to Rule Our World attempts to lift the lid on how human agency is largely helpless in the face of precise algorithmic bots that automate the majority of daily life and business. The matter is also being approached historically, with John MacCormick’s Nine Algorithms That Changed the Future (with a foreword by Chris Bishop) outlining the specific construction and broad use of these procedures: Google’s powerful PageRank algorithm, and others used in string searching (e.g. regular expressions), cryptography and compression, and Quicksort for database management. The Fast Fourier Transform, first published in 1965 by J. W. Cooley and John Tukey, was designed to compute the much older mathematical construct of the Discrete Fourier Transform,* and is perhaps the most widely used algorithm in digital communications, responsible for breaking down irregular signals into their pure sine-wave components. However, the point of this article is to critically analyse what the specific global dependencies of algorithmic infrastructure are, and what they are doing to the world.
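That decomposition can be illustrated with the naive discrete Fourier transform the FFT accelerates. This Python toy (an O(n²) DFT, not a real FFT implementation) shows a pure sine wave resolving into a single dominant frequency component:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: the O(n^2) computation that the
    FFT performs in O(n log n)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A signal containing a single sine wave at 3 cycles per window...
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
magnitudes = [abs(c) for c in dft(signal)]
# ...resolves to one dominant component at bin 3 (mirrored at bin n - 3);
# every other bin is numerically zero.
```

An irregular signal is just a sum of such sine waves, so the same transform breaks it into its components; the FFT’s contribution is doing this fast enough for real-time communications.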

A name which may spring to mind is Kevin Slavin: former employee of Zynga, founder of the social games company Area/Code, and self-described ‘entrepreneur, provocateur, raconteur’. In his famously scary TED talk, Slavin outlined the lengths Wall Street traders were prepared to go to in order to construct faster and more efficient algo-trading transactions: Spread Networks building an 825-mile, ‘one signal’ trench between NYC and Chicago, or gutting entire NYC apartments, strategically positioned, to install heavy-duty server farms. All of this effort, labelled as ‘investment’, for the sole purpose of transmitting a deal-closing, revenue-building algorithm which can be executed 3–5 microseconds faster than all the other competitors’.
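A back-of-envelope calculation shows why the trench was worth digging: at fibre-optic signal speeds, even a ruler-straight 825-mile route has a hard physical latency floor in the milliseconds, so every mile shaved off matters. The figures below are rough assumptions, not Spread Networks’ published numbers:

```python
# Light in optical fibre travels at roughly two-thirds of c, so route length
# sets a hard lower bound on one-way latency between NYC and Chicago.
route_miles = 825
route_m = route_miles * 1609.34          # route length in metres
c = 299_792_458                          # speed of light in vacuum, m/s
fibre_speed = c * 2 / 3                  # typical signal speed in fibre, m/s
one_way_ms = route_m / fibre_speed * 1000  # about 6.6 ms one way
```

Against a floor of several milliseconds, a microsecond edge in execution is the difference between closing the trade and watching a rival close it.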

A subset of this are purposely designed algorithms which make speedy micro-profits from large volumes of trades, a practice known as high-frequency (or high-speed) trading (HFT). Such trading times can be divided into billionths of a second on a mass scale, with the ultimate goal of making trades before any possible awareness from rival systems. Other sets of trading rely on unspeakably complicated mathematical formulas to trade on brief movements in the relationships between securities. With little to no regulation (as you would expect), the manipulation of stock prices is an already rampant activity.

The Simplex Algorithm, originally developed by George Dantzig in the late 1940s, is widely responsible for solving large-scale optimisation problems in big business, and (according to the optimisation specialist Jacek Gondzio) it runs at tens, probably hundreds, of thousands of calls every minute. With its origins in multidimensional geometry, the Simplex method arrives at optimal solutions for maximising profit or orienting extensive distribution networks through constraints. It’s a truism in certain circles to suggest that almost all corporate and commercial CPUs are executing Dantzig’s Simplex algorithm, which determines almost everything from work schedules and food prices to bus timetables and trade shares.
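The kind of problem the simplex method solves can be shown at toy scale. Its key geometric fact is that the optimum of a linear program always sits at a vertex of the feasible polytope; with only two variables we can simply enumerate every vertex rather than walk them as simplex does. Constraint values and profits below are invented for illustration:

```python
from itertools import combinations

# Maximise profit 3x + 2y subject to resource constraints.
# Each constraint (a, b, c) means a*x + b*y <= c.
constraints = [
    (1, 1, 40),   # total labour hours
    (2, 1, 60),   # machine time
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    """Intersection point of two constraint boundaries, or None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    """Does the point satisfy every constraint?"""
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices are feasible intersections of constraint pairs;
# the optimum is the vertex with the highest profit.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

Real instances have thousands of variables, so vertices cannot be enumerated; simplex’s trick is to pivot from one vertex to a profit-improving neighbour until no improvement remains.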

But on a more basic level, within the supposedly casual and passive act of browsing information online, algorithms are constructing more and more of our typical experiences on the Web. Moreover, they are constructing and deciding what content we browse for. A couple of weeks ago John Naughton wrote a rather Foucauldian piece for the Guardian online, commenting on the multitude of algorithmic methods which secretly shape our behaviour. It’s the usual rhetoric, with Naughton treating algorithms as if they silently operate in secret, through the screens in global boardrooms and the shadowy corners of back offices, dictating the direction of our world, X-Files style.

‘They have probably already influenced your Christmas shopping, for example. They have certainly determined how your pension fund is doing, and whether your application for a mortgage has been successful. And one day they may effectively determine how you vote.’

The political abuse here is retained in the productive means of generating information and controlling human consumption. Naughton cites an article last month by Nick Diakopoulos, who warns that not only are online news environments saturated with generative algorithms, but those algorithms also reveal themselves to be biased while masquerading as ‘objective’. The main flaw is ‘summarisation’: relatively naive decision criteria, inputted into a functional algorithm (no matter how well designed and well intentioned), can produce biased outputs that exclude or prioritise certain political, racial or ethical views. In another TED talk, Eli Pariser makes similar comments about so-called “filter bubbles”: unintended consequences of personalised editing systems which narrow news search results, because highly developed algorithms interpret your historical actions and specifically ‘tailor’ the results. Presumably it’s for liberal self-improvement, unless one mistakes self-improvement for technocratic solipsism.

Earlier this year, Nextag CEO Jeffrey Katz wrote a hefty polemic against the corporate power of Google’s biased PageRank algorithm, expressing doubt about its capability to objectively return results for companies other than its own partners. This was echoed in James Grimmelmann’s essay ‘Some Skepticism About Search Neutrality’, for the collection The Next Digital Decade. Grimmelmann gives a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank doesn’t simply decide relevant results, it decides visitor numbers, and he concluded on this note:

‘With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them.’

But let’s think about this: it’s not as if, on a formal, computational level, anything has changed. Algorithmic step-by-step procedures are, mathematically speaking, as old as Euclid. Very old. Indeed, reading this article wouldn’t even be possible without two algorithmic constructs in particular: the Universal Turing Machine, the theoretical template for programming, sophisticated enough to mimic all other Turing Machines; and the 1957 Fortran compiler, the first complete algorithm to convert source code into executable machine code, a pioneer that paved the way for early languages such as COBOL.
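The Universal Turing Machine idea is small enough to sketch. The following minimal simulator (an illustration, not Turing’s own formulation) runs any rule table over a tape; here, a machine that flips every bit it reads:

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Minimal Turing machine: rules map (state, symbol) to
    (new_state, symbol_to_write, head_move)."""
    cells = dict(enumerate(tape))        # sparse tape, '_' means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that inverts every bit, then halts at the blank end of the tape.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
result = run_turing_machine(flipper, "1011")
```

The universality claim is that the rule table is itself just data, so one machine of this shape can be fed a description of any other and imitate it.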

Moreover, it’s not as if computation itself has become more powerful; rather, it has been given a larger, more expansive platform to operate in. The logic of computation, the formalisation of algorithms (the ‘secret sauce’, as Naughton whimsically puts it), has simply fulfilled its general purpose, which is to say it has become purposely generalised in most, if not all, corners of Western production. As Cory Doctorow put it at 2011’s 28c3 and throughout last year: ‘We don’t have cars anymore, we have computers we ride in; we don’t have airplanes anymore, we have flying Solaris boxes with a big bucketful of SCADA controllers.’ Any impact on one corner of computational use affects any other, similar automation.

Algorithms in themselves, then, haven’t really changed; they have simply expanded their automation: securing, compressing, trading, sharing, writing, exploiting. Even ‘machine learning’, a name which conjures myths of self-awareness and intelligence, exists only to make lives easier through automation and function.

The fears surrounding their control are an expansion of this automated formalisation, not something remarkably different in kind. It was inevitable that in a capitalist system, effective procedures which produce revenue would be increasingly automated. So one should make the case that the controlling aspect of algorithmic behaviour be tracked within this expansion (which is not to say that computational procedures are inherently capitalist). To understand algorithmic control is to understand what the formal structure of algorithms is and how they are used to construct controlling environments. Before one can inspect how algorithms are changing daily life and environmental space, it is helpful to understand what algorithms are and, on a formal level, how they both work and don’t work.

The controlling, effective and structuring ‘power’ of algorithms is simply a product of two main elements intrinsic to the formal structure of the algorithm itself, as originally presupposed by mathematics: Automation and Decision. If it is to be built for an effective purpose (capitalist or otherwise), an algorithm must simultaneously do both.

For automation purposes, the algorithm must be converted from a theoretical procedure into an equivalent automated, mechanical ‘effective’ procedure (inadvertently, this is an accurate description of the Church–Turing thesis, the conjecture which formulated the initial beginnings of computing in its mathematical definition).

Although it is sometimes passed over as obvious, algorithms are also designed for decisional purposes. An algorithm must be programmed to ‘decide’ on a particular user input, or to decide what is the best optimal result from a set of possible alternatives. It has to be programmed to decide the difference between a query which is ‘profitable’ or ‘loss-making’, or a set of shares which are ‘secure’ or ‘insecure’; to decide the optimal path amongst millions of variables and constraints; or to locate the difference between ‘feed for common interest’ and ‘feed for corporate interest’. Any discussion of the predictive nature of algorithms rests on the suggestion that the algorithm can decide an answer, or reach the end of its calculation.

Code both elements together consistently and you have an optimal algorithm which functions effectively, automating the original decision as directed by the individual or company in question. This is what can typically be denoted as ‘control’: determined action at a distance. But that doesn’t mean an algorithm suddenly emerges with both elements from the start. They are not the same thing, although they are usually mistaken to be: negotiations must arise over which elements are to be automated and which are to be decided.

But code both elements, or either element, inconsistently and you have a buggy algorithm, no matter what controlling functionality it’s used for. If it is automated but can’t ultimately decide on what is profit or loss, havoc ensues. If it can decide on optimised answers but can’t be automated effectively, then its accuracy and speed are only as good as those controlling it, making the algorithm’s automation ineffective, unreliable, or only as good as human supervision.
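The separation between the two elements can be caricatured in a few lines of code: a decision procedure that answers one question, and an automation loop that applies it with no human in the loop. Names and thresholds here are invented for illustration, not any real trading system:

```python
def decide(price, threshold=100):
    """Decision: classify a single input as 'buy' or 'sell'."""
    return "buy" if price < threshold else "sell"

def automate(prices, decision):
    """Automation: apply the decision to every input, unsupervised."""
    return [decision(p) for p in prices]

orders = automate([90, 110, 95, 130], decide)

# Break either element and 'control' breaks with it: an inconsistent rule
# corrupts every automated run at machine speed, while a sound rule applied
# only by hand loses the speed that made automating it worthwhile.
```

The ‘control’ the text describes is exactly this composition: the decision fixes what counts as an answer, the automation enacts it at a distance and at speed.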

“Algorithmic control”, then, is a dual product of getting these two elements to work, and my suggestion here is that any resistance to that control comes from separating the two, or at least understanding and exploiting the pragmatic difficulties of getting the two to work. So, looking at both elements separately (and very quickly), there are two conflicting political issues going on, and thus two opposing mixtures of control and non-control.

Firstly, there is the privileging of automation in algorithmic control. This view, as Slavin asserts, examines algorithms as unreadable “co-evolutionary forces” which one must understand alongside nature and man. The danger that faces us consists in blindly following the automated whims of algorithms no matter what they decide or calculate; decision-making is limited to the speed of automation. This is a view of surrendering calculation and opting for speed and blindness. These algorithms operate as perverse capitalist effective procedures, supposedly generating revenue and exploiting users well enough on their own (and better than any human procedure); the role of their creators and co-informants is best suited to improving the algorithm’s conditions for automation or increasing its speed of calculation.

Relative to the autonomous “nature” of algorithms, humans are likely to leave them unchecked and unsupervised, and in turn they lead to damaging technical glitches which inevitably cause certain fallouts, such as the infamous “Flash Crash” loss and regain of May 6th, 2010 (it’s worrying to note that, two years on, hardly anyone knows exactly why this happened, precisely insofar as no answer was decided). The control established in automation can flip into an unspeakable mode of being out of control, or being subject to the control of an automaton, the consequences of which can’t be fully anticipated until they ruin the algorithm’s ability to decide an answer. The environment is now subject to its efficiency and speed.

But there is also a contradictory political issue concerning the privileging of decidability in algorithmic control. This, as Naughton and Katz suggest, is located in the closed elements of algorithmic decision and function: algorithms built specifically to decide results which only favour and benefit the ruling elite who built them for specific effective purposes. These algorithms not only shape the way content is structured; they also shape access to online content itself, determining consumer understanding and its means of production.

This control occurs in the aforementioned Simplex Algorithm, the formal properties of which decide nearly all commercial and business optimising: from how best to roster staff in supermarkets, to deciding how much finite machine resource can be used in server farms. Its control is global, yet it too faces a problem of control, in that its automation is limited by its decision-making. Thanks to a mathematical conjecture originating with the US mathematician Warren Hirsch, there is no developed method for finding a more effective algorithm, with serious future consequences for maximising profit and minimising cost. In other words, the primacy of decidability reaches a point where its automation is struggling to support the real world it has created. The algorithm is now subject to the working environment’s appetite for efficiency and speed.

This is the opposite of privileging automation: the environment isn’t reconstructed to speed up the algorithm’s automation capabilities irrespective of answers; rather, the algorithm’s limited decision capabilities are subject to an environment which now desires more solutions and more answers. If the modern world cannot find an algorithm which decides more efficiently, modern life reaches a decisive limit. Automation becomes limited.

——————–

These are two contradictory types of control; once one is privileged, the other recedes from view. Code an algorithm to automate at speed, and risk automating undecidable, meaningless, gibberish output; code an algorithm to decide results completely, and risk the failure to be optimally autonomous. In both cases the human dependency on either automation or decision crumbles, leading to unintended disorder. The whole issue does not lead to any easy answers; instead it leads to a tense, antagonistic network of algorithmic actions struggling to fully automate or decide, never entirely obeying the power of control. Contrary to the usual understanding, algorithms aren’t monolithic, characterless beings of generic function, to which humans adapt and adopt, but complex, fractured systems to be negotiated and traversed.

In between these two political issues lies our current, putrid situation as far as the expansion of computation is concerned: a situation about which computational artists have more to say than perhaps they think they do. Their role is more than mere commentary; it is a mode of traversal. Such an aesthetics has the ability to roam the effects of automation and decision, examining their actions even while being, in turn, determined by them.

* With special thanks to the artist and writer Paul Brown, who pointed out the longer history of the FFT to me.