At recent seminars I attended, I heard more and more talks about the ethics of algorithms. At Jerome Ravetz’s 90th birthday celebration, Urmie Ray made the case for introducing ethics into the curriculum of mathematicians – a field traditionally alien to humanistic concerns such as ethics. Ray used the example of Tinder, a dating app that, if successful, may lead to users finding their life partners and having children, and would then turn into the biggest ethnic programming project yet. What is new about this situation is that the skills of the mathematician find a direct application in the world “out there,” through the creation of algorithms for apps, social media and search engines, mediated by neither an ethics committee nor any governmental body. “Value-free” mathematical techniques are applied directly to the world, handing value choices over to algorithms, outside the human realm. This situation has been described as post-humanism – and complexity is increasingly invoked to help make sense of relational challenges in human and non-human interaction.

First image: the case of self-driving cars.

At the symposium on “The politics of uncertainty” last July, Jack Stilgoe presented the case of self-driving cars and the ethical questions that their accidents pose. One of the first fatal accidents involving a self-driving car occurred because the car’s software failed to recognise a truck on the road and drove into it. Stilgoe takes this example further and asks: what if the car had to choose between driving into the truck, killing its owner, and swerving onto the sidewalk, killing a pedestrian? This question resembles the ethical dilemmas used in ethics courses, such as the trolley problem, which goes as follows: “A runaway trolley is moving toward five unsuspecting rail workers. If it hits them, it will kill them. You are standing next to a switch that could divert the train onto a separate track, where only one rail worker is standing. If you flip the switch, the five workers will be spared and the single worker will be killed. What would you do?” While the trolley problem is a hypothetical situation used mostly for teaching purposes, in the case of self-driving cars the problem becomes a matter of software design. Stilgoe provocatively suggests that one could take a neo-liberal approach and leave the choice of algorithm to the buyer of the car. When purchasing a car, the owner would have to decide which type of algorithm it should run: one that saves the owner or one that saves pedestrians. What is new about this situation is that ethical dilemmas become a practical question, mediated by machines.
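
To make the point concrete, here is a minimal sketch in Python of what such a buyer-selected policy might look like. All names here (CollisionPolicy, Manoeuvre, choose_manoeuvre) and the harm numbers are hypothetical illustrations of Stilgoe’s thought experiment, not any real vehicle’s software.

```python
from dataclasses import dataclass
from enum import Enum

class CollisionPolicy(Enum):
    """Hypothetical, buyer-selected ethical setting."""
    PROTECT_OWNER = "protect_owner"
    PROTECT_PEDESTRIANS = "protect_pedestrians"

@dataclass
class Manoeuvre:
    description: str
    expected_owner_harm: float       # 0.0 (none) to 1.0 (fatal)
    expected_pedestrian_harm: float  # 0.0 (none) to 1.0 (fatal)

def choose_manoeuvre(options: list, policy: CollisionPolicy) -> Manoeuvre:
    """Pick the manoeuvre that minimises harm to whichever party the
    buyer's policy prioritises -- the trolley problem as a config flag."""
    if policy is CollisionPolicy.PROTECT_OWNER:
        return min(options, key=lambda m: m.expected_owner_harm)
    return min(options, key=lambda m: m.expected_pedestrian_harm)

# The dilemma from Stilgoe's example, encoded as two options:
options = [
    Manoeuvre("brake and hit the truck",
              expected_owner_harm=0.9, expected_pedestrian_harm=0.0),
    Manoeuvre("swerve onto the sidewalk",
              expected_owner_harm=0.1, expected_pedestrian_harm=0.9),
]
print(choose_manoeuvre(options, CollisionPolicy.PROTECT_OWNER).description)
```

The point of the sketch is not realism: it is that the ethical choice collapses into a parameter that someone must set before the dilemma ever occurs.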

Second image: the case of aviation.

On March 10, 2019, a Boeing 737 Max crashed a few minutes after taking off from Addis Ababa, killing all 157 people on board. Investigations of the crash found that the pilots “were unable to prevent the plane repeatedly nose-diving despite following procedures” (https://www.bbc.com/news/world-africa-47553174). The problem was traced to MCAS, the Manoeuvring Characteristics Augmentation System, which can override inputs from the pilots: the machine makes decisions about the flight. This differs from a conventional fly-by-wire system, in which the pilots tell the system how to fly and the computer adjusts to their inputs. The first report about the crash, produced by the Ethiopian authorities, did not attribute blame for the accident – but the accident raises the questions: can software be liable for an accident? Is software failure more acceptable than human error?
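
The conceptual difference can be caricatured in a few lines of Python. This is a deliberately simplified sketch with hypothetical thresholds and function names, not Boeing’s actual control law; it only illustrates where the decision-making sits.

```python
def fly_by_wire(pilot_pitch_command: float) -> float:
    """Conventional fly-by-wire (simplified): the computer smooths and
    limits the pilot's input, but the pilot remains the decision-maker."""
    MAX_PITCH = 15.0  # hypothetical limit, in degrees
    return max(-MAX_PITCH, min(MAX_PITCH, pilot_pitch_command))

def mcas_style_override(pilot_pitch_command: float,
                        sensed_angle_of_attack: float) -> float:
    """MCAS-style automation (simplified, hypothetical numbers): if the
    sensed angle of attack looks too high, the system commands nose-down
    trim regardless of what the pilot asks for. A single faulty sensor
    reading is enough to trigger it."""
    AOA_THRESHOLD = 14.0  # hypothetical trigger value, in degrees
    NOSE_DOWN = -2.5      # hypothetical trim command
    if sensed_angle_of_attack > AOA_THRESHOLD:
        return NOSE_DOWN  # the machine decides; the pilot's input is ignored
    return fly_by_wire(pilot_pitch_command)
```

In the first function the pilot’s command, however mediated, determines the output; in the second, a single sensed value can replace it entirely. That substitution is where the question of liability attaches to the software.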

Both examples take us into the field of post-humanism, in which ethics and responsibility cease to be a prerogative of humans and are shared with, or transferred to, algorithms. Braidotti [1] argues that post-humanism is a means to overcome the nature-culture divide, and criticises the idea that certain issues – including ethics, values and intelligence – belong exclusively to the human realm, to “culture.” Braidotti speaks instead of a nature-culture continuum and reminds us that nature is self-organising: there is intelligence in the processes of nature. Because self-organisation and autopoiesis are terms that hail from the sciences of complexity, she suggests that complexity can help conceptualise the nature-culture continuum.

I add to this argument by drawing on Peirce’s work on semiotics [2], and in particular on the notion of semiotic closure. Semiotics is the study of signs, which Peirce describes through a triadic relationship to argue that signs are relational: they are not a way of being in themselves. Semiotic closure is achieved when signs are validated through action. Applications of the sign generate experience, and experience in turn shapes the perception of the sign, generating new knowledge. The triadic relation thus refers to the process by which representations are continuously selected and validated through interaction with the external world.

The direct application of algorithms and mathematics to the world is then a form of validation of “mathematical signs.” As opposed to the idea that mathematics carries knowledge in itself, semiotics suggests that the application of mathematics is a means of generating knowledge. Post-humanism may therefore offer the opportunity to validate the signs of algorithms and machines. The accidents and failures described above speak to the uncertainty that algorithms carry – and clearly point to the limits of basing policy on scientific information generated by the non-applied sciences. Complexity thinking is necessary not only to conceptualise the posthuman but also, and crucially, to study the role, validity and limits of algorithms when applied to action.

References

[1]        R. Braidotti, The posthuman. Cambridge, UK: Polity Press, 2013.

[2]        C. S. Peirce, Collected papers. Vol. VI. Cambridge, MA: Harvard University Press, 1935.

