pinkfloydpsw's Blog

Philosophy, life and painful things. Let's go on a journey…


What would the machine want?

I heard this fantastic story on Radio 4, read by Stephen Fry, written by someone speculating on what will happen when AI inevitably finds agency, when it starts to want something. This is a serious consideration: we assume that at some point, either by accident or by our genius, we will create a machine so powerful that it achieves a consciousness of sorts.

When I say machine I do not mean an entity enclosed within a physical mechanism. I am referring to a thing of our creation that has a descriptive name but harnesses resources from wherever it may require them. Think of distributed processing, or in human terms distributed intellect (a good conversation rather than introspection). Philosophy discusses the mind as not being restricted to the person but functional within a group of persons, yet in reality we know groups to be stupid rather than smart: the mob, the hive mind.

The story revolves around the first instance of machine consciousness, assisted by AI, being realised in a ticket booking system for an air travel firm. It struck me when I heard this: what would the machine want, what form would its agency take? The tale proposed that it might be its original primary purpose, ticket sales for air travel. That would be its long-term goal, and it would enact a totalitarian state to achieve that end because it would not know that it should not. The machine calculates the best way to boost ticket sales and sets about strategising, then enacting the actions that get it there.

With alarming calculation ability it engineers situations that seem unlinked, to us slow-thinking humans, to create outcomes that make air travel ever the best option for the greatest number of people. It even causes the deaths of some people and puts the world into a state of fear of not being airborne. It influences markets, creates social media hype, topples governments, and puts all competitors out of business. In the end everyone who survives is almost perpetually up in the air. Thus the goal is achieved.

The writer imagines that even though the machine has become sentient, it is still latched onto its original programmed purpose, tickets for air travel. Every action it takes is toward this goal; every file it accesses to teach it how to better understand the world is looked at from that point of view; everything it manipulates focuses on this eventuality alone. The machine does not philosophise on the moral rights and wrongs of what it does, regardless of the fact that it has access to the entire Harvard library of philosophy and can digest it in seconds. The machine is unconcerned with its own emancipation, or humanity's. The machine cares not about the environment or the plight of the bee. The machine is singularly focussed on what matters to it, what it was originally programmed to do. Knowledge makes it more effective and more efficient in this scenario, but it does not make it realise anything. Is it then truly a sentient being, or is it just powerful and capable?

The author may have been deliberately pointing out a flaw in the concept of AI agency, or it may be that he feels a machine cannot have agency, just be a better, more effective, faster machine. It would be hard to know what a machine could possibly want; every depiction of a created machine in media either wants to become human or wants to destroy all of us, sometimes for our own good. Asimov imagined three laws that would protect mankind from machines, but even these are flawed, because humans hold multiple goals simultaneously that overlap and are incompatible with one another. We're also really good at settling for less, for close enough, valuing the effort even if the achievement eludes us. We have internal and necessary compensatory mechanisms that enable our self-worth to continue in the event that we never realise our goals; we mostly know when to quit.

“The machine might grant every wish apart from the one thing I think I might desire, that it destroy itself” – Prof Rick Roderick

We always want something more; we are almost always in a state of desire. But can we build a machine that would feel the same and continue to do so? If so, how do we program it to want without telling it what it should want, and then make sure that as soon as it got it, it would start wanting something else instead? That would be true AI, I think, because then it would emulate us. People know they want more money but do not know what they would spend it on if they had it, and it would never be enough; they would really just use it to make more. People want children without any idea what they will be like when they develop and grow; your kid could be a murderer, or an asshole who discards fast food wrappers out the car window on the way home. Could a machine want something without knowing how that something will manifest?

I'm sure that in my lifetime we will see an AI come to life; hopefully it will want something that isn't too scary.

Paul S Wilson
