Policeman John walks his beat, today accompanied by one of the new law enforcement units. It has no name, but it is designated PO7-32-Z$. As they pass each house the unit accesses a report built from various databases, a report that holds all the information on the property and its occupants. Suddenly, in a flurry of action, PO7-32-Z$ leaves two of the residents of number 8 dead and the other two in plasticuffs on the side path. John didn’t even have time to react, let alone participate. In a nanosecond the unit decided, based on the information it could access and the assessment it could make using its advanced sensors, that it could play out a scenario resulting in a couple of dead, and a couple of captured, criminals. This was exactly what it calculated would happen, and it was exactly the result that would make most financial sense when assessed as an incident after the fact. There have always been dubious circumstances that led to arguably optimal outcomes in law enforcement scenarios, and we must bear in mind that PO7-32-Z$ had access to all of these for analysis too, along with the tribunals and legal proceedings that followed them. The resulting outcome was based on calculation, not assumption or instinct.
In our scenario, PO7-32-Z$ had access to the criminal record, shopping habits, pornography preferences, music preferences, and medical and psychological history of each of these criminals. It had access to every detail about their relatives and friends, and to data on their social standing too. It could build a picture of the future likelihood that a court case could be brought, and won, by anyone they knew or were related to, in connection with the incident that had just played out. PO7-32-Z$ calculated all of this in the time it took John to become aware of what was happening, and acted before he could react. This is certainly efficient, yes, but is it moral? There is a thought experiment, Gregory Kavka’s toxin puzzle, which asks whether it is possible to fully intend, in the present, to commit an act in the future, while knowing, as everyone else does, that between the intention and the act you will certainly change your mind. It is the intention to drink poison tomorrow, meant fully in the now, held alongside the knowledge that when the moment comes you will without doubt prove incapable.
Nobody can argue: the things we can build are now better than us in a functional sense. But this is no longer just a machine; it is an artificial intelligence that has purpose and is not governed by our every decision. What we are giving the machines is vague purpose, then letting them think within a programmed brain space whose parameters have been set by the humans with the capability to do so, not by the rest of us. What we intend is that we will discover what it discovers, things we perhaps would not have discovered ourselves. Science fiction writers imagined the exoskeleton, an enhanced human, a symbiosis between machine and person, but this is not our future, because the human would only slow the device down; if the device were thinking, it would of course think faster. The new RoboCop film explores the idea that the machine delivers its purpose more optimally when the human merely believes he is in charge. This is an illusion, of course; the AI is really doing the work. Better, more efficient, is to have just the machine, but we humans do not trust that condition. We prefer the illusion of human control, and we prefer the idea of a big red button that will turn the machine off should we choose to. That will likely be an illusion too, like the illusion that your mobile phone is off when you turn it off. It absolutely isn’t. You used to be able to remove the battery; now phones are built so that you cannot. Does that seem like it removes your control? Think about it: it goes everywhere with you, you cannot turn it off by any means, and even when it appears to run out of power and shut itself down, it still had a few per cent left when it did. And, according to Edward Snowden (he would know), it’s always listening. I’m just going to leave that thought with you.
The future is not RoboCop, though; it is The Terminator. When integrated, it will mean the removal of human agency: a machine with defined autonomy, and if a human is in the device at all, it will be as an observer rather than a pilot. This means that the device, the appliance, will enact the agency and purpose of those that build it. That should be a scary thought for everyone, because the financial capital that develops the machine will of course come from the most greedy and power-hungry among us: the rich, the capitalists, those motivated now only by power since they already have more money than they can use. We all realise by now, I hope, that capitalists are not the most moral of persons, especially as we witness the world falling into a false democracy that looks more like Aristotle’s description of oligarchy than the post-war dream of the 1950s. Take the Elon example: here is a man with so much money he could end world hunger, but what does he choose to do? He spends his time trying to make sure that the poor people of a country he was not born in get not a cent more than his class thinks they deserve, and get less tomorrow than they do today if his class can make that legal and possible. Now that is the sort of guy who wants to own the agency of the AI. Worried yet?
Imagine your weapon does not fire because some machine has decided it shouldn’t, or your car won’t start, or your oven won’t operate, or you cannot buy groceries. Is this a possible form of enslavement or control? Yet this is the future we are all welcoming, as if it were going to make our lives better. What of human agency when the machine takes over and enacts the remit of its programmers, the capitalists? Will we not then be slaves to the controllers of those appliances, as the devices come to shape not only the physical and technological fabric of the world, but invade the social fabric too? You might argue that there are existing rules and structures that already inhibit our human freedoms in just the way I am describing, and you would be correct, but these have consequences, not agency or ability. If you commit a crime now, nothing about the law that prohibited it actually acts to prevent your act. The law is not an active thing; it has no arms or legs or mind, it is just a piece of legislation that says something is not allowed. The machines will prevent. They will stop the action; they will remove the ability of persons to make decisions other than those that have been decided to be acceptable.
This is the removal of human agency, and it resembles health and safety thinking. We all agree with the idea that nobody should be harmed by an unacceptably dangerous situation, and that health and safety is, in theory, a very good thing. So some people are employed to form rules and make objects that restrain other people from putting themselves in danger. Yet people still do dangerous things all the time; they do this to feel alive, or they assess and accept the risks inherent in their behaviour. Driving a car is dangerous, especially the way some people do it, but self-driving vehicles will make it safer by removing decisions from the person. Eventually, I suspect, people will just be passengers, because the vehicle will be better at driving itself than we ever could be. In fact, I’m willing to bet that an AI could already get a Formula One car round Silverstone faster than Lewis Hamilton ever could. The implication is that we lose driving as a pleasurable endeavour. In the film Demolition Man, Sandra Bullock’s character is bemused by the fact that Sylvester Stallone’s character wishes to ‘go manual’.
Forget driving, that is arguable. Are there other things that will follow logically on this trajectory, other things that the AI think-tank, funded by our capitalists, will mess with? In the industrial revolution the workers feared the machine so much that they threw their shoes into it to break it, but those enemy machines had singular purpose. Computers have wide purpose, they do many things, but they have also changed the way we work and live. The hope was that they would make our lives easier; instead they made our lives faster and less interesting, they made us stupider, less able, further from our ancestors, and less valuable as employees. The machines are worth investing in; the people who operate them are a, for now, necessary burden. How long before an AI replaces you at work, and what will that do to your quality of life? As we marvel that we can each have a thinking machine in our pockets, externalising our intellect, we might consider that soon we won’t even be needed to clean or pour coffee, let alone to be skilled labour.
We cannot fix the meaning of beauty as a concept that does not change. In Roman times beautiful buildings looked different from the beautiful buildings of the middle ages, which in turn look different from the beautiful buildings of the Georgian period, and so on. For a machine, an AI, an appliance, to understand even this subtly changing condition, and to act accordingly, it would have to possess the human flaw by which we push what is already thought satisfying to become something different, simply because our artistic pleasure ceases, for no good reason we can understand, to be satisfied enough by what exists to keep replicating it. We have had the spirit level for centuries. It’s a simple device, the Romans had a version of it, yet we still put our energies into developing a better one; that’s just a human thing to do. How do you program that into an AI? How can a thing that lacks agency ever be dissatisfied, or restless in its desires, and is that not most of the reason for innovation in the first place? Can an AI be driven by jealousy, competitiveness, anger, passion, greed, or boredom, or is it just a utility machine that expresses those drives only through the agency given to it by its programmers?
I do not believe in AI because I do not believe in a machine that can want something. The character Data in Star Trek wishes to be human; Pinocchio is a wooden machine that wants to be a boy; in the latest Terminator film the machine learns to be more human; Marvin from the Hitchhiker’s books displays many human traits. Alan Turing postulated that at some point a machine could fool a human into thinking it was a human, but John Searle argued, with his Chinese Room thought experiment, that this would not mean anything, since a system could interpret inputs and produce responses without any understanding at all. My question mainly revolves around how we look at these things: are they appliances no matter how sophisticated they become, and what do we wish to retain despite the fact that they can do it all better?
Could I have had an AI write this piece? Yes, but I never would, simply because I want to do my thinking for myself. Is this blog perfect? No. Is it accurate? Unlikely. Is it the best it could be? Very unlikely. Have I missed and misunderstood things? Indeed. It is part of who I am to put my energies into this blog, an intellectual muscle I wish to flex to make stronger; it is part of my intellectual development to think well enough to put my words on a page. I don’t want the AI to do better what I feel compelled to do, because I want to do it, even if the fucker is better, and I cannot develop personally if something does my intellectual labours for me. Let the AI do the mundane and the dangerous, let it step in where we are not able, but please don’t let it make obsolete what is, and should be, important about being us. We are not just living meat that absorbs truths; we are thinking beings that need difficulties and challenges to fight against in order to get stronger in mind and body. Machinery has made us weaker than our ancestors, so we visit the gym to build ourselves physically. What will we visit to make our brains stronger when the machine replaces our thinking? I wish for machines that wash my clothes, heat my house, and help me live a longer and better life, but not machines that create my narratives, or think or decide for me. That’s ridiculous.