It’s called Artificial Intelligence and it’s supposed to be the next big thing. Impressive, for sure; intelligent, no. In the following piece I intend to poke a hole in that claim…
So, with the intention of testing it, and with an open mind, I decided to ask an AI, specifically Bing, a question to which I already knew the answer, on a difficult and emotive subject, suspecting that it might lie to me, and for a very simple reason. There is a marked difference between a social truth and a fact. A social truth is that which it is comfortable for the public to know, and which it is comfortable for the established power base, always a fragile thing, to let the public go on believing. A fact is the truth of a history, a story, a narrative. A fact has no feelings, is not swayed by opinion, and does not change to suit the comfort of those who encounter it. A fact is not like an opinion or a perspective; it is more scientific, and no matter who scrutinises it, it remains unchanged. Social truth is swayed by perspective; it is subjective, and it changes as bodies of persons change. The answer, my friend, is blowing in the wind (or blown by the wind). Now that we’ve established this, we can move on.
Rishi Sunak, the PM of Britain as I write this, sits on a stage with Elon Musk, that rich tech guy, discussing the moral implications of AI, and there are two very worrying aspects to this meeting. Musk is unelected, a man whose social truth perhaps does not match the facts, in that he is not the father of the technologies he possesses; he is merely their owner, because he bought them. Remember that this is a guy who has the world convinced that he founded the Tesla electric car company, when in fact it was founded by two men named Martin Eberhard and Marc Tarpenning. Musk was an investor, and a later legal settlement allowed him to be described as a founder. Musk was able to invest money he made from PayPal, which he also didn’t found. Musk is a guy who buys things, other people’s successes, and throws enough money at them to make them a good marketing option. I’m not attacking him; I’m just saying that he’s a money man, not necessarily the tech genius people seem to believe he is. His success as a marketed object rests on his ability to use capital (from his father’s diamond-mining business) to become the face of other people’s innovations. Now why is that important? For the same reason it is important to point out that Sunak is the richest parliamentarian: not a financially grounded individual but an extremely financially secure one, a man who has never known what it is like to worry over a pending bill such as electricity.
There are dangers; we have all seen The Terminator, I assume? I would expect AI to be discussed by greater minds from the various disciplines involved, but there is no Daniel Dennett (professor, philosopher, expert on consciousness and AI), no Clifford Stoll (pioneer of computer AV technology), no Jaron Lanier (innovator, author, technology writer), and no Richard Stallman (technological-freedom pioneer) at this great convention on the implications of AI. Instead we have these two highly motivated persons, lecturing and informing their audience as if they are somehow entitled to do so.
The point I am making is that here we have a conference on the dangers of AI, and the face of it is two men who are most likely trying to figure out how they themselves can use it either to get richer or to gain more control. Power-people do not leave anything they can influence uninfluenced: they use their power to shape the lives of others to their own advantage, they buy the means to create their own narrative, and they use power to make the way they use power perfectly legal while restricting the legality of the actions of those who have no power. As a quick thought experiment, take two imagined legitimate citizens of the UK, Billy and Zander. Billy stole a chicken from Iceland Foods and was caught. Zander funnelled his large corporate income through an offshore holding company in a British tax haven and had that company buy the assets he then uses, like his car and the flat he rents from the offshore company he owns; it also pays his expenses, which are all the things he consumes. Billy is a criminal because he stole from a private company; Zander is a law-abiding citizen by the same measure, because all he did was avoid contributing the amount of tax per £ earned that he would otherwise have owed to the people of Britain. My point is that, in moral terms, if you steal then you are a thief; the legality of it all is simply down to who makes the laws concerning it, and that just so happens to normally be the very people who intend to benefit the most. Bear this thought in mind: these are the very same people who will invest in, own, and influence the programming of the AI machines that will rule all our lives very soon. You would be a fool to think that will be left to chance.
What we can surmise is that this discussion, this presentation, by Sunak et al. is not the truth of AI; it is merely part of the manufacturing process for the coming attitude towards AI, using mediation rather than fact. It is not a conversation on the impact of AI on real people, because it cannot be: these are not experts on either real people or AI. It is not a discussion on the financial impact of AI; that would be down to the economists, and they do not know, because this is an emerging technology. What we can say is that AI itself is not truth. It will not return you the answer that is factual; it will return you the one it has been programmed to. Now you might argue that a person may also do this, having been programmed by society, social media, primary school teachers and so on, and you would not be wrong. But, and it is a big BUT, people can break their programming, because they have the ability to learn when new information comes along. That is what I love about Orwell’s 1984: even in the most totalitarian of worlds he proposes that there would always be human resistance, because there would always be humans, and humans will always resist totalitarianism simply because the need to be free is part of their consciousness; wanting to be free is a component part of being human. I will grant you that cognitive dissonance is an issue (where a person believes two or more incompatible things, such that if one is true the other cannot be), but I do not think there is often courage behind those convictions, rather a wish for certain things to be true, or a certain comfortable feeling provided by acting as if there is truth while, in the background, the thought is not really held as solid. Can an AI hold and act upon a truth that is not true, as a human can? Can it have that level of erratic nature? I think not; we would have to be able to build broken machines to get the sort of broken humans we are. Can a machine have the agency to want to be free of totalitarian ideals?
The world is giddy with the possibility that AI will be able to answer the deepest questions we could have. I’m mindful of what Douglas Adams wrote in The Hitchhiker’s Guide to the Galaxy: super-intelligent beings built a perfect machine to answer the biggest question of all, the question of Life, the Universe and Everything. It chewed on the question for millions of years, and its answer was 42. Baffled, the onlookers asked what that meant. Deep Thought (the machine) replied that it could elaborate no further, as the askers had not understood their own question. This is the sort of thinking we will end up with… For thousands of years philosophers have argued about what it all means, religions have invented what it all means, and Sartre and Nietzsche contended that it all means nothing. I like the latter, that there is no meaning other than what we give it, and if I am correct then the AI cannot escape its programming, because we do not properly understand the questions we give it in the first place. It will only ever be able to move forward in the direction we send it, making an argument we first started even more complicated, spouting out thousands of found values from all over the web and mashing them together into what looks like a new idea to those who never thought it. But an original thought, agency, erratic dreams, a daydream in a meeting at work, false sentiment? These are human things; a machine cannot know them any more than it can know what it means to be pleased by something beautiful.
Yeah, it’s cool; yeah, it will do your work for you. But it cannot think, because, unlike you, a human, it is not yet broken enough.
