I wanted to write of AI, but I'm unable. I heard Michael Sandel on the radio last week, and, in his usual Socratic style (we never know what he himself thinks), he was asking people what they thought about algorithms deciding things like scoring student papers. I often think the contestants, we'll call them, participants more like, frequently miss important insights in these interesting discussions. I'll give you an example of what I mean, but I won't go through the argument points, save to say that some of them were interesting, some of them were unfathomable, and some of them were contemptible verbiage (I have no idea where that phrase comes from, but I like it). Imagine an algorithm that can make a decision: it will be based on all knowable information and meta-information (information about information), trend analysis, explained anomalies and so on; it will have been built by the smartest engineers and scientists; it will be an attempt at flawless logic. The purpose, as always with technology, is to replace human decision-making with a more stable, consistent, faster, more reliable, more explainable version of thought, to better the process. Now, can you imagine why this can't ever work? I can, and I aim to tell you how…
The problem with AI is simply this: humans cannot build flawed machines on purpose. We endeavour to get it right; true, we may make broken things, but that's never the goal of production. Picture the scene: 100 scientists collaborating with 100 programmers, 100 psychologists, 100 neurologists, 100 electronics engineers, 100 biologists, 100 ethicists and so on all the way down, all checking each other's work, all sharing their work in an online framework. The product will be released when it passes every test and is ready for sale, not before. Now compare that production method to two persons copulating to produce a child, a child that inherits every flaw they have, a child who will be scarred by bad parental influence, moulded by bullying, confused by sexual desire, wounded by heartbreak at least once I hope… how can we humans possibly build that machine, with all its errors of judgement, its regrets, its fear and dread at the prospect of ultimate death and the possibility of living insignificance?
In my estimation, humans are basically broken difference engines that make stupid decisions repeatedly based on incomplete sets of data, but machines cannot act erratically or against their own best interests unless we build that into them. And to build it in, we must first understand how it occurs in us. Go to a therapist and have them take you on a journey into discovering the complicated self that you are; after years of talking you still won't know, beyond an inkling, how you became to some degree a racist or a bigot, why you might secretly loathe your mother at a psychological level that was once completely cut off from your conscious mind, or why you like your girlfriend to massage your feet. How often does a person express a first preference, only for a second preference to kick in and override it before something bad happens, like draining the bottle or smoking all the cigars? I contend that the complexity of all this is far beyond the production of AI, unless I'm the one using the term incorrectly?
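If I were forced to make "building the flaw in" concrete, the best I could manage is a toy sketch like the one below (Python, entirely my own invention; the agent, its `restraint` parameter and the whole setup are illustrative assumptions, not anyone's real system). It reduces the second preference that overrides the first to a weighted coin flip, which is exactly the crudeness I mean.

```python
import random


class FlawedAgent:
    """Toy decision-maker with two competing preferences wired in:
    an impulsive first preference and a restraining second one that
    sometimes kicks in and overrides it. Purely illustrative."""

    def __init__(self, restraint=0.5):
        # Probability that the second preference wins before harm is done.
        self.restraint = restraint

    def decide(self, first_preference, second_preference):
        # The "override" here is just a weighted coin flip -- a caricature
        # of whatever actually happens inside a person's head.
        if random.random() < self.restraint:
            return second_preference
        return first_preference


agent = FlawedAgent(restraint=0.5)
print(agent.decide("drain the bottle", "put the cork back in"))
```

A random override is not regret, or dread, or a buried loathing of your mother; at best it fakes the symptom without the cause, which is rather my point.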
