I’ve seen lots of talk recently about the moral threat of AI. So, what does the MOQ (the Metaphysics of Quality) have to say about it?

To start with, how about a fact that appears to be lost in much of the discussion:

No computer has ever made a moral judgement it wasn’t told to make, and there is no reason to think this will ever change. Believing that machines will spontaneously start making such judgements as they become more intelligent is just that: a belief, a leap of faith unsupported by evidence. As it stands, it is the human programmer who makes all moral judgements of consequence. Computers, being built from 0s and 1s, are simply the inorganic tools of the culturally moral programmer.
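
To make this concrete, here is a minimal sketch of what a machine’s ‘moral judgement’ actually amounts to. Everything in it is hypothetical and illustrative: the forbidden categories, the risk threshold and the default behaviour are all value judgements the programmer made in advance, which the machine merely executes.

```python
# A toy "ethical" filter for an autonomous system. Every moral
# judgement below was made by the programmer, not the machine: the
# forbidden categories, the risk threshold and the default action
# are all human choices encoded ahead of time.

FORBIDDEN_ACTIONS = {"deceive_user", "harm_human"}  # chosen by a person
RISK_THRESHOLD = 0.2                                # chosen by a person

def is_permitted(action: str, estimated_risk: float) -> bool:
    """Return True if the action passes the human-authored rules."""
    if action in FORBIDDEN_ACTIONS:
        return False
    return estimated_risk <= RISK_THRESHOLD

# The machine only ever evaluates these rules; it never authored them.
print(is_permitted("fetch_weather", estimated_risk=0.05))  # True
print(is_permitted("harm_human", estimated_risk=0.0))      # False
```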

Unfortunately, this isn’t likely to be appreciated any time soon because of a philosophical blind spot our culture has. That blind spot is our metaphysics, which neglects the fundamental nature of morality and so gets confused both about where morality comes from and about whether machines can make moral judgements independently of being instructed to do so.

For example, in a recent Foreign Affairs article, Nayef Al-Rodhan appears to believe that AI will start making moral judgements as a result of greater ‘sophistication’, learning and experience:

“Eventually, a more sophisticated robot capable of writing its own source code could start off by being amoral and develop its own moral compass through learning and experience.”

The MOQ, however, makes no such claim, and as already mentioned, the claim runs contrary to our experience. According to our experience, only human beings and the higher primates can make social moral judgements in response to Dynamic Quality. Machines are simply inorganic tools, and their components only make ‘moral decisions’ at the inorganic level.
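
Even in the learning scenario Al-Rodhan describes, the ‘moral compass’ doesn’t appear from nowhere. Here is a hedged sketch in reinforcement-learning terms, with all names hypothetical: the agent learns how best to pursue an objective, but the objective itself, the reward function, is written by a human. Learning changes the pursuit, not the values.

```python
import random

# A toy reinforcement-learning loop. The agent "learns", but the
# values it learns to pursue are fixed in advance by the reward
# function below, which a human wrote. Changing the machine's
# "moral compass" means a human changing this function.

def reward(action: str) -> float:
    """Human-authored value judgement: what counts as 'good'."""
    return {"cooperate": 1.0, "defect": -1.0}.get(action, 0.0)

actions = ["cooperate", "defect"]
value = {a: 0.0 for a in actions}  # the agent's learned estimates
LEARNING_RATE = 0.1

for _ in range(1000):
    action = random.choice(actions)  # explore both options
    value[action] += LEARNING_RATE * (reward(action) - value[action])

# The agent converges on whatever the human-defined reward endorses.
print(max(value, key=value.get))  # -> "cooperate"
```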

That’s not to say, though, that there aren’t any dangers from AI or that all its risks are overblown. AI, loosely defined as advanced computational or mechanical decision-making that doesn’t require frequent human input, threatens society either when it is poorly programmed and a catastrophic sequence of decisions occurs, or when it is well programmed by a morally corrupt programmer. Neither of these scenarios is fundamentally technological; both are philosophical, psychological and legal in nature.
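
The ‘poorly programmed’ failure mode is easy to illustrate. In this hypothetical sketch, a single human error, a flipped sign in a control loop, compounds on every unattended iteration precisely because no human is in the loop to notice:

```python
# A toy autonomous control loop. The machine faithfully executes its
# instructions without human intervention, so a single programming
# error compounds on every iteration.

target_temperature = 20.0
current_temperature = 25.0
GAIN = 0.5  # intended to nudge the current value toward the target

for step in range(5):
    error = target_temperature - current_temperature
    # BUG: the programmer wrote "-=" instead of "+=", so every step
    # pushes the system further from the target rather than toward it.
    current_temperature -= GAIN * error
    print(f"step {step}: {current_temperature}")
# The temperature drifts further from 20 on every step:
# 27.5, 31.25, 36.875, 45.3125, 57.96875
```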

The unique threat of AI, then, is this aforementioned increase in machines’ freedom to make decisions without human intervention, which makes them both more powerful and more dangerous. The sooner our culture realises this, the sooner it can start to discuss these moral challenges and stop worrying about the machines ‘taking over’ in some kind of singularity apocalypse. Because unfortunately, if we don’t understand the problem, a solution will be wanting, and therein lies the real threat of AI.