There’s a great video on YouTube (above) called ‘The War on Science’ by AsapSCIENCE which outlines an oft-misunderstood conflict. That conflict is when:

“Science and society are often at odds”.

Putting the conflict in these terms clearly shows the wrong-headed thinking of those who are undercutting the intellectual values of science with the social values of society. Current social norms may be more convenient for society to defend and continue, but it is not intelligent to keep thinking the same thing when the evidence shows otherwise.

In fact, rather than simply wrong-headed, such defence of social values in the face of intellectual values to the contrary is immoral and not supported by the MOQ.

The historical risk, though, is that without the Metaphysics of Quality the intellectual level can start to undercut the quality of society and defend biological values at the risk of social cohesion. This could well explain why many political conflicts throughout the world are simply between those who defend social values and those who support intellectual ones.

The MOQ, however, shows there is a more nuanced way to view social vs intellectual conflicts such as this. Within the structure of the MOQ is the ability to morally defend intellectual values without risking social decay in the process. This is clearly shown by the MOQ’s ‘Codes of Morality’ and in the difference between ‘The Law’ and ‘Intellectual Morality’, the latter of which is not acknowledged by our current metaphysics.

I’ve seen lots of talk recently about the moral threat of AI. So, what does the MOQ have to say about it?

To start with, how about a fact which appears to be lost in much of the discussion.

No computer has ever made a moral judgement which it hasn’t been told to make, and there is no reason to think this will ever change. Believing this will change spontaneously as a result of the improved intelligence of machines is just that, a leap of faith, and not supported by evidence. As it stands, it is the human programmer making all moral judgements of consequence. Computers, being built of 0s and 1s, are simply the inorganic tools of the culturally moral programmer.

Unfortunately, though, this isn’t likely to be appreciated any time soon because of a philosophical blind spot our culture has. That blind spot is our metaphysics, which neglects the fundamental nature of morality and in doing so gets confused both about where morality comes from and about whether machines can make moral judgements independently of being instructed to do so.

For example, in a recent Foreign Affairs article, Nayef Al-Rodhan appears to believe that AI will start making moral judgements as a result of more ‘sophistication’, learning and experience.

“Eventually, a more sophisticated robot capable of writing its own source code could start off by being amoral and develop its own moral compass through learning and experience.”

The MOQ, however, makes no such claim; as already mentioned, such a claim is contrary to our experience. According to our experience it is only human beings and higher primates who can make social moral judgements in response to Dynamic Quality. Machines are simply inorganic tools and their components only make ‘moral decisions’ at the inorganic level.

That’s not to say, though, that there aren’t any dangers of AI and that all risks are overblown. AI – loosely defined as advanced computational/mechanical decision-making not requiring frequent human input – threatens society either if it is poorly programmed and a catastrophic sequence of decisions occurs, or if it is well programmed by a morally corrupt programmer. However, neither of these scenarios is fundamentally technological; they are philosophical, psychological and legal in nature.

The unique threat of AI is the aforementioned increase in the freedom of machines to make decisions without human intervention, which makes them both more powerful and more dangerous. The sooner our culture realises this, the sooner it can start to discuss these moral challenges and stop worrying about the machines ‘taking over’ in some kind of singularity apocalypse. Because unfortunately, if we don’t understand the problem, a solution will be wanting, and therein lies the real threat of AI.

Russell Brand does yoga and meditation. In the above video he talks about how both of these things help him find a connection which drugs and sex cannot. It’s rare to have a celebrity with such considerable self-confessed experience of drugs and sex speak so honestly about the ways in which they don’t work.

“I’ve really tried drugs. I’ve really tried sex. I really tried all these things and they do not work.”
Russell Brand

Jeni Cross, a sociology professor at Colorado State University, talks about the three myths of behaviour change. In the notes below I’ve ignored the SOM jargon and instead written about how the research applies to values. I found five main takeaways from the three myths.

MYTH 1. Information is enough to change behavior.

  1. If we speak to what folks value by making things tangible, personalised and interactive, we are far more likely to change their minds than by simply supplying them with facts.

  2. Because folks are also loss-averse, that is, they don’t want to lose value they already have, they are far more likely to change their behaviour if it means they won’t lose value.

MYTH 2. Changing attitudes changes behavior.

  1. You can actually set qualitative behavioural expectations that change both behaviour and attitudes.

  2. You can also change behavior by speaking to what folks already value.

MYTH 3. Folks know their values.

  1. Social norms influence behaviour far more than folks give them credit for. If you speak to folks’ valuation of social norms, you’re far more likely to change their behaviour.

“Lying just beneath the surface of (political) arguments with passions raging on all sides are big questions of moral philosophy… But we too rarely articulate and defend and argue about those big moral questions in our politics.”
Michael Sandel

Michael Sandel has a great series on Justice which explains the currently competing philosophical theories of social justice in the world today. Unlike anyone else I’ve seen, he brings the problems of moral philosophy to the public at large in an easily accessible way.

In the TED Talk above, Sandel gives a passionate plea to bring some of this intelligent philosophical discourse to our political dialogue. I share his frustration, and it is heartwarming to see someone make such an argument in a public setting. Of course, the Metaphysics of Quality provides us with a vastly improved language with which to discuss morality, bringing coherence and an evolutionary context to these discussions where previously there was none.