With Elon Musk recently saying he will give away a billion USD to fund AI research aimed at minimising its risks, I wonder whether a free solution to the unique problem posed by AI already exists in the codes of a moral philosophy we know.
In the Metaphysics of Quality, the Law of the Jungle declares that biological quality should always prevail over inorganic quality. On that basis, I propose a simple AI rule: if a machine controlled by software is capable of taking a life in its day-to-day operation, then the machine must be able to detect life and avoid killing or injuring it where possible, unless it is specifically designed to do so (i.e. a weapon).
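The rule above can be sketched as a simple decision function. This is only an illustrative sketch of the logic, not a real control system; the function and parameter names (`is_action_permitted`, `machine_is_weapon`, `can_detect_life`, `life_in_path`) are hypothetical.

```python
def is_action_permitted(machine_is_weapon, can_detect_life, life_in_path):
    """Apply the proposed rule to one planned action.

    - Weapons are explicitly exempt from the rule.
    - A machine capable of taking a life must be able to detect life;
      if it cannot, it should not act at all.
    - If life is detected in the action's path, the action is blocked.

    All names here are illustrative assumptions, not any real API.
    """
    if machine_is_weapon:
        return True           # exempt: specifically designed to take life
    if not can_detect_life:
        return False          # lethal-capable but blind to life: refuse to act
    return not life_in_path   # avoid killing or injuring where possible


# Example: an autonomous vehicle (not a weapon) whose life detector
# senses a pedestrian in its path -- the action is blocked.
print(is_action_permitted(machine_is_weapon=False,
                          can_detect_life=True,
                          life_in_path=True))   # prints False
```

The design choice worth noting is the second branch: a machine capable of killing that cannot detect life fails the rule outright, which is exactly where cheaper life-detection hardware would matter.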
That’s it. Funding scientific research to solve what is fundamentally a philosophical issue looks a lot like declaring war on an international policy issue [the War on Terror]: lots of money spent and poor results. Unless, of course, the research makes the life-detecting capabilities of machines more affordable. I live in hope.