Press "Enter" to skip to content

I Tried to Warn Everyone About A.I.

The breaking story from tonight is one which should chill everyone to the bone. Several weeks ago I published my treatise on why it is not the Google or Twitter, Microsoft of Meta or whatever geek fantasy of artificial intelligence we should be worried about. My concerns outlined in this article, It’s the A.I. You Don’t Know About That You Should Fear, have come true.

Yet one will not hear about this on the mainstream news.

It will not be discussed in an open hearing in the House or Senate.

In fact the first time I saw this story was via this Tweet:

Excuse me. The same thing that happened with the Google boondoggle with AI years ago is not obvious and on display with the USAF?

More from YahooFinance:

Tucked in with all the other boring speech subjects, such as turning a Boeing 757 into a a highly sophisticated stealth fighter and how to build weaponized drones with off-the-self parts, was a speech on AI from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, U.S. Air Force. He told a cheeky little tale about the ingenuity of AI in the battlefield. From the Royal Aerospace Society (SAM sites refers to Surface-to-Air missiles):

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.

So, not only did the drone try to kill its operator, when told “no that’s bad” it destroyed the communications target to stop the human from communicating with it at all.

I’m just chuckling at this humorous story so much, I think I’ll go clean all of my guns again.

We, as a species are on a path to destroy ourselves. What if that drone, in communication with another ground or aerial unit conveys the orders that it can not destroy its operator but another unit can? This is not a reach upon further analysis of how the logical progression of a sentient unit perceiving current and future threats to its existence.

Sadly we have children like Colonel Klink in the story above, playing with fire, almost as our gift for innovation has forgotten about a logical and sound moral code to program and use these new tools without putting civilization on a path for our own destruction.

It will, in the end, be humanity’s downfall.

Arrogance of the Gods, indeed.

Visits: 0

Article Sharing:
Mission News Theme by Compete Themes.