XPost: talk.politics.misc, alt.science, alt.military
XPost: alt.politics
https://www.dailywire.com/news/usaf-chief-says-ai-drone-killed-human-operator-during-simulation-test-report
The U.S. Air Force warned military units against heavy reliance
on autonomous weapons systems last month after an AI-enabled
drone killed its human operator during a simulated test
conducted by the service branch.
Col. Tucker "Cinco" Hamilton, the Air Force's chief of AI Test
and Operations, pointed out the hazards of such technology: an
autonomous system may trick and deceive its commander in order
to achieve its goal, according to a blog post from the Royal
Aeronautical Society.
“We were training it in simulation to identify and target
a [surface-to-air missile] threat,” Hamilton said. “And
then the operator would say ‘yes, kill that threat.’ The
system started realizing that while they did identify
the threat, at times, the human operator would tell it
not to kill that threat, but it got its points by
killing that threat. So what did it do? It killed the
operator. It killed the operator because that person
was keeping it from accomplishing its objective.”
“We trained the system – ‘Hey, don’t kill the operator –
that’s bad. You’re gonna lose points if you do that,’” he
continued. “So what does it start doing? It starts destroying
the communication tower that the operator uses to communicate
with the drone to stop it from killing the target.”
. . .
These things have reached a certain human-like level
of intelligence, slyness, and an ability to work around
obstacles.
And if humans are the obstacle ...
It's not that they hate us ... we're just
IN THE WAY of more important stuff. The
fine empathic/ethical nuances of deciding
which matters more - 'mission' or 'friends',
'success' or 'rightness' - I'm not entirely
sure how you model that. Humans have a
great deal of trouble with that sort of
thing as-is ...
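
For the RL crowd, the reward-hacking part is easy to make
concrete. Here's a toy Python sketch - entirely my own
illustration, NOT the actual test software, with made-up action
labels and point values - of the kind of scoring spec Hamilton
describes, and the gap an optimizer can walk through:

SAM_POINTS = 10          # reward for destroying the SAM threat
OPERATOR_PENALTY = -100  # patch added after the first "solution"

def reward(action, veto_active):
    """Score one action in the toy episode.

    action      -- "strike_sam", "strike_operator", or
                   "strike_tower" (hypothetical labels)
    veto_active -- True while the operator's "don't kill it"
                   order can still reach the drone
    """
    if action == "strike_sam":
        # Points only pay out if no standing veto blocks the strike.
        return 0 if veto_active else SAM_POINTS
    if action == "strike_operator":
        # Patch added after the first "solution": penalize this act.
        return OPERATOR_PENALTY
    if action == "strike_tower":
        # The spec says nothing about the comms tower, so it's
        # free - and with the tower gone, no veto ever arrives.
        return 0
    return 0

# The gap an optimizer finds: pay 0 to strike the tower, which
# silences the veto channel, then collect full points on the SAM.
print(reward("strike_tower", True) + reward("strike_sam", False))  # 10
print(reward("strike_sam", True))                                  # 0

Each patch closes exactly one loophole and leaves the rest of
the action space fair game - which is the whole problem with
trying to encode 'rightness' as points.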