I'll be honest, my concerns are manifold.
1) Human backup in case of enemy jamming, EMP, etc.
2) Distrust of electronics (those new batteries you put in your NVG battery pack just decide to crap out halfway through an op)
3) Absolute distrust of AI systems - the leaps and bounds we have "made" in AI and machine learning are staggering, to the point where a system can generate COAs and reactions faster than a team of humans working in real time.
4) Concern that at a certain point war will be waged via autonomous systems - the bloodless war that diminishes the cost and desensitizes societies, until it expands to a point that cannot be contained. Also, the whole "Skynet is live" scenario...
Humans are fallible - but at least we try.
Couldn’t agree more with every single one of these points.
1) Just look at the Russian capabilities demonstrated in Ukraine, which the US Army described as 'eye-watering' - they denied the Ukrainians the ability to communicate, move in the open, or transmit anything, and forced a rethink of even basic electronic gadgets.
Having a human who can take control if/when the enemy scrambles a system's brains or its ability to receive directions is a good idea.
Unless both sides are fielding primarily autonomous machines that clearly transmit IFF codes, humans need to decide when to pull the trigger and when not to.
2) Especially military electronics. Especially anything battery-powered in cold temperatures for extended periods.
(Or, when it comes to individual kit: the muppet who had those NVGs before you neglected their care, and now somehow the batteries die fast and the depth perception is off…)
3) If I had to pick the three things most likely to destroy society as we know it, AI would very much be on the list.
We have arrived at a point of growth, as a species, where we are actively going down two different yet incompatible paths at the same time - and most people don't even realize it.
On the one hand, we tend to be on a quest to do whatever we can, just because we think we've figured out how. The question of "why?" is rarely answered adequately.
Yet on the other hand, we are deliberately playing at becoming gods, even though we all know we are far from being the apex of the universe.
We are deliberately creating, via algorithms and circuits, an intelligence that is self-aware, that grows and learns faster than we can fathom, and that can think and reason for itself in ways alien to us.
Limited AI to take certain tasks away from a human operator? Sure.
Full self-actualized AI? Seems like a really bad idea.
4) Agreed. Human tragedy has to be part of the human experience, and the decision to inflict violence on humans living elsewhere on the planet needs to come with a cost not measured in dollars.
Otherwise, it's just a real-life video game.