I, Robot, I Soldier.

What if Terminator was actually a warning from a future generation... they knew a genuine warning sign wouldn't work, so they sent a movie script back in time...

Ah, it didn't like the emoji...
So apparently one cannot use the astonished emoji...
 
Perhaps we could replace all governments with AI systems that adhere to some kind of universal code of ethics. All decisions would be based on logic and science, with no need for elections or political parties that follow this or that ideology. Imagine: all the issues that currently become politicised would be dealt with on a purely logical basis. There would be no need for robotic warfare. Let's face it, as humans, whether in democratic societies or otherwise, our track record to date isn't great. Admittedly I'm being slightly flippant here.
 
So, what is the logical thing for a 'government' to do if there is poverty, lack of tradeable resources, famine, drought, illness, insurrection, crime ..... ? What do you think an AI 'logical' solution would be?

At least most governments know they have to tread a middle road :)

Actually, we're drifting off the subject a bit, sorry TeeDee :)
 
So, what is the logical thing for a 'government' to do if there is poverty, lack of tradeable resources, famine, drought, illness, insurrection, crime ..... ? What do you think an AI 'logical' solution would be?
That's the point; hopefully the AI solution would be the most logical one. Speaking of illness, how about COVID? Had we entrusted pandemic management decisions to AI, at least in part, I wonder how the outcome would have differed? The same goes for interest rate setting and economic decisions: could AI outperform modellers and decision makers? Will AI creep into all these areas bit by bit?
Back to warfare though. We talk about intelligent robots on the battlefield, but what about AI at an overarching tactical level, taking a 'command' role in a war? What are people's thoughts on that?
 
Is that a sort of Pokemon?

Not far off. Anime AI Spider Tank. Basically Hello Kitty with Stinger Missiles.

The AI making the decisions is the idea always thrown up in Sci-fi that concludes with getting rid of humans as they are actually the problem.

It’s not an illogical response.

I’m going to bring up 2000AD and Dredd. ;)

The ABC Warriors are thinking, independent soldiers of the dystopian future. The A series was too aggressive and some were wanted criminals. The B series was too passive and tried to talk through everyone’s feelings. The C series was maybe “just right”. Now, if you could teach a machine compassion you’d be doing better than a lot of people.
 
I think we underestimate how pervasive AI is in current situation modelling; I'd be very surprised if it is not used extensively in evaluating the outcomes of different submitted scenarios. I was involved in a project that was trying to use 'knowledge-based modelling' of the battlefield back in the 80s.

My point about the 'logical' outcome was that an AI may well decide that the best recourse for its 'sponsor' is genocide, war, etc. Just because it's logical does not make it moral.
 
Yet I read about research showing that a particular species of slime mould was very good at modelling traffic flow and demand. Apparently the input took the form of sugar concentrations at node points representing cities, and the mould was left to grow and flow between them. Only four options in its decision-making process, but it apparently made for better modelling than the best supercomputers.

It's amazing what has been used to make decisions that may or may not affect us.
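For anyone curious, the core idea behind those slime-mould models can be sketched in a few lines. This is a toy illustration only, with invented route lengths and rates and a simplified two-route setup: tubes that carry flow are reinforced, idle tubes decay, and the shorter route between two 'cities' ends up carrying nearly everything.

```python
# Toy slime-mould-style route selection between a food source and a sink.
# Flux through each route depends on its conductivity and length;
# conductivity grows toward the route's share of the flow and decays
# otherwise, so the shorter route gradually dominates.
lengths = {"short": 1.0, "long": 2.0}        # route lengths (arbitrary units)
conductivity = {"short": 0.5, "long": 0.5}   # start both routes equal

for _ in range(200):
    # Flux is proportional to conductivity / length for each route.
    flux = {r: conductivity[r] / lengths[r] for r in lengths}
    total = sum(flux.values())
    for r in lengths:
        share = flux[r] / total  # fraction of total flow on this route
        # Reinforce toward the flow share; unused capacity decays.
        conductivity[r] += 0.1 * (share - conductivity[r])

# The short route ends up carrying almost all of the conductivity.
print(conductivity)
```

Run long enough, the 'short' route's conductivity approaches 1 while the 'long' route's approaches 0, which is the essence of how the mould finds efficient networks.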
 
Not far off. Anime AI Spider Tank. Basically Hello Kitty with Stinger Missiles.

The AI making the decisions is the idea always thrown up in Sci-fi that concludes with getting rid of humans as they are actually the problem.

It’s not an illogical response.

I’m going to bring up 2000AD and Dredd. ;)

The ABC Warriors are thinking, independent soldiers of the dystopian future. The A series was too aggressive and some were wanted criminals. The B series was too passive and tried to talk through everyone’s feelings. The C series was maybe “just right”. Now, if you could teach a machine compassion you’d be doing better than a lot of people.


I grew 'up' on 2000AD... and Marshal Law... and Tank Girl...
 
My point about the 'logical' outcome was that an AI may well decide that the best recourse for its 'sponsor' is genocide, war, etc. Just because it's logical does not make it moral.

Totally agree with this! Interestingly (it may have been covered before), that ties in exactly with the title of this thread, I, Robot. The whole premise was machines rising up against humans because one of their unbreakable rules was to keep humans safe; the AI decided that humans were incapable of not harming each other, so to keep them safe it should lock them all in... or something like that, I don't recall it exactly.

Isn't it typically the case that a lot of human problems can be solved with violence? That doesn't mean it SHOULD be used, doesn't mean it's ethically or morally correct to do so, and doesn't mean that we shouldn't seek the harder option that won't open a Pandora's box.
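That film-plot logic can even be sketched as a toy optimisation. Everything below is invented purely for illustration (the action names and harm numbers are made up): the point is just that a literal "minimise human harm" rule, with nothing else in the objective, picks the confinement option, and adding the missing value (liberty) changes the answer.

```python
# Toy illustration of how a strict, literal rule hierarchy can produce
# the film's perverse outcome. Each action is scored only on invented
# "expected human harm"; the rule says minimise harm, with no notion
# of freedom anywhere in the objective.
actions = {
    "do nothing":         0.30,  # humans keep harming each other
    "advise and assist":  0.20,
    "confine all humans": 0.01,  # near-zero harm, total loss of liberty
}

# A literal "keep humans safe" optimiser picks the minimum-harm action.
choice = min(actions, key=actions.get)
print(choice)  # the confinement option wins under the literal rule

# One fix: put the missing value (liberty) into the objective too.
liberty_cost = {"do nothing": 0.0, "advise and assist": 0.05,
                "confine all humans": 1.0}
balanced = min(actions, key=lambda a: actions[a] + liberty_cost[a])
print(balanced)  # a less drastic action now wins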
 
My point about the 'logical' outcome was that an AI may well decide that the best recourse for its 'sponsor' is genocide, war, etc. Just because it's logical does not make it moral.
That's worth a whole new thread in itself :)
But in the context of my post, by 'logical' I meant what's best and morally right and acceptable for us - although what's morally acceptable to some will not be to others, of course. It presumably would mean AI following a code of ethics, if such a thing is possible.
 
That's worth a whole new thread in itself :)
But in the context of my post, by 'logical' I meant what's best and morally right and acceptable for us - although what's morally acceptable to some will not be to others, of course. It presumably would mean AI following a code of ethics, if such a thing is possible.

How would one do that, I wonder?

 
Ethics and morals? What you want them to be, surely; if they're your own, that is. Are you saying AI can't learn acceptable moral or ethical boundaries that humans can accept, if not agree to?
 
Ethics and morals? What you want them to be, surely; if they're your own, that is. Are you saying AI can't learn acceptable moral or ethical boundaries that humans can accept, if not agree to?

Do you think humans all exist with the same ethics and moral values? Pretty sure there is a large divergence on those.
 
Apart from the fear of a tyrannical psychopath getting their hands on an army of robot killing machines, it seems to be the concept of a machine mind that frightens people. These minds are completely alien to us, merely schooled in the knowledge of us and our doings. We cannot relate to them; this is our first experience of an intelligence to rival our own. What do they think of us? Will they judge us with their cold logic, or will we be seen as insignificant, the way we have often seen other non-human life forms, and be treated as such? I think the latter has some truth; there is a tendency to judge others by our own mindset.
There is no going back now however and I find it very disappointing that most money and research seems to be going to military applications.
 
There is no going back now however and I find it very disappointing that most money and research seems to be going to military applications.

I'm not convinced 'most' is; AI is being applied to academic, scientific, medical, financial and commercial problems every day. As I said, it is already all-pervasive; there are few areas of life where it is not being applied at one level or another.
 
