Sigh, it's exactly that kind of thinking that got our future generations into trouble and made them have to send back Arnold Schwarzenegger to fix things...!

We can only travel forward in time.
I am just waiting for a Tachikoma to be released.
That's the point; hopefully the AI solution would be the most logical one. Speaking of illness, how about COVID? Say we had entrusted decision-making on pandemic management to AI, at least in part; I wonder how the outcome would have differed? Same with interest rate setting and economic decisions: could AI outperform the modellers and decision makers? Will AI sort of creep into all areas bit by bit?

So, what is the logical thing for a 'government' to do if there is poverty, lack of tradeable resources, famine, drought, illness, insurrection, crime...? What do you think an AI 'logical' solution would be?
Is that a sort of Pokemon?
Not far off. Anime AI spider tank. Basically Hello Kitty with Stinger missiles.
The AI making the decisions is the idea always thrown up in Sci-fi that concludes with getting rid of humans as they are actually the problem.
It’s not an illogical response.
I’m going to bring up 2000AD and Dredd.
The ABC Warriors are independent-thinking soldiers of the dystopian future. The A series was too aggressive, and some became wanted criminals. The B series was too passive and tried to talk through everyone's feelings. The C series was maybe "just right". Now, if you could teach a machine compassion, you'd be doing better than a lot of people.
My point about the 'logical' outcome was that an AI may well decide that the best recourse for its 'sponsor' is genocide, war, etc. Just because it's logical does not make it moral.
That's worth a whole new thread in itself
But in the context of my post, by 'logical' I meant what's best and morally right and acceptable for us - although what's morally acceptable to some will not be to others, of course. It presumably would mean AI following a code of ethics, if such a thing is possible.
Ethics and morals? What you want them to be surely. If they're your own that is. Are you saying AI can't learn acceptable moral or ethical boundaries that humans can accept if not agree to?
There is no going back now, however, and I find it very disappointing that most of the money and research seems to be going to military applications.