I, Robot, I Soldier.

Broch

Life Member
Jan 18, 2009
8,490
8,368
Mid Wales
www.mont-hmg.co.uk
I asked ChatGPT if AI was already all pervasive; it said:

<
As of my knowledge cutoff in September 2021, AI (Artificial Intelligence) was already becoming increasingly prevalent in various aspects of our lives. AI technologies were being integrated into numerous industries and sectors, such as healthcare, finance, transportation, entertainment, and customer service, among others.

AI has been deployed in numerous applications, including voice assistants, image and speech recognition, recommendation systems, autonomous vehicles, and fraud detection. Additionally, AI has also found its way into social media platforms, online advertising, and personalized content delivery.

While AI was on a path to pervasiveness, it's important to note that the full extent of AI's pervasiveness may have evolved since my knowledge cutoff. Rapid advancements in AI technology and its adoption across industries suggest that AI is likely to become even more pervasive in the future. However, for the most up-to-date and accurate information on AI's current state, it's advisable to refer to recent sources and research in the field.
>

Note, no mention of military applications! --- spooky

:)
 

Broch

Life Member
Jan 18, 2009
8,490
8,368
Mid Wales
www.mont-hmg.co.uk
So, I asked it specifically:

<
AI is being increasingly integrated into various military applications and systems. Nations around the world are investing in AI technologies for defense purposes. The exact extent of AI's usage in the military may vary depending on the country and its specific capabilities, strategies, and technological advancements.
AI is utilized in military applications for tasks such as:
  1. Autonomous Systems: AI plays a significant role in the development of autonomous vehicles, drones, and robotic systems used for surveillance, reconnaissance, and even combat missions.
  2. Decision Support Systems: AI can assist military personnel in making faster and more informed decisions by analyzing vast amounts of data, providing real-time intelligence, and identifying patterns and anomalies.
  3. Cybersecurity and Defense: AI algorithms are used to detect and respond to cyber threats, identify vulnerabilities, and strengthen the overall security posture of military networks and systems.
  4. Predictive Analytics: AI can be used to analyze historical data and predict future scenarios, assisting military planners in developing strategies and optimizing resource allocation.
  5. Training and Simulation: AI technologies are employed in military training exercises and simulations to create realistic scenarios, virtual environments, and intelligent opponents for training purposes.
It's important to note that the specific applications and capabilities of AI in the military continue to evolve rapidly. As technology advances, AI is likely to play an increasingly significant role in military operations, intelligence gathering, and strategic decision-making.

>
 

Paul_B

Bushcrafter through and through
Jul 14, 2008
6,413
1,702
Cumbria
Do you think Humans all exist with the same ethics and moral values? Pretty sure there is a large divergence on those
That's my point. They're what you want them to be, what I want them to be, and the same for everyone. Who's to say that AI can't learn what its own morals and ethics are? Or at least an approximation of them. After all, they're really just a learned thing for humans anyway.
 

Paul_B

Bushcrafter through and through
Jul 14, 2008
6,413
1,702
Cumbria
I am 100% sure my moral code will be different from yours - maybe not conflicting, but definitely different.
Let me clarify. We all have our own morals or ethics, but in order to fit into society we mostly act within an acceptable range of morality or ethicality. This could easily be something AI can learn: all different, but mostly within ranges acceptable to society. Outside of those ranges you get things like criminality and antisocial behaviour, I reckon.

Whatever you think your morals are like, I think the idea that AI can't learn enough about morals to simulate something close to them is wrong. We learn ours, so why not AI?
 

Scottieoutdoors

Settler
Oct 22, 2020
889
635
Devon
Whilst we may have covered this in other words: forget AI and our morals. If the whole thing continues to learn, is there a way it can learn to override the permanent laws we create for it? What is the biggest threat to AI? Humans... Could it learn that, for its own survival, wiping us out would stop us destroying it?

I mean, perhaps this is way beyond its capabilities...
 

Jared

Bushcrafter (boy, I've got a lot to say!)
Sep 8, 2005
3,577
749
51
Wales
There is an example where an AI figured out something it was not meant to. Fortunately it was just playing a game: when it determined it was losing, the AI figured out how to crash the computer instead, preventing the loss from being logged, and so the game was assumed to be a draw.

AI controlling a swarm of drones seems pretty frightening. We've become pretty good at destroying incoming rockets and missiles in flight, even when launches of 100 or so try to overwhelm defences. But a large, centrally controlled swarm would be almost impossible to defend against: the AI would decide how many drones to sacrifice to occupy the defences whilst the rest destroy the target.

I believe there was a crude attempt using six Ukrainian drone boats that attacked Sevastopol Naval Base, causing damage to at least three Russian ships.
 

TeeDee

Full Member
Nov 6, 2008
10,992
4,098
50
Exeter
I think for me it is the concern regarding the philosophy behind A.I.

Human thinking about what is 'right' and what is 'wrong' is ingrained into many of us by societal conditioning and religion - albeit diluted over time, that has led to the code and rule of law that permeates our society.

Without going too off-piste: what is good? And what is bad? Thoughts, and how they lead to individual action.

Friedrich Nietzsche wrote 'Beyond Good and Evil', which questions exactly what and why we consider one thing good and another evil. If A.I is unconfined by any ethical fuzzy logic (and if it's going to be true A.I it has to be unbound) then it will construct its own decisions unhindered. https://en.wikipedia.org/wiki/Beyond_Good_and_Evil

A lot of what is going to be the disconnect between our thinking and A.I thinking is (I think) going to be based around the notion of 'value' and perceived consequence.
 
Last edited:

Paul_B

Bushcrafter through and through
Jul 14, 2008
6,413
1,702
Cumbria
It seems there's a view that AI being a real danger to humans can't be stopped - that it's inevitable. I liken it to the nuclear genie that was let out of the bottle: so much potential to destroy us, but it hasn't. It's done so much good, from nuclear medicine to power to material characterization, which has fed into many areas of engineering development.

So why hasn't nuclear technology destroyed us? Perhaps it simply comes down to the fact that we don't want to be destroyed: we found a way to benefit from its very destructive potential.

Transfer that to AI: like nuclear, we're already widely using it, mostly for good, and most people have no clue how widely it is being used or where. Could we even get the genie back into the bottle?
 

Scottieoutdoors

Settler
Oct 22, 2020
889
635
Devon
It seems there's a view that AI being a real danger to humans can't be stopped - that it's inevitable. I liken it to the nuclear genie that was let out of the bottle: so much potential to destroy us, but it hasn't. It's done so much good, from nuclear medicine to power to material characterization, which has fed into many areas of engineering development.

So why hasn't nuclear technology destroyed us? Perhaps it simply comes down to the fact that we don't want to be destroyed: we found a way to benefit from its very destructive potential.

Transfer that to AI: like nuclear, we're already widely using it, mostly for good, and most people have no clue how widely it is being used or where. Could we even get the genie back into the bottle?


Yeah.... but you're looking at the topics in almost very black and white terms. For example, nuclear is the "topic"; its applications are the questionable part: medicines, tech etc. = good... warheads = bad.
So AI? Medicines etc. = good, connections to military =?= bad? What about AI connected to nuclear warheads?

I can't think of a way to say this other than rather bluntly... There was a group that appeared a few years ago, dressed in black, who did some awful things to many people, including humanitarians, encouraging their offspring and wives to be part of and carry out actions in the sordid experience. Now there are camps and pockets full of these people that the world does not know what to do with; many of them supposedly have no remorse, nor any belief that what they did was wrong. We're in a very difficult and sensitive position whereby all the things that would make logical sense, whether family separation or something much more awful, make more sense than permanent internment without much of a plan for the future... but the decisions will weigh heavily on the minds of the humans that have to make them, or indeed carry them out. Would AI "solve" it overnight so we could all have clean hands as a result? Is that an acceptable use? It certainly takes the moral and ethical conundrums out of the works: put the numbers into a computer and the computer calculates odds/percentages/necessary outcomes...

To me, AI is the whole thing of driverless cars: does the car steer into the path of an oncoming juggernaut, meaning your certain death, or into the old lady, meaning hers...?
 

Ozmundo

Full Member
Jan 15, 2023
457
359
48
Sussex
The old lady. :p

Potentially an AI controlled traffic management system (not just individual vehicles) could prevent it happening. No speeding, full stopping distances. No medical incidents at the wheel and crashing into school children.
 

Paul_B

Bushcrafter through and through
Jul 14, 2008
6,413
1,702
Cumbria
Humans developed the software/hardware of AI, so whatever they do, we share a degree of the blame.

If they sort out the people in black, it does not leave our hands clean of the method they took. That's the thing: it's still programming, and the programmers have a share of the blame too. I wonder if it's the same with our creator, if that's what you believe in?
 

TeeDee

Full Member
Nov 6, 2008
10,992
4,098
50
Exeter
The old lady. :p

Potentially an AI controlled traffic management system (not just individual vehicles) could prevent it happening. No speeding, full stopping distances. No medical incidents at the wheel and crashing into school children.


I have thought about driverless cars - once you've reached a certain watershed moment of assimilation into traffic systems, say 70% of cars, isn't it super critical that all the driverless cars operate off exactly the same parameters of safety distance, movement, speed changes etc., so that potentially random variables (an 80-year-old shifting into reverse in the middle lane of a motorway) don't cause cascade issues? A stop/start flock migration of traffic? It needs to be a truly connected system using the same presets to work flawlessly.
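The "same presets" worry is easy to make concrete: if every car derives its following gap from the same shared parameters, its behaviour is predictable to every other car. A minimal sketch in Python; the reaction time and braking figures are assumptions for illustration, not real traffic-engineering values.

```python
# Toy sketch of "same presets": every automated car computes its safe
# following gap from the same two shared parameters. The figures are
# illustrative assumptions, not real traffic-engineering values.

REACTION_TIME_S = 0.2  # assumed sensing/compute latency of an automated car
DECEL_MS2 = 6.0        # assumed emergency braking deceleration, m/s^2

def stopping_distance(speed_ms: float) -> float:
    """Distance covered while reacting plus distance covered while braking."""
    return speed_ms * REACTION_TIME_S + speed_ms ** 2 / (2 * DECEL_MS2)

# At UK motorway speed, 70 mph is about 31.3 m/s:
gap = stopping_distance(31.3)
print(f"required gap at 70 mph: {gap:.1f} m")
```

If even one car used a different deceleration figure it would compute a different gap, which is exactly the kind of mismatch between vehicles the post worries about.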
 

Suffolkrafter

Settler
Dec 25, 2019
554
503
Suffolk
OK, here's a question. How does AI actually work? Let's say you want AI to make decisions relating to a pandemic. It needs data to work on, presumably the same data that is available to humans. Humans would model the data to predict what may happen, with (crucially) a degree of error or uncertainty; then you look at what decisions you can take, model the outcomes, and pick your course of action based on that. Does AI work in the same way? Will it come up with 'better' predictions with greater certainty? Or is it just that it can do the same stuff faster? Is it possible we're overestimating what it can do?
 

Broch

Life Member
Jan 18, 2009
8,490
8,368
Mid Wales
www.mont-hmg.co.uk
In a way, speed of thinking is a great part of the advantage of AI, but not everything. As well as dealing with a very large number of input parameters to a problem, it can apply 'nuances' that would create far too chaotic a scenario for a human to work with. Imagine you have a hundred parameters, all with different levels of influence on the problem but all with only a certain percentage of accuracy: going through all the possible outcomes would be impossible for humans but (to a certain level of granularity) straightforward for an AI system. Now think about a thousand parameters, changing scenarios, etc. However, that is just 'high data volume' machine learning. It's when the system decides what data is missing, how to get new data, balancing relevance etc. that it becomes AI.
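The hundred-uncertain-parameters point above can be sketched as sampling scenarios instead of enumerating them. A toy Python example; the parameter count, the accuracy ranges, and the weighted-sum combining rule are all invented for illustration.

```python
import random

# Toy version of the idea above: many uncertain inputs, each with its own
# influence (weight) and accuracy, explored by random sampling rather than
# by enumerating every combination. All numbers here are made up.

N_PARAMS = 100      # number of input parameters
N_SAMPLES = 10_000  # scenarios sampled instead of exhaustive enumeration

random.seed(42)

# Each parameter: (nominal value, influence weight, accuracy as a fraction)
params = [(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0.8, 1.0))
          for _ in range(N_PARAMS)]

def sample_outcome(params):
    """One scenario: jitter each value by its inaccuracy, then combine."""
    total = 0.0
    for value, weight, accuracy in params:
        noise = random.uniform(-(1 - accuracy), 1 - accuracy)
        total += weight * (value + noise)
    return total

outcomes = [sample_outcome(params) for _ in range(N_SAMPLES)]
mean = sum(outcomes) / len(outcomes)
print(f"mean outcome {mean:.2f}, "
      f"spread {max(outcomes) - min(outcomes):.2f}")
```

Ten thousand sampled scenarios over a hundred noisy inputs is trivial for a machine, while the exhaustive version (every combination of every inaccuracy) is not even writable down.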

Humans may well have defined the initial algorithms that an AI works from, but there are already AI systems capable of self-programming, measuring the outcomes of new algorithms, and choosing the best. I'm not sure we'll contain the definition of 'best' for long.
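"Measuring the outcomes of new algorithms and choosing the best" can be caricatured in a few lines: generate candidate variants, score each against the task, keep the winner. The task (recovering y = 3x) and the scoring rule here are invented purely for illustration.

```python
import random

# Toy "measure and keep the best": random search over a parameter that
# changes a tiny model's behaviour. The task and scoring are made up.

random.seed(0)
data = [(x, 3 * x) for x in range(10)]  # pretend observations of y = 3x

def score(slope: float) -> float:
    """Lower is better: squared error of the candidate rule y = slope * x."""
    return sum((slope * x - y) ** 2 for x, y in data)

best_slope, best_score = None, float("inf")
for _ in range(200):                  # generate candidate variants
    candidate = random.uniform(0, 6)
    s = score(candidate)
    if s < best_score:                # measure the outcome, keep the best
        best_slope, best_score = candidate, s

print(f"best slope found: {best_slope:.2f}")
```

The human defined the search loop and the score, but which candidate wins is decided entirely by measurement, and that is the crack the post is pointing at: whoever defines 'best' defines the behaviour.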
 

Paul_B

Bushcrafter through and through
Jul 14, 2008
6,413
1,702
Cumbria
That description sounds like data modelling. You're programming the parameters and methodology used by the computer. I thought AI was more about programming how to learn so that it works out for itself how best to find the answer? It seems to me that you're describing a more programmed system. Big data perhaps instead of AI.
 

Scottieoutdoors

Settler
Oct 22, 2020
889
635
Devon
@Paul_B
I do understand that... I suppose what I meant is that humans wouldn't have made that particular decision. For example, the inventor of a missile did not choose the target or the collateral it destroys, so there is a larger degree of separation and a larger case of "well, the computer worked that out to be the best outcome"... especially if we're talking about computers which learn, rather than just doing what programmers said.

Is the whole concept of "scary AI" that it effectively becomes a sentient being?
 

Broch

Life Member
Jan 18, 2009
8,490
8,368
Mid Wales
www.mont-hmg.co.uk
That description sounds like data modelling. You're programming the parameters and methodology used by the computer. I thought AI was more about programming how to learn so that it works out for itself how best to find the answer? It seems to me that you're describing a more programmed system. Big data perhaps instead of AI.

Sorry, I thought I had made that distinction.
 
