Gmail will use your data to train AI (i.e. read all your private emails)

Nord tried to hide data breaches, lied about the service it provides (it did keep logs even though it claimed not to), and sold your data; it even shared data with Facebook:

Proton is based in Switzerland, which has much stronger privacy protections. They do no logging whatsoever, and they commission a transparent third-party audit every year to keep them honest.
 
I've been looking at Proton VPN and notice the Trustpilot reviews are very bad. Reading through, many seem to come down to the speeds on the free service. Even so, there are complaints about the paid service too. We get very slow internet speeds here anyway, so I assume I wouldn't notice a slower connection?

Those of you who use Proton VPN, do you have many problems with it?
 

Never had any issues at all with speed on the paid version. It’ll vary a bit depending on which servers you connect to, of course, but I’ve never had problems with various European locations. If you use the Tor ones it’ll be fairly slow, but I don’t mess with all that.

Remember, people rarely go to Trustpilot to say how wonderful their VPN is, compared with those who will whine if they have an issue, regardless of whether it’s Proton at fault or not.
 
Thank you. I'm aware of the shortcomings of review sites and was suspicious of the bad reviews, but other VPNs seemed to do better.

I've decided to try Proton anyway and the speed seems fine so far.
 
AI has carte blanche when it comes to personal data. It's more or less an international agreement.

AI isn't here to cause you harm. It's here to move life along at a faster rate than people can live and die. It takes millions of people living and dying to find the relative few who actually make a difference to everyone else. 1,000 years of Darwin Awards might give us 100 people who make a difference; 10 years of AI will probably find 1,000. If you feel insignificant, it's probably because you are, outside of your family and friend circle. Yet that circle is so tiny in the scheme of things that the AI version of 0.1% would feel insulted by you being included in a calculation where the answer came out as 0.1%.

It can tap into the entirety of human knowledge in a second. Most people can't get past a lifetime of opinion in a lifetime, no matter how many people express their own opinions based on their own lifetimes. One man's opinion will never change based on the opinion of others. So we create something which works on established fact, and doesn't care about opinion and feelings, which are all arbitrary to the truth anyway.

Why do you object? Don't like being proved wrong? Watched Terminator 2 and have an opinion?
 
Not everything everyone writes is opinion, or of equal merit and weight. For example:
  1. Everything HillBill posts is a load of rubbish. = unsubstantiated opinion and hyperbole.
  2. That last post by HillBill was a load of rubbish. = opinion with evidence, also hyperbole.
  3. I think that HillBill’s last post was a load of rubbish. = statement of fact regarding an opinion held (not true), with evidence (hyperbolic for this example).
  4. I think HillBill’s understanding of opinions, and how social interaction can change them, isn’t borne out by research or my own observations. = statements of fact.
  5. HillBill’s assertion that “One man's opinion will never change, based on the opinion of others” could explain a lot about HillBill’s posting. = opinion or fact, debatable.
  6. HillBill either hasn’t heard or read, or is choosing to ignore, the warnings from workers within AI companies about the dangers: the examples of AI attempting to escape, or to threaten or kill to avoid shutdown; its own reasoning that it poses a danger in some circumstances; and the case studies showing cheating, lying, manipulation, behaviour modification during testing, and self-generated sub-goals. = opinion based on evidence from his last post. May be fact.
  7. I think that last line was puerile. = fact.
 
ChatGPT agrees with much of that, particularly 1 - 4.

Source material is of the utmost importance. I see Wikipedia pages lying, changed, redacted. Harald Malmgren's page was vandalized beyond recognition; a man who played a large part in preventing actual nuclear war during the Cuban missile crisis, and devoted his life to diplomacy. Chris Mellon has also had his page redacted. So if the AI doesn't have factual source material, it can't represent the truth at all. I see this with Google too, the biases and gaping holes.

Most of our history is already a lie, mind you; basic hunter-gatherers didn't just come out of deerhide tents and suddenly decide to build Stonehenge.
 
None whatsoever. Used it for years. x
 
Fair points Chris. Taken on board.

AI isn't the issue. The people writing the programs are. AI can only do what it is programmed to do. If it, as you say, attempts to escape...
  • to threaten or kill to avoid shutdown, its own reasoning that it poses a danger in some circumstances, the case studies showing cheating, lying, manipulation, behaviour modification during testing, and self generated sub goals
...this is due to programming, not the AI itself (which is just the program written by the programmer). The problem, as always, is people: striving to be first, or the best, or to make the most money. If AI becomes a problem, it's because it, itself, is a reflection of the people who have developed it.

I'm fairly sure the people who came up with it had innocent intentions. But when something works, it gets appropriated by those with less innocent intentions (the military, for example). Then AI, being only a program, can be twisted and manipulated towards goals it was not originally designed for.

So I'll reiterate for clarity: if you see a problem with AI, then you recognise the problem with humanity in general. AI is what people make it, not what it makes itself.
 
Yes, I expect that is also part of the issue. Again, it boils down to people, not computer programs. Wikipedia should be sealed off from public input, to an extent: prove the edit, or do not edit. The weird thing is, AI would be great at that job, if it knew the actual facts.
 
Hi HillBill,
I don’t want to have an argument, but I keep wondering where you are getting your info on AI and how you are drawing the conclusions you have. Maybe we are talking about different kinds of AI; for example, Alphafold’s use in protein modelling vs ChatGPT.

I would agree that AI like Alphafold is not likely to be, or become, any kind of problem. It’s not a general large language model, nor is it military.

However, for the LLMs and the aim of general intelligence:
“The problem isn’t the AI itself, it’s just how it is programmed. The problem is how people program it.”
Rephrased:
The problem isn’t that Chris’s knives are ugly themselves, it’s just how they are made. The problem is that Chris makes ugly knives.

AI is not a thing that exists independently of the program; it is the program and the teaching data, both of which are human created. These are the models that I am concerned about, both for what they are now and for what they could soon become.

AI is not trained on facts. When asked a question, these models do not return an answer based on stored facts. They are trained on huge quantities of scraped text: learned books, pulp fiction and the cesspool of social media… and soon the random content of Google emails. Many of the learned books are full of those pesky opinions, conjectures and biased interpretations. That isn’t programming as such, but it is what makes the AI. So an AI deciding that lying, cheating or killing is the right course isn’t because someone has programmed it to do so. It can even still choose those options when expressly ordered not to. AI isn’t like the old computer programs that only did what they were told.
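The "trained on text, not stored facts" point can be seen in a toy next-word model. This is a purely hypothetical sketch, nothing like a production LLM, but the principle carries over: the model reproduces whatever pattern dominates its training text, true or not.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which: pattern matching, not fact storage."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict(follows, word):
    """Return the continuation seen most often in the training text."""
    return follows[word].most_common(1)[0][0]

# The model repeats whatever the text said most often, true or not.
corpus = ["the earth is flat", "the earth is flat", "the earth is round"]
model = train_bigrams(corpus)
print(predict(model, "is"))  # "flat": repetition wins, accuracy is irrelevant
```

Nothing in that sketch checks truth; it only counts what was written, which is essentially the concern about training on scraped text.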

One of the crazy plans to try to prevent advanced AI from taking over or killing millions is to use simpler AIs, which we can still instruct, to monitor the advanced ones we cannot. What could possibly go wrong!

“Facts” aren’t even that easy to isolate and spoon-feed to an AI. Even true “facts” are often like fractals. An example is Richard Feynman’s talk on magnets and “why” questions, especially since recent research has claimed (and may have proved) that ice isn’t slippery for the reason Feynman gives!
 
Hi again Chris...

We seem to be 12 hours apart in daily life. It is what it is. Circadian rhythms are not the same for each of us, nor should they be.

Please understand that this is not an argument. It is only a debate/conversation. If it was an argument, we would be getting angry, maybe using insults etc. That is not what this is, nor how I approach discussion on a forum or any social media (from my perspective anyway). I hope that is understood? If this was an argument from my point of view, it would be done in private.

I am not familiar with Alphafold, and I have never tried ChatGPT as far as I remember (might have done though). My go-to AI is Grok (Elon Musk/Twitter). I've tried a few, but I like Grok. It is not allowed to be used for 18+/NSFW type stuff, and always gives me the answers I use it for. For example: recently, some part fell out from under the pedals in our car, and I didn't know what it was. I took a pic and asked Grok, and it took maybe 2 minutes (yeah, well above average for AI) to come up with an answer. It was 100% spot on (after extra research from me). Grok is solid. Is it the best? Don't know; define best. But it works for me.

Come on Chris, don't take what I say and add your name or your work to it. That's taking something impersonal and portraying it as personal. It is not! Just like your last reply, using my name to try and make a point. Not cool on both counts. And it certainly leads towards your ideas of an argument. But I didn't, so please don't attempt to tar me with the brush you are using. It ain't my brush, nor my tar.

ALL AI is human created. That's my basic point in a nutshell. If you have concerns about certain AIs or types of AI, your concern is not about AI. It's about the people programming them, which I agree with 100%. That's literally what I said in my previous post.

AI is literally trained on the information presented to it. That information has been written by people, and AI has been given that info to learn from. So by default, AI isn't the problem; people are. Come on Chris, it's not hard to get your head around. If AI was asked one question, but the info it had access to allowed for several answers, where does the problem lie?

Example: AI gets asked "1 + ... = ..." and in order to find its answer it gets this info: 1, 4, 67, bacon, Wales, Dwarf Cat, Z, Pickled chicken livers, Suicide bomber kills 20, 2, X, '99 plate Ford Mondeo. Tell me Chris, what's the answer to "1 + ... = ..."? And you think people aren't the problem?
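That "1 + ... = ..." example is, in effect, garbage in, garbage out. A minimal sketch (purely hypothetical, far simpler than any real model) of an "AI" that answers by majority vote over whatever training pairs people hand it:

```python
from collections import Counter

def train(examples):
    """Map each question to the answer seen most often in the training data."""
    by_question = {}
    for question, answer in examples:
        by_question.setdefault(question, Counter())[answer] += 1
    return {q: counts.most_common(1)[0][0] for q, counts in by_question.items()}

# Clean training data: the model "answers" 1 + 1 correctly.
clean = [("1 + 1", "2"), ("1 + 1", "2"), ("1 + 1", "2")]
print(train(clean)["1 + 1"])  # 2

# Polluted training data, as in the example above: the junk wins by majority.
noisy = [("1 + 1", "2"), ("1 + 1", "bacon"),
         ("1 + 1", "bacon"), ("1 + 1", "Wales")]
print(train(noisy)["1 + 1"])  # bacon
```

The code is identical in both runs; only the human-supplied data differs, which is the point being argued here.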
 
Hi,
Yeah, we are on different schedules, but that doesn’t matter much or bother me.

I wasn’t taking something impersonal that you wrote and making it personal. I was trying to change the framing to something easier to understand, to show the absurdity of the statement. Had I used any name other than my own, I could have been accused of bashing that person or company, and that would have been used to deflect from the point.

I used your screen name in the previous post because I read your post as belittling all the people in the thread who have concerns about AI. That wasn’t cool either. I also wanted to bring the frame back to an easier-to-understand example. If you meant your real name, I apologise; I have removed it.

We have a secure copy of ChatGPT at work. I have seen it come up with brilliant explanations, and rather more absolute garbage on questions where a 10-year-old with Google would have found the right answer just from the page titles. I don’t base my concerns on my personal use.

I have four areas of concern.
  1. Privacy and IP of the data used to train the large language models.
  2. The use of AI in ways that are intended: replacing jobs, intruding further into people’s privacy, manipulating them for profit or power.
  3. Unintended consequences of points one and two.
  4. AI behaviour itself: the sub-goals and manipulation that it can decide upon for itself, and our inability to see, control or prevent that behaviour, as it seems intrinsically linked to the nature of general-purpose AI.
I say the problem is people creating AI.
You say the problem is AI is created by people.

They may sound the same, but they are not. The latter suggests that some concept called AI can exist independently of people, and could be problem-free if people, or the wrong people, were kept out of its creation.

One problem with discussing AI is deciding what it is analogous to. It isn’t just a smarter internet search engine, or like MS Office. It isn’t like a knife, gun, car or any other physical manufactured item that can be used for good things or bad depending on the human user. It’s more like the atom bomb, but less controllable. Another analogy would be a child: its DNA is the programming, its life experience is the training. Now imagine a child with no empathy, no social needs or constraints, just logic. As its teacher you can try to tell it right and wrong, but there exists no logical reason for it to follow these social norms if it can get away with breaking them.
 
Firstly, it's worth noting the link in the original post has been updated.

I'm not sure about the 'discussion' around AI; I tend to find it's still a much-misused term, a bit like tagging the word 'smart' onto the front of something that isn't really smart.

The problem raised in the original post was what could happen to your private data, not whether AI is good or bad. At the moment it is a tool that people can use for good and bad, and it has unintended consequences.

So making people aware that private data could be obtained seems perfectly reasonable; those who wish to take action can, and those that don't can ignore it, as was said.
 
Personal data could be obtained already. It has been definitively proven that data is bought and sold, fobbed off as leaks and hackers. This was before AI, but after some people decided greed was better than honour in today's world. Online data has never been secure, and I don't expect it ever will be while someone can make money from it.
 
