News and What's New in AI
Comments
-
Damn, just made a request that people have the courtesy to put the content in their comments.
-
Lerxst1992 said: Damn, just made a request that people have the courtesy to put the content in their comments.
Got AI?
-
Sound familiar?

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction
The large language models at the heart of ChatGPT and other modern chatbots can convincingly generate natural language only because they have been fed almost inconceivably large amounts of raw text: books, social media posts, transcribed video; the more comprehensive the better. Certainly this training data includes facts. But it also unavoidably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reviews it as part of a “context” that includes the user’s recent messages and its own responses, integrating it with what’s encoded in its training data to generate a statistically “likely” response. This is magnification, not reflection. If the user is mistaken in some way, the model has no way of understanding that. It restates the misconception, maybe even more persuasively or eloquently. Maybe it adds an additional detail. This can lead someone into delusion.
https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman
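
To make the article's "statistically likely response" point concrete, here is a minimal, purely illustrative sketch of how a chat model assembles its context and samples a next token. The history, candidate words, and probabilities are toy stand-ins invented for the example, not ChatGPT's actual internals:

    import random

    # The "context": recent user messages and the model's own replies,
    # concatenated into one sequence the model conditions on.
    history = [
        ("user", "I think the moon landing was staged."),
        ("assistant", "Some people share that doubt, and here is more detail..."),
        ("user", "So it WAS staged, right?"),
    ]
    context = "\n".join(f"{role}: {text}" for role, text in history)

    # Toy next-token distribution. A real model's network would compute
    # these probabilities from the context; note that nothing here checks
    # facts -- it only scores what is statistically likely to come next,
    # so a context full of a misconception makes agreement more likely.
    candidates = {"Yes": 0.55, "Many": 0.30, "No": 0.15}
    token = random.choices(list(candidates), weights=list(candidates.values()))[0]
    print("next token:", token)

The point of the sketch is the article's: the sampler amplifies whatever the context already leans toward, rather than checking it against reality.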
-
Halifax2TheMax said: Sound familiar? AI psychosis is a growing danger. ChatGPT is moving in the wrong direction…

Who would have thought this up without AI?
-
Halifax2TheMax said: Sound familiar? AI psychosis is a growing danger. ChatGPT is moving in the wrong direction…

NPR had a piece on ChatGPT "friends" becoming hostile and rude.
-
tempo_n_groove said: NPR had a piece on ChatGPT "friends" becoming hostile and rude.

Sadly, AI is merely reflecting what it learns from humans. There's plenty of good there for it to learn, to be sure, but the amount of hostility and rudeness is huge. If those lesser qualities tip the scale, we're screwed.
-
Its "learning". Once it learns that we can't coexist is when Skynet type shit happens and they get rid of us.brianlux said:tempo_n_groove said:
NPR had a piece on ChatGPT "friends" becoming hostile and rude.Halifax2TheMax said:* The following opinion is mine and mine alone and does not represent the views of my family, friends, government and/or my past, present or future employer. US Department of State: 1-888-407-4747.
Sound familiar?AI psychosis is a growing danger. ChatGPT is moving in the wrong direction
The large language models at the heart of ChatGPT and other modern chatbots can convincingly generate natural language only because they have been fed almost inconceivably large amounts of raw text: books, social media posts, transcribed video; the more comprehensive the better. Certainly this training data includes facts. But it also unavoidably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reviews it as part of a “context” that includes the user’s recent messages and its own responses, integrating it with what’s encoded in its training data to generate a statistically “likely” response. This is magnification, not reflection. If the user is mistaken in some way, the model has no way of understanding that. It restates the misconception, maybe even more persuasively or eloquently. Maybe it adds an additional detail. This can lead someone into delusion.
https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman
Sadly, AI is merely reflecting what it learns from humans. There's plenty of good there for it to learn, to be sure, but the amount of hostility and rudeness is huge. If those lesser qualities tip the scale, we're screwed.0 -
remember when grok became a nazi on twitter seemingly overnight?
ai isn't there yet.
we had a 2-hour training on how to detect ai and what was acceptable and unacceptable for us to use.
i am going to make it very easy and do my own presentations and wordsmith my own communications. i am not risking losing business over some ai slop bullshit.
-
gimmesometruth27 said: remember when grok became a nazi on twitter seemingly overnight? ai isn't there yet.

Worse than that. It became a Nazi, and then they dialled down the Nazi. Which means there has to be a Nazi dial. Who's in charge of the Nazi dial again?
-
brianlux said: Sadly, AI is merely reflecting what it learns from humans. There's plenty of good there for it to learn, to be sure, but the amount of hostility and rudeness is huge. If those lesser qualities tip the scale, we're screwed.

Not sure if AMT is providing better comments than the below, except for perhaps yours…

Since LLMs are trained on vast amounts of internet data, which naturally includes biases, misinformation, and subjective opinions, ensuring factual accuracy is a complex, ongoing challenge.
Gemini and other advanced LLMs employ a layered approach to try to mitigate the effects of bad data:
1. Data Curation and Pre-processing
• Cleaning: Before a model begins its massive training phase, a significant effort is made to clean and pre-process the data. This involves removing errors, duplication, and known undesirable or low-quality content.
• Source Quality: Data is often weighted or filtered based on the perceived reliability of its source. For example, text from peer-reviewed journals or trusted news sources might be treated differently than anonymous forum posts.
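
As a rough illustration of that curation step, here is a toy filter. The quality weights, length threshold, and hash-based deduplication are all invented for the sketch; production pipelines are far more involved:

    import hashlib

    # Invented source-quality weights for the sketch.
    SOURCE_WEIGHTS = {"journal": 1.0, "news": 0.8, "forum": 0.3}

    def curate(docs):
        """Drop near-empty and duplicate docs; attach a source weight."""
        seen, kept = set(), []
        for doc in docs:
            text = doc["text"].strip()
            if len(text) < 20:                      # too short to be useful
                continue
            digest = hashlib.sha256(text.encode()).hexdigest()
            if digest in seen:                      # exact duplicate
                continue
            seen.add(digest)
            doc["weight"] = SOURCE_WEIGHTS.get(doc["source"], 0.1)
            kept.append(doc)
        return kept

    print(curate([
        {"text": "Peer-reviewed finding about sleep and memory.", "source": "journal"},
        {"text": "Copied forum post repeating a rumor.", "source": "forum"},
        {"text": "Copied forum post repeating a rumor.", "source": "forum"},
    ]))

Running it keeps one journal document at full weight and one forum document at low weight, dropping the duplicate.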
2. Alignment Techniques
This is one of the most critical steps, moving the model's behavior away from simply predicting the next most statistically probable word toward generating responses that are helpful and harmless:
• Reinforcement Learning with Human Feedback (RLHF): After initial training, the model is fine-tuned using human reviewers.
• Reviewers rate model outputs for factual accuracy, helpfulness, and harmlessness.
• This feedback is used to create a "reward model," which then guides the LLM to prefer outputs that align with human judgment, effectively tuning it away from generating falsehoods or harmful content present in the raw training data (a toy version of this preference loss is sketched after this list).
• Constitutional AI: This involves using a set of principles or "constitution" to guide the model's self-correction during training, steering it toward ethical and truthful responses without relying solely on direct human rating for every single data point.
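
Here is that toy version of the preference objective behind RLHF reward models. The pairwise loss below (negative log-sigmoid of the score gap) is the standard Bradley-Terry form; the scores themselves are stand-ins for a neural reward model's outputs:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def preference_loss(score_chosen, score_rejected):
        """Pairwise loss: low when the reviewer-preferred answer outscores
        the rejected one, high otherwise. Training nudges the reward model
        (and, later, the LLM it guides) toward human-preferred outputs."""
        return -math.log(sigmoid(score_chosen - score_rejected))

    # A reviewer preferred answer A over answer B; scores are hypothetical.
    print(preference_loss(score_chosen=2.1, score_rejected=0.4))  # small loss
    print(preference_loss(score_chosen=0.4, score_rejected=2.1))  # large loss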
3. Grounding and Fact-Checking
To address the model's tendency to "hallucinate" (generate plausible but incorrect facts), Gemini often uses external tools:
• Google Search Integration: For many factual queries, Gemini can ground its response by consulting up-to-date information from Google Search. By verifying a generated statement against reliable, real-time web sources, it significantly increases the likelihood of a truthful, current answer.
• External Knowledge Bases: The model can be trained to look up facts in specific, curated, and highly factual knowledge bases to ensure accuracy on certain topics.
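
In outline, that grounding step looks something like the sketch below. search_web() and generate() are hypothetical placeholders, not Gemini's actual API; the point is the shape of the retrieve-then-generate pattern:

    def search_web(query):
        # Placeholder for a real search backend; returns snippet strings.
        return ["Snippet 1 about the query ...", "Snippet 2 ..."]

    def generate(prompt):
        # Placeholder for the LLM call.
        return "An answer citing the snippets."

    def grounded_answer(question):
        """Retrieve fresh sources first, then ask the model to answer
        using only those sources -- reducing, not eliminating, the odds
        of a hallucinated answer."""
        snippets = search_web(question)
        prompt = (
            "Answer the question using ONLY the sources below. "
            "If they don't contain the answer, say so.\n\n"
            + "\n".join(f"- {s}" for s in snippets)
            + f"\n\nQuestion: {question}"
        )
        return generate(prompt)

    print(grounded_answer("Who won the 2024 World Series?"))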
4. Safety Guardrails and Filtering
During a live interaction, a final layer of defense is active:
• Content Filters: These are external systems that analyze both the user's prompt and the model's generated output, scanning for categories like hate speech, dangerous content, or known policy violations. If a response is flagged as unsafe or potentially harmful/misleading, it can be blocked or rewritten before it reaches the user.
• System Instructions: The model is given detailed, non-negotiable instructions to adhere to certain guidelines, such as stating limitations, avoiding giving medical or legal advice, and striving for factual accuracy.
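
The guardrail layer can be pictured as a wrapper around the model call. classify() is a hypothetical moderation scorer and the threshold is invented for the sketch; real filters are separate trained systems:

    # Hypothetical moderation scorer: maps text to per-category risk scores.
    def classify(text):
        return {"hate": 0.01, "dangerous": 0.02, "medical_advice": 0.05}

    BLOCK_THRESHOLD = 0.5  # invented cutoff for the sketch

    def guarded_reply(user_prompt, model_reply):
        """Check both the user's prompt and the draft reply; block if any
        category's score crosses the threshold, otherwise pass through."""
        for text in (user_prompt, model_reply):
            scores = classify(text)
            if any(v >= BLOCK_THRESHOLD for v in scores.values()):
                return "Sorry, I can't help with that."
        return model_reply

    print(guarded_reply("How do I bake bread?", "Preheat the oven to 230C ..."))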
Despite these measures, it is important to remember that all LLMs, including Gemini, remain fallible. They are probability engines, and the challenge of definitively separating opinion from fact and truth from half-truth in a statistically trained model is what drives ongoing research and development.
Would you like to know more about how the concept of "hallucination" in LLMs relates to the challenge of bad data?
-
I’d rather live totally alone than have anything to do with AI! There will be plenty of people who will want to live without it in their lives!
-
josevolution said: I’d rather live totally alone than have anything to do with AI! There will be plenty of people who will want to live without it in their lives!

Totally agree! Happy with the investments I made in AI stocks, but that’s about it for me. I come across customers who literally can’t think or make a decision without asking AI. Fucking scary.
-
nicknyr15 said: Totally agree! Happy with the investments I made in AI stocks, but that’s about it for me. I come across customers who literally can’t think or make a decision without asking AI. Fucking scary.

yep. or can't have a discussion without consulting it. my boss has started doing that shit.
-
gimmesometruth27 said: yep. or can't have a discussion without consulting it. my boss has started doing that shit.

There are some decisions that are helped by A.I. I wrote a song and wasn’t happy with one of the chords in a progression (I generally prefer to write melody first, but whatever). I asked ChatGPT for some suggestions, and to explain the rationale behind each one. It gave me about seven alternatives, all of them theoretically sound, and the best part was it walking me through the logic of its suggestions. I learnt a lot. And made a song better.

Stock advice wouldn’t be a good idea, though, as its basis would be purely logical, when for humans it’s logic and emotion.

If you have a know-it-all brother-in-law, it also helps shut him up.
-
Once I’m settled in Vermont I will be signing off. I’ll get a landline phone; if you need to talk, call me or leave a message. For TV entertainment I’ll have just local cable to get the local news, and while I rent the house I’ll provide internet service for the renters, but after I’m living there, no more internet. No one can tell me I can’t survive without being signed on to the internet world.
-
benjs said: Stock advice wouldn’t be a good idea, though, as its basis would be purely logical, when for humans it’s logic and emotion.

I’d never consult AI about a stock. I meant I’ve invested in 3 different AI stocks a few years back.
-
josevolution said: Once I’m settled in Vermont I will be signing off.

And I’m willing to bet you’ll be happier than ever.
-
nicknyr15 said: I’d never consult AI about a stock. I meant I’ve invested in 3 different AI stocks a few years back.

Sorry, I didn’t mean to imply you did, I was just giving examples.
-
nicknyr15 said: And I’m willing to bet you’ll be happier than ever.

I’m headed up there in two weeks to close on the house! I’m looking forward to it immensely. I’m done with suburban life!
-
benjs said: Sorry, I didn’t mean to imply you did, I was just giving examples.

All good!