RUSSIA
Comments
Lerxst1992 said:
static111 said: AI is not objective
Correct. They are influenced by data, not emotion. And there is nothing wrong with examining summaries and extrapolations of vast amounts of data vs. the old way of doing it…which was liberals looking to the NYT and conservatives using the WSJ…that is nothing more than confirmation bias.
And broadly, what does that say about "the discussion" here?
Tim Simmons said:
Lerxst1992 said:
static111 said: AI is not objective
Correct. They are influenced by data, not emotion. And there is nothing wrong with examining summaries and extrapolations of vast amounts of data vs. the old way of doing it…which was liberals looking to the NYT and conservatives using the WSJ…that is nothing more than confirmation bias.
And broadly, what does that say about "the discussion" here?
And the overwhelming way of commenting here is…"see I found this anecdotal story" or "this opinion piece supports how I feel."

I think the trick is knowing what to ask. I am not afraid of what those who disagree have to say. I agreed with the point that there are significant factors other than Hamas that block food delivery, but presenting the data changed the direction of the conversation, and I'm not hiding from Hal's criticism of "some sources." Yeah, it's on me for not providing links. But all they're interested in doing is attacking the source, because for them it's all about confirmation bias. What LLMs do is not simple and it's not limiting. It's vast and complex.
…That's the core of the problem. When LLMs say "some sources," it's a symptom of a much deeper, more complex issue. Here's a more detailed breakdown of why LLMs, like Gemini, do this:

The "Black Box" Problem
The fundamental challenge is that a large language model is a "black box." It's not a database or a traditional search engine. It's a massive, complex neural network with billions or trillions of parameters. When you ask it a question, it doesn't look up an answer in a table. It processes the query and generates a response based on the statistical relationships it learned from its training data.
This means the model doesn't "know" where a specific piece of information came from. It has ingested a vast amount of text from the internet, and a statement like "The Earth is round" is represented as a highly probable sequence of words because that concept is overwhelmingly present in its training data. A less common fact, however, might only have appeared in a few sources, and the model has no way of remembering those specific sources. The phrase "some sources" is its way of expressing that this information exists in its learned knowledge without being able to pinpoint the origin.

The Problem of "Hallucination" and Fabrication
One of the biggest risks for LLMs is "hallucination," where the model generates confident-sounding but completely false information. A dangerous form of this is fabricating citations. Early versions of some LLMs were known to do this, inventing scholarly articles, book titles, and authors that didn't exist, all to make the response seem more credible.
By saying "some sources," the model is trying to avoid this kind of overt fabrication. It's a safety mechanism to say, "I am not making this up, but I also can't give you a specific, verifiable source." This is a trade-off between being helpful and being factually accurate.

The Challenge of Source Attribution
Even when a model uses tools like a live search to get up-to-date information, providing accurate citations is still difficult. Here are some of the technical challenges:
* Many Sources, One Answer: A single sentence in an LLM's response might be a synthesis of information from multiple web pages. It's not easy to neatly attribute a synthesized piece of information to one specific source.
* Irrelevant Links: Studies have shown that even when LLMs provide external links, those links can be irrelevant or only tangentially related to the claim being made. This is because the process of generating a response and then finding a source to match it is not always a clean, one-to-one process.
* The "Attribution Gap": Research has found that web-enabled LLMs often read many web pages to formulate an answer but cite only a small fraction of them, if any. This "attribution gap" is a major problem, as it makes it difficult for users to see the full context and verify the information.

The Evolving Solution: RAG (Retrieval-Augmented Generation)
To address these problems, LLMs are increasingly using a technique called Retrieval-Augmented Generation (RAG). Instead of relying only on their internal, pre-trained knowledge, they first perform a search for relevant documents and then use those documents as context to generate an answer.
This is a much better approach because it allows the model to:
* Ground its answers in real, verifiable information.
* Provide direct links to the sources it used.
* Reduce the likelihood of hallucination.
You can see this in action when Gemini provides a "Sources" button or includes direct links in its answers. When this happens, it's a sign that the model has performed an external search and is providing a much stronger, more reliable response.

In summary, when an LLM says "some sources," it is a direct result of the complexities of its architecture and the challenges of accurately attributing information from a vast, unstructured training set. It's a clear signal to you, the user, that you should proceed with caution and verify the information on your own.
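For what it's worth, the RAG workflow that post describes can be sketched in miniature. This is purely an illustrative toy: the three-document corpus, the word-overlap "retriever," and the prompt format are all made-up stand-ins (a real system would use embedding search and an actual LLM call, and this is not how Gemini specifically is implemented). The point is only to show the pattern: retrieve documents first, then build a prompt that lets the answer cite concrete sources instead of "some sources."

```python
# Toy sketch of Retrieval-Augmented Generation (RAG):
# 1) retrieve relevant documents for the query,
# 2) assemble them into the prompt as citable context.
# Corpus, scoring, and prompt wording are illustrative assumptions.

CORPUS = {
    "doc1": "Large language models generate text from statistical patterns in training data.",
    "doc2": "Retrieval-augmented generation grounds answers in retrieved documents.",
    "doc3": "Hallucination is when a model produces confident but false statements.",
}

def retrieve(query, k=2):
    """Rank documents by naive word overlap with the query
    (a stand-in for real embedding or web search)."""
    q = set(query.lower().split())
    scores = {doc_id: len(q & set(text.lower().split()))
              for doc_id, text in CORPUS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query):
    """Put the retrieved passages, each tagged with its source id,
    ahead of the question so the answer can cite them."""
    context = "\n".join(f"[{d}] {CORPUS[d]}" for d in retrieve(query))
    return (f"Answer using only these sources, and cite them:\n"
            f"{context}\n\nQuestion: {query}")

print(build_prompt("why do models hallucinate false statements"))
```

Because the sources travel with the prompt, the generation step has concrete ids to cite, which is exactly what the "Sources" button surfaces and what a purely pre-trained model cannot do.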
But YOU'RE not having the conversation. You're just providing a wall of information that may or may not be correct, and presenting it as your own thoughts. Again, I don't see how that's any different than "see I found this anecdotal story" or "this opinion piece supports how I feel." You're just saying, "Here's some information a computer spat out that I'm presenting under my name. It could be correct, it may not be, I don't know, do with this information what you will."
Merkin Baller said:
A real patriot would have flown some people up from Alligator Alcatraz and let them roll out the red carpet.
"You can tell the greatness of a man by what makes him angry." - Lincoln
"Well, you tell him that I don't talk to suckas."
Tim Simmons said:
But YOU'RE not having the conversation. You're just providing a wall of information that may or may not be correct, and presenting it as your own thoughts. Again, I don't see how that's any different than "see I found this anecdotal story" or "this opinion piece supports how I feel." You're just saying, "Here's some information a computer spat out that I'm presenting under my name. It could be correct, it may not be, I don't know, do with this information what you will."
Yeah, most of the time I don't give a shit. It's just creating content.
He is so stupid...he just said LIVE that "millions of people were killed last week"
Remember the Thomas Nine !! (10/02/2018)
The Golden Age is 2 months away. And guess what….. you’re gonna love it! (teskeinc 11.19.24)
1998: Noblesville; 2003: Noblesville; 2009: EV Nashville, Chicago, Chicago
2010: St Louis, Columbus, Noblesville; 2011: EV Chicago, East Troy, East Troy
2013: London ON, Wrigley; 2014: Cincy, St Louis, Moline (NO CODE)
2016: Lexington, Wrigley #1; 2018: Wrigley, Wrigley, Boston, Boston
2020: Oakland, Oakland; 2021: EV Ohana, Ohana, Ohana, Ohana
2022: Oakland, Oakland, Nashville, Louisville; 2023: Chicago, Chicago, Noblesville
2024: Noblesville, Wrigley, Wrigley, Ohana, Ohana; 2025: Pitt1, Pitt2
Gern Blansten said:
He is so stupid...he just said LIVE that "millions of people were killed last week"
09/15/1998 & 09/16/1998, Mansfield, MA; 08/29/00 08/30/00, Mansfield, MA; 07/02/03, 07/03/03, Mansfield, MA; 09/28/04, 09/29/04, Boston, MA; 09/22/05, Halifax, NS; 05/24/06, 05/25/06, Boston, MA; 07/22/06, 07/23/06, Gorge, WA; 06/27/2008, Hartford; 06/28/08, 06/30/08, Mansfield; 08/18/2009, O2, London, UK; 10/30/09, 10/31/09, Philadelphia, PA; 05/15/10, Hartford, CT; 05/17/10, Boston, MA; 05/20/10, 05/21/10, NY, NY; 06/22/10, Dublin, IRE; 06/23/10, Northern Ireland; 09/03/11, 09/04/11, Alpine Valley, WI; 09/11/11, 09/12/11, Toronto, Ont; 09/14/11, Ottawa, Ont; 09/15/11, Hamilton, Ont; 07/02/2012, Prague, Czech Republic; 07/04/2012 & 07/05/2012, Berlin, Germany; 07/07/2012, Stockholm, Sweden; 09/30/2012, Missoula, MT; 07/16/2013, London, Ont; 07/19/2013, Chicago, IL; 10/15/2013 & 10/16/2013, Worcester, MA; 10/21/2013 & 10/22/2013, Philadelphia, PA; 10/25/2013, Hartford, CT; 11/29/2013, Portland, OR; 11/30/2013, Spokane, WA; 12/04/2013, Vancouver, BC; 12/06/2013, Seattle, WA; 10/03/2014, St. Louis. MO; 10/22/2014, Denver, CO; 10/26/2015, New York, NY; 04/23/2016, New Orleans, LA; 04/28/2016 & 04/29/2016, Philadelphia, PA; 05/01/2016 & 05/02/2016, New York, NY; 05/08/2016, Ottawa, Ont.; 05/10/2016 & 05/12/2016, Toronto, Ont.; 08/05/2016 & 08/07/2016, Boston, MA; 08/20/2016 & 08/22/2016, Chicago, IL; 07/01/2018, Prague, Czech Republic; 07/03/2018, Krakow, Poland; 07/05/2018, Berlin, Germany; 09/02/2018 & 09/04/2018, Boston, MA; 09/08/2022, Toronto, Ont; 09/11/2022, New York, NY; 09/14/2022, Camden, NJ; 09/02/2023, St. Paul, MN; 05/04/2024 & 05/06/2024, Vancouver, BC; 05/10/2024, Portland, OR;
Libtardaplorable©. And proud of it.
Brilliantati©
1993: 11/22 Little Rock
1996; 9/28 New York
1997: 11/14 Oakland, 11/15 Oakland
1998: 7/5 Dallas, 7/7 Albuquerque, 7/8 Phoenix, 7/10 San Diego, 7/11 Las Vegas
2000: 10/17 Dallas
2003: 4/3 OKC
2012: 11/17 Tulsa(EV), 11/18 Tulsa(EV)
2013: 11/16 OKC
2014: 10/8 Tulsa
2022: 9/20 OKC
2023: 9/13 Ft Worth, 9/15 Ft Worth
Halifax2TheMax said:
Gern Blansten said:
He is so stupid...he just said LIVE that "millions of people were killed last week"
Now I'm listening to fuckface say how mail-in ballots are corrupt.