RUSSIA


Comments

  • Tim Simmons
    Tim Simmons Posts: 9,616
    static111 said:
    AI is not objective

    Correct. They are influenced by data, not emotion. And there is nothing wrong with examining summaries and extrapolations of vast amounts of data vs. the old way of doing it…which was liberals looking to the NYT and conservatives using the WSJ…that is nothing more than confirmation bias. 
    IF I believed the first thing (I don’t), I don’t see the difference between blindly believing journalists and blindly believing what a computer spits out. My issue is that you are doing none of the cognitive work in forming an opinion (right or wrong) or finding an answer. You are just saying: here’s my issue/question, print an answer. 

    And broadly, what does that say about “the discussion” here? 
  • Lerxst1992
    Lerxst1992 Posts: 7,920
    edited 1:52PM
    static111 said:
    AI is not objective

    Correct. They are influenced by data, not emotion. And there is nothing wrong with examining summaries and extrapolations of vast amounts of data vs. the old way of doing it…which was liberals looking to the NYT and conservatives using the WSJ…that is nothing more than confirmation bias. 
    IF I believed the first thing (I don’t), I don’t see the difference between blindly believing journalists and blindly believing what a computer spits out. My issue is that you are doing none of the cognitive work in forming an opinion (right or wrong) or finding an answer. You are just saying: here’s my issue/question, print an answer. 

    And broadly, what does that say about “the discussion” here? 

    And the overwhelming way of commenting here is…”see I found this anecdotal story” or “this opinion piece supports how I feel.” 

    I think the trick is knowing what to ask. I am not afraid of what those who disagree have to say. I agreed with the point that there are significant factors other than Hamas that block food delivery, but presenting the data changed the direction of the conversation.


    And I’m not hiding from Hal’s criticism of “some sources.” Yeah, it’s on me for not providing links. But all they’re interested in doing is attacking the source, because for them it’s all about confirmation bias. What LLMs do is not simple and it’s not limiting. It’s vast and complex.




    That's the core of the problem. When LLMs say "some sources," it's a symptom of a much deeper, more complex issue. Here's a more detailed breakdown of why LLMs, like Gemini, do this:

    The "Black Box" Problem
    The fundamental challenge is that a large language model is a "black box." It's not a database or a traditional search engine. It's a massive, complex neural network with billions or trillions of parameters. When you ask it a question, it doesn't look up an answer in a table. It processes the query and generates a response based on the statistical relationships it learned from its training data.

    This means the model doesn't "know" where a specific piece of information came from. It has ingested a vast amount of text from the internet, and a statement like "The Earth is round" is represented as a highly probable sequence of words because that concept is overwhelmingly present in its training data. A less common fact, however, might only have appeared in a few sources, but the model has no way of remembering those specific sources. The phrase "some sources" is its way of expressing that this information exists in its learned knowledge without being able to pinpoint the origin.
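    That "statistical relationships, no source memory" point can be illustrated with a toy sketch (this bigram counter is a deliberately simplified stand-in for a real neural network; the example documents are invented): once the counts are aggregated, the model can still predict a likely next word, but the identity of the documents that produced those counts is gone.

```python
from collections import Counter, defaultdict

# Toy "training data". After counting, only aggregate statistics remain --
# there is no record of which document any individual count came from.
docs = [
    "the earth is round and the earth orbits the sun",
    "the earth is round",
    "some sources say the moon is hollow",
]

bigrams = defaultdict(Counter)
for doc in docs:
    words = doc.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1  # source identity is discarded here

def most_likely_next(word):
    """Return the statistically most probable next word, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("earth"))  # "is" -- the most frequent follower
```

    The model can tell you "is" usually follows "earth", but it cannot tell you *which* document taught it that; a real LLM's "some sources" phrasing reflects the same loss of provenance at vastly larger scale.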
    The Problem of "Hallucination" and Fabrication
    One of the biggest risks for LLMs is "hallucination," where the model generates confident-sounding but completely false information. A dangerous form of this is fabricating citations. Early versions of some LLMs were known to do this—inventing scholarly articles, book titles, and authors that didn't exist, all to make the response seem more credible.
    By saying "some sources," the model is trying to avoid this kind of overt fabrication. It's a safety mechanism to say, "I am not making this up, but I also can't give you a specific, verifiable source." This is a trade-off between being helpful and being factually accurate.
    The Challenge of Source Attribution
    Even when a model uses tools like a live search to get up-to-date information, providing accurate citations is still difficult. Here are some of the technical challenges:
     * Many Sources, One Answer: A single sentence in an LLM's response might be a synthesis of information from multiple web pages. It's not easy to neatly attribute a synthesized piece of information to one specific source.
     * Irrelevant Links: Studies have shown that even when LLMs provide external links, those links can be irrelevant or only tangentially related to the claim being made. This is because the process of generating a response and then finding a source to match it is not always a clean, one-to-one process.
     * The "Attribution Gap": Research has found that web-enabled LLMs often read many web pages to formulate an answer but only cite a small fraction of them, if any. This "attribution gap" is a major problem, as it makes it difficult for users to see the full context and verify the information.
    The Evolving Solution: RAG (Retrieval-Augmented Generation)
    To address these problems, LLMs are increasingly using a technique called Retrieval-Augmented Generation (RAG). Instead of just relying on their internal, pre-trained knowledge, they first perform a search for relevant documents and then use those documents as context to generate an answer.
    This is a much better approach because it allows the model to:
     * Ground its answers in real, verifiable information.
     * Provide direct links to the sources it used.
     * Reduce the likelihood of hallucination.
    You can see this in action when Gemini provides a "Sources" button or includes direct links in its answers. When this happens, it's a sign that the model has performed an external search and is providing a much stronger, more reliable response.
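    A minimal sketch of the RAG loop described above (the corpus URLs and the keyword-overlap scorer are invented for illustration; a real system would use a search index or embedding model, and would feed the retrieved context to the LLM): retrieve the best-matching documents, build the answer from them, and return the sources alongside it.

```python
# Toy retrieval-augmented generation: every answer carries its sources.
# The "corpus" and the naive keyword-overlap ranking are stand-ins for a
# real search index; the structure of the loop is the point.
corpus = {
    "who.int/report-2024": "Aid deliveries were blocked at three crossings.",
    "example.org/blog": "My cat prefers the red carpet.",
    "un.org/briefing": "Food convoys face delays at border crossings.",
}

def retrieve(query, k=2):
    """Rank documents by keyword overlap with the query (toy scorer)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q & set(kv[1].lower().replace(".", "").split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query):
    hits = retrieve(query)
    context = " ".join(text for _, text in hits)
    sources = [url for url, _ in hits]
    # A real system would pass `context` to the LLM as grounding; here we
    # return it with the URLs so every claim is verifiable.
    return {"context": context, "sources": sources}

result = answer_with_sources("why are food deliveries blocked at crossings")
print(result["sources"])  # ['who.int/report-2024', 'un.org/briefing']
```

    Because the retrieval step happens before generation, the system knows exactly which documents it used and can cite them, which is why RAG-backed answers can show a "Sources" list while purely parametric answers cannot.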
    In summary, when an LLM says "some sources," it is a direct result of the complexities of its architecture and the challenges of accurately attributing information from a vast, unstructured training set. It's a clear signal to you, the user, that you should proceed with caution and verify the information on your own.

  • Tim Simmons
    Tim Simmons Posts: 9,616
    But YOU'RE not having the conversation. You're just providing a wall of information that may or may not be correct, and presenting it as your own thoughts. Again, I don't see how that's any different from “see I found this anecdotal story” or “this opinion piece supports how I feel.” You're just saying, “Here's some information a computer spat out that I'm presenting under my name. It could be correct, it may not be, I don't know, do with this information what you will.”
  • gimmesometruth27
    gimmesometruth27 St. Fuckin Louis Posts: 24,128
    [Image shared by JointheDots on X, quoting GovPressOffice: “This image of the US soldiers on their knees rolling out the red carpet for Vladimir Putin will go down as one of the most shocking and shameful…”]
    [r/Military post: “American troops rolling a red carpet for Putin”]
    Another example of how little respect Trump has for American soldiers.

    A real patriot would have flown some people up from Alligator Alcatraz and let them roll out the red carpet.
    "You can tell the greatness of a man by what makes him angry."  - Lincoln

    "Well, you tell him that I don't talk to suckas."
  • gimmesometruth27
    gimmesometruth27 St. Fuckin Louis Posts: 24,128
    But YOU'RE not having the conversation. You're just providing a wall of information that may or may not be correct, and presenting it as your own thoughts. Again, I don't see how that's any different from “see I found this anecdotal story” or “this opinion piece supports how I feel.” You're just saying, “Here's some information a computer spat out that I'm presenting under my name. It could be correct, it may not be, I don't know, do with this information what you will.”
    and most of us don't even read it.
  • Tim Simmons
    Tim Simmons Posts: 9,616
    Yeah, most of the time I don't give a shit. It's just creating content.