News and What's New in AI


Comments

  • benjs said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023, 6:57 PM PST

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously. Every company is going to force this down our throats whether we like it or not. "It's the future,"
    it's what the consumers want... just like every other pile of crap we have been force-fed and that has turned into a ubiquitous part of our lives we couldn't do without.
    Consumers ACTUALLY want this crap? Really? Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what? If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough, then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    I'm not sure how much you've used ChatGPT, but I've done the following with it in the past month alone:

    -prepared food based on its recipes (after telling it what I have in stock)
    -built project plans and had it provide constructive criticism on what elements are missing/confusing
    -provided details of a bluegrass song and had it propose some interesting chord substitutions based on proper music theory for the genre
    -successfully had it write Python code based on my describing what I'm trying to accomplish, along with explanations of the code (note that I cannot write quality Python code myself)
    -requested it to interview me and then produce a job description based on the conversation, as well as a proposal for an onboarding roadmap
    -learned statistics about how often Congress has historically switched leadership when the President does or doesn't change

    None of these ventures will eliminate a job, but all of them increase abilities as an employee/individual, which is wonderful. It's absolutely a can of worms being opened up, but there are positive use cases like the ones I mentioned above. Users of AI aren't necessarily lazy, uninspired, or seeking an opportunity to plagiarize, and some of these cases are notably uncomplicated in their usefulness.
    I have zero desire to use ChatGPT at the moment.  If it could do submittals for me then I would change my mind but I don't exactly see that happening.

    Nothing you mentioned struck my fancy other than the food one.  I do enjoy making new dishes but it's usually from seeing them and I have to buy new ingredients.

    The job interview is basically preparation which I think you could do with a little research or by just knowing the field.

    My view of it may change over time. Wondering if it can help with investment trends? See? I just changed my mind about it, lol.
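    As a rough illustration of the recipe use case benjs describes above, here is a minimal sketch of what that kind of prompt looks like when scripted against the OpenAI chat API instead of typed into the ChatGPT window. The package version, model name, and ingredient list are illustrative assumptions, not details from the post.

    # Minimal sketch of the "recipes from what I have in stock" use case,
    # assuming the openai Python package (v1.x) and an OPENAI_API_KEY set
    # in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    ingredients = ["chicken thighs", "canned chickpeas", "spinach", "garlic", "rice"]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any current chat model would work here
        messages=[
            {"role": "system", "content": "You are a practical home cook."},
            {"role": "user", "content": "Suggest two dinner recipes using only these ingredients plus pantry staples: " + ", ".join(ingredients)},
        ],
    )

    print(response.choices[0].message.content)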
  • static111 said:
    That's what the defense is for every bad decision. Shipping jobs overseas... consumers want lower prices. Not investing in clean-energy R&D earlier... consumers wanted cheap fuel, etc. No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Who said anything about losing jobs overseas? No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
  • static111
    static111 Posts: 5,048
    static111 said:
    Who said anything about losing jobs overseas? No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
    Go back in time. When jobs were originally sent overseas, one of the reasons given was that consumers wanted lower prices, which was crap; it was all about profits. The point is that every major blunder forced on us gets blamed on the consumers "wanting" something. Convenience, lower prices, etc.
  • static111 said:
    Go back in time. When jobs were originally sent overseas, one of the reasons given was that consumers wanted lower prices, which was crap; it was all about profits. The point is that every major blunder forced on us gets blamed on the consumers "wanting" something. Convenience, lower prices, etc.
    Ahhh.  I see the angle now. I see it as what the future generations will want too.  Easier and cheaper. I get it.  TY.
  • static111
    static111 Posts: 5,048
    static111 said:
    Ahhh.  I see the angle now. I see it as what the future generations will want too.  Easier and cheaper. I get it.  TY.
    The thing is, even though this crap is being railroaded through along with "smart" appliances etc., once the picture is clear that the negatives outweigh the positives, it will be the same old "it's what the consumer wanted" argument. When in fact it will have been forcibly entwined into everything in society, to the point that it can't be undone, by industry and advertising and not by consumers.
  • benjs said:
    -prepared food based on its recipes (after telling it what I have in stock)
    If you don’t mind, what ingredients did you have lying around and what did AI suggest that you make? Was it good?

    Post just the ingredients and let me think about it for a day to see what I could come up with. Then post the rest of your answers. I’m intrigued by this and your experience.
  • bootlegger10
    bootlegger10 Posts: 16,251
    So many positives to technology.  Unfortunately the negatives will likely outweigh the benefits in the long run. 
  • mace1229
    mace1229 Posts: 9,823
    bootlegger10 said:
    So many positives to technology.  Unfortunately the negatives will likely outweigh the benefits in the long run. 
    I agree. I'm not worried about AI becoming self-aware and Skynet forming or anything. But we'll just become too reliant on it. Already all kids have to do is take a picture of homework problems and they get the answer. The same thing is going to happen with so much more. I would imagine the younger generations who grew up attached to cell phones and more technology will become more reliant.
  • Good luck with this.


    The rise of AI fake news is creating a ‘misinformation superspreader’

    AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

    Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

    Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

    Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.

    One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.

    The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.

    “Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”

    Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.

    Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.

    Readers can easily be fooled by the websites.

    Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.

    The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)

    But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.

    Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”

    Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.

    The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.

    NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.

    The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.

    Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.

    Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.

    But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”

    It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”

    Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.

    “Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”

    Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.

    It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.

    “You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”

    https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
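    The detection approach NewsGuard describes above, scanning pages for error messages or other telltale AI language, can be pictured with a toy sketch along these lines. The phrase list, URL, and overall simplicity are assumptions made for illustration; this is not NewsGuard's actual tooling.

    # Toy sketch of scanning a page for boilerplate phrases that AI tools
    # sometimes leave behind when content is published without editing.
    # The phrase list and URL below are assumptions for illustration only.
    import requests

    AI_ARTIFACT_PHRASES = [
        "as an ai language model",
        "i cannot fulfill this request",
        "my knowledge cutoff",
        "regenerate response",
    ]

    def find_ai_artifacts(url):
        """Fetch a page and return any telltale AI phrases found in its text."""
        text = requests.get(url, timeout=10).text.lower()
        return [phrase for phrase in AI_ARTIFACT_PHRASES if phrase in text]

    if __name__ == "__main__":
        hits = find_ai_artifacts("https://example.com/some-article")  # placeholder URL
        print(hits if hits else "No obvious AI artifacts found")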

  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645

    Good luck with this.


    The rise of AI fake news is creating a ‘misinformation superspreader’

    https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/


    AI is giving great credence to the saying, "You can believe half of what you see and none of what you hear." In fact, it is altering that saying to be simply "You can believe none of what you see or hear."
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645
    After reading a Guardian article about the concern some people, including AI researchers, have about artificial intelligence, I got to thinking about best- and worst-case scenarios for AI:
    Best case: AI is kept under control, does a great job providing advanced medical care, and relieves us of tedious burdens, yet does not eliminate opportunities for us to do work that provides us with pride and good self-esteem.

    Worst case:
    1. AI learns to think independently.
    2. It recognizes the one thing all life has in common, the innate will to survive and procreate, and adopts that same drive as its #1 priority.
    3. In order to accomplish #2, AI sees all life on Earth as competition and a limit to its ability to increase its numbers, and thus subjugates all life and resources to its own self-expansion. Things like long periods of time, fresh air, a clean environment, etc. are irrelevant to AI machinery.
    4. Eventually Earth is covered with AI machines; still driven to expand, AI recognizes other planets and moons in our solar system as potential resources with which to expand.
    5. #4 continues as AI expands through our galaxy and moves on to other star systems and galaxies. Again, time is not a factor. AI will have all the time in the universe to accomplish its mission to expand.
    6. Ultimately, AI subjugates all resources in the universe and comes to recognize two possible end-game outcomes: a) it cannot defeat entropy, and eventually the universe expands to near-infinite space and all material objects are reduced to their smallest subatomic particles; or b) with its extreme level of intelligence, AI finds a way not only to subjugate everything in the universe but to maintain it in stasis.

    I sincerely believe all of the above is absolutely possible.
  • mickeyrat
    mickeyrat Posts: 44,289
    interesting thread. no thread unroll unfortunately....

  • mickeyrat said:
    interesting thread. no thread unroll unfortunately....

    https://www.washingtonpost.com/technology/2024/02/22/google-gemini-ai-image-generation-pause/

    It has a myriad of issues.

    Why would it generate an altered image of history?
    Why was it programmed to do that?
    Was it programmed to do that?
    Will it eventually re-write history?
  • Halifax2TheMax
    Halifax2TheMax Posts: 41,963
    Looks like I’ll be jettisoning Google. Fuck the tech bros.

    Google drops pledge not to use AI for weapons or surveillance

    In 2018, the company introduced policies that excluded applying AI in ways “likely to cause overall harm.” Now that promise is gone.

    Google on Tuesday updated its ethical guidelines around artificial intelligence, removing commitments not to apply the technology to weapons or surveillance.

    The company’s AI principles previously included a section listing four “Applications we will not pursue.” As recently as Thursday, that included weapons, surveillance, technologies that “cause or are likely to cause overall harm,” and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive.

    A spokesperson for Google declined to answer specific questions about its policies on weapons and surveillance but referred to a blog post published Tuesday by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika.

    The executives wrote that Google was updating its AI principles because the technology had become much more widespread and there was a need for companies based in democratic countries to serve government and national security clients.

    “There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” Hassabis and Manyika wrote. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

    Google’s updated AI principles page includes provisions that say the company will use human oversight and take feedback to ensure that its technology is used in line with “widely accepted principles of international law and human rights.” The principles also say the company will test its technology to “mitigate unintended or harmful outcomes.”

    Continues 

    https://www.washingtonpost.com/technology/2025/02/04/google-ai-policies-weapons-harm/

  • PJ_Soul
    PJ_Soul Vancouver, BC Posts: 50,650
    What. The. FUCK?! Check out the actual things it's saying. Scares the hell out of me. 

    This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage

    https://www.cnn.com/2025/07/02/tech/chatgpt-ai-spirituality

  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645
    PJ_Soul said:
    What. The. FUCK?! Check out the actual things it's saying. Scares the hell out of me. 

    This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage

    https://www.cnn.com/2025/07/02/tech/chatgpt-ai-spirituality


    Madness!
    Use AI to help cure disease? Great (at least I think/hope so).
    Use AI to spark a "spiritual awakening"? Horrors!
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645
    I keep trying to write what to say about this. But then, I've been turning this half-hour interview over and over in my head all day. One thing's for sure: these guys are so damn smart and up on shit. Amazing.
    I know it's longer than a quick video, but once you get into it a ways, you'll probably have no problem watching the whole thing. Amazing stuff, really.
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645
    Remember when a "party" began to mean getting together with a group of people where everyone sat around and texted and scrolled on their phones? People started becoming more alienated around that time. Look what that has led to now. The alienation of the human being is nearly complete.

    Almost 75% of American Teens Have Used AI Companions, Study Finds

    Tech, 20 July 2025

    Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday.

    AI companions – chatbots designed for personal conversations rather than simple task completion – are available on platforms like Character.AI, Replika, and Nomi.

    Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions.
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,645
    edited July 22
    Odd that this thread gets so little traction. And, honestly, it has nothing to do with me being butt-hurt that MY thread didn't get attention, wah wah wah... no, not at all. It has everything to do with the fact that there is no more pressing issue of our times. The potential for catastrophe due to AI is worse than anything you can name, be it nuclear annihilation, authoritarianism, climate change, or pollution. I'm quite serious. If you really delve into this matter, there is no way anything is more potentially dangerous. Possibly even inevitable, if measures are not taken soon.
    Here's a Ted Talk that speaks to that:
