Artificial Intelligence: What could go wrong?

brianlux Moving through All Kinds of Terrain. Posts: 42,283
We tinker with our fate. 

A.I., what could go wrong?  Plenty.


MIT Scientists Unveil First Psychopath AI, 'Norman'

Scientists at the Massachusetts Institute of Technology unveiled the first artificial intelligence algorithm trained to be a psychopath. The AI was fittingly dubbed "Norman" after Norman Bates, the notorious killer in Alfred Hitchcock's Psycho.

We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough but also a potential threat to our survival as a species.

MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan trained Norman to perform image captioning, "a deep learning method" that allows AI to generate text descriptions for images. However, the team exclusively exposed Norman to violent and disturbing images posted on a subreddit dedicated to death.

They then gave Norman a Rorschach inkblot test and the AI responded with chilling interpretations such as, "a man is electrocuted and catches to death," "pregnant woman falls at construction" and "man is shot dead in front of his screaming wife." Meanwhile, a standard AI responded to the same inkblots with, "a close up of a vase with flowers," "a couple of people standing next to each other" and "a person is holding an umbrella in the air."

While Norman may conjure dystopian images of killer robots, the MIT team said the purpose of the experiment was to prove that AI algorithms aren't inherently biased, but that data input methods – and the people inputting that data – can significantly alter an AI's behavior. As Newsweek pointed out, there have been several notable cases where racism and bias have crept into machine learning, like the Google Photos image recognition algorithm that was classifying black people as "gorillas."
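
For anyone curious what the "image captioning" method described above actually looks like in code, here is a rough sketch using an off-the-shelf pretrained model from the Hugging Face hub. The checkpoint name is just a public example, not the model the MIT team used; Norman's whole point is that the same kind of code produces very different captions depending on the data it was trained on.

```python
# Rough sketch of image captioning with a pretrained model.
# Assumptions: the transformers and Pillow packages are installed, and the
# checkpoint below is a public example, not the model used for Norman.
from transformers import pipeline

captioner = pipeline(
    "image-to-text",
    model="nlpconnect/vit-gpt2-image-captioning",
)

# Accepts a local file path or an image URL.
result = captioner("inkblot.png")
print(result[0]["generated_text"])
```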




"Pretty cookies, heart squares all around, yeah!"
-Eddie Vedder, "Smile"

"Try to not spook the horse."
-Neil Young














Comments

  • Kat Posts: 4,878
    It's like they never heard of the Terminator. :/


    Falling down,...not staying down
  • PJinIL satan's bed Posts: 433
    This is such a cool thing, but scary at the same time. I have no concerns about a robot takeover of the world or anything, but these things are only as reliable as the people programming them, as the article suggests. The scary part to me is how it could make society (in the US at least) even more lazy and reliant on machines. But, we'll have more scapegoats!
    It's amazing what you hear when you take time to listen.
  • bbiggs Posts: 6,952
    ^ Americans lazier than we are today? Impossible. ;) 
  • brianlux Moving through All Kinds of Terrain. Posts: 42,283
    Kat said:
    It's like they never heard of the Terminator. :/


    LOL

    Or this guy:


    "Pretty cookies, heart squares all around, yeah!"
    -Eddie Vedder, "Smile"

    "Try to not spook the horse."
    -Neil Young













  • josevolution Posts: 29,901
    bbiggs said:
    ^ Americans lazier than we are today? Impossible. ;) 
    lol 
    jesus greets me looks just like me ....
  • PJPOWER Posts: 6,499
    Cannot think of any way this could go wrong... Westworld, anyone?
    https://motherboard.vice.com/amp/en_us/article/xwm5mk/mit-psychotic-ai-rehabilitation

  • HughFreakingDillon Winnipeg Posts: 37,335
    https://us.cnn.com/videos/tech/2023/11/04/smr-ai-nudes-of-hs-students.cnn

    Disgusting. I hate the fact that there are parents out there who excuse this behaviour away as "youthful transgressions". Sounds like a Brett Kavanaugh type thing. Charge him/them.
    "Oh Canada...you're beautiful when you're drunk"
    -EV  8/14/93




  • brianlux Moving through All Kinds of Terrain. Posts: 42,283
    https://us.cnn.com/videos/tech/2023/11/04/smr-ai-nudes-of-hs-students.cnn

    Disgusting. I hate the fact that there are parents out there who excuse this behaviour away as "youthful transgressions". Sounds like a Brett Kavanaugh type thing. Charge him/them.

    Any parent that is OK with their kid doing that is just as guilty and should be charged as such.
    "Pretty cookies, heart squares all around, yeah!"
    -Eddie Vedder, "Smile"

    "Try to not spook the horse."
    -Neil Young













  • https://us.cnn.com/videos/tech/2023/11/04/smr-ai-nudes-of-hs-students.cnn

    Disgusting. I hate the fact that there are parents out there who excuse this behaviour away as "youthful transgressions". Sounds like a Brett Kavanaugh type thing. Charge him/them.
    They've been doing Photoshop nudes for years.  This is child pornography, though.

    I tell you, kids these days have a lot to worry about.
  • mickeyrat Posts: 39,229
    _____________________________________SIGNATURE________________________________________________

    Not today Sir, Probably not tomorrow.............................................. bayfront arena st. pete '94
    you're finally here and I'm a mess................................................... nationwide arena columbus '10
    memories like fingerprints are slowly raising.................................... first niagara center buffalo '13
    another man ..... moved by sleight of hand...................................... joe louis arena detroit '14
  • justam Posts: 21,412
    Artificial intelligence helped re-create the last Beatles song, "Now and Then."  I heard it for the first time tonight and I have to admit that I liked it. (!) It wasn't a great song, but it was still Lennon singing, and it reminded me why the Fab Four were so good.

    The idea of artificial intelligence in general seems frightening, though, because it seems like letting a huge cat out of a bag: the animal's behavior can't truly be anticipated or stopped later.  Once it is out, it is out!

    So many sci-fi novels and movies are based on A.I. running amok.

    (Or, that sad movie about the artificial child who was abandoned but lived on searching for the person who originally owned him.  That story made me cry...  It was a story about humans abusing robots with artificial intelligence who were made to feel attached to their human purchasers. Another example of humans using other creatures as disposable pets...)

    But, we have another Beatles song. Maybe I'm just being a worried older person?
    &&&&&&&&&&&&&&
  • static111 Posts: 4,889
    justam said:
    Artificial intelligence helped re-create the last Beatles song, "Now and Then."  I heard it for the first time tonight and I have to admit that I liked it. (!) It wasn't a great song, but it was still Lennon singing, and it reminded me why the Fab Four were so good.

    The idea of artificial intelligence in general seems frightening, though, because it seems like letting a huge cat out of a bag: the animal's behavior can't truly be anticipated or stopped later.  Once it is out, it is out!

    So many sci-fi novels and movies are based on A.I. running amok.

    (Or, that sad movie about the artificial child who was abandoned but lived on searching for the person who originally owned him.  That story made me cry...  It was a story about humans abusing robots with artificial intelligence who were made to feel attached to their human purchasers. Another example of humans using other creatures as disposable pets...)

    But, we have another Beatles song. Maybe I'm just being a worried older person?
    That isn't a Beatles song.  It's a John Lennon song that somehow got trotted out of the crypt and played on by other musicians who were once in the Beatles, and somehow had the other dead former Beatle's guitar part added to it.  That is like a feather on the scale of good when it comes to AI.  In 1000 years people will maybe know what "Hey Jude" is, but no one will ever look to the greatness of whatever commercially viable technological trick "Now and Then" is.
    Scio me nihil scire

    There are no kings inside the gates of eden
  • mace1229 Posts: 9,481
    My dad's name is Norman. We tease him all the time; any character named "Norman" in a movie is always a psycho.
    Now it's even in AI.
  • bootlegger10 Posts: 16,025
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
  • brianlux Moving through All Kinds of Terrain. Posts: 42,283
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    "Pretty cookies, heart squares all around, yeah!"
    -Eddie Vedder, "Smile"

    "Try to not spook the horse."
    -Neil Young













  • Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
  • static111 Posts: 4,889
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Scio me nihil scire

    There are no kings inside the gates of eden
  • static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
  • benjs Toronto, ON Posts: 9,169
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    I'm not sure how much you've used ChatGPT, but I've done the following with it in the past month alone:

    -prepared food based on its recipes (after telling it what I have in stock)
    -built project plans and had it provide constructive criticism on what elements are missing/confusing
    -provided details of a bluegrass song and had it propose some interesting chord substitutions based on proper music theory for the genre
    -successfully had it write Python programs based on my describing what I'm trying to accomplish, along with explanations of the code (note that I cannot write quality Python code myself; a rough sketch of that workflow is below)
    -asked it to interview me and then produce a job description based on the conversation, as well as a proposal for an onboarding roadmap
    -learned statistics about how often control of Congress has historically switched hands when the presidency does or doesn't change

    None of these ventures will eliminate a job, but all of them increase my abilities as an employee and individual, which is wonderful. It's absolutely a can of worms being opened up, but there are positive use cases like the ones I mentioned above. Users of AI aren't necessarily lazy, uninspired, or seeking an opportunity to plagiarize, and some of these cases are notably uncomplicated in their usefulness.
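
    For the curious, the code-writing piece doesn't have to go through the chat window. Here's a minimal sketch of scripting it against OpenAI's Python library, assuming the openai package (v1 or later) is installed and an OPENAI_API_KEY is set in the environment; the model name and the toy task are just examples, not my actual project:

    ```python
    # Minimal sketch of asking a chat model to write a small Python script.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    task = (
        "Write a small Python script that reads a CSV of expenses with columns "
        "date, category, amount and prints the total spent per category. "
        "Then explain how the code works."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "You are a careful Python programmer."},
            {"role": "user", "content": task},
        ],
    )

    # The reply contains the generated script plus its explanation.
    print(response.choices[0].message.content)
    ```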
    '05 - TO, '06 - TO 1, '08 - NYC 1 & 2, '09 - TO, Chi 1 & 2, '10 - Buffalo, NYC 1 & 2, '11 - TO 1 & 2, Hamilton, '13 - Buffalo, Brooklyn 1 & 2, '15 - Global Citizen, '16 - TO 1 & 2, Chi 2

    EV
    Toronto Film Festival 9/11/2007, '08 - Toronto 1 & 2, '09 - Albany 1, '11 - Chicago 1
  • static111 Posts: 4,889
    edited November 2023
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    That's the defense for every bad decision.  Shipping jobs overseas... consumers want lower prices.  Not investing in clean energy R&D earlier... consumers wanted cheap fuel, etc.  No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Post edited by static111 on
    Scio me nihil scire

    There are no kings inside the gates of eden
  • benjs said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    I'm not sure how much you've used ChatGPT, but I've done the following with it in the past month alone:

    -prepared food based on its recipes (after telling it what I have in stock)
    -built project plans and had it provide constructive criticism on what elements are missing/confusing
    -provided details of a bluegrass song and had it propose some interesting chord substitutions based on proper music theory for the genre
    -successfully had it write Python programming based on providing it details of what I'm trying to accomplish, as well as code explanations (note that I can not write quality Python code myself) 
    -requested it to interview me and then produce a job description based on the conversation, as well as a proposal for an onboarding roadmap
    -learned statistics about the proportionality of Congress switching leadership as the President does/doesn't change historically

    None of these ventures will eliminate a job, but all of them increase abilities as an employee/individual, which is wonderful. It's absolutely a can of worms being opened up, but there are positive use cases like the ones I mentioned above. Users of AI aren't necessarily lazy, uninspired, or seeking an opportunity to plagiarize, and some of these cases are notably uncomplicated in their usefulness.
    I have zero desire to use ChatGPT at the moment.  If it could do submittals for me, then I would change my mind, but I don't exactly see that happening.

    Nothing you mentioned struck my fancy other than the food one.  I do enjoy making new dishes, but it's usually from seeing them, and I have to buy new ingredients.

    The job interview one is basically preparation, which I think you could do with a little research or by just knowing the field.

    My view of it may change over time.  Wondering if it can help with investment trends?  See?  I just changed my mind about it, lol.
  • static111 said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    That's the defense for every bad decision.  Shipping jobs overseas... consumers want lower prices.  Not investing in clean energy R&D earlier... consumers wanted cheap fuel, etc.  No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Who said anything about losing jobs overseas?  No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
  • static111 Posts: 4,889
    static111 said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    That's the defense for every bad decision.  Shipping jobs overseas... consumers want lower prices.  Not investing in clean energy R&D earlier... consumers wanted cheap fuel, etc.  No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Who said anything about losing jobs overseas?  No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
    Go back in time.  When jobs were originally sent overseas, one of the reasons given was that consumers wanted lower prices, which was crap; it was all about profits.  The point is, every major blunder forced on us gets blamed on the consumers "wanting" something.  Convenience, lower prices, etc.
    Scio me nihil scire

    There are no kings inside the gates of eden
  • static111 said:
    static111 said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    That's the defense for every bad decision.  Shipping jobs overseas... consumers want lower prices.  Not investing in clean energy R&D earlier... consumers wanted cheap fuel, etc.  No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Who said anything about losing jobs overseas?  No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
    Go back in time.  When jobs were originally sent overseas, one of the reasons given was that consumers wanted lower prices, which was crap; it was all about profits.  The point is, every major blunder forced on us gets blamed on the consumers "wanting" something.  Convenience, lower prices, etc.
    Ahhh.  I see the angle now. I see it as what the future generations will want too.  Easier and cheaper. I get it.  TY.
  • static111 Posts: 4,889
    static111 said:
    static111 said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    That's the defense for every bad decision.  Shipping jobs overseas... consumers want lower prices.  Not investing in clean energy R&D earlier... consumers wanted cheap fuel, etc.  No matter what, it always gets blamed on people who have no alternatives and have to purchase what they need to function in capitalism from among the available offerings.
    Who said anything about losing jobs overseas?  No, those jobs will be eliminated altogether.

    If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas; they will be removed completely.
    Go back in time.  When jobs were originally sent overseas, one of the reasons given was that consumers wanted lower prices, which was crap; it was all about profits.  The point is, every major blunder forced on us gets blamed on the consumers "wanting" something.  Convenience, lower prices, etc.
    Ahhh.  I see the angle now. I see it as what the future generations will want too.  Easier and cheaper. I get it.  TY.
    The thing is, even though this crap is being railroaded through along with "smart" appliances and the like, once the picture is clear that the negatives outweigh the positives, it will be the same old "it's what the consumer wanted" argument. When in fact it will have been forcibly entwined into everything in society, to the point that it can't be undone, by industry and advertising and not by consumers.
    Scio me nihil scire

    There are no kings inside the gates of eden
  • benjs said:
    static111 said:
    Former Google CEO Eric Schmidt: AI guardrails "aren't enough" (axios.com)

    You've got five years at most to live a good life, and then AI takes over.  
    brianlux said:
    Thankfully, there are efforts like this being taken.  We would do well to support these efforts.

    US, Britain, other countries ink agreement to make AI 'secure by design'

    November 27, 2023 · 6:57 PM PST · Updated 15 hours ago

    WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

    In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.



    This is the beginning of the end, seriously.  Every company is going to force this down our throats whether we like it or not.  "It's the future"
    it's what the consumers want... just like every other pile of crap we have been force fed and turned into a ubiquitous part of our lives that we couldn't do without.
    Consumers ACTUALLY want this crap?  Really?  Or is it something that companies put a shit ton of money into and are saying "you want/need this"?

    I look at this like NFTs, where EVERYBODY wants it at first but....

    You know what?  If I wanted to be lazy and use it for homework or music writing, art even, then I guess if I didn't or couldn't be creative enough then I would like it.

    I think a lot of people's jobs are gonna be gone from this, lol.
    I'm not sure how much you've used ChatGPT, but I've done the following with it in the past month alone:

    -prepared food based on its recipes (after telling it what I have in stock)
    -built project plans and had it provide constructive criticism on what elements are missing/confusing
    -provided details of a bluegrass song and had it propose some interesting chord substitutions based on proper music theory for the genre
    -successfully had it write Python programs based on my describing what I'm trying to accomplish, along with explanations of the code (note that I cannot write quality Python code myself)
    -asked it to interview me and then produce a job description based on the conversation, as well as a proposal for an onboarding roadmap
    -learned statistics about how often control of Congress has historically switched hands when the presidency does or doesn't change

    None of these ventures will eliminate a job, but all of them increase my abilities as an employee and individual, which is wonderful. It's absolutely a can of worms being opened up, but there are positive use cases like the ones I mentioned above. Users of AI aren't necessarily lazy, uninspired, or seeking an opportunity to plagiarize, and some of these cases are notably uncomplicated in their usefulness.
    If you don’t mind, what ingredients did you have lying around and what did AI suggest that you make? Was it good?

    Post just the ingredients and let me think about it for a day to see what I could come up with. Then post the rest of your answers. I’m intrigued by this and your experience.
    09/15/1998 & 09/16/1998, Mansfield, MA; 08/29/00 08/30/00, Mansfield, MA; 07/02/03, 07/03/03, Mansfield, MA; 09/28/04, 09/29/04, Boston, MA; 09/22/05, Halifax, NS; 05/24/06, 05/25/06, Boston, MA; 07/22/06, 07/23/06, Gorge, WA; 06/27/2008, Hartford; 06/28/08, 06/30/08, Mansfield; 08/18/2009, O2, London, UK; 10/30/09, 10/31/09, Philadelphia, PA; 05/15/10, Hartford, CT; 05/17/10, Boston, MA; 05/20/10, 05/21/10, NY, NY; 06/22/10, Dublin, IRE; 06/23/10, Northern Ireland; 09/03/11, 09/04/11, Alpine Valley, WI; 09/11/11, 09/12/11, Toronto, Ont; 09/14/11, Ottawa, Ont; 09/15/11, Hamilton, Ont; 07/02/2012, Prague, Czech Republic; 07/04/2012 & 07/05/2012, Berlin, Germany; 07/07/2012, Stockholm, Sweden; 09/30/2012, Missoula, MT; 07/16/2013, London, Ont; 07/19/2013, Chicago, IL; 10/15/2013 & 10/16/2013, Worcester, MA; 10/21/2013 & 10/22/2013, Philadelphia, PA; 10/25/2013, Hartford, CT; 11/29/2013, Portland, OR; 11/30/2013, Spokane, WA; 12/04/2013, Vancouver, BC; 12/06/2013, Seattle, WA; 10/03/2014, St. Louis. MO; 10/22/2014, Denver, CO; 10/26/2015, New York, NY; 04/23/2016, New Orleans, LA; 04/28/2016 & 04/29/2016, Philadelphia, PA; 05/01/2016 & 05/02/2016, New York, NY; 05/08/2016, Ottawa, Ont.; 05/10/2016 & 05/12/2016, Toronto, Ont.; 08/05/2016 & 08/07/2016, Boston, MA; 08/20/2016 & 08/22/2016, Chicago, IL; 07/01/2018, Prague, Czech Republic; 07/03/2018, Krakow, Poland; 07/05/2018, Berlin, Germany; 09/02/2018 & 09/04/2018, Boston, MA; 09/08/2022, Toronto, Ont; 09/11/2022, New York, NY; 09/14/2022, Camden, NJ; 09/02/2023, St. Paul, MN; 05/04/2024 & 05/06/2024, Vancouver, BC; 05/10/2024, Portland, OR;

    Libtardaplorable©. And proud of it.

    Brilliantati©
  • bootlegger10 Posts: 16,025
    So many positives to technology.  Unfortunately the negatives will likely outweigh the benefits in the long run. 
  • mace1229 Posts: 9,481
    So many positives to technology.  Unfortunately the negatives will likely outweigh the benefits in the long run. 
    I agree. I'm not worried about AI becoming self-aware and Skynet forming or anything. But we'll just become too reliant on it. Already, all kids have to do is take a picture of homework problems and they get the answer. The same thing is going to happen with so much more. I would imagine the younger generations who grew up attached to cell phones and more technology will become even more reliant.
  • Good luck with this.


    The rise of AI fake news is creating a ‘misinformation superspreader’

    AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

    Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

    Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

    Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.

    One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.

    The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.

    “Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”

    Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.

    Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.

    Readers can easily be fooled by the websites.

    Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.

    The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)

    But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.

    Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”

    Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.

    The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.

    NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.

    The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.

    Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.

    Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.

    But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”

    It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”

    Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.

    “Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”

    Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.

    It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.

    “You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”

    https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/

    09/15/1998 & 09/16/1998, Mansfield, MA; 08/29/00 08/30/00, Mansfield, MA; 07/02/03, 07/03/03, Mansfield, MA; 09/28/04, 09/29/04, Boston, MA; 09/22/05, Halifax, NS; 05/24/06, 05/25/06, Boston, MA; 07/22/06, 07/23/06, Gorge, WA; 06/27/2008, Hartford; 06/28/08, 06/30/08, Mansfield; 08/18/2009, O2, London, UK; 10/30/09, 10/31/09, Philadelphia, PA; 05/15/10, Hartford, CT; 05/17/10, Boston, MA; 05/20/10, 05/21/10, NY, NY; 06/22/10, Dublin, IRE; 06/23/10, Northern Ireland; 09/03/11, 09/04/11, Alpine Valley, WI; 09/11/11, 09/12/11, Toronto, Ont; 09/14/11, Ottawa, Ont; 09/15/11, Hamilton, Ont; 07/02/2012, Prague, Czech Republic; 07/04/2012 & 07/05/2012, Berlin, Germany; 07/07/2012, Stockholm, Sweden; 09/30/2012, Missoula, MT; 07/16/2013, London, Ont; 07/19/2013, Chicago, IL; 10/15/2013 & 10/16/2013, Worcester, MA; 10/21/2013 & 10/22/2013, Philadelphia, PA; 10/25/2013, Hartford, CT; 11/29/2013, Portland, OR; 11/30/2013, Spokane, WA; 12/04/2013, Vancouver, BC; 12/06/2013, Seattle, WA; 10/03/2014, St. Louis. MO; 10/22/2014, Denver, CO; 10/26/2015, New York, NY; 04/23/2016, New Orleans, LA; 04/28/2016 & 04/29/2016, Philadelphia, PA; 05/01/2016 & 05/02/2016, New York, NY; 05/08/2016, Ottawa, Ont.; 05/10/2016 & 05/12/2016, Toronto, Ont.; 08/05/2016 & 08/07/2016, Boston, MA; 08/20/2016 & 08/22/2016, Chicago, IL; 07/01/2018, Prague, Czech Republic; 07/03/2018, Krakow, Poland; 07/05/2018, Berlin, Germany; 09/02/2018 & 09/04/2018, Boston, MA; 09/08/2022, Toronto, Ont; 09/11/2022, New York, NY; 09/14/2022, Camden, NJ; 09/02/2023, St. Paul, MN; 05/04/2024 & 05/06/2024, Vancouver, BC; 05/10/2024, Portland, OR;

    Libtardaplorable©. And proud of it.

    Brilliantati©
  • brianlux Moving through All Kinds of Terrain. Posts: 42,283

    Good luck with this.


    The rise of AI fake news is creating a ‘misinformation superspreader’

    AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

    Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

    Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

    Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.

    One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.

    The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.

    “Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”

    Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.

    Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.

    Readers can easily be fooled by the websites.

    Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.

    The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)

    But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.

    Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”

    Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.

    The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.

    NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.

    The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.

    Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.

    Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.

    But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”

    It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”

    Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.

    “Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”

    Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.

    It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.

    “You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”

    https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/


    AI is giving great credence to the saying, "You can believe half of what you see and none of what you hear."  In fact, it is altering that saying to be simply, "You can believe none of what you see or hear."
    "Pretty cookies, heart squares all around, yeah!"
    -Eddie Vedder, "Smile"

    "Try to not spook the horse."
    -Neil Young












