Artificial Intelligence: What could go wrong?
MIT Scientists Unveil First Psychopath AI, 'Norman'
Scientists at the Massachusetts Institute of Technology unveiled the first artificial intelligence algorithm trained to be a psychopath. The AI was fittingly dubbed "Norman" after Norman Bates, the notorious killer in Alfred Hitchcock's Psycho.
We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species.
MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan trained Norman to perform image captioning, "a deep learning method" that allows AI to generate text descriptions for images. However, the team exclusively exposed Norman to violent and disturbing images posted on a subreddit dedicated to death.
They then gave Norman a Rorschach inkblot test and the AI responded with chilling interpretations such as, "a man is electrocuted and catches to death," "pregnant woman falls at construction" and "man is shot dead in front of his screaming wife." Meanwhile, a standard AI responded to the same inkblots with, "a close up of a vase with flowers," "a couple of people standing next to each other" and "a person is holding an umbrella in the air."
While Norman may conjure dystopian images of killer robots, the MIT team said the purpose of the experiment was to prove that AI algorithms aren't inherently biased, but that data input methods – and the people inputting that data – can significantly alter an AI's behavior. As Newsweek pointed out, there have been several notable cases where racism and bias have crept into machine learning, like the Google Photos image recognition algorithm that was classifying black people as "gorillas."
-Eddie Vedder, "Smile"
https://motherboard.vice.com/amp/en_us/article/xwm5mk/mit-psychotic-ai-rehabilitation
disgusting. I hate the fact that there are parents out there that excuse this behaviour away as "youthful transgressions". Sounds like a Brett Kavanaugh type thing. charge him/them.
-EV 8/14/93
Any parent that is OK with their kid doing that is just as guilty and should be charged as such.
-Eddie Vedder, "Smile"
I tell you kids these days have a lot to worry about.
Not today Sir, Probably not tomorrow.............................................. bayfront arena st. pete '94
you're finally here and I'm a mess................................................... nationwide arena columbus '10
memories like fingerprints are slowly raising.................................... first niagara center buffalo '13
another man ..... moved by sleight of hand...................................... joe louis arena detroit '14
The idea of artificial intelligence in general seems frightening, though, because it's like letting a huge cat out of a bag: the animal's behavior can't truly be anticipated or stopped later. Once it is out, it is out!
So many sci-fi novels and movies are based on A.I. running amok.
(Or, that sad movie about the artificial child who was abandoned but lived on searching for the person who originally owned him. That story made me cry... It was a story about humans abusing robots with artificial intelligence who were made to feel attached to their human purchasers. Another example of humans using other creatures as disposable pets...)
But, we have another Beatles song. Maybe I'm just being a worried older person?
There are no kings inside the gates of eden
Now it's even in AI
You've got five years at most to live a good life, and then AI takes over.
US, Britain, other countries ink agreement to make AI 'secure by design'
WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."
In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
-Eddie Vedder, "Smile"
There are no kings inside the gates of eden
I look at this like NFTs and EVERYBODY wants it at first but....
You know what? If I wanted to be lazy and use it for homework or music writing, even art, then I guess if I didn't or couldn't be creative enough, I would like it.
I think a lot of people's jobs are gonna be gone from this, lol.
-prepared food based on its recipes (after telling it what I have in stock)
-built project plans and had it provide constructive criticism on what elements are missing/confusing
-provided details of a bluegrass song and had it propose some interesting chord substitutions based on proper music theory for the genre
-successfully had it write Python programs based on providing it details of what I'm trying to accomplish, as well as code explanations (note that I cannot write quality Python code myself)
-requested it to interview me and then produce a job description based on the conversation, as well as a proposal for an onboarding roadmap
-learned statistics about the proportionality of Congress switching leadership as the President does/doesn't change historically
None of these ventures will eliminate a job, but all of them increase my abilities as an employee/individual, which is wonderful. It's absolutely a can of worms being opened up, but there are positive use cases like the ones I mentioned above. Users of AI aren't necessarily lazy, uninspired, or looking to plagiarize, and some of these uses are straightforwardly helpful.
EV
Toronto Film Festival 9/11/2007, '08 - Toronto 1 & 2, '09 - Albany 1, '11 - Chicago 1
There are no kings inside the gates of eden
Nothing you mentioned struck my fancy other than the food one. I do enjoy making new dishes but it's usually from seeing them and I have to buy new ingredients.
The job interview one is basically preparation, which I think you could do with a little research or by just knowing the field.
My view of it may change over time. Wondering if it can help with investment trends? See? I just changed my mind about it, lol.
If Yang is right about the trucking industry and AI, then all those trucking jobs won't go overseas, they will be removed completely.
There are no kings inside the gates of eden
Post just the ingredients and let me think about it for a day to see what I could come up with. Then post the rest of your answers. I’m intrigued by this and your experience.
Libtardaplorable©. And proud of it.
Brilliantati©
Good luck with this.
The rise of AI fake news is creating a ‘misinformation superspreader’
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news
Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.
Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.
One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.
The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.
“Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”
Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.
Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.
Readers can easily be fooled by the websites.
Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.
The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)
But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.
Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”
Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.
The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.
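The automated path Brewster describes — scrape for keyword matches, run each hit through a language model to rephrase it, then auto-post — can be sketched roughly as follows. Every name here is a hypothetical stand-in: a real operation would pull from live sites and call an actual large language model rather than these stubs.

```python
# Illustrative sketch of the automated fake-news pipeline described above.
# All functions are hypothetical stand-ins, not any site's real code.

def matches_keywords(article: str, keywords: list[str]) -> bool:
    """Step 1: the scraper keeps only articles containing target keywords."""
    text = article.lower()
    return any(kw.lower() in text for kw in keywords)

def rewrite(article: str) -> str:
    """Step 2: stand-in for the LLM call that rephrases a story so it reads
    as 'unique' and evades plagiarism checks; here we just tag it."""
    return "[rewritten] " + article

def run_pipeline(scraped: list[str], keywords: list[str]) -> list[str]:
    """Step 3: in the real pipeline the results are auto-posted; here we
    simply return the rewritten stories."""
    return [rewrite(a) for a in scraped if matches_keywords(a, keywords)]

posts = run_pipeline(
    ["Election results disputed in key state", "Local bake sale raises funds"],
    keywords=["election"],
)
print(posts)  # only the keyword-matching story survives, rewritten
```

The point of the sketch is how little machinery is involved: filtering plus a rewrite call is the entire loop, which is why these sites can scale to hundreds of articles a day.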
NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.
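The detection idea — scanning for chatbot error messages left in by careless editors — can be illustrated with a toy scanner. The phrase list below is an assumption for illustration, not NewsGuard's actual criteria.

```python
# Hedged sketch of scanning page text for telltale chatbot phrases that
# indicate AI-generated content was published without editing.
# The phrase list is illustrative only.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def looks_ai_generated(page_text: str) -> bool:
    """Flag a page if any telltale phrase appears (case-insensitive)."""
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

print(looks_ai_generated("Sorry, but as an AI language model I cannot..."))  # True
print(looks_ai_generated("Local council approves new bridge."))  # False
```

A scan like this only catches the sloppiest sites, which is consistent with NewsGuard's caveat that it flags content produced "without adequate editing."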
The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.
Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.
Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.
But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”
It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”
Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.
“Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”
Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.
It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.
“You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”
https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
Libtardaplorable©. And proud of it.
Brilliantati©
AI is giving great credence to the saying, "You can believe half of what you see and none of what you hear". In fact, it is altering that saying to be simply "You can believe none of what you see or hear".
-Eddie Vedder, "Smile"