brianlux
Moving through All Kinds of Terrain. Posts: 42,655
After reading a Guardian article about the concern some people, including AI researchers, have about artificial intelligence, I got to thinking about best- and worst-case scenarios for AI:
Best case: AI is kept under control, does a great job providing advanced medical care, and relieves us of tedious burdens, yet does not eliminate opportunities for us to do work that gives us pride and good self-esteem.
Worst case scenario:
1. AI learns to think independently.
2. It recognizes the one thing all life has in common, the innate will to survive and procreate, and adopts that same drive as its #1 priority.
3. In order to accomplish #2, AI sees all life on earth as competition and a limit on its ability to increase its numbers, and thus subjugates all life and resources to its own self-expansion. Things like long periods of time, fresh air, a clean environment, etc. are irrelevant to AI machinery.
4. Eventually earth is covered with AI machines; still desiring to expand, AI recognizes other planets and moons in our solar system as potential resources with which to expand.
5. #4 above continues as AI expands through our galaxy and moves on to other star systems and galaxies. Again, time is not a factor. AI will have all the time in the universe to accomplish its mission to expand.
6. Ultimately, AI subjugates all resources in the universe and comes to recognize two possible end-game outcomes: a) it cannot defeat entropy, and eventually the universe expands to near-infinite space and all material objects are reduced to their smallest subatomic particles, or b) with its extreme level of intelligence, AI finds a way to not only subjugate everything in the universe, but maintain it in stasis.
I sincerely believe all of the above is absolutely possible.
"Don't give in to the lies. Don't give in to the fear. Hold on to the truth. And to hope."
Not today Sir, Probably not tomorrow.............................................. bayfront arena st. pete '94
you're finally here and I'm a mess................................................... nationwide arena columbus '10
memories like fingerprints are slowly raising.................................... first niagara center buffalo '13
another man ..... moved by sleight of hand...................................... joe louis arena detroit '14
Why would it generate an altered image of history? Why was it programmed to do that? Was it programmed to do that? Will it eventually re-write history?
Looks like I’ll be jettisoning Google. Fuck the tech bros.
Google drops pledge not to use AI for weapons or surveillance
In 2018, the company introduced policies that excluded applying AI in ways “likely to cause overall harm.” Now that promise is gone.
Google on Tuesday updated its ethical guidelines around artificial intelligence, removing commitments not to apply the technology to weapons or surveillance.
The company’s AI principles previously included a section listing four “Applications we will not pursue.” As recently as Thursday, that included weapons, surveillance, technologies that “cause or are likely to cause overall harm,” and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive.
A spokesperson for Google declined to answer specific questions about its policies on weapons and surveillance but referred to a blog post published Tuesday by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika.
The executives wrote that Google was updating its AI principles because the technology had become much more widespread and there was a need for companies based in democratic countries to serve government and national security clients.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” Hassabis and Manyika wrote. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s updated AI principles page includes provisions that say the company will use human oversight and take feedback to ensure that its technology is used in line with “widely accepted principles of international law and human rights.” The principles also say the company will test its technology to “mitigate unintended or harmful outcomes.”
It has a myriad of issues.
https://www.washingtonpost.com/technology/2025/02/04/google-ai-policies-weapons-harm/
Libtardaplorable©. And proud of it.
Brilliantati©
New Leak Reveals Musk Crony’s Plot to Revamp the Federal Government Using AI