Jobs at risk due to automation - what to do?

1235 Comments

  • RoleModelsinBlood31
    RoleModelsinBlood31 Austin TX Posts: 6,242
    Has anyone ever been to a Sonic?  My first time was down here in TX; we definitely didn't have them in NY.  It's about as simple an ordering system as I've seen.  You order from a touch screen at every car parking space, so there are probably like 30 of them.  You pay via the same touch screen.  You continue to sit in car, fart, get fatter.  A minimum-wage-earning human walks or roller-skates your order over to you, gives food, gives napkin and ketchup and maybe a mint, walks away.  Fat American slams food into gaping maw, drips condiments onto clothes, farts, burps, drives away.

    There's probably like 2-3 people making food inside, 1 manager, and 2 or 3 roller skaters.  I feel like their ordering system must eliminate about 3 humans.
    I'm like an opening band for your mom.
  • mace1229
    mace1229 Posts: 9,844
    RoleModelsinBlood31 said:
    …There's probably like 2-3 people making food inside, 1 manager, and 2 or 3 roller skaters.  I feel like their ordering system must eliminate about 3 humans.
    You’re pretty gassy when you eat.
  • PJPOWER
    PJPOWER Posts: 6,499
    RoleModelsinBlood31 said:
    …There's probably like 2-3 people making food inside, 1 manager, and 2 or 3 roller skaters.  I feel like their ordering system must eliminate about 3 humans.
    Wait, they don’t have Sonics in NY!?!?
  • rgambs
    rgambs Posts: 13,576
    Within a few decades at most, Watson will be the best doctor at every hospital, big or small.
    It isn't just burger jobs, for sure; AI is changing everything, and it's pretty intimidating. It's a Wild West scenario, and there's no predicting where it leads.
    Monkey Driven, Call this Living?
  • bbiggs
    bbiggs Posts: 6,964
    rgambs said:
    …AI is changing everything, and it's pretty intimidating. It's a Wild West scenario, and there's no predicting where it leads.
    I couldn’t agree more. I talk with people about this who say things like, “we’ll still need humans to fix the robots, so that will provide job opportunity.” What happens when the robots know how to fix themselves? Then what? The technology is there already. Crazy stuff. 
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,664
    bbiggs said:
    …What happens when the robots know how to fix themselves? Then what? The technology is there already. Crazy stuff.
    I picture one of four scenarios:

    -AI will develop an altruistic attitude toward humans such that machines will do whatever they can to make life comfortable for all life.

    -AI will allow humans to maintain control and will simply serve as machines and have no concern for humans one way or the other.

    -AI will see humans as destructive to the planet and will eliminate us in order to save the planet.

    -AI will not have developed any sense of morality and will consider any life form to be a waste of resources that could be used to create machines, and therefore will eliminate all life forms on Earth in order to continue their own kind.

    Far-fetched?  Maybe.  But seriously, is this a game we really want to play?  I dread the likelihood of AI being further developed.  It's the most insane thing humans have ever done, next to polluting and destroying the planet.  We are an absurd species.

    "It's a sad and beautiful world"
    -Roberto Benigni

  • mace1229
    mace1229 Posts: 9,844
    edited April 2018
    brianlux said:
    …Far-fetched?  Maybe.  But seriously, is this a game we really want to play?  I dread the likelihood of AI being further developed.  It's the most insane thing humans have ever done, next to polluting and destroying the planet.

    I thought this was a joke at first. I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be; I just don't see it developing any sort of feelings or morals on its own as even a possibility.
  • bbiggs
    bbiggs Posts: 6,964
    mace1229 said:
    …I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be.
    There is military AI technology, either in the works or already in use, that allows AI “soldiers” to go into hostile situations, assess the situation, and make determinations to shoot, detonate bombs, etc.  My understanding is that this is entirely automated, without a human at the control center. That’s scary shit. What if the AI assesses the risk incorrectly?
  • rgambs
    rgambs Posts: 13,576
    mace1229 said:
    …I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be.
    It is definitely a possibility.  Far-fetched, maybe, but several large AI projects have already gone beyond the expected limits of the behavior their programming provides.

    https://futurism.com/a-facebook-ai-unexpectedly-created-its-own-unique-language/
    Monkey Driven, Call this Living?
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,664
    mace1229 said:
    …I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be.
    No, not a joke.  Far from it.

    First of all, who does the programming?  Are these people concerned with the well-being of this planet's human occupants and other life?  How would we know what these programmers intend in developing AI?  What if even just one top-notch AI programmer turns out to be a sociopath?

    And why would we assume that AI, once it truly became AI, would adhere to its initial programmers' intentions (assuming those are beneficial in the first place) when the machines themselves become programmers (which, to a certain degree, they already are)?

    I just don't have that much faith in the benefits of this kind of technology.  Technology has given us some good things, especially in fields like medicine.  But it has also created weapons of mass destruction, altered the Earth's climate, stripped the planet of resources, rapidly sped up species extinction, and is breeding generations of zombies glued to electronic social media devices.

    I'm not saying all technology is bad, but I will argue that it has gotten out of hand, and I see no reason to believe it will not continue down that sketchy path.

    bbiggs said:
    …My understanding is that this is entirely automated, without a human at the control center. That’s scary shit. What if the AI assesses the risk incorrectly?
    Excellent point, bbiggs.
    "It's a sad and beautiful world"
    -Roberto Benigni

  • mace1229
    mace1229 Posts: 9,844
    brianlux said:
    …I'm not saying all technology is bad, but I will argue that it has gotten out of hand, and I see no reason to believe it will not continue down that sketchy path.
    Bad programming or glitches in the system I could see as a possibility. It’s the self-awareness straight out of T2 that I don’t see as a realistic possibility.
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,664
    mace1229 said:
    …Bad programming or glitches in the system I could see as a possibility. It’s the self-awareness straight out of T2 that I don’t see as a realistic possibility.
    Seems to me self-awareness is the ultimate goal of AI.  I hope I'm not around any longer if that comes to pass.
    "It's a sad and beautiful world"
    -Roberto Benigni

  • tempo_n_groove
    tempo_n_groove Posts: 41,526
    I went to my local McDonald's last week and you HAD to order/pay on the screens that were set up.  They had workers preparing orders and then calling your number to pick up your food (in my case, an iced coffee).  Looked like fewer workers on staff.  Very interesting.  Worked well, though.
    SHEETZ has done this for years.  It's nothing new and yes, it does work efficiently!
  • mace1229
    mace1229 Posts: 9,844
    brianlux said:
    mace1229 said:
    brianlux said:
    mace1229 said:
    brianlux said:
    bbiggs said:
    rgambs said:
    Within a few decades at most, Watson will be the best doctor at every hospital, big or small.  
    It isn't just burger jobs for sure, AI is changing everything and it's pretty intimidating.  It's a wild west scenario and there's no predicting where it leads.
    I couldn’t agree more. I talk with people about this who say things like, “we’ll still need humans to fix the robots, so that will provide job opportunity.” What happens when the robots know how to fix themselves? Then what? The technology is there already. Crazy stuff. 
    I picture one of four scenarios:

    -AI will develop an altruistic attitude toward humans such that machines will do whatever they can to make life comfortable for all life.

    -AI will allow humans to maintain control and will simply serve as machines and have no concern for humans one way or the other.

    -AI will see humans as destructive to the planet and will eliminate us in order to save the planet.

    -AI will not have developed any sense of morality and will consider any life form to be a waste of resources that could be used to create machines and therefor will eliminate all life forms on earth in order to continue their own kind.

    Far fetched?  Maybe.  But, seriously, is this a game we really want to play?  I dread the likelihood of AI being further developed.  It's the most insane thing humans have ever done next to polluting and destroying the planet.  We are an absurd species. 

    I thought this was a joke at first. I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be, developing any sort of feelings/morals  on their own I just don’t see as even a possibility.
    No, not a joke.  Far from it.

    First of all, who does the programming?  Are these people concerned with the well being of this planet's human occupants and other life?  How would we know what these programmers intend in developing AI?  What if even just one top-notch AI programmer turns out to be a sociopath? 

    And why would we assume that AI, once it truly became AI, would adhere to it's initial programmers intentions (assuming they are beneficial in the first place) when the machines themselves become programmers (which to a certain degree they already are)?

    I just don't have that much faith in the benefits of this kind of technology.  Technology has given us some good things, especially in field like medicine.  But it has also created weapons of mass destruction, altered the earth's climate, stripped the planet of resources, rapidly sped up species extinction and is breading generations of zombies glued to electronic social media devices. 

    I'm not saying all technology is bad, but I will argue that it has gotten out of hand and I see no reason to believe it will not continue down that sketchy path.

    bbiggs said:
    mace1229 said:
    brianlux said:
    bbiggs said:
    rgambs said:
    Within a few decades at most, Watson will be the best doctor at every hospital, big or small.  
    It isn't just burger jobs for sure, AI is changing everything and it's pretty intimidating.  It's a wild west scenario and there's no predicting where it leads.
    I couldn’t agree more. I talk with people about this who say things like, “we’ll still need humans to fix the robots, so that will provide job opportunity.” What happens when the robots know how to fix themselves? Then what? The technology is there already. Crazy stuff. 
    I picture one of four scenarios:

    -AI will develop an altruistic attitude toward humans such that machines will do whatever they can to make life comfortable for all life.

    -AI will allow humans to maintain control and will simply serve as machines and have no concern for humans one way or the other.

    -AI will see humans as destructive to the planet and will eliminate us in order to save the planet.

    -AI will not have developed any sense of morality and will consider any life form to be a waste of resources that could be used to create machines and therefor will eliminate all life forms on earth in order to continue their own kind.

    Far fetched?  Maybe.  But, seriously, is this a game we really want to play?  I dread the likelihood of AI being further developed.  It's the most insane thing humans have ever done next to polluting and destroying the planet.  We are an absurd species. 

    I thought this was a joke at first. I think there is zero chance AI will turn into Skynet or the Matrix. AI will always be what it is programmed to be; developing any sort of feelings/morals on their own I just don't see as even a possibility.
    There is military AI technology, either in the works or already there, that allows for AI “soldiers” to go into hostile situations, assess the situation and make determinations to shoot, detonate bombs, etc.  My understanding is that this is entirely automated without a human at the control center. That’s scary shit. What if the AI assesses the risk incorrectly? 
    Excellent point, bbiggs.
    Bad programming or glitches in the system I could see as a possibility. It’s the self-awareness straight out of T2 that I don’t see as a realistic possibility.
    Seems to me self-awareness is the ultimate goal of AI.  I hope I'm not around any longer if that comes to pass.
    I think the goal is not self-awareness, but independence, that is, to think on their own to problem-solve and find solutions. In the end it is still just a machine or a tool, no more "alive" than the hammer hanging in my garage. The part I can't grasp is these machines believing they are a life form and developing the mindset of survival. For a machine to try to dominate and survive, it would have to believe it is alive to some extent, right?

    I agree with bbiggs and his comparison to the military soldiers. Reminds me of a conversation I just had with my mom when she came to visit.  She told me about the scariest scene in a movie for her at the time. It was Robocop, when they develop that robot to shoot bad guys and it malfunctions and targets one guy in the meeting. The robot is counting down, ordering him to comply, while everyone else in the meeting is pushing him away from them. That seems reasonable to me, a malfunction in programming. It even sounds like what the military is trying to develop.

    I just can't comprehend it developing feelings and morals and trying to take over the human race for survival. A robot-soldier malfunctioning I expect to actually happen at some point, just like with the driverless cars. 
  • bootlegger10
    bootlegger10 Posts: 16,260
    I went to see the Foo Fighters this weekend and they turned all the lights off and asked the crowd to turn on their cell phone lights. It was amazing how lit up the venue became (you could hardly see the stage when dark and then could see it fine).  I couldn't help but think, though, that we are all just becoming machines and slaves to our technology.
  • bbiggs
    bbiggs Posts: 6,964
    I went to see the Foo Fighters this weekend and they turned all the lights off and asked the crowd to turn on their cell phone lights. It was amazing how lit up the venue became (you could hardly see the stage when dark and then could see it fine).  I couldn't help but think, though, that we are all just becoming machines and slaves to our technology.
    That scares the hell out of me as a parent. 
  • tempo_n_groove
    tempo_n_groove Posts: 41,526
    bbiggs said:
    I went to see the Foo Fighters this weekend and they turned all the lights off and asked the crowd to turn on their cell phone lights. It was amazing how lit up the venue became (you could hardly see the stage when dark and then could see it fine).  I couldn't help but think, though, that we are all just becoming machines and slaves to our technology.
    That scares the hell out of me as a parent. 
    When I bought the first i-phone ten years ago it was a really big deal.  No one had one.  I was always excited to see someone else with one.  Now everyone and their mother owns a smart phone/i-phone.  Ride a train or a bus and everyone is looking down.  Eye contact is becoming awkward for some now...
  • rgambs
    rgambs Posts: 13,576
    @mace1229
    Your concept of life and consciousness is pretty binary.  I don't mean offense by that, btw, and you aren't alone there.  There isn't even a definitive consensus on which biological and chemical molecules and organisms constitute life and which do not.  Any list of attributes necessary for the designation of life will show that there are organisms and molecules which display some of the attributes and lack others.  There is much in the natural world that straddles the line between animate and inanimate...which is an inaccurate wording, as there are many molecules which are animate but not really alive by other standards.  It's complicated lol
    Life and consciousness is not an all or none scenario at all.  Both rise in levels, and computers have already ascended pretty high up the ladder.
    Think of a bacterium: it lacks complex levels of consciousness and yet it is alive.  It doesn't act with a will, and yet it acts in ways that affect its environment, sometimes to an amazing degree.  We owe our oxygen-rich, liveable atmosphere to single-cell organisms, and they could wipe us out as well.

    Computers and AI already have greater self-awareness than many multicellular organisms, with advanced abilities to run diagnostics on their "memories" and "thought processes".
     They need not think in terms of "life or not life" to create self-preservation directives if given the opportunity.
    Comparing an advanced AI program to a hammer is not realistic at all.  A hammer is not animate, it is not self-aware at any level, it has no metabolism, it does not fit any definition of life, but computers and AI are animate, self-aware, and metabolic.  
    They won't suddenly wake up one day, it's a slow(ish) process of ascending the ladder of consciousness just like we did over millions of years through the processes of evolution.
    Monkey Driven, Call this Living?
  • HughFreakingDillon
    HughFreakingDillon Winnipeg Posts: 39,583
    Has anyone ever been to a Sonic?  My first time was down here in TX, we definitely didn't have them in NY.  It's about as simple an ordering system as I've seen.  You order from a touch screen that is at every car parking space, so there's probably like 30 of them.  You pay via same touch screen.  You continue to sit in car, fart, get fatter.  Minimum wage earning human walks or roller-skates your order over to you, gives food, gives napkin and ketchup and maybe a mint, walks away.  Fat American slams food into gaping maw, drips condiments onto clothes, farts, burps, drives away.

    Theres probably like 2-3 people making food inside, 1 manager, 2 or three roller skaters.  I feel like their ordering system must eliminate about 3 humans.
    this was the business model of A&W in the 70's, except it wasn't automated, it was a voice/call box. same thing, girls on roller skates, the whole family ate in the car. it died a slow death. i'm surprised to hear it came back in automated form! 
    By The Time They Figure Out What Went Wrong, We'll Be Sitting On A Beach, Earning Twenty Percent.
  • brianlux
    brianlux Moving through All Kinds of Terrain. Posts: 43,664
    Automation shmotomation.  My &**^%#@ internet service is down at home, which means I only have a few moments when I'm at work to catch up, pay bills, etc.  This world of electronics on my end is flawed.  Probably won't be able to be on here much for a while.

     Later, friends.
    "It's a sad and beautiful world"
    -Roberto Benigni