AI, Great Friend or Dangerous Foe?



    • Total voters: 59

    jamil

    code ho
    Site Supporter
    Rating - 0%
    0   0   0
    Jul 17, 2011
    62,361
    113
    Gtown-ish
    Long story short, we don't know, like at all. We can't qualify one iota of human existence. One thing everyone always seems to get wrong regarding the robot workers they think will replace them is the assumption that it will require Terminator-like full locomotion. Not true at all. If there is no need for bipedal locomotion or opposable-thumb dexterity, it will not be implemented in said robot.
    This is what gets me about the discussion of AI. People are anthropomorphizing it as if it thinks for itself. It doesn’t think. It doesn’t have consciousness. It’s not human-like. It has zero emotional intelligence. It learns through data and pattern recognition. So its capabilities are limited by the completeness and quality of the data it was trained on.
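The "data and pattern recognition" point above can be made concrete with a toy bigram model. This sketch is entirely made up for illustration (real LLMs are vastly larger), but it shares the same character: the output can only recombine patterns that were present in the training data.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it has no understanding, only counts
# of which word followed which in its training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:  # word never seen mid-corpus: the model is stuck
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # only ever recombines words from the corpus
```

If the corpus never contained a word or pattern, the model can never produce it; garbage or gaps in, garbage or gaps out.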
     

    jamil

    code ho
    Site Supporter
    Rating - 0%
    0   0   0
    Jul 17, 2011
    62,361
    113
    Gtown-ish
    That would at most be a side effect. The more data you curate for a model, the more biased it becomes. For some models, like voice conversion models, the more data you curate, the better the result will likely be. For large language models, like ChatGPT and such, the opposite is true: the more biased you make it, the less useful it is. The primary reason for censorship is Divide and Conquer.
    I’m not sure if this is what you’re saying… more data doesn’t mean more bias unless the more data includes bias. The best training data is the most diverse, quality data.

    Let’s say you’re a software engineer and you’re working with a fairly new code library. You want to know how to do a certain thing with it. If the training data for the AI were exclusively from stackoverflow, the AI would produce bad advice about as often as stackoverflow does.

    Bottom line is, AI is only as accurate as the quality of the data it trains on. Another example: if it only trained on official narratives of covid, it would tell you to follow the science, where TheScience™ is prescribed by people who stood to make a lot of money from it.
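The curation-adds-bias point can be illustrated with a hypothetical sketch (the data and the filter rule are invented for the example): removing examples a curator dislikes shifts the distribution the model sees, even though no kept example was altered.

```python
# Hypothetical labeled data: six product reviews, half positive, half negative.
reviews = [("great product", "pos"), ("terrible service", "neg"),
           ("awful quality", "neg"), ("love it", "pos"),
           ("broke in a day", "neg"), ("works fine", "pos")]

def label_balance(data):
    # Fraction of positive labels in the dataset.
    pos = sum(1 for _, y in data if y == "pos")
    return pos / len(data)

# A curator drops reviews containing words they dislike.
curated = [(text, y) for text, y in reviews
           if "terrible" not in text and "awful" not in text]

print(label_balance(reviews))  # 0.5  -> raw data is balanced
print(label_balance(curated))  # 0.75 -> curation skewed it positive
```

A model trained on the curated set would learn that reviews are mostly positive, a "fact" created by the curator, not by the world.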
     

    ZurokSlayer7X9

    Expert
    Site Supporter
    Rating - 100%
    1   0   0
    Jan 12, 2023
    946
    93
    NWI
    I’m not sure if this is what you’re saying… more data doesn’t mean more bias unless the more data includes bias. The best training data is the most diverse, quality data.

    Let’s say you’re a software engineer and you’re working with a fairly new code library. You want to know how to do a certain thing with it. If the training data for the AI were exclusively from stackoverflow, the AI would produce bad advice about as often as stackoverflow does.

    Bottom line is, AI is only as accurate as the quality of the data it trains on. Another example: if it only trained on official narratives of covid, it would tell you to follow the science, where TheScience™ is prescribed by people who stood to make a lot of money from it.
    Yeah, I could have been clearer about what I meant; let me rephrase. The more you curate the data, the more bias is put in, which will likely not be good for the model.
     

    buckwacker

    Master
    Rating - 100%
    11   0   0
    Mar 23, 2012
    3,153
    97
    It’s not gonna be AI that does us in.
    I don't know. Turning over too much decision-making to AI might do it. We can already see the beginnings of the social conditioning needed to build the required level of public trust in that application of AI.

    I know an Army colonel who told me snippets (he can't reveal classified info) about technology being tested that makes me think ruh-roh. The way it's sold sounds great until you start thinking about the possible darker implications.
     

    sixGuns

    Sharpshooter
    Rating - 100%
    8   0   0
    Aug 24, 2020
    365
    43
    Grabill
    This is what gets me about the discussion of AI. People are anthropomorphizing it as if it thinks for itself. It doesn’t think. It doesn’t have consciousness. It’s not human-like. It has zero emotional intelligence. It learns through data and pattern recognition. So its capabilities are limited by the completeness and quality of the data it was trained on.
    You're right over the target. This is what people don't seem to grasp when discussing AI. I like to direct people to John Searle's Chinese Room argument. The symbols (the data) have no meaning to a computer. My professor called it symbol pushing: a computer is just pushing symbols around, and we humans give meaning to the symbols. Even in binary, the very basis of computing, the 1s and 0s are just symbols. Humans gave them meaning.

    There is no physical manifestation of the number 1 in the material world. Humans can envision the idea of nothing (0), and agree on this... thing in front of us, let's call it "1." (Edit: the "1" comes from two very, very early humans inventing numbers, sharing the experience of, say, a lone tree in the middle of nowhere where no others are found. An idea is proposed: that we create this new abstract concept, "1." I mean, we know there are others, but how many? Both agree. Another new abstract concept is proposed: "2." It's that 1 tree plus another 1 tree, because we know there are more trees than this "1" here. "Ok, I agree," said the other. So on and so forth.) Bam! Numbers. Recursion. Proofs. Math. It's why we've achieved anything we have today. Yet 2+2=5 can be right in some situations, right? Clown world.

    Computers compute numbers. Computers don't understand what numbers are, we do. Why do humans give meaning to things? How do humans give meaning to things? Why can humans do these things? How can two people both like pie, but we can't qualify exactly, in numerical form, how much both like pie for comparison? The rabbit hole is far, far deeper than imagined.
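Searle's "symbol pushing" can be sketched as a bare lookup table (the rules here are invented for illustration): the program produces plausible-looking replies while understanding nothing, because the meaning lives entirely with the humans who wrote and read the rule book.

```python
# A minimal "Chinese Room": input symbols are mapped to output symbols
# by rote lookup. No parsing, no semantics, no thought -- just symbols.
rule_book = {
    "你好": "你好！",       # "hello" -> "hello!"
    "你懂中文吗": "懂一点",  # "do you understand Chinese?" -> "a little"
}

def room(symbol_string):
    # The room pushes symbols around; "?" means "no rule matched".
    return rule_book.get(symbol_string, "?")

print(room("你懂中文吗"))  # the room "answers" without understanding the question
```

From outside, the room appears to understand Chinese; inside, it is the same mechanical lookup whether the symbols are Chinese characters, 1s and 0s, or anything else.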

    It’s not gonna be AI that does us in.
    Turning over too much decision making to AI might do it.
    This is how it happens. It's not the AI. It's humans handing that decision to the AI. It couldn't happen if we didn't let it.
     

    ditcherman

    Grandmaster
    Site Supporter
    Rating - 100%
    22   0   0
    Dec 18, 2018
    8,262
    113
    In the country, hopefully.
    You're right over the target. This is what people don't seem to grasp when discussing AI. I like to direct people to John Searle's Chinese Room argument. The symbols (the data) have no meaning to a computer. My professor called it symbol pushing: a computer is just pushing symbols around, and we humans give meaning to the symbols. Even in binary, the very basis of computing, the 1s and 0s are just symbols. Humans gave them meaning.

    There is no physical manifestation of the number 1 in the material world. Humans can envision the idea of nothing (0), and agree on this... thing in front of us, let's call it "1." Bam! Numbers. Recursion. Proofs. Math. It's why we've even achieved anything we have today. Yet 2+2=5 can be right in some situations, right? Clown world.

    Computers compute numbers. Computers don't understand what numbers are, we do. Why do humans give meaning to things? How do humans give meaning to things? Why can humans do these things? How can two people both like pie, but we can't qualify exactly, in numerical form, how much both like pie for comparison? The rabbit hole is far, far deeper than imagined.



    This is how it happens. It's not the AI. It's humans handing that decision to the AI. It couldn't happen if we didn't let it.
    Ok. Fine.
    I vote we don’t let it.
    Guess what, I was outvoted.

    Maybe it’s not reasoning, thinking, yet.
    But it sure seems it’s the same as.
    Just the simple act of FB showing you ads for things you've thought about, not talked about, just thought about, is close enough for me to call it all the same.
     

    HoosierLife

    Expert
    Rating - 0%
    0   0   0
    Jun 8, 2013
    1,404
    113
    Greenwood
    Ok. Fine.
    I vote we don’t let it.
    Guess what, I was outvoted.

    Maybe it’s not reasoning, thinking, yet.
    But it sure seems it’s the same as.
    Just the simple act of FB showing you ads for things you've thought about, not talked about, just thought about, is close enough for me to call it all the same.
    The FB thing is something different.

    Sometimes I do think they’re listening to your phone calls and reading your emails, but while that is sinister, that would still just be them taking the data they have about you and showing you ads based on it.

    Even if that’s not happening, what they’re really doing is just compiling the data that they know about you and targeting you with ads they think you would be interested in.

    Which obviously is working.
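The compile-and-target mechanism described above can be sketched as a simple interest-scoring loop. The profile, topics, and weights below are all made up for illustration; no real ad system is this simple, but the shape is the same: known data in, ranked ads out.

```python
# Hypothetical user profile: interest -> strength, inferred from past activity.
profile = {"car parts": 5, "camping": 3, "fishing": 1}

# Hypothetical ad inventory: ad -> topic weights.
ads = {
    "Brake pad sale": {"car parts": 1.0},
    "Tent clearance": {"camping": 1.0, "fishing": 0.2},
    "Knitting kits":  {"crafts": 1.0},
}

def score(ad_topics, profile):
    # Relevance = sum of (ad topic weight * user interest strength).
    return sum(w * profile.get(topic, 0) for topic, w in ad_topics.items())

ranked = sorted(ads, key=lambda name: score(ads[name], profile), reverse=True)
print(ranked)  # ['Brake pad sale', 'Tent clearance', 'Knitting kits']
```

No microphone is needed for this to feel uncanny: enough compiled data about what you browse, buy, and who you spend time with predicts interests you may never have typed anywhere.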
     

    ditcherman

    Grandmaster
    Site Supporter
    Rating - 100%
    22   0   0
    Dec 18, 2018
    8,262
    113
    In the country, hopefully.
    The FB thing is something different.

    Sometimes I do think they’re listening to your phone calls and reading your emails, but while that is sinister, that would still just be them taking the data they have about you and showing you ads based on it.

    Even if that’s not happening, what they’re really doing is just compiling the data that they know about you and targeting you with ads they think you would be interested in.

    Which obviously is working.
    Listening to calls and reading emails is so 2017, man.

    Almost everyone I know would tell you that FB ads pop up after you've talked out loud about something. An ad once popped up on my wife's IG for something I'd be interested in, like car parts, something we'd had a conversation about; the targeting messed up that time. In addition, I've talked to multiple people who say they've just thought about something, not talked about it, not searched for it, and it pops up. That's happened to me too. That's just freaky and really inexplicable.
     

    BE Mike

    Grandmaster
    Site Supporter
    Rating - 100%
    18   0   0
    Jul 23, 2008
    7,673
    113
    New Albany
    I've read that AI's current capabilities for diagnosing illnesses are equal or superior to human doctors'. Some AI has been shown to detect cancer where it is not observable to humans. Apparently there is a shortage of doctors, and current doctors are overworked. There is also the possibility of using AI to manage doctors' offices. This one bothers me. It is already difficult to get appointments amended (due to oversight) or rescheduled when dealing with humans. I can't imagine that dealing with AI would be easier.
     