AI, Great Friend or Dangerous Foe?

(Poll attached: 59 total voters)

jamil · code ho · Site Supporter · Gtown-ish
Quote:
Long story short, we don't know, like, at all. We can't qualify one iota of human existence. One thing everyone always seems to get wrong regarding the robot workers they think will replace them is that they will require Terminator-like full locomotion. Not true at all. If there is no need for bipedal locomotion or opposable-thumb dexterity, it will not be implemented in said robot.
This is what gets me about the discussion of AI. People are anthropomorphizing it as if it thinks for itself. It doesn't think. It doesn't have consciousness. It's not human-like. It has zero emotional intelligence. It learns through data and pattern recognition, so its capabilities are limited to the completeness and quality of the data it was trained on.
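A minimal sketch of the "data and pattern recognition" point, in hypothetical standard-library Python (the function names and training text are invented for illustration): a bigram model produces fluent-looking output purely by replaying word-pair statistics from whatever text it was trained on.

```python
# A toy bigram "language model": it counts which word followed which in
# the training text and generates by sampling those observed pairs.
# There is no understanding anywhere, only frequency statistics.
import random
from collections import defaultdict

def train_bigrams(text):
    follows = defaultdict(list)  # word -> every word seen right after it
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, word, n=8):
    out = [word]
    for _ in range(n):
        if word not in follows:
            break  # dead end: this word never had a follower in training
        word = random.choice(follows[word])  # duplicates weight the choice
        out.append(word)
    return " ".join(out)

follows = train_bigrams("the dog chased the cat and the cat ran up the tree")
print(generate(follows, "the"))  # e.g. "the cat and the dog chased the cat"
```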
     

jamil · code ho · Site Supporter · Gtown-ish
ZurokSlayer7X9 said:
That would at most be a side effect. The more data you curate for a model, the more biased it becomes. For some models, like voice-conversion models, the more data you curate, the better the result will likely be. For large language models, like ChatGPT and such, the opposite is true: the more biased you make it, the less useful it is. The primary reason for censorship is Divide and Conquer.
I'm not sure if this is what you're saying… more data doesn't mean more bias unless the additional data includes bias. The best training data is the most diverse, high-quality data.

Let's say you're a software engineer working with a fairly new code library, and you want to know how to do a certain thing with it. If the AI's training data came exclusively from Stack Overflow, the AI would produce bad advice about as often as Stack Overflow does.

Bottom line is, AI is only as accurate as the quality of the data it trains on. Another example: if it trained only on official narratives about covid, it would tell you to follow the science, where TheScience™ is prescribed by people who stood to make a lot of money from it.
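A minimal sketch of the "bad advice in, bad advice out" point, again in hypothetical Python with an invented corpus: a toy model that memorizes the majority answer per question will confidently repeat the majority's mistake.

```python
# A toy "model" that just memorizes the most common answer per question.
# Train it on a corpus where the popular answer is wrong, and it will
# confidently return the wrong answer: quality in, quality out.
from collections import Counter

def train(corpus):
    """corpus: list of (question, answer) pairs scraped from somewhere."""
    by_question = {}
    for question, answer in corpus:
        by_question.setdefault(question, Counter())[answer] += 1
    # keep only the majority answer for each question
    return {q: c.most_common(1)[0][0] for q, c in by_question.items()}

# Three posts recommend mutating a list while iterating over it (a classic
# bug); one post gives the safe approach. Majority wins.
corpus = [
    ("how do I remove items from a list?", "delete them while looping"),
    ("how do I remove items from a list?", "delete them while looping"),
    ("how do I remove items from a list?", "delete them while looping"),
    ("how do I remove items from a list?", "build a new list instead"),
]
model = train(corpus)
print(model["how do I remove items from a list?"])  # -> "delete them while looping"
```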
     

ZurokSlayer7X9 · Expert · Site Supporter · NWI
jamil said:
I'm not sure if this is what you're saying… more data doesn't mean more bias unless the additional data includes bias. The best training data is the most diverse, high-quality data. […] Bottom line is, AI is only as accurate as the quality of the data it trains on.
Yeah, I could have been clearer about what I meant; let me rephrase. The more you curate the data, the more bias you put in, which will likely not be good for the model.
     

buckwacker · Master
Quote:
It's not gonna be AI that does us in.
I don't know. Turning over too much decision-making to AI might do it. We can already see the beginning of the social conditioning needed to build the required level of public trust in that application of AI.

I know an Army colonel who told me snippets (he can't reveal classified info) about technology being tested that makes me think ruh-roh. The way it's sold sounds great until you start thinking about the possible darker implications.
     

sixGuns · Sharpshooter · Grabill
jamil said:
This is what gets me about the discussion of AI. People are anthropomorphizing it as if it thinks for itself. It doesn't think. It doesn't have consciousness. It's not human-like. It has zero emotional intelligence. It learns through data and pattern recognition, so its capabilities are limited to the completeness and quality of the data it was trained on.
You're right over target. This is what people don't seem to grasp when discussing AI. I like to direct people to John Searle's Chinese Room argument. The symbols (the data) have no meaning to a computer. My professor called it symbol pushing. A computer is just pushing symbols around, and we (humans) give meaning to the symbols. Even in binary, the very basis of computing, the 1 and 0 are just symbols. Humans gave them meaning.

There is no physical manifestation of the number 1 in the material world. Humans can envision the idea of nothing (0), and agree on this... thing in front of us; let's call it "1." Bam! Numbers. Recursion. Proofs. Math. It's how we've achieved everything we have today. Yet 2+2=5 can be right in some situations, right? Clown world.

Computers compute numbers. Computers don't understand what numbers are; we do. Why do humans give meaning to things? How do humans give meaning to things? Why can humans do these things? How can two people both like pie, yet we can't quantify exactly, in numerical form, how much each likes pie for comparison? The rabbit hole is far, far deeper than imagined.
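A minimal sketch of the symbol-pushing point, in hypothetical Python with invented rulebook entries: the program maps symbols to symbols by lookup, and any meaning in the exchange is supplied entirely by the humans reading it. This is Searle's Chinese Room in a few lines.

```python
# Searle's Chinese Room as code: the "room" maps input symbols to output
# symbols by rulebook lookup. Nothing in here knows what the strings mean;
# the meaning is supplied entirely by the humans outside the room.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "几点了?": "我不知道。",       # "What time is it?" -> "I don't know."
}

def room(symbols: str) -> str:
    # the person in the room just matches shapes against the book
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗?"))  # fluent-looking output, zero understanding inside
```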

Quote:
It's not gonna be AI that does us in.

buckwacker said:
Turning over too much decision-making to AI might do it.
This is how it happens. It's not the AI itself; it's humans choosing to hand that decision to the AI. It could not occur if we did not let it.
     

ditcherman · Grandmaster · Site Supporter · In the country, hopefully
sixGuns said:
You're right over target. This is what people don't seem to grasp when discussing AI. I like to direct people to John Searle's Chinese Room argument. […] The rabbit hole is far, far deeper than imagined.

sixGuns said:
This is how it happens. It's not the AI itself; it's humans choosing to hand that decision to the AI. It could not occur if we did not let it.
OK, fine.
I vote we don't let it.
Guess what? I was outvoted.

Maybe it's not reasoning or thinking yet.
But it sure seems like the same thing.
Just the simple act of FB showing you ads for things you've thought about (not talked about, just thought about) is close enough for me to call it all the same.
     