AI, Great Friend or Dangerous Foe?

Poll: 57 total voters

jamil
Quote:
Long story short, we don't know, like, at all. We can't qualify one iota of human existence. One thing everyone always seems to get wrong about the robot workers they think will replace them is the idea that they'll require "Terminator"-like full locomotion. Not true at all. If there is no need for bipedal locomotion or opposable-thumb dexterity, it will not be implemented in said robot.

This is what gets me about the discussion of AI. People anthropomorphize it as if it thinks for itself. It doesn't think. It doesn't have consciousness. It's not human-like. It has zero emotional intelligence. It learns through data and pattern recognition, so its capabilities are limited by the completeness and quality of the data it was trained on.
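To make the "pattern recognition" point concrete, here is a toy sketch in plain Python (nothing to do with how any real product is built, just the shape of the idea): a tiny table of which word follows which, built from training text. It can only recombine sequences it has seen; anything that wasn't in the data simply isn't in the model.

    import random
    from collections import defaultdict

    def train(corpus: str) -> dict:
        """Record which word follows which in the training text."""
        words = corpus.split()
        table = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            table[current].append(nxt)
        return table

    def generate(table: dict, start: str, length: int = 8) -> str:
        """Replay observed patterns; a word never seen in training dead-ends."""
        out = [start]
        for _ in range(length):
            options = table.get(out[-1])
            if not options:  # nothing learned about this word -> nothing to say
                break
            out.append(random.choice(options))
        return " ".join(out)

    table = train("the model learns patterns from the data it is given")
    print(generate(table, "the"))    # recombines patterns it has seen
    print(generate(table, "robot"))  # word absent from training -> just echoes "robot"

The only "intelligence" in there is counting what followed what. Real models are enormously bigger, but the dependence on the training data is the same.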
     

jamil
Quote:
That would at most be a side effect. The more data you curate for a model, the more biased it becomes. For some models, like voice conversion models, the more data you curate, the better the result will likely be. For large language models, like ChatGPT and such, the opposite is true: the more biased you make it, the less useful it is. The primary reason for censorship is Divide and Conquer.

I'm not sure if this is what you're saying… more data doesn't mean more bias unless the additional data itself is biased. The best training data is the most diverse, highest-quality data.

Let's say you're a software engineer working with a fairly new code library, and you want to know how to do a certain thing with it. If the AI's training data came exclusively from stackoverflow, the AI would produce bad advice about as often as stackoverflow does.

Bottom line: AI is only as accurate as the quality of the data it trains on. Another example: if it had trained only on official narratives about COVID, it would tell you to follow the science, where TheScience™ is prescribed by people who stood to make a lot of money from it.
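A quick way to see why sheer volume doesn't fix bias (a toy sketch with made-up numbers, not a real training pipeline): a large sample from a skewed source gives a more confident wrong answer than a small sample from a diverse one.

    import random
    random.seed(0)

    # Hypothetical population: exactly half the items have the property we care about.
    population = [1] * 5000 + [0] * 5000

    def estimate(sample):
        """Estimated rate of the property in the sample."""
        return sum(sample) / len(sample)

    # Small but representative sample drawn from the whole population.
    diverse = random.sample(population, 200)

    # Ten times more data, but drawn mostly from the "1" half (a biased source).
    skewed = random.sample(population[:5000], 1800) + random.sample(population[5000:], 200)

    print("true rate:            0.50")
    print(f"200 diverse samples:  {estimate(diverse):.2f}")  # lands near 0.50
    print(f"2000 skewed samples:  {estimate(skewed):.2f}")   # 0.90, confidently wrong

Same mechanism as the stackoverflow and COVID examples above: the model, like the estimate, faithfully reflects whatever slant its data source had.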
     