• workerONE@lemmy.world · edited · 10 months ago

    If you put AI in a robot, one that has been trained to walk and plug itself in for electricity, how are you going to be sure it doesn’t kill people? I mean, this robot arm didn’t know it had killed someone; it didn’t know not to kill someone. It’s going to be worse when robots are walking around among people. There was a little crab-shaped robot with AI controlling it: they chopped off one of its legs to see how it would react, and later they noticed it was identifying people’s faces and tracking them. They had never taught this thing what a person was or what one looked like; it had figured that out by itself, and probably knew a person had cut off its leg too.

    • Neato@kbin.social · 10 months ago

      The same way you make anything safe: design and testing. If you’re asking about the actual mechanisms that prevent accidental human contact, look into the tech behind automatic doors, elevator doors, and some of the “driverless” car systems out now. But in essence: a lot of IR and pressure sensors, with pattern matching required before any motion above a certain power/resistance threshold is allowed. Essentially, when a machine hasn’t determined it’s “on the job” within a very narrow, well-tested programming margin, it should be stoppable by a very small amount of force or sensory input.
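
      Very roughly, that “stoppable unless verifiably on the job” rule might look like this toy Python sketch; the thresholds, names, and readings here are all made up for illustration, not any real robot API:

      ```python
      # Toy sketch of the interlock idea above. The force limits and the
      # notion of a verified "on the job" state are illustrative assumptions.

      FORCE_LIMIT_IDLE_N = 5.0      # trip on a light touch when not on the job
      FORCE_LIMIT_WORKING_N = 50.0  # allow expected task forces while working

      def may_keep_moving(on_the_job: bool, force_n: float, human_nearby: bool) -> bool:
          """Return False to trigger an emergency stop."""
          if human_nearby and not on_the_job:
              return False  # an idle arm stops for any IR-detected presence
          limit = FORCE_LIMIT_WORKING_N if on_the_job else FORCE_LIMIT_IDLE_N
          return force_n <= limit

      # Example readings: (on_the_job, measured force in newtons, IR presence)
      for state in [(True, 30.0, False), (False, 2.0, False), (False, 2.0, True)]:
          print(state, "->", "keep moving" if may_keep_moving(*state) else "E-STOP")
      ```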

      But in the end, as with most things, the actual results will be people doing their best, accidents happening, companies getting sued, and legislation or standards being written.

    • smashboy@kbin.social · 10 months ago

      I can’t find anything about that spontaneous facial recognition/tracking. Do you have a source for that?