• ApeNo1@lemm.ee · 1 year ago

    I won’t comment on the ethical pros and cons of this being deployed in airports, but from a systems perspective it needs to be much higher than 97%. LAX processes about 240,000 travellers a day, so a 3% error rate translates to over 7,000 travellers a day being incorrectly processed. What you want is closer to 99.9%, so the errors are in the hundreds and can reasonably be corrected with human intervention. That may sound like an easy push, but anyone experienced in training AI/ML systems knows it is still a fair bit of work: every single percent increase in accuracy is significant.
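    The arithmetic above can be sketched as a quick back-of-envelope check (the 240,000/day figure is the commenter's estimate for LAX, not an official statistic):

```python
# Travelers misprocessed per day at a given match accuracy,
# using the comment's ~240,000 travellers/day estimate for LAX.
def daily_errors(accuracy: float, daily_travelers: int = 240_000) -> int:
    """Travellers incorrectly processed per day at the given accuracy."""
    return round(daily_travelers * (1 - accuracy))

for acc in (0.97, 0.99, 0.999):
    print(f"{acc:.1%} accuracy -> ~{daily_errors(acc):,} misprocessed/day")
# 97.0% accuracy -> ~7,200 misprocessed/day
# 99.9% accuracy -> ~240 misprocessed/day
```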

    • cyd@lemmy.world · 1 year ago

      I imagine the vast majority of those 3% error cases get rerouted to a human border official for handling. This is basically a sanity check, and sounds reasonable. The use of AI in the first instance shouldn’t be making things worse, since AI is already superior to humans at facial recognition. I wouldn’t be surprised if normal border officials have a significantly higher than 3% error rate in face matching.

      • Riskable@programming.dev · 1 year ago

        > rerouted to a human border official for handling.

        No, the person who was misidentified will be routed to a human TSA agent for harassment. Every single time they fly.

      • mightyfoolish@lemmy.world · 1 year ago

        Will this be like the “random” checks if your complexion is olive or darker or if your name seems kind of funny?

      • ApeNo1@lemm.ee · 1 year ago

        100%, this will already be better than humans. But similar to autonomous driving, the goal should be better-than-human performance; otherwise we see vendors doing just enough to achieve the simple goal of saving costs or making sales. I would hope they run this in parallel, with the system flagging anything below a confidence threshold for human scrutiny and comparison. Analysing the human decisions in parallel with the AI decisions helps refine the models and also gives some visibility into the current accuracy of human-only checks. This training and review aspect is a lot of work.
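        The threshold-based routing described above might look something like this minimal sketch; the threshold value, class, and function names are all illustrative assumptions, not from any real deployment:

```python
# Hypothetical sketch of confidence-threshold routing: matches below
# the threshold are flagged for human review. Names and the threshold
# value are illustrative only.
from dataclasses import dataclass


@dataclass
class MatchResult:
    traveler_id: str
    confidence: float  # model's face-match confidence, 0.0 to 1.0


CONFIDENCE_THRESHOLD = 0.999  # assumed policy value, not a real figure


def route(result: MatchResult) -> str:
    """Send low-confidence matches to a human officer."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return "automated-clear"
    return "human-review"


# Logging both the AI decision and the eventual human decision in
# parallel yields labeled data for refining the model, as the comment
# suggests.
print(route(MatchResult("T1", 0.9995)))  # automated-clear
print(route(MatchResult("T2", 0.42)))    # human-review
```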

    • Riskable@programming.dev · 1 year ago

      I also want to know the statistics regarding people in makeup. With a bit of makeup, I bet you could get this system to think you’re whoever’s photo is on your ID.

      Camera-based systems are usually quite easy to fool so it could result in a seriously false sense of security.

      • Piecemakers@lemmy.world · 1 year ago

        A “false sense of security” has always been TSA’s mission statement, so that’s on-brand AF.