A new law went into effect in New York City on Wednesday that requires any business using A.I. in its hiring to submit that software to an audit to prove that it does not result in racist or sexist…

  • elscallr@kbin.social · 1 year ago

    I’m not talking about the people who might be good for the job. I’m talking about the 600 dime-a-dozen script kiddies who apply for high-level DevOps jobs or senior engineer roles. A model can reject those just fine, and it takes the load off the human ops people who have to screen them.

    • Machinist3359@kbin.social · 1 year ago

      That’s simply not how hiring works at most institutions.

      For high-traffic, lower-level positions, hiring managers resent being handed these AI tools. You wind up with the candidates who are best at gaming the AI, not the most qualified. Their previous method, basic sorting and taking the first acceptable candidate (rather than hunting for the absolute best), is a much more efficient use of their time (a rough sketch of the difference follows at the end of this comment).

      For higher-level positions, networking plays a much more significant role. Since the stakes of that decision are higher, companies are also less likely to entrust it to an AI.

      Screening out unserious applicants is easier than you think, and it can be addressed without a black box of potential lawsuits.
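
      To make the contrast concrete, here is a minimal sketch of the “first acceptable” (satisficing) screen versus ranking the whole pool. Everything in it is a hypothetical illustration; the `Applicant` fields and the `min_years` bar are made up for the example, not anyone’s actual hiring pipeline.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Applicant:
          name: str
          years_experience: int
          has_required_skills: bool

      def first_acceptable(applicants, min_years=3):
          """Satisficing: stop at the first applicant who clears the bar.
          Worst case scans everyone, but typically stops far earlier."""
          for a in applicants:
              if a.has_required_skills and a.years_experience >= min_years:
                  return a
          return None

      def absolute_best(applicants, min_years=3):
          """Optimizing: rank every qualified applicant; always scans
          the full list before deciding."""
          qualified = [a for a in applicants
                       if a.has_required_skills and a.years_experience >= min_years]
          return max(qualified, key=lambda a: a.years_experience, default=None)

      pool = [
          Applicant("A", 1, True),
          Applicant("B", 4, True),   # first_acceptable stops here
          Applicant("C", 9, True),   # absolute_best only finds this after a full scan
      ]
      print(first_acceptable(pool))  # Applicant(name='B', ...)
      print(absolute_best(pool))     # Applicant(name='C', ...)
      ```

      The point of the sketch: the satisficing screen spends time proportional to how quickly an acceptable candidate appears, while ranking the pool always pays for a full pass, which is why it’s often a worse use of a hiring manager’s time.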