Chronotope’s Twitter Archive—№ 100,784

1. …in reply to @WillOremus
   @WillOremus @undersequoias I think because machine learning takes time to train and they never bothered to train it against white meme-throwing terrorists because they have a cultural blind spot.
2. …in reply to @Chronotope
   @WillOremus @undersequoias It's like the face algorithm that couldn't see black people, or the Google one that mis-classified them as monkeys and wasn't fixable, or Google surfacing Holocaust denial in search. These are all problems that surface from a lack of diversity & forward thinking in the ML group.
3. …in reply to @Chronotope
   @WillOremus @undersequoias The model is wrong, their incentive structure is wrong, and their lack of diverse employees in high enough positions to impact project planning and testing creates an enormous blind spot in their QA.
4. …in reply to @Chronotope
   @WillOremus @undersequoias Together it creates a huge blind spot that allows their ML to learn to accept white nationalism and misogyny, letting both bloom on their platform, and it is impossible to unwind that from their algorithms.
5. …in reply to @Chronotope
   @WillOremus @undersequoias I mean, think about the implications of misclassifying black people. Not only was no one of color on the QA team, the engineering team, or their group of testers, none of those people even had friends who were black! That's crazy.
6. …in reply to @Chronotope
   @WillOremus @undersequoias That's a huge blind spot, and it no doubt has to do with why they can keep ISIS mostly off the platform but not 8chan terrorists. That's even assuming that there aren't people in the loop who are unchecked white supremacists, a whole extra layer of concern.
