-
WillOremus undersequoias I think it's because machine learning takes time to train, and they never bothered to train it against white meme-throwing terrorists. They have a cultural blind spot.
-
WillOremus undersequoias It's like the facial recognition algorithms that couldn't see black people, or the Google one that misclassified them as gorillas and couldn't be fixed. Or Google surfacing Holocaust denial in search. These are all problems that stem from a lack of diversity & forward thinking in the ML group.
-
WillOremus undersequoias The model is wrong, their incentive structure is wrong, and their lack of diverse employees in positions senior enough to influence project planning and testing creates an enormous blind spot in their QA.
-
WillOremus undersequoias Together it creates a huge blind spot that lets their ML models accept white nationalism and misogyny, lets it bloom on their platform, and makes it impossible to unwind from their algorithms.
-
WillOremus undersequoias I mean, think about the implications of misclassifying black people. Not only was there no one of color on the QA team, the engineering team, or their group of testers, none of those people even had friends who were black! That's crazy.
-
WillOremus undersequoias That's a huge blind spot, and it no doubt has to do with why they can keep ISIS mostly off the platform but not 8chan terrorists. And that's assuming there aren't people in the loop who are unchecked white supremacists, which is a whole extra layer of concern.