Chronotope’s Twitter Archive—№ 88,929

1. At the end of the day all systems problems are caused by human problems, and there's no system-design solution that can solve for human problems unless the system makes it its goal to educate or ignore particular swaths of humanity.
2. …in reply to @Chronotope
   That's the biggest vulnerability of targeting systems. They are built on top of human data and human assumptions. Algorithms aren't racist or sexist, and they don't hold political views, but their input can be. Targeting systems, by their nature, magnify human assumptions...
3. …in reply to @Chronotope
   I have a friend who is a scientist, young and also a woman. Targeting systems assume her to be a middle-aged white male, because that's what they perceive scientists to be... (the first sketch after the thread illustrates this kind of majority-label inference)
4. …in reply to @Chronotope
   Is that the fault of the programmer? Partly. But it is also the fault of: the thousands of humans who input those assumptions through tracked actions; the complementary tagging systems that tag science equipment as 'for middle-aged white men'; ...
5. …in reply to @Chronotope
   ...the article writers and publication executives who tag their content as such (or the systems *they* use that do so automatically based on previous input) are also at fault. There is an ecosystem of human failure on which targeting systems are built, and there's no way to avoid that.
6. …in reply to @Chronotope
   There's no such thing as machine intelligence in our current technological world, there is only machine amplification of human conceits. And most human assumptions about other people are wrong.
7. …in reply to @Chronotope
   The fundamental problem with all targeting systems is the assumption that automated information tracking and collation at scale makes them any more accurate than ad agencies getting 20 people in a room and asking them questions in the '50s.
8. …in reply to @Chronotope
   There is no safe user-data targeting system b/c they all assume: 1. humans understand each other; 2. logical assumptions can be made about one human from the actions of another. The more complex the system, the more ingrained these assumptions are. And the less accurate.
9. …in reply to @Chronotope
   I'm not a huge Gladwell fan, but outliers definitely exist. The higher the scale of data collected, the more those outliers unduly influence or escape the system (depending on the programmer's assumptions). As user-data collection systems scale up, they inevitably become less accurate.
10. …in reply to @Chronotope
    But our modern world believes the opposite, and it uses those systems not just to make decisions but to push policy and practice, which takes those dumb systems and their amplifications of stupid human assumptions and re-imposes them back onto the world.
11. …in reply to @Chronotope
    B/c user data amplifies inequalities, blurs differences, and cannot account for individuals, any system, as it scales up, inevitably becomes more of a tool of totalitarianism. Some of these systems are accidentally totalitarian, which is even more frightening than the purposeful versions.
12. …in reply to @Chronotope
    User-data tracking systems cannot account for the fact that we are not all the same, or that large groups of us are not all the same. Is it any wonder they are a natural tool of fascistic nationalists? Targeting systems at scale--by necessity--must make the same assumptions as Nazis...
13. …in reply to @Chronotope
    Which is to say: they assume only particular types of people belong to particular groups, and they enforce those borders strongly within their system of assumptions and then by reiterating those assumptions out into the real world.
14. …in reply to @Chronotope
    Systems like YouTube can't escape amplifying conspiracy theories and racism, because the assumption on which the system is built is that if anyone with particular characteristics likes those videos, everyone with those characteristics does...
15. …in reply to @Chronotope
    And because these things amplify, more characteristics get added to the data set for targeting (the second sketch below plays out that loop). My point is that user-data targeting systems aren't dangerous because they amplify Nazis. They're dangerous because they assume we're all Nazis.
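
Two toy sketches of the mechanisms the thread describes. First, the majority-label inference from tweets 3 and 8: a hypothetical fragment (invented names and labels, not any real ad platform's code) in which a user's demographic is inferred from the majority label of other users who performed the same tracked action.

```python
# Hypothetical sketch (invented names, not any real platform's code):
# infer a user's demographic from the majority label of other users
# who performed the same tracked action.
from collections import Counter

# Toy "human data": past buyers as labeled by trackers and tag vendors.
# The labels encode human assumptions, not ground truth.
observed_buyers = [
    {"action": "bought_microscope", "label": "middle-aged white male"},
    {"action": "bought_microscope", "label": "middle-aged white male"},
    {"action": "bought_microscope", "label": "middle-aged white male"},
    {"action": "bought_microscope", "label": "young woman scientist"},
]

def infer_label(action: str) -> str:
    """Return the most common label among users who share the action,
    i.e. assume one human from the actions of others."""
    labels = [u["label"] for u in observed_buyers if u["action"] == action]
    return Counter(labels).most_common(1)[0][0]

# The young scientist buys a microscope and is filed under the
# majority assumption, regardless of who she actually is.
print(infer_label("bought_microscope"))  # -> middle-aged white male
```

The lookup can only ever return the assumption already baked into the tracked data; the individual's real identity never enters the system.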
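Second, the amplification loop from the last two tweets. Again a hypothetical sketch under invented assumptions: a made-up feedback rule in which any engagement by anyone sharing a targeted trait folds all of that person's traits back into the targeting set, so the "group" the system believes in keeps widening.

```python
# Hypothetical sketch of the amplification loop: if *anyone* with a
# targeted trait engages with an item, target *everyone* sharing any
# of that person's traits on the next pass.
def amplify(targeting_traits: set[str], users: list[dict], item: str) -> set[str]:
    """One feedback pass: show `item` to every user sharing a targeted
    trait, then fold all traits of engaged users back into the set."""
    new_traits = set(targeting_traits)
    for user in users:
        if user["traits"] & new_traits:      # "people like you liked this"
            user["shown"].add(item)
            if user["clicks_anything"]:      # any engagement at all counts
                new_traits |= user["traits"] # the group definition widens
    return new_traits

users = [
    {"traits": {"rural", "hunting"}, "shown": set(), "clicks_anything": True},
    {"traits": {"rural", "gardening"}, "shown": set(), "clicks_anything": True},
    {"traits": {"urban", "gardening"}, "shown": set(), "clicks_anything": False},
]

traits = {"hunting"}             # one person with this trait clicked once
for _ in range(2):               # two feedback passes
    traits = amplify(traits, users, "conspiracy_video")
print(traits)                    # grows: hunting -> rural -> gardening
```

After two passes the targeting set has grown from one trait to every trait even loosely connected to it, and the third user is being shown the video too: the toy system now behaves as if everyone fits the profile.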

