Frank Pasquale – The Black Box Society – chapter 1 (pp. 1-11)
This chapter argues that our privacy is essentially gone. Machines and devices track our every move: what we look at, where we spend our money, etc. Cameras are everywhere now, hidden in plain sight (think of phones), constantly keeping tabs on us. Each of us has a quantified data file that companies use to 'define' us.
Cathy O'Neil – The era of blind faith in big data must end (TED Talk, 13m)
O'Neil talks about how algorithms usually aren't built to be biased, but end up biased by accident. For example, an algorithm built to hire people who resemble the successful people at Fox News would end up hiring only men, because that's who had succeeded there in the past. Algorithms are based on the past, and if our world were perfect that would be fine, but our world isn't perfect. So we need to work really hard to create algorithms that don't accidentally carry biases against certain groups of people.
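To make that mechanism concrete for myself, here is a minimal sketch (not from the talk; the features, data, and numbers are all made up) of how a model trained on biased past hiring decisions reproduces the bias:

```python
# Hypothetical illustration of O'Neil's point: train a model on past
# hiring decisions that were biased, and it learns the bias.
# Everything here is synthetic, invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuine skill score. Feature 1: gender (1 = male, 0 = female).
skill = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Historical labels: in the past, only skilled *men* were hired.
# This is the biased "people who succeeded here" training signal.
hired = ((skill > 0) & (gender == 1)).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in gender:
candidates = np.array([[1.0, 1.0], [1.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])
# The male candidate gets a much higher predicted "hire" probability,
# even though nothing about his actual skill is different.
```

Nobody programmed a rule saying "prefer men"; the model just faithfully learned the pattern in the past decisions, which is exactly the problem O'Neil is describing.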
Virginia Eubanks – Automating Inequality (talk, 45m)
Eubanks extends a discussion we touched on last week: the idea of algorithms that are nominally fair but in practice stereotype, being more likely to help one group of people than another. In her case, though, the focus is on poverty: how automated systems judge people based on their financial situation.
Janet Vertesi – My Experiment Opting Out of Big Data… (Time, short article)
This was a very interesting article. I'd heard about the Target story (the retailer figuring out a teenager was pregnant from her purchases) before, but I had never thought about what it would be like to try to keep something like that from happening. It would be (and clearly proved to be) very hard to hide a pregnancy from the internet. I personally would have failed for sure: I never would have thought to use a different browser or to buy everything with gift cards (lol). It was pretty crazy, though, that her evasion tactics started getting her flagged for possible illicit activity. Privacy is hardly obtainable in this day and age.
David Walliams and Matt Lucas – Computer Says No (comedy skit, 2m)
This skit was very funny, and it shows exactly why we can't leave everything up to computers: they make mistakes, and this was a good example of how big those mistakes can be. I thought it was particularly interesting that the sketch puts a human in the middle to communicate with, especially the part at the end where the woman asks the worker if she can talk to someone else, to which the worker replies "I could but...". I just found it interesting that they took this approach rather than having her talk directly to a machine. But it was very effective either way!
Overall Question: Do you really think it's possible to develop a functioning algorithm that isn't biased in some way? Or would that only be possible if someone (or multiple people) monitored the algorithm constantly?