Big Data, Algorithms, Algorithmic Transparency

The Black Box Society

“But what if the ‘knowledge problem’ is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses?”

This is exactly what is happening, and what has been happening for probably around 80 years. This is the bustling American capitalist landscape of infinite gains and infinite returns on investment. Disinformation and misinformation are critical to avoiding government intervention. The language in these information protection laws does a lot of good for the individual, without a doubt. But it does not translate to companies very well, and it translates to mega-corporations terribly, in the sense that these companies act more like countries in their input, output, and control. The magnitudes of power are vastly different from the lemonade stand or the local burger chain. When this is scaled up internationally and connected to the tree of parent companies, these systems exist more as monsters than as arrangements of workers and their superiors.

The era of blind faith in big data must end

There’s a considerable lack of oversight in these massive power structures, in a way that’s frankly terrifying. These algorithms streamline and reinforce our tendencies, as they have been and as they are.

“There’s a lot of money to be made in unfairness.”

Why don’t algorithms account for traits related to failure, checking for bias or fallacy? The reason this isn’t done is likely that it doesn’t garner maximum results. It’s not sexy; it doesn’t sizzle in an investors’ meeting. There’s nothing exciting about stats that don’t impress or don’t coincide with expectations.

Automating Inequality

Systems of oppression are not abstract frameworks built by linking together ideas and individual phenomena; these were literal systems with an input and an “empathy override” for the actual intended output. Not that these systems are created with this awful outcome in mind; however, those with control over them know very well what the outcomes are and how they interact with other systems and dynamics. Not much more to say on this other than to point to the school-to-prison pipeline of the US education system. These systems aren’t just reinforced through the feedback loop; they’re reinforced because someone chose not to stop them and enjoyed the profits too much to care about the harm.

Opting Out of Big Data

Algorithms focus so heavily on behavioral patterns that once an individual decides not to participate and obscures their footprint, they must be obscuring it for a reason. There must be a goal in the user’s mind that can’t be achieved without being inconspicuous. Unless people would simply like to not be watched. Unless it’s that simple. The concept of “having nothing to hide” is a dangerous idea that has spread across how data and surveillance are treated, where the rights of the individual dissolve while the rights of corporations balloon. Capitalist dystopia, here we come.

The Computer Says No

The machine demands data. It is all-knowing; it is correct. It has a 98.961% rate of success, as defined by a group of rich people who directly benefit from the machine. No, there’s nothing wrong with this picture.

Big Data / Algorithms / Algorithmic Transparency

Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

I think it’s interesting to see how Vertesi was actually at a financial disadvantage when hiding her pregnancy. Loyalty reward cards were excluded from her options when buying supplies. She couldn’t arrange a baby shower gift list. Her husband even had to buy multiple gift cards to buy a stroller without leaving a trace of their card information. In this way, there are institutional consequences for a citizen who chooses to live with privacy. Vertesi even points out how she felt her reputation as a person, friend, and family member was being compromised. This adds to the pressure that the US pushes onto its citizens. Because of the social media element that has surfaced in recent decades, documenting your identity online is socially expected of you. Social media sets you up to self-document your life for big data collectors, whether we know it or not. It’s set up to feel like a personal choice (and in a way it is, because social media is communicative and collaborative), but it’s only when we stop conforming that we face repercussions.

This whole idea makes me think about what it would look like to do this experiment long term. Is having privacy bad? Why is privacy so interconnected with secrecy? What about the people who don’t have access to online platforms, credit cards, etc., the people for whom these ways of documenting aren’t accessible at all? How does this play into institutional aid and the acknowledgment of these communities?

Virginia Eubanks – Automating Inequality (talk, 45m)

Why base the future on the past? And how do we integrate basic human rights into our society? How are we investigating citizens, and how are we categorizing and organizing them into a paper society? How is a person’s integrity compromised through the loss of their privacy?

This keynote made me think about the effects of a data-collecting society. It also raises the question of how I personally am being documented and how that documentation changes my life. How would this compare for someone with a different identity? And how comfortable am I with this?

Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This reminds me of last week’s articles, which brought up issues of technology’s reliability. I think this skit does a great job of pointing out how easily people can conform to what technology says. There’s not a hint of doubt in the receptionist’s mind that the computer data just might be wrong. It’s gotten to the point where the computer has higher credibility than the patient, who is the most direct source of information. This builds distrust from human to human as trust in tech rises. Although technology is made by humans, the visual element of the machinery and the manual input necessary to activate a response make its answer feel somehow otherworldly, elevated. It’s suddenly disconnected from humans and their inaccuracy. People can constantly be wrong, so it’s really easy to rely on machinery to tell the truth. But how is this working against us as a collective society? If we can’t even trust the direct source because it’s human, how are we disconnecting ourselves from having loyalty, trust, and commitment to one another? On a larger scale, how is big data replacing our voices in places as important as elections, the census, and social/government profiling?

Cathy O’Neil – The era of blind faith in big data must end (TED Talk, 13m)

Cathy O’Neil develops a lot of the questions I had above in her TED Talk. Algorithms are biased in how they define success; each one has its own version of what success is. The people who create these systems are biased. It’s all just a mindset, dressed up with numbers and equations to look like a quantifiable value. O’Neil brings up how numbers relate to math, and how math is intimidating, which ultimately discourages us from questioning algorithms. This sets up a blind faith in big data out of fear of being wrong.

Trusting these things so blindly leads to big repercussions. Teachers can lose their jobs because an algorithm says they are not doing well. Suddenly these data collectors are relied on so heavily, and if there is any doubt, it’s shut down automatically. Not even letting people know about the algorithm pushes the rhetoric that the general public is incapable of understanding. To deny people access to the system denies them an education; assuming they’re incapable of understanding it assumes they can’t ever understand the truth. This further disconnects people from tech and elevates big data to a higher position than the average citizen.

Random thoughts:

“Silent but deadly,” “weapons of math destruction” –> private companies sell to government, private power, “there’s a lot of money to be made in unfairness”

Standardized testing = lazy, biased, mathematical, regressive, automates the status quo

DNA testing, 23andMe

How do we support the ones being scrutinized? How do we protect the people hidden, lowered, and oppressed?

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)

“Real” secrecy, legal secrecy, and obfuscation:

Real secrecy establishes a barrier between hidden content and unauthorized access to it. Legal secrecy obliges those privy to certain information to keep it secret; a bank employee is obliged both by statutory authority and by terms of employment not to reveal customers’ balances to his buddies. Obfuscation involves deliberate attempts at concealment when secrecy has been compromised.

This idea that secrecy is so private and unbreakable is interesting. I keep thinking back to the visual of locking your front door every day. Person A can lock their door every day and use multiple chains, locks, and doors. They can lock their car, lock their phone, and set up a password for their laptop. But at the end of the day, our hardware is just hardware. Person B can show up, and with enough intention and supplies, they can break into the house, phone, laptop, car, etc. This habitual routine of locking our stuff offers only a very limited extent of protection. If anything, it’s a false sense of protection that makes us think we are safe, that we have had some agency to protect ourselves, and that we have ownership of our machinery. But at the end of the day, it’s breakable.

So when I think about these large companies that hold so much data from the people, that oppress communities through biased data collection, there must be a way to counter that, right? Surely they have multiple layers of protection unlike the ones they give the public (power, law, government, money, etc.), but at the end of the day it’s machinery. Is it possible to have absolute privacy? And what does privacy even mean? How hidden does information have to be to be considered “private”? Does that even matter? Is it more important to look at the consequences instead?

Big Data / Algorithms / Algorithmic Transparency (26 Feb):

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)
Cathy O’Neil – The era of blind faith in big data must end (TED Talk, 13m)

O’Neil talks about how there are clear winners and losers determined by algorithms that we have “blind faith” in. We train these algorithms to figure out what leads to success. I thought it was interesting how she described algorithms as “opinions embedded in code,” which leads to the idea that algorithms are biased, which is why it’s so problematic to blindly believe in data and algorithms. People don’t try to understand the algorithm because it’s “math” and most people “won’t want to understand it / won’t be able to understand it.” It’s kind of scary to me how she talks about algorithms being silently dangerous; it makes me think of how many algorithms exist today that are super biased and dangerous. What would the world be like if we just removed computer algorithms?

Virginia Eubanks – Automating Inequality (talk, 45m)

Eubanks talks about studying inequality through data. She talks about how the 1820s solution to alleviate this kind of inequality was to build a bunch of public poorhouses whose residents were required to give up their established rights (the right to vote, hold office, marry, and maintain family integrity), because “poor children can be rehabilitated by interacting with the wealthy.” This didn’t really work; people kept dying. She was talking about a feedback loop of inequality, which I feel like I discussed a bit in last week’s reading. I don’t think a world without inequality exists.

Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

This article discusses Vertesi’s experiment in avoiding cookies and tracking data while she was pregnant. I found it super interesting how she said that a single pregnant woman’s data is SUPER valuable, worth as much as 200 ordinary people’s, because pregnant women have so much buying power and are so vulnerable to advertisements. She had to do several things, like not shop at certain places and delete certain friends on Facebook, making her look like she was engaged in illicit activity. I thought this was super interesting because it says something about how we as a society value convenience and are willing to give up a lot of freedoms and buy into big data to make our lives easier, without really thinking too much about the consequences. It almost feels like the system is designed to make our lives hard if we don’t partake in it.

Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This skit shows a mother and child going to a doctor’s office so the daughter can have an operation. The receptionist asks super simple questions and only accepts straight answers, but the mother and child give relative, roundabout answers. Taking that together with the title made me think this is how people interact with computers when filling out forms, because computers only take a certain kind of answer in a specific format.

Junior Ideas

I want to focus on how our current location is being monitored and how technology somewhat determines what you do. My main example would be how your avatar or “Bitmoji” on the Snapchat map changes according to what it is assuming you are doing in that location. For example, when you are in a dining hall, it says you are eating. When you are listening to music, the avatar has headphones on.

I personally find it interesting yet disturbing that this is even a feature. I remember looking at the map and seeing friends’ Bitmojis sleeping. How far does this go before it goes too far? The social media platform does not have to be Snapchat; however, it does have one of the best visual representations of my idea: playing around with what my phone thinks I’m doing at a location, as opposed to what I’m actually doing in real life.
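
As a sketch of how this kind of feature might work under the hood, here’s a toy example in JavaScript. This is my guess at the kind of rule involved, not Snapchat’s actual logic; the place names, coordinates, and thresholds are all made up:

```javascript
// Hypothetical reduction of location-based status inference:
// match your coordinates against known places and pick a status.
const knownPlaces = [
  { name: 'dining hall', lat: 40.3467, lon: -74.6553, status: 'eating' },
  { name: 'library',     lat: 40.3497, lon: -74.6574, status: 'studying' },
];

function inferStatus(lat, lon, hour) {
  if (hour >= 1 && hour < 7) return 'sleeping'; // time-of-day guess
  for (const p of knownPlaces) {
    // Crude proximity check, roughly 100 m expressed in degrees.
    if (Math.abs(lat - p.lat) < 0.001 && Math.abs(lon - p.lon) < 0.001) {
      return p.status;
    }
  }
  return 'just chilling';
}

console.log(inferStatus(40.3467, -74.6553, 13)); // "eating"
console.log(inferStatus(40.3467, -74.6553, 3));  // "sleeping"
```

What unsettles me is visible even in this tiny version: the app never asks what you are doing, it just asserts a status from a location and a clock, and the gap between the assertion and reality is exactly what I want to play with.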

Junior Project Ideas

I’ve been caught between two basic ideas that I want to do:

A) A gifset exploring corporate attempts to take advantage of meme culture and social justice terminology

This would be titled “Dances I Do When The Anxiety Hits,” and it’s going to be a 3×4 grid of 12 or so gifs of a person doing different dances. The character will be animated and doing any sort of silly dance, but their face will be blank and deadpan.

This is gonna look hilarious; just imagine that blank face staring straight ahead on top of a goofy dancing body.


The point of it all, however, is to show something that is just begging to be made into a meme, with an eye-catching and #relatable title about mental health, in order to point out how this sort of content can be used as a clever marketing strategy by corporations. What if, under one of the gifs, you could see a McDonald’s logo? What would that gifset be saying? I want to explore this idea of the friendly corporation that “gets” youth culture, not out of care for its customers’ actual mental health, but out of a desire to encourage people to buy more stuff.

B) A short film on privacy infringement

This one would focus more on how advertisers try to get your information, by setting up a dialogue between two characters: one a consumer/social media user, and the other a sort of personification of an algorithm trying to pry data out of the user. The resulting video would be a comedy bit with the algorithm devising all sorts of leading questions to get information from the user. The point is to illustrate, in a more human way, how easy it is for any number of companies to have your information and predict what you’ll do, in a way that isn’t as dark or edgy as your typical Black Mirror setup.

This will require a script, a storyboard, actors, at least three different sets, and hopefully a person to help me film, not to mention the hours of post-production work, so I don’t think this project is as likely to get off the ground as the other project, unless I manage to produce a script and half a storyboard by next week.

BFA Project Update (26 Feb)

I’m homing in on making a big ol’ immersive experience… I just don’t know what it’s going to be like yet. A lot of this depends on the space that I end up using. Having a whole room would be a different project than having a wall, and having a small space would be a different project than having a big space, or being outside.

While I’m waiting on confirmation of the space, I’m playing and experimenting with different methods of interaction, currently using only p5.js, although I’m open to incorporating other methods and mediums.

My current experiments with activating oscillating tones:

pianoPage Iteration 001

pianoPage Iteration 002
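
For a sense of the mechanics, here’s a minimal sketch of the kind of interaction I’m testing. This is an illustrative reduction rather than the actual pianoPage code, and it assumes the p5.sound library is loaded alongside p5.js:

```javascript
// Minimal p5.js + p5.sound sketch: hold any key to play a sine tone,
// move the mouse left/right to bend the pitch, release to stop.
let osc;
let playing = false;

function setup() {
  createCanvas(400, 200);
  // Frequency and waveform are placeholders; the browser's audio context
  // can only start in response to a user gesture (the key press below).
  osc = new p5.Oscillator(440, 'sine');
}

function draw() {
  background(playing ? 40 : 220);
  if (playing) {
    // Map horizontal mouse position onto a frequency range (A3 to A5).
    const freq = map(mouseX, 0, width, 220, 880, true);
    osc.freq(freq, 0.05); // short ramp time smooths the glide
  }
}

function keyPressed() {
  if (!playing) {
    osc.start();
    osc.amp(0.5, 0.1); // fade in to avoid clicks
    playing = true;
  }
}

function keyReleased() {
  osc.amp(0, 0.2); // fade out
  osc.stop(0.2);
  playing = false;
}
```

Starting the oscillator inside a key event also conveniently sidesteps browser autoplay restrictions, which may be related to some of the crashes I’ve been seeing.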

I also want to add video-capture aspects to my project, and even tackle this live-coding thing! My browser always seems to crash when I try live-coding. I haven’t tried running things locally yet, but that’s on my list of things to try once I get the hang of the osc tones.

Big Data / Algorithms / Algorithmic Transparency

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)

This chapter talks about our privacy and how there is none. Machines and devices are tracking our every move: what we look at, where we spend our money, etc. Cameras are everywhere now, hidden in plain sight (think of phones), constantly keeping tabs on us. We each have a quantified data file that companies use to ‘define’ us.


Cathy O’Neil – The era of blind faith in big data must end (TED Talk, 13m)

She talks about how algorithms often aren’t built to have biases but accidentally acquire them. For example, building a hiring algorithm on the people who were successful at Fox would mean that only men get hired. Algorithms are based on the past, and if our world were perfect that would be fine, but our world isn’t perfect. So we need to work really hard to create algorithms that won’t accidentally carry biases against certain people.
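
To make that mechanism concrete, here’s a toy sketch in JavaScript. The data and weights are entirely made up (not from the talk); it just shows how a naive scoring model fit to past hires absorbs the bias baked into those hires:

```javascript
// Toy illustration with made-up data: a naive "hiring score" fit to past
// hires. Because past hires skewed toward group A, group membership becomes
// a proxy for success, even though it says nothing about ability.
const pastCandidates = [
  { yearsExp: 5, group: 'A', hired: true },
  { yearsExp: 6, group: 'A', hired: true },
  { yearsExp: 7, group: 'A', hired: true },
  { yearsExp: 6, group: 'B', hired: false }, // equally qualified, not hired
  { yearsExp: 8, group: 'B', hired: false },
];

// "Training": measure how often each group was hired in the past.
function groupHireRate(group) {
  const members = pastCandidates.filter(c => c.group === group);
  const hires = members.filter(c => c.hired);
  return hires.length / members.length;
}

// Naive scorer: a little weight on experience, a lot on the learned rate.
function score(candidate) {
  return 0.05 * candidate.yearsExp + groupHireRate(candidate.group);
}

console.log(score({ yearsExp: 6, group: 'A' })); // 1.3 -> "hire"
console.log(score({ yearsExp: 6, group: 'B' })); // 0.3 -> "reject", same resume
```

Nothing in that sketch is malicious; the skew arrives silently through the training data, which is exactly her point.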


Virginia Eubanks – Automating Inequality (talk, 45m)

Eubanks furthers a discussion we touched on last week: the idea of algorithms that seem fair while stereotyping, being more likely to help one group of people than another. In this case, though, she’s talking about a person’s income and poverty and how those are judged.


Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

This was a very interesting article. I’ve heard about the Target girl before, but I never thought about what it would be like to try and avoid something like that happening. It would be (and clearly proved to be) very hard to hide from the internet that you’re pregnant. I personally think I would have failed for sure; I never would have used a different browser or bought everything with gift cards (lol). It was pretty crazy, though, that she started to get flagged for possible illicit activity. Privacy is hardly obtainable in this day and age.


Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This skit was very funny and shows exactly why we can’t leave everything up to computers. They can accidentally make mistakes, and this was a good example of a possibly big one. I thought it was particularly interesting that this example chose to put a person in the middle of the communication, especially the part at the end where the woman asks the worker if she could talk to someone, to which the worker replies “I could, but…” I just found it interesting that they took this approach versus just talking with a robot or something. But it was very successful either way!

Overall question: Do you really think it’s possible to develop a functioning algorithm that isn’t biased in some way? Or would the only way for that to be possible be for someone (or multiple people) to monitor the algorithm constantly?

Junior Ideas

I imagine I would have a totally different life if I had 100,000 followers. I’m a person who almost doesn’t use social media. I don’t use Facebook, Instagram, or TikTok. I only kept WeChat for family connection, because I dislike how the software mixes my personal life and work/school together. I like the feeling of having a few close friends instead of a bunch of casual friends. I like face-to-face interaction instead of staring at a screen and sending texts all the time. But I know it would be very hard for me to give up these platforms if I had 100,000 followers (on whichever one).

I think many people study social media from an outsider’s perspective even while they are using these platforms, which can keep them relatively rational. Their scholar/artist disposition enables them to resist the craving for social media to a certain degree. But the majority of people do not have that kind of disposition, and I never underestimate the attraction and charm of social media platforms.

I’m thinking about how a change in social media status changes a person. What’s the difference between that and a change in real-life social status? How does social media shape a person? What role does it play?

A change in social status in the real world looks like this: people begin to say hello to you everywhere, in class and on the bus; people want to be in the same group as you; people like to invite you to parties. The typical queen-bee figure of high school movies. There are a lot of high school movies about a person who becomes popular but loses their original charm. What would that story look like on the internet?

What if I woke up one morning and found I had tons of fans greeting me, praising me, asking me out, inviting me to dinner, giving me presents? What if hundreds of people clicked “like” within seconds of every moment I posted?

What would happen?

Technology and Race Response

Laboring Infrastructure:

VR can be a tool for empathy. It allows us to step into the shoes of someone else’s experience. This lets us see from the perspective of many marginalized people, like refugees or people with disabilities. With this advantage, we can start to understand more deeply how people live, instead of always seeing just through our own lenses.

She explains this concept as “experiencing the non-human through virtual re-embodiment,” which speaks heavily to her point about “body transfer” and how VR lets us feel and be in the perspective of another person.