AI / Predictive Analytics / Recommendation Algorithms Response

Something is Wrong on The Internet:

… this was really weird to read …

The article delves into the weird world of kids’ YouTube. It’s a weird niche place, filled with crazy nursery rhymes and, quite frankly, loads of videos that make absolutely no fucking sense. Finger Family? What the hell is that even about? (AND why do they call the pinky the baby finger? Is that a thing?) This sort of automated, random, totally nonsensical generated content is actually watched by kids. If you look at the videos, what they show isn’t harmful, per se (somewhat), but the weirdness and randomness of how it’s shown makes them creepy over time. And if you’re a young child spending hours absorbing this weird content, it becomes a form of abuse.

I didn’t even know there was a whole “kids’ YouTube” sector. I think back to when I was young and watching YouTube videos. I remember watching Charlie the Unicorn, which is a weird and disturbing tale, yet was so funny at the time; but then again, I was probably 8 or 9 when I watched it, not 2 or 3. YouTube is a weird hole, but the holes I used to go down were never this weird…

Inadvertent Algorithmic Cruelty

We all get those “Your Year in Review” pop-ups from Facebook when the time comes. For some they are great, but for others who had a rough year, not so much. In this case, someone’s daughter died and Facebook highlighted her in a “Year in Review” video. It’s definitely wrong, but it’s also algorithmically generated, so Facebook doesn’t even know what it’s doing. The author poses some solutions to the issue, one being an opt-out, but even then I still think Facebook will find some way to remind you of some deep memory. That’s what it’s built to do…

How TikTok Holds Our Attention:

TikTok is the new Vine!!! But way more widespread, in my opinion, and hopefully it won’t die out as Vine did. I personally don’t have TikTok, but my roommate does, and they spend HOURS sitting in bed indulging in every random video they come across. I can’t lie, I often sit and watch TikTok with them, but oh man, what a hole it is.

People love TikTok because it’s “unusually fun,” as the article stated. It’s easy to get sucked into, and we love the nonsense that people are making. Like, why do I need to watch 30+ videos of different people doing the same dance to Say So by Doja Cat, and then, in turn, why do I feel the need to learn the dance myself?

We crave easy entertainment, and when you can endlessly scroll through content that is satisfying to watch, we keep feeding that craving.

Getting Approved By AI:

The fact that a computer, an algorithm, or a system should decide whether I’m worthy of a job is ridiculous. Picking employees is a tedious but very personal process for an employer, and letting technology decide for you takes the personality out of not only the employee + the employer but the whole system and workplace itself.

How can this affect the workplace? How does this pose a barrier to the employer + employee relationship?

What if I’m perfect for the job but the AI just can’t see that?

The Problem with Metrics is a Big Problem for AI:

The first thing I thought about when reading this is: what about students with learning disabilities? I feel fortunate to be an art student; I’m not constantly taking tests or doing loads of assignments that could be graded by a machine. Or, well, I’d hope that the work I’m doing isn’t graded by a machine…

Also, why would professors want to use automated essay software to grade their students’ work? Wouldn’t that take the personality out of the student + professor relationship, if the professor isn’t even reading what a student writes?

Ben, are you reading this? Or is a machine grading me right now?

AI / Predictive Analytics / Recommendation Algorithms

Something is wrong on the internet

I’ve been saying this for YEARS, but usually as a pointed gripe pertaining to some quite specific and honestly random, if not also menial, things. You know what? All those gripes were valid: the lengthy downtime for incredibly expensive services, the extremism on Facebook, and all the goddamn Spider-Man & Elsa shit on YouTube. We need to go beyond pointing out that something is wrong and venture into the realm of just flat out saying “dude, what the actual fuck is going on???”. I mean, I ~know~ what’s going on. It speaks to what’s immediately apparent and wrong with how we use algorithms. I think we all get the main idea here: if it makes money, companies will not assess flaws unless there is a legal liability. Moralistic liabilities are not contested until we have media coverage, and even then it’s about… stocks. That’s what’s happening, and it’s just how YouTube operates.

Getting approved by AI

Using technology to speed up processes that are highly personal and dictated on a largely subjective basis is pretty much the easiest road to dystopia. Processing applications against clear, concise goalposts is perfectly fine! Having a computer discern, based on a video of you talking, whether you’re worthy of a job is not. This computer cannot discern whether or not you get along with the rest of the team at this company. Much like facial recognition, welfare systems, and literally any given system used in the West, Blackness will correlate with failure, and Black people will be harmed by this system.

How TikTok Holds Our Attention

Okay, the Anne Frank thing was funny. But also, like, I’m not a Nazi, so maybe those guys find it funny in a more racist and less morbidly absurdist sense. TikTok doubles as a creative platform and a deeply interesting, maybe concerning, example of how algorithms feed content. This is much, much faster than YouTube’s content feed, though lower in volume (in minutes, I mean). Algorithmic feeds are something so new (generally speaking) that their effects won’t be understood much for another decade. What does this mean for the future? Users create whatever they want; however, the algorithm is meant to keep you watching. This is an issue YouTube has, where it will supply you things that keep you watching even if those things are extremist, alarmist, or deceptive. Where do we find the control on the runaway effects that this has on the rest of society?

Inadvertent Algorithmic Cruelty

I think this anecdote is heavily stilted in the sense that it leans on technology to fill a void in meaning in a somewhat oblivious way. You fed that algorithm. It’s showing you what you gave it. Just because someone passed away, is a person supposed to know that you’d rather not see their face? If not, how would an algorithm? This is asking for some pretty particular sensitivity in a situation where, if anything, maybe you should seek other assistance if seeing a photograph of your child gives you this much stress. Seriously. I was an abused child who would break down at the mention of my abuser’s name, just the name. I understand some of these suggestions are largely meant to mitigate unintentional harm on the user end, but frankly I don’t think these systems can be so incredibly smoothed out. I don’t think they should be, either. Trying to change the system to fix a problem that maybe a total of 20 users may have is the easy way to break the rest of the system. If your goal is to reduce harm, you need to manage your expectations of what reduction means.

The problem with metrics is a big problem for AI

Metrics, success, failure, optimization. I smell the paycheck I earned for all those high-viewing and somehow ad-friendly Spider-Man Elsa videos. So many companies employ metrics (money stats, as I like to call them here) as a means of maximizing profit. This is perfectly reasonable, until every game that comes out has microtransactions, a reduced budget, procedurally generated content, and weekly updates in accordance with trends across the consumer base. According to the metrics, which feed our algorithms, which determine our decisions as a company, there’s a big issue with everything being fucking trash lately. Maybe we’re not maximizing money stats enough. Maybe our optimization has too many people involved. With the definition of success in media being maximum engagement and maximum profit, addictive, violent, exploitative, or otherwise fucking horrendous content and business practices will win out every single time.

AI / Predictive Analytics / Recommendation Algorithms (11 Mar):

James Bridle – Something is wrong on the internet (Medium)

The internet is for more than just adults. The world ranges across age groups, and the internet caters to all of them, revealing demographics that were once hidden. Normally in America, we rarely see children’s adverts/commercials/shows/content unless we’re in that bubble as a parent, kid, grandparent, etc. The internet is different because of how accessible it is to children. In real life, children typically can’t buy things with their own money, so adverts are limited to the particular stores and environments where kids can be found/where their parents take them. But recent generations have shown us that children are now capable of exercising individual agency on the internet. Videos like the ones described can therefore be advertised specifically to kids, not their parents. Viewings have turned into monetary gains for the video makers: children now have the power to provide creators financial results through views instead of direct dollar transactions. Video makers have capitalized on children’s entertainment with platforms like YouTube, using algorithms to reel children (and parents?) into endless viewing.

These automated algorithms are dangerous to those who suffer the consequences of their malfunctions. Every automation comes with a percentage of fault. The problem is most prevalent when we ask where the flaws showed up, and for whom. With children as the audience, we can easily see how screwed up it is that these “accidents” are harmful despite their intent. This entire system is applicable to all types of algorithmic systems in society.

In what other systems can you see these faults/“accidents” being potentially harmful? Why are those systems necessary to keep or regulate? Is this fixable? What is being gained at the cost of these consequences? What’s more important?


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

AI is being used to challenge our roles as potential employees. Our positions in jobs are being calculated and analyzed by AI.

How does it feel knowing that AI is a potential barrier for employment? Is it justifiable?

When making a decision, we tend to do research in order to make an educated decision. But AI lets companies make “educated” guesses about what we’d look like as employees for them. But again, AI is biased. What is it based on? What is it looking for? Why are we shifting our career identities into quantifiable measures that suit an “ideal model”? What do you think that does to the interview process for jobs? Would you prefer a company to judge you through your “paper identity” or through an in-person interview?


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

TikTok: performs for personal attention retention vs Instagram/Facebook: performs for direct personal communication

young generations can become self made celebrities through social media

What’s the point of TikTok? Why are the children so good at it, and why do adults have trouble finding success in it?

Is it possible to make something so much for children, and so far from adults, that it becomes impossible to capitalize on? If adults can’t find success on a platform, what happens?

Rosa cinematic universe on Tik Tok => self made celebrity example, using talent to self make…what happens when we don’t have a company/label backing up a person’s talent? What happens when the audience becomes the agency that supports/promotes/invests directly?

How do we categorize different social media genres(?)

Eric Meyer – Inadvertent Algorithmic Cruelty (article)

“It feels wrong, and coming from an actual person, it would be wrong.  Coming from code, it’s just unfortunate.”

The design is for the ideal user, the happy, upbeat, good-life user. It doesn’t take other use cases into account.

This makes me think of finstas again… there’s no space for a user that is anything other than happy. These platforms reduce us to minimal emotions and set our defaults to happy, but we rarely all are… so why does social media work either way? If it’s made for the happy user, does it turn us into happy users? If we smile more, will we actually become happier? Or are we facilitating an internet facade? Or is social media really making us happier? If not, then why do we stay on it?


Rachel Thomas – The problem with metrics is a big problem for AI (article)

Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” 

Automated essay software judges on vocabulary/grammar => what about students who have learning disabilities that consequently result in grammar mistakes? What about students who don’t learn textbook English? Why is slang unprofessional? Who’s setting the “bar”? And what does grammar have to do with evaluating a student’s ideas/argumentative reflection?
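To make the Goodhart’s Law point above concrete, here is a minimal sketch (a toy example of my own, not any real grading product’s algorithm) of a grader that scores only on length and fancy vocabulary. Once the measure becomes the target, padding an essay with long words games it completely:

```python
# Toy essay "grader" that measures only proxies: length and word sophistication.
# A hypothetical illustration of Goodhart's Law, not a real product's algorithm.

def proxy_score(essay: str) -> float:
    words = essay.split()
    if not words:
        return 0.0
    length_score = min(len(words) / 500, 1.0)        # reward longer essays, capped
    fancy = sum(1 for w in words if len(w) >= 9)     # "sophistication" = long words
    vocab_score = min(fancy / len(words) * 5, 1.0)
    return round(50 * length_score + 50 * vocab_score, 1)

thoughtful = "Grading ideas takes a human reader. " * 20
gamed = "Notwithstanding multifaceted considerations, " * 200

print(proxy_score(thoughtful))  # low score despite coherent content
print(proxy_score(gamed))       # near-perfect score for meaningless padding
```

The moment students learn that length and long words are the measure, the measure stops tracking the quality of their ideas.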

AI/Predictive Analytics/Recommendation Algorithms Responses

James Bridle’s Something is wrong on the internet: I think I had a Dell laptop at 13 years old and used Facebook all the time… I remember when there were no ads on Youtube and Facebook had only ‘like’ and ‘become a fan’. Not that long ago I actually looked back through my Facebook timeline with cringe and awe. I have definitely brought up my concerns with content aimed at kids on YouTube in the recent past. It never before occurred to me how easy it is to go from a verified page to a non-verified page with autoplay on. What the heck is going on with these ‘finger family’ videos!? I don’t recall the corruption of Peppa Pig, but I do remember Spiderman Elsa… I agree that it’s not about what teenagers can or can’t handle, nor about trolls, it’s about very young impressionable minds (babies/toddlers) being traumatized by content that targets them on the internet.

Memorable Quotes: 1) “I don’t even have kids and right now I just want to burn the whole thing down.” 2) “It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives.” 3) “This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways.”

Rachel Metz’ There’s a new obstacle to landing a job after college: Getting approved by AI: This is insane! I’ve never heard of HireVue (an AI gatekeeper for entry level jobs)… Although I am horrified, I can also see how this could be convenient for employers who have to choose from a large applicant pool to fill one job position. The problem is that I highly doubt a computer can detect “empathy” or a “willingness to learn” and I think raising a laptop camera to be eye-level is ridiculous. When it comes to an AI testing for confident or negative language I’d probably strike out on both fronts… but that shouldn’t mean that I’m a bad employee! I didn’t know that there was an Electronic Privacy Information Center (EPIC). I think the EPIC asking the FTC to investigate HireVue’s algorithm is futile; then again, the FTC did look into TikTok.

Jia Tolentino’s How TikTok Holds our Attention: Am I the only one who thinks it’s shady that nobody affiliated with TikTok (whom Tolentino got in touch with) seemed to know anything about TikTok?! Also, users not seeing a whole lot of the Hong Kong protests through this one particular social media platform is super sketchy… I actually did hear about what’s happening to the Uighurs not too long ago… It is strange to me that all it takes to have a chance at a record deal is a sound bite which gets popular on the app (not a full-length song), but I do like Old Town Road. I mean, how can you not? It was such a huge phenomenon! That said, I will never understand why young girls flock towards Jacob Sartorius or Jake and Logan Paul. This guy, Zhang Yiming, is giving me Chris Wiley vibes… How do these guys do these things so young?!

Rachel Thomas’ The problem with metrics is a big problem for AI: I don’t think I’ve had an algorithm grade my essays, but I do think some of my school papers have been run through an algorithm to test for plagiarism. This article reminds me a lot of Jill Walker Rettberg’s and Cathy O’Neil’s TED talks. Of course length and sophisticated words are all it takes to game an essay-grading algorithm… It’s depressing that the actual content doesn’t matter all that much. Unfortunately, I think we’re living in a time where people don’t truly understand statistics. I watch stuff on YouTube all the time, but that doesn’t always cause me to feel happy, and even if some of it did, correlation does not equal causation. I have seen some white supremacist channels come down, but yeah, YouTube/Google has a problem with letting that crap infiltrate (I’m being reminded of ‘Sad by Design’ now).

Eric Meyer’s Inadvertent Algorithmic Cruelty: I’m so heartbroken! I personally try to avoid these ‘look back’ algorithms like the plague… but I am a sucker for Spotify end-of-the-year playlists. Of course Facebook doesn’t really care about asking any of us for permission. It already doesn’t ask us how much of our privacy we want… I think we all know by now that Mark Zuckerberg’s empathy level is non-existent.

AI / Predictive Analytics / Recommendation Algorithms

James Bridle – Something is wrong on the internet (Medium)

This was a scary article. I knew already that I was going to limit my future children’s use of YouTube and the internet, but this just opened that up to a whole new level. The fact that so many inappropriate videos slip under the radar due to keywords and just bad algorithms in general is sickening. What’s even more sickening though is the author’s conclusion statement at the end – that “the system is complicit with abuse”. They are exploiting children for the sake of exploiting and gaining financially from it. They don’t care that the content is inappropriate for children. It’s just awful.


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

This article focuses on the idea of AI interviews. This seems like a really inaccurate way of hiring people: you can’t understand who a person is just based on facial expressions. Like the article said, you especially can’t tell a person’s drive and passion from facial expressions and keywords. I hope this doesn’t continue for much longer, or that we gain technology that allows algorithms to judge things like this better. Yet I’d rather steer away from that, because then we might be looking at more “human-like” technology, and that’s a whole separate scary topic.


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

This article talks about TikTok and other platforms similar to it. TikTok, like everything else nowadays, quickly learns what you like and caters to your interests. However, TikTok revolves around a bit of meme culture as the article explains. This is contrasted by the apps that are popular in China and are used similarly to TikTok – but house more types of content – USEFUL content like how to cook certain things – and even allows for purchases to be made through the apps. The argument here was that these Chinese versions of TikTok-like apps allow for more content and a broader audience – it’s not all meme-culture consumed by teens.


Eric Meyer – Inadvertent Algorithmic Cruelty (article)

This was so sad. What he said about algorithms being thoughtless, and how this wouldn’t be okay coming from a person, is true. I don’t have Facebook, but I still have experience with “your year in review” type things because of Snapchat. I think they started doing it about three years ago now…? But I’m not entirely sure. Regardless, Snapchat’s review clearly shows you that it’s looking for keywords in your memories, like “fun” or “yay!”, to pull out the snaps it uses in the review. However, I’ve experienced firsthand how this doesn’t always pan out the way they hope. For example, sometimes I say ‘yay’ ironically, like when I’m actually really upset about something, so I’ve had Snapchat pull out really deep, emotional snaps I’ve saved for myself or just downright sad snaps where I was genuinely sad. I think what Eric was saying about implementing something that asks whether or not you want to see it makes sense, because that shows even a little bit of consideration or empathy from the algorithm/development team.
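As a rough illustration of how a keyword-based “review” selector like the one described could misfire, here’s a toy sketch (my guess at the general approach, not Snapchat’s actual code). It flags any saved snap whose caption contains a happy-sounding keyword, so an ironic “yay” gets picked up exactly like a sincere one:

```python
# Toy "year in review" selector: picks memories whose captions contain
# happy-sounding keywords. A hypothetical sketch, not Snapchat's real logic.

HAPPY_KEYWORDS = {"fun", "yay", "best", "love"}

def select_for_review(memories: list[dict]) -> list[dict]:
    picked = []
    for memory in memories:
        words = {w.strip(".,!?") for w in memory["caption"].lower().split()}
        if words & HAPPY_KEYWORDS:  # any keyword match counts as "happy"
            picked.append(memory)
    return picked

memories = [
    {"caption": "Best day at the beach!", "mood": "happy"},
    {"caption": "yay. failed my final. great.", "mood": "sad"},  # irony: picked anyway
]
print([m["caption"] for m in select_for_review(memories)])
```

The selector has no notion of tone, only of keywords, which is exactly the failure described above.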


Rachel Thomas – The problem with metrics is a big problem for AI (article)

Metrics can be, but aren’t always, useful. Metrics can show us important things we want to know, but sometimes they aren’t accurate because their subject pool isn’t fair (thinking more specifically of the medical example in this instance). The YouTube example is another good reference: half of the watchers are bots, not real people. So saying that those who watch more are enjoying what they watch isn’t entirely true, because these bots exist.

AI / Predictive Analytics / Recommendation Algorithms

Something is Wrong on the Internet

This is not the first time I have heard of this phenomenon; it’s been an increasingly terrible problem that YouTube has had for years, culminating in a rule change to comply with COPPA that has only created more harm for creators and not really fixed the problem. Some of these videos are just weird, like the author said; they are made in an AI factory of sorts, so it’s obvious they are not made by humans but rather through an algorithm based on what children have previously watched.

While all of this content, including the harmful, creepy, and abusive things, is available to children, and that is something to be concerned about, it also reminds me of when I was a kid watching Pewdiepie. The games he played were horror games not meant for children my age, and the jokes he made were likely not meant for children my age either. There had to be moments when, through no fault of my parents, I saw something I shouldn’t have. I guess that also comes off as a “my parents did X terrible thing and I turned out fine!” thought, but it does raise the question: at what point is content too much and a problem in itself, versus just a normal learning curve of consumption?

This is where I disagree with the author: I do think it is on YouTube and these other platforms to take care of this problem, because they are the platforms that house such content and gave this content the ability to be created. Not only that, but the way they’ve decided to “fix” this is not the correct one, as it has not only failed to fix the issue but harmed content creators who had nothing to do with the problem in the first place, from forcing adult creators making content for adults to constantly censor their work, to having doll-customizing channels beg us to protest the change so their content doesn’t get deleted entirely.

There’s a new obstacle to landing a job after college

This reminds me of the students whose attendance is monitored by an app. The only thing I can think of is: what happens when the AI and algorithm are inevitably wrong? When it says overqualified candidates aren’t a good fit and underqualified ones are? Is this going to be an even greater tool for bias in these industries? What about racism, classism, or ableism? I also find it baffling that an AI is meant to understand empathy when it itself likely can’t empathize. It’s already been proven that standardized testing isn’t an actual show of students’ intelligence and understanding of the material, so why would a standardized test for interviewing be any better? No social behavior is foolproof for analyzing a person. For example, saying that not looking someone in the eye means someone is lying, when really that person could just be anxious, or on the autism spectrum, or have something else that has informed their behavior. Even other people have trouble with these things, let alone an AI being “trained” to analyze this data, which, as we’ve discussed before, isn’t foolproof either as data itself.

How TikTok Holds our Attention

What scares me the most about both reading this article and TikTok’s algorithm itself is how much it can become a feedback loop. This can be something like funny cat videos being the main thing on someone’s feed, but because of TikTok’s racism problem, it can also just turn into a loop of white supremacy or other terrible things. Like most social media feeds, it caters to what the user wants, but TikTok has proven to be one of the most efficient platforms at doing this. Facebook and Instagram are peppered with so many sponsored ads and recommended posts that half the time it’s impossible to see the actual people you follow, and I personally just hate Twitter, so I can’t speak to it. TikTok, however, has your feed personalized in mere minutes, and from what I can tell, while it has ads, they aren’t like sponsored posts on Facebook acting as if they are more content.
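As a rough sketch of the feedback loop described above (my own toy model, not TikTok’s actual recommender), here’s a simulation where every watched video boosts its category’s weight, so future recommendations skew further toward it. Whatever happens to get watched early on ends up dominating the feed, a rich-get-richer dynamic:

```python
import random

# Toy engagement feedback loop: each watch increases a category's weight,
# so it gets recommended more often. A hypothetical illustration, not
# TikTok's real algorithm.

categories = ["cats", "dance", "cooking", "politics"]
weights = {c: 1.0 for c in categories}

for _ in range(500):
    # recommend proportionally to accumulated engagement
    shown = random.choices(categories, weights=[weights[c] for c in categories])[0]
    if random.random() < 0.5:   # the user watches about half of what's shown
        weights[shown] += 1.0   # engagement feeds straight back into the weights

total = sum(weights.values())
for c in categories:
    print(f"{c}: {weights[c] / total:.0%} of the feed")
```

Run it a few times: the final mix varies wildly depending on those first few random watches, which is exactly why a feed can drift into one narrow loop.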

I find the most interesting part of this to be how TikTok is making the lives of musicians better and making them more capable of growing. But like all things, it’s about virality, which is a hard thing to track, and there’s no good equation for how to become viral.

My biggest worry with TikTok comes from a point that I don’t really think the article focused on. A lot of these people are children, and there are basically no regulations in the app to protect them. Young boys are sexualizing themselves to get even younger girls to thirst after them and make them famous. Young girls are doing sexualized dances and being horribly bullied for them. Not to mention all the horrible stuff that can be found there and the feedback loop the algorithm makes; at the end of the day, it’s the other people on the site that I’m concerned about, because people are the ones who make horrible comments or do creepy things.

Inadvertent Algorithmic Cruelty

I find it hard to respond to this one because at its core it’s deeply personal. But the author is right: these AIs and algorithms are thoughtless. They aren’t purposefully cruel, but the people who made them probably didn’t think about the possibility that someone’s year wasn’t great, and didn’t think about the words or the pictures the algorithm might choose. I think the author’s idea, to make using the app a choice, was a good one and a good option for changing the platform for the better. But what are some other ways that these programs could be more empathetic? What are some ways to change the system to account for these types of situations?

The problem with metrics is a big problem for AI

The first thing that came to mind when they gave the example from Google, saying that watching more YouTube meant people were content with what they were watching, was mental health and executive dysfunction. For example, I watch more YouTube when I am having a hard time focusing or making decisions, because it’s a default. I can be bored out of my mind and still be watching YouTube, because my executive dysfunction makes it impossible to do anything else. That is just one example of how metrics can’t really measure what is most important. It would (hopefully) be impossible for Google to know why I was watching YouTube and understand what that meant for their needs (whether or not I’m happy with the content I’m watching = more money for them). Which goes along with the author’s point about addictive environments: if I were in a healthier place, it’s probable that I would not be watching so much YouTube, but the analytics don’t care about that.

Another example like the one in the article comes from autoplay. I have that feature turned off; however, I also watch YouTube to fall asleep (I know, I know, unhealthy sleeping habit). So if autoplay is on and I fall asleep, YouTube can keep playing content for hours upon hours, and that games their own system.
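To put the autoplay point in concrete terms, here’s a toy comparison (entirely made-up session data, for illustration only) of raw watch time versus a version that discounts autoplayed minutes. The raw number can’t tell an engaged viewer from someone asleep with autoplay on:

```python
# Toy viewing sessions: (minutes_watched, was_autoplay). Hypothetical data.
sessions = [
    (25, False),   # deliberately chosen video
    (15, False),
    (480, True),   # fell asleep with autoplay on
]

raw_watch_time = sum(m for m, _ in sessions)
chosen_time = sum(m for m, autoplay in sessions if not autoplay)

print(f"raw watch time: {raw_watch_time} min")   # 520: "a very satisfied user"
print(f"chosen watch time: {chosen_time} min")   # 40: the part that reflects intent
```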

So are there ways to track metrics that can be more accurate to the questions people might be asking? Should they be more accurate?

BFA Exhibition: Ideation

Exploring the idea of First Impressions

3 Questions I think about

  1. What are key actions that caused someone to have a lasting first impression on you?
    1. Body language, facial expressions, things that they say? Good or bad? How did they make you feel? What makes you feel close to someone? What kind of personalities are you drawn to?
  2. How often have your first impressions been proved wrong?
  3. Does a positive or negative first impression make more of an impact on you? (Which do you remember better?)

3 Mediums I’m considering

  1. Something HONY-like (Photography & Testimonial)
  2. Illustration/Drawing/Physical Medium

Big Data, Algorithms, Algorithmic Transparency

Pasquale – Black Box Society

Pasquale describes “the black box” – a sort of metaphorical box which can either (ideally) obscure your data and thus provide privacy, or, more frequently, is the mechanism by which companies or the government conceal how much of your data they have. This really protects the interests of those who have access to user data rather than the users themselves. The release of this data is often not an option, as non-user parties will argue that data is hidden for our own good – to protect privacy, enforce peace, etc. Honestly, I thought this article built on the other sources this week: we’ve become reliant on, and complacent about, what we perceive to be objective gatekeeping of our data, but in reality we have no clue who has access to our information. Privacy becomes a myth, and only now are individuals beginning to sound the alarm on this.

O’Neil – TED Talk

O’Neil is concerned about the broader social repercussions of relying on algorithms and training them solely on archival data, as opposed to controlling for the systematic oppression that shapes pre-existing data. Simply because an algorithm accurately predicts “success” doesn’t mean it is correctly accounting for human bias. A lot of this comes from the unwillingness of company execs to spend the money to intentionally adjust their data to be more equitable, and from the comfort the population takes in the idea that math is objective. If my years in high school taught me anything, it’s that math is a bad subject for bad people who have bad taste in literally everything. The same (roughly) goes for data scientists. They need to be held accountable for their actions.

Eubanks – Automating Inequality

The cycle of poverty in the US was already convoluted and hard to break (thus making it a cycle). There were already cuts to welfare programs that gave the poor a chance to live a decent life, but the introduction of the digital age has made certain that these inequalities stay and worsen. Eubanks brings up the example of Medicaid and Medicare early on to illustrate this: the job of human moderators who looked through and vetted automated decisions was replaced by programming that wasn’t properly overseen, leading to millions getting their insurance cut and an inordinate amount of preventable deaths. This practice of essentially screwing over the poor to save on administration costs has a dangerous history. I think it’s especially relevant now in the context of a national conversation, as we have not one but two presidential candidates who promise progressive reform to specifically target this. The quieter we are on this issue, the less likely it will get resolved.

Vertesi – TIME article

Vertesi talks about how hard it is to really have a private life on the internet, because even interactions outside of your direct control are monitored. There’s the example of the girl whose dad found out she was pregnant after Target did, and Vertesi’s own example of how she was unable to conceal her pregnancy from advertisers because of an innocent email from her uncle. Private isn’t private as far as advertisers are concerned, because we are the commodity being sold. Personally, I find it frustrating to explain this concept to older people who don’t explicitly work with tech, because my generation grew up with a camera pointed at us. Overall, it’s just irritating how little regard is given to our personal information, and how people with the power to regulate it (again, often older Senators/Representatives) don’t even see a big problem with it in the first place.

Williams and Lucas

This one was hilarious. I can actually remember scenarios where I’ve been stuck explaining something to admin and they just keep referring to some sort of mix-up in the computer system. It’s funny in a comedic context, but a little concerning when you realize the real-world implications of a lazy person at the desk. There isn’t too much to say here; it’s just deeply relatable.

Big Data & Algorithms

The book The Black Box Society by Frank Pasquale explains how companies use our data to determine basically everything about our lives, especially our finances. With all these companies tracking us through our phones, computers, and other devices, they sell our data and use it to determine what kind of person you are: whether you’re reliable enough to make payments or receive a loan. But by doing this, they discriminate against people who don’t fit well into the algorithm.

How do we fight against these companies selling our data and using it for their own agendas?

In Cathy O’Neil’s TED Talk, she further explains how these algorithms are biased and how they can discriminate against minorities, women, and poor people. Just because the algorithm runs on a computer doesn’t mean that it isn’t biased; it is biased because of the requirements that are put into it.

How do we find out who is making these programs, and how do we make regulations to stop them from using our data for nefarious purposes?

Janet Vertesi’s article is about how not using big data made her look like a criminal. It kind of reminds me of when you (Ben Grosser) made the email program that adds in words that government trackers look for.

Should people opt out of big data, and if so, how? Will it have an impact on these companies?

The skit was really funny because it showed that people rely so heavily on computers that even when the computers are wrong, people will still get the bad end of the stick and be at fault for it. It’s also interesting how commonly these mistakes are made, yet people still suffer negative effects from them.

Why do we put people at fault when technology and algorithms are wrong, even when we know they’re wrong?

The talk given by Virginia Eubanks, called “Automating Inequality,” describes how these data algorithms force discrimination. People are put into these systems, and if they don’t fit the requirements, the system can still screw them over, even when it wasn’t their fault in the first place. This ties into both the book and the TED talk in describing how these algorithms are biased, especially against women, minorities, and poor people.

How should we protect the people who are negatively affected by these algorithms?

Big Data/Algorithms/Algorithmic transparency

The black box society

The article uses the metaphor of a “black box” for the current secrecy problem: both a recording device and a system whose workings are mysterious. “Knowledge is power. To scrutinize others while avoiding scrutiny oneself is one of the most important forms of power.” The law aggressively protects secrecy in commerce while staying silent when it comes to personal privacy.

It might be worthwhile if the decline in personal privacy were matched by comparable levels of transparency from corporations and governments. But sadly, it hasn’t been. The commerce and technology industries are still keeping the box closed, using three strategies: “real” secrecy, legal secrecy, and obfuscation.

“Transparency is not just an end in itself, but an internal step on the road to intelligibility.”

Authorities are increasingly using algorithms, and there are a lot of questions we need to ask. Are they fair? To what extent can we trust these automatically made decisions instead of decisions based on human reflection? The distinction between state and market is fading.

The era of blind faith in big data must end

This TED talk asks what happens when algorithms are wrong. There are two ways an algorithm can be wrong: in its data, and in its definition of success. “Algorithms are opinions embedded in code.” The marketing trick is to make people believe that algorithms are scientific, and to intimidate people with them. Algorithms can have deeply destructive effects even with good intentions. Algorithms can’t make things fair, because they repeat our past practices, our patterns; they automate the status quo. In some ways, algorithms are killing minorities.
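A minimal sketch of what “automating the status quo” means in practice (entirely made-up data and a toy model of my own, not any real system): a model trained on past hiring decisions simply reproduces whatever pattern those decisions contain:

```python
from collections import defaultdict

# Toy "status quo automation": a model built from past hiring decisions
# reproduces whatever bias is in them. Hypothetical data for illustration.

past_hires = [
    ("school_A", True), ("school_A", True), ("school_A", True),
    ("school_B", False), ("school_B", False), ("school_B", True),
]

counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
for school, hired in past_hires:
    counts[school][0] += hired
    counts[school][1] += 1

def predict_hire(school: str) -> bool:
    hired, total = counts[school]
    return hired / total >= 0.5  # recommends whoever was favored before

print(predict_hire("school_A"))  # True: the past pattern is simply repeated
print(predict_hire("school_B"))  # False: candidates from B stay filtered out
```

Nothing in the model asks whether the past decisions were fair; it just encodes them as the definition of success.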

“Data laundering”: a process by which technologies hide ugly truths inside black-box algorithms and call them objective.

We should check our data integrity; we should introspect on the definition of success; we should improve our accuracy; and we should give long-term effects the attention they deserve.

Data scientists: we should not be the arbiters of truth. We should be translators of the ethical discussions that happen in the larger society.

Non-data scientists: this is a political fight, not a math test. We need to demand accountability from our algorithmic overlords.

Automating inequality

More evolution than revolution. They rationalize and recreate politics. They promise to address bias but in fact, they just hide it.

They create an “empathy override”: they ease away our emotions. The talk then gives some examples of people losing eligibility for medical services due to algorithms. The limitations of the data itself cause enormous concern. Feedback loop.

My experiment opting out of big data made me look like a criminal

The author wrote about her experience of trying to opt out of big data, and she found that she was treated like a criminal. She was suspected by banks because she withdrew a lot of cash. She had to choose her words carefully when she needed to communicate with other people online. What she said is absolutely right, yet it is happening right now: “No one should have to act like a criminal just to have some privacy from marketers and tech giants.”