Summary of March 25 Class Meeting

We talked about the following in today’s online class:

  • loss — a lot has been lost in this shift and it’s important to recognize it (BFA exhibition at KAM, graduation, last weeks w/ cohort (for seniors), face-to-face interaction, access to resources and tools, etc.)
  • BFA exhibition topics:
    • can the KAM part be rescheduled for later? (Ben will advocate and get back to you)
    • what will the online exhibition be? (not sure yet, will report back)
    • could NM do its own online exhibition (in addition to A+D version)? yes! volunteers for a subcommittee to think on this were: Priyankka, Natalie, Niky, Sora
    • opportunities for a NM-specific online presence to be a good thing for some of us
  • course structure going forward — only thing left is for seniors to do a BFA artwork and for juniors to make their media object. no other remaining requirements.
    • at least for the moment, we will continue to meet at 1pm on Wednesdays on Zoom, even if briefly.
    • we’ll use Zoom during class time for synchronous conversations w/ Ben
    • we’ll use MS Teams for peer critique (post something, asking for feedback).
    • embrace however this moment is affecting your work — stuck w/ low-res cameras? embrace it. can’t build what you intended? go with it and use cardboard or whatever is on hand. IOW, don’t be afraid to let this moment be visible in your work (it will be anyway!)
  • what’s next required?
    • post one paragraph (or more or less) on the blog about what your project will be. this could be the same thing you posted previously … IOW, nothing has to change. at the same time, it might be different than what you’d had planned (because of change in resources/materials/tools or rethinking the project in current moment, etc.)
    • work on the project!
  • stay safe. reach out if you need anything or have questions.

BFA Exhibition: Plan Now

During this time, I'd like my work to be the drawings I've produced at home, maybe each paired with a sentence about how I felt that day. Some days I draw more detailed things, and on busier days I draw simpler ones, so the series would show my motivation and what drawing felt like each day.

Another idea I came up with, since I am now back on campus quarantined with all my plushies, is to make a photo series / Instagram account documenting the daily life of one of my plushies and how it handles social distancing.

AI / Predictive Analytics / Recommendation Algorithms (11 Mar):

James Bridle – Something is wrong on the internet (Medium)

This article creeped me out a bit. It talked about content creation for children, which I've seen a lot of YouTubers complain about recently. It focuses on automation: how content made for children can be generated in endless variations, with the same basic character models repurposed over and over. Something that really shocked me was the offensive T-shirts automatically generated and sold on Amazon. And because AI is being used to create content for children, the result can be a really creepy, messed-up video that no one expected at all.
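Just to picture the "endless variations" part, here is a tiny made-up sketch (the character and keyword lists are invented for illustration, not taken from the article) of how a word-salad title generator could crank out video ideas at scale:

```python
import itertools

# Hypothetical illustration of algorithmically generated kids' video titles:
# mash together popular keywords to produce variations at scale.
characters = ["Elsa", "Spiderman", "Peppa"]
things = ["Surprise Eggs", "Finger Family", "Colors"]
hooks = ["Learn", "Funny", "Wrong Heads"]

titles = [
    f"{c} {t} {h} for Kids"
    for c, t, h in itertools.product(characters, things, hooks)
]

print(len(titles))   # 27 titles from three short lists
print(titles[:3])
```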


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

This article hit way too close to home because I, like probably most seniors, am trying to find a job and struggling. A lot of people tell me to include buzzwords in my resume somehow so "the algorithm can pick it up out of the other resumes." I totally get why companies use AI to sift through resumes, since they probably get so many applicants, but I find it pretty sad that a lot of people have to go through that process; it just seems a bit unfair.
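To make the buzzword idea concrete, here is a minimal sketch (the keyword list, cutoff, and resumes are made up, not from the article) of how a crude keyword screener might score applications before a human ever sees them:

```python
# Hypothetical keyword-based resume screener, purely illustrative.
KEYWORDS = {"python", "agile", "leadership", "excel"}  # made-up buzzword list

def score_resume(text: str) -> int:
    """Count how many buzzwords appear in the resume text."""
    words = set(text.lower().split())
    return len(KEYWORDS & words)

resumes = {
    "A": "Led a student design team with strong leadership and Excel skills",
    "B": "Built interactive installations and taught myself electronics",
}

CUTOFF = 2  # resumes below the cutoff never reach a human reader
for name, text in resumes.items():
    s = score_resume(text)
    print(name, s, "passes" if s >= CUTOFF else "filtered out")
```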


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

This article talks about TikTok and the general premise of the application. It talks about how the creators are all young, and how an older creator is hard to come by. It was interesting to me that TikTok is an application in which the content is based primarily on music rather than speech or text, so it reaches a wider audience. This way, people from all over the world can create content that reaches everyone else, content that can be understood and related to all around the globe.


Eric Meyer – Inadvertent Algorithmic Cruelty (article)

This article made me kind of sad because it opened by talking about how Facebook creates these "Year in Review," "Happy Birthday," or "Your friendaversary: Here's your friendship history" features for people without being able (of course) to consider the history one may have with those particular events or people. The result is something that can stir up negative emotions. I actually experienced this kind of algorithmic cruelty when Facebook showed my friendaversary with someone I am no longer friends with in person but stayed friends with on Facebook. I don't think there's any way for Facebook to stop this from happening unless it lets us log negative interactions with people, but I don't think that would be a very good feature to implement.


Rachel Thomas – The problem with metrics is a big problem for AI (article)

This article talks about how the most important things cannot be measured by metrics, yet people rely on AI more and more, and AI focuses primarily on metrics. The article makes many points about why metrics focus on the wrong things or measure irrelevant ones, and it seems like the best option is to actually have a person go through whatever is being measured in order to give the best judgment. It's sad that this process is super subjective and biased at the same time. Is there even a right way to sort through data at this point?

AI / Predictive Analytics / Recommendation Algorithms (11 Mar):

James Bridle – Something is wrong on the internet (Medium)

You know, this is actually a rabbit hole I found myself extremely interested in!! (I almost want to change my junior topic to talk more about it or something similar… but I'll email you.) More and more I see these "kids channels" on YouTube getting shadier and shadier. They have so many subscribers, and I fail to believe MILLIONS of these children or their parents would watch these shows (quite creepy), let alone know how to subscribe to those channels. I do believe bots or some sort of AI is involved in exploiting the YouTube algorithm. Watching these videos is honestly nightmare fuel. While there is nothing decidedly gross, disgusting, or inappropriate, it's that feeling where something just isn't right… I'm sure you all have heard of ElsaGate (if not, I suggest you look it up). YouTube needs to step up and take control of these videos. They shouldn't be suggested; they shouldn't even exist.

Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

It's interesting; it kinda reminds me of the TED Talk we watched where the computer may be biased against women because in the past only men have had the job. The difference between a human and a computer is that people trust computers blindly, while humans can adapt to unique situations. It's even worse when applicants don't even know what the algorithm is looking for (I'm not sure employers even know). I understand that companies are growing and applicants are plentiful, but I feel it's unfair to choose them based on an algorithm.

Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

I remember I started liking a lot of the "relationship" TikToks, and so it would continue to show me more of what I wanted. It's so interesting to me how these creators can all of a sudden get so big, so quickly, which I guess shouldn't be surprising given the age we live in. The feed is on a constant timer: even when I watched the example TikTok, if I stayed on the screen for more than 5 seconds it would refresh with new TikToks to click and enjoy, and it would keep doing this the longer I stayed. I got TikTok because I thought it would be like Vine (and in some ways it is), though most Vine stars have moved on to other things now.

Eric Meyer – Inadvertent Algorithmic Cruelty (article)

Facebook memories are pretty hard to look at sometimes. Life happens quickly, and the code doesn't really know any better. I know from some of my friends that it can also affect trans people who no longer identify with their past selves, or surface pictures of ex-partners or past friendships that are no longer a part of your life.

Rachel Thomas – The problem with metrics is a big problem for AI (article)

The problem with metrics is that there will always be a factor that hasn't been considered. I feel like for a lot of these AIs, the information is taken at face value. It also feels like the things people click on encourage more people to click on them, even if they're badly written or full of false information; in the article's example, Russia Today's take on the Mueller report was the recommended video and garnered a lot of attention and clicks. It's almost like that phrase where the rich get richer. The clicks get clickier? There's also the fact that these metrics look for something so specific that they encourage cheating; for example, the teachers who don't cheat may be penalized by the system.
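To picture the "clicks get clickier" loop, here is a tiny made-up simulation (the items and numbers are invented, not from the article) where whatever gets clicked early gets shown more, and so keeps getting clicked:

```python
import random

# Hypothetical rich-get-richer click loop: items are shown (and clicked)
# in proportion to their past clicks, so early popularity compounds.
clicks = {"careful report": 1, "clickbait take": 1}

for _ in range(1000):
    shown = random.choices(list(clicks), weights=list(clicks.values()))[0]
    clicks[shown] += 1  # being shown earns another click, raising future odds

print(clicks)  # whichever item got lucky early tends to run away with the clicks
```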

AI

The problem with metrics is a big problem for AI

This article talked about some problems with AI and put forward some ways to improve it. We can't measure the things that matter most. Metrics can, and will, be gamed. Metrics tend to overemphasize short-term concerns. Many metrics gather data about what we do in highly addictive environments.

One way to keep metrics in their place is to consider a slate of many metrics for a fuller picture. Data should be combined with listening to the first-person experience of those working at these companies. Another key to keeping metrics in their proper place is to keep domain experts, and those who will be most impacted, closely involved in their development and use.

Inadvertent Algorithmic Cruelty

The writer used his own experience to illustrate the phenomenon of inadvertent algorithmic cruelty and possible ways to avoid it. He emphasized the importance of empathetic design and urged people to increase their awareness of and consideration for the failure modes, the edge cases, and the worst-case scenarios, which are definitely important aspects for designers to consider and think about.

AI / Predictive Analytics / Recommendation Algorithms

James Bridle – Something is wrong on the internet (Medium)

This essay talks about how the YouTube algorithm can expose young children to inappropriate content. This happens because people will do anything for views (such as making videos longer, as the article states). Especially if the family shares a common computer, a child may be exposed to something violent. I remember when I was little my mom yelled at me for watching gaming videos that involved shooting.

How can we regulate / protect children from inappropriate content? 

Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

This article freaks me out as I look for jobs and internships. The process of getting hired by non-empathetic machines seems ridiculous to me. With the use of selective AI, the algorithm is looking for one specific type of individual, and that may be negative because it enforces a kind of conformity. There would be fewer dynamic interactions and ideas because of the lack of individuality, since I feel like everyone would end up being similar.

How would you feel if HR (Human Resources) became autonomous?

Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

I personally get really addicted to TikTok once I start watching and could just scroll endlessly. The article talks about how the recommendation algorithm gets users hooked, but I also know that people try to get on the "For You" page with the #fyp hashtag in order to use this algorithm to get famous. The article also talks about how young individuals like Lil Nas X have the ability to become famous, but I also know that many Vine stars and YouTubers become famous on TikTok more easily than ordinary individuals.

Do you feel like anyone could become famous if they tried / just manipulated the algorithms?

Eric Meyer – Inadvertent Algorithmic Cruelty (article)

This article talks about how algorithms are "thoughtless" and have no concern for emotions. The author talks about increasing awareness of and consideration for the failure modes and worst-case scenarios. This article is sad because our social media brings up memories of our past that we may not want to remember. I know Snapchat does a similar thing, where it takes you to a memory from a year ago on this date.

I know some people who just don't use social media; do you think you could do that?

Rachel Thomas – The problem with metrics is a big problem for AI (article)

This article talks about how AI plays a key role in optimizing metrics. While this can be useful, the article states that it can also be harmful when metrics are "unthinkingly applied." The article talks about over-emphasizing metrics and how that can mix up useful data and bad data. I agree that although metrics are useful, they can easily be taken advantage of or misused. For example, I remember seeing a college "life hack" where students wrote random words in white text in their essays so the word count would be the number they wanted.

How can we make metrics more empathetic? 

AI/Predictive Analytics/Recommendation Algorithms

Something is Wrong with the Internet – Bridle

(I mean, when hasn’t there been something wrong with the internet.)

Bridle makes the case that YouTube's algorithms are effectively feeding an increasingly nonsensical and genuinely harmful set of vids that exist not to offer artistic or creative content that would genuinely entertain kids, but to game search and recommendation algorithms for tons of views. I remember seeing a TED Talk about this earlier, and when I looked it up I found it was made by the same guy who wrote this article (https://www.youtube.com/watch?v=v9EKV2nSU8w&t=568s). I think the unfortunate thing about all of this is how little concern YouTube shows for its child audience. When the Logan Paul "suicide forest" scandal was going down, both he and his brother, Jake Paul, were criticized for how their inappropriate content was marketed toward kids as young as 7-8. On the YouTube Kids app, directly typing "Jake Paul" will yield no results, but modifying your search to "jakepaul" or something similar will. YouTube in general needs to pick up on this, because unlike adults, young children do not have the capacity to understand how to avoid distressing content.

There’s a new obstacle to landing a job after college: Getting approved by AI – Metz

This whole article is incredibly relevant to me right now, because all I've been doing for the past several months is applying to internships and jobs for the summer so I can maybe not be broke next semester. I've noticed a significant change between last year and this year in the articles and help sites meant to give resume and cover letter advice: the advent of advice on how to "get through the algorithm." It's the most warped thing, and as much as I am all about a cyberpunk future, I was thinking more Blade Runner and less Black Mirror. What I didn't expect was the algorithmic evaluation of interviews, which Metz spends a significant amount of time detailing. Unfortunately, I wasn't able to access Yobs.io, the interview-candidate simulator that will tell you how these algorithms perceive your job competency, but regardless it's weirdly horrifying to know that even my mannerisms will be judged by a bot.

How TikTok Holds Our Attention – Tolentino

This piece goes over the TikTok algorithms and how they work hard to select content a user attends to more, funneling them into a stream of content they're more likely to enjoy. This has some positive impact: uplifting musicians like Lil Nas X, giving internet celebrity to unsuspecting teenagers, etc. But it has tons of downsides. A curious and uninformed user could easily be sent into a swamp of unsavory content, ranging from jokes made in poor taste to blatantly fascist rhetoric. It's like Facebook on crack. Not to mention, its ownership by ByteDance, a China-based company, brings up concerns about data privacy. The Chinese government is not known for benevolence, and there are concerns that apps like TikTok could be used to feed into its rhetoric. Overall it's a mess and a half, but inevitable considering how little concern our legislators and world leaders seem to have for the online world.

Inadvertent Algorithmic Cruelty – Meyer

I feel like this entire class is just going to be all of us constantly dunking on Facebook, and honestly, rightfully so. This article was both horrifying and depressing to read, but it isn't surprising that Facebook would have so little regard for a person's feelings. It's built to keep you engaged, and one of the best ways to do that is to show you images it thinks you will emotionally connect to, even if that image is of your literal dead daughter. Zuckerberg has no need to care about an individual person unless they go viral, and their PR has historically been bad at basic human emotion.

The problem with metrics is a big problem for AI – Thomas

The conflation of metrics with a full picture of available information is a major flaw in the way we analyze our data relations, and by extension how we interact on platforms. Thomas brings up the example of the Mueller investigation and how Russian state TV's coverage of it was inordinately promoted over other videos. Individuals and groups working in bad faith can game algorithms to promote general havoc; we saw that with the Christopher Wylie / Cambridge Analytica reading several weeks ago. People who have the power ignore the issue entirely, either willingly or out of inattention, and it's having real-world consequences on our political and social landscape.

Big Data/Algorithms/Algorithmic Transparency Response

I DIDN’T REALIZE I DIDN’T PUBLISH THIS LAST WEEK AHHH

My Experiment Opting Out of Big Data Made Me Look Like a Criminal
Janet tried to hide her pregnancy from all the online advertising companies because a pregnant woman's data is worth as much as knowing the age, sex, and location of up to 200 people. Hiding a secret like this requires a lot of cooperation (from the people you know), but I wonder if it's even possible to keep yourself off the grid?

The era of blind faith in big data must end
What if the algorithms are wrong? We shouldn't blindly trust big data or be intimidated by the math inside algorithms. Algorithms are not fair because they repeat our past patterns, since they are built on past data. O'Neil suggested we can fix biased algorithms through algorithmic audits: data integrity checks, the definition of success, and accuracy.
How do we raise the public awareness of biased algorithms?

Automating inequality
Eubanks started by explaining, from a historical standpoint, why we're building a digital poorhouse now. Then she gave some examples of how algorithms have never been fair to everyone: the Indiana food stamp system, systemic bias against lower-class families…

The Computer Says No
I realized I’ve seen some clips (and a lot of other memes) from Little Britain. But this is exactly what’s going to happen if we blindly trust algorithms and data!

The black box
A black box can refer to a data-monitoring system where we can only see the inputs and outputs, not the process. Credit reports, for example, are common now, but no one knows how they're calculated. I'm surprised that financial institutions can hide things from the public too.
I wonder what the vision of a transparent society will look like?

AI / Predictive Analytics / Recommendation Algorithms

How TikTok Holds Our Attention
The way TikTok's algorithm works is quite different from most of our social networks: "Some social algorithms are like bossy waiters: they solicit your preferences and then recommend a menu. TikTok orders you dinner by watching you look at food." Recommendation algorithms give us what we want and serve content based on our individual interests. We become less and less able to separate algorithmic interests from our own. I agree that this TikTok sensation will pass fairly quickly, since people do get bored of it and more and more similar platforms are coming up. While technology becomes more customized and tailored to our interests, we should also be more cautious toward it.
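Here is a minimal sketch of the "watching you look at food" idea, with made-up categories and watch times (none of this is from the article): the feed ranks clips by how long you lingered, not by anything you explicitly said you liked.

```python
# Hypothetical implicit-feedback ranking: serve clips from the categories
# the user lingered on longest, rather than asking for stated preferences.
watch_seconds = {"dance": 4, "cooking": 21, "relationship": 35}  # implicit signal

candidates = [
    ("clip_1", "dance"),
    ("clip_2", "relationship"),
    ("clip_3", "cooking"),
]

# Rank candidate clips by how long the user watched similar content before.
feed = sorted(candidates, key=lambda c: watch_seconds.get(c[1], 0), reverse=True)
print([clip for clip, _ in feed])  # ['clip_2', 'clip_3', 'clip_1']
```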

Inadvertent Algorithmic Cruelty
Algorithms are thoughtless: even now I still get birthday notifications for some deceased friends on FB. I know you can now report a deceased person on FB and memorialize the account; that way you won't see the person in any public spaces, whether it's suggested friends or reminders. I assume this is how Facebook tried to implement empathetic design.

Something is wrong on the internet
I've seen YouTube Kids very differently since reading a lot of articles about Elsagate (mostly from reddit/noSleep). But I never noticed the ginormous number of videos like Surprise Eggs and Little Baby Bum. "What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time." I wonder why these absurd videos specifically target kids?

There’s a new obstacle to landing a job after college: Getting approved by AI
I never knew AI was also involved in the recruiting process. I feel like digitizing the process will expose more flaws in the system; what if someone hacked the AI and chose themselves as the candidate? I also wonder whether the candidates chosen by AI actually perform well. Maybe in the future, we will be interviewed by AI!

The problem with metrics is a big problem for AI
I love the analogy between junk food and recommendation algorithms because it's accurate. Metrics are everywhere in our lives now, but the impact of metrics really depends on how we use them. I also find dark patterns interesting; they could make an interesting thesis for my research.

AI / Predictive Analytics / Recommendation Algorithms

James Bridle – Something is wrong on the internet

This topic is particularly frustrating for me because I have a little brother, age 5, who is potentially in danger of being exposed to content that could in some way damage his psyche, just because of the exploitation of YouTube's algorithms by people who will do anything for views. Something made to deliver more convenient and appropriate content has been exploited to the max, and I think it is in some ways a representation of the horrors of humanity (kind of like how the black market and the dark web exist to satisfy messed-up minds). I think it's very important to filter the content our kids watch, because if we let the algorithms determine what they watch next, we might accidentally cause some form of irreversible psychological damage.

Do you think that there should be human regulators on this topic? Why has this content gone unnoticed for so long and what can we do to stop it from continuing any further?

Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI

In the pursuit of convenience, we can see the relationship between humans and technology through our dependency on, and trust in, artificial intelligence. HireVue is a gatekeeping AI for entry-level jobs, created to help companies determine whether an applicant is suited for them. On a surface level, I can understand how this sort of system could be helpful, especially for large companies with a myriad of applicants to go through. However, it also presents the possibility of errors, and a system is just a machine determined to find what it was told to find. It does not have the capacity for "empathy" or an understanding of the connection present in in-person interviews. I think it would be a good idea to have an algorithm or AI determine whether the applicant fits the requirements in terms of data and experience, but NOT based on how they talk in a video.

Should a system like this persist or should we continue on with in-person interactions as a determining factor for employment?

Jia Tolentino – How TikTok Holds our Attention

TikTok has become such a prominent and influential part of our contemporary culture that it is no wonder people want to know how it works. But even those who are closely involved don't quite understand how it works. It keeps its users hooked by filtering content and determining what you would be interested in based on your interactions with it. Of course, there's also the issue of what content the application is hiding from you. I just find it fascinating how big and influential it has become while no one really knows how it fully works, and that in itself can present social issues that we may not be fully aware of until much later.

Eric Meyer – Inadvertent Algorithmic Cruelty

This is a very personal account of how an algorithm created to "bring joy to its users" deepened the cut of someone's heartbreak. Algorithms at their core are just machines created for convenience; they were never intended to have the heart of a human. In some sense, I can't quite blame algorithms for what they do; if there is blame to be placed, it should be on the humans who created them while disregarding the emotional impact they can have on people's lives. This is definitely something that should be monitored more, or at least people should be able to opt out of features that might remind them of pain.

Rachel Thomas – The problem with metrics is a big problem for AI

Metrics are meant to be useful for gauging users' interaction with and interest in the content they consume, but they are not always accurate. They can show us valuable information and data that help in recommending things the numbers say we enjoy, but there are a lot of other things to take into account when considering those numbers. I think about YouTube's trending page and how it doesn't actually reflect what users watch (it rarely shows content that is genuinely climbing in numbers from YouTubers who deserve that recognition); rather, it tends to present things that are likely gaining so many viewers from bots (like TroomTroom, a click-baiting life-hack channel that I personally think shits out shit content for views they buy; the content is subpar compared to the myriad of great content creators out there. They appear on the trending page A LOT, and I find it very frustrating.)