AI / Predictive Analytics / Recommendation Algorithms

Something is Wrong on the Internet

This is not the first time I have heard of this phenomenon; it's been an increasingly terrible problem that YouTube has had for years, culminating in its rule change to comply with COPPA, which has only created more harm for creators without really fixing the problem. Some of these videos are just weird, like the author said: they are made in an AI factory of sorts, so it's obvious they are not made by humans but generated through an algorithm based on what children have previously watched.

While all of this content, including the harmful, creepy, and abusive material, is available to children and is something to be concerned about, it also reminds me of when I was a kid watching PewDiePie. The games he played were horror games not meant for children my age, and the jokes he made were likely not meant for children my age either. There had to be moments when, through no fault of my parents, I saw something I shouldn't have. I realize that can come off as "my parents did X terrible thing and I turned out fine!", but it does raise the question: at what point does content stop being a normal learning curve of consumption and become the problem itself?

This is where I disagree with the author: I do think it is on YouTube and these other platforms to take care of this problem, because they are the platforms that house such content and made it possible for this content to be created in the first place. Beyond that, the way they have decided to "fix" it is the wrong one; it has not actually fixed the issue, and it has harmed content creators who had nothing to do with the problem, from forcing adult creators making content for adults to constantly censor their work, to doll-customizing channels having to beg viewers to protest the change so their content doesn't get deleted entirely.

There's a New Obstacle to Landing a Job After College

This reminds me of the students whose attendance is monitored by an app. The only thing I can think of is: what happens when the AI and its algorithm are inevitably wrong? When it says overqualified candidates aren't a good fit and underqualified ones are? Is this going to be an even greater tool for bias in these industries? What about racism, classism, or ableism? I also find it baffling that an AI is meant to understand empathy when it itself likely can't empathize. It's already been shown that standardized testing isn't an actual measure of students' intelligence or understanding of the material, so why would a standardized test for interviewing be any better? No social behavior is a foolproof signal for analyzing a person. For example, saying that not looking someone in the eye means they are lying, when really that person could just be anxious, or on the autism spectrum, or have something else that has informed their behavior. Even other people have trouble reading these cues, let alone an AI being "trained" to analyze this data, and as we've discussed before, the data itself isn't foolproof either.
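To make the eye-contact example concrete, here is a deliberately crude sketch; the rule, the threshold, and the candidates are all invented for illustration and are not how any real interviewing system works:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    eye_contact_ratio: float  # fraction of the interview spent looking at the camera
    is_honest: bool           # ground truth no system can actually observe

# Invented candidates: B is anxious and C is on the autism spectrum;
# both are honest but avoid eye contact.
candidates = [
    Candidate("A", 0.85, True),
    Candidate("B", 0.30, True),
    Candidate("C", 0.25, True),
]

def flagged_as_deceptive(c: Candidate, threshold: float = 0.5) -> bool:
    """The naive rule criticized above: low eye contact == 'lying'."""
    return c.eye_contact_ratio < threshold

for c in candidates:
    if flagged_as_deceptive(c) and c.is_honest:
        print(f"{c.name}: honest, but wrongly flagged as deceptive")
    else:
        print(f"{c.name}: passed")
```

Even in this tiny made-up case, two of the three honest candidates get flagged, because the single behavioral signal stands in for something it cannot actually measure.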

How TikTok Holds Our Attention

What scares me the most about both reading this article and TikTok's algorithm itself is how much it can become a feedback loop. That can mean funny cat videos dominating someone's feed, but because of TikTok's racism problem, it can also turn into a loop of white supremacy or other terrible things. Like most social media feeds, it caters to what the user wants, but TikTok has proven to be one of the most efficient platforms at doing this. Facebook and Instagram are peppered with so many sponsored ads and recommended posts that half the time it's impossible to see the actual people you follow, and I personally just hate Twitter, so I can't speak to it. TikTok, however, has your feed personalized in mere minutes, and from what I can tell, while it has ads, they aren't like sponsored posts on Facebook pretending to be more content.
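To make the feedback-loop idea concrete, here is a minimal, purely illustrative sketch; the categories, numbers, and scoring are invented and are not TikTok's actual system. It shows how a recommender that only rewards engagement narrows a feed over time:

```python
import random
from collections import Counter

# Hypothetical content categories -- not TikTok's real taxonomy.
CATEGORIES = ["cats", "dance", "cooking", "extremist"]

def pick_video(engagement: Counter) -> str:
    """Recommend a category in proportion to past engagement,
    with a tiny exploration term so nothing is ever impossible."""
    weights = [engagement[c] + 0.1 for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights)[0]

def simulate(sessions: int = 300) -> Counter:
    engagement = Counter()
    # Invented "user": slightly more likely to linger on certain categories.
    linger_chance = {"cats": 0.6, "dance": 0.3, "cooking": 0.3, "extremist": 0.5}
    for _ in range(sessions):
        shown = pick_video(engagement)
        if random.random() < linger_chance[shown]:
            engagement[shown] += 1  # a rewatch/like feeds straight back into the weights
    return engagement

print(simulate())  # the feed drifts hard toward whatever got reinforced early on
```

Whatever this imaginary user happens to reinforce early on quickly dominates the weights, which is the loop the article describes, whether the winning category is cat videos or something much worse.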

The most interesting part of this for me is how TikTok is making musicians' lives better and giving them more ways to grow. But like all things, it comes down to virality, which is hard to track, and there is no reliable formula for becoming viral.

My biggest worry with TikTok comes from a point that I don't think the article really focused on. A lot of these people are children, and there are basically no regulations on the app to protect them. Young boys are sexualizing themselves to get even younger girls to thirst after them and make them famous. Young girls are doing sexualized dances and being horribly bullied for them. Beyond all the horrible stuff that can be found there and the feedback loop the algorithm creates, at the end of the day it's the other people on the site that I'm concerned about, because people are the ones who make horrible comments or do creepy things.

Inadvertent Algorithmic Cruelty

I find it hard to respond to this one because at its core it's deeply personal. But the author is right: these AIs and algorithms are thoughtless. They aren't purposefully cruel, but the people who made them probably didn't consider the possibility that someone's year wasn't great, and didn't think about the words or the pictures the algorithm might choose. I think the author's idea, to make using it a choice, was a good one and a good option for changing the platform for the better. But what are some other ways these programs could be more empathetic? What are some ways to change the system to account for these kinds of situations?

The problem with metrics is a big problem for AI

The first thing that came to mind when they gave the example from Google, that watching more YouTube meant people were content with what they were watching, was mental health and executive dysfunction. For example, I watch more YouTube when I am having a hard time focusing or making decisions, because it's a default. I can be bored out of my mind and still be watching YouTube because my executive dysfunction makes it impossible to do anything else. That is just one example of how metrics can't really measure what is most important. It would (hopefully) be impossible for Google to know why I was watching YouTube and understand what that meant for their needs (whether or not I'm happy with the content I'm watching = more money for them). This goes along with the author's point about addictive environments: if I were in a healthier place, it's probable that I would not be watching so much YouTube, but the analytics don't care about that.

Another example like the one in the article comes from autoplay. I have that feature turned off, but I also watch YouTube to fall asleep (I know, I know, unhealthy sleeping habit). So if autoplay is on and I fall asleep, YouTube can keep playing content for hours upon hours, and that inflates their own metric.
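As a toy illustration of the proxy-metric problem (the sessions and numbers below are made up, not anything Google actually logs), very different states of mind, including the autoplay-while-asleep case, all pile into the same watch-time total:

```python
# Invented example sessions: (minutes watched, why the viewing happened).
sessions = [
    (30, "genuinely enjoyed a video essay"),
    (90, "couldn't focus on anything else, defaulted to YouTube"),
    (240, "fell asleep with autoplay on"),
]

# The proxy metric the platform optimizes for...
total_watch_time = sum(minutes for minutes, _ in sessions)

# ...versus the thing that metric is supposed to stand in for.
satisfied_time = sum(minutes for minutes, reason in sessions if "enjoyed" in reason)

print(f"watch time the metric sees: {total_watch_time} min")        # 360
print(f"time the viewer was actually happy: {satisfied_time} min")  # 30
```

The metric only ever sees the 360 minutes; the "why" column is exactly the part it cannot observe.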

So are there ways to track metrics that more accurately reflect the questions people are actually asking? Should they be more accurate?
