AI / Predictive Analytics / Recommendation Algorithms

James Bridle – Something is wrong on the internet

This topic is particularly frustrating to me because I have a five-year-old little brother who is in danger of being exposed to content that could damage his psyche, simply because people who will do anything for views exploit YouTube's algorithms. Something built to surface more convenient and appropriate content has been exploited to the fullest, and I think it reflects some of the darker sides of human nature (much like how black markets and the dark web exist to satisfy messed-up minds). I think it's very important that we filter the content our kids watch, because if we let the algorithms decide what they see next, we might accidentally cause some form of psychological damage that is irreversible.

Do you think there should be human regulators reviewing this kind of content? Why has it gone unnoticed for so long, and what can we do to stop it from continuing any further?

Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI

In the pursuit of convenience, we can see the relationship between humans and technology in our dependency on and trust in artificial intelligence. HireVue is a gatekeeping AI for entry-level jobs, built to help companies decide whether an applicant is suited to their company. On a surface level, I can understand how this sort of system could be helpful, especially for large companies with myriad applicants to sort through. However, it also raises the problem of errors, and of the fact that a system is just a machine determined to find whatever it was told to find. It does not have the capacity for "empathy" or an understanding of the connection that is present in an in-person interview. I think it would be a good idea to have an algorithm or AI determine whether an applicant fits the requirements in terms of data and experience, but NOT based on how they talk in a video.

Should a system like this persist, or should we keep in-person interaction as the determining factor for employment?
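
To make concrete what I mean by screening on data and experience rather than on a video, here is a rough, hypothetical sketch of a purely credential-based filter. The field names, skills, and thresholds are made up for illustration; this is not anything HireVue actually does.

```python
# Hypothetical screening on structured application data only -- no video analysis.
# Fields, skills, and thresholds are invented for illustration.

REQUIRED_SKILLS = {"python", "sql"}
MIN_YEARS_EXPERIENCE = 1

def meets_requirements(applicant: dict) -> bool:
    """Return True if the applicant satisfies the posted, objective criteria."""
    has_skills = REQUIRED_SKILLS.issubset(set(applicant.get("skills", [])))
    has_experience = applicant.get("years_experience", 0) >= MIN_YEARS_EXPERIENCE
    return has_skills and has_experience

applicants = [
    {"name": "A", "skills": ["python", "sql"], "years_experience": 2},
    {"name": "B", "skills": ["excel"], "years_experience": 4},
]

shortlist = [a["name"] for a in applicants if meets_requirements(a)]
print(shortlist)  # ['A'] -- everyone shortlisted would still get a human interview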

Jia Tolentino – How TikTok Holds Our Attention

TikTok has become such a prominent and influential part of our contemporary culture that it is no wonder people want to know how it works. But even those who follow it closely don't quite understand it. It keeps its users hooked by filtering content and predicting what you will be interested in based on how you interact with it. Of course, there's also the issue of what content the app is hiding from you. I just find it fascinating that something so big and influential works in a way no one fully understands, and that in itself can create social issues we may not be aware of until much later.
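
No one outside the company knows how TikTok's ranking really works, but the general idea of scoring videos by a user's past interactions can be sketched in a few lines. The signals and weights below are invented purely for illustration; this is a toy, not TikTok's real algorithm.

```python
# Toy engagement-weighted ranking -- NOT TikTok's actual system.
# Signals and weights are invented for illustration.

WEIGHTS = {"watched_full": 3.0, "liked": 2.0, "shared": 4.0, "skipped": -2.0}

def interest_profile(history: list[dict]) -> dict:
    """Accumulate a per-topic score from past interactions."""
    profile: dict[str, float] = {}
    for event in history:
        for signal, weight in WEIGHTS.items():
            if event.get(signal):
                profile[event["topic"]] = profile.get(event["topic"], 0.0) + weight
    return profile

def rank(candidates: list[dict], profile: dict) -> list[dict]:
    """Order candidate videos by how well their topic matches the profile."""
    return sorted(candidates, key=lambda v: profile.get(v["topic"], 0.0), reverse=True)

history = [
    {"topic": "cooking", "watched_full": True, "liked": True},
    {"topic": "politics", "skipped": True},
]
candidates = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "cooking"}]
print(rank(candidates, interest_profile(history)))  # the cooking video comes first
```

The same loop that keeps serving you what you engaged with is also what quietly hides everything else, which is the part I find most unsettling.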

Eric Meyer – Inadvertent Algorithmic Cruelty

This is a very personal account of how an algorithm created to "bring joy to its users" cut even deeper into someone's heartbreak. Algorithms, at their core, are just machines built for convenience; they were never intended to have the heart of a human. In some sense, I can't quite blame the algorithm for what it does; if there is blame to be placed, it belongs with the humans who created it while disregarding the emotional implications it could have on people's lives. This is definitely something that should be monitored more closely, or at the very least people should be allowed to opt out of features that might remind them of their pain.

Rachel Thomas – The problem with metrics is a big problem for AI

Metrics are meant to be a useful way to measure users' interaction with and interest in the content they consume, but they are not always accurate. They can give us valuable data that helps in recommending things the numbers say we enjoy, but there is a lot more to take into account than the numbers alone. I think about YouTube's trending page and how it doesn't seem to reflect what users actually watch: it rarely surfaces content that is genuinely climbing in numbers from YouTubers who deserve that recognition, and instead tends to feature things that are likely gaining many of their views from bots. TroomTroom, a click-baiting life-hack channel, is my go-to example; I personally think it churns out subpar content for views it may well buy, especially compared to the myriad of great creators out there, and yet it appears on the trending page A LOT, which I find very frustrating.
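
YouTube's trending formula isn't public, so the following is only a toy illustration of the underlying worry: when a ranking leans on a single raw number like view count, inflated views dominate, even when other signals point somewhere else entirely. All the numbers here are made up.

```python
# Toy illustration of the metrics problem -- not YouTube's actual trending logic.
# Ranking by raw views rewards inflated counts; blending in engagement
# signals (hypothetical formula) tells a different story.

videos = [
    {"title": "clickbait hacks", "views": 5_000_000, "avg_watch_fraction": 0.08, "likes": 4_000},
    {"title": "indie creator",   "views":   300_000, "avg_watch_fraction": 0.65, "likes": 45_000},
]

def by_views(v):
    return v["views"]

def blended(v):
    # Hypothetical blend: reward completed watch time and likes, not just clicks.
    return v["views"] * v["avg_watch_fraction"] + 50 * v["likes"]

print(max(videos, key=by_views)["title"])  # 'clickbait hacks'
print(max(videos, key=blended)["title"])   # 'indie creator'
```

The point isn't that any particular blend is correct; it's that whatever number you optimize for becomes the thing people game, which is exactly the problem Thomas describes.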
