James Bridle – Something is wrong on the internet (Medium)
This article kind of creeped me out. It talks about content creation for children, which I’ve seen a lot of YouTubers complain about recently. It focuses on automation: how children’s content can be churned out in bulk, with the same basic character model repurposed into endless variations of a show. Something that really shocked me was the offensive T-shirts that were automatically generated and sold on Amazon. And because AI is being used to produce content for children, the result can be a really creepy, messed-up video that no one expected at all.
Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)
This article hit way too close to home because I, like probably most people in senior year, am trying to find a job and struggling. A lot of people tell me to work buzzwords into my resume somehow so “the algorithm can pick it up out of the other resumes.” I totally get why companies use AI to sift through resumes, since they probably get so many applicants, but I find it pretty sad that so many people have to go through that process; it just seems a bit unfair.
Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)
This article talks about TikTok and the general premise of the application. It notes that the creators are mostly young, and that an older creator is hard to come by. It was interesting to me that TikTok is described as an app where the content is built primarily on music rather than speech or text, so it reaches a wider audience. This way, people from all over the world can create content that everyone else can understand and relate to, no matter where they are.
Eric Meyer – Inadvertent Algorithmic Cruelty (article)
This article kind of made me sad. It opens by talking about how Facebook generates these “Year in Review,” “Happy Birthday,” and “Your friendaversary: Here’s your friendship history” posts for people without being able to account (of course) for the history someone may have with those particular events or people. The result is something that can stir up negative emotions. I’ve actually experienced this kind of algorithmic cruelty myself, when Facebook showed me a friendaversary with someone I’m no longer friends with in person but stayed friends with on Facebook. I don’t think there’s any way for Facebook to stop this from happening unless it let us log negative interactions with people, and I don’t think that would be a very good feature to implement.
Rachel Thomas – The problem with metrics is a big problem for AI (article)
This article talks about how the most important things can’t be measured with metrics, yet people rely more and more on AI, and AI is driven almost entirely by metrics. The article makes many points about how metrics focus on the wrong things or measure irrelevant ones, and it seems like the best option would be to have an actual person go through whatever is being measured and give an informed judgment. It’s sad that that process is also super subjective and biased. Is there even a right way to sort through data at this point?