Something is Wrong with the Internet – Bridle
(I mean, when hasn’t there been something wrong with the internet.)
Bridle makes the case that YouTube's algorithms are effectively feeding kids an increasingly nonsensical and genuinely harmful stream of videos — videos that exist not to provide any artistic or creative content that would genuinely entertain children, but to game search and recommendation algorithms for views. I remember seeing a TED Talk about this earlier, and when I looked it up I found it was made by the same guy who wrote this article (https://www.youtube.com/watch?v=v9EKV2nSU8w&t=568s). I think the unfortunate thing about all of this is how little concern YouTube shows for its child audience. When the Logan Paul "suicide forest" scandal was going down, both he and his brother, Jake Paul, were criticized for how their inappropriate content was marketed toward kids as young as 7-8. On the YouTube Kids app, directly typing "Jake Paul" will yield no results, but modifying your search to "jakepaul" or something similar will. YouTube in general needs to pick up on this, because unlike adults, young children do not have the capacity to recognize and avoid distressing content.
There’s a new obstacle to landing a job after college: Getting approved by AI – Metz
This whole article is incredibly relevant to me right now, because all I've been doing for the past several months is applying to internships and jobs for the summer so I can maybe not be broke next semester. I've noticed a significant change between last year and this year in articles and help websites meant to give resume and cover letter advice: the advent of advice on how to "get through the algorithm". It's the most warped thing, and as much as I am all about a cyberpunk future, I was thinking more Blade Runner and less Black Mirror. What I didn't expect was algorithmic evaluation of interviews themselves, which Metz spends a significant amount of time detailing. Unfortunately, I wasn't able to access Yobs.io, the interview-candidate simulator that tells you how these algorithms will perceive your job competency, but regardless it's weirdly horrifying to know that even my mannerisms will be judged by a bot.
How TikTok Holds Our Attention – Tolentino
This article is about TikTok's algorithms, and how they work hard to select content that a user attends to more, funneling them into a stream of content they're more likely to enjoy seeing. This does have some positive impact — uplifting musicians like Lil Nas X, providing internet celebrity to unsuspecting teenagers, etc. But it has tons of downsides. A curious and uninformed user could easily, accidentally, be sent into a swamp of unsavory content, ranging from jokes made in poor taste to blatantly fascist rhetoric. It's like Facebook on crack. Not to mention, its ownership by ByteDance, a China-based company, raises concerns about data privacy. The Chinese government is not known for benevolence, and there are concerns that apps like TikTok could be used to feed its rhetoric. Overall it's a mess and a half, but inevitable considering how little concern our legislators and world leaders seem to have for the online world.
Inadvertent Algorithmic Cruelty – Meyer
I feel like this entire class is just going to be all of us constantly dunking on Facebook, and honestly, rightfully so. This article was both horrifying and depressing to read, but it isn't surprising that Facebook would have so little regard for a person's feelings. The platform is built to keep you engaged, and one of the best ways to do that is to show you images it thinks you will emotionally connect to — even if that image is of your literal dead daughter. Zuckerberg has no need to care about an individual person unless they go viral, and Facebook's PR has historically been poor at handling basic human emotion.
The problem with metrics is a big problem for AI – Thomas
The conflation of metrics with a full picture of the available information is a major flaw in the way we analyze our data relations, and by extension how we interact on platforms. Thomas brings up the example of the Mueller investigation, and how Russian state TV's coverage of it was inordinately promoted over other videos. Individuals and groups working in bad faith can game algorithms to promote general havoc; we saw that in the Christopher Wylie/Cambridge Analytica reading several weeks ago. The people who have the power to act ignore the issue entirely, whether willingly or out of inattention, and it's having real-world consequences on our political and social landscape.