James Bridle – Something is wrong on the internet (Medium)
This was a scary article. I already knew I was going to limit my future children’s use of YouTube and the internet, but this opened that up to a whole new level. The fact that so many inappropriate videos slip under the radar because of keyword matching and just bad algorithms in general is sickening. What’s even more sickening, though, is the author’s conclusion that “the system is complicit with abuse”. These channels are exploiting children purely for financial gain; they don’t care that the content is inappropriate for children. It’s just awful.
Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)
This article focuses on the idea of AI interviews. This seems like a really inaccurate way of hiring people: you can’t understand who a person is based only on facial expressions. Like the article said, you especially can’t tell a person’s drive and passion from facial expressions and keywords. I hope this doesn’t continue for much longer, or that we develop algorithms that have a better means of judging things like this. Then again, I’d rather steer away from that, because then we might be looking at more “human-like” technology, and that’s a whole separate scary topic.
Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)
This article talks about TikTok and platforms similar to it. TikTok, like everything else nowadays, quickly learns what you like and caters to your interests. However, as the article explains, TikTok revolves largely around meme culture. The article contrasts this with similar apps that are popular in China, which house more types of content, including useful content like how to cook certain dishes, and even allow purchases to be made within the app. The argument here was that these Chinese TikTok-like apps support more kinds of content and a broader audience; it’s not all meme culture consumed by teens.
Eric Meyer – Inadvertent Algorithmic Cruelty (article)
This was so sad. What he said about algorithms being thoughtless, and how it wouldn’t be okay for a person to show him that, is true. I don’t have Facebook, but I still have experience with “your year in review” features because of Snapchat. I think they started doing it about three years ago now…? But I’m not entirely sure. Regardless, Snapchat’s review clearly shows that it looks for keywords in your memories, like “fun” or “yay!”, to pull out the snaps it uses in the review. However, I’ve experienced firsthand how this doesn’t always pan out the way they hope. For example, sometimes I say “yay” ironically, like when I’m actually really upset about something, so I’ve had Snapchat pull out really deep, emotional snaps I’d saved for myself, or just downright sad snaps where I was genuinely sad. I think what Eric said about implementing something that asks whether or not you want to see it makes sense, because that shows even a little bit of consideration or empathy from the algorithm and its development team.
Rachel Thomas – The problem with metrics is a big problem for AI (article)
Metrics can be, but aren’t always, useful. Metrics can show us important things we want to know, but sometimes they aren’t reliable because their subject pool isn’t representative (I’m thinking specifically of the medical example here). The YouTube example is another good reference: half of the watchers are bots, not real people. So, saying that those who watch more are enjoying what they watch isn’t entirely true, because these bots exist.