Link to final project!!

This was so much fun to do – a lot of hard work – but fun and so worth it for the final product. (We had a minor hiccup piecing it together since we had to export stuff to each other, but it’s almost 100% correct!) Also voice acting was interesting?? Haha! Not that it was anything dramatic but I’ve never done it before for characters – it was fun! 🙂 It was also just so nice to work on this project with Priyankka. I really enjoyed this semester with everyone and am sorry it ended this way, but we’ll all get through this together and come out stronger! Have a nice summer everyone! 🙂

Final Project Check-in

Hello!

Here is the link to Priyankka's and my project!! We are calling it "A Conversation". We split up the parts and storyboarded them. Next is to actually animate the frames into a video! For now, we've put all of our files on Google Drive.

https://drive.google.com/drive/folders/15Q2R2Xi_NWtuE07q0MoeDDsi5w4L5Hrv

Representative image (from my part):

Hope everyone is well! And hope you enjoy!

Project Update

Priyankka and I are going to make a storyboard of the video we planned. We have already split up the work for the storyboard. We will decide together what the character style will be so that we are generally consistent. Then I imagine we’ll edit our parts separately and combine them later. We plan to voice over our characters so as to evoke the idea of what our video would have looked like had it come to fruition. One day we may actually make it! After our talk with you the other day, we discussed possibly getting together over the summer if all of this settles down (hopefully it will have by then!).

AI / Predictive Analytics / Recommendation Algorithms

James Bridle – Something is wrong on the internet (Medium)

This was a scary article. I already knew I was going to limit my future children's use of YouTube and the internet, but this opened that up to a whole new level. The fact that so many inappropriate videos slip under the radar due to keywords and just bad algorithms in general is sickening. What's even more sickening, though, is the author's conclusion – that "the system is complicit with abuse". These channels are exploiting children purely to profit from it. They don't care that the content is inappropriate for children. It's just awful.


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

This article focuses on the idea of AI interviews. This seems like a really inaccurate way of hiring people – you can't understand who a person is just based on facial expressions. Like the article said, you especially can't tell a person's drive and passion from facial expressions and keywords. I hope this doesn't continue for much longer – or that the technology improves enough that algorithms can judge things like this better. Then again, I'd rather steer away from that, because then we might be looking at more "human-like" technology, and that's a whole separate scary topic.


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

This article talks about TikTok and other platforms similar to it. TikTok, like everything else nowadays, quickly learns what you like and caters to your interests. However, as the article explains, TikTok revolves around a bit of meme culture. This is contrasted with the apps that are popular in China and used similarly to TikTok, but which house more types of content – USEFUL content, like how to cook certain things – and even allow purchases to be made through the apps. The argument here was that these Chinese TikTok-like apps allow for more content and a broader audience – it's not all meme culture consumed by teens.


Eric Meyer – Inadvertent Algorithmic Cruelty (article)

This was so sad. What he said about algorithms being thoughtless, and how this wouldn't be okay for a person to show him, is true. I don't have Facebook, but I still have experience with "your year in review" type things because of Snapchat. I think they started doing it about three years ago now…? But I'm not entirely sure. Regardless, Snapchat's review clearly shows you that it's looking for keywords in your memories – like "fun" or "yay!" – to pull out the snaps it uses. However, I've experienced firsthand how this doesn't always pan out the way they hope. For example, sometimes I say 'yay' ironically – like when I'm actually really upset about something – so I've had Snapchat pull out really deep, emotional snaps I'd saved for myself, or just downright sad snaps where I was genuinely sad. I think what Eric was saying about implementing something that asks whether or not you want to see it makes sense – because that shows even a little bit of consideration or empathy from the algorithm/development team.
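Just to illustrate the kind of naive keyword matching I'm picturing (purely my own hypothetical sketch – I have no idea what Snapchat's real code looks like):

```python
# Hypothetical sketch of naive keyword-based memory selection.
# NOT Snapchat's actual system -- just an illustration of why
# keyword matching misfires on ironic or sarcastic captions.

HAPPY_KEYWORDS = {"fun", "yay", "love", "best"}

def pick_for_year_in_review(memories):
    """Select saved snaps whose captions contain 'happy' keywords."""
    picked = []
    for memory in memories:
        words = {w.strip("!.,?").lower() for w in memory["caption"].split()}
        if words & HAPPY_KEYWORDS:
            picked.append(memory)
    return picked

memories = [
    {"caption": "Best day at the beach!"},
    {"caption": "yay... failed my exam again"},  # ironic 'yay'
]

# Both snaps get picked -- the ironic one included, since the
# matcher has no notion of tone or context.
print(pick_for_year_in_review(memories))
```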


Rachel Thomas – The problem with metrics is a big problem for AI (article)

Metrics can be – but aren't always – useful. They can show us important things we want to know, but sometimes metrics are misleading because their subject pool isn't representative (I'm thinking specifically of the medical example here). The YouTube example is another good reference: half of the watchers are bots, not real people. So, saying that those who watch more are enjoying what they watch isn't entirely true, because these bots exist.
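A quick back-of-the-envelope with made-up numbers shows how bot views can poison a watch-time metric like this:

```python
# Toy numbers (all invented) showing how bot views can inflate an
# average-watch-time metric that then gets read as "people enjoy this".
human_views, bot_views = 1000, 1000            # half the 'watchers' are bots
avg_human_minutes, avg_bot_minutes = 2.0, 30.0

total_minutes = human_views * avg_human_minutes + bot_views * avg_bot_minutes
naive_avg = total_minutes / (human_views + bot_views)

print(f"naive average watch time: {naive_avg:.1f} min")      # 16.0 -- looks great
print(f"human average watch time: {avg_human_minutes} min")  # the real signal
```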

Big Data / Algorithms / Algorithmic Transparency

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)

This chapter talks about our privacy and how there is none. Machines and devices are tracking our every move, what we look at, where we spend our money, etc. Cameras are everywhere now – hidden in plain sight (think of phones) – and are constantly keeping tabs and surveillance on us. We each have a quantified data file that they use to 'define' us.


Cathy O'Neil – The era of blind faith in big data must end (Ted Talk, 13m)

O'Neil talks about how algorithms often aren't built to have biases, but accidentally do. For example, an algorithm for hiring people modeled on the successful people at Fox would end up hiring only men. Algorithms are based on the past – and if our world were perfect, that would be fine, but our world isn't perfect. So, we need to work really hard to create algorithms that won't accidentally have biases against certain people.
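To make the mechanism concrete, here's a toy sketch of my own (invented numbers, nothing from the talk) of how a model scored on past hires just reproduces the past:

```python
# Minimal sketch of how a model trained on biased historical hiring
# data reproduces that bias. All data here is invented.
from collections import Counter

# Past hires at a hypothetical company: almost all men.
past_hires = ["man"] * 19 + ["woman"]

def predict_success(candidate_gender, history):
    """Score a candidate by how often 'people like them' were hired before."""
    counts = Counter(history)
    return counts[candidate_gender] / len(history)

print(predict_success("man", past_hires))    # 0.95
print(predict_success("woman", past_hires))  # 0.05 -- the past becomes the rule
```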


Virginia Eubanks – Automating Inequality (talk, 45m)

Eubanks furthers a discussion we touched on last week: the idea of algorithms being 'fair' but stereotyping, or being more likely to help one group of people than another. In this case, though, she's talking about a person's income and poverty and how those are judged.


Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

This was a very interesting article. I've heard about the Target girl before, but I never thought about what it would be like to try to avoid something like that happening. It would be (and clearly proved to be) very hard to hide from the internet that you're pregnant. I personally think I would have failed for sure – I never would have thought to use a different browser or buy everything with gift cards (lol). It was pretty crazy, though, that she started to get flagged for possible illicit activity. Privacy is hardly obtainable in this day and age.


Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This skit was very funny and shows exactly why we can't leave everything up to computers. They can make mistakes – and this was a good example of a possibly big one. I thought it was particularly interesting, though, that this example chose to use a person as the go-between – especially the part at the end where the woman asked the worker if she could talk to someone else, to which they replied "I could but…" I just found it interesting that they took this approach versus just talking with a robot or something. But it was very successful either way!

Overall Question: Do you really think it's possible to develop a functioning algorithm that isn't biased in some way? Or would the only way for that to be possible be for someone (or multiple people) to monitor the algorithm constantly?

Junior Project Ideas

I'm in a bunch of other media classes right now and the topics always seem to intertwine, so I've suddenly been thrown into constantly thinking about media and its place in our world. I'm interested in topics of surveillance and convenience – which are tied directly to our phones. I even think now about what I say out loud: do I really want my phone to know that about me?

Yet, going past our relationships with phones (and surveillance), I've also recently been very interested in the idea of social media and 'likes'. In another media class of mine, we watched an episode of Black Mirror called "Nosedive" – if you haven't seen it, you should. It shows a future where you rate the people around you (between 1 and 5 stars) and everyone is constantly trying to get a higher rating. Their rating affects their lifestyle – what jobs they can have, what neighborhoods they can live in, etc. We watched this in a discussion of the idea of emotional labor. The characters in Nosedive were practicing their laughs and how to be charming to everyone because that would give them a better chance at being rated highly by their peers – but then no one is showing anyone their true self.

I bring this up because this could become our future (although, hopefully not now that there’s a visual of what could happen). It makes me wonder about how far our society will go since social media is so important to us and is manipulating us; could we reach this point or something similar in the future?

Beyond this idea, I'm also interested in the concept of emotional labor because I feel like it heavily applies to YouTubers – and I watch a lot of YouTubers (much more YouTube than TV). In fact, I watch a lot of the 'bigger' YouTubers and will watch all of their content just because I've grown to like them over the years. But I think there's this idea of people being genuine on YouTube specifically, because they can choose their own content and it's not like a TV show or series that has an extra layer of professionalism to it. However, we all have to know that people aren't always this cheery and their lives can't always be this glamorous – these are human beings too.

So, my main interests are in surveillance, the future of social media manipulation, and emotional labor. Hopefully, I’ll have a chance to talk about all three in my project because they all tie in well together with the help of Nosedive (which I could use as a reference).

So, my working question for now is this: How does emotional labor influence what and how we see of each other online, from the big social media influencers to our own friends and acquaintances?

(I may need to narrow that down a bit more – let me know what you think!)

Technology and Race

Safiya Noble – Algorithms of Oppression (talk, 45m)

Safiya talks about how algorithms have secretly been built in a biased fashion. This has been proven time and time again – as she discusses – through simple things like Google searches. The example that really stood out to me was "three black teenagers" versus "three white teenagers". The first search returned almost entirely mug shots of people of color, while the second was mostly made up of candid photos of white teenagers playing sports. The "professional hairstyles" example was equally strong. What was even crazier to me was that Google tried to cover it up by changing the algorithm by the next day.

Why do you think Google allows things like this to happen? Why wouldn't they try to make searches having to do with race (maybe image searches specifically) more equally representative?


Ruha Benjamin – Race After Technology (talk, 20m)

Ruha argues that racism is productive – not always in the negative sense, though it most often is. She too talks about how algorithms can be racist, through a specific example where an algorithm chose white patients over black ones even though it wasn't inherently 'racist'. Instead, the algorithm looked at cost to predict healthcare needs – but black people on average "incur fewer costs for a variety of reasons".
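Here's a minimal sketch of that proxy problem as I understand it (my own toy example with invented numbers – not the actual healthcare algorithm):

```python
# Toy sketch of the proxy problem: the algorithm never sees race, only
# past healthcare cost -- but cost is unevenly correlated with need
# across groups, so a "race-blind" rule still skews its picks.

patients = [
    # (id, actual_need_score, past_cost_in_dollars) -- all numbers invented
    ("A", 9, 4000),   # high need, but historically incurred lower costs
    ("B", 5, 9000),   # moderate need, historically higher costs
]

def prioritize_by_cost(patients):
    """Rank patients for extra care by past cost, used as a proxy for need."""
    return sorted(patients, key=lambda p: p[2], reverse=True)

# Patient B is ranked first despite patient A's greater actual need --
# no 'race' variable anywhere, yet the proxy still skews the outcome.
for pid, need, cost in prioritize_by_cost(patients):
    print(pid, "need:", need, "past cost:", cost)
```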

Do you think it's possible for algorithms to be created without racial bias? I ask because we have an example of an algorithm that didn't even take race into account, yet still ended up making race-based decisions. Do you think problems like this will keep appearing and be solved with easy fixes, or will we be facing them for a long time?


Lisa Nakamura – Laboring Infrastructures (talk, 30m)

Lisa talks a lot about VR and empathy – and how VR is not actually empathetic. It often puts a white audience into a different racial experience, usually as a way to bring awareness to something going on in our world. Yet while this audience might be affected by the VR experience, that doesn't mean anything will actually happen because of it. She also complicates the idea of being in someone else's shoes: she says that being in someone else's shoes means you've taken those shoes.

I've been dealing with VR in my Interaction II class: do you feel that VR in general is something that should be pursued and used in this way? Or are there other effective ways to get our points across?

Social Interaction, Social Photography, and Social Media Metrics

What can’t we measure in a quantified world?:

This talk focused on how our world now consists of many numbers. What can machines and technology really measure about us – and what does the data really mean? Take, for example, the discussion about measuring steps and the apps that collect and track your data. There is always a directly quantified element, like steps or how long you were at certain locations. She made it very clear that a lot of what's being quantified nowadays doesn't really make much sense – like the baby example and, again, tracking where you are throughout the day (mostly what's silly about that one is that it assumes things like the place you're at for a few hours is "work", or the first time your phone moves is "when you wake up"). What's worse is that this tracking and collecting of data is only growing in popularity.
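Here's a toy version of the kind of crude location heuristic she's poking at (thresholds and labels invented by me, not from any real app):

```python
# Hypothetical sketch of a crude location heuristic: label the place
# you spend the most daytime hours as "work", whether or not it is.
from collections import Counter

# (hour_of_day, place) pings from one day -- a day spent studying at a cafe
pings = [(h, "cafe") for h in range(9, 17)] + [(h, "home") for h in range(18, 23)]

def guess_work_location(pings):
    """Call whatever place dominates 9am-5pm 'work' -- right or not."""
    daytime = [place for hour, place in pings if 9 <= hour < 17]
    return Counter(daytime).most_common(1)[0][0]

# A day spent studying at a cafe gets confidently labeled "work".
print("work =", guess_work_location(pings))  # -> work = cafe
```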

Why do you think people want things in their life quantified? Why is there such an infatuation with numbers – and is it conscious or subconscious?

The Social Photo:

The idea of photos bridging both the past and the future is really interesting. I also love the discussion of physical photographs becoming more and more valuable. It's odd, but I notice this in myself as well. A few weeks ago, my grandfather handed me a group of photos of my mother and my aunts and uncles when they were young. Each one had a different texture and feel – and each was a different size. It was so satisfying to hold a physical image with its own unique features compared to the others in the pile. There's just something very nostalgic about physical photos – they hold a different sort of value than our quickly taken and forgotten digital ones.

What do you think the future of photography holds? Will we enter a phase of going back to physically printed photos again?

What do Metrics Want? How Quantification Prescribes Social Interaction on FB:

This article talks about the quantification of social data in social media overall, through the example of Facebook. Ben makes the important point that Facebook takes a capitalistic approach by quantifying our basic need for socialization. In real life, we are looking for relationships and friends. Yet when likes and friends are quantified and the numbers are constantly shown to us and compared, this data becomes addicting and has meaning attached to it that isn't as true as we'd like to believe or admit. As Ben says, if the numbers weren't in our faces, there wouldn't be this constant engagement with Facebook, because we wouldn't feel the need for 'more' without the quantified data constantly being compared.

Do you think (or know something about!) quantifying social data like this is having major effects on our generation and how our brains work? (If so, explain your thoughts)

Interface Criticism

How to be a geek:
This article talks about how we are just starting to develop and teach everyone a language that explains software and how it works. This is because software is so complex that it's hard to teach and understand the multitude of ways it can be "understood, experienced, (and) put into play." These conversations about software need to be extended beyond the private conversations between those who are proficient with tech. A geek is someone who knows almost too much about their subject – they gush about their interests with an overenthusiasm that can make others, who find the information "dry" and boring, feel awkward. Geeks rule in their companies, usually from powerful positions. I really enjoyed this fun, educational definition of a geek, and how the term is looked down upon even though geeks are actually very important.

Do you think it will take a long time to develop a proficient language that explains software? "Software" is very general, so you can narrow that down to a specific software you know. Do you think a simple guidebook could be made quickly and be the answer? Or is any software too complex to define just in writing or through language?

Introduction: Software, a Supersensible Sensible Thing:
The story of the six blind men and the elephant: all were right to some extent, but all were also wrong. It's comparable to the internet and software. The chapter focuses on the idea of software becoming a metaphor for many things – "the mind, for culture, for ideology, for biology, and for the economy." "Computers have become metaphors for all 'effective procedures,' that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work." Software is difficult to comprehend, which encourages ignorance – but that is not the way to go. Software illuminates the unknown, yet does so while remaining unknowable itself, which makes it a paradox.

I feel like we are like the six blind men and the elephant; we each know and understand something about different pieces of software. So, I guess the question is: should we work together to educate each other? Or is this something we should expect the professionals to do? Should they be sharing their knowledge with us in the sense that 'fair is fair', or should we just focus on working together on our own time to learn new things about new software?

Sad by Design:
In this podcast, they start with the idea that people have been taught that humans are bad and technology is the savior. The two speakers are on 'Team Human'. They dive into how kids today are more and more attached to media – especially social media. They feel like they need to check it every few minutes and can't get away. It's being built into our brains, and this need is also built into how all the programs are designed. They also talk about how, even though we have more connections online, they don't release the endorphins that in-person interaction does – so it's an inherently sad version of socialization. Also, our online selves often aren't meant to reflect our real selves – which, again, sort of seems to defeat the purpose.

As people who all participate in social media, why do you think our social media personas ARE personas? I ask this because I feel it’s sort of a subconscious thing we do; so why do you feel that we respond in this way?

New way of hiding: towards metainterface realism: 
This is the scary article – the one about how so many of the devices and programs we use – that we CHOOSE to use – are monitoring us in ways we can't see or expect. The scary part is, again, that we choose to use them; these metainterfaces are being built into almost everything, and we want to use them because they make our lives easier. However, these things are gathering information on us every second of the day – tracking and mapping our locations and paths and learning what we like. These are all things we never really agreed to, and that makes it all the more scary.

Since these metainterfaces are in everything we use nowadays (think phones especially), do you think we can escape an age where this happens? For example, do you think laws will be put in place sooner rather than later to protect future generations (since ours is probably too documented to escape at this point), or do you think we will be secretly monitored regardless?

Digital Democracies Conference talk/Cyborgology Responses

Digital Democracies Conference:

Helen Nissenbaum covers a wide range of topics concerning web browsers and search engines. She focuses on the idea of obfuscation, which she explains takes the approach of obscuring your info by introducing noise instead of trying to hide it. She discusses two of her own projects during the talk: TrackMeNot and AdNauseam. TrackMeNot was made in response to finding out that search engines save our search queries; it's a browser add-on that sends fake queries to the search engines (using obfuscation!). AdNauseam is another add-on that virtually clicks on and likes all of the ads a user encounters. The program stores all of the clicked ads in an "AdVault", which can be accessed to show what ads the industry has been sending you and, in turn, how the industry views you. It gives us some insight into how our individual profiles are being viewed.
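Here's a minimal sketch of the obfuscation idea as I understand it (my own toy version – not TrackMeNot's actual code):

```python
# Toy sketch of query obfuscation: bury the user's genuine query in a
# stream of randomly chosen decoy queries so the search log becomes noisy.
import random

DECOY_TOPICS = ["weather radar", "banana bread recipe", "used bikes",
                "movie times", "spanish verbs", "hiking trails"]

def send_query(q):
    # Stand-in for an actual HTTP request to a search engine.
    print("sent:", q)

def obfuscated_search(real_query, noise=5):
    """Send the real query mixed among `noise` random decoys."""
    queries = [real_query] + random.sample(DECOY_TOPICS, noise)
    random.shuffle(queries)
    for q in queries:
        send_query(q)

# The real query is indistinguishable from the decoys in the log.
obfuscated_search("symptoms of flu")
```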

Nissenbaum argues that the future is private – as do many of the other authors/speakers in this week's readings/videos. How do you feel about this statement? Do you believe the future is private?

Cyborgology:

Jenny Davis uses her article to talk about an app called SpotterEDU, which advertises itself as an "automated attendance monitoring and early alerting platform". The idea is that students download the app and universities can then keep track of who's coming to class. While it sounds useful and interesting, it's not worth the social cost; students would experience full, unhidden surveillance and might be judged based on how they spend their time. One of the most effective examples is when Davis brings up financial aid under this system: "Students on financial aid may have their funding predicated on behavioral metrics such as class attendance or library time." She furthers this point by saying, "Students who work full time may be penalized for attending class less regularly or studying from remote locations." This system would also collect data and form an average that reflects the demographic majority – which she points out is white and upper-middle class – meaning many demographic minorities would be flagged as abnormal and in need of more surveillance and intervention. The system may look good at a glance, but it would result in too much surveillance, and the data could be exploited – because data is a valuable resource.
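Just to sketch the flagging logic I'm imagining here (a toy of my own with invented numbers – not SpotterEDU's actual system):

```python
# Toy sketch of why flagging deviation from the group average punishes
# anyone whose life doesn't match the majority's schedule.

attendance_hours = {
    "full_time_worker": 4,   # works full time, studies remotely
    "student_a": 12,
    "student_b": 11,
    "student_c": 13,
}

def flag_abnormal(hours, threshold=0.6):
    """Flag anyone below a fraction of the group average as 'abnormal'."""
    avg = sum(hours.values()) / len(hours)
    return [s for s, h in hours.items() if h < avg * threshold]

# The working student gets flagged for 'more surveillance and intervention'.
print(flag_abnormal(attendance_hours))  # -> ['full_time_worker']
```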

Can you think of any experiences you’ve had that feel similar to or align with the idea of this app? For example, has there been an app that has worried you? Or does your phone’s camera scare you? Etc.