Rough Draft

https://drive.google.com/drive/folders/15Q2R2Xi_NWtuE07q0MoeDDsi5w4L5Hrv?usp=sharing

Title: A Conversation

Right now we’ve got a storyboard, script, and all the panels we need to make a full animatic! All that’s left to do is to assemble everything. Paige and I have put all our work so far in the above Google Drive folder, and we highly recommend you check out the script as well as our storyboard!

Junior Project Update

Paige and I have decided that instead of filming a full movie, we’ll make a voiced-over animatic of a storyboard. By the end of this week we should be finished with basic character designs so that the actual drawings stay more or less consistent throughout, and we’ll get started on storyboarding and making the keyframes of the animatic by this weekend.

AI/Predictive Analytics/Recommendation Algorithms

Something is Wrong with the Internet – Bridle

(I mean, when hasn’t there been something wrong with the internet?)

Bridle makes the case that YouTube’s algorithms are effectively feeding kids an increasingly nonsensical and genuinely harmful stream of videos that exist not to offer any artistic or creative content that would genuinely entertain children, but to game search and recommendation algorithms for tons of views. I remember seeing a TED Talk about this earlier, and when I looked it up I found it was made by the same guy who wrote this article (https://www.youtube.com/watch?v=v9EKV2nSU8w&t=568s). I think the unfortunate thing about all of this is how little concern YouTube shows for its child audience. When the Logan Paul “suicide forest” scandal was going down, both he and his brother, Jake Paul, were criticized for how their inappropriate content was marketed toward kids as young as 7–8. On the YouTube Kids app, directly typing “Jake Paul” will yield no results, but modifying your search to “jakepaul” or something similar will. YouTube in general needs to pick up on this, because unlike adults, young children do not have the capacity to understand how to avoid distressing content.
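
To make that loophole concrete, here’s a minimal sketch (my own toy example under the assumption of a simple blocklist, not YouTube’s actual filtering code) of why exact-match filtering misses trivial variants unless queries are normalized first:

```python
# Toy illustration of an exact-match blocklist, NOT YouTube's real code.
BLOCKED_TERMS = {"jake paul"}

def naive_is_blocked(query: str) -> bool:
    # Exact match only: misses trivial variants like "jakepaul".
    return query.lower() in BLOCKED_TERMS

def normalized_is_blocked(query: str) -> bool:
    # Strip everything but letters/digits so "jakepaul" and "jake.paul" still match.
    canonical = "".join(ch for ch in query.lower() if ch.isalnum())
    return any("".join(term.split()) in canonical for term in BLOCKED_TERMS)

for q in ["Jake Paul", "jakepaul", "jake.paul"]:
    print(q, naive_is_blocked(q), normalized_is_blocked(q))
# The naive check only catches "Jake Paul"; the normalized one catches all three.
```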

There’s a new obstacle to landing a job after college: Getting approved by AI – Metz

This whole article is incredibly relevant to me right now, because all I’ve been doing for the past several months is applying to internships and jobs for the summer so I can maybe not be broke next semester. I’ve noticed a significant change between last year and this year in the articles and help websites meant to give advice on resumes and cover letters: the advent of advice on how to “get through the algorithm.” It’s the most warped thing, and as much as I am all about a cyberpunk future, I was thinking more Blade Runner and less Black Mirror. What I didn’t expect was the algorithmic analysis of interviews, which Metz spends a significant amount of time detailing. Unfortunately, I wasn’t able to access Yobs.io, the interview candidate simulator that will tell you how these algorithms perceive your job competency, but regardless it’s weirdly horrifying to know that even my mannerisms will be judged by a bot.
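
Since the screening systems Metz describes are proprietary, here’s only a guess at their simplest form – the keyword matching that all those advice sites tell you to optimize for (keywords and resume text invented):

```python
# Guess at the simplest form of resume screening: keyword overlap with the
# job posting. Real systems are proprietary and far more complex.
import re

def keyword_score(resume_text: str, job_keywords: set) -> float:
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return len(job_keywords & words) / len(job_keywords)

job_keywords = {"python", "sql", "communication", "agile"}
resume = "Team lead experienced in Python and SQL; strong communication skills."
print(f"{keyword_score(resume, job_keywords):.0%}")
# 75% - the resume never says "agile", so the bot dings it no matter how good it is.
```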

How TikTok Holds Our Attention – Tolentino

This piece goes into TikTok’s algorithms, and how they work hard to select content that a user attends to more, funneling them into a stream of content they’re more likely to enjoy. This does have some positive impact – uplifting musicians like Lil Nas X, granting internet celebrity to unsuspecting teenagers, etc. But it has tons of downsides. A curious and uninformed user could easily be funneled into a swamp of unsavory content, ranging from jokes in poor taste to blatantly fascist rhetoric. It’s like Facebook on crack. Not to mention, its ownership by ByteDance, a China-based company, raises concerns about data privacy. The Chinese government is not known for benevolence, and there are concerns that apps like TikTok could be used to feed its rhetoric. Overall it’s a mess and a half, but inevitable considering how little concern our legislators and world leaders seem to have for the online world.
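
A minimal sketch of the funneling dynamic (my own caricature, not ByteDance’s actual system): weight each topic by accumulated watch time and sample the next video accordingly, so whatever you linger on takes over the feed:

```python
# Caricature of an engagement-driven feed, NOT TikTok's actual algorithm:
# watch time on a topic raises the odds of seeing it again, so the feed
# narrows itself around whatever the user lingers on.
import random

watch_time = {"dance": 1.0, "cooking": 1.0, "politics": 1.0}

def next_topic() -> str:
    topics, weights = zip(*watch_time.items())
    return random.choices(topics, weights=weights)[0]

random.seed(0)
for _ in range(200):
    topic = next_topic()
    # Suppose the user lingers on "dance" clips and skips everything else.
    watch_time[topic] += 10.0 if topic == "dance" else 0.1

print(watch_time)  # "dance" now dwarfs the rest: the user has been funneled
```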

Inadvertent Algorithmic Cruelty – Meyer

I feel like this entire class is just going to be all of us constantly dunking on Facebook, and honestly, rightfully so. This article was both horrifying and depressing to read, but it isn’t surprising that Facebook would have so little regard for a person’s feelings. The platform is built to keep you engaged, and one of the best ways to do that is to show you images it thinks you will emotionally connect with, even if that image is of your literal dead daughter. Zuckerberg has no need to care about an individual person unless they go viral, and Facebook’s PR has historically been bad at basic human emotions.

The problem with metrics is a big problem for AI – Thomas

The conflation of metrics with a full picture of available information is a major flaw in the way we analyze our data relations, and by extension how we interact on platforms. Thomas brings up the example of the Mueller investigation, and how Russian state TV’s coverage of it was inordinately promoted over other videos. Individuals and groups working in bad faith can game algorithms to wreak general havoc; we saw that with the Christopher Wylie/Cambridge Analytica reading several weeks ago. The people who have the power to fix this ignore the issue entirely, whether willfully or out of inattention, and it’s having real-world consequences on our political and social landscape.

Big Data, Algorithms, Algorithmic Transparency

Pasquale – Black Box Society

Pasquale describes “the black box” – a sort of metaphorical box which can either (ideally) obscure your data and thus provide privacy, or, more frequently, is the mechanism by which companies or the government conceal how much of your data they have. This really protects the interests of those who have access to user data rather than the users themselves. Releasing this data is often not an option, as the parties holding it will argue that it’s hidden for our own good – to protect privacy, enforce peace, etc. Honestly, I thought this article built off the other sources this week – we’ve become reliant on and complacent about what we perceive to be objective gatekeeping of our data, but in reality we have no clue who has access to our information. Privacy becomes a myth, and only now are individuals beginning to sound the alarm on this.

O’Neil – TED Talk

O’Neil is concerned about the broader social repercussions of relying on algorithms trained solely on archival data, as opposed to controlling for the systemic oppression baked into that pre-existing data. Just because an algorithm accurately predicts success doesn’t mean it’s correctly accounting for human bias. A lot of this comes from the unwillingness of company execs to spend the money to intentionally adjust their data to be more equitable, and from the public’s comfortable belief that math is objective. If my years in high school taught me anything, it’s that math is a bad subject for bad people who have bad taste in literally everything. The same (roughly) goes for data scientists. They need to be held accountable for their actions.
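
A toy illustration of O’Neil’s core point, with invented data: a “model” that just replays historical hiring rates is perfectly “accurate” on the past while laundering its bias through a proxy like zip code:

```python
# Toy version of the point, with invented data: a model fit to biased
# historical hiring decisions "accurately predicts" the past - bias included.
# Zip code stands in for any proxy correlated with a protected attribute.
past_hires = [
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("60620", False), ("60620", False), ("60620", False), ("60620", True),
]

def hire_rate(zip_code: str) -> float:
    outcomes = [hired for z, hired in past_hires if z == zip_code]
    return sum(outcomes) / len(outcomes)

def model_predicts_hire(zip_code: str) -> bool:
    # High accuracy on the training data, but it just replays historical bias.
    return hire_rate(zip_code) > 0.5

print(model_predicts_hire("10001"))  # True  - the historically favored zip code
print(model_predicts_hire("60620"))  # False - identical candidate, different zip
```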

Eubanks – Automating Inequality

The cycle of poverty in the US was already convoluted and hard to break (thus making it a cycle). There were already cuts to welfare programs that gave the poor a chance at a decent life, but the digital age has made certain that these inequalities persist and worsen. Eubanks brings up the example of Medicaid and Medicare early on to illustrate this – the job of human moderators who looked through and vetted automated decisions was replaced by programming that wasn’t properly overseen, leading to millions getting their insurance cut and an inordinate amount of preventable deaths. This pattern of essentially screwing over the poor to save on administrative costs has a dangerous history. I think it’s especially relevant now in the context of a national conversation, as we have not one but two presidential candidates who promise progressive reform specifically targeting this. The quieter we are on this issue, the less likely it is to get resolved.
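
A sketch of the failure mode as I understand it from Eubanks (details heavily simplified, data invented): when an automated eligibility check has no caseworker in the loop, any missing document becomes an automatic denial:

```python
# Simplified sketch of automated eligibility with no human review: any
# mismatch or missing document is treated as "failure to cooperate" and
# benefits are denied outright. Fields and rules here are invented.
def eligibility_check(application: dict) -> str:
    required = ["income_proof", "id_document", "residence_proof"]
    missing = [doc for doc in required if not application.get(doc)]
    if missing:
        # No caseworker in the loop: a lost fax or scanning error becomes a denial.
        return "DENIED: failure to cooperate (missing: " + ", ".join(missing) + ")"
    return "APPROVED"

print(eligibility_check(
    {"income_proof": True, "id_document": True, "residence_proof": False}
))  # DENIED - even if the applicant mailed the form weeks ago
```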

Vertesi – TIME article

Vertesi talks about how hard it is to really have a private life on the internet, because even interactions outside of your direct control are monitored. There’s the example of the girl whose dad found out she was pregnant only after Target did, and Vertesi’s own example of how she was unable to conceal her pregnancy from advertisers because of an innocent email from her uncle. Private isn’t private as far as advertisers are concerned, because we are the commodity being sold. Personally, I find it frustrating to explain this concept to older people who don’t work in tech, because my generation grew up with a camera pointed at us. Overall it’s just irritating how little regard is given to our personal information, and how the people with the power to regulate it (again, often older Senators and Representatives) don’t even see a big problem with it in the first place.
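
The Target story boils down to scoring purchases against learned signals. Here’s a toy version with made-up weights (reportedly the real model used around two dozen products; nothing here is Target’s actual data):

```python
# Toy version of the Target anecdote, with invented weights: score a
# shopper's likelihood of pregnancy from purchase signals.
PREGNANCY_SIGNALS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.6,
    "cotton balls": 0.1,
}

def pregnancy_score(purchases: list) -> float:
    # Cap at 1.0 so the score reads as a probability-like value.
    return min(1.0, sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases))

print(pregnancy_score(["unscented lotion", "prenatal vitamins"]))  # 0.9
print(pregnancy_score(["cotton balls"]))                           # 0.1
```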

Williams and Lucas

This one was hilarious. I can actually remember scenarios where I’ve been stuck explaining something to admin while they keep referring to some sort of mix-up in the computer system. It’s funny in a comedic context, but a little concerning when you realize the real-world implications of a lazy person at the desk. There isn’t too much to say here; it’s just deeply relatable.

Junior Project Ideas

I’ve been caught between two basic ideas that I want to do:

A) A gifset exploring corporate attempts to take advantage of meme culture and social justice terminology

This would be titled “Dances I Do When The Anxiety Hits,” and it’s gonna be a 3×4 grid of 12 gifs of a person doing different dances. The character will be animated and doing any sort of silly dance, but their face will be completely blank.

This is gonna look hilarious – just imagine that blank face looking straight ahead on top of a body like this:

[image: “saturday shorts” dance gif]

The point of it all, however, is to show something that is just begging to be made into a meme, with an eye-catching and #relatable title regarding mental health, to point out how this sort of content can be used as a clever marketing strategy by corporations. What if, under one of the gifs, you can see a McDonald’s logo? What is that gifset saying? I want to explore this idea of the friendly corporation that “gets” youth culture, not out of care for its customers’ actual mental health, but out of a desire to encourage people to buy more stuff.

B) A short film on privacy infringement

This one would focus more on how advertisers try to get your information, by setting up a dialogue between two characters – one a consumer/social media user, and the other a sort of personification of an algorithm trying to pry data out of the user. The resulting video would be a sort of comedy bit, with the algorithm devising all sorts of leading questions to get information from the user. The point is to illustrate, in a more human way, how easy it is for any number of companies to have your information and predict what you’ll do – in a way that isn’t as dark or edgy as your typical Black Mirror episode.

This will require a script, a storyboard, actors, at least three different sets, and hopefully a person to help me film, not to mention the hours of post-production work, so I don’t think this project is as likely to get off the ground as the other project, unless I manage to produce a script and half a storyboard by next week.

Technology and Race

Safiya Noble

Noble touches on how our ideas about sexism and racism bleed into the technology world in ways we can hardly imagine, namely through search engines. Because software is often built from user inputs, search results for any given topic will surface what is most often searched for. In the case of race, searches for beautiful people will often yield images of white people, while searches for criminals will yield images of black people. Autocomplete for searches about women will show a number of sexist remarks, such as “women can’t drive,” etc. Despite how small and insignificant things like this seem, they reinforce prejudiced attitudes on a grand yet subtle scale. Personally, I am unsurprised by this, but I find it annoying that no one seems interested in considering this an issue, much less in solving it.
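
A minimal sketch of the mechanism Noble describes, using an invented query log: if autocomplete ranks purely by frequency, the most-typed queries win, prejudice included:

```python
# Frequency-ranked autocomplete over an invented query log: completions are
# ordered purely by how often past users typed them.
from collections import Counter

query_log = Counter({
    "women can't drive": 50,
    "women can't be trusted": 30,
    "women can vote": 5,
})

def autocomplete(prefix: str, k: int = 3) -> list:
    matches = Counter({q: n for q, n in query_log.items() if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

print(autocomplete("women can"))
# The sexist queries surface first - frequency alone, no editorial judgment.
```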

Ruha Benjamin

Benjamin goes into this idea of discriminatory design – (usually software) design that puts marginalized peoples at a disadvantage in how data about them is read, and in how our machines learn from us. The problem with machine learning is that it will correlate racial traits with varying levels of credit, trustworthiness, etc. Given that race is largely a social construct, the failure of tech companies to address these issues of technological inequality feeds into what Benjamin calls the “New Jim Code.” I feel like this talk really opens up what Noble was saying earlier, and provides an argument for why digital bigotry impacts society beyond just making people feel bad.

Lisa Nakamura

Nakamura starts by talking about the failings of technology in recent years to have the consumer’s best interests in mind, and takes up the suggestion that VR could resolve this. VR has the ability to physically put you in the position of others in an immersive way, which some believe will increase overall empathy. I don’t think I agree with this very much. I’m reminded of Jane Elliott’s eye color experiments, wherein she took a class full of third-graders in the 60s and taught them about discrimination by expressing prejudice toward either the brown-eyed kids or the blue-eyed kids. While this was highly effective, she ran a similar study with college students and found they weren’t nearly as receptive, especially the white participants. I feel like, VR or not, the older you get, the harder it’s gonna be to step outside your worldview and empathize.

Social Interaction, Social Photography, and Social Media Metrics

Jurgenson – The Social Photo

Jurgenson analyzes the aesthetic of the vintage, and why it comes about, by looking at it through the lens of authenticity. The idea is that humans value authenticity above all else, and photographs once played the role of tangible proof you could hold in your hands – a signal that you had, without a doubt, seen something. In the early days of Instagram, vintage filters became exceedingly popular because they evoked those old photos, asserting a “realness” about the picture. This all comes into play when examining our social consciousness in the social media age.

Grosser – What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook

The article further discusses this idea of social interaction, and how we tie our self-worth to a frankly arbitrary set of metrics that we chase because of capitalism. Personally, I vibed with the anti-capitalist moods here, but I always vibe with anti-capitalist moods. Grosser’s Facebook Demetricator sort of reveals all the anxieties surrounding increasing your social numbers as a mode of increasing your own personal value. Instead of simply showing your worth through owning goods or just having a good life, you do so by curating the vision of a good life and reaping the rewards of likes and clicks.
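
The actual Demetricator is a browser extension that rewrites Facebook’s interface in place; here’s a rough Python analogue of its core move (delete the numbers, keep the interaction), with invented example strings:

```python
# Rough analogue of Demetricator's core move, not Grosser's actual extension:
# strip the metric, keep the social signal.
import re

def demetricate(label: str) -> str:
    # "1,204 people like this" -> "people like this"
    return re.sub(r"[\d,.]+[KkMm]?\s*", "", label).strip()

for label in ["1,204 people like this", "16 shares", "3.2K comments"]:
    print(repr(label), "->", repr(demetricate(label)))
```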

Jill Walker Rettberg

This one sort of builds on the previous two readings – we live in an age of Dataism, quantifying ourselves down to even the most inane details, all in pursuit of what’s functionally clout. This isn’t inherently a bad thing, but in my opinion it’s a little crazy that when you explain data tracking and social media the way Rettberg does, it really does sound like we’re living in a rejected Black Mirror script.

Interface

Wendy Chun – Programmed Visions

Chun’s book looks into the very idea of software, and what it encompasses. She describes technology as this vast unknown, with possibilities we still haven’t thought of yet. Defining hardware becomes difficult without first defining software; only then is there some possibility of encompassing hardware’s vast range of functions. Even then, software itself is very broad. Personally, I feel like Chun’s description of hardware versus software is a little too binary – we’re already seeing technology where the two are inseparable and hard to distinguish. For example, the software of the Nintendo Wii was specific to the Wii, and the physical object of the Wii was dependent on the program that allowed it to run. Chun seems to acknowledge this, and I’d hope her work goes into further depth on that.

Matthew Fuller – How to be a Geek

Fuller approaches software similarly to Chun, in that he acknowledges that software is an incredibly broad term, and that even within an individual piece of software there’s a ton of variety in operations. So Fuller tackles understanding technology through geek culture, which often produces said software. To be a geek is to have a practical, obsessive interest in some sort of thing – particularly technology in this context. Geeks largely develop our software, and that software, as a result, becomes more than just an extension of humanity: it carries its own diverse set of values and meanings. The technology geeks make is reflective of their obsession over it, in that it is filled to the brim with data and potential commands.

Geert Lovink – Sad by Design

The podcast looks at Lovink and his idea of “platform nihilism,” wherein platforms such as Twitter, Facebook, etc., present a casually negative affect subtle enough to go unnoticed, as well as an atmosphere that lends itself to a depressing mood that follows you even after you close the app. The platforms are reflective of our human nature, but more than that, the companies behind each platform work under an agenda to keep you on their respective sites, so they aren’t incentivized to remove morally damaging content if it keeps you coming back.

Søren Bro Pold – New ways of hiding: towards metainterface realism

Pold describes “metainterface realism” – the idea of a system so integrated that data all just sort of converges in a cloud system. Honestly, this reading was a bit complicated and wordy, but at the very least it gets across the basic idea of our data being taken without our knowledge and consent, then aggregated and hidden for whatever business wishes to utilize it. Ideally, we as people who depend on technology should become more critical and technologically literate before letting our data go anywhere, but the mechanisms by which our data is taken are often so tricky that it’s hard to understand the scope of it all.

Surveillance/Privacy/Resistance

Zuboff, Surveillance Capitalism

Zuboff coins the term “surveillance capitalism” to refer to the growing trend of commerce in which the commodity being bought and sold is consumers’ personal information. The documentary roughly follows the frankly horrifying rise of surveillance capitalism, and how Silicon Valley has found it more profitable to cater to investors and venture capitalists by creating technology that extracts massive amounts of what was once considered “residual data” in order to track, predict, and manipulate human behavior – essentially by tricking users.

What I found interesting is how Zuboff addresses common questions regarding the loss of privacy – for example, the claim that big tech companies are simply using the data to improve the user experience – by countering that those companies take far more information than they need so they can sell advertisers whatever will most likely affect you emotionally. Discussions around this topic often avoid the question, or otherwise assume that readers/viewers already understand why this mass surveillance is an overt intrusion of privacy. What’s more, Zuboff validates these concerns by presenting work from academics, delegitimizing the rhetoric these companies use to dismiss accusations of privacy infringement as very “tinfoil-hat.” It’s really easy to simply give in to apathy and accept that there is no longer such a thing as a personal, private life, but I felt Zuboff makes the case that, as a society, we ought to be very worried.

Cadwalladr, “I made Steve Bannon’s Psychological Warfare Tool”

Cadwalladr follows the story of Christopher Wylie, the whistleblower who broke the news on Cambridge Analytica – how far and deep the story of election meddling goes, and his own role in it. Wylie is portrayed as a kid who really liked tech, was good at it, and came up with the idea of linking personality traits and interests to habits, political views, and even willingness to support new ideas. He didn’t think about the broader consequences of working with people like Steve Bannon, Robert Mercer, and SCL, all of whom had their own agendas they wanted to implement.

What’s wild is that, as easy as it is to criticize Wylie for irresponsibly enabling and helping with this massive data collection scheme, none of us five years ago would have ever imagined that something like your one weird aunt taking a personality quiz one time could have led to your data, and that of everyone linked to her, being compromised as part of a larger scheme to unethically harvest data and influence the presidential election of a major world power. If anyone is to blame, it’s the people who took advantage of Wylie’s ideas. Facebook especially knew that something fishy was going on, but looked the other way, knowing it would profit off the ad revenue. This incident was even referenced in Zuboff, and I think it’s important to address the human cost of our (more specifically, companies’ and bad actors’) pursuit of power and wealth.

Questions:

Do you think Wylie is adequately taking responsibility for the harm his actions have caused?

How urgent do you believe the issue of infringement of privacy to be?

Near the end of her documentary, Zuboff points out that there was a time when we lived without all this smart technology, as a means to point out that we do not have to be reliant on these companies. What do you think of this? Can we go back to a time before data became our greatest obsession?

Do you believe the parties involved in the Cambridge Analytica scandal were properly punished/held in check? What laws or rulings could be put in place to limit the ability of this sort of scandal to happen again? Do you think it’s even possible to prevent it at this point?

Briefly define “residual data” and its purposes.