BFA Exhibition Update

The Work:

My work is an image series viewable on Instagram at @socialdistancecow.

Ultimately, I plan to create a zine collecting all of the images I have made.

The Work’s Title:

A-moo-sement (A work in progress, because I don’t know what to name it)

Artist’s Statement

My focus as an artist in quarantine is to spark happiness in people's everyday lives, especially during this strange time. I emulate patterns, trends, daily activities, and behaviors that I observe myself and others exhibiting, and recreate them using my stuffed cow, who is occasionally accompanied by the rest of my stuffed animal collection. I think about how seeing our behaviors recreated in a stuffed animal's life affects how we view our own behaviors.

A Representative Image

Social Media Account Links: (Instagram)

@socialdistancecow, @chloegchan

BFA Exhibition: Plan Now

I think I'd like my work to be the drawings I've produced at home during this time, maybe with a sentence about how I felt each day. Some days I draw more detailed things, and on busier days I draw simpler things, which shows my motivation and what drawing made me feel like.

Another idea I came up with, since I am now back on campus quarantined with all my plushies, is to make a photo series / Instagram account documenting the daily life of one of my plushies and how it handles social distancing.

AI / Predictive Analytics / Recommendation Algorithms (11 Mar):

James Bridle – Something is wrong on the internet (Medium)

This article creeped me out a bit. It talked about content creation for children, which I've seen a lot of YouTubers complain about recently. It focuses on how content created for children can be automated, generating endless variations of shows by repurposing the same basic character models. Something that really shocked me was the offensive T-shirts automatically generated for sale on Amazon. And because AI is being used to create content for children, the result can be a really creepy and messed-up video that no one expected at all.


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

This article hit way too close to home because I, like probably most people in senior year, am trying to look for jobs and struggling. A lot of people tell me to include buzzwords in my resume somehow so "the algorithm can pick it up out of the other resumes". I totally get why companies use AI to sift through resumes, since they probably get so many applicants, but I find it pretty sad that so many people have to go through that process; it just seems a bit unfair.
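The buzzword advice people give me boils down to simple keyword matching. Here's a toy sketch of the idea (the buzzword list, function name, and scoring are all invented for illustration; real applicant-tracking systems are far more opaque):

```python
# Toy sketch of keyword-based resume screening (hypothetical, not a real ATS).
def score_resume(resume_text, buzzwords):
    """Count how many buzzwords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for word in buzzwords if word.lower() in text)

buzzwords = ["python", "agile", "machine learning", "leadership"]
resume = "Led an agile team building Python tools for machine learning."
print(score_resume(resume, buzzwords))  # prints 3
```

A resume that never uses the "right" words scores zero no matter how qualified the person is, which is exactly what makes the process feel unfair.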


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

This article talks about TikTok and the general premise of the application. It talks about how the creators are almost all young, and how an older creator is hard to come by. It was interesting to me that TikTok is an application whose content is based primarily on music rather than speech or text, so it reaches a wider audience. This way, people from all over the world can create content that can be understood and related to around the globe.


Eric Meyer – Inadvertent Algorithmic Cruelty (article)

This article made me sad because it opened by talking about how Facebook creates these "Year in Review," "Happy Birthday," or "Friendaversary" features for people without being able to consider (of course) the history someone may have with those particular events or people. The result is something that may cause negative emotions. I actually experienced this kind of algorithmic cruelty when Facebook showed my Friendaversary with someone I am no longer friends with in person but stayed friends with on Facebook. I don't think there's any way for Facebook to stop this from happening unless it lets us log negative interactions with people, but I don't think that would be a very good feature to implement.


Rachel Thomas – The problem with metrics is a big problem for AI (article)

This article talks about how the most important things cannot be measured by metrics, yet people rely more and more on AI, and AI focuses primarily on metrics. The article makes many points about why metrics focus on the wrong things or measure irrelevant ones, and it seems like the best option is to actually have a person go through whatever is being measured in order to give the best judgment. It's sad that this process is super subjective and biased at the same time. Is there even a right way to sort through data at this point?

BFA Exhibition: Ideation

Exploring the idea of First Impressions

3 Questions I think about

  1. What are key actions that caused someone to have a lasting first impression on you?
    1. Body language, facial expressions, things that they say? Good or bad? How did they make you feel? What makes you feel close to someone? What kind of personalities are you drawn to?
  2. How often have your first impressions been proved wrong?
  3. Does a positive or negative first impression make more of an impact on you? (Which do you remember better?)

3 Mediums I’m considering

  1. Something HONY-like (Photography & Testimonial)
  2. Illustration/Drawing/Physical Medium

Big Data / Algorithms / Algorithmic Transparency (26 Feb):

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)
Cathy O’Neill – The era of blind faith in big data must end (Ted Talk, 13m)

O'Neill talks about how there are clear winners and losers determined by algorithms that we have "blind faith" in. We train these algorithms to figure out what leads to success. I thought it was interesting how she described algorithms as "opinions embedded in code," which leads to the idea that algorithms are biased, which is why it's very problematic to blindly believe in data and algorithms. People don't try to understand an algorithm because it's "math" and most people "won't want to understand it / won't be able to understand it." It's kind of scary how she talks about algorithms being silently dangerous; it makes me think of how many algorithms exist today that are super biased and dangerous. What would the world be like if we just removed computer algorithms?

Virginia Eubanks – Automating Inequality (talk, 45m)

Eubanks talks about studying inequality through data. She describes how, in the 1820s, the proposed solution to alleviate this kind of inequality was to build public poorhouses that required residents to give up their established rights (to vote, hold office, marry, and keep their families together), because "poor children can be rehabilitated by interacting with the wealthy." This didn't really work; people kept dying. She also described a feedback loop of inequality, which I feel like I discussed a bit in last week's reading. I don't think a world without inequality exists.

Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

This article discusses Vertesi's experiment to avoid cookies and tracking data while she was pregnant. I found it super interesting how she said that a single pregnant woman's data is SUPER valuable, worth as much as 200 ordinary people's, because pregnant women have so much buying power and are so vulnerable to advertisements. She had to do several things, like not shopping at certain places and unfriending certain people on Facebook, which made her look like she was doing something illicit. I thought this was super interesting because it says something about how we as a society value convenience: we are willing to give up a lot of freedoms and buy into big data to make our lives easier, without really thinking about the consequences. It almost feels like people are trying to make our lives hard if we don't partake in it.

Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This skit shows a mother and child going to a doctor's office so the daughter can have an operation. The receptionist asks super simple questions and only accepts straight answers, but the mother and child give roundabout, relative answers. Taking that together with the title, it made me think that this is how people interact with computers when filling out forms, because computers only accept a certain kind of answer in a specific format.
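The "computer says no" dynamic is basically strict form validation: any answer that doesn't match the expected format gets rejected outright. A minimal sketch of the idea (the field and function name are invented for illustration):

```python
import re

# A form field that only accepts one rigid format, like the receptionist.
def validate_date_of_birth(answer):
    """Accept only YYYY-MM-DD; anything conversational is rejected."""
    return bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", answer))

print(validate_date_of_birth("1994-03-12"))            # True
print(validate_date_of_birth("around March, I think"))  # False: computer says no
```

The mother's "relative-ish" answers would all fall into that second case, no matter how informative they are to a human.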

Technology and Race (19 Feb):

Safiya Noble – Algorithms of Oppression (talk, 45m)

Noble talks about her book Algorithms of Oppression and a campaign using "genuine Google searches" meant to draw attention to how women, and women of color in particular, are marginalized through Google searches. The top searches showed a lot of derogatory attitudes toward women. A similar discriminatory result appears when someone searches "three black teenagers" vs. "three white teenagers": the first brings up images of criminals, while the second returns stock photos of generic white teens. Google tries to save face by fixing its searches, but the issue of bias remains (like putting white criminals higher up in searches). Noble goes into a bunch of different ways Google has discriminated against people through its search results. I wonder why the search queries appeared this way, and how the algorithm was swayed in this direction. She also demonstrated how searching "____ girls" often shows really sexualized images of women.
Also, in 2015, during the Obama presidency, searching the N-word would bring up the White House, which is insane.

My thoughts for Discussion:

– Noble says that 66% of search engine users think that results produced from search engines are fair and unbiased. Why do so many of us blindly believe this? Are search results the same for everyone? What influences what appears in the search results? How can this influence public opinion? Has it done so in the past?

– Do you think your opinions on things are influenced based on the things you search for, and what kind of things do you think would be the most harmful if they were influenced?

– Is it even possible for developers to take precautions against search engines being racist, since the engines themselves are not intentionally racist?

– Why do you think that searching for girls vs. searching for boys produces hyper-sexualized images and results? Do you say "girl" instead of "woman"? What are your thoughts on this? Why don't people say "boys", but more often say "men"?

– Do you think that frequency of searching racist search queries equate to racism? There seems to be a loop in which there is a lot of material online with racist ideologies, which brings more visibility to these ideas. Then, they will show up in the search engine more frequently, which may begin to perpetuate racism. Do you think there is a way to break this loop?
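The loop I'm describing, where visibility boosts ranking and ranking boosts visibility, can be illustrated with a toy "rich get richer" simulation (all numbers here are invented; this is not how any real search engine is implemented):

```python
# Toy feedback loop: items that are more visible gain popularity faster,
# which makes them even more visible next round.
def simulate_feedback(popularity, rounds):
    """Each round, every item's popularity grows in proportion to its share."""
    for _ in range(rounds):
        total = sum(popularity)
        popularity = [p * (1 + p / total) for p in popularity]
    return popularity

start = [10, 1]            # one result begins with 10x the visibility
end = simulate_feedback(start, 5)
print(end[0] / end[1])     # the gap keeps widening over rounds
```

Even a small initial advantage compounds, which is why breaking the loop seems so hard once it starts.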

Ruha Benjamin – Race After Technology (talk, 20m)

Benjamin talks about how the tech industry can make it easier for people to take a stand for their beliefs. She also discusses a project called "The Innovation Project," which tried to use algorithms to predict at-risk youth in cities. It created so much backlash that a coalition formed in resistance to it, called the "Stop the Cradle to Prison Algorithm Coalition," which works to expose existing algorithms' bias against marginalized groups. I thought this was pretty interesting; I think it's much easier now to become aware of certain events, but harder to take a big part in them because people become lazy.

Lisa Nakamura – Laboring Infrastructures (talk, 30m)

Nakamura talks about VR in her talk. Something interesting she mentioned was that "feeling good about feeling bad" is a feeling specific to VR, and that VR pieces are essentially films, despite many people considering VR an "experience." People are using VR to put the viewer in the position of marginalized people so they can experience marginalizing events. I thought it was interesting how she talked about VR being created to teach people to feel a certain way, to "hack" your body to make you feel more empathetic.

VR could potentially be used in pretty dangerous ways. Could different media like games and movies also “hack” your brain to make you feel certain ways?

If we watched all movies from a first-person perspective, then could we call them “experiences”? What about video games? Are they more like films or experiences?

Introducing race into VR "experiences" causes us to feel GOOD about feeling bad about these racial differences. Why do we feel like we are supposed to feel these emotions? How much work, and what kind of work, would go into making this kind of experience to influence people's feelings?

Social Interaction, Social Photography, and Social Media Metrics (12 Feb):

Nathan Jurgenson – The Social Photo – (book, pp. 1-15)

This reading discussed how the rise of different technologies changes the way the masses think and perceive information. It drew a parallel to the rise of photography, and how its invention changed how people viewed time, since they could suddenly capture moments of time in an image. Photos became important very quickly, and most popular apps these days involve some sort of photo. The photograph has changed society so much; it brought us to an image-obsessed society, where people care about how they present themselves and how they look above most other things.

Jill Walker Rettberg – “What can’t we measure in a quantified world?” (talk, 20m)

This TED talk discusses tracking devices (like the FitBit) that track our activities to encourage behaviors based on the information they collect about us. The speaker points out that quantitative behavior tracking isn't new; people like Ben Franklin tried to improve themselves through behavior tracking as well. I think it's really helpful to track behaviors like this so we can feel accountable for the behaviors we are trying to change.

Ben Grosser – What do Metrics Want? How Quantification Prescribes Social Interaction on FB (article)

The article talks about how much sites (mainly Facebook) use numbers and metrics as a measure of social interaction, which in turn drives an "insatiable desire for more." This is brought to light in the video Ben made of Mark Zuckerberg talking about numbers and metrics for almost a whole hour. It makes sense to me that people would become obsessed with this: humans are social creatures, so I don't find it unusual that people want to be validated and feel a sense of belonging through social metrics, to the point where their self-worth depends on, or is at least heavily impacted by, these metrics. It makes me think about why companies choose to show certain metrics over others, and whether they are aware they are doing this.

Senior Project Ideas

I was thinking of what really got me interested in art, which is portraiture. I always really liked drawing and painting portraits of people, but these would all be people I had never met before, or people who came from my imagination.

I’m interested in maybe revisiting my love for portraiture, and sketching the friends I feel closest to after all my years in college, somehow involving the friends I’ve made here.

Other ideas:
– relationships between people: what makes you feel close to someone?
– what makes you feel uncomfortable around someone?
– first impressions
– people posting online vs. their actual selves
– growing up

** UPDATE **

The thing I'm most interested in exploring is first impressions. I'm not sure yet what medium or method I'd like this particular artwork to take. I first thought of having photographs or drawings of people and then making it kind of participatory? But I don't clearly know what I'd like to do in general. Maybe something interactive, so people can write their first impressions?

Interface Criticism / Tactical Media / Software Art (5 Feb):

Wendy Chun – Programmed Visions, (book, pp. 1-2, and optionally pp. 3-10)

I felt like this reading started out by defining what new media exactly is, and it essentially explained that new media studies all center around software, which is what all new media have in common. I thought it was particularly interesting how she mentioned that we can't truly know what is going on behind software: how someone really feels or acts vs. how they portray themselves online. This is something I'm super interested in because I know there is a lot of disconnect in this particular area.


Matthew Fuller – How to be a Geek (book, pp. 12-14, and optionally pp. 63-71)

The reading describes a "geek" as someone who is super informed and excited about a topic, and who isn't afraid to share it with little regard for the fact that others may not be interested. I thought this was a perfect definition of geek, and I feel like we all have a bit of this geekiness inside of us. I find it quirky and interesting when someone geeks out about something, but it can be a bit much if it happens constantly. I'm not surprised that geeks run a lot of corporations, because it's logical that people who are super enthusiastic about something become leaders in that subject. I found it super interesting and relatable how the book talks about geeks often muting themselves and passing themselves off as underwhelming, because I tend not to talk about the things I'm most interested in, since I feel like it isn't socially acceptable in general.


Geert Lovink – Sad by Design (podcast w/ Douglas Rushkoff, 60m)

The podcast discusses the changing political environment and the fact that there aren't many traditional social movements anymore: there are protests, but they don't gain as much support or participation. The hosts seem to credit this to social media platforms, which let users "support" causes with very little effort instead of actually joining them. Personally I think this is true, because the internet's ability to give us instant results has bred a society that looks for instant gratification, putting in little effort for big results, and people like to turn a blind eye in order not to feel sad or negative.


Soren Pold – New ways of hiding: towards metainterface realism (article)

The article talks about how people have stopped trusting "commercial metainterfaces," installing ad blockers and clearing their cookies, bringing us back to the topic we covered last week about surveillance and privacy. As people become more aware of how much of our information is being recorded and sold to companies, more and more of them take measures to prevent it. The work Ben did on Facebook (Safebook), mentioned in the article, aligns well with the topic because it is a method of data obfuscation that works by hiding Facebook content.

Surveillance / Privacy / Resistance (29 Jan):

Shoshana Zuboff – The Age of Surveillance Capitalism (video documentary, 50m)

The documentary covers Zuboff's fascination with surveillance and its exploitation by corporations. It is interesting how the apps and companies that seem to cater to our needs by making things convenient for us take the information we provide and sell it to other companies, which then tailor ads and other things to us. Through this process, the data we give these companies by using their services becomes commodified, becoming the "raw material" sought after by other companies in order to improve themselves.


Helen Nissenbaum – Mapping Interventions – Digital Democracies Conference (talk, 30m)

The conference included a discussion of obfuscating the data that companies and applications collect from us. It seems like data obfuscation is something that you (Ben) are very interested in, because of ScareMail, randomizers, etc. A lot of my friends in computer science and tech-related fields commonly use data-obfuscating extensions like TrackMeNot, and it really makes me feel like I should install them in my own browser. I wonder what would happen if everyone in the world used these data obfuscation methods.
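As I understand it, TrackMeNot-style obfuscation works by issuing random decoy searches so your real queries get lost in the noise. A very simplified sketch of that idea (the word list, function name, and everything else here are placeholders, not TrackMeNot's actual behavior):

```python
import random

# Invented decoy vocabulary; a real tool would use a much larger, evolving list.
DECOY_TERMS = ["weather", "recipes", "news", "sports scores", "movie times"]

def make_decoy_queries(n, rng=random):
    """Generate n random decoy queries to mix in with real search traffic."""
    return [rng.choice(DECOY_TERMS) for _ in range(n)]

decoys = make_decoy_queries(3)
print(decoys)  # e.g. ['news', 'weather', 'recipes']
```

The point is not to hide that you search, but to make any individual query look meaningless inside a stream of random ones.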


Carole Cadwalladr – ’I made Steve Bannon’s psychological warfare tool’ (Guardian)

This article was kind of creepy to me. Christopher Wylie helped build detailed, highly personalized profiles of millions of Facebook users, who were then targeted with political propaganda based on those profiles to influence them to vote for a particular candidate. It really shows how easily people can be swayed by the things they read on the internet. It makes me wonder how political ads are actually distributed, and what kind of algorithm is used to do this. If someone isn't really into politics and hasn't looked up any candidates to do research on them, what information is there to influence them?


Stuart A Thompson and Charlie Warzel – One Nation, Tracked (NY Times)

When I was reading this article about how many things can be tracked through our mobile phones, I immediately started thinking about how we could obfuscate this data. There were a lot of graphics showing what our phones can predict and say about us based on our location data. What if we left our phones at home? What is the purpose of collecting this kind of data? Who cares? Can we change whether we are tracked or not? I have so many questions about this. It also brought up a question that isn't super related: could we really just go off the grid and live without our phones? We would then be free of all this tracking, but we'd lose so much convenience and so many services.


Drew Harwell – Colleges are turning phones into surveillance machines (Washington Post)

The article was about college students being tracked via their phones to collect data such as when they skip class and where they are when they do, in order to judge what kind of students they are and what habits they have. Of course, this kind of surveillance has become a hot topic of debate: students and teachers alike argue that it is an invasion of privacy and a breach of trust, and they question why students need to be surveilled in the first place. This article was super reminiscent of a class I took last semester, INFO 202. We spent an entire section of the class studying this kind of material, and in particular there was a section of an article in which students argued that they had "nothing to hide." This philosophy, that it's okay to be surveilled if we have nothing shady to hide, is problematic because just because someone CAN surveil you doesn't mean they SHOULD. It creates a culture with less trust, and in both the short and long term this is definitely a bad thing.


Jenny Davis – A clear case for resisting student tracking (Cyborgology)

This article talks about SpotterEDU, an app that allows professors to track the behaviors of their students and lets the gatherers of that information make assumptions about what these students' lives must be like. This just felt like one big privacy breach to me. I get that it's important to care about mental health and similar issues, but I think this level of tracking is unnecessary. It makes the entire system seem like it's holding your hand a bit TOO much, instead of letting you make decisions on your own merit, which I think is a really important skill for students to develop. With this application in place, that kind of growing up can't really happen.