Technology and Race Response

Databite No. 124: Ruha Benjamin
Benjamin discussed how structural racism operates in algorithms, social media, and other technologies, including ones not yet invented. “Tech fixes for social problems aren’t just about tech’s impact; social norms, racial norms, and structures shape what tools are imagined in the first place.” A striking example is the medical algorithm that favors white patients over sicker Black patients. Indifference to social reality on the part of tech designers and adopters can be more harmful than malicious intent. The New Jim Code shows how algorithms can extend rather than diminish racial discrimination.
Main questions:
What are some other examples of discriminatory design?
What can tech designers and artists do to construct a different social reality? Is it really possible to create fair and just algorithms in an unfair and unjust society?

Laboring Infrastructures
Nakamura talks about VR 2.0. Though VR 2.0 still shares major traits with our current VR experience, expensive equipment and male-dominated developers and users, it aims to engineer specific feelings like compassion, sympathy, and tears. “You can’t trust people when they speak their own truth; you need to experience it yourself to know.”
Is VR a promising industry for you? Where do you see the industry going in the future?

Algorithms of Oppression
Noble discussed how algorithms are biased against underrepresented people. I had heard of all those racist search-engine examples (image results for three Black kids vs. three white kids, or for Asian/Latina/Black girls), but I think Google tweaked the algorithm, so the search results are not the same anymore. I realized how far we are from real gender equality, even in 2020.
Is Google the only search engine with this issue, or are all search algorithms like this?

Tech and Race

Algorithms of Oppression: To be quite honest, I’m not surprised at all. I remember hearing of her work before, I believe from your Interactions I class, and it’s honestly quite unsettling how a search engine that we would think to be neutral offers up so many racist and misogynistic suggestions: how three Black teens are shown criminalized while three white teens are innocent stock photos, what women cannot/shouldn’t/should/need to do, or the hypersexualization of Asian and Latina girls. Google is merely a reflection of our own society and what we search for. Are we really that surprised when we live in a country that voted for a racist misogynist to become president?

How can we change a search engine to become neutral when the data is dominated by the users? Is it Google’s fault, or ours?

Race After Technology: While Jim Crow is gone, the New Jim Code lives on through racist robots. It’s interesting that this technology was never born racist; it became racist because it lacked the context of the systemic racism already in place. This also made me think of the HP webcam that could not identify a Black face, most likely because the people who made it tested it on themselves and did not consider people of color (which is why a diverse work environment is so important). The Stop the Cradle to Prison Algorithm Coalition reminded me a lot of another topic we addressed, where colleges tracked students through GPS location, then collected and shared the data to see which students were at risk in class. I found the white-collar early warning system hilarious, because this is the type of crime we don’t really see being reported with facial recognition of likely perpetrators (the average white male).

In what ways can we help black youth succeed? How can police learn empathy and create a safer relationship with these kids?

Laboring Infrastructures: I remember Lisa Nakamura! She introduces the idea of empathy in VR, which I am still fascinated by. We are so accustomed to our own ways of life that it might be hard for us to be put in the shoes of someone entirely different. Many people don’t care about what happens to other people as long as it’s not happening to them. Having VR give the public a chance to experience Trayvon Martin’s last moments, or even just the life of the average Black woman or a transgender wheelchair user, still offers that unique experience of being able to feel what they feel.

I feel, however, that this idea might have to be disguised in some way, because the people whose minds we genuinely want to change will not give it a shot because of the title.

Technology and Race Response

Algorithms of Oppression – Safiya Noble
In this talk, I found it very interesting when Noble talked about how Kabir Ali searched for “3 black teenagers” vs. “3 white teenagers” on Google. The search results for the Black teenagers were mugshots, while for the white teenagers only Getty stock photos came up (nothing to do with criminality). This is a problem with Google’s algorithms, which the company never formally apologized for; it only tweaked the algorithms. I found a bunch of articles questioning whether Google was “racist.” I’m curious to know how these algorithms were first put in place. I don’t think Google was intentionally trying to be “racist,” but these algorithms stemmed from somewhere. This reminds me of my senior-year English class in high school, where we talked a lot about social media, politics, and social issues. We focused a lot on the BLM movement, and one thing we learned was that when a young Black teen was shown on the news, outlets would pick images that portrayed them as “criminals” or “troublemakers” rather than more wholesome photos such as their graduation or pictures with their families. I think these algorithms are a representation of society, and they influence the things we see. If algorithms are biased, how much more will that influence and affect the people exposed to these patterns on the internet and in real life?

What have you searched for on Google that returned very biased and unrepresentative results? I searched for “3 asian teenagers” and the images do not reflect Asian teenagers at all. How can we fix this algorithm?

Race After Technology – Ruha Benjamin
I thought Citizen was a great app for staying updated on the latest crimes in the area, and users can also upload incidents to the app. It is very unfortunate that we need apps like these to report on the “BBQ Beckys” of our neighborhoods. Another thing she mentioned was the New Jim Code, whose dimensions are engineered inequality, default discrimination, coded exposure, and techno-benevolence. She brings up an abolitionist approach to the New Jim Code, the “Digital Defense Playbook,” which deals with pervasive data collection and data-driven systems; its aim is to develop power, not paranoia, around our data bodies. I find it very interesting how communities are fighting injustice through technology. I feel like in some ways it can be more effective.

What are some ways you see technology fighting racial injustice?

Laboring Infrastructures – Lisa Nakamura
I think Pathos’s empathy-based VR is a very interesting concept. They create empathy-based VR experiences to disrupt interpersonal oppression, discrimination, and misperceptions. What I liked is what the founder of Pathos Lab, Romain Vak, said: “The best and worst thing is that nobody in the field of VR has any clue what’s going on. What that means is that there are no rules yet, but it also means that a lot of eggs are in a basket that is difficult to predict. There is a lot of speculation and theorizing, but I don’t really think anyone knows where the heck the industry is going.” I think it’s fair to say that no one in the tech industry can fully predict what will happen in the future. Although they can look at past data, there is no way to really see what direction technology will be going. Facebook can predict only as far as the data it has collected; it doesn’t know where its future lies 10 years from now.

Since VR is still very new, where do you think the VR industry will be in 10 years?

Technology and Race

Lisa Nakamura – Laboring Infrastructures (talk, 30m)

I really enjoyed this talk. The idea that VR systems are ultimately manipulating human emotion is really interesting. There’s one section where Nakamura talks about how people are using VR to replicate the experiences of the marginalized in an effort to “fix racism”. There’s this idea that if people can find an emotional connection to the oppressed, we can have faith in humanity again. But faith in humanity can’t be restored with mere empathy. Empathy here is being used as an excuse not to feel bad about the oppression that continues to happen. The attitude of “yes, there’s oppression still going on, but I feel bad about it, so it’s not that bad” is self-centered. It excuses us from handling these issues in an institutional manner that would actually make change and progress. By showing that we feel empathy, we dismiss ourselves as the ones to be blamed, when in reality the people who can experience the VR system are probably some of the most privileged, with the ability to help those who aren’t. Nakamura brings up that putting ourselves into the “shoes” of someone else can itself be oppressive when they might not even have full rights over their own body in the first place. It made me think of an article I read in Interaction II (linked below). It brings up efforts to tie together ethnography and VR so that people have global access to communal documentation. But it makes me question not just VR but the way we think of documentation when it comes to marginalized communities. What is the foundation for documenting, and why is it important to take these experiences from them and share them around the world? It’s a question of whether we are right to take this from them, and how these “experiences” are subject to instilled bias.

Ruha Benjamin – Race After Technology (talk, 20m)

Racial norms shape how we understand tech and society. The Citizen app reflects its creators’ biases. Instead of stopping crime, we’re guided to avoid it. So what does this say about how we choose to handle racism and crime?

Celebrities promoting wrong apps? Jay-Z?

This entire thing about creating algorithms to find a suspect/criminal is crazy to me. Using facial recognition across everything from LinkedIn to all the other platforms brings to light how much data can be taken from us without our intention. This way of systematically organizing potential suspects from previous data collection seems too risky. It trusts a computer to navigate future criminalization and curate crime ratings that can be racially directed.

Safiya Noble – Algorithms of Oppression (talk, 45m)

Google “accidents” that create racial bias and stereotypes in Google search. Three Black boys vs. three white boys.

WHY is the answer always technical? Why do we always look to computers for answers? Why do answers have to disconnect emotionally? Why is emotion bad and why are we encouraged to think outside of morals/emotions?

Technology is not flat and not neutral; it is man-made.

Race and Technology Responses

Algorithms of Oppression

This video also speaks to the biases of algorithms and technology, and how it’s not a private struggle but a public one that communities and the people who make these technologies need to think about. She speaks to the hypersexualization of women of color in the media, and how this stereotype has been perpetuated even to this day. She says that to combat this we must no longer be neutral on these issues, must reject being color-blind, must curate the web with a more guiding hand, and must continue to educate (especially the people who make these programs, so they can think about the implications of their work).

Race After Technology

The first thing that really struck me here, not necessarily as surprising but as heartbreaking, was how easy it is for something that could be deemed helpful to be turned into a way to enact racism or racial bias. The app Citizen makes sense to me in some ways, but the potential for it to be abused is just so high. It reminds me of something we’ve spoken about before, which is that all tech will have human bias, because it is made by us. For instance, facial-recognition software might be more capable of handling white faces than Black faces because the white people programming it didn’t think about it. Therefore technology cannot be perfect, because we are not perfect. I really appreciated that she spoke to how race neutrality, or being “color blind,” can be dangerous and harmful. It doesn’t look at the differences that matter and need to be addressed, because it treats everyone as if they are the same. Professor Benjamin gave a few examples of ways that people are fighting these imbalances, but what are some other ways that we as artists might tackle these issues?

Laboring Infrastructures

Discusses the relationship between women of color’s labor and technological advancements. For example, the phones that give us access to the internet are made for stupidly cheap in China. Nakamura talks about how these conditions can be seen across the board as a feature rather than a problem within the system; the cost of advancement is the suffering of others. She speaks to how VR can create empathy for these kinds of suffering, because once someone feels the reality, it’s hard to disbelieve it anymore. But we need to use VR in this capacity by giving marginalized groups the resources to create in VR, instead of allowing it to remain colonized in the way it currently is. I think this is true not just for VR but for almost all forms of creative storytelling and empathy-making. Film needs more women, people of color, and people with disabilities. Video games need more women, people of color, and people with disabilities. All of these things rely on technology and stories, but they are mostly being used by one group instead of a broad range of groups.

In some ways this talk makes me think of the plot of Horizon Zero Dawn, which is about humans creating things we cannot fix that inevitably destroy the world. Maybe our advancements aren’t doing that currently, but they are creating inequalities at an alarming rate.

A lot of my work tries to evoke empathy but sometimes borders on pity. So my question may be a little further from the talk itself: at what point does storytelling for empathy push too far? Is there a “too far” when it comes to creating a new reality away from our white-centric world? What are some ways that we can create and tell these stories, and critique society, in a way that enacts change?

Technology and Race Responses

Lisa Nakamura’s Laboring Infrastructures: Nakamura starts her lecture with the fact that women of color work for low wages to make the technology we use. It wasn’t until 2018 that big tech companies were questioned and stopped being seen as “the good guys.” Facebook getting fined $5 million for privacy violations isn’t enough to deter it into doing the right thing. Ethics in AI seems to have been given more attention than empathy in VR. Tech companies tout VR as an empathy machine able to ‘fix’ racism, sexism, etc. by having mostly rich, straight white males become refugees, people of color, sexual harassment survivors, or transgender and/or disabled people (a.k.a. the undercommons) in VR. Nakamura calls this toxic re-embodiment, because they are inhabiting the body of someone who doesn’t even own themself. Examples Nakamura includes are My Beautiful Home from Pathos, which invaded people’s privacy while filming; One Dark Night (a work about what happened to Trayvon Martin), which reproduces racial violence in the name of reducing it; and the BBC losing their minds over a VR revelation that the player embodies a Black woman getting a haircut (if the player were revealed as white, there’d probably be no story). Nakamura brings up identity tourism and describes it using Adrienne Shaw’s terminology: to identify with instead of as. Overall, the undercommons don’t have the luxury of taking off a VR headset and living without their own personal struggles. The reality is that VR is being used to optimize privileged people who are too busy to do the real thing (engage with people who are suffering) and who therefore turn to a ten-minute VR experience to automate their feelings.

  • Do you think Mark Zuckerberg is an empathetic guy or a failure at it? I think having a VR tour of Puerto Rico after Hurricane Maria just to talk about new Facebook features is insensitive.
  • Do you think VR’s ’empathy’ (or is it sympathy?) is enough to fix exacerbating inequalities, or do we really need rules/regulations?
  • What are your thoughts on ‘feeling good about feeling bad’? Can white people enjoying slave literature be seen as similar to white people enjoying VR?
  • If the ‘undercommons’ were given the chance to make their own VR, what might that look like?

Memorable quotes: 1) “If somebody is going to put you in the shoes of somebody, that means that they have stolen that person’s shoes”; 2) “VR is the totem of the overdeveloped world.”

Ruha Benjamin’s Race After Technology: Benjamin says racism is productive; it is not an accident, not outdated, and not a glitch. Sociologists will say race is socially constructed, but they don’t typically say racism constructs. When it comes to technology, our social inputs matter. Benjamin describes imagination as a battlefield, and the fact is that many are forced to live in another’s imagination. Benjamin mentions that the Citizen app was first used to report crime but is now used to avoid it (the Starbucks racial bias incident comes to mind). I love that later in the lecture the White-Collar Early Warning System counteracts this racial profiling by pointing to a much larger crime that is more readily ignored (white male corporate execs embezzling millions or billions). Knowing that Google is using AI to make drone strikes more effective and Microsoft is working with ICE disappoints me, but it doesn’t surprise me. I’m also not surprised that racial bias in a medical algorithm favors white patients over sicker Black patients. Of course it’s not focused specifically on race, or on need, but on cost; and those low costs can be correlated with systemic racism. I am familiar with Michelle Alexander’s book The New Jim Crow, but I wish I had been told more about the New Jim Code, which covers engineered inequality and default discrimination. I also wish we didn’t need Appolition (an app that crowdsources bail money for incarcerated Black people), but I’m glad it exists. I should check out Data for Black Lives and the Digital Defense Playbook.
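The cost-versus-need point is worth making concrete. Below is a minimal sketch of that mechanism, not the actual commercial tool from the audit the talk refers to: it assumes, purely for illustration, that unequal access to care makes Black patients generate lower recorded costs at the same level of illness, and that the algorithm ranks patients by cost alone. All numbers are invented.

```python
import random

random.seed(0)

def make_patient(group):
    """Simulate one patient; 'illness' is the true burden of disease."""
    illness = random.uniform(0, 10)
    # Invented assumption: at the same illness level, Black patients
    # generate lower recorded costs because of unequal access to care,
    # the systemic-racism mechanism described in the talk.
    access = 1.0 if group == "white" else 0.7
    cost = illness * access * 1000 + random.gauss(0, 500)
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient(g) for g in ["white"] * 5000 + ["black"] * 5000]

# The algorithm never sees race: it ranks purely by cost (its proxy
# for need) and refers the top 20% to a high-risk care program.
patients.sort(key=lambda p: p["cost"], reverse=True)
referred = patients[: len(patients) // 5]

for group in ("white", "black"):
    chosen = [p for p in referred if p["group"] == group]
    share = len(chosen) / len(referred)
    mean_illness = sum(p["illness"] for p in chosen) / len(chosen)
    print(f"{group}: {share:.0%} of referrals, mean illness {mean_illness:.2f}")
```

Running it, Black patients receive a much smaller share of referrals, and the Black patients who are referred are sicker on average than the referred white patients: a race-blind input (cost) reproduces a racial disparity.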

*After the talk there is mention of a soap dispenser not dispensing onto dark skin which reminded me of this HP motion tracker video only working for those with light skin.

Safiya Noble’s Algorithms of Oppression: Algorithms are biased too and can’t make better decisions for us! Google searches put misogyny and racism at the forefront. I didn’t know that in 2015, during Obama’s presidency, searching Google Maps for the N-word gave you the White House, but I do remember a lot of other appalling racist crap coming out during his presidency. It has been incredibly disheartening to witness (during the 2016 presidential election and after Trump) how easy it’s been for white supremacists to manipulate the internet to their advantage. Hate groups thrive in an echo chamber. I’m not surprised that googling “three black teenagers” produces a lot of mugshots while “three white teenagers” produces a bunch of phony stock images. Of course Google would try to ‘correct’ its algorithm to save face, but bumping up a white teenager in court for a hate crime still feels like an affront. I’m also not surprised that professional hairstyles for work are not only racialized but pertain specifically to women. It’s unfortunate that economic bias has been discussed more prominently than racial or gender bias with regard to algorithms. It’s no coincidence that searching for “(Black/Asian/Latina) girls” on the internet gets you porn, which ultimately infantilizes women and eroticizes any and all women of color. It’s apparent that old media is re-emerging in new media.

Technology and Race (19 Feb):

Safiya Noble – Algorithms of Oppression (talk, 45m)

Noble talks about her book Algorithms of Oppression and about a campaign using genuine Google searches, meant to draw attention to how women, and women of color in particular, are marginalized through Google search. The top autocomplete suggestions showed a lot of derogatory attitudes toward women. A similar example of discriminatory results is searching “three black teenagers” vs. “three white teenagers”: the first brings up images framed as criminal, while the second brings up generic stock photos of white teens. Google tried to save face by fixing these searches, but issues of bias remain (like putting white criminals higher up in results). Noble goes into many different ways Google has discriminated against people through its search. I wonder why the queries returned results this way, and how the algorithm was swayed. She also demonstrated how searching “____ girls” often shows highly sexualized images of women.
Also, in 2015, during the Obama presidency, searching the N-word in Google Maps would bring up the White House, which is insane.

My thoughts for Discussion:

– Noble says that 66% of search engine users think that results produced from search engines are fair and unbiased. Why do so many of us blindly believe this? Are search results the same for everyone? What influences what appears in the search results? How can this influence public opinion? Has it done so in the past?

– Do you think your opinions on things are influenced based on the things you search for, and what kind of things do you think would be the most harmful if they were influenced?

– Is it even possible for developers to take precautions against search engines becoming racist, given that the engines are not inherently racist to begin with?

– Why do you think that searching for “girls” vs. “boys” produces hyper-sexualized images and results? Do you say “girls” instead of “women”? What are your thoughts on this? Why don’t people say “boys,” but more often say “men”?

– Do you think the frequency of racist search queries equates to racism? There seems to be a loop: there is a lot of material online with racist ideologies, which brings more visibility to these ideas; then they show up in the search engine more frequently, which may begin to perpetuate racism (a toy simulation of this loop is sketched below). Do you think there is a way to break this loop?
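To make that loop concrete, here is a minimal, invented simulation of an engagement-driven ranker: results are ordered purely by accumulated clicks, and users click higher-ranked results more often (position bias). This is not Google’s actual algorithm; it only illustrates how a small early advantage compounds.

```python
import random

random.seed(1)

# Ten hypothetical search results; two begin with a small head start in clicks.
clicks = [5 if i < 2 else 1 for i in range(10)]

def rank(clicks):
    # Naive engagement-driven ranking: order results by click count.
    return sorted(range(len(clicks)), key=lambda i: clicks[i], reverse=True)

for _ in range(1000):
    order = rank(clicks)
    # Position bias: users click higher-ranked results far more often.
    for pos, item in enumerate(order):
        if random.random() < 0.5 / (pos + 1):
            clicks[item] += 1

print("final clicks per result:", clicks)
```

After a thousand rounds, the two items that started with a tiny head start dominate the click counts: visibility earns clicks, and clicks earn visibility, which is the feedback loop in miniature.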

Ruha Benjamin – Race After Technology (talk, 20m)

Benjamin talks about how the tech industry can make it easier for people to take a stand on their beliefs. She also discusses a project called the “Innovation Project,” which tried to use algorithms to predict at-risk youth in cities. It created so much backlash that a coalition formed in resistance to it, the “Stop the Cradle to Prison Algorithm Coalition,” which pushes back against algorithms that are biased against marginalized groups. I thought this was pretty interesting; I think it’s much easier now to become aware of certain events, but harder to take a big part in them, because people become lazy.

Lisa Nakamura – Laboring Infrastructures (talk, 30m)

Nakamura talks about VR in her talk. Something interesting she mentioned was how “feeling good about feeling bad” is a feeling specific to VR, and how VR works are essentially films, despite many people considering them “experiences”. People are using VR to put the viewer in the position of marginalized people so they can experience these marginalizing events. I thought it was interesting how she talked about VR being created to teach people to feel a certain way, to “hack” your body into feeling more empathetic.

VR could potentially be used in pretty dangerous ways. Could different media like games and movies also “hack” your brain to make you feel certain ways?

If we watched all movies from a first-person perspective, then could we call them “experiences”? What about video games? Are they more like films or experiences?

Introducing race into VR “experiences” causes us to feel GOOD about feeling bad about these racial differences. Why do we feel like we are supposed to feel these emotions? How much work, and what kind of work, would go into making this kind of experience to influence people’s feelings?

Social Interaction, Social Photography, and Social Media Metrics

Jurgenson – The Social Photo

Jurgenson analyzes the aesthetic of the vintage, and why it comes about, through the lens of authenticity. The idea is that humans value authenticity above all else, and photographs used to serve as tangible proof you could hold in your hands, signaling that you had without a doubt seen something. In the early days of Instagram, vintage filters became exceedingly popular because they evoked these old photos, asserting a “realness” about the picture. This all comes into play when examining our social consciousness in the social media age.

Grosser – What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook

The article further develops this idea of social interaction, tying our self-worth to a frankly arbitrary set of values that we pursue because of capitalism. Personally, I vibed with the anti-capitalist mood here, but I always vibe with anti-capitalist moods. Grosser’s Facebook Demetricator reveals all the anxieties around increasing your social numbers as a mode of increasing your own personal value. Instead of simply showing your worth through owning goods or just having a good life, you do so by curating the vision of a good life and reaping the rewards of likes and clicks.
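The Demetricator itself is a browser extension that rewrites Facebook’s interface so metrics lose their numbers (“42 Likes” becomes “Likes”). As a rough illustration of that demetrication idea, here is a small sketch in Python; the regex and example strings are my own, not Grosser’s code.

```python
import re

# Strip the leading count from metric phrases such as "42 Likes" or
# "1.2K people like this", leaving only the fact of interaction.
METRIC_NUMBER = re.compile(r"\b\d[\d.,]*[KkMm]?\s*")

def demetricate(label: str) -> str:
    return METRIC_NUMBER.sub("", label).strip()

for label in ["42 Likes", "1.2K people like this", "17 mutual friends"]:
    print(f"{label!r} -> {demetricate(label)!r}")
# '42 Likes' -> 'Likes'
# '1.2K people like this' -> 'people like this'
# '17 mutual friends' -> 'mutual friends'
```

The point of the transformation is that the fact of interaction survives while the count, and the compulsion to grow it, disappears.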

Jill Walker Rettberg

This one builds on the previous two readings: we live in an age of Dataism, quantifying ourselves down to the most inane details, all in pursuit of what’s functionally clout. This isn’t inherently a bad thing, but in my opinion it’s a little crazy that when you explain data tracking and social media the way Rettberg does, it really does sound like we’re living in a rejected Black Mirror script.

Reflection W-3

What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook | Computational Culture

The Social Photo

This piece discusses the relationship between photography and social media from the perspective of documentary vision. The writer begins with changes in vision: how we see is historically located and socially situated. The extreme popularity of nostalgic filters then raises the questions of why they became popular and what these photos mean. What lies behind those analog-style photos is their social meaning: authority, the feel of the documentary, and a sense of significance. The author discusses the social photo’s attributes and the transformation of the concepts of photography and photos. “What fundamentally makes a photo a social photo is the degree to which its existence as a stand-alone media object is subordinate to its existence as a unit of communication.” The focus is shifting.

Jill Walker Rettberg – What can’t we measure in a quantified world?

“A wonderful thing about digital media is that you can measure it.” There is something in common between the way we have seen photographs and the way we think about data and measurements. Just as photos were taken to be a representation of reality, we thought of data as a representation of reality. But in fact those data usually contain errors and should not be considered reality. Hence the word “Dataism.” Social media is bringing us up to be really good post-industrial citizens. We live in an age obsessed with numbers.

I used to like measuring myself from every perspective I could imagine. But then I found that many of the measurements I made just created noise in my life; they are not the truth of life, and they don’t actually help. I think I gained a simpler yet happier life by throwing some of those criteria away.

What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook

This article explains in detail why we are in an age where everything is evaluated by numbers, and how Facebook constructed its rules of “more.” I like the Demetricator extension very much. I think it reveals an attitude people should take when seeing and commenting on things: more purified and natural.