Project Update

hellooo

After our discussion, I think it's best to hold off on my BFA exhibition until next semester/next spring(?). However, I have still continued to work on my original project: 12 Outside. The idea for this piece was at first just about undocumented families and their relationship with the country through governmental surveillance and physical dislocation. Through a story about a grandmother and her grandson, we follow this journey as they get a knock on their door: ICE agents attempt to take the grandmother into custody so they can question her visa status, while her grandson acts as a physical and verbal buffer.

However, this quickly became more of a challenge to produce due to the sudden circumstances. COVID-19 has made it physically impossible to produce it with actors or any other additional help. But the bigger problem is the treatment and consequences that undocumented people are facing now during these trying times. Whether it's at the detention facilities or in their intimate homes, undocumented families are still being ignored and blatantly overlooked, made invisible once more. So to reflect this, I've made the video even without any actors physically there. I've tried to frame shots as if they're still there, and I've added voiceover from actors who have sent me sound files.

I've attached some stills and a screenshot of the Premiere workspace. I haven't fully rendered the video, so it's taking a long time to export. I'm also still working on mixing in the dialogue.



BFA Update

So I have decided to postpone the project that I was working on for the BFA. I was originally focused on making a film-based installation piece (12 Outside). However, I don't think it's the right time to continue pursuing it, for the safety of everyone who would be involved in producing it (actors/production helpers). I was thinking about ways around it, but I think it would lose core qualities that the piece was centered on. It would also turn into something quite different, and I don't think the message would still be there. Besides, the film was a part of the installation, which people would no longer be able to experience physically. I even contemplated using my family as the actors (or even just myself with different makeup/costumes) hahah, but that would be too distracting. It's definitely a response to the situation we're under, but I think it would be too many ideas going on at once.

So I’ve been thinking of other ideas I could possibly do.

Maybe I could do a film, but I would have to limit the characters or get creative with special effects makeup. It might be cool to make a film that has one person playing multiple roles. Like how different can I possibly make myself so that people don’t realize it’s the same person?

Besides that, another idea is to do something code-based. I liked the web camera code I did for the last gallery. Although it was set up in a physical space, it could probably work well online too. I was thinking of doing something that creates a digital "footprint" of the user using their webcam or audio. Something that could individualize the user and give them a visual (or maybe audio) response to who they are; a rough sketch of the idea is below. It's not that thought out yet, but I like that it's shareable and personal.
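
To make the idea concrete to myself, here is a minimal browser sketch of what I mean: grab one downsampled webcam frame, fold it into a numeric seed, and draw a pattern that is deterministic for that seed. Everything here (the canvas id, the hashing approach, the pattern) is a placeholder I'm imagining, not code from the gallery piece.

```ts
// Minimal sketch: derive a visual "footprint" from one webcam frame.
// Assumes a page with a <canvas id="footprint"> element.

async function captureFrame(): Promise<ImageData> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();
  if (video.readyState < 2) {
    await new Promise((r) => (video.onloadeddata = r)); // wait for a real frame
  }
  const grab = document.createElement("canvas");
  grab.width = 64; // heavy downsample: only coarse features matter here
  grab.height = 64;
  const ctx = grab.getContext("2d")!;
  ctx.drawImage(video, 0, 0, 64, 64);
  stream.getTracks().forEach((t) => t.stop()); // release the camera
  return ctx.getImageData(0, 0, 64, 64);
}

// Fold the pixels into one numeric seed (an FNV-style hash):
// different viewers/lighting tend toward different seeds.
function seedFromPixels(img: ImageData): number {
  let h = 2166136261;
  for (let i = 0; i < img.data.length; i += 16) {
    h = Math.imul(h ^ img.data[i], 16777619) >>> 0;
  }
  return h;
}

// Draw a pattern that is deterministic for the seed: the same person in
// the same light should see roughly "their" pattern come back.
function drawFootprint(seed: number): void {
  const canvas = document.getElementById("footprint") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;
  let s = seed;
  const rand = () => {
    s = (s * 1664525 + 1013904223) >>> 0; // small LCG, reproducible from seed
    return s / 4294967296;
  };
  for (let i = 0; i < 200; i++) {
    ctx.fillStyle = `hsl(${rand() * 360}, 70%, ${30 + rand() * 40}%)`;
    ctx.beginPath();
    ctx.arc(rand() * canvas.width, rand() * canvas.height, rand() * 20, 0, 2 * Math.PI);
    ctx.fill();
  }
}

captureFrame().then((img) => drawFootprint(seedFromPixels(img)));
```

The nice part of hashing a frame instead of storing it is that the piece could respond to "who you are" without keeping any image of you, which fits the privacy themes I keep writing about below.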

I could also explore photography a bit more. Maybe I could do directed portraits of myself or others that reflect global isolation. Like maybe an overly dressed person throwing out the trash, or just a really vacant parking lot. Maybe even the different setups of our makeshift work rooms. Or maybe just capturing what's visible in all of my windows. I'm not sure what I want to do yet, but I definitely don't want it to hide what's going on right now. Whatever I end up doing, I want to connect it with a larger issue and set up a fundraiser alongside it. Since it's going to be online, hopefully it can raise some money for a community that the work relates to in some way.

AI / Predictive Analytics / Recommendation Algorithms (11 Mar):

James Bridle – Something is wrong on the internet (Medium)

The internet is for more than just adults. The world ranges in age groups, and the internet has begun to cater to all of them, revealing demographics once hidden. Normally in America, we rarely see children's adverts/commercials/shows/content unless we're in that bubble as a parent, kid, grandparent, etc. The internet is different because of how accessible it is for children. In real life, children typically can't buy things with their own money, so the adverts are limited to the particular stores and environments where they can be found/where their parents take them. However, these past generations have shown us that children are now capable of exercising individual agency on the internet. Therefore, videos like the ones described can now be advertised specifically to the kids, and not their parents. Viewings have turned into monetary gains for the video makers. Children now have the power to provide creators financial results through views instead of direct dollar transactions. Video makers have capitalized on children's entertainment with platforms like YouTube, using algorithms to reel children (and parents?) into endless viewing.

These automated algorithms are dangerous to those who suffer the consequences of their malfunctions. With every automation comes a percentage of fault. The problem is most prevalent when we ask where the flaws showed up, and to whom. With children as the audience, we can easily see how screwed up it is: these "accidents" are harmful despite their intent. This entire system is applicable to all types of algorithmic systems in society.

In what other systems can you see these faults/"accidents" being potentially harmful? Why are they necessary to keep or regulate? Is this fixable? What is being gained at the cost of the consequences? What's more important?


Rachel Metz – There’s a new obstacle to landing a job after college: Getting approved by AI (CNN)

AI is being used to challenge our roles as potential employees. Our positions in jobs are being calculated and analyzed by AI.

How does it feel knowing that AI is a potential barrier for employment? Is it justifiable?

When making a decision, we tend to do research in order to make an educated decision. But AI allows companies to make "educated" guesses about what we'd look like as an employee for them. But again, AI is biased. What is it based on? What is it looking for? Why are we shifting our career identities into quantifiable measures that suit an "ideal model"? What do you think that does to the interview process for jobs? Would you guys prefer a company to judge you through your "paper identity" or through an in-person interview?


Jia Tolentino – How TikTok Holds our Attention (New Yorker) (read or listen)

TikTok: performs for personal attention retention vs Instagram/Facebook: performs for direct personal communication

Young generations can become self-made celebrities through social media.

What’s the point of TikTok? Why are the children so good at it, and why do adults have trouble finding success in it?

Is it possible to make something so much for children, and so far away from adults, that it becomes impossible to capitalize on? If adults can't find success on a platform, what happens?

Rosa cinematic universe on TikTok => self-made celebrity example, using talent to make yourself… what happens when we don't have a company/label backing up a person's talent? What happens when the audience becomes the agency that supports/promotes/invests directly?

How do we categorize different social media genres(?)

Eric Meyer – Inadvertent Algorithmic Cruelty (article)

“It feels wrong, and coming from an actual person, it would be wrong.  Coming from code, it’s just unfortunate.”

The design is for the ideal user, the happy, upbeat, good-life user. It doesn't take other use cases into account.

This makes me think of finstas again… there's no space for a user who is anything other than happy. These platforms reduce us to minimal emotions and set our defaults to be happy, but we rarely all are… so why does social media work either way? If it's made for the happy user, does it turn us into happy users? If we smile more, will we actually become happier? Or are we facilitating an internet facade? Or is social media really making us happier? If not, then why do we stay on it?


Rachel Thomas – The problem with metrics is a big problem for AI (article)

Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” 

Automated essay software judges on vocabulary/grammar => what about students who have learning disabilities that consequently result in grammar mistakes? What about students who don't learn textbook English? Why is slang unprofessional? Who's setting the "bar"? And what does grammar have to do with evaluating a student's ideas/argumentative reflection? A toy example of how gameable this kind of measure is follows below.
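
To spell out Goodhart's Law in code: here is a deliberately naive, entirely hypothetical scorer that uses "rare vocabulary" as its proxy for quality. Once students know the measure, padding an essay with long words beats writing a clear argument.

```ts
// Toy "essay scorer" that rewards rare vocabulary, standing in for the
// proxy metrics automated graders lean on. Entirely hypothetical.
const COMMON = new Set(["the", "a", "is", "and", "to", "of", "in", "it", "i"]);

function vocabularyScore(essay: string): number {
  const words = essay.toLowerCase().match(/[a-z]+/g) ?? [];
  if (words.length === 0) return 0;
  // "Impressive" = long and uncommon. This is the measure-turned-target.
  const rare = words.filter((w) => !COMMON.has(w) && w.length > 7);
  return (rare.length / words.length) * 100;
}

// A clear argument in plain words scores 0...
console.log(vocabularyScore("The test is unfair because it grades words, not ideas."));

// ...while padded jargon scores 100. The measure stopped measuring quality.
console.log(vocabularyScore("Multifarious perspicacious considerations necessitate comprehensive recontextualization."));
```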

Big Data / Algorithms / Algorithmic Transparency

Janet Vertesi – My Experiment Opting Out of Big Data…  (Time, short article)

I think it's interesting to see how Vertesi was actually at a financial disadvantage when hiding her pregnancy. Loyalty reward cards were excluded from her options when buying supplies. She couldn't arrange a baby shower gift list. Her husband even had to buy multiple gift cards to purchase a stroller without leaving a trace of their card information. In this way, there's an institutional consequence for a citizen who lives with privacy. Vertesi even points out how she felt like her reputation as a person, friend, and family member was being compromised. This adds to the pressure that the US pushes onto its citizens. Because social media has surfaced over these recent decades, recording your identity online is socially expected of you. Social media sets you up to self-document your life for big-data collectors, whether you know it or not. It's set up to feel like a personal choice (and in a way it is, because social media is communicative and collaborative), but it's only when we stop conforming that we face repercussions.

This whole idea makes me think about what it would look like to do this experiment long term. Is having privacy bad? Why is privacy so interconnected with secrecy? What about the people who don't have access to online platforms, credit cards, etc., the people for whom these ways of documenting aren't accessible? How does this play into institutional aid and acknowledgment of these communities?

Virginia Eubanks – Automating Inequality (talk, 45m)

Why base the future on the past? And how do we integrate basic human rights into our society? How are we investigating citizens, and how are we categorizing/organizing them into a paper society? How is a person's integrity compromised through the loss of their privacy?

This keynote made me think about the effects of a data collecting society. It also raises the question of how I personally am being documented and the way that documentation changes my life. How does this compare to a different identity? And how comfortable am I with this?

Walliams and Lucas – The Computer Says No (comedy skit, 2m)

This reminds me of last week's articles, which brought up issues of technology's reliability. I think this skit does a great job of pointing out how easily people conform to what technology says. There's not a hint of doubt in the receptionist's mind that the computer data just might be wrong. It's gotten to the point where the computer has higher credibility than the patients, who are the most direct source of information. This builds distrust from human to human as trust in tech rises. Although technology is made by humans, the visual element of the machinery and the manual input required from the user to activate a response make its answer somehow otherworldly (elevated?). It's suddenly disconnected from humans and their inaccuracy. People can constantly be wrong, so it's really easy to rely on machinery to tell the truth. But how is this working against us as a collective society? If we can't even trust the direct source because it's human, how are we disconnecting ourselves from having loyalty, trust, and commitment to one another? On a larger scale, how is big data replacing our voices in places as important as elections, the census, and social/government profiling?

Cathy O'Neil – The era of blind faith in big data must end (TED Talk, 13m)

Cathy O'Neil develops a lot of the questions I had above in her TED talk. Algorithms are biased on what success is; they all have their own version of it. The people who create these systems are biased. It's all just a mindset, but adding numbers and equations makes it look like a quantifiable value. O'Neil brings up how numbers relate to math, and how math is intimidating, which ultimately discourages us from questioning algorithms. This then sets up a blind faith in big data out of fear of being wrong.

Trusting these things so blindly leads to big repercussions. Teachers can lose their jobs because an algorithm says they are not doing well. Suddenly these data collectors are heavily relied on, and if there is any doubt, it's shut down automatically. Not even letting people know about the algorithm pushes the rhetoric that the mass public is incapable of understanding. Denying people access to the system denies them the chance to be educated; assuming they're incapable of understanding it assumes that they can't ever understand the truth. This further disconnects people from tech and elevates big data to a higher position than the average citizen.

Random thoughts:

"Silent but deadly," "weapons of math destruction" –> private companies sell to government, private power, "there's a lot of money to be made in unfairness"

Standardized testing = lazy, biased, mathematical, regressive; automates the status quo

DNA testing: 23andMe

How do we support the ones being scrutinized? How do we protect the people who are hidden, lowered, and oppressed?

Frank Pasquale – Black Box Society – chapter 1 (pp 1-11)

"Real" secrecy, legal secrecy, and obfuscation.

Real secrecy establishes a barrier between hidden content and unauthorized access to it. Legal secrecy obliges those privy to certain information to keep it secret; a bank employee is obliged both by statutory authority and by terms of employment not to reveal customers' balances to his buddies. Obfuscation involves deliberate attempts at concealment when secrecy has been compromised.

This idea that secrecy is so private and unbreakable is interesting. I keep thinking back to the visual of locking your front door every day. Person A can lock their door each day and use multiple chains/locks/doors. They can lock their car, lock their phone, and set up a password for their laptop. But at the end of the day, our hardware is just hardware. A Person B can show up and, with enough intention and supplies, break into the house, phone, laptop, car, etc. This habitual routine of locking our stuff offers only a very limited extent of protection. If anything, it's a false sense of protection that makes us think we are safe, that we have had some agency to protect ourselves, and that we have ownership of our machinery. But at the end of the day, it's breakable.

So when I think about these large companies that hold so much data from the people, that oppress communities through biased data collection, there must be a way to counter that, right? Surely they have multiple layers of protection unlike the ones they give the public (power, law, government, money, etc.), but at the end of the day it's machinery. Is it possible to have absolute privacy? And what does privacy mean? How hidden does information have to be to be considered "private"? Does that even matter? Is it more important to look at the consequences instead?

Technology and Race

Lisa Nakamura – Laboring Infrastructures (talk, 30m)

I really enjoyed this talk. The idea that VR systems are ultimately manipulating human emotion is really interesting. There's one section where Nakamura talks about how people are using VR to replicate the experiences of the marginalized in an effort to "fix racism." There's this idea that if people can find an emotional connection to the oppressed, we can have faith in humanity again. But faith in humanity can't be restored with mere empathy. Empathy here is being used as an excuse not to feel bad about the oppression that continues to happen. The "yes, there's oppression still going on, but I feel bad about it, so it's not that bad" attitude is self-centered. It excuses us from handling these issues in an institutional manner that would actually make change/progress. Showing that we feel empathy dismisses us as the ones to be blamed, when in reality the people who can experience the VR system are probably some of the most privileged, with the ability to help those who aren't. Nakamura brings up that putting ourselves into the "shoes" of someone else can be oppressive in itself when they might not even have full rights to their own body in the first place. It made me think of an article I read in Interaction II (linked below). It brings up efforts to tie together ethnography and VR so that people have global access to communal documentation. But it makes me question not just VR, but the way we think about documentation when it comes to marginalized communities. What's the foundation for documenting, and why is it important? Are we right to take these experiences from them and share them around the world, and how are these "experiences" subject to instilled bias?

Ruha Benjamin – Race After Technology (talk, 20m)

Racial norms shape how we understand tech and society. The Citizen app reflects its creators' biases. Instead of stopping crime, we're guided to avoid it. So, what does this say about how we choose to handle racism and crime?

Celebrities promoting wrong apps? Jay-Z?

This entire thing about creating algorithms to find a suspect/criminal is crazy to me?? Using facial recognition from LinkedIn and all the other platforms brings to light how much data can be taken from us without our intention. This way of systematically organizing potential suspects from previous data collection seems too risky. It trusts a computer to navigate future criminalization and curate crime ratings that can be racially directed.

Safiya Noble – Algorithms of Oppression (talk, 45m)

Google "accidents" that create racial bias and stereotypes in Google search. Three Black boys vs. white boys.

WHY is the answer always technical? Why do we always look to computers for answers? Why do answers have to disconnect emotionally? Why is emotion bad and why are we encouraged to think outside of morals/emotions?

Technology is not flat, not neutral; it is man-made.

Social Interaction, Social Photography, and Social Media Metrics

Ben Grosser – What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook (article)

“The audit employs quantification as its way of understanding progress and tracking compliance.”

“That prescription starts with the transformation of the human need for personal worth, within the confines of capitalism, into an insatiable “desire for more.” Audit culture and business ontology enculturate a reliance on quantification to evaluate whether that desire has been fulfilled.”

The Facebook Demetricator also questions the fact that we have to respond to information. Yes, the visible numbers push the idea that more is more. But if the software makes all comment/like/action counts disappear, then there's no pressure to respond to or engage with the material at all. In this sense, we take information in a new way and aren't expected to provide anything in return, which can be refreshing. (A small sketch of the idea is below.)
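
For myself, a minimal sketch of how a demetricator-style script could work in a browser. The selectors are hypothetical placeholders, not Facebook's actual markup or Grosser's extension code; the point is just that the activity stays while the numbers go.

```ts
// Demetricator-style sketch: blank out numeric engagement counts on a page.
// The selectors below are invented placeholders for illustration only.
const METRIC_SELECTORS = ["[data-like-count]", ".comment-count", ".share-count"];

function demetricate(root: ParentNode = document): void {
  for (const selector of METRIC_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      // Turn "1,204 likes" into "likes": the activity stays visible,
      // but the pressure of the number disappears.
      const stripped = (el.textContent ?? "").replace(/[\d,.]+/g, "").trim();
      if (stripped !== el.textContent) el.textContent = stripped;
    });
  }
}

// Re-run whenever the feed loads more content (infinite scroll).
new MutationObserver(() => demetricate()).observe(document.body, {
  childList: true,
  subtree: true,
});
demetricate();
```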

Social performance is valued by metrics.

What about finstas? Finstas are kind of an example of how social platforms limit human emotion. Finstas come out of a desire to express negative/private vulnerability to a small, selected public. The accounts are made private, and they rarely exceed more than 30 followers. Not only does this reflect a comparison of intimate vs. public "friendships," it shows an effort to express a person's gloomier experiences rather than a flashy public persona. Instagram is heavily image-based, but finstas carry heavy captions. However, the audience, the followers who follow the finsta, are often left uncertain. How should they respond to flows of emotional venting? Should they like it? Comment? Suddenly, they aren't sure what to do because Instagram limits their reactions (like Sim characters??). To make matters worse, the people who post these things often don't get lots of likes/comments. Following along with our value for metric validation, this can hurt our emotional state even more.

How do we think this affects the kids who are born and raised alongside this technology? Is there a different result in real-life communication? If we live so heavily dependent on social media, will that weaken our skills of human compassion and agency (which software lacks)?

Nathan Jurgenson – The Social Photo – (book, pp. 1-15)

truth/fact = nostalgia/history = vintage/faux aesthetic

Damon Winter's Afghanistan photo: it creates an archaic narrative with the black-and-white vintage aesthetic, regressing war to history rather than the present day. Vintage aesthetics warp time and dislocate the event from the present. This can be dangerous for the conflicts/communities presently in danger because of the lack of urgency the photos evoke. Is the aesthetic worth it if it means events are dislocated from their reality?

social photography = domestic "images" vs. photography = artsy professional

I feel like this statement speaks to the hierarchy that museums hold over their versions of "art." I disagree that "real photography" has "formally artistic" patterns while photography that represents more candid, personal material consists only of "images." Why can't informal imagery be considered photography? Why does art only hold authenticity within gallery walls?

"For my purposes here, what fundamentally makes a photo a social photo is the degree to which its existence as a standalone media object is subordinate to its existence as a unit of communication." Aren't all photos mainly communicative, though? Why is the author stressing such a hard divide between traditional art photography and a now much more accessible, versatile version of photography? Both can be viewed for aesthetic/skill AND for social context/narrative.

"Without an audience for every snap, photography before social media had to work much harder for attention; it had to be important or special or worthy to justify being seen." Yes and no. Yes, social platforms can bring an equal playing field for photography being seen. But no, it's not much different from the past. Having lots of Instagram followers creates an algorithm that promotes my posts to the public. Having a high reputation and privilege in real life also set your work apart even prior to social photography. What do you guys think?

Jill Walker Rettberg – Wearables and how we measure ourselves through social media (TEDxBergen talk)

baby monitor = virtual nurturer/caretaker

We're building technology to the point where it engulfs our lives. Fitbits, baby monitors, life analysis: they are all passing a line from aiding us to taking over our actions completely. To think that a baby monitor is held responsible (to some degree) for a baby's nutrition and health empowers technology to be the caretaker. It shifts the nurturing role of a parent/guardian onto technology. I'm kind of against this because of how much authority metrics are held to. Will we believe quantifiable data from a computer more than a babysitter? How does that change our relationship and trust with one another? How are we taking data as "pure truth"? Dataism: the ideology that data is truth. What is the guardian/parent left to do as a guardian/parent?

Measuring through a phone relies on technology being a part of your body at all times. There is no separation of identity between the person and the phone if we hold all the phone's data to be ours.

BFA Exhibition Idea

  • Subject: I’ve attached below my synopsis of the potential short film that I plan to pursue.
  • Form and Method: I plan on producing it from pre-production to post. I'm considering forming a group to help me produce it, since it can be quite big. Having people who can help with conceptualizing, camera work, etc. will allow me to work on it in more depth. As for the final piece, I would like it to be screened at the gallery, preferably with some physical setup to help emphasize the concept.
  • Context and Audience: I'm influenced heavily by my past experiences and the ways they have been shaped by my identity. As a Latinx woman born and raised in Chicago, I strive to represent my community and the ways that we are stereotyped and given space. Aside from merging video and installation art, I am thinking about publishing it on online platforms for broader accessibility.

Interface Criticism / Tactical Media / Software Art (5 Feb)

Wendy Chun – Programmed Visions, (book, pp. 1-2, and optionally pp. 3-10)

The introduction, "Software, a Supersensible Sensible Thing," questions the global picture of new media. Chun asserts the grandiosity of the internet and the intangibility of new media, which make it difficult to constrict into one idea/concept. As a solution, Chun observes that the public gravitates toward software in order to bring clarity to new media. Chun compares culture to software and nature to hardware. Software is used as a cultural metaphor that brings clarity to ideas/concepts otherwise difficult to understand. By agreeing that the concept has a system of truths, any concept (like new media) becomes less intimidating. But Chun also implies that this may disregard the actual complexity of new media by categorizing it as a linear system. Through this disagreement, Chun attempts to ground the uncertainty that new media actually holds.


Matthew Fuller – How to be a Geek (book, pp. 12-14, and optionally pp. 63-71)

"Equally, much of the work here operates with the concepts of computer science as fundamentally cultural and political, as something core to contemporary forms of life and thus as open to theoretical cultural exploration as much as architecture, sexuality or economics might be." (pg 12, para 3)

People are so tied to their entitlement/ownership of technology that they forget to question the effects that coincidentally occur and shape it. Viewing technology as unfamiliar and open-ended brings new ideas of its inner mechanics to the surface. Treating it as a social science, as opposed to a strict technical science, allows us to think critically.

If technology is political and cultural, how/where can we begin to unpack it? Do we all have the same perspective from which to critique contemporary technology? Can there be an overarching understanding of tech?


Geert Lovink – Sad by Design (podcast w/ Douglas Rushkoff, 60m)

This podcast was really interesting to me because Lovink questions who the audience is and how that plays into contemporary technology. Despite the grandiosity of new tech, it's drastically underused because complete knowledge of it isn't accessible to users. He brings up how smartphones have so much potential, but the general public might not fully understand all the inner workings of the device. This idea bounces back to the last article, both relating knowledge to power.

On another note, Lovink also elaborates on how technology and the internet have shifted toward quantitative communication. People enjoy numbers that reflect their social identity because of how straightforwardly/assertively they're presented. I think it's easier to understand how we fit into a society if we're given these quantitative measures than if we consider a nonlinear relationship. To what extent is there truth in numbers?

Soren Pold – New ways of hiding: towards metainterface realism (article)

This article talks about how interfaces have steered toward minimalism in order to convince the user of their realistic appeal. By hiding 1) complicated interface mechanics and 2) physical motor control, contemporary tech likes to make things as realistic as possible for the viewer. It is interesting to point out that the "reality" it tries to replicate is a reality that hides information from the user. But does creating a system that goes unnoticed by the viewer make the system less "real"? Making something so "realistic" by taking away the user's control of it (in a way) kind of promotes an artificial reality. I started thinking about The Truman Show, because it also manipulates tech so that the user (Truman) lives his life unaware of the system that surrounds him.

I think all the artists and their projects were very inspiring to see. From stripping a software down to its machinery to bringing global political communication into one space, they are all very exciting to hear about. The way they take a stance in new media, and the variety of spaces, tactics, and methods they use, opens up the range of art that I can imagine doing.

Surveillance, Privacy, Resistance

Carole Cadwalladr – ’I made Steve Bannon’s psychological warfare tool’ (Guardian)

Wylie even goes as far as breaking a non-disclosure agreement, which puts at risk not only himself but his boss and affiliates. It brings into question at what point it is necessary to break non-disclosure agreements in order to provide the public with the truth.

This idea of studying personality by quantifying it into patterns of likes/dislikes according to profile data is really interesting. I imagine that these results are limited to the selected emotions/responses allowed through Facebook. But would the same patterns be visible through different social platforms? Where is the line between Facebook profile data and all social media profiles? Should it be fair use, or censored for the public/a selected public?

It's crazy to understand that politics have now infiltrated the online public in order to promote and campaign for representatives. Facebook discreetly sharing profile data goes to show how corrupt the online playing field has gotten. Manipulating communities and tailoring propaganda to them has always existed in history, but this takes it a step further. How far is too far (in the sense that we as a public lose our freedom of thought)?

Stuart A Thompson and Charlie Warzel – One Nation, Tracked (NY Times)

Phones, in a way, have become the new census. Except instead of the information being disclosed to the actual participants, the data is collected secretly. And instead of belonging to a federal/government department, it is held by big corporations that impose severe consequences on their workers if they ever reveal information. For me, it questions the legitimacy of the federal government. It also questions the morality of the people behind the data: what are their intentions? And why are they hiding the data in the first place?

Not only do they have information about our location, they have information about our voices and faces. It reminds me of Amazon's facial recognition data that Jeff Bezos attempted to hand over to ICE in order to identify undocumented customers. By doing so, profile data is stored and handed off to federal operations that oppress, segregate, and harm lives in America.

More info here: https://www.aljazeera.com/ajimpact/amazon-role-immigration-crackdown-190716194004183.html

Drew Harwell – Colleges are turning phones into surveillance machines (Washington Post)

When I read this, I immediately thought about two apps. One of them was Kahoot. The other was an app that a friend mentioned to me a couple semesters ago. The app they described would track your location and make sure you were on campus during your classes. It would add points to your profile depending on how long you were in the lectures, labs, studios, etc. Kahoot also monitors students, but through anonymous data collection that they are aware of when they participate.

This article brings the topic of micromanaging into phone surveillance. I think it's interesting that the students push back on this because of their discomfort. They feel that it's not only an invasion of privacy, but that it sets them up to conform to being systematically surveilled. Especially if this tactic is implemented in education and academia, it becomes a grave and permanent mindset for upcoming generations.

Jenny Davis – A clear case for resisting student tracking (Cyborgology)

“One social consequence of SpotterEDU and similar tracking applications is that these technologies normalize surveillance and degrade autonomy. This is especially troublesome among a population of emerging adults”

I definitely agree with this. Micromanaging students specifically also normalizes intensive surveillance. In addition, I don't think it's necessary. As a student, it devalues the trust, independence, and personal motivation behind education and replaces them with fear of consequences/punishments. It underestimates the potential of a single person and encourages the idea that we need someone to look after us in order to be successful. Collecting all this data steers toward shaping a collective persona rather than unique, independent individuals.

Helen Nissenbaum – Mapping Interventions – Digital Democracies Conference (talk, 30m)

Obfuscation: “the production, inclusion, addition or communication of misleading, ambiguous, or false data in an effort to evade, distract, or confuse data gatherers or diminish the reliability (and value) of data aggregations.”

Example: hiding a plane's location by dropping shreds of paper (chaff) that confused radar frequencies during battle. Compare spiders that build web-based decoys of themselves to confuse the predators that would eat them. I'm curious how the need to confuse and manipulate environments for the sake of private safety relates to modern-day data collection; a toy sketch of digital obfuscation is below.
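
Thinking about what this looks like in code, here is a toy sketch of the noise-injection idea: fire off plausible decoy searches at random intervals so an aggregated profile of the user becomes less reliable. The endpoint and decoy vocabulary are made-up placeholders, not any real tool's behavior.

```ts
// Toy obfuscation sketch: periodically send decoy queries so that any
// profile built from "what this user searches for" fills with noise.
// The endpoint and vocabulary are placeholders, not a real service.
const DECOY_TERMS = ["weather", "lasagna recipe", "bus schedule", "used bikes", "crossword help"];

function randomDecoy(): string {
  return DECOY_TERMS[Math.floor(Math.random() * DECOY_TERMS.length)];
}

async function sendDecoyQuery(): Promise<void> {
  // From the data gatherer's side, this request looks like a real one;
  // that indistinguishability is the whole point of obfuscation.
  const term = randomDecoy();
  await fetch(`https://search.example.com/?q=${encodeURIComponent(term)}`);
}

function startObfuscation(): void {
  const delay = 30_000 + Math.random() * 120_000; // 30s to 2.5min of jitter
  setTimeout(async () => {
    await sendDecoyQuery().catch(() => {}); // a failed decoy is harmless
    startObfuscation(); // schedule the next one
  }, delay);
}

startObfuscation();
```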

Banning, appealing, and applying legislation to prevent private data collection. Investigations reveal that advertising networks play a part in continuing to aid large corporations (Google).

"Information services and platforms need machine-readable humans in order efficiently to exploit our most human endeavors (sharing, learning, searching, socializing, communicating)."

A battle of morality. Whose right is it to own information? How much of an invasion of privacy is it? And does profiling manifest efficient control?

Shoshana Zuboff – The Age of Surveillance Capitalism (video documentary, 50m)

Talks about how the biggest companies use the data they collect from us, and questions how we can regain control of our data. Surveillance capitalism: the idea that humans are commercialized, their data used for monetary profit among big companies.

Data not only informs the collectors of a user's history but of their predictable future. With the data exchanged/sold between big businesses (Google), they can easily create algorithms that predict what users will likely do next. They can infer future geographic locations or purchases. Through this, companies can adjust their marketing and even take action within media propaganda to lure their customers.

It's difficult to come up with a reasonable solution for the daily consumer. Since phones and technology have become so prominent and necessary, it is difficult to boycott these platforms for the sake of web privacy. While I may risk my information online if I carry my phone around, I may risk my life if I end up in a dangerous real-life situation without a phone to navigate to safety. And even if I delete and monitor the information my apps take from my internet profiles, there is still the constant collection of audio and facial data by the phone itself.