Rough Draft for BFA

Link: https://youtu.be/CYKHa-hOy_I
Title: (Sadly I haven't come up with any other titles, so for the moment it's still) Suck it Up Buttercup


Artist Statement:

My goal with my work has always been to tell a story and give representation to those who haven't had it. Over the years I have come to find focus in the things that affect me the most: disability, mental health, and, on a grand scale, identity. How it's constructed, by whom, and which labels become the most important to us.

I want people to ask themselves: what aren't they noticing, what do they take for granted, and where are the disparities that don't affect them? How can we do better in the face of those things? Whether "better" simply means being kinder or means dismantling unfair and harmful rules.

I have dabbled in many things, but the main mediums of my work have come to be printmaking with intaglio plates, relief blocks, and screen print, as well as video works and live performances.

Through these many mediums, I hope that my work will interest and help not only the disability community but also anyone willing to learn and pay attention.


Socials:

Website: avalonruby.com

Youtube: https://www.youtube.com/channel/UCgw7xUA5Br_j12gO4iyNAfA

Instagram: https://www.instagram.com/pendragons.art/?hl=en


Image:

Updated Thesis Project

A lot has changed in my project, some of it from even before COVID-19. The basic rundown is this.

The project is now a single video (instead of illustrative imagery or three videos) using my own experiences and those of the people I interviewed, specifically in relation to sleep and fibro. Instead of direct quotes being the backbone of the piece, it is the overarching ideas that are influencing it.

The video will contain collaged backgrounds/people, rotoscoped imagery (weights, buttercups, etc.), along with sections of live-action footage.

The video has been storyboarded and I’m planning on shooting today.

AI / Predictive Analytics / Recommendation Algorithms

Something is Wrong on the Internet

This is not the first time I have heard of this phenomenon; it's been an increasingly terrible problem that YouTube has had for years, culminating in their rule change to comply with COPPA, which has only created more harm for creators and not really fixed the problem. Some of these videos are just weird, like the author said; they are made in an AI factory of sorts, so it's obvious they are not made by humans but rather through an algorithm based on what children have previously watched.

While all of this content, including the harmful, creepy, and abusive things, is available to children, and that is something to be concerned about, it also reminds me of when I was a kid watching Pewdiepie. The games he played were horror games not meant for children my age, and the jokes he made were likely not meant for children my age either. There had to be moments when, through no fault of my parents, I saw something I shouldn't have. I realize that comes off like "my parents did X terrible thing and I turned out fine!", but it does raise the question: at what point is content too much, and the problem itself, versus just a normal learning curve of consumption?

This is where I disagree with the author. I do think it is on YouTube and these other platforms to take care of this problem, because they are the platforms that house such content and gave this content the ability to be created. Not only that, but the way they've decided to "fix" it is not the correct one, as it has not only failed to fix the issue but harmed content creators who had nothing to do with the problem in the first place: from forcing adult creators making content for adults to constantly censor their work, to doll-customizing channels having to beg us to protest the change so their content doesn't get deleted entirely.

There's a New Obstacle to Landing a Job After College

This reminds me of the students whose attendance is monitored by an app. The only thing I can think of is: what happens when the AI and its algorithm are inevitably wrong? When it says overqualified candidates aren't a good fit and underqualified ones are? Is this going to be an even greater tool for bias in these industries; what about racism, classism, or ableism? I also find it baffling that an AI is meant to understand empathy when it itself likely can't empathize.

It's already been proven that standardized testing isn't an actual measure of students' intelligence and understanding of the material, so why would a standardized test for interviewing be any better? No social behavior is foolproof for analyzing a person. For example, saying that not looking someone in the eye means someone is lying, when really that person could just be anxious, or on the autism spectrum, or have something else that has informed their behavior. Even other people have trouble reading these things, let alone an AI being "trained" to analyze this data, which, as we've discussed before, isn't foolproof either as data itself.

How TikTok Holds Our Attention

What scares me the most about both reading this article and TikTok's algorithm itself is how much it can become a feedback loop. This can mean things like funny cat videos being the main thing on someone's feed, but because of TikTok's racism problem, it can also just turn into a loop of white supremacy or other terrible things. Like most social media feeds, it caters to what the user wants, but TikTok has proven to be one of the most efficient social media platforms at doing this. Facebook and Instagram are peppered with so many sponsored ads and recommended posts that half the time it's impossible to see the actual people you follow, and I personally just hate Twitter so I can't speak to it. TikTok, however, has your feed personalized in mere minutes, and from what I can tell, while it has ads, they aren't like sponsored posts on Facebook acting as if they are more content.
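
To make that feedback loop concrete, here's a minimal toy simulation I sketched (my own illustration, not TikTok's actual algorithm): a feed that recommends in proportion to accumulated watch time. Even a small engagement bias toward one category steadily snowballs, because every extra second watched makes that category more likely to be recommended again. The categories and numbers are made up.

```python
import random
from collections import Counter

# Toy model of an engagement-driven feed (an illustration, NOT TikTok's
# real system): recommend in proportion to accumulated watch time.
CATEGORIES = ["cats", "dance", "cooking", "news", "diy"]

def simulate_feed(steps=500, seed=42):
    rng = random.Random(seed)
    watch_time = {c: 1.0 for c in CATEGORIES}  # no history yet: all equal
    shown = Counter()
    for _ in range(steps):
        # The feed recommends proportionally to past watch time.
        pick = rng.choices(CATEGORIES, weights=[watch_time[c] for c in CATEGORIES])[0]
        shown[pick] += 1
        # The user lingers slightly longer on one category; the system
        # reads that as satisfaction and feeds it into the next pick.
        watch_time[pick] += 1.5 if pick == "cats" else 1.0
    return shown

print(simulate_feed())  # one category steadily pulls ahead of the rest
```

Swap the lingered-on category for something harmful and the same arithmetic explains the white-supremacy loop: the system only sees seconds watched, not what those seconds mean.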

I find the most interesting part of this to be how TikTok is making the lives of musicians better and making them more capable of growing. But like all things, it's about virality, which is a hard thing to track, and there is no good equation for how to become viral.

My biggest worry with TikTok comes from a point that I don't really think the article focused on. A lot of these people are children, and there are basically no regulations in the app to protect them. Young boys are sexualizing themselves to get even younger girls to thirst after them and make them famous. Young girls are doing sexualized dances and being horribly bullied for them. Not to mention all the horrible stuff that can be found there and the feedback loop the algorithm makes; at the end of the day, it's the other people on the site that I'm concerned about, because people are the ones who make horrible comments or do creepy things.

Inadvertent Algorithmic Cruelty

I find it hard to respond to this one because, at its core, it's deeply personal. But the author is right: these AIs and algorithms are thoughtless. They aren't purposefully cruel, but the people who made them probably didn't think about the possibility that someone's year wasn't great, and didn't think about the words or the pictures the algorithm might choose. I think the author's idea, to make using the app a choice, was a good one and a good option for changing the platform for the better. But what are some other ways that these programs could be more empathetic? What are some ways to change the system to account for these types of situations?

The problem with metrics is a big problem for AI

The first thing that came to mind when they gave the example from Google, claiming that watching more YouTube meant people were content with what they were watching, was mental health and executive dysfunction. For example, I watch more YouTube when I am having a hard time focusing or making decisions, because it's a default. I can be bored out of my mind and still be watching YouTube because my executive dysfunction makes it impossible to do anything else. That is just one example of how metrics can't really measure what is most important. It would (hopefully) be impossible for Google to know why I was watching YouTube and understand what that meant for their needs (whether or not I'm happy with the content I'm watching = more money for them). This goes along with the author's point about addictive environments: if I were in a healthier place, it's probable that I would not be watching so much YouTube, but the analytics don't care about that.

Another example like the one in the article comes from autoplay. Now, I have that feature turned off; however, I also watch YouTube to fall asleep (I know, I know, unhealthy sleeping habit). So if autoplay is on and I fall asleep, YouTube can keep playing content for hours upon hours, and that games their own system.
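
Here's a tiny sketch of that proxy problem (the data is invented; the "reason" field is something no real analytics pipeline ever sees): three viewing sessions with completely different meanings that all record the identical number the platform optimizes for.

```python
# Hypothetical sessions: the "reason" field is made up for illustration,
# and it is exactly the part the metric can never capture.
sessions = [
    {"user": "A", "minutes_watched": 180, "reason": "genuinely enjoying a series"},
    {"user": "B", "minutes_watched": 180, "reason": "executive dysfunction, watching by default"},
    {"user": "C", "minutes_watched": 180, "reason": "fell asleep with autoplay on"},
]

for s in sessions:
    # All three look identical to a watch-time metric.
    print(f"user={s['user']}  watch_time={s['minutes_watched']} min")
```

Three hours of happy viewing, three hours of being stuck, and three hours of sleep all count the same, which is exactly why "more watch time = more content users" doesn't hold.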

So are there ways to track metrics that better answer the questions people are actually asking? Should they be more accurate?

Big Data, Algorithms, Algorithmic Transparency

Black Box Society

I would like to put it out there that I had never heard the story of the man and the lamppost before. But it is a really good metaphor for our relationship with technology and the futility of the security we have now. What are some ways we experience what the author defines as agnotology? How does it affect our daily lives? I think at its core this article is about knowledge, and how a lot of our knowledge is hidden from us by the society around us for a variety of reasons, some malicious, like using our data for profit, others not so much. But all affect how we go through the world and how we can access and deal with data and personal privacy.

The Era of Blind Faith in Big Data Must End

The fact that teachers are getting fired because an algorithm said so reminds me of the surveillance articles about students being tracked by apps that were faulty. It puts technology first and literal humans second, and it's so easy for it to be broken or otherwise abused that it's crazy. What O'Neil said is really on point: algorithms repeat patterns, and this would be fine if our world were perfect, but it isn't. To blindly rely on them only furthers disparities, because just like other data- and AI-related things, they are made by humans, and humans have bias.

Automating Inequality

My experiment opting out of big data

A side note before I even start: I find the term "sociologist of technology" interesting, when typically sociology is the study of human society itself. I assume this means looking at society in terms of technology, but the phrasing was interesting to me.

I'm surprised this experiment was even nominally successful, considering the author's examples of friends and family members still going against her wishes with things like Facebook messages. This article really shows how it's almost (read: probably) impossible to be completely off the big data grid, and that it is impossible to simply opt out and have that be enough. There isn't really a choice, as the author states, and it's not a matter of "simply leave it if you don't like it," because our society is set up to make it impossible to leave.

The Computer says No

I always say comedy is the most truthful. This one is difficult to respond to because it's so short, but it gets very quickly to the point of our reliance on technology. It reminds me of the times when I used to do compliance checks for stores that sold tobacco, and the cashiers wouldn't trust the computers if they thought I looked old enough (or would purposefully not type my ID in right). The sketch is about blindly following whatever the computer says, regardless of what your own thoughts on the matter might be.

Race and Technology Responses

Algorithms of Oppression

This video also speaks to the biases of algorithms and technology, and how this is not a private struggle but rather a public one that communities, and the people who make these technologies, need to think about. She speaks to the hypersexualization of women of color in the media, and how this stereotype has been perpetuated even to this day. She says that to combat this we must no longer be neutral on these issues, reject being "color blind," curate the web with a more guiding hand, and continue to educate (especially the people who make these programs, so they can think about the implications of their work).

Race After Technology

The first thing that really struck me here, not necessarily as surprising, but heartbreaking, was how easy it is for something that could be deemed helpful to be turned into a way to enact racism or racial bias. The app Citizen makes sense to me in some ways, but its potential for abuse is just so high. It reminds me of something we've spoken about before, which is that all tech will have human bias, because it is made by us. For instance, any facial recognition software might be more capable of handling white people than Black people because the white person programming it didn't think about it. Therefore technology cannot be perfect, because we are not perfect. I really appreciated that she spoke to how race neutrality, or being "color blind," can be dangerous and harmful. It doesn't look at the differences that matter and need to be addressed, because it treats everyone as if they are the same. Professor Benjamin gave a few examples of ways that people are fighting these imbalances, but what are some other ways that we as artists might tackle these issues?
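
As a toy illustration of that point (my own sketch, not any real face-recognition system): if a nearest-neighbor "recognizer" is trained on 95 examples from one group and only 5 from another, it recognizes the underrepresented group far less reliably, purely because of what it was trained on. The 2-D points here stand in for face embeddings.

```python
import math
import random

# Toy sketch, not a real face-recognition pipeline: 2-D points stand in
# for face embeddings, and nearest-neighbor lookup stands in for the model.

def gauss_cluster(n, cx, cy, rng):
    """n points scattered around (cx, cy): a fake 'group' of faces."""
    return [(rng.gauss(cx, 1.0), rng.gauss(cy, 1.0)) for _ in range(n)]

def nearest_label(point, train):
    """Label a point by the group of its nearest training example."""
    return min(train, key=lambda t: math.dist(point, t[0]))[1]

rng = random.Random(0)
# Unbalanced training data: group Y was barely collected at all.
train = [(p, "X") for p in gauss_cluster(95, 0, 0, rng)] + \
        [(p, "Y") for p in gauss_cluster(5, 2, 2, rng)]

for group, (cx, cy) in [("X", (0, 0)), ("Y", (2, 2))]:
    test = gauss_cluster(200, cx, cy, rng)
    correct = sum(nearest_label(p, train) == group for p in test)
    # The group the training set barely saw is recognized far less often.
    print(f"group {group}: {correct / 200:.0%} correct")
```

Nothing in the code is malicious; the disparity comes entirely from who was and wasn't in the training data, which is the "we are not perfect, so the tech can't be" point in miniature.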

Laboring Infrastructure

This talk discusses the relationship between women of color's labor and technological advancements. For example, the phones that give us access to the internet are made for stupidly cheap in China. Nakamura talks about how these conditions across the board can be seen as a feature rather than a problem within the system; the cost of advancement is the suffering of others. She speaks to how VR can build empathy for these types of suffering, because once someone feels the reality, it's hard to disbelieve it anymore. But we need to use VR in this capacity by giving marginalized groups the resources to create in VR, instead of allowing it to remain colonized in the way it currently is. I think this is true not just for VR but for almost all forms of creative storytelling and empathy-building. Film needs more women, people of color, and people with disabilities. Video games need more women, people of color, and people with disabilities. All of these things rely on technology and stories, but they are mostly being used by one group instead of a broad platform of groups.

In some ways this talk makes me think of Horizon Zero Dawn's plot, which speaks to humans creating things we cannot fix that inevitably destroy the world. Maybe our advancements aren't doing that currently, but they are creating inequalities at an alarming rate.

A lot of my work tries to evoke empathy, but it sometimes borders on pity. So my question may be a little further from the talk itself: at what point does storytelling and empathy push too far? Is there a "too far" when it comes to creating a new reality away from our white-centric world? What are some ways that we can create and tell these stories, and critique society, in a way that enacts change?

Social Interaction, Photography, and Social Media Metrics Response

The Social Photo

The article talks about how photography, as a new medium, went through a similar history and caused a similar change in how we see the world as social media has. We are similarly unsure about how it affects the way we view and interact with society because of how new it is. It speaks to how photography in the social media context has switched from being something special to being an everyday image that we use to document our lives moment to moment, as both communication and a media object. Combined, this makes social photography, which exists in a professional sense but even more so among the amateur photographers now creating this new type of photo and interaction with the world that didn't use to exist, one of culture, memory, and expression.

What might the evolution of the social photo look like in the coming age of surveillance awareness?

What can’t we measure…?

I think at its core this talk accentuates that metrics don't always equal accuracy. Data is not always reality, and sometimes it leaves out the important information. I really enjoyed the example of the sex tracker because it really emphasized this point. The tracker can't measure love, and I'd say it can't measure satisfaction, but it does give data. The important thing I took away is the fact that data is not always 100% accurate. My phone is giving me a metric that is guessed, and knowing that variable matters to whether or not that data is actually important or useful. Measurements aren't always helpful and aren't always necessary.

What do metrics want?

As the article puts it, this piece is about the influence of metrics on our lives, analyzed through the lens of capitalism influencing our worth as humans, and how Facebook specifically is made to continue this influence. First, the article explains that our personal worth is defined by social interaction, which I am one to agree with. Oftentimes our lives are defined by outside sources, whether that be society as a whole, or whether one specific person likes us enough to land us a job. So much is out of our control and influenced by how good we are at networking, making friends, and political plays even outside of politics. If we do all these things correctly, know the right person, and do the right thing at the right time, we can be successful. I think this is why we want more: it reduces our likelihood of missing out on an opportunity.

This is how Facebook plays into our minds. As the article states, our feeds are empty without interaction with our friends or pages. As we gain friends, we gain more content, and as we connect with more people, our Facebook also becomes a reflection of our social lives. One can look at our page and see who we are, which includes possible new opportunities. Facebook can be used as a social platform but also as one of advertising, and the cycle continues with more reinforcement and engagement. This can be money, friends, or content, but in the end it is just more of what our brains want and what the system wants.

Interface Crit / Tactical Media / Software Art Response

Programmed Visions

What New Media is, I'm positive, will remain an ever-present question in our lives as we move from being students to being graduates with degrees. I know many people will ask what New Media means, and over time we will all have to have our own answers for what it means to us, which I think is the only real way to define it, much like anything else. I was about to say I don't agree with software being the thing that unites us; however, the more I think about it, the more I realize it probably is. Whether we are programmers, video artists, illustrators, or something else entirely, I don't think I've seen many New Media students make work without software of some kind, which is the biggest summary of the article. And as a whole, software as a means of definition or analogy aligns with what New Media is, which to me, like Chun said, is kind of an unknowable paradox. My question is: with this information, knowing that about New Media, how does it inform or change our perspective on our art practice?

How to be a Geek

I like the line "beyond the private conversations of technical experts…ways of asking questions and making problems." It really resonates with me in general as an artist who often gets frustrated by the private knowledge of experts (texts written by academics, for academics, and never for anyone else). But it also resonates with me because it just seems like a chaotic statement, like we're going to make trouble with software, and we do! That's what so many extensions that obfuscate data are doing: by some technical definitions they are making problems for others, but in doing so they are solving security breaches for the many. And to me that is exciting and so interesting, and because of that, it makes me a geek by the definition of the book, which I love as a concept. Is being a geek a badge of honor in this capacity for everyone else? Does it feel like a proper title that connects to New Media as a whole?

Sad by Design

The biggest thing I got from this was the idea of infinite expansion, shoving both humanity and the natural world away in favor of technological advancements. It also speaks to how it is not technically the technology that caused this problem but the billionaires who are so keen on the expansion, causing a lack of connection across the world. Therefore, in order to fix this issue of interface and software stopping our human growth, we need to connect via art and experience instead of being stuck in the technological bubble we are currently in because of the programming of social media and the online world.

New Ways of Hiding

This article discusses how the metainterface is every interface trying to be so seamless or hidden that we don't even notice it's there while we go about our day on the internet. The metainterface is a large-scale method of mapping us, and the article discusses it as less hidden than become part of reality itself. The author stresses that we need to see the metainterface, rather than allow it to hide, so that companies and interfaces will become less manipulative and we are able to trust them again. I will be the first to admit that this article confused me a great deal, so I guess my main question would be: how do we define the metainterface? What exactly is it in our lives online and in person?

Senior Thesis Proposal- Avalon

  • Subject: I am interested in the different ways that people experience disability, specifically in this case fibromyalgia (with my backup being depression/anxiety), and how those experiences can be interpreted into images by a single person (in this case, me as the artist). I want this piece to show the multitude of ways that people can experience disability, and how those differences can vary on an even greater scale when looked at or interpreted by an outside source. I think this will shed more light on how diverse the same disability can be for different people and bring awareness to a disability I don't see many people talking about.
  • Form and Method: I want this piece to take the form of quotes from my interviews, with images mapped by string to each quote that inspired them, forming a collage of works. For instance, an interpretation of what someone's coping skills might look like in a visual context. I think this form will show the scale of difference by making the display somewhat chaotic (although not hard to follow), rather than a very uniform display of answers and images. I will likely be producing the imagery through screenprint and risograph prints, to pair the new media element of interview and time with the influence of older media in printmaking.
  • Context and Audience: I was influenced in the method of this piece by my Anthropology of Death class, through its treatment of interpretation and different methods of display (for instance, memory museums for large-scale tragedies), as well as by a work in last semester's Faculty Exhibition that was a collage-like piece with a large amount of medical imagery.
    I think long term, I would like this style of my work to be displayed in a more public sphere rather than only a gallery space. I think a mural-wall type of display would be a really interesting area for it to fit into, as more people would see and interact with it there. I would like my work to bring more attention to lesser-known disabilities in the cultural sphere, as well as bring disability advocacy to the general public, so that the struggles that come from disability are no longer private and hidden within the community.

Surveillance Reading Responses

One Nation, Tracked:

What do you all propose we should do to bring attention to this privacy crisis as artists and specifically digital artists? How should we move throughout our lives knowing this information is out there and has the high possibility to be abused?

How would you define the term personal data? What type of data is impersonal? Does this mean it should be a-okay to track such data?

What are some of the ways that this article states our data can be used against us? What are some examples of how our data isn't secure even on private servers?

The first thing this really makes me think of is Snapchat's map feature. When it came out, it was automatic; I personally turned it off, since I saw no reason why my friends should know my every move. To this day people still ask me why I did so, why they can't see me on it, when I have watched them spy on others on multiple occasions. Thinking about that type of tracking on such a grand scale, and knowing we have no laws or practices in place to stop it, is terrifying. It makes me wonder what we can even do about it besides going back to being a non-technological society, which obviously isn't feasible. Regardless, having this data out there makes things like protests even scarier than they might already be. Things like civil unrest can easily become trackable and punishable. Given the state of our government, it makes so many things possible that we might've thought were only in the history books. It makes me wonder when advertising and money became more important than privacy, and personhood in a way.

But most of all, and probably the saddest part, none of this is surprising to me. In fact, it doesn't make me more afraid than I was before, because some part of me already knew this was happening. From having to turn off Apple's storage of my locations to the apps I use every day, I already knew privacy didn't exist. It's so strange to me how this has become commonplace; at least for me, I don't even bat an eye or think about how to change my personal behavior to protect myself.

Colleges are Turning….:

Listen, I get that attendance is important. But the way to do it is not to be incredibly intrusive into a student's personal privacy. I feel like some of these technologies force students to give more information to their professors than they might want; for instance, the professor always checking in on his students and asking where they were when they miss class. In theory this is fine, but in practice, a student shouldn't have to tell their professor why they missed (unless maybe it's consistent, or it's been a while? Obviously there are grey areas). I think there has to be a middle zone here. Professors care about their students, and I think checking in on students can be really helpful and important, but I also think constantly tracking a student for attendance's sake is strange and terrifying. Especially at the college level, there is some expectation of adulthood, unlike in high school and below, where the day is micromanaged for us.

Not to mention when that tracking leaves the classroom. A school (like all the other companies in the country, see previous reading response…) shouldn't have that much control over a student's life, not only because independence is important and the school isn't our parents, but also because it crosses so many boundaries of what a school is responsible for. And things like risk scores are just asking for dangerous amounts of bias. I really liked the question one student asked: why are we creating institutions that make students not want to show up? I think that is a bigger issue than can be solved by ripping away privacy and, once again, personhood. Unlike general tracking, this enables schools to intervene directly in students' lives, which I feel is an even worse breach on top of the already monumental overstepping of boundaries.

The fact that there are suggestions of segregating populations to "watch" them is shameful, in all honesty. It doesn't surprise me that the system is also having issues that actually harm students' learning more than before. I think this system can only lead to more abuse of power, and it's frankly disgusting to me that it's happening at all.

So, how are we creating institutions that make people not want to show up? How do we fix that? What level of surveillance is a good amount of surveillance on a large scale?

I made Steve Bannon's…:

I don't know how I have never heard of this guy before; I think I'm not looking in the right places. It baffles me that I can know our elections are likely not fair, and not understand any reason why. I want to know how it's possible that something on such a large scale, something that could've affected national security, isn't common knowledge.

It seems that in an age of large-scale surveillance, nothing is really safe from its hold. Before, it might have been human bias and other kinds of practices that stopped fair voting; now it is both that and blatantly illegal activity. (Maybe it always was this way; I'm not entirely sure.) It makes me wonder when these kinds of operations started and why. Were they all always for causes like this, or were some started with humble beginnings that ended up going sour?

What really draws my attention is the example ad they showed. Because to me, that ad proves why the message of the Second Amendment shouldn't be so important to Americans, but gun safety should; the message reads to me like sarcasm despite the GOP label at the bottom. But I wonder what it looks like to others. Is it designed to land easily on either side, or is it meant to skew the results and I am simply not the target demographic?

A clear case for….:

Just like the other school-based app article: 1) college students are adults and don't need the hand-holding (I know that retail jobs are just about as bad at treating workers as adult, autonomous humans, but that doesn't mean everyone needs to adopt that behavior), and 2) it's so creepy and breaks so many boundaries it's not even funny. This article only reinforced my distaste for these apps by proving those two points, and for this new wave of surveillance that exists because everything needs to be worthy of the money an entity spends on it.