Author Archives: Sora
AI / Predictive Analytics / Recommendation Algorithms
Something is wrong on the internet
I’ve been saying this for YEARS, though usually as a pointed gripe about some quite specific and honestly random, if not also menial, things. You know what? All those gripes were valid. The lengthy downtime for incredibly expensive services, the extremism on Facebook, all the goddamn Spider-Man & Elsa shit on YouTube. We need to go beyond pointing out that something is wrong and venture into the realm of just flat out saying “dude, what the actual fuck is going on???”. I mean, I ~know~ what’s going on. It speaks to what’s immediately apparent and wrong with how we use algorithms. I think we all get the main idea here – if it makes money, companies will not assess flaws unless there is a legal liability. Moral liabilities aren’t contested until there’s media coverage, and even then – it’s about… stocks. That’s what’s happening, and it’s just how YouTube operates.
Getting approved by AI
Using technology to speed up processes that are highly personal and judged on a largely subjective basis is pretty much the easiest road to dystopia. Processing applications against clear, concise goalposts is perfectly fine! Having a computer discern, based on a video of you talking, whether you’re worthy of a job is not. That computer cannot discern whether you’d get along with the rest of the team at the company. Much like facial recognition, welfare systems, and literally any given system used in the West, Blackness will correlate with failure and Black people will be harmed by this system.
How TikTok Holds Our Attention
Okay, the Anne Frank thing was funny. But also, like, I’m not a Nazi, so maybe those guys find it funny in a more racist and less morbidly absurdist sense. TikTok doubles as a creative platform and a deeply interesting, maybe concerning, example of how algorithms feed content. Its feed is much, much faster than YouTube’s, though lower in volume (in minutes, I mean). The algorithmic feed is so new (generally speaking) that its effects won’t be well understood for another decade. What does this mean for the future? Users create whatever they want, but the algorithm is meant to keep you watching – the same issue YouTube has, where it will supply you with things that keep you watching even if those things are extremist, alarmist, or deceptive. Where do we find the control on the runaway effects this has on the rest of society?
Inadvertent Algorithmic Cruelty
I think this anecdote is heavily stilted in the sense that it leans into technology to fill a void in meaning in a somewhat oblivious way. You fed that algorithm. It’s showing you what you gave it. Just because someone passed away, is a person supposed to know that you’d rather not see their face? If not, how would an algorithm? This is asking for some pretty particular sensitivity in a situation where, if anything, maybe you should seek other assistance if seeing a photograph of your child gives you this much stress. Seriously. I was an abused child who would break down at the mention of my abuser’s name – just the name. I understand some of these suggestions are largely meant to mitigate unintentional harm on the user end, but frankly I don’t think these systems can be so incredibly smoothed out. I don’t think they should be, either. Changing the system to fix a problem that maybe a total of 20 users have is the easy way to break the rest of the system. If your goal is to reduce harm, you need to manage your expectations of what reduction means.
The problem with metrics is a big problem for AI
Metrics, success, failure, optimization. I smell the paycheck I earned for all those high-viewing and somehow ad-friendly Spider-Man Elsa videos. So many companies employ metrics (money stats, as I like to call them here) as a means of maximizing profit. This is perfectly reasonable until every game that comes out has microtransactions, a reduced budget, procedurally generated content, and weekly updates chasing trends across the consumer base. According to the metrics, which feed our algorithms, which determine our decisions as a company, there’s a big issue with everything being fucking trash lately. Maybe we’re not maximizing money stats enough. Maybe our optimization has too many people involved. With the definition of success in media being maximum engagement and maximum profit, addictive, violent, exploitative, or otherwise fucking horrendous content and business practices will win out every single time.
Big Data, Algorithms, Algorithmic Transparency
The Black Box Society
“But what if the ‘knowledge problem’ is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses?”
This is literally EXACTLY what is happening and what has been happening for probably around 80 years. This is the bustling American capitalist landscape with infinite gains and infinite returns on investment. Disinformation and misinformation are critical to avoiding government intervention. The language in these information protection laws does a lot of good for the individual – without a doubt. It does not translate to companies very well, and it translates to mega-corporations TERRIBLY, in the sense that these companies act more like countries in their input, output, and control. The magnitudes of power are vastly different from the lemonade stand or the local burger chain. When this is scaled up internationally and connected to the tree of parent companies, these systems exist more as monsters than arrangements of workers and their superiors.
The era of blind faith in big data must end
There’s a considerable lack of oversight in these massive power structures in a way that’s frankly terrifying. These algorithms streamline our tendencies as they have been and as they are.
“There’s a lot of money to be made in unfairness.”
Why don’t algorithms account for traits related to failure? Check for bias or fallacy? The reason this isn’t done is likely that it doesn’t garner maximum results. It’s not sexy, it doesn’t sizzle in an investor meeting. There’s nothing exciting about stats that don’t impress or coincide with expectation.
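For what it’s worth, the kind of bias check that “doesn’t sizzle in an investor meeting” is not hard to build. Here’s a minimal sketch, with entirely made-up data and hypothetical group labels, of auditing a decision system’s approval rates against the common four-fifths rule of thumb:

```python
# Hypothetical audit: compare approval rates between groups in a decision log.
# The groups ("A", "B"), the data, and the 0.8 threshold are all illustrative.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs. Returns approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, protected):
    """Four-fifths rule of thumb: flag if the protected group's approval
    rate falls below 80% of the privileged group's."""
    ratio = rates[protected] / rates[privileged]
    return ratio, ratio < 0.8

# Toy log: group A approved 80/100 times, group B only 40/100 times.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 40 + [("B", False)] * 60
rates = approval_rates(decisions)
ratio, flagged = disparate_impact(rates, "A", "B")
print(rates, round(ratio, 2), flagged)  # {'A': 0.8, 'B': 0.4} 0.5 True
```

A dozen lines of counting is enough to surface the disparity; the text’s point stands – the obstacle isn’t technical difficulty, it’s incentive.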
Automating Inequality
Systems of oppression are not abstract frameworks built from linked ideas and individual phenomena – these were literal systems with an input and an “empathy override” for the actual intended output. Not that these systems are created with this awful outcome in mind, but those with control over them know very well what the outcomes are and how they interact with other systems and dynamics. Not much more to say on this other than to point to the school-to-prison pipeline of the US education system. These systems aren’t just reinforced through the feedback loop; they’re reinforced because someone chose not to stop them and enjoyed the profits too much to care about the harm.
Opting Out Big Data
Algorithms focus so heavily on behavioral patterns that once an individual decides not to participate and obscures their footprint, they must be obscuring it for a reason. There must be a goal in the user’s mind that can’t be achieved without being inconspicuous. Unless people would just like to not be watched. Unless it’s that simple. The concept of “having nothing to hide” is a dangerous idea that has exploded across how data and surveillance are treated, where the rights of the individual dissolve while the rights of corporations balloon. Capitalist dystopia, here we come.
The Computer Says No
The machine demands data. It is all knowing, it is correct. It has a 98.961% rate of success as defined by a group of rich people who directly benefit from the machine. No, there’s nothing wrong with this picture.
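That made-up 98.961% figure is worth dwelling on, because a headline “success rate” can be near-perfect while the machine does nothing useful. A toy illustration, with invented numbers: if 99% of cases fall in the majority class, a model that flags no one at all still scores 99% accuracy.

```python
# Toy example: accuracy on imbalanced data rewards a do-nothing model.
# The class split (990 vs 10) is invented for illustration.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

labels = [0] * 990 + [1] * 10   # the 1s are the rare cases that actually matter
always_zero = [0] * 1000        # "the machine": never flags anything

print(accuracy(always_zero, labels))  # 0.99
```

The model misses every single case that matters and still posts a number that sounds authoritative in a boardroom – which is exactly the rhetorical move the post is mocking.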
Technology and Race
Algorithms of Oppression
This critical approach to systems is the only way we avoid the worst of the likes of Blade Runner and The Matrix. Decisive policy did the job of oppressing minorities on top of existing social ideology. Moving forward, this seems to be the work of mega-monopolies wielding algorithms even they themselves don’t understand. There’s a lot to be said about how this critical approach didn’t get widespread attention until it affected an election, and not so much when these systems so readily and heavily affected Black and brown women.
Race After Technology
This idea of technology inheriting the ideas of its creators sounds a lot like racist parents having racist kids. It’s super odd that this is a bizarre concept to these developers. They’re used to syntax errors, bugs in the engine, hardware malfunctions, data loss, etc. But for a time it was inconceivable that their system might not be outputting correctly, and that it goes as far as harming specific people who didn’t fit into their original scope of the user/participant pool. This doesn’t speak to the code’s flaws; it speaks to the creators’ mistakes or complete ignorance.
Laboring Infrastructures
Capitalist ideas of infinite growth are starting to sound more like a bodybuilder selling you on anabolic steroids.
Bro, the gains. THE FUCKING GAINS BRO!
Genuinely, this subsection of humanity existing as a hidden and unpaid working class is getting really, really old. As old as time at this point, and I refuse to believe that with all these resources and technologies it’s impossible to create a future without what’s essentially slavery or slavery-like conditions. I’m not saying Capitalism inherently begets slavery – someone smarter than me will say it.
RE: Social Interaction, Social Photography, and Social Media Metrics
What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook
Schäfer makes it abundantly clear what he thinks of Facebook and the interactions it encourages by referring to it as “bastard culture”. Quantifying socializing and using it to lift markets is pretty bastard-y. Capitalizing on human interaction is immensely bastard-y. I feel particularly disgusted with the notion that a capitalist framework is the only way people can achieve psychological fulfillment. This may be me immediately lurching at the assertion – which sounds incredibly dystopian and disingenuous – but yeah, no, I highly doubt anyone knows enough about human interaction to make such a massive claim. Fuck that. Quantification is ingrained in people from the youngest age through money – which is viewed not as a resource but as a score. Grades in school are not so much a level of ability as a score. This socializes people to go for the highest score – which is generally fine and dandy unless your efforts to get the highest score include creating a system that kills a lot of people and preys on the vulnerable.
Wearables and how we measure ourselves through social media
I really like this usage of personal data – far more than any other. Having a personal device collect and express this data to provide insight, keeping agency in the hands of the user, is phenomenal, though it can of course be dangerous given poor guidance based on health data. Jill mentions using wearables to modify posture through immediate feedback – to manage the kinds of food you eat – to manage the amount of alcohol. In the hands of the user, with appropriate guidance, this is amazing. In the hands of administrative bodies, this is surveillance, and its use needs to be carefully balanced between the need for security and the need for privacy.
The Social Photo
While this is plainly clear to most everyone, I find that in many ways – especially outside of academia – we don’t discuss how the ways we communicate are technologies in their own right. Communication began with body language and occasional vocalizing where physically applicable, and has led to communicating via a complex structure of computers transmitting bits faster than most people could physically say all the words in their message. The image being captured rather than painted by hand makes the image itself a witnessing party in a way no person could ever be, short of someone with a quite literal photographic memory. Much akin to how we deal with software updates (poorly), updates like nearly definitive imagery, facial recognition, or sentient computers are bound to shake things up in shocking, bizarre, and perhaps dangerous ways.