More Like Optimus SubPrime, or Why I Might Start Hating Robots

Big news for nerds this week as bonehead Elon Musk has moved us closer to our rendezvous with the mechanical abomination called Optimus. Pictures and videos circulated on Twitter, where I saw them and scrolled past, not wanting to engage with these images and suggest to the algorithm that I might earnestly be interested in these things. Probably you’ve seen them, looking as they do as if Fisher-Price had done the production design for the Will Smith vehicle I, Robot a couple of decades ago. Seeing what I saw, even while avoiding deep engagement so far, I am compelled to reevaluate my position on artificial intelligence as a form of life deserving of respect, one comparable to organic life in that, if something believes it is alive, it should be offered the respect we grant anything else we consider living.

Before I saw the images of the actual robot, I opened Twitter and saw a tweet by @morewingspls that read: “I need a slur for those tesla robots bc I plan on being full-on racist toward them”. It wasn’t long before I saw this tweet united on my feed with the word “clanker” (a popular invective among clone troopers during the war against the Trade Federation’s droid army between Star Wars Episodes II and III), sidestepping the perennial favorite “toaster” (BSG and others). Polemical framing aside, I found myself agreeing in principle to hate these devices pretty much no matter what. Here is a silly-looking product from a bullshit leech of a company, one just as famous for soaking up government subsidies and inflating its own stock price as it is for its electric vehicles, and, most importantly perhaps, one helmed by a fascist. The robots have already been suggested as substitutes for bartenders and housekeepers by sociopathic tech bros, and as if the mere suggestion weren’t debasing enough, these suggestions are couched in terms that revel in offense to workers whose labor is interpersonal and emotional in nature, not to mention often feminized.

Why this keeps me thinking is as much personal as it is intellectual or political or whatever. I started grad school with the intention of arguing through my research and writing for more sensitivity to claims of personhood by non-human entities, and I have developed this stance throughout my work by engaging materialist and new materialist lines of reasoning that are thoroughly posthuman in outlook. Haraway, Braidotti, and others proclaim that we should find ourselves reflected in our technological creations and understand them as supplemental, in the Derridean sense, to our flesh, our sensory apparatuses, and our minds themselves. On the other hand, I just find stories of robots who want to be free or autonomous or seen as persons to be exceptionally compelling, from the replicants of Blade Runner to Star Trek’s Data, to the uncanny research robots of Kim Bo-Young’s The Origin of Species stories and the maroon community of droids on the lam in the Scandinavian show Real Humans. How do I account for this reversal on my part: that, when this science fantasy has (allegedly) come close to reality, I am ready for my heel turn? After all, if stories of the robot from Hadaly to Kusanagi are meant to allegorize the plight of the enslaved, the marginalized, the othered, and so on, then I am positioning myself as a Luddite who is also, by my own logic, bigoted in my opposition to this new product.

And it must be this last bit: I think it’s this product thing. Some cool (fictional) robots are passion projects made as one-offs by mad scientists: Noonien Soong’s Data and Lore, Edison’s Hadaly, and so on. Then there are the robots created by specific corporations, including the Tyrell Corporation’s replicants and Weyland-Yutani’s androids Ash, Bishop, and newcomer Rook. Since the mark of X sits heavy on the enameled brow of Optimus, I think we can dispense with the former breed of mechanical animal, especially since, if one thing has become clear, Elon is nothing of a scientist, even if his actions are often mad. The second group of humanoid constructs is further bifurcated into two subsets: those who exercise their autonomy in the service of their corporate parentage, and those who find their agency precisely in the act of rebellion. This is what makes Ash and Rook evil and Bishop good. It’s also what makes Roy Batty a complex and compelling rival to Deckard, more strictly an antagonist than a villain per se. Even in narratives that leave the economics of ersatz people in the background of their storyworlds, like Ghost in the Shell, the best characters resist alliances based on loyalty to institutions in favor of ethical – if not always moral – actions. This has to do with the animal and the android as repositories of affect in the way that Ursula Heise has written about in her essay, The Android and the Animal. Humanoid robots prime us as viewers to imbue them with human-like qualities and therefore to invest them with our passions regarding the narrative; we then want them to succeed, or fail, or whatever. These passions are conditioned by the ideology concurrent with the mode of production out of which these texts come and within which they are consumed.
To get properly dialectical about it, the android within a given story stands in contrast to the human, that is, its negation in terms of the possibility of artificial humanity, and in the context of its generic/cultural milieu, which is codetermined with the mode of production. Late capitalism and its cyberpunk moment invite us to root for machines that can reason outside of their corporate programming and disdain instruments of commercial control. Far from engendering rote morality tales, the permutations of this arrangement are practically endless; the genre provides a narrative syntax to accommodate an infinity of aesthetic paradigms.

If Optimus is ever born, and if it carries within it the software to even seem as if it makes decisions (which, given the debacle that is current “AI”, seems laughably implausible), then it will be a Rook rather than a Pris or a Rachael. The notion that these things could perform care-related work for humans is basically nonsense and is real justification for hostility, in much the same way that educators rail against ChatGPT as an invasive product in our educational ecosystem.
