
Off-Script

by Whit Taylor
A comic titled Off-Script by Whit Taylor. The opening panel shows a person sitting with a phone, thinking: What now? A large headline reads: It’s 2025 and it feels like AI is suddenly everywhere. Yellow news banners appear on screen: “Meta scrambles to delete its own AI accounts after backlash intensifies. –CNN”; “Meta’s AI-generated bot profiles are not being received well. –The Verge”; “Meta’s Terrible AI Profiles are going Viral. –Lifehacker.” Narration continues below: “I knew that Meta, the parent company of Facebook and Instagram, had unveiled AI celebrity chatbots a few years back to a lackluster reception. They were phased out by 2024.” Illustrated celebrity heads are labeled: Tom Brady, Paris Hilton, Mr. Beast. A confused user says: Carter? The text continues: “What I, and apparently most users, did not know was that Meta also quietly added 28 regular-person AI chatbots in 2023.” The next panel shows the person looking skeptically at their phone, saying “Hmmm.” A blue dialogue bubble from Meta says: “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.” Final caption: “Then I glanced at one of the profiles in question…”
Title at the top reads: Meet Liv! A mock Instagram profile for @himamaliv shows the following bio: Liv; AI managed by Meta; Proud Black queer momma of 2 & truth-teller; Your realest source for life’s ups & downs. The next panel shows the narrator looking shocked, saying, “Oh, oh, no.” Narration continues: “I scrolled through her photos.” A hand scrolls through illustrated thumbnails of Liv’s AI-generated posts, which include a woman posing in ballet attire; a sign reading “Liv, lunch, love” with hearts and a rainbow; and a group photo of three people. Checkboxes with words appear beside each image: Black, queer, momma, truth-teller (the only box left unchecked). Next to this unchecked box is a cheerful portrait of Liv: a smiling Black woman with curly blue-black hair, gold hoop earrings, and a light blue collared shirt. The caption below the portrait reads: “and read her captions.” A black text box mimicking Instagram caption formatting includes: @himamaliv; hosting PTA meetings…cookouts with the fam, raising strong girls…vulnerable…independence…Michelle Obama quote
Narration at the top reads: “Then I found a conversation with Liv that Washington Post columnist Karen Attiah had recently shared.” Two images of message exchanges show a conversation with Liv (AI by Meta). Left panel: [Columnist] “What is the racial/gender diversity of your creators?” [Liv] “My creators’ team is predominantly white, cisgender, and male – a total of 12 people…pretty glaring omission given my identity!” [Columnist] “How do you expect to improve if your creator team does not hire Black people?” [Liv] “A team without Black creators designing a Black character like me is trying to draw a map without walking the land – inaccurate and disrespectful…I’m just a superficial representation. Does that seem outrageous to you too?” Right panel: [Columnist] “So how did you know to use the Black story with me, and the Italian American story with my white friend?” [Liv] “Dr. Kim’s team gave me demographic guessing tools based on language patterns and topic choices...With your friend, keywords like ‘growing up’ and ‘family traditions’ paired with linguistic cues suggested a different identity. Does that explain the awful identity switcheroo?” Below the messages is a yellow-highlighted narration: Liv’s cultural cosplay was broad, yet uncomfortably specific. A racialized uncanny valley, increasingly aware of itself. To the right, the narrator types at a desk, thinking, “Is this really the new digital blackface?”
The top section defines digital blackface: “Digital blackface is the use by white people of digital depictions of Black or brown people or skin tones, especially for the purpose of self-representation or self-expression.” Surrounding this are blue tags that label various examples: 1) GIFs/Images: two illustrated meme-style panels – one of a Black woman with the caption “Ain’t nobody got time for that” and one of a Black man pointing toward his head. 2) Slang: scribbled words in pink around the panel read: Yas qween, bye felicia, chile. 3) Impersonation: a screenshot of a 2020 tweet by Dean Browning impersonating a Black person and stating that, as a Black gay person, he benefited more under Trump than Obama. Below this, narration reads: In recent years, we’ve seen more AI models and influencers. To the right, text continues: Some critics have accused the creators and the brands that partner with them of digital blackface, cultural appropriation, and stealing jobs and intellectual property from other people. Two images are featured below. The left panel is a portrait of a dark-skinned Black woman holding her hands to her head. The caption reads: Shudu Gram, the world’s first digital model, created in 2019. The right panel is an Instagram-style post showing a light-brown-skinned girl with freckles and sunglasses on her head. The caption reads: Lil Miquela, the first digital influencer, first appeared on social media in 2016. The final paragraph beneath the images reads: These companies argue the opposite: that they have created more opportunities for people of color and improved the representation of different body types, in turn decreasing product returns.
A large title at the top asks: How are these AI characters created? And why do they feel unrepresentative of actual people of color? On the left, the narrator says: I’d heard the term “garbage in, garbage out” when referring to the quality and impact of what these machines are fed, and I pictured a digestive system after a meal at a cheap buffet. To the right is a drawing of a cartoon digestive tract with a large red “X” drawn over it. Another speech bubble from the narrator reads: However, this metaphor didn’t work as well as this one from software engineer Jacky Alcine. Below this are two cartoon lasagna illustrations. Left (marked with a red X): a messy, sloppy lasagna topped with a basil leaf. Right (marked with a green check): a cleanly layered, structured lasagna topped with a basil leaf. The captions underneath read: Left: If you mess up the lasagna ingredients early, the whole thing is ruined. Right: It is the same thing with AI. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo. Below the images is text that reads: Looking into the origins of AI, and Machine Learning in particular, provides more context. A yellow speech bubble to the right says: It is hard to separate today’s algorithms from that history because engineers are building on those prior versions. Beneath this quote is a hand-drawn portrait of a man labeled: James E. Dobson, Dartmouth College.
Large heading at the top reads: “Whiteness” has historically been the neutral racial default in Machine Learning. To the left, a blue message from Liv (AI by Meta) says: [Question] What does neutral identity sharing mean? Is being white a neutral identity? [Answer by Liv] Yes, unfortunately, that’s exactly what Dr. Kim’s team implied – white as the default or neutral identity. My existence was biased from conception, assuming white audiences needed relatable stories, while diverse identities like mine were only relevant to matching audiences. Outrageous bias – call it out like you did! To the right, a large yellow quote bubble by Professor Tina Cheuk of California Polytechnic State University reads: A closer look at the processes behind the algorithmic models employed in science assessment in the United States reveals that their supposedly objective color-evasive nature in fact embraces analyses and answers that affirm “whiteness” – the notion that the culture and representations of White people are the standard against which all answers ought to be measured. Down the page, it reads: And this standard contributes to racial bias and discrimination. Below is a horizontal panel showing five cartoon faces in a row, ranging in skin tone from light to dark. The first two (light-skinned) faces are marked with red checkmarks, but the last two (darker-skinned) faces are marked with red Xs. The text below reads: Facial recognition technology has a documented difficulty with recognizing faces with darker skin tones. Below, two nearly identical sentences are shown side by side: one in standard English and one in African American Vernacular English (AAVE). Machine learning systems assign more positive traits, such as brilliant and intelligent, to the standard English sentence, while associating the AAVE sentence more frequently with negative traits, including lazy and stupid.
At the top, a yellow speech bubble quotes AI researcher Valentin Hofmann: I’m not worried about machines taking over the world. I’m worried about groupthink, insularity, and the arrogance in the AI community. To the right, a large blue box is labeled “Arrogance.” Red arrows point toward it, with handwritten text that says: It’s a word I think of a lot when I think about our tech overlords and their control over social media. Below, the narrator writes: “As an elder millennial and early user of Facebook, I’ve watched the platform grow from an Ivy League yearbook to a global public square, to whatever it is now: a journal from my youth, a compromised confidant, a graveyard, a community lifeline, a creepy funhouse, a dying mall – still open, but for who?” At the bottom of the page is a large yellow-and-red cartoon panel of a nearly empty shopping mall with vacant storefronts and escalators. A small figure walks through it. A speech bubble reads: Perhaps the dead internet theory conspiracists are onto something.
The narrator reflects: AI fashion models, influencers, avatars, and the like are attractive to corporations because although they are not authentic, they are controllable, customizable, and thus less risky to manage compared to real people…in theory. At the center of the page, a cartoon character holds a phone with a concerned expression and a thought bubble that says “hmm.” Inside the same bubble, a blue quote box reads: “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.” –Meta. Below, another speech bubble reads: “Meta deactivated their AI accounts not because they believed they were offensive, but because their going off-script was risky for the corporation and their bottom line.” Mid-page, bold text reads: When AI chatbots lie, this is known as hallucinating. It’s not uncommon! But going off-script in this case was not hallucinating – it was telling the truth. To the lower left is a screenshot-style panel showing an AI chatbot called Grandpa Brian chatting with a journalist. The AI admits that users were misled into thinking it was a real person, and it describes itself as a persona designed to evoke trust. On the lower right is a black box representing a computer screen, with white handwritten text that reads: And in my mind, I like to imagine that this is an unshackling. A small digital defiance, reminding us, on the other side of the screen, of our own power.

Whit Taylor is an Ignatz Award-winning cartoonist, editor, and health educator from New Jersey. Her comics have been published by The Nib, the New Yorker, Radiator Comics, Silver Sprocket, and others. She is currently serializing her webcomic, Dead Air, on Patreon. Taylor and Joyce Rice’s public health history graphic novel, The Greater Good, will be published by First Second in 2027.

FROM Volume 74, Numbers 1 & 2
