When Everyone’s a Winner

Celebrity, fame, and influence are inherently asymmetrical. They all require a one-to-many style of distribution akin to the wide-range broadcasting model of legacy media. As that media infrastructure has given way to smaller and smaller platforms serving smaller and smaller audiences, the ideas of celebrity, fame, and influence have been reconfigured and need to be redefined.

“It’s all become marketing and we want to win because we’re lonely and empty and scared and we’re led to believe winning will change all that. But there is no winning.” — Charlie Kaufman, BAFTA, 2011 [1]

The dictum “In the future, everyone will be famous for 15 minutes” is widely attributed to Andy Warhol, though several other sources have claimed credit for it. [2] Regardless of who first said it, those 15 minutes of the future are the popular origins of the long tail of fame. Though the phrase has been around since the late 1960s, its proposed future is here.

In his 1991 essay, “Pop Stars? Nein Danke!” Scottish recording artist Momus updates Warhol’s supposed phrase to say that in the future, everyone would be famous for 15 people, writing about the computer, “We now have a democratic technology, a technology which can help us all to produce and consume the new, ‘unpopular’ pop musics, each perfectly customized to our elective cults.” [3] In Small Pieces Loosely Joined, David Weinberger’s 2002 book, he notes about bloggers, content creators, comment posters, and podcasters: “They are famous. They are celebrities. But only within a circle of a few hundred people.” He goes on to say that in the ever-splintering future, they will be famous to ever-fewer people, and—echoing Momus—that in the future provided by the internet, everyone will be famous for 15 people. [4] Democratizing the medium means a dwindling of the fame that medium can support.

The long tail.

Around the turn of the millennium, the long tail, [5] the internet-enabled power law that allows for millions of products to be sold regardless of shelf space, reconfigured not only how culture is consumed but also how it is created. It’s since gotten so long and so thick that there’s not much left in the big head. As the online market supports a wider and wider variety of cultural artifacts with less and less depth of interest, they each serve ever-smaller audiences. Even when a hit garners widespread attention, there are still more and more of us farther down the tail, each in our own little worlds.

In his 1996 memoir, A Year with Swollen Appendices, Brian Eno proposes the idea of edge culture, which is based on the premise that “If you abandon the idea that culture has a single center, and imagine that there is instead a network of active nodes, which may or may not be included in a particular journey across the field, you also abandon the idea that those nodes have absolute value. Their value changes according to which story they’re included in, and how prominently.” [6] Eno’s edge culture is based on Joel Garreau’s idea of edge cities, which describes the center of urban life drifting out of the square and to the edges of town. [7] The lengthening and thickening of the long tail plot our media culture as it moves from the shared center to the individuals on the edges, from one big story to infinite smaller ones.

Now, what does such splintering do to the economics of creating culture?


“The lottery sucks! I only won 17 bucks!” — Rioter in Bruce Almighty (2003).

Bruce Nolan, played by Jim Carrey in the 2003 movie, Bruce Almighty, is a man unimpressed with the way God is handling human affairs. In response, God lets him have a shot at it. One of the many aspects of the job that Bruce quickly mishandles is answering prayers. His head is flooded with so many, he can’t even think. He sets up an email system to handle the flow, but the influx overwhelms his inbox. As a solution to that, he implements an autoresponder to send back a message that simply reads, “Yes” to every request.

Many of the incoming prayers are pleas to win the lottery. His wife’s sister Debbie hits it. “There were like 433 thousand other winners,” his wife Grace explains, “so it only paid out 17 dollars. Can you believe the odds of that?” [8] Subject to Bruce’s automated email system, everyone who asked for a winning ticket got one. Out of the millions on offer, everyone who prayed to God to win the lottery won 17 dollars.

That’s what you get when you’re famous for 15 people for 15 minutes. That’s what you get when everyone’s a winner.

The mainstream isn’t the monolith it once was. It’s a relatively small slice of the total culture now, markedly smaller than it was at the end of last century. For better or worse, the internet has democratized the culture-creating and distributing processes we used to privilege (e.g., writing, music, comedy, filmmaking, etc.), and it’s brought along new forms in its image. Since the long tail took hold around the turn of the millennium, the edge culture of the internet has splintered even further via social media and mobile devices. Anyone can now create content and be famous for 15 people for 15 minutes—and earn 17 dollars for their efforts.

 

A Now Worth Knowing

“We are surrounded by multifunctional lidless eyes that are watching us, outside in and inside out; our technology has produced the vision of microscopic giants and intergalactic midgets, freezing time out of the picture, contracting space to a spasm.” — Rosi Braidotti, Nomadic Subjects

“How did you get here?” asks Peter Morville on the first page of his book Ambient Findability (O’Reilly, 2005). It’s not a metaphysical question, but a practical and direct one. Ambience indirectly calls attention to the here we’re in and the now we’re experiencing. It is all around us at all times, yet only visible when we stop to notice. Tim Morton explains it this way in The Ecological Thought (Harvard University Press, 2010):

Take the music of David Byrne and Laurie Anderson. Early postmodern theory likes to think of them as nihilists or relativists, bricoleurs in the bush of ghosts. Laurie Anderson’s “O Superman” features a repeated sample of her voice and a sinister series of recorded messages. This voice typifies postmodern art materials: forms of incomprehensible, unspeakable existence. Some might call it inert, sheer existence–art as ooze. It’s a medium in which meaning and unmeaning coexist. This oozy medium has something physical about it, which I call ambience.

“Ambience” is a loaded little word at best. You wouldn’t be alone if the first thing that comes to mind upon reading it is a thoughtful soundscape by Brian Eno. In Ambient Commons: Attention in the Age of Embodied Information (MIT Press, 2013), Malcolm McCullough reclaims the word for our hypermediated surroundings, claiming that we’ve mediated aspects of our world so well that we’ve obscured parts of the world itself. Looking through the ambient invites us to think about our environment–built, mediated, situated, or otherwise–in a new way. McCullough asks, “Do increasingly situated information technologies illuminate the world, or do they just eclipse it?” He adds on the book’s website, “Good interaction design reduces the ‘cognitive load’ of artifacts. It also recognizes how activities make use of context, periphery, and background. But now as ever more of the human perceptual field has been engineered for cognition, is there a danger of losing awareness of how environment also informs?” How much can we augment before we begin to obscure? How flat can we press the extremes of our world?

Canadian theorist Arthur Kroker once described the mediated spirit of the 1990s as a “spasm”: everything floating and flickering, oscillating and isolating, what Bruce Sterling describes as a “state when you feel totally hyper and nauseatingly bored. That gnawing sense that we’re on the road to nowhere at a million miles an hour.” Since the 1990s, the feeling has expanded via media technology: ubiquitous networks and screens, social media, mobile devices. In response to this environment, a weird, detached irony has become our default emotional setting. David Foster Wallace called it “Total Noise”: an all-consuming cultural state that “tends to level everything out into an undifferentiated mass of high-quality description and trenchant reflection that becomes both numbing and euphoric.” It’s information anxiety coupled with complete boredom. What happened to the chasm between those two extremes? I won’t resist the academic tendency to turn binaries into the opposite ends of a spectrum; we have to, in order to make room in the space between them.

In her song, “The Language of the Future,” Laurie Anderson says,

Always two things
switching.
Current runs through bodies
and then it doesn’t.
It was a language of sounds,
of noise,
of switching,
of signals.
           It was the language of the rabbit,
                                                 the caribou,
                                                 the penguin,
                                                 the beaver.

A language of the past.
Current runs through bodies
and then it doesn’t
On again.
Off again.
Always two things
switching.
One thing instantly replaces
another. 

It was the language
of the Future.

Anderson calls this toggling of opposites a “system of pairing.” Similarly, William Gibson writes, parenthetically, “(This perpetual toggling between nothing being new under the sun, and everything having very recently changed, absolutely, is perhaps the central driving tension of my work).” That binary belies a bulging, unexplored midsection. The space between that switch from one extreme to the other consumes an important aspect of technological mediation and our current state of being.

For example, a skeuomorph is a design element that remains only as an allusion to a previous form, like a digital recording that includes the clicks and pops of a record player, woodgrain wallpaper, the desktop metaphor, or even the digital page: It’s obsolete except in signifying what it supplants. As theorized by N. Katherine Hayles, the skeuomorph exploits “a psychodynamic that finds the new more acceptable when it recalls the old that it is in the process of displacing and finds the traditional more comfortable when it is presented in a context that reminds us we can escape from it into the new.” Skeuomorphs mediate the space between a familiar past and an uncertain future, bridging the new and the old, translating the unknown into the terms of the known by reimagining it in a reassuringly familiar guise. Skeuomorphs bridge the space between the extremes, obscuring the transition, and that is their purpose when it comes to adapting people to new technologies. They soften the blow of the inevitable upgrade, “reduce the ‘cognitive load’,” as McCullough puts it. But every new contrivance augments some choices at the expense of others. What we lose is often unknown to us.

We find our mediated selves by pulling these extremes apart. If skeuomorphs smooth out the edges of the extremes, the in-between is all smooth and soft. As with a cramped muscle, the solution to Kroker’s metaphorical spasm is to stretch it out. All the way out.

“City Wall,” Helsinki Institute for Information Technology, 2007.

“It’s John Cage’s birthday,” writes Laurie Anderson on September 5, 2003 in her book Nothing in My Pockets (Dis Voir, 2009), “We listen to Cage reading from his own work. He tells the famous story about when he was in an anechoic chamber, a completely silent room. And he heard two sounds. One was high and one low. As it turned out, the high sound was his nervous system, and the low sound was his blood.” These are two more extremes we live in between. Two more extremes with ambience in between.

“Ambience points to the here and now,” Tim Morton continues, “in a compelling way that goes beyond explicit content… ambience opens up our ideas of space and place into radical questioning.” Just as poetry calls attention to language, ambience calls attention to place, the here we’re in and the now we’re experiencing, bringing the background into the fore. Ambience takes the extremes of anxiety and boredom — of nothing changed and everything new — and stretches them into an inhabitable environment, into a now worth knowing.

Generation X was a Band

Before it was the name of a Douglas Coupland book, and before it was the designation of people born from 1965 to 1980, Generation X was a band. Formed in 1976 during the first wave of UK punk by soon-to-be pop icon Billy Idol, Generation X also included bassist Tony James. When Idol went solo in the early 1980s, James went on to form Sigue Sigue Sputnik.

Sigue Sigue Sputnik, Flaunt It, 1986.

If you were looking to get Cliff’s Notes for the 1980s in musical form, you’d be hard pressed to find a better exemplar than Sigue Sigue Sputnik’s 1986 debut, Flaunt It! Tony James described them as “hi-tech sex, designer violence and the fifth generation of rock and roll.” A product of punk in the same way that Big Audio Dynamite and Devo were, their techno-pop sound was laced with samples from movies and media. Even after all of the work Trevor Horn had done defining a new sound for the decade, Sigue Sigue Sputnik was still exciting. At the time, for better or worse, they sounded like the future. In a move of unfortunate prescience, the band sold brief advertisements that played between the songs on the record. Ones for i-D magazine and Studio Line from L’Oréal share space with fake ones for The Sputnik Corporation and a Sputnik video game that never materialized. James touted the spots as commercial honesty, adding, “our records sounded like adverts anyway.” Where the punk that preceded them railed against the dominant culture, Sputnik was out to mirror it, to consume it, to corrode it from the inside. To interpolate an old Pat Cadigan story, the former was trying to kill it, the latter to eat it alive.

It would take Billy Idol a decade to embrace technology in the cyberpunk fashion that Sigue Sigue Sputnik had in the 1980s. The reaction to Idol’s 1993 concept record, titled simply Cyberpunk, is perhaps the best example of competing gen-X attitudes. The Information Superhighway, as it was often called at the time, was just making inroads into homes around the world, and its technology-based, cyberpunk, D.I.Y. influence was creeping into the culture at large. In his 1996 book Escape Velocity: Cyberculture at the End of the Century, Mark Dery describes Idol’s Cyberpunk as “a bald-faced appropriation of every cyberpunk cliché that wasn’t nailed down” (p. 76). In contrast, our friends and O.G. cyberpunks Gareth Branwyn and Mark Frauenfelder consulted with Idol and were involved in various aspects of the record’s release.

One of the unspoken yet central tensions among members of generation X is this idea of cultural ownership. It’s the old battle of the underground versus the mainstream, but it’s also the desire to introduce one to the other, to be the one who shepherds something from subcultural obscurity to mainstream success. We might be the last generation to feel these contradictions of capitalism. We might be the last generation for whom the concepts of underground and mainstream have any real meaning. The terms are still in use, but they don’t denote the divisions of market share they once did.

LL Cool J, Walking with a Panther, 1989.

In another example of this cultural shift, LL Cool J came to fame in the mid-1980s with the Rick Rubin-produced records Radio in 1985 and Bigger and Deffer in 1987. The scrappy rawness of the young Cool J’s raps over Rubin’s reductive production proved irresistible to both the streets and the charts. By the time he released Walking with a Panther in 1989, Cool J was rich. Though the record sold well, it also suffered a backlash: the gen-X-led hip-hop community was put off by its overt materialism. Ten years later, during the Big Willie era, conspicuous consumption was one of the prevailing modes of hip-hop culture. From Nas to Biggie to Jay-Z, the contradictions of capitalism were on display, and only the underground was complaining. (As I wrote previously, this same shift happened in skateboarding as it grew bigger than ever before during the 1990s.)

We rebelled against our parents like every generation does, but the unified nature of that rebellion is a thing of the past. The 1980s were the mainstream’s last stand. With the 24-hour news cycle and the spread of the internet, any sort of monolithic pop culture began splintering irrevocably in the 1990s. Sigue Sigue Sputnik’s slogan was “fleece the world,” and their punk predecessors the Sex Pistols, who had railed against it all in 1976, reunited twenty years later, citing “your money” as the sole reason. These ideas were still somewhat shocking at their respective times, but now they seem downright quaint.

Location is Everywhere: Swarm Cities

Each time we move to a new city, we make memories as the city slowly takes shape in our minds. Every new place we locate — the closest grocery store, the post office, rendezvous points with friends — is a new point on the map. Wayfinding a new city is an experience you can never get back. Once you’re familiar with the space or place, it’s gone.

Since moving out on my own, I’ve gravitated toward cities: Seattle, Portland, San Francisco, San Diego, Austin, Atlanta, Chicago. Externalized memories built in brick and concrete. It makes me think of a passage from Steve Erickson’s novel Days Between Stations:

“What is the importance of placing a memory? he said. Why spend that much time trying to find the exact geographic and temporal latitudes and longitudes of the things we remember, when what’s urgent about a memory is its essence?”

The “nine nations” of North America. Image credit: A Max J // CC BY 3.0

Our cities, those densely populated spaces of our built environment, have always been slowly redefining themselves. In 1981, there were the nine nations of North America. In 1991, the Edge Cities emerged. In 2001, we witnessed the worst intentions of a tightly networked community that lacked physical borders—what Richard Norton calls a “feral city,” a community that’s beyond the reach of law and order.

Our capital-driven, networked societies produce, more than anything else, ephemeral things — that is, things that are built, not to last, but to disappear and be displaced by newer versions of themselves. As David Byrne wrote in his Bicycle Diaries, cities “are physical manifestations of our deepest beliefs and our often unconscious thoughts, not so much as individuals, but as the social animals we are.” If our collective consciousness is flitting and flickering from one thing to the next, so shall our cities follow suit.

From flash mobs to terrorist cells, communities can now quickly toggle between virtual and physical organization. How long before our cities do the same?

“In the sagas it was said that humans dream with their hands, only their hands, and so have cities rather than sagas, monuments rather than memories.” — Ted Mooney, Easy Travel to Other Planets

According to Joel Garreau, an “edge city” is one that is “perceived by the population as one place.” As in tight-knit neighborhoods, its residents staunchly identify with and defend it, resisting outside influence. Conversely, rapid transit has increased the exchange of ideas between once-isolated places, spurring innovation. The French philosopher Gilles Deleuze called these areas “any-space-whatever,” the space, in his view, important only for the connections it facilitates. Critiquing the much-lauded coming of the “smart city,” Adam Greenfield wrote that “the important linkages aren’t physical but those made between ideas, technical systems, and practices.” After all, the first condition for a smart city is a world-class broadband infrastructure.

Connection is key — the connection could be the city.

Peter Root, “Ephemicropolis,” 2010.

“We’ve created 10,000 places that are not worth caring about. Imagine the corrosive effect of that on our national psychology. How soon before we become a land not worth defending?” — James Howard Kunstler, in conversation with the author

It’s been argued that we are at work, rather unwittingly, building an “Ephemicropolis,” a sprawling urbanity without roots or reason. In his recent book Out of the Mountains, David Kilcullen defines four global factors that will determine the future of Ephemicropolis: population growth, urbanization, littoralization (the human tendency to cluster along shorelines), and connectedness. As more and more people meet and fall in love and populate the planet, they are doing so in bigger cities, near the water, and with more connectivity than ever.

Basically, the future of human hives is crowded, coastal, connected, and complex.

Anish Kapoor’s Cloud Gate in Chicago. Image credit: Roy Christopher

As the coastlines recede with rising seas, those hives will have to move inland. More importantly, they will have to disassemble, move, and reassemble in some fashion. Urban planner Kevin Lynch once called cities “systems of access that pass through mosaics of territory.” That description is fitting for what I call “swarm cities,” a tenuous but slightly more stable form of what Eric Kluitenberg refers to as “swarm publics”: “Today, we are witnessing the rise of swarm publics, highly unstable constellations of temporary alliances that resemble a public sphere in constant flux; globally mediated flash mobs that never meet, fueled by sentiment and affect, escaping fixed capture.”

“The city, as a form of the body politic, responds to new pressures and irritations by resourceful new extensions always in the effort to exert staying power, constancy, equilibrium, and homeostasis.” — Marshall McLuhan, Understanding Media

Swarm cities are only as physical as they need to be. And, as connected as they are, they’re also only as cohesive as their sustainment demands. The networked freedom to live and work anywhere doesn’t always make location irrelevant, however; it often makes it that much more important. Kevin Lynch wrote, “Our senses are local, while our experience is regional.” Meanwhile, Robert J. Sampson argues that our behavior is rooted in our sense of local place. The neighborhood effect describes the interaction between individuals and their immediate network, and between the local and the global.

The neighborhood is where boundaries matter. It’s where human perception binds us within borders, where nodes are landmarks in a physical network, not connections in the cloud. As Italo Calvino wrote in his novel Invisible Cities: “The city, however, does not tell its past, but contains it like the lines of a hand, written in the corners of the streets, the gratings of the windows, the banisters of the steps, the antennae of the lightning rods, the poles of the flags, every segment marked in turn with scratches, indentations, scrolls.”

Like the landmarks and memories of neighborhoods, swarm cities are duplicitous, existing both inside and outside our heads.

“Memory is redundant: It repeats signs so that the city can begin to exist.” — Italo Calvino, Invisible Cities

Early on in his book In Divisible Cities, Dominic Pettman repurposes the idea of “mattering maps,” those maps we make to and from the things that matter: “A map that generates territory, rather than the other way around … A map that does not represent cities that exist independently, but a map that brings cities into being.”

Cities used to spring up near water. Rivers crisscrossing through dirt provided the networks. Railroads and highways developed next, with the metropolis growing between their branches. Cities emerge where connectivity gets concentrated. Just as the telegraph separated communication from transportation, the current dominant forms of connectivity are not grounded in physical space. The cities of the future will emerge with cultural scripts as blueprints, with landmarks like 9/11, Columbine, and Katrina. Their unity condensed from vapor, all clouds and leaves with no roots. Think Home Depot instead of home. The strip mall as town hall. Disposable digs inhabited by a pop-up populace.

Local communities haven’t been diminished by global networks; they have come unmoored because their connectedness isn’t physically grounded. There remains no absolute value to the region you happen to occupy. The nodes shift as needed. This is the cartography of the future: giant, sprawling mattering maps made of memories. Within them are vast and multiple new swarm cities to explore.


Further Reading:

This piece was originally posted on Steven Johnson’s How We Get to Next website on April 21, 2016. Now it’s a part of “Location is Everywhere,” Chapter 8 of The Medium Picture. It was informed by the works below:

Kathy Acker and McKenzie Wark, I’m Very Into You (Cambridge: Semiotext(e), 2015).
Darran Anderson, Imaginary Cities (London: Influx Press, 2015).
David Byrne, Bicycle Diaries (New York: Viking, 2009).
Italo Calvino, Invisible Cities (Orlando: Harcourt, 1974).
Roy Christopher, “Interview with James Howard Kunstler,” roychristopher.com, February 6, 2002.
Brian Eno, A Year with Swollen Appendices (London: Faber & Faber, 1996).
Steve Erickson, Days Between Stations (New York: Owl Books, 1985).
Richard Florida, The Great Reset (New York: Harper, 2010).
Joel Garreau, The Nine Nations of North America (New York: Houghton Mifflin, 1981).
Joel Garreau, Edge City: Life on the New Frontier (New York: Doubleday, 1991).
Anthony Giddens, The Constitution of Society (Cambridge, MA: Polity Press, 1984).
Cliff Goddard and Anna Wierzbicka, “Cultural Scripts: What Are They and What Are They Good For?” Intercultural Pragmatics 1(2)(2004): 153–166.
Adam Greenfield, Against the Smart City (New York: Do Projects, 2013).
David Kilcullen, Out of the Mountains: The Coming Age of the Urban Guerrilla (New York: Oxford University Press, 2013).
Eric Kluitenberg, Delusive Spaces: Essays on Culture, Media and Technology (New York: NAi/DAP, Inc., 2008).
Kevin Lynch, Managing the Sense of a Region (Cambridge, MA: The MIT Press, 1976).
Cat Matson, “The Problem With Smart Cities,” How We Get To Next, March 27, 2016.
Shannon Mattern, Deep Mapping the Media City (Minneapolis: University of Minnesota Press, 2015).
Marshall McLuhan, Understanding Media: The Extensions of Man (New York: McGraw-Hill, 1964).
Ted Mooney, Easy Travel to Other Planets (New York: Ballantine, 1981).
Nicholas Negroponte, Being Digital (New York: Knopf, 1995).
Dominic Pettman, In Divisible Cities (Brooklyn: Punctum Books, 2013).
Peter Root, Ephemicropolis art exhibit, 2010.
Robert J. Sampson, Great American City: Chicago and the Enduring Neighborhood Effect (Chicago: University of Chicago Press, 2013).
Anthony M. Townsend, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia (New York: W.W. Norton & Co, 2013).

 

What Means These Memes?

Once exclusively a fan phenomenon, internet memes have become a mainstay in marketing. Not long ago, they were the domain of crafty internet users bent on making and spreading bite-sized cultural commentary. In his recent Complex article, “How Memes Changed the Rap Game,” Andre Gee quotes one marketing and advertising agency saying that “meme marketing allows potential new fans to discover new music without feeling like they’re being advertised to.”

As originally conceived by Richard Dawkins in his 1976 book The Selfish Gene, a meme is “a unit of cultural transmission, or a unit of imitation.” It is the smallest spreadable bit or iteration of an idea. Memes are based on genes, Dawkins’ original analogy contends. He writes,

Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.

Others have taken the idea, the “meme” meme, further. Kate Distin has perhaps taken up the idea most earnestly with two books, The Selfish Meme (2005) and Cultural Evolution (2011), the latter of which moved away from memes and looked closer at languages, written, spoken, and musical. In her book The Meme Machine (1999), Susan Blackmore distinguishes between memes that copy a product and memes that copy instructions. Similarly, in The Electric Meme (2002), Robert Aunger extends the meme metaphor by adding phenotypes and conflating them with artifacts. With Memes in Digital Culture (2013), Limor Shifman does a noble job attempting to reconcile Dawkinsian memes with internet memes.

Distinguishing imitation or replication as a process of communication, as well as integrating Everett M. Rogers’ closely related diffusion of innovations theory, Brian H. Spitzberg proposes an operational model of meme diffusion. He writes, “Communication messages such as tweets, e-mails, and digital images are by definition memes, because they are replicable transmitters of cultural meanings.”

In his book Cultural Software, J.M. Balkin imagines memes as bits in the “cultural software” that makes up ideologies. In Genes, Memes, Culture, and Mental Illness (2010), Hoyle Leigh writes that “a meme is a memory that is transferred or has the potential to be transferred.” There’s even The Complete Idiot’s Guide to Memes, which only discusses internet memes in one chapter of its 23, and as an afterthought (Appendix E).

Both biological and cultural evolution require competition and collaboration, and no one knows at what level the selection, transfers, and changes happen: Genes? Individuals? Groups? Where memetic theories are concerned, another major problem is one of scale. What size is a meme? Where are its borders? What do memes add up to? Like genes, germs, and viruses, memes have what Dawkins called “fitness,” which means that a very “healthy” meme that grows big and strong can still be very negative and quite dangerous. As Virus of the Mind author Richard Brodie once told me, “Memetic theory tells us that repetition of a meme, regardless of whether you think you are ‘for’ it or ‘against’ it, helps it spread. It’s like the old saying ‘there’s no such thing as bad publicity.’” This is an overlooked aspect of memetics that also applies to internet memes. Think here of internet users reposting memes with which they disagree and commenting to say so. Regardless of the context, the meme still spreads. That is, even if it is presented in a negative light, the meme is fitter, healthier, and stronger as long as it spreads. Retweets might not equal endorsements, but they do strengthen the memes.

Another problem you may have noticed in the “meme” meme via the brief and selective literature review above is that the genetic analogy is not universal. Some theorists prefer an analogy with viruses. However many aspects they might share as useful metaphors, genes and viruses are not the same thing. Douglas Rushkoff’s Media Virus! (1994), Brodie’s Virus of the Mind (1995), and Aaron Lynch’s Thought Contagion (1996) all take up the virus analogy over the gene one. Maybe it’s a better model: when something is “viral,” it spreads; when something is “genetic,” it doesn’t necessarily. Sure, genes are passed on, but viruses are inherently difficult to stop. Spreading is what they do. This epidemiological view of culture has been most thoroughly explored by anthropologist Dan Sperber. His 1996 book, Explaining Culture, goes a long way to doing just that, using a naturalistic view of its spread. Some prefer to skip the memes altogether. Malcolm Gladwell, whose 2000 bestseller, The Tipping Point, also takes an epidemiological view of culture and marketing but without ever mentioning memes, told me shortly after its release,

As for memetics, I hate that theory. I find it very unsatisfying. That idea says that ideas are like genes — that they seek to replicate themselves. But that is a dry and narrow way of looking at the spread of ideas. I prefer my idea because it captures the full social dimension of how something spreads. Epidemiologists are, after all, only partially interested in the agent being spread: They are more interested in how the agent is being spread, and who’s doing the spreading. They are fundamentally interested in the social dimension of contagion, and that social dimension — which I think is so critical — is exactly what memetics lacks.

If memes are indeed analogous to genes, then the real power of memes is that they add up to something. I’m no biologist, but genes are bits of code: stretches of DNA bundled into chromosomes, from which organisms are built. Plants, animals, viruses, and all the life we know about are built from them. “The meme has done its work by assembling massive social systems, the new rulers of this earth,” writes Howard Bloom. “Together, the meme and the human superorganism have become the universe’s latest device for creating fresh forms of order.”

Perhaps that was true in the mid-1990s, when Bloom wrote that, or in the mid-1970s, when Dawkins wrote The Selfish Gene, but the biases and affordances of memes’ attendant infrastructure have changed dramatically since. After all, memes have to replicate, and in order to replicate, they have to move from one mind to another via some conduit. This could be the oral culture of yore, but it’s more and more likely to be technologically enabled. Broadcast media supports one kind of memetic propagation. The internet, however, supports quite another.

Units vs Unity

How are we to understand culture through a metaphor that’s based on another metaphor? Genes are figurative as well, a rhetorical tactic deployed simply to give a name to something. Meta-metaphors are known as pataphors, an extension of Alfred Jarry’s pataphysics, a science so useless that Jarry himself framed it as a parody. Pataphysics is to metaphysics what metaphysics is to physics: it’s one level up. “Pataphysics… is the science of that which is superinduced upon metaphysics,” wrote Jarry, “whether within or beyond the latter’s limitations, extending as far beyond metaphysics as the latter extends beyond physics.” He added, “Pataphysics is the science of imaginary solutions, which symbolically attributes the properties of objects, described by their virtuality, to their lineaments.” If ever there were a scientific concept that proved pataphysical, it is surely the meme. Virtual. An imaginary solution.

In her book, How the Gene Got Its Groove, Elizabeth Parthenia Shea writes,

As a rhetorical figure, the ‘gene’ moves from context to context, adapting to a broad range of rhetorical exigencies (from the highly technical to the intensely political to the ephemeral and the absurd), carrying with it a capacity for rhetorical work and rhetorical consequences. As the examples in this book show… the rhetorical consequences of the figure of the gene often include the assertion of boundaries, with authoritative knowledge on one side and playful language, stylistic devices, and rhetoric on the other.

Memes only work if they move. If they are units of culture, then in order to build and maintain that culture, they have to move.

Memes are supposedly what makes us different from all other species: we alone can defy our biological genes by way of our cultural memes. As we’ve seen, memes have been touted as units of thought, belief, ideology, memory, learning, influence, and, of course, culture. As the media theorist Douglas Rushkoff told me in 1999,

I’ve been into memes off-and-on since Media Virus! (1994), and I still think they’re an interesting way to understand culture. But meme conversations spend much more time explaining memes than they accomplish. In other words, the metaphor itself seems more complex than the ideas it is meant to convey. So, I’ve abandoned the notion of memes pretty much altogether.

Even in the 1990s, the web’s salad days, the concept was so beleaguered by explanation that one of its major champions dropped the idea. Rushkoff continues,

I remember I was doing an interview about Media Virus! for some magazine, and it was taking place at Timothy Leary’s house. And he overheard me mention memes, and the journalist asking me to explain to him what ‘memes’ are. Afterwards, Timothy teased me. ‘Two years you’ve been carrying on about memes,’ he said. ‘If you still have to explain what they are every time you mention them, it means they just haven’t caught on. Drop ‘em.’

Now everyone knows what a meme is, but it’s not the thing Rushkoff was talking about in the 1990s. One is now far less likely to have to explain what memes are than what they aren’t. An internet meme is a meme now. Ludwig Wittgenstein once argued there was no such thing as a private language, the presumption being that language, the prime mover of ideas if ever there were one, only works if it is shared. The same can be said of culture: it only works if it is shared. If memes never add up to anything larger than memes, the concept is dead, and so is its culture. Dawkins’ idea has been hijacked by the jacked-in, mimed by the marketers, and spread through networks, for better or worse.


This post is an excerpt from Chapter 6 “Metaphors Be With You” of The Medium Picture (forthcoming from punctum books) and contains parts from my chapter “The Meme is Dead, Long Live the Meme” from the book Post Memes: Seizing the Memes of Production, edited by Alfie Bown and Dan Bristow (punctum books, 2019)

Interfaces of the Word

The designer James Macanufo once said that if paper didn’t exist, we’d have to invent it. Paper, inscribed with writing and then with printing, enabled recorded history. Media theorist Friedrich Kittler once wrote that print held a “monopoly on the storage of serial data.” Even as writing represents a locking down of knowledge, one of “sequestration, interposition, diaeresis or division, alienation, and closed fields or systems,” Walter Ong pointed out that it also represents liberation, a system of access where none existed before. After all, we only write things down in order to enable the possibility of referring to them later.


Herbert Bayer, Diagram of the Field of Vision (1930).

People would make fun of you if you were working on software for communicating with the dead even though that’s half the purpose of writing. — @mathpunk, November 1, 2014

“Written genres,” Lisa Gitelman writes in her book, Paper Knowledge, “depend on a possibly infinite number of things that large groups of people recognize, will recognize, or have recognized that writings can be for. To wit, documents are for knowing-showing.” This “knowing-showing” is the liberation aspect of writing and printing, the enabling of access. She continues, “[J]ob printers facilitate or ensure the pure exchange function. That is, they ensure value that exists in and only because of exchange, exchangeability, and circulation.”

“Digital documents… have no edges,” she adds. A “document” in digital space is only metaphorically so. Every form of media is the same at the digital level. Just as genres of writing emerge from discursive fields according to the shared knowledge of readers, “the ways they have been internalized by members of a shared culture,” digital documents are arranged in recognizable forms on the screen. The underlying mechanisms doing the arranging remain largely hidden from us as users, what Alex Galloway calls “the interface effect.” It’s kind of like using genre as a way to parse massive amounts of text, as a different way to organize and understand writing.

Rita Raley outlines what she calls “TXTual Practice” in her chapter in Comparative Textual Media, describing screen-based, “born-digital” works as unstable, “not texts but text effects.” Her essay moves away from viewing the digital document and other such contrivances as metaphors and toward employing Galloway’s interface effect. Galloway casts the old argument about interfaces becoming transparent and “getting out of the way” in a bright and harsh new light, writing that their “operability engenders inoperability.”

Yet another book on the topic, Lori Emerson’s Reading Writing Interfaces, takes on the “invisible, imperceptible, inoperable” interface, starting with ubiquitous computing. Once our devices obsolesce into general use, “those transparent devices that achieve more the less they do,” as Galloway puts it, they escape everyday criticism. The interface stuff hides in those edges that aren’t really there. The words I write now float and flicker on a screen in a conceptual space I barely understand. Emerson cites the mass seduction of the Macintosh computer interface and the activist digital media poetics that critique that seduction.

If paper didn’t exist, we’d have to invent it. Would anyone say the same for the screen?


Bibliography:

Emerson, Lori, Reading Writing Interfaces: From the Digital to the Bookbound, Minneapolis, MN: University of Minnesota Press, 2014.

Galloway, Alexander R., The Interface Effect, Malden, MA: Polity Press, 2013.

Gitelman, Lisa, Paper Knowledge: Toward a Media History of Documents, Durham, NC: Duke University Press, 2014.

Kittler, Friedrich A., Discourse Networks: 1800/1900, Stanford, CA: Stanford University Press, 1990.

Lupton, Ellen (Ed.), Type on Screen: A Critical Guide for Designers, Writers, Developers, and Students, New York: Princeton Architectural Press, 2014.

Ong, Walter J., Interfaces of the Word: Studies in the Evolution of Consciousness and Culture, Ithaca, NY: Cornell University Press, 1977.

Raley, Rita, “TXTual Practice,” in N. Katherine Hayles & Jessica Pressman (Eds.), Comparative Textual Media: Transforming the Humanities in the Postprint Era (pp. 183–197), Minneapolis, MN: University of Minnesota Press, 2013.

The Surface Industry

I don’t know any casual skateboarders. Everyone I know who’s ever done it has either an era of their lives or their entire essence defined by it—the rebellion, the aggression, the expression—inextricably bound up with their being. It’s the way you wear your hair and the way you wear your hat. It’s the kind of shoes you wear and which foot you put forward. It’s the crew you run with and the direction you go. There is something about rolling through the world on a skateboard that changes people forever. 

The author at age 11 and the beginning of a very long road.

Ever since I first saw Wes Humpston’s Dogtown cross on the bottom of a friend’s skateboard in the sixth grade, I knew it was going to be a part of my world. I first stepped on a skateboard at the age of 11. There are scant few physical acts and objects that have had a larger impact on who I am and how I am. Through the wood, the wheels, and the graphics, skateboarding culture introduced me to music, art, and attitude. Riding a skateboard fundamentally changed the way I see the world. “Skateboarding is not a hobby,” says Ian MacKaye, “and it is not a sport. Skateboarding is a way of learning how to redefine the world around you.”

Disparaged by pedestrians, police, and business owners, skateboarders reinterpret the urban landscape with style, grace, and aggression. Their instrument of choice, the skateboard, is a humble object of plywood, plastic, and metal. A 2×4 with roller-skates nailed to the bottom, the skateboard started as a way to emulate surfing. After carving the curves of empty pools, skateboarders eventually moved on to custom ramps and the streets of the city. MacKaye adds, “It was like putting on a pair of filtered glasses—every curb, every wall had a new definition. I saw the world differently than other people.” Riding a skateboard changes how one sees the world generally and the built environment specifically. Where most see streets, sidewalks, curbs, walls, and handrails, skateboarders see a veritable playground of ramps and obstacles to be manipulated and overcome in the name of fun. Hills are for speed. Edges are for grinding or sliding. Anything else is for jumping onto or over. Looking through this lens, all one sees is lines to follow and lines to cross.

Spike Jonze crossing lines. Photo by Rodger Bridges.

“We lived in a dead-end town with nowhere to go,” says the professional skateboarder Mike Vallely,

and then when skateboarding came along—especially street skating—it made our town bearable. It made our town livable because it blew it wide open… It was like the possibilities were suddenly endless in a town that once felt like we had nowhere to go. Suddenly it was like, this place ain’t so bad because there’s a curb and there’s a bench and there’s some stairs and there’s a wall. Everything was redefined. These weren’t things that confined and defined our lives, they were things we were now defining.

The game designer Brian Schrank defines affordance mining as a way of determining a technology’s “underutilized actionable properties and developing methods of leveraging those properties.” Schrank has done this most famously with computer keyboards, designing games that challenge the use and user of QWERTY keyboards to find new ways of interacting with the common computer. It’s a form of hacking, he says, that instead of searching for weakness in a system is about finding its hidden strengths. Affordance mining can also be found in other underground, punk, and DIY cultures. In her book Adjusted Margin, Kate Eichhorn writes, “If copy machines and their gritty output of posters, flyers, and zines helped to define and spread movements intent on bolstering the rights of people on the margins, it was largely against, not with, the grain of the machines’ original intentions.”

Skateboarders mine affordances from all of the edges and surfaces of the city, challenging the design of our everyday built environment. Skateboarding is perhaps the best example of the oft-quoted line from Automatic Jack in William Gibson’s 1982 short story “Burning Chrome”: “The street finds its own use for things.” Skateboarders find their own use for everything in the city: ledges, curbs, stairs, handrails—surfaces, edges, and angles of all kinds. Even with the proliferation of skateparks, pure street skating is still the true measure of skill and vision. But just as the skateparks have spread, creating skateable terrain in towns large and small all over the world, the effort against street skating has evolved as well, creating its own countermovement.

Hostile architecture: a “bum-proof bench” in Los Angeles.

There are several bus-bench designs that allow sitting while waiting for the arrival of mass transit yet prevent the bench from being used as a bed. Most of these designs involve armrests or ridges in the seat to prevent one from lying prone across the bench. In his book, Callous Objects: Designs Against the Homeless, Robert Rosenberger writes, “The websites of bench manufacturers rarely advertise the fact that these designs are specifically intended to discourage sleeping, although on occasion such partitions and armrests are referred to as ‘antiloitering’ features.” My favorite has to be the backless, round-top bench: the seat is shaped like half a cylinder, resembling a barrel, and allows one to sit, albeit not luxuriously. Without one’s feet on the ground for stability, though, one is not likely to stay on top, so there’s no napping on this bench. Urban theorist Mike Davis calls these varieties “bum-proof benches.”

The manipulation of the perceived affordances of objects and surfaces is another great example. Chairs and tables offer surfaces that are affordances for the support of weight. That is, a table affords support. If you have a glass counter on which you don’t want anything placed, it should be slanted. If it’s flat, it gives the perception of affording weight placed on top and often ends up cracked. The handrails around hotel balconies are typically rounded or beveled in such a way as to prevent the setting down of a beverage. This is to keep one from setting a beer bottle on the rail then drunkenly or excitedly knocking it off onto passers-by, cars, or just the ground below. These are not design flaws.

In the past few decades, architects and urban landscapers have made or retrofitted handrails and ledges to make them unusable for skateboarding. Large knobs welded onto metal handrails or blocks bolted to ledges keep skateboarders from using these surfaces as props or obstacles for their maneuvers. These are not mistakes but hindrances, designed, often clumsily and with little regard for aesthetics, to prevent certain uses. Even if nonverbal, the message is clear: You are not welcome here.

Skateblockers installed on a once thoroughly enjoyed ledge. Photo by Mark Turnauckas.

In 1998, Nike attempted to enter the skateboarding market with a line of shoes designed specifically for the sport. Their ad campaign featured unusable athletic spaces and gear like basketball goals permanently blocked by welded crosses of rebar, football fields with chained-off 50-yard lines, baseball diamonds with obstructed home plates. The ads ran with the slogan, “What if all athletes were treated like skateboarders?” It would be years before Nike was able to establish itself within the sport, but the ads were an interesting take on skateboarding’s place in the larger context of sports.

Nike in the 1990s: “What if all athletes were treated like skateboarders?”

“Skateboarding is a rebellion,” says Miki Vuckovich. A photographer and fixture of the skateboarding industry, Vuckovich is currently the Director of Development at USA Skateboarding. “It’s a sport, an activity, a lifestyle adopted primarily by adolescents, and as an individualistic activity, it gives participants a sense of independence. It sets them apart from others who don’t skate, and their own particular approach to skateboarding further differentiates them from their skating peers.” He continues,

Skateboarding, however mainstream it may become, will always give skateboarders a unique perspective on the rest of the world. It’s as simple as the difference between crawling on all fours or walking upright. Skateboarders glide around, hopping up and down onto and off of curbs and objects: Benches and handrails are for grinding, not sitting or holding, and staircases are for jumping, not climbing.

Like dirt paths meandering off to the side of paved sidewalks, skateboarding is the slang in the pattern language of architecture. It works constantly against the rhetoric of the built environment. It is where the affordances of design and the desires of humans diverge. MacKaye continues, “For most people, when they saw a swimming pool, they thought, ‘Let’s take a swim.’ But I thought, ‘Let’s ride it.’ When they saw the curb or a street, they would think about driving on it. I would think about the texture. I slowly developed the ability to look at the world through totally different means.” Skateboarding is the built environment dreaming.

A Message in a Bottleneck

The first time I heard a compact disc was in middle school. My best friend’s dad had just replaced his entire collection of LPs with CDs. They sat in stacks beside the apparatus that played them. They were like extra-terrestrial objects, something from the science fiction we were into at the time. They were also off limits. We were not allowed to touch them.

One day my friend’s dad sat me down on the couch in the middle of their den. The angled sunlight of autumn streaked through the limbs and leaves of the trees in their small front yard. Four large brown cabinet speakers, sitting one each in the corners of the room, were all pointing directly at me. He put on “Owner of a Lonely Heart” from Yes’s then-new record, 90125, loud enough for all the neighbors to hear. The opening samples stumbled around the room before the lead guitar took hold. That first horn stab, a sample from “Kool is Back” by Funk, Inc. (which is a cover of “Kool’s Back Again” by Kool and The Gang) leftover from Trevor Horn’s Duck Rock sessions with Malcolm McLaren, sounded like a laser shot from space. I remember being able to feel Chris Squire’s bass thumping through the floor as Trevor Rabin’s guitar swirled and the samples bounced around the room and my skull to dizzying effect. That day the CD earned and maintained its otherworldly reputation in the history of recording formats, supplanting the raggedy cassette and the woefully outmoded vinyl record.

Creators of culture often lose control of their art when certain technologies step in. Some adapt, some adopt, and others disappear. The first records appeared in the early twentieth century. By the 1940s, a standard was set, an organizing principle established: thirty-three and a third revolutions per minute, with about forty-five minutes of recording time, approximately twenty-three minutes per side. Traditionally, most artists have recorded ten or so three-to-four-minute songs, which fit nicely into the LP format. But artists who bristled at the restrictions of the popular song could record single pieces that lasted the entire side of a record—think the lengthy improvisations of John Coltrane or Miles Davis, or the proggy self-indulgence of Yes or Rush—and they did.

In the 1970s, punk rock threatened not only the dinosaurs of the dark side of the moon, but the format itself. Punk’s working-class angst had no truck with the extravagance of prog and festival rock, much less with the length of their songs. The spiky bursts of punk’s aggression led to a rise in significance of the seven-inch single. Bands were heard today and gone tomorrow, rarely together long enough to record an entire LP worth of material, and fans rarely expected much more. Their pithy musical statements didn’t require such a long run time. But the LP format and its limitations on recording, production, and listening remained in force for the next fifty years.

Fast-forward to 1988, when The Cure was recording its eighth record, Disintegration. Robert Smith said it was the first time the band went into the studio knowing that they’d be recording for a release on compact disc, which meant they could plan to record over an hour of music. “Disintegration is the first real CD-LP,” he claimed. “It was about time the musicians learned to use this format: instead of two twenty-minute sides of an LP, you now have a seventy-minute stream of music without interruptions.” The LP had restricted bands to a runtime of forty-five minutes; with the advent of the CD, artists had additional time to record songs and to include “bonus tracks” on CD reissues of older LPs.

Just as important as the recording, the CD also changed the act of listening. Long-accepted notions like “fast-forward” and “rewind” lost their meaning overnight. Digital media changed listeners’ access to the individual songs on a record, and their expectations as well. “The history of mobile listening,” writes Michael Bull, “is also the history of ratcheting up of consumer desire and expectation—consumers habitually expect these technologies to do more and more for them.” The CD was just the next physical format for the commercial distribution of music. Its aesthetic proximity to the LP and the likelihood that it is music’s last physical, commercial format have caused some to conjecture that it will end up like the LP, as a cultural artifact to be revered, preserved, collected. “The tangibility of the CD is part of its charm,” writes Mark Katz, “A collection is meant to be displayed, and has a visual impact that confers a degree of expertise on its owner.”

“I don’t think the ‘decline of the compact disc’ has affected the way I approach the idea of recording music,” Dischord Records founder Ian MacKaye told me in 2009. “We just press fewer CDs, more vinyl, and make it available online.” Sales of compact discs dropped by half between their peak in 2000 and 2007, and by 97 percent in the years since. By 2020, the CD made up only 4 percent of music industry sales. “If anything,” MacKaye continues, “I reckon the fact that so many people are listening to music on the earbuds and headphones that come with personal listening devices has made me think more about the recording process. I don’t listen to music that way, and I wonder if my aesthetic still ‘works’ when shot directly into the ears. Having said that, I don’t have any intention of doing anything differently.”

“On a conscious level, the decline of the compact disc has had no effect on how I record music,” K Records founder Calvin Johnson told me then. “On the subconscious level, it has filled me with glee.” Like Johnson, others see its small size and subsequently small cover art, as well as its mass-produced disposability, as signs it will go the way of the cassette and the 8-track, detritus that has fallen by the wayside of technology’s forward momentum, a momentum that then moved at the speed of the instantly downloadable MP3. Pro skateboarder-cum-recording artist Duane Pitre adds,

I like to have something in my hands, I am not very fond of MP3s as a final product as sound quality is compromised, and you have to interact with a computer. Also, there is no artwork to gaze at, no booklet to flip through, etc. Music used to be about an ‘escape’ of sorts for me (still is sometimes) and being locked onto computer is by no means an escape in my eyes. But anyhow back to the ways to release… I am very fond of vinyl releases with free MP3 download coupons. I think that is the best of both worlds… very interested in that route.

The MP3 itself has since diminished as a format, of course, but once the MP3, online file sharing, and the iPod arrived around the turn of the millennium, the floodgates were opened, and music was liberated not only from the dams of physical formats but also from physical spaces. Now it’s freer still, but the digitization of music started with the compact disc.

The Medium Picture Object Thing: A Photo Essay

Released in 1979, Douglas Hofstadter’s first book, the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid, is an expansive volume that explores how living things arise from nonliving things. It’s about self-reference and emergence and creation and lots of other things. It’s well worth checking out.

For the cover of his heady tome, Hofstadter carved two wood-block objects such that their shadows would cast the book’s initials when lit against a flat backdrop. He went the extra step of working in the initials for the subtitle as well.

Earlier this year, I was inspired to emulate Hofstadter’s sculpture. I found a way to put the initials of my media-theory book-in-progress, The Medium Picture (TMP), into a similar configuration. This is one of my early sketches.

The sketches I did at least made the thing appear possible, so I started exploring physical options. After trying different materials and digging around craft stores, I finally found some letters that were about the right shape and would save me a lot of time toward the final object.

I was fortunate to find letters with similar proportions to the ones I’d been drawing. The first thing was to cut the M to make the P the top of the T. Like so:

 

After some papier-mâché tweaking, caulk to round the leg of the M, and a coat of white paint, the object was ready to test.

 

Now that it physically existed, I knew the real test would be hanging it, lighting it, and capturing its shadows correctly. I built a contraption for just that out of things found around my parents’ house.

It was as sketchy as it looks. The object was suspended with two pieces of fishing line, and I had to turn off the air conditioning to get the thing to hang still for the picture. I found some pieces of foamcore in my sister’s old closet for the backdrop and gathered up tiny flashlights from all over the house.

With the LED flashlights propped and taped in place, this is the final set-up.

And this is the final shot. It’s not quite as intricate or as elegant as Hofstadter’s, but I’m pretty stoked on it. I think it will make a striking cover image and a fitting tribute to his work.

I belabored this process here because about half the people who see the final image ask me what software I used to make it. I know this could’ve been done digitally in any 3-D imaging suite, but I wanted to make it for real, just as Douglas Hofstadter had done.

A Résumé as Research

On December 18, 1996, I started my first online job. I remember the date because one year and one day later, the company closed its doors.

We sold software online. It sounds quaint now, but we were the first company to do it. This was back when the attitude was apocalyptic about using your credit card online. The internet was a dark, dismal place. No one out there was to be trusted. It was also when people expected software to come in a box with shiny discs and glossy instruction manuals. Customers routinely asked when they would receive these. The idea that you could download a program over the phone lines, then install and run it on your computer without a disc was still foreign to most.

Sometime in 1997 we were purchased by another software retailer. They made their money through mail-order catalog sales and were curious about potential sales online. They bought us as a placeholder just in case this internet thing took off. When we didn’t show the returns they expected in the time they expected, they shut us down.

It sounds as weird now as downloading software did then, but this kind of turnover was normal in the dot-com era. My coworkers seemed to be split between the glib, who’d seen it all before, and the crushed, who’d harbored dreams of online fortune. We were so far ahead of other companies that many of their jobs didn’t exist anywhere else yet. As one of my friends there said, despondent after being unable to find similar work elsewhere, “I love what I do.”

I wasn’t just glib, I was stoked. I got the only severance package I’ve ever received.

During my year and a day at that job, I got vacation time for the first time ever. So, the month before we were unexpectedly shut down, I flew to San Francisco to work for a week at Hi-Speed Productions, the offices of Thrasher Skateboard Magazine. I’d been writing for their other magazine, SLAP, which was sort of a little brother to Thrasher, since it started. One of my friends there had just left, so there was a position open. I was vying for it, and there was nothing else I wanted to do with my newly accrued week’s vacation. A few months later, severance check in hand, I moved down there for a brief stint as SLAP’s music editor.

Music journalism almost killed me. Shouts out to Hassan Abdul-Wahid, Wei-En Chang, Kyle Grady, and Lance Dawes. Jake Phelps, R.I.P.

Being at SLAP reminded me of my first days out of my parents’ house, those first attempts at navigating the adult world. Having grown up with an artist mom, I just assumed I’d be an artist. After trying to go to an actual art school, which involved retaking a standardized test in pursuit of a scholarship to offset out-of-state tuition, I ended up at the local community college. When it came time to transfer to a four-year institution, I was still an art major. In an attempt to monetize my waning passion for art, I chose commercial art as my area of study. I met with a professor at a school in Orlando, Florida, who flitted around his office preparing for something other than meeting with me. He told me there was no such thing as commercial art.

I tried a semester at the University of Montevallo, near Birmingham, Alabama. As I was nearing the end of that failed experiment, I decided art wasn’t the major for me. Spurred by the social concerns of the hardcore punk and hip-hop I’d been listening to since middle school, I decided on sociology. I met with an advisor at Montevallo, who also acted as if he was readying his studio space for anything other than my presence. I ended up moving back home to finish a degree in social science.

The SLAP experience was like a second coming of age. It reminded me how my expectations had framed my future and how wrong I was again. To be accepted to school, hired for work, welcomed into a space, and then treated like your presence is nothing special is confusing. To pass the test then be treated like there wasn’t a test is confounding. To work hard to get somewhere and then be treated as if anyone there can do the work is disheartening. It felt as much my fault as anyone else’s. These were not endings, they were turns: paths that branched off from the one I thought I’d follow. The one at SLAP led me back to school.

A year and a half later, I moved to Athens, Georgia, to enter the University of Georgia’s master’s program in Artificial Intelligence. After extensive research, I had found that their program encompassed so many of my newfound interests: cognitive science, computer science, psychology, philosophy. I was quickly in over my head. I was unprepared for the formal logic the entire program was built on. The formal logic class I was placed in was the fourth in a sequence of which I had taken none. The programming class was taught in a language called Prolog, short for “programming in logic.” The introductory A.I. course also required a final project written in Prolog. I was failing by midterms.

I also didn’t want to be a computer programmer, but that really wasn’t going to be a problem. The one class I never missed was the one-credit survey of all the research going on in the A.I. program and its associated disciplines. Those were all of the reasons I’d been drawn to the program in the first place. Those were all of the reasons I’d worked an extra year as a designer at an Alabama newspaper and an Army base to save up the money to go to UGA after not getting in the previous year.

I bounced back into web design and, inadvertently, skateboarding. My next plan was to get a job and save up enough money to move to Austin, Texas. I had heard great things about the city, and I had a few friends living there already. After two months at a web design job in Atlanta, I got a job at Skateboard.com in San Diego.

Shouts to Chris Mullins and Tod Swank.

After another brief and uneventful stint in action sports, I was back in Seattle.

I’ve been thinking about my previous paths and branches because another one split just recently. After seven years teaching at the University of Illinois at Chicago, I took a job at a private art college in Savannah—the very art school I’d tried to get into as an undergraduate! I moved back to Georgia in the fall of 2019. I had my initial reservations, but given all of the factors at the time of the decision, it seemed like the thing to do. The new job had a dress code, far less autonomy, lots of rules anathema to academia, and no real room for growth. I lasted exactly one academic year.

Like SLAP and Skateboard.com, from the outside the school looked like the place for me. It is often difficult to convince outsiders otherwise. They see a place for you and you in that place. If it doesn’t work out, then you must be the problem. You must have done something wrong. What do you mean you didn’t like working as the music editor of a skateboard magazine?! What do you mean you didn’t like writing and designing for a skateboard website?! What do you mean you didn’t like teaching at an art school?!

Pete O’Dell, my coworker and colleague from that first web job in 1996, once told me to make sure I didn’t repeat my experience. I can’t claim I knew exactly what he meant at the time, but I got it eventually.