Beautiful, Beautiful Alienation: Walkmen, Phones, and (Not) Watches

Photo credit: Viewminder / Foter / CC BY-NC-ND

I was relatively late to the whole Walkman thing. It wasn’t until I was in high school that I got ahold of my own portable cassette player, partly because I didn’t discover a love of contemporary music until I was 12 or so, and partly because I never thought to ask for one. (I had pretty much exhausted my enthusiasm for my Weird Al tapes, which were just about the only thing I ever listened to.) I don’t remember how I finally got one (a spare of my dad’s? a gift from grandma?), but when I did acquire one, and armed it with Thomas Dolby’s The Golden Age of Wireless, my life was changed.

Suddenly, I could remove myself from the world around me, something I as a bullied, nervous, self-loathing teen was desperate to do. In place of the hurtful, disapproving world, I could immerse myself sensorially in a rich world of melody, pathos, cleverness, and imaginativeness. It’s a cliché to say that “music saved my life,” but it’s no exaggeration to say that I was able to get through some of my most miserable years because I discovered the joys and the escape of music, enjoyed alone.

The Web and the larger online world have in many ways been the Walkman of my social existence. There was no Web to speak of when I was a teenager, but I did have Prodigy and later America Online (which I hear goes by another, shorter name now). These technologies — which included things like chat rooms, message boards, email, and later social networks — facilitated communication and interaction, yes, but from a much safer remove. There were layers of abstraction that conveniently hid most of who I was, and only let out the things I specifically authorized. I could speak, joke, argue, play, and even flirt, and never have to worry that I was being disqualified for my appearance, my clothes, or even how I simply held my body, all things that invited open mockery in meatspace. Like a personal cassette player, the online world let me enter a rich new world while also being blissfully alone.

Last month at The Awl, John Herrman wrote about “the asshole theory of technology,” which I’ll get into in a bit. He writes about the dawn of the Walkman:

Sony was worried that its portable stereo would be alienating. This turned out to be true. But the impulse to correct it was wrong: the thing that made it alienating was precisely the thing that made it good. The more compelling a gadget is, the more you use it, the more the people around you resent you for using it, the more they are pressured to use it themselves. (The fact that these devices are now all connected to each other only accelerates the effect.)

For me, I was already alienated. I needed someplace to be while alienated, a way to make use of my alienation. Music helped, and the advent of the online world was a significant leap (and iPods too). And then we got smartphones, and all of that and more became instantly available wherever I was, from a small rectangle in my pocket.

So back to this asshole theory. Herrman means to apply it to the likely success of the Apple Watch, as a new gadget for users to alienate themselves with, while simultaneously wooing the would-be-alienated:

This is the closest thing we have to a law of portable gadgetry: the more annoying it is to the people around you, the “better” the concept. The more that using it makes you seem like an asshole to people who aren’t using it, the brighter its commercial prospects. […] It will succeed if it can create new rude exclusionary worlds for its wearers (this is why I wouldn’t underrate the weird “Taptic” communications stuff). It will succeed, in other words, to whatever extent it allows people to be assholes.

Maybe this is true for the Apple Watch, that the air of exclusiveness and elitism that it projects, and the in-crowd-only communications aspect of it, will drive its success. But the theory doesn’t work for me in terms of personal stereos, iPods, the Internet, and smartphones. I don’t care if they make me seem like an asshole (perhaps they do). I care that they get me away from all the other assholes, everywhere.

And a watch can’t do that.

My Son and Papa Dreadnoughtus

You may recall that last year scientists announced the discovery of a dinosaur species they called Dreadnoughtus, thought to be the single largest land animal to ever live. Cool, right? Suck it, Argentinosaurus!

Anyway, my 5-year-old son has a project this week in his preschool class on dinosaurs, his favorite subject. He had to choose one to report on and build a poster based on what he learned. Well, he already knows gobs of facts about all manner of dinosaur species, so in order to up the ante and challenge him a bit, we chose, you guessed it, Dreadnoughtus.

He was really enthusiastic about it; he knew he’d be the only kid to choose it, and he threw himself into learning new facts about it, and especially into drawing his masterful picture.

I snapped a picture of my wonderful boy and his project, and shared it to the inter-social-webs. And guess who responded to the tweet? None other than Ken Lacovara, the paleontologist who discovered Dreadnoughtus! (He describes himself in his Twitter bio as “Papa to Dreadnoughtus.”) He’s the guy lying next to the fossil in the picture on my son’s poster above. He tweeted:

Nice! Please tell him I said he did a great job!

And on my contention that my boy would “kick those other kids’ [projects] butts,” Lacovara said:

Totally

I echo what my wife Jess said about this: It’s this kind of thing that’s so wonderful about the social Internet. That my preschool-age son could excitedly work on a project about a dinosaur, and almost instantly be encouraged and congratulated by the very person who discovered it.

Anyway, thank you, Dr. Lacovara!

Industry as an Intrinsic End

Alan Jacobs forswears the Internet of analytics:

Twitter and Tumblr […] have something important in common, which they share with most social media sites: they invite you to measure people’s response to you. For many people this probably means nothing, but for me it has always had an effect. Over the years I developed a sense of how many RTs a tweet was likely to earn, how many reblogs or likes a Tumblr post would receive – and I couldn’t help checking to see if my guesses were right. I never really cared anything about numbers of followers, and for a long time I think I covertly prided myself on that; but eventually I came to understand that I wanted my followers, however many there happened to be, to notice what I was saying and to acknowledge my wit or wisdom in the currency of RTs and faves. And over time I believe that desire shaped what I said, what I thought – what I noticed. I think it dulled my brain. I think it distracted me from the pursuit of more difficult, challenging ideas that don’t readily fit into the molds of social media.

His decision:

I won’t be writing less, nor will I be producing fewer words online, I suspect. But they’ll come in larger chunks, and I’ll either be getting paid for it or working out less coherent and fully-formed thoughts right here on my own turf, where Google Analytics isn’t installed, where comments are not enabled, and where, therefore, I don’t have the first idea how many people are reading this or whether they like it.

Around the same time, I read this piece from before Christmas by Arthur C. Brooks, in which he advises against excessive “attachment” to the rewards of our labors. He’s talking about assigning too much emotional and existential value to money and material goods, but substitute something like “pageviews” for money and you’ll see where this is going:

Our daily lives often consist of a dogged pursuit of practicality and usefulness at all costs. This is a sure path toward the attachment we need to avoid. … Countless studies show that doing things for their own sake — as opposed to things that are merely a means to achieve something else — makes for mindfulness and joy.

So if not for the eyeballs, if not for the attention, why bother? That’s my usual question. Brooks says:

This manifestly does not mean we should abandon productive impulses. On the contrary, it means we need to treat our industry as an intrinsic end. This is the point made famously in the Hindu text the Bhagavad Gita, where work is sanctified as inherently valuable, not as a path to a payoff.

So my best bet for mitigating my chronic and recurring ennui over blogging and podcasting and other creative endeavors would be to stop trying to find out whether anyone’s paying attention. Ignore Google Analytics, ignore my faves and RTs on Twitter, disregard shares and likes on Facebook, etc. Just do the work because I want to.

I frankly don’t know if I’m capable. But I should think about it.

2014’s Paradigm Shifts in Tech

Technology is all about change, and rapid change at that. But even with the pace of technological development being dizzyingly fast, there are still larger paradigms, grander assumptions and codes of conventional wisdom, that are more or less static. In 2014, though, a lot of those paradigms shifted, and many of our preconceptions and understandings were altered, enlightened, or totally overturned. Here’s a short list of some of those paradigm shifts in tech in 2014.

Microsoft the Scrappy Upstart


In another age, Microsoft was the Borg, the unstoppable and loathed behemoth that destroyed all in its path. Then, sometime in the middle-to-late twenty-aughts, it became the ridiculous giant, releasing silly products, failing to even approach the hipness cachet of its once-defeated rival Apple, and headed by a boorish clown prince. Zunes? Windows Vista? The Kin smartphone? Windows 8? “Scroogled”? Each risible in its own way.

And then Microsoft got a new boss, and Satya Nadella’s ascent immediately changed the public perception of the company, especially among the tech punditocracy. The products still weren’t fantastic (Windows 8.1, Surface Pro 3), but the company began to emphasize its role as a service provider, ubiquitous not in terms of desktop machines, but in terms of the various services through which all manner of machines and OSes did their work. Think OneNote, Office 365 on iPad and Android, Azure, and OneDrive. The tide had turned, and now as Google and Apple (and Facebook and Amazon) battled for supremacy, Microsoft would simply work with anyone.

To get a strong sense of the change in attitude toward Microsoft, listen to prime Apple blogger John Gruber’s interview of Microsoft beat reporter Ed Bott on The Talk Show early this year, recorded at a Microsoft conference, at which Gruber was featured as a marquee user of Microsoft services. Gruber and Bott were full of hope and admiration for the old Borg, which would have been unthinkable even five years ago. It is a new day indeed.

“I Was into Big Phones Before it Was Cool”


When Samsung unveiled the Galaxy Note in 2011, it was ridiculed for being absurdly huge, as though anyone who bought one should be embarrassed about it. The original Galaxy Note would be considered “medium sized” next to today’s flagship phones, almost all of which have displays over 5 inches. Meanwhile, even larger phablets are objects of high desire and status, such as the Galaxy Note 4 and the iPhone 6 Plus. “Mini” phones (the 4.3-inch HTC One Mini, for example) have displays bigger than the biggest displays offered by Apple as recently as 2013, which topped out at 4 inches.

No longer silly, phablets are now considered high-productivity machines, the mark of a busy, engaged technophile, and are perceived to be eating well into the tablet market. (They’re still too big for me, but even I could be turned.) Big phones are now just phones.

Podcast Inception

At some point in 2014, it was decided that everyone in tech must have a podcast. If you worked for a tech site, you had a podcast (like me!). If you worked at a tech company, you had a podcast. If you’d just lost your tech job, your new tech job was to have a podcast. And on those podcasts, they would have as guests and co-hosts people who also had podcasts, because, of course, everyone had a podcast. On those podcasts, they would talk to their fellow podcast hosts about podcasts, making podcasts, the future of podcasts, the monetization of podcasts, and podcast apps.

I predict that sometime in the middle of 2015, there will be a Podcast Singularity which will swallow up all tech podcasts into an infinitely dense pundit which will consider how this will affect the podcast industry, and will be sponsored by Squarespace.

Amazon’s Weird Hardware

Amazon was on a roll. The Kindle had proven itself to be an excellent piece of hardware years ago, and solidified this position with the magnificent Paperwhite in 2012. In 2013, its Fire tablets had become genuinely high-quality devices that were well-suited to most of the things anyone would want a tablet for, with strong builds, good performance, and beautiful screens. It seemed like Amazon was a serious hardware company now.

Then it released the Fire Phone, and everyone got a queasy feeling in their stomachs. A half-baked, gimmicky device that was incredibly overpriced, it landed with a thud, and Amazon continues to slash its price to clear out its inventory. (People really like the Kindle Voyage, I should note, and the Fire TV has been much better received as a set-top box, though my own experience with the Fire TV Stick was very poor.)

And then it awkwardly previewed the Amazon Echo, the weird cylinder that caters to the dumb informational needs of a creepy family, and the head-scratching turned to scalp-scraping. Amazon’s status as a serious hardware maker was no longer a given.

The Revolution Will Not Be Tablet-Optimized


The iPad was going to be the PC for everyone. Most people would not even bother with a computer with a monitor and a keyboard, they’d just get a tablet, and that’d be it. PCs would be for professionals in specific situations that required a lot of power and peripherals. For the rest of humanity, it would be tablets all the way down.

Of course, now we know that in 2014, tablet growth slowed, and few people use their tablets as their primary computing device. Instead, they’re casual devices for reading, browsing, and watching video. Despite the niche cases heralded in Apple’s “Verse” ads, on the whole, tablets have become the kick-back magazines of the gadget world.

That’s fine! I’ve written before that iPads/tablets are “zen devices of choice,” the computer you use when you don’t have to be using a computer, unlike smartphones and PCs which are “required” for work and day-to-day business.

The shift this year is the realization that tablets are (probably) not going to take over the PC landscape, especially as phones get bigger and laptops get cheaper and sleeker. Could there be any better argument against an iPad-as-PC-replacement than Apple’s own 11″ MacBook Air? Even Microsoft, which once positioned its Surface machines as iPad replacements, now markets them as MacBook competitors. Why? Because tablets just don’t matter that much; they’re more for fun, and the Surface is for serious business.

Forcing the tablet to be a PC has proven so far to be awkward and hacky, and PCs themselves are better than ever. The iPad revolution may never be. Which, again, is fine, but in 2014, we realized it.

(And relatedly, e-readers aren’t dead!)

The Souring of Twitter


Twitter hasn’t always made the best decisions, and sometimes even its staunchest defenders have had to wonder what the company really wants to make of its crucial service. But to my mind, in 2014 the overall feeling toward Twitter tipped from reluctant embrace to general disapproval. It’s gotten worse on privacy, it’s been MIA or unhelpful in handling abuse and harassment, and it’s begun to seriously monkey with what makes Twitter Twitter. And more and more, I read pieces by once-avid Twitterers describing just how miserable the torrent of negativity makes them feel. Once the underdog to Facebook that all those in the know called home, it now looks like a hapless, heartless, clueless company that has no idea how good a thing it has.

You Have Died of Ethics in Games Journalism


Tech has always been a boys’ club, but in 2014, a lot of the industry decided it shouldn’t be anymore. As more and more instances of harassment, abuse, sexism, and overt misogyny were exposed – in the wider tech industry and in gaming particularly – more people stood up to declare the status quo unacceptable. A wider embrace of inclusiveness and encouragement of women in tech emerged, along with, of course, a counter-reaction of hatred and attacks from those who liked things as they were.

2014 forced the tech universe to confront some very, very ugly things about itself. But it will likely prove a net win, as more of us work to fix it than don’t.

(I have this shirt with the above image, and it’s here.)

Google’s Glass Jaw

In 2013, Google Glass was the future, the way all things tech would soon be. In 2014, no one wears it, a consumer version seems to remain a fuzzy concept, and even those who were breathlessly enthusiastic about it have felt the novelty wane. The tech punditocracy is now waking up from its Google Glass hangover, and they’re all a little embarrassed.

Now, of course, we’re all excited about watches. It remains to be seen what we feel like the next morning.

The Embassy of Google

I really like Google+ for the most part (as much as I hate the name), and I find the interactions I have there to be, on average, much higher in quality than those I have on Facebook or Twitter. Part of that, I know, is because there just aren’t all that many people there, so there’s less noise in my individual feed, and those who are there are going to be, well, at least a little more like me: tech-enthusiast, early-adopter, cultural geeks. I come across very little abuse or vitriol, and see almost none of the social signaling that is so prevalent on, even defining of, the other platforms: the constant wearing of one’s obviously-correct politics on one’s sleeve to show that one is on The Right Side of Things. On the whole, it’s calm, thoughtful discussion or commentary on a given topic.

So why is that? Well, truth be told, it’s not that the political discussion I see on Google+ is somehow superior, but that it’s not really there at all. There’s not much in the way of social signaling of ideologies because that’s simply not what surfaces, and it’s not what I’ve chosen to have surfaced. Instead, I see a lot about tech and a bit about nerd culture. Perhaps if I followed (or in the G+ parlance, “circled”) more explicitly political individuals and groups I might see more of the eye-roll inducing things I find on Twitter, but I haven’t, so I don’t.

And thinking about it more, Google+ isn’t even really just about tech generally, but a particularly geeky view of tech. In other words, it’s a lot about things related to Google, at least at a secondary or tertiary level. It’s no coincidence that I became much more interested in Google+ when I first started using Android devices, lost interest in it when I moved back into an all-Apple ecosystem, and came back to it when I got back into Android. Google+ on Android is a fantastic place to read and talk about Google and Android.*

On This Week in Google on the TWiT network this week, Danny Sullivan of the site Search Engine Land made an astute observation, almost as a throwaway comment, but it stuck with me. He said:

Google+ is the Apple Store for Google.

Let me unpack why this is such an interesting thought.

I used to work at an Apple Store, and while there it became clear to me that while I’m sure Apple makes untold bazillions just off the in-person retail transactions that take place at its stores, these places serve a grander purpose.

Apple Stores are really Apple Embassies. Apple places these outposts in locations where human beings are already primed to spend money (malls, shopping districts, etc.), and in these locations they not only sell products but make the case for Apple as a whole. The employees are ambassadors and diplomats for Apple the quasi-nation-state, conducting negotiations, solving diplomatic crises, and establishing and building on relationships. I would bet you that the way Apple Stores have represented and delivered the message of the Apple brand has resulted in more revenue and growth than the raw sales of products that take place in those same stores. I can’t prove it, but I bet it’s true.

Google+ serves a similar purpose for Google, though unintentionally. It’s a meeting place for those who use and appreciate what Google does and its surrounding services and technologies. It creates a forum and meeting place that represents Google’s design aesthetic and preferred modes of communication, just as the Apple Stores do for Apple. It’s a walled garden, one that doesn’t interact particularly well with outside platforms, which is similar to Apple in a way, and actually an exception for Google generally, which usually makes a point of undergirding everything it possibly can across all of technology. And as for it not being a “real” place, the analogy still works: Apple is about physical products, so it makes sense that its embassy is a brick-and-mortar retail location. It makes equal sense that Google, a software/cloud company, would have its embassy exist virtually, in a browser, in the cloud.

And while Apple likes its customers to give it a lot of money for its products, and is thus represented by a store, Google likes to give away its products and instead consume its users’ data: opinions, interests, routines, etc. So there is no monetary side to Google+.

So perhaps what might be best for Google+ is for its parent company to accept that it will never be a Facebook competitor, but that it potentially serves an extremely valuable function as the Apple Store of Google. Perhaps it might have Google employees inhabit it specifically for the purpose of being available to users, or attach company-run tech support exchanges for help with Android, search, and other aspects of Google’s massive online existence. Perhaps the Play Store could be more directly integrated into the Google+ experience so that users could seamlessly purchase new content while it’s being discussed (or “+1’d”). If Google decides to embrace Google+ as its embassy, it might thrive in a whole new, and potentially more valuable, way than Google ever intended.

 


* I think it’s important to note that Google+ is also full of noise. Android-centric feeds and communities are chock full of pointless screenshots of home screens and launcher themes, there’s a lot of poorly-written garbage, and a lot of complaining about battery life and whether one’s Nexus device has gotten the latest software update with morally acceptable speed.

Kermit the Frog Performs Twitter’s Strategy Statement

Twitter released its strategy statement to investors on November 12, 2014, to (at best) mixed reception. Here, Kermit the Frog performs the statement, word-for-word.

See also Kermit performing Mitt Romney’s explanation about releasing his tax returns from 2012.

Death by a Thousand Emotional Microtransactions

How much do you care what people think of you? How much do you care what people you’ve never met think of you? How much do you care what people you’ve never met think about any individual choice you make or opinion you share? How much do you care what people you’ve never met think of the specifics of the format, timing, wording, tone, or technological means of the opinion you’ve shared?

If you use Twitter, or social media in general, you already kind of know the answer, or at least you’re learning it.

I have been learning some lessons myself about social media; how it can be used either passively or with intention; how it informs our personal identity; and how I have allowed it too much unfettered access to my nervous system, among other things. Clearly, for all its benefits, Twitter is also an enormous source of potential stress, eliciting what I call the Torrent of Feelings. I won’t get into the myriad factors that make this so. Browse this here blog, and you’ll see other ruminations on this subject.

What occurs to me lately is that a lot of the stress that Twitter (et al.) engenders has to do with our perceptions of being judged. The more you present yourself on one of these platforms (and I’ll just use Twitter for now, since it’s my traditional platform and I’m tired of typing provisos indicating “et cetera”), the more you have your sense of self and identity wrapped up in it. And that can make one sensitive to the scrutiny that comes with such exposure.

Freddie deBoer recently put it like this:

“You’re doing it wrong” is the internet’s truest, most genuine expression of itself. For whatever reason, the endless exposure to other people’s minds has made the vague feeling that someone, somewhere, is judging you into the most powerful force in the world.

But what is being judged? The more I think about it, the more I think the answer is “everything.” And not “everything” in the sense of one’s whole self. That is happening, but it’s piecemeal. Very piecemeal, granular in the extreme. Because of course no one can encapsulate their whole selves in a tweet, or even a series of them, so judgment comes in small units. The hyperscrutinization that people experience (I know I do) on Twitter happens tweet by tweet, and on down.

Of course you can be called out for the substance of your opinions and choices, whether deservedly or not. But you can also be derided for your word choice, the timing of your tweet, your grammar, your nuance, your lack of nuance, your hashtag use, your frequency of tweeting or lack thereof, what client you’ve chosen to tweet from, and so on. And in those instances, though they are highly focused, the effect on the recipient is to add it to the collection of judgments about themselves as a person. As Boone Gorges puts it, “A life spent on Twitter is a death by a thousand emotional microtransactions.”

And while I strongly advocate using Twitter and social media with great intention, there’s not much you can do about this micro-judgment phenomenon besides not using Twitter. That’s because Twitter is used by humans (usually), and humans, even the ones we really like, also tend toward the shallow and the knee-jerk response in an environment that fosters that kind of thing. Gorges again:

Every tweet I read or write elicits some small (or not so small) emotional reaction: anger, mirth, puzzlement, guilt, anxiety, frustration. I’ve tried to prune my following list so that when I do find myself engaging in a genuine way, it’s with a person I genuinely want to engage with. But there’s a limit to how much pruning can be done, when unfollowing a real-life friend is the online equivalent of punting his puppy across the room. So all day long, I’m in and out of the stream, always reacting to whatever’s coming next.

And there’s a domino effect. Especially during times of collective stress (such as the siege on Ferguson, the death of someone notable, etc.), those on the periphery peek in, see the Torrent of Feelings swirling around them, and judge the validity of it all. Erin Kissane writes:

In the flood of information and emotion from something like Ferguson (or war crimes or an epidemic) … there we all are, gradually drowning. So people get huffy about the volume of emotion that these events arouse—angry that others are angry about the wrong things or too many things or in the wrong register. … (I am properly angry, you are merely “outraged.”)

It should be noted that all three of the writers quoted here have left Twitter. DeBoer’s been gone for a while, I think, and the other two announced their exits in the posts quoted above.

Now, I’m not leaving. I have too much invested socially and professionally in Twitter to forswear it. I will have to make do with diligent pruning, and accept that it will require a degree of fluidity: maybe I mute or unfollow certain people at certain times, and then bring them back to my feed at other times, for example. I will probably screw some of it up.

All of this is to say that Twitter is valuable, but we human beings are so damned vulnerable. The Twitter service does not care at all about this vulnerability, and probably thrives as a result of it. But I think we can do a lot to harness Twitter’s positive value while remaining highly mindful of its power to kill by a thousand cuts (and this is before we even get to outright abuse, harassment, and threats, which is a related problem at a much higher temperature). I’ll be thinking about these things as I tweet and react, but also as I take in the reactions of others to me. It won’t be easy.

Image by Shutterstock.

The Real People Who Serve As the Internet’s Depravity Filter

An incredible investigative piece in Wired by Adrian Chen reports on the lives of contract content moderators, folks whose job it is to go through content posted to online platforms (such as Facebook, YouTube, Whisper, etc.) and deal with the content that violates a platform’s policies or the law. And yes, we’re talking about the really bad stuff: not just run-of-the-mill pornography or lewd images, but examples of humanity at its worst, from torture to sexual assault (involving adults, children, and animals) to beheadings.

Just reading Chen’s piece is a traumatic experience in and of itself: knowing what material is out there, what unthinkable behavior real people are engaging in, and what the relentless exposure to this content must do to the psyches of these grossly underpaid contract workers, whose lives are slowly being ruined, their well-being slowly poisoned, post by post and video by video. Simply reading this article will probably require some recovery time.

I can’t have a blog about tech, culture, and humanism without at least acknowledging what Chen has brought into daylight. I don’t think I have any novel observations at the outset, having just read it, still somewhat teetering on my heels. But here are some thoughts and questions that it raises for me:

First, the obvious: Are the major tech companies for whom this work is done really aware of what they put these moderators through? From the Bay Area liberal arts grads to the social-media-sweatshop moderators in the Philippines, hundreds of thousands of smart, sensitive human beings (and I think they must be smart and sensitive to have the kind of judgment and empathy required to do this kind of work) are having their minds eaten alive, losing their ability to trust, to love, to feel joy, with disorders that mirror, or explicitly are, post-traumatic stress. Do Mark Zuckerberg or Larry Page or whoever it is that runs Whisper give a damn? (Given how little Twitter has done to deal with abuse and harassment of its users, I think it’s safe to presume for now that they probably don’t.)

Also, now that we know what these folks are exposed to, what can we as users of these services do about it? What will we do about it? (I fear the answer is probably similar to what we all did when we learned about the conditions in factories in China: more or less nothing.)

Here’s what affected me the most about all of this. This report was a reminder of the depths of human depravity. Now, it’s not news that there are horrible people doing horrible things to each other, and likely ever shall it be. But something about the way it’s described in this report amplifies it for me. If these hundreds of thousands of moderators are being overwhelmed, deluged with violence and death and evil in all manner of their cruelly novel variations, how many of our fellow humans are perpetrators? These moderators are only catching the portion of these people who either get caught in the act or purposefully broadcast their actions. What more must be taking place? I can barely stand to ask the question.

Bearing witness to a video of a man doing something to a young girl that I cannot bear to recount here, one moderator points us to the insidiousness of all of this, emphasis mine:

The video was more than a half hour long. After watching just over a minute, Maria began to tremble with sadness and rage. Who would do something so cruel to another person? She examined the man on the screen. He was bald and appeared to be of Middle Eastern descent but was otherwise completely unremarkable. The face of evil was someone you might pass by in the mall without a second glance.

Chen writes of how these moderators no longer feel they can trust the people in their day-to-day lives. You can see why.

Finally, I’ll be thinking about the fact that it’s these devices and services that I am so fascinated and often entranced by that are the delivery vessels for this horror. It is tempting to think of the tech revolution as one of liberation and renaissance. But these tools are available to us all, to the best of us and the worst. What then? What now?

Jekyll and Hyde: Forefathers of Internet Trolls

There is a degree of serendipity to my first reading of The Strange Case of Dr. Jekyll and Mr. Hyde. I downloaded it to my iPad mostly on a whim, thinking it might be a good idea to dip into some of the 19th-century science fiction I have almost entirely left unread, save for Frankenstein. (Next up, War of the Worlds!) I expected, similarly to Frankenstein, a book-length recounting of Dr. Jekyll’s agony as he is compelled to rend himself in two. I presumed it’d be chapter after chapter of his turning into Hyde, doing bad things, turning back into himself, and feeling shitty about it, and the moral would be something to do with how dangerous it is to mess with the science of life.

Not at all! You know this, of course, if you’ve read it yourself. (And if you haven’t, spoilers ahoy.) But what a refreshing surprise it was that the very premise of the crisis, a man who has learned to transform into a kind of bizarro version of himself, isn’t even revealed until quite near the end, when Jekyll himself is already dead. It was quite a wonderful book. (And it helped that it was short, as I’m a painfully slow reader, and even I finished it in a single sitting.)

To the serendipitous part. There was something about the specificity of what Jekyll identifies about his Hyde side that screamed contemporary relevance to me. In his closing letter, Jekyll reveals how he was surprised to find that his division of personalities was asymmetrical; there was no even split between Good Jekyll and Bad Jekyll. Rather, changing into Hyde was a way to release all the latent ugliness within him, and changing back, he found he remained his whole self. Hyde was the monster within Jekyll, but there was no pure angel to balance him. Hyde is always part of Jekyll, even when contained.

Here’s how he puts it. When he turned into Hyde…

…my virtue slumbered; my evil, kept awake by ambition, was alert and swift to seize the occasion; and the thing that was projected was Edward Hyde. Hence, although I had now two characters as well as two appearances, one was wholly evil, and the other was still the old Henry Jekyll, that incongruous compound of whose reformation and improvement I had already learned to despair. The movement was thus wholly toward the worse.

Reading this, it immediately occurred to me that Edward Hyde is a 19th-century version of the Internet troll. Ostensibly normal people, whose moral compasses seem more or less calibrated, when introduced to the power and anonymity of the Internet often unleash the absolute worst sides of themselves.

In the most egregious cases, we have trolls who threaten and harass and cause real-world damage. What are these people like in their day-to-day lives, in person? I doubt that most of them would be immediately identifiable as the monsters they become online.

But even for the most well-meaning among us, including myself, the immediacy of the social web can make it too easy for us to slip into hostility, arrogance, and hubris, at degrees we’d blush at if given a moment to pause and consider.

There is a little troll in all of us. There is a little Hyde in all of us.

Henry Jekyll was an entirely upstanding and moral man in his daily life, but he found a way to create a Victorian-era avatar and project his inner troll into the physical world to satisfy his darkest impulses and add kindling to his baseless rage. In his confession, he notes how Hyde began his independent existence small and emaciated, having been largely denied sustenance within the whole of Jekyll. But once free, he could nourish himself and grow stronger by acting on his aggression and hate.

That’s right, Jekyll fed the troll. And look what happened: Confusion, fear, chaos, and death.

And what might the future hold? Perhaps Robert Louis Stevenson saw it, and we’re already living it. Jekyll also writes in his confession:

With every day, and from both sides of my intelligence, the moral and the intellectual, I thus drew steadily nearer to that truth, by whose partial discovery I have been doomed to such a dreadful shipwreck: that man is not truly one, but truly two. I say two, because the state of my own knowledge does not pass beyond that point. Others will follow, others will outstrip me on the same lines; and I hazard the guess that man will be ultimately known for a mere polity of multifarious, incongruous and independent denizens.

We are not bound by the limits of Jekyll’s story, where a chemical concoction manifests only one additional “self.” On the Internet, it is trivially simple for one person to contain – and project – multitudes.

We have always had Hydes among us, I think, but the Internet has made them more visible, and better able to organize and combine their loathsome efforts, under cloaks of obscurity. In the midst of things like “Gamergate” and the non-stop torrent of rage and abuse to which the social media landscape plays host, it seems to me that there might never have been a time when this book was more relevant. The case of Dr. Jekyll and Mr. Hyde suddenly doesn’t seem so strange. Indeed, it feels very familiar.


Image by Shutterstock.