The iMortal Show, Episode 2: Your Self, in Pixels

Image by Shutterstock.

You are what you tweet? Do your “likes” tell the story of who you are?

For so many of us, major portions of our lives are lived non-corporeally. We don’t define ourselves solely by what we do in physical space in interaction with other live bodies, but also through pixelated representations of ourselves on social media.

How do we strike a balance between the real world and the streams of Twitter and Facebook? Can we be more truly ourselves online? When we tweet and share and comment, what parts of ourselves do we reveal, and what do we keep hidden or compartmentalized?

The way social media defines our identity is what we’re talking about on Episode 2 of the iMortal Show, with my guests: activist and communications expert Sarah Jones, and master of tech, Twitter, and self-ridicule Chris Sawyer.

Download the episode here.

Subscribe in iTunes or by RSS.


The iMortal Show, Episode 2: “Your Self, in Pixels”

Originally recorded September 3, 2014.

Produced and hosted by Paul Fidalgo.

Theme music by Smooth McGroove, used with permission.

Running time: 43 minutes.

Links from the show:

Sarah Jones on Twitter, at Nonprophet Status, and her blog Anthony B. Susan.

Chris Sawyer (with “sentient skin tags”) on Twitter.

Previous episode: “Ersatz Geek”

iMortal posts:


Skepticism Warranted in the Panic Over Twitter Changes

Image by Shutterstock
The thing that’s been giving the online world a collective ulcer is the idea that Twitter is going to fundamentally change the way its service works by bringing Facebook-style curation to its real-time firehose. But is it really? Despite the recent rending of garments by the Twitter faithful, I think skepticism is warranted.

The panic began with the implementation of a system whereby tweets favorited by people you follow might appear out of context in your timeline, and things you favorite might emerge likewise in other people’s feeds. I wrote about how this was an ominous sign of Twitter mucking with what makes it great: real-time access to the zeitgeist, filtered only by whom you choose to follow.

Then the Wall Street Journal reported that Twitter’s CFO Anthony Noto had indicated that changes were coming to the traditional Twitter timeline:

Twitter’s timeline is organized in reverse chronological order, a delivery system that has not changed since the product was created eight years ago and one that some early adopters consider sacred to the core Twitter experience. But this “isn’t the most relevant experience for a user,” Noto said. Timely tweets can get buried at the bottom of the feed if the user doesn’t have the app open, for example. “Putting that content in front of the person at that moment in time is a way to organize that content better.”

Mathew Ingram’s analysis of this at GigaOm is what really had people reaching for their pitchforks and torches; he wrote that it “sounds like a done deal” and that the coming modifications “could change the nature of the service dramatically.”

That sounds really scary to folks who have stuck with Twitter since the beginning and truly value what it provides. Twitter stands in stark contrast to the heavily curated experience of Facebook; its immediacy gives it its power and its unique position in the media universe.

But as even Ingram acknowledged in a subsequent post, this change — an algorithmic approach to the timeline — was probably meant for the “discover” tab of the site, which is already heavily curated by the service, and for Twitter’s search feature. In fact, that’s what the Journal article says:

Noto said the company’s new head of product, Daniel Graf, has made improving the service’s search capability one of his top priorities for 2015.

“If you think about our search capabilities we have a great data set of topical information about topical tweets,” Noto said. “The hierarchy within search really has to lend itself to that taxonomy.” With that comes the need for “an algorithm that delivers the depth and breadth of the content we have on a specific topic and then eventually as it relates to people,” he added.

Sure sounds to me like he’s just talking about search, since that’s what he actually says. It didn’t help matters, I suppose, that whoever wrote the Journal’s headline chose to blare, “Twitter Puts the Timeline on Notice.” Because, no, it didn’t. Not here, anyway.

After Ingram’s first piece, in fact, Dick Costolo, Twitter’s CEO (who I presume is in a position to know), tweeted, “[Noto] never said a ‘filtered feed is coming whether you like it or not’. Goodness, what an absurd synthesis of what was said.”

What really settles all of this for me, though, is an interview given by Katie Jacobs Stanton, who is Twitter’s new media chief. What I take from what she tells Re/code is that Twitter’s current and long-term strategies continue to revolve around the real-time, unfiltered timeline. Here’s Stanton on Twitter as a companion to live TV viewing (emphasis mine):

What’s happened is that every day our users have been able to transform the service into this second screen experience while watching live TV because Twitter has a lot of those characteristics. It’s live, it’s public, it’s conversational, it’s mobile. Television is something really unique, really powerful about Twitter and we’ve obviously made a big investment in that whole experience.

Here she is on the value Twitter provides during breaking news events:

We have a number of unique visitors that come to Twitter to get breaking news, to hear what people have to say. Joan Rivers died [Thursday] and people were grieving on Twitter and talking about her, but they’re also coming to listen to the voices on Twitter as they pay respect to Joan Rivers. This happens all the time. There’s also the broader syndication of tweets. News properties of the world embed tweets and cite tweets. That’s really unique.

Note how she emphasizes that Twitter is not incidentally serving as this kind of platform; rather, this role makes it uniquely crucial to the wider media universe, both for consumers of news and for those producing it.

She later refers to Twitter as “the operating system of the news.” This does not sound to me like a service that is intent on dismantling what its own media boss is touting as its chief virtue.

Twitter will of course go through changes, and its experiments with favorites-from-nowhere are an example of that. It won’t always be exactly as it is. But I’m beginning to believe that the recent panic (including my own) is unwarranted. I suspect that the people at Twitter understand why people use it as religiously and obsessively as they do, that they use it very differently from the way they use Facebook, and that there always needs to be a way for a user to tap into the raw stream.

Maybe, down the road, the initial experience Twitter presents a user with will be one that is more curated, more time-shifted, but I suspect that the firehose will always be there for the faithful.

Learning Not to Be Tormented by the Twitter Torrent

I took a vacation from work last week, but I’m not good at vacations. One way or the other, I usually find some way to taint what should be a chance to relax with stress and labor. Sometimes that source of stress can be my own children. Not so much this time. This time, it was Twitter.

At first I had narrowed the cause down to the bunch of jerks who attacked me when I tweeted in support of Anita Sarkeesian, and again after my post on video games’ brutalization of women. And yes, that was stressful, and it’s not my fault that lots of people are jerks and decide to act on their jerkishness. But as the sun set on the last day of my time off, I realized that jerks on Twitter weren’t the sole problem for me, nor even at the core. It’s Twitter.

Two weeks ago, I, like hundreds of thousands of people I suspect, allowed the harrowing and upsetting news from Ferguson, Missouri to eat me alive, night after night. I felt a kind of moral obligation to keep my eyes affixed to Tweetdeck as every outrageous development crossed the zeitgeist in real time. I could be of no help, and I couldn’t change the minds of those who thought the police’s siege was justified, but I wouldn’t allow myself to stop internally churning over every distressing incident. Tweet by tweet. Helplessly watching and tweeting my feelings only served, in the end, to put a dent in my well-being.

It wasn’t a bad idea for me to be informed, or to feel a deep compassion for the peaceful people whose very humanity was being challenged by our system, there represented by a militarized police force. I know there is real value in being well-informed and empathetic. But there is educating yourself, and there is abusing yourself.

In the midst of the blowback over my tweets and posts about Anita Sarkeesian and video games, I found myself wounded with every attack. Sometimes the lashings came from people I sort of knew on Twitter, which stung in a particular way that unleashed all sorts of self-doubting anxieties. But even the stupid and overtly hostile attacks from trolls and other miscellaneous dingbats hurt. These were snarky, mean-spirited attempts at zingers from fools, devoid of sense, and they still upset me.

Put aside why I “let” these things affect me as severely as they do. Folks, this is what it is to be like me. The better question to ask is why I place myself in such a position where I can be affected.

Marco Arment recently discovered something similar after a Twitter fight with a tech journalist, which apparently really got to him, and he’s not exactly a shrinking violet:

Much of the stress I felt during this is from the amount of access to me that I grant to the public….We allow people access to us 24/7. We’re always in public, constantly checking an anonymous comment box, trying to explain ourselves to everyone, and trying to win unwinnable arguments with strangers who don’t matter in our lives at all.

We allow this access because of what we feel we’re getting in return: all the benefits of the Twitter firehose, every tweet in real time from those we’ve chosen to follow, plus (and this is the big one) a platform to reach all the people who choose to follow us, plus however many people follow them, should they pass our content along. When Twitter’s great, it’s really, really great. “It’s like 51% good and 49% bad,” as Brent Simmons recently put it. “I don’t see it getting any better. Hopefully it can hold the line at just-barely-worth-it.”

And it’s not just about people ganging up on me, it’s about exposing myself to the Great Torrent of Feelings that Twitter can become. As I just talked about in my last post, Twitter and other social media can, at their worst, serve as platforms for one group of people to vociferously agree with each other at the expense of another group of people, who are just, like, “the worst.” Regardless of my orientation to this dynamic, whether I’m in the agreeing-group, the dissenting group, or simply watching it take place, it’s dispiriting, frustrating, and if it’s about an issue or group of people I care about, upsetting.

Here’s Frank Chimero, expressing something that rings true for me as well:

My feed (full of people I admire) is mostly just a loud, stupid, sad place. Basically: a mirror to the world we made that I don’t want to look into. The common way to refute my complaint is to say that I’m following the wrong people. I think I’m following the right people, I’m just seeing the worst side of them while they’re stuck in an inhospitable environment. It’s exasperating to be stuck in a stream.

And here’s a kicker: While the groupthink and mob dynamics are Twitter at its worst, I think Ferguson is an example of Twitter at its best: the real-time documentation of an existentially crucial event, with contributions from people who are participants and first-hand witnesses to developments, along with analysis and reaction from people watching from outside. But just because it’s important and useful doesn’t mean it’s healthy for a given individual to drink all of it in night after night.

As I grew wearier this week, I took breaks from Twitter. I didn’t do any kind of cold-turkey abstention or detox. I just put it aside for a while. I took almost one whole day away before finally writing and posting my article on video games, and over the course of the week, I affirmatively decided not to allow Twitter or social media to be a part of my routine or my passive phone-checking. I chose not to put it in front of me when I was playing with my kids. I assembled some toys and organized my daughter’s room with only a podcast for company, and little phone-checking. I rode my bike more than I ever have. I read a book on my iPad with all of its notifications turned off. I even checked out some dead-tree books from the library, in large part so that when I was reading them, the Great Torrent of Feelings could not reach me.

As I so often write under this blog’s banner, social media is best used with great intention. I usually mean this in terms of fostering your personal identity or in curating what content you’re exposed to. But it also applies to how much of your time and attention you allow it to claim overall. I have defaulted, I fear, to a stance in which the Twitter Torrent was granted passage through my nervous system as often, and for as long, as anyone else using it wanted. This week, while not the most relaxing and diverting vacation I could have hoped for, has at least taught me to be more specific and, yes, intentional, about my time in the Torrent.

Hat tip to Alan Jacobs, from whom I got a couple of these quotes, and who usually deserves many hat tips.

Let’s Agree to Disagree with Those Other People: The Stifling of Meaningful Dissent Online

A recent study from Pew that’s getting a lot of attention suggests that social media use contributes to a dynamic in which people are afraid to express opinions that dissent from what appears to be the majority consensus. Facebook, Twitter, and other social media, it seems, make people reluctant to disagree out loud, both on those platforms and in real life.

The pushback I’ve heard against this amounts to some version of: “That can’t be true, people are arguing all the time online!” This anecdotal observation means, I suppose, that the study is bunk.

But I don’t think so. Believe me, I understand that there is a lot of disagreeing and arguing and dissenting online. But I think it’s largely between networks of people as opposed to within them. Perhaps the most obvious example from my own skepto-atheist experience is the nightly volley of tweets between atheists and religious believers, where each side is batting debate points back and forth over the answer to Life, the Universe, and Everything.

But these aren’t friends or amiable acquaintances having a thoughtful disagreement. These are people who are more or less strangers to each other, shelling one another with 140 characters’ worth of theological (or atheological) rhetoric.

No one in these debates need muster the courage to stand against the prevailing opinion of the crowd, as everyone knows where everyone stands before the argument even begins. The same can be said for much of what passes for “debate” online. Liberals versus conservatives is an easy example. If progressive activist X gets into a Twitter tit-for-tat with religious right crusader Y, it’s not an act of bravery on either participant’s part, but a chance to showboat. And that’s fine.

Someone who has a disagreement within their own circles, though, does face a tougher situation than they might have in the days before social media omnipresence (and near-omniscience). If someone has a political point of view on a certain issue that differs from most of their friends’, it can take a little steeling of the spine to bring it up, and for most of recent history, this could be confined to a conversation among friends at a party or over drinks. If there was blowback or discomfort, the blast radius was limited to a few folks on an isolated occasion.

Today, however, say something that doesn’t jibe with your circle’s line of thinking on a touchy subject, and you can expect all hell to rain down upon you from friends, friends-of-friends, and anyone else whose social Venn diagram even slightly butts up against theirs. Depending on the subject and the opinion in question, the reaction can be intense, hostile, overwhelming, and ultimately silencing. I know this phenomenon has made me rather hesitant, which is part of what made it so difficult to post my previous article, as it dissented from what many of my friends believed (apparently very strongly). As of this writing I still haven’t even tweeted out the link myself.

And where it really gets interesting is how this spills over into meatspace. Give it a moment’s thought, and you can see how the stultification of dissent online can affect one’s in-person interactions. Everyone you know in real life is probably also connected to you online, and will more or less instantly become aware of any meaningful disagreements on sensitive issues, as well as aware of the torrent of pushback one receives online as a result. Your real-life friends, in other words, will both know you think differently about something and see how you’re being pilloried for it. This, I can imagine, makes both parties – the dissenter and the members of their networks – dubious about the wisdom of opening one’s mouth. Plus, any dissent aired solely in meatspace can quickly find its way online by way of someone else’s reporting of it.

At the same time, the social rewards of being part of a cohesive group with a strongly-held, identical opinion on an issue are also apparent. You are safe within your own camp to lob snark, sarcasm, talking points, or missives of righteousness at the opposing camp (which is equally cohesive) or at some other poor sucker who was damn fool enough to disagree with his or her friends. If they still are friends, because of course they now hold the wrong opinion about a Very Important Issue.

Freddie deBoer recently noted how this phenomenon spoils online discourse, not just in debate, but in all manner of expression:

The elite internet is never worse– never– than when the people who create it decide that so-and-so is just the worst. When everybody suddenly decides that someone is a schmuck, it leads to the most tiresome and self-aggrandizing forms of groupthink imaginable.

So yes, there is plenty of argument online, but relatively little open disagreement. The disagreement most people who sneer at this study are observing is really just agreement on the position that those other people (or that one poor dumb bastard) on the other side are wrong.

It’s people, astride very tall horses, agreeing at other people.

Twitter is Monkeying with What Makes it Great

Original image from Shutterstock.
Twitter has a lot of problems. It doesn’t seem to have the wherewithal to deal with abuse and harassment on its platform, it’s managed to antagonize the developer community by limiting anyone’s ability to make new apps and interfaces to the service, and, oh yeah, it still doesn’t really know how to make money for itself. But the core service is something truly valuable and truly simple, and in that simplicity it has been – dare I say it? Yes I dare! – revolutionary.

But under the shadow of Facebook’s supermassive user base and Google’s vast resources underpinning so much of what we know of as the Web, Twitter seems willing to at least experiment with making fundamental changes to what makes it so great in the first place.

Anyone using the Twitter web interface might have noticed already that not only are retweets (when one posts someone else’s tweet on their own timeline) appearing in users’ feeds, but so are other people’s favorites (when you click the star on a tweet). Not all favorites from everyone you follow, but those determined by algorithm to be of potential interest to you.

Here’s how Twitter itself explains the new order (with my emphasis):

[W]hen we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

That’s right, Twitter is playing with building its own Facebook-like curation brain. Or to put it another way, Twitter is putting kinks in its own firehose.
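To make that concrete, here is a toy sketch of the kind of signal-scoring Twitter’s explanation implies. Everything in it is invented for illustration (the signal names, the weights, the threshold); Twitter has published nothing of its actual logic.

```python
from dataclasses import dataclass, field

# Hypothetical model of the curation Twitter describes above. The
# signals, weights, and threshold are my inventions, not Twitter's.

@dataclass
class Tweet:
    author: str
    text: str
    favorited_by: set = field(default_factory=set)
    retweeted_by: set = field(default_factory=set)

def curation_score(tweet: Tweet, following: set) -> float:
    """Blend global popularity with 'how people in your network
    are interacting with it,' per Twitter's description."""
    popularity = len(tweet.favorited_by) + len(tweet.retweeted_by)
    network_interest = len((tweet.favorited_by | tweet.retweeted_by) & following)
    return 0.1 * popularity + 1.0 * network_interest  # made-up weights

def shows_in_timeline(tweet: Tweet, following: set, threshold: float = 2.0) -> bool:
    # The old rule: you see a tweet if and only if you follow its author.
    # The new rule adds a back door for tweets that score well enough.
    return tweet.author in following or curation_score(tweet, following) >= threshold
```

The unsettling part is the second clause of that last line: content you never asked for, admitted by a score you can neither see nor tune.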

This is disconcerting to longtime Twitter users for a number of reasons. First is the relinquishing of control being forced on the user: what was once a raw feed of posts from a list of people entirely determined by the user will become one where content is inserted that the user may not even want there. As John Gruber put it, “That your timeline only shows what you’ve asked to be shown is a defining feature of Twitter.” Maybe not for long.

Another issue is that this content can be time-shifted, meaning that the immediacy of dipping into one’s Twitter stream for the second-by-second zeitgeist will become diluted at best, and meaningless at worst.

But also, this one relatively minor change in the grand scheme of things signifies an entirely different concept for what a “favorite” means on Twitter. It’s really never been entirely clear to me what clicking the star on someone’s tweet was supposed to signify, but as with many things on Twitter, folks have made it their own. For some it’s the equivalent of a nod or smile of approval without a verbal response, for others it serves the purpose of a bookmark, so you can return to it later. It’s never been meant as a “signal” to Twitter to provide more content like that tweet. Importantly, it’s always mainly been between the user and the original tweeter (not entirely, as one can click through on a given tweet and see all those who have favorited something), and now that’s completely gone. Now you have to assume that your favorites, along with your retweets, will be broadcast, put in front of people in their timelines.

Dan Frommer says changes like this may be necessary for Twitter’s long-term viability:

The bottom line is that Twitter needs to keep growing. The simple stream of tweets has served it well so far, and preservationists will always argue against change. But if additions like these—or even more significant ones, like auto-following newly popular accounts, resurfacing earlier conversations, or more elaborate features around global events, like this summer’s World Cup—could make Twitter useful to billions of potential users, it will be worth rewriting Twitter’s basic rules.

But with events around the world being as they are, the value of the Twitter firehose hasn’t been this clear since perhaps Iran’s Green Revolution. For Twitter to be monkeying with its fundamentals, the things that make it stand apart from Facebook and other platforms, is frightening. I have to hope that if Twitter does take this too far, another platform will emerge that can be all that was good about Twitter, and also attract a critical mass of users to make it valuable.

Maybe we should have given App.net more of a shot.

Ferguson as Portrayed by Facebook and Twitter: Algorithms Have Consequences

If Facebook’s algorithm is a brain, then Twitter is a stream of consciousness. The Facebook brain decides what will and will not show up in your newsfeed based on an unknown array of factors, a major category of which is who has paid for extra attention (“promoted posts”). Twitter, on the other hand, is a firehose. If you follow 1,000 people, you’ll see more or less whatever they tweet, at the time they tweet it, whenever you decide to look.

As insidious as it feels, the Facebook brain serves a function by curating what would otherwise be a deluge of information. For those with hundreds or thousands of “friends,” seeing everything anyone posts as it happens would be a disaster. Everyone is there, everyone is posting, and no one wants to consume every bit of that.

Twitter is used by fewer people, often those more savvy in social media communication (though I’m sure that’s debatable); its content is intentionally limited in scope to 140 characters, including links and/or images; and the expectation of following is different from Facebook’s. On Facebook, we “follow” anyone we know: family of all ages and interests, old acquaintances, and anyone else we come across in real life or online. On Twitter, it’s generally accepted that we can follow whomever we please, and as few as we please, and there need be no existing social connection. I can only friend someone who agrees to it on Facebook (though I can follow many more people), but on Twitter I can follow anyone whose account is not private.

So the Facebook brain curates for you, the Twitter firehose is curated by you.
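To see the difference in miniature, here is a deliberately crude sketch of the two models. The numbers and the scoring are entirely made up; the point is only that one ordering is determined by the clock and your follow list, while the other is determined by an opaque score you never see.

```python
from datetime import datetime

# Three posts from people you follow. All values are invented.
posts = [
    {"author": "ana", "time": datetime(2014, 8, 13, 22, 5),  "likes": 3,   "promoted": False},
    {"author": "ben", "time": datetime(2014, 8, 13, 21, 50), "likes": 900, "promoted": True},
    {"author": "cal", "time": datetime(2014, 8, 13, 22, 10), "likes": 40,  "promoted": False},
]

# The Twitter firehose: everything, newest first. The only input is
# whom you chose to follow.
firehose = sorted(posts, key=lambda p: p["time"], reverse=True)

# The Facebook brain: an opaque score decides what surfaces and in what
# order; paid promotion is one of its inputs, and some posts never
# appear at all. The weights here are made up.
def brain_score(post):
    return post["likes"] + (500 if post["promoted"] else 0)

newsfeed = sorted(posts, key=brain_score, reverse=True)[:2]
```

In the firehose, cal’s post comes first because it’s newest; in the curated feed, ben’s promoted post comes first and ana’s never shows up at all.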

Over the past few days, we have learned that there are significant social implications to these differences. Zeynep Tufekci has written a powerful and much-talked-about piece at Medium centered on the startling idea that net neutrality, and the larger ideal of the unfiltered Internet, are human rights issues, illustrated by the two platforms’ “coverage” (for lack of a better word) of the wrenching events in Ferguson, Missouri. She says that transparent and uncensored Internet communication “is a free speech issue; and an issue of the voiceless being heard, on their own terms.”

Her experience of the situation in Ferguson, as seen on Twitter and Facebook, mirrored my own:

[The events in Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

… I switched to non net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

Okay, so one platform has a lot about an unfolding news event, the other doesn’t. Eventually, Facebook began to reflect some of what was happening elsewhere, and Ferguson information did begin to filter up. But so what? If you want real-time news, you use Twitter. If you want a more generalized and friendly experience, you use Facebook. Here’s the catch, according to Tufekci:

[W]hat if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Without Twitter, we get no Ferguson. The mainstream outlets have only lately decided that Ferguson, a situation in which a militarized police force is laying nightly violent siege to a U.S. town of peaceful noncombatants, is worth their attention, and this is largely because the story has gotten passionate, relentless coverage by reporters and civilians alike on Twitter.

Remember, Tufekci and I both follow many of the same people on both platforms, and neither of us saw any news of Ferguson surface on Facebook until long after the story had broken through to mainstream attention on Twitter. What about folks who don’t use Twitter? Or don’t have Facebook friends who pay attention? What if the overlap were so low that Ferguson remained a concern solely of Twitter users?

And now think about what would have happened if there were no Twitter. Or if Twitter had adopted a Facebook-style algorithmic model and imposed its own curation brain on content. Would we as a country be talking about the siege of Ferguson now? If so, might we be talking about it solely in terms of how these poor, wholesome cops were threatened by looting hoodlums, and never hear the voices of the real protesters, the real residents of Ferguson, whose homes were being fired into and whose children were being tear-gassed?

As I suggested on Twitter last night, “Maybe the rest of the country would pay attention if the protesters dumped ice buckets on their heads. Probably help with the tear gas.” Tufekci writes, “Algorithms have consequences.” I’ve been writing a lot about how platforms like Facebook and Twitter serve to define our personal identities. With the Facebook brain as a sole source, the people of Ferguson may have had none at all. With the Twitter firehose, we began to know them.

What Happens When You Starve the Facebook Brain?

The Facebook algorithm, the “brain” which decides what content to feature, what content to bury, and what content to put in front of you, is being tested mightily of late. One writer tried to game the Facebook brain by disguising his posts as major life events in hopes of seeing them rise to the top. Another tried to overwhelm the brain (and himself) by clicking “like” on literally everything he saw.

Elan Morgan had a different idea altogether. Instead of gaming the Facebook brain, she more or less ignored it. Taking the opposite tack from Mat Honan, the Wired writer who liked all content for 48 hours without discrimination (and suffered for it), Morgan stopped clicking like altogether. She describes her troubles with the entire concept:

I actually felt pangs of guilt over not liking some updates, as though the absence of my particular Like would translate as a disapproval or a withholding of affection. I felt as though my ability to communicate had been somehow hobbled. The Like function has saved me so much comment-typing over the years that I likely could have written a very quippy, War-and-Peace-length novel by now.

Rather than give the Facebook brain a deluge of contradictory feedback as Honan did, Morgan gave it none at all, leaving it with little data on which to base its curation. The result? Well, in a way, she got Facebook – the one we all used to like – back:

Now that I am commenting more on Facebook and not clicking Like on anything at all, my feed has relaxed and become more conversational.

Imagine that. This is what drew me to Facebook to begin with, when it still seemed to be a platform mainly for college students. It distinguished itself from MySpace by having a clean, uncomplicated interface, and a news stream that didn’t necessitate going to an individual’s page to interact. And when you wanted to interact, to comment or ask a question, it was quick and easy.

But when Facebook turned so strongly in the direction of heavy algorithm-based curation as almost literally everyone began posting on it, it turned into something that resembled a WalMart lined with cheesy inspirational posters. Community and interaction became incidental to passive consumption of content. Passive, save for the “like.”

Morgan saw this too:

I had been suffering a sense of disconnection within my online communities prior to swearing off Facebook likes. It seemed that there were fewer conversations, more empty platitudes and praise, and a dearth of political and religious pageantry. It was tiring and depressing. After swearing off the Facebook Like, though, all of this changed. I became more present and more engaged, because I had to use my words rather than an unnuanced Like function. I took the time to tell people what I thought and felt, to acknowledge friends’ lives, to share both joys and pains with other human beings. It turns out that there is more humanity and love in words than there are in the use of the Like.

I think this is an experiment very much worth pursuing. As Mike Daisey wrote (on Facebook) in response to Morgan’s piece, “[I]t might help make it closer to being a discussion board, which is what I wish it to be.” Same here.

But there are different perspectives on the “like.” Anil Dash wrote back in 2011 about how he uses likes, Twitter “favoriting,” and other forms of social media up-voting with specific intention:

[F]avoriting or liking things for me is a performative act, but one that’s accessible to me with the low threshold of a simple gesture. It’s the sort of thing that can only happen online, but if I could smile at a person in the real world in a way that would radically increase the likelihood that others would smile at that person, too, then I’d be doing that all day long.

This idea, likes as a stand-in for in-person smiles and nods, is part of what Morgan finds problematic, that they are substanceless. “The Like is the wordless nod of support in a loud room,” she writes. “It’s the easiest of yesses, I-agrees, and me-toos.”

There’s nothing wrong with yesses and me-toos, of course. Sometimes that’s really all that’s worth saying, and that’s okay. I think the trick is to know when a mere “like” or “fav” truly is sufficient, when a more substantive response is warranted, and when it’s best, or just okay, to let something go by without expressing an opinion at all. After all, not every opinion needs expressing, does it?

I keep coming back to the idea of using Facebook and other social media with intention, knowing that there is an algorithm behind this platform that dominates so much of our online experiences, and acting on that understanding. That might mean you become far more judicious with your likes and favor prose responses over a mere thumbs-up. And maybe it means you eschew a reaction on Facebook’s platform altogether (thereby bypassing the Facebook brain) and put your response into a blog post, a tweet, or a private email.

Just because something starts or is discovered on Facebook doesn’t mean it has to stay there. That brain doesn’t own you.