Fret No More

Gerrit Dou, Young Violinist Sitting in His Study.

Time flies when you’re having fun, and it flies at Mach 5 when you’re not. When I hear my kids complain, “I’m bored,” I tell them how much I envy them. Oh, to be bored! To have no immediate demands on my time, energy, and attention! Boredom may appear to be an unpleasant state, but it’s also a harbinger and a breeding ground of things worth doing. It’s the preamble for activities of choice, not obligation.

By mere coincidence I read in succession two pieces on how terrible we humans are at perceiving time and its passage, and how we might alter those perceptions in a more meaningful and satisfying way. They are both entirely convincing, and yet they each offer conflicting ideal states of mind. Or they might not.

First, Alan Jacobs in The Guardian. (I have never met this man, but I swear I count him among the most valuable teachers of my life.) Jacobs refers to our culture, as driven by our various media, as “presentist.” He writes, “The social media ecosystem is designed to generate constant, instantaneous responses to the provocations of Now.” There’s no way to think deeply or consider alternate or broader perspectives because the fire hose of stimuli never ceases.

The only solution is to cultivate “temporal bandwidth,” which Jacobs defines as “an awareness of our experience as extending into the past and the future.” Less “now” and more “back then, now, and later.” And the way we do that is to read books. Old books, preferably. “To read old books is to get an education in possibility for next to nothing.”

That education sets the stage for one’s mind to not only absorb the wisdom and the mistakes of the past, but to contemplate how they “reverberate into the future”:

You see that some decisions that seemed trivial when they were made proved immensely important, while others which seemed world-transforming quickly sank into insignificance. The “tenuous” self, sensitive only to the needs of This Instant, always believes — often incorrectly — that the present is infinitely consequential.

But cultivating temporal bandwidth is happening less and less, it seems. And as Jacobs says in a separate post, “Those who once might have been readers are all shouting at one another on Twitter.”

But while Jacobs recommends steering us away from believing the present to be of prime significance, David Cain at Raptitude urges us to grasp the present more tightly and let concerns about the past and future fade to the periphery.

And it is all to address the same basic problem: we feel washed away by the force and flow of time. Comparing an adult’s perceptions of time to a child’s, Cain writes:

As we become adults, we tend to take on more time commitments. We need to work, maintain a household, and fulfill obligations to others. […] Because these commitments are so important to manage, adult life is characterized by thoughts and worries about time. For us, time always feels limited and scarce, whereas for children, who are busy experiencing life, it’s mostly an abstract thing grownups are always fretting about. There’s nothing we grownups think about more than time — how things are going to go, could go, or did go.

Cain doesn’t point to social media or cultural illiteracy as culprits, but rather our disproportionate fixation on the past and the future. It may be that Cain is largely discussing a different scale of time than is Jacobs. Cain seems to be referring to our fixation on what has happened in the relatively recent past (10 minutes ago or 10 years ago, for example) and what the immediate future bodes (say, the next couple of hours or the next couple of months). Jacobs, by emphasizing the reading of “old books” (and by quoting lines from Horace) is certainly thinking of a much deeper past and a more distant future, spans that transcend our own lifetimes.

But as I said, Cain recommends regarding the past and future less and homing in on the present. “The more life is weighted towards attending to present moment experience, the more abundant time seems,” he says. And the way to attend to that present moment, as clichéd as it might sound these days, is through mindfulness, which can mean meditation or any activities “that you can’t do absent-mindedly: arts and crafts, sports, gardening, dancing.” Here’s why:

It’s only when we’re fretting about the future or reminiscing over the past that life seems too short, too fast, too out of control. When your attention is invested in present-moment experience, there is always exactly enough time. Every experience fits perfectly into its moment.

Note that Cain never mentions reading as one of those activities that one can’t do absent-mindedly. I don’t know about you, but if I read absent-mindedly I’m probably not actually reading at all, or at least not in such a way that I’ll retain anything. So whether or not he intended it or agrees with it, I’m throwing “reading books” into that list.

This is the bridge that connects these seemingly conflicting viewpoints, making them complementary. Much of this rests on the difference in time scale I referred to, which, if taken into account, begins to form a complete picture. Few would argue that fretting about the immediate past and future does one’s experience of time any good, or that contemplating history and the long-term repercussions of our actions is a waste of time.

The key word here might indeed be “fretting.” In this sense, the definition of “fretting” isn’t limited to “worrying,” but describes a broader practice of wasting energy and attention on things within a narrow temporal scope without taking any meaningful action to address whatever concerns might be contained within. We fret about choices we’ve made and what such-and-such a person is thinking about us or how we’ll ever manage to get through the day, week, or year with our sanity intact. We rarely fret about how the Khwarazmian Empire was woefully unprepared for the Mongol army under Genghis Khan in 1219, or how the human inhabitants of TRAPPIST-1d will successfully harvest the planet’s resources to support a growing populace.

And of course, nothing engenders fretting like social media. Already primed for fretting by the demands of work, family, and self-doubt, now we can fret in real time (and repeatedly) over anything relatives, acquaintances, total strangers, politicians, celebrities, and algorithms flash before our awareness. It is possible to exist in a state of permanent fret.

Let me tell you, time really freaking zooms when you’re fretting.

So let’s combine the recommendations of Jacobs and Cain to address our temporal-perception crisis. Let’s get off of Facebook and Twitter, let’s turn off the television, and let’s get to that stack of books (or list of ebooks if you prefer) and read. Let’s allow our brains to expand our awareness, considerations, and moral circle beyond this moment, this year, this era. Let’s not burden ourselves with the exhausting worries about what we’re reading or how long it will take to read it or what else we should be reading but aren’t. Let’s make time to chat with our kids and our parents, and write, tinker, draw, arrange, organize, build, repair, or tend as best suits us. Let’s stop and breathe and think of nothing for a few minutes as we focus on the present instant in time and space, even to the atomic level. And then let’s think big, daring, universe-spanning thoughts beyond all measure.

Let’s be bored, and let that boredom nudge, inspire, or shock us into activity, be it infinitesimal or polycosmic.

It will take practice. It will not be easy. Let’s accept that this, too, is a journey of time and effort and moments.

And let us fret no more.



If you feel so inclined, you can support my work through Patreon.


Madame Defarge’s Memes

At Wired, Issie Lapowsky summarizes research that tells us something unsurprising: more or less no one is ever persuaded to change their mind about a political position by a post they saw on Facebook.

I suppose people do actually think that their social media posts are badly-needed ammunition in the political war of ideas, and that their fierce, impassioned, and ironclad arguments will surely win over the misguided. I assume they really do think that. Intellectually.

But the truth, which I believe they at least feel at a gut level, is that these political social media posts are social tokens, signifiers of belonging to a particular group, earning good will and social capital by reaffirming that which they all already believe. That’s largely why I write political tweets: usually I think I can do so in a funny way and get some positive validation that might begin to fill the abyss that is my self-esteem. My zinger about Trump or my spirited defense of Hillary isn’t going to move the needle one teeny tiny little bit in anyone’s mind, and I have no expectation that it will.

At this level it’s harmless (other than those perilous moments when my tweets are not affirmed and I fail to achieve validation). The problem arises when the posts and tweets and memes go from social tokens to something more like Madame Defarge’s knitting. Outside of the more black-and-white world of election-year D vs. R posts, social media posts involving politics and heated social issues are designed to affirm via othering, by striking clear delineations between the good people and everyone else who is irredeemably bad for failing to check every ideological box, whether they know those boxes exist or not.

And it’s not just reactions to one’s own posts that do this work. It’s the posts of others. Lapowsky writes:

The majority of both Republicans and Democrats say they judge their friends based on what they write on social media about politics. What’s more, 12 percent of Republicans, 18 percent of Democrats, and 9 percent of independents who responded say they’ve unfriended someone because of those posts.

So it’s not political persuasion, as we might like to believe; it’s shaking the trees for villains to fall out of; it’s political partitioning.

In the film Bananas, the Castro-like ruler Esposito delivers his first speech to his people, and tells them, “All citizens will be required to change their underwear every half-hour. Underwear will be worn on the outside so we can check.”

The kind of social media I’m talking about is that underwear you just changed, and you’re pretty damned proud that you did it after only 29 minutes.

The Real People Who Serve As the Internet’s Depravity Filter

An incredible investigative piece in Wired by Adrian Chen reports on the lives of contract content moderators, folks whose job it is to go through content posted to online platforms such as Facebook, YouTube, and Whisper, and deal with anything that violates a platform’s policies or the law. And yes, we’re talking about the really bad stuff: not just run-of-the-mill pornography or lewd images, but humanity at its worst, from torture and sexual assault (involving adults, children, and animals) to beheadings.

Just reading Chen’s piece is a traumatic experience in and of itself, one that will probably require some recovery time: knowing what material is out there, what unthinkable behavior real people are engaging in, and what relentless exposure to this content must do to the psyches of these grossly underpaid contract workers, whose well-being is slowly poisoned, post by post and video by video.

I can’t have a blog about tech, culture, and humanism without at least acknowledging what Chen has brought into daylight. I don’t think I have any novel observations at the outset, having just read it, still somewhat teetering on my heels. But here are some thoughts and questions that it raises for me:

First, the obvious: Are the major tech companies for whom this work is done really aware of what they put these moderators through? From the Bay Area liberal arts grads to the social-media-sweatshop moderators in the Philippines, hundreds of thousands of smart, sensitive human beings (and I think they must be smart and sensitive to have the kind of judgment and empathy required to do this kind of work) are having their minds eaten alive, losing their ability to trust, to love, to feel joy, with disorders that mirror, or explicitly are, post-traumatic stress. Do Mark Zuckerberg or Larry Page or whoever it is that runs Whisper give a damn? (Given how little Twitter has done to deal with abuse and harassment of its users, I think it’s safe to presume for now that they probably don’t.)

Also, now that we know what these folks are exposed to, what can we as users of these services do about it? What will we do about it? (I fear the answer is probably similar to what we all did when we learned about the conditions in factories in China: more or less nothing.)

Here’s what affected me the most about all of this. This report was a reminder of the depths of human depravity. Now, it’s not news that there are horrible people doing horrible things to each other, and likely ever shall it be. But something about the way it’s described in this report amplifies it for me. If these hundreds of thousands of moderators are being overwhelmed, deluged with violence and death and evil in all their cruelly novel variations, how many of our fellow humans are perpetrators? These moderators are only catching the portion of these people who either get caught in the act or purposefully broadcast their actions. What more must be taking place? I can barely stand to ask the question.

Bearing witness to a video of a man doing something I cannot bear to recount here to a young girl, one moderator points us to the insidiousness of all of this, emphasis mine:

The video was more than a half hour long. After watching just over a minute, Maria began to tremble with sadness and rage. Who would do something so cruel to another person? She examined the man on the screen. He was bald and appeared to be of Middle Eastern descent but was otherwise completely unremarkable. The face of evil was someone you might pass by in the mall without a second glance.

Chen writes of how these moderators no longer feel they can trust the people in their day-to-day lives. You can see why.

Finally, I’ll be thinking about the fact that it’s these devices and services that I am so fascinated and often entranced by that are the delivery vessels for this horror. It is tempting to think of the tech revolution as a story of liberation and renaissance. But these tools are available to us all, to the best of us and the worst. What then? What now?

The iMortal Show, Episode 2: Your Self, in Pixels

Image by Shutterstock.

You are what you tweet? Do your “likes” tell the story of who you are?

For so many of us, major portions of our lives are lived non-corporeally. We don’t define ourselves solely by what we do in physical space in interaction with other live bodies, but also through pixelated representations of ourselves on social media.

How do we strike a balance between the real world and the streams of Twitter and Facebook? Can we be more truly ourselves online? When we tweet and share and comment, what parts of ourselves do we reveal, and what do we keep hidden or compartmentalized?

The way social media defines our identity is what we’re talking about on Episode 2 of the iMortal Show, with my guests: activist and communications expert Sarah Jones, and master of tech, Twitter, and self-ridicule Chris Sawyer.

Download the episode here.

Subscribe in iTunes or by RSS.


The iMortal Show, Episode 2: “Your Self, in Pixels”

Originally recorded September 3, 2014.

Produced and hosted by Paul Fidalgo.

Theme music by Smooth McGroove, used with permission.

Running time: 43 minutes.

Links from the show:

Sarah Jones on Twitter, at Nonprophet Status, and her blog Anthony B. Susan.

Chris Sawyer (with “sentient skin tags”) on Twitter.

Previous episode: “Ersatz Geek”


Skepticism Warranted in the Panic Over Twitter Changes

Image by Shutterstock.

The thing that’s been giving the online world a collective ulcer is the idea that Twitter is going to fundamentally change the way its service works by bringing Facebook-style curation to its real-time firehose. But is it really? Despite the recent rending of garments by the Twitter faithful, I’ve come to think that skepticism is warranted.

The panic began with the implementation of a system whereby tweets favorited by people you follow might appear out of context in your timeline, and things you favorite might emerge likewise in other people’s feeds. I wrote about how this was an ominous sign of Twitter mucking with what makes it great: real-time access to the zeitgeist, filtered only by whom you choose to follow.

Then the Wall Street Journal reported that Twitter’s CFO Anthony Noto had indicated that changes were coming to the traditional Twitter timeline:

Twitter’s timeline is organized in reverse chronological order, a delivery system that has not changed since the product was created eight years ago and one that some early adopters consider sacred to the core Twitter experience. But this “isn’t the most relevant experience for a user,” Noto said. Timely tweets can get buried at the bottom of the feed if the user doesn’t have the app open, for example. “Putting that content in front of the person at that moment in time is a way to organize that content better.”

Mathew Ingram’s analysis of this at GigaOm is what really had people reaching for their pitchforks and torches; he wrote that it “sounds like a done deal” and that coming modifications “could change the nature of the service dramatically.”

That sounds really scary to folks who have stuck with Twitter since the beginning and truly value what it provides. Twitter stands in stark contrast to the heavily curated experience of Facebook; its immediacy gives it its power and unique position in the media universe.

But as even Ingram acknowledged in a subsequent post, this change — an algorithmic approach to the timeline — was probably meant for the “discover” tab of the site, which is already heavily curated by the service, and for Twitter’s search feature. In fact, that’s what the Journal article itself says:

Noto said the company’s new head of product, Daniel Graf, has made improving the service’s search capability one of his top priorities for 2015.

“If you think about our search capabilities we have a great data set of topical information about topical tweets,” Noto said. “The hierarchy within search really has to lend itself to that taxonomy.” With that comes the need for “an algorithm that delivers the depth and breadth of the content we have on a specific topic and then eventually as it relates to people,” he added.

Sure sounds to me like he’s just talking about search, since that’s what he actually says. It didn’t help matters, I suppose, that whoever wrote the Journal’s headline chose to blare, “Twitter Puts the Timeline on Notice.” Because, no, it didn’t. Not here, anyway.

After Ingram’s first piece, in fact, Dick Costolo, Twitter’s CEO (who I presume is in a position to know) tweeted, “[Noto] never said a ‘filtered feed is coming whether you like it or not’. Goodness, what an absurd synthesis of what was said.”

What really settles all of this for me, though, is an interview given by Katie Jacobs Stanton, who is Twitter’s new media chief. What I read from what she tells Re/Code is that Twitter’s current and long-term strategies continue to revolve around the real-time, unfiltered timeline. Here’s Stanton on Twitter as a companion to live TV viewing (emphasis mine):

What’s happened is that every day our users have been able to transform the service into this second screen experience while watching live TV because Twitter has a lot of those characteristics. It’s live, it’s public, it’s conversational, it’s mobile. Television is something really unique, really powerful about Twitter and we’ve obviously made a big investment in that whole experience.

Here she is on the value Twitter provides during breaking news events:

We have a number of unique visitors that come to Twitter to get breaking news, to hear what people have to say. Joan Rivers died [Thursday] and people were grieving on Twitter and talking about her, but they’re also coming to listen to the voices on Twitter as they pay respect to Joan Rivers. This happens all the time. There’s also the broader syndication of tweets. News properties of the world embed tweets and cite tweets. That’s really unique.

Note how she emphasizes the fact that Twitter is not incidentally serving as this kind of platform, but that this makes it uniquely crucial to the wider media universe, for consumers of news and those producing it.

She later refers to Twitter as “the operating system of the news.” This does not sound to me like a service that is intent on dismantling what its own media boss is touting as its chief virtue.

Twitter will of course go through changes, and its experiments with favorites-from-nowhere are an example of that. It won’t always be exactly as it is. But I’m beginning to believe that the recent panic (including my own) is unwarranted. I suspect that the people at Twitter understand why people use it as religiously and obsessively as they do, that they use it very differently from the way they use Facebook, and that there needs to always be a way for a user to tap into the raw stream.

Maybe, down the road, the initial experience Twitter presents a user with will be more curated, more time-shifted, but I suspect that the firehose will always be there for the faithful.

Twitter is Monkeying with What Makes it Great

Original image from Shutterstock.

Twitter has a lot of problems. It doesn’t seem to have the wherewithal to deal with abuse and harassment on its platform, it’s managed to antagonize the developer community by limiting anyone’s ability to make new apps and interfaces to the service, and, oh yeah, it still doesn’t really know how to make money for itself. But the core service is something truly valuable and truly simple, and in that simplicity it has been – dare I say it? Yes I dare! – revolutionary.

But under the shadow of Facebook’s supermassive user base and Google’s vast resources underpinning so much of what we know of as the Web, Twitter seems willing to at least experiment with making fundamental changes to what makes it so great in the first place.

Anyone using the Twitter web interface might have noticed already that not only are retweets (when one posts someone else’s tweet on their own timeline) appearing in users’ feeds, but so are other people’s favorites (when you click the star on a tweet). Not all favorites from everyone you follow, but those determined by algorithm to be of potential interest to you.

Here’s how Twitter itself explains the new order (with my emphasis):

[W]hen we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

That’s right, Twitter is playing with building its own Facebook-like curation brain. Or to put it another way, Twitter is putting kinks in its own firehose.
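To make the contrast concrete, here is a minimal sketch of the two timeline models, in Python. It is purely illustrative, not Twitter’s actual code: the tweet fields, the scoring weights, and the number of injected tweets are all hypothetical stand-ins for the unnamed “variety of signals” Twitter describes.

# Illustrative sketch only; not Twitter's actual implementation.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    timestamp: float           # seconds since epoch
    popularity: int = 0        # hypothetical signal: favorites/retweets
    network_activity: int = 0  # hypothetical signal: engagement by people you follow

def firehose(tweets, following):
    """Classic Twitter: only accounts you chose to follow, newest first."""
    return sorted(
        (t for t in tweets if t.author in following),
        key=lambda t: t.timestamp,
        reverse=True,
    )

def curated(tweets, following, injected=2):
    """Facebook-style curation: your feed, plus high-scoring outside tweets."""
    def score(t):
        # Hypothetical weighting; the real signals and weights are opaque.
        return t.popularity + 3 * t.network_activity

    outsiders = sorted(
        (t for t in tweets if t.author not in following),
        key=score,
        reverse=True,
    )
    # Injected tweets land in your timeline regardless of when they were
    # posted; this is the time-shifting that dilutes immediacy.
    return outsiders[:injected] + firehose(tweets, following)

The design difference is the whole point: firehose() is fully determined by whom you follow and when they tweet, while curated() depends on weights the user never sees and can surface content out of chronological order.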

This is disconcerting to longtime Twitter users for a number of reasons. First is the relinquishing of control being forced on the user: what was once a raw feed of posts from a list of people entirely determined by the user will become one where content is inserted that the user may not even want there. As John Gruber put it, “That your timeline only shows what you’ve asked to be shown is a defining feature of Twitter.” Maybe not for long.

Another issue is that this content can be time-shifted, meaning that the immediacy of dipping into one’s Twitter stream for the second-by-second zeitgeist will become diluted at best, and meaningless at worst.

But also, this one relatively minor change in the grand scheme of things signifies an entirely different concept of what a “favorite” means on Twitter. It’s never been entirely clear to me what clicking the star on someone’s tweet was supposed to signify, but as with many things on Twitter, folks have made it their own. For some it’s the equivalent of a nod or smile of approval without a verbal response; for others it serves as a bookmark, so you can return to a tweet later. It’s never been meant as a “signal” to Twitter to provide more content like that tweet. Importantly, it’s always been mainly between the user and the original tweeter (not entirely, since one can click through on a given tweet and see everyone who has favorited it), and now that’s gone. Now you have to assume that your favorites, along with your retweets, will be broadcast, put in front of people in their timelines.

Dan Frommer says changes like this may be necessary for Twitter’s longtime viability:

The bottom line is that Twitter needs to keep growing. The simple stream of tweets has served it well so far, and preservationists will always argue against change. But if additions like these—or even more significant ones, like auto-following newly popular accounts, resurfacing earlier conversations, or more elaborate features around global events, like this summer’s World Cup—could make Twitter useful to billions of potential users, it will be worth rewriting Twitter’s basic rules.

But with events around the world being as they are, the value of the Twitter firehose hasn’t been this clear since perhaps Iran’s Green Revolution. For Twitter to be monkeying with its fundamentals, the things that make it stand apart from Facebook and other platforms, is frightening. I have to hope that if Twitter does take this too far, another platform will emerge that can be all that was good about Twitter, and also attract a critical mass of users to make it valuable.

Maybe we should have given App.net more of a shot.

Ferguson as Portrayed by Facebook and Twitter: Algorithms Have Consequences

Image source.

If Facebook’s algorithm is a brain, then Twitter is a stream of consciousness. The Facebook brain decides what will and will not show up in your newsfeed based on an unknown array of factors, a major category of which is who has paid for extra attention (“promoted posts”). Twitter, on the other hand, is a firehose. If you follow 1,000 people, you’ll see more or less whatever they tweet, as they tweet it, whenever you decide to look.

As insidious as it feels, the Facebook brain serves a function by curating what would otherwise be a deluge of information. For those with hundreds or thousands of “friends,” seeing everything anyone posts as it happens would be a disaster. Everyone is there, everyone is posting, and no one wants to consume every bit of that.

Twitter is used by fewer people, often those more savvy in social media communication (though I’m sure that’s debatable); its content is intentionally limited to 140 characters, including links and/or images; and the expectations around following are different from Facebook’s. On Facebook, we “follow” anyone we know: family of all ages and interests, old acquaintances, and anyone else we come across in real life or online. On Twitter, it’s generally accepted that we can follow whomever we please, and as few as we please, with no need for an existing social connection. I can only friend someone who agrees to it on Facebook (though I can follow many more people), but I can follow anyone on Twitter whose account is not private.

So the Facebook brain curates for you, the Twitter firehose is curated by you.

Over the past few days, we have learned that there are significant social implications to these differences. Zeynep Tufekci has written a powerful and much talked about piece at Medium centered on the startling idea that net neutrality, and the larger ideal of the unfiltered Internet, are human rights issues, illustrated by the two platforms’ “coverage” (for lack of a better word) of the wrenching events in Ferguson, Missouri. She says that transparent and uncensored Internet communication “is a free speech issue; and an issue of the voiceless being heard, on their own terms.”

Her experience of the situation in Ferguson, as seen on Twitter and Facebook, mirrored my own:

[The events in Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

… I switched to non net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

Okay, so one platform has a lot about an unfolding news event, the other doesn’t. Eventually, Facebook began to reflect some of what was happening elsewhere, and Ferguson information did begin to filter up. But so what? If you want real-time news, you use Twitter. If you want a more generalized and friendly experience, you use Facebook. Here’s the catch, according to Tufekci:

[W]hat if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Without Twitter, we get no Ferguson. The mainstream outlets have only lately decided that Ferguson, a situation in which a militarized police force is laying nightly violent siege to a U.S. town of peaceful noncombatants, is worth their attention, and this is largely because the story has gotten passionate, relentless coverage by reporters and civilians alike on Twitter.

Remember, Tufekci and I both follow many of the same people on both platforms, and neither of us saw any news of Ferguson surface there until long after the story had already broken through to mainstream attention on Twitter. What about folks who don’t use Twitter? Or don’t have Facebook friends who pay attention? What if that overlap was so low that Ferguson remained a concern solely of Twitter users?

And now think about what would have happened if there was no Twitter. Or if Twitter adopted a Facebook algorithmic model, and imposed its own curation brain on content. Would we as a country be talking about the siege on Ferguson now? If so, might we be talking about it solely in terms of how these poor, wholesome cops were threatened by looting hoodlums, and never hear the voices of the real protesters, the real residents of Ferguson, whose homes were being fired into and children were being tear-gassed?

As I suggested on Twitter last night, “Maybe the rest of the country would pay attention if the protesters dumped ice buckets on their heads. Probably help with the tear gas.” Tufekci writes, “Algorithms have consequences.” I’ve been writing a lot about how platforms like Facebook and Twitter serve to define our personal identities. With the Facebook brain as a sole source, the people of Ferguson may have had none at all. With the Twitter firehose, we began to know them.