It’s easy to focus on the benefits of online community and social networks. But they have their dark side, too. And until online leaders (and social network sites such as Facebook and Digg) learn to cope with the problems of community management, the chances of businesses effectively exploiting these collaboration tools are somewhere between “slim” and “none.”
Online communities are collections of people who connect based on shared interests. The only thing these people may have in common is that they write software in C#, or they have stained glass as a hobby, or they have passionate feelings about a political candidate, or they get excited about books. The area of commonality might be simply a desire to be entertained by “cool stuff,” which is how Digg became popular. A community (whether forum, mailing list or social network) enables people to find one another irrespective of distance, geography, race-color-creed and all that stuff.
That’s great—and it’s why I’ve spent most of my life as an online community maven, starting with expensive dial-up BBSes at 1200bps and using late, lamented proprietary online services you never heard of.
But social networks have grown exponentially since then. The Facebooks and Diggs of the world have millions of users, and thus greater challenges around community management. For them to survive—and thrive—they have to figure out how to manage groups and the members within them.
Overall, we like to think that people can work things out for themselves, and in most cases they can. But there are always cranks, and people who get a wild hair up their butts, and those who see the community as their own personal marketing opportunity. Some community members seem to specialize in learning where the moderator or terms-of-service draws the line, and wiggling their little toes right on it.
It makes sense for a social network to set reasonable rules of conduct, but frankly, few of them are doing a very good job of managing that responsibility. As these services try to cross the chasm from teenagers’ toys to business environments, they have to get better.
Need examples of community management failures? Let’s start with the recent propensity of social networks to ban members for perceived breaches.
As my colleague Jarina D’Auria wrote some months ago, Facebook’s automated system decided that her perfectly reasonable, businesslike activity was the work of a spammer, and locked her out of the service with no warning or opportunity to respond. Magically, she got back in after telling the company she was reporting on the incident. (Sometimes it’s nice to have the power of the press.)
However, a Facebook virus is apparently wreaking havoc in the user community, with the company (or rather, its software) auto-banning everyone who falls prey to it and offering nothing but silence when users complain.
That’s pitiful. Businesses don’t stay in business unless they actually listen to customers. Particularly when the customer isn’t actually at fault.
Another example is Digg, which had a recent rash of member bannings—80 of them, according to TechCrunch—particularly of top contributors. The company’s reasoning, according to that blog post, is that the Digg terms of service prohibit the use of scripts to submit stories.
Which might wash, except that one prominent Digg member who was banned, DiggBoss, wrote a script that (a) did not submit stories and (b) enhanced Digg.com’s functionality. (In short: Digg lets you e-mail friends to share stories you think are cool. Nothing in the existing software prevents someone from sending the same “shout” to recipients a dozen times. DiggBoss’s script suppressed the duplicate shouts, skipping any friend who had already dugg the story.) The result? DiggBoss was told he was banned for life. Gosh. That’s a long time. You think that’s maybe a little excessive?
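I haven’t seen DiggBoss’s source, of course, but the logic he described is simple enough to sketch. Here’s a minimal, hypothetical version in TypeScript; the Friend shape and the filterShoutRecipients name are my inventions for illustration, not anything from Digg’s actual code:

```typescript
// Hypothetical sketch of the dedup logic described above -- not
// DiggBoss's actual script. Assumes we can tell, per friend, whether
// they have already dugg the story being shared.

interface Friend {
  username: string;
  hasDugg: boolean; // true if this friend already dugg the story
}

// Given the candidate recipients of a "shout," drop anyone who has
// already dugg the story, and collapse duplicate entries so nobody
// gets the same shout a dozen times.
function filterShoutRecipients(recipients: Friend[]): Friend[] {
  const seen = new Set<string>();
  return recipients.filter((friend) => {
    if (friend.hasDugg) return false;            // already dugg it: no shout needed
    if (seen.has(friend.username)) return false; // duplicate entry: shout once
    seen.add(friend.username);
    return true;
  });
}

// Example: only "bob" gets the shout, and only once.
const recipients = filterShoutRecipients([
  { username: "alice", hasDugg: true },
  { username: "bob", hasDugg: false },
  { username: "bob", hasDugg: false },
]);
console.log(recipients.map((f) => f.username)); // ["bob"]
```

Twenty-odd lines of convenience code, in other words. Hardly the stuff of a lifetime ban.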
Now mind you, it’s Digg’s party and they can make all the rules they want. Digg has plenty of challenges (many of which they’ve created for themselves, in my humble opinion, though that’s another issue). But from a community management point of view, this situation is a major failure. The banning happened without conversation, without warning, without any opportunity for the community member to respond… even when (or maybe because) the banned member was an active and committed participant.
It’s fine to moderate the content of a community; as CIO.com’s Advice & Opinion BlogMom, I do it every day. It’s my duty to ensure the real participants are protected from the aforementioned cranks and aggressive self-marketers. But it is a real human who makes the decisions and who listens to would-be contributors. Sometimes they are misguided about appropriate behavior, and a few words of explanation set them aright. I firmly believe that ignorance is curable. Unless it’s unrelenting spam, everyone gets a warning.
This is not always an easy call. I’ve been on both sides of the keyboard in these sorts of situations. For example, the right thing to do in user disputes is usually for the moderator to keep out of it and let people work things out for themselves, including letting users declare one another devil-spawn. Community members don’t always get along; sometimes (such as political forums) that’s part of the appeal. Every message-reading client since Tapcis for CompuServe has had an option for “ignore this user,” and that’s fine.
However, social networks, and any community that enables voting, add a new and occasionally uncomfortable twist. Now, instead of a one-on-one grudge match, your online enemy can vote down a contribution you make to reddit, for instance, or mark an Amazon review as “not helpful” (when they actually mean “I disagree”—common with political or religious books). Within bounds, that’s okay.
But then there are the strange “stalkers,” whom we active Amazon reviewers call “neginators.” As top-70 Amazon reviewer Duffbert explained, “People may follow all your reviews and vote Not Helpful because they flat out don’t like you. Or they may feel you can’t possibly read that many books, therefore you must be cheating, and the reviews are marked Not Helpful. There’s even the situations where other reviewers below you (or above you) vote Not Helpful in order to try and boost their own ranking by bringing down yours. It’s vicious and pathetic behavior, but it exists.”
I’m not saying that Amazon has to step in and slap the wrists of people who vote my reviews unhelpful. (Not that I’d mind….) In fact, they quietly do so, especially when a reviewer points out that, as has happened to me, someone visited the last 10 book reviews you entered, overnight, and marked them all down. But I’ve seen similar behavior on reddit (damn, there go my votes) and in other social networks. Such behavior makes it harder to draw more people into the community as active participants—a necessity for the long-term survival of these services. As Duffbert said, “If you’re not thick-skinned, it’s best to stay out of the reviewing waters.”
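Amazon doesn’t document how it catches neginators, so take this as pure speculation on my part: even a crude heuristic would flag the obvious overnight vote-dumps. In this hypothetical TypeScript sketch, the Vote shape, the 12-hour window, and the eight-vote threshold are all my own assumptions, not Amazon’s:

```typescript
// Hypothetical heuristic for the "neginator" pattern described above --
// not Amazon's actual system. Flags any voter who dumps a pile of
// "not helpful" votes on a single reviewer within a short window.

interface Vote {
  voterId: string;
  reviewId: string;
  reviewerId: string; // author of the review being voted on
  helpful: boolean;
  timestamp: number;  // epoch milliseconds
}

const WINDOW_MS = 12 * 60 * 60 * 1000; // "overnight" = 12 hours (assumed)
const THRESHOLD = 8;                   // 8 negative votes trips the flag (assumed)

// Returns the IDs of voters who cast THRESHOLD or more "not helpful"
// votes against one reviewer inside the window -- candidates for a
// human moderator to look at, not for automatic punishment.
function flagSerialDownvoters(votes: Vote[]): Set<string> {
  const flagged = new Set<string>();

  // Group negative-vote timestamps by (voter, target reviewer) pair.
  const buckets = new Map<string, number[]>();
  for (const v of votes) {
    if (v.helpful) continue;
    const key = `${v.voterId}|${v.reviewerId}`;
    const times = buckets.get(key) ?? [];
    times.push(v.timestamp);
    buckets.set(key, times);
  }

  // Slide a window over each pair's sorted timestamps.
  for (const [key, times] of buckets) {
    times.sort((a, b) => a - b);
    for (let i = 0; i + THRESHOLD - 1 < times.length; i++) {
      if (times[i + THRESHOLD - 1] - times[i] <= WINDOW_MS) {
        flagged.add(key.split("|")[0]); // recover the voterId
        break;
      }
    }
  }
  return flagged;
}
```

Note where the output goes: to a human, for review. That’s exactly the step Facebook and Digg skipped.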
But it’s one thing for users to squabble among themselves, under the watchful eye of a ListMom, and another for the online service itself to behave badly. That’s pretty ironic, considering these services’ businesses are inherently based on the idea of people being social.
Online communities and social networks are a conversation, even when the response is another individual voting on the post’s value. By failing to effectively manage the user community—especially when they don’t respond to user complaints—companies forget that they, too, are part of that conversation.