Ran into a few very interesting (and very different) articles last week, which I wanted to comment on (more posts to follow).

First up is a blog post on Sciencebase that quotes chemist (and blogger) Joerg Kurt Wegner, who proposes that the solution to information overload is to do away with editorial oversight and instead rely on social software. Now, obviously, I’m heavily biased here, and I admit that up front. I’m an editor; it’s what I do for a living, and if I didn’t think I made valuable contributions, I would do something else. That said, there are several problems with Wegner’s proposal.

“He suggests that one of the best visualizations for this difference is Alexa’s web ranking statistic comparing Wikipedia and Encyclopaedia Britannica. Wikipedia is a social information gathering process and Britannica is an editorial process. The graph shows that Wikipedia increased its access and popularity dramatically compared to Britannica. “Based on this, I would conclude that the benefit (not only the plain access) is higher for the social service,” Wegner says.”

I’m not sure how he’s able to conclude that the “benefit,” whatever that is, is higher on one of these sites than on the other. Popularity is a terrible measure of quality. Miley Cyrus sold more CDs last year than Louis Armstrong; access to her work and its popularity increased dramatically compared to Armstrong’s. So is the “benefit” of her music greater than that of Satchmo’s? Not to defend the quality of Britannica, but the obvious reason Wikipedia sees more access is that it ranks higher in Google results than Britannica, and most people don’t get past the first few results in a Google search. Does a higher search engine ranking automatically mean the material is of higher quality?

Larry Sanger, one of the founders of Wikipedia, has this to say:

“You might maintain that people are already using Wikipedia a lot, and that that implies a great deal of trust. This is true, as far as it goes; but people use many sources that they themselves believe to be unreliable, via Google searches, for example. (I do so all the time, though perhaps I shouldn’t.) Perhaps Wikipedia is better described as one of those sources regarded as unreliable which people read anyway. And in this case, one might say, there’s no problem: Wikipedia is being read, and it is of minimally adequate and increasing reliability.”

Is “minimally adequate” good enough for scientific research? Is the mediocre work of a popular scientist, one with lots of friends and collaborators, more likely to rise in this situation than the stellar and groundbreaking work of an unpopular, unfriendly scientist who works alone?

“Honestly, I cannot see how any editorial process can cope with this problem,” says Wegner. Social software in contrast might be able to tackle this challenge. “Social software is driven by small dedicated user groups (oligarchies),” he explains, “So, compared to an editorial process the number of ‘real’ contributors might actually not be higher. However, the enrichment of diverse and really interested people might be better. If you think that you need for science the smartest set of diverse people, then ’social software’ cannot be a bad choice, right?”

The thinking seems a bit muddled here as well. Is he implying that journal editors are not “really interested” in the papers they’re reviewing? Isn’t a person who edits for a livelihood more likely to be dedicated to finding quality than a group of volunteers doing so in their spare time? Are you really likely to get the “smartest set of diverse people” spending lots and lots of time sifting through articles, or are they more likely to spend their time doing original research? Have you ever tried to edit a Wikipedia article that you know is wrong, only to have your edit quickly removed from the site? I certainly have, and I’m not alone (see here, here, here and here for just a few examples). What you tend to get on Wikipedia is not a collection of input from the world’s greatest experts on a given subject, but a few overzealous guardians protecting their patch of turf or obsessing over vaguely defined rules. The real experts often try to pitch in, but they are quickly shunned and give up on Wikipedia out of frustration. As we all know, Wikipedia is a collection of information from the most persistent contributors, not the smartest or most accurate ones.

More from Larry Sanger:

“Consequently, nearly everyone with much expertise but little patience will avoid editing Wikipedia, because they will–at least if they are editing articles on subjects that are subject to any sort of controversy–be forced to defend their edits on article discussion pages against attacks by nonexperts. This is not perhaps so bad in itself. But if the expert should have the gall to complain to the community about the problem, he or she will be shouted down (at worst) or politely asked to “work with” persons who have proven themselves to be unreasonable (at best).”

Jason Scott, of textfiles.com has this to say:

“Wikipedia really wastes energy, that’s its little secret. You say, wow, this is … you know, it’s an amazingly inefficient process. Not just like, we could really tune the spark plugs and it’ll run a little better. I mean literally just dropping ballast as it goes.

This happens constantly. There is not only a dislike of experts, there is a hatred of experts. Experts are derided on Wikipedia because they don’t tend to follow the rules. They tend to put down cited sources and then say “I don’t really care about your view of notability, I just proved it, done.” And when you say “well, screw you” and then undo it and they realize that they can’t follow the rules, they leave.

So you end up with a certain level of quality that’s maintained, especially in certain kinds of articles, where undisputed facts — basically any case where you can prove something without a shadow of a doubt, in a way that it actually happened — gets through on Wikipedia but a lot of other filtering does not.”

I think that the solution to information overload is more, and better, editorial oversight, not less, and that we’re better off getting that editorial oversight from a group of highly qualified, highly motivated individuals. I think there’s a poor understanding in much of the scientific community of the amount and quality of work needed to produce a top-notch journal. Most of the better journals you read reject 90% or more of the articles submitted to them. That’s a lot to wade through, even if you distribute the work widely. Which part of that rejected 90% are you willing to spend time evaluating? How many days of your work week are you willing to sacrifice to reviewing material, most of which isn’t going to make the grade for you? And as noted above, if you do away with the editorial process, you tend toward popular but average material. Don’t take my word for it; instead, here’s someone with more than ten years’ experience in doing just that, a pioneer in exploring the group dynamics that led to Web 2.0: Rob Malda, one of the founders of Slashdot:

“When you’re building a system like this you’re balancing the wisdom of the crowds versus the tyranny of the mob. Sometimes a crowd is really smart, but some things don’t work so well by committee. Crowds work when you have a tightly knit group of people with similar interests, but when you have a loosely knit community you get “Man Gets Hit in Crotch With Football” and Everybody Loves Raymond, where it’s just good enough to not suck.”