Last night an old friend and I talked about our hesitation to publish our work. He's in the middle of a Philosophy PhD program at Fordham, and he admitted that he reviews some real crap that people publish.
"Whenever I start writing a paper, I always intend to publish it," I said. "But when I'm done I realize that I really haven't contributed anything new to the pool of knowledge."
"There's so much work out there that doesn't add anything," he agreed. "Not only that, work that doesn't contribute anything to the pool actually pollutes the waters. Bad articles are pollution."
"Exactly. Junk-work muddies the waters and makes it harder for everyone else to find the good stuff."
This reminded me of a blog entry I read by Daniel Solove on Concurring Opinions. He writes:
"The reality is that most law review articles aren't all that great. This is to be expected. In nearly any field, much of what is written isn't all that great. We'd be lucky if 10% is really good. Up the production level, and you get a lot more mediocre and bad work, and only a little more good work. What's happening, in other words, is that the worthwhile articles are becoming needles in an ever-growing haystack."
He suggests that we need a system where the industry recommends the cream of the crop. I endorse this. I'm thinking of something like StumbleUpon.com: law review articles could be given a "thumbs up" by readers who find them worthwhile. As on Stumble, readers could comment on the articles (think of the potential for debate!). To keep the ratings honest, each reader would get only one vote per article, and a writer couldn't thumbs-up his or her own piece. Stumble doesn't "rank" sites per se, but ranking is actually something I'd want to implement for law review articles.
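The rules above (one vote per reader, no self-voting, ranking by vote count) could be sketched in code. This is a hypothetical illustration, not how StumbleUpon actually works; all names here (`ArticleVotes`, `thumbs_up`, etc.) are invented for the example.

```python
class ArticleVotes:
    """Toy sketch of 'thumbs up' voting for law review articles."""

    def __init__(self):
        self.votes = {}    # article_id -> set of user_ids who voted it up
        self.authors = {}  # article_id -> author's user_id

    def register_article(self, article_id, author_id):
        self.authors[article_id] = author_id
        self.votes.setdefault(article_id, set())

    def thumbs_up(self, article_id, user_id):
        """Record a vote; return False if the vote is disallowed."""
        if self.authors.get(article_id) == user_id:
            return False  # a writer can't thumbs-up his/her own article
        voters = self.votes.setdefault(article_id, set())
        if user_id in voters:
            return False  # each reader only gets one vote per article
        voters.add(user_id)
        return True

    def ranking(self):
        """Article ids sorted by vote count, highest first."""
        return sorted(self.votes,
                      key=lambda a: len(self.votes[a]),
                      reverse=True)
```

A ranking like this is what Stumble lacks: with vote counts stored per article, the "cream of the crop" is simply the top of the sorted list.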
Cons: Researchers might focus too heavily on the highly rated works, skewing the pool toward older, already-popular articles. But that would make the discovery and use of new or obscure works all the more exciting!