Non-consensual wisdom

Previously, Shane Greenup brought to my attention two very interesting software projects with somewhat similar goals: his own rbutr (currently in beta testing), and Dan Whaley’s Hypothes.is (currently being planned and prototyped).

Rbutr (pronounced “rebutter”) allows its user base to link together web pages that rebut one another. These links eventually form conversation chains and webs that may span any number of websites, without needing or seeking the consent of the website owners. I, as a blogger, would have no control (or, at least, no veto) over rbutr links connecting my blog posts to someone else’s refutation of them, but these links would be available for any reader (who uses rbutr) to see and follow.

Hypothes.is has the even more ambitious goal of providing an “annotation layer” for the Internet. Any arbitrary passage of text (as well as other media types, including images, video and audio) within any web page may be adorned with a critical remark, visible to anyone else using the software, again without the consent of the website owner. It aims to be a fine-grained peer-review system for, well, everything.
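To make the mechanism concrete, here is a minimal sketch of how rbutr-style rebuttal links could form chains across sites. The class name, method names and URL values are illustrative assumptions, not rbutr’s actual data model; the point is just that page-to-page links, each added without the target’s consent, compose into traversable conversation chains.

```python
from collections import defaultdict

class RebuttalGraph:
    """Hypothetical sketch: a directed graph of 'X rebuts Y' links."""

    def __init__(self):
        # page URL -> list of URLs claimed to rebut it
        self.rebuttals = defaultdict(list)

    def add_link(self, page, rebuttal):
        """Record that `rebuttal` rebuts `page` (no consent required)."""
        self.rebuttals[page].append(rebuttal)

    def chains(self, page, seen=None):
        """Yield every rebuttal chain starting from `page`."""
        seen = seen or {page}
        for r in self.rebuttals[page]:
            if r in seen:  # guard against circular rebuttals
                continue
            yield [page, r]
            for chain in self.chains(r, seen | {r}):
                yield [page] + chain

g = RebuttalGraph()
g.add_link("blog.example/claim", "skeptic.example/rebuttal")
g.add_link("skeptic.example/rebuttal", "blog.example/reply")
print(list(g.chains("blog.example/claim")))
```

A real system would of course store these links server-side and surface them through a browser extension, but the chain-building logic is essentially a graph traversal like this one.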

The minds behind Hypothes.is are open about the fact that others have tried and, to varying extents, failed to create a “web annotator”. However, they seem very determined to identify and learn from past mistakes. Perhaps the most important of these has been the lack of quality control over annotations. In a previous post I mentioned a similar project called Dispute Finder. I now gather that Dispute Finder’s database may itself have been overrun by misinformation. As one article explains:

Third, and most critical in my thinking, there will be [in Hypothes.is] an extensive reputation system to qualify and rank comments based on the expertise of the commenter. The lack of this was part of what doomed an earlier project called Dispute Finder. I thought for a while that it would evolve into the tool skeptics needed, but very quickly the data in that tool was awash in conspiracy theories and other nonsense, with no way provided to sort by quality.

Hypothes.is is bringing together a pool of experts to determine how to create a “reputation model” to prevent this sort of thing from happening again. After all, Wikipedia seems to resist incursions from interest groups commendably well1. Even the Slashdot moderation system seems to successfully raise up interesting and insightful comments at the expense of mundane and simplistic ones. I feel that our collective intelligence, though sometimes disorganised, is often under-appreciated.
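The basic idea of such a reputation model can be sketched in a few lines. Everything here is an assumption for illustration (the scoring formula, the field names, the reputation values), not Hypothes.is’s actual design: votes on an annotation are weighted by the voter’s reputation, so a handful of credible endorsements outranks a pile of drive-by upvotes.

```python
def rank_annotations(annotations):
    """Sort annotations so votes from high-reputation users count more.

    Each annotation is a dict with 'text' and 'votes', where 'votes' is a
    list of (direction, voter_reputation) pairs; direction is +1 or -1.
    """
    def score(a):
        return sum(direction * reputation
                   for direction, reputation in a["votes"])
    return sorted(annotations, key=score, reverse=True)

annotations = [
    {"text": "Conspiracy nonsense", "votes": [(+1, 0.1), (+1, 0.1), (-1, 0.9)]},
    {"text": "Cited correction",    "votes": [(+1, 0.9), (+1, 0.8)]},
]
ranked = rank_annotations(annotations)
print([a["text"] for a in ranked])  # the well-sourced correction ranks first
```

This is essentially what distinguished Slashdot-style moderation from Dispute Finder’s flat database: without some weighting of this kind, popular nonsense and careful correction are indistinguishable.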

Projects like these might prove an attractive middle road between (a) the Internet as an anarchic incubator of (mis-)information, and (b) the Internet as an oppressively-sanitised, centrally-regulated newspaper. Join the dots, for instance, between Hypothes.is and the current debate over media regulation in Australia. Libertarian-minded newspapers and bloggers take furious offence at any suggestion that their activities should be overseen by The Government.

It would be hard to mount quite the same argument, with quite the same emotive imagery, against Hypothes.is or rbutr. Non-consensual though they are, there is certainly no coercion involved: no fines, no censorship, no forced apologies. There is nothing here that need be sanctioned by those in power. The system operates on a purely informative level. Affected websites are not required to do anything, and nobody is required to use the system in the first place. Such systems can only succeed if people choose to use them, and that (presumably) will only happen as long as they meet a socially and psychologically acceptable standard of reasonableness and transparency.

But neither Hypothes.is nor rbutr is a “toothless tiger”. It would surely be a blow to authors’ and editors’ egos and credibility to have third-party corrections publicly scribbled over their otherwise majestic prose. They would have to contend with new, publicly-known metrics that assess their intellectual integrity, not just the “hits” and the like that demonstrate their popularity. They would no longer enjoy the same flexibility with the truth, given that their errors may be almost immediately visible. Any third-party annotations could easily become the most attention-grabbing parts of an article, undoing at a glance whatever (accidental or deliberate) misinformation the original may have contained.

As a result, there would surely be some backlash from tabloid newspapers and bloggers upon discovering that they no longer have absolute control over what their readers see when visiting their sites. They might even consider it a threat to their business model. Operators like Andrew Bolt certainly seem to make a career out of saying things that need to be corrected (while at the same time exhibiting extraordinary defensiveness).

If it works, Hypothes.is could initially make a lot of people very, very angry. There could be lawsuits (particularly of the defamation variety, I imagine), and that could be a problem for a non-profit organisation. But, if it gets that far, the idea of a peer-reviewed Internet will already have won.


  1. That isn’t to say I’d rely on the quality of Wikipedia, necessarily. However, for a publicly-editable resource, it is curiously bereft of the kind of backhanded misinformation and puerile simplicity you find in many (even professional) online news articles or blog posts, and the outright lunacy you find in the comments section underneath (present company excepted, of course).