Update (04-03-08): Version 2.0 of this talk is available here. This, the original version, was delivered at a publishing meeting to publishers. V 2.0, the kinder, gentler version, was presented at a meeting of developmental biologists.
Last week I gave a talk at the American Association of Publishers Professional and Scholarly Publishing (AAP/PSP) meeting in Washington, DC. I was part of a panel discussion on “Innovative and Evolving Websites in STM Publishing” along with representatives from the New England Journal of Medicine, the Royal Society of Chemistry and the American Chemical Society. While the other talks were a bit more evangelical, mostly presenting new technologies that had been incorporated into the societies’ own journals, I tried to be more practical, taking a hard look at what’s currently being tried, whether it’s succeeding, and the reasons behind that success or failure. I’m posting my talk below, in hopes of receiving further feedback. This talk was delivered to a room full of publishers, so it’s directed with that audience in mind. In a few months, I’m giving a similar talk to a meeting of scientists, the users of these sites rather than the creators. So if you’re a user of these sites, I’d love to hear your thoughts on how Web 2.0 is serving your needs.
I began the talk with some background about CSHL Press (you can get that here) and about me. I’ll skip most of it, other than to note that for several years I served as CSHL Press’ Online Content Manager, overseeing much of our web strategy, and that I now run CSH Protocols, our attempt to move our successful laboratory manual publishing business online. I noted that I spent six years doing research in a lab where new technology development was the raison d’être, that I consider myself a technophile, and that I’m always on the lookout for new and exciting tools. I should also note that CSHL Press is a biology-focused publisher, and my comments address that market specifically. The lessons learned are definitely applicable elsewhere, but their application may depend on the culture of a given field.
Odds are, if you follow the scientific news and literature, you’ll be familiar with the enormous hype surrounding Web 2.0 technologies and their potential use in science publishing. There have been lots of articles, and lots of publishers (as well as individuals) jumping into the ring with efforts to capitalize on this growing trend. This slide presents a rough list and is far from comprehensive. I put it together spending about half an hour with Google, but you can see how many entrants there are in the market (a new one came to my attention just as I was writing this). The list includes “social networking sites” which host blogs, connect collaborators, offer discussion boards, etc., “referral sites” which let you tag and promote interesting papers or store all of your references online, and a few other endeavors, some incorporating wikipedia-type information, others looking to take advantage of the growth in online video. So the question is: how are these sites doing? Are they being used by mainstream scientists?
I’ve been conducting an informal poll for the last four months, asking every scientist I know what science blogs they read or which social networking science websites they use. I’ve spoken with undergraduates, graduate students, postdocs, PIs and department chairs. So far, 100% of those polled have answered, “None” (I’ve cleverly illustrated this concept using the parlance of the internet, where the main purpose appears to be posting cute pictures of cats). This response was really disappointing. If you spend much time in the science blogosphere, everyone seems to be talking about these great tools and the changes they’re making in research science. But when you step away from the enthusiasts and speak with the majority of scientists, you find out that they don’t have much interest in using many of these new technologies. The whole situation reminds me quite a bit of what one saw online regarding the Linux operating system 5 to 10 years ago. There was great enthusiasm, and there were predictions that Linux was soon to take over the computing world. The rest of the world shrugged, and went back to their Windows computers to get their work done.
I’ll try to discuss some of the reasons things aren’t working and make a few suggestions that might help. I certainly don’t have all the answers, I’m sorry to say, but hopefully I’m pointing in the right direction. Most of it boils down to the tools not being well designed for the desired audience. The hype is right in some ways–the potential is there, and it is something we should be excited about, but we’re failing to channel that potential into compelling tools that will catch on with the community. A successful tool will address a need of the user, and will do so in the context of the culture of the user. You’re unlikely to get a well-established culture to change just to suit your tool, no matter how much promise it shows. While most of the tools available are clever ideas, or seem useful on the surface, their lack of traction should be telling you that something’s not quite right. Although these comments are meant to address Web 2.0 sites, they’re applicable to most other websites as well (and I’ll use a few as examples).
Time is the key component here, something I’ll mention many times in this talk. Web 2.0 is based on the idea of user-created content. You put up a site, the users create the content. It requires a hefty time investment from the users. The problem is, you’re dealing with a base of users who are overscheduled and overworked. Furthermore, there’s no professional incentive to participate in the content-building at your site. The quote used here from a Postdoc is important:
“I can barely keep up with the literature in my field and with what my labmates are doing. Who has time to spend reading some grad student’s blog?”
You also have the paradox that those whose input would matter most to your site are the least likely to contribute. I’m more interested in reading comments from a prominent researcher, or seeing her tags on papers that she finds interesting, rather than the less-informative opinions of a beginning graduate student. And yet those early graduate students are much more likely to be active on your site. Later in their careers, as they become more successful, their schedules get more crowded, and participation will wane just as it is becoming most valuable.
The goal should be to create tools that save time and effort, not new ways of investing huge amounts of time and effort. A couple of examples–
Molecular Cloning is our best-known laboratory manual (known informally as “Maniatis” or “The Bible”). When the most recent edition was released, we created a website to accompany it, MolecularCloning.com (it’s now defunct). The site was essentially an electronic, online version of most of the book. You bought the book, you got a unique user number that let you access the material. Only a relatively small percentage of book buyers ever logged on to the site using their number. An even smaller number became regular users.
If you think in terms of time, in terms of efficiency, it’s fairly obvious why this site failed to catch on. The site only offered the same material the user had already purchased in the form of the book. When it was time to do an experiment, the user had two courses of action:
1) Go to the computer, go to the Molecular Cloning website, Log-in, search for the protocol, print the protocol, do the experiment.
2) Grab the book off the shelf, do the experiment.
So what we had done, essentially, was turn a two-step process into a six-step process. The advantages we were offering, better search capabilities and discussion forums, didn’t outweigh the downside of losing efficiency. We’ve tried to address these issues with CSH Protocols by taking advantage of scale. Instead of one book you’ve already purchased, CSH Protocols offers material from more than 25 books, along with material from CSHL Courses and newly published protocols not available anywhere else. Although you do have to go through the steps above, it’s still quicker than looking through an entire library of books for your protocol, especially since your lab is unlikely to own all of the books CSH Protocols contains. Because CSH Protocols is sold by institutional subscription, the cost to an individual lab is also vastly lower than purchasing the content in book form (and we’ve eliminated the logging-in step above). So far, judging by our high level of traffic and usage, the advantages of scale and the efficiency that scale provides are giving readers something more useful than our previous efforts did.
As another example (and I’ve blogged about this previously), in a discussion with blogger and technology enthusiast William Gunn, he talked about using Connotea for journal clubs. Again, let’s review the choices for the user:
1) Go to the computer, go to the Connotea site, sign up for a Connotea account, log-in, add the link to the paper that’s going to be discussed, add tags to the paper, e-mail those links out to members of the journal club. Members receive the e-mail, go to Connotea, create an account, log-in, go to the link they were sent, follow that link to the actual journal’s website, download the pdf, read, discuss.
2) Go to the computer, download the pdf for the paper, e-mail it to the members of the group, read, discuss.
You tell me which is a more efficient use of precious time.
Like many other journals, CSH Protocols has discussion forums where one can comment on a paper, or ask questions about a technique. This seems like a no-brainer on a methods site, right? If you’re having trouble with an experiment, why not ask the crowd for suggestions? Creation of these commenting forums in a journal seems to inevitably lead to a second phenomenon a few months later: an editorial wondering why no one is using the forums to leave comments. In the 1.5 years of CSH Protocols’ existence, we’ve seen a total of around four or five comments left on more than 1,000 articles published. There are many reasons behind the failure of these commenting systems (I’ll discuss cultural reasons later), but the main reason is that there’s no incentive for leaving comments. Again, you’re dealing with a limited amount of time, so why spend it on something for which you receive no credit? Where’s the upside in leaving a comment on someone’s paper?
In our case, as a methods journal, our readers are usually doing experiments that involve expensive reagents and that take up valuable time. If they’re having technical issues, does it make sense to ask strangers for help, or would a more directed search for a trusted source who has done the method before make sense? Would you trust the advice of a stranger when each experiment costs hundreds of dollars in reagents? And are those theoretical strangers really wandering through our articles looking for poor souls in need of advice? Why would they be wasting their time looking to play the good samaritan when they’ve got work to do as well?
This quote comes from a talk given by Ian Rogers of Yahoo! Music at a Music Industry meeting, but it’s very relevant for our business as science publishers. We’re going from a world where there were fairly limited numbers of information sources, and hence lots of attention available for those sources, to a world of nearly unlimited information and attention scarcity. There is greater need now than ever for editorial oversight, for the separation of signal from noise, or as he puts it, “quality = relevance”. If you can make it easy for your readers to get the information they’re seeking (relevant to them), then you’re doing a high quality job for them, something they’re willing to pay for, even when there are free sources of less-well-organized information available.
Can you come up with ways to save your readers time, to get them the information they want faster than they could gather it on their own? Here’s one example of something we’re trying on CSH Protocols, to turn advertisements away from being an annoyance and into being a useful feature. When our readers want to use one of the techniques we publish, they have to track down and order all of the reagents they need for the experiment. This can take a lot of work digging through a stack of catalogs or going to 20 different companies’ websites to find everything. At CSH Protocols, we’re partnering with the suppliers of these reagents, Sigma-Aldrich in this case, and creating advertisements tailored for individual protocols. The user can click on one ad and be taken to the supplier’s site where a customized page has been prepared and they can order everything they need for that experiment. Everyone wins–we get advertising dollars, we make our readers happy by saving them time and Sigma is happy because they get more business.
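For the technically inclined, the mechanics here are simple: each protocol carries the supplier’s catalog numbers for its reagents, and those get baked into a single link. Below is a minimal sketch in Python; the supplier endpoint, parameter names, protocol ID and catalog numbers are all invented for illustration, and this is not Sigma’s actual scheme:

```python
from urllib.parse import urlencode

# Hypothetical landing page -- a real integration would use whatever
# URL scheme the reagent supplier actually provides.
SUPPLIER_LANDING_PAGE = "https://www.example-supplier.com/protocol-cart"

def reagent_link(protocol_id, catalog_numbers):
    """Build one URL that drops the reader onto a pre-filled order page
    containing every reagent the protocol calls for."""
    query = urlencode({
        "ref": "cshprotocols",            # tells the supplier where the click came from
        "protocol": protocol_id,          # lets them show the matching reagent bundle
        "items": ",".join(catalog_numbers),
    })
    return SUPPLIER_LANDING_PAGE + "?" + query

# Invented protocol ID and catalog numbers:
print(reagent_link("western-blotting", ["A2220", "P8340", "T9284"]))
```

The point isn’t the code, it’s that one click replaces a pile of catalogs, and the link itself tells the supplier which protocol generated the sale.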
Another big reason for the lack of adoption of new technologies is inertia. By “inertia”, I mean a couple of related things.
First, most Web 2.0 sites aren’t useful until they’ve got a high level of participation. If the users are creating content, no users = no content. If there’s no content, no users are going to bother participating, rinse, lather, repeat, the circle goes around and around.
This problem is compounded by the proliferation of “me too” sites. Here you see nine different sites that all serve similar purposes. If I have limited time and each site requires a substantial time investment, how am I going to choose which one I’ll use when they all offer essentially the same thing? What happens instead is that most people choose not to choose, and sit things out until a clear winner emerges. For those who do pick a site, the site they’ve chosen is only one of many, so it sees less traffic than if there were fewer available, which means less content, which means it’s less useful.
The second thing I mean by “inertia” is the idea that if I’ve already got a way to do something, it’s going to take a lot to make me change to a new way. Doing something new takes effort and (again) time. If you don’t believe me, ask someone you work with to change computer operating systems, or ask your production department to switch from Quark to InDesign. If you want me to switch, you have to not just be better, you have to be way better for me to make that effort.
I’ll use the example again of the Molecular Cloning website, where putting the book online didn’t offer anything significantly better than the print version, so readers just stuck with what they already were using. As I mentioned earlier, we’re trying to address this problem by offering a product that gives the reader much more than they currently have available.
If you want your new site to be used, you need to lower the barriers to entry. Usability is often a huge barrier preventing new users from jumping in. Your tool has to be obvious, not only why you would use it, but how you would use it. The iPod is a good metaphor here. There were mp3 players around for years before the iPod, yet none saw much traction in the market. The why was obvious, as these players all offered significant advantages over carrying around a portable CD player. But the players were clunky and difficult to manage. One of the main reasons the iPod caught on like wildfire was that it didn’t require a manual to use. You pick up an iPod and the “how” is obvious.
Another great way to lower the barriers to entry is to use standard file formats. Work with what people already have, let them re-use the efforts they’ve already made, and then do as much of the work as possible for them. As an example here, look at the various referral sites available for keeping your reference lists online. Most of these sites allow you to import your reference list from commonly used programs like EndNote. And that’s great, a really smart way to work, a big time saver. But the big time sink on these sites is going through each paper and adding subject tags. If you’ve been a scientist for long, you have thousands of papers in your reference list. Why not automate this? Most papers published in journals feature keywords. Why aren’t any of the referral sites building in automated retrieval of keywords from the papers listed, and setting those up as tags? Then you’d immediately have a useable database of tagged papers to start with, rather than requiring hours, if not days, of work. It wouldn’t be perfect, but at least some of the work would be done for you.
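This isn’t far-fetched: PubMed already exposes MeSH headings and author keywords through NCBI’s public E-utilities, so a referral site could seed a user’s tags automatically. Here’s a rough sketch in Python; the function name and the choice to merge MeSH terms with author keywords are my own assumptions, not a feature of any existing site:

```python
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def suggested_tags(pmid):
    """Fetch one PubMed record and return its MeSH headings and author
    keywords as a starting set of tags for that paper."""
    url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    tags = [el.text for el in root.iter("DescriptorName")]  # MeSH headings
    tags += [el.text for el in root.iter("Keyword")]        # author keywords
    return sorted({t.strip() for t in tags if t})

# Run over every PubMed ID in an imported EndNote library to pre-tag it:
print(suggested_tags("18276894"))  # any PubMed ID works here
```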
And don’t just think about ways for the user to put their information into your site, also consider ways for them to export their information from your site. If you can provide functionality so the effort they put in at your site can be re-used elsewhere, your site suddenly becomes much more attractive. This also helps overcome the “me too” issues mentioned above. If I’m going to spend my time putting my information into your site, I want to be assured that if I have picked the wrong site, I can get my work out of your site for use on the eventual winner. If you offer that ability, the user is more likely to take the gamble.
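Export can be as simple as writing out a standard bibliographic format. Here’s a sketch, assuming the site stores each reference as a simple record; RIS is a plain-text format that EndNote, Zotero and most of the referral sites can already import, so tags entered on one site could travel to the next:

```python
def export_ris(references, path):
    """Write references out in RIS, a plain-text bibliographic format
    that EndNote, Zotero, and most referral sites can read."""
    with open(path, "w", encoding="utf-8") as out:
        for ref in references:
            out.write("TY  - JOUR\n")             # record type: journal article
            for author in ref["authors"]:
                out.write(f"AU  - {author}\n")
            out.write(f"TI  - {ref['title']}\n")
            out.write(f"JO  - {ref['journal']}\n")
            out.write(f"PY  - {ref['year']}\n")
            for tag in ref.get("tags", []):
                out.write(f"KW  - {tag}\n")       # tags survive the round trip
            out.write("ER  - \n")                 # end of record

# Invented example record:
export_ris([{"authors": ["Smith, J."], "title": "An example paper",
             "journal": "J. Example Biol.", "year": 2008,
             "tags": ["PCR", "cloning"]}], "my_library.ris")
```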
Beyond the practical matters of time, there are also cultural reasons why the current tools are failing to catch on. I keep seeing the phrase “Myspace for scientists” used to describe new online efforts and it drives me crazy (examples here, here, here, here, here, here,…….). Myspace is targeted at a particular culture, and while it works well for that culture, the idea of shoehorning other groups into its functionality is flawed. Scientists interact in very different ways than teenagers and their peers, or rock bands and their fans. Scientists don’t find collaborators by chatting online with strangers.
As an example here, let’s go back to forums that allow you to comment on published papers. If you go to a biology meeting and study the behavior of the participants, you’ll see some consistent patterns. The big shots and the networkers currently on the job market sit in the front rows and ask the majority of the questions of the speaker. The graduate students sit quietly at the back, and if they have a question, they approach the speaker after the talk in private. I’m not saying this is a good thing, but there does seem to be a certain etiquette involved, or at least a wariness about exposing oneself in public. Now, given that the younger students are those most likely to have the time to participate on your site, isn’t it likely that they’ll continue to follow the social patterns ingrained in their field? Unlike teenagers chatting, professionals making public statements have responsibilities, and there are consequences to their words. Are your readers going to be willing to leave a critical comment on a paper publicly, where everyone can see it for all time? Are they going to criticize work by someone who may one day be deciding on their grants, or offering them a job?
Don’t get me wrong, there are actually some sites making great use of these new tools, and doing so in a manner appropriate to the culture.
As I said earlier, scientists don’t need social networks for chatting with online friends, or posting pictures of last weekend’s drunken bash. For scientists, social networks are all about jobs, finding openings or finding candidates for openings. SciLink is a site that understands this, and they’ve taken the LinkedIn model and tried to make a similar site for scientists (the problem here is that most scientists seem to be cutting out the middleman and just going directly to LinkedIn). Unlike Myspace, it’s not a place you would visit every day to check in on things. You’d set up a page, and really only spend much time there during the rare periods of your career when you’re actively seeking a job, or actively trying to hire someone.
Another great new Web 2.0 site comes from Scientists and Engineers for America (SEA). Given the paucity of funding available for science, and the lack of evidence-based reasoning in policy decisions seen in the US government over the last 7 or so years, more and more scientists are interested in becoming involved in the political process. But for many of us, it’s unclear how to do this, where does one start, what does one do? The SEA are creating tools that provide information and direction for scientists who want to have more of a voice in Washington. One very impressive tool is their new Science, Health and Related Policies (SHARP) Network. This is a Wikipedia-based project that gives you quick access to the science policies of elected officials and candidates. Here you’re likely to see more success of Web 2.0 tools because 1) there’s no deeply ingrained cultural precedent for scientists to express their political activism and 2) it’s an area where you’re likely to see enthusiasts, where people are passionate enough to commit the time needed to participate.
There are also many great community sites, like FlyBase and WormBase, that are thriving. I’m not sure if these would really be considered “Web 2.0” sites, as most of their content is more like a standard database or an online book. If you take a look at the more interactive areas of these sites, like the discussion forums, you see very little traffic. Clearly there are tools that work for these communities on these sites, and tools that haven’t caught on.
Are blogs useful tools? The answer is yes, they do serve a purpose, but you must understand what that purpose is. Like the other tools I’ve mentioned today, blogging (at least blogging well) takes a lot of time and effort. If you’re going to commit that time and effort, you should be clear on what you’re getting out of it, and who your audience really is.
As I said at the beginning of this talk, I’ve had a hard time finding any scientists who regularly read science blogs. Science blogs can be a good tool for a student, a way to improve their writing and communication skills, but it’s unlikely they’re going to come to great scientific prominence from these blogs. There also seem to be some partially justified fears that blogging is more harmful to a career than helpful. A key quote from Sean Carroll (the blogging physicist, not the evolutionary biologist):
“A blog raises your profile, but it raises your profile for something other than research,” Carroll said. And even if you are extremely productive as a scholar, he said, some professors may view a blog as a sign that you could be spending more time in the laboratory or library, engaged in traditional research.
So who reads science blogs?
1) Other science bloggers
Go read any science blog and take a look at the comments people leave. Nearly every single one contains a link to the commenter’s own blog. There’s a relatively small, circular group here, of scientists who are interested in blogging, who write blogs, who read blogs and who leave comments on other people’s blogs. These are the enthusiasts, not the mainstream. While they may be harbingers of future developments, right now they’re not getting the attention they think they are.
2) Non-scientists / Non-specialists
Probably the majority of people who read science blogs fall into this category. The quote used on the slide is again from Sean Carroll. As he notes, scientists already have very efficient methods of communicating their work with one another (publishing papers, giving talks, e-mail). Blogs, for him, are a great way to reach people outside of your field, and, as he puts it, “We don’t have a lot of goals other than us having fun.”
The next two sets of readers are the important ones for publishers to consider.
3) Journalists
Science journalists are clearly reading blogs and using them as fodder for story ideas. Blogs serve as a great place for a scientist to translate an important finding into clear language so journalists can pick up on the significance of the discovery. Blogs can be more effective than press releases, as they can cover a subject in much more detail and over a longer period of time. An editorial blog highlighting the papers your journal is publishing is a great way to get recognition for your authors.
4) Search Engines
You want people to find the material you’re publishing. Blogs help you out here, as they create more opportunities for your published content to come up in a web search. By blogging about the published material, you create new search strings, new ways for searchers to find your articles. By adding links to the content, you help increase the position of your material in search engine listings. That’s one of the main reasons we started the Bench Marks blog, really thinking of it more as a marketing endeavor than anything else. It helps raise awareness about the material we’re publishing, drives traffic to our journal site and helps our search rankings.
Following along with that point, it’s always important to have a clear picture of why you’re creating a website, and this is even more important for Web 2.0. You shouldn’t feel pressured to jump into this world just because you keep reading articles about it, or it seems like everyone else is doing it. It’s very easy to spend a lot of money and a lot of time and never see any return. There’s not much apparent money being made on Web 2.0 for science at this point. It’s important to know why you’re building a tool from the get-go. Have a real business model, don’t just build the site and assume you’ll later find a way to monetize it. A few quick examples are shown here. We originally envisioned CSH Protocols as a fairly different entity than it has become. It’s less the free-wheeling database and community center we expected, and is morphing more and more into a monthly journal, which is good in that readers are familiar with the paradigm, and it has a clear and proven business model behind it. It’s bad in that we’ve had to spend time and effort re-working and reverse engineering the site over time as our model changed. Other models you see in use include the idea of getting users to create content, then monetizing that content by selling advertisements, basically the modus operandi of Google and the Nature Network. SciLink’s business model is more about collecting data and doing analysis for companies. They can provide information to headhunters, and to HR departments regarding trends in hiring, both internal and external to companies. And lastly there are sites like this blog, which are not revenue-generating products, but instead serve a marketing purpose.
In summary, here are the key points, ideas to keep in mind when trying to design new online tools for scientists. As I said at the beginning, the potential is great but it has yet to be realized. I don’t expect the sites that finally break through to mainstream usage in the biology community to be mere copies of things that have worked elsewhere. I expect that they’ll be specifically tailored to fit the needs of the community. Hopefully, by following some of the directions pointed out here, you’ll come up with the exciting new tools I’m seeking. And I’ll just end this with an illustration from the always-entertaining Adam Koford, an uplifting “unicorn chaser” to wash away all my negative talk of things that aren’t working, and hopefully to inspire us all towards a shinier future.
February 18, 2008 at 4:53 pm
David, you have a great chance to become the Nicholas Carr of web 2.0 in science, with the huge traffic that involves! 🙂 As you noticed, scientists are “overscheduled and overworked” and that’s true: I am experimentally overwhelmed right now (just after a DNA isolation and just before a PCR) and I didn’t have too much time to allocate to your article (I haven’t read it thoroughly), so I advise you to be a little shorter next time (for instance, you can use Slideshare when posting a slideshow) if you want to catch the full attention of the experimental scientists you’re aiming at.
I feel that the main pitfall of your analysis is that you’re talking about old lab techniques and new web lab techniques just in competitive terms (Maniatis vs. website) while forgetting about their complementary or corrective characteristics.
Let me tell you a nice complementary example:
A couple of weeks ago (January 29), when I wrote the post Can you tell a good article from a bad based on the abstract and the title alone?, I had only 10 minutes for figuring out a post between 2 experiments in the lab during lunchtime. I discovered a very weird abstract in my Google Reader (the PubMed feed used was ‘mitochondrial apoptosis’; PubMed and feeds, not PubMed vs. feeds) and instantly wrote about it. 2 weeks later the paper was retracted, and the credit goes to the science blogosphere, especially to PZ Myers’ commenters, who deconstructed the paper to its bare bones. This was a nice case of the web’s collective intelligence applied to a peer reviewed paper. (A couple of smart reviewers vs. hundreds of not-that-smart scientist-commenters.)
http://pimm.wordpress.com/2008/01/29/can-you-tell-a-good-article-from-a-bad-article-based-on-the-abstract-and-the-title/
http://scienceblogs.com/pharyngula/2008/02/a_baffling_failure_of_peer_rev.php
http://blogs.nature.com/news/thegreatbeyond/2008/02/peer_review_a_mighty_creator_a.html
I disagree that “blogging is more harmful to a career than helpful”.
Last April I started an unofficial competition for the best laboratory web site. Within 3-4 weeks this idea was featured in Naturejobs (the printed version) and was taken up by The Scientist, thereby helping to create a more efficient lab website culture:
http://pimm.wordpress.com/2007/10/09/the-laboratory-website-and-video-awards-by-the-scientist/
http://www.nature.com/naturejobs/2007/070517/full/nj7142-347a.html
http://www.the-scientist.com/winners/index/
Now, back to the bench; sorry for being too long.
P.S. Since 2005-2006 I don’t like to use the term web 2.0 because it seems just too empty, so I don’t consider myself a big web 2.0 crusader at all.
February 18, 2008 at 8:34 pm
I do agree with some of what you say, e.g. that what works well for the consumer web does not necessarily transfer to the scientific web, and the need to bring relevant content and information to scientists, but I have to disagree with a big chunk of your post.
Let’s not even talk about the lack of time, because that’s an irrelevant argument (scientists are not any busier than people in other fields). The question then remains: why don’t scientists adopt technologies on the programmable web (which is a lot more than social networking)?
First of all, if I poll a section of random scientists, I will find a decent number, especially in the biosciences, who read blogs. My traffic is not coming from non-scientists, the Nature blogs wouldn’t be half as successful as they are otherwise, and no, not everyone who reads the blogs is a fellow blogger (there are not that many bloggers in my space), although many who leave comments are.
Second, the problem is more fundamental. Just earlier today, there was a discussion on trying to explain to other scientists what the web was all about, since the consensus was that scientists just don’t get the web. Just look at the web services being developed by various groups. Most of them look and behave like they are circa 1997. The lack of participation is also a manifestation of scientists’ obsession with not sharing their knowledge outside the safe confines of a peer reviewed publication or a conference.
Third, yes, science blogging is not in the mainstream … yet. But the pace has certainly picked up. Serious scientific blogging is less than 2 years old, and in that short timescale it has grown quickly. Peter Murray-Rust, Jonathan Eisen, etc., none of whom is a slouch, are shining examples of good scientists who use the web effectively. You have mainstream publications picking up on blogs. You have GenomeWeb tracking blogs, you have companies tracking blogs and other “web 2.0” sites, and if blogs were not important, I don’t think I would be getting a fraction of the PR mails that I get.
The big problem is that scientists, even those who can program well, are generally not web developers. That’s a different skill set, and as the number of web-development-savvy scientists increases (like the MyGrid folks, who really get the web), we will get better sites and people will learn to work with the web the way it was meant to be used. Yes, we have some “me too” sites out there, but in the end that always evens out as certain sites die off from lack of traction, just like in any other market. Rather than talk about “me too” sites, I’d like to ask: where are the mashups (something like EpiSpider) that make our lives better? Where are the APIs that make it easy for us to use the web as a platform? I also disagree with some of your categorizations. Swivel is hardly a social networking site. It’s a site for data sharing and mashups that happens to have social features.
I’d also like to add that scientific websites, especially social types, are difficult to monetize using traditional CPC/CPM advertising, since the volume of traffic will never be sufficient.
It’s still early days. Yes, we need to do a better job by reducing barriers to entry and doing something innovative and original, and not simply replicating consumer or tech-oriented sites, but I’d say we are doing fine given how recently scientists have started leveraging the web.
February 19, 2008 at 9:12 am
Hey Attila, thanks for taking a look. I’m very interested in getting feedback here, finding out if I’m missing anything, looking for new directions, new tools that show promise. I know the “Nicholas Carr” comment was meant in jest, but I want to assure you that’s not my intention. I’m not here to devalue the efforts being made; I’m more interested in practical applications based on them. I work for a relatively small not-for-profit publishing company that’s part of a scientific research center. Unlike the big commercial publishers, we can’t afford to perform grand experiments that are likely to fail. We need to be sure in our efforts and do everything we can to ensure success. A few responses:
Sorry about the length, but when transcribing a twenty minute talk, it turns out to be a lot longer than you’d think. Hopefully by looking at the rest of the blog you can see that I’m not usually this longwinded.
—I feel that the main pitfall of your analysis is that you’re talking about old lab techniques and new web lab techniques just in competitive terms (Maniatis vs. website) while forgetting about the complementary or corrective characteristics of them.—
Not exactly sure what you’re getting at here, but would be interested in hearing more. Many of the mainstream efforts in this arena are indeed meant as replacements for current ways of doing things. The referral sites are all about changing the ways you organize your references, sites offer replacements for standard lab notebooks, etc. But you’re definitely right in that it’s much easier to make a useful tool that’s complementary to what people already do, rather than asking them to completely alter their ingrained behaviors.
—This was a nice case of the web’s collective intelligence applied to a peer reviewed paper.—
That was a great catch, and I followed the saga via your blog.
—I disagree that “blogging is more harmful to a career than helpful”.—
As I said, that’s still an open question. As noted in the linked article, there are scientists who feel their blogs were probably not beneficial in their attempts to gain tenure.
An interesting note here, by the way: your two examples fall very much in line with my analysis of who reads science blogs. In the first, you caught the Proteomics paper, which was read by other bloggers and carried onward by them; in the second, the lab website contest was picked up by science journalists. And I hope you’re right that some effect will be had on laboratory website design, because right now lab sites are pretty awful in general.
—Since 2005-2006 I don’t like to use the term web 2.0 because it seems just too empty so I don’t consider myself a big web 2.0 crusader at all.—
I agree. It’s a vague term and one can read into it whatever one wants. But remember, the talk was directed to a meeting of publishers, it’s a buzzword, something they were quickly able to relate to. Also note that Stanford’s Highwire Press is launching their big new platform late in 2008, called “H2O”. One wonders if that term will seem obsolete by then.
February 19, 2008 at 9:48 am
Hi Deepak,
Thanks for your comments, I’ve been enjoying digging through your blog. The main response I can make to you is that the talk was given at a publishing meeting, and the sorts of websites being developed by major publishers are quite different from the bleeding edge experiments of individual programmers and scientists. Some specific comments:
—Lets not even talk about the lack of time, cause that’s an irrelevant argument (scientists are not any busier than people in other fields)—
Can’t say I agree here. Most (if not all) scientists I know work at least six days a week, the majority are in the lab all seven. The lawyers, the real estate dealers, the doctors, the police officers I know, none of them seem to work a seven day week. I went to graduate school in a place where the prevailing attitude was that if you had a life outside of the lab then you weren’t putting enough time into your experiments (kind of explains why I’m now in publishing). I know a professor who shall remain nameless here (but many of you know of whom I speak) who went into his lab on a Sunday and found no one there. So, he called a locksmith, changed all of the locks on the doors and locked his students and postdocs out of the lab for two weeks because they clearly weren’t dedicated enough and didn’t deserve to work there. How many banks do you know that would do the same?
—First of all, if I poll a section of random scientists, I will find a decent number, especially in the biosciences who read blogs. My traffic is not coming from non-scientists, and the Nature blogs wouldn’t be half as successful as they are, and no not everyone who reads the blog is a fellow blogger (there are not that many bloggers in my space), although many who leave comments are.—
I’m sure there are lots of scientists who regularly read blogs, I just haven’t run into any (at least any biologists–then again, my poll is very informal and hardly statistically accurate). And are they reading blogs in their own field, or are they looking to outside interests, areas where they are the “non-specialists” I mentioned? A question: you call the Nature blogs “successful”; how would you define “success”? I see a lot of activity on that site from Nature’s editors but haven’t found much interesting content beyond that.
—The lack of participation is also a manifestation of scientists obsession with not sharing their knowledge outside the safe confines of a peer reviewed publication or a conference.—
This, to me, is where a lot of these new technologies fail. They’re asking the culture of science to change to fit the new technologies, rather than building new technologies to fit the culture of science. One can instigate change, but it’s going to be a long, slow, gradual process at best. Any tool that requires an instantaneous 180 degree turn in behavior of a well-established field is not going to see any traction.
—Third, yes scienceblogging is not in the mainstream … yet. But the pace has certainly picked up. Serious scientific blogging is less than 2 years old, and in that timescale the pace has really picked up. Peter Murray-Rust, Jonathan Eisen, etc, none of whom is a slouch, are shining examples of good scientists who use the web effectively. You have mainstream publications picking up on blogs. You have genomeweb tracking blogs, you have companies tracking blogs and other “web 2.0″ sites and if blogs were not important, I don’t think I would be getting a fraction of the PR mails that I get.—
I regularly read Eisen’s blog (he’s an author on our recent Evolution textbook) and he is an interesting and entertaining writer. You state that he uses the web “effectively” and I’d be curious to hear more from you on what you mean by that. His blog is the equivalent of an Op-Ed column, a place where he gets to voice his opinions. Is that something every scientist should be doing? If so, who’s going to read all those opinions? Is there a limited number of blogs like this that the market will bear?
—The big problem is that scientists, even those who can program well are generally not web developers. That’s a different skill set and as the number of web development savvy scientists increase (like the MyGrid folks who really get the web), then we will get better sites and people will learn to work with the web the way it was meant to.—
Yes, an excellent point. Perhaps the reason so much of what’s out there is failing is that there’s a fundamental lack of understanding of what is possible. I’d certainly agree that there’s a disconnect between those making a lot of these sites, those expected to use the sites, and those capable of making the really useful sites that could exist.
—Yes, we have some me too sites there, but in the end that always evens out as certain sites die out from lack of traction, just like in any other market. Rather than talk about me too sites, I’d like to ask, where are the mashups (something like EpiSpider) that make our lives better, where are the APIs that make it easy for us to use the web as a platform? I also disagree with some of your categorizations. Swivel is hardly a social networking site. It’s a site for data sharing and mashups that happens to have social features.—
I think the “me too” sites are important, because that’s the strategy a lot of publishers seem to be taking as far as developing new tools. There’s a lot of bandwagon-jumping going on, and as the talk was delivered to publishers, I felt the flaws in that approach needed to be pointed out. And I think you’re right on the money as far as mashups and APIs go. As I noted in my talk, openness is something to be stressed. I put it in terms of lowering the barriers to entry for users, but it does indeed serve other purposes, particularly letting others create new entities with your content.
As far as the categorization of sites, I put all the collaboration-oriented sites under “social networking”, perhaps a misnomer, but I didn’t want to get too pedantic on an introductory slide.
—I’d also like to add that scientific websites, especially social types, are difficult to monetize using traditional CPC/CPM advertising, since the volume of traffic will never be sufficient.—
That also makes for an interesting question as to whether these are really things that publishers should be interested in making in the first place. Recent analyses point to declining traffic for mainstream sites like Myspace and Facebook, and many are stating that these social networking sites are going to move more toward small niche communities. If you’re dealing with such small groups, is there any strategy a publishing company can take so the site at least pays for itself?
—It’s still early days. Yes, we need to do a better job by reducing barriers to entry and doing something innovative and original, and not simply replicating consumer or tech-oriented sites, but I’d say we are doing fine given how recently scientists have started leveraging the web.—
Fair enough, but I’d like to see a little more practical analysis amidst the evangelism and self-congratulation so common in the scientific blogosphere. The issues that we both agree exist aren’t going to fix themselves and I don’t expect the rest of the world to change to fit the technology, something that many seem to be waiting for.
February 20, 2008 at 1:08 pm
Hi David, I’ve been meaning to get around to offering my thoughts on why the uptake of web-based tools is so slow. It turns out that after writing for a couple hours on my dissertation, I’m just not in the mood to write a post, or do anything else online.
I think you’re probably right that uptake is slow because people are too busy to spend time exploring what you can do with the new web stuff. That’s going to change as the people raised on social networking in college get into their careers and carry their web skills with them. It’s not really that surprising that established scientists don’t use these tools, I don’t think. I mean, architects didn’t throw out their drafting tables en masse when CAD programs first appeared, and I know many lawyers who still keep a legal library, despite the online law databases being easier to search and more up to date.
People with established systems of doing things don’t chuck everything for a new system, even if it’s way better, but they do slowly start to incorporate things as the younger generation who started out using the new tools demonstrate the benefits by out-competing the older, more experienced people.
None of this is really an earth-shattering revelation, I don’t think. In fact, the only reason we think it’s taking a “long” time for people to adopt these things is because we’re young and impatient.
So, I agree with you that we need to see more application to go with the evangelism, and I think we will. I use Connotea daily, and I can, at any given time, instantly reference and share the references on a given topic that I think are important. You can’t do that by emailing PDFs around. I can fill in the gaps with papers I missed, because other readers will have a paper that shows up under a tag in their library that I don’t have in mine. The aggregation over time of what I gave my attention to is the value I have added to Connotea, and the dividend it pays is that I get to benefit from the attention investment of all the other users. In practical terms, I find more relevant papers faster and with less effort, freeing up my ever-valuable time to do other things. If you want something to combat information overload, it should be your go-to tool. Maybe it takes a little time for your investment of attention to start to pay off, but that’s true of any investment, isn’t it? It’s paid off richly, in my case. When it comes time to assemble a reference list, it’s a couple of clicks to bounce the set of papers I want through Zotero and my bibliography is made for me, and I can add, re-shuffle, and subtract references from the list with minimal effort or tedious messing around in MS Word. This means I can turn out papers/reviews/grant proposals faster than my non-savvy colleagues, and really, that’s the only thing that will lead to greater adoption of these tools.
So I think we’re in agreement mostly, that application needs to be more evident. I do have to take exception with your description of the process for using Connotea, however, and I’m not sure if you’re just being dense or what in your description of the process.
You said:
—1) Go to the computer, go to the Connotea site, sign up for a Connotea account, log-in, add the link to the paper that’s going to be discussed, add tags to the paper, e-mail those links out to members of the journal club. Members receive the e-mail, go to Connotea, create an account, log-in, go to the link they were sent, follow that link to the actual journal’s website, download the pdf, read, discuss.—
What you’re not saying is that most of the steps in 1 are one-time-only steps that don’t need to be repeated every time. As a once-off, your description is right. Connotea is no good at all as a once-off method to share a list of papers, because the main thing that it does is store and aggregate the attention data of its users. In fact, aggregating and sharing attention is kinda the whole point of Web 2.0, isn’t it? You need to read Jon Udell on this.
Once you have an account, and cookies stored on your computer, all you have to do is click on a bookmark and you’re there. It’s the same bookmark every time, too, so no messing around trying to find an email containing the link or file. Furthermore, it’s easier for multiple people to contribute to and maintain the list of interesting papers on an ongoing basis, because everyone is looking at the same list. If you get to emailing PDFs around, not only do you soon bump up against your email account quota, but the papers are spread out among various emails and there’s nothing connecting them, nor is there a way to comment on the papers in a way that remains attached to the list.
So it’s a little disingenuous to compare the two processes on a once-off basis. The point is that by working through the web browser, everyone has a common interface, and everything stays in one place, all nice and organized and in context. This is far more usable, on an ongoing basis, than emailing PDFs around. Journal clubs are run on an ongoing basis, so it just makes sense.
As I said earlier, established labs have their way of doing things, and I don’t expect them to change. I expect people who can effectively leverage Web 2.0-type information filtering to out-compete the established labs. An evolutionary process will select what works and discard what doesn’t, we just have to give it time to work and material to work on. I’m actually OK with these tools not going mainstream, because the self-selected crowd using these services with me now is the early-adopter crowd, which tends to be a little smarter than the average.
In summary, you’ve got some good points, but you’ve been a little too quick to judge everything that falls under the umbrella of web 2.0 or social networking as a whole. How about giving these things a whirl for a couple months or so and then report back on which you found to be worth the initial start-up investment of time and attention and which weren’t? You just might find a new secret weapon.
February 20, 2008 at 2:11 pm
Hey William,
Good to hear from you, I value your input on this subject, as you’re someone “in the trenches” as it were, while I’m coming at it from the perspective of an editor and a publisher. Hope the dissertation is going well. Some thoughts and further questions on your comments:
—That’s going to change as the people raised on social networking in college get into their careers and carry their web skills with them.—
This is a common meme I’ve seen a lot in my research. As the internet generation takes the reins of power, the culture will change. There’s certainly some validity to the argument, but it doesn’t help pay the bills for any site being built now. What’s going to happen to sites like Connotea, 2Collab, and the whole Nature Network if the process is so slow? Are publishers going to pull the plug on keeping these expensive and time-consuming projects going if there’s no mainstream buy-in? This argues for much of the development and support of the sites to move out of the hands of the commercial entities, which is both good and bad. You won’t be inundated by ads, or have your information sold off to advertisers, but at the same time funding is always going to be an issue, as is reaching the mainstream without the marketing force behind a given site.
Off on a tangent, but I’ve always been a bit puzzled that most of the evangelists for these new online technologies are both 1) strong open access advocates and at the same time 2) so willing to get in bed with the big for-profit publishing conglomerates like Elsevier, Nature, Seed and others when those are the companies building the online networks. Go figure.
Another thought on the same subject: participation in social networks seems to be slowing, and the time users spend on such sites is in decline (see here and here). Will this current generation continue to use these communication resources as they age, or will they abandon them once out of school for a few years? Will they force the culture to adapt to them, or will they adapt to the culture?
Your use of Connotea interests me quite a bit. If you’re just starting out, and haven’t amassed decades worth of references, it seems like a great practice to get into, immediately noting and tagging each article you read as you read it. It’s not as great a solution for an established researcher, as the tagging of thousands of papers is a daunting, if not mind-numbing task (I’ve put about 60 papers up there and found it to be tedious enough for that small of a number).
Can you tell me what you think sites like this offer you that you couldn’t get from programs hosted on your own computer like Papers, Yep, or even EndNote (or some combination of the above)? I get the idea that you can reach your reference list from any location with internet access. This strikes me as both a strength and a weakness in some ways (particularly if the papers in your list are behind a subscription system). Is it better to assume ubiquitous internet access and open access, or is it better to assume that portable devices will continue to improve enough that we’ll all just carry our libraries of papers around in our iPods?
I do agree with you that sites like this have amazing potential as discovery tools, seeing what others tag can lead you to papers you’d never have found on your own. Although that kind of feeds into something I mentioned in my talk, those who you’d most likely want to use for discovery are the least likely to participate in any such site.
Sorry if my description of the journal club process was harsh; it is indeed more accurate for a first-time setup than for continuous use. Then again, getting that first time accomplished is the big step, overcoming inertia. Without that, the continuous use never happens. I’ve never run into e-mail quotas at the institutions where I’ve done research; perhaps your university has different policies. But isn’t a good system of organizing and searching your e-mail (or the files on your hard drive) just as effective, if not more so? Isn’t that a practice one should be striving for regardless of one’s use of referral sites? Can you tell me more about the advantages you see in the common interface you describe? Why is it more usable?
—As I said earlier, established labs have their way of doing things, and I don’t expect them to change. I expect people who can effectively leverage Web 2.0-type information filtering to out-compete the established labs. An evolutionary process will select what works and discard what doesn’t, we just have to give it time to work and material to work on. I’m actually OK with these tools not going mainstream, because the self-selected crowd using these services with me now is the early-adopter crowd, which tends to be a little smarter than the average.—
We’ll have to see. There’s a lot more to being a scientist than just sorting through available information. Some of the best scientists I know are total slobs, unorganized but brilliantly creative. I do think these tools will eventually filter through to the mainstream gradually, but as I’ve said, probably not in the form they currently hold. And there are lots of early-adopter crowds who have been flat-out wrong, just ask anyone who bought an HD-DVD player.
—How about giving these things a whirl for a couple months or so and then report back on which you found to be worth the initial start-up investment of time and attention and which weren’t?—
That’s kind of what I’ve been doing for the last 6 months or so. I have accounts on Connotea, CiteULike, SciJobs, Nature Network and LinkedIn, I’ve put my blog into Postgenomic and Scintilla, and I check in with around 50 science blogs in my RSS reader every day. My experiences may not be fully relevant, as my needs as an editor are different from the needs of a PI or a graduate student. But if you have further suggestions for sites I should be investigating, please send them along. I’d love to be able to justify building more and more useful tools into my journal (and all of CSHL Press’ publications).
February 20, 2008 at 3:59 pm
David, everybody is at a different place in their career, has different needs, and will find different things useful. I’ve invested enough time into Connotea that it’s now producing a return on my investment, but I don’t know if someone else with different needs would have found the same value. I like Connotea for discovery, and though the input might be from younger scientists, it’s still better than nothing. Yeah, Vista now supports tagging, but nothing beats the ease of sharing something that’s already online. Like I’ve said before, the main value of the web services vs. offline programs is in leveraging the effort of all the other users of the service to help you organize, find, and share things. Organizing is easier than in a standalone app because you can see the tags others used for the same item. Finding is easier as a result of being more organized, and sharing is obviously simpler. You’ll just have to play with it for a while and hopefully it will become more and more useful to you. If it doesn’t, try something else. I don’t know what else to say in that regard.
One question:
Do you think the online protocol database would have seen more use if it were user-editable?
If each protocol had user comments such as “If you’re working with protein X, you have to double the antibody and halve the incubation time because it’s very unstable,” that would have added value to the online version, making it worth checking instead of just pulling the copy off the shelf.
Of course, there’s no guarantee that you’d have gotten enough comments, but since the thing was online anyway, you might as well take the input you do get, and it makes errata easier to handle too. One thing that’s critical to the user experience of online databases is search. If you can’t deliver the expected results quickly, and in a fashion that’s easy to browse through, you’re totally dead in the water. I think that’s mostly why Flickr has been so successful: it’s easy to browse around on and find interesting things, and easy to contribute to. As always, the devil is in the details, isn’t it?
February 20, 2008 at 4:19 pm
Thanks William, I appreciate your taking the time to share your thoughts. Are you at all worried about online sites like Connotea going under and disappearing? I liken the worry to what we saw with digital music: Sony ran their Sony Connect store, and when it went under, the DRM went defunct and the music their customers had purchased became useless. What happens to all your hard work compiling tags and groups and such if the site ceases to exist? I guess one would hope that you’d get adequate warning and would be able to export your data. At that point one would need another service to exist, and it would need to be able to import the data in a usable manner (again, this stresses the need for common file formats, something I tried to impress upon the audience at my talk). Perhaps I’m just a hoarder in some ways. I want my important files, and the hard work I’ve done, to be under my control, not subject to the terms of someone else’s EULA, which allows for huge changes in the service at any moment with no recourse.
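One of the things a common format buys you is that an export stays trivially machine-readable. Something like the RIS format is just tagged plain text, and a rival (or successor) service could parse it in a dozen lines. A minimal sketch in Python, with an invented record:

```python
# Minimal parser for RIS-style export: lines look like "TG  - value",
# and records end at the "ER" tag. The sample record below is invented.
sample_export = """\
TY  - JOUR
TI  - An invented article title
AU  - Doe, Jane
PY  - 2008
ER  -
"""

def parse_ris(text):
    records, current = [], {}
    for line in text.splitlines():
        if len(line) < 5 or line[2:5] != "  -":
            continue  # skip blank or malformed lines
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":          # end of record
            records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records

print(parse_ris(sample_export))
# [{'TY': ['JOUR'], 'TI': ['An invented article title'],
#   'AU': ['Doe, Jane'], 'PY': ['2008']}]
```

If a site will only give you your data back in some proprietary blob, that’s exactly the lock-in to worry about.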
—Do you think the online protocol database would have seen more use if it were user-editable?—
Both the defunct Molecular Cloning site and the currently available CSH Protocols have the sort of features you’re talking about here. Each protocol has its own forum for leaving suggestions, asking questions, correcting any mistakes, etc. We (just like Nature Protocols and Nature Methods from what I can tell) don’t see very much usage of these features. As noted in my talk, we’ve received about 5 comments in 1.5 years on over 1,000 articles. I did discuss the reasons why I think this is the case (lack of time and no incentive).
There are plenty of wiki-based protocols sites available on the web (Open WetWare being the best known one). Personally, I still think there’s a place for vetted, reviewed and tested material from a trustworthy source when you’re doing experiments that consume important time and expensive reagents. The success we’ve had with CSH Protocols and our strong subscriber base and high usage are bearing this out.
I agree that search is very important. We’ve tried to build in a taxonomy, a hierarchical set of categories that allows for browsing. These categories serve as tags on each article, and from each article you can explore the listed categories further. We also offer the standard advanced search capabilities HighWire provides for all its hosted journals, and we’re working on some new features for exposing readers to related articles that I can’t go into in detail yet. I’d love to experiment with user-generated tags; any suggestions are always appreciated.
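As a rough sketch of how categories double as tags (the categories, protocols, and code here are hypothetical, not our actual taxonomy or implementation), browsing a category just means gathering everything filed under it or any of its descendants:

```python
# Hypothetical hierarchical taxonomy: each category lists its children,
# each article is tagged with one or more categories, and browsing a
# category collects articles from the whole subtree beneath it.
taxonomy = {
    "Cell Biology": ["Imaging", "Protein Methods"],
    "Imaging": ["Fluorescence Microscopy"],
    "Protein Methods": [],
    "Fluorescence Microscopy": [],
}

articles = {
    "Protocol A": ["Fluorescence Microscopy"],
    "Protocol B": ["Protein Methods"],
    "Protocol C": ["Imaging"],
}

def browse(category):
    """All articles filed under `category` or any of its descendants."""
    subtree, queue = set(), [category]
    while queue:
        cat = queue.pop()
        subtree.add(cat)
        queue.extend(taxonomy.get(cat, []))
    return sorted(title for title, cats in articles.items()
                  if subtree.intersection(cats))

print(browse("Imaging"))        # ['Protocol A', 'Protocol C']
print(browse("Cell Biology"))   # all three protocols
```

User-generated tags would simply add a second, flat vocabulary on top of the same lookup.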
February 20, 2008 at 4:35 pm
[…] Crotty from CSHL, who I’ve corresponded with before, has again published an obituary for Science Web 2.0. I think this is premature but typical of how the media cycle works, especially on the web where […]
February 20, 2008 at 5:01 pm
You can export a Connotea library, including tags, to Zotero, so I’m not too worried about losing everything. I think the main reason I feel as comfortable with it as I do is that I’ve been watching Nature’s 2.0 push for a while now, and I’ve learned a little bit about the people behind it; when they brought on Alf Eaton, it was pretty clear to me that at least someone there gets it. They’re doing all the right things, and I think the fact that it’s not succeeding as dramatically as we’ve come to expect for web apps just means that we’ve developed unrealistic expectations. Some things won’t be around for the long haul: CiteULike might not be, YourSci might not be, but I definitely think Connotea will be.
Absent a brilliantly conceived and executed marketing effort, you’re probably just going to have to wait a while until you see much wider adoption of your online tools. It’s a weird time right now. Certainly some properties, like del.icio.us and Flickr, are doing fantastically; they were a little bit lucky, a little bit properly connected, but they also have a great product.
I think it’ll be worth it. It’ll be worth it for users because they’ll get more than they’re currently paying for and because it’s the kind of information they’ll have come to expect, and it’ll be worth it to you, because a good group of users can make a database richer than you could from the editorial side.
That’s more evangelism, of course, but only you can decide if you’re seeing results. I know I am.
March 5, 2008 at 12:58 pm
I think that the success of Web 2.0 and “social” networks depends on the quality of the individual participants. A blog by Michael Crichton or Richard Dawkins is going to be better used than one by…er…me for example.
But the reason for that is those people have a public verification system in the Media 0.1 world. They have books, papers, maybe movies that verify that yeah, they are really smart guys.
So, how about someone who’s intelligent but hasn’t published the latest thriller about chaos theory creating dinosaurs that take over the world? This is handled by agoric systems that provide verification, and maybe enhancement, to a person’s “Quality of Service”.
What we need are entirely web-based systems that can “make” an individual have greater verifiable credibility. That is, we need to level the playing field in the Internet world for individuals, so Al Gore only gets to be listened to if he jumps through the same number of Web 2.0 hoops as…say, John Bailo does, in the network of science, politics and other social networks.
March 7, 2008 at 2:05 pm
Interesting thoughts, John. I assume what you’re proposing is something like a Slashdot karma system, or Cory Doctorow’s imaginary “whuffie” currency. It still strikes me as a variation on the chicken-or-the-egg question, though. How do you build up a reputation for quality work and trustworthiness? Does being taken seriously as a scientist require actual scientific achievement beyond writing a good blog? And aren’t those who are most trustworthy and interesting the least likely to have the time necessary to participate?
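For anyone who hasn’t run across these systems, the core mechanic is easy to sketch (a toy model in Python, not Slashdot’s actual algorithm), and it makes the bootstrapping problem visible: a vouch only carries weight if the voucher has already earned some.

```python
# Toy karma model: an upvote transfers credibility in proportion to the
# voter's own standing. Names and numbers are invented for illustration.
karma = {"alice": 50.0, "bob": 5.0, "newcomer": 1.0}

def upvote(voter, author, weight=0.1):
    """The author gains a fraction of the voter's karma."""
    karma[author] += weight * karma[voter]

upvote("alice", "newcomer")   # a trusted user's vouch counts for a lot
upvote("bob", "newcomer")     # a near-unknown's vouch counts for little
print(karma["newcomer"])      # 6.5
```

Until you’ve accumulated standing by some other route, your participation barely registers, which is the chicken-or-the-egg problem in a nutshell.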
March 9, 2008 at 9:57 pm
David, this is an excellent analysis, thank you. The question “What is the incentive?” is the main one, I think. To become more relevant to scientists, all this Web 2.0 activity has to get integrated into their current workflow. In other words, it has to help a scientist to:
– find a job
– get a grant
– make their experiments easier
March 10, 2008 at 4:56 pm
It is a tough question, Moshe. How do you incentivize these activities? It’s particularly tough for tools that are deliberately anonymous, like wikis.
March 10, 2008 at 10:55 pm
Sorry about losing track of this thread. I don’t know about the busy part. Most techies I know never stop working and rarely sleep, and they are probably the biggest bloggers/blog consumers out there, so I am still not convinced by the busy argument.
I guess some of my viewpoints come from my frustration with science and scientists. For a community that should be pushing boundaries, we are exceedingly resistant to any kind of change.
As for Nature, etc., a clear indication is the traffic. Without going into numbers, there is no way so many people are visiting those blogs without getting something out of them.
As far as Eisen is concerned, his blog is part of his brand, and that’s a very effective use of a blog: as an extension of your voice. Why should anyone be limited by the confines of traditional communication?
I am happy to be an evangelist. Today, I derive a ton of value from all the RSS feeds and other web resources available to me. They make me more effective at my job.
Switching gears, I think Moshe’s point #3 is the key one: how can Web 2.0 help scientists be more productive? If we can convey that message to scientists beyond the early-adopter crowd, then we are in good shape. Just look at how librarians are adopting the web; it helps them be more effective, and that’s the case we need to make. Too often we get stuck in the mechanisms and technologies and not the message.
Anyway, too many scattered thoughts there. Hopefully they made some sense.
March 11, 2008 at 3:30 am
[…] some lashings of your favourite Science and you’ve got Science 2.0. See also Science -2.0 and Why Web.20 is failing biology for where Science 2.0 might […]
March 11, 2008 at 5:15 am
Interesting article, which captures most of the pros and cons of all the Web 2.0 hype. However, when it comes to social sharing of in silico experiments online, conventional peer-reviewed journals are frequently not good enough; try repeating any in silico experiment published in one. I hope this is somewhere myExperiment (which you mentioned) can make a contribution. Web 2.0 (or Science 2.0 if you prefer) is not just about sharing publications; it should also be about sharing experiments online, something you didn’t mention. But I enjoyed it anyway; thanks for an entertaining post.
March 11, 2008 at 6:46 am
Hi Deepak,
It’s something I’ve wondered about as well–where do computer scientists get all that time to blather on Slashdot (and everywhere else)? As I noted in the original post, there’s a strong sentiment in a lot of institutions that if you have a life outside of science, then you’re not a good scientist as clearly you’re not devoted enough to your work. Trust me, I’ve worked in those departments.
Nature–what is the traffic? Have they ever released the numbers? They seem to have a very strong contingent based in India (well, that and strong participation from their own editors). It’s interesting to me in kind of an Orkut catching on in Brazil sort of way.
Eisen–true, but he’s a very good writer (he recently co-wrote an Evolution textbook for us). The question is, how much room is there for such writing? It’s nothing new, really; Sydney Brenner used to write a column in the back of TIBS (I think it was TIBS) that wasn’t all that different from blogging. But how many of those columns do you want to read every week, every day? How much time do you have to read them? Are you really going to read blogs from obscure graduate students? Is that a valuable use of their time?
Nothing wrong with evangelism, we need it if these technologies are going to break through into the mainstream. The only problem is making sure one doesn’t lose sight of the forest for the trees. If you’re doing something, and everyone you’re in contact with is doing that same thing, you start to feel like the whole world thinks that way.
Moshe’s “point #3” is the key here, really the main point of my talk: we need tools that increase productivity, not tools that demand massive time and effort commitments. My argument is that the current tools take more time than they save, and that we, as publishers and developers, should focus on creating tools that increase efficiency rather than just copying things that have been popular in other arenas (MySpace).
March 11, 2008 at 10:13 am
David, a very worthwhile piece you wrote; it was referred to me by my colleague Marco Roos. I agree with many of the pitfalls you mention, though I feel the time-related ones are the weakest, as they will evaporate once you fulfill the other criterion of saving a lot of time. I believe that anyone who thinks they can create anything Web 2.0- or 3.0-ish for scientists without that element will just fail.
We will soon publish about WikiProfessional, and we feel that we have addressed many of your points (not all yet, but soon we will). We do not start from scratch but have filled the system with many authoritative databases. Users can comment on the authoritative view and create a copy on the fly that can be viewed alongside the original. The name of the editor will be visible, and authoritative sources, among them some of the ones you positively mention, will take up the comments for annotation. We call this community annotation, as opposed to distributed annotation, which does not fly, in my opinion mostly because of the lack of “micro-credits”. We need to break through the hegemony of the Impact Factor to make this happen.
Next, we created the Concept Web of over a million concepts, and thereby began to take out the vast majority of the redundancy in written papers. The time saving comes when you have to explore a new gene that turns up in a microarray and can see all its connected concepts at a glance. Last but not least, making changes in the Concept Web (via the wiki) may bring crucial new insights to your own research, as the Concept Web will change on a daily basis, driven by publications as well as WikiProfessional comments. I will send you a PDF of the paper as soon as it is out.
March 14, 2008 at 12:35 pm
[…] seem to have been doing a good job of riding the cultural zeitgeist with a recent blog posting, as there have been several writers who have voiced their opinions on their blogs on the subject of […]
March 14, 2008 at 2:51 pm
David
I think by and large most of us are in agreement that Web 2.0 for Web 2.0’s sake is not going to work. In the biopharma industry (my world), the uptake is mostly as consumers (a lot of people read blogs, especially pharma blogs), with very little production. That in itself is not a bad thing.
Where we are losing out is participation. Obviously, some of us believe that there is value in these tools (I cannot imagine a world without some of them), and I doubt too many of us use tools that reduce productivity. The time commitment is there for social tools, but some of the scientific tools out there do add to our productivity, although they could be better, and some are missing entirely. Every software development team worth its salt that I know of uses wikis. What’s preventing scientists from using collaborative tools? It’s a cultural issue.
Incentive is important, but without the cultural shift, that’s not going to amount to much.
March 19, 2008 at 2:54 pm
You wrote:
…
To be fair, widespread adoption of OpenID will fix a lot of this problem and reduce this aspect of inertia.
It’s also astounding the number of people who ought to know better but don’t use reference management software.
March 28, 2008 at 2:33 pm
[…] long post at the Bench Marks blog examines the question of Why Web 2.0 is failing in Biology. It would make this too crazy long to engage with everything there today, but I do want to pull out […]
April 1, 2008 at 3:42 am
[…] work of social scholarship take a lot of time—time that many busy academics don’t have. In a blog post on why Web 2.0 hasn’t been adopted in the biosciences, David Crotty, executive editor of the […]
April 2, 2008 at 2:08 pm
[…] Why Web 2.0 is failing in Biology [Via Bench Marks] [More] […]
April 3, 2008 at 6:15 pm
[…] friend Sébastien Paquet got me upset. He sent me a link to a post by David Crotty. What David says is that Wikipedia and blogging, the whole Web 2.0 fad, […]
April 4, 2008 at 3:04 am
[…] Soiland’s Fluffy Web 2.0 demo and an article by from David Crotty at Cold Spring Harbor: Why Web 2.0 is failing in biology. If there are any important science-oriented “social software” sites I missed, please […]
April 6, 2008 at 10:50 am
[…] Society for Developmental Biology Meeting. It’s an updated version of an earlier talk posted here. It’s kind of a 180 degrees turn from the previous talk, in that the first one was delivered to […]
April 25, 2008 at 7:39 pm
[…] A pessimist’s view of why scientists do not participate in social networking sites. According to an anonymous postdoc: “I can barely keep up with the literature in my field and with what my labmates are doing. Who has time to spend reading some grad student’s blog?” […]
May 9, 2008 at 5:33 am
[…] which laments biologists’ lack of interest in Web 2.0, an observation based on an in-depth analysis at Bench Marks, the blog of the editor of CSH Protocols. Among the reasons cited: the lack of time, the […]
June 6, 2008 at 2:30 am
[…] and also in contrast to what some commercial blog platforms say). The reasons may be manifold: Bench Marks started the discussion with the observation that Time. Scientists have little time, and rather […]
June 17, 2008 at 3:06 pm
[…] the full discussion at Why Web 2.0 is failing in Biology. Tagged: scientists, usage, web […]
August 12, 2008 at 11:43 am
[…] new web 2.0 technologies (found via Bora’s blog). It’s good to see that many of his conclusions echo my own (here too), that the big problems are a lack of time and incentive. He offers some potential […]
August 22, 2008 at 4:27 pm
[…] sites and on whether the promise of “web 2.0″ is being delivered in science yet (see David Crotty’s post at CSH, Bora’s question, and musings at The Scientist over on NN). Some reasons why so-called […]
April 8, 2009 at 8:16 pm
[…] via Bench Marks » Blog Archive » Why Web 2.0 is failing in Biology. […]
April 14, 2009 at 1:31 pm
[…] given a few talks on why these new tools are failing to catch on — here’s an early one given at a publishers meeting, and a later one given to an audience of scientists. The short answer, if you don’t […]
April 14, 2009 at 2:13 pm
[…] Crotty, David. Why Web 2.0 is failing in Biology. Bench Marks blog, online, Feb. 2008: http://www.cshblogs.org/cshprotocols/2008/02/14/why-web-20-is-failing-in-biology/ […]
April 22, 2009 at 7:21 am
Provocative title about a subject that really hasn’t even had a chance. We’re trying a new approach to these tools on the following site.
http://www.sci-mate.org/wiki/index.php/About_Sci-Mate
Our goal is to attract the naysayers and show them how a positive attitude toward cooperation can make a big difference.
April 22, 2009 at 7:53 am
Definitely a deliberately provocative title, but also a pretty accurate one. You could certainly argue that we’re in the early days, and things will catch on over time, but so far, “failing” is a good description for most efforts. And I don’t think the issue is one of attitude–most scientists I meet with are interested and looking for helpful tools. The problem is that the tools themselves generally fail to serve a useful purpose, require large time/effort investments, and aren’t that much better than what’s already available in the offline world. Better tools that save time and effort are what’s needed, not gung-ho enthusiasm (there’s way too much of that online already).
June 3, 2009 at 1:50 pm
[…] Crotty, David. Why Web 2.0 is failing in Biology. Bench Marks blog, online, Feb. 2008: http://www.cshblogs.org/cshprotocols/2008/02/14/why-web-20-is-failing-in-biology/ […]
October 23, 2009 at 5:42 am
There are tons of examples from the past that show us that some of the greatest papers were not fully appreciated immediately; some of them needed more than 50 years. Publishing in a mainstream journal offers your paper the chance to be judged by history, because it will be there forever!
November 8, 2009 at 6:51 pm
Reluctance to participate in any of the 2.0 sites in part comes from fear of their losing the Darwinian competition, and thereby consigning all one’s contributions and stored data and writing to the Link-Rot Void. At a minimum, portability and mirroring would provide some reassurance, I’d think.
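Even a dumb scheduled export would provide some of that reassurance. A sketch in Python (the export URL is invented; substitute whatever export endpoint a given service actually offers) that could run nightly from cron:

```python
# Sketch of a personal mirror: periodically pull your own data out of a
# web service and keep dated local copies. The URL below is made up.
import datetime
import urllib.request

EXPORT_URL = "https://example.org/user/me/export?format=ris"  # hypothetical

def mirror_library():
    stamp = datetime.date.today().isoformat()
    with urllib.request.urlopen(EXPORT_URL) as response:
        data = response.read()
    with open(f"library-backup-{stamp}.ris", "wb") as out:
        out.write(data)

if __name__ == "__main__":
    mirror_library()  # link-rot can't touch a copy on your own disk
```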
Edit note to somebody: whoever linked their name to Knewco put a comma after a www and killed it. ( http://www.www,wikiprofessional.org/conceptweb ) Why it requires 2 ‘www’s is also a mystery, but it does — one doesn’t work!!
November 8, 2009 at 6:57 pm
Addendum: it was Barend Mons, and the original link is http://www,wikiprofessional.org/conceptweb , which needs both a period and the addition of the second www. to work.
November 8, 2009 at 8:28 pm
Link fixed, thanks for catching that. I think the reluctance to comment goes far beyond worries about link-rot and comments disappearing. See more details in my response to your other comment here.
May 16, 2010 at 4:00 am
[…] David Crotty’s blog […]