Tuesday, December 18, 2012

Reviewing the causes of extinction due to climate change

This post is a few weeks late now, but I wanted to discuss a paper I co-led with another student in my department, Abby Cahill, which recently became available online through the journal Proceedings of the Royal Society B. We even got the cover photo for the print edition, which is pretty neat!  You can access the abstract for the article here, and if you want a pdf, just drop me a line.

This article is the result of work a group of us students in E&E at Stony Brook did during, and following, a seminar led by Prof. John Wiens.  We set out to investigate how climate change has caused documented local extinctions.  We employed systematic review methods, trying as best we could to create a replicable review study.  This was the second systematic review that I have participated in.  The other review I worked on was with the Gurevitch Lab, in which we carried out a field synopsis and systematic review of invasive species research.  As the authorship order would indicate, I played a much smaller part in that review than in this one.  However, working with the Gurevitch Lab members during the early development of their review, I gained an appreciation for the usefulness of these review techniques.  That said, after writing our climate change review, and a follow-up we just sent off for review on the causes of range limits, I do wonder whether there are ways to make literature reviews even better (more on that later).

Back to our review - on some level, what we really wanted to know is whether climate change will cause local population extinctions because organisms reach some physiological limit (e.g. it's just too hot) or by some other means (e.g. interruption of some biotic process).  Though my thesis research involves an invasive plant, many of my research interests, and most of my side projects in the Akcakaya Lab, fall squarely in the climate change / conservation biology realm, so this question really resonated with me.  Carrying out the review yielded two major results.  First, among the studies that do identify the proximate cause of climate-change-related extinction, there are many more cases in which disruption of biotic interactions is the cause.  Second, very few studies actually identify a proximate cause of extinction.  This second point surprised me, but when I discussed our findings with more established conservation biologists, few shared my surprise.  In light of this finding, we also tried to outline some ideas for how we might pursue conservation research to address this lack of knowledge.  Hopefully this review will be helpful to young scientists looking for research ideas.

Beyond this particular review and these particular results, carrying out this review and writing the paper got me thinking about just how literature reviews are carried out.  At this point, I would say that I am a proponent of the systematic review approach (for background literature, have a look at Pullin and Stewart's ConsBio article or the Gurevitch Lab paper cited above - which is open-access).  However, it is certainly not without its faults.  For example, to my knowledge, the best way to carry out a repeatable literature search is to use a set of search terms in a database such as ISI's Web of Science.  I've done searches in this particular database, and indeed, the results are replicable.  However, there is a temporal bias in these databases, because many articles published prior to the 1980s do not have digitally indexed titles or abstracts.  Many of my professors have stressed the importance of not ignoring older literature, but how do we get this literature into reviews systematically?  Also, there are many examples of these searches failing to find relevant literature.  When this happens, we face the question of whether to add literature that we know is relevant, thus making the search results less transparent, or to leave it out, which, personally, leaves me with a funny feeling of failing to cover the full 'parameter space' of my potential data, if you will.  I don't have any great ideas about how to address these issues.  At the end of the day, I find some comfort in the idea that a thorough literature search may not return every relevant study ever published, but neither does a transect return information on every plant in a population (or even along that transect, given observer error), and as with transect data, if done correctly we can say we've thoroughly 'sampled' the population.
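To make that transparency trade-off concrete, here's a toy sketch of what I mean by keeping the automated part of a search replicable while still admitting hand-picked studies.  Everything here is hypothetical - the search terms, the records, the field names - but the point is that the keyword screen is deterministic, and any study added outside it is logged separately rather than quietly mixed in.

```python
# Toy sketch of a replicable keyword screen over exported search results
# (e.g., records downloaded from a database search). All terms, titles,
# and field names below are made up for illustration.
SEARCH_TERMS = ["climate change", "local extinction", "range contraction"]

def matches(record, terms=SEARCH_TERMS):
    """True if any search term appears in the record's title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(term in text for term in terms)

records = [
    {"title": "Climate change drives local extinction in lizards", "abstract": "..."},
    {"title": "Soil chemistry of alpine meadows", "abstract": "..."},
]

# The automated, repeatable portion of the review:
included = [r for r in records if matches(r)]

# Known-relevant studies the search missed go in a separate, explicit list,
# so readers can see exactly which papers entered by hand.
manually_added = []
```

Anyone rerunning the same terms against the same database export gets the same `included` list, and the `manually_added` list makes the non-replicable part of the review visible instead of hidden.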

Another concern I have with systematic reviews is that they often entail extracting particular pieces of data from studies for further analysis (e.g. the number of species examined or the documented causes of local extinction).  This may seem pretty straightforward - the information I want has got to be in the paper, I mean, it was published!  But in practice, data extraction can be quite difficult, especially when you are trying to extract from hundreds of papers with several people.  We employed carefully crafted Google Forms, which despite our "carefulness" needed lots of editing once everyone started using them, and ultimately led to us going over our data set several times.  An idea that I've been tossing around is to invite the original study authors to help in this process.  Sure, this will not always be possible.  Some authors will not have the time, some will not be reachable, and of course, there's always the possibility that some will not be happy with the way you intend to include their study.  I have no reason to believe this, but I have this funny feeling that someone is going to read our ProcB paper and say "I most certainly did identify the causes of extinction for this species!".  To that I say, just because the paper has been published doesn't mean the information shouldn't be updated.  Perhaps a review should be an active process?  A wiki of some sort?  The review authors could set up the structure, provide their assessment, and then the cited study authors could be invited to edit/add/remove information about their research - with some editorial control by the review authors, I suppose.  Sounds like a cool project to me.  It also sounds like a lot of time and work.  But perhaps it's another step forward for literature reviews?
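On the "going over the data set several times" point - one small thing that would have helped us is automatically flagging where two extractors disagreed about the same paper, rather than rereading everything.  Here's a toy sketch of that idea; the papers, fields, and values are all invented, and in practice the entries would come from the exported form responses rather than being typed in by hand.

```python
# Toy sketch: flag disagreements between two people's data extractions
# for the same set of papers. All papers, fields, and values below are
# hypothetical, standing in for exported form responses.
coder_a = {
    "Smith 2005": {"n_species": 12, "proximate_cause": "biotic interaction"},
    "Jones 2010": {"n_species": 3,  "proximate_cause": "physiological limit"},
}
coder_b = {
    "Smith 2005": {"n_species": 12, "proximate_cause": "biotic interaction"},
    "Jones 2010": {"n_species": 4,  "proximate_cause": "physiological limit"},
}

def disagreements(a, b):
    """Return {paper: [fields where the two extractions differ]},
    considering only papers both extractors entered."""
    out = {}
    for paper in a.keys() & b.keys():
        fields = [f for f in a[paper] if a[paper][f] != b[paper][f]]
        if fields:
            out[paper] = fields
    return out
```

Instead of everyone re-checking every row, the team only revisits the papers this function returns - here, just the disputed species count for the hypothetical "Jones 2010" entry.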