As some of you may have seen, I had a recent paper come out with Alex Dornburg, Teresa Iglesias, and Katerina Zapfe on the effects of climate change on Australia's only endemic Pokémon, Kangaskhan.
While the paper is obviously intended to be humorous (seriously, check out Supplement S1 because it is ridiculous), there's actually a pretty cool new method involved here. We show that a given study design (i.e., sample size, study area, choice of predictor variables, modeling algorithm, and climate scenario) can create massive biases in the sorts of predictions you might make when building and transferring models. In some cases these biases can be so strong that the qualitative prediction you make (e.g., range contraction or expansion) is completely unaffected by the data; the data can only affect the magnitude of the predicted change, not the direction of it.
The super cool bit (in my opinion) is that we show that you can make a fairly simple modification to the Raes and ter Steege (2007) test that allows you to estimate how biased a given design is. This gives you some idea of which general methodological approaches let the data have the most effect on the outcome, and we even show how you can do this in a spatial context to tell you WHERE your model is more driven by bias and where it's more driven by data. We think this is a super useful new tool that may give stakeholders some quite valuable information when it comes to applying models to make decisions.
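To make the general idea concrete, here's a minimal toy sketch of what a null-model bias test in this spirit looks like: you re-run the *same* study design on randomly placed occurrence points, and if nearly all of the random replicates predict the same direction of range change, that direction is coming from the design rather than the data. Everything here (the envelope "model", the uniform-warming scenario, all function names) is illustrative and made up for this sketch; the actual analysis and code are in the Dryad repo linked below.

```python
# Toy illustration of a design-bias null test (NOT the paper's code).
# We run one fixed "study design" on random occurrence data many times
# and ask how often it predicts the same qualitative outcome.
import numpy as np

rng = np.random.default_rng(42)

# Toy environment: one predictor over a 100x100 study area, plus a
# "future" layer under an assumed scenario of uniform warming (+0.5).
env_present = rng.normal(size=(100, 100))
env_future = env_present + 0.5

def fit_and_project(points, env_now, env_later):
    """Toy pipeline: a simple climate-envelope model spanning the
    predictor values at the occurrence cells, projected to the
    future layer. Returns percent change in suitable area."""
    vals = env_now[points[:, 0], points[:, 1]]
    lo, hi = vals.min(), vals.max()
    now = np.mean((env_now >= lo) & (env_now <= hi))
    later = np.mean((env_later >= lo) & (env_later <= hi))
    return 100 * (later - now) / now

# Null distribution: rerun the SAME design with random occurrences.
n_points, n_reps = 30, 200
null_changes = np.array([
    fit_and_project(rng.integers(0, 100, size=(n_points, 2)),
                    env_present, env_future)
    for _ in range(n_reps)
])

# If almost every null replicate predicts the same direction of change,
# the design itself, not the data, is driving that qualitative result.
frac_contraction = np.mean(null_changes < 0)
print(f"Null replicates predicting contraction: {frac_contraction:.0%}")
```

The point of the sketch is just the logic of the test: the distribution of predicted changes under random data tells you how much room the design leaves for real data to matter.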
I'll set up a video tutorial on how to do this soon, and eventually we'll probably come up with some sort of wrapper function in ENMTools that simplifies the process. Right now, though, there are worked examples in the Dryad repo for the supplementary code. That's here:
Warren, Dan; Dornburg, Alex; Zapfe, Katerina; Iglesias, Teresa (2021), Data and code for analysis of effects of climate change on kangaskhan and summary of simulations from Warren et al. 2020, Dryad, Dataset, https://doi.org/10.5061/dryad.p8cz8w9px