Thursday, May 26, 2022

YouTube tutorial on implementing the bias test from Warren et al. 2021

I've just posted a YouTube tutorial on how to implement the bias test from last year's paper "The effects of climate change on Australia’s only endemic Pokémon: Measuring bias in species distribution models".  It's a really neat way to see what sort of methodological biases might exist in a given study design, and it can even tell you where in geographic space to trust the predictions your models make!

Code is here:


library(raster)
library(ENMTools)

present.files <- list.files("~/Dropbox/Ongoing Projects/Brazilian Ants and Bias/wc/",
                            pattern = ".gri", full.names = TRUE)
env.present <- stack(present.files)

future.files <- list.files("~/Dropbox/Ongoing Projects/Brazilian Ants and Bias/wc.future/",
                           pattern = ".gri", full.names = TRUE)
env.future <- stack(future.files)

adlerzi <- enmtools.species()
adlerzi$presence.points <- read.csv("~/Dropbox/Ongoing Projects/Brazilian Ants and Bias/Real Data Analyses/ant_spp/pt_data/Procryptocerus.adlerzi.csv")
adlerzi$species.name <- "Procryptocerus adlerzi"
adlerzi <- check.species(adlerzi)
adlerzi$range <- background.buffer(adlerzi$presence.points, 500000, mask = env.present[[1]])
adlerzi <- check.species(adlerzi)

# Based on that I'm going to go with bio1, bio12, and bio15
env.present <- env.present[[c("bio1", "bio12", "bio15")]]
env.future <- env.future[[c("bio1", "bio12", "bio15")]]

# Different resolutions, so we need to resample
env.present <- resample(env.present, env.future)

adlerzi.gam <- enmtools.gam(adlerzi, env.present, test.prop = 0.2, rts.reps = 100)
adlerzi.future <- predict(adlerzi.gam, env.future)

predicted.change <- adlerzi.future$suitability - adlerzi.gam$suitability

# Project each of the bias-only (RTS) models to both time periods and take
# the difference, giving the change predicted under bias alone
bias.rasters <- lapply(adlerzi.gam$rts.test$rts.models,
                       function(x) predict(env.future, x$model, type = "response") -
                         predict(env.present, x$model, type = "response"))
bias.stack <- stack(bias.rasters)

mean.pred <- calc(bias.stack, fun = mean)
sd.pred <- calc(bias.stack, fun = sd)

# Get upper and lower bounds of 95% CI
pred.95.upper <- calc(bias.stack, fun = function(x) quantile(x, probs = 0.975, na.rm = TRUE))
pred.95.lower <- calc(bias.stack, fun = function(x) quantile(x, probs = 0.025, na.rm = TRUE))

# Find which predictions fall within the CI of expected predictions under bias alone
plot(predicted.change > pred.95.lower & predicted.change < pred.95.upper)

Thursday, June 24, 2021

Problems with background test in ENMTools R package

At some point, I think fairly recently, something got messed up in the background tests of the ENMTools R package and it is building the models for the permutation tests using incorrect presence points.  I think I've found the issue and am working on a fix right now, but if you're using a recent version from CRAN or GitHub to do background tests I would not trust those results until a fix gets released.  I will do my damnedest to get that done this afternoon, Japan time.  Really sorry for any inconvenience this might have caused.

edit: This issue has now been fixed and the fixed version is live on CRAN.  If you've got any studies underway using the background test (similarity test), please reinstall ENMTools and run them again.
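If you just need to pull down the patched release, it's a one-liner (plus a version check to confirm the update took):

```r
# Reinstall ENMTools from CRAN and confirm which version you now have
install.packages("ENMTools")
packageVersion("ENMTools")
```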

Sunday, May 16, 2021

Removing points from suitability plots

This is a question I got via email: how do you remove occurrence points from the suitability plots that are spit out by ENMTools modeling functions?

Luckily this is quite easy, as these are just ggplot objects.


library(ggedit)

# (The model object's name was lost from the original post; "monticola.mx" is a placeholder)
monticola.mx <- enmtools.maxent(iberolacerta.clade$species$monticola,
                                euro.worldclim, test.prop = 0.3)

with.points <- plot(monticola.mx)

without.points <- remove_geom(with.points, "point", 1:2)


For some reason ggedit's remove_geom function gets mad when you pass it two layer numbers (1:2), but it still works.  It doesn't work if you only pass it one number.  Go figure.
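If you'd rather not depend on ggedit at all, you can get the same effect with plain ggplot2 by filtering the plot's layer list directly.  Here's a sketch of the general idea using a toy plot (not an ENMTools object, but the same trick applies since these are all just ggplot objects):

```r
library(ggplot2)

# Toy plot with a point layer and a line layer
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point() + geom_line()

# Keep every layer whose geom is NOT a point geom
p$layers <- Filter(function(l) !inherits(l$geom, "GeomPoint"), p$layers)

# p now draws only the line layer
```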

Thursday, May 6, 2021

Fixing bugs, adding the ability to use bias layers

Version 1.0.4 of ENMTools is now on GitHub.  It fixes some issues with the permutation tests for model fit, along with a few other minor usability problems.  It also adds two important things:

A check.env() function that will sort through your environmental raster stack and check to see that all layers have NAs in the same grid cells.  This has been causing issues for some people because the background selection for various tests and model functions treats the top layer of the environmental raster stack as a mask.  If some of the other layers had NA values in grid cells where the first one didn't, this could cause the package to draw some NAs for data points.  Running your layers through check.env() before using them to construct range rasters or run models/tests is advisable, but you should only have to do it once.  Basically the syntax is:

env <- check.env(env)

Where "env" is your raster stack.  It may take a few minutes if your rasters are huge.
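The idea underneath is simple: a grid cell should either hold data in every layer or be NA in every layer.  Here's a minimal sketch of that logic with the raster package (an illustration of the concept, not the package's actual implementation):

```r
library(raster)

# Two tiny layers that disagree about where the NAs are
bio1 <- raster(matrix(c(1, 2, 3, 4), nrow = 2))
bio12 <- raster(matrix(c(10, NA, 30, 40), nrow = 2))
env <- stack(bio1, bio12)

# Summing across layers is NA wherever ANY layer is NA...
shared <- calc(env, fun = sum)

# ...so masking with that sum NAs out those cells in every layer
env <- mask(env, shared)
```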

The other big change is the addition of bias layers to all modeling functions.  If you have a raster representing sampling bias, you can now pass it into any modeling function and the background points will be drawn in proportion to the value in each grid cell.  For instance:

my.glm <- enmtools.glm(my.species, env, bias = biaslayer)

As always, if you want to use the GitHub version you just run:

devtools::install_github("danlwarren/ENMTools")

Monday, April 5, 2021

Estimating bias in transferring species distribution models

As some of you may have seen, I had a recent paper come out with Alex Dornburg, Teresa Iglesias, and Katerina Zapfe on the effects of climate change on Australia's only endemic Pokémon, kangaskhan.

While the paper is obviously intended to be humorous (seriously, check out Supplement S1 because it is ridiculous), there's actually a pretty cool new method involved here.  We show that a given study design (i.e., sample size, study area, choice of predictor variables, modeling algorithm, and climate scenario) can create massive biases in the sorts of predictions you might make when building and transferring models.  In some cases these can be so strong that the qualitative prediction you make (e.g., range contraction or expansion) is completely unaffected by the data; the data can only affect the magnitude of the predicted change, not the direction of it.

The super cool bit (in my opinion) is that we show that you can make a fairly simple modification to the Raes and ter Steege (2007) test that allows you to estimate how biased a given design is.  This gives you some idea of which general methodological approaches let the data have the most effect on the outcome, and we even show how you can do this in a spatial context to tell you WHERE your model is more driven by bias and where it's more driven by data.  We think this is a super useful new tool that may give stakeholders some quite valuable information when it comes to applying models to make decisions.

I'll set up a video tutorial on how to do this soon, and eventually we'll probably come up with some sort of wrapper function in ENMTools that simplifies the process.  Right now, though, there are worked examples in the Dryad repo for the supplementary code.  That's here:

Warren, Dan; Dornburg, Alex; Zapfe, Katerina; Iglesias, Teresa (2021), Data and code for analysis of effects of climate change on kangaskhan and summary of simulations from Warren et al. 2020, Dryad, Dataset,

The "block" crossvalidation feature is currently broken on CRAN

As part of fixing the recent spatstat-related issues, I somehow managed to roll back some much older changes and as a result broke the block crossvalidation features on the CRAN version of ENMTools.  I'm working on an update now that will fix it, but in the interim if you need that feature please just use the "develop" branch from GitHub.  As before, the code for that is:


devtools::install_github("danlwarren/ENMTools", ref = "develop")

Sunday, April 4, 2021

ENMTools is back on CRAN, minus ppmlasso models

We've finished the changes necessary to come up to date with the changes to the spatstat package, and ENMTools is now back on CRAN.  Unfortunately one of the changes we had to make was to disable ppmlasso models, since they're not yet compatible with the new spatstat.  We don't know how long those are going to be unavailable, but it could be a while.