Monday, October 16, 2017

Model-based inference in historical and ecological biogeography 2017!

Matthew van Dam and I will be offering a new class in Barcelona with Transmitting Science this year, based in large part on the class that Nick Matzke and I taught last year.  It's called Model-based Inference in Historical and Ecological Biogeography, and if last year was any indication it's going to be a lot of fun.  We'll mostly be focusing on BioGeoBEARS and the new ENMTools R package, and there's a whole lot of very cool stuff going on with both.  Join us if you can!

Saturday, April 29, 2017

Solution to erroneous "please update your maxent program to version 3.3.3b or later" error when running maxent from dismo & ENMTools

I originally encountered this problem when trying to run maxent from ENMTools, but it turns out it was caused by a combination of a problem with the dismo package and my Mac having two different versions of Java installed (1.6 and 1.8).

I am pasting the solution that worked for me below (and will post it to other relevant lists if I find them), since I didn't find a direct answer online.

Thanks - Nick

Thursday, October 13, 2016

Nice tutorial on conducting background tests using the Perl version of ENMTools

Here's a really nice tutorial by Daniel Romero on how to do the background test in the standalone version of ENMTools.  He walks you through setting up the program, running the analysis, and how to avoid some of the errors that might crop up when trying to run the software.

Thanks, Daniel!

Wednesday, September 28, 2016

RWTY post: New vignette for diagnosing MCMC convergence and lack thereof

This was requested by a couple of reviewers on the forthcoming RWTY app note.  It's something we had discussed doing anyway, but it's good that they forced us to sit down and do it, because it's super helpful.  Basically it's a graphical rundown of two different data sets and what you can learn from the absolute legion of plots RWTY produces.  This is very much a first draft, but it's all there.  My hands are aching from typing; it was 5000 words' worth of yammering over the course of about eight hours.

It's also available in the newest version of RWTY on GitHub; you can view it with browseVignettes("rwty").

RWTY: R We There Yet? A package for looking at MCMC chain performance in Bayesian phylogenetics

In case you're wondering why the ENMTools posts and Git commits have slammed to a halt, it's because my other R package (RWTY) just took focus.  We got really good reviewer comments back, but they require a bit of work and have a deadline so for the moment they take precedence.  I've also decided I'm going to start blogging about RWTY here along with ENMTools, because RWTY is cool as can be and I'll be darned if I want to start another blog for it.

You can find RWTY at

It's a collaboration between me, Rob Lanfear, and Anthony Geneva, and I think it's pretty darn special.

Monday, September 12, 2016

Note: Parallelization not working with Maxent models

For the time being, and for the foreseeable future, Maxent models aren't working with multiple cores.  This is due to an incompatibility between R's mclapply function and the rJava package: rJava just straight-up does not work inside mclapply, and as far as I can tell there's no way to make it do so.  As it stands, ENMTools simply sets the number of cores to 1 for any of the tests when "type" is set to "mx".  If anyone knows, or discovers, a workaround for this, please do let me know!
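To make the fallback concrete, here's a minimal sketch of the pattern in base R.  The function and argument names (run_reps, rep_fun, etc.) are illustrative only, not ENMTools' actual internals: the point is just that when the model type is "mx" the replicate runner drops to a single core before handing work to mclapply.

```r
library(parallel)

# Illustrative replicate runner (assumed logic, not ENMTools' exact code).
# rJava-backed models don't survive the forked workers mclapply creates,
# so we force serial execution whenever type is "mx".  mclapply also can't
# fork on Windows, so we drop to one core there as well.
run_reps <- function(rep_fun, nreps, cores = detectCores(), type = "glm") {
  if (type == "mx") {
    cores <- 1  # Maxent runs via rJava; parallel execution would fail
  }
  if (.Platform$OS.type == "windows") {
    cores <- 1  # no forking on Windows
  }
  mclapply(seq_len(nreps), rep_fun, mc.cores = cores)
}

# Non-Maxent types still run on however many cores were requested:
res <- run_reps(function(i) i^2, nreps = 10, cores = 2, type = "glm")
```

The same pattern would apply to any rJava-dependent step: detect it, and quietly fall back to one core rather than crash mid-analysis.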

Thursday, September 8, 2016

Many core functions now parallelized

I've now got parallelized code running for the background and identity tests, as well as the linear and blob rangebreak tests.  The ribbon rangebreak test is going to take longer, because it contains some necessary failure detection code that needs to be wrapped differently than the other tests.

This code is not yet on the main branch on GitHub; it's on the branch named "apply".

As it stands, each of the functions uses all of the cores available on the system by default.  You can decrease that by supplying a "cores = x" argument, where x is however many cores you want it to use.  If you're happy using all of the cores on your system, you can call the functions exactly as before.
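The default-cores behavior described above can be sketched with base R's parallel package.  This is a generic illustration of the pattern, not the ENMTools code itself; one_rep stands in for a single replicate of one of the randomization tests.

```r
library(parallel)

# Default to every available core, as the ENMTools tests now do; a user
# can pass a smaller number instead.  mclapply can't fork on Windows, so
# fall back to one core there.
cores <- if (.Platform$OS.type == "windows") 1 else detectCores()

# Stand-in for one replicate of a randomization test:
one_rep <- function(i) sum(rnorm(1000))

# Run 20 replicates across the chosen number of cores:
reps <- mclapply(1:20, one_rep, mc.cores = cores)
```

Since the replicates of these tests are independent of one another, this kind of embarrassingly parallel split is about the simplest speedup available, which is why the gains scale so directly with core count.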

Obviously the speed differences here are going to depend on how many cores you have on your system.  I've got a 24-core machine I'm working on right now, and going from 1 core to 24 on my test data results in massive speed increases: identity and background tests for 20 reps drop from ~10 minutes to ~1 minute.  Pretty slick!

Anyway, give it a shot if you can and let me know if you run into any issues with it.  Thanks to Nick Huron for reminding me!