Update May 12, 2020. I rewrote this code in Fortran.
Update July 30, 2018. I just realized that the recovery events do not need to be put on the heap (stupid me). So I took some time to seriously squeeze the most out of this approach. You can find the code at the Github page below. It's 2-3 times faster now. A BA model network with N = 10,000 and m = 3 takes 0.35 ms / outbreak for β = 0.3.
I needed some reasonably fast C code for SIR on networks. Funnily enough, I only had C code for a discrete-time version, nothing for the standard continuous-time model with exponentially distributed durations of infection. Although there are book chapters written about this that are more readable than this post (and accompanied by Python code), I decided to write my own code from scratch* for fun. This post is about some of my minor discoveries during this coding, with comments on the code itself (which you can get at Github—it is reasonably well commented and well structured, I think, and should be instructive). Probably all these points have been discovered before (if not, they are anyway too little to write a paper about).
*Except some old I/O routines & data structures.
What I remember from Kiss et al.'s book is that there are two main approaches: the Gillespie algorithm (1. Generate the time to the next event. 2. Decide what that event will be. 3. Go to 1.) and event-driven simulation (1. Pick the next event from a priority queue. 2. Put future, consequential events in the queue. 3. Go to 1.). I decided to go for the latter, using a binary heap as the priority queue. The Gillespie algorithm comes with so much book-keeping overhead that I can't imagine it being faster. (Or are there tricks that I can't see?)
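To make the book-keeping point concrete, here is a minimal sketch (my own illustration, not the post's code) of the Gillespie direct method for SIR on a network. Note how, after every single event, the set of susceptible-infected (S-I) edges has to be updated before the next rates can even be computed—this is the overhead that event-driven simulation avoids.

```python
# Minimal Gillespie direct-method SIR on a network (illustrative sketch).
# adj: dict mapping node -> list of neighbors. Recovery rate gamma,
# per-edge infection rate beta. Returns (outbreak size, extinction time).
import random
from collections import defaultdict

S, I, R = 0, 1, 2

def gillespie_sir(adj, source, beta=0.3, gamma=1.0, rng=None):
    rng = rng or random.Random()
    state = defaultdict(int)            # every node susceptible by default
    state[source] = I
    infected = {source}
    si_edges = {(source, j) for j in adj[source] if state[j] == S}
    t, outbreak = 0.0, 1
    while infected:
        rate = beta * len(si_edges) + gamma * len(infected)
        t += rng.expovariate(rate)      # 1. time to the next event
        # 2. decide what the next event will be
        if rng.random() < beta * len(si_edges) / rate:
            _, j = rng.choice(sorted(si_edges))   # infection along an S-I edge
            state[j] = I
            infected.add(j)
            outbreak += 1
            # book-keeping: rebuild the S-I edges around the newly infected j
            for k in adj[j]:
                si_edges.discard((k, j))          # j is no longer susceptible
                if state[k] == S:
                    si_edges.add((j, k))
        else:
            i = rng.choice(sorted(infected))      # recovery of an infected node
            state[i] = R
            infected.discard(i)
            for k in adj[i]:
                si_edges.discard((i, k))
        # 3. go to 1.
    return outbreak, t
```

Even this toy version spends most of its lines on maintaining the S-I edge set rather than on the stochastic process itself.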
Using a binary heap to keep track of the next event turned out to be very neat. It is a binary tree of events (a node either getting infected or recovering) such that every event's parent happens sooner than the event itself, and its children happen later. As opposed to other partially ordered data structures, it packs into indices 1 to N of an array. Extracting and inserting events takes time logarithmic in the size of the heap. There are two functions to restore the partial order of a heap after it has been manipulated: heapify-down, for when a parent event might happen later than its children, and heapify-up, for when a child event might happen sooner than its parent. Heapify-up is faster (briefly, because one child has one parent, but one parent has two children). Anyways, here's the pseudo-code for one outbreak (with a Python accent):
```
initialize all nodes to susceptible
get a random source node i
insert the infection event of i (at time zero) into the heap
while the heap is not empty:
    i := root of heap (the smallest element)
    if i is infected:
        mark i as recovered
        move the last element of the heap to the root
        heapify-down from the root
    else:  # i has to be susceptible here, it can't be in the recovered state
        mark i infected
        add an exponential random number to i's event time
        heapify-down from i
        for susceptible neighbors j of i:
            get a candidate infection time t (exponentially distributed + current time)
            if t is earlier than i's recovery time and j's current infection time:
                if j is not on the heap: add it to the end of the heap
                update the infection time and heapify-up from j
                # since this update can only make j's next event time earlier,
                # we can use heapify-up
```
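The pseudo-code above can be fleshed out into a runnable sketch (plain Python for illustration—the actual code from the post is C and lives on Github). The heap stores node indices ordered by each node's next event time, and a `pos` array tracks where each node sits in the heap, so that heapify-up can be started from a given node after its time is updated:

```python
# Event-driven SIR on a network, following the pseudo-code above.
# adj: list of neighbor lists; recovery rate 1, per-edge infection rate beta.
import random

S, I, R = 0, 1, 2
INF = float("inf")

class NodeHeap:
    def __init__(self, n):
        self.time = [INF] * n   # next event time of each node
        self.heap = [0]         # 1-indexed array, as in the post
        self.pos = [0] * n      # position of each node in heap (0 = absent)

    def swap(self, a, b):
        h = self.heap
        h[a], h[b] = h[b], h[a]
        self.pos[h[a]], self.pos[h[b]] = a, b

    def up(self, k):            # a child event might happen sooner than its parent
        while k > 1 and self.time[self.heap[k]] < self.time[self.heap[k // 2]]:
            self.swap(k, k // 2)
            k //= 2

    def down(self, k):          # a parent event might happen later than its children
        n = len(self.heap) - 1
        while True:
            c = 2 * k
            if c < n and self.time[self.heap[c + 1]] < self.time[self.heap[c]]:
                c += 1          # pick the sooner of the two children
            if c > n or self.time[self.heap[k]] <= self.time[self.heap[c]]:
                break
            self.swap(k, c)
            k = c

    def push(self, i):          # add node i to the end of the heap
        self.heap.append(i)
        self.pos[i] = len(self.heap) - 1

def sir_outbreak(adj, source, beta=0.3, rng=None):
    rng = rng or random.Random()
    n = len(adj)
    state = [S] * n
    h = NodeHeap(n)
    h.time[source] = 0.0
    h.push(source)
    outbreak = 0
    while len(h.heap) > 1:
        i = h.heap[1]                    # root = earliest event
        if state[i] == I:                # i's next event is its recovery
            state[i] = R
            h.pos[i] = 0
            last = h.heap.pop()          # move the last element to the root
            if len(h.heap) > 1:
                h.heap[1] = last
                h.pos[last] = 1
                h.down(1)
        else:                            # i must be susceptible here
            state[i] = I
            outbreak += 1
            now = h.time[i]
            h.time[i] = now + rng.expovariate(1.0)   # recovery time, unit rate
            h.down(h.pos[i])             # i's event time just got later
            for j in adj[i]:
                if state[j] == S:
                    t = now + rng.expovariate(beta)  # candidate infection time
                    if t < h.time[i] and t < h.time[j]:
                        if h.pos[j] == 0:            # j not yet on the heap
                            h.push(j)
                        h.time[j] = t
                        h.up(h.pos[j])   # t can only be earlier: heapify-up
    return outbreak
```

The decrease-key step (updating `time[j]` and heapifying up from `pos[j]`) is why a hand-rolled heap is used here rather than a library priority queue, which typically does not expose element positions.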
I’m too lazy to make a full analysis of the running time. For an infection rate twice the recovery rate, it runs 3000 outbreaks / second on the (16000+ node) Brazilian prostitution data (the naïve projection of the original temporal network to a static graph) on my old iMac.
The greatest joy of coding something like this is, of course, concluding that it matches theory. I will show tests averaged over 10^7 runs on a graph that, although small, has a very complex behavior with respect to the SIR model and thus is a good test case.
(In general, the most useful graphs for testing are complex enough for as many rare situations as possible to occur but small enough for signals from inconsistencies not to be drowned out by the rest of the network. Of course, you should (and I did) test different sizes.) For this graph, the expected outbreak size as a function of the infection rate x (assuming unity recovery rate) is exactly:
The expression for the extinction time is much hairier:
(How to calculate these expressions is the topic of this paper.) Anyway, they did immediately match perfectly (usually, plotting any consistency check like this sends me into another round of debugging):
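The post's test graph and its exact expressions are not reproduced here, but the same kind of consistency check can be illustrated on the simplest possible test graph: two nodes joined by one edge. With unit recovery rate and infection rate x, the seed transmits before recovering with probability x/(x+1), so the expected outbreak size is exactly 1 + x/(x+1). A Monte Carlo estimate should land on top of that:

```python
# Consistency check on a single edge: compare a Monte Carlo estimate of the
# expected outbreak size against the exact value 1 + x/(x+1).
import random

def mc_outbreak_size(x, n_runs=100_000, rng=None):
    rng = rng or random.Random()
    total = 0
    for _ in range(n_runs):
        t_rec = rng.expovariate(1.0)   # seed's recovery time, unit rate
        t_inf = rng.expovariate(x)     # candidate transmission time, rate x
        total += 2 if t_inf < t_rec else 1   # neighbor infected iff t_inf first
    return total / n_runs

x = 2.0
exact = 1 + x / (x + 1)                # = 5/3 for x = 2
est = mc_outbreak_size(x, rng=random.Random(0))
assert abs(est - exact) < 0.01
```

If a check like this fails, the bug is almost always in the event logic, not in the random number generation—hence the value of a test graph small enough that such discrepancies are not drowned out.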
All in all, it was a fun coding project (and quick—about 2 hr of primary coding and 3 hr of debugging . . no profiling or fine-tuning). Now let’s see if I can get some science done with it 😛
Without having thought too deeply about it, it might not be so straightforward to modify this code for the SIS model. One event could change many other events already in the heap, which would make an event-based approach slow. In general, SIS is a much nastier model to study (computationally) than SIR—both because it can run virtually forever, and because the mixing of disease generations rules out many simplifications (e.g., when calculating probabilities of infection trees).
Finally, the title of this post is, of course, a wink to FFTW (Fastest Fourier Transform in the West). I’m not sure this is the fastest code publicly available, but it shouldn’t be too far from it?