Coupling of Particle Filters†

†This research is financially supported by the Swedish Foundation for Strategic Research (SSF) via the project ASSEMBLE and the Swedish Research Council (VR) via the projects Learning of complex dynamical systems (contract number: 637-2014-466) and Probabilistic modeling of dynamical systems (contract number: 621-2013-5524).
Abstract
Particle filters provide Monte Carlo approximations of intractable quantities such as pointwise evaluations of the likelihood in state space models. In many scenarios, the interest lies in the comparison of these quantities as some parameter or input varies. To facilitate such comparisons, we introduce and study methods to couple two particle filters in such a way that the correlation between the two underlying particle systems is increased. The motivation stems from the classic variance reduction technique of positively correlating two estimators. The key challenge in constructing such a coupling stems from the discontinuity of the resampling step of the particle filter. As our first contribution, we consider coupled resampling algorithms. Within bootstrap particle filters, they improve the precision of finite-difference estimators of the score vector and boost the performance of particle marginal Metropolis–Hastings algorithms for parameter inference. The second contribution arises from the use of these coupled resampling schemes within conditional particle filters, allowing for unbiased estimators of smoothing functionals. The result is a new smoothing strategy that operates by averaging a number of independent and unbiased estimators, which allows for 1) straightforward parallelization and 2) the construction of accurate error estimates. Neither of the above is possible with existing particle smoothers.
Keywords: common random numbers, couplings, optimal transport, particle filtering, particle smoothing, resampling algorithms
1 Introduction
In the context of nonlinear state space models, particle filters provide efficient approximations of the distribution of a latent process $(x_t)_{t\geq 0}$, given noisy and partial observations $(y_t)_{t\geq 1}$ (Doucet et al., 2001; Cappé et al., 2005; Doucet and Johansen, 2011). We assume that the latent process takes values in $\mathbb{X} = \mathbb{R}^{d_x}$, and that the observations are in $\mathbb{Y} = \mathbb{R}^{d_y}$ for some $d_x, d_y \in \mathbb{N}$. The model specifies an initial distribution $m_0(dx_0\mid\theta)$ and a transition kernel $f(dx_t\mid x_{t-1}, \theta)$ for the Markovian latent process. Conditionally upon the latent process, the observations are independent and their distribution is given by a measurement kernel $g(dy_t\mid x_t, \theta)$. The model is parameterized by $\theta \in \Theta$, for $\Theta \subset \mathbb{R}^{d_\theta}$. Filtering consists in approximating the distribution $p(dx_t \mid y_{1:t}, \theta)$ for all times $t \geq 1$, whereas smoothing consists in approximating the distribution $p(dx_{0:T} \mid y_{1:T}, \theta)$ for a fixed time horizon $T$, where for $s \leq t$, we write $s{:}t$ for the set $\{s, \dots, t\}$, and $x_{s:t}$ for the vector $(x_s, \dots, x_t)$.
The bootstrap particle filter (Gordon et al., 1993) generates weighted samples denoted by $(w_t^k, x_t^k)_{k=1}^N$, for all $t \in \mathbb{N}$, where the particle locations $x_t^{1:N}$ are samples in $\mathbb{X}$ and the weights $w_t^{1:N}$ are nonnegative reals summing to one. The number $N$ of particles is specified by the user—the computational cost of the algorithm is linear in $N$, while the approximation of $p(dx_t\mid y_{1:t},\theta)$ by $\sum_{k=1}^N w_t^k \delta_{x_t^k}(dx_t)$ becomes more precise as $N$ increases (e.g. Del Moral, 2004; Whiteley, 2013, and references therein). An important by-product of the particle filter for statistical inference is the likelihood estimator, defined as $\hat p^N(y_{1:t}\mid\theta) = \prod_{s=1}^t N^{-1} \sum_{k=1}^N g(y_s \mid x_s^k, \theta)$. The likelihood estimator is known to have expectation equal to the likelihood $p(y_{1:t}\mid\theta)$, and its variance has been extensively studied (Del Moral, 2004; Cérou et al., 2011; Bérard et al., 2014). The estimator is at the core of the particle marginal Metropolis–Hastings (MH) algorithm (Andrieu et al., 2010; Doucet et al., 2015), in which particle filters are run within an MH scheme, enabling inference in a large class of state space models.
We consider methods to couple particle filters. A coupling of particle filters, given two parameter values $\theta$ and $\tilde\theta$, refers to a pair of particle systems, denoted by $(w_t^k, x_t^k)_{k=1}^N$ and $(\tilde w_t^k, \tilde x_t^k)_{k=1}^N$, such that: 1) marginally, each system has the same distribution as if it was generated by a particle filter, respectively given $\theta$ and $\tilde\theta$, and 2) the two systems are in some sense correlated. The same couplings can be applied to pairs of conditional particle filters (Andrieu et al., 2010), which are conditioned on different reference trajectories, instead of different parameters. In the case of particle filters, the goal is to introduce positive correlations between the likelihood estimators $\hat p^N(y_{1:T}\mid\theta)$ and $\hat p^N(y_{1:T}\mid\tilde\theta)$, which improves the performance of score estimators and of MH schemes (Deligiannidis et al., 2015; Dahlin et al., 2015). In the case of conditional particle filters, couplings lead to a new algorithm for smoothing, which is trivial to parallelize, provides unbiased estimators of smoothing functionals and accurate estimates of the associated Monte Carlo error.
Correlating estimators is a classic Monte Carlo technique for variance reduction, and can often be achieved by using common random numbers (Kahn and Marshall, 1953; Asmussen and Glynn, 2007; Glasserman and Yao, 1992). Particle filters are randomized algorithms which can be written as a deterministic function of some random variables and a parameter value. However, they are discontinuous functions of their inputs, due to the resampling steps. This discontinuity renders theoretical guarantees supporting the use of common random numbers, such as Proposition 2.2 in Glasserman and Yao (1992), inapplicable. Despite various attempts (see Pitt, 2002; Lee, 2008; Malik and Pitt, 2011, and references therein), there are no standard ways of coupling particle filters. Our proposed strategy relies on common random numbers for the initialization and propagation steps, while the resampling step is performed jointly for a pair of particle systems, using ideas inspired by maximal couplings and optimal transport.
Coupled resampling schemes and coupled particle filters are described in Section 2. In Section 3, they are shown to lead to various methodological developments: in particular, they are instrumental in the construction of a new smoothing estimator, in combination with the debiasing technique of Glynn and Rhee (2014). In Section 4, numerical results illustrate the gains brought by coupled particle filters in a real-world prey-predator model, and Section 5 concludes. The appendices contain various additional descriptions, proofs and extra numerical results.
2 Coupled resampling
2.1 Common random numbers
Within particle filters, random variables are used to initialize, to resample and to propagate the particles. We describe bootstrap particle filters in that light. Initially, we sample $x_0^k \sim m_0(dx_0\mid\theta)$ for all $k \in 1{:}N$, or equivalently, we compute $x_0^k = M(U_0^k, \theta)$ where $M$ is a function and $U_0^{1:N}$ random variables. The initial weights $w_0^{1:N}$ are set to $N^{-1}$. Consider now step $t \geq 1$ of the algorithm. In the resampling step, a vector of ancestor variables $a_t^{1:N} \in \{1,\dots,N\}^N$ is sampled. The resampling step can be written $a_t^{1:N} \sim r(da^{1:N} \mid w_{t-1}^{1:N})$, for some distribution $r$. The propagation step consists in drawing $x_t^k \sim f(dx_t \mid x_{t-1}^{a_t^k}, \theta)$, or equivalently, computing $x_t^k = F(x_{t-1}^{a_t^k}, U_t^k, \theta)$, where $F$ is a function and $U_t^{1:N}$ random variables. The next weights are computed as $w_t^k \propto g(y_t \mid x_t^k, \theta)$, then normalized to sum to one; and the algorithm proceeds. We refer to $U_t^{1:N}$ for all $t$ as the process-generating variables. The resampling distribution $r$ is an algorithmic choice; a standard condition for its validity is that, under $r$, $\mathbb{P}(a_t^k = j) = w_{t-1}^j$ for all $k$ and $j$; various schemes satisfying this condition exist (e.g. Douc and Cappé, 2005; Murray et al., 2015).
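To make the decomposition concrete, the steps above can be sketched as a deterministic function of the process-generating variables and the resampling uniforms. This is our own minimal NumPy sketch; the linear Gaussian toy model inside (and all function names) are assumptions for illustration, not the paper's model.

```python
import numpy as np

def bootstrap_pf(y, theta, U, V):
    """Bootstrap particle filter as a deterministic function of the
    process-generating variables U and resampling uniforms V.
    Assumed toy model (not the paper's): x_0^k = U_0^k,
    x_t^k = theta * x_{t-1}^{a_t^k} + U_t^k, and y_t | x_t ~ N(x_t, 1).
    Returns the log-likelihood estimator, up to an additive constant."""
    T, N = len(y), U.shape[1]
    x = U[0]                                   # initialization M(U_0, theta)
    log_lik = 0.0
    for t in range(T):
        logw = -0.5 * (y[t] - x) ** 2          # log g(y_t | x_t^k)
        m = logw.max()
        log_lik += m + np.log(np.mean(np.exp(logw - m)))
        w = np.exp(logw - m)
        w /= w.sum()
        # multinomial resampling driven by the uniforms V[t]
        a = np.minimum(np.searchsorted(np.cumsum(w), V[t]), N - 1)
        x = theta * x[a] + U[t + 1]            # propagation F(x^a, U_t, theta)
    return log_lik
```

With this interface, running the filter twice with the same $(\theta, U, V)$ returns the same value, which is the property exploited throughout this section.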
Consider a pair of particle filters given $\theta$ and $\tilde\theta$, producing particle systems $(w_t^k, x_t^k)_{k=1}^N$ and $(\tilde w_t^k, \tilde x_t^k)_{k=1}^N$, that we want to make as correlated as possible. Assume that the state space is one-dimensional, that $u \mapsto M(u, \theta)$ and $u \mapsto M(u, \tilde\theta)$ are increasing and right-continuous, and that $\mathbb{E}[M(U,\theta)^2] < \infty$ and $\mathbb{E}[M(U,\tilde\theta)^2] < \infty$. Then Proposition 2.2 of Glasserman and Yao (1992) states that the correlation between $M(U,\theta)$ and $M(\tilde U,\tilde\theta)$ is maximized, among all choices of joint distributions for $(U, \tilde U)$ that have the same marginal distributions, by choosing $U = \tilde U$ almost surely. This justifies the use of common random numbers for the initialization step. Likewise, if the propagation function $F$ is continuous in its first and third arguments, and if the particle locations $x_{t-1}^k$ and $\tilde x_{t-1}^k$ are similar, then, intuitively, $F(x_{t-1}^k, U_t^k, \theta)$ and $F(\tilde x_{t-1}^k, U_t^k, \tilde\theta)$ should be similar as well. The difficulty comes from the resampling step. We can write $a_t^{1:N} = R(w_{t-1}^{1:N}, U_R)$, where $U_R$ are random variables, typically uniformly distributed. Since the resampling function $R$ takes values in the discrete space $\{1,\dots,N\}^N$, it cannot be a continuous function of its arguments. In other words, even if we use the same random numbers $U_R$ in both systems, a small difference between the weight vectors $w_{t-1}^{1:N}$ and $\tilde w_{t-1}^{1:N}$ might lead to sampling, for instance, the ancestor $i$ in the first system and $i+1$ in the second system; and $x_{t-1}^i$ and $\tilde x_{t-1}^{i+1}$ have no reason to be similar. This leads to discontinuities in by-products of the particle system, such as the likelihood estimator as a function of $\theta$, for fixed common random numbers. We thus separate the randomness in process-generating variables from the resampling step, and consider resampling algorithms designed to correlate the particles in both systems.
2.2 Coupled resampling and coupled particle filters
We use bold fonts to denote vectors of objects indexed by $k \in 1{:}N$, for instance $\mathbf{w} = w^{1:N}$ or $\mathbf{x} = x^{1:N}$, and we drop the temporal notation whenever possible, for clarity. We consider the problem of jointly resampling $(\mathbf{w}, \mathbf{x})$ and $(\tilde{\mathbf{w}}, \tilde{\mathbf{x}})$. A joint distribution on $\{1,\dots,N\}^2$ is characterized by a matrix $P$ with nonnegative entries $P^{ij}$, for $i, j \in 1{:}N$, that sum to one. The value $P^{ij}$ represents the probability of sampling the pair $(i, j)$. We consider the set $\mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ of matrices $P$ such that $P \mathbf{1} = \mathbf{w}$ and $P^\top \mathbf{1} = \tilde{\mathbf{w}}$, where $\mathbf{1}$ denotes a column vector of $N$ ones. Pairs $(a, \tilde a)$ distributed according to $P \in \mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ are such that $\mathbb{P}(a = i) = w^i$ and $\mathbb{P}(\tilde a = j) = \tilde w^j$ for all $i$ and $j$. The choice $P = \mathbf{w} \tilde{\mathbf{w}}^\top$ corresponds to an independent coupling of $\mathbf{w}$ and $\tilde{\mathbf{w}}$. Sampling from this matrix is done by sampling $a$ with probabilities $\mathbf{w}$ and $\tilde a$ with probabilities $\tilde{\mathbf{w}}$, independently.
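A minimal sketch (our own NumPy code, not the paper's) of these objects: building the independent coupling, and a generic, if costly, way of sampling ancestor pairs from any probability matrix with the prescribed marginals.

```python
import numpy as np

def independent_coupling(w1, w2):
    """The matrix w1 w2^T: its rows sum to the entries of w1
    and its columns to the entries of w2."""
    return np.outer(w1, w2)

def sample_pairs(P, n, rng):
    """Sample n ancestor pairs (a, a_tilde) from a generic probability
    matrix P, by treating it as one flattened categorical distribution;
    this generic route costs O(N^2) per draw setup."""
    N = P.shape[0]
    flat = rng.choice(N * N, size=n, p=P.ravel() / P.sum())
    return flat // N, flat % N
```

The marginal constraints can be verified directly via `P.sum(axis=1)` and `P.sum(axis=0)`.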
Any choice of probability matrix $P \in \mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ leads to a coupled resampling scheme, and to a coupled bootstrap particle filter that proceeds as follows. The initialization and propagation steps are performed as in the standard particle filter, using common process-generating variables and the parameter values $\theta$ and $\tilde\theta$ respectively. At each step $t$, the resampling step involves computing a matrix $P_t$ in $\mathcal{J}(\mathbf{w}_{t-1}, \tilde{\mathbf{w}}_{t-1})$, possibly using all the variables generated thus far. Then the pairs of ancestors $(a_t^k, \tilde a_t^k)$, for $k \in 1{:}N$, are sampled from $P_t$.
Coupled resampling schemes can be applied in generic particle methods beyond the bootstrap filter. We illustrate this generality by coupling conditional particle filters. Given a trajectory $X_{0:T}$, referred to as the reference trajectory, and process-generating variables $U_{0:T}^{1:N}$, the conditional particle filter defines a distribution on the space of trajectories, as follows. At the initial step, we compute $x_0^k = M(U_0^k, \theta)$ for all $k < N$, we set $x_0^N = X_0$, and $w_0^k = N^{-1}$ for all $k$. At each step $t$, we draw the ancestors $a_t^{1:N-1}$ from a multinomial distribution, and set $a_t^N = N$; other resampling schemes can be implemented, as detailed in Chopin and Singh (2015). The propagation step computes $x_t^k = F(x_{t-1}^{a_t^k}, U_t^k, \theta)$ for $k < N$ and sets $x_t^N = X_t$. The weighting step computes $w_t^k \propto g(y_t \mid x_t^k, \theta)$, for all $k$. The procedure guarantees that the reference trajectory is among the trajectories produced by the algorithm. At the final step, we draw an index $b$ with probabilities $w_T^{1:N}$ and retrieve the corresponding trajectory, denoted $X'_{0:T}$. The coupled conditional particle filter acts similarly, producing a pair of trajectories given a pair of reference trajectories $X_{0:T}$ and $\tilde X_{0:T}$. The initialization and propagation steps follow the conditional particle filter for each system, using common random numbers $U_{0:T}^{1:N}$. For the resampling step, we compute a probability matrix $P_t$, based on the variables generated thus far, and we sample pairs of ancestor variables $(a_t^k, \tilde a_t^k)$ for $k < N$. We then set $a_t^N = N$ and $\tilde a_t^N = N$. At the final step, we draw a pair of indices $(b, \tilde b)$ from $P_T$, a probability matrix in $\mathcal{J}(\mathbf{w}_T, \tilde{\mathbf{w}}_T)$, and retrieve the corresponding pair of trajectories. The coupled conditional particle filter leads to a new smoothing algorithm, described in Section 3.3.
We now investigate particular choices of matrices with the aim of correlating a pair of particle systems.
2.3 Transport resampling
Intuitively, we want to choose $P$ such that, upon sampling ancestors from it, the resampled particles are as similar as possible between the two systems. Similarity between locations can be encoded by a distance $d: \mathbb{X} \times \mathbb{X} \to \mathbb{R}^+$, for instance the Euclidean distance in $\mathbb{X} = \mathbb{R}^{d_x}$. The expected distance between the resampled particles $x^a$ and $\tilde x^{\tilde a}$, conditional upon $(\mathbf{w}, \mathbf{x})$ and $(\tilde{\mathbf{w}}, \tilde{\mathbf{x}})$, is given by $\sum_{i,j} P^{ij} d(x^i, \tilde x^j)$. Denote by $D$ the distance matrix with entries $D^{ij} = d(x^i, \tilde x^j)$. The optimal transport problem considers a matrix $P^\star$ that minimizes the expected distance over all $P \in \mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$. Computing $P^\star$, either exactly or approximately, is the topic of a rich literature. Exact algorithms compute $P^\star$ in order $N^3 \log N$ operations, while recent methods introduce regularized solutions $P^\star_\varepsilon$, where $\varepsilon \geq 0$ is such that $P^\star_\varepsilon \to P^\star$ when $\varepsilon \to 0$. The regularized solution is then approximated by an iterative algorithm, yielding a matrix $\hat P^\star_\varepsilon$ in order $N^2$ operations (Cuturi, 2013; Benamou et al., 2015). Computing the distance matrix $D$ and sampling from a generic probability matrix already cost order $N^2$ operations in general, thus the overall cost is in order $N^2$ operations. We denote by $\hat P^\star_\varepsilon$ the matrix obtained by Cuturi's approximation (Cuturi, 2013).
Unlike the exact solution $P^\star$ and its regularized approximation $P^\star_\varepsilon$, an approximate solution $\hat P^\star_\varepsilon$ might not belong to $\mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$. Directly using such a matrix in a coupled particle filter would result in a bias, for instance in the likelihood estimator. However, we can easily construct a matrix in $\mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ that is close to $\hat P^\star_\varepsilon$. Introduce $\mathbf{u} = \hat P^\star_\varepsilon \mathbf{1}$ and $\mathbf{v} = (\hat P^\star_\varepsilon)^\top \mathbf{1}$, the marginals of $\hat P^\star_\varepsilon$. We compute a new matrix as $P = \delta \hat P^\star_\varepsilon + (1-\delta)\, \mathbf{r} \mathbf{s}^\top$ for some $\delta \in [0,1)$ and some probability vectors $\mathbf{r}$ and $\mathbf{s}$. The marginal constraints yield a system to solve for $\delta$, $\mathbf{r}$ and $\mathbf{s}$. We obtain $\mathbf{r} = (\mathbf{w} - \delta \mathbf{u})/(1-\delta)$, $\mathbf{s} = (\tilde{\mathbf{w}} - \delta \mathbf{v})/(1-\delta)$, and an upper bound on $\delta$ ensuring that $\mathbf{r}$ and $\mathbf{s}$ have nonnegative entries. To make the best use of the transport matrix $\hat P^\star_\varepsilon$, we select $\delta$ to attain the upper bound. Following Cuturi (2013), we choose $\varepsilon$ as a small proportion of the median of the distance matrix $D$. As a stopping criterion for the iterative algorithm yielding $\hat P^\star_\varepsilon$, we can select a desired value of $\delta$ close to one, and run the iterative algorithm until that value can be chosen, i.e. until the upper bound on $\delta$ exceeds it.
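The two ingredients (Sinkhorn iterations for the regularized plan, and the rescaling step that restores the marginal constraints) can be sketched as follows. This is our own simplified NumPy implementation, not the paper's code.

```python
import numpy as np

def sinkhorn_plan(w1, w2, D, eps, iters=500):
    """Entropy-regularized optimal transport (Cuturi, 2013): alternately
    rescale K = exp(-D/eps) so its marginals approach w1 and w2."""
    K = np.exp(-D / eps)
    u = np.ones_like(w1)
    for _ in range(iters):
        v = w2 / (K.T @ u)
        u = w1 / (K @ v)
    return u[:, None] * K * v[None, :]

def restore_marginals(P_hat, w1, w2, delta):
    """P = delta * P_hat + (1 - delta) * r s^T, with r and s chosen so the
    row/column sums of P are exactly w1 and w2; delta must be small enough
    that r and s stay nonnegative."""
    u = P_hat.sum(axis=1)
    v = P_hat.sum(axis=0)
    r = (w1 - delta * u) / (1.0 - delta)
    s = (w2 - delta * v) / (1.0 - delta)
    return delta * P_hat + (1.0 - delta) * np.outer(r, s)
```

In a usage sketch, one would take `eps` as a fraction of `np.median(D)` and `delta` just below the largest value keeping `r` and `s` nonnegative.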
The computational cost of transport resampling is potentially prohibitive, but it is model-independent and linear in the dimension of the state space. Furthermore, the active research area of numerical transport might provide faster algorithms in the future. Thus, for complex dynamical systems, the cost of transport resampling might still be negligible compared to the cost of the propagation steps.
2.4 Index-coupled resampling
Next we consider a computationally cheaper alternative to transport resampling, termed index-coupled resampling. This scheme was used by Chopin and Singh (2015) in their theoretical analysis of the conditional particle filter. It has also been used by Jasra et al. (2015) in the setting of multilevel Monte Carlo. Its computational cost is linear in $N$. The idea of index-coupling is to maximize the probability of sampling pairs $(a, \tilde a)$ such that $a = \tilde a$, by computing the matrix $P \in \mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ with maximum entries on its diagonal. The scheme is intuitive at the initial step of the algorithm, assuming that $\theta$ and $\tilde\theta$ are similar. At step $t$, the same random number $U_t^k$ is used to compute $x_t^k$ and $\tilde x_t^k$ from their ancestors. Therefore, by sampling $a_t^k = \tilde a_t^k$, we select pairs that were computed with common random numbers at the previous step, and give them common random numbers again. The scheme maximizes the number of consecutive steps where common random numbers are given to each pair. We describe how to implement the scheme, in the spirit of maximal couplings (Lindvall, 2002), before providing more intuition.
First, for all $i \in 1{:}N$, the diagonal entry has to satisfy $P^{ii} \leq \min(w^i, \tilde w^i)$, otherwise one of the marginal constraints would be violated. We tentatively write $P = \mathrm{diag}(\boldsymbol{\nu}) + R$, where $\boldsymbol{\nu} = \min(\mathbf{w}, \tilde{\mathbf{w}})$ (elementwise), $\alpha = \sum_{i=1}^N \nu^i$, and $R$ is a residual matrix with zeros on the diagonal. Matrices of this form have maximum trace among all matrices in $\mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$. We now look for $R$ such that $P \in \mathcal{J}(\mathbf{w}, \tilde{\mathbf{w}})$ and such that sampling from $P$ can be done linearly in $N$. From the marginal constraints, the matrix $R$ needs to satisfy, for all $i$, $\sum_j R^{ij} = w^i - \nu^i$ and $\sum_j R^{ji} = \tilde w^i - \nu^i$. Among all the matrices that satisfy these constraints, the choice $R = (1-\alpha)\, \mathbf{r} \mathbf{s}^\top$, where $\mathbf{r} = (\mathbf{w} - \boldsymbol{\nu})/(1-\alpha)$ and $\mathbf{s} = (\tilde{\mathbf{w}} - \boldsymbol{\nu})/(1-\alpha)$, is such that we can sample pairs of indices from $R/(1-\alpha)$ by sampling from $\mathbf{r}$ and $\mathbf{s}$ independently, for a linear cost in $N$. Thus we define the index-coupled matrix as

$P = \mathrm{diag}(\boldsymbol{\nu}) + (1-\alpha)\, \mathbf{r} \mathbf{s}^\top.$   (1)
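Sampling $N$ ancestor pairs from the matrix in Eq. (1) can be done without forming it, at a cost linear in $N$. A sketch of the scheme just described, in our own NumPy code (when the two weight vectors coincide the residual branch is never reached):

```python
import numpy as np

def index_coupled_pairs(w1, w2, rng):
    """Sample N pairs from P = diag(nu) + (1 - alpha) r s^T, where
    nu = min(w1, w2) elementwise and alpha = sum(nu): with probability
    alpha, draw a common index from nu/alpha; otherwise draw the two
    indices independently from the residuals r and s."""
    N = len(w1)
    nu = np.minimum(w1, w2)
    alpha = nu.sum()
    a1 = np.empty(N, dtype=int)
    a2 = np.empty(N, dtype=int)
    coupled = rng.random(N) < alpha
    nc = int(coupled.sum())
    if nc > 0:
        common = rng.choice(N, size=nc, p=nu / alpha)
        a1[coupled] = common
        a2[coupled] = common
    if nc < N:
        r = (w1 - nu) / (1.0 - alpha)
        s = (w2 - nu) / (1.0 - alpha)
        a1[~coupled] = rng.choice(N, size=N - nc, p=r)
        a2[~coupled] = rng.choice(N, size=N - nc, p=s)
    return a1, a2
```

By construction, the proportion of pairs with equal indices concentrates around $\alpha$, and identical weight vectors yield identical ancestor vectors.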
Under model assumptions, using common random numbers to propagate a pair of particles will result in the pair of states getting closer. We can formulate assumptions on the function $(x, \theta) \mapsto \mathbb{E}[F(x, U, \theta)]$ as a function of both of its arguments, where the expectation is with respect to $U$. We can assume for instance that it is Lipschitz in both arguments. In an autoregressive model where $F(x, U, \theta) = \theta x + U$, the Lipschitz constant is $|\theta|$ as a function of $x$, and $|x|$ as a function of $\theta$. One can then find conditions (see e.g. Diaconis and Freedman, 1999) such that the distance between the two propagated particles will decrease down to a value proportional to the distance between $\theta$ and $\tilde\theta$, when common random numbers are used to propagate the pair.
We will see in Section 4 that index-coupled resampling can perform essentially as well as transport resampling in real-world models.
2.5 Existing approaches
Various attempts have been made to modify particle filters so that they produce correlated likelihood estimators. A detailed review is given in Lee (2008). We describe the method proposed in Pitt (2002), which is based on sorting the particles. Consider first the univariate case, $d_x = 1$. We can sort both particle systems in increasing order of their locations, yielding $(w^{(k)}, x^{(k)})$ and $(\tilde w^{(k)}, \tilde x^{(k)})$ for $k \in 1{:}N$, where the parentheses indicate that the samples are sorted. Then, we can draw the ancestors by inverting the empirical cumulative distribution functions associated with these sorted samples, using common random numbers. We might sample different indices in the two systems, but the corresponding sorted locations will still be close, and thus the resampled particles will be similar, thanks to the sorting. The method can be extended to multivariate spaces using the Hilbert space-filling curve, as mentioned in Deligiannidis et al. (2015), following Gerber and Chopin (2015). That is, we use the pseudo-inverse of the Hilbert curve to map the $d_x$-dimensional particles to the interval $[0,1]$, where they can be sorted in increasing order. We refer to this approach as sorted resampling, and use the implementation provided by the function hilbert_sort in The CGAL Project (2016). The cost of sorted resampling is of order $N \log N$.
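A sketch of sorted resampling in the univariate case (our own NumPy code; a multivariate version would first map the particles through the Hilbert curve pseudo-inverse, which we do not reproduce here):

```python
import numpy as np

def sorted_resampling(x1, w1, x2, w2, V):
    """Pitt (2002)-style coupled resampling in one dimension: sort each
    system by particle location, then invert both weighted empirical
    CDFs with the same uniforms V."""
    o1, o2 = np.argsort(x1), np.argsort(x2)
    c1, c2 = np.cumsum(w1[o1]), np.cumsum(w2[o2])
    N = len(x1)
    a1 = o1[np.minimum(np.searchsorted(c1, V), N - 1)]
    a2 = o2[np.minimum(np.searchsorted(c2, V), N - 1)]
    return a1, a2
```

When the two systems coincide, the scheme returns identical ancestors; when one system is a small perturbation of the other, the resampled locations stay close.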
2.6 Numerical illustration
We first illustrate the effect of coupled resampling schemes in estimating likelihood curves for a multivariate hidden autoregressive model. The process starts as $X_1 \sim \mathcal{N}(0, I_d)$, where $I_d$ is the identity matrix of dimension $d$; the transition is defined by $X_t \mid X_{t-1} \sim \mathcal{N}(A X_{t-1}, I_d)$, where $A$ is a $d \times d$ matrix, as in Guarniero et al. (2015). Finally, the measurement distribution is defined by $Y_t \mid X_t \sim \mathcal{N}(X_t, I_d)$.
We generate observations, with parameter and with . We consider a sequence of parameter values . We run a standard particle filter given , and then for each , we run a particle filter given conditionally upon the variables generated by the previous particle filter given ; more details are given in Appendix A. We use particles, and try various coupled resampling schemes. The transport resampling scheme uses and . The estimated log-likelihoods are shown in Figure 1 for five independent runs, and compared to the exact log-likelihood obtained by Kalman filters.
All the filters underestimate the log-likelihood by a significant amount, indicating that more particles would be necessary to obtain precise estimates for any given parameter value. However, we see that the shape of the log-likelihood curve is still approximately recovered when using common random numbers and certain coupled resampling schemes, indicating that we can compare log-likelihood values for different parameters, even with comparatively small numbers of particles.
3 Methodological developments using coupled resampling
In this section, we develop the use of coupled resampling schemes for score estimation, for sampling algorithms targeting the posterior distribution of the parameters, and for latent state estimation. For the latter, a new smoother is proposed, conditions for its validity are given and experiments in a toy example are presented, showing its potential advantages compared to standard smoothing methods.
3.1 Finite difference estimators of the score
Consider the estimation of the log-likelihood gradient, also called the score and denoted by $\nabla_\theta \log p(y_{1:T} \mid \theta)$. We focus on univariate parameters for simplicity. A finite difference estimator (Asmussen and Glynn, 2007) of the score at the value $\theta$ is given by $(\log \hat p(y_{1:T}\mid\theta+\varepsilon) - \log \hat p(y_{1:T}\mid\theta-\varepsilon)) / (2\varepsilon)$, where $\varepsilon > 0$ is a perturbation parameter. If $\varepsilon$ is small, the variances of the two log-likelihood estimators can be assumed approximately equal, say to $\sigma^2$, and thus the variance of their difference is approximately equal to $2\sigma^2(1-\rho)$, where $\rho$ denotes the correlation between the two log-likelihood estimators. Thus, compared to using independent estimators with the same variance, the variance of the finite difference estimator can be divided by $(1-\rho)^{-1}$. We refer to this number as the gain. It corresponds to how many times more particles should be used in order to attain the same accuracy using independent particle filters. The bias of the gradient estimator is unchanged by the use of coupled resampling schemes, since they do not change the marginal distributions of each particle filter.
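The gain can be checked numerically with synthetic correlated Gaussian proxies standing in for the two log-likelihood estimators; this is a self-contained illustration under an assumed correlation, not particle filter output.

```python
import numpy as np

def fd_score(logZ_plus, logZ_minus, eps):
    """Finite-difference score estimator at theta, from log-likelihood
    estimators at theta + eps and theta - eps."""
    return (logZ_plus - logZ_minus) / (2.0 * eps)

rng = np.random.default_rng(5)
rho, sigma, n, eps = 0.9, 1.0, 200_000, 0.1
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
# correlated pair of estimators versus an independent pair
var_coupled = np.var(fd_score(sigma * z1, sigma * z2, eps))
var_indep = np.var(fd_score(sigma * z1, sigma * rng.standard_normal(n), eps))
gain = var_indep / var_coupled   # approximately 1 / (1 - rho) = 10 here
```
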
We run coupled particle filters at and for and various , and different coupled resampling schemes, over independent experiments in the hidden autoregressive model of Section 2.6. The correlations between the log-likelihood estimators are shown in Figure 2, as well as the gains. We see that the variance can be divided by approximately for small values of , but only by approximately for larger values of . Index-coupled resampling appears to perform better than transport resampling for small values of , perhaps due to the approximation introduced in the regularized transport problem; a more detailed investigation of the transport regularization is given in the next section. Here the tuning parameters of transport resampling were set to and .
3.2 Correlated particle marginal Metropolis–Hastings
We now turn to the particle marginal MH algorithm (PMMH) (Andrieu et al., 2010), for parameter inference in state space models. Denoting the prior parameter distribution by $p(d\theta)$, the algorithm generates a Markov chain targeting the posterior distribution $p(d\theta \mid y_{1:T})$. At each iteration, given the current parameter $\theta$, a parameter $\theta'$ is proposed from a Markov kernel $q(d\theta' \mid \theta)$, and accepted as the next state of the chain with probability

$\min\left(1,\; \frac{\hat p(y_{1:T}\mid\theta')\, p(\theta')\, q(\theta\mid\theta')}{\hat p(y_{1:T}\mid\theta)\, p(\theta)\, q(\theta'\mid\theta)}\right),$   (2)
where $\hat p(y_{1:T}\mid\theta')$ is the likelihood estimator produced by a particle filter given $\theta'$. In order for this algorithm to mimic the ideal underlying MH algorithm, the ratio of likelihood estimators must be an accurate approximation of the exact ratio of likelihoods (Andrieu and Vihola, 2015). The benefit of correlated likelihood estimators within pseudo-marginal algorithms is the topic of recent works (Deligiannidis et al., 2015; Dahlin et al., 2015), following Lee and Holmes (2010) in the discussion of Andrieu et al. (2010).
A correlated particle marginal MH algorithm works on the joint space of the parameter $\theta$, the process-generating variables $U_t^{1:N}$ for all $t$, and the ancestor variables $a_t^{1:N}$ for all $t$. Denote by $m$ the distribution of the process-generating variables, assumed to be standard multivariate normal for simplicity, and let $K$ be a Markov kernel leaving $m$ invariant. Consider any iteration of the algorithm; the current state of the Markov chain contains $\theta$, the process-generating variables and the ancestor variables, and the associated likelihood estimator is $\hat p(y_{1:T}\mid\theta)$. The particles, for all $t$, are deterministic given the parameter, the process-generating variables and the ancestors. The algorithm proceeds in the following way.

(a) A parameter value $\theta'$ is proposed from $q(d\theta'\mid\theta)$, as well as new process-generating variables, drawn from a Markov kernel leaving their distribution invariant.

(b) A particle filter is run given $\theta'$, using the new process-generating variables and conditionally upon the current particle filter. That is, at each resampling step, a probability matrix is computed using the weights of the two particle systems, and the new ancestors are sampled conditionally upon the current ones. The algorithm produces ancestor variables and a likelihood estimator $\hat p(y_{1:T}\mid\theta')$.

(c) With the probability given by Eq. (2), the chain moves to the state with parameter $\theta'$, together with the new process-generating variables, ancestors and likelihood estimator. Otherwise, the current state of the chain is unchanged.
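The loop can be sketched as follows. Here `log_post_hat` is a hypothetical interface (our own assumption, not the paper's) returning the log of the prior density times the likelihood estimator as a deterministic function of the parameter and the process-generating variables, and the autoregressive move on `u` is one standard choice of kernel leaving the standard normal distribution invariant.

```python
import numpy as np

def correlated_pmmh(log_post_hat, theta0, u_dim, n_iters,
                    rho_u=0.95, step=0.5, rng=None):
    """Correlated pseudo-marginal MH sketch: random-walk proposal on theta,
    autoregressive (common random numbers) move on the process-generating
    variables u; with a symmetric proposal and a reversible move on u, the
    acceptance ratio reduces to the difference of log posterior estimates."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    u = rng.standard_normal(u_dim)
    lp = log_post_hat(theta, u)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        theta_p = theta + step * rng.standard_normal()
        u_p = rho_u * u + np.sqrt(1 - rho_u**2) * rng.standard_normal(u_dim)
        lp_p = log_post_hat(theta_p, u_p)
        if np.log(rng.random()) < lp_p - lp:   # acceptance as in Eq. (2)
            theta, u, lp = theta_p, u_p, lp_p
        chain[i] = theta
    return chain
```

Keeping `rho_u` close to one means successive likelihood estimators are computed with similar process-generating variables, which is what the coupled resampling schemes are designed to exploit.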
Appendix A contains further details on the conditional sampling of a particle filter as required by step (b) above. Appendix B contains conditions on the coupled resampling scheme for the algorithm to be exact, which are verified for sorted and indexcoupled schemes, as well as for a slightly modified transport scheme.
In the hidden autoregressive model, we specify a standard normal prior on . The distribution of the process-generating variables is a multivariate normal distribution, and we choose the kernel to be autoregressive: , with . We use a normal random walk with a standard deviation of for the proposal on . We run each algorithm times for iterations, starting the chain from a uniform variable in , taken to be in the bulk of the posterior distribution. Figure 5 shows the obtained average acceptance rates and effective sample sizes, defined as the number of iterations divided by the integrated autocorrelation time and obtained with the function effectiveSize of the coda package. With index-coupled resampling, the effective sample size can reach acceptable levels with fewer particles, compared to standard PMMH or compared to sorted resampling (as used by Deligiannidis et al. (2015)).
Transport resampling is considerably more expensive for a given choice of $N$. For a fixed number of particles, we show the acceptance rates and effective sample sizes obtained over independent experiments with various levels of approximation to the optimal transport problem. When the regularization parameter $\varepsilon$ is close to zero and $\delta$ is close to one, we can achieve greater effective sample sizes with transport resampling than with the other schemes, for a fixed $N$.
3.3 A new smoothing method
Next, we turn to an application of coupled conditional particle filters to the task of smoothing. The parameter $\theta$ is fixed and removed from the notation. Denote by $h$ a generic test function on $\mathbb{X}^{T+1}$, of which we want to compute the expectation with respect to the smoothing distribution $\pi(dx_{0:T}) = p(dx_{0:T} \mid y_{1:T})$; we write $\pi(h)$ for this expectation.
3.3.1 Algorithm
We build upon the debiasing technique of Glynn and Rhee (2014), which follows a series of unbiased estimation techniques (see Rhee and Glynn, 2012; Vihola, 2015, and references therein). The Rhee–Glynn estimator introduced in Glynn and Rhee (2014) uses the kernel of a Markov chain with invariant distribution $\pi$, in order to produce unbiased estimators of $\pi(h)$. In the setting of smoothing, the conditional particle filter defines a Markov kernel leaving the smoothing distribution invariant (Andrieu et al., 2010); extensions include backward sampling (Whiteley, 2010) and ancestor sampling (Lindsten et al., 2014). The conditional particle filter kernel has been extensively studied in Chopin and Singh (2015), Andrieu et al. (2013) and Lindsten et al. (2015). The use of conditional particle filters within the Rhee–Glynn estimator naturally leads to the problem of coupling two conditional particle filters.
The Rhee–Glynn construction adapted to our context goes as follows. We draw two trajectories $X^{(0)}$ and $\tilde X^{(0)}$ from two independent particle filters, with $U^{(0)}$ and $\tilde U^{(0)}$ denoting the process-generating variables. Note that even for fixed process-generating variables the sampled trajectories are random, due to the randomness of the resampling steps. We apply one step of the conditional particle filter to the first trajectory: we sample process-generating variables $U^{(1)}$ and write $X^{(1)} \sim \text{CPF}(X^{(0)}, U^{(1)})$. Then, for all $n \geq 2$, we apply the coupled conditional particle filter (CCPF) to the pair of trajectories, which is written $(X^{(n)}, \tilde X^{(n-1)}) \sim \text{CCPF}(X^{(n-1)}, \tilde X^{(n-2)}, U^{(n)})$, where $U^{(n)}$ are fresh process-generating variables. The resulting chains are such that

- marginally, $(X^{(n)})_{n \geq 0}$ and $(\tilde X^{(n)})_{n \geq 0}$ have the same distributions as if they were generated by conditional particle filters, and thus converge under mild assumptions to the smoothing distribution;

- for each $n$, $\tilde X^{(n)}$ has the same distribution as $X^{(n)}$, since the process-generating variables are independent and identically distributed;

- under mild conditions stated below, at each iteration $n \geq 1$, there is a nonzero probability that $X^{(n)} = \tilde X^{(n-1)}$. We refer to this event as a meeting, and introduce the meeting time $\tau$, defined as $\tau = \inf\{n \geq 1 : X^{(n)} = \tilde X^{(n-1)}\}$.
We then define the Rhee–Glynn smoothing estimator as

$H = h(X^{(0)}) + \sum_{n=1}^{\tau-1} \left( h(X^{(n)}) - h(\tilde X^{(n-1)}) \right),$   (3)

with the convention that the sum is empty when $\tau = 1$.
This is an unbiased estimator of $\pi(h)$ with finite variance and finite computational cost, under conditions given below. The full procedure is described in Algorithm 1. To estimate the smoothing functional $\pi(h)$, one can sample $R$ estimators $H^{(r)}$, for $r \in 1{:}R$, and take the empirical average $R^{-1} \sum_{r=1}^R H^{(r)}$; it is unbiased and converges to $\pi(h)$ at the standard Monte Carlo rate as $R \to \infty$.
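Averaging the independent estimators and reporting a CLT-based confidence interval is then elementary; a sketch in our own code:

```python
import numpy as np

def combine_unbiased_estimators(H, z=1.96):
    """Average R independent unbiased estimators H^(r) and report a
    CLT-based confidence interval: this is the error estimate that the
    averaging structure of the Rhee-Glynn construction makes available."""
    H = np.asarray(H, dtype=float)
    mean = H.mean()
    half = z * H.std(ddof=1) / np.sqrt(len(H))
    return mean, (mean - half, mean + half)
```

The same routine applies componentwise to multivariate smoothing functionals.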
Popular smoothing techniques include the fixed-lag smoother and the forward filtering backward smoother (see Doucet and Johansen, 2011; Lindsten and Schön, 2013; Kantas et al., 2015, for recent reviews). The Rhee–Glynn smoothing estimator sets itself apart in the following way, due to its form as an average of independent unbiased estimators.

Error estimators can be constructed based on the central limit theorem, allowing for an empirical assessment of the performance of the estimator. Error estimators for particle smoothers have not yet been proposed, although see Lee and Whiteley (2015b).
3.3.2 Theoretical properties
We give three sufficient conditions for the validity of Rhee–Glynn smoothing estimators.
Assumption 1.
The measurement density of the model is bounded from above: there exists $\bar g < \infty$ such that, for all $x \in \mathbb{X}$ and $y \in \mathbb{Y}$, $g(y \mid x) \leq \bar g$.
That bound limits the influence of the reference trajectory in the conditional particle filter.
Assumption 2.
The resampling probability matrix $P$, constructed from the weight vectors $\mathbf{w}$ and $\tilde{\mathbf{w}}$, is such that, for all $i \in 1{:}N$, $P^{ii} \geq w^i \tilde w^i$. Furthermore, if $\mathbf{w} = \tilde{\mathbf{w}}$, then $P$ is a diagonal matrix with entries given by $\mathbf{w}$.
One can check that the condition holds for independent and indexcoupled resampling schemes. The second part of Assumption 2 ensures that if two reference trajectories are equal, an application of the coupled conditional particle filter returns two identical trajectories.
Assumption 3.
Let $(X^{(n)})_{n \geq 0}$ be a Markov chain generated by the conditional particle filter. The test function $h$ is such that $\mathbb{E}[h(X^{(n)})] \to \pi(h)$ as $n \to \infty$. Furthermore, there exist $\delta > 0$, $n_0 < \infty$ and $C < \infty$ such that, for all $n \geq n_0$, $\mathbb{E}[|h(X^{(n)})|^{2+\delta}] \leq C$.
This assumption relates to the validity of the conditional particle filter to estimate $\pi(h)$, addressed under general assumptions in Chopin and Singh (2015), Andrieu et al. (2013) and Lindsten et al. (2015). Up to the term $\delta$, which can be arbitrarily small, the assumption is a requirement if we want to estimate $\pi(h)$ using conditional particle filters while ensuring a finite variance.
Our main result states that the proposed estimator is unbiased and has a finite variance. Similar results can be found in Theorem 1 in Rhee (2013), Theorem 2.1 in McLeish (2012), Theorem 7 in Vihola (2015) and in Glynn and Rhee (2014).
Theorem 3.1. Under Assumptions 1, 2 and 3, the Rhee–Glynn smoothing estimator $H$ of Eq. (3) is an unbiased estimator of $\pi(h)$ with finite variance, and the meeting time $\tau$ has finite expectation.
The proof is given in Appendix C. The theorem uses univariate notation for and , but the Rhee–Glynn smoother can be applied to estimate multivariate smoothing functionals, for which the theorem can be interpreted componentwise.
3.3.3 Practical considerations
For a fixed computational budget, the only tuning parameter is the number of particles $N$, which implicitly sets the number $R$ of independent estimators that can be obtained within the budget. The computational cost of producing one unbiased estimator is proportional to $N$ times the meeting time $\tau$, and the expectation of $\tau$ is seen empirically to decrease with $N$, so that the choice of $N$ is not obvious; in practice we recommend choosing a value of $N$ large enough so that the meeting time occurs within a few steps, but other considerations such as memory cost could be taken into account. The memory cost for each estimator is of order $T + N \log N$ on average (Jacob et al., 2015). This memory cost holds also when using ancestor sampling (Lindsten et al., 2014), whereas backward sampling (Whiteley, 2010) results in a memory cost of order $NT$. As in Glynn and Rhee (2014), we can appeal to Glynn and Whitt (1992) to obtain a central limit theorem parameterized by the computational budget instead of the number of samples.
The performance of the proposed estimator is tied to the meeting time. As in Chopin and Singh (2015), the coupling inequality (Lindvall, 2002) can be used to relate the meeting time with the mixing of the underlying conditional particle filter kernel. Thus, the proposed estimator is expected to work in the same situations where the conditional particle filter works. It can be seen as a framework to parallelize conditional particle filters and to obtain reliable confidence intervals. Furthermore, any improvement in the conditional particle filter directly translates into a more efficient Rhee–Glynn estimator.
The variance of the proposed estimator can first be reduced by a Rao–Blackwellization argument. In the $n$-th term of the sum in Eq. (3), the random variable $h(X^{(n)})$ is obtained by applying the test function $h$ to a trajectory drawn among $N$ trajectories, say $x_{0:T}^{1:N}$, with probabilities $w_T^{1:N}$. Thus the random variable $\sum_{k=1}^N w_T^k\, h(x_{0:T}^k)$ is a conditional expectation of $h(X^{(n)})$ given the trajectories and the weights, which has the same expectation as $h(X^{(n)})$. Any term $h(X^{(n)})$ or $h(\tilde X^{(n-1)})$ in $H$ can be replaced by similar conditional expectations. This enables the use of all the particles generated by the conditional particle filters. A further variance reduction technique is discussed in Appendix D.
3.3.4 A hidden autoregressive model with an unlikely observation
We consider the first example of Ruiz and Kappen (2016). The latent process is defined as and ; we take , and and consider time steps. The process is observed only at time , where and we assume , with . The observation is unlikely under the latent process distribution. Therefore the filtering distributions and the smoothing distributions have little overlap, particularly for times close to .
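To illustrate how such a highly informative final observation arises, the following sketch simulates prior paths of a scalar autoregressive process and checks how rarely they come near the observation. All numerical values (the autoregression coefficient, the horizon, the observation noise scale, and the observed value) are illustrative placeholders, not the settings of the experiment:

```python
import numpy as np

# Illustrative values only; the experiment uses different settings.
eta, T, sigma_y, y_obs = 0.9, 10, 1.0, 8.0

rng = np.random.default_rng(0)

def prior_path():
    # x_0 ~ N(0, 1), then x_t = eta * x_{t-1} + N(0, 1)
    x = rng.normal()
    path = [x]
    for _ in range(T):
        x = eta * x + rng.normal()
        path.append(x)
    return np.array(path)

paths = np.array([prior_path() for _ in range(5000)])
final = paths[:, -1]

# Fraction of prior paths ending within one observation-noise standard
# deviation of y_obs: tiny, since y_obs sits far in the prior tail.
hit_rate = np.mean(np.abs(final - y_obs) <= sigma_y)
```

The small hit rate is precisely what makes the smoothing distribution nearly disjoint from the filtering distributions at times close to the final observation.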
We consider the problem of estimating the smoothing means, and run independent Rhee–Glynn estimators, with various numbers of particles, with ancestor sampling (Lindsten et al., 2014) and without variance reduction. For comparison, we also run a bootstrap particle filter times, with larger numbers of particles. This compensates for the fact that the Rhee–Glynn estimator requires a certain number of iterations, each involving a coupled particle filter. The average meeting times for each value of are: for , for , for , for .
For each method, we compute a confidence interval as at each time , where is the mean of the estimators and is the standard deviation. The results are shown in Figure 8. The exact smoothing means are obtained analytically and shown by black dots. The Rhee–Glynn estimators lead to reliable confidence intervals. Increasing reduces the width of the interval and the average meeting time. On the other hand, standard particle smoothers with larger numbers of particles still yield unreliable confidence intervals. The poor performance of standard particle smoothers is to be expected in the setting of highly informative observations (Ruiz and Kappen, 2016; Del Moral and Murray, 2015).
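Such intervals take the usual normal-approximation form: with independent unbiased estimators, the interval is the sample mean plus or minus 1.96 sample standard deviations divided by the square root of the number of replicates. A minimal sketch on synthetic estimator values (the values themselves are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
R = 1000
# Synthetic stand-ins for R independent unbiased smoothing estimators.
estimators = rng.normal(loc=0.5, scale=2.0, size=R)

mean = estimators.mean()
sd = estimators.std(ddof=1)           # sample standard deviation
half_width = 1.96 * sd / np.sqrt(R)   # normal-approximation half-width
ci = (mean - half_width, mean + half_width)
```

Unbiasedness of each replicate is what makes such intervals asymptotically valid as the number of replicates grows, for any fixed number of particles; biased smoothers offer no such guarantee.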
4 Numerical experiments in a prey–predator model
We investigate the performance of the correlated particle marginal Metropolis–Hastings algorithm and of the Rhee–Glynn smoother for a nonlinear non-Gaussian model. We consider the Plankton–Zooplankton model of Jones et al. (2010), which is an example of an implicit model: the transition density is intractable (Bretó et al., 2009; Jacob, 2015). The hidden state represents the population size of phytoplankton and zooplankton, and the transition from time to is given by a Lotka–Volterra equation,
where the stochastic daily growth rate is drawn from at every integer time . The propagation of each particle involves solving numerically the above equation, here using a Runge–Kutta method in the odeint library (Ahnert and Mulansky, 2011). The initial distribution is given by and . The parameters and represent the clearance rate of the prey and the growth efficiency of the predator. Both and parameterize the mortality rate of the predator. The observations are noisy measurements of the phytoplankton , ; is not observed. We generate observations using , , , , .
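A particle propagation step can be sketched as follows. The drift is a standard Lotka–Volterra form with linear and quadratic predator mortality, integrated here with a hand-rolled fourth-order Runge–Kutta step rather than odeint; all parameter values, and the exact form of the drift, are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def drift(state, alpha, c, e, ml, mq):
    p, z = state
    dp = alpha * p - c * p * z                # phytoplankton growth minus grazing
    dz = e * c * p * z - ml * z - mq * z**2   # zooplankton growth minus mortality
    return np.array([dp, dz])

def rk4_step(state, h, *args):
    # Classical fourth-order Runge-Kutta step of size h.
    k1 = drift(state, *args)
    k2 = drift(state + 0.5 * h * k1, *args)
    k3 = drift(state + 0.5 * h * k2, *args)
    k4 = drift(state + h * k3, *args)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(state, rng, mu_alpha=0.7, sigma_alpha=0.5,
              c=0.25, e=0.3, ml=0.1, mq=0.1, n_steps=10):
    # Stochastic daily growth rate, drawn once per unit time interval,
    # then deterministic integration over that interval.
    alpha = rng.normal(mu_alpha, sigma_alpha)
    h = 1.0 / n_steps
    for _ in range(n_steps):
        state = rk4_step(state, h, alpha, c, e, ml, mq)
    return state

rng = np.random.default_rng(3)
state = propagate(np.array([2.0, 1.0]), rng)
```

Only forward simulation of this kind is needed by the particle filter, which is what makes the method applicable despite the intractable transition density.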
4.1 Correlated particle marginal Metropolis–Hastings
The parameter is . We specify a centered normal prior on with variance , an exponential prior on with unit rate, and uniform priors in for the four other parameters. With logarithm and logistic transforms, we map to . For the Metropolis–Hastings proposal distribution, we use a normal random walk with a covariance matrix chosen as one sixth of the covariance of the posterior, obtained from long pilot runs. We start the Markov chains at the (transformed) data-generating parameter. We then run the particle marginal Metropolis–Hastings with particles and iterations, times independently, and obtain a mean acceptance rate of , with standard deviation of , and an effective sample size averaged over the parameters (ESS) of . With only particles, we obtain a mean acceptance of () and an ESS of . Density estimators of the posterior of with these two samplers are shown on the leftmost plots of Figure 13.
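The reparameterization can be sketched as follows: a logit transform maps an interval-constrained parameter to the real line, and its inverse logistic map brings proposals back, so that the random-walk proposal operates on an unconstrained space (positive parameters use log/exp analogously). The interval endpoints below are generic arguments, not the paper's values:

```python
import numpy as np

def to_unconstrained(theta, a=0.0, b=1.0):
    # logit map: theta in (a, b) -> real line
    u = (theta - a) / (b - a)
    return np.log(u / (1.0 - u))

def to_constrained(phi, a=0.0, b=1.0):
    # logistic map: real line -> (a, b); inverse of to_unconstrained
    return a + (b - a) / (1.0 + np.exp(-phi))

# Round trip: a constrained value survives the transform pair.
phi = to_unconstrained(0.3)
back = to_constrained(phi)
```

When targeting the posterior on the transformed scale, the acceptance ratio must include the Jacobian of the transform, or equivalently the prior density can be expressed directly in the transformed coordinates.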
We investigate whether we can obtain better posterior approximations using the correlated Metropolis–Hastings algorithm, still with particles and iterations. We set the correlation coefficient for the propagation of the process-generating variables to . We consider the use of index-coupled, sorted and transport resampling. For the latter we choose and . For index-coupled resampling, we obtain an acceptance of (), with sorted resampling () and with transport resampling (). For the ESS, we obtain for index-coupled resampling, for sorted resampling and for transport resampling. We display the density estimators of the posterior of in the rightmost panels of Figure 13, for index-coupled and transport resampling. Similar results are obtained with sorted resampling (not shown). However, the results in Section 3.2 indicate that sorted resampling would be less efficient in higher dimensions. We conclude that the correlated algorithm with particles gives posterior approximations that are comparable to those obtained with a standard PMMH algorithm that uses four times more particles; thus important computational savings can be made. With the provided R implementation, the algorithm with and transport resampling takes around minutes per run, compared to minutes for the other schemes, and around minutes for the standard algorithm with .
4.2 Rhee–Glynn smoother
Next, we consider the problem of estimating the mean population of zooplankton at each time , given a fixed parameter taken to be the datagenerating one. The intractability of the transition density precludes the use of ancestor or backward sampling, or the use of forward filtering backward sampling.
We draw independent Rhee–Glynn smoothing estimators, using particles. The observed meeting times have a median of , a mean of and a maximum of . The estimator of the smoothing mean of at each time is obtained by averaging independent estimators. We compute the Monte Carlo variance of at each time, and define the relative variance as .
We combine the Rhee–Glynn estimator (denoted by “unbiased” below) with the variance reduction technique of Section 3.3.3 (denoted by “unbiased+RB”). Furthermore, we use the variance reduction of Appendix D, denoted by “unbiased+RB+m”, with chosen to be the median of the meeting time, i.e. . The latter increases the average meeting time from to . We compare the resulting estimators with a fixed-lag smoother (Doucet and Johansen, 2011) with a lag parameter , and with a standard particle filter storing the complete trajectories.
We use the same number of particles and compute estimators for each method. The relative variance is shown in Figure 14. First, we see that the variance reduction techniques have a significant effect, particularly for close to but also for small . In particular, the estimator with Rao–Blackwellization (“unbiased+RB+m”) achieves nearly the same relative variance as the particle filter. The cost of these estimators can be computed as the number of iterations , times twice the cost of a particle filter for each coupled particle filter. In the present setting, where the average number of iterations is around five, we conclude that removing the bias from the standard particle filter can be done for an approximate tenfold increase in computational cost. As expected, the fixed-lag smoother leads to a significant decrease in variance. For this model, the incurred bias is negligible for (not shown), which, however, would be hard to tell if we did not have access to either unbiased methods or long runs of asymptotically exact methods.
In this model, standard particle filters and fixed-lag approximations perform well, leading to smaller mean squared error than the proposed estimators, for a given computational cost. However, the proposed estimators are competitive, the tuning of the algorithm is minimal, and unbiasedness prevents the possibility of overconfident error bars as in Section 3.3.4. Therefore the proposed method trades an extra cost for convenience and reliability.