dTRAM: discrete transition-based reweighting analysis method

Frank Noé (frank.noe@fu-berlin.de)
FU Berlin, Arnimallee 6, 14195 Berlin, Germany

Hao Wu (hwu@zedat.fu-berlin.de)
FU Berlin, Arnimallee 6, 14195 Berlin, Germany

Summary: The transition-based reweighting analysis method (TRAM) is a method for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). TRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), TRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, TRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g. rates, timescales). This article discusses discrete TRAM (dTRAM), in which we assume that the bias energies or reweighting factors can be discretized. In the limit of a single thermodynamic state, dTRAM estimates a maximum-likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.


We assume that a set of MD or MCMC simulations have been performed, each in one of $K$ thermodynamic states (indexed by the superscript $k\in\{1,...,K\}$). For simulations in which the thermodynamic state is frequently changed, such as in replica-exchange simulations, each contiguous sequence is treated as a separate trajectory at one of the $K$ thermodynamic states. Furthermore, we assume that the data has been discretized to a configuration space partition (indexed by subscripts $i,j\in\{1,...,n\}$). We are primarily interested in the free energy, or equivalently, the equilibrium probability of discrete states in some unbiased or reference ensemble, $(\pi_{i})_{i=1,...,n}$. In addition we might be interested in the equilibrium probability of states under all biased ensembles. If the simulation trajectories are long enough, we will also be able to compute kinetic properties, as discussed later.

We will be dealing with simulations where the unbiased, or reference, probability $\pi_{i}$ and the biased probability at simulation condition $k$, $\pi_{i}^{(k)}$, are related by the discrete reweighting principle: $$\pi_{i}^{(k)}=f^{(k)}\pi_{i}\gamma_{i}^{(k)}, \:\:\:\:(1)$$ with normalization constant $$f^{(k)}=\frac{1}{\sum_{l}\pi_{l}\gamma_{l}^{(k)}} \:\:\:\:(2)$$ and the known reweighting factor $$\gamma_{i}^{(k)}=\mathrm{e}^{-b_{i}^{(k)}} \:\:\:\:(3)$$ where $b_{i}^{(k)}$ is a bias energy.
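For concreteness, here is a minimal sketch of how the reweighting factors of Eq. (3) could be assembled for a set of harmonic umbrella potentials evaluated at the bin centers of a one-dimensional reaction coordinate. All names and numerical values (bin_centers, umbrella_centers, force_constant, kT) are illustrative assumptions, not part of the method.

```python
import numpy as np

# Hypothetical setup: n configuration bins along a 1D reaction coordinate and
# K harmonic umbrella potentials; kT sets the thermal energy unit.
bin_centers = np.linspace(0.0, 1.0, 50)          # n = 50 bin centers
umbrella_centers = np.linspace(0.1, 0.9, 10)     # K = 10 umbrella positions
force_constant = 100.0                           # spring constant (energy units)
kT = 1.0

# Bias energies b_i^(k) of bin i in thermodynamic state k, in units of kT
b = 0.5 * force_constant * (umbrella_centers[:, None] - bin_centers[None, :])**2 / kT

# Reweighting factors gamma_i^(k) = exp(-b_i^(k)), Eq. (3); shape (K, n)
gamma = np.exp(-b)
```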

Likelihood of WHAM, reversible Markov models and TRAM

The most common analysis method used in the present scenario is WHAM. WHAM uses the histogram counts $N_{i}^{(k)}$, i.e. the number of samples falling into bin $i$ at thermodynamic state $k$. Although WHAM was originally derived as a minimum-variance estimator [5][6], it can be derived as a maximum-likelihood estimator [1] using the likelihood $$L_{\mathrm{WHAM}}=\prod_{k}\prod_{i}(\pi_{i}^{(k)})^{N_{i}^{(k)}} \:\:\:\:(4)$$ which simply assumes that every count $N_{i}^{(k)}$ is independently drawn from the biased distribution $\pi_{i}^{(k)}$, which is linked to the unbiased distribution $\boldsymbol{\pi}$ via Eq. (1).

Let us now turn to reversible Markov state models [8][3][2][11]. The maximum-likelihood Markov model is the transition matrix $\mathbf{P}=(p_{ij})$ between $n$ discrete configuration states that maximizes the likelihood of the observed transitions between these states. The likelihood of a Markov model is well known [9], and is simply the product of all transition probabilities corresponding to the observed trajectory. To obtain a reversible Markov state model, this likelihood is maximized under the constraint that the transition probabilities $p_{ij}$ fulfill detailed balance with respect to the equilibrium distribution $\boldsymbol{\pi}$: $$L_{\mathrm{MSM}}=\prod_{i}\prod_{j}p_{ij}^{c_{ij}} \:\:\:\:(5)$$ such that $$\pi_{i}p_{ij}=\pi_{j}p_{ji}\:\:\:\:\:\mathrm{for}\:\mathrm{all}\: i,j, \:\:\:\:(6)$$ where $c_{ij}$ is the number of times the trajectories were observed in state $i$ at time $t$ and in state $j$ at a later time $t+\tau$, where $\tau$ is the lag time at which the Markov model is estimated. For an MSM, all simulation data producing the counts $c_{ij}$ have to be generated at the same thermodynamic state (e.g. temperature, Hamiltonian), and the estimated $\mathbf{P}$ is then only valid for this thermodynamic state. The reversibility of the MSM is ensured by the constraint equations (6). Estimators that maximize Eqs. (5-6) usually provide both $\mathbf{P}$ and the equilibrium distribution $\boldsymbol{\pi}$ [2][11].
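As a small illustration of where these counts come from, the following sketch counts transitions at lag time $\tau$ from a single discretized trajectory, using a sliding window over all time origins (one common counting convention; the function name and array layout are our own choices):

```python
import numpy as np

def count_transitions(dtraj, n, tau):
    """Count matrix C = (c_ij) at lag time tau from one discrete trajectory.

    dtraj : 1D integer array of configuration-state indices
    n     : number of configuration states
    tau   : lag time in trajectory steps
    """
    C = np.zeros((n, n), dtype=np.int64)
    for i, j in zip(dtraj[:-tau], dtraj[tau:]):
        C[i, j] += 1        # observed in state i at time t and in state j at t + tau
    return C
```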

In discrete TRAM [17][16], we combine these two approaches as follows: we avoid the WHAM assumption that every count is sampled from global equilibrium, and instead treat every trajectory at thermodynamic condition $k$ as a Markov chain with the configuration-state transition counts $c_{ij}^{(k)}$. However, in contrast to Markov models, we exploit the fact that equilibrium probabilities can be reweighted between different thermodynamic states via (1-2). The resulting likelihood of all $\mathbf{P}^{(k)}$ and $\boldsymbol{\pi}$, based on simulations at all thermodynamic states, can be formulated as: $$L_{\mathrm{TRAM}}=\prod_{k}\prod_{i}\prod_{j}(p_{ij}^{(k)})^{c_{ij}^{(k)}} \:\:\:\:(7)$$ such that $$\pi_{i}^{(k)}p_{ij}^{(k)}=\pi_{j}^{(k)}p_{ji}^{(k)}\:\:\:\:\:\mathrm{for}\:\mathrm{all}\: i,j,k. \:\:\:\:(8)$$ Here, $\mathbf{P}^{(k)}=(p_{ij}^{(k)})$ is the Markov transition matrix at thermodynamic state $k$, and $c_{ij}^{(k)}$ is the number of transitions observed at that simulation condition. $\boldsymbol{\pi}^{(k)}$ is the vector of equilibrium probabilities of the discrete states at thermodynamic state $k$. Note that all of these $K$ equilibrium distributions are coupled through Eqs. (1-2). Because each Markov model $\mathbf{P}^{(k)}$ must have the distribution $\boldsymbol{\pi}^{(k)}$ as a stationary distribution, all Markov models are coupled too. This is what makes the maximization of the TRAM likelihood Eqs. (7-8) difficult; it can be achieved neither by WHAM nor by existing MSM estimators. We call Eqs. (1,2,7,8), first proposed in [17], the TRAM equations. In dTRAM, the reweighting factors $\gamma_{i}^{(k)}$ in Eqs. (1-2) are obtained by a configuration space discretization or binning, as in WHAM. This approach should be distinguished from point-wise reweighting approaches such as MBAR and TRAM [13][7].

dTRAM log-likelihood and self-consistent solution equations

We will seek the maximum likelihood of Eq. (7). As is common practice, we work with the logarithm of the likelihood, because it has the same maximum as the likelihood but can be treated more easily: $$\log L_{\mathrm{TRAM}}=\sum_{k=1}^{K}\sum_{i=1}^{n}\sum_{j=1}^{n}c_{ij}^{(k)}\ln p_{ij}^{(k)} \:\:\:\:(9)$$ Moreover, we have the following constraints. Using detailed balance Eq. (8) with the reweighting equations (1-2) results in $$\pi_{i}\gamma_{i}^{(k)}p_{ij}^{(k)}=\pi_{j}\gamma_{j}^{(k)}p_{ji}^{(k)}\:\:\:\:\mathrm{for\: all\:}i,j,k. \:\:\:\:(10)$$ Note that the normalization factors, $f^{(k)}$, have cancelled. In addition, $\mathbf{P}^{(k)}$ should be a transition matrix: $$\sum_{j}p_{ij}^{(k)}=1\:\:\:\:\forall i,k \:\:\:\:(11)$$ and $\boldsymbol{\pi}$ should be a probability vector:

$$\sum_{j}\pi_{j}=1 \:\:\:\:(12)$$ The normalization of $\boldsymbol{\pi}^{(k)}$ is achieved by the normalization constants in Eqs. (1-2).

In order to solve the discrete TRAM problem we have to maximize the log-likelihood (9) under the constraints (10-12). The variables are both the unbiased equilibrium probabilities $\boldsymbol{\pi}$ (providing $n-1$ free variables due to the normalization constraint (12)), and the biased transition matrices $\mathbf{P}^{(k)}$ (each having $n(n-1)/2$ remaining free variables that are not fixed by constraints (10)-(11)).

Note that changing the simulation conditions, such as bias or temperature, will modify the transition probabilities in a non-trivial way that depends on the simulation condition, the integrator and thermostat used, and the state space discretization. Therefore we cannot relate the different $\mathbf{P}^{(k)}$ without restricting the generality of our estimator. The only general connection between these Markov models is the coupling of their equilibrium distributions via Eqs. (1-2).

Using Lagrange duality theory [16], the optimal solution of the discrete TRAM problem can be shown to fulfill the following two conditions: $$\sum_{k}\sum_{j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}}{\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)}}=\sum_{k}\sum_{j}c_{ji}^{(k)} \:\:\:\:(13)$$ $$\sum_{j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{j}^{(k)}\pi_{j}}{\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)}}=1 \:\:\:\:(14)$$ where $v_{i}^{(k)}$ are unknown Lagrange multipliers. Unfortunately, in the setting with detailed balance we cannot give a closed-form expression for them, but we can optimize them along with the equilibrium distribution $\boldsymbol{\pi}$. Note that the equations above do not require the transition probabilities $p_{ij}^{(k)}$ to be computed explicitly. If these are desired, they can be computed subsequently from the solution of Eqs. (13-14) via Eq. (18) below.

Solution methods

We can rewrite the self-consistency equations (13,14) to derive the following fixed-point iteration, which can be used to numerically solve the discrete TRAM problem. First we initialize $\boldsymbol{\pi}$ and $\mathbf{v}^{(k)}$ by the simple guess: $$\begin{aligned} \pi_{i}^{init} & := & 1/n\\ v_{i}^{(k),init} & := & \sum_{j}c_{ij}^{(k)} \:\:\:\:(15)\end{aligned}$$ and then we iterate the following two equations until $\boldsymbol{\pi}$ is converged: $$v_{i}^{(k),new}:=v_{i}^{(k)}\sum_{j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{j}^{(k)}\pi_{j}}{\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)}} \:\:\:\:(16)$$ $$\pi_{i}^{new}:=\frac{\sum_{k,j}c_{ji}^{(k)}}{\sum_{k,j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{i}^{(k)}v_{j}^{(k)}}{\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)}}} \:\:\:\:(17)$$ Instead of the simple $1/n$ initialization for $\pi_{i}$ in Eq. (15), we could use the standard WHAM algorithm to obtain a much better initial guess [5][6]. While a better starting point might be relevant for computational performance, we have not observed the estimation result to depend on this choice.
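The following is a minimal NumPy sketch of the fixed-point iteration (15)-(17). The array layout (count tensor C of shape (K, n, n), reweighting factors gamma of shape (K, n)), the function name and the convergence criterion are illustrative choices; the sketch assumes every configuration state has both incoming and outgoing counts.

```python
import numpy as np

def dtram_fixed_point(C, gamma, maxiter=100000, tol=1e-10):
    """Fixed-point iteration for dTRAM, Eqs. (15)-(17).

    C     : array of shape (K, n, n), transition counts c_ij^(k)
    gamma : array of shape (K, n), reweighting factors gamma_i^(k)
    Returns the unbiased stationary distribution pi (length n) and the
    Lagrange multipliers v of shape (K, n).
    """
    K, n, _ = C.shape
    Csym = C + np.transpose(C, axes=(0, 2, 1))   # c_ij^(k) + c_ji^(k)
    col_counts = C.sum(axis=(0, 1))              # sum_{k,j} c_ji^(k) for each i
    pi = np.ones(n) / n                          # Eq. (15): uniform initial guess
    v = C.sum(axis=2).astype(float)              # Eq. (15): v_i^(k) = sum_j c_ij^(k)

    for _ in range(maxiter):
        gp = gamma * pi                          # gamma_i^(k) * pi_i, shape (K, n)
        # denominator gamma_i pi_i v_j + gamma_j pi_j v_i, shape (K, n, n)
        denom = gp[:, :, None] * v[:, None, :] + gp[:, None, :] * v[:, :, None]
        denom = np.where(Csym > 0, denom, 1.0)   # avoid 0/0 where no counts exist
        # Eq. (16): update Lagrange multipliers
        v_new = v * np.sum(Csym * gp[:, None, :] / denom, axis=2)
        # Eq. (17): update stationary probabilities
        den = np.sum(Csym * gamma[:, :, None] * v[:, None, :] / denom, axis=(0, 2))
        pi_new = col_counts / den
        pi_new /= pi_new.sum()                   # enforce the normalization (12)

        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new, v_new
        pi, v = pi_new, v_new
    return pi, v
```

Here gamma would be the array computed from the bias energies via Eq. (3), and C[k] collects the transition counts of all trajectories run at thermodynamic state $k$.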

As an alternative to the fixed-point iteration (16,17), we can solve equations (13,14) by using the multidimensional Newton method for root finding available in many numerics packages.
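A hedged sketch of this alternative, using scipy.optimize.root on the residuals of Eqs. (13-14) in log-variables. Because Eqs. (13-14) are invariant under a uniform rescaling of $\boldsymbol{\pi}$ (summing Eq. (13) over $i$ yields an identity), one residual is replaced by the normalization constraint (12). The function name, the choice of log-variables and the solver method are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import root

def dtram_residual(x, C, gamma):
    """Residuals of Eqs. (13)-(14) in log-variables x = (log pi, log v),
    with the normalization of pi (Eq. 12) replacing one redundant equation."""
    K, n, _ = C.shape
    pi = np.exp(x[:n])
    v = np.exp(x[n:]).reshape(K, n)
    Csym = C + np.transpose(C, (0, 2, 1))
    gp = gamma * pi
    denom = gp[:, :, None] * v[:, None, :] + gp[:, None, :] * v[:, :, None]
    denom = np.where(Csym > 0, denom, 1.0)
    # Eq. (13) residuals, one per configuration state i
    res13 = (np.sum(Csym * gp[:, :, None] * v[:, None, :] / denom, axis=(0, 2))
             - C.sum(axis=(0, 1)))
    # Eq. (14) residuals, one per (k, i)
    res14 = np.sum(Csym * gp[:, None, :] / denom, axis=2) - 1.0
    res13[0] = pi.sum() - 1.0                 # replace one redundant equation
    return np.concatenate([res13, res14.ravel()])

# Usage sketch, with C and gamma as in the fixed-point example and K, n = C.shape[:2]:
# x0 = np.concatenate([np.log(np.ones(n) / n), np.log(C.sum(axis=2).ravel() + 1.0)])
# sol = root(dtram_residual, x0, args=(C, gamma), method='krylov')
# pi = np.exp(sol.x[:n]); pi /= pi.sum()
```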

Kinetics and the selection of the estimation lag time $\tau$

Given $\boldsymbol{\pi}$ and $\mathbf{v}^{(k)}$ at their optimal values, the transition probabilities can be computed for any thermodynamic state $k$ that was simulated at: $$p_{ij}^{(k)}(\tau)=\frac{\left(c_{ij}^{(k)}(\tau)+c_{ji}^{(k)}(\tau)\right)\gamma_{j}^{(k)}\pi_{j}}{\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)}} \:\:\:\:(18)$$ See [16] for the derivation. In Eq. (18) we have explicitly indicated that the transition counts, and hence the transition probabilities, are estimated at a given lag time $\tau$. As a consequence of the asymptotic correctness of dTRAM (see below), the estimates of $p_{ij}^{(k)}(\tau)$ are also asymptotically correct, that is, for either long trajectories or many short trajectories we obtain an unbiased estimate of the transition probabilities.
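As a small sketch, Eq. (18) can be evaluated directly from the converged $\boldsymbol{\pi}$ and $\mathbf{v}^{(k)}$ returned by the fixed-point iteration above (array names follow that sketch; at the optimum the rows sum to one by Eq. (14), up to the convergence tolerance):

```python
import numpy as np

def transition_matrix(C_k, gamma_k, pi, v_k):
    """Transition matrix P^(k) of Eq. (18) for one thermodynamic state k.

    C_k     : (n, n) transition counts c_ij^(k) at lag time tau
    gamma_k : (n,)   reweighting factors gamma_i^(k)
    pi, v_k : (n,)   converged stationary probabilities and Lagrange multipliers
    """
    Csym = C_k + C_k.T
    gp = gamma_k * pi
    denom = gp[:, None] * v_k[None, :] + gp[None, :] * v_k[:, None]
    denom = np.where(Csym > 0, denom, 1.0)   # avoid 0/0 for unobserved pairs
    return Csym * gp[None, :] / denom        # rows sum to ~1 by Eq. (14)
```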

In order to compute kinetics, such as transition rates or timescales, the transition matrices $\mathbf{P}^{(k)}$ must not only be valid at the lag time $\tau$ at which they were estimated, but must also be Markov models that correctly predict the kinetics at longer times. How adequate $\mathbf{P}^{(k)}$ is as a Markov model should be tested by validating that the relaxation timescales computed from the eigenvalues of $\mathbf{P}^{(k)}$ are approximately constant in $\tau$ [15] and by checking that the Chapman-Kolmogorov equation $\mathbf{P}^{(k)}(n\tau)\approx[\mathbf{P}^{(k)}(\tau)]^{n}$ approximately holds [11].
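For instance, the relaxation (implied) timescales can be obtained from the eigenvalues of a transition matrix estimated at lag time $\tau$; repeating the dTRAM estimation at several lag times and checking that these timescales are approximately constant is the test referred to above. A minimal sketch (for a reversible matrix the eigenvalues are real; the stationary eigenvalue 1 is discarded):

```python
import numpy as np

def implied_timescales(P, tau, num=5):
    """Relaxation timescales t_m = -tau / ln|lambda_m| from the leading
    non-stationary eigenvalues of the transition matrix P(tau)."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return -tau / np.log(eigvals[1:num + 1])
```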

The $\mathbf{P}^{(k)}$ can only be used as Markov models when the contiguous simulation trajectories are long enough to support a suitable lag time $\tau$. Generalized-ensemble simulations, such as replica exchange and parallel or simulated tempering, generally provide only very short contiguous trajectory pieces and are therefore only suitable for constructing Markov models of small systems with excellent configuration-state discretizations [14][4][10].

For umbrella sampling simulations, the construction of Markov models at the different umbrellas $k$ is usually possible, but for the unbiased system we can only obtain the equilibrium distribution $\boldsymbol{\pi}$ and not the Markov model. The reason is that the transition matrices (18) can only be estimated at the different simulation conditions $k$, whereas the equilibrium probability $\boldsymbol{\pi}$ of the chosen reference ensemble is computed through reweighting, and is thus also available for thermodynamic states not simulated at. However, the umbrella Markov models $\mathbf{P}^{(k)}$ can still provide useful information. For example, comparing the longest relaxation timescale of each umbrella Markov model with the respective simulation length can serve as an indicator of convergence and of whether some or all simulation lengths should be increased [12].

Unbiased MD simulations at different thermodynamic states are most suitable for constructing Markov models, because one has the choice of running simulations long enough to accommodate a suitable lag time $\tau$. A systematic way of constructing such simulations is the random swapping protocol [7]. Note that such simulations may violate not only sampling from global equilibrium, but also sampling from local equilibrium; as a result, the estimates of $\boldsymbol{\pi}$ and all associated stationary quantities may be biased for short lag times $\tau$. Therefore, when using dTRAM to analyze unbiased MD simulations at different thermodynamic states, one should compute the estimates as a function of $\tau$ in order to ensure that a large enough $\tau$ is used to obtain unbiased estimates.

dTRAM is asymptotically correct

Here we repeat the proof of [16] that dTRAM converges to the correct equilibrium distribution and transition probabilities in the limit of large statistics. In this limit, achieved either through long simulation trajectories or through many short simulation trajectories, the count matrices $\mathbf{C}^{(k)}=[c_{ij}^{(k)}]$ become: $$c_{ij}^{(k)}=N_{i}^{(k)}\bar{p}_{ij}^{(k)} \:\:\:\:(19)$$ where $\bar{\mathbf{P}}^{(k)}=[\bar{p}_{ij}^{(k)}]$ is the matrix of exact transition probabilities (no statistical error), which satisfies $$\bar{\pi}_{i}\gamma_{i}^{(k)}\bar{p}_{ij}^{(k)}=\bar{\pi}_{j}\gamma_{j}^{(k)}\bar{p}_{ji}^{(k)} \:\:\:\:(20)$$ where $\bar{\boldsymbol{\pi}}=[\bar{\pi}_{i}]$ are the exact stationary probabilities of the configuration states. Inserting (19) into the dTRAM equations (13-14), together with the candidate solution $\pi_{i}=\bar{\pi}_{i}$ and $v_{i}^{(k)}=N_{i}^{(k)}$, we obtain: $$\begin{aligned} \sum_{k}\sum_{j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}} & = & \sum_{k}\sum_{j}\frac{\left(N_{i}^{(k)}\bar{p}_{ij}^{(k)}+N_{j}^{(k)}\bar{p}_{ji}^{(k)}\right)\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}}\\ & = & \sum_{k}\sum_{j}\frac{\left(N_{i}^{(k)}\gamma_{j}^{(k)}\bar{\pi}_{j}+N_{j}^{(k)}\gamma_{i}^{(k)}\bar{\pi}_{i}\right)\bar{p}_{ji}^{(k)}N_{j}^{(k)}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}}\\ & = & \sum_{k}\sum_{j}\bar{p}_{ji}^{(k)}N_{j}^{(k)}\\ & = & \sum_{k}\sum_{j}c_{ji}^{(k)} \:\:\:\:(21)\end{aligned}$$ and thus the first dTRAM equation is satisfied. Furthermore, we obtain: $$\begin{aligned} \sum_{j}\frac{\left(c_{ij}^{(k)}+c_{ji}^{(k)}\right)\gamma_{j}^{(k)}\bar{\pi}_{j}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}} & = & \sum_{j}\frac{\left(N_{i}^{(k)}\bar{p}_{ij}^{(k)}+N_{j}^{(k)}\bar{p}_{ji}^{(k)}\right)\gamma_{j}^{(k)}\bar{\pi}_{j}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}}\\ & = & \sum_{j}\frac{N_{i}^{(k)}\bar{p}_{ij}^{(k)}\gamma_{j}^{(k)}\bar{\pi}_{j}+N_{j}^{(k)}\bar{p}_{ij}^{(k)}\gamma_{i}^{(k)}\bar{\pi}_{i}}{\gamma_{i}^{(k)}\bar{\pi}_{i}N_{j}^{(k)}+\gamma_{j}^{(k)}\bar{\pi}_{j}N_{i}^{(k)}}\\ & = & \sum_{j}\bar{p}_{ij}^{(k)}\\ & = & 1 \:\:\:\:(22)\end{aligned}$$ and thus the second dTRAM equation is satisfied as well. From these two equations we conclude that in the limit of large statistics (achieved either by long trajectories or by many short trajectories), the solution of the dTRAM equations converges to the correct equilibrium distribution and the correct transition probabilities. Note that we have assumed that all trajectory data are in local equilibrium within each starting state $i$; if this is not the case, the counts $c_{ij}$, and thus also the estimated $\pi_{i}$ and $p_{ij}$, will be biased. Thus, if the data are of such a nature that local equilibrium is a concern (e.g. metadynamics or uncoupled short MD trajectories), all estimates should be computed as a function of the lag time $\tau$.

WHAM is a special case of dTRAM

As shown in [16], the commonly used WHAM method is a special case of dTRAM. Starting from the dTRAM estimator, we employ the WHAM assumption that each sample at thermodynamic state $k$ is independently generated from the biased probability distribution $\boldsymbol{\pi}^{(k)}$. This means that the transition probabilities $p_{ij}^{(k)}$ are equal to the probability of observing state $j$ regardless of $i$: $$p_{ij}^{(k)}=\pi_{j}^{(k)} \:\:\:\:(23)$$ In a setting where counts are generated independently, the transition counts $c_{ij}^{(k)}$ can be modeled by splitting up the total counts ending in bin $j$ according to the equilibrium probability that they were in a given bin $i$ before: $$c_{ij}^{(k)}=\pi_{i}^{(k)}N_{j}^{(k)} \:\:\:\:(24)$$ Note that this choice reproduces the actually observed histogram counts, as $\sum_{i}c_{ij}^{(k)}=N_{j}^{(k)}\sum_{i}\pi_{i}^{(k)}=N_{j}^{(k)}$. Substituting $\pi_{j}^{(k)}$ in (23-24) using (1-2) and inserting the result into Eq. (18) yields the equalities $$N_{j}^{(k)}\gamma_{i}^{(k)}\pi_{i}+N_{i}^{(k)}\gamma_{j}^{(k)}\pi_{j}=\gamma_{i}^{(k)}\pi_{i}v_{j}^{(k)}+\gamma_{j}^{(k)}\pi_{j}v_{i}^{(k)} \:\:\:\:(25)$$ which must hold for all $i$, $j$ and $k$. This is exactly the case when the Lagrange multipliers become: $$v_{i}^{(k)}=N_{i}^{(k)}. \:\:\:\:(26)$$ Substituting (24) and (26) into (17) gives the solution for the unbiased stationary probabilities (with $N^{(k)}=\sum_{i}N_{i}^{(k)}$ the total number of samples at thermodynamic state $k$): $$\pi_{i}^{\mathrm{new}}=\frac{\sum_{k}N_{i}^{(k)}}{\sum_{k}N^{(k)}f^{(k)}\gamma_{i}^{(k)}} \:\:\:\:(27)$$ $$f^{(k),\mathrm{new}}=\frac{1}{\sum_{j}\gamma_{j}^{(k)}\pi_{j}} \:\:\:\:(28)$$ which is exactly the WHAM algorithm [5][6]. Therefore, WHAM is a special case of dTRAM, suggesting that dTRAM should yield estimates that are at least as good as those of WHAM, and better estimates when the WHAM assumption of sampling from global equilibrium at each condition $k$ does not hold.
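For comparison with the dTRAM iteration above, here is a minimal sketch of the resulting WHAM self-consistent iteration, Eqs. (27)-(28), which operates only on the histogram counts (array names follow the earlier sketches and are illustrative):

```python
import numpy as np

def wham(N, gamma, maxiter=100000, tol=1e-10):
    """Self-consistent WHAM iteration, Eqs. (27)-(28).

    N     : array of shape (K, n), histogram counts N_i^(k)
    gamma : array of shape (K, n), reweighting factors gamma_i^(k)
    Returns the unbiased stationary probabilities pi and the
    normalization constants f^(k).
    """
    K, n = N.shape
    N_tot = N.sum(axis=1)                      # N^(k): total samples per state k
    pi = np.ones(n) / n
    for _ in range(maxiter):
        f = 1.0 / (gamma @ pi)                 # Eq. (28)
        pi_new = N.sum(axis=0) / ((N_tot * f) @ gamma)   # Eq. (27)
        pi_new /= pi_new.sum()
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new, 1.0 / (gamma @ pi_new)
        pi = pi_new
    return pi, 1.0 / (gamma @ pi)
```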

A reversible Markov state model is a special case of dTRAM

Now we relate dTRAM to reversible Markov models. Suppose we only have a single thermodynamic state $k$ and one or several simulation trajectories generating counts $c_{ij}$ at this condition. In this case we can drop the index $k$, all reweighting factors are unity $\gamma_{i}\equiv1$, and equations (13-14) become: $$\sum_{j}\frac{\left(c_{ij}+c_{ji}\right)\pi_{i}v_{j}}{\pi_{i}v_{j}+\pi_{j}v_{i}}=\sum_{j}c_{ji} \:\:\:\:(29)$$ $$\sum_{j}\frac{\left(c_{ij}+c_{ji}\right)\pi_{j}}{\pi_{i}v_{j}+\pi_{j}v_{i}}=1 \:\:\:\:(30)$$ We can combine both equations to: $$\begin{aligned} \sum_{j}c_{ji}+v_{i} & = & \sum_{j}\frac{\left(c_{ij}+c_{ji}\right)\pi_{i}v_{j}}{\pi_{i}v_{j}+\pi_{j}v_{i}}+\sum_{j}\frac{\left(c_{ij}+c_{ji}\right)\pi_{j}v_{i}}{\pi_{i}v_{j}+\pi_{j}v_{i}}\nonumber \\ & = & \sum_{j}\left(c_{ij}+c_{ji}\right) \:\:\:\:(31)\end{aligned}$$ Thus we solve for the Lagrange multipliers: $$v_{i}=\sum_{j}c_{ij}=N_{i}. \:\:\:\:(32)$$ Substituting $v_{i}=N_{i}$ into (14) leads to the optimality condition for $\pi_{i}$: $$\begin{aligned} \pi_{i} & = & \sum_{j}\frac{c_{ij}+c_{ji}}{\frac{c_{i}}{\pi_{i}}+\frac{c_{j}}{\pi_{j}}} \:\:\:\:(33)\end{aligned}$$ Inserting the result into (18) yields the reversible transition matrix estimator: $$\pi_{i}p_{ij}=\frac{c_{ij}+c_{ji}}{\frac{c_{j}}{\pi_{j}}+\frac{c_{i}}{\pi_{i}}} \:\:\:\:(34)$$ which is identical to the known optimality condition for a reversible Markov model transition matrix and the corresponding iterative estimator [2][11]. Therefore, a reversible MSM is a special case of dTRAM.
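A minimal sketch of this special case: with $K=1$ and $\gamma_{i}\equiv1$, the dTRAM iteration reduces to the familiar self-consistent estimator for a reversible MSM, Eqs. (33)-(34). The function name and convergence criterion are illustrative, and all configuration states are assumed to be visited.

```python
import numpy as np

def reversible_msm(C, maxiter=100000, tol=1e-12):
    """Self-consistent iteration for a reversible MSM, Eq. (33), and the
    detailed-balance transition matrix of Eq. (34).

    C : (n, n) transition count matrix c_ij at lag time tau
    """
    n = C.shape[0]
    Csym = C + C.T
    c = C.sum(axis=1)                          # row counts c_i = sum_j c_ij
    pi = np.ones(n) / n
    for _ in range(maxiter):
        denom = c[:, None] / pi[:, None] + c[None, :] / pi[None, :]
        pi_new = np.sum(Csym / denom, axis=1)  # Eq. (33)
        pi_new /= pi_new.sum()
        if np.max(np.abs(pi_new - pi)) < tol:
            pi = pi_new
            break
        pi = pi_new
    denom = c[:, None] / pi[:, None] + c[None, :] / pi[None, :]
    P = Csym / denom / pi[:, None]             # Eq. (34): p_ij = (pi_i p_ij) / pi_i
    return pi, P
```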

Acknowledgements:

The major part of this article has been published in [16].

Citing dTRAM:

The dTRAM problem was originally introduced in [17]. The optimal solution that is now known as the dTRAM method described here was published in [16].

Bibliography

[1]: Christian Bartels and Martin Karplus: Multidimensional adaptive umbrella sampling: Application to main chain and side chain peptide conformations. J. Comput. Chem. 18, 1450-1462 (1997).

[2]: G. R. Bowman, K. A. Beauchamp, G. Boxer and V. S. Pande: Progress and challenges in the automated construction of Markov state models for full protein systems. J. Chem. Phys. 131, 124101 (2009).

[3]: N. V. Buchete and G. Hummer: Coarse Master Equations for Peptide Folding Dynamics. J. Phys. Chem. B 112, 6057-6069 (2008).

[4]: John D. Chodera, William C. Swope, Frank Noé, Jan-Hendrik Prinz and Vijay S. Pande: Dynamical reweighting: Improved estimates of dynamical properties from simulations at multiple temperatures. J. Chem. Phys. 134, 244107 (2011).

[5]: Alan M. Ferrenberg and Robert H. Swendsen: Optimized Monte Carlo data analysis. Phys. Rev. Lett. 63, 1195-1198 (1989).

[6]: S. Kumar, D. Bouzida, R. H. Swendsen, P. A. Kollman and J. M. Rosenberg: The weighted histogram analysis method for free-energy calculations on biomolecules. I. The method. J. Comput. Chem. 13, 1011-1021 (1992).

[7]: Antonia S. J. S. Mey, Hao Wu and Frank Noé: TRAM: Estimating equilibrium expectations from time-correlated simulation data at multiple thermodynamic states. http://arxiv.org/abs/1407.0138 (2014).

[8]: F. Noé: Probability Distributions of Molecular Observables computed from Markov Models. J. Chem. Phys. 128, 244103 (2008).

[9]: J. R. Norris: Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press (1998).

[10]: J.-H. Prinz, J. D. Chodera, V. S. Pande, W. C. Swope, J. C. Smith and F. Noé: Optimal use of data in parallel tempering simulations for the construction of discrete-state Markov models of biomolecular dynamics. J. Chem. Phys. 134, 244108 (2011).

[11]: J.-H. Prinz, H. Wu, M. Sarich, B. Keller, M. Senne, M. Held, J. D. Chodera, C. Schütte and F. Noé: Markov models of molecular kinetics: Generation and Validation. J. Chem. Phys. 134, 174105 (2011).

[12]: Edina Rosta and Gerhard Hummer: Free energies from dynamic weighted histogram analysis using unbiased Markov state model. J. Chem. Theo. Comput. 11, 276-285 (2015).

[13]: Michael R. Shirts and John D. Chodera: Statistically optimal analysis of samples from multiple equilibrium states. J. Chem. Phys. 129, 124105 (2008).

[14]: S. Sriraman, I. G. Kevrekidis and G. Hummer: Coarse Master Equation from Bayesian Analysis of Replica Molecular Dynamics Simulations. J. Phys. Chem. B 109, 6479-6484 (2005).

[15]: W. C. Swope, J. W. Pitera and F. Suits: Describing protein folding kinetics by molecular dynamics simulations: 1. Theory. J. Phys. Chem. B 108, 6571-6581 (2004).

[16]: H. Wu, A. S. J. S. Mey, E. Rosta and F. Noé: Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states. J. Chem. Phys. 141, 214106 (2014).

[17]: Hao Wu and Frank Noé: Optimal estimation of free energies and stationary densities from multiple biased simulations. SIAM Multiscale Model. Simul. 12, 25-54 (2014).