Abstract
The main physical features and operating principles of isothermal nanomachines in the microworld, common to both classical and quantum machines, are reviewed. Special attention is paid to the dual, constructive role of dissipation and thermal fluctuations, the fluctuation–dissipation theorem, heat losses and free energy transduction, thermodynamic efficiency, and thermodynamic efficiency at maximum power. Several basic models are considered and discussed to highlight generic physical features. This work examines some common fallacies that continue to plague the literature. In particular, the erroneous beliefs that one should minimize friction and lower the temperature for high performance of Brownian machines, and that the thermodynamic efficiency at maximum power cannot exceed one-half, are discussed. The emerging topic of anomalous molecular motors operating subdiffusively but very efficiently in the viscoelastic environment of living cells is also discussed.
Introduction
A myriad of minuscule molecular nanomotors (not visible in standard, classical, optical microscopes) operate in living cells and perform various tasks. These utilize metabolic energy, for example, the energy stored in ATP molecules maintained at out-of-equilibrium concentrations, or in nonequilibrium ion concentrations across biological membranes. Conversely, they may replenish the reserves of metabolic energy using other sources of energy, for example, light by plants, or the energy of covalent bonds of various food molecules by animals [1]. The main physical principles of their operation are more or less understood by now [2,3], although the statistico-mechanical details of any single particular molecular motor (e.g., a representative of the large family of kinesin motors) are not well understood.
The advances and perspectives of nanotechnology have inspired us to devise our own nanomotors [4-6]. Learning from nature can help to make artificial nanomotors more efficient, and possibly even better than those found in nature. Along this way, understanding the main physical operating principles within the simplest, minimalist physical models can indeed be of help.
First of all, any periodically operating motor or engine requires a working body undergoing cyclic changes and a source of energy to drive those changes. Furthermore, it should be capable of doing work on external bodies. In the case of thermal heat engines, the source of energy is provided by heat exchange with two heat reservoirs or baths at different temperatures, T_{1} and T_{2} > T_{1}, with the maximum possible Carnot efficiency of η_{C} = 1 − T_{1}/T_{2} [7]. This very famous textbook result of classical thermodynamics (or rather thermostatics) is modified when the heat flow is considered as a function of time. Thus, for a heat engine operating at maximum power in a finite time, one obtains the Curzon–Ahlborn result, η_{CA} = 1 − (T_{1}/T_{2})^{1/2} [7,8]. The analogy with heat engines is, however, rather misleading for isothermal engines operating at a single temperature, T_{1} = T_{2}. Here, the analogy with electrical motors is much more relevant. The analogy becomes almost literal in the case of the rotary ATP synthase [9] or flagellar bacterial motors (the electrical nanomotors of living cells). Here, the energy of a proton electrochemical gradient (a rechargeable electrochemical battery) is used to synthesize ATP molecules out of ADP and the orthophosphate, P_{i} (the useful work done), in the case of ATP synthase, or to produce mechanical motion by flagellar motors [1,3]. An ATP synthase nanomotor can also operate in reverse [9], and the energy of ATP hydrolysis can then be used to pump protons against their electrochemical gradient to recharge the “battery”. These and similar nanomotors can operate at ambient temperature in a highly dissipative environment with nearly 100% thermodynamic efficiency, defined as the ratio of the useful work done to the input energy spent. This is the first counterintuitive, remarkable feature that needs to be explained. It is easy to derive this result within the simplest model (see below) for an infinitesimally slowly operating motor at zero power.
At maximum power, at a finite speed, the maximum thermodynamic efficiency within such a model is one-half. This is still believed by many to be the maximum theoretically possible thermodynamic efficiency of isothermal motors at maximum power. However, this belief is born from underestimating the role played by thermal fluctuations in nonlinear stochastic dynamics and the role of the fluctuation–dissipation theorem (FDT) on the nano- and microscale. It is generally wrong; it is valid only for some particular dynamics, as clarified below by giving three counterexamples. The presence of strong thermal fluctuations at ambient temperature, playing a constructive and useful role, is a profound physical feature of nanomotors as compared with the macroscopic motors of our everyday experience. It is necessary to understand and to develop an intuition for this fundamental feature. Nanomotors are necessarily Brownian engines, very different from their macroscopic counterparts.
Review
Fluctuation–dissipation theorem, the role of thermal fluctuations
Motion in any dissipative environment is necessarily related to the dissipation of energy. Particles experience a frictional force, which in the simplest case of Stokes friction is linearly proportional to the particle velocity, with a viscous friction coefficient denoted as η. When the corresponding frictional energy losses are no longer compensated for by an energy supply, the motion eventually stops. However, this does not happen in the microworld for micro- or nanosized particles. Their stochastic Brownian motion can persist forever, even at thermal equilibrium. The energy necessary for this is supplied by thermal fluctuations. Therefore, friction and thermal noise are intimately related, which is the physical content of the fluctuation–dissipation theorem [10]. Statistical mechanics allows the development of a coherent picture to rationalize this fundamental feature of Brownian motion.
We start with some generalities that can be easily understood within a standard dynamical approach to Brownian motion, which can be traced back to pioneering contributions by Bogolyubov [11], Ford, Kac and Mazur [12,13], and others. Consider a motor particle with mass M, coordinate x, and momentum p. It is subjected to a regular, dynamical force f(x,t), as well as the frictional and stochastically fluctuating forces of the environment. The latter are modeled by an elastic coupling of this particle to a set of N harmonic oscillators with masses m_{i}, coordinates q_{i}, and momenta p_{i}. This coupling is of the form Σ_{i}κ_{i}(q_{i} − x)²/2, with spring constants κ_{i}. This is a standard mechanistic model of nonlinear, classical Brownian motion, known within quantum dynamics as the Caldeira–Leggett model [14] upon a modification of the coupling term or a canonical transformation [13]. Both classically and quantum mechanically [13] (in the Heisenberg picture), the equations of motion are

dx/dt = p/M, dp/dt = f(x,t) + Σ_{i}κ_{i}(q_{i} − x),   (Equation 1)

dq_{i}/dt = p_{i}/m_{i}, dp_{i}/dt = −κ_{i}(q_{i} − x).   (Equation 2)
In the quantum case, x, q_{i}, p, p_{i} are operators obeying the commutation relations [x,p] = iℏ, [q_{i},p_{j}] = iℏδ_{ij}, [x,q_{i}] = 0, [p,p_{i}] = 0. The force, f(x,t), is also an operator. Using the Green's functions of the harmonic oscillators, the dynamics of the bath oscillators can be eliminated (a projection of the hyperdimensional dynamics onto the (x,p) plane) and represented simply by the initial values q_{i}(0) and p_{i}(0). This results in a generalized Langevin equation (GLE) for the motor variables,

M d²x/dt² = f(x,t) − ∫₀^{t}η(t − t′)(dx(t′)/dt′)dt′ + ξ(t),   (Equation 3)

where

η(t) = Σ_{i}κ_{i}cos(ω_{i}t)   (Equation 4)

is a memory kernel and

ξ(t) = Σ_{i}κ_{i}[(q_{i}(0) − x(0))cos(ω_{i}t) + (p_{i}(0)/(m_{i}ω_{i}))sin(ω_{i}t)]   (Equation 5)

is a bath force, where ω_{i} = (κ_{i}/m_{i})^{1/2} are the frequencies of the bath oscillators. Equation 3 is still a purely dynamical equation of motion that is exact. The dynamics of [x(t),p(t)] is completely time-reversible for any given q_{i}(0) and p_{i}(0) by derivation, unless the time-reversibility is dynamically broken by f(x,t) or by boundary conditions. Hence, time-irreversibility within dissipative Langevin dynamics is a statistical effect due to averaging over many trajectories. Such an averaging cannot be undone, i.e., there is no way to restore a single trajectory from the ensemble average. Considering the classical dynamics first, we choose the initial q_{i}(0) and p_{i}(0) from a canonical, hyperdimensional, Gaussian distribution, ρ(q_{i}(0),p_{i}(0)), zero-centered in the p_{i}(0) subspace and centered around x(0) in the q_{i}(0) subspace, and characterized by the thermal bath temperature T, as in a typical molecular dynamics setup. Then, each ξ(t) presents a realization of a stationary, zero-mean, Gaussian stochastic process, which can be completely characterized by its autocorrelation function, ⟨ξ(t)ξ(t′)⟩. Here, ⟨…⟩ denotes statistical averaging done with ρ(q_{i}(0),p_{i}(0)). An elementary calculation yields the fluctuation–dissipation relation (FDR), also named the second FDT by Kubo [10]:

⟨ξ(t)ξ(t′)⟩ = k_{B}Tη(t − t′).   (Equation 6)
Notice that it is valid even for a thermal bath consisting of a single oscillator. However, a quasi-continuum of oscillators is required for the random force correlations to decay to zero in time. This is necessary for ξ(t) to be ergodic in correlations. Kubo obtained this FDT in a very different way, namely by considering the processes of dissipation caused by a phenomenological memory friction characterized by the memory kernel η(t) (i.e., heat given by the particle to the thermal bath) and absorption of energy from the random force ξ(t) (i.e., heat absorbed from the thermal bath). Here, both processes are balanced at thermal equilibrium, and the averaged kinetic energy of the Brownian particle is k_{B}T/2. This is in accordance with the equipartition theorem of classical equilibrium statistical mechanics. This is a very important point. At thermal equilibrium, the net heat exchange between the motor and its environment is zero for arbitrarily strong dissipation. This is a primary, fundamental reason why the thermodynamic efficiency of isothermal nanomotors can in principle achieve unity, in spite of strong dissipation. For example, the thermodynamic efficiency of an F1-ATPase rotary motor can be close to 100%, as recent experimental work has demonstrated [15]. For this to happen, the motor must operate as closely as possible to thermal equilibrium in order to avoid net heat losses. One profound lesson from this is that there is no need to minimize friction on the nanoscale. This is a very misleading misconception that continues to plague research on Brownian motors. For example, the so-called dissipationless ratchets are worthless (more on this below). Very efficient motors can work at ambient temperature and arbitrarily strong friction. There is no need to go to deep, quantum-cold temperatures, which require a huge energy expenditure to create in a laboratory.
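The FDR can be checked directly by sampling the bath force of this oscillator model. The following minimal sketch (a hypothetical three-oscillator bath with toy values of κ_{i} and ω_{i}, dimensionless units, NumPy only) draws the equilibrium Gaussian initial conditions, builds ξ(t), and verifies that the ensemble average ⟨ξ(0)ξ(t)⟩ reproduces k_{B}T Σ_{i}κ_{i}cos(ω_{i}t), i.e., k_{B}T times the memory kernel of the model:

```python
import numpy as np

rng = np.random.default_rng(1)
kBT = 1.0
kappas = np.array([0.5, 1.0, 2.0])   # toy spring constants kappa_i
omegas = np.array([1.0, 2.3, 3.7])   # toy bath frequencies omega_i
t = np.linspace(0.0, 5.0, 51)
n_real = 100_000                     # number of bath realizations

# Equilibrium Gaussian initial conditions: a_i = q_i(0) - x(0) and
# b_i = p_i(0)/(m_i*omega_i) both have variance kBT/kappa_i
a = rng.standard_normal((n_real, 3)) * np.sqrt(kBT / kappas)
b = rng.standard_normal((n_real, 3)) * np.sqrt(kBT / kappas)

# xi(t) = sum_i kappa_i [a_i cos(w_i t) + b_i sin(w_i t)], per realization
cos_t = np.cos(np.outer(omegas, t))              # shape (3, len(t))
sin_t = np.sin(np.outer(omegas, t))
xi = (kappas * a) @ cos_t + (kappas * b) @ sin_t  # shape (n_real, len(t))

corr = np.mean(xi[:, :1] * xi, axis=0)           # <xi(0) xi(t)>
eta_t = kappas @ cos_t                           # memory kernel eta(t)
# FDR (second FDT): corr ≈ kBT * eta_t, up to sampling error
```

Note that with only three oscillators the correlations do not decay; a quasi-continuum of frequencies would be needed for that, exactly as stated above.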
Every thermal bath and its coupling to the particle can be characterized by the bath spectral density, J(ω) = (π/2)Σ_{i}κ_{i}ω_{i}δ(ω − ω_{i}) [13,14,16]. It allows η(t) to be expressed as

η(t) = (2/π)∫₀^{∞}[J(ω)/ω]cos(ωt)dω,

and the noise spectral density, via the Wiener–Khinchin theorem, S(ω) = ∫_{−∞}^{∞}⟨ξ(t)ξ(0)⟩e^{iωt}dt, as S(ω) = 2k_{B}TJ(ω)/ω. The strict ohmic model, J(ω) = ηω, without a frequency cutoff, corresponds to the standard Langevin equation,

M d²x/dt² + η dx/dt = f(x,t) + ξ(t),   (Equation 7)

with uncorrelated white Gaussian thermal noise, ⟨ξ(t)ξ(t′)⟩ = 2ηk_{B}Tδ(t − t′). Such noise is singular, and its mean-square amplitude is infinite. This is, of course, a very strong idealization. A frequency cutoff must be physically present, which results in a thermal GLE description with correlated Gaussian noise.
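In the overdamped limit, the standard Langevin equation is straightforward to integrate numerically. A minimal Euler–Maruyama sketch (assumed dimensionless units; the white-noise intensity 2ηk_{B}T is fixed by the FDT) checks free Brownian motion against the Einstein result ⟨x²(t)⟩ = 2Dt with D = k_{B}T/η:

```python
import numpy as np

def simulate_overdamped(force, x0=0.0, eta=1.0, kBT=1.0, dt=1e-3,
                        n_steps=10_000, n_traj=2_000, seed=0):
    """Euler-Maruyama integration of eta*dx/dt = force(x) + xi(t),
    with white Gaussian noise of intensity 2*eta*kBT (set by the FDT)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_traj, float(x0))
    noise_amp = np.sqrt(2.0 * kBT * dt / eta)
    for _ in range(n_steps):
        x += force(x) * dt / eta + noise_amp * rng.standard_normal(n_traj)
    return x

# Free Brownian motion: the mean-square displacement grows as 2*D*t
x = simulate_overdamped(lambda x: np.zeros_like(x))
t_total = 1e-3 * 10_000                 # total simulated time, t = 10
msd = np.mean(x**2)                     # ≈ 2*(kBT/eta)*t_total = 20
```

The same integrator can be reused with a nonzero `force` for the tilted periodic potentials of the motor models below.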
The above derivation can also be straightforwardly repeated for quantum dynamics. This leads to a quantum GLE, which formally looks the same as Equation 3 in the Heisenberg picture, with only one difference: The corresponding random force becomes operator-valued, with a complex-valued autocorrelation function, as shown in Equation 8 [13,16,17].
Here, the averaging is done with the equilibrium density operator of the bath oscillators. The classical Kubo result (Equation 6) is restored in the formal limit ℏ → 0. To obtain a quantum generalization of Equation 7, one can introduce a frequency cutoff, J(ω) = ηωexp(−ω/ω_{c}), and split the noise autocorrelation function into a sum of zero-point (T = 0) and thermal quantum noise contributions. This yields
with
where τ_{T} ~ ℏ/(k_{B}T) is the characteristic time of thermal quantum fluctuations. Notice the dramatic change of the quantum thermal correlations, from a delta function at τ_{T} → 0, to an algebraic decay for finite τ_{T} and t >> τ_{T}. The total integral of δ_{T}(t) is unity, and the total integral of the real part of the T = 0 contribution is zero. In the classical limit, ℏ → 0, δ_{T}(t) becomes a delta function. Notice also that the real part of the first, complex-valued term in Equation 9, which corresponds to zero-point quantum fluctuations, starts from a positive singularity at the origin, t = 0, in the classical, white-noise limit, ω_{c} → ∞, and becomes negative for t > 0. Hence, it lacks a characteristic time scale. However, it precisely cancels the same contribution, but with the opposite sign, stemming formally from the thermal part in the limit t >> τ_{T} at T ≠ 0. Thus, the quantum correlations that correspond to Stokes or ohmic friction decay nearly exponentially for ω_{c} >> 1/τ_{T}, except under the physically unachievable condition T = 0. Here, we see two profound quantum-mechanical features in the quantum, operator-valued version of the classical Langevin equation (Equation 7) with memoryless Stokes friction: First, thermal quantum noise is correlated. Second, zero-point quantum noise is present. This is the reason why quantum Brownian motion would not stop even at the absolute zero of temperature, T = 0. A proper treatment of these quantum-mechanical features has produced a controversial discussion in the literature in the case of nonlinear quantum dynamics, when f(x) is not constant or depends nonlinearly on x (see [16,17] for further references and details). Indeed, dissipative quantum dynamics cannot be fundamentally Markovian, as this short exposition already reveals. This is contrary to a popular approach based on the idea of quantum semigroups, which guarantees the complete positivity of such dynamics [18].
The main postulate of the corresponding theory (the semigroup property of the evolution operator, expressing the Markovian character of the evolution) simply cannot be justified on a fundamental level, thinking in terms of interacting particles and fields (a quantum field theory approach). Nevertheless, Lindblad theory and its allies, for example, the stochastic Schrödinger equation [16], are extremely useful in quantum optics, where the dissipation strength is very small. The application to condensed matter with appreciably strong dissipation should, however, be done with great care. It can lead to clearly incorrect results, which contradict exactly solvable models [16]. Nonlinear quantum Langevin dynamics is very tricky, even within a semiclassical treatment, where the dynamics is treated as classical but with colored classical noise corresponding to the real part of the quantum noise autocorrelation function treated as a c-number. As a matter of fact, quantum dissipative dynamics is fundamentally non-Markovian, which is a primary source of all the difficulties and confusion. Exact analytical results are practically absent (except for linear dynamics), and various Markovian approximations to nonlinear non-Markovian dynamics are controversial, being restricted to particular parameter domains (e.g., weak system–bath coupling, or weak tunnel coupling with strong system–bath coupling). Moreover, they are susceptible to producing unphysical results (such as violations of the second law of thermodynamics) beyond their validity domains.
Furthermore, a profoundly quantum dynamics often has just a few relevant, discrete quantum energy levels, rather than a continuum of quantum states. A two-state quantum system serves as a prominent example. Here, one may prefer a different approach to dissipative quantum dynamics (e.g., the reduced density operator method), leading to quantum kinetic equations for the level populations and system coherences [19-22]. This provides a description on the ensemble level and relates to the quantum Langevin equation in a similar manner as the classical Fokker–Planck equation (ensemble description) relates to the classical Langevin equation (description on the level of single trajectories).
Minimalist model of a Brownian motor
A minimalist model of a motor can be given by 1D cycling of the motor particle in a periodic potential, V(x) = V(x + 2π), as shown in Figure 1. This models the periodic turnover of the motor within a continuum of intrinsic, conformational states [3], where x is a cyclic chemical reaction coordinate. The motor cycles can be driven by energy supplied by a constant driving force or torque, F, with free energy Δμ = 2πF spent per motor turn. The motor can perform useful work against an opposing torque or load, f_{L}, so that the total potential energy is U(x) = V(x) − (F − f_{L})x. The overdamped Langevin dynamics is described by

η dx/dt = f(x) + F − f_{L} + ξ(t),   (Equation 11)
where f(x) = −dV(x)/dx, with uncorrelated white Gaussian thermal noise ξ(t), ⟨ξ(t)ξ(t′)⟩ = 2ηk_{B}Tδ(t − t′). By introducing the stochastic dissipative environmental force, −η dx/dt + ξ(t), it can be understood as a force balance equation. The net heat exchange with the environment is Q(t) = ⟨∫₀^{t}[η(dx(t′)/dt′) − ξ(t′)](dx(t′)/dt′)dt′⟩ [23], where ⟨…⟩ denotes an ensemble average over many trajectory realizations. Furthermore, E_{in}(t) = F⟨x(t) − x(0)⟩ is the energy pumped into the motor turnovers, and W(t) = f_{L}⟨x(t) − x(0)⟩ is the useful work done against the external torque. The fluctuations of the motor energy are bounded and can be neglected in the balance of energy in the long run, since Q(t), E_{in}(t), and W(t) typically grow linearly (or possibly sublinearly in the case of anomalously slow dynamics with memory, see below) in time. The energy balance yields the first law of thermodynamics: Q(t) + W(t) = E_{in}(t). The thermodynamic efficiency is obviously
R = W(t)/E_{in}(t) = f_{L}/F (Equation 12), independent of the potential V(x). It reaches unity at the stalling force f_{L} = F. Then, the motor operates infinitesimally slowly, with ⟨dx/dt⟩ → 0. Henceforth, of major interest is the efficiency R_{max} at the maximum of the motor power P_{W} = f_{L}⟨dx/dt⟩. This is easy to find in the absence of a potential, i.e., for f(x) = 0. Indeed, P_{W} = f_{L}(F − f_{L})/η. This shows a parabolic dependence on f_{L} and reaches its maximum at f_{L} = F/2. Therefore, R_{max} = 1/2. Given this simple result, many have believed until now that this is a theoretical bound for the efficiency of isothermal motors at maximum power.
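The parabolic power curve of the f(x) = 0 case can be maximized in a few lines. A minimal sketch in assumed dimensionless units (F = η = 1), using the mean velocity (F − f_{L})/η:

```python
import numpy as np

F, eta = 1.0, 1.0
f_L = np.linspace(0.0, F, 100_001)   # load torque, 0 <= f_L <= F
omega_mean = (F - f_L) / eta         # mean rotation rate for f(x) = 0
P_W = f_L * omega_mean               # useful power against the load
f_star = f_L[np.argmax(P_W)]         # optimal load at maximum power
R_max = f_star / F                   # efficiency at maximum power
# R_max = 0.5: at maximum power the load is exactly half the driving torque
```

This reproduces the one-half result of the linear (potential-free) model; the nonlinear counterexamples below show that it is not a general bound.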
Digression on the role of quantum fluctuations. Within the simplest model considered (f(x) = 0), quantum noise effects do not asymptotically play any role for T > 0. This is not generally so, especially within the framework of nonlinear dynamics and at low temperatures, where they can be dominant [27]. Most strikingly, the role of the zero-point fluctuations of vacuum (i.e., quantum noise at T = 0) is demonstrated in the Casimir effect: Two metallic plates will attract each other in an attempt to minimize the “dark energy” of the (quantized) electromagnetic standing waves in the space between the two plates [28]. This effect can be used, in principle, to make a one-shot motor, which extracts energy from zero-point fluctuations of vacuum, or “dark energy”, by doing work against an external force, f_{L}. No violation of the second law of thermodynamics and/or the law of energy conservation occurs, because such a “motor” cannot work cyclically. In order to repeatedly extract energy from vacuum fluctuations, one must again separate the two plates and invest at least the same amount of energy in this. This example shows, nevertheless, that the role of quantum noise effects can be highly nontrivial, very important, poorly understood, and possibly confusing. And the possibility to utilize “dark energy” to do useful work in a giant, cosmic “one-shot engine” is really intriguing!
Thermodynamic efficiency of isothermal engines at maximum power can be larger than one-half
Here it is demonstrated that the belief that R_{max} = 1/2 is a theoretical maximum is completely wrong and that, in accord with some recent studies [29-32], R_{max} can also achieve unity within a nonlinear dynamics regime. For this, we first find the stationary mean velocity in a biased periodic potential. This can be done by solving the Smoluchowski equation for the probability density P(x,t), which can be written as a continuity equation, ∂P(x,t)/∂t + ∂J(x,t)/∂x = 0, with the probability flux J(x,t) written in the transport form

J(x,t) = −De^{−βU(x)}(∂/∂x)[e^{βU(x)}P(x,t)].   (Equation 13)
This Smoluchowski equation provides an ensemble description, the counterpart to the Langevin equation (Equation 11). Here, D is the diffusion coefficient, related to temperature and viscous friction by the Einstein relation, D = k_{B}T/η, and β = 1/(k_{B}T) is the inverse temperature. For any biased periodic potential, the constant flux, J = ω/(2π) = constant, driven by Δμ < 0, as well as the corresponding nonequilibrium steady-state distribution, can be found by twice integrating Equation 13, using normalization and the periodicity of the stationary distribution. This yields the famous Stratonovich result [33-35] for the steady-state angular velocity of phase rotation,

ω(Δμ,f_{L}) = ω_{f}(Δμ,f_{L}) − ω_{b}(Δμ,f_{L}) = ω_{f}(Δμ,f_{L})[1 − exp(β(Δμ + 2πf_{L}))],   (Equation 14)
with forward rotation rate
and backward rate ω_{b}(Δμ,f_{L}) defined by the second equality in Equation 14. This result is quite general. The motor power is P_{W}(f_{L}) = f_{L}ω_{f}(Δμ,f_{L})[1 − exp(β(Δμ + 2πf_{L}))], and in order to find R_{max} one must find the optimal load, f_{L}*, by solving dP_{W}(f_{L})/df_{L} = 0. Then, R_{max} = R(f_{L}*). In fact, Equation 14 is very general. It holds beyond the washboard-potential model leading to the result in Equation 15. For example, given well-defined potential minima, one can introduce a picture of discrete states with classical Kramers rates for the transitions between them, as described in Figure 1b. Accordingly, within the simplest enzyme model, one has three discrete states: E corresponds to an empty enzyme with energy E_{1}; ES corresponds to an enzyme with a substrate molecule bound to it and energy E_{2} of the whole complex; EP corresponds to an enzyme with product molecule(s) bound to it and energy E_{3}. The forward cyclic transitions E→ES→EP→E are driven by the free energy Δμ released per molecule in the S→P transformation facilitated by the enzyme, while the backward cycling, E→EP→ES→E, requires the backward reaction, P→S. The latter is normally neglected in the standard Michaelis–Menten-type approach to enzyme kinetics, as it is very unlikely to occur. It generally cannot be neglected for molecular motors. The simplest possible Arrhenius model for the forward rate of the whole cycle is

ω_{f}(Δμ,f_{L}) = ω_{0}exp[−βδ(Δμ + 2πf_{L})],   (Equation 16)
where 0 < δ < 1 describes the asymmetry of the potential drop. Accordingly, the backward rate is ω_{b}(Δμ,f_{L}) = ω_{0}exp[β(1 − δ)(Δμ + 2πf_{L})]. This model allows one to see under which conditions R_{max} can exceed one-half. Here we rephrase a recent treatment in [29,30] and come to the same conclusions. R_{max} follows from the solution of dP_{W}(f_{L})/df_{L} = 0, which leads to a transcendental equation for R_{max},
where r = Δμ/(k_{B}T) and b = (k_{B}T/2π)∂ln ω_{f}(Δμ,f_{L})/∂f_{L}. For Equation 16, b = −δ. The limiting case b = 0 of extreme asymmetry is especially insightful. In this special case, R_{max} = [LW(e^{1+r}) − 1]/r exactly, where LW(z) denotes the Lambert W-function. This analytical result shows that R_{max}→1/2 as r→0, while R_{max}→1 as r→∞. Therefore, the popular statement that R_{max} is generally bounded by 1/2 is simply wrong. While it is true that this Jacobi bound exists in some models, it does not hold in general. Even the simplest model of molecular motors, as considered here following [29], completely refutes the Jacobi bound as a theoretical limit. Further insight emerges in the perturbative regime, r << 1, which yields, to lowest order in r,
This is essentially the same result as in [30]. Hence, for 0 ≤ δ < 1/2, R_{max} > 1/2 for small r; the effect is small for r << 1, but it exists.
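The closed-form result for b = 0 is easy to evaluate numerically. The sketch below computes R_{max} = [LW(e^{1+r}) − 1]/r with a small Newton iteration for the Lambert W-function (written on the logarithmic scale so that e^{1+r} never overflows at large r; a helper written for this illustration, not taken from the source):

```python
import numpy as np

def R_max(r, n_iter=50):
    """Efficiency at maximum power for extreme asymmetry (b = 0):
    R_max = [LW(e^{1+r}) - 1]/r, with r = Delta_mu/(kB*T).
    LW is found by Newton iteration on w + ln(w) = 1 + r, which
    avoids evaluating e^{1+r} directly for large r."""
    lz = 1.0 + r                       # lz = ln(e^{1+r})
    w = max(lz, 1.0)                   # starting point, w > 0
    for _ in range(n_iter):
        w -= (w + np.log(w) - lz) / (1.0 + 1.0 / w)
    return (w - 1.0) / r

# Weak driving recovers 1/2, strong driving approaches unity:
# R_max(0.01) ≈ 0.5006 and R_max(100.0) ≈ 0.95
```

The crossover from 1/2 to 1 with growing r is exactly the behavior quoted above.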
The discussed model might seem a bit too crude. However, the result that R_{max} can achieve the theoretical limit of unity also survives within a more advanced, yet still very simple, model. Indeed, let us consider the simplest kind of sawtooth potential (Figure 1), inspired by the above discrete-state model with δ = 0. Then, Equation 15 explicitly yields Equation 19.
The dependence of ω(Δμ,f_{L}) on the net bias Δμ − 2πf_{L} is very asymmetric within this model, as shown in Figure 2a.
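The steady-state velocity underlying such plots can be computed for any tilted periodic potential by evaluating the standard Stratonovich double-integral expression for overdamped transport. A numerical sketch in assumed dimensionless units (the V = 0 limit must recover the free drift F/η, which serves as a sanity check; the cosine potential is only an illustration, not the sawtooth of Figure 1):

```python
import numpy as np

def mean_velocity(V, F, L=2*np.pi, kBT=1.0, eta=1.0, n=400):
    """Steady-state drift in the tilted periodic potential U(x) = V(x) - F*x
    (overdamped dynamics), via the Stratonovich double-integral formula:
    <v> = L*D*(1 - e^{-beta*F*L}) / int_0^L I_plus(x) dx,
    I_plus(x) = int_0^L dy exp(beta*[V(x) - V(x - y) - F*y])."""
    D, beta = kBT / eta, 1.0 / kBT
    x = np.linspace(0.0, L, n, endpoint=False)
    y = np.linspace(0.0, L, n, endpoint=False)
    dx = L / n
    X, Y = np.meshgrid(x, y, indexing="ij")
    I_plus = np.exp(beta * (V(X) - V(X - Y) - F * Y)).sum(axis=1) * dx
    return L * D * (1.0 - np.exp(-beta * F * L)) / (I_plus.sum() * dx)

v_free = mean_velocity(lambda x: 0.0 * x, F=0.3)   # ≈ F/eta = 0.3
v_cos = mean_velocity(np.cos, F=0.3)               # the potential slows the drift
# 0 < v_cos < v_free
```

Scanning such a routine over positive and negative tilts for an asymmetric (sawtooth-like) V(x) reproduces the diode-like characteristics discussed next.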
This is a typical diode-type or rectifier dependence: if the same model is applied to the transport of charged particles in a spatially periodic potential, ω(Δμ,f_{L}) corresponds to a scaled current and the net bias Δμ − 2πf_{L} to a voltage. Clearly, within the latter context, if an additional, sufficiently slow, periodic voltage signal, Acos(ωt), is applied at zero net bias, Δμ − 2πf_{L} = 0, it will be rectified because of the asymmetric I–V characteristics. This gives rise to a directional, dissipative current in a potential that is unbiased on average (both the spatial and the time averages are zero). This effect has given rise to a huge literature on rocking Brownian ratchets in particular, and on Brownian motors in general, as described in a review article [36]. Coming back to the efficiency of molecular motors at maximum power within our model, we see clearly in Figure 2c that it can be well above 1/2, and even close to one. A sharply asymmetric dependence of P_{W} on R = f_{L}/F (Figure 2b) beyond the linear-response regime, P_{W} = 4P_{max}R(1 − R), which is not shown therein because of a very small P_{max}, provides an additional clue to the origin of this remarkable effect. Interestingly, if the operation of the motor is reversed, i.e., f_{L} provides the supply of energy and useful work is done against F ≤ f_{L}, then the motor rotates in the opposite direction on average. This occurs, for example, in such enzymes as F0F1-ATP synthase [1,3,9], which is a complex of two rotary motors, F0 and F1, connected by a common shaft. The F0 motor uses an electrochemical gradient of protons to rotate the shaft, which transmits the torque to the F1 motor. The mechanical torque applied to the F1 motor is used to synthesize ATP out of ADP and the phosphate group, P_{i}. This enzyme complex primarily utilizes the electrochemical gradient of protons to synthesize ATP. It can, however, also work in reverse and pump protons using the energy of ATP hydrolysis [9].
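The rectification by such a diode-like characteristic can be illustrated with the δ = 0 rates of the discrete-state model above, for which the rotation rate as a function of the net bias x takes the form ω(x) = ω_{0}(1 − e^{−βx}) (an assumed sign convention is used here: positive bias drives forward rotation). Averaging over an adiabatically slow, fully symmetric rocking yields a nonzero drift:

```python
import numpy as np

def omega(x, w0=1.0, beta=1.0):
    """Diode-like rate characteristic of the delta = 0 model:
    omega(x) = w0*(1 - exp(-beta*x)), with x the net bias
    (assumed convention: x > 0 drives forward rotation)."""
    return w0 * (1.0 - np.exp(-beta * x))

# Adiabatically slow, symmetric rocking x(t) = A*cos(theta): the
# cycle-averaged rotation does not vanish, since omega(x) is asymmetric
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
A = 2.0
v_avg = np.mean(omega(A * np.cos(theta)))
# Analytically v_avg = w0*(1 - I0(beta*A)), with I0 the modified Bessel
# function; for A = 2 this gives ≈ -1.28: a rectified backward drift
```

The zero-frequency (adiabatic) average used here is the simplest caricature of the rocking-ratchet effect; at finite driving frequency the response becomes dynamical and richer.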
Moreover, in a separate F1-ATPase motor, the energy of ATP hydrolysis can be used to create mechanical torque and do useful work against an external load, which is experimentally well studied [15]. For the reverse operation, with the roles of the driving and load torques interchanged, the efficiency at maximum power of our minimalist motor indeed cannot exceed 1/2, as shown by the lower curve in Figure 2c. Such behavior is also expected from the above discrete-state model, because this corresponds to δ → 1, i.e., b = −1, in Equation 18. This argumentation can be inverted: If a motor obeys the Jacobi bound, R_{max} ≤ 1/2, then it can violate it when working in reverse. Hence, the concept of the Jacobi bound as a fundamental limitation is clearly a dangerous misconception that should be avoided.
Minimalist model of a quantum engine
In the quantum case, discrete-state models naturally emerge. For example, the energy levels depicted in Figure 1b can correspond to the states of a proton pump driven by a nonequilibrium electron flow. This is a minimalist toy model for pumps like the cytochrome c oxidase proton pump [1,37]. The driving force is provided by the electron energy, Δμ, released by dissipative tunneling of electrons between the donor and acceptor electronic states of the pump. This process is complex. It requires, apart from intramolecular electron transfer, also the uptake and release of electrons from two baths of electrons on different sides of a membrane, which can be provided, for example, by mobile electron carriers [1]. However, intramolecular electron transfer (ET) between two heme metalloclusters seems to be the rate-limiting step. Such ET proceeds by vibrationally assisted electron tunneling between two localized quantum states [38,39]. Given a weak tunneling coupling between the electronic states, the rate can be calculated using the quantum-mechanical Golden Rule. Within the classical approximation for the nuclear dynamics (but not for the electrons!), and the simplest possible further approximations, one obtains the celebrated Marcus–Levich–Dogonadze rate,

ω_{f}(Δμ,Δμ_{p} = 0) = ω_{q}exp[−(Δμ − λ)²/(4λk_{B}T)]   (Equation 20)
for forward transfer, and ω_{b}(Δμ,Δμ_{p} = 0) = ω_{f}(Δμ,Δμ_{p} = 0)exp[−Δμ/(k_{B}T)]. Here, ω_{q} = (2π/ℏ)V_{tun}²(4πλk_{B}T)^{−1/2} is a quantum prefactor, where V_{tun} is the tunneling coupling and λ is the reorganization energy of the medium. The energy released in the electron transport is used to pump protons against their electrochemical gradient, Δμ_{p}, which corresponds to 2πf_{L} within the previous model. Hence, R = Δμ_{p}/Δμ. Of course, our model should not be considered a realistic model of cytochrome c oxidase. However, it allows a possible role of quantum effects to be highlighted, contained in the dependence of the Marcus–Levich–Dogonadze rates on the energy bias Δμ: namely, the existence of an inverted ET regime, in which the rate becomes smaller with a further increase of Δμ > λ, after reaching a maximum at Δμ = λ (the activationless regime). The inverted regime is a purely quantum-mechanical feature. It cannot be realized within the classical adiabatic Marcus–Hush regime, for which the rate expression formally appears the same as Equation 20, but with a classical prefactor, ω_{0}. Classically, the inverted regime simply makes no physical sense. This can easily be seen upon plotting the lower adiabatic curve for the underlying curve-crossing problem (within the Born–Oppenheimer approximation) and considering the pertinent activation barriers, the way the Marcus parabolic dependence of the activation energy on the energy bias is derived in textbooks [38]. The fact that the inverted ET regime can be used to pump electrons was first realized within a driven spin–boson model [22,40-42]. The model here is, however, very different, and the pumping does not rely on the inverted ET regime. The latter can, however, be used to arrive at a high R_{max}, close to one. Indeed, within this model, the former (Arrhenius-rate) parameter b becomes b = −1/2 + (Δμ − Δμ_{p})/(4λ), and Equation 17 is now replaced by
A new control parameter c = λ/(k_{B}T) enters this expression. The perturbative solution of Equation 21 for r = Δμ/k_{B}T << 1 yields
to the lowest, second order in Δμ/(k_{B}T) (compare with Equation 18). Hence, R_{max} > 1/2 for λ < 3k_{B}T and R_{max} < 1/2 for λ > 3k_{B}T in the perturbative regime. Beyond this regime, however, R_{max} can be essentially larger than 1/2, as shown in Figure 3a.
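The inverted regime at the heart of this mechanism is easy to visualize by scanning the Marcus–Levich–Dogonadze exponent. A minimal sketch (assumed dimensionless units, unit prefactor) confirming that the rate peaks at Δμ = λ and falls beyond it:

```python
import numpy as np

def mld_rate(dmu, lam, kBT=1.0, prefactor=1.0):
    """Marcus-Levich-Dogonadze forward ET rate,
    omega_f ∝ exp[-(dmu - lam)^2 / (4*lam*kBT)]."""
    return prefactor * np.exp(-(dmu - lam) ** 2 / (4.0 * lam * kBT))

lam = 4.0                             # reorganization energy (toy value)
dmu = np.linspace(0.0, 10.0, 1001)    # released electron energy
rates = mld_rate(dmu, lam)
dmu_peak = dmu[np.argmax(rates)]      # activationless point, dmu_peak = lam
# For dmu > lam the rate decreases again: the quantum "inverted" ET regime
```

The non-monotonic rate dependence on the bias is precisely what produces the negative differential response exploited below.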
These results are also expected for the pump working in reverse, when Δμ → −Δμ. Here, we also see a huge difference from the model based on Arrhenius rates. The dependence of the rotation rate, ω, on the bias Δμ − Δμ_{p} is symmetric in this case. However, it exhibits a regime of negative differential response, dω/d(Δμ − Δμ_{p}) < 0, for the bias exceeding some critical value that approaches λ for small T, as shown in Figure 3b. Here, the reason for the high performance is very different from the case of the asymmetric Arrhenius rates, or an asymmetric dependence of ω on the bias. R_{max} can be close to one. For this to happen, the motor should be driven deeply into the inverted ET regime. Hence, the effect is quantum-mechanical in nature, even if the considered setup looks purely classical. In this respect, the Pauli quantum master equation for the diagonal elements of the reduced density matrix, decoupled from the off-diagonal elements, has the mathematical form of a classical master equation for population probabilities, and the corresponding classical probability description can safely be used. The rates entering this equation can, however, reflect such profound quantum effects as quantum-mechanical tunneling and yield non-Arrhenius dependencies of the dissipative tunneling rates on temperature and external forces. The corresponding quantum generalizations of classical results then become rather straightforward. The theory of quantum nanomachines with profound quantum coherence effects is, however, still in its infancy.
Can a rocking ratchet do useful work without dissipation?
As we just showed, strong dissipation is not an obstacle for either classical or quantum Brownian machines to achieve the theoretical limit of performance. This already indicates that completely avoiding dissipation is neither possible nor desirable for developing a good nanomachine on the nanoscale. Conversely, the so-called rocking ratchets without dissipation [43,44] are not capable of performing any useful work, even though they can produce directional transport. This directional transport cannot continue against any nonzero force trying to stop it, as will now be demonstrated. The stalling force can become negligibly small, and the thermodynamic efficiency of such a device is zero, very different from genuine ratchets, which must be characterized by a nonzero stalling force [36]. Therefore, a ratchet current without dissipation clearly presents an interesting but futile artefact. Rocking ratchets without dissipation should be named pseudo-ratchets, to distinguish them from genuine ratchets characterized by a nonzero stalling force.
Let us consider the following setup. A particle in a periodic potential, V(x), is driven by a time-periodic force, g(t), with period 2π/Ω. Then, U(x,t) = V(x) − xg(t), or f(x,t) = f(x) + g(t) in Equation 7. For strong dissipation and overdamped Langevin dynamics, M→0, a rectification current can emerge in potentials with broken space-inversion symmetry, like the one in Figure 1a, under a fully symmetric driving, g(t) = Acos(Ωt). Broken space-inversion symmetry means that there is no x_{0} such that V(−x) = V(x + x_{0}). Likewise, a periodic driving is symmetric with respect to time reversal if a t_{0} exists (or, equivalently, a phase shift), such that g(−t) = g(t + t_{0}). Otherwise, it breaks the time-reversal symmetry. Also, the higher moments of the driving averaged over its period, ⟨g^{n}(t)⟩ = (Ω/2π)∫_{0}^{2π/Ω}g^{n}(t)dt,
where n = 2,3,…, are important from a nonlinear-response point of view. The latter moments can also be defined for stochastic driving, using a corresponding time averaging. For overdamped dynamics, the rectification current already appears in the lowest, second order of the driving amplitude for a potential with broken spatial-inversion symmetry, and in the lowest, third order for potentials which are symmetric with respect to the inversion x→−x [36]. These results were easy to anticipate for memoryless dynamics, which displays asymmetric current–force characteristics in the case of an applied static force for broken spatial symmetry, and symmetric ones for unbroken symmetry. They also hold quantum mechanically in the limit of strong dissipation. The case of weak dissipation is, however, more intricate, both classically and quantum mechanically. A symmetry analysis based on the Curie symmetry principle has been developed in order to clarify the issue [36,43]. The harmonic mixing driving [45], g(t) = A_{1}cos(Ωt + φ_{0}) + A_{2}cos(2Ωt + 2φ_{0} + ψ),
is especially interesting in this respect. Here, ψ is the relative phase of the two harmonics, which plays a crucial role, and φ_{0} is an absolute initial phase, which physically cannot play any role because it corresponds to a time shift t→t + t_{0} with t_{0} = φ_{0}/Ω and hence must be averaged out in the final results, if they are of any physical importance in the real world. Harmonic mixing driving provides a nice testbed because it is the simplest time-periodic driving which can violate the time-reversal symmetry. This occurs for any ψ ≠ 0,π. On the other hand, ⟨g^{3}(t)⟩ = (3/4)A_{1}^{2}A_{2}cos ψ. Hence, ⟨g^{3}(t)⟩ ≠ 0 for ψ ≠ π/2,3π/2. Interestingly, ⟨g^{3}(t)⟩ is maximal for time-reversal symmetric driving. Conversely, ⟨g^{3}(t)⟩ = 0 when the time-reversal symmetry is maximally broken. Moreover, one can show that all odd moments ⟨g^{2n+1}(t)⟩, n = 1,2,3,…, vanish for ψ = π/2 or 3π/2. The vanishing of all odd moments of a periodic function means that it obeys the symmetry condition g(−t) = −g(t) for an appropriate choice of the time origin. Also, in application to potentials composed of two spatial harmonics, these results imply analogous statements for the corresponding spatial averages of f(x): For a space-inversion symmetric potential, ⟨f^{3}(x)⟩ = 0 (all higher odd moments vanish as well). Moreover, ⟨f^{3}(x)⟩ is maximal when the latter symmetry is maximally broken, which corresponds to the ratchet potentials. The origin of the rectification current can be understood as a memoryless nonlinear response in overdamped systems: For ⟨f^{3}(x)⟩ ≠ 0, the current emerges already for standard harmonic driving as a second-order response to the driving. For ⟨f^{3}(x)⟩ = 0 (e.g., a standard cosine potential, V_{2} = 0), one needs ⟨g^{3}(t)⟩ ≠ 0 for the driving to produce the ratchet effect. For the above harmonic mixing driving, the averaged current is proportional to A_{1}^{2}A_{2}cos ψ. The same type of response behavior also features in a quantum-mechanical, dissipative, single-band, tight-binding model for strong dissipation [46,47]. Very importantly, any genuine fluctuating-tilt or rocking ratchet is characterized by a nonzero stalling force, which means that the ratchet transport can be sustained against a loading force and do useful work against it. It ceases at a critical stalling force.
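These symmetry statements are easy to check numerically. The following sketch (Python/NumPy; the amplitudes A₁, A₂ and frequency are illustrative choices, and the absolute phase φ₀ is set to zero) evaluates the period-averaged moments of the harmonic mixing driving g(t) = A₁cos(Ωt) + A₂cos(2Ωt + ψ) and confirms ⟨g³⟩ = (3/4)A₁²A₂cos ψ:

```python
import numpy as np

def driving_moment(psi, n, A1=1.0, A2=0.7, omega=2 * np.pi, n_steps=200_000):
    """Time average of g^n(t) over one driving period, for harmonic
    mixing driving g(t) = A1 cos(omega t) + A2 cos(2 omega t + psi)."""
    t = np.linspace(0.0, 2 * np.pi / omega, n_steps, endpoint=False)
    g = A1 * np.cos(omega * t) + A2 * np.cos(2 * omega * t + psi)
    return np.mean(g**n)

# <g^3> follows (3/4) A1^2 A2 cos(psi): maximal for time-reversal
# symmetric driving (psi = 0), zero at psi = pi/2
for psi in (0.0, np.pi / 4, np.pi / 2):
    print(f"psi = {psi:.3f}: <g^3> = {driving_moment(psi, 3):+.4f}")
```

The first moment ⟨g⟩ vanishes for every ψ; only the odd moments starting from ⟨g³⟩ carry the ψ dependence that controls the rectification.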
This has important implications. For example, in application to the photovoltaic effect in crystals with broken space-inversion symmetry [36], this means that two opposite surfaces of the crystal (orthogonal to the current flow) will gradually charge up until the produced photovoltage stops the ratchet current flow. For a zero stalling force, no steady-state photovoltage or electromotive force can emerge, in principle!
In the case of weak dissipation, however, memory effects in the current response become essential. Generally, for classical dynamics, the averaged current is proportional to A_{1}^{2}A_{2}cos(ψ − ψ_{0}), where ψ_{0} is a phase shift which depends on the strength of dissipation, with two limiting cases: (i) ψ_{0} = 0 for overdamped dynamics, and (ii) ψ_{0}→π/2 for vanishing dissipation, η→0. In the latter limit, the system becomes purely dynamical, Md^{2}x/dt^{2} = f(x) + g(t) − f_{L},
where we added an opposing loading force, f_{L}, for the transport. For example, it corresponds to a counter-directed electrical field in the case of charged particles. Let us consider, following [43,44] (the two original papers on dissipationless ratchet currents), the case f_{L} = 0 and the potential V(x) = −V_{1}sin(2πx) − V_{2}sin(4πx), or f(x) = f_{1}cos(2πx) + f_{2}cos(4πx), with f_{1} = 2πV_{1}, f_{2} = 4πV_{2}, driven by g(t) in Equation 23. The spatial period is set to one and M = 1 in dimensionless units. The emergence of a dissipationless current within the considered dynamics was rationalized by a symmetry analysis in [43], and the subject of directed currents due to broken time–space symmetries was born. In an immediate follow-up work [44], we observed, however, that in the above case the directed current is produced only by breaking the time-reversal symmetry with the time-dependent driving, and not otherwise. Breaking the spatial symmetry of the potential alone does not give rise to a dissipationless current. The current is maximal at ψ = π/2. No current emerges, however, at ψ = 0, even in a ratchet potential with broken space-inversion symmetry. Moreover, the presence of a second potential harmonic does not seem to affect the transport at ψ = π/2, as shown in Figure 4a for two cases differing by V_{2} = 0 versus V_{2} ≠ 0.
Moreover, even when dissipation is present within the corresponding Langevin dynamics, each and every trajectory remains time-reversal symmetric for ψ = 0. However, for strongly overdamped dynamics, the rectification current in a symmetric cosine potential ceases at ψ = π/2, and not at ψ = 0. Moreover, for an intermediate dissipation, it stops at some ψ_{0}, 0 < ψ_{0} < π/2, as shown in [48]. Which symmetry forbids it then, given a particular nonzero dissipation strength? Dynamic symmetry considerations fail to answer such simple questions and are thus not infallible. The symmetry of individual trajectories within a Langevin description simply does not depend on the dissipation strength, which can easily be understood from the well-known dynamical derivation of this equation presented above. Therefore, a symmetry argumentation based on the symmetry properties of single trajectories is, in general, clearly questionable. The spontaneous breaking of symmetry is a well-known fundamental phenomenon both in quantum field theory and in the theory of phase transitions. In this respect, any chaotic Hamiltonian dynamics possesses the following symmetry: For any positive Lyapunov exponent, there is a negative Lyapunov exponent with the same absolute value of the real part. The time reversal changes the signs of the Lyapunov exponents. This symmetry is spontaneously broken in Hamiltonian dynamics by considering the forward evolution in time [49]. It becomes especially obvious upon coarse-graining, which cannot be avoided either in real life or in numerical experiments. By the same token, the time irreversibility of the Langevin description, given time-reversible trajectories, is primarily a statistical and not a dynamical effect.
The emergence of such a current without dissipation has been interpreted as a reincarnation of the Maxwell–Loschmidt demon [44], and it has been argued that this demon is killed by a stochastically fluctuating absolute phase, φ_{0}, with the relative phase ψ being fixed. In this respect, even in highly coherent light sources such as lasers, the absolute phase fluctuations cannot be avoided in principle. They yield a finite bandwidth of laser light. The phase shift ψ can be stabilized, but not the absolute phase. The typical dephasing time of semiconductor lasers used in laser pointers is in the range of nanoseconds, whereas in long-tube lasers it improves to milliseconds [50]. This is the reason why some averaging over such fluctuations must always be done (see [35], Chapter 12). The validity of this argumentation was proven analytically in [44] with an exactly solvable example of a quantum-mechanical, tight-binding model driven by harmonic mixing with a dichotomously fluctuating φ_{0}. Even more spectacularly, this is seen in dissipationless, tight-binding dynamics driven by an asymmetric, stochastic, two-state field. The current is completely absent in this case, as an exact solution shows [27]. Hence, dissipation is required to produce a ratchet current under stochastic driving, g(t). The validity of this result goes far beyond the particular models in [27,44,46] because any coherent quantum current (one carried by a Bloch electron with nonzero quasimomentum) is killed by the quantum decoherence produced by a stochastic field. Any dissipationless quantum current can proceed only on a time scale smaller than the decoherence time.
Moreover, it is shown here that the directed transport without dissipation found in [43,44] and the follow-up research cannot do any useful work against an opposing force, f_{L}. Indeed, the numerical results shown in Figure 4b reveal this clearly: After some random time (which depends, in particular, on the initial conditions and on the load strength f_{L}), the rectification current ceases. As a matter of fact, the particle then moves back much faster, with acceleration. The smaller f_{L} is, the longer the regime of normal directed transport lasts and the smaller the backward acceleration is; nevertheless, the forward transport is absent asymptotically. Therefore, this “Maxwell demon” cannot asymptotically do any useful work, unlike, for example, highly efficient ionic pumps, the “Maxwell demons” of living cells, which work under the condition of strong friction. Plainly said, a dissipationless demon cannot charge a battery; it is futile. Therefore, considering such a device as a “motor” cannot be scientifically justified. It is also clear that with vanishing friction, the thermodynamic efficiency of rocking Brownian motors also vanishes. Therefore, the naive feeling that smaller friction provides higher efficiency is, in general, completely wrong.
The following is a brief summary of the major findings of this section. First, friction and noise are intimately related in the microworld, as is nicely seen from a mechanistic derivation of (generalized) Langevin dynamics. It results from hyperdimensional Hamiltonian dynamics with random initial conditions, as in a molecular dynamics approach. For this reason, the thermodynamic efficiency of isothermal nanomotors can reach 100% even under conditions of very strong dissipation, in the overdamped regime where inertial effects become negligible. Quite on the contrary, the thermodynamical efficiency of low-dimensional, dissipationless Hamiltonian ratchets is zero. Therefore, they cannot serve as a model for nanomotors in condensed media. Moreover, the geometrical size of some current realizations of Hamiltonian ratchets with optical lattices exceeds that of F1-ATPase by several orders of magnitude. In this respect, the readers should be reminded that a typical wavelength of light is about 0.5 μm, which is the reason why motors such as F1-ATPase cannot be seen in a standard light microscope. Hence, the whole subject of Hamiltonian dissipationless ratchets is completely irrelevant for nanomachinery. Second, the thermodynamical efficiency at maximum power can well exceed the upper bound of 50% in nonlinear regimes; this bound is valid only for linear dynamics. Therefore, nonlinear effects are generally very important for constructing a highly efficient nanomachine. Third, important quantum effects can already be captured within rate dynamics with quantum rates. For example, these rates can be obtained using quantum-mechanical perturbation theory in the tunnel coupling (within a Fermi's Golden Rule description), a particularly simple limit of which yields the Marcus–Levich–Dogonadze rates of nonadiabatic tunneling.
Adiabatic pumping and beyond
Having realized that the thermodynamic efficiency at maximum power can exceed 50%, a natural question emerges: How can such an efficiency be reached in practice? Intuitively, the highest thermodynamical efficiency of molecular and other nanomotors can be achieved for an adiabatic modulation of the potential, when the potential is gradually deformed so that its deep minimum slowly moves from one place to another and a particle trapped near this minimum follows the adiabatic modulation of the potential in a peristaltic-like motion. The idea is that the relaxation processes are so fast (once again, a sufficiently strong dissipation is required!) that they occur almost instantly on the time scale of the potential modulation. In such a way, the particle can be transferred in a highly dissipative environment from one place to another practically without heat losses, and it can do useful work against a substantial load (see the discussion in [51]). If at any point in time the motor particle stays near thermodynamic equilibrium, then, in accordance with the FDT, the total heat losses to the environment are close to zero. Therefore, the thermodynamic efficiency of such an adiabatically operating motor can, in principle, be close to the theoretical maximum. One can imagine, given the three particular examples presented above, that this can be achieved, in principle, at the maximum of power for arbitrarily strong dissipation. The design of the motor thus becomes crucially important. Such an ideal motor can also be completely reversible. However, to arrive at the maximum thermodynamic efficiency at a finite speed is a highly nontrivial matter indeed.
Digression on the possibility of an (almost) heatless classical computation
Now, an important digression is in order. Applied to the physical principles of computation, the above considerations mean the following. A bitwise operation (bit “0” corresponds to one location of the potential minimum and bit “1” to another; let us assume that their energies are equal) does not, in principle, require any energy to be finally dissipated. The energy can be stored and reused during an adiabatically slow change of the potential. Physical computation can, in principle, be heatless, and it can also be completely reversible at arbitrary dissipation. This is the reason why the original version of the Landauer minimum principle allegedly imposed on computation (i.e., that there is a minimum of k_{B}Tln2 of energy dissipated per one bit of computation, 0→1 or 1→0) was completely wrong. This was recognized by the late Landauer himself [52] after Bennett [53] and Fredkin and Toffoli [54] discovered how reversible computation can be done in principle [55]. Another, currently popular version of the Landauer principle, in formulations where one either needs to spend a minimum of k_{B}Tln2 of energy to destroy or erase one bit of information, or a minimum of k_{B}Tln2 of heat is released by “burning” one bit of information, is also completely wrong. These two formulations plainly and generally contradict the second law of thermodynamics, which in differential form states that dS ≥ δQ/T, i.e., that the increase of entropy, or the loss of information, dI ≡ −dS/(k_{B}ln2) (a very fundamental equality, or rather a tautology, of physical information theory), is equal to or exceeds the heat exchange with the environment in units of T. For an adiabatically isolated system, δQ = 0; hence, dI ≤ 0, i.e., entropy can increase and information can diminish spontaneously, without any heat being released into the surroundings. This is just the second law of thermodynamics rephrased. As a matter of fact, δQ = dIk_{B}Tln2 is the maximal (not minimal!)
amount of heat which can be produced by “burning” information in the amount of dI bits. To create and store one bit of information, one indeed needs to spend at least k_{B}Tln2 of free energy at T = constant, but not to destroy or erase it, in principle. Information can be destroyed spontaneously; however, this can take an infinite amount of time. The Landauer principle belongs to the common scientific fallacies. At the same time, however, it has established a current hype in the literature. An “economical” reason for this is that the clock rate of computer processors has not been increased beyond 10 GHz for over a decade because of immense heat production. Plainly said, it is not possible to cool the processors down sufficiently to increase their clock rate further, and the energy consumption becomes unreasonable. A solution to this severe problem is eagerly sought. This problem is, however, a problem of the current design of these processors and our present technology, which indeed imposes severe thermodynamical limitations [56]. It has, however, little in common with the Landauer principle, as heat is currently produced many orders of magnitude above the Landauer minimum, which should not be taken seriously as a rigorous, theoretical, universally valid bound anyway. Nevertheless, operation at a finite speed is inevitably related to heat losses. The question is how to minimize them at a maximal speed. This question clearly cannot be solved within equilibrium thermodynamics; it belongs rather to kinetic theory. The minimum energy requirements are inevitably related to the question of how fast to compute. This presents an open, unsolved problem.
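The entropy–information bookkeeping above can be summarized in two lines (a restatement of the argument in the text; dI < 0 corresponds to information loss):

```latex
% Second law at constant T, and the information--entropy identity:
\begin{align}
  dS \;\ge\; \frac{\delta Q}{T}, \qquad
  dI \;\equiv\; -\,\frac{dS}{k_{B}\ln 2}
  \quad\Longrightarrow\quad
  \delta Q \;\le\; T\,dS \;=\; -\,k_{B} T \ln 2\; dI .
\end{align}
% Destroying |dI| bits (dI < 0) can thus release at most k_B T ln 2 per bit:
% an upper bound on the dissipated heat, not a lower one.
```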
Minimalist model of adiabatic pump
Coming back to the adiabatic operation of molecular motors and pumps, a minimalist model based on the time modulation of energy levels is now analyzed. The physical background of the idea of adiabatic operation is sound. However, can it be realized in popular models characterized by discrete energy levels? The minimalist model contains just one time-dependent energy level, E(t), and two constant energy levels corresponding to the chemical potentials μ_{1} and μ_{2} of two baths of particles between which the transport occurs. They must be considered as electrochemical potentials for charged particles (e.g., Fermi levels of electrons in two leads), or electrochemical potentials of transferred ions in two bath solutions separated by a membrane. Pumping takes place when a time modulation of E(t) can be used to pump against Δμ = μ_{2} − μ_{1} > 0, as shown in Figure 5a. Here, both the energy level E(t) and the corresponding rates k_{1}(t), k_{−1}(t), k_{2}(t), and k_{−2}(t) are time-dependent. They would properly be called rate constants if they were time-independent. Given a sufficiently slow modulation and a fast equilibration at any instant t, one can assume the local equilibrium conditions k_{1}(t)/k_{−1}(t) = exp[(μ_{1} − E(t))/k_{B}T] and k_{2}(t)/k_{−2}(t) = exp[(E(t) − μ_{2})/k_{B}T] (Equation 25).
Notice that this condition is not universally valid. It can be violated by fast fluctuating fields, as shown in [22] and references cited therein for plenty of examples, using an approach beyond this restriction within a quantum-mechanical setting. The rates are generally retarded functionals of the energy level fluctuations and not functions of the instantaneous energy levels. However, local equilibrium can be a very good approximation. Figure 5b rephrases the transport process in Figure 5a in terms of the states of the pump: empty (state 0) and filled with one transferred particle (state 1). The former state is populated with probability p_{0}(t) and the latter one with probability p_{1}(t), p_{0}(t) + p_{1}(t) = 1. The empty level can be filled with rate k_{1}(t) from the left bath level μ_{1}, and with rate k_{−2}(t) from the right bath level μ_{2}. The filling flux is thus j_{f} = (k_{1} + k_{−2})p_{0}. Moreover, the level can be emptied with rate k_{2}(t) to μ_{2}, and with rate k_{−1}(t) to μ_{1}. The corresponding master equations reduce to a single relaxation equation because of probability conservation, dp_{0}(t)/dt = −Γ(t)p_{0}(t) + R(t) (Equation 26), where Γ(t) = k_{1}(t) + k_{−1}(t) + k_{2}(t) + k_{−2}(t) is the total relaxation rate, and R(t) = k_{2}(t) + k_{−1}(t).
The instantaneous flux between the levels μ_{1} and E(t) is I_{1}(t) = k_{1}(t)p_{0}(t) − k_{−1}(t)p_{1}(t), and it is I_{2}(t) = k_{2}(t)p_{1}(t) − k_{−2}(t)p_{0}(t) between the levels E(t) and μ_{2}. Clearly, the time averages Ī_{1} and Ī_{2} must coincide, Ī_{1} = Ī_{2} ≡ Ī, because particles cannot accumulate on the level E(t).
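For illustration, here is a minimal numerical sketch (Python; the rate values are arbitrary illustrative choices) of the single relaxation equation for p₀(t), written as dp₀/dt = R(t) − Γ(t)p₀ in the present notation. For frozen rates, p₀ relaxes to the instantaneous steady state R/Γ:

```python
def relax_p0(k1, km1, k2, km2, p0_init=1.0, t_end=10.0, dt=1e-3):
    """Euler integration of dp0/dt = R - Gamma * p0 for constant rates.
    Returns the numerical long-time value and the analytical steady state."""
    gamma = k1 + km1 + k2 + km2   # total relaxation rate Gamma
    R = k2 + km1                  # total rate of emptying the level (1 -> 0)
    p0 = p0_init
    for _ in range(int(t_end / dt)):
        p0 += dt * (R - gamma * p0)
    return p0, R / gamma

p0_num, p0_ss = relax_p0(k1=2.0, km1=0.5, k2=1.0, km2=0.2)
```

The relaxation time 1/Γ sets the scale on which "sufficiently slow modulation" must be understood in the adiabatic arguments below.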
First, we show that pumping is impossible within the quasistatic approximation for the rates, that is, when the rates are considered to be constant at a frozen instant of time and the problem is solved within this approximation. Indeed, in this case, for a steady-state flux that is an instantaneous function of time, we obtain I(t) = [k_{1}(t)k_{2}(t) − k_{−1}(t)k_{−2}(t)]/Γ(t) = k_{−1}(t)k_{−2}(t)[e^{−Δμ/(k_{B}T)} − 1]/Γ(t) (Equation 32),
where in the second line Equation 25 was used. Clearly, for Δμ > 0, I(t) < 0 at any t. Averaging over time yields an average current of the same, negative sign: The current always flows from the higher μ_{2} to the lower μ_{1}. The same happens for any number of intermediate levels E_{i}(t) within such an approximation.
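The absence of quasistatic pumping can be verified directly. In this sketch (Python; the prefactors a₁, a₂ are hypothetical, and all energies are measured in units of k_BT), the rates obey the local equilibrium conditions of Equation 25, and the frozen-rate flux comes out negative for every level position E whenever Δμ > 0:

```python
import numpy as np

def quasistatic_flux(E, mu1, mu2, a1=1.0, a2=1.0):
    """Instantaneous steady-state flux for frozen rates obeying detailed
    balance (Eq. 25); energies in units of k_B T, symmetric prefactors."""
    k1, km1 = a1 * np.exp((mu1 - E) / 2), a1 * np.exp(-(mu1 - E) / 2)
    k2, km2 = a2 * np.exp((E - mu2) / 2), a2 * np.exp(-(E - mu2) / 2)
    gamma = k1 + km1 + k2 + km2
    return (k1 * k2 - km1 * km2) / gamma

# For Delta mu = mu2 - mu1 = 2 k_B T > 0, the flux is negative at every E
fluxes = [quasistatic_flux(E, mu1=0.0, mu2=2.0) for E in np.linspace(-5, 5, 11)]
```

The sign is fixed by detailed balance alone: k₁k₂/(k₋₁k₋₂) = e^{−Δμ/k_BT} < 1 for Δμ > 0, independently of the prefactors and of E.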
Origin of pumping
One can, however, easily solve Equation 26 for arbitrary Γ(t) and R(t): p_{0}(t) = p_{0}(0)exp[−∫_{0}^{t}Γ(t′)dt′] + ∫_{0}^{t}R(t′)exp[−∫_{t′}^{t}Γ(t′′)dt′′]dt′ (Equation 33).
The first term vanishes in the limit t→∞, and a formal expression for the steady-state averaged flux, Ī, can readily be written down, as shown in Equation 34, where k̄_{−1} is the time-averaged k_{−1}(t).
However, evaluating it for particular conditions of energy and rate modulation is generally a rather cumbersome task. The fact that pumping is possible is easy to understand with the following protocol of energy level and rate modulation: (step 1) the energy level E(t) decreases, E(t) < μ_{1}, with an increasing prefactor in k_{±1}(t) (the left gate opens) and a sharply decreasing prefactor in k_{±2}(t) (the right gate is closed); a particle enters the pump from the left; (step 2) the energy level E(t) increases, E(t) > μ_{2}, and the prefactor in k_{±1}(t) sharply drops; the left gate closes and the right one remains closed; (step 3) the right gate opens and the particle leaves to the right; (step 4) the right gate closes, the energy level E(t) decreases, and the left gate opens, so that the initial position in the 3D parameter space (two prefactors and one energy level) is restored, and one cycle is completed. The general idea of an ionic pump with two intermittently opening and closing gates was in fact suggested a long time ago [57].
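The four-step protocol can be tried out numerically. The sketch below (Python; the piecewise-constant protocol, the gate prefactors, and all energies in units of k_BT are illustrative assumptions, not fitted to any real pump) integrates the master equation for the filled-state probability p₁ = 1 − p₀ and counts the particles delivered to the right bath per cycle. With such sharp, nonadiabatic gating, it pumps close to one particle per cycle against Δμ > 0:

```python
import numpy as np

# Energies in units of k_B T; all parameter values are illustrative
MU1, MU2 = 0.0, 2.0            # bath chemical potentials, Delta mu = 2 > 0
E_LO, E_HI = MU1 - 4.0, MU2 + 4.0
A_OPEN, A_SHUT = 50.0, 1e-6    # gate prefactors: open vs. practically closed

def rates(E, a1, a2):
    """Rates obeying the local equilibrium conditions (Eq. 25)."""
    k1, km1 = a1 * np.exp((MU1 - E) / 2), a1 * np.exp(-(MU1 - E) / 2)
    k2, km2 = a2 * np.exp((E - MU2) / 2), a2 * np.exp(-(E - MU2) / 2)
    return k1, km1, k2, km2

def protocol(phase):
    """Four-step gating protocol from the text (piecewise constant)."""
    if phase < 0.25: return E_LO, A_OPEN, A_SHUT   # step 1: fill from the left
    if phase < 0.50: return E_HI, A_SHUT, A_SHUT   # step 2: lift level, gates shut
    if phase < 0.75: return E_HI, A_SHUT, A_OPEN   # step 3: release to the right
    return E_LO, A_SHUT, A_SHUT                    # step 4: reset the level

def pumped_per_cycle(n_cycles=4, t_cycle=4.0, dt=2e-4):
    """Particles delivered to the right bath during the last cycle."""
    p1, pumped = 0.0, 0.0
    for step in range(int(n_cycles * t_cycle / dt)):
        t = step * dt
        E, a1, a2 = protocol((t / t_cycle) % 1.0)
        k1, km1, k2, km2 = rates(E, a1, a2)
        flux_right = k2 * p1 - km2 * (1.0 - p1)    # instantaneous I_2(t)
        p1 += dt * ((k1 + km2) * (1.0 - p1) - (km1 + k2) * p1)
        if t >= (n_cycles - 1) * t_cycle:
            pumped += dt * flux_right
    return pumped
```

Note that the gating here is deliberately fast compared to the level modulation; as discussed next, making the whole protocol adiabatically slow destroys the pumping in this model.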
Some general results can be obtained within this model for an adiabatically slow modulation and related to an adiabatic, geometric Berry phase, b(t). Its origin can be understood by analogy with a similar approach used to solve the Schrödinger equation in quantum mechanics for adiabatically modulated, quasistationary energy levels [58], by making the following ansatz to solve Equation 26: p_{0}(t) = e^{ib(t)}R(t)/Γ(t) + c.c. Making a loop in the 2D space of parameters adds or subtracts 2π to b(t). Furthermore, a related additional contribution, the pumping current, appears in addition to the one in Equation 32, with the averaging done over one cycle period. This additional contribution is proportional to the cycling rate, ω (see [59] for details). However, it is small and cannot override the one in Equation 32, consistently with the adiabatic modulation assumption. Hence, adiabatic pumping against any substantial bias Δμ > 0 is not possible within this model. This can easily be understood by making a sort of adiabatic approximation in Equation 33 and integrating by parts therein, so that in the long-time limit p_{0}(t) ≈ R(t)/Γ(t) + δp_{0}(t), where δp_{0}(t) ≈ −[d(R(t)/Γ(t))/dt]/Γ(t). The first term leads to Equation 32, and the second term corresponds to a small, perturbative pump current, which vanishes as ω→0. This pump current can be observed only for Δμ = 0, where the current in Equation 32 vanishes. Hence, the thermodynamic efficiency of this pump is close to zero in the adiabatic pumping regime.
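The smallness of the adiabatic correction can be checked numerically. The sketch below (Python; the smooth modulations of Γ(t) and R(t) are illustrative choices) compares the numerical solution of Equation 26 with the adiabatic expansion p₀ ≈ R/Γ − [d(R/Γ)/dt]/Γ; including the first-order correction reduces the residual error substantially, consistent with a pump current that vanishes as ω→0:

```python
import math

def adiabatic_check(omega=0.05, dt=1e-3, n_periods=2):
    """Compare the numerical solution of dp0/dt = R(t) - Gamma(t) p0 with
    the adiabatic expansion p0 ~ R/Gamma - (d/dt)(R/Gamma)/Gamma."""
    Gamma = lambda t: 2.0 + math.cos(omega * t)
    R = lambda t: 1.0 + 0.5 * math.sin(omega * t)
    # analytical d(R/Gamma)/dt = (R' Gamma - R Gamma') / Gamma^2
    dRG = lambda t: (0.5 * omega * math.cos(omega * t) * Gamma(t)
                     + R(t) * omega * math.sin(omega * t)) / Gamma(t) ** 2
    period = 2.0 * math.pi / omega
    p0, t = R(0.0) / Gamma(0.0), 0.0
    err0 = err1 = 0.0
    for _ in range(int(n_periods * period / dt)):
        f1 = R(t) - Gamma(t) * p0                      # Heun (2nd-order) step
        f2 = R(t + dt) - Gamma(t + dt) * (p0 + dt * f1)
        p0 += 0.5 * dt * (f1 + f2)
        t += dt
        if t >= (n_periods - 1) * period:              # measure in last period
            zeroth = R(t) / Gamma(t)
            first = zeroth - dRG(t) / Gamma(t)
            err0 = max(err0, abs(p0 - zeroth))
            err1 = max(err1, abs(p0 - first))
    return err0, err1
```

Here the residual after the first-order correction is of order ω² and thus much smaller than the O(ω) correction itself, in line with the perturbative argument above.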
Moreover, for realistic molecular pumps, e.g., those driven by the energy of ATP hydrolysis, an adiabatic modulation is difficult (if at all possible) to realize. A sudden modulation of the energy levels (e.g., by a power stroke), when the energy levels jump to new discrete positions, is more relevant, especially on the single-molecule level.
Efficient nonadiabatic pumping
The case where E(t) takes on discrete values and is a time-continuous, semi-Markovian process can be handled differently. Especially simple is the particular case where E(t) takes just two values, E_{1} and E_{2} > E_{1}, with transition rates ν_{1} and ν_{2} between them. Then, the transport scheme in Figure 5b can be rephrased as the one in Figure 5c, with rate constants k_{j}^{(i)} for the transitions to and from the energy levels E_{i}, i = 1,2, j = 1,2,−1,−2, which obey local equilibrium conditions analogous to Equation 25 (Equation 35).
Now there are three populations: p_{0} of the empty state, p_{1} of the level E_{1}, and p_{2} of the level E_{2}. The steady-state flux can be calculated as the net flux into the right bath, evaluated with the steady-state populations p_{i}^{st}. Straightforward (but somewhat lengthy) calculations yield Equation 36.
From the structure of this equation, it is immediately clear that the flux can be positive for positive Δμ (real pumping) by considering, e.g., the limit in which the exchange of the level E_{1} with the right bath and of the level E_{2} with the left bath is blocked, and ν_{1} >> ν_{2}. Physically, it is obvious when E_{1} < μ_{1} and E_{2} > μ_{2}, together with the condition that the level E_{1} is easily filled from μ_{1}, but not from μ_{2}, because of a large barrier on the right side (the entrance of the pump is practically closed from the right), and that the particle easily goes from E_{2} to μ_{2} and cannot go back to μ_{1} because the left entrance is now almost closed. Under these conditions, neglecting the corresponding backward rates is also well justified. Hence, we obtain for the pumping rate I ≈ ν_{1}/(1 + ν_{1}τ) (Equation 37).
This expression looks like a standard Michaelis–Menten rate of enzyme operation, which is customarily used in biophysics [3] for modeling molecular motors and pumps. The elevation of the E(t) level from E_{1} to E_{2} can be effected, e.g., by ATP binding in the case of ionic pumps, with ν_{1} proportional to the ATP concentration, c_{ATP}. This is a simple, basic model for pumps. From Equation 37 it follows that I ≈ ν_{1} at ν_{1}τ << 1, where τ is the sum of the filling and emptying times, and that the pumping rate reaches its maximum, I_{max} ≈ 1/τ, for ν_{1}τ >> 1. The thermodynamic efficiency of such a pump is R = Δμ/ΔE, where ΔE = E_{2} − E_{1} is the energy invested in pumping. The derivation of the approximation in Equation 37 requires that exp(ε_{1,2}/k_{B}T) >> 1, where ε_{1} = μ_{1} − E_{1} and ε_{2} = E_{2} − μ_{2}, which is already well satisfied for ε_{1,2} > 2k_{B}T. Hence, R = Δμ/(Δμ + ε_{1} + ε_{2}) can be close to one for a large Δμ >> ε_{1} + ε_{2}. Take, for example, ΔE = 20k_{B}T_{r} ≈ 0.5 eV, which corresponds to the typical energy released by ATP hydrolysis. Then, for Δμ = 0.4 eV and ε_{1} = ε_{2} = 2k_{B}T_{r} ≈ 0.05 eV, R = 0.8. Notice that the typical thermodynamic efficiency of a Na–K pump is about R ≈ 0.75. Such nonadiabatic pumping can indeed be highly thermodynamically efficient, with small heat losses. One should note, however, that the question of whether or not the efficiency at the maximum of the power, P_{W} = IΔμ, can be larger than one-half, or even approach one, within this generic model is not that simple. To answer this question, one cannot neglect the backward transport, especially when Δμ becomes close to ΔE (P_{W}(Δμ = ΔE) = 0), and a concrete model for the rates must be specified in the exact result (Equation 36). In the case of an electronic pump, like the one used by nature in nitrogenase enzymes, these can be quantum tunneling rates [60], similar to the Marcus–Levich–Dogonadze rate above.
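The quoted numbers are easy to reproduce. This snippet (Python) evaluates the Michaelis–Menten-like rate in the form I = ν₁/(1 + ν₁τ), consistent with the limits quoted above, together with the efficiency R = Δμ/(Δμ + ε₁ + ε₂):

```python
def pumping_rate(nu1, tau):
    """Michaelis-Menten-like pumping rate: I = nu1 / (1 + nu1 * tau).
    I ~ nu1 for nu1*tau << 1, and I -> 1/tau for nu1*tau >> 1."""
    return nu1 / (1.0 + nu1 * tau)

def efficiency(d_mu, eps1, eps2):
    """R = d_mu / dE, with dE = d_mu + eps1 + eps2 (all in the same units)."""
    return d_mu / (d_mu + eps1 + eps2)

# Example from the text: dE = 0.5 eV (ATP), d_mu = 0.4 eV, eps1 = eps2 = 0.05 eV
R = efficiency(0.4, 0.05, 0.05)      # -> 0.8
I_slow = pumping_rate(0.01, 1.0)     # nu1*tau << 1: I ~ nu1
I_fast = pumping_rate(1e4, 1.0)      # nu1*tau >> 1: I -> 1/tau
```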
Moreover, imposing a very high barrier (intermittent in time) either on the left or on the right can physically correspond to the interruption of the electron tunneling pathway due to ATP-induced conformational changes, that is, to the modulation of the tunnel coupling V_{tun}(t) synchronized with the modulation of E(t), as occurs in nitrogenase. This question of the efficiency at maximum power will be analyzed elsewhere in detail, both for classical and for quantum rate models.
To summarize this section, the concept of the adiabatic operation of molecular machines is sound and should be pursued further. However, the simplest known adiabatic pump operates, in fact, at nearly zero thermodynamical efficiency, while a power-stroke mechanism can be highly efficient within the same model. It seems obvious that in order to realize thermodynamically efficient adiabatic pumping (the gentle operation of a molecular machine without erratic jumps), a continuum of states is required, or possibly many states depending continuously on an external modulation parameter. Further research is thus highly desirable and necessary.
How can biological molecular motors operate highly efficiently in highly dissipative, viscoelastic environments?
As has been clarified above, Brownian motors can work highly efficiently in dissipative environments causing an arbitrarily strong viscous friction acting on the motor. This corresponds to the case of normal diffusion, ⟨δx^{2}(t)⟩ = 2Dt,
in the force-free case. In the crowded environment of biological cells, diffusion can, however, be anomalously slow, ⟨δx^{2}(t)⟩ = 2D_{α}t^{α}/Γ(1 + α),
where 0 < α < 1 is the power-law exponent of subdiffusion and D_{α} is the subdiffusion coefficient [61,62]. There is a huge and growing body of experimental evidence for the subdiffusion of particles of various sizes, from 2–3 nm (typical for globular proteins) [63,64] to 100–500 nm [65–69] (typical for various endosomes), both in living cells and in crowded polymer and colloidal solutions (complex fluids) physically resembling cytoplasm. Many theories have been developed to explain such behavior [61,62]. One is based on the natural viscoelasticity of such complex liquids (see [70,71] for a review and details), which has a deep dynamical foundation (see above). The viscoelasticity that leads to the above subdiffusion corresponds to a power-law memory kernel, η(t) = η_{α}t^{−α}/Γ(1 − α), in Equation 3 and Equation 6. In this relation, η_{α} is a fractional friction coefficient related to D_{α} by the generalized Einstein relation, D_{α} = k_{B}T/η_{α}. Using the notion of the fractional Caputo derivative, the dissipative term in Equation 3 can be abbreviated as η_{α}d^{α}x/dt^{α}, where the fractional derivative operator d^{α}f(t)/dt^{α}, acting on an arbitrary function f(t), is just defined by this abbreviation. The corresponding GLE is named the fractional Langevin equation (FLE). Its solution yields the above subdiffusive scaling exactly in the inertialess limit, M→0, corresponding precisely to fractional Brownian motion [70,72], or asymptotically otherwise. The transport in the case of a constant applied force, f_{0}, is also subdiffusive, ⟨δx(t)⟩ = f_{0}t^{α}/[η_{α}Γ(1 + α)].
These results correspond exactly to a sub-ohmic model of the spectral density of the thermal bath [16], J(ω) = η_{α}ω^{α}, within the dynamical approach to generalized Brownian motion. They can easily be understood with an ad hoc Markovian approximation to the memory kernel, which yields a time-dependent viscous friction, η_{M}(t) = ∫_{0}^{t}η(t′)dt′ = η_{α}t^{1−α}/Γ(2 − α). It diverges, η_{M}(t)→∞, when t→∞, which leads to subdiffusion and subtransport within this Markovian approximation. Such an approximation can, however, be very misleading in other respects [73]. Nevertheless, it provokes the question: How can molecular motors, such as kinesin, work very efficiently in such media, characterized by a virtually infinite friction and interpolating, in fact, between simple liquids and solids? It is important to mention that in any fluid-like environment, the effective macroscopic friction, η_{eff}, must be finite. Hence, a memory-cutoff time, τ_{max}, must exist so that η_{eff} = ∫_{0}^{τ_{max}}η(t)dt is finite. In real life, τ_{max} can be as large as minutes, or even longer than hours. Hence, on a shorter time scale and on the corresponding spatial mesoscale, it is subdiffusion (characterized by η_{α}) that can be physically relevant indeed, and not the macroscopic limit of normal diffusion characterized by η_{eff}. This observation opens the way for a multidimensional Markovian embedding of subdiffusive processes with long-range memory upon the introduction of a finite number, N, of auxiliary stochastic variables. It is based on a Prony series expansion of the power-law memory kernel into a sum of exponentials, η(t) ≈ Σ_{i=1}^{N}η_{i}ν_{i}e^{−ν_{i}t}, with ν_{i} = ν_{0}/b^{i−1} and weights η_{i} ∝ ν_{i}^{α−1}. This can be made numerically accurate (which is controlled by the scaling parameter, b). Apart from τ_{max} = τ_{min}b^{N−1}, it possesses also a short-time cutoff, τ_{min} = 1/ν_{0}. The latter naturally emerges in any condensed medium beyond a continuous-medium approximation because of its real, atomistic nature. In numerics, it can be made of the order of a time-integration step.
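The Prony-series representation can be verified numerically. The sketch below (Python; b, ν₀, and N are illustrative choices) approximates the kernel t^{−α}/Γ(1 − α) by a log-discretized sum of exponentials with ν_i = ν₀/b^{i−1} and checks the relative error inside the window (τ_min, τ_max):

```python
import math

def prony_kernel(t, alpha=0.5, nu0=100.0, b=2.0, N=30):
    """Sum-of-exponentials (Prony) approximation of t^{-alpha}/Gamma(1-alpha),
    obtained by log-discretizing the Laplace representation of the power law;
    nu_i = nu0 / b**(i-1), and the weights are proportional to nu_i**alpha."""
    pref = math.sin(math.pi * alpha) * math.log(b) / math.pi
    nus = [nu0 / b**i for i in range(N)]
    return pref * sum(nu**alpha * math.exp(-nu * t) for nu in nus)

def exact_kernel(t, alpha=0.5):
    return t**(-alpha) / math.gamma(1.0 - alpha)

# Relative error inside the window (tau_min, tau_max) = (1/nu0, b**(N-1)/nu0)
ts = [0.1 * 2**k for k in range(11)]          # t from 0.1 to ~100
errs = [abs(prony_kernel(t) / exact_kernel(t) - 1.0) for t in ts]
```

Shrinking b improves the accuracy at the price of a larger N for the same memory window, which is the trade-off controlling the Markovian embedding.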
Hence, it does not even matter within the continuous-medium approximation. Even with a moderate N ≈ 10–100 (the number of auxiliary degrees of freedom), Markovian embedding can be done for any realistic time scale of anomalous diffusion with sufficient accuracy [70,74]. A very efficient numerical approach based on the corresponding Markovian embedding has been developed for subdiffusion in [70,74], and for superdiffusion (α > 1) in [75–77]. The idea of Markovian embedding is also very natural from the perspective that any non-Markovian GLE dynamics presents a low-dimensional projection of a hyperdimensional, singular Markovian process described by dynamical equations of motion with random initial conditions. This fact is immediately clear from the well-known dynamical derivation of the GLE reproduced above. Somewhat surprising, however, is that such a small N ≈ 10–20 is normally sufficient in practical applications.
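As an illustration, the Prony expansion just described can be checked numerically. The following sketch uses the weights k_{i} = η_{α}(sin πα/π)(ln b)ν_{i}^{α}, which follow from the standard integral representation of the power-law kernel; the published embedding schemes [70,74] may normalize the coefficients somewhat differently, and all parameter values here are chosen for illustration only:

```python
import numpy as np
from math import gamma, sin, pi, log

def prony_weights(alpha, nu0, b, N, eta_alpha=1.0):
    """Geometric rate ladder nu_i = nu0 / b**(i-1) and weights k_i
    approximating eta(t) = eta_alpha * t**(-alpha) / Gamma(1 - alpha)."""
    nus = nu0 / b ** np.arange(N)                      # nu_1 .. nu_N
    ks = eta_alpha * (sin(pi * alpha) / pi) * log(b) * nus ** alpha
    return nus, ks

def kernel_exact(t, alpha, eta_alpha=1.0):
    return eta_alpha * t ** (-alpha) / gamma(1.0 - alpha)

def kernel_prony(t, nus, ks):
    return np.sum(ks[:, None] * np.exp(-np.outer(nus, t)), axis=0)

alpha, nu0, b, N = 0.5, 1e4, 2.0, 40         # tau_min = 1e-4; tau_max = tau_min * b**(N-1)
nus, ks = prony_weights(alpha, nu0, b, N)
t = np.logspace(-2, 2, 50)                   # well inside [tau_min, tau_max]
rel_err = np.abs(kernel_prony(t, nus, ks) / kernel_exact(t, alpha) - 1.0)
print(rel_err.max())                         # small on this time window
```

With a small scaling parameter such as b = 2, the sum of N = 40 exponentials reproduces the power-law kernel to well below one percent over many decades; a larger b reduces N at the cost of accuracy, consistent with the trade-off described above.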
The action of a motor on subdiffusing cargo can be modeled simplistically (within the simplest possible theory) by a random force f(t) that alternates its direction as the motor steps on a random network of the cytoskeleton [78]. The driven cargo follows a diffusional process ⟨δx²(t)⟩ ∝ t^{β}, with some exponent β, which is defined by averaging the squared displacements δx(t,t′) = x(t + t′) − x(t′) over the sliding time t′ along trajectories. Within such a model, β clearly cannot exceed 2α [79], which corresponds to subtransport with a direction alternating in time. Hence, for α < 0.5, cargo superdiffusion (β > 1) could not be caused by motors within such a simple approach. However, experiments show [80,81] that freely subdiffusing cargos (e.g., α = 0.4 [80,82]) can superdiffuse when they are driven by motors also for α < 0.5 (e.g., β = 1.3 for α = 0.4 [80]). Therefore, a more appropriate modeling of the transport by molecular motors in viscoelastic environments is required. This was developed quite recently in [83–85], by generalizing the pioneering works on subdiffusive rocking [70,86–89] and flashing [90] ratchets.
Viscoelastic effects should be considered on top of the viscous Stokes friction caused by the water component of the cytosol. Then, a basic 1D model for a large cargo (20–500 nm) pulled by a much smaller motor (2–10 nm) on an elastic linker (cf. Figure 6) can be formulated as follows [85]:

η_{m}dx/dt = −∂U(x,ζ(t))/∂x − f_{link}(x − y) − f_{L} + ξ_{m}(t),
η_{c}dy/dt + ∫_{0}^{t}η(t − t′)(dy/dt′)dt′ = f_{link}(x − y) + ξ_{c}(t) + ξ(t),

where f_{link}(r) = k_{L}r/[1 − (r/r_{max})²] is the elastic force of the linker.
This presents a generalization of a well-known model of molecular motors [91–93] obtained by coupling the motor to a subdiffusing cargo via an elastic linker. Here, both the motor (coordinate x) and the cargo (coordinate y) are subjected to independent thermal white noises of the environment, ξ_{m}(t) and ξ_{c}(t), respectively, which obey the corresponding FDRs. Both particles are overdamped and characterized by Stokes frictional forces with friction constants η_{m} and η_{c}. In addition, a viscoelastic frictional force acts on the cargo; it is characterized by the memory kernel discussed above (fractional friction model) and the corresponding stochastic thermal force, ξ(t), with algebraically decaying correlations, which obeys a corresponding FDR. The motor pulls the cargo via an elastic linker with spring constant k_{L} (for small extensions) and maximal extension length r_{max} (the so-called finitely extensible nonlinear elastic (FENE) model [94] is used here). The motor (kinesin) is bound to a microtubule and can move along it in a periodic potential, U(x + L,ζ(t)) = U(x,ζ(t)), reflecting the microtubule spatial period L. Driven by cyclic conformational fluctuations ζ(t), it can do useful work against a loading force, f_{L}, directed against its motion. The microtubule is a polar periodic structure with a periodic but asymmetric distribution of positive and negative charges (the overall charge is negative) [95]. The kinesin is also charged, and its charge fluctuates upon binding of negatively charged ATP molecules and dissociation of the products of ATP hydrolysis. This leads to the dependence of the binding potential on the conformational variable ζ(t).
Given the two identical heads of kinesin, the minimalist model is to assume that there are only two conformational states of the motor (a gross oversimplification, of course) with U_{1,2}(x) := U(x,ζ_{1,2}), and U_{1}(x + L/2) = U_{2}(x) as an additional symmetry condition, so that a half-step, L/2, is associated with the conformational fluctuations 1→2, or 2→1. During one cycle 1→2→1 in the forward direction, with the rates α_{1}(x) and β_{2}(x), one ATP molecule is hydrolyzed. However, if this cycle is reversed in the backward direction, with the rates β_{1}(x) and α_{2}(x) (Figure 6), one ATP molecule is synthesized. The dependence of the chemical transition rates on the position x through the potentials U_{1,2}(x) reflects a two-way mechanochemical coupling. It is able to incorporate allosteric effects, which can indeed be very important for the optimal operation of molecular machines [6]. Such effects can emerge, for example, because the probability of binding an ATP molecule (substrate) to a kinesin motor, or the release of the products, can be influenced by the electrostatic potential of the microtubule. In the language of [6], this corresponds to an information ratchet mechanism, to distinguish it from the energy ratchet, where the rates of potential switches do not depend on the motor states (no feedback) and are fixed. Such an allostery can be used to create highly efficient molecular machines [6]. In accordance with the general principles of nonequilibrium thermodynamics applied to cyclic kinetics [2],

α_{1}(x)β_{2}(x)/[β_{1}(x)α_{2}(x)] = exp[Δμ_{ATP}/(k_{B}T)]
for any x, where Δμ_{ATP} is the free energy released in ATP hydrolysis and used to drive one complete cycle in the forward direction. It can be satisfied, e.g., by choosing
The total rates of the transitions between the two potential profiles,

ν_{1}(x) = α_{1}(x) + β_{1}(x),   ν_{2}(x) = α_{2}(x) + β_{2}(x),

must satisfy

ν_{1}(x)/ν_{2}(x) = exp{[U_{1}(x) − U_{2}(x)]/(k_{B}T)}
at thermal equilibrium. This is the condition of thermal detailed balance, under which the dissipative fluxes vanish simultaneously both in the transport direction and within the conformational space of the motor [91,92]. It is obviously satisfied for Δμ_{ATP}→0. Furthermore, on symmetry grounds, not only α_{1,2}(x + L) = α_{1,2}(x) and β_{1,2}(x + L) = β_{1,2}(x), but also α_{1}(x + L/2) = β_{2}(x) and α_{2}(x + L/2) = β_{1}(x). It should be emphasized that linear motors such as kinesin I or II work only one way: they utilize the chemical energy of ATP hydrolysis for doing mechanical work. They cannot operate in reverse on average, i.e., they cannot use mechanical work to produce ATP in the long run, even if a two-way mechanochemical coupling can provide such an opportunity in principle. This is very different from rotary motors such as F0F1-ATPase, which are completely reversible and can operate in two opposite directions. Allosteric effects can also play a role in providing such a directional asymmetry in the case of kinesin motors. Allostery should be considered as generally important for the proper design of various motors best suited for different tasks.
For kinesins, neither the cargo nor the external force f_{L} should explicitly influence the dependence of the chemical rates on the mechanical coordinate x. This still leaves some freedom in the choice of rate models. One possible choice is shown in Equation 45 [85].
In Equation 45, α_{1}(x) = α_{1} within the ±δ/2 neighborhood of the minimum of the potential U_{1}(x) and is zero otherwise. Correspondingly, the rate β_{2}(x) = α_{1} within the ±δ/2 neighborhood of the minimum of the potential U_{2}(x). The rationale behind this choice is that these rates correspond to lumped reactions of ATP binding and hydrolysis; if the amplitude of the binding potential is chosen to be about Δμ_{ATP}, with a sufficiently large δ, the rates ν_{1,2}(x) can be made almost independent of the position of the motor along the microtubule [85] when allosteric effects are almost negligible. This allows for a comparison of this model, featuring bidirectional mechanochemical coupling, with a corresponding flashing energy ratchet model, where the switching rates between the two potential realizations are spatially independent constants, ν_{1} = ν_{2} = α_{1}. The latter model has been developed in [84]. Notice that even for reversible F1-ATPase motors, such an energy ratchet model can provide very reasonable and experimentally relevant results [96]. Moreover, if the linker is very rigid, k_{L}→∞, one can exclude the dynamics of the cargo and consider a single compound particle, with a renormalized Stokes friction and the same algebraically decaying memory kernel, that moves subdiffusively in a flashing potential. Such an anomalous-diffusion molecular motor model has been proposed and investigated in [83]. The main results of [83], which were confirmed and further generalized in [84,85], create the following emerging coherent picture of molecular motors pulling subdiffusing (when free) cargos in the viscoelastic environments of living cells. First, if a normally diffusing (when free) motor is coupled to a subdiffusing cargo, it will eventually be enslaved by the cargo and also subdiffuse [84].
However, when the motor is bound to a microtubule, it can be guided by the binding potential fluctuations, which are induced by its own cyclic conformational dynamics driven by the free energy released in ATP hydrolysis. After each potential change, it either relaxes towards a new potential minimum, as demonstrated in Figure 6, or can escape by fluctuation to another minimum. A large binding potential amplitude U_{0} >> k_{B}T, exceeding 10–12 k_{B}T (see Figure 6 and the corresponding discussion in [84] to understand why), makes the motor strong. For a large U_{0}, the probability to escape is small, and the motor will typically slide down to a new minimum; its mechanical motion along the microtubule will then be completely synchronized with the potential flashes and conformational cycles. It steps (stochastically, but unidirectionally) to the right in Figure 6 with mean velocity v = Lα_{1}/2. In such a way, using a power-stroke-like mechanism, a strong motor such as kinesin II (with a stalling force of ≈ 6–8 pN) can completely overcome subdiffusion and transport even subdiffusing (when free) cargos very efficiently. This requires, however, that the flashing occurs more slowly than the relaxation. The larger the cargo, the larger the fractional friction coefficient η_{α}, and the slower the relaxation. The relaxation is algebraically slow. However, it can be sufficiently fast in absolute terms on the time scale 1/α_{1}; thus, this mechanism is realized for sufficiently small cargos. The results of [83–85] indicate that smaller cargos, 20–100 nm, will typically be transported by strong kinesin motors quite normally, ⟨δx(t)⟩ ∝ t^{α_{eff}} with α_{eff} = 1, at typical motor turnover frequencies ν = α_{1}/2 ≈ 1–200 Hz, provided that f_{L} = 0. This already explains why the diffusional exponent β ≈ 2α_{eff} can be larger than 2α.
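The power-stroke synchronization just described can be illustrated with a toy computation. The sketch below is the noiseless, strong-binding limit of a flashing sawtooth potential, not the full viscoelastic model of [85], and all parameter values are hypothetical: an overdamped particle in an asymmetric sawtooth that is shifted by half a period every half-cycle slides deterministically into the nearest new minimum after each switch, advancing by L/2 per flash.

```python
import numpy as np

# Hypothetical illustration parameters (not fitted to any real motor).
L, a, U0, eta = 1.0, 0.3, 10.0, 1.0     # period, asymmetry, barrier height, friction
dt, half_period, n_half = 1e-4, 1.0, 10

def force(x, shifted):
    """Minus the slope of a sawtooth potential with minima at n*L
    (or n*L + L/2 when shifted) and maxima a distance a to their right."""
    if shifted:
        x = x + L / 2.0
    xr = x % L                           # position within one period
    return -U0 / a if xr < a else U0 / (L - a)

x, t = 0.0, 0.0
for _ in range(int(n_half * half_period / dt)):
    shifted = (int(t / half_period) % 2 == 1)   # flash the potential each half period
    x += dt * force(x, shifted) / eta           # overdamped, noiseless dynamics
    t += dt

print(x)   # ~ (n_half - 1) * L/2 = 4.5: one half-step L/2 per flash
```

The first half-period produces no motion (the particle already sits in a minimum), and every subsequent flash triggers a deterministic slide by L/2, i.e., the mean velocity is L per full cycle, in line with v = Lα_{1}/2 for a strong motor.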
However, for larger cargos of 100–300 nm, at larger turnover frequencies, and when the motor works against a constant loading force f_{L}, an anomalous transport regime emerges with α ≤ α_{eff} ≤ 1. Clearly, the transport becomes anomalous when f_{L} approaches the stalling force F. The effective transport exponent α_{eff} is thus essentially determined by the binding potential strength, the motor operating frequency, the cargo size, and the loading force, apart from α.
It is very surprising that the thermodynamic efficiency of such transport can be very high even within the anomalous transport regime. This result is not trivial at all. Indeed, the useful work done by a motor against the loading force f_{L} in the anomalous regime scales sublinearly in time, W(t) = f_{L}⟨δx(t)⟩ ∝ t^{α_{eff}} [83,88,89]. However, the free energy transformed into directional motion scales generally as E_{in}(t) ∝ t^{γ}, where 0 < γ ≤ 1. Here, γ = 1 for rocking or flashing ratchets driven either by a periodic or random two-state force, or by random fluctuations of the potential characterized by a well-defined mean turnover rate ν = ν_{1}ν_{2}/(ν_{1} + ν_{2}). Then, E_{in}(t) = Δμ_{ATP}νt. In the energy balance, the rest, E_{in}(t) − W(t), is dissipated as a net heat Q(t) transferred to the environment. The thermodynamic efficiency is thus [85]

R(t) = W(t)/E_{in}(t) ∝ t^{−λ},
where λ = γ − α_{eff}. Hence, λ = 1 − α_{eff} for γ = 1. The efficiency declines algebraically in time, like the mean power P_{W}(t) = W(t)/t ∝ t^{α_{eff}−1}. However, transiently, over the typical time required to relocate a cargo within a cell, it can be very high, especially when α_{eff} is close to one. An even more interesting result occurs in the case of bidirectional mechanochemical coupling, because the biochemical cycling rates ν_{1,2}(x) can then depend strongly on the mechanical motion for a sufficiently large U_{0}, when allosteric effects start to play a very profound role. Indeed, if the available Δμ_{ATP} becomes smaller than the sum of the energies required to raise the potential energy of the motor by two potential flashes (see the vertical arrow in Figure 6) during the two halves of one cycle, the enzyme cycling in its conformational space will generally not stop. It can, however, start to occur anomalously slowly, with a power exponent γ < 1. The average number of forward enzyme turnovers occurring with the consumption of ATP molecules then scales as N_{turn}(t) ∝ t^{γ} in time, and E_{in}(t) = Δμ_{ATP}N_{turn}(t) ∝ t^{γ}. This indeed happens within the model considered here; see [85] for a particular example with U_{0} = 30k_{B}T_{r} ≈ 0.75 eV and Δμ_{ATP} = 20k_{B}T_{r} ≈ 0.5 eV, where γ ≈ 0.62 and α_{eff} ≈ 0.556 at the optimal load f_{L} ≈ 8.5 pN, while the motor pulls a large cargo at the same time. The thermodynamic efficiency declines in this case very slowly, with λ ≈ 0.067, so that R(t) is still about 70%(!) at the end point of the simulations, corresponding to a physical time of 3 s. Such a high efficiency is very surprising and provides one more lesson that challenges our intuition and allows us to recognize the power of the FDT on the nanoscale. For microscopic and nanoscopic motion occurring at thermal equilibrium, the energy lost to frictional processes is regained from random thermal forces. Therefore, heat losses can, in principle, be small even for an anomalously strong dissipation.
This is the reason why attempts to reduce friction on the nanoscale are misguided. Quite counterintuitively, this can even hamper the efficiency, down to zero, as the so-called dissipationless Hamiltonian (pseudo)motors reveal. One should think differently.
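To get a feel for how slowly the algebraic decay R(t) ∝ t^{−λ} proceeds for the value λ ≈ 0.067 quoted above, one can evaluate the relative decline over several decades of time (a simple numerical illustration; the absolute normalization of R is not fixed by the scaling law):

```python
lam = 0.067                    # decay exponent lambda = gamma - alpha_eff from [85]

# Relative decline of R(t) ~ t**(-lam) from t = 1 ms to t = 3 s:
ratio = (3.0 / 1e-3) ** (-lam)
print(ratio)                   # ~ 0.58: less than a factor of two over 3.5 decades
```

An efficiency that starts high thus remains high over the whole physically relevant transport time, which is exactly the behavior reported in the simulations of [85].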
The efficiency at maximum power can also be high in the normal transport regime of the discussed model. Indeed, for U_{0} = 30k_{B}T_{r} and a smaller cargo in [85], the transport remains almost normal until the maximum of the efficiency, about 80%, is reached at an optimal load f_{L} ≈ 9 pN (see Figure 8 in [85], where f_{0} corresponds to f_{L} here). The nearly linear dependence of the efficiency on the load, until it reaches about 70%, indicates that the motor steps with almost the same maximal velocity as at zero loading force f_{L}. The following simple heuristic considerations can be used to rationalize the numerical results. The motor develops a maximal driving force F, which depends on U_{0}, the motor turnover rate, and the temperature (via an entropic contribution); see Figure 6 in [84]. This is the stalling force. The larger U_{0}, the stronger the motor and the larger F. Let us assume that the motor stepping velocity declines from v_{0} to zero with increasing loading force, f_{L}, as v(f_{L}) = v_{0}[1 − (f_{L}/F)^{a}], where a ≥ 1 is a power-law exponent. Within the linear minimalist model of the motor considered above (and also in an inefficient transport regime within the considered model), a = 1, i.e., the motor velocity declines linearly with the load. However, in a highly efficient nonlinear regime, this dependence is strongly nonlinear, a >> 1. The maximum of the motor power P_{W}(f_{L}) = f_{L}v(f_{L}) is reached at f_{L}* = F/(1 + a)^{1/a}, with v(f_{L}*) = av_{0}/(1 + a). For a = 1, f_{L}* = F/2 and the dependence P_{W}(f_{L}) is parabolic. With increasing U_{0}, a also strongly increases and the dependence P_{W}(f_{L}) becomes strongly skewed, in agreement with the numerics. Since the input motor power, P_{in}, does not depend on the load within our model in the energy ratchet regimes, the motor efficiency R = P_{W}/P_{in} simply mirrors P_{W}. Hence, the maximum of R versus f_{L} does correspond to the efficiency at maximum power, and it can exceed 1/2.
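These heuristics are easy to verify numerically. The sketch below (F and v_0 are arbitrary illustrative units) maximizes P_W(f_L) = f_L·v_0[1 − (f_L/F)^a] on a grid and compares the result with the closed-form expressions f_L* = F/(1 + a)^{1/a} and v(f_L*) = a·v_0/(1 + a):

```python
import numpy as np

F, v0 = 1.0, 1.0                         # arbitrary units for illustration
fL = np.linspace(0.0, F, 200001)[:-1]    # load grid, excluding f_L = F itself

for a in (1.0, 10.0):
    P = fL * v0 * (1.0 - (fL / F) ** a)  # motor output power P_W(f_L)
    f_num = fL[np.argmax(P)]             # numerical maximum-power load
    f_ana = F / (1.0 + a) ** (1.0 / a)   # closed-form maximum-power load
    v_ana = a * v0 / (1.0 + a)           # velocity at maximum power
    print(a, f_num, f_ana, v_ana)
    assert abs(f_num - f_ana) < 1e-3     # grid maximum matches the formula
```

For a = 1 the maximum sits at f_L* = F/2 with v(f_L*) = v_0/2 (the parabolic case), while for a = 10 it shifts to f_L* ≈ 0.79 F with v(f_L*) ≈ 0.91 v_0, i.e., P_W(f_L) becomes strongly skewed toward the stalling force, as described above.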
The same heuristic considerations can be applied to the results presented in [93] for very efficient normal motors. Of course, these results are not necessarily experimentally relevant for, e.g., the known kinesin I motors, whose maximal efficiency is about 50% [3]. However, our theory can be very relevant for devising artificial motors with other tasks, because it provides a biophysically very reasonable model in which the efficiency at maximum power can exceed the Jacobi bound of linear stochastic dynamics. It must be stressed, however, that in the anomalous transport regime one cannot define a power, and one should introduce the notion of subpower instead [88,89].
One should also note the following. Even at f_{L} = 0, when the thermodynamic efficiency is formally zero, R(t) = 0, something useful is done: the cargo is transferred over a certain distance by overcoming the dissipative resistance of the environment. However, neither the potential energy of the motor nor that of the cargo is increased. This is actually the normal modus operandi of linear molecular motors such as kinesins I or II, very different from ionic pumps, whose primary goal is to increase the (electro)chemical potential of the transferred ions (i.e., to charge a battery). Such an R = 0 regime should, however, be contrasted with the zero efficiency of frictionless rocking pseudoratchets. In our case, useful work is done against the environment. Pseudoratchets are not capable of doing any useful work in principle.
Conclusion
In this contribution, some of the main operating principles of minuscule Brownian machines operating on the nano- and microscale have been reviewed. Unlike in macroscopic machines, thermal fluctuations and noise play a profound and, moreover, very constructive role in the microworld. In fact, thermal noise acts as a stochastic lubricant, which supplies energy to Brownian machines to compensate for their frictional losses. This is the very essence of the fluctuation–dissipation theorem: both processes (i.e., frictional losses and energy gain from the thermal noise) compensate each other completely on average at thermal equilibrium. Classically, thermal noise vanishes at the absolute zero of temperature (which physically cannot be achieved anyway, in accordance with the third law of thermodynamics), and only then would friction win (classically). However, quantum noise (vacuum zero-point fluctuations) is present even at absolute zero. Therefore, friction cannot win even at absolute zero, and quantum Brownian motion never stops. These fundamental facts allow, in principle, for a complete transfer of the driver energy into useful work by isothermal Brownian engines. Their thermodynamic efficiency approaches unity when the net heat losses vanish. This happens when the motor operates close to thermal equilibrium, and it can occur at an arbitrarily strong dissipation at ambient temperature. It is not necessary to perform work in the deep quantum cold or to strive for high-quality quantum coherence. A striking example of this is provided by the high transport and thermodynamic efficiency of molecular motors in the subdiffusive transport regime. Operating anomalously slowly (in mathematical terms, i.e., exhibiting a sublinear dependence of both the transport distance and the number of motor turnovers on time), such motors can be quite fast in absolute terms and can work under a heavy load [85].
In this, and also in other aspects, the intuitive understanding of subdiffusion and subtransport as being extremely slow can be very misleading [97–99]. On the other hand, frictionless rocking pseudoratchets cannot do any useful work, as we clarified in this review.
The scientifically sound possibility of approaching the theoretical maximum of the thermodynamic efficiency of isothermal motors at arbitrarily strong dissipation and ambient temperatures is intrinsically related to the possibility of reversible dissipative classical computing without heat production. However, such an adiabatic operation would be infinitesimally slow. Clearly, such a motor or computer is not of practical use. Moreover, the adiabatic operation of dissipative pumps involving discrete energy levels is possible only for a vanishing load. Here, a natural question emerges: What is the thermodynamic efficiency at maximum power? The linear-dynamics result that R_{max} = 1/2 is the theoretical upper bound is, however, completely wrong within nonlinear stochastic dynamics, as shown in this review with three examples. This opens the door for the design of highly efficient Brownian and molecular motors. Moreover, the recent model results in [85] for the normal transport of sufficiently small subdiffusing (when free) cargos by a kinesin motor, with a very high thermodynamic efficiency at an optimal external load, do imply that the thermodynamic efficiency at maximum power within that model can also be well above 50%. The earlier results obtained in [93] for normal-diffusion molecular motors within a very similar model [91,92] also corroborate such conclusions. Such models are able to mimic allosteric interactions within minimalist model setups. Chemical allosteric interactions, which are intrinsically highly nonlinear, can optimize the performance of various molecular motors. This line of reasoning is especially important for the design of artificial molecular motors [6] and should be pursued further.
Quantum effects are also important to consider in designing highly efficient molecular machines, even when quantum coherence does not play any role (that is, on the level of rate dynamics with quantum rates, as in the Pauli master equation). In particular, it has been shown in this review, within the simplest possible model, that quantum effects (related to the inverted regime of quantum particle transfer) can lead to thermodynamic efficiencies at maximum power larger than one-half for a machine operating both in the forward and the reverse direction. Quantum coherence could also play a role here, which should be clarified in further research. Undoubtedly, quantum coherence is central for quantum computing, which is obviously reversible [55]. However, this is a different story.
I hope that the readers of this review will find it especially useful in liberating themselves (and possibly others) from some common fallacies, both spoken and unspoken, which unfortunately have pervaded the literature and hinder progress. With this work, a valid, coherent picture emerges.
References

Pollard, T. D.; Earnshaw, W. C.; Lippincott-Schwarz, J. Cell Biology, 2nd ed.; Saunders Elsevier: Philadelphia, PA, U.S.A., 2008.
Hill, T. L. Free Energy Transduction and Biochemical Cycle Kinetics; Springer: Berlin, Germany, 1980.
Nelson, P. Biological Physics: Energy, Information, Life; W. H. Freeman: New York, NY, U.S.A., 2003.
Kay, E. R.; Leigh, D. A.; Zerbetto, F. Angew. Chem., Int. Ed. 2006, 46, 72–191. doi:10.1002/anie.200504313
Erbas-Cakmak, S.; Leigh, D. A.; McTernan, C. T.; Nussbaumer, A. L. Chem. Rev. 2015, 115, 10081–10206. doi:10.1021/acs.chemrev.5b00146
Cheng, C.; McGonigal, P. R.; Stoddart, J. F.; Astumian, R. D. ACS Nano 2015, 9, 8672–8688. doi:10.1021/acsnano.5b03809
Callen, H. B. Thermodynamics and an Introduction to Thermostatistics, 2nd ed.; John Wiley & Sons: New York, NY, U.S.A., 1985.
Curzon, F. L.; Ahlborn, B. Am. J. Phys. 1975, 43, 22–24. doi:10.1119/1.10023
Yoshida, M.; Muneyuki, E.; Hisabori, T. Nat. Rev. Mol. Cell Biol. 2001, 2, 669–677. doi:10.1038/35089509
Kubo, R. Rep. Prog. Phys. 1966, 29, 255. doi:10.1088/0034-4885/29/1/306
Bogolyubov, N. N. On Some Statistical Methods in Mathematical Physics; Ukrainian Academy of Sciences: Kiev, Ukraine, 1945; pp 115–137. In Russian.
Ford, G. W.; Kac, M.; Mazur, P. J. Math. Phys. 1965, 6, 504. doi:10.1063/1.1704304
Ford, G. W.; Lewis, J. T.; O’Connell, R. F. Phys. Rev. A 1988, 37, 4419–4428. doi:10.1103/PhysRevA.37.4419
Caldeira, A. O.; Leggett, A. J. Ann. Phys. 1983, 149, 374–456. doi:10.1016/0003-4916(83)90202-6
Toyabe, S.; Okamoto, T.; Watanabe-Nakayama, T.; Taketani, H.; Kudo, S.; Muneyuki, E. Phys. Rev. Lett. 2010, 104, 198103. doi:10.1103/PhysRevLett.104.198103
Weiss, U. Quantum Dissipative Systems, 2nd ed.; World Scientific Publishing Co Pte Ltd: Singapore, 1999.
Gardiner, C. W.; Zoller, P. Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics; Springer: Berlin, Germany, 2000.
Lindblad, G. Commun. Math. Phys. 1976, 48, 119. doi:10.1007/BF01608499
Nakajima, S. Prog. Theor. Phys. 1958, 20, 948–959. doi:10.1143/PTP.20.948
Zwanzig, R. J. Chem. Phys. 1960, 33, 1338–1341. doi:10.1063/1.1731409
Argyres, P. N.; Kelley, P. L. Phys. Rev. 1964, 134, A98. doi:10.1103/PhysRev.134.A98
Goychuk, I.; Hänggi, P. Adv. Phys. 2005, 54, 525–584. doi:10.1080/00018730500429701
Sekimoto, K. J. Phys. Soc. Jpn. 1997, 66, 1234–1237. doi:10.1143/JPSJ.66.1234
Wyman, J. Proc. Natl. Acad. Sci. U. S. A. 1975, 72, 3983–3987.
Rozenbaum, V. M.; Yang, D.-Y.; Lin, S. H.; Tsong, T. Y. J. Phys. Chem. B 2004, 108, 15880–15889. doi:10.1021/jp048200a
Qian, H. J. Phys.: Condens. Matter 2005, 17, S3783–S3794. doi:10.1088/0953-8984/17/47/010
Goychuk, I.; Grifoni, M.; Hänggi, P. Phys. Rev. Lett. 1998, 81, 649–652. doi:10.1103/PhysRevLett.81.649
Lamoreaux, S. K. Rep. Prog. Phys. 2005, 68, 201–236. doi:10.1088/0034-4885/68/1/R04
Schmiedl, T.; Seifert, U. EPL 2008, 83, 30005. doi:10.1209/0295-5075/83/30005
Seifert, U. Phys. Rev. Lett. 2011, 106, 020601. doi:10.1103/PhysRevLett.106.020601
Van den Broeck, C.; Kumar, N.; Lindenberg, K. Phys. Rev. Lett. 2012, 108, 210602. doi:10.1103/PhysRevLett.108.210602
Golubeva, N.; Imparato, A.; Peliti, L. EPL 2012, 97, 60005. doi:10.1209/0295-5075/97/60005
Stratonovich, R. L. Radiotekh. Elektron. (Moscow, Russ. Fed.) 1958, 3, 497.
Stratonovich, R. L. Topics in the Theory of Random Noise; Gordon and Breach: New York, NY, U.S.A., 1967; Vol. II.
Risken, H. The Fokker-Planck Equation: Methods of Solution and Applications, 2nd ed.; Springer: Berlin, Germany, 1989.
Reimann, P. Phys. Rep. 2002, 361, 57–265. doi:10.1016/S0370-1573(01)00081-3
Wikström, M. Biochim. Biophys. Acta 2004, 1655, 241–247. doi:10.1016/j.bbabio.2003.07.013
Atkins, P.; de Paula, J. Physical Chemistry; Oxford University Press: Oxford, United Kingdom, 2006; pp 896–904.
Nitzan, A. Chemical Dynamics in Condensed Phases: Relaxation, Transfer and Reactions in Condensed Molecular Systems; Oxford University Press: Oxford, United Kingdom, 2007.
Goychuk, I. A.; Petrov, E. G.; May, V. Chem. Phys. Lett. 1996, 253, 428–437. doi:10.1016/0009-2614(96)00323-5
Goychuk, I. A.; Petrov, E. G.; May, V. Phys. Rev. E 1997, 56, 1421–1428. doi:10.1103/PhysRevE.56.1421
Goychuk, I. A.; Petrov, E. G.; May, V. J. Chem. Phys. 1997, 106, 4522–4530. doi:10.1063/1.473495
Flach, S.; Yevtuschenko, O.; Zolotaryuk, Y. Phys. Rev. Lett. 2000, 84, 2358–2361. doi:10.1103/PhysRevLett.84.2358
Goychuk, I.; Hänggi, P. Directed current without dissipation: reincarnation of a Maxwell-Loschmidt demon. In Lecture Notes in Physics: Stochastic Processes in Physics, Chemistry, and Biology; Freund, J.; Pöschel, T., Eds.; Springer: Berlin, Germany, 2000; Vol. 557, pp 7–20.
Wonneberger, W.; Breymayer, H.-J. Z. Phys. B 1984, 56, 241–246. doi:10.1007/BF01304177
Goychuk, I.; Hänggi, P. Europhys. Lett. 1998, 43, 503–509. doi:10.1209/epl/i1998-00389-2
Goychuk, I.; Hänggi, P. J. Phys. Chem. B 2001, 105, 6642–6647. doi:10.1021/jp010102r
Dykman, M. I.; Rabitz, H.; Smelyanskiy, V. N.; Vugmeister, B. E. Phys. Rev. Lett. 1997, 79, 1178. doi:10.1103/PhysRevLett.79.1178
Gaspard, P. Physica A 2006, 369, 201–246. doi:10.1016/j.physa.2006.04.010
Paschotta, R. Opt. Photonik 2009, 4, 48–50. doi:10.1002/opph.201190028
Astumian, R. D. Proc. Natl. Acad. Sci. U. S. A. 2007, 104, 19715–19718. doi:10.1073/pnas.0708040104
Landauer, R. Physica A 1999, 263, 63–67. doi:10.1016/S0378-4371(98)00513-5
Bennett, C. H. Int. J. Theor. Phys. 1982, 21, 905–940. doi:10.1007/BF02084158
Fredkin, E.; Toffoli, T. Int. J. Theor. Phys. 1982, 21, 219–253. doi:10.1007/BF01857727
Feynman, R. Feynman Lectures on Computation; Addison-Wesley: Reading, MA, U.S.A., 1996.
Kish, L. B. Phys. Lett. A 2002, 305, 144–149. doi:10.1016/S0375-9601(02)01365-8
Jardetzky, O. Nature (London) 1966, 211, 969. doi:10.1038/211969a0
Anandan, J.; Christian, J.; Wanelik, K. Am. J. Phys. 1997, 65, 180–185. doi:10.1119/1.18570
Sinitsyn, N. A.; Nemenman, I. EPL 2007, 77, 58001. doi:10.1209/0295-5075/77/58001
Goychuk, I. Mol. Simul. 2006, 32, 717–725. doi:10.1080/08927020600857297
Barkai, E.; Garini, Y.; Metzler, R. Phys. Today 2012, 65, 29–37. doi:10.1063/PT.3.1677
Höfling, F.; Franosch, T. Rep. Prog. Phys. 2013, 76, 046602. doi:10.1088/0034-4885/76/4/046602
Guigas, G.; Kalla, C.; Weiss, M. Biophys. J. 2007, 93, 316–323. doi:10.1529/biophysj.106.099267
Saxton, M. J.; Jacobson, K. Annu. Rev. Biophys. Biomol. Struct. 1997, 26, 373–399. doi:10.1146/annurev.biophys.26.1.373
Seisenberger, G.; Ried, M. U.; Endress, T.; Büning, H.; Hallek, M.; Bräuchle, C. Science 2001, 294, 1929–1932. doi:10.1126/science.1064103
Golding, I.; Cox, E. C. Phys. Rev. Lett. 2006, 96, 098102. doi:10.1103/PhysRevLett.96.098102
Tolić-Nørrelykke, I. M.; Munteanu, E.-L.; Thon, G.; Oddershede, L.; Berg-Sørensen, K. Phys. Rev. Lett. 2004, 93, 078102. doi:10.1103/PhysRevLett.93.078102
Jeon, J.-H.; Tejedor, V.; Burov, S.; Barkai, E.; Selhuber-Unkel, C.; Berg-Sørensen, K.; Oddershede, L.; Metzler, R. Phys. Rev. Lett. 2011, 106, 048103. doi:10.1103/PhysRevLett.106.048103
Parry, B. R.; Surovtsev, I. V.; Cabeen, M. T.; O’Hern, C. S.; Dufresne, E. R.; Jacobs-Wagner, C. Cell 2014, 156, 183–194. doi:10.1016/j.cell.2013.11.028
Goychuk, I. Viscoelastic Subdiffusion: Generalized Langevin Equation Approach. In Advances in Chemical Physics; Rice, S. A.; Dinner, A. R., Eds.; Wiley: Hoboken, NJ, U.S.A., 2012; Vol. 150, pp 187–253. doi:10.1002/9781118197714.ch5
Waigh, T. A. Rep. Prog. Phys. 2005, 68, 685. doi:10.1088/0034-4885/68/3/R04
Goychuk, I.; Hänggi, P. Phys. Rev. Lett. 2007, 99, 200601. doi:10.1103/PhysRevLett.99.200601
Goychuk, I. Phys. Rev. E 2015, 92, 042711. doi:10.1103/PhysRevE.92.042711
Goychuk, I. Phys. Rev. E 2009, 80, 046125. doi:10.1103/PhysRevE.80.046125
Siegle, P.; Goychuk, I.; Talkner, P.; Hänggi, P. Phys. Rev. E 2010, 81, 011136. doi:10.1103/PhysRevE.81.011136
Siegle, P.; Goychuk, I.; Hänggi, P. Phys. Rev. Lett. 2010, 105, 100602. doi:10.1103/PhysRevLett.105.100602
Siegle, P.; Goychuk, I.; Hänggi, P. EPL 2011, 93, 20002. doi:10.1209/0295-5075/93/20002
Caspi, A.; Granek, R.; Elbaum, M. Phys. Rev. E 2002, 66, 011916. doi:10.1103/PhysRevE.66.011916
Bruno, L.; Levi, V.; Brunstein, M.; Despósito, M. A. Phys. Rev. E 2009, 80, 011912. doi:10.1103/PhysRevE.80.011912
Robert, D.; Nguyen, T.-H.; Gallet, F.; Wilhelm, C. PLoS One 2010, 5, e10046. doi:10.1371/journal.pone.0010046
Harrison, A. W.; Kenwright, D. A.; Waigh, T. A.; Woodman, P. G.; Allan, V. J. Phys. Biol. 2013, 10, 036002. doi:10.1088/1478-3975/10/3/036002
Return to citation in text: [1] 
Bruno, L.; Salierno, M.; Wetzler, D. E.; Despósito, M. A.; Levi, V. PLoS One 2011, 6, e18332. doi:10.1371/journal.pone.0018332
Return to citation in text: [1] 
Goychuk, I.; Kharchenko, V. O.; Metzler, R. PLoS One 2014, 9, e91700. doi:10.1371/journal.pone.0091700
Return to citation in text: [1] [2] [3] [4] [5] 
Goychuk, I.; Kharchenko, V. O.; Metzler, R. Phys. Chem. Chem. Phys. 2014, 16, 16524–16535. doi:10.1039/C4CP01234H
Return to citation in text: [1] [2] [3] [4] [5] [6] [7] 
Goychuk, I. Phys. Biol. 2015, 12, 016013. doi:10.1088/14783975/12/1/016013
Return to citation in text: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] 
Goychuk, I. Chem. Phys. 2010, 375, 450–457. doi:10.1016/j.chemphys.2010.04.009
Return to citation in text: [1] 
Goychuk, I.; Kharchenko, V. Phys. Rev. E 2012, 85, 051131. doi:10.1103/PhysRevE.85.051131
Return to citation in text: [1] 
Kharchenko, V. O.; Goychuk, I. Phys. Rev. E 2013, 87, 052119. doi:10.1103/PhysRevE.87.052119
Return to citation in text: [1] [2] [3] 
Goychuk, I.; Kharchenko, V. O. Math. Modell. Nat. Phenom. 2013, 8, 144–158. doi:10.1051/mmnp/20138210
Return to citation in text: [1] [2] [3] 
Kharchenko, V.; Goychuk, I. New J. Phys. 2012, 14, 043042. doi:10.1088/13672630/14/4/043042
Return to citation in text: [1] 
Jülicher, F.; Ajdari, A.; Prost, J. Rev. Mod. Phys. 1997, 69, 1269. doi:10.1103/RevModPhys.69.1269
Return to citation in text: [1] [2] [3] 
Astumian, R. D.; Bier, M. Biophys. J. 1996, 70, 637–653. doi:10.1016/S00063495(96)796054
Return to citation in text: [1] [2] [3] 
Parmeggiani, A.; Jülicher, F.; Ajdari, A.; Prost, J. Phys. Rev. E 1999, 60, 2127. doi:10.1103/PhysRevE.60.2127
Return to citation in text: [1] [2] [3] 
Herrchen, M.; Öttinger, H. C. J. NonNewtonian Fluid Mech. 1997, 68, 17. doi:10.1016/S03770257(96)01498X
Return to citation in text: [1] 
Baker, N. A.; Sept, D.; Joseph, S.; Holst, M. J.; McCammon, J. A. Proc. Natl. Acad. Sci. U. S. A. 2001, 98, 10037–10041. doi:10.1073/pnas.181342398
Return to citation in text: [1] 
PerezCarrasco, R.; Sancho, J. M. Biophys. J. 2010, 98, 2591–2600. doi:10.1016/j.bpj.2010.02.027
Return to citation in text: [1] 
Goychuk, I. Phys. Rev. E 2012, 86, 021113. doi:10.1103/PhysRevE.86.021113
Return to citation in text: [1] 
Goychuk, I. Fluct. Noise Lett. 2012, 11, 1240009. doi:10.1142/S0219477512400093
Return to citation in text: [1] 
Goychuk, I.; Kharchenko, V. O. Phys. Rev. Lett. 2014, 113, 100601. doi:10.1103/PhysRevLett.113.100601
Return to citation in text: [1]
© 2016 Goychuk; licensee Beilstein-Institut.
This is an Open Access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The license is subject to the Beilstein Journal of Nanotechnology terms and conditions: (http://www.beilstein-journals.org/bjnano)