Mechanism for the Generation of Robust Circadian Oscillations through Ultrasensitivity and Differential Binding Affinity

Biochemical circadian rhythm oscillations play an important role in many signaling mechanisms. In this work, we explore some of the biophysical mechanisms responsible for sustaining robust oscillations by constructing a minimal but analytically tractable model of the circadian oscillations in the KaiABC protein system found in the cyanobacterium S. elongatus. In particular, our minimal model explicitly accounts for two experimentally characterized biophysical features of the KaiABC protein system, namely, a differential binding affinity and an ultrasensitive response. Our analytical work shows how these mechanisms might be crucial for promoting robust oscillations even in suboptimal nutrient conditions. Our analytical and numerical work also identifies mechanisms by which biological clocks can stably maintain a constant time period under a variety of nutrient conditions. Finally, our work also explores the thermodynamic costs associated with the generation of robust sustained oscillations and shows that the net rate of entropy production alone might not be a good figure of merit to assess the quality of oscillations.

FIG. S1: Each oscillator can exist in one of three forms, $P_1$, $P_2$, $P_3$. The $P_1$ form corresponds to normal KaiC during daytime and requires KaiA to move forward in phase. The $P_3$ form corresponds to KaiC with KaiA attached as an assistant molecule, $P_1 + \text{KaiA} \rightarrow P_3$. The $P_2$ form corresponds to KaiC in its dephosphorylation phase. The nucleotide-bound states, KaiB binding and KaiA sequestration are implicitly assumed in the model. $A_f$ denotes the free KaiA concentration and $A_t$ the total KaiA concentration. Phosphorylation corresponds to the phase $\phi$: it increases linearly from 0 to 1 as $\phi$ varies from 0 to $\pi$ and decreases linearly from 1 to 0 as $\phi$ goes from $\pi$ to $2\pi$. Connections between the $P_1$ states denote the spontaneous dephosphorylation of KaiC subunits in the absence of KaiA. Multiple connections between $P_1$ and $P_2$ near the highly phosphorylated states allow the system to move into the dephosphorylation phase even before all the KaiC is completely phosphorylated. The $P_2$ states sequester KaiA, corresponding to the fact that during the dephosphorylation phase, KaiB-bound KaiC sequesters KaiA and renders it inactive. The master equation for this system is given below.

S2. TIME INDEPENDENT STEADY STATE
A. Case I: $k_1 = 0$

We set $k_1 = 0$, $j_0 = N$, $K_{d1} = K_d = K_D$, and we also make the following changes in notation: $k_{Ab,0}\,\alpha^{-\pi} = k_{Ab}$, $k_{dp} = k_2$, $\alpha \to \alpha^{\pi/N}$. With these simplifications, we can solve for the steady-state solution of the system and then use linear stability analysis around the steady state to see how oscillations are set up.
There is just one flux, $J$, in the entire system. Once we solve for this unique flux $J$, we can obtain the expressions for the probability distribution of the different states in the system.
where $a$ and $b$ are the labels for $P_1(0)$ and $P_3(0)$ respectively. For the $P_1-P_3$ connection at $j = 0$ and $j = N$ we have, For the other $P_3-P_3$ connections we have, For the $P_1-P_3$ connections in the bulk we have, For the $P_1-P_2$ connection at $j = 0$ we have, where $c$ is the label for $P_2(0)$. Similarly, for the other $P_2-P_2$ connections we have the following, Substituting the expressions for $P_1(N)$ (S2.3), $P_2(N)$ (S2.12) and $P_3(N)$ (S2.7) into (S2.14), we get, Since $N$ is large and $\gamma < 1$, the LHS and RHS of (S2.15) are dominated by the terms containing $1/\gamma^N$. Thus only the first term on the LHS and the third term on the RHS of (S2.15) contribute; the other terms can be ignored. This leads to an expression for $J$.
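The single-flux structure of this case can be checked numerically. The sketch below is a toy unicyclic network with hypothetical rates (not the full KaiABC model): it builds a rate matrix for a ring of states, extracts the steady state from the null space, and verifies that the net flux is identical across every link.

```python
import numpy as np

M = 6
kf = np.full(M, 1.0); kf[2] = 0.4      # forward rates i -> i+1 (mod M), illustrative
kb = np.full(M, 0.2)                   # backward rates i+1 -> i (mod M)

W = np.zeros((M, M))
for i in range(M):
    j = (i + 1) % M
    W[j, i] += kf[i]; W[i, i] -= kf[i]  # gain/loss for the forward hop
    W[i, j] += kb[i]; W[j, j] -= kb[i]  # gain/loss for the backward hop

# steady state = null-space eigenvector of the rate matrix, normalized
w, v = np.linalg.eig(W)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# the net flux across every link is one and the same J
J = np.array([kf[i] * p[i] - kb[i] * p[(i + 1) % M] for i in range(M)])
```

Because the forward rate products exceed the backward ones, detailed balance is broken and $J$ is strictly positive on every link, as in the driven phosphorylation cycle.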
B. Case II: $k_1 \neq 0$

The case with $k_1 \neq 0$ is challenging to solve. Unlike the previous case, where only a single flux existed in the entire system, here there are many fluxes in the system.
In order to obtain the rough form of the solution for the $P_1$, $P_2$ and $P_3$ states, we make some assumptions that are supported by numerical observations. We also pass to the continuum limit, where the discrete master equations describing the system become a set of coupled PDEs. The boundaries of our problem are $x = 0$ and $x = x_0$, where $x_0$ is the point at which the $P_1-P_2$ connections start. Numerically it is observed that at steady state the probability density in the states beyond $x_0$ is negligible compared to that in the states before it. Thus we take $x_0$ as our boundary. We solve the problem for the states in the bulk and then impose conditions such that the boundary conditions are satisfied.
and so on and so forth for the $P_2$ and $P_3$ states, using Taylor expansions. We begin with the ansatz that when $k_1$ is increased gradually from 0 to a very small number, the form of the probability distribution does not change drastically. Keeping this in mind we make the assumption $k_{Af} A_f P_1(x) \approx k_{Ab}\,\alpha^x P_3(x)$ for all $x$ in the bulk. This assumption is inspired by our solution of the $k_1 = 0$ case and is also supported by numerical observations. It can be written more precisely as, Adding the evolution equations for $P_1$ and $P_3$ in the bulk and substituting the approximation (S2.29), we get, At steady state, $\partial_t P_1(x) = 0 = \partial_t P_3(x)$ for all $x$. For $k_1 = 0$ we have $k_{1c} = 0 = k_{1d}$. Thus we have the simple ODE, This can have constant solutions for $P_3(x)$, and this is exactly what we find in the case $k_1 = 0$ (S2.18). The presence of $k_1$ adds an extra term dependent on $P_3$, and because of this term we cannot have constant solutions for $P_3(x)$ (unless the constant solution is $P_3(x) = 0$ for all $x$). Under the assumption that the form of $P_3(x)$ does not deviate significantly from the $k_1 = 0$ solution ($P_3(x) = b = \text{constant}$), we can ignore the terms containing $\partial_x^2 P_3(x)$. We also have $k_{3d} \ll k_{3c}$ and $k_{1d} \ll k_{1c}$, so we can also ignore the terms containing $k_{1d}$ and $k_{3d}$. Thus we finally arrive at the equation, Beyond the boundary at $x = x_0$, the network is constructed in such a way that it either drives the probability into the $P_2$ states, from which it is driven towards the boundary at $x_0$ from the right, or drives the probability towards the boundary at $x_0$ within the $P_1$ states.
Thus we can safely assume that the probability of finding the system in a state beyond the boundary at $x_0$ is close to 0. This is confirmed by numerical results. We can now focus our entire attention on the region $x \in [0, x_0]$. Using conservation of flux we can find the probabilities of all the other states.

FIG. S5: In the main figures, grey corresponds to $k_1 = 0$, cyan to $k_1 = 10^{-4}$, violet to $k_1 = 5 \times 10^{-4}$, red to $k_1 = 10^{-3}$, blue to $k_1 = 5 \times 10^{-3}$, green to $k_1 = 10^{-2}$.
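The continuum limit used above rests on the standard second-order Taylor expansion (a generic sketch; the specific drift and diffusion coefficients of the model are set by the rates in the master equation):

```latex
P(x \pm \Delta x) \;\approx\; P(x) \,\pm\, \Delta x\,\partial_x P(x) \,+\, \frac{(\Delta x)^2}{2}\,\partial_x^2 P(x)
```

so that a discrete hopping term such as $k\,[P(x-\Delta x)-P(x)]$ contributes a drift piece $-k\,\Delta x\,\partial_x P$ and a diffusion piece $+\tfrac{k(\Delta x)^2}{2}\,\partial_x^2 P$, which is how the discrete master equations turn into coupled PDEs.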

C. Calculating the Time-Independent Steady State Numerically
As mentioned earlier, when $k_1 \neq 0$ and there is more than one connection between the $P_1$ and $P_2$ states, i.e. $j_0, x_0 \neq N$, we have multiple fluxes in the system. Nevertheless, we can still find the steady-state time-independent solution $\vec P$, irrespective of whether it is stable or not. An iterative procedure is adopted, whose first step is to find the free KaiA concentration in the system; the procedure is described below. The set of FPEs that describe the evolution of $\vec P$ can be expressed as $\partial \vec P/\partial t = W(\vec P)\,\vec P$, where $\vec P$ is a vector of length $3N + 3$: the first $N+1$ elements correspond to the $P_1$ form, the next $N+1$ elements to the $P_2$ form and the last $N+1$ elements to the $P_3$ form. The rate matrix $W$ is a function of the probabilities through the $A_f$ term, which makes the system nonlinear. If we succeed in finding the free KaiA concentration at steady state, then substituting it back into $W$ leaves a linear system to solve, $\partial \vec P/\partial t = W \vec P$. We can find $A_f^s$ (free KaiA at steady state) using an iterative procedure as follows:

1. Initialize $A_f = A_t$ for the first run and form the rate matrix $W$.
2. At steady state, $\partial \vec P/\partial t = 0 = W \vec P$. Compute the eigenvector corresponding to the null space of $W$ and call it $\vec v_0$.

3. Compute the free KaiA concentration consistent with $\vec v_0$ (using the KaiA sequestration by the $P_2$ states).

4. Update $A_f$ towards this value, where $\delta$ is some appropriate step size.

5. Repeat this procedure until convergence, i.e.
Once we have $A_f^s$, we can find $\vec P^s$.
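The fixed-point iteration above can be sketched in miniature. The code below is an illustrative stand-in, not the full $3N+3$ system: it uses a hypothetical 3-state cycle, an assumed sequestration rule $A_f = A_t - c_{seq}\,p_{seq}$ for the state taken to sequester KaiA, and a damped update with step size $\delta$, mirroring steps 1 through 5.

```python
import numpy as np

def rate_matrix(A_f):
    """Toy 3-state cycle (hypothetical rates): KaiA binding at rate
    k_Af * A_f drives state 0 -> 1; columns sum to zero as for any
    rate matrix."""
    k_Af, k2, k_dp = 1.0, 0.8, 0.5
    return np.array([
        [-k_Af * A_f,  0.0,   k_dp],
        [ k_Af * A_f, -k2,    0.0 ],
        [ 0.0,         k2,   -k_dp],
    ])

def steady_state_Af(A_t=1.0, c_seq=0.6, delta=0.5, tol=1e-10, max_iter=1000):
    A_f = A_t  # step 1: start with all KaiA free
    for _ in range(max_iter):
        W = rate_matrix(A_f)
        # step 2: null-space eigenvector of W is the steady state
        w, v = np.linalg.eig(W)
        v0 = np.real(v[:, np.argmin(np.abs(w))])
        v0 /= v0.sum()
        # step 3 (assumed sequestration rule): state 2 sequesters KaiA
        A_implied = max(A_t - c_seq * v0[2], 0.0)
        # step 4: damped update with step size delta
        A_new = A_f + delta * (A_implied - A_f)
        if abs(A_new - A_f) < tol:  # step 5: stop at convergence
            return A_new, v0
        A_f = A_new
    return A_f, v0

A_f_s, P_s = steady_state_Af()  # self-consistent free KaiA and steady state
```

At convergence the returned $A_f^s$ satisfies the sequestration rule self-consistently, and substituting it back into the rate matrix yields the linear problem described in the text.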

S3. LINEAR STABILITY ANALYSIS
We perturb around the steady-state distribution $\vec P^s$: say $P_k(j) = P_k^s(j) + \delta\,\eta_k(j)$, with $k = 1, 2, 3$ and $j = 0, \ldots, N$. By conservation of probability, we have $\sum_{k,j} \eta_k(j) = 0$. Substituting $P_k(j)$ into the differential equations leads us to the evolution equations for $\eta_k(j)$.
Notice the additional terms in the evolution of $\eta_1$ and $\eta_3$ that depend directly on $\epsilon$ and $\epsilon_{seq}$. The whole system can be expressed as $\partial \vec\eta/\partial t = \widetilde{W}\vec\eta = (W + W')\vec\eta$. The matrix $\widetilde{W}$ can be broken into 9 blocks, each representing interactions between the different types of states, as shown in (S3.5). The interesting blocks of the $W'$ matrix are the $\eta_3-\eta_3$ (S3.6) and $\eta_3-\eta_2$ (S3.7) blocks, which contain most of the terms arising from the nonlinearities.
In short, a linear stability analysis can be performed around the steady state $\vec P^s$ of the system, which yields, up to first order, the evolution of the perturbation vector $\vec\eta$ (see the expressions above for details). We work in a regime where the %ATP, i.e. $K_{d0}$ in our model, plays a major role in deciding whether oscillations take place. The initial condition for every simulation is set to $P_1(0) = 1$ at $t = 0$, which corresponds to starting all reactions with all the KaiC in the unphosphorylated, ADP-bound form. The simulations are allowed to run for some time in order to reach either a time-independent or a time-dependent steady-state behaviour. Calculating the eigenvalues of $\widetilde{W}$ tells us about the stability of the steady state: the presence of eigenvalues with positive real part indicates that the time-independent steady state is unstable and that a time-dependent steady state is present in the system, giving rise to oscillations.
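The eigenvalue criterion can be illustrated with a minimal two-dimensional sketch. The matrix below is a hypothetical linearised matrix, not the actual $\widetilde{W}$: it has a complex-conjugate eigenvalue pair just across the imaginary axis, so the perturbation oscillates with growing amplitude instead of decaying back to the steady state.

```python
import numpy as np

# hypothetical 2x2 linearised matrix with eigenvalues 0.05 +/- 1j
Wt = np.array([[0.05, -1.0],
               [1.00,  0.05]])
lam = np.linalg.eigvals(Wt)
# complex pair with positive real part: the signature of oscillatory instability
print(lam)

# forward-Euler integration of d(eta)/dt = Wt @ eta
eta = np.array([1e-3, 0.0])
for _ in range(1000):
    eta = eta + 0.01 * (Wt @ eta)
# the perturbation has grown: the time-independent state is unstable
print(np.linalg.norm(eta))
```

A negative real part in both eigenvalues would instead make the spiral decay, i.e. a stable time-independent steady state.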

A. Origin of Instability
Increasing $\alpha$ leads to an accumulation of probability density near the highly phosphorylated region of the $P_1$ and $P_3$ states. Eventually, this leads to an instability. The oscillatory state is stable because a higher $\alpha$ provides coherence to the wavepacket, i.e. the phosphorylation wavepacket keeps a narrow width as it moves across the different states [1].
One simple way to understand the emergence of oscillations with increasing $k_1$ is through the Gershgorin circle theorem, which provides a way to estimate the location of the eigenvalues of any square matrix. Simply put, for any square matrix $W$, if we construct the pairs $(W_{ii}, R_i)$, where $R_i = \sum_{j \neq i} |W_{ji}|$, then all the eigenvalues of $W$ lie in the union of the discs of radius $R_i$ centred at $W_{ii}$. The onset of instability means the presence of eigenvalues with positive real part in the $\widetilde{W}$ matrix, as mentioned before. In a normal rate matrix, all the diagonal entries are negative and the off-diagonal entries are positive, in such a way that the sum of the elements in each column is 0. The Gershgorin theorem applies directly to such a matrix and shows that the eigenvalues always have non-positive real part. But in our case, the matrix $\widetilde{W}$ does have negative off-diagonal elements, for instance the term $-\epsilon k_{Af} P_1^s(j)\,\eta_3(i)$, $i \neq j$, in the evolution of $\eta_3(j)$. All such negative off-diagonal terms contain $P_1^s$. It is the presence of these terms that extends the Gershgorin discs into the positive half of the complex plane. So our chances of obtaining a positive eigenvalue increase if $P_1^s(j)$ is larger for all $j$. It now remains to show that as $K_{d0}$ increases, $P_1^s$ either decreases or stays unchanged, and that as $k_1$ increases, $P_1^s$ increases. From (S2.33) we know the form of the solution for $k_1 \neq 0$. For a moment let us take $A_f$ to be fixed; this assumption is justified in the limit when $k_1$ is very small, so that $A_f$ does not change much from the value derived in the $k_1 = 0$ case. For this fixed value of $A_f$, consider two cases: Case I with $k_1 = 0$ and Case II with $k_1 \neq 0$. Fixed $A_f$ implies $\sum_i P_3(i) = \text{constant} = c$. Let us denote the functional form of $P_3(x)$ by $f(x)$ for Case I and by $g(x)$ for Case II. As we have shown previously, $f(x)$ is a constant function and $g(x)$ is a strictly increasing function.
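The Gershgorin argument can be made concrete with a small numerical sketch (illustrative matrices only, not the model's $\widetilde{W}$). The discs are built from column sums, which is legitimate because $W$ and $W^T$ share the same spectrum.

```python
import numpy as np

def gershgorin_right_edges(W):
    """Right-most point of each Gershgorin disc, using column sums
    (W and W.T have identical eigenvalues, so column discs are valid)."""
    centres = np.diag(W)
    radii = np.abs(W).sum(axis=0) - np.abs(centres)
    return centres + radii

# A normal rate matrix: non-negative off-diagonals, columns summing to 0,
# so every disc is [-2*R_i, 0] and no eigenvalue can have positive real part.
W = np.array([[-2.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -1.0]])
print(gershgorin_right_edges(W))     # no disc extends past zero

# Add a negative off-diagonal term (as the -eps*k_Af*P_1^s terms do):
# the column radius grows while the centre stays put, so the disc now
# pokes into the positive half of the complex plane.
W_nl = W.copy()
W_nl[0, 1] -= 2.5                    # W_nl[0, 1] = -1.5 < 0
print(gershgorin_right_edges(W_nl))  # second disc now reaches past zero
```

Note that a disc reaching into the right half-plane only makes a positive eigenvalue possible, not certain, which is exactly the "increased chances" language used in the text.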
$\sum_x (f(x) - g(x)) = 0$ and the fact that $g(x)$ is strictly increasing imply that $f(x)$ and $g(x)$ cross each other at a single point $x = x_c$, i.e. $f(x_c) = g(x_c)$. Since $P_1(x) \propto \alpha^x P_3(x)$ by the approximation (S2.29), we consider $\sum_x \alpha^x (g(x) - f(x))$. This is a polynomial in $\alpha$ with a single sign change in its coefficients at $x = x_c$, with 1 as a root and with a positive leading term $(g(x_N) - f(x_N))$. Thus, from Descartes' rule of signs, we can say that 1 is the only positive root of the polynomial, and hence $\sum_x \alpha^x (g(x) - f(x)) > 0$ for all $\alpha > 1$. Thus we can say that increasing $k_1$ increases $\sum_x P_1(x)$. This in turn affects the radii of the Gershgorin discs, and the possibility of having an eigenvalue in the positive half of the complex plane increases with increasing $k_1$.

(a) For $k_1 = 0$ we have a constant solution for $P_3(\phi)$. For $k_1 \neq 0$, we have a solution of the form (S2.34).
(b) This graph shows how the total probability in the $P_1$ form at steady state increases as a function of $k_1$.

FIG. S7: All the eigenvalues lie within the union of the Gershgorin circles. In the linear case (left panel, "Linear System of Equations"), the union of the circles lies completely in the left half of the Argand plane, so no eigenvalues with positive real part are possible. In the nonlinear case (right panel, "Non-Linear System of Equations"), due to the negative off-diagonal terms, eigenvalues with positive real part become possible. The greater the area of the shaded region, the greater the chance of an instability.

S4. FIRST PASSAGE ANALYSIS

FIG. S8: The wave of phosphorylation moves across the $P_1$ and $P_3$ states. The movement of the wavepacket can be realized as a particle hopping between sites labelled with indices $i$ (for $P_3$ states) and $i'$ (for $P_1$ states), with hopping rates $k_0$ and $\gamma k_0$.

FIG. S6: Origin of Instability
In the notation used to solve the Fokker-Planck equations, $k_1\alpha^x$ corresponds to $k_{Af} A_f$, $k_2\alpha^x$ corresponds to $k_{Ab0}\,\alpha^{x-\pi}$, $k_0$ to $k_1$ and $\gamma_0$ to $\gamma_1$. It is observed that as the wavepacket moves, the free KaiA concentration changes with time as a function of $\alpha$, $A_f = A_{f0}\,\alpha^x$, where $x$ is the position of the tip of the wavepacket.
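The hopping picture of FIG. S8 lends itself to a standard mean first-passage-time computation. The sketch below uses a generic nearest-neighbour chain with illustrative rates; the $\alpha^x$ growth of the forward rates is the only model-inspired ingredient, and the specific numbers are assumptions. It solves the linear system for the MFPT from each site to an absorbing site $N$.

```python
import numpy as np

def mfpt_chain(fwd, bwd):
    """Mean first-passage times to the absorbing site N.
    fwd[i] is the hopping rate i -> i+1 (i = 0..N-1); bwd[i] is the
    rate i -> i-1 (bwd[0] is unused: site 0 is reflecting).
    Solves (f_i + b_i) T_i - f_i T_{i+1} - b_i T_{i-1} = 1, T_N = 0."""
    N = len(fwd)
    A = np.zeros((N, N))
    for i in range(N):
        b = bwd[i] if i > 0 else 0.0
        A[i, i] = fwd[i] + b          # total escape rate from site i
        if i + 1 < N:
            A[i, i + 1] = -fwd[i]     # hop towards absorption
        if i > 0:
            A[i, i - 1] = -b          # hop backwards
    return np.linalg.solve(A, np.ones(N))

# Illustrative rates: forward rates growing as alpha**i, echoing the
# alpha^x dependence of the KaiA-assisted steps; constant backward rate.
alpha, N = 1.1, 10
fwd = np.array([alpha**i for i in range(N)])
bwd = np.full(N, 0.3)
T = mfpt_chain(fwd, bwd)              # T[0] is the MFPT from site 0 to N
```

Since reaching site $N$ from site $i$ requires first passing through $i+1$, the computed times decrease monotonically along the chain, and $T_0$ gives the traversal time of the phosphorylation wave in this toy picture.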