REVIEW

Input-output relations in biological systems: measurement, information and the Hill equation

Steven A Frank

Abstract

Biological systems produce outputs in response to variable inputs. Input-output relations tend to follow a few regular patterns. For example, many chemical processes follow the S-shaped Hill equation relation between input concentrations and output concentrations. That Hill equation pattern contradicts the fundamental Michaelis-Menten theory of enzyme kinetics. I use the discrepancy between the expected Michaelis-Menten process of enzyme kinetics and the widely observed Hill equation pattern of biological systems to explore the general properties of biological input-output relations. I start with the various processes that could explain the discrepancy between basic chemistry and biological pattern. I then expand the analysis to consider broader aspects that shape biological input-output relations. Key aspects include the input-output processing by component subsystems and how those components combine to determine the system's overall input-output relations. That aggregate structure often imposes strong regularity on underlying disorder. Aggregation imposes order by dissipating information as it flows through the components of a system. The dissipation of information may be evaluated by the analysis of measurement and precision, explaining why certain common scaling patterns arise so frequently in input-output relations. I discuss how aggregation, measurement and scale provide a framework for understanding the relations between pattern and process. The regularity imposed by those broader structural aspects sets the contours of variation in biology. Thus, biological design will also tend to follow those contours. Natural selection may act primarily to modulate system properties within those broad constraints.

Published version: Frank, S. A. 2013. Input-output relations in biological systems: measurement, information and the Hill equation. Biology Direct, 8:31, doi:10.1186/1745-6150-8-31

Keywords: Biological design; Cellular biochemistry; Cellular sensors; Measurement theory; Information theory; Natural selection; Signal processing

Correspondence: [email protected]
Department of Ecology and Evolutionary Biology, University of California, Irvine, CA 92697-2525 USA
Full list of author information is available at the end of the article

Introduction

Cellular receptors and sensory systems measure input signals. Responses flow through a series of downstream processes. Final output expresses physiological or behavioral phenotype in response to the initial inputs. A system's overall input-output pattern summarizes its biological characteristics.

Each processing step in a cascade may ultimately be composed of individual chemical reactions. Each reaction is itself an input-output subsystem. The input signal arises from the extrinsic spatial and temporal fluctuations of chemical concentrations. The output follows from the chemical transformations of the reaction that alter concentrations. The overall input-output pattern of the system develops from the signal processing of the component subsystems and the aggregate architecture of the components that form the broader system.

Many fundamental questions in biology come down to understanding these input-output relations. Some systems are broadly sensitive, changing outputs moderately over a wide range of inputs. Other systems are ultrasensitive or bistable, changing very rapidly from low to high output across a narrow range of inputs [1]. The Hill equation describes these commonly observed input-output patterns, capturing the essence of how changing inputs alter system response [2].

I start with two key questions. How does the commonly observed ultrasensitive response emerge, given that classical chemical kinetics does not naturally lead to that pattern? Why does the very simple Hill equation match so well to the range of observed input-output relations? To answer those questions, I emphasize the general processes that shape input-output relations. Three aspects seem particularly important: aggregation, measurement, and scale.

Aggregation combines lower-level processes to produce the overall input-output pattern of a system. Aggregation often transforms numerous distinct and sometimes disordered lower-level fluctuations into highly regular overall pattern [3]. One must understand those regularities in order to analyze the relations between pattern and process. Aggregate regularity also imposes constraints on how natural selection shapes biological design [4].

Measurement describes the information captured from inputs and transmitted through outputs. How sensitive are outputs to a change in inputs? The overall pattern of sensitivity affects the information lost during measurement and the information that remains invariant between input and output. Patterns of sensitivity that may seem puzzling or may appear to be specific to particular mechanisms often become simple to understand when one learns to read the invariant aspects of information and measurement. Measurement also provides a basis for understanding scale [5].

Scale influences the relations between input and output [6]. Large input typically saturates a system, causing output to become insensitive to further increases in input. Saturated decline in sensitivity often leads to logarithmic scaling. Small input often saturates in the other direction, such that output changes slowly and often logarithmically in response to further declines in input. The Hill equation description of input-output patterns is simply an expression of logarithmic saturation at high and low inputs, with an increased linear sensitivity at intermediate input levels.

High input saturates output because maximum output is intrinsically limited. By contrast, the commonly observed logarithmic saturation at low input intensity remains a puzzle. The difficulty arises because typical theoretical understanding of chemical kinetics predicts a strong and nearly linear output sensitivity at low input concentrations of a signal [7]. That theoretical linear sensitivity of chemical kinetics at low input contradicts the widely observed pattern of weak logarithmic sensitivity at low input.

I describe the puzzle of chemical kinetics in the next section to set the basis for a broader analysis of input-output relations. I then connect the input-output relations of chemical kinetics to universal aspects of aggregation, measurement, and scale. Those universal properties of input-output systems combine with specific biological mechanisms to determine how biological systems respond to inputs. Along the way, I consider possible resolutions to the puzzle of chemical kinetics and to a variety of other widely observed but unexplained regularities in input-output patterns. Finally, I discuss the ways in which regularities of input-output relations shape many key aspects of biological design.

Review

The puzzle of chemical kinetics

Classical Michaelis-Menten kinetics for chemical reactions lead to a saturating relationship between an input signal and an output response [7]. The particular puzzle arises at very low input, for which Michaelis-Menten kinetics predict a nearly linear output response to tiny changes in input. That sensitivity at low input means that chemical reactions would have nearly infinite measurement precision with respect to tiny fluctuations of input concentration. Idealized chemical reactions do have that infinite precision, and observations may follow that pattern if nearly ideal conditions are established in laboratory studies. By contrast, the actual input-output relations of chemical reactions and more complex biological signals often depart from Michaelis-Menten kinetics.

Many studies have analyzed the contrast between Michaelis-Menten kinetics and the observed input-output relations of chemical reactions [2]. I will discuss some of the prior studies in a later section. However, before considering those prior studies, it is useful to have a clearer sense of the initial puzzle and of alternative ways in which to frame the problem.

Example of Michaelis-Menten kinetics

I illustrate Michaelis-Menten input-output relations with a particular example, in which higher input concentration of a signal increases the transformation of an inactive molecule to an active state. Various formulations of Michaelis-Menten kinetics emphasize different aspects of reactions [7]. But those different formulations all have the same essential mass action property that assumes spatially independent concentrations of reactants. Spatially independent concentrations can be multiplied to calculate the spatial proximity between reactants at any point in time.

In my example, a signal, S, changes an inactive reactant, R, to an active output, A, in the reaction

S + R → S + A,   (rate g)

where the rate of reaction, g, can be thought of as the signal gain. In this reaction alone, if S > 0, all of the reactant, R, will eventually be transformed into the active form, A. (I use roman typeface for the distinct reactant species and italic typeface for concentrations of those reactants.) However, I am particularly interested in the relation between the input signal concentration, S, and the output signal concentration, A. Thus, I also include a back reaction, in which the active form, A, spontaneously transforms back to the inactive form, R, expressed as

A → R.   (rate δ)

The reaction kinetics follow

Ȧ = gS(N − A) − δA,    (1)

in which the overdot denotes the derivative with respect to time, and N = R + A is the total concentration of inactive plus active reactant molecules. We find the equilibrium concentration of the output signal, A∗, as a function of the input signal, S, by solving Ȧ = 0, which yields

A∗ = N S/(m + S),    (2)

in which m = δ/g is the rate of the back reaction relative to the forward reaction. Note that S/(m + S) is the equilibrium fraction of the initially inactive reactant that is transformed into the active state. At S = m, the input signal transforms one-half of the inactive reactant into the active state.

Figure 1. Michaelis-Menten signal transmission. The reaction dynamics transform the concentration of the input signal, S, into the equilibrium output signal, A∗, as given by Eq. (2). Half-maximal output occurs at input S = m. The total reactant available to be transformed is N.

Fig. 1 shows the consequence of this type of Michaelis-Menten kinetics for the relation between the input signal and the output signal. At low input signal intensity, S → 0, the output is strongly (linearly) sensitive to changes in input, with the output changing in proportion to S. At high signal intensity, the output is weakly (logarithmically) sensitive to changes in input, with the output changing in proportion to log(S). The output saturates at A∗ → N as the input increases.
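As a concrete check of Eq. (2), the short sketch below evaluates the equilibrium output A∗ = N S/(m + S) across low and high inputs; the parameter values N = m = 1 are illustrative choices, not taken from the text.

```python
# Minimal sketch of the Michaelis-Menten equilibrium of Eq. (2).
# The parameter values N = 1 and m = 1 are illustrative assumptions.

def mm_output(S, N=1.0, m=1.0):
    """Equilibrium output A* = N * S / (m + S)."""
    return N * S / (m + S)

# Low-input regime: output tracks input almost linearly, A* ~ (N/m) * S.
for S in (1e-4, 1e-3, 1e-2):
    A = mm_output(S)
    print(f"S = {S:8.4f}  A* = {A:.6f}  A*/S = {A / S:.4f}")

# High-input regime: output saturates toward N, changing only
# logarithmically with further increases in input.
for S in (10.0, 100.0, 1000.0):
    print(f"S = {S:8.1f}  A* = {mm_output(S):.6f}")
```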

Table 1. Conceptual foundations.

Psychophysics studies human perception of quantity, such as loudness, temperature or pressure. The early work of Weber and Fechner suggested that perception scales logarithmically: for a given stimulus (input), the perception of quantity (output) changes logarithmically. That work led to modern analysis of measurement and scale.

This article analyzes biological input-output relations. My examples focus on biochemistry. I chose that focus for two reasons. First, most biological input-output relations may ultimately be reducible to cascades of biochemical component reactions. The problem then becomes how to relate the biochemical components and their connections to overall system function. That relation between biochemistry and system function is the core of modern systems biology. Second, the sharp distinction between classical Michaelis-Menten chemical kinetics and the observed patterns of logarithmic scaling in both biochemistry and perception provides a good entry into the unsolved puzzles of the subject and the potential value of my perspective.

Although I focus on biochemistry, my approach derives from other topics. I borrow the deep conceptual foundations of measurement from psychophysics, the principles of aggregation from statistical mechanics, and aspects of information theory that originally developed in studies of communication. My view is that biological input-output relations can only be understood in terms of aggregation, measurement and information. In this article, I evoke those principles indirectly by building a series of specific analyses of biochemistry and simple aspects of systems biology.

The literatures and conceptual spans are vast for psychophysics, measurement theory, statistical mechanics and information theory. Here, I mention a few key entries into each subject. To read this article, it is not necessary to understand all of those topics. But it is necessary to see the project for what it is, an attempt to borrow deep principles from other subjects and apply those principles to biochemical aspects of systems biology, to the nature of biological input-output relations, and to the consequences for natural selection and evolutionary design.

Gescheider [25] summarizes aspects of psychophysics related to my discussion of input-output patterns. History and further references can be obtained from that work. Certain aspects of measurement theory followed from psychophysics [5, 6]. The theory developed into a broader analysis of the principles of quantity [26–28]. Other branches of measurement theory focus on aspects of precision and calibration [29].

Statistical mechanics analyzes the ways in which aggregation leads to highly ordered systems arising from disordered underlying components. My usage follows from the proposed unity between information theory and aggregate pattern, which transcends the specifics of physical models and instead emphasizes the patterns expressed by probability distributions [3, 12]. That Jaynesian perspective describes how aggregation dissipates information to expose underlying regularity. Later work [13, 14] provided a unified framework for all common probability patterns by combining measurement theory with Jaynes' information theory interpretation of statistical mechanics.

The Hill equation and observed input-output patterns

Observed input-output patterns often differ from the simple Michaelis-Menten pattern in Fig. 1. In particular, output is often only weakly sensitive to changes in the input signal at low input intensity. Weak sensitivity at low input values often means that output changes in proportion to log(S) for small S values, rather than the linear relation between input and output at small S values described by Michaelis-Menten kinetics.


Figure 2. Hill equation signal transmission. The input signal, x, leads to the output, y, as given by Eq. (4). The curves of increasing slope correspond to k = 1, 2, 4, 8.

The Hill equation preserves the overall Michaelis-Menten pattern but alters the sensitivity at low inputs to be logarithmic rather than linear. Remarkably, the pattern of curve shapes for most biochemical reactions and more general biological input-output relations fits reasonably well to the Hill equation

y = b x^k/(m^k + x^k)    (3)

or to minor variants of this equation (Table 1). The input intensity is x, the measured output is y, half of maximal response occurs at x = m, the shape of the response is determined by the Hill coefficient k, and the response saturates asymptotically at b for increasing levels of input.

We can simplify the expression by rescaling the output as y/b, the fraction of the maximal response, and the input as x/m, the ratio of the input to the value that gives half of the maximal response. Writing the rescaled variables again as y and x, the resulting equivalent expression is

y = x^k/(1 + x^k).    (4)

Fig. 2 shows the input-output relations for different values of the Hill coefficient, k. For k = 1, the curve matches the Michaelis-Menten pattern in Fig. 1. An increase in k narrows the input range over which the output responds rapidly (sensitively). For larger values of k, the rapid switch from low to high output response is often called a bistable response, because the output state of the system switches in a nearly binary way between low output, or "OFF", and high output, or "ON". A bistable switching response is effectively a biological transistor that forms a component of a biological circuit [8]. Bistability is sometimes called ultrasensitivity, because of the high sensitivity of the response to small changes in inputs when measured over the responsive range [9].

Figure 3. An increasing Hill coefficient, k, causes logarithmic sensitivity to low input signals. At k = 1 (left curve), the sensitivity is linear with a steady increase in output even at very low input levels, implying infinite precision. As k increases, sensitivity at low input declines, and the required threshold input level becomes higher and sharper to induce an output response of 1% of the maximum (y = 0.01). The curves of increasing slope correspond to k = 1, 2, 4, 8 in Eq. (4), with logarithmic scaling of the input x plotted here.

At the k = 1 case of Michaelis-Menten, the output response is linearly sensitive to very small changes at very low input signals. Such extreme sensitivity means essentially infinite measurement precision at tiny input levels, which seems unlikely for realistic biological systems. As k increases, sensitivity at low input becomes more like a threshold response, such that a minimal input is needed to stimulate significant change in output. Increasing k causes sensitivity to become logarithmic at low input. That low input sensitivity pattern can be seen more clearly by plotting the input level on a logarithmic scale, as in Fig. 3.
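The sketch below evaluates Eq. (4) at a few small inputs for several Hill coefficients; the particular input values are arbitrary illustrations of how increasing k suppresses low-input sensitivity.

```python
# Sketch of the normalized Hill equation, Eq. (4): y = x^k / (1 + x^k).
# The input values below are arbitrary; they illustrate how the Hill
# coefficient k suppresses sensitivity at low input.

def hill(x, k):
    return x**k / (1.0 + x**k)

low_inputs = (0.01, 0.1, 0.5)
for k in (1, 2, 4, 8):
    row = ", ".join(f"y({x}) = {hill(x, k):.2e}" for x in low_inputs)
    print(f"k = {k}: {row}")

# With k = 1, output falls off only linearly as input shrinks (y ~ x).
# With k = 8, the same small inputs give essentially zero output, the
# threshold-like, switch response plotted in Fig. 3.
```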

Alternative mechanisms for simple chemical reactions

My goal is to understand the general properties of input-output relations in biological systems. To develop that general understanding, it is useful to continue with study of the fundamental input-output relations of simple chemical reactions. Presumably, most input-output relations of systems can ultimately be decomposed into simple component chemical reactions. Later, I will consider how the combination of such components influences overall system response.

Numerous studies of chemical kinetics report Hill coefficients k > 1 rather than the expected Michaelis-Menten pattern of k = 1. Resolution of that puzzling discrepancy is the first step toward deeper understanding of input-output patterns (Table 2). Zhang et al. [2] review six specific mechanisms that may cause k > 1. In this section, I briefly summarize several of those mechanisms. See Zhang et al. [2] for references.

Table 2. Literature related to the Hill equation.

The Hill equation or related expressions form a key part of analysis in many areas of chemistry, systems biology and pharmacology. Each subdiscipline has its own set of isolated conceptual perspectives and mutual citation islands. Here, I list a few publications scattered over that landscape. My haphazard sample provides a sense of the disconnected nature of the topic. I conclude from this sample that the general form of input-output relations is widely recognized as an important problem and that no unified conceptual approach exists.

Cornish-Bowden's [7] text on enzyme kinetics frequently uses the Hill coefficient to summarize the relation between input concentrations and the rate of transformation to outputs. That text applies both of the two main approaches. First, the Hill equation simply provides a description of how changes in input affect output independently of the underlying mechanism. Second, numerous specific models attempt to relate particular mechanisms to observed Hill coefficients. Zhang et al. [2] provide an excellent, concise summary of specific biochemical mechanisms, including some suggested connections to complex cellular phenotypes.

Examples of the systems biology perspective include Kim & Ferrell [30], Ferrell [31], Cohen-Saidon et al. [32], Goentoro & Kirschner [33] and Goentoro et al. [34]. Alon's [10] leading text in systems biology discusses the importance of the Hill equation pattern, but only considers the explicit classical chemical mechanism of multiple binding. Those studies share the view that specific input-output patterns require specific underlying mechanisms as explanations.

In pharmacology, the Hill equation provides the main approach for describing dose-response patterns. Often, the Hill equation is used as a model to fit the data independently of mechanism. That descriptive approach probably follows from the fact that many complex and unknown factors influence the relation between dose and response. Alternatively, some analyses focus on the key aspect of receptor-ligand binding in the response to particular drugs. Reviews from this area include DeLean et al. [35], Weiss [36], Rang [37] and Bindslev [38]. Related approaches arise in the analysis of basic physiology [39].

Other approaches consider input-output responses in relation to aggregation of underlying heterogeneity, statistical mechanics or aspects of information. Examples include Hoffman & Goldberg [40], Getz & Lansky [41], Kolch et al. [42], Tkacik & Walczak [43] and Marzen et al. [44].

Departures from the mass-action assumption of Michaelis-Menten kinetics can explain the emergence of Hill equation input-output relations [45, 46]. Many studies analyze the kinetics of diffusion-limited departures from mass action without making an explicit connection to the Hill equation [16, 47, 48]. Modeling approaches in other disciplines also consider the same problem of departures from spatial uniformity [49–51].

Studies often use the Hill equation or similar assumptions to describe the shapes of input-output functions when building models of biochemical circuits [8, 11]. Those studies do not make any mechanistic assumptions about the underlying cause of the Hill equation pattern. Rather, in order to build a model circuit for regulatory control, one needs to make some assumption about input-output relations.

Direct multiplication of signal input concentration

Transforming a single molecule to an active state may require simultaneous binding by multiple input signal molecules. If two signal molecules, S, must bind to a single inactive reactant, R, to form a three-molecule complex before transformation of R to the active state, A, then we can express the reaction as

S + S + R → SSR → S + S + A,   (binding rate g)

which by mass action kinetics leads to the rate of change in A as

Ȧ = gS²(N − A) − δA,

in which N = R + A is the total concentration of the inactive plus active reactant molecules, and the back reaction A → R occurs at rate δ. The equilibrium input-output relation is

A∗ = N S²/(m² + S²),

which is a Hill equation with k = 2. The reaction stoichiometry, with two signal molecules combining in the reaction, causes the reaction rate to depend multiplicatively on signal input concentration. Other simple schemes also lead to a multiplicative effect of signal molecule concentration on the rate of reaction. For example, the signal may increase the rates of two sequential steps in a pathway, causing a multiplication of the signal concentration in the overall rate through the multiple steps. Certain types of positive feedback can also amplify the input signal multiplicatively.
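As a numerical check on this scheme, the sketch below integrates the kinetics dA/dt = gS²(N − A) − δA to steady state and compares the result with the k = 2 Hill form; the rate constants and the identification m² = δ/g are assumptions made for this illustration.

```python
# Numerical check that two-molecule binding yields a Hill coefficient of
# k = 2.  The rate constants and the identification m^2 = delta/g are
# assumptions made for this illustration.

g, delta, N = 1.0, 4.0, 1.0
m = (delta / g) ** 0.5              # half-maximal input under this scheme

def equilibrium_by_integration(S, dt=1e-3, steps=200_000):
    """Integrate dA/dt = g*S^2*(N - A) - delta*A to (near) steady state."""
    A = 0.0
    for _ in range(steps):
        A += dt * (g * S**2 * (N - A) - delta * A)
    return A

def hill_k2(S):
    """Closed-form equilibrium: a Hill equation with k = 2."""
    return N * S**2 / (m**2 + S**2)

for S in (0.5, m, 2.0, 8.0):
    print(f"S = {S:5.2f}  integrated A* = {equilibrium_by_integration(S):.4f}"
          f"  Hill(k=2) = {hill_k2(S):.4f}")
```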

Saturation and loss of information in multistep reaction cascades

The previous section discussed mechanisms that multiply the signal input concentration to increase the Hill coefficient. Multiplicative interactions lead to logarithmic scaling. The Hill equation with k > 1 expresses logarithmic scaling of output at high and low input levels. I will return to this general issue of logarithmic scaling later. The point here is that multiplication is one sufficient way to achieve logarithmic scaling. But multiplication is not necessary. Other nonmultiplicative mechanisms that lead to logarithmic scaling can also match closely to the Hill equation pattern. This section discusses two examples covered by Zhang et al. [2].

Repressor of weak input signals

The key puzzle of the Hill equation concerns how to generate the logarithmic scaling pattern at low input intensity. The simplest nonmultiplicative mechanism arises from an initial reaction that inactivates the input signal molecule. That preprocessing of the signal intensity can create a filter that logarithmically reduces signals of low intensity. Suppose, for example, that the repressor may become saturated at higher input concentrations. Then the initial reaction filters out weak, low concentration, inputs but passes through higher input concentrations.

Consider a repressor, X, that can bind to the signal, S, transforming the bound complex into an inactive state, I, in the reaction

S + X ⇌ I.   (forward rate γ, back rate β)

One can think of this reaction as a preprocessing filter for the input signal. The kinetics of this input preprocessor can be expressed by focusing on the change in the concentration of the bound, inactive complex

İ = γ(S − I)(X − I) − βI.    (5)

The signal passed through this preprocessor is the amount of S that is not bound in I complexes, which is S′ = S − I. We can equivalently write I = S − S′. The equilibrium relation between the input, S, and the output signal, S′, passed through the preprocessor can be obtained by solving İ = 0, which yields

S′(X − S + S′) − α(S − S′) = 0,

in which α = β/γ. Fig. 4a shows the relation between the input signal, S, and the preprocessed output, S′. Bound inactive complexes, I, hold the signal molecule tightly and titrate it out of activity when the breaking up of complexes at rate β is slower than the formation of new complexes at rate γ, and thus α is small.

Figure 4. Preprocessing of an input signal by a repressor reduces sensitivity of output to low input intensity signals. (a) Equilibrium concentration of processed signal, S′, in relation to original signal input intensity, S, obtained by solution of Eq. (5). The four curves from bottom to top show decreasing levels of signal titration by the repressor for the parameter values α = 0.01, 0.1, 0.5, 1000. The top curve alters the initial signal very little, so that S′ ≈ S, showing the consequences of an unfiltered input signal. (b) The processed input signal, S′, is used as the input to a standard Michaelis-Menten reaction kinetics process in Eq. (1), leading to an equilibrium output, A∗. The curves from bottom to top derive from the corresponding preprocessed input signal from the upper panel.

The preprocessed signal may be fed into a standard Michaelis-Menten type of reaction, such as the reaction in Eq. (1), with the preprocessed signal S′ driving the kinetics rather than the initial input, S. The reaction chain from initial input through final output starts with an input concentration, S, of which S′ passes through the repressor filter, and S′ stimulates production of the active output signal concentration, A∗. Fig. 4b shows that titration of the initial signal concentration, S, to a lower pass-through signal concentration, S′, leads to low sensitivity of the final output, A∗, to the initial signal input, S, as long as the signal concentration is below the amount of the repressor available for titration, X.
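The sketch below works through this preprocessing filter numerically: it solves the equilibrium condition for S′ as the positive root of the quadratic implied by Eq. (5) and then passes S′ to a Michaelis-Menten step; the parameter values for X, α, N and m are illustrative assumptions.

```python
# Sketch of the repressor preprocessing filter.  At equilibrium, Eq. (5)
# gives the pass-through signal S' as the positive root of
#   S'^2 + (X - S + alpha) * S' - alpha * S = 0,  with alpha = beta/gamma.
# The filtered signal then drives the Michaelis-Menten step of Eq. (1).
# Parameter values (X, alpha, N, m) are illustrative assumptions.

from math import sqrt

def preprocessed_signal(S, X=1.0, alpha=0.01):
    """Equilibrium free signal S' after titration by the repressor X."""
    b = X - S + alpha
    return (-b + sqrt(b * b + 4.0 * alpha * S)) / 2.0

def mm_output(S, N=1.0, m=0.2):
    """Equilibrium output of the downstream Michaelis-Menten reaction."""
    return N * S / (m + S)

for S in (0.1, 0.5, 0.9, 1.1, 1.5, 2.0):
    Sp = preprocessed_signal(S)
    print(f"S = {S:4.1f}  S' = {Sp:7.4f}  final output A* = {mm_output(Sp):.4f}")

# Below S ~ X the repressor titrates away most of the signal, so the final
# output barely responds; above S ~ X the excess signal passes through,
# giving the threshold-like pattern of Fig. 4b.
```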

When this signal preprocessing mechanism occurs, the low, essentially logarithmic, sensitivity to weak input signals solves the puzzle of relating classical Michaelis-Menten chemical kinetics to the Hill equation pattern for input-output relations with k > 1. The curves in Fig. 4b do not exactly match the Hill equation. However, this signal preprocessing mechanism aggregated with other simple mechanisms can lead to a closer fit to the Hill equation pattern. I discuss the aggregation of different mechanisms below.

This preprocessed signal system is associated with classical chemical kinetic mechanisms, because it is the deterministic outcome of a simple and explicit mass action reaction chain. However, the reactions are not inherently multiplicative with regard to signal input intensity. Instead, preprocessing leads to an essentially logarithmic transformation of scaling and information at low input signal intensity.

This example shows that the original notion of multiplicative interactions is not a necessary condition for Hill equation scaling of input-output relations. Instead, the Hill equation pattern is simply a particular expression of logarithmic scaling of the input-output relation. Any combination of processes that leads to similar logarithmic scaling provides similar input-output relations. Thus, the Hill equation pattern does not imply any particular underlying chemical mechanism. Rather, such input-output relations are the natural consequence of the ways in which information degrades and is transformed in relation to scale when passed through reaction sequences that act as filters of the input signal.

Opposing forward and back reactions

The previous section showed how a repressor can reduce sensitivity to low intensity input signals. A similar mechanism occurs when there is a back reaction. For example, a signal may transform an inactive reactant into an active form, and a back reaction may return the active form to the inactive state. If the back reaction saturates at low signal input intensity, then a rise in the signal from a very low level will initially cause relatively little increase in the concentration of the active output, inducing weak, logarithmic sensitivity to low input signal intensity. In effect, the low input is repressed, or titrated, by the strong back reaction.

This opposition between forward and back reactions was one of the first specific mechanisms of classical chemical kinetics to produce the Hill equation pattern in the absence of direct multiplicative interactions that amplify the input signal [9]. In this section, I briefly illustrate the opposition of forward and back reactions in relation to the Hill equation pattern.

In the forward reaction, a signal, S, transforms an inactive reactant, R, into an active state, A. The back reaction is catalyzed by the molecule B, which transforms A back into R. The balancing effects of the forward and back reactions in relation to saturation depend on a more explicit expression of classical Michaelis-Menten kinetics than presented above. In particular, let the two reactions be

S + R ⇌ SR → S + A   (binding rate g, unbinding rate δ, catalysis rate φ)

B + A ⇌ BA → B + R   (binding rate γ, unbinding rate d, catalysis rate σ)

in which these reactions show explicitly the intermediate bound complexes, SR and BA. The rate of change in the output signal, A, when the dynamics follow classical equilibrium Michaelis-Menten reaction kinetics, is

Ȧ = φS0 R/(m + R) − σB0 A/(µ + A),    (6)

in which S0 includes the concentrations of both free signal, S, and bound signal, SR. Similarly, B0 includes the concentrations of both free catalyst, B, and bound catalyst, BA. The half-maximal reaction rates are set by m = δ/g and µ = d/γ. The degree of saturation depends on the total amount of reactant available, N = R + A, relative to the concentrations that give the half-maximal reaction rates, m and µ.

Figure 5. Balance between forward and back reactions leads to a high Hill coefficient when the reactions are saturated. The equilibrium output signal, A∗, is obtained by solving Ȧ = 0 in Eq. (6) as a function of the input signal, S0. The signal is given as Ŝ = φS0/(σB0). The total amount of reactant is N = R + A. The half-maximal concentrations are set to m = µ = 1. The three curves illustrate the solutions for N = 1, 10, 100, with increasing Hill coefficients for higher N values and greater reaction saturation levels.

When the input signal, S0, is small, the back reaction dominates, potentially saturating the forward rate as R becomes large. Fig. 5 shows that the level of saturation sets the input-output pattern, with greater saturation increasing the Hill coefficient, k.
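A numerical sketch of Eq. (6) shows the effect of saturation directly: solving Ȧ = 0 for several total reactant levels N reproduces the sharpening seen in Fig. 5. The use of bisection and the particular Ŝ values tested are my own illustrative choices.

```python
# Sketch of the balance between forward and back reactions, Eq. (6).  The
# equilibrium output A* solves
#   S_hat * (N - A) / (m + N - A) = A / (mu + A),   with R = N - A,
# where S_hat = phi*S0 / (sigma*B0).  Following Fig. 5, m = mu = 1; the
# S_hat values tested and the use of bisection are illustrative choices.

def equilibrium_A(S_hat, N, m=1.0, mu=1.0, tol=1e-10):
    """Solve for A* in [0, N] by bisection (the balance is monotone in A)."""
    def excess(A):
        return S_hat * (N - A) / (m + N - A) - A / (mu + A)
    lo, hi = 0.0, N
    while hi - lo > tol * N:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for N in (1, 10, 100):
    fractions = [equilibrium_A(S_hat, N) / N for S_hat in (0.8, 1.0, 1.2)]
    print(f"N = {N:3d}  A*/N at S_hat = 0.8, 1.0, 1.2: "
          + ", ".join(f"{f:.3f}" for f in fractions))

# As N grows, both reactions saturate and A*/N jumps from near 0 to near 1
# across a narrow band around S_hat = 1, i.e., a high effective Hill
# coefficient.
```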

Alternative perspectives on input-output relations

In the following sections, I discuss alternative mechanisms that generate Hill equation patterns. Before discussing those alternative mechanisms, it is helpful to summarize the broader context of how biochemical and cellular input-output relations have been studied.

Explicit chemical reaction mechanisms

The prior sections linked simple and explicit chemical mechanisms to particular Hill equation patterns of inputs and outputs. Each mechanism provided a distinct way in which to increase the Hill coefficient above one. Many key reviews and textbooks in biochemistry and systems biology emphasize that higher Hill coefficients and increased input-output sensitivity arise from these simple and explicit deterministic mechanisms of chemical reactions [2, 7, 10]. The idea is that a specific pattern must be generated by one of a few well-defined and explicit alternative mechanisms.


Explicit chemical reaction mechanisms discussed earlier include: binding of multiple signal molecules to stimulate each reaction; repressors of weak input signals; and opposing forward and back reactions near saturation. Each of these mechanisms could, in principle, be isolated from a particular system, analyzed directly, and linked quantitatively to the specific input-output pattern of a system. Decomposition to elemental chemical kinetics and direct quantitative analysis would link observed pattern to an explicit mechanistic process.

The Hill equation solely as a description of observed pattern

In the literature, the Hill equation is also used when building models of how system outputs may react to various inputs (Table 2). The models often study how combinations of components lead to the overall input-output pattern of a system. To analyze such models, one must make assumptions about the input-output relations of the individual components. Typically, a Hill equation is used to describe the components' input-output functions. That description does not carry any mechanistic implication. One simply needs an input-output function to build the model or to describe the component properties. The Hill equation is invoked because, for whatever reason, most observed input-output functions follow that pattern.

System-level mechanisms and departures from mass action

Another line of study focuses on system properties rather than the input-output patterns of individual components. In those studies, the Hill equation pattern of sensitivity does not arise from a particular chemical mechanism in a particular reaction. Instead, sensitivity primarily arises from the aggregate consequences of the system (Table 2). In one example, numerous reactions in a cascade combine to generate Hill-like sensitivity [11]. The sensitivity derives primarily from the haphazard combination of different scalings in the distinct reactions, rather than a particular chemical process.

Alternatively, some studies assume that chemical kinetics depart from the classical mass action assumption (Table 2). If input signal molecules tend, over the course of a reaction, to become spatially isolated from the reactant molecules on which they act, then such spatial processes often create a Hill-like input-output pattern by nonlinearly altering the output sensitivity to changes in inputs. I consider such spatial processes as an aggregate system property rather than a specific chemical mechanism, because many different spatial mechanisms can restrict the aggregate movement of molecules. The aggregate spatial processes of the overall system determine the departures from mass action and the potential Hill-like sensitivity consequences, rather than the particular physical mechanisms that alter spatial interactions.

These system-level explanations based on reaction cascades and spatially induced departures from mass action have the potential benefit of applying widely. Yet each particular system-level explanation is itself a particular mechanism, although at a higher level than the earlier biochemical mechanisms. In any actual case, the higher system-level mechanism may or may not apply, just as each explicit chemical mechanism will sometimes apply to a particular case and sometimes not.

A broader perspective

As we accumulate more and more alternative mechanisms that fit the basic input-output pattern, we may ask whether we are converging on a full explanation or missing something deeper. Is there a different way to view the problem that would unite the disparate perspectives, without losing the real insights provided in each case?

I think there is a different, more general perspective (Table 1). At this point, I have given just enough background to sketch that broader perspective. I do so in the remainder of this section. However, it is too soon to go all the way. After giving a hint here about the final view, I return in the following sections to develop further topics, after which I return to a full analysis of the broader ways in which to think about input-output relations.

The Hill equation with k > 1 describes weak, logarithmic sensitivity at low input and high input levels, with strong and essentially linear sensitivity through an intermediate range. Why should this log-linear-log pattern be so common? The broader perspective on this problem arises from the following points.

First, the common patterns of nature are exactly those patterns consistent with the greatest number of alternative underlying processes [3, 12]. If many different processes lead to the same outcome, then that outcome will be common and will lack a strong connection to any particular mechanism. In any explicit case, there may be a simple and clear mechanism. But the next case, with the same pattern, is likely to be mechanistically distinct.

Second, measurement and information transmission unite the disparate mechanisms. The Hill equation with k > 1 describes a log-linear-log measurement scale [13, 14]. The questions become: Why do biological systems, even at the lowest chemical levels of analysis, often follow this measurement scaling? How does chemistry translate into the transmission and loss of information in relation to scale? Why does a universal pattern of information and measurement scale arise across such a wide range of underlying mechanisms?

Third, this broader perspective alters the ways in which one should analyze biological input-output systems. In any particular case, specific mechanisms remain interesting and important. However, the relations between different cases and the overall interpretation of pattern must be understood within the broader framing.

With regard to biological design, natural selection works on the available patterns of variation. Because certain input-output relations tend to arise, natural selection works on variations around those natural contours of input-output patterns. Those natural contours of pattern and variation are set by the universal properties of information transmission and measurement scale. That constraint on variation likely influences the kinds of designs created by natural selection. To understand why certain designs arise and others do not, we must understand how information transmission and measurement scale set the underlying patterns of variation.

I return to these points later. For now, it is useful to keep in mind these preliminary suggestions about how the various pieces will eventually come together.

Aggregation

Most biological input-output relations arise through a series of reactions. Initial reactions transform the input signal into various intermediate signals, which themselves form inputs for further reactions. The final output arises only after multiple internal transformations of the initial signal. We may think of the overall input-output relation as the aggregate consequence of multiple reaction components.

A linear reaction cascade forms a simple type of system. Kholodenko et al. [11] emphasized that a cascade tends to multiply the sensitivities of each step to determine the overall sensitivity of the system. Fig. 6 illustrates how the input-output relations of individual reactions combine to determine the system-level pattern.

To generate Fig. 6, I calculated how a cascade of 12 reactions processes the initial input into the final output. Each reaction follows a Hill equation input-output relation given by Eq. (4) with a half-maximal response at m and a Hill coefficient of k. The output for each reaction is multiplied by a gain, g. The parameters for each reaction were chosen randomly, as shown in Fig. 6a and described in the caption.

Figure 6. Signal processing cascade increases the Hill coefficient. The parameters for each reaction were chosen randomly from a beta distribution, denoted as a random variable z ∼ B(α, β), which yields values in the range [0, 1]. The parameters m = 100z and g = 5 + 10z were chosen randomly and independently for each reaction from a beta distribution with α = β = 3. The parameter k for each reaction was obtained randomly as 1 + z, yielding a range of coefficients 1 ≤ k ≤ 2. (a) In three separate trials, different combinations of (α, β) were used for the beta distribution that generated the Hill coefficient, k: in the first, shown as the left distribution, (α, β) = (1, 6); in the second, shown in the middle, (α, β) = (4, 4); in the third, on the right, (α, β) = (6, 2). The plot shows the peak heights normalized for each curve to be the same to aid visual comparison. (b) The input-output relation over the full cascade. The curves from left to right correspond to the distributions for k from left to right in the prior panel. The input scale is normalized so that the maximum input value for each curve coincides at 80% of the maximum output that could be obtained at infinite input. The observed output curves have more strongly reduced sensitivity at low input than at high input compared with the Hill equation, but nonetheless match reasonably well. The best fit Hill equation for the three curves has a Hill coefficient of, from left to right, k = 1.7, 2.2, 2.8. The average Hill coefficient for each reaction in a cascade is, from left to right, k = 1.14, 1.5, 1.75. Each curve shows a single particular realization of the randomly chosen reaction parameters from the underlying distributions.

Fig. 6 shows that a cascade significantly increases the Hill coefficient of the overall system above the average coefficient of each reaction, and often far above the maximum coefficient for any single reaction. Intuitively, the key effect at low signal input arises because any reaction that has low sensitivity at low input reduces the signal intensity passed on, and such reductions at low input intensity multiply over the cascade, yielding very low sensitivity to low signal input. Note in each curve that an input signal significantly above zero is needed to raise the output signal above zero. That lower tail illustrates the loss of signal information at low signal intensity.

This analysis shows that weak logarithmic sensitivity at low signal input, associated with large Hill coefficients, can arise by aggregation of many reactions. Thus, aggregation may be a partial solution to the overall puzzle of log-linear-log sensitivity in input-output relations.
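The sketch below rebuilds a cascade of this kind with the parameter distributions given in the Fig. 6 caption and summarizes the overall steepness with a simple 10%-90% measure of the effective Hill coefficient; that summary measure and the input sweep are my own choices, so the numbers will not exactly match the best-fit coefficients reported in the caption.

```python
# Sketch of a cascade of randomly parameterized Hill-type steps, in the
# spirit of Fig. 6.  Each step computes y = g * u^k / (m^k + u^k).  The
# parameter distributions follow the Fig. 6 caption (with the middle option,
# Beta(4, 4), for the Hill coefficients); the input sweep and the 10%-90%
# summary of steepness are my own illustrative choices.

import math
import random

random.seed(1)

def random_step():
    z_m, z_g = random.betavariate(3, 3), random.betavariate(3, 3)
    k = 1.0 + random.betavariate(4, 4)          # Hill coefficient in [1, 2]
    return 100.0 * z_m, 5.0 + 10.0 * z_g, k     # (m, g, k)

def cascade_output(x, steps):
    u = x
    for m, g, k in steps:
        u = g * u**k / (m**k + u**k)
    return u

def effective_hill(steps, x_lo=1e-3, x_hi=1e5, points=2000):
    """k_eff = ln(81) / ln(x90/x10) over a logarithmic input sweep."""
    xs = [x_lo * (x_hi / x_lo) ** (i / (points - 1)) for i in range(points)]
    ys = [cascade_output(x, steps) for x in xs]
    y_max = max(ys)
    x10 = next(x for x, y in zip(xs, ys) if y >= 0.1 * y_max)
    x90 = next(x for x, y in zip(xs, ys) if y >= 0.9 * y_max)
    return math.log(81.0) / math.log(x90 / x10)

steps = [random_step() for _ in range(12)]
mean_k = sum(k for _, _, k in steps) / len(steps)
print(f"mean Hill coefficient of the 12 individual steps: {mean_k:.2f}")
print(f"effective Hill coefficient of the whole cascade:  "
      f"{effective_hill(steps):.2f}")
```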

Aggregation by itself leaves open the question of how variations in sensitivity arise in the individual reactions. Classical Michaelis-Menten reactions have linear sensitivity at low signal input, with a Hill coefficient of k = 1. A purely Michaelis-Menten cascade with k = 1 at each step retains linear sensitivity at low signal input. A Michaelis-Menten cascade would not have the weak sensitivity at low input shown in Fig. 6b.

How does a Hill coefficient k > 1 arise in the individual steps of the cascade? The power of aggregation to induce pattern means that it does not matter how such variations in sensitivity arise. However, it is useful to consider some examples to gain an idea of the kinds of processes that may be involved beyond the deterministic cases given earlier.

Signal noise versus measurement noise

Two different kinds of noise can influence the input-output relations of a system. First, the initial input signal may be noisy, making it difficult for the system to discriminate between low input signals and background stochastic fluctuations in signal intensity [15]. The classical signal-to-noise ratio problem expresses this difficulty by analyzing the ways in which background noise in the input can mask small changes in the average input intensity. When the signal is weak relative to background noise, a system may be relatively insensitive to small increases in average input at low input intensity.

Second, for a given input intensity, the system may experience noise in the detection of the signal level or in the transmission of the signal through the internal processes that determine the final output. Stochasticity in signal detection and transmission determines the measurement noise intrinsic to the system. The ratio of measurement noise to signal intensity will often be greater at low signal input intensity, because there is relatively more noise in the detection and transmission of weak signals.

In this section, I consider how signal noise and measurement noise influence Michaelis-Menten processes. The issue concerns how much these types of noise may weaken sensitivity to low intensity signals. A weakening of sensitivity to low input distorts the input-output relation of a Michaelis-Menten process in a way that leads to a Hill equation type of response with k > 1.

In terms of measurement, Michaelis-Menten processes follow a linear-log scaling, in which sensitivity remains linear and highly precise at very low signal input intensity, and grades slowly into a logarithmic scaling with saturation. By contrast, as the Hill coefficient, k, rises above one, measurement precision transforms into a log-linear-log scale, with weaker logarithmic sensitivity to signal changes at low input intensity. Thus, the problem here concerns how signal noise or measurement noise may weaken input-output sensitivity at low input intensity.

Input signal noise may not alter Michaelis-Menten sensitivity

Consider the simplified Michaelis-Menten type of dynamics given in Eq. (1), repeated here for convenience,

Ȧ = gS(R − A) − δA,

where A is the output signal, S is the input signal driving the reaction, R is the reactant transformed by the input, g is the rate of the transforming reaction, which is a sort of signal gain level, and δ is the rate at which the active signal output decays to the inactive reactant form. Thus far, I have been analyzing this type of problem by assuming that the input signal, S, is a constant for any particular reaction, and then varying S to analyze the input-output relation, given at equilibrium by Michaelis-Menten saturation

A∗ = g S/(m + S),    (7)

where m = δ/g. When input signal intensity is weak, such that m ≫ S, then A∗ ≈ (g/m)S, which implies that output is linearly related to input.

Suppose that S is in fact a noisy input signal subject to random fluctuations. How do the fluctuations affect the input-output relation for inputs of low average intensity? Although the dynamics can filter noise in various ways, it often turns out that the linear input-output relation continues to hold such that, for low average input intensity, the average output is proportional to the average input, A ∝ S. Thus, signal noise does not change the fact that the system effectively measures the average input intensity linearly and with essentially infinite precision, even at an extremely low signal-to-noise ratio.

The high precision of classical chemical kinetics arises because the mass action assumption implies that infinitesimal changes in input concentration are instantly translated into a linear change in the rate of collisions between potential reactants. The puzzle of Michaelis-Menten kinetics is that mass action implies high precision and linear scaling at low input intensity, whereas both intuition and observation suggest low precision and logarithmic scaling at low input intensity. Input signal noise by itself typically does not alter the high precision and linear scaling of mass action kinetics.
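The claim can be probed with a small simulation, sketched below under assumed parameter values and an assumed noise model (the input is redrawn each time step from an exponential distribution): even with fluctuations as large as the mean, the average output tracks the average input nearly linearly at low intensity.

```python
# Sketch of Michaelis-Menten dynamics, Eq. (1), driven by a very noisy input.
# The noise model (input redrawn each step from an exponential distribution,
# coefficient of variation = 1) and all parameter values are illustrative
# assumptions used to probe the claim about average input and average output.

import random

random.seed(0)
g, delta, N = 1.0, 1.0, 1.0            # so m = delta/g = 1
dt, steps, burn_in = 0.01, 100_000, 10_000

def average_output(mean_S):
    A, total, count = 0.0, 0.0, 0
    for step in range(steps):
        S = random.expovariate(1.0 / mean_S)   # noisy input with mean mean_S
        A += dt * (g * S * (N - A) - delta * A)
        if step >= burn_in:
            total += A
            count += 1
    return total / count

for mean_S in (0.001, 0.01, 0.05):
    avg_A = average_output(mean_S)
    print(f"mean input = {mean_S:6.3f}  mean output = {avg_A:.5f}  "
          f"ratio = {avg_A / mean_S:.3f}")

# Despite fluctuations as large as the mean itself, the ratio of mean output
# to mean input stays nearly constant at low intensity: the linear,
# high-precision response of mass action kinetics survives input noise.
```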

Although the simplest Michaelis-Menten dynamics retain linearity and essentially infinite precision at low input, it remains unclear how the input-output relations of complex aggregate systems respond to the signal-to-noise ratio of the input. Feedback loops and reaction cascades strongly influence the ways in which fluctuations are filtered between input and output. However, classical analyses of signal processing tend to focus on the filtering properties of systems only in relation to fluctuations of input about a fixed mean value. By contrast, the key biological problem is how input fluctuations alter the relation between the average input intensity and the average output intensity. That key problem requires one to study the synergistic interactions between changes in average input and patterns of fluctuations about the average.

For noisy input signals, what are the universal characteristics of system structure and signal processing that alter the relations between average input and average output? That remains an open question.

Noise in signal detection and transmission reduces measurement precision and sensitivity at low signal input

The previous section considered how stochastic fluctuations of inputs may affect the average output. Simple mass action kinetics may lead to infinite precision at low input intensity with a linear scaling between average input and average output, independently of fluctuations in noisy inputs. This section considers the problem of noise from a different perspective, in which the fluctuations arise internally to the system and alter measurement precision and signal transmission.

I illustrate the key issues with a simple model. I assume that, in a reaction cascade with deterministic dynamics, each reaction leads to the Michaelis-Menten type of equilibrium input-output given in Eq. (7). To study how stochastic fluctuations within the system affect input-output relations, I assume that each reaction has a certain probability of failing to transmit its input. In other words, for each reaction, the output follows the equilibrium input-output relation with probability 1 − p, and with probability p, the output is zero.

From the standard equilibrium in Eq. (7), we simplify the notation by using y ≡ A∗ for output, and scale the input such that x = S/m. The probability that the output is not zero is 1 − p, thus the expected output is

y = g [x/(1 + x)] (1 − p).    (8)

Let the probability of failure be p = a e^(−bx). Note that as input signal intensity, x, rises, the probability of failure declines. As the signal becomes very small, the probability of reaction failure approaches a, from the range 0 ≤ a ≤ 1.

Figure 7 Stochastic failure of signal transmission reduces the relative sensitivity to low intensity input signals. The lower (blue) lines show the probability p = a e^(−bx) that an input signal fails to produce an output. The upper (red) lines show the expected equilibrium output for Michaelis-Menten type dynamics corrected for a probability p that the output is zero. Each panel (a–d) shows a cascade of n reactions, in which the output of each reaction forms the input for the next reaction, given an initial signal input of x for the first reaction. Each reaction follows Eq. (8). The number of reactions in the cascade increases from the left to the right panel as n = 1, 2, 4, 8. The other parameters for Eq. (8) are the gain per reaction, g = 1.5, the maximum probability of reaction failure as the input declines to very low intensity, a = 0.3, and the rate at which increasing signal intensity reduces reaction failure, b = 10. The final output signal is normalized to 0.8 of the maximum output produced as the input becomes very large. Axes: initial input signal, x (horizontal) versus final output signal (vertical).

Fig. 7 shows that stochastic failure of signal transmission reduces relative sensitivity to low input signals when a signal is passed through a reaction cascade. The longer the cascade of reactions, the more the overall input-output relation follows an approximate log-linear-log pattern with an increasing Hill coefficient, k. Similarly, Fig. 8 shows that an increasing failure rate per reaction reduces sensitivity to low input signals and makes the overall input-output relation more switch-like.
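To make the cascade calculation concrete, the following sketch (my own illustration, not code from the original analysis) iterates Eq. (8) through a cascade of n reactions with the parameter values quoted in the legend of Fig. 7; the output normalization used for the plotted curves is omitted, and names such as fail_max are arbitrary labels for the symbols a, b and g.

    import numpy as np

    def step(x, gain=1.5, fail_max=0.3, fail_decay=10.0):
        """One reaction: Michaelis-Menten output scaled by the gain,
        multiplied by the probability 1 - p of successful transmission,
        with failure probability p = a * exp(-b * x) as in Eq. (8)."""
        p_fail = fail_max * np.exp(-fail_decay * x)
        return gain * (x / (1.0 + x)) * (1.0 - p_fail)

    def cascade(x0, n_reactions=8, **kwargs):
        """Pass an initial input through n reactions in series;
        each reaction's expected output is the next reaction's input."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_reactions):
            x = step(x, **kwargs)
        return x

    inputs = np.linspace(0.0, 2.0, 201)
    for n in (1, 2, 4, 8):
        outputs = cascade(inputs, n_reactions=n)
        # longer cascades give a flatter low-input tail, i.e. a more
        # switch-like, higher-k input-output curve
        print(n, outputs[:5].round(4), outputs.max().round(3))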

Implications for system design

An input-output response with a high Hill coefficient, k, leads to switch-like function (Fig. 3). By contrast, classical Michaelis-Menten kinetics lead to k = 1, in which output increases linearly with small changes in weak input signals—effectively the opposite of a switch. Many analyses of system design focus on this distinction. The argument is that switch-like function will often be a favored feature of design, allowing a system to change sharply between states in response to external changes [1]. Because the intrinsic dynamics of chemistry are thought not to have a switch-like function, the classical puzzle is how system design overcomes chemical kinetics to achieve switching function.

This section on stochastic signal failure presents an alternative view. Sloppy components with a tendency to fail often lead to switch-like function. Thus, when switching behavior is a favored phenotype, it may be sufficient to use a haphazardly constructed pathway of signal transmission coupled with weakly regulated reactions in each step. Switching, rather than being a highly designed feature that demands a specific mechanistic explanation, may instead be the likely outcome of erratic biological signal processing.


Figure 8 Greater failure rates for reactions reduce sensitivity to low input and increase the Hill coefficient, k. The curves arise from the same analysis as Fig. 7, in which the curves from left to right are associated with an increase in the maximum failure rate as a = 0.2, 0.4, 0.6. The curves in this figure have n = 8 reactions in the cascade, a gain of g = 1.5, and a decline in failure with increasing input, b = 10. The scale for the input signal is normalized so that each curve has a final output of 0.85 at a normalized input of one. Axes: normalized input signal (horizontal) versus final output signal (vertical).

This tendency for aggregate systems to have a switching pattern does not mean that natural selection has no role and that system design is random. Instead, the correct view may be that aggregate signal processing and inherent stochasticity set the contours of variation on which natural selection and system design work. In particular, the key design features may have to do with modulating the degree of sloppiness or stochasticity. The distribution of gain coefficients in each reaction and the overall pattern of stochasticity in the aggregate may also be key loci of design.

My argument is that systems may be highly designed, but the nature of that design can only be understood within the context of the natural patterns of variation. The intrinsic contours of variation are the heart of the matter. I will discuss that issue again later. For now, I will continue to explore the processes that influence the nature of variation in system input-output patterns.

Spatial correlations and departures from mass action

Chemical reactions require molecules to come near each other spatially. The overall reaction depends on the processes that determine spatial proximity and the processes that determine reaction rate given spatial proximity. Roughly speaking, we can think of the spatial aspects in terms of movement or diffusion, and the transformation given spatial proximity in terms of a reaction coefficient.

Classical chemical kinetics typically assumes that diffusion rates are relatively high, so that spatial proximity of molecules depends only on the average concentration over distances much greater than the proximity required for reaction. Kinetics are therefore limited by reaction rate given spatial proximity rather than by diffusion rate. In contrast with classical chemical kinetics, much evidence suggests that biological molecules often diffuse relatively slowly, causing biological reactions sometimes to be diffusion limited (Table 2).

In this section, I discuss how diffusion-limited reactions can increase the Hill coefficient of chemical reactions, k > 1. That conclusion means that the inevitable limitations on the movement of biological molecules may be sufficient to explain the observed patterns of sensitivity in input-output functions and departure from Michaelis-Menten patterns.

Two key points emerge. First, limited diffusion tends to cause potential reactants to become more spatially separated than expected under high diffusion and random spatial distribution. The negative spatial association between reactants arises because those potential reactants near each other tend to react, leaving the nearby spatial neighborhood with fewer potential reactants than expected under spatial uniformity. Negative spatial association of reactants reduces the rate of chemical transformation.

This reduction in transformation rate is stronger at low concentration, because low concentration is associated with a greater average spatial separation of reactants. Thus, low signal input may lead to relatively strong reductions in transformation rate caused by limited diffusion. As signal intensity and concentration rise, this spatial effect is reduced. The net consequence is a low transformation rate at low input, with rising transformation rate as input intensity increases. This process leads to the pattern characterized by higher Hill coefficients and switch-like function, in which there is low sensitivity to input at low signal intensity.

Limited diffusion within the broader context of input-output patterns leads to the second key point. I will suggest that limited diffusion is simply another way in which systems suffer reduced measurement precision and loss of information at low signal intensity. The ultimate understanding of system design and input-output function follows from understanding how to relate particular mechanisms, such as diffusion or random signal loss, to the broader problems of measurement and information. To understand those broader and more abstract concepts of measurement and information, it is necessary to work through some of the particular details by which diffusion limitation leads to loss of information.

Departure from mass action

Most analyses of chemical kinetics assume mass action. Suppose, for example, that two molecules may combine to produce a bound complex

A + B −r→ AB,

in which the bound complex, AB, may undergo further transformation. Mass action assumes that the rate at which AB forms is rAB, which is the product of the concentrations of A and B multiplied by a binding coefficient, r. The idea is that the number of collisions and potential binding reactions between A and B per unit of time changes linearly with the concentration of each reactant.
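Written out as a rate equation (an explicit restatement of the mass action assumption just described, added here for clarity, with A and B denoting concentrations), the formation of the complex follows

\[
\frac{d\,[AB]}{dt} = r\,A\,B ,
\]

so that halving either concentration halves the rate at which new complexes form.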

Each individual reaction happens at a particular location. That particular reaction perturbs the spatial association between reactants. Those reactants that were, by chance, near each other, no longer exist as free potential reactants. Thus, a reaction reduces the probability of finding potential reactants nearby, inducing a negative spatial association between potential reactants. To retain the mass action rate, diffusion must happen sufficiently fast to break down the spatial association. Fast diffusion recreates the mutually uniform spatial concentrations of the reactants required for mass action to hold.

If diffusion is sufficiently slow, the negative spatial association between reactants tends to increase over time as the reaction proceeds. That decrease in the proximity of potential reactants reduces the overall reaction rate. Diffusion-limited reactions therefore have a tendency for the reaction rate to decline below the expected mass action rate as the reaction proceeds.

That classical description of diffusion-limited reactions emphasizes the pattern of reaction rates over time. By contrast, my focus is on the relation between input and output. It seems plausible that diffusion limitation could affect the input-output pattern of a biological system. But exactly how should we connect the classical analysis of diffusion limitation for the reaction rate of simple isolated reactions to the overall input-output pattern of biological systems?

The connection between diffusion and system input-output patterns has received relatively little attention. A few isolated studies have analyzed the ways in which diffusion limitation tends to increase the Hill coefficient, supporting my main line of discussion (Table 2). However, the broad field of biochemical and cellular responses has almost entirely ignored this issue. The following sections present a simple illustration of how diffusion limitation may influence input-output patterns, and how that effect fits into the broader context of the subject.

Example of input-output pattern under limited diffusion

Limited diffusion causes spatial associations between reactants. Spatial associations invalidate mass action assumptions. To calculate reaction kinetics without mass action, one must account for spatially varying concentrations of reactants and the related spatial variations in chemical transformations. There is no simple and general way to make spatially explicit calculations. In some cases, simple approximations give a rough idea of outcome (Table 2). However, in most cases, one must study reaction kinetics by spatially explicit computer simulations. Such simulations keep track of the spatial location of each molecule, the rate at which nearby molecules react, the spatial location of the reaction products, and the stochastic movement of each molecule by diffusion.

Many computer packages have been developed to aid stochastic simulation of spatially explicit biochemical dynamics. I used the package Smoldyn [16, 17]. I focused on the ways in which limited diffusion may increase Hill coefficients. Under classical assumptions about chemical kinetics, diffusion rates tend to be sufficiently high to maintain spatial uniformity, leading to Michaelis-Menten kinetics with a Hill coefficient of k = 1. With lower diffusion rates, spatial associations arise, invalidating mass action. Could such spatial associations lead to increased Hill coefficients of k > 1?

Fig. 9 shows clearly that increased Hill coefficients arise readily in a simple reaction scheme with limited diffusion. The particular reaction system is

S + R −g→ S + A    (9)

X + A −δ→ X + R.    (10)

Under mass action assumptions, the dynamics would be identical to Eq. (1),

Ȧ = gS(N − A) − δXA,

in which N = R + A is the total concentration of inactive plus active reactant molecules and, in this case, we write the back reaction rate as δX rather than just δ as in the earlier equation. In a spatially explicit model, we must keep track of the actual spatial location of each X molecule, thus we need to include explicitly the concentration X rather than include that concentration in a combined rate parameter.


Figure 9 Limited diffusion and spatial association of reactants can increase the Hill coefficient, k. Simulations shown from the computer package Smoldyn, based on the reaction scheme in Eqs. (9,10). The concentration of the input signal, S, is the number of molecules per unit volume. The other concentrations are set to N = X = 100. Diffusion rates are 10^(−5) for all molecules. I ran three replicates for each input concentration, S. Each circle shows the average of the three replicates. For each panel (a–f), I fit a Hill equation curve to the observations, denoting the output as the relative saturation level, A/N = sat [S^k/(m^k + S^k)]. The fitted parameters are: k, the Hill coefficient; m, the input signal concentration that yields one-half of maximum saturation; and “sat”, the maximum saturation level at which the output is estimated to approach an asymptotic value relative to the maximum theoretical value of one, at which all N has been transformed into A. Because of limited diffusion, actual saturation can be below the theoretical maximum of one. Panels (b) and (c) are limited to output responses far below the median, because the simulations take too long to run for higher input concentrations. Panels vary the back reaction rate, δ = 10^(−6), 10^(−5), 10^(−4), and the gain, g = 10^(−7), 10^(−6); the fitted Hill coefficients across the six panels range from k = 1.15 to k = 2.37. Axes: input signal concentration, S (horizontal) versus output signal saturation level (vertical).

At equilibrium, the output signal intensity under mass action follows the Michaelis-Menten relation

A∗ = N [S/(m + S)],

in which m = δX/g. If we let x = S/m and y = A∗/N, then we see that the reaction scheme here leads to an equilibrium input-output relation as in Eq. (4) that follows the Hill equation

y = x^k/(1 + x^k),

with k = 1.

I used the Smoldyn simulation package to study reaction dynamics when the mass action assumption does not hold. The simulations for this particular reaction scheme show input-output relations with k > 1 when the rates of chemical transformation are limited by diffusion. Fig. 9 summarizes some Smoldyn computer simulations showing k significantly greater than one for certain parameter combinations. I will not go into great detail about these computer simulations, which can be rather complicated. Instead, I will briefly summarize a few key points, because my goal here is simply to illustrate that limited diffusion can increase Hill coefficients under some reasonable conditions.
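To give a feel for what such a spatially explicit simulation involves, here is a deliberately minimal toy sketch of the reaction scheme in Eqs. (9,10). It is not Smoldyn and not the model behind Fig. 9; the box size, step size, reaction radius, contact probabilities and molecule counts are arbitrary choices made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    L = 1.0              # side of a periodic square box (arbitrary units)
    radius = 0.02        # molecules closer than this may react
    sigma = 0.01         # per-step diffusive displacement (slow diffusion)
    p_g, p_d = 0.5, 0.5  # toy reaction probabilities on contact
    n_S, n_N, n_X = 50, 100, 100

    def dist(a, b):
        """Pairwise distances with periodic (wrap-around) boundaries."""
        d = np.abs(a[:, None, :] - b[None, :, :])
        d = np.minimum(d, L - d)
        return np.sqrt((d ** 2).sum(-1))

    pos_S = rng.uniform(0, L, (n_S, 2))
    pos_X = rng.uniform(0, L, (n_X, 2))
    pos_R = rng.uniform(0, L, (n_N, 2))
    active = np.zeros(n_N, dtype=bool)   # False = R, True = A

    for t in range(2000):
        # diffusion: small Gaussian steps, wrapped back into the box
        for pos in (pos_S, pos_X, pos_R):
            pos += rng.normal(0, sigma, pos.shape)
            pos %= L
        # S + R -> S + A: an R near an S activates with probability p_g
        near_S = (dist(pos_R, pos_S) < radius).any(axis=1)
        activate = near_S & ~active & (rng.random(n_N) < p_g)
        # X + A -> X + R: an A near an X reverts with probability p_d
        near_X = (dist(pos_R, pos_X) < radius).any(axis=1)
        revert = near_X & active & (rng.random(n_N) < p_d)
        active[activate] = True
        active[revert] = False

    print("equilibrium saturation A/N ≈", active.mean())

Sweeping the number of S molecules and fitting the resulting saturation curve, as in Fig. 9, is how one would ask whether slow diffusion pushes the fitted Hill coefficient above one in such a toy model.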

It is clear from Fig. 9 that limited diffusion can raise the Hill coefficient significantly above one. What causes the rise? It must be some aspect of spatial process, because diffusion limitation primarily causes departure from mass action by violating the assumption of spatial uniformity. I am not certain which aspects of spatial process caused the departures in Fig. 9. It appeared that, in certain cases, most of the transformed output molecules, A, were maintained in miniature reaction centers, which spontaneously formed and decayed.

A local reaction center arose when S and R molecules came near each other, transforming into S and A. If there was also a nearby X molecule, then X and A caused a reversion to X and R. The R molecule could react again with the original nearby S molecule, which had not moved much because of a slow diffusion rate relative to the timescale of reaction. The cycle could then repeat. If formation of reaction centers rises nonlinearly with signal concentration, then a Hill coefficient k > 1 would follow.

Other spatial processes probably also had important, perhaps dominant, roles, but the miniature reaction centers were the easiest to notice. In any case, the spatial fluctuations in concentration caused a significant increase in the Hill coefficient, k, for certain parameter combinations.
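The fitted values of k, m and sat reported in the panels of Fig. 9 come from fitting the saturating Hill form A/N = sat·S^k/(m^k + S^k) to the simulated points. A sketch of how such a fit could be done (my illustration; the original fitting procedure is not specified beyond that functional form, and the data below are placeholders) is:

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(S, k, m, sat):
        """Saturating Hill curve used for the Fig. 9 panel fits."""
        return sat * S**k / (m**k + S**k)

    # Placeholder data standing in for one panel's simulation output:
    # mean saturation A/N at each input concentration S.
    S_values = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
    saturation = np.array([0.05, 0.15, 0.35, 0.55, 0.62])

    params, _ = curve_fit(hill, S_values, saturation,
                          p0=[1.0, 1000.0, 0.7], maxfev=10000)
    k_hat, m_hat, sat_hat = params
    print(f"k = {k_hat:.2f}; m = {m_hat:.0f}; sat = {sat_hat:.2f}")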

Limited diffusion, measurement precision and information

Why do departures from spatial uniformity and mass action sometimes increase the Hill coefficient? Roughly speaking, one may think of the inactive reactant, R, as a device to measure the signal input concentration, S. The rate of SR binding is the informative measurement. The measurement scale is linear under spatial uniformity and mass action. The measurement precision is essentially perfect, because SR complexes form at a rate exactly linearly related to S, no matter how low the concentration S may be and for any concentration R.

Put another way, mass action implies infinite linear measurement precision, even at the tiniest signal intensities. By contrast, with limited diffusion and spatial fluctuations in concentration, measurement precision changes with the scale of the input signal intensity. For example, imagine a low concentration input signal, with only a few molecules in a local volume. An SR binding transforms R into A, reducing the local measurement capacity, because it is the R molecules that provide measurement.

With slow diffusion, each measurement alters the immediate capacity for further measurement. The increase in information from measurement is partly offset by the loss in measurement capacity. Put another way, the spatial disparity in the concentration of the measuring device R is a loss of entropy, which is a sort of gain in unrealized potential information. As unrealized potential information builds in the spatial disparity of R, the capacity for measurement and the accumulation of information about S declines, perhaps reflecting a conservation principle for total information or, equivalently, for total entropy at steady state.

At low signal concentration, each measurement reaction significantly alters the spatial distribution of molecules and the measurement capacity. As signal concentration rises, individual reactions have less overall effect on spatial disparity. Put another way, the spatial disparities increase as signal intensity declines, causing measurement to depend on scale in a manner that often leads to a logarithmic scaling. I return to the problem of logarithmic scaling below.

Shaping sensitivity and dynamic range

The previous sections considered specific mechanisms that may alter sensitivity of input-output relations in ways that lead to the log-linear-log scaling of the Hill equation. Such mechanisms include stochastic failure of signal processing in a cascade or departures from mass action. Those mechanisms may be important in many cases. However, my main argument emphasizes that the widespread occurrence of log-linear-log scaling for input-output relations must transcend any particular mechanism. Instead, general properties of system architecture, measurement and information flow most likely explain the simple regularity of input-output relations. Those general properties, which operate at the system level, tend to smooth out the inevitable departures from regularity that must occur at smaller scales.

Brief review and setup of the general problem

An increase in the Hill coefficient, k, reduces sensitivity at low and high input signal intensity (Fig. 2). At those intensities, small changes in input cause little change in output. Weak sensitivity tends to be logarithmic, in the sense that output changes logarithmically with input. Logarithmic sensitivities at low and high input often cause sensitivity to be strong and nearly linear within an intermediate signal range, with a rapid rate of change in output with respect to small changes in input intensity. The intermediate interval over which high sensitivity occurs is the dynamic range. The Hill coefficient often provides a good summary of the input-output pattern and is therefore a useful method for studying sensitivity and dynamic range.

The general problem of understanding biological input-output systems can be described by a simple question. What processes shape the patterns of sensitivity and dynamic range in biological systems? To analyze sensitivity and dynamic range, we must consider the architecture by which biological systems transform inputs to outputs.

Aggregation of multiple transformations

Biological systems typically process input signals through numerous transformations before producing an output signal. Thus, the overall input-output pattern arises from the aggregate of the individual transformations. Although the meaning of “output signal” depends on context, meaningful outputs typically arise from multiple transformations of the original input.

I analyzed a simple linear cascade of transformations in an earlier section. In that case, the first step in the cascade transforms the original input to an output, which in turn forms the input for the next step, and so on. If individual transformations in the cascade have Hill coefficients k > 1, the cascade tends to amplify the aggregate coefficient for the overall input-output pattern of the system. Amplification occurs because weak logarithmic sensitivities at low and high inputs tend to multiply through the cascade. Multiplication of logarithmic sensitivities at the outer ranges of the signal raises the overall Hill coefficient, narrows the dynamic range, and leads to high sensitivity over intermediate inputs.

That amplification of Hill coefficients in cascades leads back to the puzzle I have emphasized throughout this article. For simple chemical reactions, kinetics follow the Michaelis-Menten pattern with a Hill coefficient of k = 1. If classical kinetics are typical, then aggregate input-output relations should also have Hill coefficients near to one. By contrast, most observed input-output patterns have higher Hill coefficients. Thus, some aspect of the internal processing steps must depart from classical Michaelis-Menten kinetics.

There is a long history of study with regard to the mechanisms that lead individual chemical reactions to have increased Hill coefficients. In the first part of this article, I summarized three commonly cited mechanisms of chemical kinetics that could raise the Hill coefficient for individual reactions: cooperative binding, titration of a repressor, and opposing saturated forward and back reactions. Those sorts of deterministic mechanisms of chemical kinetics do raise Hill coefficients and probably occur in many cases. However, the generality of raised Hill coefficients seems to be too broad to be explained by such specific deterministic mechanisms.

Component failure

If the classical deterministic mechanisms of chemical kinetics do not sufficiently explain the generality of raised Hill coefficients, then what does explain that generality? My main argument is that input-output relations reflect underlying processes of measurement and information. The nature of measurement and information leads almost inevitably to the log-linear-log pattern of observed input-output relations. That argument is, however, rather abstract. How do we connect the abstractions of measurement and information to the actual chemical processes by which biological systems transform inputs to outputs?

To develop the connection between abstract concepts and underlying mechanisms of chemical kinetics, I presented a series of examples. I have already discussed aggregation, perhaps the most powerful and important general concept. I showed that aggregation amplifies small departures from Michaelis-Menten kinetics (k = 1) into strongly log-linear-log patterns with increased k.

In my next step, I showed that when individual components of an aggregate system have Michaelis-Menten kinetics but also randomly fail to transmit signals with a certain probability, the system converges on an input-output pattern with a raised Hill coefficient. The main assumption is that failure rate increases as signal input intensity falls.

Certainly, some reactions in biological systems will tend to fail occasionally, and some of those failures will be correlated with input intensity. Thus, a small and inevitable amount of sloppiness in component performance of an aggregate system alters the nature of input-output measurement and information transmission. Because the consequence of failures tends to multiply through a cascade, logarithmic sensitivity at low signal input intensity follows inevitably.

Rather than invoke a few specific chemical mechanisms to explain the universality of log-linear-log scaling, this view invokes the universality of aggregate processing and occasional component failures. I am not saying that component failures are necessarily the primary cause of log-linear-log scaling. Rather, I am pointing out that such universal aspects must be common and lead inevitably to certain patterns of measurement and information processing. Once one begins to view the problem in this way, other aspects begin to fall into place.

Departure from mass action

Limited rates of chemical diffusion often occur in biological systems. I showed that limited diffusion may distort classical Michaelis-Menten kinetics to raise the Hill coefficient above one. The increased Hill coefficient, and associated logarithmic sensitivity at low input, may be interpreted as reduced measurement precision for weak signals.

Regular pattern from highly disordered mechanisms

The overall conclusion is that many different mechanisms lead to the same log-linear-log scaling. In any particular case, the pattern may be shaped by the classical mechanisms of binding cooperativity, repressor titration, or opposing forward and back reactions. Or the pattern may arise from the generic processes of aggregation, component failure, or departures from mass action.

No particular mechanism necessarily associates with log-linear-log scaling. Rather, a broader view of the relations between pattern and process may help. That broader view emphasizes the underlying aspects of measurement and information common to all mechanisms. The common tendency for input-output to follow log-linear-log scaling may arise from the fact that so many different processes have the same consequences for measurement, scaling and information.

The common patterns of nature are exactly those patterns consistent with the widest, most disparate range of particular mechanisms. When great underlying disorder has, in the aggregate, a rigid common outcome, then that outcome will be widely observed, as if the outcome were a deterministic inevitability of some single underlying cause. The true underlying cause arises from generic aspects of measurement and information, not from specific chemical mechanisms.

System design

The inevitability of log-linear-log scaling from diverse underlying mechanisms suggests that the overall shape of biological input-output relations may be strongly constrained. Put another way, the range of variation is limited by the tendency to converge to log-linear-log scaling. However, within that broad class of scaling, biological systems can tune the responses in many different ways. The tuning may arise by adjusting the number of reactions in a cascade, by allowing component failure rates to increase, by using reactions significantly limited by diffusion rate, and so on.

Understanding the design of input-output relations must focus on those sorts of tunings within the broader scope of measurement and information transmission. The demonstration that a particular mechanism occurs in a particular system is always interesting and always limited in consequence. The locus of design and function is not the particular mechanism of a particular reaction, but the aggregate properties that arise through the many mechanisms that influence the tuning of the system.

Robustness

Overall input-output pattern often reflects the tight order that arises from underlying disorder. Thus, perturbations of particular mechanisms in the system may often have relatively little consequence for overall system function. That insensitivity to perturbation—or robustness—arises naturally from the structure of signal processing in biological systems.

To study robustness, it may not be sufficient to search for particular mechanisms that reduce sensitivity to perturbation. Rather, one must understand the aggregate nature of variation and function, and how that aggregate nature shapes the inherent tendency toward insensitivity in systems [3, 4, 18]. Once one understands the intrinsic properties of biological systems, then one can ask how those intrinsic properties are tuned by natural selection.

Measurement and information

Intuitively, it makes sense to consider input-output relations with respect to measurement and information. However, one may ask whether “measurement” and “information” are truly useful concepts or just vague and ultimately useless labels with respect to analyzing biological systems. Here, I make the case that deep and useful concepts underlie “measurement” and “information” in ways that inform the study of biological design (Table 1). I start by developing the abstract concepts in a more explicit way. I then connect those abstractions to the nature of biological input-output relations.

Measurement

Measurement is the assignment of a value to some underlying attribute or event. Thus, we may think of input-output relations in biology as measurement relations. At first glance, this emphasis on measurement may seem trivial. What do we gain by thinking of every chemical reaction, perception, or dose-response curve as a process of measurement?

Measurement helps to explain why certain similarities in pattern continually arise. When we observe common patterns, we are faced with a question. Do common aspects of pattern between different systems arise from universal aspects of measurement or from particular mechanisms of chemistry or perception shared by different systems?

Problems arise if we do not think about the distinction between general properties of measurement and specific mechanisms of particular chemical pathways. If we do not think about that distinction, we may try to explain what is in fact a universal attribute of measurement by searching, in each particular system, for special aspects of chemical kinetics, pathway structure or physical laws that constrain perception. In the opposite direction, we can never truly recognize the role of particular mechanisms in generating observed patterns if we do not separate out those aspects of pattern that arise from universal process.

Understanding universal aspects of pattern that arise from measurement means more than simply analyzing how observations are turned into numbers. Instead, we must recognize that the structure of each problem sets very strong constraints on numerical pattern independently of particular chemical or biological mechanisms.

Log-linear-log scales

I have mentioned that the Hill equation is simply an expression of log-linear-log scaling. The widely recognized value of the Hill equation for describing biological pattern arises from its connection to that underlying universal scale of measurement, in which small magnitudes scale logarithmically, intermediate magnitudes scale linearly, and large values scale logarithmically. Although linear and logarithmic scales are widely used and very familiar, the actual properties and meanings of such scales are rarely discussed. If we consider directly the nature of measurement scale, we can more deeply understand the relations between pattern and process.

Consider the example of measuring distance [13]. Start with a ruler that is about the length of your hand. With that ruler, you can measure the size of all the visible objects in your office. That scaling of objects in your office with the length of the ruler means that those objects have a natural linear scaling in relation to your ruler.

Now consider the distances from your office to various galaxies. Your ruler is of no use, because you cannot distinguish whether a particular galaxy moves farther away by one ruler unit. Instead, for two galaxies, you can measure the ratio of distances from your office to each galaxy. You might, for example, find that one galaxy is twice as far as another, or, in general, that a galaxy is some percentage farther away than another. Percentage changes define a ratio scale of measure, which has natural units in logarithmic measure [5]. For example, a doubling of distance always adds log(2) to the logarithm of the distance, no matter what the initial distance.

Measurement naturally grades from linear at local magnitudes to logarithmic at distant magnitudes when compared to some local reference scale. The transition between linear and logarithmic varies between problems. Measures from some phenomena remain primarily in the linear domain, such as measures of height and weight in humans. Measures for other phenomena remain primarily in the logarithmic domain, such as cosmological distances. Other phenomena scale between the linear and logarithmic domains, such as fluctuations in the price of financial assets [19] or the distribution of income and wealth [20].

Consider the opposite direction of scaling, from local magnitude to very small magnitude. Your hand-length ruler is of no value for small magnitudes, because it cannot distinguish between a distance that is a fraction 10^(−4) of the ruler and a distance that is 2 × 10^(−4) of the ruler. At small distances, one needs a standard unit of measure that is the same order of magnitude as the distinctions to be made. A ruler of length 10^(−4) distinguishes between 10^(−4) and 2 × 10^(−4), but does not distinguish between 10^(−8) and 2 × 10^(−8). At small magnitudes, ratios can potentially be distinguished, causing the unit of informative measure to change with scale. Thus, small magnitudes naturally have a logarithmic scaling.

As we change from very small to intermediate to very large, the measurement scaling naturally grades from logarithmic to linear and then again to logarithmic, a log-linear-log scaling. The locus of linearity and the meaning of very small and very large differ between problems, but the overall pattern of the scaling relations remains the same. This section analyzes that characteristic scaling in relation to the Hill equation and biological input-output patterns. I start by considering more carefully what measurement scales mean. I then connect the abstract aspects of measurement to the particular aspects of the Hill equation and to examples of particular biological mechanisms.

Invariance, the essence of explanation

We began with an observation. Many different input-output relations follow the Hill equation. We then asked: What process causes the Hill equation pattern? It turned out that many very different kinds of process lead to the same log-linear-log pattern of the Hill equation. We must change our question. What do the very different kinds of process have in common such that they generate the same overall pattern?

Consider two specific processes discussed earlier, cooperative binding and departures from mass action. Those different processes may produce Hill equation patterns with similar Hill coefficients, k. However, it is not immediately obvious why cooperative binding, departures from mass action, and many other different processes should lead to a very similar pattern.

Group together all of the disparate mechanisms that generate a common Hill equation pattern. When faced with a new mechanism, how can we tell if it belongs to the group? We might look for particular features that are common to all members of the group. However, that does not work. Various potential members might have important common features. But the attributes that they do not share might cause one potential member to have a different pattern. Common features are not sufficient.

More often, common membership arises from the features that do not matter. Think of circles. How can we describe whether a shape belongs to the circle class? We have to say what does not matter. For circles, it does not matter how much one rotates them, they always look the same. Circles are invariant to any rotation. Equivalently, circles are symmetric with regard to any rotation. Invariance and symmetry are the same thing. Subject to some constraints, if a shape is invariant to any rotation, it is a circle. If it is not invariant to all rotations, it is not a circle. The things that do not matter set the shared, invariant property of a group [21–23].

A rotation is a kind of transformation. The group is defined by the set of transformations that leave the group members unchanged, or invariant. We can alter a chemical system from cooperative binding under mass action to noncooperative binding under departure from mass action, and the log-linear-log scaling may be preserved. Such invariance arises because the different processes have an underlying symmetry with regard to the transformation of information from inputs to outputs (Table 1).

What aspects of process do not matter with respect to causing the same log-linear-log pattern of the Hill equation? How can we recognize the underlying invariance that joins together such disparate processes with respect to common pattern? The Hill equation expresses measurement scale. To answer our key questions, we must understand the meaning of measurement scale. Measurement scale itself is solely an expression of invariance. A particular measurement scale expresses what does not matter—the invariance under transformation that joins different kinds of processes to a common scaling.

Invariance and measurement

Suppose a process transforms inputs x to outputs G(x). The process may be a reading from a measurement instrument or a series of chemical transformations. Given that process, how should we define the associated measurement scale? Definitions can, of course, be made in any way. But we should aim for something with reasonable meaning.

One possible meaning for measurement is the scale that preserves information. In particular, we seek a scale on which we obtain the same information from the values of the inputs as we do from the values of the outputs. The measurement scale is the scale on which the input-output transformation does not alter the information in the signal (Table 1).

Information is, of course, often lost between input and output. But only certain kinds of information are lost. The measurement scale describes exactly what sort of information is lost during the transformation from input to output and what sort of information is retained. In other words, the measurement scale defines the invariant qualities of information that remain unchanged by the input-output process.

Different input-output processes belong to the same measurement scale when they share the same invariance that leaves particular aspects of information unchanged. For such processes, certain aspects of information remain the same whether we have access to the original inputs or the final outputs when those values are given on the associated measurement scale. By contrast, input-output processes that alter those same aspects of information when input and output values are given by a particular measurement scale do not belong to that scale.

Those abstract properties define a reasonable meaning for measurement scale. Such abstractness can be hard to parse. However, it is essential to have a clear expression of those ideas, otherwise we could never understand why so many different kinds of biological processes can have such similar input-output relations, and why other processes do not share the same relations. It is exactly those abstract informational aspects of measurement that unite cooperative binding and departures from mass action into a common group of processes that share a similar Hill equation pattern.

Measurement and information

It is useful to express the general concepts in a simple equation. I build up to that simple summary equation by starting with components of the overall concept.

Inputs are given by x. We denote a small change in input by dx. An input given on the measurement scale is T(x). The sensitivity of the measurement scale to a change in input is

m_x = d T(x)/dx,

which is the change on the measurement scale, d T(x), with respect to a change in input, dx. That sensitivity describes the information in the measurement scale with respect to fluctuations in inputs [13, 14, 24]. We may also write

m_x dx = d T(x),

providing an expression for the incremental information associated with a change in the underlying input, dx. If the scale is logarithmic, T(x) = log(x), then

m_x dx = d log(x) = dx/x,

for which the sensitivity of the measurement scale declines as the input becomes large. On a purely logarithmic scale, the same increment in input, dx, provides a lot of information when x is small and little information when x is large.
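As a small worked example (the numbers here are arbitrary and added only for illustration), compare the information carried by the same increment dx = 0.01 at two magnitudes on the logarithmic scale:

\[
x = 1:\; m_x\,dx = \frac{dx}{x} = 0.01,
\qquad
x = 100:\; m_x\,dx = \frac{dx}{x} = 10^{-4}.
\]

On a linear scale, the same increment would contribute equally at both magnitudes.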

Next, we express the relation that defines measurement scale. On the proper measurement scale for a particular problem, the information from input values is proportional to the information from associated output values. Put another way, the measurement scale is the transformation of values that makes information invariant to whether we use the input values or the output values. The measurement scale reflects those aspects of information that are preserved in the input-output relation, and consequently also expresses those aspects of information that are lost in the input-output relation. Although rather abstract, it is useful to complete the mathematical development before turning to some examples in the next section.

The output is G(x), and the measurement scale transforms the output by T[G(x)]. To have proportionality for the incremental information associated with a change in the underlying input, d T(x), and the incremental information associated with a change in the associated output, d T[G(x)], we have

d T(x) ∝ d T[G(x)],    (11)

in which the ∝ relationship shows the proportionality of information associated with the sensitivity of inputs and outputs when expressed on the measurement scale. That measurement scale defines the group of input-output processes, G(x), that preserves the same invariant sensitivity and information properties on the scale T(x). In other words, all such input-output processes G(x) that are invariant to the measurement scale transformation T(x) belong to that measurement scale [13, 14, 24].

In this equation, we have inputs, x, with the information in those inputs, d T(x), on the measurement scale T, and outputs, G(x), with information in those outputs, d T[G(x)], on the measurement scale T. We may abbreviate this key equation of measurement and information as

d T ∝ d T[G],

which we read as the information in inputs, d T, is proportional to the information in outputs, d T[G]. All input-output relations G(x) that satisfy this relation have the same invariant informational properties with respect to the measurement scale T.

Linear scale

This view of measurement scale means that linearity has an exact definition. Linearity requires that we obtain the same information from an increment dx on the input scale independently of whether the actual value is big or small (location), and whether we uniformly stretch or shrink all measurements by a constant amount. To express changes in location and in uniform scaling, let

T(x) = a + bx,

which changes the initial value, x, by altering the location by a and the uniform stretching or shrinking (scaling) by b. This transformation is often called the linear transformation. But why is that the essence of linearity? From the first part of Eq. (11),

m_x dx = d T(x) = b dx ∝ dx,

which means that an increment in measurement provides a constant amount of information no matter what the measurement value, and that the information is uniform apart from a constant of proportionality b. Linearity means that information in measurements is independent of location and uniform scaling.

What sort of input-output relations, G(x), belong to the linear measurement scale? From the second part of Eq. (11), we have d T[G(x)] ∝ dx, which we may expand as

d T[G(x)] = d[a + bG(x)] = b dG(x) ∝ dx.

Thus, any input-output relations such that dG(x) ∝ dx belong to the linear scale, and any input-output relations that do not satisfy that condition do not belong to the linear scale. To satisfy that condition, the input-output relation must have the form G(x) = α + βx, which is itself a linear transformation. So, only linear input-output relations attach to a linear measurement scale. If the input-output relation is not linear, then the proper measurement scale is not linear.

Logarithmic scale

We can run the same procedure on the logarithmic measurement scale, for which a simple form is T(x) = log(x). For this scale, d T(x) = dx/x. Thus, input-output relations belong to this logarithmic scale if

d T[G(x)] = d log[G(x)] = dG(x)/G(x) ∝ dx/x.

This condition requires that G(x) ∝ x^k, for which dG(x) ∝ x^(k−1) dx. The logarithmic measurement scale applies only to input-output functions that have this power-law form (Table 1). Note that the special case of k = 1 leads to linear scaling, but for other k values the scale is logarithmic.
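To confirm that the power-law form satisfies the invariance (a one-step check added for completeness), write G(x) = c x^k for any constant c; then

\[
d\,T[G(x)] = d \log\!\left(c\,x^{k}\right) = k\,\frac{dx}{x} \;\propto\; \frac{dx}{x} = d\,T(x),
\]

so the proportionality in Eq. (11) holds regardless of the constant c or the exponent k.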

Linear-log and log-linear scales

The most commonly used measurement scales are linear and logarithmic. But those scales are unnatural, because the properties of measurement likely change with magnitude. As I mentioned earlier, an office ruler is fine for making linear measurements on the visible objects in your office. But if you scale up to cosmological distances or down to microscopic distances, you naturally grade from linear to logarithmic. A proper sense of measurement requires attention to the ways in which information and input-output relations change with magnitude [13, 14].

Suppose an input increment provides information as

m_x dx = dx/(1 + bx).

When x is small, m_x dx ≈ dx, which is the linear measurement scale. When x is large, m_x dx ≈ dx/x, which is the logarithmic scale. The associated measurement scale is

T(x) ∝ log(1 + bx),

and the associated input-output functions satisfy G(x) ∝ (1 + bx)^k. This scale grades continuously from linear to logarithmic. The parameter b determines the relation between magnitude and the type of scaling.

The inverse scaling grades from logarithmic at small magnitudes to linear as magnitude increases, with

T(x) ∝ x + b log(x).

When x is small, the scale is logarithmic with T(x) ≈ b log(x). When x is large, the scale is linear with T(x) ≈ x.


Biological input-output: log-linear-log

I have emphasized that the log-linear-log scale is perhaps the most natural of all scales. Information in measurement increments tends to be logarithmic at small and large magnitudes. As one moves in either extreme direction, the unit of measure changes in proportion to magnitude to preserve consistent information. At intermediate magnitudes, changing values associate with an approximately linear measurement scale. For many biological input-output relations, that intermediate, linear zone is roughly the dynamic range.

The Hill equation description of input-output relations,

G(x) = x^k/(1 + x^k),

is widely useful because it describes log-linear-log scaling in a simple form. To check for log scaling in the limits of high or low input, we use T(x) = log(x), which implies d T(x) ∝ dx/x. In our fundamental relation of measurement, we have

d T(x) ∝ d T[G(x)] = d log[G(x)] = k [1/x − x^(k−1)/(1 + x^k)] dx.

When x is small, d T(x) ∝ dx/x, the expression for input-output functions associated with the logarithmic scale. When x is large, d T(x) ∝ −dx/x, which is the expression for saturation on a logarithmic scale.

When k > 1, the input-output relation scales linearly for intermediate x values. One can do various calculations to show the approximate linearity in the middle range. But the main point can be seen by simply looking at Fig. 2.

Exact linearity occurs when the second derivative of the Hill equation vanishes at

x∗ = [(k − 1)/(k + 1)]^(1/k),    (12)

for k > 1. Fig. 10 shows that the locus of linearity shifts from the low side as k → 1 and x∗ → 0 to the high side as k → ∞ and x∗ → 1. Note that x∗ = 1 is the input at which the response is one-half of the maximum.
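As a brief worked example of Eq. (12) (the numbers are added here for illustration):

\[
k = 2:\; x^{*} = \left(\tfrac{1}{3}\right)^{1/2} \approx 0.58,
\qquad
k = 4:\; x^{*} = \left(\tfrac{3}{5}\right)^{1/4} \approx 0.88,
\]

and as k grows, (k − 1)/(k + 1) approaches one, so x∗ approaches one, the input at which output is one-half of maximum.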

Figure 10 The locus of linearity, which is the value of input, x∗, at which the log-linear-log pattern of the Hill equation becomes exactly linear. The locus of linearity corresponds to the peak sensitivity of the input-output relation. At x∗ = 1, output is one-half of maximal response. Plot based on Eq. (12). Axes: Hill coefficient, k (horizontal) versus locus of linearity, x∗ (vertical).

Sensitivity and information

Sensitivity is the responsiveness of output for a small change in input. For a log-linear-log pattern, the locus of linearity is often equivalent to maximum sensitivity of the output in relation to the input. The logarithmic regimes at low and high input are relatively weakly sensitive to changes in input.

The Hill equation pattern for input, x, and output, G(x), is

G(x) = x^k/(1 + x^k) = 1/(1 + e^(−k log x)).

The equivalent form on the right side is the classic logistic function expressed in terms of log(x) rather than x. This logarithmic form is the log-logistic function. Note also that G(x) varies between zero and one as x increases from zero. Thus, G(x) is analogous to a cumulative distribution function (cdf) from probability theory. These mathematical analogies for input-output curves will be useful as we continue to analyze the meaning of input-output relations and why certain patterns are particularly common.
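The equivalence of the two forms is a one-line identity (spelled out here for completeness): since e^(−k log x) = x^(−k),

\[
\frac{1}{1 + e^{-k \log x}} = \frac{1}{1 + x^{-k}} = \frac{x^{k}}{1 + x^{k}}.
\]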

Note also that k = 1 is the Michaelis-Menten pattern of chemical kinetics. This relation of the input-output curve G(x) to chemical kinetics will be important when we connect general aspects of sensitivity to the puzzles of chemical kinetics and biochemical input-output patterns.

The sensitivity is the change in output with respect to input. Thus, sensitivity is the derivative of G with respect to x, which is

G′(x) = k x^(k−1)/(1 + x^k)^2.

This expression is analogous to the log-logistic probability distribution function (pdf). Here, I obtained the pdf in the usual way by differentiating the cdf. Noting that the pdf is the sensitivity of the cdf to small changes in value (input), we have an analogy between the sensitivity of input-output relations and the general relation between the pdf and cdf of a probability distribution.

Maximum sensitivity is the maximum value of G′(x), which corresponds to the mode of the pdf. For k ≤ 1, the maximum occurs at x = 0, which means that measurement sensitivity of the input-output system is greatest when the input is extremely small. Intuitively, it seems unlikely that maximum sensitivity could be achieved when discriminating tiny input values. For k > 1, the maximum value of the log-logistic pattern occurs when G′′(x) = 0, which is the point at which the second derivative is zero and the input-output relation is purely linear. That maximum occurs at the point given in Eq. (12).
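A quick numerical check of that claim (my own sketch, not from the original analysis): locate the maximum of the sensitivity G′(x) on a fine grid and compare it with the analytic locus of linearity in Eq. (12).

    import numpy as np

    def sensitivity(x, k):
        """Derivative of the Hill equation G(x) = x^k / (1 + x^k)."""
        return k * x**(k - 1) / (1 + x**k)**2

    x = np.linspace(1e-4, 2.0, 200001)
    for k in (1.5, 2.0, 4.0):
        x_numeric = x[np.argmax(sensitivity(x, k))]
        x_analytic = ((k - 1) / (k + 1))**(1 / k)   # Eq. (12)
        print(k, round(x_numeric, 3), round(x_analytic, 3))

For k > 1 the two values agree, confirming that peak sensitivity sits at the locus of linearity.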

The analogy with probability provides a connection between input-output functions, measurement and information. A probability distribution is completely described by the information that it expresses [3, 12]. That information can be split into two parts. First, certain constraints must be met that limit the possible shapes of the distribution, such as the mean, the variance, and so on. Second, the measurement scale sets the sensitivity of the outputs in terms of randomness (entropy) and information (negative entropy) in relation to changes in observed values or inputs [13, 14].

Sensitivity, measurement and the shape of input-output patterns

The Hill equation seems almost magical in its ability to fit the input-output patterns of diverse biological processes. The magic arises from the fact that the Hill equation is a simple expression of log-linear-log scaling when the Hill coefficient is k > 1. The Hill coefficient expresses the locus of linearity. As k declines toward one, the pattern becomes linear-log, with linearity at low input values grading into logarithmic as input increases. As k drops below one, the pattern becomes everywhere logarithmic, with declining sensitivity as input increases.

Sensitivity and measurement scale are the deeper underlying principles. The Hill equation is properly viewed as just a convenient mathematical form that expresses a particular pattern of sensitivity, measurement, and the informational properties of the input-output pattern. From this perspective, one may ask whether alternative input-output functions provide similar or better ways to express the underlying log-linear-log scale.

Frank & Smith [13, 14] presented the general relations between measurement scales and associated probability distribution function (pdf) patterns. Because a pdf is analogous to an expression of sensitivity for input-output functions, we can use their system as a basis for alternatives to the Hill equation. Perhaps the most compelling general expressions for log-linear-log scales arise from the family of beta distributions. For example, the generalized beta prime distribution can be written as

G(x) ∝ (x/m)^α [1 + (x/m)^k]^(−β).    (13)

With α = k and β = 1, we obtain a typical form of the Hill equation given in Eq. (3). The additional parameters α and β provide more flexibility in expressing different logarithmic sensitivities at high versus low inputs.
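A minimal sketch of Eq. (13) as a fitting function (my illustration; the function and parameter names are arbitrary), showing that α = k and β = 1 recovers the Hill form:

    import numpy as np

    def generalized_beta_prime(x, m, alpha, beta, k):
        """Shape of Eq. (13), up to a proportionality constant."""
        u = (x / m)**k
        return (x / m)**alpha * (1 + u)**(-beta)

    def hill(x, m, k):
        """Hill equation with half-maximal input m and coefficient k."""
        return (x / m)**k / (1 + (x / m)**k)

    x = np.linspace(0.1, 10.0, 5)
    print(np.allclose(
        generalized_beta_prime(x, m=2.0, alpha=3.0, beta=1.0, k=3.0),
        hill(x, m=2.0, k=3.0)))   # True: the Hill special case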

The theory of measurement scale and probability in Frank & Smith [13, 14] also provides a way to analyze more complex measurement and sensitivity schemes. For example, a double log scale (logarithm of a logarithm) reduces sensitivity below classical single log scaling. Such double log scales provide a way to express more extreme dissipation of signal information in a cascade at low or high input levels.

These different expressions for sensitivity have two advantages. First, they provide a broader set of empirical relations to use for fitting data. Those empirical relations derive from the underlying principles of measurement scale. Second, the different forms express hypotheses about how signal processing cascades dissipate information in signals and alter patterns of sensitivity. For example, one may predict that certain signal cascade architectures dissipate information more strongly and lead to double logarithmic scaling and loss of sensitivity at certain input levels. Further theory could help to sort out the predicted relations between signal processing architecture, the dissipation of information, and the general forms of input-output relations.
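For the first point, a minimal fitting sketch is given below. It fits the two-parameter Hill form to synthetic dose-response data with scipy; the data, parameter values and noise level are invented for illustration. Freeing α and β as in Eq. (13) would extend the same fit to capture asymmetric logarithmic tails when the data warrant it.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit the Hill form y = x^k / (m^k + x^k) to synthetic data.
# The broader family of Eq. (13) adds parameters alpha and beta for
# asymmetric logarithmic sensitivity at low versus high inputs.
def hill(x, m, k):
    return x**k / (m**k + x**k)

rng = np.random.default_rng(0)
x = np.logspace(-2, 2, 40)
y_obs = hill(x, m=1.0, k=2.0) + rng.normal(scale=0.02, size=x.size)

(m_hat, k_hat), _ = curve_fit(hill, x, y_obs, p0=[1.0, 1.0])
print(m_hat, k_hat)  # estimates should lie close to the generating values (1, 2)
```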

Conclusions

Nearly all aspects of biology can be reduced to inputs and outputs. A chemical reaction is the transformation of input concentrations to output concentrations. Developmental or regulatory subsystems arise from combinations of chemical reactions. Any sort of sensory measurement of environmental inputs follows from chemical output responses. The response of a honey bee colony to changes in temperature or external danger follows from perceptions of external inputs and the consequent output responses. Understanding biology largely comes down to describing input-output patterns and understanding the processes that generate those patterns.


I focused on one simple pattern, in which outputs rise with increasing inputs. I emphasized basic chemistry for two reasons. First, essentially all complex biological processes reduce to cascades of simple chemical reactions. Understanding complex systems ultimately comes down to understanding the relation between combinations of simple reactions and the resulting patterns at the system level. Second, the chemical level presents a clear puzzle. The classical theory of chemical kinetics predicts a concave Michaelis-Menten input-output relation. By contrast, many simple chemical reactions follow an S-shaped Hill equation pattern. The input-output relations of many complex systems also tend to follow the Hill equation.

I analyzed this distinction between Michaelis-Menten kinetics and Hill equation patterns in order to illustrate the broad problems posed by input-output relations. Several conclusions follow.

First, many distinct chemical processes lead to the Hill equation pattern. The literature mostly considers those different processes as a listing of exceptions to the classical Michaelis-Menten pattern. Each observed departure from Michaelis-Menten is treated as a special case requiring an explicit mechanistic explanation chosen from the list of possibilities.

Second, I emphasized an alternative perspective. A common pattern is widespread because it is consistent with the greatest number of distinct underlying mechanisms. Thus, the Hill equation pattern may be common because there are so many different processes that lead to that outcome.

Third, because a particular common pattern associates with so many distinctive underlying processes, it is a mistake to treat each observed case of that pattern as demanding a match to a particular underlying process. Rather, one must think about the problem differently. What general properties cause the pattern to be common? What is it about all of the different processes that lead to the same outcome?

Fourth, I suggested that aggregation provides the proper framing. Roughly speaking, aggregation concerns the structure by which different components combine to produce the overall input-output relations of the system. The power of aggregation arises from the fact that great regularity of pattern often emerges from underlying disorder. Deep understanding turns on the precise relation between underlying disorder and emergent order.

Fifth, measurement in relation to the dissipation of information sets the match between underlying disorder and emergent order. The aggregate combinations of input-output processing that form the overall system pattern tend to lose information in particular ways during the multiple transformations of the initial input signal. The remaining information carried from input to output arises from aspects of precision and measurement in each processing step.

Sixth, previous work on information theory and probability shows how aggregation may influence the general form of input-output relations. In particular, certain common scaling relations tend to set the invariant information carried from inputs to outputs. Those scaling relations and aspects of measurement precision tell us how to evaluate specific mechanisms with respect to their general properties. Further work may allow us to classify apparently different processes into a few distinctive sets.

Seventh, classifying processes by their key properties may ultimately lead to a meaningful and predictive theory. By that theory, we may understand why apparently different processes share similar outcomes, and why certain overall patterns are so common. We may then predict how overall pattern may change in relation to the structural basis of aggregation in a system and the general properties of the underlying components. More theoretical work and associated empirical tests must follow up on that conjecture.

Eighth, I analyzed the example of fundamental chemical kinetics in detail. My analysis supports the general points listed here. Specific analyses of other input-output relations in terms of aggregation, measurement and scale will provide the basis for a more general theory.

Ninth, robustness means insensitivity to perturbation. Because system input-output patterns tend to arise from the regularities imposed by aggregation, systems naturally express order arising from underlying disorder in components. The order reflects broad structural aspects of the system rather than tuning of particular components. Perturbations to individual components will therefore tend to have relatively little effect on overall system performance, which is the essence of robustness.

Finally, natural selection and biological design may be strongly influenced by the regularity of input-output patterns. That regularity arises inevitably from aggregation and the dissipation of information. Those inevitably regular patterns set the contours that variation tends to follow. Thus, biological design will also tend to follow those contours. Natural selection may act primarily to modulate system properties within those broad constraints. How do changes in extrinsic selective pressures cause natural selection to alter overall system architecture in ways that modulate input-output patterns?

Competing interests

The author declares that he has no competing interests.


Acknowledgements

I developed this work while supported by a Velux Foundation Professorship of Biodiversity at ETH Zurich. I benefitted greatly from many long discussions with Paul Schmid-Hempel.

Grant information

National Science Foundation (USA) grants EF–0822399 and DEB–1251035 support my research.

References

1. Tyson, J.J., Chen, K.C., Novak, B.: Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Current Opinion in Cell Biology 15, 221–231 (2003)
2. Zhang, Q., Bhattacharya, S., Andersen, M.E.: Ultrasensitive response motifs: basic amplifiers in molecular signalling networks. Open Biology 3, 130031 (2013). doi:10.1098/rsob.130031
3. Frank, S.A.: The common patterns of nature. Journal of Evolutionary Biology 22, 1563–1585 (2009)
4. Kauffman, S.A.: The Origins of Order. Oxford University Press, Oxford (1993)
5. Hand, D.J.: Measurement Theory and Practice. Arnold, London (2004)
6. Stevens, S.S.: On the psychophysical law. Psychological Review 64, 153–181 (1957)
7. Cornish-Bowden, A.: Fundamentals of Enzyme Kinetics, 4th edn. Wiley-Blackwell, Hoboken, NJ (2012)
8. Sarpeshkar, R.: Ultra Low Power Bioelectronics. Cambridge University Press, Cambridge, UK (2010)
9. Goldbeter, A., Koshland, D.E.: An amplified sensitivity arising from covalent modification in biological systems. Proceedings of the National Academy of Sciences USA 78, 6840–6844 (1981)
10. Alon, U.: An Introduction to Systems Biology: Design Principles of Biological Circuits. CRC Press, Boca Raton, Florida (2007)
11. Kholodenko, B.N., Hoek, J.B., Westerhoff, H.V., Brown, G.C.: Quantification of information transfer via cellular signal transduction pathways. FEBS Letters 414, 430–434 (1997)
12. Jaynes, E.T.: Probability Theory: The Logic of Science. Cambridge University Press, New York (2003)
13. Frank, S.A., Smith, E.: Measurement invariance, entropy, and probability. Entropy 12, 289–303 (2010)
14. Frank, S.A., Smith, E.: A simple derivation and classification of common probability distributions based on information symmetry and measurement scale. Journal of Evolutionary Biology 24, 469–484 (2011)
15. Das, S., Vikalo, H., Hassibi, A.: On scaling laws of biosensors: a stochastic approach. Journal of Applied Physics 105, 102021 (2009)
16. Andrews, S.S., Bray, D.: Stochastic simulation of chemical reactions with spatial resolution and single molecule detail. Physical Biology 1, 137–151 (2004)
17. Andrews, S.S., Addy, N.J., Brent, R., Arkin, A.P.: Detailed simulations of cell biology with Smoldyn 2.1. PLoS Computational Biology 6, 1000705 (2010)
18. Blüthgen, N., Herzel, H.: How robust are switches in intracellular signaling cascades? Journal of Theoretical Biology 225, 293–300 (2003)
19. Aparicio, F.M., Estrada, J.: Empirical distributions of stock returns: European securities markets, 1990-95. The European Journal of Finance 7(1), 1–21 (2001)
20. Dragulescu, A.A., Yakovenko, V.M.: Exponential and power-law probability distributions of wealth and income in the United Kingdom and the United States. Physica A 299, 213–221 (2001)
21. Feynman, R.P.: The Character of Physical Law. MIT Press, Cambridge, MA (1967)
22. Anderson, P.: More is different. Science 177, 393–396 (1972)
23. Weyl, H.: Symmetry. Princeton University Press, Princeton, NJ (1983)
24. Frank, S.A.: Measurement scale in maximum entropy models of species abundance. Journal of Evolutionary Biology 24, 485–496 (2011)
25. Gescheider, G.A.: Psychophysics: The Fundamentals, 3rd edn. Lawrence Erlbaum Associates, Mahwah, NJ (1997)
26. Krantz, D.H., Luce, R.D., Suppes, P., Tversky, A.: Foundations of Measurement. Volume I: Additive and Polynomial Representations. Dover, New York (2006)
27. Krantz, D.H., Luce, R.D., Suppes, P., Tversky, A.: Foundations of Measurement. Volume II: Geometrical, Threshold, and Probabilistic Representations. Dover, New York (2006)
28. Suppes, P., Krantz, D.H., Luce, R.D., Tversky, A.: Foundations of Measurement. Volume III: Representation, Axiomatization, and Invariance. Dover, New York (2006)
29. Rabinovich, S.G.: Measurement Errors and Uncertainty: Theory and Practice, 3rd edn. Springer, New York (2005)
30. Kim, S.Y., Ferrell, J.E.: Substrate competition as a source of ultrasensitivity in the inactivation of Wee1. Cell 128, 1133–1145 (2007)
31. Ferrell, J.E.: Signaling motifs and Weber's law. Molecular Cell 36, 724–727 (2009)
32. Cohen-Saidon, C., Cohen, A.A., Sigal, A., Liron, Y., Alon, U.: Dynamics and variability of ERK2 response to EGF in individual living cells. Molecular Cell 36, 885–893 (2009)
33. Goentoro, L., Kirschner, M.W.: Evidence that fold-change, and not absolute level, of β-catenin dictates Wnt signaling. Molecular Cell 36, 872–884 (2009)
34. Goentoro, L., Shoval, O., Kirschner, M.W., Alon, U.: The incoherent feedforward loop can provide fold-change detection in gene regulation. Molecular Cell 36, 894–899 (2009)
35. DeLean, A., Munson, P., Rodbard, D.: Simultaneous analysis of families of sigmoidal curves: application to bioassay, radioligand assay, and physiological dose-response curves. American Journal of Physiology-Endocrinology and Metabolism 235, 97–102 (1978)
36. Weiss, J.N.: The Hill equation revisited: uses and misuses. FASEB Journal 11, 835–841 (1997)
37. Rang, H.P.: The receptor concept: pharmacology's big idea. British Journal of Pharmacology 147, 9–16 (2006)
38. Bindslev, N.: Drug-Acceptor Interactions, Chapter 10: Hill in Hell. Co-Action Publishing, Järfälla, Sweden (2008). doi:10.3402/bindslev.2008.14
39. Walker, J.S., Li, X., Buttrick, P.M.: Analysing force–pCa curves. Journal of Muscle Research and Cell Motility 31, 59–69 (2010)
40. Hoffman, A., Goldberg, A.: The relationship between receptor-effector unit heterogeneity and the shape of the concentration-effect profile: pharmacodynamic implications. Journal of Pharmacokinetics and Biopharmaceutics 22, 449–468 (1994)
41. Getz, W.M., Lansky, P.: Receptor dissociation constants and the information entropy of membrane coding ligand concentration. Chemical Senses 26, 95–104 (2001)
42. Kolch, W., Calder, M., Gilbert, D.: When kinases meet mathematics: the systems biology of MAPK signalling. FEBS Letters 579, 1891–1895 (2005)
43. Tkacik, G., Walczak, A.M.: Information transmission in genetic regulatory networks: a review. Journal of Physics: Condensed Matter 23, 153102 (2011)
44. Marzen, S., Garcia, H.G., Phillips, R.: Statistical mechanics of Monod-Wyman-Changeux (MWC) models. Journal of Molecular Biology 425, 1433–1460 (2013)
45. Savageau, M.A.: Michaelis-Menten mechanism reconsidered: implications of fractal kinetics. Journal of Theoretical Biology 176, 115–124 (1995)
46. Savageau, M.A.: Development of fractal kinetic theory for enzyme-catalysed reactions and implications for the design of biochemical pathways. Biosystems 47, 9–36 (1998)
47. ben-Avraham, D., Havlin, S.: Diffusion and Reactions in Fractals and Disordered Systems. Cambridge University Press, Cambridge, UK (2000)
48. Schnell, S., Turner, T.E.: Reaction kinetics in intracellular environments with macromolecular crowding: simulations and rate laws. Progress in Biophysics and Molecular Biology 85, 235–260 (2004)
49. Dieckmann, U., Law, R., Metz, J.A.J.: The Geometry of Ecological Interactions: Simplifying Spatial Complexity. Cambridge University Press, Cambridge, UK (2000)
50. Ellner, S.P.: Pair approximation for lattice models with multiple interaction scales. Journal of Theoretical Biology 210, 435–447 (2001)
51. Marro, J., Dickman, R.: Nonequilibrium Phase Transitions in Lattice Models. Cambridge University Press, Cambridge, UK (2005)