Evolution of quantitative traits

Introduction

Let’s stop and review quickly where we’ve come and where we’re going. We started our survey of quantitative genetics by pointing out that our objective was to develop a way to describe the patterns of phenotypic resemblance among relatives. The challenge was that we wanted to do this for phenotypic traits whose expression is influenced both by many genes and by the environment in which those genes are expressed. Beyond the technical, algebraic challenges associated with many genes, we have the problem that we can’t directly associate particular genotypes with particular phenotypes. We have to rely on patterns of phenotypic resemblance to tell us something about how genetic variation is transmitted. Surprisingly, we’ve managed to do that.

Now we’re ready for the next step: applying all of these ideas to the evolution of a quantitative trait.

Evolution of the mean phenotype

We’re going to focus on how the mean phenotype in a population changes in response to natural selection, specifically in response to viability selection. Before we can do this, however, we need to think a bit more carefully about the relationship between genotype, phenotype, and fitness. Let \(F_{ij}(x)\) be the probability that genotype \(A_iA_j\) has a phenotype smaller than \(x\). Then \(x_{ij}\), the genotypic value of \(A_iA_j\), is \[x_{ij} = \int_{-\infty}^\infty x \mbox{\rm dF}_{ij}(x)\] and the population mean phenotype is \(p^2x_{11} + 2pqx_{12} + q^2x_{22}\). If an individual with phenotype \(x\) has fitness \(w(x)\), then the fitness of an individual with genotype \(A_iA_j\) is \[w_{ij} = \int_{-\infty}^\infty w(x) \mbox{\rm dF}_{ij}(x)\] and the mean fitness in the population is \(\bar w = p^2w_{11} + 2pqw_{12} + q^2w_{22}\).
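If you want to see these integrals in action, here is a minimal numerical sketch in Python. It assumes, purely for illustration, that each genotype’s phenotype is normally distributed around its own mean with a common standard deviation and that viability falls off as a Gaussian function of distance from an optimum; none of these particular distributions or parameter values comes from the notes.

\begin{verbatim}
import numpy as np

# Hypothetical genotype-specific phenotype distributions: normal, with
# genotype-specific means and a common standard deviation (illustrative only).
geno_means = {"A1A1": 1.2, "A1A2": 1.0, "A2A2": 0.8}
sd = 0.3
p = 0.4                                   # frequency of the A1 allele
freqs = {"A1A1": p**2, "A1A2": 2*p*(1-p), "A2A2": (1-p)**2}

def w(x, opt=1.1, width=0.5):
    """Hypothetical viability function w(x)."""
    return np.exp(-(x - opt)**2 / (2 * width**2))

x = np.linspace(-2.0, 4.0, 20001)         # grid for numerical integration

x_geno, w_geno = {}, {}
for g, mu in geno_means.items():
    dF = np.exp(-(x - mu)**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))
    x_geno[g] = np.trapz(x * dF, x)       # genotypic value: integral of x dF_ij(x)
    w_geno[g] = np.trapz(w(x) * dF, x)    # genotypic fitness: integral of w(x) dF_ij(x)

x_bar = sum(freqs[g] * x_geno[g] for g in freqs)   # population mean phenotype
w_bar = sum(freqs[g] * w_geno[g] for g in freqs)   # population mean fitness
print(x_bar, w_bar)
\end{verbatim}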

Now, there’s a well-known theorem from calculus known as Taylor’s theorem. It says that for any sufficiently differentiable function \(f(x)\) \[f(x) = f(a) + \sum_{k=1}^\infty \left({{(x-a)^k} \over k!}\right) f^{(k)}(a) \quad ,\] where \(f^{(k)}(a)\) is the \(k\)th derivative of \(f(x)\) evaluated at \(a\). Using this theorem we can produce an approximate expression describing how the mean phenotype in a population will change in response to selection. Remember that the mean phenotype, \(\bar x\), depends both on the underlying genotypic values and on the allele frequency. So I’m going to write the mean phenotype as \(\bar x(p)\) to remind us of that dependency. The mean phenotype changes from one generation to the next as a result of changes in the frequency of alleles that influence the phenotype, assuming that the environmental effects on phenotypes don’t change. \[\begin{aligned} {\bar x}(p') &=& {\bar x}(p) + (p' - p)\left({d{\bar x} \over dp}\right) + O\left((p'-p)^2\right) \\ \\ {\bar x}(p) &=& p^2x_{11} + 2pqx_{12} + q^2x_{22} \\ \\ \frac{d{\bar x(p)}}{dp} &=& 2px_{11} + 2qx_{12} - 2px_{12} - 2qx_{22} \\ &=& 2\left\{ \left(px_{11} + qx_{12} - {\bar x}/2\right) - \left(px_{12} + qx_{22} - {\bar x}/2\right)\right\} \\ &=& 2\left(\alpha_1 - \alpha_2\right) \\ \\ {\bar x}(p') &\approx& {\bar x}(p) + (p' - p)\left(2(\alpha_1 - \alpha_2)\right) \\ \\ \Delta{\bar x} &=& (\Delta p)\left(2(\alpha_1 - \alpha_2)\right) \end{aligned}\] In other words, the change in mean phenotype from one generation to the next depends first on how much the frequency of the \(A_1\) allele changes and second on the difference between the additive effects of \(A_1\) and \(A_2\).
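Here is a small Python check of this first-order approximation, using made-up genotypic values and a small change in allele frequency; the exact change in \(\bar x\) and the approximation \((\Delta p)\,2(\alpha_1 - \alpha_2)\) agree closely when \(\Delta p\) is small.

\begin{verbatim}
def mean_phenotype(p, x11, x12, x22):
    q = 1 - p
    return p**2 * x11 + 2*p*q * x12 + q**2 * x22

def alpha_diff(p, x11, x12, x22):
    """alpha_1 - alpha_2 at allele frequency p."""
    q = 1 - p
    xbar = mean_phenotype(p, x11, x12, x22)
    a1 = p * x11 + q * x12 - xbar / 2
    a2 = p * x12 + q * x22 - xbar / 2
    return a1 - a2

# Hypothetical genotypic values and allele-frequency change
x11, x12, x22 = 1.2, 1.05, 0.8
p, dp = 0.4, 0.01

exact = mean_phenotype(p + dp, x11, x12, x22) - mean_phenotype(p, x11, x12, x22)
approx = dp * 2 * alpha_diff(p, x11, x12, x22)
print(exact, approx)   # the two agree to first order in dp
\end{verbatim}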

Now you need to remember (from lo those many weeks ago) that \[p' = {p^2w_{11} + pqw_{12} \over \bar w} \quad .\] Thus, \[\begin{aligned} \Delta p &=& p' - p \\ &=& {p^2w_{11} + pqw_{12} \over \bar w} - p \\ &=& {p^2w_{11} + pqw_{12} - p\bar w \over \bar w} \\ &=& p\left(pw_{11} + qw_{12} - \bar w \over \bar w \right) \quad . \end{aligned}\] Now, let’s do a linear regression of fitness on phenotype. After all, to make any further progress, we need to relate phenotype to fitness, so that we can use the relationship between phenotype and genotype to infer the change in allele frequencies, from which we will infer the change in mean phenotype. From our vast statistical knowledge, we know that the slope of this regression line is \[\beta_1 = {\mbox{Cov}(w,x) \over \mbox{Var}(x)}\] and its intercept is \[\beta_0 = \bar w - \beta_1 \bar x \quad .\] Let’s use this regression equation to determine the fitness of each genotype. This is only an approximation to the true fitness, but it is adequate for many purposes. \[\begin{aligned} w_{ij} &=& \int_{-\infty}^\infty w(x) \mbox{\rm dF}_{ij}(x) \\ &\approx& \int_{-\infty}^\infty (\beta_0 + \beta_1x) \mbox{\rm dF}_{ij}(x) \\ &=& \beta_0 + \beta_1x_{ij} \\ \bar w &=& \beta_0 + \beta_1\bar x \quad . \end{aligned}\] If we substitute this into our expression for \(\Delta p\) above, we get \[\begin{aligned} \Delta p &=& p\left(pw_{11} + qw_{12} - \bar w \over \bar w \right) \\ &=& p\left(p(\beta_0 + \beta_1x_{11}) + q(\beta_0 + \beta_1x_{12}) - (\beta_0 + \beta_1\bar x) \over \bar w \right) \\ &=& p\beta_1\left(px_{11} + qx_{12} - \bar x \over \bar w \right) \\ &=& p\beta_1\left(\alpha_1 - \bar x/2 \over \bar w \right) \\ &=& p\beta_1\left(\alpha_1 - (p\alpha_1 + q\alpha_2) \over \bar w \right) \\ &=& {pq\beta_1(\alpha_1 - \alpha_2) \over \bar w} \quad , \end{aligned}\] where the last few steps use the definition of \(\alpha_1\) and the fact that \(p\alpha_1 + q\alpha_2 = \bar x/2\). So where are we now? Let’s substitute this result back into the equation for \(\Delta\bar x\). When we do we get \[\begin{aligned} \Delta\bar x &=& (\Delta p)\left(2(\alpha_1 - \alpha_2)\right) \\ &=& \left( pq\beta_1(\alpha_1 - \alpha_2) \over \bar w \right) \left(2(\alpha_1 - \alpha_2)\right) \\ &=& 2pq\alpha^2\left(\beta_1 \over \bar w\right) \\ &=& V_a \left(\beta_1 \over \bar w\right) \quad , \end{aligned}\] where \(\alpha = \alpha_1 - \alpha_2\) and \(V_a = 2pq\alpha^2\) is the additive genetic variance. This is great if we’ve done the regression between fitness and phenotype, but what if we haven’t? Let’s look at \(\mbox{Cov}(w,x)\) in a little more detail. \[\begin{aligned} \mbox{Cov}(w,x) &=& p^2\int_{-\infty}^\infty x w(x) \mbox{\rm dF}_{11}(x) + 2pq\int_{-\infty}^\infty x w(x) \mbox{\rm dF}_{12}(x) \\ && \qquad + q^2\int_{-\infty}^\infty x w(x) \mbox{\rm dF}_{22}(x) - \bar x \bar w \\ &=& p^2\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{11}(x) - x_{11}\bar w + x_{11}\bar w\right) \\ && \quad + 2pq\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{12}(x) - x_{12}\bar w + x_{12}\bar w\right) \\ && \quad + q^2\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{22}(x) - x_{22}\bar w + x_{22}\bar w\right) \\ && \quad - \bar x \bar w \\ &=& p^2\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{11}(x) - x_{11}\bar w\right) \\ && \quad + 2pq\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{12}(x) - x_{12}\bar w\right) \\ && \quad + q^2\left(\int_{-\infty}^\infty x w(x) \mbox{dF}_{22}(x) - x_{22}\bar w\right) \quad . 
\end{aligned}\] (In the last step the \(+x_{ij}\bar w\) terms sum to \(\bar x\bar w\) and cancel the \(-\bar x\bar w\).) Now \[\begin{aligned} \int_{-\infty}^\infty x w(x) \mbox{\rm dF}_{ij}(x) - x_{ij}\bar w &=& \bar w \left( \int_{-\infty}^\infty {x w(x) \over \bar w} \mbox{\rm dF}_{ij}(x) - x_{ij} \right) \\ &=& \bar w (x_{ij}^* - x_{ij}) \quad, \end{aligned}\] where \(x_{ij}^*\) refers to the mean phenotype of \(A_iA_j\) after selection. So \[\begin{aligned} \mbox{Cov}(w,x) &=& p^2\bar w(x_{11}^* - x_{11}) + 2pq\bar w(x_{12}^* - x_{12}) + q^2\bar w(x_{22}^* - x_{22}) \\ &=& \bar w(\bar x^* - \bar x) \quad , \end{aligned}\] where \(\bar x^*\) is the population mean phenotype after selection. In short, combining our equations for the change in mean phenotype and for the covariance of fitness and phenotype, and remembering that \(\beta_1 = \mbox{Cov}(w,x)/\mbox{Var}(x)\), \[\begin{aligned} \Delta\bar x &=& V_a \left({\bar w(\bar x^* - \bar x)/V_p \over \bar w}\right) \\ &=& \left({V_a \over V_p}\right)(\bar x^* - \bar x) \\ &=& h^2_N (\bar x^* - \bar x) \quad . \end{aligned}\] \(\Delta\bar x = \bar x' - \bar x\) is referred to as the response to selection and is often given the symbol \(R\). It is the change in population mean between the parental generation (before selection) and the offspring generation (before selection). \(\bar x^* - \bar x\) is referred to as the selection differential and is often given the symbol \(S\). It is the difference between the mean phenotype in the parental generation before selection and the mean phenotype in the parental generation after selection. Thus, we can rewrite our final equation as \[R = h^2_N S \quad .\] This equation is often referred to as the breeder’s equation.
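The key identity \(\mbox{Cov}(w,x) = \bar w(\bar x^* - \bar x)\) is easy to verify by simulation. The sketch below draws phenotypes from an arbitrary (made-up) distribution, assigns viabilities with an arbitrary fitness function, and compares the covariance with \(\bar w(\bar x^* - \bar x)\), where \(\bar x^*\) is the fitness-weighted mean phenotype.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical phenotypes and a hypothetical viability function
x = rng.normal(10.0, 2.0, size=100_000)
W = np.exp(-(x - 12.0)**2 / 20.0)           # viability W(x)

W_bar = W.mean()                            # mean fitness
x_bar = x.mean()                            # mean phenotype before selection
x_star = np.sum(W * x) / np.sum(W)          # mean phenotype after selection

cov_Wx = np.mean(W * x) - W_bar * x_bar     # Cov(W, x)
print(cov_Wx, W_bar * (x_star - x_bar))     # the two quantities match
\end{verbatim}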

A Numerical Example

To illustrate how this works, let’s examine the simple example in Table 1.

A simple example to illustrate response to selection in a quantitative trait.
Genotype \(A_1A_1\) \(A_1A_2\) \(A_2A_2\)
Phenotype 1.303 1.249 0.948

Given these phenotypes, \(p = 0.25\), and \(V_p = 0.16\), it follows that \({\bar x} = 1.08\) and \(h^2_N = 0.1342\). Suppose the mean phenotype after selection is 1.544. What will the mean phenotype be among the newly born progeny? \[\begin{aligned} S &=& \bar x^* - \bar x \\ &=& 1.544 - 1.08 \\ &=& 0.464 \\ \Delta\bar x &=& h^2_N S \\ &=& (0.1342)(0.464) \\ &=& 0.06 \\ \bar x' &=& \bar x + \Delta\bar x \\ &=& 1.08 + 0.06 \\ &=& 1.14 \end{aligned}\]
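A few lines of Python reproduce this example from the quantities defined earlier (tiny discrepancies relative to the text are just rounding):

\begin{verbatim}
# Reproduces the worked example in Table 1.
p, q = 0.25, 0.75
x11, x12, x22 = 1.303, 1.249, 0.948
Vp = 0.16

x_bar = p**2 * x11 + 2*p*q * x12 + q**2 * x22          # ~1.08
a1 = p * x11 + q * x12 - x_bar / 2                     # additive effect of A1
a2 = p * x12 + q * x22 - x_bar / 2                     # additive effect of A2
Va = 2 * p * q * (a1 - a2)**2                          # additive genetic variance
h2 = Va / Vp                                           # narrow-sense heritability, ~0.1342

S = 1.544 - x_bar                                      # selection differential, ~0.46
R = h2 * S                                             # response to selection, ~0.06
print(x_bar, h2, R, x_bar + R)                         # mean among progeny ~1.14
\end{verbatim}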

Fisher’s Fundamental Theorem of Natural Selection

Suppose the phenotype whose evolution we’re interested in following is fitness itself. Then we can summarize the fitnesses as illustrated in Table 2.

Fitnesses and additive fitness values used in deriving Fisher’s Fundamental Theorem of Natural Selection.
Genotype \(A_1A_1\) \(A_1A_2\) \(A_2A_2\)
Frequency \(p^2\) \(2pq\) \(q^2\)
Fitness \(w_{11}\) \(w_{12}\) \(w_{22}\)
Additive fitness value \(2\alpha_1\) \(\alpha_1 + \alpha_2\) \(2\alpha_2\)

Although I didn’t tell you this, a well-known fact about viability selection at one locus is that the change in allele frequency from one generation to the next can be written as \[\Delta p = \left({{pq} \over 2{\bar w}}\right) \left({{d{\bar w}} \over {dp}}\right) \quad .\]
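This expression is not an approximation: it is algebraically identical to the familiar recursion \(p' = (p^2w_{11} + pqw_{12})/\bar w\). A quick check in Python, with arbitrary (hypothetical) genotypic fitnesses:

\begin{verbatim}
# Check that (pq / 2*wbar) * dwbar/dp reproduces the usual
# viability-selection recursion. Genotypic fitnesses are hypothetical.
w11, w12, w22 = 1.0, 0.95, 0.80
p = 0.3
q = 1 - p

w_bar = p**2 * w11 + 2*p*q * w12 + q**2 * w22
dwbar_dp = 2 * (p * w11 + q * w12) - 2 * (p * w12 + q * w22)

dp_exact = (p**2 * w11 + p*q * w12) / w_bar - p        # p' - p from the recursion
dp_formula = (p * q / (2 * w_bar)) * dwbar_dp          # (pq / 2*wbar) * dwbar/dp
print(dp_exact, dp_formula)                            # identical
\end{verbatim}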

Using our new friend, Taylor’s theorem, it follows immediately that \[{\bar w}' = {\bar w} + \left(\Delta p\right)\left({{d{\bar w}} \over {dp}}\right) + \left({{(\Delta p)^2} \over 2}\right) \left({{d^2{\bar w}} \over {dp^2}}\right) \quad .\] (Because \(\bar w\) is a quadratic function of \(p\), the series stops at the second-order term, so this expression is exact.) Or, equivalently, \[\Delta {\bar w} = \left(\Delta p\right)\left({{d{\bar w}} \over {dp}}\right) + \left({{(\Delta p)^2} \over 2}\right) \left({{d^2{\bar w}} \over {dp^2}}\right) \quad .\]

Recalling that \({\bar w} = p^2w_{11} + 2p(1-p)w_{12} + (1-p)^2w_{22}\) we find that \[\begin{aligned} \frac{d{\bar w}}{dp} &=& 2pw_{11} + 2(1-p)w_{12} - 2pw_{12} - 2(1-p)w_{22} \\ &=& 2[(pw_{11}+qw_{12}) - (pw_{12}+qw_{22})] \\ &=& 2[(pw_{11}+qw_{12}-{\bar w}/2) - (pw_{12}+qw_{22}-{\bar w}/2)] \\ &=& 2[\alpha_1 - \alpha_2] \\ &=& 2\alpha \quad , \end{aligned}\] where the last two steps use the definitions for \(\alpha_1\) and \(\alpha_2\), and we set \(\alpha = \alpha_1 - \alpha_2\). Similarly, \[\begin{aligned} \frac{d^2{\bar w}}{dp^2} &=& 2w_{11} - 2w_{12} - 2w_{12} + 2w_{22} \\ &=& 2(w_{11} - 2w_{12} + w_{22}) \\ \end{aligned}\]

Now we can plug these back into the equation for \(\Delta\bar w\): \[\begin{aligned} \Delta {\bar w} &=& \left\{\left({{pq} \over {2{\bar w}}}\right)\left({{d{\bar w}} \over {dp}}\right) \right\} \left({{d{\bar w}} \over {dp}}\right) + {{ \left\{\left({{pq} \over {2{\bar w}}}\right)\left({{d{\bar w}} \over {dp}}\right) \right\}^2} \over 2} [2(w_{11} - 2w_{12} + w_{22})] \\ &=& \left\{\left({{pq} \over {2{\bar w}}}\right)\left(2\alpha\right)\right\}\left(2\alpha\right) + \left\{\left({{pq} \over {2{\bar w}}}\right)\left(2\alpha\right)\right\}^2 (w_{11} - 2w_{12} + w_{22}) \\ &=& {{2pq\alpha^2} \over {\bar w}} + {{p^2q^2\alpha^2} \over {{\bar w}^2}}(w_{11} - 2w_{12} + w_{22}) \\ &=& {V_a \over {\bar w}} \left\{1 + {{pq} \over {2{\bar w}}}(w_{11} - 2w_{12} + w_{22})\right\} \quad , \end{aligned}\] where the last step follows from the observation that \(V_a = 2pq\alpha^2\). The quantity \({{pq} \over {2{\bar w}}}(w_{11} - 2w_{12} + w_{22})\) is usually quite small, especially if selection is not too intense. So we are left with \[\Delta {\bar w} \approx {V_a \over {\bar w}} \quad .\]
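To see how good the approximation is, the sketch below compares the exact one-generation change in \(\bar w\) with \(V_a/\bar w\) for a hypothetical set of genotypic fitnesses under fairly weak selection.

\begin{verbatim}
# Compare the exact one-generation change in mean fitness with Va / wbar.
# Genotypic fitnesses are hypothetical; selection here is fairly weak.
w11, w12, w22 = 1.00, 0.98, 0.95
p = 0.4
q = 1 - p

def w_bar(p):
    q = 1 - p
    return p**2 * w11 + 2*p*q * w12 + q**2 * w22

wb = w_bar(p)
p_next = (p**2 * w11 + p*q * w12) / wb    # allele frequency after selection

a1 = p * w11 + q * w12 - wb / 2           # additive effect of A1 on fitness
a2 = p * w12 + q * w22 - wb / 2           # additive effect of A2 on fitness
Va = 2 * p * q * (a1 - a2)**2             # additive genetic variance in fitness

print(w_bar(p_next) - wb, Va / wb)        # exact change vs. Fisher's approximation
\end{verbatim}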

Selection on multiple traits

So far we’ve studied only the evolution of a single trait, e.g., height or weight. But organisms have many traits, and they evolve at the same time. How can we understand their simultaneous evolution? The basic framework of the quantitative genetic approach was first outlined by Russ Lande and Steve Arnold.

Let \(z_1\), \(z_2\), …, \(z_n\) be the phenotypic values of the \(n\) characters that we are studying. We’ll use \(\bar{\bf z}\) to denote the vector of mean phenotypes before selection and \(\bar{\bf z}^*\) to denote the vector after selection. The selection differential, \(\bf s\), is also a vector, given by \[{\bf s} = \bar{\bf z}^* - \bar{\bf z} \quad .\] Suppose \(p({\bf z})\) is the probability density of phenotype \(\bf z\) in the population, and let \(W({\bf z})\) be the fitness (absolute viability) of an individual with phenotype \(\bf z\). Then the mean absolute fitness is \[\bar W = \int W({\bf z})p({\bf z})d{\bf z} \quad .\] The relative fitness of an individual with phenotype \(\bf z\) can be written as \[w({\bf z}) = {W({\bf z}) \over \bar W} \quad .\] Using relative fitnesses, the mean relative fitness, \(\bar w\), is 1. Now \[\bar{\bf z}^* = \int {\bf z}w({\bf z})p({\bf z})d{\bf z} \quad .\] Recall that \(\mbox{Cov}(X,Y) = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X\mu_Y\). Consider \[\begin{aligned} {\bf s} &=& \bar{\bf z}^* - \bar{\bf z} \\ &=& \int {\bf z}w({\bf z})p({\bf z})d{\bf z} - \bar {\bf z} \\ &=& E(w{\bf z}) - \bar w\bar {\bf z} \quad , \end{aligned}\] where the last step follows since \(\bar w = 1\), meaning that \(\bar w\bar{\bf z} = \bar{\bf z}\). In short, \[{\bf s} = \mbox{Cov}(w,{\bf z}) \quad .\] That should look familiar from our analysis of the evolution of a single phenotype.

If we assume that all genetic effects are additive, then the phenotype of an individual can be written as \[{\bf z} = {\bf x} + {\bf e} \quad ,\] where \(\bf x\) is the additive genotype and \(\bf e\) is the environmental effect. We’ll denote by \(\bf G\) the matrix of genetic variances and covariances and by \(\bf E\) the matrix of environmental variances and covariances. The matrix of phenotypic variances and covariances, \(\bf P\), is then given by \[{\bf P} = {\bf G} + {\bf E} \quad .\] Now, if we’re willing to assume that the regression of additive genetic effects on phenotype is linear and that the environmental variance is the same for every genotype, then we can predict how phenotypes will change from one generation to the next: \[\begin{aligned} \bar{\bf x}^* - \bar{\bf x} &=& {\bf GP}^{-1}(\bar{\bf z}^* - \bar{\bf z}) \\ \bar{\bf z}' - \bar{\bf z} &=& {\bf GP}^{-1}(\bar{\bf z}^* - \bar{\bf z}) \\ \Delta\bar{\bf z} &=& {\bf GP}^{-1}{\bf s} \end{aligned}\] \({\bf GP}^{-1}\) is the multivariate version of \(h^2_N\), and this equation is the multivariate version of the breeder’s equation.
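Here is a minimal two-trait sketch of the multivariate breeder’s equation in Python; the \(\bf G\), \(\bf E\), and \(\bf s\) values are hypothetical.

\begin{verbatim}
import numpy as np

# A two-trait sketch of the multivariate breeder's equation (values hypothetical).
G = np.array([[0.40, 0.10],     # additive genetic variances / covariances
              [0.10, 0.30]])
E = np.array([[0.50, 0.05],     # environmental variances / covariances
              [0.05, 0.60]])
P = G + E                       # phenotypic variance-covariance matrix

s = np.array([0.30, -0.10])     # selection differentials for the two traits

beta = np.linalg.solve(P, s)    # selection gradient, beta = P^{-1} s
dz = G @ beta                   # response, delta z-bar = G P^{-1} s
print(beta, dz)
\end{verbatim}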

But we have already seen that \({\bf s} = \mbox{Cov}(w,{\bf z})\). Thus, \[{\bf \beta} = {\bf P}^{-1}{\bf s}\] is a set of partial regression coefficients of relative fitness on the characters, i.e., each \(\beta_i\) measures the dependence of relative fitness on character \(i\) alone, holding all other characters constant.

Note: \[\begin{aligned} s_i &=& \sum_{j=1}^n \beta_jP_{ij} \\ &=& \beta_1P_{i1} + \cdots + \beta_iP_{ii} + \cdots + \beta_nP_{in} \end{aligned}\] is the total selection differential for character \(i\), including the indirect effects of selection on other characters.
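The sketch below illustrates this decomposition with hypothetical values of \(\bf P\) and \(\bf\beta\): when two characters are strongly correlated, the total differential on one character can be small, or even of opposite sign, relative to its direct effect.

\begin{verbatim}
import numpy as np

# Decompose the total selection differential s_i = sum_j beta_j P_ij into the
# direct term (beta_i * P_ii) and the indirect term from correlated traits.
# P and beta are hypothetical illustrative values.
P = np.array([[1.00, 0.60],
              [0.60, 1.00]])
beta = np.array([0.20, -0.30])

s = P @ beta                          # total selection differentials
direct = np.diag(P) * beta            # beta_i * P_ii
indirect = s - direct                 # contribution of selection on other traits
print(s, direct, indirect)
\end{verbatim}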

An example: selection in a pentatomid bug

After a storm, 94 individuals were collected along the shoreline of Lake Michigan in Porter County, Indiana; 39 were alive and 55 were dead. The means of several characters before selection, the trait correlations, and the selection analysis are presented in the tables below.

Means and standard deviations of each character before selection in pentatomid bugs from the shores of Lake Michigan.
Character Mean before selection Standard deviation
head 0.880 0.034
thorax 2.038 0.049
scutellum 1.526 0.057
wing 2.337 0.043
Phenotypic correlations among characters in pentatomid bugs from the shores of Lake Michigan.
head thorax scutellum wing
head 1.00 0.72 0.50 0.60
thorax 1.00 0.59 0.71
scutellum 1.00 0.62
wing 1.00
Selection differentials (\(s\)) and selection gradients (\(\beta\)) for pentatomid bugs from the shores of Lake Michigan.
Character \(s\) \(s'\) \(\beta\) \(\beta'\)
head -0.004 -0.11 -0.7 \(\pm\) 4.9 -0.03 \(\pm\) 0.17
thorax -0.003 -0.06 11.6 \(\pm\) 3.9\(^{**}\) 0.58 \(\pm\) 0.19\(^{**}\)
scutellum -0.016\(^*\) -0.28\(^*\) -2.8 \(\pm\) 2.7 -0.17 \(\pm\) 0.15
wing -0.019\(^{**}\) -0.43\(^{**}\) -16.6 \(\pm\) 4.0\(^{**}\) -0.74 \(\pm\) 0.18\(^{**}\)

The column labeled \(s\) is the selection differential for each character. The column labeled \(s'\) is the standardized selection differential, i.e., the change measured in units of standard deviations rather than on the original scale. A multiple regression analysis of fitness versus phenotype on the original scale gives estimates of \(\beta\), the direct effect of selection on that trait. A multiple regression analysis of fitness versus phenotype on the standardized scale gives the standardized direct effect of selection, \(\beta'\), on that trait.
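If we assume that “standardized” simply means each character is rescaled by its standard deviation (so \(s' = s/\sigma\) and \(\beta' \approx \beta\sigma\)), the standardized columns can be recovered approximately from the raw ones; small discrepancies presumably reflect rounding and rerunning the regression on the standardized data.

\begin{verbatim}
import numpy as np

# Relating the raw and standardized columns of the selection table, assuming
# standardization means dividing each trait by its standard deviation.
# Values are taken from the tables above (head, thorax, scutellum, wing).
sd   = np.array([0.034, 0.049, 0.057, 0.043])       # standard deviations
s    = np.array([-0.004, -0.003, -0.016, -0.019])   # raw selection differentials
beta = np.array([-0.7, 11.6, -2.8, -16.6])          # raw selection gradients

print(s / sd)       # ~ [-0.11, -0.06, -0.28, -0.43], the s' column
print(beta * sd)    # ~ [-0.02, 0.57, -0.16, -0.71], close to the beta' column
\end{verbatim}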

Notice that the selection differential for the thorax measurement is negative, i.e., individuals that survived had smaller thoraces than those that died. But the direct effect of selection on thorax is strongly positive, i.e., all other things being equal, an individual with a large thorax was more likely to survive than one with a small thorax. Why the apparent contradiction? Because the thorax measurement is positively correlated with the wing measurement, and there’s strong selection for decreased values of the wing measurement.

Cumulative selection gradients

Arnold suggested an extension of this approach to longer evolutionary time scales. Specifically, he studied variation in the number of body vertebrae and the number of tail vertebrae in populations of Thamnophis elegans from two regions of central California. He found relatively little vertebral variation within populations, but there were considerable differences in vertebral number between populations on the coast side of the Coast Ranges and populations on the Central Valley side of the Coast Ranges. The consistent difference suggested that selection might have produced these differences, and Arnold attempted to determine the amount of selection necessary to produce these differences.

The data

Arnold collected pregnant females from two local populations at each of two sites in northern California, 282 km apart from one another. Females were collected over a ten-year period and returned to the University of Chicago. Dam-offspring regressions were used to estimate additive genetic variances and covariances of vertebral number. Mark-release-recapture experiments in the California populations showed that females with intermediate numbers of vertebrae grow at the fastest rate, at least at the inland site, although no such relationship was found in males. The genetic variance-covariance matrix he obtained is shown in Table 6.

Genetic variance-covariance matrix for vertebral number in central Californian garter snakes.
body tail
body 35.4606 11.3530
tail 11.3530 37.2973

The method

We know from Lande and Arnold’s results that the change in multivariate phenotype from one generation to the next, \(\Delta\bar{\bf z}\), can be written as \[\Delta\bar{\bf z} = {\bf G\beta} \quad ,\] where \(\bf G\) is the genotypic variance-covariance matrix and \({\bf\beta} = {\bf P}^{-1}{\bf s}\) is the set of partial regression coefficients describing the direct effect of each character on relative fitness. If we are willing to assume that \(\bf G\) remains constant, then the total change in a character subject to selection for \(n\) generations is \[\sum_{k=1}^n \Delta\bar{\bf z}_k = {\bf G}\sum_{k=1}^n{\bf\beta}_k \quad .\] Thus, \(\sum_{k=1}^n{\bf\beta}_k\) can be regarded as the cumulative selection gradient associated with a particular observed change, and it can be estimated as \[\sum_{k=1}^n{\bf\beta}_k = {\bf G}^{-1}\sum_{k=1}^n \Delta\bar{\bf z}_k \quad .\]

The results

The overall difference in vertebral number between inland and coastal populations can be summarized as: \[\begin{aligned} \mbox{body}_{\mbox{inland}} - \mbox{body}_{\mbox{coastal}} &=& 16.21 \\ \mbox{tail}_{\mbox{inland}} - \mbox{tail}_{\mbox{coastal}} &=& 9.69 \end{aligned}\] Given the estimate of \(\bf G\) already obtained, this corresponds to a cumulative selection gradient between inland and coastal populations of \[\begin{aligned} \beta_{\mbox{body}} &=& 0.414 \\ \beta_{\mbox{tail}} &=& 0.134 \end{aligned}\]
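These gradients can be recovered directly from the \(\bf G\) matrix in Table 6 and the observed differences; a short check in Python:

\begin{verbatim}
import numpy as np

# Cumulative selection gradients for the inland-coastal divergence, recovered
# from the G matrix in Table 6 and the observed differences in vertebral number.
G = np.array([[35.4606, 11.3530],
              [11.3530, 37.2973]])
dz = np.array([16.21, 9.69])            # body and tail differences, inland - coastal

beta_cum = np.linalg.solve(G, dz)       # cumulative gradient, G^{-1} * net change
print(beta_cum)                         # ~ [0.414, 0.134]
\end{verbatim}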

Applying the same technique to looking at the differences between populations within the inland site and within the coastal site we find cumulative selection gradients of \[\begin{aligned} \beta_{\mbox{body}} &=& 0.035 \\ \beta_{\mbox{tail}} &=& 0.038 \end{aligned}\] for the coastal site and \[\begin{aligned} \beta_{\mbox{body}} &=& 0.035 \\ \beta_{\mbox{tail}} &=& -0.004 \end{aligned}\] for the inland site.

The conclusions

“To account for divergence between inland and coastal California, we must invoke cumulative forces of selection that are 7 to 11 times stronger than the forces needed to account for differentiation of local populations.”

Furthermore, recall that the selection gradients can be used to partition the overall response to selection in a character into the portion due to the direct effects of that character alone and the portion due to the indirect effects of selection on a correlated character. In this case the overall response to selection in number of body vertebrae is given by \[{\bf G}_{11}\beta_1 + {\bf G}_{12}\beta_2 \quad ,\] where \({\bf G}_{11}\beta_1\) is the direct effect of body vertebral number and \({\bf G}_{12}\beta_2\) is the indirect effect of tail vertebral number. Similarly, the overall response to selection in number of tail vertebrae is given by \[{\bf G}_{12}\beta_1 + {\bf G}_{22}\beta_2 \quad ,\] where \({\bf G}_{22}\beta_2\) is the direct effect of tail vertebral number and \({\bf G}_{12}\beta_1\) is the indirect effect of body vertebral number. Using these equations it is straightforward to calculate that 91% of the total divergence in number of body vertebrae is a result of direct selection on this character. In contrast, only 51% of the total divergence in number of tail vertebrae is a result of direct selection on this character, i.e., 49% of the difference in number of tail vertebrae is attributable to indirect selection as a result of its correlation with number of body vertebrae.
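A short Python computation reproduces this partition from Table 6 and the cumulative gradients above:

\begin{verbatim}
import numpy as np

# Partition the divergence in each character into direct and indirect components,
# using the G matrix from Table 6 and the cumulative gradients computed above.
G = np.array([[35.4606, 11.3530],
              [11.3530, 37.2973]])
beta = np.array([0.414, 0.134])

direct   = np.diag(G) * beta            # G_11 * beta_1 and G_22 * beta_2
indirect = G @ beta - direct            # G_12 * beta_2 and G_12 * beta_1
total    = direct + indirect
print(direct / total)                   # ~ [0.91, 0.51]: fractions due to direct selection
\end{verbatim}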

The caveats

While the approach Arnold suggests is intriguing, there are a number of caveats that must be kept in mind in trying to apply it.

Creative Commons License

These notes are licensed under the Creative Commons Attribution License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.