Geotechnical Research

E-ISSN 2052-6156
Volume 5 Issue 3, September 2018, pp. 130-142
Themed issue on interactive design
 

Since guidelines for choosing ‘most probable’ parameters in ground engineering design codes are vague, concerns are raised regarding their definition and the associated uncertainties. This paper introduces Bayesian inference as a new, rigorous approach to obtaining estimates of the most probable parameters based on observations collected during construction. Following a review of optimisation-based methods that can be used in back-analysis, such as gradient descent and neural networks, a probabilistic model is developed using Clough and O’Rourke’s method for retaining wall design. Sequential Bayesian inference is applied to a staged excavation project to examine the applicability of the proposed approach and illustrate the process of back-analysis.

Notation
B        width of the excavation
D        observations
D_k      observation point
E        Young’s modulus of retaining wall
E_s      elastic modulus of the clay for undrained deformation
H        height of the retaining wall
H_e      excavation depth
h_avg    average spacing of the struts
I        moment of inertia of the wall section
L_R      load resistance ratio
k        number of stages
N_c      bearing capacity
r_Su     ratio of shear strength with depth
S        system stiffness
S_u      undrained shear strength
S_u0     shear strength at the top of soil layer
S_uavg   average undrained shear strength of the clay
S_ub     undrained shear strength below the excavation level
S_uu     undrained shear strength above the excavation level
W        distance between excavation base and firm stratum
x        variables
y        response
γ_w      unit weight of water
δ_max    maximum lateral ground movement
ε        random variable with standard normal distribution
Θ        unknown parameters
θ        unknown soil parameters
σ        standard deviation
ϕ        vector of known soil parameters

Deep excavations for underground spaces or other infrastructure have become common practice in many cities around the world in the past few decades. However, excavation-induced movement is still a major concern in most underground construction projects, since it may cause significant displacements and rotations in adjacent structures and hence lead to damage or even collapses. Therefore, accurate predictions of lateral wall deflections and surface settlements are critical in the design of excavation support systems. Excessive conservatism due to uncertainties in underground conditions and the assessment of soil properties often results in over-predictions. The observational method (OM) (Peck, 1969) can be applied in staged excavations to reduce redundant construction phases to save materials, time and costs.

The selection of parameters to be used in design within the OM has long been an issue of discussion, in particular when linked with different interpretations of the process for practical applications. Four approaches based on the timing of the decision to adopt the OM and the level of conservatism are described by Hardy et al. (2017). Most of them require the definition of ‘most probable’ conditions for design. During construction, the best estimate of future ground movements is also required to support decisions on altering the construction sequence.

In the Construction Industry Research and Information Association ground engineering design codes (Gaba et al., 2003; Nicholson et al., 1999), the ground condition most likely to occur in practice is represented by the most probable soil parameters. The most probable set of parameters is defined in C185 (Observational Method in Ground Engineering) as the probabilistic mean of all possible conditions (Nicholson et al., 1999). Hardy et al. (2017: pp. 1996–1997) define the ‘most probable value’ as the ‘arithmetical mean of the available data’. However, C580 (Embedded Retaining Walls: Guidance for Economic Design) also mentions that the most probable values have a 50% probability of exceedance, which implies that the most probable value is the median value of the distribution of the parameters (Gaba et al., 2003). The two methods of choosing most probable parameters presented in C185 and C580, respectively, achieve the same result if the parameters follow the Gaussian distribution. However, this set of most probable parameters does not necessarily predict the ground response which is ‘most likely’ to occur in practice because the ground and the system responses are usually non-linear (Houlsby and Houlsby, 2013; Murphy, 2012).

In the current literature and practice, most probable parameters are generally obtained through back-analysis as a calibration process to produce the best match between predictions and available observations of ground movements. These parameters are also expected to produce the most accurate prediction of the ground movement in future excavation stages. However, no standardised guidance is available on what constitutes the best match or how to choose the most probable parameters.

In this paper, the authors apply the Bayesian method to explore the definition of most probable parameters and demonstrate the process of obtaining those parameters through a rigorous process. The authors start by introducing and discussing some techniques used in back-analysis.

Previous studies (Yeow and Feltham, 2008; Yeow et al., 2014) demonstrated that back-analysis can be applied successfully in the OM. However, the set of parameters that produces the best match with the observations is still selected manually, relying solely on engineering experience and the operator’s individual judgement, which might lead to biased results (Houlsby and Houlsby, 2013).

In a more quantitative way, back-analysis is often formalised as an optimisation problem, defined as a minimisation process of a given loss function, such as the residual sum of squares. There are two categories of optimisation methods: the classical methods, such as gradient descent, and those derived from evolutionary computation, such as genetic algorithms (GAs) and neural networks (NNs). Since numerical methods, such as finite elements, have become powerful tools for engineering design, they are also widely integrated into the optimisation approaches in back-analysis to study the relationship between input soil parameters and the soil movement.

Classical optimisation methods

Classical optimisation theories, such as the gradient descent method, have been successfully applied to excavation projects. Ou and Tang (1994) used it to determine two unknown parameters in the pseudo-elastic hyperbolic Duncan–Chang model, by minimising the sum of the square of differences between observed and predicted values of the horizontal wall movement. The convergence properties and the stability of the algorithm were verified through a synthetic case and a case history. Calvello and Finno (2004) used a modified gradient descent method to update four soil parameters based on the stress–strain curves obtained from laboratory tests and inclinometer readings. Finno and Calvello (2005) applied a gradient-based inverse analysis procedure to update predictions of lateral deformations observed during an excavation in Chicago glacial clays. The optimisation was based on the readings obtained from inclinometers at every stage. The soil–structure interaction was described by the hardening-soil model (Schanz et al., 1999), and one of its six parameters, the reference value for the primary loading stiffness, was optimised. The predictions for later stages based on the optimised parameter using all observations were largely improved with an accuracy of 3 mm (12·5% of the maximum displacement) for the final stage.

Genetic algorithms

The GA, inspired by the biological processes of natural selection and survival of the fittest, is one of the most popular choices in back-analysis. It is able to solve complex optimisation problems with large, discrete and non-linear models. GA incorporates implicit parallelism, which considers many points at the same time during the search process; hence, it is robust and highly efficient (Solomatine et al., 2009). For geotechnical practice, GA was found particularly suitable for identifying soil parameters when the underlying norm of the error function is complex (Levasseur et al., 2009). Furthermore, Pichler et al. (2003) noted that the evolution of the population process could provide information about both parameter sensitivity and the existence of mathematical correlations between parameters.

Levasseur et al. (2008) used a GA to estimate three parameters (shear modulus, friction angle and initial lateral earth pressure at rest) controlling the horizontal displacements of a sheet pile wall. The optimisation procedure converged to a set of reasonable solutions, but not necessarily a unique one. Further considerations, such as an assessment by geotechnical experts, would be necessary to make a reasonable choice of parameter values from the set of solutions.

Rechea et al. (2008) optimised the reference value for the stiffness of two soil layers by using both gradient descent and GA on synthetic data and horizontal deflection measurements from a published case history (Finno and Calvello, 2005). They concluded that the parameters obtained by GA were close to the global optimum only when the search space was set as one-fourth to four times the actual value of the parameters. The significance of this conclusion is limited to the particulars of this case history and cannot be generalised to other cases. In addition, the authors pointed out that GA is more time consuming than the gradient descent method.

Neural networks

As an alternative approach, artificial neural networks (ANNs) consist of simple elements called neurons that are able to receive input, change their internal state and produce an output, according to the input and a predefined activation function. The network, constructed by connecting the output of certain neurons to the input of other neurons, forms a directed and weighted graph, where the neurons are the nodes and the connections between the neurons are directed edges with weights. The weights and the activation functions are updated by a process called learning, which is governed by rules (Harrington, 2012). The ANN method has been adopted in back-analysis to model the complex relations between soil parameters and ground response. The conventional predefined constitutive model can be replaced by the ANN material model. The parameters in the ANN model are optimised to predict future field measurements.

A self-learning approach called SelfSim was developed by Hashash et al. (2003, 2006, 2010, 2011). They introduced the concept of ‘training’, in which the NN material model is trained with available stress–strain data and the unknown parameters in the model are updated. Moreover, this model can be trained continuously when there are new input–output data available. The soil model obtained from the training progress can be used in the forward prediction of future excavations or later excavation stages. This approach was also applied to synthetic cases modelled by finite-element analysis (Hashash et al., 2003), and many successful applications were produced in both two-dimensional and three-dimensional case histories (Finno and Calvello, 2005; Finno and Roboski, 2005; Hashash et al., 2006, 2010). The accuracy of prediction for lateral ground movement is about 20%. However, the ANN material model can only be narrowly applied to the same case and circumstances in which it was derived and cannot be used for different soil layers with variable properties.

Limitations of optimisation-based back-analysis

When the optimisation techniques described in previous sections are applied to the back-analysis using field measurements, many concerns are raised in regard to efficiency, as well as accuracy. For example, the GA method usually requires long computation times. Moreover, these optimisation-based methods are able to analyse only a relatively small number of parameters (Miranda et al., 2011, 2015).

The ANN approach, in particular, is a black-box model in which the functional form of the relationships between model variables is unknown and needs to be estimated using data. The parameters in the ANN model represent only the connection between the network nodes as captured by weights. Therefore, the ANN model is able only to describe the end-to-end relationship and provide the output directly, but does not unfold any physical deduction processes. Hence, ANNs are not interpretable and may not be assessable by engineers.

Because the ANN model is effectively shaped by data, a large amount of data is required for training the model, which can be difficult to gather.

Optimisation methods may provide only a local optimum, which is not necessarily the optimum solution of the given problem. For example, the genetic mechanism is able to produce a set of parameters to localise the optimum in a given search space; however, there is no way to determine whether the set of obtained parameters represents the global minimum (Levasseur et al., 2009). In this case, a possible strategy to validate the result is to carry out several runs of the optimisation process with different initial guesses, potentially converging on different solutions, and compare outputs (Finno and Calvello, 2005). Therefore, the obtained optimal solution may depend heavily on the initialisation of the parameters.

Over-fitting is another potential issue with these methods. Since they assess each excavation stage independently and require a large number of measurements, the result of each computation may become overly constrained, particularly in the first excavation stages when the movement is very small.

While some of these drawbacks can be overcome by careful application, the optimisation methods mentioned previously are all deterministic, simply neglecting all uncertainties and lacking the ability to provide any measures of confidence in the accuracy of the outputs they produce (Phoon et al., 2003). Therefore, a significant limitation on the use of these methods is imposed by the nature of the problems that the methods are employed to address in geotechnical engineering – that is, soil is a heterogeneous material with inherently random characteristics. The uncertainty associated with soil parameters is further increased by the limited scope of ground investigation programmes and the variable ability of constitutive models to capture actual response, as well as any simplifications introduced in the numerical analyses. In addition, measurement error is another source of uncertainty that needs to be accounted for. In this context, adopting a probabilistic framework allows quantifying uncertainties in a rigorous manner.

Bayesian definition of the ‘most probable’ parameters

As described in previous sections, optimisation methods can effectively estimate model parameters by comparing computed and measured ground movements. These methods are deterministic and completely disregard the uncertainties inherent in natural processes. Therefore, they are not able to capture the distribution of the parameters and their ‘most likely’ values in a probabilistic setting. To represent how likely the parameters Θ are to have generated the existing data set, the probability of event Θ happening given the observation D is described with the expression p(Θ | D). Therefore, back-analysis aims to find the set of parameters Θ most likely to generate the existing data set D, which is equivalent to maximising the probability p(Θ | D). This approach is called maximum a posteriori (MAP) estimation. The MAP estimate is also equivalent to the mode of the probability distribution p(Θ | D), and this set of values is indeed the most probable set of parameters.

The probability p(Θ | D) can be regarded as the posterior estimate in Bayes’ theorem and can be expressed as

$$p(\Theta \mid D) = \frac{p(D \mid \Theta)\, p(\Theta)}{p(D)} \tag{1}$$

In the Bayesian (or epistemological) perspective, probability can be interpreted as a measure of the degree of belief. Thus, the whole process can be viewed as the evolution of the degree of belief in the parameters Θ, which is p(Θ) before seeing the evidence and p(Θ | D) after accounting for the evidence, the observations D.

The likelihood function p(D | Θ) expresses how probable the observed data are, given different parameters Θ. p(Θ) is called the prior distribution and represents the knowledge of which parameters are likely to generate the data before observations are obtained. The prior distribution p(Θ) is defined over the space of possible parameters and can be any type of distribution.

The denominator p(D) is called the model evidence and ensures that the posterior distribution is a valid probability density function

$$p(D) = \int p(D \mid \Theta)\, p(\Theta)\, \mathrm{d}\Theta \tag{2}$$

In general, this integral over all possible parameters can be hard to compute, particularly when the model is non-linear (Murphy, 2012). Numerical techniques therefore need to be employed in all practical cases. A sampling method, such as Markov chain Monte Carlo (MCMC), can efficiently estimate such an integral (Murphy, 2012).
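To illustrate how a sampling method sidesteps the evidence integral, the sketch below draws samples from an unnormalised posterior with a random-walk Metropolis–Hastings algorithm; because only the ratio of posterior densities is evaluated, p(D) cancels and never needs to be computed. This is a minimal illustration, not the authors’ implementation: the Gaussian prior, the linear placeholder model and all numerical values are hypothetical and serve only to make the script self-contained.

```python
import numpy as np

def log_prior(theta):
    # Placeholder prior: theta ~ N(40, 16^2), truncated to a plausible range
    if not (10.0 <= theta <= 120.0):
        return -np.inf
    return -0.5 * ((theta - 40.0) / 16.0) ** 2

def log_likelihood(theta, data, sigma=2.0):
    # Placeholder likelihood: observations scatter around a hypothetical linear response
    prediction = 0.1 * theta
    return -0.5 * np.sum(((data - prediction) / sigma) ** 2)

def metropolis_hastings(data, n_samples=20000, step=2.0, theta0=40.0, seed=0):
    """Random-walk MH: only the unnormalised posterior is needed, so the
    evidence p(D) in Equation 2 never has to be evaluated."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    theta = theta0
    log_post = log_prior(theta) + log_likelihood(theta, data)
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        log_post_prop = log_prior(proposal) + log_likelihood(proposal, data)
        # Accept with probability min(1, posterior ratio); the ratio cancels p(D)
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        samples[i] = theta
    return samples

if __name__ == "__main__":
    observations = np.array([4.8, 5.3])            # synthetic data
    draws = metropolis_hastings(observations)
    print("posterior mean:", draws[5000:].mean())  # discard burn-in
```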

MAP estimation

If one is interested only in the most probable values of the parameters rather than the probability distribution of the parameters, the posterior may be maximised as a function of Θ

$$\hat{\Theta}_{\mathrm{MAP}} = \underset{\Theta}{\operatorname{arg\,max}}\; p(\Theta \mid D) = \underset{\Theta}{\operatorname{arg\,max}}\; p(D \mid \Theta)\, p(\Theta) \tag{3}$$

If the posterior distribution of the parameters is known, then the MAP estimate is, by definition, the mode of the posterior distribution. When computing the whole distribution is too costly, the MAP estimator, defined in Equation 3, can be used to compute a point estimate.

Taking the logarithm of the estimator, it is observed that

$$\hat{\Theta}_{\mathrm{MAP}} = \underset{\Theta}{\operatorname{arg\,max}}\; \log p(\Theta \mid D) = \underset{\Theta}{\operatorname{arg\,max}}\; \log\left[p(D \mid \Theta)\, p(\Theta)\right] \tag{4}$$

There is a trade-off between likelihood and prior in shaping the posterior. In the process, the likelihood becomes more dominant as more data are obtained. When the number of observations becomes sufficiently large, the likelihood will overwhelm the prior, which will then have a diminished impact on the posterior. In this case, the MAP estimate will approach the maximum likelihood estimate. On the other hand, the MAP estimate is very desirable when the amount of data is small and particularly when it is of the same magnitude as the number of parameters.
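When only the point estimate is required, Equation 4 can be maximised numerically. The sketch below is a generic illustration rather than the authors’ code; the log prior and log likelihood are hypothetical single-parameter stand-ins for whatever model is actually adopted.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical log prior and log likelihood for a single parameter theta
def log_prior(theta):
    return -0.5 * ((theta - 40.0) / 16.0) ** 2        # theta ~ N(40, 16^2)

def log_likelihood(theta, data, sigma=2.0):
    prediction = 0.1 * theta                          # placeholder model
    return -0.5 * np.sum(((data - prediction) / sigma) ** 2)

def map_estimate(data, theta0=40.0):
    # Equation 4: maximise log p(D|theta) + log p(theta), i.e. minimise its negative
    neg_log_post = lambda t: -(log_prior(t[0]) + log_likelihood(t[0], data))
    result = minimize(neg_log_post, x0=[theta0], method="Nelder-Mead")
    return result.x[0]

if __name__ == "__main__":
    observations = np.array([4.8, 5.3])
    print("MAP estimate:", map_estimate(observations))
```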

Bayesian back-analysis in current applications

There are many successful applications of Bayesian inference in geotechnical engineering – for example, pile capacity analysis (Najjar and Gilbert, 2009), predictions for the depth of scour hole and its uncertainty assessment (Bolduc et al., 2008; Briaud et al., 2014), slope stability studies (Zhang and Goh, 2013) and parameter characterisation based on laboratory tests (Houlsby and Houlsby, 2013; Jung et al., 2009). In this paper, only the application of Bayesian inference to the back-analysis of staged excavations is addressed.

Back-analysis of supported excavation case histories using the Bayesian method was implemented with a regression model, known as the Kung–Juang–Hsiao–Hashash (KJHH) model, consisting of three multivariate polynomial equations for predicting the surface settlement profile, the maximum ground settlement and the maximum wall deflection (Hsiao et al., 2008; Kung et al., 2007). The powers and coefficients of the functions were derived from synthetic finite-element analyses of braced excavations on a flat ground surface in soft to medium stiff clays. The KJHH model used only the properties of the softest soil and the supporting structures to predict wall and ground settlements. The predictions were improved stage by stage through updating of the bias factor embedded in the prediction model. The Bayesian method provided an approach for back-analysis, yielding useful results even with limited observations and simplified models.

Wang et al. (2012) also adopted the KJHH model and updated two parameters, S_u/σ′_v and E_i/σ′_v (the ratio of shear strength over vertical effective stress and the ratio of initial Young’s modulus over vertical effective stress, respectively), based on the maximum horizontal wall deflections. The authors validated this approach with centrifuge simulations and showed that the accuracy of the maximum settlement prediction can be improved and the model uncertainty reduced with Bayesian updating. They applied the same procedure to a case history, the Taipei National Enterprise Centre (TNEC), which is a seven-stage excavation in soft to medium clay (Wang et al., 2013). The soil parameters were updated with the observations of the maximum wall deflection measured at a stage in the excavation and then used to refine the predicted wall response in subsequent excavation stages. The potential for building damage in the final excavation stage was assessed by calculating the damage potential index based on the angular distortion and lateral strain using empirical equations.

Hsein Juang et al. (2013) extended the work of Wang et al. (2012) on the TNEC case history by adding the Metropolis–Hastings algorithm-based MCMC method to the implementation. Different prior distributions of the unknown parameters were tested to assess their impact on the predictions. The results showed that the prior had a significant impact on the posterior distribution, particularly when there was only one measurement point.

Qi and Zhou (2017) recognised that the KJHH model is applicable only to cases involving soft to medium clays. They developed a regression model to describe this subset of problems by using a response surface method (Box and Draper, 1987) based on finite-element modelling of ground movement at 49 locations of different wall sections in 11 case histories. The model describes the relations between 17 parameters (cohesion, friction angle and elastic modulus for six soil types from soft to stiff) and the maximum wall deflections. Since only one measurement of wall deflection is available at each stage, Bayesian inference was used with the regression model to update three parameters at a time. The other 14 parameters remained constant as the values taken from laboratory tests. They applied this approach to a four-stage excavation case in Hangzhou, China. The results showed the prediction of the final stage improved after each excavation stage.

Compared with the optimisation methods described in the section headed ‘Back-analysis in the OM’, the Bayesian approach is superior for back-analysis in many respects. (a) The uncertainty in the soil parameters can be adequately considered. The updated parameters and the predictions are reported as distributions. This can be used for obtaining further quantities of interest, for example, to evaluate the reliability of the system by constructing a limit state function related to the updated parameters. (b) The Bayesian method can logically incorporate other sources of information, such as prior knowledge and expert judgement. Multiple parameters can be updated with only one observation. (c) Sampling methods, such as MCMC, are able to find the global optimum of the solution. (d) The physical model is distinct from the algorithm used to update the parameters, and the explicit meaning behind those parameters allows assessment of their validity. In addition, the choice of MCMC algorithm makes no difference to the results of the process, although the rate of convergence and the computational time required might be affected. Information related to the deterministic predictive function, such as values for soil properties, can be secured at each stage and transferred to the next relevant case history as prior knowledge irrespective of the MCMC algorithm used for the modelling.

This section focuses on how to apply the probabilistic model and Bayesian inference in a case history to estimate soil properties based on the observation of wall deformations. For illustration purposes, an empirical geotechnical design method (Clough and O’Rourke, 1990) is applied as the prediction model. The main advantages of using a simple empirical method as the deterministic function are that (a) the relation between data and parameters of interest is direct and the likelihood and posterior can be formulated explicitly; (b) the computational cost is low; and (c) the number of unknown parameters to be estimated is small.

Clough and O’Rourke method
Clough and O’Rourke (1990) estimated the movement caused by excavations in clay on the basis of plasticity principles and the factor of safety against base failure. As shown in Figure 1, the quantity of interest – that is, the maximum lateral movement (δ_max) normalised by the excavation depth (H_e) – is plotted against the system stiffness (S) for various values of the load resistance ratio (L_R) against basal heave. The system stiffness is defined as $S = EI/(\gamma_w h_{\mathrm{avg}}^4)$, where E denotes the Young’s modulus of the wall, I is the moment of inertia of the wall section, γ_w is the unit weight of water and h_avg is the average vertical spacing of the struts.

Figure 1 Chart for predicting wall movements. Republished with permission of American Society of Civil Engineers, from Clough and O’Rourke (1990) in Design and Performance of Earth Retaining Structures: Proceedings of a Conference; permission conveyed through Copyright Clearance Center, Inc.

The load resistance ratio is defined according to Terzaghi (1943) as

$$L_R = \frac{1}{H_e}\,\frac{N_c\, S_{ub}}{\gamma - S_{uu}/D} \tag{5a}$$

for a wide excavation, whose width B is larger than $\sqrt{2}\,D$, where D is the distance between the excavation base and the firm stratum. Otherwise, it is calculated as

$$L_R = \frac{1}{H_e}\,\frac{N_c\, S_{ub}}{\gamma - \sqrt{2}\, S_{uu}/B} \tag{5b}$$

where S_ub and S_uu are the undrained shear strengths below and above the excavation level, respectively, B denotes the width of the excavation, γ is the unit weight of the soil and H_e is the excavation depth. The bearing capacity factor N_c is defined as

$$N_c = \frac{4}{3}\left[\log\!\left(\frac{E_s}{S_{u\mathrm{avg}}}\right) + 1\right] + 1 \tag{6}$$

where E_s is the elastic modulus of the clay for undrained deformation and S_uavg is the average undrained shear strength of the clay.
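As a worked illustration of Equations 5–6 in the form reconstructed above, the following sketch computes N_c and L_R. The width criterion for switching between Equations 5a and 5b follows the condition stated above, the natural logarithm is assumed in Equation 6, and the numerical inputs are illustrative only (they are not the FEDC case history values).

```python
import numpy as np

def bearing_capacity_factor(E_s, S_uavg):
    """Equation 6: bearing capacity factor from the clay stiffness ratio
    (natural logarithm assumed)."""
    return (4.0 / 3.0) * (np.log(E_s / S_uavg) + 1.0) + 1.0

def load_resistance_ratio(N_c, S_ub, S_uu, gamma, H_e, B, D_firm):
    """Equations 5a/5b: load resistance ratio against basal heave.

    D_firm is the distance from the excavation base to the firm stratum.
    Equation 5a applies to a wide excavation (B > sqrt(2) * D_firm);
    otherwise Equation 5b is used with the Terzaghi block width B / sqrt(2).
    """
    if B > np.sqrt(2.0) * D_firm:
        relief = S_uu / D_firm              # Equation 5a
    else:
        relief = np.sqrt(2.0) * S_uu / B    # Equation 5b
    return (1.0 / H_e) * N_c * S_ub / (gamma - relief)

if __name__ == "__main__":
    # Illustrative numbers only: kPa for strengths and stiffness, kN/m3 for gamma, m for lengths
    N_c = bearing_capacity_factor(E_s=4000.0, S_uavg=50.0)
    L_R = load_resistance_ratio(N_c, S_ub=55.0, S_uu=45.0,
                                gamma=19.0, H_e=7.0, B=37.0, D_firm=15.0)
    print(f"N_c = {N_c:.2f}, L_R = {L_R:.2f}")
```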

Parameterisation of the Clough and O’Rourke design chart

The chart in Figure 1, often used in conventional design, was empirically derived. The curves in the chart can be thought of as the contours of a surface representing the interdependency of the normalised deflection, the system stiffness and the factor of safety against heave. In order to produce an estimate of deflection for any combination of values of the variables, a continuous description of this dependency needs to be constructed and integrated into the Bayesian inference approach.

Following the argument developed by Gardoni et al. (2009), Bayesian regression and Bayesian variable selection are used to develop an analytical formulation which describes the relation between δ_max/H_e, S and L_R. The logarithmic variance-stabilising transformation, introduced by Box and Cox (1964), is adopted to change the problem variables into y = ln(δ_max/H_e), x_1 = ln(S) and x_2 = ln(L_R), respectively. Data used in the regression analysis were generated by discretising the curves in the Clough and O’Rourke design chart (30 data points from each curve).

In view of the log transformation, the regression model with all candidate explanatory variables can be expressed as follows

$$y = \theta_1 + \theta_2 x_1 + \theta_3 x_2 + \theta_4 x_1^2 + \theta_5 x_1 x_2 + \theta_6 x_2^2 + \theta_7 x_1^3 + \theta_8 x_1^2 x_2 + \theta_9 x_1 x_2^2 + \theta_{10} x_2^3 + \sigma\varepsilon \tag{7}$$

in which θ = (θ_1, θ_2, …, θ_10) is the vector of the unknown coefficients of the variables, σ is the model standard deviation and ε is a vector of Gaussian random variables with zero mean and unit variance. A regression analysis was conducted first by using all variables. Table 1 shows the posterior statistics of the estimated coefficients and their corresponding coefficient of variation (COV).

Table 1 Posterior statistics of the coefficients in the first phase

           θ1       θ2       θ3       θ4       θ5       θ6
Mean       3·509    −0·817   −5·150   0·021    0·668    2·101
Variance   0·346    0·191    0·434    0·035    0·122    0·372
COV        0·099    0·233    0·084    1·682    0·182    0·177

           θ7       θ8       θ9       θ10      σ
Mean       0·003    −0·043   −0·032   −0·702   0·084
Variance   0·002    0·009    0·038    0·187    0·006
COV        0·740    0·216    1·202    0·266    0·066
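A fit of this kind can be reproduced with a few lines of code. The sketch below is a minimal illustration, not the authors’ implementation: it assumes a noninformative prior, for which the posterior mean of the coefficients coincides with the least-squares solution and the posterior covariance is approximately s²(XᵀX)⁻¹, and it uses synthetic points in place of the digitised chart data. The function names (design_matrix, fit_blr) are hypothetical.

```python
import numpy as np

def design_matrix(x1, x2):
    """All ten candidate terms of Equation 7 (cubic polynomial in x1 = ln S, x2 = ln L_R)."""
    return np.column_stack([
        np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2,
        x1**3, x1**2 * x2, x1 * x2**2, x2**3,
    ])

def fit_blr(X, y):
    """Bayesian linear regression with a noninformative prior: the posterior mean of the
    coefficients equals the least-squares solution and the posterior covariance is
    approximately s^2 (X^T X)^{-1}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    mean = XtX_inv @ X.T @ y
    resid = y - X @ mean
    dof = X.shape[0] - X.shape[1]
    s2 = resid @ resid / dof                      # point estimate of the model variance
    std = np.sqrt(s2 * np.diag(XtX_inv))          # approximate posterior std of coefficients
    cov_of_variation = np.abs(std / mean)
    return mean, std, cov_of_variation, np.sqrt(s2)

if __name__ == "__main__":
    # Synthetic stand-in for the 30-points-per-curve digitisation of the design chart
    rng = np.random.default_rng(1)
    x1 = rng.uniform(np.log(10), np.log(1000), 150)   # ln S
    x2 = rng.uniform(np.log(0.9), np.log(3.0), 150)   # ln L_R
    y = 3.3 - 0.7 * x1 - 4.4 * x2 + 0.5 * x1 * x2 + 0.9 * x2**2 + 0.05 * rng.standard_normal(150)
    theta_mean, theta_std, theta_cov, sigma = fit_blr(design_matrix(x1, x2), y)
    print("sigma =", round(sigma, 3))
    print("COV of each coefficient:", np.round(theta_cov, 3))
```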

A stepwise deletion process, Bayesian variable selection, was applied to remove the least informative variables and simplify the model (Gardoni et al., 2002; O’Hara et al., 2009). During this process, the variables with the largest variance were iteratively eliminated one by one until the model error increased significantly beyond the required model accuracy. Following this strategy, the variable term (ln S)² was deleted first, since it has the largest COV (= 1·682), as shown in Table 1. After each elimination, the authors assessed the reduced model with Bayesian regression recursively and found that the model error grew significantly higher after the fourth term was deleted. Therefore, only the first three terms were removed, and the remaining ones were kept as the needed explanatory variables. Figure 2 summarises this stepwise deletion process, showing the COV of the candidate variables (solid blue dots, left axis) and the posterior mean of the model error (open black squares, right axis) at each step. Table 2 lists the posterior statistics of the selected parameters.
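The stepwise deletion itself can be scripted as a loop that refits the reduced model after removing the highest-COV term and stops once the model error σ grows appreciably. The sketch below restates a compact version of the fit so that it stands alone; the 10% tolerance on σ and the function names are illustrative assumptions, not the criterion used by the authors.

```python
import numpy as np

def fit(X, y):
    """Least-squares fit with approximate posterior COV of each coefficient
    (noninformative prior, as in the previous sketch)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    mean = XtX_inv @ X.T @ y
    s2 = (y - X @ mean) @ (y - X @ mean) / (X.shape[0] - X.shape[1])
    cov = np.abs(np.sqrt(s2 * np.diag(XtX_inv)) / mean)
    return mean, cov, np.sqrt(s2)

def stepwise_deletion(X, y, labels, sigma_tolerance=1.10):
    """Drop the highest-COV (least informative) term, refit, and stop once the
    model error sigma rises beyond sigma_tolerance times its initial value."""
    keep = list(range(X.shape[1]))
    _, _, sigma0 = fit(X, y)
    while len(keep) > 2:
        _, cov, _ = fit(X[:, keep], y)
        worst = keep[int(np.argmax(cov))]
        trial = [j for j in keep if j != worst]
        _, _, sigma_trial = fit(X[:, trial], y)
        if sigma_trial > sigma_tolerance * sigma0:
            break                                 # deleting this term degrades the model
        keep = trial
    return [labels[j] for j in keep]

# usage (with design_matrix, x1, x2 and y from the previous sketch):
# kept = stepwise_deletion(design_matrix(x1, x2), y,
#                          labels=["1", "x1", "x2", "x1^2", "x1*x2", "x2^2",
#                                  "x1^3", "x1^2*x2", "x1*x2^2", "x2^3"])
```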

Table 2 Posterior statistics of parameters of selected model

                     θ1       θ2       θ3       θ5       θ6       θ7       θ8       σ
Mean                 3·206    −0·673   −4·393   0·545    0·860    0·004    −0·035   0·084
Standard deviation   0·120    0·033    0·263    0·092    0·062    0·000    0·008    0·006

Correlation coefficients: ×10⁻²
      θ1       θ2       θ3       θ5       θ6       θ7       θ8
θ2    −0·335
θ3    −2·191   0·602
θ5    0·730    −0·207   −2·044
θ6    0·203    −0·051   −0·576   0·070
θ7    0·003    −0·001   −0·006   0·002    0·000
θ8    −0·063   0·018    0·180    −0·070   −0·005   0·000
σ     0·007    −0·002   −0·021   0·008    0·000    0·000    −0·001

Figure 2 Bayesian variable selection process

Based on Table 2, the formulation used to calculate the maximum deflection from the design chart in Clough and O’Rourke (1990) is

$$\ln\!\left(\frac{\delta_{\max}}{H_e}\right) = 3.326 - 0.673 \ln S - 4.393 \ln L_R + 0.545 \ln S \ln L_R + 0.860 (\ln L_R)^2 + 0.004 (\ln S)^3 - 0.035 (\ln S)^2 \ln L_R \tag{8}$$

The contours derived from Equation 8 are plotted against the original design chart by Clough and O’Rourke (1990) in Figure 3.


Figure 3 Analytical representation of Clough and O’Rourke (1990) design chart
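For later use as the deterministic predictor, Equation 8 can be evaluated directly. The helper below is a plain transcription of the coefficients printed above; the returned ratio δ_max/H_e is assumed to be expressed on the same scale as the original chart axis (per cent in Clough and O’Rourke’s figure), and the input values in the example are placeholders only.

```python
import numpy as np

def normalised_deflection(S, L_R):
    """Equation 8: returns delta_max / H_e on the scale of the chart axis
    (the original Clough and O'Rourke chart plots this ratio in per cent)."""
    lnS, lnLR = np.log(S), np.log(L_R)
    ln_ratio = (3.326 - 0.673 * lnS - 4.393 * lnLR
                + 0.545 * lnS * lnLR + 0.860 * lnLR**2
                + 0.004 * lnS**3 - 0.035 * lnS**2 * lnLR)
    return np.exp(ln_ratio)

if __name__ == "__main__":
    # Illustrative inputs: system stiffness S = 120, load resistance ratio L_R = 2.0
    print("delta_max / H_e =", round(normalised_deflection(120.0, 2.0), 3))
```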

Probabilistic model for Bayesian inference

As discussed in the section headed ‘Bayesian inference for back-analysis with field observations’, the definition of ‘most probable’ is proposed here from a probabilistic perspective, in which the soil parameters are treated as random variables. Before Bayes’ theorem (Equation 1) is used to obtain the posterior distribution of the parameters and the MAP estimates, the likelihood (a probability function of θ) needs to be constructed. A probabilistic and unbiased predictive model is developed following Gardoni et al. (2002) to describe the relations between the lateral ground movement and the soil parameters and to consider uncertainties in both parameters and measurements. The probabilistic model is defined as

$$D(\Theta, \phi_k) = \hat{f}(\theta, \phi_k) + \sigma \varepsilon_k \tag{9}$$

where $\hat{f}(\theta, \phi_k)$ is the deterministic function predicting the response of the system, which in this application is taken as the parameterised Clough and O’Rourke method expressed in Equation 8; θ denotes the vector of unknown soil parameters, ϕ_k is the vector of known soil parameters, ε_k is a random variable with standard normal distribution and σ² is the variance for the given deterministic function.

Based on the probabilistic model, when the measurement D_k of the maximum deflection at stage k, for k = 1, …, m, is available, the likelihood function can be constructed following a transformation of the probability space from ε_k to D (Tang and Ang, 2007). The resulting likelihood function is in the form of a multivariate normal distribution, as shown in the equation

$$p(D_k \mid \Theta) = (2\pi\sigma^2)^{-1/2} \exp\left\{-\frac{1}{2\sigma^2}\left[D_k - \hat{f}(\theta, \phi_k)\right]^2\right\} \tag{10}$$

Then, Bayes’ theorem expressed in Equation 1 can be applied to obtain the posterior distribution of the unknown soil parameters θ. For example, the posterior at stage 1 is

$$p(\Theta \mid D_1) \propto p(D_1 \mid \Theta)\, p(\Theta), \qquad k = 1 \tag{11}$$

This process can be repeated at each stage when a new observation becomes available, in a sequential use of Bayesian inference. The posterior obtained in the latest stage contains all the knowledge learnt throughout the process and brings all the information, subjective and objective, as a prior into the next stage. The posterior of the unknown parameters at stage k is

$$p(\Theta \mid D_1, \ldots, D_k) \propto p(D_k \mid \Theta)\, p(\Theta \mid D_1, \ldots, D_{k-1}), \qquad k = 2, \ldots, m \tag{12}$$

The collection of all samples drawn from p(Θ | D_1, …, D_k) is used to approximate the posterior density and to compute the quantiles, the moments and other statistics of interest. The mode of the posterior distribution, which represents the most probable set of the unknown parameters, can also be obtained.
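Equations 10–12 translate almost directly into code. The sketch below strings the stages together with a plain random-walk Metropolis sampler; the deterministic predictor f_hat, the prior parameters and the observation values are hypothetical placeholders rather than the authors’ model. Combining the original prior with the product of all likelihoods up to stage k is mathematically equivalent to the sequential form of Equation 12, in which the previous posterior serves as the prior for the next stage.

```python
import numpy as np

def f_hat(theta, phi):
    """Placeholder deterministic predictor f(theta, phi_k): in the paper this role is
    played by the parameterised Clough and O'Rourke method (Equation 8)."""
    S_u0, r_Su = theta
    return phi["scale"] / (S_u0 + r_Su * phi["depth"])   # hypothetical functional form

def log_prior(theta):
    S_u0, r_Su = theta
    if not (10.0 <= S_u0 <= 120.0 and 0.0 <= r_Su <= 10.0):
        return -np.inf
    return -0.5 * ((S_u0 - 40.0) / 16.0) ** 2 - 0.5 * ((r_Su - 1.0) / 0.4) ** 2

def log_likelihood(theta, D_k, phi_k, sigma=1.0):
    """Equation 10, up to an additive constant."""
    return -0.5 * ((D_k - f_hat(theta, phi_k)) / sigma) ** 2

def sample_posterior(observations, phis, n=20000, step=(2.0, 0.1), seed=0):
    """Equations 11-12: the posterior after stage k uses all observations up to stage k."""
    rng = np.random.default_rng(seed)

    def log_post(theta):
        return log_prior(theta) + sum(
            log_likelihood(theta, D, phi) for D, phi in zip(observations, phis))

    theta = np.array([40.0, 1.0])
    lp = log_post(theta)
    samples = np.empty((n, 2))
    for i in range(n):
        prop = theta + np.array(step) * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples[n // 4:]          # discard burn-in

if __name__ == "__main__":
    # Hypothetical observations (mm) and stage descriptors for two stages
    obs = [5.0, 14.0]
    phis = [{"scale": 300.0, "depth": 5.0}, {"scale": 900.0, "depth": 8.0}]
    post = sample_posterior(obs, phis)
    print("posterior means:", post.mean(axis=0))
```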

The predictive estimate of the maximum deflection $\tilde{D}$ for a later stage, stage k + i, can be computed by

$$\tilde{D}(\phi_{k+i}) = \int D(\Theta, \phi_{k+i})\, p(\Theta \mid D_1, \ldots, D_k)\, \mathrm{d}\Theta \tag{13}$$
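Given posterior samples, the integral in Equation 13 reduces to a Monte Carlo average. A minimal sketch, assuming posterior samples and a predictor function like those in the previous sketch:

```python
import numpy as np

def predictive_estimate(posterior_samples, predictor, phi_future, sigma=0.0, seed=0):
    """Equation 13: average the deterministic prediction over the posterior samples.
    Adding sigma * standard-normal noise gives draws from the full predictive
    distribution rather than just its mean."""
    rng = np.random.default_rng(seed)
    preds = np.array([predictor(theta, phi_future) for theta in posterior_samples])
    if sigma > 0.0:
        preds = preds + sigma * rng.standard_normal(len(preds))
    return preds.mean(), preds.std()

# usage (with post and f_hat from the previous sketch, both hypothetical):
# mean, std = predictive_estimate(post, f_hat, {"scale": 900.0, "depth": 8.0})
```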
Ford Engineering Design Center project description and site conditions

The Ford Engineering Design Center (FEDC) excavation project is used to illustrate the Bayesian inference process described in the previous sections. The project is located on the Northwestern University campus in Evanston, Illinois, and consists of a 44 m × 37 m internally braced excavation with uneven initial elevations on adjacent sides. Figure 4 shows a plan view of the site, including initial site elevations, dimensions of the excavation, support system geometry and layout of the instrumentation. For a more detailed description of this project, see the paper by Blackburn and Finno (2007).


Figure 4 Excavation plan and instrument locations. Republished with permission of American Society of Civil Engineers, from Blackburn and Finno (2007) Journal of Geotechnical and Geoenvironmental Engineering 133(11); permission conveyed through Copyright Clearance Center, Inc. ECD, Evanston City Datum

Soil properties determined through field testing are presented in Table 3 and Figure 5, showing large variations due to the scatter in the data.

Table 3 Engineering properties of soil stratigraphy (Blackburn and Finno, 2007)

Layer                                  Elevation: m ECD     Engineering properties (from CPT testing)
Sandy fill                             5·2 to 4·2           ϕ′ = 44–48°
Medium silty sand                      4·2 to 2             ϕ′ = 42–44°
Silty fine to medium sand              2 to 0               ϕ′ = 30–38°
Blodgett stratum, soft clay            −0·9 to −4·9         S_u = 30–120 kPa
Deerfield stratum, medium clay         −4·9 to −13·1        S_u = 6–127 kPa
Park Ridge stratum, stiff silty clay   −13·1 to −16·8       S_u = 60–251 kPa
Hardpan                                −16·8 to −20·7       S_u > 100 kPa

ECD, Evanston City Datum; CPT, cone penetration test


Figure 5 Subsurface conditions. Republished with permission of American Society of Civil Engineers, from Blackburn and Finno (2007) Journal of Geotechnical and Geoenvironmental Engineering 133(11); permission conveyed through Copyright Clearance Center, Inc.

The construction sequence is summarised in Table 4. The excavation reached a final elevation of −3·8 m from Evanston City Datum (ECD), which is in the Blodgett stratum.

Table 4 Major construction stages for the FEDC case history

Excavation stage   Activity
0                  Potholing and sheet pile installation
1                  Excavate to +0·9 m ECD and install/prestress first level of support at +1·5 m ECD
2                  Excavate to −1·5 m ECD and install/prestress second level of support at −1·0 m ECD
3                  Excavate to −3·8 m ECD

Inclinometer-2 (I-2) was chosen to provide the observation data in this analysis because it recorded the maximum deflection and was located in the middle of the plane surface of the north wall, where the influence of the corners was minimal. The deflection measured by I-2 (reset after wall installation) is shown in Figure 6. The maximum cumulative deflection values from stage 1 to stage 3 are 2·5, 5 and 14 mm, respectively.


Figure 6 Lateral soil movements observed at I-2, reset after wall installation. Republished with permission of American Society of Civil Engineers, from Blackburn and Finno (2007) Journal of Geotechnical and Geoenvironmental Engineering 133(11); permission conveyed through Copyright Clearance Center, Inc.

Bayesian inference with Clough and O’Rourke method

In this section, the authors plan to infer the shear strength S_u based on the observations obtained in the field during construction. The observations are the maximum deflections at stage 2 and stage 3, which are 5 and 14 mm, respectively. The model is updated sequentially after stage 1, because the Clough and O’Rourke method is not applicable to the cantilever stage. Based on the posterior of S_u obtained at stage 2, the model will predict the maximum deflection at stage 3. To evaluate the posterior distribution of the unknown parameters, the delayed rejection adaptive Metropolis–Hastings algorithm (Haario et al., 2006), a variant of the MCMC method, was employed.

The function (Equation 8) developed for the Clough and O’Rourke method will be applied as the deterministic function to compute the estimate $\hat{f}(\theta, \phi_k)$ in Equation 9. Two parameters are used to describe S_u: the shear strength at the top of the soil layer, S_u0, and the gradient r_Su of S_u with depth. The undrained shear strength below the excavation level is S_ub = S_u0 + (1/2)(H_e + H) r_Su; the undrained shear strength above the excavation level is S_uu = S_u0 + (1/2) H_e r_Su; and the average shear strength is S_uavg = S_u0 + (1/2) H r_Su, where H is the height of the retaining wall. In this application, the unknown soil parameters are θ = (S_u0, r_Su) and the known parameters are ϕ = (E, I, γ_w, h_avg, B, γ, H_e, E_u), summarised in Table 5 with values taken from Bryson and Zapata-Medina (2012) and Finno et al. (2007).
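To make the parametrisation concrete, the short sketch below assembles S_ub, S_uu, S_uavg and the system stiffness from the quantities listed in Table 5. The stage 3 excavation depth used in the example is an assumed illustrative value, and the function names are hypothetical rather than taken from the authors’ implementation.

```python
def shear_strength_profile(S_u0, r_Su, H_e, H):
    """S_ub, S_uu and S_uavg from the linear strength profile defined above (kPa)."""
    S_ub = S_u0 + 0.5 * (H_e + H) * r_Su    # below excavation level
    S_uu = S_u0 + 0.5 * H_e * r_Su          # above excavation level
    S_uavg = S_u0 + 0.5 * H * r_Su          # average over the wall height
    return S_ub, S_uu, S_uavg

def system_stiffness(EI, gamma_w, h_avg):
    """S = EI / (gamma_w * h_avg^4), as defined for the Clough and O'Rourke chart."""
    return EI / (gamma_w * h_avg**4)

if __name__ == "__main__":
    # Values from Table 5; H_e = 8.0 m is an assumed illustrative stage 3 depth
    S_ub, S_uu, S_uavg = shear_strength_profile(S_u0=40.0, r_Su=1.0, H_e=8.0, H=14.8)
    S_stage3 = system_stiffness(EI=58000.0, gamma_w=9.78, h_avg=2.65)
    print(f"S_ub={S_ub:.1f} kPa, S_uu={S_uu:.1f} kPa, S_uavg={S_uavg:.1f} kPa, S={S_stage3:.0f}")
```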
Table 5 Summary of parameters (Blackburn and Finno, 2007)

Soil properties
  Unit weight of soil                          γ_s = 19 kN/m³
  Average soil stiffness                       E_u = 3789 kPa
  Unit weight of water                         γ_w = 9·78 kN/m³
Structural properties
  Width of the excavation                      B = 36·8 m
  Length of the wall                           H = 14·8 m
  Stiffness of retaining wall                  EI = 58 000 kN m²/m
  Vertical strut spacing for stage 2           h_avg,stage2 = 1·5 m
  Vertical strut spacing for stage 3           h_avg,stage3 = 2·65 m
Prior knowledge
  Shear strength at the top of soil layer      S_u0 ~ N(40, 16), with the range [10, 120]
  Gradient of shear strength along depth       r_Su ~ N(1, 0·4), with the range [0, 10]
Prior and posterior distributions of parameters

The mean of the prior distribution was selected according to the approximate value by Blackburn and Finno (2007), and the range of the parameters was set as the widest variation from laboratory testing. A COV of 0·4 was assigned to the prior, which means there is moderate confidence in the mean value.

The posterior distribution after each stage is plotted in Figure 7, and its mean, variance and COV are shown in Table 6. The variance of the posterior decreases at each stage, implying that incorporating observations reduces the uncertainties in the parameters. The designer’s confidence in the posterior parameters should also increase after each stage as the values of COV decrease. The reduction in the variance is more significant for S_u0 than for r_Su, indicating that the former is a more sensitive parameter in this model than the latter. The modes of the posterior distributions of S_u0 and r_Su after the final stage are 47·904 kPa and 1·453 kPa/m, respectively, which then become the most probable values of the unknown parameters. Given these results, the most probable value of the average shear strength is 52·1 kPa for the Blodgett stratum and 62·3 kPa for the Deerfield stratum, both higher than the average value of shear strength derived from field testing (as shown in Figure 5). This result shows that the most probable values produced by Bayesian inference are less conservative than those estimated from in situ tests. These most probable values could then be used as a starting point for the design of another project with similar soil conditions and construction sequence. Furthermore, the posterior distribution can also be used to calculate the probability of failure of the design.
Table 6 Posterior statistics

S_u0: kPa
Stage   Mode     Mean     σ       COV
Prior   40·0     40·0     16·00   0·400
2       44·6     40·3     4·84    0·120
3       47·9     46·6     3·61    0·078

r_Su: kPa/m
Stage   Mode     Mean     σ       COV
Prior   1·000    1·000    0·400   0·400
2       0·709    1·008    0·380   0·377
3       1·453    1·470    0·316   0·215

Figure 7 Posterior distribution of unknown parameters

A credible interval is a range in which an unknown parameter value falls with a certain probability, such as 85%. The purple-shaded area plotted in Figure 8 is the 85% credible interval of the posterior shear strength at stage 3, and the blue-shaded area is the 95% interval. The green- and yellow-shaded areas are the 85 and 95% intervals of the initial prior. It can be seen that the range of the shear strength has significantly narrowed after taking into consideration the observations obtained during construction.


Figure 8 Updated shear strength profile with credible intervals
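Credible intervals of this kind are read directly from the posterior samples, for example as equal-tailed percentile intervals. A minimal sketch:

```python
import numpy as np

def credible_interval(samples, level=0.85):
    """Equal-tailed credible interval from posterior samples
    (e.g. the 85% interval spans the 7.5th to 92.5th percentiles)."""
    lower = 100.0 * (1.0 - level) / 2.0
    return np.percentile(samples, [lower, 100.0 - lower])

# usage: credible_interval(posterior_S_u0_samples, 0.85) -> array([low, high])
```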

Prediction of deformation

The estimate of deformation for later stages, obtained based on the initial prior and on the posterior after stage 2, is shown in Figure 9. The prediction for stage 3 improves after the observations from stage 2 are incorporated through Bayesian updating: the error in the prediction based on the initial prior and on the posterior after stage 2 is reduced from 4·30 to 2·00 mm (Table 7). Since the prior has a significant impact on the posterior distribution when the number of observations is very limited, the fit at stage 2 is still mostly controlled by the prior. The impact of a poorly chosen prior gradually fades away as more observations are obtained, so the goodness of fit at stage 3 is improved compared with that at stage 2.

figure parent remove

Figure 9 Estimates of deformation for later stages based on posterior after each stage

Table 7 Error of predictions for later stages and fit at current stage

Error: mm
          Initial prior   Stage 2   Stage 3
Stage 2   2·500           1·900     —
Stage 3   4·300           2·000     0·200

Since the most probable condition is defined vaguely in design codes and guidelines, statistical approaches are needed to quantify these parameters, but some confusion still persists on how to handle uncertainties in practice. The process of obtaining most probable parameters can be standardised using the Bayesian inference method. Obtaining most probable parameters in a probabilistic approach has the following advantages: (a) this set of parameters is designed to produce an unbiased estimate of the ground movement that is rigorously the most likely to occur in practice; (b) the randomness in the parameters is explicitly accounted for, and credible intervals can be drawn around mean values; (c) the posterior distribution of the parameters properly accounts for all sources of information, objective and subjective, through the likelihood functions and prior distributions.

The parameterised Clough and O’Rourke method proposed in this work can be used either during excavation construction for a rapid estimation of the maximum deflection for later stages or before construction based on the case histories collected in similar ground conditions. Although it is a simple empirical method, its prediction based on the updated parameters is sufficiently accurate for a rough assessment with a very limited amount of data. If there are more excavation stages and more observations in a staged excavation, the prediction is expected to be more accurate.

Lastly, it is worth emphasising that the framework of Bayesian inference for sequential back-analysis can be applied to all staged excavation projects. The Clough and O’Rourke method, as the deterministic function in the probabilistic model, can be replaced by any other method for retaining wall design. The procedure illustrated here can be applied to more complex numerical, constitutive and geometrical models at more closely spaced time intervals to update the quantities of interest while construction is actively progressing and assess the design iteratively within the context of the OM. However, sufficient computational capability and number of observations are required for Bayesian inference with a more complex deterministic function such as the finite-element method.

Acknowledgements

This work was supported by the UK Engineering and Physical Sciences Research Council grant EP/N021614/1, the Technology Strategy Board grant 920035 for the University of Cambridge Centre for Smart Infrastructure and Construction and a miniprojects award from the Centre for Digital Built Britain, under Innovate UK grant number 90066. Yingyan Jin was supported by the China Scholarship Council. Additional resources were provided by the Centre for Digital Built Britain.

References
