We present a novel framework for the probabilistic modelling of random fourth-order material tensor fields, with a focus on tensors that are physically symmetric and positive definite (SPD), of which the elasticity tensor is a prime example. Given the critical role that spatial symmetries and invariances play in determining material behaviour, it is essential to incorporate these aspects into the probabilistic description and modelling of material properties. In particular, we focus on spatial point symmetries or invariances under rotations, a classical subject in elasticity. Following this, we formulate a stochastic modelling framework using a Lie algebra representation via a memory-less transformation that respects the requirements of positive definiteness and invariance. With this, it is shown how to generate a random ensemble of elasticity tensors that allows an independent control of strength, eigenstrain, and orientation. The procedure also accommodates the requirement to prescribe specific spatial symmetries and invariances for each member of the whole ensemble, while ensuring that the mean or expected value of the ensemble conforms to a potentially ‘higher’ class of spatial invariance. Furthermore, it is important to highlight that the set of SPD tensors forms a differentiable manifold, which geometrically corresponds to an open cone within the ambient space of symmetric tensors. Thus, we explore the mathematical structure of the underlying sample space of such tensors and introduce a new distance measure or metric, called the ‘elasticity metric’, between the tensors. Finally, we model and visualize a one-dimensional spatial field of orthotropic Kelvin matrices using interpolation based on the elasticity metric.
The computational modelling of heterogeneous materials often has to resort to a statistical or probabilistic description, as the precise details are unknown or uncertain. The material properties are usually collected in tensor quantities, and these tensors are often of even order, and due to Onsager’s reciprocity relations [1] physically symmetric—in elasticity this is termed the major symmetry—and in addition, they are frequently positive definite as they are associated with, e.g., stored energy or entropy production [2]. This class of even-order physically symmetric and positive definite (SPD) tensors—and here especially the important class of fourth-order tensors and their associated symmetry classes—is the main focus in this contribution. The stochastic modelling procedure we shall describe is applicable to any such even-order physically symmetric and possibly additionally positive definite tensor, but we shall spell things out in detail for fourth-order SPD tensors, a prime example of which is the elasticity tensor, which will be taken here to stand for all such tensors.
Well-known examples of such SPD tensors are, e.g., thermal conductivity—a second-order tensor mapping temperature gradients to thermal fluxes—or, as already mentioned above, the elasticity tensor of a linearly elastic material as a fourth-order tensor, mapping strains to stresses. A more involved sixth-order example is the complete piezo-elasticity tensor, mapping the combination of strains and electric displacement to stress and electric field. Such a tensor of a coupled problem would be best tackled using the tensor-block matrices of Nikabadze [3], which would be a more or less straightforward extension of the concepts treated here.
The representation and stochastic modelling of second-order tensors, such as those describing diffusion phenomena, was shown in a precursor paper [4]. Here we want to follow the general lines of this approach based on the spectral decomposition and symmetry-group reduction, but for higher-order tensors a novel situation arises, namely that the symmetry-group reduction [5–7] can in general no longer fully diagonalise the tensor. We outline the general idea for SPD tensors of any even order but concentrate in detail on fourth-order tensors.
As the interest here is in the probabilistic modelling of such tensors, a simple way to realise this is to think of the tensor entries as random variables. But an important aspect of a material law described by such a tensor is its invariance under spatial symmetries [5–10]. Although the numerical values of the tensor entries may be uncertain resp. random variables, it is not uncommon that the class of spatial symmetries is known exactly, both for each individual realisation as well as for the mean. These invariances under spatial symmetries are a classical subject [11], and the different symmetry groups allow one to assign such material laws, resp. the tensors describing them, to certain classes, e.g., the well-known isotropic class or the orthotropic class of materials. Second-order tensors in 3D show, next to the just-mentioned invariance resp. symmetry classes of iso- or ortho-tropy, additionally only plane-isotropic behaviour [4,11]. Fourth-order spatial tensors, however, have a richer set of symmetry classes than second-order tensors [11–13], namely eight symmetry or invariance classes in 3D [14], and four in 2D [15], cf. also the general references [8–10].
Any even-order tensor may be viewed as a linear map when acting on the space of tensors of half the order by contracting over half the indices [16–18]. In the case of the fourth-order elasticity tensor, this space of tensors half the order is the space of strains, i.e., second-order symmetric tensors, as the elasticity tensor maps a strain tensor into the corresponding stress tensor. By choosing appropriate bases in the vector space of tensors of half the order [16], this linear map can be represented by a matrix; so the class of tensors we are interested in can thus be represented by SPD matrices. We shall switch to this matrix representation whenever it is convenient to do so. The spatial invariance resp. symmetry groups are then realised as linear transformation groups on the relevant space of half-order tensors, or, equivalently, on the space underlying the representing symmetric matrices just mentioned. These transformation groups then define invariant subspaces of the SPD matrix representing the tensor [5–7]. By transforming into a basis adapted to these invariant subspaces, the matrix takes on a block-diagonal form. This transformation to block-diagonal form is termed the symmetry resp. invariance group reduction or decomposition. Here we shall prefer the term ‘invariance’ of the tensor resp. matrix under the spatial group operations over the term ‘symmetry’, in order to avoid confusion with the physical symmetry due to Onsager’s reciprocity relations, which results in symmetric matrices.
Any SPD matrix or linear map can be spectrally decomposed with positive eigenvalues and has orthogonal eigenvectors. This spectral eigenvalue/-vector (EV) decomposition separates the size, strength, or stiffness information contained in the eigenvalues from the orientation information contained in the eigenvectors. For the sake of brevity and simplicity, in the following we shall speak about the eigenvalues in terms of stiffness or elastic moduli as they apply to the elasticity or Hooke tensor, but obviously this can easily be translated to any other physical property. The group reduction together with the spectral decomposition are the main building blocks of our approach. In the proposed representation—and in this we see one of the novelties of our approach—we want to keep the modelling of strength/stiffness, eigenstrain distribution, and spatial orientation separate, and thus be able to control them independently. So, finally, one ends up considering random SPD matrices, where certain invariances are known for the whole population, as well as for the mean. There is already extensive work on random matrices, drawing on several sources, which is mentioned only very briefly here [19–22].
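To make this separation concrete, the following minimal NumPy sketch (all numerical values are illustrative, not data from this work) assembles an SPD matrix from separately chosen eigenvalues (stiffness) and a random rotation (orientation), and then recovers both ingredients via the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stiffness information: a set of prescribed positive eigenvalues
eigvals = np.array([10.0, 5.0, 2.0])

# Orientation information: a random proper rotation (orthogonal, det = +1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

# SPD matrix assembled from the two separate ingredients
C = Q @ np.diag(eigvals) @ Q.T

# The spectral (EV) decomposition recovers both again
lam, V = np.linalg.eigh(C)            # eigh returns eigenvalues in ascending order
assert np.allclose(np.sort(eigvals), lam)
assert np.allclose(V @ np.diag(lam) @ V.T, C)
```

The point of the construction is exactly the independence stressed in the text: the eigenvalues and the rotation can be modelled (and randomised) separately without interfering with each other.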
The separation of stiffness, eigenstrains, and spatial orientation information is actually an old subject, as the characterisation of elasticity tensors through their eigenvalues resp. moduli was already started by Stokes (cf. Nikabadze [3], Love [23], Todhunter and Pearson [24]) and Lord Kelvin (W.K. Thomson) [25,26]. It was little appreciated at the time and then apparently almost forgotten, to be picked up again about a century later by Fedorov [27], and subsequently among others [2,3,18,28–43]. Kelvin originally used this to classify material symmetries; a historical account may be found in the studies by Helbig [44,45]. It is an early example of a tensor representation through an orthogonal tensor decomposition, and many other such representations and decompositions have been considered; a compilation and comparison of different orthogonal decompositions may be found in the literature [46–48]. Furthermore, topics of tensor representation are also treated in previous studies [11,49–51], and more information on tensor decompositions may be found in the literature [16,52–65]. The representation preferred by us, based on the spectral decomposition referred to above, offers the possibility of separately modelling stiffness, orientation, and possibly the eigenstrain, as well as separately controlling their individual probability distributions. Relative to existing models, this offers direct and fine-grained control of those properties without imposing additional restrictions. It additionally offers direct control of the symmetry resp. invariance properties to be discussed below, both of the ensemble mean and of each individual realisation. This feature distinguishes our approach from existing models. This and further distinctions are discussed in more detail by Shivanand et al. [4] and will not be repeated here.
It is well known that there are eight symmetry or invariance classes for symmetric fourth-order tensors in 3D; they may be found in the references just quoted, as well as in the reference works [8–10]. To sketch the connection of the spectral or EV-decomposition with the idea of invariance under symmetry groups, it was already mentioned that one considers the representation of the elasticity tensor as a linear map through a proper choice of a basis in the space of strains. In the case of the elasticity tensor, this is known as the Kelvin notation. The eigenvalues of the Kelvin matrix are traditionally called Kelvin moduli, and the corresponding eigenvectors Kelvin modes.
The invariance class of the material or elasticity tensor is identified by rotating the coordinate system of the physical space in such a way as to introduce as many zeros as possible in the tensor. The resulting pattern of zeros then turns out to be stable under a whole subgroup of rotations, which is the point invariance resp. symmetry group of the material resp. the tensor or the Kelvin matrix. This process of group reduction is an algorithm to find orthogonal invariant subspaces which reduce the matrix to block-diagonal form [5,6], thereby also identifying the invariance resp. symmetry group. For fourth-order tensors resp. Kelvin matrices this group is represented as a subgroup of the group of orthogonal transformations on strain-space [11,51,66,67]. It also determines the eigenstrain distribution in the relevant elasticity class [12,13,35,68,69].
The group-reduced block-diagonal form alluded to above is the representation usually chosen in the literature when the different elasticity classes are discussed [8–13]. The block-diagonal form of the matrix can be further reduced to diagonal form in each block with an orthogonal transformation to a direct sum of specific eigenstrain subspaces. This last orthogonal transformation in strain-space does not result from a spatial rotation but is purely in strain-space. This consideration of iso-spectral eigenstrain distributions is a crucial extension to higher-order tensors of the analogous development in Shivanand et al. [4] for second-order tensors.
The fourth-order elasticity tensor may thus be represented in a two-stage procedure by a combined group reduction and spectral decomposition, i.e., one may represent the elasticity tensor in the form of the Kelvin matrix by a positive diagonal matrix of eigenvalues resp. Kelvin moduli, an ‘eigenstrain distributor’ rotation within an invariant subspace of the block-diagonal form, and the spatially induced rotation for the symmetry group reduction. The task of representing random SPD tensor fields is thus reduced to representing these three components. It is worthwhile, and in our context essential, to observe that these three ingredients—spatially induced orthogonal transformations for the group reduction, general orthogonal transformations on strain-space to diagonalise each block, and diagonal SPD matrices of Kelvin moduli—are each members of matrix Lie groups.
In the modelling of random tensor fields, the most convenient and desirable approach is to find an unconstrained representation on a linear vector space. For the purpose of modelling or representing a random SPD tensor, a geometric point of view is helpful. In the vector space of physically symmetric tensors of a given even order, the positive definite ones (SPD tensors) form an open convex cone, i.e., sums of SPD tensors with positive coefficients are again SPD tensors. They thus form a differentiable manifold, but not a vector space. So the desire is to represent this cone on a vector space without any constraints. One such possibility is the matrix exponential and logarithm [4,70–72], which can be used in different guises, and will also be used here. The Lie groups just alluded to—diagonal SPD matrices and subgroups of orthogonal matrices—which we want to use in the representation, as they allow one to independently model different aspects of the SPD tensor, are geometrically speaking more complicated than the simple vector space of physically symmetric tensors. But the exponential and logarithm may be used in this situation as well—here coinciding with the Lie group exponential and logarithm—and furnish a representation on the associated Lie algebra of the corresponding group, which is geometrically the tangent space at the group identity; effectively giving a representation on a direct sum of linear spaces.
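The exponential/logarithm correspondence between the open cone and the unconstrained vector space of symmetric matrices can be sketched in a few lines; for symmetric matrices both maps reduce to an eigendecomposition (the matrix G below is an arbitrary illustrative choice):

```python
import numpy as np

def sym_expm(G):
    """Matrix exponential of a symmetric matrix via its eigendecomposition."""
    lam, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(lam)) @ V.T

def sym_logm(C):
    """Matrix logarithm of an SPD matrix; inverse of sym_expm on the cone."""
    lam, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(lam)) @ V.T

# Any symmetric matrix G is an admissible, unconstrained 'coordinate'
G = np.array([[0.5, 0.2, 0.0],
              [0.2, -1.0, 0.3],
              [0.0, 0.3, 2.0]])

C = sym_expm(G)                        # lands in the open cone of SPD matrices
assert np.all(np.linalg.eigvalsh(C) > 0)
assert np.allclose(sym_logm(C), G)     # the logarithm maps back
```

Note that G carries no positivity constraint at all; positive definiteness of C is automatic, which is precisely why the logarithm furnishes an unconstrained representation.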
In this context, it is worthwhile to note that the nature of the set of SPD tensors—a differentiable manifold which is geometrically an open cone in the ambient space of symmetric tensors—suggests that the calculation of averages, means, or expected values inherited from this ambient space based on a Euclidean metric is not the natural one to use. It has actually been observed that using the mean based on the Euclidean metric has some undesirable properties [73–82], in particular the so-called swelling, fattening, and shrinking effects [20,70–72,82,83].
To wit, when finding the midpoint (average) of, or interpolating between, two points on the earth’s surface, one will usually not mean the midpoint of a straight connecting line (the average in the ambient Euclidean space), but the intrinsic midpoint halfway along a great circle. This intrinsic kind of average or mean using a non-Euclidean metric is called a Fréchet mean, and will be briefly recapped in Appendix 1, recalling the connection between metric and averaging. From a metric, one may define variationally the Fréchet or Karcher mean—a generalisation of the arithmetic resp. Euclidean-distance-based mean, see Appendix 1—to more general metric spaces [84]. This is an area of active research [82,85–89], and the ‘best’ metric and mean seem to be application dependent.
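The swelling effect, and the way a logarithm-based intrinsic mean avoids it, can be demonstrated numerically. The two diagonal SPD matrices below are a standard illustrative choice (not data from this work); both have unit determinant, yet their Euclidean average does not:

```python
import numpy as np

def sym_expm(G):
    lam, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(lam)) @ V.T

def sym_logm(C):
    lam, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(lam)) @ V.T

A = np.diag([10.0, 0.1])     # SPD, det(A) = 1
B = np.diag([0.1, 10.0])     # SPD, det(B) = 1

euclid_mean = 0.5 * (A + B)
log_mean = sym_expm(0.5 * (sym_logm(A) + sym_logm(B)))   # log-Euclidean mean

print(np.linalg.det(euclid_mean))   # 25.5025 -- larger than either input: 'swelling'
print(np.linalg.det(log_mean))      # approx. 1.0 -- determinant is preserved
```

The arithmetic mean of two unit-determinant matrices here has determinant 25.5, an artificially ‘swollen’ result, whereas the mean computed through the logarithm stays at determinant one.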
The effect of using a mean which is not adapted to the manifold can have various consequences. When modelling a non-isotropic material, this may mean a loss of anisotropy for the mean, which may be undesirable in some situations. In the above literature—particularly that coming from the medical field of diffusion tensor imaging, where these undesirable effects are not acceptable, but also in the estimation of covariance matrices—there is discussion on what metric to use [60,71,74–76,79,82,88–95]. For elasticity tensors, this topic of a metric has also attracted some attention [4,12,81,96–101], e.g., in finding the closest elasticity tensor of some invariance class to measurement data.
We formulate some desiderata for a mean resp. a metric for the case of elasticity. These will be connected to the proposed representation on a direct sum of Lie algebras: choosing inner products on the Lie algebras defines a metric, which is transported via the exponential map to the Lie group and hence gives rise to a Riemannian structure on the group. This structure can then be used to define a metric—which we suggest to call the ‘elasticity metric’—on the cone of positive definite tensors, satisfying the proposed desiderata; the details of this are to be found in Appendix 1. Based on this distance, one may then formulate the corresponding Fréchet mean, which may be called the ‘elasticity mean’.
To generate random SPD matrices, a reduced non-parametric approach taking account of positive definiteness was presented by Soize [21,102]. Here the algebraic property that positive elements in the algebra are squares of other elements in the algebra [103,104] is used to ascertain positive definiteness. This then forms the basis for the generation of random elasticity tensors with specified invariance in the mean and fully anisotropic invariance in each realisation, resp. controlled elasticity class invariance with constant spatial orientation of the symmetry axes in each realisation, which is shown in a series of papers [105–113].
In the area of modelling of random tensor fields with given invariance properties, the culmination of a long series of works [51,66,67,114,115] is reported in Malyarenko and Ostoja-Starzewski [11], specifically looking at stochastically homogeneous and isotropic random tensor fields of any order in a three-dimensional domain. The spatial modelling uses the spectral theorem for the covariance function to generate a spectral or Karhunen–Loève-like representation (e.g. Matthies [116]) of the random tensor field with the desired invariance properties. No particular attention is paid to positive definiteness, as the methods used are purely vector-space-like, such as linear combinations or series and integrals. The danger here is that although the limiting expression may well represent a positive definite tensor, in practical computational situations these series and integrals have to be truncated resp. numerically approximated, and this step may lead to an unacceptable numerical loss of positive definiteness. In some associated numerical examples of anti-plane shear [117], positive definiteness everywhere is obtained by using a deterministic SPD base tensor and adding a random rank-one tensor—a dyadic or tensor product of identical vectors and thus automatically positive—and controlling the random coefficient so that the sum stays SPD. Here we shall also use a Karhunen–Loève-like field model to represent the parameters in the Lie algebras mentioned above, but by modelling the logarithm of the tensor one can ensure positive definiteness everywhere via the exponential map for each realisation at each point in space.
Ensuring positive definiteness everywhere through the use of the exponential function, resp. the logarithm in the opposite direction, is an often-used approach; it is typically based on the spectral or EV-decomposition. As the spectral decomposition may be written as a sum of dyadic or tensor products of eigenvectors, this furnishes a connection to the above-mentioned approach in Zhang et al. [117]. The exponential function is employed in previous studies [4,70–72,83,88,109,111], and it may be combined with the squaring approach [109,111,118] to ensure compliance with a particular invariance class. This log-modelling is quite common when measuring diffusion and conductivity tensors, i.e., focusing the description on the logarithm of the tensor. This takes advantage of the fact that the invariance classes are the same for a tensor and its exponential resp. logarithm. Also in this work, the exponential function is used in an essential way to ensure positive definiteness numerically in each realisation. The exponential function may even be used to model the rotations, essentially parametrising the angles [118]. Here we follow [4] and consequently model the eigenvalues and the rotations via the exponential function, mapping from a Lie algebra to the corresponding Lie group.
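A minimal sketch of this log-modelling idea: exponentiating an unconstrained random symmetric matrix yields an SPD matrix in every single realisation, with no truncation-induced loss of definiteness possible. The Gaussian model for the logarithm used here is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_expm(G):
    lam, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(lam)) @ V.T

def random_spd(n):
    """One realisation: exponential of a random symmetric (Gaussian) matrix."""
    X = rng.standard_normal((n, n))
    G = 0.5 * (X + X.T)       # symmetrise -> unconstrained 'log' of the tensor
    return sym_expm(G)

# Every realisation is SPD by construction, without any constraint on G
for _ in range(100):
    C = random_spd(6)
    assert np.all(np.linalg.eigvalsh(C) > 0)
```

This is the essential contrast to truncated series representations of the tensor itself, where positivity of each realisation cannot be guaranteed.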
The structure of the paper is outlined as follows: Section 2 collects essential preliminaries, including the notation for even-order tensors in Section 2.1 and Kelvin’s matrix representation of Hooke’s law in Section 2.2, with further details provided in Appendix 3. Section 2.3 discusses the transformation of the Kelvin matrix under rotations and briefly addresses how spatial rotations are modelled in the sequel and how they influence the strain-space. Additional background on rotations can be found in Appendix 2. This section also introduces the reduction of the Kelvin matrix to block diagonal form, its spectral decomposition, and the formulation of group invariance in elasticity. In order to make this work more easily accessible, the group theoretic material will be kept to a minimum. The representation and modelling of fourth-order SPD tensor fields, based on fields of parameters, is detailed in Section 2.4.
The main part of the paper is Section 3, where Section 3.1 covers the Lie representation of the Kelvin matrix, and Section 3.2 introduces the concept of a Fréchet mean for SPD tensors. This uses the proposed ‘elasticity metric’ which is described in Appendix 1. In Section 3.3, we formulate our requirements for a representation of SPD tensors, as well as some general thoughts and desiderata on how to average SPD tensors based on Fréchet means. Next, in Section 3.4, we introduce the framework for generating random ensembles of Kelvin matrices at a spatial point, followed by a description of spatial variation. The focus here is on modelling randomness through the separation of stiffness, eigenstrains, and spatial orientation. Section 4 explores Kelvin’s spectral decomposition of Hooke’s law in detail, with the relationship to 2D and 3D elasticity classes covered in Section 4.1 and Section 4.2, respectively. Finally, the paper concludes in Section 5.
2. Preliminaries
We consider physical space as given by $\mathbb{R}^d$, the physically interesting instances being the 2D situation with $d=2$, and the full 3D case with $d=3$. For the sake of simplicity, we shall consider only Cartesian coordinate systems and orthonormal bases, so we do not distinguish co- and contra-variant indices.
2.1. Even order tensors as linear maps
Vectors in like the position vector are denoted as , whereas second-order tensors (matrices) are denoted as , which will be identified with linear maps on via the matrix-vector product by (Einstein’s summation convention used).
Tensors of higher order than two will be denoted as for a tensor of order q. Considering a tensor of even order, say : —where for ease of notation we have used a multi-index — it can naturally be viewed as a linear map on the space of tensors of half the order, by contracting the tensor product of and over the last q indices, thereby implicitly using the canonical Euclidean inner product to be introduced below:
where the last sum is particularly suggestive, as it is a matrix-vector multiplication with multi-indices. By choosing bases in , or, equivalently, by ordering the multi-indices in some way, one may easily construct a matrix representation for the map . For the elasticity tensor, this will be done in Section 2.2 by using the so-called Kelvin notation.
2.1.0.1. Inner product and symmetry
The canonical Euclidean inner product for is denoted by . This induces an inner product for general tensors , namely . If the map in equation (1) induced by a tensor of twice the order is symmetric w.r.t. this inner product, i.e., if
then the tensor is physically symmetric (here ). And if in addition
the map and the tensor are SPD.
It is not difficult to see that SPD maps resp. matrices form an open convex cone, denoted by , in the subspace of symmetric linear maps resp. matrices. This is actually a differentiable manifold [75,96,119], and its tangent space at the identity is the ambient space of symmetric maps, in this context sometimes denoted by . If the underlying vector space is just some copy of , one often writes and instead of and . The class of even-order SPD tensors is the one of interest here, and the elasticity tensor to be introduced below is in this sense symmetric, and also usually positive definite, i.e., SPD, see also Moakher [16].
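The cone property just stated is easy to verify numerically. The following sketch (with arbitrary illustrative matrices) checks that combinations of SPD matrices with positive coefficients stay SPD, while a negative multiple immediately leaves the cone, showing that the cone is not a vector space:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)    # SPD by construction

A, B = random_spd(4), random_spd(4)

# positive combinations remain inside the cone
for _ in range(100):
    a, b = rng.uniform(0.01, 10.0, size=2)   # arbitrary positive coefficients
    assert np.all(np.linalg.eigvalsh(a * A + b * B) > 0)

# but a negative multiple leaves the cone: not a vector space
assert np.any(np.linalg.eigvalsh(-A) < 0)
```

Openness of the cone corresponds to the fact that a sufficiently small symmetric perturbation of an SPD matrix is still SPD.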
2.1.0.2. Behaviour under rotation and invariance
A rotation $Q \in \mathrm{O}(d)$ ($\mathrm{O}(d)$ denotes the orthogonal group, see Appendix 2) of the coordinate system in $\mathbb{R}^d$ induces a corresponding transformation of a higher-order tensor $\mathbf{A}$; this is sometimes termed the Rayleigh product [43]: $(Q \star \mathbf{A})_{i_1 \dots i_q} = Q_{i_1 j_1} \cdots Q_{i_q j_q}\, A_{j_1 \dots j_q}$. (4)
In case an even-order tensor $\mathbf{C}$ does not change under a rotation $Q$, i.e., in case $Q \star \mathbf{C} = \mathbf{C}$, (5)
one says that it is invariant under the rotation Q, or more formally, invariant under the subgroup generated by $Q$. The maximal subgroup under which such a tensor is invariant is termed the symmetry class of the tensor. It is obvious that any even-order tensor is invariant under the inversion $-I$, i.e., under the subgroup $\{\pm I\}$, so that the symmetry class always contains $\{\pm I\}$.
2.2. Hooke’s law in Kelvin’s matrix notation
Turning now to the case of linear elasticity, the fourth-order elasticity tensor $\mathbf{C}$ acts on the symmetric infinitesimal strain ε to yield the Cauchy stress tensor σ: $\sigma = \mathbf{C} : \varepsilon$, (6)
see also ‘Kelvin notation’ in Appendix 3 for details on the notation. Equation (6) is Hooke’s generalised law. Both stress and strain tensors lie in the space of symmetric second-order tensors, i.e., , and ; whereas the fourth-order elasticity tensor may be seen as an SPD map on .
Choosing a particular basis in , one may write Hooke’s law in matrix-vector notation, where the coefficient array of that basis is in the representation space $\mathbb{R}^\nu$, with $\nu = d(d+1)/2$, thereby establishing a linear one-to-one correspondence (see ’Kelvin notation’ in Appendix 3), which in 3D ($\nu = 6$) reads as $\hat{\varepsilon} = (\varepsilon_{11}, \varepsilon_{22}, \varepsilon_{33}, \sqrt{2}\,\varepsilon_{23}, \sqrt{2}\,\varepsilon_{13}, \sqrt{2}\,\varepsilon_{12})^{\mathsf{T}}$ (7) and $\hat{\sigma} = (\sigma_{11}, \sigma_{22}, \sigma_{33}, \sqrt{2}\,\sigma_{23}, \sqrt{2}\,\sigma_{13}, \sqrt{2}\,\sigma_{12})^{\mathsf{T}}$. (8)
This notation goes back to Kelvin [25,26], see also Moakher [16], Mehrabadi and Cowin [32], Helbig [44,45]. In 2D, one has $\hat{\varepsilon} = (\varepsilon_{11}, \varepsilon_{22}, \sqrt{2}\,\varepsilon_{12})^{\mathsf{T}}$ (9) and $\hat{\sigma} = (\sigma_{11}, \sigma_{22}, \sqrt{2}\,\sigma_{12})^{\mathsf{T}}$. (10)
By a slight abuse of notation, the map is again denoted the same as in 3D.
The linear relation in Hooke’s law, equation (6), then becomes $\hat{\sigma} = C\,\hat{\varepsilon}$,
with an SPD Kelvin matrix , thereby establishing a linear map , . In terms of the components of , in 3D the Kelvin matrix C and thus the map is given by
while in 2D one obtains
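As a small numerical illustration of the 3D Kelvin matrix, consider the standard isotropic material with illustrative Lamé constants, using the conventional Kelvin ordering with √2-scaled shear components (the specific values are assumptions for the example, not data from this work):

```python
import numpy as np

lam_, mu = 2.0, 1.0   # illustrative Lamé constants

# 3D Kelvin matrix of an isotropic material: the upper 3x3 block couples
# the normal strains, the lower diagonal block acts on the sqrt(2)-scaled
# shear strains (which is what makes the representation unitary).
C = np.zeros((6, 6))
C[:3, :3] = lam_
C[:3, :3] += 2 * mu * np.eye(3)
C[3:, 3:] = 2 * mu * np.eye(3)

# Kelvin moduli: one 'bulk' eigenvalue 3*lambda + 2*mu and a
# five-fold 'shear' eigenvalue 2*mu
moduli = np.linalg.eigvalsh(C)
assert np.isclose(moduli.max(), 3 * lam_ + 2 * mu)
assert np.sum(np.isclose(moduli, 2 * mu)) == 5
```

The two distinct Kelvin moduli of the isotropic class, a single bulk-type modulus and a five-fold shear modulus, fall out directly from the eigenvalue computation.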
2.3. Group reduction and local spectral decomposition
The elasticity class [12,13,35,68,69] of the material or the elasticity tensor [11,67] is determined by rotating the coordinate system in such a way as to reduce the number of constants in the fourth-order elasticity tensor by introducing as many zeros as possible. The tensor in this form then turns out to be invariant under a whole subgroup of rotations, which is called the symmetry or invariance group of the material resp. elasticity tensor. For simplicity and without loss of generality, and as more is not needed in our modelling approach, we restrict ourselves here to proper rotations, i.e., the subgroup of special orthogonal matrices, i.e., orthogonal matrices Q with $\det Q = +1$. The symmetry or invariance group of a tensor will therefore in the following only be considered as a subgroup of $\mathrm{SO}(d)$.
2.3.1. Transformations of the Kelvin matrix
As just mentioned, a rotation of the coordinate system in induces a transformation of the stress and strain tensors [32,120] and on the fourth-order elasticity tensor , as described for the general case in equation (4).
In this context, it is important to note that the linear map defined in the preceding Section 2.2 in equations (7) and (8) in 3D, and in equations (9) and (10) in 2D, is actually unitary, i.e., it preserves inner products; in fact, this is precisely the advantage of the Kelvin notation over the better-known Voigt notation. This also means that for a rotation in physical space, which one may recall preserves the inner product in , there is a one-to-one correspondence [32,120] to a rotation , which in turn preserves the inner product in . Concretely, this means that when and , and
as well as
It is not too difficult to see that is actually an injective group homomorphism. Thus the map is a unitary representation of on , see, e.g., Fässler and Stiefel [6], Jones [7], Klapper and Hahn [121] — and the precise definition of may be found in ‘Rotations in ’ in Appendix 2. Hence the image of under , which will be denoted by , is a Lie subgroup of .
As is well known, e.g., see ‘Rotations in ’ in Appendix 2, the Lie algebra of a Lie group is the tangent vector space at the identity, and it has usually the same notation as the Lie group, but with small Fraktur letters; i.e., is the Lie algebra of . The Lie algebra of is denoted by . The dimension of , which is the dimension of its Lie algebra , is due to the injectivity of that of resp. , i.e., . The injective group homomorphism induces an injective linear map on the corresponding Lie algebras , such that , details of which may also be found in ‘Rotations in ’ in Appendix 2.
This implies that for a coordinate transform , which transforms the elasticity tensor according to equation (4) as , and whose induced transformation on is , the effect on the Kelvin matrix is given by [32,34–36]
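The induced strain-space rotation can be constructed numerically from its action on an orthonormal Kelvin basis of symmetric matrices. The sketch below (the component ordering is one common convention, assumed for illustration) verifies that the induced 6×6 matrix is orthogonal and that the map is a group homomorphism:

```python
import numpy as np

rng = np.random.default_rng(3)

def kelvin_basis():
    """Orthonormal basis of symmetric 3x3 matrices, Kelvin-ordered."""
    E = np.zeros((6, 3, 3))
    for k in range(3):
        E[k, k, k] = 1.0                                  # e_k (x) e_k
    for k, (i, j) in enumerate([(1, 2), (0, 2), (0, 1)]):
        E[3 + k, i, j] = E[3 + k, j, i] = 1.0 / np.sqrt(2)
    return E

def random_rotation():
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q              # force det = +1

def induced_rotation(Q):
    """6x6 matrix representing eps -> Q @ eps @ Q.T in the Kelvin basis."""
    E = kelvin_basis()
    return np.array([[np.sum(Ek * (Q @ El @ Q.T)) for El in E] for Ek in E])

Q1, Q2 = random_rotation(), random_rotation()
T1, T2 = induced_rotation(Q1), induced_rotation(Q2)

assert np.allclose(T1 @ T1.T, np.eye(6))                  # unitary on strain-space
assert np.allclose(induced_rotation(Q1 @ Q2), T1 @ T2)    # group homomorphism
```

Orthogonality of the induced matrix is exactly the unitarity of the Kelvin representation mentioned above; it fails if the √2 normalisation of the shear components is omitted, as in the Voigt notation.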
2.3.2. Group reduction
After this preparation one may address the group reduction, [6,7] and [11–13,35,67–69]. As already indicated, one seeks a rotation such that has as many zeros as possible. For the corresponding Kelvin matrix this means (cf. equation (16)) that
has block diagonal form with diagonal blocks , [6]. This is called the group reduction of , and the blocks on the diagonal indicate invariant subspaces of . The non-diagonal minimum non-zero pattern of is characteristic for the elasticity class, and invariant under the rotations with . In fact, the elasticity classes are commonly discussed in this group reduced form [11–13,35,67–69]. When considering random choices resp. a stochastic model for C, it may be observed that the mean or expected value may be invariant under a larger group for resp. for [11], i.e., and . Some possible choices for a mean will be investigated in Sections 3.2 and 3.3.
Observe that a completely diagonal form with a diagonal would mean that equation (17) is an eigenvalue or spectral decomposition. But it is usually not possible to achieve a completely diagonal form just with rotations , see also Kowalczyk–Gajewska and Ostrowska–Maciejewska [41], Itin and Hehl [122], the general reason for which will be touched on now.
As already remarked in ‘Rotations in ’ in Appendix 2, the dimension of is for smaller than that of ——which implies that the dimension of the Lie group , is also smaller than that of the Lie group , i.e., . This in turn means that is a proper subgroup of the full Lie group of proper rotations on , i.e., . A similar relation obviously holds for its Lie algebra, and .
2.3.3. Spectral decomposition
The group reduction just discussed commonly produces a block-diagonal matrix . In order for in equation (17) to completely diagonalise in general, Q would need access to the whole group , but one is only allowed resulting from purely spatial rotations with , which as just explained is only a subgroup. Still, equation (17) is a good start, and one can now proceed a step further by noting that is symmetric and thus has a spectral decomposition
with and a diagonal matrix of eigenvalues of . As is block-diagonal, a similar structure will appear in ; in fact, equation (18) comprises separate eigenproblems, one for each block from the group decomposition equation (17), see, e.g., Fässler and Stiefel [6].
As is SPD, so is its rotated form , and hence is diagonally positive; such matrices are denoted by . is obviously also the matrix of eigenvalues of , and equating equation (17) with equation (18), one obtains
where and . But is a group, and , so that , and hence equation (19) is a spectral or eigenvalue decomposition of C; this equation will later form the basis for its representation.
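The spectral decomposition just described can be sketched numerically. The following is a minimal illustration (with a small random SPD stand-in for a 6×6 Kelvin matrix, an assumption for brevity) of how positive definiteness shows up in the spectrum and how the eigendecomposition reconstructs the matrix:

```python
import numpy as np

def spectral_decomposition(C):
    """Spectral (eigenvalue) decomposition C = W diag(lam) W^T of a symmetric
    matrix, as in equation (19); all lam > 0 exactly when C is SPD."""
    lam, W = np.linalg.eigh(C)   # eigh: ascending eigenvalues, orthonormal columns
    return W, lam

# a small SPD test matrix (hypothetical stand-in for a Kelvin matrix)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
C = A @ A.T + 3.0 * np.eye(3)            # SPD by construction
W, lam = spectral_decomposition(C)
assert np.all(lam > 0)                   # positive definiteness in the spectrum
assert np.allclose(W @ np.diag(lam) @ W.T, C)   # reconstruction, eq. (19)
```

For a block-diagonal group-reduced matrix, `eigh` applied blockwise would give the same spectrum, mirroring the separate per-block eigenproblems of equation (18).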
2.3.4. Group invariance
Coming back to equation (17), and recollecting equation (5), invariance under some symmetry group for the reduced elasticity tensor means that
For the Kelvin matrix in the group reduced block diagonal form with and its associated , from equation (16) this translates to , and hence [6]
where the mapped symmetry or invariance subgroup was designated as . What equation (21) says is that and all commute, i.e., they are co-axial and can all be simultaneously diagonalised with , see also equation (18).
It may be noted that equation (21) is homogeneous and linear in . As , the equations (21) determine a subspace in the vector space of all symmetric matrices. As already mentioned, is an open convex cone in ; and thus the Kelvin matrices which are invariant under , and hence belong to the same elasticity class, lie on the intersection of said subspace of and the cone , i.e., they all lie in the cone .
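The equivalence of invariance and commutation in equation (21) is easy to check numerically. Below is a small sketch (with a hypothetical 3×3 reduced matrix whose repeated eigenvalue makes the 1-2 plane an invariant subspace, chosen purely for illustration):

```python
import numpy as np

# A matrix with a repeated eigenvalue in the 1-2 block: any rotation in that
# invariant plane leaves it unchanged (invariance), which by eq. (21) is the
# same as saying the rotation and the matrix commute.
D = np.diag([1.0, 1.0, 2.0])
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])   # rotation in the 1-2 plane
assert np.allclose(R @ D @ R.T, D)   # invariance: R D R^T = D
assert np.allclose(R @ D, D @ R)     # equivalently: R and D commute
```

Since all such commuting matrices are co-axial, they can indeed be simultaneously diagonalised, as stated after equation (21).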
2.4. Random tensor fields
Let be a bounded domain in a d-dimensional Euclidean space (here or ) where we want to model or generate a field of elasticity tensors . Rather than the elasticity tensor, here we equivalently want to model the corresponding Kelvin matrix . As the exact value of at a location may be uncertain, either due to a highly heterogeneous material, or simply due to a lack of knowledge about the exact value, we assume that a probabilistic model is adopted, so that the Kelvin matrix field is modelled as a second-order (in the probabilistic sense of having first and second moments) tensor-valued random field, i.e., a measurable mapping
on a probability space , where represents the sample space containing all outcomes , is the σ-algebra of measurable subsets of , ℙ a probability measure, and the corresponding expectation operator is denoted by . To avoid unnecessary clutter in the notation, the argument will be omitted, unless it is necessary for understanding. The same will often be true of the argument .
2.4.1. Mean field and covariance function
The first and second-order probabilistic descriptors of such a field would be the mean field, which allows one to split the field into the deterministic mean and the zero mean random part , and the covariance function
The material symmetry-group properties are now encoded in the mean and covariance function equations (23–25), see, e.g., Malyarenko and Ostoja-Starzewski [11]. Here one may see another advantage of numerically working with the Kelvin matrix—a second-order tensor—which results in a covariance that is a fourth-order tensor-valued function on , although with some symmetries. If one were to form the analogue of equation (25) for the elasticity tensor , i.e.,
this would be an eighth-order tensor on , although with many symmetries, i.e., a much larger object.
In both cases, this large number of covariance component functions simultaneously encodes the spatial correlations and the symmetry-group invariances. Here we shall follow a different route, and, as will become clear in the following Section 3, encode the symmetry-group properties of the random Kelvin matrix in a field of n real parameters and a map . As will be seen later, the number of parameters is at most .
The first- and second-order probabilistic descriptors of these parameters then are
and they then only carry the information about the spatial and probabilistic distribution. The maximum number of covariance component functions in equation (29) is thus , but again with the normal symmetry of a symmetric matrix, and thus only . This is actually the number of independent components in both equations (25) and (26). In addition, and more importantly, as will be shown in the following Section 3, these parameters allow one to separate the information about spatial orientation, stiffness or moduli, and eigenstrain distribution through the Lie product representation, cf. Section 3.1.
2.4.2. Spatial modelling of parameter fields
The parameters will be described in general terms in Section 3.4, and then in detail for each elasticity class in Section 4. To arrive at a spatial model for the random parameter field , one usually aims at a separated or tensor product representation in a sum or integral of , e.g., [116].
Such separated representations actually follow from fairly general methods to analyse and represent parametric objects like the fields in equation (30), where the random event ω may be regarded as the parameter, resulting in tensor-product like expressions [123,124]. Here resp. are usually zero mean unit variance random variables, resp. are possibly normalised purely spatial functions, and resp. are some scaling coefficients.
Aiming for example at a Karhunen-Loève expansion of the spatial field [116], one has to solve the Fredholm integral eigenproblem with the covariance function from equation (29) as integral kernel
for positive eigenvalues and eigenvector fields and . Here it was tacitly assumed that the integral operator in equation (31) is self-adjoint and positive and has a discrete spectrum, which would usually be the case, e.g., with a bounded domain and a continuous .
It should be noted that spatial stochastic homogeneity would be evident as usual from the fact that would be a function of only the difference of the arguments, , say . In that case, the eigenproblem equation (31) is a convolution equation, and it is well known how to solve those by Fourier methods, as indeed the Fourier functions are (possibly generalised) eigenfunctions of the integral operator with kernel represented by equation (31). Indeed, with the eigenfunctions known, the eigenvalues are obtained by a Fourier transform—continuous or in the form of a series, depending on the integral operator equation (31) and the domain —of the matrix function above. This Fourier transform , where k is the vector of wave numbers resp. spatial frequencies dual to h, is the SPD spectral matrix. Locally diagonalised for each k, it then displays the eigenvalues of the Karhunen-Loève eigenproblem equation (31). This is actually the well-known spectral description of stochastically homogeneous random fields. Further information may be found in the study by Matthies [116] and in the literature cited therein; see also the remarks in Section 3.4. In the general—non-stochastically homogeneous—case, the Karhunen-Loève eigenproblem equation (31) has to be solved numerically [125,126].
In any case, the orthonormal eigenfunctions determine uncorrelated mean zero unit variance random variables through the projection of , setting , onto the spatial eigenvector basis :
With this one has all the ingredients for the spatial representation in equation (30), which is in this case called a Karhunen-Loève expansion, equivalent to a singular value decomposition of the random field viewed as an operator kernel [116].
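A discrete version of this construction is straightforward: on a grid, the Fredholm eigenproblem (31) becomes a matrix eigenproblem for the (quadrature-weighted) covariance kernel. The following sketch uses a hypothetical scalar field on [0,1] with an exponential covariance, purely as an illustrative stand-in for the parameter covariance in equation (29):

```python
import numpy as np

# Discrete Karhunen-Loeve expansion: solve the eigenproblem for the covariance
# kernel c(x,y) = s^2 exp(-|x-y|/l) sampled on a 1D grid (illustrative choice).
n, s2, ell = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]                                   # quadrature weight
Cov = s2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
lam, phi = np.linalg.eigh(Cov * h)                # discrete eigenproblem (31)
idx = np.argsort(lam)[::-1]                       # eigenvalues descending
lam, phi = lam[idx], phi[:, idx]

m = 20                                            # truncation order
rng = np.random.default_rng(1)
xi = rng.standard_normal(m)                       # independent standard Gaussians
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)      # one realisation, eq. (30)
assert np.all(lam[:m] > 0)
```

The truncated sum is the Karhunen-Loève expansion of equation (30); for a Gaussian model the `xi` are independent, as noted in the text below.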
As may be noted immediately, for Gaussian random fields all the uncorrelated random variables in equation (32) are also Gaussian, and thus they are also independent. For modelling purposes, this is what is usually desired: a model in independent random variables. As the parameter vector field effectively models the log of the SPD tensor field, this partly explains the allure and predominance of log-normal models for such SPD tensors [83]. For cases where the log-parameters are not normal, the random variables are merely uncorrelated, but not independent. In this case, it is possible to express these random variables in Norbert Wiener's polynomial chaos expansion, see [116,125,126] and the references therein. Thus one may represent by a series of multi-variate polynomials — is typically a multi-index—in independent mean zero unit variance Gaussian random variables as
with some coefficients . This then yields the field in equation (30) as a function of independent standard Gaussian variables :
where the expression may be seen as the polynomial chaos expansion coefficient.
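A minimal scalar instance of such a chaos expansion can be written down explicitly. For a lognormal variable X = exp(ξ) with ξ standard Gaussian, the coefficients in the probabilists' Hermite basis He_k are known in closed form, c_k = e^{1/2}/k! (a textbook example, not tied to a particular parameter of the present model):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# One-dimensional polynomial chaos sketch of eq. (33): expand the lognormal
# X = exp(xi) in probabilists' Hermite polynomials He_k of a standard Gaussian.
K = 15
c = np.array([np.exp(0.5) / factorial(k) for k in range(K)])
xi = np.linspace(-3.0, 3.0, 101)
X_pce = hermeval(xi, c)                 # evaluate the truncated chaos series
assert np.allclose(X_pce, np.exp(xi), atol=1e-4)
assert np.isclose(c[0], np.exp(0.5))    # zeroth coefficient = mean E[exp(xi)]
```

The zeroth coefficient being the mean generalises: in equation (34) the low-order coefficients carry the mean and covariance information of the field.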
The foregoing covers the modelling aspect. To perform the reverse task, the inverse problem of conditioning [127–129], one has to change the descriptions in equations (30), (33), and (34), e.g., spatial functions and the random variables such that has the proper statistics at each .
This can range from finding the SPD tensor closest to measurements [97,99,130] to conditioning the random SPD tensor field [111,131–136]. One may note that some of the filtering methods—e.g. the Gauss–Markov–Kalman filter—for Bayesian conditioning work directly on the polynomial chaos expansion coefficients in equation (34) [132–135,137–139].
3. Modelling of elasticity tensors
The main idea from the preceding Section 2.3 for the modelling approach to be discussed here is contained in the equations (17), (18) and (19), which are collected here for easy reference:
Here is the group reduced form of the Kelvin matrix , and with , and hence as well, and .
3.1. Lie representation
Thus, given in the form of , which means in the form of an eigenstrain distribution V and eigenvalues , plus an orientation in space which describes the rotation from C to , one may form and and then generate C:
This is a mapping from to , resp. from to
3.1.1. Lie group representation of the Kelvin matrix
The representation equation (36) will be called the Lie group representation of the Kelvin matrix , as resp. and are elements of matrix Lie groups. Also the diagonal SPD matrices form a commutative or Abelian Lie group under matrix multiplication, and hence is also from a Lie group. The Lie representation in equation (36) is thus one on a product of Lie groups [4,70–72,83], which is again a Lie group:
It should be noted here that the representation in equation (36) is not unique. What is unique is the spectrum of , i.e., the set of diagonal elements of . They could be re-ordered, inducing a re-ordering of the columns of . This could be avoided by insisting on an ordering, say by size, of the diagonal elements of . But even more significantly, when degenerate or multiple eigenvalues occur, as is always the case for the ‘higher’ elasticity classes (cf. Section 4), those eigenvalues are associated with invariant subspaces of dimension two or higher. This makes the quest for an orthonormal eigenvector basis of such an invariant subspace—the associated columns of —a matter of arbitrary choice.
As further detailed in ‘Elasticity distance’ in Appendix 1, for a general Lie group G, its Lie algebra , the tangent space at the identity, plays an important role [140–143]. Specifically, it is possible to use the Lie exponential map—for matrix Lie groups and algebras this is the usual matrix exponential—to represent elements of the group G on the free vector space ; the inverse of this map is the logarithm. Here we want to parametrise the product Lie group equation (37) in this way via its Lie algebra.
3.1.2. Exponential Lie algebra representation of the Kelvin matrix
The exponential and the logarithm allow one to shift the representation from the Lie group in equation (37) to its Lie algebra , which is a combination of the Lie algebras of the individual factors:
Here is the vector space of diagonal matrices, the Lie algebra of . With the exponential map, this finally yields a representation of the Kelvin matrix C on the vector space in equation (38). It allows one also to separate the effects of spatial orientation, represented by , strain distribution, as represented by , and stiffness, as given by . In Section 3.2 and ‘Elasticity distance’ in Appendix 1, this is used further to define the ‘elasticity metric’, which is used in the sequel for Fréchet means.
It should be noted that to represent resp. , usually not all of resp. is required, as one has invariance w.r.t. rotations resp. in the symmetry group of the material. We shall not introduce the group theoretic complication which might arise from a transition to factor groups here [11,51,67], and allow possibly wrapped representations, as they occur also often in circular statistics [144,145]. A similar ‘wrapping problem’ occurs in the representation of spatial rotations on its Lie algebra via the exponential map, see also the remarks at the end of ‘Rotations in in 3D and 2D’ in Appendix 2.
As is easily seen from equations (17) and (19) and the above discussion, the group invariance properties of the symmetric tensor
and its exponential are the same, a fact which was also used by Guilleminot and Soize [109,110]. And as we are using the Lie algebra of the Lie group , we are essentially modelling in equation (39).
Note that one should actually apply some scaling prior to using the logarithm in equation (39), as the quantities involved may have physical dimensions. One could use in equation (39) a diagonal scaling matrix —often this is just a multiple of the identity, for some —of the same physical dimensions as and then compute with
Or one could use a reference matrix of the same physical dimensions as C—often is a diagonal matrix, or even just a multiple of the identity as above—and define the logarithm as
see also, e.g., Shivanand et al. [4]. It is tacitly assumed that some such transformation to non-dimensional quantities has been carried out, so there is no need to clutter the notation with it.
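The scaled logarithm of equation (41) can be sketched numerically. Below, the SPD matrix logarithm and exponential are computed via the spectral decomposition; the diagonal reference matrix `C_ref` is a hypothetical choice for illustration:

```python
import numpy as np

def spd_log(C):
    """Matrix logarithm of an SPD matrix via its spectral decomposition."""
    lam, W = np.linalg.eigh(C)
    return W @ np.diag(np.log(lam)) @ W.T

def spd_exp(H):
    """Matrix exponential of a symmetric matrix (inverse of spd_log on SPD)."""
    lam, W = np.linalg.eigh(H)
    return W @ np.diag(np.exp(lam)) @ W.T

# Non-dimensionalisation with a reference SPD matrix, in the spirit of eq. (41):
# H = log(C_ref^{-1/2} C C_ref^{-1/2}); C_ref here is a hypothetical diagonal choice.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
C = A @ A.T + 4.0 * np.eye(4)                  # SPD test matrix
C_ref = np.diag([2.0, 1.0, 1.0, 3.0])
S = np.diag(1.0 / np.sqrt(np.diag(C_ref)))     # C_ref^{-1/2} for a diagonal C_ref
H = spd_log(S @ C @ S)                         # dimensionless log-matrix
assert np.allclose(H, H.T)                     # H lives in the symmetric matrices
assert np.allclose(spd_exp(H), S @ C @ S)      # exp inverts the log
```

Note that `H` is merely symmetric, not positive definite; that is exactly the point of moving to the Lie algebra, where no definiteness constraint has to be enforced.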
Together with the exponential Rodrigues formulas in Appendix 2, this means that all the invariance constraints in the Lie group representation equation (36) may be applied on the Lie algebra level in equation (38), but a little refinement is in order here. As in the product Lie group the first factor is contained in the second one, it is clear that in the Lie algebra one has , and so the second factor can be reduced to , the orthogonal complement of in . Hence we may use for the representation the direct sum of Lie algebras
Hence, for the representation of the Kelvin matrix, treated later in detail in Section 4, the following general picture emerges: starting with a triple , one first builds the logarithm H in factored form, and then C:
where, as before, , , , and ; in between one has , and finally . This exponential representation ensures positive definiteness of C.
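The assembly just described, from a triple of orientation, eigenstrain rotation, and log-eigenvalues to an SPD matrix, can be sketched as follows. The orthogonal factors stand for exponentials of skew-symmetric Lie algebra elements; here they are generated by QR factorisation purely for illustration, and all names are hypothetical:

```python
import numpy as np

def kelvin_from_triple(Q, V, lam_log):
    """Sketch of equations (43)-(44): assemble an SPD matrix from an orientation
    rotation Q, an eigenstrain rotation V (in the Lie picture, exponentials of
    skew-symmetric matrices), and log-eigenvalues lam_log (the Lie algebra of
    the diagonal SPD group)."""
    Lam = np.diag(np.exp(lam_log))   # exp maps any real diagonal to SPD diagonal
    W = Q @ V
    return W @ Lam @ W.T             # SPD by construction, for any lam_log

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # stand-ins for exp of skew
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
lam_log = np.array([0.1, -0.5, 1.2])
C = kelvin_from_triple(Q, V, lam_log)
assert np.allclose(C, C.T)
assert np.all(np.linalg.eigvalsh(C) > 0)           # positive definiteness guaranteed
```

The last assertion is the point of the exponential representation: whatever the unconstrained parameters, the resulting matrix is SPD.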
As what we finally seek is a measurable map giving the elasticity Kelvin matrix for a location and event , following what was just outlined in equation (43), this will be done in several steps. One of them is to produce random tensor fields with values in in equation (42), i.e., random triples . From this, at each location x and event ω, the log tensor is built according to equation (43), and then finally as just described in equation (44).
In order to put the choice of random elements into proper perspective, in the following Sections 3.2 and 3.3 some considerations on how to choose a suitable kind of mean or expectation operator on will be given, together with some requirements for such a choice. Then we can continue with how to generate random Kelvin matrices in general in Section 3.4.
3.2. Fréchet means on manifolds
As the set of SPD maps is not a flat vector space, but a manifold which can be accessed in different ways, this leads to the question of what kind of average or mean to use for elements of , and this is tied to the question of how distances are measured on this set. As the ambient space is a vector space, it appears at first natural to calculate averages and means as one would in a vector space, using the common arithmetic or Euclidean mean. But, as mentioned already in Section 1, it has been found that by using this kind of mean, one introduces undesirable effects [73–82]. As further described in Section 1 and also in Appendix 1, these are the so-called swelling, fattening, and shrinking effects [20,70–72,82,83].
3.2.1. Variational characterisation of the mean
In this context we want to briefly describe how to generalise the arithmetic or Euclidean mean. One may note first that the mean has a variational characterisation, cf. also ‘Fréchet mean’ in Appendix 1 and [4,16,73,82,85,90,92,94,96,146–148] for further discussion, which is the key for further generalisation. This characterisation is reproduced in Appendix 1 for better reference, see equations (86–88), and it is based on a minimisation:
where the X-based variance (cf. also equation (88)) is the so-called ‘loss function’, here based on the general distance function resp. metric ϑ. In the simple case of a finite collection of items to be averaged this would be , with weights , . The usual arithmetic resp. Euclidean mean is obtained by use of the Euclidean distance in , see Appendix 1. But a Euclidean distance on is not necessarily natural, and it seems to be responsible for the undesirable effects just mentioned. So one has to look for other distance functions ϑ, cf. also ‘Elasticity distance’ in Appendix 1, to use in equation (45) in place of the Euclidean metric in the variational characterisation.
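For one particular choice of metric, the log-Euclidean distance, the Fréchet minimisation of equation (45) even has a closed-form solution: the exponential of the weighted mean of the logarithms. The following sketch uses that choice purely as an illustration; it is not the elasticity metric proposed later:

```python
import numpy as np

def spd_log(C):
    lam, W = np.linalg.eigh(C)
    return W @ np.diag(np.log(lam)) @ W.T

def spd_exp(H):
    lam, W = np.linalg.eigh(H)
    return W @ np.diag(np.exp(lam)) @ W.T

def frechet_mean_logeuclid(Cs, w=None):
    """Frechet mean of SPD matrices for theta(A,B) = ||log A - log B||_F:
    the minimiser of the variational problem (45) is then the exponential of
    the weighted arithmetic mean of the matrix logarithms (a sketch for one
    particular metric)."""
    w = np.full(len(Cs), 1.0 / len(Cs)) if w is None else np.asarray(w)
    H = sum(wi * spd_log(Ci) for wi, Ci in zip(w, Cs))
    return spd_exp(H)

C1, C2 = np.diag([1.0, 4.0]), np.diag([4.0, 1.0])
M = frechet_mean_logeuclid([C1, C2])
assert np.allclose(M, 2.0 * np.eye(2))   # geometric mean of the eigenvalues
```

Note that the arithmetic mean of `C1` and `C2` would be `2.5 I`, with a larger determinant than either summand (the swelling effect); the log-Euclidean Fréchet mean avoids this.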
3.2.2. Possible alternative means
There are several possible choices, cf. ‘Elasticity distance’ in Appendix 1 and [4,16,73,82,85,90,92,94,96,146–148], as well as the references therein. The connection with Section 3.1 is as follows. It has been mentioned here before that the set of positive-definite matrices is geometrically an open convex cone in . Hence can be equipped with the structure of a differentiable manifold.
The representation of on the product Lie group in equation (37), and via the maps equation (43) finally on the vector space in equation (38), allows one to map a Euclidean structure on the Lie algebra onto a Riemannian structure on the product Lie group , each factor separately, see Appendix 1 for more details. The Riemannian structure enables the definition of geodesics—locally shortest paths—and thus allows one to define a distance between two elements of each of the factor groups as the length of the shortest geodesic between those two elements.
From this, one may define a product metric on the product Lie group [4,70–72,80,82,83], and take this as the basis for a metric on . The product Lie group used here has a finer structure than that used for by Shivanand et al. [4], or general SPD matrices by Jung et al. [70], Groisser et al. [71,72], Feragen and Fuster [82], Schwartzman [83], which only uses the spectral decomposition equation (19), which would correspond to a representation on only . Here we want to use this finer structure given by the representation in equation (36), which includes the invariance resp. symmetry group reduction, in the definition of a new metric, which is explained in detail in ‘Elasticity distance’ in Appendix 1. It is proposed to call this metric the ‘elasticity metric’ or ‘elasticity distance’ in equation (95). It allows a fine control to measure differences in spatial orientation in equation (36), differences in strain distributors , and differences in Kelvin moduli , and combine them with different weights (cf. ‘Elasticity distance’ in Appendix 1).
It is based on the product distance on (see equation (93) in ‘Elasticity distance’ in Appendix 1). For two elements with representations , based on as in equation (43), the squared product distance is given by
with two positive tuning constants . The distances on the component Lie groups are: for the rotation groups the standard Riemannian metric (cf. equation (89)) with , and completely analogously also for with ; and for also the logarithmic Riemannian metric (cf. equation (90)) . In addition, one now has to worry about the non-uniqueness of the product representation, to obtain from in equation (46) the final elasticity distance in equation (95). Here, as shown in ‘Elasticity distance’ in Appendix 1, one minimises over all possible Lie product group representations , and, as the geodesic is only locally a shortest path, over all ‘intermediate stops’; but for small variations equation (46) is all that is needed.
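The structure of the squared product distance in equation (46) can be sketched as a weighted sum of the three component distances. In the sketch below, the rotations are represented by their Lie algebra (rotation-vector) differences, and all argument names and the weight names `alpha`, `beta` are illustrative assumptions:

```python
import numpy as np

def product_distance_sq(dq, dv, lam_a, lam_b, alpha=1.0, beta=1.0):
    """Sketch of the squared product distance, eq. (46): weighted sum of the
    squared spatial-rotation distance (rotation-vector difference dq), the
    squared eigenstrain-rotation distance (dv), and the squared logarithmic
    distance between the Kelvin moduli lam_a, lam_b."""
    d_rot = np.dot(dq, dq)                                 # spatial rotation part
    d_strain = np.dot(dv, dv)                              # strain rotation part
    d_mod = np.sum((np.log(lam_a) - np.log(lam_b)) ** 2)   # log metric on moduli
    return alpha * d_rot + beta * d_strain + d_mod

d2 = product_distance_sq(np.array([0.1, 0.0, 0.0]),
                         np.array([0.0, 0.2, 0.0]),
                         np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                         alpha=2.0, beta=0.5)
assert np.isclose(d2, 2.0 * 0.01 + 0.5 * 0.04)   # moduli identical: zero last term
```

The weights allow differences in orientation, eigenstrain distribution, and stiffness to be emphasised independently, as described in the text.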
To quickly recap Appendix 1 here: with this elasticity metric one may go back to equation (45), employing the variational characterisation with metric ϑ, and replace the Euclidean metric with the elasticity metric mentioned in equation (95). At the end of Appendix 1, an elasticity Fréchet mean or expected value is thus defined in equation (96) for a random elasticity tensor, utilising as loss function the -based variance in equation (97), which is based on the elasticity metric in equation (94).
3.3. Requirements for representation and mean
While the introduction of a new metric and mean were motivated in the preceding Section 3.2 by geometrical considerations, it might actually be worthwhile to consider what one may require from a metric for elasticity tensors resp. Kelvin matrices, and even from any kind of numerical representation in general.
As the goal is to be able to define, model, and represent random tensor fields on some spatial domain , or with typical point , and an event in some appropriate probability space with σ-algebra and probability measure ℙ, i.e., a measurable map , here we formulate some requirements which in our view have to be satisfied for the modelling, so that the random tensor field is usable in, e.g., stochastic FEM calculations and uncertainty quantification (e.g. Matthies [116], Soize [149] and the references therein). See also Shivanand et al. [4] for similar remarks regarding second-order SPD random tensor fields.
3.3.1. Representation requirements
The requirements we suggest should be met are that at almost all points and realisations ,
even under numerical approximations like truncation of series resp. numerical quadrature, the tensor has to be SPD, i.e., , and for all ;
the tensor has to be invariant under some group of transformations , the invariance or symmetry class of each realisation;
the mean based on some desirable metric has to be invariant under a possibly larger group of transformations , with , the invariance or symmetry class of the mean.
The first item is a necessity in order to be able to successfully calculate with the tensor field . The second item makes sure that each realisation has the proper spatial invariance resp. symmetry or elasticity class. The third item allows one to control the elasticity class of the mean resp. expected value. For the most part, the concern here is the analysis, modelling, and representation of at one point and sample , so that for the sake of brevity these arguments will be dropped in the sequel, just as they have been in most of the description above, up to the point in Section 3.4 when spatial random fields will finally be considered.
3.3.2. Desiderata for a Fréchet mean
Turning now to the mean and metric addressed in the third item above, first note that a change of physical units or to non-dimensional quantities is a diagonal scaling like equation (40), and the mean should scale the same way, i.e., first scaling and then averaging should give the same result as first averaging and then scaling. Second, a change of coordinate system by some for any should effect the same change in the mean, i.e., commutativity of averaging and coordinate transform. And finally, one may note that Hooke’s law—in vector notation in equation (129)—can just as well be formulated in terms of compliances, i.e.,
Hence it seems desirable to require the ‘mean material law’ to be independent of the arbitrariness of formulating the material law, e.g., equation (129) or equation (47), and therefore demand an appropriate invariance under inversion, i.e., averaging and then inverting should give the same result as first inverting and then averaging. In simplest terms, given n Kelvin matrices , let denote their Fréchet mean based on some desirable distance , cf. Appendix 1, and let , be an arbitrary spatially induced rotation. The requirements for the mean then are:
invariance under scaling: : ;
invariance under orthogonal transformations: ;
invariance under inversion: let with mean be the inverses of the Kelvin matrices ; then it should hold that .
3.3.3. Desiderata for a metric to define a Fréchet mean
When looking at the variational definition of the Fréchet mean equation (45), but with another metric in place of the Euclidean metric, or the more general formulas in Appendix 1, it is quite obvious that the Fréchet mean inherits the required properties from the underlying metric on . The requirements for a desired metric , and hence for a Fréchet mean based on it, are thus to satisfy for all , and all :
quasi-invariance under scaling: ,
with a monotone ;
invariance under orthogonal transformations: ;
invariance under inversion: ;
The Euclidean distance satisfies only the first two items, and so the arithmetic or Euclidean mean only satisfies the first two requirements for means. The elasticity metric proposed here, see Appendix 1, satisfies all three requirements, and so does the associated mean. There are some other metrics which satisfy all three requirements, see the discussion by Shivanand et al. [4], Feragen and Fuster [82], but the elasticity metric seems to deal particularly well with the aforementioned swelling, fattening, and shrinking effects.
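The three desiderata can be verified numerically for a concrete metric. The sketch below does this for the log-Euclidean distance, one of the metrics satisfying all three requirements (here scaling by a multiple of the identity, a special case of the diagonal scaling in the first item):

```python
import numpy as np

def spd_log(C):
    lam, W = np.linalg.eigh(C)
    return W @ np.diag(np.log(lam)) @ W.T

def theta(A, B):
    """Log-Euclidean distance between SPD matrices (an illustrative metric
    satisfying the three desiderata, not the elasticity metric itself)."""
    return np.linalg.norm(spd_log(A) - spd_log(B))

rng = np.random.default_rng(4)
X, Y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
A = X @ X.T + 3.0 * np.eye(3)
B = Y @ Y.T + 3.0 * np.eye(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))     # a random orthogonal matrix
s = 2.5
assert np.isclose(theta(s * A, s * B), theta(A, B))                        # scaling
assert np.isclose(theta(Q @ A @ Q.T, Q @ B @ Q.T), theta(A, B))            # rotation
assert np.isclose(theta(np.linalg.inv(A), np.linalg.inv(B)), theta(A, B))  # inversion
```

The inversion invariance follows from log(A⁻¹) = -log(A), so stiffness- and compliance-based formulations give the same distances and hence the same Fréchet means.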
3.4. Generation of random Kelvin matrices
At first assume that is fixed, i.e., consider the generation of random Kelvin matrices at one location. The spatial modelling will be described later, so the argument x will be suppressed. Similarly to the situation in Section 3.1 when introducing , where a ‘reference matrix’ was used in equation (41), this concept is useful also here. One may actually think of the reference matrix as an appropriate mean or expected value. It will be pointed out under what conditions this holds. Start with the Lie product group decomposition of the deterministic reference matrix as in equation (36):
from where can be recovered as in equation (44). Here one can then choose an elasticity invariance class according to what is appropriate for the mean as indicated in Section 2.3, i.e., invariance under resp. .
How to pick a specific elasticity invariance class either for the mean—i.e., resp. —or for the random part—i.e., resp. —will be given in detail for each class in the next Section 4.
As a second step, in equation (44) each component is written as a product
3.4.1. Random rotations in physical space
The purpose of the rotation in equation (50), , resulting from a spatial rotation , is to transform the desired final to the special form —that is why in equation (53) its inverse is used for the back-transformation. The special forms to be addressed below for each elasticity class have at least one distinguished rotation axis—except for tri-clinic material, which has none, and isotropic material, which does not need a spatial rotation—which as usual is taken to be the three-axis. In addition, there may be other distinguished rotation axes in the plane orthogonal to the three-axis, i.e., the 1-2-plane. In there also would be reflections, but as here we confine ourselves to proper rotations , reflections are not considered any further.
As recalled in ‘Rotations in in 3D and 2D’ in Appendix 2, a rotation can be represented by a normalised skew matrix and a rotation angle θ through the exponential map given by the Rodrigues formulas, equation (104) in 3D and equation (107) in 2D.
In 2D the skew matrix R is constant, cf. equation (106), whereas in 3D it is conveniently specified by a vector . Here r is a unit vector, and the length of q is equal to the absolute value of θ, and from that one obtains (cf. equation (103)).
In 2D one may take the parameter (a 1D vector), and in 3D the 3D-vector (). Thus the deterministic part of is always described by a deterministic vector , (so in 2D and in 3D ), and the random part through a random vector . Finally one obtains a correspondence for equation (53),
see ‘Rotations in ’ in Appendix 2 for further details.
One way to think of this combined rotation—a concept which is employed also for the factors and in equation (53)—is that does the ‘main part’, i.e., it transforms the block diagonal form into the average spatial orientation. The additional factor introduces a random component on top of this. This will be achieved if the expected value of this random rotation —for the -metric on in equation (89)—is the identity. The total rotation then has mean . Thus, in light of equations (54–55), one may in effect say that the random orientation can be written as .
The identity corresponds to , i.e., to . Thus, for a random realisation, generate a random spatial rotation vector with mean at the origin. Note that from equations (89) and (50) one gleans that the -distance (cf. Appendix 1) between and is
As this shows, one possibility to make the corresponding have mean I in the metric is that resp. has a distribution which is symmetric about the origin. Observe that if in addition to the mean (the origin) also the variance of the vector is specified, the maximum entropy distribution [150] for is the centred normal or Gaussian distribution, which obviously is symmetric about its mean, the origin.
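The Rodrigues formula and the mean-times-random-part construction just described can be sketched as follows; the numerical values of the mean rotation vector and the perturbation scale are hypothetical:

```python
import numpy as np

def rodrigues(q):
    """Rodrigues formula (cf. eq. (104)): rotation matrix exp([q]_x) for a 3D
    rotation vector q, with axis q/|q| and angle |q|."""
    th = np.linalg.norm(q)
    if th < 1e-12:
        return np.eye(3)
    K = np.array([[0.0, -q[2], q[1]],
                  [q[2], 0.0, -q[0]],
                  [-q[1], q[0], 0.0]]) / th      # normalised skew matrix
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

# Random spatial rotation: deterministic mean part times a random part whose
# rotation vector is centred Gaussian (the maximum entropy choice in the text).
rng = np.random.default_rng(5)
q_bar = np.array([0.0, 0.0, 0.4])       # hypothetical mean orientation vector
q_rand = 0.1 * rng.standard_normal(3)   # symmetric about the origin
Q = rodrigues(q_bar) @ rodrigues(q_rand)
assert np.allclose(Q.T @ Q, np.eye(3))          # orthogonality
assert np.isclose(np.linalg.det(Q), 1.0)        # proper rotation
```

Because the distribution of `q_rand` is symmetric about the origin, the random factor has mean identity in the rotation metric, so the total rotation has the prescribed mean orientation.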
3.4.2. Random transformations in strain space
This is in many ways very similar to random rotations in physical space as just described, only that with in equation (51) is in , and the randomness enters here in a slightly different way.
As a general spatial rotation with cannot diagonalise the Kelvin matrix C for any elasticity invariance class, a further rotation in strain representer space is needed for this. Thus these orthogonal , described by , are part of the material description, i.e., the definition of the block diagonal group-reduced form of the Kelvin matrix , as its columns specify the Kelvin eigen-strain representers resp. the eigenvectors of the elasticity matrix in the invariant subspace that was found through the reduction of the group representation. So, although it is an abstract rotation one is talking about, this is not a rotation in physical space.
One chooses again, analogous as before for the spatial rotation, a deterministic to define the reference or mean eigen-strain, again defined by a parameter vector , where , and a random , also defined by a parameter vector . As previously for the spatial rotation in equations (54–55), one obtains
Thus again the final result may be written as . Details for these versions of the map , as well as will be given for each elasticity class in Section 4.
In case has a distribution which is symmetric about the origin, and as for spatial rotations the -distance (cf. Appendix 1) is used on the Lie product factor , similar considerations as before show that then has the -mean equal to the identity I and thus is the -mean of . And again, in case that additionally also the variance of the vector is specified, the maximum entropy distribution [150] for is the centred normal or Gaussian distribution.
3.4.3. Random Kelvin moduli
We want to split the diagonal eigenvalue matrix according to equation (52), similarly as for the spatial rotations and the rotations in strain space , i.e., . One may think of as the deterministic or mean part, and of as random with mean equal to unity. Translating this into logarithmic space, one obtains a modelling on by taking as fixed such that , and as a zero mean random part such that
the last equality being true as is an Abelian Lie group, and therefore and commute.
One may point out that in case the distribution of is symmetric about the origin, the expected value of according to the logarithmic distance (see equation (90) in Appendix 1) will be the unit matrix I. Additionally it may be observed that the maximum entropy distribution for given mean and variance on is the centred Gaussian or normal distribution.
To obtain a concise representation in real parameters, one chooses a deterministic vector , where is the number of different log-eigenvalues, of the diagonal of , and the subscript # means that the entries of are distributed at the right positions of the diagonal. In addition, the zero-mean random part is parametrised by a random vector , which is then used in . Finally one obtains (cf. equation (59))
Sometimes one wants a guarantee that for each realisation one has . This can be achieved not by generating and through their logarithms as just explained, but rather by realising that , where has to be positive. So one generates the free variables —the logarithm of —and —the logarithm of the difference —and sets , , and then . As in every realisation one has that , this guarantees that for each realisation .
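One possible realisation of this idea is sketched below: the smaller Kelvin moduli and a positive gap are generated through their logarithms, so that the designated modulus is strictly largest in every sample. The variable names and the exact construction are assumptions, since the paper's notation uses a specific parametrisation:

```python
import numpy as np

# Sketch: guarantee lambda_1 >= lambda_j in every realisation by generating
# the smaller moduli and the positive gap via their logarithms.
rng = np.random.default_rng(1)

eta = rng.normal(0.0, 0.2, size=2)   # logs of the smaller Kelvin moduli
eps = rng.normal(0.0, 0.2)           # log of the positive gap

lam_rest = np.exp(eta)               # positive by construction
gap = np.exp(eps)                    # positive by construction
lam_1 = lam_rest.max() + gap         # strictly larger than every other modulus

assert lam_1 > lam_rest.max() and lam_1 > 0.0
```

Since the exponential is positive for every real argument, no realisation can violate the ordering, whatever distribution is chosen for the log-variables.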
3.4.4. Probability distributions of the parameters
As was described in the previous sections, the random parameters determine the stochastic behaviour of the generated Kelvin matrices C. Therefore, it should be mentioned what influence the character of the probability distributions for those random parameters has on the solvability of, e.g., the elliptic boundary value problem.
In the normal setting, as will be described in a bit more detail further below, the Kelvin matrix C and its inverse have to be essentially bounded, independent of spatial position or random event, for the boundary value problem to be well-posed.
The parameters and are mapped through the Lie algebra resp. into the compact Lie groups resp. , and thus have no influence on the question of boundedness. If their distributions are unbounded, they are wrapped on the compact Lie group to produce so-called circular statistics [144]—just think of the 2D case, where the Lie algebra is the real line, which is mapped via onto the circle and in this way onto the corresponding rotation matrix. Thus the mapped probability distributions of the rotation matrices Q and V are always bounded, and hence cannot cause the Kelvin matrix C or its inverse to be unbounded.
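The 2D case mentioned in the text can be sketched directly; the standard deviation below is arbitrary and only serves to make the Gaussian parameter clearly "unbounded" relative to one turn of the circle:

```python
import numpy as np

# Wrapping an unbounded Gaussian parameter onto SO(2): the Lie algebra of
# SO(2) is the real line, and the exponential map wraps it onto the circle,
# i.e., onto a rotation matrix with entries bounded by 1.
rng = np.random.default_rng(2)
theta = rng.normal(0.0, 10.0)            # unbounded sample on the Lie algebra

# exp(theta * G) for the so(2) generator G = [[0, -1], [1, 0]], in closed form:
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q @ Q.T, np.eye(2))   # orthogonal, hence bounded entries
assert np.isclose(np.linalg.det(Q), 1.0)
```

However wide the distribution of theta, the resulting rotation matrix always has entries in [-1, 1], which is the boundedness argument made above.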
This is different for the eigenvalues, the Kelvin moduli or their logarithms . In case the distribution of is bounded, so are the distributions of C and its inverse. But in case it is only known that the have finite variance, the maximum entropy distribution [151] is Gaussian, and hence unbounded; then the distributions of C and its inverse are unbounded as well. This will be discussed further below.
In Section 2.4, it was advocated to model random fields in a separated fashion as in equation (30), as this separates the probabilistic content from the spatial variation in a tensor-product like manner. Given the spatial functions, the attached random variables will be given through some kind of projection like equation (32). One way to express these random variables through a polynomial chaos expansion in independent standard Gaussian random variables was already sketched at the end of Section 2.4. This can be taken as one example of the so-called NORTA (NORmal To Anything) approach [152,153], where groups of random variables with known correlation and marginal distributions are expressed as functions of standard normal or Gaussian random variables.
Especially in the modelling of the Kelvin moduli, observe that what is proposed is to model the log-moduli, and then essentially take the exponential, thereby guaranteeing positivity. This means that the point-wise exponential transforms the probability distribution of the log-moduli into that of the Kelvin moduli themselves. If, as a realistic example, the log-moduli μ were modelled as a Gaussian field, the Kelvin moduli would be log-normal. One may interpret the exponential here also in a different way: Let be the Gaussian cumulative distribution function of the log-moduli μ, and the log-normal cumulative distribution function of the Kelvin moduli λ; then . Now it is easy to see that if some other probability distribution of the Kelvin moduli λ were desired, say a probability distribution with cumulative distribution function , then one could simply take the point transformation . The thing to watch out for in this case is not to make any approximations which could numerically destroy positivity.
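A minimal sketch of this point transformation, in the NORTA spirit: a standard Gaussian sample is pushed through the Gaussian CDF and then through the inverse CDF of the desired marginal. The exponential marginal used here is an assumption for illustration, not the paper's choice; what matters is that positivity is preserved exactly:

```python
import math
import numpy as np

# Point transformation Psi^{-1}(Phi(.)): Gaussian samples -> desired marginal.
rng = np.random.default_rng(3)
x = rng.normal(size=100_000)                    # standard Gaussian samples

# Gaussian CDF Phi via the error function:
phi = 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

rate = 2.0                                      # illustrative exponential rate
y = -np.log1p(-phi) / rate                      # inverse exponential CDF

assert y.min() > 0.0                            # positivity preserved exactly
assert abs(y.mean() - 1.0 / rate) < 0.02        # exponential mean = 1/rate
```

Using `log1p` instead of `log(1 - phi)` is one way to avoid the numerical loss of positivity warned about above.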
3.4.5. Spatial random parametrisation
Putting everything together, one may now define with a parameter vector space and in it a deterministic vector to define the deterministic (cf. equations (43–44)) part , and also the zero mean random vector
Now we set , and following equations (50–53) one thus arrives at the construction of the desired random Kelvin matrices at any spatial point .
To extend this to the construction of random spatial fields of Kelvin matrices, one may recall Section 2.4: the above description of a parameter vector has to be extended to spatial parameter vector fields. Thus, by defining a deterministic spatial field for , and a zero-mean random field , one has everything that is needed to define random fields of Kelvin matrices with all the desired properties.
The random parameter field is up to stochastic second order described by its mean and covariance , see equations (27) and (29) in Section 2.4. The covariance now takes the more detailed form of a joint correlation matrix function:
This form shows that it contains all the correlation structures between spatial orientation, encoded in , eigen-strain distribution, encoded in , and stiffness, encoded in .
Heterogeneous materials and their statistical description may be found in Torquato [154] and references therein. The family of Matérn covariance functions [155–157] is an often-used choice in this connection. These functions allow separate control of the smoothness and the correlation length of the random field, as well as efficient approximation via low-rank tensor methods [158], which is important in the solution of the Fredholm integral equation (31) and the field expansion equation (30). This kind of covariance function is used to describe stochastically homogeneous random fields as already described at the end of Section 2.4, where the Karhunen-Loève eigenproblem equation (31) becomes a convolution equation. This means that the—possibly generalised—eigenfunctions are the Fourier functions. Given this, the spectrum of the operator in equation (31) is computed via the Fourier transform of the covariance function [116]. In addition to the versatility of this family of covariance functions, the (continuous) spectrum in the eigenproblem equation (31) is thus analytically known. Unfortunately, neither this covariance nor its spectrum is compactly supported—an indication that the integral operator in equation (31) is not compact. One example of a covariance function with compact support is given by Gneiting [159]. This renders the integral operator in equation (31) compact, hence it has a discrete spectrum and a proper Karhunen-Loève series expansion equation (30)—whereas in the case of a continuous spectrum this series becomes an integral.
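For reference, the Matérn family has simple closed forms for the half-integer smoothness values most used in practice; the generic implementation below is an illustration (parameter names `ell`, `nu`, `sigma2` are ours), not code from the paper:

```python
import numpy as np

# Matern covariance for half-integer smoothness nu; ell is the correlation
# length, sigma2 the variance.  General nu would need the Bessel-K form.
def matern(r, ell=1.0, nu=1.5, sigma2=1.0):
    s = np.sqrt(2.0 * nu) * np.abs(r) / ell
    if nu == 0.5:                              # exponential covariance
        return sigma2 * np.exp(-s)
    if nu == 1.5:
        return sigma2 * (1.0 + s) * np.exp(-s)
    if nu == 2.5:
        return sigma2 * (1.0 + s + s**2 / 3.0) * np.exp(-s)
    raise NotImplementedError("general nu needs the Bessel-K form")

r = np.linspace(0.0, 3.0, 7)
for nu in (0.5, 1.5, 2.5):
    c = matern(r, nu=nu)
    assert np.isclose(c[0], 1.0)               # unit variance at r = 0
    assert np.all(np.diff(c) <= 0.0)           # monotonically decaying
```

The parameter `nu` controls the smoothness and `ell` the correlation length, which is exactly the separate control referred to above.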
3.4.6. Boundary value problems
Let us return to the question of regularity, resp. boundedness required from the field of Kelvin matrices . The solution space for an elasticity boundary value problem with a deterministic field as coefficients for the elastic displacement field would in the simplest instance be a closed subspace of the Hilbert-Sobolev space , incorporating the essential boundary conditions, say for simplicity , i.e., vanishes on the boundary . The right-hand side of distributed loads then typically has to be in the dual space . Thus the Kelvin matrix as coefficient field has to be in , i.e., essentially bounded, in order to have a well-posed problem [116,160–163]. For that, one would also want the inverse of the Kelvin matrix, , to be essentially bounded. This is well known standard elliptic theory [103,104,160–163].
Considering the stochastic extension of the boundary value problem with a random Kelvin matrix , one approach is to require finite second moments for the random solution displacement field , i.e., to be in the tensor product Hilbert space , and the right-hand side of random distributed loads to be in its dual space . The formulation of a well-posed problem in this case requires the random Kelvin matrix to be essentially bounded, as well as its inverse [102,116,125,126,164–166].
This may be regarded as the standard case, and, as mentioned above, it requires the probability densities of the log-eigenvalues to be essentially bounded. One possibility is the quite versatile beta distribution [118]. But, as already mentioned, the maximum entropy distribution for the log-eigenvalues with known finite second moments is the normal or Gaussian distribution [151], making this a highly desired choice [83]. In this case, the eigenvalues and thus the Kelvin matrix have a lognormal distribution and are thus unbounded. It is possible to proceed also in this case, with different techniques, which either make the solution space smaller, or change the solution concept slightly; see e.g., Lord et al. [165], Dũng et al. [166], Espig et al. [167], Hoang and Schwab [168], Bachmayr et al. [169,170], Herrmann and Schwab [171] for more details.
The numerical discretisation and solution of such stochastic boundary value problems resp. SPDEs is beyond the scope of this paper, and the interested reader may find more material in, e.g., Matthies [116], Matthies and Keese [125], Matthies [126], Babuška et al. [164], Lord et al. [165], Dũng et al. [166], Espig et al. [167], Hoang and Schwab [168], Herrmann and Schwab [171], Grigoriu [172] and the references therein. The numerical solution of such SPDEs is also necessary when one wants to identify the properties of the tensor field from measurements resp. observations.
3.4.7. Identification of the parameters
The tasks of identification, calibration, and conditioning have already been briefly touched upon at the end of Section 2.4. Mathematically speaking, these are all so-called inverse problems. In our view, general techniques for such identification resp. inverse problems are the Bayesian methods; see [127–129,131–135,137,138,173,174] and the references therein. These probabilistic methods may be used for many purposes, e.g., for anisotropic materials [130,136]. But the detailed description of the identification of the symmetry classes of materials and of the numerical values of the properties, i.e., the eigenvalues and orientations, is a topic beyond the scope of this work, so we offer only some remarks. The identification of symmetry classes may be based, e.g., on the knowledge of the crystal structure of the material [68], on its fabrication [175], or on its natural genesis. Otherwise one must rely on measurements or observations. For the identification of elasticity classes, see [176–180]. For the kind of stochastic modelling proposed here, some pointers are given in section 4.2 of Grigoriu [118].
With informative measurements or tests, quite complicated material behaviour can be identified and calibrated [131–134]; see also [139,181,182] for highly nonlinear and irreversible behaviour. Whether the tests or experiments are informative may be checked computationally beforehand. Thus the test or measurement and updating may first be performed virtually in the computer and, in this way, help in the design of informative experiments.
4. Detailed Lie representation
According to equation (62) in Section 3.4, the random field of Kelvin matrices can be described by an n-dimensional random field of parameters , where , and is such that describes the spatial rotations, describes the eigen-strain distribution, and describes the eigenvalues or Kelvin moduli. These parameters will now be given for all the elasticity classes. As this is simpler in 2D, this is where we start in Section 4.1, before moving to the 3D case in Section 4.2. As we will accept possible ‘wrapping’ in the parametrisation of rotations, for the sake of simplicity we refrain from giving the formal group-theoretic descriptions of the material symmetry resp. invariance groups. These will be described only verbally, by pointing out familiar geometrical objects with the same symmetry.
Here the special form of the Kelvin matrix (see the development equations (17–19) in Section 2.3) will be given in its group reduced form, i.e., reduced to block-diagonal form. In cases where one cannot achieve two or more diagonal blocks—e.g. for tri-clinic materials—one at least wants to have a form which has as many zeros in as possible. The special form can then be spatially rotated—the reverse of equation (17)—to the final form ; this is all contained in the relations equation (35) at the beginning of Section 3. How to build the orthogonal from a spatial rotation is recalled, for the sake of completeness, from the literature in Appendix 2.
4.1. Elasticity classes in 2D
In 2D, there are four symmetry classes [15], briefly and informally described here by pointing out a geometric figure with the same symmetry. The Hasse diagram—drawn horizontally in Figure 1—shows the relation of the lattice of elasticity symmetry or invariance groups as subgroups of the full orthogonal group, where the hooked arrow means a subgroup relation, e.g., the symmetry group of the orthotropic class is a subgroup of the symmetry group of the tetragonal class. The largest group is on the left, the isotropic class with the symmetries of a circle (i.e., the whole 2D rotation group); then the next smaller one is the tetragonal class with the symmetry of a square, then the orthotropic class with the symmetry of a general rectangle, and finally the tri-clinic class with the symmetry of a parallelogram.
Hasse diagram of symmetry subgroups of fourth-order tensors in 2D.
4.1.1. Tri-clinic material in 2D
This section covers not only tri-clinic, but also mono-clinic materials, as there is no difference in 2D. As mentioned before, these materials have in 2D the symmetry of a parallelogram. The Kelvin matrix C cannot be simplified through a group reduction with a spatial rotation to a block-diagonal form beyond the general form equation (13), where one may note that there are free parameters. But, as in 2D one has , one parameter () may be used for a spatial rotation, see equations (117) and (120) in Appendix 2, to induce one zero in the upper right corner [35]; this echoes the first of the relations in equation (35):
The spatial rotations resp. decide on whether the spatial orientation of the material axes changes or not. Hence, one wants to keep this possibility in the stochastic modelling process.
What is needed next is to use the spectral decomposition of , the second of the relations in equation (35). The matrix in equation (65) has generally three distinct positive eigenvalues , and . For the three distinct eigenvalues one thus needs log-eigenvalue parameters , such that .
The Kelvin matrix in equation (65) can of course be diagonalised as in equation (18), but the corresponding matrix of eigenvectors is not in , i.e., not induced by a spatial rotation.
In Section 3.1, it was explained how to keep the spatial rotation described by —cf. ‘Rotations in ’ in Appendix 2—from interfering with the ‘strain distributor’ part by introducing the orthogonal split in the corresponding Lie algebra already mentioned in equation (42): . As just mentioned, in 2D one has , and this is needed for the rotation angle parameter to describe the spatial rotation. This means angle-like parameters can now be used to describe the eigenvectors of the Kelvin matrix in equation (65), i.e., the columns of in equation (18).
There are at least two ways of generating such a : one is to use the equations (121) and (122) in ’Rotations in ’ in Appendix 2, where the parameters are directly used for a map to generate a . Another way is to use two 2D rotation matrices—cf. equation (105)—embedded in 3D depending on angles :
Here the explicit formulation of the skew matrices and the explicit use of the exponential map in the mapping chain according to ’Rotations in in 3D and 2D’ in Appendix 2 have been shortened by giving just the final result. Then with it is easily seen that (cf. equation (18))
has the form equation (65), which corresponds to the spectral decomposition as the second of the relations in equation (35). In both cases, the parameters are purely a description of the distribution of eigen-strains for the material specification. Note that the first column of the matrix of is the eigenvector or eigen-strain for the eigenvalue , the eigenvector of , and so on. If one wants to distinguish the bulk or volumetric response from the shear response, e.g., by assigning the largest of the eigenvalues to this response, then one may choose the eigenvector which has the largest projection onto the volumetric strain n (equation (132) in ’Kelvin notation’ in Appendix 3). By assigning the largest eigenvalue to that eigenvector—it was described previously how to generate random eigenvalues where one is always larger than the others—one can ascertain that the bulk or volumetric response is always stiffer than the shear response.
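The assignment just described can be sketched as follows; the 2D Kelvin volumetric strain n = (1, 1, 0)/sqrt(2) is our reading of Appendix 3, and the random eigenvector matrix is only a stand-in for one generated via the Lie algebra maps:

```python
import numpy as np

# Assign the largest Kelvin modulus to the eigenvector with the largest
# projection onto the volumetric strain n (2D Kelvin notation assumed).
rng = np.random.default_rng(4)

n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # volumetric strain representer
V = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # some orthogonal eigenvectors
lam = np.sort(np.exp(rng.normal(size=3)))      # three positive Kelvin moduli

proj = np.abs(V.T @ n)                         # projections onto n
order = np.argsort(proj)[::-1]                 # most volumetric column first
lam_assigned = np.empty(3)
lam_assigned[order] = lam[::-1]                # largest modulus -> most volumetric

C = V @ np.diag(lam_assigned) @ V.T            # SPD Kelvin matrix
assert np.allclose(C, C.T)
assert np.all(np.linalg.eigvalsh(C) > 0.0)
assert lam_assigned[np.argmax(proj)] == lam.max()
```

Positivity of C is automatic since the moduli are exponentials, and the ordering ensures the volumetric response is stiffest.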
By construction, the form in equation (67) is invariant under rotations in the symmetry group . Finally, the elasticity matrix for a tri-clinic material in 2D with the proper spatial orientation may be generated from equation (67) in a two-step rotation procedure via
The last equation in equation (68)—the third of the relations in equation (35)—is the concrete version of equation (53) in Section 3.4. One may recall that is the constant reference matrix—e.g. the scaling-rotation mean—which may actually be invariant under a larger symmetry group , i.e., further to the left in Figure 1. In case one of the eigenvectors is a pure shear, i.e., proportional to y in equation (133) in ’Kelvin notation’ in Appendix 3, then the material is in fact orthotropic, to be described next.
4.1.2. Orthotropic material in 2D
As already indicated, such materials have the symmetry of a general rectangle in 2D. By the group reduction process, i.e., a spatial rotation given by one parameter (), see equations (117) and (120) in Appendix 2, the Kelvin matrix for these materials may be brought into the following block-diagonal form [35], with four free parameters:
which has one obvious eigen-stiffness or Kelvin shear modulus with associated eigenvector or eigen-strain distributor
a simple shear. The other two come from the diagonalisation of the upper left sub-matrix of equation (69), and have the general form of a rotation of in equations (132) and (133) about the fixed vector in equation (70) in the strain-representer space :
associated with the other two Kelvin moduli . Observe that the single parameter () is not an angle in a spatial sense, but is one in the two-dimensional strain representer subspace formed by the first two components, characterising the material, or more precisely, the Kelvin eigen-strains. Here, as just before, the mapping chain according to ’Rotations in in 3D and 2D’ in Appendix 2 has been shortened to display just the final result .
The previous remarks made for tri-clinic material regarding the possible choice of the largest Kelvin module to be associated with the volumetric response apply here as well. Again, for the three generally distinct eigenvalues one needs log-eigenvalue parameters , such that . In addition, one has with equations (70) and (71) , and with that formally the same kind of spectral representation as in equation (67) for . The Kelvin elasticity matrix for an orthotropic material in 2D with the proper spatial orientation may then again be generated as in equation (53) in Section 3.4, or in equation (68) in a two-step rotation procedure, see also the remarks there. The total number of parameters is . In case in equation (71), i.e., in case the first two eigenvectors are and , the material is actually tetragonal.
4.1.3. Tetragonal material in 2D
As already indicated, such materials have the symmetry of a square in 2D. Again, by the group reduction process and a spatial rotation given by one parameter (), the Kelvin matrix for these materials may be brought into the following block-diagonal form [35], with three free parameters:
completely determining the spectral decomposition, with, as before in the 2D orthotropic case, the obvious Kelvin shear modulus and eigenvector as in equation (70), and the two additional generally distinct Kelvin moduli (): and , i.e., .
The associated eigenvectors are fixed (), corresponding to in equation (71): and , cf. equations (132) and (133), hence is a fixed matrix. As is easily observed, is a volumetric Kelvin modulus, whereas is a shear Kelvin modulus; and the previous remarks made for tri-clinic material regarding the possible choice of the largest Kelvin module to be associated with the volumetric response apply here as well.
Here again, for the three generally distinct eigenvalues one needs log-eigenvalue parameters . With the above fixed V one obtains formally the same kind of spectral representation as in equation (67) for in equation (72). The tetragonal 2D Kelvin elasticity matrix with the proper spatial orientation may then again be generated as in equation (53) in Section 3.4, or in equation (68) in a two-step rotation procedure. The total number of parameters is . In case the two shear Kelvin moduli coincide, i.e., , the material is actually isotropic.
4.1.4. Isotropic material in 2D
As already mentioned, such a material has the symmetry group of a circle in 2D. This means that no spatial rotation is needed () to bring the Kelvin matrix to the block-diagonal form equation (72) with , the Kelvin matrix has this form in any spatial orientation. Thus there are only two () free parameters for the two log-eigenvalues . The spectral decomposition as in equation (67) with the matrix V as in the tetragonal case is the final product , no spatial rotations as in equation (53) in Section 3.4, or in equation (68) are necessary, as the Kelvin matrix is invariant under such rotations.
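The rotational invariance claimed here is easy to verify numerically. In the sketch below, the volumetric strain n = (1, 1, 0)/sqrt(2) and the Kelvin-notation rotation matrix for the Mandel vector (e11, e22, sqrt(2) e12) are our assumptions; the two moduli values are illustrative:

```python
import numpy as np

# Isotropic 2D Kelvin matrix: one volumetric and one (double) shear modulus,
# with fixed eigenvectors -- no spatial rotation needed.
lam_vol, lam_shear = 3.0, 1.0
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
C = lam_vol * np.outer(n, n) + lam_shear * (np.eye(3) - np.outer(n, n))

theta = 0.7                              # any spatial rotation angle
c, s = np.cos(theta), np.sin(theta)
r2 = np.sqrt(2.0)
Q_K = np.array([[c * c, s * s, r2 * c * s],       # Kelvin/Mandel rotation
                [s * s, c * c, -r2 * c * s],      # induced by a 2D spatial
                [-r2 * c * s, r2 * c * s, c * c - s * s]])  # rotation

assert np.allclose(Q_K @ Q_K.T, np.eye(3))        # orthogonal
assert np.allclose(Q_K @ C @ Q_K.T, C)            # invariant: isotropy
```

Since Q_K leaves n fixed, C commutes with every such induced rotation, which is the invariance stated in the text.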
4.2. Elasticity classes in 3D
In 3D, there are eight elasticity classes [8–11,14,35,49–51]. The Hasse diagram for 3D in Figure 2 shows the relation of the lattice of elasticity symmetry or invariance groups as subgroups of the full orthogonal group in 3D. They will again be described informally by pointing out geometric bodies with the same symmetry. On top is the isotropic class with the symmetries of a sphere, i.e., the whole orthogonal group, with the sub-classes—descending in the Hasse diagram in Figure 2 along the left branch—the cubic class with the symmetries of a cube, the tetragonal class with the symmetries of a square prism (a right prism with a square base), the orthotropic class with the symmetries of a right prism (a right prism with a general rectangle as base), the mono-clinic class with the symmetries of a right prism with a parallelogram as base, and the tri-clinic class with the symmetries of a parallelepiped (an oblique prism with a parallelogram as base). On the right branch of the Hasse diagram for 3D in Figure 2 are the transversely isotropic class with the symmetries of a circular cylinder and the trigonal class with the symmetries of a triangular prism (a right prism with an equilateral triangle as base).
Hasse diagram of symmetry subgroups of fourth-order tensors in 3D.
4.2.1. Tri-clinic material in 3D
As mentioned before, these materials have in 3D the symmetry of a parallelepiped. As in the 2D case for tri-clinic materials, the Kelvin matrix C cannot be simplified through a group reduction with a spatial rotation to a block-diagonal form beyond the general form equation (12), and there are free parameters. But, as in 3D one has , three parameters () may be used for a spatial rotation, the direction of the vector q actually being the rotation axis of , see equations (108) and (115) in Appendix 2, to induce three zeros in the right of the top row [35]; this echoes the first of the relations in equation (35):
As before in 2D, the spatial rotations resp. decide on whether the spatial orientation of the material axes changes or not, and thus one wants to keep this possibility in the stochastic modelling process.
The matrix in equation (73) has generally six distinct positive eigenvalues , for which one needs log-eigenvalue parameters to generate the diagonal eigenvalue matrix: .
Next one wants to use the spectral decomposition of , the second of the relations in equation (35). As a was used to generate the special form equation (73), this matrix can be diagonalised as in equation (18), but the corresponding matrix of eigenvectors is not induced by a spatial rotation, and thus generated by a . See Section 3.1 on how to keep the spatial rotation Q from interfering with the ‘strain distributor’ part by introducing the orthogonal split in the corresponding Lie algebra already mentioned in equation (42): . As , all that is needed now is a basis of the space , given in equation (116) in ’Rotations in ’ in Appendix 2. With that basis and 12 parameters one may form the vector , and then map this on to according to equation (108). This is finally mapped to , to obtain as in equation (67), which corresponds to the spectral decomposition as the second of the relations in equation (35).
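The final mapping step, from a skew-symmetric matrix in the Lie algebra to an orthogonal eigenvector matrix, can be sketched as follows. Note that the paper restricts the skew matrix to the 12-dimensional complement of the spatially induced rotations; here a generic skew matrix in so(6) stands in for that restricted construction:

```python
import numpy as np
from scipy.linalg import expm

# Map a skew-symmetric matrix (Lie algebra element) to a proper orthogonal
# eigenvector matrix V via the matrix exponential.
rng = np.random.default_rng(5)

A = rng.normal(size=(6, 6))
S = 0.5 * (A - A.T)            # skew-symmetric: S.T == -S
V = expm(S)                    # proper orthogonal: V.T @ V = I, det V = 1

assert np.allclose(S, -S.T)
assert np.allclose(V.T @ V, np.eye(6))
assert np.isclose(np.linalg.det(V), 1.0)
```

Orthogonality and unit determinant follow because the exponential of a skew-symmetric matrix always lies in the special orthogonal group.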
The columns of the matrix are the eigenvectors corresponding to the eigenvalues . As before, in case one wants to distinguish the bulk or volumetric response from the shear response, e.g., by assigning the largest of the eigenvalues to this response, then one may choose the eigenvector which has the largest projection onto the volumetric strain n given in equation (126) in ’Kelvin notation’ in Appendix 3, and assign the largest eigenvalue to that eigenvector.
Finally, as before, the two-step rotation procedure with a spatial rotation given by may be used as in equation (68) to rotate the Kelvin matrix to the desired coordinate system. As pointed out before, the last equation in equation (68) giving here—the third relation in equation (35)—is the concrete version of equation (53) in Section 3.4, cf. the remarks for tri-clinic materials in 2D in Section 4.1. In case two of the eigenvectors are pure shears, i.e., proportional to y in equation (127) and z in equation (128) in ’Kelvin notation’ in Appendix 3, or some linear combination thereof, then the material is in fact mono-clinic, to be described next.
4.2.2. Mono-clinic material in 3D
As indicated previously, these materials have in 3D the symmetry of a right prism with a parallelogram as base. In this case [35], the Kelvin matrix C can be simplified through a group reduction with a spatial rotation to a block-diagonal form with three diagonal blocks equation (74), and in this form there are free parameters. An additional parameters are used for the spatial rotation just referenced for the group reduction process, giving the total number of parameters for mono-clinic material as ; see equations (108) and (115) in Appendix 2, this is the first relation in equation (35):
The matrix in equation (74) has generally six distinct positive eigenvalues , for which one needs, as before, log-eigenvalue parameters to generate the diagonal eigenvalue matrix: . Two eigenvalues are immediately recognisable from equation (74), namely the Kelvin moduli and with corresponding eigenvectors and .
The other four eigenvectors in come from the diagonalisation of the upper left block in equation (74). This has a spectral decomposition with . As , one needs additional parameters to model this. Taking the four-dimensional version of equation (108) in ’Rotations in ’ in Appendix 2, this is mapped to , and this on to , yielding the eigen-strains of the eigenvalues resp. Kelvin moduli of . Each column in is then extended into by padding with two zeros: to complete .
From this eigen-strain matrix and the diagonal one then obtains a random mono-clinic Kelvin matrix of the form equation (74): , as in equation (67), corresponding to the spectral decomposition as the second of the relations in equation (35). The by now familiar two-step rotation procedure with a spatial rotation given by may be used as in equation (68) to rotate the Kelvin matrix to the desired coordinate system, and again the last equation in equation (68) yielding —the third relation in equation (35)—is the concrete version of equation (53) in Section 3.4. In case , the material is in fact orthotropic.
4.2.3. Orthotropic material in 3D
It was already mentioned that these materials have in 3D the symmetry of a right prism—a prism with a general rectangle as base. As in all the previous cases, the group reduction process with a rotation leads to a block-diagonal form equation (75) with three rotation axis parameters (), see equations (108) and (115) in Appendix 2. The resulting Kelvin matrix of the form [35]:
has free parameters, so that together with the rotation axis q with parameters, in total there are parameters.
The matrix in equation (75) has generally six distinct positive eigenvalues , for which one needs, as before, log-eigenvalue parameters to generate the diagonal eigenvalue matrix . Three eigenvalues are obvious from equation (75), namely the Kelvin moduli , , and with corresponding eigenvectors , , and .
The other three eigenvectors come from the diagonalisation of the upper left block in equation (75). Its spectral decomposition is with . Therefore, additional parameters are needed to model this. As , one may actually use Rodrigues’ original formula, equations (102–104) in ’Rotations in in 3D and 2D’ in Appendix 2, such that p is the rotation vector, but now in the strain representer space , and not in physical space. This produces a rotation matrix , yielding the eigen-strains of the eigenvalues resp. Kelvin moduli of . Each column in is then extended into by padding with three zeros: to complete . The familiar two-step rotation procedure with a spatial rotation given by may be used as in equation (68) to rotate the Kelvin matrix to the desired coordinate system.
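Rodrigues' formula itself is standard and can be stated compactly; the sketch below applies it to a generic rotation vector, which here would live in the 3D strain representer space rather than physical space:

```python
import numpy as np

# Rodrigues' formula: for rotation vector p with angle theta = |p| and unit
# axis k = p/theta,  R = I + sin(theta) K + (1 - cos(theta)) K^2,
# where K is the skew (cross-product) matrix of k.
def rodrigues(p):
    theta = np.linalg.norm(p)
    if theta < 1e-12:
        return np.eye(3)
    k = p / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

p = np.array([0.3, -0.2, 0.5])
R = rodrigues(p)
assert np.allclose(R.T @ R, np.eye(3))     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # proper rotation
assert np.allclose(R @ p, p)               # the axis is fixed
```

The three components of p are exactly the additional strain-distribution parameters referred to above.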
4.2.4. Trigonal material in 3D
It was already mentioned that these materials have in 3D the symmetry of a triangular prism. As in all the previous cases, the Kelvin matrix can be simplified in the group reduction process with a rotation with three rotation axis parameters () — a so-called natural coordinate system with the spatial three-axis equal to the three-fold rotation axis—into a form [35] with free parameters (i.e., the total number of parameters is ):
The matrix in equation (76) has generally [33] only four distinct positive eigenvalues , as two of them are degenerate double eigenvalues. For this one needs log-eigenvalue parameters to generate the diagonal eigenvalue matrix , where means that the four distinct eigenvalues are put in the right six locations on the diagonal, doubling included.
Now more parameters are needed to fully describe the material via the eigenvector matrix . The two simple eigenvectors for have the general form [33]:
The angle α in strain representer space signifies a rotation about the shear y in equation (127), which means that are rotated images of n in equation (126) and z in equation (128). In case , i.e., for the angle , the vectors coincide with . Therefore we set in equation (77), so that corresponds to the special case.
The two two-dimensional eigen-subspaces with repeated eigenvalues and have as possible bases [33] vectors of the general form—pure shears:
Again the parameter can be interpreted as an angle, and note that for these eigenvectors all become simple shears. With this the description of the parameters is complete. The last step is the two-step rotation procedure as before to rotate the Kelvin matrix to the desired coordinate system.
4.2.5. Tetragonal material in 3D
As mentioned, these materials have in 3D the symmetry of a square prism. As in the previous cases, the Kelvin matrix can be simplified by group reduction with a rotation with three rotation axis parameters ()—such that the spatial three-axis is equal to the four-fold rotation axis—[35] with free parameters (i.e., the total number of parameters is ) into the form:
which has generally [33] five independent eigenvalues resp. Kelvin moduli. Obviously is a double eigenvalue with the two-dimensional eigen-space spanned by the vectors and . The other obvious simple eigenvalue is with eigenvector . Thus one needs log-eigenvalue parameters to generate the diagonal eigenvalue matrix: , where again means that the five distinct eigenvalues are put in the right six locations on the diagonal.
The other three eigenvectors come from the diagonalisation of the upper left block in equation (80). By simple inspection [33] it can be seen that (cf. equation (133)) is a fixed eigenvector of with eigenvalue . Padding with 3 zeros in the by now familiar fashion gives the fixed eigenvector (cf. equation (127)) of in equation (80). The other eigenvectors for the two simple eigenvalues of depend on an angle-like strain-distribution parameter , and have the general form [33] (already padded with zeros):
This then finally gives the eigenvector matrix . Thus a tetragonal material including its orientation in the physical space is completely described by the parameters . Finally, the two-step rotation procedure rotates the Kelvin matrix to the desired coordinate system.
4.2.6. Cubic material in 3D
As mentioned, these materials have in 3D the symmetry of a cube. The Kelvin matrix can be simplified by group reduction, using a rotation with three rotation axis parameters ()—such that the sides of the cube align with the axes [35]—with free parameters (i.e., the total number of parameters is ), into the form:
It generally has [33] only three independent eigenvalues, resp. Kelvin moduli: one simple, one repeated eigenvalue of multiplicity two, and one repeated eigenvalue of multiplicity three, as is a triple eigenvalue with the three-dimensional eigen-space spanned by the vectors , and . Thus one needs log-eigenvalue parameters to generate the diagonal eigenvalue matrix: ; here again the notation means that the three distinct eigenvalues are placed in the correct six locations on the diagonal.
As , the other three eigenvectors may be chosen as fixed. They are found to be with eigenvector (cf. equation (126)) and with the two-dimensional eigen-space spanned by the shear-vectors , cf. equations (127) and (128). The fixed matrix of eigenvectors is thus , and the six parameters to describe the material are . Finally, the usual two-step rotation procedure rotates the Kelvin matrix to the desired coordinate system.
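This fixed eigenstructure makes the cubic case easy to check numerically. The following sketch, with assumed Kelvin moduli, builds the Kelvin matrix from the three fixed eigenprojectors and confirms the multiplicities 1, 2, and 3:

```python
import numpy as np

# Assumed Kelvin moduli (positive) for the simple, double, and triple eigenvalues.
l1, l2, l3 = 9.0, 4.0, 2.0

n = np.array([1, 1, 1, 0, 0, 0]) / np.sqrt(3)   # volumetric eigenvector (cf. eq. (126))
P1 = np.outer(n, n)                             # projector, multiplicity 1
P3 = np.diag([0, 0, 0, 1, 1, 1]).astype(float)  # span of the three simple shears
P2 = np.eye(6) - P1 - P3                        # remaining two-dimensional eigenspace

C = l1 * P1 + l2 * P2 + l3 * P3                 # cubic Kelvin matrix
w = np.sort(np.linalg.eigvalsh(C))              # l3 three times, l2 twice, l1 once
```

The three projectors are fixed, so only the three moduli (plus the spatial orientation) vary from one cubic material to another, in line with the six-parameter description above.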
4.2.7. Transversely isotropic material in 3D
As indicated, these materials have in 3D the symmetry of a circular cylinder. To simplify the Kelvin matrix by group reduction, a rotation with only two rotation axis parameters is required (, as no rotation around the cylinder axis is necessary), such that the cylinder axis aligns with the three-axis [35]; with free parameters (i.e., the total number of parameters is ), the matrix takes the form:
This Kelvin matrix generally has [33] four independent eigenvalues, resp. Kelvin moduli; two of them are of multiplicity two. One is obviously with the fixed two-dimensional eigen-space spanned by the vectors and . The other double eigenvalue is with the fixed two-dimensional eigen-space spanned by the vectors (cf. equation (127)) and . Thus one needs log-eigenvalue parameters to generate the diagonal eigenvalue matrix: , where again the notation means that the four distinct eigenvalues are placed in the proper six locations on the diagonal.
As for the tetragonal material above, the other three eigenvectors come from the diagonalisation of the upper left block in equation (83), which is identical to that block for the tetragonal material. The eigenvalue and its eigenvector, which can be found by simple inspection [33], were already noted above. The other eigenvectors for the two simple eigenvalues of depend on an angle-like strain-distribution parameter , and have the same general form [33] as for the tetragonal material, equation (81), giving and . This finally yields the eigenvector matrix . Thus a transversely isotropic material, including its orientation in physical space, is completely described by the parameters , and the final two-step rotation procedure rotates the Kelvin matrix to the desired coordinate system.
4.2.8. Isotropic material in 3D
As is well known, these materials have in 3D the symmetry of a sphere. No simplification of the Kelvin matrix by the group reduction process is possible, and no spatial rotations are required (). The Kelvin matrix has the same form in any coordinate system, with free parameters (Cowin and Mehrabadi [35]):
There is one eigenvalue with multiplicity five, , with G the well-known shear modulus, and its fixed five-dimensional eigen-space of shears spanned by
The simple Kelvin modulus , where K is the well-known bulk modulus, has the eigenvector , cf. equation (126), i.e., volume change. Thus an isotropic material is completely described by the log-eigenvalue parameters , which generate the diagonal eigenvalue matrix: ; here again the notation means that the two distinct eigenvalues are placed in the proper six locations on the diagonal. The eigenvector matrix V has the fixed () columns , and no final spatial rotation is needed.
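The same projector construction for the isotropic case, with assumed values of K and G, reproduces the two Kelvin moduli 3K (simple, volumetric) and 2G (multiplicity five, shear):

```python
import numpy as np

K, G = 10.0, 3.0                                # assumed bulk and shear moduli
n = np.array([1, 1, 1, 0, 0, 0]) / np.sqrt(3)   # volumetric direction
P_vol = np.outer(n, n)                          # rank-one volumetric projector
P_dev = np.eye(6) - P_vol                       # five-dimensional shear eigenspace

C = 3 * K * P_vol + 2 * G * P_dev               # isotropic Kelvin matrix
w = np.sort(np.linalg.eigvalsh(C))              # 2G five times, 3K once
```

Since both projectors are fixed, the isotropic material is indeed fully described by the two (log-)moduli, with no orientation parameters.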
4.3. Example
We consider the average orthotropic elastic properties of human cortical femoral bone in 3D, obtained from measurements on 60 specimens by Ashman et al. [183]. The coefficients are listed in Table 1. From these, the compliance matrix—the inverse of the elasticity matrix—is constructed in Voigt notation, as shown in equation (6) of Ashman et al. [183]. Inverting the compliance matrix yields the elasticity matrix. Finally, multiplying the lower-right block of diagonal elements by a factor of 2 yields the Kelvin matrix of an orthotropic material in block-diagonal form , as depicted in equation (75).
Orthotropic material parameters of cortical femoral bone.
Young’s modulus (GPa)
Poisson’s ratio
Shear modulus (GPa)
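The conversion chain of the example, from engineering constants to Voigt compliance, to the elasticity matrix, and finally to the Kelvin matrix, can be sketched as follows. The orthotropic constants used here are hypothetical placeholders standing in for the measured bone values of Table 1:

```python
import numpy as np

# Hypothetical orthotropic engineering constants (GPa); the measured values
# of Table 1 would be substituted here.
E1, E2, E3 = 12.0, 13.4, 20.0
G23, G13, G12 = 6.2, 5.6, 4.5
nu12, nu13, nu23 = 0.38, 0.22, 0.24

# Orthotropic compliance matrix in Voigt notation (symmetric by construction).
S = np.zeros((6, 6))
S[:3, :3] = [[1 / E1, -nu12 / E1, -nu13 / E1],
             [-nu12 / E1, 1 / E2, -nu23 / E2],
             [-nu13 / E1, -nu23 / E2, 1 / E3]]
S[3, 3], S[4, 4], S[5, 5] = 1 / G23, 1 / G13, 1 / G12

C_voigt = np.linalg.inv(S)                      # elasticity matrix in Voigt notation

# Voigt -> Kelvin: scale shear rows and columns by sqrt(2); for the
# block-diagonal orthotropic case this multiplies the lower-right
# diagonal entries by exactly 2, as stated in the text.
T = np.diag([1, 1, 1, np.sqrt(2), np.sqrt(2), np.sqrt(2)])
C_kelvin = T @ C_voigt @ T
```

The row-and-column scaling is the general Voigt-to-Kelvin transformation; it reduces to the factor-of-2 rule only because the orthotropic stiffness has no normal–shear coupling.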
A key objective of this study is to decouple the modelling of scaling, eigen-strain, and orientation parameters in the Kelvin matrix C—see equation (19) for the spectral decomposition. To represent variations in these aspects, we model a one-dimensional field of the orthotropic elasticity matrices over the spatial domain .
To this end, the SPD matrix at the left end of the domain at remains the previously defined , which for simplicity is referred to as A. In contrast, at , the Kelvin matrix, denoted B, is defined separately for each variation. For scaling, the eigenvalues and of A are multiplied by a factor of 5. For rotation in physical space, the matrix B is obtained by rotating A by an angle or in radians in the X-Z plane in the clockwise direction, i.e., the rotation vector is . For the eigen-strain variation, the upper-left block matrix of A undergoes a similar rotation, where the rotation-like vector reads . The spatial field is built by interpolating from A to B using the elasticity metric as defined in equation (46) and ‘Elasticity distance’ in Appendix 1. This means that for scaling-based interpolation the metric reduces to the log-Euclidean one , whereas for rotation- and eigen-strain-based fluctuations the interpolation depends on the Riemannian distance for special orthogonal matrices, i.e., and , respectively. In addition, for comparison, we also interpolate the SPD matrices using the usual Euclidean metric , detailed in ‘Fréchet mean’ in Appendix 1.
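For the scaling variation, where the elasticity metric reduces to the log-Euclidean distance, the interpolation admits a simple closed form, C(t) = exp((1-t) log A + t log B). A minimal sketch with toy 3×3 endpoint matrices standing in for the Kelvin matrices A and B (the matrix logarithm and exponential are implemented via the symmetric eigendecomposition):

```python
import numpy as np

def sym_logm(M):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.log(w)) @ U.T

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.exp(w)) @ U.T

def interp_log_euclidean(A, B, t):
    """Log-Euclidean geodesic C(t) = exp((1-t) log A + t log B)."""
    return sym_expm((1 - t) * sym_logm(A) + t * sym_logm(B))

def interp_euclidean(A, B, t):
    """Straight-line (Euclidean) interpolation for comparison."""
    return (1 - t) * A + t * B

# Toy endpoints: B is a pure scaling of A, as in the scaling variation.
A = np.diag([1.0, 2.0, 3.0])
B = 5.0 * A
C_mid = interp_log_euclidean(A, B, 0.5)
```

Along this geodesic the determinant interpolates geometrically, det C(t) = (det A)^(1-t) (det B)^t, rather than linearly as under the Euclidean metric.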
The interpolated Kelvin matrices are visualized by analysing a characteristic elastic parameter, Young’s modulus (), for a given loading direction, as shown in Nordmann et al. [184]. Figure 3 illustrates the interpolation under scaling variations at given spatial points x, while Figures 4 and 5 depict the effects of variations in orientation and eigen-strain, respectively. In addition, each figure demonstrates the interpolation between A and B using both the Riemannian (non-Euclidean) and the Euclidean metric.
Visualization of Young’s modulus for the interpolation of Kelvin matrices under varying scaling parameters: (a) Riemannian metric for scaling (); (b) Euclidean metric ().
Visualization of Young’s modulus for the interpolation of Kelvin matrices under varying orientation parameters: (a) Riemannian metric for rotation (); (b) Euclidean metric ().
Visualization of Young’s modulus for the interpolation of Kelvin matrices under varying eigen-strain parameters: (a) Riemannian metric for eigen-strain (); (b) Euclidean metric ().
In all three cases, we observe notable differences between Euclidean and non-Euclidean interpolation. To quantify these differences, we compute the determinant of the Kelvin matrices C, which serves as a volumetric measure. The comparison plots for the three variations—scaling, rotation, and eigen-strain—are presented in Figure 6. In the case of scaling-based interpolation (Figure 6a), the determinant of interpolants based on the Euclidean metric is consistently higher than that of interpolants based on the log-Euclidean distance . In contrast, in Figures 6b and 6c, for the respective rotation and eigen-strain variations, the Euclidean interpolation suffers from the so-called swelling effect, wherein the determinant of any interior interpolant exceeds that of the endpoint matrices. Notably, for fixed scaling parameters, the determinant of the Riemannian interpolants remains unchanged.
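The swelling effect can be reproduced in a toy 2×2 setting standing in for the 6×6 Kelvin matrices: two SPD matrices that differ only by a rotation have equal determinant, yet their Euclidean midpoint has a strictly larger one, whereas a rotation-based interpolant would keep the determinant fixed:

```python
import numpy as np

# Two SPD matrices with the same eigenvalues (hence the same determinant),
# differing only by a rotation, as in the orientation variation above.
D = np.diag([10.0, 1.0])
th = np.pi / 3
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
A, B = D, R @ D @ R.T

M_euc = 0.5 * (A + B)                           # Euclidean midpoint: det swells
```

This is the essential reason why the Euclidean mean is unsuitable for orientation and eigen-strain variations, while the Riemannian interpolant in Figure 6 keeps the determinant constant.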
Comparison of the determinant of interpolated Kelvin matrices using Euclidean and Riemannian metrics for variations in scaling, rotation, and eigen-strain: (a) Scaling; (b) Rotation; (c) Eigen-strain.
For more discussion on this, we refer the reader to Jung et al. [70], where the authors make detailed comparisons for second-order diffusion tensors.
5. Conclusion
A novel framework for the probabilistic modelling of fourth-order random material tensor fields has been introduced, with a focus on an important class of SPD tensors, such as elasticity tensors.
A crucial aspect of material laws governed by such tensors is their invariance under spatial symmetries. By employing the Kelvin notation and representing elasticity tensors as Kelvin matrices, their essential tensorial properties are preserved, so that the spatial symmetries and invariances of elasticity tensors under rotations in physical space may reduce the Kelvin matrix to a block-diagonal form, the so-called group reduction. Within each group-invariant subspace of this block-diagonal form, the Kelvin matrix is further simplified to a diagonal form through spectral decomposition.
The group reduction, combined with the spectral decomposition, forms the foundation of our approach. In this context, the stochastic representation separates the modelling of strength (Kelvin moduli), eigen-strain distribution, and spatial orientation, allowing for independent control of each component. This framework enables the consideration of random SPD matrices where specific invariance properties are known for both the entire population and the mean, or where the ensemble exhibits a ‘lower’ class of invariance than the mean. While the stochastic modelling of Kelvin elasticity matrices adheres to the principles established earlier for second-order tensors in Shivanand et al. [4], the inclusion of eigen-strain distributions represents a crucial extension of these methods to higher-order tensors.
Even-order SPD tensors form an open convex cone within the vector space of physically symmetric tensors, meaning they constitute a differentiable manifold, but not a vector space. In the modelling of random tensor fields, it is often preferable to find an unconstrained representation in a linear vector space. To achieve this, we consider the matrix exponential and logarithm mappings. Within the proposed probabilistic framework, Lie groups are associated with diagonal SPD matrices and subgroups of orthogonal matrices, allowing for the independent modelling of different aspects of the random tensor. The product Lie group representation is related to the associated Lie algebras for each individual component, the mapping between them being the component exponential map, which subsequently yields the full representation of the Kelvin matrix. Furthermore, the Lie representation as an n-dimensional random field of Kelvin matrix parameters is outlined for all symmetry classes, including all eight elastic symmetry classes in 3D and the four classes in 2D. In summary, the dimension n ranges from 2 to 6 in 2D, and from 2 to 21 in 3D, for isotropic to triclinic materials, respectively.
Given the nature of SPD tensors, calculating the average or mean requires special attention. The conventional Euclidean mean is often not suitable, as it exhibits an undesirable property known as the swelling effect. Instead, we focus on the more general non-Euclidean Fréchet mean, which is based on distance measures other than the Euclidean metric. Accordingly, we establish criteria for defining an appropriate mean and metric specifically for elasticity. The term ‘elasticity metric’ or ‘elasticity distance’ is proposed for this metric, which provides fine control over measuring differences in spatial orientation, eigen-strain distribution, and Kelvin moduli, allowing these factors to be combined with adjustable weights. Based on this distance, one may then formulate the corresponding Fréchet mean, referred to as the ‘elasticity mean’. As a demonstration, we model a one-dimensional spatial field of orthotropic Kelvin matrices using interpolation based on the elasticity metric. The interpolated matrices are visualized by analysing their corresponding Young’s modulus. The results reveal a notable difference between Riemannian and Euclidean metric-based interpolation.
While this research focused on fourth-order SPD tensors, the stochastic modelling approach outlined here is sufficiently general to be applicable to any even-order physically symmetric tensors, including those that are also positive definite.
Footnotes
Appendix 1
Appendix 2
Appendix 3
Acknowledgements
The authors gratefully acknowledge that the research reported in this publication was partly supported by funding from the Deutsche Forschungsgemeinschaft (DFG) (sks), and by a Gay-Lussac Humboldt Research Award (hgm).
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
1.
de Groot SR, Mazur P. Non-equilibrium thermodynamics. Mineola, NY: Dover, 1984.
2.
Ting TCT. Positive definiteness of anisotropic elastic constants. Math Mech Solids 1996; 1: 301–314.
3.
Nikabadze MU. Eigenvalue problems of a tensor and a tensor-block matrix (TMB) of any even rank with some applications in mechanics. In: Altenbach H, Forest S (eds) Generalized continua as models for classical and advanced materials, Advanced Structured Materials, vol. 42. Cham: Springer, 2016, pp. 279–317.
4.
Shivanand SK, Rosić B, Matthies HG. Stochastic modelling of symmetric positive-definite material tensors. J Comput Phys 2024; 505: 112883.
5.
Richards P. Symmetry in numerical matrix calculations. SIAM Rev 1963; 5(3): 234–248.
6.
Fässler A, Stiefel E. Group theoretical methods and their applications. Basel: Birkhäuser, 1992.
7.
Jones HF. Groups, representations and physics. Oxfordshire: Taylor & Francis, 1998.
8.
Newnham RE. Properties of materials: anisotropy, symmetry, structure. Oxford: Oxford University Press, 2005.
9.
Slawinski MA. Waves and rays in elastic continua. 2nd ed. Singapore: World Scientific, 2007.
10.
Tinder RF. Tensor properties of solids: phenomenological development of the tensor properties of crystals. San Rafael, CA: Morgan & Claypool, 2008.
11.
Malyarenko A, Ostoja-Starzewski M. Tensor-valued random fields for continuum physics. Cambridge: Cambridge University Press, 2019.
12.
Cowin S, Mehrabadi M. On the identification of material symmetry for anisotropic elastic materials. Quart J Mech Appl Math 1987; 40: 451–476.
13.
Bóna A, Bucataru I, Slawinski MA. Coordinate-free characterization of the symmetry classes of elasticity tensors. J Elast 2007; 87(2–3): 109–132.
14.
Forte S, Vianello M. Symmetry classes for elasticity tensors. J Elast 1996; 43: 81–108.
Moakher M. Fourth-order Cartesian tensors: old and new facts, notions and applications. Quart J Mech Appl Math 2008; 61(2): 181–203.
17.
Nikabadze MU. On some problems of tensor calculus. I. J Math Sci 2009; 161: 668–697.
18.
Nikabadze MU. Topics on tensor calculus with applications to mechanics. J Math Sci 2017; 225: 1–194.
19.
Gupta A, Nagar D. Matrix variate distributions. Monographs and Surveys in Pure and Applied Mathematics. Oxfordshire: Taylor & Francis, 1999.
20.
Schwartzman A. Random ellipsoids and false discovery rates: statistics for diffusion tensor imaging data. PhD Thesis, Stanford University, Stanford, CA, 2006.
21.
Soize C. A nonparametric model of random uncertainties for reduced matrix models in structural dynamics. Probabilistic Eng Mech 2000; 15(3): 277–294.
22.
Guilleminot J, Soize C, Ghanem RG. Stochastic representation for anisotropic permeability tensor random fields. Int J Numer Anal Meth Geomech 2011; 36(13): 1592–1608.
23.
Love AEH. A treatise on the mathematical theory of elasticity. 4th ed. Mineola, NY: Dover, 1944.
24.
Todhunter I, Pearson K. A history of the theory of elasticity and of the strength of materials: from Galilei to Lord Kelvin, vol. 2. Mineola, NY: Dover, 1960.
25.
Thomson W (Lord Kelvin). XXI. Elements of a mathematical theory of elasticity. Part 1, on stresses and strains. Philos Trans R Soc 1856; 166: 481–498.
Fedorov FI. Theory of elastic waves in crystals. New York: Plenum Press, 1968.
28.
Rychlewski J. On Hooke’s law. Prikl Matem Mekhan 1984; 48: 420–435, translated into English as Journal of Mathematics and Mechanics (1984): 303–314.
29.
Ostrosablin N. On the structure of the tensor of elasticity moduli. Elastic eigenstates. Contin Dynam 1984; 66: 11–125.
30.
Ostrosablin N. On the structure of the tensor of elasticity moduli and classification of anisotropic materials. Zh Prikl Mekh Tekh Fiz 1986; 4: 127–135.
31.
Ostrosablin N. Elasticity moduli and eigenstates for materials of crystallographic syngonies. Contin Dynam 1986; 75: 113–125.
32.
Mehrabadi M, Cowin S. Eigentensors of linear anisotropic elastic materials. Quart J Mech Appl Math 1990; 43: 15–41.
33.
Sutcliffe S. Spectral decomposition of the elasticity tensor. J Appl Mech 1992; 59: 762–773.
34.
Cowin S, Mehrabadi M. The structure of the linear anisotropic elastic symmetries. J Mech Phys Solids 1992; 40: 1459–1471.
35.
Cowin S, Mehrabadi M. Anisotropic symmetries of linear elasticity. ASME Appl Mech Rev 1995; 48: 247–285.
36.
Cowin S, Mehrabadi M. The mirror symmetries of anisotropic elasticity. In: Parker D, England A (eds) IUTAM symposium on anisotropy, inhomogeneity and nonlinearity in solid mechanics, Solid Mechanics and its Applications, vol. 39. Cham: Springer, 1995, pp. 31–36.
37.
Rychlewski J. A qualitative approach to Hooke’s tensor. Part I. Arch Mech 2000; 52: 737–759.
38.
Rychlewski J. A qualitative approach to Hooke’s tensor. Part II. Arch Mech 2001; 53: 45–63.
39.
Bóna A, Bucataru I, Slawinski MA. Material symmetries of elasticity tensors. Quart J Mech Appl Math 2004; 57: 584–598.
40.
Annin BD, Ostrosablin NI. Anisotropy of elastic properties of materials. J Appl Mech Tech Phys 2008; 49: 998–1014.
41.
Kowalczyk-Gajewska K, Ostrowska-Maciejewska J. Review on spectral decomposition of Hooke’s tensor for all symmetry groups of linear elastic material. Engng Trans 2009; 57(3–4): 145–183.
42.
Nikabadze MU. Eigenvalue problem for tensors of even rank and its applications in mechanics. J Math Sci 2017; 221: 174–204.
Helbig K. Kelvin and the early history of seismic anisotropy. In: Fjær E, Holt RM, Rathore JS (eds) Seismic anisotropy, general series, chapter 2. Houston, TX: Society of Exploration Geophysicists, 1996, pp. 15–36.
45.
Helbig K. Review paper: what Kelvin might have written about elasticity. Geophys Prospect 2013; 61: 1–20.
46.
Dinçkal C. Decomposition of elastic constant tensor into orthogonal parts. PhD Thesis, Graduate School of Natural and Applied Sciences, Middle East Technical University, Ankara, 2010.
47.
Dinçkal C, Akgöz Y. Decomposition of elastic constant tensor into orthogonal parts. Int J Eng Sci Tech 2010; 2(6): 22–46.
48.
Dinçkal C. Irreducible decomposition of elastic constant tensor for anisotropic engineering materials. Mat Method Tech 2012; 6(1): 71–92.
49.
Olive M, Auffray N. Symmetry classes for even order tensors. Math Mech Complex Syst 2013; 1: 177–210.
50.
Auffray N, Kolev B, Petitot M. On anisotropic polynomial relations for the elasticity tensor. J Elast 2014; 115: 77–103.
51.
Malyarenko A, Ostoja-Starzewski M. A random field formulation of Hooke’s law in all elasticity classes. J Elast 2017; 127(2): 269–302.
52.
Backus G. A geometric picture of anisotropic elasticity tensors. Rev Geophys Space Phy 1970; 8(3): 633–671.
53.
Walpole LJ. Fourth-rank tensors of the thirty-two crystal classes: multiplication tables. Proc R Soc London 1984; 391: 149–179.
54.
Baerheim R. Harmonic decomposition of the anisotropic elasticity tensor. Q J Mech Appl Math 1993; 46(3): 391–418.
55.
Baerheim R, Helbig K. Decomposition of the anisotropic elastic tensor in base tensors. Can J Explor Geophys 1993; 29(1): 41–50.
56.
Baerheim R. Classification of symmetry by means of Maxwell multipoles. Quart J Mech Appl Math 1998; 51(1): 73–104.
57.
Forte S, Vianello M. Functional bases for transversely isotropic and transversely hemitropic invariants of elasticity tensors. Q J Mech Appl Math 1998; 51: 543–552.
58.
Zheng QS, Zou WN. Irreducible decompositions of physical tensors of high orders. J Eng Math 2000; 37: 273–288.
59.
Desmorat R. Décomposition de Kelvin et concept de contraintes effectives multiples pour les matériaux anisotropes. C R Mecanique 2009; 337: 733–738.
60.
Moakher M. The algebra of fourth-order tensors with application to diffusion MRI. In: Laidlaw D, Weickert J (eds) Visualization and processing of tensor fields. Mathematics and Visualization. Cham: Springer, 2009, pp. 57–80.
61.
Itin Y, Hehl F. The constitutive tensor of linear elasticity: its decompositions, Cauchy relations, null Lagrangians, and wave propagation. J Math Phys 2013; 54: 042903.
62.
Browaeys J, Chevrot S. Decomposition of the elastic tensor and geophysical applications. Geophys J Int 2004; 159: 667–678.
63.
Olive M, Kolev B, Auffray N. A minimal integrity basis for the elasticity tensor. Arch Rational Mech Anal 2017; 226: 1–31.
64.
Olive M, Kolev B, Desmorat B, et al. Harmonic factorization and reconstruction of the elasticity tensor. J Elast 2018; 132: 67–101.
65.
Desmorat R, Auffray N, Desmorat B, et al. Minimal functional bases for elasticity tensor symmetry classes. J Elast 2022; 147: 201–228.
66.
Malyarenko A, Ostoja-Starzewski M. Spectral expansion of three-dimensional elasticity tensor random fields. In: Silvestrov S, Rančić M (eds) Engineering mathematics I, vol. 178. Cham: Springer, 2016, pp. 281–300.
67.
Malyarenko A, Ostoja-Starzewski M. Tensor random fields in continuum mechanics. In: Altenbach H, Öchsner A (eds) Encyclopedia of continuum mechanics. Berlin: Springer, 2018, pp. 1–9.
68.
Nye JF. Physical properties of crystals: their representation by tensors and matrices. Oxford: Oxford University Clarendon Press, 1984.
69.
Malgrange C, Ricolleau C, Schlenker M. Symmetry and physical properties of crystals. Cham: Springer, 2014.
70.
Jung S, Schwartzman A, Groisser D. Scaling-rotation distance and interpolation of symmetric positive-definite matrices. SIAM J Matrix Anal Appl 2015; 36(3): 1180–1201.
71.
Groisser D, Jung S, Schwartzman A. Geometric foundations for scaling-rotation statistics on symmetric positive definite matrices: minimal smooth scaling-rotation curves in low dimensions. Electron J Stat 2017; 17: 1092–1159.
72.
Groisser D, Jung S, Schwartzman A. Foundations for a scaling-rotation geometry in the space of symmetric positive-definite matrices. arXiv: 1702.03237 [math.MG], 2017. DOI: 10.48550/arXiv.1702.03237.
73.
Pennec X, Fillard P, Ayache N. A Riemannian framework for tensor computing. Research Report RR-5255, INRIA, 2004. https://hal.inria.fr/inria-00070743
74.
Ando T, Li CK, Mathias R. Geometric means. Linear Algebra Appl 2004; 385: 305–334.
75.
Moakher M. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J Matrix Anal Appl 2005; 26(3): 735–747.
76.
Arsigny V, Fillard P, Pennec X, et al. Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magn Reson Med 2006; 56(2): 411–421.
77.
Arsigny V, Fillard P, Pennec X, et al. Geometric means in a novel vector space structure of symmetric positive definite matrices. SIAM J Matrix Anal Appl 2007; 29(1): 328–347.
78.
Dryden IL, Koloydenko A, Zhou D. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. Ann Appl Stat 2009; 3: 1102–1123.
79.
Dryden IL, Pennec X, Peyrat JM. Power Euclidean metrics for covariance matrices with application to diffusion tensor imaging. arXiv: 1009.3045 [stat.ME], 2010. DOI: 10.48550/arXiv.1009.3045.
80.
Wang Y, Salehian H, Cheng G, et al. Tracking on the product manifold of shape and orientation for tractography from diffusion MRI. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), Columbus, OH, 23–28 June 2014, pp. 3051–3056.
81.
Fujii JI, Seo Y. On the Ando–Li–Mathias mean and the Karcher mean of positive definite matrices. Linear Multilinear Algebra 2015; 63(3): 639–649.
82.
Feragen A, Fuster A. Geometries and interpolations for symmetric positive definite matrices. In: Schultz T, Özarslan E, Hotz I (eds) Modeling, analysis, and visualization of anisotropy. Mathematics and Visualization. Cham: Springer, 2017, pp. 85–113.
83.
Schwartzman A. Lognormal distributions and geometric averages of symmetric positive definite matrices. International Statistical Review 2016; 84(3): 456–486.
84.
Nielsen F, Bhatia R (eds). Matrix information geometry. Berlin: Springer, 2013.
85.
Thanwerdas Y, Pennec X. Exploration of balanced metrics on symmetric positive definite matrices. arXiv: 1909.03852 [math.DG], 2019. DOI: 10.48550/arXiv.1909.03852.
86.
Thanwerdas Y, Pennec X. Is affine-invariance well defined on SPD matrices? A principled continuum of metrics. arXiv: 1906.01349 [math.DG], 2019. DOI: 10.48550/arXiv.1906.01349.
87.
Thanwerdas Y, Pennec X. O(n)-invariant Riemannian metrics on SPD matrices. arXiv: 2109.05768 [math.DG], 2021. DOI: 10.48550/arXiv.2109.05768.
88.
Jung S, Rooks B, Groisser D, et al. Averaging symmetric positive-definite matrices on the space of eigen-decompositions. arXiv: 2306.12025 [stat.ME], 2023. DOI: 10.48550/arXiv.2306.12025.
89.
Groisser D, Jung S, Schwartzman A. A genericity property of Fréchet sample means on Riemannian manifolds. arXiv: 2309.13823 [math.PR], 2023. DOI: 10.48550/arXiv.2309.13823.
90.
Pennec X. Intrinsic statistics on Riemannian manifolds: basic tools for geometric measurements. J Math Imaging Vis 2006; 25(1): 127–154.
91.
Hauberg S. Principal curves on Riemannian manifolds. IEEE Trans Pattern Anal Mach Intell 2015; 38(9): 1915–1921.
92.
Sommer S, Fletcher T, Pennec X. Introduction to differential and Riemannian geometry. In: Pennec X, Sommer S, Fletcher T (eds) Riemannian geometric statistics in medical image analysis. Amsterdam: Elsevier, 2020, pp. 3–37.
93.
Pennec X, Sommer S, Fletcher T (eds). Riemannian geometric statistics in medical image analysis. The Elsevier and MICCAI Society Book Series. New York: Academic Press, 2020.
94.
Guigui N, Miolane N, Pennec X. Introduction to Riemannian geometry and geometric statistics: from basic theory to implementation with Geomstats. Found Trends Mach Learn 2023; 16(3): 329–493.
95.
Thanwerdas Y, Pennec X. O(n)-invariant Riemannian metrics on SPD matrices. Linear Algebra Appl 2023; 661: 163–201.
96.
Moakher M. On the averaging of symmetric positive-definite tensors. J Elast 2006; 82: 273–296.
97.
Moakher M, Norris AN. The closest elastic tensor of arbitrary symmetry to an elasticity tensor of lower symmetry. J Elast 2006; 85: 215–263.
98.
Bucataru I, Slawinski MA. Invariant properties for finding distance in space of elasticity tensors. arXiv: 0712.1082 [cond-mat.mtrl-sci], 2008. DOI: 10.48550/arXiv.0712.1082.
99.
Weber M, Glüge R, Bertram A. Distance of a stiffness tetrad to the symmetry classes of linear elasticity. Int J Solids Struct 2019; 156–157: 281–293.
Guilleminot J, Soize C. Non-Gaussian positive-definite matrix-valued random fields with constrained eigenvalues: application to random elasticity tensors with uncertain material symmetries. Int J Numer Methods Eng 2011; 88(11): 1128–1151.
106.
Guilleminot J, Soize C. Generalized stochastic approach for constitutive equation in linear elasticity: a random matrix model. Int J Numer Methods Eng 2012; 90(5): 613–635.
107.
Guilleminot J, Soize C. Stochastic modeling of anisotropy in multiscale analysis of heterogeneous materials: a comprehensive overview on random matrix approaches. Mech Mat 2012; 44: 35–46.
108.
Guilleminot J, Soize C. On the statistical dependence for the components of random elasticity tensors exhibiting material symmetry properties. J Elasticity 2013; 111(2): 109–130.
109.
Guilleminot J, Soize C. Stochastic model and generator for random fields with symmetry properties: application to the mesoscopic modeling of elastic random media. Multiscale Model Simul 2013; 11(3): 840–870.
110.
Guilleminot J, Soize C. Itô SDE-based generator for a class of non-Gaussian vector-valued random fields in uncertainty quantification. SIAM J Sci Comput 2014; 36(6): A2763–A2786.
111.
Nouy A, Soize C. Random fields representations for stochastic elliptic boundary value problems and statistical inverse problems. Eur J Appl Math 2014; 25(3): 339–373.
112.
Staber B, Guilleminot J. Stochastic modeling and generation of random fields of elasticity tensors: a unified information-theoretic approach. C R Mécanique 2017; 345: 399–416.
113.
Soize C. Stochastic elliptic operators defined by non-Gaussian random fields with uncertain spectrum. arXiv: 2106.07706 [cs.NA], 2021. DOI: 10.48550/arXiv.2106.07706.
114.
Ostoja-Starzewski M. Microstructural randomness and scaling in mechanics of materials. New York: Chapman and Hall/CRC, 2007.
115.
Malyarenko A, Ostoja-Starzewski M. Statistically isotropic tensor random fields: correlation structures. Math Mech Complex Syst 2014; 2(2): 209–231.
116.
Matthies HG. Uncertainty quantification with stochastic finite elements. In: Stein E, de Borst R, Hughes TJR (eds) Encyclopedia of computational mechanics, vol. Part 1: fundamentals. Hoboken, NJ: John Wiley, 2007.
117.
Zhang X, Malyarenko A, Porcu E, et al. Elastodynamic problem on tensor random fields with fractal and Hurst effects. Meccanica 2022; 57: 957–970.
118.
Grigoriu M. Microstructure models and material response by extreme value theory. SIAM ASA J Uncertain Quantif 2016; 4: 190–217.
119.
Lang S. Differential and Riemannian manifolds, Graduate Texts in Mathematics, vol. 160. Cham: Springer, 1995.
120.
Mehrabadi M, Cowin S, Jaric J. Six-dimensional orthogonal tensor representation of the rotation about an axis in three dimensions. Int J Solids Struct 1995; 32(3/4): 439–449.
121.
Klapper H, Hahn T. Point-group symmetry and physical properties of crystals. In: Hahn T (ed.) International tables for crystallography, vol. A. Cham: Springer, 2006, pp. 804–808.
122.
Itin Y, Hehl F. Irreducible decompositions of the elasticity tensor under the linear and orthogonal groups and their physical consequences. J Phys Conf Ser 2015; 597: 012046.
123.
Matthies HG, Ohayon R. Analysis of parametric models: linear methods and approximations. Adv Comput Math 2019; 45: 2555–2586.
124.
Matthies HG, Ohayon R. Parametric models analysed with linear maps. Adv Model Simul Eng Sci 2020; 7: 41.
125.
Matthies HG, Keese A. Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations. Comput Methods Appl Mech Eng 2005; 194(12–16): 1295–1331.
126.
Matthies HG. Stochastic finite elements: computational approaches to stochastic partial differential equations. Z Angew Math Mech 2008; 88(11): 849–873.
127.
Mosegaard K, Sambridge M. Monte Carlo analysis of inverse problems. Inverse Probl 2002; 18(3): 29–54.
128.
Tarantola A. Inverse problem theory and methods for model parameter estimation. Philadelphia, PA: SIAM, 2005.
129.
Stuart AM. Inverse problems: a Bayesian perspective. Acta Numerica 2010; 19: 451–559.
130.
FittDWyattHWoolleyTE, et al. Uncertainty quantification of elastic material responses: testing, stochastic calibration and Bayesian model selection. Mech Soft Mater2019; 1: 13.
131.
RosićBVKučerováASýkoraJ, et al. Parameter identification in a probabilistic setting. Eng Struct2013; 50: 179–196.
132.
RosićBSýkoraJPajonkO, et al. Comparison of numerical approaches to Bayesian updating. In: IbrahimbegovicA (ed.) Computational methods for solids and fluids, Computational Methods in Applied Sciences, vol. 41. Cham: Springer, 2016, pp. 427–461.
133.
MatthiesHGZanderERosićB, et al. Inverse problems in a Bayesian setting. In: IbrahimbegovicA (ed.) Computational methods for solids and fluids, Computational Methods in Applied Sciences, vol. 41. Cham: Springer, 2016, pp. 245–286.
134.
MatthiesHGZanderERosićBV, et al. Parameter estimation via conditional expectation: a Bayesian inversion. Adv Model Simul Eng Sci2016; 3: 24.
135.
MatthiesHG. Uncertainty quantification and Bayesian inversion. In: SteinEde BorstRHughesTJR (eds) Encyclopedia of computational mechanics, vol. Part 1: fundamentals, 2nd ed.Hoboken, NJ: John Wiley, 2018, pp. 1–51.
136.
SavvasDPapaioannouIStefanouG. Bayesian identification and model comparison for random property fields derived from material microstructure. Comput Methods Appl Mech Engrg2020; 365: 113026.
137. Pajonk O, Rosić BV, Litvinenko A, et al. A deterministic filter for non-Gaussian Bayesian estimation. Physica D Nonlinear Phenom 2012; 241(7): 775–788.
138. Rosić BV, Litvinenko A, Pajonk O, et al. Sampling-free linear Bayesian update of polynomial chaos representations. J Comput Phys 2012; 231(17): 5761–5787.
139. Ibrahimbegovic A, Matthies HG, Karavelić E. Reduced model of macro-scale stochastic plasticity identification by Bayesian inference: application to quasi-brittle failure of concrete. Comput Methods Appl Mech Eng 2020; 372: 113428.
140. do Carmo MP. Riemannian geometry. Boston, MA: Birkhäuser, 1992.
141. Spivak M. A comprehensive introduction to differential geometry, vol. 1. 3rd ed. Berkeley, CA: Publish or Perish, 1999.
142. Jost J. Riemannian geometry and geometric analysis. 4th ed. Cham: Springer, 2005.
143. Alexandrino MM, Bettiol RG. Lie groups and geometric aspects of isometric actions. Cham: Springer, 2015.
144. Jammalamadaka SR, SenGupta A. Topics in circular statistics. Series on multivariate analysis. River Edge, NJ: World Scientific, 2001.
145. Jammalamadaka S, Kozubowski T. A general approach for obtaining wrapped circular distributions via mixtures. Sankhya Indian J Stat 2017; 79: 133–157.
146. Oller JM, Corcuera JM. Intrinsic analysis of statistical estimation. Ann Stat 1995; 23(5): 1562–1581.
147. Moakher M, Zéraï M. The Riemannian geometry of the space of positive-definite matrices and its application to the regularization of positive-definite matrix-valued data. J Math Imaging Vis 2011; 40: 171–187.
148. Fletcher T. Statistics on manifolds. In: Pennec X, Sommer S, Fletcher T (eds) Riemannian geometric statistics in medical image analysis. Amsterdam: Elsevier, 2020, pp. 39–74.
Rao Jammalamadaka S, SenGupta A. Topics in circular statistics. Singapore: World Scientific, 2001.
151. Jaynes ET. Probability theory: the logic of science. Cambridge: Cambridge University Press, 2003.
152. Chen H. Initialization for NORTA: generation of random vectors with specified marginals and correlations. INFORMS J Comput 2001; 13(4): 312–331.
153. Ghosh S, Henderson SG. Behavior of the NORTA method for correlated random vector generation as the dimension increases. ACM Trans Model Comput Simul 2003; 13(3): 276–294.
154. Torquato S. Random heterogeneous materials. Cham: Springer, 2001.
155. Matérn B. Spatial variation. Cham: Springer, 1986.
156. Cressie NAC. Statistics for spatial data. Wiley Series in Probability and Statistics. Hoboken, NJ: John Wiley, 1993.
157. Gneiting T, Kleiber W, Schlather M. Matérn cross-covariance functions for multivariate random fields. J Am Stat Assoc 2010; 105(491): 1167–1177.
158. Litvinenko A, Keyes D, Khoromskaia V, et al. Tucker tensor analysis of Matérn functions in spatial statistics. Comput Methods Appl Math 2019; 19(1): 101–122.
Ciarlet PG. The finite element method for elliptic problems. Amsterdam: North-Holland, 1978.
161. Oden JT, Demkowicz L. Applied functional analysis. New York: Chapman and Hall, 2010.
162. Oden JT, Reddy JN. An introduction to the mathematical theory of finite elements. Mineola, NY: Dover, 2012.
163. Ciarlet PG. Linear and nonlinear functional analysis with applications. Philadelphia, PA: SIAM, 2013.
164. Babuška I, Nobile F, Tempone R. A stochastic collocation method for elliptic partial differential equations with random input data. SIAM J Numer Anal 2007; 45(3): 1005–1034.
165. Lord GJ, Powell CE, Shardlow T. An introduction to computational stochastic PDEs. Cambridge: Cambridge University Press, 2014.
166. Dũng D, Nguyen VK, Schwab C, et al. Analyticity and sparsity in uncertainty quantification for PDEs with Gaussian random field inputs. arXiv: 2201.01912 [math.NA], 2022. DOI: 10.48550/arXiv.2201.01912.
167. Espig M, Hackbusch W, Litvinenko A, et al. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats. Comput Math Appl 2014; 67(4): 818–829.
168. Hoang VH, Schwab C. N-term Wiener chaos approximation rates for elliptic PDEs with lognormal Gaussian random inputs. Math Models Methods Appl Sci 2014; 24(4): 797–826.
169. Bachmayr M, Cohen A, DeVore R, et al. Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients. ESAIM Math Model Numer Anal 2017; 51: 341–363.
170. Bachmayr M, Cohen A, Migliorati G. Representations of Gaussian random fields and approximation of elliptic PDEs with lognormal coefficients. J Fourier Anal Appl 2018; 24: 621–649.
171. Herrmann L, Schwab C. Multilevel quasi-Monte Carlo integration with product weights for elliptic PDEs with lognormal coefficients. ESAIM Math Model Numer Anal 2019; 53: 1507–1552.
172. Grigoriu M. Stochastic systems: uncertainty quantification and propagation. Cham: Springer, 2012.
173. Marzouk Y, Najm H, Rahn L. Stochastic spectral methods for efficient Bayesian solution of inverse problems. J Comput Phys 2007; 224(2): 560–586.
174. Marzouk Y, Xiu D. A stochastic collocation approach to Bayesian inference in inverse problems. Commun Comput Phys 2009; 6(4): 826–847.
175. Cowin S. Continuum mechanics of anisotropic materials. Cham: Springer, 2013.
176. Cowin SC, Mehrabadi MM. Identification of the elastic symmetry of bone and other materials. J Biomech 1989; 22(6/7): 503–515.
177. François MLM, Geymonat G, Berthaud Y. Determination of the symmetries of an experimentally determined stiffness tensor: application to acoustic measurements. Int J Solids Struct 1998; 35(31–32): 4091–4106.
178. Danek T, Kochetov M, Slawinski MA. Effective elasticity tensors in context of random errors. J Elast 2015; 121: 55–67.
179. Danek T, Slawinski MA. On choosing effective elasticity tensors using a Monte Carlo method. Acta Geophys 2015; 63(1): 45–61.
180. Gierlach B, Danek T. On obtaining effective elasticity tensors with entries zeroing method. Geology Geophys Environ 2018; 44(2): 259–274.
181. Dobrilla S, Matthies HG, Ibrahimbegovic A. Considerations on the identifiability of fracture and bond properties of reinforced concrete. Int J Numer Methods Eng 2023; 124: 3662–3686.
182. Ibrahimbegovic A, Matthies HG, Dobrilla S, et al. Synergy of stochastics and inelasticity at multiple scales: novel Bayesian applications in stochastic upscaling and fracture size and scale effects. SN Appl Sci 2022; 4(7): 191.
183. Ashman R, Cowin S, Van Buskirk W, et al. A continuous wave technique for the measurement of the elastic properties of cortical bone. J Biomech 1984; 17(5): 349–361.
Moakher M. Means and averaging in the group of rotations. SIAM J Matrix Anal Appl 2002; 24(1): 1–16.
187. Groisser D, Jung S, Schwartzman A. Uniqueness questions in a scaling-rotation geometry on the space of symmetric positive-definite matrices. Differ Geom Appl 2021; 79: 101798.
188. Cardoso J, Leite F. Exponentials of skew-symmetric matrices and logarithms of orthogonal matrices. J Comput Appl Math 2010; 233(11): 2867–2875.