### Sidebar

As can be seen in the flowchart below, the theoretical delays are then compared with the reduced observed delays in a parameter estimation process, e.g. a classical Gauß-Markov model as described below, Kalman filtering (Herring et al., 1990), or collocation (Titov & Schuh, 2000), all three following the least-squares concept, or a square-root information filter (Bierman, 1977).

The least-squares adjustment theory allows estimating unknown parameters in an over-determined system of equations. Since there are more equations than unknown parameters, the solution will not be exactly correct for each equation, but the adjustment provides a unique solution $dx$ and minimizes the squared sum of the weighted residuals $v$. Functional and stochastic models are based on linearized observation equations:

$$A\cdot dx = l+v\ \ {\rm or}\ \ \left[ \begin{array}[]{c} A_{ro} \\ A_{po} \end{array} \right]\cdot dx =\left[\begin{array}[]{c} l_{ro} \\ l_{po} \end{array} \right]+\left[\begin{array}[]{c} v_{ro} \\ v_{po} \end{array}\right],$$

$$P = \left[\begin{array}[]{cc} P_{ro} & 0 \\ 0 & P_{po} \end{array}\right].$$

The design matrix $A_{ro}$ contains the first derivatives of the function of the real observations w.r.t. the estimated parameters. Short estimation intervals (e.g. 20 minutes for zenith wet delays) could lead to singularity problems if no observation falls within a time segment. Therefore, the $A_{ro}$ matrix is extended by a pseudo-observation matrix $A_{po}$, which constrains the variability of the parameters, either by constraining their absolute values to zero (typically used for troposphere gradients in early VLBI sessions) or by constraining the relative variation of the continuous piecewise linear functions. The reduced observations, i.e. the observed minus calculated time delays, are listed in the $l_{ro}$ vector (real observations), whereas the $l_{po}$ vector (pseudo-observations) is typically filled with zeros. The weighting of the observations is done by the weight matrix $P$.
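The stacked system of real and pseudo-observations can be sketched numerically as follows; all numbers are hypothetical and serve only to show the mechanics of the adjustment. Here two parameters are estimated from three real observations, and one loosely weighted pseudo-observation ties the difference of the two parameters toward zero.

```python
import numpy as np

# Hypothetical stacked Gauss-Markov system: two parameters, three real
# observations, one pseudo-observation constraining x2 - x1 toward zero.
A_ro = np.array([[1.0, 0.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])
l_ro = np.array([2.0, 2.2, 3.0])    # reduced observations (observed minus calculated)
A_po = np.array([[-1.0, 1.0]])      # pseudo-observation on x2 - x1
l_po = np.array([0.0])              # pseudo-observations are zeros

A = np.vstack([A_ro, A_po])
l = np.concatenate([l_ro, l_po])
P = np.diag([1.0, 1.0, 1.0, 0.01])  # block-diagonal weights P_ro, P_po

N = A.T @ P @ A                     # normal matrix
b = A.T @ P @ l                     # right-hand side
dx = np.linalg.solve(N, b)          # adjusted parameters
v = A @ dx - l                      # residuals, from A dx = l + v
```

With the loose weight on the pseudo-observation, the constraint barely pulls the two estimates together; a larger weight would tie them more tightly.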

There are different groups of parameters. Auxiliary parameters, e.g. clock parameters and sometimes also troposphere parameters like zenith wet delays or gradients, have to be computed but are usually not of interest to geodesists. As clock parameters, linear or quadratic polynomials (over the 24 hour session, accounting for clock offset, clock frequency offset, and clock frequency drift) are estimated. Additionally, continuous piecewise linear functions with, e.g., hourly segments are estimated with respect to a reference clock that is set to zero, in order to account for rapid clock instabilities. A typical constraint for the relative variation of the piecewise linear clock function is $0.5\ \rm{ps}^2/\rm{s}$; for time segments of, e.g., 60 min this corresponds to a variance of $1800\ \rm{ps}^2$ over that one-hour time span, meaning that the difference between two adjacent clock offsets is $0\pm 13$ mm. Special care has to be taken to detect clock breaks that sometimes occur at station clocks (hydrogen masers, described in Section Geometric principle) during a VLBI session. The epochs of those clock breaks have to be introduced in the analysis of VLBI sessions, because separate quadratic polynomials are used to describe the behavior of the clock before and after the break.
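The numbers quoted for the clock constraint can be verified with a short back-of-the-envelope calculation, converting the accumulated delay variance into an equivalent length via the speed of light:

```python
# Check of the clock-constraint numbers: a relative constraint of
# 0.5 ps^2/s accumulated over a 60 min estimation segment.
c = 299792458.0                         # speed of light in m/s
rate_variance_ps2_per_s = 0.5           # constraint on the clock variation
segment_s = 3600.0                      # one-hour segment length

variance_ps2 = rate_variance_ps2_per_s * segment_s   # 1800 ps^2
sigma_ps = variance_ps2 ** 0.5                       # about 42 ps
sigma_mm = sigma_ps * 1e-12 * c * 1e3                # delay as a length, ~13 mm
```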

## The concept of piecewise linear offsets

In the Vienna VLBI Software (VieVS; Böhm et al., 2011) zenith wet delays are typically estimated every 20 to 60 minutes, and rather loose constraints are put on the variation of the zenith wet delays ($0.7\ \rm{ps}^2/\rm{s}$). So-called 'piecewise linear offsets' are used in VieVS, i.e. the functional model is based on offsets only (no rates) (see Eq. for $\Delta L_w(t)$ below). These piecewise linear offsets are estimated at integer hours (e.g., at 18 UTC, 19 UTC, …), at integer fractions of integer hours (e.g., 18:20 UTC, 18:40 UTC, …) or at integer multiples of integer hours (e.g. 18:00 UTC, 0:00 UTC, 6:00 UTC, …). In VieVS, this representation is not only possible for troposphere zenith delays and gradients, station clocks, and Earth orientation parameters, but also for coordinates of selected stations and radio sources. The Equation for $\Delta L_w(t)$ below gives the functional model of the wet delay $\Delta L_w$ at one station, represented by piecewise linear offsets $x_1$ and $x_2$ of the zenith wet delays at the integer hours $t_1$ and $t_2$. The wet mapping function at epoch $t$ of an observation lying between the integer hours is expressed by $mf_w(t)$. The partial derivatives which have to be entered in the design matrix are given in the Eqs. for $\frac{d\Delta L_w}{dx_1}$ and $\frac{d\Delta L_w}{dx_2}$. This concept is similar for all parameters, and with this kind of parameterization all combinations (at the normal equation level) with other space geodetic techniques are easy and straightforward.

$$\Delta L_w(t)=mf_w(t)\cdot x_1 + mf_w(t)\cdot \frac{t-t_1}{t_2-t_1}\cdot (x_2-x_1)$$

$$\frac{d\Delta L_w}{dx_1} = mf_w(t) - mf_w(t)\cdot \frac{t-t_1}{t_2-t_1}$$

$$\frac{d\Delta L_w}{dx_2} = mf_w(t)\cdot \frac{t-t_1}{t_2-t_1}$$
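The three equations above translate directly into code. The sketch below uses hypothetical numbers (an observation 20 minutes past the full hour, a wet mapping function value of 3.2, and zenith wet delays of 120 mm and 126 mm at the two integer hours); note that the two partials always sum to $mf_w(t)$.

```python
def wet_delay_plo(t, t1, t2, x1, x2, mfw):
    """Slant wet delay from piecewise linear offsets x1, x2 at epochs t1, t2."""
    f = (t - t1) / (t2 - t1)
    return mfw * x1 + mfw * f * (x2 - x1)

def plo_partials(t, t1, t2, mfw):
    """Partial derivatives w.r.t. x1 and x2 entering the design matrix."""
    f = (t - t1) / (t2 - t1)
    return mfw * (1.0 - f), mfw * f

# Hypothetical numbers: observation at t = 20 min between t1 = 0 and
# t2 = 60 min, mf_w(t) = 3.2, zenith wet delays 120 mm and 126 mm.
d1, d2 = plo_partials(20.0, 0.0, 60.0, 3.2)
delay = wet_delay_plo(20.0, 0.0, 60.0, 120.0, 126.0, 3.2)
```

Since the model is linear in $x_1$ and $x_2$, the delay reconstructed from the partials, $d_1 x_1 + d_2 x_2$, reproduces the functional model exactly.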

In addition to the clock and troposphere parameters (zenith wet delays and gradients) mentioned above, there are many other geodetic/astrometric parameters which can be estimated from VLBI sessions (see Section 4). Earth orientation parameters – although typically estimated once per session – can also be retrieved with a higher temporal resolution applying the concept of the piecewise linear offsets. The Figure below shows hourly estimates of (UT1-UTC) during CONT08.

## Global VLBI solutions

Other 'global' parameters such as station or source coordinates can in principle also be estimated from single VLBI sessions, but they are preferably determined in a global solution, i.e., from a large number of VLBI sessions combined in a common least-squares parameter estimation. Due to limited computer memory capacity it is essential to keep the equation system small. In VLBI analysis there are auxiliary parameters in the observation equations which cannot be fixed to a priori values, even if we are not interested in them, e.g., clock parameters. Therefore, a reduction algorithm is used which is based on a separation of the normal equation system into two parts. The first part contains the parameters which we want to estimate, and the second part those parameters which will be reduced. Even if we 'reduce' parameters, they still belong to the functional model of unknown parameters and will be estimated implicitly.

$$\left[\begin{array}[]{cc} N_{11} & N_{12} \\ N_{21} & N_{22} \end{array}\right]\cdot \left[\begin{array}[]{c} dx_1 \\ dx_2 \end{array}\right]= \left[\begin{array}[]{c} b_1 \\ b_2 \end{array}\right].$$

In the equation above, $N = A^TPA$ and $b = A^TPl$; the reduction of $dx_2$ is done by executing the matrix operation

$$(N_{11}-N_{12}N_{22}^{-1}N_{21})\cdot dx_1= b_1-N_{12}N_{22}^{-1}b_2 \ {\rm or} \ N_{reduc}\cdot dx_1=b_{reduc}.$$
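The reduction operation above is a Schur complement of the normal matrix. A minimal NumPy sketch (with a hypothetical 4-parameter system) shows that solving the reduced system for $dx_1$ gives exactly the same values as solving the full system, which is the point of the algorithm:

```python
import numpy as np

def reduce_normals(N, b, n1):
    """Eliminate the trailing parameters dx2 from N dx = b, keeping the
    leading n1 parameters dx1 explicitly (Schur complement)."""
    N11, N12 = N[:n1, :n1], N[:n1, n1:]
    N21, N22 = N[n1:, :n1], N[n1:, n1:]
    b1, b2 = b[:n1], b[n1:]
    N22_inv = np.linalg.inv(N22)
    return N11 - N12 @ N22_inv @ N21, b1 - N12 @ N22_inv @ b2

# Hypothetical 4-parameter system built from a 5-observation design matrix;
# the last two parameters (e.g. clocks) are reduced.
A = np.array([[1.0, 0.5, 0.2, 0.0],
              [0.0, 1.0, 0.3, 0.1],
              [0.4, 0.0, 1.0, 0.2],
              [0.1, 0.2, 0.0, 1.0],
              [0.3, 0.1, 0.4, 0.5]])
l = np.array([1.0, 2.0, 0.5, 1.5, 0.7])
N, b = A.T @ A, A.T @ l

N_reduc, b_reduc = reduce_normals(N, b, 2)
dx1 = np.linalg.solve(N_reduc, b_reduc)
```

The reduced system is only $2\times 2$, yet `dx1` agrees with the first two entries of the full solution of the $4\times 4$ system.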

Stacking is used for combining normal equation systems if a parameter is contained in at least two normal equation systems and only one common value in the resulting combined system should be estimated. For a combined solution of the identical parameters ($dx_1$), the normal matrices ($N_{reduc}$) and the right hand side vectors ($b_{reduc}$) from $n$ single sessions have to be summed up:

$$N_{REDUC}=N_{reduc\_1}+N_{reduc\_2}+\ldots+N_{reduc\_n},$$

$$b_{REDUC}=b_{reduc\_1}+b_{reduc\_2}+\ldots+b_{reduc\_n}.$$
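Because the same parameters $dx_1$ appear in every session's reduced system, stacking is a plain summation of matrices and vectors. A short sketch with hypothetical 2-parameter systems from three sessions:

```python
import numpy as np

# Hypothetical reduced normal equation systems (N_reduc, b_reduc)
# from three VLBI sessions sharing the same parameters dx1.
sessions = [
    (np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0])),
    (np.array([[5.0, 0.5], [0.5, 2.0]]), np.array([0.5, 1.0])),
    (np.array([[3.0, 0.0], [0.0, 4.0]]), np.array([2.0, 0.0])),
]

N_REDUC = sum(N for N, _ in sessions)   # element-wise sum of normal matrices
b_REDUC = sum(b for _, b in sessions)   # sum of right-hand sides
dx1 = np.linalg.solve(N_REDUC, b_REDUC) # one common estimate for all sessions
```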

Conditions on the $N_{REDUC}$ matrix are applied in order to prevent the matrix from being singular. From the analysis of VLBI sessions we get free station networks, which are the result of adjusting observations in a model where coordinates are unknowns without fixing the coordinate system (Sillard & Boucher, 2001). With three-dimensional VLBI station networks the rank deficiency is six (the scale is determined from the observations), which means that at least six conditions have to be applied to remove the rank deficiency. In the case of station coordinates, three no-net-translation (NNT) and three no-net-rotation (NNR) conditions are applied on selected datum stations, and in the case of source coordinates an NNR condition is usually applied on a selected set of datum sources. For longer time spans, NNR-rate and NNT-rate conditions are also applied on station coordinate velocities. It is very important to use stable stations and sources for the datum, because otherwise the quality of the terrestrial and celestial reference frames would deteriorate. Moreover, it is absolutely necessary to take into account any episodic changes in the station coordinates, e.g. due to instrumental changes or earthquakes. Unlike positions and velocities, no scale or scale rate parameters are estimated in VLBI, as the scale directly depends on the speed of light, $c$, one of the defining natural constants. The final solution is obtained by an inversion of the normal matrix:

$$dx_1=N_{REDUC}^{-1}\cdot b_{REDUC}.$$
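The six NNT/NNR datum conditions can be sketched as condition equations on the station coordinate corrections: three rows sum the translations, and three rows take the cross product with the a priori positions. The station coordinates below are hypothetical; how the conditions are then attached to $N_{REDUC}$ (bordering versus weighted addition) is an implementation choice.

```python
import numpy as np

def datum_conditions(coords):
    """Build the 6 x 3n matrix of NNT/NNR condition equations for n datum
    stations with a priori coordinates `coords` (shape n x 3)."""
    n = coords.shape[0]
    B = np.zeros((6, 3 * n))
    for i, (x, y, z) in enumerate(coords):
        j = 3 * i
        B[0:3, j:j + 3] = np.eye(3)                  # no-net-translation
        B[3:6, j:j + 3] = np.array([[0.0, -z, y],    # no-net-rotation:
                                    [z, 0.0, -x],    # cross product with the
                                    [-y, x, 0.0]])   # a priori position
    return B

# Three hypothetical non-collinear datum stations already suffice to
# remove the six-fold rank deficiency of the free network.
coords = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
B = datum_conditions(coords)
# The singular N_REDUC can then be regularized, e.g. by bordering it
# as [[N, B.T], [B, 0]] or by adding a weighted B.T @ B.
```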

Since the least-squares adjustment minimizes the squared sum of weighted residuals, this value is used to scale the standard deviations of the estimates. It is determined with

$$v^TPv=(l^TPl)_{REDUC}-dx_1^T b_{REDUC},$$

where the first part $(l^TPl)_{REDUC}$ depends only on observations; it has to be corrected for the influence of the reduced parameters which is known from the single normal equation systems:

$$(l^TPl)_{REDUC}=\sum_{i=1}^{n}(l^TPl-b_2^TN_{22}^{-1}b_2).$$

The second part in the equation for $v^TPv$ above, $dx_1^Tb_{REDUC}$, depends on the combined solution. The a posteriori variance of unit weight $\sigma_0^2$ is a scaling factor for the inverse normal equation matrix, i.e., for the covariance matrix $Q$ of the estimated parameters:

$$Q=\sigma_0^2\cdot N^{-1}.$$

It is determined with

$$\sigma_0^2=\frac{v^TPv}{k-u+d},$$

where $k$ is the number of observations, $u$ the number of estimated and reduced parameters, and $d$ the number of additional condition equations.
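For a single, unreduced system the whole chain can be illustrated in a few lines; the line-fit data below are hypothetical, and $v^TPv$ is obtained without forming the residual vector explicitly, exactly as in the formulas above:

```python
import numpy as np

# Hypothetical line fit with unit weights: 4 observations, 2 parameters.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
l = np.array([0.1, 1.1, 1.9, 3.1])
P = np.eye(4)

N = A.T @ P @ A
b = A.T @ P @ l
dx = np.linalg.solve(N, b)

vTPv = l @ P @ l - dx @ b          # equals v^T P v for the adjusted dx
k, u, d = 4, 2, 0                  # observations, parameters, conditions
sigma0_sq = vTPv / (k - u + d)     # a posteriori variance of unit weight
Q = sigma0_sq * np.linalg.inv(N)   # scaled covariance of the estimates
```

The shortcut $l^TPl - dx^Tb$ and the explicit residual form $v^TPv$ with $v = A\,dx - l$ give the same number, which is a useful internal check in any implementation.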
