===== Global VLBI solutions =====

Other 'global' parameters such as station or source coordinates can in principle also be estimated from single VLBI sessions, but they are preferably determined in a global solution, i.e., from a large number of VLBI sessions combined in a common least-squares parameter estimation. Owing to limited computer memory capacity, it is essential to keep the equation system small. In VLBI analysis there are auxiliary parameters in the observation equations which cannot be fixed to a priori values even if we are not interested in them, e.g., clock parameters. Therefore, a reduction algorithm is used which is based on a separation of the normal equation system into two parts: the first part contains the parameters which we want to estimate, and the second part those which will be reduced. Even if we 'reduce' parameters, they still belong to the functional model of unknown parameters and are estimated implicitly:

$$\left[\begin{array}{cc}
N_{11} & N_{12} \\
N_{21} & N_{22}
\end{array}\right]\cdot \left[\begin{array}{c}
dx_1 \\
dx_2
\end{array}\right]= \left[\begin{array}{c}
b_1 \\
b_2
\end{array}\right].$$

In the equation
above, $N = A^TPA$ and $b = A^TPl$. The reduction of $dx_2$ is performed by the matrix operation

$$(N_{11}-N_{12}N_{22}^{-1}N_{21})\cdot dx_1= b_1-N_{12}N_{22}^{-1}b_2 \quad {\rm or} \quad N_{reduc}\cdot dx_1=b_{reduc}.$$

Stacking is used for combining normal equation systems: if a parameter is contained in at least two normal equation systems and only one common value should be estimated in the resulting combined system, the normal matrices ($N_{reduc}$) and the right-hand-side vectors ($b_{reduc}$) of the identical parameters ($dx_1$) from the $n$ single sessions are summed up:

$$N_{REDUC}=N_{reduc\_1}+N_{reduc\_2}+\ldots+N_{reduc\_n},$$

$$b_{REDUC}=b_{reduc\_1}+b_{reduc\_2}+\ldots+b_{reduc\_n}.$$

Conditions are applied to the $N_{REDUC}$ matrix in order to prevent it from being singular. From the analysis of VLBI sessions we get free station networks, which are the result of adjusting observations in a model where the coordinates are unknowns without fixing the coordinate system \citep{sillard2001}. With three-dimensional VLBI station networks the rank deficiency is six (the scale is determined from the observations), which means that at least six conditions have to be applied to remove the rank deficiency. In the case of station coordinates, three no-net-translation (NNT) and three no-net-rotation (NNR) conditions are applied to selected datum stations; in the case of source coordinates, an NNR condition is usually applied to a selected set of datum sources. For longer time spans, NNT-rate and NNR-rate conditions are also applied to the station coordinate velocities. It is very important to use stable stations and sources for the datum, because otherwise the quality of the terrestrial and celestial reference frames would deteriorate. Moreover, it is absolutely necessary to take into account any episodic changes in the station coordinates, e.g. due to instrumental changes or earthquakes.
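As a minimal NumPy sketch (not taken from any particular VLBI software package), the reduction and stacking steps above might look as follows; the session data here are random placeholders, and the split into three 'global' and two 'local' parameters per session is purely illustrative:

```python
import numpy as np

def reduce_normals(N, b, n1):
    """Reduce (eliminate) the trailing parameters of a normal equation
    system, keeping only the first n1 parameters explicitly.

    With the partitioning [[N11, N12], [N21, N22]] and [b1, b2], the
    reduced (Schur-complement) system is
        N_reduc = N11 - N12 N22^{-1} N21,
        b_reduc = b1  - N12 N22^{-1} b2.
    """
    N11, N12 = N[:n1, :n1], N[:n1, n1:]
    N21, N22 = N[n1:, :n1], N[n1:, n1:]
    # solve against N22 instead of forming N22^{-1} explicitly
    X = np.linalg.solve(N22, np.column_stack([N21, b[n1:]]))
    return N11 - N12 @ X[:, :-1], b[:n1] - N12 @ X[:, -1]

# Stacking: sum the reduced systems of sessions that share the global
# parameters dx_1 (three sessions with random placeholder data).
rng = np.random.default_rng(0)
reduced = []
for _ in range(3):
    A = rng.standard_normal((20, 5))   # design matrix: 3 global + 2 local params
    P = np.eye(20)                     # weight matrix of the observations
    l = rng.standard_normal(20)        # observed-minus-computed vector
    N, b = A.T @ P @ A, A.T @ P @ l
    reduced.append(reduce_normals(N, b, 3))  # reduce the 2 local parameters

N_REDUC = sum(Nr for Nr, _ in reduced)
b_REDUC = sum(br for _, br in reduced)
```

Because the reduced parameters remain part of the functional model, solving the reduced system of a single session gives exactly the same $dx_1$ as the first components of the full solution of that session.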
Unlike positions and velocities, no scale or scale-rate parameters are estimated in VLBI, as the scale directly depends on the speed of light, $c$, one of the defining natural constants. The final solution is obtained by an inversion of the normal matrix:

$$dx_1=N_{REDUC}^{-1}\cdot b_{REDUC}.$$

Since the least-squares adjustment minimizes the weighted sum of squared residuals, this value is used to scale the standard deviations of the estimates. It is determined with

$$v^TPv=(l^TPl)_{REDUC}-x_1^T b_{REDUC},$$

where the first part, $(l^TPl)_{REDUC}$, depends only on the observations; it has to be corrected for the influence of the reduced parameters, which is known from the single normal equation systems:

$$(l^TPl)_{REDUC}=\sum_{i=1}^{n}\left(l^TPl-b_2^TN_{22}^{-1}b_2\right)_i.$$

The second part, $x_1^Tb_{REDUC}$, depends on the combined solution. The a posteriori variance of unit weight $\sigma_0^2$ is a scaling factor for the inverse normal equation matrix, i.e., for the covariance matrix $Q$ of the estimated parameters:

$$Q=\sigma_0^2\cdot N^{-1}.$$

It is determined with

$$\sigma_0^2=\frac{v^TPv}{k-u+d},$$

where $k$ is the number of observations, $u$ the number of estimated and reduced parameters, and $d$ the number of additional condition equations.
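The datum conditions and the final solution step can be illustrated with a small toy network. This is a sketch under simplifying assumptions, not real VLBI analysis: a three-station network with coordinate-difference observations (random placeholder values) has a rank deficiency of three, removed here by a single no-net-translation condition; the real six-fold NNT+NNR datum works analogously.

```python
import numpy as np

# Hypothetical free network: coordinate-difference observations between
# three stations in 3-D fix the network's shape but not its position,
# so N is singular with a rank deficiency of 3.
rng = np.random.default_rng(7)
n_sta, pairs = 3, [(0, 1), (1, 2), (0, 2)]
A = np.zeros((3 * len(pairs), 3 * n_sta))
for row, (i, j) in enumerate(pairs):
    A[3*row:3*row+3, 3*i:3*i+3] = -np.eye(3)   # observation d_ij = x_j - x_i
    A[3*row:3*row+3, 3*j:3*j+3] = np.eye(3)
P = np.eye(A.shape[0])                 # weight matrix
l = rng.standard_normal(A.shape[0])    # observed-minus-computed vector
N, b = A.T @ P @ A, A.T @ P @ l
assert np.linalg.matrix_rank(N) == 3 * n_sta - 3   # singular without a datum

# No-net-translation condition over the (datum) stations, B dx = 0,
# imposed via the bordered Lagrange-multiplier system [[N, B^T], [B, 0]].
B = np.hstack([np.eye(3)] * n_sta)     # d = 3 condition equations
K = np.block([[N, B.T], [B, np.zeros((3, 3))]])
dx = np.linalg.solve(K, np.concatenate([b, np.zeros(3)]))[:3 * n_sta]

# v^T P v = l^T P l - x^T b, then the a posteriori variance of unit weight
vPv = l @ P @ l - dx @ b
k, u, d = A.shape[0], 3 * n_sta, 3     # observations, parameters, conditions
sigma0_sq = vPv / (k - u + d)

# cofactor matrix of dx is the top-left block of K^{-1}
Q = sigma0_sq * np.linalg.inv(K)[:3 * n_sta, :3 * n_sta]
```

Solving the bordered system rather than inverting a regularized normal matrix is one of several equivalent ways to apply the datum conditions; since $B\,dx_1 = 0$ holds exactly, the identity $v^TPv = l^TPl - x_1^Tb$ remains valid.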