side of (1.28) over to the left side of the equation and equate it to zero as such

$$
\begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} a \\ -1 \end{bmatrix} = 0
\qquad (1.29)
$$

If the concatenated matrix [X y] has a rank of n + 1, its n + 1 columns are linearly independent, y does not lie in the column space of X, and (1.29) has no solution. In order to have a unique solution for the coefficients a, the matrix [X y] must have a rank of n, so that its n + 1 m-dimensional columns span the same n-dimensional space as the columns of X. This is accomplished by perturbing [X y] by the smallest amount, in the Frobenius-norm sense, that reduces its rank to n,

$$
\min_{\Delta X,\,\Delta y} \left\| \begin{bmatrix} \Delta X & \Delta y \end{bmatrix} \right\|_F
\qquad (1.30)
$$

such that the perturbed system satisfies

$$
\begin{bmatrix} X + \Delta X & y + \Delta y \end{bmatrix}
\begin{bmatrix} a \\ -1 \end{bmatrix} = 0
\qquad (1.31)
$$

The perturbation is found from the singular value decomposition of the concatenated matrix, partitioned as

$$
\begin{bmatrix} X & y \end{bmatrix}
=
\begin{bmatrix} U_x & u_y \end{bmatrix}
\begin{bmatrix} \Sigma_x & 0 \\ 0 & \sigma_y \end{bmatrix}
\begin{bmatrix} V_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix}^{H}
\qquad (1.32)
$$

where $U_x$ has n columns, $u_y$ is a column vector, $\Sigma_x$ contains the n largest singular values on its diagonal, $\sigma_y$ is the smallest singular value, $V_{xx}$ is an n × n matrix, and $v_{yy}$ is a scalar. Let us multiply both sides of (1.32) by the matrix V.
$$
\begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} V_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix}
=
\begin{bmatrix} U_x & u_y \end{bmatrix}
\begin{bmatrix} \Sigma_x & 0 \\ 0 & \sigma_y \end{bmatrix}
\qquad (1.33)
$$
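To make the partitioning in (1.32) and the product in (1.33) concrete, the following is a minimal numerical sketch; it assumes NumPy is available, and the data and variable names (Xy, Ux, vxy, and so on) are illustrative choices rather than anything prescribed by the text.

```python
# A minimal sketch of the partitioned SVD in (1.32)-(1.33); the data and names
# are illustrative, not from the text.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 3                            # m equations, n model coefficients
X = rng.standard_normal((m, n))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(m)

Xy = np.hstack([X, y[:, None]])         # the m x (n+1) concatenated matrix [X y]
U, s, Vh = np.linalg.svd(Xy, full_matrices=False)
V = Vh.conj().T

Ux, uy = U[:, :n], U[:, n]              # n left singular vectors and the last one
Sx, sy = np.diag(s[:n]), s[n]           # n largest singular values and the smallest
Vxx, vxy = V[:n, :n], V[:n, n]          # blocks of the right singular matrix V
vyx, vyy = V[n, :n], V[n, n]

# Check (1.33): [X y] V = U Sigma
Sigma = np.diag(s)
print(np.allclose(Xy @ V, U @ Sigma))   # True
```

The last column of V, partitioned here into vxy and vyy, is the quantity that the remainder of the derivation isolates.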
Next, we equate just the last columns of the matrix multiplication occurring in (1.33):

$$
\begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
= u_y\,\sigma_y
\qquad (1.34)
$$
From the Eckart-Young theorem, we know that $\{[X \;\; y] + [\Delta X \;\; \Delta y]\}$, the best rank-n approximation of $[X \;\; y]$ in the Frobenius-norm sense, is obtained by setting the smallest singular value $\sigma_y$ to zero:

$$
\begin{bmatrix} X & y \end{bmatrix} + \begin{bmatrix} \Delta X & \Delta y \end{bmatrix}
=
\begin{bmatrix} U_x & u_y \end{bmatrix}
\begin{bmatrix} \Sigma_x & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} V_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix}^{H}
\qquad (1.35)
$$
To obtain $[\Delta X \;\; \Delta y]$, we subtract $[X \;\; y]$ from both sides of (1.35):

$$
\begin{bmatrix} \Delta X & \Delta y \end{bmatrix}
=
\left\{\begin{bmatrix} X & y \end{bmatrix} + \begin{bmatrix} \Delta X & \Delta y \end{bmatrix}\right\}
- \begin{bmatrix} X & y \end{bmatrix}
\qquad (1.36)
$$
(1.36) can be evaluated by substituting (1.32) and (1.35), which results in

$$
\begin{bmatrix} \Delta X & \Delta y \end{bmatrix}
=
\begin{bmatrix} U_x & u_y \end{bmatrix}
\begin{bmatrix} \Sigma_x & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} V_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix}^{H}
-
\begin{bmatrix} U_x & u_y \end{bmatrix}
\begin{bmatrix} \Sigma_x & 0 \\ 0 & \sigma_y \end{bmatrix}
\begin{bmatrix} V_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix}^{H}
= - u_y\,\sigma_y
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}^{H}
\qquad (1.37)
$$
Then, from (1.34) we can rewrite (1.37) as
$$
\begin{bmatrix} \Delta X & \Delta y \end{bmatrix}
= - \begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}^{H}
\qquad (1.38)
$$
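The two equivalent forms of the perturbation, (1.37) and (1.38), can be checked numerically. The following self-contained sketch assumes NumPy; the data and names are arbitrary, and it also verifies that the perturbed matrix has rank n and that the perturbation has Frobenius norm equal to $\sigma_y$.

```python
# A small check that (1.37) and (1.38) agree, that the perturbed matrix has
# rank n, and that the perturbation has size sigma_y in the Frobenius norm.
# Assumes NumPy; the data are arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
m, n = 15, 2
X = rng.standard_normal((m, n))
y = X @ np.array([0.7, -1.3]) + 0.05 * rng.standard_normal(m)
Xy = np.hstack([X, y[:, None]])

U, s, Vh = np.linalg.svd(Xy, full_matrices=False)
uy, sy = U[:, n], s[n]                         # last left singular vector, sigma_y
v = Vh[n].conj()                               # last column of V, i.e. [vxy; vyy]

dXy_37 = -np.outer(uy * sy, v.conj())          # (1.37)
dXy_38 = -Xy @ np.outer(v, v.conj())           # (1.38), obtained via (1.34)

print(np.allclose(dXy_37, dXy_38))                     # the two forms agree
print(np.linalg.matrix_rank(Xy + dXy_38) == n)         # rank reduced to n
print(np.isclose(np.linalg.norm(dXy_38, "fro"), sy))   # norm equals sigma_y
```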
Finally, adding $[X \;\; y]$ to both sides of (1.38) gives the rank-n approximation

$$
\begin{bmatrix} X & y \end{bmatrix} + \begin{bmatrix} \Delta X & \Delta y \end{bmatrix}
= \begin{bmatrix} X & y \end{bmatrix}
- \begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}^{H}
\qquad (1.39)
$$
After multiplying each term in (1.39) by the vector $\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}$, we obtain

$$
\left\{\begin{bmatrix} X & y \end{bmatrix} + \begin{bmatrix} \Delta X & \Delta y \end{bmatrix}\right\}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
=
\begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
-
\begin{bmatrix} X & y \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}^{H}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
\qquad (1.40)
$$
Because $\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}$ is a column of the unitary matrix V, $\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}^{H}\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix} = 1$, so the two terms on the right-hand side cancel and we are left with

$$
\left\{\begin{bmatrix} X & y \end{bmatrix} + \begin{bmatrix} \Delta X & \Delta y \end{bmatrix}\right\}
\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}
= 0
\qquad (1.41)
$$
Comparing (1.31) and (1.41), the vector $\begin{bmatrix} a \\ -1 \end{bmatrix}$ must be proportional to $\begin{bmatrix} v_{xy} \\ v_{yy} \end{bmatrix}$; scaling so that the last element equals −1, we can solve for the model coefficients a as

$$
a = -\frac{v_{xy}}{v_{yy}}
\qquad (1.42)
$$
The vector $v_{xy}$ contains the first n elements of the (n + 1)-th column of the right singular matrix V of [X y], and $v_{yy}$ is the (n + 1)-th element of that column. The best approximation of the model is then given by
$$
\hat{y} = X\,a = -X\,\frac{v_{xy}}{v_{yy}}
\qquad (1.43)
$$
This completes the total least squares solution.
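As a summary of the procedure, the following is a minimal sketch of the complete total least squares solution (1.42), assuming NumPy; the function name tls, the noise levels, and the data are illustrative assumptions rather than anything specified in the text, and the ordinary least squares solution is computed only for comparison.

```python
# A minimal sketch of the total least squares solution of X a ~ y via the SVD
# of [X y], following (1.42). Assumes NumPy; data and names are illustrative.
import numpy as np

def tls(X, y):
    """Total least squares coefficients a = -vxy / vyy from the SVD of [X y]."""
    n = X.shape[1]
    Xy = np.hstack([X, y[:, None]])
    _, _, Vh = np.linalg.svd(Xy)
    V = Vh.conj().T
    vxy, vyy = V[:n, n], V[n, n]                 # last column of V, partitioned
    return -vxy / vyy                            # (1.42)

rng = np.random.default_rng(2)
m, n = 200, 3
a_true = np.array([1.0, -2.0, 0.5])
X_clean = rng.standard_normal((m, n))
X = X_clean + 0.05 * rng.standard_normal((m, n))      # noise in X as well as
y = X_clean @ a_true + 0.05 * rng.standard_normal(m)  # in y

a_tls = tls(X, y)
a_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary least squares
print("TLS:", a_tls)
print("OLS:", a_ols)
y_hat = X @ a_tls                                     # model approximation of y
```

In this sketch noise is added to both X and y, which is the situation the total least squares formulation is intended to handle, whereas ordinary least squares assumes that only y is perturbed.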
1.4 Conclusion
This first chapter provides the mathematical fundamentals that will be utilized later: the principles of low-rank modelling, the singular value decomposition, and the method of total least squares.