SBAS is a method that targets distributed scatterers. It builds a Gauss-Markov model and solves the unknowns using least-squares methods. It uses interferograms with small spatial and temporal baselines to reduce temporal decorrelation and noise. This allows for more reliable measurements and a better solution. I'm going to introduce the basic principle of SBAS. I won't introduce PS-InSAR, because the methods to select persistent scatterers differ across algorithms; you can see the method developed by Professor Andy Hooper on the left slide. In terms of SBAS, although many implementations still differ, their principles are similar.

In general, SBAS builds a Gauss-Markov model and solves the unknowns by least-squares methods. The observations carry an error term, A is the design matrix relating the unknowns to the observations, and X is the vector of unknown phase changes relative to the reference epoch. You can see the simple figure below to understand the theory. We set the first epoch of the four images as the reference epoch. We have five interferograms among times t1 to t4. The phase changes from t1 to t2, t1 to t3, and t1 to t4 are the unknowns. We can build a design matrix to relate the unknowns to the observations, and the unknowns can then be solved by least-squares methods. We will see the matrix form on the next slide.

One characteristic of SBAS is that it only uses interferograms with small spatial and temporal baselines. As I mentioned previously, SBAS targets distributed scatterers. Distributed scatterers provide noisier information because their surface properties change rapidly in time, so it is easy to have temporal decorrelation. To reduce the temporal decorrelation, we should use interferograms with small temporal baselines. To further reduce noise, we can use interferograms with small spatial baselines. In this way we are likely to obtain a good solution with more reliable measurements.
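To make the design-matrix idea concrete, here is a minimal sketch in Python of the four-epoch example above. The particular set of five interferogram pairs, the simulated phase values, and the noise level are all assumptions for illustration, not values from the lecture; each interferogram between epochs i and j observes the phase difference x_j − x_i, with the first epoch fixed to zero as the reference.

```python
import numpy as np

# Hypothetical example: 4 acquisitions, epoch t1 is the reference.
# Unknowns x = [phi(t2)-phi(t1), phi(t3)-phi(t1), phi(t4)-phi(t1)].
# Five assumed small-baseline interferogram pairs (1-based epoch indices):
pairs = [(1, 2), (2, 3), (3, 4), (1, 3), (2, 4)]
n_unknowns = 3  # epochs t2, t3, t4 relative to t1

# Build the design matrix A: each interferogram (i, j) observes x_j - x_i,
# where x_1 = 0 because t1 is the reference epoch.
A = np.zeros((len(pairs), n_unknowns))
for row, (i, j) in enumerate(pairs):
    if i > 1:
        A[row, i - 2] = -1.0
    if j > 1:
        A[row, j - 2] = 1.0

# Simulated "true" phase changes (radians) and noisy observations,
# standing in for unwrapped interferogram phases.
x_true = np.array([0.5, 1.2, 2.0])
rng = np.random.default_rng(0)
obs = A @ x_true + rng.normal(scale=0.01, size=len(pairs))

# Least-squares solution of the Gauss-Markov model: obs = A @ x + error
x_hat, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(x_hat)  # close to x_true
```

Because the five pairs connect every epoch back to the reference, A has full column rank and the least-squares solution is unique; in a real SBAS network one would check this connectivity before solving.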