Report on astrometry as of 2002-04-29
-------------------------------------

Astrometry observations with the 4-station system began in earnest on 2002-03-15. Since then, about 18 nights of data have been processed, though not all of them have been fully analyzed. The data were initially processed using the standard astrometry procedures in OYSTER, which had been used successfully for the reduction of the 2001 campaign and for the reduction of the centered-feed test observations of 1999-03-06. It became immediately apparent that the dispersion-corrected delays derived from the new data were of poor quality. This required analysis of the raw data with a set of OYSTER procedures written specifically for that purpose in 2001. This analysis, and the development of still more interactive tools for the raw data analysis, are ongoing.

Here is an (incomplete) list of observations which have received more attention so far (all centered feed unless noted otherwise):

2002-03-21: begin "analog" tracking without jumps
2002-03-31: 4-way/3-way astrometry
2002-04-02: relative astrometry, not reduced yet
2002-04-03: relative astrometry, not reduced yet
2002-04-10: begin new strokes
2002-04-12: off-center feed, begin new higher-speed tracking
2002-04-13: off-center feed
2002-04-14: off-center feed
2002-04-26: brightest stars (centered feed)
2002-04-28: standard (centered feed)

Analysis of the FDL delays indicated early on that they changed only in increments of 1 micron; this bug was fixed in the embedded system beginning with 03-31, which is why none of the data before that date are being considered anymore. Subsequent tweaking of the embedded system involved a set of new stroke amplitudes and higher-speed tracking, since it seemed the fringes would sometimes outrun the fringe tracker. These changes have led to improvements in the overall fringe tracking performance, and some benefits can be seen in the data analysis as well.

Analysis of the situation concentrates on the following:

- Comparisons between 1999, 2001, and 2002
- Comparisons between scans of very different quality within a night
- Running the standard reduction on the new off-center feed data for comparison

The set of diagnostic tools now includes:

- Residual FDL delays in a wide sliding plot window
- Power spectrum of the FDL delays
- Fringe power spectrum browser plot
- Primary/secondary power spectrum peak height histograms
- Fringe spectrum primary/noise peak height histograms
- Group delays in a wide plot
- Corrected fringe phases
- Dispersion-corrected delays, before/after

In addition, new analysis features have been added:

- Capability to detect and flag bad samples based on the power spectrum (a sketch of the idea follows at the end of this section)
- Switch between classic-mode and new-mode flagging of bad samples
- Use of the actual fringe position in the phase correction instead of just the residual FDL delay; the effect of this is noticeable but small

These tools have been applied to the data, but with limited (i.e. no breakthrough) success. First, it was shown that the poor dispersion correction results were matched by the poor performance of the corresponding data in some of the diagnostic plots. Of those, the fringe phase plot is the most critical. The following trends seem to emerge: in the earlier data, before the fringe tracking enhancements, the residual FDL delays would look "unnatural", i.e. composed of linear sections instead of atmospherically induced random motion. The more recent off-center feed data show FDL delays much more like what we saw in 2001.
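As an aside, here is a minimal sketch of the bad-sample flagging idea mentioned above, i.e. flagging a sample when its fringe power spectrum shows no clear primary peak above the noise floor. This is not the OYSTER code; the function name, the peak-to-median criterion, and the threshold value are assumptions made purely for illustration.

```python
# Hypothetical sketch, not the OYSTER implementation: flag a fringe
# sample as bad when the primary peak of its power spectrum does not
# stand out sufficiently above the noise floor.
import numpy as np

def flag_bad_samples(samples, min_peak_to_noise=20.0):
    """samples: 2-D array (n_samples, n_points) of fringe data segments.
    Returns a boolean array, True where a sample should be flagged."""
    flags = np.zeros(len(samples), dtype=bool)
    for i, seg in enumerate(samples):
        power = np.abs(np.fft.rfft(seg - seg.mean()))**2
        power = power[1:]                  # drop the DC bin
        peak = power.max()                 # primary peak height
        noise = np.median(power)           # robust noise-floor estimate
        if peak < min_peak_to_noise * noise:
            flags[i] = True                # no clear fringe peak: flag it
    return flags

# Example: a segment with a fringe at 20 cycles, and a noise-only segment
rng = np.random.default_rng(1)
t = np.arange(256)
good = np.sin(2 * np.pi * 20 * t / 256) + 0.3 * rng.normal(size=256)
bad = rng.normal(size=256)
print(flag_bad_samples(np.array([good, bad])))   # -> [False  True]
```

The threshold of 20 is only a placeholder; in practice it would presumably be tuned against the primary/noise peak height histograms listed above.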
There are some examples where this unnatural behaviour correlates with poor fringe phases. The fringe phase plots of the new on-center feed data were initially compared with the off-center data from 2001, and the new data fared much worse. In particular, the newer data did not show nicely separated peaks in the power spectrum peak histograms. It was then realized, though, that the 1999 centered-feed data also do not appear to be of nearly the same quality as the 2001 data, which indicates that the centered-feed penalty is significant and, in combination with an as yet unknown cause, prevents us from getting consistently usable data with the 4-station system.

There are several examples of scans with very inconsistent behaviour, which makes the interpretation of what is going on very difficult and time consuming. (The diagnostic plot procedures take a fairly long time, about 15 seconds each even on the really fast machines at USNO, and it is not yet feasible to produce all of them for every scan and baseline.) For example, there is such a case in the 03-31 data, with a really good scan on a 2.6 mag star and a poor one on a much brighter star an hour later. The poor scan shows larger FDL jitter values, whereas the good one has smaller jitter. This happens again on 04-14, with very little indication of what is wrong with the bad scan. Sometimes the poor performance correlates with high FDL jitter values, sometimes more with the RMS of the FDL delays over the entire scan. There is one example of jumpy FDL delays which go hand in hand with poor fringe phases; this scan is on FKV0347 on 04-14, but again, confusion arises because of a comment from the observer that the FSNR was pretty good (> 100). However, I'm becoming better at predicting a good fringe phase plot from a fairly continuous residual FDL delay which does not exceed about 10 microns RMS. It is possible that larger FDL variations, and consequently larger group delay variations, indicate difficulties with tracking a fringe; this would then result in poor visibility phases.

Comparisons of median and average dispersion-corrected delay errors in 1999, 2001, and 2002 (all delay errors are standard deviations of the scan-averaged 200 ms coherent integrations, in microns):

2002-04-12: off-center feed, standard list, 44 s average scan length

          med.  avg.
    EC    6.77  6.93  (this baseline is always bad)
    EW    2.88  3.20
    EN    3.39  3.70

2002-04-26: brightest stars (centered feed), average of 44 s of data per scan

    EW    4.3/4.3 med/avg disp. corr. delay error
    EN    4.6/5.2 med/avg disp. corr. delay error
    EC    bad

2002-04-28: standard (centered feed), average of 30 s of data per scan, but up to 50 s early in the night

    EW    5.5 micron dispersion-corrected delays, no significant magnitude dependence; 4.9/5.4 med/avg delay errors (FDL error 11 microns)
    EN    5.4/6.3 med/avg delay errors (FDL error 13 microns)
    EC    12 microns delay error, FDL error 8 microns

For comparison, 1999-03-06 (this night has 180 s scans):

    CW    2.3/2.9 microns med/avg delay errors (FDL error 10 microns)
    CE    1.9/2.6 (FDL error 9.8 microns)

Selected 2001 nights, med/avg delay error in microns for CE and CW (these nights have standard 90 s scans; all are off-center):

                   CE med.  avg.    CW med.  avg.
    2001-04-15:       1.34  1.55       1.26  1.60
    2001-04-25:       1.19  1.43       1.42  1.63
    2001-04-26:       1.47  1.61       1.82  1.98
    2001-05-09:       1.77  1.93       3.28  3.34
    2001-06-05:       3.00  4.16       3.54  4.82
    2001-06-07:       2.02  2.46       2.41  3.11

(The reduction in intra-scan scatter between raw delays and dispersion-corrected delays is typically a factor of 3 to 5.)
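For reference, here is a minimal sketch of how the med/avg delay errors quoted above can be computed, assuming each scan is available as an array of dispersion-corrected delays from its 200 ms coherent integrations. The function name and data layout are assumptions for illustration, not the OYSTER interface.

```python
# Hypothetical sketch: per-scan delay error taken as the standard
# deviation of the 200 ms coherent-integration dispersion-corrected
# delays, summarized per night/baseline by the median and average
# over all scans, as quoted in the tables above.
import numpy as np

def night_delay_errors(scans):
    """scans: list of 1-D arrays, each holding the dispersion-corrected
    delays (microns) of the 200 ms coherent integrations of one scan.
    Returns (median, average) of the per-scan standard deviations."""
    per_scan = np.array([np.std(delays, ddof=1) for delays in scans])
    return np.median(per_scan), per_scan.mean()

# Example with simulated scans (220 samples ~ 44 s, 150 samples ~ 30 s)
rng = np.random.default_rng(0)
scans = [rng.normal(0.0, 3.0, size=220),
         rng.normal(0.0, 5.0, size=150)]
med, avg = night_delay_errors(scans)
print(f"med/avg delay error: {med:.2f}/{avg:.2f} microns")
```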
Conclusions:

The EW baseline of the new centered-feed data seems to be consistent in quality with the 1999 data if one takes into account the very much shorter integration time (scan-averaged delay errors should scale roughly with the square root of the integration time, so 44 s versus 180 s accounts for a factor of about two). However, the EC baseline is bad for unknown reasons. The 3-station (i.e. pre-2002) off-center feed data are of much higher quality than the centered-feed data, despite the shorter integration time. The off-center 4-station data are also significantly better than the centered-feed data, but not as good as 2001, even allowing for the factor of two in integration time. However, this is based on only one off-center feed night in 2002.

After looking at a lot of data, I think the following can be said:

The new off-center feed data have more high-quality scans, though the occasional unexplained poor scan still occurs. (Is it possible that the observer-set FSNR values are underestimated here, in terms of causing corrupted data?)

The new off-center feed data might not be of the same quality as the 2001 data, but the standard reduction indicates that instead of reaching 2 micron RMS dispersion-corrected delays in the best cases, the new data can still reach about 3 microns. This also means that the 2001 algorithm still works, and that the new system can still produce high-quality data. This is good news.

The centered-feed data are of poor quality, but another test with a bright star list and the latest fringe tracking adjustments in place still remains to be done. (Comment: this was done, as shown above, on 04-26 and 04-28.)

Without the beam compressors and much longer integration times, I don't have much hope of obtaining data of consistently good quality on stars fainter than first magnitude. This also has to do with the fact that we cannot store as much data as we did before the new system was installed: in 1999 we integrated for 180 seconds, whereas now we don't get continuous data beyond 40 seconds. I simply don't think one can get away with not implementing a critical part of the system, reducing the integration time by a factor of four, and still expect to get good astrometry.

Near term work:

I have only looked at the EW baseline, and there are 5 more to do! A complete analysis is much more time consuming because one has to study the observer log carefully for recorded problems which could confuse the analysis, and one has to look at several other quantities such as visibility amplitudes and NAT offset histograms. I have looked at some of the latter, but was never lucky enough to find an explanation of what was going on with the scans at hand. The amount of raw data is horrendous, and it is easy to waste a lot of time if one is unlucky and does not look at the diagnostics which matter most.

Christian.