Next: Impact on OTF observations
Up: Generalization to a multi-pixel
Previous: Impact on tracked observations
HERA is equipped with a derotator, which ensures that the pixels do not rotate on the
sky. The sky can thus be mapped by scanning along, e.g., the right ascension
or the declination axis in equatorial coordinates. We aim at obtaining a
fully sampled map, which implies a distance Δ between the scanned rows of a
fraction of the beam full width at half maximum θ: at 1 mm, Δ typically
amounts to a few arcseconds, while the separation d_pix between the pixels is
much larger. We thus have to find the scanning strategy that best fills the
holes in the instantaneous footprint of the multi-pixel. To do this, we use a
property of the derotator: it can be configured so that one of the main axes
of the multi-pixel is rotated by an angle α from the scanning direction.
Indeed, we can ask which value of α is needed so that the distance between
the rows of two adjacent pixels is exactly Δ, i.e. d_pix sin α = Δ. For a
receiver of n_pix = n x n pixels, we then end up with n groups of n lines,
the distance between two groups of lines being noted d_group. A bit of
geometry gives

    d_group = d_pix cos α - (n - 1) d_pix sin α.                    (8)

If we now impose that the gap between two groups is exactly filled by the
lines of the other subscans, without redundancy, that is

    d_group = [(n_subscan - 1) n + 1] Δ,                            (9)

we obtain

    tan α = 1 / (n n_subscan).                                      (10)

We can fully sample without redundancy a given fraction of the sky in a
single subscan (n_subscan = 1) or in two parallel subscans (zigzag,
n_subscan = 2).
For HERA, the pixel separation d_pix is set by the optics and the Δ value is
fixed by the observing wavelength λ ≈ 1 mm. Since Δ = d_pix sin α,
n_subscan = 1 gives α = arctan(1/3) ≈ 18.4° and requires d_pix = √10 Δ,
while n_subscan = 2 gives α = arctan(1/6) ≈ 9.5° and requires d_pix = √37 Δ.
The current optical design implies a minimum distance between the pixels
which is only compatible with the n_subscan = 2 solution.
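These geometrical relations are easy to evaluate numerically. The following sketch assumes our reconstructed relations tan α = 1/(n n_subscan) and Δ = d_pix sin α; the function names are illustrative, not GILDAS routines:

```python
import math

def derotator_angle(n, n_subscan):
    """Derotator angle (degrees) between an array axis and the scanning
    direction, assuming tan(alpha) = 1 / (n * n_subscan)."""
    return math.degrees(math.atan(1.0 / (n * n_subscan)))

def required_pixel_separation(delta, n, n_subscan):
    """Pixel separation d_pix such that d_pix * sin(alpha) = Delta,
    i.e. d_pix = Delta * sqrt(1 + (n * n_subscan)**2)."""
    return delta * math.sqrt(1.0 + (n * n_subscan) ** 2)

# 3x3 array: single subscan vs zigzag
print(round(derotator_angle(3, 1), 1))  # 18.4
print(round(derotator_angle(3, 2), 1))  # 9.5
```

For a 3x3 array these reproduce the two angles quoted above, arctan(1/3) and arctan(1/6).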
In summary, by setting an angle α between one of the main axes of an
n x n multi-pixel and the scanning direction, we can sweep a given portion of
the sky in fully sampled mode with n_subscan parallel subscans separated by
n Δ. The fully sampled region of the sky is then rectangular: the length
l_⊥ of the rectangle side perpendicular to the scanning direction is a
multiple of the swath width n² n_subscan Δ, while the length l_∥ of the side
parallel to the scanning direction depends on the observing strategy.
However, there is an edge effect, due to the rotation of the array with
respect to the scanning direction: the edges of the map are not fully
sampled and must thus be considered as overheads. The area of the scanned
sky must therefore be larger than the targeted area, which must be fully
sampled. Let us assume that the targeted area A_target is swept as a
succession of rectangular chunks of size l_⊥ x l_∥, each chunk consisting of
n_⊥ rows of n_subscan subscans. We get

    A_chunk = l_⊥ l_∥,  with  l_⊥ = n_⊥ n² n_subscan Δ.             (11)
The area A_edge swept in the under-sampled edges of a chunk is just the area
of the rectangle whose sides are l_⊥ and the size of the multi-pixel
footprint, rotated by α, projected onto the scanning direction, i.e.

    A_edge = l_⊥ (n - 1) d_pix (cos α + sin α).                     (12)

Indeed, the geometry of the edges shows that half this area is covered on
each side of the targeted area. Using Eqs. 8 and 9, we obtain

    A_edge = l_⊥ d_edge,  with  d_edge = (n - 1)(n n_subscan + 1) Δ. (13)
We now define the mapping efficiency η_edge as

    η_edge = A_chunk / (A_chunk + A_edge).                          (14)

Replacing A_chunk and A_edge by their expressions 11 and 13, we derive

    η_edge = 1 / (1 + d_edge / l_∥).                                (15)
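To see why wide scans help, one can evaluate this efficiency for increasing scan lengths. A minimal sketch, assuming our reconstructed form η_edge = 1/(1 + d_edge/l_∥) and an arbitrary d_edge value:

```python
def edge_efficiency(l_par, d_edge):
    """Mapping efficiency 1 / (1 + d_edge / l_par): fully sampled area
    over total (fully sampled + edge) area of a chunk."""
    return 1.0 / (1.0 + d_edge / l_par)

# Doubling the scan length halves the relative edge overhead.
for l_par in (60.0, 120.0, 240.0):  # arbitrary units, same as d_edge
    print(round(edge_efficiency(l_par, d_edge=30.0), 3))
```

The efficiency tends to 1 as l_∥ grows, which is the "very wide scans" limit discussed next.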
This expression indicates that the most efficient mapping strategy is to
observe very wide scans (large l_∥). However, avoiding the edge overheads is
only one aspect of wide-field mapping with a multi-pixel. In particular, we
aim at obtaining as homogeneous a map as possible. To achieve this, we need
to scan as fast as possible, so that the observing conditions are comparable
over the whole map. We can then repeat the map as many times as needed, so
that data affected by technical problems or bad weather during one coverage
can simply be discarded. In any case, at least two coverages obtained with
perpendicular scanning directions are always advised, to be able to use
destriping algorithms (e.g. plait algorithms). Stripes happen because the
stability of the system (weather, telescope, receiver and backend) evolves
from one row to the next: the longer it takes to scan a row, the more
probable the stripes. This argues against very wide scans, which are at the
same time required to decrease the relative time spent in the edge
overheads. A compromise is thus to map area chunks which are as close as
possible to squares. A way to parametrize this is to introduce the map
aspect ratio, defined as

    a = l_∥ / l_⊥.                                                  (16)
A given area A_target will be mapped in chunks whose area A_chunk is defined
by the linear scanning speed v and the stability time of the system,
t_stab. The length scanned during one stability time, v t_stab, is spread
over n_⊥ rows of n_subscan subscans, each of length l_∥ + d_edge. This gives

    v t_stab = n_⊥ n_subscan (l_∥ + d_edge).                        (17)

Using Eq. 16 to replace l_∥ by a l_⊥ = a n_⊥ n² n_subscan Δ, we yield

    a n² n_subscan² Δ n_⊥² + n_subscan d_edge n_⊥ - v t_stab = 0.   (18)
This equation of the second order has only one physical (positive) solution,

    n_⊥ = [d_edge / (2 a n² n_subscan Δ)] [√(1 + x) - 1],           (19)

We note that this yields

    η_edge = (√(1 + x) - 1)² / x,                                   (20)

    with  x = 4 a n² Δ v t_stab / d_edge².                          (21)
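The closed-form root can be checked against the quadratic it solves. A sketch assuming our reconstructed Eq. 18, a n² n_subscan² Δ n_⊥² + n_subscan d_edge n_⊥ - v t_stab = 0 (symbols and values are illustrative):

```python
import math

def n_perp_real(a, n, n_subscan, delta, d_edge, v_t_stab):
    """Positive root of a*n^2*ns^2*D*x^2 + ns*d_edge*x - v*t_stab = 0."""
    x = 4.0 * a * n**2 * delta * v_t_stab / d_edge**2
    return d_edge / (2.0 * a * n**2 * n_subscan * delta) * (math.sqrt(1.0 + x) - 1.0)

# Check that the root indeed satisfies the quadratic.
a, n, ns, delta, vt = 1.0, 3, 2, 1.0, 1000.0
d_edge = (n - 1) * (n * ns + 1) * delta
r = n_perp_real(a, n, ns, delta, d_edge, vt)
residual = a * n**2 * ns**2 * delta * r**2 + ns * d_edge * r - vt
print(abs(residual) < 1e-9)  # True
```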
This expression can be used to understand how to get the highest mapping
efficiency (η_edge as close to 1 as possible). This implies getting the
largest possible value of the x = 4 a n² Δ v t_stab / d_edge² ratio. Since
d_edge grows roughly as n², the larger the multi-pixel array, the smaller
this ratio. Increasing the chunk area, either by increasing the linear
velocity (i.e. increasing the dump rate) or by increasing the stability time
t_stab, will increase the efficiency. The dump rate is fixed by the peak
data rate that the system can sustain. The stability time depends on the
switching mode: it is the time between two off measurements in position
switch (typically 1 or 2 minutes) and the time between two calibrations in
frequency switch (typically 10 to 15 minutes).
Table 1: Mapping strategy to minimize edge effects.

    t_stab (min.)   n_⊥     a       η_edge
    1               1       3.7     0.83
    2               2       1.9     0.83
    5               4       1.2     0.86
    10              6       1.1     0.90
The previous equations give the impression that the aspect ratio is a free
parameter. This is not fully true because n_⊥ must be an integer. The
following algorithm ensures that we get an integer value of n_⊥ with the
associated value of a closest to 1. Starting with a = 1, Eq. 18 gives a real
value of n_⊥. We enforce the integer nature of n_⊥ with

    n_⊥ ← max(1, ⌈n_⊥⌉),                                            (22)

and we recompute the associated aspect ratio by solving Eq. 18 for a at
fixed n_⊥, i.e.

    a = (v t_stab - n_subscan d_edge n_⊥) / (n² n_subscan² Δ n_⊥²). (23)

Table 1 gives the resulting values of n_⊥, a and η_edge as a function of the
stability time t_stab. We see that the edge efficiencies are quite high.
However, it is easier to obtain square chunks when the stability time is
larger.
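The integer rounding and the aspect-ratio update can be sketched as follows, assuming Eq. 22 is a ceiling and Eq. 23 is Eq. 18 solved for a at fixed n_⊥ (both are our reconstructions, with illustrative values):

```python
import math

def integer_n_perp(n, n_subscan, delta, d_edge, v_t_stab):
    """Start from a = 1, take the ceiling of the real root of the
    quadratic (assumed Eq. 22), then solve it for a (assumed Eq. 23)."""
    x = 4.0 * n**2 * delta * v_t_stab / d_edge**2
    real = d_edge / (2.0 * n**2 * n_subscan * delta) * (math.sqrt(1.0 + x) - 1.0)
    n_perp = max(1, math.ceil(real))
    a = (v_t_stab - n_subscan * d_edge * n_perp) / (n**2 * n_subscan**2 * delta * n_perp**2)
    return n_perp, a

print(integer_n_perp(3, 2, 1.0, 14.0, 1000.0))  # (5, 0.955...)
```

Because n_⊥ is rounded up, the recomputed a drops slightly below its starting value, which keeps the chunks close to square.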
In summary, the time spent in the edges is counted as overheads. It
translates into a multiplicative efficiency η_edge because we enforce a
mapping pattern made of rectangular chunks. Although it is not intuitive
(edge sizes are in general unrelated to area), this is not a strong
assumption because the use of a square multi-pixel anyway enforces mapping
in rectangular chunks. We now summarize the algorithm to compute η_edge:
- Step #1: Computation of the input quantities (Eqs. 24-27): the derotator
  angle α (Eq. 10), the edge size d_edge (Eq. 13), and the length v t_stab
  scanned during one stability time.
- Step #2: Computation of n_⊥ and a: starting from a = 1, solve Eq. 18 for
  n_⊥ (Eq. 19); if the solution is smaller than one, set n_⊥ = 1.
- Step #3: Computation of η_edge (Eq. 34, i.e. Eqs. 20 and 21).
- Step #4: Recomputation of n_⊥ and a when n_⊥ is not an integer
  (Eqs. 22 and 23).
If the time needed to map one chunk is less than one minute, the targeted
area is too small and the PI should use raster mapping instead of OTF
mapping.
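Under the same reconstructed relations, the whole chain of steps can be gathered in one sketch (symbols, defaults and helper name are ours; this is an illustration, not the GILDAS implementation):

```python
import math

def edge_overhead_summary(n, n_subscan, delta, v_t_stab):
    """Steps 1-4: angle alpha, edge size d_edge, integer n_perp,
    adjusted aspect ratio a, and edge efficiency eta."""
    alpha = math.degrees(math.atan(1.0 / (n * n_subscan)))      # Step 1
    d_edge = (n - 1) * (n * n_subscan + 1) * delta              # Step 1
    x = 4.0 * n**2 * delta * v_t_stab / d_edge**2               # a = 1
    real = d_edge / (2.0 * n**2 * n_subscan * delta) * (math.sqrt(1.0 + x) - 1.0)
    n_perp = max(1, math.ceil(real))                            # Steps 2 and 4
    a = (v_t_stab - n_subscan * d_edge * n_perp) / (n**2 * n_subscan**2 * delta * n_perp**2)
    l_par = a * n_perp * n**2 * n_subscan * delta               # l_par = a * l_perp
    eta = 1.0 / (1.0 + d_edge / l_par)                          # Step 3
    return alpha, n_perp, a, eta

alpha, n_perp, a, eta = edge_overhead_summary(3, 2, 1.0, 1000.0)
print(n_perp, round(a, 2), round(eta, 2))  # 5 0.96 0.86
```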
Gildas manager
2011-09-07