1 BACKGROUND
BACKGROUND
Computes the background image of a given map. The basic idea is to make a crude mesh model of the background by finding the most likely value of the original map (within some intensity range) for each cell of the crude mesh. The most likely value is found by making a histogram of the intensity distribution in the specified range for all points lying in a circle around the cell center. The circle may (and should) be greater than the cell size. If the number of pixels within the intensity range is too small, no value is attributed to the intermediate mesh. The intermediate model is then resampled to the original map using a general triangulation technique. This procedure has several advantages over a simpler method which would compute a "smoothed" image as the background, because it is not biased by any emission outside the selected range, and it is able to interpolate over large unsampled areas of the intermediate mesh. A small illustrative sketch of the mesh estimate is given at the end of this section. Other tasks with similar names use other interpolation algorithms. They will hopefully be merged into a single task (with a switch for the interpolation algorithm).
2 Y_NAME$
BACKGROUND: Y_NAME$
The name of the input image on which the background emission is to be determined.
2 X_NAME$
BACKGROUND: X_NAME$
The name of the output file containing the background emission.
2 X_MIN$
BACKGROUND: X_MIN$
The minimum value of the background emission. Values lower than X_MIN$ are considered as real source structure.
2 X_MAX$
BACKGROUND: X_MAX$
The maximum value of the background emission. Values higher than X_MAX$ are considered as real source structure.
2 NBINS$
BACKGROUND: NBINS$
The number of histogram bins used to derive the value of the background emission. This number must be high enough to derive the most likely value over the smoothing circle, but small enough compared to the number of pixels in the smoothing circle to get good statistics on each bin. It should usually be of the order of WIDTH$, the radius of the smoothing circle.
2 WIDTH$
BACKGROUND: WIDTH$
Radius (in pixels) of the smoothing circle. The number of pixels in the smoothing circle must be high enough to get statistically significant values. For proper (Nyquist) sampling of the background this should also be larger than twice SCALE$.
2 SCALE$
BACKGROUND: SCALE$
The ratio between the sizes of the original image and the background mesh. It must be a power of 2, and depends on the size scale of the background variations that you want to remove. Large values of SCALE$ will only remove very large scale background. Low values will also remove small scale background (and maybe some source structure as well...).
2 N_MIN$
BACKGROUND: N_MIN$
A minimum number of pixels in the histogram to consider the result as significant. This number is used to avoid biasing the background by areas where few points lie in the range of the background emission. It should typically be 50 % of the number of pixels in the smoothing circle. The background mesh value is blanked when too few valid pixels are found, and these points are ignored by the interpolation.
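The mesh estimate described above can be illustrated with a short Python/numpy sketch. This is illustrative only, not the GILDAS code; the argument names mirror the task keywords, and NaN is used where the task would blank a mesh cell before the triangulation-based resampling.

  import numpy as np

  def background_mesh(image, x_min, x_max, nbins, width, scale, n_min):
      """Crude background mesh: for each cell of a coarse grid (one cell per
      SCALE x SCALE block of pixels), histogram the pixels lying within a
      circle of radius WIDTH around the cell centre and keep the most likely
      (modal) value.  Cells with fewer than N_MIN valid pixels are left
      blank (NaN) so a later interpolation step can fill them in."""
      ny, nx = image.shape
      mesh = np.full((ny // scale, nx // scale), np.nan)
      yy, xx = np.mgrid[0:ny, 0:nx]
      for j in range(mesh.shape[0]):
          for i in range(mesh.shape[1]):
              yc, xc = (j + 0.5) * scale, (i + 0.5) * scale
              inside = (yy - yc) ** 2 + (xx - xc) ** 2 <= width ** 2
              values = image[inside]
              values = values[(values >= x_min) & (values <= x_max)]
              if values.size < n_min:
                  continue                      # too few pixels: leave blank
              counts, edges = np.histogram(values, bins=nbins,
                                           range=(x_min, x_max))
              k = np.argmax(counts)             # most populated bin
              mesh[j, i] = 0.5 * (edges[k] + edges[k + 1])
      return mesh

The blanked mesh cells would then be filled, and the mesh resampled back to the original grid, by the triangulation technique mentioned above.
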
1 BLANKING
BLANKING
Modifies the blanking value of an image. This operation cannot be done through the HEADER program, which only modifies the blanking value of the header...
2 Y_NAME$
BLANKING: Y_NAME$
The name of the input image (with the old blanking value).
2 X_NAME$
BLANKING: X_NAME$
The name of the output file (with the modified blanking value).
2 BLANKING$
BLANKING: BLANKING$
This is the new blanking value.
2 TOLERANCE$
BLANKING: TOLERANCE$
This is the new tolerance on blanking.

1 CIRCLE
CIRCLE
This program computes annular averages on an input cube to produce an output map. Each line of the output map is the radial profile for one particular plane of the input cube. The task can be used to derive radial profiles, for example in circumstellar envelopes. The radial pixel separation is always the X and Y increment. If the X and Y increments are different, the averages are therefore computed on elliptical rings, despite the name of the task.
2 Y_NAME$
CIRCLE: Y_NAME$
The name of the input cube on which the circular averages are computed.
2 X_NAME$
CIRCLE: X_NAME$
The name of the output image in which the circular averages are stored.
2 CENTER$
CIRCLE: CENTER$[2]
The X and Y user coordinates of the circle center.
2 PROFILE$
CIRCLE: PROFILE$
The number of rings for which an average value is computed. The distance between successive rings is fixed by the X and Y increments.

1 CLEAN
CLEAN
This task contains implementations of several CLEAN-like deconvolution algorithms. The algorithm is selected through the METHOD$ keyword, which can be :
SIMPLE    : The straightforward direct method.
CLARK     : The Barry Clark major-minor cycle method.
MRC       : A Multi-Resolution Clean.
THRESHOLD : Components are not restricted to be point sources.
The input map and beam should have the same sizes (except for the SIMPLE method, where different beam sizes are allowed) and must have the same grid spacing. Sources are searched only in the inner quarter of the map, but their sidelobes are removed in the full map. A smaller search box can be specified if desired. The program can process all planes of a data cube at the same time, but a separate list of components will be produced for each plane. The cleaning cannot be restarted. A minimal sketch of the basic component subtraction loop is given at the end of this section.
2 METHOD$
CLEAN: METHOD$
Specify the algorithm to use among :
SIMPLE    : The straightforward direct method. Safe, but slow.
CLARK     : The Barry Clark major-minor cycle method. Safe and fast.
MRC       : A Multi-Resolution Clean. Recommended for extended sources or large images. NOT FULLY DEBUGGED.
THRESHOLD : Components are not restricted to be point sources. NOT RECOMMENDED, and NOT DEBUGGED.
2 DIRTY$
CLEAN: DIRTY$
This is the name of the input "dirty map". If the map has more than two dimensions, all planes will be cleaned successively.
2 BEAM$
CLEAN: BEAM$
This is the name of the input "dirty beam" (Point Spread Function in optical jargon). The beam cannot be a cube.
2 RESIDUAL$
CLEAN: RESIDUAL$
This is the name of the output residual map. The residual map is not deleted at the end of the run.
2 CLEAN$
CLEAN: CLEAN$
This is the name of the output "clean map".
2 GAIN$
CLEAN: GAIN$
This is the gain of the subtraction loop. It should typically be chosen in the range 0.05 to 0.3. Higher values give faster convergence, while lower values give a better restitution of the extended structure.
2 NITER$
CLEAN: NITER$
This is the maximum number of components the program will accept to subtract. Once it has been reached, the program starts the restoration phase.
2 FRES$
CLEAN: FRES$
This is the minimal fraction of the peak flux in the dirty map that the program will consider as significant. Alternatively, an absolute threshold can be specified using ARES$.
Once this level has been reached the program stops subtracting, and starts the restoration phase. This parameter is normalised to 1 (neither in % nor in dB). It should usually be of the order of magnitude of the inverse of the expected dynamic range.
2 ARES$
CLEAN: ARES$
This is the minimal flux in the dirty map that the program will consider as significant. Alternatively, the threshold can be specified as a fraction of the peak flux using FRES$. Once this level has been reached the program stops subtracting, and starts the restoration phase. The unit for this parameter is the map unit. The parameter should usually be of the order of magnitude of the expected noise in the clean map.
2 BLC$
CLEAN: BLC$[4]
These are the (pixel) coordinates of the Bottom Left Corner of the cleaning box. Only the first two coordinates are actually used. The actual cleaning window will be the intersection of the specified window with the inner quarter of the map.
2 TRC$
CLEAN: TRC$[4]
These are the (pixel) coordinates of the Top Right Corner of the cleaning box. Only the first two coordinates are actually used. The actual cleaning window will be the intersection of the specified window with the inner quarter of the map.
2 KEEP$
CLEAN: KEEP$
This is a logical flag to keep cleaning after an approximate convergence has been reached. It should usually be set to .TRUE., except maybe for the SIMPLE method.
2 MAJOR$
CLEAN: MAJOR$
This is the major axis (FWHP) in user coordinates of the gaussian restoring beam.
2 MINOR$
CLEAN: MINOR$
This is the minor axis (FWHP) in user coordinates of the gaussian restoring beam.
2 PA$
CLEAN: PA$
This is the position angle (from North towards East, i.e. anticlockwise) of the major axis of the gaussian restoring beam (in degrees).
2 BEAM_PATCH$
CLEAN: BEAM_PATCH$[2]
The dirty beam patch to be used for the minor cycles in the CLARK and MRC methods. It should be large enough to avoid doing too many major cycles, but has practically no influence on the result. This size should be specified in pixel units. Reasonable values are between N/8 and N/4, where N is the number of map pixels in the same dimension. If set to N, the CLARK algorithm becomes identical to the SIMPLE algorithm.
2 CLARK
A Major-Minor cycle method in which clean components are selected using a limited beam patch, and deconvolved through Fourier transform at each major cycle.
2 MRC
A Multi-Resolution Clean. Two maps are cleaned instead of only one : a smooth dirty map, and the difference between this smooth map and the original map. The same process is applied to the dirty and clean beams. The final clean map is obtained by adding the two clean and two residual maps with proper weighting. Since the difference map contains no flux, a limited Clean can be performed on it. The smooth map contains only larger structure, and is compressed before cleaning. A deeper clean can be performed on extended structures, because they look more point-like and have better signal to noise in the smoothed map. Hence the MRC algorithm is able to recover more extended structure than a standard Clean.
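As announced in the introduction of this section, the basic component subtraction loop shared by these methods (closest in spirit to the SIMPLE method) can be sketched as follows in Python/numpy. This is an illustration only, not the GILDAS implementation; the FRES$ stopping criterion and the restoration with the gaussian clean beam are omitted.

  import numpy as np

  def simple_clean(dirty, beam, gain=0.1, niter=500, ares=0.0):
      """Minimal CLEAN loop: locate the peak of the residual, record a point
      component of GAIN times that peak, subtract the dirty beam shifted to
      the peak position, and stop after NITER components or when the peak
      drops below ARES.  The beam is assumed to peak (value 1) at its
      central pixel."""
      residual = dirty.copy()
      components = []                          # (iy, ix, flux) triplets
      cy, cx = np.array(beam.shape) // 2       # beam centre
      for _ in range(niter):
          iy, ix = np.unravel_index(np.argmax(np.abs(residual)),
                                    residual.shape)
          if abs(residual[iy, ix]) < ares:
              break
          flux = gain * residual[iy, ix]
          components.append((iy, ix, flux))
          # overlap of the shifted beam with the map
          y0, x0 = iy - cy, ix - cx
          ys = slice(max(0, y0), min(residual.shape[0], y0 + beam.shape[0]))
          xs = slice(max(0, x0), min(residual.shape[1], x0 + beam.shape[1]))
          residual[ys, xs] -= flux * beam[ys.start - y0:ys.stop - y0,
                                          xs.start - x0:xs.stop - x0]
      return components, residual
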
1 COMBINE
COMBINE
This task makes "combinations" of two input images to produce a third one. The two input images may have the same dimensions, or the first one (the Z one) may have fewer dimensions than the second (Y) one. In the latter case, combinations will occur for all the extra planes of the Y image. For example you can divide all planes of an input (Y) 3-D cube by a 2-D (Z) image, provided each plane of the cube matches the single image... Operations are
ADD            X = AY*Y + AZ*Z + C
MULTIPLY       X = AY*Y * AZ*Z + C
DIVIDE         X = AY*Y / AZ*Z + C
OPTICAL_DEPTH  X = - LOG (AY*Y / AZ*Z + C)
provided Y > TY and Z > TZ, TY and TZ being thresholds for the Y and Z images. Image combinations may also be done using the SIC arithmetic capabilities, but COMBINE offers the advantage of handling blanking information correctly.
2 Z_NAME$
COMBINE: Z_NAME$
This is the name of the input map with the smaller number of dimensions.
2 Z_FACTOR$
COMBINE: Z_FACTOR$
This is a scaling factor for map Z_NAME$.
2 Z_MIN$
COMBINE: Z_MIN$
This is a threshold on map Z_NAME$.
2 Y_NAME$
COMBINE: Y_NAME$
This is the name of the input map with the larger number of dimensions.
2 Y_FACTOR$
COMBINE: Y_FACTOR$
This is a scaling factor for map Y_NAME$.
2 Y_MIN$
COMBINE: Y_MIN$
This is a threshold on map Y_NAME$.
2 X_NAME$
COMBINE: X_NAME$
This is the name of the output map.
2 BLANKING$
COMBINE: BLANKING$
This is the blanking value chosen for the output map.
2 OFFSET$
COMBINE: OFFSET$
This is an offset added to the output map.
2 FUNCTION$
COMBINE: FUNCTION$
Selected operation. Possible operations are ADD, MULTIPLY, DIVIDE (Y by Z), and OPTICAL_DEPTH (-Log(DIVIDE)).

1 Concepts
GILDAS
Grenoble Image and Line Data Analysis System
This package consists of three different parts :
1. A number of utilities named "tasks", which are basically non-interactive programs with a parameter file. A given program (e.g. "PROG") is located in GILDAS_RUN:PROG. The parameter file is prepared by the commands RUN and SUBMIT of several programs.
2. Four interactive programs named VECTOR, DISPLAY, GRAPHIC and OVERLAY. VECTOR is able to execute the tasks mentioned before by means of the commands SUBMIT and RUN. DISPLAY is in addition able to make colour bit-map displays of the images on an image processor. GRAPHIC is able to make graphic displays of the images (contours, etc...). OVERLAY is a superset of DISPLAY and GRAPHIC.
3. GFITS, an interactive FITS to GDF (Grenoble Data Format, used to store images) translator.
This help library GAG_HELP:TASK summarizes help on the various tasks now available. The interactive programs have their own HELP libraries in GAG_HELP.

1 CORRELATE
CORRELATE
Computes the correlation of two images or data cubes. The result is an image (data cube) containing
Out(i,j) = < In1(k-i,l-j)*In2(k,l) >   averaged over k,l
for the correlation mode (MODE$ = YES), or
Out(i,j) = < In1(k-i,l-j)**2 + In2(k,l)**2 - 2*In1(k-i,l-j)*In2(k,l) >   averaged over k,l
for the square mode (MODE$ = NO). Actually, linear conversion formulas are used to keep the correlation image meaningful in user coordinates. The input images must match. When used for example to recenter images, the position of the maximum of the correlation image (or equivalently of the minimum of the sum of squares image) yields the required recentering. MODE$ YES (Correlation) is to be used when the input distribution has a finite extent, while MODE$ NO (Square) can be used in any case, but is somewhat slower of course.
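For a small OUT_SIZE$, the correlation mode defined above can be sketched as follows (illustrative Python/numpy only; the real task also handles the square mode, blanking and the user-coordinate conversion):

  import numpy as np

  def correlate(in1, in2, nlag):
      """Out(i,j) = < In1(k-i,l-j) * In2(k,l) >, averaged over (k,l), for
      lags |i|, |j| <= nlag (the equivalent of a small OUT_SIZE$).
      np.roll wraps around at the map edges, which is good enough for a
      sketch."""
      out = np.zeros((2 * nlag + 1, 2 * nlag + 1))
      for i in range(-nlag, nlag + 1):
          for j in range(-nlag, nlag + 1):
              shifted = np.roll(np.roll(in1, i, axis=0), j, axis=1)
              out[i + nlag, j + nlag] = np.mean(shifted * in2)
      return out

The position of the maximum of this array gives the shift needed to recenter the two images, as explained above.
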
2 IN_NAME1$
CORRELATE: IN_NAME1$ (Character)
This is the name of the first input map.
2 IN_NAME2$
CORRELATE: IN_NAME2$ (Character)
This is the name of the second input map.
2 OUT_NAME$
CORRELATE: OUT_NAME$ (Character)
This is the name of the output map.
2 OUT_SIZE$
CORRELATE: OUT_SIZE$ (2 Integers)
Number of pixels (i,j) to keep in the correlation. If set to 0,0, the complete correlation image is computed, otherwise only the specified portion (around the center 0,0) is computed.
2 MODE$
CORRELATE: MODE$ (Logical)
Select the correlation mode : simple correlation (YES) or least-square distribution (NO).

1 CV_SMOOTH
CV_SMOOTH
This task smoothes an image using a cross-validation algorithm. The idea behind the algorithm is to use all data points except one to estimate the value of this one point. The difference between estimated and measured values is used to determine the noise level in the map, and therefore the appropriate amount of smoothing. The smoothing is non-uniform, and depends on the signal to noise ratio: low level emission is smoothed more than strong peaks. The present algorithm has a number of restrictions which we hope to remove in future versions:
- It flatly refuses to do anything if the noise is correlated between adjacent points, but it takes quite a long time to find this... Of course this always occurs for interferometric maps. Hence, it is possible to add a small amount of extra noise to the image to by-pass this stupid restriction.
- It is basically a 1D algorithm. The algorithm smoothes along axis 1 then 2 first, and in a second pass along 2 then 1. It takes the average of the two results to produce the smoothed image, and keeps the difference, which should be representative of the errors. A 2D generalisation of the algorithm exists (Girard D., 1987, Rapport de Recherche RR 669-M, TIM3, Universite de Grenoble), and we hope to implement it in the near future.
2 Z_NAME$
CV_SMOOTH: Z_NAME$
This is the name of the input map. Cubes are probably not supported at present.
2 Y_NAME$
CV_SMOOTH: Y_NAME$
This is the name for the output difference map. It is not deleted automatically.
2 X_NAME$
CV_SMOOTH: X_NAME$
This is the name of the output smoothed map.
2 NOISE$
CV_SMOOTH: NOISE$
This is the rms (in map units) of the optional additional gaussian noise. The additional noise is used to fool the algorithm in cases where the noise is correlated between adjacent pixels. Specify 0 if you don't want to add extra noise to your input map.

1 DFT
DFT : map making from UV data
DFT makes a map from UV data using a (slow) direct Fourier Transform. This method introduces no aliasing and requires no grid correction, but it is slow. Allow about 3 minutes for 1 channel and 500 visibilities. The processing time is roughly given by
TIME = 6*(NV/1000)*(NX/64)*(NY/64)*(NC/20+1)   minutes
with a reasonable guess for the cost of the Sin and Cos. This task processes all table channels at once. A sketch of the direct transform is given at the end of this section.
2 UV_TABLE$
TASK\FILE "UV table" UV_TABLE$ = The sorted, precessed UV table name. Default extension is .UVT. A raw UV table can be sorted and precessed using the UVSORT task.
2 MAP_NAME$
TASK\CHARACTER "Map name" MAP_NAME$ = The output map name. Default extension is .LMV
2 UV_TAPER$
TASK\REAL "UV taper (1/e level, meters)" UV_TAPER$ = The UV taper (to be applied in both directions).
2 WEIGHTMODE$
TASK\CHARACTER "Weight mode (NA or UN)" WEIGHTMODE$ = NAtural (optimum in terms of sensitivity) or UNiform (usually lower sidelobes) weighting.
2 MAP_SIZE$
TASK\INTEGER "Map size(2)" MAP_SIZE$[2] = Number of pixels in X and Y. Need not be a power of two, but that would be much better for any further image processing.
2 MAP_CELL$
TASK\REAL "Map cell (arc sec)" MAP_CELL$ = The map cell size (identical in X and Y).
2 UV_CELL$
TASK\REAL "UV cell (m), for unif. weighting" UV_CELL$ = The UV cell size for uniform weighting. Should be of the order of half the dish diameter, or smaller.
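The direct transform announced above is only a few lines; the sketch below (Python/numpy, illustrative only, with natural weights for one channel and no taper) shows the principle and why the cost scales as NV*NX*NY.

  import numpy as np

  def dft_image(u, v, vis, weight, nx, ny, cell):
      """Direct (slow) Fourier transform imaging of one channel.
      u, v   : baseline coordinates in wavelengths
      vis    : complex visibilities
      weight : visibility weights
      nx, ny : output map size in pixels
      cell   : pixel size in radians
      No gridding is involved, hence no aliasing and no grid correction."""
      x = (np.arange(nx) - nx // 2) * cell      # sky offsets (radians)
      y = (np.arange(ny) - ny // 2) * cell
      dirty = np.zeros((ny, nx))
      wsum = weight.sum()
      for j, yj in enumerate(y):
          for i, xi in enumerate(x):
              phase = 2.0 * np.pi * (u * xi + v * yj)
              dirty[j, i] = np.sum(weight * (vis.real * np.cos(phase)
                                             - vis.imag * np.sin(phase))) / wsum
      return dirty
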
1 DG_SMOOTH
DG_SMOOTH
Smoothes an image using the Conjugate Gradient Algorithm. (Author: Didier GIRARD, Groupe d'Astrophysique)
The smoothed image is the equilibrium state of a thin flexible plate constrained to pass near each height datum by a spring attached between it and the plate. The smoothing is controlled by the "Smoothing Parameter" P :
P = (plate stiffness) / (springs stiffness)
A low value of P (0.001) means high fidelity to the original data and should be used for high signal to noise ratios. If the data are noisy, use higher values of P (0.1 to 10). The algorithm is iterative and needs a work space equal to 3 input maps. It usually converges in 10 iterations, and can be restarted if you save the work files. Timing is of the order of 2 seconds of microVAX II CPU time for a 128 by 128 map. Blanked pixels are taken into account properly.
2 RESTART$
DG_SMOOTH: RESTART$
This is a logical flag to indicate a restart from a previous run of the program.
2 P$
DG_SMOOTH: P$
This is the value of the smoothing parameter. It should remain between 0.001 and 10. Higher values give stronger smoothing.
2 NITER$
DG_SMOOTH: NITER$
This is the number of smoothing iterations. Ten iterations are usually enough to reach convergence.
2 Z_NAME$
DG_SMOOTH: Z_NAME$
This is the name of the input file.
2 Y_NAME$
DG_SMOOTH: Y_NAME$
This is the name of the internal work file. It is not deleted automatically, and it is needed if you want to restart from a previous number of iterations.
2 X_NAME$
DG_SMOOTH: X_NAME$
This is the name of the output smoothed image.
2 GUESS$
DG_SMOOTH: GUESS$
This is an initial guess for the value attributed to blanked pixels. Try to find something reasonable. Convergence can be delayed if you choose a crazy value.

1 Display
An interactive program to make bit-map (colour) displays of images on an image processor. Currently available on ARGS-7000 (Sigmex) and VAXStation GPX (Digital Equipment Corp.)

1 EXAMPLE
This is a sample program doing nothing, but used to test GILDAS...
2 A$
EXAMPLE: A$
A real value between 0 and 1.
2 CHAIN$
EXAMPLE: CHAIN$
A text to be sent to somebody.
2 ARRAY$
EXAMPLE: ARRAY$[4]
A real array of dimension 4. Enter all values, even though they are not used.
2 FILE$
EXAMPLE: FILE$
This string must be a valid file name. DecNET names are not allowed.

1 EXTRACT
EXTRACT
Extracts a subset of an input image. The output image can be larger than the subset; in this case the additional pixels are blanked if the output image is being created, and left unmodified if it already exists. It works on images of any dimensions, and any subset of the input image can be placed anywhere in the output image. This routine can also be used to build an N+P dimensional image from a set of N dimensional ones, by initializing the output image once with its full dimensions and then placing the (subset of the) input images at the appropriate place in the output image.
2 Y_NAME$
EXTRACT: Y_NAME$
This is the name of the input file.
2 X_NAME$
EXTRACT: X_NAME$
This is the name of the output file.
2 BLC$
EXTRACT: BLC$[4]
This is the position (in pixel units) of the Bottom Left Corner of the extracted part (in the input map of course). 0 means 1.
2 TRC$
EXTRACT: TRC$[4]
This is the position (in pixel units) of the Top Right Corner of the extracted part (in the input map of course). 0 means the current image dimension.
2 PIXEL_IN$
EXTRACT: PIXEL_IN$[4]
This is the position of one pixel in the input map. Together with the position of the same pixel in the output map (see PIXEL_OUT$), it is used to align the two maps.
2 PIXEL_OUT$
EXTRACT: PIXEL_OUT$[4]
This is the position of one pixel in the output map. Together with the position of the same pixel in the input map (see PIXEL_IN$), it is used to align the two maps.
2 INITIALIZE$
EXTRACT: INITIALIZE$
Answer .TRUE. if you want to initialize the output map, .FALSE. if you want to insert the extracted part in an existing output map.
2 X_DIM$
EXTRACT: X_DIM$[4]
This information is only used if you initialize the output map. It is the total dimension of the output map. The answer should be TRC-BLC+1, unless you want to allow some extra space for other data. This can occur for example if you are building a cube from a set of planes. 0 means TRC-BLC+1.

1 FIELD_ALL
(Old name FIELD1, not yet normalized). Makes a field labelling of an image, and computes the field parameters. It produces a GREG output file containing the field parameters in the following order :
Col 1  X position of centroid
Col 2  Y position of centroid
Col 3  Integrated Flux
Col 4  Average Flux density
Col 5  Number of pixels
The label image produced during the process is lost. Optionally, GREG can be entered directly after the field labelling is finished.
Parameter File :
Using GreG                  $! ^Z for not using GreG
Field Threshold             $! In input map units
Maximum number of fields    $! Use a large number if you have memory
Field File                  $! An output summary for field statistics
X: Input File               $! The image to be processed

1 FIELD_FIND
FIELD_FIND
Makes the field labelling of an image, i.e. identifies connected areas with image values higher than a given threshold and attributes a number to them. The result is an image (of the same size as the input one), in which the value of a pixel is the number of the field to which it belongs. A zero value is attributed to pixels under the threshold. Note that blanked pixels may adversely affect the field labelling... A sketch of the labelling and field statistics steps is given at the end of the FIELD_LIST section.
2 Y_NAME$
FIELD_FIND: Y_NAME$
This is the name of the image to be labelled.
2 X_NAME$
FIELD_FIND: X_NAME$
This is the name of the output label image.
2 THRESHOLD$
FIELD_FIND: THRESHOLD$
This is the value of the threshold used during field labelling.

1 FIELD_LIST
FIELD_LIST
This task computes statistics on the fields of an image. It accepts as input the image itself and a label image describing the fields. The label image is an image (of the same size as the input image), in which the value of a pixel is the number of the field to which it belongs. A zero value is attributed to pixels which do not belong to any field. The label image is usually obtained by running task FIELD_FIND. The output is a Table file containing :
Col 1  Number of pixels
Col 2  Integrated flux
Col 3  X Abscissa of centroid (i.e. weighted by pixel values)
Col 4  Y Ordinate of centroid (idem)
Col 5  Major axis of fitted ellipse (should be correct now)
Col 6  Minor axis of fitted ellipse (idem)
Col 7  Position angle of fitted ellipse (idem)
2 Y_NAME$
FIELD_LIST: Y_NAME$
This is the image to be analysed.
2 X_NAME$
FIELD_LIST: X_NAME$
This is the label image defining the fields in the image to be analysed.
2 FIELD$
FIELD_LIST: FIELD$
This is the maximum number of fields that can be handled. Use a large value if you don't know.
2 T_NAME$
FIELD_LIST: T_NAME$
This is the name of the table that will contain the results of the analysis.
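The labelling and statistics steps of FIELD_FIND and FIELD_LIST can be imitated with scipy's connected-component labelling. The sketch below is illustrative only (the fitted ellipse parameters are left out, and positions are in pixels rather than user coordinates):

  import numpy as np
  from scipy import ndimage

  def find_and_list_fields(image, threshold):
      """FIELD_FIND-like step: label connected areas above THRESHOLD (label
      0 marks pixels below it).  FIELD_LIST-like step: number of pixels,
      integrated flux and flux-weighted centroid of each field."""
      mask = image > threshold
      labels, nfields = ndimage.label(mask)
      rows = []
      for field in range(1, nfields + 1):
          sel = labels == field
          npix = int(sel.sum())
          flux = float(image[sel].sum())
          yy, xx = np.nonzero(sel)
          xcen = float((xx * image[sel]).sum() / flux)
          ycen = float((yy * image[sel]).sum() / flux)
          rows.append((npix, flux, xcen, ycen))
      return labels, rows
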
1 FIELD_STAT
FIELD_STAT
This task computes the integrated spectrum of a number of fields in a data cube. It requests as input a 3-D image and a label image (usually produced by task FIELD_FIND). The output is a table where every even column is the average spectrum for one field. Odd columns contain the number of data points in the corresponding field (not an optimal storage indeed!).
2 Y_NAME$
FIELD_STAT: Y_NAME$
The name of the input data cube.
2 X_NAME$
FIELD_STAT: X_NAME$
The name of the input label image (usually produced by FIELD_FIND).
2 T_NAME$
FIELD_STAT: T_NAME$
The name of the output table.
2 FIELD$
FIELD_STAT: FIELD$
The maximum number of fields the program will accept. This determines the size of the output table. Answer the value that was found by FIELD_FIND.

1 File_Format
GILDAS is based on a memory mapping system which recognises a unique file format. This format may however be used for storing two somewhat different kinds of data :
1. IMAGES : images are regularly sampled 2-d, 3-d or 4-d data which require conversion formulae for the coordinates along each axis. Data values may be Integer, Real or Double precision, but most algorithms support only REAL images.
2. TABLES : Tables are ensembles of columns strictly equivalent to the formatted files used by GreG as input for the X,Y,Z buffers (indeed GreG is also able to read and write Tables). The only difference is that they are unformatted, and hence access time is typically 50 times faster... The number of lines is fixed, but the number of columns may be extended indefinitely. In fact, a table may be considered as a 2-D image (and vice-versa if you want), but does not require "axis" information...
Tables and Images may be produced by other software (mainly CLASS, if one considers that GreG is a subset of GRAPHIC), and fully manipulated by the GILDAS environment. They can be used in mathematical formulae in the SIC monitor.

1 FILL_CUBE
FILL_CUBE
This program resamples an input data cube on a finer grid for the first two dimensions (hence the data cube is treated as an ensemble of images). The output grid may be explicitly determined from the conversion formulae of the first two axes, or implicitly from the number of pixels of the first two axes and the input cube conversion formulae. Two different methods are available for the resampling :
The SLOW method, which is general and takes the input blanking value properly into account. This method is based on the same algorithm as the RANDOM command in GreG. It triangulates the input non-blanked values and uses Lagrange polynomials to interpolate on the finer grid. For optimisation purposes, the same triangulation is used for all planes, which assumes that the same pixels are blanked in all planes. If this is not the case, the individual planes should be extracted and processed separately.
The FAST method, which ignores the input blanking value. This method is based on the same algorithm as the RESAMPLE command in GreG. It is faster, but should be used only if no input pixel is blanked.
If the data is undersampled, this task is the recommended second step in the data analysis of a cube produced by command ANALYSE\CUBE in CLASS, immediately after the TRANSPOSE program. Task MAKE_CUBE is preferable for oversampled data.
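The SLOW method (triangulate the valid pixels, then interpolate on the finer grid) can be imitated plane by plane with scipy.interpolate.griddata. This is only a sketch, assuming blanked pixels are marked with NaN; the actual task uses Lagrange polynomials on its own triangulation and reuses that triangulation for all planes.

  import numpy as np
  from scipy.interpolate import griddata

  def resample_plane(plane, nx_out, ny_out):
      """Resample one (possibly blanked) plane onto a finer ny_out x nx_out
      grid spanning the same area.  Blanked input pixels (NaN) are simply
      left out of the triangulation."""
      ny, nx = plane.shape
      yy, xx = np.mgrid[0:ny, 0:nx]
      good = np.isfinite(plane)
      # output grid expressed in input pixel coordinates
      yo, xo = np.mgrid[0:ny_out, 0:nx_out]
      xo = xo * (nx - 1.0) / (nx_out - 1.0)
      yo = yo * (ny - 1.0) / (ny_out - 1.0)
      return griddata((xx[good], yy[good]), plane[good], (xo, yo),
                      method='linear')
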
2 Y_NAME$
FILL_CUBE: Y_NAME$
This is the name of the input data cube.
2 X_NAME$
FILL_CUBE: X_NAME$
This is the name of the output resampled data cube.
2 PIXELS$
FILL_CUBE: PIXELS$[2]
These are the dimensions of one plane in the resampled output image. They are usually set to 2 or 3 times the dimensions of the input plane. Unpredictable results will be obtained if the output dimensions are smaller than the input dimensions.
2 METHOD$
FILL_CUBE: METHOD$
This is the resampling method. The FAST method is faster, but only the SLOW method allows for blanked pixels.
2 MODE$
FILL_CUBE: MODE$
Determines whether the conversion formulas for the output plane are specified manually by the user (.FALSE.) or determined automatically (.TRUE.) from the input plane conversion formula and the number of pixels. Automatic mode is usually sufficient, unless you plan to compare the resampled data with an existing data cube.
2 AXIS_1$
FILL_CUBE: AXIS_1$[3]
This is the conversion formula for the first axis of the output plane: reference pixel, value at the reference pixel, and distance between successive pixels. This information is only needed if you have selected the manual mode for the conversion formula.
2 AXIS_2$
FILL_CUBE: AXIS_2$[3]
This is the conversion formula for the second axis of the output plane: reference pixel, value at the reference pixel, and distance between successive pixels. This information is only needed if you have selected the manual mode for the conversion formula.

1 FITS_GILDAS
FITS_GILDAS
Converts a disk FITS image into a GILDAS image. A disk FITS image is a file with 2880-byte blocks conforming to the FITS standard for the information written in the blocks. Such disk FITS images can be produced by other packages (e.g. AIPS). This task is intended for a single image. For many images, or tape FITS, use the interactive program GFITS. Images are assumed to have at most 4 axes.
2 FITS$
FITS$
Input disk FITS file name.
2 OUT$
OUT$
Output GILDAS image name.
2 STYLE$
STYLE$
The "Style" of FITS files. Many supposedly standard FITS files have small (although non-fatal) deviations from the (incomplete) FITS standard. The STYLE$ keyword is used to correct for some of these. Available styles are
- STANDARD
- CPC (IRAS Chopped Photometric Channel) : uses a non-standard definition of axes, and a bizarre quasi-projection system.
- SPLINE (IRAS spline maps) : those maps have an incorrect flux density scale. A correction factor is applied when this keyword is specified.
2 BLC$
BLC$[4]
Defines the bottom left corner of the part of the input FITS file to be considered. BLC is an array of dimension 4, since images are assumed to have at most 4 dimensions. 0 means 1.
2 TRC$
TRC$[4]
Defines the top right corner of the part of the input FITS file to be considered. 0 means use the actual dimension.

1 FLOW
FLOW
This is a dedicated routine to produce bipolar outflow maps, with optimum signal to noise ratio and little bias. It takes as input a cube (N by NX by NY) with the velocity along the first axis, and produces a pseudo-cube (3 by NX by NY) containing the maps of the line width and of the red and blue lobe integrated intensities. For each spectrum, the algorithm determines the peak channel, then determines on each side of this channel the velocity at a given threshold, given as a ratio to the peak value. From this velocity, it integrates out to a second threshold. The area found is the contribution to the blue (or red) lobe of the flow. A sketch of this wing integration is given at the end of this section.
2 IN$
FLOW: IN$
The input cube file name.
2 OUT$
FLOW: OUT$
The output file name.
2 RATIO$
FLOW: RATIO$
The ratio of the upper threshold used to define the line wing to the peak value of each spectrum.
2 THRESHOLD$
FLOW: THRESHOLD$
The absolute level used to define the full line width.
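The wing integration for one spectrum can be sketched as follows (Python, illustrative only; "velocities" are in channel units and the line-width map and red/blue bookkeeping of the real task are simplified):

  import numpy as np

  def wing_areas(spectrum, ratio, threshold):
      """For one spectrum: find the peak channel, move outwards on each
      side until the intensity drops below RATIO * peak (start of the
      wing), then keep integrating until it drops below the absolute
      THRESHOLD.  Returns the blue-side and red-side wing areas in
      (intensity * channel) units."""
      peak = np.argmax(spectrum)
      wing_start = ratio * spectrum[peak]
      areas = []
      for step in (-1, +1):                 # blue side, then red side
          i = peak
          # skip the line core, i.e. channels above the upper threshold
          while 0 <= i < len(spectrum) and spectrum[i] > wing_start:
              i += step
          area = 0.0
          # integrate the wing down to the absolute threshold
          while 0 <= i < len(spectrum) and spectrum[i] > threshold:
              area += spectrum[i]
              i += step
          areas.append(area)
      return areas                           # [blue_area, red_area]
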
1 FOURIER
FOURIER
This task computes the (complex) Fourier transform of a (real) 2D image. It does not require the numbers of pixels to be powers of two, but the algorithm is of course much faster if they are.
2 Y_NAME$
FOURIER: Y_NAME$
The name of the input image. This should be a 2D real image.
2 X_NAME$
FOURIER: X_NAME$
The name of the output image. This will be 3D, with the first dimension equal to 2 and corresponding to the REAL-IMAGINARY axis.
2 SIGNE$
FOURIER: SIGNE$
+1 for the direct transform, -1 for the inverse transform.

1 GAUSS_1D
GAUSS_1D
This task makes a multi-component non-linear gaussian fit to columns of a table. It fits a function Y = f(X), where f is the sum of up to five gaussians, and the X and Y data are taken from specified columns of the input table. The method is identical to that used by the GAUSS command in CLASS (simplex followed by conjugate gradient). The format for the input of initial parameters is also similar. All messages are sent to a specified output file. The fitted profiles can be kept in separate columns of the input table.
2 IN$
GAUSS_1D: IN$
The name of the input table.
2 LIST$
GAUSS_1D: LIST$
The name of the output formatted file. All messages are written to this file, including the parameters of the fitted gaussian(s).
2 COLUMN_IN$
GAUSS_1D: COLUMN_IN$[2]
The numbers of the table columns used as X and Y data.
2 NLINE$
GAUSS_1D: NLINE$
The number of gaussian components used in the fit. NLINE$ should not be larger than 5. If NLINE$ is 0, the program will attempt to guess initial values for a single component from the moments of the spectrum. If NLINE$ > 0, it will need some initial values.
2 LINE_1$
GAUSS_1D: LINE_1$[6]
The initial parameters for the first gaussian component in the profile fit. These values are entered in free format as follows :
Code, Value,   Code, Value,   Code, Value
 (Intensity)    (Position)     (Width)
The code is an integer interpreted as follows:
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
Codes 2, 3 and 4 are used to fit dependent lines (e.g. HCN, for which the displacements are 4.842 and -7.064 km/s, or -1.431 and 2.088 MHz, and the line ratios 1:0.6:0.2).
2 LINE_2$
GAUSS_1D: LINE_2$[6]
The initial parameters for the second gaussian component in the profile fit. These values are entered in free format as follows :
Code, Value,   Code, Value,   Code, Value
 (Intensity)    (Position)     (Width)
The code is an integer interpreted as follows:
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
Codes 2, 3 and 4 are used to fit dependent lines (e.g. HCN, for which the displacements are 4.842 and -7.064 km/s, or -1.431 and 2.088 MHz, and the line ratios 1:0.6:0.2).
2 LINE_3$
GAUSS_1D: LINE_3$[6]
The initial parameters for the third gaussian component in the profile fit. These values are entered in free format as follows :
Code, Value,   Code, Value,   Code, Value
 (Intensity)    (Position)     (Width)
The code is an integer interpreted as follows:
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
Codes 2, 3 and 4 are used to fit dependent lines (e.g. HCN, for which the displacements are 4.842 and -7.064 km/s, or -1.431 and 2.088 MHz, and the line ratios 1:0.6:0.2).
2 LINE_4$
GAUSS_1D: LINE_4$[6]
The initial parameters for the fourth gaussian component in the profile fit.
These values are entered in free format as follows :
Code, Value,   Code, Value,   Code, Value
 (Intensity)    (Position)     (Width)
The code is an integer interpreted as follows:
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
Codes 2, 3 and 4 are used to fit dependent lines (e.g. HCN, for which the displacements are 4.842 and -7.064 km/s, or -1.431 and 2.088 MHz, and the line ratios 1:0.6:0.2).
2 LINE_5$
GAUSS_1D: LINE_5$[6]
The initial parameters for the fifth gaussian component in the profile fit. These values are entered in free format as follows :
Code, Value,   Code, Value,   Code, Value
 (Intensity)    (Position)     (Width)
The code is an integer interpreted as follows:
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
Codes 2, 3 and 4 are used to fit dependent lines (e.g. HCN, for which the displacements are 4.842 and -7.064 km/s, or -1.431 and 2.088 MHz, and the line ratios 1:0.6:0.2).
2 COLUMN_OUT$
GAUSS_1D: COLUMN_OUT$[NLINE$]
These are the numbers of the columns where the values of the fitted components will be kept. Specify one empty (or useless) column of the input table for each component that you want to store, and 0 for each component that you don't want to keep.

1 GAUSS_2D
GAUSS_2D
This task fits a single 2-dimensional elliptical gaussian to an image. The fitted gaussian is kept in an output image. All coordinates are USER coordinates, not pixels. Each parameter is followed by a code to indicate whether it is fixed (code 1) or variable (code 0). If the input image is actually a cube, a gaussian is fitted to each plane.
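An elliptical gaussian of this kind, and a least-squares fit of one plane, can be sketched with scipy.optimize.curve_fit standing in for the simplex/conjugate-gradient minimisation actually used. The parameter names follow the task keywords; the control codes and the grouped-parameter mechanism are not reproduced.

  import numpy as np
  from scipy.optimize import curve_fit

  def elliptical_gaussian(coords, peak, x0, y0, major, minor, pa):
      """2-D elliptical gaussian; MAJOR and MINOR are full widths at half
      power, PA is in degrees from north (+y) towards east, as in POSANG$."""
      x, y = coords
      t = np.radians(pa)
      u = -(x - x0) * np.sin(t) + (y - y0) * np.cos(t)   # along major axis
      v = (x - x0) * np.cos(t) + (y - y0) * np.sin(t)    # along minor axis
      f = 4.0 * np.log(2.0)
      return peak * np.exp(-f * ((u / major) ** 2 + (v / minor) ** 2))

  def fit_one_plane(x, y, plane, guess):
      """Fit one plane; GUESS = (peak, x0, y0, major, minor, pa), i.e. the
      same quantities as MAX_INT$, X_POS$, Y_POS$, MAJOR$, MINOR$ and
      POSANG$ (without the control codes)."""
      popt, pcov = curve_fit(elliptical_gaussian,
                             (x.ravel(), y.ravel()), plane.ravel(), p0=guess)
      return popt, np.sqrt(np.diag(pcov))   # fitted values and 1-sigma errors
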
2 IN$
GAUSS_2D: IN$
The name of the input image (which can be a cube).
2 RES$
GAUSS_2D: RES$
The name of the image of fit residuals.
2 OUT$
GAUSS_2D: OUT$
The name of the output file. This is an ASCII file, formatted as a table, with columns:
1,2   : Peak intensity and error (map units)
3,4   : X position and error (radians)
5,6   : Y position and error (radians)
7,8   : Major Width and error (radians)
9,10  : Minor Width and error (radians)
11,12 : Position Angle and error (radians)
13,14 : Plane (and hyperplane) numbers.
2 MAX_INT$
GAUSS_2D: MAX_INT$[2]
An initial guess for the peak intensity (in map units), and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 X_POS$
GAUSS_2D: X_POS$[2]
An initial guess for the X position of the peak (user coordinates), and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 Y_POS$
GAUSS_2D: Y_POS$[2]
An initial guess for the Y position of the peak (user coordinates), and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 MAJOR$
GAUSS_2D: MAJOR$[2]
An initial guess for the major axis (full width to half power, in user coordinates) of the ellipse, and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 MINOR$
GAUSS_2D: MINOR$[2]
An initial guess for the minor axis (full width to half power, in user coordinates) of the ellipse, and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 POSANG$
GAUSS_2D: POSANG$[2]
An initial guess for the position angle (in degrees, from North towards East) of the ellipse, and the corresponding control code. The code is an integer interpreted as follows :
0  adjustable parameter
1  fixed parameter
2  adjustable parameter (head of group)
3  parameter fixed with respect to a parameter coded 2 or 4
4  fixed parameter (head of group)
2 SIGMA$
GAUSS_2D: SIGMA$
An initial estimate for the rms noise level in the image.

1 GAUSS_SMOOTH
GAUSS_SMOOTH
This task performs a 2-D smoothing by an elliptical gaussian. The algorithm works in the Fourier plane, and the first two dimensions (in pixels) should therefore be powers of two (but need not be equal). It is much faster than the equivalent sky plane algorithm, but you should beware of aliasing effects when the size of the convolving gaussian becomes a sizeable fraction of the map (say 30%).
2 Y_NAME$
GAUSS_SMOOTH: Y_NAME$
The name of the input map (which can be a cube).
2 X_NAME$
GAUSS_SMOOTH: X_NAME$
The name of the output smoothed map.
2 MAJOR$
GAUSS_SMOOTH: MAJOR$
The major axis of the smoothing gaussian in user coordinates (not pixels).
2 MINOR$
GAUSS_SMOOTH: MINOR$
The minor axis of the smoothing gaussian in user coordinates (not pixels).
2 PA$
GAUSS_SMOOTH: PA$
The position angle of the major axis of the smoothing gaussian (in degrees, from North towards East).

1 GILDAS_FITS
GILDAS_FITS
Converts a GILDAS image into a disk FITS image. A disk FITS image is a file with 2880-byte blocks conforming to the FITS standard for the information written in the blocks. Such disk FITS images can be produced by other packages (e.g. AIPS). This task is intended for a single image. For many images, or tape FITS, use the interactive program GFITS.
2 IN$
GILDAS_FITS: IN$
The input GILDAS image name.
2 FITS$
GILDAS_FITS: FITS$
The output FITS file name.

1 Graphic
A big program which allows one to use GreG on images. It does not contain any bit-map image display, but only graphic commands.

1 GRID_CUBE
GRID_CUBE
This task makes a cube from a table containing data regularly sampled on a 2-d grid, but possibly incompletely sampled. Since the input data is assumed to be regularly sampled, this gridding task does not use any convolution or interpolation, but just fills a grid with values. The grid size and step are determined by the task. The input table can be created by task TABLE from a formatted file.
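The gridding really is "fill a grid with values". The sketch below shows the idea for one column, assuming the X, Y and value columns are already loaded as numpy arrays; the grid-step guess and the TOLE$-like check are simplifications, and the output image header is not produced.

  import numpy as np

  def fill_grid(x, y, values, blank=np.nan, tole=1e-6):
      """Place regularly sampled (x, y, value) points on a grid without any
      interpolation.  The grid step along each axis is taken as the
      smallest spacing between distinct coordinates; unsampled cells keep
      the blanking value."""
      def axis(coord):
          u = np.unique(coord)
          step = np.min(np.diff(u)) if u.size > 1 else 1.0
          return u[0], step
      x0, dx = axis(x)
      y0, dy = axis(y)
      ix = np.rint((x - x0) / dx).astype(int)
      iy = np.rint((y - y0) / dy).astype(int)
      # check that every point really falls on a grid node (TOLE$-like test)
      if (np.max(np.abs(x - (x0 + ix * dx))) > tole or
              np.max(np.abs(y - (y0 + iy * dy))) > tole):
          raise ValueError("input positions are not regularly sampled")
      grid = np.full((iy.max() + 1, ix.max() + 1), blank)
      grid[iy, ix] = values
      return grid
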
2 IN$
TASK\FILE "Input file name" IN$
Specify the input table.
2 OUT$
TASK\FILE "Output file name" OUT$
Specify the output file.
2 XCOL$
TASK\INTEGER "Column of X coordinates" XCOL$
Indicate which column of the input table contains the X coordinate for gridding.
2 YCOL$
TASK\INTEGER "Column of Y coordinates" YCOL$
Indicate which column of the input table contains the Y coordinate for gridding.
2 MCOL$
TASK\INTEGER "First and last column to grid" MCOL$[2]
Indicate the first and last columns to grid. The XCOL$ and YCOL$ columns are gridded if they fall in the range MCOL$[1] - MCOL$[2].
2 TOLE$
TASK\REAL "Tolerance for X and Y positions" TOLE$
Specify the tolerance on X and Y position checking (in X and Y units). Note that X and Y must have a common unit.
2 BLANKING$
TASK\REAL "Blanking value and tolerance" BLANKING$[2]
Indicate the blanking value to use for unsampled cells.

1 HEADER
HEADER
A (not so) simple minded routine to list, edit and modify the header of a GDF-like file (Images only; Table edition is usually meaningless). It is functionally equivalent to command "HEADER File" in the GRAPHIC or OVERLAY programs, allowing editing of header parameters if you have write access to the image.
- To modify a header parameter, position the cursor on it and press key Gold (PF1). You can then enter the new value. When you are happy with the modified value, press Enter.
- To compute the extrema of the image, press key Enter.
- To exit from the edit mode, type <^Z>.

1 Help
The TASK Help library contains a complete description of all GILDAS programs. These programs can only be activated from within the GRAPHIC, DISPLAY, OVERLAY or VECTOR programs, through the commands RUN and SUBMIT. From within the RUN and SUBMIT commands, you can get help on the current task by typing GOLD ? in the editor, or by answering ? to a prompt in non-editing mode.

1 HISTO_CLOUD
HISTO_CLOUD
Makes the cross histogram of two input images. The result is a table of two columns:
- First column  : value of pixel (I,J) of the first image
- Second column : value of pixel (I,J) of the second image
Using the GreG command POINT on this output table will produce a cloud plot of the correlation between the two images. This program is reasonably well adapted to small images, but you should use HISTO_CROSS for large images.
2 Z_NAME$
HISTO_CLOUD: Z_NAME$
This is the name of the first image used in the cross correlation.
2 Y_NAME$
HISTO_CLOUD: Y_NAME$
This is the name of the second image used in the cross correlation.
2 X_NAME$
HISTO_CLOUD: X_NAME$
This is the name of the output table.

1 HISTO_CROSS
HISTO_CROSS
Computes the cross histogram of two input images. The result is an image of dimensions (Number of slots for the first image, Number of slots for the second image) which represents the density distribution of the correlation. The value at a given (I,J) is thus the number of pixels in the input images that have the value corresponding to slot I in the first image and to slot J in the second one. The output image can be used as input to task REGRESSION to evaluate some statistical parameters of the correlation. See also HISTO_CLOUD for slightly different information. A sketch of the computation is given at the end of this section.
2 Z_NAME$
HISTO_CROSS: Z_NAME$
This is the name of the first image used in the correlation.
2 Z_BIN$
HISTO_CROSS: Z_BIN$
This is the number of histogram slots to be used for the first image.
2 Z_MIN$
HISTO_CROSS: Z_MIN$
This is the value of the lowest bin used for the first image.
2 Z_MAX$
HISTO_CROSS: Z_MAX$
This is the value of the highest bin used for the first image.
2 Y_NAME$
HISTO_CROSS: Y_NAME$
This is the name of the second image used in the correlation.
2 Y_BIN$
HISTO_CROSS: Y_BIN$
This is the number of histogram slots to be used for the second image.
2 Y_MIN$
HISTO_CROSS: Y_MIN$
This is the value of the lowest bin used for the second image.
2 Y_MAX$
HISTO_CROSS: Y_MAX$
This is the value of the highest bin used for the second image.
2 X_NAME$
HISTO_CROSS: X_NAME$
This is the name of the output cross correlation image.
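The cross histogram described above is essentially what numpy.histogram2d computes; a short sketch with the task keywords as arguments (blanking is ignored):

  import numpy as np

  def cross_histogram(z_image, y_image, z_bin, z_min, z_max,
                      y_bin, y_min, y_max):
      """Density of the pixel-to-pixel correlation between two images of
      the same size: element (I,J) counts the pixels whose first-image
      value falls in slot I and whose second-image value falls in slot J."""
      hist, z_edges, y_edges = np.histogram2d(
          z_image.ravel(), y_image.ravel(),
          bins=(z_bin, y_bin), range=((z_min, z_max), (y_min, y_max)))
      return hist
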
1 HISTO_DOUBLE
HISTO_DOUBLE
Computes the histogram of an image (the Y_image) as a function of a second one (the Z_image). This gives information similar to, but slightly different from, the cross histogram (cf HISTO_CROSS). The program computes histograms of the mean value and standard deviation. The result is a table of 4 columns, in which
1) the first column contains the mean value of the Y_image (average of all pixels in the Y_image for which the Z_image value is included in the corresponding slot),
2) the second the standard deviation (of the Y_image values),
3) the third the number of pixels of the Y_image used to compute the mean and deviation,
4) the fourth column contains the histogram abscissa (actual value of the input Z_image).
2 Z_NAME$
HISTO_DOUBLE: Z_NAME$
This is the name of the input image that defines the histogram slots.
2 Y_NAME$
HISTO_DOUBLE: Y_NAME$
This is the name of the input image that defines the histogram values.
2 Y_BLC$
HISTO_DOUBLE: Y_BLC$[4]
This is the position (in pixels) of the Bottom Left Corner of the part of both maps that will be used to compute the histogram.
2 Y_TRC$
HISTO_DOUBLE: Y_TRC$[4]
This is the position (in pixels) of the Top Right Corner of the part of both maps that will be used to compute the histogram.
2 X_BIN$
HISTO_DOUBLE: X_BIN$
This is the number of histogram slots that will be defined within the values of the first image.
2 X_MIN$
HISTO_DOUBLE: X_MIN$
This is the value (of the first image) corresponding to the lowest bin.
2 X_MAX$
HISTO_DOUBLE: X_MAX$
This is the value (of the first image) corresponding to the highest bin.
2 X_NAME$
HISTO_DOUBLE: X_NAME$
This is the name of the output histogram.

1 HISTO_SIMPLE
HISTO_SIMPLE
Computes the histogram of an image (or of a table). A subset of the image may be specified. The result is a table with 2 columns :
Column 1 contains the middle value of the interval,
Column 2 contains the number of input image pixels in the interval.
2 Y_NAME$
HISTO_SIMPLE: Y_NAME$
This is the name of the input image.
2 Y_BLC$
HISTO_SIMPLE: Y_BLC$[4]
This is the Bottom Left Corner (in pixels) of the part of the input image on which the histogram is computed.
2 Y_TRC$
HISTO_SIMPLE: Y_TRC$[4]
This is the Top Right Corner (in pixels) of the part of the input image on which the histogram is computed.
2 X_BIN$
HISTO_SIMPLE: X_BIN$
This is the number of bins that will be used in the histogram.
2 X_MIN$
HISTO_SIMPLE: X_MIN$
This is the lowest value used in the histogram.
2 X_MAX$
HISTO_SIMPLE: X_MAX$
This is the highest value used in the histogram.
2 X_NAME$
HISTO_SIMPLE: X_NAME$
This is the name of the output histogram.

1 HISTO_TABLE
HISTO_TABLE
Computes the cross histogram of two columns of a table. The result is a 2-D image where the value of pixel (I,J) is the number of points in the input table for which the first column value corresponds to bin I, and the second column value to bin J.
2 Y_NAME$
HISTO_TABLE: Y_NAME$
This is the name of the input table.
2 Y_COLUMNS$
HISTO_TABLE: Y_COLUMNS$[2]
The numbers of the input table columns that will be cross correlated.
2 BIN$1
HISTO_TABLE: BIN$1
The number of histogram bins used for the first column.
2 MIN$1
HISTO_TABLE: MIN$1
The value of the lowest bin used for the first column.
2 MAX$1
HISTO_TABLE: MAX$1
The value of the highest bin used for the first column.
2 BIN$2
HISTO_TABLE: BIN$2
The number of histogram bins used for the second column.
2 MIN$2
HISTO_TABLE: MIN$2
The value of the lowest bin used for the second column.
2 MAX$2
HISTO_TABLE: MAX$2
The value of the highest bin used for the second column.
2 X_NAME$
HISTO_TABLE: X_NAME$
The name of the output image.

1 IMAGE
IMAGE
This task transforms an RGDATA file into the standard GILDAS format. This can also be done within the main GRAPHIC or OVERLAY programs, using command RGDATA and then WRITE IMAGE.
2 IN$
IMAGE: IN$
The name of the input RGDATA file.
2 OUT$
IMAGE: OUT$
The name of the output GILDAS image.

1 INTERPOLATE
INTERPOLATE
This program resamples an input data cube ALONG ITS FIRST AXIS. The resampled output cube may have higher or lower resolution than the original one, but extrapolation is strictly forbidden. The program does not handle blanking values.
2 Y_NAME$
INTERPOLATE: Y_NAME$
This is the name of the input data cube.
2 X_NAME$
INTERPOLATE: X_NAME$
This is the name of the output (resampled) data cube.
2 NX$
INTERPOLATE: NX$
This is the number of pixels along the first axis of the output cube.
2 REFERENCE$
INTERPOLATE: REFERENCE$
This is the reference pixel on the first axis of the output cube.
2 VALUE$
INTERPOLATE: VALUE$
This is the value at the reference pixel on the first axis of the output cube.
2 INCREMENT$
INTERPOLATE: INCREMENT$
This is the distance between two pixels on the first axis of the output cube.

1 LIST
LIST
This program lists, in free format, part of an input table.
2 FILE$
LIST: FILE$
This is the name of the table that you want to list.
2 LIST$
LIST: LIST$
This is the name of the output list. Answer TT: to get the output at your terminal.
2 FIRST_LINE$
LIST: FIRST_LINE$
This is the first line of the table that will be listed.
2 LAST_LINE$
LIST: LAST_LINE$
This is the last line of the table that will be listed.
2 FIRST_COLUMN$
LIST: FIRST_COLUMN$
This is the first column of the table that will be listed. No more than ten columns can be listed.
2 LAST_COLUMN$
LIST: LAST_COLUMN$
This is the last column of the table that will be listed. No more than ten columns can be listed.

1 MAKE_BACK
MAKE_BACK
Computes the background image of an input image using a thresholding and a smoothing algorithm based on the conjugate gradient method. The background image is computed as follows:
- First the image is thresholded between THRESHOLD$[1] and THRESHOLD$[2]. All pixels outside this range are blanked.
- Then the image is smoothed using the conjugate gradient method, with NITER$ iterations and a smoothing parameter P$. Blanked pixels are thus interpolated in the process.
The output image is the background image. P$ should be large enough to yield some smoothing (1 to 100, try...), and NITER$ should also be large enough to ensure convergence, especially in "blanked" areas (NITER$ = 20 is typical). A simplified sketch of the two steps is given at the end of this section.
2 P$
TASK\REAL "Smoothing parameter" P$
Smoothing parameter for the background. Use 1 to 100.
2 NITER$
TASK\INTEGER "Number of smoothing iterations" NITER$
NITER$ must be large enough to interpolate values properly in blanked areas. Use 10 to 30.
2 IN$
TASK\CHARACTER "Input file name" IN$
Input image (not modified).
2 OUT$
TASK\CHARACTER "Output file name" OUT$
Output background image name (created).
2 THRESHOLD$
TASK\REAL "Low, high threshold, and guess" THRESHOLD$[3]
Three values indicating the lowest significant background value, the highest, and an initial guess for blanked pixels. THRESHOLD$[3] should usually be equal either to THRESHOLD$[2] or to THRESHOLD$[1], depending on whether the signal is positive or negative...
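The two steps described above (threshold, then smooth while interpolating over the blanked pixels) can be imitated with a crude relaxation loop. This is only an illustrative stand-in: the real task uses the conjugate gradient smoothing of DG_SMOOTH, not a box filter.

  import numpy as np
  from scipy import ndimage

  def make_back(image, low, high, guess, niter=20):
      """Step 1: blank everything outside [low, high] (replaced by GUESS).
      Step 2: repeatedly smooth with a local mean, pulling the valid pixels
      back towards the data at each iteration, so that blanked areas are
      filled in from their surroundings."""
      valid = (image >= low) & (image <= high)
      back = np.where(valid, image, guess)
      for _ in range(niter):
          smooth = ndimage.uniform_filter(back, size=5)
          back = np.where(valid, 0.5 * (image + smooth), smooth)
      return back
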
1 MAKE_CUBE
MAKE_CUBE
This is an image construction task which is able to produce a filled image from one containing many blanked pixels. The reconstructed filled image is not constrained to fit the observed data points exactly. Instead, the construction is made by analogy with a flexible plate attached to fixed points by springs: the plate is the analogue of the surface represented by the image, and the fixed points are the analogues of the observed data points. By adjusting the parameter P
P = (plate stiffness) / (springs stiffness)
it is possible to control the fidelity to the original data and the amount of smoothing involved in the image reconstruction. Low values of P mean high fidelity to the observed data, and a negligible amount of smoothing. The original grid is first expanded by a factor EXPANSION$, new pixels being attributed the blanking value. Then, the minimization proceeds iteratively to adjust the final image, until convergence is reached. Initially blanked pixels are ignored in the convergence criterion. The algorithm works on cubes, processing each plane independently. It can be used as an alternative to FILL_CUBE for oversampled images, but works also for undersampled data.
2 P$
MAKE_CUBE: P$
This is the value of the smoothing parameter. It should remain between 0.001 and 10. Higher values give stronger smoothing, and should be used for noisy data.
2 NITER$
MAKE_CUBE: NITER$
This is the number of smoothing iterations. Ten iterations are usually enough to reach convergence, unless the expansion factor is large.
2 EXPANSION$
MAKE_CUBE: EXPANSION$
This is the expansion factor for the output cube, i.e. the number of output pixels for each input pixel along each direction. The minimum value is 2, the maximum is 10. Note that the computing time is proportional to the number of output pixels, thus to EXPANSION$**2.
2 IN$
MAKE_CUBE: IN$
This is the name of the input file.
2 OUT$
MAKE_CUBE: OUT$
This is the name of the output smoothed image.
2 GUESS$
MAKE_CUBE: GUESS$
This is an initial guess for the original value attributed to blanked pixels. Try to find something reasonable, usually the average of the valid pixels. Convergence can be delayed if you choose a crazy value, since the minimization tends to produce a smooth image, but the result does not depend on GUESS$.

1 MASK
MASK
This task masks either the inside or the outside of a polygon in all planes of a data cube. It is similar to the MASK command in GreG, GRAPHIC or OVERLAY, but it affects the output image and not an internal copy of it. The polygon must have been created previously, using the WRITE POLYGON command in GreG for example.
2 POLYGON$
MASK: POLYGON$
This is the name of the polygon file created in GREG (command WRITE POLYGON).
2 Y_NAME$
MASK: Y_NAME$
This is the name of the input data cube.
2 X_NAME$
MASK: X_NAME$
This is the name of the output masked image.
2 MASK_IN$
MASK: MASK_IN$
Answer .TRUE. to mask the inside of the polygon, .FALSE. to mask the outside.
2 MODIFY$
MASK: MODIFY$
Answer .TRUE. if you want to change the blanking value, .FALSE. if you want to use that of the input image.
2 BLANKING$
MASK: BLANKING$
This is the blanking value that will be used for the output image.

1 MERGE
MERGE
This program merges two input tables WITH THE SAME NUMBER OF LINES.
2 Z_TABLE$
MERGE: Z_TABLE$
This is the name of the first input table.
2 Y_TABLE$
MERGE: Y_TABLE$
This is the name of the second input table.
2 X_TABLE$
MERGE: X_TABLE$
This is the name of the output table.

1 MINIMIZE
MINIMIZE
This task finds the best linear combination of two input arrays, in the form
X(i,j) = A*Y(i,j) + B
The results are A and B of course, written in MINIMIZE.GILDAS.
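Finding A and B is an ordinary linear least-squares problem; a one-function sketch with numpy.linalg.lstsq (illustrative only, blanking is ignored):

  import numpy as np

  def minimize_ab(x_map, y_map):
      """Best A, B (least squares) such that X(i,j) ~ A * Y(i,j) + B."""
      y = y_map.ravel()
      x = x_map.ravel()
      design = np.column_stack((y, np.ones_like(y)))
      (a, b), *_ = np.linalg.lstsq(design, x, rcond=None)
      return a, b
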
2 Y_NAME$
MINIMIZE: Y_NAME$
This is the name of the first input array.
2 X_NAME$
MINIMIZE: X_NAME$
This is the name of the second input array.

1 NOISE_SMOOTH
NOISE_SMOOTH
Smoothes an input image using the noise cheating enhancement method. This method works only for strictly positive images. Values of adjacent pixels are summed until a given total is reached. Then, the total is divided by the number of pixels added and the result is used as the output pixel value. This smoothing is very non-linear (in particular no smoothing occurs on pixels stronger than the smoothing threshold). A parameter allows the averaging to be restricted to nearby pixels only : in this case the output image is not necessarily strictly positive.
2 Y_NAME$
NOISE_SMOOTH: Y_NAME$
The name of the input image.
2 X_NAME$
NOISE_SMOOTH: X_NAME$
The name of the output smoothed image.
2 THRESHOLD$
NOISE_SMOOTH: THRESHOLD$
This is the (maximum) integrated intensity at which smoothing stops.
2 SMOOTHING$
NOISE_SMOOTH: SMOOTHING$
This is the maximum radius for the smoothing box (in pixels).

1 Overlay
A big interactive program allowing one to display overlays of bit-map images and graphics. It is a superset of programs DISPLAY and GRAPHIC. Currently available only for the VAXStation GPX image processor.

1 PLANE
PLANE
This is a (very) simple task that subtracts a plane from an image. The plane is determined by three non-colinear, non-blanked pixels of the original map. The plane image is also computed. This task is usually a first step before more elaborate background algorithms (such as BACKGROUND or MAKE_BACK) are applied.
2 Z_NAME$
PLANE: Z_NAME$
The name of the input image.
2 Y_NAME$
PLANE: Y_NAME$
The name of the image that will contain the subtracted plane (which is normally useless).
2 X_NAME$
PLANE: X_NAME$
The name of the output image that will contain the result of the subtraction.
2 X_PIXEL$
PLANE: X_PIXEL$[3]
The X coordinates (in pixels) of the 3 pixels that define the subtracted plane. Try to choose three typical pixels that define a nearly equilateral triangle.
2 Y_PIXEL$
PLANE: Y_PIXEL$[3]
The Y coordinates (in pixels) of the 3 pixels that define the subtracted plane. Try to choose three typical pixels that define a nearly equilateral triangle.
2 BLANKING$
PLANE: BLANKING$[2]
The blanking value and blanking tolerance for both the input and the output images. This information is only used if no blanking section exists in the header of the input image.

1 Private_Tasks
GILDAS tasks are searched for in the GILDAS_RUN area, and the initialisation and checker files in GILDAS_PAR. These names point by default to GAG_ROOT:[GDF] and GAG_ROOT:[GDF.PAR] respectively. The help is accessible through the logical name GILDAS_HELP, which points by default to GAG_HELP:TASK.HLB. These three logical names can be defined as search lists. For example
$ DEFINE GILDAS_RUN MYDISK:[MYDIR.GDF],GAG_ROOT:[GDF]
$ DEFINE GILDAS_PAR MYDISK:[MYDIR.GDF],GAG_ROOT:[GDF.PAR]
$ DEFINE GILDAS_HELP MYDISK:[MYDIR.GDF]TASK.HLB,GAG_HELP:TASK.HLB
allows tasks to be searched for first in the private directory, then in the general GILDAS area. See the GILDAS user manual for details about developing GILDAS tasks (see $ HELP @GAGHELP DOCUMENTATION to get a copy of the user manual).

1 PSC
PSC (Point Source Catalog)
This is a general utility to look through the compacted IRAS Point Source Catalog. This program is able to select sources according to various criteria and to create binary tables and/or formatted files containing the information about the sources.
2 Z_NAME$ PLANE: Z_NAME$ The name of the input image. 2 Y_NAME$ PLANE: Y_NAME$ The name of the image that will contain the subtracted plane (which is normally useless). 2 X_NAME$ PLANE: X_NAME$ The name of the output image that will contain the result of the subtraction. 2 X_PIXEL$ PLANE: X_PIXEL$[3] The X coordinates (in pixels) of the 3 pixels that define the subtracted plane. Try to choose three typical pixels that define a nearly equilateral triangle. 2 Y_PIXEL$ PLANE: Y_PIXEL$[3] The Y coordinates (in pixels) of the 3 pixels that define the subtracted plane. Try to choose three typical pixels that define a nearly equilateral triangle. 2 BLANKING$ PLANE: BLANKING$[2] The blanking value and blanking tolerance for both the input and the output images. This information is only used if no blanking section exists in the header of the input image. 1 Private_Tasks GILDAS tasks are searched for in the GILDAS_RUN area, and the initialisation and checker files in GILDAS_PAR. These names point by default to GAG_ROOT:[GDF] and GAG_ROOT:[GDF.PAR] respectively. The help is accessible through the logical name GILDAS_HELP, which points by default to GAG_HELP:TASK.HLB. These three logical names can be defined as search lists. For example
$ DEFINE GILDAS_RUN MYDISK:[MYDIR.GDF],GAG_ROOT:[GDF]
$ DEFINE GILDAS_PAR MYDISK:[MYDIR.GDF],GAG_ROOT:[GDF.PAR]
$ DEFINE GILDAS_HELP MYDISK:[MYDIR.GDF]TASK.HLB,GAG_HELP:TASK.HLB
allows tasks to be searched for first in the private directory, then in the general GILDAS area. See the GILDAS user manual for details about developing GILDAS tasks (see $ HELP @GAGHELP DOCUMENTATION to get a copy of the user manual). 1 PSC PSC (Point Source Catalog) This is a general utility to look through the compacted IRAS Point Source Catalog. This program is able to select sources according to various criteria and to create binary tables and/or formatted files containing the information about the sources. Even on tape, the program selects sources quickly (less than 8 minutes for any small positional box, and often much less). The time needed to list the information may be longer if you select many sources. It requires as input a compacted file containing the full catalog information and a table containing pointers to this compacted file, as well as positional and flux information. The compacted catalog can be either on magnetic tape (GAG242 at the Groupe d'Astrophysique in Grenoble), or on disk (with logical name PSC_COMPACT). The input table must be on disk. It can be either the default table IRAS_PSC_EQU (full catalog) or any previous output of the program (to work on a reduced source list). The task produces as output a formatted file and/or an output table. A description of the formatted output can be obtained in GRAPHIC or OVERLAY by HELP PSC_IRAS FORMAT. Reference to the IRAS explanatory supplement will probably be needed except for IRAS wizards. The output table has the same content as the input one (but of course only for the selected sources) and can therefore be used as input to another run of the program: as an example, this allows selecting first on the 12/25 color and then on the 60/100 color. Since this table is a standard GreG table, it can also be used in GreG's COLUMN command to produce plots. Column 1 is Right Ascension (in radians), column 2 is Declination (also in radians). Columns 3 to 6 contain the 4 fluxes (12 to 100 microns), with upper limits coded as negative values. Column 7 is an internal pointer. This can be used, for example, to plot the distribution on the sky of a given class of sources. After some simple processing (in SIC), color/color plots can easily be produced.
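As an illustration of these column conventions only (not part of the task), a color index can be formed while discarding upper limits; a minimal NumPy sketch, assuming the flux columns have already been loaded into arrays with your own table reader:
  import numpy as np

  def color_12_25(f12, f25):
      # f12, f25: columns 3 and 4 (fluxes as stored, upper limits negative).
      good = (f12 > 0) & (f25 > 0)          # keep real detections only
      color = np.full(f12.shape, np.nan)
      color[good] = np.log10(f25[good] / f12[good])
      return color                          # e.g. for a color/color plot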
2 IN_TABLE$ PSC: IN_TABLE$ The name of the input table. Answer * if you want to search through the whole catalog. If you want to search through a subset of the catalog (obtained from a previous run of the program), specify the name of the corresponding table. 2 F_OUT$ PSC: F_OUT$ Answer YES if you want to create a formatted output, NO otherwise. Don't answer YES if you expect to select half of the catalog! 2 OUT_LIST$ PSC: OUT_LIST$ The name of the formatted output file. This is only used if you create one. 2 T_OUT$ PSC: T_OUT$ Answer YES if you want to create an output table, otherwise answer NO. The output table can be used as input for another run of the program if you want to work on a subset of the catalog. It can also be used in GreG to produce things like color/color plots. See the main help for a description of its content. 2 OUT_TABLE$ PSC: OUT_TABLE$ The name of the output table. This is only used if you create one. 2 CHECK_BOX$ PSC: CHECK_BOX$ Answer YES if you want to select on Right Ascension and Declination, NO otherwise. Selection on LII and BII may be implemented one day. Ask T. Forveille if you need it. 2 BOX$ PSC: BOX$[4] These are the limits of the search box: RAmin, RAmax, DECmin, DECmax. At present, these should be entered in radians. This will hopefully be replaced by a more convenient format one day. Remind T. Forveille that this has to be done. 2 CHECK_QUAL$ PSC: CHECK_QUAL$ Answer YES if you want to select on flux qualities, NO otherwise. 2 QUALITY$ PSC: QUALITY$[4] The minimum flux quality you are willing to accept in each band (12, 25, 60, and 100 microns). 3 is good, 2 poor, 1 an upper limit. 2 CHECK_ID$ PSC: CHECK_ID$ Answer YES if you want to select on the identification type, NO otherwise. 2 WANTED_ID$ PSC: WANTED_ID$[5] Enter the accepted identification types. Possible values of the identification type are:
- 1: association with an extragalactic catalog.
- 2: association with a stellar catalog.
- 3: association with other catalogs (planetary nebulae, HII regions, reflection nebulae...).
- 4: association with several of the previous types of catalogs (stellar + extragalactic for example).
2 CHECK_CATA$ PSC: CHECK_CATA$ Answer YES if you want to select sources in one particular association catalog (IRC, or Zwicky, or de Vaucouleurs, or...), NO otherwise. 2 WANTED_CATA$ PSC: WANTED_CATA$ This is a code for the association catalog. The meaning of the code is the following:
- 1 General Catalog of Variable Stars
- 2 Dearborn (Faint Red Stars)
- 3 AFGL (=CRL, =AFCRL) Revised
- 4 IRC 2 Micron Sky Survey
- 5 Wesselius Globules (unpublished)
- 6 Second Reference Catalog, de Vaucouleurs (Bright Galaxies)
- 7 Emission Line Stars
- 8 Equatorial Infrared Catalog
- 9 UPPSALA Galaxy Catalog
- 10 Morphological Catalog of Galaxies
- 11 Planetary Nebulae (Strasbourg)
- 12 Zwicky (Galaxies and Clusters)
- 13 SAO
- 14 ESO/UPPSALA Galaxy Catalog
- 15 Bright Stars (HR)
- 16 Suspected Variable Stars
- 17 Cool Carbon Stars
- 18 Gliese (Nearby Stars)
- 19 S Stars
- 20 Parkes HII regions
- 21 Bonn HII regions
- 22 Blitz (CO velocity in HII regions)
- 23 Lynds and others... (Nebulosities)
- 24 IRC with good positions
- 25 Dwarf Galaxies
- 26 Peculiar Galaxies (Arp)
- 27 Markarian (Galaxies with UV)
- 28 Strong 5 GHz (> 1 Jy, Kuhr)
- 29 Veron-Veron (Quasars and Active Nuclei)
- 30 Zwicky (Galaxies)
- 31 Interacting Galaxies (Vorontsov-Velyaminov)
2 CHECK_RATIO$ PSC: CHECK_RATIO$ Answer YES if you want to select on the ratio of two fluxes, NO otherwise. 2 BAND$ PSC: BAND$[2] The names of the two bands used in the ratio. For example 25 and 60 if you want to select on the ratio 25/60. 2 MIN_RATIO$ PSC: MIN_RATIO$ The minimum value of the ratio you are willing to accept. 2 MAX_RATIO$ PSC: MAX_RATIO$ The maximum value of the ratio you are willing to accept. 2 CHECK_LUM$ PSC: CHECK_LUM$ Answer YES if you want to select on the integrated IRAS flux, NO otherwise. 2 MIN_LUM$ PSC: MIN_LUM$ The minimum integrated flux you are willing to accept. The unit for integrated flux is one solar luminosity at one kiloparsec. 2 CHECK_FLUX$ PSC: CHECK_FLUX$ Answer YES if you want to select on a minimum flux in one band, NO otherwise. 2 MIN_BAND$ PSC: MIN_BAND$ The name of the band on which you want to select. 2 MIN_FLUX$ PSC: MIN_FLUX$ The minimum flux in band MIN_BAND$ you are willing to accept. The unit is one Jansky. 2 CHECK_LRS$ PSC: CHECK_LRS$ Answer YES if you want to select on the Low Resolution Spectrometer spectral type, NO otherwise. 2 LRS_TYPE$ PSC: LRS_TYPE$ The LRS spectral type you want to select. Refer to the Explanatory Supplement for a description of the spectral types. Specify 5* if you want to select all sources with spectral type 5 (for example), and 51 if you want to select only the sources with spectral type 5 and subtype 1. 2 TAPE$ PSC: TAPE$ The name of the tape drive on which you will mount the full catalog. This information is only used if the catalog is not on disk and the selection criteria that you specified need the full catalog. Otherwise, you must nonetheless answer something, but anything will do. 1 REGRESSION REGRESSION This task computes the best least-squares linear regression between two images. It uses as input a cross histogram of the two images, which must have been created previously by the program HISTO_CROSS.
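The task's exact algorithm is internal, but the idea can be sketched as a count-weighted least-squares fit over the histogram bins (a minimal NumPy illustration; the array names, bin-centre vectors and the helper are hypothetical):
  import numpy as np

  def regression_from_histogram(counts, xbins, ybins, threshold):
      # counts[i, j]: number of pixel pairs falling in bin (xbins[i], ybins[j]);
      # bins with fewer than 'threshold' counts are ignored (cf. THRESHOLD$).
      xx, yy = np.meshgrid(xbins, ybins, indexing="ij")
      w = np.where(counts >= threshold, counts, 0.0).ravel()
      x, y = xx.ravel(), yy.ravel()
      sw, sx, sy = w.sum(), (w * x).sum(), (w * y).sum()
      sxx, sxy = (w * x * x).sum(), (w * x * y).sum()
      slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
      intercept = (sy - slope * sx) / sw
      return slope, intercept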
2 IN$ REGRESSION: IN$ The name of the input cross histogram (usually created by HISTO_CROSS). 2 THRESHOLD$ REGRESSION: THRESHOLD$ The minimum number of pixels, below which data are not considered. 1 REPROJECT REPROJECT This task resamples an input image to a different projection and coordinate system. Two interpolation methods are available. The SLOW one properly takes the blanking value into account, while the FAST one ignores it. Although the FAST method is usually much faster than the SLOW one, it may be slower in a few cases (essentially a large output map and a small input map with a change of coordinate system). The task works on data cubes, processing them plane by plane. CAUTION: This task uses an interpolation method, hence the output image increment should be SMALLER than the input image increment. 2 Y_NAME$ REPROJECT: Y_NAME$ The name of the image you want to reproject. 2 X_NAME$ REPROJECT: X_NAME$ The name of the output reprojected image. 2 PROJECTION$ REPROJECT: PROJECTION$ The name of the projection type. The supported projection types are GNOMONIC, ORTHOGRAPHIC, AZIMUTHAL, STEREOGRAPHIC, AITOFF, RADIO and NONE (no projection). 2 SYSTEM$ REPROJECT: SYSTEM$ The name of the coordinate system. The supported coordinate systems are EQUATORIAL, GALACTIC, and UNCHANGED. 1950.0 is the only supported epoch for equatorial coordinates. Ask S. Guilloteau if you need another epoch. 2 CENTER1$ REPROJECT: CENTER1$ The first coordinate of the new projection center (in the new coordinate system, of course). Accepts any format from sexagesimal (HH:MM:SS.SS) to decimal. Expects to read HOURS if the system is EQUATORIAL, DEGREES if not. 2 CENTER2$ REPROJECT: CENTER2$ The second coordinate of the new projection center (in the new coordinate system, of course). Accepts any format from sexagesimal (DD:MM:SS.SS) to decimal. Expects the value to be in DEGREES. 2 ANGLE$ REPROJECT: ANGLE$ The position angle of the projection (in degrees). 2 DIMENSIONS$ REPROJECT: DIMENSIONS$[2] The size in pixels of the reprojected image. 2 AXIS_1$ REPROJECT: AXIS_1$[3] The conversion formula for the first axis of the reprojected image: the reference pixel, the value of the axis at the reference pixel, and the distance between two pixels on the axis. 2 AXIS_2$ REPROJECT: AXIS_2$[3] The conversion formula for the second axis of the reprojected image: the reference pixel, the value of the axis at the reference pixel, and the distance between two pixels on the axis. 2 METHOD$ REPROJECT: METHOD$ The interpolation method. Use SLOW if the input image contains blanked pixels, FAST otherwise. 2 CHANGE$ REPROJECT: CHANGE$ Answer .TRUE. if you want to modify the blanking value, .FALSE. otherwise. 2 BLANKING$ REPROJECT: BLANKING$[2] The new values of the blanking and of the tolerance on blanking. 1 SELF_CAL SELF_CAL: "Self"-Calibration of a UV data set. SELF_CAL performs a phase and amplitude referencing of a UV data set, using another one representing a point source observed quasi-simultaneously. It is recommended to self-calibrate the phase first, then the amplitude using a longer smoothing time. 2 UV_TABLE$ TASK\CHARACTER "UV table to be self-calibrated" UV_TABLE$ UV table to be calibrated. 2 SELF$ TASK\CHARACTER "UV table used as phase reference" SELF$ Point source reference UV table. 2 DTIME$ TASK\DOUBLE "The smoothing time constant in seconds" DTIME$ Integration time used to derive the correction. The signal to noise MUST BE SUFFICIENT TO IMPROVE the phase or amplitude determinations. If DTIME$ is too short, noise will be added to the data rather than the expected improvement. Required values are S/N = 11 for a 5 degree phase noise, and S/N = 20 for a 5 % amplitude correction.
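These numbers are consistent with the usual high signal-to-noise estimates for a point source, where the phase error is about 1/(S/N) radians and the fractional amplitude error about 1/(S/N); a quick check in Python (illustration only):
  import math
  print(math.degrees(1.0 / 11.0))   # about 5.2 degrees of phase noise at S/N = 11
  print(100.0 / 20.0)               # 5 % amplitude error at S/N = 20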
2 WCOL$ TASK\INTEGER "The phase reference channel" WCOL$ If the reference SELF$ is a spectral line UV table, the spectral channel to be used. 2 TYPE$ TASK\CHARACTER "Type of self-cal, Ampli, Phase or Both" TYPE$ It is recommended to self-calibrate the phase first, then the amplitude using a longer smoothing time, rather than both together. 1 SMOOTH SMOOTH Smooths an input map using various methods, all based upon a convolving kernel only 5 pixels wide. This method is crude, but very fast, and supports blanking in principle... Data cubes are processed plane by plane.
1. BOX A simple 5 by 5 boxcar smoothing. This is a very strong smoothing...
2. GAUSS A gaussian smoothing. The gaussian is sampled on a 5 by 5 grid, so it must not be too broad: 3 pixels seems to be the maximum. Use task GAUSS_SMOOTH if you want to smooth with a broader gaussian, but beware that it does not properly handle blanked pixels.
3. HANNING Smoothing by a 5 by 5 pyramid, with weights 3 2 1.
4. USER User-defined smoothing coefficients on a 5 by 5 grid, assuming biaxial symmetry. Hence only 6 coefficients are required, S00 S10 S11 S20 S21 S22. The corresponding smoothing kernel is:
S22 S21 S20 S21 S22
S21 S11 S10 S11 S21
S20 S10 S00 S10 S20
S21 S11 S10 S11 S21
S22 S21 S20 S21 S22
Such a generalised "smoothing" enables some image enhancement based on gradients (e.g. with weights 4 -1 0 0 0 0). The "smoothed" image is normalised by the sum of the absolute values of the coefficients.
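For illustration of the USER method only (not the task's code, which in addition handles blanking), the kernel above can be built and applied as follows; this sketch assumes NumPy and SciPy are available:
  import numpy as np
  from scipy.ndimage import convolve

  def user_smooth(image, s00, s10, s11, s20, s21, s22):
      # Build the 5x5 biaxially symmetric kernel shown above.
      k = np.array([[s22, s21, s20, s21, s22],
                    [s21, s11, s10, s11, s21],
                    [s20, s10, s00, s10, s20],
                    [s21, s11, s10, s11, s21],
                    [s22, s21, s20, s21, s22]], dtype=float)
      k /= np.abs(k).sum()              # normalise by the sum of |coefficients|
      return convolve(image, k, mode="nearest")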
2 Y_NAME$ SMOOTH: Y_NAME$ The name of the input image. 2 X_NAME$ SMOOTH: X_NAME$ The name of the output smoothed image. 2 BLANKING$ SMOOTH: BLANKING$[2] The blanking value and its tolerance in the input map. This information is only used if it is not present in the header of the input map. 2 METHOD$ SMOOTH: METHOD$ The name of the smoothing method. Supported methods are BOX, HANNING, GAUSS, and USER. 2 WIDTH$ SMOOTH: WIDTH$ The width (in pixels) of the smoothing gaussian. Beware that the gaussian is only sampled on a 5x5 grid: do not specify values higher than 3. 2 WEIGHT$ SMOOTH: WEIGHT$ The 6 smoothing coefficients for the USER method. 1 SORT This program sorts all columns of a given table by ascending order of one of its columns. The same table is used for both input and output. Only single precision real tables are accepted by the present version. 2 TABLE$ SORT: TABLE$ The name of the table that will be sorted. 2 COLUMN$ SORT: COLUMN$ The number of the column that will be used for sorting. 1 SORTINT Don't use this task... 1 SUM SUM This program is used to sum many images, handling a weight image in order to produce the average image later on. It computes X(i,j) = X(i,j) + F * Z(i,j) and Y(i,j) = Y(i,j) + F for all non-blanked pixels of the input image Z. When called repeatedly with the same input/output images X and Y, X contains the weighted sum of all Z images, and Y the weight of each pixel. A parameter is provided to initialise the process. The last step to obtain the average image is to divide X by Y using program COMBINE with option DIVIDE. You can also use the SIC command LET to do so. 2 START$ SUM: START$ Answer .TRUE. when you work with the first image, .FALSE. afterwards. 2 Z_NAME$ SUM: Z_NAME$ The name of the file you want to add to the running sum. 2 Y_NAME$ SUM: Y_NAME$ The name of the accumulated weight file. 2 X_NAME$ SUM: X_NAME$ The name of the accumulated sum file. 2 WEIGHT$ SUM: WEIGHT$ The relative weight you want to attribute to the current image. 1 Summary This is a quick reference to all programs available under the GILDAS package. It may not be up to date, and not all listed programs are fully debugged. For convenience, this is a thematic summary in which some algorithms may appear more than once. TO ACCESS THE SUBTOPICS LISTED BELOW, PLEASE TYPE "EXPLAIN SUMMARY subtopic" 2 Available_Tasks
BACKGROUND Compute background images
BLANKING Change the blanking value
CIRCLE Make the circular average of an image
CLEAN Cleaning (of data cubes).
COMBINE Combine two input images in many ways
CORRELATE Compute correlation image of two images.
CV_SMOOTH Cross validation smoothing
DFT Slow Fourier Transform from UV to map
DG_SMOOTH Conjugate Gradient smoothing
* DISPLAY * For colour displays on the image processor. No graphics
EXTRACT Extract a subset of an image to fill a subset of an image
FIELD_FIND Find the fields in an image
FIELD_LIST Find parameters of fields in an image
FILL_CUBE Resample a cube by Random or Regular interpolation
FLOW A bipolar flow hunter !...
FOURIER Compute complex fourier transform of an image
FITS_GILDAS Single image disk-FITS to GILDAS translator
GAUSS_SMOOTH Smooth by a gaussian in the fourier plane
GAUSS_1D Multi component one-dimension gaussian fitting
GAUSS_2D Single component two-dimension gaussian fitting
* GFITS * FITS (disk or tape) to/from GILDAS format translator
* GRAPHIC * GRAPHIC program. Uses GreG for contouring
GILDAS_FITS Single image GILDAS to disk-FITS translator
GRID_CUBE A simple gridding task to make an image from a table.
HEADER List and modify an image header
HISTO_CLOUD Cross histogram of two input images (Table output)
HISTO_CROSS Cross histogram of two input images (Image output)
HISTO_DOUBLE Histogram of an image as a function of another
HISTO_SIMPLE Histogram of an image
HISTO_TABLE Cross histogram of column tables
IMAGE Make an image from an RGDATA-like file
INTERPOLATE Resample and smooth an image along one direction
LIST List a table
MAKE_CUBE Optimum image reconstruction algorithm
MASK Mask part of a cube
MERGE Merging of Tables
MINIMIZE Find best linear correlation between two images
NOISE_SMOOTH Noise smoothing enhancement
* OVERLAY * Colour Images and Graphics on VAXStation-GPX
PLANE Compute the plane going through three points
PSC_IRAS To deal with the IRAS Point Source Catalog
REGRESSION Compute the linear regression from a cross histogram
REPROJECT Reproject an image in a different system
SELF_CAL Self-Calibrate a UV data set
SMOOTH General pattern smoothing algorithm
SORT Sort a table (real version)
SUM Add a large number of images
SWAP Rotate or make a mirrored image of an input image
TABLE Make a Table from a formatted file
TRANSPOSE Transpose images
TRUE_COLOR To produce true color images of (bipolar) flows.
UV_CLIP Clip UV data
UV_COMPRESS Compress UV data by spectral channel averaging
UV_EXTRACT Extract some planes from UV data
UVMAP Makes maps from UV data using gridding + FFT
UVSORT Precess and sort raw UV tables
* VECTOR * To activate GILDAS tasks.
2 Astronomical_Processing
PSC_IRAS To deal with the IRAS Point Source Catalog
REPROJECT Reproject an image in a different system
FLOW A bipolar flow line-wing finding program.
TRUE_COLOR Produces true color images of bipolar flows.
2 Correlation_Analysis
CORRELATE Computes correlation image of two images.
HISTO_CLOUD Cross histogram of two input images (Table output)
HISTO_CROSS Cross histogram of two input images (Image output)
HISTO_DOUBLE Histogram of an image as a function of another
HISTO_SIMPLE Histogram of an image
HISTO_TABLE Cross histogram of column tables
MINIMIZE Find best linear correlation between two images
REGRESSION Compute the linear regression from a cross histogram
2 Display
* DISPLAY * For image colour displays on the ARGS or GPX
* GRAPHIC * GRAPHIC program. Uses GreG for contouring.
* OVERLAY * DISPLAY + GRAPHIC : image and graphic overlay on GPX.
All three programs include VECTOR, and thus can RUN GILDAS tasks.
2 Image_Analysis
BACKGROUND Compute background images
CIRCLE Make the circular average of an image
COMBINE Combine two input images in many ways
FIELD_FIND Find the fields in an image
FIELD_LIST Find parameters of fields in an image
PLANE Compute the plane going through three points
2 Image_Construction
BLANKING Change the blanking value
CLEAN Cleaning (of data cubes).
DFT Slow Fourier Transform from UV to map (cubes)
EXTRACT Extract a subset of an image to fill a subset of an image
FILL_CUBE Resample a cube by Random or Regular interpolation
FITS_GILDAS Single image disk-FITS to GILDAS translator
* GFITS * FITS (disk or tape) to/from GILDAS format translator
GILDAS_FITS Single image GILDAS to disk-FITS translator
GRID_CUBE A gridding task to make an image from a table.
HEADER List and modify an image header
MAKE_CUBE Optimum image reconstruction for noisy data.
MASK Mask part of an image.
SUM Add a large number of images
SWAP Rotate or make a mirrored image of an input image
UVSORT Precess and sort raw UV tables
UVMAP Make maps (cubes) from UV data using gridding + FFT
TRANSPOSE Transpose images
2 Model_Fitting
GAUSS_1D Multi component one-dimension gaussian fitting
GAUSS_2D Single component two-dimension gaussian fitting
REGRESSION Compute the linear regression from a cross histogram
2 Smoothing
CV_SMOOTH Cross Validation smoothing
DG_SMOOTH Conjugate Gradient smoothing
GAUSS_SMOOTH Smooth by a gaussian in the Fourier plane
INTERPOLATE Resample and/or smooth an image along the first dimension
NOISE_SMOOTH Noise smoothing enhancement
SMOOTH General (small) pattern smoothing algorithm
2 Table_Processing
HISTO_TABLE Cross histogram of columns of a table
LIST List a table
MERGE Merging of Tables
SORT Sort a table (real version)
TABLE Make a Table from a formatted file
UVSORT Precess and Sort a raw UV table
* VECTOR * The basic SIC, with vectorial arithmetic. Can RUN any GILDAS task.
2 UV_Processing
DFT Slow Fourier Transform from UV to map
SELF_CAL Self-Calibrate a UV data set
UV_CLIP Clip UV data
UV_COMPRESS Compress UV data by spectral channel averaging
UV_EXTRACT Extract some planes from UV data
UVMAP Makes maps from UV data using gridding + FFT
UVSORT Precess and sort raw UV tables
1 SWAP SWAP This task swaps an image with respect to its first or second axis (e.g. along the X dimension, pixel 1 becomes pixel NX, 2 becomes NX-1, and so on). This operation is not required by most algorithms, because they work in the "User Coordinate" space (such as the contouring command in GRAPHIC...). It is only needed for a few algorithms which work in pixel space (such as HISTO_CROSS). The pixel order can be reversed along X (first axis) or Y (second axis). The program does not handle data cubes at present.
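In array terms the operation is just an index reversal; a short NumPy illustration (not the task itself), assuming the image is held as image[y, x]:
  import numpy as np
  image = np.arange(12.0).reshape(3, 4)   # hypothetical 4 x 3 image, image[y, x]
  mirror_x = image[:, ::-1]               # AXIS$ = X : pixel 1 <-> pixel NX
  mirror_y = image[::-1, :]               # AXIS$ = Y : pixel 1 <-> pixel NY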
2 Y_NAME$ SWAP: Y_NAME$ The name of the input image. 2 X_NAME$ SWAP: X_NAME$ The name of the output mirrored image. 2 AXIS$ SWAP: AXIS$ The name of the axis along which the order is reversed. This can be X or Y. 1 TABLE TABLE This task produces a GILDAS Table from a formatted listing. The table format is recommended if you intend to use your data more than a few (4) times, i.e. in virtually any case. Moreover, the table format can be processed by more programs than the list-directed one. You should set the number of columns exactly, and the number of lines either to 0, to let the program compute the table size, or to the real number of lines in your formatted file. Never set the number of lines or the number of columns larger than really present, or the program will crash. Task LIST does the reverse process. 2 FILE$ TABLE: FILE$ The name of the input formatted file. 2 TABLE$ TABLE: TABLE$ The name of the output GILDAS table. 2 COLUMN$ TABLE: COLUMN$ The number of columns in the formatted file. 2 LINE$ TABLE: LINE$ The number of lines in the formatted file. Answer 0 if you don't know. 1 TRANSPOSE TRANSPOSE A non-general, simple-minded routine to transpose data cubes (or images). The only transposition codes it currently recognises are 312, 231 and 213, if I remember correctly. 2 Y_NAME$ TRANSPOSE: Y_NAME$ The name of the input cube. 2 X_NAME$ TRANSPOSE: X_NAME$ The name of the transposed output cube. 2 ORDER$ TRANSPOSE: ORDER$ The order of the input axes 123 in the output cube. 231, 312, and 213 are supported at present. Other orders can be added relatively easily if they are needed. 1 TRUE_COLOR TRUE_COLOR This is a dedicated routine to produce bipolar outflow maps in "true" color. It takes as input a cube (N by NX by NY) with the velocity along the first axis, and produces a pseudo cube (3 by NX by NY) containing the integrated intensities in the blue and red lobes in planes 1 and 2, and a specially encoded map in plane 3. The latter must be displayed in programs OVERLAY or DISPLAY with a special color Look Up Table as defined below:
ALL\SELECT RANGE 1 225                  ! Range of input values
ALL\SELECT LUT RGB                      ! Select Red Green Blue mode
SIC\LET RED[I] (I-1)/225                ! Red look up table
SIC\LET BLUE[I] (MOD(I+13,15)+1)/15.5   ! Blue look up table
SIC\LET BLUE[1] 0                       ! Index 1 is for background
SIC\LET GREEN 0                         ! No green
ALL\LUT
2 IN$ Input data cube in Velocity Position Position ordering. 2 OUT$ Pseudo output data cube. 2 VELOCITIES$ Four values specifying the low and high ends of the blue and red wings, to be given in increasing order. 1 UVMAP UVMAP : map making from UV data, using FFT UVMAP makes a map from UV data by gridding the UV data with a convolving function, and then Fast Fourier Transforming the individual channels. The gridding is done only once, thus neglecting frequency corrections for the channels. However, a subset of the input table channels can be selected, so that multiple runs of UVMAP can be used to apply the exact observing frequency to each channel, at the expense of gridding time. UVMAP produces an output LMV cube (X, Y, Velocity) and a BEAM image from an input sorted, precessed UV table. This method is fast: roughly 8 seconds for 1 channel and 1000 visibilities, including overhead time, gridding and beam creation. The processing time (in seconds) is roughly given by
TIME = 4*(NV/1000)*(NX/64)*(NY/64)                        ! for gridding
     + 2*(NX/64)*(NY/64)*LOG(NX/64+NY/64)*(NC+1)          ! for FFTs of channels
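As a rough worked example (taking LOG as the natural logarithm, which is an assumption here, and NV, NX, NY, NC as the numbers of visibilities, pixels and channels), the formula reproduces the quoted timing:
  from math import log

  def uvmap_time(nv, nx, ny, nc):
      grid = 4 * (nv / 1000) * (nx / 64) * (ny / 64)
      ffts = 2 * (nx / 64) * (ny / 64) * log(nx / 64 + ny / 64) * (nc + 1)
      return grid + ffts

  print(uvmap_time(1000, 64, 64, 1))   # about 6.8 s, of the same order as the
                                       # "roughly 8 seconds" quoted above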
2 UV_TABLE$ TASK\FILE "UV table" UV_TABLE$ = The sorted, precessed UV table name. Default extension is .UVT. A raw UV table can be sorted and precessed using the UVSORT task. 2 MAP_NAME$ TASK\CHARACTER "Map name" MAP_NAME$ = The output map name. Default extension is .LMV. A beam with the same name will be produced, with default extension .BEAM. 2 UV_TAPER$ TASK\REAL "UV taper(1/e level, meters)" UV_TAPER$ = The UV taper (to be applied in both directions). 2 WEIGHT_MODE$ TASK\CHARACTER "Weight mode (NA or UN)" WEIGHT_MODE$ = NAtural (optimum in terms of sensitivity) or UNiform (usually lower sidelobes) weighting. 2 MAP_SIZE$ TASK\INTEGER "Map size(2)" MAP_SIZE$[2] = Number of pixels in X and Y. Need not be a power of two, but this would be much better for any further image processing. 2 MAP_CELL$ TASK\REAL "Map cell(arc sec)" MAP_CELL$ = The map cell size (identical in X and Y). 2 UV_CELL$ TASK\REAL "UV cell(m), for unif. weighting" UV_CELL$ = The UV cell size for uniform weighting. Should be of the order of half the dish diameter, or smaller. 2 WCOL$ TASK\INTEGER "Weight channel" WCOL$ The channel from which the weight should be taken. WCOL$ set to 0 implies no weighting. 2 MCOL$ TASK\INTEGER "First and Last channel to map" MCOL$[2] The first and last channel to be mapped. 1 UVSORT UVSORT Sort a raw UV data set to reorder the UV data in ascending, negative V values for further processing using UVMAP. In the future, it will also apply differential precession, which is currently neglected. 2 UVDATA$ CHARACTER UVDATA$ Input UV file name. Default extension is .UVT. 2 UVSORT$ CHARACTER UVSORT$ Output UV file name. Default extension is .UVS. 1 UV_CLIP UV_CLIP Clip UV data to suppress grossly discrepant values. All channels are flagged if any channel between the first and last considered (specified by CHANNELS$) has an amplitude greater than VCLIP$. Clipped channels are set to zero amplitude and zero weight.
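A minimal sketch of this selection logic (illustration only, assuming the visibilities and weights have been unpacked into arrays of shape (nvis, nchan); the real UV table layout is different):
  import numpy as np

  def clip_uv(vis, weight, vclip, first, last):
      # vis: complex visibilities; flag a visibility if any channel in
      # [first, last] (1-based) exceeds the threshold in |Re| or |Im|.
      sel = vis[:, first - 1:last]
      bad = (np.abs(sel.real) > vclip) | (np.abs(sel.imag) > vclip)
      flag = bad.any(axis=1)
      vis[flag, :] = 0          # zero amplitude ...
      weight[flag, :] = 0       # ... and zero weight, for ALL channels
      return vis, weight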
2 UVDATA$ CHARACTER UVDATA$ Input / Output UV table. 2 VCLIP$ REAL VCLIP$ Absolute amplitude threshold. In fact, the data are clipped if abs(Real) or abs(Imag) is greater than VCLIP$, so this is only a rough clipping threshold. 2 CHANNELS$ INTEGER CHANNELS$[2] First and last channel in which to look for clipping. To preserve synthesized beam consistency, all channels (including those outside the CHANNELS$ interval) are clipped if one is found bad. 1 UV_COMPRESS UV_COMPRESS: Channel averaging of UV data Averages NC$ channels of a UV data set to produce a smaller, spectrally smoothed one. 2 UV_INPUT$ FILE UV_INPUT$ Input UV file name. 2 UV_OUTPUT$ FILE UV_OUTPUT$ Output UV file name. 1 UV_EXTRACT UV_EXTRACT : Extract channels from a UV data set Extract the spectral line channels specified by CHANNELS$ from a bigger UV data set. 2 UV_INPUT$ FILE UV_INPUT$ Input UV data set name. 2 UV_OUTPUT$ FILE UV_OUTPUT$ Output UV data set name. 2 CHANNELS$ INTEGER CHANNELS$[2] First and last channels selected. 1 UV_MERGE UV_MERGE This task merges together two tables of UV data to form a single output table. The two input tables must have the same spectral characteristics (this is not checked). Multiplicative factors may be applied to both the amplitudes and the weights of both tables. 2 TABLE1_IN$ This is the name of the first input table (.UVT extension by default). 2 TABLE2_IN$ This is the name of the second input table (.UVT extension by default). 2 TABLE_OUT$ This is the name of the output table (.UVT extension by default). 2 WEIGHT$ These two real numbers are the multiplicative correction factors to be applied to all the weights of the two tables. 2 FACTOR$ These two real numbers are the multiplicative correction factors to be applied to all the amplitudes of the two tables. 1 UV_SHORT UV_SHORT This task prepares a UV table of short spacings from a single-dish map. This table may later be merged with an interferometer UV table. The operations performed are:
- Fourier transform of the single dish map.
- Division by the Fourier transform of the single dish beam, up to a maximum spacing (SD_DIAM$, in meters).
- Inverse Fourier transform to the image plane, in order to multiply the image by the primary beam of the interferometer elements; Fourier transform back to the UV plane.
- Creation of the UV table, with a given weight SD_WEIGHT$ and an appropriate calibration factor to janskys, SD_FACTOR$.
Both the single-dish and the interferometer antennas are assumed to have gaussian beams (SD_BEAM$ and IP_BEAM$, in radians). 2 MAP_NAME$ This is the name of the input single dish map. 2 UV_TABLE$ This is the name of the output UV table that is created. 2 DO_SINGLE$ Logical value, should be .true. except for test purposes. 2 SD_DIAM$ Useful diameter of the single dish, in meters. No spacing higher than SD_DIAM$ is generated. 2 SD_BEAM$ Half-power beam width of the single dish antenna, in radians. The beam is assumed to be gaussian. 2 SD_WEIGHT$ Total weight of the generated visibilities. Should be 10**-6/sigma**2, where sigma is the r.m.s. of the zero-spacing value (total flux) in janskys. 2 SD_FACTOR$ Multiplicative calibration factor; it is used to convert from the single dish map units (e.g., main-beam brightness temperature) to janskys. 2 DO_PRIMARY$ Logical value, should be .true. except for test purposes. 2 IP_BEAM$ Half-power beam width of the interferometer antennas, in radians. The beam is assumed to be gaussian. 1 UV_ZERO UV_ZERO This task appends zero-spacing (single-dish) data (spectrum and/or continuum flux) to an existing UV table. If zero-spacing data already exist in the UV table, they are replaced by the new data. The zero-spacing spectrum is a GILDAS table, which should be created in CLASS by the command GREG. Additional inputs are: the weight to be used, an amplitude calibration factor affecting the spectrum, and a continuum flux. The single-dish spectrum is resampled to the spectral characteristics of the UV table. 2 UV_TABLE$ This is the name of the UV table (input and output). 2 FLUX$ The single-dish continuum flux (in janskys) to be used for all channels. 2 DO_SPECTRUM$ A logical value.
- If true, GREG_TABLE$ is used, and the resulting zero-spacing visibility is: spectrum * FACTOR$ + FLUX$.
- If false, the zero-spacing visibility is FLUX$ for all channels.
2 GREG_TABLE$ This is the name of the single dish spectrum (a GILDAS table created by the command GREG in CLASS). 2 WEIGHT$ The weight of the zero spacing (typically 10**-6/sigma**2, where sigma is the rms in janskys). 2 FACTOR$ A multiplicative calibration factor, used to convert from the spectrum units (e.g., main-beam brightness temperature) to janskys. 1 Vector An interactive program which allows the execution of any GILDAS task and, through the SIC monitor, any mathematical operation on images.