© European Southern Observatory
FAQ Revised: Tue Oct 24 10:27:42 2000
You do not "learn" eclipse. The commands are more or less dedicated, one per template. What you have to learn is more or less similar to learning new IRAF commands (what to put in 'epar', how to launch it, learn what is happening in the algorithm, etc.). You do not have to learn a new language, because what it takes is launching a Unix command, not more.
Now, if you think you have better routines, IRAF scripts or whatever, you
will probably be more efficient with tools you have written, you know
where they fail and where they work. If you have no clue about ISAAC
data reduction or if what you want to do is already performed by an
eclipse command, you will be faster and more efficient using it.
If you want to do something non-trivial that is not supported in
the eclipse commands, or if you need any kind of interactive
algorithm, you have no choice but to use an interactive environment like
IRAF. In any case, past some point you will need something like IRAF
to interact with your data to see what is in there. eclipse
only covers the first stages of data reduction, until an astronomical
knowledge of the data is required to go further. Then it is the
astronomer's job and no software can do it in your place.
ESO is trying to build and validate a number of automatic data reduction tools to provide calibrated and reduced data in the shortest time, without any intervention from the user. The fundamental principle behind pipelines is precisely that they run without anybody to specify which calibration data to use, which recipe to use, or to tell the software what to do in case one component or another fails.
A pipeline recipe is a data reduction program that reduces a given set of frames, possibly using provided calibration frames. It takes as input a list of relevant frames and produces one or more products. Recipes are launched by the pipeline infrastructure.
Since recipes do not interact with a user, there are many choices they cannot make, like discarding a frame that happens to contain a satellite trail; and when they do have to make a choice, there is no guarantee that it will always be right. Another restriction imposed by this lack of interactivity is the inability to switch to a different algorithm that might be more efficient for a given data set. No automatic software can be expected to do that in a reliable way. Pipeline products are usually a good indication of what will be obtained after proper (hand) data reduction, but they are rarely a final product.
If you reduce your data using eclipse recipes for the supported
instruments, you will find it handy that with little preparation work you can
usually get a result in no time (assuming everything goes fine), without
having to type anything past the configuration files. But your task should
not stop there: validate what can be validated against catalogs and
previous measurements, re-run the procedure with slightly different
parameters, apply the same reduction with another algorithm or software,
etc. Keep in mind that automatic software must be robust but will not
outsmart you. Do not blindly trust what comes out.
If you need to increase some of these parameters, the limits have been intentionally set in a deeply rooted include file, which means you have to modify a basic eclipse source file and recompile everything from scratch (having taken care of removing all binary executables, libraries and object files before recompiling). The file to edit is eclipse/include/cube_defs.h and the constants to modify are:
#define MAX_COLUMN_NUMBER   (40000)
#define MAX_LINE_NUMBER     (40000)
#define MAX_IMAGE_NUMBER    (10240)
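A minimal sketch of the rebuild procedure (the 'make clean' target is an assumption; if your distribution does not provide one, remove the object files, libraries and executables by hand):

% cd eclipse
% vi include/cube_defs.h        (raise the MAX_* constants)
% make clean
% make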
Do not forget in any case that eclipse always assumes that any single input image fits into memory (be it real or virtual). Nowhere in eclipse is there stripe-based processing of data, which would allow simple operations to be executed on huge images. This is done on purpose: eclipse is targeted more towards executing image processing tasks on many reasonably-sized images than towards applying dumb operations to enormous data sets. If you happen to process data files containing really large images, I'd recommend either cutting the images down to smaller ones, or turning to another data processing package.
This being said, if there are requests for that kind of processing it is
quite possible to add such functionality to the library, but that
effort simply has not been identified as needed yet.
Notice that some commands have default values for the ESO 3.6m telescope in La Silla, and some contain Adonis-specific features in their default behaviour (strehl, for example). David Le Mignant has put up a series of scripts to reduce Adonis data with eclipse; have a look at the following page:
http://sc6.sc.eso.org/~dlemigna/eclipse/
The wdat command is a tool written to read Adonis DAT tapes; it is only useful to Adonis users. wdat was written and is supported by Francois Lacombe at the Paris-Meudon observatory; contact him in case of trouble with the command. You can also have a look at the wdat home page, which is reachable from the main eclipse home page.
Since version 3.1.1, all Adonis specific commands can be found in the base directory eclipse/src/adonis.
Since version 3.8, all Fabry-Perot commands have disappeared from the
eclipse distribution. A quick survey has shown that nobody was actually
using them at all. They are kept in the repository for future reference but
not distributed any more.
% jitter
jitter
part of eclipse library. (c) ESO 1996-2000
purpose: isaac/sofi jitter imaging reduction
parsing configuration file...
error: cannot find ini file [jitter.ini]: aborting
error: in parsing jitter.ini: aborting jitter
% jitter --help
jitter
part of eclipse library. (c) ESO 1996-2000
purpose: isaac/sofi jitter imaging reduction
use : jitter [flags] [options]
flags are :
-d or --display : display information stored in a .ini file
-g or --generate : generate a .ini file
-v or --verbose : enable verbose mode
-D or --debug : enable debug mode
-t or --time : estimate used CPU time
--test : generate test data (see doc)
options are:
-f or --file
to specify which .ini file to work on (default: jitter.ini)
-h or --help to get this help
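In practice, the error message shown above simply means that jitter could not find a jitter.ini file in the current directory. A typical first run therefore looks like this (a sketch; the generated jitter.ini must be edited to describe your own frames before the second command):

% jitter -g
(edit jitter.ini)
% jitter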
Short   Long        Action
-L      --license   Display license and exit
        --help      Display short help and exit
        --version   Display current version and exit
Notice that previous versions (eclipse 3.3 and before) had options to
set the verbose and debug modes, and an option to set up memory
requirements. These are not settable through command-line options any
more, but through environment variables. See the INSTALL file
in the base eclipse directory in any distribution (3.4 and later).
This being said, it is true that eclipse has a very fast
processing engine which may be ideally suited to implementing
CPU-intensive algorithms, as all deconvolution algorithms are. You are
free to write your own scripts making use of the FFT and arithmetic
routines of eclipse to implement standard deconvolution algorithms
quickly. Support for classical deconvolution is not yet planned in
eclipse, though.
For example, if you want to subtract two FITS images named object and skybg, they will not be recognized by ccube as FITS files unless you prefix their names in the expression with an at sign ('@').
--- Incorrect ---
% ccube "object skybg -" bgsub
error: unrecognized token: [object]
error: in arithmetic expression: exiting

--- Correct ---
% ccube "@object @skybg -" bgsub
All this (and more) is explained in detail in the
ccube manual page.
% ls
image1  image2  image1-image2  2
% ccube -s "image1-image2/2"
Simple case: there are files named 'image1', 'image2', 'image1-image2', and '2' in the current directory. Which of them should be used, and to do what? Also: how do you tell the slash used as a division operator from the slash used as a path separator?
To remove ambiguities, switch to Reverse Polish Notation and separate all your arguments with blanks. The above example could then be written, depending on what was intended:
Subtract file 'image2' from file 'image1' and divide by 2:
% ccube "@image1 @image2 - 2 /"

Subtract file 'image2' from file 'image1' and divide by file '2':
% ccube "@image1 @image2 - @2 /"

Divide the file 'image1-image2' by 2:
% ccube "@image1-image2 2 /"

Divide the file 'image1-image2' by file '2':
% ccube "@image1-image2 @2 /"

Subtract file '2' in directory 'image2' from file 'image1':
% ccube "@image1 @image2/2 -"
The case is actually quite common: with VLT archive file names such as
ONTT.1998-12-03T00:45:04.069.fits, it becomes hard to
distinguish the file name from a bunch of numerical arguments, unless we
restrict input FITS file names to a pre-defined list, which would
be endless and not so useful. Better to go with Reverse Polish Notation.
You can have a look at the
deadpix manual page.
% ccube --version
eclipse version: 3.6-11
The same applies to version 3.5.
The requirements to compile eclipse are that the system must be POSIX-compliant, have the mmap() system call (POSIX.1b compatibility), and... have an ANSI C compiler.
The eclipse Makefile knows how to handle the gcc and egcs compilers, so if you want to use one of those rather than the default set for your machine, you can do so by requesting a different OSTYPE:
% make OSTYPE=gcc
% make OSTYPE=egcs

Sometimes, you also need to set the CCNAME variable to gcc by hand to compile. In that case, you would type:
% setenv CCNAME gcc
% make OSTYPE=gcc
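setenv is csh/tcsh syntax. Under sh-like shells (sh, bash, ksh), the equivalent is:

% CCNAME=gcc ; export CCNAME
% make OSTYPE=gcc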
gcc and egcs bugs
Be aware that enormous problems have been reported lately with gcc and egcs, mostly on Linux.
If you are using Linux with any of the problematic compiler versions, good luck!
If you got eclipse to compile on any other system, please drop me
a mail explaining the problems you faced and the options on your local
system to compile ANSI C.
For HPUX users: there is a warning about a MAXINT macro
being multiply defined. We cannot do anything about that; it is due to an
internal HPUX inconsistency.
dfits *.fits
is_ghost *.fits
ado_refits *.fits
stcube *.fits
Or even worse:
dfits */*.fits
is_ghost */*.fits
The problem is not really in the eclipse command, but in the shell itself. Whenever you type '*.fits' on the command-line, the shell expands it to a list of files and feeds it into the command. Example:
You have a.fits, b.fits and c.fits in the current directory. Typing:
is_ghost *.fits
will actually be sent to the command as if you had typed:
is_ghost a.fits b.fits c.fits
Problem is: the maximum length of a command line is limited (to 512 characters or some other fixed length) on many Unix systems, which means that if the list of FITS files is longer than that, the list will be truncated. To convince yourself, try the following: replace the eclipse command name by 'echo' and have a look at what you get:
echo *.fits
echo */*.fits
The result is that the list does not contain all the file names. This usually makes the eclipse command barf with a stupid error message. I will try to remedy that in future versions by giving an intelligible error message instead.
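To see how long the expanded command line actually is, you can count its characters with the standard wc tool:

% echo *.fits | wc -c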
In the meantime, you can also provide the file names one by one by making use of the 'find' command (present on all Unix platforms). Example:
find . -name \*.fits -exec is_ghost {} \;
Use of the backslashes is important: they prevent the shell from expanding *.fits into the actual list.
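Another standard Unix approach is to let xargs split the expanded file list into chunks that fit on a command line:

% find . -name \*.fits -print | xargs is_ghost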
You can also use the shell 'foreach' command under csh/tcsh:
foreach i (*.fits)
foreach? is_ghost $i
foreach? end
Or similarly:
foreach i (*/*.fits)
foreach? is_ghost $i
foreach? end
There are similar mechanisms under bash, sh, ksh, and other Unix shells.
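For example, the equivalent loop under sh-like shells (sh, bash, ksh) reads:

% for i in *.fits ; do is_ghost $i ; done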
If you are tired of typing:

% dfits *.fits | fitsort keyword1 keyword2 ... keywordn

you can set the following alias in csh or tcsh:
alias grepfits 'dfits *.fits | fitsort \!*'
Type the above line exactly as it appears here. Now you can do:
% grepfits keyword1 keyword2 ... keywordn
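Under bash, the '\!*' trick is not needed: arguments typed after the alias are simply appended at the end of the expanded text, which is what we want here:

% alias grepfits='dfits *.fits | fitsort'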
To make them appear by Unix file modification date (most recent first):

% dfits `ls -1t *.fits` | fitsort keyword1 ... keywordn

To make them appear by FITS date (assuming it is found in the DATE keyword):

% dfits *.fits | fitsort DATE keyword1 ... keywordn | sort -k 2
Example:
% ccube "a.fits 2 *" /dev/stdout > b.fits
This trick can be used to process data on one machine and output the results to another one (e.g. on a Beowulf cluster). If you have the correct settings for rsh, you can try:
Assuming machine1 and machine2 are two machines on which you have accounts, and they trust each other (your .rhosts file or similar must be properly configured):
machine1% ccube "a.fits 2 *" /dev/stdout | rsh machine2 "cat > b.fits"
This command loads the file a.fits, multiplies all its pixels by 2, then sends the result to its stdout; the pipe catches the data and redirects them to an rsh command. This remote command, running on machine2, uses cat to catch everything coming from its stdin and redirect it to a file called b.fits on machine2.
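If rsh is not available or not allowed at your site, the same trick works with ssh (assuming you also have an account on machine2 reachable over ssh):

machine1% ccube "a.fits 2 *" /dev/stdout | ssh machine2 "cat > b.fits"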
Unfortunately, it looks like the /dev/stdout device does not
exist on all Unixes. If that is the case for you, expect support for
stdout output in the next eclipse release. The convention will
be: if the name of an output file is STDOUT, all output data
are sent to stdout.
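Under that convention, the example above would then read (not functional until that release is out):

machine1% ccube "a.fits 2 *" STDOUT | rsh machine2 "cat > b.fits"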