© European Southern Observatory
FAQ Revised: Monday 11 March 2002 13:01:20
You do not "learn" eclipse. The commands are more or less dedicated, one per template. What you have to learn is more or less similar to learning new IRAF commands (what to put in 'epar', how to launch it, learn what is happening in the algorithm, etc.). You do not have to learn a new language, because what it takes is launching a Unix command, not more.
Now, if you think you have better routines, IRAF scripts or whatever, you
will probably be more efficient with tools you have written yourself: you
know where they fail and where they work. If you have no clue about ISAAC
data reduction, or if what you want to do is already performed by an
eclipse command, you will be faster and more efficient using eclipse.
If you want to do something non-trivial that is not supported by
the eclipse commands, or if you need any kind of interactive
algorithm, you have no choice but to use an interactive environment like
IRAF. In any case, past some point you will need something like IRAF
to interact with your data and see what is in there. eclipse
only covers the first stages of data reduction, up to the point where
astronomical knowledge of the data is required to go further. From there
on it is the astronomer's job, and no software can do it in your place.
If you want to base a C development on this library, you are advised not to do so. Say you go ahead and develop C or C++ software based on eclipse: you download the latest version today and start working with it. The next version comes out, and you realize that an object you have been using all along has changed type and is now offered in an incompatible way. You then have to either rewrite your code to use the new structure (tedious and unsafe), or un-hook your development from the eclipse mainstream, forking from the main project. It might be good to be independent, but you will also miss all bug corrections and enhancements made to the library.
On the other hand, we have developed interfaces to scripting languages (Python and Lua) for which we guarantee a certain level of stability. Functions and objects might change in the underlying C library, but we try to keep the upper layer as stable as possible. The Python and Lua cubes offer very simple operators, and they are sufficiently high-level not to be concerned with the details of the underlying C library. This efficiently shields script writers from changes in the low-level library.
So: basing a C development on libeclipse.a and eclipse.h is not
recommended, unless you really want to fork from the mainstream
development, or are sufficiently confident that you will be able to keep
up with the changes arising in later versions.
Using the scripting languages is the way to go for most developments. There
is of course no absolute rule; you are the best placed to decide what should
be done in your software project.
ESO is trying to build and validate a number of automatic data reduction tools to provide calibrated and reduced data in the shortest time, without any intervention from the user. The fundamental principle behind pipelines is precisely that they run without anybody to specify which calibration data to use, which recipe to use, or tell the software what to do in case of failure of one component or another.
A pipeline recipe is a data reduction program that reduces a given set of frames, possibly using provided calibration frames. This program takes as input a list of relevant frames and produces one or more products. Recipes are launched by the pipeline infrastructure.
Since recipes do not interact with a user, there are many choices they cannot make, like discarding a frame that happens to contain a satellite trail, and when they do have to make choices, there is no guarantee that they will always be correct. Another restriction imposed by this lack of interactivity is that they cannot simply switch to another algorithm that might be more efficient for a given data set; no automatic software can be expected to do that in a reliable way. Pipeline products are usually a good indication of what will be obtained after proper (hand) data reduction, but they are rarely a final product.
If you reduce your data using eclipse recipes for the supported
instruments, you will find it handy that, with little preparation work, you
can usually get a result in no time (assuming everything goes fine), without
having to type anything more than the configuration files. But your task
should not stop there: validate what can be validated against catalogs and
previous measurements, re-run the procedure with slightly different
parameters, apply the same reduction with another algorithm or another
piece of software, etc. Keep in mind that automatic software must above all
be robust and will not outsmart you. Do not blindly trust what comes out.
If you need to increase some of these parameters, the limits have been intentionally set in a deeply rooted include file, which means you have to modify a basic eclipse source file and recompile everything from scratch (having taken care of removing all binary executables, libraries and object files before recompiling). The file to edit is eclipse/include/cube_defs.h and the constants to modify are:
#define MAX_COLUMN_NUMBER   (40000)
#define MAX_LINE_NUMBER     (40000)
#define MAX_IMAGE_NUMBER    (10240)
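A minimal sketch of the rebuild, assuming a make-based build (see the INSTALL file for the exact procedure; the name of the clean target may differ in your distribution):

% cd eclipse
% $EDITOR include/cube_defs.h    # raise the limits as needed
% make clean                     # remove all objects, libraries and binaries
% make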
Do not forget that eclipse always assumes that any single input image fits into memory (be it real or virtual). Nowhere in eclipse has stripe-based processing of data been introduced, which would allow simple operations to be executed on huge images. This is again done on purpose: eclipse is targeted more towards executing image processing tasks on many reasonably sized images than towards applying simple operations to enormous data sets. If you happen to process data files containing really large images, I would recommend either cutting the images down to smaller ones, or going to other data processing packages.
This being said, if there are requests for that kind of processing it is
quite possible to add such functionality to the library, but that
effort simply has not been identified yet.
Notice that some commands have default values for the ESO 3.6m telescope in La Silla, and some contain Adonis-specific features in their default behaviour (strehl, for example). David Le Mignant has put together a series of scripts to reduce Adonis data with eclipse; see the Adonis home page on the ESO web site for more information.
The wdat command is a tool written to read Adonis DAT tapes; it is only useful to Adonis users. wdat was written and is supported by Francois Lacombe at the Paris-Meudon observatory; contact him in case of trouble with the command. You can also have a look at the wdat home page, which is reachable from the Adonis home page.
Since version 3.1.1, all Adonis specific commands can be found in the base directory eclipse/ins/adonis.
Since version 3.8, all Fabry-Perot commands have disappeared from the
eclipse distribution. A quick survey has shown that nobody was actually
using them at all. They are kept in the repository in Garching for future
reference but not distributed any more.
% jitter
jitter
part of eclipse library. (c) ESO 1996-2001
purpose: isaac/sofi jitter imaging reduction

eclipse run-time configuration
verbose   : [1]
debug     : [2]
tmp_dir   : [.]
max_mem   : [794 ] Mbytes
max_swap  : [1024] Mbytes

parsing configuration file...
error: cannot find ini file [jitter.ini]: aborting
error: in parsing jitter.ini: aborting

% jitter --help
jitter
part of eclipse library. (c) ESO 1996-2001
purpose: isaac/sofi jitter imaging reduction

use : jitter [flags] [options]
flags are :
    -g or --generate : generate a .ini file
    -t or --time     : estimate used CPU time
    --test           : generate test data (see doc)
    --offset         : get help about the offset file format
    -h or --help     : get this help
options are :
    -f or --file     : specify which .ini file to work on (default: jitter.ini)
following options are only valid with -g or --generate:
    -i or --in       : provide input file name
    -o or --out      : provide output file name
    -c or --calib    : provide calibration file name
    -r or --rb       : generate according to a valid RB
Short   Long        Action
-L      --license   Display license and exit
        --help      Display short help and exit
        --version   Display current version and exit
Notice that there were previously (eclipse 3.3 and before) options to
set the verbose and debug modes, and an option to set up memory
requirements. These are no longer settable through command-line options,
but through environment variables. See the INSTALL file
in the base eclipse directory of any distribution (3.4 and later).
This being said, it is true that eclipse has a very fast
processing engine which may be ideally suited to implementing CPU-intensive
algorithms, as all deconvolution algorithms are. You are free to write
your own scripts making use of the FFT and arithmetic routines of
eclipse to implement standard deconvolution algorithms quickly.
Support for classical deconvolution is not yet planned in
eclipse, though.
For example, if you want to subtract two FITS images named 'object' and 'skybg', they will not be recognized by ccube as FITS files unless you prefix their names in the expression with an '@' (at sign).
--- Incorrect ---
% ccube "object skybg -" bgsub
error: unrecognized token: [object]
error: in arithmetic expression: exiting

--- Correct ---
% ccube "@object @skybg -" bgsub
All this (and more) is explained in detail in the
ccube manual page.
% ls
image1    image2    image1-image2    2
% ccube -s "image1-image2/2"
Simple case: there are files named 'image1', 'image2', 'image1-image2', and '2' in the current directory. Which files should be used, and to do what? Also: how do you distinguish between the slash as the division operator and the slash as a path separator?
To remove ambiguities, switch to Reverse Polish Notation and separate all your arguments with blanks. The above example could then be written, depending on what was intended:
Subtract file 'image2' from file 'image1' and divide by 2:
% ccube "@image1 @image2 - 2 /"

Subtract file 'image2' from file 'image1' and divide by file '2':
% ccube "@image1 @image2 - @2 /"
Divide the file 'image1-image2' by 2:
% ccube "@image1-image2 2 /"

Divide the file 'image1-image2' by file '2':
% ccube "@image1-image2 @2 /"

Subtract file '2' in directory 'image2' from file 'image1':
% ccube "@image1 @image2/2 -"
The case is actually quite common: with VLT archive file names such as
ONTT.1998-12-03T00:45:04.069.fits it becomes hard to
distinguish the file name from a bunch of numerical arguments, unless we
restrict input FITS file names to a pre-defined list, but that would
be endless and not very useful. Better to go to Reverse Polish Notation.
You can have a look at the
deadpix manual page.
% ccube --version
eclipse version: 3.6-11
The same applies to version 3.5.
The requirements to compile eclipse are that the system must be POSIX-compliant, have the mmap() system call (POSIX.1b compatibility), and... have an ANSI C compiler.
See the INSTALL file in the base eclipse directory for
troubleshooting the installation.
The problem with cross-compilation (i.e. compiling the binaries on one machine and running them on another) is that there are a number of parameters which are determined at compile-time. These parameters are then assumed constant by all binaries derived from the eclipse libraries. If they happen to change because the machine on which you are running eclipse has undergone drastic changes (switch from 32-bit to 64-bit, processor or OS upgrade, etc.), or because you are running the binaries on a secondary platform, you are running the risk of getting unexplained core dumps at best, file corruption without warning at worst.
It is possible to move some parameter checks from compile-time to run-time, but that would bring serious performance issues and is not a good solution. A compiler usually optimizes the compiled code for the platform it is running on. If you really want to do cross-compilation, you should use dedicated cross-compilers or specific switches which are meant to do that.
The main recommendation would be: if you want to run eclipse on a given
machine, compile it locally. The source code is distributed for that
purpose.
For HPUX users: there is a warning about a MAXINT macro being defined multiple times. We cannot do anything about that; it is due to an internal HPUX inconsistency.
If you compile eclipse using gcc -Wall, you are likely to get a
family of warnings about which we can do nothing (functions defined and
never used, etc.). No need to report these warnings; we get them too and
try to minimize the number of such messages.
dfits *.fits
is_ghost *.fits
ado_refits *.fits
stcube *.fits
Or even worse:
dfits */*.fits
is_ghost */*.fits
The problem is not really in the eclipse command, but in the shell itself. Whenever you type '*.fits' on the command-line, the shell expands it to a list of files and feeds it into the command. Example:
You have a.fits, b.fits and c.fits in the current directory. Typing:
is_ghost *.fits
will actually be sent to the command as if you had typed:
is_ghost a.fits b.fits c.fits
Problem is: the maximum length of a command line is limited to 512 characters (or some similar fixed value) on many Unix systems, which means that if the list of FITS files is longer than that, the list will be truncated. To convince yourself, try the following: replace the eclipse command name by 'echo' and have a look at what you get:
echo *.fits
echo */*.fits
The result is that the list does not contain all the file names. This usually makes the eclipse command barf with an unhelpful error message. I will try to remedy that in future versions by giving an intelligible error message instead.
In the meantime, you can provide the file names one by one by making use of the 'find' command (present on all Unix platforms). Example:
find . -name \*.fits -exec is_ghost {} \;
The use of backslashes is important: it prevents the shell from expanding *.fits into the actual list.
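Another standard Unix approach (not specific to eclipse) is to let xargs split the long file list into several command invocations of acceptable length:

find . -name \*.fits -print | xargs is_ghost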
You can also use the shell 'foreach' command under csh/tcsh:
foreach i (*.fits)
foreach? is_ghost $i
foreach? end
Or similarly:
foreach i (*/*.fits)
foreach? is_ghost $i
foreach? end
There are similar mechanisms under bash, sh, ksh, and other Unix shells.
Have a look at your Unix shell documentation to learn how to loop on a
large number of file names.
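For example, a sketch of the equivalent loop under bash or any other Bourne-type shell:

for i in *.fits
do
    is_ghost $i
done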
If you are tired of typing:

% dfits *.fits | fitsort keyword1 keyword2 ... keywordn

you can set the following alias in csh or tcsh:
alias grepfits 'dfits *.fits | fitsort \!*'
Type the above line exactly as it appears here. Now you can do:
% grepfits keyword1 keyword2 ... keywordn
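If you use bash or another Bourne-type shell (which has no csh-style aliases taking arguments), a roughly equivalent shell function would be the following sketch, keeping the name grepfits only for consistency:

grepfits() { dfits *.fits | fitsort "$@"; }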
To make them appear by Unix file modification date (most recent first):

% dfits `ls -1t *.fits` | fitsort keyword1 ... keywordn

To make them appear by FITS date (assuming it is found in the DATE keyword):

% dfits *.fits | fitsort DATE keyword1 ... keywordn | sort -k 2
Example:
% ccube "a.fits 2 *" /dev/stdout > b.fits
This trick can be used to process data on one machine and output the results to another one (e.g. on a Beowulf cluster). If you have the correct settings for rsh, you can try:
Assuming machine1 and machine2 are two machines on which you have accounts, and they trust each other (your .rhosts file or similar must be properly configured):
machine1% ccube "a.fits 2 *" /dev/stdout | rsh machine2 "cat > b.fits"
This command loads the file a.fits, multiplies all its pixels by 2, and sends the result to its stdout; the pipe catches it and redirects it to an rsh command. The remote command running on machine2 uses cat to catch all data coming from its stdin and redirect it to a file called b.fits on machine2.
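If rsh is not available or not allowed between your machines, ssh can be used in exactly the same way, assuming you have a working ssh login on machine2:

machine1% ccube "a.fits 2 *" /dev/stdout | ssh machine2 "cat > b.fits"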
Unfortunately, it looks like the /dev/stdout device does not
exist on all Unix systems. If that is the case for you, you can still declare
the output file name to be STDOUT. eclipse will recognize this name and
dump all output to the stdout stream instead of saving to a file.
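In that case, the previous example would read:

machine1% ccube "a.fits 2 *" STDOUT | rsh machine2 "cat > b.fits"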