% got
% ls
README.Test_Data_for_Codes T_images/ T_runs/
% cd T_runs/thresh
% ls
ex01/ tmp/
% cd ex01
% cp S/* .
% ls
GRAB* MULTI* RUN* S/
% cat GRAB
#!/bin/sh
# Collect image
cp $tdata/T_images/S1/hst4_Sp.fits .
echo "Local test image = hst4_Sp.fits, and in.fits"
cp hst4_Sp.fits in.fits
ls -a
% GRAB
Local test image = hst4_Sp.fits, and in.fits
. .. GRAB hst4_Sp.fits in.fits MULTI RUN S
% ls
GRAB* hst4_Sp.fits in.fits MULTI* RUN* S/
% cat RUN
#!/bin/sh
# Usage: RUN 40.0 1 0.5
GRAB
thresh.sh $1 $2 $3
mv out.fits t_$1_$2_$3.fits
echo '# text(42,89) color=white font='"'helvetica 14 normal roman'"' text={'$1','$2','$3'}' >a
mv a t_$1_$2_$3.reg
% RUN 35.0 1 1.0
Local test image = hst4_Sp.fits, and in.fits
. .. GRAB hst4_Sp.fits in.fits MULTI RUN S
Need local "log.make" for log file.
Writing FITS image: out.fits
% ls
GRAB* hst4_Sp.fits in.fits junc MULTI* RUN* runner* S/
t_35.0_1_1.0.fits t_35.0_1_1.0.reg
This is probably a clear case of overkill, but this example
does illustrate nearly all of the features of running a
Test_Data_for_Codes test. I use the "got" alias to jump
to the test directory and find the section for the thresh
code. Once I decide to use the ex01 case (Example 1), I
"copy up" the three scripts in ./S. The GRAB script is
what will retrieve my input data (in this case the image
named hst4_Sp.fits). As with many of my earliest OTW image
analysis codes (of which thresh is one) the program ALWAYS
takes as input an image named "in.fits", so I have the GRAB
script copy hst4_Sp.fits to that name. Next, we take a look at the
RUN script. We could have run GRAB manually to get our data
file, but we see that RUN is going to do that
for us anyway. RUN will take as input (on the command line)
3 arguments: the threshold level, the size of scan bin, and
the fractional fill level for the scan bin. The descriptions
of the last two arguments are not very clear, so I would
go to my document on thresh to understand a little more
about these. I show
an example of using RUN with 3 selected
arguments (arg1=35.0, arg2=1, arg3=1.0) and then
I show the resultant files that were built. The output
from thresh is always an image named "out.fits", but to
make RUN easier to use multiple times (like in the
script MULTI), I have RUN construct a unique output image name
for each time RUN is executed. In this case, I just build a
name that is comprised of the input argument values. This is
how I built the Figure in my webdoc describing thresh.
As a final point, I also had RUN build a ds9 regions file
(called t_35.0_1_1.0.reg) that I can use when viewing the
result from thresh (i.e. when I do
"ds9 t_35.0_1_1.0.fits &") to display
the thresh arguments I used for the run. I do this by loading
the regions file with the "Regions" button in the ds9 gui.
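The MULTI script itself is not shown in the transcript above. A minimal sketch of what such a driver might look like, assuming RUN behaves as described (the threshold values here are made up for illustration):

```shell
#!/bin/sh
# Hypothetical MULTI sketch: sweep RUN over several threshold
# levels while holding the scan-bin size and fill level fixed.
# The echo is a dry-run stand-in; replace it with ./RUN to execute.
for t in 25.0 35.0 45.0
do
    echo "RUN $t 1 1.0"
done
```

Because RUN builds a unique output name from its arguments, each step of such a sweep would leave behind its own t_&lt;args&gt;.fits and t_&lt;args&gt;.reg pair.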
The execution scripts stored in Test_Data_for_Codes can be very involved, and usually employ lots of other scocode routines. The script for testing the code ccl4 is a good example: I use several routines to prepare an input image from a larger image in the Test_Data_for_Codes/T_images repository. An even more comprehensive script (which allows some flexible user input) is shown below:
#!/bin/bash
# Collect a test image of CIG483
if [ -z "$1" ]
then
printf "RUN 760.0 0.0 \n"
printf "arg1 - threshold level \n"
printf "arg2 - background level \n"
exit
fi
if [ -z "$2" ]
then
printf "RUN 760.0 0.0 \n"
printf "arg2 - background level \n"
exit
fi
printf "\nRunning momcal check with CI483 image\n"
printf "Using input values: \n"
printf "threshold level = $1\n"
printf "background level = $2\n"
printf "Reasonable values: 760.0 0.0 \n"
# printf "\n\nContinue (Y/N):"
# read ANSWER
# printf "\nANSWER = $ANSWER\n"
# if [ $ANSWER == "Y" ]
# then
# printf "\nOkay, I will continue execution of this script!\n"
# else
# printf "\nI will stop.\n"
# exit
# fi
#
# Get the input image
cp $tdata/T_images/S6_cigs/cig483_R.fits t1.fits
getfits -o t2.fits t1.fits 921-1097 925-1034
cp t2.fits in.fits
thresh.sh $1 1 1.0
mv out.fits bitmap.fits
cp bitmap.fits in.fits
#
# Run ccl4, use arg1=1 to get the pass1.fits image
ccl4.sh 1
#
# Rename only the stuff you need
mv pass2.fits mask.fits
#
# Run the momcat code
momcat.sh t2.fits mask.fits $2
#
# Do some cleanup
\rm -f bitmap.fits fitsr2048.header_copy header.info in.fits junc
\rm -f pars.in pass1.equiv pass1.fits play1.out runner t1.fits
\rm -f Unique_label.set
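As an aside, the two usage checks at the top of this script could also be folded into a single test on $#. A sketch of that variant (check_args is a hypothetical name, not part of the script above):

```shell
#!/bin/sh
# Compact variant of the usage checks above: test both positional
# parameters at once instead of checking $1 and $2 separately.
check_args() {
    if [ $# -lt 2 ]
    then
        printf "RUN 760.0 0.0 \n"
        printf "arg1 - threshold level \n"
        printf "arg2 - background level \n"
        return 1
    fi
    printf "threshold level = %s\n" "$1"
    printf "background level = %s\n" "$2"
}
check_args 760.0 0.0
```

Either form works; the single $# test just keeps the usage message in one place.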
As a "super-example", see the scripts in the test directory for momcat. I wanted to be able to show a simple (and fast) example of running momcat. However, I also wanted to be able to run it using different image sizes and threshold levels, and then record the time it took to run:
% pwd
.../Test_Data_for_Codes/T_runs/momcat/ex02_cig483/S
% cat README
RUN = a script that runs momcat
RUN 700 0 1000
700 = threshold level
0 = background level
1000 = linear size of sub-image in pixels
Timer = timing script (to time RUN)
FRUN = runs RUN via TIMER and records various
output data in "Results"
BIG_FRUN = performs multiple FRUN's
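Timer and FRUN themselves are not reproduced here, but the pattern they implement is easy to sketch: time one RUN invocation and append a row to Results. In this sketch, "sleep 1" stands in for the real RUN call, and the source-count column (num) is faked with a dash:

```shell
#!/bin/sh
# Hypothetical FRUN sketch: time one (stand-in) RUN invocation
# and append "thresh bkg size num time" to Results.
thresh=700; bkg=0; size=1000
start=$(date +%s)
sleep 1                 # stand-in for: ./RUN $thresh $bkg $size
end=$(date +%s)
echo "$thresh $bkg $size - $((end - start))" >> Results
tail -1 Results
```

BIG_FRUN then only needs to loop a call like this over the grid of (thresh, size) values.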
Using this approach, I was able to quickly run a test
on my CIG483 R image that gave me the following table:
% cat Results
thresh, bkg, size, num, time
700 0 500 31 0.18
400 0 500 36 0.20
100 0 500 73 0.32
700 0 750 56 0.60
400 0 750 71 0.70
100 0 750 163 1.42
700 0 1000 110 1.81
400 0 1000 144 2.37
100 0 1000 348 5.17
700 0 1500 242 8.20
400 0 1500 302 10.26
100 0 1500 775 24.65
700 0 1992 421 24.08
400 0 1992 535 30.02
100 0 1992 1329 73.15
Hence, I see from this that I can process the entire (1992x1992) image with a low threshold level (thresh=100) in about 73 seconds, and I will locate and measure 1329 discrete sources. Next, I added a couple of new parameters to the momcat code. These rely primarily on locating the "edge" pixels for each set of labeled pixels (for label sets comprised of at least 10 pixels). These parameters do look useful for locating extended sources, but how do they affect processing times? I just re-ran the script set from above:
thresh, bkg, size, num, time
700 0 500 31 0.16
400 0 500 36 0.19
100 0 500 73 0.32
700 0 750 56 0.56
400 0 750 71 0.70
100 0 750 163 1.40
700 0 1000 110 1.71
400 0 1000 144 2.28
100 0 1000 348 5.45
700 0 1500 242 8.48
400 0 1500 302 10.76
100 0 1500 775 23.34
700 0 1992 421 24.54
400 0 1992 535 31.43
100 0 1992 1329 75.09
Here I see that my new parameters add very little to the total processing times for the images.