
[vsnet-chat 6653] Software problem in time-series photometry




Dear colleagues,

   During the past few months, and notably during the past week, I
have been engaged in clarifying a certain problem in CCD time-series
photometry.  A number of observers have been suffering from this
problem, and after many exchanges of messages and of actual data, I
have reached a conclusion of sorts.

   The conclusion is that AIP4WIN becomes unreliable under certain
circumstances.  The same phenomenon is observed in other popular
photometry software packages, but I would like to draw observers'
attention to it because a large fraction of observers use AIP4WIN,
and the problem can affect wide areas of astronomy.  The problem is
neither the observer's responsibility nor a matter of the observer's
choice of parameters.

   The most striking problem is that AIP4WIN (the other software
packages show a similar tendency, but to a smaller degree) produces
increasingly unreliable results as the object's S/N gets lower.
The degradation is far larger than what is expected for the given
S/N, and the errors are much larger than AIP4WIN's output statistics
would indicate.  Worst of all, the error is not random but
systematic.  This can be confirmed by the unrealistic appearance of
eclipse-like fadings, systematic deviations of the mean magnitudes,
or the systematic appearance of hump-like features.
After receiving these "seemingly unrealistic" data from various
observers, and becoming confident that some software-borne effect was
responsible, I asked a number of observers to send their raw images.
To my surprise, after a formal (standard) re-analysis, these
unrealistic features disappear.  This indicates that the features are
not recorded in the actual images.  This has also been independently
confirmed (when possible) through comparison with other observers'
simultaneous data.  My first conclusion is: do not place too much
trust in your photometry software, and do not place too much trust in
statistics derived from the software's output.
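
   One simple check of that last point -- a sketch of my own, not a
feature of any package -- is to compare the scatter a check star
actually shows with the scatter its reported S/N implies.  The
relation sigma_mag ~ 1.0857/(S/N) follows directly from
m = -2.5 log10(flux):

    import numpy as np

    def expected_mag_error(snr):
        # Photon-noise magnitude error implied by a signal-to-noise
        # ratio: sigma_m = (2.5 / ln 10) / snr ~ 1.0857 / snr.
        return 2.5 / np.log(10) / snr

    def excess_scatter(mags, snr):
        # Ratio of the observed scatter of a (supposedly constant)
        # check star to the photon-noise expectation.  Values well
        # above 1 mean the reported statistics do not describe the
        # real noise in the reduction.
        mags = np.asarray(mags, dtype=float)
        observed = np.std(mags - np.median(mags))
        return observed / expected_mag_error(snr)

For example, a check star at S/N = 20 should scatter by about
0.05 mag; a scatter of 0.2 mag signals something beyond photon noise.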

   The phenomenon is best observed when the object gets fainter (e.g.
at superhump minimum or in eclipse).  A number of software packages
give "fainter than real" magnitudes.  This introduces a systematic
bias into superhump amplitudes and profiles, eclipse depths, etc.
The same is true for observations of fading dwarf novae: the fading
rate is recorded as systematically larger than it really is.  The
loss of coherent variations during the rapid decline phase and the
early post-outburst phase can sometimes be ascribed to this
software-borne effect.  The most discriminating feature of this
phenomenon is that large deviations are predominantly toward values
fainter than the actual brightness; the reverse case rarely happens
(the same holds on a flux scale, not only a magnitude scale).  An
observer would interpret such a result as "below the limit of my
instrument" or "the observing conditions were poor", but in many
cases that is *not true*.

   It is thus recommended to repeat the analysis with a second,
independent software package (we have confirmed that a subtle
adjustment of parameters within the same software does not usually
give a good result).  However, the same systematic deviations have
been commonly observed in many widely available software packages,
though the degree of deviation varies between packages.  It is
possible that these packages share some common algorithm and
therefore tend to give similar deviations, but this remains
speculation, since the actual codes are not usually available.
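
   When two reductions of the same frames are available, the
comparison itself is straightforward; a hypothetical sketch (function
names are my own):

    import numpy as np

    def compare_reductions(t1, m1, t2, m2):
        # Align the second package's light curve to the first's
        # timestamps and inspect the differences; a feature present
        # in one reduction but not the other shows up here directly.
        diff = np.asarray(m1, dtype=float) - np.interp(t1, t2, m2)
        return float(np.median(diff)), float(np.std(diff))

Keep in mind the caveat above: agreement between two packages does
not prove correctness if they share the same algorithm.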

   Among the conceivable explanations, the possibility that "the
software sometimes fails to correctly center the object" looks
likely.  Once the centering fails, the error can be propagated to
subsequent images, resulting in an eclipse-like fading.  If the
effect is less dramatic, it can result in spurious hump-like
variations.  This would explain why "fainter than real" observations
tend to occur more often.  However, this is only one of the
conceivable explanations consistent with the observed results, and
the actual cause may be more complex.
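
   The centering hypothesis is easy to illustrate with a small
simulation (my own sketch, not taken from any package's code):
measure a Gaussian star through a circular aperture displaced from
the true centroid.  A displacement can only lose flux, so the error
is always toward "fainter than real", matching the asymmetry
described above:

    import numpy as np

    def aperture_flux(offset, fwhm=3.0, radius=4.0, size=41):
        # Fraction of a unit-flux Gaussian star's light falling in a
        # circular aperture displaced 'offset' pixels from the true
        # centroid.
        sigma = fwhm / 2.3548
        y, x = np.mgrid[:size, :size].astype(float)
        cy = cx = size // 2
        star = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
        star /= star.sum()
        aperture = (x - cx - offset)**2 + (y - cy)**2 <= radius**2
        return star[aperture].sum()

    for off in (0.0, 1.0, 2.0, 3.0):
        loss = -2.5 * np.log10(aperture_flux(off) / aperture_flux(0.0))
        print(f"centering error {off:.0f} px -> {loss:+.3f} mag fainter")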

   A software provider may explain that such results can occur when
the quality of the data is poor.  But this would be a mere excuse,
since a proper analysis of the same images almost always gives more
acceptable results.

   Knowing that independent confirmation with a second software
package is not always effective, I would advise observers to
"manually reduce" images whenever the result of an automated analysis
looks unusual or discordant.  Please do not discard raw images even
if your initial attempt does not give satisfactory results.  Please
do not put too much trust in the software's statistical analysis when
rejecting selected data points.  And please do not feed too much of
these results back into your observing or analysis technique -- the
software package may well be more responsible than the observation
technique.  And finally, please do not blame yourself, your
instrument, or the weather, and do not be discouraged when the
results are not satisfactory!  We have a lesson from the past in
which an astrometry package introduced similar systematic errors (but
in astrometry the errors can be averaged out by measuring in
different directions; the situation is worse in photometry); the
problem was finally resolved by a careful study and numerical
simulations.  It is not hard to imagine that a similar degree of
imperfection resides in photometry packages -- these may not be
software bugs in the usual sense, but more likely consequences of the
selection or implementation of the algorithm.  When your results look
doubtful or unrealistic, make a manual analysis, and keep your raw
images so that they can be re-analyzed when a more reliable package
appears.  Although there are also pitfalls in manual analysis, this
seems to be the best prescription as of now.  [The best solution
would be to correctly use fully tested professional packages, and to
make a manual analysis when necessary.  One also needs to know the
mathematics involved in photometry packages.  But I must admit that
none of this is trivial, and the learning curve is much steeper than
with commercial packages...]
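
   For reference, here is what such a manual reduction can look like,
as a minimal sketch in plain numpy under my own assumptions (for real
work a fully tested professional package is preferable).  The key
safeguard is re-deriving the centroid on every frame from the image
itself, never from the previous frame's fit:

    import numpy as np

    def manual_photometry(image, x0, y0, r_ap=5.0, r_in=8.0,
                          r_out=12.0, box=7):
        # Assumes the star is not too close to the image edge.
        image = np.asarray(image, dtype=float)
        # 1. Recenter on THIS frame: flux-weighted centroid in a
        #    small box around the guessed position (x0, y0).
        h = box // 2
        xi, yi = int(round(x0)), int(round(y0))
        sub = image[yi - h:yi + h + 1, xi - h:xi + h + 1]
        yy, xx = np.mgrid[yi - h:yi + h + 1, xi - h:xi + h + 1]
        w = sub - np.median(sub)        # crude local background cut
        w[w < 0] = 0.0
        cx = (w * xx).sum() / w.sum()
        cy = (w * yy).sum() / w.sum()
        # 2. Sky level: median of an annulus around the new center.
        Y, X = np.mgrid[:image.shape[0], :image.shape[1]]
        r = np.hypot(X - cx, Y - cy)
        sky = np.median(image[(r >= r_in) & (r <= r_out)])
        # 3. Sky-subtracted aperture sum.
        in_ap = r <= r_ap
        return image[in_ap].sum() - sky * in_ap.sum(), cx, cy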

Regards,
Taichi Kato

