Re: photometry software problem

> It is unfortunate but true in the publishing world that publication
> deadlines do not always permit extensive testing. In reality, after
> several thousand users try every feature in every conceivable way,
> bugs will be discovered/fixed and new features requested/implemented.

This is not entirely true for high-quality programmers. With clear
source code (not comparable to that of DAOPHOT), more than 90% of
problems can be removed by inspecting the source, even before it runs!
This is probably not true for most commercial software, as Arne
suspected.

In a number of cases, simple mathematics can help avoid problems before
they appear. For example, when a convolution is necessary, numerical
integration within a pixel is always a better approximation than a
multiplication of scalars (this was probably the problem identified in
the past with the commercial astrometry software). This kind of
mathematical problem will not be fixed correctly even by testing in the
market; such problems should be avoided before the software is released
in any form.

A more difficult issue with photometry software is that the result is
given as low-dimensional values (in extreme cases, a single scalar for
a given object). From these low-dimensional values it is almost
impossible (and probably provably so) to reverse-analyze the
high-dimensional complexity inside the software. [This difficulty is
similar to the reverse analysis (eclipse mapping) of the eclipse light
curve of a cataclysmic variable -- one can only choose a "most likely"
solution from a huge number of acceptable solutions.] Such a
"beta-type" reverse test will be effective for image-processing
software that produces multi-dimensional results, or for software that
produces star maps. [But has anyone ever seen the entire sky with a
given star-mapping program?
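(To make the convolution point above concrete, here is a minimal sketch
of my own, not from the original post: it compares sampling a Gaussian
profile at each pixel centre -- a multiplication of scalars -- with
integrating the profile over each pixel. The profile width, source
position, and grid size are arbitrary assumptions.)

```python
# Sketch: centre-sampling vs. pixel integration of a 1-D Gaussian PSF.
# All numerical choices (sigma, centre, grid) are illustrative only.
import math

sigma = 0.7    # assumed PSF width in pixels (undersampled case)
center = 0.3   # assumed source position within the central pixel

def phi(x):
    """Gaussian profile with unit total integral, evaluated at a point."""
    return math.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pixel_integral(i):
    """Exact integral of the profile over pixel [i-0.5, i+0.5], via erf."""
    a = (i - 0.5 - center) / (sigma * math.sqrt(2))
    b = (i + 0.5 - center) / (sigma * math.sqrt(2))
    return 0.5 * (math.erf(b) - math.erf(a))

pixels = range(-5, 6)
flux_sampled = sum(phi(i) for i in pixels)                 # scalar sampling
flux_integrated = sum(pixel_integral(i) for i in pixels)   # pixel integration

print(f"central pixel, sampled:    {phi(0):.4f}")
print(f"central pixel, integrated: {pixel_integral(0):.4f}")
print(f"total flux, sampled:       {flux_sampled:.6f}")
print(f"total flux, integrated:    {flux_integrated:.6f}")
```

The summed fluxes nearly agree, but individual pixel values differ at
the several-percent level -- exactly the kind of systematic error that
market testing of a closed-source package would never isolate.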
In theory, the same test is required every time a new version is
released. This would require even more work than writing the software
itself, and bug-fixing at the programmer's end is much more
cost-effective. Arne's paradigm of testing in the market, without
source code, is no longer very effective in modern software
engineering.]

The high rate of errors in chart-plotting software (from various
causes, although the software vendors often want to ascribe the errors
to catalog incompleteness) is sufficient to indicate that there would
be an equivalent number of faults in commercial photometry software.
The one difference is that faults in the latter category of packages
are less likely to become obvious.

> You should not "bad-mouth" a product on an open maillist; go to the
> specific maillist for the product in question and try to get help,
> either in understanding how you are misusing the product, or in
> getting the bug fixed.

This statement therefore misses the point. We are in pursuit of
"better photometry", not of fixing problems with a specific software
package.

> You can test much of the quality of photometry algorithms yourself
> by taking several images of differing exposure length and seeing how
> well faint objects on the shorter exposure images match up with
> their longer-exposure measures.

One of our observers has very recently done this test, and I can
simply say that AIP4WIN failed. I do not know the results for other
packages, but I suspect a similar outcome, judging from the similar
tendency reported in the data.

Regards,
Taichi Kato
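P.S. The exposure-length test quoted above can be made concrete with a
minimal sketch (mine, not from the post): given instrumental counts for
the same stars on a short and a long frame, scale by exposure time and
compare magnitudes. The star list, exposure times, and 0.05-mag
tolerance are all illustrative assumptions.

```python
# Sketch of an exposure-length consistency check for photometry output.
# Counts, exposure times, and the tolerance are assumed example values.
import math

t_short, t_long = 30.0, 300.0  # exposure times in seconds (assumed)
# measured counts per star: (short-exposure counts, long-exposure counts)
stars = {
    "A": (1500.0, 15100.0),
    "B": (820.0, 8150.0),
    "C": (95.0, 1020.0),   # faint star: largest expected discrepancy
}

for name, (c_short, c_long) in stars.items():
    # instrumental magnitudes, normalised to counts per second
    m_short = -2.5 * math.log10(c_short / t_short)
    m_long = -2.5 * math.log10(c_long / t_long)
    diff = m_short - m_long
    flag = "OK" if abs(diff) < 0.05 else "CHECK"
    print(f"{name}: short={m_short:+.3f}  long={m_long:+.3f}  diff={diff:+.3f}  {flag}")
```

In this made-up example the bright stars agree, while the faint star
exceeds the tolerance -- the pattern Arne's test is designed to expose.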
vsnet-adm@kusastro.kyoto-u.ac.jp