Re: (fwd) problems with AIP4win

> I have seen, by far, more errors caused by incorrect parameter
> setting and usage than I have by actual algorithm problems. I have
> seen much good photometry out of almost any software package, most
> certainly including AIP4WIN.

Yes, I know there are many instances of incorrect parameter settings, regardless of the software package. Even allowing for this, does anyone have a "correct parameter setting," or a prescription, to produce results as reliable as possible with AIP4WIN (or other software)? I presume that everyone follows the prescription in the AIP4WIN book, but the circumstances do not seem to be ideal.

> The American saying goes: "If it ain't broke, don't fix it."
> Once you produce a software package that does better and is
> available in full source code and in public domain, you may
> have a right to say this, but not before.

I know a number of professional packages that can perform similar, or even better, photometry. Although I don't know whether their source codes are publicly available, DAOPHOT is no longer the unique solution. DAOPHOT was originally designed for extremely crowded fields (e.g. globular clusters), but this is not always the best approach when the field is not so crowded. Have you ever seen the DAOPHOT source code? At a first look (if you are familiar with software engineering), you can quickly guess what factors may have prevented third-party improvement of this package.

> There is no software package that I know of, including my own,
> that does not have problems when you are dealing with low signal/noise
> situations. This is a very tricky regime and I am willing to bet
> that what works for one case will not work for another.

This is not quite true. If the object is sufficiently isolated from nearby stars (as in most CVs), there is a well-established method of extracting "statistically correct" magnitudes. You may search through the ADS and will be able to find the relevant references.
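As a rough illustration of what such a statistically optimal extraction looks like for an isolated star, here is a minimal sketch. The Gaussian PSF helper, the simple Poisson-plus-read-noise variance model, and all function names are my own illustrative assumptions, not the specific implementation referenced above:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Normalized circular Gaussian PSF on a size x size grid (illustrative)."""
    sigma = fwhm / 2.3548  # FWHM -> sigma for a Gaussian
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    p = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return p / p.sum()

def optimal_flux(stamp, psf, sky, read_noise):
    """
    Optimally weighted flux estimate: a linear function of the
    sky-subtracted pixel values, with per-pixel weights proportional
    to PSF / variance, then normalized so that sum(w * psf) == 1,
    which makes the estimator unbiased for an isolated star.
    """
    # Assumed noise model: read noise plus Poisson term (gain of 1 e-/ADU assumed)
    var = read_noise**2 + np.maximum(stamp, 0.0)
    w = psf / var
    w /= np.sum(w * psf)  # normalization enforces E[flux] = true flux
    return np.sum(w * (stamp - sky))
```

With uniform per-pixel noise the weights reduce to being proportional to the PSF itself, so the bright core pixels dominate; this is why a subtle mismatch in the assumed PSF degrades only the variance of the estimate, not its mean, for an isolated star.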
The reason is simple: the best estimate of the flux of the object is a function of the pixel values around the object (most likely a linear function). This can easily be formalized, and the contemporary issue is which weighting factors should be used, or how to determine them. It is known that a subtle change in the weighting factors (which reflect the PSF) will not introduce systematic errors for isolated stars. In crowded fields, this needs to be done more precisely. However, I don't think this kind of contemporary issue is the main cause of the software uncertainties discussed here -- they seem to reside well before the modern (recent, say 10 years) advancement.

> If it is important to use the low signal/noise data, then it is
> best to have an expert, such as Kato-san, reduce all of the original CCD
> frames rather than using results from many different observers and
> reduction procedures.

Ironically, this is true. This is the reason why I must use more than 10 CPUs to reduce observations daily (the cluster of computers looks something like a factory ;-). I will probably have to add a few more... This process works well with the domestic data, but international mailing expenses have made it virtually impossible to make these analyses routine work. I only request the raw data when I have a doubt about the reported results, and have thereby confirmed the software problems I summarized.

Regards,
Taichi Kato