One more comment and then I will be quiet, as obviously we are bothering
some observers. I am only cc:ing to vsnet-chat because it was included in
the first postings.

Taichi Kato wrote regarding my comment:

>>This shows some of the problems associated with working at low signal/noise
>>or effectively high sky background. The concept of "average populated"
>>area does not hold, since the stellar measuring aperture is always smaller
>>than the sky measuring aperture/annulus. Therefore, it is less likely that
>>there will be faint contaminating sources.
>
> This is not correct. The probability of contamination per pixel is
> the same in the aperture and in the background sky. This is one of the
> renowned "pitfalls" of novice observers. The systematic effect from the
> sky background measurement is not restricted to stellar contamination.
> Non-uniformity in the background (for various reasons) also plays a role.
> I have seen a number of novice observers who try to find "the darkest
> region" in the same frame to measure the background. This always led to
> systematically brighter object magnitudes. In the most extreme cases, some
> people even try to "find the location", with a cursory movement of the
> background aperture, which eventually gives a positive detection of the
> object. This is clearly wrong.

I don't think we are in disagreement, just describing the effect
differently. The area involved in the sky annulus is usually larger than
that of the star aperture, so it depends on how the sky level is determined
from that annulus. As you say, the probability of contamination per pixel
is the same, but you have many more pixels in the annulus. If you, for
example, use a "star aperture" to measure the sky value in several places,
you will get different results than using a "sky annulus" in several
places, since the value per pixel is determined differently for these two
measurement areas. (A small numerical sketch below illustrates the
difference.)

>>There are a hundred million objects that are bright and uncrowded that
>>a typical amateur can observe; why beat your head against the tough
>>problems?
>
> Because there is a necessity. If every square kilometer on earth were
> covered with a 1 meter telescope dedicated to variable star observing,
> we probably would not need to tackle faint objects ;-). However, an
> observer will encounter almost the same thing when trying to measure a
> 22 mag star with a 1 meter telescope. A better solution is not to avoid
> the problem, but to find the best way to measure such objects close to
> the limit of the instrument. A very good historical example would be
> AL Com. This object is near 21 mag at quiescence. Several observers
> tried (even with the Palomar 5 meter) to find the periodicity in
> quiescence. This history can be easily tracked with the ADS, and you
> will see how the reduction technique affected the results, even with the
> same KPNO telescope. There *are* some people with the best measuring
> technique, and some unfortunately are not: this is even true for
> professional astronomers. There must be a feasible way for everyone,
> unless people intentionally want to avoid learning it.

My point is that not all interesting stars are faint. Why not, in general,
pay more attention to the brighter stars when you have a small telescope?
There will always be exceptions, but the average amateur will have more
success if they stay within reasonable limits with their systems.
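To make that aperture-versus-annulus point concrete, here is a rough
numerical sketch (plain Python/numpy with invented numbers; it is not how
any particular package computes its sky, just an illustration). A single
faint contaminant of a few pixels noticeably pulls up the plain mean of a
couple-dozen-pixel sky probe, while a clipped median over a
several-hundred-pixel annulus that contains the very same pixels barely
moves:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy frame: flat 100 ADU sky with 10 ADU/pixel noise, plus one faint
    # contaminating star (4 pixels at +75 ADU each) near pixel (31, 41).
    frame = rng.normal(100.0, 10.0, size=(64, 64))
    frame[30:32, 40:42] += 75.0

    yy, xx = np.mgrid[0:64, 0:64]

    # Case 1: a small "star aperture" (r < 3, about 25 pixels) dropped on
    # the sky, which happens to contain the contaminant; sky = plain mean.
    r1 = np.hypot(yy - 31, xx - 41)
    probe = frame[r1 < 3]
    print("small-aperture mean sky: %.1f ADU over %d px"
          % (probe.mean(), probe.size))

    # Case 2: a large annulus (8 < r < 15, several hundred pixels) that
    # contains the same four contaminated pixels; sky = clipped median.
    r2 = np.hypot(yy - 31, xx - 30)
    ann = frame[(r2 > 8) & (r2 < 15)]
    clipped = ann[np.abs(ann - np.median(ann)) < 3.0 * ann.std()]
    print("annulus clipped median:  %.1f ADU over %d px"
          % (np.median(clipped), ann.size))

The same four contaminated pixels sit in both measuring areas, so the
per-pixel contamination probability is not in dispute; what differs is how
the per-pixel sky value is extracted from those areas.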
> By the way, this is not the original intention of this thread.
> Measurements with S/N < 3 will be a challenging case for everyone, but
> the discussed problem with commercial software packages frequently occurs
> at a much higher level (S/N of about 10 or even more). This is likely a
> software problem, and needs to be fixed.

Again, I do not disagree that you are seeing problems. I contend that it
is unfair to blame the software without further information.

Arne
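P.S. As a rough yardstick for the S/N numbers above (just the usual
first-order error propagation, nothing specific to any package):

    import math
    # sigma(mag) ~ 2.5 / ln(10) / (S/N) ~ 1.086 / (S/N)
    for snr in (3.0, 10.0):
        print("S/N = %4.1f  ->  ~%.2f mag statistical scatter"
              % (snr, 2.5 / math.log(10) / snr))

so S/N = 3 means roughly 0.36 mag of purely statistical scatter, and
S/N = 10 about 0.11 mag.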