[vsnet-chat 5919] Re: ResSkySurvey and PIXY
- Date: Fri, 24 Jan 2003 03:58:36 -0600
- To: vsnet-chat@ooruri.kusastro.kyoto-u.ac.jp
- From: Seiichi Yoshida <comet@aerith.net>
- Subject: [vsnet-chat 5919] Re: ResSkySurvey and PIXY
Dear Maciej, Taichi and Doug,
Doug West wrote in [vsnet-chat 5901]:
> This brings up a more general problem, that is, data overload. With a CCD
> camera and a V filter you can generate more observations than you can reduce
> and report in a short time. I have many CCD observations that I don't have
> time to reduce and report. Since we have a lot of clear skies here in
> Kansas, USA I observe often. Unfortunately, I don't always have time to get
> all the observations reduced and reported. Any software that can decrease
> this timely process would be helpful.
Taichi Kato wrote in [vsnet-chat 5902]:
> Absolutely yes. PIXY can effectively help the first step in many cases
> in automatically extracting useful measurement *candidates*. What we need
> is the second-step automatization (and a more basic improvement of the data
> quality by introducing proper catalogs and measuring algorithm, of course).
> This is the very step PIXY should proceed.
Software can only reduce the time it takes people to check all the
automatically measured magnitude data by eye, one by one. I do not
think VSNET will accept fully automated results containing some
errors, even if the errors are extremely few. So we have to check by
eye all magnitude data of variable stars measured automatically by
software before reporting them to VSNET.
Even with perfect software, it takes time to check all the variable
stars in one image, especially in the case of a wide-field image. An
image can contain dozens or hundreds of variable stars, so checking it
may take several hours. That means dozens or hundreds of new images
will be generated while we are checking the data from only one image!
I think this is the "data overload" problem pointed out by Doug West.
I can imagine two ways to check many data at once.
1) If we take many images of the same field with the same
   instruments repeatedly, we can build a light curve from the many
   automated magnitude measurements. Based on the shape of the light
   curve, we can easily see whether they are correct or not.
2) If we obtain very recent observations by others and compare them
   to the automatically measured results, we can pick out (possible)
   errors. Most of the data will be very similar to the observations
   by others, and so can be reported directly to VSNET.
These approaches force the observer to operate in a special way. For
1), we have to choose favorite fields and take images of them
repeatedly. For 2), most of the recent observations publicly
accessible (from the VSNET web site, for instance) are visual, so we
have to observe in the V band.
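The second approach above could be sketched roughly as follows. This is
only an illustrative assumption of how such a screening step might look;
the function name, data layout, and the 0.5 mag tolerance are my own
choices, not part of PIXY or any VSNET tool.

```python
# Hypothetical sketch of approach 2): split automatically measured
# magnitudes into "report directly" and "check by eye" piles, by
# comparing each one against a recent observation by someone else.
# All names and the tolerance value are illustrative assumptions.

def screen_against_references(automated, references, tolerance=0.5):
    """Return (report, check) dicts of star name -> magnitude.

    automated  -- dict of star name to automatically measured magnitude
    references -- dict of star name to a recent magnitude by others
    tolerance  -- maximum acceptable difference in magnitudes
    """
    report, check = {}, {}
    for star, mag in automated.items():
        ref = references.get(star)
        if ref is not None and abs(mag - ref) <= tolerance:
            report[star] = mag   # consistent with others: report directly
        else:
            check[star] = mag    # no reference, or discrepant: inspect by eye
    return report, check

report, check = screen_against_references(
    {"SS Cyg": 8.6, "Z Cam": 11.9, "RX And": 13.4},
    {"SS Cyg": 8.5, "Z Cam": 13.2})
# SS Cyg agrees with the reference; Z Cam is discrepant and
# RX And has no reference, so both fall into the "check" pile.
```

Of course, this only reduces the number of stars a human must inspect;
it cannot remove the human step entirely, for the reasons given above.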
Speaking of the MISAO Project, we cannot adopt these approaches,
because every observing decision, including the time, the field, the
filter, the instruments, and so on, is left entirely to each image
contributor. Under this policy, we have to check all magnitude data
one by one...
Best regards,
--
Seiichi Yoshida
comet@aerith.net
http://vsnet.aerith.net/