
[vsnet-chat 3659] Re: Overobserving




> >Well, you seem to be very much inclined to
> >character-based data examination ;-)  Almost the same thing can be more
> >easily performed using a GUI light curve viewer/editor.  Just a few
> >mouse clicks are usually necessary.
> 
> Such a GUI device sounds glorious... ...I've never found a fully usable
> one, and don't know how to write one, and so have to compromise on the
> matter with next best attempts.
> 
> Sometimes the sheer size of the data makes things difficult... ...T Cas for
> instance has about 30,000 observations (and that is _excluding_ the
> AAVSO data which I do not have access to!)  Even using AVE, chopping up 80
> years' worth of T Cas data consisting of 30,000 datapoints gives very small
> "windows" on the data from which to make judgements.

   I have one, and gave a demonstration on the VSOLJ Century database
when Janet Mattei visited the VSOLJ Meeting.  There is no practical software
limitation.  It can treat the full combined dataset of AFOEV and VSOLJ
(exceeding two million observations) in one volume.  Navigating through a
long set of observations is easy, and writing out or eliminating "bad"
observations is also easy.  The only problem is that the software was
written more than ten years ago, for the domestic PC architecture popular
at that time, which is now becoming extinct :-(  [The second known problem
could be that this software too quickly produces nightly updates
(recent/mira)...]
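   For illustration only, here is a minimal sketch (in Python, and not the
VSOLJ software itself; the file layout of JD, magnitude and observer code
is just an assumption) of how even a multi-million-point series can be held
in memory and "bad" observations flagged against a local median:

import numpy as np

def load_observations(path):
    """Read JD, magnitude and observer code from a whitespace-separated file."""
    jd, mag = np.loadtxt(path, usecols=(0, 1), unpack=True)
    obs = np.loadtxt(path, usecols=(2,), dtype=str)
    return jd, mag, obs

def flag_outliers(jd, mag, window=30.0, nsigma=3.0):
    """Flag points deviating by more than nsigma robust sigmas from the
    local median computed in a sliding window of `window` days."""
    order = np.argsort(jd)
    jd_s, mag_s = jd[order], mag[order]
    lo = np.searchsorted(jd_s, jd_s - window / 2.0)
    hi = np.searchsorted(jd_s, jd_s + window / 2.0)
    bad = np.zeros(mag.size, dtype=bool)
    for i in range(mag_s.size):
        local = mag_s[lo[i]:hi[i]]
        if local.size < 5:
            continue
        med = np.median(local)
        sigma = 1.4826 * np.median(np.abs(local - med))  # robust scatter (MAD)
        if sigma > 0 and abs(mag_s[i] - med) > nsigma * sigma:
            bad[order[i]] = True
    return bad

A real viewer of course adds plotting and interactive editing on top of
this, but the point is that the data volume itself is not an obstacle.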

> >   If the observer observes by a constant amount above the rest of the
> >observers, the difference can be easily corrected using a constant
> >correction (self-evident).  Otherwise, the cause of the deviation needs
> >to be more rigorously examined.
> 
> Yes, it is self-evident, but as Stan pointed out, it needs to be done on
> almost a per observer per star per night basis... ...though that's up to
> how rigorous the analyst feels they want to be, I suppose ;^)  However,
> your FG Sge comments suggest that just some simple global corrections can
> make quite a difference.

   Yes, some simple global corrections often improve the quality of light
curves quite satisfactorily.  This is the main reason why I suspect
"constant offsets between observers" are the main cause of the "noise".

   Ideally, every observation could be corrected, as in PEP (photoelectric
photometry).  However, as one may easily suspect, the difference in color
terms between observers affects the data more than the change in the color
index of the object does.  This can be confirmed by solving linear
regressions: the constant correction term for each observer (measured from
the nominal "corrected" light curve) can be one to a few times the standard
deviation, which includes all other sources of scatter.
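   As a minimal sketch of what I mean (not the actual procedure; it assumes
NumPy arrays jd, mag and observer codes obs, and takes a simple running
median over all observers as the nominal "corrected" light curve):

import numpy as np

def reference_curve(jd, mag, window=10.0):
    """Running median over all observers, evaluated at each observation epoch."""
    order = np.argsort(jd)
    jd_s, mag_s = jd[order], mag[order]
    lo = np.searchsorted(jd_s, jd_s - window / 2.0)
    hi = np.searchsorted(jd_s, jd_s + window / 2.0)
    ref = np.empty_like(mag)
    ref[order] = [np.median(mag_s[l:h]) for l, h in zip(lo, hi)]
    return ref

def observer_offsets(jd, mag, obs):
    """For a constant-only model, the least-squares offset of each observer
    is the mean residual from the reference curve; the residual standard
    deviation (all remaining sources of scatter) is returned for comparison."""
    resid = mag - reference_curve(jd, mag)
    offsets = {}
    for code in np.unique(obs):
        r = resid[obs == code]
        offsets[code] = (r.mean(), r.std(ddof=1) if r.size > 1 else np.nan)
    return offsets

# Applying the correction is then a per-observer subtraction:
#   off = observer_offsets(jd, mag, obs)
#   mag_corrected = mag - np.array([off[o][0] for o in obs])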

> The rigorous examination to cure such problems just isn't done that much
> anymore... ...I think a lot of organisations have enough trouble just
> finding enough time to get the reports archived in the first place!

   I think the rigorous examination should at least be paid more attention,
since the number of archival observations is always growing, and the later
the cure is applied, the greater the necessary effort becomes [it may not
be the work of the present database manager, but it will be an increasing
burden for the next generation].  The exact records of observations might
be lost in the meantime, or contact with observers might be lost...

> Sebastian has of course
> shown that it is quite possible, but it is very dependent on the quality of
> the sequence, which itself is dependent on the available nearby stars!
> 
> So, I don't think any scheme is better than any other, just different in
> what happens when it is plotted on a folded lightcurve.

   Of course, there is no absolutely superior method.  However, in some
cases one method can be far superior to the others.  The case of UU Aur
that you mentioned could be avoided by an adequate application of the
Pogson method.
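   For those unfamiliar with it, here is my simplified reading of the idea
(a sketch only, not necessarily how the UU Aur data would actually be
reduced): in the Pogson step method the variable is estimated in nominal
0.1-mag steps against a single comparison star, so a full bracketing
sequence of nearby stars is not required.

STEP_MAG = 0.1  # nominal size of one Pogson step, in magnitudes

def pogson_magnitude(comp_mag, steps, step_size=STEP_MAG):
    """Variable's magnitude from a step estimate against one comparison star
    (steps > 0 means the variable was judged fainter than the comparison)."""
    return comp_mag + steps * step_size

# Example: judged 3 steps fainter than a 5.8-mag comparison -> 5.8 + 0.3 = 6.1.
# Estimates against several comparisons, where available, can simply be averaged.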

> Aah...  it seems I am concentrating my thoughts too much in the area of LPV
> overobserving, which is where the original thread set off from, but of
> course things have travelled a bit since then!
> 
> Obviously, from what you say, what is done appears to depend on what sort
> of information you are hoping to gain with respect to what sort of star.

   I just gave an example suggesting that constant personal biases are
likely the major cause of noise.  LPVs have not been a very adequate target
for this analysis, because of their relative undersampling.  [Of course
I could have done the same with LPV data, but they were somewhat outside my
immediate interest at the time of analysis.]

   Obviously, what should be done appears to be a function of what sort of
information observers or future researchers are hoping to gain.  In the
former case, there is reason enough why some observers need frequent
observations for a special (personal) reason, while in the latter case
(future researchers) we may have little information, which does not easily
allow us to eliminate the necessity of frequent observations.

   Personally, I don't want to put much weight on the analysts' side,
but would be more inclined toward the observers' wishes.  If observers feel
excited by nightly watching and reporting the rise of a Mira's light, that
is perfectly okay and a welcome excitement to share.  If any difficulty is
associated with such observations on the analysts' side, the difficulty
can reasonably be removed by the analysts using an appropriate method or
tool.

Regards,
Taichi Kato
