Too often I've run into sequences which seem to be internally inconsistent. By that I mean that the reported value of the check star (using VPHOT in all cases discussed here) - which is itself a sequence star - is often off by a half magnitude or more. It depends on the comp star(s) chosen, which led me to produce the little table below from last night's measurements in the field of V642 Cas.
          116      112      110
116        --   -0.417   -0.507
112     0.417       --   -0.091
110     0.507     0.09       --
The value at the head of each column is the sequence star used as the comp star while the others are used as check stars.
The rows are headed by the check star being measured and the values in the row are the (measured value - catalog value) for each comp star.
At first blush, it would appear best to use 110 and 112 as comp and check stars (in either order), but how are we to understand that either of these misses the value of 116 by half a magnitude if all of the stars have "good" photometry? Using more than one sequence star in an ensemble leads to big uncertainties in any of the reported targets - which is what led me to this investigation as I was trying to figure out how to improve my reported errors (S/N is NOT a problem).
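(As an aside, for anyone who wants to reproduce this kind of cross-check on their own data, here is a minimal sketch - not part of VPHOT - that builds the same sort of offset table from instrumental and catalog magnitudes. The labels are reused but the magnitudes are made-up placeholders, not the real V642 Cas values.)
[code]
# Hypothetical sketch (not VPHOT) of the cross-check table above.
# Labels and magnitudes are placeholders, not the real V642 Cas values.
inst = {"116": -6.532, "112": -6.917, "110": -7.121}   # instrumental mags
cat  = {"116": 11.607, "112": 11.257, "110": 11.059}   # catalog (sequence) mags

labels = ["116", "112", "110"]
header = "check\\comp"
print(f"{header:>10} " + "".join(f"{c:>9}" for c in labels))
for check in labels:
    cells = []
    for comp in labels:
        if comp == check:
            cells.append(f"{'--':>9}")
        else:
            # differential photometry: check star measured against this comp
            measured = cat[comp] + (inst[check] - inst[comp])
            cells.append(f"{measured - cat[check]:9.3f}")
    print(f"{check:>10} " + "".join(cells))
[/code]
The off-diagonal entries come out antisymmetric, just as in the table above, because each pair of stars disagrees with the catalog by the same amount in opposite directions.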
Jim Roe [ROE]
Hi Jim,
Share one of those images with HQA, and I'll take a look at it.
Arne
[quote=roe]
Too often I've run into sequences which seem to be internally inconsistent. By that I mean that the reported value of the check star (using VPHOT in all cases discussed here) - which is itself a sequence star - is often off by a half magnitude or more. It depends on the comp star(s) chosen, which led me to produce the little table below from last night's measurements in the field of V642 Cas.
          116      112      110
116        --   -0.417   -0.507
112     0.417       --   -0.091
110     0.507     0.09       --
[/quote]
I have long suspected that this kind of problem is the reason why, despite CCD photometry potentially having much greater accuracy, the scatter between different observers is similar to that of the visual observations!
Looking at V642 Cas, I see two issues:
1. There are 2 of each of the 110, 112, and 116 comps listed in the photometry table. The two 110's have extremely different colors, B-V = +1.86 and +0.29, for example.
2. The roundoff error from using a single digit after the decimal artificially adds up to 0.05 mag of "error" (for example, a comp with a catalog value of V = 12.448 that is labeled 12.4 carries a 0.048 mag offset before you even start).
Comp stars should be shown on the chart with 2 digits after the decimal, and the B-V shown in () too. This would greatly assist observers in choosing them and reporting estimates without having to waste time looking the stars up in the table - especially when there are multiple stars with the same "label" and you have to figure out from the coordinates which one you are using. Not fun :(
Thanks,
Mike LMK
Another possible case: I'm seeing V725 Tau about 0.2 V magnitudes fainter in my PEP than ROE sees with the CCD. My comp star is SAO 77331.
http://www.aavso.org/lcg/plot?auid=000-BBJ-814&starname=V725+TAU&lastda…
Tom Calderwood
CTOA
Tom,
You are confronted with a situation where your observations differ with another observer.
At this stage, even though you did frame it as a question, no evidence has been presented that there is an issue with any specific sequence comp star (more about how observers should deal with sequence problems later).
I will make some general remarks regarding possible explanations, followed later by some remarks on what to do when an observer identifies a problem with one or more comp stars.
Observers do differ for varying reasons (and some of these differences make it very hard on those who want to use the data because there is no apparent explanation).
Some of the reasons for differing observations, some of which can cause greater or lesser separations in reported data:
1) Different comp stars
2) Large gaps in magnitude between the comp star and the target star
3) Failure to properly calibrate images
4) Saturation of comp and or target star
5) Mistaken target star
6) Mistaken comp star
7) Close doubles to target and or comp stars
8) Mismanagement of the Aperture size
9) Differing air masses at the time of observations
10) Using visual chart magnitudes instead of the sequence 3 digit values
11) One observer using an ensemble while another uses only a comp and check star
12) Sampling problems, i.e., one or both observers may be undersampled, which in some cases can result in spurious data (oversampling is OK).
13) Incorrect data entry of comp values into a photometry program
14) Clouds/fog/poor seeing and other weather problems for one or both observers
15) Poor Sequence Data being used by one or both observers
Tom, you could also mail HQ and request that the other observer get in touch with you so that you both can make an attempt to resolve the observed differences and see if you get a response.
Sequences in general:
Some of the sequence data is quite old and needs updating (which is being done by one of our team members) as better calibrations occur.
Some of the sequence data is based upon potentially questionable calibrations from some of the catalogs. Sometimes, even though the source catalog may generally be quite good, there will be occasional entries that are simply human error or that suffered from poor conditions when gathered. This is mostly addressed when observers bring specific problems to the attention of the sequence team.
Some of the comp stars initially chosen turn out to be variable. These discoveries must also rely upon individual observers to bring them to the attention of the sequence team.
OK, how do observers bring specific sequence problems to the attention of the sequence team?
Simple really, file a CHET (questionable comps, errors or simply an inadequate sequence):
http://www.aavso.org/chet-help
And if you want to observe a star that does not have a single comp star available then please go here and make a request for a sequence (please read all the directions):
http://www.aavso.org/request-comparison-stars-variable-star-charts
Tim R Crawford, CTX,
Sequence Team
Mike,
Including all of that information on charts would make them very busy and would often obscure parts of the chart you want to see. I don't like printing out the photometry tables or flipping back and forth between screens, either. Sure, you have to look at the RA and Dec of the stars in the chart, but it is normally easy to distinguish between same-label stars simply by which is more east or west, north or south. However, by the time you put all the information you need from the table onto the chart, the chart is no longer useful for the purposes it is primarily intended for - identifying targets and comps in the field of view, and distinguishing them from other nearby stars with which they might be confused or blended.
Brad Walter
When one reads Tim's list, it is amazing that there is any consistency between observers' data.
Obviously we all want all the precision we can get, but there are practical limitations, from the sequence through processing, that in the real world require some balancing between perfection and getting data in useful volume at all. However, one of the biggest areas for divergence, based upon my efforts to QC my data and compare it to others', falls in the area of which sequence stars are chosen.
For some stars, with lots of observers using a variety of techniques and equipment, these things can average out; but where there are only relatively few observers involved, and particularly where there is an active campaign underway, it seems a shame that it is not easier for people to coordinate so as to minimize such factors and compare results, with the goal of achieving a higher degree of conformity and consistency between observers.
On many occasions, I have experimented with different K (check) stars in order to obtain apparent alignment with other observers. Just the other day I ran a single set of images five different times using different comp stars and apertures in order to get apparent alignment with another's data, and was never happy with the results.
Based upon the foregoing, I wrote a comment to the Council two days ago suggesting that a voluntary, self-administered (members enter their data as they choose) online membership directory be created. It seems silly to have to work in the dark relative to other members working the same star, or to take staff time to get permission to share email addresses of other members in a membership organization.
I am working a list of about 170 stars. For some I am the only active observer, and for a few there are many, but for most I have an unwitting partner or two known only by an observer code. It sure would be nice to be able to email them easily; being able to do so would doubtless improve our data and perhaps lead to some added efficiencies and coordination.
Jim,
A couple of months ago I discovered that I also was having internally inconsistent, comp-star-dependent results, for AS 270. You may recall there was some discussion about the large scatter in the CCD data for this star and, further, upon examination there seemed to be something of a bi-modal, observer-dependent distribution to recent measurements. So I decided to see what I would get for this star. What I got was a lesson in diagnosing a problem in my flats.
I certainly am not asserting that you have a flat problem. I am only suggesting something you might want to check. I assumed that you are using the 110, 112 and 116 comp stars that are closest to the target - the ones that would be contained in a 30' FOV centered on the target. The spatial relationship among the comp-star distances from the target reminded me of my flat problem, assuming the target is located near the center of the FOV: 110 has the greatest radial distance, 112 is slightly less than 110 but still a significant distance from the center, and 116 would be very close to the center, if the assumptions I have made are valid.
Perhaps you have guessed where I am going with this. My problem was that my new super-duper light box had a built-in radial gradient (now corrected).
I am reasonably sure that color variations in comp stars would not cause the degree of difference you are seeing. I started to blame my problems on color as well, until I thought about the values one normally expects for transformation coefficients. Like your differences, the differences I was seeing seemed to be an order of magnitude larger than I would expect from large color differences among stars. Also, the differences in my measurements did not correlate well with the colors of the comps. They did correlate well, however, with the difference in radial distance of the stars from the center of the FOV. See the attached plots. These were made using untransformed measurements, so there is some variation due to color as well as radial distance, but the correlation with relative radial distance is obvious.
So, it is just a suggestion, and you may already have done this, but you might try imaging a single star in, say, a 3 x 5 or 5 x 5 grid pattern over your camera's FOV and see how the raw magnitude varies with position in the flat-corrected images. It would be great if by some chance you haven't changed camera position, and could manage similar focus, so you can use the same flats as you used for the V642 Cas images in question.
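If it helps, here is a minimal sketch of how that grid test could be reduced: fit the star's raw magnitude against its radial distance from the frame center and look for a significant slope. The pixel positions, magnitudes, and the 2048 x 2048 frame size below are assumed placeholders, not anyone's real data.
[code]
# Sketch of reducing the grid test: does the raw magnitude of one star
# depend on radial distance from the frame center, as it would with a
# radially graded flat? All numbers are assumed placeholders.
import numpy as np

positions = np.array([(256, 256), (1024, 256), (1792, 256),
                      (256, 1024), (1024, 1024), (1792, 1024),
                      (256, 1792), (1024, 1792), (1792, 1792)], dtype=float)
raw_mag = np.array([12.012, 11.988, 12.015,
                    11.990, 11.975, 11.994,
                    12.018, 11.992, 12.020])

center = np.array([1024.0, 1024.0])           # assumed 2048 x 2048 frame
r = np.hypot(*(positions - center).T)         # radial distance in pixels

slope, intercept = np.polyfit(r, raw_mag, 1)  # linear trend, mag per pixel
print(f"gradient: {slope * 1000.0:.3f} mmag per pixel of radius")
print(f"center-to-corner shift implied: {slope * r.max():.3f} mag")
[/code]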
Brad Walter, WBY
[quote=WBY]
.......So, it is just a suggestion, and you may already have done this, but you might try imaging a single star in, say, a 3 x 5 or 5 x 5 grid pattern over your camera's FOV and see how the raw magnitude varies with position in the flat corrected images.
[/quote]
I had thought that it could be a flat-fielding problem, and I can investigate more closely, but I see no problem in most of my fields. For example, the field of M67, where I do my transformation measurements, has calibrated stars pretty much across my FOV, and I see no scatter there on the order of what I'm seeing in some of my fields.
I'm not suggesting this is a wide-spread problem but rather I'm wondering if some sort of protocol could be set up whereby suspicious fields could be documented sufficiently to warrant a more detailed look by the experts. I would think validating the flat fielding would be part of this. Maybe it could be a part of CHET?
Well, Jim, you were ahead of me. I thought you might be, but flat fielding is so important and is so frequently the cause of error that I thought it was worth a shot. It might still be worthwhile to check these particular normalized master flats against ones you know are good just to see if there is some kind of overall gradient.
Did you use the comp stars I assumed you used? I checked SeqPlot to see what the data were for these stars, and the measurement errors all seem pretty normal; all have 4 or 5 observations. See the attached CSV file.
I have no other suggestions except possibly a focusing issue, but it would have to be so severe as to be obvious. Also, I assume that the exposures are long enough that neither scintillation nor shutter speed is significant, and that there is no chance the camera shifted between images and flats.
As usual, advice is worth what you pay for it, but only if you are careful and selective.
Brad Walter
Let's begin by asking why I get some of these posts twice - can this be rectified?
I had thought that all the northern sequences had been determined by APASS, but this seems not to be the case. Are some of these values from what are merely compilations of randomly measured stars? In that case the scatter is not surprising - look at some of the spreads in the old USNO catalogue of photoelectric measures. Then I noticed comments about stars with B-V of ~0.2 and ~1.8. Unless substantially reddened, the latter is likely to be variable, perhaps on a longer time scale than anyone has checked.
I noticed that Arne asked to check one image. Was this done?
I use some of the B and V values for stars in which I have an interest and notice some discrepancies, often with particular observers. Some of the smaller 2-5% ones will arise because of the variety of reduction techniques used - it would be good to see a standard method adopted - but others appear inexplicable, they're so large.
Oddly enough, I found colour photometry with a CCD more difficult than PEP at a time when I was doing some sequences for Albert. But that's not enough to explain the errors quoted when this topic was raised initially. Which leads back to the question - how good are the sequences? Which are from APASS - which should be homogeneous - and which are not? Do these problems affect southern fields?
Regards,
Stan Walker
Stan,
Foremost, let me state that what follows is my perspective. I do not speak for HQ (Council Members, Director & staff) nor other members of the sequence team.
There are a number of survey’s that have produced excellent calibrations, both those some years old and some that are newer, and there is no reason to replace them. There are also some of those surveys that should be replaced with APASS or other surveys and this is done on an individual basis as they are identified on a case by case basis.
I am aware that there are some observers under the impression, as you apparently were, that APASS data would automatically replace all existing sequences, and further that APASS is the final word in calibration. APASS, which has had a lot of Director, staff and volunteer time invested in it, has excellent calibration in most cases; on the other hand, there are some specific FOVs for which APASS is not necessarily the best option. It depends on the range of the target and the number of nights on which APASS was able to complete calibrations, as well as the fact that a minority of previous APASS calibrations occurred on nights where the weather was less than ideal (as time goes on, APASS data will automatically be updated as more nights become available).
APASS was a mammoth undertaking to calibrate the whole of the sky in both hemispheres - probably the first survey effort that has managed to progress almost to completion (there are still a few gaps remaining); but you need to remember that it too, like many other surveys, has a defined range in which its data are valid.
I think we should all be quite indebted to our Director, Arne Henden, for the conception and implementation of the APASS project, as this history-making project will serve the professional community as well as our own for generations to come.
"Are some of these values from what are merely compilations of randomly measured stars?"
I am less than certain as to what you are implying or asking here.
If you look at the bottom of any Field Photometry chart you will note a sequence source reference for each data point in the sequence. Some may be more reliable than others. Some may have been used for data outside of their accurate range. Analysis always has to be done on a case by case basis.
Sequence Team members have invested thousands and thousands of volunteer hours in getting us to where we are today; our Director, staff and other volunteers have also invested thousands and thousands of hours (FYI, the APASS scopes are not the only HQ-operated instruments used to calibrate specific FOVs); and the job is not finished, nor probably ever will be.
Each of us on the sequence team chooses the best-fit stars from the best available data that exists at the time of the chart's creation, according to team guidelines; but there are times when we have to go outside of our own guidelines because there are simply no other options if we are to meet an observer's need for a sequence (sometimes the only option is to choose colors less than ideal, because that is all the creator, however you perceive one, put there for us to use, and sometimes that is all the sequence data, at the time, will allow for).
Are all the sequences on file the best they can be at any given time? The answer to that is a resounding no; however, those FOVs with sequences in need of improvement are a shrinking quantity and are in the minority of all sequences. Are they being updated when brought to the attention of the sequence team, on a case by case basis? Yes, if we are at all able to.
However, the sequence team cannot deal with generalities. As a rule, the sequence team does not have the ability to study specific FOVs to see if any bad data have slipped through or whether a chosen comp might turn out to be variable. We do have at least one member who tries to set aside time to hunt for sequences with recognizable problems, but this is a slow and time-demanding process that does not always catch potentially bad data, nor does it permit identification of comps that eventually turn out to be somewhat variable.
As I keep saying, for the most part, we really have to depend upon the observer to bring specific comp/sequence problems to our attention, on a case by case basis, when recognized within their own data and observations before we can take corrective action.
I have mentored and worked with dozens of observers, and it is my observation that of all the explanations for why one observer's data is not in agreement with another's, the least likely is a poor sequence; the most frequent explanations, from image and equipment study, are undersampling and saturation, followed by any one of the other 15 or so that I previously listed in this thread.
"How good are the sequences?"
Probably about as good as they could be at the time of their creation, within the limits of the survey and the available material (stars).
"Which are from APASS?"
Those that are so identified by an examination of the Field Photometry data.
"...which should be homogeneous..."
To answer that question you have to examine the Field Photometry data… if multiple surveys are involved, then obviously the data are not homogeneous. If just one survey is referenced, then the presumption is that the data are homogeneous to the limit of that survey's valid range and available material (stars).
"...and which not?"
See the previous answer. It is also a smart idea to examine the uncertainty associated with each sequence value. Sometimes larger values indicate a potential problem, either with an individual survey or with the data for that specific FOV, and possibly that some of the data may have been outside of the survey's range of reliable magnitudes.
"Do these problems affect southern fields?"
My responses apply to both hemispheres.
All observers: please file a CHET if and when you have concrete data to support something more than "observer A's data does not agree with observer B's data", etc.:
http://www.aavso.org/chet-help
The sequence team will respond to the limit of what is currently available, and will monitor cases where alterations are not possible while waiting for future data to become available.
Tim Crawford, CTX
Sequence Team, Mentoring Team
As I mentioned at the beginning of this thread, I'd like to see the image that ROE used to create his table of discrepant measures. He can share it with HQA via VPHOT, or upload it, but I have stayed out of this discussion until I can see such an image.
Arne
Hi Tim
I'd like to carry on this discussion but not in this ponderous manner. My email is:
astroman@paradise.net.nz
Amongst other things a long time ago I helped set up PEP sequence determinations at Auckland Observatory for the RASNZ VSS. I'm presently organising some projects through Variable Stars South and would like more information.
Regards,
Stan
This thread has well illustrated the difficulties of obtaining accurate measurements when a large number of potential sources of error exist, such as the (at least) 15 factors identified by Tim (CTX).
Given the large variations in observers' equipment (optics/sensors/filters), techniques of reduction (various software, algorithms, flats, apertures, etc.), and observing conditions/atmospherics, just to name a few, the prospect of controlling all these variables among a large number of independent observers seems quite a challenging task indeed!
Not that we shouldn't try to improve photometric techniques, but given the realities of such an undertaking in its entirety, the best solution may be to just invoke the "law of large numbers" (LLN), that is, to reduce the standard error of measurement (SEM) by simply increasing the number of observations. Given the previously mentioned large number of factors which may affect observational accuracy, it becomes practical, and statistically valid, to just lump all such factors for all observers together and consider them independent random error.
The LLN then states that the SEM decreases as the inverse square root of the number of observations. This effect has been well documented and used in the long-term variable studies by visual observers over the history of the AAVSO. By using a large number of observations, statistical power is increased to the point where the SEM becomes just several hundredths of a magnitude. The accuracy of a large number of observations well exceeds that of any individual observer's capability. For example, averaging the results of a sufficiently large number of visual observations will be more accurate than a single typical CCD observation!
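For what it's worth, a toy simulation (invented numbers, assuming purely independent random errors of 0.25 mag) shows the 1/sqrt(N) behavior directly:
[code]
# Toy illustration (not real AAVSO data): with independent random errors of
# sigma = 0.25 mag, the standard error of the mean falls as sigma/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_mag, sigma = 9.30, 0.25   # assumed "visual-like" per-observation scatter

for n in (1, 10, 100, 1000):
    # repeat the "average of N observations" experiment 10000 times
    means = rng.normal(true_mag, sigma, size=(10000, n)).mean(axis=1)
    print(f"N={n:4d}  empirical SEM={means.std():.4f}  theory={sigma/np.sqrt(n):.4f}")
[/code]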
Best of all, this approach is easy to implement, by simply encouraging observers to observe much more often, and let the myriad sources of errors "average themselves out".
Mike LMK
First let me move something that was the end of Tim's second email up front:
If observers feel that a sequence is bad please file a CHET report at
http://www.aavso.org/chet-help
You will be amazed how quickly the sequence team will respond if they are able to improve the situation. Sometimes there just isn't the photometry to improve the existing sequence.
Beyond that, random errors combine as sqrt(N), so repeated observations reduce the random error of the mean by 1/sqrt(N). Non-random systematic errors combine as N, and repeated observations will not reduce them. Most if not all of Tim's list are systematic errors that are non-random in nature. If you average Arne's observation of V339 Del with mine, you will just wind up with an estimate that is less accurate than Arne's and more accurate than mine. However, where we overlap, we will still probably be within 0.05 mag of each other.
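A quick illustrative simulation (invented numbers: a 0.03 mag per-frame random error and a constant 0.05 mag systematic offset) makes the distinction concrete - the average converges to the offset, not to the true value:
[code]
# Illustrative only, with invented numbers: averaging N frames shrinks the
# random scatter as 1/sqrt(N) but leaves a constant systematic offset intact.
import numpy as np

rng = np.random.default_rng(1)
true_mag = 9.300
random_sigma = 0.030      # assumed per-frame random error
systematic = 0.050        # assumed constant offset (e.g. a bad comp value)

for n in (1, 10, 100, 1000):
    obs = true_mag + systematic + rng.normal(0.0, random_sigma, n)
    print(f"N={n:4d}  mean - true = {obs.mean() - true_mag:+.4f} mag")
# The mean error settles on the 0.05 mag systematic offset, not on zero.
[/code]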
I think that sometimes researchers are forced to average. On the other hand, a researcher may realize that Arne is a much more experienced (trustworthy?) observer than this yahoo Jones and give Arne's observations much more weight... especially if he wants four-color transformed observations. On the other hand, if he needs a high-cadence, long time series, he is stuck with me (and others).
So the answer isn't to encourage CCD observers to observe much more often so their data can be averaged. It is to encourage observers to improve their technique, improve their equipment, and observe targets that are appropriate for their skill level and equipment. It isn't easy, it takes time and can cost money, and you never get where you want to be.
It really isn't all that much different than improving visual observing.
Jim Jones
[quote=jji]
Beyond that, random errors combine as sqrt(N), so repeated observations reduce the random error of the mean by 1/sqrt(N). Non-random systematic errors combine as N, and repeated observations will not reduce them. Most if not all of Tim's list are systematic errors that are non-random in nature. If you average Arne's observation of V339 Del with mine, you will just wind up with an estimate that is less accurate than Arne's and more accurate than mine. However, where we overlap, we will still probably be within 0.05 mag of each other.
...
So the answer isn't to encourage CCD observers to observe much more often so their data can be averaged. It is to encourage observers to improve their technique, improve their equipment, and observe targets that are appropriate for their skill level and equipment. It isn't easy, it takes time and can cost money, and you never get where you want to be.
[/quote]
I figured someone would get into the details of my statistics a bit, Jim! Yes, it's true that if the majority of most observers' error is systematic rather than random, there will be little to gain from just having those observers observe more. The underlying requirement of statistical analysis, as typically used, is that the errors are random.
Now, I do not know what percentage of a typical CCD observer's error is random vs. systematic. If it's mainly systematic, as you said, repeated observations won't improve accuracy much. If it's mostly random, however, more observations would help.
I really should have said "encourage more observers to do more observations". My feeling is that since most CCD observers tend to work independently, using different equipment, observing conditions, and techniques, adding more observers (even if systematic error is the primary factor) would achieve a sufficient "random effect", since you would expect different observers to have somewhat different systematic errors. A random sample of those observers would then introduce enough randomness to allow the errors to reduce by 1/sqrt(n), or maybe more like 1/n^0.3 or 1/n^0.4, since there is some commonality of hardware/software.
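To illustrate that point with a toy model (invented numbers, assuming each observer's systematic offset is an independent draw with 0.05 mag scatter), the pooled mean over M observers does still tighten roughly as 1/sqrt(M):
[code]
# Toy model with invented numbers: if each observer carries an independent
# systematic offset (0.05 mag scatter), pooling M observers shrinks the
# scatter of the combined mean roughly as 1/sqrt(M), unlike repeated
# observations by a single observer.
import numpy as np

rng = np.random.default_rng(2)
sigma_sys = 0.05

for m in (1, 4, 16, 64):
    # many trials, each averaging the offsets of m independent observers
    pooled = rng.normal(0.0, sigma_sys, size=(20000, m)).mean(axis=1)
    print(f"M={m:3d} observers  scatter of pooled mean = {pooled.std():.4f} mag")
[/code]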
So, I think adding more CCD observers (even if they are not experts), rather than having the existing ones just observe more (without improving any of their methods), would be the best and easiest way to reduce the error bars. Because the task of improving most CCD observers enough to eliminate the major systematic errors may be just too much to expect will ever happen in reality!
Mike LMK
Hi all,
looking at the finder chart for AE UMa, I found that one of the comp stars is an ASAS-SN variable (see attachment).
It should be excluded from the sequence, I think.
Best regards,
Max
ASASSN-V J093634.32+440745.2 is actually a DSCT (12.961-12.98), according to the VSX... thanks for bringing this to our attention. I have suspended the label and it will no longer appear on a chart.
Ad Astra & Good Observing,
Tim Crawford, Sequence Team
Sebastian Otero has reviewed the VSX data and determined that this star is not a DSCT, that the VSX entry was in error, and that the star is not variable; it has been reclassified.
Per Ardua Ad Astra,
Tim