I recently contacted Diffraction Limited about adding a "time dithering" feature to MaxIm DL. The purpose of this feature is automated randomization of image sequence intervals. DCDFT periodic analysis, as implemented in VStar, Period04, and other spectrum analysis programs, can distinguish aliases from real signals and circumvent Nyquist frequency limitations if observations are taken at randomized time intervals.
The concept is simple. The user picks a minimum and maximum delay time between images or multi-filter image sequences. The software would generate a random series of integers between 0 and 100, equal in length to the number of images or image sequence repetitions. Therefore, if you are taking a multi-filter sequence of images, say IVBBVI, rather than a series of single images, the randomization would occur between repetitions of the multi-filter sequence. The random sequence of integers would be scaled so that 0 equals the minimum delay and 100 the maximum, and rounded to the nearest 1, 0.1, or 0.01 seconds depending on the time difference between the minimum and maximum delay.
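To make the scheme concrete, here is a minimal Python sketch of the delay generation described above. The rounding thresholds (1 s, 0.1 s, 0.01 s granularity depending on the span) are my assumption about reasonable cutoffs, since the post does not spell them out:

```python
import random

def random_delays(n, min_delay, max_delay):
    """Generate n randomized delays (seconds) between min_delay and max_delay.

    Following the scheme in the proposal: draw integers 0..100, scale so
    0 maps to min_delay and 100 to max_delay, then round to a granularity
    chosen from the size of the delay span.
    """
    span = max_delay - min_delay
    # Rounding granularity thresholds are illustrative assumptions.
    if span >= 100:
        step = 1.0
    elif span >= 10:
        step = 0.1
    else:
        step = 0.01
    delays = []
    for _ in range(n):
        k = random.randint(0, 100)            # 0 -> min_delay, 100 -> max_delay
        d = min_delay + span * k / 100.0
        delays.append(round(d / step) * step)  # snap to the chosen granularity
    return delays

# One delay per repetition of the multi-filter sequence (e.g. IVBBVI):
pauses = random_delays(n=20, min_delay=10.0, max_delay=60.0)
```

In an automation script, each pause would simply be slept off between repetitions of the filter sequence before the next exposure is triggered.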
Currently I know of no way to automatically randomize observation times when imaging except through custom programming to externally trigger the camera.
It would be interesting to know if such a feature is of interest to anyone besides the guy teaching the "Analyzing Data with VStar" course.
Brad Walter, WBY
Regarding the strategy: Have you made artificial data with vs. without these random pauses and run them through VStar, to demonstrate clearly that added pauses deliver the benefits you expect? If the benefits are clear, such a demonstration could be persuasive. (Or if the benefits do not accrue, you've saved yourself a lot of energy.)
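One way to run that test: generate a synthetic sinusoidal light curve sampled twice, once at a fixed cadence and once with random pauses inserted, and feed both into VStar for DCDFT comparison. A minimal sketch (the period, amplitude, cadence, and jitter values are all placeholders for illustration):

```python
import math
import random

def synthetic_lightcurve(n_obs, period, cadence, jitter=0.0, seed=1):
    """Simulate a sinusoidal variable sampled at a fixed cadence (days),
    optionally with a uniform random pause of up to `jitter` days added
    between observations. Returns (times, magnitudes) for export to VStar."""
    rng = random.Random(seed)
    t, times, mags = 0.0, [], []
    for _ in range(n_obs):
        times.append(t)
        mags.append(10.0 + 0.3 * math.sin(2.0 * math.pi * t / period))
        t += cadence + rng.uniform(0.0, jitter)
    return times, mags

# Same underlying signal, regular vs. randomized sampling:
reg_t, reg_m = synthetic_lightcurve(200, period=0.37, cadence=0.25)
rnd_t, rnd_m = synthetic_lightcurve(200, period=0.37, cadence=0.25, jitter=0.1)
```

The expectation to verify is that the DCDFT of the regularly sampled set shows strong aliases at combinations of the signal and sampling frequencies, while the randomized set suppresses them.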
Regarding implementation: inserting pauses would necessarily waste observing time. Two alternative tactics to consider: (1) randomly alter exposure times instead, thus keeping the camera, etc., profitably busy, collecting all the photons possible. The arithmetic to correct observed fluxes for changing exposure times is pretty easy. Or (2) randomize the very sequence of filters, which could have the same effect without time loss, if the different filters' exposure times are not uniform.
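For tactic (1), the exposure-time correction amounts to normalizing counts to a count rate before converting to magnitudes. A sketch, where the zero point value is a placeholder rather than anything from this thread:

```python
import math

def instrumental_magnitude(counts, exposure_time, zero_point=25.0):
    """Convert raw integrated counts from a frame of arbitrary exposure
    time to an instrumental magnitude, by normalizing to counts per
    second first so frames of different lengths are comparable."""
    rate = counts / exposure_time               # counts per second
    return zero_point - 2.5 * math.log10(rate)  # standard Pogson relation
```

With this normalization, a 10 s frame collecting 1,000 counts and a 20 s frame collecting 2,000 counts from the same source yield identical instrumental magnitudes, so randomizing exposure times does not complicate the photometry.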
Your proposal implies that you may have come across a situation in which alias frequencies found by DCDFT are nearly identical to the frequency of your image capture, or differ from the true frequency by plus or minus multiples of your image capture frequency. If that is the case, I would be interested in an actual example of this problem. If that's not the problem, I may have misunderstood the situation.