AutoStakkert! V3 vs V4
I was made aware of an update to AutoStakkert!, a mainstay of my astronomical image processing.
I have been using V3 for some years now… V4 is now out.
AutoStakkert! Stacking Software – Lucky Imaging with an Edge – Emil Kraaikamp – AS!2, AS!3, AS!4
https://www.autostakkert.com/
AutoStakkert! is all about alignment and stacking of image sequences, minimizing the influence of atmospheric distortions (seeing). Its goal is to create high quality images of the Planets, the Sun, and the Moon, without too much hassle.
I could not find anything specific comparing v4 to v3, but here is part of the changelog for v4.0.1:
4.0.1 – November 11, 2023
New data browser / selector
Performance optimizations, particularly when processing very large files
I used the latest v3.1.4 (64-bit) and v4.0.11 (64-bit).
The user interface is very much the same. I ran it against a complete night's session of 13 runs and, at first blush, the end results look much the same.
So perhaps it is faster? Let's test that.
I selected an 11 GB .SER file from a 180-second Jupiter run, used settings that were similar if not identical on both versions, and timed them. The file sat on an SSD to minimize drive-speed effects.
Between benchmarks I closed the program and waited for the RAM to be released. This is a 32 GB machine (Intel Core i7-6700 CPU @ 3.40 GHz, Windows 10 Pro 64-bit), and both versions of AutoStakkert! used 24 GB of it while processing. I also ran each version three times, alternating in the order 3, 4, 3, 4, 3, 4, in case caching was skewing the results.
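To confirm the 24 GB really is given back between runs (rather than just eyeballing Task Manager), a small Python helper along these lines could log system memory use. This is only a rough sketch, assuming the third-party psutil package is installed; it is not part of AutoStakkert! itself:

```python
# Rough sketch: log total system RAM use once a second, so you can confirm
# memory drops back to baseline after AutoStakkert! is closed between runs.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

def log_ram(interval_s=1.0, duration_s=300):
    """Print used system memory every interval_s seconds for duration_s seconds."""
    end = time.time() + duration_s
    while time.time() < end:
        used_gb = psutil.virtual_memory().used / 1024**3
        print(f"{time.strftime('%H:%M:%S')}  RAM in use: {used_gb:5.1f} GB")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_ram()
```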
V3 settings: Planet (COG); dynamic; automatic; local (ap)
V4 settings: Planet (COG); dynamic; laplace; noise 5; local (ap)
V3 benchmark 1 analyze: 40 sec
V3 benchmark 1 stacking: 34 alignment points selected; the best 10% of frames stacked and saved as a .PNG; normalize stack 7%; RGB align; no drizzle. 70 sec
V3 benchmark 1 total time: 40 + 70 = 110 sec
V4 benchmark 1 analyze: 25 sec
V4 benchmark 1 stacking: same settings (34 alignment points; best 10% stacked to a .PNG; normalize stack 7%; RGB align; no drizzle). 70 sec
V4 benchmark 1 total time: 25 + 70 = 95 sec
V3 benchmark 2: 45 + 71 = 116 sec
V3 benchmark 3: 43 + 72 = 115 sec
V4 benchmark 2: 25 + 70 = 95 sec
V4 benchmark 3: 27 + 70 = 97 sec
The average of the three V3 runs was (110 + 116 + 115)/3 ≈ 114 sec.
The average of the three V4 runs was (95 + 95 + 97)/3 ≈ 96 sec.
Summary: V4 needed about 16% less time overall (≈96 vs ≈114 sec), with all of the saving coming from the initial analysis stage.
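The same numbers in a couple of lines of Python, in case anyone wants to check the arithmetic (the timings are just the ones listed above):

```python
# Averages and overall time saving, computed from the measured totals above.
v3_totals = [110, 116, 115]   # seconds per run, V3
v4_totals = [95, 95, 97]      # seconds per run, V4

v3_avg = sum(v3_totals) / len(v3_totals)   # ~113.7 s
v4_avg = sum(v4_totals) / len(v4_totals)   # ~95.7 s
saving = (v3_avg - v4_avg) / v3_avg        # fraction of time saved

print(f"V3 avg {v3_avg:.0f} s, V4 avg {v4_avg:.0f} s, time saved {saving:.0%}")
# -> V3 avg 114 s, V4 avg 96 s, time saved 16%
```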
In the near future I will be taking a closer look for any quality issues between the two versions.
After that, the next software head-to-head will be the wavelet processing… I still use Registax v6.1.0.8, and it took a few years to settle on settings I am comfortable with (I actually have a collection of 8 presets that I rotate through on the first image to see which works best). Next up is “wavesharp” v0.2, a much newer package… with a less intuitive user interface 🙂
Wavelet processing itself does not take a lot of time… it is mainly that Registax can only do one file at a time. It can take a good hour to wavelet-process 40-odd runs one by one. It would be great if this new wavesharp did batch processing!
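To show what I mean by batch processing, here is a rough Python sketch that walks a folder of stacked PNGs and sharpens each one. The "stacks/" folder name is made up, and the difference-of-Gaussians boost is only a crude stand-in, not the actual Registax or WaveSharp wavelet algorithm; the point is the loop, which saves all the clicking:

```python
# Rough sketch only: batch-sharpen every stacked PNG in a folder.
# The "stacks/" folder is hypothetical, and the difference-of-Gaussians boost
# below is a crude stand-in for Registax/WaveSharp wavelet layers; it just
# illustrates the batch loop I wish those tools offered.
import glob
import cv2
import numpy as np

def sharpen(img, sigmas=(1, 2, 4), boosts=(1.2, 0.8, 0.4)):
    """Add back boosted detail layers (fine-to-coarse difference-of-Gaussians)."""
    out = img.astype(np.float32)
    base = out.copy()
    for sigma, boost in zip(sigmas, boosts):
        blurred = cv2.GaussianBlur(base, (0, 0), sigma)
        out += boost * (base - blurred)   # boost this detail layer
        base = blurred                    # next layer comes from the residual
    return np.clip(out, 0, 255).astype(np.uint8)

for path in glob.glob("stacks/*.png"):
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    if img is None:
        continue
    cv2.imwrite(path.replace(".png", "_sharp.png"), sharpen(img))
```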
**updated 2024feb14
Hmm… I probably should only compare the analysis stage, as the stacking times were almost identical.
ok..
The V3 analysis times were 40, 45, and 43 sec, averaging 43 sec.
The V4 analysis times were 25, 25, and 27 sec, averaging 26 sec.
V4 saves 43 - 26 = 17 sec on the analysis, and 17/43 ≈ 40%, so the analysis takes about 40% less time.
Put the other way around, the analysis runs roughly 1.65× (about 65%) faster in V4.
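Two ways of quoting the same improvement, to keep the "40% less time" and "65% faster" figures straight (using the averaged analysis times from above):

```python
v3_analysis, v4_analysis = 43, 26   # averaged analysis times in seconds

print(f"time saved: {(v3_analysis - v4_analysis) / v3_analysis:.0%}")  # 40% less time
print(f"speed-up:   {v3_analysis / v4_analysis:.2f}x")                 # 1.65x, i.e. ~65% faster
```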