A bit different resonance test

Today Klipper implements a simple vibration test that tries to excite vibrations at different frequencies. I have a suspicion that this may not work very well in all cases, as the amplitude of vibrations quickly drops below even a single motor step (with typical settings this happens at ~12 Hz). As such, I implemented a similar but slightly different test: the regular vibrations are combined with a sweeping motion, which is executed at low frequency and small acceleration (400 mm/sec^2 and 1.2 sec period). I tested it myself, and in some cases there’s a good match between the tests:

and in some other cases (on a different machine) the test results are quite comparable, but the new test tends to produce peaks at slightly lower resonance frequencies:


Of course, this new resonance test requires more in-depth testing. If you are interested, please give it a try; it is available in this branch. The old test there is available with

[resonance_tester]
method: vibrations
...

which is the default (if method is unspecified), and the new one can be enabled as

[resonance_tester]
method: sweeping_vibrations
...
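To build some intuition for what the new test commands, here is a rough sketch of my own (an idealization, not Klipper's actual move generation, which is not sinusoidal): the toolhead position is a slow, low-acceleration sweep with the parameters quoted above (400 mm/sec^2 and 1.2 sec period), plus a high-frequency vibration whose acceleration grows with frequency via an accel_per_hz-style scaling like the existing test uses:

```python
import math

def sine_amplitude(peak_accel, freq):
    # For x(t) = A*sin(2*pi*f*t), the peak acceleration is A*(2*pi*f)^2,
    # so the displacement amplitude for a given peak acceleration is:
    return peak_accel / (2.0 * math.pi * freq) ** 2

def toolhead_position(t, vib_freq, accel_per_hz=75.0,
                      sweep_accel=400.0, sweep_period=1.2):
    """Idealized sweeping-vibrations motion: a slow sweep (modeled here as
    a sinusoid with the given peak acceleration and period) plus a
    high-frequency vibration whose acceleration scales with frequency."""
    sweep_amp = sine_amplitude(sweep_accel, 1.0 / sweep_period)
    vib_amp = sine_amplitude(accel_per_hz * vib_freq, vib_freq)
    return (sweep_amp * math.sin(2.0 * math.pi * t / sweep_period)
            + vib_amp * math.sin(2.0 * math.pi * vib_freq * t))

# Under these assumptions the slow sweep has an amplitude of roughly
# 14.6 mm, so the toolhead keeps moving continuously even when the
# vibration amplitude itself shrinks to a fraction of a millimeter.
```

The key point is that the sweep term keeps the toolhead in continuous motion, so the small high-frequency vibration is never fighting static friction alone.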

If you want to help with testing the new algorithm, please run both old and new tests with

TEST_RESONANCES AXIS=X OUTPUT=raw_data
TEST_RESONANCES AXIS=Y OUTPUT=raw_data

and post the generated CSV files (e.g. compressed into an archive).

I’m interested in cases where you previously had no issues with the resonance test, and especially in cases where you did have issues (e.g. the resonance spectrogram has lots of peaks, the recommended shaper has a low frequency and a low maximum recommended acceleration, the magnitude of the peaks is around 1e3 - 1e4 in the calibrate_shaper.py script graphs, and so on).


Before

After


@ZeyHex Thank you for sharing the results. I also got a few people providing the results offline (left is before, right is after):

So, oftentimes the two methods produce similar and comparable results. However, there are situations where the current method from mainline Klipper produces bogus results (with low-amplitude vibrations all over the place) and the new method produces more believable results that, most importantly, give shapers that actually work. My current theory is that if an axis has high friction (e.g. binding rails or complicated belt setups with many pulleys), then the current resonance test may be unable to overcome the stiction force and move the toolhead at higher frequencies, since at higher frequencies the commanded amplitude of vibrations can be a fraction of a millimeter, and that results in a spuriously low detected amplitude of vibrations. The newly proposed test lacks this defect, since it moves the toolhead slowly with small acceleration and produces high-frequency vibrations on top of that motion, thus eliminating the stiction force during the test completely. @Sineos FYI, since you’ve looked at it previously and made a post about potential issues with printer mechanics.

So, I’d hope the new test gets a bit more testing, and I would like to integrate it into Klipper mainline, at least behind a configuration option at first, but with the intent to replace the current test.

True. I had this effect actually on my own printer and that’s how I stumbled upon this after working on the linear rails and then doing another resonance test.

Actually, I think this is a highly useful “feature”.

Also, notably, at high frequencies the amplitude of vibrations falls below even a single full microstep, and based on the data gathered in this thread:

it seems that the forces a stepper produces depend on the relative rotor position within the step (basically, on the exact microstep-to-microstep movement). So, the results with the current mainline test may even vary depending on the test position and luck.
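To put rough numbers on this, here is an idealized estimate (my assumptions: a sinusoidal test motion, a typical 0.2 mm full step, i.e. 40 mm rotation distance with a 200-step motor at 16 microsteps, and the default accel_per_hz = 75 mm/sec^2/Hz; the exact crossover depends on the machine and on the actual, non-sinusoidal test moves):

```python
import math

FULL_STEP_MM = 40.0 / 200.0       # assumed: 40 mm rotation distance, 1.8 deg motor
MICROSTEP_MM = FULL_STEP_MM / 16  # assumed: 16 microsteps

def vibration_amplitude(freq, accel_per_hz=75.0):
    # Idealizing the test motion as a sinusoid whose peak acceleration
    # is accel_per_hz * freq: amplitude = peak_accel / (2*pi*f)^2,
    # which simplifies to accel_per_hz / (4*pi^2*f) -- falling as 1/f.
    return (accel_per_hz * freq) / (2.0 * math.pi * freq) ** 2

for f in (30, 60, 120, 240):
    print(f"{f:3d} Hz: amplitude {vibration_amplitude(f):.4f} mm "
          f"(full step {FULL_STEP_MM:.4f} mm, microstep {MICROSTEP_MM:.4f} mm)")
```

Since the amplitude falls off as 1/f, at high enough frequencies the commanded motion under these assumptions shrinks below a single microstep, which is where the rotor-position effects above would come into play.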

As a heads-up, I went ahead and created a PR for this feature:

Given the solid theoretical motivation and explanation provided here, and the test results from various printers, I think it’s worth delaying it a bit more and then providing it as the default, with the old technique as a backup, to be removed in a couple of years if nothing justifies maintaining it.

You developed a more robust and accurate test which in theory should not have drawbacks, and which in practice really doesn’t, so why should it be optional?

My idea is not to leave the new test method optional, but to eventually change the default and then maybe even delete the current test method. However, the need to install a different branch from a different GitHub repo seems to be a substantial blocker for many people who could otherwise give the new method a try. As is evident even from this thread, not many people responded to the request to test the new method, and I had to reach out to most of those who did via separate channels. Thus, integrating it this way will give more people an opportunity to test it, and then, once we are confident that it is strictly better than the current method, the default can be changed.


Is there a way that this could be merged with the Klippain Shake’n’Tune?

I’m not the author of Klippain Shake’n’Tune, and it has its own code to move the toolhead for resonance testing, so I’m afraid this thread is the wrong place to ask for that.


Hi Dmitry, you asked us to attach CSVs, but I can only post ready-made graphs, because the CSVs are removed automatically. Here is the default method, then yours, the new one, with fully stock parameters, on my Voron 2.4:




I see that the new method produces more fluctuations than the previous one with default parameters, and I would not say that anything has changed in my case.
For my part, I would like to ask you to consider speeding up the default process. The usual resonance method works faster, and it also works stably at hz_per_sec = 5 and even 10. On my machine, it works stably even at 100 Hz/sec. This, of course, is not critical, since the measurement is performed quite rarely, but still, why slow it down? I also didn’t like that completion percentages are now printed instead of the measured frequency; it would be good if both were output to the console. Thanks.

I’m not sure I understand this feedback. What do you mean by ‘fluctuations’? I see that the results are almost the same between the old and the new test.

I’m not sure that this would work well in the general case. Note that Klipper by default processes windows of data shorter than 1 second, and if the base frequency changes dramatically within a window, due to windowing effects I’m not sure the results will be stable and reliable. However, since the controls are available, and you believe the accelerated test works well for you, you are very welcome to increase hz_per_sec (and it should also work for the new test).
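As a back-of-the-envelope illustration of the windowing concern (the 0.5 sec window below is an assumed figure for illustration; the actual analysis windows are simply under 1 second): the frequency span the sweep covers within one analysis window grows linearly with hz_per_sec, so at aggressive sweep rates the excitation smears across many FFT bins instead of staying nearly monochromatic:

```python
def freq_span_per_window(hz_per_sec, window_sec=0.5):
    # Frequency range the swept excitation passes through during one
    # analysis window of the given length (seconds).
    return hz_per_sec * window_sec

for rate in (1.0, 5.0, 10.0, 100.0):
    print(f"hz_per_sec={rate:5.1f} -> {freq_span_per_window(rate):5.1f} Hz "
          f"covered within one window")
```

At hz_per_sec = 1 the excitation changes by only about half a hertz per window, while at 100 it sweeps through tens of hertz, which is why the per-window spectral estimates may become less reliable.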

The amplitude has increased, for example along the X axis from 5e4 to 1e5. This is more than the possible measurement error. Other users, whose charts you attached above, see the same situation.

Well, it’s worth checking out how it works, thanks!

Does this new method require modification of the microcontroller code, or only host code?
I’m hoping to port it to the older Klipper version 0.10 my printer uses.

Only the host code is changed.