Normalizing accelerometer plots

Capturing accelerometer data on individual printers has been a huge boon for quantifying the effect that changes to a machine have on its mechanical behavior, but I’ve noticed some issues with the way the calibrate_shaper script presents the data that can confuse users.

The fundamental issue is that the Y axis is scaled relative to the data in a particular capture. This makes it difficult to compare plots run-to-run and can be misleading, suggesting problems that aren’t actually there.

An obvious example of this would be a case where the user makes a change that reduces peak resonance. Say the machine starts with, e.g., a clear primary peak at 45Hz with a magnitude of 8k and a smaller peak at 120Hz / 2k. If a change to the machine reduces that primary peak to 4k, the relative scaling can make it appear that there’s suddenly been a huge increase in resonance at the higher frequencies, even though nothing changed there. Helping out a user who was pulling their hair out going down the rabbit hole with exactly this scenario is what prompted me to take a look at this.

The other issue is that the Y axis is scaled based on a unitless value presented in scientific notation. This means that plots not only change relative scale, but can shift by an order of magnitude. While the scaling is presented at the top left of the plot, it’s extremely easy to overlook and I’d wager that most users have never noticed it at all.

I ran plots on a sample data set with a couple of small changes to demonstrate this here.

The unit magnitude is addressed simply by changing:

ax.ticklabel_format(axis='y', style='scientific', scilimits=(0,0))

to:

ax.ticklabel_format(axis='y', style='plain')

I don’t see very much downside to doing this beyond the plots becoming slightly wider. This helps prevent silent unit shifts from providing a misleading picture of how the machine is behaving. This seems like an obvious improvement to me.
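To make the effect easy to reproduce, here is a minimal, self-contained sketch using synthetic data (the peak frequencies and magnitudes are made up to mirror the 45Hz/120Hz example above; calibrate_shaper builds its real figure from measured data):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the script can run without a display
import matplotlib.pyplot as plt
import numpy as np

# Synthetic PSD: a primary peak at 45 Hz (~8k) and a smaller one at 120 Hz (~2k).
freqs = np.linspace(5., 200., 1000)
psd = (8e3 * np.exp(-0.5 * ((freqs - 45.) / 4.) ** 2)
       + 2e3 * np.exp(-0.5 * ((freqs - 120.) / 6.) ** 2))

fig, ax = plt.subplots()
ax.plot(freqs, psd)
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Power spectral density')
# 'plain' instead of 'scientific' keeps absolute tick values on the Y axis,
# so no easily-overlooked order-of-magnitude multiplier appears in the corner.
ax.ticklabel_format(axis='y', style='plain')
```

With `style='plain'`, the Y ticks read `2000`, `4000`, `8000` directly rather than `2`, `4`, `8` with a `1e3` multiplier tucked into the top-left corner.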

Normalizing the Y axis was similarly done in a quick and dirty manner – to be clear, I do not think this is the correct approach for the general-case script, but it was done here to visually demonstrate the issue. In this case, I normalized the Y scaling to 1e5 by adding the line ax.set_ylim(0, 100000).

This one’s a tougher nut to crack since overall resonance can vary so much between printers and axes, so a one-size-fits-all approach won’t work. Setting the scaling to 1e5 works for the data set here, but on a printer with less resonance it would crush the graph to the point of being indecipherable.

My initial thought is that a script parameter that allows the user to specify a normalization scale might work, i.e., the user could tell the script to normalize to 1e4, 1e5, etc based on their printer, but I’m not 100% sure whether that’s the best approach.
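That parameter could look something like the following sketch. To be clear, `--psd-ymax` is a hypothetical option name chosen for illustration, not an existing calibrate_shaper flag:

```python
import argparse

def build_parser():
    # --psd-ymax is a hypothetical option name used for illustration only.
    parser = argparse.ArgumentParser(description="plot resonance data")
    parser.add_argument("--psd-ymax", type=float, default=None,
                        help="fix the Y axis upper bound (e.g. 1e5) so that "
                             "plots from different runs share the same "
                             "absolute scale")
    return parser

def apply_ylim(ax, psd_ymax):
    # Only override matplotlib's auto-scaling when the user asked for it,
    # preserving the current behavior by default.
    if psd_ymax is not None:
        ax.set_ylim(0., psd_ymax)
```

Defaulting to `None` keeps today’s relative scaling for anyone who doesn’t pass the flag, so the change would be fully backward compatible.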

At any rate, this is something I’ve seen trip people up a number of times and being able to generate normalized plots would certainly add value to the accelerometer as a tool. I thought I’d put this out there to see what thoughts folks have on how this could best be done.

While each axis still needs to be assessed individually, I think both proposals:

  • plain vs. scientific
  • command-line option to specify range for normalization

would add value.

Just note that the meaning/interpretation of the power spectral density and its magnitude is not really established, which means it is hard to tell how much worse 1.5e4 is compared to 2.0e4.
I guess it is safe to assume that 0.8e4 is better than 5.0e4.

Ping @dmbutyugin

As a measured and calculated value, it does have meaning, and the units and magnitude should not be disregarded. Because it is a real measurement, it is possible to extract meaning from it. Klipper’s documentation doesn’t explain the units or use of the PSD at all, but I think it would be helpful to add some discussion of it.

The PSD represents the mean square of the acceleration divided by the sampling frequency. So by determining the sampling frequency, you could calculate the measured acceleration at a given frequency. You could also integrate to obtain maximum velocity and distance spectral densities.

The way the resonance test works makes these results harder to interpret, because the commanded acceleration increases linearly with frequency, but with an offset, since the runs don’t necessarily start at 0Hz. That said, knowing all of the above might help you decide whether a smaller PSD at a low frequency is going to be a bigger problem than a larger PSD at a higher frequency.
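As a rough sketch of those integrations, assuming an acceleration PSD in (mm/s²)²/Hz on uniform frequency bins (the data here is synthetic, and a simple rectangle-rule integral stands in for whatever numerical integration one would actually use):

```python
import numpy as np

# Synthetic acceleration PSD: one resonance peak at 45 Hz.
# Units assumed to be (mm/s^2)^2 / Hz.
freqs = np.linspace(5., 200., 2000)
psd = 8e3 * np.exp(-0.5 * ((freqs - 45.) / 4.) ** 2)
df = freqs[1] - freqs[0]  # uniform bin width, Hz

# Integrating the PSD over frequency yields the mean-square acceleration;
# its square root is the RMS acceleration contributed by the whole band.
accel_rms = np.sqrt(np.sum(psd) * df)  # mm/s^2

# Dividing the acceleration PSD by (2*pi*f)^2 gives a velocity spectral
# density, and by (2*pi*f)^4 a displacement spectral density, whose
# integrals give RMS velocity and RMS displacement of the vibration.
vel_rms = np.sqrt(np.sum(psd / (2. * np.pi * freqs) ** 2) * df)   # mm/s
disp_rms = np.sqrt(np.sum(psd / (2. * np.pi * freqs) ** 4) * df)  # mm
```

The displacement number is arguably the most intuitive one for print quality, since it approximates how far the toolhead actually oscillates.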

The real issue is there’s no way of knowing how much of the commanded acceleration results in actual movement of the printhead. Higher acceleration is needed at higher frequencies because the moves would otherwise be too small for the steppers to execute and would be lost to drivetrain losses/backlash. Some kind of constant-acceleration test is needed to get more meaningful data that can be compared within a single run and also across separate runs/machine configurations.

As I’m more of a practical guy, my question is “does it have a meaning that it has a meaning?”

  • Lacking target / nominal values, it is more qualitative than quantitative
  • I can compare two graphs on their spectrum, e.g. “cool, this change has moved the spike from 50 Hz to 75 Hz” → good, higher freq resonances have less impact
  • I can compare two graphs on their spectrum, e.g. “cool, this change has made the spike less broad with fewer residual resonances” → good, since a less aggressive shaper is needed
  • I can compare two PSD values, e.g. “cool, this change lowered my spike’s PSD from 3.0e4 to 1.5e4” → already of pretty low relevance, since it is filtered away anyway, but still good, as apparently my change made the printer better

And even if I knew that a 1.3e4 PSD at 103 Hz contributes 7% to my max ringing but I can’t shape it away, because the shaper is already at 63 Hz, what am I going to do with this information? Where does this resonance come from? What can I do against it?

I agree with your qualitative and quantitative deductions. I think those are the ways most experienced users are interpreting resonance measurements. I do think there is potential to use the data more quantitatively than is currently being done, though. For example: consider a shaper calibration that reveals two resonance peaks whose magnitudes are very low. How low is very low? The way the shaper is used by most right now, everything is relative. But there is more meaning in the magnitude of the PSD.

There is an amplitude that is low enough that the vibrations are negligible and reveal no surface artifacts in a print. I would suggest that this should be the goal of a resonance test: to reveal at what frequencies the resonances of the print head exceed a threshold at which the visible artifacts become unacceptable. Second, the test should clearly reveal which frequency peak will lead to the largest visible artifacts.
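A test built around that goal might report something like the sketch below. Note that both the function name and any concrete threshold value are hypothetical; no calibrated “visible artifact” level exists in Klipper today:

```python
import numpy as np

def frequencies_above_threshold(freqs, psd, threshold):
    """Return the frequencies whose PSD exceeds a visibility threshold,
    plus the single worst offender (or None if nothing exceeds it).

    The threshold is a hypothetical, user-chosen value standing in for
    a calibrated "visible artifact" level that does not yet exist.
    """
    freqs = np.asarray(freqs, dtype=float)
    psd = np.asarray(psd, dtype=float)
    mask = psd > threshold
    worst = float(freqs[np.argmax(psd)]) if mask.any() else None
    return freqs[mask], worst
```

A report along these lines would directly answer the two questions above: which frequencies exceed the acceptable level, and which single peak is expected to produce the largest artifacts.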

With experimentation, it is possible with the current resonance test to approximate these answers within a narrow set of conditions, but across different setups and test conditions it is not. I’ve only looked into the development history of input shaping on Klipper a little bit, so I’m not sure if other methods of auto-shaping/resonance testing are still in the works, but I think there are probably better methods than are being used now. I have an idea that I’m working on, but I’m limited by available time and my knowledge of Python/Raspberry Pi.