# Brushless Servo Support

Hey folks! Long-time fan of Klipper’s architecture! I’ve always thought the same architectural concept would apply perfectly to controlling servo motors.

That is: do the motion planning in Python on a non-RT host computer and stream the motion profile, ready to run, to the RT MCU on the servo drive.

Custom, well-integrated servos have also been a long-time-coming project for me, but a couple of friends and I have recently managed to prototype these little NEMA17 servos with an integrated driver super cheap (~\$7 for the PCBA). We’re going to see if we can get them manufactured at small scale (100~1000 units) cheaply enough for them to be a viable replacement for stepper motors in hobby applications (e.g. ~\$50 each).

Project URL: atopile / Servo Drive · GitLab

I would like to set about designing and adding servo support to Klipper.

(Sorry, I had to split this in two because the forum said new users aren’t allowed to include so many links in one post.)

For the control protocol, I’m imagining expressing all the paths as splines, or a semantically equivalent piecewise polynomial. This makes it extremely efficient and simple to compute the target position, velocity, torque, etc. in realtime on the servo’s MCU.

These splines can be computed in numerous ways, but perhaps the easiest (and dumbest) is using something like `scipy.interpolate`, specifically `scipy.interpolate.make_interp_spline`, because it means we can set the dimensionality to the number of servos we’re trying to control and the degree to meet our smoothness needs (e.g. jerk control). I’m sure this has issues with cornering accuracy and isn’t optimal, but it might provide a simple starting point.
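For instance, a minimal sketch of that starting point (the waypoints and timings here are made up for illustration):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Waypoints in joint space: timestamps and positions for two servos.
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
pos = np.array([[0.0, 0.0],
                [1.0, 0.5],
                [2.0, 2.0],
                [2.5, 3.5],
                [3.0, 4.0]])

# Cubic (k=3) spline through the waypoints; the dimensionality follows
# the number of columns (servos) automatically.
spline = make_interp_spline(t, pos, k=3)

# Target position for both servos at an arbitrary time.
p = spline(0.25)

# Velocity and acceleration come from the derivative splines, which
# is what enables the feedforward terms downstream.
vel = spline.derivative(1)(0.25)
acc = spline.derivative(2)(0.25)
```

Since the result is a piecewise polynomial, its knots and coefficients are exactly the kind of data the protocol below would stream out.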

Once they’re computed, we can express these paths to the servos’ MCUs as the coefficients of the polynomials plus the start/end time of each piece.

On the MCU we can trivially take all the derivatives of our polynomial required to implement feedforward control like that shown here:
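On the MCU this would be C, but the arithmetic is small enough to sketch here: a single fused Horner pass yields the value and its first two derivatives at once (the coefficient ordering and names are my assumptions, not a defined protocol):

```python
def eval_piece(coeffs, t0, t):
    """Evaluate one polynomial piece and its first two derivatives.

    coeffs are ordered highest degree first; t0 is the start time of
    the piece streamed from the host. A fused Horner recurrence keeps
    this cheap enough for a realtime control loop.
    """
    x = t - t0
    pos = vel = acc = 0.0
    for c in coeffs:
        acc = acc * x + 2.0 * vel   # second-derivative accumulator
        vel = vel * x + pos         # first-derivative accumulator
        pos = pos * x + c           # value accumulator
    return pos, vel, acc
```

For example, for 2x³ + 3x² + 4x + 5 at x = t − t0 = 1 this yields position 14, velocity 16, and acceleration 18.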

The whole article is worth a read: Feedforward in Motion Control - Vital for Improving Positioning Accuracy

We can also easily do a cascading PID control loop as they describe earlier!

I’m posting here looking for 3 things:

1. Collaborators experienced with Klipper
2. Feedback on the design thus far
3. A gauge of excitement about the future of servos!

Stay awesome,

Matt


You might want to check Mechaduino experiment

Nice! Thank you for the link!

Looks like it’s very similar in concept to the suggestion here.
Do you know if the project got much traction? I notice there haven’t been any commits on the branch recently.

@copper280z also sent me this repo doing something almost identical FW-wise with SimpleFOC too: GitHub - Copper280z/Klipper_commander

They said they were able to get up to a 36kHz control loop, which is pretty impressive!

The above-linked “experiment” was never followed up on, since demand for and supply of proper hardware were pretty low back then.

An interested developer would need to pick it up and carry it across the finish line.


Have you been at all involved in its development?

We’ve been working on some BLDC servos that are coming out pretty inexpensive (~\$30, motor and all!). If you’ve got the right experience and you’re keen to join in, perhaps we could send you some once we get them?

Not at all. I just clean up here, but thanks for the offer. Appreciated.

Interesting. I’d be curious what chips you plan to use on the board you are designing.

I have spent a bunch of time on “stepper servo” support in the past. Both in prototyping of “mechaduino support” and in implementation of magnetic hall sensors for precise stepper axis position monitoring ( Configuration reference - Klipper documentation ).

Unfortunately, due to time constraints on my side, I don’t think I’ll be able to contribute to any new efforts in this area in the short term.

I can share a few high-level thoughts that may (or may not) help:

• Consider reading through all the threads that have linked to Mechaduino experiment (as found immediately after the original post at that link). This topic has come up a few times before, and there have been several different ideas on the approach.
• For what it is worth, I’m no longer convinced that a PID position/command feedback loop is a good idea. The concern is that if the motor overshoots a desired target it will result in a print blemish, and commanding a return to the actual desired target will not undo that blemish; it is instead likely to make the blemish more noticeable. This, I believe, is what leads to the “salmon skin effect” that is a common complaint with past servo stepper implementations.
• In contrast, it may very well be worthwhile to build some type of “overall feedback loop” so that, over extended time periods, the printer learns not to “overshoot the target in the first place”. That is a lot of work though.
• If interested in utilizing velocity and/or acceleration in determining motor control, then the best way to do that may be to reuse the existing host kinematic system and calculate the desired velocity/acceleration from the existing `queue_step` commands that are queued in the micro-controller. That is, the Klipper host code already queues upcoming motion in the micro-controller 100+ms prior to it being used. It should not be difficult for micro-controller code to inspect that queued movement to obtain velocity and acceleration.
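As a rough illustration of that last point (plain Python, not Klipper’s actual MCU code or APIs), velocity and acceleration can be recovered from the queued step times by finite differences:

```python
def motion_from_step_queue(step_times, step_dist):
    """Finite-difference velocity/acceleration from queued step times.

    step_times: upcoming step times already queued in the MCU;
    step_dist: nominal (signed) distance per step. Because Klipper
    queues motion 100+ ms ahead, the MCU can difference *forward*
    into the queue instead of differentiating stale history.
    """
    # Velocity over each step interval, placed at the interval midpoint.
    vels = [step_dist / (t1 - t0)
            for t0, t1 in zip(step_times, step_times[1:])]
    mids = [(t0 + t1) / 2.0
            for t0, t1 in zip(step_times, step_times[1:])]
    # Acceleration from successive velocity estimates.
    accs = [(v1 - v0) / (m1 - m0)
            for (v0, v1), (m0, m1) in zip(zip(vels, vels[1:]),
                                          zip(mids, mids[1:]))]
    return vels, accs
```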

Cheers,
-Kevin

Hi,

I’m the author of the repo linked by the OP. I’ve been working on a similar, but separate, effort. I mostly agree with everything you said. I also think that due to the heavy computational demands of high bandwidth FOC motor drives, it’s not a great candidate to integrate into the Klipper MCU firmware. I’m more of the mindset to add Klipper support to the servo firmware.

The driver implementation I’m using is of similar form factor to most of the NEMA17 servo stepper boards. It uses an STM32G431 MCU along with an ST L6226 driver, which I’ve found is pretty happy driving 2.5 A into a NEMA17 in FOC, and an MT6835 21-bit encoder. I’ve also used an STM32F401 with a pair of DRV8876 drivers and a CUI capacitive encoder. The higher-resolution encoder works better in some ways, but there’s no clear winner in my opinion.

I’m also not sure a basic PID loop is sufficient for the servo loop to have enough bandwidth; I think a higher-order controller will make this work much better. My project is based on the SimpleFOC Arduino library, which by default uses a cascaded position/velocity/current PID control loop. This works ok-ish, particularly if you use a velocity feedforward term.

Graphing the following error during a move with velocity feedforward enabled, the constant-velocity portion of the move has approximately zero following error. The constant-acceleration portions have some following error, but the ends of the accel portions have a lot; this is due to the integrators needing to build up some error to get the load moving. This is where an acceleration feedforward term will help.

This is the reason I’ve tried writing my own implementation of the MCU protocol: then I can use the information in the step queue to calculate the various feedforward terms without doing a finite difference on the step/dir interface. It’s incomplete at the moment; I’ve gotten to the point where the MCU is just about configured by the host, but some life things came up and I haven’t touched it in a while. As this isn’t functional yet, I’ve used step/dir and a single finite difference to get velocity. I haven’t tried to get accel from a finite difference, but I assume it’ll be noisy; it still might be an improvement though.
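To make the feedforward structure concrete, here is a toy sketch of one tick of such a cascade (the gains, names, and structure are illustrative, not SimpleFOC’s API):

```python
class PID:
    """Minimal textbook PID, enough to show where feedforward enters."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def cascaded_command(target_pos, target_vel, target_acc,
                     meas_pos, meas_vel, pos_pid, vel_pid, dt,
                     kv_ff=1.0, ka_ff=0.0):
    """One tick of a cascaded position/velocity loop with feedforward.

    With good feedforward the PID terms only correct residual error,
    so the integrators never need to wind up just to get the load
    moving. Output feeds the innermost current/torque loop.
    """
    vel_cmd = pos_pid.update(target_pos - meas_pos, dt) + kv_ff * target_vel
    torque_cmd = vel_pid.update(vel_cmd - meas_vel, dt) + ka_ff * target_acc
    return torque_cmd
```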

Stability of the controller is also impacted by load resonances, so the simple PID controller is heavily restricted in bandwidth by the mechanical resonances. I’ve never designed a real full-state controller, but I’ve wanted to, so I’m going to try it for this. A sort of “learning” controller is an interesting idea, and I actually know of some like that used in industry, but I don’t think it’s the best choice here; the ones I know of restrict themselves to working only for the exact move commands they’ve learned. Instead, I think some form of LQG with integral action is the way to go, but I’m unsure if it’s computationally feasible to implement on the microcontroller. I think I’m just going to have to try and see.

An alternative is to use a notch filter in the servo loop and in the motion planner, but again this reduces control bandwidth.

That’s a lot of text, so to sum up my opinions, as someone who has spent significant time working on this: getting Klipper to move the servo is easy; step/dir works. It even works pretty well in some cases, but having velocity/accel info is very worthwhile. The control system needed to get the servo to match the performance of the stepper, when working within the limitations of open-loop stepper control, is challenging. Once you start asking for move parameters outside the open-loop capabilities, anything that isn’t a print failure is a win.

Interesting.

It should be possible to limit noise by reducing the “step size”. In a “servo” setup, the “distance travelled per step” is a nominal value, as there aren’t actually discrete “steps” taken by the servo. So one should be able to configure a sufficiently small step size to reduce issues with quantization of distances.
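A quick back-of-the-envelope illustration of that (all numbers made up): with velocity estimated by counting steps over a fixed differencing window, the estimate is quantized in increments of `step_dist / window`, so shrinking the nominal step size directly shrinks the quantization error:

```python
true_velocity = 47.0   # mm/s, illustrative
window = 0.001         # s, fixed finite-difference window

for microsteps in (16, 256):
    # 200 full steps and 40 mm rotation distance: typical nominal values.
    step_dist = 40.0 / (200 * microsteps)       # mm per "step"
    n = int(true_velocity * window / step_dist)  # whole steps observed
    v_est = n * step_dist / window               # quantized estimate
    print(f"{microsteps:>3} microsteps: v_est={v_est:.3f} mm/s "
          f"(error {true_velocity - v_est:.3f} mm/s)")
```

With these numbers the coarse step size is off by ~9.5 mm/s while the fine one is off by ~0.125 mm/s, purely from quantization.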

FWIW, if one did want to send the MCU velocity/acceleration information from the host, then the host would need to generate that information from the requested position of the stepper. That is, the host does not itself have access to velocity/acceleration information for each stepper. There is often a misconception that Klipper has this information because it nominally tracks acceleration and velocity for commanded moves. That information isn’t directly useful for calculating the velocity/acceleration of the stepper motor though.

It is common for prints to contain “curve movements”, and slicers generate those movements by emitting many small moves. Klipper analyzes each of these small movements (using nominal velocity and acceleration) and generates the step timing from those requests. It further modifies that timing using “pressure advance” and “input shaper”. The final timing of the step pulses has little relationship to the initial velocity/acceleration - and you don’t want it to - as the final stepper movements are often much smoother than the incoming segmented movements.

FWIW, it might be possible to implement a Kalman filter (and similar parts of LQG) in the host code, and have the host periodically send the resulting parameters to the mcu for it to perform low-latency application of those parameters. That is, it may be possible to split out the low-latency parts of the control algorithm from the cpu intensive parts.
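As a sketch of that split (the model, gains, and noise values below are placeholders, not anything from Klipper): the host solves the expensive Riccati equation once for a steady-state Kalman gain, and the MCU only runs a cheap fixed-gain predict/correct each tick:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Host side: double-integrator servo model, state = [position, velocity].
dt = 1e-3
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt * dt, dt])   # acceleration input
C = np.array([[1.0, 0.0]])          # encoder measures position only
Q = np.diag([1e-8, 1e-4])           # process noise (placeholder)
R = np.array([[1e-6]])              # encoder noise (placeholder)

# Solve the filter DARE once (CPU-heavy, fine on the host) and derive
# the constant Kalman gain; only K needs to be streamed to the MCU.
P = solve_discrete_are(A.T, C.T, Q, R)
K = (P @ C.T @ np.linalg.inv(C @ P @ C.T + R)).ravel()

# MCU side: fixed-gain update, just a handful of multiply-adds per tick.
def kalman_tick(x, u_accel, z_pos):
    x_pred = A @ x + B * u_accel          # predict from commanded accel
    return x_pred + K * (z_pos - x_pred[0])  # correct with encoder reading
```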

Cheers,
-Kevin

Thanks for all the input folks, and apologies I’ve been a little MIA!

We’ve got the next rev of our servo drives, which we’re quite excited about!

They were about \$70 (USD) as pictured, in the 9-off quantity we got, but we’re hoping/planning to get that down so we can sell them for about \$50.

They’re rated 24 V, 50 W / 0.16 N·m (or 3x that burst), with a 14-bit encoder.
The rated torque appears low at first glance compared to most stepper motors, but it’s a much fairer fight once they’re both running at any speed, since the efficiency and shaft power capability of these brushless motors should be significantly higher than a similar stepper’s.

As I mentioned, the design files/repo are all up here.

We’ve got a discord server up here:

We’ve just got them spinning with SimpleFOC. We’d love for anyone who wants to play along to join in!
