New proposal for Klipper "extension" support

I have been working recently on support for an official “Klipper extension system” (aka an official “plugin” system). This has been discussed a few times before (for example, Possible Klipper "plugins" instead of macros?).

This is still in a “proposal” and “development” state. However, I do have an initial software implementation. Klipper patches at:

And an example extension at:

This code is in a very early state. The main goal at this point is to test the high-level concepts.

There are two major goals of the extension system:

  1. Make it easy for users to find, install, version, and update common Klipper enhancements provided by the community.
  2. Define a clear programming interface between Klipper and these extensions to reduce the chance of errors. In particular, to reduce the chance of things breaking as new software updates become available.

There are two initial targets for the extension system: a) make it possible to batch up “macro config includes” into an easily deployed and versioned extension; and b) to make it possible for an extension to define new G-Code commands that are implemented in a general purpose programming language (ie, not limited to Jinja2). As time goes on, the goal will be to continue to add new API calls so that additional functionality can be implemented in an extension.

On the technical side, there are envisioned to be four major components to this proposal:

  1. Changes to Klipper to identify extensions and start them if requested.
  2. Example “extension code” so that developers can rapidly implement a new community extension.
  3. A new “global listing repository” that extension developers can submit their community extensions to.
  4. A tool to help users identify extensions available in the “global listing repository” and make it easy for them to install and/or update them. Ideally this tool will be directly available from the Klipper GUIs.

The “klipper patches” link above has initial support for the first component. A new mechanism has been added for Klipper to find and launch locally found extensions, and new API calls have been defined (see the docs/API_Server.md file). Documentation is limited at this point, as this is still a “work in progress”.
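For readers who have not looked at docs/API_Server.md before, the documented wire format is JSON request/response objects sent over a Unix domain socket, with each message terminated by an ASCII 0x03 byte. A minimal client sketch (the socket path is just an example; it is whatever path Klipper was started with via the -a option):

```python
import json
import socket

def encode_request(method, params, req_id):
    # Each API server message is a JSON object terminated by an 0x03 byte.
    return json.dumps({"id": req_id, "method": method,
                       "params": params}).encode() + b"\x03"

def send_request(sock_path, method, params, req_id=1):
    # Connect to the Unix domain socket Klipper was started with (-a option),
    # send one request, and read until the first complete frame arrives.
    # A robust client would match response ids instead of taking the first frame.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    sock.sendall(encode_request(method, params, req_id))
    data = b""
    while b"\x03" not in data:
        data += sock.recv(4096)
    return json.loads(data.split(b"\x03")[0])

# Example (requires a running Klipper started with -a /tmp/klippy_uds):
# info = send_request("/tmp/klippy_uds", "info",
#                     {"client_info": {"name": "demo"}})
```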

The “example extension” repo listed above is the very beginnings of the second component. More work is needed here. The current example code interacts with Klipper to append the config snippets in test_extension/config_snippet.txt to Klipper’s main “printer.cfg” file.

The third and fourth components are still in a discussion phase. I hope to gather feedback on these components over the next several weeks.

-Kevin

8 Likes

Looks interesting, a few thoughts:

Because the section name is used to find the extension, there is both a chance of conflicts and no way to specify multiple sections for one extension. Some common host modules today leverage this pretty heavily, such as tmc_autotune.

The extmgr assumes that the python interpreter is at extdir/extname/bin/python. If it were created there using virtualenv defaults, the kernel would not reuse mapped pages, and there would be N copies of the interpreter resident. Symlinked venvs can help reduce this; in the simple case, where an extension has no requirements beyond those needed by klippy, reusing klippy’s own interpreter (and venv, if applicable) may be an alternative.

The extmgr has no way to get information about a module besides loading it. Given how far back klippy will work (py 3.3 as of this writing), some additional metadata would help, such as required python versions, minimum klipper versions, and/or capability flags (if those are implemented later). This metadata could also contain a specification for building the venv, allowing a future tool to better manage them (and possibly integrate with moonraker’s update_manager system).
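As a sketch of what such metadata could look like, here is a hypothetical manifest check. None of these key names come from the actual proposal; they only illustrate the idea:

```python
import json
import sys

# Hypothetical manifest format for the metadata idea above -- these key
# names are invented for illustration, not taken from the proposal.
EXAMPLE_MANIFEST = json.dumps({
    "name": "test_extension",
    "min_python": [3, 6],
    "min_klipper": "v0.12.0",
    "capabilities": ["gcode_commands"],
})

def python_ok(manifest_text, version_info=sys.version_info):
    # Refuse to load an extension whose interpreter requirement is unmet,
    # without having to import the extension's code first.
    manifest = json.loads(manifest_text)
    return tuple(version_info[:2]) >= tuple(manifest["min_python"])
```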

2 Likes

Some thoughts:

I’m going to use the term “plugin” to refer to an addition to Klipper, whether an extra or an extension.

  • Since extensions are outside of the klipper directory, the klippy.log wouldn’t report a “dirty” version. Would this be a concern, since a broken extension could still break Klipper?
  • Extensions look great for simpler, non-intrusive plugins, like my KlipperMaintenance.
  • For more complex, intrusive plugins like Happy Hare, or my DynamicMacros, the old “extras” system seems better. Extras allow for deeper control of Klipper, which is crucial for some plugins.

Overall, I think this is a good first step to a better plugin system for Klipper, starting with simpler plugins.

1 Like

The proposal here would be for the external code to interact with the main Klipper code through a defined API. That is, the code would run in its own process, and be much less likely to interfere with timing, memory usage, and similar. The external code would only have access to defined API calls, so no real risk of internal changes to Klipper breaking external code. If something does go wrong, there should ideally be sufficient logging to rapidly determine who should be contacted. So, there would not be any particular reason to mark the code as “dirty” in the logs.

Modifying code in the Klipper repo itself is notably different (this includes “adding” code). Code modified/added in this way can easily destabilize the Klipper code (eg, invalid use of threads, stressing the memory garbage collection system, pausing the main processing, consuming excessive bandwidth, etc.). If things do go wrong, there is little ability to quickly identify what code is at fault - it quickly leads to “finger pointing” as people search for “someone else to solve the problem”. There is also no way to realistically manage internal changes in such a way that external code modifications do not break (as we have no way to even know what the external changes may depend on).

Cheers,
-Kevin

3 Likes

I have read the above code several times over the past year. I have also read several “extras” and sometimes helped people write one. These are my thoughts.

I think the perfect Klipper extension was and is Moonraker. It shares nothing with Klipper except the API protocol (no shared CPU time, memory limits, cgroups, Python interpreter, etc.).
It would be “cool” if more extensions were written this way, but for some reason it is not happening.

I think one of the reasons was and is complexity. It is easier to copy-paste code, mimic patterns in the current “extras”, and do a similar job.
I do not own or maintain any plugin, and if I did, I would have to be pretty clever and motivated to do it the way Moonraker is implemented.

The current proposal solves half of the issue: how to run the plugin and integrate it with the config semi-automatically. It does inevitably introduce more memory usage and resource sharing (cgroups/CPU shares within a cgroup). A Python process cannot share all the memory from its libraries; each process has its own RSS, and I doubt it will always be small.
This is okay; I doubt it is worth it to hack around cgroups and systemd.

I do not have thoughts about the plugin manager or repo system; what we have now seems to work. People write their own installers, and Moonraker manages updates and runs update hooks.

The other half of the problem is a lack of code that works with the Klipper API (of course, there is Motan, and some folks do know how to do it, but I’m sure we can count those people on our fingers).
It is somewhat unfair to force people to use the JSON RPC API when 99% of the existing code for Klipper does not use it.

As far as I know, most current “plugins” expect to run within the klippy venv, simply because they are “simple” and ascetic (as klippy is) and do not require anything fancy.

So, based on my thoughts above, I think it may be better/simpler to:

  • Define extensions within klippy. Let it be klippy/extensions plus a .gitignore entry.
  • Use the Python interpreter that runs klippy to launch the subprocess extension.
  • I can suggest, and commit myself to, migrating some “extras” code to the new subsystem.

I think an extensions folder within klippy would solve the chicken-and-egg problem of where to place it, why, and who decides.
It would also allow migrating some ad-hoc built-in extras to the new extension subsystem.

Reusing Python should make everyone’s job easier. If an extension does require its own venv, it could install some trampoline code that does the necessary exec into the target venv (e.g. Shake&Tune, which uses scipy).
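The trampoline idea could look something like the following sketch; the function names and directory layout are invented for illustration, not taken from any existing plugin:

```python
import os
import sys

def reexec_target(venv_dir, current_exe=None):
    # Decide whether this process should re-exec itself under the
    # extension's own venv interpreter.  Returns the interpreter path to
    # exec, or None if we are already running inside that venv.
    current_exe = current_exe or sys.executable
    target = os.path.join(venv_dir, "bin", "python")
    if os.path.realpath(current_exe) == os.path.realpath(target):
        return None
    return target

def trampoline(venv_dir):
    # Hypothetical entry point: if klippy launched us with its own
    # interpreter, replace this process with the venv's interpreter,
    # keeping the original command line arguments.
    target = reexec_target(venv_dir)
    if target is not None and os.path.exists(target):
        os.execv(target, [target] + sys.argv)
```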

To make things fair for all parties, and to get an actual test flight along the way (and probably find and solve resource issues), I can think of migrating some ad-hoc commands to the new subsystem (in order of increasing complexity), specifically:

Hope at least part of it sounds sane.

Thanks.


And after that, it would be possible to say, “There is a working, production-ready example, please take a look.”

If it is required, I could probably guess which existing “plugins” should work with this. (I would guess most, except MMU/Custom Eddy boards for now).

1 Like

I like the idea but I’m not sure that it targets one of the core problems (at least in my view):

  1. The “native” code extensions have every possibility to introduce instability, and quite a few take this opportunity.
  2. Many extensions do require this native interface because they access low-level Klipper methods and code.
  3. A “locked down” API would ensure that extensions cannot do “something stupid”, but at the same time it would also limit what they can actually accomplish.

For me, the big question is not where they put their code. The big question is how Klipper can be safely extended, without compromising core functionality and without creating pointless discussions about whom to blame, while at the same time providing enough flexibility to “do cool things”.

1 Like

The proposal I made here last year for a new “extension” system has, unfortunately, stalled. One reason is a lack of time on my part, but another reason is that I now have some concerns with my previous proposal. A big concern is that completing the proposal would take a lot of work and I’m unsure other developers would widely utilize it. That is, I fear there’s a real possibility that ad-hoc modifications to the code would continue to be the preferred method to distribute changes even if the original extension proposal was implemented.

For what it is worth, I have been thinking about an alternate implementation of “extensions” over the last few months. This idea involves internally reworking Klipper “extras” into two different abstractions: “hwmods” (or “hardware modules”) and “prmods” (or “printer modules”). This would be an internal rework and at its completion there would no longer be any support for the current “extras” system.

A “prmod” would be very similar to a current “extras” module:

  • It would be auto-instantiated if a corresponding printer.cfg section is found. It would have standard access to a “config” object, access to the main “printer” object, access to a “reactor” object, and similar.
  • Most G-Code commands would be implemented in a “prmod”. This includes commands like BED_MESH_CALIBRATE, PID_CALIBRATE, DELTA_CALIBRATE, and similar.
  • It would implement g-code macros, g-code templates, and implement typical Jinja2 macro processing.
  • For those familiar with the code today, a “prmod” would largely correlate with the current PrinterXXX classes in the code today.

However, it would have some notable differences from the current “extras” system:

  • It would not directly communicate with the micro-controllers nor handle any “hard real-time” deadlines. That is, it can’t call mcu.lookup_command(), some_mcu_command.send(), nor similar. It would not track printtime of events, and would not schedule events to occur at specific micro-controller times. In order to talk to the micro-controller or to implement a schedule, a prmod would need to instantiate a hwmod and then request the hwmod to queue an action.
  • Prmods would run in a separate unix process from the code running hwmods. Communication between the two processes would be via messages.

In contrast, a “hwmod” would be code focused on handling micro-controller communication and converting high-level actions into a schedule of real-time deadlines. For those familiar with the code today, this would largely correspond to the current MCU_xxx classes - for example: MCU_pwm, MCU_I2C, MCU_endstop, MCU_stepper, etc. It would also include the toolhead class, the kinematics classes, and similar classes that schedule events with a well-timed deadline.

In order for this to be successful, it would require a convenient way to send messages between unix processes. My current thinking is that it should be possible to statically declare API functions and then automatically generate Python “veneer” objects that translate calls into messages. So, for example, if the hwmod MCU_SPI code declared spi_transfer() as an API method, then ideally a prmod could do something like `self.spi = hardware_manager.alloc_spi()` followed by `some_result = self.spi.spi_transfer("some_data")`. That is, ideally, the vast majority of code would not need to know that it is running in a separate unix process.
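To make the “veneer” idea concrete, here is a rough sketch of how attribute lookups could be translated into messages. All names here are hypothetical, and a real implementation would be generated from static API declarations and cross a process boundary rather than call a local function:

```python
import itertools

class ApiVeneer:
    # Sketch of the "veneer" idea: attribute lookups return callables that
    # serialize the call into a message, hand it to a transport, and return
    # the reply.  The transport here is just a callable so the sketch is
    # self-contained; in the proposal it would be an inter-process channel.
    _ids = itertools.count(1)

    def __init__(self, object_name, transport):
        self._object_name = object_name
        self._transport = transport

    def __getattr__(self, method_name):
        def call(*args):
            msg = {"id": next(self._ids), "object": self._object_name,
                   "method": method_name, "args": list(args)}
            return self._transport(msg)
        return call

# In-process stand-in for the hwmod side of the message channel:
def fake_hwmod_transport(msg):
    if msg["method"] == "spi_transfer":
        return {"response": msg["args"][0][::-1]}  # pretend reply
    raise KeyError(msg["method"])

spi = ApiVeneer("MCU_SPI", fake_hwmod_transport)
result = spi.spi_transfer("some_data")
```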

There are some advantages to maintaining a split between hwmods and prmods:

  • Once “prmods” are successfully running in a separate unix process, it should be straightforward to run a “prmod” outside the main repo. That is, ideally an “extension” would mostly just be a “prmod” and thus able to do the same things a “prmod” could do. It would also, ideally, be straightforward to migrate a “prmod” to/from an “extension” as needed. Also, there would be plenty of examples of “prmods” that a new “extension” developer could look at to get started.

  • Placing all the real-time code in a separate unix process should help improve overall stability. Currently, there are a lot of ways that Klipper code could result in hard-to-debug errors like “timer too close” - for example, by stalling the main thread, causing unexpected Python garbage collection, consuming high cpu usage, or not being thread safe when called from the serialhdl background thread. Collecting all the real-time code together will ideally make it easier to audit that code and reduce the chance of failures. This should make the system more scalable, as most code won’t need to worry about threads or real-time deadlines.

  • The current “extras” directory has over 130 code files in it today. This has become unwieldy. In the process of moving to prmods and hwmods we could, ideally, introduce directories that better segregate functionality.

Of course there are some notable disadvantages to this idea:

  • It’s even more work than a simple “extension” system. Given constraints on developer time this may not be feasible. However, ideally we’d be able to migrate modules over a relatively long time frame, and thus avoid any kind of “big bang” conversion.

  • Introducing an additional abstraction does add code complexity. This could make it harder for new developers to learn the code and harder for maintainers to maintain the code. In particular, it is conceivable that some new future functionality might require new mcu code, new hwmod code, and new prmod code. That could get tedious. It is also likely that a migration may involve splitting current “extras” modules up into different “prmods” and “hwmods” where today the functionality happily resides in a single module.

  • This proposal pretty much ensures that extensions would need to be written in Python (and be GPLv3). That is, proper handling would likely require notable amounts of background code (eg, the “reactor” and “API veneer” code). Thus, it would likely be very challenging to implement an extension in another programming language.

Anyway, in closing, these are just some recent thoughts I’ve had on the topic of “extensions”. This is a very “loose” idea and nothing is “written in stone”. Thoughts?

-Kevin

1 Like

If this ends up being the route chosen, I completely agree that a slow transition is ideal, especially after the recent significant motion queue changes that broke both Happy Hare and AFC software for a couple weeks until the devs could update with the changes. Also, I really appreciate you sharing this idea publicly to get feedback and not just going ahead immediately.

2 Likes

This is exactly the point and a frequent source of “philosophical” discussions. My view here is that the updates did not break these extensions; rather, the extensions broke because they use low-level functions in a way that was never meant / endorsed / documented.

And this is exactly the dilemma, creating the need for complex work to support a flexible and powerful extension system.

For popular extensions and hardware, it would be much better to come up with a proposal on how to extend Klipper officially, e.g., an “MMU API”, instead of targeting their “closed ecosystem” and running behind changes in the official sources.
Finally, most users either do not understand or do not care about the intricate dependencies; they just want a stable system that just works. Having to fear that carelessly hitting the update button renders a printer unusable for the weeks to come seems unfavorable.

Just my 2 cents.

1 Like

So my thoughts are: what exactly would you consider not low-level functions? I completely agree that having an extension / plugin / whatever system would be beneficial, and having some sort of API would be great. Most systems have (as I believe was mentioned above) some sort of internal API that is documented but subject to change at any time, and then a more “user”-facing API to support modifications, etc.

I know for AFC in particular, fixing the software after one of the updates was possible in < 24 hours. However, there was another PR hanging about, regarding motion flushing I believe, that just got merged, and it was decided (on our end) not to push 2 patches to users, in order to maintain as good a user experience as possible.

I think a large majority of the larger Klipper modifications would be happy to adjust their software if changes happened in a logical and documented way.

While I do agree that creating an API would be a good long-term goal, I still think developers will take some time to fully move to it if they want to support older versions of Klipper (v12) and Kalico. Since a lot of people use Kalico, I think it would be a good idea to work together so that both Klipper and Kalico have the same API, as this would make it easier to develop a plugin that works seamlessly on both.

But until such an API is developed, I think a good short-term solution would be to add function comments to the code base, as this can help guide developers “hooking” into the code base to do it properly, instead of everyone looking through the code and trying to figure out how to “hook” in themselves, which can be done improperly and cause “timer too close” errors or other weird errors.

And as a developer of the AFC-Klipper-Add-On, I was not too bothered by the breaking changes that were pushed out; it’s the nature of this kind of development, especially since we are developing outside the norm of what was originally expected of Klipper. When this happens it is easy to let people know to roll back to a certain commit and not update until a fix is pushed out. But the other side of this is that people come directly to this Discourse asking for help, since they think it’s a Klipper problem, instead of going to the respective Discord channel or other help spot for their installed plugin when these errors occur.

I certainly sympathize with the MMU developers (and users). It’s useful hardware and it needs quality software support.

As the Klipper maintainer, I rarely hear about MMU stuff until something breaks. That’s unfortunate.

I’d definitely like to see MMU support be “first class” in Klipper. I guess my question is - what needs to be done to make that happen? How can we change the mainline Klipper code to better support MMUs? Who is ready to do the majority of that work?

There are certainly some possibilities I can think of:

  • Submit a PR to mainline Klipper supporting all common MMUs. This is certainly challenging given the wide variety of hardware out there, and time constraints on my side for reviewing/maintaining that code.
  • Add an “extension” system to Klipper with a well documented API that is powerful enough to support common MMUs (and convert existing MMU code to use that API). That’s basically what’s been discussed here in this thread, but it raises some obvious questions on what that API should look like and who will be doing the work.

Are there other solutions?

When I ask questions like this, I feel the most common response goes something like, “we should just keep doing what we’re doing today, but somebody else should be doing it better”. :slight_smile:

As for documentation, I feel there is a significant amount of information available on the stable APIs - API server - Klipper documentation , Status reference - Klipper documentation , G-Codes - Klipper documentation , Configuration reference - Klipper documentation , and we try to give lengthy deprecation notices of any breaking changes to them (eg, Configuration Changes - Klipper documentation ). Modifying the code (including by adding code to the repository) is obviously not a supported API.

I certainly appreciate that there are interesting capabilities that cannot be achieved with the stable API today. So, what do we need to do to obtain these capabilities in a supported way, and who will be doing that work?

Cheers,
-Kevin

2 Likes

I’m not a developer or user of a MMU so take this for what it’s worth.

How are the OEM companies handling their MMU addons? Do they directly control every motor and read every sensor with the “host computer”, or have they moved the logic to a processor embedded in the hardware?
Klipper is designed to be fast and maintain strict timing synchronization. MMUs are slow and cumbersome by comparison. The (ever-increasing) bandwidth that creating/maintaining a “universal” interface would require seems, to me, beyond the scope of an open-source project when maintaining/improving “core functions” is already using 80-100% of available dev resources.

It seems to me that in a world where a Raspberry Pi Nano is available retail for <$5, and OEMs are producing printers by the thousands with embedded SBCs that can barely keep up with current loads, extending functionality is a path to despair.

The above goes double for LED “effects”

In an MMU-equipped printer, when a filament change G-code arrives, Klipper should:

  1. Park over the purge tower
  2. Send a signal to MMU to load filament “x”
  3. Wait for a “done” signal
  4. Run the purge macro
  5. Resume printing
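The five steps above could be sketched as a Klipper macro. PARK_AT_PURGE_TOWER, MMU_LOAD, and PURGE are invented names standing in for whatever the MMU extension actually provides, and step 3 (waiting for the MMU’s “done” signal) is exactly the handshake a plain macro cannot express well, which an extension API would need to supply:

```
# Hypothetical sketch only -- the MMU-side commands are invented names.
[gcode_macro T_CHANGE]
gcode:
    PARK_AT_PURGE_TOWER              ; 1. park over the purge tower
    MMU_LOAD GATE={params.GATE}      ; 2. signal the MMU to load filament "x"
    M400                             ; 3. wait (only covers Klipper's own move queue)
    PURGE                            ; 4. run the purge macro
    RESUME                           ; 5. resume printing
```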

We now return you to input from real developers.

@koconnor As the developer of Happy Hare (universal MMU software), I can say that the biggest area of pain for me is controlling steppers the way I need. Currently I use a second Toolhead to build MMU rails. This was primarily to obtain drip homing movement (which I now see is part of manual_stepper), but also to mimic an extruder stepper (to allow for syncing) and, finally, to allow for multiple endstops.

Essentially, what I create, albeit through some maintenance-prone class manipulation, is an “mmu rail/stepper” type. This needs the following:

  • Multiple endstops allowing for endstop choice when performing homing moves
  • Drip homing (obviously)
  • Two modes of operation: extruder trapq when synced to the extruder, regular trapq when operated as an “axis”/manual stepper
  • Ability to sync to an extruder or operate independently
  • Ability for the extruder to sync with it

These semantics allow:

  • control of the loading operation where, for parts of the process, the extruder may or may not be synced to the mmu rail, using sophisticated homing moves to a chosen sensor. I.e. the filament loads with the mmu rail only until it hits the extruder, then the extruder is synced to the mmu rail and loading continues, perhaps accurately homing to a toolhead sensor before the final load and purge.
  • in-print syncing of the mmu rail to the extruder (where the rotation distance of the mmu_stepper is dynamically altered to remove compression/tension of the filament between the mmu and extruder)

I refer to these different drive modes as:
“gear+extruder” - mmu driving, extruder following
“gear” - mmu-only movement (no extruder)
“extruder” - extruder ONLY on the mmu rail and controlled by mmu movement. Has value on unload and recovery.
“synced” - mmu gear is synced to the extruder for in-print synchronization (extruder trapq)

With this level of hardware control you would provide the basic building blocks for MMU control logic and allow the “ecosystem” to sit on top, handling all the nuances of different MMU styles.
BTW, I don’t know if it is useful, but I tried to characterize MMUs as type-A/B/C here: Conceptual MMU · moggieuk/Happy-Hare Wiki · GitHub

3 Likes

Thanks. I appreciate the information and feedback. I have some additional comments and questions.

For the low-level manipulations that you need to do, it sounds like you need:

  • A way to define several “mmu” stepper motors. All the steppers are known at startup, though when they move changes at runtime.
  • A way to define a set of “mmu” endstops (potentially 2+). All the endstops are known at startup, but when they are used varies at run-time.
  • Be able to group one or more of these “mmu” motors into a subset. The total set is known at start time, but the pertinent subset can change at run-time.
  • You need to be able to manually command movement of a grouping of “mmu” steppers. You don’t need to group or simultaneously move kinematic steppers and mmu steppers (that is, you’re not moving X while simultaneously moving a “gear” motor).
  • You need to be able to manually command movement of a grouping of “mmu” steppers until one of the previously defined “mmu” endstops triggers. Which endstop changes at run-time. You don’t need to home mmu steppers to an XY endstop nor similar; you don’t need to home an XY stepper to an “mmu” endstop; you don’t need to home an XY stepper while simultaneously homing an “mmu” stepper; you don’t need to home one “mmu” stepper to one “mmu” endstop while simultaneously homing a different “mmu” stepper to a different “mmu” endstop.
  • You need to be able to associate a grouping of “mmu” steppers with an extruder at runtime.

Did I understand the above correctly?

Are there any situations where you have “hard real-time” requirements, or can you mostly just “queue up a bunch of commands” and wait for them to complete? I understand you need to associate steppers with extruders and need to implement homing, but are there other cases which are strictly timed?

Separately, what is your mid-to-long-term preferred development approach? Are you happy with the current setup (instructing users to drop code into the Klipper directory), would you like to upstream the code to the mainline Klipper repo, would you prefer Happy Hare to remain in a separate repo and interact with Klipper via a stable API (like moonraker, mainsail, etc.), or perhaps something else?

Thanks again,
-Kevin

2 Likes

Hi Kevin @koconnor , sorry for the tardy response – I’m in the middle of vacation/traveling :slight_smile:

A way to define a set of “mmu” endstops (potentially 2+). All the endstops are known at startup, but when they are used varies at run-time.

Correct. So when I perform a homing move, I can specify the endstop. Currently I add them as “extra_endstops” in the (hacked) stepper definition and allow the caller to specify which one. That way there can still be a default, but others can be substituted as needed. Note that this is not just about filament movement – consider a “selector” stepper that has a homing endstop but also an endstop for each MMU lane/gate. The new BTT ViViD is an example of this. For filament movement, there is a need to home to known positions using endstops (not just the traditional filament sensor), but always a type of switch.
With a typical fully equipped Happy Hare MMU setup, a (lane) stepper may have:

  • an MMU “entry” sensor used for “reverse homing” when ejecting filament. This also doubles as a filament switch to detect the desire to auto-load filament.
  • an MMU “exit” sensor after the MMU filament drive stepper. This might also be located at the filament aggregation point or “hub”. It could also be shared by each lane of the MMU.
  • a “filament compression” sensor that can be used to detect the filament hitting the extruder (i.e. end of bowden)
  • an “extruder entry” sensor, a la what Prusa has always had, and another way to detect filament at the extruder
  • a “toolhead” sensor inside the extruder. This is really useful because it provides a highly accurate homing point for the final stages of filament movement.
  • sometimes users have also set up a “touch” endstop (stallguard) that can also be used to detect the filament hitting the extruder entry.

So it can be a lot, some specific to one stepper, others shared by all MMU steppers.

Side note (future): There are a couple of pure analog sensors that measure filament compression/tension. These are great for dynamically controlling the rotation_distance of the MMU (thus keeping it synced with the extruder), but it is difficult to “home” to the compression state. Using server-side logic of course means an over-run. There is at least one PR for an “analog” input trigger at the mcu that would solve this problem.

Be able to group one or more of these “mmu” motors into a subset. The total set is known at start time, but the pertinent subset can change at run-time.

Correct. I call them Type-B MMUs. They have a stepper per lane. Here you are changing which one you are driving, and particularly which you are syncing to the extruder movement. Note it is still useful to keep them separate so that you could perform a “filament pre-loading” operation on one lane while another is loaded to the extruder (or even while printing…?)

You need to be able to manually command movement of a grouping of “mmu” steppers. You don’t need to group or simultaneously move kinematic steppers and mmu steppers (that is, you’re not moving X while simultaneously moving a “gear” motor).

Correct. I don’t see value in combining movement with the existing x/y/z. They are separate movements, much like the manual_stepper is today.

You need to be able to manually command movement of a grouping of “mmu” steppers until one of the previously defined “mmu” endstops triggers. Which endstop changes at run-time. You don’t need to home mmu steppers to an XY endstop nor similar; you don’t need to home an XY stepper to an “mmu” endstop; you don’t need to home an XY stepper while simultaneously homing an “mmu” stepper; you don’t need to home one “mmu” stepper to one “mmu” endstop while simultaneously homing a different “mmu” stepper to a different “mmu” endstop.

The “endstops” for the MMU are completely separate from the printer’s x/y. They MAY also be shared as regular filament sensors, but to be honest this gets in the way, and it is better for the MMU controlling software to manage runout intelligently. E.g. it implements “endless spool” automatic reloading, etc.

As for grouping movement – only one of the MMU steppers is moving at a time, so although all the steppers would share the same endstops, there would only be one homing at a time. Today I achieve this by dynamically changing which steppers are on the “rail”, so that I’m only controlling one at a time (the disabling is removing the trapq and/or removing the step flushing routine in previous Klipper versions).

You need to be able to associate a grouping of “mmu” steppers with an extruder at runtime.

Not a group, just one (at least with present MMU designs). I had assumed a Rail per MMU lane. I.e., just like a manual_stepper, each MMU lane stepper has a rail, and that is synced with the extruder at runtime. When synced and “following” the extruder, it would have the extruder’s kinematics; otherwise it would have regular kinematics.

1 Like

No real-time needs that I’ve encountered (in 3+ years of doing this). The only need is syncing with the extruder while printing and then dynamically varying rotation_distance to keep strain off the extruder. Other than the non-time-sensitive “pre-loads” or “ejects” described earlier, the MMU isn’t doing anything while printing.

Separately, what is your mid-to-long-term preferred development approach? Are you happy with the current setup (instructing users to drop code into the Klipper directory), would you like to upstream the code to the mainline Klipper repo, would you prefer Happy Hare to remain in a separate repo and interact with Klipper via a stable API (like moonraker, mainsail, etc.), or perhaps something else?

I love the idea of integrating more extensively into Klipper. Frankly, I’m in the middle of a big HHv4 rework that will clean up HH and make it more modular. It has grown into quite large modules over time. Once that is done, I’d love to connect with you on future options…
PRs have been pulled into Moonraker (although I still have a large Spoolman integration and a G-code pre-processor that are separate).
Support has been pulled into Fluidd already. Mainsail is imminent.
There is also a KlipperScreen integration that I’ve had the offer to create a PR for, but no time yet.

The bottom line is that I don’t want to lose the ability to easily maintain/update, but I don’t need a separate module that users need to install…

One other random off-topic idea that might be of interest for a future Klipper is a “menuconfig”-based installer that creates template *.cfg files. It’s experimental (and not my original idea) but seems to be working quite well for basic HH setup. It’s all declarative and creates a great (often fully working) starter config.

3 Likes

Just some thinking out loud on how integrating HH into mainline Klipper could work. I’m not the HH dev, but I have done a couple of PRs to HH, so I know at least a bit about how it works internally.

If HH is going to be (at least partially) integrated into mainline Klipper, the best way I can think to do it (after Klipper’s stepper code is updated to do the same things HH does) is to have a few minimal extras modules in mainline that read the configs; initialize steppers, servos, and sensors; and expose their controls and additional settings over webhooks. Then HH can run as a separate Python process that connects to the webhooks to communicate with Klipper, either directly through the Unix socket or through Moonraker’s websockets.

The main thing left to figure out is what happens when a toolchange (or any MMU command) is requested. Since (as far as I understand) webhooks update subscriptions every second, maybe a status variable could change to notify HH about the requested command. HH could then send over the proper commands to perform it. Or is it possible for Klippy to directly send a message over webhooks to a specific HH socket from a G-code command?
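For what it’s worth, the subscription half of that idea maps onto the existing “objects/subscribe” endpoint in docs/API_Server.md, where Klipper pushes a message matching a caller-supplied response_template whenever a subscribed field changes. A sketch of building that request; the “mmu_bridge” object and its “pending_command” field are hypothetical, standing in for whatever a minimal mainline module would publish:

```python
import json

def encode_subscribe(objects, req_id=1):
    # Build an "objects/subscribe" request per docs/API_Server.md.  Klipper
    # then pushes a message matching response_template whenever one of the
    # subscribed status fields changes.
    req = {"id": req_id, "method": "objects/subscribe",
           "params": {"objects": objects,
                      "response_template": {"key": "status_update"}}}
    return json.dumps(req).encode() + b"\x03"

# Hypothetical: an external HH process subscribing to a "pending_command"
# field published by an invented "mmu_bridge" module:
request = encode_subscribe({"mmu_bridge": ["pending_command"]})
```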

Again, just some thoughts, and there might be a better way than what I described.