Possible Klipper "plugins" instead of macros?

The current Klipper macro system has shown impressive results. Impressive, both in a “terrific” way and in a “terrible” way. Terrific in that I’ve seen macros used to implement very impressive functionality that can directly improve user printing experience, and terrible in that some macros have grown to a level of complexity that makes them very hard to troubleshoot.

One of the weaknesses of Klipper macros is that they rely on Jinja2 and G-Code - both of which are very “quirky”. They both also lack good error handling capabilities - thus making it very difficult for macro authors to check for and meaningfully react to abnormal situations. This lack of error handling makes troubleshooting even more difficult.

I’d like to start a discussion on the possibility of enhancing Klipper to support an alternative mechanism for adding new commands to Klipper. I’m tentatively calling this proposal a “Klipper plugin”, though I admit that may not be a good name for it. This is just for discussion at this point - it may not be a good idea, and even if it is a good idea it would still need to be implemented.

To be clear, this would be in addition to the current Klipper macros. I have no plans or intentions of removing support for macros in Klipper.

The core idea of this “plugin system” would be a mechanism for users and developers to be able to add new commands to Klipper. I envision it would have the following capabilities:

  1. “Plugins” would not need to reside in the Klipper3d github repository, and would not need to go through the Klipper3d review process. Regular users could define their own plugins or obtain plugins directly from other developers.

  2. The new code could be implemented in Python or some other high-level language. New functionality would not need to be implemented in Jinja2.

  3. Code could “register a command” with Klipper, and if that command is run the plugin would be notified. It could then generate a series of G-Code commands that Klipper should run (as macros do today). It could also “pause” the g-code command stream (as tools like TEMPERATURE_WAIT do today). As an extra capability, if any g-code command that a plugin issues raises an error, then the plugin could “catch” and handle that error (potentially without the error causing Klipper to stop a print).

  4. It could export “status information” that other macros and API users could access (eg, {printer.myplugin.x}).

  5. Configuration parameters for the plugin could live in the standard printer.cfg file.

  6. As a capability not currently available to macros, I envision a plugin could execute a limited set of “asynchronous actions” that could occur even when the G-Code stream is nominally busy (for example, change the color of an LED).

  7. As another capability not currently available to macros, I envision there would be limited support for a plugin to be able to use SAVE_CONFIG to alter its own configuration sections in the printer.cfg file.

A “plugin” would not be a Klipper “extras module” - there are some things that I would envision are “out of scope” and thus explicitly would not be possible from a “plugin”:

  1. It can not implement new types of kinematics, nor introduce custom “g-code coordinate transformations”. Plugins would not be intended to alter the Klipper movement “fast path code”.

  2. It would not have low-level access to the internal state of Klipper objects, would not be able to directly change that internal state, and would not run within the main Klipper OS process.

  3. It would not be able to directly send commands to MCUs. It can not add support for MCU commands that the main Klipper code does not know about.

  4. It can not reliably respond to hardware events with low latency. It would not have direct access to Klipper’s low-latency background thread.

In summary, the goal would be to make it possible to implement new g-code commands using high-level languages and to support meaningful error handling. At the same time, the goal would be to support a meaningful logging and troubleshooting system so that if a plugin has a defect its users can be rapidly directed to the developers that can solve the underlying issue. (That is, we want to try to limit the burden that a buggy plugin could place on the mainline Klipper contributors.)

I have some ideas for a possible implementation - mainly by extending the existing Klipper API server unix domain socket:

  1. Klipper could be extended to check a directory that “plugins” live in. If an unknown config section is encountered and the config section matches a file in the plugin directory, Klipper could run the plugin (in a sub-process) and pass Klipper’s unix domain socket to that plugin on its command line. When the plugin starts it could use the unix domain socket to obtain the Klipper config and handle any configuration items in its associated config sections.

  2. A new “register_gcode_command” API endpoint could be added to Klipper. A plugin could use this capability to register a command. If Klipper sees this new command in a command stream it could forward a notification via the API server. The plugin could then acknowledge the command, or issue new “run_gcode_from_command” API endpoint requests back to Klipper. Thus the API server could be extended to support the addition of synchronous G-Code commands.

  3. Similarly, a new “set_status_state” API endpoint could be added to Klipper so that a plugin can update {printer.myplugin.xxx} information.

  4. The plugin would have all the existing capabilities of the API server. It could thus “subscribe” to status updates, invoke asynchronous “endpoints”, submit G-Code commands, and similar.

  5. A Python reference library could be implemented with the low-level details of opening the unix domain socket, handling the registration of commands, and similar. Thus, I envision most plugin authors could just write basic Python code for their new G-Code commands, and not have to worry about all the gory details of the underlying unix domain socket.
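To make the flow above concrete, here is a rough sketch of what a minimal plugin built on such a reference library might boil down to. Only the message framing (JSON terminated by a 0x03 byte) matches today’s Klipper API server; the “register_gcode_command” and “run_gcode_from_command” endpoints, the notification shape, and the socket path are all hypothetical pieces of this proposal, not existing Klipper APIs.

```python
import json
import socket

def send_msg(sock, msg):
    # Klipper's API server frames messages as JSON terminated by 0x03
    sock.sendall(json.dumps(msg).encode() + b"\x03")

def recv_msg(sock, buf):
    # Accumulate bytes until a full 0x03-terminated message arrives
    while b"\x03" not in buf:
        buf += sock.recv(4096)
    raw, rest = buf.split(b"\x03", 1)
    return json.loads(raw), rest

def plugin_main(socket_path="/tmp/klippy_uds"):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(socket_path)
    # Hypothetical endpoint: ask Klipper to forward MY_COMMAND here
    send_msg(sock, {"id": 1, "method": "register_gcode_command",
                    "params": {"command": "MY_COMMAND"}})
    buf = b""
    while True:
        msg, buf = recv_msg(sock, buf)
        if msg.get("method") == "gcode_command":   # hypothetical notification
            # Answer by asking Klipper to run a G-Code snippet, then the
            # plugin would acknowledge so the command stream can continue
            send_msg(sock, {"id": 2, "method": "run_gcode_from_command",
                            "params": {"script": "G28\nG1 X10 F3000"}})
```

The point of the reference library would be that most authors never see `send_msg`/`recv_msg` at all; they would just supply a callback for `MY_COMMAND`.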

Again, this is just a starting point for discussions. I haven’t made any decision one way or another if this is a “good idea” or not.

Even if this is a good idea and even if it is successful, I’m sure the results will still be both “terrific” and “terrible”. Even with plugins I’m sure we’d still have issues with over-complicated plugins, bugs, and troubleshooting. Perhaps, at least though, it will be possible to build complex plugins that are maintainable, which I suspect may not be achievable with today’s macros.

Thoughts?

-Kevin

7 Likes

Kevin,

I like the idea of providing plugin functionality to Klipper, but I think there needs to be a clear separation between what a macro does and what a plugin does. My concern is the case where functionality is shared between the two, which can lead to systems that no longer behave deterministically, or that require a specific set of macros and plugins at a specific release level (both for the macros and plugins as well as for Klipper, Moonraker, Mainsail/Fluidd, etc.).

In terms of separation, I would suggest that plugins affect the features of Klipper and not its operation. Rereading your post, I think you’re more or less specifying this model (in the second and third lists) but I think it needs to be spelled out explicitly.

In terms of your seven capabilities points, I have the following comments:

  1. There needs to be a set of reviewed plugins so that users have confidence in the system. Unless there are reviewed/approved plugins, I’m not sure how they will be received or trusted if they’re only available on some anonymous person’s GitHub. See my final point on “security” at the end of the post.
  2. Love the idea of plugins being implemented in a high-level language; however, I feel that a source-code-level debugger as part of the toolchain is a must.
  3. I’m not sure about this one as it goes over the line between features and operations. Personally, I think that any added/changed/removed gcode commands should be only issued by macros.
  4. Great.
  5. I’m not so sure about this and it would depend on what the plugin functionality was. My issue with this is portability - what happens when somebody copies a printer.cfg and tries running it without the plugin, or on a different system where the plugin will operate incorrectly?
  6. I think this goes over the line in terms of separation, but you do give a good example where it could be useful (ie the plugin changes the LED color on a single printer in a farm when it gets a “Timer Too Close” error so the printer can be easily identified).
  7. I think I’m onboard with this. I’d like to see the APIs for this but that could be useful - especially if they’re accessed from Mainsail/Fluidd.

There is one big point that you miss and that’s security. What will be done to ensure that plugin writers cannot provide a backdoor into users’ systems and networks? Ensuring plugin users’ security needs to be part of the overall plan.

Overall, I think you have a good start to a plan here and I appreciate you putting it out for people’s comments.

2 Likes

I like the idea of being able to extend Klipper’s functionality beyond what can be done with macros without altering Klipper itself. As someone who is running a fork of Klipper as a daily driver because I have introduced several new modules (and honestly I have lost hope of getting them all introduced into official Klipper), I would really appreciate an official way of doing this.

This sounds like a rather strong restriction to me. I am wondering a bit what is the reason for this. Most modules I had to introduce to support my printer model require at least one of these things which will not be possible with a “plugin”.

Wouldn’t it also just be easier to provide some kind of official interface for “extra modules” coming from an external source (maybe even some official marketplace to distribute them) rather than inventing yet another API? If some “plugin” does not require access to those things, it does not have to use them.

I think this should be seen in particular in light of the increasing difficulty of getting a new extra module into official Klipper. This is not meant as a critique; it is a simple fact that the increasing popularity makes it more difficult and time consuming for you maintainers to review all code, and at the same time increases the requirements for code sanity and compliance with certain rules for us developers, which in the end makes it harder and harder to get something in. I really think there should be a way to share less official “extra modules” (and maybe even low-level MCU code) in a sane way.

4 Likes

I wholeheartedly back this initiative and find myself aligned with @mhier’s insights. Speaking from the standpoint of an end-user and not a developer, I believe the following considerations are crucial:

Minimizing Fragmentation

As identified here, the Klipper ecosystem is increasingly dispersing. It is vital for any upcoming system to actively discourage this trend.

Maintaining Quality

Hosting extras or plugins outside the core Klipper repository, devoid of a dedicated review process, introduces a significant risk of compromised quality control and documentation. This, in turn, promotes the fragmentation discussed earlier. There’s also a probability of multiple contributions accomplishing the same function in various manners. This signifies the necessity of retaining a broader perspective.

Streamlining Support

It’s essential for the sake of support that extras/plugins remain centralized and incorporated into the Klipper mainline. Dedicating resources to trace the origin of a contribution and then redirecting the user to the respective owner detracts from the holistic experience.

Review Process and Workload Management

I propose that extras/plugins be housed in the primary repository (or a separate repository overseen by the project), with a flexible review process:

  1. Identify any potential pitfalls that might impact the fast path (guidelines could be set on which areas/functions/methods to sidestep or approach with caution).
  2. Ascertain that the extras/plugins can be swiftly disabled - if a problem arises, disable them and see if the issue persists. If it does, the faulty extra remains deactivated until rectified (by the contributor or a volunteer); if not, the troubleshooting continues.
  3. Establish a warning system for malfunctioning plugins/extras - these would remain flagged until they are fixed.
  4. Once points 1, 2, and 3 are met, adopt a “merge first, rectify later” approach.

Conclusion

I’m convinced that this strategy could balance the best aspects of both scenarios:

  1. Swift and efficient acceptance of new contributions.
  2. Centralized access with thorough documentation and support to counter fragmentation and avoid redundant efforts.

3 Likes

Thanks for the feedback.

Great question.

An “extras” module can easily destabilize Klipper in such a way that it causes Klipper to fail with no indication that the root cause of the failure is the external code. An “extras” module can also easily access other parts of Klipper in such a way that updates to the mainline Klipper code will break (or destabilize) the “extras” module.

Common causes of failure include accessing core components from background threads (leading to “race conditions”), consuming too much processing time (leading to “timer too close” errors), performing blocking IO (leading to toolhead pauses or errors), and similar.
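As a contrived illustration of the blocking-IO failure mode (using Python’s asyncio purely as a stand-in for klippy’s own reactor, which works differently): a single blocking call delays every other timer on the event loop, which is exactly how toolhead pauses and “timer too close” errors arise.

```python
import asyncio
import time

async def well_behaved():
    await asyncio.sleep(0.05)      # yields to the event loop

async def misbehaving():
    time.sleep(0.05)               # blocks the whole event loop

async def measure(task):
    # Schedule a 10ms "heartbeat" timer and see when it actually fires
    loop = asyncio.get_running_loop()
    start = loop.time()
    fired_at = []
    loop.call_later(0.01, lambda: fired_at.append(loop.time() - start))
    await task()
    await asyncio.sleep(0.02)      # let any pending callback run
    return fired_at[0]

# The heartbeat fires roughly on time alongside the cooperative task,
# but only after the blocking call returns with the misbehaving one
lag_good = asyncio.run(measure(well_behaved))
lag_bad = asyncio.run(measure(misbehaving))
```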

In effect, adding an external “extras” module is the same as altering the Klipper code. I certainly don’t mind when people alter Klipper (I explicitly built Klipper with the goal of making the code easier to experiment with), but I feel it is important that people know they are altering the code. That is, they should know what code they are running, know if it has been widely tested in that configuration, and know who to contact if they find something not working. I feel that is very important, both to sustain user confidence, and for managing the long-term maintenance burden of contributors.

The “extras” module was never intended as a “plugin” system - it was always intended as an internal abstraction to promote long-term code maintenance.

The idea I am proposing here would be to extend the Klipper “programmer interface” (API) to make it more amenable to external code. That is, try to make it easier for external code to add useful functionality to Klipper, while still making clear distinctions between the Klipper code and that external code. As an example, I don’t think there are any systemic issues with distinguishing Mainsail defects from Klipper defects - most users can probably figure that out, and if not a contributor can rapidly point it out to them. I wonder if we can make it possible to add new Klipper commands while still maintaining that clear distinction on code ownership.

Finally, note that other projects have tried “expansive plugin systems” in the past with less than ideal results - for example the early web browsers had a plugin system which has now been completely removed (in favor of “extensions” with limited, well defined APIs). OctoPrint also has an expansive “plugin” system and it is reported to be a maintenance burden. Those projects also don’t have the “gnarly” real-time requirements that Klipper has. So, I’d prefer not to repeat those attempts (or at least go into it with a full understanding of the consequences).

Cheers,
-Kevin

1 Like

Hi everyone, I agree overall with mhier, but here is my bit of feedback.

A new API would not, from my perspective, add more value, because I barely use macros. The functionality I require could not be implemented without an extras module. That being the case, the extras module is to me a de facto plugin system.

I’d vote for (1) a Klipper extras repository for 3rd-party extras, (2) a public wiki where their docs can be written and improved, (3) a less 3d-printer-centric klippy code (i.e. more flexible), and (4) official developer docs on how to properly use klippy’s methods for new extras modules.

This would form a playground where modules can be tested out, accessed centrally, and improve iteratively until they are ready for a merge (if ever).

I feel many issues would be mitigated if new developers (like me) could educate themselves properly, in a space where mistakes and innovation are encouraged.

Hoping for the best and thanks for your work :slight_smile:

Atte.
Nico

1 Like

Plug-ins / Modules

“Plug-in” or “feature support module”? Po-ta-to or pah-tata. Either way, it is a worthy idea. This allows for “additional features” which enhance the Klipper experience. Also, it can explicitly differentiate between enhancement and core programming.

Some basic things for consideration:

  • Where to Include Modules: If we go Python (or even Rust) a subdir would be a nice centralized container so that people wouldn’t program locations willy-nilly. (e.g. a “Modules” directory) Each module would have its own subdirectory and there would need to be an understanding of “first-come, first-named” directories. I don’t think Klipper devs should be tasked with keeping directories from clashing.

  • How to Include Modules: The easy way - just make a subsection in printer.cfg for each module call, with the name and path to the runtime. The best way(?) - have Klipper search the Modules subdirectory for ini/env/etc files and then append the module subsections to the printer.cfg (see: bed leveling). Installing a module would just mean pointing to a repository and copying it into the Modules subdirectory with an appropriate identifying script. Restart the firmware to integrate new modules or remove old ones.

  • Hosting: I feel strongly that these modules should be hosted (and developed) anywhere but in Klipper. Like Octoprint, let developers host their own modules. They will need a template to create a module, and that can be hosted separately from the main Klipper branch. I think this is the best route since it will keep module developers out of the main Klipper branch and away from its commits. In other words, these modules will not be like hosting all those *.cfg files in Klipper.

  • Long term Mitigation of Macros: Honestly, I can see macros, as such, going away to make room for modules. Sometimes I think a single macro could take up more lines of a printer.cfg than everything else combined. Plus, if you have several of them, you are troubleshooting within the main operating file for a “Klippered” printer. Another downside is that you have to replicate this every time you add another printer (each printer = another printer.cfg file). So having a process which automatically sweeps in/out modules is less prone to clerical accidents.

Module Behaviors

So to separate or indemnify Klipper development from module behavior:

  1. Set rules in place that, first and foremost, Klipper does not endorse nor support any particular module. (and no module defect reports in the Klipper repository either)

  2. Be explicit as possible on what a module can and cannot do in Klipper.

  3. Set up documentation on how to create a module.

That’s all I can think on right now before bed time. Sorry.

Even though it is not what I like to hear as a developer, I have to admit you are right. Probably this new “plugin” mechanism won’t be a solution for my problems, but this is of course no argument against it :slight_smile:

I align myself with mhier and naikynem, a RPC plugin API won’t help the distribution of my code in any way. That ship has sailed anyway as I’m patching toolhead.py and resonance_tester.py in order to fix some minor side-effects. These fixes are deemed optional as I don’t expect users to maintain their own patched git history with the current tooling (moonraker updater refuses to work with out-of-tree commits).

I’m comfortable with the notion of declining responsibility by explicitly tainting the klippy process in the logs when external code is introduced. Nevertheless, I think this process could be more streamlined:

  • allow toggling the loading of external code,
  • don’t fail initialization when encountering unknown config sections,
  • show the code’s provenance in the logs.

Regarding the proposed RPC plugin API, I think it will cover most macro use cases. Only limited additions are required to the current webhooks API. And most importantly, it provides two types of isolation:

  • guarantees that klippy’s event loop won’t be blocked by a defective plugin,
  • forbids access to klippy’s internal API and private members.

In order to evaluate the functionality of this API, it would be useful to make a list of extra modules that could be ported over. Currently I don’t see many that can, beside the ones that only interface at the gcode level.

The main limitation is that it forbids any insertion of code into klipper’s core functionality (g-code coordinate transformations, custom kinematics, low-latency callbacks, communication with custom hardware).
The latency overhead is also a concern for high-throughput commands like G0/G1 overloads. The serialization of printer object statuses adds overhead as well, as demonstrated by the fact that an idle klippy process with a frontend connected spends most of its cpu time in QueryStatusHelper.

To help with this, klipper could receive (marshaled) code over the uds socket for execution within the primary process. This effectively pokes holes in the guarantees above, but would encourage the code to remain minimal and self contained. Some guarantees could be recovered by:

  • logging the code’s provenance (for accountability) and capabilities,
  • limiting the data and APIs exposed through eval/exec,
  • running a timeout timer in the background that triggers an error if the procedure occupies the main loop for an extended period.
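The last point can be sketched with a plain background timer. This is a standalone illustration under stated assumptions, not klippy code: a real implementation would hook klippy’s reactor and log the offending code’s provenance instead of just setting a flag.

```python
import threading

class MainLoopWatchdog:
    """Flag any guarded call that holds the 'main loop' too long."""

    def __init__(self, limit_s=0.100):
        self.limit_s = limit_s
        self.tripped = False

    def run_guarded(self, func):
        # Arm a background timer; if func() finishes in time we cancel it
        timer = threading.Timer(self.limit_s, self._trip)
        timer.start()
        try:
            return func()
        finally:
            timer.cancel()

    def _trip(self):
        # A real version would raise an error on the next reactor
        # iteration and log which plugin supplied the code
        self.tripped = True

wd = MainLoopWatchdog(limit_s=0.05)
wd.run_guarded(lambda: sum(range(1000)))   # fast call: watchdog stays quiet
```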

1 Like

Just my 2 cents here: I feel like there’s merit to giving the current macro system a boost. Currently, Arksine’s G-code shell commands give this boost, but I think it opens a few holes if we want it to be akin to the proposed plugin system, or potentially anything using a language that isn’t sandboxed yet powerful - we don’t want to end up like the early days of Minecraft modding, where malware was more prevalent prior to CurseForge or PMC.

I’ll go into hosting, and creating a site/service to host them, in general talk, but this is the way I can see things going if something like this is implemented. (Again with my Minecraft analogies :sweat_smile:)

At the lowest level you would have the macro system. Basic enough to get menial things done, but it becomes quite powerful, albeit a bit resource-heavy, in the right hands. More akin to Minecraft’s current command system.

Mid range would be the plugin system, while it can run stuff outside the intended target, it’s not hooked into the intended service to completely modify it and is meant to work with macros. There is a Minecraft mod that allows datapacks to run python scripts for commands in the game, but with it using jep it is exposed to the system running the server.

At the highest would be extensions for Klipper, and this is where conflicts can abound the most; while most of them only add features, others can completely alter the target features and have issues with other extensions. While Minecraft does have modloaders available for this purpose, despite it being a mess considering the attitudes of 2 of the modloaders’ lead devs, Klipper lacks any injection system for extensions, so it’s more like patching code during Minecraft’s beta days.

Bottom line is that the plugin system would need to be powerful enough to do things macros can’t, but not require knowledge of OOP. Considering how capable a Python-based plugin system would be in Klipper compared to Minecraft, the idea of having an official site with macros, plugins, and extensions (the latter 2 with proper checks) has some merit as well.

1 Like

I like this proposal, it’s useful to allow a higher level language for macros/plugins, especially if they can run as a separate process and thus not interfere with Klipper’s main loop.

In terms of the Klipper team reviewing these plugins, I can tell you as one of the reviewers of Voron user mods repository PRs that this is unsustainable as you get more users and PRs; there’s simply not enough volunteer time to review these in a timely manner. Over time users get tired of waiting for a review, or disagree with changes that you ask for, and just put their mod somewhere else. I feel that the same would happen to Klipper plugins if they needed to be reviewed by the Klipper team. And people already do this - either make their own extras modules that they ask users to install, or even maintain their own Klipper forks. Perhaps instead of reviewing these, there could be some automated runtime test and sandbox for the plugins to run in. E.g. timeouts, memory/cpu limits, network limits etc.
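The sandbox idea at the end could build on ordinary OS resource limits. A minimal sketch (Linux/POSIX only; the limits, the helper names, and the inline plugin script are all illustrative, not an existing Klipper or Moonraker facility):

```python
import resource
import subprocess
import sys

def limit_resources():
    # Applied in the child just before exec: cap CPU seconds and address
    # space so a runaway plugin is killed by the kernel rather than
    # stalling the host indefinitely.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))            # 5s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 << 20,) * 2)   # 512 MiB

def run_plugin(script):
    # Run the plugin as a separate interpreter with the limits above,
    # plus a wall-clock timeout as a backstop for sleeping processes
    return subprocess.run([sys.executable, "-c", script],
                          preexec_fn=limit_resources,
                          capture_output=True, timeout=30)

result = run_plugin("print('plugin ok')")
```

Network limits would need heavier machinery (namespaces, seccomp, or containers), but CPU and memory caps alone already cover the "defective plugin stalls Klipper" case.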

1 Like

I think it would be helpful to analyze what current macros do. All the macros I personally use are essentially GUI helpers “move here, now do this”. There would be no point in preventing them from altering “fast path code” because the printer is essentially idle and there is not much of that “fast path” stuff anyway.

So I don’t think it is worth creating a separate isolated system to run a high-level macro language. It is a lot of effort for a questionable result. As for concerns of the plugin breaking Klipper, I think everything would eventually settle down like in any other extensible system - good plugins will prevail, and bad plugins will die out.

However, I have written some small Jinja2 macros, and I think there are plenty of improvements that could be made to the existing system. Jinja2 syntax is clunky at best, but the need to restart Klipper after each change is far more annoying and time-consuming. So if you could add fast-reload capability, maybe a dedicated debug output and other helpers, this would enhance the current workflow tremendously and diminish the need for a new system. I do not know which option is easier to implement, though :slight_smile:

I’ve got some fairly bespoke macros that I’ve put together to manage my 6-in-1-out Ender 3 mod-bomination. They do things like:

  • Handle all filament changes, including retraction, moving to the purge bucket, unloading and loading filament, nozzle wiping, and moving back to the print
  • Calculating the volume of filament that needs to be purged to prevent color bleed, based on the color codes of the old and new filaments
  • Persistent state-tracking of whether filament is currently loaded in the hotend or present in the “shared” bowden tube, and (somewhat) graceful refusal to load a new filament if so, to prevent collisions and filament grinding
  • Persistent state-tracking of what filament was last in the nozzle so the loading/purging macro can calculate the appropriate purge volume, between prints and even reboots.
  • Keeping track of what filaments are currently available to the printer and in which positions, so that the slicer only has to specify what filament to use and the printer knows where to find it, or else warn me that I haven’t loaded the filament the print is asking for.
  • Turning on the +24V only when needed to power a heater or stepper and powering it down when idle.
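The purge-volume point above is a nice example of logic that is awkward in Jinja2 but trivial in a general-purpose language. A purely illustrative Python version follows; the formula and constants are invented for this sketch, not taken from the macros described here:

```python
def hex_to_rgb(code):
    # "#RRGGBB" -> (r, g, b) as integers in 0..255
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def purge_volume_mm3(old_hex, new_hex, base=40.0, scale=120.0):
    # Purge a fixed base volume plus an amount proportional to how
    # "far apart" the old and new filament colors are
    old, new = hex_to_rgb(old_hex), hex_to_rgb(new_hex)
    dist = sum(abs(a - b) for a, b in zip(old, new)) / (3 * 255)
    return base + scale * dist

purge_volume_mm3("#000000", "#FFFFFF")   # worst case: dark to light
```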

I love the fact that the jinja2 macro structure allows this kind of power and flexibility, although there are some aspects where a fuller plugin system might make it even better. The main thing that comes to mind is that there’s obviously no GUI support in Mainsail/Fluidd for quick or visual entry of what filament is in what hopper. I also make use of more than one “hacky” workaround for some of the inherent limitations of the jinja2 implementation, and that has made it difficult to troubleshoot as I’ve been refining these macros.

If I knew how Klipper implemented macros, I would probably try to implement a Lua or Wren macro system, and while lupa does exist for Python, I can’t find anything for Wren. I may try to figure out lupa for a bit before trying to implement it as a macro system.

I think this is an excellent example, @theophile, as “color mixing and 1-in-N-out” is something that is kind of missing in Klipper’s standard portfolio.

In my opinion, macros are not well suited for such a complex feature (unless you only need / use it for yourself):

  • Makes the printer.cfg very complex for the average user
  • The users will have trouble doing anything with the macro because they mostly do not understand it; e.g. even modifying some options will be a challenge
  • (Probably) not extensively documented
  • Fully depends on your support and willingness to continue caring for it. Also, e.g., in case of changes to the macro system, deprecation etc.

Of course the points above do depend on the extent / impact of such macros. I would distinguish two types:

  1. High impact: The macro is providing a feature that is missing in Klipper and of interest to a broader audience
  2. Niche feature: A macro that is solving an unusual problem or only of interest to a limited / special group

So, for type 1 (and I would consider Color Mixing as such) I would prefer:

  • “Something” (plugin / extra etc.) that lives in the Klipper main repository
  • Is officially documented, e.g. has a well defined and described set of options
  • Is officially supported and maintained

From current experience, such Type 1 macros would be:

  • Color Mixing / 1-in-N-out hotends
  • KAMP
  • Dockable probes (luckily something is already in motion there)
  • More complete / extensive Pause / Resume / Park features
  • Time-Lapse

If all of them were replaced by main-line features (plugin / extra / you-name-it) with documented options, many, many printer.cfg files would be a lot shorter, less complex, and more user-friendly

2 Likes

There is a mixing hotend PR in the works if I remember correctly, even though I managed to get an implementation, however crude, working.

It seems that most features macros use aren’t really deprecated; there was one in KAMP, I think for its adaptive purging, but that seems to be the exception rather than the rule. I may try to develop that Lua macro idea a bit with lupa, but OOP isn’t my strong suit and I’ve just started a CS concepts course.

To me, if Kevin were more lenient on PRs, and maybe had a way to mark additions from PRs as experimental, Klipper would probably be well ahead in features compared to Marlin or RRF. Because unless there’s a way to patch Python and C code without replacing files, like with pymixin or similar, we’re rather limited on what 3rd-party features we can have on one machine at once. Subsequently, a way to add only the features we want, without the other code and files taking up storage, would be nice too.

I think it’s best to start simple.
Make it possible to load unsafe third-party python modules from the config and observe what people do with it.
A possibly overcomplicated and isolated system is not really needed right now.

There are also a few things you can do that can greatly simplify macros the way they are now. For example, built-in functions for translating coordinate systems or reusing parts of the code.

2 Likes

Hi, is this mostly dead, or is something happening?

IMO this is probably the best route out of the current situation with the general aversion to adding new features to the core.

Thanks for the feedback.

The overall impression I got from this thread is that there isn’t a lot of “excitement” for a limited plugin system as described here. So, the idea is not “dead”, but it also does not seem like it should be at a high priority.

Cheers,
-Kevin

Re-reading the thread there seems to be quite a few excited posts, but no clear direction how to do it “properly”.

And there seems to be a chasm between experimenters and maintainers.
With experimenters wanting a playground, however simple to operate in.
And maintainers trying to devise a perfect system from very limited information on how it’s going to be used.

Maybe it’s worth analyzing this from the opposite side:
If someone (me?) made an unofficial plugin manager that just installs stuff into the extras folder. With all the security and potential maintainability issues, it will likely get some traction with hobbyists. What would be your reaction? What can you do today to avoid that?