Config static analysis

I am currently looking into the Klipper codebase to improve the config-handling area. My motivation is that I have a complex set of config files joined together with includes, and I want to better understand the end state of the config as Klipper sees it.

To that end, I want to add a tool that parses the config file and lets the end user do things with it: display it all as a single file for troubleshooting, annotate which line came from which file, and possibly go as far as adding new features to the config language, like supporting multiple units for some properties (e.g. mm and cm), though I doubt that would be accepted.
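
To make the first part concrete, here is a rough sketch of the kind of flattening and annotation I have in mind. It only handles plain [include ...] sections, the function name and output format are invented, and it does not reuse Klipper's own include handling:

    import glob
    import os
    import re

    # Rough sketch: recursively expand [include ...] sections and tag every
    # output line with the file and line number it came from.
    INCLUDE_RE = re.compile(r"^\[include\s+(?P<pattern>.+?)\s*\]$")

    def flatten_config(path, out=None):
        if out is None:
            out = []
        base = os.path.dirname(os.path.abspath(path))
        with open(path) as fh:
            for lineno, line in enumerate(fh, start=1):
                match = INCLUDE_RE.match(line.strip())
                if match:
                    # Klipper accepts glob patterns in include sections
                    for inc in sorted(glob.glob(os.path.join(base, match.group("pattern")))):
                        flatten_config(inc, out)
                else:
                    out.append("%-60s # %s:%d" % (line.rstrip(), os.path.basename(path), lineno))
        return out

    if __name__ == "__main__":
        print("\n".join(flatten_config("printer.cfg")))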

I did some searching, but I couldn't find much discussion on it. Is there any prior work in this area? Would a tool like this be accepted upstream?

Not sure that I understand what you are aiming at and what the overarching goal or benefit would be. Could you highlight it with some concise examples?

In my view, this is quite simple:

  • The klippy.log will always reflect the config as parsed and used by Klipper.
  • Adding the same config section multiple times will either:
    • Result in an error, notifying you that this is not possible.
    • Overwrite the previous declaration with the one that has been parsed last.

Maybe to add: what I tried, and failed at due to my miserable coding skills, was a linter for Klipper configs, similar to ShellCheck, ESLint, etc.

It should have covered items like:

  • Missing indentation in macros
  • Settings that are not recommended, like hold_current or switching between StealthChop and SpreadCycle mid-print
  • Duplicate settings that would get overwritten
  • Findings classified as Notes, Warnings, or Errors
  • And so on

In the best case, as a web application, so that users have an easy way to quality-check their configs.

Not sure if you are suggesting something along these lines. If yes, I'm all ears.

Thanks for that info! I'd like to update the config docs to describe that behavior; it will help a lot!

Regarding the practical benefits, I'm an enthusiast of programming languages and parsers/compilers/static analysis, so I'm hoping my interest can be helpful to Klipper.

Yeah, that sounds like schema validation! It's a step I would like to add after the config is parsed.
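
To make that classification concrete, here is roughly how I picture a rule reporting findings with the note/warning/error levels you describe. None of these names exist anywhere yet, and the duplicate check leans on Python's stdlib configparser as a stand-in rather than Klipper's own parsing code:

    import configparser
    from dataclasses import dataclass

    # Hypothetical finding shape - nothing here is an existing API.
    @dataclass
    class Finding:
        severity: str   # "note" | "warning" | "error"
        message: str
        line: int = 0

    def check_duplicates(path):
        # The stdlib parser rejects duplicate sections/options when strict=True,
        # which serves as a rough stand-in for a "duplicate settings" rule.
        findings = []
        parser = configparser.RawConfigParser(strict=True)
        try:
            parser.read(path)
        except configparser.DuplicateSectionError as err:
            findings.append(Finding("error", "section [%s] declared twice" % err.section, err.lineno))
        except configparser.DuplicateOptionError as err:
            findings.append(Finding("warning", "option '%s' repeated in [%s]" % (err.option, err.section), err.lineno))
        return findings

    for f in check_duplicates("printer.cfg"):
        print("%s:%d: %s" % (f.severity.upper(), f.line, f.message))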

If we wanted to make it available as a web application, is that something I would coordinate with the Mainsail project on?

If I'm understanding correctly, you want to make an addition to Klippy for config schema validation. If so, you'll have to create a Klippy extra (or modify the existing configfile.py module). Then you could create an addition to Mainsail to access the messages from your custom module.

So the flow of information would be Configuration → Validation code → get_status() → Moonraker → Mainsail.

The get_status() part (a function in your extra) is important for exposing the information through Moonraker. Basically, it just returns a JSON-friendly dictionary that can be accessed through macros or UIs.
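
A minimal skeleton of such an extra could look roughly like this; the module and section name config_validator are made up, while load_config() and get_status() are the usual extras hooks:

    # Hypothetical klippy/extras/config_validator.py
    class ConfigValidator:
        def __init__(self, config):
            self.printer = config.get_printer()
            self.warnings = []
            # Run whatever validation you like against the config here and
            # append human-readable messages to self.warnings.

        def get_status(self, eventtime):
            # Anything returned here is a JSON-friendly dict and becomes
            # visible to Moonraker/Mainsail and to macros via
            # printer.config_validator.warnings
            return {"warnings": list(self.warnings)}

    def load_config(config):
        # Enabled by adding a [config_validator] section to printer.cfg
        return ConfigValidator(config)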

I wrote a tutorial for Klippy extras here if you want help getting started.

FWIW, and without being the authority here, I would steer clear of Klipper's config parsing in order to:

  • Avoid the danger of regressions; history has shown that this is a sensitive area.
  • Avoid adding features and further complexity, which increase the maintenance burden in this area.

From experience, there are three groups of "static" config issues:

  1. Issues preventing Klipper's start in the first place, e.g., wrong or incomplete serial paths in the [mcu] section. There are more such problems, and their error messages are typically cryptic.
  2. Issues that might "silently" affect operation, e.g., those covered in the TMC drivers chapter of the Klipper documentation and the following chapters.
  3. Issues that only hit when a certain function is called, e.g., "SAVE_CONFIG section 'bltouch' option 'z_offset' conflicts with included value".

To catch all three, the best thing would be some external validation that "imports" Klipper's parsing rules.
An ā€œextraā€ like proposed by @3dcoded would be nice as well, but will likely not work for number 1.

My dream would have been:

  • Static analysis with a flexible rule engine that can be easily extended (a possible shape is sketched after this list).
  • Likely stand-alone. Being part of either Klipper, Moonraker, or the web interfaces always requires a successful start of Klipper.
  • Proposals on how to fix certain issues and useful hints, e.g., "Often the BLTouch pin requires the declaration of a pull-up via ^."
  • As a cherry on top: some beautifier functionality, like fixing indentation, etc.
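
The rule engine with fix proposals could be as small as rules that return a finding plus a hint, roughly along these lines (every name here is invented):

    # Sketch of a rule-plus-hint shape - all names are invented.
    RULES = []

    def rule(func):
        RULES.append(func)
        return func

    @rule
    def bltouch_pullup(sections):
        bl = sections.get("bltouch")
        if bl and bl.get("sensor_pin") and not bl["sensor_pin"].startswith("^"):
            return ("warning", "bltouch sensor_pin has no pull-up",
                    "Often the BLTouch pin requires the declaration of a pull-up via '^'.")
        return None

    def run_rules(sections):
        # sections: {section_name: {option: value}} from whatever parser is used
        for check in RULES:
            finding = check(sections)
            if finding:
                severity, message, hint = finding
                print("%s: %s\n  Hint: %s" % (severity.upper(), message, hint))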

I spent a few hours yesterday digging deeper into the Klippy code and extras, and I learned a lot. It turns out that Klippy is not solely responsible for parsing the config; instead, each module parses the values for the keys in its own config section. Changing how that works in any way would be quite disruptive, certainly too disruptive for a first improvement.

So, yeah, I can better understand your thinking now: an external tool, in its own codebase entirely. It would take printer.cfg as input, do its analysis, and output to the terminal or a file or whatever. Once that is mostly done, I could look into a frontend like Mainsail to see how to surface that information there.
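
As a first cut, I am imagining an entry point about this small (everything here is provisional):

    import argparse
    import sys

    # Provisional CLI skeleton: read the config, run the rules, print findings.
    def run_all_rules(path):
        return []  # placeholder until real rules are wired in

    def main(argv=None):
        parser = argparse.ArgumentParser(description="Static checks for Klipper configs")
        parser.add_argument("config", help="path to printer.cfg")
        args = parser.parse_args(argv)
        findings = run_all_rules(args.config)
        for finding in findings:
            print(finding)
        # A non-zero exit code makes the tool usable from scripts or CI as well
        return 1 if findings else 0

    if __name__ == "__main__":
        sys.exit(main())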

Thanks for your help on this! It should be a fun project!

Looking forward to seeing your results. A generic way to catch the most obvious quality issues in a config would surely help a lot of users.

If you have something to start with, I can help create some rules for common issues. Not everything is caught by Klipper's own parsing rules; there are quite a few recurring issues where a config seems valid but essentially is not, for example points 2 and 3 above.

Hey @Sineos, where is the best place to data-mine issues people have with the config file? I'm currently finishing up collecting from the GitHub issues, but that's a small data set.

Well, this place and the official Klipper Discord should definitely contain the highest concentration of such topics.

@labradorius,

Iā€™ll be following this development. Let me know if you need help testing!

If you are interested, I can add topics that I stumble upon here in the thread, for example:

Just as a first start.

I published the current state of the project here, in case you're interested in contributing, but I'll warn you that it is liable to change.
https://github.com/chexxor/klipper-cfg-linter

There are only a few linting rules in there, in the klipperlint/rules directory, to explore how those might look. I haven't thought deeply about the rule format yet, like how we should define and store the rules, so I'm looking for input on that.
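
For discussion, one direction would be one small Python module per rule. This is not what the repository currently contains, just a strawman to react to:

    # Strawman rule module - not the current format in klipperlint/rules.
    RULE_ID = "tmc-hold-current"
    SEVERITY = "warning"

    def check(sections):
        """Flag hold_current in TMC driver sections; the Klipper TMC docs
        generally advise against setting it."""
        findings = []
        for name, options in sections.items():
            if name.startswith("tmc") and "hold_current" in options:
                findings.append((SEVERITY, name, "hold_current is set; consider removing it"))
        return findings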

@labradorius,

Thank you for letting me know. I tested it on my setup and was able to locate a couple of bugs. I submitted PR #1.

Overall, this looks really well done so far.

@labradorius,

This looks like a fun project!

You can easily improve your test setup by adding a bunch of real-world, known-good configs to check the rules for false positives, because false positives are what discourages people from using such tools...

For example, you could leverage the sample configs from Klipper and from 3D printer projects that publish them (Voron, ...), and if you don't know the Klippain project, it may be worth a look to torture-test your parser and include resolver :).

I had a brief look at your rules, and the pin rule doesn't seem to be quite in line with the Klipper documentation: for example, that regex will not accept ^!PA3, nor does it know about multi-MCU pin names or pin aliases.
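
As an untested starting point, pieced together from my reading of the docs (and still ignoring details like whitespace after the modifiers), something along these lines would get closer:

    import re

    # Untested sketch of a pin-description pattern: optional pull-up (^) or
    # pull-down (~), optional inversion (!), an optional "mcuname:" prefix for
    # multi-mcu setups, then the pin name or alias.
    PIN_RE = re.compile(
        r"^[\^~]?"                        # pull-up / pull-down modifier
        r"!?"                             # inversion modifier
        r"(?:[A-Za-z_][A-Za-z0-9_]*:)?"   # multi-mcu prefix, e.g. "zboard:"
        r"[A-Za-z0-9_.]+$"                # pin name or alias, e.g. "PA3", "P1.24"
    )

    for sample in ("PA3", "^!PA3", "~zboard:PB7", "probe:z_virtual_endstop"):
        assert PIN_RE.match(sample), sample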

Which brings me to my unsolicited opinion: most value-adding rules will require code, so having YAML files around instead of one Python module per rule is not an obvious benefit.

@bozzo I appreciate your feedback! Regarding the rule format, I agree. I really wanted the rules to be written in a way that non-programmers could contribute to. Some simple rules could be expressed declaratively in a YAML format, but I can see that quickly becoming a small language of its own, which is exactly what I wanted to avoid.

I can look into some inference techniques: we could provide examples of valid and invalid values for a config property and have an inference tool derive a rule from them. That would be a neat way to write rules, assuming it could be made reliable enough.
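
As a toy example of what I mean, and nothing more than that: given known-good sample values for a numeric option, the "inferred rule" could simply be the range spanned by those samples:

    # Toy illustration of example-driven rule inference for a numeric option.
    def infer_range_rule(valid_samples):
        values = [float(v) for v in valid_samples]
        low, high = min(values), max(values)
        def rule(candidate):
            try:
                return low <= float(candidate) <= high
            except ValueError:
                return False
        return rule

    # e.g. rotation_distance values taken from known-good example configs
    check = infer_range_rule(["7.5", "8", "32", "40"])
    print(check("40"))    # True
    print(check("400"))   # False - far outside anything seen in the examples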
