Some Klipper extras have routines that generate output files, which are written to the /tmp folder. Recently I received a request to register /tmp as a root folder in Moonraker so these files would be available to frontends. I think it would be useful to provide access to these output files, so my initial thought was to add this support. However, after further consideration I don’t think it’s a good idea for Moonraker to serve /tmp. Any application on the system can write to it, and it would be reasonable for a developer to assume that this folder is not accessible remotely.
I think the better option would be to provide configuration for the output folder in Klipper. There are a couple of different approaches, and I’d like feedback on which is best:
1. Create an output_folder option that all extras use (sketched below). My guess is that this option could be in the [printer] section. Alternatively, each extra could have its own option.
2. Use the parent folder of the configured log file as the output folder. This would default to /tmp, but would likely be ~/printer_data/logs for most users.
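For illustration, a minimal sketch of what the first option could look like in a config file, assuming a new output_folder setting were added (the option name and placement are assumptions, not an existing Klipper feature):

```
[printer]
kinematics: cartesian
max_velocity: 300
max_accel: 3000
# hypothetical new option; extras would write their output files here
output_folder: ~/printer_data/logs
```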
I think it would be better to define some Klipper endpoints that let a remote process register its RPC handlers as G-code commands. Then another process on the system - be it a resonance-measuring tool or a raw data logger - could register its custom commands within Klipper.
So the user could emulate the current tooling by invoking a custom G-code command that the external process had registered.
Underneath, the external process receives the RPC call, subscribes to the klippy endpoint, and does some custom work (writes a CSV or whatever).
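The closest existing mechanism is the register_remote_method endpoint on Klipper’s API socket (see docs/API_Server.md), which lets a gcode_macro call out to an external process. A minimal sketch of the client side, assuming that mechanism - the socket path, method name, and the CSV logic are assumptions for illustration:

```python
#!/usr/bin/env python3
# Sketch: an external process registers a remote method with Klipper
# over its Unix Domain Socket API. Messages are JSON terminated by 0x03.
import json
import socket

UDS_PATH = "/tmp/klippy_uds"  # assumed; the path given to klippy.py via -a

def send(sock, msg):
    sock.sendall(json.dumps(msg).encode() + b"\x03")

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(UDS_PATH)

# A gcode_macro can now invoke this with:
#   {action_call_remote_method("start_raw_log")}
send(sock, {"id": 1, "method": "register_remote_method",
            "params": {"response_template": {"action": "start_raw_log"},
                       "remote_method": "start_raw_log"}})

buf = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buf += data
    while b"\x03" in buf:
        raw, buf = buf.split(b"\x03", 1)
        msg = json.loads(raw)
        if msg.get("action") == "start_raw_log":
            # ... subscribe to a klippy endpoint and write the CSV here ...
            print("remote method invoked:", msg.get("params"))
```

Registering a brand-new G-code command from an external process (rather than a macro-invoked remote method) is not currently supported; that is what the endpoints proposed above would add.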
Also, it could probably become a good go-to example of an extension within the Klipper source code.
So, instead of solving this problem here within the Klipper process, the functionality could be completely removed or refactored into something external, where we would need to care a little less.
I agree that serving the /tmp/ directory is probably not a good idea. Various Unix processes store files in /tmp/ that may contain sensitive information. Probably best not to have to deal with those kinds of security issues.
As for exporting data from Klipper, if you want my “2 cents”, I think we should avoid writing data files from Klipper and instead convert the remaining users to the “webhooks” api. At least in the upstream Klipper code, there are only 3 modules creating files in /tmp - pid_calibrate, resonance_tester, and the accelerometers. The raw accelerometer data is already available from the webhooks api. That leaves just the resonance_tester and pid_calibrate code. I don’t think it would be particularly difficult to convert those to use the webhooks api. (Specifically, these tools store the raw data in memory and then write it out as csv files at the end of the test - instead they could hold the last test’s data in memory until a webhooks query, and the various tools in scripts/ could query the data directly - as scripts/graph_mesh.py does.)
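A rough sketch of that pattern, using the existing webhooks register_endpoint mechanism (the module shape, endpoint name, and data layout here are assumptions for illustration, not existing Klipper code):

```python
# Sketch of a Klipper module that keeps the last test's data in memory
# and serves it via a webhooks endpoint instead of writing a CSV file.
# Would be loaded via a hypothetical [last_test_data] config section.
class LastTestData:
    def __init__(self, config):
        self.printer = config.get_printer()
        self.last_samples = None  # filled in at the end of a test run
        webhooks = self.printer.lookup_object('webhooks')
        webhooks.register_endpoint("resonance_tester/dump_last_test",
                                   self._handle_dump)
    def store(self, samples):
        # called by the test code instead of writing a csv file
        self.last_samples = samples
    def _handle_dump(self, web_request):
        if self.last_samples is None:
            raise self.printer.command_error("No test data available")
        web_request.send({"samples": self.last_samples})

def load_config(config):
    return LastTestData(config)
```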
While this sounds like the cleanest approach, wouldn’t it make using the more advanced functions impractical or impossible, or at best always dependent on support from the web interface?
For example, the beauty of calibrate_shaper.py lies in its additional command-line arguments, and it requires the measurement files.
Also, looking towards a future plug-in system, (persistent) file storage that could potentially be served seems handy.
To the best of my knowledge I don’t think that would be a limit. We’re already exporting the raw accelerometer data over the “uds” port, and local scripts like scripts/graph_mesh.py can already directly connect to the “uds” port to extract data.
There’s also been a lot of feedback that users don’t want to use ssh, run custom scripts, and download graphs. Making the information available on the UDS port has the secondary benefit of allowing frontends to directly provide a rich set of data.
Wouldn’t the current options offered by, e.g., calibrate_shaper then somehow be transferred into “features” of the web interface:
```
./scripts/calibrate_shaper.py -h
Usage: calibrate_shaper.py [options] <logs>

Options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output=OUTPUT
                        filename of output graph
  -c CSV, --csv=CSV     filename of output csv file
  -f MAX_FREQ, --max_freq=MAX_FREQ
                        maximum frequency to plot
  -s MAX_SMOOTHING, --max_smoothing=MAX_SMOOTHING
                        maximum shaper smoothing to allow
  --scv=SCV, --square_corner_velocity=SCV
                        square corner velocity
  --shaper_freq=SHAPER_FREQ
                        shaper frequency(-ies) to test, either a comma-
                        separated list of floats, or a range in the format
                        [start]:end[:step]
  --shapers=SHAPERS     a comma-separated list of shapers to test
  --damping_ratio=DAMPING_RATIO
                        shaper damping_ratio parameter
  --test_damping_ratios=TEST_DAMPING_RATIOS
                        a comma-separated list of damping ratios to test input
                        shaper for
```
Probably not impossible, but it would create additional dependencies and effort.
I’m suggesting that we work on modifying calibrate_shaper.py to read its data from the “uds” port instead of from a CSV file.
That is, the current workflow is: klippy.py obtains a bunch of data and writes it to a csv file, and the user then runs calibrate_shaper.py on that csv file. Instead of working on making that csv file more accessible, I’m suggesting we change the workflow so that klippy.py obtains the data and makes it available via its standard “webhooks” json interface, and if the user runs calibrate_shaper.py it reads the data from that “webhooks” interface. See scripts/graph_mesh.py as an example of this.
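A minimal sketch of what that client side could look like, reusing the hypothetical dump endpoint from the earlier sketch (the socket path and endpoint name are assumptions; scripts/graph_mesh.py shows the real pattern):

```python
#!/usr/bin/env python3
# Sketch: fetch the last test's data from Klipper's API socket instead
# of reading a csv file. Requests are JSON terminated by 0x03.
import json
import socket

UDS_PATH = "/tmp/klippy_uds"  # assumed; the path given to klippy.py via -a

def query(method, params=None):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(UDS_PATH)
    req = {"id": 1, "method": method, "params": params or {}}
    sock.sendall(json.dumps(req).encode() + b"\x03")
    buf = b""
    while b"\x03" not in buf:
        buf += sock.recv(4096)
    return json.loads(buf.split(b"\x03", 1)[0])

resp = query("resonance_tester/dump_last_test")  # hypothetical endpoint
samples = resp.get("result", {}).get("samples")
# ... run the existing shaper-fitting logic on `samples` ...
```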
At some point in the future, of course, someone could implement a web interface, but I don’t think that is a prerequisite.
Ah, this is where I went wrong. Thanks for clarifying and sorry for being a bit slow-witted.
Essentially, it means the functionality of the script stays the same, but instead of reading the data from a file, it fetches it via the webhooks interface.