Feature Request: Binary G-Code (BG-Code) support

After the introduction of Binary G-Code in PrusaSlicer 2.7, it would be great to have support for that file format in Klipper.
There’s a library, libbgcode, under AGPL that offers Python bindings.
I’m not sure how to incorporate that into Klipper, but my assumption would be to do it in klipper/klippy/extras/virtual_sdcard.py:

  • Add the extension(s) to VALID_GCODE_EXTS
  • Extend _load_file to detect and handle the new format, e.g. by decoding it to a temporary file or an in-memory stream, which is then used like a “normal” G-code file would be
  • Remove/delete that temporary file after the print has finished
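
For the detection step, binary G-code files start with the ASCII magic number “GCDE”, so _load_file could sniff the header instead of relying on the extension alone. A rough sketch (the decoder invocation and the function names are assumptions, not existing Klipper code):

```python
import os
import subprocess
import tempfile

# Binary G-code files begin with the ASCII magic "GCDE", so detection
# does not need to rely on the file extension alone.
BGCODE_MAGIC = b"GCDE"

def is_bgcode(path):
    # Compare the first four bytes of the file against the magic number.
    with open(path, "rb") as f:
        return f.read(4) == BGCODE_MAGIC

def decode_to_temp(path):
    # Hypothetical decode step: shell out to the `bgcode` command line
    # tool from libbgcode (tool name and arguments are assumptions here).
    # virtual_sdcard would then use the temporary file in place of the
    # original and delete it after the print has finished.
    fd, tmp_path = tempfile.mkstemp(suffix=".gcode")
    os.close(fd)
    subprocess.check_call(["bgcode", path, tmp_path])
    return tmp_path
```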

In my opinion that should work; what do you think?

Additionally, the common frontends would have to add the binary G-code extension(s) to their lists of supported formats, too.

What benefits do you see?

Personally I see none, in fact rather disadvantages:

  • It is even more obscure than plain G-code and will not allow for any analysis of why things might go wrong
  • Due to Klipper’s architecture there is no benefit, since Klipper does not suffer from the serial-connection bottleneck that other firmwares do

There are not many benefits, except for:

  1. saving the space claimed by G-code files uploaded to the commonly used small SD cards (as stated, on average about 70%)
  2. faster uploads on slow networks
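
To put rough numbers on point 2 (the link speed and the quoted ~70% reduction are purely illustrative):

```python
def upload_seconds(size_mb, mbit_per_s):
    # Transfer time ignoring protocol overhead: megabits / (megabits per second).
    return size_mb * 8 / mbit_per_s

# A 100 MB file on a 10 Mbit/s uplink takes 80 s as plain text...
plain = upload_seconds(100, 10)
# ...and roughly 24 s at 30% of the original size.
binary = upload_seconds(100 * 0.3, 10)
```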

What capacity do small SD cards have?

I usually run with 32 GB. That is plenty of room for a lot of G-code files, unless you do extensive time-lapsing.

Interesting. Someone still using 10Base-T?

From the Docs:

A new G-code file format featuring the following improvements over the legacy G-code:

Block structure with distinct blocks for metadata vs. G-code
Faster navigation
Coding & compression for smaller file size
Checksum for data validity
Extensibility through new (custom) blocks. For example, a file signature block may be welcomed by corporate customers.
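
The “faster navigation” point follows from that block structure: a reader can jump from block to block using the size fields instead of scanning G-code lines. A toy sketch of the idea (the field widths and ordering below are simplified assumptions, not the actual libbgcode layout):

```python
import struct

# Toy block header: type, compression flag, payload size. A reader can
# skip straight to the metadata block using the size field, without
# parsing any G-code. NOTE: this layout is a simplified illustration,
# not the exact format from the libbgcode specification.
BLOCK_HEADER = struct.Struct("<HHI")

def pack_block(block_type, compression, payload):
    # Prefix the payload with its header.
    return BLOCK_HEADER.pack(block_type, compression, len(payload)) + payload

def iter_blocks(data):
    # Walk the buffer block by block, using the size field for fast skips.
    offset = 0
    while offset < len(data):
        btype, comp, size = BLOCK_HEADER.unpack_from(data, offset)
        offset += BLOCK_HEADER.size
        yield btype, comp, data[offset:offset + size]
        offset += size

blob = pack_block(1, 0, b"metadata") + pack_block(2, 0, b"G28\nG1 X10\n")
```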

What capacity do small SD cards have?

I usually run with 32 GB. That is plenty of room for a lot of G-code files, unless you do extensive time-lapsing.

8 GB cards are for sure still used a lot out there, and not everyone, apart from you and me, may be clever enough to use a larger SD card.

Interesting. Someone still using 10Base-T?

Nothing to add besides the cleverness thing above. Greetz from 1 Gbps :wink:

Klipper’s basic development philosophy is to only include features that really provide a meaningful and measurable impact (see Feature idea: bed temperature loss compensation - #21 by koconnor).

Personally, I very much agree with this approach, since not everything that is possible also makes sense. On the contrary, it can generate a burden in documentation, support, code maintainability, etc.

Especially for support, it is not uncommon to ask for a sliced G-code file, e.g. to see if macros or similar are doing something weird. From this aspect alone, this feature, in my view, creates more disadvantages than advantages.
In the same sense, diagnosing whether the slicer is doing something “bad” is quite impossible when using this format.

Agreed on keeping things simple, and the debugging/support aspect is true, too.
If I had enough time, I’d try out how it could work, just out of curiosity.

@3dPrintingGeek

I think @Sineos has articulated why the Klipper team would be reluctant (this is not a strong enough word) to consider supporting Binary G-code, but what I’d like to understand is: what problems in Klipper do you think adding this feature would address?

Maybe we’re missing something here, but unless there is a real, tangible and demonstrable problem, this is (rightly so) a non-starter.

From my perspective, the ability to read G-code is needed by a small subset of makers; everyone else just wants to get a thing uploaded quickly. I don’t see why binary couldn’t be converted into text for debugging purposes; there shouldn’t be any data loss.

Uploading 100 MB+ files has a noticeable delay (easily a few seconds on an okay network). Then there is a lot of time spent by Moonraker (as I understand the process) parsing the text to enable the individual object cancellation feature; this adds another 30-40 seconds or more before the printer starts to do anything at all, while the binary format should enable a way to speed this up by using the mentioned metadata.

Besides, the binary format can be augmented with a number of other things in the future. For example, the inclusion of integrity validation could be a nice thing, making it possible to detect issues caused by garbage SD cards, a bad network connection while uploading, etc.

Comparing binary G-code support to something like the CAN bus implementation, I’m not sure why the former could be rejected on the grounds of not being useful enough. One could argue CAN bus introduces the same issues, e.g. harder debugging and extra complexity in the process with little return.

We definitely see things from opposite perspectives.

I always prefer person-readable data for the simple reason that a quick scan will immediately show whether you have a problem. I don’t know how many people debug raw G-code, but I do know it’s useful to look at quickly if you’re not sure your slicer output is correct or whether a G-code file is corrupted.

As for upload speed, what percentage of the whole printing process is spent uploading/parsing? For files under 20 MB (which is a pretty big/complex model) it’s less than five seconds, but print time is usually 5 hours or more (say a 1:3600 ratio). I don’t think I’ve ever done a 100+ MB G-code file, but I don’t imagine the percentage of time spent uploading/parsing would be significant. Assuming the ratio is the same, it would be on the order of 2 minutes, and I doubt going with binary G-code would reduce the wait to the second or so I see for most G-code files.

Your third paragraph is interesting, because you’re suggesting adding complexity and code to the system to check the integrity of binary G-code when, as noted above, this is available by just visually scanning the raw G-code.

As people here know, I really like CAN bus toolheads. CAN electrically (and mechanically) simplifies the system and, as I discovered when I built my Voron 2.4, it is very cost-competitive compared to a Stealthburner toolhead PCB and cabling. I don’t know how you can assert that debugging is harder or that it adds complexity to the system; I would argue that the opposite is true. I know there are people here that work with multiple CAN bus toolheads because connecting/disconnecting the four-wire connector is trivial. Personally, I would never work with a printer without a CAN toolhead (and I’ve put them on all five of my printers).

How many people actually know how to read G-code? Especially now that the price of ready-to-run machines has come down to the point where building your own thing, or tinkering with something like an Ender 3, has become more of a hobby than a way of getting something reliable for a reasonable price. Then again, as mentioned before, binary doesn’t mean lossy: instead of opening the file in a text editor, you would open it in a binary G-code reader and get your raw G-code commands to validate. It’s an extra step which can be automated, so it shouldn’t add any complexity for whoever prefers to read G-code. And nobody prevents one from unchecking the “binary G-code” checkbox in the slicer.

Everyone’s usage may vary; I upload files over 100 MB quite often, and that’s my use case. It’s annoying that I need to wait and check the status to make sure the upload is still in progress, and that the printer didn’t refuse to start printing for whatever reason. From a usability perspective, I just want the file to be sent quickly, immediately hear confirmation from the printer, and go about my business. I know that compared to 5, 12, etc. hours of printing that’s not long, but those hours are spent by the machine, not by me, so I don’t think it’s comparable anyway. Sure, it still works when it works slowly, but we are talking about improvements here, so I think it’s valid.

About integrity, I just want to reiterate that it’s just an idea, something that could potentially be implemented.
I doubt you can always read and validate the whole file; you might be able to spot an error at the beginning, but you won’t be able to tell if something went wrong in the middle. It’s just a good measure to validate data you send over the network (or even download from the internet), as well as to validate that what the machine is executing is what it is supposed to be. Additionally, from what I’ve seen over the years, people often use crappy cards they got with something else for free, which sometimes results in failed prints. That’s especially annoying if the print was long; in many cases it could probably be prevented by validating the file before starting to print, so between plastic waste and extra safety checks in software, I would certainly choose the latter.
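
The validation idea sketched above could be as simple as comparing a digest computed on the host against one supplied alongside the upload. A minimal sketch (how a slicer would ship that digest is left open; the helper names are made up):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    # Stream the file in chunks so large G-code files don't need to fit in RAM.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_upload(path, expected_hex):
    # Compare the on-disk digest against the one supplied with the upload.
    return file_sha256(path) == expected_hex
```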

I’m sorry, I’m failing to see how CAN bus simplifies anything. It adds extra unnecessary weight to the toolhead (not much, but still), it creates an extra point of failure, and it also adds cost (not sure about the Voron, which is not a very cost-effective project overall, but from what I see, the wires I needed cost a fraction of the price of a board with CAN bus). As for the benefit of easily disconnecting it, I would argue that’s a very specialised workflow for when you do a lot of tinkering, upgrading, etc. The last time I had to disconnect something on mine was about two years ago, when I decided to change the cooling system, though I don’t think CAN bus would have helped me there.
Just to point out: I’m not saying CAN bus is bad or anything. I’m saying that from my perspective faster uploads could be more important for some users, and would also impact 100% of users, while CAN is used by few.

This isn’t supported by any other slicer, so it will only benefit PrusaSlicer users. That benefit also seems to be very small: a small difference in upload time (well under a minute in most cases), and the ability to store a couple of thousand G-code files instead of several hundred on commonly used 32 GB SD cards. Unless the code to add support is trivial, it doesn’t seem worth the effort and ongoing maintenance.

I found that in many cases zipping a text G-code file is as efficient as or more efficient than binary G-code: about a 2.5-3x reduction with bgcode and about a 2-10x reduction with zipping text G-code (I tried this on a lot of G-codes I have printed).

I think this is a more practical solution than binary G-code, especially with Klipper: just generate the G-code, zip it, and send it; the printer host unzips it and prints like normal. If you need to do troubleshooting, you can simply unzip the file and read it with a text editor. It should be easier to implement on the Klipper side as well: just run an unzip and pass the result back to the G-code handling. Even better is the fact that zip is a well-established compression format. The zipping feature could be added to the slicer once there’s enough interest in quick uploads.
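
The round trip described above needs nothing beyond the Python standard library. A sketch with made-up sample G-code:

```python
import gzip

# Repetitive G-code compresses very well; fabricate a sample file.
gcode = ("G28\n" + "G1 X10.0 Y10.0 E0.5 F1800\n" * 1000).encode()

# Compress on the slicer/uploader side...
packed = gzip.compress(gcode)

# ...and decompress on the host side: the identical text comes back,
# ready for printing or for reading in any text editor.
restored = gzip.decompress(packed)
```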

Yes, uploads are slow, especially on a 10 Mbit line. Furthermore, most people use Wi-Fi, which is even slower depending on location, so you might have to keep a laptop open for longer than you’d like.

Other software also uses zipping, MS Office for example: try renaming a .docx file to .zip and you will see what it’s made of, all for the sake of easier network transfers. You can also have a couple of different files in the zip, such as storing the thumbnail as a separate JPG file (I believe that is more efficient than base64 in the G-code, both to store and to display).

The only downside of this is the slightly higher memory usage, but with most hosts having at least 512 MB of RAM, that seems less of an issue (and even now I think Raspberry Pi OS can use virtual memory).

We have two different issues here, which can be broken down based on where things run: the Klipper firmware (klipper) that resides on the printer’s controller board, and the host application (klippy) that runs on the RPi/Linux/Windows host connected via USB to the 3D printer. The USB connection is actually a serial converter and has a relatively low bandwidth (like modems back in the day).

When you upload G-code using Mainsail or Fluidd, or put it on the SD card, allowing the host to unzip and process the G-code would suffice. The files could have a .gcode.zip extension, or even .zip. When the user selects a zip, the host machine would decompress the zip file to a temporary location and start “streaming” the G-code instructions to klipper (the firmware on the 3D printer). To make it simple: let’s say it looks for the first .gcode file in the zip archive and processes it.

That being said, Fluidd and Mainsail run on top of nginx (an HTTP server), so if the data transfer accepts gzip encoding, the compression and decompression can be done on the fly by the web browser and the nginx server, with the G-code gzipped in chunks as it travels over the network.

Now on to the Klipper firmware (the firmware running on the 3D printer):
G-codes are a set of text-based instructions that tell Klipper what to do next. This is done over the serial connection via the USB cable, and there is a queue that’s constantly being filled by the host. If the Klipper firmware accepted some kind of binary instruction set, then the G-codes could be converted on the fly from text to binary, which could allow a larger number of instructions to be transferred to the firmware and thus a larger queue. The firmware would not have to spend time translating the text into a binary set of instructions, as it would already be in a machine-level format. This is how languages like Python work: they compile the human-readable text instructions to runtime bytecode on the fly. I am not sure if this is how Klipper is doing it now anyway.

This is a misconception of how Klipper works.
The gcode is processed in the Klippy host application and only compressed step information is sent to the firmware that is running on the board.

Again, nothing gained by transferring binary gcode to the host application.

FWIW, Moonraker already supports extracting gcode files and thumbnails from Ultimaker UFP files. The thing to keep in mind is that the upload request will not return until the gcode file is extracted, which takes time for large files…so you don’t save space on disk or time in the end. You do save bandwidth on the upload.

We could add support for gzipped files that includes thumbnails and metadata in the archive (rather than in the gcode). In this scenario, Klipper would need to support reading from gzips, and Moonraker only needs to extract the thumbs and metadata. Of course slicers would need to support this format.
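
For the part about Klipper reading from gzips, a transparent open is a small amount of code. A sketch (the helper name is hypothetical, not existing Klipper or Moonraker code):

```python
import gzip
import io

def open_gcode(path):
    # Return a text-mode file object for either plain or gzipped
    # G-code; the caller can then iterate over lines as usual.
    with open(path, "rb") as f:
        magic = f.read(2)
    if magic == b"\x1f\x8b":  # gzip magic bytes
        return io.TextIOWrapper(gzip.open(path, "rb"))
    return open(path, "r")
```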

Just another perspective, but the use of binary G-code is not entirely about file size; it is also about the efficiency of processing the file. This is why they didn’t just choose a compression algorithm, as that would add CPU overhead.

A similar comparison is choosing gRPC instead of REST when building your server architecture. The main benefit of gRPC is that you remove the parsing of strings (JSON), and the binary code can be mapped directly into existing data structures without all the serialization overhead. This dramatically reduces CPU usage and allows you to scale better.
This is also similar to why klipper uses a binary protocol between the host and MCU. It comes down to processing efficiency.

tl;dr binary is machine language, and plain gcode is human language. There’s significant overhead in processing human language so a machine can understand it.
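
A toy illustration of that processing difference (the binary record layout below is invented for the example; it is not bgcode or any real protocol):

```python
import struct

text_cmd = "G1 X10.0 Y20.0 F3000"

def parse_text(cmd):
    # Tokenize, strip each parameter letter, convert every value from a string.
    parts = cmd.split()
    return {p[0]: float(p[1:]) for p in parts[1:]}

# The same move as a fixed-size binary record: X, Y, F as three floats.
MOVE = struct.Struct("<fff")
binary_cmd = MOVE.pack(10.0, 20.0, 3000.0)

def parse_binary(record):
    # A single fixed-size unpack; no string scanning or float() conversion.
    x, y, f = MOVE.unpack(record)
    return {"X": x, "Y": y, "F": f}
```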

Now to the discussion at hand - does klipper need this? Probably not. But if the format takes off and becomes popular over time (it’s an open standard after all), why not support it? If it becomes the norm, we don’t want klipper being left behind and slicers listing it as a “legacy” option… Of course it’s way too early to tell if that’s going to happen.

This reminds me a lot of the arguments around compression in general over the years. I love the arguments around wanting to be able to read what is intrinsically a machine-intended language, and that because storage is plentiful, let’s be wasteful. That last argument has loomed large over time in the world of HTTP and in images. I’m glad my phone’s storage allows me to hold more than five images, thanks to the image compression used!

Companies like Splunk make money because of this continued outmoded thought process that says ‘let’s log everything’ and ‘human-readable is good!’. Neither of these is worthwhile. The Klipper log files are far less valuable than they could be because of the amount of noise in the stream. And when a human does need to troubleshoot at that level, decoders are very common in the development world, whether for binary JSON files or machine executables, etcetera.

The only valid arguments are: does the new format allow for a better end-user experience, and if so, how does this feature rank in importance against other features, given limited developer resources?

Since web servers already compress HTTP transfers with gzip, wouldn’t it be possible for the upload to Moonraker to also have transparent compression for the network transfer, without needing to manually zip the G-code?

Maybe it’s already done, and binary wouldn’t even improve anything.

But this seems to be a Moonraker/Mainsail issue/feature request anyway, not Klipper’s.

Thank you. As someone looking to get into Klipper, this dismissive attitude was the first thing I saw and concerns me. I am glad to see a rational approach.

Here are my questions about almost any feature:

  • What are the advantages?
  • What are the disadvantages?
  • How does this compare to alternatives? (e.g. file compression)
  • How much dev time would it take to implement?
  • How much dev time would it take to maintain?
  • How much complexity does it add to the project as a whole?

In the case of binary G-code, I suspect the largest issue would be with the API server protocol, which is a variant of JSON Lines but with an extremely non-standard separator character.
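
For context, Klipper’s API server delimits each JSON message with an ASCII 0x03 byte, so raw binary data would have to be wrapped in an encoding such as base64 to pass through it. A sketch (the upload_chunk method and data field are hypothetical, not part of the real API):

```python
import base64
import json

SEP = b"\x03"  # Klipper's API server terminates each JSON message with 0x03

def frame(messages):
    # Serialize each message as JSON and terminate it with the separator.
    return b"".join(json.dumps(m).encode() + SEP for m in messages)

def unframe(stream):
    # Split on the separator and decode every non-empty chunk.
    return [json.loads(chunk) for chunk in stream.split(SEP) if chunk]

# Binary payloads would need to be encoded to survive the JSON framing:
payload = {"method": "upload_chunk",
           "data": base64.b64encode(b"GCDE\x00\x01").decode()}
```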