Stepper.error: Internal error in stepcompress

Basic Information:

Printer Model: Voron 2.4
MCU / Printerboard: Octopus v1.1 + CAN bus + Cartographer
Host / SBC: RPi 4
klippy.log

klippy(25).log (413.2 KB)


Describe your issue:

So I upgraded to a Phaetus Rapido and added a filament sensor. I ran Ellis's tuning tests all day, and once I was reasonably happy I updated everything in the Machine section of Mainsail.

Since that moment I have been getting random "stepper.error: Internal error in stepcompress" errors, but they are preceded by:

5:48 PM
Internal error on command:"PRINT_START"
5:48 PM
Internal error on command:"BED_MESH_CALIBRATE"

I then hit the E-stop because the stepper(s) do not reach their intended position, so subsequent moves result in crashes. It does not abort by itself; it will try to continue, with disastrous results.

This happened three times yesterday, and then I decided to reslice (SuperSlicer) and it worked.

Today, with new files, it instantly did it again. Yesterday it happened while it was about to do the QGL, and I saw it move strangely while making a wrong noise. I stopped it because the next corner would have been outside the bed area.

Today it did it while starting the bed mesh: again a wrong noise and an incomplete move. I hit the E-stop again.

I will reslice this file and check again, but is anybody else seeing this after the last Klipper update?

My version now is v0.13.0-190

I realise it is dirty, because as a buyer I am at the mercy of the producers of aftermarket stuff. The Cartographer changes things, and Cartographer does not know how to solve this issue, so we are stuck. But this setup, minus the new hotend and the filament sensor, has been working for many months.

EDIT: resliced and it worked again!!! Do we have some slicer bug???

Not a slicer bug.
That error generally means something has gone horribly wrong inside the motion planning.

Dumping trapq 'toolhead' 11 moves:
move 0: pt=430.932626 mt=0.498709 sv=500.000000 a=0.000000 sp=(53.888889,25.000000,2.000000) ar=(1.000000,0.000000,0.000000)
move 1: pt=431.431334 mt=0.015744 sv=500.000000 a=-9000.000000 sp=(303.243314,25.000000,2.000000) ar=(1.000000,0.000000,0.000000)
move 2: pt=431.447079 mt=0.029515 sv=358.301054 a=-9000.000000 sp=(310.000000,25.000000,2.000000) ar=(1.000000,0.000000,0.000000)
move 3: pt=431.476594 mt=0.000517 sv=92.663595 a=9000.000000 sp=(316.655172,25.000000,2.000000) ar=(0.995185,0.098017,0.000000)
move 4: pt=431.477111 mt=0.004689 sv=97.313581 a=0.000000 sp=(316.704013,25.004810,2.000000) ar=(0.995185,0.098017,0.000000)
move 5: pt=431.481800 mt=0.005672 sv=97.313581 a=-9000.000000 sp=(317.158103,25.049534,2.000000) ar=(0.995185,0.098017,0.000000)
move 6: pt=431.487472 mt=0.003642 sv=46.261908 a=9000.000000 sp=(317.563351,25.089448,2.000000) ar=(0.956940,0.290285,0.000000)
move 7: pt=431.491114 mt=0.005773 sv=79.036348 a=0.000000 sp=(317.781671,25.155674,2.000000) ar=(0.956940,0.290285,0.000000)
move 8: pt=431.496887 mt=0.003642 sv=79.036348 a=-9000.000000 sp=(318.218310,25.288127,2.000000) ar=(0.956940,0.290285,0.000000)
move 9: pt=431.500528 mt=0.003642 sv=46.261908 a=9000.000000 sp=(318.436630,25.354354,2.000000) ar=(0.881921,0.471397,0.000000)
move 10: pt=431.504170 mt=0.005773 sv=79.036348 a=0.000000 sp=(318.637834,25.461900,2.000000) ar=(0.881921,0.471397,0.000000)
Requested toolhead position at shutdown time 431.404493: (289.82245833334423, 25.0, 2.0)
b'stepcompress o=8 i=0 c=6 a=0: Invalid sequence'
b'stepcompress o=8 i=0 c=6 a=0: Invalid sequence'
b'stepcompress o=8 i=0 c=6 a=0: Invalid sequence'
b'stepcompress o=8 i=0 c=6 a=0: Invalid sequence'
Internal error on command:"BED_MESH_CALIBRATE"
Traceback (most recent call last):
  File "/home/kees/klipper/klippy/gcode.py", line 212, in _process_commands
    handler(gcmd)
  File "/home/kees/klipper/klippy/gcode.py", line 140, in <lambda>
    func = lambda params: origfunc(self._get_extended_params(params))
  File "/home/kees/klipper/klippy/extras/scanner.py", line 3224, in cmd_BED_MESH_CALIBRATE
    self.calibrate(gcmd)
  File "/home/kees/klipper/klippy/extras/scanner.py", line 3413, in calibrate
    clusters = self._sample_mesh(gcmd, path, speed, runs)
  File "/home/kees/klipper/klippy/extras/scanner.py", line 3576, in _sample_mesh
    self._fly_path(path, speed, runs)
  File "/home/kees/klipper/klippy/extras/scanner.py", line 3494, in _fly_path
    self.toolhead.manual_move([x, y, None], speed)
  File "/home/kees/klipper/klippy/toolhead.py", line 493, in manual_move
    self.move(curpos, speed)
  File "/home/kees/klipper/klippy/toolhead.py", line 485, in move
    self._process_lookahead(lazy=True)
  File "/home/kees/klipper/klippy/toolhead.py", line 367, in _process_lookahead
    self._advance_move_time(next_move_time)
  File "/home/kees/klipper/klippy/toolhead.py", line 324, in _advance_move_time
    self._advance_flush_time(flush_time)
  File "/home/kees/klipper/klippy/toolhead.py", line 304, in _advance_flush_time
    sg(sg_flush_time)
  File "/home/kees/klipper/klippy/stepper.py", line 252, in generate_steps
    raise error("Internal error in stepcompress")
stepper.error: Internal error in stepcompress
Internal error on command:"BED_MESH_CALIBRATE"

Generally, I would suspect that some movement tries to happen with infinite speed.
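To illustrate with toy numbers (this is not Klipper's actual stepcompress code, just the idea): if a move's duration collapses toward zero, the interval between steps, measured in MCU clock ticks, collapses with it, and the steps can no longer be scheduled at increasing clock times - which is the kind of condition the step compression layer rejects.

MCU_FREQ = 180_000_000       # Octopus v1.1 timer runs at 180 MHz
MM_PER_STEP = 40 / 200 / 32  # 20T GT2 pulley, 1.8 deg motor, 32 microsteps

def ticks_per_step(distance_mm, move_time_s):
    # MCU clock ticks between successive steps of a constant-speed move
    steps = round(distance_mm / MM_PER_STEP)
    return move_time_s / steps * MCU_FREQ

print(ticks_per_step(5.0, 0.010))      # ~2250 ticks/step: easily schedulable
print(ticks_per_step(5.0, 0.000001))   # ~0.2 ticks/step: several steps would have
                                       # to share one clock tick, i.e. "infinite" speed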

Why would it try to move with infinite speed all of a sudden? Do you have anything from the klippy log that indicates this?

Yesterday, three attempts with the same sliced file did exactly the same thing in the same spot. I resliced the part and it worked.

Today, one attempt with this error; I resliced it immediately, and now it works.

Not saying it has to be the slicer; it could be a horrible coincidence, of course. But either way, it started happening after the Klipper update. I never had this error before.

Am I reading correctly? A G-code file that worked yesterday errors today.
Then
New G-code of the same object, sliced with ALL the same settings, works?

Slicers can have a hiccup from time to time.
Re-slicing usually does the job.

I had it too some days ago.

But a reprint from an old g-code file that did print suddenly won’t?

Yes I know there is some (pseudo)randomness in slicer output.

Did the OP mention that? Maybe I overlooked it?

That’s what I’m trying to figure out. On first read I thought “reprint”. On the 2nd pass I wondered if the first print “today” was a fresh slice.


As I understand:
3 fails on one day.

One fail on the next day, and success with the newly sliced file after that.


To clear up:

Both files were new files of different parts, because I just finished a major rework of the toolhead, so all new tuning and settings.

I printed the first new file straight after the upgrade, and it did exactly the same thing in the same place three times. I then decided to reslice, which worked.

Then, one day later, I printed a new file of a totally different part, and it did a similar thing but in a slightly different place. I decided immediately to reslice, which worked.

The first file's error came when it started the QGL; the second file errored when it started the bed mesh.

In both instances it did not abort the print and would have crashed horribly.

Yes, slicers can have a hiccup, agreed. I have had different hiccups in the past too. But twice in a row after a major Klipper update is perhaps too ominous.

After my previous explanation to clear up the misunderstanding, I sliced a new part and it did it again. I resliced it without changing any settings, just moved the object on the bed so it had to reslice, and this time it repeated the error.

I think it still does not prove or disprove anything, except that this problem is persistent, dangerous (because it does not abort), and random.

klippy(26).log (302.6 KB)

This klippy log is from the first attempt; the terminal reported the following:

5:39 AM
Klipper state: Disconnect
5:39 AM
Empty clusters found
5:39 AM
Empty clusters found
Try increasing mesh cluster_size or slowing down.
The following clusters were empty:
(226.207,58.621)[20,2]
(235.517,67.931)[21,3]
(300.690,77.241)[28,4]
(310.000,86.552)[29,5]
(40.000,105.172)[0,7]
(58.621,114.483)[2,8]
(67.931,123.793)[3,9]
(133.103,133.103)[10,10]
(142.414,142.414)[11,11]
(151.724,151.724)[12,12]
(170.345,161.034)[14,13]
(179.655,170.345)[15,14]
(244.828,179.655)[22,15]
(254.138,188.966)[23,16]
(263.448,198.276)[24,17]
(282.069,207.586)[26,18]
(291.379,216.897)[27,19]
(235.517,244.828)[21,22]
(300.690,254.138)[28,23]
(310.000,263.448)[29,24]
5:39 AM
Empty clusters found
5:39 AM
Empty clusters found
Try increasing mesh cluster_size or slowing down.
The following clusters were empty:
(226.207,58.621)[20,2]
(235.517,67.931)[21,3]
(300.690,77.241)[28,4]
(310.000,86.552)[29,5]
(40.000,105.172)[0,7]
(58.621,114.483)[2,8]
(67.931,123.793)[3,9]
(133.103,133.103)[10,10]
(142.414,142.414)[11,11]
(151.724,151.724)[12,12]
(170.345,161.034)[14,13]
(179.655,170.345)[15,14]
(244.828,179.655)[22,15]
(254.138,188.966)[23,16]
(263.448,198.276)[24,17]
(282.069,207.586)[26,18]
(291.379,216.897)[27,19]
(235.517,244.828)[21,22]
(300.690,254.138)[28,23]
(310.000,263.448)[29,24]
5:39 AM
RESTART
5:39 AM
FIRMWARE_RESTART
5:39 AM
FIRMWARE_RESTART
5:39 AM
FIRMWARE_RESTART
5:39 AM
Samples binned in 0 clusters
5:39 AM
Sampled 0 total points over 2 runs
5:39 AM
Klipper state: Shutdown
5:38 AM
Run Current: 0.49A Hold Current: 0.49A
5:38 AM
Run Current: 0.49A Hold Current: 0.49A
5:38 AM
Run Current: 0.80A Hold Current: 0.80A
5:38 AM
Run Current: 0.80A Hold Current: 0.80A
5:38 AM
Run Current: 0.80A Hold Current: 0.80A
5:38 AM
Run Current: 0.80A Hold Current: 0.80A
5:38 AM
Run Current: 0.49A Hold Current: 0.49A
5:38 AM
Run Current: 0.49A Hold Current: 0.49A
5:38 AM
Retries: 0/5 Probed points range: 0.003978 tolerance: 0.007500
5:38 AM
Making the following Z adjustments:
stepper_z = -0.000760
stepper_z1 = 0.007926
stepper_z2 = -0.005468
stepper_z3 = -0.001697
5:38 AM
Average: 7.918709
5:38 AM
Actuator Positions:
z: 7.919469 z1: 7.910783 z2: 7.924177 z3: 7.920406
5:38 AM
Gantry-relative probe points:
0: 7.918570 1: 7.916377 2: 7.920355 3: 7.920219
5:38 AM
probe at 280.000,70.000,2.054 is z=1.973732
5:38 AM
probe at 280.000,250.000,2.041 is z=1.961369
5:38 AM
probe at 70.000,250.000,2.049 is z=1.964890
5:38 AM
probe at 70.000,70.000,2.047 is z=1.965834
5:38 AM
Retries: 1/5 Probed points range: 0.009710 tolerance: 1.000000
5:38 AM
Making the following Z adjustments:
stepper_z = -0.021014
stepper_z1 = 0.022333
stepper_z2 = -0.015410
stepper_z3 = 0.014091
5:38 AM
Average: 7.922932
5:38 AM
Actuator Positions:
z: 7.943946 z1: 7.900599 z2: 7.938342 z3: 7.908842
5:38 AM
Gantry-relative probe points:
0: 7.929111 1: 7.919401 2: 7.924532 3: 7.920617
5:38 AM
probe at 280.000,70.000,2.057 is z=1.977980
5:38 AM
probe at 280.000,250.000,2.042 is z=1.966895
5:38 AM
probe at 70.000,250.000,2.046 is z=1.965514
5:38 AM
probe at 70.000,70.000,2.029 is z=1.957724
5:38 AM
Retries: 0/5 Probed points range: 1.204964 tolerance: 1.000000
5:38 AM
Making the following Z adjustments:
stepper_z = -1.182815
stepper_z1 = 1.660510
stepper_z2 = 0.769668
stepper_z3 = -1.247363
5:38 AM
Average: 7.916258
5:38 AM
Actuator Positions:
z: 9.099073 z1: 6.255747 z2: 7.146590 z3: 9.163621
5:38 AM
Gantry-relative probe points:
0: 8.539243 1: 7.444686 2: 7.709640 3: 8.649651
5:38 AM
probe at 280.000,70.000,1.320 is z=1.969171
5:38 AM
probe at 280.000,250.000,2.253 is z=1.962911
5:38 AM
probe at 70.000,250.000,2.517 is z=1.961707
5:37 AM
probe at 70.000,70.000,1.435 is z=1.973764
5:37 AM
Run Current: 0.49A Hold Current: 0.49A
5:37 AM
Run Current: 0.49A Hold Current: 0.49A
5:37 AM
Run Current: 0.80A Hold Current: 0.80A
5:37 AM
Run Current: 0.80A Hold Current: 0.80A
5:37 AM
Run Current: 0.80A Hold Current: 0.80A
5:37 AM
Run Current: 0.80A Hold Current: 0.80A
5:37 AM
Run Current: 0.49A Hold Current: 0.49A
5:37 AM
Run Current: 0.49A Hold Current: 0.49A
5:36 AM
Received parameters: {'BED_TEMP': '60', 'EXTRUDER_TEMP': '250'}
5:36 AM
File selected
5:36 AM
File opened:ASApi mount front.gcode Size:845976
5:36 AM
SDCARD_PRINT_FILE FILENAME="ASApi mount front.gcode"

Don’t know what it means but hoping others do.

klippy(27).log (794.3 KB)

This klippy log is from the second attempt with the newly sliced file. Same part, same settings.

In both cases I stopped it again. What may also be interesting is that it would not respond to the Mainsail firmware-restart button, and when I tried via KlipperScreen it was the same. Then suddenly it executed all my presses in one go. It looked like something hung temporarily.

Third reslice, same part, same settings, and the error again. Either it is getting worse or I am having a lot of bad luck today.

klippy(28).log (1.7 MB)

Thank you for the clarification.

Indeed, you are right.


To add more info: today I used PrusaSlicer 2.9.2 to slice the same part with roughly the same settings; since PrusaSlicer is slightly different, an exact match is not possible.

It is printing now and started correctly on the first attempt. It is a 40-minute-or-so print, and after this I will start SuperSlicer again, slice the same part again, and retry.

If I cannot solve this any other way, I will try to roll my Klipper back to the previous version. I have never done that before, so I am a bit hesitant; code and firmware are not my strongest points.
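From what I have read, the rollback itself should just be a git checkout in the klipper directory on the Pi, something roughly like the sketch below if I understand it correctly (untested by me, and I gather the MCUs may also need to be reflashed with matching firmware afterwards):

cd ~/klipper
git log --oneline -n 5     # note the commit I am on now, so I can come back
git checkout v0.12.0       # or the exact commit/tag I was on before the update
sudo service klipper restart
# and later: git checkout master (plus another restart) to return to current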

Can I suggest that you make up a table plotting Klipper versions against slicer versions and type?

I find that by doing that, it’s easier to keep track of what I’ve tried and, hopefully, trends will appear that will help lead you to where the problems lie.
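For example, something as simple as this, using the attempts you have already described (the layout is just a suggestion):

Day | Klipper     | Slicer                          | Result
1   | v0.13.0-190 | SuperSlicer 2.7 beta            | stepcompress error x3
1   | v0.13.0-190 | SuperSlicer 2.7 beta (resliced) | OK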


For now it is simple enough to remember, since my Klipper has only been v0.13.0-190 since the update. I have not tried any other Klipper version yet.

The slicers are:

SuperSlicer 2.7 beta and PrusaSlicer 2.9.2.

The PrusaSlicer file worked perfectly from start to end, but it is just one attempt so far.

I have just opened another instance of SuperSlicer and resliced the same part again, fresh; it is now on the printer and also seems to work.

It is very random indeed, and based on my findings so far I am starting to think it could either be SuperSlicer, which may have had a bug during my previous session but doesn't now, or it is still in my system and simply does not want to produce the error today!

The reason I think it could be SuperSlicer is that during the last two days of trouble I never closed SuperSlicer at all; I just reloaded parts into it and sliced them. Today I had closed it and opened a new instance, and it is the first time it works off the bat.

I am not sure it is what I think, but so far it seems possible.

I will keep printing and will report back.


I would politely suggest you stop messing with the slicers, unless you really find this interesting.

If you read the stack trace in any of the log files, you will see it is crashing inside the BED_MESH_CALIBRATE call.

Which is called inside the PRINT_START macro.
So, as long as PRINT_START is called and BED_MESH_CALIBRATE is called, it will crash.
So, as long as ANY slicer calls PRINT_START, it will crash.
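For illustration only - this is a generic Voron-style macro, not your actual PRINT_START - the call chain is typically something like:

[gcode_macro PRINT_START]
gcode:
    G28                    ; home
    QUAD_GANTRY_LEVEL      ; the QGL step where one of the errors appeared
    G28 Z
    BED_MESH_CALIBRATE     ; the step that produced the traceback above
    ; ... heat soak, purge line, etc.

The sliced file only contributes the single PRINT_START line; everything after that comes from your own macros and the probe module, not from the slicer.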

Hope that helps a little.


About the issue itself. Well: The "dirty" Flag and the Team's Position

It would take some goodwill to read the scanner.py code and guess what is wrong, which is complicated if you do not have one of these probes. I can't find anything suspicious from a brief look.

I can only suggest changing the bed_mesh speed from 500 to 100.
But I honestly have no idea how or why it is triggered here.
Maybe a speed reduction would help, I don't know.
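Assuming that speed comes from the standard [bed_mesh] section of printer.cfg (the scanner module may read its own speed setting instead, in which case change it there), that would look like:

[bed_mesh]
speed: 100
# (was 500; leave the rest of the section unchanged)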

I’m using a simple analog inductive probe and also using the high movement speed (400mm/s).
So, the mainline code should work here.


From my small calculation, I do think that OID=8 is one of the XY motors.

# queue_step oid=8 interval=1594 count=986 add=0
>>> 1594 / 180000000
8.855555555555555e-06
>>> print("%.9f s" % (1594 / 180000000))
0.000008856 s
>>> 40 / 200 / 32
0.00625 # mm per step
>>> 1 / 0.000008856
112917.79584462511 # Steps per second
>>> 1 / 0.000008856 * 0.00625
705.736224028907 # ~705 mm/s - a diagonal XY movement: 500 * sqrt(2) ≈ 707

Well, if what you concluded were correct, I would instantly accept it. However, there are a few things to point out that do not follow your logic:

  1. Each file, with or without the error, called PRINT_START and BED_MESH_CALIBRATE.

  2. Both slicers use the very same PRINT_START and custom G-code.

  3. It gives the error in different places, sometimes while starting the QGL and sometimes when starting BED_MESH_CALIBRATE, and sometimes it works correctly for no obvious reason. So far the error has mostly happened on BED_MESH_CALIBRATE, but twice it was on the QGL, which resulted in the toolhead leaving the bed area and nearly crashing into the frame.

I tried lower speeds as well, but to no avail. Also, these speeds had been working very nicely for quite a few months without ever going wrong. But that was before the updates.

As for today, all my new g-code files worked, from both slicers.

So I am not saying you are wrong about it not being the slicers; I am just doing what I can test here on my end, before deciding whether I want to roll back the updates or install a totally new Klipper from the ground up. Neither of which I am comfortable with, but... what needs to be done will be done.

Actually @nefelim4ag’s logic is perfectly sound and applicable in all cases shown in the available logs:

  • Each of the stack traces involves the 3rd party scanner.py
  • It does not matter if BED_MESH_CALIBRATE or QGL:
    • Both are functions that involve the scanner
    • Both are completely unrelated to G-code generated by a slicer

Way beyond my pay grade, but in the end it could be some race condition that may or may not strike, depending on circumstances not fully understood.

Perhaps you should contact the folks who wrote the modules in question (and took your money) and give a thank you to someone who is simply trying to help.

Or at least phrase your response along the lines of “I’m not sure I follow your logic would you mind clarifying these points.”

@3dcase You may find the folks here
