Exclude Objects: CPU-heavy to parse

I got a report about a TTC (Timer Too Close) error on a pretty weak Chinese SBC.
So I got the G-code file, cleaned it up a little, and tried to reproduce it. CPU usage was about 1 s:

Starting SD card print (position 0)
Stats 73.9:
  gcodein=0
  mcu: mcu_awake=0.001 mcu_task_avg=0.000005 mcu_task_stddev=0.000027 bytes_write=4451 bytes_read=26427 bytes_retransmit=9 bytes_invalid=0 send_seq=364 receive_seq=364 retransmit_seq=2 srtt=0.001 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=480024560
  ebb42: mcu_awake=0.002 mcu_task_avg=0.000010 mcu_task_stddev=0.000009 bytes_write=1889 bytes_read=12096 bytes_retransmit=9 bytes_invalid=0 send_seq=213 receive_seq=213 retransmit_seq=2 srtt=0.001 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=64000427 adj=63996275
  host: mcu_awake=0.005 mcu_task_avg=0.000058 mcu_task_stddev=0.000092 bytes_write=4211 bytes_read=9746 bytes_retransmit=0 bytes_invalid=0 send_seq=471 receive_seq=471 retransmit_seq=0 srtt=0.000 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=50252149 adj=50456081 sd_pos=32735 w_dryer_h: temp=21.0 W_dryer: target=0 temp=20.9 pwm=0.000
  chamber: temp=23.2 chamber: target=0 temp=23.2 pwm=0.000 ebb42: temp=30.4 X_Driver: temp=48.3
  raspberry_pi: temp=45.2
  heater_bed: target=0 temp=21.7 pwm=0.000 sysload=0.67 cputime=3.632 memavail=3783312 print_time=1971.226 buffer_time=0.000 print_stall=0 
  extruder: target=35 temp=35.2 pwm=0.014
Resetting prediction variance 73.916: freq=50252149 diff=321546 stddev=50000.000
Stats 74.9:
  gcodein=0
  mcu: mcu_awake=0.001 mcu_task_avg=0.000005 mcu_task_stddev=0.000027 bytes_write=4468 bytes_read=26716 bytes_retransmit=9 bytes_invalid=0 send_seq=366 receive_seq=366 retransmit_seq=2 srtt=0.001 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=480024419
  ebb42: mcu_awake=0.002 mcu_task_avg=0.000010 mcu_task_stddev=0.000009 bytes_write=1895 bytes_read=12228 bytes_retransmit=9 bytes_invalid=0 send_seq=214 receive_seq=214 retransmit_seq=2 srtt=0.001 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=64000405 adj=63996783 host: mcu_awake=0.005 mcu_task_avg=0.000058 mcu_task_stddev=0.000092 bytes_write=4282 bytes_read=9836 bytes_retransmit=0 bytes_invalid=0 send_seq=478 receive_seq=478 retransmit_seq=0 srtt=0.000 rttvar=0.000 rto=0.025 ready_bytes=0 upcoming_bytes=0 freq=50252779 adj=50418438 sd_pos=463869 w_dryer_h: temp=21.0 W_dryer: target=0 temp=20.8 pwm=0.000 chamber: temp=23.2
  chamber: target=0 temp=23.2 pwm=0.000 ebb42: temp=30.4 X_Driver: temp=48.3
  raspberry_pi: temp=45.8
  heater_bed: target=0 temp=21.7 pwm=0.000 sysload=0.77 cputime=4.649 memavail=3783192 print_time=1971.226 buffer_time=0.000 print_stall=0
  extruder: target=35 temp=35.2 pwm=0.014
Finished SD card print
Exiting SD card print (position 516117)

I was unable to reproduce the TTC, but I can say where the time went.
There are two places: shlex in gcode.py and json.dumps in webhooks.py.

Here is the test G-code:
heavy_exclude_short.gcode (504.0 KB)

This is a speedscope profile from py-spy, taken on another, relatively slow CPU:
profile (3).zip (11.5 KB)
Mine has higher resolution, made with perf and Python 3.12 (the RPI5 is really too fast to show the problem); the perf profile is compatible with speedscope:
heavy_exclude_short.perf.zip (1.1 MB)
(you can just unzip and open those files in the speedscope browser app).

I think the real question is this:
AFAIK, CPU usage on the main thread should generally not cause a TTC.
So I can reproduce the CPU usage, but I can't reproduce a TTC on the recent release.
Should this bother anyone? I'm not sure that shlex can be replaced, or whether it makes sense to add parsing hacks there.
Same for the webhook: the JSON encoding could optionally be offloaded to orjson, for example.
That would probably make it blazing fast, but is it worth it?

Thanks.

I don’t think you’ve provided enough context. Could you provide more information on the total system?

“pretty weak Chinese SBC” doesn’t indicate:

  • Main controller board
  • Any additional MCUs (i.e., connected via CAN)
  • Any additional peripherals (i.e., a webcam)
  • What apps are running in the main system concurrently

I don’t think you’ve provided enough context. Could you provide more information on the total system?

It is a custom, no-name integrated board.
It is used in the Reborn 3 Infimech TX (EU) printers (not sure about the exact model).


IIRC,
SBC: Allwinner V853
MCU: STM32F07*

Any additional peripherals (i.e., a webcam)

It does not matter (the board is too weak either way), but a webcam can be connected.

What apps are running in the main system concurrently

AFAIK: a custom KlipperScreen, Moonraker, and that's mostly it.

Hope it helps.


My point is not "hey, there is weak board X, let's do something specifically for it".
My point is: "there are hot spots; do they need to be fixed?"

Even the manufacturer can't provide tools or an OS image to flash it. There is a pretty goofy OS, an unknown 0.12+ version of Klipper with broken git, etc.
Literally, there is one unit with a broken libc (the crash is visible on the UART console).
The manufacturer just sends a new board, and then another one, instead of just giving users a tool to flash it.

Yes, there is a flash tool, but no suitable distribution.
IMHO those boards are better off in a landfill in their current state.
They will never receive an update, and will probably never run pristine Klipper.

I was asking for context to understand where you are coming from.

Is it correct to summarize what you're saying as: this marginal system generates TTCs, and I've found some areas that are suspect (and may be responsible for TTCs in other systems) - should these areas be investigated/addressed?

Is that correct?

If it is, then I think it's a valid approach. The only comment I would make is to remove the custom KlipperScreen from the system and see if you still have what you're describing as "hot spots". If you do, then look at the code there and see what kind of improvements can be made.

I’m only suggesting to run without KlipperScreen (of any flavour) because in the cases where I’ve helped people through TTC errors, I’ve had success when I’ve got them to remove everything but Klipper/Moonraker/Mainsail-Fluidd (and, in a couple of cases, don’t use the “MainsailOS” for the rPi but go with the compact OS and install Klipper (et al) using KIAUH).


+1

Those profiles are from vanilla Klipper.
One of them is from my RPI5, where it still takes around 1 s.
I do not have fancy stuff, and generally think that the RPI5 is overpowered.

So, I suspect those hotspots are valid for all systems,
but I'm not sure that they will generate a TTC.
The only possible difference is the number and complexity of exclude-object definitions per build plate.

Just my 2 cents:
I generally think that the CPU can be loaded to 100% and Klipper should still work,
maybe with a small amount of magic dust.

I suspect that KlipperScreen is somewhat the same as the web UI, in the sense that it uses Moonraker and receives status/state updates.
So, if there is large JSON data (512 KB, as in the log above) from a high number of excluded-object definitions, that large JSON gets serialized and deserialized, consuming CPU first in Klipper, then Moonraker, and then KlipperScreen.


That’s odd. Are you saying an exclude_objects.get_status() call took over a second to complete?

I didn't fully understand the profiling snippet you provided, and I won't be able to give this any significant attention for several weeks. But the screen snapshot seemed to indicate that printer.exclude_objects.objects had several hundred items in its list. That seems like it could get unwieldy.

-Kevin


Not directly, but get_status() probably returns all the objects, and encoding that as JSON is what takes the time.

114, to be precise, as in the file:
~450 KB of EXCLUDE_OBJECT_DEFINE in the G-code,
~4 KB per object,
and mostly the same in the API output.


Let me show what I see and how I see it.

First is shlex.

Example EXCLUDE_OBJECT_DEFINE
EXCLUDE_OBJECT_DEFINE NAME=blah.stp_id_0_copy_0 CENTER=276,270.6 POLYGON=[[262.5,273.6],[262.5,267.6],[262.506,267.331],[262.506,267.324],[262.522,267.062],[262.523,267.049],[262.55,266.794],[262.553,266.774],[262.589,266.528],[262.593,266.501],[262.639,266.263],[262.646,266.231],[262.7,266.001],[262.71,265.962],[262.771,265.741],[262.785,265.697],[262.854,265.485],[262.871,265.435],[262.947,265.232],[262.969,265.176],[263.05,264.983],[263.077,264.923],[263.164,264.739],[263.196,264.674],[263.287,264.5],[263.325,264.43],[263.42,264.266],[263.465,264.192],[263.563,264.037],[263.615,263.96],[263.716,263.815],[263.774,263.735],[263.877,263.6],[263.943,263.517],[264.047,263.391],[264.121,263.306],[264.226,263.189],[264.307,263.102],[264.412,262.995],[264.502,262.907],[264.607,262.809],[264.706,262.721],[264.809,262.631],[264.917,262.543],[265.019,262.462],[265.135,262.374],[265.235,262.301],[265.36,262.215],[265.458,262.15],[265.592,262.065],[265.687,262.008],[265.83,261.925],[265.921,261.875],[266.074,261.796],[266.159,261.754],[266.323,261.677],[267.733,261.05],[284.268,261.05],[285.677,261.677],[285.841,261.754],[285.926,261.796],[286.079,261.875],[286.17,261.925],[286.313,262.008],[286.408,262.065],[286.542,262.15],[286.64,262.215],[286.765,262.301],[286.865,262.374],[286.981,262.462],[287.083,262.543],[287.191,262.631],[287.294,262.721],[287.393,262.809],[287.498,262.907],[287.588,262.995],[287.693,263.102],[287.774,263.189],[287.879,263.306],[287.953,263.391],[288.057,263.517],[288.123,263.6],[288.226,263.735],[288.284,263.815],[288.385,263.96],[288.437,264.037],[288.535,264.192],[288.58,264.266],[288.675,264.43],[288.713,264.5],[288.804,264.674],[288.836,264.739],[288.923,264.923],[288.95,264.983],[289.031,265.176],[289.053,265.232],[289.129,265.435],[289.146,265.485],[289.215,265.697],[289.228,265.741],[289.29,265.962],[289.3,266.001],[289.354,266.231],[289.361,266.263],[289.406,266.501],[289.411,266.528],[289.447,266.774],[289.45,266.794],[289.477,267.049],[289
.478,267.062],[289.494,267.324],[289.494,267.331],[289.5,267.6],[289.5,273.6],[289.494,273.869],[289.494,273.876],[289.478,274.138],[289.477,274.151],[289.45,274.406],[289.447,274.426],[289.411,274.672],[289.406,274.699],[289.361,274.937],[289.354,274.969],[289.3,275.199],[289.29,275.238],[289.228,275.459],[289.215,275.503],[289.146,275.715],[289.129,275.765],[289.053,275.968],[289.031,276.024],[288.95,276.217],[288.923,276.277],[288.836,276.461],[288.804,276.526],[288.713,276.7],[288.675,276.77],[288.58,276.934],[288.535,277.008],[288.437,277.163],[288.385,277.24],[288.284,277.385],[288.226,277.465],[288.123,277.6],[288.057,277.683],[287.953,277.809],[287.879,277.894],[287.774,278.011],[287.693,278.098],[287.588,278.205],[287.498,278.293],[287.393,278.391],[287.294,278.479],[287.191,278.569],[287.083,278.657],[286.981,278.738],[286.865,278.826],[286.765,278.899],[286.64,278.985],[286.542,279.05],[286.408,279.135],[286.313,279.192],[286.17,279.275],[286.079,279.325],[285.926,279.404],[285.841,279.446],[285.677,279.523],[284.268,280.15],[267.733,280.15],[266.323,279.523],[266.159,279.446],[266.074,279.404],[265.921,279.325],[265.83,279.275],[265.687,279.192],[265.592,279.135],[265.458,279.05],[265.36,278.985],[265.235,278.899],[265.135,278.826],[265.019,278.738],[264.917,278.657],[264.809,278.569],[264.706,278.479],[264.607,278.391],[264.502,278.293],[264.412,278.205],[264.307,278.098],[264.226,278.011],[264.121,277.894],[264.047,277.809],[263.943,277.683],[263.877,277.6],[263.774,277.465],[263.716,277.385],[263.615,277.24],[263.563,277.163],[263.465,277.008],[263.42,276.934],[263.325,276.77],[263.287,276.7],[263.196,276.526],[263.164,276.461],[263.077,276.277],[263.05,276.217],[262.969,276.024],[262.947,275.968],[262.871,275.765],[262.854,275.715],[262.785,275.503],[262.771,275.459],[262.71,275.238],[262.7,275.199],[262.646,274.969],[262.639,274.937],[262.593,274.699],[262.589,274.672],[262.553,274.426],[262.55,274.406],[262.523,274.151],[262.522,274.138],[262.506,2
73.876],[262.506,273.869],[262.5,273.6]]
Callstack graphs

Mine, high resolution:

From the slower RK3328, with a slower profiler:

So, a Python console test, per call:
import timeit
stmt = '''
s = shlex.shlex("NAME=blah.stp_-1_id_113_copy_0 CENTER=24,109.8 POLYGON=[[10.5,112.8],[10.5,106.8],[10.5056,106.531],[10.5059,106.524],[10.5223,106.262],[10.5234,106.249],[10.5502,105.994],[10.5527,105\
.974],[10.5891,105.728],[10.5935,105.701],[10.639,105.463],[10.6459,105.431],[10.6999,105.201],[10.7097,105.162],[10.7715,104.941],[10.7849,104.897],[10.8539,104.685],[10.8713,104.635],[10.9468,104.432],[1\
0.9687,104.376],[11.0501,104.183],[11.077,104.123],[11.1636,103.939],[11.196,103.874],[11.2871,103.7],[11.3254,103.63],[11.4204,103.466],[11.4651,103.392],[11.5634,103.237],[11.6147,103.16],[11.7156,103.01\
5],[11.774,102.935],[11.8769,102.8],[11.9428,102.717],[12.047,102.591],[12.1206,102.506],[12.2256,102.389],[12.3073,102.302],[12.4125,102.195],[12.5024,102.107],[12.6071,102.009],[12.7056,101.921],[12.8094\
,101.831],[12.9166,101.743],[13.0188,101.662],[13.1349,101.574],[13.2351,101.501],[13.3602,101.415],[13.4578,101.35],[13.592,101.265],[13.6866,101.208],[13.83,101.125],[13.9211,101.075],[14.0737,100.996],[\
14.159,100.954],[14.3227,100.877],[15.7332,100.25],[32.2677,100.25],[33.6774,100.877],[33.841,100.954],[33.9263,100.996],[34.079,101.075],[34.17,101.125],[34.3134,101.208],[34.408,101.265],[34.5422,101.35]\
,[34.6398,101.415],[34.765,101.501],[34.8651,101.574],[34.9812,101.662],[35.0834,101.743],[35.1906,101.831],[35.2944,101.921],[35.3929,102.009],[35.4976,102.107],[35.5876,102.195],[35.6927,102.302],[35.774\
4,102.389],[35.8794,102.506],[35.953,102.591],[36.0572,102.717],[36.1231,102.8],[36.226,102.935],[36.2844,103.015],[36.3853,103.16],[36.4367,103.237],[36.5349,103.392],[36.5796,103.466],[36.6746,103.63],[3\
6.7129,103.7],[36.804,103.874],[36.8364,103.939],[36.923,104.123],[36.95,104.183],[37.0313,104.376],[37.0532,104.432],[37.1287,104.635],[37.1461,104.685],[37.2151,104.897],[37.2285,104.941],[37.2903,105.16\
2],[37.3002,105.201],[37.3541,105.431],[37.361,105.463],[37.4065,105.701],[37.4109,105.728],[37.4474,105.974],[37.4499,105.994],[37.4766,106.249],[37.4777,106.262],[37.4942,106.524],[37.4944,106.531],[37.5\
,106.8],[37.5,112.8],[37.4944,113.069],[37.4942,113.076],[37.4777,113.338],[37.4766,113.351],[37.4499,113.606],[37.4474,113.626],[37.4109,113.872],[37.4065,113.899],[37.361,114.137],[37.3541,114.169],[37.3\
002,114.399],[37.2903,114.438],[37.2285,114.659],[37.2151,114.703],[37.1461,114.915],[37.1287,114.965],[37.0532,115.168],[37.0313,115.224],[36.95,115.417],[36.923,115.477],[36.8364,115.661],[36.804,115.726\
],[36.7129,115.9],[36.6746,115.97],[36.5796,116.134],[36.5349,116.208],[36.4367,116.363],[36.3853,116.44],[36.2844,116.585],[36.226,116.665],[36.1231,116.8],[36.0572,116.883],[35.953,117.009],[35.8794,117.\
094],[35.7744,117.211],[35.6927,117.298],[35.5876,117.405],[35.4976,117.493],[35.3929,117.591],[35.2944,117.679],[35.1906,117.769],[35.0834,117.857],[34.9812,117.938],[34.8651,118.026],[34.765,118.099],[34\
.6398,118.185],[34.5422,118.25],[34.408,118.335],[34.3134,118.392],[34.17,118.475],[34.079,118.525],[33.9263,118.604],[33.841,118.646],[33.6774,118.723],[32.2677,119.35],[15.7332,119.35],[14.3227,118.723],\
[14.159,118.646],[14.0737,118.604],[13.9211,118.525],[13.83,118.475],[13.6866,118.392],[13.592,118.335],[13.4578,118.25],[13.3602,118.185],[13.2351,118.099],[13.1349,118.026],[13.0188,117.938],[12.9166,117\
.857],[12.8094,117.769],[12.7056,117.679],[12.6071,117.591],[12.5024,117.493],[12.4125,117.405],[12.3073,117.298],[12.2256,117.211],[12.1206,117.094],[12.047,117.009],[11.9428,116.883],[11.8769,116.8],[11.\
774,116.665],[11.7156,116.585],[11.6147,116.44],[11.5634,116.363],[11.4651,116.208],[11.4204,116.134],[11.3254,115.97],[11.2871,115.9],[11.196,115.726],[11.1636,115.661],[11.077,115.477],[11.0501,115.417],\
[10.9687,115.224],[10.9468,115.168],[10.8713,114.965],[10.8539,114.915],[10.7849,114.703],[10.7715,114.659],[10.7097,114.438],[10.6999,114.399],[10.6459,114.169],[10.639,114.137],[10.5935,113.899],[10.5891\
,113.872],[10.5527,113.626],[10.5502,113.606],[10.5234,113.351],[10.5223,113.338],[10.5059,113.076],[10.5056,113.069],[10.5,112.8]]", posix=True)
s.whitespace_split = True
s.commenters = '#;'
[e.split("=", 1) for e in s]
'''
setup = 'import shlex'
print(timeit.timeit(stmt=stmt, setup=setup, number=1000)/1000)

Output on RPI5:
0.00683 s
Output on the Allwinner V853:
0.06529 s
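
For illustration, here is a hedged sketch of one possible "parsing hack": skip shlex entirely when the line contains no quote characters (the common case for slicer-emitted parameters), and fall back to shlex otherwise. This is a hypothetical sketch, not Klipper's actual parser, and `parse_params` is an invented name:

```python
import shlex

def parse_params(line):
    """Split 'KEY=VALUE KEY=VALUE ...' parameters into [key, value] pairs.

    Fast path: plain str.split() when the line contains no quotes, which
    is the common case for slicer-generated EXCLUDE_OBJECT_DEFINE lines.
    Falls back to shlex for quoted values. Comment handling is omitted
    from the fast path for brevity.
    """
    if '"' not in line and "'" not in line:
        return [part.split("=", 1) for part in line.split()]
    s = shlex.shlex(line, posix=True)
    s.whitespace_split = True
    s.commenters = "#;"
    return [e.split("=", 1) for e in s]

print(parse_params("NAME=part_0 CENTER=24,109.8"))
# [['NAME', 'part_0'], ['CENTER', '24,109.8']]
```

Whether the semantic differences (comment handling, escaping) are acceptable is exactly the "parsing hacks" trade-off mentioned above.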

The second is json.dumps() in webhooks.py,
which is probably just returning the status info from exclude_object.get_status()
(there is no direct call stack for it in the trace).

Callstack graphs

Mine, high resolution:

From the much slower RK3328, with a slower profiler:

json_response.txt (448.5 KB)

So, a Python console test, per call:
import json
import timeit
with open("json_response.json", "r") as f:
  data = json.load(f)

print(timeit.timeit('json.dumps(data,separators=(",", ":"))', globals=globals(), number=10)/10)

Output on RPI5:
0.0542 s
Output on the Allwinner V853:
0.3091 s
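
For readers without the attachment, a synthetic payload of similar shape reproduces the cost. The object count, polygon size, and key names below are assumptions based on the figures quoted above, not the actual file:

```python
import json
import timeit

# Synthetic exclude_object-style status: 114 objects, each with a
# ~200-vertex polygon (sizes assumed from the numbers quoted above).
objects = [
    {
        "name": "part_%d" % i,
        "center": [24.0, 109.8],
        "polygon": [[10.5 + 0.1 * j, 112.8 - 0.1 * j] for j in range(200)],
    }
    for i in range(114)
]
data = {"exclude_object": {"objects": objects, "excluded": [], "current": None}}

# Average the cost of compact serialization over 10 calls.
per_call = timeit.timeit(
    lambda: json.dumps(data, separators=(",", ":")), number=10) / 10
print("json.dumps: %.1f ms per call" % (per_call * 1000))
```

The resulting document is several hundred kilobytes, in the same ballpark as the ~450 KB of definitions in the test G-code.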

Hope that helps.

Interesting results. I'm not sure how much we can improve the gcode parser's efficiency using pure Python. The shlex library ensures we are correctly parsing extended gcode commands, and Klipper shouldn't encounter many of them when it's actually printing. Frankly, I think it should be possible for slicers to cap how many vertices are generated for object definitions; the amount shown in this example seems like overkill.

With regard to json serialization, Moonraker optionally supports msgspec if it's installed. Moonraker spends quite a bit of time encoding/decoding json data, so this addition reduces Moonraker's load substantially on low-spec SBCs. The msgspec library has performance similar to orjson for unstructured json serialization; however, it's implemented as a C extension rather than a Rust extension. The SBCs that stand to benefit the most from these packages will have a hard time installing Rust's tools if they need to build the package. As a bonus, msgspec supports other formats such as MessagePack, YAML, and TOML. AFAIK both libraries require Python 3.

Klipper's API server could probably benefit from a similar implementation, where it uses msgspec if it's installed, falling back to the standard library otherwise. I'd be happy to put together a PR if it's agreed that this would be useful.


Interesting findings. For what it is worth, if it was me, I would not spend a lot of time optimizing for these really low-end SBC systems. Really powerful general purpose computers are just too cheap to invest development time writing and maintaining software to save a few pennies on hardware. (Machines like the rpi3 and rpi0v2 have tons of processing power and have great prices.) Certainly, I’d look to optimize the Klipper kinematics code and Klipper micro-controller code - but if a machine is too low end to format JSON messages then that seems excessive.

That said, I'm a little confused about why get_status() is on the list of time consumers at all. Ideally, the excluded objects list wouldn't be changing frequently, so the status subscription code should have rapidly detected the duplicate content (list pointers being equal), with no json messages being sent at all. So, something seems odd here.

-Kevin

Respectfully, there are many places around the world where Raspberry Pis are not easy to come by in terms of cost and order delivery times - this is also true for devices like the BTT CB1. This is not a result of the recent tariff situation or a hold over from Covid, but long standing country specific issues.

I’ve had many situations where I’ve worked with people who have inexplicable problems only to find out they’re working with what we would consider low-end SBCs and when I suggest going with something like an rPi 4B (which costs $50 CAD/$35 USD) I’m told that it’s very difficult/expensive to get them.

Along with that, I’m not sure you can say that a problem in a “low-end SBC system” will not appear in a higher end one. It’s been my experience that problems that result from code inefficiencies are rarely resolved by running the software in a more powerful processor.

I think the work being proposed by @nefelim4ag has value and there’s a very good chance he can find some issues that cause TTCs in all systems.

To be clear, I wasn’t recommending one particular manufacturer - I was pointing out that hardware as an example. There are many providers of low-cost single board computers with lots of processing power (at or more than an rpi3). As a rough guideline, I’d say look for something with hardware double precision floating point support, 1+Ghz frequency, 2+ cpu cores, at least 1Gig of ram, and 8+GB of flash.

That said, even if the hardware did cost $50 and was hard to get, I still don’t think that’s an easy justification for development time. Hiring developers is very expensive - for a project this size, if a company had to pay all the developers, I’d say they’d need a multi-million dollar annual budget. So, at a high level, if another developer asks for my opinion I’m inclined to tell them to not undervalue their time - to wit “it’s not worth spending tens of thousands of dollars in your time to help someone else save $50”. Certainly not worth trying to help manufactures that are designing hardware with a goal to save every penny. Anyway, I don’t want to get too off-topic - just some of my high level opinions.

Agreed - there’s something very strange about get_status() using so much cpu time. It would be good to understand that.

Cheers,
-Kevin


It appears to me that the issue is not specifically with get_status(). @nefelim4ag has a frontend connected that is subscribed to exclude_object. Each time a new object is defined via EXCLUDE_OBJECT_DEFINE the internal list of objects is updated, so new status is pushed to all connected clients subscribed to exclude_object. As the object list grows the amount of data that needs to be encoded increases, and perhaps this resulted in a missed deadline leading to a “timer too close” error.
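
The identity-based duplicate check mentioned earlier can be sketched roughly like this (a hypothetical illustration; Klipper's actual subscription code may differ). If each EXCLUDE_OBJECT_DEFINE produces a new list object, the check sees a change and a fresh push gets encoded even though most of the content is the same:

```python
# Hedged sketch of identity-based duplicate detection for status pushes;
# an illustration of the idea, not Klipper's actual subscription code.
_last_sent = {}
_MISSING = object()

def needs_update(field, value):
    """Return True when `value` is a different object than the last push."""
    if _last_sent.get(field, _MISSING) is value:
        return False  # same object pointer -> duplicate, skip re-encoding
    _last_sent[field] = value
    return True

objs = [{"name": "part_0"}]
print(needs_update("exclude_object.objects", objs))        # True  (first push)
print(needs_update("exclude_object.objects", objs))        # False (same list pointer)
print(needs_update("exclude_object.objects", list(objs)))  # True  (rebuilt list, equal content)
```

So a growing list that is rebuilt on every definition defeats the pointer check, and each update pays the full encoding cost again.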

I can envision other scenarios where this could be a problem. For example, a user navigates to a front end to check on the status of a print. During init, the front end queries/subscribes to configfile, bed_mesh, and exclude_object. The response could result in several megabytes of data that needs to be serialized, which even on modern SBC hardware likely takes over 100ms based on the results posted earlier in the thread.

FWIW, I agree that spending a lot of time trying to optimize this area isn’t worth the dev cost, however I was able to put something together rather quickly.

It's a rather small change that uses msgspec if it's available, falling back to the standard library if the import fails.
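
The optional-dependency pattern looks roughly like this (a sketch only; the function name `dump_json` is hypothetical and the actual PR may differ):

```python
import json

try:
    import msgspec.json

    def dump_json(obj):
        # msgspec emits compact JSON bytes directly; ~4x faster
        # per the measurements in this thread.
        return msgspec.json.encode(obj)
except ImportError:
    def dump_json(obj):
        # Stdlib fallback; compact separators match msgspec's output.
        return json.dumps(obj, separators=(",", ":")).encode("utf-8")

print(dump_json({"objects": [{"name": "part_0"}]}))
# b'{"objects":[{"name":"part_0"}]}'
```

Either branch produces the same compact bytes, so callers don't need to know which encoder is in use.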


Thanks. Seems fine to me. @nefelim4ag - does this notably improve your test?

-Kevin

Python 3.13, RPI5, same JSON as above, same console test.
json.dumps() - ~55 ms
msgspec.json.encode() - ~14 ms

The profiling view shows a similar difference:

The G-code is available above.
~54ms → ~12ms


There is also a large call after it, but I'm not sure what it is doing; the timings are just larger overall.
So, patched: ~106 ms → ~27 ms.


So, yes, replacing the built-in json.dumps() improves the performance of the API server (webhooks.py)
by around 4×.


Oh, yes, this is all with the patch from the PR above.
