Does it make sense to actually sync mcu clock speed?

In common single mcu setups, using the main mcu as the timing reference means that measurement noise doesn’t impact kinematic timing. (Since all the peripherals are on the main mcu, we can time all movements relative to that main mcu, and any noise in the correlation between host and mcu timing doesn’t alter those movements.)
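To illustrate that point, here is a minimal sketch (not Klipper's actual code; the MCU frequency, function name, and offset numbers are made up) of timing everything off the main mcu clock. Every actuator event goes through the same host-estimate-to-mcu-ticks mapping, so noise in the host/mcu estimate shifts all events together instead of distorting their spacing:

```python
# Hypothetical sketch: all step times are expressed in ticks of the
# main MCU's clock, so any error in the host<->mcu time estimate moves
# every event by the same amount and the relative timing is preserved.

MCU_FREQ = 72_000_000  # assumed main mcu crystal frequency (ticks/second)

def print_time_to_mcu_clock(print_time, est_clock_at_t0, est_freq):
    """Convert a host 'print time' (seconds) to a main-mcu clock tick.

    est_clock_at_t0 / est_freq come from the host's (noisy) estimate of
    the mcu clock; because every actuator event uses this same mapping,
    estimate noise does not change the spacing between events.
    """
    return int(est_clock_at_t0 + print_time * est_freq)

# Example: two step pulses scheduled 10us apart - their tick spacing
# depends only on est_freq, not on the absolute offset estimate.
c0 = print_time_to_mcu_clock(1.000000, 5_000_000, MCU_FREQ)
c1 = print_time_to_mcu_clock(1.000010, 5_000_000, MCU_FREQ)
print(c1 - c0)  # 720 ticks, regardless of the offset estimate
```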

Even in multi-mcu setups, if the X and Y steppers are on the main mcu then, again, measurement error won't impact the vast majority of movements.

Separately, just to be clear, the goal isn’t to find the “right” time, but to accurately predict the clocks relative to each other. Common crystals are accurate to within about 1 part per million. So, if one commands the toolhead to move at 100mm/s and instead it moves at 100.0001mm/s then no one cares. In contrast, if the code can’t detect and account for a one part per million drift between clocks, then a print will be complete “spaghetti” within about 10 minutes of starting a print. That’s what I meant earlier by “everything is relative to a particular clock and we don’t know (and don’t really care) how accurate that clock is”. We’re not looking for accuracy; we’re looking for predictability.
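To make the “predictability, not accuracy” point concrete, here is a simplified sketch (again, not Klipper's actual clocksync code; the sampling scheme and plain least-squares fit are just illustrative). A steady 1ppm frequency difference gets absorbed into the fitted slope, so the predicted clock stays on track even though neither clock is “right” in absolute terms:

```python
# Simplified sketch: periodically sample a secondary clock against the
# reference clock, fit a linear relation, and use the fit to predict
# the secondary clock from the reference clock.

def fit_clock(samples):
    """Least-squares fit: secondary = offset + slope * reference."""
    n = len(samples)
    mean_r = sum(r for r, s in samples) / n
    mean_s = sum(s for r, s in samples) / n
    var = sum((r - mean_r) ** 2 for r, s in samples)
    cov = sum((r - mean_r) * (s - mean_s) for r, s in samples)
    slope = cov / var
    offset = mean_s - slope * mean_r
    return offset, slope

def predict_secondary(offset, slope, reference_clock):
    return offset + slope * reference_clock

# Example: secondary crystal runs 1ppm fast relative to the reference
# (1MHz reference ticks, one sample per second for 10 seconds).
true_drift = 1.000001
samples = [(t * 1_000_000, t * 1_000_000 * true_drift) for t in range(10)]
offset, slope = fit_clock(samples)

# Predict 10 minutes (600s of reference ticks) into the future: the
# ~600 ticks of accumulated drift are predicted correctly, so commands
# scheduled on the secondary mcu stay aligned with the main mcu.
pred = predict_secondary(offset, slope, 600 * 1_000_000)
print(pred - 600 * 1_000_000)
```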

Cheers,
-Kevin
