Wow, they wrote a pile of code! Their Python… I’m not reading 2000 lines of undocumented Python like that. It’s good feedback to know that it’s not reliable; we still have value to add here.
I see potential problems with what they did in C, at least. There is no good reason for the 4 sensors to stay in sync unless the board has the same external clock source hooked up to all of the HX711 chips. If they don’t, each chip’s internal clock should slowly drift away from the others. Some macro shots of the board could help there. If they did use a single shared clock, their C code looks less naive.
It looks like their polling frequency math is:
REST_TICKS = (120000000 / 10000 * 125) / 2 = 750,000
Dividing the clock speed in ticks per second by the rest ticks gives 120,000,000 / 750,000 = 160. So they poll at 160Hz, sleeping for half of the sensor’s sample period between checks (the HX711 at 80 samples per second has a 12.5ms period, and 750,000 ticks at 120MHz is 6.25ms).
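Reading that expression as units makes it less magic. This is my interpretation with my own macro names, not something from their source:

// My reading of their math, assuming a 120MHz MCU clock and the
// HX711 running in 80 samples-per-second mode:
#define CLOCK_FREQ      120000000                // MCU ticks per second
#define TICKS_PER_100US (CLOCK_FREQ / 10000)     // 12,000 ticks per 100us
#define SAMPLE_TICKS    (TICKS_PER_100US * 125)  // 125 * 100us = 12.5ms, one sample period
#define REST_TICKS      (SAMPLE_TICKS / 2)       // 750,000 ticks = 6.25ms, half a period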
This is the least CPU intensive thing you can do, but also the least accurate. My understanding of these Delta-Sigma ADC devices is that they take the whole sample period to prepare a sample, then the sample is copied into a register and the ‘data ready’ pin is set. That sample sits in the register for up to an entire sample period, getting stale. So when you read the sample like this, it’s inevitable that you have some uncertainty about exactly how stale it has become: somewhere between zero and one polling interval (here, half a sample period). Where it lands in that range depends on the sensor and CPU clocks and blind luck. The printer moves some amount during this time, and we have no way to account for it.
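To put numbers on the “moves some amount” part (the 100mm/s travel speed here is an illustrative pick, not from their setup):

worst-case staleness = 1 / 160Hz        = 6.25ms
position uncertainty = 100mm/s * 6.25ms = 0.625mm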
In the code I’ve been testing I got around this problem by brute force: I just turned the polling rate up to 40kHz. That brings the uncertainty down to 1/40,000th of a second. It also totally gets rid of ERROR_SAMPLE_NOT_READY. This works well but adds a load of wasted CPU overhead.
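For reference, the brute-force loop is shaped roughly like this. It’s a simplified sketch; gpio_read, timer_now, timer_wake_at and hx711_shift_in are stand-ins for whatever the firmware actually provides, not real APIs:

#include <stdint.h>

// Stand-in platform primitives, assumed for this sketch:
extern uint8_t gpio_read(uint8_t pin);                     // read a GPIO level
extern uint32_t timer_now(void);                           // current tick count
extern void timer_wake_at(uint32_t ticks);                 // reschedule this task
extern int32_t hx711_shift_in(uint8_t dout, uint8_t sck);  // clock out one 24-bit sample

#define CLOCK_FREQ 120000000u
#define POLL_TICKS (CLOCK_FREQ / 40000u)  // 3,000 ticks = 25us between polls

struct hx711 {
    uint8_t dout_pin, sck_pin;
};

// Poll so often that a ready sample never sits unread for more than 25us.
void hx711_poll(struct hx711 *dev)
{
    if (gpio_read(dev->dout_pin) == 0) {   // DOUT low means a sample is ready
        int32_t sample = hx711_shift_in(dev->dout_pin, dev->sck_pin);
        (void)sample;                      // ...timestamp and queue it here...
    }
    timer_wake_at(timer_now() + POLL_TICKS);
}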
I am going to prototype a sliding window strategy to optimize the high frequency polling. I think I can cut the CPU usage by roughly 95% with no loss in resolution. But that’s probably for after I get a release out that just prints.
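Roughly the shape I have in mind, reusing the stand-in helpers from the sketch above (untested, and the window sizes are guesses):

// After each successful read we know roughly when the next sample will
// land, so sleep through most of the period and only poll fast in a
// short window around that time. The window ‘slides’ along with the
// chip’s clock drift because it is re-anchored on every read.
#define SAMPLE_TICKS (CLOCK_FREQ / 80u)     // one 80SPS period = 12.5ms
#define WINDOW_TICKS (SAMPLE_TICKS / 8u)    // wake up ~1.6ms early, to be safe
#define FAST_TICKS   (CLOCK_FREQ / 40000u)  // 25us polls inside the window

void hx711_poll_windowed(struct hx711 *dev)
{
    uint32_t now = timer_now();
    if (gpio_read(dev->dout_pin) == 0) {
        int32_t sample = hx711_shift_in(dev->dout_pin, dev->sck_pin);
        (void)sample;
        // Sleep until just before the next sample should be ready:
        timer_wake_at(now + SAMPLE_TICKS - WINDOW_TICKS);
        return;
    }
    timer_wake_at(now + FAST_TICKS);        // inside the window, poll fast
}

That keeps the 25us staleness bound but only pays the fast-polling cost for about an eighth of each sample period.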