Eddy current sensor homing and calibration problems

Basic Information:

Printer Model: modified Tronxy X5SA
MCU / Printerboard: BTT Manta 8P
Host / SBC: Pine SOQuartz 2GB
klippy.log

klippy.log.zip (30.0 KB)

Describe your issue:

I’ve been using the new BTT Eddy sensor with a patch derived from their changes applied to the master branch. I started getting homing errors following commit 4a92727. For the purposes of this report I’ve used an unmodified Klipper source to reproduce the issue.

The error “Error during homing probe: Eddy current sensor error” was not descriptive enough, so I instrumented the MCU code to return the 4 most significant bits of the sample data, which contain the error flags when the trigger fails. From the datasheet, those bits are:
bit15: ERR_UR0 (Conversion under range error)
bit14: ERR_OR0 (Conversion over range error)
bit13: ERR_WD0 (Conversion watchdog timeout error)
bit12: ERR_AE0 (Conversion amplitude error)
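To make that concrete, here is a minimal Python sketch (not Klipper code; bit names follow the datasheet list above) that decodes those four flags from a raw DATAx_MSB word:

```python
# Sketch: decode the 4 error flags in an LDC1612 DATAx_MSB register value.
# Bit positions and names follow the datasheet description above.

ERROR_BITS = {
    15: "ERR_UR (conversion under range)",
    14: "ERR_OR (conversion over range)",
    13: "ERR_WD (conversion watchdog timeout)",
    12: "ERR_AE (conversion amplitude error)",
}

def decode_errors(data_msb):
    """Return the names of the error flags set in a 16-bit DATAx_MSB value."""
    return [name for bit, name in ERROR_BITS.items() if data_msb & (1 << bit)]
```

For example, `decode_errors(0x1000)` reports only the amplitude error, which is the case described below.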

Bit 12 was set, so it’s either an Amplitude High Error or an Amplitude Low Error. I then changed the LDC1612 setup to not report Amplitude High errors in the DATAx_MSB registers, and homing worked without issue, so the error is confirmed as an Amplitude Low Error.

Originally I thought the cause was a transition through a distance that produced a low amplitude error as the bed came into range. After further reading of the datasheet, which suggests that amplitude errors are the result of a poorly chosen drive current, I’m not so sure.

LDC_CALIBRATE_DRIVE_CURRENT CHIP=btt_eddy

Auto calibration chose a value of 15 for drive current. After bumping to 16 I cannot reproduce the amplitude errors.

That raises the question of whether drive current calibration is working correctly.

I tried calibrating drive current with the bed against the sensor and with it far away, and the calibration always returns 15. In fact, if I change the saved reg_drive_current configuration, the drive current calibration returns as follows:

reg_drive_current=2-13 → cal result: 14
reg_drive_current=14 → cal result: 14
reg_drive_current=15 → cal result: 15
reg_drive_current=16-31 → cal result: 16

Right now I think the calibration either isn’t working correctly or isn’t picking the optimal value, and that’s the root of the problem. I’m going to review the calibration routine, but I’ve spent 2 days debugging with the datasheet and thought I should post what I’ve got so far in case anyone else has more experience with this.


I haven’t worked with that Eddy sensor or anything related to it, but I did look at how it should be configured.

As I understand it, BTT points users to their forked version of Klipper. I went there and saw that the fork doesn’t have any additional changes; it’s only behind vanilla Klipper by 74 commits.

So I checked whether it can be merged, and everything seems fine.
Then I spotted a commit in vanilla Klipper: “sensor_ldc1612: Initial support for bulk reading ldc1612 sensor”.
Following it, I came to this pull request in vanilla Klipper: “Support for ldc1612 eddy current probes”,
which was merged on “Apr 9”.

From my point of view the Eddy sensor is already supported in vanilla Klipper, and as far as I can see the official documentation already covers it:
https://www.klipper3d.org/Eddy_Probe.html

Are you sure you still need some patches?

P.S. Maybe I’m wrong, as I can see the following info in the BTT Eddy documentation:

You need to change from the main branch of klipper to BTTs branch as discussed in the warning at the top of the page. This is only temporary and will be updated accordingly.

Still accurate as of **25-05-2024**.

I’m using BTT’s Klipper but in a roundabout way. I created a patch off their fork and applied it to Klipper master. Everything works great but Kevin’s commit 4a92727 highlighted a problem due to improved error checking. For the issue to be investigated I need to reproduce it against unmodified Klipper which I did for this report.

Basically, I think we’re choosing the drive current for eddy current sensors incorrectly. I’m not convinced it’s picking the correct value, or that we should even be using sensor auto-calibration to derive the drive current. According to the datasheet, the drive current should be chosen based on the sensor’s parasitic resistance (Rp) using the values in Table 42. The manufacturer of the sensor should know Rp from their design and should be able to specify the IDRIVEx register setting.

In my case, ‘LDC_CALIBRATE_DRIVE_CURRENT CHIP=eddy’ picked the value 15, which results in occasional Amplitude Low errors that get worse as the temperature increases. Setting reg_drive_current = 16 fixed my issue, but I ran a test of 30+ current calibrations while changing the initial register value, with the following results:

initial IDRIVE0=2-13 → cal result: 14
initial IDRIVE0=14 → cal result: 14
initial IDRIVE0=15 → cal result: 15
initial IDRIVE0=16-31 → cal result: 16

So I wonder if the calibration starts with the register at 15, and that happens to work for most sensors given their similar geometry.

Ultimately, according to the LDC1612 datasheet, the drive current should be a design artifact not a value determined at run time.

Thanks for investigating. I agree that an amplitude error generally indicates an issue with the “drive current”.

Alas, I don’t know of a generic way to choose a drive current other than to run a calibration tool, or have a table for known hardware.

For what it is worth, though, I haven’t seen reports of other users needing to override this value with typical BTT sensors. I guess we can see if this is a regularly reported issue.

-Kevin

I think we’re early days with these sensors, especially for homing. I think that everyone using the BTT Eddy will be using the BTT fork which doesn’t have the sensor error check. Time will tell. I’ve also found that IDRIVE0=16 produces sporadic low amplitude errors when the sensor temperature is around 70C, but IDRIVE0=17 results in high amplitude errors when the sensor temperature is below 56C.

The error bits for bulk sensor queries are counted by ldc1612.py, but the sample values still look like they’re used. Sensor errors will stop mesh generation that uses homing to probe each position, but I don’t think errors will stop mesh generation that uses METHOD=SCAN. I know that’s not a thing in official Klipper yet.
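As an illustration of what handling those flagged bulk samples could look like, here is a small Python sketch. It is not the ldc1612.py code; it just assumes each sample is the 32-bit concatenation of DATAx_MSB and DATAx_LSB, with the error flags in bits 31:28 and the 28-bit conversion result below them:

```python
# Sketch: separate LDC1612 bulk samples with error flags from clean ones,
# assuming a 32-bit combined sample: error flags in bits 31:28,
# conversion data in bits 27:0.

ERROR_MASK = 0xF << 28

def split_samples(raw_samples):
    """Return (good conversion values, flagged raw samples)."""
    good = [s & 0x0FFFFFFF for s in raw_samples if not (s & ERROR_MASK)]
    bad = [s for s in raw_samples if s & ERROR_MASK]
    return good, bad
```

A scanning-style mesh could then decide per sample whether to keep, retry, or abort, rather than failing outright on the first flagged conversion.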

I’m waiting for my second BTT Eddy to arrive so I can hook it up to my oscilloscope without fear of destruction of my only probe and measure the amplitude of the sensor wave form.

Until the manufacturers start doing the engineering I don’t see another way of arriving at the drive current. The calibration could determine and report the range of values. Right now calibration uses 15 as the initial IDRIVE0 value; it could additionally run from 0 and 31 to determine the lower and upper bounds at the current sensor temperature.

Well, for what it is worth, prior to the code change an ldc1612 warning bit would have resulted in an early toolhead halt (it would have stopped descending). So, although you wouldn’t have gotten an error, it would very likely have led to an inaccurate position report.

That aside, I agree that it may be necessary to improve the error check (perhaps by only throwing an error after several failures). I suppose it may also be possible that you got a bad sensor.

-Kevin

I could have a bad sensor. I’ll know more when #2 arrives, but it hasn’t even shipped yet. To be clear, I wouldn’t even have picked up on this if Klipper’s initial IDRIVE0 had been chosen >15 for drive current calibration.

Do you have a temperature sensor on your eddy_current_probe? I’ve been pushing mine to high temperatures to see what the limits are. It works very consistently so long as the coil part of the probe stays below about 70C which isn’t hard under normal circumstances. It could be that the capacitor on the sensor board has poor thermal properties. I’ve seen X7R drift massively due to thermals.

My bed height changes by about +0.5mm from 60C to 100C and I did a 1 layer 0.25mm thickness test using the Eddy sensor for homing and adaptive mesh after thermal calibration. I measured the printed layer in many places with the following results:

Bed: 60C, Actual measured max layer height: 0.26mm
Bed: 100C, Actual measured max layer height: 0.27mm

I didn’t record the sensor temperature during homing and adaptive mesh. It differed between the runs by at least 10C, but I’m willing to call that a consistent result within experimental error.


I’ve just set up the BTT Eddy using the main branch of Klipper (v0.12.0-249), now that all the necessary changes have been made. I am also getting “Error during homing probe: Eddy current sensor error”.

My coil temp is quite high, 78 degrees C, because I let the printer heat soak before calibration, and with troubleshooting it’s now been a couple of hours at bed temp 125. (Edit: I was using the BTT Eddy MCU temp; I have just now configured it to get the BTT Eddy temp, which was likely 6 or 7 degrees lower.)

‘LDC_CALIBRATE_DRIVE_CURRENT CHIP=eddy’ selects 16. Because of this thread, I tried 17 and 14, and just got it to work at 18. We’ll see if it’s reliable.

To set up with standard Klipper, I followed steps 1 to 4 at GitHub - bigtreetech/Eddy, except for commenting out the following parts in printer.cfg:

data_rate: 500

[temperature_probe btt_eddy]
sensor_type: Generic 3950
sensor_pin: eddy:gpio26
horizontal_move_z: 2

And there we reach the end of my knowledge. I would expect that once BTT updates their page they will either have to address the issue in their instructions, or there’ll be a flood of help request posts on the Klipper forums.

Why would you “comment out” this setting?


Because it was giving an error that temperature_probe is not a valid option. Perhaps it works in the BTT branch?

Anyway, setting it up as temperature_sensor appears to work just fine.
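For reference, the working setup looked roughly like this; the pin and thermistor type are taken from the commented-out [temperature_probe] snippet above, so adjust for your own wiring:

```ini
# Works on mainline Klipper, which rejects [temperature_probe]:
[temperature_sensor btt_eddy]
sensor_type: Generic 3950
sensor_pin: eddy:gpio26
```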


@koconnor it’s been some months since this post. I’ve helped many users on the Klipper Discord and a couple of things stand out.

  1. The error message Error during homing z: Eddy current sensor error provides little direction for correcting the problem because 4 error conditions result in this message. Is it possible to have the error bits returned from the MCU to make debugging easier?
  2. Most of the issues relate to drive current calibration and in almost every single case incrementing the value by 1 solves the issue. I think that in a lot of cases the initial IDRIVE0=15 is close to the boundary but not over so no automatic amplitude correction takes place. Choosing initial IDRIVE0=31 will have the chip correct downwards and arrive at 16 for this boundary case or 15 when the boundary case is not present.
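The second point can be sketched in Python. The `run_drive_current_calibration` function below is a hypothetical stand-in for the firmware’s auto-calibration; it just models the convergence behavior observed in the results table above (results cluster within one step of a boundary value of 15). The point is only the bounding strategy: calibrate from both extremes of the IDRIVE0 range and report the spread, instead of trusting a single run started from the middle.

```python
# Sketch of the proposed bounding strategy. run_drive_current_calibration
# is a hypothetical placeholder modeling the observed behavior, where
# results converge to within one step of a boundary value (15 here).

def run_drive_current_calibration(initial_idrive, converged=15):
    # Placeholder model: starting below the boundary lands one below it,
    # starting above lands one above it, starting on it stays put.
    if initial_idrive <= converged - 1:
        return converged - 1
    if initial_idrive >= converged + 1:
        return converged + 1
    return converged

def bound_drive_current():
    """Calibrate from both ends of the IDRIVE0 range (0..31) and
    return the (lower, upper) bound at the current sensor temperature."""
    low = run_drive_current_calibration(0)
    high = run_drive_current_calibration(31)
    return low, high
```

Reporting both bounds would also make boundary cases like mine visible, since a wide or asymmetric spread hints that the single-run result sits near an amplitude-error threshold.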

I guess both can be fixed in documentation too. Let me know which way you would prefer and I’ll work on a patch.


We can certainly change the error message. Getting the error bits back is possible, but may be a little tricky - I’m not sure.

If you’ve got a patch to improve the error message and/or documentation then that seems fine to me.

Cheers,
-Kevin

