Ah, interesting, I stumbled across this thread by accident.
It turns out I've partially reinvented the wheel.
Here it is: RFC: Pid calibration use First Order Plus Dead Time model for PID estimation by nefelim4ag · Pull Request #7243 · Klipper3d/klipper · GitHub
And here are my thoughts, for the sake of history.
To summarize how it works (and it probably differs from the above):
- The existing relay test is reused.
- It still outputs PID params.
- The first-order model is fitted over the heat-up curve, for Gain and Tau only, using just the interval while the heater is enabled, between the minimum temperature and the "max temp".
- Dead time is estimated from the heating phase of the relay test, i.e. between the heater being enabled and the temperature minimum being reached.
Either a rough estimate from the data or the fit should produce adequate values.
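As a sketch of the dead-time estimation step described above (the function and variable names here are mine for illustration, not the PR's actual code):

```python
def estimate_dead_time(samples, heater_on_time):
    """Rough dead-time estimate: the delay between the heater being
    enabled and the temperature stopping its fall (the observed minimum)."""
    # Only consider samples taken after the heater was switched on
    after_on = [(t, temp) for t, temp in samples if t >= heater_on_time]
    # The temperature minimum marks the first observable response
    t_min, _ = min(after_on, key=lambda s: s[1])
    return t_min - heater_on_time

# Synthetic relay-test data: heater switched on at t=2.0, temperature
# keeps dropping until t=3.5, then starts to rise.
samples = [(0.0, 60.0), (1.0, 59.5), (2.0, 59.0), (2.5, 58.6),
           (3.0, 58.3), (3.5, 58.1), (4.0, 58.4), (5.0, 59.2)]
print(estimate_dead_time(samples, 2.0))  # → 1.5
```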
That said, some heaters may show an almost flat heating line, which is impossible to fit.
The fit is not expected to be ideal, for example because:
- A PTC heater can change the curvature.
- AC/SSR switching can alter bed power (it locks to the AC phase, so ±1 half of a sine wave).
- The test may not start from a steady cold state (which roughly represents zero power).
- etc.
I'm not sure it's worth further complicating the code for a slightly better fit.
So, from my current PoV, it should work like this:
- One does a normal PID_CALIBRATE, and it should just work.
- It can happen that it doesn't, for example with an overpowered heater (15% power to reach 250C).
- Then one does PID_CALIBRATE at some reduced MAX_POWER (i.e. HEAT_UP_POWER). Normally, lower is better (more points and closer to the real asymptote).
- That will overestimate the dead time, which can be fine for the bed heater or can be unwanted, because it increases the reaction time.
- PID_CALIBRATE MAX_POWER=<low_value> CYCLE_POWER=1.0 will measure the minimal dead time and should therefore produce a "tight, reactive" fit, within our model and limits.
I doubt people will recalculate values manually or semi-automatically to adjust the dead time, so there is CYCLE_POWER for "noobs".
For the conversion of FOPDT to PID params, the Skogestad SIMC rules are used.
The only tuning parameter is essentially the closed-loop time constant (lambda) relative to the process time constant, which in our case is reduced to the dead-time measurement.
The derivative term is optional and is not necessary for the PI controller to work.
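For reference, the SIMC PI rules for a first-order-plus-dead-time process (gain K, time constant tau, dead time theta) can be sketched as follows. This follows Skogestad's published rules with the usual "tight" choice lambda = theta; it is an illustration, not a copy of the PR's code:

```python
def simc_pi(gain, tau, dead_time, lam=None):
    """Skogestad SIMC PI tuning for a FOPDT process.
    lam is the desired closed-loop time constant; the common
    tight choice is lam = dead_time (theta)."""
    if lam is None:
        lam = dead_time                      # tight tuning: lambda = theta
    kc = tau / (gain * (lam + dead_time))    # controller gain
    ti = min(tau, 4.0 * (lam + dead_time))   # integral time
    return kc, ti

# Example: K = 2.0 C per unit power, tau = 100 s, theta = 5 s
kc, ti = simc_pi(2.0, 100.0, 5.0)
print(kc, ti)  # → 5.0 40.0
```

With the derivative left at zero, this yields a plain PI controller.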
Extreme cases:
- A flat heat-up line: overestimated gain, low PI gains, high overshoot.
- A dead time that is low for the given Gain/time constant: slow PWM oscillations.
The code limits the dead time from below to pwm_delay + report_interval, because the dead time effectively spans from the last observed sample to the first observed change after the control signal.
If one reduces it below that, the PWM will oscillate.
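That lower bound amounts to a simple clamp (the argument names mirror the quantities mentioned above; this is an illustration, not the PR's code):

```python
def clamp_dead_time(measured, pwm_delay, report_interval):
    """A dead time shorter than the control loop can actually observe
    causes PWM oscillation, so floor it at the delay between issuing a
    PWM change and seeing its effect in the reported data."""
    floor = pwm_delay + report_interval
    return max(measured, floor)

print(clamp_dead_time(0.1, 0.25, 0.25))  # → 0.5 (clamped to the floor)
print(clamp_dead_time(1.2, 0.25, 0.25))  # → 1.2 (already above the floor)
```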
Cooling effects are ignored, because we cannot apply control in the cooling direction or alter the PID at runtime; taking them into account would make the model less predictable.
So, my goals here are to always have:
- predictable PID values,
- predictable tuning steps (if necessary),
- the ability to interpret what is going on,
- no voodoo practices like PID tuning for a specific temperature, or with the fan enabled/disabled, a heated chamber, whatever.
The result should be stable behaviour across a wide range of temperatures and environments.
The price for that is a less aggressive controller with a slower response to external disturbances.
One can estimate the limits from the P value, for example:
P = 16
16 / 255 = 0.0627 # real P
The maximum possible instant deviation (before P alone saturates the heater) is:
1 / (16 / 255) ≈ 16 C
If one knows that an enabled fan removes 7% of the power:
0.07 / (16 / 255) ≈ 1.12 C
In practice it will be lower than that, because of the I and D terms.
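The arithmetic above, spelled out in a few lines (255 being Klipper's internal scale for PID parameters):

```python
P = 16.0                        # Kp in Klipper's 0..255 scale
p_real = P / 255.0              # power fraction per degree C of error
print(round(p_real, 4))         # → 0.0627

# Error needed for the P term alone to saturate the heater (full power):
print(round(1.0 / p_real, 1))   # → 15.9 C

# Steady-state offset if a fan steals 7% of heater power:
print(round(0.07 / p_real, 2))  # → 1.12 C
```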
I doubt we can, or even should, try to handle that with PID, because there will always be another heater that is too powerful (with correspondingly low PID gains), or another fan that is more powerful (or liquid nitrogen, compressed air, whatever) and removes even more power.
And so:
- Normal cases should be handled by the above.
- Extreme cases should be handled by feed-forward control, for example: RFC: Cooperative Heaters predictive control by nefelim4ag · Pull Request #6837 · Klipper3d/klipper · GitHub
Thanks,
-Timofey
Just in case: one can run PID_CALIBRATE WRITE_FILE=1 (the file is saved to /tmp/heattest.txt) and feed it to the following rough port of the PID_CALIBRATE code:
graph_heat_up.py <heattest.txt>
I've used it mostly to evaluate the fit.
Script
#!/usr/bin/python3
import optparse
import math
import numpy as np
import matplotlib.pyplot as plt


def coordinate_descent(adj_params, params, error_func):
    # Define potential changes
    params = dict(params)
    dp = {param_name: 1. for param_name in adj_params}
    # Calculate the error
    best_err = error_func(params)
    print("Coordinate descent initial error: %s" % best_err)
    threshold = 0.00001
    rounds = 0
    while sum(dp.values()) > threshold and rounds < 10000:
        rounds += 1
        for param_name in adj_params:
            orig = params[param_name]
            params[param_name] = orig + dp[param_name]
            err = error_func(params)
            if err < best_err:
                # There was some improvement
                best_err = err
                dp[param_name] *= 1.1
                continue
            params[param_name] = orig - dp[param_name]
            err = error_func(params)
            if err < best_err:
                # There was some improvement
                best_err = err
                dp[param_name] *= 1.1
                continue
            params[param_name] = orig
            dp[param_name] *= 0.9
    print("Coordinate descent best_err: %s rounds: %d" % (best_err, rounds))
    return params


def parse_log(file_name):
    pwm_time = []
    data = []
    with open(file_name, 'r') as f:
        for line in f.readlines():
            columns = line.split()
            if columns[0] == "pwm:":
                time = float(columns[1])
                power = float(columns[2])
                pwm_time.append((time, power))
                continue
            time = float(columns[0])
            temp = float(columns[1])
            data.append((time, temp))
    start_time, pwm1 = pwm_time[0]
    end_time, pwm2 = pwm_time[1]
    start_i = 0
    while data[start_i][0] < start_time:
        start_i += 1
    # Start the fit window at the temperature minimum after heater enable
    min_sample = data[start_i]
    for sample in data[start_i:]:
        if sample[1] < min_sample[1]:
            min_sample = sample
    start_i = data.index(min_sample)
    end_i = start_i
    while data[end_i][0] < end_time:
        end_i += 1
    return data[start_i:end_i], pwm1


def main():
    # Parse command-line arguments
    usage = "%prog [options] <logfile>"
    opts = optparse.OptionParser(usage)
    options, args = opts.parse_args()
    if len(args) != 1:
        opts.error("Incorrect number of arguments")
    file_name = args[0]
    # Parse data
    data, pwm = parse_log(file_name)
    if not data:
        return
    t = [p[0] for p in data]
    y = [p[1] for p in data]

    def least_squares_error(params):
        gain = params['gain']
        time_constant = params['time_constant']
        tau_b = params['tau_b']
        if gain <= 0. or time_constant <= 0. or tau_b <= 0.:
            return 9.9e99
        start_time = t[0]
        T0 = y[0]
        err = .0
        for time, temp in zip(t[1:], y[1:]):
            dtemp = temp - T0
            dt = time - start_time
            x = (1.0 - math.exp(-dt/time_constant))
            x2 = (1.0 - math.exp(-dt/tau_b))
            err += (gain * x * x2 - dtemp) ** 2
        return err

    def fit():
        params = {
            'gain': y[-1] - y[0],
            'time_constant': (t[-1] - t[0]) * 0.6,
            # Second order correction
            'tau_b': 1.0
        }
        adj_params = ('gain', 'time_constant', 'tau_b')
        return coordinate_descent(adj_params, params, least_squares_error)

    params = fit()
    print(params)

    def model(times, temps, params):
        gain = params['gain']
        time_constant = params['time_constant']
        tau_b = params['tau_b']
        start_time = times[0]
        for t in times:
            dt = t - start_time
            x = (1.0 - math.exp(-dt/time_constant))
            x2 = (1.0 - math.exp(-dt/tau_b))
            yield temps[0] + gain * x * x2

    y_fit = list(model(t, y, params))
    dy_dt = np.gradient(y, t)
    plt.figure(figsize=(10, 6))
    plt.subplot(2, 1, 1)
    plt.plot(t, y, label="Measured")
    plt.plot(t, y_fit, '--', label="Fit")
    plt.ylabel("Temperature")
    plt.legend()
    plt.grid()
    plt.subplot(2, 1, 2)
    plt.plot(t, dy_dt, label="Measured slope")
    dy_fit = np.gradient(y_fit, t)
    plt.plot(t, dy_fit, '--', label="Fit slope")
    plt.xlabel("Time")
    plt.ylabel("Slope (dT/dt)")
    plt.legend()
    plt.grid()
    plt.tight_layout()
    plt.show()


if __name__ == '__main__':
    main()