I am currently working on a university project. My aim is to get a rough estimate of the posture of a person's back. For this I have built a system that collects the data from four MPU6050s at a central service, from where it is visualised. My first idea was to draw four lines and take the roll angles as the angles between the lines. My problem is that the data from the MPU6050s drifts a lot. I have already created calibration data and use a complementary filter on the NodeMCUs that read the data from the MPUs.
I am using the Adafruit_MPU6050.h library to connect to the sensors and read data from them. To calculate roll, pitch and yaw I use the following code:
sensors_event_t gyro_1;
sensors_event_t accel_1;
mpu_gyro_1->getEvent(&gyro_1);
mpu_accel_1->getEvent(&accel_1);
acc_1_angle_x = (atan(accel_1.acceleration.y / sqrt(pow(accel_1.acceleration.x, 2) + pow(accel_1.acceleration.z, 2))) * 180 / PI) - mpu_1_acc_err_x;
acc_1_angle_y = (atan(-1 * accel_1.acceleration.x / sqrt(pow(accel_1.acceleration.y, 2) + pow(accel_1.acceleration.z, 2))) * 180 / PI) - mpu_1_acc_err_y;
previousTime = currentTime;
currentTime = millis();
elapsedTime = (currentTime - previousTime) / 1000.0; // divide by 1000.0, not 1000: integer division would truncate intervals under a second to 0
gyro_1_angle_x = gyro_1_angle_x + (gyro_1.gyro.x - mpu_1_gyro_err_x) * elapsedTime; // deg/s * s = deg
gyro_1_angle_y = gyro_1_angle_y + (gyro_1.gyro.y - mpu_1_gyro_err_y) * elapsedTime;
yaw_1 = yaw_1 + (gyro_1.gyro.z - mpu_1_gyro_err_z) * elapsedTime;
roll_1 = 0.96 * gyro_1_angle_x + 0.04 * acc_1_angle_x;
pitch_1 = 0.96 * gyro_1_angle_y + 0.04 * acc_1_angle_y;
How can I make my data more stable? I tried playing with the 0.96 and 0.04 to change how much weight I give the gyro data, but it only got worse.
If you are interested, this is my project github: https://github.com/Trojan13/posture-control
Thanks in advance for your input!
-
The visualization of data on a PC is off topic here. What is wrong with your idea with the lines? – chrisl, Jan 14, 2022 at 10:00
-
Okay thanks. Where would you suggest I should ask? The problem is currently about the lines being either way too jittery or drifting way too hard. – Trojan, Jan 14, 2022 at 10:04
-
Sorry guys. I adjusted my question to make it more precise and fitting for this board. @EdgarBonet I am already using a filter to correct the values. Additionally I am using a low-pass filter the Adafruit library provides, MPU6050_BAND_44_HZ. – Trojan, Jan 14, 2022 at 10:29
-
If you add incorrect corrections to the gyro data, that could cause a pretty strong drift. Check your pull-requests. – Edgar Bonet, Jan 14, 2022 at 12:05
-
Posture for spinal health is relative; personally I would have favored strain gauges. Your filters and/or data sources are likely not being managed/filtered correctly. With multiple MPU6050s on a rigid structure you need to blend accelerations from different devices with gyros from one device to get a reasonable angle for one device. Even so it will drift, so you need to add more sources or assumptions (gravity is a nice freebie and with 4 points might be sufficient). – Abel, Jan 15, 2022 at 13:46
2 Answers
Looking at your code, I see the only place where gyro_1_angle_x is updated is here:
gyro_1_angle_x = gyro_1_angle_x + (gyro_1.gyro.x - mpu_1_gyro_err_x) * elapsedTime;
This is adding to gyro_1_angle_x a quantity that does not depend on gyro_1_angle_x itself. This means that, if there is an uncorrected systematic error in the measurement of gyro_1.gyro.x (which there always is, as you can never achieve perfect calibration), the errors will pile up, leading to an unbounded drift. Then, you estimate the roll angle as
roll_1 = 0.96 * gyro_1_angle_x + 0.04 * acc_1_angle_x;
The factor 0.96 slightly reduces the amount of drift (by 4%), but does not change the qualitative behavior: roll_1 is also prone to unbounded drift.
Suggested approach
My suggestion will be to forget about angles. Using angles to keep track of your device's attitude is a bad idea: the formulas needed to update the angles are complex (and thus easy to get wrong), computationally expensive, and prone to gimbal lock. For tracking the whole attitude, it would be more reasonable to use rotation matrices or, better yet, quaternions.
In your use case, however, you do not need to track the whole attitude. You have no means to reliably track the yaw angle, and you actually do not even care about this angle. You are only interested in the orientation of the device relative to the vertical direction. Thus I suggest you do not even try to track the attitude, and instead just track the vertical direction. This direction is given by the coordinates of the gravity vector in the device's reference frame. Then you just have one vector to track. No angles, no matrices, no quaternions: just a simple vector.
Using the accelerometers
The accelerometers are sensitive to the gravity, thus you can use their readings as an estimate of the gravity vector, as in the following pseudo-code:
at each time step:
(accel, gyro) = read_MPU();
gravity = accel;
This may give you the opposite of the gravity (direction of the zenith). That's fine, as long as you know which vector you are tracking.
There is an issue with this approach though. The accelerometers are not only sensitive to the gravity, they are also sensitive to acceleration (hence the name). If your subject moves, their acceleration will cause an error in the estimate of the gravity vector. The estimate is poor in the short term, but it should be good in the long term: as your subject is unlikely to maintain sustained acceleration, a time average of the accelerometer readings should closely match the gravity vector.
Using the gyroscopes
In order to get rid of the acceleration-induced errors, you can instead use the gyro readings. Assuming you somehow manage to get an initial estimate of the gravity vector, you can use the gyro readings to iteratively update this estimate:
at each time step:
(accel, gyro) = read_MPU();
dt = time_since_previous_time_step;
rotation = - dt * gyro;
gravity = apply_rotation(rotation, gravity);
Here rotation is the rotation vector describing how much the gravity has rotated since the previous time step. The minus sign in its formula comes from the fact that the gyro gives you the rotation of the device relative to a still reference frame, whereas you want the rotation of a still vector relative to the device's reference frame. The function apply_rotation() would apply Rodrigues' rotation formula in order to rotate the gravity vector to its new orientation.
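As a concrete illustration, here is a minimal C++ sketch of what apply_rotation() could look like. The names Vec3, cross, and dot are my own helpers, not part of the answer; the function takes a rotation vector (axis times angle, in radians) and applies Rodrigues' formula v' = v cos θ + (k ×ばつ v) sin θ + k (k · v)(1 − cos θ):

```cpp
#include <cmath>
#include <array>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

static double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Rotate v by the rotation vector r (axis = r/|r|, angle = |r| in radians)
// using Rodrigues' rotation formula.
Vec3 apply_rotation(const Vec3& r, const Vec3& v) {
    double t = std::sqrt(dot(r, r));      // rotation angle in radians
    if (t < 1e-12) return v;              // negligible rotation: avoid dividing by zero
    Vec3 k = { r[0]/t, r[1]/t, r[2]/t };  // unit rotation axis
    double c = std::cos(t), s = std::sin(t);
    Vec3 kxv = cross(k, v);               // k x v term
    double kv = dot(k, v) * (1 - c);      // k (k . v)(1 - cos t) term
    return { v[0]*c + kxv[0]*s + k[0]*kv,
             v[1]*c + kxv[1]*s + k[1]*kv,
             v[2]*c + kxv[2]*s + k[2]*kv };
}
```

On an ESP or NodeMCU you would use float instead of double, but the structure is the same.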
This approach does not suffer from the acceleration-induced errors. It is, however, prone to drift, as combining a large number of small rotations ends up piling up a large number of small errors.
Combining gyroscope and accelerometer data
The accelerometer approach is prone to short-term errors, while the gyroscope approach is prone to long-term drift. You can try to combine these two approaches, and use the accelerometers to determine the long-term behavior of your gravity estimate, while the gyroscopes determine its short-term behavior. For this you would, at each time step:
- use the gyro data to rotate the current estimate of the gravity vector
- then "steer" it gently towards the accelerometer readings.
In pseudo-code, this would look like:
at each time step:
(accel, gyro) = read_MPU();
dt = time_since_previous_time_step;
rotation = - dt * gyro;
gravity = apply_rotation(rotation, gravity);
gravity = gravity + (dt/tau) * (accel - gravity)
The last line has the effect of slightly pushing the gravity estimate towards the accelerometer readings. Note that this line looks like the implementation of a first-order low-pass filter with time constant tau. This time constant defines the boundary between the "short" time scales, where you trust the gyroscopes more than the accelerometers, and the "long" time scales, where you trust the accelerometers more. The value of tau has to be chosen by experiment. It controls the trade-off between the noise coming from the acceleration and the systematic error coming from the gyros.
With this formula, the accumulated gyro errors will always stay bounded, with a total error proportional to tau. Since the step-wise rotations are presumably very small, and since the rotation errors do not pile up in an unbounded way, you can probably get away with using a linearization of Rodrigues' rotation formula (cos θ = 1, sin θ = θ):
gravity = gravity + rotation ×ばつ gravity
and then you need no trigonometry at all.
It is also worth noting that, with this approach, you can safely initialize gravity to zero: it will immediately start growing as the program runs, and its magnitude will eventually converge to the appropriate value.
Getting the pitch and roll
Once you know the gravity vector, you can easily compute the pitch and roll. I am not saying that you should completely avoid using angles: you probably want angles on your user interface. What I am saying is that you should refrain from using angles for the purpose of keeping track of the device's orientation. Computing the angles as a post-processing step is fine.
Actually, you may as well compute those angles in the computer that does the final processing of the data. The ESPs can simply send their best estimate of the gravity vector, and let the computer do the trigonometry.
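For that post-processing step, a minimal C++ sketch could look like the following. The function name gravity_to_angles is my own, and the axis convention (roll about x, pitch about y) is an assumption chosen to match the accelerometer formulas in the question:

```cpp
#include <cmath>

// Compute roll and pitch in degrees from the gravity vector (gx, gy, gz)
// expressed in the device frame. Assumes roll about x and pitch about y,
// matching the accelerometer-angle formulas in the question.
void gravity_to_angles(double gx, double gy, double gz,
                       double& roll_deg, double& pitch_deg) {
    const double rad2deg = 180.0 / 3.14159265358979323846;
    roll_deg  = std::atan2(gy, gz) * rad2deg;
    pitch_deg = std::atan2(-gx, std::sqrt(gy*gy + gz*gz)) * rad2deg;
}
```

Using atan2 instead of atan avoids the wrong quadrant when gz goes negative, i.e. when the sensor tilts past 90 degrees.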
-
I'm assuming this described approach will still result in yaw drifting, right? – Runsva, Feb 16, 2024 at 18:07
-
@Runsva: This approach tracks only the gravity vector, from which you can compute pitch and roll, but not yaw. The assumption is that you do not care about yaw, which is the case for the OP. – Edgar Bonet, Feb 16, 2024 at 20:50
You have the right idea fusing data from both gyro and accelerometer. However, I would strongly advise against doing the fusion yourself since there is a dedicated Digital Motion Processor (DMP) on your MPU-6050 that does this for you. You just need a library that grabs data from the DMP. As far as I know the only Arduino libraries that do this are ones based on Jeff Rowberg's work.
The data output by the DMP is in quaternion format. Jeff converts it to yaw, pitch, and roll, but his conversion is flawed (it produces spurious flips and rotations near the north and south poles, a symptom of gimbal lock). Don't worry, this can be fixed.
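For reference, a hedged C++ sketch of the standard quaternion-to-Euler conversion is below. The function name quat_to_ypr is my own; the key detail is clamping the asin argument so rounding noise near the poles cannot produce NaN:

```cpp
#include <cmath>
#include <algorithm>

// Convert a unit quaternion (w, x, y, z) to yaw, pitch, roll in radians.
// Clamping the asin argument to [-1, 1] avoids NaNs from rounding errors
// near the poles; pitch still saturates at +/-90 degrees, which is
// inherent to Euler angles rather than a bug.
void quat_to_ypr(double w, double x, double y, double z,
                 double& yaw, double& pitch, double& roll) {
    roll = std::atan2(2.0 * (w*x + y*z), 1.0 - 2.0 * (x*x + y*y));
    double s = 2.0 * (w*y - z*x);
    s = std::max(-1.0, std::min(1.0, s));   // clamp against rounding error
    pitch = std::asin(s);
    yaw = std::atan2(2.0 * (w*z + x*y), 1.0 - 2.0 * (y*y + z*z));
}
```

This is the same math regardless of where the quaternion comes from, so it works unchanged on the DMP output.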
If using Jeff's library (or one based on it), you'll need to make sure the detector is level and still before sending a character over serial to trigger the calibration, and you need to do this every time you power on. The library and sketches are somewhat overwhelming/intimidating to new users. This too can be fixed.
If you want to
(a) Learn more about this whole topic
(b) Use the correct conversion from quaternion to yaw, pitch, and roll (one that does not produce gimbal lock)
(c) Hide Jeff's library behind a simple and easy to use interface
(d) Perform the calibration only once and have those results saved to persistent memory on your Arduino, so that you don't have to repeat the calibration every time you power on the device.
Then I would recommend the following tutorial: