8th-order polynomial vs EPROM lookup table?

This forum is for posts that might be considered off-topic but that may be useful or interesting to members. Examples include posts about electronics or programming in general, other microcontrollers or interesting devices, useful websites, etc.
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

8th-order polynomial vs EPROM lookup table?

Post by audioguy »

Hello,

For ZBasic devices running "native", does anyone have an opinion as to the plusses and minuses of using a 50-element lookup table of 16-bit integer values versus an 8th-degree polynomial fitting function?

The (single precision) polynomial function would look like this in BASIC:
y = a
y = y + b * x
y = y + c * x ^ 2
y = y + d * x ^ 3
y = y + e * x ^ 4
y = y + f * x ^ 5
y = y + g * x ^ 6
y = y + h * x ^ 7
y = y + i * x ^ 8

In case anyone's interested, the application is reading a 10K thermistor in a divider and converting the voltage applied to the ADC to degrees F with the best practical accuracy.

Thanks!
mikep
Posts: 796
Joined: 24 September 2005, 15:54 PM

Post by mikep »

It always comes down to the same 3 considerations: performance, memory space, and accuracy.

The real question is what you need, i.e., the requirements. If performance is paramount, the lookup table is best. If accuracy is more important, use the polynomial. The native-mode device has enough memory for either approach.
Mike Perks
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

Thanks for the reply.

Not having used one yet, I wanted to find out whether there are any 'gotchas' with using floating-point math on the devices. I understand it would be slower than a binary search on a small table, but depending on the implementation, "slower" can mean half as fast or 1/50th as fast.
In designs with other devices I've been tripped up by that, and also by the relative speed of reading EEPROM vs. RAM.

The obvious path is to spend some time and try it both ways, of course. That's difficult in this case, but perhaps necessary.

Regards,
--jim
(audioguy)
mikep
Posts: 796
Joined: 24 September 2005, 15:54 PM

Post by mikep »

It is going to be much slower using the polynomial, possibly by a factor of 10. The expansion

Code: Select all

y = a + b*x + c*x^2 + d*x^3....
is much slower than the expansion

Code: Select all

y = (((((((i*x + h)*x +g)*x + f)*x + e)*x + d)*x + c)*x + b)*x + a
Count the number of multiplies to understand why this is the case. Conversely, the table lookup is log2(N) compares and one division. You can increase accuracy by making the table larger.
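To make the multiply count concrete, here is a minimal sketch (in Python for illustration; the devices in question run ZBasic) comparing the naive expansion with Horner's form. The coefficient values are arbitrary placeholders:

```python
def poly_naive(coeffs, x):
    # coeffs[k] is the coefficient of x^k. Recomputing each power
    # costs 0+1+2+...+8 = 36 multiplies for an 8th-order polynomial,
    # plus one multiply per term: 45 in total.
    return sum(c * x**k for k, c in enumerate(coeffs))

def poly_horner(coeffs, x):
    # Horner's form: exactly 8 multiplies and 8 adds for 8th order.
    y = 0.0
    for c in reversed(coeffs):
        y = y * x + c
    return y
```

Both return the same value; only the operation count differs, which is what dominates on a small microcontroller doing software floating point.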

I'm not sure why it is difficult to compare both ways. It might take a few hours of work but it is not really that hard.

Another thing to try is to reduce the order of the polynomial. Does it really have to be eighth order, or will fourth do the job? I also googled the Steinhart-Hart equation, which is used specifically for thermistors and gives very good accuracy. Check your thermistor datasheet for the values of the coefficients A, B, and C.
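For reference, the Steinhart-Hart equation is 1/T = A + B·ln(R) + C·ln(R)^3 with T in kelvin. A minimal sketch (Python for illustration), using widely published textbook coefficients for a generic 10K NTC; a real design would take A, B, and C from the thermistor datasheet:

```python
import math

# Typical textbook Steinhart-Hart coefficients for a 10K NTC.
# These are placeholders; use your thermistor's datasheet values.
A = 1.129148e-3
B = 2.34125e-4
C = 8.76741e-8

def thermistor_deg_f(r_ohms):
    ln_r = math.log(r_ohms)
    inv_t = A + B * ln_r + C * ln_r**3   # 1/T, T in kelvin
    kelvin = 1.0 / inv_t
    return (kelvin - 273.15) * 9.0 / 5.0 + 32.0
```

With these particular coefficients, 10K ohms corresponds to roughly 25 C (77 F), which is the usual reference point for a 10K thermistor.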
Mike Perks
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

"Difficult" in this case has to do only with external requirements, not with comparing the methods. I know I'm not being completely clear; sorry.

As for the rest, that gets complicated. You're correct that a garden-variety thermistor should be adequately modeled with a 4th-order polynomial. In this particular case, though, that response is modified by the (necessary) physical encapsulation that *also* doesn't have a linear thermal response.

Assuming I've done it correctly, I've attached an image that shows both 4th and 8th order functions. You'll notice the 4th-order does not converge to the data with enough precision.

Thanks for taking the time to respond to me on this. It does show that there are people associated with the product that are responsive and interested.

Regards,
--jim
(audioguy)
Attachments
4AND8POLY.jpg
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

This is an interesting thread. Is the decision to go with an 8th-order equation due to something like "two different 4th-order equations interacting will need an 8th-order equation for a complete description"? Or will a 5th, 6th, or 7th order be sufficient?

I would have thought the speed advantage of a lookup table would be significantly greater than a factor of 10! At least you don't have any divisions. If you do use a lookup table, it can be written so that it resides in flash instead of in RAM. I am not sure how ZBasic would handle the array, but plain old GCC would put it in RAM unless you specifically use the somewhat arcane code to get it into flash and to read from that flash. But it saves potentially precious RAM.

-Tony
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

spamiam wrote:This is an interesting thread. Is the decision to go with an 8th-order equation due to something like "two different 4th-order equations interacting will need an 8th-order equation for a complete description"? Or will a 5th, 6th, or 7th order be sufficient?
Hi Tony,
It isn't a direct relationship, no. 7th order would give +/- 1 degree F over most of the range except the extreme ends. For most applications, increasing the size of the measured data set to push the ends out away from the desired range would work. In this particular application I wanted +/- 0.5 degrees F accuracy over the working range (60-105 F), and it turns out that the 8th-order function works for that. (Values above and below the working range are still reported, but an out-of-control process condition is assumed to exist, so the ultimate accuracy isn't important.)

If this probe didn't need to live in a pressurized, somewhat corrosive liquid medium, I would probably have defaulted to an LM335 instead of a thermistor. As it is, the thermistor probes are manufactured specifically for the application, so they are worth the trouble.

--jim
mikep
Posts: 796
Joined: 24 September 2005, 15:54 PM

Post by mikep »

Assuming you want to do linear interpolation between table entries, the formula is as follows:

Code: Select all

y = y[i-1] + (x - x[i-1]) * (y[i]-y[i-1])/(x[i]-x[i-1])
You can save 50% of the table storage by making the x values consecutive integers, so the denominator is 1 and the division is possibly eliminated. This only works, of course, where the x values are evenly spread. If you need to deal with fast-moving slopes then you may need the x values to be closer together in that region. I say the division is only possibly eliminated because you need to apply a scaling to the x values to make them into integers. Another way of looking at halving the table size is that the number of y values can be doubled (say, to 128) while the same amount of storage is used.
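A minimal sketch of the lookup-plus-interpolation approach (Python for illustration; a ZBasic or C version would be structured the same way). With evenly spaced x values the index could be computed directly instead of searched:

```python
import bisect

def interp_lookup(xs, ys, x):
    # xs must be sorted ascending; out-of-range inputs are
    # clamped to the table ends.
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x)      # log2(N) compares
    frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])
```

The division disappears entirely when the x spacing is a known constant (frac becomes x minus an integer), which is the 50% savings described above.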

It is usual to put a thermistor in a voltage divider circuit together with some precision (1% or better) resistors. Given the values of the fixed resistors, it is possible to derive the resistance value of the thermistor from the ADC count. The total voltage VCC is not even required as it cancels out of each side of the ratio. The resistance RT of the thermistor is given by the formula

Code: Select all

RT = (R1 * 1024)/GetADC - R1 - R2
where R1 is the resistor in the other half of the divider and R2 is a small resistor in series with the thermistor.

The value of RT can be further scaled to give an integer value in a small range, say from 0 to 127, by dividing and subtracting some constants. The values of these constants depend on the thermistor values in the required temperature range. Note that the scaling factor can be factored into the equation above to reduce the number of floating-point multiplies and divides. For example, if the scaling factor is 10 to convert to an integer, then the equation becomes

Code: Select all

RT = (R1 * 102.4)/GetADC - R1/10 - R2/10
where R1/10 and R2/10 are constant values which can be precalculated by the compiler.

Note that it is also important to apply some checks to make sure that the calculated values are not out of range.
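Putting the divider formula and the range checks together, a minimal sketch (Python for illustration; the formula assumes, as above, a 10-bit ADC reading the voltage across R1, and the resistor values are placeholders):

```python
R1 = 10000.0   # fixed divider resistor (example value)
R2 = 100.0     # small resistor in series with the thermistor (example)

def thermistor_resistance(adc_count):
    # Guard against out-of-range counts before dividing:
    # 0 would divide by zero, and counts at or past full scale
    # indicate an open or shorted sensor.
    if adc_count <= 0 or adc_count >= 1024:
        return None
    rt = (R1 * 1024.0) / adc_count - R1 - R2
    return rt if rt > 0 else None
```

Returning a sentinel for bad counts lets the caller flag a sensor fault instead of reporting a nonsense temperature.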
Mike Perks
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

Thanks, Mike.

In the measured data set, the X values are indeed consecutive integers so the interpolation for x[y=0] : x[y=n] is straightforward, except where y=0. ;-)
(sorry, couldn't help it)

Your comments about interpolation triggered a thought... if we make the ADC readings periodic at some known frequency, then the interpolation takes the form of a single-pole IIR lowpass filter, tunable by varying the sampling rate instead of the (normally present) coefficient. Not that it matters, really, but it does offer a degree of control over the process to look at it that way.
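The filter jim describes is the standard single-pole IIR low-pass, y[n] = y[n-1] + alpha*(x[n] - y[n-1]); since the effective cutoff depends on both the coefficient and the sampling rate, holding alpha fixed and varying the sample rate tunes it. A minimal sketch (Python for illustration):

```python
def make_lowpass(alpha):
    # alpha in (0, 1]: larger alpha tracks the input faster.
    state = {"y": None}
    def step(x):
        if state["y"] is None:
            state["y"] = float(x)       # seed with the first sample
        else:
            state["y"] += alpha * (x - state["y"])
        return state["y"]
    return step
```

Each call advances the filter by one sample, so the same alpha gives a different time constant at a different sampling rate, which is exactly the tuning knob described above.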

Far too much organized thought for a Monday, but I truly do appreciate the responses. :-)

--jim
pjc30943
Posts: 220
Joined: 01 December 2005, 18:45 PM

Post by pjc30943 »

Judging by how your data curve looks, if you bound the function you're fitting you should be able to get away with something substantially less than 8th order.
Also, have you tried an exponential? Attaching the data points to your post might be useful.
Paul
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

pjc30943 wrote:Judging by how your data curve looks, if you bound the function you're fitting you should be able to get away with something substantially less than 8th order.
Also, have you tried an exponential? Attaching the data points to your post might be useful.
Embarrassingly enough, I checked the data and found that zeroes had been appended to the end. It's a quirk of the input editor that I know about but too often forget. Re-running the batch, it turns out a 5th-order polynomial fits, but a Hoerl model fits better: y = a * b^x * x^c.
Exponential (and family variants) wasn't a good fit, but your suggestion did get me looking at it again and I was able to identify the problem.

I'll post the data tomorrow when I go down to the lab. It's only one floor down from here, but that seems like a long way at 12:44 AM.

Thanks for the help!
--jim
audioguy
Posts: 15
Joined: 09 October 2009, 18:09 PM

Post by audioguy »

Here's the data set for the encapsulated 10K thermistor being discussed.
It's actually a subset of a much larger table. Since the source is a printed table, we only hand-entered as much data as we thought we needed for a 60-105 degree F working range.

The curve-fitting software is CurveExpert 1.4, available as shareware.

Thanks to everyone who's responded.

Regards,
--jim
Attachments
10ktherm.txt
Data set for encapsulated 10K thermistor discussion