With the Olympics just finishing up, I was excited to see a link posted on Twitter entitled "Significant Digits and Pool Tolerances are Why There are So Many Ties in Swimming."

Whether it was a NASCAR driver taking the pole position by only six one-thousandths of a second, or Michael Phelps edging out Milorad Cavic by one one-hundredth of a second to take the gold medal in the 2008 Olympics, I have always been able to use such moments to explain to my students the importance of accuracy in measuring devices and then link that to the use of significant digits. However, it never dawned on me to ask why NASCAR can measure to 0.001 second while Olympic swimming events are only measured to 0.01 second.

When I saw this article on the engineering tolerances of pools, the difference finally became clear to me, and what a great example it is. The short explanation comes down to construction tolerances: in a pool, swimmers race in different lanes that are very slightly different lengths, whereas in NASCAR all the drivers use the same track. It would simply be too expensive to build a pool to a tolerance where every lane is exactly the same length.

I hope you will be able to pose this question to your students, share these examples, and see what thoughts they have about the difference in timing. I have many swimmers in my classes and look forward to getting their thoughts on this.

## Comments (8)

## "Significant" Figures

Love this, Doug. I already used it in conversation this morning. Although, through our summer curriculum work, we came to the realization that significant figures weren't all that significant anymore within the context of NGSS and the Michigan Science Standards. That's a bummer. I'll still incorporate significant figures as they relate to recording data in the lab, so this post will come in handy in a couple of weeks.

## AP Chemistry

I didn't teach sig figs last year for that reason in my regular chemistry courses, but I still have to include instruction about significant figures in my AP Chemistry course, so having this as a resource is awesome!

## Significant figures

As a chemistry student and lab worker, I can appreciate significant figures. As a math/stats student, I've realized how pointless and misleading they are.

Suppose for a moment that you have a device that is "accurate" to 5 decimal places. If we take, say, 5 samples from that device and find that the confidence interval is +/- 0.01, we have 1 decimal of accuracy, not 5.

When I am modeling nuclear decay, polymer chain growth, and other chemical kinetics problems, the 15 digits I have with a double-precision float calculation are not enough. In weather forecasting, we can get better models with more digits of accuracy. The entire branch of math dedicated to chaotic systems was born from using 3 decimal places truncated from 6.
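That chaos remark echoes Lorenz's famous 1963 accident, where re-entering a value truncated from 6 decimals to 3 produced a completely different forecast. The logistic map (a standard chaotic toy model, not the commenter's actual system) shows the same sensitivity in a few lines of Python:

```python
# Iterate the chaotic logistic map x -> 4x(1-x) from two nearby starting
# points: a "6-decimal" value and the same value truncated to 3 decimals.
full = 0.123456   # hypothetical 6-decimal initial condition
trunc = 0.123     # same value truncated to 3 decimals

max_gap = 0.0
for step in range(50):
    full = 4 * full * (1 - full)
    trunc = 4 * trunc * (1 - trunc)
    max_gap = max(max_gap, abs(full - trunc))
    if step in (0, 10, 20):
        print(f"step {step:2d}: gap = {abs(full - trunc):.6f}")

print(f"largest gap over 50 steps: {max_gap:.3f}")
```

The tiny initial difference of 0.000456 roughly doubles each step, so within a dozen or so iterations the two trajectories have nothing to do with each other.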

If I had my way, I would delete sig figs forever.

## Interesting

Andrew,

Thanks for the comments. As a chemistry teacher, I too appreciate the sig figs. My students may have a different opinion, but that's for other reasons. However, I can't speak as a statistician; I will have to consult my colleagues who are and get their take. My concern, though, is this: if you have an accuracy of 0.01 on a device that reads to five decimal places, then the uncertain digit is really in the hundredths place, not five decimal places out, so the instrument isn't as accurate as claimed. Or maybe I'm just not understanding the example. Thanks, though; it's indeed an interesting take.

Doug

## That's the silliness of sig figs

Hey Doug,

That's part of the silliness of sig figs.

Suppose that you are titrating a sample of something. You use 121.0324 g of substance (as measured by your $50,000 balance in a climate-controlled balance box). You mix that with X,XXX.xxxx L of solvent... Let's say that by the final calculation you have 5 "sig figs" using the textbook rules. If you had values of, say (in arbitrary units), 50.024, 50.022, 50.028, 50.025, 50.026, the 95% confidence interval is (50.022, 50.028)***. So, we are certain about 4 digits. We know the average is 50.02X. We are just not sure what X is.

If we have the same system but get final values of 50.000, 49.999, 49.997, 50.003, 50.001, the 95% confidence interval for this data is (49.996, 50.004)***. So, we are not even sure whether the first digit is a 5 or a 4. Even though we have 5 "sig figs," we don't know what any of those values are. You can say, "It's 50.000." I can say, "It's 49.997." Neither of us can say the other is wrong.
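For anyone who wants to check these intervals, here is a small Python sketch using a Student-t interval. My t-based endpoints for the second data set come out slightly narrower than the Excel figures quoted above (Excel offers more than one confidence formula), but the conclusion about which digits are uncertain is the same:

```python
import math
import statistics

def t_confidence_interval(data, t_crit):
    """Two-sided confidence interval for the mean, given a t critical value."""
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / math.sqrt(len(data))
    return (mean - t_crit * sem, mean + t_crit * sem)

T_95_DF4 = 2.776  # 95% two-sided t critical value for 4 degrees of freedom

run1 = [50.024, 50.022, 50.028, 50.025, 50.026]
run2 = [50.000, 49.999, 49.997, 50.003, 50.001]

lo1, hi1 = t_confidence_interval(run1, T_95_DF4)
lo2, hi2 = t_confidence_interval(run2, T_95_DF4)
print(f"run 1: ({lo1:.3f}, {hi1:.3f})")  # about (50.022, 50.028)
print(f"run 2: ({lo2:.3f}, {hi2:.3f})")  # about (49.997, 50.003)
```

Even with the narrower t interval, the second data set's lower bound sits below 50, so the "is the first digit a 5 or a 4?" point stands.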

In a chemical reaction simulation I did, the 15 digits I get from a double-precision floating-point calculation are not sufficient; 15 digits of accuracy are not good enough. The way I can tell the calcs are off is by using what I know about the chemical reaction A=>B=>C=>D=>E=>F=>G=>H. Say [A](t=0) = 1.00000000000000. Since there is a 1:1 ratio among all the chemicals, [A]+[B]+[C]+[D]+[E]+[F]+[G]+[H] = 1.00000000000000 at every time point. However, my final calc comes out to 1.00226. I can compute that sum at every time point I use; if I ever get exactly 1.00000000000000, it's coincidence. I can also plot the sum vs. time and see that it grows as time goes on. The error has exponential growth, too.
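The conservation check described here is easy to reproduce with a toy forward-Euler integration of a sequential first-order chain. The rate constants and step size below are invented, not the commenter's actual system; in exact arithmetic the concentrations must sum to exactly 1 at every step, while in double precision the sum slowly wanders:

```python
# Forward-Euler integration of the first-order chain A -> B -> ... -> H.
# Mass conservation says the concentrations always sum to 1 exactly;
# rounding error makes the computed sum drift slightly.
k = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4]  # hypothetical rate constants
conc = [1.0] + [0.0] * 7                  # [A], [B], ..., [H] at t = 0
dt, steps = 1e-4, 200_000                 # integrate out to t = 20

for _ in range(steps):
    flux = [k[i] * conc[i] for i in range(7)]  # rate of each conversion step
    conc[0] -= dt * flux[0]
    for i in range(1, 7):
        conc[i] += dt * (flux[i - 1] - flux[i])
    conc[7] += dt * flux[6]

total = sum(conc)
print(f"sum of concentrations after {steps} steps: {total!r}")
print(f"drift from 1.0: {total - 1.0:.3e}")
```

With a well-behaved scheme like this the drift stays tiny; stiff systems or long integrations (as in the commenter's case) can blow it up to the 1.00226 level described above.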

Unless I use special algorithms or a math library in my computer program, I can get as many digits as I want for a value, say 50. If I start using 50 digits for all my calculations, only the first 15 digits of each value are accurate. In the end, I actually have fewer than 15 digits that are truly accurate because of rounding errors. According to sig figs, I have 50 digits.
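Python's `decimal` module makes this concrete: you can ask for 50 digits, but a value that passed through a 64-bit double only carries about 16 meaningful ones. (A small illustration, not the commenter's actual program.)

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # ask for 50 significant digits

exact = Decimal(1) / Decimal(3)   # genuinely accurate to all 50 digits
via_double = Decimal(1 / 3)       # seeded from a 64-bit float

print(exact)       # 0.33333333333333333333333333333333333333333333333333
print(via_double)  # 0.333333333333333314829616256247390992939472198486328125
```

Past roughly the 16th digit, `via_double`'s digits are just the decimal expansion of the nearest binary double, not information about 1/3 — 50 digits on paper, about 16 of substance.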

That's part of the silliness of sig figs.

*** Calculations according to MS Excel.

## Hi Andrew:

Hi Andrew:

You bring up some interesting questions. It is true that the rules that we teach students to deal with reporting digits in measurements are not always strictly correct. However, I do think discussing significant digits with students is important because it helps them begin to think about precision of measurements. It also helps students think about how recorded decimals convey information about what is known, uncertain, and not known about particular measurements. A student who writes down all the digits in the calculator display (2.1142857...) when taking the average of seven measurements (2.0, 2.1, 2.0, 1.9, 2.3, 2.4, 2.1) clearly does not understand the limitations of their measurements. By introducing significant digits, introductory chemistry students can begin to understand precision of measurements without having to delve deeply into statistics. The statistics can come later if need be.
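The averaging example above is easy to verify; a short check (Python here, purely as an illustration) shows how many spurious digits the calculator display hands back compared with what the tenths-place readings actually support:

```python
import statistics

readings = [2.0, 2.1, 2.0, 1.9, 2.3, 2.4, 2.1]  # each read to the nearest 0.1
mean = statistics.mean(readings)

print(mean)            # the full calculator-style display, 2.1142857...
print(round(mean, 1))  # 2.1 -- all the precision the data supports
```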

I looked at your example of measurements of 50.000, 49.999, 49.997, 50.003, 50.001 with a 95% confidence interval of 49.996 to 50.004. Have you considered that you can show that these measurements contain 5 significant digits, and that the first four are certain while the last is not? To do so, assume that the measurements were taken on some actual physical variable; let's say mass was measured in grams. Given that 1 pound = 453.59 grams, the measurements can be listed faithfully as 0.11023 lb, 0.11023 lb, 0.11023 lb, 0.11024 lb, and 0.11023 lb. The 95% confidence interval of these measurements (hopefully I calculated correctly) is 0.110226 to 0.110238. In terms of how we teach our students significant digits, the measurements given are known to 5 significant digits, with the 5th one being uncertain.
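Here is a quick check of the gram-to-pound conversion and the resulting interval (Python, using the same 453.59 g/lb factor and a t-based 95% interval), which reproduces the 0.110226 to 0.110238 figure above:

```python
import math
import statistics

GRAMS_PER_POUND = 453.59  # conversion factor used above

grams = [50.000, 49.999, 49.997, 50.003, 50.001]
pounds = [g / GRAMS_PER_POUND for g in grams]
print([round(p, 5) for p in pounds])
# [0.11023, 0.11023, 0.11023, 0.11024, 0.11023]

mean = statistics.mean(pounds)
sem = statistics.stdev(pounds) / math.sqrt(len(pounds))
t_crit = 2.776  # 95% two-sided t critical value, 4 degrees of freedom
print(f"95% CI: ({mean - t_crit * sem:.6f}, {mean + t_crit * sem:.6f})")
# 95% CI: (0.110226, 0.110238)
```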

## Significant figures

Dear Sirs

I am not a native English speaker, and my proficiency in English is not the best. This discussion reminds me of my early days teaching General Chemistry at the Universidad Nacional de Colombia. I had similar questions and asked the senior professors; nobody had a clear concept of significant figures or of the rules that we found in the books.

I began my research and discovered that there are no clear rules, but the subject is so important that the Bureau International des Poids et Mesures wrote the "Guide to the Expression of Uncertainty in Measurement" (GUM). They could not write a "norm" or a "standard," because there are so many kinds of measurements and situations that it was impossible to create a general rule applying to all of them.

The rules that the General Chemistry books teach are just approximations; they are intended to keep the uncertainty in the right place. As a rule of thumb, the uncertainty obtained by following the rules is usually bigger than the one obtained by following the complicated recommendations of the GUM.

I have taught General Chemistry to about 2,000 students, in a course that had (unfortunately not anymore) theory and laboratory together. In my opinion, the first thing people have to learn is to measure. If they learn to measure, they will understand the limitations of the instruments and of the act of measuring. Once they understand the many factors involved in measuring, it is very easy to grasp the concept of a "significant figure," because significant figures are born from the act of measuring, the reading of an instrument, and the writing down of a measurement.

As an example, take the weight of a person. Today, balances are cheap and good enough to weigh a person to the hundredth of a kilogram; let's say 82.53 kg. But is it reasonable to give the weight of a person to the decagram? The answer is no. Even though the instrument can give you four significant figures, the person is metabolizing and modifying his or her weight every time he or she breathes, eats, drinks, or uses the toilet. Physicians normally record the weight (in my country and most of the world) just to the kilogram (in yours, perhaps to the pound); in such a case, 83 kg. The subject (object) being measured does not allow more than two significant figures, and that measurement will carry only two significant figures into any calculation where the weight of the person is involved.

Today there are many automatic and digital instruments for measuring. Students should learn that every measurement has an uncertainty; even the most sophisticated devices, like the ones used in the Olympics or in NASCAR, report figures (if they are calibrated) of which some are exact and others are dubious. To estimate (it is not possible to "calculate") the uncertainty, it is necessary to use statistics. The approximate rules that we teach are not enough to estimate the uncertainty, but they are good for many real-life situations.

Carlos Alexander Trujillo

## Gage R&R Studies

Hey Carlos,

When I was working on my MS in Applied Mathematics, I took several courses from the Industrial Engineering department. Several of those classes dealt with quality control. One of the ideas central to measurement and uncertainty is the hope that the uncertainty is normally distributed. Thus, if you have a device that claims to be accurate to +/- 0.01, you would expect all of your measurements to be off by no more than +/- 0.01, centered on the true value. This tends NOT to be true. There is bias in every device; those tolerances are general and generic. Your particular device might be off by +0.005 to -0.001. My device might be off by +0.0001 to -0.01. Someone else might be lucky and have a device that really is +/- 0.005.

In my Design of Experiments class from the Industrial Engineering department, we learned of (and my MS thesis used) a statistical method called a Gage R&R design. The basic idea behind a Gage R&R study is that you have, say, 5 volunteers. You give each volunteer a piece of a metal rod and a measuring device, then ask each one to measure the diameter of the rod 3 times and record the observations. What you will find is that each person, using the same measuring device and the same rod, comes up with his or her own biased results. Often, the results will show that the volunteers differ "significantly." If you make the study more expansive, you will find that every device you use has its own bias, and that there is some bias due to the user, too.

In a Gage R&R design, we can look at, say, 2 student assistants and 4 lab stations across 4 sections of the lab. You have each student take 2-3 readings from each lab station in each lab section. When you look at the results, you can identify the sources of bias/uncertainty and how much variability is due to student, station, and section. What you will find is that there is a bias for each student, another for each station, and a third for each lab section. Unfortunately, we tend to ignore these biases or lump them together in the standard deviation.
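As a classroom-scale illustration of the idea — all numbers invented, and a plain variance split rather than a full Gage R&R ANOVA — a few lines of Python can simulate biased operators and separate repeatability (repeat-measurement noise) from reproducibility (operator-to-operator bias):

```python
# Toy Gage R&R-style simulation: 5 volunteers each measure the same rod
# 3 times. Each volunteer has a personal bias; the device adds noise.
import random
import statistics

random.seed(42)

TRUE_DIAMETER = 10.000                                 # hypothetical rod, mm
operator_bias = [0.004, -0.002, 0.007, -0.005, 0.001]  # per-volunteer bias
NOISE_SD = 0.002                                       # device repeatability

readings = [[TRUE_DIAMETER + b + random.gauss(0, NOISE_SD) for _ in range(3)]
            for b in operator_bias]

operator_means = [statistics.mean(r) for r in readings]
grand_mean = statistics.mean(operator_means)

# within-operator spread (repeatability) vs between-operator spread
# (reproducibility) -- the two components a Gage R&R study separates
repeatability = statistics.mean(statistics.variance(r) for r in readings)
reproducibility = statistics.variance(operator_means)

print(f"grand mean: {grand_mean:.4f}")
print(f"repeatability variance:   {repeatability:.2e}")
print(f"reproducibility variance: {reproducibility:.2e}")
```

Lumping both components into one standard deviation, as the comment notes, hides the fact that much of the spread is systematic per-operator bias rather than random device noise.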

All of this means that the really good lab student you had might be really good, or might just be lucky. It also means the idea that you cannot change more than one thing at a time during an experiment is wrong. But that is another discussion for another time.