Determine the test's sensitivity. This is generally given for a specific test as part of the test's intrinsic characteristics. It is equal to the percentage of positive results among all tested persons who have the disease or characteristic of interest. For this example, suppose the test has a sensitivity of 95%, or 0.95.
Subtract the sensitivity from unity. For our example, we have 1-0.95 = 0.05.
Multiply the result above by the sensitivity. For our example, we have 0.05 x 0.95 = 0.0475.
Divide the result above by the number of tested persons who actually have the disease (the true positives plus the false negatives). Suppose 30 such cases were in the data set. For our example, we have 0.0475/30 = 0.001583.
Take the square root of the result above. In our example, it would be sqrt(0.001583) = 0.03979, or approximately 0.04 or 4%. This is the standard error of the sensitivity.
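The standard-error calculation so far can be sketched in Python (the variable names are illustrative, not from the original):

```python
import math

# Worked numbers from the steps above.
sensitivity = 0.95   # the test's sensitivity
n_diseased = 30      # tested persons who actually have the disease

# Standard error of a proportion: sqrt(p * (1 - p) / n)
standard_error = math.sqrt(sensitivity * (1 - sensitivity) / n_diseased)
print(round(standard_error, 5))  # → 0.03979
```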
Multiply the standard error obtained above by 1.96. For our example, we have 0.04 x 1.96 = 0.08. (Note that 1.96 is the normal distribution value for a 95% confidence interval, found in statistical tables. The corresponding normal distribution value for a more stringent 99% confidence interval is 2.58, and for a less stringent 90% confidence interval is 1.64.)
The sensitivity plus or minus the result obtained above establishes the 95% confidence interval. In this example, the confidence interval ranges from 0.95-0.08 to 0.95+0.08, or 0.87 to 1.03. Because a sensitivity cannot exceed 1.0 (100%), the upper bound is capped at 1.0 in practice; overshooting the boundary like this is a known limitation of the normal-approximation (Wald) interval when the proportion is close to 0 or 1.
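The full procedure can be collected into a short Python sketch (the function name and the clipping of bounds to [0, 1] are illustrative additions, not part of the original steps):

```python
import math

def sensitivity_ci(sensitivity, n_diseased, z=1.96):
    """Normal-approximation (Wald) confidence interval for a test's sensitivity.

    z = 1.96 for 95%, 2.58 for 99%, 1.64 for 90% confidence.
    Bounds are clipped to [0, 1] because a proportion cannot fall outside that range.
    """
    se = math.sqrt(sensitivity * (1 - sensitivity) / n_diseased)
    margin = z * se
    return (max(0.0, sensitivity - margin), min(1.0, sensitivity + margin))

# Worked example from the steps above: sensitivity 0.95, 30 diseased persons.
low, high = sensitivity_ci(0.95, 30)
print(round(low, 2), round(high, 2))  # → 0.87 1.0
```

With the unrounded standard error (0.0398 rather than 0.04), the lower bound comes out as 0.87, matching the hand calculation; the raw upper bound of about 1.03 is clipped to 1.0.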