There is a ton of info on each page. I understand that each page corresponds to a lecture, but is it possible to keep this structure and additionally have a page for each term (e.g., "Predictive Values") as well? It would also be nice to have links to other wiki pages for technical terms. This way it would be more digestible.
Thanks for your suggestions. We will discuss your ideas at our next wiki group meeting and see what we can incorporate. Did you have any specific suggestions for links?
Thanks again for your comments,
Rob Fitzgerald
Hi,
I have a question.
My laboratory participates in a PT program.
The institution that sends the samples and then analyzes the results uses 1 SD as its pass/fail criterion.
I have looked at different standards, including ISO 13528, which clearly defines the acceptable range for results as +/- 2 SD.
Also, the CVs for the tests, received from different participants, vary between 23 and 59. All results reported by my laboratory remain below 1.7 SD, and I would qualify those results as valid; however, they received a fail code.
I would appreciate your comments,
Regards,
Roger
I wonder why this question has not been answered so far?!!
My guess is that the program assesses accuracy rather than precision (as all external QC programs do), which is calculated as the % deviation of your measured value from the "true" or "expected" value. Presumably they used 1 SD as an estimate of the limit of deviation, which should not exceed 2%.
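To make the arithmetic concrete, here is a minimal Python sketch of the two ways a PT result can be scored — % deviation from the assigned value versus an SD-based index. All numbers (target 100, peer-group SD 5, reported result 108) are hypothetical, not from any real program:

```python
# Hypothetical PT scoring example: % deviation vs. SD-based scoring.
target = 100.0    # assigned ("true"/"expected") value from the PT provider
peer_sd = 5.0     # peer-group SD for this analyte
measured = 108.0  # the laboratory's reported result

# Accuracy expressed as % deviation from the assigned value:
pct_deviation = 100.0 * (measured - target) / target  # -> 8.0 (%)

# The same result expressed in peer-group SD units (SDI):
sdi = (measured - target) / peer_sd  # -> 1.6

print(f"% deviation: {pct_deviation:.1f}%")
print(f"SDI: {sdi:.1f}")

# The same result can pass or fail depending on the program's cut-off:
print("pass" if abs(sdi) <= 2 else "fail", "under a +/-2 SD criterion")
print("pass" if abs(sdi) <= 1 else "fail", "under a +/-1 SD criterion")
```

This illustrates Roger's situation: a result at 1.6–1.7 SD would pass a +/-2 SD criterion (as in ISO 13528) but fail a +/-1 SD one.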
What are the types of reference values?
How do we establish reference values, and are "reference values" and "reference range" the same thing?
When we compare instruments twice a year, what is an appropriate means of determining if they are the same, and what do we do if they are not?
We have 8 remote sites that have different models of the same vendor's instruments. How do you establish a reference interval, or explain the difference to the medical staff?
Appreciatively
CLIA provides guidelines on instrument comparison.
One way to approach this (this is by no means comprehensive or the best way, just one of many ways; I am trying to be as specific as possible to be useful):
You could use 20 patient samples with a wide range of values to correlate the two instruments (Deming regression is a good place to start).
Set your own acceptance criteria based on clinical considerations. If the slope is acceptable and close to 1 (and r^2 is close to 1), you are on track. If there is an intercept, determine whether it is statistically significant (p-value). If the slopes are different, determine whether it is something to do with instrumentation, reagents, or any other experimental factor (troubleshoot). If the slopes are truly different, determine the clinical significance and develop a correction factor, if possible. But if it changes every six months, you probably have some sort of systemic lab problem you might want to investigate further. If the slopes and intercepts are clinically comparable, or if a valid correction factor can be applied to make the instruments report the same value, you have nothing to explain to the medical staff. If the instruments are significantly different, you have a more complex issue. I know this is not fully helpful, but...
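The Deming slope and intercept mentioned above can be computed directly from the paired patient results. Here is a minimal Python sketch; the `deming` helper and the paired data are invented for illustration, and the error-variance ratio is assumed to be 1 (equal imprecision on both instruments):

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression of y on x, allowing measurement error in both
    methods. lam is the assumed ratio of y-error variance to x-error
    variance (1.0 when both instruments have similar imprecision)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)
             ) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical results for the same 20 patient samples on two analyzers:
inst_a = [2.1, 3.5, 5.0, 7.2, 9.8, 12.4, 15.0, 18.3, 22.1, 25.6,
          30.0, 35.2, 40.1, 45.8, 52.3, 60.0, 68.4, 75.1, 82.9, 90.5]
inst_b = [2.3, 3.6, 5.2, 7.0, 10.1, 12.1, 15.4, 18.0, 22.5, 25.2,
          30.6, 34.9, 40.8, 45.1, 53.0, 59.2, 69.1, 74.3, 83.8, 89.7]

slope, intercept = deming(inst_a, inst_b)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```

A slope near 1 and an intercept near 0 (and neither statistically nor clinically significant) would support treating the instruments as comparable; judging "near" is the clinical-acceptance-criteria step described above.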
Hi,
I have a question. In our hematology lab we run daily QC with commercially available high, normal, and low controls. However, because of the high cost (I am from a third-world country) and unreliable supplies, we are looking at running patient samples for QC. That is to say, we run the QC samples supplied by the manufacturer and ensure that the instrument is functioning properly. Then we run patient samples and select 3 samples with high, normal, and low values to be run with the next batch of samples (within 6 hours). Is such a QC run reliable? How do I calculate the SD (accepted range) for these samples?
Thank you