When comparing test sensitivities, it’s critical to use a calibrated viral load cutoff like 10^6 RNA copies/ml. Ct values on their own are almost meaningless because they aren’t comparable between labs. Here’s why, with an example:
1/8
I was researching the sensitivity of the Innova rapid antigen tests used widely across the UK. Low-viral-load samples need to be excluded because they generally don’t represent contagious levels of virus. Many studies use a PCR cutoff around Ct<25.
2/8
The big Liverpool Covid-SMART study showed a sensitivity of 69% at Ct<24. Kinda poor. https://www.bmj.com/content/374/bmj.n1637.short
3/8
But then I checked this German survey of antigen tests, which found a sensitivity of 94% at Ct<25. https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2021.26.44.2100441#html_fulltext That’s based on a much smaller sample size than the Liverpool study, though. Is it just wrong?
4/8
No! Ct values are relative to a particular device and test protocol (e.g. a sample diluted 2x comes up one Ct higher). Sadly, most studies don’t publish their Ct calibration in absolute terms like RNA copies per ml. But these two do!
5/8
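A minimal sketch of that relationship, assuming perfect PCR efficiency (each cycle is exactly a 2x change in concentration) and a single illustrative calibration point; real assays need their own standard curve:

```python
def copies_per_ml(ct, ct_ref, copies_ref, efficiency=2.0):
    """Convert a Ct value to RNA copies/ml from one calibration point.

    Assumes each PCR cycle changes the measured concentration by a
    factor of `efficiency` (2.0 = perfect doubling), so a sample
    diluted 2x comes up one Ct later. Real assays should use their
    own standard curve rather than this idealised model.
    """
    return copies_ref * efficiency ** (ct_ref - ct)

# Illustrative calibration point: Ct 25 == 10^6 RNA copies/ml
print(copies_per_ml(25, ct_ref=25, copies_ref=1e6))  # 1e6
print(copies_per_ml(26, ct_ref=25, copies_ref=1e6))  # 5e5: one Ct higher = half the concentration
```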
German study: Ct=25 is 10^6 RNA copies/ml.
Liverpool study: Ct=24.4 is 10^4 RNA copies/ml.
That’s a 100-fold difference in viral load at almost the same Ct!
On the Liverpool study’s scale, 10^6 RNA copies/ml corresponds to Ct=18.3.
6/8
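To make the comparison concrete, here’s a rough sketch that maps a Ct cutoff from one study’s scale onto the other’s via absolute copies/ml, using the calibration points above and the same idealised one-Ct-per-doubling assumption:

```python
import math

def equivalent_ct(ct_a, ct_ref_a, copies_ref_a, ct_ref_b, copies_ref_b, efficiency=2.0):
    """Translate a Ct cutoff on assay A to the equivalent Ct on assay B.

    Goes via absolute viral load (copies/ml), using one calibration
    point per assay and assuming both amplify by `efficiency` per cycle.
    """
    copies = copies_ref_a * efficiency ** (ct_ref_a - ct_a)        # cutoff as copies/ml
    return ct_ref_b - math.log(copies / copies_ref_b, efficiency)  # back onto B's Ct scale

# German survey: Ct 25 ~ 10^6 copies/ml.  Liverpool: Ct 18.3 ~ 10^6, Ct 24.4 ~ 10^4.
print(equivalent_ct(25, 25, 1e6, 18.3, 1e6))    # 18.3 -> German Ct<25 maps to Liverpool Ct<18.3
print(equivalent_ct(24.4, 24.4, 1e4, 25, 1e6))  # ~31.6 -> Liverpool Ct<24.4 is roughly German Ct<31.6
```

Put in absolute terms, the Liverpool Ct<24 cutoff is a far more lenient threshold than it looks.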
And sure enough, at Ct<18.3 the Liverpool study shows a sensitivity of 91%, right in line with the German study’s 94%!
7/8
So in conclusion: don’t trust any test sensitivity number unless it’s accompanied by a cutoff expressed in absolute RNA copies/ml.
>10^6 RNA copies/ml is a good standard threshold for contagious samples.
8/8