Over the last month we have discussed a recent healthcare trend – the declining use of grids in mobile radiography – and we last left off comparing anti-scatter grids (hardware) with grid emulation software to see which works best.
This is what we found: when the correct grid is selected, properly matched and positioned, a hardware grid will outperform grid emulation software in both scatter reduction and image quality.
Why, then, would a radiologic technologist or hospital department use anything else?
Good question. Keep reading to learn the No. 1 reason why grid emulation software is promoted.
Grid Software Claims to Reduce Radiation Dose to Patients
Radiation dose reduction is the most commonly cited advantage of using grid software for mobile radiography over grid hardware. In theory this is true: removing the grid decreases exposure to the patient. But is dose reduction actually occurring in all clinical settings via software? It seems ironic that the same promise – dose reduction – was made when the field switched from conventional radiography to digital radiography years ago, and it was later determined this may not be the case.
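The dose-reduction claim rests on simple technique arithmetic: removing the grid removes the grid conversion (Bucky) factor from the mAs calculation. A minimal sketch of that arithmetic, using typical textbook conversion factors (illustrative values only – actual factors vary with grid ratio, grid frequency, and kVp):

```python
# Illustrative sketch (not vendor data): how removing an anti-scatter grid
# can lower patient entrance exposure, using approximate textbook grid
# conversion (Bucky) factors. These values are typical published
# approximations and vary with grid ratio, frequency, and kVp.

GRID_CONVERSION_FACTORS = {
    "no grid": 1.0,
    "5:1": 2.0,
    "8:1": 4.0,
    "12:1": 5.0,
}

def adjusted_mas(mas_old: float, gcf_old: float, gcf_new: float) -> float:
    """Standard technique adjustment: mAs_new = mAs_old * (GCF_new / GCF_old)."""
    return mas_old * (gcf_new / gcf_old)

# Example: a portable exposure at 5 mAs with an 8:1 grid.
# Removing the grid (and relying on software) allows roughly:
mas_without_grid = adjusted_mas(
    5.0,
    GRID_CONVERSION_FACTORS["8:1"],
    GRID_CONVERSION_FACTORS["no grid"],
)
print(mas_without_grid)  # 1.25 mAs -- about a 4x reduction in this sketch
```

Note that this arithmetic only quantifies entrance exposure; it says nothing about the image quality lost when scatter is no longer physically removed, which is exactly the trade-off discussed below.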
The ASRT published a Best Practices in Digital Radiography white paper, and plenty of others have discussed the potential for dose creep – technologists using higher techniques than needed because digital imaging provides a wide exposure latitude. In other words, the radiographer uses more dose than necessary, taking advantage of that latitude. This has a cumulative dose effect on patient and healthcare provider alike.
Even with that said, diagnostic radiology has not seen a negative biologic effect on an adult since the 1940s. The people who were impacted were the radiographers themselves, not the patients. Radiographers of the past developed leukemia and cataracts (to name a couple of diseases) at much higher rates than their non-exposed counterparts, and this is what thrust the concept of radiation safety (think ALARA) into the spotlight. Radiation safety practice has improved substantially since then.
And yet, in our current field, there seems to be an enormous push to lower radiation dose in general radiography for adult patients. But these patients haven't suffered negative pathologic events caused by a general portable X-ray machine in a very long time – if ever. So what's all the fuss about dose in general radiography?
Who Are We Looking Out For?
Are we lowering radiation dose during X-rays but reducing image quality in the process? Are we increasing the cost of healthcare to patients under the guise of patient safety, when in reality there is no need for extra protection to the patient during the general diagnostic radiographic procedure?
We absolutely need to optimize dose in CT (exposure levels are much higher), and we should always be dose-conscious with pediatric patients. Unfortunately, the big push in our industry right now does not seem to relate back to that.
It seems the trend is to capitalize on the low-hanging (and quite frankly lucrative) fruit before educating the populace about dose, scatter radiation, and optimizing image quality – and, lastly, reminding everyone of the potentially life-saving effects of ionizing radiation when used properly.
Where Should We Go From Here?
Image quality and dose reduction should always be discussed at the same time. Without image quality – without the proper diagnostic information – we have strayed from one of the three principles set by the ICRP for radiation protection: justification. Diagnostic information is paramount. We need to provide radiologists with the best and most detailed information possible so they can provide an accurate reading.
Perhaps a combination of hardware and software is the answer. Regardless, we should strive for the best image quality, which can lead to earlier treatment and potentially save lives.