A new overview of systematic reviews of published randomized controlled trials of mobile health apps found that just 23 RCTs of currently available apps have been conducted, and fewer than half of those showed a positive health effect from the app in question.
A group of researchers from the Centre for Research in Evidence-Based Practice at Bond University in Queensland, Australia conducted the review.
"Smartphone popularity and mHealth apps provide a huge potential to improve health outcomes for millions of patients," the researchers wrote. "However, we found only a small fraction of the available mHealth apps had been tested and the body of evidence was of very low quality. Our recommendations for improving the quality of evidence and reducing research waste and potential harm in this nascent field include encouraging app effectiveness testing prior to release, designing less biased trials, and conducting better reviews with robust risk of bias assessments. Without adequate evidence to back it up, digital medicine and app 'prescribability' might stall in its infancy for some time to come."
The researchers searched four databases and the Journal of Medical Internet Research for systematic reviews of health app RCTs. The idea was to suss out "prescribable apps": those that were both backed by credible evidence and currently available. They identified a handful of candidates, mostly in the areas of diabetes, mental health, and obesity. But even these trials had small sample sizes and short durations, and were subject to numerous biases.
"The overall low quality of the evidence of effectiveness greatly limits the prescribability of health apps," the review states. "mHealth apps need to be evaluated by more robust RCTs that report between-group differences before becoming prescribable."
There are unique challenges to reviewing health apps, the researchers wrote. For one thing, it's hard to create a good control group that accounts for a "digital placebo effect." Many studies compare the app to usual care, when a dummy app of some kind would make a better comparator, the researchers wrote. For another, since subjects have access to the whole of the app store, it's important to control their use of non-study apps that could distort results.
"We believe that sharing information amongst researchers working in app development is vital to reduce research waste and prevent re-invention of wheels," the authors wrote. "We also found several cases where, despite the initial trials failing to demonstrate any positive benefit, the apps were still released, adding to the ‘noise’ rather than the ‘signal’ in this field, and leading to opportunity costs. In other cases, app testing and release were terminated due to lack of ongoing funding as the technology requires constant updates and improvements."
There are a number of efforts underway to get doctors to prescribe apps; the researchers pointed to the NHS App Library as a prominent example. But to create a credible app formulary, the mobile app world will need better evidence.