In recent years, we have witnessed the implementation of various community programs based on proactive monitoring and secondary prevention interventions, aiming to improve the health of chronically ill patients and reduce costs to the system, for example by avoiding unnecessary hospitalizations. But when studies have been conducted to evaluate the effectiveness and efficiency of these programs, the results have been rather poor.
Some of these disappointing results were presented at the Congress of the European Union Geriatric Medicine Society (EUGMS). The symposium "Strategies in primary care to promote the autonomy of frail elderly people" presented the preliminary results of three large cluster randomized clinical trials from the Netherlands:
- Metzelthin BMJ 2013: 340 patients from 12 primary care centres, selected for frailty.
- Bleijenberg BMC Geriatrics 2012 (U-PROFIT): > 3,000 patients in 39 primary care centres, selected for polypharmacy, frailty or little contact with primary care.
- ISCOPE Netherlands (Trial Register number 1946): > 11,000 people in 59 primary care centres, selected for physical, functional, mental and social complexity.
None of the three studies showed improvements in function or quality of life, or any reduction in resource utilization or cost. Along the same lines, I will cite a couple of programs that have also failed to achieve the expected results: a) Virtual Wards from the Nuffield Trust (UK), a long-term program in which a multidisciplinary team delivers interventions at home; b) Guided Care (USA), in which trained nurses assess the patient and create a plan based on disease management, case management, patient empowerment, lifestyle changes, education, caregiver support and geriatric assessment. This program improved satisfaction but did not demonstrate any functional or clinical impact (CJ Boult, Gen Int Med 2013).
As you can see, I have just cited five community interventions that were applied to thousands of people and published in prestigious journals, with little success. So, you ask, what should we do? Give up? Or, as someone said at the same European Congress, shall we try other, more robust projects? Or, I wonder, is there something we are not analyzing well enough?
1) Are the study designs appropriate? The work we have seen so far is characterized by complex, non-pharmacological, multi-domain interventions in which the human factor plays a decisive role. Therefore, in my opinion, there are some aspects that should be taken into account if we are to improve their quality:
- Select the population with simple and appropriate tools. For example, in relation to the concept of frailty: if disabled people are selected instead of the "just frail" (pre-disabled), preventive interventions may prove ineffective, as happened in some of the Dutch studies. In those cases, implementing more clinical and more intensive programs might have been more effective, even in acute-care or residential settings.
- Question the type of intervention more carefully. Given that case management, per se, has not produced encouraging results, we probably ought to design interventions that integrate different levels of care or involvement (perhaps including specialized medical intervention in selected patients), in addition to incorporating preventive activities such as physical exercise, where there is more evidence, and nutritional interventions (although much remains to be discovered about changing patients' habits).
- Define concrete and solid outcome measures and indicators, covering both specific functional objectives, such as quality of life, and efficiency (which the Dutch studies were already doing).
- Establish an appropriate follow-up, since implementation is always complex, and assess long-term trends in large, population-based samples.
2) Are the research methods used so far appropriate? The feeling is that traditional methods such as clinical trials are not the most suitable way to evaluate such programs; on the contrary, it would be more appropriate to design assessments based on practical indicators in population-based settings.
On the other hand, we ought to avoid the temptation to use this argument — that clinical trials are not necessary — to justify laxer programs that consume effort and money. It can seem that evidence-based medicine is outdated, as Professor Gaietà Permanyer recently argued at a conference in Barcelona: "Research diverges from real life in order to create ideal experimental conditions, and is sometimes also biased by the desire to achieve desirable results."
In my opinion, if I may, we ought to maintain evaluative processes that combine different methods, provided they are rigorous, so that we can move towards programs that actually have an impact. For this reason, I find it fundamental to improve understanding of, and training in, the processes and methods used to evaluate complex interventions, following, for example, the guidance of the Medical Research Council, which suggests a sequence of: 1) piloting; 2) evaluation of efficiency, process and cost-effectiveness; and 3) assessment of long-term population indicators (MRC, Developing and evaluating complex interventions: New guidance, summarized in an article in the BMJ 2008).