Monday, 9 July 2018

Do we need audits or indicators to control the quality and safety of health centres?

Mª Luisa de la Puente 


Are care indicators useful for detecting quality problems in hospitals? The question seems obvious, but it is trickier than it looks. Alex Griffiths, of the School of Management at King's College London, has just published the results of a study on the usefulness of health indicators as predictors of the quality of care in English hospitals, and the conclusion is that they cannot be trusted too much.

The Care Quality Commission (CQC) is responsible for ensuring the quality of health and social services in England (more than 30,000 providers). Given the scarcity of resources for on-site inspections, the CQC used statistical surveillance based on health indicators to prioritize those hospitals with apparent quality-of-care problems and then sent inspectors to those centres. The inspectors' mission is to analyze the situation and support providers in steering their improvements. The surprising finding was that the "Intelligent Monitoring" (IM) instrument (a risk scale composed of 150 indicators, including waiting lists, mortality, and user and staff surveys) is unable to predict which hospitals have quality problems, or to distinguish those that work well from those that need improvement (when compared against the inspectors' classification).
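To make that predictive failure concrete, here is a minimal sketch, in Python, of how one might check a composite score like IM against inspectors' on-site verdicts. The scores, verdicts and threshold are all invented for illustration; this is not the CQC's or Griffiths' actual method or data.

```python
# Hypothetical IM-style composite risk scores (higher = more presumed risk)
im_scores = [0.82, 0.15, 0.67, 0.30, 0.91, 0.22, 0.55, 0.40]
# Hypothetical inspectors' verdicts: True = quality problems found on site
inspected_bad = [False, False, True, True, False, True, False, True]

THRESHOLD = 0.5  # centres above this score would be prioritized for inspection
flagged = [s >= THRESHOLD for s in im_scores]

# Confusion-matrix counts: flagged vs. what inspectors actually found
tp = sum(f and b for f, b in zip(flagged, inspected_bad))
fp = sum(f and not b for f, b in zip(flagged, inspected_bad))
fn = sum(not f and b for f, b in zip(flagged, inspected_bad))

sensitivity = tp / (tp + fn)  # share of truly problematic centres that were flagged
ppv = tp / (tp + fp)          # share of flagged centres that really had problems
print(f"sensitivity={sensitivity:.2f}, positive predictive value={ppv:.2f}")
```

With these made-up numbers both values come out at 0.25: most problematic centres are missed (false negatives) and most flagged centres are fine (false positives), which is exactly the pattern the study describes.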

The authors propose that the IM should no longer be used as a mechanism for prioritizing inspections, given its low validity (a high number of false positives and negatives, in some cases giving false reassurance and in others penalizing without good cause). It is better to continue with random inspections while searching for new statistical surveillance instruments that predict risk more accurately. Don't we need indicators, then? Or do these unexpected results hide a problem we are not detecting well? Perhaps the IM is too simple to capture the complexity of health institutions; the weights of its components may be poorly calibrated; some important quality indicators may only be obtainable in situ at the centres themselves; or inspections and indicators may measure different things, or the same things at different depths.

Do we need inspectors, then? In England inspections are used as a method of control, and there is a body of officials dedicated to this task. Healthcare Improvement Scotland also carries out periodic inspections of the country's health centres and publishes its conclusions, sharing them with users. They consider that this measure has a positive impact on care outcomes and, by detecting needs, feeds the cycle of continuous improvement.

Martin Bardsley, of the Health Foundation, argues in his editorial that issuing a value judgment on the quality of a centre often involves measuring intangible, subjective things that are difficult to capture through indicators; it frequently requires direct observation and experience, hard to obtain from indicator monitoring alone. He defends that indicators can be useful for understanding an institution's context and that, obviously, some are more valid than others at measuring what they intend to measure. What lines of work are recommended, then? 1) identify the intelligent information that predicts the risk of poor quality; 2) explore alternative sources of information where existing indicators predict poorly; 3) explore the impact of different alert thresholds (a toy sketch of this point follows below); 4) look for ways of relating quantitative and qualitative information, so that the qualitative fills the gaps in the quantitative.
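As a toy sketch of point 3, the snippet below sweeps the alert threshold of a composite risk score and shows the trade-off between how many centres get flagged for inspection and how many real problems slip through. All numbers are invented assumptions, not data from the study.

```python
# Hypothetical composite risk scores and inspectors' verdicts (as above)
im_scores = [0.82, 0.15, 0.67, 0.30, 0.91, 0.22, 0.55, 0.40]
inspected_bad = [False, False, True, True, False, True, False, True]

for threshold in (0.3, 0.5, 0.7):
    flagged = [s >= threshold for s in im_scores]
    # Problematic centres that fall below the alert threshold
    missed = sum(b and not f for f, b in zip(flagged, inspected_bad))
    print(f"threshold={threshold}: {sum(flagged)} centres flagged, "
          f"{missed} problematic centres missed")
```

Lowering the threshold catches more problems but floods the inspectorate with centres to visit; raising it keeps the workload manageable but lets more problematic centres through. That trade-off is precisely why the choice of threshold deserves explicit study.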

Few studies measure the effectiveness of the different control methods, and it is perhaps too ambitious to believe that any control mechanism, whatever it is, can ensure that a provider maintains a high standard of quality and safety everywhere and at all times.

Many regulatory systems recognize the need for some on-site investigation mechanism, but the cost of controlling everything at all times is unaffordable and unnecessary. A formula must be found that allows direct investigation to be prioritized according to the seriousness or relevance of the problem. In our case, besides continuing to promote the publication of indicators measuring different areas of quality (as the Results Centre in Catalonia does, offering not a single synthetic indicator per centre but a wide range), it would be good to complement the analysis of statistical indicators with direct observation of specific care processes of special relevance. There is already some experience in this regard, as in the case of oncological surgery. At the last International Forum on Quality of Care in Gothenburg, interesting work on the combined use of indicators and direct observation was presented (e.g. surgical processes in the Netherlands, by the Association of Surgeons of the Netherlands (ASN) together with the government).
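One could imagine such a prioritization formula blending a statistical risk score with qualitative signals and the relevance of the care process involved. The sketch below is purely hypothetical: the weights, field names and data are invented assumptions, not any existing regulator's instrument.

```python
# Hypothetical centres with a statistical risk score, qualitative alerts
# (e.g. complaints, whistle-blower reports) and a high-stakes process flag
centres = [
    {"name": "Hospital A", "risk_score": 0.4, "alerts": 3, "high_stakes": True},
    {"name": "Hospital B", "risk_score": 0.8, "alerts": 0, "high_stakes": False},
    {"name": "Hospital C", "risk_score": 0.6, "alerts": 1, "high_stakes": True},
]

def inspection_priority(c):
    # Blend quantitative risk with qualitative alerts (capped), and weight up
    # centres performing especially relevant processes (e.g. oncological surgery).
    score = 0.5 * c["risk_score"] + 0.3 * min(c["alerts"] / 3, 1.0)
    if c["high_stakes"]:
        score += 0.2
    return score

for c in sorted(centres, key=inspection_priority, reverse=True):
    print(f"{c['name']}: priority {inspection_priority(c):.2f}")
```

The point of the design is the one the paragraph makes: qualitative information (the alerts) and process relevance can reorder a ranking that statistical indicators alone would get wrong, as happens here for Hospital A.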

References
Wouters M, van der Berg J. Improvement on quality and safety on surgery on a national scale: the Dutch example. International Forum on Quality & Safety in Healthcare. Gothenburg, 2016.
