Abstract

Theory can be important for researchers studying health services. Often investigators not trained in the social sciences assume the relevant theories are accepted by all, jumping right into the study design. However, hypothesis tests are essential to the scientific method and hypotheses are derived from theory, even if that theory has not been articulated.
On the other hand, the social science approach to health services research may be too heavily wedded to theory. One common problem is choosing measures that superficially appear to reflect the elements of a theory but may not actually correspond closely to the constructs they purport to measure. The fallacy of misplaced concreteness usually is described as mistaking a theoretical construct for a physical or “concrete” reality.1 At some risk, I suggest that an example of this fallacy would be a statement such as the following: “Well-managed primary care and community health organizations produce better quality services.” This sounds good and reflects a theory, but both the independent variable (well-managed) and the dependent variable (quality of services) are so broad that they defy valid measurement. Even if the theoretical statement is generally true, it may not apply to particular dimensions of good management or particular quality indicators.
The more usual problem, as seen in the manuscripts submitted to this journal, is the flip side of the fallacy: Concrete measures are mistakenly assumed to reflect broad theoretical constructs. An example would be treating a comparison of the rates of adherence to cancer screening guidelines in 2 clinics as a test of clinical care quality in general and, worse yet, inferring that the observed differences are due to differences in quality management efforts. We might call this “being blinded by theory.”
Often we see manuscripts that were motivated by a theory, but in the study the theory is not actually tested. The investigators were so enamored of their original theory that they did not see they were actually testing a different theory. Sadly, the theory they actually tested may be of no interest to them if they are committed advocates for a particular scientific, political, or clinical agenda. A similar problem arises when the original theory is fairly tested but the data do not support the theory. Rarely do investigators conclude their theory may have been wrong.
Maintaining a commitment to theory while not overinterpreting the data is a challenge, but it can be accomplished. Consider a recent study assessing the impact of a program that makes primary care services accessible to persons who are not eligible for Medicaid or Medicare.2 Reformers say, as they have said since before the Nixon administration, that improved access to primary care will improve health status, thereby reducing the number of emergency department visits and hospitalizations, thus lowering total medical costs. This theory is so widely accepted that many reform-minded policy makers and consumers assume it has been fully proven, but MacKinney et al2 point out that the supportive evidence is weak. Their study, published in an earlier issue of this journal, was intended to fill part of the evidence gap.
The first step is to narrow the problem. As described above, a long chain of causality comprises the theory: Access to primary care improves health status, which reduces use of the emergency department and hospital, and lowers total costs. MacKinney et al narrowed the scope of the problem by focusing on use of the emergency department and hospital as their 2 dependent variables. They valiantly collected health status information but measurement of health status in all of its clinically relevant dimensions would have been prohibitive and is not necessary. You can’t do everything in one study. On the other hand, the findings of their study do not say anything directly about the impact of the program on health status because use of the emergency room and hospital are not entirely determined by patients’ health. Drawing conclusions about the impact of the program on health status would be outrunning the data.
MacKinney et al compared persons in the primary care access program to those on a wait-list. This is not random assignment but it is a reasonable approach. Some of the patients on the waiting list may have been in less urgent need of services than those in the program. Use of a nonequivalent group is not ideal but it is a recognized quasi-experimental design that can yield useful information.
When comparing groups, it is important to adjust for differences in the patients that might affect use of the emergency room or hospital. However, you can’t control for everything. Even if the complete range of relevant clinical, psychological, sociological, and economic variables were measurable, adjusting for all of them would have required a very large sample. Pragmatically, investigators must limit their adjustments to the most important factors and acknowledge the omissions in the study limitations.
MacKinney et al found that utilization of services decreased in both groups. In their discussion, they mentioned all the limitations of their study, both in design and measures. They could only speculate that the program may have benefited health status in ways not reflected by use of service. They could not go further than that. In the end, your measures measure what they measure; no more and no less.
If MacKinney et al had drawn firm conclusions about the impact, or lack of impact, of the program on health status, they would have committed the fallacy of misplaced concreteness. Just because you have measured service use does not mean you have measured health status, even if the theory says the 2 are causally linked. Some investigators would have been tempted to say that the program must have improved health status because we “know” that increased access to primary care has that effect. To make that assertion based on the data available in this study would have been fallacious.
Theory is a useful guide to the design of research projects. Without it, hypotheses are arbitrary. On the other hand, when the results of a study do not support the theory, the report should say so. And when enough studies do not support a theory, then at some point the theory should be reconsidered. In the case of access to primary care, we know that removal of financial barriers is important, but some people will still go to the emergency room because it is more convenient. And some people will still avoid care until they need hospital admission. And sometimes hospital admission is not avoidable. With all these factors at work, we might be forced to the conclusion that many managers of medical programs have reached: financial barriers should be reduced for people who need encouragement; people who overuse services may need discouragement; and social services may be needed to deal with circumstances affecting service use that are not financial in nature. All these management efforts raise program costs and adversely affect cost-effectiveness. Not every problem can be fixed, and sometimes we need to concentrate on those we can fix with the limited resources available. Primary care and community health researchers provide useful evidence to support these decisions, even if the answers are not always clean, simple, or supportive of a particular agenda.
