
Clinical Computing: The Computer as Clinician Assistant: Assessment Made Simple

Published Online: https://doi.org/10.1176/ps.49.4.467

During the first lecture to new medical students, a professor of medicine vividly demonstrated the importance of observation. Taking a flask of liquid, he held it high so that all could see its yellow color. He then inserted and licked a finger and announced that this liquid was the sweet-tasting urine of a patient with diabetes.

Walking along the front row of students, he invited them to carry out the test he had done. Reactions ranged from curiosity to repugnance, but all dutifully followed the professor's example. When he asked what they had learned, he heard a range of taste reports. The professor thanked them for their observations and then explained that he had inserted his index finger into the urine but licked his middle finger, a difference that careful observers should have noticed. He also made clear that while it is possible to detect a sugary taste in urine from a diabetic patient, physicians have long since given up the practice of tasting urine in favor of more reliable and accurate methods.

The story may be apocryphal, but the substitution of more reliable and accurate methods of assessment, albeit less personal and tasty, has led to great advances in medicine. This column describes how computers can improve the reliability of screening, diagnosis, and monitoring of patients with mental disorders. Standardized assessment in clinical practice becomes feasible with this simple approach.

The medical assessment triad

Across all of medicine, the assessment triad comprises screening, diagnosis, and monitoring. We screen to identify individuals who may have risk factors or a disorder that would benefit from treatment. We diagnose disorders to confirm a need for treatment and monitoring, or for monitoring alone when treatment does not offer a favorable cost-benefit ratio.

Screening, diagnosis, and monitoring each depend on data gathered from the patient through interviews and physical and laboratory examinations. Clinical interviews provide most of the data; physical examinations add a little, and laboratory tests not much more. Leon Eisenberg (1) has noted that while some physicians become marvelous diagnosticians and experts at monitoring treatment, even the best can overlook important information or fail to give proper weight to the data they gather.

A study compared diagnoses made by two clinicians experienced in using the paper version of the PRIME-MD structured diagnostic interview (2) with those made by a computer-administered version of the same instrument (3). Clinicians made diagnostic errors 10 percent of the time because of inaccurate application of the diagnostic algorithm, while the computer processed the algorithm perfectly every time. Other studies of the accuracy of psychiatric diagnoses have found error rates of up to 37 percent attributable to clinicians' mistakes in applying diagnostic rules (4-6).
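
The published PRIME-MD algorithm is not reproduced here, but the appeal of computer administration is easy to see in miniature. The Python sketch below encodes a DSM-IV-style rule for a major depressive episode (at least five of nine symptoms, including depressed mood or anhedonia); the function and symptom names are illustrative and are not the PRIME-MD instrument itself.

```python
# Illustrative sketch: a deterministic diagnostic rule of the kind a
# computer applies identically on every administration. Symptom names
# and the criterion are modeled on DSM-IV major depressive episode;
# this is not the PRIME-MD algorithm itself.

CORE = {"depressed_mood", "anhedonia"}
ALL_SYMPTOMS = CORE | {
    "weight_change", "sleep_disturbance", "psychomotor_change",
    "fatigue", "worthlessness_or_guilt", "poor_concentration",
    "suicidal_ideation",
}

def meets_depression_criterion(endorsed: set[str]) -> bool:
    """Return True if at least five of the nine symptoms are endorsed,
    including at least one core symptom."""
    endorsed = endorsed & ALL_SYMPTOMS          # ignore stray answers
    return len(endorsed) >= 5 and bool(endorsed & CORE)

# The computer never forgets the "at least one core symptom" clause --
# the kind of rule clinicians misapplied in the studies cited above.
print(meets_depression_criterion(
    {"fatigue", "sleep_disturbance", "poor_concentration",
     "weight_change", "worthlessness_or_guilt"}))   # False: no core symptom
```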

Monitoring the severity of and changes in a patient's condition is best done by tracking not only target complaints, which often change most in response to treatment, but also changes in the frequency and intensity of the common symptoms and signs of the disorder. These data are valuable both for assessing severity and change and for anchoring the patient's problem in the larger context of patients with the disorder and those without it. Clinicians track chief or target complaints far better than specific symptoms because a chief complaint carries a weight of specificity and severity that is easy to remember from visit to visit.

However, consistently following up on the same set of symptoms and signs and remembering their intensity is not easily done by humans and is seldom done systematically at all. Even when clinicians steel themselves to repeated reviews of symptom intensity and frequency, such as by use of the Hamilton Depression Rating Scale, they do not administer the instruments the same way each time, thereby producing intrarater unreliability. And poor interrater reliability in multicenter trials can doom the separation of an effective treatment from a placebo control (7). Even if clinicians could overcome these limitations, collecting such data is tedious and not a good use of their abilities and time.

Assessment by computer

Computer interviews can help clinicians improve their screening, diagnostic, and monitoring functions. They can gather information directly from patients either at a personal computer in the physician's office or over a touch-tone telephone connected to an interactive voice response (IVR) program on a central computer. With desktop and laptop personal computers, questions and information are presented on the screen, and patients read and answer them by using the keyboard. With an IVR program, patients listen to the questions and respond either by pressing keys on the keypad or by saying simple words, such as yes or no or the numbers from zero to ten. Computer terminals are available whenever the office is open; an IVR program allows interviews to be completed at any time and from any touch-tone telephone.
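
As a concrete illustration, the fragment below sketches the branching logic such an interview might follow, whether the patient answers at a keyboard or on a telephone keypad. The questions, response formats, and skip rule are invented for this sketch; actual instruments script these flows in detail.

```python
# Minimal sketch of a branching computer interview. Patients answer
# yes/no or 0-10 prompts; a positive screen triggers a follow-up
# question, a negative one skips it. All questions are illustrative.

def ask_yes_no(prompt: str) -> bool:
    while True:
        answer = input(f"{prompt} (y/n): ").strip().lower()
        if answer in ("y", "n"):
            return answer == "y"
        print("Please answer y or n.")   # reprompt instead of guessing

def ask_intensity(prompt: str) -> int:
    while True:
        answer = input(f"{prompt} (0-10): ").strip()
        if answer.isdigit() and 0 <= int(answer) <= 10:
            return int(answer)
        print("Please enter a number from 0 to 10.")

def run_interview() -> dict:
    responses = {}
    responses["low_mood"] = ask_yes_no(
        "In the past two weeks, have you often felt down or depressed?")
    if responses["low_mood"]:             # branch only on a positive answer
        responses["mood_intensity"] = ask_intensity(
            "How severe has that feeling been, on average?")
    return responses

if __name__ == "__main__":
    print(run_interview())
```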

For screening, the capacity of computer interviewing is staggering. On National Depression Screening Day in 1995, a screening version of the Zung Depression Rating Scale was administered by computer via IVR to 21,037 persons. The figures were 32,300 persons in 1996 and 116,374 in 1997. Although these numbers are impressively large, such screening could be scaled up dramatically on short notice and at little cost.

For diagnosis, the accuracy of computer programs is comparable to that of excellent clinicians (3,8). But the computer enters the diagnostic process not as a competitor to the clinician: the thoroughness of a structured computer interview complements and supplements the clinician's more intuitive diagnostic process. Two heads are usually better than one, even when one is a computer.

Clinicians can delegate monitoring to computers, just as they delegate laboratory tests to laboratory technicians. Monitoring is a repetitive task in which previous and present status are compared on important dimensions. The comparisons should be independent of one another and not influenced by extraneous information—both difficult processes for humans to carry out flawlessly. Perfect reliability is the ideal in monitoring. Computers can do repetitive tasks better than humans.
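
A minimal sketch of that repetitive comparison task follows. Given serial administrations of one instrument, the program computes change the same way on every visit, using only the stored scores. The scale, the visit dates, and the 25 percent improvement threshold are placeholders, not values drawn from this column.

```python
# Sketch of computerized monitoring: compare serial severity scores
# from repeated administrations of one instrument. The dates, scores,
# and the 25% improvement threshold are illustrative only.

from datetime import date

visits = [
    (date(1998, 1, 5), 24),   # (visit date, total severity score)
    (date(1998, 2, 2), 19),
    (date(1998, 3, 2), 14),
]

baseline = visits[0][1]
for when, score in visits[1:]:
    change = score - baseline
    pct = 100 * change / baseline
    flag = "improved" if pct <= -25 else "monitor"
    # Each comparison uses only the stored scores -- identical logic
    # on every visit, uninfluenced by extraneous impressions.
    print(f"{when}: score {score}, change {change:+d} ({pct:+.0f}%), {flag}")
```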

We have long accepted that automated laboratory assessments are more reliable and accurate, as well as less costly, than those performed by laboratory technicians. Relying on computers for monitoring is comparable. Clinicians and patients can use accurate and reliable severity and change data from the computer to make better overall decisions about treatment.

Patients' reactions

Patients like computer interviews (9). Many feel less embarrassed responding to a computer's questions about sensitive subject matter such as sexual functioning (10), drug use (11), and suicidal thoughts (12,13). They often want their doctor to know this important information, but feel uncomfortable sharing it—just as some clinicians' discomfort prevents them from asking about certain topics. Most patients express no overall preference for computer or clinician interviews. Patients with social phobia are an exception and greatly prefer a computer to a clinician interviewer (14). Even patients who prefer a clinician interviewer are content to provide information through the computer interview.

Most patients will repeat computer interviews with little or no complaint when they know that the data they provide are helpful in monitoring the status of their disorder. Patients find they can proceed at their own pace while they think and answer, freed of concerns about taking too much of the doctor's time. Computer interviews can be programmed so that patients can back up and change their answers, obtain explanations of concepts they do not understand, and skip questions they would rather not answer. They can even be asked whether they want the data they have provided to go to their doctor after they have seen how they answered the questions. They can also be asked to grant permission to use their data for research.

The impact of computers on value

Patients, clinicians, and society want and need value from our health care system. Value can be defined simply as quality divided by cost. If cost can be reduced without reducing quality, value increases. If quality can be increased without increasing cost, value also increases. Simultaneously increasing quality and decreasing cost generates the greatest value.
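
With value defined as quality divided by cost, the arithmetic is simple; a toy numerical sketch (all figures invented) makes the three cases in the preceding paragraph explicit.

```python
# Toy illustration of value = quality / cost; all figures invented.
def value(quality: float, cost: float) -> float:
    return quality / cost

baseline = value(quality=80, cost=100)   # 0.80
cheaper = value(quality=80, cost=80)     # 1.00: same quality, lower cost
better = value(quality=96, cost=100)     # 0.96: higher quality, same cost
both = value(quality=96, cost=80)        # 1.20: the greatest gain in value
print(baseline, cheaper, better, both)
```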

Satisfaction with services, as measured for example by health care report cards, is sometimes used as a surrogate for quality. However, satisfaction is a pale substitute for meaningful measures of symptoms, functioning, and other dimensions of quality of life. Satisfaction surveys completed by patients have served as proxies because the cost of obtaining outcome data on symptoms and quality of life is too great when clinicians or other staff must be involved in the process.

Computer interviews that monitor symptom change and quality of life can fill this void. Such interviews have been carefully validated and are highly reliable. Results of the interviews can be interpreted and reported to the patient and also provided to the clinician. Serial reports show previous values and can be presented in tabular or graphic form. A glance reveals current status and direction of change. Patients' responses to items of special interest, such as suicide risk, can be highlighted or communicated immediately and automatically to an emergency beeper.
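
A minimal sketch of the reporting and alerting idea: serial item scores are tabulated for the clinician, and a response at or above threshold on a critical item triggers an immediate notification. The item name, the threshold of 2, and the paging stub are assumptions made for the sketch, not features of any instrument named in this column.

```python
# Sketch of automatic flagging in a monitoring report. The item key,
# the 0-4 scoring, the threshold, and page_clinician() are hypothetical.

CRITICAL_ITEM = "suicidal_ideation"   # scored 0 (none) to 4 (severe)
ALERT_THRESHOLD = 2

def page_clinician(message: str) -> None:
    """Stand-in for an emergency beeper or pager gateway."""
    print(f"*** IMMEDIATE ALERT: {message}")

def report(responses: dict[str, int]) -> None:
    for item, score in sorted(responses.items()):
        marker = " <-- flagged" if (
            item == CRITICAL_ITEM and score >= ALERT_THRESHOLD) else ""
        print(f"{item:>24}: {score}{marker}")
    if responses.get(CRITICAL_ITEM, 0) >= ALERT_THRESHOLD:
        page_clinician(f"{CRITICAL_ITEM} scored "
                       f"{responses[CRITICAL_ITEM]} on today's interview")

report({"depressed_mood": 3, "sleep_disturbance": 2, "suicidal_ideation": 3})
```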

Patients, clinicians, and those who pay for health care all want value in health services. Guidelines to establish the effectiveness of clinical services, such as the Health Plan Employer Data and Information Set (HEDIS) developed by the National Committee for Quality Assurance, are being promulgated. Computer assessment interviews permit patients to communicate at their own pace about matters of personal importance. These computer programs extend the capabilities of clinicians by providing accurate and reliable data about patients' status at a small cost in time and money. Obtaining these valuable data in routine clinical practice is not feasible without computer interviews.

With such interviews, these assessments can be carried out widely, leading to improvements in quality while lowering costs and thus substantially increasing value in health care. Improved assessment programs can be disseminated rapidly as soon as they become available. As we strive to increase value in medical care, recognition grows that some assessments important for screening, diagnosis, and monitoring simply cannot be obtained without computer interviewing.

Dr. Greist, who is editor of this column, is distinguished senior scientist at the Dean Foundation for Health, Research, and Education, 2711 Allen Boulevard, Middleton, Wisconsin 53562, and clinical professor of psychiatry at the University of Wisconsin Medical School in Madison.

References

1. Eisenberg L: Medicine: molecular, monetary, or more than both? JAMA 274:331-334, 1995

2. Spitzer RL, Williams JBW, Kroenke K, et al: Utility of a new procedure for diagnosing mental disorders in primary care: the PRIME-MD 1000 study. JAMA 272:1749-1756, 1994

3. Kobak KA, Taylor LvH, Dottl SL, et al: A computer-administered telephone interview to identify mental disorders. JAMA 278:905-910, 1997

4. Skodol AE, Williams JBW, Spitzer RL, et al: Identifying common errors in the use of DSM-III through supervision. Hospital and Community Psychiatry 35:251-255, 1984

5. Spitzer RL, Skodol AE, Williams JBW, et al: Supervising intake diagnosis: a psychiatric “Rashomon.” Archives of General Psychiatry 39:1299-1305, 1982

6. Rubinson EP, Asnis GM, Harkavy JM: Knowledge of the diagnostic criteria for major depression: a survey of mental health professionals. Journal of Nervous and Mental Disease 176:480-484, 1988

7. Demitrack MA, Faries D, De Brota D, et al: The problem of measurement error in multisite clinical trials. Psychopharmacology Bulletin 33:513, 1997

8. Erdman HP, Klein MH, Greist JH, et al: A comparison of the Diagnostic Interview Schedule and clinical diagnosis. American Journal of Psychiatry 144:1477-1480, 1987

9. Kobak KA, Greist JH, Jefferson JW, et al: Computer-administered clinical rating scales: a review. Psychopharmacology 127:291-301, 1996

10. Greist JH, Klein MH: Computer programs for patients, clinicians, and researchers in psychiatry, in Technology in Mental Health Care Delivery Systems. Edited by Sidowski JB, Johnson JH, Williams TA. Norwood, NJ, Ablex, 1980

11. Lucas RW, Mullins PJ, Luna CB, et al: Psychiatrists and a computer as interrogators of patients with alcohol-related illnesses: a comparison. British Journal of Psychiatry 131:160-167, 1977

12. Greist JH, Gustafson DH, Stauss FF, et al: A computer interview for suicide risk prediction. American Journal of Psychiatry 130:1327-1332, 1973

13. Petrie K, Abell W: Responses of parasuicides to a computerized interview. Computers in Human Behavior 10:415-418, 1994

14. Katzelnick DJ, Kobak KA, Greist JH, et al: Sertraline in social phobia: a double-blind, placebo-controlled crossover study. American Journal of Psychiatry 152:1368-1371, 1995