I heard shoes click crisply against the tile floor as my patient’s son ran up behind me in the corridor.
“Doctor,” he said, slowing to a trot, “I was wondering if we could have a word.” Of course, I nodded, as we stepped to the side of the hallway.
His mother, an older woman with chronic abdominal pain, had been admitted to the hospital under my care several days earlier, her third admission for these symptoms in the last year. Preliminary tests had returned reassuring results, but her condition remained puzzling and her symptoms continued to limit her. Throughout, her son had been concerned and supportive, standing attentively whenever I entered the room and nodding and thanking me vigorously before I left. He didn’t speak much but possessed a composure that remained unshaken by our ongoing uncertainty about his mother’s diagnosis. But as I leaned against the wall and looked at him, his brow furrowed for the first time since we’d met, I sensed that something was wrong.
He started by expressing his appreciation for the medical team’s work. But, he explained, in his quest to best help his mother, he’d also heard of a new online service that he thought might help. Crowdsourcing, he said: using the collective knowledge of crowds to improve diagnosis. He admitted he didn’t know much about this type of thing, but figured it wouldn’t hurt to ask. Could sharing his mother’s symptoms with a group of strangers help us diagnose her, and then make her feel better?
This is exactly what a recent startup, Crowdmed, is banking on. Launched in April, the company operates on the premise that crowd power can be applied to medical diagnosis, particularly diagnosis of rare conditions that have been missed by doctors. Crowdmed’s creators believe that the aggregate knowledge of the masses can trump that of small numbers of experts – in this case, physicians – who diagnose in isolated settings. They also believe this approach can help reduce health care costs.
The site’s users have two options. If they have a health issue they want solved by the crowds, they can sign up as a patient and upload their symptoms and histories into the system. Alternatively, they can sign up as “MDs” (medical detectives, no medical or health-related degree required) and solve others’ cases. Based on a proprietary algorithm, diagnoses suggested by MDs are ranked by likelihood, with the most likely on top and those with very low probability removed from the list altogether. If a list of plausible diagnoses is generated, the patient pays $200 for the service. The MDs in turn get virtual points deposited in their accounts for worthy diagnoses.
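Crowdmed’s actual ranking algorithm is proprietary, so the mechanics above can only be illustrated by analogy. As a minimal sketch, assuming a simple scheme in which each “MD” submits a diagnosis with a confidence score, the per-diagnosis scores are summed, normalized into probabilities, and long-shot suggestions are dropped (the function name, the confidence-weighting, and the cutoff value are all hypothetical):

```python
from collections import defaultdict

def rank_diagnoses(suggestions, min_probability=0.05):
    """Aggregate crowd-suggested diagnoses into a ranked list.

    `suggestions` is a list of (diagnosis, confidence) pairs, one per
    detective, with confidence in [0, 1]. Scores are summed per diagnosis,
    normalized into probabilities, and low-probability guesses are removed,
    mirroring the site's described behavior. This is an illustrative
    stand-in, not Crowdmed's proprietary algorithm.
    """
    totals = defaultdict(float)
    for diagnosis, confidence in suggestions:
        totals[diagnosis] += confidence

    grand_total = sum(totals.values())
    ranked = [
        (dx, score / grand_total)
        for dx, score in totals.items()
        if score / grand_total >= min_probability
    ]
    # Most likely diagnosis first, as on the site's ranked list.
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

# Hypothetical example: three detectives weigh in on an abdominal-pain case.
votes = [
    ("celiac disease", 0.9),
    ("celiac disease", 0.6),
    ("irritable bowel syndrome", 0.4),
    ("porphyria", 0.05),
]
print(rank_diagnoses(votes))
```

With these made-up votes, the two celiac suggestions reinforce each other and rise to the top, while the single low-confidence porphyria guess falls below the cutoff and is removed from the list, just as the service is described as doing.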
This approach has a few important practical challenges. If there is power in numbers, crowdsourcing groups will need to attract and incentivize a genuinely sizeable group of participants. They will need to validate their software. And they will need to create a system that fosters loyal patient and MD groups. Crowdmed took an exciting step in this direction last month by releasing its first case challenge, accompanied by a $10,000 first prize. Unfortunately, even beyond practical hurdles, this approach is destined to underachieve.
Diagnosis is shaped by sensory observation, and crowdsourcing methods miss this vital information. All doctors are taught – on day one of medical school, in most cases – that the most important thing in diagnosis is “a good history and physical examination” (meaning that diagnosis hinges in part on an accurate assessment of physical findings). This is crucial because things are frequently not what they seem in medicine. Patients complaining of ‘swollen lymph nodes’ sometimes end up having plugged hair follicles instead. Chest pain that for all the world sounds like a heart attack sometimes turns out to be bad heartburn. And not everything seen on an X-ray or other imaging is clinically important. Because groups such as Crowdmed must rely on what patients report when they input their information, their online “MDs” are left to diagnose with potentially incomplete or incorrect information.
Diagnosis also relies on how we frame information. For example, patients with heart failure frequently come to the hospital with difficulty breathing (dyspnea) and leg swelling (edema). Not all dyspnea plus edema, however, equals heart failure. There are many different explanations for these symptoms that have nothing to do with faulty hearts. But crowdsourcing approaches – because they rely on what patients report, or what patients say that their doctors said or concluded – may not generate these more subtle explanations.
Another major challenge is that diagnosis is dynamic, not static. Some diseases require time to fully manifest, and all tests have inherent limitations. So as new symptoms arise, and as we perform tests and imaging, new data always inform and improve our diagnosis (this is why doctors “round” on patients every day in the hospital or see them in clinic for follow-up). If real-time cases are presented too early in the process, it may be more difficult to produce correct diagnoses.
To be clear, the solution in all of this is not to simply rely on a small number of experts for accurate diagnosis. There are numerous initiatives underway to improve diagnosis through checklists, information technology, and other clinical decision-support tools. Multidisciplinary team medicine will help, and some doctors are engaging patients directly to help in diagnosis by teaching them methods for accurately recording their symptoms. Others are incorporating tools into medical charts to produce user-friendly, “smarter” electronic health records.
Supporters of crowdsourcing methods such as Crowdmed maintain that the service is meant to supplement and help clinicians, not replace or supersede them. But the knife cuts both ways. What can clarify can also confuse. What is meant to reduce costs can also create them: the expense of added tests and treatments necessitated by crowdsourced diagnoses that turn out to be wrong.
Can we eventually harness the power of the masses to meaningfully improve diagnosis? Medicine has changed so unimaginably fast over the last several decades that no one can rule it out. The concept may also already be useful for small subsections of disease. For now, however, I still have considerable doubts about its widespread application. I fear that the complexity of diagnosis will produce many experiences like the one my patient’s son had, in which efforts to use crowdsourcing ultimately produce more questions than answers, and more confusion than direction.