
Viewpoint Paper

Recognizing Obesity and Comorbidities in Sparse Data

Özlem Uzuner
DOI: http://dx.doi.org/10.1197/jamia.M3115. Pages 561-570. First published online: 1 July 2009


In order to survey, facilitate, and evaluate studies of medical language processing on clinical narratives, i2b2 (Informatics for Integrating Biology to the Bedside) organized its second challenge and workshop. This challenge focused on automatically extracting information on obesity and fifteen of its most common comorbidities from patient discharge summaries. For each patient, obesity and any of the comorbidities could be Present, Absent, or Questionable (i.e., possible) in the patient, or Unmentioned in the discharge summary of the patient. i2b2 provided data for, and invited the development of, automated systems that can classify obesity and its comorbidities into these four classes based on individual discharge summaries. This article refers to obesity and comorbidities as diseases. It refers to the categories Present, Absent, Questionable, and Unmentioned as classes. The task of classifying obesity and its comorbidities is called the Obesity Challenge.

The data released by i2b2 was annotated for textual judgments reflecting the explicitly reported information on diseases, and intuitive judgments reflecting medical professionals' reading of the information presented in discharge summaries. There were very few examples of some disease classes in the data. The Obesity Challenge paid particular attention to the performance of systems on these less well-represented classes.

A total of 30 teams participated in the Obesity Challenge. Each team was allowed to submit two sets of up to three system runs for evaluation, resulting in a total of 136 submissions. The submissions represented a combination of rule-based and machine learning approaches.

Evaluation of system runs shows that the best predictions of textual judgments come from systems that filter the potentially noisy portions of the narratives, project dictionaries of disease names onto the remaining text, apply negation extraction, and process the text through rules. Information on disease-related concepts, such as symptoms and medications, and general medical knowledge help systems infer intuitive judgments on the diseases.


Narrative patient records allow doctors to write precise notes. The narratives do not contain controlled vocabularies, and thus give doctors flexibility of expression.1 However, narratives also make the information they contain inaccessible to automated clinical systems. Natural language processing (NLP) and medical language processing (MLP) focus on technologies that can extract structured information from narratives.2

The Obesity Challenge was motivated by the clinical need for technologies that can help counter the current obesity epidemic.3 Its goal was to systematically evaluate NLP and MLP systems. Run as a shared task, the challenge was organized as a part of an i2b2 (Informatics for Integrating Biology to the Bedside) “Driving Biology Project.” A total of 30 teams participated in the Obesity Challenge and met at a workshop cosponsored by the American Medical Informatics Association. This paper provides an overview of the challenge, describes the data and the evaluation metrics, reviews the best performing systems, and identifies directions for future MLP research.

Related Work

Systematic, head-to-head evaluations of technology can help advance the state of the art and guide future research.4 Shared tasks provide a way of conducting such evaluations. They provide the participants with a common set of training documents annotated with the ground truth for a particular task and evaluate all participants on the same held-out set.

Outside the medical domain, shared tasks have included the Message Understanding Conference5 and the Text Retrieval Evaluation Conferences (TREC),6 organized by the National Institute of Standards and Technology.7 Shared tasks for biomedicine have included BioCreAtIvE8 and TREC Genomics.9

In 2006, we organized the first MLP shared task on clinical narratives.10 This task focused on two challenges involving discharge summaries: automatic de-identification of personal health information (the De-identification Challenge)11 and automatic evaluation of the smoking status of patients (the Smoking Challenge).12 These shared tasks were followed by a similar effort of the University of Cincinnati Computational Medicine Center.13 The Obesity Challenge continued i2b2's efforts to make existing clinical records available to the research community. Extracting information about obesity and comorbidities from narrative discharge summaries was the focus of this challenge.

Challenge Task: Recognition of Obesity and Comorbidities

To define the Obesity Challenge task, two experts from the Massachusetts General Hospital Weight Center studied 50 (25 each) random pilot discharge summaries from the Partners HealthCare Research Patient Data Repository. The experts identified fifteen frequently occurring obesity comorbidities: asthma, atherosclerotic cardiovascular disease (CAD), congestive heart failure (CHF), depression, diabetes mellitus (DM), gallstones/cholecystectomy, gastroesophageal reflux disease (GERD), gout, hypercholesterolemia, hypertension (HTN), hypertriglyceridemia, obstructive sleep apnea (OSA), osteoarthritis (OA), peripheral vascular disease (PVD), and venous insufficiency. They defined the Obesity Challenge task as the automatic classification of obesity and the above comorbidities, referred to as diseases, as Present, Absent, or Questionable in a patient, or Unmentioned in the discharge summary of the patient. We define these classes as follows:

  1. Present: the patient has/had the disease.

  2. Absent: the patient does/did not have the disease.

  3. Questionable: the patient may have the disease.

  4. Unmentioned: the disease is not mentioned in the discharge summary.

We expect that the technologies developed in response to the challenge will be useful for indexing, classifying, and summarizing obesity-related facts found in discharge summaries. All relevant Institutional Review Boards approved the i2b2 Obesity Challenge.

Obesity Challenge Data

Data Draw and De-identification

Obesity Challenge data consisted of 1237 discharge summaries from the Partners HealthCare Research Patient Data Repository. These data were chosen from the discharge summaries of patients who were overweight or diabetic and had been hospitalized for obesity or diabetes sometime since 12/1/04. Some of the selected summaries included no mention of the stems “obes” and “diabet”; others included at least one mention of these stems.

De-identification was performed semi-automatically. All private health information was replaced with synthetic identifiers.11



The data for the challenge were annotated by two obesity experts from the Massachusetts General Hospital Weight Center. The experts were given a textual task, which asked them to classify each disease (see list of diseases above) as Present, Absent, Questionable, or Unmentioned based on explicitly documented information in the discharge summaries, e.g., the statement “the patient is obese”. The experts were also given an intuitive task, which asked them to classify each disease as Present, Absent, or Questionable by applying their intuition and judgment to information in the discharge summaries, e.g., the statement “the patient weighs 230 lbs and is 5 ft 2 inches”. We refer to the textual task annotations as textual judgments and the intuitive task annotations as intuitive judgments.

Given the tasks, the experts agreed that:

  • Textual judgments would require no reasoning.

  • Intuitive judgments would generally agree with a textual Present, Absent, or Questionable judgment. The focus of the intuitive task would be on diseases marked Unmentioned.

  • A textual judgment of Unmentioned, in the absence of information from the discharge summary supporting an inference about the disease, would translate to an intuitive judgment of Absent.

  • Information that would allow inference of diseases would include mentions of examination and test results, e.g., blood pressure or blood sugar measurements, physical characteristics, e.g., body mass index, and the medication and diseases discussed in the discharge summary.

Agreement and Tie-breaking

The two experts independently annotated our 1237 discharge summaries. The kappa (κ) agreement14 between the two annotators on each disease is shown in Table 1. The lowest κ on textual judgments was 0.71. For 12 diseases, κ on textual judgments was above 0.8; for four diseases, κ on textual judgments was between 0.71 and 0.79. The lowest κ on intuitive judgments was 0.44. For seven diseases, κ on intuitive judgments was above 0.8; for six of the diseases, κ on intuitive judgments was between 0.6 and 0.79. Although κ values are open to interpretation,15 a κ of 0.8 is widely used as the threshold for “almost perfect agreement”, while κ values of 0.6–0.79 indicate “substantial agreement”.14 Please see the online supplement at http://jamia.org for a description of agreement calculation and extended analysis of agreement.
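Cohen's kappa compares the observed agreement between two annotators against the agreement expected by chance from each annotator's label marginals. A minimal sketch (the labels below are toy data, not challenge annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement computed from each annotator's label marginals.
    marg_a, marg_b = Counter(labels_a), Counter(labels_b)
    expected = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: two annotators judging five summaries for one disease.
a = ["Present", "Present", "Unmentioned", "Absent", "Unmentioned"]
b = ["Present", "Absent", "Unmentioned", "Absent", "Unmentioned"]
print(round(cohens_kappa(a, b), 2))  # 0.71
```

Note that kappa discounts agreement that skewed class distributions produce by chance, which is why it is a stricter measure than raw percent agreement for data like these, where Unmentioned dominates.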

Table 1

Kappa Agreement on Textual and Intuitive Judgments

Comorbidity (Disease) | Textual Kappa | Intuitive Kappa
Atherosclerotic CV disease (CAD) | 0.78 | 0.81
Congestive heart failure (CHF) | 0.81 | 0.74
Diabetes mellitus (DM) | 0.91 | 0.87
Hypertension (HTN) | 0.82 | 0.67
Obstructive sleep apnea (OSA) | 0.92 | 0.92
Osteoarthritis (OA) | 0.76 | 0.76
Peripheral vascular disease (PVD) | 0.94 | 0.73
Venous insufficiency | 0.79 | 0.44
  • CV = cardiovascular; GERD = gastroesophageal reflux disease.

After annotation, a resident from the Massachusetts General Hospital resolved the disagreements in textual judgments. Majority vote among the three annotators determined the ground truth for the textual task. In the absence of a third obesity expert who could resolve the disagreements in intuitive judgments, only judgments agreed on by the two obesity experts were used as the ground truth for the intuitive task. Table 2 shows the correspondence between the ground truth textual and intuitive judgments. Most textual Present judgments map to intuitive Present judgments. Similar observations hold for the other classes.

Table 2

Distribution of Classes between Textual and Intuitive Ground Truth

Intuitive Present | Intuitive Absent | Intuitive Questionable | No Intuitive Class (No Agreement)
Textual Present502120377
Textual Absent1126025
Textual Questionable511832
Textual Unmentioned50012327201219
No Textual Class (No Agreement)25620

Final Data

Table 3 and Table 4 show data distribution into training and test sets per disease. The distributions are non-uniform. In studying datasets with unbalanced class distributions, it is easier to focus on the better populated classes and ignore the less well-represented ones due to their limited contribution to overall performance. In our case, the less well-represented classes indicate the possibility or absence of a disease in a patient. Accurate recognition of these classes allows their inclusion in structured knowledge bases that can support future clinical decisions. Please refer to the online supplement at http://jamia.org for Table 5 and baseline results on these data.

Table 3

Distribution of Textual Judgments into Training and Test Sets

Disease | Present (train/test) | Absent (train/test) | Questionable (train/test) | Unmentioned (train/test) | Total (train/test)
Venous insufficiency | 21/10 | 0/0 | 0/0 | 707/497 | 728/507
  • CAD = coronary artery disease; CHF = congestive heart failure; DM = diabetes mellitus; GERD = gastroesophageal reflux disease; HTN = hypertension; OSA = obstructive sleep apnea; OA = osteoarthritis; PVD = peripheral vascular disease.

Table 4

Distribution of Intuitive Judgments into Training and Test Sets

Disease | Present (train/test) | Absent (train/test) | Questionable (train/test) | Total (train/test)
Venous insufficiency | 54/29 | 577/398 | 0/0 | 631/427
  • CAD = coronary artery disease; CHF = congestive heart failure; DM = diabetes mellitus; GERD = gastroesophageal reflux disease; HTN = hypertension; OSA = obstructive sleep apnea; OA = osteoarthritis; PVD = peripheral vascular disease.


Evaluation Metrics

We evaluated system performances using micro- and macro-averaged precision (P), recall (R), and F-measure (F1). Given the emphasis of the Obesity Challenge on the less well-represented classes, we used macro-averaged F-measure as the primary metric for evaluation; micro-averaged F-measure maintained a global perspective on the results.

For each disease, the macro-averaged metrics represent the arithmetic mean of the precision, recall, and F-measure on the Present, Absent, Questionable, and Unmentioned classes that are observed in the ground truth for that disease (see Eqs 1, 2 and 3). The macro-averaged precision, recall, and F-measure of the system are obtained from the precision, recall, and F-measure on the classes observed in the ground truth for all diseases. In these formulae, M is the number of classes.

Equation 1—Macro-averaged Precision (Pmacro)

$$P_{\text{macro}} = \frac{1}{M} \sum_{i=1}^{M} P_i$$


Equation 2—Macro-averaged Recall (Rmacro)

$$R_{\text{macro}} = \frac{1}{M} \sum_{i=1}^{M} R_i$$


Equation 3—Macro-averaged F-measure (F1macro)

$$F1_{\text{macro}} = \frac{1}{M} \sum_{i=1}^{M} F1_i, \qquad F1_i = \frac{2\,P_i R_i}{P_i + R_i}$$


Macro-averages give equal weight to each class, including rare ones.16 As a result, two systems that make the same raw number of mistakes can end up with two different macro-averaged scores.

Equation 4 and Equation 5 show the formulae for computing micro-averaged precision and recall from true positives (TP), false positives (FP), and false negatives (FN) for each class.16,17 In these formulae, M is the number of classes. Micro-averaged F-measure is the harmonic mean of micro-averaged precision and recall (Eq 6). Micro-averages give equal weight to each sample regardless of its class. They are dominated by those classes with the greatest number of samples.

Equation 4—Micro-averaged Precision (Pmicro)

$$P_{\text{micro}} = \frac{\sum_{i=1}^{M} TP_i}{\sum_{i=1}^{M} \left(TP_i + FP_i\right)}$$


Equation 5—Micro-averaged Recall (Rmicro)

$$R_{\text{micro}} = \frac{\sum_{i=1}^{M} TP_i}{\sum_{i=1}^{M} \left(TP_i + FN_i\right)}$$


Equation 6—Micro-averaged F-measure (F1micro)

$$F1_{\text{micro}} = \frac{2\,P_{\text{micro}}\,R_{\text{micro}}}{P_{\text{micro}} + R_{\text{micro}}}$$


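The macro- and micro-averaged metrics defined above can be computed with a short sketch; the counts below are hypothetical, chosen to show how a rare class pulls down the macro-average while barely moving the micro-average:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F-measure from raw counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def macro_micro(counts):
    """counts maps each class to its (TP, FP, FN) totals.
    Macro-averages are arithmetic means over the M classes;
    micro-averages pool the counts before computing P, R, F."""
    M = len(counts)
    per_class = [prf(*c) for c in counts.values()]
    macro = tuple(sum(vals) / M for vals in zip(*per_class))
    pooled = [sum(col) for col in zip(*counts.values())]
    micro = prf(*pooled)
    return macro, micro

# Hypothetical counts: one well-populated class, one rare class.
macro, micro = macro_micro({"Present": (90, 5, 5), "Questionable": (1, 2, 3)})
```

With these counts the micro-averaged F-measure stays above 0.92 while the macro-averaged F-measure drops to roughly 0.62, illustrating why the challenge used the macro-average to reward performance on the less well-represented classes.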
Significance Test

We determined the significance of the difference of the systems' performance using the Z test on two proportions.18,19 We used a two-tailed test with a critical Z value of ±1.645, corresponding to a confidence level of 0.9.20
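A sketch of this test on two hypothetical systems' correct-prediction counts; the pooled-variance form of the two-proportion Z test is assumed here, and the counts are illustrative:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for the difference of two proportions, using the
    pooled proportion to estimate the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: correct predictions of two systems on a 507-document test set.
z = two_proportion_z(480, 507, 470, 507)
significant = abs(z) > 1.645  # two-tailed, 0.9 confidence level
```

In this example a 10-document gap on a 507-document test set yields |Z| below 1.645, so the difference would be reported as not significant.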

Obesity Challenge Submissions

A total of 30 teams participated in the Obesity Challenge (see Table 6). Training data were released in March 2008. Test data were released in June 2008. Each team submitted up to three system runs for predicting textual judgments and three for predicting intuitive judgments on test data.

Table 6

Participating Teams

Team | Affiliation(s) | Country
Ambert et al. | Oregon Health and Science University | United States
Barrett et al. | University of Victoria | Canada
Califf | Illinois State University | United States
Childs et al. | Lockheed Martin and SAGE Analytica | United States
DeShazo et al. | University of Washington | United States
Frunza et al. | University of Ottawa | Canada
Grabar et al. | LIPN–UMR 7030, Université Paris 13—CNRS; Centre de Recherche des Cordeliers; Université Paris Descartes | France
Guillen | California State University, San Marcos | United States
Hara | Nara Institute of Science and Technology | Japan
Harkema et al. | University of Pittsburgh | United States
Ho et al. | IDI-NTNU | Norway
Jazayeri et al. | University of Alberta | Canada
Lan et al. | National University of Singapore; Institute of Infocomm Research | Singapore
MacNamee et al. | Dublin Institute of Technology | Ireland
Mata et al. | Universidad de Huelva | Spain
Matthews | University of Edinburgh | Scotland
McInnes | University of Minnesota | United States
Medow | Boston University | United States
Meystre | University of Utah | United States
Mishra et al. | Centers for Disease Control and Prevention; National Center for Public Health Informatics | United States
Neves et al. | Centro Nacional de Biotecnología; Universidad Complutense de Madrid | Spain
Patrick et al. | University of Sydney | Australia
Pedersen | University of Minnesota, Duluth | United States
Peshkin et al. | Harvard; Alias-I, Inc. | United States
Savova et al. | Mayo Clinic | United States
Solt et al. | Budapest University of Technology and Economics; TextMiner, Ltd, Budapest | Hungary
Szarvas et al. | University of Szeged | Hungary
Ware et al. | MedQuist; West Virginia University | United States
Yang et al. | University of Manchester | UK

We received a total of 68 textual and 68 intuitive system runs.21–46 To obtain textual task results, we ranked each team on its best performing textual system run. To assess the intuitive task, we ranked each team on its best performing intuitive system run. We review the top ten textual and intuitive systems in ranked order below.

Top Ten Textual Systems

Of the top ten textual systems, Yang et al.,22 Solt et al.,42 Ware et al.,28 Childs et al.,24 Mishra et al.,43 Szarvas et al.,21 and DeShazo et al.26 filtered out portions of the narrative summaries that were only indirectly related to the patient and marked negations and uncertainty through methods that resembled NegEx47 or ConText.48 In addition:

Yang et al. used a precompiled dictionary of disease, symptom, treatment, and medication terms. They looked for sentences with either exact or approximate matches. For documents that contained more than one sentence about a disease, they determined the class for that disease based on a weighted combination of the evidence in sentences.22

Solt et al. stripped the documents of personal identifiers, expanded abbreviations, and split discharge summaries into sections. To mark a disease as Present, they developed a rule-based classifier with disease names, synonyms, spelling variants, and semantically related terms. They partitioned text using contextual clues that indicate negative or uncertain statements and fed the partitions into a series of binary classifiers that determined whether each disease was Questionable, Absent, or Present, in that order. Diseases that failed to receive any of these three labels were labeled Unmentioned.42
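The cascade Solt et al. describe might be sketched as follows, with the upstream text partitioning reduced to precomputed context flags (the flag names and rules are illustrative, not the team's actual implementation):

```python
def classify(partitions):
    """One disease, one summary. Each partition is (text, context), where
    context is 'uncertain', 'negated', or 'affirmed', assigned upstream
    from contextual clues. The cascade follows the order described by
    Solt et al.: Questionable, then Absent, then Present; a disease that
    receives none of these labels is Unmentioned."""
    contexts = [ctx for _, ctx in partitions]
    if not contexts:
        return "Unmentioned"
    if "uncertain" in contexts:
        return "Questionable"
    if all(ctx == "negated" for ctx in contexts):
        return "Absent"
    if "affirmed" in contexts:
        return "Present"
    return "Unmentioned"

print(classify([("pt denies GERD", "negated"), ("possible GERD", "uncertain")]))
```

Ordering the binary decisions from rarest to most common class lets the later, broader rules act only on cases the stricter rules decline, which is one way a rule cascade can protect the less well-represented classes.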

Ware et al. used regular expressions with a set of disease-related keywords and their synonyms. They assumed that keywords not marked as negated, historical, or associated with a relative would indicate a disease is present.28
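Keyword checks like these assume an upstream assertion step. A much-simplified, NegEx/ConText-style version of such a step might look like this; the trigger lists are illustrative stand-ins, not the actual NegEx lexicon:

```python
NEGATION_TRIGGERS = ("no evidence of", "no history of", "denies", "without")
UNCERTAINTY_TRIGGERS = ("possible", "probable", "rule out", "questionable")

def assertion_for(sentence, term, window=40):
    """Label a disease keyword as negative, uncertain, or positive by
    scanning a fixed window of text before it for trigger phrases
    (a much-simplified, NegEx-style heuristic)."""
    s = sentence.lower()
    idx = s.find(term.lower())
    if idx == -1:
        return None  # term not mentioned in this sentence
    before = s[max(0, idx - window):idx]
    if any(t in before for t in NEGATION_TRIGGERS):
        return "negative"
    if any(t in before for t in UNCERTAINTY_TRIGGERS):
        return "uncertain"
    return "positive"
```

A real implementation would also handle triggers that follow the term, pseudo-negations such as "no increase in", and scope termination, which is roughly what distinguishes NegEx and ConText from this sketch.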

Childs et al. used the rule-based Rocket AeroText information extraction system49 with keywords, their synonyms, and patterns generated by medical experts. They weighed and combined the evidence for each class of each disease.24

Mishra et al. marked the text with a set of disease-related keywords compiled by analyzing the training set. They determined the total number of positive, negative, and uncertain assertions for each disease in a discharge summary. The class with the highest number of assertions related to the disease labeled the disease. Ties were broken in favor of positive assertions.43
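Mishra et al.'s counting-and-tie-breaking decision might be sketched as follows, with the polarity labels assumed to come from an upstream negation/uncertainty detector:

```python
from collections import Counter

def label_disease(assertions):
    """assertions: the polarity ('positive' / 'negative' / 'uncertain')
    of every assertion found for one disease in one summary. The class
    with the most assertions wins; ties are broken in favor of positive
    assertions, as Mishra et al. describe. No assertions at all means
    the disease is Unmentioned."""
    if not assertions:
        return "Unmentioned"
    counts = Counter(assertions)
    top = max(counts.values())
    tied = [polarity for polarity, n in counts.items() if n == top]
    winner = "positive" if "positive" in tied else tied[0]
    return {"positive": "Present",
            "negative": "Absent",
            "uncertain": "Questionable"}[winner]
```

Breaking ties toward Present reflects the observation that an affirmed mention usually outweighs an equal number of negated or hedged mentions elsewhere in the same summary.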

Szarvas et al. used term frequency and conditional probability in the Present class to preselect the most common terms that could aid classification. They supplemented this list with spelling variants and infrequent terms. The resulting dictionaries, along with disease contexts and document structure, formed the backbone of their rule-based system.21

Savova et al.25 and Patrick et al.44 deviated from the pattern of text filtering and negation extraction. Savova et al. combined an information extraction system, a maximum entropy classifier, and an SVM. They evaluated these approaches, and determined the best one for each disease on each of the textual and intuitive tasks. They then allowed the identified best method to judge a disease for a task.25

Patrick et al. used a combination of rules and a decision-tree classifier with features that included signs, symptoms, and medication names related to each disease. They also leveraged the correlations between diseases.44

DeShazo et al. analyzed 300 of the discharge summaries, annotating them for information that supported ground truth textual judgments. They employed a rule base to propagate the information supporting ground truth judgments to the rest of the corpus.26

Top Ten Intuitive Systems

Most intuitive systems benefited from the output of the textual systems. Solt et al.,42 Szarvas et al.,21 and Childs et al.24 determined a default mapping between textual and intuitive judgments and used it as the starting point. The top four intuitive systems employed rule-bases that incorporated “disease-specific, non-preventive medications and their brand names”, disease-related procedures, and symptoms highly correlated with diseases,42 “numeric expressions corresponding to measurements”,21 and medication names.24,28
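The default textual-to-intuitive mapping these systems start from can be made concrete; the evidence flag and deferred branch below are placeholders for the disease-specific rules described above, not any team's actual logic:

```python
def intuitive_start(textual, inference_evidence=False):
    """Default textual-to-intuitive mapping used as a starting point by
    several top intuitive systems: Present, Absent, and Questionable
    carry over unchanged, and Unmentioned defaults to Absent unless the
    summary holds evidence (medications, measurements, related
    procedures) that disease-specific rules can act on."""
    if textual != "Unmentioned":
        return textual
    if inference_evidence:
        return None  # defer to disease-specific inference rules
    return "Absent"
```

This mirrors the annotation guideline quoted earlier: a textual Unmentioned with no supporting evidence translates to an intuitive Absent, so only the evidence-bearing cases need further rules.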

Different from the top four, Ambert et al. took a machine learning approach to the intuitive task. They combined hot-spot filtering with error-correcting output codes. They identified words that demonstrated high information gain with respect to each disease, extracted the text within a 100-character window of these words, marked the negations, and vectorized the extracted text. Of the created vectors, “the ones that were absent any non-zero features” were automatically labeled Absent. The rest were labeled using error-correcting output codes that weighted each class inversely proportionally to its size.45
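The hot-spot extraction step might look like the following sketch; the 100-character window follows Ambert et al.'s description, while the hot-word list (in practice ranked by information gain) and the downstream vectorization are elided:

```python
def hot_spot_windows(text, hot_words, radius=100):
    """Return the text inside a fixed character window around every
    occurrence of each high-information-gain word, mirroring the
    hot-spot filtering step described by Ambert et al."""
    lower = text.lower()
    windows = []
    for word in hot_words:
        start = 0
        while (idx := lower.find(word.lower(), start)) != -1:
            windows.append(text[max(0, idx - radius):idx + len(word) + radius])
            start = idx + 1
    return windows
```

Documents whose windows yield an all-zero feature vector can then be labeled Absent outright, with the remaining vectors passed to the error-correcting output code classifier.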

Meystre extracted sections and sentences from each discharge summary using regular expressions and rules. In these excerpts, he disambiguated acronyms and extracted concept identifiers from the Unified Medical Language System (UMLS).50 He supplemented the identified concepts with medications and biomarker values that could indicate a disease. He determined intuitive labels using NegEx and ConText.46

Yang et al. based their intuitive predictions on evidence sentences containing information about symptoms, clinical measurements, and medications. They processed the sentences using clinical information, so the symptoms more directly related to a disease were more heavily weighted. The evidence sentences were considered to mark the presence of a disease unless a negation extractor marked them as negative or uncertain. In diseases with multiple evidence sentences, the information was combined.22

DeShazo et al. used SVMs for their intuitive system. This system used features derived from the text by the rule-based classifier they developed for the textual task.26

Matthews evaluated as features stemmed word tokens, bigrams, trigrams, UMLS semantic types of concepts, and negation as extracted by NegEx. He identified the most useful features for each class and applied Bayesian networks to classify diseases.33

Obesity Challenge Results

The results for the textual task are shown in Table 7 and in Table 8. Table 7 shows that the best macro-averaged F-measure on the textual task was 0.8052; the best micro-averaged F-measure was 0.9773. Table 8 shows that the macro-averaged performance difference between the top two systems is not statistically significant. The top three systems are not significantly different in their micro-averaged F-measures. Table 9 and Table 10 show the top ten intuitive systems, as ranked by the macro-averaged F-measure. The best macro-averaged F-measure on the intuitive task is 0.6745; the best micro-averaged F-measure is 0.9654. Table 10 shows that the top three systems are not significantly different in either macro- or micro-averaged F-measures.

Table 7

Micro- and Macro-averaged Results on Textual Judgments, Sorted by Macro-averaged F-Measure

System | Macro P | Macro R | Macro F1 | Micro P | Micro R | Micro F1
Yang et al. | 0.8482 | 0.7737 | 0.8052 | 0.9723 | 0.9723 | 0.9723
Solt et al. | 0.8318 | 0.7776 | 0.8000 | 0.9756 | 0.9756 | 0.9756
Ware et al. | 0.8314 | 0.7542 | 0.7821 | 0.9718 | 0.9718 | 0.9718
Childs et al. | 0.8169 | 0.7454 | 0.7762 | 0.9773 | 0.9773 | 0.9773
Mishra et al. | 0.7485 | 0.8050 | 0.7718 | 0.9704 | 0.9704 | 0.9704
Szarvas et al. | 0.7644 | 0.7600 | 0.7622 | 0.9729 | 0.9729 | 0.9729
Savova et al. | 0.7701 | 0.7147 | 0.7377 | 0.9668 | 0.9668 | 0.9668
Patrick et al. | 0.7971 | 0.6219 | 0.6737 | 0.9693 | 0.9693 | 0.9693
*Jazayeri et al. | 0.7849 | 0.5779 | 0.6205 | 0.9514 | 0.9514 | 0.9514
DeShazo et al. | 0.8552 | 0.6240 | 0.6140 | 0.9639 | 0.9639 | 0.9639
  • Best F-measures are in bold.

  • System utilized external annotators.

  • * System description not available.

Table 8

Significance Tests on the Top Ten Textual Systems

Systems | Solt et al. | Ware et al. | Childs et al. | Mishra et al. | Szarvas et al. | Savova et al. | Patrick et al. | Jazayeri et al. | DeShazo et al.
Yang et al.+*****
Solt et al.***
Ware et al.++****
Childs et al.+*
Mishra et al.+***
Szarvas et al.*
Savova et al.**
Patrick et al.*
Jazayeri et al.+
  • Sorted by macro-averaged F-measure.

  • + marks pairs Not significantly different in macro-averaged F-measure.

  • * marks pairs Not significantly different in micro-averaged F-measure.

  • System utilized external annotators. Only the upper diagonal is marked.

Table 9

Micro- and Macro-averaged Results on Intuitive Judgments, Sorted by Macro-averaged F-Measure

System | Macro P | Macro R | Macro F1 | Micro P | Micro R | Micro F1
Solt et al. | 0.7485 | 0.6571 | 0.6745 | 0.9590 | 0.9590 | 0.9590
Szarvas et al. | 0.6999 | 0.6588 | 0.6727 | 0.9642 | 0.9642 | 0.9642
Childs et al. | 0.7061 | 0.6540 | 0.6696 | 0.9582 | 0.9582 | 0.9582
Ware et al. | 0.6410 | 0.6399 | 0.6404 | 0.9654 | 0.9654 | 0.9654
Ambert et al. | 0.6383 | 0.6307 | 0.6344 | 0.9558 | 0.9558 | 0.9558
Yang et al. | 0.6383 | 0.6294 | 0.6336 | 0.9572 | 0.9572 | 0.9572
DeShazo et al. | 0.9722 | 0.6216 | 0.6292 | 0.9524 | 0.9523 | 0.9524
Jazayeri et al. | 0.6320 | 0.6257 | 0.6287 | 0.9508 | 0.9508 | 0.9508
  • Best F-measures are in bold.

  • System utilized external annotators.

Table 10

Significance Tests on the Top Ten Intuitive Systems

Systems | Szarvas et al. | Childs et al. | Ware et al. | Ambert et al. | Meystre | Yang et al. | DeShazo et al. | Matthews | Jazayeri et al.
Solt et al.+*+*****
Szarvas et al.+**
Childs et al.****
Ware et al.++++++
Ambert et al.+*+*+*+*+*
Yang et al.+*+*+*
DeShazo et al.+*+*
  • Sorted by macro-averaged F-measure.

  • + marks pairs Not significantly different in macro-averaged F-measure.

  • * marks pairs Not significantly different in micro-averaged F-measure.

  • System utilized external annotators. Only the upper diagonal is marked.

Table 11 shows that the top ten systems on the textual task had F-measures ranging from 0.92 to 0.97 on the Present class and from 0.97 to 0.99 on the Unmentioned class. On the Absent class, their F-measures ranged from 0.39 to 0.66; on the Questionable class, from 0 to 0.62. Table 12 shows that seven out of the top ten systems produced a zero F-measure on the Questionable class on the intuitive task; the best F-measure for this class is 0.12. On the intuitive task, the F-measures of the top ten systems ranged from 0.92 to 0.95 on the Present class and from 0.97 to 0.98 on the Absent class.

Table 11

Top Ten Textual Systems on Individual Classes (Aggregate Over All Diseases)

System | Present P/R/F1 | Absent P/R/F1 | Questionable P/R/F1 | Unmentioned P/R/F1
Yang et al. | 0.94/0.97/0.96 | 0.71/0.62/0.66 | 0.75/0.53/0.62 | 0.99/0.98/0.98
Solt et al. | 0.96/0.97/0.96 | 0.63/0.63/0.63 | 0.75/0.53/0.62 | 0.99/0.98/0.99
Ware et al. | 0.95/0.97/0.96 | 0.59/0.60/0.60 | 0.80/0.47/0.59 | 0.99/0.98/0.98
Childs et al. | 0.96/0.97/0.97 | 0.75/0.55/0.64 | 0.57/0.47/0.52 | 0.99/0.99/0.99
Mishra et al. | 0.95/0.96/0.96 | 0.62/0.63/0.63 | 0.44/0.65/0.52 | 0.99/0.98/0.98
Szarvas et al. | 0.97/0.95/0.96 | 0.64/0.63/0.64 | 0.47/0.47/0.47 | 0.98/0.99/0.98
Savova et al. | 0.95/0.94/0.95 | 0.74/0.52/0.61 | 0.41/0.41/0.41 | 0.97/0.98/0.98
Patrick et al. | 0.95/0.96/0.96 | 0.69/0.31/0.43 | 0.57/0.24/0.33 | 0.98/0.98/0.98
Jazayeri et al. | 0.91/0.93/0.92 | 0.59/0.29/0.39 | 0.67/0.12/0.20 | 0.97/0.97/0.97
DeShazo et al. | 0.94/0.95/0.95 | 0.50/0.57/0.53 | 1.… (remainder of row truncated in source)
  • Best F-measures per class are in bold. Sorted by macro-averaged F-measure.

  • System utilized external annotators.

Table 12

Top Ten Intuitive Systems on Individual Classes (Aggregate over All Diseases)

System | Present P/R/F1 | Absent P/R/F1 | Questionable P/R/F1
Solt et al. | 0.95/0.92/0.93 | 0.96/0.98/0.97 | 0.33/0.07/0.12
Szarvas et al. | 0.97/0.92/0.94 | 0.96/0.99/0.97 | 0.17/0.07/0.10
Childs et al. | 0.96/0.91/0.93 | 0.96/0.98/0.97 | 0.20/0.07/0.11
Ware et al. | 0.95/0.94/0.95 | 0.97/0.98/0.98 | 0.00/0.00/0.00
Ambert et al. | 0.95/0.92/0.93 | 0.96/0.98/0.97 | 0.00/0.00/0.00
Yang et al. | 0.96/0.91/0.93 | 0.96/0.98/0.97 | 0.00/0.00/0.00
DeShazo et al. | 0.97/0.88/0.92 | 0.95/0.99/0.97 | 1.00/0.00/0.00
Jazayeri et al. | 0.94/0.90/0.92 | 0.96/0.98/0.97 | 0.00/0.00/0.00
  • Best F-measures per class are in bold. Sorted by macro-averaged F-measure.

  • System utilized external annotators.


Rule-based approaches played a significant role in the top ten systems in the textual task. Machine learning approaches contributed to the top ten systems in the intuitive task but were less dominant in the textual task.

Given the similar approaches taken by the top ten textual systems, we expect that their performance differences resulted from the accuracy of their negation extraction modules and the completeness of their dictionaries. The approaches taken by the intuitive systems were more varied. In general, clinical information, world knowledge, and information from the textual task benefited the top ten intuitive systems. A subset of the top ten textual and intuitive systems took advantage of medical experts, indicating the value of engaging medical professionals in system development.

A subset of the top ten textual and intuitive systems encodes expert knowledge in the form of hand-crafted rules and patterns, generated either through direct interactions with domain experts or through (laypersons') observations on the ground truth created by domain experts. “Expert knowledge is a combination of a theoretical understanding of the problem and a collection of heuristic problem-solving rules that experience has shown to be effective in the domain”51. However, such knowledge is limited to a closed-domain, narrowly defined task. Expert systems based on this knowledge, e.g., the hand-crafted systems developed for the Obesity Challenge, perform well when tested within the domain of their focus; however, they require some work to be adapted to new tasks and domains.

Despite the limitations on their generalizability, MLP systems that can address the Obesity Challenge with near-human-level performance were developed within a three-month period. Although starting from an existing system was preferred for the development of some systems, e.g.,24,46 most, including two of the best systems22,42 developed for the Obesity Challenge, were built from scratch.

The main complexity and difficulty of the Obesity Challenge, in contrast to past challenges12,13 and most mainstream MLP work, came from the focus on less well-represented classes. The worst macro-averaged F-measures on the challenge were 0.2237 and 0.3358, in the textual and intuitive tasks respectively.

In particular, the textual Questionable class contained some discharge summaries that were incorrectly classified by all system runs. One such summary, marked Questionable for GERD, stated “The patient was continued on her PPI for GERD prophylaxis. … required increasing her dosage of Nexium secondary to GERD-like symptoms.”

Similarly, for the textual Absent class, no system runs could correctly predict the judgment for CAD in a discharge summary which stated, “no history of cancer or heart disease.” In general, textual Absent judgment required careful study of the context where diseases are mentioned. For example, recognizing the absence of diabetes when a patient “had no further insulin requirement and was not a diabetic” requires correct interpretation of this text. Only a subset of the submitted system runs correctly classified this case.

The Present class was easier to predict. For example, all systems correctly labeled a discharge summary which stated “adult onset diabetes mellitus”. However, even the Present class was not straightforward when the discharge summary failed to mention the disease by name. For example, a discharge summary about “ventral hernia” and “atrial fibrillation” that did not mention “coronary artery disease” or “cardiovascular disease” was judged Present for CAD. Only a subset of the submitted system runs predicted this textual judgment. Prediction of textual Present judgments was even more difficult in summaries using biomarkers or other related information to describe a disease. For example, none of the system runs submitted to the i2b2 challenge could correctly predict the ground truth judgment for obesity on the discharge summary that stated “The patient's admission weight was 106.2 kg. Her discharge weight was 100.7 kilograms”, and “weight should be monitored daily.”

The textual Unmentioned class was the easiest to predict. Most of these judgments were classified correctly by almost all the submitted system runs. Those textual Unmentioned judgments that could not be predicted correctly demonstrate peculiarities of the data. For example, the author's reading of the statement “The patient was an obese male” indicates a textual label of Present for obesity and disagrees with the ground truth label of Unmentioned.

Given the characteristics of the data and the observations on performance on the less well-represented classes, removing the emphasis from these classes would have made the Obesity Challenge much more mainstream and much more straightforward, but not trivial. Eighty-five percent of the systems in the intuitive task and 93% of the systems in the textual task achieved micro-averaged F-measures above 0.8. Two of the best performing systems from the Obesity Challenge are open source and can either be downloaded for local installations or utilized online.52,53

Conclusions and Implications for Future Research

The Obesity Challenge demonstrates the difficulty of differentiating textual judgments from intuitive ones. The overlap in information used by automated systems for identifying textual and intuitive judgments and the author's observations on the Obesity Challenge data indicate that textual judgments of domain experts may differ from textual judgments of lay persons. In other words, the annotators' domain knowledge may have led them to consider some inferred information as explicit. As a result, some judgments that could be considered intuitive by lay persons were found among the textual judgments.54

However, even with unclear boundaries between textual and intuitive judgments, the automated systems built by lay persons effectively extracted much useful information from discharge summaries. These systems performed best on the most factual and objective pieces of information. They experienced more difficulty arriving at conclusions only medical experts could infer. Most of the factual and objective pieces of information were identified by simple rule-based systems armed with dictionaries of terms and negation extraction modules. Machine learning approaches that studied the patterns in the textual judgments provided a beginning to correctly predicting intuitive judgments. We should emphasize that the relative performance of the systems is likely to change if we have much larger corpora for both training and testing. The unavailability of such corpora is likely to be the largest bottleneck for future progress in MLP.


  • This work was supported in part by the NIH Road Map for Medical Research Grants U54LM008748. Institutional Review Board approval has been granted for the studies presented in this manuscript. The author thanks all participating teams for their contributions to the challenge, and AMIA for its support in organizing the workshop that accompanied the challenge.

