
Non-Expert Labeling Teams Can Create High Quality Training Data for Medical Use Cases

Technology Category
  • Analytics & Modeling - Computer Vision Software
  • Analytics & Modeling - Machine Learning
Applicable Industries
  • Agriculture
  • Education
Applicable Functions
  • Product Research & Development
  • Quality Assurance
Use Cases
  • Computer Vision
  • Visual Quality Detection
Services
  • Testing & Certification
  • Training
About The Customer
The customer in this case study is the BioMedIA research group at Imperial College London, together with the iFind Project. The group conducts medical imaging research and sought to accelerate the creation of high-quality training data for machine learning use cases in the medical field. It partnered with Labelbox, a leading training data platform for machine learning applications, on an experiment testing the assumption that only medical experts can provide quality annotations for clinical deep learning models.
The Challenge
Creating high-quality training data for machine learning (ML) use cases can be expensive and time-consuming, especially for specialized fields that require domain experts to review and label the data. This is particularly true in the medical field, where doctors are often required to meticulously label or review training data. This process can be arduous and costly, slowing down the development of potentially life-saving algorithms. The challenge was to find a way to create high-quality training data with less involvement from physicians, thereby reducing costs and speeding up the development process.
The Solution
A study by researchers from Imperial College London’s BioMedIA research group and the iFind Project, conducted in partnership with Labelbox, challenged the assumption that only medical experts can provide quality annotations for clinical deep learning models. In the experiment, minimally trained, novice labeling workforces annotated fetal ultrasound imagery for signs of a specific congenital heart disease. The researchers then trained the same algorithm on two datasets: one labeled by the Labelbox workforces and one labeled by experts in medical imagery. The novice annotators performed complex medical image segmentation tasks to a high standard, producing labels similar in quality to those of the experts. The researchers concluded that novice labeling teams can play a vital role in developing high-performing models for specialized medical use cases when combined with existing methods for handling noisy annotations and selecting the most informative annotations for the model.
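The study does not publish its evaluation code here, but comparisons of label quality in segmentation work are commonly reported as overlap metrics such as the Dice coefficient between a novice mask and an expert reference mask. The sketch below is purely illustrative of that kind of comparison; the placeholder masks and the dice_score helper are assumptions, not the researchers' actual implementation.

import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Both masks empty counts as perfect agreement.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Placeholder masks standing in for rasterized annotations exported
# from the labeling tool (one expert reference, one novice attempt).
expert_mask = np.zeros((128, 128), dtype=np.uint8)
expert_mask[40:90, 50:100] = 1

novice_mask = np.zeros((128, 128), dtype=np.uint8)
novice_mask[42:92, 48:98] = 1  # slightly shifted boundary

print(f"Novice-vs-expert Dice: {dice_score(expert_mask, novice_mask):.3f}")

A score near 1.0 indicates the novice annotation closely matches the expert reference; averaging such scores over a dataset is one way the quality gap between labeling workforces can be quantified.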
Operational Impact
  • The study's operational results were significant. The researchers challenged the widely held assumption that only medical experts can provide quality annotations for clinical deep learning models, demonstrating that novice labeling teams can play a vital role in developing high-performing models for specialized medical use cases. This has the potential to significantly reduce costs and speed up the development of machine learning algorithms in the medical field. Furthermore, the resulting paper was accepted into the FAIR workshop at MICCAI 2021, reflecting academic recognition of the findings.
Quantitative Benefit
  • The study found that novice annotators could perform complex medical image segmentation tasks to a high standard, producing labels similar in quality to those of experts.
  • Prediction performance was comparable in both image segmentation and downstream classification tasks, regardless of whether the data was labeled by experts or novices.
  • The Labelbox workforce team worked quickly, annotating everything within the expected timeframe.
