London Medical Imaging & AI Centre Speeds Up Research with Run:ai

Company Size
1,000+
Region
  • Europe
Country
  • United Kingdom
Product
  • Run:ai's Platform
Tech Stack
  • AI Hardware
  • Deep Learning Training Models
  • GPU Compute
Implementation Scale
  • Enterprise-wide Deployment
Impact Metrics
  • Innovation Output
  • Productivity Improvements
Technology Category
  • Analytics & Modeling - Machine Learning
  • Application Infrastructure & Middleware - API Integration & Management
Applicable Industries
  • Healthcare & Hospitals
Applicable Functions
  • Product Research & Development
Use Cases
  • Computer Vision
  • Predictive Maintenance
Services
  • Data Science Services
  • System Integration
About The Customer
The London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare is a consortium of academic, healthcare and industry partners, led by King’s College London and based at St. Thomas’ Hospital. It uses medical images and electronic healthcare data held by the UK National Health Service to train sophisticated deep learning algorithms for computer vision and natural-language processing. These algorithms are used to create new tools for effective screening, faster diagnosis and personalized therapies, to improve patients’ health.
The Challenge
The London Medical Imaging & AI Centre for Value Based Healthcare faced several challenges with its AI hardware. Total GPU utilization was below 30%, and some GPUs sat idle for long periods despite researcher demand. On multiple occasions the system was overloaded, with jobs requiring more GPUs than were available. Poor visibility and scheduling led to delays and waste: larger experiments needing many GPUs were sometimes unable to start because smaller jobs using only a few GPUs blocked them from acquiring the resources they required.
The Solution
The AI Centre implemented Run:ai's platform to address these challenges. The platform increased GPU utilization by 110%, with corresponding increases in experiment speed. Researchers ran more than 300 experiments in a 40-day period, compared with just 162 in a simulation of the same environment without Run:ai. By dynamically allocating pooled GPUs to workloads, hardware resources were shared more efficiently. The platform also improved visibility through advanced monitoring and cluster-management tools, letting data scientists see which GPU resources were idle and dynamically resize their jobs to run on available capacity. Finally, it enabled fair scheduling with guaranteed resource quotas, allowing large ongoing workloads to use the optimal number of GPUs during low-demand periods while automatically making room for shorter, higher-priority workloads to run alongside them.
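The pooling-with-guaranteed-quotas idea can be illustrated with a minimal sketch. This is not Run:ai's implementation, and the class and method names (`GpuPool`, `request`, `reclaim_for`) are hypothetical; it only shows the general pattern the case study describes: each project has a guaranteed quota, idle GPUs can be borrowed opportunistically, and borrowed GPUs are reclaimed when a quota holder needs them.

```python
# Hypothetical sketch of quota-based fair sharing over a pooled set of GPUs.
# Not Run:ai's actual scheduler -- an illustration of guaranteed quotas with
# opportunistic borrowing of idle capacity.

class GpuPool:
    def __init__(self, total, quotas):
        self.total = total                  # GPUs in the shared pool
        self.quotas = dict(quotas)          # project -> guaranteed GPUs
        self.used = {p: 0 for p in quotas}  # project -> currently held GPUs

    def free(self):
        return self.total - sum(self.used.values())

    def request(self, project, n):
        """Grant up to n GPUs from whatever is currently free.

        A project may exceed its quota when other projects are idle
        (elastic, over-quota allocation)."""
        grant = min(n, self.free())
        self.used[project] += grant
        return grant

    def reclaim_for(self, project, n):
        """Preempt over-quota (borrowed) GPUs so `project` can reach its
        guaranteed quota. Returns the number of GPUs reclaimed."""
        entitled = min(n, self.quotas[project] - self.used[project])
        need = max(0, entitled - self.free())
        reclaimed = 0
        for other in self.used:
            if other == project or reclaimed >= need:
                continue
            over = self.used[other] - self.quotas[other]
            take = min(max(over, 0), need - reclaimed)
            self.used[other] -= take
            reclaimed += take
        return reclaimed
```

For example, with an 8-GPU pool split 4/4 between projects A and B, an idle A lets B grab all 8 GPUs; when A later submits a job, 4 borrowed GPUs are reclaimed from B so A gets its guaranteed share.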
Operational Impact
  • Increased GPU utilization by 110%, with corresponding increases in experiment speed.
  • Researchers ran more than 300 experiments in a 40-day period, versus just 162 in a simulation of the same environment without Run:ai.
  • Improved visibility with advanced monitoring and cluster-management tools.
  • Enabled fair scheduling and guaranteed resource quotas, letting large ongoing workloads use the optimal number of GPUs during low-demand periods while shorter, higher-priority workloads automatically run alongside them.
Quantitative Benefit
  • 2.1X Higher GPU Utilization
  • 31X Faster Experiments
  • 1.85X More Experiments
  • Elastic Workloads
