How one company went from 28% GPU utilization to 73% with Run:ai

Company Size
1,000+
Product
  • Run:ai's Platform
Tech Stack
  • Nvidia DGX servers
  • GPU workstations
Implementation Scale
  • Enterprise-wide Deployment
Impact Metrics
  • Cost Savings
  • Productivity Improvements
Technology Category
  • Analytics & Modeling - Machine Learning
  • Application Infrastructure & Middleware - API Integration & Management
Applicable Industries
  • Software
  • Telecommunications
Applicable Functions
  • Business Operation
  • Product Research & Development
Use Cases
  • Computer Vision
  • Predictive Maintenance
Services
  • Data Science Services
  • System Integration
About The Customer
The customer is a multinational company and a world leader in facial recognition technologies. They provide AI services to many large enterprises, often in real time. Accuracy is critically important to the company and its customers, and is measured by maximizing performance across camera resolution and frame rate (FPS), density of faces, and field of view. The company has an on-premises environment with 24 Nvidia DGX servers plus additional GPU workstations, and a team of 30 researchers spread across two continents.
The Challenge
The company, a world leader in facial recognition technologies, was facing several challenges with its GPU utilization. Static allocation of GPU resources prevented teams and projects from sharing hardware, which led to bottlenecks and inaccessible infrastructure. A lack of visibility into and management of available resources slowed down jobs. Although existing hardware was underutilized, the visibility issues and bottlenecks made additional hardware seem necessary, driving up costs: the company was considering a planned GPU purchase of over $1 million.
The Solution
The company implemented Run:ai's platform to address these challenges. The platform increased GPU utilization by moving teams from static, manual GPU allocations to pooled, dynamic resource sharing across the organization. It also increased data science team productivity through hardware abstraction, simplified workflows, and automated GPU resource allocation. The platform provided visibility into the GPU cluster, including its utilization, usage patterns, and wait times, allowing the company to better plan hardware spending. Furthermore, automated, dynamic allocation of resources accelerated training, enabling the data science teams to complete training runs significantly faster.
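The core idea behind the utilization gain, pooling GPUs instead of fencing them behind fixed per-team quotas, can be illustrated with a toy calculation. This is a simplified sketch for intuition only, not Run:ai's actual scheduler; the team demands and quota numbers below are hypothetical.

```python
# Toy model (hypothetical numbers): static per-team GPU quotas vs. one
# shared, dynamically allocated pool of the same total size.

def static_utilization(team_demands, quota_per_team):
    """Each team can use at most its fixed quota, even if others are idle."""
    used = sum(min(d, quota_per_team) for d in team_demands)
    total = quota_per_team * len(team_demands)
    return used / total

def dynamic_utilization(team_demands, total_gpus):
    """Jobs draw from a shared pool; idle GPUs are lent to busy teams."""
    used = min(sum(team_demands), total_gpus)
    return used / total_gpus

# Four teams, 16 GPUs total. Two teams are bursting, one is idle.
demands = [8, 1, 0, 7]
print(static_utilization(demands, quota_per_team=4))   # 0.5625
print(dynamic_utilization(demands, total_gpus=16))     # 1.0
```

With static quotas, the bursting teams are capped at 4 GPUs each while another team's GPUs sit idle; with a shared pool, the same hardware absorbs the bursts, which is the mechanism behind moving from low to high average utilization.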
Operational Impact
  • The company went from 28% GPU utilization to over 70%.
  • They achieved a 2X increase in the speed of their training models.
  • The data science teams simplified GPU utilization workflows and increased productivity by 2X, allowing them to more quickly deliver value with deep learning models.
  • The company gained control of and visibility into its GPU clusters, enabling better budgeting and planning of new hardware needs.
  • They gained the ability to scale deep learning, so new researchers and jobs easily get access to infrastructure.
Quantitative Benefit
  • 70% Average GPU Utilization, leading to higher ROI.
  • 2X Experiments per GPU, leading to better Data Science.
  • Multi-GPU Training by Default, leading to Faster Time to Value.
  • Simplified Workflows, leading to Reduced Data Science Hassles.
