
The William and Flora Hewlett Foundation: Targeting Education Through Data Science

Company Size
1,000+
Country
  • United States
Product
  • Automated Student Assessment Prize (ASAP)
Tech Stack
  • Data Science
  • Machine Learning
Implementation Scale
  • Enterprise-wide Deployment
Impact Metrics
  • Cost Savings
  • Digital Expertise
  • Productivity Improvements
Technology Category
  • Analytics & Modeling - Data Mining
  • Analytics & Modeling - Machine Learning
  • Analytics & Modeling - Predictive Analytics
Applicable Industries
  • Education
Applicable Functions
  • Business Operation
  • Quality Assurance
Use Cases
  • Automated Essay Scoring
  • Predictive Quality Analytics
  • Remote Collaboration
Services
  • Data Science Services
  • Software Design & Engineering Services
About The Customer
The William and Flora Hewlett Foundation is a private foundation established by the Hewlett family. It is one of the largest philanthropic organizations in the United States, with a focus on solving social and environmental problems at home and around the world. The foundation supports a wide range of initiatives, including education, global development, and the environment. In the context of this case study, the foundation is particularly interested in improving the quality and efficiency of educational assessments. By leveraging data science and machine learning, the foundation aims to create tools that can assist teachers and educational institutions in grading essays more consistently and affordably, thereby enhancing the overall quality of education.
The Challenge
Education experts agree that essay writing measures essential skills like critical thinking, communication, and collaboration better than multiple-choice tests do. However, because essays are more expensive and time-consuming to grade, most standardized tests remain multiple-choice. In Kaggle's Automated Student Assessment Prize (ASAP), the Hewlett Foundation challenged participants to build data science tools that help teachers and public education departments grade essays consistently, quickly, and affordably, without sacrificing quality.
The Solution
Phase 1 of the competition provided more than 22,000 hand-scored, long-form student essays that varied in length, topic, and grading protocol. Participants were challenged to develop models that could reproduce the scores given by expert human graders. Alongside Kaggle's community, eight commercial vendors of education software were invited to participate. The top five Kaggle teams outperformed all of the commercial vendors and scored even more consistently than the expert human graders. The winning team, which included a British particle physicist, an American weather analyst, and a German computer science student, ultimately sold the intellectual property behind its solution.

Phase 2 tackled the even harder problem of short answers. The data comprised more than 27,000 hand-scored short answers, each around 50 words, on topics ranging from English to science. Results showed great promise: the top teams did not outperform human graders, but they beat an automated benchmark by almost 20%. The winning teams presented their solutions to the sponsors and publicly released all code and write-ups for use in future research.
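The case study does not include the competitors' code, but the core task is easy to sketch: train a model on hand-scored essays so that it reproduces the human scores on unseen essays. Below is a minimal baseline in Python using TF-IDF features and ridge regression, evaluated with quadratic weighted kappa, a standard agreement metric for comparing machine scores against human graders. The file name and column names are hypothetical, and this is a simple illustrative baseline, not the winning teams' approach.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Hypothetical input: one row per essay with its human-assigned score.
df = pd.read_csv("essays.csv")  # columns "essay" and "score" are assumptions

X_train, X_test, y_train, y_test = train_test_split(
    df["essay"], df["score"], test_size=0.2, random_state=0
)

# Word unigrams/bigrams as TF-IDF features: a crude proxy for vocabulary,
# phrasing, and topical coverage in each essay.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True)
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

# Ridge regression predicts a continuous score; round and clip it back
# onto the integer rubric range used by the human graders.
model = Ridge(alpha=1.0).fit(Xtr, y_train)
pred = model.predict(Xte).round().clip(y_train.min(), y_train.max()).astype(int)

# Quadratic weighted kappa penalizes large human/machine disagreements
# more heavily than near misses.
print("QWK:", cohen_kappa_score(y_test, pred, weights="quadratic"))
```

In practice, competitive ASAP entries layered richer features (essay length, spelling, syntax, prompt-specific cues) and stronger models on top of a skeleton like this; the sketch only shows the train-on-human-scores, evaluate-agreement loop that the competition was built around.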
Operational Impact
  • In Phase 1, the top five Kaggle teams outperformed all eight commercial vendors and scored even more consistently than the expert human graders.
  • The winning Phase 1 team, which included a British particle physicist, an American weather analyst, and a German computer science student, ultimately sold the intellectual property behind its solution.
  • In Phase 2, the top teams did not outperform human graders but beat an automated benchmark by almost 20%.
  • The winning teams presented their solutions to the sponsors and publicly released all code and write-ups for use in future research.
Quantitative Benefit
  • Phase 1 included more than 22,000 hand-scored, long-form student essays.
  • Phase 2 included more than 27,000 hand-scored short answers, each around 50 words.
  • The top teams outperformed an automated benchmark by almost 20%.
