Bias Detection Tools for Clinical Decision Making

Minimizing Bias & Maximizing Long-Term Accuracy of Predictive Algorithms in Healthcare

Image: three healthcare scenarios: a doctor with a patient in an office, a doctor with a family, and a team of doctors at a hospital bedside.
Overview

NIH’s NCATS challenges you to create a solution that detects bias in AI/ML models used in clinical decisions.

Challenge Start Date

October 31, 2022

Registration Deadline

February 15, 2023

Submission Deadline

March 1, 2023

Ideas & Showcase Day

May 5, 2023

SHOWCASE & IDEAS EXCHANGE FOR BIAS IN HEALTHCARE AI

The National Center for Advancing Translational Sciences (NCATS) at the National Institutes of Health (NIH) announced the winners of the Bias Detection Challenge at a virtual event held on May 5, 2023.
The Bias Detection Tools for Clinical Decision Making Challenge launched in October 2022 to award up to $700,000 to the teams best able to build an open-source solution that detects and mitigates bias in AI/ML models used in healthcare settings. Over 200 individuals registered to compete in this Challenge, and the resources and webinars offered during the Challenge provided context and cross-training for a complex issue that spans AI/ML, healthcare, and bias/ethics.

YOUR CHALLENGE

The National Center for Advancing Translational Sciences (NCATS) team within the NIH launched the Minimizing Bias and Maximizing Long-Term Accuracy, Utility, and Generalizability of Predictive Algorithms in Healthcare Challenge. NIH’s NCATS challenges you to create a solution that detects bias in AI/ML models used in clinical decisions. Some ideas to consider (a sketch of illustrative subgroup bias metrics follows the list):
  1. How do you identify predictive and social bias?
    Predictive Bias: Algorithmic inaccuracies in producing estimates that significantly differ from the underlying truth.
    Social Bias: Systemic inequities in care delivery leading to suboptimal health outcomes for certain populations.
  2. How do you account for “latent” bias, where social or statistical biases emerge over time due to the complexities of healthcare processes?
  3. Where does bias occur and how do we provide a path forward for follow-up investigations?
  4. How do you account for consistent evaluation and assessments of the algorithm over time and for all patient populations?
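As a starting point, here is a minimal sketch of how per-subgroup metrics might surface both kinds of bias. It assumes a binary outcome, predicted risks in [0, 1], and a group label per patient; the function names, the 0.5 threshold, and the synthetic data are our own illustrative assumptions, not part of the challenge specification.

```python
# Minimal, illustrative sketch of per-subgroup bias metrics. Names,
# thresholds, and data are assumptions, not challenge requirements.
import numpy as np

def calibration_gap(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """Predictive-bias proxy: mean predicted risk minus observed event rate."""
    return float(y_prob.mean() - y_true.mean())

def subgroup_report(y_true, y_prob, groups, threshold=0.5):
    """Per-group calibration gap and selection rate (a social-bias proxy)."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "calibration_gap": calibration_gap(y_true[mask], y_prob[mask]),
            "selection_rate": float((y_prob[mask] >= threshold).mean()),
        }
    return report

# Synthetic example: group B is over-predicted relative to its true risk.
rng = np.random.default_rng(0)
groups = rng.choice(np.array(["A", "B"]), size=1000)
y_true = rng.binomial(1, 0.30, size=1000)
y_prob = np.clip(rng.normal(np.where(groups == "A", 0.30, 0.45), 0.10), 0.0, 1.0)
print(subgroup_report(y_true, y_prob, groups))
```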
Although AI/ML algorithms offer promise for clinical decision making, that potential has yet to be fully realized in healthcare. Even well-designed AI/ML algorithms and models can become inaccurate or unreliable over time: changes in data distribution, real-world interactions, user behavior, and shifts in data capture and management practices can all affect model performance. These subtle shifts can erode an algorithm’s predictive capability until they effectively negate the benefits of such systems in the clinic. Accurate monitoring of an algorithm’s behavior and the ability to flag material drifts in performance may enable timely adjustments that keep the model’s predictions accurate, fair, and unbiased over time, preventing degradation when the algorithm is applied in the real world.
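One hedged illustration of such monitoring, assuming batches of labeled predictions arrive over time: the DriftMonitor class below, its window size, and its 0.05 tolerance are our own assumptions for demonstration, not a prescribed design.

```python
# Illustrative performance-drift monitor. The window size and 0.05
# tolerance are arbitrary assumptions, not part of the challenge.
from collections import deque

import numpy as np
from sklearn.metrics import roc_auc_score

class DriftMonitor:
    """Flags a material drop in rolling AUC relative to a baseline."""

    def __init__(self, baseline_auc: float, window: int = 5, tolerance: float = 0.05):
        self.baseline_auc = baseline_auc
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # AUCs of the most recent batches

    def update(self, y_true, y_prob) -> bool:
        """Record one evaluation batch; return True if drift should be flagged."""
        self.recent.append(roc_auc_score(y_true, y_prob))
        return float(np.mean(self.recent)) < self.baseline_auc - self.tolerance

# Usage sketch: monitor = DriftMonitor(baseline_auc=0.85); if
# monitor.update(batch_labels, batch_scores) is True, trigger a review.
```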

As AI/ML algorithms are increasingly utilized in healthcare systems, accuracy, generalizability, and avoidance of bias and drift appropriately come to the forefront. Bias primarily surfaces as predictive bias (algorithmic inaccuracies producing estimates that significantly differ from the underlying truth), as social bias, and/or as latent bias (definitions above).

HOW TO ENTER

Registration Process
To compete in this challenge, participants must register above. Participants will be required to identify whether they are an Individual competing alone, a Team Lead acting on behalf of a group of individuals (i.e., a Team), or a Point of Contact acting on behalf of an entity (i.e., an institution, organization, or corporation):
For Individuals:
For Students:
For Teams:
For Entities:

HOW TO WIN

To win this challenge, first register above. Then learn about the challenges of bias detection and mitigation through our upcoming educational webinars, and engage with our mentors to explore how to identify and minimize harmful effects of AI/ML bias in healthcare. Then work with your team to come up with ideas. Our Slack workspace will be your lifeline: there you can find many useful resources, such as this data starter kit, and contribute to help others as well. NCATS will award prizes to the participants who are most successful at addressing all four items listed below (a hypothetical tool skeleton follows the list). Familiarize yourself with the challenge requirements and judging criteria to create a high-quality, compliant solution:
1. Detect predictive and social biases
2. Identify source(s) of these biases
3. Prevent perpetuation of bias over time
4. Provide proof-of-concept as a tool that supports broad use of AI algorithms
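To make the four items concrete, here is a hypothetical skeleton of how a submission might organize its interface around them; the class and method names are our own illustration, not a required API.

```python
# Hypothetical skeleton mapping a submission's interface onto the four
# items above. Names are our own illustration, not a required API.
from dataclasses import dataclass, field

@dataclass
class BiasFinding:
    metric: str            # e.g., "calibration_gap"
    group: str             # affected subgroup
    value: float           # measured disparity
    suspected_source: str  # e.g., "label bias", "sampling", "feature proxy"

@dataclass
class BiasDetectionTool:
    findings: list = field(default_factory=list)

    def detect(self, y_true, y_prob, groups):
        """Item 1: compute predictive- and social-bias metrics per subgroup."""
        raise NotImplementedError

    def attribute(self):
        """Item 2: trace each finding to a likely source for follow-up."""
        raise NotImplementedError

    def monitor(self, batch_stream):
        """Item 3: re-run detection on incoming batches to catch latent drift."""
        raise NotImplementedError

    def report(self) -> str:
        """Item 4: produce a shareable proof-of-concept summary."""
        return "\n".join(
            f"{f.metric} for {f.group}: {f.value:+.3f} ({f.suspected_source})"
            for f in self.findings
        )
```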

BACKGROUND

NCATS Background. NCATS was established to coordinate and develop resources that leverage basic research in support of translational science, and to develop partnerships and work cooperatively to foster synergy in ways that do not create duplication, redundancy, or competition with industry activities. This challenge furthers the mission of NCATS by spurring innovation in AI bias mitigation, both in identifying systemic biases and in minimizing their inadvertent amplification or perpetuation. Through this challenge, NCATS hopes to see innovators create tools that foster and promote predictive and social bias detection and correction, in order to increase the accuracy of AI/ML algorithms utilized in the healthcare setting.
Challenge Background. The challenge registration and submission portal is administered by a contractor, Blue Clarity LLC, under contract with the NASA Center of Excellence for Collaborative Innovation on behalf of NCATS. NCATS is conducting this challenge under the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Reauthorization Act of 2010, as amended [15 U.S.C. § 3719].

PRIZES

NCATS may award the prizes listed below and may recognize additional participants with non-monetary honorable mention awards. Following the selection and awarding of the Challenge prizes, winners will be invited to showcase their tools at an NCATS-sponsored Demo Day on May 5, 2023.
First-Place Prize

$200,000

Two Second-Place Prizes

$150,000 each

Two Third-Place Prizes

$75,000 each

One Student Prize

$50,000

Payment of the Prize
Cash prizes will be awarded by NIH/NCATS directly to each individual winner (having registered and competed in the challenge as an Individual), to each Team Lead of a winning Team (having registered and competed on behalf of a Team), or to an entity (such as an institution, organization, or corporation). Prizes awarded under this challenge will be paid by electronic funds transfer and may be subject to federal income taxes. The Department of Health and Human Services/NIH will comply with Internal Revenue Service withholding and reporting requirements, where applicable. NCATS reserves the right, in its sole discretion, to (a) cancel, suspend, or modify the challenge, or any part of it, for any reason, and/or (b) not award any prizes if no submissions are deemed worthy.

SCHEDULE

October 2022

October 31, 2022

Challenge Launch

October 31, 2022 - May 1, 2023
Three Educational Webinars, Office Hours, Teaming, and Mentoring

December 2022

Early December 2022

Kickoff

February 2023

February 15, 2023

Registration Deadline 11:59 PM EST

March 2023

March 1, 2023

Submission Deadline 11:59 PM EST

April 2023

April 21, 2023

Winners Notified

May 2023

May 5, 2023

Bias in Healthcare AI Challenge Showcase

SUBMISSION RULES

See the eligibility rules and participant agreement for the challenge.

Each submission must have its own GitHub Page consisting of three main components outlined below. Please use our HTML template (.ZIP file download) and view this GitHub Pages tutorial for your reference.

GitHub Repository
Supporting Documentation (Template Now Available!)
Link to Video Submission
Submissions will not be considered complete until all components are submitted.
  1. Submissions are due by 11:59 PM ET on March 1, 2023. Late submissions will not be accepted.  
  2. Supporting Documentation must be submitted in Adobe PDF format, written in English, and use 11-point Arial or Times New Roman font (except in figures and tables). Pages should be standard letter size (8.5 x 11 in), single-spaced, and have margins no less than 1 inch on every side.
  3. Video submissions shall be submitted as a link to YouTube, Vimeo, or another streaming video service. It is the responsibility of the team to ensure that videos are correctly uploaded and accessible to judges.
  4. There is no template for video submissions. Video submissions should summarize key descriptions of the tool (e.g. problem, solution description, potential impact) and should include visual representations of important information. Creativity is encouraged!  
  5. Judges will be discouraged from clicking on additional URLs that are not listed as requirements in these submission instructions.
  6. Participants may submit up to three (3) total solutions before the submission deadline. If a team submits more than three solutions, the judges will use the timestamps to identify the first three submissions and will ignore the rest. Each solution will be judged independently, not in conjunction with any other solution. Any attempt to circumvent stated limits will result in disqualification.
  7. All decisions of the contractor and NCATS will be final and binding on all matters relating to this challenge.

JUDGING CRITERIA

Basis Upon Which Winners Will be Selected
Winners will be determined by an evenly weighted average of scores across the judging criteria listed below (a worked example of this averaging follows the list). Submissions will be evaluated based upon the following judging criteria:
Performance:
Feedback Loop:
Innovation:
Generalizability:
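For illustration only, assuming each criterion is scored on a 0-10 scale (the scale and the example scores below are our assumptions, not published by NCATS), the evenly weighted average works out as follows:

```python
# Worked example of the evenly weighted average. The 0-10 scale and the
# example scores are assumptions for illustration only.
criteria = {
    "performance": 8.0,
    "feedback_loop": 7.5,
    "innovation": 9.0,
    "generalizability": 6.5,
}
final_score = sum(criteria.values()) / len(criteria)  # each criterion weighted 25%
print(f"Final score: {final_score:.2f}")  # -> Final score: 7.75
```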
The Evaluation Process. A multidisciplinary Judging Panel composed of experts in related fields (e.g., clinical informatics, machine learning and artificial intelligence, bioinformatics, ethics) will evaluate all submissions against the judging criteria and submission requirements, including the outputs of each bias detection tool when run against our healthcare models and data. The Judging Panel will then provide its evaluations of individual submissions, along with its determinations of cash prize winners and honorable mentions, to the NCATS Award Approving team, which will make the final award and honorable mention decisions. A total of six winners is anticipated, along with additional honorable mentions.

FAQS

Q: Are teams also expected to create ML models?
Q: Where can I find data and ML models on which to test my bias detection tool?
Q: Does my bias detection tool need to work on various types of healthcare data or is it acceptable to limit it to certain types of data?
Q: How do I qualify for the student prize?
Q: Do I need to submit the ML models and data we used?
Q: Can I compete if I'm not a U.S. citizen or U.S. permanent resident?
Q: Can state and local government employees compete?

POINT OF CONTACT

If you have any inquiries about participating in this competition, please don’t hesitate to reach out to us at expeditionhacks@blueclarity.io.