DOM FEEDBACK APP

How an AI-assisted tool can foster stronger faculty-learner collaboration with high-quality feedback.

Client: UCSF Department of Medicine

Year: 2024

My Role: UX Designer, UX Researcher

Team: Solutions architect, designer, engineer

Challenge

Faculty/learner feedback is not frequent, specific, learner-driven, or well-documented.​ How might we enhance the quantity and quality of feedback in health professions education?​

Solution

A mobile-friendly, AI-assisted tool that facilitates frequent, specific, learner-driven, and well-documented feedback.​

Duration

4 months

Our impact

Used our findings to create alignment and to inform design considerations for AI-assisted technologies for faculty/learner feedback at UCSF.


DESIGN PROCESS

Secondary Research

  • Reviewed existing documentation (surveys, presentations, training documents) to understand the current state and the behaviors and attitudes around the feedback process.

  • Understood the current technology landscape and existing tools.

Primary Research

  • Interviewed nine (9) stakeholders: 1 resident, 3 faculty members, 1 fellow, 1 medical student, 1 systems manager, 1 course director, and 1 GME data coordinator

  • Focused our design research questions on the following themes:​

    • Motivations and behaviors around giving and receiving feedback​

    • Quality of feedback​

    • Learner-teacher relationship

    • Tools and processes

  • Used findings to inform initial mockups

  • Co-designed with stakeholders

  • Tested and iterated on mockups to inform the Minimum Viable Product (MVP)

Interviews with stakeholders

User interviews

  • Feedback is expected and given often, but there is still hesitation to give constructive feedback.​

  • Formative, on-the-fly feedback is perceived to be more useful than written evaluations, yet there is no good mechanism for translating verbal feedback into written form.

  • Written evaluations in the current evaluation tool are lengthy, there are many of them to complete, and they disrupt the flow of organic conversations.

  • The current evaluation tool presents a number of challenges that discourage faculty from submitting written evaluations.

Insights from Research

Synthesizing research

Create a tool that collects verbal, on-the-fly feedback

Prototyping Opportunities

  • Directly feed evaluations into the current system

  • Use AI to summarize verbal entries into one written evaluation

  • Use AI to assist in co-creating constructive feedback

Co-designing and iterating

First co-design sessions with stakeholders

User flows

Second iteration on desktop interface

Third iteration


USING RESEARCH INSIGHTS TO INFORM TECHNICAL DISCOVERY

Technical Research

  • Understood technical requirements

  • Explored LLMs, technical options, tradeoffs, and considerations

  • Developed a proof of concept with an estimated cost to build (a minimal sketch of the approach follows this list)
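
For illustration, here is a minimal sketch of what the proof of concept could look like, assuming an OpenAI-style chat-completions API in Python; the model name, prompt wording, and function names are placeholders rather than the team's actual implementation.

    # Sketch: turn several transcribed verbal feedback entries into one draft
    # written evaluation. Assumes the `openai` Python package; the model name
    # and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are helping clinical faculty turn short, verbal, on-the-fly "
        "feedback notes into one concise written evaluation of a learner. "
        "Use only the notes provided, and phrase constructive points "
        "specifically and respectfully."
    )

    def draft_evaluation(verbal_entries: list[str]) -> str:
        """Combine transcribed verbal feedback entries into one draft evaluation."""
        notes = "\n".join(f"- {entry}" for entry in verbal_entries)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Feedback notes:\n{notes}"},
            ],
        )
        # The draft is returned for faculty review and revision, never auto-submitted.
        return response.choices[0].message.content

Keeping the draft editable before submission mirrors the accuracy consideration noted below: the model proposes, and faculty revise and approve.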

Prompt engineering

AI Considerations

  • Accuracy: Define how accurate responses need to be; account for users' revisions before final submission.

  • Feedback Loop: Continuously refine the process by gathering feedback and making adjustments.

  • Ethical Considerations: Check for bias and fairness to maintain trust and compliance.

Technical Proof of Concept

Experimenting with prompt engineering
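
As an illustration of this kind of prompt experimentation, the sketch below compares a few hypothetical system-prompt variants on the same sample notes so reviewers can judge accuracy, tone, and potential bias side by side; the prompts, sample notes, and model choice are assumptions, not the prompts actually used.

    # Sketch: generate one draft evaluation per candidate system prompt so the
    # team can review accuracy, tone, and potential bias side by side.
    # Prompt wording, sample notes, and model name are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    PROMPT_VARIANTS = {
        "baseline": "Summarize the feedback notes into one written evaluation.",
        "structured": (
            "Summarize the feedback notes into one written evaluation with two "
            "sections, 'Strengths' and 'Areas for growth'. Use only the notes "
            "provided; do not infer anything not stated."
        ),
        "bias-aware": (
            "Summarize the feedback notes into one written evaluation. Use only "
            "the notes provided, avoid personality descriptors, and focus on "
            "specific, observable behaviors."
        ),
    }

    SAMPLE_NOTES = [
        "Presented the overnight admission clearly and prioritized the problem list well.",
        "Could be more proactive about updating the family before rounds.",
    ]

    def run_prompt_experiments() -> dict[str, str]:
        """Return one draft per prompt variant for side-by-side human review."""
        notes = "\n".join(f"- {n}" for n in SAMPLE_NOTES)
        drafts = {}
        for name, system_prompt in PROMPT_VARIANTS.items():
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model choice
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": f"Feedback notes:\n{notes}"},
                ],
            )
            drafts[name] = response.choices[0].message.content
        return drafts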


PRESENTING FINDINGS TO UCSF DOM LEADERSHIP

Getting buy-in from Chief of Medicine Dr. Bob Wachter

Presenting to DOM leadership to get buy-in

Presented final mockups

Click image to view full prototype

Roadmapping

Business Outcomes

  • Enhance the quality and quantity of feedback to better meet ACGME accreditation standards.

  • Improve the feedback culture at UCSF to create a richer learner and faculty experience, increasing satisfaction.

  • Serve as a catalyst for reducing manual, outdated processes, saving time and money.

  • Serve as a catalyst for advancing the use of AI in education at UCSF.