
Internship Details
Google DeepMind
At DeepMind, I joined the People + AI Research (PAIR) team to develop and implement tools that help product teams build more trustworthy and reliable AI systems. I focused on improving the behavior of large language models in conversational settings—making sure AI communicates clearly, handles errors gracefully, and maintains user trust—while collaborating closely with research scientists and engineers.
Student Researcher
5 months
Cambridge, MA
Framework for Responsible Conversational AI: Six Core Principles

User Needs + Defining Success

Data + Model Evolution

Mental Models + Expectations

Explainability + Trust

Feedback + Controls

Errors + Graceful Failures

I worked on Plannerific, a hypothetical AI agent used to explore how AI can address user needs. For this project, I created 8+ new user journeys and “super patterns” (single AI patterns that each demonstrate five or more capabilities). Altogether, I designed 40+ Figma screens and workflows based on the six principles above.

Design Process
My approach followed a structured design process:
1. Review existing Guidebook content to identify relevant principles and gaps.
2. Brainstorm potential patterns that Plannerific could demonstrate.
3. Map user flows and scenarios (8+ new journeys, 20+ workflows) to explore how users interact with AI across different contexts.
4. Detail the human–AI experience, ensuring key concepts like autonomy, safeguards, and feedback loops were reflected in the interaction design.
5. Iterate and review with researchers to validate whether each pattern worked as intended and aligned with responsible AI principles.
Below are two published examples of use cases I designed; additional work cannot be shared due to confidentiality.
Example 1: User Autonomy
People expect technology to make their work easier and more efficient, but they also want to stay in control. It’s important to design AI systems that let users decide which tasks to delegate, how the AI should help, and when to take back control. Striking the right balance of autonomy builds trust and ensures the AI feels like a supportive partner, not a replacement.
In this scenario, our AI agent Plannerific helps users draft invitations. User autonomy is supported in three ways:

1. Options for decision-making
Provide users with enough choices to compare and decide how the AI should perform the task.

2. Clear visual cues
Highlight key differences in AI outputs so users can easily choose what fits best.

3. Feedback and control
Add lightweight feedback loops (e.g., a small chip or inline prompt) that let users guide improvements and set preferences without extra effort.

Example 2: Safeguards and Safe Practices with AI
When dealing with sensitive scenarios such as healthcare, food allergies, or other ambiguous cases, AI should prioritize user safety over providing quick answers. This means adding safeguards that encourage credible sourcing, asking clarifying questions, and directing users to professional guidance when needed.
In this scenario, Plannerific helps a parent check whether a menu is safe for a children’s birthday party. Safeguards are built into the AI experience through three approaches:

1. Ask for more context
Prompt users for more details (e.g., allergies, restrictions) before suggesting solutions.

2. Recognize limits
Identify special cases that should be referred for professional advice, rather than giving overly confident responses.

3. Provide credible sources
Share authoritative third-party information, such as food allergy foundations or nutrition hubs.

These approaches ensure users feel supported and informed while staying in control, striking a balance between AI assistance and human expertise.
