Layered AI
A mixed-methods UX research and design project examining how trust, transparency, and perceived agency shape people's experience of AI-powered tools, through two-week diary studies and in-depth interviews.
Overview
Layered AI is a UX research and design project exploring how people interact with AI-powered tools — specifically how trust, transparency, and perceived agency shape user experience when AI is embedded in everyday workflows.
The project was completed at UC Berkeley’s School of Information as part of the UX for AI course.
Problem
As AI systems become more capable and more embedded in daily tasks, users increasingly struggle to understand what the AI is doing, why, and how much control they retain. This opacity leads to over-reliance, distrust, and disengagement.
Research
- Diary studies — Participants logged daily interactions with AI tools over two weeks, capturing moments of confusion, delight, and unease
- In-depth interviews — Followed up with 8 participants to explore themes that surfaced in their diary entries
- Thematic analysis — Identified patterns across trust calibration, mental models, and user agency
Key Insights
- Users developed more accurate mental models of AI behavior when systems offered lightweight explanations at decision points — not overwhelming disclosures, but contextual “why” cues
- Perceived control — even when illusory — significantly increased comfort and continued engagement
- Participants distinguished between AI as a tool (preferred) and AI as an agent (met with resistance), suggesting design language matters as much as functionality
Design
Translated research findings into a set of UX principles and interface patterns for AI-assisted tools, prototyped in Figma:
- Progressive disclosure of AI reasoning
- User-adjustable confidence thresholds
- Clear, recoverable override controls
Delivered final prototype and research report to stakeholders with design rationale grounded in empirical findings.
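As an illustration, the user-adjustable confidence threshold pattern could be sketched in code. This is a hypothetical sketch, not part of the delivered Figma prototype; the names (`presentSuggestion`, `minConfidence`, the 0.2 "offer" band) are assumptions chosen to show how a threshold might gate whether an AI suggestion is shown outright, offered behind a contextual "why" cue, or suppressed:

```typescript
// Hypothetical sketch of the "user-adjustable confidence threshold" pattern.
// An AI suggestion carries a model confidence score in [0, 1].
type Suggestion = { text: string; confidence: number };

// The user controls the threshold, e.g. via a settings slider.
interface ThresholdSettings {
  minConfidence: number; // user-adjustable, 0..1
}

// Decide how to surface a suggestion:
// - "show":  confidence meets the user's threshold; display it directly
// - "offer": just below threshold; present behind a lightweight "why" cue
//            (progressive disclosure rather than an unprompted interruption)
// - "hide":  well below threshold; suppress entirely
function presentSuggestion(
  s: Suggestion,
  settings: ThresholdSettings
): "show" | "offer" | "hide" {
  if (s.confidence >= settings.minConfidence) return "show";
  if (s.confidence >= settings.minConfidence - 0.2) return "offer";
  return "hide";
}
```

Keeping the threshold user-adjustable reflects the finding that perceived control increases comfort, while the intermediate "offer" state keeps low-confidence output recoverable instead of silently discarded.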