DESIGN CRITIQUE AT DUO

Designed critique framework → nearly tripled team connections
Created a systematic peer feedback practice and improved designer connections for a 30-person team.
Why this project?
When our 30-person design team struggled with isolation and declining craft quality, most would have organized workshops. Instead, I treated it like any design problem: research the users, prototype solutions, test and iterate.
Impact delivered
Team awareness scores improved 60%, social connection increased 75%, and designers gained visibility into teammates' "messy middle" design process—all measured through systematic user research.
Role
Sr. DesignOps Program Manager focused on team rituals, feedback systems, and scalable program design.
Duration
9 months
Team
30-person product design team
2x Sr. Product Designer collaborators
Deliverables
"Work With Me" sheet framework
Structured critique templates & guides
Cross-functional pod system design
Research findings & insights
Mural collaboration templates
Program measurement framework

Something essential to our craft was missing
Duo's 30-person design team had no systematic critique practice. Some squads had informal design reviews, but for a team this size, the absence of structured peer feedback was holding us back. Critique and design review are considered table stakes for mature design teams—yet we had none.
COVID's shift to remote work intensified the isolation. With no in-person activities, designers had limited visibility into each other's work, social connections were weakening, and opportunities for meaningful feedback were inconsistent.
When I transitioned into a DesignOps role, I immediately saw this as a systems design opportunity. Rather than add another meeting to calendars, I could research what designers actually needed and prototype a solution that prioritized both connection and craft elevation.
“There isn't a clear way to learn what other designers are up to. This lack of visibility is holding us back.”
— Focus group session feedback
Activities & outputs
Problem definition, Stakeholder alignment
Prototyping a systematic framework
I designed and prototyped a complete system addressing each problem:

"Work with Me" sheets solved psychological safety by helping teammates understand individual communication styles and working preferences—pure user-centered design applied to team dynamics.

Cross-functional pods created diverse 6-8 person groups mixing designers, researchers, and domain expertise. This addressed isolation while bringing fresh perspectives to familiar problems.

Structured activities with clear roles included presenter, critiquers, and pod facilitators—removing the guesswork that kills volunteer participation.

Supporting artifacts in Slack and Mural provided both synchronous critique sessions and asynchronous work-sharing, creating multiple touchpoints for team connection.

"Work with Me" Sheets were instrumental in helping peers get to know each other and strengthened psychological safety.
“The ideal process emphasizes strong communication and trust.”
— From the presentation deck
Activities & outputs
Framework design workshops, Beta program design, "How to" guides for 3 roles, "Work With Me" sheet template, Mural collaboration templates, Pod structure design
Survey data showed improvement across all measures
Data-driven tracking as the team felt the shift
I launched "Critique Beta" with quantitative baselines across three metrics: team awareness, social connection, and visibility into design process. After three months of testing with structured surveys, the data showed improvement across all measures—but qualitative feedback revealed a fundamental design flaw.
Survey results showed progress:

✅ Team awareness: 3-6 range improved to 6-8 range
✅ Social connection: 3-6 range improved to 6-8 range
✅ Process visibility: Similar positive trajectory
But user feedback revealed a core problem: Artificial pods pulled people from their natural work context. In addition, setting context for pod members who didn't know each other's projects took longer than getting actual feedback.
Activities & outputs
Beta program launch, Baseline & follow-up surveys, Hands-on observation of 16 pod sessions, Survey results analysis
Iteration based on user needs
The insight: I had optimized for social connection but ignored workflow integration. Critique needed to happen where real work happened—within squads, not artificial pods.
I completely redesigned the system:

- Moved critique to squad-level activities integrated into project timelines
- Included design managers and leads to provide accountability and governance
- Made critique project-triggered, not calendar-driven, to ensure relevant timing
- Maintained support materials while shifting ownership from DesignOps to team leads
Activities & outputs
V2 framework redesign, Squad-level integration design, Governance model redesign, Lead designer collaboration sessions, V2 rollout presentation

192% increase in high satisfaction scores and an 84% reduction in low scores
Impact: Systematic improvements
After six months, follow-up surveys showed sustained improvement across all metrics. Team members reported getting valuable feedback that moved their work forward (8/10 average satisfaction), with strong agreement that the time investment was worthwhile.
“The activity has given me a much better idea of what folks are working on and how they approach problems.”
— Duo Product Designer, survey feedback
What I learned
This program taught me that organizational problems are design problems—they require user research, prototyping, testing, and iteration. Most designers focus on individual craft skills, but I focus on designing the systems that enable great craft to thrive at scale.

While other designers struggle to get their work implemented, my program management background lets me identify and solve the organizational barriers that kill good design. I don't just design—I design the conditions for design success.

This is how I approach all design challenges: diagnose the systemic barriers, prototype solutions, test with real users, iterate based on data, then scale what works.
Next up: McAfee Case Study
