Practical Designer Performance Reviews

A practical way to run designer performance reviews using a living document with clear keep/start/stop feedback, goal tracking, and AI tools to make you more efficient.


In many early-stage startups, performance reviews can feel like box‑ticking exercises. The process is often high‑level, managed through generic Google Forms, and leaves designers with vague comments or bland affirmations. Without actionable feedback or a clear path to promotion, both manager and direct report end up frustrated.

Over the last few years, I’ve been refining a system that brings clarity and continuity to designer performance reviews. The secret: take good notes, treat your feedback as a living document, and tie everything back to real, specific work.


The Ratings System: More Than Just "Successful"

Most organisations use a simple status rubric. Ours includes five choices:

  • Role Model
  • Exceeds Expectations
  • Successful
  • Developing
  • Not Delivering

"Successful" is the baseline. Nothing to complain about. You're rock solid.

For designers aiming to grow, though, the target is Exceeds Expectations. It signals readiness for the next level. Promotions tend to reward work you're already doing.

For feedback, we also write bullets for what people should keep, start, and stop doing. This is useful but often biased toward the last few weeks.


How I Track Reviews: A Living Google Doc

Rather than juggling multiple forms, I keep one shared Google Doc per direct report. Each review date gets its own page, so past discussions never vanish into inboxes. Every entry has four sections:

  1. Status (pick one of the five above)
  2. Keep Doing, Start Doing, Stop Doing
  3. From Last Time
  4. For Next Time

This setup means every conversation builds on the last. Copy, paste, and refine—there’s no reinventing the wheel each cycle.

Every bullet needs to tie directly to real work, both successes and failures, and map back to one of our feedback buckets.


Goal Tracking That Actually Works

Review time isn’t just a status update; it’s a checkpoint on past commitments, all in one place. Each goal from the last review gets one of three states:

  • Unchecked: We didn’t hit this goal. Let’s either re-commit or swap in a new, higher-priority objective.
  • Intermediate: We made progress. Keep it rolling or pivot scope.
  • Checked: Goal achieved! Time to celebrate and pick a fresh challenge.

Because goals live alongside feedback in the same document, revisiting them is as simple as ticking boxes and editing a line or two. The best thing is that you can scroll back through time and see your progress.

Anonymised Example:

AREAS OF OPPORTUNITY:
From last time:
[-] Embed user research findings directly into sprint planning
[ ] Adjust fidelity of prototypes based on project phase
[x] Lead design reviews for at least one major feature launch
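Because the three states map to plain-text markers (`[ ]`, `[-]`, `[x]`), the goal list is trivially machine-readable too. As a minimal sketch (the `parse_goals` helper is my own illustration, not part of any tool), you could summarise progress across old review pages like this:

```python
import re

# Map the plain-text checkbox markers to the three goal states.
STATES = {" ": "unchecked", "-": "intermediate", "x": "checked"}

def parse_goals(text: str) -> list[tuple[str, str]]:
    """Extract (state, goal) pairs from lines like '[x] Lead design reviews'."""
    goals = []
    for line in text.splitlines():
        m = re.match(r"\[( |-|x)\]\s+(.*)", line.strip())
        if m:
            goals.append((STATES[m.group(1)], m.group(2)))
    return goals

doc = """\
[-] Embed user research findings directly into sprint planning
[ ] Adjust fidelity of prototypes based on project phase
[x] Lead design reviews for at least one major feature launch
"""

for state, goal in parse_goals(doc):
    print(f"{state:>12}: {goal}")
```

The same convention works whether the doc lives in Google Docs or a Markdown file; the point is that progress stays greppable.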

Designing for Promotion

Promotion isn’t about sharper wireframes or better interfaces; it’s about upping your problem solving, visibility, credibility, and strategic impact.

Here’s how I map next-role behaviours onto today’s work:

Keep doing
Choose something they're already doing that maps, even loosely, to behaviours expected at their next role.

Start doing
Pick one or two things that augment their existing behaviour and aren't a big leap from where they are now. Remember that they have to do their day job as well as these new things aimed at promotion.

Stop doing
Identify behaviours that are having the opposite effect and suggest ways to redirect them.

Areas of opportunity (aka goals)
Take the high-level keep, start, and stop items and translate them into three specific, actionable, and achievable goals.

For next time:
- Mentor a junior designer through an entire feature cycle, from discovery to delivery
- Demonstrate a quantifiable uplift in user activation metrics (e.g., +10% sign-up completion rate)
- Initiate bi-weekly design critique sessions with product managers to foster continual learning

This clarity helps designers practice the behaviours that define a lead role, long before the promotion conversation.

Putting It All Together

15 June 2025

Rating: Exceeds Expectations

KEEP DOING:
- Conducting rapid user journey sketches before high-fidelity mocks
- Collaborating with the data team early to ground designs in real user insights
- Sharing clickable prototypes in #design-review to gather cross-functional feedback

START DOING:
- Incorporating accessibility checks into initial wireframes to catch issues sooner
- Running small-scale A/B tests on key interface changes before full release

STOP DOING:
- Keeping design rationale hidden in private notes; move context and decisions into our team wiki for transparency

AREAS OF OPPORTUNITY:
From last time:
[-] Embed user research findings directly into sprint planning
[x] Adjust fidelity of prototypes based on project phase
[x] Lead design reviews for at least one major feature launch

For next time:
- Mentor a junior designer through an entire feature cycle, from discovery to delivery
- Demonstrate a quantifiable uplift in user activation metrics (e.g., +10% sign-up completion rate)
- Initiate bi-weekly design critique sessions with product managers to foster continual learning

Notes > Memory

Relying on memory is a recipe for recency bias. Notes, captured in the moment, give you a reliable reference. I use Obsidian for a permanent, linked record:

A redacted graph view from Obsidian: the graph shows the interconnectedness of notes over time.
  1. People: One note per direct report in a folder.
  2. Date-stamped one-on-ones: Quick bullet notes with links to project docs.
  3. Graph links: Cross-reference mentions of people, projects, and feedback so nothing falls through the cracks.

When it’s review time, I have a searchable history of meetings, project outcomes, and side conversations.
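Obsidian's graph is built from `[[wikilinks]]` in plain Markdown files, so pulling every mention of a person ahead of a review can be scripted. Here's a rough sketch, assuming a vault of `.md` files and the standard `[[Name]]` link convention (the `mentions_of` helper is hypothetical, not a real Obsidian API):

```python
import re
from pathlib import Path

def mentions_of(vault: Path, name: str) -> dict[str, int]:
    """Count [[Name]] wikilinks per note across an Obsidian vault.

    Matches both [[Name]] and aliased links like [[Name|they]].
    """
    pattern = re.compile(r"\[\[" + re.escape(name) + r"(\|[^\]]*)?\]\]")
    counts = {}
    for note in vault.rglob("*.md"):
        n = len(pattern.findall(note.read_text(encoding="utf-8")))
        if n:
            counts[note.name] = n
    return counts
```

Running this for a direct report before review prep gives you a ranked list of the notes most worth re-reading.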


Supercharging Reviews with AI

To speed up drafting, I use an Obsidian → MCP → Claude plugin that turns my notes into a knowledge base. This only works if you take good notes, of course. The benefit is surfacing my own thoughts and content without manually sifting through hundreds of files.

Workflow:

  1. Prompt Claude:
Based on my Obsidian notes for [Name], identify one behaviour to Keep doing, one to Start doing, and one to Stop doing for a mid-level to Senior transition.

Tie each to real examples from our past six months of projects.

Provide links to the source files for me to review and the reasoning behind your suggestions.
  2. Claude returns draft bullets and sources for me to review.
  3. I review, reword, and add to my living document before sharing.

This hybrid approach saves hours and keeps feedback grounded in my own observations.


Preparation and Reflection

Great reviews take prep. I block time for three things:

  • Weekly note sprint (30 min): Update one-on-one notes and link new projects. Reflect on team and individual progress. I find the Tuckman model a useful rubric.
  • Review prep (60 min): Skim feedback doc, adjust bullets, and draft status updates.
  • Review 1:1 (60 min): Discuss with your direct report. Explain your rationale. Solicit feedback and their thoughts. Do they agree or disagree with your assessment? Do they agree with the goals? Work with them to set new ones. And note it all down.

A scheduled deep-dive trumps scrambling the day before. Your future self (and your team) will thank you.


Conclusion

When I first used this format, not long after becoming a manager myself, my manager at the time said they almost screenshotted it and posted it on LinkedIn as an example of how reviews should be done.

When reviews are authentic, designers grow. A living document, anchored in outcomes, plus a dash of AI, makes feedback fast, fair, and forward-looking.