Solution
Trustpilot hosts reviews for over one million businesses, each with a TrustScore that reflects customer feedback. After identifying that many users were unsure how the TrustScore was calculated, I led a project to make this information more visible and digestible. The result was a layered, accessible explainer on every company page that helped users understand and trust what the score represents.
Role
Lead Product Designer
I led this project within my role in the Trust and Transparency team, taking it from concept through delivery. Our work sits horizontally across the product, so this project required close collaboration with the teams that owned our entry-point pages. I defined the UX direction, partnered with a content designer and legal to shape the explanations, and worked with PMs and engineers to deliver the experience.
Activities
Product & UX Strategy
End-to-end UX design
Principle-driven design
UI and visual design
User interviews and testing
Data-driven design
Legal & stakeholder alignment
Prototyping
Cross-functional collaboration
Research evangelism
Team
Duration
Impact
Increased engagement
Achieved an 85x higher click-through rate than the previous tooltip and help-centre link.
Faster comprehension
Users understood the TrustScore 40% faster than with the previous tooltip and help-centre link.
Defined design principles for trust
Established principles now used within the UX team when designing for trust
Strengthened brand trust
Contributed to raising Trustpilot's own TrustScore from 4.2 to 4.3
Problem
Across social media, I noticed confusion about how Trustpilot’s TrustScore was calculated. People were questioning its fairness and tagging Trustpilot to demand clarity. An explanation of the TrustScore already existed, but it was hidden behind a small tooltip and a help-centre link that almost no one clicked - just 0.006% of users. This presented a dual challenge: users lacked visibility into how trust was calculated, and the business risked losing credibility in its most visible trust signal. I saw an opportunity to surface this information more visibly and help rebuild understanding and confidence.
Earlier UX research had identified several areas where the TrustScore didn’t align with user expectations, creating moments of confusion that risked eroding trust. I took ownership of translating those findings into a design direction that made the TrustScore easier to interpret. I prioritised three core scenarios to address:
Perception mismatch
Familiar brands like Netflix often have lower TrustScores than users expect. Companies that don’t invite reviews collect fewer positive ones, which lowers their score and leaves users questioning its accuracy.
Bayesian average
All companies start with seven 3.5-star reviews to create a baseline. With fewer than ten genuine reviews, this weighting becomes noticeable: three 5-star reviews may show a TrustScore of 4, leaving users unsure why (see the worked example after these scenarios).
Polarized reviews
When a company’s reviews are split between 1-star and 5-star, users are left unsure why opinions are so divided, which makes the TrustScore feel inconsistent and less reliable overall.
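Worked example (a simplified sketch: it assumes the displayed score is a plain average of the baseline and genuine reviews, for illustration rather than Trustpilot’s exact formula). Three genuine 5-star reviews sit alongside the seven baseline 3.5-star reviews:
(7 × 3.5 + 3 × 5) ÷ (7 + 3) = (24.5 + 15) ÷ 10 = 3.95
which displays as a TrustScore of 4, not the 5 those three reviews alone would suggest.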
Principles
As the project unfolded, it became clear that a shared framework was needed for designing with trust in mind. I drew on the principles created before I joined the Trust and Transparency team and used them to guide our design decisions, ensuring they were applied consistently throughout the project. The principles most relevant to this project were:
Contextualise data for clarity
Every data point or metric should include enough context for users to understand what it means and why it matters.
What this means
Explain why the data matters
Show benchmarks for context
Use real-world examples for relatability
Embrace an approachable tone
Communicate in a way that feels relatable and trustworthy while maintaining accuracy and legal integrity.
What this means
Be transparent without overwhelming
Tailor depth to user context
Use conversational, empathetic language
Development
As well as showing what drives each company’s TrustScore, I wanted to break down the more complex concepts behind it, so I designed a series of modals and accompanying animations. This built on our principle of simplifying complex concepts: using plain language, clear visuals, and step-by-step explanations to make difficult ideas easy to grasp.
I spent time refining the content with the legal team to make sure the explanations were compliant, and finessed the copy alongside a content designer.
I ran live interviews to validate the concept and understand how users interpreted our explanations of the TrustScore. Since earlier research had already defined the problem space, these sessions focused on discoverability and the balance between enough and too much information. I shared these insights in a presentation to the rest of the business, as this was the first time we had tested comprehension around the TrustScore.
Key insights:
The attribute cards felt too intrusive to users who believed they already understood the TrustScore
The polarity card added little beyond what was visible in the star breakdown
The (i) icon was the first place users looked for how the TrustScore is calculated
Nobody understood what "Bayesian average" meant
Outcomes:
1. Removed the permanent attribute cards
2. Integrated the TrustScore explainer into the (i) icon
3. Removed "Bayesian" terminology
4. Leadership chose to defer designing for perception mismatch
Final Designs
I collaborated with the team that owns the company profile page to determine where the link to access the TrustScore explainer would live. Working with a content designer, we finalised the copy, and a visual designer created bespoke animations for the flow.
On profiles where the Bayesian average was more visible, we reversed the order of the modals so that its explanation appeared first, addressing users’ most immediate questions. Before release, we tested this against the original tooltip and help-centre link, measuring how long it took users to understand the information shown. The new modal flow reduced time to comprehension by 40%.
Future
Our redesign drove significantly higher engagement (85x), with most users who opened it reading through the full modal flow. However, data showed that on desktop the previous tooltip attracted more passive hovers: 1.12% of users viewed the short explanation without clicking through. As a next step, we could A/B test reintroducing a less intrusive information icon on desktop, in place of the tooltip we removed, that opens the modal flow directly.
Learnings
This project showed how closely legal alignment and stakeholder decisions intersect in a regulated space like online reviews. During development, senior stakeholders chose to exclude messaging about whether companies invite customers to leave reviews. Later, legal clarified that this disclosure was still required somewhere on the page, leading to a post-launch copy update.
Since then, I’ve made legal collaboration an active part of my design process within the Trust and Transparency team. We now share early concepts and evolving policy updates in regular check-ins throughout the design cycle, which has helped us move away from reactive fixes.