Topvisor: User-Centric SaaS

User Testing, 0→1, Product design, UX Design

Tools

Figma, Jira, GitHub

Timeline

6 months

Device

Web, Mobile

Competitor Research Tool at Topvisor

I’m Irina, a Product Designer with 8+ years of experience designing enterprise, SaaS, and AI-driven products. I specialize in untangling complex data flows and creating interfaces that work for very different user types at once.

This case study is about how I designed and launched the Competitor Research Tool at Topvisor — transforming keyword discovery from an empty starting point into a structured, end-to-end workflow. The project required balancing multiple personas, working closely with engineering and product, and turning technical constraints into product strengths.

Context

Topvisor is a suite of SEO tools built around projects tied to a website’s URL. It’s used by SEO specialists and agencies who need rank tracking, audits, and reporting in one place.

By the time we started this project, I had already been a Lead Product Designer at Topvisor for two years. We were a small, close-knit team of around 12 people, which meant I was deeply involved in product strategy, discovery, and delivery end to end.

Our core product was Rank Tracker, but users kept running into the same problem: “Where do I get the keywords to track?”
Without keywords, projects stalled, onboarding broke, and users dropped off before reaching value.

The goal of the Competitor Research Tool was to solve this foundational problem. We wanted to automatically surface competitor keywords, reveal strategies, and highlight opportunities — not just as a standalone feature, but as a way to make Topvisor a true end-to-end SEO suite and differentiate it from much larger competitors.



Research

I started by looking at both the market and our users.

On the market side, I analyzed tools like SEMrush, Ahrefs, and SimilarWeb. They all offered massive keyword datasets, but they shared the same weakness: users were left alone with raw data. There was little guidance, structure, or onboarding logic. That revealed a clear gap — users didn’t need more data, they needed help making it usable.

On the user side, I focused on two primary personas:

  • SEO specialists, who needed full control: filters, clustering, automation, and scalability for client work.

  • Small business owners, who wanted clarity and quick answers. They didn’t care about thousands of rows — they wanted to understand what competitors ranked for and where gaps existed.

Mapping their journeys revealed two key pain points:

  • For SMBs, the biggest blocker was the empty start — they didn’t know what to input.

  • For specialists, the problem appeared later, when large datasets arrived in a chaotic, hard-to-reuse form.

I shared these insights early with engineers, the PM, and the CEO. Because the team was small, collaboration was fast and open. We sketched ideas together, discussed trade-offs, and aligned quickly on direction.

Three core principles emerged:

  • Remove the empty start

  • Turn chaotic data into something usable

  • Balance very different personas without fragmenting the product

Design Challenges

Once we started prototyping, several challenges became clear.

First, data chaos. Early versions dumped thousands of keywords into flat tables. Beginners froze, power users struggled to parse the data, and performance suffered.

Second, persona tension. SEO specialists wanted control and depth, while SMBs wanted simplicity and guidance. Creating two separate tools wasn’t an option — we needed a single system that could flex.

Third, integration. Competitor Research initially felt isolated. Moving keywords into Rank Tracker required manual copy-paste, breaking the end-to-end flow.

Finally, data freshness. We originally aimed for live competitor data, but at scale, crawlers could only provide monthly snapshots. This felt like a major limitation at first.

Close collaboration shaped how we handled these constraints. Engineers flagged performance risks early, the PM balanced business priorities, and even the CEO participated in design reviews. Problems surfaced early — and directly informed the solutions.


Product Design Strategy

We used Lean UX and RICE prioritisation to balance delivery velocity against business value. The foundational design strategy focused on clarity through progressive disclosure, exposing the right insights at the right time without overwhelming users.
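
For context on RICE: each candidate is scored as (Reach × Impact × Confidence) ÷ Effort and the backlog is ranked by that score. A minimal sketch of the calculation, with illustrative feature names and numbers rather than our actual scores:

    # Illustrative RICE scoring sketch: (Reach * Impact * Confidence) / Effort.
    # Feature names and numbers are hypothetical, not the real Topvisor backlog.
    def rice_score(reach, impact, confidence, effort):
        return (reach * impact * confidence) / effort

    backlog = {
        "SERP-based clustering": rice_score(reach=800, impact=2.0, confidence=0.8, effort=5),
        "Quick reports":         rice_score(reach=1200, impact=1.0, confidence=0.9, effort=2),
        "Rank Tracker export":   rice_score(reach=600, impact=3.0, confidence=0.7, effort=3),
    }

    # Highest score first: the items we would tackle earliest.
    for feature, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{feature}: {score:.0f}")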

The interface emphasised:

  • SERP-based keyword clustering, grouping domains by real-world visibility patterns (a rough sketch follows this list)

  • Preview cards of real search results, enriched with organic and ad placements

  • Color-coded visual tables and compact filters for engine/device/context — all scoped to avoid global resets
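
To make the first bullet concrete: the underlying idea is to put two keywords in the same cluster when their top search results share enough ranking domains. This is only a rough sketch under that assumption; the sample data, threshold, and names are hypothetical, not the production algorithm.

    # Rough SERP-overlap clustering sketch: keywords whose top results share at
    # least MIN_SHARED ranking domains land in the same cluster.
    # Sample data and threshold are hypothetical.
    MIN_SHARED = 3

    serps = {
        "seo audit tool":      {"ahrefs.com", "semrush.com", "moz.com", "topvisor.com"},
        "website seo checker": {"ahrefs.com", "semrush.com", "moz.com", "seobility.net"},
        "rank tracker":        {"topvisor.com", "accuranker.com", "semrush.com"},
    }

    clusters = []  # each cluster is a list of keywords
    for keyword, domains in serps.items():
        for cluster in clusters:
            # Join the first cluster whose seed keyword shares enough SERP domains.
            if len(domains & serps[cluster[0]]) >= MIN_SHARED:
                cluster.append(keyword)
                break
        else:
            clusters.append([keyword])

    print(clusters)  # [['seo audit tool', 'website seo checker'], ['rank tracker']]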


Deep Thinking Workshops



As this was a greenfield project, I ran collaborative deep thinking workshops to ensure we were building the right product, not just the right UI:

  • Information architecture mapping to define the base hierarchy of competitor analysis

  • Priority flows to identify what insights matter most and in what sequence

  • Card sorting to validate mental models and reduce decision fatigue

  • Scenario-based design to simulate real-world consultant use cases

These led to critical decisions like:

  • Keyword clusters as the primary lens

  • Filtering scoped per context rather than global

  • Pre-configured quick reports based on user goals


Report Anatomy


I designed a modular report structure that allowed users to:

  • Build and export structured reports without manual formatting

  • Analyse graphical data and tables

  • Filter and view reports on their own terms

  • Do all of this on mobile as well



Measuring Impact

We tracked success through internal dashboards and Google Analytics, focusing on activation, task success, and adoption.

Within the first month:

  • Activation increased significantly — users no longer dropped off during setup

  • Task success rose from 57% to 92%

  • Competitor Research became the primary entry point into the suite

  • Support tickets asking “What do I put here?” nearly disappeared

We paired quantitative data with qualitative feedback through regular stakeholder and user check-ins, ensuring we understood not just what changed, but why.

Key Learnings

This project reinforced several important lessons:

  • Constraints can create better products. Monthly data enabled clearer trend analysis than noisy real-time feeds.

  • Massive tables don’t help anyone. Progressive disclosure and clustering are essential — even for power users.

  • Strong defaults matter. SMB satisfaction increased once we reduced upfront choices and preserved flexibility for experts.

This case was a crash course in balancing personas, collaborating deeply with engineering and product, and turning limitations into strengths — lessons I continue to apply when designing scalable, integrated systems.

Available for new projects

Let’s Build Something Amazing Together.

Have a question or an exciting opportunity in mind? I’d love to hear from you. Let’s create user experiences that make a difference.
