CRED: Credit transparency for an AI-powered product
Tools: Figma, Mixpanel
Timeline: 3 weeks
Device: Web
CRED is a platform where every enrichment action (CRM syncs, “Add to list,” automated runs) consumes credits. Users frequently faced unexpected spend, struggled to trace where costs came from, and lost confidence in the product.
Short summary
At CRED, I redesigned workflows around credit spending after discovering that users were frequently surprised by hidden enrichment charges when adding to lists or syncing with CRM. I investigated through 6–8 stakeholder interviews, support ticket mining, funnel analysis, and session replays, which revealed visibility gaps, unclear triggers, and a lack of traceability. I introduced a live credit estimator tied directly to CTAs, guardrails with warnings, approvals, and daily caps, and a conditional review step for high-risk runs, along with clearer copy and accessible UI patterns. Post-action, I designed a job-centric Credit Activity view showing Estimate → Actual (Δ%), actor, approvals, timestamps, and error counts. This made every credit attributable, explainable, and exportable, reducing “unexpected spend” tickets, improving users' ability to predict cost, and restoring trust and budget control in high-stakes workflows.
The problem
No visibility into when spend was triggered.
No clear link between action and cost.
Finance teams couldn’t answer “What did we pay for?” or “Who ran this?”.
Support was flooded with tickets mentioning unexpected credits.
My role
I led the end-to-end redesign of credit workflows — from research and mapping mental models to designing new UI patterns and states. The goal: make credit spend predictable, explainable, and auditable.
Approach
Research & investigation
User interviews with power users, admins, FinOps, new users.
Session replays (FullStory/Hotjar) to identify hesitation and backtracks.
Funnel and job log analysis to quantify fields × rows and outliers.
Support ticket mining around surprise spend and tracking gaps.
Flow review for clarity, error prevention, and accessibility.
Design principles
Transparency — always show a numeric estimate before commit.
Proximity — cost chips placed directly on CTAs, no hidden side effects.
Trust through language — plain words (“Enrich”) instead of jargon (“Run waterfall”).
Guardrails — approvals, daily caps, and hard stops built into the flow.
Ownership — actor names, timestamps, approval trails, job IDs.
AI UX clarity — translate complex AI enrichment logic into user-friendly options: Add without enrichment, Request approval, Schedule run.
Solution
Pre-action
Live estimator (fields × records ≈ credits) shown before commit.
Guardrails: warnings, approval request, hard stop if limits are exceeded (see the sketch after this list).
Option to Add without enrichment for zero-risk flow.
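To make the estimator and guardrail logic concrete, here is a minimal sketch in TypeScript. It assumes a flat fields × records pricing model and illustrative thresholds; all names, numbers, and limits are hypothetical, not CRED's actual pricing or configuration.

```typescript
// Hypothetical pre-action estimate and guardrail check.
// Pricing model, thresholds, and names are illustrative assumptions.

type GuardrailDecision = "proceed" | "warn" | "requires_approval" | "blocked";

interface EstimateInput {
  fieldsPerRecord: number;   // enrichment fields selected
  recordCount: number;       // rows in the list or sync
  creditsPerField: number;   // unit price per enriched field
}

interface Guardrails {
  warnAbove: number;          // soft warning threshold (credits)
  approvalAbove: number;      // route to an approver above this
  dailyCapRemaining: number;  // hard stop when exceeded
}

function estimateCredits(input: EstimateInput): number {
  return input.fieldsPerRecord * input.recordCount * input.creditsPerField;
}

function checkGuardrails(estimated: number, g: Guardrails): GuardrailDecision {
  if (estimated > g.dailyCapRemaining) return "blocked";        // hard stop
  if (estimated > g.approvalAbove) return "requires_approval";  // conditional review step
  if (estimated > g.warnAbove) return "warn";                   // inline warning on the CTA
  return "proceed";
}

// Example: 4 fields × 2,500 records × 1 credit/field ≈ 10,000 credits → approval required.
const estimate = estimateCredits({ fieldsPerRecord: 4, recordCount: 2500, creditsPerField: 1 });
console.log(estimate, checkGuardrails(estimate, { warnAbove: 1000, approvalAbove: 5000, dailyCapRemaining: 20000 }));
```

The point of the sketch is proximity: the same numeric estimate that drives the guardrail decision is the one rendered on the CTA, so the cost a user sees is the cost the system acts on.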

Post-action
Credit Activity redesigned as a job-centric audit view:
Estimate → Actual (Δ%)
Actor, timestamps, skipped/failed counts
Approvals trail and exportable reports
Every credit is now traceable, explainable, and attributable; the sketch below shows the kind of job record that backs this view.
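As a rough illustration of the job-centric audit view, here is a hypothetical record shape and the Δ% calculation it surfaces. Field names and the CSV export are assumptions for clarity, not CRED's production schema.

```typescript
// Hypothetical shape of one Credit Activity job record; names are illustrative.

interface CreditJobRecord {
  jobId: string;
  actor: string;                 // who triggered the run
  approvedBy?: string;           // present when the run went through approval
  startedAt: string;             // ISO 8601 timestamps
  completedAt?: string;
  estimatedCredits: number;
  actualCredits: number;
  skippedRows: number;
  failedRows: number;
}

// Δ% between estimate and actual spend, as shown in the audit view.
function deltaPercent(job: CreditJobRecord): number {
  if (job.estimatedCredits === 0) return 0;
  return ((job.actualCredits - job.estimatedCredits) / job.estimatedCredits) * 100;
}

// One exportable row per job keeps the view attributable and auditable.
function toCsvRow(job: CreditJobRecord): string {
  return [
    job.jobId,
    job.actor,
    job.approvedBy ?? "",
    job.startedAt,
    job.estimatedCredits,
    job.actualCredits,
    deltaPercent(job).toFixed(1) + "%",
    job.skippedRows,
    job.failedRows,
  ].join(",");
}
```

Keeping the record job-centric (rather than a flat ledger of credit deductions) is what lets finance answer “What did we pay for?” and “Who ran this?” from a single row.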

Results & impact
Users can predict spend upfront with confidence.
Finance teams gain a single source of truth for credit activity.
Expected outcomes (validated with usability tests and support data):
−60% tickets tagged “unexpected spend.”
+90% of users correctly predict cost before running.
≥90% of high-cost runs routed through approval.
Why this matters
Money is at stake. By transforming AI enrichment from a black box into a transparent, trustworthy interface, I helped CRED reduce friction, prevent errors, and restore user confidence.