Goal

As lead of the App Optimisation squad at PrettyLittleThing, I set out to accelerate growth and elevate the end-to-end mobile app journey through a continuous CRO pipeline. By implementing rapid, data-driven UX iterations, we aimed to boost conversion and retention whilst keeping the user experience effortless.

My Role and Responsibilities

I guided the full experimentation lifecycle whilst managing BAU tasks and technical debt. I collaborated with UX, engineering, QA, analytics, and business stakeholders from hypothesis to roll-out. My responsibilities included:
  1. Defining & Prioritising the Roadmap: I built and maintained a backlog of over 30 tests across key app pages using GA4 funnels, Contentsquare heatmaps, and post-checkout surveys. By analysing user behaviour and identifying friction points, I prioritised experiments that addressed customer pain points whilst aligning with business objectives.
  2. Overseeing Test Delivery: I oversaw the end-to-end delivery of the prioritised tests, ensuring they were launched on time and with high quality. This involved close collaboration with cross-functional teams and managing resources effectively to maintain a steady pace of experimentation.
  3. Partnering with UX on Testable Solutions: Collaborating closely with the design team, I translated customer insights into clear hypotheses and reviewed Figma prototypes. This ensured that each design iteration was user-centric, technically feasible, and had strong commercial potential.
  4. Authoring Dev-Ready Specifications: I created detailed Jira stories and acceptance criteria for every variant, facilitating seamless handoffs to engineering and QA teams. This meticulous planning enabled us to execute A/B tests efficiently using Firebase Remote Config.
  5. Executing High-Velocity Testing at Scale: By leveraging Firebase Remote Config, we shipped 4–6 A/B tests per sprint across five territories. This approach allowed us to eliminate App Store bottlenecks, maintain high testing quality, and rapidly iterate from hypothesis to validated learning.
  6. Leading Stakeholder UAT & Feedback Loops: I conducted regular UAT sessions, tracked results in BigQuery dashboards, and incorporated learnings back into the backlog. This continuous feedback cycle kept cross-functional alignment tight and maintained high velocity whilst ensuring business requirements were met.
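The backlog prioritisation described in step 1 can be sketched as a simple scoring model. This is an illustrative sketch only: the impact × confidence ÷ effort formula and the example backlog entries are hypothetical stand-ins, not the actual scoring method or data used on the squad.

```python
# Hypothetical sketch of ranking a test backlog by a simple
# impact * confidence / effort score. Scores and entries are made up.
from dataclasses import dataclass


@dataclass
class TestIdea:
    name: str
    impact: int      # expected uplift if it wins, 1-10
    confidence: int  # strength of supporting evidence, 1-10
    effort: int      # build + QA cost, 1-10


def priority_score(idea: TestIdea) -> float:
    """Higher score = run sooner."""
    return idea.impact * idea.confidence / idea.effort


backlog = [
    TestIdea("Sticky Add-to-Bag CTA", impact=8, confidence=7, effort=3),
    TestIdea("Looping image gallery", impact=6, confidence=6, effort=5),
    TestIdea("Low-stock messaging", impact=7, confidence=5, effort=2),
]

ranked = sorted(backlog, key=priority_score, reverse=True)
for idea in ranked:
    print(f"{idea.name}: {priority_score(idea):.1f}")
```

In practice the evidence feeding the impact and confidence inputs would come from the GA4 funnels, Contentsquare heatmaps, and survey data mentioned above.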

Notable UX Enhancements Shipped

  • Sticky Add-to-Bag: Kept the CTA visible on scroll, reducing friction in the shopping experience and increasing add-to-cart rates by 12%.
  • Looping Image Gallery: Implemented a swipeable media carousel, resulting in a 20% increase in product engagement and a 5% uplift in conversions.
  • Intelligent Stock Messaging: Added low-stock prompts to improve purchase confidence, leading to a 15% reduction in cart abandonment.
  • Simplified Cart UI: Collapsed promo & delivery modules to streamline the checkout flow, improving checkout completion rates by 8%.

Outcomes

  • Tests Delivered: 30+ mobile UX experiments
  • Variant Win Rate: 65% (17 winning variants out of 26)
  • Average ROI per Test: £1.16M
  • Projected Annual Uplift: £8.8M (extrapolated from winning tests)
  • Markets Served: five territories with localised testing
  • User Retention Intent: 99.6% (measured through post-checkout surveys)
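Calling a variant a "win" ultimately rests on a statistical comparison of conversion rates between control and variant. As an illustrative sketch (the conversion counts below are invented, not the real test results), a two-proportion z-test using only the Python standard library looks like this:

```python
# Hypothetical sketch: judging an A/B variant with a two-proportion
# z-test. The conversion numbers are made up for illustration.
from math import erf, sqrt


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


z, p = two_proportion_z(conv_a=1000, n_a=20000, conv_b=1120, n_b=20000)
print(f"z = {z:.2f}, p = {p:.4f}")  # variant wins at the 5% level if p < 0.05
```

In production this kind of check would typically run over the experiment tables in BigQuery rather than hard-coded counts.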

Key Takeaways

  1. Speed + Safety: Firebase Remote Config removed App Store bottlenecks, enabling fast learning cycles without compromising stability or user experience.
  2. Quant × Qual Synergy: Combining GA4 analytics with survey verbatims provided both the "what" and the "why" behind user behaviour, leading to sharper hypotheses and more impactful experiments.
  3. Process as Product: Implementing a disciplined design→dev→QA loop with clear handoffs and continuous feedback allowed us to sustain high-velocity testing whilst maintaining quality.
  4. Cross-Functional Orchestration: Fostering tight alignment and effective communication across UX, engineering, QA, and analytics teams was crucial in turning hypotheses into measurable wins and maintaining strategic focus throughout the project.
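The "speed + safety" pattern in takeaway 1 reduces, at its core, to a remotely served flag plus deterministic user bucketing, so variants can be toggled without an app release. The sketch below is hypothetical: it mimics the shape of a remote-config lookup in plain Python, and the `REMOTE_FLAGS` dict and function names are inventions, not the Firebase SDK.

```python
# Hypothetical sketch of remote-config-style variant gating.
# Real Firebase Remote Config SDKs differ; names here are made up.
import hashlib

REMOTE_FLAGS = {"sticky_add_to_bag": True}  # served remotely in reality


def bucket(user_id: str, experiment: str, variants: int = 2) -> int:
    """Deterministically assign a user to a variant bucket: the same
    user and experiment always hash to the same bucket, on any device."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % variants


def variant_for(user_id: str, experiment: str) -> str:
    # Fall back to control if the flag is missing or switched off,
    # so a config outage never breaks the shopping experience.
    if not REMOTE_FLAGS.get(experiment, False):
        return "control"
    return "variant" if bucket(user_id, experiment) == 1 else "control"


print(variant_for("user-123", "sticky_add_to_bag"))
```

The default-to-control fallback is what makes shipping fast safe: a misconfigured or unreachable flag degrades to the existing experience rather than an error.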




£8.8M Uplift from Mobile UX Optimisation

