Lessons from redesigning LinkedIn Search
Kumaresh Pattabiraman (@kumareshp), Senior Product Manager, LinkedIn
May 16, 2014
2012: The Search Page
• 5B queries a year
• One of the top visited pages on LinkedIn
Why fix it?
• Selling us short – Discoverability: LinkedIn has more to offer than just people search, but the other verticals don’t get discovered enough
• Inflexible – Not easy to iterate: each search vertical is built on a different stack, with little leverage across verticals
• Out of step with the site’s design cadence: search verticals look completely different from each other, and LinkedIn is doing a site-wide redesign
March 15, 2012
Product Review
New design: Unified Search
Product Goals
• Engagement from search (page views & actions driven from search)
• Searchers per vertical
• Dead-end searches
• Revenue from search
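To make these goals concrete, here is a minimal sketch of how such metrics could be computed from search logs. The event fields and the dead-end definition (a search with no downstream action) are illustrative assumptions, not LinkedIn’s actual instrumentation:

```python
# Hedged sketch: event fields and the "dead-end = no downstream action"
# definition are invented for illustration, not LinkedIn's instrumentation.
events = [  # one record per search
    {"member": 1, "vertical": "people", "actions": 3},
    {"member": 1, "vertical": "jobs",   "actions": 0},
    {"member": 2, "vertical": "people", "actions": 1},
]

# Engagement from search: page views & actions driven from search results
engagement = sum(e["actions"] for e in events)

# Searchers per vertical: distinct members who searched each vertical
by_vertical: dict[str, set[int]] = {}
for e in events:
    by_vertical.setdefault(e["vertical"], set()).add(e["member"])
searchers_per_vertical = {v: len(members) for v, members in by_vertical.items()}

# Dead-end searches: share of searches with no follow-up action
dead_end_rate = sum(e["actions"] == 0 for e in events) / len(events)

print(engagement, searchers_per_vertical, f"{dead_end_rate:.0%}")
# 4 {'people': 2, 'jobs': 1} 33%
```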
1 year later
March 25, 2013
5% Public Launch
Early Results
• Engagement from search (page views & actions)
• Searchers per vertical
• Dead-end searches
• Revenue from search
What changed?
100+ things
Change 1: The Vertical Selector
• Before Unified Search: the searcher controls what they are searching for (People, Jobs, Companies, Groups)
• Unified Search: we remove the vertical selector
Change 2: The Intent Detector
• Before Unified Search: the searcher specifies intent explicitly
• Unified Search: we algorithmically predict the searcher’s intent
Example: for the query “marketing”, which vertical does the searcher want?
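As a rough illustration of the idea, here is a toy intent predictor. The click counts, the prior, and the smoothing constant are invented for the example and say nothing about LinkedIn’s actual model:

```python
# Toy sketch (not LinkedIn's model): predict a query's vertical from
# historical click behavior, falling back to a global prior.

# Assumed historical click counts: query -> vertical -> clicks
CLICKS = {
    "marketing": {"people": 420, "jobs": 380, "companies": 90, "groups": 60},
}
# Assumed global prior over verticals, used for unseen queries
PRIOR = {"people": 0.70, "jobs": 0.15, "companies": 0.10, "groups": 0.05}

def predict_vertical(query: str) -> str:
    counts = CLICKS.get(query.lower())
    if counts is None:
        return max(PRIOR, key=PRIOR.get)  # unseen query: highest-prior vertical
    total = sum(counts.values())
    # Smooth observed clicks with the prior so sparse queries aren't brittle
    scores = {v: (counts.get(v, 0) + 10 * PRIOR[v]) / (total + 10) for v in PRIOR}
    return max(scores, key=scores.get)

print(predict_vertical("marketing"))  # "people" narrowly beats "jobs"
```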
Change 3: Buttons
• Before Unified Search: big blue action buttons
• Unified Search: small gray CTAs
Vote: Which change do you think impacted engagement the most?
• Vertical Selector
• Intent Detector
• Gray vs. blue buttons
To find out, we ran controlled A/B tests.
• Control: default search box with no vertical selector
• Treatments: vertical selector; ghost text changes
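For context, A/B assignment is typically done with a deterministic hash so each member sees a stable variant across sessions. This sketch shows the general mechanic; the experiment name, variant names, and ramp weights are invented, and this is not LinkedIn’s experimentation platform:

```python
# Illustrative sketch of deterministic A/B bucketing (assumed mechanics).
import hashlib

def assign_variant(member_id: int, experiment: str,
                   variants=("control", "vertical_selector", "ghost_text"),
                   weights=(0.90, 0.05, 0.05)) -> str:
    # Hash (experiment, member) so assignment is stable and independent
    # across experiments.
    digest = hashlib.sha256(f"{experiment}:{member_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if point <= cumulative:
            return variant
    return variants[-1]

print(assign_variant(12345, "unified-search-box"))
```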
We prioritized what to test, ruthlessly, and measured the isolated impact of major changes through metrics and A/B tests.
We optimized for speed of learning
• Quick experimental iterations designed to answer the most burning questions
• Design -> Spec -> Dev -> QA -> Prod in ~1 week
We identified and ramped the winning changes, and either iterated on or killed the losing ones.
3 months and ~30 experiments later
June 25, 2013: 100% en-US Launch
And we eventually rolled out Unified Search to 100% worldwide over the following 3 months…
Results
• Engagement from search (page views & actions)
• Searchers per vertical
• Dead-end searches
• Revenue from search
What did we learn?
#1: Have opportunity analysis drive goal setting
Was an X% increase in searchers per vertical a realistic goal? E.g.: how much search traffic can we realistically expect to redistribute from people search to the other verticals with Unified Search?
Opportunity unclear? Test the waters, quickly. Example of a test we ran within one dev quarter: structured suggestions to clarify the user’s vertical intent.
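A structured suggestion simply attaches a candidate vertical to the raw query so the searcher can disambiguate with one click. A toy sketch, where the phrasing and vertical list are assumptions:

```python
# Toy sketch of structured suggestions for an ambiguous query: one
# suggestion per vertical. Phrasing and vertical list are assumptions.
VERTICALS = ["People", "Jobs", "Companies", "Groups"]

def structured_suggestions(query: str) -> list[str]:
    return [f"{query} in {vertical}" for vertical in VERTICALS]

print(structured_suggestions("marketing"))
# ['marketing in People', 'marketing in Jobs', 'marketing in Companies', 'marketing in Groups']
```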
#2: Importance of controlled experimentation
• To understand the isolated impact of each major change (see the readout sketch after this list)
• Especially so when you are changing something that is working well
• Even when the combination of changes is a huge net win (so we know what led to the win)
• Often mistaken for going after “incremental” wins: disruptive changes can be executed incrementally and tested in a controlled fashion
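One common way to read out such a controlled test is a two-proportion z-test on the control and treatment conversion rates. The traffic and click numbers below are invented for illustration:

```python
# Illustrative readout of one controlled experiment (numbers invented):
# a two-sided, two-proportion z-test on click-through rates.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)       # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_b - p_a, z, p_value

# Hypothetical: control = blue buttons, treatment = gray buttons
lift, z, p = two_proportion_z(5200, 100_000, 5050, 100_000)
print(f"lift={lift:+.4f}, z={z:.2f}, p={p:.3f}")
```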
Google: Thousands of search experiments per year
Bing: Search quality ∝ experimental velocity
“You have to kiss a lot of frogs to find one prince. So how can you find your prince faster? By finding more frogs and kissing them faster and faster.”
– Mike Moran, Do It Wrong Quickly: How the Web Changes the Old Marketing Rules, 2007
#3: Agility in a crisis
Product launches after 1 year in development.
Metrics drop.
Panic sets in.
All hands on deck.
Huge number of (emotional) people involved.
Huge number of options.
Behind schedule on ramp.
The clock is ticking.
Often the time for drastic measures…
And yet, it is important to stay agile. We made controlled changes, executed quickly, and took rational decisions based on data.
Organizational alignment was critical to pull this off:
• Product/Design: micro-prioritization; mini-specs for experiments with clear hypotheses
• Web-dev/Apps: time-boxed efforts with limited scope (e.g., launch a test in a subset of locales or browsers)
• Relevance: practical, hand-tuned approaches
• Analytics: A/B dashboards and custom analysis
• QA: minimal QA automation and more manual checks until a test succeeds
• SRE/Ops: frequent deployments
3 Key Takeaways
• Analyze opportunity & test the waters early, quickly, cheaply
• Control your biggest changes to understand their isolated impact
• Stay agile when things go wrong
Thanks!
Questions/Comments? @kumareshp