Understanding how to include end-user feedback and behavioural data in the development of information discovery tools is vital for publishers looking to increase engagement and provide true value to subscribers.
Market analysis providers are waking up to the integral role customers play in shaping content strategy and delivery – particularly in relation to tools that aid efficient information discovery.
Discovery tools, such as taxonomy and category trees, workflow tools, and search, are designed to reduce time to discovery – a metric denoting the time it takes for readers to locate, gather, and re-use specific information.
So, where can market analysis providers find customer insight to improve these tools and reduce time to discovery?
- The qualitative route: driven by Customer Success (CS) activity. CS teams should regularly check in with customers, inform them of new developments (and future releases planned on your product roadmap), and demo new features to gauge feedback.
- The quantitative route: data is gathered using analytics tools which track user behaviour. Taking the time to unpack and analyse data reveals true subscriber behaviour and uncovers areas of your information discovery system requiring improvement.
A blend of ‘human’ customer success-generated insights and data-driven analysis of subscriber behaviour provides a clear picture of the steps needed to create an efficient information discovery system within your content delivery platform.
Read on as we examine:
- How to incorporate these two information sources into the development and configuration of your delivery system’s discovery tools
- What metrics to keep an eye on to minimise time to discovery
- The internal structures needed to ensure customer feedback plays a central role in discovery feature development
Understanding subscriber usage is key to developing efficient information discovery tools
5 steps to taxonomy mastery
Your category tree is the backbone of your content portfolio. Without it, your library would be an unnavigable jumble of reports and datasets, leaving subscribers with little option but to manually scroll through content and hope they come across the right title.
Taxonomies bring structure to content and improve the end-user experience, so their formation requires a significant level of feedback.
Andrew Woods, our in-house content expert at Content Catalyst, suggests that meaningful feedback only comes once subscribers are using your taxonomy in practice. When they’re using your site, you can see how they discover content and what’s hindering their information discovery journeys.
“If searches are culminating in subscribers not finding content that’s useful to them, this indicates that your category tree needs tweaking. You may also need to educate end-users to ensure they know how to navigate your category tree correctly”, said Andrew.
Andrew is the creator of the ‘5 steps to taxonomy mastery’ – a collaborative framework he recommends to all new customers of Content Catalyst when mapping out their category tree:
- First draft: create a first draft based on what you know about your subscribers, their expectations of your organisation, and how they interact with your content.
- Sense check: share with stakeholders and invite some of your customers to review it.
- Gather feedback: revise your categories based on feedback from internal stakeholders, including analysts.
- Soft launch: put your category tree live for a test period to gather feedback. Following this, you should analyse data embedded in your content delivery platform and conduct user research.
- Launch: amend taxonomy and go live. Book a second review point between 6 and 12 months after launch.
Key user metrics & behaviour to monitor:
- Number of enquiries sent to Customer Success (CS) team from lost/dissatisfied customers (Help Desk/customer service line)
- The average time it takes a user to locate a report (Usage analytics)
- Number of user sessions that ended with no report access (Usage analytics)
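As a sketch of how the second and third metrics above might be pulled from raw usage data, here is a minimal Python example over a hypothetical analytics export. The `Session` shape and the figures are illustrative assumptions, not the output of any specific analytics tool:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Session:
    """One subscriber visit, from a hypothetical analytics export."""
    seconds_to_first_report: Optional[float]  # None = no report was opened


# Illustrative data only
sessions = [
    Session(42.0),
    Session(180.0),
    Session(None),   # user left without accessing a report
    Session(65.0),
]

located = [s.seconds_to_first_report for s in sessions
           if s.seconds_to_first_report is not None]
avg_time_to_locate = sum(located) / len(located)
no_access_rate = (sum(1 for s in sessions
                      if s.seconds_to_first_report is None) / len(sessions))

print(f"Avg time to locate a report: {avg_time_to_locate:.0f}s")   # 96s
print(f"Sessions with no report access: {no_access_rate:.0%}")     # 25%
```

Tracked over time, a falling average and a shrinking no-access share are direct evidence that taxonomy changes are reducing time to discovery.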
In contrast to the era of flat PDF delivery, market analysis providers can no longer simply serve up useful content and expect satisfied subscribers. Content must also fit into a well-planned, thoroughly tested workflow process to provide true value.
When crafting a customer-centric workflow process, catch-up calls with your CS team are a crucial starting point. Use these calls to understand how subscribers access and repurpose content and where perceived inefficiencies lie.
If, for example, customers are struggling to create custom cuts of your reports to re-use in their PowerPoint presentations, this information should be fed back to your product development team, ready for implementation in later development cycles. If multiple accounts raise similar complaints, elevate the development’s urgency.
Careful analysis of usage data will also provide evidence of inefficiencies. Metrics like report reading time reveal whether your workflow tools allow readers to efficiently find information.
Your subscribers are busy, time-poor businesspeople. Internally, set a benchmark for the optimum time spent on a report. Then, with feedback from customers, identify the features causing delays and aim to fix these in later development cycles.
Implement a fluid workflow model – underpinned by intuitive workflow and information discovery tools – and your subscribers will complete their tasks in record time.
Key user metrics & behaviour to monitor:
- Average time spent reading reports – adjust depending on report length. This could be as simple an equation as 5 minutes of reading time per 1000 words. (Usage analytics)
- Average session length – once you implement the new workflow process, is there a marked reduction in time spent in-platform? (Usage analytics)
- Customer satisfaction compared to the previous workflow system. (Customer survey / feedback in CS catch up calls)
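The length-adjusted benchmark above can be expressed as a simple calculation. A minimal sketch, assuming the illustrative rate of 5 minutes per 1,000 words, a hypothetical report record shape, and a 2x tolerance (a judgement call, not a standard):

```python
def reading_benchmark_minutes(word_count: int,
                              minutes_per_1000_words: float = 5.0) -> float:
    """Benchmark reading time, scaled to report length."""
    return word_count / 1000 * minutes_per_1000_words


def flag_slow_reports(reports: list) -> list:
    """Return titles whose average reading time far exceeds the benchmark –
    a hint that workflow tools may be slowing readers down."""
    flagged = []
    for r in reports:
        benchmark = reading_benchmark_minutes(r["words"])
        if r["avg_minutes_read"] > 2 * benchmark:  # 2x tolerance is illustrative
            flagged.append(r["title"])
    return flagged


# Illustrative data only
reports = [
    {"title": "EV Market Outlook", "words": 4000, "avg_minutes_read": 55},      # benchmark 20 min
    {"title": "Battery Supply Chains", "words": 2000, "avg_minutes_read": 12},  # benchmark 10 min
]
print(flag_slow_reports(reports))  # only the first report breaches its tolerance
```

Reports flagged this way are candidates for the customer conversations described above: the data says where readers slow down; feedback says why.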
Underpinning any successful workflow process is powerful search. Search functionality is so highly valued that clients have cancelled entire global subscriptions when employees could not quickly and intuitively find the information they need. Subscriber feedback on search quality and the structuring of returns is therefore integral.
Search configuration and UX can lead you down a rabbit hole, as the value of filters, weighting, and exact terms/phrases is highly subjective.
Before launching new search UX/functionality, task key subscriber accounts with sense-checking the results a few example queries return – for example, the account’s most frequently run search. Identify the differences between the new returns and the old. Is the new structuring of returns more useful?
If your previous search functionality only searched by report title, rather than a granular sweeping of every word of your content portfolio, have confidence that the increased specificity will benefit your subscribers. If you’re fine-tuning or testing a new search configuration, be more cautious and receptive to feedback.
Once the new search is live, usage data is key to determining its effectiveness.
Key search metrics & behaviour to monitor:
- Number of searches returning zero results (Usage analytics)
- Number of reports clicked through to upon running a search (Usage analytics)
- Average user session length (Usage analytics)
- Number of enquiries sent to the CS team from lost customers (Help Desk/customer service line)
- Feedback from customer demos / example search tasks set to clients (CS team)
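As a sketch of how the first two search metrics might be computed from raw query logs, here is a minimal Python example. The log shape, queries, result counts, and click-through figures are all illustrative assumptions:

```python
from collections import Counter

# Hypothetical query-log rows: (query, number_of_results, reports_clicked)
query_log = [
    ("lithium demand forecast", 18, 2),
    ("cobalt pricing 2025", 0, 0),   # zero-result search: a possible content or indexing gap
    ("ev adoption europe", 7, 1),
    ("sodium-ion cells", 0, 0),
]

zero_result_rate = sum(1 for _, n, _ in query_log if n == 0) / len(query_log)
avg_clickthroughs = sum(c for _, _, c in query_log) / len(query_log)
gap_terms = Counter(q for q, n, _ in query_log if n == 0)

print(f"Zero-result searches: {zero_result_rate:.0%}")             # 50%
print(f"Avg reports clicked per search: {avg_clickthroughs:.2f}")  # 0.75
print("Terms to investigate:", list(gap_terms))
```

The zero-result terms are worth reviewing with your CS team: each one is either a gap in your content portfolio or a vocabulary mismatch your taxonomy and search weighting should absorb.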
Analyse and Listen
Like in any healthy relationship, communication is key. Listen carefully to your customers’ opinions and feedback. Combine this with careful analysis of user behaviour and your information discovery tools will soon significantly reduce that all-important metric: time to discovery.
Reduce time to discovery of business-critical information and you will soon see a return in improved engagement, renewals, and revenue.