One of the most valuable ways to be sure your recommendations are heard is to forecast the impact of your proposal.
Consider what is more likely to be heard:
“I think we should do X…”
“I think we should do X, and with a 2% increase in conversion, that would drive a $1MM increase in revenue.”
The benefits of modeling out the impact of your recommendations include:
- It forces you to think through your recommendation. Is this really going to drive revenue? If so, how? What are the behaviours that will change that will drive the growth?
- A solid revenue estimate will help you “sell” your idea
- Comparing the revenue impact estimate of a number of initiatives can help the business to prioritise
There are a few basic steps to putting together an impact estimate:
- Clarify your idea
- Detail how it will have an impact
- Collect any existing data that will help you model that impact
- Build your model, with the ability to adjust assumptions
- Using your current data, and assumed impact, calculate your revenue estimate
- Discuss your proposal with stakeholders and fine-tune the model and its assumptions
Example 1: Adding videos to an ecommerce product page
This model forecasts the revenue impact of adding videos to an ecommerce site’s product pages. This model makes a few assumptions about how this project will drive revenue:
- It assumes some product page visits will view a video, where those visits would not have previously engaged with photo details
- It assumes that conversion from product page to cart page will be improved because of users who were viewing photos being further convinced by video
- Note: This assumption could be more general, or more specific. In the model we have assumed that conversion will be better for users who view photos or videos. The model could also be simplified to assume a generic lift, without taking into account whether users view the video or click photos.
It does not assume there will be an impact on:
- Migration to the product pages (since users won’t even know there are videos until they get there)
- Conversion from cart to purchase
- Average Order Value
However, for the latter two, placeholders are included to allow the business to adjust them if there is a good reason to.
There are a lot of other levers that could be added, if appropriate:
- Increase in order size
- Increase in migration to the product page, if videos were widely advertised elsewhere on the site
As you can see, choosing which assumptions to adjust is a matter of thinking through the project and how it’s expected to affect behaviour (and, subsequently, revenue).
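To make the mechanics concrete, here is a minimal sketch of a product-page video model like the one described above. Every figure is a hypothetical placeholder, not a benchmark; a spreadsheet with the same adjustable cells works just as well.

```python
# Hypothetical revenue-impact sketch for adding product-page videos.
# Every figure below is an adjustable assumption, not real data.

monthly_product_page_visits = 500_000
engaged_share = 0.30          # visits that currently engage with photo details
engagement_lift = 0.05        # extra visits assumed to engage via video
engaged_conversion = 0.04     # product page -> cart for engaged visits
conversion_lift = 0.02        # assumed lift from video for engaged visits
cart_to_purchase = 0.50       # held constant (placeholder lever)
average_order_value = 80.00   # held constant (placeholder lever)

def monthly_revenue(share, conversion):
    """Revenue from engaged visits only; non-engaged visits are
    assumed unchanged, so they cancel out of the before/after delta."""
    engaged_visits = monthly_product_page_visits * share
    orders = engaged_visits * conversion * cart_to_purchase
    return orders * average_order_value

before = monthly_revenue(engaged_share, engaged_conversion)
after = monthly_revenue(engaged_share + engagement_lift,
                        engaged_conversion + conversion_lift)
print(f"Estimated monthly revenue impact: ${after - before:,.0f}")
```

Because each assumption is a named variable, stakeholders can change a single input and immediately see the revenue estimate re-calculate, exactly as they would with colour-coded cells in a spreadsheet.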
Example 2: Adding a new ad unit to the home page
This is a non-ecommerce example, for a website monetised via advertising. The recommendation is to add a third advertising unit to the home page, a large expanding unit above the nav.
The assumptions made are:
- The new ad unit will have high sell through and high CPM. This is because we are proposing a “high visibility” unit that we think can sell well.
- The existing Ad Unit 1 will suffer a small decrease in sell through, but retain its CPM
- The existing Ad Unit 2, as the cheaper ad unit, will not be affected as those advertisers would not invest in the new, expensive unit
There are of course other levers that could be adjusted:
- We could factor in the impact to click-through rate for the existing ads, and assume a decrease in CPM for both ads due to lower performance.
- We could take into account the impact on down-stream ad impressions, since the new ad unit generates clicks off site. When users click the ad, we lose revenue from the ads they would otherwise have seen later in their visit.
- We could, as a business, consider only selling the new ad unit half the time (to avoid such a high-visibility ad being “in users’ faces” all the time), and adjust the sell through rate down accordingly.
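The ad-unit model can be sketched the same way. The impression counts, CPMs and sell through rates below are purely illustrative assumptions, not real benchmarks:

```python
# Hypothetical model for adding a third, high-visibility home page ad unit.
# All impression counts, sell-through rates and CPMs are placeholders.

monthly_home_page_views = 2_000_000

def unit_revenue(impressions, sell_through, cpm):
    """Revenue for one ad unit: sold impressions / 1000 * CPM."""
    return impressions * sell_through / 1000 * cpm

current = (
    unit_revenue(monthly_home_page_views, 0.80, 12.00)    # Ad Unit 1
    + unit_revenue(monthly_home_page_views, 0.95, 4.00)   # Ad Unit 2
)
proposed = (
    unit_revenue(monthly_home_page_views, 0.70, 25.00)    # new expanding unit
    + unit_revenue(monthly_home_page_views, 0.75, 12.00)  # Unit 1: lower sell through, same CPM
    + unit_revenue(monthly_home_page_views, 0.95, 4.00)   # Unit 2: assumed unaffected
)
print(f"Estimated monthly uplift: ${proposed - current:,.0f}")
```

Adding a lever (say, a CPM decrease on the existing units due to lower click-through) is just one more adjustable argument to `unit_revenue`.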
Five Tips to Success
- Keep the model as simple as possible, while accounting for necessary assumptions and adjustments. The simpler the model, the easier it will be for stakeholders to follow your logic (a critical ingredient for their support!)
- Be clear on your assumptions. Why did you assume certain things? And why didn’t you assume others?
- Encourage stakeholder collaboration. You want your stakeholders to weigh in on what they think the impact can be. Getting them involved is key to getting them on board. Make it easy for them to adjust assumptions and have the model re-calculate. (A user experience tip: On the example models, you’ll see that I used colour coding: yellow fill with blue text means this is an “adjustable assumption.” Using that same formatting repeatedly will help train stakeholders how to easily adjust assumptions.)
- Be cautious. If in doubt, be conservative in your assumptions. If you’re not sure, consider providing a range – a conservative estimate and an aggressive estimate. E.g. With a 1% lift in conversion, we’ll see X, with a 10% lift we’ll see Y.
- Track your success. If a project gets implemented, compare the revenue generated to your model, and consider why the model was / was not in line with the final results. This will help fine-tune future models.
Bonus tip: Remember, this is an estimate. While the model may calculate “$1,927,382.11”, don’t confuse being precise with being accurate. When going back to the business, consider presenting “$1.8-2.0MM” as the final estimate.
What tips would you add?
Share your experiences in the comments!
As an analyst, your value is not just in the data you deliver, but in the insight and recommendations you can provide. But what is an analyst to do when those recommendations seem to fall on deaf ears?
1. Make sure they’re good
Too often, analysts’ “recommendations” are essentially just half-baked (or even downright meaningless) ideas. This is commonly due to poor process and expectation setting between the business and the analytics team. If the business expects “monthly recommendations”, analysts will feel pressured to just come up with something. But what ends up getting delivered is typically low value.
The best way to overcome this is to work with the business to clearly set expectations. Recommendations will not be delivered “on schedule”, but when there is a valuable recommendation to be made.
2. Make sure you mean it
Are you just throwing out ideas? Or do you truly believe they are an opportunity to drive revenue, or improve the experience? Product Managers have to stand by their recommendations, and be willing to answer to initiatives that don’t work. Make sure you would be willing to do the same!
3. Involve the business
Another good way to have your recommendations fall on deaf ears is if they 1) Aren’t in line with current goals / objectives; or 2) Have already been proposed (and possibly even discussed and dismissed!)
Before you just throw an idea out there, discuss it with the business. (We analysts are quick to fault the business for not involving us, but should remember this applies both ways!) This should be a collaborative process, involving both the business perspective and data to validate and quantify.
4. Let the data drive the recommendation
It is typically more powerful (and less political…!) to use language like, “The data suggests that…” rather than “I propose that…”
5. Consider your distribution method
The comments section of a dashboard or report is not the place for solid, well thought out recommendations. If the recommendation is valuable, reconsider your delivery methods. A short (or even informal) meeting or proposal is likely to get more attention than the footnote of a report.
6. Find the right receiver
Think strategically about who you present your idea to. The appropriate receiver depends on the idea, the organisation (and its politics…) and the personalities of the individuals! But don’t assume the only “right” person is at the top. Sometimes your manager, or a more hands-on, tactical stakeholder, may be better able to bring the recommendation into reality. Critically evaluate who the appropriate audience is, before proposing it to just anyone.
Keep in mind too: Depending on who your idea gets presented to, you should vary the method and level of detail you present. You wouldn’t expect your CMO to walk through every data point and assumption of your model! But your hands-on marketer might want to go through and discuss (and adjust!) each and every assumption.
7. Provide an estimate of the potential revenue impact
In terms of importance, this could easily be Tip #1. However, it’s also important enough for an entire post! Stay tuned …
What about you?
What have you found effective in delivering recommendations? Share your tips in the comments!
If I could give one piece of advice to an aspiring analyst, it would be this: Stop showing your ‘math’. A tendency towards ‘TMI deliverables’ is common, especially in newer analysts. However, while analysts typically do this in an attempt to demonstrate credibility (“See? I used all the right data and methods!”) they do so at the expense of actually being heard.
The Cardinal Rule: Less is More
Digital measurement is not ninth grade math class. So by default, analysts should refrain from providing too much detail about how they arrived at their conclusions and calculations.
What should you show?
Your conclusions / findings
Any assumptions you had to make along the way (where these impact the conclusions)
An estimated revenue impact
A very brief overview of your methodology (think: abstract in a science journal.) This should be enough for people to have necessary context, but not so much they could repeat your entire analysis step by step!
What shouldn’t you show?
Calculations or formulas used
Props / vars / custom variables / events used
That’s not to say that this stuff isn’t important to document. But don’t present it to stakeholders.
But Of Course: “It Depends”
Because there is never one rule that applies to all situations, there are additional considerations.
How much ‘math’ you show will also depend on:
The level of your audience
Your manager should get more detail. After all, if your work is wrong, they are ultimately responsible.
Executives will typically want far less detail.
Some individuals need to see more detail to be confident in relying upon your work. For example, your CFO may need to see more math than your Creative Director. Get to know your stakeholders over time, and take note of who may need a little extra background to be persuaded.
Note: We commonly hear the argument “But my stakeholders really want more details!” Keep in mind there is a difference between them hearing you out, and truly wanting it. To test this, try presenting without the minutiae (though, have it handy) and see whether they actually ask for it.
Are you confirming or refuting existing beliefs?
If what you are presenting comes as a surprise, you should be prepared to give more detail as to how you got to those findings.
In Your Back Pocket
Keep additional information handy, both for those who might want it, as well as to remind yourself of the details later. A tab in the back of a spreadsheet, an appendix in a presentation or a definitions/details section in a dashboard can all be a handy reference if the need arises now, or later.
[Credit: Thanks to Tim Wilson, Christopher Berry, Peter O’Neill and Tristan Bailey for the discussion on the topic!]
Last week, 7,000+ of my friends and I attended Adobe’s Summit 2014 in Salt Lake City. The overarching theme of the event was “the reinvention of marketing”, which got me thinking about how digital analytics professionals can continue to reinvent themselves and their skills.
Digital analytics is a rapidly evolving field, progressing swiftly from log files, to basic page tagging, to cross-device tracking. The “web analysts” of just a few years ago have progressed from pulling basic reports to advanced segmentation, optimisation, personalisation and modelling in R.
So as technology continues to develop, how can analysts and marketers stay up to date on their skills?
1. Attend trainings and conferences like Adobe Summit. These events are a great opportunity to learn how other companies are leveraging technologies, and to spark creative ideas. If you struggle to justify the budget, propose attending low-cost events like DAA Symposiums or our ACCELERATE, or consider submitting a speaking proposal to share your own insights (speaking normally earns you a free conference pass).
2. Read up! There is no shortage of blogs and articles that discuss new trends in digital. Try to carve out a small amount of time each day or week to read a few.
3. Network and discuss. Local events like DAA Symposiums, Web Analytics Wednesdays and Meet Ups are great places to meet people and discuss trends and challenges.
4. Join the social conversation. If you can’t attend local events (or, not as often as you would like) use social media as another source of inspiration and conversation. Twitter, LinkedIn groups or the new DAA forums are great places to start.
5. Online courses. Lots of vendors offer free webinars that can help you stay up to date with your skills. Or, consider taking a Coursera, Khan Academy or similar online course to learn something new.
6. Experiment. Playing can be learning! If you hear of a new tool, social channel or technology, try getting your hands on it to see how it works.
What other tips do you have for keeping skills fresh? Share them in the comments!
It seems impossible to believe that twelve months has passed already. But here I am, Salt Lake City-bound for another Adobe Digital Marketing Summit.
For the past couple of years, I have been lucky enough to be invited to Adobe Summit as a “Summit Insider.” Being a Summit Insider gives me a chance to not only enjoy the education, networking and entertainment at Summit, but also an opportunity to share the experience with those who might not be able to make it. I’m super excited to be back, so thanks to the Adobe team for inviting me!
What am I looking forward to?
Like a kid in a candy store, I eagerly perused the Summit Agenda and have carefully selected breakout sessions on topics like predictive analytics, social analytics, data communication and storytelling, and building cross-department co-operation and a culture of analytics.
And even though I am the totally clueless person who never knows the bands, I’m definitely looking forward to the Summit Bash and musical acts Vampire Weekend and Walk The Moon. (Don’t worry, I created a Spotify playlist to brush up on my “new cool music” knowledge.)
Come say hi!
Are you planning on attending Summit? Come say hi! I’ll be there with my fellow Summit Insiders, Travis Wright, Toby Bloomberg and Elisabeth Osmeloski, as well as my partners at Web Analytics Demystified.
Keep up to date
Don’t forget to follow #AdobeSummit on Twitter via the official Twitter account (@AdobeSummit) and your Summit Insiders.
In town a little early?
Come check out Un-Summit on Monday afternoon. Un-Summit is a great chance to catch up with friends before the conference craziness kicks off, and hear from some great speakers.
Unless you’ve been living under a rock, you have heard (and perhaps grown tired) of the buzzword “big data.” But in attempts to chase the “next shiny thing”, companies may focus too much on “big data” rather than the “right data.”
True, “big data” is absolutely “a thing.” There are certainly companies successfully crunching massive volumes of data to reveal actionable consumer insight. But there are (many) more that are buried in data, and wondering why they are endlessly digging when others have struck gold.
Unfortunately, “big data” discussions often lead to:
An assumption that more is better;
A tendency for companies to try to skip the natural maturation of analytics in their organisation, in an attempt to jump straight to “big data science.”
The value of data is in guiding business success, and that does not necessarily require massive volumes of data.
So when is big data of value?
But to succeed at a more foundational level, companies should first make sure the fundamentals are in place.
While all businesses should be preparing for increased use and volume of data in the coming years, it is far easier to chase and hoard more and more and more data than it is to derive value from the data that already exists. However, the latter will drive far greater business value in the long term, and set up the right foundation to grow into using big data effectively.
My presentation from the Digital Analytics Association San Francisco Symposium is now available on SlideShare:
What the ‘Quantified Self’ movement and really, really personal data mean for marketing, analytics and privacy
At the intersection of fitness, analytics and social media, a new trend of “self-quantification” is emerging. Devices and applications like Jawbone UP, Fitbit, Runkeeper, Foursquare and more make it possible for individuals to collect tremendous detail about their lives, creating a wealth of incredibly personal data. What does this intersection of “big data” and very small, very personal data teach us about the practice of analytics? And what cautions must marketers heed with respect to targeting and privacy in trying to seize upon this trend?
Thoughts/feedback? Share them in the comments!
I spent five years responsible for web analytics for a major ad-monetised content site, so I’m no stranger to the unique challenges of measuring a “content consumption” website. Unlike an eCommerce site (where there is a clearer “conversion event”), content sites have to struggle with how to measure nebulous concepts like “engagement.” It can be tempting to just fall back on measures like “time on site”, but these metrics have significant drawbacks. This post outlines those drawbacks, and proposes alternatives to better measure your content site.
So … what’s wrong with relying on time metrics?
1. Most business users don’t understand what they really mean
The majority of business users, and perhaps even newer analysts, may not understand the nuance of time calculations in the typical web analytics tool.
In short, time is calculated from subtracting two time stamps. For example:
Time on Page A = (Time Stamp of Page B) – (Time Stamp of Page A)
So time on page is calculated by subtracting what time you saw the next page from what time you saw the page in question. Time on site works similarly:
Time on Site = (Time Stamp of last call) – (Time Stamp of first call)
A call is often a page view, but could be any kind of call – an event, ecommerce transaction, etc.
Can you spot the issue here? What if a user doesn’t see a Page B, or only sends one call to your web analytics tool? In short: those users do not count in time calculations.
So why does that skew your data?
Let’s take a page, or website, with a 90% bounce rate. Time metrics are then based on only 10% of traffic. In other words, time metrics are based on traffic that has already self-selected as “more interested”, by virtue of the fact that it didn’t bounce!
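The skew is easy to demonstrate with toy data. The visit timestamps below are made up purely for illustration:

```python
# Toy illustration: each visit is a list of call timestamps (in seconds).
# Time on site = last timestamp - first timestamp, so single-call visits
# (bounces) contribute nothing and drop out of the average entirely.

visits = [
    [0],            # bounce: one call, nothing to subtract
    [0],            # bounce
    [0, 45, 120],   # engaged visit: 120s on site
    [0, 200],       # engaged visit: 200s on site
]

durations = [v[-1] - v[0] for v in visits if len(v) > 1]
avg_time_on_site = sum(durations) / len(durations)
print(f"Average time on site: {avg_time_on_site:.0f}s "
      f"(based on {len(durations)} of {len(visits)} visits)")
```

Half the visits here are bounces, yet the reported “average time on site” is calculated from the engaged half alone, which is exactly the self-selection problem described above.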
2. They are too heavily influenced by implementation and technical factors unrelated to user behaviour
The way your web analytics solution is implemented can have a significant impact on time metrics.
Consider these two implementations and sets of behaviour:
- I arrive on a website and click to expand a menu. This click is not tracked as an event. I then leave.
- I arrive on a website and click to expand a menu. This click is tracked as an event. I then leave.
In the first example, I only sent one call to analytics. I therefore count as a “bounce”, and my time on the website does not count in “Time on Site”. In the second example, I have two calls to analytics, one for the page view and one for the event. I no longer count as a bounce, and my time on the website counts as “Time on Site.” My behaviour is the same, but the website’s time metrics are different.
You have to truly understand your implementation, and the impact of changes made to it, before you can use time metrics.
However, it’s not even just your site’s implementation that can affect time metrics. Tabbed browsing – default behaviour for most browsers these days – can skew time, since a user who keeps a tab open will keep “ticking” until the session times out in 30 mins.
Even the time of day your customers choose to browse can also impact time on site, as many web analytics tools end visits automatically at midnight. This isn’t a problem for all demographics, but perhaps the TechCrunches and the Mashables of the world see a bigger impact due to “night owls”!
3. They are misleading
It’s easy to erroneously determine ‘good’ and ‘bad’ based on time on site. However, I may spend a lot of time on a website because I’m really interested in the content, but I can also spend a lot of time on a website because the navigation is terrible and I can’t find what I need. There is nothing about a time metric that tells you if the time spent was successful, yet companies too often consider “more time” to indicate a successful visit. Consider a support site: a short time spent on site, where the user immediately got the help they needed and left, is an incredibly successful visit, but this wouldn’t be reflected by relying on time measures.
So what should you use instead?
Rather than relying on “passive” measures to understand engagement with your website, consider how you can measure engagement via “active” measures – that is, measuring the user’s actions instead of time passing.
Some examples of “active” measures on a content site:
- Content page views per visit. A lot of my concerns regarding time measures also apply to “page views per visit” as a measure. (Did I consume lots of page views because I’m interested, or because I couldn’t find what I was looking for?) For a better “page views per visit” measure of engagement, track content page views, and calculate consumption of those per visit. This excludes navigational and more “administrative” pages, and reflects actual content consumption. You can also track what percentage of your traffic actually sees a true content page, vs. just navigational pages.
- Ad revenue per visit. While this is less a measure of “engagement”, businesses do like to get paid, so this is definitely an important measure for most content sites! It can often be difficult to measure via your analytics tool, since you need to take into account not only the page views, but what kind of ad the user saw, whether the space was sold or not, and what the CPM was. However, it’s okay to use informed estimates. For example: I saw 2 financial articles during my visit. We sell financial article pages at an average $10 CPM and have an estimated 80% sell through rate. My visit is therefore worth 2/1000 * $10 * 80% = 1.6 cents. This can be a much more helpful measure than “page views per visit”, since not all page views are created equal. Having insight into the content consumed and its value can help drive decisions like what to promote or share.
- Click-through rate to other articles. A lot of websites will include links to “related articles” or “you might also be interested in…”. Track clicks on these links and measure the click rate. This will tell you that users not only read an article, but were interested enough to click to read another.
- Number of shares or share rate. If sharing is considered important to your business, clearly highlight this call to action, and measure whether users share content, and what they share. Sharing is a much stronger indicator of engagement than simply viewing. (You won’t be able to track all shares, for example, copy-and-pasting URLs won’t be tracked, but tracking shares will still give you valuable information about content sharing trends.)
- Download rate. For example, downloading PDFs.
- Poll participation rate or other engaging activities.
- Video Play rate. Even better, track completion rate and drop-off points.
- Sign up and/or Follow on social.
- Account creation and sign in.
If you’re already doing a lot of the above, consider taking it a step further and calculating visit scores. For example, you may decide that each view of a content article is 1 point, a share is 5 points, a video start is 2 points and a video complete is 3 points. This allows you to calculate a total visit score, and analyse your traffic by “high” vs “low” scoring visitors. What sources bring high scoring visitors to the site? What content topics do they view more? This is more helpful than “1:32min time on site”!
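The visit-scoring idea above can be sketched in a few lines. The point values are the example weights from the text, and are assumptions a business would tune with its stakeholders:

```python
# Hypothetical visit-scoring scheme using the example point values
# from the text (1 for a content view, 5 for a share, etc.).

POINTS = {
    "content_view": 1,
    "share": 5,
    "video_start": 2,
    "video_complete": 3,
}

def visit_score(events):
    """Sum the points for every tracked action in a visit.
    Untracked or unknown actions score zero."""
    return sum(POINTS.get(event, 0) for event in events)

visit = ["content_view", "content_view", "video_start",
         "video_complete", "share"]
print(visit_score(visit))  # 1 + 1 + 2 + 3 + 5
```

Once every visit has a score, segmenting “high” vs. “low” scoring visitors by traffic source or content topic is a straightforward aggregation.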
By using these active measures of user behaviour, you will get better insight than through passive measures like time, which will enable better content optimisation and monetisation.
Is there anything else you would add to the list? What key measures do you use to understand content consumption and behaviour?
Recently I have been exploring the world of “self quantification”, using tools like Jawbone UP, Runkeeper, Withings and more to measure, well, myself. Living in a tech-y city like Boston, I’ve also had a chance to attend Quantified Self Meet Ups and discuss these topics with others.
In a recent post, I discussed the implications of a movement like self quantification for marketing and privacy. However, it’s easy for such conversations to stay fairly simple, without addressing the fact that privacy is not all or nothing: there are levels of privacy and individual permissions.
Let’s take self quantification as an example. On an on-going basis, the self quantification tools I use track:
- My every day movement (steps taken, as well as specific exercise activities)
- Additional details about running (distance, pace, elevation and more)
- Calorie intake and calorie burn
- Heart rate, both during exercise (via my Polar heart rate monitor or Tom Tom running watch) and standing resting heart rate (via my Withings scale)
- Weight, BMI and body fat
- Sleep (including duration and quality)
That’s a ton of data to create about myself every day!
Now think about the possible recipients of that data:
- Myself (private data)
- My social network (for example, my Jawbone UP “team” can see the majority of my data and comment or like activities, or I can share my running stats with my Facebook friends)
- Medical professionals like my primary care physician
- Corporations trying to market to me
It’s so easy to treat “privacy” as an all or nothing: I am either willing to share my data or I am not. However, consumers demand greater control over their privacy precisely because there are different things we’re willing to share with different groups, and even within a group, specific people or companies we’re willing to share with.
For example, I may be willing to share my data with my doctor, but not with corporations. Or I may be willing to share my data with Zappos and Nike, but not with other corporations. I may be willing to share my running routes with close friends but not my entire social network. I may be willing to share my data with researchers, but only if anonymised. I may be willing to share my activity and sleep data with my social network, but not my weight. (C’mon, I won’t even share that with the DMV!)
This isn’t a struggle just for self quantification data, but rather, a challenge the entire digital ecosystem is facing. The difficulty in dealing with privacy in our rapidly changing digital world is that we don’t just need to allow for a share/do not share model, but specific controls that address the nuance of privacy permissions. And the real challenge is managing to do so in a user-friendly way!
What should we do? While a comprehensive system to manage all digital privacy may be a ways off (if ever), companies can get ahead by at least allowing customisation of privacy settings for their own interactions with consumers. For example, allowing users to opt out of certain kinds of emails, not just “subscribe or unsubscribe”, or letting them flag which targeted display ads are unwelcome or irrelevant. (And after you’ve built those customisation options, ask your dad or your grandma to use them, to gauge complexity!)
Want to hear more? I have submitted to speak about these issues and more at SXSW next year. Voting closes EOD Sun 9/8, so if you’re interested in learning more, please vote for my session! http://bit.ly/mkiss-sxsw
It’s no secret that ours is a new and rapidly evolving industry. Skills are often acquired on-the-job, and training is critical to building a successful analytics practice and career.
That’s why I’m so excited about ACCELERATE in Columbus, OH. Even before I joined Demystified, ACCELERATE was my favourite event of the year. As my prodigious use of Twitter would suggest, I have been accused of having a short (140-character!) attention span, and ACCELERATE is the perfect format for delivering rapid-fire insights without even a split second to get bored. On top of that, ACCELERATE has hosted some fantastic speakers, many of whom don’t typically speak at analytics conferences, giving us a fresh perspective.
This year however, ACCELERATE raises the bar, with two days of training preceding the event. With specific trainings on testing & optimisation, social analytics, analysis practice and career development, Adobe SiteCatalyst, Discover, ReportBuilder and Advanced Google Analytics, there’s a training to help you grow, no matter your level.
I’m personally pretty excited to get a chance to discuss analysis and analytics career development. Here’s a little sneak peek of what you can expect to hear about in my analysis practice training:
- A guide to using analytics for performance measurement, whether it be on-going performance or for a specific initiative
- A guide to ad-hoc analysis for hypothesis testing
- Communication tips and tricks
- Best practices for communicating analytics results, including:
- Tailoring to different learning styles
- Tips for data visualisation
- What a career in analytics can look like, and how to choose your path
- How to successfully recruit for analytics
- How to grow and retain your analysts
And shhhhh: Don’t tell Eric, but I snuck you all a discount. Use the code blog-michele (or just click through that link) for 10% off ACCELERATE trainings and the event itself.
For more information, check out webanalyticsdemystified.com/accelerate/. Or, just go ahead and sign up now. You know you want to.