I Believe in Data*

* [Caveats to follow]

Articles about analytics tend to take two forms. One style exalts data as a panacea.

Another style implies that people put too much faith in the power of their data. That there are limitations to data. That data can’t tell you everything about how to build, run and optimize your business. I agree.  

My name is Michele Kiss, I am an analytics professional, and I don’t believe that data solves everything.

I believe in the appropriate use of data. I don’t believe that clickstream data after you redesign your site can tell you if people “like” it. I believe that there are many forms of data, that ultimately it is all just information, and that you need to use the right information for the right purpose. I do not believe in torturing a data set to extract an answer it is simply not equipped to provide.

I believe in the informed use of data. Data is only valuable when i) its definitions and ii) its context are clearly understood. I believe there is no such thing as pointing an analyst at a mountain of data and magically extracting insights. (Sorry, companies out there hiring data scientists with that overblown expectation.)

When a nurse tells your doctor “150” and “95”, those numbers are only helpful because your doctor i) knows that’s your blood pressure reading and ii) has a discussion with you about your diet, exercise, lifestyle, stress, family/genetic history and more. That data is helpful because its definition is clear, and your doctor has the right contextual information to interpret it.

I believe that the people and processes around your data determine success more than the data itself. A limited data set used appropriately in an organization will be far more successful than a massive data set with no structure around its use.

I believe data doesn’t have to be perfect to be useful, but I also believe you must understand why imperfections exist, and their effect. Perfect data can’t tell you everything, but outright bad data can absolutely lead you astray!

I believe that data is powerful, when the right data is used in the right way. I believe it is dangerously powerful when misused, either due to inexperience, misunderstanding or malice. But I don’t believe data is all powerful. I believe data is a critical part of how businesses should make decisions. But it is one part. If used correctly, it should guide, not drive.

Data can be incredibly valuable. Use it wisely and appropriately, along with all the tools available to your business.

“There are more things in heaven and earth, Horatio,
Than are dreamt of in your [big data set].”
- Analytics Shakespeare

Published on February 26, 2015 under Analysis, Data

The Right Use for Real Time Data

Vendors commonly pitch the need for “real-time” data and insights, without due consideration for the process, tools and support needed to act upon it. So when is real-time an advantage for an organization, and when does it serve as a distraction? And how should analysts respond to requests for real-time data and dashboards?

There are two main considerations in deciding when real-time data is of benefit to your organization.

1. The cadence at which you make changes

The frequency with which you look at data should depend on your organization’s ability to act upon it. (Keep in mind – this may differ across departments!)

For example, let’s say your website release schedule is every two weeks. If, no matter what your real-time data reveals, you can’t push out changes any faster than two weeks, then real-time data is likely to distract the organization.

Let’s say real-time data revealed an alarming downward trend. The organization is suddenly up in arms… but can’t fix it for another two weeks. And then… it rights itself naturally. It was a temporary blip. No action was taken, but the panic likely sidetracked strategic plans. In this case, real-time served as a distraction, not an asset.

However, your social media team may post content in the morning, and re-post in the afternoon. Since they are in a position to act quickly, and real-time data may impact their subsequent posts, it may provide a business advantage for that team.

When deciding whether real-time data is appropriate, discuss with stakeholders what changes would be made in response to observed shifts in the data, how quickly those changes could be made, and what infrastructure exists to make the changes.

2. The technology you have in place to leverage it

Businesses seldom have the human resources needed to act upon trends in real-time data. However, perhaps you have technologies in place to act quickly. Common examples include real-time optimization of advertising, testing and optimization of article headlines, triggered marketing messages (for example, shopping cart abandonment) and on-site (within-visit) personalization of content.

If you have technology in place that will actually leverage the real-time data, it can absolutely provide your organization an advantage. Technology can spot real-time trends and make tweaks far more quickly than a human being can, making this a great use of real-time information.

But if you have no such technology in place, and real-time is only so executives can see “how many people are checking out right now”, this is unlikely to prove successful for the business, and will draw resources away from making more valuable use of your full data set.

Consider specific, appropriate use cases

Real-time data is not “all or nothing.” There may be specific instances where it will be advantageous for your organization, even if it’s not appropriate for all uses.

A QA or troubleshooting report (otherwise known as the “Is the sky falling?!” report) can be an excellent use of real-time data. Such a report should look for site outages or issues, or breaks in analytics tracking, to allow quick detection and fixing of major problems. This may allow you to spot errors far sooner than you would during monthly reporting.

The real-time data can also inform automated alerts, to ensure you are notified of alarming shifts as soon as possible.
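As an illustration, here is a minimal sketch of such an alert, assuming a simple standard-deviation threshold against recent history. The metric, interval and threshold are all hypothetical – tune them to your own data and tools:

```python
# A minimal sketch of an automated real-time alert: compare the just-completed
# interval against recent history. All values are hypothetical.
import statistics

def check_for_alert(recent_counts, current_count, num_stdevs=3.0):
    """Return an alert message if the current interval deviates sharply
    from the recent baseline, otherwise None."""
    baseline = statistics.mean(recent_counts)
    spread = statistics.stdev(recent_counts)
    lower = baseline - num_stdevs * spread
    upper = baseline + num_stdevs * spread
    if not (lower <= current_count <= upper):
        return (f"ALERT: {current_count} outside expected range "
                f"({lower:.0f} to {upper:.0f})")
    return None

# Orders per five-minute interval over the last hour, then a sudden drop:
history = [48, 52, 50, 47, 53, 49, 51, 50, 46, 52, 50, 48]
print(check_for_alert(history, current_count=12))
```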

Definitions matter

When receiving a request for “more real-time” data, dashboards or analysis, be sure to clarify with stakeholders how they define “real-time.”

Real-time data can be defined as data appearing in your analytics tool within one minute of the event taking place. Vendors may consider anything within 15 minutes to be “real-time.” However, your business users may request “real-time” when all they really mean is “including today’s partial data.”

It’s also possible your stakeholders are looking for increased granularity of the data, rather than specifically real-time information. For example, perhaps the dashboards currently available to them are at a daily level, when they need access to hourly information for an upcoming launch.

Before you go down the rabbit hole of explaining where real-time is, and is not, valuable, make sure you understand exactly what data they are looking for, as “real time” may not mean the same thing to them as it does to you.

Published on February 9, 2015 under Best Practices, Data, Real Time Data

The Curse of Bounce Rate and ‘Easy’ Metrics (And Why We Can Do Better)

One of the benefits of having a number of friends in the analytics industry is the spirited (read: nerdy) debates we get into. In one such recent discussion, we went back and forth over the merits of “bounce rate.”

I am (often vehemently) against the use of “bounce rate.” However, when I stepped back, I realised you could summarise my argument against bounce rate quite simply:

Using metrics like ‘bounce rate’ is taking the easy way out, and we can do better

Bounce rate (at its simplest) is the percentage of visits that land on your site that don’t take a second action. (Don’t see a second page, don’t click the video on your home page, etc.)

My frustration: bounce rate is heavily dependent upon the way your site is built and the way your analytics implementation is configured. (Do you have events tracking that video? How are those events configured? Is your next page a true page load?) Thus, what exactly bounce rate represents varies from site to site. Couple that with the fact that most business users don’t understand these nuances, and you have a metric that is easily misused.

So let’s take a step back. What are we trying to answer with “bounce rate”?

Acquisition analysis (where bounce rate is commonly used) compares different traffic sources or landing pages to see which does the best job of “hooking” the user and getting them to take some next action. You are ultimately trying to decide what best drives the next step towards business success.

Let’s use that!

Instead of bounce rate, start from the conversion goals for your website. What do you want users to do? Did they do it? Instead of stopping at “bounce rate”, compare your channels or landing pages on how well they drive actual business conversions. These can be early-stage (micro) conversions, like viewing pricing or more information, or final conversions, like a lead or a purchase.

So, what is better than bounce rate?

  • Did they view more information or pricing? Download a brochure?

  • Did they navigate to the form? Submit the form?

  • Did they view product details? Add to cart? Add the item to a wish list?

  • Did they click related content? Share the article?

Using any of these will give you better insight into the quality of traffic or the landing pages you’re using, but in a way that truly considers your business goals.
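As a hypothetical illustration, here is what such a comparison might look like with visit data exported to a table. The column names and figures are invented; the point is ranking landing pages by conversion rate rather than bounce rate:

```python
# A hypothetical sketch: compare landing pages on micro- and macro-conversion
# rates instead of bounce rate. Columns and figures are illustrative only.
import pandas as pd

visits = pd.DataFrame({
    "landing_page":   ["/home", "/home", "/pricing", "/pricing", "/blog", "/blog"],
    "channel":        ["email", "search", "email", "search", "email", "search"],
    "visits":         [1200, 3400, 800, 1500, 2200, 900],
    "viewed_pricing": [180, 400, 800, 1500, 110, 40],   # micro-conversion
    "submitted_lead": [36, 85, 96, 210, 11, 4],         # macro-conversion
})

summary = (visits
           .groupby("landing_page")[["visits", "viewed_pricing", "submitted_lead"]]
           .sum())
summary["pricing_rate"] = summary["viewed_pricing"] / summary["visits"]
summary["lead_rate"] = summary["submitted_lead"] / summary["visits"]
print(summary.sort_values("lead_rate", ascending=False))
```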

But let’s not stop there… what other “easy metrics” do we analysts fall back on?

What about impressions?

I frequently see impressions used as a measure of “awareness.” My partner, Tim Wilson, has already written a pretty persuasive rant on awareness and impressions that is well worth reading! I’m not going to rehash it here. However, the crux of it is:

Impressions aren’t necessarily ‘bad’ – we can just do a better job of measuring awareness.

So what’s better than impressions?

  • At least narrow down to viewable impressions. If we are honest with ourselves, an impression below the fold that the user doesn’t even see does not affect their awareness of your brand!

  • Even measuring clicks or click-through is a step up, since the user at least took some action that tells you they truly “saw” your ad – enough to engage with it.

  • A number of vendors provide measures of true awareness lift from exposure to ad impressions, by withholding ads from a control group and measuring the difference. This is what you truly want to understand!
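The arithmetic behind such exposed-versus-control studies is simple enough to sketch. The survey percentages below are invented for illustration:

```python
# A minimal sketch of awareness lift measured against a withheld control group.
# Figures are illustrative, not from any real campaign.
exposed_aware = 0.262  # share of surveyed users exposed to the ad who recall the brand
control_aware = 0.215  # share of the withheld control group who recall the brand

absolute_lift = exposed_aware - control_aware
relative_lift = absolute_lift / control_aware
print(f"Absolute lift: {absolute_lift:.1%}, relative lift: {relative_lift:.1%}")
# -> Absolute lift: 4.7%, relative lift: 21.9%
```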

What about page views per visit or time on site?

Page views per visit and time on site are commonly used on content sites as measures of “engagement.” However (as I have ranted before) page views or time can be high if a user is highly engaged – but they can also be high if they’re lost on the site!

So what’s better than just measuring time or page views?

So why do we do this?

In short: Because it’s easy. Metrics like bounce rate, page views, time on site and impressions are basic, readily available data points provided in standard reports. They’re right there when you load a report! They are not inherently ‘bad’. They do have some appropriate use cases, and are certainly better than nothing, in the absence of richer data.

However, analysis is most valuable when it addresses how your actions affect your business goals. To do that, you want to focus on those business goals – not some generic data point that vendors include by default.

Thoughts? Leave them in the comments! 

Published on January 13, 2015 under Analysis, Best Practices

Three Foundational Tips to Successfully Recruit in Analytics

Hiring in the competitive analytics industry is no easy feat. In most organisations, it can be hard enough to get headcount – let alone actually find the right person! These three foundational tips are drawn from successful hiring processes in a variety of verticals and organisations.

1. Be honest about what you need

This includes being honest in the job description, as well as when you talk to candidates. Be clear about what the person will actually do and what your needs are – not what you wish they would get to do!

I have seen highly qualified candidates promised exciting work and the chance to build a team, only to find out “Director” was an in-name-only title, and in reality, they were nothing more than a glorified reporting monkey. Unsurprisingly, these hires last just long enough to line up a better opportunity, and leave with a bad taste in their mouth (and a guarantee that they would never recommend the company to anyone.)

2. Separate ‘nice to have’s from ‘must have’s

A job description is not your wishlist for Santa, and unicorns don’t exist. You are not going to find someone with twenty years of Adobe Analytics experience and a PhD in Statistics who is also a JavaScript expert (and won a gold medal for basket weaving!) This may sound like a ridiculous example, but so are most of the supposed “requirements” for analytics roles these days.

Start by detailing the bare minimum skills someone would need to have to be effective in the role, and focus the role to address your greatest need. (Yes, I understand you “may not get another req for years!” But by refusing to prioritise, you guarantee that this req will 1) Take forever to fill, and 2) End up being filled by someone who may meet some of your requirements, but perhaps not your most critical!) Do you need someone technical? More business oriented? Devoted to testing? (Resist the urge to throw in the kitchen sink.)

Keep in mind, if candidates have other skills that make them desirable, they will mention them during the interview process, and you can then factor them into your hiring decision.

Focusing on your most pressing needs will also make sure other interviewers besides yourself clearly understand what is necessary to succeed in the role. There is nothing worse than having another interviewer provide poor feedback about a candidate you love because “They didn’t know R” – when that wasn’t something you truly needed!

3. Focus on relationships, not recruiting

Managers who hire well understand they are always recruiting. While you may not have an active req open, you should continue building relationships with others in the industry. This will allow you to move more quickly, with some great candidates in mind, when the time comes.

Managers who do this well are constantly on the lookout for, and evaluating, people they meet for a potential hiring fit. They take the time to catch up with contacts from time to time, whether it’s a quick phone call to check in, or catching up for lunch. They also openly discuss the possibility of one day working together! Be clear that you’re not hiring right now (you don’t want to lead anyone on) but talk through whether there’s even a fit in terms of skills and interests on both sides.

On the flip side, managers who struggle here tend to blow off connections until they need something (for example, when they’re actively hiring).

What do you think?

Please leave your thoughts or questions in the comments!

Published on December 1, 2014 under Digital Analytics, Management, Org and Culture, Recruiting

Five Proven Tips for Managing Analysts Like a Pro

In the analytics industry, it is common to progress through the ‘ranks’ from analyst to managing a team. Unfortunately, many companies do not provide much in the way of support or management training to help these new managers learn how to work effectively with their team.

Improving your people management skills is no small task. It takes years, and you are never “done.” Here are just a few small tips, picked up over the years from some of my own managers:

1. Be a leader, not a boss. If you didn’t have the official title, would your team still come to you for assistance or guidance? Focus your efforts on being considered a leader, rather than just someone’s “supervisor” on an org chart.

2. Words matter. Using words like “subordinates” or descriptions like “Jane works for me” endears you to no one. The best managers don’t need to speak of team members as if they are below them. People look up to good leaders because there’s something to see, not because they’ve been pushed down.

3. We, not I. Many analytics managers are still in dual player-coach roles, meaning they still actively work on projects while managing a team. But when you discuss the team’s work and achievements, a little “we” can go a long way. Think about the difference in perception when you say “We have been working on something really exciting” or “The team has been hard at work” versus “I have been working on X.” Even if the work you’re referencing is entirely your own project, give the team credit. A team attitude is contagious, and your example will help team members follow suit.

4. Use your experience to your team’s advantage. Analytics can be a complex field. While it is often resource constraints that keep managers active in day-to-day analytics tasks, most analysts enjoy the work and don’t want to be fully removed from it to become a pure people-manager. Use this to your team’s advantage! Keeping your hands dirty helps you understand the challenges your team faces, and keeps you realistic about what is reasonable when negotiating with stakeholders.

5. Share the credit, take the blame. With leadership comes an obligation to share praise with your team, and take the rap when things go wrong. If you’re not willing to do this, don’t take on a leadership role. It’s that simple. Were mistakes made in an analysis? Data integrity issues, or data loss? Being responsible for a team means having ultimate oversight, and being accountable when that fails.

To overcome a mistake without throwing your team under the bus, explain to affected parties:

  • That an error occurred, and (generally) what it was
  • The consequences for the information shared previously (for example, should they throw out all previous findings?)
  • Where the breakdown in process was
  • How you’ve already addressed the process failure to ensure it doesn’t happen again

(None of this requires mentioning specific individuals!)

Treat it as a learning opportunity, and encourage your team to do the same. Work with team members privately to enhance necessary skills, and put processes in place to ensure it doesn’t happen again.

BONUS! Aim to be rendered obsolete. Good leaders train and guide their team until they themselves are no longer needed. This is great news for your own career: it frees you up to take on a new challenge!

There are a million books and courses on leadership out there, but these are a few of my favourite lessons from some of the best leaders I’ve ever worked for. What are yours? Please share in the comments!

Published on November 17, 2014 under Management

The Downfall of Tesco and the Omniscience of Analytics

Yesterday, an article in the Harvard Business Review provided food for thought for the analytics industry. In Tesco’s Downfall Is a Warning to Data-Driven Retailers, author Michael Schrage ponders how a darling of the “analytics as a competitive advantage” stories, British retailer Tesco, failed so spectacularly – despite a wealth of data and customer insight.

I make no claims to a completely unbiased opinion (I am, after all, in the analytics space.) However, from my vantage point, the true warning of Tesco lies in the unrealistic expectation (or, dare I say, hype) that ‘big data’ and predictive analytics can think for us.

It is all too common for companies to expect analytics to give them the answers, rather than to provide the supporting material with which to make decisions. Analytics cannot help you if you are not asking the right questions. After all, a compass can tell you where north is, but not whether you should be heading south. It is the reason we at Web Analytics Demystified prefer to think about being data informed, not data driven. Being ‘data driven’ removes the human responsibility to ask the right questions of, and take the right actions in response to, your data.

Ultimately, successful business decisions are an elusive combination of art and science. Tesco may have had the greatest analytics capabilities in the world, but without the business sense to critically assess and appropriately act upon the data, those capabilities count for little. That is the warning: treating analytics as omniscient, rather than as one part of your business ‘tool box’, sets it up to fail.

What do you think? Is Tesco’s downfall a failure of analytics? Leave your thoughts in the comments. 

Published on October 29, 2014 under "Big Data"

Wearable Tech, Quantified Self & Really Personal Data: eMetrics 2014

This week I had the pleasure of speaking at eMetrics Boston about a recent pet project of mine: what wearable and fitness technology (including consumer collection and use of data) means for analysts, marketers and privacy.

First, a little background… In April 2013, I was having a lot of trouble with sleep, so I purchased a Jawbone UP to better understand how I was sleeping, and make improvements. This quickly became a broader exploration of the world of fitness and wearable tech, as I dug into my data via devices and apps like Fitbit Force, Garmin Vivosmart, Whistle, Withings, Runkeeper, Strava, Map My Fitness, My Fitness Pal and Tom Tom Multisport. I leveraged IFTTT to connect this data, output raw data and even link to devices like Nest.

[Image: the quantified-self ecosystem of devices and apps]

In the course of doing so, I noticed some striking similarities between the practice of “self-quantification” and the practice of digital analytics, and started to think about 1) what opportunities these devices afford to marketers, and 2) what considerations and cautions we should be aware of from a privacy perspective.

You can check out the presentation on Prezi if you are interested.


I would love to hear any thoughts, questions, or your own experiences in the comments!

Published on October 10, 2014 under Conferences, Presentations, Self-Quantification

Got a burning Digital Analytics question? #AskDemystified before next week’s ACCELERATE!

Since ACCELERATE started in 2011, the Partners at Web Analytics Demystified have kept our ‘thinking caps’ on, considering what else we could offer to make the content more helpful for attendees and the community at large.

This year, we are introducing the opportunity for the community to ask us anything. The last session of the day will address your questions, so send them our way by tweeting using the #AskDemystified hashtag. (Not a ‘Twitter-er’? Email your questions to AskDemystified@webanalyticsdemystified.com.)

What’s more, the best question we receive will win the opportunity to attend any upcoming Web Analytics Demystified training you want, for free. What are you waiting for?


About ACCELERATE: Join us Thursday, 18 September in Atlanta, GA for ACCELERATE. You’ll hear from speakers at brands like Google, Nestle, Home Depot and Lenovo, covering everything from strategy to implementation, analysis and optimization. Our twenty-minute “Ten Tips” format leaves no time for boredom to set in, and the low $99 price tag is unheard of in the analytics industry. Register now!

 

Published on September 8, 2014 under ACCELERATE

How to Deliver Better Recommendations: Forecast the Impact!

One of the most valuable ways to be sure your recommendations are heard is to forecast the impact of your proposal.

Consider what is more likely to be heard:

“I think we should do X…”

vs

“I think we should do X, and with a 2% increase in conversion, that would drive a $1MM increase in revenue”

The benefits of modeling out the impact of your recommendations include:

  1. It forces you to think through your recommendation. Is this really going to drive revenue? If so, how? Which behaviours will change, and how will they drive the growth?
  2. A solid revenue estimate will help you “sell” your idea
  3. Comparing the revenue impact estimate of a number of initiatives can help the business to prioritise

There are a few basic steps to putting together an impact estimate:

  • Clarify your idea
  • Detail how it will have an impact
  • Collect any existing data that will help you model that impact
  • Build your model, with the ability to adjust assumptions
  • Using your current data, and assumed impact, calculate your revenue estimate
  • Discuss your proposal with stakeholders and fine-tune the model and its assumptions

Example 1: Adding videos to an ecommerce product page

Sample Revenue Model: Videos on the Product Page


This model forecasts the revenue impact of adding videos to an ecommerce site’s product pages. It makes a few assumptions about how the project will drive revenue:

  1. It assumes some product page visits will view a video, including visits that would not previously have engaged with photo details
  2. It assumes that conversion from product page to cart page will improve, because users who were viewing photos will be further convinced by video
    • Note: This assumption could be more general, or more specific. In the model we have assumed that conversion will be better for users who view photos or videos. The model could also simplify, and assume a generic lift, without taking into account whether users view the video or click photos.

It does not assume there will be an impact on:

  1. Migration to the product pages (since users won’t even know there are videos until they get there)
  2. Conversion from cart to purchase
  3. Average Order Value

However, for #2 and #3, placeholders are there to allow the business to adjust those if there is a good reason to.

There are a lot of other levers that could be added, if appropriate:

  • Increase in order size
  • Cross-sell
  • Increase in migration to the product page, if videos were widely advertised elsewhere on the site

So you will see that choosing which assumptions to adjust is a matter of thinking through the project and how it is expected to affect behaviour (and, subsequently, revenue).
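To make that concrete, here is a stripped-down sketch of a model along these lines. Every input is an illustrative “adjustable assumption”, not a figure from the sample model above:

```python
# A simplified sketch of the product-page video model described above.
# All inputs are illustrative, adjustable assumptions.

product_page_visits = 100_000  # monthly visits to product pages
conv_not_engaged = 0.05        # product page -> cart conversion, non-engaged visits
cart_to_purchase = 0.40        # held constant (not assumed to change)
avg_order_value = 80.00        # held constant (not assumed to change)

def monthly_revenue(engaged_share, conv_engaged):
    """Revenue from visits that engage with photos/videos vs. those that don't."""
    engaged = product_page_visits * engaged_share
    others = product_page_visits - engaged
    carts = engaged * conv_engaged + others * conv_not_engaged
    return carts * cart_to_purchase * avg_order_value

# Baseline: 30% of visits engage with photos and convert at 12%.
baseline = monthly_revenue(engaged_share=0.30, conv_engaged=0.12)
# With video: more visits engage (photos OR video), engaged visits convert better.
with_video = monthly_revenue(engaged_share=0.38, conv_engaged=0.13)
print(f"Baseline: ${baseline:,.0f}  With video: ${with_video:,.0f}  "
      f"Lift: ${with_video - baseline:,.0f}")
```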

Example 2: Adding a new ad unit to the home page

Sample Revenue Model: Ad Unit on Home Page


This is a non-ecommerce example, for a website monetised via advertising. The recommendation is to add a third advertising unit to the home page, a large expanding unit above the nav.

The assumptions made are:

  1. The new ad unit will have a high sell-through rate and a high CPM, because we are proposing a “high visibility” unit that we think can sell well
  2. The existing Ad Unit 1 will suffer a small decrease in sell-through, but retain its CPM
  3. The existing Ad Unit 2, as the cheaper unit, will not be affected, since its advertisers would not invest in the new, more expensive unit

There are of course other levers that could be adjusted:

  • We could factor in the impact on click-through rate for the existing ads, and assume a decrease in CPM for both due to lower performance.
  • We could take into account the impact on downstream ad impressions: when users click the new ad unit and leave the site, we lose revenue from the ads they would otherwise have seen later in their visit.
  • We could, as a business, decide to sell the new ad unit only half the time (to avoid such a high-visibility ad being “in users’ faces” constantly), and adjust the sell-through rate down accordingly.
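To put numbers to this example too, here is a similarly stripped-down sketch. Sell-through rates and CPMs are illustrative placeholders:

```python
# A simplified sketch of the home page ad unit model described above.
# All rates and CPMs are illustrative, adjustable assumptions.

home_page_views = 2_000_000  # monthly home page impressions available per unit

def unit_revenue(sell_through, cpm):
    """Revenue = impressions actually sold x CPM (price per 1,000 impressions)."""
    return home_page_views * sell_through * cpm / 1000

# Current state: two existing units.
current = unit_revenue(0.70, 8.00) + unit_revenue(0.90, 3.00)

# Proposed: new high-visibility unit (assumption 1), small sell-through hit
# to Unit 1 (assumption 2), Unit 2 unaffected (assumption 3).
proposed = (unit_revenue(0.60, 20.00)    # new expanding unit above the nav
            + unit_revenue(0.65, 8.00)   # Unit 1: lower sell-through, same CPM
            + unit_revenue(0.90, 3.00))  # Unit 2: unchanged

print(f"Current: ${current:,.0f}  Proposed: ${proposed:,.0f}  "
      f"Lift: ${proposed - current:,.0f}")
```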

Five Tips to Success

  1. Keep the model as simple as possible, while accounting for necessary assumptions and adjustments. The simpler the model, the easier it will be for stakeholders to follow your logic (a critical ingredient for their support!)
  2. Be clear on your assumptions. Why did you assume certain things? And why didn’t you assume others?
  3. Encourage stakeholder collaboration. You want your stakeholders to weigh in on what they think the impact can be. Getting them involved is key to getting them on board. Make it easy for them to adjust assumptions and have the model re-calculate. (A user experience tip: On the example models, you’ll see that I used colour coding: yellow fill with blue text means this is an “adjustable assumption.” Using that same formatting repeatedly will help train stakeholders how to easily adjust assumptions.)
  4. Be cautious. If in doubt, be conservative in your assumptions. If you’re not sure, consider providing a range – a conservative estimate and an aggressive estimate. E.g. With a 1% lift in conversion, we’ll see X, with a 10% lift we’ll see Y.
  5. Track your success. If a project gets implemented, compare the revenue generated to your model, and consider why the model was / was not in line with the final results. This will help fine-tune future models.

Bonus tip: Remember, this is an estimate. While the model may calculate “$1,927,382.11”, don’t confuse being precise with being accurate. When going back to the business, consider presenting “$1.8–2.0MM” as the final estimate.

What tips would you add?

Share your experiences in the comments!

Published on July 14, 2014 under Analysis, Digital Analytics

7 Tips For Delivering Better Analytics Recommendations

As an analyst, your value is not just in the data you deliver, but in the insight and recommendations you can provide. But what is an analyst to do when those recommendations seem to fall on deaf ears?

1. Make sure they’re good

Too often, analysts’ “recommendations” are essentially just half-baked (or even downright meaningless) ideas. This is commonly due to poor process and expectation-setting between the business and the analytics team. If the business expects “monthly recommendations”, analysts will feel pressured to just come up with something. But what ends up getting delivered is typically low value.

The best way to overcome this is to work with the business to clearly set expectations. Recommendations will not be delivered “on schedule”, but when there is a valuable recommendation to be made.

2. Make sure you mean it

Are you just throwing out ideas? Or do you truly believe they are an opportunity to drive revenue, or improve the experience? Product Managers have to stand by their recommendations, and be willing to answer for initiatives that don’t work. Make sure you would be willing to do the same!

3. Involve the business

Another sure way to have your recommendations fall on deaf ears: propose ideas that 1) Aren’t in line with current goals / objectives; or 2) Have already been proposed (and possibly even discussed and dismissed!)

Before you just throw an idea out there, discuss it with the business. (We analysts are quick to fault the business for not involving us, but should remember this applies both ways!) This should be a collaborative process, involving both the business perspective and data to validate and quantify.

4. Let the data drive the recommendation

It is typically more powerful (and less political…!) to use language like, “The data suggests that…” rather than “I propose that…”

5. Consider your distribution method

The comments section of a dashboard or report is not the place for solid, well thought out recommendations. If the recommendation is valuable, reconsider your delivery methods. A short (or even informal) meeting or proposal is likely to get more attention than the footnote of a report.

6. Find the right receiver

Think strategically about who you present your idea to. The appropriate receiver depends on the idea, the organisation (and its politics…) and the personalities of the individuals! But don’t assume the only “right” person is at the top. Sometimes your manager, or a more hands-on, tactical stakeholder, may be better able to bring the recommendation into reality. Critically evaluate who the appropriate audience is before proposing your idea to just anyone.

Keep in mind too: Depending on who your idea gets presented to, you should vary the method and level of detail you present. You wouldn’t expect your CMO to walk through every data point and assumption of your model! But your hands-on marketer might want to go through and discuss (and adjust!) each and every assumption.

7. Provide an estimate of the potential revenue impact

In terms of importance, this could easily be Tip #1. However, it’s also important enough for an entire post! Stay tuned …

What about you?

What have you found effective in delivering recommendations? Share your tips in the comments!

Published on July 8, 2014 under Analysis, Best Practices