In designing surveys, one always has to identify the goals of the survey and then strike the right balance between “measurement” and “discovery”. It is not a question of one being good and the other bad; rather, the task at hand is to make sure that the goals of the study are aligned with the questions in the survey.


Measurement is the simplest to explain and the most commonly used. It is an attempt to quantify the occurrence of an entity in terms of choices that you already know. Measurement manifests itself as questions that ask respondents to rank the given options, provide a numerical rating for a presented statement, select all the choices that apply, or enter a number in response to a question. Some classic measurement questions are -

  • How satisfied were you with your recent purchase? Please provide a numeric rating with 0 meaning “Not satisfied at all” and 10 meaning “Extremely satisfied”.
  • Which class of travel did you choose for your last trip? Choose one of the following - “Economy”, “Business” or “First”.
  • What is important to you in choosing a mobile phone service? Choose all that apply - Handset choices, Price, Network reliability, Data services.

What all the above questions have in common is that respondents are asked to select one or more responses from the choices presented to them. This makes it easy for respondents to get through the question, as their effort is limited to selecting the relevant choice(s) and clicking a button on the screen.

The data collection for these questions is also straightforward. You end up capturing structured responses, which are easy to aggregate, analyze, and present in the form of charts and graphs.
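As a minimal sketch of why structured responses are easy to work with (the answer data here is made up for illustration), every response maps to a known choice, so aggregation is a one-step tally:

```python
from collections import Counter

# Hypothetical multiple-choice responses to
# "Which class of travel did you choose for your last trip?"
responses = ["Economy", "Economy", "Business", "First", "Economy", "Business"]

# Structured data aggregates in one step: count each known choice.
counts = Counter(responses)
total = len(responses)

# Percentages follow directly and feed straight into charts and graphs.
percentages = {choice: round(100 * n / total, 1) for choice, n in counts.items()}
print(percentages)  # {'Economy': 50.0, 'Business': 33.3, 'First': 16.7}
```

No cleaning, coding, or interpretation step is needed; that is precisely the appeal of measurement questions.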

A well-designed measurement question has an unequivocal answer from the point of view of the respondents. The choices presented are distinct and do not overlap in scope. And most importantly, the choices cover all the possible answers that respondents in the audience pool may have.


Discovery, on the other hand, is trying to find out what you don’t know. Traditionally, this has been the realm of market research, focus groups, and one-on-one interviews conducted by product managers and behavioral scientists. You need to be able to listen to the respondents, understand what they are saying, and hopefully have a meaningful follow-up conversation to discover the themes, issues, or concerns that prompted the response.

Some topics well suited for discovery are -

  • Why did you switch from your previous mobile carrier?
  • This is a draft of the proposed features in the new version of the product. Tell us what you think.
  • What feedback would you like to anonymously provide to John, so that he can continue to develop as a software engineer?

Common among these questions is the quest for the unknown. Though a good deal of context about the respondents is known from prior knowledge or from answers to earlier Measurement questions, these questions are aimed at finding the root cause. Questions targeted at discovery most commonly take the form of open-ended text questions. Insight Magnet does support a few advanced question types such as Ranking, Weighted Ranking, Snowball, and Conjoint Analysis that attempt to quantify a portion of the discovery.

Open-text responses fall into the unstructured data category and can be notoriously difficult to analyze unless you have a tool like Insight Magnet at your disposal. This gets even more complex if you are running a multilingual survey. People resort to tabulating in Excel, writing macros, or reading a subset of the answers. Almost always, the analysis is done by analysts, and executives get to see a summarized PowerPoint slide that is arguably the “biased” opinion of the analysts and may or may not reflect the population.

For some situations, the challenges in analyzing the data outweigh the benefits of a user-centric survey, and experts often recommend avoiding or minimizing open-ended questions. This has a direct impact on eliminating or reducing the amount of discovery you can accomplish in the survey.

Aligning study goals to survey design

Measurement and Discovery are two criteria that need to be weighed as you design the survey. Ideally the design should aim at accomplishing the goals of the study, and this is where we see wide variation in implementation.

Studies aiming at pure measurement are easy to point out. An evaluation quiz or a national census is a great example. A census, in particular, is pure measurement: you need the boxes important to the Census Bureau filled in for every household.

In the business world, though, most customer-facing surveys are a mix of measurement and discovery. And therein lies the confusion we observe.

Take, for example, the omnipresent “Customer Satisfaction Survey”. Ask any executive who sponsors this effort about its goal and you will hear one of the following -

  1. We want to measure customer satisfaction.
  2. We want to identify segments of customers who are not happy with our products.
  3. We want to identify areas where we can improve.
  4. We want to capture and provide relevant feedback to other groups in the company.

If your goals are limited to #1 and #2, you can argue that this is a measurement-driven exercise. You need to agree on a metric that represents satisfaction and assign it to every transaction you perform. With the help of customer data from your CRM system, you can evaluate #2. Done!
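That measurement-only flow can be sketched in a few lines. The ratings, customer ids, and segment labels below are all made up, and the CRM lookup is a hypothetical stand-in for whatever your system actually exposes:

```python
from statistics import mean

# Hypothetical 0-10 satisfaction ratings keyed by customer id.
ratings = {"c1": 9, "c2": 3, "c3": 8, "c4": 2, "c5": 10}

# Hypothetical segment labels pulled from a CRM system.
crm_segment = {"c1": "enterprise", "c2": "smb", "c3": "enterprise",
               "c4": "smb", "c5": "enterprise"}

# Goal #1: one agreed-upon metric for overall satisfaction.
overall = mean(ratings.values())

# Goal #2: per-segment averages flag the segments that are not happy.
by_segment = {}
for cust, score in ratings.items():
    by_segment.setdefault(crm_segment[cust], []).append(score)
segment_scores = {seg: mean(scores) for seg, scores in by_segment.items()}

print(overall)         # 6.4
print(segment_scores)  # {'enterprise': 9, 'smb': 2.5}
```

Nothing here requires reading a single sentence of free text, which is why goals #1 and #2 are pure measurement.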

On the other hand, if #3 matters to you, then a numeric score alone is not going to help you. In addition to the score, you need a ranked list of areas that drag your score down. You can still make this a 100% measurement exercise if you can list the areas (from an inside-out view) and let the customers point out what needs to improve.

All companies know, and some acknowledge, that they don’t know everything there is to know about their customers. So they treat #3 as a discovery exercise, as it should rightfully be treated.

Then come the tactics for accomplishing discovery. A very common trend is to add an open-text question at the very end of the survey to collect open-ended feedback. But accomplishing discovery involves a bit more than mere data capture. The questions relevant to the effectiveness of open-ended questions are -

  • How many respondents are providing the open-ended feedback?
  • What percent of the captured responses are being read?
  • How soon can executives see the captured feedback? (Recency)
  • What part of the captured feedback is available for executives to see? Is it rolled up by an algorithm or summarized by an individual? (Bias)
  • Can executives drill down from the summary to the captured text? (Summarization)
  • Can the results of the open-ended question be correlated with other measurement questions?

The answers to these questions will help you determine whether your survey efforts are useful for discovery.
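The first and last questions on that list reduce to simple arithmetic once the data is captured. A hedged sketch on made-up records (the pairing of scores with comments is hypothetical, and comparing group averages is only a crude stand-in for a real correlation analysis):

```python
from statistics import mean

# Hypothetical survey records: numeric rating plus optional open-text comment.
records = [
    {"score": 9,  "comment": ""},
    {"score": 2,  "comment": "Support took three days to respond."},
    {"score": 8,  "comment": ""},
    {"score": 3,  "comment": "The billing page keeps timing out."},
    {"score": 10, "comment": "Love the new dashboard."},
]

# How many respondents provide open-ended feedback?
with_comments = [r for r in records if r["comment"]]
response_rate = len(with_comments) / len(records)

# Crude correlation check: do commenters score differently than non-commenters?
commenter_avg = mean(r["score"] for r in with_comments)
silent_avg = mean(r["score"] for r in records if not r["comment"])

print(response_rate)  # 0.6
print(commenter_avg)  # 5
print(silent_avg)     # 8.5
```

If, as in this toy data, the unhappy respondents are the ones leaving comments, the open-text question is earning its place in the survey.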

Product Requirements Gathering is a very “discovery”-heavy activity. Especially if current customers are surveyed, you already know a lot about them. Important factors such as how long they have used the product, who is using it, and what modules they are licensed to use should already be known at the individual level before you start conversing with folks in your audience. This can be a treasure trove of useful information if executed correctly using a well-designed survey.

Avoiding “meaningless” Discovery

Discovery for the sake of discovery yields meaningless (but sometimes frighteningly popular) results. It makes for good media headlines but does not yield much in terms of actionable decisions.

Take, for example, Twitter trends. A trend is just a measure of how often a particular keyword was mentioned. In the category of “Social Media mapping” we see a lot of reports of how often a brand was mentioned on social media or in blog posts. The key to turning such findings into business decisions is context, and that comes from knowing not just what was said, but who said it.

Read how a hotel chain focused on discovery to identify and mitigate root causes of guest complaints at a new property.