Deconstructing Analysis Techniques

Breaking down the black box of analysis into its component techniques

Analysis is that oft-glossed-over but extremely important step in the research process that sits between observation (data gathering) and our design insights or recommendations. In many respects, analysis is crucial to realizing the value of our research: good analysis can salvage something from bad research, but the converse is not so true. Yet this is where the literature tends to fall a little silent, jumping over the analysis techniques straight to a discussion of how best to document and communicate the findings from analysis. This article seeks to begin to redress that imbalance by breaking the analysis black box down into its major sub-techniques.

On a recent project I needed to collect and analyze the content management templates in use across a large enterprise intranet. We were looking to inventory the diversity of templates in use; to establish whether they existed outside or within the enterprise content management system; to identify what changes might be made to the ‘official’ template set to reduce the overall number of templates; and to prepare for the migration of all content to a new design a few months down the track. I looked around the literature on information architecture and Web design generally and found quite a few references to content inventories and content analysis, but nothing on analyzing templates.

I set about designing the analysis task from scratch: looking at what we wanted to get out of the analysis, and at what tools and techniques would most effectively allow us to get there. In so doing, it struck me that there is very little information published about the process of analysis that would equip practitioners with a toolkit to construct their own analytical techniques. Given that the User Experience literature, across all of its component domains, focuses so heavily on techniques for user research and testing, it’s surprising to realize how often the coverage skips over the process of analysis, since this is where much of the value of our research is realized.

Techniques of Analysis

We can start to pull back the curtain on analysis by looking at the techniques that go into the process:

  • Deconstruction: breaking observations down into component pieces. This is the classical definition of analysis.
  • Manipulation: re-sorting, rearranging and otherwise moving your research data, without fundamentally changing it. This is used both as a preparatory technique – i.e. as a precursor to some other activity – and as a means of exploring the data as an analytic tool in its own right.
  • Transformation: processing the data to arrive at some new representation of the observations. Unlike manipulation, transformation has the effect of changing the data.
  • Summarization: collating similar observations together and treating them collectively. This is a standard technique in many quantitative analysis methods.
  • Aggregation: closely related to summarization, this technique draws together data from multiple sources. Such collections typically represent a “higher-level” view made up from the underlying individual data sets. Aggregate data is used frequently in quantitative analysis.
  • Generalization: taking specific data from our observations and creating general statements or rules.
  • Abstraction: the process of stripping out the particulars – information that relates to a specific example – so that more general characteristics come to the fore.
  • Synthesis: the process of drawing together concepts, ideas, objects and other qualitative data in new configurations, or to create something entirely new.

Let’s take a look at each of these techniques in detail and discuss some of the ways in which each technique can be applied.

Deconstruction

Breaking observations down into component pieces. This is the classical definition of analysis.

Breaking down research data into its component parts is a standard technique for analysis. One example of deconstruction is turning an interview transcript into a series of separate comments or answers to questions. Deconstruction is often used simply to prepare data for other analytic processes such as manipulation or summarization, or even abstraction.
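
To make that concrete, here is a minimal sketch of deconstructing a transcript in code; the transcript excerpt and speaker labels are invented for illustration:

```python
# Deconstruction sketch: break a raw interview transcript into separate,
# individually addressable comments. The transcript below is hypothetical.
raw_transcript = """\
Interviewer: How do you usually find documents on the intranet?
Participant: I mostly use the search box, but the results are confusing.
Interviewer: What do you do when the search fails?
Participant: I email a colleague and ask them to send me the link."""

comments = []
for line in raw_transcript.splitlines():
    speaker, _, utterance = line.partition(": ")
    comments.append({"speaker": speaker, "text": utterance.strip()})

# Each comment can now be inspected, sorted or summarized in its own right.
for comment in comments:
    print(f"[{comment['speaker']}] {comment['text']}")
```

Each piece can then feed the other techniques below: the individual comments are what get manipulated, summarized and, eventually, recombined.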

The aim of deconstruction is to decouple each component so as to allow inspection of each in its own right. In other disciplines this process is used as a device for critical thinking, bypassing the potentially misleading image conveyed by the whole. In so doing deconstruction can be a powerful tool for exposing unquestioned assumptions about our users’ mental models or the business priorities of the client organization.

Looking at our template analysis example, one of our first analysis tasks was to deconstruct the templates into their components. As with most of the techniques, we took a very low-tech approach to the task, blocking out the individual components with a pencil. In our case, the deconstruction made a lot of the subsequent analysis work easier. It was a minor, but significant, step in the overall process.

Manipulation

Re-sorting, rearranging and otherwise moving your research data, without fundamentally changing it. This is used both as a preparatory technique and as a means of exploring the data as an analytic tool in its own right.

The ability to “play with the data” is a critical capability in analysis. We use this technique in many situations: searching for patterns or trends in our observations, or as another preparatory stage for further analysis. For example, sorting data in some way – alphabetically, chronologically, by complexity or numerically – is a form of manipulation.
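
As a rough sketch of what this looks like in code – the observation records here are hypothetical – the same data can be re-sorted in several ways without ever being changed:

```python
# Manipulation sketch: re-sort the same observations in different ways
# without changing the underlying data. The records are hypothetical.
observations = [
    {"participant": "P3", "date": "2009-06-02", "topic": "search"},
    {"participant": "P1", "date": "2009-05-28", "topic": "navigation"},
    {"participant": "P2", "date": "2009-06-01", "topic": "search"},
]

by_participant = sorted(observations, key=lambda o: o["participant"])
by_date = sorted(observations, key=lambda o: o["date"])
by_topic = sorted(observations, key=lambda o: o["topic"])

# sorted() returns new lists; the original observations are left untouched,
# which is what distinguishes manipulation from transformation.
```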

The ability to easily manipulate data is one of the key determinants of the tools we use in our analysis work. Spreadsheets are an excellent tool for manipulating data, but as we saw in our template analysis task, the use of a more tangible form – such as our index cards – can be just as effective, if not more so in some cases.

When data is recorded in a format that resists fluid manipulation and exploration, people can stumble when moving from observation and data collection into analysis. It is important to plan this task into the research design so that it is not overlooked; if it is forgotten at the planning stage, you could find yourself with a costly and time-consuming data-entry process.

Transformation

Processing the data to arrive at some new representation of the observations. Unlike manipulation, transformation has the effect of changing the data.

Transforming research data is the process of taking our research data and turning it into something else. For example, you may recall from your school days the practice of “scaling” results from an assessment task (exam, essay, etc.) so that they fit a certain distribution, ending up with, say, 10% A, 15% B, 25% C, 25% D and so on.

Another example might be to convert raw data into a logarithmic form to reduce the impact of extreme values – or to demonstrate power laws in the data.
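
A minimal sketch of such a transformation, using hypothetical page-view counts:

```python
import math

# Transformation sketch: a log transform changes the data itself,
# compressing the influence of extreme values. Counts are hypothetical.
page_views = [12, 45, 130, 5200, 98000]
log_views = [math.log10(v) for v in page_views]

print([round(v, 2) for v in log_views])  # [1.08, 1.65, 2.11, 3.72, 4.99]
```

Note that, unlike sorting, there is no way to read the original page-view counts directly off the transformed list – the data really has changed.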

Summarization

Collating similar observations together and treating them collectively. This is a standard technique in many quantitative analysis methods.

The goal of summarizing data is to generate an additional set of data, typically more succinct, that encapsulates the raw data in some way. This may be a short sentence that captures the essential point from several minutes of an interview transcript: “participant finds site search unwieldy, confusing and difficult to use”.

We can also summarize the data quantitatively using summary or descriptive statistics such as frequencies, means, and standard deviations. Unlike the process of abstraction, where specificity is sacrificed for the sake of clarity; or aggregation, where several data sets are “rolled up”; summarization seeks to characterize the underlying data.
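
As a small illustration – the task times and ratings are hypothetical – Python’s standard library is enough to produce this kind of summary:

```python
import statistics
from collections import Counter

# Summarization sketch: characterize the raw observations with a handful
# of descriptive figures. Task times and ratings are hypothetical.
task_times_sec = [42, 55, 61, 38, 75, 49, 58]
print("mean:", round(statistics.mean(task_times_sec), 1))
print("stdev:", round(statistics.stdev(task_times_sec), 1))

ratings = ["easy", "hard", "easy", "neutral", "easy", "hard"]
print("frequencies:", Counter(ratings))
```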

Once again, spreadsheets are a very useful tool, especially when dealing with quantitative data, but they can be similarly useful when handling other data types. An equally useful medium for capturing summaries (once you have them) – particularly of qualitative data – is the Post-it or sticky note. One advantage sticky notes have over a spreadsheet is that you can arrange and re-arrange them in two dimensions, making them highly suited to further manipulation and exploration of the resulting summaries.

Index cards share many of the same advantages as sticky notes. They can be an excellent tool for capturing and working with summaries. They have the added advantage of being relatively robust and can therefore sustain a greater degree of handling.

Aggregation

Closely related to summarization, this technique draws together data from multiple sources. Such collections typically represent a “higher-level” view made up from the underlying individual data sets. Aggregate data is used frequently in quantitative analysis.

As discussed previously, aggregation is similar to, but distinct from, summarization. In one respect aggregation is simply the process of bringing together data from a variety of sources and adding it up. In an analytic context it also carries the connotation of combining those sources into something new.

A good example to highlight aggregation in action is the creation of a (fictional) customer satisfaction index (CSI). Our CSI will use data from:

  • An annual customer survey;
  • The number of product returns received; and
  • The ratio of new to repeat customers.

We combine data from each of these sources and arrive at some single figure – based on some form of calculation (we’ll save the ‘how’ of that for another time). That single figure – which we can track year-to-year – is our aggregate. Unlike a summary, which characterizes a single piece of data, you can see that our aggregate is a composite value.
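
To make the idea concrete, here is one purely illustrative way such a composite could be computed; the weights and normalization below are my own assumptions, not the calculation the article defers:

```python
# Aggregation sketch: roll three data sources up into a single composite
# figure. The weights and normalization are illustrative assumptions only.
survey_score = 7.8          # annual customer survey, on a 0-10 scale
return_rate = 0.03          # product returns / units sold
repeat_ratio = 0.65         # repeat customers / all customers

csi = (
    0.5 * (survey_score / 10)   # satisfaction, normalized to 0-1
    + 0.2 * (1 - return_rate)   # fewer returns push the index up
    + 0.3 * repeat_ratio        # loyalty component
) * 100

print(f"Customer Satisfaction Index: {csi:.1f}")  # 77.9 – one trackable figure
```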

Generalization

Taking specific data from our observations and creating general statements or rules.

Taking the results of some specific research task and drawing general inferences about the broader population is one of the most common, but perhaps least understood, analytical techniques. Generalization draws a great deal of its strength from the discipline of statistics, and in particular from the techniques of statistical inference.

In many respects generalization is similar to abstraction in that it reflects a move from the specific to the general or essential. It is a way of describing the common characteristics of the objects reflected in the data.

An example of generalization might be: “security is important to our users” based on an analysis of user interviews.
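
A hedged sketch of the statistical side of that move from sample to population – the interview counts below are hypothetical:

```python
import math

# Generalization sketch: infer a population-level statement from a sample.
# Counts are hypothetical; the normal approximation is rough for small n.
n = 24                    # participants interviewed
mentioned_security = 18   # participants who raised security unprompted

p = mentioned_security / n
margin = 1.96 * math.sqrt(p * (1 - p) / n)   # ~95% confidence interval

print(f"{p:.0%} of participants raised security "
      f"(95% CI roughly {p - margin:.0%} to {p + margin:.0%})")
```

The interval is a reminder that generalization always carries uncertainty: the wider the interval, the weaker the general statement we can safely make.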

Abstraction

The process of stripping out the particulars – information that relates to a specific example – so that more general characteristics come to the fore.

The process of abstraction involves the progressive removal of specific data, retaining just the essential information needed to communicate particular characteristics of an object. For example, “professional” is a more abstract form of “doctor” or “lawyer”; “graphic” is a more abstract form of “photograph”, “logo”, “illustration” or “chart”.
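
As a small sketch – the content inventory and the mapping are hypothetical – abstraction can be as simple as replacing specific labels with more general ones and seeing what comes to the fore:

```python
from collections import Counter

# Abstraction sketch: strip the particulars so that general characteristics
# come to the fore. The inventory and mapping below are hypothetical.
specific_to_abstract = {
    "photograph": "graphic",
    "logo": "graphic",
    "illustration": "graphic",
    "chart": "graphic",
    "news item": "text",
    "policy document": "text",
}

page_elements = ["photograph", "news item", "chart", "logo", "policy document"]
abstracted = [specific_to_abstract[e] for e in page_elements]

print(Counter(abstracted))  # Counter({'graphic': 3, 'text': 2})
```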

A wireframe is an abstract representation of a page design; the template thumbnails on our index cards are an abstract representation of the templates.

Abstract representations can be very useful because they remove a lot of visual noise from the analysis process. What we’re left with is a “high-level” depiction devoid of specific detail, focused on just those elements which are relevant to the discussion.

Synthesis

The process of drawing together concepts, ideas, objects and other qualitative data in new configurations, or to create something entirely new.

Combining multiple elements together to create a new, complex ‘thing’ is what the technique of synthesis is all about. Similar in some respects to aggregation, synthesis typically deals with non-numeric data.

Synthesis is often undertaken towards the end of an analytic process as the reverse of deconstruction. So where we might begin by breaking down data into its component parts and examining them; we often end by recombining those components in new ways. Note, however, that synthesis can also form part of an exploration and is one of the fundamental tools of the trade for UX strategy work.

If deconstruction allows us to critically examine assumptions by isolating individual components, synthesis allows us to explore new configurations for the whole.

But what about…

In discussing this article with other people we identified three other techniques that we either weren’t sure belonged as analytic techniques, or we couldn’t decide if they were already covered by the techniques discussed above. We believe they’re all very important to the analysis process. They are:

  • Reflection: thinking, pondering, contemplating. To the outside observer it looks a lot like staring into space, but your mind is going over and over and over all the detail of your observations, data, diagrams, and other research materials. It’s the part you can’t put a time limit on, and can make or break your subsequent work. You might call it “soaking it all in”, or “immersing myself in the data”. This technique is incredibly valuable to me in my own work and I’m not sure I’d be as effective if I didn’t include it.
  • Visualization: this technique is about giving the data a visual dimension. Instead of lists of items or rows of numbers in a spreadsheet, you produce a chart, a graph or some other form of illustration. A good visualization can help expose patterns or gaps much more clearly than the raw data (see the sketch after this list).
  • Number-crunching: this feels like it needs to be drawn out as a separate activity from data manipulation, transformation, or summarization, but I also recognise that this level of distinction may just be peculiar to me. This refers to all of the heavy-duty quantitative analysis work like cluster analysis, regression, calculating correlation coefficients and the like.
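
To illustrate the visualization point above with a minimal, dependency-free sketch – the theme counts are hypothetical, and the same figures could just as easily be fed to a charting library:

```python
# Visualization sketch: give the data a visual dimension. Even a crude
# text chart can expose a pattern that a table of numbers hides.
# The theme counts are hypothetical.
theme_counts = {"search": 14, "navigation": 9, "terminology": 5, "speed": 2}

for theme, count in sorted(theme_counts.items(), key=lambda kv: -kv[1]):
    print(f"{theme:<12} {'#' * count}  ({count})")
```

Run it and the dominance of search-related comments jumps out immediately, in a way it doesn’t from the raw dictionary.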

Conclusion

Working with research data and observations is often treated as a black box in design literature. Designers find themselves faced with the daunting task of analysing research data, but lack clear approaches to that task. Understanding the major techniques used in analysis work can remove some of the uncertainty and provide a clear way in to the work.

There still exists a very large gap in the literature on analysis and analytic techniques, but I hope that this discussion of the major components of analysis will go some way towards filling that void. The next time you’re undertaking some analysis work, try and identify these major techniques, and see if there are any others we can add to the list.

I’d like to say a very big thank you to the people who helped clarify and refine both my thinking on this topic, and the expression of that thinking in this article: Will Evans, Livia Labate, Donna Spencer and Daniel Szuc; Christian Crumlish, Michael Leis and Kaleem Khan.

Graphics by Jeroen van Geel (and he’s pretty proud of them :).

Steve Baty

Steve Baty, principal at Meld Studios, has over 14 years’ experience as a design and strategy practitioner. Steve is well-known in the area of experience strategy and design, contributing to public discourse on these topics through articles and conferences. Steve serves as Vice President of the Interaction Design Association (IxDA); is a regular contributor to UXMatters.com; serves as an editor and contributor to Johnny Holland (johnnyholland.org), and is the founder of UX Book Club – a world-wide initiative bringing together user experience practitioners in over 80 locations to read, connect and discuss books on user experience design. Steve is co-Chair of UX Australia – Australia’s leading conference for User Experience practitioners; and Chair of Interaction 12 – the annual conference of the IxDA for 2012.

21 comments on this article

  1. Pingback: Analysis of data | USiT

  2. Pingback: Deconstructing Analysis Technique - johnnyholland.org « Meld Consulting

  3. Laura Patterson on

    Nice article! I agree that there is much to the analysis process that is often minimised and misunderstood, especially in regards to the time it takes to do great analysis. I had a couple thoughts as I was reading that I just thought I’d share.
    - Not all of these happen at once in the chunk of time we call analysis. You generally need to deconstruct before you can aggregate, and so on. Your list of techniques seems to represent a latent understanding of this (they’re in a general process order… with perhaps the exception of transformation) but a clarification of this might make it clearer that analysis is not just “pick a technique”.
    - Deconstruction and perhaps manipulation are the only two that address the “dirty” side of analysis – getting into the data, letting it sink in (reflection), developing hypotheses about how to make sense of it, in relation to the needs of your client. There are other techniques that assist in that phase/time of the process – without fancy names (abstraction?!) they are things like pulling together the themes and major learnings (much like your Deconstruction but less top-down/structural), mapping relationships, systems and processes to flush out the data, finding analogies to help you think differently about the data, and so on. These can later be manipulated or transformed but initially in the analysis process it’s important to just see the data in lots of ways that might not be refined enough to share with others/the client.
    In any case, thanks for this!

  4. Laura, thanks for the comment.

    In a sense, what I was trying to do here was mimic what happens in a typical analysis process: to begin to understand, often we first need to break the thing down into its parts. The order of the techniques shown in the article makes logical sense, but analysis work is never as clean in practice as it’s shown on paper. As you go on to point out, the techniques discussed are combined to make up the processes we read about (when they’re written about), but typically not all at once. We pick and choose the ones we need for specific tasks.

    The next step then, for me anyway, will be to manipulate, abstract and generalize – and hopefully to learn a few things about analysis along the way.

  5. Steve, it’s great to see all these approaches broken out and so concisely presented. Thanks – very useful (and it’s visually quite appealing as well).

  6. Steve on

    Great article Steve. Can you share any artifacts like worked templates or sketches?

  7. Pingback: Pasta&Vinegar » Blog Archive » User research data analysis

  8. Pingback: A little more on eBooks and design research « Meld Consulting

  9. Pingback: Changing thinking; changing practice « Meld Consulting

  10. Pingback: Johnny Holland - It’s all about interaction » Blog Archive » Johnny’s 100th post: time to evaluate

  11. Brilliant post… very thorough breakdown and examination of all elements at play and associated with deconstruction.

  12. Pingback: Johnny Holland - It’s all about interaction » Blog Archive » Deconstructing Analysis Techniques: Deconstruction

  13. SteveJB on

    A few of the processes mentioned (particularly synthesis because I had to do a project/presentation on it) reminded me of analysis techniques that were covered during a course called ‘critical thinking’ which I attended while studying for a degree in International Business Administration.

    I’d post the names of the books and their authors but the courseware is currently on the other side of the ocean from me.

  14. Good article Steve, but I can’t help but think that the reason you didn’t find much available information on the deconstruction techniques is that it really stems from a web development school of thought.

    Having completed this set of deconstruction tasks a good number of times, I have never used a formalised approach – it’s been more a reverse engineering of the process. The interesting thing is that the process I do use, although not formal, is very similar to the one you have detailed above.

    Maybe this just says that some of us are doing this without the guidelines as it’s part of the inherent process. Now that’s not for everyone. I guess it’s a case of experience from different fields coming into play, eh.

  15. Gary,

    You’re right, but may have misconstrued the purpose of the article, so let me clarify a couple of things just to be sure we understand each other.

    We *all* do analysis work of one form or another. Basically, if you’re conducting research, then you’ll be analysing it. I’m not suggesting that people aren’t doing analysis: I’m saying that the way we talk about analysis, and write about analysis, can be pretty fluffy on the detail.

    What I set out above is an attempt to provide us all with some shared language we can use to describe the techniques that we’re already using – I don’t think I’m introducing anything new by way of technique.

    The other thing I’d clear up is that the above do not represent a process: I’ve presented a collection of techniques that are combined in different ways to perform analysis appropriate to the data and our objectives. I’m hoping that by doing so we’ll all be in a better position to articulate more precisely what we’re doing.

    The last thing I’ll mention is that the above tasks are about analysis; deconstruction is just one of them. Perhaps the choice of title is confusing in that respect.

    Cheers, Steve

  16. Pingback: Johnny Holland - It’s all about interaction » Blog Archive » Manipulating Data: Analysis Techniques part 3

  17. Pingback: Johnny Holland - It’s all about interaction » Blog Archive » UX Australia ‘09 report: Day 1

  18. Pingback: Link Backup from Delicious.com » Blog Archive » links for 2009-09-07

  19. Pingback: Putting people first » Conversations in a weekend village — Interaction10 impressions

  20. Pingback: The Character of Design, by Steve Baty - Core77

  21. Pingback: philpin.com » Blog Archive » The Character of Design, by Steve Baty