It’s been fascinating over the last several years to watch the big hairy technology battle of our age play out—Google versus Apple. Like Microsoft/Apple a generation ago, this one is also about a clash of ideologies. Much has been made of the open versus closed architecture debate, but I’d like to focus on another aspect of their relationship—their orientation to design. Apple famously champions purity of vision and a minimal functional and visual aesthetic, driven by a small group of genius designers. Though superficially Google values a similar visual aesthetic, Google’s core design approach is rooted in data—unimaginably large amounts of quantitative data about every aspect of what people do on its websites. Whatever produces quicker or more relevant searches is good design.
This has generated a trend toward quantitative and statistical approaches to design. In Google’s school of thought, modern web technologies let almost anything be built and fielded so quickly that the cheapest and best way to design is to brainstorm, build, put it in front of the world, and see what happens. If your statistics improve, great, do more; if performance degrades, roll back and try again.
Almost inevitably, a corresponding backlash developed against this view. The “soft” camp holds that designers need to be free to innovate without restriction, letting intuition drive solutions. The debate really got fun when Doug Bowman, a top Google visual designer, quite publicly left Google last year. Both camps dug in, and in today’s overly polarized culture, started hurling mud at each other.
To me, the whole thing got framed like so many other arguments do today, strictly as a black-and-white issue: design “versus” analytics. In fact, it is tempting to think one is right and the other is wrong—especially when the debate gets framed as Google versus Apple and fanaticism sets in.
But like so many other false choices, I think it’s better to look more deeply and recognize when a statistical approach is appropriate, and when it is not.
So when are statistics useful for design?
Businessweek recently published a great article about the design activities and rationale behind Google’s 2010 homepage redesign. The broad arc of the effort traced in the article echoes the Google philosophy: get a bunch of super-smart people in a room, brainstorm lots of solutions, figure out which will work with the architecture, implement, and optimize with large-scale testing driven by statistical performance measures.
Sounds great, worked great. Google rolls on. But let’s look a bit closer at the design circumstances that made Google’s application of statistics work.
1. Easily obtained performance metrics clearly, unambiguously and quickly define the “goodness” of the chosen solution
This is a simple statement but it has profound design implications. Google’s metrics are clear: speed, number of clicks per search, and so forth. They can be measured straightforwardly with the tools Google has. And they can be measured relatively quickly—results of a design change can be known within days, even hours, of making the adjustment.
2. Many of the changes Google tests are between a small number of similar, low-risk design options
The work practice that search supports is clear, and all of the options Google chooses to test support that practice. So, small changes, like the color of the search button, the pixel width of the search box, and what to call “similar” results, can be compared relatively evenly and won’t break the user’s workflow. You’re comparing apples and apples. These small changes are ideal to test statistically.
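To make this concrete, here is a minimal sketch of the kind of comparison such a test enables: a two-sided two-proportion z-test on click-through rates for two variants of a page. The traffic numbers and the variant descriptions are hypothetical, invented for illustration; this is not Google's actual methodology or data.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-sided two-proportion z-test: does variant B's click rate
    differ from variant A's?  Returns (z, p_value)."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis (no difference)
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal survival function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: variant B (say, a wider search box) vs. control A
z, p = two_proportion_z(clicks_a=4_120, views_a=100_000,
                        clicks_b=4_380, views_b=100_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With traffic at web scale, even a fraction-of-a-percent difference in click rate clears statistical significance within hours, which is exactly why this style of testing suits small, low-risk design options.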
3. The design options are cheap and fast to generate and field
In the web world, iterative cycles can be made quickly—even iterative cycles on the “real product.” Of course, this is the real magic of the statistical approach, but it is a significant luxury that is not enjoyed by all products and services in the world.
For me, the thing that gets lost in the debate about design “versus” analytics is how relatively rare this combination of circumstances is. Surely if all three are true, the application of Google’s design rationale is hard to argue with. But aside from a subset of the web world, very few design domains have this triple luxury.
As an example, I’ve been working a lot lately with automotive clients. In the car industry, market share and margin are king. But there are literally millions of design choices that impact those two big “metrics.” Aesthetics play a big part, and the purchaser benefits of the look of a car are not even well understood, let alone objectively measurable. Product cycles take years, so designers can’t just make a change and see what happens in a couple of hours. Designers can’t deploy a prototype to millions of drivers instantaneously. And so on.
It’s clear other physical products are similar, but so are some software packages, and even web applications. Think about CRM or ERP—some software, even if it’s web based, takes a long time to roll out, and has lots of work practice redesign issues surrounding it. Again, none of the circumstances I outlined above fit.
Design choices in these domains are not just apples and apples. Or Apples and apples. We need to think different. Statistics alone aren’t useful, and we need to make use of other design methods that give us other sorts of data from which to design—things like values, culture, and behaviors.
So what’s my take-away from all of this? For me, it’s not a question of either design or analytics. It’s in having the maturity of design thinking to choose when a particular method is useful and when it is not.