I’ve attended two conferences in the last three weeks—Jared Spool’s UX Immersions conference, and CHI 2012, the primary computer-human interaction conference.
UX Immersions was the smaller conference, focusing this year on the intersection of Agile and UX and on design for mobile devices. It was a great group of very focused people and, as it was run by Jared’s organization, the whole conference ran like clockwork.
I had my own session to teach, but I was able to get around a little and very much enjoyed Jeff Gothelf’s talk on UX and Agile. He calls his approach “Lean UX,” because he’s looking to integrate the UX work as closely as possible with the development work. I think it’s a good challenge: How closely can you get the UX people and the developers to work together? Can you do initial rough sketches together? If you give the developers a way to get started, can you still make space for user feedback and for design? Can you do all this within the boundaries of a sprint?
I’d love to hear your experience—let us all know in the comments how you’ve done it.
CHI was a joyful zoo, as always. We at InContext presented sessions on Agile/UX, Innovation, and Cool—all well-attended and well-received. Some highlights from the rest of the conference:
One fascinating paper discussed the effect of interleaving tasks on error rates. The authors showed that for a moderately involved task (setting up an IV drip), the error rate increases when the user tries to program multiple IV devices at once. They were able to reduce the error rate by moving the instructions farther from the IV device, making quick reference more difficult, so users defaulted to programming one device at a time. Hmm. Where else might interleaving tasks be a real problem? Should we be designing to discourage interleaving in those cases, rather than for convenience?
There was a good paper on how people tag content when they can see other people’s tags. It turns out that when people are given the opportunity to tag content they view online, they not only copy tags others have used, but also coin tags that extend the meaning of existing ones. So it’s important to show the tags the community has used; that way the community can coalesce around a shared set of terms.
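To make the design implication concrete, here’s a minimal sketch, with hypothetical names and made-up data of my own, of a tag field that surfaces the community’s most popular tags as suggestions:

```python
from collections import Counter

# Hypothetical in-memory tally of tags the community has applied;
# in a real system this would come from a database query.
community_tags = Counter({
    "usability": 42, "agile": 37, "lean-ux": 35,
    "sprint": 12, "scrum": 11,
})

def suggest_tags(prefix: str = "", limit: int = 5) -> list[str]:
    """Return the community's most-used tags matching a prefix, so a
    new tagger sees, and can converge on, the shared vocabulary."""
    matches = [(tag, count) for tag, count in community_tags.items()
               if tag.startswith(prefix.lower())]
    matches.sort(key=lambda pair: pair[1], reverse=True)  # most popular first
    return [tag for tag, _ in matches[:limit]]

print(suggest_tags())     # top community tags, shown before the user types
print(suggest_tags("s"))  # narrowed as the user types
```

The point isn’t the code, it’s the interaction: surfacing popular tags first nudges new taggers toward the shared vocabulary while still leaving room to extend it.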
But by far the coolest thing I saw was technology from Seeing Machines that can present a true 3D effect on an (almost) standard laptop with no glasses. The technology uses the laptop’s webcam to track where your eyes are and a special filter over the screen to direct the image to one eye or the other. They say the screen is cheap and light enough to be included in a laptop as standard equipment.
Imagine what the world will be like in five years, when computers ship with 3D as standard, built into the OS, just as they currently ship with color as standard. And it’s not just 3D movies: the technology tracks where your head is and can present 3D virtual worlds. 3D can be built into the OS interface itself, with windows truly laid out in the Z dimension. In a few years, we may be moving our heads to peek around a window on the screen to see what’s behind it. Start thinking about how to design for it now; it’s coming sooner than you think!
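In fact, the core geometry is simple enough to start playing with today. Here’s a minimal sketch, entirely my own illustration and not Seeing Machines’ pipeline, of head-coupled parallax: the tracked head position shifts each window in proportion to its Z depth, so deeper windows slide more as your head moves.

```python
# A toy model of head-coupled parallax. The webcam supplies a head
# position; everything else is similar triangles.

def parallax_offset(head_x: float, head_y: float,
                    head_dist: float, depth: float) -> tuple[float, float]:
    """On-screen shift of a point `depth` units behind the screen,
    viewed by a head at (head_x, head_y) offset from screen center,
    `head_dist` in front of the glass (all in the same units).

    Points on the screen plane (depth 0) stay put; deeper points
    track the head more strongly, which is exactly what lets you
    peek around a near window to see a far one.
    """
    scale = depth / (head_dist + depth)  # 0 at the screen plane, -> 1 far away
    return (head_x * scale, head_y * scale)

# Head 10 cm left of center, 50 cm from the screen:
print(parallax_offset(-10.0, 0.0, 50.0, 20.0))  # deep window shifts ~2.9 cm
print(parallax_offset(-10.0, 0.0, 50.0, 5.0))   # shallow window shifts ~0.9 cm
```

A real renderer would use a full off-axis projection matrix, but even this toy shows the peek-around effect: for the same head movement, the deeper window shifts roughly three times as far as the shallow one.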