This Is Spatial Tap

Recently I’ve come to the conclusion that the defining, fulfilling characteristic of appropriately designed tools is that they become an extension of the user – of their body, their hands, or indeed their mind. Apple’s most recent device is a case in point: the iPad reduces the user interface to the extent that one feels as if one is actually ‘holding’ a webpage or application – and to all intents and purposes one is. If we take this notion to its next logical level, surely we find that in many circumstances the ultimate interface would be no interface at all, and that the minimal physical interaction necessary – setting aside neural and thought-based interfaces – would be a simple, discrete gesture.

However, many of the gestural interface implementations we have seen recently are concerned with how we as users might interact with a digital reality – or unreality, as the case may be. Projects such as Microsoft’s Project Natal demonstrate how a Natural User Interface (NUI) can enhance our gaming environments by using our own bodies and actions as the proverbial input device to control a corresponding digital self, or avatar.

A recent re-reading of Don Norman’s excellent book ‘The Design of Everyday Things’ prompted me to consider the real-world scenarios in which a gesture-based system would be advantageous for interacting with physical objects.

What implications would such changes have on our lives?

Many of the interactions that strike me initially are scenarios where one is somewhat reticent to touch a shared surface. However, by accommodating such behaviors, would we be facilitating the creation of a generation physically averse to many everyday interactions which we as humans currently take for granted?

As our interactions with objects head in a direction visibly reminiscent of telekinesis, how will this affect the way our everyday artifacts function, and in what ways will designers harness gestures to influence behavior?

Which interactions or physical devices do we encounter on a daily basis that might benefit from such an interface?

Doors

An obvious choice, surely? Maybe not. Many public-facing doors in buildings already use motion sensors to detect a person as they approach. Would they benefit from requiring a directive gesture to open? Probably not, although many of us will be familiar with two scenarios in which the current solutions fall short.

a) The first we’ll call ‘Playing Chicken’ – when one walks towards an ‘automatic door’ intending to enter the building, yet the doors fail to open in sufficient time, forcing the individual to either break stride or stop, face the glass, and wait patiently for the doors to acknowledge their presence.

b) ‘Getting Smart’ – when the flow of people using an automatic door is out of step with the sensor’s timing, causing awkward uncertainty and nervous approaches by those who wish to enter, daunted by the prospect of getting sandwiched between the unforgiving glass panes. The name is my tribute to the old spy TV show Get Smart, in which the lead character confidently faced the prospect of similar door malfunctions.

However, if a simple hand gesture were required, would people feel socially awkward ‘waving’ to an inanimate object, or would it only be a matter of getting used to this slight change in behavior? For instance, only 15 years ago a person might have avoided the prospect of having a full-blown conversation in a public space over a mobile device, yet nowadays many feel obliged to do just that – not only by necessity but mostly by choice. Social conventions regularly accommodate technological advances with unforeseen pace.
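To make the trade-off concrete, here is a minimal sketch of what a gesture-aware door controller might look like. The state names, timings and the ‘wave’ gesture are illustrative assumptions of mine, not any real product’s behaviour:

```python
import time
from enum import Enum, auto

class DoorState(Enum):
    CLOSED = auto()
    OPEN = auto()

class GestureDoor:
    """Hypothetical controller: presence alone does not open the door;
    a deliberate 'wave' gesture does. This avoids false triggers, at the
    small social cost of waving at an inanimate object."""
    HOLD_OPEN_SECS = 5.0  # assumed dwell time before auto-close

    def __init__(self):
        self.state = DoorState.CLOSED
        self.opened_at = 0.0

    def on_sensor_event(self, presence, gesture=None):
        if self.state is DoorState.CLOSED:
            # 'Playing Chicken' fix: the door answers a gesture made at
            # range, rather than guessing intent from approach alone.
            if presence and gesture == "wave":
                self.state = DoorState.OPEN
                self.opened_at = time.monotonic()
        elif self.state is DoorState.OPEN:
            # 'Getting Smart' fix: never close while someone is present.
            idle = time.monotonic() - self.opened_at
            if not presence and idle > self.HOLD_OPEN_SECS:
                self.state = DoorState.CLOSED
```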

Toilet Seats

Using a gestural command to lift and close a toilet seat would be of great benefit, hygienically speaking, particularly in public spaces where one is much less inclined to touch a toilet lid. When I researched this idea a little further, it transpired that other designers have had ideas along similar lines.

TV & Audio Systems

Using a hand wave to change channels would initially seem useful, yet imagine a scenario where several people are watching TV and have differing opinions on what exactly to watch. Moving a simple single-user interface such as a remote control into a multi-user scenario may unnecessarily complicate an otherwise straightforward situation.

Imagine your next home audio system being able to recognise gestures: snapping your fingers to switch it on, a wave of your right hand to play, and raising your hand to raise the volume correspondingly. Would we want such a thing, or would it be more a hindrance than a help?
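For illustration, a rough sketch of how such a gesture vocabulary might be wired up. The gesture names and the hand-height-to-volume mapping are my own assumptions, not any shipping system’s:

```python
class GestureAudioSystem:
    """Hypothetical mapping from the gestures above to commands.
    Discrete gestures dispatch to actions; a continuous one (hand
    height) maps proportionally onto the volume range."""

    def __init__(self):
        self.powered = False
        self.volume = 0.5  # 0.0 .. 1.0

    def on_gesture(self, name):
        actions = {
            "finger_snap": self.toggle_power,
            "right_hand_wave": self.play,
        }
        if name in actions:
            actions[name]()

    def on_hand_height(self, height_m, lo=0.8, hi=1.8):
        """Raise or lower volume with the hand: height in metres,
        clamped to the assumed comfortable range [lo, hi]."""
        self.volume = min(1.0, max(0.0, (height_m - lo) / (hi - lo)))

    def toggle_power(self):
        self.powered = not self.powered

    def play(self):
        if self.powered:
            print("playing")
```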

Taps

I also remember thinking that taps would benefit from a non-touch interface. Again, we are already familiar with motion-activated taps, but what about one affording greater control, allowing a user to turn it on and off, modify the flow, and adjust the hot/cold streams – all through gestures? Until this morning I had forgotten my thoughts on such interactions – that is, until I received a message from Jasper Dekker, a product designer with Flankworks in the Netherlands, who for his graduation project designed a tap controlled via spatial interaction.
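As a thought experiment, the control surface for such a tap might decompose into one on/off cue and two continuous axes. The axis assignments below are assumptions of mine, not necessarily how Dekker’s prototype works:

```python
def tap_from_hand(present, height_m, offset_m):
    """Hypothetical spatial tap mapping:
       - hand present over the spout  -> water on
       - hand height above the basin  -> flow rate (higher = stronger)
       - left/right offset from spout -> mix (0.0 = fully hot,
         1.0 = fully cold; left of centre runs hotter)."""
    if not present:
        return {"flow": 0.0, "mix": 0.5}
    flow = min(1.0, max(0.1, height_m / 0.30))       # full flow at 30 cm
    mix = min(1.0, max(0.0, 0.5 + offset_m / 0.20))  # +/- 10 cm sweeps the mix
    return {"flow": flow, "mix": mix}
```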

You can view a demo of this tap in action below. There is also a more polished conceptual demonstration available here, but for some reason the initial prototype spoke to me more.

Neat, huh? Seeing this prototype in action, it again struck me that the potential for gestural interfaces is vast and – if you’ll excuse the pun – untapped. We have only seen the beginnings of the impact such a subtle, virtually invisible technology can and will have on our daily lives.

As children we watched movies such as Star Wars, in which Jedi characters used their innate ability to manipulate objects at will; such behavior seemed magical at the time. However, as man-machine interfaces continue to advance and their applications broaden, our interactions with the world and the objects within it will accordingly become less intrusive and more natural.

Where else can you see gestural interfaces adding value in our day to day life?




Bill Buxton on Natural User Interfaces

Bill Buxton is a Principal Researcher at Microsoft Research. He is a noted expert in the HCI field and was a pioneer of multi-touch interfaces back in the early eighties.

He has a 30-year involvement in research, design and commentary around the human aspects of technology, and in digital tools for creative endeavour – music, film and industrial design in particular. Prior to joining Microsoft, he was a researcher at Xerox PARC, a professor at the University of Toronto, and Chief Scientist of Alias Research and SGI Inc., where in 2003 he was co-recipient of an Academy Award for Scientific and Technical Achievement.


[Video: Larry Larsen’s interview with Bill Buxton]

Buxton works from the assumption that sketching is fundamental to all design activity, and explores what it means to sketch a variety of possible user experiences. His approach is aggressively low-tech and eclectic. He argues that although you can use software tools to create fully-realized interactive mockups, you generally shouldn’t. Those things aren’t sketches, they’re prototypes, and as such they eat up more time, effort, and money than is warranted in the early stages of design. What you want to do instead is produce sketches that are quick, cheap, and disposable.

“Now that we can do anything, what should we do?”

Bill Buxton

His book Sketching User Experiences – Getting the Design Right and the Right Design is an absolute must-read for anyone working on software or hardware concerned with creating an engaging and usable experience. Last year he gave the first keynote presentation at the MIX 09 conference.

Recently at CES, Microsoft spent a lot of time speaking about the ‘Natural User Interface’, or NUI, and how this gesture-based, human-oriented approach could represent one of the most significant changes to human-device interfaces since the mouse appeared next to computers in the early 1980s.

Touch, face and voice recognition, along with movement sensors, are all part of an emerging field of computing often called the natural user interface, or NUI. Interacting with technology in these humanistic ways is no longer limited to high-tech secret agents and Star Trek. Buxton says everyone can enjoy using technology in ways that are more adaptive to the person, location, task, social context and mood. Microsoft’s Xbox technology ‘Project Natal’ incorporates face, voice, gesture and object recognition to give users a variety of ways to interact with the console, all without needing a controller.

Larry Larsen’s lengthy (38 min 42 sec) but fascinating interview with Buxton can be seen above; in it he discusses his work with Microsoft on NUI technologies, and the implications such advances in human-machine interaction will have on our daily lives in the near future.

Further Reading

Multi-Touch Systems that I Have Known and Loved
Natural User Interfaces: Voice, Touch and Beyond
Now, Electronics That Obey Hand Gestures




Zoomable User Interfaces & Desert Fog

Zooming user interfaces, or zoomable user interfaces (ZUI, pronounced zoo‐ee), are not exactly a new concept in the field of HCI/IxD. A ZUI can generally be defined as a graphical environment in which users can change the scale of the viewed area in order to see more or less detail, and browse through different documents or objects. Despite all the work and research carried out in this space over the years, the ZUI has had somewhat limited success. Indeed, finding an effective and – if you’ll excuse the pun – scalable solution has proved somewhat elusive. That is not to say that ZUIs haven’t been effectively implemented in certain scenarios; success stories such as Google Maps, Microsoft Live Labs’ Seadragon and Prezi have capitalised on the obvious benefits of well-applied zoomable interfaces.

The term itself was coined by Franklin Servan-Schreiber while working for the Sony Research Lab in partnership with Ben Bederson and Ken Perlin. One of the longest-running efforts to create a ZUI has been the Pad++ project, started by Ken Perlin, Jim Hollan and Ben Bederson at New York University and continued at the University of New Mexico under Hollan’s direction. More recent ZUI efforts include Archy, by the late Jef Raskin, and the simple ZUI of the Squeak Smalltalk programming environment and language. Bederson developed Jazz, and later Piccolo, at the University of Maryland, College Park; the latter is still being actively developed in Java and C#.

ZUIs use zooming as the main metaphor for browsing through hyperlinked information. Objects are presented on a zoomable page or canvas and can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.
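For the programmatically minded, that recursive nesting can be captured in a few lines. A minimal sketch, assuming each object occupies a unit square in its own coordinate space; the names are illustrative, not taken from Pad++, Piccolo or any other toolkit:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ZNode:
    """An object on the zoomable canvas. Children are positioned and
    scaled relative to their parent, so nesting is naturally recursive."""
    x: float            # offset within the parent's coordinate space
    y: float
    scale: float        # size relative to parent (e.g. 0.1 = 10x smaller)
    label: str = ""
    children: List["ZNode"] = field(default_factory=list)

def render(node, ox, oy, zoom, min_px=4.0):
    """Walk the tree, culling anything too small to see at the current
    zoom. Deeper content only 'appears' once the user has zoomed far
    enough in - the arbitrary-depth nesting described above."""
    size = zoom  # each node is assumed 1x1 unit in its own space
    if size < min_px:
        return  # below perceptual threshold: skip node and descendants
    print(f"draw {node.label!r} at ({ox:.1f}, {oy:.1f}) size {size:.1f}px")
    for c in node.children:
        render(c, ox + c.x * zoom, oy + c.y * zoom, zoom * c.scale, min_px)
```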

A good introductory read is the late, great Jef Raskin’s passage on ZoomWorld in his seminal HCI tome The Humane Interface: New Directions for Designing Interactive Systems, in which he discusses his idea of using the ZUI as a solution to users’ navigational dilemmas. It’s also worth noting that he spent the latter stages of his career working with his research team on implementations of this new UI paradigm.

Dr Ben Shneiderman, another noted researcher in the HCI field, made the following observation, which nicely encapsulates the lure of zoomable interfaces:

“Humans can recognize the spatial configuration of elements in a picture and notice relationships among elements quickly. This highly developed visual system means people can grasp the content of a picture much faster than they can scan and understand text. Interface designers can capitalize on this by shifting some of the cognitive load of information retrieval to the perceptual system. By appropriately coding properties by size, position, shape, and color, we can greatly reduce the need for explicit selection, sorting, and scanning operations.”

Ben Shneiderman, University of Maryland

The potential benefits of ZUIs are well documented and, as previously mentioned, recent applications such as Prezi and Microsoft’s Deep Zoom technology have nicely demonstrated use cases in which ZUIs are a viable and cognitively acceptable model. However, the shortcomings are also well documented, the most commonly cited bête noire being a phenomenon known as ‘desert fog’. This occurs when a person becomes disorientated whilst using a zoomable interface and loses track of where they are, which leads to frustration and ultimately to the abandonment of whatever task they were trying to carry out. The user no longer has any on-screen landmarks or cues from which to work out where they are. Unquestionably, this is worse than most orthodox interfaces, where at the very least a user can often infer the context of their operations by looking at what is on screen. In ‘desert fog’ there is nothing on screen to aid this inference, and so the user is left in a proverbial no-man’s land. Wayfinding aids, assistive navigational maps and various other interface features have been employed to address this undesirable scenario, albeit with varying degrees of success. Perhaps seeking a singular solution is the wrong approach; the ZUI conundrum may prove a case of ‘one size fits some’.
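One simple safeguard is for the interface to detect the condition directly: if nothing intersects the viewport, offer an escape hatch such as ‘zoom to fit’. A hypothetical sketch, not drawn from any particular ZUI implementation:

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test; rects are (x0, y0, x1, y1)."""
    return a[0] < b[2] and a[2] > b[0] and a[1] < b[3] and a[3] > b[1]

def check_desert_fog(viewport, object_rects):
    """If no object is even partially visible, the user has no landmark
    to navigate by - time to offer a way home."""
    if not any(rects_overlap(viewport, r) for r in object_rects):
        offer_zoom_to_fit()

def offer_zoom_to_fit():
    # e.g. animate back to a bird's-eye overview of the whole canvas
    print("No landmarks in view - showing 'zoom to fit' control")
```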

Every now and then, however, a demonstration or an advancement in technology comes along which reignites the buzz around zoomable interfaces; yesterday I happened upon one of these demos, which inspired me to write this little piece. At this year’s CESA Developers Conference in Japan, Sony revealed an upcoming technology which will shortly be available as an SDK to developers for both the PS3 and the PSP. Sony has christened it High-Resolution Image Enlargement Technology, and despite the rather long-winded name it does not fail to impress. When I watched the demonstration video, I was taken aback by the speed and ease with which the system handled such resolution-intensive content.

The video below showcases a number of the demonstrations – the main demo appears to be a release calendar in which each entry contains high-resolution photos or a video of whatever is being released that particular day. Make sure you stick around for the mosquito – it’s quite impressive. This is a genuinely astounding piece of technology that could well enable some pretty cool software applications; the real selling point for me, however, is that it will be available on widely used consumer products.

Perhaps the ‘desert fog’ may lift sooner than expected.




Making Steve Jobs an Icon

To user interface and icon designers everywhere, Susan Kare needs no introduction: it was she who designed the icons for the first Macintosh. Through her friend Andy Hertzfeld (a member of the original Mac team) she came to work at Apple after receiving a Ph.D. in fine art from New York University. In 1983 she joined the Macintosh software group and went on to create all of the original Mac’s icons and UI elements. From the ubiquitous trash bin, watch, pouring paint can and bomb icons to the portrait of a computer with a sly Mona Lisa smile, her work has graced desktops all across the world.

Which brings us to the story of ‘The Steve Icon’. One day back in February 1983, Susan Kare was busy creating icons for the Finder. Those were simple icons of only 32 by 32 black-or-white pixels – 1,024 dots in total. Kare would also draw lots of other images, for practice or just for fun, usually reflecting her somewhat playful sense of humor. Then, on the spur of the moment, she took it upon herself to start drawing a portrait of Steve Jobs – no small task within such a tiny space, but somehow Susan succeeded in crafting an instantly recognizable likeness, with a mischievous grin that captured a lot of Steve’s personality. It was reported that Jobs himself approved of the icon. Before long, other members of the Mac team came to Susan requesting that they too be forever immortalised in 32 by 32 pixels – it became a Mac team status symbol to be iconified.

The Steve Icon

Kare left Apple around the same time as Jobs and went on to become the 10th employee at his new company, NeXT, where she took on the role of Creative Director. One of her first projects was to oversee the design of the NeXT logo, for which she hired her idol, the great Paul Rand. Nowadays, as a freelance user interface and graphic designer, she works for some of the biggest tech companies in the world, including Electronic Arts, Facebook, IBM, Sony Pictures, Motorola and Microsoft. In recent interviews she has stated that over the past 10 years she has drawn more than 2,000 icons.

No mean feat – even for the lady who had a hand in making Steve Jobs an icon, in both a metaphorical and a literal sense.




Sixth Sense & Touchable Holography

There have been many exciting developments in the field of HCI recently, with augmented reality, experimental sensory experiences and numerous other emerging technologies making the headlines. Over the past year, two in particular have stood out for me personally.

At TED this year, Dr Pattie Maes, a professor with the Fluid Interfaces Group at MIT, gave a mind-blowing demo under the moniker of SixthSense, featuring a wearable device that enables new interactions between the real world and the world of data.

Another technology I happened upon recently was featured at this year’s SIGGRAPH: a demonstration called Touchable Holography, involving mid-air displays, holographics and actual tactile feedback. Normally we can “see” holographic images as if they are really floating in front of us, yet we cannot “touch” them, because they are nothing but light. To address this problem, Takayuki Hoshi and Masafumi Takahashi of the University of Tokyo have ingeniously combined holographics with actual tactile feedback.

This project adds tactile feedback to an image hovering in 3D free space. Tactile sensation normally requires contact with objects, but placing a physical stimulator in the workspace would dilute the appearance of the holographic images. The Airborne Ultrasound Tactile Display solves this problem by producing a tactile sensation on the user’s hand without any direct contact, and without diluting the quality of the holographic projection.
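The focusing principle behind such a display is elegantly simple: delay each transducer in a phased array so that every wavefront arrives at the focal point simultaneously, concentrating acoustic radiation pressure there. A minimal sketch of that idea – the array geometry and names below are illustrative assumptions, not the actual device’s specifications:

```python
import math

SPEED_OF_SOUND = 346.0  # m/s in air at roughly room temperature

def focus_delays(transducers, focal_point):
    """Per-transducer trigger delays (seconds) so that all wavefronts
    arrive at the focal point in phase. The farthest element fires
    first (zero delay); nearer ones wait out the path difference."""
    dists = [math.dist(t, focal_point) for t in transducers]
    d_max = max(dists)
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]

# Example: a 10x10 grid of elements at 1 cm pitch in the z=0 plane,
# focusing 20 cm above the centre of the array.
grid = [(x * 0.01, y * 0.01, 0.0) for x in range(10) for y in range(10)]
delays = focus_delays(grid, (0.045, 0.045, 0.20))
```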

The potential applications of both of these technologies are huge, and I await further exciting developments with great interest.

