NYC ♥’s Data Visualisation

The field of data visualisation appears to be the ‘Plat du Jour’ of late. It continues to gain popularity as more and more people recognise the value of visualising data of any nature in a more aesthetic form, be it as part of a narrative, a news story or as a standalone interactive piece. Indeed, it makes for a welcome antidote to the constant information overload we encounter every day.

There’s a lot of amazing work going on in the field at the moment, and sites such as Flowing Data, information aesthetics, Visual Complexity, Data Visualization.ch and Information is Beautiful do an excellent job of covering what’s happening on the scene, and indeed what’s on the horizon.

“The purpose of visualization is insight, not pictures”

Ben Shneiderman (1999)

However, three pieces of work have caught my attention over the past few weeks, which I’ll briefly describe in this post. They also share a common theme, in that each was made about, made in or made by a person who lives in NYC. Perhaps a somewhat tenuous link, but a link nonetheless.

Gray Lady

While the entire newspaper industry sits around debating whether the internet will bring about its demise and how it might avoid such a fate, The New York Times bucks the trend by embracing it. The work carried out by the in-house team known as Interactive Newsroom Technologies at ‘The Gray Lady’ has been making headlines of its own for quite a while now, and justifiably so. Their combination of data and visualisation is leading the field in the emerging domain of digital storytelling.


Interactive Newsroom Technologies are the minds behind the online pieces which have captured the eyes and the attention of online readers: works such as ‘Word Train’, a mood database which appeared on the home page for Election Day, and ‘Casualties of War: Faces of the Dead’, an ambitious project which merged photography, databases, audio and graphics to mark the date U.S. military fatalities in Iraq reached 3,000.

Emily Nussbaum wrote an excellent piece, ‘The New Journalism – Goosing the Gray Lady’, earlier this year which details how and why this team was put together, and examines some of the fruits of their labor.

The recently launched New York Times Innovation Portfolio aims to showcase the work carried out by the team, and is itself an excellent piece of interactive design work, carried out incidentally by the uber-talented Jon Dobrowski. The pieces are visually represented by color-coded bubbles under the categories Virtual, Multimedia, Personal Tools, Interactive Graphics, User-Submitted and Applications. They also provide some insight into user engagement by showing actual page views along with the average time spent with each feature.

Well worth a visit.

Gray Maps

Next up is an interactive map I came across which shows income & rent data for New York City neighbourhoods.

It poses the questions: Who lives here? Who can afford to live here?

This visualisation stands out for me, though, in that it is beautifully executed. It enables one to view income demographics and rents in the neighbourhoods of New York City. When you click on a particular neighbourhood, it maps the number of families in each income category onto the multicolored bar residing at the foot of the interface.
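
The bar interaction is easy to picture in code: bucket each family’s income into a category, count the buckets, and let the counts drive the widths of the coloured segments. A toy sketch in Python with invented brackets and labels (the map’s real categories and data will of course differ):

    from bisect import bisect_right
    from collections import Counter

    # Hypothetical annual income brackets (USD); the real map defines its own.
    BRACKETS = [20_000, 40_000, 75_000, 150_000]
    LABELS = ["low", "moderate", "middle", "high", "highest"]

    def income_bar(family_incomes):
        """Count families per income category, returned as fractions of the
        whole - ready to drive the segment widths of a multicolored bar."""
        counts = Counter(LABELS[bisect_right(BRACKETS, income)]
                         for income in family_incomes)
        total = len(family_incomes)
        return {label: counts[label] / total for label in LABELS}

    # A toy neighbourhood sample: one fraction per category comes back.
    print(income_bar([18_000, 35_000, 35_500, 82_000, 310_000, 64_000]))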

Who Lives Here? Who Can Afford To?

Web & Information design was carried out by by Sha Hwand, Zach Watson and William Wang with concept and project direction by Rosten Woo and John Mangier of The Center for Urban Pedagogy.

As Tim O’Reilly has already pointed out, this type of visualisation should be part of every city’s eGovernment toolkit – indeed every country’s. It is a highly functional yet simple visualisation that demonstrates the potential of such applications as a means of explaining the numbers by way of pictures.

Anthropology + Mapping Application + Data Visualisation = Awesome.

Check it out at http://envisioningdevelopment.net/map.

Gray Matter

I first stumbled across the work of Jonathan Harris back in 2005, via an interactive piece named Phylotaxis, which aimed to be an expression of the space where science meets culture. He designed it in collaboration with the one and only Stefan Sagmeister, and it was commissioned by SEED magazine – who recently hired another guru, namely Ben Fry, to head up their data visualisation group.

Returning to Harris: his work aims to combine elements of computer science, anthropology, visual art and storytelling, and his projects range from building the world’s largest time capsule to documenting an Alaskan Eskimo whale hunt on the Arctic Ocean.

Phylotaxis was, and remains, an impressive piece of work, and his output has continued to impress since. A year later he released a new work named ‘We Feel Fine’, which set out to be ‘an exploration of human emotion’.

“It continually harvests sentences containing the phrase “I feel” or “I am feeling” from the Internet’s newly posted blog entries, saves them in a database, and displays them in an interactive Java applet, which runs in a web browser. Each dot represents a single person’s feeling. We Feel Fine collects around 15,000 new feelings per day, and has saved over 13 million feelings since 2005, forming a constantly evolving portrait of human emotion.”

Jonathan Harris

We Feel Fine
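
The harvesting step Harris describes is straightforward to sketch: scan newly posted text for sentences containing ‘I feel’ or ‘I am feeling’ and keep the feeling word. A toy Python version of that one idea – not the project’s actual code, which also records details such as the author’s age, gender, location and the weather:

    import re

    # Matches "I feel ..." / "I am feeling ..." and captures the feeling word.
    FEELING = re.compile(r"\bI\s+(?:feel|am\s+feeling)\s+(\w+)", re.IGNORECASE)

    def harvest_feelings(post_text):
        """Return (sentence, feeling) pairs from one blog post, in the
        spirit of We Feel Fine's 'I feel' harvesting."""
        results = []
        for sentence in re.split(r"(?<=[.!?])\s+", post_text):
            match = FEELING.search(sentence)
            if match:
                results.append((sentence.strip(), match.group(1).lower()))
        return results

    print(harvest_feelings(
        "Long day. I am feeling hopeful about tomorrow! I feel tired though."))
    # [('I am feeling hopeful about tomorrow!', 'hopeful'),
    #  ('I feel tired though.', 'tired')]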

Just released is a book based on the project, We Feel Fine: An Almanac of Human Emotion. With lush, colorful spreads devoted to 50 feelings, 13 cities, 10 topics, 6 holidays, 5 age groups, 4 weather conditions and 2 genders, We Feel Fine explores our emotions from every angle, providing insights into and examples of each. Equal parts pop culture and psychology, computer science and conceptual art, sociology and storytelling, it is a radical experiment in mass authorship, merging the online and offline worlds to create an indispensable handbook for anyone interested in what it’s like to be human.

Check out the interactive, installation and print versions of this amazing project at ‘We Feel Fine’.


Filed under Data Visualisation, Innovation, Interaction Design, User Experience
Posted on Sunday, December 6th, 2009 at 8:57 pm


Zoomable User Interfaces & Desert Fog

Zooming user interfaces or zoomable user interfaces (ZUI, pronounced zoo‐ee) are not exactly a new concept in the field of HCI/IXD. A ZUI can generally be defined as a graphical environment in which users can change the scale of the viewed area in order to see more or less detail, and browse through different documents or objects. Despite all the work and research carried out in the space over the years, the ZUI has had somewhat limited success; indeed, finding an effective and, if you’ll excuse the pun, scalable solution has proved elusive. That is not to say that ZUIs haven’t been effectively implemented in certain scenarios – success stories such as Google Maps, Microsoft Live Labs’ Seadragon and Prezi have capitalised on the obvious benefits of well-judged applications of zoomable interfaces.

The term itself was coined by one Franklin Servan-Schreiber while working for the Sony Research Lab in partnership with Ben Bederson and Ken Perlin. One of the longest-running efforts to create a ZUI has been the Pad++ project, started by Ken Perlin, Jim Hollan and Ben Bederson at New York University and continued at the University of New Mexico under Hollan’s direction. More recent ZUI efforts include Archy, by the late Jef Raskin, and the simple ZUI of the Squeak Smalltalk programming environment and language. Bederson went on to develop Jazz and later Piccolo at the University of Maryland, College Park, the latter of which is still under active development in Java and C#.

ZUIs use zooming as the main metaphor for browsing through hyperlinked information. Objects are presented within a zoomed page or canvas and can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.
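
To make the metaphor concrete: at the heart of any ZUI sits a view transform mapping world coordinates to screen coordinates, plus a zoom operation that scales about an arbitrary focal point (typically the cursor) so the point under the cursor stays put. A minimal sketch in Python – purely illustrative, with hypothetical names, and not how Pad++ or Piccolo are actually structured:

    from dataclasses import dataclass

    @dataclass
    class Viewport:
        """A minimal ZUI view transform: world -> screen is scale plus pan."""
        scale: float = 1.0   # pixels per world unit
        ox: float = 0.0      # pan offset in screen space, x
        oy: float = 0.0      # pan offset in screen space, y

        def world_to_screen(self, wx, wy):
            return wx * self.scale + self.ox, wy * self.scale + self.oy

        def screen_to_world(self, sx, sy):
            return (sx - self.ox) / self.scale, (sy - self.oy) / self.scale

        def zoom_about(self, sx, sy, factor):
            """Scale about the screen point (sx, sy) so the world point
            under it stays fixed - the core ZUI interaction."""
            wx, wy = self.screen_to_world(sx, sy)
            self.scale *= factor
            # Recompute the pan so (wx, wy) still maps to (sx, sy).
            self.ox = sx - wx * self.scale
            self.oy = sy - wy * self.scale

    # e.g. zoom in 25% about a cursor at (400, 300):
    # view = Viewport(); view.zoom_about(400, 300, 1.25)

Recursive nesting then falls out naturally: each nested object carries a local transform of the same shape, composed with its parent’s.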

A good introductory read is the late, great Jef Raskin’s passage on ZoomWorld in his seminal HCI tome The Humane Interface: New Directions for Designing Interactive Systems, in which he discusses his idea of using the ZUI as a solution to users’ navigational dilemma. It’s also worth noting that he spent the latter stages of his career working on implementations of this new UI paradigm with his research team.

Dr Ben Shneiderman, another noted researcher in the HCI field, made the following observation, which nicely encapsulates the lure of zoomable interfaces:

“Humans can recognize the spatial configuration of elements in a picture and notice relationships among elements quickly. This highly developed visual system means people can grasp the content of a picture much faster than they can scan and understand text. Interface designers can capitalize on this by shifting some of the cognitive load of information retrieval to the perceptual system. By appropriately coding properties by size, position, shape, and color, we can greatly reduce the need for explicit selection, sorting, and scanning operations.”

Ben Shneiderman, University of Maryland

The potential benefits of ZUIs are well documented, and as previously mentioned, recent applications such as Prezi and Microsoft’s Deep Zoom technology have nicely demonstrated certain use cases in which ZUIs are a viable and cognitively acceptable model. However, the shortcomings are also well documented, the most commonly cited bête noire being a phenomenon referred to as ‘desert fog’. This occurs when a person becomes disorientated whilst using a zoomable interface and loses track of where they are, which confuses and frustrates the user and ultimately leads to the abandonment of whatever task they were trying to carry out. The user no longer has any on-screen landmarks or cues from which to work out where they are. Unquestionably, this is a worse situation than in most everyday, orthodox interfaces, where at the very least a user can often infer the context of their operations by looking at what is on screen. In ‘desert fog’ there is nothing on screen to aid this inference, and so the user is left in a proverbial no-man’s land.

Wayfinding aids, assistive navigational maps and various other interface features have been employed to address this undesirable scenario, albeit with varying degrees of success. Perhaps seeking a singular solution is the incorrect approach; the ZUI conundrum could well prove to be a case of ‘One Size Fits Some’.
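
One such cue is easy to sketch: when nothing is visible, draw beacons on the viewport edge pointing back toward the off-screen content, so there is always something to steer by. This reuses the Viewport sketch above and is, again, purely illustrative:

    import math

    def desert_fog_cues(view, objects, width, height):
        """If no object is visible (the user is lost in 'desert fog'),
        return one edge-of-screen beacon per object, pointing back
        toward the content; otherwise return no cues."""
        screen_pts = [view.world_to_screen(wx, wy) for wx, wy in objects]
        if any(0 <= sx <= width and 0 <= sy <= height
               for sx, sy in screen_pts):
            return []  # landmarks still on screen: nothing to do

        cx, cy = width / 2.0, height / 2.0
        cues = []
        for sx, sy in screen_pts:
            # Scale the centre-to-object direction until it hits the
            # nearer viewport edge; that's where the beacon is drawn.
            dx, dy = sx - cx, sy - cy
            t = min(cx / abs(dx) if dx else math.inf,
                    cy / abs(dy) if dy else math.inf)
            cues.append((cx + dx * t, cy + dy * t))
        return cues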

Every now and then, however, a demonstration or an advancement in technology comes along which reignites the buzz around zoomable interfaces. Yesterday I happened upon one of these demos, which inspired me to write this little piece. At this year’s CESA Developers Conference in Japan, Sony revealed an upcoming technology which will shortly be available as an SDK to developers for both the PS3 and the PSP. Sony have christened it High-Resolution Image Enlargement Technology, and despite the rather long-winded name it does not fail to impress. Watching the demonstration video, I was taken aback by the speed and ease with which the system handled such resolution-intensive content.

The video below showcases a number of the demonstrations. The main demo appears to be a release calendar in which each entry contains high-resolution photos or a video of whatever is being released that particular day. Make sure you stick around for the mosquito – it’s quite impressive. This is a genuinely astounding piece of technology that could well enable some pretty cool software applications; the real selling point for me, however, is that it will be available on widely used consumer products.

Perhaps the ‘desert fog’ may lift sooner than expected.


Filed under Interaction Design, User Experience, User Interface Design
Posted on Tuesday, October 13th, 2009 at 9:14 pm


Sixth Sense & Touchable Holography

There have been many exciting developments in the field of HCI recently, with augmented reality, experimental sensory experiences and numerous other emerging technologies making the headlines. Over the past year, two in particular have stood out for me personally.

At TED this year, Dr Pattie Maes, a professor at MIT with the Fluid Interfaces Group, gave a mind-blowing demo under the moniker of SixthSense, which featured a wearable device that enables new interactions between the real world and the world of data.

Another technology I happened upon recently was featured at this year’s SIGGRAPH: a demonstration called Touchable Holography, involving mid-air displays, holographics and actual tactile feedback. Normally we can “see” holographic images as if they were really floating in front of us, but we cannot “touch” them, because they are nothing but light. To address this problem, Takayuki Hoshi and Masafumi Takahashi of The University of Tokyo have ingeniously combined holographics with actual tactile feedback.

This project adds tactile feedback to the hovering image in 3D free space. Tactile sensation requires contact with objects, but including a stimulator in the work space dilutes the appearance of holographic images. The Airborne Ultrasound Tactile Display solves this problem by producing tactile sensation on a user’s hand without any direct contact and without diluting the quality of the holographic projection.
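
The tactile half works by driving an array of ultrasound transducers so that their waves arrive in phase at a single focal point in mid-air, where the resulting radiation pressure is just strong enough to feel. The focusing itself is simply per-transducer timing; here is a rough Python sketch of the idea, with a made-up array geometry (the actual AUTD hardware and parameters differ):

    import math

    SPEED_OF_SOUND = 340.0   # m/s in air, approximate
    FREQ = 40_000.0          # Hz; 40 kHz transducers are typical here

    def focus_phases(transducers, focal_point):
        """Phase offset per transducer so every wave arrives at the
        focal point in phase, creating a pressure peak you can feel.
        Both arguments are (x, y, z) positions in metres."""
        wavelength = SPEED_OF_SOUND / FREQ   # ~8.5 mm at 40 kHz
        phases = []
        for pos in transducers:
            dist = math.dist(pos, focal_point)
            # Advance each element by its travel time to the focus,
            # expressed as a phase in radians (modulo one wavelength).
            phases.append(-2 * math.pi * (dist % wavelength) / wavelength)
        return phases

    # A hypothetical 10x10 grid at 1 cm pitch, focusing 20 cm above centre:
    grid = [(x * 0.01, y * 0.01, 0.0) for x in range(10) for y in range(10)]
    phases = focus_phases(grid, (0.045, 0.045, 0.20))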

The potential applications of both of these technologies are huge, and I await further exciting developments with great interest.


Filed under Interaction Design, User Experience, User Interface Design
Posted on Sunday, October 4th, 2009 at 10:19 pm


The Way Things Go

RFID and NFC will both undoubtedly play a huge role in the field of interaction design in the coming years. The Institute of Design at the Oslo School of Architecture and Design in Norway have been carrying out some very interesting research in the area, and have in turn come up with some very innovative applications of these technologies. Their Touch initiative is a research project investigating Near Field Communication, a technology that, in short, enables connections between mobile phones and physical things.
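
In practice, the ‘connection’ usually amounts to the phone reading a tiny NDEF message stored on a tag and acting on it. A minimal Python sketch of decoding an NDEF short text record – the framing follows the NDEF spec, while the sample bytes are hand-made for illustration:

    def parse_ndef_text_record(data: bytes) -> str:
        """Decode a single NDEF short-record text record ('T' type),
        the kind of payload a simple NFC tag typically carries."""
        flags = data[0]
        assert flags & 0x10, "only short records (SR flag) handled here"
        type_len = data[1]
        payload_len = data[2]
        offset = 3 + (1 if flags & 0x08 else 0)  # skip ID-length byte if present
        rtype = data[offset:offset + type_len]
        offset += type_len
        if flags & 0x08:
            offset += data[3]                    # skip the ID field itself
        payload = data[offset:offset + payload_len]
        assert rtype == b"T", "not a text record"
        status = payload[0]
        lang_len = status & 0x3F                 # length of the language code
        encoding = "utf-16" if status & 0x80 else "utf-8"
        return payload[1 + lang_len:].decode(encoding)

    # A hand-made record saying "Hello, world!" in English:
    print(parse_ndef_text_record(b"\xd1\x01\x10T\x02enHello, world!"))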

For their Nearness project they put together a nice short video in collaboration with BERG which illustrates some of the potential applications of these technologies.

Whilst watching the clip I was immediately reminded of the mind-blowing Honda Accord ‘Cog’ commercial made a number of years ago. You can watch the advert below.

The Touch group have obligingly acknowledged their influences, including a mention of the art film by Swiss artists Peter Fischli and David Weiss called ‘Der Lauf der Dinge’, or ‘The Way Things Go’. For their film they built an enormous, precarious structure, 100 feet long, out of common items; using fire, water, gravity and chemistry, they created a mind-blowing chain reaction of physical and chemical interactions and precisely crafted chaos.

As a child I was fascinated by dominoes, and I can recall watching with amazement those videos in which thousands of carefully placed pieces of plastic ran meandering paths across massive high-school gym floors. There is something hugely captivating about watching a chain of self-triggering events, and all of these films use the device to great effect.

From the world of art to the business of advertising, and eventually arriving in the field of interaction design, the ‘visual chain of events’ device has been used to great effect. Let’s see where it turns up next.


Filed under Interaction Design, Miscellaneous, User Experience
Posted on Sunday, September 27th, 2009 at 8:37 pm