The Future of UX/UI: The Natural User Interface

August 14, 2013 Alex Chong

For an adult with no prior exposure, learning how to use a desktop computer can be a confusing challenge.

The desktop computing experience is neither intuitive nor innate to human beings – it requires significant training, time, and ideally early-age immersion to grasp the paradigms of computing: both how to physically interact with the machine and how to conceptually navigate its virtual environments.

Since the adoption of the personal desktop computer in the 1980s, our efforts to naturalize the personal computing experience have treated humans as the variable factor – we adapt to the machine, rather than the machine adapting to us. It’s remarkable that the paradigms of how to sit at a computer have survived since the early 80s, with desktop computing today resembling the exact same monitor/keyboard/mouse setup with very little physical interactive variation. To illustrate how unnatural and unintuitive this archaic experience is, imagine the learning curve a first-time user in their 50s faces in order to understand this interface paradigm. There is nothing fundamentally “natural” about the desktop computing experience – if anything, it is the furthest thing from a natural human function.

We are living in a fascinating time – only recently have we finally begun breaking the paradigms of the traditional interface constraints set in the 80s. With the biggest paradigm shift yet in human-computer interfaces, we are finally seeing new forms of portable computing devices: multitouch surfaces, powerful and lightweight mobile devices, and now the emerging market of wearable technology. We are stepping into an exciting world outside the constraints of the desktop’s physical and virtual environments. It’s about time we return to our natural world.

The History of the User Interface

A quick history lesson: computers up until the mid-70s were not much more than large, glorified calculators. It was in the late 70s and early 80s that personal computing took a drastic leap forward: moving from the command line interface (CLI), where typing was the primary means of communicating with the machine, to the graphical user interface (GUI), a more natural and emotionally compelling way to interact with a computer. This made personal computing dramatically more accessible to the average person – giving people the ability to “see” into the computer world through virtual environments and live visual feedback, and pushing us one step closer to a more human computing environment.

But there’s something odd here – in the 30 years since the release of the Apple Lisa (one of the first personal computers to offer a GUI), very little has changed about the physical experience of interacting with personal computers. Sure, we invented the internet, progressed from the first iteration of HTML to HTML5, and developed transformative web platforms to connect everyone – but our rigid adherence to the monitor/keyboard/mouse legacy kept our physical desktop experience in the early 2010s the same as it was in the early 1980s. As further evidence of how far we’ve deviated from the natural world, the entire field of ergonomics emerged to save our bodies from injury as we adapt them to this unnatural environment.

The Future of UI: Returning to Nature

Flash forward to the late 2000s – or, as Apple describes it, the “Post-PC era.” The introduction of widespread multi-touch and mobile computing marks the largest leap forward yet in human-computer interfaces. Mobile devices are designed to be lightweight, portable, and seamless with one’s everyday lifestyle. Suddenly we’re in the middle of one of the most important technological revolutions: one in which our computing devices begin to adapt to natural human function.

The first step into this realm has been the wide adoption of smartphone technology – where checking your schedule is as easy as pulling out a piece of paper with your day’s plans scribbled on it, and knowing where to go (regardless of how bad your sense of direction is) is as easy as pulling out a sheet of written directions. Knowledge appears the moment you pull out your phone for answers.

We are only beginning to discover the possibilities of a world where devices adapt to our natural human behaviors – a stark departure from humans adapting to computers, toward technology that supports and amplifies natural human function.

3 Components of Natural User Interfaces

In the next wave of this revolution, hardware will virtually disappear. Take Google Glass as an example: its intention is to liberate us from having to compute within the confines of a personal computer by providing an unobtrusive overlay on our natural field of vision. This is computing as a support to our natural human functions, and an example of Invisible Computing, the first of three components of natural user interfaces.

  1. Invisible Computing
    Invisible computing is when hardware virtually disappears, as computing technology unobtrusively integrates with everyday, natural human function.
  2. Supportive Computing
    Supportive computing is computing technology that supports natural human function, rather than requiring humans to adapt to computing functions.
  3. Adaptive Computing
    Adaptive computing and machine learning intelligently recognize and interpret human patterns to produce output based on relative context, as sketched below.
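
To make that third component concrete, here is a minimal, hypothetical sketch in TypeScript of what adaptive computing might look like: a function that interprets a few contextual signals and adapts its output accordingly. All names and signals here are illustrative assumptions, not drawn from any real framework or device.

    // A minimal, hypothetical sketch of adaptive computing: the system
    // observes context and volunteers relevant output, rather than waiting
    // for explicit commands. All names and signals are illustrative.

    interface Context {
      hour: number;               // local time, 0-23
      isCommuting: boolean;       // inferred from motion/location patterns
      minutesToNextEvent: number; // time until the next calendar event
    }

    // Interpret human patterns and produce output based on relative context.
    function suggest(ctx: Context): string {
      if (ctx.minutesToNextEvent <= 15) {
        return "Your next event starts soon - here are directions.";
      }
      if (ctx.isCommuting) {
        return "Heavy traffic on your usual route; leaving now is faster.";
      }
      if (ctx.hour >= 7 && ctx.hour < 9) {
        return "Good morning - here is today's schedule.";
      }
      return "Nothing needs your attention right now.";
    }

    // Example: 8 AM, commuting, meeting in 10 minutes.
    console.log(suggest({ hour: 8, isCommuting: true, minutesToNextEvent: 10 }));

The point is not the particular rules, but the inversion of control: the human never issues a command – the system reads context and volunteers the output.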

One example of a technology that has matured over time is optics and optometry. Just think about corrective lenses: in the case of contact lenses, we place a thin film directly on the cornea, bending light rays so they converge precisely onto the retina. Suddenly, with very little effort, we have perfect vision. We tend to forget how phenomenal corrective lenses are because of how seamlessly they integrate into our everyday lifestyles and routines. To reflect on our current state of computing: imagine if, in order to correct your vision, you needed a keyboard and mouse to adjust your focus every time you looked at something.

Mature technological applications seamlessly disappear as they integrate into our lives.

What does this mean for UX/UI designers?

As computing technology advances, it pushes us UX/UI designers to stay on our toes and adapt accordingly. As any UX/UI designer will tell you, becoming a leader in this industry means discussing and evaluating technological trends so we can evolve with its exponential growth and change. To put this into context: UX/UI has seen unprecedented growth as an industry in the past few years, the result of designers adapting to technological developments. As new challenges arrive, we will think about natural human experiences beyond the everyday screen. Perhaps one day the screen will no longer be relevant. Perhaps one day we will adopt a human-centric interface that can’t be mocked up in Photoshop. But when that day comes, we’ll be ready – envisioning and designing for the future of technology.

 

Connect with Alex and Jacky on LinkedIn.


