How Distracting Are Distractions?

Ancient wisdom says, "The mind is like the wind; it never stays still." This holds true even when we drive: in a desperate attempt to keep our minds occupied, we reach for our phones.

The results are pernicious. According to the National Safety Council, more than 17,000 people died on US roads in the first six months of 2016 alone, part of the sharpest rise in traffic deaths in 50 years -- an uptick attributed largely to driver distraction.

Distraction can be broken into four categories:

  1. Mild distraction: listening to the radio or podcasts
  2. Moderate distraction: talking on the phone
  3. Moderate-to-high distraction: using voice commands
  4. Very high distraction: reaching for a moving object while driving

Research concludes that both heads-up displays (HUDs) and voice commands produce unsafe levels of distraction. A recent New York Times article quotes Paul Atchley, a psychologist at the University of Kansas who studies driver distraction, as saying HUDs are "a horrible idea." He continues: "The technology is driven by a false assumption that seeing requires nothing more than having the eyes fixed on the right spot." AAA's study of Apple's Siri mirrors this insight.

The Industry Grasps at Straws

Even though voice commands are demonstrably dangerous on the road, the tech industry continues to lean heavily on them. Frustrating as this is, it raises a question: what makes voice commands so distracting in the first place? To find out, let's look back at the dual-task experiments psychologists have conducted over the past 50 years.

Dual-task experiments find that when a person performs a secondary auditory task (responding to a tone) while performing a primary auditory task (recalling a list of numbers read aloud), performance on one or both tasks suffers. When the secondary task is visual instead, performance declines are minimal. These results suggest that two tasks requiring the same modality (i.e., both visual or both auditory) compete for the limited working-memory capacity of that modality, whereas two tasks that call on different modalities (i.e., one visual and one auditory) draw on separate subcomponents of working memory.

Based on these dual-task experiments, psychologists have inferred that working memory has two subcomponents: one for auditory information, called the phonological loop, and a separate one for visual information, called the visual-spatial sketchpad (Baddeley, 1992). The phonological loop processes auditory, mainly verbal, information, while the visual-spatial sketchpad processes diagrams and pictures.
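
To make this logic concrete, here is a toy sketch of the two-channel model in Python. The capacity and demand numbers are our own illustrative assumptions, not values from the experimental literature; the point is simply that interference appears only when concurrent tasks pile onto the same channel.

```python
# Toy model of working memory as two independent channels (illustrative only).
# All demand values below are assumptions for the sketch, not measured data.

CAPACITY = 1.0  # each subcomponent's capacity, normalized to 1.0

def channel_load(tasks):
    """Sum each task's demand per channel ('phonological' or 'visual-spatial')."""
    load = {"phonological": 0.0, "visual-spatial": 0.0}
    for demands in tasks:
        for channel, demand in demands.items():
            load[channel] += demand
    return load

def interferes(tasks):
    """True if any channel is over capacity, i.e., performance should suffer."""
    return any(total > CAPACITY for total in channel_load(tasks).values())

recall_digits = {"phonological": 0.7}     # primary auditory task
respond_tone  = {"phonological": 0.5}     # secondary auditory task
track_dot     = {"visual-spatial": 0.5}   # secondary visual task

print(interferes([recall_digits, respond_tone]))  # True: same channel overloads
print(interferes([recall_digits, track_dot]))     # False: separate channels
```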

The O6 Difference

Visual technologies such as heads-up displays, Apple CarPlay, and Android Auto all require visual attention. These interactions split the limited visual-spatial subcomponent of working memory, leading to severe performance declines in the primary task: driving.

Similarly, audio-only technologies such as voice commands demand both speaking and listening, splitting the phonological subcomponent of working memory. This, too, results in severe performance declines.

O6 technology solves this problem by using touch as the input (touch taps into the visual-spatial subcomponent of working memory) and audio as the output (audio taps into the phonological subcomponent). According to dual-task experiments, this kind of user flow leads to minimal performance declines. That's why we designed the O6 experience to be as simple as listening to the radio -- so you can keep your hands on the wheel, your eyes on the road, and avoid visual distractions.
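
Extending the same toy model, the sketch below layers each interaction style on top of driving, which we assume loads the visual-spatial channel heavily. Again, every number and channel assignment here is an illustrative assumption, not measured data or O6's actual design specification.

```python
# Illustrative comparison of interaction styles layered on top of driving.
# All channel assignments and demand values are assumptions for the sketch.

CAPACITY = 1.0

# Driving itself is assumed to load the visual-spatial channel heavily.
driving_load = {"visual-spatial": 0.8, "phonological": 0.0}

interactions = {
    "HUD / CarPlay / Android Auto (visual output)": {"visual-spatial": 0.5},
    # Listening and formulating speech together load the phonological loop.
    "Voice commands (listening + speaking)":        {"phonological": 1.2},
    "O6 (touch input, audio output)":               {"visual-spatial": 0.1,
                                                     "phonological": 0.4},
}

for name, demands in interactions.items():
    total = dict(driving_load)
    for channel, demand in demands.items():
        total[channel] += demand
    overloaded = [c for c, load in total.items() if load > CAPACITY]
    print(f"{name}: " + (f"overloads {' and '.join(overloaded)}"
                         if overloaded else "fits within both channels"))
```

Under these assumed numbers, the visual and audio-only styles each overload the channel they share with another concurrent demand, while the touch-in, audio-out split stays within both channels -- the same pattern the dual-task literature describes.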