We're bad at the "sustained supervisory" tasks that autonomous vehicles create, but researchers have a solution
If autonomous cars capable of handling most driving duties become a reality, we will be free to relax or focus on other things while on the road.
But will we be able to make the mental switch back to driving and regain control fast enough in a critical situation? Researchers at the University of Glasgow have been investigating whether augmented reality could help in this regard.
The problem is that letting the car take control while we concentrate on something else shifts the driver’s role to somewhere between driver and passenger: no longer actively steering, but still responsible for the vehicle.
One thing we aren’t good at, says the research team, is “sustained supervisory tasks”. We become bored, lose awareness of road conditions and react too slowly to sudden changes around us. Another problem is the “look but fail to see” phenomenon, where we don’t process something right in front of our eyes.
The team reasoned that using augmented reality to grab the attention of a driver engaged in another task would help them to quickly switch focus back to driving in an emergency. To test the theory, they set up a laboratory experiment centred on a steering wheel with screens emulating the view through a windscreen.
The screens displayed a series of 40 video clips while participants, wearing an augmented reality headset, carried out one of two tasks: sometimes heads up, working on the screen ahead, and sometimes heads down, using a tablet. One task was a simple game, tracking and collecting moving virtual gems. The other presented a pad on which participants had to type a phone number displayed on the screen.
In both scenarios, the videos were stopped immediately before a potential hazard was displayed – such as a pedestrian stepping into the road – to test situational awareness.
The participants then had to choose one of four predictions of what would happen next, based on their understanding of conditions before the video stopped. The results were compared with those from a similar experiment in which participants made their predictions without performing either task.
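For readers who want the mechanics, the trial procedure can be summarised in a short sketch. This is our own illustration, not the team’s experiment code; the names (Clip, run_block) and the random stand-in for a participant’s response are all hypothetical:

```python
from dataclasses import dataclass
import random

@dataclass
class Clip:
    """One driving video clip with a hazard at a known time."""
    name: str
    hazard_time_s: float   # moment the hazard (e.g. a pedestrian stepping out) appears
    options: list[str]     # four candidate "what happens next" predictions
    correct: int           # index of the true continuation

def present_clip(clip: Clip) -> None:
    """Placeholder for playback: the clip stops immediately before the hazard."""
    print(f"Playing {clip.name}, stopping just before {clip.hazard_time_s:.1f}s")

def ask_prediction(clip: Clip) -> int:
    """Placeholder for the four-way forced choice; a random pick stands in
    for the participant's answer here."""
    return random.randrange(len(clip.options))

def run_block(clips: list[Clip]) -> float:
    """Run one block of trials and return the proportion of correct predictions."""
    correct = 0
    for clip in random.sample(clips, len(clips)):  # randomise presentation order
        present_clip(clip)
        correct += ask_prediction(clip) == clip.correct
    return correct / len(clips)
```

Comparing run_block scores across the baseline, heads-up and heads-down conditions mirrors the comparison the researchers describe.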
Perhaps not surprisingly, participants fared less well when performing another task, whether heads up or heads down, than when focusing purely on driving.
However, in a further test, visual cues drawing attention to an unfolding event were added in the augmented reality headset a few seconds before the video stopped. Participants then showed better awareness, and did better heads up than when looking down at the tablet.
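In a sketch, the cueing condition changes only the playback step: a highlight is drawn in the headset shortly before the clip stops. Building on the Clip type above, and with an assumed three-second lead time (the exact figure is ours, not the paper’s):

```python
CUE_LEAD_TIME_S = 3.0   # assumed lead time; the study showed cues seconds
                        # before the clip stopped

def cue_time(clip: Clip) -> float:
    """Time at which the AR headset should highlight the unfolding event."""
    return max(0.0, clip.hazard_time_s - CUE_LEAD_TIME_S)

def present_clip_with_cue(clip: Clip) -> None:
    """Playback placeholder for the cued condition."""
    print(f"Playing {clip.name}; AR cue at {cue_time(clip):.1f}s, "
          f"stopping just before {clip.hazard_time_s:.1f}s")
```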
The suggestion drawn from this is that there’s a zone in which we can be engaged in another task while staying in touch with developing road conditions.
The team has published a paper on the work, ‘Can you hazard a guess? Evaluating the effects of augmented reality cues on driver hazard prediction’, examining an aspect of autonomy that will need more scrutiny if driverless cars are to become commonplace.