A patent recently filed by Mazda outlined their concept for a “driving assistance system,” a simple name that conjures up ideas about improved accessibility or decision-making in the driving experience. However, the name belies a complex system of emotion detection and behavior manipulation: a series of algorithmic comparisons that assess a driver’s state and modify the driving experience accordingly. This patent is indicative of the ever-growing interest in drivers as an influential market for smarter, more connected technologies.
The patent’s abstract states,
When finding the driver distracted from driving (such as looking aside), this system provides driving assistance that enhances his or her motivation to drive by encouraging him or her to drive actively and thereby increase his or her internal focus on driving. Examples of specific driving assistance include giving exemplary driving instructions to the driver, providing him or her with navigation to a road with features that would entertain him or her through driving, and improving sensitivity to any change in the vehicle’s state responsive to driving operations. If dangerous driving is sensed while driving assistance is being provided, driving control may be performed to increase the driver’s tension.
Delivery of automated tips for “exemplary driving” (whether by voice or screen) seems like a natural development for safer, more technologically advanced cars and is not particularly surprising. What is notable, however, is the attention paid in the patent to the process of assessing a driver’s emotional state, the resultant behavior, and the explicit directive to change that behavior.
To “classify the driver’s condition” (i.e. their emotional state) the patent focuses on image capture of the driver’s face and sensors to detect operating states of the accelerator and brake pedal. The patent claims that the combined results from “measurement units” will determine if a driver is focused, distracted, entertained or tense; using these determinations, the system will perform functions to increase or decrease these states. To discourage distracted or dangerous driving, examples of “assistance” from the system include:
- “providing navigation to a road with features that would entertain him or her through driving”
- “a loudspeaker configured to emit an engine sound inside the vehicle’s cabin” to “make the driver feel the vehicle is running at higher speeds than the actual one” and motivate deceleration.
- “reducing the operation reaction forces of the accelerator and brake pedals”
In these “exemplary embodiments,” the car would be capable of manipulating the perceptions of the driver until they are back within a threshold of a “predetermined value.” And how are these values determined? The image below shows the flowchart of how a driver’s level of distraction is quantified, while a similar series of questions is used to determine a level of focus. Together, these create a score that results in appropriate “assistance” from the system.
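To make the flowchart’s logic concrete, here is a minimal sketch of what a threshold-based distraction score might look like. Everything here is an illustrative assumption: the field names, weights, saturation points, and the value of the “predetermined” threshold are invented for demonstration and are not taken from the patent.

```python
# Hypothetical sketch of threshold-based driver-state classification.
# All names, weights, and threshold values are illustrative assumptions,
# not details from Mazda's patent.

from dataclasses import dataclass


@dataclass
class DriverReadings:
    gaze_off_road_s: float   # seconds the driver's gaze has been off the road
    head_tilt_deg: float     # head tilt from upright, in degrees
    pedal_variance: float    # variance in accelerator/brake pedal inputs


# The patent's "predetermined value" -- illustrative only.
DISTRACTION_THRESHOLD = 0.5


def distraction_score(r: DriverReadings) -> float:
    """Combine sensor readings into a single 0-1 distraction score."""
    gaze = min(r.gaze_off_road_s / 2.0, 1.0)    # saturates after 2 s off-road
    tilt = min(r.head_tilt_deg / 30.0, 1.0)     # saturates at 30 degrees
    pedal = min(r.pedal_variance / 1.0, 1.0)
    return 0.5 * gaze + 0.3 * tilt + 0.2 * pedal  # weighted blend


def needs_assistance(r: DriverReadings) -> bool:
    """Trigger 'assistance' once the score crosses the threshold."""
    return distraction_score(r) > DISTRACTION_THRESHOLD


focused = DriverReadings(gaze_off_road_s=0.2, head_tilt_deg=3.0, pedal_variance=0.1)
distracted = DriverReadings(gaze_off_road_s=1.8, head_tilt_deg=20.0, pedal_variance=0.6)
print(needs_assistance(focused))     # False
print(needs_assistance(distracted))  # True
```

Even this toy version surfaces the design questions the patent leaves open: someone must choose the weights, the saturation points, and the threshold, and those choices encode assumptions about what “normal” driver behavior looks like.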
The determination of emotional state or level of focus based on body language (i.e. head tilt or eye movement) is certainly nothing new; surely anthropologists, psychiatrists, doctors, teachers, and even live performers like comedians rely on it to adjust their own behaviors and modify, or at least understand, the perspectives of their audience. Indeed, facial patterns have long been mapped across cultures to determine universal emotional representations used for studying behavior and development.
However, the computational analysis of head position relative to eye position to determine if a person is entertained or focused, for example, presents a paradigm filled with challenges and opportunities. How are these thresholds defined, and who decides what those thresholds should be? How do we design for accessibility, diverse body capabilities, or atypical behaviors? And the evergreen question: how much control do we give over to the machines? Extending far beyond the 15 feet of a car, this patent provides a fascinating look into the tensions of autonomy, for both human and computer. We look forward to seeing how things develop for both.