
Needed Human-Centric Transportation Research In Light of Technological Advances and Industry Messaging

  • birina
  • Aug 22
  • 4 min read

The current technological landscape, marked by advances in AI (especially generative and multimodal models), the long-awaited arrival of sophisticated wearable devices (remember Google Glass?), and an industry shift toward human-centric goals, presents a unique opportunity to redefine transportation. 


Traditionally, transportation research has prioritized efficiency, but this new era allows for a focus on human well-being, inclusivity, safety, and personal empowerment. 



MLLM illustrated by Gemini

At the heart of this transformation are Multimodal Large Language Models (MLLMs), which can process and reason across text, audio, and visual inputs in real time. Concurrently, wearable technology has evolved into proactive health and wellness hubs with advanced biometric sensors that can infer a user's physiological and affective state. Leading AI companies like OpenAI, Meta, and Google are steering these applications toward human benefit and, hopefully, responsible innovation.


Wearable Technology in 2025 and Beyond


Wearables are transforming from data trackers to intelligent companions, capable of creating "AI-Generated Health Twins" and acting as "Conversational and Actionable Micro-Coaches." 


This is driven by advanced chipsets and new biometric sensors that monitor blood oxygen, skin temperature, blood pressure, heart rate, heart rate variability (HRV), and, critically, physiological proxies for mental and emotional states such as electrodermal activity (EDA). 



Meta Ray-Ban AI Glasses

Smart glasses enable discreet data collection and offer augmented reality interfaces. The convergence of physiological sensing (widely adopted through devices such as Fitbit and Amazfit), location data (collected by smartphones for years), and health- and habit-related data (gathered by apps such as Apple Health, Fitbod, Google Fit, Fooducate, and many others) creates an opportunity to build an "individual psycho-geographic" layer that maps the emotional and behavioral impact of physical spaces, allowing for the development of personalized routing around "stress hotspots".
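As a thought experiment, the psycho-geographic layer could start as little more than spatial aggregation: snap geotagged stress readings to coarse grid cells and flag cells whose average score stays high. Everything below is a hypothetical sketch; the cell size, threshold, and normalized stress scores are illustrative assumptions, not any real platform's API.

```python
import math
from collections import defaultdict

# Hypothetical sketch: aggregate geotagged stress scores (e.g., normalized
# EDA-derived readings in [0, 1]) into coarse grid cells to surface
# candidate "stress hotspots" for a psycho-geographic layer.
CELL_DEG = 0.001  # illustrative cell size, roughly 100 m of latitude

def to_cell(lat, lon, cell=CELL_DEG):
    """Snap a coordinate to its grid-cell index."""
    return (math.floor(lat / cell), math.floor(lon / cell))

def stress_hotspots(readings, threshold=0.7, min_samples=3):
    """Return {cell: mean_score} for cells whose average stress exceeds
    the threshold, given an iterable of (lat, lon, score) readings."""
    cells = defaultdict(list)
    for lat, lon, score in readings:
        cells[to_cell(lat, lon)].append(score)
    return {
        cell: sum(scores) / len(scores)
        for cell, scores in cells.items()
        if len(scores) >= min_samples and sum(scores) / len(scores) > threshold
    }
```

The `min_samples` guard keeps one stressful moment from permanently branding a location; a real system would also need temporal decay and privacy-preserving aggregation before any such map could be shared.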


Industry Guiding Principles for an AI-Driven Future


Leading AI companies have articulated missions emphasizing responsibility, safety, and human benefit. OpenAI focuses on AGI benefiting all humanity, Meta on human connection and personal AI, and Google on responsible innovation and universal accessibility. Let’s take them at their word and discuss opportunities for transportation research! 


The Context-Aware Agent for Personalized and Inclusive Navigation


Digital phenotyping for mobility, created by Gemini

A truly personalized navigation agent would fuse three real-time data streams: external environmental context (crowd density, noise levels), a learned user profile (stable preferences such as avoiding tunnels), and real-time internal state (physiological stress signals from wearables, such as EDA and HRV). This would allow the agent to reroute dynamically in response to a user's rising stress levels. This approach reframes the navigation agent as a "digital phenotyping for mobility" tool, offering clinicians objective insights into environmental triggers (for the purposes of this research discussion, ethical questions about data privacy and potential misuse are set aside).


Future research should focus on co-design with neurodivergent individuals to ensure empowerment, not paternalism, and explore mechanisms for user feedback. Systematic sensory data collection and mapping of urban environments are needed to create "sensory maps" that inform the agent's routing decisions. Rigorous studies are required to evaluate the real-world impact of these agents on anxiety reduction and independence, using both objective physiological measures and qualitative methods.


Therapeutic Applications in Immersive Environments


Sensory cocoon, created with Gemini

Public transportation often causes sensory overload and anxiety due to unpredictable stimuli. Research needs to explore creating a "sensory cocoon" using immersive technologies to mediate sensory input. The audio component would use AI-powered headphones for intelligent, selective sound filtering (dampening jarring noises, preserving announcements) while overlaying calming audio. The visual component, delivered via AR smart glasses, would subtly augment the real world by dynamically dimming cluttered environments or blurring faces to reduce cognitive load and social pressure, while also highlighting clear paths or displaying calming visual patterns.
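The audio component's "intelligent, selective filtering" could reduce, at its simplest, to a gain policy keyed by sound-event class: an on-device classifier (not shown here) labels short audio frames, and a lookup decides how much of each frame to pass through. The labels, sets, and gain values below are hypothetical placeholders for illustration only.

```python
# Hypothetical gain policy for the audio "cocoon": an on-device sound-event
# classifier (assumed, not implemented here) labels each short audio frame;
# this maps labels to playback gains.
PRESERVE = {"announcement", "alarm", "own_name"}          # safety-critical
DAMPEN = {"crowd_chatter", "engine", "braking_screech"}   # jarring stimuli

def frame_gain(label, confidence, floor=0.15):
    """Return a playback gain in [0, 1] for one classified audio frame."""
    if label in PRESERVE:
        return 1.0  # never attenuate announcements or alarms
    if label in DAMPEN:
        # attenuate more as the classifier grows more confident,
        # but never below the floor, to preserve situational awareness
        return max(floor, 1.0 - confidence)
    return 0.6  # unknown sounds: partial attenuation, awareness retained
```

The non-zero floor and the pass-through set encode the immersion-awareness trade-off discussed below: the cocoon softens the environment rather than silencing it.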


Unlike Virtual Reality Exposure Therapy (VRET), which exposes users to simulated anxiety triggers, the "immersive sanctuary" shields users from overwhelming real-world stimuli for in-the-moment coping. These approaches are complementary: VRET builds long-term resilience clinically, while the "sensory cocoon" is an assistive digital therapeutic for daily life, potentially bridging clinical treatment to independent functioning.


Future research must quantify the physiological impact of sensory modulation on anxiety using laboratory and field studies. Critically, it must address the immersion-awareness trade-off, ensuring therapeutic benefits don't compromise situational awareness and safety. Finally, research should explore personalization and user control, investigating interfaces ranging from fully automated to granular manual control to ensure effectiveness and empowerment for diverse users.


Navigating Social Anxiety


Navigating Social Anxiety, created with Gemini

Urban environments, with their high density and constant social interaction, can be debilitating for individuals with Social Anxiety Disorder (SAD). AR offers a powerful in-situ intervention, overcoming the generalization limitations of VR exposure therapy. An AR application delivered via smart glasses could act as a "social-cognitive prosthetic" and in-situ coach, reducing cognitive and emotional burden by highlighting less congested paths, displaying pre-scripted conversation starters, or even analyzing the facial expressions of conversation partners to provide private feedback on social cues. Many other forms of assistance are surely needed that I cannot yet imagine. To discover them, broad focus-group discussions with domain experts and individuals with SAD are needed. 

Focused research is required to establish clinical efficacy through randomized controlled trials comparing AR interventions to gold-standard treatments. Usability and social acceptability are paramount, requiring research into discreet hardware design and subtle interfaces that don't draw unwanted attention. Finally, deep engagement with ethical implications is crucial, examining potential bias in emotion recognition algorithms and the risk of over-reliance on technology, aligning with principles of responsible, human-centered innovation.



These research areas collectively aim to transform our cities into a more humane, responsive, and supportive ecosystem for individual well-being. This is now feasible due to the convergence of multimodal AI, on-device processing, and continuous sensory data from wearables. Ethical frameworks from AI industry leaders guide this potential, compelling researchers to prioritize usability, trust, equity, and the long-term psychological and social impacts. 


The path forward demands human-subject research, co-design with vulnerable communities, rigorous evaluation, and critical examination of personalizing reality, ultimately shifting walking, cycling, transit riding, and any other travel from a metric of efficiency to a measure of human well-being and happiness.


