<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Cross-Modal | Haokun Wang</title><link>https://wanghaokun.site/tags/cross-modal/</link><atom:link href="https://wanghaokun.site/tags/cross-modal/index.xml" rel="self" type="application/rss+xml"/><description>Cross-Modal</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 15 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://wanghaokun.site/media/icon_hu_645fa481986063ef.png</url><title>Cross-Modal</title><link>https://wanghaokun.site/tags/cross-modal/</link></image><item><title>Let It Snow: Cross-Modal Cold &amp; Touch for VR Snowfall</title><link>https://wanghaokun.site/project/let-it-snow/</link><pubDate>Wed, 15 May 2024 00:00:00 +0000</pubDate><guid>https://wanghaokun.site/project/let-it-snow/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>&lt;strong>Let It Snow&lt;/strong> is a hands-free, wearable-free haptic experience: users hold their bare hands over a custom mid-air display that simultaneously fires focused ultrasound pressure points and directed cold airflow to simulate individual snowflakes landing — or rain drops splattering — on their palms.&lt;/p>
&lt;p>Published in &lt;strong>ACM IMWUT 2024&lt;/strong> (Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies), the project explores how cross-modal cold–tactile pairing creates emergent sensory illusions greater than either cue alone.&lt;/p>
&lt;hr>
&lt;h2 id="the-problem">The Problem&lt;/h2>
&lt;p>Simulating precipitation in VR is a classic immersion gap. Visually, snow and rain can look photorealistic. But without &lt;em>feeling&lt;/em> the cold, the wet, the gentle impact — users never quite believe it. Existing approaches require worn devices, which break the &amp;ldquo;bare hand in the weather&amp;rdquo; fantasy entirely.&lt;/p>
&lt;p>&lt;strong>Core Questions:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Can cold airflow and ultrasound pressure co-localize in mid-air to synthesize a snowflake or raindrop percept?&lt;/li>
&lt;li>Do cold and tactile cues mask each other, or can they be independently perceived at the same skin location?&lt;/li>
&lt;li>How should aggregated stimuli be rendered for heavy snowfall / rainfall?&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="research-approach">Research Approach&lt;/h2>
&lt;p>We drew on &lt;strong>cross-modal sensory integration&lt;/strong> theory: cold and tactile channels are processed by separate neural pathways (thermoreceptors vs. mechanoreceptors), so two signals can coexist without mutual interference — unlike, say, two sounds at the same frequency.&lt;/p>
&lt;p>Key hypothesis: a brief cold puff + simultaneous pressure focus = snowflake percept; a sharp cold burst + faster pressure = raindrop percept.&lt;/p>
&lt;p>We also designed an &lt;strong>aggregated haptic scheme&lt;/strong> for particle-dense scenes: rather than rendering every particle individually (physically impossible), we modulate cold intensity and pressure density proportionally to particle count, exploiting temporal summation in both sensory channels.&lt;/p>
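&lt;p>A minimal sketch of such a saturating transfer function (the half-saturation form and the constant are illustrative assumptions; the system&amp;rsquo;s actual mapping is not reproduced here):&lt;/p>

```python
def aggregate_intensity(particle_count: int,
                        half_saturation: float = 30.0) -> float:
    """Map a per-frame particle count to a normalized stimulus
    intensity in [0, 1]. The saturating shape mimics temporal
    summation: doubling the count does not double the percept."""
    return particle_count / (particle_count + half_saturation)

# Heavier precipitation drives a stronger, but saturating, aggregate cue.
levels = [aggregate_intensity(n) for n in (0, 10, 30, 100)]
```

The same normalized intensity can then scale both the Peltier drive and the ultrasound amplitude so the two channels stay perceptually coupled.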
&lt;hr>
&lt;h2 id="system-design">System Design&lt;/h2>
&lt;h3 id="hardware">Hardware&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Cold array&lt;/strong>: 6 Peltier modules (20 × 20 mm) mounted in a ring, each with a micro-fan to direct cold air toward the focus point; temperature range: 5°C–15°C below ambient&lt;/li>
&lt;li>&lt;strong>Ultrasound haptic display&lt;/strong>: Ultrahaptics STRATOS Inspire — 256 transducers at 40 kHz, creating mid-air pressure foci up to 200 mN at distances up to 22 cm&lt;/li>
&lt;li>&lt;strong>Depth tracking&lt;/strong>: Intel RealSense D435 hand tracking, integrated into Unity for palm position → focus point mapping&lt;/li>
&lt;li>&lt;strong>Control PC&lt;/strong>: Custom C++ driver for thermal timing; Unity handles audio, visuals, and hand tracking&lt;/li>
&lt;/ul>
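&lt;p>The palm-to-focus mapping reduces to clamping the tracked palm position into the display&amp;rsquo;s working volume. A hedged sketch (the coordinate convention and the 5 cm lower bound are assumptions; only the ~22 cm range comes from the hardware listed above):&lt;/p>

```python
def clamp_focus(palm_xyz, min_height_m=0.05, max_height_m=0.22):
    """Clamp a tracked palm position (meters, array-centred
    coordinates, z up) so the ultrasound pressure focus stays
    inside the display's ~22 cm focusing range."""
    x, y, z = palm_xyz
    z = min(max(z, min_height_m), max_height_m)
    return (x, y, z)
```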
&lt;h3 id="unity-vr-integration">Unity VR Integration&lt;/h3>
&lt;ul>
&lt;li>Built in &lt;strong>Unity 2021 LTS&lt;/strong>, standalone VR scene with Oculus Integration SDK&lt;/li>
&lt;li>Particle system drives two managers:
&lt;ul>
&lt;li>&lt;code>SnowRenderer&lt;/code>: handles visual particles with collision callbacks to trigger haptic events&lt;/li>
&lt;li>&lt;code>HapticAggregator&lt;/code>: accumulates per-frame particle counts, applies transfer function to Peltier intensity and ultrasound amplitude&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Snowflake percept: 150 ms cold puff + 40 Hz pressure burst; Raindrop: 60 ms sharp cold + 200 Hz single-pulse&lt;/li>
&lt;li>Scene contains interactive environments: snowy mountain valley, rainstorm on a city rooftop&lt;/li>
&lt;/ul>
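&lt;p>The per-particle parameters above can be sketched as a small dispatch table (a Python stand-in for the Unity/C# event handlers; names like &lt;code>on_particle_hit&lt;/code> are hypothetical):&lt;/p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percept:
    cold_ms: int      # duration of the cold puff
    am_freq_hz: int   # ultrasound amplitude-modulation frequency

# Timing values are those listed above; the structure is illustrative.
PERCEPTS = {
    "snowflake": Percept(cold_ms=150, am_freq_hz=40),
    "raindrop":  Percept(cold_ms=60,  am_freq_hz=200),
}

def on_particle_hit(kind: str) -> Percept:
    """Collision callback: look up the thermal/ultrasound parameters
    to fire for one particle landing on the palm."""
    return PERCEPTS[kind]
```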
&lt;hr>
&lt;h2 id="user-evaluation">User Evaluation&lt;/h2>
&lt;h3 id="perceptual-study--cold--tactile-independence">Perceptual Study — Cold × Tactile Independence&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>N = 14 participants&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Design&lt;/strong>: 2 (cold present/absent) × 2 (tactile present/absent) × 5 repetitions&lt;/li>
&lt;li>&lt;strong>Measure&lt;/strong>: detection accuracy per modality, reported interference rating&lt;/li>
&lt;li>&lt;strong>Finding&lt;/strong>: No significant cross-modal masking — participants detected cold and tactile cues independently (d&amp;rsquo; &amp;gt; 2.5 in both modalities)&lt;/li>
&lt;/ul>
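&lt;p>For reference, sensitivity d&amp;rsquo; in a yes/no detection task is z(hit rate) minus z(false-alarm rate); a standard computation follows (the 1/(2N) correction for extreme rates is a common convention, not necessarily the paper&amp;rsquo;s exact procedure):&lt;/p>

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(FA rate),
    with a 1/(2N) correction so rates of 0 or 1 stay finite."""
    def rate(k, n):
        return min(max(k / n, 1 / (2 * n)), 1 - 1 / (2 * n))
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return (z(rate(hits, hits + misses))
            - z(rate(false_alarms, false_alarms + correct_rejections)))
```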
&lt;h3 id="experience-study--aggregated-rendering-comparison">Experience Study — Aggregated Rendering Comparison&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>N = 20 participants&lt;/strong>, within-subject&lt;/li>
&lt;li>&lt;strong>Conditions&lt;/strong>: (1) no haptics, (2) tactile-only, (3) cold-only, (4) Snow (cold+tactile sparse), (5) Snow (cold+tactile aggregated)&lt;/li>
&lt;li>&lt;strong>Measures&lt;/strong>: presence subscale (IPQ), realism rating, preference ranking&lt;/li>
&lt;li>&lt;strong>Task&lt;/strong>: 3-minute free exploration of snowy mountain scene, 3-minute rainstorm scene&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="results--key-findings">Results &amp;amp; Key Findings&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>Aggregated scheme rated significantly more realistic&lt;/strong> than sparse individual-particle scheme (p&amp;lt;.01) for heavy snowfall&lt;/li>
&lt;li>Cold+tactile combination rated &lt;strong>+1.8 points&lt;/strong> on 7-pt presence scale vs. tactile-only (p&amp;lt;.001)&lt;/li>
&lt;li>18/20 participants preferred the full cross-modal condition; primary qualitative theme: &amp;ldquo;it actually felt cold and real, like being outside&amp;rdquo;&lt;/li>
&lt;li>System maintained stable cold delivery within ±0.3°C of the target temperature across a 10-minute continuous session&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="impact">Impact&lt;/h2>
&lt;ul>
&lt;li>📄 Published: &lt;strong>ACM IMWUT 2024&lt;/strong> — &lt;em>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.&lt;/em>&lt;/li>
&lt;li>DOI:
&lt;/li>
&lt;li>Framework for aggregated haptic rendering has been adopted in follow-on multi-particle VR haptics research&lt;/li>
&lt;/ul></description></item><item><title>Let It Snow: Designing Snowfall Experience in VR</title><link>https://wanghaokun.site/publication/journal-article-snow/</link><pubDate>Wed, 15 May 2024 00:00:00 +0000</pubDate><guid>https://wanghaokun.site/publication/journal-article-snow/</guid><description/></item></channel></rss>