  {"id":582,"date":"2025-06-18T14:00:14","date_gmt":"2025-06-18T14:00:14","guid":{"rendered":"https:\/\/cmblog.neuroscience.queensu.ca\/?p=582"},"modified":"2025-11-14T16:58:53","modified_gmt":"2025-11-14T16:58:53","slug":"meta-physical-theatre-making-touch-real-in-virtual-reality","status":"publish","type":"post","link":"https:\/\/cmblog.neuroscience.queensu.ca\/meta-physical-theatre-making-touch-real-in-virtual-reality","title":{"rendered":"Meta-Physical Theatre: Making Touch Real in Virtual Reality"},"content":{"rendered":"\n<p>Imagine if you could enter a play by putting on a virtual headset. Now imagine that the characters in the play are shaking your hand or giving you a hug, and that you can feel them do these things.<\/p>\n\n\n\n<p>This is the kind of experience that researchers at Queen\u2019s University are developing through the \u201cMeta-Physical Theatre: Designing Physical Interactions in Virtual Reality Live Performances Using Robotics and Smart Textiles\u201d project. In a nutshell, the project integrates physical touch into virtual reality (VR) live performances. <\/p>\n\n\n\n<p><a href=\"https:\/\/smithengineering.queensu.ca\/directory\/faculty\/matthew-pan.html\">Dr. Matthew Pan<\/a> is the lead researcher on the project. Dr. Pan is an Assistant Professor in the Faculty of Engineering and Applied Science and a member of Ingenuity Labs Research Institute at Queen\u2019s. He was one of six inaugural recipients of a <a href=\"https:\/\/www.yorku.ca\/research\/connected-minds\/opportunities\/seed-grants\/\">Connected Minds seed grant<\/a> in 2024, supporting community-focused research that pushes boundaries in technology and society.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Virtual Reality Beyond the Visual<\/strong><\/h2>\n\n\n\n<p>In association with intersectional arts organizations, this project aims to build immersive narratives where participants can not only see and hear virtual characters but can also physically interact with them. 
It pushes the boundaries of VR to build immersive environments where touch becomes part of the narrative structure.<\/p>\n\n\n\n<p>Dr. Pan\u2019s idea began years earlier during his time at Disney. While working on <em>Star Wars: Galaxy\u2019s Edge<\/em>, he developed an immersive experience where visitors could feel an iconic \u201cforce grab\u201d (when a Jedi summons a lightsabre through the air). \u201cYou would put on a VR headset, and you would see, in the distance, this lightsabre that you can reach out to with your hand and it would start zooming toward you,\u201d he explains. \u201cYou would actually see the lightsabre come into your hand in VR. At the same time, a robot in the real world would deliver a lightsabre prop with the exact same timing and force.\u201d Though the project was ultimately shelved by Disney, Dr. Pan didn\u2019t give up on the idea. \u201cI thought there was a lot left on the table by shelving that project.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Making Touch Feel Real in VR<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-1024x576.jpg\" alt=\"\" class=\"wp-image-585\" srcset=\"https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-1024x576.jpg 1024w, https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-300x169.jpg 300w, https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-768x432.jpg 768w, https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-1536x864.jpg 1536w, https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-2048x1152.jpg 2048w, 
https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Copy-of-Untitled-Design-800x450.jpg 800w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Dr. Matthew Pan (L) and Michael Wheeler (R)<\/figcaption><\/figure>\n\n\n\n<p>Of course, there&#8217;s no VR theatre without theatre, and Dr. Pan&#8217;s collaboration with <a href=\"https:\/\/sdm.queensu.ca\/people\/wheeler-michael\">Michael Wheeler<\/a> is essential to the project. Wheeler is a fellow Ingenuity Labs and Connected Minds member, Assistant Professor in the DAN School of Drama and Music, and Director of Artistic Research at SpiderWebShow Performance. &#8220;Shortly after arriving at Queen\u2019s, I was introduced to Michael\u2026 we thought it would be really cool to actually have a theatrical narrative that uses interpersonal interactions in VR,\u201d Dr. Pan says. Supported by community organizations, Dr. Pan and Wheeler co-created a live VR theatre experience that integrates physical touch. \u201cIt\u2019s a high-risk, high-reward project that Connected Minds was willing to fund.\u201d<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201c[We are] creating this narrative that involves physical interactions with virtual characters. [We are] starting out simple, we\u2019re looking at simple interactions like high fives, or fist bumps, and handovers of objects where you don\u2019t necessarily need a lot of fidelity in terms of physical interactions.\u201d<\/em><\/p>\n<\/blockquote>\n\n\n\n<p>To make these moments feel real, the team uses haptic proxies. As Dr. 
Pan explains, haptic proxies are physical props that \u201cstand in for haptic interactions you would normally feel in the real world.\u201d For example, a robot-mounted hand can simulate a high five at the exact moment the participant sees it in VR.<\/p>\n\n\n\n<p>However, matching physical actions in the real world with virtual actions in VR creates a major technical challenge. The system must align spatial coordinates using motion capture and high-fidelity 3D pose tracking, so that the location of the proxy in the real world matches the location where the VR headset thinks it should be.<\/p>\n\n\n\n<p>Timing matters too. The team must also synchronize physical and virtual actions on the scale of milliseconds. \u201cFor dynamic experiences, it\u2019s even more complicated,\u201d Dr. Pan explains. \u201cParticularly for handovers or high fives, there needs to be not only a physical correlation, but also a temporal correlation. You can\u2019t have the high five happen in VR first, followed by it happening 500 milliseconds later in the physical world. It breaks the illusion.\u201d To avoid this lag, the team uses a system that shares information between the VR environment and robotic devices to keep latency low and synchronization precise.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Collaboration Across Disciplines<\/strong><\/h2>\n\n\n\n<p>The project is supported by two arts organizations: <a href=\"https:\/\/spiderwebshow.ca\/\">SpiderWebShow Performance<\/a> and <a href=\"https:\/\/www.bcurrent.ca\/\">bCurrent Performing Arts<\/a>. The former is a Kingston-based arts organization and Canada\u2019s first live-to-digital performance company. It focuses on exploring the intersection of live performance and digital technology. \u201cWith SpiderWebShow, we work with Adrienne Wong, who is contributing to the dramaturging,\u201d says Dr. Pan. 
bCurrent is a Toronto-based company that supports the work of Black and intersectional artists and plays a role in shaping the narrative voice of the theatrical experience. Together, these collaborators ensure that the narrative experience is inclusive and culturally relevant.<\/p>\n\n\n\n<p>For Dr. Pan, it\u2019s important that the creative process between engineers and artists is authentic. \u201cWe are emphasizing the co-creative nature of this project &#8230; [Michael and I] talk about these experiences at length and we have many ideas on what we eventually want to do with this technology, but one of the most important steps is we\u2019re not leaving each other out in the dark.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Beyond Theatre: Next Steps<\/strong><\/h2>\n\n\n\n<p>Dr. Pan has big ideas about where the project and technology could eventually go.<\/p>\n\n\n\n<p>\u201cWe already have inquiries into sports training,\u201d he says. \u201cThere\u2019s lots of implications for being able to customize training regimens for athletes.\u201d For instance, being able to train a hockey goalie in a safe and replicable environment without needing live opponents or expensive setups would be helpful to coaches.<\/p>\n\n\n\n<p>The technology could also support hands-on training for skilled trades, with the potential to lower barriers to technical training and to improve safety. \u201cWe could use a robot to mimic a lathe, and then do operator training in VR, especially when there is a shortage of machine equipment or safety concerns, we could have novices training with haptic proxies before moving on to the physical machine.\u201d<\/p>\n\n\n\n<p>Beyond performance and training, Dr. Pan is also excited about applying the technology to elder care and combating loneliness. He\u2019s been speaking to <a href=\"https:\/\/health.yorku.ca\/health-profiles\/index.php?dept=&amp;mid=1471484\">Dr. 
Lora Appel<\/a>, a Connected Minds researcher at York University, who studies VR in palliative and geriatric care. \u201cLoneliness is a huge issue with the elderly population,\u201d he explains. \u201c[We are exploring] if we can use this technology to make social connections.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>In a world where screens mediate so much of our social lives, Dr. Pan asks: what if we could reclaim a sense of touch, even through a headset? His project brings together engineering, art, performance and community, showing that immersive technology isn\u2019t about tricking the eye; it\u2019s about restoring presence and human connection.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Imagine if you could enter a play by putting on a virtual headset. Now imagine that the characters in the play are shaking your hand or giving you a hug, and that you can feel them do these things. This is the kind of experience that researchers at Queen\u2019s University are developing through the \u201cMeta-Physical 
[&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":589,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ocean_post_layout":"","ocean_both_sidebars_style":"","ocean_both_sidebars_content_width":0,"ocean_both_sidebars_sidebars_width":0,"ocean_sidebar":"","ocean_second_sidebar":"","ocean_disable_margins":"enable","ocean_add_body_class":"","ocean_shortcode_before_top_bar":"","ocean_shortcode_after_top_bar":"","ocean_shortcode_before_header":"","ocean_shortcode_after_header":"","ocean_has_shortcode":"","ocean_shortcode_after_title":"","ocean_shortcode_before_footer_widgets":"","ocean_shortcode_after_footer_widgets":"","ocean_shortcode_before_footer_bottom":"","ocean_shortcode_after_footer_bottom":"","ocean_display_top_bar":"default","ocean_display_header":"default","ocean_header_style":"","ocean_center_header_left_menu":"","ocean_custom_header_template":"","ocean_custom_logo":0,"ocean_custom_retina_logo":0,"ocean_custom_logo_max_width":0,"ocean_custom_logo_tablet_max_width":0,"ocean_custom_logo_mobile_max_width":0,"ocean_custom_logo_max_height":0,"ocean_custom_logo_tablet_max_height":0,"ocean_custom_logo_mobile_max_height":0,"ocean_header_custom_menu":"","ocean_menu_typo_font_family":"","ocean_menu_typo_font_subset":"","ocean_menu_typo_font_size":0,"ocean_menu_typo_font_size_tablet":0,"ocean_menu_typo_font_size_mobile":0,"ocean_menu_typo_font_size_unit":"px","ocean_menu_typo_font_weight":"","ocean_menu_typo_font_weight_tablet":"","ocean_menu_typo_font_weight_mobile":"","ocean_menu_typo_transform":"","ocean_menu_typo_transform_tablet":"","ocean_menu_typo_transform_mobile":"","ocean_menu_typo_line_height":0,"ocean_menu_typo_line_height_tablet":0,"ocean_menu_typo_line_height_mobile":0,"ocean_menu_typo_line_height_unit":"","ocean_menu_typo_spacing":0,"ocean_menu_typo_spacing_tablet":0,"ocean_menu_typo_spacing_mobile":0,"ocean_menu_typo_spacing_unit":"","ocean_menu_link_color":"","ocean_menu_link_color_hov
er":"","ocean_menu_link_color_active":"","ocean_menu_link_background":"","ocean_menu_link_hover_background":"","ocean_menu_link_active_background":"","ocean_menu_social_links_bg":"","ocean_menu_social_hover_links_bg":"","ocean_menu_social_links_color":"","ocean_menu_social_hover_links_color":"","ocean_disable_title":"default","ocean_disable_heading":"default","ocean_post_title":"","ocean_post_subheading":"","ocean_post_title_style":"","ocean_post_title_background_color":"","ocean_post_title_background":0,"ocean_post_title_bg_image_position":"","ocean_post_title_bg_image_attachment":"","ocean_post_title_bg_image_repeat":"","ocean_post_title_bg_image_size":"","ocean_post_title_height":0,"ocean_post_title_bg_overlay":0.5,"ocean_post_title_bg_overlay_color":"","ocean_disable_breadcrumbs":"default","ocean_breadcrumbs_color":"","ocean_breadcrumbs_separator_color":"","ocean_breadcrumbs_links_color":"","ocean_breadcrumbs_links_hover_color":"","ocean_display_footer_widgets":"default","ocean_display_footer_bottom":"default","ocean_custom_footer_template":"","ocean_post_oembed":"","ocean_post_self_hosted_media":"","ocean_post_video_embed":"","ocean_link_format":"","ocean_link_format_target":"self","ocean_quote_format":"","ocean_quote_format_link":"post","ocean_gallery_link_images":"on","ocean_gallery_id":[],"footnotes":""},"categories":[9],"tags":[],"class_list":["post-582","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-connected-minds","entry","has-media"],"rttpg_featured_image_url":{"full":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1.jpg",2000,1199,false],"landscape":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1.jpg",2000,1199,false],"portraits":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1.jpg",2000,1199,false],"thumbnail":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-
Grab-1-150x150.jpg",150,150,true],"medium":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-300x180.jpg",300,180,true],"large":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-1024x614.jpg",1024,614,true],"1536x1536":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-1536x921.jpg",1536,921,true],"2048x2048":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1.jpg",2000,1199,false],"ocean-thumb-m":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-600x600.jpg",600,600,true],"ocean-thumb-ml":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-800x450.jpg",800,450,true],"ocean-thumb-l":["https:\/\/cmblog.neuroscience.queensu.ca\/wp-content\/uploads\/2025\/06\/Force-Grab-1-1200x700.jpg",1200,700,true]},"rttpg_author":{"display_name":"Jaspreet Dodd","author_link":"https:\/\/cmblog.neuroscience.queensu.ca\/author\/jaspreet"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/cmblog.neuroscience.queensu.ca\/category\/connected-minds\" rel=\"category tag\">Connected Minds<\/a>","rttpg_excerpt":"Imagine if you could enter a play by putting on a virtual headset. Now imagine that the characters in the play are shaking your hand or giving you a hug, and that you can feel them do these things. 
This is the kind of experience that researchers at Queen\u2019s University are developing through the \u201cMeta-Physical&hellip;","_links":{"self":[{"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/posts\/582","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/comments?post=582"}],"version-history":[{"count":10,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/posts\/582\/revisions"}],"predecessor-version":[{"id":616,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/posts\/582\/revisions\/616"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/media\/589"}],"wp:attachment":[{"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/media?parent=582"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/categories?post=582"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cmblog.neuroscience.queensu.ca\/wp-json\/wp\/v2\/tags?post=582"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}