Jaspreet Dodd – Connected Minds
https://cmblog.neuroscience.queensu.ca
Neural and Machine Systems for a Healthy, Just Society

I Hung Out With a G1 Humanoid Robot for a Day
(Wed, 22 Apr 2026)

Last semester, I watched a YouTube video of the Ingenuity Labs team unboxing their new Unitree G1 humanoid robot. The robot looked slightly larger than a child, and as the team gathered around it, they watched it walk, run, and even dance. I immediately had one thought: I need to meet this robot. A few months later, I found myself standing inside Ingenuity Labs, about to do exactly that.

Before seeing the robot up close, I sat down with Ramzi Asfour, Associate Director (Administration) at Ingenuity Labs Research Institute, to learn more about their newest addition. For Asfour, bringing a new robot into the lab is always an exciting moment. “It’s usually a fun experience when we open up boxes and there’s a new robot to get to do stuff,” recalls Asfour.

What is a G1 Humanoid?

When I finally saw the robot in person, the first thing that struck me was its size. Standing a little over four feet tall, the G1 looks almost like a small human figure: arms, legs, and joints designed to move in ways that mimic our own.

“It’s a robot that looks like a person… about the size of an average 10-year-old kid. It’s a humanoid form factor,” says Asfour. “The promise with it is that it can kind of do tasks that humans normally have done.”

Speaking with Asfour, I learned that the initial focus for the robot will be an agricultural application. But how exactly does a humanoid robot fit into agriculture?

The design of the robot, standing at about 4’4” (1.3m) and weighing around 35kg, allows it to perform tasks originally designed for humans. Its mobility allows it to walk, run, and navigate uneven terrain.

For instance, it would be useful for repetitive and tedious tasks often associated with agriculture. As Asfour explains, “if you’re in a greenhouse and want to bag produce, it’s very repetitive. You’re doing the same task over and over again.”

The robot is also equipped with advanced sensing and computing abilities. “It has computer vision. It has a built-in computer. It has some AI capability. You can talk to it. You can program it to do different things,” says Asfour.

A long-term goal is to have the robot perform tasks independently, but a first application will be telerobotics: in other words, a human in a comfortable spot operating the robot remotely.

Ramzi Asfour, Associate Director (Administration) at Ingenuity Labs Research Institute

Humans and Robots Working Together

Seeing the robot move makes its humanoid design immediately clear. The G1 doesn’t glide on wheels like many robots, but instead it walks. Watching it shift its weight from one leg to the other, I could easily imagine how robots like this might eventually operate in environments built for humans.

Asfour puts it clearly: “robotics and AI are going to be everywhere.” So a key question for Ingenuity Labs becomes, “how do you successfully roll out robots into a community situation or workplace situation and have it be a positive experience rather than people worrying about safety or job security?”

A first step is to study how humans behave around robots. Asfour says, “[we ask] how do people behave differently in the presence of a robot?”

Moreover, researchers must also study humans to improve robotic motion. “You study the biomechanics of how people walk… and then analyze that and come up with control schemes to have the robot walk better,” explains Asfour. “You want it to be more stable and look more natural while it’s moving around.”

Meeting the robot!

AI and Robotics

The robot also provides a unique avenue to integrate artificial intelligence (AI) with physical machines. “People have called it embodied AI or physical AI… where AI and robotics come together,” Asfour explains.

For years, AI has largely existed behind screens, powering algorithms, recommendations, and data analysis. But robots like the G1 begin to shift that, bringing AI into physical spaces where it can interact with the world in real time.

“AI exists on computers… but they needed a way to go out into the real world,” says Asfour. “You could see the potential to talk to a robot and ask it to do something in an environment.”

The idea of simply talking to a machine and having it carry out a task still feels slightly futuristic. But standing in the lab, watching the G1 move, it becomes easier to see how that future might not be so far away.

Researchers, Students, and Collaborators

The robot also creates new opportunities for students.

“All our robots here are industrial-grade… these are robots that industry will be using,” says Asfour. That means students are gaining experience with the kinds of systems they may encounter in their future careers.

Graduate students use the robots in research projects, while undergraduate engineering students get hands-on experience through capstone design projects. For many, it’s a rare chance to work directly with advanced robotic systems before entering the workforce.

Beyond individual projects, Ingenuity Labs is also encouraging collaboration across disciplines. Asfour notes that “anybody who’s part of the Connected Minds program can make use of it through our collaboration.”

In a space like this, the robot becomes a shared platform for ideas, bringing together researchers, students, and different fields to explore what these technologies can do.

Conclusion: Robots Everywhere

Standing next to the robot, it’s easy to imagine a future where machines like this are no longer unusual lab equipment, but working alongside people in tasks like harvesting, packaging, or other everyday work.

“At some point, robots are going to have to fill the jobs and do the tasks that humans normally do now,” Asfour explained. But rather than replacing expertise, these robots could expand what people are able to do.

As these systems develop, another key focus is making them easier for people to use. Rather than requiring specialized coding knowledge, researchers are working toward ways of interacting with robots more naturally, for example, through language or simple instructions. “You could have someone who knows a lot about metalworking and a little bit of robotics, and the robot becomes very intuitive to program.”

Ultimately, Asfour believes the future will involve a world where intelligent machines and humans work together. “We think robots and AI are going to be everywhere… and we want to put together systems for the benefit of society.”

Building Community in Neuroscience: Inside Queen's Women in Neuro
(Tue, 17 Mar 2026)

When I started graduate school at Queen's, one of the first things I did was look for a neuroscience club for women-identifying students. During my undergraduate degree, I co-founded the Women in STEM club at Simon Fraser University, having recognized the lack of community and representation for women in the sciences. Most of my professors were men, yet my class was split pretty evenly between women and men. I often felt isolated, and my journey in science seemed unnecessarily difficult – and science wasn't easy to begin with.

The discrepancy is clear to me. Women are interested in STEM, but systemic barriers, stereotypes, and a lack of role models continue to limit representation in the workforce. I recognized that challenge during my undergraduate years, and so did Blake Noyes.

Blake runs the Women in Neuro (WiN) club at Queen's, a student-led initiative focused on creating a supportive community for women-identifying neuroscientists. The club offers research discussions, mentorship opportunities, outreach activities, and an annual conference to help students navigate academic and professional pathways while inspiring the next generation of women scientists.

Why was Women in Neuro created?

For Blake, the idea of WiN began with a simple observation. "I came up with the plan after chatting with some fellow students in my program," she explained. "At the time, we were predominantly women (and I think we still are) but the faculty, including our supervisors, were predominantly men."

While their supervisors were supportive, Blake and her peers noticed that some experiences specific to women in science were difficult for male mentors to fully relate to. She explained, "though our supervisors are incredibly supportive, there are certain challenges like biases about women in science or managing pregnancy during graduate school that they don't have experience with."


Recognizing these issues, Blake and her colleagues set out to build something new. She says, “we wanted to build a community to support each other [through] these challenges and provide mentorship from women who have gotten through it.”

Challenging the Competitive Culture of Academia

Beyond issues of representation, there is an inherent competitiveness in academia. Graduate students frequently compete for the same grants, fellowships, and conference opportunities, and these pressures can intensify feelings of isolation.

“Grad school can feel competitive at times,” Blake says. “We are often all applying [for] the same grants. We are all trying to win the oral presentation spots at conferences. But I think we have a lot to learn from each other.”

The 2024-2025 Women in Neuro Executive Team.

Women in Neuro aims to counter that culture by emphasizing collaboration and community building.

Events like the club's annual conference create spaces where students can present their work, meet potential mentors, and build networks that extend beyond their own institutions.

“Getting the community together makes our research stronger and grows our network heading into the future,” Blake says.

Inspiring the Next Generation

Another key feature of the club is to inspire the next generation of women scientists. As Blake says: “I think it’s super important for young girls to know it’s a possibility to be neuroscientists, or scientists in general.”

Growing up, she recalls watching science programs like Bill Nye the Science Guy but rarely seeing women represented on the show. “When I was growing up, we watched Bill Nye, and I didn’t really see a lot of women in science.”

To help overcome this issue, the organization runs outreach programs for students ranging from elementary school to high school. Activities are tailored to different age groups and emphasize hands-on learning.

“With young kids, we introduce the lobes of the brain and do hands-on activities like making playdough neurons,” Blake explained. “Older students get more insight into the work we actually do, including testing equipment like microscopes, the EyeLink eye tracker, or the KINARM.”

These experiences often provide students with their first real glimpse of neuroscience research. Perhaps more important are the informal conversations that follow. “More importantly, we have time to answer their questions about university and tell them how we ended up where we are today,” Blake says.

The Role of Connected Minds

Support from initiatives like Connected Minds has also played an important role in expanding Women in Neuro’s reach. “Connected Minds gave us funding for our conference last year towards the purchase of our new poster display boards,” Blake explains. “That was a huge help and saves us the cost of rentals in the future.”

Organizing conferences can be expensive, and student-run initiatives often rely on sponsorship and grants. Funding support allowed the team to keep registration accessible for students.

“We were able to keep last year’s student ticket price at $25,” Blake explained. “We thought that was a great deal for a full-day conference with lunch, snacks, and unlimited coffee.”

For some attendees, the impact has been lasting. “Some undergraduates told me the conference gave them the confidence to pursue graduate school,” Blake says. “Support from funders like Connected Minds has been very influential in advancing the next generation of neuroscientists.”

Looking Ahead

As neuroscience continues to evolve, collaboration is becoming increasingly central to scientific progress. “Scientific research has been moving towards more collaborative efforts,” Blake says.

By bringing together women researchers across institutions and career stages, Women in Neuro hopes to strengthen those networks and encourage future partnerships. Blake concludes by noting “bringing women in neuroscience together now will facilitate knowledge transfer and encourage future collaborations.”

Co-Creation Stories: Start with People, Not the Problems
(Mon, 19 Jan 2026)

On the Connected Minds website, the word “co-creation” is seemingly everywhere. The site gives you a clear starting point for understanding co-creation, including a definition: a collaborative approach to research wherein researchers work directly with the people, communities, and sectors affected by an issue to jointly define problems, design and test ideas, and evaluate outcomes. The website also includes a step-by-step guide, developed from an extensive literature review, that walks researchers through the process.

On paper, co-creation is important, and it doesn’t take a genius to understand why. Collaboration pushes research forward. Diverse voices and perspectives broaden how problems are framed, and consulting with end-users can only make our solutions better. So yes, co-creation sounds great. But in practice, is it worth the extra time, discomfort, and the added coordination in a research world that rewards speed, efficiency, and productivity?

If you are like me, it isn’t enough to just read about the definition of co-creation or skim a how-to guide. You also need real-life proof of why it is worth going the extra mile to reach out to community members, researchers in other disciplines, and experts from all over. To understand why co-creation is so important, I spoke to two Queen's researchers, Dr. Claire Davies and Dr. Gavin Winston. Both have made co-creation a foundational part of how they do research.

Starting with people changes the problem

Dr. Claire Davies, Professor in Mechanical and Materials Engineering, spoke to me about a time she brought students who were developing robotic sensors and exoskeletons for stroke rehabilitation to meet patients in a clinic. As she recalls, after her students had met the patients, ‘they walked out of the clinic at the end and said, “those people don’t operate anywhere near what they do on YouTube. I am going to have to redesign my sensors.”’

Dr. Gavin Winston, Professor in the Department of Medicine, echoes a similar sentiment from a clinical perspective. Reflecting on early co-creation workshops with people living with epilepsy, he notes that “one of the overwhelming things was that people really liked the concept and felt that it would be useful.” In both cases, talking to end users shaped how the technology is built but, more importantly, reshaped how the researchers perceive the problem.

And this is where a major problem emerges: do we, as researchers, actually understand the problem at hand? We read the papers, do literature reviews, and make assumptions, but do we slow down enough to actually listen?

Co-creation slows research down, but that’s the point

It’s a race against time for the next grant, the next position, and the next project. Co-creation interrupts that process and slows it down.

Dr. Davies was recently awarded a grant for her project, When People Talk, Listen Completely, which focuses on developing AI-driven communication technologies, educational tools, and workplace strategies to improve employment access for Canadians with speech impairments. Across her work, she has noticed repeatedly that “ninety percent of engineering is designing, and people neglect to actually talk to clients [during] the design process.”

When People Talk, Listen Completely: Led by Dr. Claire Davies (Queen's), this team is developing AI-driven communication technologies, educational tools, and workplace strategies to improve employment access for Canadians with speech impairments.

Through Connected Minds-funded co-creation workshops with people who have speech impairments, Dr. Davies and her team learned how varied communication can be for this group. For instance, some participants relied entirely on speech-generating devices, others used vocal utterances that required time and familiarity to understand, and some participants needed interpreters.

Sometimes, there were long gaps while participants typed out their answers. At first, she thought these pauses would be labelled as inefficient since they resulted in delayed responses. However, she realized that it was important to deliberately leave them in. As she recalled, “Initially I thought, well let’s take out all those huge gaps where people are typing their answers. But then I realized that was the most important thing for people to learn, that you have to sit there and you have to wait and you have to listen and you don’t interrupt… you just have to be quiet and patient.” 

Ultimately, it became about letting people communicate at their own time, pace, and comfort level. The approach she and her team took forced researchers to slow down, wait, and listen completely. These workshops were then thematically analyzed and brought back to participants for validation, directly informing her grant application and future research themes. For Dr. Davies, there is a clear takeaway: “the biggest thing is going in with no preconceived conceptions of what you think needs it, [so having] no preconceived ideas of what you’re expecting out of it.” Involve people in the beginning and throughout the research process, not just at the end.

Consultation with end-users expands the research problem

Dr. Winston had a similar experience. He was also awarded a grant for his project, Wearable EEG for Personalized Epilepsy Management, which focuses on developing a smart, wearable electroencephalogram (EEG) device designed for clinical accuracy, long-term comfort, and ethical use in everyday environments. His team aims to bridge the gap between short clinic EEG recordings and long hospital stays by developing a wearable EEG device that provides clinical-quality data, long battery life, and full electrode coverage, and that can be used independently at home. At first, the research problem may sound like a technical challenge, but it becomes something much bigger.

Wearable EEG for Personalized Epilepsy Management: Co-led by Dr. Gavin Winston (Queen's), this team is developing a smart, wearable electroencephalogram (EEG) device designed for clinical accuracy, long-term comfort, and ethical use in everyday environments.

Involving people in the research process immediately expands the scope of the problem beyond medicine or engineering alone. Once an EEG wearable is introduced into the home, questions of ethics, legality, accessibility, privacy, and caregiver impact become as important as the hardware itself. It quickly becomes clear that what works in a controlled hospital environment doesn’t always fit into someone’s daily routine.

Addressing these questions requires an interdisciplinary team that brings together clinicians, engineers, ethicists, lawyers, community organizations, patients, and caregivers. Dr. Winston and his team heard directly from participants who would eventually use the technology. For instance, “things such as comfort were brought up as being critical if [the device] was going to be used.” Beyond comfort, participants also discussed usability and support: “They felt there would need to be clear availability of technical support… or at least training so they know how to use such a device.”

As Dr. Winston points out, “there’s another large part of the project which looks at all the ethical and legal implications of such a device… if we’re recording data in a home environment, what are the security implications of that?” Once people are involved, the problem is no longer about engineering a better device. It’s also about understanding the context in which it will be used. Starting with people means that no single discipline can fully understand the problem on its own.

Conclusion

For me, the value of co-creation has become clear, not because it sounds good on a website, but because of what it forces researchers to think about. Starting with people changes what we think the problem is, slows us down in ways that enhance our solutions, and makes it impossible to work by ourselves.

Exo-Sensory Augmentation: Designing Inclusive Wearable Solutions for Safer Work Environments
(Fri, 14 Nov 2025)

When physicians, nurses, and healthcare staff can no longer complete work due to workplace injury, patients inevitably face the consequences.

“[The Canadian] healthcare system focuses strongly on patient outcomes to treat patients better and to improve overall health. However, a lot of the time, we neglect the needs of our healthcare professionals,” says Dr. Qingguo Li of Queen's University.

Dr. Qingguo Li

Dr. Li leads the Bio-Mechatronics and Robotics Lab (BMRL) at Ingenuity Labs Research Institute. His awarded project, ‘Exo-sensory Augmentation to Reduce Musculoskeletal Injury Risk in Clinical Settings’, aims to use innovative wearable technology to enhance sensory awareness and mitigate injury risks.

A Hidden Problem

While healthcare focuses on the treatment of Canadians, a forgotten demographic ends up being the people who provide that care: physicians, nurses, and healthcare staff themselves.

“When we come back to the literature, there are surveys that find that a very high percentage of people working in operating rooms, for instance, experience back problems and musculoskeletal injury,” says Dr. Li. The issue can shorten careers and often results in early retirement; as Dr. Li puts it, “if the doctor is sick, who can do the treatment?”

“We focus on a group of clinicians who work in a real-time X-ray environment,” says Dr. Li. In these settings, staff wear lead aprons weighing about 15 to 25 pounds, often bending over patients for sustained periods. The combination of heavy aprons and sustained, awkward posture raises spinal loads and the chance of injury.

Practical fixes have been proposed, but they remain inadequate. “One way is exoskeletons; however, [the surgeons] have told us that even if we develop a pretty exoskeleton, they would not use it,” explains Dr. Li. “Exoskeletons are bulky, heavy, and affect range of motion.” Another solution would be to hang a harness from the ceiling. “This would hold the weight of the load as a kind of suspension system,” he says. However, there are issues with mobility in a delicate operating environment. “[Surgeons] would need to drag the harness, and when moving in a surgery environment, that’s not a good solution.”

An Innovative Solution to a Sensory Problem

Dr. Li and his team. (L to R: Will Bonin, Sophie Lau, Jialin Luo, Paul Quinlan, Qingguo Li, Natasha Anderson, Samuel Brost, Romaric Bambara)

Dr. Li realized that the core issue is not just physical load, but posture awareness. “This is why we propose an exo-sensory augmentation approach,” he says. “The major issue is posture.” In a demanding environment like surgery, with long procedures, staff are deeply focused on the task at hand and do not notice how their posture deteriorates. As Dr. Li explains, “the clinicians or the surgeons or the nurses are not aware [of their posture] during the operation, so if we give them this information, hopefully they can adjust their posture, stretch, and relax a bit.”

Dr. Li compares exo-sensory augmentation to the use of glasses or hearing aids — devices that amplify our senses. “Sometimes you forget about posture because you’re focusing on other tasks. So, [the system] can provide that information to you,” he explains. This approach gives staff real-time awareness of their posture as they work, allowing them to regain a sense they had temporarily lost under cognitive and physical load.

“We took a user-centered design approach by involving users in all stages of decision-making,” says Dr. Li. “We work very closely with surgeons and stakeholders. We have regular meetings with them to make solutions that are both feasible and acceptable in a clinical environment.”

The system under development has three modules: posture measurement via small wearable sensors placed on the user’s back; load estimation using biomechanical models to estimate spinal loading at different vertebral levels (e.g., C7, L4); and feedback to the user, delivered as either haptic cues (e.g., a buzz when posture exceeds a threshold) or onscreen indicators (e.g., status on a display). These real-time cues prompt adjustments without disrupting surgery.
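To make the three-module pipeline concrete, here is a minimal sketch of the sense-estimate-alert loop in Python. The load formula, threshold value, and function names are illustrative assumptions for exposition only, not the team's actual biomechanical model.

```python
import math

def estimate_spinal_load(trunk_flexion_deg: float, apron_kg: float) -> float:
    """Toy load proxy (assumption): estimated load grows with trunk
    flexion angle and with the weight of the lead apron being worn."""
    return (1.0 + math.sin(math.radians(trunk_flexion_deg))) * (50.0 + apron_kg)

def feedback(trunk_flexion_deg: float, apron_kg: float,
             threshold: float = 80.0) -> str:
    """Module 3: emit a haptic cue ("buzz") only when the estimated
    load from module 2 exceeds a hypothetical safety threshold."""
    load = estimate_spinal_load(trunk_flexion_deg, apron_kg)
    return "buzz" if load > threshold else "ok"

# Module 1 would supply the angle from wearable back sensors; here we
# simulate two readings with a ~9 kg lead apron.
print(feedback(10.0, 9.0))  # upright posture -> "ok"
print(feedback(60.0, 9.0))  # sustained bend  -> "buzz"
```

The key design point from the interview is preserved: the system only surfaces information, leaving the posture correction itself to the clinician.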

Designing for Accessibility

An important focus for Dr. Li is accessibility and inclusivity for all clinical roles and users. “You have nurses, surgeons, and radiology technicians,” says Dr. Li. “We are trying to develop a system that works for all of them.”

Dr. Li and his group therefore study the posture demands unique to each staff role in a clinical setting. For instance, they take into consideration that a nurse assisting a surgeon, or a technician handling X-ray equipment, face different ergonomic challenges. This consideration has underpinned the adaptability of the system to each staff member’s movement patterns and potential risks.

Dr. Li’s group also considers sex differences in how users perceive and respond to feedback. “We consider if we need to develop a universal intervention or a system with different parameters,” Dr. Li explains. By accounting for these factors early in design, the team aims to ensure the final system can support clinicians of all body types, abilities, and backgrounds.

Looking Ahead

Looking ahead, Dr. Li hopes to validate the exo-sensory system in clinical trials and to explore commercialization. “We work with Queen's Partnerships and Innovation to get the technology into the hands of our end users,” he explains.

Dr. Li’s long-term goal is to move beyond the lab and integrate the technology in real-time workflows. “We believe this technology can not only be applied in clinical settings or healthcare, but also in manual labour, working in a factory, for example… we believe exo-sensory augmentation can benefit the ergonomics and working conditions of other fields and make a broader impact.”

Meta-Physical Theatre: Making Touch Real in Virtual Reality
(Wed, 18 Jun 2025)

Imagine if you could enter a play by putting on a virtual reality headset. Now imagine that the characters in the play are shaking your hand or giving you a hug, and that you can feel them do these things.

This is the kind of experience that researchers at Queen's are developing through the “Meta-Physical Theatre: Designing Physical Interactions in Virtual Reality Live Performances Using Robotics and Smart Textiles” project. In a nutshell, the project integrates physical touch into virtual reality (VR) live performances.

Dr. Matthew Pan is the lead researcher on the project. Dr. Pan is an Assistant Professor in the Faculty of Engineering and Applied Science and a member of Ingenuity Labs Research Institute at Queen's. He was one of six inaugural recipients of a Connected Minds grant in 2024, supporting community-focused research that pushes boundaries in technology and society.

Virtual Reality Beyond the Visual

In association with intersectional arts organizations, this project aims to build immersive narratives where participants can not only see and hear virtual characters but can also physically interact with them. It pushes boundaries on VR environments to build immersive environments where touch becomes a part of the narrative structure.

Dr. Pan’s idea began years earlier during his time at Disney. While working on Star Wars: Galaxy’s Edge, he developed an immersive experience where visitors could feel an iconic “force grab” (when a Jedi summons a lightsabre through the air). “You would put on a VR headset, and you would see, in the distance, this lightsabre that you can reach out to with your hand and it would start zooming toward you,” he explains. “You would actually see the lightsabre come into your hand in VR. At the same time, a robot in the real world would deliver a lightsabre prop with the exact same timing and force.” Though the project was ultimately shelved by Disney, Dr. Pan didn’t give up on the idea. “I thought there was a lot left on the table by shelving that project.”

Making Touch Feel Real in VR

Dr. Matthew Pan (L) and Michael Wheeler (R)

Of course, there’s no VR theatre without theatre, and Dr. Pan’s collaboration with Michael Wheeler is essential to the project. Wheeler is a fellow Ingenuity Labs and Connected Minds member, Assistant Professor in the DAN School of Drama and Music, and Director of Artistic Research at SpiderWebShow Performance. “Shortly after arriving at Queen's, I was introduced to Michael… we thought it would be really cool to actually have a theatrical narrative that uses interpersonal interactions in VR,” Dr. Pan says. Supported by community organizations, Dr. Pan and Wheeler co-created a live VR theatre experience that integrates physical touch. “It’s a high risk, high reward project that Connected Minds was willing to fund.”

“[We are] creating this narrative that involves physical interactions with virtual characters. [We are] starting out simple, we’re looking at simple interactions like high fives, or fist bumps, and handovers of objects where you don’t necessarily need a lot of fidelity in terms of physical interactions.”

To make these moments feel real, the team uses haptic proxies. As Dr. Pan explains, haptic proxies are physical props that “stand in for haptic interactions you would normally feel in the real world.” For example, a robot-mounted hand can simulate a high five at the exact moment the participant sees it in VR.

However, matching physical actions in the world and virtual actions in VR creates a major technical challenge. The system must align spatial coordinates using motion capture and high-fidelity 3D pose tracking, so that the location of the proxy in the real world matches the location where the VR headset thinks you should be.

Timing matters too. The team must also synchronize physical and virtual actions on the scale of milliseconds. “For dynamic experiences, it’s even more complicated,” Dr. Pan explains. “Particularly for handovers or high fives, there needs to be not only a physical correlation, but also a temporal correlation. You can’t have the high five happen in VR first, followed by it happening 500 milliseconds later in the physical world. It breaks the illusion.” To avoid this lag, the team uses a system that shares information between the VR environment and robotic devices to keep latency low and synchronization precise.
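The timing constraint Dr. Pan describes can be sketched in a few lines. This is an illustrative sketch only, not the project's actual control code: the skew tolerance and the robot's actuation delay are assumed values, and the function names are hypothetical.

```python
# Assumed tolerance (ms) between the virtual and physical event before
# the illusion breaks; the real threshold would be determined empirically.
MAX_SKEW_MS = 50.0

def schedule_proxy_command(virtual_event_ms: float,
                           robot_delay_ms: float) -> float:
    """Command the robot early by its known actuation delay so the
    physical contact lands at the same instant as the VR event."""
    return virtual_event_ms - robot_delay_ms

def in_sync(virtual_ms: float, physical_ms: float) -> bool:
    """Check whether the two events landed within the skew tolerance."""
    return abs(virtual_ms - physical_ms) <= MAX_SKEW_MS

# A VR high five at t = 1000 ms with a 120 ms robot delay means
# issuing the command at t = 880 ms.
print(schedule_proxy_command(1000.0, 120.0))  # 880.0
print(in_sync(1000.0, 1030.0))   # within tolerance -> True
print(in_sync(1000.0, 1500.0))   # the 500 ms lag from the article -> False
```

The design choice mirrors the article: rather than reacting after the VR event fires, the system shares state between the VR environment and the robot ahead of time, so latency can be compensated instead of merely observed.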

Collaboration Across Disciplines

The project is supported by two arts organizations: SpiderWebShow Performance and bCurrent. The former is a Kingston-based arts organization and Canada’s first live-to-digital performance company, focused on exploring the intersection of live performance and digital technology. “With SpiderWebShow, we work with Adrienne Wong, who is contributing to the dramaturging,” says Dr. Pan. bCurrent is a Toronto-based company that supports the work of Black and intersectional artists and plays a role in shaping the narrative voice of the theatrical experience. Together, these collaborators ensure that the narrative experience is inclusive and culturally relevant.

For Dr. Pan, it’s important that the creative process among engineers and artists is authentic. “We are emphasizing the co-creative nature of this project … [Michael and I] talk about these experiences at length and we have many ideas on what we eventually want to do with this technology, but one of the most important steps is we’re not leaving each other out in the dark.”

Beyond Theatre: Next Steps

Dr. Pan has big ideas on where the project and technology could eventually go.

“We already have inquiries into sports training,” he says. “There’s lots of implications for being able to customize training regimens for athletes.” For instance, being able to train a hockey goalie in a safe and replicable environment without needing live opponents or expensive setups would be helpful to coaches.

The technology could also support hands-on training for skilled trades, with the potential to lower barriers to technical training and to improve safety. “We could use a robot to mimic a lathe, and then do operator training in VR, especially when there is a shortage of machine equipment or safety concerns, we could have novices training with haptic proxies before moving on to the physical machine.”

Beyond performance and training, Dr. Pan is also excited about applications in care for the elderly and combating loneliness. He has been speaking with a Connected Minds researcher at York University who studies VR in palliative and geriatric care. “Loneliness is a huge issue with the elderly population,” he explains. “[We are exploring] if we can use this technology to make social connections.”

Conclusion

In a world where screens mediate so much of our social lives, Dr. Pan asks: what if we could reclaim a sense of touch, even through a headset? His project brings together engineering, art, performance and community, showing that immersive technology isn’t just about tricking the eye; it’s about restoring presence and human connection.

The Brain in Motion: Insights from Dr. Gunnar Blohm’s Interdisciplinary Lab
Wed, 23 Apr 2025

When was the last time you caught a ball, typed an email, or crossed the street? Did you think about this action, or was it simply something you did? These movements are second nature to us, but each one involves complex interactions between sensory input, neural noise and split-second decision making.

How the brain transforms perception into action drives the work of Dr. Gunnar Blohm and the multidisciplinary team in his lab. Dr. Blohm is Professor of Computational Neuroscience at Queen’s University, as well as the Vice-Director (Queen’s) of Connected Minds. Members of his lab combine physics, psychology and mathematics to investigate how the brain learns to move, adapt, and make decisions.

“Sensory-motor processes are the things to study because they are the reason the brain evolved — to use sensation, or sensory inputs, to sense and act on the world,” says Dr. Blohm.

Dr. Blohm began his academic career with a Master’s degree in physics, but pivoted to neuroscience during his PhD, drawn in by the challenge of understanding living systems. For Dr. Blohm, the mission behind his research is clear: to uncover the brain’s most fundamental mechanisms. As he explains, “through the study of sensory motor control, [researchers] can uncover fundamental principles and mechanisms of brain functions at various scales [from] behaviour to how the brain is set up.”

In practice, how does this mission shape the research questions in his lab? It tends to attract a diverse group of researchers from different fields. While the students in Dr. Blohm’s lab may be investigating different parts of a system (for instance, from normative to behavioural modelling) they all seek to answer the same fundamental question: how does the brain adapt, decide, and act in the world?

At the same time, building a lab isn’t just about answering questions. It’s also about training students to ask better ones. “Graduate school is training through research,” Dr. Blohm says. He hopes the main skill his students take away from their time in the lab is critical thinking: learning how to ground ideas in evidence, assess the logic behind different scientific approaches, and analyze data in a systematic way. He also encourages students to follow their passions, pursue unconventional questions, and collaborate across disciplines.

3 Students, 3 Paths, 1 Mission

PhD students (L to R): Connor Braun, Arefeh Farahmandi, and Sydney Doré

For Connor, understanding the brain begins with pen and paper. As a mathematics PhD student, he is investigating the idea that neurons are not simply passing along signals but acting as decision-makers with each neuron operating with limited information in a complex network. Connor’s research uses mathematical frameworks from multi-agent systems and reinforcement learning to model the brain as a decentralized network. His research asks how neurons know what to do in such a noisy environment. It’s a question that mirrors problems in economics and game theory, fields where individual agents make choices based on sparse and sometimes conflicting information.

Connor chose to work with Dr. Blohm after seeing the diversity of research in his lab. “[Dr. Blohm] really emphasizes the importance of collaboration and multi-disciplinary work for answering questions about the brain,” he explains. This openness led to the co-supervision of Connor’s thesis with a mathematician at Queen’s. Their project now sits at the intersection of neuroscience, systems theory, and machine learning. “It’s exciting to have an idea,” Connor says, “and to realize that not many people are having the same idea.”

While Connor builds mathematical theories of how neurons communicate, Arefeh is focused on how humans move. With a background in electrical engineering and control systems, Arefeh’s research blends machine learning, biomechanics, and neuroscience to answer one question: how can we detect abnormal movement patterns, and possibly diagnose disease, just by analyzing video?

“How do we distinguish different movements, extract primitives, and then, from those features, distinguish abnormality?” she explains. Her project uses machine learning tools to analyze simple, even smartphone-recorded, videos. She extracts 3D pose data from these videos and identifies movement primitives—subtle, repeated patterns in how we walk or gesture. The long-term goal is to create a tool that flags movement irregularities, prompting early screenings or medical follow-ups. “Like, hey,” she says, “maybe your grandma’s walk looks a little different—maybe she should consult with a physician.”
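One common way to find repeated patterns like this is to slide a window over the pose time series and cluster the windows, so that each cluster centre acts as a candidate “primitive.” A minimal numpy sketch of that idea, assuming the pose data arrives as a (frames × joints) array; this is an illustration of the general technique, not the lab’s actual pipeline:

```python
import numpy as np

def extract_primitives(pose_series, window=30, k=4, iters=25):
    """Cluster sliding windows of a (T, D) pose time series into k candidate
    movement primitives using plain k-means. Returns (labels, centers)."""
    X = np.asarray(pose_series, dtype=float)
    # Each overlapping window becomes one feature vector of length window * D.
    wins = np.stack([X[i:i + window].ravel() for i in range(len(X) - window + 1)])
    # Simple deterministic init: k windows spread evenly across the recording.
    centers = wins[np.linspace(0, len(wins) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each window to its nearest center, then recompute centers.
        d2 = ((wins[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wins[labels == j].mean(axis=0)
    return labels, centers
```

A screening tool could then flag recordings whose windows sit unusually far from every learned centre, prompting the kind of follow-up Arefeh describes.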

Finally, while Connor builds theoretical frameworks and Arefeh designs practical tools, Sydney’s work unfolds in real time, with participants adapting to simulated vision loss over the course of her experiments. Her research investigates how people respond to central vision loss, such as that caused by age-related macular degeneration (AMD), and whether the brain can rewire itself to compensate.

Using eye-tracking, Sydney’s experiments mimic a central blind spot on screen, forcing participants to rely on their peripheral vision to follow moving targets and track motion. Over the course of several sessions, she observes whether participants develop a new point of focus, known as a preferred retinal locus (PRL). “We already know that some AMD patients can develop a PRL in place of their absent fovea in tasks such as reading,” she explains. “However, it’s unknown if and how this PRL can be used in tracking [moving] targets.”
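The gaze-contingent display logic behind such experiments is conceptually simple: on every frame, read the current gaze position and blank any stimulus that falls inside a simulated central scotoma around it, so only peripheral vision is informative. A minimal sketch, where the coordinates, radius, and function names are illustrative assumptions rather than the lab’s parameters:

```python
import math

SCOTOMA_RADIUS_DEG = 5.0  # illustrative size of the simulated central blind spot

def target_visible(gaze, target, radius=SCOTOMA_RADIUS_DEG):
    """True if the target (in degrees of visual angle) lies outside the
    simulated central scotoma, i.e. it can only be seen peripherally."""
    return math.dist(gaze, target) > radius

def render_frame(gaze, targets):
    """Return only the targets that should be drawn this frame."""
    return [t for t in targets if target_visible(gaze, t)]
```

Because the blind spot follows the eye-tracker’s gaze estimate frame by frame, any stable fixation strategy a participant develops away from the scotoma shows up directly in the data as a candidate preferred retinal locus.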

Her path to the lab was simple: “I’ve been in the lab since my [undergraduate] 4th year thesis project,” she says. “I read about the research in this lab and thought it would really align with my interests. I’ve been here ever since!”

Conclusion: The Lab, in Practice

These PhD students work on diverse problems, yet all of their research is grounded in a core mission and philosophy, asking: how does the brain adapt, decide, and act in the world? And how do we train scientists to answer this question in a way that promotes curiosity and critical thinking in equal measure? As Dr. Blohm explains, he wants to give his students “the tools and critical thinking and perspectives [to find] their own paths.”

The work coming out of Dr. Blohm’s lab reminds us that the future of neuroscience lies in embracing uncertainty and complexity. Whether it’s a neuron making a choice, a body moving abnormally, or a brain adapting to vision loss, the questions that matter are rarely the easiest ones to answer. What connects these researchers isn’t a single method or problem, but a philosophy that progress begins with curiosity, collaboration, and the courage to ask meaningful questions.

As Dr. Blohm explains, “the future of science lies in complex science.”

Connected Minds Research Retreat 2025: A Trainee’s Perspective
Wed, 26 Mar 2025

On February 27–28, 2025, the second annual Connected Minds Research Retreat was held at York University. I arrived expecting to learn from panels, listen to poster presentations, and network. Instead, I left with questions about how interdisciplinary collaboration and ethics shape the ever-evolving landscape of neuroscience, technology, and society. More importantly, I found myself reflecting on how I, as a graduate student, can shape my own training to become an informed, well-rounded, and adaptable researcher.

This year’s retreat was the first one attended by graduate trainees and postdoctoral fellows. Though I was aware of the incredible scope of research by Connected Minds, it wasn’t until I was talking to my fellow trainees that I grasped the unique community being developed across widely varying fields. One moment, I was listening to an engineer discuss robotics; the next, an artist challenging the way we think about technology. It was informative, but more importantly, it was eye-opening and perspective-changing.

The more immersed I became in the retreat, the more I felt inspired by the power of interdisciplinary collaboration, the role of trainee voices in shaping these interactions, and the ethical considerations that should guide this work, especially in relation to artificial intelligence (AI).

The Power of Interdisciplinary Collaboration

One of my biggest takeaways from the retreat was learning how seemingly unrelated disciplines – neuroscience, engineering, arts, ethics, and more – can come together to tackle complex questions. I came to realize that effective collaboration is not just about pooling different areas of expertise; it is about learning to communicate across different ways of thinking and problem-solving.

In one panel discussion, Dr. Anne Sullivan highlighted how each discipline has its own distinct priorities, methodologies, and even language, creating barriers to collaboration. For instance, as a neuroscientist I may discuss precise behavioural measurements and neural processing, but an artist speaks in terms of expression and interpretation, while an engineer focuses on efficiency and optimization. It made me reflect on my own research: how can I make my work accessible to people outside of my field? How can I remain open to different ways of thinking?

The co-creation panel featured several researchers who had worked together on a team, and they emphasized that co-creation requires more than assembling experts: it demands intentionality, shared purpose, and ongoing relationship-building. It made me question: how do we ensure that collaborations are meaningful and serve a real purpose? How do we build relationships grounded in trust and accountability? I continue to ponder these and related questions.

Co-creation panel with facilitators: Dr. Sachil Singh (L) & Dr. Vincent DePaul (not pictured) and panelists (L to R): Dr. Claire Davies, Dr. Karen Yeates, Khadyn Butterfly, Dr. Roseanne Aleong, Laura Levin, Dr. Michael Wheeler, & Dr. Hossein Kassiri.

Trainee Voices at the Forefront

A standout event was the trainee poster sessions. Speaking to my fellow trainees, the consensus was that we engaged most meaningfully with faculty and other researchers in these sessions. The poster sessions were not just an opportunity to showcase our work, but they were also a chance to exchange ideas with researchers across disciplines and receive valuable feedback.

Going into the retreat, I was unsure how my work would fit into the larger Connected Minds framework. But as I explored and discussed others’ research, I was motivated by the vast range of topics. Moreover, I came to realize how interconnected our research really is. Though topics ranged from AI-driven healthcare and ethical neurotechnology to autobiographical performance and Indigenous language revitalization – all vastly different areas – these conversations inspired us to find common ground, and there always seemed to be threads tying us together.

I am just beginning my research career, but trainee involvement at this year’s retreat highlighted to me that I am not a passive observer. Alongside other trainees, I am an active part of an ongoing conversation in which everyone’s unique perspectives and experiences matter. These conversations give me confidence that I have found future collaborators.

Trainee poster sessions

Ethical Considerations for AI

While the retreat showcased advancements in AI and neuroscience, it also made me question: what are the ethical trade-offs and implications of these innovations?

One conversation that stuck with me came from Dr. Qingguo Li, who posed the question: “how much do we really need AI assistance?” It made me rethink AI-driven development and what we count as technological progress. We often assume that more AI is better because it enhances efficiency and reduces human error, but what if there are cases where AI is unnecessary – or even actively harmful? Are we optimizing AI for human benefit, or are we at risk of creating over-the-top, unnecessary, or over-engineered “solutions”?

“How much do we really need AI assistance?”, Dr. Qingguo Li

This concern tied into a broader discussion on bias and equity in AI-driven healthcare. Dr. Sachil Singh highlighted how biases (such as those held by hospital data scientists) can create disparities in healthcare algorithms, ultimately harming marginalized patients. What stayed with me is that this issue isn’t created by AI; it reflects fundamental inequalities that exist independent of AI. As echoed by Dr. Laleh Seyyed-Kalantari, “even without AI, the current algorithms and procedures being used in hospital settings [lead] to inequalities.” If the system itself is inequitable, then introducing AI without addressing these underlying disparities may only reinforce existing harm.

“Even without AI, the current algorithms and procedures being used in hospital settings [lead] to inequalities,” Dr. Laleh Seyyed-Kalantri

The AI Equity & Representation: Building Inclusive & Ethical Systems for Health and Well-being panel featured facilitator (L) Dr. Shayna Rosenbaum, and panelists (L to R): Dr. Crystal MacKay, Dr. Laleh Seyyed-Kalantari, & Dr. Sachil Singh.

Conclusions: Looking Forward

At the end of the retreat, I had more questions than when I arrived, but I’ve come to realize that this is exactly the point. The Connected Minds retreat was meant to challenge my sense of what it means to be a researcher in today’s evolving world.

While I came expecting to absorb information, I left recognizing that meaningful research is not just about answering complex questions. It’s about learning how to ask the right ones. What does meaningful interdisciplinary collaboration actually look like? How do we ensure work is ethical? And as a trainee, how do I find my own voice at the table?

What will stay with me are the connections I made and the perspectives I gained, which have pushed me to think differently about my work and the broader research community. Though I don’t have all the answers to the questions I asked, and likely never will, this retreat left me better equipped to engage with them than when I arrived. Part of growing as a researcher is continuously questioning, refining, and expanding the way we approach research. To me, that’s what being part of Connected Minds is all about.

Read more about each panel and associated speakers below:

  • The retreat kicked off with the Precision in Motion: AI, Biomechanics and Neural Dynamics panel facilitated by (Neuroscience, York). Panel members (Biomedical Science, Queen’s), (Mechanical Engineering, Queen’s), and (Rehabilitation Therapy, York) explored how AI is transforming biomechanics, rehabilitation, and BCIs.
  • The Indigenous Advisory Circle in Action panel featured (Communications, York), Nathan Brinklow (Indigenous Education, Queen’s & UVic), (Indigenous Languages, York), and Dr. Michael Sherbert (Indigenous Knowledge and AI, Queen’s). This panel explored how AI and neurotechnology can be developed in ways that respect Indigenous sovereignty, ethics, and knowledge systems.
  • The Meet the Connected Minds: Artists in Residence panel was facilitated by (Art & Lead Partnerships Committee, York) and featured the artists in residence, who discussed how creative expression intersects with technology to challenge existing scientific narratives.
  • The Interdisciplinary Innovations: Bridging Art, Technology, and Science panel explored the power of interdisciplinary collaboration, specifically between artists, engineers, and neuroscientists, to bridge art with technology and science. The panel was facilitated by Dr. Gunnar Blohm (Neuroscience & Vice-Director, Queen’s), and featured panelists (Computational Art, York), (Electrical Engineering, Queen’s) and (Neuroscience, York).
  • The Inside Co-Creation: Lessons from the Team Grant Experience panel featured facilitators Dr. Sachil Singh & Dr. Vincent DePaul and panelists Dr. Claire Davies, Dr. Karen Yeates, Khadyn Butterfly, Dr. Roseanne Aleong, Laura Levin, Dr. Michael Wheeler, & Dr. Hossein Kassiri.
  • The AI, Equity, and Representation: Building Inclusive and Ethical Systems for Health and Well-being panel tackled the question of how we build inclusive and ethical health systems. It was led by Dr. Shayna Rosenbaum (Neuroscience & Vice-Director, York) and featured panelists Dr. Laleh Seyyed-Kalantari (Engineering, York), Dr. Sachil Singh (Health, York), and Dr. Crystal MacKay (Rehabilitation Therapy, Queen’s).
  • The final panel examined how ethical frameworks for technology and data can shape a responsible digital economy. Facilitated by (Law & Scientific Director, York), the discussion featured panelists (Technoscience and Society, York), (Communications, York), and (Creative Technologies, York). 
