Connected Minds – Neural and Machine Systems for a Healthy, Just Society
https://cmblog.neuroscience.queensu.ca

I Hung Out With a G1 Humanoid Robot for a Day
Wed, 22 Apr 2026

Last semester, I watched a YouTube video of the Ingenuity Labs team unboxing their new Unitree G1 humanoid robot. The robot looked slightly larger than a child, and as the team gathered around it, they watched it walk, run, and even dance. I immediately had one thought: I need to meet this robot. A few months later, I found myself standing inside Ingenuity Labs, about to do exactly that.

Before seeing the robot up close, I sat down with Ramzi Asfour, Associate Director (Administration) at Ingenuity Labs Research Institute, to learn more about their newest addition. For Asfour, bringing a new robot into the lab is always an exciting moment. “It’s usually a fun experience when we open up boxes and there’s a new robot to get to do stuff,” recalls Asfour.

What is a G1 Humanoid?

When I finally saw the robot in person, the first thing that struck me was its size. Standing a little over four feet tall, the G1 looks almost like a small human figure: arms, legs, and joints designed to move in ways that mimic our own.

“It’s a robot that looks like a person… about the size of an average 10-year-old kid. It’s a humanoid form factor,” says Asfour. “The promise with it is that it can kind of do tasks that humans normally have done.”

Speaking with Asfour, I learned that the initial focus for the robot will be an agricultural application. But how exactly does a humanoid robot fit into agriculture?

The design of the robot, standing at about 4’4” (1.3m) and weighing around 35kg, allows it to perform tasks originally designed for humans. Its mobility allows it to walk, run, and navigate uneven terrain.

For instance, it would be useful for repetitive and tedious tasks often associated with agriculture. As Asfour explains, “if you’re in a greenhouse and want to bag produce, it’s very repetitive. You’re doing the same task over and over again.”

The robot is also equipped with advanced sensing and computing abilities. “It has computer vision. It has a built-in computer. It has some AI capability. You can talk to it. You can program it to do different things,” says Asfour.

A long-term goal is for the robot to perform tasks independently, but a first application will be telerobotics: in other words, a human in a comfortable spot operating the robot remotely.

Ramzi Asfour, Associate Director (Administration) at Ingenuity Labs Research Institute

Humans and Robots Working Together

Seeing the robot move makes its humanoid design immediately clear. The G1 doesn’t glide on wheels like many robots, but instead it walks. Watching it shift its weight from one leg to the other, I could easily imagine how robots like this might eventually operate in environments built for humans.

Asfour puts it clearly: “robotics and AI are going to be everywhere.” So a key question for Ingenuity Labs becomes, “how do you successfully roll out robots into a community situation or workplace situation and have it be a positive experience rather than people worrying about safety or job security?”

A first step is to study how humans behave around robots. Asfour says, “[we ask] how do people behave differently in the presence of a robot?”

Moreover, researchers must also study humans to improve robotic motion. “You study the biomechanics of how people walk… and then analyze that and come up with control schemes to have the robot walk better,” explains Asfour. “You want it to be more stable and look more natural while it’s moving around.”

Meeting the robot!

AI and Robotics

The robot also provides a unique avenue to integrate artificial intelligence (AI) with physical machines. “People have called it embodied AI or physical AI… where AI and robotics come together,” Asfour explains.

For years, AI has largely existed behind screens, powering algorithms, recommendations, and data analysis. But robots like the G1 begin to shift that, bringing AI into physical spaces where it can interact with the world in real time.

“AI exists on computers… but they needed a way to go out into the real world,” says Asfour. “You could see the potential to talk to a robot and ask it to do something in an environment.”

The idea of simply talking to a machine and having it carry out a task still feels slightly futuristic. But standing in the lab, watching the G1 move, it becomes easier to see how that future might not be so far away.

Researchers, Students, and Collaborators

The robot also creates new opportunities for students.

“All our robots here are industrial-grade… these are robots that industry will be using,” says Asfour. That means students are gaining experience with the kinds of systems they may encounter in their future careers.

Graduate students use the robots in research projects, while undergraduate engineering students get hands-on experience through capstone design projects. For many, it’s a rare chance to work directly with advanced robotic systems before entering the workforce.

Beyond individual projects, Ingenuity Labs is also encouraging collaboration across disciplines. Asfour notes that “anybody who’s part of the Connected Minds program can make use of it through our collaboration.”

In a space like this, the robot becomes a shared platform for ideas, bringing together researchers, students, and different fields to explore what these technologies can do.

Conclusion: Robots Everywhere

Standing next to the robot, it’s easy to imagine a future where machines like this are no longer unusual lab equipment, but working alongside people in tasks like harvesting, packaging, or other everyday work.

“At some point, robots are going to have to fill the jobs and do the tasks that humans normally do now,” Asfour explained. But rather than replacing expertise, these robots could expand what people are able to do.

As these systems develop, another key focus is making them easier for people to use. Rather than requiring specialized coding knowledge, researchers are working toward ways of interacting with robots more naturally, for example through language or simple instructions. “You could have someone who knows a lot about metalworking and a little bit of robotics, and the robot becomes very intuitive to program,” says Asfour.

Ultimately, Asfour believes the future will involve a world where intelligent machines and humans work together. “We think robots and AI are going to be everywhere… and we want to put together systems for the benefit of society.”

Building Community in Neuroscience: Inside Queen’s Women in Neuro
Tue, 17 Mar 2026

When I started graduate school at Queen’s, one of the first things I did was look for a neuroscience club for women-identifying students. During my undergraduate degree, I co-founded the Women in STEM club at Simon Fraser University, having recognized the lack of community and representation for women in the sciences. Most of my professors were men, yet my class was split pretty evenly between women and men. I often felt isolated, and my journey in science seemed unnecessarily difficult – and science wasn’t easy to begin with.

The discrepancy is clear to me. Women are interested in STEM, but systemic barriers, stereotypes, and a lack of role models continue to limit representation in the workforce. I recognized that challenge during my undergraduate years, and so did Blake Noyes.

Blake runs the Women in Neuro (WiN) club at Queen’s, a student-led initiative focused on creating a supportive community for women-identifying neuroscientists. The club offers research discussions, mentorship opportunities, outreach activities, and an annual conference to help students navigate academic and professional pathways while inspiring the next generation of women scientists.

Why was Women in Neuro created?

For Blake, the idea of WiN began with a simple observation: “I came up with the plan after chatting with some fellow students in my program,” she explained. “At the time, we were predominantly women (and I think we still are) but the faculty, including our supervisors, were predominantly men.”

While their supervisors were supportive, Blake and her peers noticed that some experiences specific to women in science were difficult for male mentors to fully relate to. She explained, “though our supervisors are incredibly supportive, there are certain challenges like biases about women in science or managing pregnancy during graduate school that they don’t have experience with.”



Recognizing these issues, Blake and her colleagues set out to build something new. She says, “we wanted to build a community to support each other [through] these challenges and provide mentorship from women who have gotten through it.”

Challenging the Competitive Culture of Academia

Beyond issues of representation, there is an inherent competitiveness in academia. Graduate students frequently compete for the same grants, fellowships, and conference opportunities, and these pressures can intensify feelings of isolation.

“Grad school can feel competitive at times,” Blake says. “We are often all applying [for] the same grants. We are all trying to win the oral presentation spots at conferences. But I think we have a lot to learn from each other.”

The 2024-2025 Women in Neuro Executive Team.

Women in Neuro aims to counter that culture by emphasizing collaboration and community building.

Events like the annual conference create spaces where students can present their work, meet potential mentors, and build networks that extend beyond their own institutions.

“Getting the community together makes our research stronger and grows our network heading into the future,” Blake says.

Inspiring the Next Generation

Another key feature of the club is to inspire the next generation of women scientists. As Blake says: “I think it’s super important for young girls to know it’s a possibility to be neuroscientists, or scientists in general.”

Growing up, she recalls watching science programs like Bill Nye the Science Guy but rarely seeing women represented on the show. “When I was growing up, we watched Bill Nye, and I didn’t really see a lot of women in science.”

To help overcome this issue, the organization runs outreach programs for students ranging from elementary school to high school. Activities are tailored to different age groups and emphasize hands-on learning.

“With young kids, we introduce the lobes of the brain and do hands-on activities like making playdough neurons,” Blake explained. “Older students get more insight into the work we actually do, including testing equipment like microscopes, the EyeLink eye tracker, or the KINARM.”

These experiences often provide students with their first real glimpse of neuroscience research. Perhaps more important are the informal conversations that follow. “More importantly, we have time to answer their questions about university and tell them how we ended up where we are today,” Blake says.

The Role of Connected Minds

Support from initiatives like Connected Minds has also played an important role in expanding Women in Neuro’s reach. “Connected Minds gave us funding for our conference last year towards the purchase of our new poster display boards,” Blake explains. “That was a huge help and saves us the cost of rentals in the future.”

Organizing conferences can be expensive, and student-run initiatives often rely on sponsorship and grants. Funding support allowed the team to keep registration accessible for students.

“We were able to keep last year’s student ticket price at $25,” Blake explained. “We thought that was a great deal for a full-day conference with lunch, snacks, and unlimited coffee.”

For some attendees, the impact has been lasting. “Some undergraduates told me the conference gave them the confidence to pursue graduate school,” Blake says. “Support from funders like Connected Minds has been very influential in advancing the next generation of neuroscientists.”

Looking Ahead

As neuroscience continues to evolve, collaboration is becoming increasingly central to scientific progress. “Scientific research has been moving towards more collaborative efforts,” Blake says.

By bringing together women researchers across institutions and career stages, Women in Neuro hopes to strengthen those networks and encourage future partnerships. Blake concludes by noting “bringing women in neuroscience together now will facilitate knowledge transfer and encourage future collaborations.”

Co-Creation Stories: Start with People, Not the Problems
Mon, 19 Jan 2026

On the Connected Minds website, the word “co-creation” is seemingly everywhere. Connected Minds gives you a clear starting point for understanding co-creation, including a definition: a collaborative approach to research wherein researchers work directly with the people, communities, and sectors affected by an issue to jointly define problems, design and test ideas, and evaluate outcomes. The website also includes a step-by-step guide, developed from an extensive literature review, that walks researchers through the process.

On paper, co-creation is important, and it doesn’t take a genius to understand why. Collaboration pushes research forward. Diverse voices and perspectives broaden how problems are framed, and consulting with end-users can only make our solutions better. So yes, co-creation sounds great. But in practice, is it worth the extra time, discomfort, and the added coordination in a research world that rewards speed, efficiency, and productivity?

If you are like me, it isn’t enough to just read the definition of co-creation or skim a how-to guide. You also need real-life proof of why it is worth going the extra mile to reach out to community members, researchers in other disciplines, and experts from all over. To understand why co-creation is so important, I spoke to two Queen’s researchers, Dr. Claire Davies and Dr. Gavin Winston. Both have made co-creation a foundational part of how they do research.

Starting with people changes the problem

Dr. Claire Davies, Professor in Mechanical and Materials Engineering, spoke to me about a time she brought students who were developing robotic sensors and exoskeletons for stroke rehabilitation into the clinic to meet patients. As she recalls, after her students had met the patients, ‘they walked out of the clinic at the end and said, “those people don’t operate anywhere near what they do on YouTube. I am going to have to redesign my sensors.”’

Dr. Gavin Winston, Professor in the Department of Medicine, echoes a similar sentiment from a clinical perspective. Reflecting on early co-creation workshops with people living with epilepsy, he notes that “one of the overwhelming things was that people really liked the concept and felt that it would be useful.” In both cases, talking to end users shaped how the technology was built, but more importantly, it reshaped how researchers perceived the problems.

And this is where a major problem emerges: do we, as researchers, actually understand the problem at hand? We read the papers, do literature reviews, and make assumptions, but do we slow down enough to actually listen?

Co-creation slows research down, but that’s the point

It’s a race against time for the next grant, the next position, and the next project. Co-creation interrupts that process and slows it down.

Dr. Davies was recently awarded a grant for her project, When People Talk, Listen Completely, which focuses on developing AI-driven communication technologies, educational tools, and workplace strategies to improve employment access for Canadians with speech impairments. Across her work, she has noticed repeatedly that “ninety percent of engineering is designing, and people neglect to actually talk to clients [during] the design process.”

When People Talk, Listen Completely: Led by Dr. Claire Davies (Queen’s), this team is developing AI-driven communication technologies, educational tools, and workplace strategies to improve employment access for Canadians with speech impairments.

Through Connected Minds-funded co-creation workshops with people who have speech impairments, Dr. Davies and her team learned how varied communication can be within this group. For instance, some participants relied entirely on speech-generating devices, others used vocal utterances that required time and familiarity to understand, and some needed interpreters.

Sometimes, there were long gaps while participants typed out their answers. At first, she thought these pauses would be labelled as inefficient since they resulted in delayed responses. However, she realized that it was important to deliberately leave them in. As she recalled, “Initially I thought, well let’s take out all those huge gaps where people are typing their answers. But then I realized that was the most important thing for people to learn, that you have to sit there and you have to wait and you have to listen and you don’t interrupt… you just have to be quiet and patient.” 

Ultimately, it became about letting people communicate at their own time, pace, and comfort level. The approach she and her team took forced researchers to slow down, wait, and listen completely. These workshops were then thematically analyzed and brought back to participants for validation, directly informing her grant application and future research themes. For Dr. Davies, there is a clear takeaway: “the biggest thing is going in with no preconceived conceptions of what you think needs it, [so having] no preconceived ideas of what you’re expecting out of it.” Involve people in the beginning and throughout the research process, not just at the end.

Consultation with end-users expands the research problem

Dr. Winston had a similar experience. He was also awarded a grant for his project, Wearable EEG for Personalized Epilepsy Management, which focuses on developing a smart, wearable electroencephalogram (EEG) device designed for clinical accuracy, long-term comfort, and ethical use in everyday environments. His team aims to bridge the gap between short clinic EEG recordings and long hospital stays by developing a wearable EEG device that provides clinical-quality data, long battery life, and full electrode coverage, and that can be used independently at home. At first, the research problem may sound like a technical challenge, but it becomes something much bigger.

Wearable EEG for Personalized Epilepsy Management: Co-led by Dr. Gavin Winston (Queen’s), this team is developing a smart, wearable electroencephalogram (EEG) device designed for clinical accuracy, long-term comfort, and ethical use in everyday environments.

Involving people in the research process immediately expands the scope of the problem being addressed beyond medicine or engineering alone. Once an EEG wearable is introduced into the home, questions of ethics, legality, accessibility, privacy, and caregiver impact become as important as the hardware of the device. Thus, it becomes clear that what works in a controlled hospital environment doesn’t always fit into someone’s daily routine.

Addressing these questions requires an interdisciplinary team that brings together clinicians, engineers, ethicists, lawyers, community organizations, patients, and caregivers. Dr. Winston and his team heard directly from participants who would eventually use the technology. For instance, “things such as comfort were brought up as being critical if [the device] was going to be used.” Beyond comfort, participants also discussed usability and support: “They felt there would need to be clear availability of technical support… or at least training so they know how to use such a device.”

As Dr. Winston points out, “there’s another large part of the project which looks at all the ethical and legal implications of such a device… if we’re recording data in a home environment, what are the security implications of that?” Once people are involved, the problem is no longer about engineering a better device. It’s also about understanding the context in which it will be used. Starting with people means that no single discipline can fully understand the problem on its own.

Conclusion

For me, the value of co-creation has become clear, not because it sounds good on a website, but because of what it forces researchers to think about. Starting with people changes what we think the problem is, slows us down in ways that enhance our solutions, and makes it impossible to work by ourselves.

Human-in-the-Loop Is Not a Technical Detail. It’s an Ethical Position
Fri, 16 Jan 2026

Artificial intelligence is usually discussed in terms of what it can do. How accurate is it? How fast? How much work can it take off our plates?

But performance is only part of the story. When AI is involved in decisions that affect people’s lives, someone has to be responsible. The question is who.

Across very different fields, from engineering labs to medical education, the same idea keeps surfacing. Keeping humans “in the loop” is not a temporary safeguard or a design choice to be optimized. It is a moral decision. It is a way of saying that efficiency does not replace accountability, and that judgment cannot be fully automated.

This tension came up repeatedly in my conversations with two researchers working at very different points along the AI pipeline.

At the technical end of this pipeline, “human-in-the-loop” is often discussed as a procedural detail. It is treated as a feature to be added early and removed later, once systems become more capable and trustworthy. But this framing misses what is actually at stake. When AI systems are designed to recognize people, interpret behavior, infer emotional or cognitive states, and personalize responses in real time, oversight is no longer just a design choice.

Dr. Ali Etemad, an Associate Professor in the Department of Electrical and Computer Engineering at Queen’s, works on human-centred AI that draws on many types of data, including text, images, audio, and biological signals, to understand who people are, what they are doing, and how they are feeling.

Dr. Ali Etemad, Queen’s Department of Electrical and Computer Engineering

In this context, his concern is not simply that AI systems make mistakes, but that their errors can carry an unwarranted sense of authority.

“Hallucinations are a big problem in language models where the model generates output that sounds real but is not real. This has happened to me many, many times where I’ve asked for a reference about a particular thing and it makes up a legitimate sounding paper that doesn’t actually exist.”

The risk here is not simply incorrect information. It is the confidence with which that information is delivered. Systems that speak fluently and persuasively can make errors feel settled and final, even when they are not. When human oversight becomes symbolic rather than active, people stop questioning outputs that sound convincing. Keeping a human in the loop is not about checking grammar or fixing small errors. It is about maintaining responsibility for accuracy. Someone must still decide whether an answer is reasonable, appropriate, and safe.

The same ethical tension shows up downstream, when AI systems move into real decision-making environments. In postgraduate education, for example, large language models (LLMs) are increasingly used to summarize resident evaluations, analyze feedback, and help promotion committees manage large volumes of data about individual medical trainees. These tools promise efficiency, and they often deliver it. But they also quietly reshape how decisions are made.

Dr. Benjamin Kwan, an Assistant Professor and Neuroradiologist at Queen’s, Educational Innovations Lead at PGME, and Faculty Research Director for Diagnostic Radiology, works directly at this intersection, researching how large language models are used to support assessment and decision-making in postgraduate medical education. For him, the limits are clear.

Dr. Benjamin Kwan, Faculty Research Director for Diagnostic Radiology at Queen’s

“I don’t think we should ever replace the human in these decisions. We should always have what they call the ‘professor-in-the-loop’ or the ‘teacher-in-the-loop’ to make sure everything is appropriate.”

This perspective does not imply resistance to technology. It is an acknowledgment of responsibility. When evaluative authority shifts, even subtly, from people to systems, accountability becomes harder to pin down. If an AI-assisted recommendation disadvantages a trainee, who answers for that outcome? The algorithm? The institution? Or the human who deferred too easily to an LLM?

When decisions are delegated, responsibility does not disappear. It just becomes harder to trace.

Machines Don’t Wash Bias Clean

Automation is often framed as a solution to human bias. The logic is appealing: remove people from the process and decisions become more objective. But this logic collapses under closer scrutiny. From a technical perspective, Dr. Etemad makes clear that bias is not just a problem of flawed or unbalanced data. It can be introduced much earlier, through choices that are rarely visible to end users and are often treated as purely technical.

“We proved that certain training algorithms with the same data and the same model could make the model more biased or less biased. Using certain algorithms to train neural networks could have a huge impact on how biased these models will become when they are released.”

This matters because it undermines a widely held assumption: that holding data constant guarantees consistent outcomes. As Dr. Etemad’s work shows, that assumption fails. Models trained on the same data diverge significantly depending on design and training choices. Bias, in other words, is not just inherited. It can be built in through decisions embedded deep within the technical pipeline.

Dr. Etemad also points out that addressing bias is rarely straightforward because improving fairness often involves trade-offs that cannot be resolved by optimization alone.

“There is a fairness–performance trade-off. One way to increase fairness is to penalize the better-performing group so that everyone performs equally poorly, but that’s not really what we want.”

Deciding what counts as “fair enough” is not a technical judgment. It requires values. That same tension shows up clearly in educational settings. Dr. Kwan is cautious about using AI systems to make or strongly influence decisions about progress, not only because they are informed by past evaluations (which reflect their own judgments and constraints), but because the systems themselves encode assumptions about what should count as performance, risk, or success.

“You probably wouldn’t want to use a tool that will determine if somebody passes or fails solely because of all these potential bias problems.”

The concern is not abstract. In practice, automated systems tend to present their outputs as neutral summaries or recommendations, such as a score, a ranking, or a flagged concern. Once framed this way, decisions can feel less open to discussion. When a person makes a biased decision, it can be questioned. They can be asked to explain their reasoning. But when a system produces the same outcome, the decision often feels procedural, as though it simply followed the rules.

This shift matters. Bias does not need to be extreme to be harmful. It only needs to become harder to challenge.

Human-in-the-Loop Is Our Responsibility

These perspectives are connected by a shared understanding. Insisting on human oversight is not distrusting AI. It is about refusing to abandon our responsibility as those who design and deploy it. For Dr. Etemad, this responsibility is most visible in the problem of alignment. In systems that interact closely with people, the central question is not simply whether an AI system performs well, but whether its outputs reflect the priorities and values it is meant to serve.

“One of the important sub-areas in this context of AI that interacts with humans is responsible AI. One of the ways to address that is alignment. Can we develop models that are aligned with a set of values that we’re interested in?”

Alignment, in this sense, cannot be achieved through optimization alone. It requires human judgement about which values matter, how they should be balanced, and when a system’s output must be constrained or overridden. From this viewpoint, human-in-the-loop systems are not a temporary safeguard. They are one of the few ways to ensure that responsibility is upheld. Someone still has to stand behind the decisions an AI makes.

In professions like medicine and education, this issue is critical. These fields depend on trust and their decisions shape real lives. As Dr. Benjamin Kwan put it, AI may become an increasingly powerful assistant, but it should never be the final authority.

“AI will be a good helper… another voice at the table.”

A voice, not a verdict.

The Question We Should Be Asking

We spend a lot of time debating what AI will eventually be capable of doing. The more important question is what we should ask it to do. Human-in-the-loop systems are slower. They require explanation. They force people to stay engaged. That is exactly why they matter.

Because automated systems don’t take responsibility. People do.

Exo-Sensory Augmentation: Designing Inclusive Wearable Solutions for Safer Work Environments
Fri, 14 Nov 2025

When physicians, nurses, and healthcare staff can no longer complete work due to workplace injury, patients inevitably face the consequences.

“[The Canadian] healthcare system focuses strongly on patient outcomes to treat patients better and to improve overall health. However, a lot of the time, we neglect the needs of our healthcare professionals,” says Dr. Qingguo Li of Queen’s.

Dr. Qingguo Li

Dr. Li leads the Bio-Mechatronics and Robotics Lab (BMRL) at Ingenuity Labs Research Institute. His awarded project, ‘Exo-sensory Augmentation to Reduce Musculoskeletal Injury Risk in Clinical Settings’, aims to use innovative wearable technology to enhance sensory awareness and mitigate injury risks.

A Hidden Problem

While healthcare focuses on the treatment of Canadians, a forgotten demographic ends up being the people who provide that care: physicians, nurses, and healthcare staff themselves.

“When we come back to the literature, there are surveys that find that a very high percentage of people working in operating rooms, for instance, experience back problems and musculoskeletal injury,” says Dr. Li. The issue can shorten careers and often results in early retirement; as Dr. Li puts it, “if the doctor is sick, who can do the treatment?”

“We focus on a group of clinicians who work in a real-time X-ray environment,” says Dr. Li. In these settings, staff wear lead aprons weighing about 15 to 25 pounds, often bending over patients for sustained periods. The combination of heavy aprons and sustained, awkward posture raises spinal loads and the chance of injury.

There are practical fixes that have been proposed but remain inadequate to address the problem. “One way is exoskeletons; however, [the surgeons] have told us that even if we develop a pretty exoskeleton, they would not use it,” explains Dr. Li. “Exoskeletons are bulky, heavy, and affect range of motion.” Another solution would be to hang a harness from the ceiling. “This would hold the weight of the load as a kind of suspension system,” he says. However, there are issues with mobility in a delicate operating environment. “[Surgeons] would need to drag the harness, and when moving in a surgery environment, that’s not a good solution.”

An Innovative Solution to a Sensory Problem

Dr. Li and his team. (L to R: Will Bonin, Sophie Lau, Jialin Luo, Paul Quinlan, Qingguo Li, Natasha Anderson, Samuel Brost, Romaric Bambara)

Dr. Li realized that the core issue is not just physical load, but posture awareness. “This is why we propose an exo-sensory augmentation approach,” he says. “The major issue is posture.” In a demanding environment like surgery, with long procedures, the staff are deeply focused on the task at hand and do not pay attention to how their posture deteriorates. As Dr. Li explains, “the clinicians or the surgeons or the nurses are not aware [of their posture] during the operation, so if we give them this information, hopefully they can adjust their posture, stretch, and relax a bit.”

Dr. Li compares exo-sensory augmentation to the use of glasses or hearing aids — devices that amplify our senses. “Sometimes you forget about posture because you’re focusing on other tasks. So, [the system] can provide that information to you,” he explains. This approach gives staff real-time awareness of their posture as they work, allowing them to regain a sense they had temporarily lost under cognitive and physical load.

“We took a user-centered design approach by involving users in all stages of decision-making,” says Dr. Li. “We work very closely with surgeons and stakeholders. We have regular meetings with them to make solutions that are both feasible and acceptable in a clinical environment.”

The system under development has three modules to achieve overall exo-sensory augmentation: posture measurement via small wearable sensors placed on the user’s back, load estimation using biomechanical models to estimate spinal loading at different vertebral levels (e.g., C7, L4), and feedback to the user, delivered as either haptic cues (e.g., a buzz when posture exceeds a threshold) or onscreen indicators (e.g., status on a display). These real-time cues prompt adjustments without disrupting surgery.
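The decision logic behind such a haptic cue can be sketched in a few lines. This is a hypothetical illustration, not code from Dr. Li’s system: the threshold, units, and function names are invented for the example.

```python
# Hypothetical sketch of a feedback module: compare an estimated
# trunk-flexion angle against a threshold and decide whether to issue
# a haptic cue. All names, units, and thresholds are illustrative.

FLEXION_THRESHOLD_DEG = 30.0   # illustrative limit for forward bend
SUSTAIN_SECONDS = 10.0         # cue only after posture is held this long

def should_buzz(flexion_samples, sample_rate_hz):
    """Return True if every recent sample exceeds the threshold for
    at least SUSTAIN_SECONDS (i.e., a sustained awkward posture)."""
    needed = int(SUSTAIN_SECONDS * sample_rate_hz)
    if len(flexion_samples) < needed:
        return False
    recent = flexion_samples[-needed:]
    return all(angle > FLEXION_THRESHOLD_DEG for angle in recent)

# Example: 12 seconds of samples at 1 Hz, all above threshold -> cue fires
print(should_buzz([35.0] * 12, sample_rate_hz=1))  # True
```

Requiring the posture to be *sustained* before buzzing is one simple way to avoid nagging the user during brief, necessary bends.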

Designing for Accessibility

An important focus for Dr. Li is accessibility and inclusivity for all clinical roles and users. “You have nurses, surgeons, and radiology technicians,” says Dr. Li. “We are trying to develop a system that works for all of them.”

Dr. Li and his group therefore study the posture demands unique to each staff role in a clinical setting. For instance, they take into consideration that a nurse assisting a surgeon, or a technician handling X-ray equipment, faces different ergonomic challenges. These considerations shape how the system adapts to each staff member’s movement patterns and injury risks.

Dr. Li’s group also considers sex differences in how users perceive and respond to feedback. “We consider if we need to develop a universal intervention or a system with different parameters,” Dr. Li explains. By accounting for these factors early in design, the team aims to ensure the final system can support clinicians of all body types, abilities, and backgrounds.

Looking Ahead

Looking ahead, Dr. Li hopes to validate the exo-sensory system in clinical trials and to explore commercialization. “We work with Queen’s Partnerships and Innovation to get the technology to the hands of our end users,” he explains.

Dr. Li’s long-term goal is to move beyond the lab and integrate the technology in real-time workflows. “We believe this technology can not only be applied in clinical settings or healthcare, but also in manual labour, working in a factory, for example… we believe exo-sensory augmentation can benefit the ergonomics and working conditions of other fields and make a broader impact.”

Driving ethical innovation /connected-minds/driving-ethical-innovation Mon, 28 Jul 2025 15:17:17 +0000 /connected-minds/?p=605 Through Connected Minds Team Grants, Queen’s researchers are harnessing technology to support responsible innovation.

With $318.4 million in combined support, Connected Minds unites researchers from York and Queen’s to shape technology that serves equity, health, and society.

Advances in technology are rapidly transforming everyday life. At Queen’s, researchers are working to ensure that innovation goes hand in hand with inclusion, equity, and community engagement. This commitment is embodied in Connected Minds, a national research initiative led by York University in collaboration with Queen’s that supports interdisciplinary teams exploring the social dimensions of technological change.

Supported by more than $100 million in federal funding and additional contributions from York and Queen’s, Connected Minds is one of the largest initiatives of its kind in Canada. Now in its third year, the program is supported by more than 50 industry, hospital, and community partners. In its first round of Team Grants, the program invested a total of $7.5 million across five interdisciplinary projects. Each team received up to $1.5 million to investigate how technology can support a more inclusive, equitable society.

“Connected Minds reflects the kind of collaborative, human-centred research Queen’s is proud to be part of,” says Gunnar Blohm, Connected Minds’ Vice-Director, and a computational neuroscientist with the Queen’s Centre for Neuroscience Studies. “These projects show how we can work across disciplines and institutions to shape technologies that respect people’s needs and support real change in society.”

The first round of Connected Minds Team Grants features five ambitious projects, each with Queen’s researchers helping to discover new ways of aligning innovation with community needs:

When People Talk, Listen Completely 

A Queen’s researcher in Smith Engineering and York’s Shital Desai are co-leading a national effort to improve employment outcomes for people with speech impairments. The project, When People Talk, Listen Completely, brings together experts in workplace accessibility, communication technologies, and clinical care to reduce stigma and enhance inclusion. The team is working closely with individuals with lived experience, employers, and community organizations to develop practical tools, policies, and education materials that support workplace success and foster greater acceptance.

Creative Collectivities: Rehearsing Equitable Futures through Participatory Technologies 

A Queen’s researcher in the DAN School of Drama and Music is collaborating with York’s Laura Levin to study how participatory technologies, from immersive theatre to artificial intelligence, can foster more inclusive ways of coming together. Their project, Creative Collectivities, involves artists, engineers, neuroscientists, and community partners from 2SLGBTQIA+, Indigenous, racialized, and disabled communities. By combining live performance with digital tools such as AI, the team is reimagining how technology can support collective expression and challenge systemic barriers.

The goal is to create shared spaces, both physical and virtual, where diverse voices shape not only the stories being told but also the tools used to tell them. Through collaboration and experimentation, Creative Collectivities is exploring how performance and emerging technologies can reflect the needs and imaginations of marginalized communities.

Development and Validation of a Technologically Advanced, Clinically-Effective, Socio-ethically-Responsible Wearable EEG System for Personalized Epilepsy Management

A Queen’s researcher in Medicine, in partnership with York’s Hossein Kassiri, is developing a wearable electroencephalogram system to help people with epilepsy monitor and manage their condition in real time. The team is combining medical, engineering, and ethical expertise to create a device that is accurate, comfortable, and socially responsible. With input from people living with epilepsy and support from organizations like Epilepsy Toronto, the project aims to bring personalized care into people’s homes while addressing important questions around brain data and privacy.

The Biskaabiiyaang Indigenous Metaverse

A Queen’s researcher in Biomedical and Molecular Sciences is contributing to a project that brings together Indigenous knowledge systems, community leadership, and immersive technology to create a new kind of virtual learning environment. The Biskaabiiyaang Indigenous Metaverse, led by York University’s Rebecca Caines and Maya Chacaby, is designed to support cultural connection and language revitalization through interactive, story-based experiences.

Developed in partnership with organizations such as the Nokiiwin Tribal Council and UniVirtual, the platform aims to uphold Indigenous governance and values while advancing digital inclusion. Through research in neuroscience, psychology, and community-informed design, the team is examining how virtual spaces can reflect Indigenous ways of knowing and offer personalized, ethical approaches to in-game learning and engagement.

Co-creating Intelligent Neuro-Technologies for Healthy Aging (CINTHeA) 

A Queen’s researcher in Rehabilitation Therapy is working with York’s James Elder to create socially assistive technologies that support healthy aging. The project, CINTHeA, focuses on mobility, cognitive health, and social connection for older adults. In collaboration with caregivers, community organizations, and clinical experts, the team is developing tools such as personalized robots and mobile assessments designed to promote independence and quality of life. Neuroscience research and AI-powered monitoring systems also help enable early intervention and support.

These inaugural Team Grants reflect the Queen’s research community’s growing involvement in Connected Minds and its commitment to socially responsible innovation. As the program continues to grow, researchers across disciplines will have new opportunities to lead initiatives that connect technological advancement with ethical and real-world impact.

Originally posted in the Queen’s Gazette July 14, 2025 – /gazette/stories/driving-ethical-innovation

Decision Intelligence: The Data-Driven Future of Healthcare Resource Planning /connected-minds/decision-intelligence-the-data-driven-future-of-healthcare-planning Mon, 14 Jul 2025 19:03:29 +0000 /connected-minds/?p=598 It starts with a closed door.

A rural emergency room shuts down without warning. It’s late. It’s snowing. The nearest hospital is hundreds of kilometers away and there’s no family doctor to call. In the backseat, a patient clutches their chest, growing colder by the minute. That closed door isn’t just an inconvenience. It could be the difference between life and death.

Across Ontario and throughout Canada, temporary ER closures have become a symbol of a healthcare system under immense strain, where staffing shortages, vast distances, and surging demand converge to create serious risks for patient care. In 2023 alone, there were over 600 temporary emergency department closures in rural Ontario, often leaving residents with no choice but to travel long distances to receive urgent care.1

But what if we could prevent these closures before they happen?

That’s the question driving Dr. Salimur Choudhury, a computer scientist at Queen’s and founder of the GOAL Lab (short for Global Optimization, Analytics, and Learning). His lab builds decision-making tools powered by algorithms, data, and AI, tools that could help healthcare administrators decide, in real time, how to deploy limited resources where they’re needed most.

Dr. Salimur Choudhury of Queen’s is developing AI-driven tools to help healthcare leaders make faster, smarter decisions.

“One of my research priority areas is optimizing health care resources,” he said. “Over the past few years, we’ve been developing data-intensive, data-driven methods to support health care policymakers.”

At its core, Choudhury’s research addresses a deceptively simple question: how do we allocate scarce resources – physicians, supplies, or even transportation routes – in a way that supports real people in real time?

Some of his most urgent work focuses on ER closures in Northern Ontario. With limited staff supporting a vast geographical area, hospitals in the north frequently shut their emergency departments on evenings and weekends. For patients, these closures can mean driving for hours to find help, often in hazardous winter conditions. “We’re analyzing patterns in ER visits and studying which emergency rooms are closing and when,” he explained. “The idea is to use data to guide decisions, to figure out when and where closures can be avoided.”

But the barriers to care in Northern communities go beyond hospital staffing. They also include the seasonal roads that connect them. Many remote areas rely on winter roads—temporary routes built over frozen lakes and waterways—that are only operational for a few weeks each year. Climate change is shortening these windows by causing warmer winters and unpredictable freeze-thaw cycles, cutting off critical links between patients and healthcare. Choudhury’s lab is using geospatial and climate data to anticipate these risks. “We are combining satellite imagery and trying to analyze which winter roads are disappearing,” he said, “because if and when that road is closed, the ER will also close, and that may impact the service delivery.”

When winter roads close, so can access to emergency care, leaving remote communities stranded without critical health services.

By identifying communities at risk of being stranded, his team aims to help decision-makers plan ahead, reallocating resources before patients are left without options. To make this relocation possible, his lab is building mathematical models that support regional coordination, enabling better use of travel nurses, shared staff pools, and dynamic scheduling, all grounded in patient and hospital data. While access to these data remains a challenge, Dr. Choudhury’s team is painstakingly gathering what it can from public sources.

But Choudhury isn’t stopping at ERs. In another project, his team is helping reduce administrative burden in family medicine. “In family health teams, physicians often spend a lot of time filling out forms, but not all of them need to be completed by doctors,” he said. “We’re looking at how we can redistribute that work to other allied professionals.”

By analyzing health records from the Queen’s Family Health Team and partnering with a start-up company, the lab developed an AI-based autofill system for medical referral forms, saving time for doctors and potentially improving workflow across team-based care. “The time savings for physicians were significant,” Choudhury said.

These innovations point to a broader shift that Choudhury is pursuing — one where intelligent systems don’t just offer recommendations, but take action to solve and prevent problems in real time.

“What I’m envisioning is called decision intelligence,” he explained, “where systems don’t just analyze data and generate algorithms but actually make the decisions themselves.”

This vision may sound futuristic, but his team is already working on it. In one project, they’re building a tool that converts natural language (plain, human descriptions of a problem) into the mathematical equations needed to solve it. For example, someone might type out their scheduling or logistics challenge, and the algorithm would automatically generate an optimization model and code, ready to run.

“The idea is that a user could describe their problem in natural language,” he said, “and the system would generate the equations and code needed to solve it.”
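As a toy illustration of the kind of code such a tool might generate, consider a small staffing request ("assign three nurses to three shifts at minimum total travel cost") compiled into an assignment model. The data and brute-force solver below are invented for the example and are not from the GOAL Lab; a real system would emit a model for a proper optimization solver.

```python
# Toy illustration (hypothetical): the kind of artifact a natural-language
# to optimization tool might produce for "assign 3 nurses to 3 shifts,
# minimizing total travel cost". Solved here by brute force for clarity.
from itertools import permutations

travel_cost = [  # travel_cost[nurse][shift], illustrative numbers
    [4, 2, 8],
    [5, 3, 6],
    [2, 7, 4],
]

def best_assignment(cost):
    """Try every nurse->shift assignment and keep the cheapest."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):  # perm[nurse] = shift
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

total, assignment = best_assignment(travel_cost)
print(total, assignment)  # 10 (1, 2, 0)
```

Brute force is only viable for tiny instances; the point is the shape of the output, a formal model a human never had to write by hand.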

Ultimately, these tools are designed to help healthcare leaders and government agencies make faster, smarter and more equitable decisions, especially in communities where resources are stretched. But for research like this to drive real-world change, it has to reach the people in charge of policy and planning. That’s why Choudhury is strategic about where he publishes his work.

“When you develop a particular methodology, some of it needs to be published in computer science journals,” he said. “But we also need to reach the right audience, and healthcare policymakers may never read a standard computer science journal.”

To bridge this gap, Choudhury not only publishes in technical outlets, but also seeks opportunities in health-sector venues where clinical leaders and system planners are more likely to engage. These opportunities include journals, conferences, and collaborations with clinicians. By tailoring communications to this audience, Choudhury ensures his research doesn’t just push the boundaries of algorithm design, but informs better care, smarter systems and more responsive policy.

Despite the complexity of his work, Choudhury is focused on practical impact. He partners with hospitals, startups and government agencies, and trains the next generation of researchers in this field. And he continues to ask how novel algorithms can help to solve systemic problems in healthcare.

“We’re trying to find small algorithmic approaches,” he said, “and connect those dots to make a meaningful difference.”

In a healthcare system buckling under pressure, these small things might just be the breakthroughs we need.  But the significance of this work goes far beyond technical achievement. By designing systems that can allocate resources intelligently, deliver care more efficiently, and anticipate infrastructure-related challenges before they become crises, Choudhury’s research lays the foundation for a healthcare system that is more equitable, accessible, and resilient — one that serves everyone, no matter their postal code.


  1. Rural Ontario Municipal Association. (2024). Fill the gaps: Bringing care closer to home in rural and northern Ontario. https://www.roma.on.ca/sites/default/files/assets/IMAGES/Home/ROMA%20Report%20-%20Fill%20the%20Gaps%20Closer%20to%20Home%20January%2021%202024%20FINAL%20Draft-Reduced.pdf

Meta-Physical Theatre: Making Touch Real in Virtual Reality /connected-minds/meta-physical-theatre-making-touch-real-in-virtual-reality Wed, 18 Jun 2025 14:00:14 +0000 /connected-minds/?p=582 Imagine if you could enter a play by putting on a virtual headset. Now imagine that the characters in the play are shaking your hand or giving you a hug, and that you can feel them do these things.

This is the kind of experience that researchers at Queen’s are developing through the “Meta-Physical Theatre: Designing Physical Interactions in Virtual Reality Live Performances Using Robotics and Smart Textiles” project. In a nutshell, the project integrates physical touch into virtual reality (VR) live performances.

Dr. Matthew Pan is the lead researcher on the project. Dr. Pan is an Assistant Professor in the Faculty of Engineering and Applied Science and a member of Ingenuity Labs Research Institute at Queen’s. He was one of six inaugural recipients of a research grant in 2024, supporting community-focused research that pushes boundaries in technology and society.

Virtual Reality Beyond the Visual

In association with intersectional arts organizations, this project aims to build immersive narratives where participants can not only see and hear virtual characters but can also physically interact with them. It pushes boundaries on VR environments to build immersive environments where touch becomes a part of the narrative structure.

Dr. Pan’s idea began years earlier during his time at Disney. While working on Star Wars: Galaxy’s Edge, he developed an immersive experience where visitors could feel an iconic “force grab” (when a Jedi summons a lightsabre through the air). “You would put on a VR headset, and you would see, in the distance, this lightsabre that you can reach out to with your hand and it would start zooming toward you,” he explains. “You would actually see the lightsabre come into your hand in VR. At the same time, a robot in the real world would deliver a lightsabre prop with the exact same timing and force.” Though the project was ultimately shelved by Disney, Dr. Pan didn’t give up on the idea. “I thought there was a lot left on the table by shelving that project.”

Making Touch Feel Real in VR

Dr. Matthew Pan (L) and Michael Wheeler (R)

Of course, there’s no VR theatre without theatre, and Dr. Pan’s collaboration with Michael Wheeler is essential to the project. Wheeler is a fellow Ingenuity Labs and Connected Minds member, Assistant Professor in the DAN School of Drama and Music, and Director of Artistic Research at SpiderWebShow Performance. “Shortly after arriving at Queen’s, I was introduced to Michael… we thought it would be really cool to actually have a theatrical narrative that uses interpersonal interactions in VR,” Dr. Pan says. Supported by community organizations, Dr. Pan and Wheeler co-created a live VR theatre experience that integrates physical touch. “It’s a high risk, high reward project that Connected Minds was willing to fund.”

“[We are] creating this narrative that involves physical interactions with virtual characters. [We are] starting out simple, we’re looking at simple interactions like high fives, or fist bumps, and handovers of objects where you don’t necessarily need a lot of fidelity in terms of physical interactions.”

To make these moments feel real, the team uses haptic proxies. As Dr. Pan explains, haptic proxies are physical props that “stand in for haptic interactions you would normally feel in the real world.” For example, a robot-mounted hand can simulate a high five at the exact moment the participant sees it in VR.

However, matching physical actions in the world and virtual actions in VR creates a major technical challenge. The system must align spatial coordinates using motion capture and high-fidelity 3D pose tracking, so that the location of the proxy in the real world matches the location where the VR headset thinks you should be.

Timing matters too. The team must also synchronize physical and virtual actions on the scale of milliseconds. “For dynamic experiences, it’s even more complicated,” Dr. Pan explains. “Particularly for handovers or high fives, there needs to be not only a physical correlation, but also a temporal correlation. You can’t have the high five happen in VR first, followed by it happening 500 milliseconds later in the physical world. It breaks the illusion.” To avoid this lag, the team uses a system that shares information between the VR environment and robotic devices to keep latency low and synchronization precise.
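The compensation idea can be sketched simply: if the robot’s actuation delay is known, the physical action must be commanded early so both events coincide. The numbers and function names below are illustrative, not taken from the team’s actual system.

```python
# Minimal sketch (hypothetical) of latency compensation for a VR/robot
# high five: command the robot ahead of the virtual contact time by its
# known actuation delay, then measure any residual mismatch.

ROBOT_LATENCY_MS = 120.0   # illustrative actuation + network delay

def robot_trigger_time(virtual_contact_ms):
    """Time to command the robot so the physical high five lands at the
    same moment as the virtual one."""
    return virtual_contact_ms - ROBOT_LATENCY_MS

def sync_error(virtual_contact_ms, physical_contact_ms):
    """Mismatch between the two events; large gaps break the illusion."""
    return abs(physical_contact_ms - virtual_contact_ms)

print(robot_trigger_time(5000.0))   # command the robot at t = 4880.0 ms
print(sync_error(5000.0, 5005.0))   # 5.0 ms residual error
```

In practice the latency itself drifts, so a real system would estimate it continuously rather than use a fixed constant.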

Collaboration Across Disciplines

The project is supported by two arts organizations: SpiderWebShow Performance and bCurrent. The former is a Kingston-based arts organization and Canada’s first live-to-digital performance company. It focuses on exploring the intersection of live performance and digital technology. “With SpiderWebShow, we work with Adrienne Wong, who is contributing to the dramaturging,” says Dr. Pan. bCurrent is a Toronto-based company that supports the work of Black and intersectional artists and plays a role in shaping the narrative voice of the theatrical experience. Together, these collaborators ensure that the narrative experience is inclusive and culturally relevant.

For Dr. Pan, it’s important that the creative process among engineers and artists is authentic. “We are emphasizing the co-creative nature of this project … [Michael and I] talk about these experiences at length and we have many ideas on what we eventually want to do with this technology, but one of the most important steps is we’re not leaving each other out in the dark.”

Beyond Theatre: Next Steps

Dr. Pan has big ideas on where the project and technology could eventually go.

“We already have inquiries into sports training,” he says. “There’s lots of implications for being able to customize training regimens for athletes.” For instance, being able to train a hockey goalie in a safe and replicable environment without needing live opponents or expensive setups would be helpful to coaches.

The technology could also support hands-on training for skilled trades, with the potential to lower barriers to technical training and to improve safety. “We could use a robot to mimic a lathe, and then do operator training in VR, especially when there is a shortage of machine equipment or safety concerns, we could have novices training with haptic proxies before moving on to the physical machine.”

Beyond performance and training, Dr. Pan is also excited about applying the technology to care for the elderly and combatting loneliness. He’s been speaking to a Connected Minds researcher at York University who studies VR in palliative and geriatric care. “Loneliness is a huge issue with the elderly population,” he explains. “[We are exploring] if we can use this technology to make social connections.”

Conclusion

In a world where screens mediate much of our social world, Dr. Pan asks: what if we could reclaim a sense of touch, even through a headset? His project brings together engineering, art, performance and community, showing that immersive technology isn’t about tricking the eye, it’s about restoring presence and human connection.

The Brain in Motion: Insights from Dr. Gunnar Blohm’s Interdisciplinary Lab /connected-minds/the-brain-in-motion-insights-from-dr-gunnar-blohms-interdisciplinary-lab Wed, 23 Apr 2025 18:49:10 +0000 /connected-minds/?p=564

When was the last time you caught a ball, typed an email, or crossed the street? Did you think about this action, or was it simply something you did? These movements are second nature to us, but each one involves complex interactions between sensory input, neural noise and split-second decision making.

How the brain transforms perception into action drives the work of Dr. Gunnar Blohm and the multidisciplinary team in his lab. Dr. Blohm is Professor of Computational Neuroscience at Queen’s University, as well as the Vice-Director (Queen’s) of Connected Minds. Members of his lab combine physics, psychology and mathematics to investigate how the brain learns to move, adapt, and make decisions.

“Sensory-motor processes are the things to study because they are the reason the brain evolved — to use sensation, or sensory inputs, to sense and act on the world,” says Dr. Blohm.

Dr. Blohm began his academic career with a Master’s degree in physics, but pivoted to neuroscience during his PhD, drawn in by the challenge of understanding living systems. For Dr. Blohm, the mission behind his research is clear: to uncover the brain’s most fundamental mechanisms. As he explains, “through the study of sensory motor control, [researchers] can uncover fundamental principles and mechanisms of brain functions at various scales [from] behaviour to how the brain is set up.”

In practice, how does this mission shape the research in his lab? For one, it attracts researchers from a wide range of fields. While the students in Dr. Blohm’s lab may be investigating different parts of a system (for instance, from normative to behavioural modelling), they all seek to answer the same fundamental question: how does the brain adapt, decide, and act in the world?

At the same time, building a lab isn’t just about answering questions. It’s also about training students to ask better ones. “Graduate school is training through research,” Dr. Blohm says. He hopes the main skill his students take away from their time in the lab is critical thinking: learning how to ground ideas in evidence, assess the logic behind different scientific approaches, and analyze data in a systematic way. He also encourages students to follow their passions, pursue unconventional questions, and collaborate across disciplines.

3 Students, 3 Paths, 1 Mission

PhD students (L to R): Connor Braun, Arefeh Farahmandi, and Sydney Doré

For Connor, understanding the brain begins with pen and paper. As a mathematics PhD student, he is investigating the idea that neurons are not simply passing along signals but acting as decision-makers with each neuron operating with limited information in a complex network. Connor’s research uses mathematical frameworks from multi-agent systems and reinforcement learning to model the brain as a decentralized network. His research asks how neurons know what to do in such a noisy environment. It’s a question that mirrors problems in economics and game theory, fields where individual agents make choices based on sparse and sometimes conflicting information.

Connor chose to work with Dr. Blohm after seeing the diversity of research in his lab. “[Dr. Blohm] really emphasizes the importance of collaboration and multi-disciplinary work for answering questions about the brain,” he explains. This openness led to co-supervision of Connor’s thesis with a mathematician at Queen’s. Their project now sits at the intersection of neuroscience, systems theory, and machine learning. “It’s exciting to have an idea,” Connor says, “and to realize that not many people are having the same idea.”

While Connor builds mathematical theories of how neurons communicate, Arefeh is focused on how humans move. With a background in electrical engineering and control systems, Arefeh’s research blends machine learning, biomechanics, and neuroscience to answer one question: how can we detect abnormal movement patterns, and possibly diagnose disease, just by analyzing video?

“How do we distinguish different movements, extract primitives, and then, from those features, distinguish abnormality?” she explains. Her project uses machine learning tools to analyze simple, even smartphone-recorded, videos. She extracts 3D pose data from these videos and identifies movement primitives—subtle, repeated patterns in how we walk or gesture. The long-term goal is to create a tool that flags movement irregularities, prompting early screenings or medical follow-ups. “Like, hey,” she says, “maybe your grandma’s walk looks a little different—maybe she should consult with a physician.”
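The screening idea can be illustrated with a minimal sketch: summarize one gait feature (here, stride duration) and flag values that fall far outside a reference distribution. The feature, threshold, and data below are invented for the example; the lab’s actual pipeline is far richer.

```python
# Hypothetical sketch of movement-abnormality flagging: compare a gait
# feature extracted from video-based pose data against a reference
# distribution, using a simple z-score. All numbers are illustrative.
from statistics import mean, stdev

def z_score(value, reference):
    return (value - mean(reference)) / stdev(reference)

def flag_abnormal(stride_s, reference_strides_s, threshold=3.0):
    """Flag a stride duration more than `threshold` SDs from reference."""
    return abs(z_score(stride_s, reference_strides_s)) > threshold

reference = [1.00, 1.05, 0.98, 1.02, 1.01, 0.99, 1.03, 1.00]
print(flag_abnormal(1.02, reference))  # typical stride: False
print(flag_abnormal(1.60, reference))  # much slower stride: True
```

A single z-scored feature is far too crude for diagnosis; it simply shows the shape of the problem, turning pose trajectories into features and features into a flag worth a clinician’s attention.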

Finally, while Connor builds theoretical frameworks and Arefeh designs practical tools, Sydney’s work unfolds in real time, with participants adapting to simulated vision loss as her experiment runs. Her research investigates how people respond to central vision loss, such as that caused by age-related macular degeneration (AMD), and whether the brain can rewire itself to compensate.

Using eye-tracking, Sydney’s experiments mimic a central blind spot on screen, forcing participants to rely on their peripheral vision to follow moving targets and track motion. Over the course of several sessions, she observes whether participants develop a new point of focus, known as a preferred retinal locus (PRL). “We already know that some AMD patients can develop a PRL in place of their absent fovea in tasks such as reading,” she explains. “However, it’s unknown if and how this PRL can be used in tracking [moving] targets.”

Her path to the lab was simple: “I’ve been in the lab since my [undergraduate] 4th year thesis project,” she says. “I read about the research in this lab and thought it would really align with my interests. I’ve been here ever since!”

Conclusion: The Lab, in Practice

These PhD students work on diverse problems, yet all of their research is grounded in a core mission and philosophy, asking: how does the brain adapt, decide, and act in the world? And how do we train scientists to answer this question in a way that promotes curiosity and critical thinking in equal measure? As Dr. Blohm explains, he wants to give his students “the tools and critical thinking and perspectives [to find] their own paths.”

The work coming out of Dr. Blohm’s lab reminds us that the future of neuroscience lies in embracing uncertainty and complexity. Whether it’s a neuron making a choice, a body moving abnormally, or a brain adapting to vision loss, the questions that matter are rarely the easiest ones to answer. What connects these researchers isn’t a single method or problem, but a philosophy that progress begins with curiosity, collaboration, and the courage to ask meaningful questions.

As Dr. Blohm explains, “the future of science lies in complex science.”

Breaking the Silence: Sonja Bonar’s Quest to Decode Internal Speech /connected-minds/breaking-the-silence Tue, 22 Apr 2025 23:51:12 +0000 /connected-minds/?p=559 Imagine your thoughts—your needs, your questions, your feelings—held captive, with no reliable way to be heard. This is the daily reality for many individuals with communication disabilities, and it’s the mission that drives ϳԹԴ PhD candidate and Connected Minds trainee Sonja Bonar in her quest to develop brain-computer interfaces (BCIs) that give voice to silent thoughts. Working at the intersection of neuroscience, engineering and human connection, Sonja is developing BCIs that translate internal speech, also known as covert speech, into meaningful communication. As a researcher in the Building and Designing Assistive Technology Lab supervised by Dr. Claire Davies, she’s taking on one of the most nuanced challenges in neurotechnology: enabling individuals with communication disabilities to communicate using covert speech alone.

Sonja’s path into this research area didn’t follow a straight line. “I was interested in prosthetics at the beginning [of graduate school],” she explained during a recent interview. Early on in her time at Queen’s, she became involved in a side project that would ultimately reshape her research direction: observing focus groups that brought together individuals who use augmentative and alternative communication (AAC) devices, tools that help people with motor and communication impairments express themselves, along with their caregivers and device manufacturers. “A parent said, ‘I wish we could just have direct thought-to-communication devices.’ And I was like, OK, well, why can’t we?”

Sonja Bonar, PhD Candidate at Queen’s University

That question became the foundation of her doctoral research. Today, Sonja is focused on decoding covert speech from brain signals to build BCIs for individuals who cannot rely on traditional forms of communication.

Rewriting the Rules of Speech Development

Sonja’s work challenges a longstanding psychological theory by Lev Vygotsky, which holds that covert (or inner) speech can only develop from spoken dialogue. “Just by looking at that theory, it excludes populations that have not been able to communicate reliably since development, or individuals with developmental communication impairments,” Sonja said. To explore the assumptions underlying this theory, she conducted a survey with adults who have developmental communication and motor impairments.  “What I found from the survey is that this population, who has never been able to reliably speak out loud… actually can develop covert speech.”  These findings suggest that inner speech can develop even in the absence of spoken dialogue, calling into question the dominant hypotheses on the form of covert speech. “If this population can use covert speech, this is potentially a more intuitive or natural input for a BCI compared to other input methods,” Sonja explains.

Traditional communication devices often rely on methods like eye-tracking or visually evoked potentials—electrical signals recorded from the brain in response to visual stimuli—where users focus on flashing letters to spell out words. Recently, motor imagery has emerged as a promising BCI input for AAC devices, requiring users to imagine physical movements, such as moving a hand or articulating with the mouth, to trigger a response. But for individuals who have never reliably spoken or performed these movements, this type of imagery can be abstract, cognitively demanding, or difficult to use due to their lack of motor experience and the system’s reliance on consistent, learned patterns. Covert speech, by contrast, may offer a more direct and intuitive path from thought to communication.

In the current phase of her research, Sonja is exploring whether covert speech can be reliably decoded from brain activity. She is currently recording electroencephalography (EEG) data from typically developing adults as they silently respond to simple yes-or-no questions. “They’re asked questions that have obvious yes or no answers,” she explains. “I wanted them to be asked questions audibly because that would be the most realistic in any sort of interaction in real life.”

Sonja Bonar sets up her 60+ channel EEG system for a covert speech experiment

Her early results are promising. In a pilot study, she was able to distinguish between participants’ internal “yes” and “no” responses with approximately 83% accuracy, suggesting that decoding covert speech may be feasible. Encouraged by these findings, Sonja plans to extend the study to adults with developmental communication and motor impairments, focusing on whether similar neural patterns can be observed across participant populations.
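The article doesn’t describe Sonja’s actual decoding pipeline, but the general shape of this kind of binary EEG classification is well established: cut the recording into per-question epochs, extract a feature such as spectral power per channel, and train a linear classifier evaluated with cross-validation. The toy sketch below runs that recipe on synthetic data; the sampling rate, channel count, frequency band, 10 Hz rhythm, and choice of linear discriminant analysis are all illustrative assumptions, not details of her study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS = 250            # assumed sampling rate (Hz)
N_CH, N_T = 8, 500  # channels, samples per 2-second epoch

def simulate_epoch(amp):
    """One synthetic epoch: broadband noise plus a 10 Hz rhythm of given amplitude."""
    t = np.arange(N_T) / FS
    noise = rng.standard_normal((N_CH, N_T))
    return noise + amp * np.sin(2 * np.pi * 10 * t)

# Two synthetic "classes" whose 10 Hz power differs -- a stand-in
# for whatever neural contrast separates covert "yes" from "no"
X_raw = [simulate_epoch(1.0) for _ in range(40)] + [simulate_epoch(0.2) for _ in range(40)]
y = np.array([1] * 40 + [0] * 40)

def bandpower_features(epoch, lo=8.0, hi=12.0):
    """Log power in the lo-hi Hz band, one feature per channel."""
    freqs = np.fft.rfftfreq(N_T, d=1 / FS)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return np.log(psd[:, band].mean(axis=1))

X = np.array([bandpower_features(e) for e in X_raw])

# Linear classifier, scored with 5-fold cross-validation
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

On real EEG the class difference is far subtler than this simulation, which is part of why a pilot accuracy in the low eighties is a meaningful result rather than a near-ceiling one.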

From Academia to Industry: An Internship with Impact

As her research continues to push the boundaries of what’s possible in brain-computer interface design, Sonja is also stepping into the world of industry. This summer, she’s joining VIBRAINT, a Toronto-based neurotechnology startup, for a Connected Minds-sponsored internship that will immerse her in the applied side of BCI systems.

“VIBRAINT works on motor rehabilitation with brain-computer interface technologies and VR… through decoded motor imagery tasks (with EEG) where a client’s arm is moved by a robotic arm manipulator to match their intended movement,” she explains. While her own research centers on communication rather than movement, Sonja recognizes a valuable connection between the two domains. “This work is very complementary to my current project… Motor imagery is a common BCI input method for communication devices, so it’s [interesting] to see the process of decoding motor imagery up close.”

The opportunity emerged through a connection made through Connected Minds. “I got my internship through a connection that I made at Connected Minds, Dr. Lauren Sergio from York University,” Sonja says. “I had been interested in VIBRAINT’s work months before I was in contact with them… I remembered Dr. Sergio from the VIBRAINT website… When I reached out to her, she was able to put me in contact with VIBRAINT.”

Looking Ahead

Sonja’s research offers a hopeful glimpse into the future of communication—but it also highlights the practical limitations of current technology. “The device I use takes two hours to set up,” she explains, citing the time-consuming process of adjusting sensors, troubleshooting connections, and managing the bulky equipment.  “There are a lot of ways that it’s still impractical as a communication device… it’s definitely not usable [in everyday settings].”

Still, the promise of her research extends beyond proof-of-concept studies. By demonstrating that covert speech can be decoded accurately, Sonja’s work lays the foundation for a new class of assistive technologies—tools that are not only scientifically viable but are designed for real-world use. It’s a step toward more portable, accessible BCI systems that could one day offer seamless communication for those who need it most.
