Erika Johannessen – Connected Minds
Neural and Machine Systems for a Healthy, Just Society
https://cmblog.neuroscience.queensu.ca

Human-in-the-Loop Is Not a Technical Detail. It’s an Ethical Position
Fri, 16 Jan 2026

Artificial intelligence is usually discussed in terms of what it can do. How accurate is it? How fast? How much work can it take off our plates?

But performance is only part of the story. When AI is involved in decisions that affect people’s lives, someone has to be responsible. The question is who.

Across very different fields, from engineering labs to medical education, the same idea keeps surfacing. Keeping humans “in the loop” is not a temporary safeguard or a design choice to be optimized. It is a moral decision. It is a way of saying that efficiency does not replace accountability, and that judgment cannot be fully automated.

This tension came up repeatedly in my conversations with two researchers working at very different points along the AI pipeline.

At the technical end of this pipeline, “human-in-the-loop” is often discussed as a procedural detail. It is treated as a feature to be added early and removed later, once systems become more capable and trustworthy. But this framing misses what is actually at stake. When AI systems are designed to recognize people, interpret behavior, infer emotional or cognitive states, and personalize responses in real time, oversight is no longer just a design choice.

Dr. Ali Etemad, an Associate Professor in the Department of Electrical and Computer Engineering at Queen’s University, works on human-centred AI that draws on many types of data, including text, images, audio, and biological signals, to understand who people are, what they are doing, and how they are feeling.

Dr. Ali Etemad, Queen’s Department of Electrical and Computer Engineering

In this context, his concern is not simply that AI systems make mistakes, but that their errors can carry an unwarranted sense of authority.

“Hallucinations are a big problem in language models where the model generates output that sounds real but is not real. This has happened to me many, many times where I’ve asked for a reference about a particular thing and it makes up a legitimate sounding paper that doesn’t actually exist.”

The risk here is not simply incorrect information. It is the confidence with which that information is delivered. Systems that speak fluently and persuasively can make errors feel settled and final, even when they are not. When human oversight becomes symbolic rather than active, people stop questioning outputs that sound convincing. Keeping a human in the loop is not about checking grammar or fixing small errors. It is about maintaining responsibility for accuracy. Someone must still decide whether an answer is reasonable, appropriate, and safe.

The same ethical tension shows up downstream, when AI systems move into real decision-making environments. In postgraduate education, for example, large language models (LLMs) are increasingly used to summarize resident evaluations, analyze feedback, and help promotion committees manage large volumes of data about individual medical trainees. These tools promise efficiency, and they often deliver it. But they also quietly reshape how decisions are made.

Dr. Benjamin Kwan, an Assistant Professor and Neuroradiologist at Queen’s University, Educational Innovations Lead in Postgraduate Medical Education (PGME) and Faculty Research Director for Diagnostic Radiology, works directly at this intersection, researching how large language models are used to support assessment and decision-making in postgraduate medical education. For him, the limits are clear.

Dr. Benjamin Kwan, Faculty Research Director for Diagnostic Radiology at Queen’s

“I don’t think we should ever replace the human in these decisions. We should always have what they call the ‘professor-in-the-loop’ or the ‘teacher-in-the-loop’ to make sure everything is appropriate.”

This perspective does not imply resistance to technology. It is an acknowledgment of responsibility. When evaluative authority shifts, even subtly, from people to systems, accountability becomes harder to pin down. If an AI-assisted recommendation disadvantages a trainee, who answers for that outcome? The algorithm? The institution? Or the human who deferred too easily to an LLM?

When decisions are delegated, responsibility does not disappear. It just becomes harder to trace.

Machines Don’t Wash Bias Clean

Automation is often framed as a solution to human bias. The logic is appealing: remove people from the process and decisions become more objective. But this logic collapses under closer scrutiny. From a technical perspective, Dr. Etemad makes clear that bias is not just a problem of flawed or unbalanced data. It can be introduced much earlier, through choices that are rarely visible to end users and are often treated as purely technical.

“We proved that certain training algorithms with the same data and the same model could make the model more biased or less biased. Using certain algorithms to train neural networks could have a huge impact on how biased these models will become when they are released.”

This matters because it undermines a widely held assumption—that holding data constant guarantees consistent outcomes. As Dr. Etemad’s work shows, that assumption fails. Models trained on the same data diverge significantly depending on design and training choices. Bias, in other words, is not just inherited. It can be built in through decisions embedded deep within the technical pipeline.

Dr. Etemad also points out that addressing bias is rarely straightforward because improving fairness often involves trade-offs that cannot be resolved by optimization alone.

“There is a fairness–performance trade-off. One way to increase fairness is to penalize the better-performing group so that everyone performs equally poorly, but that’s not really what we want.”
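The equalizing-down failure mode Dr. Etemad describes can be made concrete with a toy calculation. The per-group accuracies below are invented for illustration; nothing here comes from his experiments:

```python
# Invented per-group accuracies for a hypothetical model; nothing here
# comes from Dr. Etemad's experiments.
acc = {"group_a": 0.92, "group_b": 0.78}

def parity_gap(accuracies):
    """Accuracy gap between the best- and worst-served group."""
    return max(accuracies.values()) - min(accuracies.values())

# "Equalizing down": cap every group at the worst group's accuracy.
floor = min(acc.values())
equalized = {group: floor for group in acc}

print(round(parity_gap(acc), 2))        # 0.14 gap, but group_a is well served
print(round(parity_gap(equalized), 2))  # 0.0 gap: perfectly "fair"
print(sum(equalized.values()) / len(equalized))  # mean accuracy drops to 0.78
```

The gap vanishes, but only because the better-served group was dragged down to the worse one. Nothing in the arithmetic says which trade-off is acceptable; that remains a value judgment.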

Deciding what counts as “fair enough” is not a technical judgment. It requires values. That same tension shows up clearly in educational settings. Dr. Kwan is cautious about using AI systems to make or strongly influence decisions about progress, not only because they are informed by past evaluations (which reflect their own judgments and constraints), but because the systems themselves encode assumptions about what should count as performance, risk, or success.

“You probably wouldn’t want to use a tool that will determine if somebody passes or fails solely because of all these potential bias problems.”

The concern is not abstract. In practice, automated systems tend to present their outputs as neutral summaries or recommendations, such as a score, a ranking, or a flagged concern. Once framed this way, decisions can feel less open to discussion. When a person makes a biased decision, it can be questioned. They can be asked to explain their reasoning. But when a system produces the same outcome, the decision often feels procedural, as though it simply followed the rules.

This shift matters. Bias does not need to be extreme to be harmful. It only needs to become harder to challenge.

Human-in-the-Loop Is Our Responsibility

These perspectives are connected by a shared understanding. Insisting on human oversight is not about distrusting AI. It is about refusing to abandon our responsibility as those who design and deploy it. For Dr. Etemad, this responsibility is most visible in the problem of alignment. In systems that interact closely with people, the central question is not simply whether an AI system performs well, but whether its outputs reflect the priorities and values it is meant to serve.

“One of the important sub-areas in this context of AI that interacts with humans is responsible AI. One of the ways to address that is alignment. Can we develop models that are aligned with a set of values that we’re interested in?”

Alignment, in this sense, cannot be achieved through optimization alone. It requires human judgement about which values matter, how they should be balanced, and when a system’s output must be constrained or overridden. From this viewpoint, human-in-the-loop systems are not a temporary safeguard. They are one of the few ways to ensure that responsibility is upheld. Someone still has to stand behind the decisions an AI makes.

In professions like medicine and education, this issue is critical. These fields depend on trust and their decisions shape real lives. As Dr. Benjamin Kwan put it, AI may become an increasingly powerful assistant, but it should never be the final authority.

“AI will be a good helper… another voice at the table.”

A voice, not a verdict.

The Question We Should Be Asking

We spend a lot of time debating what AI will eventually be capable of doing. The more important question is what we should ask it to do. Human-in-the-loop systems are slower. They require explanation. They force people to stay engaged. That is exactly why they matter.

Because automated systems don’t take responsibility. People do.

Decision Intelligence: The Data-Driven Future of Healthcare Resource Planning
Mon, 14 Jul 2025

It starts with a closed door.

A rural emergency room shuts down without warning. It’s late. It’s snowing. The nearest hospital is hundreds of kilometers away and there’s no family doctor to call. In the backseat, a patient clutches their chest, growing colder by the minute. That closed door isn’t just an inconvenience. It could be the difference between life and death.

Across Ontario and throughout Canada, temporary ER closures have become a symbol of a healthcare system under immense strain, where staffing shortages, vast distances, and surging demand converge to create serious risks for patient care. In 2023 alone, there were over 600 temporary emergency department closures in rural Ontario, often leaving residents with no choice but to travel long distances to receive urgent care.1

But what if we could prevent these closures before they happen?

That’s the question driving Dr. Salimur Choudhury, a computer scientist at Queen’s University and founder of the GOAL Lab (short for Global Optimization, Analytics, and Learning). His lab builds decision-making tools powered by algorithms, data, and AI, tools that could help healthcare administrators decide, in real time, how to deploy limited resources where they’re needed most.

Dr. Salimur Choudhury of Queen’s University is developing AI-driven tools to help healthcare leaders make faster, smarter decisions

“One of my research priority areas is optimizing health care resources,” he said. “Over the past few years, we’ve been developing data-intensive, data-driven methods to support health care policymakers.”

At its core, Choudhury’s research addresses a deceptively simple question: how do we allocate scarce resources – physicians, supplies, or even transportation routes – in a way that supports real people in real time?

Some of his most urgent work focuses on ER closures in Northern Ontario. With limited staff supporting a vast geographical area, hospitals in the north frequently shut their emergency departments on evenings and weekends. For patients, these closures can mean driving for hours to find help, often in hazardous winter conditions. “We’re analyzing patterns in ER visits and studying which emergency rooms are closing and when,” he explained. “The idea is to use data to guide decisions, to figure out when and where closures can be avoided.”

But the barriers to care in Northern communities go beyond hospital staffing. They also include the seasonal roads that connect them. Many remote areas rely on winter roads—temporary routes built over frozen lakes and waterways—that are only operational for a few weeks each year. Climate change is shortening these windows by causing warmer winters and unpredictable freeze-thaw cycles, cutting off critical links between patients and healthcare. Choudhury’s lab is using geospatial and climate data to anticipate these risks. “We are combining satellite imagery and trying to analyze which winter roads are disappearing,” he said, “because if and when that road is closed, the ER will also close, and that may impact the service delivery.”
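As a rough sketch of how such a risk flag might be computed, consider a toy model in which a winter road only "opens" after a sustained stretch of freezing days and "closes" at the first thaw. The temperature threshold and day counts below are illustrative assumptions, not values from the GOAL Lab's analysis:

```python
def winter_road_window(daily_mean_temps, freeze_threshold=-5.0, days_to_open=14):
    """Estimate which days a winter road is operational.

    A crude proxy: the road 'opens' after `days_to_open` consecutive days
    with mean temperature at or below `freeze_threshold`, and 'closes' as
    soon as a day rises above it. Thresholds are illustrative, not
    engineering standards.
    """
    open_days = []
    cold_run = 0
    is_open = False
    for day, temp in enumerate(daily_mean_temps):
        if temp <= freeze_threshold:
            cold_run += 1
            if cold_run >= days_to_open:
                is_open = True
        else:
            cold_run = 0
            is_open = False
        if is_open:
            open_days.append(day)
    return open_days

# A warm mid-winter thaw (day 20) resets the freeze count and closes the road:
# two 20-day cold spells yield only two short operational windows.
temps = [-10.0] * 20 + [2.0] + [-10.0] * 20
window = winter_road_window(temps)
print(len(window))  # 14 operational days in total
```

Even this crude model shows why a single warm spell matters disproportionately: one thaw day costs the community far more than one day of access.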

When winter roads close, so can access to emergency care, leaving remote communities stranded without critical health services.

By identifying communities at risk of being stranded, his team aims to help decision-makers plan ahead, reallocating resources before patients are left without options. To make this reallocation possible, his lab is building mathematical models that support regional coordination, enabling better use of travel nurses, shared staff pools, and dynamic scheduling, all grounded in patient and hospital data. While access to these data remains a challenge, Dr. Choudhury’s team is painstakingly gathering what it can from public sources.
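A minimal sketch of what such a coordination model might look like, with invented site names, hours, and a greedy heuristic standing in for the lab's far richer optimization models:

```python
# Hypothetical weekly uncovered ER hours per site, and the hours a single
# travel nurse can cover; all numbers are illustrative assumptions.
uncovered = {"Site A": 40, "Site B": 25, "Site C": 60}

def assign_nurses(uncovered, nurses, coverage=30):
    """Greedy heuristic: send each nurse to the site with the most
    uncovered hours. Real regional-coordination models would add
    travel time, staff preferences, and scheduling constraints."""
    remaining = dict(uncovered)
    plan = []
    for _ in range(nurses):
        worst = max(remaining, key=remaining.get)
        plan.append(worst)
        remaining[worst] = max(0, remaining[worst] - coverage)
    return plan, remaining

plan, remaining = assign_nurses(uncovered, nurses=2)
print(plan)                     # ['Site C', 'Site A']
print(sum(remaining.values()))  # 65 hours still uncovered
```

The point of the sketch is the shape of the problem, not the heuristic: once demand and capacity are quantified, "where should the next nurse go?" becomes a computable question rather than a guess.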

But Choudhury isn’t stopping at ERs. In another project, his team is helping reduce administrative burden in family medicine. “In family health teams, physicians often spend a lot of time filling out forms, but not all of them need to be completed by doctors,” he said. “We’re looking at how we can redistribute that work to other allied professionals.”

By analyzing health records from the Queen’s Family Health Team and partnering with a start-up company, the lab developed an AI-based autofill system for medical referral forms, saving time for doctors and potentially improving workflow across team-based care. “The time savings for physicians were significant,” Choudhury said.

These innovations point to a broader shift that Choudhury is pursuing — one where intelligent systems don’t just offer recommendations, but take action to solve and prevent problems in real time.

“What I’m envisioning is called decision intelligence,” he explained, “where systems don’t just analyze data and generate algorithms but actually make the decisions themselves.”

This vision may sound futuristic, but his team is already working on it. In one project, they’re building a tool that converts natural language (plain, human descriptions of a problem) into the mathematical equations needed to solve it. For example, someone might type out their scheduling or logistics challenge, and the algorithm would automatically generate an optimization model and code, ready to run.

“The idea is that a user could describe their problem in natural language,” he said, “and the system would generate the equations and code needed to solve it.”
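A toy version of this idea can be sketched with a regular expression standing in for the learned model. Everything here, from the single English phrasing it accepts to the brute-force solver it builds, is a simplifying assumption; the system Choudhury describes would generate genuine optimization models and code:

```python
import re
from itertools import product

def solve_from_text(description):
    """Toy sketch of 'natural language in, optimization out'.
    A regex handles one fixed phrasing; the described system would
    translate free-form problem descriptions into real models."""
    nurses = int(re.search(r"(\d+) nurses", description).group(1))
    coverage = int(re.search(r"covering (\d+) hours", description).group(1))
    demand_clause = re.search(r"needing (.+?) hours", description).group(1)
    demands = [int(d) for d in re.findall(r"\d+", demand_clause)]

    # Build and solve the implied assignment problem by brute force:
    # try every way to send each nurse to a site, keep the best total.
    best = None
    for combo in product(range(len(demands)), repeat=nurses):
        left = demands[:]
        for site in combo:
            left[site] = max(0, left[site] - coverage)
        total = sum(left)
        best = total if best is None else min(best, total)
    return best

problem = "Send 2 nurses, each covering 30 hours, to sites needing 40, 25 and 60 hours"
print(solve_from_text(problem))  # 65 uncovered hours at the optimum
```

The hard part, of course, is the step the regex fakes: mapping open-ended human descriptions onto the right variables, objective, and constraints.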

Ultimately, these tools are designed to help healthcare leaders and government agencies make faster, smarter and more equitable decisions, especially in communities where resources are stretched. But for research like this to drive real-world change, it has to reach the people in charge of policy and planning. That’s why Choudhury is strategic about where he publishes his work.

“When you develop a particular methodology, some of it needs to be published in computer science journals,” he said. “But we also need to reach the right audience, and healthcare policymakers may never read a standard computer science journal.”

To bridge this gap, Choudhury not only publishes in technical outlets, but also seeks opportunities in health-sector venues where clinical leaders and system planners are more likely to engage. These opportunities include journals, conferences and collaborations with clinicians. By tailoring communications to this audience, Choudhury ensures his research doesn’t just push the boundaries of algorithm design, but informs better care, smarter systems and more responsive policy.

Despite the complexity of his work, Choudhury is focused on practical impact. He partners with hospitals, startups and government agencies, and trains the next generation of researchers in this field. And he continues to ask how novel algorithms can help to solve systemic problems in healthcare.

“We’re trying to find small algorithmic approaches,” he said, “and connect those dots to make a meaningful difference.”

In a healthcare system buckling under pressure, these small things might just be the breakthroughs we need. But the significance of this work goes far beyond technical achievement. By designing systems that can allocate resources intelligently, deliver care more efficiently, and anticipate infrastructure-related challenges before they become crises, Choudhury’s research lays the foundation for a healthcare system that is more equitable, accessible, and resilient — one that serves everyone, no matter their postal code.


  1. Rural Ontario Municipal Association. (2024). Fill the gaps: Bringing care closer to home in rural and northern Ontario. https://www.roma.on.ca/sites/default/files/assets/IMAGES/Home/ROMA%20Report%20-%20Fill%20the%20Gaps%20Closer%20to%20Home%20January%2021%202024%20FINAL%20Draft-Reduced.pdf

Breaking the Silence: Sonja Bonar’s Quest to Decode Internal Speech
Tue, 22 Apr 2025

Imagine your thoughts—your needs, your questions, your feelings—held captive, with no reliable way to be heard. This is the daily reality for many individuals with communication disabilities, and it’s the mission that drives Queen’s University PhD candidate and Connected Minds trainee Sonja Bonar in her quest to develop brain-computer interfaces (BCIs) that give voice to silent thoughts. Working at the intersection of neuroscience, engineering and human connection, Sonja is developing BCIs that translate internal speech, also known as covert speech, into meaningful communication. As a researcher in the Building and Designing Assistive Technology Lab supervised by Dr. Claire Davies, she’s taking on one of the most nuanced challenges in neurotechnology: enabling individuals with communication disabilities to communicate using covert speech alone.

Sonja’s path into this research area didn’t follow a straight line. “I was interested in prosthetics at the beginning [of graduate school],” she explained during a recent interview. Early in her time at Queen’s, she became involved in a side project that would ultimately reshape her research direction: observing focus groups that brought together users of augmentative and alternative communication (AAC) devices (tools that help people with motor and communication impairments express themselves), along with their caregivers and device manufacturers. “A parent said, ‘I wish we could just have direct thought-to-communication devices.’ And I was like, OK, well, why can’t we?”

Sonja Bonar, PhD Candidate at Queen’s University

That question became the foundation of her doctoral research. Today, Sonja is focused on decoding covert speech from brain signals to build BCIs for individuals who cannot rely on traditional forms of communication.

Rewriting the Rules of Speech Development

Sonja’s work challenges a longstanding psychological theory by Lev Vygotsky, which holds that covert (or inner) speech can only develop from spoken dialogue. “Just by looking at that theory, it excludes populations that have not been able to communicate reliably since development, or individuals with developmental communication impairments,” Sonja said. To explore the assumptions underlying this theory, she conducted a survey with adults who have developmental communication and motor impairments. “What I found from the survey is that this population, who has never been able to reliably speak out loud… actually can develop covert speech.” These findings suggest that inner speech can develop even in the absence of spoken dialogue, calling into question the dominant hypotheses about how covert speech forms. “If this population can use covert speech, this is potentially a more intuitive or natural input for a BCI compared to other input methods,” Sonja explains.

Traditional communication devices often rely on methods like eye-tracking or visually evoked potentials—electrical signals recorded from the brain in response to visual stimuli—where users focus on flashing letters to spell out words. Recently, motor imagery has emerged as a promising BCI input for AAC devices, requiring users to imagine physical movements, such as moving a hand or articulating with the mouth, to trigger a response. But for individuals who have never reliably spoken or performed these movements, this type of imagery can be abstract, cognitively demanding, or difficult to use due to their lack of motor experience and the system’s reliance on consistent, learned patterns. Covert speech, by contrast, may offer a more direct and intuitive path from thought to communication.

In the current phase of her research, Sonja is exploring whether covert speech can be reliably decoded from brain activity. She is currently recording electroencephalography (EEG) data from typically developing adults as they silently respond to simple yes-or-no questions. “They’re asked questions that have obvious yes or no answers,” she explains. “I wanted them to be asked questions audibly because that would be the most realistic in any sort of interaction in real life.”

Sonja Bonar sets up her 60+ channel EEG system for a covert speech experiment

Her early results are promising. In a pilot study, she was able to distinguish between participants’ internal “yes” and “no” responses with approximately 83% accuracy, suggesting that decoding covert speech may be feasible. Encouraged by these findings, Sonja plans to extend the study to adults with developmental communication and motor impairments, focusing on whether similar neural patterns can be observed across participant populations.
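The shape of such a yes/no decoding pipeline can be sketched on synthetic data. The "EEG features" below are random numbers with an artificially injected class difference, so the result says nothing about real covert-speech decoding; it only illustrates the classify-and-cross-validate loop behind an accuracy figure like Sonja's:

```python
import random

random.seed(0)

# Synthetic stand-in for EEG trials: each trial is a small vector of
# band-power-like values. Real decoding uses 60+ channels and far
# richer features; the class separation here is injected by hand.
def make_trial(label):
    base = 1.0 if label == "yes" else 1.4  # hypothetical class offset
    return [base + random.gauss(0, 0.3) for _ in range(8)]

trials = [(make_trial(lbl), lbl) for lbl in ["yes", "no"] * 40]

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-out evaluation of a nearest-centroid classifier: hold out
# each trial in turn, fit centroids on the rest, predict the held-out one.
correct = 0
for i, (x, label) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]
    cents = {lbl: centroid([v for v, l in train if l == lbl])
             for lbl in ("yes", "no")}
    pred = min(cents, key=lambda lbl: dist2(x, cents[lbl]))
    correct += pred == label
accuracy = correct / len(trials)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

The essential discipline the sketch preserves is that each trial is scored by a model that never saw it, which is what makes a reported accuracy meaningful.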

From Academia to Industry: An Internship with Impact

As her research continues to push the boundaries of what’s possible in brain-computer interface design, Sonja is also stepping into the world of industry. This summer, she’s joining VIBRAINT, a Toronto-based neurotechnology startup, for a Connected Minds-sponsored internship that will immerse her in the applied side of BCI systems.

“VIBRAINT works on motor rehabilitation with brain-computer interface technologies and VR… through decoded motor imagery tasks (with EEG) where a client’s arm is moved by a robotic arm manipulator to match their intended movement,” she explains. While her own research centers on communication rather than movement, Sonja recognizes a valuable connection between the two domains. “This work is very complementary to my current project… Motor imagery is a common BCI input method for communication devices, so it’s [interesting] to see the process of decoding motor imagery up close.”

The opportunity emerged through a connection made through Connected Minds. “I got my internship through a connection that I made at Connected Minds, Dr. Lauren Sergio from York University,” Sonja says. “I had been interested in VIBRAINT’s work months before I was in contact with them… I remembered Dr. Sergio from the VIBRAINT website… When I reached out to her, she was able to put me in contact with VIBRAINT.”

Looking Ahead

Sonja’s research offers a hopeful glimpse into the future of communication—but it also highlights the practical limitations of current technology. “The device I use takes two hours to set up,” she explains, citing the time-consuming process of adjusting sensors, troubleshooting connections, and managing the bulky equipment. “There are a lot of ways that it’s still impractical as a communication device… it’s definitely not usable [in everyday settings].”

Still, the promise of her research extends beyond proof-of-concept studies. By demonstrating that covert speech can be decoded with accuracy, Sonja’s work lays the foundation for a new class of assistive technologies—tools that are not only scientifically viable but are designed for real-world use. It’s a step toward more portable, accessible BCI systems that could one day offer seamless communication for those who need it most.

]]>
Breaking the One-Size-Fits-All Model: Dr. Paul Hungler’s Vision for Smarter, Faster Learning
Tue, 18 Mar 2025

Think back to a time when you sat in a classroom, struggling to stay engaged. Maybe the lesson skimmed over a concept you barely understood, leaving you lost. Perhaps it dragged on about something you had already mastered, making you restless. Traditional education assumes that all learners progress at the same rate, marching through a shared curriculum without considering individual strengths, weaknesses, or prior knowledge.

What if education wasn’t a one-size-fits-all approach? Imagine learning math by solving problems, where the difficulty of the problems adjusts to the speed of your solutions. Imagine practicing a new language with an AI tutor that slows down its speech when you hesitate and speeds up as you gain confidence. Imagine a history lesson that presents material in pictures if you’re a visual learner or through storytelling if that’s how you most easily retain information. Adaptive learning isn’t imaginary. It’s the future of personalized education, where technology tailors curriculum to students, ensuring they’re appropriately supported and challenged as they learn.

At Queen’s University, Dr. Paul Hungler is pioneering research in adaptive learning to make this vision a reality. With a focus on engineering, he is leading the charge in developing personalized, technology-driven training experiences that prioritize efficiency, engagement, and competency-based progression. Dr. Hungler’s journey into adaptive learning was shaped by his 20-year tenure with the Royal Canadian Air Force. Overseeing online training, he saw firsthand how rigid, standardized instruction often failed to meet the diverse needs of learners, sparking his drive to find a more effective, personalized approach. “The military is not set up for that,” he explains. “Regardless of your trade or occupation, you take the same courses, get the same check marks, and take the same amount of time to get through. But when I was in charge of the whole system, I thought, this is pretty inefficient.”

This lack of flexibility is precisely the problem that adaptive learning and simulation-based training can solve. By integrating virtual reality (VR), augmented reality (AR), and adaptive algorithms, this approach engages learners in complex, life-like scenarios, ensuring they develop skills at their own pace. Dr. Hungler’s research is at the forefront of this transformation, making training more interactive and responsive to individual learning needs.

The Power of Adaptive Learning in Engineering and Beyond

Dr. Hungler’s research centers on intelligent, dynamically adaptive simulation. This approach customizes educational experiences in real-time, adjusting difficulty, pacing, and instructional content based on a learner’s progress and mental state. Using data from wearable sensors, machine learning techniques can decode subtle physiological cues—like a quickened pulse or dilated pupils—to gauge cognitive load and engagement, enabling the training system to adjust the simulation environment to an individual.

“The idea is that you get your own experience—it’ll be tailored to you, to your cognitive load, your expertise in a certain area. Your experience within a simulation will be very different from someone else’s,” he says.

For example, in engineering education, VR and AR can provide immersive, hands-on experiences that are otherwise impossible in a traditional classroom setting. Dr. Hungler developed a cutting-edge industrial facility in VR, allowing students to experiment with complex (and sometimes dangerous) systems without real-world consequences. He explains, “I built a chemical processing plant where students get to go in, do a tour, and then they get to change valves and do different things, which you would never be able to do in a real plant.” This kind of dynamic, experiential training has the potential to redefine education across disciplines, equipping learners with the skills and confidence they need to navigate real-world challenges long before they step into a physical workplace.

Hands-on learning, redefined: A student navigates a chemical processing plant in virtual reality

The Future of Education: Individualized and Immersive

The implications of adaptive learning extend far beyond engineering education. Medicine, aviation, and other high-stakes fields stand to benefit from this technology. Dr. Hungler is currently collaborating with an Ottawa-based flight training company to develop an immersive pilot simulator that moves beyond traditional, instructor-led training. “It’s fully sensored,” he notes.

“We have eye tracking—we know exactly where pilots are looking, whether they’re checking the right instruments, and we can provide precise feedback on their performance.” With this technology, he says, “we’ll be able to produce better pilots in a shorter time frame.”

Similarly, in the field of medicine, adaptive learning and simulation-based training are transforming how future healthcare professionals develop critical skills. Clinical simulation, powered by AI and augmented reality, allows medical learners to engage in lifelike scenarios where they can diagnose conditions, perform procedures, and respond to emergencies in a controlled setting. Dr. Hungler’s research focuses on making these simulation experiences more intelligent and responsive, adjusting complexity in real time based on a learner’s expertise, speed of decision-making, and cognitive load. “We can make [the scenario] more difficult, or we can make it easier,” he explains. This level of adaptability ensures that medical learners are consistently challenged at the right level, reinforcing skills without overwhelming learners.
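The control logic behind "make it more difficult, or make it easier" might be sketched, in heavily simplified form, as a rule that nudges difficulty using a cognitive-load estimate and recent performance. The thresholds, step size, and 1-10 level scale below are invented for illustration, not values from Dr. Hungler's systems:

```python
def adjust_difficulty(level, cognitive_load, correct, low=0.3, high=0.8):
    """One step of a toy adaptive-simulation controller.

    `cognitive_load` is a normalized 0-1 estimate (e.g. derived from
    heart rate and pupil data); the thresholds, step size, and 1-10
    level scale are invented for illustration.
    """
    if cognitive_load > high or not correct:
        return max(1, level - 1)   # overwhelmed or failing: ease off
    if cognitive_load < low and correct:
        return min(10, level + 1)  # cruising: raise the challenge
    return level                   # in the productive zone: hold steady

# A learner breezing through early scenarios, then hitting a wall.
level = 5
for load, correct in [(0.2, True), (0.25, True), (0.9, False), (0.5, True)]:
    level = adjust_difficulty(level, load, correct)
print(level)  # settles at 6 after the four steps
```

Real systems would replace each piece with something far richer, but the closed loop is the same: sense the learner, compare against a target zone, and adjust the scenario rather than the learner.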

Breaking the Mold: A Call for Change

Dr. Hungler’s work challenges the traditional structures of education, advocating for a system where students progress at their own pace based on their competencies, rather than arbitrary timelines.

“When you have choice in education and training, it’s powerful,” he emphasizes. “We’ve been too rigid for too long.”

As adaptive learning gains traction, Dr. Hungler’s work serves as a leading example for the future of education—one where learners are met where they are, guided at their own pace, and equipped with the skills they need to succeed.
