<h1>Breaking the Silence: Sonja Bonar&#8217;s Quest to Decode Internal Speech</h1>

<p><em>By Erika Johannessen &middot; April 22, 2025 &middot; Connected Minds</em></p>

<p>Imagine your thoughts&#8212;your needs, your questions, your feelings&#8212;held captive, with no reliable way to be heard. This is the daily reality for many individuals with communication disabilities, and it&#8217;s what drives Queen&#8217;s PhD candidate and Connected Minds trainee Sonja Bonar in her quest to develop brain-computer interfaces (BCIs) that give voice to silent thoughts. Working at the intersection of neuroscience, engineering, and human connection, Sonja is building BCIs that translate internal speech, also known as covert speech, into meaningful communication. As a researcher in the Building and Designing Assistive Technology Lab, supervised by Dr. Claire Davies, she&#8217;s taking on one of the most nuanced challenges in neurotechnology: enabling individuals with communication disabilities to communicate using covert speech alone.</p>

<p>Sonja&#8217;s path into this research area didn&#8217;t follow a straight line. &#8220;I was interested in prosthetics at the beginning [of graduate school],&#8221; she explained during a recent interview. Early in her time at Queen&#8217;s, she became involved in a side project that would ultimately reshape her research direction: observing focus groups that brought together users of augmentative and alternative communication (AAC) devices, which help people with motor and communication impairments express themselves, along with their caregivers and device manufacturers. &#8220;A parent said, &#8216;I wish we could just have direct thought-to-communication devices.&#8217; And I was like, OK, well, why can&#8217;t we?&#8221;</p>

<figure class="aligncenter"><img src="https://cmblog.neuroscience.queensu.ca/wp-content/uploads/2025/04/Sonja-Headshot.jpg" alt="Sonja Bonar, PhD Candidate at Queen's University" width="768" height="1024" /><figcaption class="wp-element-caption">Sonja Bonar, PhD Candidate at Queen&#8217;s University</figcaption></figure>

<p>That question became the foundation of her doctoral research. Today, Sonja is focused on decoding covert speech from brain signals to build BCIs for individuals who cannot rely on traditional forms of communication.</p>

<h2 class="wp-block-heading"><strong>Rewriting the Rules of Speech Development</strong></h2>

<p>Sonja&#8217;s work challenges a longstanding psychological theory by Lev Vygotsky, which holds that covert (or inner) speech can only develop from spoken dialogue. &#8220;Just by looking at that theory, it excludes populations that have not been able to communicate reliably since development, or individuals with developmental communication impairments,&#8221; Sonja said. To test the assumptions underlying this theory, she conducted a survey of adults with developmental communication and motor impairments. &#8220;What I found from the survey is that this population, who has never been able to reliably speak out loud&#8230; actually can develop covert speech.&#8221; These findings suggest that inner speech can develop even in the absence of spoken dialogue, calling into question the dominant hypotheses about how covert speech forms. &#8220;If this population can use covert speech, this is potentially a more intuitive or natural input for a BCI compared to other input methods,&#8221; Sonja explains.</p>

<p>Traditional communication devices often rely on methods like eye-tracking or visually evoked potentials&#8212;electrical signals recorded from the brain in response to visual stimuli&#8212;where users focus on flashing letters to spell out words. More recently, motor imagery has emerged as a promising BCI input for AAC devices: users imagine physical movements, such as moving a hand or articulating with the mouth, to trigger a response. But for individuals who have never reliably spoken or performed these movements, this type of imagery can be abstract, cognitively demanding, or difficult to use, given their lack of motor experience and the system&#8217;s reliance on consistent, learned patterns. Covert speech, by contrast, may offer a more direct and intuitive path from thought to communication.</p>

<p>In the current phase of her research, Sonja is testing whether covert speech can be reliably decoded from brain activity. She is recording electroencephalography (EEG) data from typically developing adults as they silently respond to simple yes-or-no questions. &#8220;They&#8217;re asked questions that have obvious yes or no answers,&#8221; she explains. &#8220;I wanted them to be asked questions audibly because that would be the most realistic in any sort of interaction in real life.&#8221;</p>

<figure class="aligncenter"><img src="https://cmblog.neuroscience.queensu.ca/wp-content/uploads/2025/04/Experimental-Setup.jpg" alt="Sonja Bonar sets up her 60+ channel EEG system for a covert speech experiment" width="768" height="1024" /><figcaption class="wp-element-caption">Sonja Bonar sets up her 60+ channel EEG system for a covert speech experiment</figcaption></figure>

<p>Her early results are promising. In a pilot study, she was able to distinguish between participants&#8217; internal &#8220;yes&#8221; and &#8220;no&#8221; responses with approximately 83% accuracy, suggesting that decoding covert speech may be feasible. Encouraged by these findings, Sonja plans to extend the study to adults with developmental communication and motor impairments, focusing on whether similar neural patterns can be observed across participant populations.</p>

<h2 class="wp-block-heading"><strong>From Academia to Industry: An Internship with Impact</strong></h2>

<p>As her research continues to push the boundaries of what&#8217;s possible in brain-computer interface design, Sonja is also stepping into the world of industry. This summer, she&#8217;s joining <a href="https://vibraint.ai/">VIBRAINT Inc.</a>, a Toronto-based neurotechnology startup, for a Connected Minds-sponsored internship that will immerse her in the applied side of BCI systems.</p>

<p>&#8220;VIBRAINT works on motor rehabilitation with brain-computer interface technologies and VR&#8230; through decoded motor imagery tasks (with EEG) where a client&#8217;s arm is moved by a robotic arm manipulator to match their intended movement,&#8221; she explains. While her own research centers on communication rather than movement, Sonja sees a valuable connection between the two domains. &#8220;This work is very complementary to my current project&#8230; Motor imagery is a common BCI input method for communication devices, so it&#8217;s [interesting] to see the process of decoding motor imagery up close.&#8221;</p>

<p>The opportunity emerged through a connection made through Connected Minds. &#8220;I got my internship through a connection that I made at Connected Minds, Dr. Lauren Sergio from York University,&#8221; Sonja says. &#8220;I had been interested in VIBRAINT&#8217;s work months before I was in contact with them&#8230; I remembered Dr. Sergio from the VIBRAINT website&#8230; When I reached out to her, she was able to put me in contact with VIBRAINT.&#8221;</p>

<h2 class="wp-block-heading">Looking Ahead</h2>

<p>Sonja&#8217;s research offers a hopeful glimpse into the future of communication&#8212;but it also highlights the practical limitations of current technology. &#8220;The device I use takes two hours to set up,&#8221; she explains, citing the time-consuming process of adjusting sensors, troubleshooting connections, and managing the bulky equipment. &#8220;There are a lot of ways that it&#8217;s still impractical as a communication device&#8230; it&#8217;s definitely not usable [in everyday settings].&#8221;</p>

<p>Still, the promise of her research extends beyond proof-of-concept studies. By demonstrating that covert speech can be decoded accurately, Sonja&#8217;s work lays the foundation for a new class of assistive technologies&#8212;tools that are not only scientifically viable but also designed for real-world use. It&#8217;s a step toward more portable, accessible BCI systems that could one day offer seamless communication for those who need it most.</p>