A New Frontier in Health and Wellness: Subvocalization Input to Neural Network Chatbot Audio Output
--
In Flow, TensorFlow, and Surface Electromyography (sEMG) for Deepfakes in Healthcare, I referenced a National Library of Medicine paper, "High-resolution surface electromyographic activities of facial muscles during mimic movements in healthy adults: A prospective observational study." sEMG can be used to improve subvocal recognition (SVR).
With sEMG-enhanced SVR, it becomes possible to build external, noninvasive devices that affix to the human jaw and ear, something as easy to put on and take off as over-ear headphones.
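As a minimal sketch of what sEMG-enhanced SVR involves at the signal level, the snippet below extracts windowed root-mean-square (RMS) features from a raw sEMG channel, a common preprocessing step before a classifier maps muscle activity to subvocal words. The signal here is simulated noise, and the window and step sizes are illustrative values, not parameters from any cited device.

```python
import numpy as np

def rms_features(emg, window=200, step=100):
    """Compute root-mean-square features over sliding windows of a raw
    sEMG channel. Window/step sizes are illustrative, not tuned."""
    feats = []
    for start in range(0, len(emg) - window + 1, step):
        seg = emg[start:start + window]
        feats.append(np.sqrt(np.mean(seg ** 2)))
    return np.array(feats)

# Simulated 1-second sEMG trace sampled at 1 kHz (stand-in for sensor data)
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 1000)
features = rms_features(signal)
print(features.shape)  # prints (9,)
```

In a real device, these per-window features would feed a trained model (e.g., in TensorFlow) that classifies which word or phoneme the jaw and laryngeal muscles are silently forming.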
If you’re saying, “PLEASE SPEAK PLAINLY!”, here it is. In 2023, it is possible to do the following:
- Stay quiet, and use “telepathy” to say something to a smartwatch, for example.
- Have the smartwatch connected to an artificial intelligence (AI) chatbot.
- Have the Chatbot reply to your wireless earphone.
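The three steps above can be sketched as a simple pipeline. Every function here is a hypothetical stand-in: a real build would replace recognize_subvocal with a trained SVR model, ask_chatbot with a call to an actual chatbot API, and speak with text-to-speech audio routed to the wireless earphone.

```python
def recognize_subvocal(emg_frame):
    """Stand-in for a trained SVR model; a real system would map sEMG
    features to text. Here it returns a canned phrase."""
    return "what is the weather today"

def ask_chatbot(prompt):
    """Stand-in for a chatbot API call (e.g., an LLM endpoint)."""
    return f"You asked: '{prompt}'. Expect light rain this afternoon."

def speak(text):
    """Stand-in for text-to-speech output to a wireless earphone."""
    return f"[audio] {text}"

# Chain the three steps: silent speech in, spoken reply out.
query = recognize_subvocal(emg_frame=None)
reply = ask_chatbot(query)
print(speak(reply))
```

The point of the sketch is the shape of the loop, not the internals: each stage already exists as a separate commercial or research technology; the project is the integration.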
Wait a minute, hasn’t this been done before?
Yes, sort of. The technologies exist.
These devices detect nerve signals in the larynx and mandible and convert subvocalizations into speech-pattern output. NASA developed an early example, described in the January 1, 2007 publication Applications for Subvocal Speech, written by Charles Jorgensen.
Also, an MIT News story published on April 4, 2018, “Computer system transcribes words users ‘speak silently’,” covers AlterEgo, a device developed by the MIT Media Lab.
Next Steps?
Similar technologies allow connectivity with the Internet of Things (IoT). Getting a smartphone, wearable, or computer to speak back through a GPT chatbot is a non-trivial engineering problem.
Combining a subvocalization recognition device interface with augmented reality (AR) and a GPT chatbot can improve navigation and direction-giving experiences.
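One hedged sketch of the AR idea: fold visual context from the headset (the wearer's heading and a recognized landmark) into the chatbot prompt, so directions come back relative to what the user actually sees. The function and field names are illustrative, not drawn from any existing AR SDK.

```python
def navigation_prompt(query, heading_deg, landmark):
    """Build a chatbot prompt that includes AR context, so the reply
    can give directions relative to a visible landmark. All parameter
    names are hypothetical."""
    return (
        f"User asked: '{query}'. They are facing {heading_deg} degrees "
        f"and can see {landmark}. Give step-by-step walking directions "
        f"relative to that landmark."
    )

prompt = navigation_prompt("how do I get to the pharmacy", 90, "the fountain")
print(prompt)
```

The subvocalized query supplies the question silently, the AR layer supplies the context, and the chatbot's reply returns over the earphone as spoken directions.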
Rough Project Timeline
Estimated Cost Analysis
- Requirement Analysis: $8,000
- Feasibility Study: $4,000
- Prototype Development: $40,000
- Testing & Validation: $15,000
- Scalability Analysis: $6,000
- Deployment: $5,000
- Maintenance: $1,500
- Total: $79,500
Materials List
- AR Headsets (e.g., Oculus Quest 2, a cost-effective option)
- DIY Subvocalization Devices (Raspberry Pi, EMG sensors)
- IoT Microcontrollers for Testing (ESP8266 or ESP32)
- Cloud Servers for Backend (AWS Free Tier)
- Development Machines (existing infrastructure)
- CNC Fabrication Materials
References
Mueller, N., Trentzsch, V., Grassme, R., Guntinas-Lichius, O., Volk, G. F., & Anders, C. (2022). High-resolution surface electromyographic activities of facial muscles during mimic movements in healthy adults: A prospective observational study. Frontiers in Human Neuroscience, 16, 1029415. https://ncbi.nlm.nih.gov/pmc/articles/PMC9790991/
Jorgensen, C., & Betts, B. (2007, January). Technology Utilization and Surface Transportation (Report No. ARC-15519-1; Document ID 20090040757). NASA Tech Briefs. https://ntrs.nasa.gov/citations/20090040757
Hardesty, L. (2018, April 4). Computer system transcribes words users “speak silently”. MIT News. https://news.mit.edu/2018/computer-system-transcribes-words-users-speak-silently-0404
Helou, L. B., Welch, B., Wang, W., Rosen, C. A., & Verdolini Abbott, K. (2023). Intrinsic Laryngeal Muscle Activity During Subvocalization. Journal of Voice, 37(3), 426–432. https://pubmed.ncbi.nlm.nih.gov/33612369/
Leinenger, M. (2014). Phonological coding during reading. Psychological Bulletin, 140(6), 1534–1555. https://pubmed.ncbi.nlm.nih.gov/25150679/
Beck, J., & Konieczny, L. (2021). Rhythmic subvocalization: An eye-tracking study on silent poetry reading. Journal of Eye Movement Research, 13(3), 10.16910/jemr.13.3.5. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557949/