Publications

Vijay Rajanna; Murat Russel; Jeffrey Zhao; Tracy Hammond. PressTapFlick: Exploring a Gaze and Foot-based Multimodal Approach to Gaze Typing. International Journal of Human-Computer Studies. vol 161, 2022, doi: https://doi.org/10.1016/j.ijhcs.2022.102787.

Text entry is extremely difficult or sometimes impossible in the scenarios of situationally-induced or physical impairments and disabilities. As a remedy, many rely on gaze typing which commonly uses dwell time as the selection method. However, dwell-based gaze typing could be limited by usability issues, reduced typing speed, high error rate, steep learning curve, and visual fatigue with prolonged usage.

We present a dwell-free, multimodal approach to gaze typing where the gaze input is supplemented with a foot input modality. In this multi-modal setup, the user points her gaze at the desired character, and selects it with the foot input. We further investigated two approaches to foot-based selection, a foot gesture-based selection and a foot press-based selection, which are compared against the dwell-based selection.

We evaluated our system through three experiments involving 51 participants, where each experiment used one of the three target selection methods: dwell-based, foot gesture-based, and foot press-based selection. We found that foot-based selection at least matches, and likely improves on, gaze typing performance compared to dwell-based selection. Among the four foot gestures (toe tapping, heel tapping, right flick, and left flick) used in the study, toe tapping was the most preferred gesture for gaze typing. Furthermore, when using foot-based activation, users quickly develop a rhythm of focusing on a character with gaze and selecting it with the foot; this familiarity reduces errors significantly. Overall, based on both typing performance and qualitative feedback, the results suggest that gaze and foot-based typing is convenient, easy to learn, and addresses the usability issues associated with dwell-based typing. We believe our findings will encourage further research into leveraging a supplemental foot input in gaze typing and, more generally, assist in the development of rich foot-based interactions.

@article{RAJANNA2022102787,
title = {PressTapFlick: Exploring a gaze and foot-based multimodal approach to gaze typing},
journal = {International Journal of Human-Computer Studies},
volume = {161},
pages = {102787},
year = {2022},
issn = {1071-5819},
doi = {https://doi.org/10.1016/j.ijhcs.2022.102787},
url = {https://www.sciencedirect.com/science/article/pii/S1071581922000167},
author = {Vijay Rajanna and Murat Russel and Jeffrey Zhao and Tracy Hammond},
keywords = {Gaze typing, Multimodal interaction, Foot-based interaction, Virtual keyboard, Optikey, 3D printing, Microcontroller} }
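The selection rule described above is simple to state in code: gaze does the pointing and a foot event does the selecting, so no dwell timeout is involved. The following is a minimal illustrative sketch; the event names, the rectangle-based key layout, and the helper functions are assumptions for exposition, not the paper's implementation.

# Minimal sketch of dwell-free gaze-and-foot selection. The event names and the
# rectangle-based keyboard layout below are illustrative assumptions.

FOOT_SELECT_EVENTS = {"toe_tap", "heel_tap", "right_flick", "left_flick", "press"}

def key_at(gaze_x, gaze_y, layout):
    """layout: list of (x, y, w, h, char) rectangles for the on-screen keyboard."""
    for x, y, w, h, char in layout:
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return char
    return None

def select_character(gaze_x, gaze_y, foot_event, layout):
    """Commit the gazed-at key only when a foot event arrives (no dwell timeout)."""
    char = key_at(gaze_x, gaze_y, layout)
    if char is not None and foot_event in FOOT_SELECT_EVENTS:
        return char
    return None

# Example: gaze rests on 'a' and a toe tap arrives.
layout = [(0, 0, 50, 50, "a"), (50, 0, 50, 50, "b")]
assert select_character(20, 20, "toe_tap", layout) == "a"
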
Vijay Rajanna; Tracy Hammond. Can Gaze Beat Touch? A Fitts' Law Evaluation of Gaze, Touch, and Mouse Inputs. arXiv preprint, Human-Computer Interaction (cs.HC), 2022. doi: 10.48550/ARXIV.2208.01248.

Gaze input has been a promising substitute for mouse input for point and select interactions. Individuals with severe motor and speech disabilities primarily rely on gaze input for communication. Gaze input also serves as a hands-free input modality in the scenarios of situationally-induced impairments and disabilities (SIIDs). Hence, the performance of gaze input has often been compared to mouse input through standardized performance evaluation procedures like the Fitts' Law task. With the proliferation of touch-enabled devices such as smartphones, tablet PCs, or any computing device with a touch surface, it is also important to compare the performance of gaze input to touch input.

In this study, we conducted an ISO 9241-9 Fitts' Law evaluation to compare the performance of multimodal gaze and foot-based input to touch input in a standard desktop environment, while using mouse input as the baseline. From a study involving 12 participants, we found that the gaze input has the lowest throughput (2.55 bits/s) and the highest movement time (1.04 s) of the three inputs. In addition, though touch input involves the most physical movement, it achieved the highest throughput (6.67 bits/s), the least movement time (0.5 s), and was the most preferred input. While gaze and touch inputs are similar in how quickly the pointer can be moved from the source to the target location, target selection consumes the most time with gaze input. Hence, with a throughput over 160% higher than gaze, touch proves to be the superior input modality.

@ARTICLE{fitts_gaze_touch_mouse_rajanna22,
doi = {10.48550/ARXIV.2208.01248},
url = {https://arxiv.org/abs/2208.01248},
author = {Rajanna, Vijay and Hammond, Tracy},
keywords = {Human-Computer Interaction (cs.HC), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Can Gaze Beat Touch? A Fitts' Law Evaluation of Gaze, Touch, and Mouse Inputs},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} }
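The throughput figures reported in the two studies above follow the ISO 9241-9 effective-throughput formulation. The sketch below shows the standard computation; the sample numbers are made up for illustration and are not data from the paper.

# Sketch of ISO 9241-9 effective throughput: TP = IDe / MT, where
# IDe = log2(Ae / We + 1) and We = 4.133 * SD of the selection endpoints
# projected on the task axis. Illustrative only; not the authors' analysis script.

import math
import statistics

def effective_width(endpoint_deviations):
    """We = 4.133 * standard deviation of endpoint deviations along the task axis."""
    return 4.133 * statistics.stdev(endpoint_deviations)

def throughput(amplitude, endpoint_deviations, movement_times):
    """Effective index of difficulty divided by mean movement time (bits/s)."""
    we = effective_width(endpoint_deviations)
    ide = math.log2(amplitude / we + 1)
    return ide / statistics.mean(movement_times)

# Example with made-up numbers: 512-pixel movements, some endpoint scatter,
# and roughly half-second movement times.
print(throughput(512, [12, -8, 5, -15, 9, -3], [0.52, 0.49, 0.51]))
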
Jiayao Li; Samantha Ray; Vijay Rajanna; Tracy Hammond. Evaluating the Performance of Machine Learning Algorithms in Gaze Gesture Recognition Systems. IEEE Access, vol. 10, pp. 1020-1035, 2022, doi: 10.1109/ACCESS.2021.3136153.
Despite the utility of gaze gestures as an input method, there is a lack of guidelines available regarding how to design gaze gestures, what algorithms to use for gaze gesture recognition, and how these algorithms compare in terms of performance. To facilitate the development of applications that leverage gaze gestures, we have evaluated the performance of a combination of template-based and data-driven algorithms on two custom gesture sets that can map to user actions. Template-based algorithms had consistently high accuracies but the slowest runtimes, making them best for small gesture sets or accuracy-critical applications. Data-driven algorithms run much faster and scale better to larger gesture sets, but require more training data to achieve the accuracy of the template-based methods. The main takeaways for gesture set design are 1) gestures should have distinct forms even when performed imprecisely and 2) gestures should have clear key-points for the eyes to fixate onto.
@ARTICLE{9663039,
author={Li, Jiayao and Ray, Samantha and Rajanna, Vijay and Hammond, Tracy},
journal={IEEE Access},
title = {Evaluating the Performance of Machine Learning Algorithms in Gaze Gesture Recognition Systems},
year = {2022},
volume={10},
publisher = {IEEE},
pages={1020-1035},
doi={10.1109/ACCESS.2021.3136153} }
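As a concrete illustration of the template-based family evaluated above, the sketch below resamples a gaze path, normalizes it, and picks the template with the smallest average point-to-point distance ($1-recognizer style). It is an illustrative stand-in, not the algorithms or gesture sets from the paper.

# Illustrative $1-style template matcher for gaze gesture paths: resample to a fixed
# number of points, normalize for position and scale, then compare point-wise distance.

import math

def resample(points, n=64):
    """Resample a path to n points spaced evenly along its length."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    step, acc = total / (n - 1), 0.0
    pts, out = list(points), [points[0]]
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the start of the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate to the centroid and scale into a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    size = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((p[0] - cx) / size, (p[1] - cy) / size) for p in points]

def recognize(path, templates):
    """Return the template name whose normalized path is closest to the input path."""
    candidate = normalize(resample(path))
    def score(name):
        ref = normalize(resample(templates[name]))
        return sum(math.dist(a, b) for a, b in zip(candidate, ref)) / len(ref)
    return min(templates, key=score)

The data-driven alternatives evaluated in the paper would instead train a classifier over features of the (resampled) path, trading more training data for faster recognition on larger gesture sets.
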
Katsumi Minakata; John Paulin Hansen; I. Scott MacKenzie; Per Bækgaard; Vijay Rajanna. Pointing by Gaze, Head, and Foot in a Head-Mounted Display. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '19). ACM, New York, USA. June 25–28, 2019 | Denver, CO, USA.
This paper presents a Fitts' law experiment and a clinical case study performed with a head-mounted display (HMD). The experiment compared gaze, foot, and head pointing. With the equipment setup we used, gaze was slower than the other pointing methods, especially in the lower visual field. Throughputs for gaze and foot pointing were lower than mouse and head pointing and their effective target widths were also higher. A follow-up case study included seven participants with movement disorders. Only two of the participants were able to calibrate for gaze tracking but all seven could use head pointing, although with throughput less than one-third of the non-clinical participants.
@inproceedings{10.1145/3317956.3318150,
author = {Minakata, Katsumi and Hansen, John Paulin and MacKenzie, I. Scott and B\ae{}kgaard, Per and Rajanna, Vijay},
title = {Pointing by Gaze, Head, and Foot in a Head-Mounted Display},
year = {2019},
isbn = {9781450367097},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3317956.3318150},
doi = {10.1145/3317956.3318150},
booktitle = {Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications},
articleno = {69},
numpages = {9},
keywords = {hand controller, foot interaction, Fitts’ law, disability, virtual reality, head interaction, gaze interaction, ISO 9241-9, head-mounted displays, accessibility, dwell activation},
location = {Denver, Colorado},
series = {ETRA ’19} }
Nic Lupfer; Andruid Kerne; Rhema Linder; Hannah Fowler; Vijay Rajanna; Matthew Carrasco; Alyssa Valdez. Multiscale Design Curation: Supporting Computer Science Students' Iterative and Reflective Creative Processes. In Proceedings of the Conference on Creativity and Cognition (C&C '19). ACM, New York, USA. June 23–26, 2019 | San Diego, CA, USA.
We investigate new media to improve how teams of students create and organize artifacts as they perform design. Some design artifacts are readymade (e.g., prior work, reference images, code framework repositories), while others are self-made (e.g., storyboards, mock-ups, prototypes, and user study reports). We studied how computer science students use the medium of free-form web curation to collect, assemble, and report on their team-based design projects. From our mixed qualitative methods analysis, we found that the use of space and scale was central to their engagement in creative processes of communication and contextualization. Multiscale design curation involves collecting readymade and creating self-made design artifacts, and assembling them, as elements in a continuous space using levels of visual scale, for thinking, ideation, communication, exhibition (presentation), and archiving of the design process. Multiscale design curation instantiates a constructivist approach, elevating the role of design process representation. Student curations are open and unstructured, which helps avoid premature formalism and supports reflection in iterative design processes. Multiscale design curation takes advantage of human spatial cognition, through visual chunking, to support creative processes and collaborative articulation work in an integrated space.
@inproceedings{10.1145/3325480.3325483,
author = {Lupfer, Nic and Kerne, Andruid and Linder, Rhema and Fowler, Hannah and Rajanna, Vijay and Carrasco, Matthew and Valdez, Alyssa},
title = {Multiscale Design Curation: Supporting Computer Science Students’ Iterative and Reflective Creative Processes},
year = {2019},
isbn = {9781450359177},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3325480.3325483},
doi = {10.1145/3325480.3325483},
booktitle = {Proceedings of the 2019 Conference on Creativity and Cognition},
pages = {233–245},
numpages = {13},
keywords = {curation, multiscale, creativity, design curation, iterative design, zui, design, multiscale design curation, design education, little-c},
location = {San Diego, CA, USA},
series = {C&C ’19}
}
Vijay Rajanna. Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm. Doctoral Dissertation Research. Texas A&M University, College Station, Texas, USA. November 6th, 2018.

Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are assumed to be engaged in a task, and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a worker in a factory with greasy hands or wearing thick gloves, a person driving a car, and so on all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or not possible. Unfortunately, individuals with physical impairments and disabilities, by birth or due to an injury, are forced to deal with these limitations every single day. Generally, these individuals experience difficulty or are completely unable to perform basic operations on a computer. Therefore, to address situational and physical impairments and disabilities it is crucial to develop hands-free, accessible interactions.

In this research, we try to address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on the three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which the gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze and foot-based interaction framework to achieve accurate "point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures. The interaction methods and devices we have developed are a) evaluated using standard HCI procedures like Fitts' Law, text entry metrics, authentication accuracy, and video analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed by conducting user interviews.

From the evaluations, we found that our solutions achieve higher efficiency than the existing systems and also address the usability issues. To discuss each of these solutions: first, the gaze and foot-based system we developed supports point-and-click interactions while addressing the "Midas Touch" issue. The system performs at least as well (time and precision) as the mouse, while enabling hands-free interactions. We have also investigated the feasibility, advantages, and challenges of using gaze and foot-based point-and-click interactions on standard (up to 24") and large displays (up to 84") through Fitts' Law evaluations. Additionally, we have compared the performance of the gaze input to other standard inputs like the mouse and touch.

Second, to support text entry, we developed a gaze and foot-based dwell-free typing system and investigated foot-based activation methods like foot press and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods. Using our gaze typing system, users type up to 14.98 Words Per Minute (WPM) as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general.

Third, we addressed the lack of an accessible and shoulder-surfing resistant authentication method by developing a gaze gesture recognition framework, and presenting two authentication strategies that use gaze gestures. Our authentication methods use static and dynamic transitions of the objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single video iterative attacks, and has a lower success rate with dual video iterative attacks.

Lastly, we demonstrated how our gaze gesture recognition framework can be extended to allow users to design gaze gestures of their choice and associate them to appropriate commands like minimize, maximize, scroll, etc., on the computer. We presented a template matching algorithm which achieved an accuracy of 93%, and a geometric feature-based decision tree algorithm which achieved an accuracy of 90.2% in recognizing the gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.

Rajanna, Vijay Dandur (2018). Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/174462
Vijay Rajanna; John Paulin Hansen. Gaze Typing in Virtual Reality: Impact of Keyboard Design, Selection Method, and Motion. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '18). ACM, New York, USA. June 14–17, 2018 | Warsaw, Poland.
Gaze tracking in virtual reality (VR) allows for hands-free text entry, but gaze typing in VR has not yet been thoroughly explored. We investigate how the keyboard design, selection method, and motion in the field of view may impact typing performance and user experience. We present two studies of people (N=32) typing with gaze+dwell and gaze+click inputs in VR. In study 1, the typing keyboard was flat and within-view; in study 2, it was larger-than-view but curved. Both studies included stationary and dynamic motion conditions in the user's field of view. Our findings suggest that 1) gaze typing in VR is viable but constrained, 2) users perform best (10.15 WPM) when the entire keyboard is within-view, while the larger-than-view keyboard (9.15 WPM) induces physical strain due to increased head movements, 3) motion in the field of view impacts the user's performance: users perform better while stationary than when in motion, and 4) gaze+click is better than dwell-only interaction in VR.
@inproceedings{ETRA18:VRGazeTyping,
author = {Rajanna, Vijay and Hansen, John Paulin},
title = {Gaze Typing in Virtual Reality: Impact of Keyboard Design, Selection Method, and Motion},
booktitle = {Proceedings of the Tenth Biennial ACM Symposium on Eye Tracking Research and Applications},
series = {ETRA '18},
year = {2018},
isbn = {978-1-4503-5706-7/18/06},
location = {Warsaw, Poland},
doi = {10.1145/3204493.3204541},
acmid = {3204541},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {virtual reality, gaze typing, keyboard design, motion, VR sickness, multimodal input, dwell, mental and physical workload},
}
Vijay Rajanna; Tracy Hammond. A Fitts' Law Evaluation of Gaze Input on Large Displays Compared to Touch and Mouse Inputs. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA 2018 - COGAIN Symposium). ACM, New York, USA. June 14–17, 2018 | Warsaw, Poland.
Gaze-assisted interaction has commonly been used in a standard desktop setting. When interacting with large displays, and as new scenarios like situationally-induced impairments emerge, gaze-based multimodal input can be more convenient than other inputs. However, it is unknown how gaze-based multimodal input compares to touch and mouse inputs. We compared gaze+foot multimodal input to touch and mouse inputs on a large display in a Fitts' Law experiment that conforms to ISO 9241-9. From a study involving 23 participants, we found that the gaze input has the lowest throughput (2.33 bits/s) and the highest movement time (1.176 s) of the three inputs. In addition, though touch input involves the most physical movement, it achieved the highest throughput (5.49 bits/s), the least movement time (0.623 s), and was the most preferred input.
@inproceedings{COGAIN18:FittsLargeDisplay,
author = {Rajanna, Vijay and Hammond, Tracy},
title = {A Fitts' Law Evaluation of Gaze Input on Large Displays Compared to Touch and Mouse Inputs},
booktitle = {COGAIN '18: Workshop on Communication by Gaze Interaction, June 14--17, 2018, Warsaw, Poland},
series = {COGAIN '18},
year = {2018},
isbn = {978-1-4503-5790-6/18/06},
location = {Warsaw, Poland},
doi = {10.1145/3206343.3206348},
acmid = {3206348},
publisher = {ACM},
address = {New York, NY, USA}
}
John Paulin Hansen; Vijay Rajanna; I. Scott MacKenzie; Per Bækgaard. A Fitts' Law Study of Click and Dwell Interaction by Gaze, Head and Mouse with a Head-Mounted Display. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA 2018 - COGAIN Symposium). ACM, New York, USA. June 14–17, 2018 | Warsaw, Poland.
Gaze and head tracking, or pointing, in head-mounted displays enables new input modalities for point-select tasks. We conducted a Fitts' law experiment with 41 subjects comparing head pointing and gaze pointing using a 300 ms dwell (n = 22) or click (n = 19) activation, with mouse input providing a baseline for both conditions. Gaze and head pointing were equally fast but slower than the mouse; dwell activation was faster than click activation. Throughput was highest for the mouse (2.75 bits/s), followed by head pointing (2.04 bits/s) and gaze pointing (1.85 bits/s). With dwell activation, however, throughput for gaze and head pointing were almost identical, as was the effective target width (approximately 55 pixels; about 2 degrees) for all three input methods. Subjective feedback rated the physical workload less for gaze pointing than head pointing.
@inproceedings{COGAIN18:FittsVR,
author = {Hansen, John Paulin and Rajanna, Vijay and MacKenzie, I. Scott and B\ae kgaard, Per},
title = {A Fitts' Law Study of Click and Dwell Interaction by Gaze, Head and Mouse with a Head-Mounted Display},
booktitle = {COGAIN '18: Workshop on Communication by Gaze Interaction, June 14--17, 2018, Warsaw, Poland},
series = {COGAIN '18},
year = {2018},
isbn = {978-1-4503-5790-6/18/06},
location = {Warsaw, Poland},
doi = {10.1145/3206343.3206344},
acmid = {3206344},
publisher = {ACM},
address = {New York, NY, USA}
}
Vijay Rajanna; Tracy Hammond. A Gaze Gesture-Based Paradigm for Situational Impairments, Accessibility, and Rich Interactions. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '18). ACM, New York, USA. June 14–17, 2018 | Warsaw, Poland.
Gaze gesture-based interactions on a computer are promising, but the existing systems are limited by the number of supported gestures, recognition accuracy, the need to remember the stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework where a user can design gestures and associate them with appropriate commands like minimize, maximize, scroll, and so on. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of the screen size and resolution, and the user can draw the gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged in developing solutions for situational impairments and accessibility, and also for implementing a rich interaction paradigm.
@inproceedings{ETRA2018:Gazegestures,
author = {Rajanna, Vijay and Hammond, Tracy},
title = {A Gaze Gesture-based Paradigm for Situational Impairments, Accessibility, and Rich Interactions},
booktitle = {Proceedings of the Tenth Biennial ACM Symposium on Eye Tracking Research and Applications},
series = {ETRA '18},
year = {2018},
isbn = {978-1-4503-5706-7/18/06},
location = {Warsaw, Poland},
doi = {10.1145/3204493.3208344},
acmid = {3208344},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Gaze gestures; accessibility; situational impairment; eye tracking},
}
Vijay Rajanna; Adil Hamid Malla; Rahul Ashok Bhagat; Tracy Hammond. DyGazePass: A Gaze Gesture-Based Dynamic Authentication System to Counter Shoulder Surfing and Video Analysis Attacks. IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2018). January 10–12, 2018 | Singapore.
Shoulder surfing enables an attacker to gain the authentication details of a victim through observation and is becoming a threat to visual privacy. We present DyGazePass: Dynamic Gaze Passwords, an authentication strategy that uses dynamic gaze gestures. We also present two authentication interfaces, a dynamic and a static-dynamic interface, that leverage this strategy to counter shoulder surfing attacks. The core idea is that a user authenticates by following uniquely colored circles that move along random paths on the screen. Through multiple evaluations, we discuss how the authentication accuracy varies with respect to the transition speed of the circles and the number of moving and static circles. Furthermore, we evaluate the resiliency of our authentication method against video analysis attacks by comparing it to a gaze- and PIN-based authentication system. Overall, we found that the static-dynamic interface with a transition speed of two seconds was the most effective authentication method, with an accuracy of 97.5%.
@INPROCEEDINGS{8311458,
author={V. Rajanna and A. H. Malla and R. A. Bhagat and T. Hammond},
booktitle={2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA)},
title={DyGazePass: A gaze gesture-based dynamic authentication system to counter shoulder surfing and video analysis attacks},
year={2018},
pages={1-8},
keywords={Animation;Authentication;Color;Image color analysis;Password;Pins;Visualization},
doi={10.1109/ISBA.2018.8311458},
month={Jan}}
Bailey Bauman; Regan Gunhouse; Antonia Jones; Willer Da Silva; Shaeeta Sharar; Vijay Rajanna; Josh Cherian; Jung In Koh; Tracy Hammond. VisualEYEze: A Web-based Solution for Receiving Feedback on Artwork Through Eye Tracking. ACM IUI 2018 Workshop on Web Intelligence and Interaction (WII 2018). March 07–11, 2018 | Tokyo, Japan.
Artists value the ability to determine which parts of their composition are most appreciated by viewers. This information normally comes straight from viewers in the form of oral and written feedback; however, due to the lack of participation on the viewers' part, and because much of our visual understanding of artwork can be subconscious and difficult to express verbally, the value of this feedback is limited. Eye tracking technology has been used before to analyze artwork; however, most of this work has been performed in a controlled lab setting, and as such this technology remains largely inaccessible to individual artists who may seek feedback. To address this issue, we developed a web-based system where artists can upload their artwork to be viewed by viewers on their own computers while a web camera tracks their eye movements. The artist receives feedback in the form of visualized eye tracking data that depicts which areas of the image were looked at the most by viewers. We evaluated our system by having 5 artists upload a total of 17 images, which were subsequently viewed by 20 users. The artists expressed that seeing eye tracking data visualized on their artwork, indicating the areas of interest, is a unique way of receiving feedback and is highly useful. Also, they felt that the platform makes artists more aware of their compositions, something that can especially help inexperienced artists. Furthermore, 90% of the viewers expressed that they were comfortable providing eye movement data as a form of feedback to the artists.
TBA
Josh Cherian; Vijay Rajanna; Daniel Goldberg; Tracy Hammond. Did you Remember To Brush?: A Noninvasive Wearable Approach to Recognizing Brushing Teeth for Elderly Care. 11th EAI International Conference on Pervasive Computing Technologies for Healthcare. ACM, New York, USA. May 23–26, 2017 | Barcelona, Spain.
Failing to brush one's teeth regularly can have surprisingly serious health consequences, from periodontal disease to coronary heart disease to pancreatic cancer. This problem is especially worrying when caring for the elderly and/or individuals with dementia, as they often forget or are unable to perform standard health activities such as brushing their teeth, washing their hands, and taking medication. To ensure that such individuals are correctly looked after, they are placed under the supervision of caretakers or family members, simultaneously limiting their independence and placing an immense burden on their family members and caretakers. To address this problem, we developed a non-invasive wearable system based on a wrist-mounted accelerometer to accurately identify when a person brushed their teeth. We tested the efficacy of our system with a month-long in-the-wild study and achieved an accuracy of 94% and an F-measure of 0.82.
@inproceedings{Cherian2017pervasive,
author = {Cherian, Josh and Rajanna, Vijay and Goldberg, Daniel and Hammond, Tracy},
title = {Did you Remember To Brush? : A Noninvasive Wearable Approach to Recognizing Brushing Teeth for Elderly Care},
booktitle = {11th EAI International Conference on Pervasive Computing Technologies for Healthcare},
series = {PervasiveHealth '17},
year = {2017},
isbn = {},
location = {Barcelona, Spain},
pages = {}
}
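A typical pipeline for this kind of wrist-worn recognition is to segment the accelerometer stream into windows, extract simple statistical features, and train a classifier. The sketch below illustrates that general approach; the window length, features, and classifier are assumptions and not the paper's exact configuration.

# Illustrative windowed-feature pipeline for wrist accelerometer activity recognition.
# Window size, features, and classifier choice are assumptions, not the paper's setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, window=250, step=125):
    """acc: (N, 3) array of x/y/z samples; returns one feature vector per window."""
    feats = []
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),        # per-axis mean and deviation
            [mag.mean(), mag.std(), mag.max()],   # magnitude statistics
        ]))
    return np.array(feats)

# With labeled windows (1 = brushing, 0 = other activity):
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# predictions = clf.predict(window_features(new_recording))
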
Vijay Rajanna; Paul Taele; Seth Polsley; Tracy Hammond. A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA. May 06-11, 2017 | Denver, Colorado, USA.
Shoulder-surfing is the act of spying on an authorized user of a computer system with the malicious intent of gaining unauthorized access. Current solutions to address shoulder-surfing, such as graphical passwords, gaze input, tactile interfaces, and so on, are limited by low accuracy, lack of precise gaze input, and susceptibility to video analysis attacks. We present an intelligent gaze gesture-based system that authenticates users from their unique gaze patterns onto moving geometric shapes. The system authenticates the user by comparing their scan-path with each shape's path and recognizing the closest path. In a study with 15 users, authentication accuracy was found to be 99% with true calibration and 96% with disturbed calibration. Also, compared to a gaze- and PIN-based authentication system, our system is 40% less susceptible to video analysis attacks, and such attacks are nearly nine times more time-consuming to carry out.
@inproceedings{Rajanna:2017:GGU:3027063.3053070,
author = {Rajanna, Vijay and Polsley, Seth and Taele, Paul and Hammond, Tracy},
title = {A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks},
booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '17},
year = {2017},
isbn = {978-1-4503-4656-6},
location = {Denver, Colorado, USA},
pages = {1978--1986},
numpages = {9},
url = {http://doi.acm.org/10.1145/3027063.3053070},
doi = {10.1145/3027063.3053070},
acmid = {3053070},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {gaze authentication, gaze gestures, pattern matching}, }
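The "closest path" comparison described above can be illustrated with a simple distance computation between the gaze scan-path and each shape's trajectory. The sketch below is a simplification for exposition; the alignment and distance metric here are assumptions, not the paper's exact method.

# Illustrative "closest trajectory" check: pick the moving shape whose path lies
# nearest to the recorded gaze scan-path. Alignment and metric are assumptions.

import math

def mean_path_distance(gaze, shape_path):
    """Mean point-wise distance between time-aligned gaze and shape samples."""
    n = min(len(gaze), len(shape_path))
    return sum(math.dist(gaze[i], shape_path[i]) for i in range(n)) / n

def followed_shape(gaze, shape_paths):
    """shape_paths: {shape_name: [(x, y), ...]} sampled at the gaze tracker's rate."""
    return min(shape_paths, key=lambda name: mean_path_distance(gaze, shape_paths[name]))
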
Vijay Rajanna; Tracy Hammond. A Gaze-Assisted Multimodal Approach to Rich and Accessible Human-Computer Interaction. ACM Richard Tapia Celebration of Diversity in Computing (TAPIA '17). ACM, New York, USA. September 20-23, 2017 | Atlanta, Georgia, USA.
Recent advancements in eye tracking technology are driving the adoption of gaze-assisted interaction as a rich and accessible human-computer interaction paradigm. Gaze-assisted interaction serves as a contextual, non-invasive, and explicit control method for users without disabilities; for users with motor or speech impairments, text entry by gaze serves as the primary means of communication. Despite significant advantages, gaze-assisted interaction is still not widely accepted because of its inherent limitations: 1) Midas touch, 2) low accuracy for mouse-like interactions, 3) need for repeated calibration, 4) visual fatigue with prolonged usage, 5) lower gaze typing speed, and so on. This dissertation research proposes a gaze-assisted, multimodal interaction paradigm, and related frameworks and their applications, that effectively enable gaze-assisted interactions while addressing many of the current limitations. In this regard, we present four systems that leverage gaze-assisted interaction: 1) a gaze- and foot-operated system for precise point-and-click interactions, 2) a dwell-free, foot-operated gaze typing system, 3) a gaze gesture-based authentication system, and 4) a gaze gesture-based interaction toolkit. In addition, we also present the goals to be achieved, the technical approach, and the overall contributions of this dissertation research.
@misc{https://doi.org/10.48550/arxiv.1803.04713,
doi = {10.48550/ARXIV.1803.04713},
url = {https://arxiv.org/abs/1803.04713},
author = {Rajanna, Vijay and Hammond, Tracy},
keywords = {Human-Computer Interaction (cs.HC), FOS: Computer and information sciences, FOS: Computer and information sciences, H.5.2; K.6.5; I.3.7},
title = {A Gaze-Assisted Multimodal Approach to Rich and Accessible Human-Computer Interaction},
publisher = {arXiv},
year = {2018},
copyright = {arXiv.org perpetual, non-exclusive license} }
Received "3rd Place" in student research competition.
Vijay Rajanna; Tracy Hammond. Gaze Typing Through Foot-Operated Wearable Device. The 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '16). ACM, New York, USA. October 24–26, 2016 | Reno, Nevada, USA.
Gaze Typing, a gaze-assisted text entry method, allows individuals with motor (arm, spine) impairments to enter text on a computer using a virtual keyboard and their gaze. Though gaze typing is widely accepted, this method is limited by its lower typing speed, higher error rate, and the resulting visual fatigue, since dwell-based key selection is used. In this research, we present a gaze-assisted, wearable-supplemented, foot interaction framework for dwell-free gaze typing. The framework consists of a custom-built virtual keyboard, an eye tracker, and a wearable device attached to the user's foot. To enter a character, the user looks at the character and selects it by pressing the pressure pad, attached to the wearable device, with the foot. Results from a preliminary user study involving two participants with motor impairments show that the participants achieved a mean gaze typing speed of 6.23 Words Per Minute (WPM). In addition, the mean value of Key Strokes Per Character (KPSC) was 1.07 (ideal 1.0), and the mean value of Rate of Backspace Activation (RBA) was 0.07 (ideal 0.0). Furthermore, we present our findings from multiple usability studies and design iterations, through which we created appropriate affordances and experience design of our gaze typing system.
@inproceedings{Rajanna:2016:GTT:2982142.2982145,
author = {Rajanna, Vijay},
title = {Gaze Typing Through Foot-Operated Wearable Device},
booktitle = {Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility},
series = {ASSETS '16},
year = {2016},
isbn = {978-1-4503-4124-0},
location = {Reno, Nevada, USA},
pages = {345--346},
numpages = {2},
url = {http://doi.acm.org/10.1145/2982142.2982145},
doi = {10.1145/2982142.2982145},
acmid = {2982145},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {foot-operated devices, gaze typing, wearable devices},
}
Received "1st Place" in graduate poster competition.
Vijay Rajanna; Tracy Hammond. Gaze-Assisted User Authentication to Counter Shoulder-Surfing Attacks. ACM Richard Tapia Celebration of Diversity in Computing (TAPIA '16). ACM, New York, USA. September 14–17, 2016 | Austin, Texas, USA.
A highly secure, foolproof user authentication method is still a primary focus of research in the field of user privacy and security. Shoulder-surfing is the act of spying on an authorized user while they log into a system, driven by the malicious intent of gaining unauthorized access. We present a gaze-assisted user authentication system as a potential solution to counter shoulder-surfing attacks. The system comprises an eye tracker and an authentication interface with 12 pre-defined shapes (e.g., triangle, circle, etc.) that move on the screen. A user chooses a set of three shapes as a password. To authenticate, the user follows the paths of the three shapes as they move, one on each frame, over three consecutive frames. The system uses a template matching algorithm to compare the scan-path of the user's gaze with the path traversed by the shape. A system evaluation involving seven users showed that the template matching algorithm achieves an accuracy of 95%. Our study also shows that gaze-driven authentication is a foolproof system against shoulder-surfing attacks; the unique pattern of eye movements for each individual makes the system hard to break into.
Purnendu Kaul; Vijay Rajanna; Tracy Hammond. Exploring Users' Perceived Activities in a Sketch-based Intelligent Tutoring System Through Eye Movement Data. ACM Symposium on Applied Perception (SAP '16). ACM, New York, USA. July 22–23, 2016 | Anaheim, California, USA.
Intelligent tutoring systems (ITS) empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students' progress. Despite these advantages, existing ITS do not automatically assess how students engage in problem solving, how they perceive various activities, or how much time they spend on each discrete activity leading to the solution. In this research, we present an eye tracking framework that, based on eye movement data, can assess students' perceived activities and overall engagement in a sketch-based intelligent tutoring system, Mechanix. Through an evaluation involving 21 participants, we present the key eye movement features and demonstrate the potential of leveraging eye movement data to recognize students' perceived activities (reading, gazing at an image, and problem solving) with an accuracy of 97.12%.
@inproceedings{Kaul:2016:EUP:2931002.2948727,
author = {Kaul, Purnendu and Rajanna, Vijay and Hammond, Tracy},
title = {Exploring Users' Perceived Activities in a Sketch-based Intelligent Tutoring System Through Eye Movement Data},
booktitle = {Proceedings of the ACM Symposium on Applied Perception},
series = {SAP '16},
year = {2016},
isbn = {978-1-4503-4383-1},
location = {Anaheim, California},
pages = {134--134},
numpages = {1},
url = {http://doi.acm.org/10.1145/2931002.2948727},
doi = {10.1145/2931002.2948727},
acmid = {2948727},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {eye tracking, intelligent tutoring systems, perception},
}
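Fixation-derived measures are the usual starting point for the kind of eye movement features mentioned above. The sketch below shows a common dispersion-based fixation detector (I-DT); the thresholds are illustrative assumptions, and this is not the feature set used in the paper.

# Illustrative dispersion-threshold (I-DT) fixation detection over raw gaze samples.
# The dispersion and duration thresholds below are assumptions for illustration.

def _dispersion(window):
    xs, ys = [p[0] for p in window], [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, dispersion_px=35, min_samples=6):
    """samples: [(x, y), ...]; returns (start_index, end_index) pairs for fixations."""
    fixations, i = [], 0
    while i <= len(samples) - min_samples:
        j = i + min_samples
        if _dispersion(samples[i:j]) <= dispersion_px:
            # Grow the window while the points stay tightly clustered.
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= dispersion_px:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Fixation counts, durations, and their distribution over screen regions (text, image,
# answer area) can then serve as features for classifying perceived activities.
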

Seth Polsley; Vijay Rajanna; Larry Powell; Kodi Tapie; Tracy Hammond. CANE: A Wearable Computer-Assisted Navigation Engine for the Visually Impaired. A joint workshop on Smart Connected and Wearable Things (SCWT'2016), held at the 21st International Conference on Intelligent User Interfaces (IUI '16). ACM, New York, USA. March 7–10, 2016 | Sonoma, California, USA.
Navigating unfamiliar environments can be difficult for the visually impaired, so many assistive technologies have been developed to augment these users' spatial awareness. Existing technologies are limited in their adoption for various reasons, such as size, cost, and reduction of situational awareness. In this paper, we present CANE: "Computer Assisted Navigation Engine," a low-cost, wearable, haptic-assisted navigation system for the visually impaired. CANE is a "smart belt," providing feedback through vibration units lining the inside of the belt so that it does not interfere with the user's other senses. CANE was evaluated by both visually impaired users and sighted users who simulated visual impairment using blindfolds, and the feedback shows that it improved their spatial awareness, allowing the users to successfully navigate the course without any additional aids. CANE as a comprehensive navigation assistant has high potential for wide adoption because it is inexpensive, reliable, convenient, and compact.
@inproceedings{polsley2016cane,
title={CANE: A Wearable Computer-Assisted Navigation Engine for the Visually Impaired},
author={Polsley, Seth and Rajanna, Vijay and Powell, Larry and Tapie, Kodi and Hammond, Tracy},
booktitle={Workshop on Smart Connected and Wearable Things 2016},
pages={13}
}
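One plausible way a belt like CANE maps sensed obstacles to vibration feedback is by sector (which motor buzzes) and distance (how strongly). The sketch below is an illustrative guess at such a scheme; the motor count, sector layout, and range cutoff are assumptions, not CANE's actual parameters.

# Illustrative obstacle-to-haptics mapping for a vibration belt. The motor count,
# sector layout, and range cutoff are assumptions, not CANE's actual parameters.

def motor_for_bearing(bearing_deg, num_motors=8):
    """Map an obstacle bearing (0 = straight ahead, clockwise degrees) to a motor index."""
    sector = 360.0 / num_motors
    return int(((bearing_deg % 360) + sector / 2) // sector) % num_motors

def intensity_for_distance(distance_m, max_range_m=3.0):
    """Closer obstacles vibrate harder; beyond the range cutoff, no feedback (0.0-1.0)."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

# Example: an obstacle slightly to the right at 1 m drives motor 1 at ~0.67 intensity.
print(motor_for_bearing(40), intensity_for_distance(1.0))
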
Vijay Rajanna; Tracy Hammond. GAWSCHI: Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '16). ACM, New York, USA. March 14–17, 2016 | Charleston, South Carolina, USA.
Recent developments in eye tracking technology are paving the way for gaze-driven interaction as the primary interaction modality. Despite successful efforts, existing solutions to the "Midas Touch" problem have two inherent issues, 1) lower accuracy and 2) visual fatigue, that are yet to be addressed. In this work we present GAWSCHI: a Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction framework that enables accurate and quick gaze-driven interactions, while being completely immersive and hands-free. GAWSCHI uses an eye tracker and a wearable device (quasi-mouse) that is operated with the user's foot, specifically the big toe. The system was evaluated with a comparative user study involving 30 participants, with each participant performing eleven predefined interaction tasks (on MS Windows 10) using both mouse and gaze-driven interactions. We found that gaze-driven interaction using GAWSCHI is as good (time and precision) as mouse-based interaction as long as the dimensions of the interface element are above a threshold (0.60" x 0.51"). In addition, an analysis of the NASA Task Load Index post-study survey showed that the participants experienced low mental, physical, and temporal demand and also achieved high performance. We foresee GAWSCHI as the primary interaction modality for the physically challenged and an enriched interaction modality for able-bodied users.
@inproceedings{Rajanna:2016:GGW:2857491.2857499,
author = {Rajanna, Vijay and Hammond, Tracy},
title = {GAWSCHI: Gaze-augmented, Wearable-supplemented Computer-human Interaction},
booktitle = {Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research and Applications},
series = {ETRA '16},
year = {2016},
isbn = {978-1-4503-4125-7},
location = {Charleston, South Carolina},
pages = {233--236},
numpages = {4},
url = {http://doi.acm.org/10.1145/2857491.2857499},
doi = {10.1145/2857491.2857499},
acmid = {2857499},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {eye tracking, foot-operated device, gaze interaction, midas touch, multi-modal interaction, quasi-mouse, wearable devices},
}
Vijay Rajanna; Tracy Hammond. Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI '16). ACM, New York, USA. March 7–10, 2016 | Sonoma, California, USA.
Transforming gaze input into a rich and assistive interaction modality is one of the primary interests in eye tracking research. Gaze input in conjunction with traditional solutions to the "Midas Touch" problem, dwell time or a blink, is not mature enough to be widely adopted. In this regard, we present our preliminary work, a framework that achieves precise "point and click" interactions in a desktop environment by combining the gaze and foot interaction modalities. The framework comprises an eye tracker and a wearable, foot-operated quasi-mouse. The system evaluation shows that our gaze and foot interaction framework performs as well as a mouse (time and precision) in the majority of tasks. Furthermore, this dissertation work focuses on the goal of realizing gaze-assisted interaction as a primary interaction modality that can substitute conventional mouse- and keyboard-based interaction methods. In addition, we consider some of the challenges that need to be addressed and present possible solutions toward achieving our goal.
@inproceedings{Rajanna:2016:GFI:2876456.2876462,
author = {Rajanna, Vijay Dandur},
title = {Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality},
booktitle = {Companion Publication of the 21st International Conference on Intelligent User Interfaces},
series = {IUI '16 Companion},
year = {2016},
isbn = {978-1-4503-4140-0},
location = {Sonoma, California, USA},
pages = {126--129},
numpages = {4},
url = {http://doi.acm.org/10.1145/2876456.2876462},
doi = {10.1145/2876456.2876462},
acmid = {2876462},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {authentication, eye tracking, foot input, gaze and foot interaction, tabletop interaction},
}
Vijay Rajanna; Patrick Vo; Jerry Barth; Matthew Mjelde; Trevor Grey; Tracy Hammond. KinoHaptics: An Automated, Wearable, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care. Journal of Medical Systems 40, no. 3 (2016): 1-12.
Problem Statement: A carefully planned, structured, and supervised physiotherapy program, following a surgery, is crucial for successful recovery from physical injuries. Nearly 50% of surgeries fail due to unsupervised and erroneous physiotherapy. The demand for a physiotherapist over an extended period is expensive to afford, and sometimes inaccessible. With the advancements in wearable sensors and motion tracking, researchers have tried to build affordable, automated, physio-therapeutic systems that direct a physiotherapy session by providing audio-visual feedback on the patient's performance. There are many aspects of an automated physiotherapy program yet to be addressed by the existing systems: the wide variety of patients' physiological conditions to be diagnosed, the demographics of the patients (blind, deaf, etc.), and persuading them to adopt the system for an extended period for self-care. Objectives and Solution: In our research, we have tried to address these aspects by building a health behavior change support system called KinoHaptics for post-surgery rehabilitation. KinoHaptics is an automated, wearable, haptic-assisted, physio-therapeutic system that can be used by a wide variety of demographics and for various physiological conditions of the patients. The system provides rich and accurate vibro-haptic feedback that can be felt by any user, irrespective of physiological limitations. KinoHaptics is built to ensure that no injuries are induced during the rehabilitation period. The persuasive nature of the system allows for personal goal-setting, progress tracking, and, most importantly, life-style compatibility. Evaluation and Results: The system was evaluated under laboratory conditions, involving 14 users. Results show that KinoHaptics is highly convenient to use, and the vibro-haptic feedback is intuitive, accurate, and prevents accidental injuries. Also, results show that KinoHaptics is persuasive in nature, as it supports behavior change and habit building. Conclusion: The successful acceptance of KinoHaptics, an automated, wearable, haptic-assisted, physio-therapeutic system, proves the need for and future scope of automated physio-therapeutic systems for self-care and behavior change. It also proves that such systems incorporating vibro-haptic feedback encourage strong adherence to the physiotherapy program and can have a profound impact on the physiotherapy experience, resulting in a higher acceptance rate.
@Article{Rajanna2015, author="Rajanna, Vijay and Vo, Patrick and Barth, Jerry and Mjelde, Matthew and Grey, Trevor and Oduola, Cassandra and Hammond, Tracy",
title="KinoHaptics: An Automated, Wearable, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care",
journal="Journal of Medical Systems",
year="2015",
volume="40",
number="3",
pages="60",
issn="1573-689X",
doi="10.1007/s10916-015-0391-3",
url="http://dx.doi.org/10.1007/s10916-015-0391-3"
}
Received the "Best Student Paper" award
Vijay Rajanna; Folami Alamudun; Daniel Goldberg; Tracy Hammond. Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions. MobiHealth 2015 - 5th EAI International Conference on Wireless Mobile Communication and Healthcare - "Transforming Healthcare Through Innovations in Mobile and Wireless Technologies". October 14–16, 2015 | London, Great Britain.
Advances in ubiquitous computing technology improve workplace productivity and reduce physical exertion, but ultimately result in a sedentary work style. Sedentary behavior is associated with an increased risk of stress, obesity, and other health complications. Let Me Relax is a fully automated sedentary-state recognition framework using a smartwatch and smartphone, which encourages mental wellness through interventions in the form of simple relaxation techniques. The system was evaluated through a comparative user study of 22 participants split into a test and a control group. An analysis of NASA Task Load Index pre- and post-study surveys revealed that test subjects who followed the relaxation methods showed a trend of both increased activity and reduced mental stress. Reduced mental stress was found even in those test subjects who had increased inactivity. These results suggest that repeated interventions, driven by an intelligent activity recognition system, are an effective strategy for promoting healthy habits, which reduce stress, anxiety, and other health risks associated with sedentary workplaces.
@inproceedings{Rajanna:2015:LMR:2897442.2897461,
author = {Rajanna, Vijay and Alamudun, Folami and Goldberg, Daniel and Hammond, Tracy},
title = {Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions},
booktitle = {Proceedings of the 5th EAI International Conference on Wireless Mobile Communication and Healthcare},
series = {MOBIHEALTH'15},
year = {2015},
isbn = {978-1-63190-088-4},
location = {London, Great Britain},
pages = {28--33},
numpages = {6},
url = {http://dx.doi.org/10.4108/eai.14-10-2015.2261900},
doi = {10.4108/eai.14-10-2015.2261900},
acmid = {2897461},
publisher = {ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering)},
address = {ICST, Brussels, Belgium, Belgium},
keywords = {anxiety, cognitive reappraisal, intervention techniques, mental wellness, personal health assistant, relaxation, sedentary state recognition, stress, ubiquitous computing},
}
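The intervention logic such a system automates can be illustrated with a simple rule: if activity stays below a threshold for a sustained window, trigger a relaxation prompt. The thresholds and the step-count signal in the sketch below are assumptions for illustration, not the paper's recognition model.

# Illustrative sedentary-state rule: sustained low activity triggers an intervention.
# Window length and activity threshold are assumptions.

def is_sedentary(step_counts_per_min, window_min=45, max_steps_per_min=5):
    """True if every minute in the trailing window stayed under the activity threshold."""
    if len(step_counts_per_min) < window_min:
        return False
    recent = step_counts_per_min[-window_min:]
    return all(steps <= max_steps_per_min for steps in recent)

def maybe_intervene(step_counts_per_min, notify):
    if is_sedentary(step_counts_per_min):
        notify("Time for a short breathing exercise or a quick stretch.")

# Example: 60 nearly idle minutes triggers the reminder.
maybe_intervene([0] * 60, print)
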
Vijay Rajanna; Raniero Lara-Garduno; Dev Jyoti Behera; Karthic Madanagopal; Daniel Goldberg; Tracy Hammond. Step Up Life: A Context Aware Health Assistant. Proceedings of the Third ACM SIGSPATIAL International Workshop on the Use of GIS in Public Health (HealthGIS '14). ACM, New York, USA. November 4–7, 2014 | Dallas, Texas, USA.
A recent trend in popular health news is reporting the dangers of prolonged inactivity in one's daily routine. The claims are wide in variety and aggressive in nature, linking a sedentary lifestyle with obesity and shortened lifespans. Rather than forcing an individual to perform a physical exercise for a predefined interval of time, we propose the design, implementation, and evaluation of a context-aware health assistant system (called Step Up Life) that encourages a user to adopt a healthy lifestyle by performing simple, contextually suitable physical exercises. Step Up Life is a smartphone application that provides physical activity reminders to the user while respecting the user's practical constraints, by exploiting context information like the user's location, personal preferences, calendar events, time of the day, and the weather. A fully functional implementation of Step Up Life is evaluated through user studies.
@inproceedings{Rajanna:2014:SUL:2676629.2676636,
author = {Rajanna, Vijay and Lara-Garduno, Raniero and Behera, Dev Jyoti and Madanagopal, Karthic and Goldberg, Daniel and Hammond, Tracy},
title = {Step Up Life: A Context Aware Health Assistant},
booktitle = {Proceedings of the Third ACM SIGSPATIAL International Workshop on the Use of GIS in Public Health},
series = {HealthGIS '14},
year = {2014},
isbn = {978-1-4503-3136-4},
location = {Dallas, Texas},
pages = {21--30},
numpages = {10},
url = {http://doi.acm.org/10.1145/2676629.2676636},
doi = {10.1145/2676629.2676636},
acmid = {2676636},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {context aware systems, environmental monitoring, geographic information systems, healthgis, individual health, personal health assistant, public health, sensors},
}
Vijay Rajanna. Framework for Accelerometer Based Gesture Recognition and Seamless Integration with Desktop Applications. International Journal of Scientific and Research Publications 3.1 (2013).
The accelerometer is one of the prominent sensors commonly embedded in new-age handheld devices. An accelerometer measures acceleration forces along three orthogonal axes: X, Y, and Z. The raw acceleration values produced as the hosting device moves in 3D space can be used to interact with and control a wide range of applications running on the device, and can also be integrated with desktop applications to enable intuitive ways of interaction. The goal of this project is to build a generic and economical gesture recognition framework based on the accelerometer sensor, and to enable seamless integration with desktop applications by providing natural ways of interacting with them based on the gesture information obtained from the accelerometer embedded in a smartphone held in the user's hand. This framework provides an alternative to conventional interface devices like the mouse, keyboard, and joystick. With the gesture recognition framework integrated with desktop applications, a user can remotely play games, create drawings, and control key- and mouse-event-based applications. Since this is a generic framework, it can be integrated with any existing desktop application, irrespective of whether the application exposes APIs and whether it is a legacy or newly programmed application. A communication protocol is required to transfer accelerometer data from the handheld device to the desktop computer; this can be achieved through either Wi-Fi or Bluetooth. The project achieves data transmission between the handheld device and the desktop computer over Bluetooth. Once the accelerometer data is received at the desktop computer, the raw data is filtered and processed into appropriate gesture information through multiple algorithms. The key event publisher takes the processed gestures as input, converts them into appropriate events, and publishes them to the target applications to be controlled. This framework makes interaction with desktop applications natural and intuitive, and it enables game and application developers to build creative, highly engaging games and applications.
@article{rajanna2013framework,
title={Framework for accelerometer based gesture recognition and seamless integration with desktop applications},
author={Rajanna, Vijay D},
journal={International Journal of Scientific and Research Publications},
volume={3},
number={1},
pages={1--5},
year={2013}
}
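The pipeline described above (smooth the raw stream, map movements to gestures, publish them as key events) can be sketched compactly. The smoothing constant, tilt thresholds, and key mapping below are illustrative assumptions; the Bluetooth transport between phone and desktop is omitted.

# Illustrative accelerometer-to-key-event pipeline: smooth the raw samples, detect a
# coarse tilt gesture, and publish it as a key event. Thresholds and mapping are assumed.

def low_pass(samples, alpha=0.2):
    """Exponential smoothing of (x, y, z) samples to suppress hand jitter."""
    smoothed, prev = [], samples[0]
    for s in samples:
        prev = tuple(alpha * v + (1 - alpha) * p for v, p in zip(s, prev))
        smoothed.append(prev)
    return smoothed

def detect_tilt(sample, threshold=4.0):
    """Map a dominant x/y acceleration to a directional gesture, else None."""
    x, y, _ = sample
    if abs(x) < threshold and abs(y) < threshold:
        return None
    if abs(x) >= abs(y):
        return "tilt_right" if x > 0 else "tilt_left"
    return "tilt_forward" if y > 0 else "tilt_back"

KEY_MAP = {"tilt_left": "LEFT_ARROW", "tilt_right": "RIGHT_ARROW",
           "tilt_forward": "UP_ARROW", "tilt_back": "DOWN_ARROW"}

def publish_key_events(raw_samples, send_key):
    """Convert the smoothed stream into key events for the target application."""
    for sample in low_pass(raw_samples):
        gesture = detect_tilt(sample)
        if gesture:
            send_key(KEY_MAP[gesture])
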

Patents

Ninad Sathe, Vijay Rajanna, Ilya Daniel Rosenberg. System and method for modifying haptic feedback response of a touch sensor. Publication/Patent Number: US20230350494A1. Assignee: Sensel, Inc.
Ilya Daniel Rosenberg, Aaron Zarraga, Vijay Rajanna, Tomer Moscovich. System and method for detecting and characterizing touch inputs at a human-computer interface. Publication/Patent Number: US20230214055A1. Assignee: Sensel, Inc.
Ilya Daniel Rosenberg, Aaron Zarraga, Vijay Rajanna, Tomer Moscovich. System and method for detecting and characterizing touch inputs at a human-computer interface. Publication/Patent Number: US20210278967A1. Assignee: Sensel, Inc.
Ilya Daniel Rosenberg, Aaron Zarraga, Vijay Rajanna, Tomer Moscovich. System and method for detecting and characterizing touch inputs at a human-computer interface. Publication/Patent Number: US20220137812A1. Assignee: Sensel, Inc.
Ilya Daniel Rosenberg, Aaron Zarraga, Vijay Rajanna, Tomer Moscovich. System and method for detecting and characterizing touch inputs at a human-computer interface. Publication/Patent Number: US11334190B2. Assignee: Sensel, Inc.
Ilya Daniel Rosenberg, Aaron Zarraga, Vijay Rajanna, Tomer Moscovich. System and method for detecting and characterizing touch inputs at a human-computer interface. Publication/Patent Number: US11635847B2. Assignee: Sensel, Inc.