Cogscent is pleased to announce the initial release of ACT-Touch, an extension to the ACT-R cognitive modeling framework. ACT-Touch, in combination with ACT-R, establishes a working framework for modeling and simulating human interactions with mobile touchscreen devices. As computers leave behind the traditional desktop environment, our cognitive modeling tools must address a different set of human-computer interaction challenges and interaction styles, including smaller displays and slower text input due to the lack of full-sized physical keyboards. They must also address the new advantages brought about by mobility and direct physical manipulation of interface elements. These are important influences on cognition not typically present in a desktop computing environment. ACT-Touch enables modeling cognition situated in such task environments by extending ACT-R with the motor movements typically found in multitouch display gestures.
The initial release of ACT-Touch was presented at this summer's 19th Annual ACT-R Workshop at Carnegie Mellon University. ACT-Touch currently supports simulation of basic gestural inputs, such as taps and swipes. Further work is intended to address additional challenges such as visual occlusion of the display by the hand, motor learning of gestures, and mobility. ACT-Touch was made possible by grant 60NANB12D134 from the National Institute of Standards and Technology.
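To give a flavor of how a model uses these gestures, the fragment below is a hypothetical sketch of an ACT-R production issuing a tap request. ACT-Touch extends the motor module with new movement styles, so gestures are requested through the usual +manual> buffer; the specific slot names shown here (hand, finger) follow the conventions described in the ACT-Touch Reference Manual, which should be consulted for the definitive syntax:

```lisp
;; Hypothetical sketch: a production that requests a single tap
;; once the motor module is free. Requires a loaded ACT-R system
;; with the ACT-Touch extension.
(p tap-target
   =goal>
      isa      select-item
      state    ready
   ?manual>
      state    free          ; motor module must be idle
==>
   +manual>
      isa      tap           ; ACT-Touch movement style
      hand     right
      finger   index
   =goal>
      state    done)
```

Other gestures, such as swipes, would be requested the same way, with additional slots specifying parameters such as direction and distance.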
Lisp code for ACT-Touch [zip archive].
ACT-Touch Reference Manual [PDF].
We are very proud and excited to announce that the National Institute of Standards and Technology (NIST) has awarded funding for our proposal, Formal Model of Human-System Performance!