Yeh RB. 2007. Designing interactions that combine pen, paper, and computer. Dissertation. Department of Computer Science, Stanford University.

Year Published: 2007
Abstract: 

Pen and paper are powerful tools for visualizing designs, penning music, and communicating through written language. This coupling is mobile, flexible, graspable, and robust. It has even evolved in response to technology: we print out electronic articles to read, and scribble annotations on them before meeting with colleagues. The introduction of digital pens that capture handwriting has now made it feasible to augment forms, notepads, and maps with computation. Applications can recognize handwriting, upload notes to the Web, and detect pen gestures for initiating a search. In doing so, we combine paper's affordances with the benefits of technology, including search, redundancy, and remote collaboration. We refer to these as paper+digital interfaces.

Developing paper+digital interfaces is challenging. Programmers need to abstract input into high-level events, coordinate interactions across time, and manage output on devices. This is difficult because interface programmers are accustomed to working with graphical applications that provide real-time feedback on a single display. Additionally, debugging paper interfaces requires added effort, such as printing an interface before testing it. As a result, few people currently build paper+digital interfaces.

This dissertation explores how pen and paper can be used in concert with computers to make tasks more efficient, engaging, and robust. It contributes new software tools and paper+digital interaction techniques. It comprises PaperToolkit, a platform for building paper applications; ButterflyNet, a paper notebook that automatically structures field data; and GIGAprints, large paper prints that support collaboration and visual search.

PaperToolkit helps programmers build applications with digital pens and paper, introducing abstractions and tools that enhance the development and testing of these interfaces. We evaluated the toolkit through a class deployment with 17 teams (69 students), an analysis of the code produced by those teams, and extended use in our own research projects. The evaluation found the abstractions to be highly effective: the approach provides a learnable programming model and lowers the barrier to debugging multi-device interactions.

PaperToolkit's design was influenced by our experience building paper interfaces. Field scientists, such as biologists, rely on paper notebooks to capture and structure field data; this practice inspired ButterflyNet. With ButterflyNet, a scientist captures handwritten notes and photographs using a digital pen, notebook, and camera. When this content is uploaded to a computer, it is automatically organized and presented in a multimedia browser. A lab study with 14 biologists shows that ButterflyNet enhances the field capture of notes and digital media.

Beyond the paper notebook, we explored the class of large printed interfaces that augment walls and tables. GIGAprints comprises augmented posters and maps that present a large amount of high-resolution graphical content, yet can still be rolled up and taken into the field. Graphical feedback can be displayed on a handheld device or overlaid on the GIGAprints. Our evaluation found that the large size provides added visual context and enhances both collocated and remote collaboration.

Using PaperToolkit, applications such as ButterflyNet and GIGAprints could have been built in much less time. The toolkit is open source and is used today in research laboratories around the world, including labs in Paris, Siegen, Darmstadt, and at Stanford.
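
To make the event-driven programming model described above concrete, the following is a minimal, self-contained Java sketch of the kind of abstraction the abstract alludes to: a printed region dispatches high-level pen events to registered handlers, much as a GUI widget dispatches clicks. The names used here (PaperRegion, InkHandler, PenStroke, PaperSketchDemo) are illustrative assumptions for this sketch and are not taken from the actual PaperToolkit API.

    // Hypothetical sketch of an event-driven paper UI abstraction.
    // Names are illustrative; this is not the real PaperToolkit API.
    import java.util.ArrayList;
    import java.util.List;

    /** A high-level pen event: a stroke captured inside a printed region. */
    record PenStroke(String regionName, double x, double y, long timestampMs) {}

    /** Handler interface, analogous to a listener in a GUI toolkit. */
    interface InkHandler {
        void onStroke(PenStroke stroke);
    }

    /** A printed region that dispatches captured strokes to its handlers. */
    class PaperRegion {
        private final String name;
        private final List<InkHandler> handlers = new ArrayList<>();

        PaperRegion(String name) { this.name = name; }

        void addInkHandler(InkHandler handler) { handlers.add(handler); }

        /** Called by the (simulated) pen input layer when a stroke arrives. */
        void dispatch(double x, double y, long timestampMs) {
            PenStroke stroke = new PenStroke(name, x, y, timestampMs);
            for (InkHandler h : handlers) {
                h.onStroke(stroke);
            }
        }
    }

    public class PaperSketchDemo {
        public static void main(String[] args) {
            // A "search" region on the printed page: writing in it could trigger
            // an action on a connected device, e.g. showing results on a handheld.
            PaperRegion searchBox = new PaperRegion("searchBox");
            searchBox.addInkHandler(stroke ->
                System.out.println("Stroke in " + stroke.regionName()
                    + " at (" + stroke.x() + ", " + stroke.y() + ")"));

            // Simulated pen input standing in for a real digital pen stream.
            searchBox.dispatch(12.5, 30.0, System.currentTimeMillis());
        }
    }

Modeling pen input as listener callbacks attached to printed regions mirrors familiar GUI toolkits, which is consistent with the abstract's claim that the approach offers a learnable programming model for interface programmers.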

Article Title: Designing interactions that combine pen, paper, and computer
Article ID: 1179