AniCode: Authoring Coded Artifacts for Network-Free Personalized Animations

31 Jul 2018  ·  Zeyu Wang, Shiyu Qiu, Qingyang Chen, Alexander Ringlein, Julie Dorsey, Holly Rushmeier

Time-based media, such as videos, synthetic animations, and virtual reality experiences, are used for communication in applications ranging from manufacturers explaining the operation of a new appliance to scientists illustrating the basis of a new conclusion. However, authoring time-based media that are effective and personalized for the viewer remains a challenge. We introduce AniCode, a novel framework for authoring and consuming time-based media. An author encodes a video animation in a printed code and affixes the code to an object. A consumer uses a mobile application to capture an image of the object and code, and to generate a video presentation on the fly. Importantly, AniCode renders the video personalized to the consumer's visual context. Our system is designed to be low cost and easy to use. Because it requires no internet connection, and because animations decode correctly only in the intended context, AniCode enhances the privacy of communication using time-based media. Animation schemes in the system include a series of 2D and 3D geometric transformations, color transformation, and annotation. We demonstrate the AniCode framework with sample applications from a wide range of domains, including product "how to" examples, cultural heritage, education, creative art, and design. We evaluate the ease of use and effectiveness of our system with a user study.
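The pipeline the abstract describes (author encodes an animation in a printed code; consumer decodes it offline and plays it over a captured image) can be sketched as follows. This is a minimal illustration, not AniCode's actual format: the JSON schema, the field names (`target`, `steps`, `op`), and the use of zlib-compressed JSON as the printed-code payload are all assumptions made for the example.

```python
import json
import math
import zlib

# Hypothetical animation description: keyframed 2D transforms applied to a
# named region of the consumer's captured image. Schema is illustrative only.
animation = {
    "target": "knob",  # image region the animation should act on
    "steps": [
        {"op": "rotate", "deg": 90, "t": 1.0},          # rotate 90 deg over 1 s
        {"op": "translate", "dx": 20, "dy": 0, "t": 0.5},
    ],
}

def encode_payload(anim: dict) -> bytes:
    """Compress the description so it fits in a small printed code."""
    return zlib.compress(json.dumps(anim, separators=(",", ":")).encode())

def decode_payload(payload: bytes) -> dict:
    """Recover the description on the device, with no network access."""
    return json.loads(zlib.decompress(payload))

def affine_for(step: dict) -> list:
    """Build a 2x3 affine matrix for one animation step."""
    if step["op"] == "rotate":
        a = math.radians(step["deg"])
        return [[math.cos(a), -math.sin(a), 0.0],
                [math.sin(a),  math.cos(a), 0.0]]
    if step["op"] == "translate":
        return [[1.0, 0.0, float(step["dx"])],
                [0.0, 1.0, float(step["dy"])]]
    raise ValueError(f"unknown op: {step['op']!r}")

# Author side: produce the payload to print. Consumer side: decode and
# turn each step into a transform to apply to the captured image.
payload = encode_payload(animation)
recovered = decode_payload(payload)
matrices = [affine_for(s) for s in recovered["steps"]]
```

Because both encoding and decoding are pure local computation, this mirrors the network-free, context-bound design the paper emphasizes: the payload is meaningless without the consumer's own captured image to transform.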
