Paper: Intent Tagging: Exploring Micro-Prompting Interactions for Supporting Granular Human-GenAI Co-Creation Workflows

Despite Generative AI (GenAI) systems’ potential for enhancing content creation, users often struggle to effectively integrate GenAI into their creative workflows. Core challenges include misalignment of AI-generated content with user intentions (intent elicitation and alignment), user uncertainty around how to best communicate their intents to the AI system (prompt formulation), and insufficient flexibility of AI systems to support diverse creative workflows (workflow flexibility).

Motivated by these challenges, we created IntentTagger: a system for slide creation based on the notion of Intent Tags—small, atomic conceptual units that encapsulate user intent—for exploring granular and non-linear micro-prompting interactions for Human-GenAI co-creation workflows.
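As a loose illustration of the idea (my sketch, not the paper's actual data model), an intent tag can be thought of as a small labeled unit of intent attached to part of a slide and later expanded into a short prompt; all names below are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class IntentTag:
        """A small, atomic unit of user intent (hypothetical representation)."""
        label: str              # e.g. "more playful tone" or "add a chart"
        target: str             # the slide element the tag is attached to
        ambiguity: float = 0.5  # 0 = fully specified, 1 = left open to the AI

    def compose_micro_prompt(tags: list[IntentTag]) -> str:
        """Turn a handful of tags into one small prompt for a generative model."""
        lines = [f"- For '{t.target}': {t.label} (latitude: {t.ambiguity:.1f})" for t in tags]
        return "Revise the slide according to these intents:\n" + "\n".join(lines)

    print(compose_micro_prompt([
        IntentTag("more playful tone", "title"),
        IntentTag("summarize as three bullets", "body", ambiguity=0.2),
    ]))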

Our user study with 12 participants provides insights into the value of flexibly expressing intent across varying levels of ambiguity, meta-intent elicitation, and the benefits and challenges of intent tag-driven workflows. We conclude by discussing the broader implications of our findings and design considerations for GenAI-supported content creation workflows.


Frederic Gmeiner, Nicolai Marquardt, Michael Bentley, Hugo Romat, Michel Pahud, Dave Brown, Asta Roseway, Nikolas Martelaro, Kenneth Holstein, Ken Hinckley, and Nathalie Riche. Intent Tagging: Exploring Micro-Prompting Interactions for Supporting Granular Human-GenAI Co-Creation Workflows. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). ACM, New York, NY, USA, Article 531, 31 pages. Yokohama, Japan, April 26-May 1, 2025. https://doi.org/10.1145/3706598.3713861
[PDF] [Video – mp4] [Talk video – mp4]

Paper: AI-Instruments: Embodying Prompts as Instruments to Abstract & Reflect Graphical Interface Commands as General-Purpose Tools

 

Chat-based prompting yields verbose, linear-sequential text responses, making it difficult to explore and refine ambiguous intents, back up and reinterpret, or shift direction in creative AI-assisted design work.

AI-Instruments instead embody “prompts” as interface objects via three key principles:

  1. Reification of user-intent as reusable direct-manipulation instruments;
  2. Reflection of multiple interpretations of ambiguous user-intents (Reflection-in-intent) as well as the range of AI-model responses (Reflection-in-response) to inform design “moves” towards a desired result; and
  3. Grounding to instantiate an instrument from an example, result, or extrapolation directly from another instrument.

Further, AI-Instruments leverage LLMs to suggest, vary, and refine new instruments, enabling a system that goes beyond hard-coded functionality by generating its own instrumental controls from content.
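As a rough sketch of what generating controls from content could look like in code (an illustration under assumed names, not the authors' implementation; ask_llm stands in for any LLM call):

    import json

    def ask_llm(prompt: str) -> str:
        """Placeholder for any LLM call; returns a canned JSON response here."""
        return '[{"name": "formality", "low_label": "casual", "high_label": "formal"}]'

    def suggest_instruments(content: str) -> list[dict]:
        """Ask the model to propose slider-like controls a designer could drag to vary the content."""
        prompt = (
            "Given this content, propose up to 3 slider-like controls a designer could "
            "drag to vary it. Reply as a JSON list of {name, low_label, high_label}.\n\n"
            + content
        )
        return json.loads(ask_llm(prompt))

    print(suggest_instruments("A short product description for a coffee grinder."))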

We demonstrate four technology probes, applied to image generation, and report qualitative insights from twelve participants, showing how AI-Instruments address challenges of intent formulation, steering via direct manipulation, and non-linear iterative workflows to reflect and resolve ambiguous intents.


Nathalie Riche, Anna Offenwanger, Frederic Gmeiner, Dave Brown, Hugo Romat, Michel Pahud, Nicolai Marquardt, Kori Inkpen, and Ken Hinckley. AI-Instruments: Embodying Prompts as Instruments to Abstract & Reflect Graphical Interface Commands as General-Purpose Tools. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). ACM, New York, NY, USA, Article 1104, 18 pages. Yokohama, Japan, April 26-May 1, 2025. Honorable Mention Award (top 5% of papers). https://doi.org/10.1145/3706598.3714259
[PDF] [Video – mp4]

Paper: SpaceInk: Making Space for In-Context Annotations

When editing or reviewing a document, people directly overlay ink marks on content. For instance, they underline words, or circle elements in a figure. These overlay marks often accompany in-context annotations in the form of handwritten footnotes and marginalia. People tend to put annotations close to the content that elicited them, but have to compose within oft-limited whitespace.

We introduce SpaceInk, a design space of pen+touch techniques that make room for in-context annotations by dynamically reflowing documents. We identify representative techniques in this design space, spanning both new ones and existing ones. We evaluate them in a user study, with results that inform the design of a prototype system. Our system lets users concentrate on capturing fleeting thoughts, streamlining the overall annotation process by enabling the fluid interleaving of space-making gestures with freeform ink.

[Watch SpaceInk video on YouTube]


Hugo Romat, Emmanuel Pietriga, Nathalie Henry Riche, Ken Hinckley, and Caroline Appert. SpaceInk: Making Space for In-Context Annotations. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19), pp. 871–882. New Orleans, LA, USA, October 20-23, 2019.
https://doi.org/10.1145/3332165.3347934
[PDF] [30-second preview – mp4] [30s preview on YouTube] [Full video – mp4] [Watch on YouTube]

See also Hugo Romat’s SpaceInk project page.

Paper: Sensing Posture-Aware Pen+Touch Interaction on Tablets

Thin and lightweight tablets naturally afford a variety of laid-back interactions through touch, as well as digital pen input to capture freeform thoughts. But status-quo pen and touch interfaces often force people to reach for distant toolbars at inopportune times and awkward, fixed locations.

But what if tablets could recognize how you actually hold and use the device, and adapt the interface to suit?


We propose sensing techniques for tablet devices that transition on-the-go between mobile and stationary use, via Postural Awareness. Postural Awareness reflects the ability of a tablet to sense how it is being used, held, and oriented in a variety of situations, and then adapt to the particular context of use.

[Image: Adapts to Posture]

Our resulting Posture-Aware Interface responds to nuances (including those shown in A-G above) such as shifting hand grips, varying screen angles, planting the palm while writing or sketching with a digital pen, and detecting your hand even as it reaches onto the screen – allowing the system to detect left vs. right-handed use, for example, or to know you’re about to touch the screen before your meaty little finger even lands on the glass.

It even makes it super easy to discover how to use our tablet’s Surface Pro Pen simply by laying it down on the glass: the pen is recognized, and simple callouts let you customize the button, eraser-click functions, and ink style to your liking.

[Image: Pen Options]

To realize these capabilities, Posture-Aware devices combine three sensing modalities chosen specifically for plausible integration with tablets:

  1. the raw capacitance image from the touchscreen itself,
  2. the tilt and motion of the device, and
  3. electric field sensors around the screen bezel for grasp and hand proximity detection.
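As a rough, assumption-laden illustration (not the paper's actual implementation) of how these three channels might be fused into a simple grip estimate:

    import numpy as np

    def estimate_grip(cap_image: np.ndarray,      # touchscreen capacitance frame
                      accel: np.ndarray,          # 3-axis accelerometer sample
                      efield: np.ndarray) -> str: # per-bezel-electrode proximity values, left to right
        """Toy fusion of the three sensing channels into a left/right grip guess."""
        # Where is touch/palm mass concentrated on the screen?
        cols = cap_image.sum(axis=0)
        touch_bias = cols[: cols.size // 2].sum() - cols[cols.size // 2 :].sum()
        # Which side of the bezel senses an approaching or gripping hand?
        efield_bias = efield[: efield.size // 2].sum() - efield[efield.size // 2 :].sum()
        # Device tilt can help break ties (e.g., propped on a leg while held in one hand).
        tilt_bias = np.sign(accel[0])
        score = 0.5 * np.sign(touch_bias) + 0.4 * np.sign(efield_bias) + 0.1 * tilt_bias
        return "left-handed grip" if score > 0 else "right-handed grip"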

[Image: Sensors]

The project then demonstrates how these sensors enable Posture Awareness that adapts interaction and morphs user interface elements to suit fine-grained particulars of the user’s natural arm, hand, and grip placement relative to the device.

Watch Sensing Posture-Aware Pen+Touch Interaction on Tablets video on YouTube


Yang Zhang, Michel Pahud, Christian Holz, Haijun Xia, Gierad Laput, Michael McGuffin, Xiao Tu, Andrew Mittereder, Fei Su, William Buxton, and Ken Hinckley. Sensing Posture-Aware Pen+Touch Interaction on Tablets. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 14 pages. Glasgow, Scotland, UK, May 4-9, 2019. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3290605.3300285
[PDF] [30 second preview – MP4] [full video – mp4] [Watch on YouTube]

Paper: Inking Your Insights: Investigating Digital Externalization Behaviors During Data Analysis

Externalizing one’s thoughts can be helpful during data analysis, such as marking interesting data, noting hypotheses, and drawing diagrams. In this paper, we present two exploratory studies conducted to investigate the types and use of externalizations during the analysis process.

We first studied how people take notes during different stages of data analysis using VoyagerNote, a visualization recommendation system augmented to support text annotations, and coupled with participants’ favorite external note-taking tools (e.g., word processor, pen & paper). Externalizations manifested mostly as notes written on paper or in a word processor, with annotations atop views used almost exclusively in the initial phase of analysis.

In the second study, we investigated two specific opportunities: (1) integrating digital pen input to facilitate the use of free-form externalizations and (2) providing a more explicit linking between visualizations and externalizations. We conducted the study with VoyagerInk, a visualization system that enabled free-form externalization with a digital pen as well as touch interactions to link externalizations to data. Participants created more graphical externalizations with VoyagerInk and revisited over half of their externalizations via the linking mechanism.

Reflecting on the findings from these two studies, we discuss implications for the design of data analysis tools.

Watch Inking Your Insights video on YouTube


Yea-Seul Kim, Nathalie Henry Riche, Bongshin Lee, Matthew Brehmer, Michel Pahud, Ken Hinckley, and Jessica Hullman. Inking Your Insights: Investigating Digital Externalization Behaviors During Data Analysis. In Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces (ISS '19), pp. 255–267. ACM, New York, NY, USA, 13 pages. Daejeon, Republic of Korea, November 10-13, 2019.
https://doi.org/10.1145/3343055.3359714
[PDF] [30-second preview on YouTube] [Full video – MP4] [Talk on YouTube]

Paper: DataToon: Drawing Data Comics About Dynamic Networks with Pen + Touch Interaction

Comics are an entertaining and familiar medium for presenting compelling stories about data.

However, existing visualization authoring tools do not leverage this expressive medium.

In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions.

A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visual narrative. In a user study, participants quickly learned to use DataToon for producing data comics.

Watch DataToon video on YouTube


Nam Wook Kim, Nathalie Henry Riche, Benjamin Bach, Guanpeng Xu, Matthew Brehmer, Ken Hinckley, Michel Pahud, Haijun Xia, Michael J. McGuffin, and Hanspeter Pfister. DataToon: Drawing Data Comics About Dynamic Networks with Pen + Touch Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 12 pages. Glasgow, Scotland, UK, May 4-9, 2019.
https://doi.org/10.1145/3290605.3300335
[PDF] [Video – MP4] [Watch on YouTube]

Paper: HoloDoc: Enabling Mixed Reality Workspaces that Harness Physical and Digital Content

Prior research has identified that physical paper documents have many positive attributes, such as natural tangibility and inherent physical flexibility.

When documents are presented on digital devices, however, they can provide unique functionality to users, such as the ability to search, view dynamic multimedia content, and make use of indexing.

This work explores the fusion of physical and digital paper documents. It first presents the results of a study that probed how users perform document-intensive analytical tasks when both physical and digital versions of documents were available. The study findings then informed the design of HoloDoc, a mixed reality system that augments physical artifacts with rich interaction and dynamic virtual content.

Finally, we present the interaction techniques that HoloDoc affords, and the results of a second study that assessed HoloDoc’s utility when working with digital and physical copies of academic articles.

Watch 30-second preview video on YouTube


Zhen Li, Michelle Annett, Ken Hinckley, Karan Singh, and Daniel Wigdor. HoloDoc: Enabling Mixed Reality Workspaces that Harness Physical and Digital Content. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 14 pages. Glasgow, Scotland, UK, May 4-9, 2019.
http://dx.doi.org/10.1145/3290605.3300917
[PDF] [30 second preview video mp4 | watch on YouTube] [Full Video mp4]

Paper: DrawMyPhoto: Assisting Novices in Drawing from Photographs

We present DrawMyPhoto, an interactive system that can assist a drawing novice in producing a quality drawing by automatically parsing a photograph into a step-by-step drawing tutorial.

The system utilizes image processing to produce distinct line work and shading steps from the photograph, and offers novel real-time feedback on pressure and tilt, along with grip suggestions as the user completes the tutorial.
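A minimal sketch (my own illustration with off-the-shelf OpenCV calls, not the authors' pipeline) of how separate line-work and shading layers might be derived from a photo:

    import cv2
    import numpy as np

    def tutorial_layers(photo_path: str, shade_levels: int = 4):
        """Split a photograph into a line-work layer and a few posterized shading steps."""
        gray = cv2.cvtColor(cv2.imread(photo_path), cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        # Line-work step: edges rendered as dark strokes on white.
        line_work = 255 - cv2.Canny(gray, 50, 150)
        # Shading steps: quantize brightness into a few flat tones, darkest regions first.
        bins = np.digitize(gray, np.linspace(0, 255, shade_levels + 1)[1:-1])
        shading = [(bins <= level).astype(np.uint8) * 255 for level in range(shade_levels - 1)]
        return line_work, shading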

Our evaluation showed that the generated steps and real-time assistance allowed novices to produce significantly better drawings than with a more traditional grid-based approach, particularly with respect to accuracy, shading, and details.

This was confirmed by domain experts who blindly rated the drawings. The participants responded well to the real-time feedback, and believed it helped them learn proper shading techniques and the order in which a drawing should be approached. We saw promising potential in the tool to boost the confidence of novices and lower the barrier to artistic creation.


Blake Williford, Michel Pahud, Ken Hinckley, Abhay Doke, and Tracy Hammond. DrawMyPhoto: Assisting Novices in Drawing from Photographs. In Proceedings of the 12th Conference on Creativity and Cognition (C&C '19). ACM, New York, NY, USA, pp. 198-209. San Diego, California, United States, June 2019.
https://doi.org/10.1145/3325480.3325507

[PDF] [Video – mp4]

Paper: Dear Pictograph: Investigating the Role of Personalization and Immersion for Consuming and Enjoying Visualizations

Much of the visualization literature focuses on assessment of visual representations with regard to their effectiveness for understanding data.

But in the present work, we instead focus on making data visualization experiences more enjoyable, to foster deeper engagement with data. To this end, we investigate two strategies to make visualization experiences more enjoyable and engaging: personalization and immersion.

First, for personalization, we selected pictographs (composed of multiple data glyphs). This representation affords creative freedom, allowing people to craft symbolic or whimsical shapes of personal significance to represent data.

Second, to probe immersion, we conducted a qualitative study with 12 participants who crafted such personalized pictographs using a large pen-enabled device as well as while immersed within a VR environment.

Our results indicate that personalization and immersion both have a positive impact on making visualizations more enjoyable experiences.

Watch Dear Pictograph video on YouTube


Hugo Romat, Nathalie Henry Riche, Christophe Hurter, Steven Drucker, Fereshteh Amini, and Ken Hinckley. Dear Pictograph: Investigating the Role of Personalization and Immersion for Consuming and Enjoying Visualizations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA, 13 pages. Honolulu, HI, USA, April 25-30, 2020.
https://doi.org/10.1145/3313831.3376348
[PDF] [30 second preview video – mp4] [Full video – mp4 | watch on YouTube]

See also Hugo Romat’s Dear Pictograph project page.

Paper: InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices

While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge.

In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis.

By considering standard interface elements (e.g., axes, marks) — and by grounding our design in a set of core concepts including operations, parameters, targets, and instruments — we systematically develop interactions applicable to different visualization types.
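To make that vocabulary concrete, here is a hypothetical encoding of one such interaction (illustrative only; the field names are mine, not the paper's):

    from dataclasses import dataclass

    @dataclass
    class Interaction:
        """One multimodal action described with the core concepts named above."""
        operation: str   # what to do, e.g. "filter"
        parameter: str   # how, e.g. "selected marks only"
        target: str      # which interface element it applies to, e.g. "marks" or "x-axis"
        instrument: str  # which modality performed it: "pen", "touch", "speech", or a combination

    # e.g., circling marks with the pen while saying "keep only these"
    example = Interaction(operation="filter", parameter="selected marks only",
                          target="marks", instrument="pen + speech")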

And to exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input.

Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.

Watch InChorus video on YouTube


Arjun Srinivasan, Bongshin Lee, Nathalie Henry Riche, Steven Drucker, and Ken Hinckley. InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA, 13 pages. Honolulu, HI, USA, April 25-30, 2020. Honorable Mention Award (top 5% of papers). https://doi.org/10.1145/3313831.3376782
[PDF] [30 second preview video – mp4 | YouTube] [Full video – MP4]
[Arjun Srinivasan’s InChorus talk for CHI '20 on YouTube]