Sensecape

Enabling Multilevel Exploration and Sensemaking with Large Language Models

Sangho Suh, UC San Diego
Bryan Min, UC San Diego
Srishti Palani, UC San Diego
Haijun Xia, UC San Diego

Paper Demo (TBA)

People are increasingly turning to large language models (LLMs) for complex information tasks like academic research or planning a move to another city. However, while these tasks often require working in a nonlinear manner (e.g., arranging information spatially to organize and make sense of it), current interfaces for interacting with LLMs are generally linear, built around conversational interaction. To address this limitation and explore how we can support LLM-powered exploration and sensemaking, we developed Sensecape, an interactive system designed to support complex information tasks with an LLM by enabling users to (1) manage the complexity of information through multilevel abstraction and (2) seamlessly switch between foraging and sensemaking. Our within-subject user study reveals that Sensecape empowers users to explore more topics and structure their knowledge hierarchically, thanks to the externalization of levels of abstraction. We contribute implications for LLM-based workflows and interfaces for information tasks.



Video Demo

See Sensecape in action in this Video Demo.


Bibtex

@inproceedings{suh2023sensecape,
  author = {Suh, Sangho and Min, Bryan and Palani, Srishti and Xia, Haijun},
  title = {Sensecape: Enabling Multilevel Exploration and Sensemaking with Large Language Models},
  year = {2023},
  isbn = {9798400701320},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://dl.acm.org/doi/10.1145/3586183.3606756},
  doi = {10.1145/3586183.3606756},
  booktitle = {The 36th Annual ACM Symposium on User Interface Software and Technology},
  keywords = {Large Language Model, Multilevel Abstraction, Visualization},
  location = {San Francisco, CA, USA},
  series = {UIST '23}
}