
Joseph Chee Chang

Technical HCI Research・School of Computer Science・Carnegie Mellon University

I am a PhD candidate at the Language Technologies Institute. I specialize in Human-Computer Interaction, Natural Language Processing, Sensemaking, and Crowdsourcing.

I develop information systems and interaction techniques that help users and crowdworkers explore and make sense of large amounts of information and make better decisions. For example, using crowds to synthesize search results into coherent articles, or empowering consumers to distill deep insights from thousands of reviews and make confident purchases.

I am advised by Aniket Kittur, and my research is supported by Google, Bosch, Yahoo, the ONR, and the NSF. Here’s a link to my Thesis Document.

Mesh

Scaffolding Comparison Tables for Online Decision Making

Consumers choosing among many different products can base their decisions on tens of thousands of pieces of online evidence about each of their options. However, synthesizing this information into confident decisions can incur high interaction and cognitive costs: online information is scattered across different sources, and evidence such as reviews can be subjective and conflicting, requiring users to interpret it in light of their personal context. We introduce Mesh, which scaffolds users in iteratively building up a better understanding of both their choices and their criteria by evaluating evidence gathered across sources. Lab and field deployment studies found that Mesh significantly reduces the costs of gathering and evaluating evidence, and that scaffolding decision making through personalized criteria enables users to gain deeper insights from the data and make confident purchase decisions.

Joseph Chee Chang, Nathan Hahn, Aniket Kittur.
ACM UIST 2020 (r=21.6% N=450)

- HCI, Information Retrieval, Interaction, Search, Sensemaking, UIST
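
To make the evidence-backed comparison idea concrete, below is a minimal TypeScript sketch of one way a Mesh-style table could be modeled, with options, personal criteria, and evidence-filled cells. The interfaces and the weighted scoring rule are illustrative assumptions, not the system's actual implementation.

```typescript
// Hypothetical data model for a Mesh-style comparison table:
// rows are product options, columns are the user's personal criteria,
// and each cell accumulates evidence gathered from different sources.
interface Evidence {
  source: string;        // e.g. a review site or spec page URL
  snippet: string;       // the quoted text the user clipped
  sentiment: -1 | 0 | 1; // user's judgment: against, neutral, or for
}

interface Criterion {
  name: string;          // e.g. "battery life"
  weight: number;        // personal importance, 0..1
}

interface ComparisonTable {
  options: string[];                 // product names
  criteria: Criterion[];
  cells: Map<string, Evidence[]>;    // keyed by `${option}|${criterion}`
}

// A simple weighted score per option, so users can see how evidence
// accumulated under their own criteria translates into a ranking.
function scoreOption(table: ComparisonTable, option: string): number {
  return table.criteria.reduce((total, c) => {
    const evidence = table.cells.get(`${option}|${c.name}`) ?? [];
    const cellScore = evidence.reduce((s, e) => s + e.sentiment, 0);
    return total + c.weight * cellScore;
  }, 0);
}
```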

SearchLens

Composing and Capturing Complex User Interests for Exploratory Search

Whether figuring out where to eat in an unfamiliar city or deciding which apartment to live in, reviews and forum posts are often a significant factor in online decision making. However, making sense of these rich repositories of diverse opinions can be prohibitively effortful: searchers need to sift through a large number of reviews to characterize each item based on the aspects they care about. We introduce a novel system, SearchLens, in which searchers build up a collection of composable and reusable "Lenses" that reflect their different latent interests. These Lenses also allow the system to generate personalized interfaces with visual explanations that promote transparency and enable in-depth exploration.

Joseph Chee Chang, Nathan Hahn, Adam Perer, Aniket Kittur.
ACM IUI 2019 (r=25% N=282)

- HCI, IUI, Information Retrieval, Interaction, Search, Sensemaking
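
As a rough illustration of the Lens concept, the sketch below models a Lens as a named, weighted set of keywords and scores an item's reviews against a collection of Lenses. The data shapes and scoring function are hypothetical, chosen only to show how composable Lenses could drive both ranking and per-Lens explanations.

```typescript
// Hypothetical sketch of a composable "Lens": a named, weighted set of
// keywords expressing one latent interest (e.g. "quiet workspace").
interface Lens {
  name: string;
  keywords: { term: string; weight: number }[];
}

// Score one item's reviews against a set of Lenses by counting keyword
// mentions; also return per-Lens counts so the UI can explain the score.
function scoreItem(reviews: string[], lenses: Lens[]) {
  const text = reviews.join(" ").toLowerCase();
  const perLens = lenses.map((lens) => {
    const hits = lens.keywords.reduce((sum, { term, weight }) => {
      const matches = text.split(term.toLowerCase()).length - 1;
      return sum + weight * matches;
    }, 0);
    return { lens: lens.name, hits };
  });
  const total = perLens.reduce((s, l) => s + l.hits, 0);
  return { total, perLens }; // perLens doubles as a visual-explanation source
}

// Example: compose two reusable Lenses and rank cafes by their reviews.
const myLenses: Lens[] = [
  { name: "quiet", keywords: [{ term: "quiet", weight: 2 }, { term: "noisy", weight: -2 }] },
  { name: "coffee quality", keywords: [{ term: "espresso", weight: 1 }, { term: "roast", weight: 1 }] },
];
```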

Bento Browser

Complex Mobile Search Without Tabs

Complex searches can be overwhelming, leading to many open tabs. This tab overload can make searching on mobile devices especially difficult, where screen real estate is limited and progress is often interrupted. Rather than using tabs to manage information, we introduce browsing through scaffolding: search result lists serve as mutable workspaces where progress can be suspended and resumed. Bento Browser is available for download from the iOS App Store.

Nathan Hahn, Joseph Chee Chang, Aniket Kittur.
ACM SIGCHI 2018 (r=26% N=2595)

- CHI, HCI, Information Retrieval, Interaction, SIGCHI, Search, Sensemaking
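
A minimal sketch of the tab-free, workspace-style model: each search task keeps its result list with per-result progress state and nests subtasks beneath it, so work can be suspended and resumed. The types and the `nextUnread` helper are hypothetical, not Bento Browser's actual data structures.

```typescript
// Hypothetical sketch of "search results as workspace": instead of tabs,
// a search task keeps its result list plus per-result progress state,
// and subtasks nest under it so work can be suspended and resumed later.
type ResultState = "unseen" | "skimmed" | "read" | "starred" | "dismissed";

interface SearchResult {
  title: string;
  url: string;
  state: ResultState;
  note?: string;          // lightweight triage instead of an open tab
}

interface SearchTask {
  query: string;
  results: SearchResult[];
  subtasks: SearchTask[]; // e.g. "tokyo trip" -> "hotels", "day trips"
  archived: boolean;      // suspended tasks stay resumable, not lost
}

// Resuming a task is just finding the unfinished work in the tree.
function nextUnread(task: SearchTask): SearchResult | undefined {
  const here = task.results.find((r) => r.state === "unseen");
  if (here) return here;
  for (const sub of task.subtasks) {
    if (sub.archived) continue;
    const found = nextUnread(sub);
    if (found) return found;
  }
  return undefined;
}
```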

Intentionally Uncertain Input

Supporting Mobile Sensemaking Through Intentionally Uncertain Highlighting

Highlighting can be mentally taxing for learners, who are often unsure about how much information they need to include. We introduce the idea of intentionally uncertain input in the context of highlighting on mobile devices, and present a system that uses force touch and fuzzy bounding boxes to let users save information even when they are uncertain about exactly where to highlight.

Joseph Chee Chang, Nathan Hahn, Aniket Kittur.
ACM UIST 2016 (r=21% N=384)

- HCI, Information Foraging, Interaction, Sensemaking, UIST
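
To illustrate how intentionally uncertain input might be realized, the sketch below maps normalized touch force to a highlight with a fixed core span and a pressure-dependent fuzzy margin, plus a per-character confidence value. The constants and functions are illustrative assumptions rather than the published system's implementation.

```typescript
// Hypothetical sketch of intentionally uncertain highlighting: a light
// force touch selects a tight span with high confidence, while pressing
// harder grows a fuzzy margin of lower-confidence context around it.
interface FuzzyHighlight {
  center: number;      // character offset the finger is anchored on
  coreRadius: number;  // characters the user definitely wants
  fuzzyRadius: number; // extra context captured "just in case"
}

// Map normalized touch force (0..1) to a highlight whose uncertain
// margin grows with pressure.
function highlightFromForce(center: number, force: number): FuzzyHighlight {
  const coreRadius = 40;                       // roughly a sentence of core text
  const fuzzyRadius = Math.round(force * 200); // harder press = wider net
  return { center, coreRadius, fuzzyRadius };
}

// Confidence that a given character offset belongs in the highlight,
// decaying linearly across the fuzzy margin (1 inside the core, 0 outside).
function confidenceAt(h: FuzzyHighlight, offset: number): number {
  const d = Math.abs(offset - h.center);
  if (d <= h.coreRadius) return 1;
  if (d >= h.coreRadius + h.fuzzyRadius) return 0;
  return 1 - (d - h.coreRadius) / h.fuzzyRadius;
}
```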