Joseph Chee Chang

Technical HCI Research · Carnegie Mellon University · Language Technologies Institute

I am a PhD Student at the School of Computer Science at Carnegie Mellon University. I specialize in Human-Computer Interaction, Natural Language Processing, and Sensemaking.

I study how people explore, structure, and make sense of new information in complex decision-making scenarios such as exploratory search and data analytics. These insights guide my exploration of new ways to use crowdsourcing, natural language processing, and machine learning to build novel interactive intelligent information systems that augment human cognition, enhancing learning, knowledge production, and scientific discovery.

I am advised by Aniket Kittur. My research is supported by Google, Bosch, Yahoo, and NSF.

Solvent

A Mixed Initiative System for Finding Analogies between Research Papers

Analogies from distant domains often lead to scientific discoveries. However, finding useful analogies in unfamiliar domains can be prohibitively difficult for researchers, and is poorly supported by search engines. We introduce Solvent, a mixed-initiative system in which annotators structure the abstracts of academic papers into different aspects, and a semantic model uses those aspects to find analogies among research papers within and across domains. These results demonstrate a novel path toward computationally supported knowledge sharing in research communities.

Joel Chan, Joseph Chee Chang, Tom Hope, Dafna Shahaf, Aniket Kittur. CSCW 2018 (r=27% N=)

- Analogy, CSCW, HCI, Information Retrieval, Search, Sensemaking
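The core matching idea can be sketched as comparing papers aspect by aspect rather than as whole documents. The snippet below is a minimal illustration only: the aspect names, example texts, and bag-of-words cosine scoring are stand-ins, not Solvent's actual annotation scheme or semantic model.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Bag-of-words cosine similarity; a toy stand-in for a semantic model.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def analogy_score(paper_a, paper_b):
    # Match on purpose regardless of mechanism: papers with similar goals
    # but different methods are candidate analogies.
    return cosine(paper_a["purpose"], paper_b["purpose"])

a = {"purpose": "adhere surfaces together reversibly",
     "mechanism": "micro-scale setae exploiting van der waals forces"}
b = {"purpose": "adhere tissue surfaces together reversibly",
     "mechanism": "biocompatible hydrogel adhesive"}
print(round(analogy_score(a, b), 2))  # → 0.89: similar purpose, different mechanism
```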

Bento Browser

Complex Mobile Search Without Tabs

Complex searches can be overwhelming, often leading to a proliferation of tabs. This makes searching on mobile devices especially difficult, where screen real estate is limited and tasks are frequently interrupted. Rather than using tabs to manage information, we introduce browsing through scaffolding: search result lists serve as mutable workspaces where progress can be suspended and resumed. You can download Bento Browser from the App Store.

Nathan Hahn, Joseph Chee Chang, Aniket Kittur. CHI 2018 (r=26% N=2595)

- CHI, HCI, Information Retrieval, SIGCHI, Search, Sensemaking
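One way to picture browsing through scaffolding is as a tree of subtasks rather than a flat tab bar. This is a hypothetical data-structure sketch, not Bento Browser's internals; the class and field names are invented.

```python
# A search task decomposed into subtasks, each carrying its own result
# list (the workspace) and completion state, so work can be suspended
# and resumed instead of juggling tabs.
class SearchTask:
    def __init__(self, query):
        self.query = query
        self.subtasks = []
        self.results = []   # the result list doubles as a mutable workspace
        self.done = False

    def add_subtask(self, query):
        child = SearchTask(query)
        self.subtasks.append(child)
        return child

    def resume_point(self):
        # Depth-first: the first unfinished subtask is where work resumes.
        for sub in self.subtasks:
            if not sub.done:
                return sub.resume_point()
        return self

trip = SearchTask("plan a trip to Japan")
flights = trip.add_subtask("flights to Tokyo")
hotels = trip.add_subtask("hotels in Kyoto")
flights.done = True
print(trip.resume_point().query)  # → 'hotels in Kyoto'
```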

Evorus

Crowd-powered Conversational Assistant Built to Automate Itself Over Time

BEST PAPER NOMINATION
Crowd-powered chatbots are more robust than current pure-AI approaches, but can be slower and more expensive at runtime. We combine the two approaches for high quality, low latency, and low cost. We introduce Evorus, a crowd-powered chatbot that automates itself over time by learning to integrate responses from AI chatbots, reuse past responses, and assess response quality. A five-month-long public deployment study shows promising results. You can start using Evorus today.

Kenneth Huang, Joseph Chee Chang, Jeff Bigham. CHI 2018 (r=26% N=2595)

- CHI, Crowdsourcing, HCI, Machine Learning, SIGCHI
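The hybrid idea can be sketched as a shared pool where crowd workers and automated sources both propose candidate responses, and votes decide what gets sent. This is an illustrative simplification, and the class, threshold, and source names are assumptions, not Evorus's actual algorithm.

```python
from collections import defaultdict

class ResponsePool:
    """Pool of candidate chat responses from crowd and bot sources."""
    def __init__(self, accept_threshold=2):
        self.accept_threshold = accept_threshold  # votes needed to send (assumed)
        self.votes = defaultdict(int)
        self.sources = {}

    def propose(self, text, source):
        # Candidates come from crowd workers or integrated AI chatbots alike.
        self.sources[text] = source

    def upvote(self, text):
        self.votes[text] += 1
        if self.votes[text] >= self.accept_threshold:
            return text  # accepted: send to the user
        return None

pool = ResponsePool()
pool.propose("The weather in Pittsburgh is sunny.", "weather-bot")
pool.propose("It's sunny today!", "crowd")
assert pool.upvote("It's sunny today!") is None   # one vote: not yet
print(pool.upvote("It's sunny today!"))           # second vote reaches threshold
```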

Revolt

Collaborative Crowdsourcing for Labeling Machine Learning Datasets

Generating comprehensive labeling guidelines for crowdworkers can be challenging for complex datasets. Revolt harnesses crowd disagreement to identify ambiguous concepts in the data, then coordinates the crowd to collaboratively create rich structures that let requesters make post-hoc decisions, removing the need for comprehensive guidelines and enabling dynamic label boundaries.

Work done during internship at Microsoft Research, Redmond.

Joseph Chee Chang, Saleema Amershi, Ece Kamar. CHI 2017 (r=25% N=2424)

- CHI, Classification, Crowdsourcing, HCI, Labeling, Machine Learning, SIGCHI, Sensemaking
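The core signal can be sketched as flagging items whose crowd labels disagree. This is a minimal illustration of the disagreement-detection step only, with an invented agreement threshold, not Revolt's full multi-stage pipeline.

```python
from collections import Counter

def is_ambiguous(labels, agreement_threshold=0.8):
    # Fraction of workers agreeing with the majority label; low agreement
    # marks the item as an ambiguous concept needing richer structure.
    top = Counter(labels).most_common(1)[0][1]
    return top / len(labels) < agreement_threshold

items = {
    "cat photo":   ["cat", "cat", "cat"],        # clear case
    "cartoon cat": ["cat", "not cat", "cat"],    # ambiguous concept
}
flagged = [name for name, labels in items.items() if is_ambiguous(labels)]
print(flagged)  # → ['cartoon cat']
```

Flagged items would then be routed to follow-up crowd stages instead of being forced into a fixed label.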

Intentionally Uncertain Input

Supporting Mobile Sensemaking Through Intentionally Uncertain Highlighting

Highlighting can be mentally taxing for learners, who are often unsure how much information they need to include. We introduce the idea of intentionally uncertain input in the context of highlighting on mobile devices, and present a system that uses force touch and fuzzy bounding boxes to let users save information while they are still uncertain about exactly where to highlight.

Joseph Chee Chang, Nathan Hahn, Aniket Kittur. UIST 2016 (r=21% N=384)

- HCI, Information Foraging, Interaction, Sensemaking, UIST
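One way to read the force-touch idea is as a mapping from press pressure to a fuzzy highlight region. The function below is a toy illustration with made-up thresholds, not the system's actual interaction model: light presses save a narrow span, harder presses expand it, deferring the precise boundary decision.

```python
def fuzzy_highlight(center_index, force, max_force=1.0, max_radius=40):
    """Map touch force to a character-range highlight around the touch point.

    The linear force-to-radius mapping and the 40-character cap are
    illustrative assumptions.
    """
    radius = int(max_radius * min(force / max_force, 1.0))
    start = max(0, center_index - radius)
    return (start, center_index + radius)

print(fuzzy_highlight(100, 0.25))  # → (90, 110): light press, narrow span
print(fuzzy_highlight(100, 1.0))   # → (60, 140): firm press, wide span
```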

Alloy

Clustering with Crowds and Computation

BEST PAPER NOMINATION
HCOMP 2016 INVITED ENCORE TALK
Many crowd clustering approaches have difficulty providing workers with the global context needed to generate meaningful categories. Alloy uses a sample-and-search technique to provide global context, combining the deep semantic knowledge of human computation with the scalability of machine learning models to create rich structures from unorganized documents with high quality and efficiency.

Joseph Chee Chang, Aniket Kittur, Nathan Hahn. CHI 2016 (r=23% N=2435)

- Crowdsourcing, HCI, Information Synthesis, Machine Learning, SIGCHI, Sensemaking
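The sample-and-search idea can be roughly sketched as: seed categories are formed from a small sample (where the crowd contributes semantic judgment), then the remaining items are matched against those seeds at scale. The naive keyword-overlap matcher below is a stand-in for both the crowd step and the machine-learned model.

```python
def overlap(doc, seed):
    # Toy similarity: shared-word count between a document and a seed.
    return len(set(doc.lower().split()) & set(seed.lower().split()))

def sample_and_search(docs, n_seeds=2):
    seeds = docs[:n_seeds]                 # "sample": seeds define categories
    clusters = {s: [s] for s in seeds}
    for doc in docs[n_seeds:]:             # "search": assign the rest to the best seed
        best = max(seeds, key=lambda s: overlap(doc, s))
        clusters[best].append(doc)
    return clusters

docs = ["fix a flat bike tire",
        "bake sourdough bread at home",
        "patch a punctured bike tube",
        "easy home bread baking tips"]
clusters = sample_and_search(docs)
```

Because every later item is compared against the full seed set, each assignment is made with global context rather than a local pairwise view.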

The Knowledge Accelerator

Big Picture Thinking in Small Pieces

BEST PAPER NOMINATION
People often search the web to find solutions to problems beyond factual questions, such as planning road trips, writing a report, or buying a new camera. The Knowledge Accelerator uses crowdworkers to synthesize different information sources on the web in response to a query. We prototyped this system to explore crowdsourcing complex, high-context tasks in a microtask environment.

Nathan Hahn, Joseph Chee Chang, Aniket Kittur. CHI 2016 (r=23% N=2435)

- Crowdsourcing, HCI, Information Foraging, Information Retrieval, Information Synthesis, SIGCHI, Sensemaking

Twitter Code-Switching

Recurrent-Neural-Network for Language Detection on Twitter Code-Switching Corpus

Code-switching is common on social media, used to express solidarity or establish authority. While past work on automatic code-switching detection depends on dictionary lookup or named-entity recognition, our recurrent neural network model, which relies only on raw features, outperformed the top systems in the EMNLP'14 Code-Switching Workshop with a 17% error-rate reduction.

Final project for the Deep Learning course at CMU.

Joseph Chee Chang, Chu-Cheng Lin. arXiv

- Code-Switching, Deep Learning, NLP, Neural Network, arXiv, pre-print
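The shape of such a model can be sketched as a character-level recurrent network that reads a token's raw characters and emits per-language probabilities. The untrained toy Elman RNN below only illustrates that structure; the dimensions, byte-level vocabulary, and two-way ("en"/"zh") softmax are assumptions, not the paper's architecture.

```python
import math
import random

random.seed(0)
H, V = 8, 128  # hidden size, byte-level character vocabulary (illustrative)

# Random, untrained weights: input-to-hidden, hidden-to-hidden, hidden-to-output.
Wxh = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(H)]
Whh = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(H)]
Why = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(2)]

def step(h, char_id):
    # h_t = tanh(Wxh x_t + Whh h_{t-1}), where x_t is a one-hot character.
    return [math.tanh(Wxh[i][char_id] + sum(Whh[i][j] * h[j] for j in range(H)))
            for i in range(H)]

def label_probs(token):
    h = [0.0] * H
    for ch in token:               # consume raw characters only, no dictionary
        h = step(h, ord(ch) % V)
    logits = [sum(Why[k][j] * h[j] for j in range(H)) for k in range(2)]
    z = [math.exp(l) for l in logits]
    return [v / sum(z) for v in z]  # (P(en), P(zh)) — meaningless until trained

probs = label_probs("hello")
```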

TermMine

Learning to Find Translations and Transliterations on the Web

TermMine is an information extraction system that automatically mines translation pairs of terms from the web. We used a small set of seed terms and translations to gather mixed-code text from the web and train a CRF model that identifies translation pairs at run time.

Joseph Chee Chang, Jason S. Chang, Roger Jang. ACL 2012 (r=21% N=369)

- ACL, Information Extraction, Machine Learning, NLP, Translation
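A sequence labeler like a CRF works off features extracted from each candidate. The snippet below is an invented illustration of what features over mixed-code text might look like (proximity, parenthesization, length ratio); it is not TermMine's actual feature set or model.

```python
import re

def candidate_features(sentence, english_term):
    """Emit simple cues for pairing each Chinese span with an English term.

    Feature names and definitions are hypothetical, for illustration only.
    """
    feats = []
    for m in re.finditer(r"[\u4e00-\u9fff]+", sentence):
        span = m.group()
        feats.append({
            "chinese_span": span,
            # Character distance between the span and the English term.
            "distance": abs(m.start() - sentence.find(english_term)),
            # Translations often appear parenthesized right after the term.
            "in_parens": (f"({english_term}" in sentence
                          or f"（{english_term}" in sentence),
            "length_ratio": len(span) / max(len(english_term.split()), 1),
        })
    return feats

sent = "超文本傳輸協定 (Hypertext Transfer Protocol) 是一種網路協定"
feats = candidate_features(sent, "Hypertext Transfer Protocol")
print(feats[0]["chinese_span"], feats[0]["in_parens"])  # → 超文本傳輸協定 True
```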

WikiSense

Supersense Tagging Named Entities on Wikipedia

We introduced a method for classifying named entities into broad semantic categories in WordNet. We extracted rich features from Wikipedia, allowing us to classify named entities with high precision and coverage. The result is a large-scale named-entity semantic database with 1.2 million entries and over 95% accuracy, covering 80% of all named entities found on Wikipedia.

Joseph Chee Chang, Richard Tsai, Jason S. Chang. PACLIC 2009

- Information Extraction, Machine Learning, NLP, PACLIC
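To give a flavor of the task, the toy classifier below maps an entity to a WordNet-style supersense using cues in its Wikipedia category names. The cue lists and rule-based matching are invented for this sketch; the actual system learns from far richer Wikipedia features.

```python
# Hypothetical cue words per supersense, matched against category names.
SUPERSENSE_CUES = {
    "person":   ["births", "deaths", "people"],
    "location": ["cities", "countries", "rivers"],
    "group":    ["companies", "organizations", "universities"],
}

def supersense(categories):
    # Return the first supersense whose cues appear in any category name.
    for sense, cues in SUPERSENSE_CUES.items():
        if any(cue in cat.lower() for cat in categories for cue in cues):
            return sense
    return "other"

print(supersense(["1879 births", "German physicists"]))  # → 'person'
print(supersense(["Rivers of Egypt"]))                   # → 'location'
```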