
Hi, my name is Houjiang Liu ("Whole-jyahng Lyoo"). I am a doctoral candidate (2021 to present) at the School of Information, UT Austin, advised by Professor Matthew Lease in the AI & Human-Centered Computing research group. My research focuses broadly on Human-Centered AI and Design Research, with an emphasis on integrating human-centered principles into AI system design.

Previously, I was a design researcher in the Center for Design at Northeastern University, advised by Professor Miso Kim on designing services grounded in autonomy and Professor Paolo Ciuccarelli on visualizing interactive network graphs. I was also a part-time lecturer teaching Interaction Design in CAMD at Northeastern. Before my academic career, I worked as an interaction designer at JD.com (Beijing) and SeetaTech AI (Beijing).

Contact

liu.ho at utexas dot edu
1616 Guadalupe St., 5.520, Austin, TX, 78701

Lab & Affiliation

Research group:
AI & Human-Centered Computing

Associated research community:
UT NLP | Good Systems Mis/disinformation |
UT CosmicAI Institute

Research (Google Scholar)

My central research focus is designing human-centered AI tools to protect information integrity. By collaborating with diverse stakeholders, such as journalists, fact-checkers, and the public, I help combat mis/disinformation and foster informed public discourse (CSCW25, arxiv24, CSCW24, IPM23). My recent research interests have expanded to AI-accelerated scientific discovery, investigating how LLMs and agents might influence different research activities, including research ideation and code generation.

My earlier work focused on service design (TDJ23, SheJi23, IASDR23, IASDR22-2, 22-1, Cumulus21) and information visualization (IwC23, VISAP22).

LLM-debate
Reducing confirmation bias on controversial issues through LLM multi-persona debates
We use eye-tracking data to assess cognitive engagement in an LLM debate system, which enhances information diversity, reduces confirmation bias, and fosters multi-perspective search interactions.
fact-checking
Human-Centered NLP Fact-checking
While many NLP techniques have been developed, few have been successfully integrated into computational tools that assist human fact-checkers. By collaborating with different stakeholders, we design human-centered NLP tools for fact-checking.

Updates

[02.2025] Our work on exploring multidimensional checkworthiness has been accepted at CSCW25. I will present the work at the iSchools Doctoral Seminar Series in August.

[01.2025] Successfully passed my qualifying exam and entered candidacy. My qualifying paper on Rhetorical AI Explanations will be uploaded to arXiv.

[11.2024] Our work on LLM multi-persona debates to reduce confirmation bias was presented at the iSchool AI Showcase.

[11.2024] Our co-design paper on Human-Centered NLP Fact-Checking Using Matchmaking for AI received an honorable mention (top 4% of submitted papers) at CSCW24.