
I am Houjiang Liu ("Whole ji oy LOU"), a doctoral candidate (2021 to present) at the School of Information, UT Austin, advised by Professor Matthew Lease in the AI & Human-Centered Computing research group. My research focuses broadly on Human-Centered AI and Design Research, with an emphasis on integrating human-centered principles into AI system design.

Previously, I was a design researcher in the Center for Design at Northeastern University, advised by Professor Miso Kim on designing services grounded in autonomy and by Professor Paolo Ciuccarelli on visualizing interactive network graphs. I was also a part-time lecturer teaching Interaction Design in CAMD at Northeastern. Before my academic career, I worked as an interaction designer at JD.com and SeetaTech AI, both in Beijing.

Contact

liu.ho at utexas dot edu
1616 Guadalupe St., 5.520, Austin, TX, 78701

Lab & Affiliation

Research group:
AI & Human-Centered Computing

Associated research community:
UT NLP | Good Systems Mis/Disinformation |
UT CosmicAI Institute

Research highlights (Google Scholar)

One central research focus is designing human-centered AI tools to protect information integrity. By collaborating with diverse stakeholders, such as journalists, fact-checkers, and the public, I help combat mis/disinformation and foster informed public discourse (CSCW25, arXiv24, CSCW24, IPM23). My recent research interests extend to AI-accelerated scientific discovery, investigating how the use of LLMs and agents might influence different research activities, including research ideation and code generation.

My earlier work focused on service design (TDJ23, SheJi23, IASDR23, IASDR22-2, 22-1, Cumulus21) and information visualization (IwC23, VISAP22).


Updates

[03.2025] Honored to receive the UT Graduate Continuing Fellowship.

[02.2025] Our work on Exploring Multidimensional Checkworthiness was conditionally accepted at CSCW 2025 (arXiv). I will present the work at the iSchools Doctoral Seminar Series in August.

[01.2025] Successfully passed my qualifying exam and entered candidacy. My qualifying paper on Rhetorical AI Explanations will be uploaded to arXiv.

[11.2024] Our work on LLM-generated Multipersona Debates to Reduce Confirmation Bias was presented at the iSchool AI Showcase.

[11.2024] Our co-design paper on Human-Centered NLP Fact-Checking Using Matchmaking for AI received an honorable mention (top 4% of submitted papers) at CSCW24 (doi).