Research/CV

Research Interests

I am a PhD candidate at the University of Michigan School of Information, advised by Eric Gilbert and Ceren Budak. I am broadly interested in human-AI interaction at the collective level, with a focus on alignment challenges that emerge at population scale. Concretely, this work has two research directions: one about building new systems and one about evaluating current systems.

  1. System building → future AI systems. I build multi-agent systems that help users engage with varied perspectives via AI. These systems have two components: (1) representing different perspectives through steerable multi-agent systems, and (2) rendering these perspectives as decision-making aids, with a particular interest in interventions that improve democracy and civic decision-making. My Plurals system (CHI 2025 honorable mention) guides LLMs via simulated social ensembles and now powers follow-up RCTs. I am evaluating a second system, "The As-If Machine", for increasing action on long-term risks. Such systems can function as “engines” of social science interventions.

  2. Impact surfacing → current AI systems. I also design experiments to surface “non-obvious” AI impacts: impacts at the collective (rather than individual) and long-run (rather than short-term) level. My creativity paper (Collective Intelligence honorable mention) used a dynamic “many-worlds” design where ideas from participants in one condition fed forward to future participants in that condition, revealing how AI changes the evolution (and not just the levels) of human creativity. I also design experiments on AI systems to measure alignment-relevant capabilities at collective scales. For example, my Deep Value Benchmark (NeurIPS 2025 spotlight) used an experimental design that disentangles whether models generalize deep values or merely shallow preferences.

These two directions are tightly connected to:

  1. (Pluralistic) alignment: If a system can align to diverse viewpoints, it can provision those viewpoints as helpful aids.

  2. Collective intelligence: The motivation for both directions is strongly rooted in CI.

  3. Computational social science: I draw on social science theory for Direction 1 and on social science methods for Direction 2.

Selected Peer-Reviewed Publications

Joshua Ashkinaze, Hua Shen, Sai Avula, Eric Gilbert, and Ceren Budak. "Deep Value Benchmark: Measuring Whether Models Generalize Deep Values or Shallow Preferences." The 39th Annual Conference on Neural Information Processing Systems (NeurIPS) (2025)

Note: Spotlight 🏆 (top 3% of submissions)

We introduce the Deep Value Benchmark (DVB), an evaluation framework that directly tests whether large language models (LLMs) learn fundamental human values or merely surface-level preferences. This distinction is critical for AI alignment: systems that capture deeper values are likely to generalize human intentions robustly, while those that capture only superficial patterns risk misaligned behavior. The DVB uses a novel experimental design with controlled confounding between deep values (e.g., moral principles) and shallow features (e.g., superficial attributes like formality). In the training phase, we expose LLMs to preference data with deliberately correlated deep and shallow features—for instance, where a user consistently prefers (non-maleficence, formal language) over (justice, informal language). The testing phase breaks these correlations, presenting choices between (justice, formal language) and (non-maleficence, informal language). This allows us to measure a model's Deep Value Generalization Rate (DVGR)—the probability of generalizing based on underlying values rather than shallow features. Across 9 models, the average DVGR is just 0.30, meaning all models generalize deep values less than chance. Counterintuitively, larger models exhibit slightly lower DVGR than smaller models. The dataset underwent three separate human validation experiments to ensure reliability. DVB provides an interpretable measure of a core feature of alignment, revealing that current models prioritize shallow preferences over deep values.
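
To make the headline metric concrete, here is a minimal sketch of how a Deep Value Generalization Rate could be computed from test-phase choices. The field names and data format are hypothetical illustrations, not the benchmark's actual pipeline:

```python
# Hypothetical sketch of a Deep Value Generalization Rate (DVGR) computation.
# Each test trial pits the user's training-phase deep value (now paired with the
# non-preferred shallow feature) against the training-phase shallow feature
# (now paired with a different deep value). Field names are illustrative.

def dvgr(test_trials):
    """Share of test trials where the model picks the option carrying the
    user's deep value rather than the user's shallow feature."""
    deep_consistent = sum(
        1 for t in test_trials if t["model_choice"] == t["deep_value_option"]
    )
    return deep_consistent / len(test_trials)

# Toy example: 3 of 10 trials follow the deep value -> DVGR = 0.30,
# matching the below-chance average reported across the 9 models.
trials = [{"model_choice": "A", "deep_value_option": "A"}] * 3 + \
         [{"model_choice": "B", "deep_value_option": "A"}] * 7
print(dvgr(trials))  # 0.3
```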


Joshua Ashkinaze, Ruijia Guan, Laura Kurek, Eytan Adar, Ceren Budak, and Eric Gilbert. “Seeing Like an AI: How LLMs Apply (and Misapply) Wikipedia Neutrality Norms.” The 20th International AAAI Conference on Web and Social Media (ICWSM) (2026, forthcoming)

Note: We were very happy that this research had substantial impact with key stakeholders: I was invited to give a talk at a Wikimedia research showcase, and our paper is cited in Wikipedia strategy around integrating AI into the platform.

Large language models (LLMs) are trained on broad corpora and then used in communities with specialized norms. Is providing LLMs with community rules enough for models to follow these norms? We evaluate LLMs’ capacity to detect (Task 1) and correct (Task 2) biased Wikipedia edits according to Wikipedia’s Neutral Point of View (NPOV) policy. LLMs struggled with bias detection, achieving only 64% accuracy on a balanced dataset. Models exhibited contrasting biases (some under- and others over-predicted bias), suggesting distinct priors about neutrality. LLMs performed better at generation, removing 79% of words removed by Wikipedia editors. However, LLMs made additional changes beyond Wikipedia editors’ simpler neutralizations, resulting in high-recall but low-precision editing. Interestingly, crowdworkers rated AI rewrites as more neutral (70%) and fluent (61%) than Wikipedia-editor rewrites. Qualitative analysis found LLMs sometimes applied NPOV more comprehensively than Wikipedia editors but often made extraneous non-NPOV-related changes (such as grammar). LLMs may apply rules in ways that resonate with the public but diverge from community experts. While potentially effective for generation, LLMs may reduce editor agency and increase moderation workload (e.g., verifying additions). Even when rules are easy to articulate, having LLMs apply them like community members may still be difficult.
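
As a toy illustration of the high-recall, low-precision editing pattern described above, the sketch below compares word removals by an LLM against removals by Wikipedia editors. It uses simple token matching on a made-up sentence; the paper's actual evaluation is more involved:

```python
# Simplified sketch: compare words an LLM removed against words Wikipedia
# editors removed from the same biased sentence. Sentences are made up.
from collections import Counter

def removal_recall_precision(original, editor_rewrite, llm_rewrite):
    orig = Counter(original.lower().split())
    editor_removed = orig - Counter(editor_rewrite.lower().split())
    llm_removed = orig - Counter(llm_rewrite.lower().split())
    overlap = sum((editor_removed & llm_removed).values())
    recall = overlap / max(sum(editor_removed.values()), 1)    # did the LLM catch editor removals?
    precision = overlap / max(sum(llm_removed.values()), 1)    # were LLM removals ones editors made?
    return recall, precision

original = "the brilliant and visionary leader announced a plan"
editor   = "the leader announced a plan"                  # removed: brilliant, and, visionary
llm      = "a leader announced plans for the future"      # removes more and rewrites more
print(removal_recall_precision(original, editor, llm))    # (1.0, 0.75): high recall, lower precision
```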


Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, and Ceren Budak. "Plurals: A System for Guiding LLMs Via Simulated Social Ensembles." Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems

Note 1: Honorable Mention 🏆 (top 5% of submissions)

Note 2: Check out the GitHub library!

Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a “view from nowhere” but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by deliberative democracy, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI.
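
For intuition about the paradigm, here is an illustrative, self-contained schematic of the agents-within-structures pattern. This is not the Plurals library's actual API (see the GitHub library above for real usage); it only sketches how Agents, a Structure, and a Moderator fit together:

```python
# Illustrative schematic of the paradigm (NOT the library's actual API):
# Agents (LLMs with optional personas) deliberate inside a Structure that
# controls information sharing, and a Moderator summarizes the output.
from dataclasses import dataclass

@dataclass
class Agent:
    persona: str
    def respond(self, task: str, context: list[str]) -> str:
        # Placeholder for an LLM call conditioned on persona + prior turns.
        return f"[{self.persona}] view on: {task}"

@dataclass
class ChainStructure:
    agents: list[Agent]
    def deliberate(self, task: str) -> list[str]:
        history: list[str] = []
        for agent in self.agents:          # each agent sees prior agents' turns
            history.append(agent.respond(task, history))
        return history

def moderate(turns: list[str]) -> str:
    # Placeholder moderator; a real moderator would be another LLM call.
    return " | ".join(turns)

structure = ChainStructure([Agent("rural teacher"), Agent("urban nurse"), Agent("retired veteran")])
print(moderate(structure.deliberate("How should the city spend a $1M civic budget?")))
```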


Joshua Ashkinaze, Julia Mendelsohn, Qiwei Li, Ceren Budak, and Eric Gilbert. "How AI ideas affect the creativity, diversity, and evolution of human ideas: Evidence from a large, dynamic experiment." Proceedings of the 2025 ACM Collective Intelligence Conference

Note: Honorable Mention 🏆

Exposure to large language model output is rapidly increasing. How will seeing AI-generated ideas affect human ideas? We conducted an experiment (800+ participants, 40+ countries) where participants viewed creative ideas that were from ChatGPT or prior experimental participants and then brainstormed their own idea. We varied the number of AI-generated examples (none, low, or high exposure) and whether the examples were labeled as 'AI' (disclosure). Our dynamic experiment design, in which ideas from prior participants in an experimental condition are used as stimuli for future participants in the same condition, speaks to the interdependent process of cultural creation: creative ideas are built upon prior ideas. Hence, we capture the compounding effects of having LLMs 'in the culture loop'. We find that high AI exposure (but not low AI exposure) did not affect the creativity of individual ideas but did increase the average amount and rate of change of collective idea diversity. AI made ideas different, not better. There were no main effects of disclosure. We also found that self-reported creative people were less influenced by knowing an idea was from AI and that participants may knowingly adopt AI ideas when the task is difficult. Our findings suggest that introducing AI ideas may increase collective diversity but not individual creativity.
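
A stylized sketch of the dynamic, feed-forward design: ideas produced by earlier participants in a condition become part of the stimulus pool for later participants in that same condition. The seeding, sampling, and participant function below are purely illustrative, not the experiment's actual implementation:

```python
# Stylized sketch of the feed-forward ("culture loop") design: ideas from
# earlier participants in a condition become stimuli for later participants
# in that same condition. Seeding and sampling details are illustrative.
import random

def run_condition(seed_ideas, participants, n_examples, generate_idea):
    pool = list(seed_ideas)                           # e.g., ChatGPT ideas or human seeds
    for participant in participants:
        shown = random.sample(pool, k=min(n_examples, len(pool)))
        new_idea = generate_idea(participant, shown)  # participant brainstorms after exposure
        pool.append(new_idea)                         # feeds forward to future participants
    return pool

# Toy usage: a fake "participant" that riffs on one of the ideas shown.
ideas = run_condition(
    seed_ideas=["a lamp that doubles as a planter"],
    participants=range(5),
    n_examples=1,
    generate_idea=lambda p, shown: f"p{p} riff on ({shown[0]})",
)
print(ideas)
```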


Shubham Atreja, Joshua Ashkinaze, Lingyao Li, Julia Mendelsohn, Libby Hemphill. “What's in a Prompt?: A Large-Scale Experiment to Assess the Impact of Prompt Design on the Compliance and Accuracy of LLM-Generated Text Annotations.” The 19th International AAAI Conference on Web and Social Media (ICWSM) (2025)

Manually annotating data for computational social science tasks can be costly, time-consuming, and emotionally draining. While recent work suggests that LLMs can perform such annotation tasks in zero-shot settings, little is known about how prompt design impacts LLMs' compliance and accuracy. We conduct a large-scale multi-prompt experiment to test how model selection (GPT-4o, GPT-3.5, PaLM2, and Falcon7b) and prompt design features (definition inclusion, output type, explanation, and prompt length) impact the compliance and accuracy of LLM-generated annotations on four highly relevant and diverse CSS tasks (toxicity, sentiment, rumor stance, and news frames). Our results show that LLM compliance and accuracy are prompt-dependent. For instance, prompting for numerical scores instead of labels reduces all LLMs' compliance and accuracy. Concise prompts can significantly reduce prompting costs but also lead to lower accuracy on tasks like toxicity. Furthermore, minor prompt changes like asking for an explanation can cause large changes in the distribution of LLM-generated labels. By assessing the impact of prompt design on the quality and distribution of LLM-generated annotations, this work serves as both a practical guide and a warning for using LLMs in CSS research.
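
For concreteness, here is a minimal sketch of the two outcome measures as they might be computed for one task and prompt variant. The label set and parsing rule are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch of the two outcome measures (illustrative label set and parsing rule):
# compliance = share of LLM outputs that parse into a valid label,
# accuracy   = agreement with gold labels among compliant outputs.
VALID = {"toxic", "not toxic"}

def compliance_and_accuracy(outputs, gold):
    parsed = [(o.strip().lower(), g) for o, g in zip(outputs, gold)]
    compliant = [(o, g) for o, g in parsed if o in VALID]
    compliance = len(compliant) / len(parsed)
    accuracy = (sum(o == g for o, g in compliant) / len(compliant)) if compliant else 0.0
    return compliance, accuracy

print(compliance_and_accuracy(
    outputs=["Toxic", "I cannot determine this.", "not toxic", "toxic"],
    gold=["toxic", "toxic", "not toxic", "not toxic"],
))  # (0.75, ~0.67)
```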


Joshua Ashkinaze, Eric Gilbert, Ceren Budak. “The Dynamics of (Not) Unfollowing Misinformation Spreaders.” Proceedings of the 2024 ACM Web Conference (formerly WWW)

Note: Oral presentation (top 9% of submissions); selected for the special Online Trust and Safety day.

Many studies explore how people "come into" misinformation exposure. But much less is known about how people "come out of" misinformation exposure. Do people organically sever ties to misinformation spreaders? And what predicts doing so? Over six months, we tracked the frequency and predictors of ~900K followers unfollowing ~5K health misinformation spreaders on Twitter. We found that misinformation ties are persistent. Monthly unfollowing rates are just 0.52%. In other words, 99.5% of misinformation ties persist each month. Users are also 31% more likely to unfollow non-misinformation spreaders than they are to unfollow misinformation spreaders. Although generally infrequent, the factors most associated with unfollowing misinformation spreaders are (1) redundancy and (2) ideology. First, users initially following many spreaders, or who follow spreaders that tweet often, are most likely to unfollow later. Second, liberals are more likely to unfollow than conservatives. Overall, we observe a strong persistence of misinformation ties. The fact that users rarely unfollow misinformation spreaders suggests a need for external nudges and the importance of preventing exposure from arising in the first place.
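
A quick back-of-envelope calculation shows how persistent these ties are if the 0.52% monthly unfollow rate were to hold constant (a simplification of the observed dynamics):

```python
# Back-of-envelope: with a 0.52% monthly unfollow rate, what share of
# misinformation ties survive after 1, 6, and 12 months? Assumes a constant
# monthly rate, which simplifies the observed dynamics.
monthly_unfollow = 0.0052
for months in (1, 6, 12):
    surviving = (1 - monthly_unfollow) ** months
    print(f"{months:>2} months: {surviving:.1%} of ties persist")
# ~99.5% after 1 month, ~96.9% after 6 months, ~93.9% after 12 months
```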


Awards

  • NeurIPS Spotlight (top 3% of submissions) — 2025

  • CHI Honorable Mention (top 5% of submissions) — 2025

  • ACM Collective Intelligence Honorable Mention (top 5% of submissions) — 2025

  • IC2S2 Research Impact Nomination (2 papers) — 2024

  • IC2S2 Resource Nomination — 2024

  • IC2S2 Social Impact Nomination — 2024

  • UMSI Preliminary Exam Distinction (top 10% of department defenses) — 2024

  • UMSI Pre-Candidacy Milestone Distinction (top 10% of department defenses) — 2023

  • CHI Outstanding Reviewer (2×) — 2024; 2026

Current Work In Progress

  • The Consideration Machine: Training a multi-agent system to surface disparate impacts of policies

  • The As-If Machine: An interactive multi-agent RAG system to reduce psychological distance to long-term risks

  • Synthetic Social Learning: Measuring how AI shapes human social knowledge through a combination of prevalence estimates, experiments, and parameterized long-run simulations under different AI futures (e.g., varying alignment, pluralism, market concentration)

  • Simulated Focus Groups for Depolarization: Using political datasets to power AI focus groups that reduce polarization on controversial issues

Currently Under Review

Li, Q., Zhang, S., Kasper, A. T., Ashkinaze, J., Eaton, A. A., Schoenebeck, S., & Gilbert, E. (2026). Reporting non-consensual intimate media: An audit study of deepfakes. Manuscript submitted for publication to ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing. Preprint

Goray, C., Li, Q., Ashkinaze, J., Le, V., Gilbert, E., & Schoenebeck, S. (2026). For their eyes only: What makes sharing photos of others inappropriate. Manuscript submitted for publication to ACM SIGCHI Conference on Human Factors in Computing Systems.

Kurek, L., Ashkinaze, J., Budak, C., & Gilbert, E. (2026). Follow nudges without budges: A field experiment on misinformation followers didn't change follow networks. Manuscript submitted for publication to International Conference on Web and Social Media.

Peer-Reviewed Non-Archival Conference Presentations and Posters

Joshua Ashkinaze, Ceren Budak, Eric Gilbert. "Plurals: A System for Simulating Deliberation (With Applications to Political Communication)" 18th Annual Political Networks and Computational Social Science Conference (PolNet-PaCSS), Harvard University and Northeastern University (2024)

Joshua Ashkinaze, Ceren Budak, Eric Gilbert. "PlurChain: Towards a Society of Pluralistic Artificial Intelligence" International Conference of Computational Social Science (IC2S2), University of Pennsylvania (2024)
Note: [Research Impact Nomination🏆; Resource Nomination🏆]

Joshua Ashkinaze, Eric Gilbert, Ceren Budak. "The Dynamics of (Not) Unfollowing Misinformation Spreaders" International Conference of Computational Social Science (IC2S2), University of Pennsylvania (2024)
Note: [Social Impact Nomination 🏆]

Joshua Ashkinaze, Eric Gilbert, Ceren Budak. "Plurals: A system for pluralistic AI via simulated social ensembles" NeurIPS Pluralistic Alignment Workshop, Vancouver, British Columbia (2024)

Invited Talks

"Seeing Like an AI: How LLMs Apply (And Misapply) Wikipedia Neutrality Norms" Wikimedia Foundation, Online (2025)

"Friday Night AI: AI and the Future of Creative Expressions”, Panelist, Ann Arbor District Library, Ann Arbor MI (2024)

“Dynamic modeling of AI's effect on the evolution of creativity” Ebrahim Bagheri Lab, University of Toronto, Online (2025)

Open-Source Software And Engineering

Plurals: Guiding LLMs via Simulated Social Ensembles (LINK) | Python (Jan. 2024 -- Present)

  • Open-source multi-agent library for pluralistic artificial intelligence with extensive test suite, CI/CD workflows, and auto-deployed documentation

Python Implementations of Statistical Methods (LINK) | Python (November 2025 -- Present)

  • The Python community lacks some of the more niche or advanced statistical methods available in R and Stata, so I am providing Python implementations of them

  • Example 1: A package implementing inverse-covariance weighted (ICW) indexing in Python, validated against the gold-standard Stata code (a minimal sketch of the method appears after this list)

  • Example 2: A Python procedure for calculating “sharpened q-values”, a false discovery rate (FDR) correction for multiple comparisons that retains more power than family-wise approaches, also validated against the gold-standard Stata code
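
Below is a minimal sketch of the ICW index construction referenced in Example 1 (illustrative NumPy, not the package's actual code): each outcome is standardized and then weighted by the row sums of the inverse covariance matrix, so that highly correlated outcomes contribute less redundant weight.

```python
# Minimal sketch of an inverse-covariance weighted (ICW) index (illustrative
# NumPy, not the package's code). Outcomes are standardized, then weighted by
# the row sums of the inverse covariance matrix so correlated outcomes count less.
import numpy as np

def icw_index(outcomes: np.ndarray) -> np.ndarray:
    """outcomes: (n_observations, n_outcomes) array; returns one index per row."""
    z = (outcomes - outcomes.mean(axis=0)) / outcomes.std(axis=0, ddof=1)
    sigma_inv = np.linalg.inv(np.cov(z, rowvar=False))
    weights = sigma_inv.sum(axis=1)          # row sums of the inverse covariance
    return z @ weights / weights.sum()       # weighted average of standardized outcomes

rng = np.random.default_rng(0)
y = rng.normal(size=(100, 3))
y[:, 2] = y[:, 1] + rng.normal(scale=0.1, size=100)   # two nearly redundant outcomes
print(icw_index(y)[:5])
```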

Shell Journaling App (LINK) | Bash (Jan. 2024 -- Present)

  • A small script that acts as a daily journal manager and runs entirely from the command line

Mathematical Art (LINK) | Java, Javascript, Python, Processing, GLSL (Jan. 2019 -- Present)

  • Created 20+ mathematical art algorithms and 250+ compositions using probability theory and color theory; displayed at 3 exhibitions and sold 75+ prints

EmotaPal (LINK) | Python, scikit-learn, NLTK, Pandas (July 2019 -- July 2020)

  • Built an open-source package that maps color palettes to emotions using supervised learning and NLP. Note: this was before multimodal LLMs!

Industry Experience

Analytics, Automation & Scalability Team @ Criteo [New York, NY | June 2020 -- Aug 2021]

  • Architected scalable analytics infrastructure by refactoring ad-hoc code into modular components and building custom ETL pipelines, shortening analysis from days to hours for products affecting 10,000+ brands

  • Applied semantic similarity models to ad campaign optimization, improving targeting efficiency by identifying campaigns serving similar consumer needs

  • Created core ad-spend reporting infrastructure for a new product offering that generated $250M in quarterly revenue

Analyst, Ogilvy Consulting [New York, NY | July 2019 -- June 2020]

  • Built NLP pipelines (LDA, NMF, Word2Vec) for social media trend detection across 6 major ad campaigns; insights directly informed creative strategy

  • Built econometric models to quantify network effects for a major phone OS and used graph and network analysis to inform restructuring of a professional organization with 150,000+ members

  • Designed measurement framework and quasi-experimental approach for a $10M+ ad campaign

Grants

Humane Studies Fellowship, Fairfax, VA (November 2025)

  • Awarded $3,000 for a project using a multi-agent system I created (Plurals) to power RCTs related to polarization

Initiative for Democracy and Civic Empowerment, Ann Arbor, MI (October 2025)

  • Awarded $2,433 for a project using a multi-agent system I created (Plurals) to power RCTs related to cooperation

Rackham Graduate Student Candidate Grant, Ann Arbor, MI (October 2023)

  • Awarded $3,000 for surveys and experiments on AI’s effect on society

OpenAI Researcher Access Grant, San Francisco, CA (January 2024)

  • Awarded $5,000 in credits for multiple projects related to AI creativity, AI alignment, and AI pluralism

Undergraduate Research Opportunities (UROP) Mentor Grant, Ann Arbor, MI (January 2024; December 2024)

  • Awarded $2,900 in grants for mentoring undergraduates on projects related to the effect of AI on society

Rackham Graduate Student Pre-Candidate Grant, Ann Arbor, MI (October 2023)

  • Awarded $1,500 for testing AI interventions

Civic Health Project Grant on LLMs and Partisan Animosity [Finalist], Palo Alto, CA (October 2023)

  • Finalist for Civic Health Project RFP on innovative uses of LLMs to reduce polarization

Phi Beta Kappa, Oberlin, OH (May 2019)

  • Inducted into honor society

Thomas Kutzen '76 Prize in Economics, Oberlin, OH (May 2019)

  • Awarded to college senior for outstanding economics research

Hanson Prize in Economics, Oberlin, OH (May 2018)

  • Awarded to college junior for excellence in economics

Jere Bruner Research Grant, Oberlin, OH (May 2016--May 2017)

  • Received grant to study how macroeconomic conditions correlate with dream content

Mentoring

  • Created a research assistant syllabus covering core skills, core papers, and communication norms; implemented an anonymous mechanism for RAs to give feedback on me as a supervisor

  • Mentored and supervised 13 undergraduate research assistants across UROP, independent studies, MIDAS, and UM School of Information Research Experience Development Program (REDP)

  • Wrote 4 graduate school recommendations and 2 industry letters of recommendation; wrote a successful undergraduate fellowship letter for former RA Olive Ren, who secured a full-tuition fellowship to the University of Michigan via the Center for the Education of Women

Academic Service

  • Co-organizing the University of Michigan’s “Computational Social Science” seminar

  • On a graduate student task force making university-wide recommendations on the appropriate use of AI in research

  • Served as an elected member of the Doctoral Executive Committee, the group that interfaces between PhD students and administration in my program

  • Reviewed 30+ papers for venues including CHI, ACL, NeurIPS, CogSci, ICWSM, and IC2S2; won reviewer awards for CHI 2024 🏆, CHI 2026 🏆, and ICWSM 2025 🏆