aditijh [at] stanford.edu

I am a postdoc at Stanford Statistics and the Wu Tsai Neurosciences Institute, where I work with Scott Linderman. My work lies at the intersection of probabilistic machine learning and statistical neuroscience—developing interpretable machine learning approaches to understand behavior and neural dynamics. I obtained my PhD at Princeton, advised by Jonathan Pillow. A long time ago, I was an undergrad at the Indian Institute of Technology in Delhi, where I worked with Sumeet Agarwal. I have also spent two wonderful summers working in industry—one at Meta Reality Labs, working on wrist-based neural interfaces, and another at MosaicML, working on large language models.

Aside from research: I like to overdose on literary/historical fiction and occasionally put on my creative writing hat. I also like running, painting, and listening to Bollywood music.

Education


Ph.D. in Electrical and Computer Engineering.

2019-2024. Princeton University [Thesis]


B.Tech in Electrical Engineering.

2015-2019. IIT Delhi

Research

Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems

Aditi Jha, Diksha Gupta, Carlos Brody, Jonathan Pillow
Advances in Neural Information Processing Systems (NeurIPS) 37 (2024)

Paper / Code

LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms

Aditi Jha, Sam Havens, Jeremy Dohmann, Alex Trott, Jacob Portes
Workshop on Instruction Tuning and Instruction Following, NeurIPS 2023.

Paper / Website / Blogpost


Bayesian Active Learning for Discrete Latent Variable Models

Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow. Neural Computation, Volume 36, Issue 3, March 2024.

Paper / Talk at COSYNE Workshops 2022



Extracting Low-Dimensional Psychological Representations from Convolutional Neural Networks

Aditi Jha, Joshua C. Peterson, Thomas L. Griffiths. Cognitive Science, Volume 47, Issue 1, January 2023.

Paper


Dynamic Inverse Reinforcement Learning for Characterizing Animal Behavior

Zoe C. Ashwood*, Aditi Jha*, Jonathan W. Pillow. Advances in Neural Information Processing Systems (NeurIPS) 35 (2022) [Oral Presentation].

Paper / Code / COSYNE Talk



Factor-Analytic Inverse Regression for High-Dimensional, Small-Sample Dimensionality Reduction

Aditi Jha*, Michael J. Morais*, Jonathan W. Pillow. Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139 (2021).

Paper / Code / Summary Video at COSYNE 2021 / Invited Talk at MLSE 2020



Extracting Low-Dimensional Psychological Representations from Convolutional Neural Networks

Aditi Jha, Joshua C. Peterson, Thomas L. Griffiths. Proceedings of the 42nd Annual Conference of the Cognitive Science Society (CogSci), 2020.

Paper / Summary Video at CogSci 2020



Do Deep Neural Networks Model Nonlinear Compositionality in the Neural Representation of Human-Object Interactions?

Aditi Jha, Sumeet Agarwal. Proceedings of the 3rd Conference on Cognitive Computational Neuroscience (CCN), Berlin, Germany, 2019.

Paper

Misc

Pillow Lab’s blog / PNI’s CompNeuro Journal Club / My current read

Lastly, if you love this show too, we can be best friends.