Research at chopralab

The prevailing approach to AI in healthcare primarily relies on retrospectively collected datasets—such as medical images and electronic health records (EHRs)—to train machine learning models that mimic clinical inference. While this method seems reasonable, it is fundamentally suboptimal, as evidenced by the limited adoption of AI in real-world clinical practice.

One major limitation stems from the nature of healthcare datasets: they are designed for human interpretability, meaning they are structured to ensure clinicians can detect abnormalities. However, machine learning models are not bound by this constraint—why limit them to the same data types that humans rely on? Moreover, human perception itself is inherently limited. What hidden patterns or signals exist within these datasets that we are currently overlooking? With these challenges in mind, my research lab focuses on two key themes:

Learning the Data Acquisition Process

Can we determine the most informative datasets—whether human-interpretable or not—that provide the richest signals for AI-driven insights?

Uncovering the Unknown Unknowns

Can we detect hidden signals within existing datasets to reveal previously unnoticed patterns and observations?

By shifting the paradigm from mimicking to discovery, we aim to unlock AI's full potential in transforming healthcare.

News and Updates

  • 2025/02: Our paper on prostate cancer risk stratification was accepted to the Journal of Magnetic Resonance Imaging.
  • 2025/01: Our blog post on multi-modal learning was accepted at the International Conference on Learning Representations (ICLR), 2025.
  • 2024/12: Our paper on a principled approach to multi-modal learning was accepted at NeurIPS.
  • 2024/08: We were the recipients of the Early Stage Research Award from the NYU Discovery Research Fund for Human Health.
  • 2024/07: Our paper on adaptive sampling of k-space in MR imaging was accepted at ICML.
  • 2024/01: Congratulations to Revant Teotia for being selected as the NYU-Meta Fellow.
  • 2023/09: NIH-NSF proposal (with Narges Razavian as PI) on using self-supervised learning for early detection of dementia was funded.
  • 2023/08: Our paper on the robustness of normalization schemes in MR imaging was accepted at MIDL.

Selected Research Projects

End-to-End Magnetic Resonance Triaging

We use AI to enable MR-based diagnostics (a highly accurate but expensive and inaccessible technology) for early detection of disease at the population level, thereby democratizing access to this advanced diagnostic modality. We accomplish this by learning disease signatures directly in the raw frequency space (a.k.a. k-space), without the need to reconstruct high-fidelity images.
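The core idea of classifying in k-space rather than image space can be illustrated with a toy example. Everything below is illustrative and hypothetical, not the lab's actual pipeline: synthetic complex-valued "scans", hand-crafted radial-band features, and a nearest-centroid classifier stand in for a learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

def kspace_features(kspace):
    """Pool log-magnitude of complex k-space into radial frequency-band
    features, skipping image reconstruction entirely."""
    h, w = kspace.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)      # distance from k-space center
    logmag = np.log1p(np.abs(kspace))         # compress dynamic range
    bands = np.linspace(0, r.max(), 9)        # 8 radial bands
    return np.array([logmag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(bands[:-1], bands[1:])])

def toy_scan(diseased):
    """Synthetic 'scan': the positive class carries extra high-frequency
    energy, mimicking a disease signature visible only in outer k-space."""
    k = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
    if diseased:
        yy, xx = np.mgrid[:64, :64]
        k[np.hypot(yy - 32, xx - 32) > 20] *= 3.0
    return k

X = np.stack([kspace_features(toy_scan(i % 2 == 1)) for i in range(40)])
y = np.arange(40) % 2
# Nearest-centroid classifier on k-space features (no images anywhere).
centroids = np.stack([X[y == c].mean(0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
print("accuracy:", (pred == y).mean())
```

The point of the sketch: because the decision is made on frequency-domain statistics, no inverse Fourier transform (and hence no fully sampled, reconstruction-grade acquisition) is ever required.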

MR Scanners with Memory

Envision MRI scanners equipped with “patient-specific memory,” capable of recalling and leveraging multiple sources of prior data (e.g., prior imaging, EHR) from the same individual—rather than relying solely on the current scan. Freed from the requirement to collect measurements near the Nyquist rate, these scanners can dramatically reduce scan times without sacrificing image quality, even on lower-cost machines, enabling accessible imaging.

A Principled Approach to Multi-Modal Learning

Traditional approaches to multi-modal learning are sub-optimal because they predominantly capture, in isolation, either the inter-modality dependencies or the intra-modality dependencies. Viewing the problem through the lens of generative models, we treat the target as a source of the multiple modalities and the interactions between them, and propose the I2M2 framework, which naturally captures both inter- and intra-modality dependencies, leading to more accurate predictions.
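The fusion idea can be sketched in a few lines of numpy. This is a toy stand-in, not the actual I2M2 implementation: least-squares "logits" replace trained per-branch models, and the synthetic label depends on each modality alone plus their interaction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-modality data: modality A alone, modality B alone, and their
# interaction each carry part of the label signal.
n = 200
a = rng.standard_normal((n, 3))
b = rng.standard_normal((n, 3))
y = (a[:, 0] + b[:, 0] + a[:, 1] * b[:, 1] > 0).astype(int)

def fit_logits(x, y):
    """Least-squares scores on +/-1 targets (a stand-in for a trained model)."""
    x1 = np.hstack([x, np.ones((len(x), 1))])
    w, *_ = np.linalg.lstsq(x1, 2.0 * y - 1.0, rcond=None)
    return x1 @ w

# Intra-modality branches: each modality predicts the target on its own.
logit_a = fit_logits(a, y)
logit_b = fit_logits(b, y)
# Inter-modality branch: a model over the interaction of the two modalities.
logit_ab = fit_logits(np.hstack([a, b, a * b]), y)

# I2M2-style fusion: sum the branch logits so both intra- and
# inter-modality dependencies contribute to the final prediction.
pred = (logit_a + logit_b + logit_ab > 0).astype(int)
print("fused accuracy:", (pred == y).mean())
```

An inter-modality-only model would miss the signal each modality carries on its own, and the intra-modality branches alone would miss the interaction term; summing the branch scores covers both.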

RL to Learn What Data to Acquire in MR Scanning

An MR scanner captures a vast array of high-quality k-space measurements to generate detailed cross-sectional images. However, this process is inherently slow and expensive, as the amount of data collected remains constant regardless of patient characteristics or the suspected disease. We propose a method that learns an adaptive policy to selectively acquire k-space measurements, optimizing for disease detection without the need for image reconstruction.
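The acquisition loop behind this idea can be sketched as follows. The `energy_policy` below is a hypothetical greedy heuristic standing in for the learned RL policy; the interface (observe the current masked k-space, score the candidate lines, acquire the best one under a budget) is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)

def acquire(kspace, budget, policy):
    """Sequentially choose which k-space lines to measure under a budget;
    `policy` maps the currently observed (masked) k-space to line scores."""
    h, w = kspace.shape
    observed = np.zeros((h, w), dtype=complex)
    mask = np.zeros(h, dtype=bool)
    for _ in range(budget):
        scores = policy(observed, mask)
        scores[mask] = -np.inf                 # never re-acquire a line
        line = int(np.argmax(scores))
        mask[line] = True
        observed[line] = kspace[line]          # "measure" that line
    return observed, mask

def energy_policy(observed, mask):
    """Toy stand-in for a learned policy: start at the k-space center
    (low frequencies), then prefer lines next to high-energy observed lines."""
    h = len(mask)
    if not mask.any():
        return -np.abs(np.arange(h) - h // 2).astype(float)
    energy = np.abs(observed).sum(axis=1)
    # score of a line = total energy of its two observed neighbours
    return np.convolve(energy, [1.0, 0.0, 1.0], mode="same")

k = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
obs, mask = acquire(k, budget=8, policy=energy_policy)
print("lines acquired:", mask.sum(), "of", len(mask))  # → lines acquired: 8 of 32
```

In the actual method the policy is trained with reinforcement learning and the reward comes from downstream disease-detection performance rather than any image-reconstruction objective, so the scanner measures only what the diagnostic task needs.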

AI-Driven Precision Education for Radiology Residents

AI in radiology is transforming more than medical image analysis; it is reshaping the entire radiological ecosystem, from workflow optimization to resident training. We are building an AI-driven platform designed to deliver a personalized educational experience for radiology residents. Powered by large language models (LLMs), the adaptive system tailors learning to each resident's training history, analyzing their past case exposure, strengths, and areas for improvement, with the goal of changing how radiologists learn, grow, and excel in their field.

Lab Members

Ph.D.

  • Raghav Singhal (2021-; with Rajesh Ranganath)
  • Divyam Madaan (2021-; with Kyunghyun Cho)
  • Umang Sharma (2022-)
  • Arda Atalik (2022-; with Daniel Sodickson)
  • Revant Teotia (2023-)
  • Hao Zhang (2024-; with Rajesh Ranganath)
  • Muhang Tian (2024-; with Rajesh Ranganath)

Undergrad, MS, and Research Engineers

  • Antonio Verdone Sanchez
  • Tarun Dutt
  • Luoyao Chen
  • Divyansh Jha
  • Steven Zhang
  • Ceil Wang
  • Anisha Bhatnagar
  • Varshan Muhunthan
  • Arjun

Program Manager

  • Harold Stern

Teaching

  • Fundamentals of Machine Learning: Fall 2025
  • Fundamentals of Machine Learning: Fall 2023
  • Machine Learning for Healthcare: Fall 2022
  • Fundamentals of Machine Learning: Fall 2021

Professional Activities

Funding

Research within chopralab is funded by the National Science Foundation (NSF), the National Institutes of Health (NIH), the NYU Discovery Research Fund, and the Global AI Frontier Lab (NYU/South Korea).

Bio

Sumit Chopra is an Associate Professor at the Courant Institute of Mathematical Sciences, NYU, and the Department of Radiology at the NYU Grossman School of Medicine, where he serves as the Director of Machine Learning Research. His work focuses on advancing AI, with an emphasis on deep learning models and their transformative applications in healthcare.

Before joining NYU, he co-founded Imagen Technologies, a well-funded startup revolutionizing healthcare through AI, where he served as Vice President of AI. Prior to that, he was a research scientist at Facebook (now Meta) AI Research (FAIR), contributing to advancements in natural language understanding. He earned his Ph.D. in Computer Science from New York University under the mentorship of Prof. Yann LeCun. His dissertation introduced a pioneering neural network model for relational regression, which became the conceptual foundation for a startup focused on modeling residential real estate prices. Following his Ph.D., he joined AT&T Labs–Research as a senior scientist in the machine learning and statistics department, where he developed innovative deep learning models for speech recognition, natural language processing, and computer vision. There, his research also extended into areas such as recommender systems, computational advertising, and ranking algorithms.

He is best known for his early pioneering work on learning representations with contrastive methods, which became the origins of self-supervised learning (SSL); for proposing the Memory Networks architecture, which formed part of the conceptual foundation of attention-based models; and for proposing energy-based models for relational regression. With a career spanning academia, industry, and entrepreneurship, Sumit Chopra is dedicated to pushing the boundaries of AI and driving its real-world impact.

Selected and Recent Publications