Home


I am a Machine Learning/AI researcher in the Machine Intelligence group at Lawrence Livermore National Laboratory. My research interests broadly span machine learning, statistics, and signal processing, with applications in computer vision, healthcare, scientific machine learning, and network analysis. I collaborate with researchers and practitioners to apply machine learning and AI technologies to challenging real-world problems.

And yes… you can call me Jay.

A quick peek into my research in machine learning

[Reliable ML] [Healthcare AI] [UQ in DL] [Scientific ML]

[Data Skeptic podcast] [NVIDIA blog]

Updates

  • New Paper Alert: Three new papers accepted to the Distribution Shifts workshop at NeurIPS 2021 – designing multi-domain ensembles, the role of domain relabeling in generalization, and unsupervised attribute alignment.
  • New Paper Alert: Paper on building scientifically meaningful generative models for inertial confinement fusion accepted to the ML4Physical Sciences workshop at NeurIPS 2021.
  • New Paper Alert: Deep inversion-based counterfactual reasoning with computer vision models accepted at NeurIPS 2021.
  • News feature on our recent work on building effective deep models with limited labeled data.
  • Best Paper Award at the SPIE Medical Imaging 2021 conference for our paper “Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification”!
  • Invited talk at the AIMI group at Stanford – presented work on using prediction calibration to improve clinical diagnosis models.
  • News article on our recent work on Learn-by-Calibrating for building high-fidelity scientific emulators.
  • Featured on the Nature Communications Editors’ Highlights!
  • New paper accepted for presentation at ICASSP 2021 – “Using deep image priors to generate counterfactual explanations” [preprint]
  • Three papers accepted to AAAI 2021 — robust explanations via loss estimation [preprint], uncertainty-matching graph neural networks [preprint], and attribute-guided adversarial augmentation [preprint].

More…