About

I am a computer scientist and a research lead in the Machine Intelligence group at Lawrence Livermore National Laboratory. My experience spans developing AI/ML solutions for computer vision, healthcare, graph analysis, and multimodal learning.

If you are looking for postdoc opportunities or staff scientist roles in ML/AI (unsupervised learning, safe ML, knowledge-aware AI, design optimization and scientific machine learning), reach out to me!

Updates

  • #ICML2023: Are large-scale generative models (e.g., StyleGAN-XL) useful for data-constrained domain adaptation? We introduce SiSTA (Single-Shot Target Augmentations), a new data augmentation method that works with even a single target-domain example. [Paper]
  • #MIDL2023: Designing outlier-exposure-free OOD detectors for modality and semantic shifts in medical imaging has been a long-standing challenge. Our recently accepted paper (oral presentation) finds that the key is to jointly synthesize latent-space inliers and pixel-space outliers during model training. [Paper]
  • #CVPR2023: New paper on auditing GAN models accepted for publication. xGA enables attribute-level comparison of two or more StyleGANs in an unsupervised fashion. [Paper][Code]
  • Code for the Delta-UQ uncertainty estimator released! If you want to quickly integrate epistemic UQ into your deep model, check it out.
  • #ICASSP2023: Two papers accepted — (1) a closer look at scoring function design for generalization gap predictors [Paper]; (2) generative augmentations for single-shot domain adaptation [Paper][Code]
  • #ICLR2023: Our paper on adapting pre-trained representations to ensure generalization and safety on downstream tasks accepted as a Spotlight. [Paper]
  • New article studying the utility of simple deep subspace alignment for practical domain adaptation published in IEEE Access. TL;DR: PCA is great. [Paper]
  • #WACV2023: Improving generalization of meta-learners via contrastively-trained knowledge graph bridges — SoTA performance on few-shot dataset generalization. [Paper][Code]
  • #WACV2023: Diversity or adversity? Which is more critical for domain generalization? Our new paper answers this question. Hint: adversarially trained, diverse augmentations are the trick. [Paper][Code]
  • New Paper Alert: Solving severely ill-posed problems in CT imaging is valuable in a variety of applications, including medical imaging and security. We introduce DOLCE, a conditional diffusion model that achieves state-of-the-art performance in limited-angle CT reconstruction. [Paper]
  • Presenting two papers at #NeurIPS2022: (i) single-model uncertainty estimation using stochastic data centering (Spotlight), and (ii) analyzing data-centric properties for contrastive learning on graphs.
  • #ACML2022: Interested in an effective OOD detector for your vision model? Try AMP, which uses neural network anchoring-based uncertainty estimates for prediction calibration. [Paper][Code]
  • #ACML2022: Fully test-time adaptation meets domain alignment! Check out CATTAn for adapting vision models at test time under real-world distribution shifts. [Paper][Code]
  • New Paper Alert: Zero-shot multi-domain generalization is challenging! We make the important finding that the choice of domain grouping actually matters. Our new algorithm, DReaME, automatically discovers domain labels from multi-source data for optimal generalization. [Paper][Code]
  • Received the LLNL Director’s award for best publications of 2021 (Designing counterfactual generators from NeurIPS 2021 and Self-training for chest X-ray classification from SPIE 2021)
  • Nominated to attend the #YoungLeadersProgram organized as part of the STS annual forum (Kyoto, Japan)!