#CVPR2023: New paper on auditing GAN models accepted for publication. xGA enables attribute-level comparison of two or more StyleGANs in an unsupervised fashion. [Paper][Code]
Code for the Delta-UQ uncertainty estimator released! If you want to quickly integrate epistemic UQ into your deep model, check this out.
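For anyone curious what anchoring-based epistemic UQ looks like in practice, here is a minimal sketch of how an anchored model is typically queried at inference time. The `model` interface (consuming `[anchor, input - anchor]`), the anchor set, and the number of anchors are illustrative assumptions, not the released API:

```python
import torch

def anchored_predict(model, x, anchors, n_anchors=10):
    """Sketch of anchoring-based epistemic UQ: the model sees a random
    anchor c alongside the residual x - c; averaging predictions over
    anchors gives the estimate, and their spread the uncertainty."""
    preds = []
    for _ in range(n_anchors):
        # draw random anchors from a reference set (e.g., training samples)
        c = anchors[torch.randint(len(anchors), (x.shape[0],))]
        preds.append(model(torch.cat([c, x - c], dim=1)))
    preds = torch.stack(preds)             # (n_anchors, batch, ...)
    return preds.mean(0), preds.std(0)     # prediction, epistemic uncertainty
```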
#ICLR2023: Our paper on adapting pre-trained representations to ensure generalization and safety on downstream tasks accepted as a Spotlight. [Paper]
#ICASSP2023: Two papers accepted for presentation — (1) a closer look at scoring function design for generalization gap predictors [Paper]; (2) generative augmentations for single-shot domain adaptation [Paper][Code]
New article studying the utility of simple deep subspace alignment in practical domain adaptation published in IEEE Access. TL;DR: PCA is great [Paper]
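As background, the classical PCA-based subspace alignment recipe that the article revisits on top of deep features; this is a minimal sketch where the subspace dimension `d` and the centering are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=50):
    """Classical PCA-based subspace alignment: rotate the source PCA
    basis to align with the target PCA basis, then project."""
    # top-d principal directions (columns) of source and target features
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T
    M = Ps.T @ Pt                          # alignment matrix
    Zs = (Xs - Xs.mean(0)) @ Ps @ M        # aligned source features
    Zt = (Xt - Xt.mean(0)) @ Pt            # target features in target subspace
    return Zs, Zt
```

A classifier trained on `Zs` can then be applied directly to `Zt`.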
#WACV2023: Improving generalization of meta-learners via contrastively trained knowledge graph bridges — SoTA performance on few-shot dataset generalization [Paper][Code]
#WACV2023: Diversity or Adversity? What is more critical for domain generalization? Our new paper answers this question. Hint: adversarially trained, diverse augmentations are the trick [Paper][Code]
New Paper Alert: Solving severely ill-posed problems in CT imaging is valuable in a variety of applications, including medical imaging and security. We introduce DOLCE, a conditional diffusion model that achieves state-of-the-art performance in limited-angle CT reconstruction. [Paper]
#ACML2022: Interested in an effective OOD detector for your vision model? Try AMP, which uses neural network anchoring-based uncertainty estimates for prediction calibration [Paper][Code]
#ACML2022: Fully test-time adaptation meets domain alignment! Check out CATTAn for adapting vision models at test-time under real-world distribution shifts [Paper][Code]
New Paper Alert: Zero-shot multi-domain generalization is challenging! We make the important finding that the choice of domain grouping actually matters. Our new algorithm DReaME automatically discovers domain labels from multi-source data for optimal generalization. [Paper][Code]
Received the LLNL Director’s Award for best publications during the year 2021 (designing counterfactual generators from NeurIPS 2021 and self-training for chest X-ray classification from SPIE 2021).
Nominated to attend the #YoungLeadersProgram organized as part of the STS annual forum (Kyoto, Japan)!
Presented a lecture at Microsoft Research on knowledge-aware deep learning [Slides]
Invited talk at Raytheon on OOD generalization and model safety [Slides]
Code for our new OOD detection approach (AMP) released! Achieves state-of-the-art performance on vision benchmarks. GitHub repo
New Paper Alert: In our new ICML 2022 paper, we introduce SPHInX, a new GAN inversion technique that effectively inverts out-of-distribution (OOD) images onto StyleGAN latent spaces.
Three new papers accepted to the Distribution Shifts workshop at NeurIPS 2021 – designing multi-domain ensembles, the role of domain relabeling in generalization, and unsupervised attribute alignment.
Paper on building scientifically meaningful generative models for inertial confinement fusion to be presented at the ML4Physical Sciences workshop (NeurIPS 2021).
Deep inversion-based counterfactual reasoning with computer vision models accepted at NeurIPS 2021.
News feature on our recent work on building effective deep models with limited labeled data.
New paper accepted for presentation at ICASSP 2021 – “Using deep image priors to generate counterfactual explanations” [preprint]
Three papers accepted to AAAI 2021 — robust explanations via loss estimation [preprint], uncertainty-matching Graph Neural Networks [preprint], and attribute-guided adversarial augmentation [preprint].
Presenting our work on connecting sample design and generalization in ML at NeurIPS 2020. [paper]
Work on using prediction calibration as a training objective for building regression models in scientific problems published in Nature Communications. [paper]
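To make the idea concrete, here is a toy sketch of what "calibration as a training objective" can look like for regression models that emit prediction intervals; this generic coverage-plus-sharpness surrogate illustrates the idea and is not the paper's actual loss:

```python
import torch
import torch.nn.functional as F

def interval_calibration_loss(y, lo, hi, lam=0.05):
    """Toy calibration-driven objective: penalize targets that fall
    outside the predicted interval [lo, hi] (a differentiable surrogate
    for coverage) while keeping the intervals sharp."""
    miss = F.relu(lo - y) + F.relu(y - hi)   # distance by which y escapes the interval
    width = (hi - lo).clamp(min=0).mean()    # sharpness penalty
    return miss.mean() + lam * width
```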
Need to build chest X-ray-based diagnostic models with limited data? Check out our recently accepted SPIE paper! [preprint]
DDxNet, a general deep architecture for time-varying clinical data (ECG/EEG/EHR), published in Nature Scientific Reports. [paper] [code]
Received the LLNL Director’s 2020 Early Career Recognition award.
Our paper on “A Statistical Mechanics Framework for Task-Agnostic Sample Design” accepted at NeurIPS 2020.
Invited Talk at the NNSA Next-Gen AI for Proliferation Detection Meeting on AI explainability [slides]
DOE Proposal on the integration of knowledge graphs into predictive modeling awarded.
Our paper on using prediction calibration to obtain reliable models in healthcare AI accepted for presentation at the UNSURE workshop, MICCAI 2020.
Presented our paper on Task-agnostic sample design in the Workshop on Real World Experiment Design and Active Learning at ICML 2020.
News article about our recent PNAS paper on using deep learning for surrogate modeling in scientific applications.
Feature article on AI-based analysis of clinical diagnosis models and COVID-19 infections.
New podcast alert: I joined the DataSkeptic podcast to talk about some recent work on interpretability in healthcare AI [Apple] [Spotify].
New preprint of our LLNL work on designing accurate emulators for scientific processes, currently under review at Nature Communications.
Best paper award at the KDD Applied Data Science for Healthcare workshop.
New Paper Alert: Do deep learning models in the clinical domain generalize under complex domain and task shifts? Our work explores disease landscapes to characterize this behavior.
New preprint on unsupervised domain adaptation based on classical subspace analysis. Significant performance improvements over SOTA.