News

  • #ICLR2023: Our paper on adapting pre-trained representations to ensure generalization and safety on downstream tasks accepted as a Spotlight. [Paper][Code]
  • #WACV2023: Improving generalization of meta learners via contrastively-trained knowledge graph bridges — SoTA performance on few-shot dataset generalization. [Paper][Code]
  • #WACV2023: Diversity or Adversity? What is more critical for domain generalization? Our new paper answers this question. Hint: adversarially trained diverse augmentations are the trick. [Paper][Code]
  • New Paper Alert: Solving severely ill-posed problems in CT imaging is valuable in a variety of applications, including medical imaging and security. We introduce DOLCE, a conditional diffusion model that achieves state-of-the-art performance in limited-angle CT reconstruction. [Paper]
  • Presenting two papers at #NeurIPS2022: (i) Single-model uncertainty estimation using stochastic data centering (Spotlight) and (ii) Analyzing Data-Centric Properties for Contrastive Learning on Graphs
  • #ACML2022: Interested in an effective OOD detector for your vision model? Try AMP, which uses neural network anchoring-based uncertainty estimates for prediction calibration. [Paper][Code]
  • #ACML2022: Fully test-time adaptation meets domain alignment! Check out CATTAn for adapting vision models at test time under real-world distribution shifts. [Paper][Code]
  • New Paper Alert: Zero-shot multi-domain generalization is challenging! We find that the choice of domain grouping actually matters. Our new algorithm, DReaME, automatically discovers domain labels from multi-source data for optimal generalization. [Paper][Code]
  • Received the LLNL Director’s award for best publications of 2021 (Designing counterfactual generators from NeurIPS 2021 and Self-training for chest X-ray classification from SPIE 2021)
  • Nominated to attend the #YoungLeadersProgram organized as part of the STS annual forum (Kyoto, Japan)!
  • Presented a lecture at Microsoft Research on knowledge-aware deep learning [Slides]
  • Invited talk at Raytheon on OOD generalization and model safety [Slides]
  • Code for our new OOD detection approach (AMP) released! Achieves state-of-the-art performance on vision benchmarks. [GitHub repo]
  • Look out for our papers at ICML2022 Workshops next week – Principles of Distribution Shifts, Updatable ML, Interpretable ML in Healthcare, and Healthcare AI and COVID-19
  • New Paper Alert: In our new ICML2022 paper, we introduce SPHInX, a new GAN inversion technique that effectively inverts OOD images onto StyleGAN latent spaces.
  • Presented two papers at ICASSP 2022 – Attribute discovery in StyleGANs and Predicting generalization gap using anchoring.
  • Check out our journal article on transfer learning that was recently published in Machine Learning Science and Technology. #AI4Science
  • New research in Interpretable ML for Healthcare — TraCE appears in Nature Scientific Reports. [Paper]
  • Interested in training machine-learning-ready HPC ensembles in the sciences? Check out our new article in Future Generation Computer Systems.
  • Invited talk at AME digital culture series on “Grounding Deep Models via Uncertainty Characterization” [Video][Slides].
  • Co-organized a mini-symposium with Sandeep Madireddy and Prasanna Balaprakash on “Robust and Efficient Probabilistic Deep Learning for Scientific data and Beyond” at the SIAM UQ 2022 Conference.
  • Three new papers accepted to the Distribution Shifts workshop at NeurIPS 2021 – designing multi-domain ensembles, the role of domain relabeling in generalization, and unsupervised attribute alignment.
  • Paper on building scientifically meaningful generative models for inertial confinement fusion to be presented at the ML4Physical Sciences workshop (NeurIPS 2021).
  • Deep inversion-based counterfactual reasoning with computer vision models accepted at NeurIPS 2021.
  • News feature on our recent work on building effective deep models with limited labeled data.
  • Best Paper Award at the SPIE 2021 Medical Imaging Conference for our paper on Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification!
  • Invited talk at the AIMI group in Stanford – Presented work on using prediction calibration to improve clinical diagnosis models.
  • News article on our recent Learn-by-Calibrating work for building high-fidelity scientific emulators.
  • Our paper on calibration-driven learning featured on the Nature Communications Editor’s Highlights!!
  • New paper accepted for presentation at ICASSP 2021 – “Using deep image priors to generate counterfactual explanations” [preprint]
  • Three papers accepted to AAAI 2021 — robust explanations via loss estimation [preprint], uncertainty matching Graph Neural Networks [preprint] and attribute-guided adversarial augmentation [preprint].
  • Presenting our work on connecting sample design and generalization in ML at NeurIPS 2020. [paper]
  • Work on using prediction calibration as a training objective for building regression models in scientific problems published in Nature Communications. [paper]
  • Need to build chest X-ray based diagnostic models with limited data? Check out our recently accepted SPIE paper!! [preprint]
  • DDxNet, a general deep architecture for time-varying clinical data (ECG/EEG/EHR), published in Nature Scientific Reports. [paper] [code]
  • Received the LLNL Director’s 2020 Early Career Recognition award.
  • Our paper on “A Statistical Mechanics Framework for Task-Agnostic Sample Design” accepted at NeurIPS 2020.
  • Invited Talk at the NNSA Next-Gen AI for Proliferation Detection Meeting on AI explainability [slides]
  • DOE Proposal on the integration of knowledge graphs into predictive modeling awarded.
  • Our paper on using prediction calibration to obtain reliable models in healthcare AI accepted for presentation at the UNSURE workshop, MICCAI 2020.
  • Paper on Function-preserving linear projections (FPP) for high dimensional scientific data accepted for publication in the journal Machine Learning: Science and Technology. [code]
  • Our paper on unsupervised audio source separation using GAN priors accepted for presentation at Interspeech 2020. [code]
  • Presented our paper on Task-agnostic sample design in the Workshop on Real World Experiment Design and Active Learning at ICML 2020.
  • News article about our recent PNAS paper on using deep learning for surrogate modeling in scientific applications.
  • Feature article on AI-based analysis of clinical diagnosis models and COVID-19 infections.
  • New podcast alert: DataSkeptic podcast where I talked about some recent work on interpretability in healthcare AI [Apple] [Spotify].
  • New preprint of our work at LLNL on designing accurate emulators for scientific processes, currently under consideration at Nature Communications.
  • A new approach to build reliable and interpretable deep models for Healthcare AI. In collaboration with IBM Research and Arizona State University.
  • Paper on building accurate neural network surrogates for inertial confinement fusion published in Proceedings of the National Academy of Sciences. [code]
  • Coverage-based designs for hyper-parameter optimization in neural nets accepted for publication at IEEE Transactions on Neural Networks and Learning Systems.
  • Our paper on MimicGAN, an easy and effective way to robustly project onto the image manifold, accepted to the IJCV Special Issue on GANs.
  • Listen to our presentation on Learn-by-Calibrating at IEEE ICASSP 2020.
  • New paper alert: How does prediction calibration affect the performance of lottery tickets? Our new paper answers this question!
  • A regularized GAT model for robust semi-supervised learning under node injection attacks accepted as an oral talk at ICASSP 2020.
  • Our paper Learn-by-Calibrating, which explores the use of a prior-free calibration objective for training regression models, accepted at ICASSP 2020.
  • At AAAI, presenting our work on building calibrated deep models for regression, time-series forecasting, and object localization [poster].
  • Presented our paper on weakly supervised instance labeling in histopathology images at ICMLA 2019 [Slides].
  • Paper on uncertainty quantification with deep neural networks accepted at AAAI 2020.
  • Released the technical report for my recently completed DOE-funded project on High-Dimensional Spectral Sampling.
  • I will be presenting our paper on “Improving Deep Embeddings for Inferencing with Multi-layered Graphs” in the Deep Graph Learning: Methodologies and Applications Workshop at IEEE Big Data 2019.
  • Received the DOE-ASCR Artificial Intelligence research grant to develop uncertainty quantification methods for deep learning.
  • Four papers accepted for presentation at the NeurIPS 2019 workshops: ML for Physical Sciences, Deep Inverse, and Graph Representation Learning.
  • New paper alert: Learning interpretable linear embeddings using function preserving projections.
  • Co-organized the 2nd Applied Math Visioning workshop in New Mexico.
  • Paper on weakly supervised instance labeling in histopathological images accepted for oral presentation at ICMLA 2019.
  • Best paper award at the KDD Applied Data Science for Healthcare workshop.
  • New paper alert: Do deep learning models in the clinical domain generalize under complex domain and task shifts? Our work explores disease landscapes to characterize this behavior.
  • New preprint on unsupervised domain adaptation based on classical subspace analysis. Significant performance improvements over SOTA.
  • State-of-the-art results obtained in audio source separation via multi-scale feature learning using dilated dense U-Nets.
  • New results on hyper-parameter optimization: coverage-based sample designs identify near-optimal configurations.
  • Updated version of the multi-layered graph attention models paper submitted.
  • Co-organized the 1st DOE Applied Math Visioning Workshop and participated in exciting conversations about the future of ML/Data Science.
  • NVIDIA blog features results from our recent work on generative models.
  • Highlights from my research in the Computation Newsletter.