I am a Machine Learning/AI researcher in the Machine Intelligence group at Lawrence Livermore National Laboratory. My research interests broadly span machine learning, statistics, and signal processing, with applications in computer vision, healthcare, scientific machine learning, and network analysis. I collaborate with several researchers and practitioners to apply machine learning and AI technologies to challenging real-world problems.
Check out this site for more details on my research, updates, preprints, and code releases.
And yes… you can call me Jay.
- New paper alert: Need to build chest X-ray-based diagnostic models with limited data? Check out our recently accepted SPIE paper! [preprint]
- New paper alert: DDxNet, a general deep architecture for time-varying clinical data (ECG/EEG/EHR), published in Nature Scientific Reports. [paper] [code]
- Received the LLNL Director’s 2020 Early Career Recognition award.
- New paper alert: Our paper on “A Statistical Mechanics Framework for Task-Agnostic Sample Design” accepted at NeurIPS 2020.
- Invited talk on AI explainability at the Next-Gen AI for Proliferation Detection Meeting. [slides]
- Woohoo! Our LDRD proposal on integrating knowledge graphs into predictive modeling was awarded.
- Our paper on using prediction calibration to obtain reliable models in healthcare AI accepted for presentation at the UNSURE workshop, MICCAI 2020.
- Paper on function-preserving projections (FPP) for high-dimensional scientific data accepted for publication in the journal Machine Learning: Science and Technology. [code]
- Our paper on unsupervised audio source separation using GAN priors accepted for presentation at Interspeech 2020. [code]
- Presented our paper on Task-agnostic sample design in the Workshop on Real World Experiment Design and Active Learning at ICML 2020.
- News article about our recent PNAS paper on using deep learning for surrogate modeling in scientific applications.
- Feature article on AI-based analysis of clinical diagnosis models and COVID-19 infections.
- New podcast alert: I joined the DataSkeptic podcast to discuss some of our recent work on interpretability in healthcare AI [Apple] [Spotify].
- A preprint of our work at LLNL on designing accurate emulators for scientific processes, currently under consideration at Nature Communications.
- A new approach to building reliable and interpretable deep models for healthcare AI, in collaboration with IBM Research and Arizona State University.
- Paper on building accurate neural network surrogates for inertial confinement fusion published in Proceedings of the National Academy of Sciences. [code]
- Coverage-based designs for hyperparameter optimization in neural networks accepted for publication in IEEE Transactions on Neural Networks and Learning Systems.
- Our paper on MimicGAN, a simple and effective way to robustly project onto the image manifold, accepted to the IJCV Special Issue on GANs.