My research in general machine learning spans different paradigms in representation learning, with a recent focus on deep generative models, attention models, supervised metric learning, and domain adaptation. Driven by the increasing need for models that cater to real-world constraints, I build solutions for multimodal learning, small-data problems, and robust machine learning, e.g., designing robust loss functions and adversarial defenses. I also explore ideas for bridging different machine learning formalisms, such as kernel methods and manifold learning, with deep representation learning.
Applications: Inverse imaging problems (e.g., reconstruction, denoising, deblurring, source separation), object recognition, segmentation & detection, CT image analysis, and inferring scene semantics.
AI for Healthcare
The hypothesis that computational models can be reliable enough to be adopted in prognosis and patient care is revolutionizing healthcare. As part of multiple collaborative efforts, I develop novel ML algorithms for predictive modeling with EHR data, ECG/EEG analysis, diagnosis from lung CT scans and brain MRI images, and traumatic brain injury modeling using biological, genetic, and CT markers. I am also very interested in studying how well clinical models generalize under the challenging domain shifts encountered in practice.
Machine Learning for Science
I work with researchers in the physical sciences to deploy machine learning techniques effectively in their analysis pipelines. In this context, our team develops customized ML solutions that exploit prior knowledge about the underlying physical process to produce scientifically accurate predictive models. Applications of focus include inertial confinement fusion, geological analysis, and reservoir modeling.
Sampling and Sequential Optimization
The notion of sampling appears in a variety of contexts, ranging from surrogate modeling and mini-batch generation for effective neural network training to hyper-parameter search and reinforcement learning. Broadly, the goal of sampling is to identify one or more effective solutions from a large search space using the smallest amount of resources. My research focuses on designing effective space-filling sample sets, in Euclidean spaces and on embedded manifolds, for applications in machine learning.
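To make the space-filling idea concrete, here is a minimal sketch using greedy farthest-point sampling, one simple way to pick well-spread samples from a candidate set (an illustrative example only; the function and its parameters are hypothetical, not the specific designs from my research):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k indices that spread out over the candidate set.

    At each step, select the candidate farthest from all points chosen
    so far, which yields an approximately space-filling design.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]
    # distance of every candidate to its nearest chosen point
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

# select 10 well-spread points from a uniform cloud in [0, 1]^2
cloud = np.random.default_rng(1).random((500, 2))
idx = farthest_point_sampling(cloud, 10)
```

The greedy rule maximizes the minimum pairwise distance at each step, which is why such designs serve as cheap surrogates for more elaborate low-discrepancy constructions.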
Machine Learning on Graphs
The prevalence of relational data in several real-world applications, e.g., social network analysis, recommendation systems, and neurological modeling, has led to crucial advances in machine learning techniques for graph-structured data. I work on core graph inference problems, including semi-supervised label propagation (using graph neural networks), brain network analysis for cognition and autism prediction, and community detection with multi-layer graphs. I am also interested in building novel graph signal analysis tools.
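As a small illustration of semi-supervised label propagation (a classical diffusion-style baseline, sketched here for intuition rather than the graph neural network formulations referenced above), seed labels can be spread over a normalized adjacency matrix:

```python
import numpy as np

def propagate_labels(adj, labels, alpha=0.9, iters=50):
    """Semi-supervised label propagation on a graph.

    adj: (n, n) symmetric adjacency matrix; labels: length-n integer
    array with -1 marking unlabeled nodes. Diffuses one-hot seed labels
    over the symmetrically normalized adjacency while pulling seed
    nodes back toward their known labels.
    """
    n = len(labels)
    classes = sorted(set(labels[labels >= 0]))
    # symmetric normalization: D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    S = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[labels == c, j] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # diffuse, then re-anchor seeds
    return np.array(classes)[F.argmax(axis=1)]

# two triangles joined by a bridge edge; one seed label per triangle
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
y = np.array([0, -1, -1, -1, -1, 1])
pred = propagate_labels(A, y)
```

Each triangle inherits the label of its seed node, since diffusion favors the nearer seed; graph neural networks generalize this idea by learning the propagation operator from node features.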
Interpretable Machine Learning
Techniques for understanding the functioning of complex machine learning models are becoming increasingly popular, not only to improve the validation process, but also to extract new insights about the data via exploratory analysis. I recently began working in this important research direction and have developed techniques for quantifying uncertainties in neural networks, as well as a graph signal analysis-based framework for model introspection. I continue to build simpler constructs that can elucidate the functions of complex models, e.g., Treeview.
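One widely used way to attach uncertainties to neural network predictions is Monte Carlo dropout, sketched below on a tiny randomly weighted network (a generic illustration, not the specific uncertainty quantification techniques from my work; all names and weights here are hypothetical):

```python
import numpy as np

def mc_dropout_predict(x, weights, p=0.5, n_samples=200, seed=0):
    """Estimate predictive mean and uncertainty via Monte Carlo dropout.

    Runs many stochastic forward passes of a one-hidden-layer network
    with a fresh dropout mask each time; the spread of the outputs
    serves as a simple uncertainty estimate.
    """
    rng = np.random.default_rng(seed)
    W1, W2 = weights
    outs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)       # ReLU hidden layer
        mask = rng.random(h.shape) > p    # random keep/drop mask
        h = h * mask / (1.0 - p)          # inverted-dropout rescaling
        outs.append(h @ W2)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))
mean, std = mc_dropout_predict(rng.normal(size=(1, 4)), (W1, W2))
```

The per-input standard deviation gives a cheap, model-agnostic uncertainty signal, which is useful when deciding whether a prediction should be trusted or deferred for inspection.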
Data Analysis and Visualization
With the ever-increasing emphasis on data-centric analysis, studying high-dimensional data has become ubiquitous. I collaborate with researchers in information visualization and computational topology to design novel analysis tools for exploring structure in high-dimensional spaces. We have successfully deployed these tools to gain insights into complex datasets that would otherwise have remained elusive. Target applications include scientific data analysis, understanding natural language models, and studying the performance characteristics of High Performance Computing (HPC) systems.