Projects

The integration of Artificial Intelligence (AI) and Machine Learning (ML) into drug discovery addresses a crucial need for faster, more efficient discovery processes. AI/ML enhances the precision and speed of identifying potential therapeutic compounds: algorithms can analyze vast heterogeneous datasets, integrate directly with experiments, predict molecular behaviors, and identify novel drug candidates with a higher likelihood of success in clinical trials, reducing both the time and cost associated with traditional drug discovery methods.
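
As a minimal, hypothetical sketch of the kind of predictive model this involves, the snippet below ranks candidate compounds by predicted activity from molecular fingerprints. The SMILES strings, labels, and the choice of RDKit Morgan fingerprints with a random-forest classifier are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy dataset: SMILES strings with binary activity labels.
smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
labels = [0, 0, 1, 0, 1]

def fingerprint(smi, n_bits=1024):
    """Morgan (circular) fingerprint as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smi)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

X = np.array([fingerprint(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Score an unseen compound; predicted probabilities can be used to rank candidates.
print(clf.predict_proba(fingerprint("CCOc1ccccc1").reshape(1, -1)))
```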

Representation learning on graphs enables downstream tasks such as node/graph classification and link prediction in multiple application scenarios. We are developing novel methods for analyzing graph representations and their relationship both to the Graph Neural Networks (GNNs) that generate them and to the downstream tasks. We are particularly interested in manifold analysis for quantifying the similarity between two representation spaces and in robust methods for comparing and aligning representations.
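
As one concrete example of quantifying the similarity between two representation spaces, the sketch below computes linear Centered Kernel Alignment (CKA) between two embedding matrices. The random embeddings and function name are illustrative assumptions, and CKA is only one of several similarity measures one might use.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (n_samples x dim):
    a scale-invariant measure of how similar two embedding spaces are."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2          # cross-similarity term
    self_x = np.linalg.norm(X.T @ X, "fro")               # self-similarity of X
    self_y = np.linalg.norm(Y.T @ Y, "fro")               # self-similarity of Y
    return cross / (self_x * self_y)

# Example: embeddings of the same 100 nodes produced by two different GNNs.
rng = np.random.default_rng(0)
Z1 = rng.normal(size=(100, 64))
Z2 = Z1 @ rng.normal(size=(64, 32))                       # a linear transform of Z1
print(linear_cka(Z1, Z2), linear_cka(Z1, rng.normal(size=(100, 32))))
```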

GNNs lack transparency, which is a significant barrier to their adoption in critical domains such as healthcare, finance, and law enforcement. In addition, the ability to explain predictions is critical for understanding potential flaws in a model and for generating insights that aid further refinement. To address this, we are developing factual and counterfactual methods for explaining outcomes in graph data, along with conditions under which such explanations are easier to obtain.
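
To illustrate what a counterfactual explanation looks like on graph data, here is a minimal brute-force sketch that searches for a small set of incident edges whose removal flips a node's prediction. The degree-based `predict` function is a toy stand-in for a trained GNN and is purely an assumption for illustration.

```python
import itertools
import numpy as np

def predict(adj, node):
    """Toy stand-in for a trained GNN: label a node 1 if its degree >= 2."""
    return int(adj[node].sum() >= 2)

def counterfactual_edges(adj, node, max_size=2):
    """Brute-force search for a minimal set of incident edges whose removal
    flips the prediction for `node`; that set is a counterfactual explanation."""
    original = predict(adj, node)
    edges = [(node, j) for j in np.flatnonzero(adj[node])]
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(edges, k):
            perturbed = adj.copy()
            for i, j in subset:
                perturbed[i, j] = perturbed[j, i] = 0
            if predict(perturbed, node) != original:
                return subset
    return None

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
print(counterfactual_edges(adj, node=0))
```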

Advances in imaging technologies have enabled fine-resolution in-vivo examination of the human brain. We are developing deep learning, statistical, and optimization frameworks to aid the interpretation of brain function and cognition, with methods for discovering the relationship between input images and brain responses and for characterizing how functional and structural brain networks interrelate.
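
A common building block for relating input images to brain responses is a voxel-wise encoding model. The sketch below fits one with ridge regression on synthetic data; the feature dimensions, voxel count, and noise level are placeholder assumptions rather than properties of any real dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic data: image features (e.g., from a vision model) and per-voxel responses.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))                      # 200 stimuli x 512 features
weights = rng.normal(size=(512, 50))                        # hidden feature-to-voxel mapping
responses = features @ weights + 0.1 * rng.normal(size=(200, 50))   # 50 voxels

X_tr, X_te, y_tr, y_te = train_test_split(features, responses, random_state=0)

# Voxel-wise linear encoding model: predict brain responses from image features.
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```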

Adversarial attacks pose a significant threat in various domains, including social networks, recommendation systems, and cybersecurity. Such attacks change a model's predictions through slight perturbations of the input; because they threaten the security of machine learning-based systems, robust models have become an active area of research, including in the context of GNNs.
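
As a minimal sketch of the kind of perturbation involved, the snippet below applies a fast-gradient-sign-style attack to a toy linear classifier. The untrained model, random input, and step size are placeholders; attacks on GNNs typically perturb graph structure or node features in an analogous gradient-guided way.

```python
import torch
import torch.nn as nn

# Hypothetical untrained linear classifier standing in for a deployed model.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16, requires_grad=True)   # input features
y = torch.tensor([1])                        # true label

# FGSM-style perturbation: step in the direction that increases the loss.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = x + 0.5 * x.grad.sign()

# With a large enough step the predicted class for x_adv can differ from that for x.
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```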

Past Research Projects

For more information on the lab's previous research projects, please see our publications page.