Graph Neural Networks (GNNs) lack transparency, which is a significant barrier to their adoption in critical domains such as healthcare, finance, and law enforcement. Moreover, the ability to explain a model's predictions is essential for uncovering its flaws and for generating insights that guide further refinement. To address this, we are developing factual and counterfactual methods for explaining outcomes in graph data, along with conditions under which such explanations are easier to obtain.
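
To make the counterfactual notion concrete, the sketch below illustrates one common formulation for node classification: the smallest set of edges whose removal flips a GNN's prediction for a target node. This is a minimal illustration under stated assumptions, not our method; the one-layer GCN, the brute-force subset search, and all names (`gcn_predict`, `counterfactual_edges`) are hypothetical choices made for brevity.

```python
# Illustrative sketch only: a counterfactual explanation as the smallest set
# of edges whose deletion changes a GNN's prediction for one node.
# The one-layer GCN and random weights are assumptions, not a real model.
import itertools
import numpy as np

def gcn_predict(A, X, W, node):
    """One-layer GCN forward pass; returns the predicted class of `node`."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    logits = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return int(np.argmax(logits[node]))

def counterfactual_edges(A, X, W, node, max_size=2):
    """Smallest edge subset (up to max_size) whose deletion flips the prediction."""
    original = gcn_predict(A, X, W, node)
    n = A.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(edges, k):
            A_cf = A.copy()
            for i, j in subset:
                A_cf[i, j] = A_cf[j, i] = 0   # delete the edge in both directions
            if gcn_predict(A_cf, X, W, node) != original:
                return subset                 # minimal counterfactual found
    return None                               # no flip within the edge budget

# Toy usage: 4 nodes with random features and weights; explain node 0.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X, W = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
print(counterfactual_edges(A, X, W, node=0))
```

Enumerating edge subsets is exponential in the graph size, so the brute-force search above is only meant to make the objective explicit; practical counterfactual explainers typically optimize a differentiable edge mask instead.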