Adversarial attacks pose a significant threat in many domains, including social networks, recommendation systems, and cybersecurity. By slightly perturbing a model's input, such attacks can change its predictions; because this undermines the security of machine-learning-based systems, robust models have become an active area of research, including in the context of Graph Neural Networks (GNNs). We examine the robustness of graph ML methods and architectures by investigating the relationship between graph representations and perturbations, in particular the symmetry and invariance of ML methods under such perturbations.
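To make the threat model concrete, the following minimal sketch (not part of the original text) shows how a structure perturbation on a graph, here flipping a single edge, can alter a GNN's node predictions. The toy two-layer GCN, the 4-node path graph, and the flipped edge are all illustrative assumptions; the model is untrained, so the example only demonstrates sensitivity to the perturbation, whereas a real attack would search for the worst-case edge flip.

```python
import torch

def normalized_adjacency(adj):
    """Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class TinyGCN(torch.nn.Module):
    """Two-layer graph convolutional network on a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        a_hat = normalized_adjacency(adj)
        h = torch.relu(a_hat @ self.w1(x))   # first propagation step
        return a_hat @ self.w2(h)            # per-node class logits

torch.manual_seed(0)
x = torch.randn(4, 8)                        # 4 nodes, 8 features each
adj = torch.tensor([[0., 1., 0., 0.],        # hypothetical path graph 0-1-2-3
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])

model = TinyGCN(in_dim=8, hid_dim=16, n_classes=2)
clean_pred = model(x, adj).argmax(dim=1)

# Adversarial structure perturbation: flip one edge (add edge 0-3).
adj_pert = adj.clone()
adj_pert[0, 3] = adj_pert[3, 0] = 1.0
pert_pred = model(x, adj_pert).argmax(dim=1)

print("clean:    ", clean_pred.tolist())
print("perturbed:", pert_pred.tolist())
# Nodes whose predicted class changed under the one-edge perturbation:
print("changed nodes:", (clean_pred != pert_pred).nonzero().flatten().tolist())
```

The same setup also frames the invariance question mentioned above: relabeling the nodes (permuting rows and columns of the adjacency and the feature matrix consistently) should leave a GNN's predictions unchanged, whereas an adversarial edge flip should not, and the gap between the two is one way to quantify robustness.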