Geometric and Physical Quantities Improve E(3) Equivariant Message Passing
Abstract
Including covariant information, such as position, force, velocity or spin, is important in many tasks in computational physics and chemistry. We introduce Steerable E(3) Equivariant Graph Neural Networks (SEGNNs) that generalise equivariant graph networks, such that node and edge attributes are not restricted to invariant scalars, but can contain covariant information, such as vectors or tensors. This model, composed of steerable MLPs, is able to incorporate geometric and physical information in both the message and update functions. Through the definition of steerable node attributes, the MLPs provide a new class of activation functions for general use with steerable feature fields. We discuss our work and related work through the lens of equivariant nonlinear convolutions, which further allows us to pinpoint the successful components of SEGNNs: nonlinear message aggregation improves upon classic linear (steerable) point convolutions; steerable messages improve upon recent equivariant graph networks that send invariant messages. We demonstrate the effectiveness of our method on several tasks in computational physics and chemistry and provide extensive ablation studies.
1 Introduction
The success of Convolutional Neural Networks (CNNs) (lecun1998gradient; lecun2015deep; schmidhuber2015deep; krizhevsky2012imagenet) is a key factor in the rise of deep learning, attributed to their capability of exploiting translation symmetries and hereby introducing a strong inductive bias. Recent work has shown that designing CNNs to exploit additional symmetries via group convolutions has increased their performance even further (cohen2016group; cohen2017steerable; worrall2017harmonic; cohen2018spherical; kondor2018generalization; weiler2018; bekkers2018roto; bekkers2019b). Graph neural networks (GNNs) and CNNs are closely related to each other via their aggregation of local information. More precisely, CNNs can be formulated as message passing layers (gilmer2017neural) based on a sum aggregation of messages that are obtained by relative position-dependent linear transformations of neighbouring node features. The power of message passing layers is, however, that node features are transformed and propagated in a highly nonlinear manner. Equivariant GNNs have been proposed before as PointConv-type (wu2019pointconv; schutt2017schnet) implementations of either steerable (thomas2018tensor; anderson2019covariant; fuchs2020se) or regular group convolutions (finzi2020generalizing). The most important components in these methods are the convolution layers. Although powerful, such layers transform the graphs only pseudo-linearly (methods such as SE(3)-Transformers (fuchs2020se) and Cormorant (anderson2019covariant) include an input-dependent attention component that augments the convolutions), and nonlinearity is only obtained via point-wise activations.
In this paper, we propose nonlinear E(3) equivariant message passing layers using the same principles that underlie steerable group convolutions, and view them as nonlinear group convolutions. Central to our method is the use of steerable vectors and their equivariant transformations to represent and process node features; we present the underlying mathematics of both in Sec. 2 and illustrate them in Fig. 1 on a molecular graph. As a consequence, information at nodes and edges can now be rotationally invariant (scalar) or covariant (vector, tensor). In steerable message passing frameworks, the Clebsch-Gordan (CG) tensor product is used to steer the update and message functions by geometric information such as relative orientation (pose). Through a notion of steerable node attributes, we provide a new class of equivariant activation functions for general use with steerable feature fields (weiler2018; thomas2018tensor). Node attributes can include information such as node velocity, force, or atomic spin. Currently, especially in molecular modelling, most datasets are built up merely of atomic number and position information. In this paper, we demonstrate the potential of enriching node attributes with more geometric and physical quantities.
We demonstrate the effectiveness of SEGNNs by setting a new state of the art on the n-body toy dataset, in which our method leverages the abundance of geometric and physical quantities available. We further test our model on the molecular datasets QM9 and OC20. Although here only (relative) positional information is available as a geometric quantity, our SEGNNs achieve state of the art on the IS2RE task of OC20, and competitive performance on QM9. For all experiments we provide extensive ablation studies. The main contributions of this paper are:

Generalisation of equivariant graph networks such that node and edge attributes are not restricted to be scalars.

A new class of equivariant activation functions for steerable vector fields, based on the introduction of steerable node attributes and steerable multilayer perceptrons, which permit the injection of geometric and physical quantities into node updates.

A unifying view on various equivariant graph neural networks through the definition of nonlinear convolutions.

Extensive experimental ablation studies that show the benefit of steerable over non-steerable (invariant) message passing, and the benefit of nonlinear over linear convolutions.
2 Generalised E(3) equivariant steerable message passing
Message passing networks.
Consider a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with nodes $v_i \in \mathcal{V}$ and edges $e_{ij} \in \mathcal{E}$, with feature vectors $\mathbf{f}_i$ attached to each node, and edge attributes $a_{ij}$ attached to each edge. Graph neural networks (GNNs) (scarselli2008; kipf2017semisupervised; defferrard2016; battaglia2018) are designed to learn from graph-structured data and are by construction permutation equivariant with respect to the input. A specific type of GNNs are message passing networks (gilmer2017neural), where a layer updates node features via the following steps:
compute message $\mathbf{m}_{ij}$ from node $v_j$ to node $v_i$: $\quad \mathbf{m}_{ij} = \phi_m\left(\mathbf{f}_i, \mathbf{f}_j, a_{ij}\right)$  (1)
aggregate messages and update node features $\mathbf{f}_i$: $\quad \mathbf{f}_i' = \phi_f\big(\mathbf{f}_i, \sum_{j \in \mathcal{N}(i)} \mathbf{m}_{ij}\big)$  (2)
where $\mathcal{N}(i)$ represents the set of neighbours of node $v_i$, and $\phi_m$ and $\phi_f$ are commonly parameterised by multilayer perceptrons (MLPs).
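To make Eqs. (1)-(2) concrete, here is a minimal NumPy sketch of one message passing step. The functions `phi_m` and `phi_f` are stand-ins for learned MLPs, and the toy graph, features, and attributes are illustrative values, not from the paper.

```python
import numpy as np

def message_passing_step(f, edges, edge_attr, phi_m, phi_f):
    """One message passing layer in the style of Eqs. (1)-(2):
    compute messages, sum-aggregate them, update node features."""
    n_nodes = f.shape[0]
    # probe the message dimension once so we can allocate the aggregate
    i0, j0 = edges[0]
    msg_dim = phi_m(f[i0], f[j0], edge_attr[(i0, j0)]).shape[0]
    agg = np.zeros((n_nodes, msg_dim))
    for (i, j) in edges:                                  # message flows j -> i
        agg[i] += phi_m(f[i], f[j], edge_attr[(i, j)])    # Eq. (1) + sum aggregation
    return np.stack([phi_f(f[i], agg[i]) for i in range(n_nodes)])  # Eq. (2)

# toy example: phi_m scales the sender's features by the edge attribute,
# phi_f adds the aggregated message to the node's own features
phi_m = lambda fi, fj, a: a * fj
phi_f = lambda fi, m: fi + m
f = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 0)]
edge_attr = {e: 2.0 for e in edges}
f_new = message_passing_step(f, edges, edge_attr, phi_m, phi_f)
```

Note that the sum aggregation makes the update independent of the order in which edges are listed, which is the source of the permutation equivariance mentioned above.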
Equivariant message passing networks. Our objective is to build graph neural networks that are robust to rotations, reflections, translations and permutations. This is a desirable property, since some prediction tasks, such as molecular energy prediction, require E(3) invariance, whereas others, like force prediction, require equivariance. From a technical point of view, equivariance of a function $\phi$ to certain transformations means that for any transformation parameter $g$ and all inputs $x$ we have $\phi(T_g[x]) = S_g[\phi(x)]$, where $T_g$ and $S_g$ denote transformations on the input and output domain of $\phi$, respectively. Equivariant operators applied to atomic graphs allow us to preserve the geometric structure of the system as well as to enrich it with increasingly abstract directional information. We build E(3) equivariant GNNs by constraining the functions $\phi_m$ and $\phi_f$ of Eqs. (1)-(2) to be equivariant, which in turn guarantees equivariance of the entire neural network. The following paragraphs introduce the core components behind our method; full mathematical details and background can be found in App. A.
Steerable features. In this work, we achieve equivariant graph neural networks by working with steerable feature vectors, which we denote with a tilde, e.g. $\tilde{\mathbf{h}}$. Steerability of a vector means that for a certain transformation group with transformation parameters $g$, the vector transforms via matrix-vector multiplication with a group representation, i.e., $\tilde{\mathbf{h}} \mapsto \mathbf{D}(g)\,\tilde{\mathbf{h}}$. For example, a Euclidean vector in $\mathbb{R}^3$ is steerable for rotations by multiplying it with a rotation matrix $\mathbf{R}$, thus $\mathbf{x} \mapsto \mathbf{R}\,\mathbf{x}$. We are, however, not restricted to working only with 3D vectors; via the construction of steerable vector spaces, we can generalise the notion of 3D rotations to arbitrarily large vectors.
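As a minimal illustration of the simplest steerable case, a 3D vector is steered by plain matrix-vector multiplication with a rotation matrix:

```python
import numpy as np

# a Euclidean vector in R^3 is steerable for rotations:
# the "steering" is just matrix-vector multiplication with R in SO(3)
def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

x = np.array([1.0, 2.0, 3.0])
x_steered = rot_z(np.pi / 2) @ x   # rotate 90 degrees about the z-axis
```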
Central to our approach is the use of Wigner-D matrices $\mathbf{D}^{(l)}(g)$ (in order to be O(3), not just SO(3), equivariant, we include reflections; see App. A for more detail). These are $(2l+1) \times (2l+1)$ dimensional matrix representations that act on $(2l+1)$ dimensional vector spaces. The vector spaces that are transformed by Wigner-D matrices of degree $l$ will be referred to as type-$l$ steerable vector spaces and denoted with $V_l$.
We note that we can combine two independent steerable vector spaces $V_{l_1}$ and $V_{l_2}$ of types $l_1$ and $l_2$ by the direct sum, denoted $V_{l_1} \oplus V_{l_2}$. Such a combined vector space then transforms by the direct sum of Wigner-D matrices, i.e., via $\mathbf{D}^{(l_1)}(g) \oplus \mathbf{D}^{(l_2)}(g)$, which is a block-diagonal matrix with the Wigner-D matrices along the diagonal. We denote the direct sum of type-$l$ vector spaces up to degree $L$ by $V_0 \oplus V_1 \oplus \dots \oplus V_L$, and $n$ copies of the same vector space with a prefactor $n$. Regular MLPs are based on transformations between $n$-dimensional type-0 vector spaces, i.e., $n V_0$, and are a special case of our steerable MLPs, which act on steerable vector spaces of arbitrary type.
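The direct-sum construction can be sketched in a few lines of NumPy. We assume here, purely for illustration, that type-0 features transform by the trivial 1x1 representation and that type-1 features, written in Cartesian order (libraries such as e3nn use a permuted spherical harmonic ordering), transform by the rotation matrix itself:

```python
import numpy as np

def direct_sum(*mats):
    """Block-diagonal direct sum of square matrices, D1 (+) D2 (+) ..."""
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    r = 0
    for m in mats:
        k = m.shape[0]
        out[r:r + k, r:r + k] = m
        r += k
    return out

# type-0 features (scalars) transform trivially by the 1x1 matrix [1];
# type-1 features (Cartesian basis) transform by the rotation matrix itself
D0 = np.eye(1)
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
D = direct_sum(D0, R)                # acts on V_0 (+) V_1, a 4-dimensional space

h = np.array([5.0, 1.0, 2.0, 3.0])   # scalar part [5], vector part [1, 2, 3]
h_steered = D @ h                    # scalar unchanged, vector part rotated
```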
Steerable MLPs. Like regular MLPs, steerable MLPs are constructed by interleaving linear mappings (matrix-vector multiplications) with nonlinearities. Now, however, the linear maps transform between steerable vector spaces from layer $n$ to layer $n+1$ via $\tilde{\mathbf{h}}^{(n+1)} = \mathbf{W}_{\tilde{\mathbf{a}}}\, \tilde{\mathbf{h}}^{(n)}$. Steerable MLPs thus have the same functional form as regular MLPs, although, in our case, the linear transformation matrices $\mathbf{W}_{\tilde{\mathbf{a}}}$, defined below, are conditioned on geometric information (e.g. relative atom positions) encoded in the steerable vector $\tilde{\mathbf{a}}$. Both $\tilde{\mathbf{h}}$ and $\tilde{\mathbf{a}}$ are steerable vectors. In this work, we use the vector $\tilde{\mathbf{a}}$ to encode geometric and structural information and to steer the information flow of $\tilde{\mathbf{h}}$ through the network. In order to guarantee that $\mathbf{W}_{\tilde{\mathbf{a}}}$ maps between steerable vector spaces, the matrices are defined via the Clebsch-Gordan tensor product. By construction, the resulting MLPs are equivariant for every transformation parameter $g$ via
$$\mathrm{MLP}_{\mathbf{D}(g)\tilde{\mathbf{a}}}\big(\mathbf{D}(g)\,\tilde{\mathbf{h}}\big) = \mathbf{D}'(g)\,\mathrm{MLP}_{\tilde{\mathbf{a}}}\big(\tilde{\mathbf{h}}\big)\,, \qquad (3)$$
provided that the steerable vectors that condition the MLPs are also obtained equivariantly.
Spherical harmonic embedding of vectors. We can convert any vector $\mathbf{x} \in \mathbb{R}^3$ into a type-$l$ steerable vector through the evaluation of spherical harmonics $Y^{(l)}_m$ at $\frac{\mathbf{x}}{\|\mathbf{x}\|}$. For any $l \geq 0$,
$$\tilde{\mathbf{a}}^{(l)} = \Big( Y^{(l)}_m\!\Big(\tfrac{\mathbf{x}}{\|\mathbf{x}\|}\Big) \Big)_{m=-l}^{l} \qquad (4)$$
is a type-$l$ steerable vector. The spherical harmonics are functions on the sphere, and we visualise them as such in Figure 4. We will use spherical harmonic embeddings to include geometric and physical information into steerable MLPs.
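A sketch of the $l=1$ case of Eq. (4), written here with Cartesian component ordering (library conventions, e.g. in e3nn, permute the components). The defining steerability property holds: embedding a rotated vector equals rotating the embedding.

```python
import numpy as np

def sh_l1(x):
    """Real l=1 spherical harmonic embedding of a 3D vector (Eq. (4)),
    in Cartesian component order, with constant sqrt(3 / (4 pi))."""
    return np.sqrt(3.0 / (4.0 * np.pi)) * x / np.linalg.norm(x)

# steerability check: embedding the rotated vector equals rotating the embedding
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
x = np.array([1.0, 2.0, 2.0])
lhs = sh_l1(R @ x)   # embed after rotating
rhs = R @ sh_l1(x)   # rotate after embedding
```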
Mapping between steerable vector spaces. The Clebsch-Gordan (CG) tensor product $\otimes_{cg}: V_{l_1} \times V_{l_2} \to V_l$ is a bilinear operator that combines two O(3) steerable input vectors of types $l_1$ and $l_2$ and returns another steerable vector of type $l$. Let $\tilde{\mathbf{h}}^{(l)}$ denote a steerable vector of type $l$, and $h^{(l)}_m$ its components with $m = -l, \dots, l$. The CG tensor product is given by
$$\big(\tilde{\mathbf{h}}^{(l_1)} \otimes_{cg}^{w} \tilde{\mathbf{h}}^{(l_2)}\big)^{(l)}_m = w \sum_{m_1=-l_1}^{l_1} \sum_{m_2=-l_2}^{l_2} C^{(l,m)}_{(l_1,m_1)(l_2,m_2)}\, h^{(l_1)}_{m_1}\, h^{(l_2)}_{m_2}\,, \qquad (5)$$
in which $w$ is a learnable parameter that scales the product, and $C^{(l,m)}_{(l_1,m_1)(l_2,m_2)}$ are the Clebsch-Gordan coefficients that ensure that the resulting vector is type-$l$ steerable. The CG tensor product is a sparse tensor product, as generally many coefficients are zero; most notably, they vanish whenever $l < |l_1 - l_2|$ or $l > l_1 + l_2$. While Eq. (5) only describes the product between steerable vectors of a single type, e.g. $\tilde{\mathbf{h}}^{(l_1)}$ and $\tilde{\mathbf{h}}^{(l_2)}$, it is directly extendable to mixed-type steerable vectors that may have multiple channels/multiplicities within a type. In this case, every input-to-output sub-vector pair gets its own weight, in a similar way as the weights in a standard linear layer are indexed with input-output indices. We then denote the CG product with $\otimes_{cg}^{\mathbf{W}}$, with boldfaced $\mathbf{W}$ to indicate that it is parametrised by a collection of weights.
In order to stay close to standard notation used with MLPs, we treat the CG product with a fixed steerable vector $\tilde{\mathbf{a}}$ in one of its inputs as a steerable linear layer conditioned on $\tilde{\mathbf{a}}$, denoted with
$$\mathbf{W}_{\tilde{\mathbf{a}}}\,\tilde{\mathbf{h}} := \tilde{\mathbf{h}} \otimes_{cg}^{\mathbf{W}} \tilde{\mathbf{a}} \quad \text{or} \quad \mathbf{W}_{\tilde{\mathbf{a}}}(x)\,\tilde{\mathbf{h}}\,, \qquad (6)$$
where the latter indicates that the CG weights depend on some quantity $x$, e.g. relative distances.
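For two type-1 inputs, the CG tensor product reduces, up to the normalisation fixed by the true CG coefficients, to familiar operations: the type-0 output is the dot product (invariant) and the type-1 output is the cross product (covariant). The type-2 output that also exists for $l_1 = l_2 = 1$ is omitted in this sketch.

```python
import numpy as np

def cg_l1_l1(u, v, w0=1.0, w1=1.0):
    """CG tensor product of two type-1 vectors, up to normalisation:
    type-0 output = weighted dot product, type-1 output = weighted cross
    product, each scaled by a learnable weight as in Eq. (5)."""
    return w0 * np.dot(u, v), w1 * np.cross(u, v)

u, v = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # det(R) = +1

s, t = cg_l1_l1(u, v)
s_rot, t_rot = cg_l1_l1(R @ u, R @ v)
# type-0 output is invariant; type-1 output is covariant (proper rotations)
```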
Steerable activation functions. The common recipe for deep neural networks is to alternate linear layers with element-wise nonlinear activation functions. In the steerable setting, careful consideration is required to ensure that the activation functions are equivariant; currently available classes of activations include Fourier-based (cohen2018spherical), norm-altering (thomas2018tensor), or gated nonlinearities (weiler2018). All of these can be used in alternation with (6). The resulting steerable MLPs in turn provide a new class of steerable activation functions that are able to directly leverage local geometric cues. Namely, through steerable node attributes $\tilde{\mathbf{a}}_i$, either derived from the physical setup (forces, velocities) or from predictions (similar to gating), the MLPs can be applied node-wise and generally used as nonlinear activations on steerable feature fields.
2.1 Steerable E(3) Equivariant Graph Neural Networks
We extend the message passing equations (1)-(2) and define a message passing layer that updates steerable node features $\tilde{\mathbf{f}}_i$ at node $v_i$ via the following steps:
$$\tilde{\mathbf{m}}_{ij} = \phi_m\big(\tilde{\mathbf{f}}_i, \tilde{\mathbf{f}}_j, \|\mathbf{x}_j - \mathbf{x}_i\|^2;\ \tilde{\mathbf{a}}_{ij}\big) \qquad (7)$$
$$\tilde{\mathbf{f}}_i' = \phi_f\big(\tilde{\mathbf{f}}_i, \textstyle\sum_{j \in \mathcal{N}(i)} \tilde{\mathbf{m}}_{ij};\ \tilde{\mathbf{a}}_i\big) \qquad (8)$$
Here, $\|\mathbf{x}_j - \mathbf{x}_i\|^2$ is the squared relative distance between two nodes, $\phi_m$ and $\phi_f$ are O(3) steerable MLPs, and $\tilde{\mathbf{a}}_{ij}$ and $\tilde{\mathbf{a}}_i$ are steerable edge and node attributes. If additional attributes exist, such as the pairwise distance, they can either be concatenated to the attributes that condition our steerable MLPs, or, as is more commonly done (Sec. 3), be added as inputs to $\phi_m$ and $\phi_f$. We do the latter and stack all inputs into a single steerable vector, by which e.g. the first layer of the message function is given by $\mathbf{W}_{\tilde{\mathbf{a}}_{ij}}\big(\tilde{\mathbf{f}}_i \oplus \tilde{\mathbf{f}}_j \oplus \|\mathbf{x}_j - \mathbf{x}_i\|^2\big)$, with output in a user-specified steerable vector space of node representations. The message network $\phi_m$ is steered via edge attributes $\tilde{\mathbf{a}}_{ij}$, and the node update network $\phi_f$ is similarly steered via node attributes $\tilde{\mathbf{a}}_i$.
Injecting geometric and physical quantities. In order to make SEGNNs more expressive, we include geometric and physical information in the edge and node updates. For that purpose, the edge attributes are obtained via the spherical harmonic embedding (Eq. (4)) of relative positions in most cases, but possibly also of relative force or relative momentum. The node attributes could e.g. be the average edge embedding of relative positions over the neighbours of a node, i.e., $\tilde{\mathbf{a}}_i = \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \tilde{\mathbf{a}}_{ij}$, and could additionally include node force, spin or velocities, as we do in the N-body experiment. The use of steerable node attributes in the steerable MLPs that define $\phi_f$ allows us to not just integrate geometric cues into the message functions, but also to leverage them in the node updates. We observe that the more geometric and physical quantities are injected, the better SEGNNs perform.
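A minimal sketch of the averaged edge-embedding node attribute, using only the $l=1$ (direction) part of the spherical harmonic embedding and dropping normalisation constants; the positions and neighbour lists are illustrative toy values:

```python
import numpy as np

def mean_edge_attributes(pos, neighbours):
    """Node attribute as the average over neighbours of the l=1 embedding
    of relative positions, i.e. the mean relative direction."""
    attrs = []
    for i, nbrs in enumerate(neighbours):
        rel = np.stack([pos[j] - pos[i] for j in nbrs])
        rel = rel / np.linalg.norm(rel, axis=1, keepdims=True)  # unit directions
        attrs.append(rel.mean(axis=0))
    return np.stack(attrs)

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
nbrs = [[1, 2], [0], [0]]
a = mean_edge_attributes(pos, nbrs)
# for the symmetric centre node the directions cancel to zero
```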
3 Message Passing as Convolution, Related Work
In this section, we revisit related work from a convolutional perspective, building on our formulation of steerable layers as introduced in Eq. (6). Such a general framework not only helps to understand and better categorise the existing architectures in the literature, but also allows us to clearly motivate the novelty of our approach. The theoretical findings are tested and confirmed in Sec. 4.
Convolutions and point convolutions. Consider a feature map $f: \mathbb{R}^3 \to \mathbb{R}^d$ defined on a Euclidean input space. A convolution layer (defined via cross-correlation) with a point-wise nonlinearity $\sigma$ is then given by
$$f'(\mathbf{x}) = \sigma\!\left( \int_{\mathbb{R}^3} \mathbf{W}(\mathbf{x}' - \mathbf{x})\, f(\mathbf{x}')\, \mathrm{d}\mathbf{x}' \right), \qquad (9)$$
with $\mathbf{W}: \mathbb{R}^3 \to \mathbb{R}^{d' \times d}$ being a convolution kernel that provides for every relative position a matrix that transforms features from $\mathbb{R}^d$ to $\mathbb{R}^{d'}$. Point convolutions, generally referred to as PointConvs (wu2019pointconv), and SchNet (schutt2017schnet) implement Eq. (9) on point clouds. For a sparse input feature map consisting of location-feature pairs $\{(\mathbf{x}_j, \mathbf{f}_j)\}$, the point convolution is given by $\mathbf{f}_i' = \sigma\big( \sum_{j \in \mathcal{N}(i)} \mathbf{W}(\mathbf{x}_j - \mathbf{x}_i)\, \mathbf{f}_j \big)$, which describes a message passing layer of Eqs. (1)-(2) in which the messages are $\mathbf{m}_{ij} = \mathbf{W}(\mathbf{x}_j - \mathbf{x}_i)\, \mathbf{f}_j$ and the node update is $\phi_f = \sigma$.
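A minimal NumPy sketch of such a point convolution: linear messages with a relative-position-dependent kernel, followed by sum aggregation. The isotropic toy kernel and the data are purely illustrative.

```python
import numpy as np

def point_conv(pos, feats, kernel):
    """PointConv-style layer: f_i' = sum_j W(x_j - x_i) f_j over neighbours
    (here: all other points), i.e. linear messages with sum aggregation."""
    n = pos.shape[0]
    out = []
    for i in range(n):
        acc = np.zeros(feats.shape[1])
        for j in range(n):
            if j != i:
                acc += kernel(pos[j] - pos[i]) @ feats[j]
        out.append(acc)
    return np.stack(out)

# toy isotropic kernel: scale the identity by the pair distance
kernel = lambda r: np.linalg.norm(r) * np.eye(2)
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
feats = np.array([[1.0, 2.0], [3.0, 4.0]])
out = point_conv(pos, feats, kernel)
```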
In the above convolutions, the linear transformations $\mathbf{W}(\mathbf{x}_j - \mathbf{x}_i)$ are conditioned on relative positions, which is typically done in one of the following three approaches. (i) Classically, data on a dense discrete grid with shared neighbourhoods is used, and the transformations for all occurring relative positions are stored; this method, however, does not generalise to non-uniform grids such as point clouds. (ii) Continuous kernel methods parametrise the transformations either by expanding them into a continuous basis that can be sampled at arbitrary locations, i.e., $\mathbf{W}(\mathbf{x}) = \sum_i w_i B_i(\mathbf{x})$, where $w_i$ are trainable parameters and $B_i$ the basis functions, or by parametrising the transformations as MLPs via $\mathbf{W}(\mathbf{x}) = \mathrm{MLP}(\mathbf{x})$. (iii) Steerable kernel methods expand the linear transformations in a steerable basis, such as the 3D spherical harmonics, i.e.,
$$\mathbf{W}(\mathbf{x}) = \sum_{l \geq 0} w_l(\|\mathbf{x}\|)\, \mathbf{Y}^{(l)}\!\Big(\tfrac{\mathbf{x}}{\|\mathbf{x}\|}\Big)\,, \qquad (10)$$
where the basis coefficients $w_l$ typically depend on the distance between point pairs. In what follows, we establish a relation between the second and third type of kernel parametrisation.
Steerable (group) convolutions. In our steerable setting, linear feature transformations $\mathbf{W}\tilde{\mathbf{h}}$ are equivalent to steerable linear transformations conditioned on the scalar "1", i.e., $\mathbf{W}\tilde{\mathbf{h}} = \mathbf{W}_{\tilde{1}}\tilde{\mathbf{h}}$ with $\tilde{1} \in V_0$, and the messages are given by
$$\tilde{\mathbf{m}}_{ij} = \mathbf{W}(\mathbf{x}_j - \mathbf{x}_i)\, \tilde{\mathbf{f}}_j\,. \qquad (11)$$
When the transformations are parametrised in a steerable basis (10), we can make the identification (the exact correspondence is obtained by a sum reduction over the steerable vector components; see App. B)
$$\mathbf{W}(\mathbf{x}_j - \mathbf{x}_i)\, \tilde{\mathbf{f}}_j = \mathbf{W}_{\tilde{\mathbf{a}}_{ij}}(\|\mathbf{x}_j - \mathbf{x}_i\|)\, \tilde{\mathbf{f}}_j\,, \qquad (12)$$
in which $\tilde{\mathbf{a}}_{ij}$ are the spherical harmonic embeddings (Eq. (4)) of the relative positions $\mathbf{x}_j - \mathbf{x}_i$, and the weights that parametrise the CG tensor product depend on the distance between point pairs. Thus, convolutions with kernels expanded in a spherical harmonic basis can either be carried out as usual via regular, or via steerable convolutions. The advantage of steerable convolutions is that we can directly derive the result of rotating the kernel by making use of the Wigner-D matrices, via
$$\mathbf{W}(\mathbf{R}^{-1}\mathbf{x})\, \tilde{\mathbf{f}} = \mathbf{W}_{\mathbf{D}(g)^{-1}\tilde{\mathbf{a}}(\mathbf{x})}\, \tilde{\mathbf{f}}\,. \qquad (13)$$
In fact, through the identification of steerable vectors with functions on O(3) via the inverse Fourier transform, we can treat the steerable feature vectors at each location as functions on the group O(3). As such, steerable convolutions produce feature maps on the full group E(3) that provide a feature response for every possible translation/position and rotation. It is precisely this mechanism of transforming convolution kernels via the group action that underlies group convolutions (cohen2016group). As a result, message passing via explicit kernel rotations (l.h.s. of (13)) corresponds to regular group convolutions, and via steerable transformations (r.h.s. of (13)) corresponds to steerable group convolutions.
The equivariant steerable methods (thomas2018tensor; anderson2019covariant; miller2020relevance; fuchs2020se) that we compare against in our experiments can all be written in the convolution form
$$\tilde{\mathbf{m}}_{ij} = \mathbf{W}_{\tilde{\mathbf{a}}_{ij}}(\|\mathbf{x}_j - \mathbf{x}_i\|)\, \tilde{\mathbf{f}}_j \quad \text{or} \quad \tilde{\mathbf{m}}_{ij} = \alpha_{ij}\, \mathbf{W}_{\tilde{\mathbf{a}}_{ij}}(\|\mathbf{x}_j - \mathbf{x}_i\|)\, \tilde{\mathbf{f}}_j\,, \qquad (14)$$
where, in the latter case, the linear transformations additionally depend on an input-dependent attention mechanism $\alpha_{ij}$, as in (anderson2019covariant; fuchs2020se), and can be seen as a steerable PointConv version of attentive group convolutions (romero2020). In these attention-based cases, convolutions are augmented with input-dependent weights. This makes the convolution nonlinear; however, the transformation of the input features still happens linearly, and thus describes what one may call a pseudo-linear transformation. Finally, the recently proposed LieConv (finzi2020generalizing) and NequIP (batzner2021se) also fall into the convolutional message passing class. LieConv is a PointConv-type variation of regular group convolutions on Lie groups (bekkers2019b). NequIP follows the approach of Tensor Field Networks (thomas2018tensor) and weighs interactions using an MLP with radial basis functions as input; these basis functions are obtained as solutions of the Schrödinger equation. Thus, learnable filters consisting of spherical harmonics convolve nearby points based on the distance to the central point.
Equivariant message passing as nonlinear convolution. EGNNs (satorras2021n) are equivariant to transformations in E($n$) and outperform most of the aforementioned steerable methods. This is somewhat surprising, as EGNN sends invariant messages, which are obtained via MLPs of the form
$$\mathbf{m}_{ij} = \phi_m\big(\mathbf{f}_i, \mathbf{f}_j, \|\mathbf{x}_j - \mathbf{x}_i\|^2\big)\,, \qquad (15)$$
where $\phi_m$ is a regular MLP. These messages resemble the convolutional messages of point convolutions due to their dependency on relative positions. There are, however, two important differences: (i) the messages are nonlinear transformations of the neighbouring feature values via an MLP, and (ii) the messages are only conditioned on the distance between point pairs, and are therefore E($n$) invariant. As such, we regard EGNN layers as nonlinear convolutions with isotropic message functions (the nonlinear counterpart of a rotationally invariant kernel). In our work, we lift the isotropy constraint and generalise to nonlinear steerable convolutions via messages of the form
$$\tilde{\mathbf{m}}_{ij} = \phi_m\big(\tilde{\mathbf{f}}_i, \tilde{\mathbf{f}}_j, \|\mathbf{x}_j - \mathbf{x}_i\|^2;\ \tilde{\mathbf{a}}_{ij}\big)\,, \qquad (16)$$
with $\phi_m$ a steerable MLP. The MLP is conditioned on the attribute $\tilde{\mathbf{a}}_{ij}$, which could e.g. be a spherical harmonic embedding of the relative position $\mathbf{x}_j - \mathbf{x}_i$. This allows for the creation of messages more general than those found in convolutions, while carrying covariant geometric information.
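The contrast can be made concrete with a small sketch of the invariant EGNN-style message of Eq. (15): a rigid transformation of all positions leaves the message unchanged, which is exactly the isotropy that Eq. (16) lifts. The stand-in `phi` and the toy values are illustrative, not the trained networks.

```python
import numpy as np

def egnn_message(f_i, f_j, x_i, x_j, phi):
    """EGNN-style invariant message (Eq. (15)): the MLP phi only sees node
    features and the squared pair distance, never the direction."""
    d2 = np.sum((x_j - x_i) ** 2)
    return phi(np.concatenate([f_i, f_j, [d2]]))

phi = lambda z: np.tanh(z)           # stand-in for a trained MLP
f_i, f_j = np.array([0.5]), np.array([-0.2])
x_i, x_j = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])

# E(3) invariance: rotate and translate all positions, message is unchanged
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([4.0, -2.0, 7.0])
m = egnn_message(f_i, f_j, x_i, x_j, phi)
m_transformed = egnn_message(f_i, f_j, R @ x_i + t, R @ x_j + t, phi)
```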
Related equivariant message passing methods. A different but also fully message passing based approach can be found in Geometric Vector Perceptrons (GVP) (jing2020learning) and PaiNN (schutt2021equivariant). Compared to SEGNNs, which treat equivariant information as fully steerable features, GVP and PaiNN update scalar-valued attributes using the norm of vector-valued attributes, and therefore with O(3) invariant information. These methods restrict the flow of information between attributes of different types, whereas the Clebsch-Gordan tensor product in SEGNNs allows for interaction between spherical harmonics of all orders throughout the network. Another approach for incorporating relative orientation is to utilise invariant angles. Methods such as DimeNet++ (klicpera2020dimenet_plusplus) and SphereNet (liu2021spherical) do so by relying on a second-order message passing scheme that considers angles between the central point and neighbours-of-neighbours. These angles are not affected by rigid body motions and can therefore be used without making the model overly sensitive to O(3) transformations. In contrast, our method can directly leverage angular information in a first-order message passing scheme through the use of steerable vectors.
4 Experiments
Implementation details. The implementation of SEGNN's O(3) steerable MLPs is based on the e3nn library (geiger2021). We either define the steerable vector spaces as $n$ copies of steerable vector spaces up to order $L$ (N-body and QM9 experiments), or divide an $n$-dimensional vector space into approximately equally large type-$l$ sub-vector spaces (OC20 experiments), as is done in (finzi2021practical). Furthermore, for a fair comparison between experiments with different maximal order $L$, we choose $n$ such that the total number of weights in the CG products corresponds to that of a regular (type-0) linear layer. Further implementation details are in App. C.
SEGNN architectures and ablations. We consider several variations of SEGNNs. On all tasks we have at least one fully steerable SEGNN tuned for the specific task at hand. We perform ablation experiments to investigate the two main principles that set SEGNNs apart from the literature. A1: The case of non-steerable vs steerable GNNs is obtained by applying the same SEGNN network with different specifications of the maximal spherical harmonic order in the feature vectors and in the attributes; EGNN (satorras2021n) arises as the special case in which both orders are zero. A2: In this ablation, we use steerable equivariant point convolution methods (thomas2018tensor) with messages as in Eq. (14) and regular gated nonlinearities as activation/update function, and compare them to the same network but with messages obtained in a nonlinear manner via a 2-layer steerable MLP as in Eq. (16).
N-body system. The charged N-body particle system experiment is a toy experiment used in (kipf2018neural; fuchs2020se; satorras2021n). It consists of 5 particles that carry a positive or negative charge and have an initial position and an initial velocity in 3-dimensional space. The task is to estimate the positions of the five particles after 1,000 timesteps. We build upon the code and the experimental setting introduced in (satorras2021n) (3,000 training trajectories, 10,000 epochs, 4-layer networks for the SEGNN, and more layers for those ablations where we do not fill the parameter budget). Steerable architectures are designed such that the parameter budget matches that of the tested EGNN implementation. More details are provided in App. C. For the implementation, we input the relative position to the centre of the system and the velocity as type-1 vectors with odd parity. We further input the norm of the velocity as a scalar. The output is embedded as the difference vector to the initial position (odd parity). In doing so, we keep E(3) equivariance for vector-valued inputs and outputs. The edge attributes are obtained via the spherical harmonic embedding of relative positions, as described in Eq. (4). Messages additionally have the product of charges and the absolute distance included.
SEGNN architectures are compared to steerable equivariant point convolution methods with linear messages and with nonlinear messages. For all cases, we use two embedding and two readout layers. Results and ablation studies are shown in Tab. 1. A full ablation on the performance and runtime of different (maximum) orders of steerable feature vectors and attributes is outlined in App. C. For most steerable architectures, the best results are obtained at low maximum orders, and the architectures do not seem to benefit from higher orders. We consider two SEGNN architectures: one where the node attributes are the averaged edge embeddings, i.e. the mean over relative orientations; this architecture only uses geometric information. The second SEGNN architecture additionally has the spherical harmonic embedding of the velocity added to the node attributes; it can thus leverage the full geometric (relative orientation) and physical (velocity) information. Including physical information in addition to geometric information in the node updates considerably boosts SEGNN performance, which demonstrates the potential of the new class of SEGNN node activation functions.
QM9.
The QM9 dataset (ramakrishnan2014QM9; ruddigkeit2012enumeration) consists of small molecules of up to 29 atoms, where each atom is described by 3D position coordinates and a five-dimensional one-hot embedding of its atomic type (H, C, N, O, F). The aim is to regress various chemical properties for each of the molecules. We optimise and report the mean absolute error (MAE) between predictions and ground truth, using the dataset partitions and data loaders from (anderson2019covariant), which split the dataset into 100K molecules for training, 18K for validation and 13K for testing. In Table 3 we show that, by steering with the relative orientation between atoms, performance increases for higher (maximum) orders of steerable feature vectors, especially when a small cutoff radius is chosen. While previous methods use relatively large cutoff radii of 4.5 to 11 Å, we use a cutoff radius of 2 Å. Doing so results in a sharp reduction of the number of messages per layer, as shown in App. C. Tables 2 and 3 together show that SEGNNs outperform an architecturally comparable baseline EGNN (satorras2021n), whilst stripping away its attention modules and reducing graph connectivity from fully connected to only 2 Å distant atoms. It is, however, apt to note that runtime is still limited by the relatively expensive calculation of the Clebsch-Gordan tensor products. We further note that SEGNNs produce results on par with the best performing methods on the non-energy variables, but lag behind the state of the art on the energy variables ($U_0$, $U$, $H$, $G$). We conjecture that such targets could benefit from more involved (e.g. including attention or neighbour-neighbour interactions) or problem-tailored architectures, such as those compared against.
OC20.
The Open Catalyst Project OC20 dataset (zitnick2020introduction; chanussot2021open) consists of molecular adsorptions onto surfaces, where 82 different adsorbates are considered, consisting of oxygen, hydrogen, carbon and nitrogen atoms. We focus on the Initial Structure to Relaxed Energy (IS2RE) task, which takes an initial structure as input and has the target of predicting the energy of the final, relaxed state. The IS2RE training set consists of over 450,000 catalyst-adsorbate combinations, where graphs comprise 70 atoms on average. Optimisation is done for the MAE between the predicted and ground truth energy. Additionally, performance is measured as the percentage of structures in which the predicted energy is within a 0.02 eV threshold (EwT). The four test splits contain in-distribution (ID) catalysts and adsorbates, out-of-domain adsorbates (OOD Ads), out-of-distribution catalysts (OOD Cat), and out-of-distribution adsorbates and catalysts (OOD Both). Table 4 shows SEGNN results on the OC20 dataset and comparisons with existing methods. We compare to models which have obtained results by training on the IS2RE training set, including methods like SphereNet (liu2021spherical) and DimeNet++ (klicpera2019dimenet; klicpera2020dimenet_plusplus). A full ablation study, comparing performance and runtime for different orders of steerable feature vectors and attributes, can be found in App. C.
5 Conclusion
We have introduced Steerable E(3) Equivariant Graph Neural Networks (SEGNNs), which generalise equivariant graph neural networks such that information at nodes and edges is not restricted to be invariant (scalar), but can also be covariant (vector, tensor). The key ingredient of SEGNNs is a new class of equivariant activation functions for steerable vector fields, based on the introduction of steerable node attributes and steerable MLPs. Including geometric and physical information in the node updates is a unique feature that considerably boosts model performance; we demonstrate the potential of the new class of steerable node activation functions and consider it a promising new direction in computational physics and chemistry. Extensive ablation studies have further shown the benefit of steerable over non-steerable (invariant) message passing, and the benefit of nonlinear over linear convolutions. Furthermore, on the OC20 IS2RE task, SEGNNs outperform all competitors.
Acknowledgements
Johannes Brandstetter thanks the Institute of Advanced Research in Artificial Intelligence (IARAI) and the Federal State Upper Austria for the support. This work is part of the research programme VENI (grant number 17290), financed by the Dutch Research Council (NWO). The authors thank Markus Holzleitner for helpful comments on this work.
References
Appendix A Mathematical background
This appendix provides the mathematical background and intuition for steerable MLPs. We remark that the reader may appreciate several related works, such as (thomas2018tensor; anderson2019covariant; fuchs2020se), as excellent alternative resources for getting acquainted with the group/representation theory used in this paper; each of these works presents unique viewpoints that greatly influenced the writing of this appendix. Here we introduce the theory from our own perspective, which is tuned towards the idea of steerable MLPs and our viewpoint on group convolutions, and provides complementary intuition to the aforementioned resources. The main concepts explained in this appendix are:

Group definition and examples of groups (Section A.1). The entire framework builds upon notions from group theory and as such a formal definition is in order. In this paper, we model transformations such as translation, rotation and reflection as groups.

Invariance, equivariance and representations (Section A.2). A function is said to be invariant to a transformation if its output is unaffected by a transformation of the input. A function is said to be equivariant if its output transforms predictably under a transformation of the input. In order to make the definition precise, we need a definition of representations; a representation formalises the notion of transformations applied to vectors in the context of group theory.

Steerable vectors, Wigner-D matrices and irreducible representations (Section A.3). Whereas regular MLPs work with feature vectors whose elements are scalars, our steerable MLPs work with feature vectors consisting of steerable sub-vectors. Steerable feature vectors are vectors that transform via so-called Wigner-D matrices, which are representations of the orthogonal group O(3). Wigner-D matrices are the smallest possible group representations and can be used to define any representation (conversely, any representation can be reduced to a direct sum of Wigner-D matrices via a change of basis). As such, the Wigner-D matrices are irreducible representations.

Spherical harmonics (Section A.4). Spherical harmonics are a class of functions on the sphere $S^2$ and can be thought of as a Fourier basis on the sphere. We show that spherical harmonics are steered by the Wigner-D matrices and interpret steerable vectors as functions on $S^2$, which justifies the glyph visualisations used in this paper. Moreover, spherical harmonics allow the embedding of three-dimensional displacement vectors into arbitrarily large steerable vectors.

Clebsch-Gordan tensor product and steerable MLPs (Section A.5). In a regular MLP one maps between input and output vector spaces linearly via matrix-vector multiplication and applies nonlinearities afterwards. In steerable MLPs one maps between steerable input and steerable output vector spaces via the Clebsch-Gordan tensor product. Akin to the learnable weight matrix in regular MLPs, the learnable Clebsch-Gordan tensor product is the workhorse of our steerable MLPs.
After these concepts are introduced we will in Section B revisit the convolution operator in the light of the steerable, group theoretical viewpoint that we take in this paper. In particular, we show that steerable group convolutions are equivalent to linear group convolutions with convolution kernels expressed in a spherical harmonic basis. With this in mind, we argue that our approach via message passing can be thought of as building neural networks via nonlinear group convolutions.
A.1 Group definition and the groups E(3) and O(3)
Group definition.
A group is an algebraic structure that consists of a set $G$ and a binary operator $\cdot$, the group product, that satisfies the following axioms: Closure: for all $g, g' \in G$ we have $g \cdot g' \in G$; Identity: there exists an identity element $e \in G$ such that $e \cdot g = g \cdot e = g$ for all $g \in G$; Inverse: for each $g \in G$ there exists an inverse element $g^{-1} \in G$ such that $g^{-1} \cdot g = g \cdot g^{-1} = e$; and Associativity: for all $g, g', g'' \in G$ we have $(g \cdot g') \cdot g'' = g \cdot (g' \cdot g'')$.
The Euclidean group E(3).
In this work, we are interested in the group of three-dimensional translations, rotations and reflections, which is denoted E(3), the 3D Euclidean group. Such transformations are parametrised by pairs $(\mathbf{x}, \mathbf{R})$ of translation vectors $\mathbf{x} \in \mathbb{R}^3$ and orthogonal transformation matrices $\mathbf{R} \in \mathrm{O}(3)$. The E(3) group product and inverse are defined by
$$(\mathbf{x}', \mathbf{R}') \cdot (\mathbf{x}, \mathbf{R}) = (\mathbf{x}' + \mathbf{R}'\mathbf{x},\; \mathbf{R}'\mathbf{R}), \qquad (\mathbf{x}, \mathbf{R})^{-1} = (-\mathbf{R}^{-1}\mathbf{x},\; \mathbf{R}^{-1}),$$
with identity element $e = (\mathbf{0}, \mathbf{I})$. One can readily see that with these definitions all four group axioms are satisfied, and that E(3) therefore indeed defines a group. The group product can be seen as a description of how two E(3) transformations parametrised by $(\mathbf{x}, \mathbf{R})$ and $(\mathbf{x}', \mathbf{R}')$, applied one after the other, are described by the single transformation parametrised by $(\mathbf{x}', \mathbf{R}') \cdot (\mathbf{x}, \mathbf{R})$. The transformations themselves act on the vector space $\mathbb{R}^3$ of 3D positions via the group action, which we also denote with $\cdot$, via
$$(\mathbf{x}, \mathbf{R}) \cdot \mathbf{v} = \mathbf{R}\mathbf{v} + \mathbf{x},$$
where $(\mathbf{x}, \mathbf{R}) \in \mathrm{E}(3)$ and $\mathbf{v} \in \mathbb{R}^3$.
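The group product, inverse and action defined above can be checked numerically. The following is a minimal numpy sketch (the function names are ours, chosen for illustration) verifying that acting with two E(3) elements in sequence equals acting with their group product, and that composing an element with its inverse gives the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    # QR decomposition of a random matrix yields an orthogonal matrix;
    # we fix the determinant to +1, a valid O(3) element
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

def product(g1, g2):
    # (x', R')(x, R) = (x' + R' x, R' R)
    (x1, R1), (x2, R2) = g1, g2
    return (x1 + R1 @ x2, R1 @ R2)

def inverse(g):
    # (x, R)^{-1} = (-R^{-1} x, R^{-1}); for orthogonal R, R^{-1} = R^T
    x, R = g
    return (-R.T @ x, R.T)

def action(g, v):
    # group action on R^3: (x, R) . v = R v + x
    x, R = g
    return R @ v + x

g  = (rng.normal(size=3), random_rotation())
g2 = (rng.normal(size=3), random_rotation())
v  = rng.normal(size=3)

# acting with g2 after g equals acting once with the product g2 * g
assert np.allclose(action(g2, action(g, v)), action(product(g2, g), v))
# g^{-1} g is the identity e = (0, I): it leaves v unchanged
assert np.allclose(action(product(inverse(g), g), v), v)
```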
The orthogonal group O(3) and special orthogonal group SO(3).
The group E(3) is the semi-direct product (denoted with $\rtimes$) of the group of translations $(\mathbb{R}^3, +)$ with the group of orthogonal transformations O(3), i.e., $\mathrm{E}(3) = \mathbb{R}^3 \rtimes \mathrm{O}(3)$. This means that we can conveniently decompose E(3) transformations into an O(3) transformation (rotation and/or reflection) followed by a translation. In this work we will mainly focus on dealing with O(3) transformations, as translations are trivially dealt with. When representing the group elements of O(3) with matrices $\mathbf{R}$, as we have done before, the group product and inverse are simply given by the matrix product and matrix inverse. I.e., for $\mathbf{R}, \mathbf{R}' \in \mathrm{O}(3)$ the group product and inverse are defined by
$$\mathbf{R}' \cdot \mathbf{R} = \mathbf{R}'\mathbf{R}, \qquad \mathbf{R}^{-1} = \mathbf{R}^T.$$
The group acts on $\mathbb{R}^3$ by matrix-vector multiplication, i.e., $\mathbf{R} \cdot \mathbf{v} = \mathbf{R}\mathbf{v}$. The group elements of O(3) are square orthogonal matrices with determinant $+1$ or $-1$. Their action on $\mathbb{R}^3$ defines a reflection and/or rotation.
The special orthogonal group SO(3) has the same group product and inverse, but excludes reflections. The group thus consists of orthogonal matrices with determinant $+1$.
The sphere $S^2$ is a homogeneous space of SO(3).
The sphere $S^2 = \{\mathbf{n} \in \mathbb{R}^3 : \|\mathbf{n}\| = 1\}$ is not a group, as we cannot define a group product on it that satisfies the group axioms. It can, however, be convenient to treat it as a homogeneous space of the groups O(3) or SO(3). A space $\mathcal{X}$ is called a homogeneous space of a group $G$ if for any two points $p, q \in \mathcal{X}$ there exists a group element $g \in G$ such that $g \cdot p = q$.
The sphere is a homogeneous space of the rotation group SO(3), since any point on the sphere can be reached via the rotation of some reference vector. Consider for example an XYX parametrisation of SO(3) rotations in which three rotations are applied one after another via
$$\mathbf{R}(\alpha, \beta, \gamma) = \mathbf{R}_{e_x}(\alpha)\, \mathbf{R}_{e_y}(\beta)\, \mathbf{R}_{e_x}(\gamma), \qquad (A.1)$$
with $e_x$ and $e_y$ denoting unit vectors along the $x$ and $y$ axis, and $\mathbf{R}_a(\theta)$ denoting a rotation of $\theta$ degrees around axis $a$. We can model points $\mathbf{n} \in S^2$ on the sphere in a similar way via Euler angles via
$$\mathbf{n}(\alpha, \beta) = \mathbf{R}_{e_x}(\alpha)\, \mathbf{R}_{e_y}(\beta)\, e_x. \qquad (A.2)$$
So, with two rotation angles, any point on the sphere can be reached. In the above we set $\gamma = 0$ in the parametrisation of the rotation matrix (an element of SO(3)) that rotates the reference vector $e_x$, but it should be clear that with any $\gamma$ the same point is reached, since $\mathbf{R}_{e_x}(\gamma)\, e_x = e_x$. This means that there are many group elements in SO(3) that all map to the same point on the sphere.
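The $\gamma$-independence of $\mathbf{n}(\alpha,\beta)$ is easy to check numerically. A small numpy sketch, assuming the XYX convention of Eq. (A.1) (helper names are ours):

```python
import numpy as np

def R_axis(axis, theta):
    # rotation matrix by angle theta (radians) around a coordinate axis
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    raise ValueError(axis)

e_x = np.array([1.0, 0.0, 0.0])

def R_xyx(alpha, beta, gamma):
    # XYX Euler parametrisation of SO(3), cf. Eq. (A.1)
    return R_axis("x", alpha) @ R_axis("y", beta) @ R_axis("x", gamma)

alpha, beta = 0.7, 1.2
# the point n(alpha, beta) = R(alpha, beta, gamma) e_x on the sphere is the
# same for every gamma, since the last rotation fixes e_x: R_x(gamma) e_x = e_x
n0 = R_xyx(alpha, beta, 0.0) @ e_x
for gamma in (0.5, 1.9, 3.1):
    assert np.allclose(R_xyx(alpha, beta, gamma) @ e_x, n0)
assert np.isclose(np.linalg.norm(n0), 1.0)  # n0 lies on the unit sphere
```

Many group elements (one for every $\gamma$) thus map the reference vector to the same point on the sphere.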
A.2 Invariance, equivariance and representations
Group representations.
We previously defined the group product, which tells us how elements of a group interact. We also showed that the groups E(3) and O(3) can transform the three-dimensional vector space $\mathbb{R}^3$ via the group action. We usually think of E(3) and O(3) as groups that describe transformations of $\mathbb{R}^3$, but these groups are not restricted to transformations of $\mathbb{R}^3$ and can generally act on arbitrary vector spaces via representations. A representation $\rho(g): V \to V$ is an invertible linear transformation, parametrised by group elements $g \in G$, that acts on some vector space $V$ and that follows the group structure (it is a group homomorphism) via
$$\rho(g')\, \rho(g)\, v = \rho(g' \cdot g)\, v,$$
with $v \in V$.
A representation can also act on infinite-dimensional vector spaces such as spaces of functions. E.g., the so-called left-regular representation of E(3) on functions $f: \mathbb{R}^3 \to \mathbb{R}$ is given by
$$[\rho(g) f](\mathbf{x}) = f(g^{-1} \cdot \mathbf{x}),$$
i.e., it transforms the function $f$ by letting $g^{-1}$ act on the domain from the left. Here we used the bracket notation $[\rho(g) f]$ to indicate that $\rho(g)$ transforms the function $f$ first, which creates a new function $\rho(g) f$, which is then sampled at $\mathbf{x}$.
When representations transform finite-dimensional vectors $\mathbf{v} \in \mathbb{R}^d$, they are $d \times d$-dimensional matrices. In this work, we denote such matrix representations with boldface $\mathbf{D}(g)$. A familiar example of a matrix representation of O(3) on $\mathbb{R}^3$ are the orthogonal matrices $\mathbf{R}$ themselves, i.e., $\mathbf{D}(\mathbf{R}) = \mathbf{R}$.
Finally, any two representations, say $\mathbf{D}(g)$ and $\mathbf{D}'(g)$, are equivalent if they relate via a similarity transform
$$\mathbf{D}'(g) = \mathbf{Q}^{-1}\, \mathbf{D}(g)\, \mathbf{Q},$$
i.e., such representations describe one and the same transformation but in a different basis, and the change of basis is carried out by $\mathbf{Q}$. Now that representations have been introduced, we can formally define equivariance.
Invariance and equivariance.
Equivariance is a property of an operator $\Phi: \mathcal{X} \to \mathcal{Y}$ that maps between input and output vector spaces $\mathcal{X}$ and $\mathcal{Y}$. Given a group $G$ and its representations $\rho_{\mathcal{X}}$ and $\rho_{\mathcal{Y}}$, which transform vectors in $\mathcal{X}$ and $\mathcal{Y}$ respectively, the operator $\Phi$ is said to be equivariant if it satisfies the following constraint:
$$\Phi \circ \rho_{\mathcal{X}}(g) = \rho_{\mathcal{Y}}(g) \circ \Phi \qquad \forall g \in G. \qquad (A.3)$$
Thus, with an equivariant map, the output transforms predictably with transformations of the input. One might say that no information gets lost when the input is transformed, merely restructured. One way to interpret Eq. (A.3) is therefore that $\rho_{\mathcal{X}}(g)$ and $\rho_{\mathcal{Y}}(g)$ describe the same transformation, but in different spaces.
Invariance is a special case of equivariance in which $\rho_{\mathcal{Y}}(g) = \mathrm{id}_{\mathcal{Y}}$ for all $g \in G$. I.e., an operator $\Phi$ is said to be invariant if it satisfies the following constraint:
$$\Phi \circ \rho_{\mathcal{X}}(g) = \Phi \qquad \forall g \in G. \qquad (A.4)$$
Thus, with an invariant operator, the output of $\Phi$ is unaffected by transformations applied to the input.
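Two elementary examples make the constraints (A.3) and (A.4) concrete. The vector norm is O(3)-invariant, while scaling a vector by its own norm is O(3)-equivariant, since $f(\mathbf{R}\mathbf{v}) = \|\mathbf{R}\mathbf{v}\|\, \mathbf{R}\mathbf{v} = \mathbf{R}\, (\|\mathbf{v}\|\, \mathbf{v})$. A minimal numpy sketch (function names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
# a random orthogonal matrix, i.e., an O(3) element
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

def invariant(v):
    # the norm is unaffected by rotations/reflections, cf. Eq. (A.4)
    return np.linalg.norm(v)

def equivariant(v):
    # the output is a vector that rotates along with the input,
    # f(Qv) = Q f(v), cf. Eq. (A.3)
    return np.linalg.norm(v) * v

v = rng.normal(size=3)
assert np.isclose(invariant(Q @ v), invariant(v))
assert np.allclose(equivariant(Q @ v), Q @ equivariant(v))
```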
A.3 Steerable vectors, Wigner-D matrices and irreducible representations
One strategy to build equivariant MLPs is to define input and output spaces of the MLPs and define how these spaces transform under the action of a group. This then sets an equivariance constraint on the operator that maps between these spaces. By only working with such equivariant operators we can guarantee that the entire learning framework is equivariant.
In our work, the proposed graph neural networks are translation equivariant by construction, as any form of spatial information only enters the pipeline in the form of relative positions between nodes ($\mathbf{x}_j - \mathbf{x}_i$). Then, any remaining operations are designed to be O(3) equivariant such that, together with the given translation equivariance, the complete framework is fully E(3) equivariant. Since translations are trivially dealt with, we focus on SO(3) and O(3) and show how to build equivariant MLPs through the use of the Clebsch-Gordan tensor product.
Wigner-D matrices are irreducible representations.
For SO(3) there exists a collection of representations, indexed by their order $l = 0, 1, 2, \dots$, which act on vector spaces of dimension $2l+1$. These representations are called Wigner-D matrices and we denote them with $\mathbf{D}^{(l)}(g)$. The use of Wigner-D matrices is motivated by the fact that any matrix representation $\mathbf{D}(g)$ of SO(3) that acts on some vector space $V$ can be "reduced" to an equivalent block-diagonal matrix representation with Wigner-D matrices along the diagonal:
$$\mathbf{D}(g) = \mathbf{Q}^{-1} \left( \mathbf{D}^{(l_1)}(g) \oplus \mathbf{D}^{(l_2)}(g) \oplus \cdots \right) \mathbf{Q}, \qquad (A.5)$$
with $\mathbf{Q}$ the change of basis that makes them equivalent, and $\oplus$ denoting the direct sum (concatenation of blocks along the diagonal). The individual Wigner-D matrices themselves cannot be reduced further and are hence irreducible representations of SO(3). Thus, since the block-diagonal representations are equivalent to $\mathbf{D}(g)$, we may as well work with them instead. This is convenient since each block, i.e., each Wigner-D matrix $\mathbf{D}^{(l_i)}(g)$, only acts on a subspace $V_{l_i}$ of $V$. As such we can factorise $V = V_{l_1} \oplus V_{l_2} \oplus \cdots$, which motivates the use of steerable vector spaces and their direct sums as presented in Sec. 2.
The Wigner-D matrices are the irreducible representations of SO(3), but we can easily adapt these representations to be suitable for O(3) by including the group of reflections via a direct product. We will still refer to these representations as Wigner-D matrices in the entirety of this work, opting to avoid the distinction in favour of clarity of exposition. We further remark that explicit forms of the Wigner-D matrices can be found in books such as (sakurai2017), and their numerical implementations in code libraries such as the e3nn library (geiger2021).
Steerable vector spaces.
The $(2l+1)$-dimensional vector space $V_l$ on which a Wigner-D matrix of order $l$ acts will be called a type-$l$ steerable vector space. E.g., a type-3 vector $\tilde{\mathbf{h}} \in V_3$ is transformed by $g \in \mathrm{SO}(3)$ via $\mathbf{D}^{(3)}(g)\, \tilde{\mathbf{h}}$. We remark that this definition is equivalent to the definition of steerable functions commonly used in computer vision (freeman1991design; helor1996steerable), via the viewpoint that steerable vectors can be regarded as the basis coefficients of a function expanded in a spherical harmonic basis. We elicit this viewpoint in Sec. A.4 and B.
At this point we are already familiar with type-0 and type-1 steerable vector spaces. Namely, type-0 vectors are scalars, which are invariant to transformations $g$, i.e., $\mathbf{D}^{(0)}(g) = 1$. Type-1 features are vectors $\mathbf{v} \in \mathbb{R}^3$, which transform directly via the matrix representation of the group, i.e., $\mathbf{D}^{(1)}(\mathbf{R}) = \mathbf{R}$.
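A feature vector that concatenates a type-0 and a type-1 part is then transformed by the block-diagonal representation $\mathrm{diag}(\mathbf{D}^{(0)}(g), \mathbf{D}^{(1)}(g))$, cf. Eq. (A.5). A minimal numpy sketch of this (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1  # restrict to SO(3)

# a steerable feature made of one type-0 (scalar) and one type-1 (vector) part
h0 = np.array([1.5])      # type-0: D^(0)(g) = 1, so it never changes
h1 = rng.normal(size=3)   # type-1: transforms via D^(1)(R) = R
h = np.concatenate([h0, h1])

# the representation acting on h is block diagonal: diag(D^(0), D^(1))
D = np.zeros((4, 4))
D[0, 0] = 1.0
D[1:, 1:] = Q

h_rot = D @ h
assert np.allclose(h_rot[:1], h0)      # the scalar part is invariant
assert np.allclose(h_rot[1:], Q @ h1)  # the vector part rotates with Q
```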
A.4 Spherical harmonics
Spherical harmonics.
Related to the Wigner-D matrices and their steerable vector spaces are the spherical harmonics (solutions to Laplace's equation are called harmonics; solutions of Laplace's equation on the sphere are therefore called spherical harmonics). Spherical harmonics are a class of functions on the sphere $S^2$, akin to the familiar circular harmonics that are best known as the 1D Fourier basis. As with a Fourier basis, spherical harmonics form an orthonormal basis for functions on $S^2$. In this work we use the real-valued spherical harmonics and denote them with $Y^{(l)}_m : S^2 \to \mathbb{R}$, with order $l \geq 0$ and $m \in \{-l, \dots, l\}$.
Spherical harmonics are Wigner-D functions.
One can also think of spherical harmonics as functions on SO(3) that are invariant to a subgroup of rotations via
$$\tilde{Y}^{(l)}_m(\mathbf{R}(\alpha, \beta, \gamma)) := Y^{(l)}_m(\mathbf{n}(\alpha, \beta)),$$
in which we used the parametrisations for $S^2$ and SO(3) given in (A.2) and (A.1) respectively. Then, by definition, $\tilde{Y}^{(l)}_m$ is invariant with respect to the rotation angle $\gamma$, i.e., $\tilde{Y}^{(l)}_m(\mathbf{R}(\alpha, \beta, \gamma)) = \tilde{Y}^{(l)}_m(\mathbf{R}(\alpha, \beta, 0))$. This viewpoint of regarding the spherical harmonics as invariant functions on SO(3) helps us to draw the connection to the Wigner-D functions $D^{(l)}_{mn}(g)$ that make up the elements of the Wigner-D matrices. Namely, the $n = 0$ column of Wigner-D functions is also $\gamma$-invariant and, in fact, corresponds (up to a normalisation factor) to the spherical harmonics via
$$Y^{(l)}_m(\mathbf{n}(\alpha, \beta)) = \sqrt{\tfrac{2l+1}{4\pi}}\; D^{(l)}_{m0}(\mathbf{R}(\alpha, \beta, \gamma)). \qquad (A.6)$$
The mapping from vectors into spherical harmonic coefficients is equivariant.
It then directly follows that vectors of spherical harmonics are steerable by the Wigner-D matrices of the same degree. Let $a^{(l)}(\mathbf{n}) = \left( Y^{(l)}_{-l}(\mathbf{n}), \dots, Y^{(l)}_{l}(\mathbf{n}) \right)^T$ be the embedding of a direction vector $\mathbf{n} \in S^2$ in spherical harmonics of order $l$. Then this vector embedding is equivariant, as it satisfies
$$a^{(l)}(\mathbf{R}\, \mathbf{n}) = \mathbf{D}^{(l)}(\mathbf{R})\; a^{(l)}(\mathbf{n}). \qquad (A.7)$$
Using the $S^2$ and SO(3) parametrisations of (A.2) and (A.1), this is derived by writing $\mathbf{n} = \mathbf{n}(\alpha, \beta)$, expressing each component via (A.6), and applying the homomorphism property $\mathbf{D}^{(l)}(\mathbf{R}')\, \mathbf{D}^{(l)}(\mathbf{R}) = \mathbf{D}^{(l)}(\mathbf{R}' \mathbf{R})$.
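For $l = 1$, Eq. (A.7) can be checked directly: up to ordering of $m$, the real degree-1 spherical harmonics are the Cartesian coordinates of $\mathbf{n}$ scaled by $\sqrt{3/4\pi}$, so in this basis $\mathbf{D}^{(1)}(\mathbf{R})$ is the rotation matrix $\mathbf{R}$ itself. A minimal numpy sketch (the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def sh_l1(n):
    # real l = 1 spherical harmonics in a Cartesian (x, y, z) ordering:
    # simply the coordinates of n scaled by sqrt(3 / 4 pi)
    return np.sqrt(3.0 / (4.0 * np.pi)) * n

# a random SO(3) element
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] *= -1

n = rng.normal(size=3)
n /= np.linalg.norm(n)  # a point on the sphere

# equivariance of the embedding, Eq. (A.7): a(R n) = D^(1)(R) a(n) with D^(1) = R
assert np.allclose(sh_l1(R @ n), R @ sh_l1(n))
```

For higher $l$ the same identity holds with the $(2l+1)$-dimensional Wigner-D matrices, as implemented e.g. in the e3nn library.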
Steerable vectors represent steerable functions on $S^2$.
Just like the 1D Fourier basis forms a complete orthonormal basis for 1D functions, the spherical harmonics form an orthonormal basis for $L^2(S^2)$, the space of square-integrable functions on the sphere. Any function $f$ on the sphere can thus be represented by a steerable vector of coefficients $\hat{f}^{(l)}_m$ when it is expressed in a spherical harmonic basis via
$$f(\mathbf{n}) = \sum_{l \geq 0} \sum_{m=-l}^{l} \hat{f}^{(l)}_m\, Y^{(l)}_m(\mathbf{n}). \qquad (A.8)$$
Since spherical harmonics form an orthonormal basis, the coefficient vector can directly be obtained by taking inner products of the function with the spherical harmonics, i.e.,
$$\hat{f}^{(l)}_m = \int_{S^2} f(\mathbf{n})\, Y^{(l)}_m(\mathbf{n})\, \mathrm{d}\mathbf{n}. \qquad (A.9)$$
Equation (A.9) is sometimes referred to as the Fourier transform on $S^2$, and Eq. (A.8) as the inverse spherical Fourier transform. Thus, one can identify steerable vectors with functions on the sphere via the spherical Fourier transform.
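The forward transform (A.9) can be approximated numerically. A numpy sketch for a band-limited test function built from the $(l, m) = (0, 0)$ and $(1, 0)$ real harmonics, using a simple grid quadrature over the sphere (all names and the grid resolution are our illustrative choices):

```python
import numpy as np

# real spherical harmonics for (l, m) = (0, 0) and (1, 0)
Y00 = lambda th, ph: np.full_like(th, 1.0 / np.sqrt(4.0 * np.pi))
Y10 = lambda th, ph: np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(th)

# a band-limited test function with known coefficients, cf. Eq. (A.8)
c00, c10 = 0.8, -1.3
f = lambda th, ph: c00 * Y00(th, ph) + c10 * Y10(th, ph)

# forward transform, Eq. (A.9): inner products over the sphere,
# approximated on a dense (theta, phi) grid with measure sin(theta)
th, ph = np.meshgrid(np.linspace(0, np.pi, 400),
                     np.linspace(0, 2 * np.pi, 400), indexing="ij")
dA = (np.pi / 399) * (2 * np.pi / 399) * np.sin(th)

c00_hat = np.sum(f(th, ph) * Y00(th, ph) * dA)
c10_hat = np.sum(f(th, ph) * Y10(th, ph) * dA)

# the known coefficients are recovered up to quadrature error
assert abs(c00_hat - c00) < 1e-2
assert abs(c10_hat - c10) < 1e-2
```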
In this paper we visualize such functions on $S^2$ via glyph visualisations, which are obtained as surface plots of $\{\, |f(\mathbf{n})|\, \mathbf{n} \mid \mathbf{n} \in S^2 \,\}$, where each point on this surface is color-coded with the function value $f(\mathbf{n})$. The visualisations are thus color-coded spheres that are stretched in each direction $\mathbf{n}$ via $|f(\mathbf{n})|$.
Steerable vectors also represent steerable functions on SO(3).
In order to draw a connection between group equivariant message passing and group convolutions, as we did in Sec. 3 of the main paper, it is important to understand that steerable vectors also represent functions on the group SO(3) via an SO(3) Fourier transform. The collection of Wigner-D functions $D^{(l)}_{mn}$ forms an orthogonal basis for $L^2(\mathrm{SO}(3))$, the space of square-integrable functions on SO(3). This basis allows for a Fourier transform that maps between the function space $L^2(\mathrm{SO}(3))$ and steerable coefficient spaces; the forward and inverse Fourier transform on SO(3) are respectively given by
$$\hat{f}^{(l)}_{mn} = \int_{\mathrm{SO}(3)} f(g)\, D^{(l)}_{mn}(g)\, \mathrm{d}g, \qquad (A.10)$$
$$f(g) = \sum_{l \geq 0} (2l+1) \sum_{m=-l}^{l} \sum_{n=-l}^{l} \hat{f}^{(l)}_{mn}\, D^{(l)}_{mn}(g), \qquad (A.11)$$
with $\mathrm{d}g$ the (normalised) Haar measure of the group. Noteworthy, the forward Fourier transform generates a matrix $\hat{f}^{(l)}$ of Fourier coefficients for each $l$, rather than a vector as in the spherical case. The coefficient matrix is steerable by left multiplication with the Wigner-D matrices of the same type $l$.
A.5 Clebsch-Gordan tensor product and steerable MLPs
In a regular MLP one maps between input and output vector spaces linearly via matrix-vector multiplication and applies nonlinearities afterwards. In steerable MLPs, one maps between steerable input and output vector spaces via the Clebsch-Gordan tensor product and applies nonlinearities afterwards. Akin to the learnable weight matrix in regular MLPs, the learnable Clebsch-Gordan tensor product is the main workhorse of our steerable MLPs.
Clebsch-Gordan tensor product.
The Clebsch-Gordan (CG) tensor product allows us to map between steerable input and output spaces. While there is much to be said about tensors and tensor products in general, we here intend to focus on intuition. In general, a tensor product involves the multiplication of all components of two input vectors with each other. E.g., with two vectors $\mathbf{u} \in \mathbb{R}^n$ and $\mathbf{v} \in \mathbb{R}^m$, the tensor product is given by the outer product
$$\mathbf{u} \otimes \mathbf{v} = \mathbf{u}\, \mathbf{v}^T, \qquad (\mathbf{u} \otimes \mathbf{v})_{ij} = u_i v_j,$$
which we can flatten into an $nm$-dimensional vector via an operation which we denote with $\mathrm{vec}(\cdot)$. In our steerable setting we would like to work exclusively with steerable vectors, and as such we would like for any two steerable vectors $\tilde{\mathbf{h}}^{(l_1)} \in V_{l_1}$ and $\tilde{\mathbf{h}}^{(l_2)} \in V_{l_2}$ that the tensor product's output is again steerable with an O(3) representation $\mathbf{D}(g)$, such that the following equivariance constraint is satisfied:
$$\mathrm{vec}\!\left( \mathbf{D}^{(l_1)}(g)\, \tilde{\mathbf{h}}^{(l_1)} \otimes \mathbf{D}^{(l_2)}(g)\, \tilde{\mathbf{h}}^{(l_2)} \right) = \mathbf{D}(g)\; \mathrm{vec}\!\left( \tilde{\mathbf{h}}^{(l_1)} \otimes \tilde{\mathbf{h}}^{(l_2)} \right). \qquad (A.12)$$
Via the identity $\mathrm{vec}(\mathbf{A}\mathbf{u} \otimes \mathbf{B}\mathbf{v}) = (\mathbf{A} \otimes \mathbf{B})\, \mathrm{vec}(\mathbf{u} \otimes \mathbf{v})$, in which $\mathbf{A} \otimes \mathbf{B}$ denotes the Kronecker product of matrices, we can show that the output is indeed steerable:
$$\mathrm{vec}\!\left( \mathbf{D}^{(l_1)}(g)\, \tilde{\mathbf{h}}^{(l_1)} \otimes \mathbf{D}^{(l_2)}(g)\, \tilde{\mathbf{h}}^{(l_2)} \right) = \left( \mathbf{D}^{(l_1)}(g) \otimes \mathbf{D}^{(l_2)}(g) \right) \mathrm{vec}\!\left( \tilde{\mathbf{h}}^{(l_1)} \otimes \tilde{\mathbf{h}}^{(l_2)} \right).$$
The resulting vector is thus steered by the representation $\mathbf{D}(g) = \mathbf{D}^{(l_1)}(g) \otimes \mathbf{D}^{(l_2)}(g)$. Since any matrix representation of O(3) can be reduced to a direct sum of Wigner-D matrices (see (A.5)), the resulting vector can be organised via a change of basis into parts that individually transform via Wigner-D matrices of different type. I.e., $V_{l_1} \otimes V_{l_2} = V_{|l_1 - l_2|} \oplus V_{|l_1 - l_2| + 1} \oplus \cdots \oplus V_{l_1 + l_2}$, with $V_l$ the steerable sub-vector spaces of type $l$.
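The Kronecker identity and the familiar $1 \otimes 1 = 0 \oplus 1 \oplus 2$ decomposition can be checked numerically for type-1 inputs: the dot product is the type-0 (invariant) part of $\mathbf{u} \otimes \mathbf{v}$ and the cross product its type-1 (equivariant) part. A numpy sketch, restricted to SO(3) since the cross product is a pseudovector and flips sign under reflections:

```python
import numpy as np

rng = np.random.default_rng(4)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] *= -1  # restrict to SO(3)

u, v = rng.normal(size=3), rng.normal(size=3)

# the flattened tensor product of two rotated type-1 vectors is steered by
# the Kronecker product of their representations, cf. Eq. (A.12)
assert np.allclose(np.kron(R @ u, R @ v), np.kron(R, R) @ np.kron(u, v))

# the 9-dimensional product space decomposes as 1 x 1 = 0 + 1 + 2:
# the dot product is the type-0 (invariant) part of u tensor v ...
assert np.isclose(np.dot(R @ u, R @ v), np.dot(u, v))
# ... and the cross product its type-1 (equivariant) part
assert np.allclose(np.cross(R @ u, R @ v), R @ np.cross(u, v))
```

The remaining five components form the type-2 part, the symmetric traceless part of $\mathbf{u}\mathbf{v}^T$.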
With the CG tensor product we directly obtain the vector components for the steerable subvectors of type $l$ as follows. Let $\tilde{\mathbf{h}}^{(l_1)}$ denote a steerable vector of type $l_1$ and