Machine-learning applications have garnered widespread adoption over the last several years. Graph Neural Networks extend machine-learning models to graph-structured data. Training and inference on graph neural networks involve graph convolution operations that can be equivalently expressed as the product of three matrices. In this work, we propose FusedGCN, a custom systolic architecture that computes this three-matrix product in a fused, i.e., combined, manner. FusedGCN supports compressed sparse representations and tiled computation, which allow the design to adapt to the available input/output bandwidth without losing the regularity of a systolic architecture. Experimental results show that FusedGCN achieves lower execution times than the best-performing state-of-the-art architecture when computing representative GCN applications. Most importantly, this result is achieved while consuming only marginally more area and power than a traditional systolic array used for two-matrix multiplication.
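The three-matrix formulation of graph convolution can be sketched as follows; this is an illustrative NumPy sketch, not the paper's implementation, and the symbols `A_hat`, `X`, and `W` follow common GCN notation (normalized adjacency, node features, layer weights) rather than anything defined in this abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_out = 6, 4, 3          # nodes, input features, output features

# Illustrative graph: random sparse adjacency with self-loops,
# then a simple row normalization (one of several common choices).
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)

X = rng.random((n, f_in))          # node-feature matrix
W = rng.random((f_in, f_out))      # trainable weight matrix

# A GCN layer's propagation is the three-matrix product A_hat @ X @ W.
# The two association orders below are mathematically equivalent; a
# fused design computes the product in one pass rather than as two
# separate matrix multiplications.
Z_left = (A_hat @ X) @ W
Z_right = A_hat @ (X @ W)
assert np.allclose(Z_left, Z_right)
```

The choice of association order changes the intermediate matrix size (and hence memory traffic), which is one motivation for fusing the computation instead of staging two separate multiplications.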