Using neural networks to solve variational problems and other scientific machine learning tasks has been limited by a lack of consistency and by the inability to exactly integrate expressions involving neural network architectures. We address these limitations by formulating the polynomial-spline network, a novel shallow multilayer perceptron (MLP) architecture that incorporates free-knot B-spline basis functions into a polynomial mixture-of-experts model. Effectively, our architecture performs piecewise polynomial approximation on each cell of a trainable partition of unity while ensuring that the MLP and its derivatives can be integrated exactly, obviating any reliance on sampling or quadrature and enabling error-free computation of variational forms. We demonstrate hp-convergence for regression problems at the rates expected from approximation theory, and we solve elliptic problems in one and two dimensions, comparing favorably to adaptive finite elements.
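To make the mixture-of-experts structure concrete, the following is a minimal one-dimensional sketch, not the paper's implementation: linear (hat-function) B-splines on a knot vector form a partition of unity, and each basis function gates a polynomial expert. The function names `hat_basis` and `psn_eval`, the choice of linear splines, and the dense-coefficient layout are illustrative assumptions; the actual architecture may use higher-order splines and trainable knots.

```python
import numpy as np

def hat_basis(x, knots):
    """Linear B-spline (hat) functions on the given knots.

    Returns an array of shape (len(knots), len(x)) whose columns
    sum to 1, i.e. a partition of unity on [knots[0], knots[-1]].
    """
    phi = np.zeros((len(knots), len(x)))
    for i, k in enumerate(knots):
        if i > 0:                      # rising edge from the left neighbor
            left = knots[i - 1]
            m = (x >= left) & (x <= k)
            phi[i, m] = (x[m] - left) / (k - left)
        if i < len(knots) - 1:         # falling edge toward the right neighbor
            right = knots[i + 1]
            m = (x >= k) & (x <= right)
            phi[i, m] = (right - x[m]) / (right - k)
    return phi

def psn_eval(x, knots, coeffs):
    """Evaluate the sketched polynomial-spline network.

    coeffs has shape (len(knots), degree + 1); row i holds the
    coefficients (constant term first) of the polynomial expert
    gated by the i-th hat function.
    """
    phi = hat_basis(x, knots)
    # np.polyval expects highest-order coefficients first, hence c[::-1].
    experts = np.stack([np.polyval(c[::-1], x) for c in coeffs])
    return (phi * experts).sum(axis=0)
```

Because each term is a B-spline times a polynomial, the output (and its derivatives) is piecewise polynomial, so integrals over each knot interval can be computed in closed form rather than by quadrature. A quick sanity check: when every expert shares the same coefficients, the partition of unity makes the network reproduce that polynomial exactly.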