We present a content-adaptive, low-complexity video frame synthesis algorithm. Our approach applies content adaptation via dynamic convolutions to IFRNet, a widely used frame synthesis algorithm. By introducing dynamic convolutions into both the pyramid encoder and the coarse-to-fine decoders of IFRNet, we enforce sparsity, restricting the computationally expensive operations to only the necessary pixels. Training for specific sparsity targets allows us to achieve lower overall computational complexity than IFRNet while maintaining comparable performance. We demonstrate the performance and content adaptivity in two test scenarios and show savings in computational budget of approximately 20-40% compared to the baseline IFRNet.
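To make the sparsity idea concrete, the following is a minimal, hypothetical NumPy sketch (not the paper's implementation): a spatial gate, here derived from the inter-frame difference, selects the "necessary" pixels, and an expensive filtering operation runs only at those locations while all other pixels are passed through. The function name `gated_conv2d`, the thresholded-difference gate, and the box-filter kernel are all illustrative assumptions; in the actual method the gates and filters are learned and trained toward a target sparsity.

```python
import numpy as np

def gated_conv2d(x, kernel, mask):
    """Apply a 3x3 convolution only where mask is True; copy the input
    through elsewhere. The cost of the loop scales with the number of
    active (gated-on) pixels rather than with the full frame size."""
    pad = np.pad(x, 1, mode="edge")
    out = x.copy()
    for i, j in zip(*np.nonzero(mask)):
        out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    return out

# Two toy 8x8 frames: a small region "moves" between frame 0 and frame 1.
f0 = np.zeros((8, 8))
f1 = np.zeros((8, 8))
f1[3:5, 3:5] = 1.0

# Hypothetical content-adaptive gate: only pixels that changed are processed.
mask = np.abs(f1 - f0) > 0.5

# Box blur as a stand-in for a learned dynamic filter.
kernel = np.full((3, 3), 1.0 / 9.0)

out = gated_conv2d(f1, kernel, mask)
density = mask.mean()  # fraction of pixels actually convolved (here 4/64)
```

On this toy input only about 6% of the pixels incur the convolution cost, which is the mechanism behind the compute savings: static regions skip the expensive path entirely.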