Off-chip DRAM accesses limit the energy efficiency and training time of state-of-the-art deep neural networks (DNNs). Compute-in-memory (CIM) accelerators that leverage pseudo-crossbar arrays and on-chip weight storage have emerged as alternatives to GPUs for fast and efficient training. However, this comes at the cost of reduced training accuracy due to weight cell non-idealities such as low bit precision, nonlinearity, asymmetry, a low G_max/G_min ratio, and slow programming speed. Here, we engineer the ferroelectric domain structure in a carefully designed superlattice (SL) ferroelectric (FE)/dielectric (DE) stack to experimentally demonstrate high-precision FEFET analog weight cells with excellent linearity and symmetry during potentiation and depression. We demonstrate switching times as short as 100 ns in the SL-based ferroelectric capacitor (FECAP), with no degradation in either retention or endurance. We integrate the SL FE/DE/FE stack with back-end-of-line (BEOL)-compatible indium tungsten oxide transistors to demonstrate 128 stable conductance states with improved linearity and symmetry. System-level analysis of SL-FEFET-based CIM accelerators shows an excellent 94.1% online learning accuracy without degrading any other performance parameter, with potential for monolithic 3D integration.
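As a rough illustration of how the linearity and symmetry non-idealities quoted above are typically quantified in system-level analyses, the sketch below implements the standard exponential pulse-update model widely used in CIM device benchmarking (e.g., in NeuroSim-style simulators). This is not the paper's own analysis code: the conductance bounds and nonlinearity coefficient A are hypothetical placeholders, with only the 128-state count taken from the abstract; a larger A approaches the ideal linear, symmetric weight cell reported here.

```python
import numpy as np

# Hypothetical device parameters for illustration only; the paper's
# extracted SL-FEFET values are not reproduced here.
G_MIN, G_MAX = 1e-6, 10e-6   # conductance bounds (S), assumed
N_STATES = 128               # stable conductance states, as in the abstract

def potentiation(p, A, n=N_STATES, g_min=G_MIN, g_max=G_MAX):
    """Conductance after p potentiation pulses (exponential update model).

    A sets the nonlinearity: A -> infinity gives a perfectly linear staircase.
    """
    B = (g_max - g_min) / (1.0 - np.exp(-n / A))
    return g_min + B * (1.0 - np.exp(-p / A))

def depression(d, A, n=N_STATES, g_min=G_MIN, g_max=G_MAX):
    """Conductance after d depression pulses, starting from G_max."""
    B = (g_max - g_min) / (1.0 - np.exp(-n / A))
    return g_max - B * (1.0 - np.exp(-d / A))

pulses = np.arange(N_STATES + 1)
for A in (1e6, 20.0, 5.0):  # near-linear cell vs. increasingly nonlinear cells
    g_up = potentiation(pulses, A)
    g_dn = depression(pulses, A)
    # Asymmetry: worst-case mismatch between the potentiation curve and the
    # depression curve traversed in reverse, normalized to the full G range.
    asym = np.max(np.abs(g_up - g_dn[::-1])) / (G_MAX - G_MIN)
    print(f"A = {A:>9.1f}  normalized asymmetry = {asym:.3f}")
```

In simulators of this kind, online learning accuracy (such as the 94.1% figure above) is obtained by applying these non-ideal conductance updates to every weight during training, so the closer the potentiation and mirrored depression curves coincide, the smaller the accuracy loss relative to an ideal floating-point baseline.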