The deployment of machine learning algorithms on edge devices is challenging due to the conflicting requirements of low power consumption and high computing power. Neuromorphic systems operate in a parallel and distributed way, with multiple processing elements that mimic or simulate biological neurons in silicon. This approach is promising for equipping sensors with always-on signal-processing capabilities, as it natively executes spiking neural network algorithms on dedicated hardware. Analog silicon neurons exhibit the lowest energy per spike, but they are traditionally implemented in planar technology processes. To increase their density, and hence the computing power available in the same area, it is essential to explore implementations in more scaled technology nodes. In this work, an analog two-variable spiking neuron is described and implemented in 16 nm FinFET technology, adopting a supply voltage of 400 mV and operating the transistors in the subthreshold region to reduce power consumption. Transient simulations of the regular-spiking and fast-spiking patterns are reported, showing the accelerated timescale at which the spiking neuron operates, with inter-spike intervals of $\mathbf{38}\ \mu \mathrm{s}$ and $\mathbf{7}\ \mu \mathrm{s}$, respectively. With an energy per spike of 117.5 fJ in the regular-spiking configuration and the possibility of tuning external voltage references to modify the resulting spiking pattern, the presented circuit is suitable for developing large-scale neuromorphic architectures.
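
The abstract does not give the neuron's governing equations, so as an illustrative sketch only, the behavioral idea of a two-variable spiking neuron whose pattern changes with tunable parameters can be shown with Izhikevich's standard two-variable model (this is an assumption, not the circuit's actual dynamics, and the hardware operates on a much faster, microsecond timescale than the millisecond constants used here):

```python
def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    """Forward-Euler simulation of Izhikevich's two-variable neuron model.

    v: membrane potential (mV), u: recovery variable; returns spike times (ms).
    Illustrative stand-in for the paper's circuit, not its actual equations.
    """
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        if v >= 30.0:              # spike detected: reset both state variables
            spikes.append(step * dt)
            v, u = c, u + d
        # two coupled state-variable updates (Izhikevich's standard form)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# Regular spiking (RS) vs fast spiking (FS): only the parameters change,
# loosely analogous to retuning the neuron's external voltage references.
rs = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)
fs = izhikevich(a=0.10, b=0.2, c=-65.0, d=2.0)
```

In this sketch the fast-spiking parameter set yields a shorter mean inter-spike interval than the regular-spiking one, mirroring the pattern-vs-interval relationship reported for the circuit.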