Deep neural networks (DNNs) have recently shown impressive performance as acoustic models for large vocabulary continuous speech recognition (LVCSR) tasks. Typically, the frame shift of the network outputs is much shorter than the average duration of the modeling units, so the posterior vectors of neighbouring frames tend to be similar. This similarity, combined with the stronger discrimination of neural networks relative to conventional acoustic models, suggests that frames of the network output can be removed according to the distance between posterior vectors, which effectively reduces the computational cost of beam search. Building on this observation, this paper introduces a novel variable-frame-rate decoding approach based on neural network computation that accelerates beam search for speech recognition with only a minor loss of accuracy. By computing the distances between posterior vectors and removing frames whose posterior vector is similar to that of the previous frame, the approach exploits the redundancy between neighbouring frames and performs beam search much faster. Experiments on LVCSR tasks show a 2.4-times decoding speedup compared to a typical framewise decoding implementation.
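The frame-selection idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Euclidean distance metric, the threshold value, and the function name are all hypothetical choices, and a frame is dropped when its posterior vector lies within the threshold of the previous frame's vector.

```python
import numpy as np

def select_frames(posteriors, threshold=0.1):
    """Return indices of frames to keep for beam search.

    posteriors: array of shape (T, num_classes), one posterior
    vector per frame. A frame is dropped when its posterior vector
    is within `threshold` (Euclidean distance, a hypothetical
    choice) of the previous frame's vector.
    """
    kept = [0]  # always keep the first frame
    for t in range(1, len(posteriors)):
        # Distance to the immediately preceding frame, as in the
        # variable-frame-rate idea sketched above.
        if np.linalg.norm(posteriors[t] - posteriors[t - 1]) > threshold:
            kept.append(t)
    return kept
```

Beam search would then advance only over the kept frames, so runs of near-identical posteriors collapse into a single search step, which is where the decoding speedup comes from.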