The softmax activation function is widely used in deep learning for multi-class classification tasks. However, because of its exponentiation operations, it is costly to implement in hardware. Without compromising its mathematical properties or behavior, we propose a new hardware-friendly softmax variant: 2β-softmax. Experiments on the MNIST dataset demonstrate the effectiveness of the proposed method. Compared with the traditional softmax function, the proposed design reduces power consumption and area by more than 50% in hardware implementation.
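To make the idea concrete, the following Python sketch contrasts the conventional softmax with a base-2 variant in which the natural exponential e^x is replaced by 2^(βx); the function name `two_beta_softmax`, the parameter `beta`, and the default β = 1/ln 2 (which makes the two forms mathematically identical) are illustrative assumptions, since the abstract does not give the exact formulation.

```python
import numpy as np

def softmax(x):
    """Conventional softmax: exponentiation base e."""
    z = np.exp(x - np.max(x))            # subtract max for numerical stability
    return z / np.sum(z)

def two_beta_softmax(x, beta=1.0 / np.log(2.0)):
    """Hypothetical base-2 sketch: 2^(beta*x) maps to cheaper power-of-two hardware.

    With beta = 1/ln(2), 2^(beta*x) equals e^x exactly, so the output matches
    the conventional softmax; other fixed beta values would trade accuracy
    for simpler arithmetic units.
    """
    z = np.exp2(beta * (x - np.max(x)))  # np.exp2 models a power-of-two unit
    return z / np.sum(z)

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))
print(two_beta_softmax(logits))          # identical when beta = 1/ln(2)
```

The appeal in hardware is that a power of two factors into an integer shift plus a small fractional correction, avoiding a full exponential unit; this is the general motivation for base-2 softmax designs, not a description of the paper's specific circuit.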