Most deep neural network (DNN) accelerators for edge devices adopt fixed-point (FXP) data representation due to restricted hardware resources. However, the representable range of the FXP format is limited, leading to accuracy degradation, particularly at small bit-widths or in DNN training, which requires a wide dynamic range. An alternative is floating-point (FLP) DNN designs, which usually incur much larger hardware costs than FXP-based alternatives. Recently, a new FLP-like format, called posit, has been proposed, which offers a larger dynamic range than conventional FLP at the same bit-width. However, the decoding/encoding of posit numbers accounts for a significant portion of the area cost. In this paper, we present a new DNN design based on a so-called block posit format, which achieves similar classification accuracy at a much smaller hardware area cost. Implementation results show that the proposed block-posit-based DNN accelerator reduces area by more than 80% without classification accuracy loss compared to the normal posit-based design.
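To illustrate why posit decoding is costly in hardware, the sketch below decodes an n-bit posit (here posit8 with one exponent bit, an assumed configuration for illustration) following the standard sign/regime/exponent/fraction layout. Note the variable-length regime field: the run-length detection and the shifting it implies are what make posit decode/encode logic expensive relative to FXP datapaths.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits (illustrative sketch)."""
    bits &= (1 << n) - 1
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR (not a real)
    sign = -1.0 if bits & (1 << (n - 1)) else 1.0
    if sign < 0:
        bits = (-bits) & ((1 << n) - 1)  # two's complement of negative posits
    s = format(bits, f"0{n}b")[1:]       # remaining bits after the sign
    # Regime: run of identical leading bits, terminated by the opposite bit
    r0 = s[0]
    run = len(s) - len(s.lstrip(r0))
    k = run - 1 if r0 == "1" else -run
    tail = s[run + 1:]                   # skip regime and its terminator
    exp_bits = tail[:es]                 # truncated exponent bits read as 0
    e = (int(exp_bits, 2) << (es - len(exp_bits))) if exp_bits else 0
    frac_bits = tail[es:]
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 1 << (1 << es)               # useed = 2^(2^es)
    return sign * (useed ** k) * (2 ** e) * (1.0 + f)
```

For example, `decode_posit(0b01000000)` yields 1.0 and `decode_posit(0b01100000)` yields 4.0 (regime k = 1, useed = 4). Because the regime length varies per value, every operand needs this decode step before arithmetic, which is the overhead block-style formats aim to amortize.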