In this paper, we investigate digital twin (DT)-assisted adaptive deep neural network (DNN) inference in the Industrial Internet of Things (IIoT). We consider a scenario in which an edge server hosts a full-size DNN for high-accuracy inference, while an IIoT device runs a lightweight DNN for fast on-device inference. The IIoT device generates computing tasks, such as object recognition, to be processed by a DNN. For each task, a local controller at the network edge decides, before the task enters each layer of the lightweight DNN, whether or not to offload the task to the edge server. The objective is to find the task offloading point that maximizes a utility function capturing delay, inference accuracy, and on-device energy consumption. To achieve this objective, we propose an online DT-assisted task offloading scheme, which exploits DTs to capture the task processing status at the IIoT device and the workload at the edge server. Simulation results demonstrate that the proposed adaptive DT-assisted DNN inference scheme achieves excellent performance in terms of delay, inference accuracy, and on-device energy consumption.
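The per-layer offloading decision described above can be viewed as a utility maximization over candidate offloading points. The following is a minimal illustrative sketch, not the paper's algorithm: the utility weights, the per-layer accuracy/delay/energy estimates, and the function names are all hypothetical, assuming the DTs supply such estimates for each candidate point.

```python
# Hypothetical sketch of a per-layer offloading decision: all weights
# and per-layer estimates below are illustrative, not from the paper.

def utility(accuracy, delay, energy, w_acc=1.0, w_delay=0.5, w_energy=0.3):
    """Reward inference accuracy; penalize delay and on-device energy."""
    return w_acc * accuracy - w_delay * delay - w_energy * energy

def best_offloading_point(candidates):
    """Return the layer index with the highest utility.

    candidates: list of (layer_index, accuracy, delay_s, energy_j)
    tuples, one per possible offloading point (estimated, e.g., from
    the DTs of the device and the edge server).
    """
    return max(candidates, key=lambda c: utility(c[1], c[2], c[3]))[0]

# Illustrative trade-off: offloading earlier uses more of the full-size
# DNN (higher accuracy, more delay); finishing on-device is faster but
# less accurate and spends more device energy on computation.
candidates = [
    (0, 0.95, 0.80, 0.10),  # offload immediately to the edge server
    (2, 0.93, 0.50, 0.25),  # offload after layer 2
    (4, 0.88, 0.30, 0.40),  # offload after layer 4
    (6, 0.82, 0.15, 0.55),  # run the lightweight DNN to completion
]
print(best_offloading_point(candidates))
```

An online scheme would re-evaluate this choice per task as the DT-reported device status and edge-server workload change the delay and energy estimates.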