Graph neural network (GNN) accelerators have achieved substantial speedups on static graphs but remain inefficient on dynamic graphs. The reason is that in dynamic graphs, updating even a few vertices introduces enormous redundant neighbor re-aggregation and feature re-updating. Moreover, the evolving graph structure makes graph preprocessing impractical and incurs random memory accesses that can only be determined at runtime. In this article, we propose DeltaGNN, an algorithm and accelerator co-design for GNN acceleration on dynamic graphs. On the algorithm side, we first propose a delta updating algorithm, which identifies the sensitivity of vertices and reduces the aggregation and updating operations on insensitive vertices without compromising accuracy. On the hardware side, we propose a novel sensitivity remapping cache that accommodates the dissimilar reusability of vertices of different sensitivities without requiring preprocessing. To tackle workload imbalance, we implement feature-disperse execution, which supports distinct feature-updating paths for sensitive and insensitive vertices. Moreover, we introduce vertex feature coalescing to reduce the number of feature vectors fetched by exploiting the locality within vertex accesses. We then propose lightweight yet effective hardware optimizations that make our design applicable to conventional GNN accelerators. Compared to state-of-the-art GNN accelerators, DeltaGNN achieves average speedups of $1.5\times $ – $11.8\times $ and energy-efficiency improvements of $1.3\times $ – $8.6\times $ on dynamic graphs.
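The delta-updating idea described above can be illustrated with a minimal sketch: after a batch of vertex updates, only the affected vertices whose new aggregate deviates from a cached value beyond a sensitivity threshold are re-aggregated, while insensitive vertices reuse cached features. This is not DeltaGNN's actual algorithm; the function names, scalar features, mean aggregator, and threshold below are all illustrative assumptions.

```python
# Hypothetical sketch of delta updating on a dynamic graph.
# Vertices whose aggregated input barely changes ("insensitive")
# reuse their cached features; only "sensitive" vertices recompute.
# All names and the threshold are illustrative, not from the paper.

def mean_aggregate(features, neighbors):
    """Mean of neighbor feature values (scalar features for simplicity)."""
    if not neighbors:
        return 0.0
    return sum(features[n] for n in neighbors) / len(neighbors)

def delta_update(features, adj, changed, cache, threshold=0.1):
    """Re-aggregate only vertices touched by `changed` vertices whose
    new aggregate deviates from the cached one by more than `threshold`."""
    # Vertices whose neighborhoods include an updated vertex.
    affected = {v for c in changed for v in adj if c in adj[v]} | set(changed)
    new_feats = dict(features)
    recomputed = []
    for v in affected:
        agg = mean_aggregate(features, adj[v])
        if abs(agg - cache.get(v, 0.0)) > threshold:  # sensitive vertex
            cache[v] = agg
            new_feats[v] = agg  # stand-in for the full feature-update step
            recomputed.append(v)
        # Insensitive vertices keep cached features: no re-aggregation cost.
    return new_feats, recomputed
```

In this sketch, a small perturbation to one vertex leaves every affected aggregate within the threshold, so nothing is recomputed, whereas a large perturbation forces its neighbors to re-aggregate; this is the redundancy-skipping behavior the abstract attributes to delta updating.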