Current edge cloud resource management approaches generally target special-purpose clusters and can be optimized for only one load variation at a time. However, large general-purpose industrial IoT (IIoT) cloud platforms comprise multiple system architectures, which provide a wide range of resources and service characteristics. Moreover, applications vary widely in both type and resource demand, which leads to drastic fluctuations in energy consumption and to resource heterogeneity. Existing edge computing architectures and scheduling algorithms do not account for the impact of such dynamic factors on the computational load. In this paper, a scheduling mechanism based on game theory and queuing networks is proposed for resource allocation and load balancing in IIoT.
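To give a rough sense of how game theory and queuing networks can be combined for load balancing, the following sketch models each edge server as an M/M/1 queue and lets tasks repeatedly best-respond by choosing the server that minimizes their expected response time. This is a minimal illustration under assumed conditions (unit-rate tasks, exponential service, the `best_response_assignment` helper and all parameter names are hypothetical), not the mechanism proposed in this paper; because the game is a congestion game, repeated best responses converge to a pure Nash equilibrium.

```python
# Hypothetical sketch (assumptions, not the paper's method): servers are
# M/M/1 queues with service rates mu[j]; each unit-rate task greedily picks
# the server minimizing its expected sojourn time 1/(mu_j - lambda_j).

def mm1_response_time(mu, lam):
    """Expected sojourn time of an M/M/1 queue; infinite if overloaded."""
    return 1.0 / (mu - lam) if lam < mu else float("inf")

def best_response_assignment(mu, n_tasks, max_rounds=100):
    """Assign n_tasks unit-rate tasks to servers with rates mu via best response."""
    assign = [0] * n_tasks                      # start every task on server 0
    load = [0.0] * len(mu)
    load[0] = float(n_tasks)
    for _ in range(max_rounds):
        changed = False
        for t in range(n_tasks):
            cur = assign[t]
            load[cur] -= 1.0                    # remove the task, then re-choose
            best = min(range(len(mu)),
                       key=lambda j: mm1_response_time(mu[j], load[j] + 1.0))
            load[best] += 1.0
            if best != cur:
                assign[t], changed = best, True
        if not changed:                         # pure Nash equilibrium reached
            break
    return assign, load

# Example: a fast server (rate 10) and a slow one (rate 5) sharing 9 tasks.
assign, load = best_response_assignment(mu=[10.0, 5.0], n_tasks=9)
```

In this toy setup the equilibrium places more tasks on the faster server, equalizing (up to one task) the marginal response times, which is the intuition behind game-theoretic load balancing over queuing models.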