Security is a critical concern in Internet-of-Things (IoT) environments, including the industrial IoT. One way to enhance security is to deploy Network Intrusion Detection Systems (NIDS) based on machine learning (ML) models at the network edge, for example on gateway devices. However, the resource constraints of these devices pose challenges for implementing ML models. This study examines the impact of training set size on the performance and resource usage of One-Class ML models, which appear particularly well suited to this use case. The results indicate that the One-Class Support Vector Machine, Isolation Forest, and Elliptic Envelope models are suitable for resource-constrained devices owing to their small model size, short classification time, and consistent performance as the training set grows. The Local Outlier Factor model achieved a high detection rate and low false alarm rate, but at the cost of a large model size and long classification time. Our results can help in developing more efficient and effective network intrusion detection for IoT systems.
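
As a rough illustration of the comparison described above, the sketch below fits the four one-class models and measures serialized model size and classification time. This is a minimal example assuming scikit-learn and synthetic stand-in data, not the paper's actual dataset, features, or hyperparameters.

```python
# Minimal sketch: compare four one-class models on model size and
# classification time. Synthetic data stands in for benign traffic
# features; this is NOT the paper's experimental setup.
import pickle
import time

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.covariance import EllipticEnvelope
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 10))  # stand-in for benign flow features
X_test = rng.normal(size=(1000, 10))

models = {
    "One-Class SVM": OneClassSVM(nu=0.01),
    "Isolation Forest": IsolationForest(random_state=0),
    "Elliptic Envelope": EllipticEnvelope(random_state=0),
    # novelty=True enables predict() on unseen samples
    "Local Outlier Factor": LocalOutlierFactor(novelty=True),
}

for name, model in models.items():
    model.fit(X_train)  # one-class models train on benign traffic only
    size_kb = len(pickle.dumps(model)) / 1024  # serialized model size
    start = time.perf_counter()
    model.predict(X_test)  # +1 = normal, -1 = anomaly
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {size_kb:.1f} KB, {elapsed_ms:.1f} ms / {len(X_test)} samples")
```

On a gateway-class device, the serialized size and per-batch prediction time measured this way are the kinds of resource metrics the abstract refers to; training set size can be varied by subsampling `X_train`.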