Many emerging IoT systems adopt model-based design and operation: an appropriate model is first trained on input/output sensory data streams and then used for prediction and estimation. We address a simple yet fundamental question about securing such systems: can we directly secure the trained model, rather than the sensor data streams? Departing from data-oriented security schemes, we design MORSE, which offers model obfuscation to IoT applications. The core of our design is a novel data-dependent sampling scheme that hides the dependency between input and output variables. The sampling leverages the dependency structure encoded in the Bayesian network and enables efficient processing. MORSE further adopts a shuffling method to complement the sampling. Consequently, an adversary cannot recover the real model from the observed data, while an authentic MORSE user who is aware of the scheme can reconstruct it. We analyze the security and convergence of MORSE and confirm its effectiveness via empirical evaluations.
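To make the shuffling intuition concrete, the following is a minimal sketch of a keyed shuffle, not MORSE's actual algorithm (which combines it with the data-dependent sampler): outputs are permuted with a permutation derived from a secret key, so an observer of the stored data cannot pair inputs with their true outputs, while a key holder inverts the permutation exactly. The function names `keyed_shuffle` and `unshuffle` are hypothetical, introduced here for illustration only.

```python
import numpy as np


def keyed_shuffle(y, key):
    """Permute outputs y using a permutation derived from a secret key.

    Illustrative sketch only (hypothetical helper, not the paper's
    algorithm): without the key, the input/output pairing is hidden.
    """
    rng = np.random.default_rng(key)      # key seeds the permutation
    perm = rng.permutation(len(y))
    return y[perm]


def unshuffle(y_shuffled, key):
    """An authentic user holding the key inverts the permutation."""
    rng = np.random.default_rng(key)      # regenerate the same permutation
    perm = rng.permutation(len(y_shuffled))
    inv = np.empty_like(perm)
    inv[perm] = np.arange(len(perm))      # invert: inv[perm[i]] = i
    return y_shuffled[inv]


if __name__ == "__main__":
    y = np.arange(100)
    y_obf = keyed_shuffle(y, key=42)
    assert np.array_equal(unshuffle(y_obf, key=42), y)  # key holder recovers y
```

In MORSE itself, shuffling complements the data-dependent sampling rather than standing alone; this fragment only shows why knowledge of the scheme (here, the key) separates an authentic user from an adversary.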