The iterative linear quadratic regulator (iLQR) has gained wide popularity for trajectory optimization problems with nonlinear system models. However, as a model-based shooting method, it relies heavily on an accurate system model to update the optimal control actions and the trajectory determined by forward integration, making it vulnerable to inevitable model inaccuracies. Recently, learning-based methods for optimal control have made substantial progress in handling unknown system models, particularly when the system interacts with its environment in complex ways. Yet these approaches typically require an incremental machine learning method or a deep neural network to fit large amounts of sampled motion data, and existing works mainly focus on capturing the whole motion pattern of complex robot systems from trial data, which is inherently noisy and hard to identify with any simple predefined structure. Instead, we present Neural-iLQR, a learning-aided shooting method over the unconstrained control space, in which a neural network with a simple structure is used to drive the optimization process in the optimal direction. Through comprehensive evaluations on two illustrative control tasks with three different lightweight network structures, the proposed method is shown to significantly outperform conventional iLQR, achieving fast and continuous cost convergence in the presence of system-model inaccuracies, which demonstrates the generalizability and robustness of the proposed learning-aided method.
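The sensitivity of shooting methods to model error can be illustrated with a minimal sketch. The toy dynamics, horizon, and control sequence below are illustrative assumptions, not the systems or networks used in the paper: a planner that forward-integrates an inaccurate model predicts a trajectory that drifts away from the one the true system actually executes, so any update computed from the model's rollout is biased.

```python
import numpy as np

def rollout(x0, controls, dynamics):
    """Forward-integrate a trajectory under the given dynamics."""
    xs = [x0]
    for u in controls:
        xs.append(dynamics(xs[-1], u))
    return np.array(xs)

# True system: a damped double integrator (assumed for illustration).
def f_true(x, u):
    pos, vel = x
    return np.array([pos + 0.1 * vel, 0.95 * vel + 0.1 * u])

# Inaccurate model: the damping coefficient is misspecified (1.0 vs 0.95).
def f_model(x, u):
    pos, vel = x
    return np.array([pos + 0.1 * vel, 1.0 * vel + 0.1 * u])

x0 = np.array([1.0, 0.0])
controls = [-0.5] * 20  # a fixed open-loop control sequence

traj_model = rollout(x0, controls, f_model)  # what the planner predicts
traj_true = rollout(x0, controls, f_true)    # what the system executes

# The prediction drifts from the executed trajectory, so derivatives
# evaluated along the model's rollout point in a biased direction.
drift = np.linalg.norm(traj_model - traj_true, axis=1).max()
print(f"max state prediction error over horizon: {drift:.3f}")
```

The mismatch grows with the horizon, which is why open-loop shooting degrades under even small model errors; the proposed learning-aided term is meant to compensate for exactly this kind of residual.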