It is well known that estimating the directed dependency between high-dimensional data sequences suffers from the curse of dimensionality. To reduce the dimensionality of the data, and thereby improve estimation accuracy, we propose a new progressive input-variable selection technique. In each iteration, the remaining input variables are ranked according to a weighted sum of the amount of new information a variable provides and the variable's prediction accuracy; the highest-ranked variable is then included only if it improves the prediction accuracy significantly. A simulation study on synthetic nonlinear autoregressive and Hénon-map data shows a significant improvement over existing estimators, especially for small amounts of high-dimensional, highly correlated data.
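The selection loop described above can be sketched roughly as follows. This is only an illustrative greedy sketch, not the paper's estimator: the function names, the `alpha` and `threshold` parameters, and the use of squared Pearson correlation as stand-ins for the "new information" and "prediction accuracy" terms are all assumptions made for the example.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def progressive_select(candidates, target, alpha=0.5, threshold=0.05):
    """Greedy progressive input-variable selection (illustrative sketch).

    candidates: dict mapping variable name -> list of samples
    target:     list of samples of the output sequence
    alpha:      weight between the novelty term and the accuracy term
    threshold:  minimum marginal gain required to include a variable

    Each iteration ranks the remaining variables by a weighted sum of a
    redundancy-penalised novelty term and a prediction-accuracy term,
    then includes the top-ranked variable only if its estimated marginal
    gain is significant.
    """
    selected = []
    remaining = dict(candidates)
    while remaining:
        scores, gains = {}, {}
        for name, x in remaining.items():
            # Novelty proxy: 1 minus the largest squared correlation with
            # any already-selected variable (0 = fully redundant).
            redundancy = max(
                (pearson(x, candidates[s]) ** 2 for s in selected), default=0.0
            )
            new_info = 1.0 - redundancy
            # Accuracy proxy: squared correlation with the target.
            accuracy = pearson(x, target) ** 2
            scores[name] = alpha * new_info + (1.0 - alpha) * accuracy
            gains[name] = new_info * accuracy
        best = max(scores, key=scores.get)
        if gains[best] < threshold:
            break  # the best candidate no longer improves prediction enough
        selected.append(best)
        del remaining[best]
    return selected
```

In this toy version, a variable that duplicates an already-selected one has zero novelty and is rejected, and a variable uncorrelated with the target has near-zero accuracy and is also rejected, mimicking the abstract's stopping rule.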