Recurrent neural network language models (RNNLMs) have been shown to outperform most other advanced language modeling techniques; however, they suffer from high computational complexity. In this paper, we present techniques for building faster and more accurate RNNLMs. In particular, we show that Brown clustering of the vocabulary is much more effective than alternative clustering techniques. We also present an algorithm for converting an ensemble of RNNLMs into a single model that can be further tuned or adapted. The resulting models have significantly lower perplexity than single models with the same number of parameters. An error rate reduction of 5.9% was observed on a state-of-the-art multi-pass voice-mail-to-text ASR system using RNNLMs trained with the proposed algorithm.
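The abstract does not spell out how vocabulary clustering reduces cost, but the standard class-based factorization it alludes to computes P(w | h) = P(c(w) | h) * P(w | c(w), h), so only the class distribution and the words within one class are scored instead of the full vocabulary. The following is a minimal illustrative sketch of that factorization, not the paper's implementation: the shapes, the names word2class, W_class, and W_word, and the random class assignment standing in for a Brown-cluster mapping are all assumptions.

import numpy as np

rng = np.random.default_rng(0)

V, H, C = 10_000, 256, 100            # vocab size, hidden size, number of classes
word2class = rng.integers(0, C, size=V)  # stand-in for a Brown-cluster assignment
class_members = [np.flatnonzero(word2class == c) for c in range(C)]

W_class = 0.01 * rng.standard_normal((C, H))  # hidden state -> class logits
W_word = 0.01 * rng.standard_normal((V, H))   # hidden state -> word logits

def log_prob(word, h):
    """log P(word | h) = log P(class(word) | h) + log P(word | class, h)."""
    c = word2class[word]
    class_logits = W_class @ h                    # O(C*H) instead of O(V*H)
    class_logp = class_logits[c] - np.logaddexp.reduce(class_logits)
    members = class_members[c]                    # only words sharing word's class
    word_logits = W_word[members] @ h
    idx = int(np.searchsorted(members, word))     # position of word within its class
    word_logp = word_logits[idx] - np.logaddexp.reduce(word_logits)
    return float(class_logp + word_logp)

h = rng.standard_normal(H)  # pretend hidden state produced by the RNN
print(log_prob(1234, h))

A full softmax costs O(V*H) per predicted word, while the factorized form costs roughly O((C + V/C)*H) on average, minimized near C of about sqrt(V); the choice of clustering (e.g., Brown clusters versus frequency binning) changes accuracy rather than this asymptotic speedup, which is consistent with the abstract's claim that Brown clustering is the more effective choice.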