This poster paper presents a high-level description of the Metalogue project, which is developing a multi-modal dialogue system that implements interactive behaviours that seem natural to users while remaining flexible enough to exploit the full potential of multimodal interaction. We outline the initial work undertaken to define an open architecture for the integrated Metalogue system. The system includes the components necessary to implement the processing stages for a variety of application domains: initialization, training, information gathering, orchestration, multimodality, dialogue management, speech recognition, speech synthesis, and user modelling.