Most on-premises database architectures employ advanced database features such as replication (standby databases), consolidation and separation (pluggable databases), and high availability (HA) through clustered configurations, each exposing a large number of metrics for monitoring, assessment and error notification. These advanced architectures are explicitly designed to enforce enterprise Service Level Agreements (SLAs), maximising stability while serving critical business functions.

In cloud architectures, a trade-off between infrastructure cost and the flexibility of custom configuration has become apparent; knowing the size, configuration and placement of workloads prior to cloud adoption is therefore essential if the benefits that clouds profess to bring are to be realised. Cloud service providers advocate that cloud configurations reduce cost, accelerate key business processes and improve support for agile development lifecycles. However, when enterprises suffer from server sprawl, the need to assess each database system individually prior to cloud migration can result in paralysis by analysis. This, in turn, can lead to guesstimates of the resources perceived to be used or required.

In this thesis we investigate how to account for complex database architectures and their workloads in clouds, focusing on accounting, forecasting and capacity planning, and workload placement. We answer questions that are key to successful cloud adoption: What types of workloads are employed? Do those workloads exhibit complex data patterns such as trends, shocks or seasonality? What resources do the workloads consume now, and what will they require in the future? How should the workloads be placed to fully utilise cloud database architectures without compromising existing SLAs? By introducing new techniques and automation, we remove the need for the high level of manual technical expertise and understanding currently required of enterprises, a process that is error-prone, cumbersome and time-consuming.