Bayesian statistics provides a powerful framework for parameter estimation and uncertainty quantification in a wide range of settings. Prior information on unknown parameters can be incorporated into Bayesian models in principled ways, resulting in posterior distributions that describe the uncertainty in those parameters. In this thesis we utilise this Bayesian framework to study problems in statistical network analysis and data subsampling. In both cases, the key task is uncovering latent structure within datasets.

To perform statistical network analysis within the Bayesian setting, we use a Bayesian nonparametric (BNP) approach, in which infinitely many parameters are used to capture the complexities present in real-world data. This framework allows us to model sparsity, a property commonly exhibited by real-world networks. Furthermore, it gives us the flexibility to capture the structure of these networks by modelling the popularity of individual nodes. We focus specifically on core-periphery networks, where there exists a subset of important, highly connected nodes within the overall network, and dynamic networks, where the structure of the network evolves over time. We perform posterior inference on these models using Markov chain Monte Carlo (MCMC).

One problem shared by our BNP network models and many other complex models is that exact inference is rarely possible, and thus approximate sampling methods such as MCMC are needed. However, these methods can be prohibitively slow for very large datasets. One solution to this problem is to replace the full dataset with a small, weighted subset, known as a coreset, that captures key features of the full dataset. The key idea here is that most of the data is often redundant, and an appropriately weighted subset can capture most of the relevant information. We introduce a novel Bayesian coreset construction algorithm, which first uniformly subsamples the data and then optimises the weights.
Our method is simple to implement, significantly faster in practice than state-of-the-art methods, and comes with theoretical error guarantees.
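The subsample-then-optimise idea can be sketched as follows. This is a minimal illustration, not the thesis's actual construction: the Gaussian model, the fixed grid of parameter values, and the non-negative least-squares weight fit are all simplifying assumptions made here for the sake of a self-contained example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Stand-in "large" dataset: 10,000 draws from a unit-variance Gaussian.
x = rng.normal(loc=2.0, scale=1.0, size=10_000)

def log_lik(data, mus):
    # Per-point Gaussian log-likelihood (unit variance, constants dropped)
    # at each candidate mean; shape (len(data), len(mus)).
    return -0.5 * (data[:, None] - mus[None, :]) ** 2

# Parameter values at which the coreset should mimic the full dataset.
mus = np.linspace(0.0, 4.0, 20)
L_full = log_lik(x, mus).sum(axis=0)   # full-data log-likelihood on the grid

# Step 1: uniformly subsample a small number of points.
m = 100
idx = rng.choice(len(x), size=m, replace=False)
L_sub = log_lik(x[idx], mus)           # shape (m, 20)

# Step 2: optimise non-negative weights so that the weighted subsample
# log-likelihood matches the full-data log-likelihood on the grid.
w, _ = nnls(L_sub.T, L_full)

rel_err = np.max(np.abs(L_sub.T @ w - L_full) / np.abs(L_full))
print(f"max relative log-likelihood error: {rel_err:.2e}")
```

A practical construction would match the log-likelihood where it matters for inference (for example at samples from an approximate posterior) rather than on a hand-picked parameter grid; the grid here merely keeps the sketch short and self-contained.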