Bi-level optimization, where the objective function depends on the solution of an inner optimization problem, provides a flexible framework for solving a rich class of problems such as hyper-parameter optimization and model-agnostic meta-learning. This work puts forth the first Stochastic Bi-level Frank-Wolfe (SBFW) algorithm to solve stochastic bi-level optimization problems in a projection-free manner. We utilize a momentum-based gradient tracker that results in a sample complexity of $\mathcal{O}(\epsilon^{-3})$ for convex outer objectives and $\mathcal{O}(\epsilon^{-4})$ for non-convex outer objectives with strongly convex inner objectives. Stochastic compositional optimization problems, a special case of bi-level optimization problems entailing the minimization of a composition of two expected-value functions, are also considered within the same rubric. The proposed Stochastic Compositional Frank-Wolfe (SCFW) algorithm is shown to achieve a sample complexity of $\mathcal{O}(\epsilon^{-2})$ for convex objectives and $\mathcal{O}(\epsilon^{-3})$ for non-convex objectives, on par with the state-of-the-art rates of projection-free algorithms for single-level problems. The usefulness and flexibility of the SBFW and SCFW algorithms are demonstrated via extensive numerical tests. We show that SBFW outperforms state-of-the-art methods on the problem of matrix completion with denoising, achieving improvements of up to 82% in the wall-clock time required to reach a given level of accuracy. Furthermore, we demonstrate the improved performance of SCFW over competing projection-free variants on the policy evaluation problem in reinforcement learning.
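To make the "projection-free" and "momentum-based gradient tracker" ingredients concrete, the following is a minimal sketch of a classical Frank-Wolfe iteration with a momentum-tracked gradient estimate over the probability simplex. The objective, constraint set, step-size schedules, and function names here are illustrative placeholders chosen for this sketch, not the paper's actual SBFW construction, which additionally handles the inner (lower-level) problem.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{s in simplex} <grad, s> is a vertex (a coordinate basis vector),
    which is what makes the method projection-free."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def frank_wolfe_momentum(grad_fn, x0, T=200):
    """Frank-Wolfe with a momentum-based gradient tracker (illustrative schedules)."""
    x = x0.copy()
    d = np.zeros_like(x)              # running (momentum-tracked) gradient estimate
    for t in range(1, T + 1):
        rho = 2.0 / (t + 1)           # tracker weight (placeholder schedule)
        d = (1 - rho) * d + rho * grad_fn(x)  # momentum-based gradient tracking
        s = lmo_simplex(d)            # solve the linear subproblem; no projection
        gamma = 2.0 / (t + 2)         # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy run: minimize f(x) = ||x - c||^2 over the simplex, where c lies inside it.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_momentum(lambda x: 2.0 * (x - c), np.ones(3) / 3)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained automatically; in the stochastic setting, `grad_fn` would return a noisy gradient estimate and the tracker `d` would damp its variance.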