This paper addresses the problem of predicting certain cooking parameters for monolingual recipes extracted from various websites. We frame the problem as a set of classification tasks, which we address with a series of increasingly complex solutions: we consider several text representations (both sparse and dense) and several classifiers, investigate the effect of domain-engineered features, explore a joint learning strategy that shares information across the classification problems, and address the specific challenges of class imbalance and varying text lengths. We find that textual information alone, fed into domain-adapted pretrained language models, suffices to obtain the best classification accuracy. Moreover, a joint training approach that uniformly sums the per-task losses significantly improves accuracy on all classification tasks; a class-weighted loss aggregation does not further improve on this uniform joint training approach.
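The two loss-aggregation schemes compared in the abstract can be sketched as follows; this is a minimal illustration with hypothetical function names and loss values, not the authors' actual implementation:

```python
# Sketch of the two joint-training aggregation schemes mentioned above:
# uniform summing of the per-task losses vs. a weighted alternative.
# Function names and the example loss values are hypothetical.

def uniform_joint_loss(task_losses):
    """Uniform aggregation: simply add every task's loss."""
    return sum(task_losses)

def weighted_joint_loss(task_losses, weights):
    """Weighted aggregation: scale each task's loss before summing."""
    assert len(task_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, task_losses))

# Hypothetical per-task classification losses for three tasks.
losses = [0.7, 1.2, 0.4]
print(uniform_joint_loss(losses))
print(weighted_joint_loss(losses, [0.5, 0.3, 0.2]))
```

In a typical multi-task setup these scalars would be the per-head losses of a shared encoder, and the aggregated value would be the quantity backpropagated at each step.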