This paper introduces a novel model-free solution to a multi-objective model-following control problem, using an observer-based adaptive learning approach. The goal is to simultaneously regulate the model-following error dynamics and optimize the process variables. Integral reinforcement learning is employed to adapt three strategies: observation, closed-loop stabilization, and reference-trajectory tracking. The approach is implemented with an approximate projection estimation method under mild conditions on the learning parameters.
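To make the model-free flavor of integral reinforcement learning concrete, the sketch below runs IRL policy iteration on a hypothetical scalar plant dx/dt = a·x + b·u with quadratic cost. This is an illustrative toy, not the paper's multi-objective algorithm: the learner never reads the drift coefficient `a` (it only drives the simulation), fitting the value weight `p` from the integral Bellman equation over short data windows and then improving the feedback gain `k`. All parameter values are assumed for illustration.

```python
import numpy as np

def irl_policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0,
                         T=0.05, dt=1e-3, n_windows=40, iters=8):
    """Model-free IRL policy iteration for the scalar plant dx = a*x + b*u.

    The drift coefficient `a` is used only to simulate the plant; the
    learner sees trajectory data alone. Each iteration fits the value
    weight p from the integral Bellman equation
        p*x(t)^2 - p*x(t+T)^2 = integral of (q*x^2 + r*u^2) over [t, t+T],
    then improves the gain via k = b*p/r.
    """
    k = k0  # initial stabilizing gain (assumed available)
    p = 0.0
    for _ in range(iters):
        phis, targets = [], []
        # Collect data windows from several initial states under u = -k*x.
        for x0 in np.linspace(0.5, 2.0, n_windows):
            x = x0
            reward = 0.0
            for _ in range(int(T / dt)):
                u = -k * x
                reward += (q * x**2 + r * u**2) * dt   # running cost
                x += (a * x + b * u) * dt              # plant step (hidden from learner)
            phis.append(x0**2 - x**2)                  # value-difference basis
            targets.append(reward)
        phis, targets = np.asarray(phis), np.asarray(targets)
        # Least-squares fit of p in: p * (x0^2 - xT^2) = integral reward.
        p = float(phis @ targets / (phis @ phis))
        k = b * p / r                                  # policy improvement
    return p, k
```

With a = b = q = r = 1, the learned weight should approach the analytic Riccati solution p* = 1 + sqrt(2) and the gain k* = p*, even though the learner never uses `a` directly; this is the sense in which IRL-based designs avoid an explicit process model.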