Dialogue is a process of information exchange, in which the global background remains stable while local focuses shift. Thus, at any ongoing dialogue turn, the dialogue context contains both relevant and irrelevant semantics. Filtering out noise and selectively utilizing context paves the way to successful dialogue generation. Current work on dialogue context utilization either processes contexts as vanilla monologue text, ignoring the dynamic conversation flow, or relies on weighted strategies to fuse all contexts, where irrelevant utterances cannot be filtered out and may even overwhelm relevant ones. To address this, this paper proposes a Hard-style Selective Context Utilization method (HardSCU). We first define and measure the information density of the last utterance (query) of a dialogue, marking it as “strong” or “weak”. For a dialogue with a strong query, HardSCU directly inputs the query into an RNN-based or T5-based encoder–decoder framework to generate a response; for a dialogue with a weak query, HardSCU conducts selective context utilization for dialogue generation, where a semantic interaction module introduces relevant context semantics to enrich the query, and the co-reference relations present in the dialogue are extracted to guide the learning of the response decoder. Extensive experiments on two benchmark conversation corpora verify that HardSCU outperforms competitive baselines in generating appropriate responses for chit-chat bots, while exhibiting strong robustness to variations in dialogue length.
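The hard routing step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the abstract does not specify how information density is measured, so a simple content-word ratio with an assumed threshold stands in for it, and the function names (`information_density`, `route`) are hypothetical.

```python
# Hypothetical sketch of HardSCU's strong/weak query routing.
# The density measure (content-word ratio) and the threshold 0.5
# are assumptions for illustration, not the paper's definitions.

STOPWORDS = {"the", "a", "an", "is", "are", "i", "you", "it",
             "to", "of", "and", "do", "what"}
DENSITY_THRESHOLD = 0.5  # assumed cutoff between "weak" and "strong"


def information_density(query: str) -> float:
    """Fraction of query tokens that are content words (illustrative proxy)."""
    tokens = query.lower().split()
    if not tokens:
        return 0.0
    content = [t for t in tokens if t.strip("?.!,") not in STOPWORDS]
    return len(content) / len(tokens)


def route(query: str, context: list[str]) -> tuple[str, list[str]]:
    """Hard selection: a strong query drops the context entirely, so the
    encoder sees the query alone; a weak query keeps the context for the
    selective utilization branch (semantic interaction + co-reference)."""
    if information_density(query) >= DENSITY_THRESHOLD:
        return "strong", []
    return "weak", context
```

For example, a query such as "What do you think of it?" is mostly function words, so it would be routed to the weak branch and the context retained, whereas a self-contained query would be routed to the strong branch with the context discarded.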