Graph convolutional networks (GCNs) have proven useful in remote sensing (RS) image retrieval. They are effective because a graph representation captures the relative geometrical interactions between image regions (or segments), along with the region-wise features, in a region adjacency graph. Attention mechanisms have typically been applied to the nodes to highlight the essential features within each node. However, a significant amount of high-frequency information is lost in this setting, since each image segment is effectively summarized within a single node. To account for this and to increase the learning capacity, we propose attending over the edge/adjacency matrix to highlight the interactions among meaningful regions that contribute to supervised learning from images. We combine this novel edge attention mechanism with node attention to highlight essential image context, assigning greater importance to the meaningful neighboring regions of a relevant node. We implement the proposed context-attended GCN framework for image retrieval on the benchmark UC-Merced and PatternNet datasets and observe a notable improvement over the state of the art.
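As a rough sketch of the idea described above (not the authors' implementation), one layer combining edge attention over the region adjacency matrix with node attention could look like the following NumPy code. All names and design details here (`context_attended_gcn_layer`, the GAT-style pairwise scoring vector `a_edge`, the sigmoid node gate `w_node`) are illustrative assumptions:

```python
import numpy as np

def masked_softmax(scores, adj):
    # Row-wise softmax restricted to graph neighbors: non-adjacent
    # region pairs receive zero attention weight.
    z = np.where(adj > 0, scores, -1e9)
    e = np.exp(z - z.max(axis=1, keepdims=True)) * (adj > 0)
    return e / (e.sum(axis=1, keepdims=True) + 1e-12)

def context_attended_gcn_layer(X, A, W, a_edge, w_node):
    """One sketch layer: edge attention re-weights the adjacency
    matrix, node attention gates the node features, and a standard
    graph convolution aggregates the attended neighborhood.

    X: (N, F) region-wise node features
    A: (N, N) binary region adjacency matrix (with self-loops)
    W: (F, F_out) projection weights (assumed learned)
    a_edge: (2 * F_out,) edge-attention vector (assumed learned)
    w_node: (F_out,) node-attention vector (assumed learned)
    """
    H = X @ W                                      # (N, F_out)
    d = H.shape[1]
    # Edge attention: score every region pair (i, j) from the
    # projected features of both endpoints (additive split of the
    # concatenation, as in GAT), then a LeakyReLU nonlinearity.
    scores = (H @ a_edge[:d])[:, None] + (H @ a_edge[d:])[None, :]
    scores = np.where(scores > 0, scores, 0.2 * scores)
    A_att = masked_softmax(scores, A)              # attended adjacency
    # Node attention: a sigmoid gate highlighting essential features
    # within each node.
    gate = 1.0 / (1.0 + np.exp(-(H @ w_node)))     # (N,)
    H = H * gate[:, None]
    # Graph convolution with the attended adjacency, ReLU activation.
    return np.maximum(A_att @ H, 0.0)
```

In this sketch the attended adjacency plays the role of the usual normalized adjacency in a GCN layer, so meaningful neighboring regions contribute more to each node's updated representation.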