Low-light image enhancement aims to improve illumination intensity while restoring color information. Despite recent advances, deep learning methods still struggle with over- or under-exposure in complex lighting scenes and with poor color recovery in dark regions. To address these drawbacks, we propose a novel pipeline (called LRCR-Net) that performs Light Restoration and Color Refinement in a coarse-to-fine manner. In the coarse step, we improve the illumination adaptively while avoiding inappropriate enhancement in brighter or darker regions. This is achieved by introducing a region-calibrated residual block (RCRB) that balances local and global dependencies among different image regions. In the fine step, we retouch the color of the images enhanced in the coarse step. To achieve this goal, we propose learnable image processing operators (LIPOs), including contrast and saturation operators, that refine the color according to the input image's color and contrast information. The final result is an image with proper illumination and rich color. Experiments on four benchmark datasets (NASA, LIME, MEF, and NPE) show that our model outperforms state-of-the-art methods.
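To make the fine step concrete, the sketch below shows one common way such differentiable contrast and saturation operators can be formulated: each operator blends the image toward a reference (its per-pixel luminance, or its global mean) by a scalar strength that a network could predict. This is a minimal illustrative sketch, not the paper's actual LIPO definition; the function names, the luminance weights, and the blending form are our assumptions.

```python
import numpy as np

def saturation_op(img, s):
    # Hypothetical saturation operator: blend each pixel with its
    # luminance (Rec. 601 weights); s > 1 boosts saturation, s = 1
    # is the identity, s < 1 desaturates.
    lum = (img @ np.array([0.299, 0.587, 0.114]))[..., None]
    return np.clip(lum + s * (img - lum), 0.0, 1.0)

def contrast_op(img, c):
    # Hypothetical contrast operator: blend the image with its global
    # mean intensity; c > 1 stretches contrast, c = 1 is the identity.
    mean = img.mean()
    return np.clip(mean + c * (img - mean), 0.0, 1.0)

# Toy 2x2 RGB image with values in [0, 1].
img = np.array([[[0.2, 0.4, 0.6], [0.5, 0.5, 0.5]],
                [[0.8, 0.1, 0.3], [0.3, 0.7, 0.2]]])
out = contrast_op(saturation_op(img, 1.2), 1.1)
```

Because both operators are smooth in their strength parameters, those scalars can be predicted by a small network and trained end-to-end, which is the general idea behind learnable image processing operators.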