Since 2007, floating green tides in the Yellow Sea have become a recurrent marine ecological disaster that has attracted widespread attention from the academic community. Satellite remote sensing is the primary technology for monitoring the occurrence and development of floating green tides. However, owing to differences in satellite imaging mechanisms, spatial resolutions, revisit cycles, and the inversion algorithms applied, the green tide extents detected by different satellites can differ greatly, in some cases by an order of magnitude. To address this, drawing on satellite imaging knowledge and domain knowledge of floating algae, we propose a deep-learning algorithm for floating green tide extraction that deeply fuses detection results from optical and microwave synthetic aperture radar (SAR) images under different sea conditions. The imaging and domain knowledge incorporated mainly include the band channels and their combinations used as input, a new loss function addressing the algae-water sample imbalance, texture feature enhancement, and an attention mechanism. As a result, the deep-learning model achieves a high extraction capability, with accuracies of 97.03%/99.83% and mean intersection over union (IoU) of 48.57%/86.31%, and it is widely applicable to optical images such as MODIS/GOCI and SAR images such as Sentinel-1/Gaofen-3. Furthermore, the most widely used indicator of algal life stage is the time series of satellite-derived algae coverage, which is generally consistent with, but does not fully capture, the algae's life state. Therefore, based on the imaging mechanisms of the two satellite types, optical and SAR, this paper redefines and derives an important physical parameter of floating algae, the "floating ratio" of green algae patches, to represent the entire life cycle of "emergence-outbreak-maintenance-dissipation" more accurately.
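The abstract mentions a loss function for the algae-water sample imbalance and reports mean IoU as an evaluation metric. As an illustrative sketch only (the paper's exact loss is not specified here), a soft Dice loss is one common imbalance-robust choice for such segmentation tasks, and mean IoU over the two classes (algae, water) can be computed as below; all function names and the flat-list mask representation are assumptions for this example:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flat probability lists.

    Illustrative imbalance-robust loss (not necessarily the paper's):
    it normalises by the total foreground mass rather than by pixel
    count, so the vast water background cannot dominate the gradient.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)


def mean_iou(pred, target, classes=(0, 1)):
    """Mean intersection-over-union over the given classes
    (here 1 = algae, 0 = water), on flat integer label lists."""
    ious = []
    for c in classes:
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Tiny usage example on a 4-pixel scene: the model finds one of the
# two algae pixels and correctly rejects both water pixels.
pred_labels = [1, 0, 0, 0]
true_labels = [1, 1, 0, 0]
print(mean_iou(pred_labels, true_labels))  # (1/2 + 2/3) / 2 = 7/12
```

A perfect prediction drives the Dice loss to ~0, while a prediction that misses the rare algae class is penalised heavily even though its plain pixel accuracy would remain high.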