dc.description.abstract |
Satellite data provides valuable insights into environmental change and natural resource management, such as monitoring deforestation, mapping land-use change, and identifying areas at risk of soil degradation. Landsat-8 and Sentinel-2 are publicly available high-spatial-resolution satellites launched in recent years. However, both have moderate temporal resolution, which limits their use in applications such as precision agriculture, land cover mapping, and disaster monitoring, where daily or weekly observations are better suited. Fusing data from the two satellites can provide enhanced observations. Landsat-8 and Sentinel-2 share the same geographic coordinate system, which makes them amenable to fusion. However, fusing their data at the pixel level is challenging because the two satellites visit the same location on different days. The proposed model, `LSFuseNet’, effectively fuses data at the feature level. It is a dual-fusion model in which bi-directional cross-modal attention identifies and exchanges hotspot information between the two modalities. A feature alignment module learns fine-grained features and mitigates noise in the data. We innovatively apply contrastive learning to improve the quality of the learned representations of the data from the two satellites. We evaluate our model on two applications: crop yield prediction and snow cover prediction. For crop yield prediction, we consider two crops, viz. corn and soybean, for approximately 500 counties in the US. For snow cover prediction, we consider approximately 1300 US counties. Our extensive experiments show that LSFuseNet outperforms competing models. The results of both applications also demonstrate the benefit of fusing data from the two satellites over using data from a single satellite. We further modify the model to incorporate meteorological and/or soil data (where applicable) to further enhance its performance. |
en_US |