Day 4

Detailed paper information


Paper title: PHI-Sat-2: Onboard AI Apps for innovative Earth Observation techniques
  1. Alessandro Marin CGI Italy S.R.L. Speaker
  2. César Coelho CGI Deutschland
  3. Christine Gläßer CGI Deutschland
  4. Irina Babkina
  5. Aleix Megias
  6. Oriol Aragon Open Cosmos Ltd.
  7. Gaetano Pace CGI Italy S.R.L.
Form of presentation: Poster
  • B7. NewSpace missions
    • B7.03 New Space missions with small and nanosatellites
Abstract text: Following the success of the PHI-Sat mission, in 2020 the European Space Agency (ESA) announced the opportunity to present CubeSat-based ideas for the PHI-Sat-2 mission, to promote innovative technologies such as Artificial Intelligence (AI) capabilities onboard Earth Observation (EO) missions.
The PHI-Sat-2 mission idea, submitted jointly by Open Cosmos and CGI, leverages the latest research and developments in the European ecosystem: a game-changing EO CubeSat platform capable of running AI Apps that can be developed, uploaded, deployed, and orchestrated on the spacecraft, and updated during flight operations. This approach allows continuous improvement of the AI model parameters using the very same images acquired by the satellite.
The development is divided into two sequential phases: the Mission Concept Phase, now almost complete, which shall demonstrate the readiness of the mission by validating the innovative EO application through a breadboard-based validation test; and the Mission Development Phase, which shall be dedicated to the design and development of the space and ground segments, launch, in-orbit operations, data exploitation, and distribution.
The PHI-Sat-2 Mission, led by Open Cosmos, will be used to demonstrate how AI enables new, useful, and innovative EO techniques of relevance to EO user communities. The overall objective is to address innovative mission concepts, fostering novel architectures that meet user-driven science and application needs by means of onboard processing, based on state-of-the-art AI techniques and onboard AI-accelerator processors.
The mission will take advantage of the latest research in CubeSat mission operations and use the NanoSat MO Framework, which allows software to be deployed in space as simple Apps, in a similar fashion to Android apps, as previously demonstrated in ESA's OPS-SAT mission, and which supports the orchestration of onboard Apps.
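To illustrate the App-style deployment model described above, the following is a minimal, hypothetical sketch of an onboard orchestrator driving App lifecycles. It is not the real NanoSat MO Framework API (which is Java-based); the class and method names here are illustrative assumptions only.

```python
from dataclasses import dataclass


@dataclass
class App:
    """An onboard capability packaged as an App with a uniform lifecycle."""
    name: str
    running: bool = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False


class Orchestrator:
    """Toy stand-in for the framework component that manages onboard Apps."""

    def __init__(self) -> None:
        self.apps: dict[str, App] = {}

    def deploy(self, app: App) -> None:
        # "Upload" a new App during flight operations.
        self.apps[app.name] = app

    def start(self, name: str) -> None:
        self.apps[name].start()

    def stop(self, name: str) -> None:
        self.apps[name].stop()


orch = Orchestrator()
orch.deploy(App("cloud-detection"))
orch.start("cloud-detection")
print(orch.apps["cloud-detection"].running)  # True
```

The key design point is the uniform lifecycle: because every App exposes the same deploy/start/stop surface, new capabilities can be added in flight without changing the platform software.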
Φ-sat-2 will fly a set of default AI Apps, covering different ML approaches and methodologies such as supervised learning (image segmentation, object detection) and unsupervised learning (with autoencoders and generative networks), as presented below.
Since the Φ-sat-2 mission relies on an optical sensor, a Cloud Detection App (developed by KP-Labs), which will generate a cloud mask and identify cloud-free areas, is part of the baseline. This information can be exploited by the other Apps, which is relevant not only for optimizing onboard resources but also for demonstrating the onboard AI Apps pipeline.
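The gating role of the cloud mask can be sketched as follows. This is an illustrative stand-in only: a simple brightness threshold plays the part of the learned cloud detector, whereas the flight App would use a trained segmentation network.

```python
import numpy as np


def cloud_mask(image: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Return a boolean mask, True where a pixel is classified as cloud.
    (Toy thresholding in place of a trained segmentation model.)"""
    return image > threshold


def cloud_free_fraction(mask: np.ndarray) -> float:
    """Fraction of the scene not covered by cloud."""
    return 1.0 - float(mask.mean())


# Downstream Apps can be gated on the cloud-free fraction, saving
# onboard resources on heavily clouded scenes.
scene = np.clip(np.random.default_rng(0).normal(0.4, 0.2, (64, 64)), 0, 1)
mask = cloud_mask(scene)
if cloud_free_fraction(mask) > 0.5:
    print("scene mostly cloud-free: run downstream Apps")
```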
The Autonomous Vessel Awareness App (developed by CEiiA) will detect and classify vessels. Together with demonstrating the possibility of scouting with a wider-swath sensor, this App will show how information generated in space can be exploited for mission operations, e.g. identifying, within a satellite constellation, the areas for the next acquisitions.
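How onboard detections could feed acquisition tasking can be sketched in a few lines. The cell-gridding scheme below is a hypothetical illustration, not the App's actual algorithm: vessel detections from a wide-swath scouting pass are grouped into grid cells, and dense cells become candidate targets for a follow-up high-resolution acquisition.

```python
def areas_for_next_acquisition(detections, cell_deg=1.0, min_count=2):
    """Group (lat, lon) detections into cell_deg-sized grid cells and
    return the cells holding at least min_count vessels as candidate
    targets for the next acquisition."""
    counts: dict[tuple[int, int], int] = {}
    for lat, lon in detections:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        counts[cell] = counts.get(cell, 0) + 1
    return sorted(cell for cell, n in counts.items() if n >= min_count)


# Two detections cluster near (43N, 5E); one lone detection is ignored.
dets = [(43.2, 5.1), (43.4, 5.6), (60.0, 24.9)]
print(areas_for_next_acquisition(dets))  # [(43, 5)]
```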
The Sat2Map App (developed by CGI) uses Artificial Intelligence to transform a satellite image into a street map for emergency scenarios. The software takes advantage of the Cycle-Consistent Adversarial Network (CycleGAN) technique to perform the transformation from the satellite image to the street map. In case of emergency (earthquake, flood, etc.), this App will enable the satellite to provide rescue teams on the ground with a near-real-time view of the streets that are still available and accessible.
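The idea behind CycleGAN can be shown with its cycle-consistency loss: with generators G (satellite to map) and F (map to satellite), training penalises ||F(G(x)) - x||1 and ||G(F(y)) - y||1, so unpaired images from the two domains can be used. In this toy sketch the "generators" are placeholder linear maps, not trained networks.

```python
import numpy as np


def cycle_consistency_loss(G, F, x, y):
    """Mean L1 cycle-consistency loss over both translation directions."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()


G = lambda img: 1.0 - img  # toy satellite-to-map "generator"
F = lambda img: 1.0 - img  # toy map-to-satellite "generator"

x = np.random.default_rng(0).random((8, 8))  # toy satellite image
y = np.random.default_rng(1).random((8, 8))  # toy street map
print(cycle_consistency_loss(G, F, x, y))    # 0.0: F inverts G exactly
```

In real training the loss is minimised jointly with the adversarial losses of two discriminators, one per image domain.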
The High Compression App (developed by Geo-K) will exploit deep autoencoders to push AI-based image compression onboard, with reconstruction on the ground. The performance of the App will be measured not only in terms of the standard trade-off between compression rate and image similarity, but also in terms of how well the reconstructed image can be exploited by other Apps, e.g. for object recognition, pushing the limits of AI-based image compression in space with reconstruction on the ground.
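The compression trade-off can be sketched with a toy linear autoencoder: an image patch is encoded to a small latent vector (the data actually downlinked) and decoded on the ground. The random, untrained encoder below only illustrates the arithmetic of the compression ratio; the flight App would use a trained deep autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.random((32, 32)).ravel()        # 1024 input values
W_enc = rng.normal(size=(64, 1024)) / 32.0  # toy encoder: 1024 -> 64
W_dec = np.linalg.pinv(W_enc)               # toy decoder: 64 -> 1024

latent = W_enc @ patch                      # downlinked representation
recon = W_dec @ latent                      # ground-side reconstruction

ratio = patch.size / latent.size            # nominal compression ratio
err = float(np.abs(recon - patch).mean())   # reconstruction error
print(f"compression ratio {ratio:.0f}:1, mean abs error {err:.3f}")
```

Measuring `err` against downstream task performance (e.g. object recognition on `recon`) is exactly the evaluation the abstract describes, beyond the plain rate-vs-similarity curve.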
On top of this, the mission will be open to applications developed by third parties, which augments the disruptiveness of the mission concept: the satellite, already in space, becomes available to a community as a commodity for research and development. These third-party Apps can then be uploaded and started/stopped on demand. This concept is extremely powerful, enabling future AI software to be developed and easily deployed on the spacecraft, and will be an enabler for in-flight, on-mission continuous learning of the AI networks.
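In-flight continuous learning can be sketched as online parameter updates driven by newly acquired samples. A toy linear model and plain SGD stand in for the AI networks here; this is an illustrative assumption, not the mission's actual training scheme.

```python
import numpy as np


def online_update(w, x, y, lr=0.1):
    """One SGD step on squared error for a linear model y_hat = w . x."""
    grad = 2.0 * (w @ x - y) * x
    return w - lr * grad


w = np.zeros(3)
# Stream of newly acquired (features, label) samples refines the model
# in flight, without redeploying the App.
for x, y in [(np.array([1.0, 0.0, 0.0]), 1.0)] * 20:
    w = online_update(w, x, y)
print(np.round(w, 3))  # first weight converges toward 1.0
```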
The presentation aims to describe the PHI-Sat-2 mission objectives and how the different AI applications, orchestrated by the NanoSat MO Framework, will demonstrate the disruptive advantages that the onboard AI brings to the mission.