New AI system to help save lives of earthquake survivors in Turkey

An AI system called "xView2" is helping ground rescue efforts in regions of Turkey devastated by this month's earthquakes.
Chris Young
Aftermath of the devastating earthquakes that hit southern Turkey.

Getty Images 

The U.S. Department of Defense is using a visual computing artificial intelligence system to aid ongoing disaster response efforts in Turkey and Syria following the devastating earthquake on February 6 that has claimed tens of thousands of lives.

The AI system, called xView2, is still in the early development phase, but it has already been deployed to help ground rescue missions in Turkey.

AI system helps disaster response teams in Turkey

xView2 is an open-source project sponsored and developed by the Pentagon's Defense Innovation Unit and Carnegie Mellon University's Software Engineering Institute. The project has also drawn on collaborations with large organizations, including Microsoft.

The system uses machine-learning algorithms on satellite imagery to categorize damage in the disaster area at a much faster rate than is possible using other existing methods.

This speed is especially important given the number of separate earthquakes and aftershocks that have occurred since the first earthquake struck in the early hours of February 6. Yesterday, February 20, for example, another magnitude 6.4 tremor struck near the city of Antakya, close to the border with Syria, trapping more people under the rubble.

An MIT Technology Review report points out that xView2 has recently also been deployed in response to wildfires in California as well as during recovery efforts after flooding in Nepal, where it helped to identify damage from landslides caused by the floods.

In Turkey, the AI system has so far been used by at least two different ground teams for search and rescue in Adıyaman, Turkey, which has been devastated by the earthquake. In an interview with MIT Technology Review, Ritwik Gupta, principal AI scientist at the Defense Innovation Unit, said that, with xView2, rescue workers were "able to find areas that were damaged that they were unaware of."

How does the xView2 AI disaster response system work?

The xView2 system uses a technique related to object recognition called "semantic segmentation." This method examines each individual pixel of a satellite image and its relation to surrounding pixels to analyze the state of things on the ground. The AI then highlights the damage in red. Assessments that could previously have taken weeks can now be carried out, with the help of machine learning, in a matter of hours.
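To make the idea concrete, here is a minimal, purely illustrative sketch of per-pixel damage classification on a pre/post-disaster image pair, with predicted damage painted in red. The toy network, the class labels, and the overlay function are assumptions for demonstration; they are not xView2's actual model or code.

```python
# Illustrative sketch only: a toy per-pixel classifier standing in for the kind of
# semantic-segmentation model described above. Model, labels, and overlay are
# hypothetical, not taken from the xView2 project.
import torch
import torch.nn as nn

# Damage classes loosely following a building-damage scale (assumed labels).
CLASSES = ["no-damage", "minor-damage", "major-damage", "destroyed"]


class ToySegmenter(nn.Module):
    """Tiny stand-in model: takes a pre/post satellite image pair (6 channels)
    and predicts a damage class for every pixel."""

    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class logits
        )

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        # Stack pre- and post-disaster imagery on the channel axis so each pixel
        # is judged together with its surroundings, before and after the event.
        return self.net(torch.cat([pre, post], dim=1))


def damage_overlay(post: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Paint pixels predicted as major damage or worse in red."""
    labels = logits.argmax(dim=1)          # (B, H, W) class index per pixel
    damaged = (labels >= 2).unsqueeze(1)   # major-damage or destroyed
    red = torch.tensor([1.0, 0.0, 0.0]).view(1, 3, 1, 1)
    return torch.where(damaged, red.expand_as(post), post)


if __name__ == "__main__":
    model = ToySegmenter().eval()
    # Random tensors stand in for pre- and post-disaster satellite tiles.
    pre_img = torch.rand(1, 3, 256, 256)
    post_img = torch.rand(1, 3, 256, 256)
    with torch.no_grad():
        logits = model(pre_img, post_img)
    highlighted = damage_overlay(post_img, logits)
    print(highlighted.shape)  # torch.Size([1, 3, 256, 256])
```

In a real pipeline the network would be trained on labeled pre/post imagery; the point here is only the structure: one class prediction per pixel, then a map where damaged areas stand out for rescue planners.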

This method is also much more efficient than the traditional method of relying on eyewitness accounts to assess damage to help organize disaster response operations. More recently, response teams have also used drones to sweep over large areas, but this is still a time-consuming process.

As Gupta pointed out, however, there are still a few issues to be ironed out in the xView2 system. One problem, for example, is that it is reliant on satellite imagery taken during the daytime, meaning it can't currently provide data quickly for disasters that occur in the early hours or at nighttime. The service can also be hindered by cloud coverage.

Still, "if we can save one life, that's a good use of the technology," Gupta explained. Hopefully, though, it will be able to save many more.
