Impact of AI on Earth and space exploration

Hello, this is Elena Kuznetsova, business process automation specialist at Sherpa Robotics. Today I have prepared for you a translation of an article about the use of AI in scientific research on space and planet Earth. In my own work I automate routine processes, albeit in a different field, and I was curious to see how AI is taking over the scientific routine and opening up knowledge that would otherwise have taken us many years to acquire. I invite you to read the views of researchers Katherine Bouman, Matthew Graham, John Dabiri, and Zachary Ross. At the end of the article I share my thoughts on how AI is pushing us to evolve and focus on more complex tasks.

Machine learning and the Event Horizon Telescope. Imaging black holes

Katherine Bouman, assistant professor of computing and mathematical sciences, electrical engineering, and astronomy, Rosenberg Scholar, and Heritage Medical Research Institute Investigator, explains.

Studying black holes is not only exciting, but also extremely challenging. A network of telescopes, known as the Event Horizon Telescope (EHT), collects data on black holes using the synchronized operation of multiple facilities scattered around the world. Together they form one giant virtual telescope that helps us look into the most remote corners of the Universe.

When the telescopes collect data, gaps in coverage remain, and these gaps introduce uncertainty into the reconstructed images. We combine the data from the different telescopes and use algorithms to fill in the gaps. Estimating the resulting uncertainty, however, is very time-consuming: it took an international team several months to produce the image of the black hole M87*, while the more recent image of Sgr A* at the center of our Galaxy took years.

This is where machine learning comes to the rescue, specifically generative deep learning models. These models can produce not just a single image but a whole universe of possible images, all consistent with the complex data we collect. This approach lets us assess uncertainty more effectively and extract more information from the available data.
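To make the idea concrete, here is a minimal toy sketch added for this translation (my illustration, not the EHT team's code) of how a generative model can supply many candidate images consistent with the same sparse measurements. The `decoder` and `forward_model` functions are invented stand-ins: a real system would use a trained deep network and the actual telescope measurement equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not EHT code): a "decoder" that turns a
# latent vector into an image, and a forward model that simulates sparse
# telescope measurements by keeping only a few Fourier components.
N = 32                                  # image is N x N pixels
mask = rng.random((N, N)) < 0.05        # ~5% Fourier coverage, like sparse baselines

def decoder(z):
    # Pretend generative model: a Gaussian blob whose position and width
    # are controlled by the latent vector z.
    x = np.linspace(-1, 1, N)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-((xx - z[0])**2 + (yy - z[1])**2) / (0.1 + z[2]**2))

def forward_model(img):
    return np.fft.fft2(img)[mask]       # the sparse "visibilities" we observe

truth = decoder(np.array([0.1, -0.2, 0.3]))
data = forward_model(truth)

# Decode many latent samples and rank the images by how well their
# simulated measurements match the observed data; the best-matching set
# approximates the range of images consistent with the observations.
zs = rng.normal(scale=0.4, size=(2000, 3))
imgs = np.array([decoder(z) for z in zs])
misfit = np.array([np.mean(np.abs(forward_model(im) - data) ** 2) for im in imgs])

best = imgs[np.argsort(misfit)[:200]]   # 200 most data-consistent images
mean_img = best.mean(axis=0)            # a single representative reconstruction
std_img = best.std(axis=0)              # per-pixel uncertainty map
print("max per-pixel std:", std_img.max())
```

The spread across the surviving images plays the role of the uncertainty estimate: where the samples agree, the data pins the image down; where they disagree, the gaps in coverage leave the image unconstrained.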

One exciting application of machine learning is sensor optimization for computational imaging. We are developing machine learning methods to identify locations for new telescopes to be integrated into the EHT. By co-designing the telescope layout together with the image reconstruction software, we can extract more information from the collected data and produce higher-quality images with less uncertainty. This concept of co-designing computational imaging systems is relevant not only to the EHT but also to medical imaging and other fields.
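The placement idea can be illustrated with a deliberately simplified example (again my own sketch, not the group's method): treat each candidate telescope site as covering some set of measurement "cells", and greedily add the sites that fill the most gaps left by the existing array.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (an illustration, not the EHT design code): each candidate
# site would measure some set of cells in the measurement plane; we
# greedily pick the sites that add the most new coverage.
n_cells = 500                                    # discretized measurement "cells"
n_candidates = 30

coverage = rng.random((n_candidates, n_cells)) < 0.08  # what each site would fill
covered = rng.random(n_cells) < 0.10                   # what the current array covers

chosen = []
for _ in range(3):                               # budget: three new telescopes
    gains = [(~covered & cov).sum() for cov in coverage]
    best = int(np.argmax(gains))                 # site filling the most gaps
    chosen.append(best)
    covered |= coverage[best]

print("chosen sites:", chosen, "fraction covered:", covered.mean())
```

A real co-design would score candidate layouts by the quality of the images the reconstruction software recovers from them, rather than by raw coverage, but the greedy gap-filling logic is the same in spirit.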

Caltech astronomy professors Gregg Hallinan and Vikram Ravi are leading the DSA-2000 project, which will use 2,000 antennas in Nevada to image the entire sky at radio wavelengths. Unlike the EHT, where the challenge is filling gaps in the data, this project will collect an enormous amount of information – about 5 terabytes per second. All processing steps, such as correlation, calibration, and imaging, must be fast and automated.

Traditional methods simply cannot keep up with such a data rate: at 5 terabytes per second, a single day of observing yields roughly 400 petabytes. So the team uses deep learning techniques that automatically clean up the images, and users get results within just a few minutes of the data being collected.

How AI is changing astronomy. Matthew Graham's perspective on the evolution of astronomical research

Astronomy is undergoing significant changes thanks to big data, as Matthew Graham, project scientist of the Zwicky Transient Facility, explains.

Over the past 20 years, astronomy has changed significantly, largely because of the need to process large volumes of data. Today's data sets are complex and vast, and they sometimes arrive from telescopes very quickly – at rates of gigabytes per second.

Today, scientists are actively turning to machine learning methods, and large new data sets are becoming a key factor in the search for rare objects. If you are looking for an object with a one-in-a-million probability, in a data set of a million objects you will probably find one. But in a project like the Vera C. Rubin Observatory's Legacy Survey of Space and Time, which is expected to catalog 40 billion objects, you can expect to find around 40,000 such one-in-a-million objects.

Personally, I'm interested in active galactic nuclei – regions at the centers of galaxies where a supermassive black hole, surrounded by a disk of infalling gas and dust, shines incredibly brightly. I can explore a data set to find such objects: I have an idea of what patterns to look for and am developing a machine learning approach for the task. I can also model what these objects should look like and then train an algorithm to find similar objects in real data.
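As a rough illustration of that last workflow – train on simulated objects, then search real data – here is a hypothetical sketch using scikit-learn. The damped random walk is a standard toy model of AGN optical variability; the feature set and classifier are my own simplifications, not the ZTF pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, length = 1000, 200   # light curves: n objects per class, 200 epochs each

def drw(size):
    # Damped random walk: a simple toy model of AGN optical variability.
    x = np.zeros(size)
    for t in range(1, size):
        x[t] = 0.97 * x[t - 1] + rng.normal(scale=0.1)
    return x

# Simulated training set: AGN-like variability (label 1) vs. noise (label 0).
agn = np.array([drw(length) for _ in range(n)])
quiet = rng.normal(scale=0.1, size=(n, length))
X = np.vstack([agn, quiet])
y = np.array([1] * n + [0] * n)

def features(curves):
    # Simple variability statistics used as classifier inputs.
    return np.column_stack([curves.std(axis=1),
                            np.abs(np.diff(curves, axis=1)).mean(axis=1)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features(X), y)

# Real survey data would go here; fresh simulations act as a stand-in.
new = np.array([drw(length) for _ in range(5)]
               + list(rng.normal(scale=0.1, size=(5, length))))
print(clf.predict(features(new)))   # expected: mostly 1s, then 0s
```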

Today, we use computers for routine work that used to be done by undergraduate or graduate students. However, we are gradually moving into more complex areas where machine learning is becoming more sophisticated. We're starting to ask computers, “What patterns do you find in this data?”
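In code, that shift looks like moving from supervised classifiers toward unsupervised methods, which are handed unlabeled measurements and asked what groups exist. A tiny hypothetical example (the feature names are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical catalog features (say, color, variability amplitude, period):
# with no labels at all, we ask the algorithm what groups it sees.
objects = np.vstack([rng.normal([0, 0, 0], 0.3, size=(500, 3)),
                     rng.normal([2, 1, -1], 0.3, size=(500, 3))])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(objects)
print(np.bincount(groups))   # two recovered populations of ~500 each
```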

Machine learning doesn't just help with data processing – it opens up new horizons for astronomy, allowing us to identify previously undetected patterns and make more accurate predictions. How will this affect our understanding of the universe? This is a question we all hope to have an answer to in the near future.

How is AI changing ocean monitoring and exploration? Engineer John Dabiri's View

Engineer John Dabiri describes how modern technologies, including artificial intelligence, have the potential to revolutionize the way we explore and monitor the oceans.

Surprisingly, only 5–10% of the ocean's volume has been explored. Traditional ship-based measurement methods have proven expensive, and scientists and engineers have increasingly turned to underwater robots to survey the ocean floor, study interesting objects, and analyze water chemistry.

Our team is developing technologies to create swarms of small autonomous underwater drones and bionic jellyfish to aid in ocean exploration. Drones encounter strong currents, and fighting them either wastes energy or throws the drones off course. Instead, we want these robots to exploit ocean currents the way hawks ride thermals in the air to reach great heights.

However, amid complex ocean currents we cannot compute and control the trajectory of each robot the way it is done for spacecraft.

When it comes to deep-sea exploration, steering a drone 20,000 feet down with a joystick from the surface is nearly impossible. Nor can we transmit data about local ocean currents to the drones, since we cannot even locate them from the surface. Hence the need to give ocean-going drones the ability to make motion decisions autonomously.

To do this, we equipped the drones with artificial intelligence: deep reinforcement learning networks running on low-power microcontrollers only about a square inch in size. Using data from the drones' gyroscopes and accelerometers, the AI iteratively computes trajectories. With each new experiment, it learns how to move and maneuver effectively in different currents.
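The learning loop behind this can be shown with a heavily simplified sketch (mine, not the team's onboard code): a drone on a one-dimensional track must reach a target against a current that pushes it backward, and a tabular Q-learner stands in for the deep network running on the real microcontroller.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy navigation problem: reach position 9 on a 1-D line. The ambient
# "current" drags the drone left; swimming costs energy but moves it right.
n_states, n_actions = 10, 2          # actions: 0 = drift, 1 = swim right
Q = np.zeros((n_states, n_actions))  # value table (a neural net in the real system)
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    s = 0
    for step in range(50):
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        r = 10.0 if s2 == n_states - 1 else (-0.1 if a == 1 else 0.0)
        # Standard Q-learning update from the observed transition.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

print("learned policy:", Q.argmax(axis=1))   # expect mostly 1s (swim right)
```

The real system replaces the table with a small neural network and the toy states with gyroscope and accelerometer readings, but the trial-and-error update is the same in spirit.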

Thus, the introduction of artificial intelligence into underwater technologies opens up new horizons for ocean exploration. We are on the cusp of an era when drones will be able to autonomously explore the mysterious depths of the ocean, providing us with new data and discoveries that were once thought out of reach. How will this affect our understanding of ocean ecology and dynamics? There are more questions than answers, but the future looks promising.

How is machine learning changing earthquake monitoring? The view of seismologist Zach Ross

The word “earthquake” conjures up images of powerful tremors. However, smaller tremors occur both before and after the main shock. To understand the entire process, we need to analyze all earthquake signals and identify the overall behavior of these vibrations. The more data we collect on tremors and earthquakes, the clearer the picture becomes of the complex network of faults within the Earth that is responsible for them.

Monitoring earthquake signals is no easy task. Seismologists work with far larger volumes of data than what the general public sees through the Southern California Seismic Network – too much to handle manually. On top of that, there is no easy way to separate useful earthquake signals from “noise”, such as vibrations from loud events or passing trucks.

Previously, students in our seismology laboratory spent a great deal of time measuring the properties of seismic waves. Although the procedure can be learned in a few minutes, performing it over and over is tedious. These routine tasks become a barrier to real scientific analysis; students would rather spend their time on more creative work.

Now artificial intelligence helps us recognize the signals we are interested in. We first train machine learning algorithms to detect different types of signals in data that has been carefully annotated by hand, and then apply the model to newly incoming data. The model makes these decisions about as accurately as human seismologists.
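Schematically, the workflow resembles the following hypothetical sketch (my illustration; production seismic pickers are deep networks trained on large annotated catalogs): train a detector on labeled waveform windows, then slide it across the incoming continuous stream.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
win = 100   # samples per analysis window

def quake(n):
    # Synthetic "earthquake": impulsive onset with exponential decay, plus noise.
    t = np.arange(win)
    onset = rng.integers(10, 40, size=(n, 1))
    sig = np.where(t >= onset, np.exp(-(t - onset) / 15.0), 0.0)
    return sig * rng.uniform(1, 3, size=(n, 1)) + rng.normal(scale=0.05, size=(n, win))

def noise(n):
    return rng.normal(scale=0.05, size=(n, win))

# "Hand-annotated" training windows, then a simple supervised detector.
X = np.vstack([quake(500), noise(500)])
y = np.array([1] * 500 + [0] * 500)
feats = lambda w: np.column_stack([w.std(axis=1), np.abs(w).max(axis=1)])
model = LogisticRegression().fit(feats(X), y)

# Apply the trained model to a continuous incoming stream, window by window.
stream = np.concatenate([noise(1)[0], quake(1)[0], noise(1)[0]])
windows = np.array([stream[i:i + win] for i in range(0, len(stream) - win, 20)])
print(model.predict(feats(windows)))   # 1s flag the earthquake-bearing windows
```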

Thus, the introduction of machine learning into seismology opens new horizons for a deeper understanding of earthquakes and the tremors that precede them. AI lets us spend less time on routine tasks and focus on what matters more – analyzing and interpreting the data – which can ultimately lead to better prediction and understanding of seismic processes. The future of seismology looks promising, and we are only beginning to realize the potential these new technologies offer.

Comment

This article actually got me thinking. We constantly talk about how AI reduces task completion time and lets us focus on more important processes. But which processes are more important?

We can quickly obtain data for analysis, draw conclusions from it at unprecedented speed, and use the resulting information to make decisions.

It turns out that the most important process AI frees up our time for is precisely decision-making: setting goals, formulating tasks, and so on. AI can give us the answer to a question, but it is up to humans to decide what to do with that information.

The main difference between artificial intelligence and natural intelligence is the absence of will. AI does not, on its own, ask what is out there – in the depths of the ocean, in the bowels of the Earth, or in the vast expanses of space.

The same is true at a more mundane level: AI by itself will not earn money or build a business. But a person wielding AI as a tool gains not only new, previously unavailable opportunities, but also new demands on their own level of development and their speed of adaptation to change.

Managing AI has become a new competency. We are already encountering the phenomenon of “neural employees”: we have to decide how to deploy them, train staff to interact with them, write technical specifications for developers to build out their functionality, and so on.

I would say that AI takes over the work we long ago mastered but perform slowly, and confronts us with the need for a more complex activity: finding a goal and striving to achieve it.
