Risks of Artificial Intelligence in Critical Infrastructure

In April of this year, the American research organization RAND published a noteworthy report,1 which focused on the risks of artificial intelligence (AI) to critical infrastructure. The study's authors drew on information about "smart cities" and, when assessing the technologies, considered attributes such as accessibility, monitoring and control of critical infrastructure, and the potential for malicious use of AI.

Technologies used in smart cities include:

  • machine learning, including deep learning and predictive analytics;

  • natural language processing (NLP), including translation, extraction, classification of information, and clustering;

  • computer speech, including speech-to-text and text-to-speech conversion;

  • computer vision, including image recognition and machine vision;

  • expert systems and robotics.

Other emerging and converging technologies, such as cyberspace, big data analytics, and the Internet of Things, have also become inextricably linked with AI. The study's authors regard early opportunities to implement AI in critical urban infrastructure as the initial stages of developing a "smart city."

This example provides a blueprint for how AI-enabled applications are likely to appear in areas such as education, healthcare, energy, the environment, waste management, agriculture, privacy and security, mobility and transportation, and localized risk and disaster management.

Assumptions and concerns

According to the authors of the study, AI will become as ubiquitous as the Internet, mobile communications, and geo-location. This proliferation will create both opportunities and challenges for the use of AI. It has already increased productivity and improved efficiency in many critical infrastructure facilities.

More broadly, AI can provide the ability to explore and analyze big data to gain a deeper understanding of nature and improve human life, for example through personalized medicine.

However, many also warn of catastrophic scenarios in which humans lose agency and become subservient to the machines they have created. One alarming scenario envisions AI systems developing values and goals antithetical to those of humanity. As AI becomes more integrated into everyday life, such scenarios may become more likely, but even mundane uses of AI can have adverse effects.

Research and development in artificial intelligence rely on huge amounts of data, which is a critical component of AI training. These data vary across the characteristics of volume, velocity, value, variety, veracity, validity, variability, and visualization. If training data does not account for these characteristics, the likelihood of errors and misuse of AI increases. AI technologies must be responsible, fair, traceable, reliable, and governable.
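Some of these characteristics can be checked programmatically before training begins. A minimal sketch of such checks for volume, validity, and variety follows; the record layout, field names, and toy data are illustrative assumptions of mine, not anything specified in the report.

```python
def data_quality_report(records, required_fields):
    """Rough checks for a few of the 'V' characteristics:
    volume (record count), validity (share of records with all
    required fields present), variety (distinct values per field)."""
    volume = len(records)
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    validity = 1.0 - missing / volume if volume else 0.0
    variety = {
        f: len({r.get(f) for r in records}) for f in required_fields
    }
    return {"volume": volume, "validity": validity, "variety": variety}

# Toy sensor log with one invalid record (missing "value")
records = [
    {"sensor": "temp", "value": 21.5},
    {"sensor": "temp", "value": 22.0},
    {"sensor": "flow", "value": None},
]
report = data_quality_report(records, ["sensor", "value"])
```

A real pipeline would add checks for velocity (data freshness) and veracity (cross-source agreement), but even a report this small can gate whether a dataset is fit for training.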

The authors also conclude that potential problems with AI will be cumulative. Cities and critical infrastructure will increasingly use AI. We should also expect the number of bad actors to grow. Their capabilities will grow as the technology spreads, more use cases emerge, and the full power of AI becomes apparent.

The authors of the study drew attention to the debate over intellectual property protection associated with the development of artificial intelligence systems. These systems require huge amounts of data; some experts estimate the volume involved is comparable to every book ever written being fed into an AI platform during supervised learning. The creators of such content, including authors and artists, are concerned about the lack of compensation for their role in creating assets for AI platforms. This creates a difficult ethical dilemma and increases the likelihood that AI developers will forgo key assets protected by intellectual property rights, excluding an important layer of information that could be useful for training models.

How advanced is AI today?

Today, there are three categories of AI:

  • artificial narrow intelligence, or “weak” AI (ANI);

  • general (universal) artificial intelligence (AGI);

  • artificial superintelligence (ASI).

So far, the technology has only reached the level of ANI, although some claim that a few applications have demonstrated early signs of AGI.

AI is already being used directly in critical infrastructure applications. Examples include:

  • medicine (e.g. diagnosing patients and predicting outcomes);

  • finance (e.g. fraud detection and improving customer service);

  • transportation (e.g. development of unmanned vehicles and predictive maintenance);

  • manufacturing (e.g. process optimization and quality improvement).

But these are narrow areas of AI application in which technological maturity has been achieved.
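The predictive maintenance mentioned in the transportation example often reduces, in its simplest form, to watching a smoothed sensor signal against a service limit. The sketch below illustrates that idea only; the vibration readings, window size, and limit are made-up values, not figures from the report.

```python
def maintenance_alert(readings, window=5, limit=80.0):
    """Return the index of the first reading at which the rolling
    mean of a sensor signal exceeds a service limit, or None.
    A toy stand-in for predictive maintenance."""
    for i in range(window, len(readings) + 1):
        avg = sum(readings[i - window:i]) / window
        if avg > limit:
            return i - 1  # reading that triggered the alert
    return None

# Vibration amplitude drifting upward as a bearing wears
vibration = [60, 62, 61, 65, 70, 74, 79, 85, 90, 95]
alert_at = maintenance_alert(vibration, window=3, limit=80.0)
```

The smoothing window trades sensitivity for robustness: a larger window ignores one-off spikes but raises the alert later in the wear curve.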

Contextual analysis of AI shows that machine learning models already exceed human capabilities in handwriting, speech, and image recognition, as well as in reading and language understanding. In tasks such as common-sense reasoning, basic mathematics, and code generation, they have reached roughly 85 to 98% of human performance. In the Russian market, successful examples include Sber's GigaChat API, the SaluteBot chatbot, and SaluteSpeech Voice Cloning, which can be used to optimize business processes and build services while keeping corporate information infrastructure data secure.

Two sides of the same coin

The rollout of GPT-4 in March 2023 is an instructive example of how AI technologies will be shaped and released to the public. Previous versions had been in development for years, used mostly for research. GPT-4, delivered through ChatGPT, attracted interest from a much wider audience; even casual users were intrigued by the new AI platform. However, the initial rollout revealed problems with accuracy and consistency, forcing the developers to move quickly to address the emerging issues.

The same pattern of early prototyping will likely hold for critical infrastructure, since the architecture of the individual systems that will together make up future AI deployments is itself an AI-enabled technology.

This approach is common to many technologies, such as the Internet, social media, and now artificial intelligence. For critical infrastructure, this will mean that subsystems that rely on big data, networks, high-performance computing, and the Internet of Things will be deeply embedded. It also means that the stakes will be very high if the system fails catastrophically, especially in the critical infrastructure sector.

Minimizing such problems requires developing both the technology and appropriate mitigation measures. The values and goals of AI platforms must be clearly articulated and understood. Constraints must also be developed to prevent deviations from human expectations and norms.

The authors of the study believe that a better understanding of unsupervised machine learning is needed: once it is permitted in AI systems, tracking those systems becomes harder. AI platforms should also incorporate generative adversarial networks (GANs). A GAN consists of two neural networks, a generator and a discriminator: the generator creates realistic fake data, while the discriminator learns to distinguish it from real data. This allows AI systems to learn and generate new data, and it also offers a way to assess the credibility of AI outputs and to detect deepfakes and other forgeries.
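The generator-versus-discriminator dynamic can be illustrated with a deliberately tiny GAN: a linear generator tries to imitate a 1-D Gaussian "real" distribution while a logistic discriminator learns to tell real samples from fakes. This is a sketch of the training principle only, with parameters chosen by me; it does not model any system from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples the generator must learn to imitate
    return rng.normal(4.0, 0.5, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    z = rng.normal(0.0, 1.0, batch)
    xr, xf = real_batch(batch), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * (np.mean((dr - 1.0) * xr) + np.mean(df * xf))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))
    # Generator step: fool the (now fixed) discriminator,
    # using the non-saturating loss -log D(G(z))
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * np.mean((df - 1.0) * w * z)
    b -= lr * np.mean((df - 1.0) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

With the non-saturating generator loss used here, the generator's output distribution drifts toward the mean of the real data as training proceeds; production GANs apply the same adversarial loop to deep networks and high-dimensional data.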

Additionally, the same AI capabilities that can enable advances in cybersecurity can also be used to empower attackers. AI systems can be trained to adapt their behavior to trick security professionals into making erroneous decisions. New versions of malware can also be developed that are more likely to bypass antivirus scanners, conduct network reconnaissance, identify vulnerabilities, and use social engineering to infiltrate systems.

Threats from artificial intelligence to critical infrastructure may come from several possible sources. AI may be used in the design and monitoring of critical infrastructure, which could provide benefits in optimizing design, increasing efficiency, and ensuring safety. However, such use of AI may pose threats.

The October 2023 Microsoft Digital Defense Report states:

Artificial intelligence technologies can provide automated interpretation of signals generated during attacks, effective threat prioritization, and adaptive responses to the speed and scale of adversarial activity. These techniques show great promise for quickly analyzing and correlating patterns across billions of data points to track a wide range of cybercrimes.

However, the same GAN technology can also be used for an effective attack.
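The "effective threat prioritization" described in the Microsoft report can be sketched, in a much-simplified form, as ranking attack signals by how far current event counts deviate from a historical baseline. The signal names, counts, and z-score ranking below are my own illustrative assumptions, not anything the report specifies.

```python
import statistics

def prioritize(signal_counts, baseline):
    """Rank signals by the z-score of today's event count against
    its historical baseline: a toy threat-prioritization scheme."""
    scored = []
    for name, count in signal_counts.items():
        mean = statistics.mean(baseline[name])
        sd = statistics.stdev(baseline[name]) or 1.0  # guard zero spread
        scored.append((name, (count - mean) / sd))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical daily baselines and today's observations
baseline = {
    "failed_logins": [10, 12, 11, 9, 13],
    "dns_requests": [500, 480, 510, 495, 505],
}
today = {"failed_logins": 40, "dns_requests": 512}
ranking = prioritize(today, baseline)
```

A surge in failed logins ranks far above a mild bump in DNS traffic even though the absolute counts are smaller, which is the point of normalizing each signal against its own history.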

The growing adoption of IoT capabilities also increases vulnerability to cyberattacks. AI is already impacting energy systems through its ability to digest usage patterns and provide accurate estimates of future energy demand, making it a key technology for managing the grid. However, this also increases the digital footprint and entry points for hackers.
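The demand-estimation idea can be illustrated with a deliberately simple forecast: average the load for each hour across past days to predict tomorrow's profile. The load figures are invented for illustration; real grid forecasting uses far richer models (weather, calendar effects, learned seasonality).

```python
import statistics

# Hourly load (MW) for three past days, eight hours each
history = [
    [30, 28, 27, 35, 50, 55, 48, 40],   # day 1
    [32, 29, 26, 36, 52, 57, 47, 41],   # day 2
    [31, 30, 28, 34, 51, 56, 49, 42],   # day 3
]

# Forecast tomorrow's profile as the per-hour average
forecast = [
    statistics.mean(day[h] for day in history)
    for h in range(len(history[0]))
]
peak_hour = forecast.index(max(forecast))
```

Even this naive per-hour average captures the daily peak, which is what grid operators schedule generation around; it is also exactly the kind of predictable pattern that widens the attack surface once the meters feeding it are networked.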

Conclusion

Artificial intelligence capabilities will increasingly be used to improve the efficiency and effectiveness of existing infrastructure. This will bring both challenges and opportunities that need to be carefully managed. AI technology is evolving rapidly and will be integrated at different rates into different critical infrastructure applications. And one of the key challenges will be balancing efficiency gains with security, especially given that the private sector owns a large share of critical infrastructure.

  1. Gerstein, Daniel M., and Erin N. Leidy, Emerging Technology and Risk Analysis: Artificial Intelligence and Critical Infrastructure, Homeland Security Operational Analysis Center operated by the RAND Corporation, RR-A2873-1, 2024. As of June 5, 2024.
