Nth Generation

AI Risks: Critical Infrastructure and Smart Technologies



This is the second article in a five-part series on the dangers of AI that corporate leaders, and even cybersecurity personnel, are often not fully aware of, and on how to implement protection and safety measures for you, the infrastructure you live and work in, and your sensitive data.





In this article, we take a deep dive into the risks specific to U.S. critical infrastructure, the smart technologies that, along with AI, are permeating everything from our corporate offices to our homes, and the intersections between the two.


As stated in the last article, there is good news: securing your organization for the use of AI greatly elevates the security posture of your systems and user base. Nth's recommendation is to treat the issues discussed below as a roadmap for using AI securely while making the overall environment more secure. Again, it's a corporate win-win in the long run.

 

AI Risks: Critical Infrastructure and Smart Technologies


1. Risks to Critical Infrastructure

Operational Risks: The reliance on AI can pose risks if the tools fail or are misused, potentially disrupting critical infrastructure operational processes.


Electric Grid: AI systems controlling power distribution can optimize efficiency but are vulnerable to cyberattacks that could disrupt service and cause widespread outages.


Water Treatment and Distribution: AI can enhance the management of water resources but may be manipulated to contaminate supplies or disrupt water availability.


Manufacturing: AI-driven automation improves productivity but introduces risks if systems malfunction or are hacked, potentially halting production or causing safety hazards.


Medical Services: AI enhances diagnostic accuracy and treatment personalization, but errors or attacks on AI systems can lead to misdiagnosis or treatment delays, endangering lives.


Communications: AI optimizes network performance but poses risks if used to intercept or manipulate communications.


Transportation: AI improves traffic management and autonomous vehicle navigation but can be exploited to disrupt transit systems, leading to accidents and congestion.


2. Risks in Operational Technology (OT) and Smart Systems

Embedded Systems/IoT: The proliferation of AI-enabled IoT devices increases the attack surface for cybercriminals. Compromised devices can disrupt services or be used as entry points for larger attacks.


Smart Buildings: AI in smart buildings optimizes energy use and enhances security, but vulnerabilities can lead to unauthorized access to or control of building systems, including power, communications, HVAC, etc. One of the first large-scale breaches (the 2013 Target attack) occurred not through the company's own network but through its HVAC service provider.


Smart Grid: AI enhances energy distribution but is susceptible to attacks that can cause large-scale blackouts or grid instability. As in many cases, this stems from a growing dependence on AI and, eventually, a belief that it is superior to human decision-making, a belief that is especially dangerous in the face of a nation-state attack.


Telecommunications: AI manages network traffic efficiently, but hacking of the controlling AI systems can lead to service interruptions or data breaches. Again, as AI is trusted above human operators, telecommunications becomes increasingly susceptible to attacks on the AI itself, as opposed to the historical hacking of circuits or controllers.


Mass Transportation: AI improves scheduling and safety but can be a target for attacks, leading to widespread disruptions or accidents on a large scale.


Smart Cities: AI is used to integrate city services for improved efficiency and sustainability, but it presents the risk of coordinated cyberattacks affecting multiple services simultaneously. Critical infrastructure for cities mirrors some of the interactions and interconnectedness found at the national level, including power, water, communications, transportation, emergency services, etc. An attack by a foreign nation-state on a city's AI systems could have far-reaching effects, possibly extending to the county and state levels.

 

3. Lack of Human in the Loop (HITL) Risks

Speed and Throughput vs. Safety and Oversight: Given the demand for ever-faster networks and capabilities, removing HITL can enhance operational speed and accuracy, but it increases the risk of unchecked errors or malicious interventions, which is exactly what attackers seek.


Lack of Human Oversight: Critical decisions made without human oversight may lack contextual understanding, leading to inappropriate or harmful actions.


System Failures: In the absence of human intervention, AI system failures may go unnoticed until significant damage has occurred.


Security Vulnerabilities: Automated systems without HITL are more prone to being exploited without immediate human detection.

 


Risk-Specific Mitigation Strategies

 

1. Maintain Human Oversight

Human-AI collaboration should prioritize systems critical to human life, safety, and global peace. It is crucial to maintain human oversight for contextual understanding and intervention capability. Fail-safes should remain available past the decision point, so that an automated action can still be halted or reversed after activation. Examples include recall, disarm, and self-destruct commands, which let humans catch up and intervene promptly, especially when time is a critical factor. A minimal illustration of such a post-decision fail-safe appears at the end of this section.

Training and Simulation: Regularly train staff on AI systems and conduct simulations to prepare for potential failures or attacks.
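To make the fail-safe idea concrete, here is a minimal sketch, in Python, of a human-approval gate with a post-activation recall window. It is illustrative only: the names (request_human_approval, execute_with_failsafe, Action.GRID_SWITCHING) and the 30-second abort window are assumptions, stand-ins for whatever control and rollback interfaces a real operational system actually exposes.

import time
from enum import Enum, auto

class Action(Enum):
    GRID_SWITCHING = auto()   # e.g., re-routing power distribution (hypothetical)
    VALVE_CONTROL = auto()    # e.g., adjusting water-treatment dosing (hypothetical)

ABORT_WINDOW_SECONDS = 30     # assumed window in which a human can still recall the action

def request_human_approval(action: Action, details: str) -> bool:
    """Block until a human operator approves or rejects the proposed action."""
    answer = input(f"AI proposes {action.name}: {details}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_failsafe(action: Action, details: str, apply_fn, recall_fn) -> None:
    # 1. Human in the loop: a person decides before anything is executed.
    if not request_human_approval(action, details):
        print("Rejected by operator; nothing executed.")
        return

    # 2. Execute, but keep a recall path open past the decision point.
    apply_fn()
    deadline = time.time() + ABORT_WINDOW_SECONDS
    answer = input(f"{action.name} executed. Type 'recall' within {ABORT_WINDOW_SECONDS}s to reverse: ")

    # 3. Post-decision fail-safe: the action can still be undone after activation.
    if answer.strip().lower() == "recall" and time.time() < deadline:
        recall_fn()
        print("Action recalled by operator.")
    else:
        print("Abort window closed or recall declined; action stands.")

# Example (hypothetical): gate a grid re-routing behind approval and recall.
# execute_with_failsafe(Action.GRID_SWITCHING, "shift 40 MW to feeder 7",
#                       apply_fn=lambda: print("re-routing applied"),
#                       recall_fn=lambda: print("re-routing reversed"))

The point is not the specific commands but the structure: the AI proposes, a human disposes, and a reversal path remains available for a bounded time after execution.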

 

2. Develop and Enforce Regulations

Regulatory Compliance: In many, if not most, critical infrastructure sectors there are NO mandated security standards at all, much less AI-specific security requirements. This shocking and incredibly dangerous state must be reversed immediately. In the military, by contrast, violating security requirements gets your system or network unplugged. Critical infrastructure providers must adhere to industry regulations and standards for both AI and critical infrastructure; where such regulations are lacking, adoption of a recognized security framework (e.g., CIS CSC, NIST CSF, ISO 27000 series) is crucial.

 

___________



In summary, AI's integration into critical infrastructure and smart technologies presents significant risks.


In critical infrastructure, AI systems can optimize processes but also expose vulnerabilities, from power grids susceptible to cyberattacks, to water treatment systems that could be compromised, to manufacturing and medical services where malfunctions or hacks could lead to severe disruptions or loss of life.


The proliferation of IoT devices and smart systems in buildings, power grids, and cities has increased the attack surface, raising the risk of widespread service disruptions and security breaches. Moreover, removing human oversight to increase the speed and efficiency of these systems and services can allow unchecked errors or malicious actions to go uncaught, potentially causing unexpected, massive system failures.


To mitigate these risks, it is crucial to maintain human oversight with fail-safes and to develop stringent regulations to ensure AI systems in critical infrastructure adhere to high-security standards.

 


