Automotive silicon in the era of AI, functional safety, and cybersecurity

Functional safety must be addressed alongside cybersecurity and AI from the earliest stages of silicon design. The post Automotive silicon in the era of AI, functional safety, and cybersecurity appeared first on EDN.

Automotive silicon design is entering a phase where functional safety, cybersecurity and artificial intelligence (AI) can no longer be treated as separate concerns. In connected, software-defined vehicles, safety outcomes depend not only on protection against random hardware faults, but also on resilience to malicious interference and software vulnerabilities. As a result, many of the decisions that determine system safety are now made at the silicon architecture level.

When ISO 26262 was first published in 2011, it marked a major step forward in structuring functional safety for automotive electronics. But the vehicles being designed today are fundamentally different. Autonomous driving, electrification, AI-based perception, vehicle-to-everything (V2X) connectivity, and centralized compute architectures were not primary considerations at the time.

The core objective remains unchanged: to avoid hazards to people. However, the way this objective is achieved is now deeply tied to how safety is architected into semiconductor devices.

Functional safety is no longer just a system-level concern; it’s a design-time challenge for ASIC and SoC engineers. For many safety-critical functions, whether ISO 26262 targets can be met depends on decisions made in the earliest stages of silicon architecture.

A growing and converging standards landscape

The industry has responded to new challenges by expanding the safety and security framework. ISO 26262:2018 addresses functional safety of electrical/electronic systems in road vehicles, while ISO 21448 (SOTIF) covers hazards arising from functional insufficiencies of the intended behavior, even in the absence of faults. ISO/PAS 8800:2024 begins to address the safety implications of AI-based systems.

Alongside these, ISO/SAE 21434 introduces requirements for automotive cybersecurity, and platform-level schemes such as PSA Certified, while not automotive-specific, are shaping expectations for secure-by-design silicon, roots of trust, and independently evaluated security assurance.

In practice, these frameworks cannot be applied in isolation. Safety and cybersecurity requirements must be interpreted together and traced into silicon architecture, verification strategies, and ultimately the safety case. This convergence increases complexity, but it also reflects the reality of modern automotive systems: safety now depends on both fault tolerance and system integrity.

Figure 1 Functional safety is now a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design. Source: EnSilica

Safety is implemented in silicon

In today’s vehicles, many critical safety mechanisms are implemented directly in hardware. Fault detection, redundancy schemes, error correction, watchdogs, and safe-state control are embedded within ASICs and SoCs. Typical techniques include lockstep CPU architectures for execution monitoring, ECC-protected memories to detect and correct bit errors, and dedicated safety islands that supervise system health and enforce safe-state transitions.
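As an illustration of the ECC mechanism mentioned above, a minimal single-error-correcting Hamming(7,4) codec can be sketched in C. This is a textbook layout for illustration only; production automotive memories use wider SEC-DED codes that also detect double-bit errors:

```c
#include <stdint.h>

/* Illustrative SEC (single-error-correcting) Hamming(7,4) codec,
   showing the kind of ECC logic embedded in protected memories.
   Parity bits sit at 1-indexed positions 1, 2 and 4. */

/* Encode 4 data bits d3..d0 into a 7-bit codeword. */
static uint8_t hamming74_encode(uint8_t d)
{
    uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;   /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;   /* covers positions 4,5,6,7 */
    /* bit layout (LSB = position 1): p1 p2 d0 p4 d1 d2 d3 */
    return (uint8_t)(p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) |
                     (d1 << 4) | (d2 << 5) | (d3 << 6));
}

/* Decode: recompute parity, correct a single flipped bit, return data. */
static uint8_t hamming74_decode(uint8_t cw)
{
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t s4 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t syndrome = (uint8_t)(s1 | (s2 << 1) | (s4 << 2)); /* error position */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));  /* correct the flipped bit */
    return (uint8_t)(((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
                     (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3));
}
```

Flipping any single bit of a codeword yields a nonzero syndrome that points at the flipped position, which is exactly the detect-and-correct behavior the article describes for ECC-protected memories.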

These mechanisms are responsible for ensuring that faults are either corrected or managed in a way that prevents hazardous behavior. Increasingly, they must also be robust against unintended interactions and deliberate manipulation, not just random faults.

This creates a fundamental shift. Functional safety is no longer something that can be added at the system level; it must be designed into silicon architecture from the outset. Decisions around redundancy affect area and cost. Diagnostic features influence power consumption and performance. Detection latency must be balanced against system constraints. These trade-offs are often made before the full system context is completely defined.

At the same time, safety mechanisms are only effective if the system enforcing them remains trustworthy. Ensuring that trust is now a core architectural concern.

Cybersecurity as a determinant of safety

Cybersecurity is no longer adjacent to functional safety—it’s a determinant of it. A system that meets ASIL targets for random faults may still be unsafe if it can be compromised through software, interfaces, or update mechanisms. In connected vehicles, a maliciously induced fault can have the same or greater impact than a hardware failure.

At the silicon level, this translates into requirements for hardware roots of trust, secure boot, run-time integrity checking, and domain isolation. These mechanisms ensure that only authenticated software can control safety-critical functions and that faults or compromises in non-critical domains cannot propagate into safety paths.
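The measure-then-authorize step at the heart of secure boot can be sketched as follows. This is a minimal illustration, not a real root-of-trust API: the 32-bit FNV-1a digest stands in for the cryptographic hash and signature verification (e.g. SHA-2 plus ECDSA) a production design would use:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the measure-then-authorize step in a secure boot flow.
   FNV-1a is NOT cryptographically secure; it is a stand-in so the
   control flow is self-contained and runnable. */
static uint32_t fnv1a32(const uint8_t *buf, size_t len)
{
    uint32_t h = 0x811c9dc5u;            /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 0x01000193u;                /* FNV prime */
    }
    return h;
}

/* Returns 1 only if the measured image matches the reference digest
   held in immutable storage (e.g. OTP); on mismatch a real device
   would refuse to boot and remain in, or fall back to, a safe state. */
static int boot_authorized(const uint8_t *image, size_t len,
                           uint32_t reference_digest)
{
    return fnv1a32(image, len) == reference_digest;
}
```

The essential property is that the comparison anchor (the reference digest or public key) lives in hardware the running software cannot rewrite, which is what makes it a root of trust rather than just a checksum.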

From a design perspective, this expands the traditional fault model. In addition to random hardware failures, engineers must now consider adversarial conditions such as fault injection attacks, privilege escalation, and corrupted firmware. Safety architectures must be capable of detecting, containing, and responding to both types of failure.

The limits of the V-model in silicon development

ISO 26262 promotes the V-model as a structured development approach, moving from requirements to implementation and back through verification. While this provides a useful framework, it does not always reflect how safety-critical ASICs are developed in practice.

Silicon design requires early decisions that cut across the V-model structure. Process technology selection, architectural partitioning, testability, and diagnostic coverage must all be considered at a very early stage. These decisions directly influence safety mechanisms and compliance with ASIL requirements.

In reality, ASIC development is highly iterative, moving between architecture, implementation constraints, and verification. The goal is not strict adherence to a linear process, but maintaining traceability, safety intent, and configuration control throughout the design cycle.

Traditional safety analysis is under pressure

Safety analysis methods such as failure modes and effects analysis (FMEA) and fault tree analysis (FTA) remain foundational. However, their application at the ASIC level is becoming increasingly challenging.

Modern automotive SoCs integrate CPUs, AI accelerators, high-speed interfaces, and complex interconnect structures on a single device. Applying traditional analysis techniques at this scale is difficult, often requiring abstraction that introduces uncertainty.

As complexity increases, the question is no longer whether analysis has been performed, but whether it’s sufficient to capture all relevant failure modes, particularly when both accidental faults and adversarial conditions must be considered.

Toward simulation-driven safety verification

To address these challenges, the industry is moving toward more dynamic, simulation-driven approaches. Fault simulation, long used in semiconductor manufacturing test, is increasingly applied in a functional safety context.

Instead of simply identifying faults, the focus shifts to system response. When a fault is injected, engineers must determine whether it is detected, whether it is corrected, and whether the system transitions to a safe state within the required time.

This approach integrates safety analysis with design verification and provides more concrete evidence that safety mechanisms operate correctly under realistic conditions. Safety metrics such as the single-point fault metric (SPFM) and latent fault metric (LFM) can increasingly be supported by fault-injection and simulation-based evidence, alongside analytical safety analysis.
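How a fault-injection campaign feeds such a metric can be sketched as follows, assuming each injected fault has already been classified as safe, detected, or residual. Note that ISO 26262 defines SPFM over failure rates weighted by distribution; using raw fault counts, as here, is a simplification for illustration:

```c
/* Sketch: deriving a single-point fault metric from fault-injection
   results. Each injected fault is classified by observing the design's
   response: safe (no effect on the safety goal), detected (caught by a
   safety mechanism within the fault-tolerant time interval), or
   residual (reaches the output undetected, i.e. single-point/residual).
   SPFM = 1 - residual / total, with counts standing in for
   failure rates. */
typedef struct {
    unsigned safe;
    unsigned detected;
    unsigned residual;
} fault_campaign_t;

static double spfm(const fault_campaign_t *c)
{
    unsigned total = c->safe + c->detected + c->residual;
    if (total == 0)
        return 1.0;   /* no safety-related faults observed */
    return 1.0 - (double)c->residual / (double)total;
}
```

For reference, ASIL D targets an SPFM of at least 99%, so a campaign with 10 residual faults out of 1,000 injected would sit right at that boundary.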

Figure 2 The fault injection verification flow demonstrates how the design contains, detects, and corrects faults. Source: EnSilica

AI moves the challenge further into silicon

AI introduces both new risks and new opportunities for functional safety. On the hardware side, AI workloads are implemented in dedicated accelerators within automotive SoCs, further shifting safety responsibility into silicon.

Designers must consider how these accelerators behave under fault conditions and how their outputs are monitored and validated. On the system side, AI raises fundamental challenges around verification. Unlike deterministic logic, AI systems exhibit probabilistic behavior influenced by data and operating conditions.
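One common pattern for monitoring accelerator outputs is a deterministic plausibility check supervised outside the accelerator (for instance by a safety island). A hypothetical sketch for a classifier's probability vector, assuming the threshold and output format shown here:

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical plausibility monitor for an AI accelerator's classifier
   output: a deterministic envelope check that the probability vector is
   well formed (finite, within [0,1], summing to ~1). A supervisor can
   apply such checks even when it cannot re-compute the inference itself;
   the 1e-3 tolerance is an illustrative choice. */
static int output_plausible(const float *probs, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (!isfinite(probs[i]) || probs[i] < 0.0f || probs[i] > 1.0f)
            return 0;               /* NaN/Inf or out-of-range value */
        sum += probs[i];
    }
    return fabsf(sum - 1.0f) < 1e-3f;  /* distribution must sum to ~1 */
}
```

A failed check does not tell the system what the right answer was, but it does justify a transition to a degraded or safe state, which is the role the article assigns to output monitoring.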

AI also reinforces the convergence between safety and security. Ensuring the integrity of inputs, models and execution becomes critical, as corrupted data or manipulated models can lead directly to hazardous behavior.

Memory safety and system integrity

One emerging approach to improving robustness is the use of hardware-enforced memory safety. Capability-based architectures, such as CHERI, provide fine-grained control over memory access, reducing the likelihood that software defects or exploitable vulnerabilities propagate into safety-critical behavior.

By mitigating broad classes of memory-corruption vulnerabilities at the hardware level, these techniques contribute to both system integrity and functional safety, particularly in complex software-defined environments.
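The capability idea can be modeled in software to show the intent, though it should be stressed that real CHERI enforces this in the instruction set with tagged, unforgeable capabilities rather than plain structs:

```c
#include <stddef.h>
#include <stdint.h>

/* Software model of a capability-checked load, in the spirit of CHERI:
   every reference carries bounds and permissions that are checked on
   each access, so an out-of-bounds or unauthorized access faults
   instead of silently corrupting adjacent state. Field layout and
   permission bits are illustrative. */
typedef struct {
    const uint8_t *base;
    size_t         length;
    unsigned       perms;        /* bit 0: read, bit 1: write */
} capability_t;

enum { CAP_READ = 1u, CAP_WRITE = 2u };

static const uint8_t demo_buf[4] = { 7, 8, 9, 10 };
static const capability_t demo_cap = { demo_buf, sizeof demo_buf, CAP_READ };

/* Returns the loaded byte (0..255), or -1 to mimic a hardware
   capability fault on a bounds or permission violation. */
static int cap_load(const capability_t *cap, size_t offset)
{
    if (!(cap->perms & CAP_READ) || offset >= cap->length)
        return -1;               /* capability fault */
    return cap->base[offset];
}
```

The safety-relevant point is that the violation is caught at the access itself, before a buffer overrun can reach a safety-critical data structure.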

Designing for long-term security

Automotive systems are expected to operate reliably over long lifetimes, often exceeding a decade. This introduces additional challenges for cybersecurity.

Cryptographic mechanisms that are secure today may not remain so over the lifetime of the vehicle. As a result, there is growing interest in cryptographic agility and support for post-quantum cryptography (PQC), particularly for secure boot, firmware updates, and vehicle communications.
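Cryptographic agility typically means the algorithm is selected by an identifier in the image or message header rather than hard-wired into the boot ROM. A minimal sketch, with illustrative algorithm IDs and stub verifiers standing in for real ECDSA and ML-DSA (the NIST-standardized post-quantum signature scheme) implementations:

```c
#include <stddef.h>

/* Sketch of cryptographic agility for secure boot: a lookup table maps
   an algorithm identifier to a verification routine, so a post-quantum
   scheme can be added by registering a new entry instead of changing
   the boot flow. The stubs below always "succeed" purely so the
   dispatch logic is runnable; they are not real verifiers. */
typedef int (*verify_fn)(const unsigned char *msg, size_t len,
                         const unsigned char *sig);

enum { ALG_ECDSA_P256 = 1, ALG_ML_DSA_65 = 2 };   /* illustrative IDs */

static int verify_ecdsa_p256(const unsigned char *m, size_t n,
                             const unsigned char *s)
{ (void)m; (void)n; (void)s; return 1; }          /* stub */

static int verify_ml_dsa_65(const unsigned char *m, size_t n,
                            const unsigned char *s)
{ (void)m; (void)n; (void)s; return 1; }          /* stub */

static verify_fn lookup_verifier(int alg_id)
{
    switch (alg_id) {
    case ALG_ECDSA_P256: return verify_ecdsa_p256;
    case ALG_ML_DSA_65:  return verify_ml_dsa_65;
    default:             return NULL;  /* unknown algorithm: reject */
    }
}
```

Rejecting unknown identifiers by default is the important design choice: it allows algorithms to be retired as well as added over a vehicle's decade-plus lifetime.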

These considerations further reinforce the need to treat security as a foundational aspect of silicon design, rather than a feature added later in the development process.

However, the automotive industry does not need to abandon existing safety standards; instead, it must adapt how they are applied in the context of semiconductor design. Take, for instance, functional safety, which is no longer just a system integration challenge. It’s a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design.

At the silicon level, the distinction between safety and security is becoming increasingly artificial. Safety mechanisms must operate correctly in the presence of both accidental faults and malicious interference. This requires a unified architectural approach, where safety, security and system integrity are designed, verified, and validated together.

As vehicles become more intelligent, connected and autonomous, the role of custom silicon in delivering safe operation will only grow. The standards still matter, but increasingly, it’s silicon that determines whether those standards can be met in practice.

Enrique Martinez-Asensio is functional safety manager at EnSilica. He has more than 35 years of experience in the semiconductor industry, having worked on mixed-signal IC design, technical support, and management at several semiconductor companies.

