CES 2026: AI, automotive, and robotics dominate


If the Consumer Electronics Show (CES) is a benchmark for what’s next in the electronic component industry, you’ll find that artificial intelligence permeates every sector, from consumer electronics and wearables to automotive and robotics. Many chipmakers are placing big bets on edge AI as a key growth area, along with robotics and IoT.

Here’s a sampling of the latest devices and technologies launched at CES 2026, covering AI advances for automotive, robotics, and wearables applications.

AI SoCs, chiplets, and development

Ambarella Inc. announced its CV7 edge AI vision system-on-chip (SoC), optimized for a wide range of AI perception applications, such as advanced AI-based 8K consumer products (action and 360° cameras), multi-imager enterprise security cameras, robotics (aerial drones), industrial automation, and high-performance video conferencing devices. The 4-nm SoC provides simultaneous multi-stream video and advanced on-device edge AI processing while consuming very low power.

The CV7 may also be used for multi-stream automotive designs, particularly for those running convolutional neural networks (CNNs) and transformer-based networks at the edge, such as AI vision gateways and hubs in fleet video telematics, 360° surround-view and video-recording applications, and passive advanced driver-assistance systems (ADAS).

Compared with its predecessor, the CV7 consumes 20% less power, thanks in part to Samsung’s 4-nm process technology, which is Ambarella’s first on this node, the company said. It incorporates Ambarella’s proprietary AI accelerator, image-signal processor (ISP), and video encoding, together with Arm cores, I/Os, and other functions for an efficient AI vision SoC.

The high AI performance comes from Ambarella’s proprietary, third-generation CVflow AI accelerator, which delivers more than 2.5× the AI performance of the previous-generation CV5 SoC. This allows the CV7 to run a combination of CNNs and transformer networks in tandem.

In addition, the CV7 provides a higher-performance ISP, including high dynamic range (HDR), dewarping for fisheye cameras, and 3D motion-compensated temporal filtering, with better image quality than its predecessor thanks to both traditional ISP techniques and AI enhancements. It delivers high image quality in low light, down to 0.01 lux, as well as improved HDR for video and images.

Other upgrades include hardware-accelerated video encoding (H.264, H.265, MJPEG), which boosts encode performance by 2× over the CV5, and an on-chip general-purpose processing upgrade to a quad-core Arm Cortex-A73, offering 2× higher CPU performance than the previous SoC. It also provides a 64-bit DRAM interface, delivering a significant improvement in available DRAM bandwidth compared with the CV5, Ambarella said. CV7 SoC samples are available now.

Ambiq Micro Inc. introduced its Atomiq SoC, featuring what it calls the industry’s first ultra-low-power neural processing unit (NPU) built on its Subthreshold Power Optimized Technology (SPOT) platform. It is designed for real-time, always-on AI at the edge.

Balancing performance with low power consumption, the SPOT-optimized NPU is claimed to be the first to leverage sub- and near-threshold voltage operation for AI acceleration, delivering leading power efficiency for complex edge AI workloads. It builds on the Arm Ethos-U85 NPU, which supports sparsity and on-the-fly decompression, enabling compute-intensive workloads to run directly on-device with 200 GOPS of AI performance.

It also incorporates SPOT-based ultra-wide-range dynamic voltage and frequency scaling that enables operation at lower voltage and lower power than previously possible, Ambiq said, making room in the power budget for higher levels of intelligence.
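To see why operating near or below the transistor threshold voltage saves so much power, consider the standard first-order CMOS dynamic-power relation, P ≈ αCV²f, in which supply voltage enters quadratically. The sketch below uses made-up component values, not Ambiq’s figures:

```python
# First-order CMOS dynamic-power model: P = alpha * C_eff * Vdd^2 * f.
# Illustrative numbers only -- not Ambiq specifications.

def dynamic_power(c_eff_farads, v_dd, freq_hz, activity=0.1):
    """Approximate switching power of a digital block."""
    return activity * c_eff_farads * v_dd**2 * freq_hz

nominal = dynamic_power(1e-9, 1.0, 100e6)  # ~1.0-V nominal supply
near_vt = dynamic_power(1e-9, 0.4, 100e6)  # ~0.4-V near-threshold supply

print(f"nominal: {nominal * 1e3:.1f} mW, near-threshold: {near_vt * 1e3:.1f} mW")
# Dropping Vdd from 1.0 V to 0.4 V cuts dynamic power by ~6x (0.4^2 = 0.16)
# at the same clock, before accounting for leakage or frequency trade-offs.
```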

Ambiq said the Atomiq SoC enables a new class of high-performance, battery-powered devices that were previously impractical due to power and thermal constraints. One example is smart cameras and security for always-on, high-resolution object recognition and tracking without frequent recharging or active cooling.

For development, Ambiq offers the Helia AI platform, together with its AI development kits and the modular neuralSPOT software development kit.

Ambiq’s Atomiq SoC (Source: Ambiq Micro Inc.)

On the development side, Cadence Design Systems Inc. and its IP partners are delivering pre-validated chiplets, targeting physical AI, data center, and high-performance computing (HPC) applications. Cadence announced at CES a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. Initial IP partners include Arm, Arteris, eMemory, M31 Technology, Silicon Creations, and Trilinear Technologies, as well as silicon analytics partner proteanTecs.

The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets. To help reduce risk, Cadence is also collaborating with Samsung Foundry to build out a silicon prototype demonstration of the Cadence physical AI chiplet platform. This includes pre-integrated partner IP on the Samsung Foundry SF5A process.

Extending its close collaboration with Arm, Cadence will use Arm’s advanced Zena Compute Subsystem and other essential IP for the physical AI chiplet platform and chiplet framework. The solutions will meet edge AI processing requirements for automobiles, robotics, and drones, as well as standards-based I/O and memory chiplets for data center, cloud, and HPC applications.

These chiplet architectures are standards-compliant for broad interoperability across the chiplet ecosystem, including the Arm Chiplet System Architecture and future OCP Foundational Chiplet System Architecture. Cadence’s Universal Chiplet Interconnect Express (UCIe) IP provides industry-standard die-to-die connectivity, with a protocol IP portfolio that enables fast integration of interfaces such as LPDDR6/5X, DDR5-MRDIMM, PCI Express 7.0, and HBM4.

Cadence’s physical AI chiplet platform (Source: Cadence Design Systems Inc.)

NXP Semiconductors N.V. launched its eIQ Agentic AI Framework at CES 2026, one of the first solutions to enable agentic AI development at the edge, according to the company. Working together with NXP’s secure edge AI hardware, the framework simplifies agentic AI development and deployment for both expert and novice device makers, eliminating development bottlenecks with deterministic real-time decision-making and multi-model coordination for autonomous AI systems at the edge.

Offering low latency and built-in security, the eIQ Agentic AI Framework is designed for real-time, multi-model agentic workloads, including applications in robotics, industrial control, smart buildings, and transportation. A few examples cited include instantly controlling factory equipment to mitigate safety risks, alerting medical staff to urgent conditions, updating patient data in real time, and autonomously adjusting HVAC systems, without cloud connectivity.

Expert developers can integrate sophisticated multi-agent workflows into existing toolchains, while novices can quickly build functional edge-native agentic systems without deep technical experience.

The framework integrates hardware-aware model preparation and automated tuning workflows. It enables developers to run multiple models in parallel, including vision, audio, time series, and control, while maintaining deterministic performance in constrained environments, NXP said. Workloads are distributed across CPU, NPU, and integrated accelerators using an intelligent scheduling engine.
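NXP has not detailed the scheduling engine’s internals here, but the general idea of mapping mixed workloads onto the best-suited compute units can be sketched generically. The placement policy, names, and deadlines below are hypothetical, not the eIQ API:

```python
# Hypothetical sketch of placing multiple models on a heterogeneous edge SoC.
# Illustrative only -- not the eIQ Agentic AI Framework's actual API.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str           # "vision", "audio", "timeseries", or "control"
    deadline_ms: float  # deterministic latency budget per inference

# Assumed policy: heavy tensor work on the NPU, audio on a DSP-style
# accelerator, latency-critical control loops pinned to the CPU.
PLACEMENT = {"vision": "NPU", "timeseries": "NPU", "audio": "DSP", "control": "CPU"}

def schedule(workloads):
    """Assign each workload to a compute unit, tightest deadline first."""
    plan = {}
    for w in sorted(workloads, key=lambda w: w.deadline_ms):
        plan[w.name] = PLACEMENT[w.kind]
    return plan

jobs = [
    Workload("object-detector", "vision", 33.0),
    Workload("keyword-spotter", "audio", 20.0),
    Workload("motor-loop", "control", 1.0),
]
print(schedule(jobs))
# {'motor-loop': 'CPU', 'keyword-spotter': 'DSP', 'object-detector': 'NPU'}
```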

The eIQ Agentic AI Framework supports the i.MX 8 and i.MX 9 families of application processors and Ara discrete NPUs. It aligns with open agentic standards, including Agent to Agent and Model Context Protocol.

NXP has also introduced its eIQ AI Hub, a cloud-based developer platform that gives users access to edge AI development tools for faster prototyping. Developers can deploy on cloud-connected hardware boards but still have the option for on-premise deployments.

NXP’s Agentic AI framework (Source: NXP Semiconductors N.V.)

Sensing solutions

Bosch Sensortec launched its BMI5 motion sensor platform at CES 2026, targeting high-precision performance for a range of applications, including immersive XR systems, advanced robotics, and wearables. The new generation of inertial sensors—BMI560, BMI563, and BMI570—is built on the same hardware and is adapted through intelligent software.

Based on Bosch’s latest MEMS architecture, these inertial sensors, housed in an LGA package, are claimed to offer ultra-low noise and exceptional vibration robustness, along with twice the full-scale range of the previous generation. Key specifications include a latency of less than 0.5 ms, a time increment of approximately 0.6 µs, and a timing resolution of 1 ns, enabling responsive motion tracking in highly dynamic environments.

The sensors also leverage a programmable edge AI classification engine that supports always-on functionality by analyzing motion patterns directly on the sensor. This reduces system power consumption and accelerates customer-specific use cases, the company said.

The BMI560, optimized for XR headsets and glasses, delivers low noise, low latency, and precise time synchronization. Its advanced optical image stabilization (OIS+) performance also helps smartphones and action cameras capture high-quality footage in dynamic environments.

Targeting robotics and XR controllers, the BMI563 offers an extended full-scale range with the platform’s vibration robustness. It supports simultaneous localization and mapping, high dynamic XR motion tracking, and motion-based automatic scene tagging in action cameras.

The BMI570, optimized for wearables and hearables, delivers activity tracking, advanced gesture recognition, and accurate head-orientation data for spatial audio. Thanks to its robustness, it is suited for next-generation wearables and hearables.

Samples are now available for direct customers. High-volume production is expected to start in the third quarter of 2026.

Bosch also announced the BMI423 inertial measurement unit (IMU) at CES. The BMI423 IMU offers an extended measurement range of ±32 g (accelerometer) and ±4,000 dps (gyroscope), which enable precise tracking of fast, dynamic motion, making it suited for wearables, hearables, and robotics applications.

The BMI423 delivers low current consumption of 25 µA for always-on, acceleration-based applications in small devices. Other key specifications include low noise of 5.5 mdps/√Hz for the gyroscope and 90 µg/√Hz (≤8-g range) or 120 µg/√Hz (≥16-g range) for the accelerometer, along with several interface options, including I3C, I2C, and serial peripheral interface (SPI).

For wearables and hearables, the BMI423 integrates voice activity detection based on bone-conduction sensing, which helps save power while enhancing privacy, Bosch said. The sensor detects when a user is speaking and activates the microphone only when required. Other on-board functions include wrist-gesture recognition, multi-tap detection, and step counting, allowing the main processor to remain in sleep mode until needed and extending battery life in compact devices such as smartwatches, earbuds, and fitness bands.
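The power-saving pattern behind this, sleeping until the sensor’s on-chip voice-activity interrupt fires and only then enabling the microphone, can be sketched as follows. The function names are hypothetical placeholders, not Bosch’s driver API:

```python
# Hypothetical host-side sketch of interrupt-gated microphone capture.
# The voice-activity detection runs on the IMU itself; the functions below
# are placeholders, not Bosch Sensortec's actual driver API.
import time

def wait_for_vad_interrupt():
    """Stand-in for MCU deep sleep with wake-on-interrupt from the sensor."""
    time.sleep(0.1)  # the real system would sleep until the IRQ line fires
    return True

def capture_audio(duration_s):
    """Stand-in for powering the microphone/codec and streaming audio."""
    print(f"mic enabled for {duration_s:.1f} s")

for _ in range(3):  # a few wake cycles for the example
    if wait_for_vad_interrupt():  # bone-conduction VAD fired on-sensor
        capture_audio(2.0)        # mic draws power only while speech lasts
```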

The BMI423 is housed in a compact, 2.5 × 3 × 0.8-mm LGA package for space-constrained devices and will be available through Bosch Sensortec’s distribution partners starting in the third quarter of 2026.

Bosch Sensortec’s BMI563 IMU for robotics (Source: Bosch Sensortec)

Also targeting hearables and wearables, TDK Corp. launched a suite of InvenSense SmartMotion custom sensing solutions for true wireless stereo (TWS) earbuds, AI glasses, augmented-reality eyewear, smartwatches, fitness bands, and other IoT devices. The three newest IMUs are based on TDK’s latest ultra-low-power, high-performance ICM-456xx family, which the company says offers edge intelligence for consumer devices with the highest motion-tracking accuracy.

Instead of relying on a central processor, SmartMotion on-chip software offloads motion-tracking computation to the sensor itself, so intelligent decisions can be made locally while other parts of the system remain in low-power mode, TDK said. In addition, the sensor-fusion algorithm and machine-learning capability are reported to deliver seamless motion sensing with minimal software effort by the customer.

The SmartMotion solutions, based on the ICM-456xx family of six-axis IMUs, include the ICM-45606 for TWS applications such as earbuds, headphones, and other hearable products; the ICM-45687 for wearable and IoT devices; and the ICM-45685 for smart glasses. Through its on-chip sensor-fusion algorithms, the ICM-45685 now enables wear detection (sensing whether users are putting glasses on or taking them off) and vocal vibration detection for identifying the source of speech. It also enables high-precision head-orientation tracking, optical/electronic image stabilization, intuitive UI control, posture recognition, and real-time translation.

TDK’s SmartMotion ICM-45685 (Source: TDK Corp.)

TDK also announced a new group company, TDK AIsight, to address technologies needed for AI glasses. The company will focus on developing custom chips, cameras, and AI algorithms that enable end-to-end system solutions, combining software technologies such as eye-intent tracking with multiple TDK hardware technologies, including sensors, batteries, and passive components.

As part of the launch, TDK AIsight introduced the SED0112 microprocessor for AI glasses. The next-generation, ultra-low-power digital-signal-processor (DSP) platform integrates a microcontroller (MCU), a state machine, and a hardware CNN engine optimized for eye intent. The MCU combines ultra-low-power DSP processing with interfaces to eyeGenI sensors and a host processor.

The SED0112, housed in a 4.6 × 4.6-mm package, supports the TDK AIsight eyeGI software and multiple vision sensors at different resolutions. Commercial samples are available now.

SDV devices and development

Infineon Technologies AG and Flex launched their Zone Controller Development Kit. The modular design for zone control units (ZCUs) is aimed at accelerating the development of software-defined-vehicle (SDV)-ready electrical/electronic architectures. Delivering a scalable solution, the development kit combines about 30 unique building blocks.

With the building-block approach, developers can right-size their designs for different implementations while preserving feature headroom for future models, the company said. The design platform provides more than 50 power-distribution channels, 40 connectivity channels, and 10 load-control channels for evaluation and early application development. A dual-MCU plug-on module is available for high-end ZCU implementations that need high I/O density and computational power.

The development kit enables all essential zone control functions, including I²t (ampere-squared-seconds) overcurrent protection, overvoltage protection, capacitive load switching, reverse-polarity protection, secure data routing with hardware accelerators, A/B swap for over-the-air software updates, and cybersecurity. The pre-validated hardware combines automotive semiconductor components from Infineon (AURIX MCUs, OPTIREG power supply, PROFET and SPOC smart power switches, and MOTIX motor control solutions) with Flex’s design, integration, and industrialization expertise. Pre-orders for the Zone Controller Development Kit are open now.
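For reference, I²t protection integrates the square of the load current over time and trips when the accumulated value exceeds a wire-dependent limit. A minimal sketch of such an accumulator, with illustrative thresholds rather than values from the kit:

```python
# Minimal I^2t (ampere-squared-seconds) trip accumulator.
# Thresholds are illustrative, not values from the Infineon/Flex kit.

def i2t_trips(samples_amps, dt_s, limit_a2s, i_nominal_amps):
    """Trip when the integral of (I^2 - I_nominal^2) dt exceeds the limit."""
    acc = 0.0
    for i in samples_amps:
        acc += (i**2 - i_nominal_amps**2) * dt_s
        acc = max(acc, 0.0)  # the wire cools at or below the nominal load
        if acc > limit_a2s:
            return True
    return False

# A 20-A overload on a 10-A-rated channel, sampled every 1 ms, accumulates
# (400 - 100) * 0.001 = 0.3 A^2.s per sample and trips after ~334 ms.
overload = [20.0] * 500
print(i2t_trips(overload, 1e-3, limit_a2s=100.0, i_nominal_amps=10.0))  # True
```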

Infineon and Flex’s Zone Controller Development Kit (Source: Infineon Technologies AG)

Infineon also announced a deeper collaboration with HL Klemove to advance technologies in vehicle electronic architectures for SDVs and autonomous driving. This strategic partnership will leverage Infineon’s semiconductor and system expertise with HL Klemove’s capabilities in advanced autonomous-driving systems.

The three key areas of collaboration are ZCUs, vehicle Ethernet-based ADAS and camera solutions, and radar technologies.

The companies will jointly develop zone controller applications using Infineon’s MCUs and power semiconductors, with HL Klemove as the lead in application development. Enabling high-speed in-vehicle network solutions, the partnership will also develop front camera modules and ADAS parking control units, leveraging Infineon’s Ethernet technology, while HL Klemove handles system and product development.

Lastly, HL Klemove will use Infineon’s radar semiconductor solutions to develop high-resolution and short-range satellite radar. They will also develop high-resolution imaging radar for precise object recognition.

NXP introduced its S32N7 super-integration processor series, designed to centralize core vehicle functions across the propulsion, vehicle dynamics, body, gateway, and safety domains. Targeting SDVs, the S32N7 series becomes the vehicle’s central AI control point, with access to core vehicle data and high compute performance.

Enabling scalable hardware and software across models and brands, the S32N7 simplifies vehicle architectures and reduces total cost of ownership by as much as 20%, according to NXP, by eliminating dozens of hardware modules and delivering enhanced efficiencies in wiring, electronics, and software.

NXP said that by centralizing intelligence, automakers can scale intelligent features, such as personalized driving, predictive maintenance, and virtual sensors. In addition, the high-performance data backbone on the S32N7 series provides a future-proof path for upgrading to the latest AI silicon without re-architecting the vehicle.

The S32N7 series, part of NXP’s S32 automotive processing platform, offers 32 compatible variants that provide application and real-time compute with high-performance networking, hardware isolation technology, AI, and data acceleration on an SoC. They also meet the strict timing, safety, and security requirements of the vehicle core.

Bosch announced that it is the first to deploy the S32N7 in its vehicle integration platform. NXP and Bosch have co-developed reference designs, safety frameworks, hardware integration, and an expert enablement program.

The S32N79, the superset of the series, is sampling now with customers.

NXP’s S32N7 super-integration processor series (Source: NXP Semiconductors N.V.)

Texas Instruments Inc. (TI) expanded its automotive portfolio for ADAS and SDVs with a range of automotive semiconductors and development resources for automotive safety and autonomy across vehicle models. The devices include the scalable TDA5 HPC SoC family, which offers power- and safety-optimized processing and edge AI; the single-chip AWR2188 8 × 8 4D imaging radar transceiver, designed to simplify high-resolution radar systems; and the DP83TD555J-Q1 10BASE-T1S Ethernet physical layer (PHY).

The TDA5 SoC family offers edge AI acceleration from 10 TOPS to 1,200 TOPS, with power efficiency beyond 24 TOPS/W. This scalability is enabled by its chiplet-ready design with UCIe interface technology, TI said, which lets designers implement different feature sets.
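Taken together, those two figures bound the power envelope: dividing peak TOPS by the quoted efficiency gives a rough ceiling on power draw. A quick back-of-the-envelope reading, not a published power spec:

```python
# Back-of-the-envelope power budgets implied by TI's TOPS and TOPS/W figures.
# Rough arithmetic for orientation, not datasheet power specifications.
EFFICIENCY_TOPS_PER_W = 24.0  # "beyond 24 TOPS/W", so these are upper bounds

for tops in (10, 24, 1200):
    watts = tops / EFFICIENCY_TOPS_PER_W
    print(f"{tops:>5} TOPS -> at most ~{watts:.1f} W at 24 TOPS/W")
# 10 TOPS -> ~0.4 W, 24 TOPS -> ~1.0 W, 1200 TOPS -> ~50.0 W
```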

The TDA5 SoCs provide up to 12× the AI computing of previous generations with similar power consumption, thanks to the integration of TI’s C7 NPU, eliminating the need for thermal solutions. This performance supports billions of parameters within language models and transformer networks, which increases in-vehicle intelligence while maintaining cross-domain functionality, the company said. It also features the latest Arm Cortex-A720AE cores, enabling the integration of more safety, security, and computing applications.

Supporting up to SAE Level 3 vehicle autonomy, the TDA5 SoCs target cross-domain fusion of ADAS, in-vehicle infotainment, and gateway systems within a single chip and help automakers meet ASIL-D safety standards without external components.

TI is partnering with Synopsys to provide a virtual development kit for TDA5 SoCs. The digital-twin capabilities help engineers accelerate time to market for their SDVs by up to 12 months, TI said.

The AWR2188 4D imaging radar transceiver integrates eight transmitters and eight receivers into a single launch-on-package chip for both satellite and edge architectures. This integration simplifies higher-resolution radar systems because 8 × 8 configurations do not require cascading, TI said, while scaling up to higher channel counts requires fewer devices.
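The arithmetic behind that claim is standard MIMO virtual-array math: the effective array size equals the number of transmitters times the number of receivers, so one 8 × 8 chip already yields 64 virtual channels:

```python
# MIMO radar virtual-array arithmetic (general radar math, not
# AWR2188-specific specifications).

def virtual_channels(n_tx, n_rx):
    """Each transmit/receive pair acts as one virtual receive element."""
    return n_tx * n_rx

print(virtual_channels(8, 8))          # 64 channels from a single 8x8 chip
print(virtual_channels(2 * 3, 2 * 4))  # 48 channels from two cascaded
                                       # 3TX/4RX devices (a common older
                                       # configuration, for comparison)
```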

The AWR2188 offers enhanced analog-to-digital-converter data processing and a radar chirp signal slope engine, both supporting 30% faster performance than currently available solutions, according to the company. It supports advanced radar use cases such as detecting lost cargo, distinguishing between closely positioned vehicles, and identifying objects in HDR scenarios. The transceiver can also detect objects more accurately at distances beyond 350 meters.

With Ethernet an enabler of SDVs and higher levels of autonomy, the DP83TD555J-Q1 10BASE-T1S Ethernet PHY, with an SPI host interface and an integrated media access controller, offers nanosecond time synchronization, as well as high reliability and Power over Data Line capability. It brings high-performance Ethernet to vehicle edge nodes while reducing cable design complexity and costs, TI said.

The TDA54 software development kit is now available on TI.com. The TDA54-Q1 SoC, the first device in the family, will begin sampling to select automotive customers by the end of 2026. Pre-production quantities of the AWR2188 transceiver and its evaluation module, as well as the DP83TD555J-Q1 10BASE-T1S Ethernet PHY and its evaluation module, are now available on request at TI.com.

Robotics: processors and modules

Qualcomm Technologies Inc. introduced a next-generation, comprehensive robotics-stack architecture that integrates hardware, software, and compound AI. As part of the launch, Qualcomm also introduced its latest high-performance robotics processor, the Dragonwing IQ10 Series, for industrial autonomous mobile robots and advanced full-sized humanoids.

The Dragonwing industrial processor roadmap supports a range of general-purpose robotics form factors, including humanoid robots from Booster, VinMotion, and other global robotics providers. The architecture supports advanced perception and motion planning with end-to-end AI models such as vision-language-action models (VLAs) and VMAs, enabling generalized manipulation capabilities and human-robot interaction.

Qualcomm’s general-purpose robotics architecture with the Dragonwing IQ10 combines heterogeneous edge computing, edge AI, mixed-criticality systems, software, machine-learning operations, and an AI data flywheel, along with a partner ecosystem and a suite of developer tools. This portfolio enables robots to reason about and adapt intelligently to spatial and temporal environments, Qualcomm said, and is optimized to scale across various form factors with industrial-grade reliability.

Qualcomm’s growing partner ecosystem for its robotics platforms includes Advantech, APLUX, AutoCore, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion.

Qualcomm’s Dragonwing IQ10 industrial processor (Source: Qualcomm Technologies Inc.)

Quectel Wireless Solutions released its SH602HA-AP smart robotic computing module. Based on the D-Robotics Sunrise 5 (X5M) chip platform and running an integrated Ubuntu operating system, the module delivers up to 10 TOPS of brain-processing-unit computing power. It targets demanding robotic workloads, supporting advanced large-scale models such as Transformer, Bird’s-Eye View, and Occupancy.

The module works seamlessly with Quectel’s independent LTE Cat 1, LTE Cat 4, 5G, Wi-Fi 6, and GNSS modules, offering expanded connectivity options and supporting a broader range of robotics use cases, including smart displays, express lockers, electric power equipment, industrial control terminals, and smart home appliances.

The module, measuring 40.5 × 40.5 × 2.9 mm, operates over the –25°C to 85°C temperature range. It ships with a default memory configuration of 4 GB plus 32 GB and offers numerous other memory options. It supports data input and fusion processing for multiple sensors, including LiDAR, structured light, time-of-flight, and voice, meeting the AI and vision requirements of robotic applications.

The module supports 4K video at 60 fps with video encoding and decoding, binocular depth processing, AI and visual simultaneous localization and mapping, speech recognition, 3D point-cloud computing, and other mainstream robot perception algorithms. It provides Bluetooth, DSI, RGMII, USB 3.0, USB 2.0, SDIO, and QSPI interfaces, along with seven UARTs, seven I2C ports, and two I2S ports.

The module integrates easily with additional Quectel modules, such as the KG200Z LoRa and the FCS950 Wi-Fi and Bluetooth module for more connectivity options.

Quectel’s SH602HA-AP smart robotic computing module (Source: Quectel Wireless Solutions)
