Improving trust in autonomous technology

Model-based systems engineering is a full lifecycle approach that uses modeling to explore the behavior of a system, the interactions of components, and intersections with potential future environments. This allows for the simulation and prediction of system behavior under different circumstances, enabling developers to proactively seek weaknesses or threats. These and other methodologies will change how AI- and robotics-powered products are developed and validated, ultimately reducing cost and time to market.

Over time, Kimmel predicts, safety and testing collaboration between ecosystem partners will itself generate new standards and leading practices for validation and verification, paving the way for seamless, safe, and widespread deployment of autonomous systems across sectors.

EY-Parthenon teams support original equipment manufacturers (OEMs) in autonomous systems integration. This includes developing safety strategies and performance indicators, curating training data, training algorithms, and developing digital twins, such as digitized versions of human-defined “road rules” that could boost transparency in autonomous vehicle safety. “We also support the development of testing and evaluation tools that create interoperable live virtual constructive test environments, as well as cataloging performance data and creating ‘test databases’ of common operating cases and known risks,” says Kimmel. “This allows participants to benchmark performance, for instance, on issues like pedestrian interactions as a factor for autonomous vehicle safety.”

Looking to the future, Kimmel outlines five coming trends in the autonomous systems industry.

  • Trust will be key for autonomous systems, both for consumers and regulators. As a result, companies are building cultures of safety and risk management, such as through safety management systems (SMS).
  • Interoperability and virtual testing will become an imperative. Different systems may need to interact effectively with one another and be tested together in virtual test environments. These environments and testing toolchains will be able to assess performance across a wide range of potential scenarios and conditions far more quickly than physical testing can.
  • Safety performance indicators will level up. The industry likely needs to shift from conventional approaches, like numbers of crashes or failures, to predictive metrics like incursions into a “safety envelope,” erratic or unpredictable motion control, and latency—and to provide evidence of the predictive power of these new metrics.
  • Standards and common verification systems will offer credibility as emerging technologies scale. Without standards, a fragmented approach to safety may prove detrimental to the industry. Companies that take proactive approaches to shaping and complying with standards can reduce risks and build a competitive advantage.  
  • Governments will take a proactive role both to regulate and to accelerate. Governments function both as regulators and as catalysts for R&D, raising safety concerns while also accelerating the development of strategies and enabling technologies for safer AI and robotic systems.
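To make the shift from lagging to predictive metrics concrete, here is a minimal, hypothetical sketch (not EY-Parthenon tooling; the function name, logged values, and 2.0-meter threshold are illustrative assumptions) of how a leading indicator like “safety envelope incursions” might be counted from logged trajectory data, rather than waiting for crash or failure counts:

```python
# Illustrative sketch of a predictive safety metric: counting discrete
# events where an autonomous vehicle's gap to the nearest obstacle drops
# below a defined "safety envelope," instead of tallying lagging
# indicators such as crashes.

def envelope_incursions(distances, safe_distance):
    """Count distinct events where the measured gap falls below
    the safety envelope, treating consecutive below-threshold
    samples as a single incursion."""
    incursions = 0
    inside = False
    for d in distances:
        if d < safe_distance and not inside:
            incursions += 1   # a new incursion event begins
            inside = True
        elif d >= safe_distance:
            inside = False    # the vehicle has restored its envelope
    return incursions

# Hypothetical log: gap (meters) to the nearest pedestrian over ten
# timesteps, evaluated against a 2.0 m safety envelope.
log = [3.1, 2.5, 1.8, 1.6, 2.2, 2.4, 1.9, 1.7, 2.1, 3.0]
print(envelope_incursions(log, 2.0))  # prints 2
```

A metric like this can be trended per mile driven long before any crash occurs, which is the sense in which the industry would also need to demonstrate its predictive power against real-world outcomes.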

Learn more about EY-Parthenon disruptive technology solutions at ey.com/us/disruptivetech.

The views expressed in this article are not necessarily the views of Ernst & Young LLP or other members of the global EY organization.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
