
Tech Trends 2030: The next era of generative AI
This Tech Trends report explores generative industrial AI developments and their industry impact. Uncover key trends and future scenarios.


The field of AI encompasses a wide range of disciplines and technologies. This glossary of key terms can help you broaden your understanding and delve deeper into this fascinating world.
Agentic AI refers to advanced AI systems that go beyond merely responding to commands; they generate content, autonomously execute tasks, and achieve goals. These systems combine reasoning capabilities, memory functions, and feedback loops to independently plan and perform actions, often utilizing various digital tools and adapting their approach through learning. Unlike traditional AI, agentic AI can operate both independently and collaboratively with other AI agents, making autonomous decisions while interfacing with different platforms and systems to complete complex tasks.
In the industrial context, Agentic AI involves deploying AI systems that can independently monitor, analyze, and control various aspects of industrial operations, such as predictive maintenance, quality control, inventory management, or the optimization of production processes.
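As a rough illustration, the sketch below shows the observe-decide-act loop at the core of such systems. It is a minimal assumption-laden example: read_sensor and schedule_maintenance are hypothetical stubs, and real agentic systems layer planning, memory stores, and tool integrations on top of this cycle.

```python
# Minimal agentic loop sketch: observe, remember, decide, act.
# read_sensor and schedule_maintenance are hypothetical stand-ins.

def read_sensor():
    return {"vibration_mm_s": 7.2, "temperature_c": 81.0}  # hypothetical reading

def schedule_maintenance(asset_id):
    print(f"Maintenance ticket created for {asset_id}")    # stand-in for an API call

def agent_step(memory, asset_id="pump-01"):
    observation = read_sensor()                  # observe the environment
    memory.append(observation)                   # update working memory
    recent = memory[-5:]
    avg_vib = sum(o["vibration_mm_s"] for o in recent) / len(recent)
    if avg_vib > 6.0:                            # decide against a goal (asset health)
        schedule_maintenance(asset_id)           # act through a digital tool
    return memory

memory = []
for _ in range(3):
    memory = agent_step(memory)
```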
Artificial Intelligence (AI) refers to software that has the ability to learn and adapt. AI can solve tasks that require it to interpret the meaning of input data and adapt to the requirements. Typically, these are tasks that previously could only be solved by natural intelligence. There are several types of AI methods, which differ significantly in their fields of application, their potential, and the risks associated with them. The basic principles of AI were developed in the 20th century. Because AI methods require large amounts of training data, the technology has gained critical relevance with the rise of digitization and big data.
Augmented Reality (AR) is a technology that allows digital information to be overlaid on real-world environments and objects, typically viewed through devices such as smartphones, tablets, or headsets. AR creates an enhanced version of the physical world by adding digital visual, sound, and other sensory elements.
Autonomous Systems are systems that can operate without human intervention, such as self-driving cars and drones.
Autonomous Vehicles are vehicles that can operate without human intervention, such as self-driving cars and trucks.
Bias refers to unintended prejudice or favoritism that may occur in AI systems due to biased training data or algorithms.
Big Data refers to large and complex data sets, often generated by (industrial) sensors, but also by companies, organizations, and people. Because this data is often unstructured, incomplete, or incorrect, non-AI-powered software usually cannot process it in a meaningful way.
A Chatbot is an AI-powered program that can interact with humans through text or voice communication.
Cognitive Computing is a type of AI that aims to replicate human cognitive processes, such as perception, reasoning, and decision-making.
Computer Vision is a subset of AI that allows computers to extract information from visuals, such as images and videos, in order to understand and interpret them.
Cybersecurity comprises strategies, measures, and tools that help secure digital information from external attackers. AI can be used to detect and prevent cyberattacks and to identify and respond to security breaches.
Data Analytics is the process of analyzing and interpreting data to uncover insights and make informed decisions.
Decision Support Systems are computer systems designed to assist humans in making decisions by providing relevant information and analysis.
Deep Learning is a subset of Machine Learning that uses neural networks with multiple layers to enable machines to learn from data.
A Digital Twin is a mathematical model that describes the behavior of a physical object or process. In a simulation environment, a digital twin can be used to simulate what would happen in the real world if the parameters of the system were changed. Digital twins can be used throughout the product lifecycle, including the design, manufacturing, operation, and service phases. Visual representations of digital twins look and behave like their physical counterparts, mirroring the real world and adapting in real time to what is happening there.
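A minimal sketch of this "what if" use, assuming a toy first-order thermal model of a motor with made-up parameters; real digital twins are far richer and are calibrated against live sensor data.

```python
# Toy digital twin: a motor's temperature as a first-order thermal model.
def simulate_motor_temp(load_kw, ambient_c=20.0, steps=200, dt=0.1):
    """Euler integration of dT/dt = heating * load - cooling * (T - ambient)."""
    heating, cooling = 0.2, 0.05   # hypothetical model parameters
    temp_c = ambient_c
    for _ in range(steps):
        temp_c += (heating * load_kw - cooling * (temp_c - ambient_c)) * dt
    return temp_c

# "What if" experiment: how hot does the motor run if the load increases?
print(f"At 5 kW: {simulate_motor_temp(5.0):.1f} C")
print(f"At 8 kW: {simulate_motor_temp(8.0):.1f} C")
```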
Edge Computing is a type of system architecture that, unlike cloud computing, brings computing and data storage closer to the data sources (the "edge"). It helps to reduce response times and the amount of energy required for data transfer. Edge AI systems can be implemented physically close to the actual execution device. These devices can run AI applications without being connected to the cloud.
Embodied AI is AI that is designed to interact with and navigate the physical world, often through the use of robots or autonomous vehicles.
Ethics in AI is the study and application of moral principles in the development and use of AI, including issues such as bias, privacy, and accountability.
Explainable AI (XAI) is AI that is designed to be transparent and explainable, enabling humans to understand how and why a machine made a particular decision.
Federated Learning is a training method in machine learning where multiple separate devices each train a machine learning model on their own (separate) dataset. Only the end results are shared with the main actor in the network.
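A minimal sketch of the idea, assuming each device fits a one-parameter linear model on its own private data and shares only the fitted weight; the widely used federated averaging algorithm aggregates neural-network weights in the same spirit.

```python
# Federated sketch: each device fits y = w * x locally; only w is shared.

def local_fit(xs, ys):
    """Least-squares slope for y = w * x, computed on-device."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical private datasets, one per device; they never leave the device.
device_data = [
    ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2]),
    ([1.0, 2.0, 4.0], [1.8, 4.1, 8.3]),
    ([2.0, 3.0, 5.0], [4.2, 5.8, 9.9]),
]

local_weights = [local_fit(xs, ys) for xs, ys in device_data]  # trained locally
global_weight = sum(local_weights) / len(local_weights)        # only results aggregated
print(f"Aggregated model weight: {global_weight:.2f}")         # ~2.0
```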
Generative AI is AI that is designed to generate new content, such as images, videos, and music, by combining and learning from existing content.
Generative Design is the capability of an application, e.g., CAD software, to autonomously generate a number of design alternatives given a set of constraints. It uses techniques such as AI, optimization, and simulation.
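A minimal sketch of constrained design-space exploration, assuming a toy beam with made-up mass and stiffness proxies; production tools combine AI, optimization, and simulation rather than pure random sampling.

```python
# Toy generative design: find the lightest beam that is still stiff enough.
import random

def stiffness(width_mm, height_mm):
    return width_mm * height_mm ** 3 / 1e4   # toy proxy, not real mechanics

def mass(width_mm, height_mm):
    return width_mm * height_mm              # toy proxy

candidates = []
for _ in range(10000):
    w, h = random.uniform(5, 50), random.uniform(5, 50)
    if stiffness(w, h) >= 100.0:             # constraint: stiff enough
        candidates.append((mass(w, h), w, h))

best_mass, w, h = min(candidates)            # lightest design meeting the constraint
print(f"Lightest feasible design: {w:.1f} x {h:.1f} mm, mass proxy {best_mass:.1f}")
```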
Industrial AI refers to the application of AI within the industries that form the backbone of our economies – industry, infrastructure, mobility and healthcare.
Industrial Foundation Models (IFMs) are pre-trained on industry-specific data to deeply understand the “language” of engineering, automation, and manufacturing, and to enable faster and more accurate deployment of AI solutions. They provide a standardized starting point, saving time, resources, and energy through economies of scale. IFMs are tailored to solve real-world industrial challenges. They act as the intelligence layer behind Industrial Copilots and facilitate knowledge transfer and collaboration across sectors. They support not only text, images, and audio but also 3D models, 2D drawings, and other complex structures such as industry-specific time-series data (see also Multimodal LLMs).
Industrial-grade AI denotes a level of quality: reliable, secure, and trustworthy, designed to meet the rigorous requirements and standards of the most demanding professional environments.
Industry 4.0 is a term used to describe the fourth industrial revolution, which involves the integration of AI, IoT, and other advanced technologies into manufacturing and industry.
The Internet of Things (IoT) is the network of technical devices embedded with sensors, software, and connectivity that enables data exchange. The IoT is one of the main drivers of digitalization and big data.
A Knowledge Graph is a database that represents knowledge as a graph of interconnected nodes and edges, used for AI applications such as NLP and search.
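A minimal sketch of the underlying data structure, assuming a hypothetical factory domain; dedicated graph databases and query languages take the place of this triple list in practice.

```python
# Knowledge graph sketch: facts as (subject, relation, object) triples.
triples = [
    ("Pump-01", "located_in", "Hall A"),
    ("Pump-01", "part_of", "Cooling Loop"),
    ("Cooling Loop", "feeds", "Furnace 3"),
    ("Pump-01", "manufactured_by", "ACME"),
]

def neighbors(node):
    """All facts directly connected to a node (as subject or object)."""
    return [t for t in triples if node in (t[0], t[2])]

for subj, rel, obj in neighbors("Pump-01"):
    print(f"{subj} --{rel}--> {obj}")
```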
A Large Language Model (LLM) is a type of AI language model, such as GPT-3, that is trained on massive amounts of data to generate human-like text.
Machine Learning is a subset of AI that involves the use of algorithms and statistical models to enable machines to learn from experience or data.
Machine Vision is a subset of AI that allows machines with attached cameras to extract visual information in order to understand and interpret their surroundings.
Multimodal LLMs can understand and process multiple types of data – such as text, images, audio, or sensor data – simultaneously. They are integrated into applications like computer vision, autonomous vehicles, and robotics. They improve object recognition, scene understanding, and enable machines to follow complex instructions. Multimodal LLMs have the potential to impact the processing and generation of industry-specific data – such as time series, 2D and 3D models, or data for machine vision – in the same way that conventional LLMs have impacted text and speech processing.
Natural Language Processing (NLP) is a subset of AI that focuses on the interaction between computers and human language.
A Natural User Interface (NUI) is an interface that enables humans to interact with computers using natural gestures, speech, and other forms of expression.
A Neural Network is a type of Machine Learning algorithm that is modeled after the structure of the human brain and is used to recognize patterns in data.
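A minimal sketch of the principle, assuming a single sigmoid neuron trained by gradient descent to recognize the logical-OR pattern; real networks stack many such units in layers.

```python
# One-neuron sketch: learn logical OR from labeled examples.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b, lr = 0.0, 0.0, 0.0, 1.0

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target                       # gradient of squared error ∝ err
        w1 -= lr * err * out * (1 - out) * x1    # chain rule through the sigmoid
        w2 -= lr * err * out * (1 - out) * x2
        b  -= lr * err * out * (1 - out)

for (x1, x2), _ in data:
    print(x1, x2, "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 2))
```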
Non-Intrusive Load Monitoring (NILM) is a process for analyzing changes in the voltage and current of buildings or machines that comprise multiple sub-devices, in order to deduce the individual contribution of each device in the system.
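A minimal sketch of the disaggregation idea, assuming each device has a known constant power signature and testing all on/off combinations against the aggregate reading; real NILM methods use richer electrical features and learned models.

```python
# Load disaggregation sketch: match the meter reading to device combinations.
from itertools import product

signatures_w = {"fridge": 150, "kettle": 2000, "lamp": 60}  # hypothetical signatures
aggregate_w = 2210   # measured at the building's main meter

best = min(
    product([0, 1], repeat=len(signatures_w)),   # every on/off combination
    key=lambda states: abs(
        aggregate_w - sum(s * p for s, p in zip(states, signatures_w.values()))
    ),
)
on_devices = [name for name, s in zip(signatures_w, best) if s]
print("Likely running:", on_devices)   # ['fridge', 'kettle', 'lamp']
```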
Physical AI refers to the integration of artificial intelligence into machines – such as robots – that can sense their environment and act within it. Inspired by the human sensorimotor cycle, Physical AI processes sensory inputs (such as 3D cameras or tactile sensors), generates control commands from them, and enables machines to perform complex tasks adaptively and autonomously in physical, 3D environments.
Physics-informed AI, also known as Physics-aware AI, refers to a new class of artificial intelligence methods that incorporate the laws of physics directly into the training process. Unlike conventional AI approaches, which rely heavily on large datasets to learn behavior, Physics-informed AI integrates physics-based constraints to guide learning. This enables AI systems to reason and make predictions even when real-world data is limited, by leveraging our existing knowledge of how the physical world works. Instead of learning only from examples, these models use their physics knowledge to steer learning toward more optimal and physically consistent solutions.
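A minimal sketch of the principle, assuming a hypothetical free-fall experiment: a one-parameter model is fitted to three noisy measurements while a physics penalty pulls the estimate toward the known law of gravity. Physics-informed neural networks embed differential-equation residuals in the loss in the same spirit.

```python
# Physics-informed fitting sketch: loss = data error + physics penalty.
g = 9.81
times = [0.5, 1.0, 1.5]     # sparse, noisy observations (hypothetical)
drops = [1.3, 4.7, 11.4]    # measured fall distance in meters

a, lr = 0.0, 0.01           # model: distance = a * t**2, physics says 2*a ≈ g
for _ in range(5000):
    data_grad = sum(2 * (a * t**2 - y) * t**2 for t, y in zip(times, drops))
    physics_grad = 2 * (2 * a - g) * 2          # d/da of (2a - g)**2
    a -= lr * (data_grad + 0.5 * physics_grad)  # physics term weighted by 0.5

print(f"Estimated acceleration: {2 * a:.2f} m/s^2 (physics prior: {g})")
```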
Predictive AI leverages statistical analysis and machine learning to identify patterns in real-time and historical operational data from machines and equipment, enabling it to predict future behaviors, detect anomalies, forecast potential failures, and recommend maintenance actions. It’s used to enhance asset health and reliability, reduce unplanned downtime, and support faster data-driven decision-making across industrial operations.
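A minimal sketch of one ingredient, anomaly detection, assuming a hypothetical vibration series: readings far from the recent rolling mean are flagged. Predictive AI products rely on learned models rather than a fixed z-score rule like this one.

```python
# Rolling z-score sketch: flag readings that deviate from recent behavior.
import statistics

readings = [4.1, 4.0, 4.2, 4.1, 4.3, 4.0, 4.2, 7.9, 4.1, 8.2]  # hypothetical

window = 5
for i in range(window, len(readings)):
    recent = readings[i - window:i]
    mean, std = statistics.mean(recent), statistics.stdev(recent)
    if std > 0 and abs(readings[i] - mean) > 3 * std:   # 3-sigma rule
        print(f"Anomaly at t={i}: {readings[i]} (recent mean {mean:.2f})")
```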
Predictive Analytics is the use of AI and statistical models to predict future events or trends based on historical data.
Predictive Maintenance is the use of AI to predict when machines will need maintenance or repairs, based on real-time data.
Quality Control is the use of AI to detect defects and ensure that products meet quality standards.
Reinforcement Learning is a type of Machine Learning in which an untrained agent learns a strategy through penalties and rewards issued by the system after performed actions.
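A minimal sketch using tabular Q-learning, assuming a toy corridor world with a reward at one end and a small penalty per step; the agent discovers the "always move right" strategy purely from these signals.

```python
# Q-learning sketch: states 0..4, actions left/right, reward at the right end.
import random

n_states, goal = 5, 4
q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action], 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randint(0, 1) if random.random() < epsilon else q[s].index(max(q[s]))
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == goal else -0.01            # penalty per step, reward at goal
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

print(["right" if qs[1] > qs[0] else "left" for qs in q[:goal]])  # learned policy
```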
Responsible AI refers to AI applications that meet defined ethical and moral standards.
Robotics is the branch of engineering and AI that focuses on the design, construction, and operation of robots.
Sentiment Analysis is the use of AI to analyze and interpret the emotions and opinions expressed in text or speech.
A Smart Grid is an electrical grid that uses AI and other advanced technologies to optimize the generation, distribution, and consumption of electricity.
Specialized hardware, such as edge devices equipped with Graphics Processing Units (GPUs) or Language Processing Units (LPUs), is an emerging trend in industrial AI. These devices provide high-performance computing power at the edge, enabling real-time processing of AI algorithms. Their integration allows for parallel processing and accelerated performance, resulting in faster execution of complex AI tasks. This local processing reduces latency and reliance on cloud resources, making it crucial for time-sensitive applications. Specialized hardware also supports advanced AI models, leading to enhanced insights and improved performance. Moreover, it reduces costs by minimizing the need for extensive cloud infrastructure and data transfer.
Speech Recognition is the ability of machines to recognize and interpret human speech.
Supervised Learning is a learning method where machine learning models are trained with labeled (known) data sets to predict an outcome.
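A minimal sketch, assuming hypothetical labeled temperature readings: a decision threshold is "trained" to best separate ok from faulty examples and then applied to a new reading.

```python
# Supervised learning sketch: fit a threshold classifier to labeled data.
labeled = [(62, "ok"), (65, "ok"), (70, "ok"),
           (88, "faulty"), (91, "faulty"), (95, "faulty")]

def accuracy(threshold):
    """Fraction of examples classified correctly by 'faulty if t > threshold'."""
    return sum((t > threshold) == (label == "faulty") for t, label in labeled) / len(labeled)

best_threshold = max(range(60, 100), key=accuracy)   # the "training" step
print(best_threshold, accuracy(best_threshold))

new_reading = 90                                     # predict on unseen data
print("faulty" if new_reading > best_threshold else "ok")
```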
Supply Chain Optimization is the optimization of the flow of goods and materials in a supply chain to reduce cost and improve efficiency. AI is often used for process automation, inefficiency detection, quality assurance of goods, and demand forecasting.
Synthetic Data is artificial data generated by algorithms rather than by real-world events, used to train and validate Machine Learning models. The quality of the synthetic data is critical: it determines whether the AI will produce acceptable results after training.
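A minimal sketch, assuming a hypothetical sensor whose normal and faulty behavior can be approximated by simple distributions; algorithmically generated samples stand in for failure recordings that are scarce in reality.

```python
# Synthetic data sketch: generate a balanced labeled dataset algorithmically.
import random

random.seed(42)

def synthetic_reading(faulty=False):
    base = random.gauss(4.2, 0.1)                 # hypothetical normal vibration level
    return base + (random.gauss(3.5, 0.5) if faulty else 0.0)

# A balanced dataset that would be hard to collect from real machines.
dataset = [(synthetic_reading(faulty=f), f) for f in [False, True] * 500]
print(dataset[:2], len(dataset))
```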
Unsupervised Learning is a learning method where Machine Learning models discover previously unknown patterns and groupings in unlabeled data.
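A minimal sketch using 1-D k-means, assuming unlabeled power readings that fall into two regimes; the algorithm discovers the grouping without any labels.

```python
# Unsupervised sketch: 1-D k-means with two clusters on unlabeled readings.
readings = [1.1, 0.9, 1.3, 1.0, 5.2, 4.8, 5.1, 5.3, 0.8, 5.0]  # hypothetical
c1, c2 = min(readings), max(readings)            # crude initialization

for _ in range(10):
    g1 = [r for r in readings if abs(r - c1) <= abs(r - c2)]   # assign to nearest
    g2 = [r for r in readings if abs(r - c1) > abs(r - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)              # update centroids

print(f"Discovered regimes around {c1:.2f} and {c2:.2f}")
```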
Virtual Reality (VR) presents a digitally rendered environment that can replicate an actual space, create an alternative reality or combine the two. The user is able to explore the virtual space from the confines of home, office or factory floor.

