
Offering deep insights into performance and operational quality of energy systems and their plants.

Overview

The framework provides data analytics for heating, ventilation & air conditioning (HVAC) systems, energy-related plants, buildings, as well as energy networks such as district heating and cooling grids. It has a special focus on scalable analytics of large sets of time series data.

Its configuration and analysis workflow is straightforward:

  1. Collect available data and instance components of the building and its plants.
  2. Map datapoints to instanced components.
  3. Configure analysis on instanced components.
  4. Receive recommendations and explore results.
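The four steps above can be sketched as a minimal in-memory data flow. All identifiers below (`boiler_1`, `B1_OpMsg`, `cycle_analysis`, the dictionary layout) are illustrative assumptions, not the actual API:

```python
# Minimal sketch of the four-step workflow; all names are hypothetical.
project = {"components": {}, "configs": {}}

# 1./2. Instance a component from a generic data model for this project,
#       then map a datapoint to one of its pins.
project["components"]["boiler_1"] = {"model": "boiler", "pins": {}, "tags": []}
project["components"]["boiler_1"]["pins"]["operating message"] = "B1_OpMsg"

# 3. Configure an analysis on the instanced component.
project["configs"]["cfg_1"] = {"component": "boiler_1",
                               "functions": ["cycle_analysis"]}

# 4. Querying results would pass cfg_1 to the analytics runtime.
print(project["configs"]["cfg_1"]["component"])  # -> boiler_1
```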

Please refer to the figure below for a schematic overview of the framework.

Figure 1: Schematic overview of the framework

The goal of the framework is to support technicians and engineers who want to optimize (or commission) systems in terms of indoor comfort, energy efficiency, and maintenance and operation costs. For this purpose, it aims at profound transparency and interpretation of system operation at a deep, data-driven level. aedifion continuously extends its scope.

In the following, we first explain the ingredients and then the processes of the framework. Finally, we provide an application example and several use cases.

Technical documentation

The framework consists of a component data model library, a library of analysis functions, a knowledge & fault pattern database, a decision engine, an analysis configuration pattern, and an analysis runtime environment. Results are provided via API.

All required interaction is available via APIs.

Component data model library

Short summary on terminology:

  • Components are virtual or logical objects within a building or energy-related plants, such as pumps, boilers, thermal zones, control loops, and so forth.
  • Component data models are generic data models of components.
  • Instanced components are component data models instanced for a specific project. They can be mapped to adapt them to specific projects.
  • Mapping is the process of linking datapoints to pins of the component data model and adding meta data tags.

All currently available component data models are collected in the aedifion component data model library. The component data models needed for a specific project can be chosen from this library. As soon as a component is instanced for a specific project, it can be mapped to adapt it to that project. Configuring an instanced component with analysis functions enables its analysis.

Learn more? Explore the available components.

Analysis functions

Analysis functions are granular, generic functions to analyse the operation of components. The aedifion analysis function library contains all currently available analysis functions.

Analysis functions are available per component data model and are executed on the mapped pins and meta data of instanced components. For example, an analysis of plant cycles is available for several instanced components such as heat pumps, air handling units, and boilers. This analysis requires the pin operating message of the analysed instanced component to be mapped.
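The precondition can be pictured as a simple check: an analysis function declares its required pins, and it is only executable on a component whose mapping covers them. The data layout and names below are illustrative assumptions:

```python
# Sketch: an analysis function is only executable on an instanced component
# whose required pins are mapped (data layout and names are hypothetical).
def runnable(component, required_pins):
    # Every required pin must be linked to a datapoint in the mapping.
    return all(pin in component["pins"] for pin in required_pins)

heat_pump = {"model": "heat pump", "pins": {"operating message": "HP1_OpMsg"}}
print(runnable(heat_pump, ["operating message"]))    # -> True
print(runnable(heat_pump, ["supply temperature"]))   # -> False
```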

Learn more? Explore the available analysis functions.

Analysis runtime

The analysis runtime is the engine that executes the analyses. It utilizes the stream and batch processing services of the platform and evaluates analysis configurations on demand. If an interpretation of the analysis results is required, the analysis runtime calls the decision engine.

Decision engine

The decision engine is the part of the analytics process that interprets a determined analytics result. It takes digitized engineering knowledge from the knowledge & fault pattern database into account in order to decide whether the operation of the instanced component is okay, sub-optimal, faulty, dangerous, etc. Interpretations of the operational quality and recommendations on how to optimize it are based on this decision. As an example, consider a cycling heat pump:

  • A heat pump cycles several times per hour.
  • This can easily be identified via the KPI number of cycles per hour.
  • The decision at which threshold value this is too frequent is made in the decision engine.
  • If the decision is "too frequent", a recommendation on how to increase the cycle time is queried from the knowledge & fault pattern database by the decision engine.
  • The decision and recommendations are returned to the analytics runtime.
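The heat-pump example can be sketched in a few lines: count the on-switches in the operating-message signal and compare them against a threshold. The threshold value and the sample signal are illustrative; the real values live in the knowledge & fault pattern database:

```python
# Sketch of the heat-pump example: the KPI "number of cycles per hour" is
# counted from a binary operating-message signal; the threshold is illustrative.
def cycles_per_hour(op_msg, hours):
    # Count rising edges (off -> on) in the operating message.
    starts = sum(1 for a, b in zip(op_msg, op_msg[1:]) if a == 0 and b == 1)
    return starts / hours

def decide(cph, threshold=4.0):
    return "too frequent" if cph > threshold else "ok"

signal = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]  # five cycles in one hour
print(decide(cycles_per_hour(signal, hours=1.0)))  # -> too frequent
```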

Knowledge & fault pattern database

The knowledge & fault pattern database is the gathered engineering knowledge used to interpret analysis results, identify faulty component operation, and give recommendations for optimization measures.


Process

The framework process starts with instancing a component: assigning a component data model to a specific project. An instanced component is individualized for the project by mapping, which assigns datapoints and meta data to it. Configuring an analysis describes the process of choosing the analysis functions which shall be run on the instanced component. Exploring results demonstrates how to query results from a configured analysis and how to explore them.

Instancing a component

Instancing a component describes the process of assigning a generic component data model to a specific project. Colloquially expressed: Choose the components of your building/project from the component data model library.

Mapping a component

Mapping a component is the process which individualizes the generic instanced component data model for a specific project. This comprises linking datapoints, i.e. their time series, to the pins of the instanced component and adding meta data tags to it. A mapped component is ready for analysis.

Ingested as well as AI-generated meta data can be used to support the mapping, especially the linking of datapoints and pins.
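A mapping can be pictured as a small record per instanced component: datapoint identifiers linked to pins, plus meta data tags. All identifiers below are hypothetical:

```python
# Sketch: a mapping for an air handling unit, linking datapoint identifiers
# to pins and attaching meta data tags (all identifiers are hypothetical).
mapping = {
    "pins": {
        "supply temperature": "AHU1_TSup",  # datapoint linked to a pin
        "return temperature": "AHU1_TRet",
    },
    "tags": [("unit", "degC"), ("location", "roof")],
}
```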

Configuring an analysis

Configuring an analysis is the process of individualizing the analysis that shall be run on an instanced component. Choices include:

  • Which analysis functions shall be run on the component? This can be a subset of the analysis functions available for the component data model.
  • It is possible to define several configurations on the same instanced component and thus create individual analysis sets.
  • Advanced settings: analyse multiple time intervals. This option allows performing analyses over a fixed number of time intervals, a fixed interval length, or a combination of both.
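One simplified reading of the multi-interval option is splitting an analysis window into a fixed number of equal sub-intervals; the actual semantics of the setting may differ:

```python
# Sketch: split an analysis window into n equal sub-intervals
# (a simplified reading of the multi-interval option).
from datetime import datetime

def split_intervals(start, end, n):
    step = (end - start) / n
    return [(start + i * step, start + (i + 1) * step) for i in range(n)]

# Two one-week intervals covering a two-week window:
weeks = split_intervals(datetime(2021, 1, 4), datetime(2021, 1, 18), 2)
print(len(weeks))  # -> 2
```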

The analysis configuration will be passed to the analytics runtime when analysis results of this configuration are queried.

Exploring results

Querying results is easy: Choose a start and end time and an analysis config which shall be executed. The analytics runtime will evaluate the analysis functions of the config and return its results within seconds.

Depending on the utilized analysis functions, the result type differs. A set of key performance indicators, restructured or virtually determined time series, qualitative evaluations - e.g. in traffic light colors -, notification types, interpretations, and recommendations is returned.

Key performance indicators: Well-known indicators from engineering and thermodynamics that give a quick, comparable overview of a component's operation and performance, e.g., the coefficient of performance.

Restructured time series: Restructuring time series helps to focus on a certain aspect of the time series and allows visual analysis of this aspect. E.g., the overall load distribution of a component can easily be analysed via a load duration graph which is just a restructured power time series.
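The load duration graph is a good illustration of restructuring: it is the power time series sorted in descending order, so the x-axis becomes "time at or above this load":

```python
# Sketch: a load duration curve is the power time series sorted in
# descending order; no new information is added, only the structure changes.
def load_duration(power):
    return sorted(power, reverse=True)

print(load_duration([10, 40, 20, 40, 5]))  # -> [40, 40, 20, 10, 5]
```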

Virtually determined time series: Some analysis functions determine time series via mathematical correlations. E.g., a fluid heat flux via two temperature sensors, a volume flow sensor, and knowledge of the fluid medium.
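The heat flux example follows the standard correlation Q̇ = ρ · V̇ · c_p · (T_supply − T_return). The sketch below assumes the fluid is water with constant properties, which is a simplification:

```python
# Sketch: a virtual heat flux from two temperature sensors and a volume
# flow sensor, assuming water with constant properties
# (rho ~ 997 kg/m3, cp ~ 4186 J/(kg K)).
def heat_flux_kw(t_supply, t_return, flow_m3_per_h, rho=997.0, cp=4186.0):
    flow_m3_per_s = flow_m3_per_h / 3600.0
    return rho * cp * flow_m3_per_s * (t_supply - t_return) / 1000.0

# e.g. 70/50 degC at 1 m3/h -> roughly 23 kW
q = heat_flux_kw(70.0, 50.0, 1.0)
```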

Qualitative evaluation: Sometimes green, yellow, red is all that is required to get an overview of the component's operation.

Notification types: Notification types help to prioritize results. They come in the dimensions indoor comfort, energy efficiency, maintenance, and system integration, with the escalation levels ok, notice, warning, and critical.
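With the escalation levels ordered, prioritizing results reduces to comparing level ranks; a minimal sketch (the representation as tuples is an assumption):

```python
# Sketch: notifications as (dimension, level) pairs; the level order
# makes results comparable for prioritization.
LEVELS = ["ok", "notice", "warning", "critical"]

def more_urgent(a, b):
    # Return the notification with the higher escalation level.
    return a if LEVELS.index(a[1]) >= LEVELS.index(b[1]) else b

print(more_urgent(("maintenance", "warning"), ("indoor comfort", "notice")))
# -> ('maintenance', 'warning')
```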

Interpretation: This is the interpretation of the analysis results by the decision engine. E.g., "The component is pulsing at an extremely high frequency."

Recommendation: This is the recommendation of optimization measures given by the decision engine. E.g., "Reduce component pulsing by throttling the output heat power. Throttling can be realized by partial-load operation of the component or installation of an input power choke."

Learn more? Explore the available analysis functions.


Application example

School A has extraordinarily high primary energy consumption for heating. A technician is asked to optimize this system. After the plug-and-play installation at School A, the analysis of the building can start: one condensing boiler and three heat distribution circuits shall be analysed.

The technician adds one boiler and three heating loop component data models to the School A project and maps the datapoints to the pins of the instanced components - supported by the provided meta data on datapoints. Since the technician suspects something might be wrong with the temperature levels, the set-point compliance analysis function is run on the heat distribution circuits and on the boiler.

The analysis results confirm the assumption: All three circuits exceed their temperature levels, while the boiler meets its set-point temperature quite well. The reason is identified by the decision engine: the heating curve of the boiler is not designed according to demand. Therefore, the analytics results recommend an adjustment of the heating curve, which the technician applies right away.

This means that not only can the boiler be operated with a significantly lower load, but overheating of the classrooms is also avoided. The school principal is glad about the saved energy costs. And the pupils and teachers are happy that they no longer have to constantly open the windows in winter because the rooms are too warm.

Use cases

The framework is beneficial in several scenarios and business cases. To mention just a few:

Optimization projects: The framework provides deep system transparency and recommendations to optimize operation in the dimensions of energy service delivery, indoor comfort, energy efficiency, and maintenance expenses. It can therefore be used by technicians or engineers to support their optimization projects.

Original equipment manufacturers (OEMs): Scalable analyses can be offered as additional data services to the end customers of OEMs. Furthermore, the framework supports R&D departments of OEMs with deep insights into the actual operational behavior and usage of their equipment in the field - of course without revealing the individuals behind the data.

Enhancement of existing software: Existing data applications and cloud services can be extended with analytics functionality; integrating the API endpoints is all it requires.

Operation and energy monitoring: The framework sustains energy and maintenance efficiency throughout the whole building/plant lifetime, identifies aging phenomena in components, and recommends fixes, thereby significantly lowering operating costs. Furthermore, it enables energy monitoring.

Commissioning projects: The framework supports commissioning via field-layer and component functional tests.

Do you have further ideas, or questions about whether your use case can be supported? Contact us!

Last update: 2021-03-04