

How to access the MQTT API.


MQTT is a lightweight publish/subscribe messaging protocol. Originally designed for machine-to-machine (M2M) telemetry in low-bandwidth environments, MQTT has since become one of the main protocols for data collection in Internet of Things (IoT) deployments [1,2].

For simple messaging use cases, MQTT has multiple advantages over HTTP [3] and other protocols:

  • Due to its binary encoding and minimal packet overhead, MQTT is 20x faster, uses 50x less traffic, and consumes 20% less energy than HTTP, according to the direct performance comparison presented in [4].
  • Contrary to HTTP's client/server architecture, MQTT's publish/subscribe pattern decouples data sources from data sinks through a third party, the MQTT broker. According to [5], decoupling means that sources never directly talk to sinks which implies that:

    1. they do not need to know each other,
    2. they do not need to run at the same time, and
    3. they do not require synchronization

    All this easily allows building flexible 1-to-many, many-to-1, and many-to-many data pipelines.

  • MQTT has message filtering built in, i.e., data sinks can subscribe to arbitrary subsets of the data collected from the sources, specified through a hierarchical topic scheme [6].

When you are building an application that streams data into or from the platform, MQTT is probably a better choice than the HTTP API.

Part of the platform is an MQTT broker which serves as the single logical point of data ingress to the platform, e.g., all data collected in the field through the aedifion Edge Devices is ingested through this MQTT broker and, in turn, can also be subscribed to. The MQTT broker is clustered, i.e., distributed over multiple independent servers, to ensure seamless scalability and high availability.

MQTT Broker

The MQTT API is provided by an MQTT broker. To use the MQTT API, you need to connect to this broker with a unique client id using login credentials for authentication and authorization.


If you are using the cloud platform, the MQTT broker's URL is

If you are using a dedicated platform instance, remember to use the correct subdomain


where <REALM> is specific to your dedicated platform.

Dedicated platforms and subdomains

In the following and throughout this documentation, we will use URLs that refer to the cloud platform. If you are using a dedicated platform, remember to use the correct subdomain as explained above.

The MQTT broker accepts connections on two ports:

  • Port 8883 accepts plain MQTT, i.e., connections that transport MQTT directly via TLS. This is the standard case and, if in doubt, this port is the right choice.
  • Port 9001 accepts websockets connections, i.e., connections that transport the MQTT protocol within the websockets protocol which in turn is transported via TLS. MQTT via websockets is the right choice when you want to send/receive MQTT data directly from a web browser. Read more about MQTT over websockets here and here.

The MQTT broker only accepts TLS 1.2 and TLS 1.3 encrypted connections, i.e., all plain TCP connections are rejected. The MQTT broker authenticates itself towards the client using an X.509 certificate issued by Let's Encrypt. Your operating system (OS) will accept this certificate if the root certificates of Let's Encrypt are installed as a trusted root certification authority (CA) in your OS. Don't worry, this is probably the case and you don't have to do anything.

Test your connection

To test the connection to the MQTT broker, run the following command on your command line:

openssl s_client -connect -servername

There will be a warning verify error:num=20:unable to get local issuer certificate at the top, which can be fixed by providing option -CAfile or -CApath and pointing to the right locations depending on your OS, e.g., -CAfile /etc/ssl/cert.pem on Mac OS.

Client Identifiers

Clients are identified by a unique client_id. As per the MQTT 3.1.1 specification, client identifiers are between 1 and 23 alphanumeric characters (0-9, a-z, A-Z). Most MQTT brokers support longer client identifiers drawn from the full range of UTF-8 characters, excluding only the characters /, +, and #, which have a special meaning in MQTT and are disallowed for security reasons.

It is important to note that there can only be one connection per client_id per broker. If two clients connect with the same client_id, the older connection is terminated in favor of the newer one. This restriction does not extend to your login credentials (see authentication). You can open multiple connections using the same login credentials as long as you use a different client_id for each concurrent connection.

Choose a client_id and avoid special characters. Postfix the client_id with a random string or integer to ensure it is unique across concurrent connections.
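One way to follow this advice is to derive the client_id from a fixed base name plus a random suffix. The following is a minimal sketch (not aedifion code; the function name and base are illustrative) that also trims the result to the 23-character limit of the MQTT 3.1.1 specification:

```python
import uuid

def make_client_id(base: str, max_len: int = 23) -> str:
    """Build a client_id from an alphanumeric base plus a random suffix.

    The random suffix makes the id unique across concurrent connections;
    the result is trimmed to max_len characters to stay within the
    23-character limit of the MQTT 3.1.1 specification.
    """
    suffix = uuid.uuid4().hex[:8]  # 8 random alphanumeric characters
    return base[:max_len - len(suffix)] + suffix
```

Calling `make_client_id("sensorgateway")` twice yields two different ids that share the same readable prefix.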


Authentication

The MQTT broker only accepts connections from authenticated clients. To authenticate, the MQTT client has to present login credentials (username and password) to the MQTT broker within the initial MQTT handshake, i.e., as part of the CONNECT message.

Client credentials can be obtained with limited and unlimited validity.

  • Credentials with unlimited validity are provided only on request by the aedifion staff. Please email us at
  • Credentials with limited validity can be created through the HTTP API. Please refer to the corresponding guide for further instructions and details.


Authorization

The MQTT broker authorizes clients to subscribe and publish based on topics.

Once connected and authenticated, the client can publish or subscribe to one or multiple topics, but not without authorization. To subscribe, the client needs read access to that topic. To publish, the client needs write access. Note that write access does not imply read access.

Authorization is specified through a list of topics (following exactly MQTT's topic syntax and semantics) where for each topic it is specified whether the user has read and/or write access. Make sure to familiarize yourself with MQTT's topic structure, especially with hierarchy levels and the # wildcard.
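To illustrate how the # wildcard governs access, here is a small topic-matching sketch in Python. This is not aedifion code, just the standard MQTT matching rules for the # and + wildcards:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Check whether an MQTT topic matches a topic pattern.

    '#' matches any number of trailing hierarchy levels,
    '+' matches exactly one level.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(p_levels):
        if level == "#":
            return True  # '#' matches all remaining levels
        if i >= len(t_levels) or (level != "+" and level != t_levels[i]):
            return False
    return len(p_levels) == len(t_levels)
```

For example, a client authorized for `lbg1/myproject/#` may publish to `lbg1/myproject/datapoint_1` but not to topics under another load balancing group or project.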

Topic hierarchy

All MQTT topics on the platform have a hierarchy that consists of two main parts, i.e., a fixed prefix and a variable postfix.

The prefix has two hierarchy levels and is assigned by aedifion:

  • The top level hierarchy, the load-balancing-group, is fixed and assigned by aedifion. It serves to separate different customers and projects and ensures that each customer and project is guaranteed separate and sufficient processing and communication resources.
  • The second level hierarchy, the project-handle, is a fixed (human-readable) string assigned by aedifion that uniquely identifies your project. This hierarchy separates different projects on the same load balancing group.

As a customer, you receive authorization for this prefix, i.e., for the topic <load-balancing-group>/<project-handle>/#, meaning you can publish and subscribe to "anything below" the project level of the topic hierarchy.

The postfix matching the # can generally have arbitrary length and structure as long as it is a UTF-8 string.

  • If you've purchased an aedifion Edge Device, this device collects data from different datapoints on your building network and publishes them to the postfixes datapoint_1, ..., datapoint_n. Via MQTT, you thus have datapoint-level publish/subscribe access to the datapoints of your building. For efficiency reasons, the Edge Device uses short identifiers of 4 to 12 characters generated from the full datapoint names, e.g., the 8-character hash identifier of the datapoint my_very_veeeeeery_long_datapoint_id is just VD0pZLej.

    aedifion's hash identifiers creation

    1. Build a SHA1 hash of the UTF-8 encoded datapoint id.
    2. base62-encode the hash.
    3. Cut off the first hash_id_length characters, where hash_id_length is configured individually per project (default = 8).

    Here's a sample implementation in Python using the pybase62 module.

    import base62  # third-party: pip install pybase62
    import hashlib

    def base62id(s: str, hash_id_length: int = 8) -> str:
        # SHA1-hash the UTF-8 bytes, base62-encode the digest,
        # and keep the first hash_id_length characters.
        return base62.encodebytes(hashlib.sha1(s.encode('utf-8')).digest())[:hash_id_length]
  • If you ingest data yourself, you can publish to arbitrary postfixes since the # wildcard at the end of your topic authorization matches any number of sublevels.

It is important to note two things about publishing your own data via MQTT:

  1. The postfix is only used for routing messages on the broker, e.g., you can use it to group data for different subscribers. The postfix does not determine which time series the data is stored to; this is determined by the payload of your messages (see below).
  2. aedifion does not prevent you from writing data to datapoints that are at the same time written by the aedifion Edge Device. If you have a datapoint_A on your local building network that is discovered by the Edge Device and you also write to datapoint_A yourself, this data will be stored and intermingled in the same time-series.

Payload format

The payload format depends on the type of the topic published to or subscribed from. There are three types of topics on the platform:

  • Timeseries data topics: These topics are used to send and receive timeseries data from buildings. They are usually in the form <load balance group>/<project handle>.
  • Meta data topics: These topics are used to send and receive meta data from buildings. They are usually in the form META/<load balance group>/<project handle>.
  • Controls topics: These topics are used to send setpoints and schedules to buildings. They are usually in the form CONTROLS/<load balance group>/<project handle>.
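Assuming the prefixes shown above, assembling the right topic string for each type is a simple concatenation. This hypothetical helper (names are illustrative, not part of any aedifion library) makes the rule explicit:

```python
def build_topic(kind: str, lbg: str, project: str) -> str:
    """Return the topic for a topic type, using the prefixes described above."""
    prefixes = {"timeseries": "", "meta": "META/", "controls": "CONTROLS/"}
    return f"{prefixes[kind]}{lbg}/{project}"
```

For example, `build_topic("meta", "lbg1", "buildinginc_headquarter")` yields `META/lbg1/buildinginc_headquarter`.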

Timeseries data

All messages containing timeseries data you publish to or receive from the MQTT broker must strictly adhere to Influx Line Protocol format:

RoomTemperature value=20.3 1465839830100400200
--------------- ---------- -------------------
    |               |              |
    |               |              |
+---------+  +-------------+   +---------+
|datapoint|  | observation |   |timestamp|
+---------+  +-------------+   +---------+
  • datapoint is an arbitrary non-empty UTF-8 string that identifies your datapoint. If it contains blank spaces or quotes, then you must quote the string and escape blanks as well as quotes using a backslash \, e.g., "this\ is\ a\ \"datapoint\"\ with\ spaces and\ quotes". The reported observation will be stored on the platform in a time series with exactly this name, as you will see when logging in to the frontend.
  • observation is the reported measurement and must have the form of value=<float> where <float> is parsable as a floating point number. observation must be separated from the datapoint by a single blank.
  • timestamp is the timestamp of your reported observation in nanosecond-precision Unix time, i.e., the number of nanoseconds since January 1st, 1970, 00:00:00 UTC. It must be separated from the observation by a single blank. If your timestamp is in millisecond or microsecond precision you must append 6 or 3 zeros, respectively.
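The formatting rules above can be sketched in a small helper. This is a hypothetical formatter, not aedifion code; it assumes the escaping and timestamp rules exactly as described (backslash-escaped blanks and quotes, nanosecond Unix timestamps):

```python
import time
from typing import Optional

def ms_to_ns(ts_ms: int) -> int:
    """Convert a millisecond Unix timestamp to nanoseconds (append six zeros)."""
    return ts_ms * 1_000_000

def to_line(datapoint: str, value: float, ts_ns: Optional[int] = None) -> str:
    """Format one observation as an Influx Line Protocol record."""
    if ts_ns is None:
        ts_ns = time.time_ns()  # current time in nanosecond precision
    # Backslash-escape blanks and quotes in the datapoint name.
    name = datapoint.replace('"', '\\"').replace(" ", "\\ ")
    return f"{name} value={value} {ts_ns}"
```

Calling `to_line("RoomTemperature", 20.3, 1465839830100400200)` reproduces the example record shown above.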

Influx Line Protocol additionally allows to concatenate an optional tag set after datapoint. The tag set is a comma-separated list of tags (key=value pairs) that are associated with this observation. It is stored in the timeseries database together with the reported observation but is currently not yet available via the API. Tags sent with the observation in the tag set are regarded as dynamic and time-dependent, e.g., they can be used to mark maintenance or test periods. To define static time-independent tags, e.g., a fixed location or the unit of the datapoint, use metadata messages or the HTTP API.

Influx Line Protocol allows publishing multiple datapoints in a single message. You have to separate them by a line break (\n), for example:

RoomTemperature value=20.9 1465839830100400200
ExtTemperature value=14.7 1465839830100400200
OfficeTemperature value=21.3 1465839830100400200
HallTemperature value=17.9 1465839830100400200

This yields a small performance gain; the disadvantage is that it is not possible to publish each datapoint to its own topic, as described above.
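For example, the four records above can be assembled into a single payload like this (a sketch; the resulting string is what you would publish in one MQTT message):

```python
# Batch several observations that share one timestamp into one payload.
records = [
    ("RoomTemperature", 20.9),
    ("ExtTemperature", 14.7),
    ("OfficeTemperature", 21.3),
    ("HallTemperature", 17.9),
]
ts = 1465839830100400200
payload = "\n".join(f"{name} value={value} {ts}" for name, value in records)
```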

Timeseries data in messages that do not strictly adhere to this format will be received but will not be stored in the platform.


Metadata

All messages containing metadata you publish to or receive from the MQTT broker must strictly adhere to JSON format. A metadata message sent via MQTT cannot exceed 1 MB in size, and a single tag value cannot exceed 100,000 characters.

Each received metadata message is related to exactly one device or datapoint in the following manner:

  • The topic on which the message is received determines exactly one project. If that project does not exist, the message is silently dropped.
  • The message is parsed for a unique identifier depending on the message scheme to identify exactly one device or datapoint in the project. If no such device or datapoint is found, it is created.

Currently, there are two schemes to send metadata:

  • Scheme V1 (Deprecated):

    The message is parsed for key name.
    The value of this key must identify exactly one device or datapoint in the project depending on the value of the second key objtype.
    When the metadata message has been associated with a device or datapoint, all top-level key-value pairs (including objtype but excluding name) are initialized as tags on the associated device or datapoint.

    Example json:

      "name": "AS11-239291_C'Btr06'TempHall",
      "objtype": "trendLog",
      "units": "",
      "recordsSinceNotification": "44",
      "stopWhenFull": "False",
      "reliability": "noFaultDetected",
      "startTime": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}",
      "eventTimeStamps": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]",
      "address": "",
      "objinstance": 18,
      "source": "BACnet"
  • Scheme V2 (New):

    The message contains 4 keys: entity, mode, tags, datapoints.

    The entity is a dictionary, containing 3 compulsory key-value pairs that are used to identify a device or a datapoint.

    • id: the alphanumeric identifier
    • type: device or datapoint
    • source: the source of the device or datapoint

    The mode option defines how the tags are processed. There are three possible modes that can be used in combination.

    • clean: drop all currently assigned tags and assign only new ones
    • add: only add new tags
    • update: only update existing tags

    Once the object is identified, all items in tags are assigned as tags to the object, depending on the matching modes. tags is a list of dictionaries, each with two keys key and value of a tag ({"key": <tag_key>, "value": <tag_value>}). Additionally, an optional key datapoints can be used for a device object to pass a list of child datapoint identifiers.

    Example json:

            "id": "AS11-239291_C'Btr06'TempHall",
            "type": "datapoint",
            "source": "BACnet"
      "mode": ["add", "update"],
      "tags": [
          {"key": "units", "value": ""},
          {"key": "recordsSinceNotification", "value": "44"},
          {"key": "stopWhenFull", "value": "False"},
          {"key": "reliability", "value": "noFaultDetected"},
          {"key": "startTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
          {"key": "eventTimeStamps", "value": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]"},
          {"key": "reliability", "value": "trendLog"},
          {"key": "objtype", "value": "noFaultDetected"},
          {"key": "address", "value": ""},
          {"key": "objinstance", "value": 18}

Let's consider this example of a metadata message on the topic META/lbg1/buildinginc_headquarter. The displayed metadata was collected from a BACnet datapoint. Metadata for datapoints from Modbus, LON, etc. will look different. Values that contain nested structures are not unfolded but saved as-is into the value of the tag. Since tag values on the platform can be a text of any length, it is possible to save whole nested JSON objects as tag values.

        "id": "AS11-239291_C'Btr06'TempHall",
        "type": "datapoint",
        "source": "BACnet"
  "mode": ["add", "update"],
  "tags": [
    {"key": "notifyType", "value": "event"},
    {"key": "logInterval", "value": "6000"},
    {"key": "recordCount", "value": "5000"},
    {"key": "lastNotifyRecord", "value": "1749389"},
    {"key": "totalRecordCount", "value": "1749433"},
    {"key": "units", "value": ""},
    {"key": "recordsSinceNotification", "value": "44"},
    {"key": "stopWhenFull", "value": "False"},
    {"key": "reliability", "value": "noFaultDetected"},
    {"key": "startTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
    {"key": "eventTimeStamps", "value": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]"},
    {"key": "objtype", "value": "trendLog"},
    {"key": "eventEnable", "value": "[1, 1, 1]"},
    {"key": "intervalOffset", "value": "0"},
    {"key": "address", "value": ""},
    {"key": "covResubscriptionInterval", "value": "1800"},
    {"key": "objinstance", "value": 18},
    {"key": "ackedTransitions", "value": "[1, 1, 1]"},
    {"key": "notificationClass", "value": "61"},
    {"key": "enable", "value": "True"},
    {"key": "notificationThreshold", "value": "80"},
    {"key": "clientCovIncrement", "value": "{'defaultIncrement': ()}"},
    {"key": "stopTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
    {"key": "alignIntervals", "value": "True"},
    {"key": "logDeviceObjectProperty", "value": "{'deviceIdentifier': ('device', 2098179), 'objectIdentifier': ('pulseConverter', 8), 'propertyIdentifier': 'presentValue'}"},
    {"key": "eventState", "value": "normal"},
    {"key": "description", "value": "Trend Kumuliertes Volumen"},
    {"key": "statusFlags", "value": "[0, 0, 0, 0]"},
    {"key": "bufferSize", "value": "5000"},
    {"key": "bacnet_id", "value": "239291"},
    {"key": "trigger", "value": "False"},
    {"key": "loggingType", "value": "polled"}
  • The topic prefix META/ identifies this message as a metadata message, and we expect JSON format. buildinginc_headquarter is the unique project handle assigned by aedifion to this project (it is usually formed from a shorthand for the customer and a shorthand for the project; it is not used as a display name).
  • In this case, type under entity is datapoint so we identify it as a datapoint. Which datapoint is determined by the id key, i.e., here we associate the message to the datapoint AS11-239291_C'Btr06'TempHall.
  • All fields in tags are initialized as tags. In this example, 32 new tags are created from this message on datapoint AS11-239291_C'Btr06'TempHall.
  • The field source is special since it describes the origin of the whole metadata message. Every tag association to the device or datapoint gets this source identifier as you can see in the frontend.
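Building a Scheme V2 metadata message programmatically is a matter of assembling the four keys described above. The following is a minimal sketch under the assumptions of this guide; the function name and its arguments are illustrative, not part of any aedifion library:

```python
import json

def metadata_message(dp_id: str, source: str, tags: dict,
                     mode=("add", "update")) -> str:
    """Serialize a Scheme V2 metadata message for a datapoint."""
    msg = {
        # 'entity' identifies exactly one device or datapoint.
        "entity": {"id": dp_id, "type": "datapoint", "source": source},
        # 'mode' controls how the tags are processed (clean/add/update).
        "mode": list(mode),
        # 'tags' is a list of {"key": ..., "value": ...} dictionaries.
        "tags": [{"key": k, "value": v} for k, v in tags.items()],
    }
    return json.dumps(msg)
```

The resulting JSON string can then be published to the project's META/... topic.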


Controls

On all CONTROLS/... topics, the format and flow of messages is defined by the SWOP protocol. SWOP is a simple JSON-based protocol for setpoint and schedule writing specified and released as open source by aedifion. It decouples the aedifion Edge Device from the platform and allows anyone to implement and use their own edge device.

Writing a setpoint is as simple as sending the following exemplary message to the Edge Device:

  "type": "NEWSPT",
  "swop_version": 0.1,
  "datapoint": "bacnet93-4120-External-Room-Set-Temperature-RTs",
  "value": 20.3,
  "priority": 13

Head over to the SWOP protocol specifications for more examples.
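In Python, the setpoint message above can be serialized like this (a sketch; the datapoint name is taken from the example, and the resulting payload would be published to the project's CONTROLS/... topic with the MQTT client of your choice):

```python
import json

# NEWSPT message mirroring the example above.
setpoint = {
    "type": "NEWSPT",
    "swop_version": 0.1,
    "datapoint": "bacnet93-4120-External-Room-Set-Temperature-RTs",
    "value": 20.3,
    "priority": 13,
}
payload = json.dumps(setpoint)  # ready to publish on a CONTROLS/... topic
```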

Connection Rate Limiting

We at aedifion often face situations where customers use non-unique client ids. Unfortunately, this leads to a "connection ping-pong" between the two (or more) clients that use the same id, resulting in excessive connection rates. To ensure a smooth user experience, we have thus decided to technically enforce a connection rate limit on the MQTT broker.

Rate limiting looks at the connection attempts per IP address. It restricts access if too many connection attempts were made in the last window_sec seconds.

  • If there are more than throttle_threshold connection attempts in this window, each connection attempt is slowed down by throttle_delay_ms ms.
  • If more than leaky_connection_threshold connection attempts were made in this window, the connections are slowed down by throttle_delay_ms ms and additionally only every leaky_connection_factor-th connection is accepted at all and all other connections from this IP are dropped.

Please note that the following parameters are subject to change without prior notification:

| Parameter                  | Value |
| -------------------------- | ----- |
| window_sec                 | 10    |
| throttle_threshold         | 2     |
| throttle_delay_ms          | 2000  |
| leaky_connection_threshold | 4     |
| leaky_connection_factor    | 6     |
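Given this throttling, clients should not hammer the broker with immediate reconnects. A common pattern is exponential backoff with jitter; the following sketch is illustrative (the base and cap values are assumptions, not aedifion recommendations):

```python
import random

def backoff_delays(attempts: int, base: float = 5.0, cap: float = 300.0):
    """Yield exponentially growing reconnect delays in seconds.

    Random jitter spreads out retries so multiple clients behind the
    same IP do not reconnect in lockstep and trip the rate limiter.
    """
    for i in range(attempts):
        delay = min(cap, base * (2 ** i))
        yield delay * random.uniform(0.5, 1.0)  # apply jitter
```

A reconnect loop would sleep for each yielded delay before the next connection attempt.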

Fair use

We at aedifion do our best to ensure seamless scalability and the highest availability of our MQTT services. Since we give priority to a clean and simple user experience, we currently do not enforce any rate limits on MQTT ingress and egress. Deliberately, this allows you to send bursts of data, e.g., to import a batch of historical data.

This being said, we will negotiate a quota with each customer that we consider the basis of fair use of our MQTT services. In favor of your user experience, this quota will be monitored but not strictly enforced. aedifion reserves the right to technically enforce the fair use quota on repeated violations without prior notice.


FAQ

What are Quality of Service (QoS) levels and how can I use them?
There are three QoS levels which, by standard, define the delivery guarantee for messages.

  • With QoS 0, messages are sent without acknowledgement of receipt. This method is therefore called fire and forget.
  • With QoS 1, the message is re-sent until it is received at least once.
  • With QoS 2, the message is received exactly once.

The QoS level is set by the client on subscription to the broker. Find a detailed explanation in this often-referenced article by HiveMQ.

Does the MQTT broker buffer messages while my client is disconnected?
When a client-broker connection with QoS 1 or 2 is interrupted, the missed messages can be sent to the client afterwards. This depends on the broker's configuration.

My client can't connect or keeps reconnecting?
Usually, the error is one of the following:

  • Connecting on the wrong port.
  • Connecting to a TLS endpoint without TLS.
  • User's local or network firewall blocks outgoing MQTT connections.
  • Wrong client credentials.
  • Using a client id that is already connected. MQTT will kick the older connection. If the other client then reconnects, these two clients play ping-pong.


[1] Introduction to MQTT:
[2] Introduction to MQTT:
[3] MQTT vs. HTTP:
[4] MQTT vs. HTTP:
[5] MQTT publish/subscribe:
[6] MQTT topics:

Last update: 2023-08-17