

Specifications of the MQTT API.


Part of the platform is an MQTT broker which serves as the single logical point of data ingress to the platform, e.g., all data collected in the field through the aedifion Edge Devices is ingested through this MQTT broker and, in turn, can also be subscribed to. The MQTT broker is clustered, i.e., distributed over multiple independent servers, to ensure seamless scalability and high availability.

MQTT Broker

aedifion currently maintains two MQTT brokers:

  • Production
    • Stable broker that receives updates and new features only after an intensive testing phase in our development environment.
    • Host:
    • Ports:
      • 8884 - MQTT over TLS 1.2 (use in standalone clients)
      • 9001 - MQTT over websockets over TLS 1.2 (use from within browsers)
  • Development
    • Semi-stable broker that receives updates and new features after a short internal testing phase.
    • Host:
    • Ports:
      • 8884 - MQTT over TLS 1.2 (use in standalone clients)
      • 9001 - MQTT over websockets over TLS 1.2 (use from within browsers)

Both brokers only accept TLS 1.2 encrypted connections, i.e., all plain TCP connections are rejected. The broker's certificates can be viewed, e.g., by connecting to or from any browser.

Figure 1: Server MQTT certificate of the production MQTT broker

Your operating system (OS) will accept this certificate if the DST Root CA X3 certificate is installed as a trusted root certification authority (CA) in your OS [1]. Don't worry, this is probably the case and you don't have to do anything. You can test if the certificate is accepted by navigating to or - if your browser doesn't issue a warning, you're fine.

Both brokers accept connections on two ports: 8884 and 9001. Port 8884 accepts plain MQTT, i.e., connections that transport MQTT directly via TLS. This is the standard case and, if in doubt, this port is the right choice. Port 9001 accepts websockets connections, i.e., connections that transport the MQTT protocol within the websockets protocol which in turn is transported via TLS. MQTT via websockets is the right choice when you want to send/receive MQTT data directly from a web browser [2,3].


openssl s_client -connect -servername

Check that the TLS handshake is ok

  • There will be a warning verify error:num=20:unable to get local issuer certificate at the top. This can be fixed by providing the option -CAfile or -CApath pointing to the right location for your OS, e.g., -CAfile /etc/ssl/cert.pem on macOS.

Sources and further resources:
[1] Let's encrypt's chain of trust:
[2] MQTT over websockets:
[3] MQTT over websockets:


The MQTT brokers only accept connections from authenticated clients.

After having established a TLS connection, the MQTT client has to present login credentials (username and password) to the MQTT broker. Client credentials can be obtained with limited and unlimited validity.

Credentials with unlimited validity are provided only on request by the aedifion staff.
Please email us at

Credentials with limited validity can be created through the HTTP API using the POST /v2/project/{project_id}/mqttuser endpoint. This endpoint requires the following parameters:

| Parameter | Datatype | Type | Required | Description | Example |
| --- | --- | --- | --- | --- | --- |
| project_id | integer | path | yes | The numeric id of the project for which to add a new MQTT user account. | 1 |
| username | string | body (JSON) | yes | The username of the new MQTT user. If that username already exists, the request is rejected. | my_mqtt_user |
| password | string | body (JSON) | yes | The password for the new MQTT user. | mys3cr3tp4ssw0rd |
| rights | string | body (JSON) | no | Grants read or write permissions to this account. Note that write permissions imply read permissions. Defaults to read. | read |
| validity | integer | body (JSON) | no | This user account expires after this many seconds. Maximum validity is 2 hours = 7200 seconds. | 3600 |
| description | string | body (JSON) | no | Human-readable description of what this account is for. | A new test account just for reading. |

Explore our HTTP API tutorials or the HTTP API developer articles to learn how to build, authenticate, and post a corresponding HTTP request to the POST /v2/project/{project_id}/mqttuser endpoint. A successful response looks like this:

  "operation": "create",
  "resource": {
    "description": "A new test account just for reading.",
    "id": 42,
    "topics": [
        "id": 123,
        "rights": "read",
        "topic": "lbg01/mybuilding/#"
    "username": "my_mqtt_user",
    "valid_until": "2019-01-18T16:23:01.707344Z"
  "success": true

The response is in JSON format, which can be easily parsed in any programming language. The resource field contains the details of the newly created user (not the password, of course, for security reasons). Note that this request was posted at 16:23h CET with a requested validity of 1 hour, i.e., the account expires at 17:23h CET, which is exactly 16:23h UTC since CET = UTC + 1.
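As a hedged sketch, such a request could be built with Python's standard library. The API base URL and bearer token below are placeholders; see the HTTP API articles for how to obtain and use authentication tokens.

```python
import json
from urllib import request

API_BASE = "https://api.example.com"  # placeholder base URL
PROJECT_ID = 1

def build_mqttuser_request(token: str, username: str, password: str,
                           rights: str = "read", validity: int = 3600):
    """Build a POST request for /v2/project/{project_id}/mqttuser."""
    body = json.dumps({
        "username": username,
        "password": password,
        "rights": rights,       # "write" implies "read"
        "validity": validity,   # seconds, at most 7200
        "description": "A new test account just for reading.",
    }).encode("utf-8")
    return request.Request(
        f"{API_BASE}/v2/project/{PROJECT_ID}/mqttuser",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

# resp = request.urlopen(build_mqttuser_request("<token>", "my_mqtt_user", "mys3cr3tp4ssw0rd"))
```

The urlopen call is commented out since the base URL is a placeholder; the function merely assembles the request described in the table above.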

After the MQTT account expires it will be automatically removed. You can either create a new account with the same username afterwards or renew the existing account before it expires using the PUT /v2/project/{project_id}/mqttuser/{mqttuser_id} endpoint. This endpoint generally allows you to modify the MQTT user account. It accepts the following parameters:

| Parameter | Datatype | Type | Required | Description | Example |
| --- | --- | --- | --- | --- | --- |
| project_id | integer | path | yes | The numeric id of the project for which to edit an MQTT user account. | 1 |
| mqttuser_id | integer | path | yes | The id of the existing MQTT user account to apply changes to. | 42 |
| password | string | body (JSON) | no | The changed password for the given MQTT user account. | ch4ng3dp4ssw0rd |
| rights | string | body (JSON) | no | The changed read or write permissions for this account. Note that write permissions imply read permissions. | read |
| validity | integer | body (JSON) | no | The expiry of the given MQTT user account is extended by this many seconds into the future from the time of the update. Maximum validity is 2 hours = 7200 seconds. | 7200 |
| description | string | body (JSON) | no | The changed human-readable description of what this account is for. | A new test account just for reading with a new password. |

A successful response looks like this:

  "operation": "",
  "resource": {
    "description": "A new test account just for reading with a new password.",
    "id": 42,
    "topics": [
        "id": 123,
        "rights": "read",
        "topic": "lbg01/mybuilding/#"
    "username": "my_mqtt_user",
    "valid_until": "2019-01-18T17:28:39.392003Z"
  "success": true

As you can see, the expiry date of the MQTT user account was extended by 7200 seconds = 2 hours from the time of the update (in this example, the update was sent roughly five minutes after the initial creation of the account).
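A renewal request could be sketched with Python's standard library as follows. As before, the API base URL and bearer token are placeholders, not aedifion's actual endpoints.

```python
import json
from urllib import request

API_BASE = "https://api.example.com"  # placeholder base URL

def build_renewal_request(token: str, project_id: int, mqttuser_id: int,
                          validity: int = 7200):
    """Build a PUT request for /v2/project/{project_id}/mqttuser/{mqttuser_id}.

    Extends the account's expiry by `validity` seconds from the time of
    the update (maximum 7200 seconds).
    """
    body = json.dumps({"validity": validity}).encode("utf-8")
    return request.Request(
        f"{API_BASE}/v2/project/{project_id}/mqttuser/{mqttuser_id}",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )

# resp = request.urlopen(build_renewal_request("<token>", 1, 42))
```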

Client Identifiers

Clients are identified by a unique client_id.

As per the MQTT 3.1.1 specification, client identifiers are between 1 and 23 alphanumeric characters (0-9, a-z, A-Z). Most MQTT brokers support longer client identifiers from the full range of UTF-8 characters, except the characters /, +, and #, which have a special meaning in MQTT and are disallowed for security reasons.

There can only be one connection per client_id per Broker. If two clients connect with the same client_id the older connection is terminated in favor of the newer. This restriction does not extend to your login credentials. You can open multiple connections using the same login credentials as long as you use a different client_id for each concurrent connection.

Choose a client_id and avoid special characters. Postfix the client_id with a random string or integer to ensure it is unique across concurrent connections.
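A minimal sketch of this convention in Python, using a random suffix that keeps the client_id within the 23 alphanumeric characters of the specification:

```python
import uuid

def make_client_id(prefix: str = "myclient") -> str:
    # Append a random suffix so each concurrent connection gets its own
    # client_id; stays alphanumeric and short, avoiding '/', '+', and '#'.
    return prefix + uuid.uuid4().hex[:8]
```

Each call yields a fresh identifier, so several clients sharing the same login credentials can connect concurrently without kicking each other off.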


The MQTT broker authorizes clients to subscribe and publish based on topics.

Once connected and authenticated, the client can publish or subscribe to one or multiple topics [1,2] - but not without authorization. To subscribe, the client needs read access to that topic. To publish, the client needs write access. Note that write access implies read access.

Authorization is specified through a list of topics (following exactly MQTT's topic syntax and semantics [1,2]) where for each topic it is specified whether the user has read or read/write access. Make sure to familiarize yourself with MQTT's topic structure, especially with hierarchy levels and the # wildcard [1,2].

Topic hierarchy

All MQTT topics on the platform follow a hierarchy that consists of two main parts: a fixed prefix and a variable postfix.

The prefix has two hierarchy levels and is assigned by aedifion:

  • The top level hierarchy, the load-balancing-group, is fixed and assigned by aedifion. It serves to separate different customers and projects and ensures that each customer and project is guaranteed separate and sufficient processing and communication resources.
  • The second level hierarchy, the project-handle, is a fixed (human-readable) string assigned by aedifion that uniquely identifies your project. This hierarchy separates different projects on the same load balancing group.

As a customer, you receive authorization for this prefix, i.e., for the topic load-balancing-group/project-handle/#, meaning you can publish and subscribe to "anything below" the project level of the topic hierarchy.

The postfix matching the # can generally have arbitrary length and structure as long as it is a UTF-8 string.

  • If you've purchased an aedifion Edge Device, this device collects data from different datapoints on your building network and publishes them to the postfixes datapoint_1, ..., datapoint_n. Via MQTT, you thus have datapoint-level publish/subscribe access to the datapoints of your building. For efficiency reasons, the Edge Device uses short identifiers, 4 to 12 characters long, generated from the full datapoint names, for example: load-balancing-group/project-handle/0OHgK8nP.

    aedifion's hash identifiers creation

    1. Build a SHA1 hash of the UTF-8 encoded datapoint id.
    2. base62-encode the hash.
    3. Keep the first hash_id_length characters, where hash_id_length is configured individually per project (default = 8).

    Here's a sample implementation in Python using the pybase62 module.

    import base62   # pip install pybase62
    import hashlib

    def base62id(s: str, hash_id_length: int = 8) -> str:
        # SHA1-hash the UTF-8 encoded datapoint id, base62-encode the
        # digest, and keep the first hash_id_length characters.
        return base62.encodebytes(hashlib.sha1(s.encode('utf-8')).digest())[:hash_id_length]
  • If you ingest data yourself, you can publish to arbitrary postfixes since the # wildcard of your topic authorization matches any number of sublevels.

It is important to note two things about publishing your own data via MQTT:

  1. The postfix is only used for routing messages on the broker, e.g., you can use it to group data for different subscribers. The postfix does not determine which time series the data is stored to; that is determined by the payload of your messages (see below).
  2. aedifion does not prevent you from writing data to datapoints that are at the same time written by the aedifion Edge Device. If you have a datapoint_A on your local building network that is discovered by the Edge Device and you also write to datapoint_A yourself, this data will be stored and intermingled in the same time-series.

Payload format

The payload format depends on the type of the topic published to or subscribed from. There are three types of topics on the platform:

  • Timeseries data topics: These topics are used to send and receive timeseries data from buildings. They are usually in the form <load balance group>/<project handle>.
  • Meta data topics: These topics are used to send and receive meta data from buildings. They are usually in the form META/<load balance group>/<project handle>.
  • Controls topics: These topics are used to send setpoints and schedules to buildings. They are usually in the form CONTROLS/<load balance group>/<project handle>.

Timeseries data

All messages containing timeseries data you publish to or receive from the MQTT broker must strictly adhere to Influx Line Protocol format:

RoomTemperature value=20.3 1465839830100400200
--------------- ---------- -------------------
    |               |              |
    |               |              |
+---------+  +-------------+   +---------+
|datapoint|  | observation |   |timestamp|
+---------+  +-------------+   +---------+
  • datapoint is an arbitrary non-empty UTF-8 string that identifies your datapoint. If it contains blank spaces or quotes, then you must quote the string and escape blanks as well as quotes using a backslash \, e.g., "this\ is\ a\ \"datapoint\"\ with\ spaces and\ quotes". The reported observation will be stored on the platform in a time series with exactly this name, as you will see when logging in to the frontend.
  • observation is the reported measurement and must have the form of value=<float> where <float> is parsable as a floating point number. observation must be separated from the datapoint by a single blank.
  • timestamp is the timestamp of your reported observation in nanosecond-precision Unix time, i.e., the number of nanoseconds since January 1st, 1970, 00:00:00 UTC. It must be separated from the observation by a single blank. If your timestamp is in millisecond or microsecond precision you must append 6 or 3 zeros, respectively.

Influx Line Protocol additionally allows concatenating an optional tag set after the datapoint. The tag set is a comma-separated list of tags (key=value pairs) associated with this observation. It is stored in the timeseries database together with the reported observation but is currently not yet available via the API. Tags sent with the observation in the tag set are regarded as dynamic and time-dependent, e.g., they can be used to mark maintenance or test periods. To define static, time-independent tags, e.g., a fixed location or the unit of the datapoint, use metadata messages or the HTTP API.

Influx Line Protocol allows publishing multiple datapoints in a single message. You have to separate them by a line break (\n), for example:

RoomTemperature value=20.9 1465839830100400200
ExtTemperature value=14.7 1465839830100400200
OfficeTemperature value=21.3 1465839830100400200
HallTemperature value=17.9 1465839830100400200

This yields a small performance gain; the disadvantage is that it is not possible to publish each datapoint to its own topic, as described above.
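The encoding rules above (escaping, nanosecond timestamps, batching via \n) can be sketched in Python. The helper names are ours, not part of any aedifion library:

```python
import time
from typing import Iterable, Optional, Tuple

def escape(datapoint: str) -> str:
    # Quote and backslash-escape names containing blanks or quotes.
    if " " in datapoint or '"' in datapoint:
        return '"' + datapoint.replace('"', '\\"').replace(" ", "\\ ") + '"'
    return datapoint

def encode(datapoint: str, value: float, ts_ns: Optional[int] = None) -> str:
    # Timestamp must be nanosecond-precision Unix time; a source with
    # millisecond precision would multiply its timestamps by 1_000_000.
    if ts_ns is None:
        ts_ns = time.time_ns()
    return f"{escape(datapoint)} value={value} {ts_ns}"

def encode_batch(observations: Iterable[Tuple[str, float, int]]) -> str:
    # Multiple observations in a single message, separated by '\n'.
    return "\n".join(encode(*obs) for obs in observations)
```

For example, encode("RoomTemperature", 20.3, 1465839830100400200) reproduces the single-line message shown above, and encode_batch produces the newline-separated multi-datapoint form.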

Timeseries data in messages that do not strictly adhere to this format will be received but will not be stored in the platform.


Metadata

All messages containing metadata you publish to or receive from the MQTT broker must strictly adhere to JSON format.

Each received metadata message is related to exactly one device or datapoint in the following manner:

  • The topic on which the message is received determines exactly one project. If that project does not exist, the message is silently dropped.
  • The message is parsed for a unique identifier depending on the message scheme to identify exactly one device or datapoint in the project. If no such device or datapoint is found, it is created.

Currently, there are two schemes to send metadata:

  • Scheme V1 (Deprecated):

    The message is parsed for key name.
    The value of this key must identify exactly one device or datapoint in the project depending on the value of the second key objtype.
    When the metadata message has been associated with a device or datapoint, all top-level key-value pairs (including objtype but excluding name) are initialized as tags on the associated device or datapoint. Example json:

      "name": "AS11-239291_C'Btr06'TempHall",
      "objtype": "trendLog",
      "units": "",
      "recordsSinceNotification": "44",
      "stopWhenFull": "False",
      "reliability": "noFaultDetected",
      "startTime": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}",
      "eventTimeStamps": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]",
      "address": "",
      "objinstance": 18,
      "source": "BACnet"

  • Scheme V2 (New):

    The message contains 4 keys: entity, mode, tags, datapoints.

    The entity is a dictionary, containing 3 compulsory key-value pairs that are used to identify a device or a datapoint.

    • id: the alphanumeric identifier
    • type: device or datapoint
    • source: the source of the device or datapoint

    The mode option defines how the tags are processed. There are three possible modes that can be used in combination.

    • clean: drop all currently assigned tags and assign only new ones
    • add: only add new tags
    • update: only update existing tags

    Once the object is identified, all items in tags are assigned as tags to the object, depending on the matching modes. tags is a list of dictionaries, each with two keys key and value of a tag ({"key": <tag_key>, "value": <tag_value>}). Additionally, an optional key datapoints can be used for a device object to pass a list of child datapoint identifiers.

    Example json:

            "id": "AS11-239291_C'Btr06'TempHall",
            "type": "datapoint",
            "source": "BACnet"
      "mode": ["add", "update"],
      "tags": [
          {"key": "units", "value": ""},
          {"key": "recordsSinceNotification", "value": "44"},
          {"key": "stopWhenFull", "value": "False"},
          {"key": "reliability", "value": "noFaultDetected"},
          {"key": "startTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
          {"key": "eventTimeStamps", "value": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]"},
          {"key": "reliability", "value": "trendLog"},
          {"key": "objtype", "value": "noFaultDetected"},
          {"key": "address", "value": ""},
          {"key": "objinstance", "value": 18}

Let's consider this example of a metadata message on the topic META/lbg1/buildinginc_headquarter. The displayed metadata was collected from a BACnet datapoint. Metadata for datapoints from Modbus, LON, etc. will look different. Values that contain nested structures are not unfolded but saved as-is into the value of the tag. Since tag values on the platform can be a text of any length, it is possible to save whole nested JSON objects as tag values.

        "id": "AS11-239291_C'Btr06'TempHall",
        "type": "datapoint",
        "source": "BACnet"
  "mode": ["add", "update"],
  "tags": [
    {"key": "notifyType", "value": "event"},
    {"key": "logInterval", "value": "6000"},
    {"key": "recordCount", "value": "5000"},
    {"key": "lastNotifyRecord", "value": "1749389"},
    {"key": "totalRecordCount", "value": "1749433"},
    {"key": "units", "value": ""},
    {"key": "recordsSinceNotification", "value": "44"},
    {"key": "stopWhenFull", "value": "False"},
    {"key": "reliability", "value": "noFaultDetected"},
    {"key": "startTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
    {"key": "eventTimeStamps", "value": "[{'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}, {'dateTime': {'time': (255, 255, 255, 255), 'date': (255, 255, 255, 255)}}]"},
    {"key": "objtype", "value": "trendLog"},
    {"key": "eventEnable", "value": "[1, 1, 1]"},
    {"key": "intervalOffset", "value": "0"},
    {"key": "address", "value": ""},
    {"key": "covResubscriptionInterval", "value": "1800"},
    {"key": "objinstance", "value": 18},
    {"key": "ackedTransitions", "value": "[1, 1, 1]"},
    {"key": "notificationClass", "value": "61"},
    {"key": "enable", "value": "True"},
    {"key": "notificationThreshold", "value": "80"},
    {"key": "clientCovIncrement", "value": "{'defaultIncrement': ()}"},
    {"key": "stopTime", "value": "{'time': (255, 255, 255, 255), 'date': 'Date(*-*-* *)'}"},
    {"key": "alignIntervals", "value": "True"},
    {"key": "logDeviceObjectProperty", "value": "{'deviceIdentifier': ('device', 2098179), 'objectIdentifier': ('pulseConverter', 8), 'propertyIdentifier': 'presentValue'}"},
    {"key": "eventState", "value": "normal"},
    {"key": "description", "value": "Trend Kumuliertes Volumen"},
    {"key": "statusFlags", "value": "[0, 0, 0, 0]"},
    {"key": "bufferSize", "value": "5000"},
    {"key": "bacnet_id", "value": "239291"},
    {"key": "trigger", "value": "False"},
    {"key": "loggingType", "value": "polled"}
  • The topic prefix META/ identifies this message as a metadata message, and we expect JSON format. buildinginc_headquarter is the unique project handle assigned by aedifion to this project (it is usually formed from a shorthand for the customer and a shorthand for the project; it is not used as a display name).
  • In this case, type under entity is datapoint so we identify it as a datapoint. Which datapoint is determined by the id key, i.e., here we associate the message to the datapoint AS11-239291_C'Btr06'TempHall.
  • All fields in tags are initialized as tags. In this example, 32 new tags are created from this message on datapoint AS11-239291_C'Btr06'TempHall.
  • The field source is special since it describes the origin of the whole metadata message. Every tag association to the device or datapoint gets this source identifier as you can see in the frontend.
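A scheme-V2 metadata message like the one above could be composed in Python as follows. The helper name metadata_v2 is ours for illustration, not part of any aedifion library:

```python
import json

def metadata_v2(dp_id: str, tags: dict, source: str = "BACnet",
                mode=("add", "update")) -> str:
    # entity identifies the datapoint; mode controls how tags are merged
    # (clean, add, update) as described above.
    message = {
        "entity": {"id": dp_id, "type": "datapoint", "source": source},
        "mode": list(mode),
        "tags": [{"key": k, "value": v} for k, v in tags.items()],
    }
    return json.dumps(message)

# client.publish("META/<load-balancing-group>/<project-handle>",
#                metadata_v2("AS11-239291_C'Btr06'TempHall", {"units": ""}))
```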


Controls

On all CONTROLS/... topics, the format and flow of messages is defined by the SWOP protocol. SWOP is a simple JSON-based protocol for setpoint and schedule writing specified and released as open source by aedifion. It decouples the aedifion Edge Device from the platform and allows anyone to implement and use their own edge device.

Writing a setpoint is as simple as sending the following exemplary message to the Edge Device:

  "type": "NEWSPT",
  "swop_version": 0.1,
  "datapoint": "bacnet93-4120-External-Room-Set-Temperature-RTs",
  "value": 20.3,
  "priority": 13

Head over to the SWOP protocol specifications for more examples.
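For illustration, the exemplary NEWSPT message above can be serialized and published from Python; the topic in the comment is a placeholder for your project's CONTROLS topic:

```python
import json

# The exemplary SWOP NEWSPT message from above as a Python dict.
setpoint = {
    "type": "NEWSPT",
    "swop_version": 0.1,
    "datapoint": "bacnet93-4120-External-Room-Set-Temperature-RTs",
    "value": 20.3,
    "priority": 13,
}
payload = json.dumps(setpoint)
# client.publish("CONTROLS/<load-balancing-group>/<project-handle>", payload)
```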


FAQ

What are Quality of Service (QoS) levels and how can I use them?
There are three QoS levels which, per the MQTT standard, define the delivery guarantees for messages.

  • With QoS 0, messages are sent without any acknowledgement of receipt. This method is therefore called fire and forget.
  • With QoS 1, the message is resent until it is received at least once.
  • With QoS 2, the message is received exactly once.

The QoS level is requested by the client on subscription or publication. Find a detailed explanation in this often-referenced article by HiveMQ.

Does the MQTT broker buffer messages while my client is disconnected?
When a client-broker connection with QoS 1 or 2 is interrupted, the missed messages can be sent to the client afterwards. This depends on the broker's configuration.

My client can't connect or keeps reconnecting?
Usually, the error is one of the following:

  • Connecting on the wrong port.
  • Connecting to a TLS endpoint without TLS.
  • User's local or network firewall blocks outgoing MQTT connections.
  • Wrong client credentials.
  • Using a client_id that is already connected. The broker will kick the older connection. If the other client then reconnects, the two clients play ping-pong.

Fair use

We at aedifion do our best to ensure seamless scalability and the highest availability of our MQTT services. Since we give priority to a clean and simple user experience, we currently do not enforce any rate limits on MQTT ingress and egress. This deliberately allows you to send bursts of data, e.g., to import a batch of historical data.

This being said, we will negotiate a quota with each customer that we consider the basis of fair use of our MQTT services. In favor of your user experience, this quota will be monitored but not strictly enforced. aedifion reserves the right to technically enforce the fair use quota on repeated violations without prior notice.
