Priority Buckets and Labeling
  • 12 Apr 2023

This section describes infrastructure enhancements used to optimize core services and control the sequence in which services are started.

Priority Buckets for Services

Currently, all services are started with equal priority. However, there are instances where an application service or a core service comes up ahead of another core service on which it depends. This leads to multiple service restarts and delays bringing up all the services after an iNode reboot.
To avoid this delay, the infrastructure assigns priority buckets to core services to control the order in which services start.

Service Dependency and Priority Recommendation

In the case of the Secure Edge core services (DHCP, PowerDNS, and Postgres), the service dependencies are as follows:

  • PowerDNS requires Postgres to be running.
  • DHCP requires Postgres, and PowerDNS (in the case of Dynamic DNS updates), to be running.

The recommendation is to assign priorities (0 is the highest priority, 7 the lowest) as follows:

Service                                Priority
Postgres (Core)                        0
PowerDNS (Core)                        1
DHCP (Core)                            2
Default priority for other services    7

The priority is assigned to a service using the following label definition in the respective pod specification.

Label:

io_iotium_pod_priority: 2
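For illustration, the effect of the priority buckets can be sketched as follows. This is a hypothetical helper, not the actual iNode scheduler: it sorts pod specs by the io_iotium_pod_priority label, with unlabeled services falling into the default bucket 7.

```python
# Hypothetical sketch: order pod specs by their priority-bucket label.
# Lower bucket numbers start first; unlabeled services default to bucket 7.

def startup_order(pod_specs):
    """Return pod specs sorted by io_iotium_pod_priority (0 = highest)."""
    def bucket(spec):
        return int(spec.get("labels", {}).get("io_iotium_pod_priority", 7))
    return sorted(pod_specs, key=bucket)

services = [
    {"name": "DHCP", "labels": {"io_iotium_pod_priority": "2"}},
    {"name": "app"},  # no label -> default bucket 7
    {"name": "DB", "labels": {"io_iotium_pod_priority": "0"}},
    {"name": "DNS", "labels": {"io_iotium_pod_priority": "1"}},
]
print([s["name"] for s in startup_order(services)])
# -> ['DB', 'DNS', 'DHCP', 'app']
```

Note that this ordering matches the recommendation above: Postgres first, then PowerDNS, then DHCP, then everything else.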

Special Labels for Service Specifications

This section describes the View Secure Edge special labels for services.

Labels for Service Deployment in Clusters

Name: _iotium_master_elect
Type: String
Required: False
Description: Value: "subscribed". Sets the environment variable IOTIUM_NODE_ROLE to master/slave based on the node role in the cluster. This variable can be used by application services that need to differentiate between a service instance running on the cluster master and one running on a slave.

Name: _iotium_master_elect_ip / _iotium_master_elect_ip_prefixlen
Type: String
Required: False
Description: If set, _iotium_master_elect_ip and _iotium_master_elect_ip_prefixlen are applied as the IP address/prefix length of the application service instance running on the master (the iNode with IOTIUM_NODE_ROLE equal to master). For this to take effect, _iotium_master_elect must be set to "subscribed".

Application services may make execution decisions based on the node on which they run in the cluster.

  • The "_iotium_master_elect: subscribed" label ensures that application services receive this role information.
  • The "_iotium_master_elect_ip: <IPaddress>" and "_iotium_master_elect_ip_prefixlen" labels specify the static IP address to use for the replica instance of the service. This static IP address is applied to the replica instance running on the master node of the cluster.
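As a sketch of how an application service might consume this, the snippet below (hypothetical application-side code, not part of the platform) branches on the IOTIUM_NODE_ROLE environment variable; the fallback to "slave" when the variable is unset is an assumption for illustration.

```python
import os

# Hypothetical application-side sketch: branch on the IOTIUM_NODE_ROLE
# environment variable that is set when the service carries the
# "_iotium_master_elect": "subscribed" label.

def service_role() -> str:
    """Return the node role; defaulting to "slave" when unset is an assumption."""
    return os.environ.get("IOTIUM_NODE_ROLE", "slave").lower()

if service_role() == "master":
    # e.g. accept writes / own the _iotium_master_elect_ip address
    print("running on the cluster master")
else:
    print("running on a slave node")
```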

Label for Core Services (to Avoid Conflicts with Container Time Zone Setting)

Name: _iotium_core_service
Values: true / false
Required: True for ioTium core services; false for other services.
Description: Marks a service as a "core service". When set to true, container time zone changes do not affect the service; it continues to use the iNode time zone.
Valid Deployments: Standalone iNode and Cluster.

Users can change the application service container time zone as needed. While all application services adhere to that time zone, the node remains in its configured time zone (UTC by default). Core services should work in the same time zone as the node.

To avoid any impact from container time zone changes, core services must be labeled explicitly as core services with the following key-value pair:

"_iotium_core_service": "true"

Label for Pod Priority

Name: io_iotium_pod_priority
Values: 0-2: reserved for ioTium core services. 3-7: user application service priority scheduling.
Required: False
Usage: Provides a service chaining mechanism. Services that must be available before another service are brought up first by being given a higher priority. Refer to the "Priority Scheduling" section for more details.
Valid Deployments: Standalone iNode and Cluster.

Labels to Avoid Service Restart During Master Failover

Name: _iotium_master_elect_env_volume
Values: String. A volume name given in the "volumes" section of the pod specification.
Required: False
Description: Names the volume into which IOTIUM_NODE_ROLE and the cluster-related environment variables are written. Instead of being set as environment variables in the service specification, they are written to a file named "runtime.env" in the specified volume.
Valid Deployments: Cluster.

Name: _iotium_master_elect_set_env
Values: disable
Required: False
Description: When set to "disable", the node-role and cluster-related variables are not injected as service environment variables; they are only written to the "runtime.env" file in the volume named above (see the service specification examples below).
Valid Deployments: Cluster.

Avoid Restart of the Replica Service Instance on Master Failover

You can reduce downtime in a cluster deployment when a master failover occurs. A master failover triggers a new election, and all services that depend on the node role in the cluster restart. You can instead configure these services to change roles seamlessly on failover, without undergoing a service restart. Running the replica application service with this restart-avoidance configuration effectively reduces downtime on cluster master failover.

You avoid the service restart by specifying the volume where the node-role-related environment variables are written and by disabling their injection as service environment variables.

The two labels above accomplish this. In addition, ensure that the DNS policy for the service is set to None and that the DNS IP address is set. Refer to the PowerDNS/Postgres service specifications for details.

If this feature is used, the application service must be able to read the environment variables from the file.
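Reading that file can be sketched as follows. This is hypothetical application-side code: the "runtime.env" file name comes from the label description above, while the default mount path and the KEY=VALUE file format are assumptions for illustration.

```python
# Hypothetical sketch: re-read role-related variables from the runtime.env
# file written into the volume named by _iotium_master_elect_env_volume,
# instead of relying on process environment variables (which would only
# change after a service restart).

def read_runtime_env(path):
    """Parse a KEY=VALUE file (format assumed) into a dict, skipping comments."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

# Example (mount path "/config" is an assumption from the Postgres spec below):
# role = read_runtime_env("/config/runtime.env").get("IOTIUM_NODE_ROLE")
```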

Special Labels for Core Services Service Specification

This section describes the POST bodies for the PowerDNS, PostgreSQL, DHCP, and NTP services that exercise the controls described in this section. It also describes an option for enabling remote logging for the services.

Postgres Priority and Core Service Setting

For the Postgres service, the priority label is set to 0 and the core service label is set to true. The Postgres service image iotium/postgres:12.3.0-3-amd64 is required.

To bring up the Postgres core service:

  1. To update an existing Postgres service spec, edit the spec and make the necessary changes related to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "0"
"_iotium_core_service": "true"
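The two labels in step 2 can also be merged into an existing spec programmatically before it is POSTed. The helper below is a hypothetical sketch, not part of the product API:

```python
# Hypothetical helper: stamp an existing pod-spec dictionary with the
# core-service labels required for the Postgres service (priority 0).

def mark_as_core(spec: dict, priority: int) -> dict:
    """Add io_iotium_pod_priority and _iotium_core_service labels in place."""
    labels = spec.setdefault("labels", {})
    labels["io_iotium_pod_priority"] = str(priority)
    labels["_iotium_core_service"] = "true"
    return spec

spec = {"name": "DB", "labels": {"io_iotium_template": "postgresqacluster"}}
mark_as_core(spec, 0)
print(spec["labels"]["io_iotium_pod_priority"])  # -> 0
```

The same helper applies to the PowerDNS (priority 1) and DHCP (priority 2) specs below.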

Postgres service specification example:

{
	"name": "DB",
	"labels": {
		"io_iotium_template": "postgresqacluster",
		"_iotium_master_elect": "subscribed",
		"_iotium_master_elect_ip": "10.102.0.2",
		"_iotium_master_elect_ip_prefixlen": "24",
		"_iotium_master_elect_env_volume": "iotium-vol",
		"_iotium_master_elect_set_env": "disable",
		"io_iotium_pod_priority": "0",
		"_iotium_core_service": "true"
	},
	"networks": [{
		"network_id": "n-82e48b339e69df75"
	}],
	"services": [{
		"image": {
			"name": "iotium/postgres",
			"version": "12.3.0-3-amd64"
		},
		"docker": {
			"environment_vars": {
				"POSTGRESQL_PASSWORD": "postgres",
				"DHCP_DB_NAME": "dhcp",
				"DHCP_DB_USER": "dhcp",
				"DHCP_DB_PASSWORD": "dhcp",
				"DNS_DB_NAME": "pdns",
				"DNS_DB_USER": "pdns",
				"DNS_DB_PASSWORD": "pdns",
				"POSTGRESQL_MASTER_HOST": "10.102.0.2"
			},
			"volume_mounts": [{
					"name": "datadir",
					"mount_path": "/bitnami/postgresql"
				},
				{
					"mount_path": "/config/",
					"name": "iotium-vol",
					"read_only": true
				}
			]
		},
		"liveness_probe": {
			"exec": {
				"command": ["/healthcheck.sh"]
			},
			"initial_delay_seconds": 10,
			"timeout_seconds": 5,
			"period_seconds": 30,
			"success_threshold": 1,
			"failure_threshold": 3
		},
		"image_pull_policy": "IfNotPresent"
	}],
	"volumes": [{
		"name": "datadir",
		"emptyDir": {}
	}, {
		"name": "iotium-vol",
		"emptyDir": {}
	}],
	"dns_policy": "None",
	"dns": [
		"8.8.8.8",
		"10.102.0.3"
	],
	"termination_grace_period_in_seconds": 60,
	"kind": "REPLICA",
	"cluster_id": "67664978-55c3-4e56-b04e-a1dd59a5496e",
	"node_selector": {
		"_iotium.cluster.candidate": "true"
	}
}

PowerDNS Priority and Core Service Setting

For the PowerDNS service, the priority label is set to 1 and the core service label is set to true.

The PowerDNS service image (iotium/powerdns:4.0.8-3-amd64) can be used instead of the readiness probe scripts.

To bring up the PowerDNS core service:

  1. To update the existing pod, edit the pod spec and make the necessary changes related to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "1"
"_iotium_core_service": "true"

PowerDNS service specification for API example

The example specification is used to run PowerDNS in Replica mode in a cluster with no service restart on cluster master failover.

{
	"name": "DNS",
	"labels": {
		"io_iotium_template": "pdns-2208",
		"_iotium_core_service": "true",
		"io_iotium_pod_priority": "1",
		"_iotium_master_elect": "subscribed",
		"_iotium_master_elect_ip_prefixlen": "24",
		"_iotium_master_elect_ip": "10.200.100.4",
		"_iotium_master_elect_set_env": "disable",
		"_iotium_master_elect_env_volume": "iotium-vol",
		"_iotium_template": "pdns-2208"
	},
	"networks": [{
		"network_id": "n-6a2c225bfb36ec6f"
	}],
	"services": [{
			"name": "pdnsrecursor",
			"image": {
				"name": "iotium/dnsrecursor",
				"version": "4.5.8-1-amd64"
			},
			"docker": {
				"environment_vars": {
					"PDNS_API_KEY": "changeme",
					"PDNS_WEBSERVER_ALLOW_FROM": "0.0.0.0/0",
					"PDNS_ALLOW_RECURSION": "",
					"PDNS_RECURSOR": ""
				},
				"volume_mounts": [{
					"name": "zonefile",
					"mount_path": "/var/pdns/zonefiles",
					"read_only": false
				}]
			},
			"image_pull_policy": "IfNotPresent"
		},
		{
			"image": {
				"name": "iotium/powerdns",
				"version": "4.5.4-1-amd64"
			},
			"docker": {
				"environment_vars": {
					"PDNS_GPGSQL_PASSWORD": "pdns",
					"PDNS_ALLOW_DNSUPDATE_FROM": "10.200.100.5",
					"PDNS_GPGSQL_DBNAME": "pdns",
					"PDNS_GPGSQL_USER": "pdns",
					"PDNS_API_KEY": "changeme",
					"PDNS_WEBSERVER_ALLOW_FROM": "0.0.0.0/0",
					"PDNS_GPGSQL_HOST": "10.200.100.3",
					"ENABLE_REMOTE_LOGGING": "true"
				},
				"volume_mounts": [{
						"name": "zonefile",
						"mount_path": "/var/pdns/zonefiles"
					},
					{
						"name": "iotium-vol",
						"mount_path": "/iotium",
						"read_only": true
					},
					{
						"name": "named",
						"mount_path": "/var/pdns/config"
					},
					{
						"name": "logs",
						"mount_path": "/var/log"
					}
				]
			},
			"image_pull_policy": "IfNotPresent"
		},
		{
			"image": {
				"name": "fluent/fluent-bit",
				"version": "1.5"
			},
			"docker": {
				"volume_mounts": [{
						"name": "logs",
						"mount_path": "/var/log"
					},
					{
						"name": "fluentbit",
						"mount_path": "/fluent-bit/etc/"
					}
				]
			},
			"image_pull_policy": "IfNotPresent"
		}
	],
	"volumes": [{
			"name": "zonefile",
			"secret_volume": {
				"secret": "217a8876-ef07-4785-bf38-600dc0f23026"
			}
		},
		{
			"name": "iotium-vol",
			"emptyDir": {}
		},
		{
			"name": "named",
			"secret_volume": {
				"secret": "ff13a50f-db22-4f5d-b36a-d24abf5bd5c7"
			}
		},
		{
			"name": "logs",
			"emptyDir": {}
		},
		{
			"name": "fluentbit",
			"secret_volume": {
				"secret": "64c883e5-76f1-455d-b995-f61b9ac839e9"
			}
		}
	],
	"dns_policy": "None",
	"dns": [
		"10.200.100.4"
	],
	"kind": "REPLICA",
	"cluster_id": "1ed3a792-5a18-4b95-be25-30873654535b",
	"node_selector": {
		"_iotium.cluster.candidate": "true"
	}
}

DHCP Service Priority and Core Service Setting

For the DHCP service instance, the priority label is set to 2 and the core service label is set to true.

To bring up the DHCP core service:

  1. To update the existing pod, edit the pod spec and make the necessary changes related to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following labels in the "labels" section of the pod spec:
"io_iotium_pod_priority": "2"
"_iotium_core_service": "true"

DHCP service specification for API example:

{
	"kind": "SINGLETON",
	"name": "DHCP",
	"cluster_id": "96cfabed-9410-4b97-be56-71dd9ffc2e7f",
	"networks": [{
		"network_id": "n-4900829c7c563ffd",
		"ip_address": "172.31.0.5"
	}],
	"labels": {
		"io_iotium_pod_priority": "2",
		"_iotium_core_service": "true",
		"io_iotium_template": "dhcpqacluster",
		"io_iotium_fileName": ""
	},

	"services": [{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/etc/kea/",
						"name": "dhcp3"
					},
					{
						"mount_path": "/var/lib/kea",
						"name": "leasedir"
					},
					{
						"mount_path": "/etc/keaddns/",
						"name": "ddns3"
					},
					{
						"mount_path": "/var/log",
						"name": "logs"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.6.2-2-amd64",
				"name": "iotium/dhcpd"
			}
		},
		{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/var/log",
						"name": "logs"
					},
					{
						"mount_path": "/fluent-bit/etc/",
						"name": "fluent-bit.conf"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.5",
				"name": "fluent/fluent-bit"
			}
		}
	],

	"volumes": [{
			"secret_volume": {
				"secret": "fdbfe29a-e1b3-4c17-9a12-f2c3697ac553"
			},
			"name": "dhcp3"
		},
		{
			"emptyDir": {},
			"name": "leasedir"
		},
		{
			"secret_volume": {
				"secret": "757d6de6-3f74-40db-b7f1-0626c0b5f789"
			},
			"name": "ddns3"
		},
		{
			"emptyDir": {},
			"name": "logs"
		},
		{
			"secret_volume": {
				"secret": "ab2179c7-134f-48a9-9414-afd15727e7c0"
			},
			"name": "fluent-bit.conf"
		}
	]
}

NTP Service Priority and Core Service Setting

You don’t need to set a priority for the NTP service. Set the core service label to true to ensure the service is not affected by the container time zone configuration.

To bring up the NTP core service:

  1. To update the existing pod, edit the pod spec and make the necessary changes related to IPs, Secret IDs, Network ID, Cluster ID, and DNS.
  2. Make sure to add the following label in the "labels" section of the pod spec:
"_iotium_core_service": "true"

NTP service specification request body for API example:

{
	"kind": "SINGLETON",
	"name": "NTP",
	"cluster_id": "96cfabed-9410-4b97-be56-71dd9ffc2e7f",
	"networks": [{
		"network_id": "n-4900829c7c563ffd",
		"ip_address": "172.31.0.6"
	}],
	"labels": {
		"_iotium_template": "ntpqacluster",
		"io_iotium_template": "ntpqacluster",
		"_iotium_core_service": "true"
	},
	"services": [{
			"docker": {
				"volume_mounts": [{
					"mount_path": "/var/log",
					"name": "logs"
				}],
				"cap_add": [
					"SYS_TIME",
					"SYS_RESOURCE"
				],
				"environment_vars": {
					"ENABLE_REMOTE_LOGGING": "true"
				}
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "4.2.8p10-2-amd64",
				"name": "iotium/ntp"
			}
		},
		{
			"docker": {
				"volume_mounts": [{
						"mount_path": "/var/log",
						"name": "logs"
					},
					{
						"mount_path": "/fluent-bit/etc/",
						"name": "fluent-bit.conf"
					}
				]
			},
			"image_pull_policy": "IfNotPresent",
			"image": {
				"version": "1.5",
				"name": "fluent/fluent-bit"
			}
		}
	],
	"volumes": [{
			"emptyDir": {},
			"name": "logs"
		},
		{
			"secret_volume": {
				"secret": "ab2179c7-134f-48a9-9414-afd15727e7c0"
			},
			"name": "fluent-bit.conf"
		}
	]

}
