Generally when I have monitored network equipment in the past I have relied on SNMP. SNMP is not without its issues, though. It's not the most secure protocol, and it's not the fastest either. Additionally, I had a ton of issues getting Prometheus to scrape metrics from the Mikrotik equipment I currently use. I ended up with a very complicated setup involving a custom snmp-exporter and a configuration file I found somewhere on the internet. All that to say, I've found a much simpler way to get metrics from my Mikrotik devices. Enter the mktxp exporter.
I don't remember how I stumbled across this project, but I was happy to see it considering the issues I had getting metrics into my Grafana Cloud instance. Using mktxp on my Kubernetes cluster was very simple, since the project already provides a sample Deployment and Secret template.
Mikrotik Setup
First, however, you need to add a new user to your Mikrotik device, turn on API access, and open the firewall to the correct IP range, VLAN, or interface list. Make sure to modify the values to suit your environment.
/ip service set api-ssl address=10.0.0.0/24 disabled=no
/ip service set api address=10.0.0.0/24 disabled=no
/ip firewall filter add action=accept chain=input comment="Allow API access" dst-port=8728 in-interface-list=MANAGE protocol=tcp
/ip firewall filter add action=accept chain=input comment="Allow SSL API access" dst-port=8729 in-interface-list=MANAGE protocol=tcp
/user group add name=mktxp_group policy=api,read
/user add name=mktxp_user group=mktxp_group password=mktxp_user_password
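If you want a quick sanity check before moving on, you can print the service and user you just configured from the RouterOS console (output will vary with your RouterOS version):
/ip service print where name~"api"
/user print where name=mktxp_user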
Mktxp Setup
Next, make the necessary modifications to the mktxp secrets.yaml file, putting in your username, password, device IP address, and port, then deploy both the secrets.yaml and the deployment.yaml:
kubectl apply -f secrets.yaml
kubectl apply -f deployment.yaml
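If you're wondering where those values go, they end up in mktxp's mktxp.conf inside the Secret; a minimal device entry looks roughly like this (key names are taken from the mktxp README and may differ between versions, so treat it as a sketch rather than the exact template from the repo):
[MyRouter]
    hostname = 10.0.0.1
    port = 8728
    username = mktxp_user
    password = mktxp_user_password
    use_ssl = False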
Once it's running you can either exec into the pod to check that it's working, or do what I did: deploy a "test" pod in the same namespace and use curl to check that it is serving metrics on port 49090. Here is the YAML for that temporary pod:
apiVersion: v1
kind: Pod
metadata:
  name: basic-test
spec:
  containers:
    - name: basic-test
      image: debian:bookworm
      imagePullPolicy: Always
      # Just spin & wait forever
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
Just deploy that YAML and then install curl in the pod to test the endpoint:
kubectl apply -f temp.yaml
kubectl exec -ti basic-test -- /bin/bash
$ apt update
$ apt install curl
$ curl mktxp-exporter:49090
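If you'd rather not create and clean up a test pod by hand, a throwaway pod with a curl image can run the same check in one line (curlimages/curl here is just an example; any image with curl works):
kubectl run curl-test --rm -ti --restart=Never --image=curlimages/curl -- curl -s mktxp-exporter:49090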
You should see a whole bunch of output related to your router. Once this is verified, delete the temporary pod. Now, if you use Grafana Cloud, it's time to set up Alloy.
Grafana Cloud
Originally I didn't want to use Alloy for this because I don't use it anywhere else in my infrastructure, but the setup ended up being dead simple. The configuration file is below; make sure to add the username/password for your Grafana Cloud Prometheus instance:
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-us-central1.grafana.net/api/prom/push"

    // Get basic authentication
    basic_auth {
      username = ""
      password = ""
    }
  }
}

prometheus.scrape "mikrotik" {
  targets = [{
    __address__ = "mktxp-exporter:49090",
  }]
  forward_to = [prometheus.remote_write.default.receiver]
}
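The scrape block also accepts optional tuning arguments if you need them; for example, you could slow the scrape down or give the job an explicit name. The values below are purely illustrative, not something the original config requires:
prometheus.scrape "mikrotik" {
  targets = [{
    __address__ = "mktxp-exporter:49090",
  }]
  forward_to      = [prometheus.remote_write.default.receiver]
  job_name        = "mikrotik"
  scrape_interval = "60s"
}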
Now, even though this has a password in it, I applied it as a ConfigMap because, let's face it, it's not like Kubernetes Secrets are all that secret, and it's my personal cluster. YMMV. Anyway, apply it: kubectl create cm alloy-config --from-file=alloy_config
Create a simple values.yaml for the Helm chart:
alloy:
  configMap:
    # Use the ConfigMap created above instead of the chart-managed one
    create: false
    name: alloy-config
    key: alloy_config
Then deploy it using Helm:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install --namespace <NAMESPACE> <RELEASE_NAME> grafana/alloy -f values.yaml
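Once the release is up, you can sanity-check that the Alloy pod started and isn't logging remote_write errors (pod names will differ in your cluster):
kubectl -n <NAMESPACE> get pods
kubectl -n <NAMESPACE> logs <alloy-pod-name>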
That's it! In a minute or two you should see metrics coming in that start with mktxp_.
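One quick way to confirm from Grafana's Explore view is a query that counts everything the exporter is shipping (just one way to check; querying any individual mktxp_ metric works too):
count({__name__=~"mktxp_.+"})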