We are currently running our Kubernetes infrastructure in AWS using Kops. This provides several advantages, including the ability to easily create and use Auto Scaling Groups (ASGs). Part of the Kubernetes Autoscaler repository is the cluster autoscaler, which watches for events on your Kubernetes cluster and responds by scaling nodes up and down as needed to run your pods.
With the release of Kubernetes 1.8, RBAC (Role-Based Access Control) came out of beta and became generally available. If you have used AWS in the past, RBAC will feel familiar: like IAM, it grants specific accounts permission to perform specific actions. With RBAC enabled, anything you run that needs API access will now need either a Role or a ClusterRole, plus a RoleBinding or ClusterRoleBinding. A full explanation of these objects is beyond the scope of this post; if you need additional clarification, see the Kubernetes docs.
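As a concrete illustration of the object pairing described above, here is a minimal sketch of a namespaced Role and its RoleBinding. All names here (`pod-reader`, `read-pods`, the `my-app` ServiceAccount) are hypothetical:

```yaml
# Hypothetical example: a Role allowing read-only access to pods
# in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]        # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# A RoleBinding granting the Role above to a ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: ServiceAccount
  name: my-app           # hypothetical ServiceAccount name
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The same shape applies cluster-wide: swap Role for ClusterRole and RoleBinding for ClusterRoleBinding, and drop the namespace from the metadata.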
Surprisingly, there are no default Roles or ClusterRoles for the Cluster Autoscaler. There is an open issue about it, but nothing has actually been defined yet. I took a first pass at defining a ClusterRole for the pod, which is currently working on my cluster. I would welcome anyone taking a pass at paring it down, as I suspect it is still broader than it needs to be.
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - get
  - watch
  - list
  - create
  - update
- nonResourceURLs: ["*"]
  verbs:
  - get
  - watch
  - list
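On its own, the ClusterRole grants nothing; it has to be bound to the identity the autoscaler pod runs under. A minimal ClusterRoleBinding sketch, assuming the pod uses a ServiceAccount named cluster-autoscaler in the kube-system namespace (adjust to match your actual deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler   # assumed ServiceAccount name
  namespace: kube-system     # assumed namespace for the autoscaler pod
roleRef:
  kind: ClusterRole
  name: cluster-autoscaler   # the ClusterRole defined above
  apiGroup: rbac.authorization.k8s.io
```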