If you have workloads deployed on Kubernetes, you need to be sure they are appropriately constrained and monitored to avoid excess spend.
Cloud Cost Management (CCM) can ingest Kubernetes data from Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). You can then receive recommendations for right-sizing Kubernetes containers.
The integration with Kubernetes collects metrics that allow you to analyze usage of your Kubernetes containers. You can receive recommendations for right-sizing workloads to reduce costs. You can also configure constraints based on your key metrics and your risk tolerance. Reports can be sent to selected recipients via email.
For AWS, EKS clusters can run on Fargate, on EC2, or on a combination of both. Fargate pods display direct costs, and right-sizing recommendations are provided directly for the containers.
If a cluster runs partially or exclusively on EC2, those instances are right-sized under the existing EC2 right-sizing process. For containers backed by EC2 instances, you will see container right-sizing recommendations without cost metrics.
Kubernetes metrics collection is available for all CCM customers with a Pro or a Trial license. You will need to configure your Kubernetes environment before you can integrate it with CCM.
To collect, or scrape, metrics from Kubernetes clusters, the Prometheus collector must be deployed and a listening service properly configured.
- To collect Kubernetes metrics, the Prometheus collector is required. A collector must be available for each account in Virtana Platform that includes Kubernetes.
  You must install Prometheus with an HTTPS connection and use a CA certificate (self-signed certificates are not supported). You must also provide Cloud Cost Management (CCM) public access to the Prometheus API.
  Prometheus can be configured inside or outside of a cluster. It can also use federated or nonfederated services.
  See the Prometheus documentation for download and configuration instructions.
- The kube-state-metrics (KSM) listening service must be available. The KSM service sends metrics about the state of objects in a Kubernetes cluster. Prometheus must be able to scrape the '/metrics' endpoint, on port 8080 by default.
  Depending on your CSP and how Kubernetes was installed, the KSM service might be available by default or you might need to install it manually.
  If manually installed, you might need to add annotations to each Kubernetes pod to be scraped. If needed, the annotations to be added are:
  prometheus.io/scrape: 'true'
  prometheus.io/path: '/metrics'
  prometheus.io/port: '8080'

  Services or pods from which metrics are to be collected must specify the prometheus.io/scrape: 'true' annotation. The default path and port can be changed.
In the Prometheus configuration (typically stored in a Kubernetes ConfigMap), there are three relabeling entries (__meta_kubernetes_pod_annotation_prometheus_io_<...>) that indicate that information about scraping, metrics path, and port should be read from the pod annotations.
See the kube-state-metrics GitHub site for details about the service.
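As a sketch of how those annotation-driven entries typically look, a Prometheus scrape configuration along the following lines keeps only annotated pods and reads the metrics path and port from the annotations. The job name is illustrative; adjust it to your environment.

```yaml
scrape_configs:
  - job_name: kubernetes-pods        # illustrative job name
    kubernetes_sd_configs:
      - role: pod                    # discover pods via the Kubernetes API
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: 'true'
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Read the metrics path from the prometheus.io/path annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Read the port from the prometheus.io/port annotation
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```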
The Prometheus metrics collector is open source software that is required to collect Kubernetes data from your cloud service provider (CSP). Right-sizing metrics are collected for containers and Fargate pods, but not for clusters.
About This Task
You must deploy a single Prometheus instance per Virtana account. If you have linked accounts in Virtana Platform, each parent and child account must have a Prometheus instance associated with it.
If you have existing Prometheus instances, you can integrate those with Virtana Platform.
To integrate with Virtana Platform, it doesn't matter if the Prometheus collector is using federated or nonfederated services, or is configured within or outside of a cluster.
Prerequisites
Cloud provider accounts that include Kubernetes resources must have Prometheus collectors installed on them. See Prometheus documentation for download and configuration instructions.
Prometheus version 2.14.0 or later is required.
The Prometheus server you are adding to Virtana Platform externalizes container metrics and must have a public URL with a routable port. The default port is 9090.
If you want to configure Prometheus with basic authentication, you must provide the Prometheus username and password. The credentials should be available from your infrastructure admin.
Tip
Basic authentication is the only authentication method provided with Virtana Platform. If your environment requires an alternative authentication method, submit a feature request using the Resource Center in Virtana Platform.
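For reference, HTTP basic authentication transmits the credentials as a base64-encoded Authorization header, which is why an HTTPS connection matters for protecting them in transit. A minimal sketch (the username and password below are placeholders for illustration):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Authorization header value for basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials for illustration only.
print(basic_auth_header("prometheus", "secret"))
```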
Steps
- Under Settings, navigate to Integrations > Metrics Collectors and click Add Integration.
- Enter the Data Source from which the metrics will be collected.
- Enter the Prometheus Domain.
  You can use HTTP, or HTTPS with a valid certificate. (If you would like support for self-signed certificates, vote for the feature in the Virtana Platform Resource Center.)
  Example: https://sample.prometheus.com
  You can also use an IP address or a URL to the service. The domain, IP address, or URL must be a public endpoint.
  The URL to a Kubernetes service is in the format service-name.namespace.svc.cluster.local:service-port, where cluster.local is the cluster's DNS domain (the default).
  Tip
  The domain can be a DNS URL, an IP address, or a URL like the one used for a load balancer. The load balancer URL can be located by selecting the CSP Kubernetes cluster, navigating to Services, and then selecting the Prometheus server. Alternatively, you can get the external address using the command:
  kubectl get svc --all-namespaces
- Optional: Select Enable Basic Authentication and enter the Prometheus Username and Password.
  Tip
  Prometheus credentials can be provided by an administrator with access to the Prometheus configuration file.
- Click Test Connection.
  A message indicates whether the test is successful.
- Click Save.
  The Prometheus collector displays in the table on the Metrics Collectors page.
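The in-cluster service URL format described above can be assembled programmatically. A minimal sketch, where the service name, namespace, and port are placeholders for illustration:

```python
def service_url(service_name: str, namespace: str, service_port: int) -> str:
    """Build the in-cluster DNS name for a Kubernetes service."""
    return f"{service_name}.{namespace}.svc.cluster.local:{service_port}"

# Placeholder values for a Prometheus server in a 'monitoring' namespace.
print(service_url("prometheus-server", "monitoring", 9090))
# → prometheus-server.monitoring.svc.cluster.local:9090
```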
You can view right-sizing information about Kubernetes resources on the Cost Saving Opportunities (CSO) page. Cloud Cost Management (CCM) supports AWS Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS).
From the Recommendation Details table you can see the current type and cost of your Kubernetes resources, as well as the proposed changes and potential savings. The drop-down for each row in the table provides additional details, including the name of the cluster and namespace that each container belongs to.
For Amazon EKS, CCM provides recommendations for both EC2 and Fargate instances, based on the constraints you set in the associated policy.
When viewing recommendations for Kubernetes, the Current Type and Proposed Type columns display recommended values, based on the Kubernetes configurations for CPU and memory. CPU and memory values are set in the configuration file, as shown in the Kubernetes documentation.
Kubernetes recommended values are displayed as CPU-memory pairs, such as 1000mCPU-128MiB. CPU values are in milli-CPUs (1000 mCPU = 1 CPU).
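As a sketch, a displayed value can be split into its CPU and memory parts. The parsing below assumes the 1000mCPU-128MiB display format shown above:

```python
def parse_recommendation(value: str) -> tuple[float, str]:
    """Split a 'CPU-memory' display value such as '1000mCPU-128MiB'
    into CPU cores (1000 milli-CPUs = 1 core) and the memory part."""
    cpu_part, mem_part = value.split("-", 1)
    milli = int(cpu_part.removesuffix("mCPU"))
    return milli / 1000, mem_part

print(parse_recommendation("1000mCPU-128MiB"))
# → (1.0, '128MiB')
```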
Containers on which a Prometheus service is deployed for Kubernetes are not right-sized, and are therefore not included in reports.
Tip
When you run Kubernetes on instances you host with your cloud provider, no cost data is associated with the container recommendations, and only the limit sizes display. In this circumstance, cost data is displayed for the instance rather than the container, and cost values in the Container column display as dashes.
When right-sizing in this circumstance, we recommend that you size your containers first, then size the underlying EC2 instances or VMs.
From the Right-Sizing tab, you can do the following:
- View recommendations for optimizing Kubernetes containers based on performance, cost, and risk.
- See an overview of potential savings for Kubernetes entities.
- Edit the policy for the Kubernetes resource and configure policy constraints based on required CPU and memory utilization, and adjust the data aggregation method.
- View details of recommendations to see how various performance indicators would be affected by implementing the recommendations.
- Configure reports for emailing.
- Implement change requests.
Related Topics
For more detailed information about Kubernetes concepts, see the Kubernetes documentation.
- Azure Kubernetes Service (AKS): The Microsoft Azure service for Kubernetes containers.
- cluster: A set of nodes, some of which host the Kubernetes pods. A cluster has one management node and at least one worker node. The containerized applications run on the cluster.
- ConfigMap: A file that contains the external configuration of an application.
- container: An executable image that contains software and all of its related dependencies. Containers decouple the software from the infrastructure and are therefore easier to deploy and are portable.
- Elastic Kubernetes Service (EKS): The Amazon service for Kubernetes containers.
- exporter: Translates metrics from the application into a format readable by Prometheus. It also makes the metrics available to be scraped by Prometheus.
- Google Kubernetes Engine (GKE): The Google service for Kubernetes containers.
- Helm: A tool for simplifying the deployment of Kubernetes applications and services.
- Helm chart: A package that contains all resource definitions needed to run an application or service in a Kubernetes cluster.
- Ingress Controller: An application that includes a Kubernetes load balancer, a network plugin, and exporter capabilities. The controller requires a public IP address or domain so that clients outside the Kubernetes cluster can access it.
- kube-state-metrics (KSM): A listening service that uses the Kubernetes API server to gather data and generate metrics about the state of objects such as nodes, pods, and deployments.
- namespace: An abstraction for organizing cluster objects. Resources within a namespace must have unique names.
- node: A machine in Kubernetes. Depending on the cluster configuration, nodes can be either virtual or physical. There is a single management (master) node and one or more worker nodes, which run the applications. A node can have multiple pods.
- pod: A group of one or more containers on a cluster.
- service: An abstraction layer used to define a logical set of pods. A service allows the application to be exposed externally, provides load balancing, and allows service discovery for pods, thereby permitting your applications to receive traffic.
- workload: An application running on Kubernetes.