Welcome to Anarcher’s Trashcan, a personal blog about programming, technology, and more.

When Kubernetes Nodes Exceed 1000

TL;DR

  1. When the number of nodes in a Kubernetes cluster exceeds 1000, the number of node-exporter pods (deployed as a DaemonSet, one per node) grows with it.
  2. When the Prometheus Operator performs service discovery via a ServiceMonitor, it references the service’s Endpoints by default.
  3. Kubernetes Endpoints objects have a default limit of 1000 IPs.
  4. As a result, only 1000 Prometheus scrape targets are maintained.
  5. Prometheus should use EndpointSlices instead of Endpoints for service discovery.

As Kubernetes clusters grow and the number of nodes exceeds 1000, various challenges arise. One particularly important issue from a monitoring perspective is Prometheus Service Discovery.
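In raw Prometheus terms, the fix from the TL;DR boils down to switching the service-discovery role from `endpoints` to `endpointslice` (available since Prometheus 2.21). A minimal sketch, where the job name and namespace are placeholders:

```yaml
scrape_configs:
  - job_name: node-exporter
    kubernetes_sd_configs:
      # The endpointslice role discovers targets from EndpointSlice
      # objects, which are not truncated at 1000 addresses the way a
      # single Endpoints object is.
      - role: endpointslice
        namespaces:
          names: [monitoring]
```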

A toy tool for generating Kubernetes manifests in Rust

My main criticism of Helm is that it generates Kubernetes resources by composing YAML with text templates. To be fair, the template/values pattern of Helm charts also shows up in kube-prometheus’s jsonnet, and there is a genuine need to abstract and manage complex configurations.

GreptimeDB as Prometheus Long-term Storage

Is GreptimeDB suitable as a long-term storage solution for Prometheus? To find an answer to this question, I set up a simple configuration of GreptimeDB (v0.13) to investigate.

What is GreptimeDB?

GreptimeDB is an open-source cloud-native time series database that integrates metrics, logs, and events.
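To use it as Prometheus long-term storage, you point Prometheus at GreptimeDB’s Prometheus-compatible remote endpoints. A minimal sketch of the Prometheus side, assuming GreptimeDB listens on its default HTTP port 4000 and the default `public` database (hostname is a placeholder):

```yaml
remote_write:
  - url: http://greptimedb:4000/v1/prometheus/write?db=public
remote_read:
  - url: http://greptimedb:4000/v1/prometheus/read?db=public
```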

Jsonnet: The Good, the Bad, and the Meh

Each solution is the root of the next problem – Gerald M. Weinberg

I’ve been using Jsonnet for several years, and I think it would be good to summarize my experiences so far.

The good

The best thing about Jsonnet is that it’s a superset of JSON. As a data templating language for generating JSON, it provides many features of programming languages (variables, functions, arithmetic operations, conditionals). Since it can generate JSON, it can also generate YAML, which is why I use it with tanka to create Kubernetes manifests.
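As a tiny illustration of that point (the Pod name and image are arbitrary), any JSON document is already valid Jsonnet, and on top of it you get variables and functions:

```jsonnet
local name = 'web';
local port(n) = { containerPort: n };

{
  apiVersion: 'v1',
  kind: 'Pod',
  metadata: { name: name },
  spec: {
    containers: [
      { name: name, image: 'nginx:1.25', ports: [port(80)] },
    ],
  },
}
```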

Some notes about cortex architecture

Cortex

Cortex, started by Tom Wilkie and Julius Volz (Prometheus’ co-founder) in 2016, has several interesting architectural features. As one of Prometheus’ long-term storage solutions, Cortex has been referenced by many time-series based storage architectures (tracing, log) since then (especially in the Grafana stack).

kroller: a tiny (restart) tool to help with Kubernetes cluster upgrades

Kubernetes upgrades (especially EKS) are categorized into two types based on the Kubernetes architecture:

  • Control plane upgrade (+ etcd)
  • Node upgrade

Particularly when using cloud-managed Kubernetes like EKS, since AWS manages the control plane, you’ll mostly handle node upgrades directly (if you’re not using managed nodegroups).
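The per-node part of an upgrade typically comes down to cordon, drain, replace, uncordon. A minimal sketch with plain kubectl (the node name is a placeholder):

```shell
# Stop new pods from being scheduled on the node.
kubectl cordon ip-10-0-1-23.ec2.internal

# Evict existing pods; DaemonSet pods stay, emptyDir data is discarded.
kubectl drain ip-10-0-1-23.ec2.internal \
  --ignore-daemonsets \
  --delete-emptydir-data

# ...replace or upgrade the node (e.g. roll the ASG in EKS)...

# Let the (new) node accept pods again.
kubectl uncordon ip-10-0-1-23.ec2.internal
```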

Prometheus 101 (slide) and Graphite

Prometheus 101

slide: Prometheus 101

slide: query

slide: range vector

I created a simple presentation about Prometheus. I uploaded it using sporto/hugo-remark: A theme for using remark.js with hugo, and I found that creating it in markdown rather than PowerPoint allowed me to focus more on the content. (But that doesn’t necessarily mean the content is better.)

Make REST API Documentation using swagger in Go

For Golang-based HTTP/REST API documentation, I chose Swagger. go-swagger offers several ways to produce Swagger documentation: it can generate code from a Swagger spec, but I already had a REST API server, so instead I use go-swagger’s Go comment annotations to generate the spec from the existing code.
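As a rough sketch of the annotation style (the route, operation id, and response name are placeholders, not from my actual API), a handler documented with go-swagger comments looks like this:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// swagger:route GET /pets pets listPets
//
// Lists all pets.
//
//     Produces:
//     - application/json
//
//     Responses:
//       200: petsResponse

// A list of pet names.
// swagger:response petsResponse
type petsResponse struct {
	// in: body
	Body []string
}

func listPets(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode([]string{"rex", "milo"})
}

func main() {
	http.HandleFunc("/pets", listPets)
	http.ListenAndServe(":8080", nil)
}
```

Running `swagger generate spec -o swagger.json` then scans the package comments and emits the spec.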

For REST API development, a design-first approach (writing the spec first, then generating code from it) is a good one. goa is a well-known tool for this style.

Releasing with bumpversion, govvv and drone

One of the pleasures of coding is using good tools. Recently I have been using bumpversion, govvv, and drone for version releasing.

bumpversion

bumpversion automates semantic versioning. Most of my projects have a simple config file like the one below (.bumpversion.cfg):

[bumpversion]
commit = True
tag = True
current_version = 0.8.4
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)
serialize = 
	{major}.{minor}.{patch}
[bumpversion:file:VERSION]

Before a release, I just run a command like the one below:
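With the config above, a release is a single command; the part name is whichever of major/minor/patch you want to bump:

```shell
# Bumps 0.8.4 -> 0.8.5 in the VERSION file, commits, and creates a
# git tag (because commit = True and tag = True in .bumpversion.cfg).
bumpversion patch
```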

Using docker-machine

Docker Machine is a command-line tool created by the Docker team to manage Docker servers. It automatically provisions hosts, installs the Docker Engine on them, and configures the local Docker client to talk to them.

Once you install the docker-machine tool, you can use it like below:
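A minimal session looks like this (the machine name `dev` and the virtualbox driver are just examples; any supported driver works):

```shell
# Provision a host and install the Docker Engine on it.
docker-machine create --driver virtualbox dev

# Point the local docker client at the new machine.
eval "$(docker-machine env dev)"

# Plain docker commands now talk to the remote engine.
docker ps
```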