
Ok great! Now that we have established that all requests from all clients, in some form or the other, hit the ClusterIP, we just need to understand how the request finally gets routed to the appropriate pod. In the above diagram, which one of those three green dots gets the request?

Perhaps another burning question one might ask is “Why do you even need this? Why don’t we simply go round robin from the ClusterIP (from iptables, for example) to the pods directly?” Kubernetes justifies it here.

Every node in the Kubernetes cluster runs a container called kube-proxy. kube-proxy is a…
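To make that concrete, here is a simplified, hypothetical sketch of the NAT rules kube-proxy (in iptables mode) ends up programming for a ClusterIP backed by two pods; the chain names, IPs, and ports below are invented for illustration:

# hypothetical ClusterIP 10.96.0.10:80 with two pod endpoints
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE
# pick the first endpoint roughly half the time, otherwise fall through to the second
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-POD2
# each endpoint chain DNATs the packet to one pod IP and target port
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.5:8080
-A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.7:8080

In other words, one of those green dots is picked by a probabilistic rule on the node itself, not by the Service object.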


Let’s step back a little.

When you are working with Docker Swarm, or simply using docker-compose, you notice how beautiful the world is. When you do a stack deploy or docker-compose up, the DNS names are, by default, the names of your containers as mentioned in your yml files. 😃

http://nginx:8080

When you are learning Kubernetes after having spent ample time with Swarm and Compose, it can be quite confusing to know what to expect from it, especially when you are a developer and need to “create services” in it. When you created that application listening to requests on port 8080 (for…
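As a rough sketch of the Kubernetes counterpart (the nginx name and the 8080 port are placeholders, not taken from the original post), the thing that gives your pods a compose-like DNS name is a Service object:

# minimal Service sketch; "app: nginx" is a hypothetical pod label
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 8080        # the port other pods use: http://nginx:8080
      targetPort: 8080  # the port the container actually listens on

With that in place, other pods in the same namespace can reach your app at http://nginx:8080, much like the compose example above.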


How your requests are routed to the pod is the question we are going to solve now. Perhaps, instead of “how”, we should first ask “who”. Who is actually trying to connect to your services? Answering that would really help us with the one and only question: what should my service type be? Who is it going to serve?

ClusterIP

If your clients are always going to be internal, i.e. pods inside the Kubernetes cluster, then you need to create a Service of type ClusterIP. As the official doc states, it exposes the Service on a cluster-internal…
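A minimal sketch, assuming you already have a Deployment named my-apache listening on port 8080 (both names are placeholders): ClusterIP is the default type, so exposing the Deployment without a --type flag gives you exactly this kind of cluster-internal Service.

# ClusterIP is the default Service type, so no --type flag is needed
kubectl expose deployment my-apache --port 8080 --target-port 8080
# the Service gets a cluster-internal virtual IP that only pods can reach
kubectl get service my-apache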



The above is a basic representation of how scaling works in Kubernetes. When a person executes the scale command, they are essentially updating the Deployment spec. The deployment controller then updates its ReplicaSet spec, and the ReplicaSet changes its pod count to 2 (it creates the new pod). The control plane then decides which nodes the pods (both the new one and the existing one) need to run on.
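As a concrete example (the deployment name is a placeholder), the first command below is the “person executes the scale command” step, and the two get commands let you watch the ReplicaSet and the Pods follow:

# ask the Deployment for 2 replicas; this just updates the Deployment spec
kubectl scale deployment my-apache --replicas 2
# the ReplicaSet's desired count follows the Deployment
kubectl get replicaset
# and the new pod is scheduled onto one of the nodes
kubectl get pods -o wide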


I am currently learning Kubernetes, and as I was doing my assignments and learning how to create a deployment, I stumbled upon something awkward. When you intend to create a deployment using this command:

kubectl run my-apache --image httpd

the expectation is that it creates something called a Deployment, then a ReplicaSet, and finally the Pod running your Apache server. But you know what happened? It messed my flow up! It DID NOT create my Deployment and ReplicaSet!

Photo by Arwan Sutanto on Unsplash

When I ran the command to list them, they weren’t there.
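If you want to reproduce the surprise (same names as the command above), listing all three kinds side by side makes it obvious; on Kubernetes v1.18 or later you only get a bare Pod back:

kubectl run my-apache --image httpd
# on v1.18+ this shows the Pod, but no Deployment and no ReplicaSet
kubectl get deployments,replicasets,pods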


As we continue to investigate the workings of Kubernetes, one cannot get away from the need to understand how a Pod is created. In this tutorial, we will go through the set of things that need to happen before a Pod is created. Let’s see what happens when we create the pod using a CLI command like the ones below:

# before v1.18
kubectl run my-nginx --image nginx
# on & after v1.18
kubectl create deployment nginx --image nginx

kubectl run

When we run the above command using run to create the pod, there are several steps that happen. It uses…
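One way to observe those steps from the outside (a sketch; the exact events and pod name depend on your cluster and kubectl version) is to look at the cluster events right after running the command:

# scheduling, image pulling and container start each show up as events
kubectl get events --sort-by=.metadata.creationTimestamp
# or look at the event section of the pod itself (name assumes the run command above)
kubectl describe pod my-nginx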


Kubernetes (popularly known as K8s) is a portable, extensible, open-source platform for managing containerized workloads and services. In other words, it handles how to load balance requests/tasks across your services and how to manage those very services, whether that is deploying them with zero downtime or starting a service back up if it went down (self-healing). It is important to understand, at a high level, how it all happens and what the basic components involved in these actions are. In this blog we will look at what Kubernetes is made up of and the roles those components play.

What Kubernetes is not

Unlike traditional PaaS…


Photo by Stefan Kunze on Unsplash

I had autocompletion scripts written in bash. After my Mac upgrade, they no longer worked in zsh, and I had been trying to understand the best way to migrate them. I spent a couple of days in the documentation, understanding bits and pieces so that I could migrate my existing scripts. During the process I wasn’t able to find many examples, so I ended up studying the zsh completion system documentation with some trial and error. For starters, I found the piece written by Mads Hartmann quite helpful to get started. …
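As a minimal sketch of what a migrated completion can look like (the mycli command and its subcommands are hypothetical, not one of my actual scripts), a zsh completion lives in a file named _mycli somewhere on your $fpath and declares which command it completes on its first line:

#compdef mycli
# hypothetical completion for a made-up "mycli" command: offer two subcommands
_mycli() {
  local -a subcmds
  subcmds=(
    'deploy:deploy the application'
    'status:show the current status'
  )
  _describe 'mycli subcommand' subcmds
}
_mycli "$@"

After reloading the completion system (e.g. a fresh shell that runs compinit), typing mycli and pressing Tab offers deploy and status along with their descriptions.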



Today I came across a new Scala library for generating Avro schemas called avro4s. One of the cool things I found was its ability to handle recursive schemas. The library brings support for using our lovely case classes to generate Avro schemas. Before I found avro4s I was using another popular Scala plugin called avrohugger. Avrohugger works great if you would like to generate your models as part of your compile task in sbt; however, it needs an .avsc file in your classpath to generate the semi-Scala-like code in your target folder. Anyways, I like Avro4s…
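A minimal sketch of what drew me in (the Node case class is a made-up example of a recursive type): avro4s derives the Avro schema straight from the case class, even when the class refers to itself.

import com.sksamuel.avro4s.AvroSchema

// a made-up recursive type: a singly linked list node
case class Node(value: Int, next: Option[Node])

object SchemaExample extends App {
  // derives an org.apache.avro.Schema directly from the case class
  val schema = AvroSchema[Node]
  println(schema.toString(true))
}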



How to remove a user’s permission to produce to a topic

kafka-acls --authorizer-properties zookeeper.connect=yourZookeeper:2181 --remove --allow-principal User:yourUser --producer --topic yourTopic

How to remove a user’s permission to consume from a topic

kafka-acls --authorizer-properties zookeeper.connect=yourZookeeper:2181 --remove --allow-principal User:yourUser --consumer --topic yourTopic --group yourGroup
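As a sanity check after removing either ACL (same placeholder host, user, and topic names as above), you can list what is left on the topic:

kafka-acls --authorizer-properties zookeeper.connect=yourZookeeper:2181 --list --topic yourTopic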

Karan Gupta

Just a curious developer, a proud uncle, a weightlifter, & your neighborhood yogi.
