Day 37: 90DaysOfChallenge

Kubernetes Important Interview Questions

  • What is Kubernetes and why is it important?

Kubernetes, often abbreviated as K8s, is a sophisticated container orchestration platform that manages the deployment, scaling, and operation of application containers across clusters of hosts. It is important because it addresses the limitations of a single-host container platform like Docker: it provides multi-host cluster management, auto-scaling, auto-healing, and enterprise-level support.

  • What is the difference between Docker Swarm and Kubernetes?

Docker Swarm is a lightweight, easy-to-use orchestration tool with a limited feature set compared to Kubernetes. Kubernetes is more complex but far more powerful, providing features such as self-healing, persistent storage, service discovery, and configuration and secret management.

  • How does Kubernetes handle network communication between containers?

Kubernetes uses a flat Pod network to manage container communication. Each Pod is assigned its own IP address, and Pods communicate with one another directly via these IP addresses, while containers within the same Pod share a network namespace and can reach each other over localhost.
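
A small sketch (image names, ports, and the Pod name are illustrative): because both containers share the Pod's network namespace, the sidecar can reach the web server on localhost:80.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo     # hypothetical name for illustration
spec:
  containers:
    - name: web
      image: nginx:1.25         # serves on port 80 inside the Pod
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # the sidecar reaches the web container over localhost because
      # both containers share the Pod's network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```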

  • How does Kubernetes handle scaling of applications?

Kubernetes provides two main approaches for scaling applications:

  1. Horizontal Pod Autoscaler (HPA): This automatically adjusts the number of replicas (pods) in a deployment or replica set based on observed CPU utilization or custom metrics. It scales up when resource usage is high and scales down during periods of low usage, ensuring efficient resource utilization and maintaining performance.

  2. Vertical Pod Autoscaler (VPA): VPA adjusts the resource requests and limits of individual pods based on their actual resource usage. It allocates more resources to pods that need them and reduces resources for pods that don't fully utilize their allocated resources. This helps in optimizing resource usage at the pod level.

Together, these autoscaling mechanisms enable Kubernetes to efficiently manage application scalability, ensuring that resources are allocated optimally to handle varying workloads.
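
As a concrete illustration of the HPA described in point 1 (names and thresholds are illustrative), the following autoscaler keeps a Deployment named web between 2 and 10 replicas, targeting 70% average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # the Deployment to scale (assumed to exist)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU rises above ~70%
```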

  • What is a Kubernetes Deployment and how does it differ from a ReplicaSet?

Deployments offer higher-level features for managing the application's lifecycle, including updates and rollbacks, while ReplicaSets focus on ensuring a specified number of Pods are always running. Deployments often use ReplicaSets internally to achieve their desired state.
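
A minimal Deployment sketch (name and image are illustrative); applying it causes Kubernetes to create and manage a ReplicaSet behind the scenes that keeps three Pods running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3               # the underlying ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative image
          ports:
            - containerPort: 80
```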

  • Can you explain the concept of rolling updates in Kubernetes?

A rolling update allows a Deployment update to take place with zero downtime. It does this by incrementally replacing the current Pods with new ones. The new Pods are scheduled on Nodes with available resources, and Kubernetes waits for those new Pods to start before removing the old Pods.
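
The pace of a rolling update is tuned on the Deployment itself. A sketch (names and values are illustrative) that replaces Pods one at a time while always keeping the full replica count available:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 0      # never remove an old Pod before its replacement is ready
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.26   # changing the image is what triggers the rolling update
```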

  • How does Kubernetes handle network security and access control?

Kubernetes manages network security and access control through mechanisms such as Network Policies, Service Accounts, Role-Based Access Control (RBAC), authentication and authorization plugins, TLS encryption, and Pod Security admission (which replaced the deprecated Pod Security Policies). These provide fine-grained control over network traffic, user access, and resource permissions within the cluster, protecting against unauthorized access and malicious activity.
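
As one concrete piece of this, a NetworkPolicy restricts which Pods may talk to which. A sketch (labels and port are illustrative, and it only takes effect if the cluster's network plugin supports NetworkPolicies) that allows only Pods labeled app=frontend to reach Pods labeled app=backend on port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend              # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only frontend Pods may connect
      ports:
        - protocol: TCP
          port: 8080
```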

  • Can you give an example of how Kubernetes can be used to deploy a highly available application?

The steps to deploy a highly available application in Kubernetes are:

i) Containerize the Application: First, containerize your web application using Docker. Create a Dockerfile to define the application's environment and dependencies.

ii) Define Kubernetes Deployment: Write a Kubernetes Deployment manifest to describe how to run your application (a combined Deployment and Service sketch follows this list).

iii) Create a Kubernetes Service: Define a Kubernetes Service to expose your application internally or externally.

iv) Configure Persistent Storage (Optional): If your application requires persistent storage, configure a PersistentVolumeClaim and PersistentVolume to store data across Pod restarts.

v) High Availability Configuration: To ensure high availability, deploy your application across multiple nodes in the Kubernetes cluster.

vi) Scaling: Configure Horizontal Pod Autoscaler (HPA) to automatically scale the number of application replicas based on CPU or custom metrics to handle varying loads.
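
A minimal sketch combining steps ii), iii), and v) (names, image, and counts are illustrative): three replicas spread across nodes via pod anti-affinity, fronted by a LoadBalancer Service.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3                        # more than one replica for availability
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      # prefer spreading replicas across different nodes so a single node
      # failure does not take the whole application down
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: webapp
      containers:
        - name: webapp
          image: myrepo/webapp:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer                 # exposes the app through an external load balancer
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 8080
```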

  • What is a namespace in Kubernetes? Which namespace does a Pod go to if we don't specify one?

A namespace is a way to logically divide cluster resources, network policies, RBAC rules, and other objects among multiple users, teams, or projects. This helps in organizing and isolating resources, policies, and permissions. For example, if two projects share the same Kubernetes cluster, one can use namespace ns1 and the other ns2 without any overlap or permission conflicts.

If we don't specify a namespace, the Pod is created in the default namespace.
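
A small sketch (names are illustrative): creating a namespace and placing a Pod in it by setting metadata.namespace; omit that field and the Pod lands in default.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns1                   # hypothetical project namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: ns1              # omit this field and the Pod is created in "default"
spec:
  containers:
    - name: app
      image: nginx:1.25       # illustrative image
```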

  • How does Ingress help in Kubernetes?

Ingress helps manage external access to services within the cluster by providing HTTP and HTTPS routing based on defined rules. It enables load balancing, SSL/TLS termination, path-based routing, name-based virtual hosting, and seamless integration with the Kubernetes ecosystem, simplifying external access configuration for applications.
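
A minimal routing sketch (hostname, Service name, and class are illustrative; an Ingress controller such as ingress-nginx must be installed for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress               # hypothetical name
spec:
  ingressClassName: nginx         # assumes the nginx Ingress controller is installed
  rules:
    - host: app.example.com       # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp      # routes traffic to this Service on port 80
                port:
                  number: 80
```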

  • What are the different types of Services in Kubernetes?

Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients.

There are three main types of Services - ClusterIP, NodePort, and LoadBalancer (a minimal manifest follows this list).

  • ClusterIP - Exposes the Service on a cluster-internal IP, reachable only from within the cluster. This is the default type.

  • NodePort - Exposes the Service on each Node's IP at a static port (NodePort).

  • LoadBalancer - Exposes the Service externally using an external load balancer.
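
A minimal Service manifest (names and ports are illustrative); only the type field changes between the three variants:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp                # hypothetical name
spec:
  type: ClusterIP             # change to NodePort or LoadBalancer to alter exposure
  selector:
    app: webapp               # targets Pods carrying this label
  ports:
    - port: 80                # the Service's own port
      targetPort: 8080        # the container port it forwards traffic to
```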

  • Can you explain the concept of self-healing in Kubernetes and give examples of how it works?

Self-healing refers to the ability of the system to automatically detect and recover from failures without human intervention. This ensures that the desired state of the application is maintained despite unforeseen issues.

Kubernetes achieves self-healing through Controllers. For example, if a Node fails, a Node Controller detects it and reschedules affected Pods to healthy Nodes automatically. No manual intervention is required.
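
Self-healing also happens at the container level: the kubelet restarts a container whose liveness probe keeps failing. A minimal sketch (endpoint and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: healed-pod               # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25          # illustrative image
      livenessProbe:
        httpGet:
          path: /                # probe this HTTP endpoint
          port: 80
        initialDelaySeconds: 5   # wait before the first probe
        periodSeconds: 10        # probe every 10 seconds
        failureThreshold: 3      # restart the container after 3 consecutive failures
```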

  • How does Kubernetes handle storage management for containers?

Kubernetes uses Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage storage. A PV represents physical storage, and a PVC requests and uses that storage within a Pod. Kubernetes ensures that PVCs are bound to available PVs, providing data persistence.
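
A minimal sketch (names, size, and mount path are illustrative): a PVC requesting 1Gi of storage and a Pod mounting the claimed volume.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                 # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # amount of storage requested
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: nginx:1.25          # illustrative image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # where the claimed storage appears
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc      # binds the Pod to the claim above
```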

  • How does the NodePort service work?

The NodePort service in Kubernetes exposes a service on each node's IP address at a static port (the NodePort). It makes the service accessible from outside the Kubernetes cluster by mapping a port on the node's IP address to a port on the service. Kubernetes automatically assigns a port in a predefined range (30000-32767) for the NodePort.
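
A sketch of a NodePort Service (names and ports are illustrative); the explicit nodePort is optional, and Kubernetes picks one from the 30000-32767 range if it is omitted.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80                   # Service port inside the cluster
      targetPort: 8080           # container port the traffic is forwarded to
      nodePort: 30080            # reachable as <any-node-IP>:30080 from outside
```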

  • What is a multinode cluster and single-node cluster in Kubernetes?

Multi-node Cluster: A multi-node cluster consists of multiple worker nodes, each running Kubernetes components and hosting one or more pods. These clusters distribute the workload across multiple nodes, providing fault tolerance, high availability, and scalability.

Single-node Cluster: A single-node cluster consists of just one node that hosts everything - the control plane components (API server, scheduler, controller manager), the container runtime (such as containerd or Docker), and the workload Pods. It is typically used for local development and testing.

  • What is the difference between kubectl create and kubectl apply?

create: The kubectl create command is used to create new resources in the Kubernetes cluster. If the resource already exists, the command throws an error, preventing you from accidentally creating duplicates. For example, you can create a deployment using kubectl create -f deployment.yaml.

apply: The kubectl apply command creates or updates resources based on a YAML or JSON manifest file. If the resource already exists, apply updates it with the new configuration, effectively performing a patch or, for Deployments, a rolling update.
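
A small sketch of the difference, using a ConfigMap for brevity (file and resource names are illustrative):

```yaml
# config.yaml (hypothetical file)
# kubectl create -f config.yaml   -> creates the ConfigMap; errors if it already exists
# kubectl apply  -f config.yaml   -> creates it the first time, updates it on later runs
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  LOG_LEVEL: "info"             # change this value and re-run "kubectl apply" to update
```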

Thanks for reading!

Happy Learning!