Kubernetes
Frequently used commands and snippets for Kubernetes
Recipes
Installing kubectl
(dependencies):
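A sketch of a typical Linux install, following the upstream kubectl docs (the apt packages below are the dependencies listed in the older apt-based instructions; adjust for your distribution):

```shell
# Dependencies (Debian/Ubuntu)
sudo apt-get update && sudo apt-get install -y apt-transport-https curl

# Download the latest stable kubectl binary and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```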
Check version
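For example:

```shell
# Print the client version only (no cluster connection needed)
kubectl version --client
```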
Installing minikube
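Per the upstream minikube docs, a Linux install looks like:

```shell
# Download and install the latest minikube release for Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```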
Installing KVM
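A sketch for Ubuntu (package names vary between releases; see the KVM installation reference below):

```shell
# Install KVM and libvirt
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
# Allow the current user to manage VMs without sudo (log out and back in afterwards)
sudo adduser "$USER" libvirt
```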
Installing helm
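Using the official Helm 3 installer script:

```shell
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```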
Start cluster
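For example (the kvm2 driver matches the KVM install above; drop --driver to let minikube pick one):

```shell
minikube start --driver=kvm2
# Confirm the cluster components are running
minikube status
```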
Disconnect from remote cluster
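One way to do this is to switch the kubectl context away from the remote cluster:

```shell
# List known contexts, then point kubectl back at minikube
kubectl config get-contexts
kubectl config use-context minikube
# Or clear the current context entirely
kubectl config unset current-context
```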
Delete namespace
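For example (this also deletes every resource in the namespace):

```shell
kubectl delete namespace <name-of-the-namespace>
```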
Create namespace
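For example:

```shell
kubectl create namespace <name-of-the-namespace>
```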
To run a service from minikube
minikube service <name-of-the-service> -n <name-of-the-namespace>
For example, for JupyterHub: minikube service proxy-public -n jhub
To inspect running cluster
minikube dashboard
Applying config changes
Change config.yaml, then execute helm upgrade as follows and verify the hub and proxy pods:
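A sketch for the JupyterHub case above, assuming the release name (jhub) and chart (jupyterhub/jupyterhub) used by the Zero to JupyterHub guide:

```shell
helm upgrade --cleanup-on-fail jhub jupyterhub/jupyterhub \
  --namespace jhub --values config.yaml
# Verify the hub and proxy pods
kubectl get pod -n jhub
```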
Set new image for a given deployment
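For example (deployment, container, and image names are placeholders):

```shell
kubectl set image deployment/<deployment-name> <container-name>=<image>:<tag>
```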
Get revisions
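For example:

```shell
kubectl rollout history deployment/<deployment-name>
```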
Rollback to previous revision
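For example:

```shell
# Go back one revision, or to a specific one
kubectl rollout undo deployment/<deployment-name>
kubectl rollout undo deployment/<deployment-name> --to-revision=2
```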
Inspect revisions
A rollback re-applies the chosen revision under a new revision number, so rolled-back revisions show up in the history as their new number instead of their original one.
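For example:

```shell
# Show the full Pod template recorded for a single revision
kubectl rollout history deployment/<deployment-name> --revision=3
```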
Access a pod shell
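For example:

```shell
# Use /bin/sh instead if the image has no bash
kubectl exec -it <pod-name> -- /bin/bash
```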
Restart a deployment
The restarted Pods come up with new names, but the Service keeps routing traffic to them because it matches Pods by labels; that's what the selector field in the service manifest is for.
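For example:

```shell
kubectl rollout restart deployment/<deployment-name>
```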
Scale deployment
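For example:

```shell
kubectl scale deployment/<deployment-name> --replicas=3
```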
Set namespace
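For example:

```shell
# Make the namespace the default for subsequent kubectl commands
kubectl config set-context --current --namespace=<name-of-the-namespace>
```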
Services
Connection
Environment Variables
As soon as the Pod starts on any worker node, the kubelet daemon running on that node adds a set of environment variables to the Pod for all active Services. For example, if we have an active Service called redis-master, which exposes port 6379, and its ClusterIP is 172.17.0.6, then on a newly created Pod we can see the following environment variables:
REDIS_MASTER_SERVICE_HOST=172.17.0.6
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://172.17.0.6:6379
REDIS_MASTER_PORT_6379_TCP=tcp://172.17.0.6:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=172.17.0.6
With this solution, we need to be careful while ordering our Services, as the Pods will not have the environment variables set for Services which are created after the Pods are created.
DNS
Kubernetes has an add-on for DNS which creates a DNS record for each Service in the format my-svc.my-namespace.svc.cluster.local. Services within the same Namespace find other Services just by their names. If we add a Service redis-master in the *my-ns* Namespace, all Pods in the same my-ns Namespace can look up the Service just by its name, *redis-master*. Pods from other Namespaces, such as test-ns, look up the same Service by adding the respective Namespace as a suffix, such as redis-master.my-ns, or by providing the FQDN of the Service, redis-master.my-ns.svc.cluster.local. This is the most common and highly recommended solution. For example, an internal DNS can map the Services frontend-svc and *db-svc* to the 172.17.0.4 and 172.17.0.5 IP addresses respectively.
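One way to check these DNS records from inside the cluster is a throwaway pod (busybox:1.28 is pinned because nslookup is known to work in that image; redis-master.my-ns matches the example above):

```shell
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup redis-master.my-ns
```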
ServiceType
ClusterIP is the default *ServiceType*. A Service receives a Virtual IP address, known as its ClusterIP. This Virtual IP address is used for communicating with the Service and is accessible only from within the cluster.
With the NodePort ServiceType, in addition to a ClusterIP, a high port, dynamically picked from the default range 30000-32767, is mapped to the respective Service on all the worker nodes. For example, if the mapped NodePort is 32233 for the service frontend-svc, then connecting to any worker node on port *32233* redirects all the traffic to the assigned ClusterIP - *172.17.0.4*. If we prefer a specific high port instead, we can assign it from the default range when creating the Service.
The *NodePort* ServiceType is useful when we want to make our Services accessible from the external world. The end-user connects to any worker node on the specified high-port, which proxies the request internally to the ClusterIP of the Service, then the request is forwarded to the applications running inside the cluster. Let's not forget that the Service is load balancing such requests, and only forwards the request to one of the Pods running the desired application. To manage access to multiple application Services from the external world, administrators can configure a reverse proxy - an ingress, and define rules that target specific Services within the cluster.
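A minimal NodePort manifest matching the frontend-svc example above (the selector label and target port are assumptions):

```shell
# Write a hypothetical NodePort Service manifest, then apply it
cat <<'EOF' > /tmp/frontend-svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5000
      nodePort: 32233   # optional; omit to let Kubernetes pick from 30000-32767
EOF
# Apply with: kubectl apply -f /tmp/frontend-svc-nodeport.yaml
```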
LoadBalancer
With the LoadBalancer ServiceType:
NodePort and ClusterIP Services are automatically created, and the external load balancer will route to them.
The Service is exposed at a static port on each worker node.
The Service is exposed externally using the underlying cloud provider's load balancer feature.
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and Kubernetes has the respective cloud provider support, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and remains in the Pending state, but the Service will still work as a typical NodePort Service.
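A minimal LoadBalancer manifest, reusing the same assumed frontend labels:

```shell
# Write a hypothetical LoadBalancer Service manifest, then apply it
cat <<'EOF' > /tmp/frontend-svc-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5000
EOF
# Apply with: kubectl apply -f /tmp/frontend-svc-lb.yaml
# EXTERNAL-IP shows <pending> until the cloud provider provisions the LB:
#   kubectl get svc frontend-svc
```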
Sample manifests
configMap
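A minimal ConfigMap sketch (the name and keys are placeholders):

```shell
cat <<'EOF' > /tmp/app-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: production
EOF
# Apply with: kubectl apply -f /tmp/app-configmap.yaml
```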
secrets
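A minimal Secret sketch (stringData lets Kubernetes base64-encode the values for you; the name and value are placeholders):

```shell
cat <<'EOF' > /tmp/app-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  password: change-me
EOF
# Apply with: kubectl apply -f /tmp/app-secret.yaml
```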
deployment
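A minimal Deployment sketch (name, labels, and image are placeholders; the matchLabels selector must match the Pod template labels):

```shell
cat <<'EOF' > /tmp/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.22
          ports:
            - containerPort: 80
EOF
# Apply with: kubectl apply -f /tmp/frontend-deployment.yaml
```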
service
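A minimal ClusterIP Service sketch, selecting the Pods of the deployment above by label:

```shell
cat <<'EOF' > /tmp/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
EOF
# Apply with: kubectl apply -f /tmp/frontend-service.yaml
```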
Storage
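A minimal PersistentVolumeClaim sketch (name and size are placeholders; the cluster's default StorageClass is assumed):

```shell
cat <<'EOF' > /tmp/data-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# Apply with: kubectl apply -f /tmp/data-pvc.yaml
```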
Resources
https://keda.sh/
https://www.bodyworkml.com/
https://github.com/CrunchyData/postgres-operator
References
https://kubernetes.io/docs/tasks/tools/install-minikube/
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
https://minikube.sigs.k8s.io/docs/start/
https://help.ubuntu.com/community/KVM/Installation#Installation
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
https://levelup.gitconnected.com/deploy-your-first-flask-mongodb-app-on-kubernetes-8f5a33fa43b4
https://github.com/testdrivenio/flask-vue-kubernetes !! Useful
https://kubernetes.io/docs/concepts/services-networking/network-policies/
https://artifacthub.io/
https://github.com/helm/charts
https://kubernetes.io/docs/concepts/workloads/controllers/job/
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/