| Page
Scaling Docker with Kubernetes
Jul 2016
Liran Cohen, Cloud Platform & DevOps TL, LivePerson
| Page
Agenda
● Docker?
● Kubernetes introduction
● Kubernetes add-ons
● Basic troubleshooting
● Guestbook (GB) demo
● Q&A
demo files - https://github.com/sliranc/k8s_workshop
| Page
Docker?
Going back in time, most applications were deployed directly on physical hardware.
● Single userspace.
● Shared runtime env between applications.
● Hardware resources generally underutilized.
| Page
To overcome the limitations of a shared runtime environment, underutilized resources, and more, the IT industry adopted virtualization with hypervisors such as KVM, ESX, and others.
Docker?
| Page
Moving from VMs => “Virtual OS”:
● We removed the hypervisor layer to reduce complexity.
● The container approach is to package each application with all of its dependencies \ runtime environment.
● Different applications run on the same host, isolated using container technology.
Docker?
| Page
Virtual Machines vs. Containers
| Page
Docker: Application centric.
● A clean, safe, portable runtime environment for your app.
● No more worries about missing dependencies, packages
and other pain points during deployments.
● Run each app in its own isolated container (fs, cgroup, pid, etc.)
● Easy to pack into a box and super portable.
Build once... (finally) run anywhere*
Docker?
| Page
Docker’s architecture
● Docker uses a client-server architecture.
● Server: runs the Docker daemon.
● Client: communicates with the server via sockets or a RESTful API.
● Docker registry: public or private stores to and from which the server uploads and downloads images.
● The client can run on any host.
| Page
Docker? - DEMO

Dockerfile:
FROM nginx
MAINTAINER Liran Cohen <[email protected]>
COPY index.html /usr/share/nginx/html/index.html
CMD ["nginx", "-g", "daemon off;"]
1. Build docker image
docker build -t sliranc/hello_docker:latest .
2. Run docker container.
docker run -p 32769:80 -d --name hello_docker sliranc/hello_docker:latest
curl http://192.168.99.100:32769
3. Push to remote registry.
docker push sliranc/hello_docker
| Page
The name Kubernetes originates from Greek, meaning “helmsman” or “pilot” (Wikipedia).
A helmsman or helm is a person who steers a ship, sailboat, submarine...
Kubernetes - κυβερνήτης
| Page
More facts:
● Originated at Google (Borg).
● Supports multiple cloud and bare-metal environments.
● Supports multiple container runtimes (Docker, rkt).
● 100% open source, written in Go.
● k8s is an abbreviation derived by replacing the 8 letters “ubernete” with “8”.
Manage containerized applications, not machines.
Kubernetes ?
Kubernetes is a container cluster manager. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of machines".
| Page
Deploying a single-tier \ single-container app is “easy”.
Deploying a complex multi-tier app is more difficult:
● One or more containers.
● Replication of containers.
● Persistent storage.
Deploying lots of complex apps (microservices) can be a challenge.
More Info...
Why kubernetes ?
| Page
Control Plane
● Node Controller
● Replication Controller
● Endpoints Controller
● Service Account & Token Controllers
● And more...
Architecture
| Page
A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
Node
Diagram: each node runs an operating system plus the k8s components kubelet and kube-proxy.
| Page
Kubectl - get (Display one or many resources)
1. List kubernetes nodes.
kubectl get nodes
kubectl get nodes --context=kube-aws
DEMO
| Page
A pod is a small group of co-located containers, optionally sharing volumes between the containers.
Pods are the basic deployment unit in Kubernetes.
● Shared namespace
○ Shared IP address, localhost
○ Every pod gets a unique IP
● Managed lifecycle
○ Bound to a node, restarted in place
○ Cannot move between nodes
Pod(po)
| Page
Pod (po) - yaml manifest
apiVersion: v1
kind: Pod
metadata:
  labels:
    phase: prod
    role: frontend
    name: myfirstpod
  name: myfirstpod
spec:
  containers:
  - name: filepuller
    image: sliranc/filepuller:latest
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: static-vol
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: static-vol
  volumes:
  - name: static-vol
    emptyDir: {}
spec: the specification of the desired state of an object.
kind: the system object \ resource type. Examples: Pod, RC, Service, etc.
| Page
kubectl
1. Create pod myfirstpod by filename.
kubectl create -f myfirst-pod.yaml
kubectl create -f myfirst-pod.yaml --context kube-aws
2. List pods.
kubectl get pods
kubectl get pods --context=kube-aws
DEMO
| Page
Labels are key / value pairs - object metadata.
● Labels are attached to pods, services, RCs, and almost any other object in k8s.
● Can be used to organize or select subsets of objects.
● Queryable by selectors.
labels:
  app: rcweb
  phase: production
  role: frontend
http://kubernetes.io/docs/user-guide/labels/
Labels
| Page
Label selector - query objects using labels.
● Can identify a set of objects.
● Group a set of objects.
● Used in svc and rc to select the monitored \ watched objects.
replication controller selector example:
selector:
  app: rcweb
  phase: production
Selectors
| Page
Direct traffic to pods.
Defines a logical set of pods and a policy by which to access them.
● Services are an abstraction on top of the pods (LB).
● Use a selector to create the logical set of pods.
● Get a stable virtual IP and port.
● Cluster IPs are only available inside k8s.
services (svc)
Can define:
● What the 'internal' IP should be (ClusterIP).
● What the 'external' IP should be (NodePort, LoadBalancer).
● What port the service should listen on.
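The service type is selected with the `type` field in the spec. A minimal sketch of a NodePort service for the earlier pod (the name `myweb-nodeport` is illustrative, not from the demo files):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-nodeport   # illustrative name
spec:
  type: NodePort         # also exposes the service on a port of every node
  ports:
  - port: 80             # ClusterIP port inside the cluster
    targetPort: 80       # container port on the selected pods
  selector:
    name: myfirstpod
```

With `type: LoadBalancer` (on a supported cloud) an external load balancer is provisioned in addition to the NodePort.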
| Page
services
Diagram: the three service types - ClusterIP, NodePort, LoadBalancer.
| Page
services (svc) - yaml manifest
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
    protocol: TCP
  # just like the selector in the replication controller, but this
  # time it identifies the set of pods to load balance traffic to
  selector:
    name: myfirstpod

The selector matches the labels from the myfirstpod manifest:
labels:
  phase: prod
  role: frontend
  name: myfirstpod
| Page
kubectl
1. Create service myweb by filename:
kubectl create -f myfirst-svc.yaml
kubectl create -f myfirst-svc.yaml --context=kube-aws
2. NodePort:
kubectl describe -f myfirst-svc.yaml
http://ctor-knb001:<node_port>/
3. LoadBalancer:
kubectl describe -f myfirst-svc.yaml --context=kube-aws
Change static content in github - Edit index.html
DEMO
| Page
Service iptables mode:
Diagram: on Node X, the kubelet and kube-proxy run as node components; kube-proxy programs iptables rules, so traffic from Pod \ Client A to the cluster IP (VIP), or from Client B to the NodePort, is redirected by iptables directly to Pod 1, Pod 2, or Pod 3.
| Page
Replication controller == pod supervisor
Ensures that a specified number of pod "replicas" are running at any given time:
● Too many pods will trigger pod termination.
● Too few pods will trigger new pod creation.
● Main goal = Replicas: x current / x desired.
The replication controller will monitor all the pods matched by its label selector.
replication controller (rc) \ ReplicaSet (rs)
ReplicationController
replicas: 4
name: rcweb
selector:
  app: rcweb
  phase: production
| Page
replication controller (rc) - yaml manifest
apiVersion: v1
kind: ReplicationController
metadata:
  name: rcweb
  labels:
    name: rcweb
spec:
  replicas: 2
  # selector identifies the set of pods that this replication
  # controller is responsible for managing
  selector:
    app: rcweb
    phase: production
  # template defines the 'cookie cutter' (pod template) used for
  # creating new pods when necessary
  template:
    metadata:
      labels:
        app: rcweb
        role: frontend
        phase: production
        name: rcwebpod
    spec:
      containers:
      - name: staticweb
        image: sliranc/rcweb:latest
| Page
replication controller (rc) \ ReplicaSet
Diagram: on the master, the replication controller (replicas: 3, name: rcweb, selector: app=rcweb, phase=production) manages Pod 1, Pod 2, and Pod 3 (labels app=rcweb, phase=production) across Node 1 and Node 2; Pod X (labels app=my-app, phase=alpha) does not match the selector and is ignored.
| Page
kubectl
1. create all resources in a directory.
kubectl create -f rcweb
kubectl get pods -l app=rcweb
kubectl describe svc/rcweb
kubectl rolling-update rcweb --image=sliranc/rcweb:v2 --update-period="10s"
DEMO
| Page
http://12factor.net/
III. Config
Store config in the environment
| Page
ConfigMap & Secrets
● ConfigMap is a resource available in Kubernetes for managing application configuration. The goal is to decouple the app configuration from the image content, in order to keep the container portable and k8s agnostic.
● ConfigMaps are key \ value pairs of configuration data.
http://kubernetes.io/docs/user-guide/configmap/
kind: ConfigMap
apiVersion: v1
metadata:
  name: default-app
data:
  db-host: MYDB
apiVersion: v1
kind: Pod
metadata:
  name: test-default-app
spec:
  containers:
  - name: test-defaultapp
    image: sliranc/rcweb
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: default-app
          key: db-host
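Besides environment variables, ConfigMap entries can be mounted as files. A minimal sketch reusing the `default-app` ConfigMap (the pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-vol   # illustrative name
spec:
  containers:
  - name: app
    image: sliranc/rcweb
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config   # each key becomes a file, e.g. /etc/config/db-host
  volumes:
  - name: config-vol
    configMap:
      name: default-app
```

The file form lets the app pick up config without baking env vars into the pod spec.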
| Page
Storage
| Page
● emptyDir
● hostPath
● gcePersistentDisk
● awsElasticBlockStore
● nfs
● iscsi
● flocker
● glusterfs
Volume types● rbd
● gitRepo
● secret
● persistentVolumeClaim
● downwardAPI
● azureFileVolume
● vsphereVirtualDisk
| Page
emptyDir
emptyDir is a temporary directory that shares a pod's lifetime.
● Storage provider = local host.
● Files will be erased on pod deletion.
● Mounted by containers inside the pod.
emptyDir path on node:/var/lib/kubelet/pods/<id>/volumes/kubernetes.io~empty-dir/<volume_name>
volumes:
- name: static-vol
  emptyDir: {}
| Page
hostPath
hostPath is a bare host directory volume.
● Acts as a data volume in Docker.
● Containers can read/write files on the local host.
● There is no quota control.
volumes:
- name: static-vol
  hostPath:
    path: /target/path/on/host
| Page
PersistentVolume
Kubernetes provides an abstraction for volumes using PersistentVolume (PV).
● Admin - creates a pool of PVs (pv0001, pv0002), backed by e.g. nfs, awsElasticBlockStore, rbd, or gcePersistentDisk.
● User - claims a PV using a PersistentVolumeClaim (PVC) (pvc001, pvc002).
Diagram: Pod 1, Pod 2, and Pod 3 mount PVCs, which bind to PVs from the pool.
volumes:
- name: my-vol
  persistentVolumeClaim:
    claimName: "pvc001"
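For illustration, the PV \ PVC pair behind that claim might look like this (the NFS server address, export path, and sizes are hypothetical, not from the demo files):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001              # admin-created PV; name is illustrative
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:                      # hypothetical NFS backend
    server: 10.0.0.10
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001              # user-created claim referenced by the pod volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The claim binds to a PV from the pool that satisfies its access mode and size request.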
| Page
Multi-tenancy in Kubernetes.
● A single cluster should be able to satisfy the needs of multiple users or groups of users.
Each user community has its own:
1. Resources (pods, services, replication controllers, etc.)
2. Policies (who can or cannot perform actions in their community)
3. Constraints (this community is allowed this much quota, etc.)
Kubernetes starts with two initial namespaces:
● default - the default namespace for objects with no other namespace.
● kube-system - the namespace for objects created by the Kubernetes system.
Namespaces
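Namespaces are themselves objects and can be created from a manifest; a minimal sketch (the name `team-a` is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # illustrative namespace name
```

Objects are then created in it with `kubectl create -f ... --namespace=team-a`.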
| Page
ResourceQuota
apiVersion: v1
kind: ResourceQuota
metadata:
name: quota
spec:
hard:
cpu: "20"
memory: 1Gi
pods: "10"
replicationcontrollers: "20"
resourcequotas: "1"
services: "5"
Namespaces
| Page
Managing your app - Loggos video
| Page
Addons
| Page
DNS
● The DNS add-on allows your services to have a DNS name in addition to an IP address. This is helpful for simplified service discovery between applications.
*As of Kubernetes 1.3, DNS is a built-in service launched automatically by the addon manager.
| Page
Monitoring - Heapster
● Heapster enables monitoring and performance analysis in Kubernetes clusters. Heapster collects signals from the kubelets (cAdvisor) and the API server, processes them, and exports them...
Diagram: Heapster queries the master (API) and the nodes (kubelet), and pushes the data to a sink.
| Page
Kubernetes Dashboard
http://kubernetes.io/docs/user-guide/ui/
| Page
Centralized logging
Diagram: on Node X, a k8s logging-plugin pod tails the stdout \ stderr streams of the App1-App3 pods (JSON files under /var/lib/docker/containers/<id>).
| Page
Basic Troubleshooting
| Page
DEMO
kubectl - describe (Show details of a specific resource or group of resources)
1. describe a pod (po)
kubectl describe pod/myfirstpod
2. describe a service (svc) - check Endpoints
kubectl describe svc/myweb
3. describe a node
kubectl describe node/ctor-knb002
| Page
kubectl - logs (Print the logs for a container in a pod)
1. create logme pod from url.
kubectl create -f https://raw.githubusercontent.com/sliranc/k8s_workshop/master/cli_demo/logme-pod.yaml
2. Print the logs of a pod with one container
kubectl logs logme
3. stream the logs
kubectl logs -f logme
4. print the logs of a container filepuller in pod myfirstpod
kubectl logs myfirstpod -c filepuller
DEMO
| Page
kubectl - exec (Execute a command in a container)
1. Inject bash to a single pod container
kubectl exec -it logme bash
ps -auxwww
exit
2. Inject bash to a multi container pod
kubectl exec -it myfirstpod -c webserver bash
ps -auxwww
exit
DEMO
| Page
kubectl (Edit a resource from the default editor)
1. Edit ReplicationController
kubectl edit rc/rcweb
Restart pods
2. kubectl port-forward - forwards connections to a port on a pod
kubectl port-forward myfirstpod 8888:80
curl http://localhost:8888
DEMO
| Page
Guestbook - DEMO: deploy a multi-tier web app - Guestbook
Diagram: SVC - frontend load-balances the frontend pods; the frontend talks to SVC - RedisMaster (the RedisMaster pod) and SVC - RedisSlave (the RedisSlave pods).
https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook
| Page
Guestbook - DEMO
1. Create the replication controller for the frontend:
kubectl create -f gb-frontend-rc.yaml
2. Create the service for the frontend:
kubectl create -f gb-frontend-svc.yaml
3. Create the redis-master replication controller:
kubectl create -f redis-master-rc.yaml
4. Create the service for redis-master:
kubectl create -f redis-master-svc.yaml
5. Create the redis-slave replication controller:
kubectl create -f redis-slave-rc.yaml
6. Create the service for redis-slave:
kubectl create -f redis-slave-svc.yaml
7. Get the external URL for the svc and browse to the website:
kubectl describe svc frontend
8. Delete all pods:
kubectl delete ...
9. Scale your rc:
kubectl scale ...
10. Delete all resources (svc, rc).
DEMO
| Page
Pod health checks

              Liveness                 Readiness
On failure    Kill container           Stop sending traffic to pod
Check types   http, exec, tcpSocket    http, exec, tcpSocket

Declaration example (Pod.yaml):

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: 8080

readinessProbe:
  httpGet:
    path: /status
    port: 8080
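Both probes attach to a container entry in the pod spec; a minimal sketch (the pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod          # illustrative name
spec:
  containers:
  - name: app               # illustrative name
    image: sliranc/rcweb:latest
    ports:
    - containerPort: 8080
    livenessProbe:          # on failure: kill (and restart) the container
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:         # on failure: remove the pod from service endpoints
      httpGet:
        path: /status
        port: 8080
```

A failing readiness probe leaves the container running but takes the pod out of the matching service's endpoint list.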
| Page
Kubernetes - proxies
1. API proxy:
kubectl cluster-info
<kubernetes_master_address>/api/v1/proxy/namespaces/<namespace_name>/services/<service_name>[:port_name]
http://qtvr-kma01:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
2. kubectl proxy - proxies from a localhost address to the apiserver:
kubectl proxy
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
3. kube-proxy - proxies UDP and TCP, provides load balancing.
| Page
https://github.com/kubernetes/minikube
Minikube
Easily run Kubernetes locally.
The project goal is to build an easy-to-use, high-fidelity Kubernetes distribution that can be run locally on Mac, Linux, and Windows with a single command.
| Page
Prometheus
https://prometheus.io/
Inspired by Google's Borgmon monitoring system.
| Page
Kubernetes <=> OpenStack

K8s on OpenStack:
● Murano
● Magnum
● Configuration management tools (Puppet, Ansible, Salt, etc.)

OpenStack on Kubernetes:
● Stackanetes, kolla
● https://www.youtube.com/watch?v=DPYJxYulxO4

OpenStack cloud-provider driver for Kubernetes (BlockStorage, LBaaS), used by the kubelet.
| Page
Q&A
| Page
THANK YOU!
We are hiring
| Page
Backup Slides
| Page
CLI Basics - kubectl
| Page
kubectl - kubernetes client.
1. Looking for Help...
kubectl help
Use "kubectl help [command]" for more information about a command
2. Version - get the server and client version.
kubectl version
3. Cluster info
kubectl cluster-info
*Displays the URLs of the master and of services (via the api proxy) with the label kubernetes.io/cluster-service=true.
| Page
Kubectl - Available commands
| Page
kubectl - create
1. Create a resource by filename.
kubectl create -f myfirst-pod.yaml
2. create all resources in a directory.
kubectl create -f rcweb
3. create a resource from stdin.
cat myfirst-svc.yaml | kubectl create -f -
4. create a resource from url.
kubectl create -f https://raw.githubusercontent.com/sliranc/k8s_workshop/master/cli_demo/logme-pod.yaml
| Page
kubectl - get (Display one or many resources)
1. List all types of resource to get.
kubectl get
2. List all pods
kubectl get pods
3. List all pods in wide format
kubectl get pods -o wide
4. Display specific pod in yaml format
kubectl get pod/myfirstpod -o yaml
| Page
kubectl - get (Display one or many resources)
1. Query for pod with specific label
kubectl get pod -l name=rcwebpod
2. List all pods and services
kubectl get pods,svc
3. List all nodes (no)
kubectl get nodes
| Page
kubectl - describe (Show details of a specific resource or group of resources)
1. List all types of resource to describe.
kubectl describe
2. describe a pod (po)
kubectl describe pod/myfirstpod
3. describe a service (svc)
kubectl describe svc/myweb
browse to LoadBalancer Ingress of the service http://ingress...
4. describe a node
kubectl describe node/<use one from the get nodes output>
| Page
kubectl - logs (Print the logs for a container in a pod)
1. Print the logs for pod with one container
kubectl logs logme
2. stream the logs
kubectl logs -f logme
3. print the logs of a container filepuller in pod myfirstpod
kubectl logs myfirstpod -c filepuller
HW : check out the -p flag for logs.
| Page
kubectl - scale (Set a new size for a Replication Controller)
1. Scale rc rcweb to 3 replicas
kubectl get pods -l name=rcwebpod
kubectl scale --replicas=3 rc rcweb
kubectl get pods -l name=rcwebpod
2. Scale only if the current replica count is 3:
kubectl scale --current-replicas=3 --replicas=2 rc rcweb
kubectl get pods -l name=rcwebpod
| Page
kubectl - exec (Execute a command in a container)
1. Inject bash to a single pod container
kubectl exec -it logme bash
ps -auxwww
exit
2. Inject bash to a multi container pod
kubectl exec -it myfirstpod -c webserver bash
ps -auxwww
exit
| Page
kubectl - rolling-update (Perform a rolling update of the given ReplicationController)
1. Get the service ingress and browse to http://ingress……..
kubectl describe svc/rcweb
2. Upgrade your rc to a new image with a 10s delay between pods:
kubectl rolling-update rcweb --image=sliranc/rcweb:v2 --update-period="10s"
Refresh your browser.
kubectl get pods -l name=rcwebpod
| Page
kubectl - edit (Edit a resource from the default editor)
1. Edit the number of replicas
kubectl edit rc rcweb
kubectl get pods -l name=rcwebpod
| Page
kubectl - delete (Delete a resource by filename, stdin, resource and name, or by resources and label selector)
1. Delete resource by file
kubectl delete -f myfirst-pod.yaml
2. Delete resource by name
kubectl delete svc myweb rcweb
3. Delete all pods
kubectl delete pods --all
4. Delete rc by name
kubectl delete rc rcweb
| Page
Docker images
● Docker images are the basis of containers.
● Docker images are read-only templates we use for creating containers.
● Docker images are multilayered.
● Docker images are highly portable and can be shared.
| Page
External source such as a DB (headless service)
Diagram: Web pods behind SVC - Web call App pods behind SVC - App; SVC - DB is a headless service pointing at an external data store.
| Page