# kubectl describe <object type> <object name>
> kubectl describe pod client-pod
...........
Name: client-pod
Namespace: default
Node: minikube/10.0.2.15
Start Time: Sat, 02 Feb 2019 12:05:16 +0900
Labels: component=web
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"component":"web"},"name":"client-pod","namespace":"default"},"spec":{"container...
Status: Running
IP: 172.17.0.16
Containers:
client:
Container ID: docker://465ecbe522f537a36c26c021d88c1efb21782daf1e6fffd1e93be3469701a4d5
Image: bear2u/multi-worker
Image ID: docker-pullable://bear2u/multi-worker@sha256:6559ad68144e14b8f6f3054ab0f19056853ea07a7c4ead068d9140bd0a33b926
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 09 Feb 2019 10:24:04 +0900
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 09 Feb 2019 10:06:12 +0900
Finished: Sat, 09 Feb 2019 10:24:01 +0900
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-28mbg (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-28mbg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-28mbg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6d default-scheduler Successfully assigned client-pod to minikube
Normal SuccessfulMountVolume 6d kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-28mbg"
Normal Pulling 6d kubelet, minikube pulling image "bear2u/multi-client"
Normal Pulled 6d kubelet, minikube Successfully pulled image "bear2u/multi-client"
Normal Created 6d kubelet, minikube Created container
Normal Started 6d kubelet, minikube Started container
Normal SuccessfulMountVolume 12h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-28mbg"
Normal SandboxChanged 12h kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 12h kubelet, minikube pulling image "bear2u/multi-client"
Normal Pulled 12h kubelet, minikube Successfully pulled image "bear2u/multi-client"
Normal Created 12h kubelet, minikube Created container
Normal Started 12h kubelet, minikube Started container
Normal SuccessfulMountVolume 25m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-28mbg"
Normal SandboxChanged 25m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 25m kubelet, minikube pulling image "bear2u/multi-client"
Normal Pulled 25m kubelet, minikube Successfully pulled image "bear2u/multi-client"
Normal Killing 7m kubelet, minikube Killing container with id docker://client:Container spec hash changed (3635549375 vs 3145631940).. Container will be killed and recreated.
Normal Pulling 7m kubelet, minikube pulling image "bear2u/multi-worker"
Normal Created 7m (x2 over 25m) kubelet, minikube Created container
Normal Pulled 7m kubelet, minikube Successfully pulled image "bear2u/multi-worker"
Normal Started 7m (x2 over 25m) kubelet, minikube Started container
Update error
Let's see what happens if we change containerPort in the pod config file.
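For reference, the changed spec would look roughly like this. This is a sketch reconstructed from the diff in the error output below, not the original file:

```yaml
# client-pod.yaml (sketch): containerPort changed from 3000 to 9999
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: bear2u/multi-worker
      ports:
        - containerPort: 9999
```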
> kubectl apply -f client-pod.yaml
.......
the Pod "client-pod" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
{"Volumes":[{"Name":"default-token-28mbg","HostPath":null,"EmptyDir":null,"GCEPersistentDisk":null,"AWSElasticBlockStore":null,"GitRepo":null,"Secret":{"SecretName":"default-token-28mbg","Items":null,"DefaultMode":420,"Optional":null},"NFS":null,"ISCSI":null,"Glusterfs":null,"PersistentVolumeClaim":null,"RBD":null,"Quobyte":null,"FlexVolume":null,"Cinder":null,"CephFS":null,"Flocker":null,"DownwardAPI":null,"FC":null,"AzureFile":null,"ConfigMap":null,"VsphereVolume":null,"AzureDisk":null,"PhotonPersistentDisk":null,"Projected":null,"PortworxVolume":null,"ScaleIO":null,"StorageOS":null}],"InitContainers":null,"Containers":[{"Name":"client","Image":"bear2u/multi-worker","Command":null,"Args":null,"WorkingDir":"","Ports":[{"Name":"","HostPort":0,"ContainerPort":
A: 9999,"Protocol":"TCP","HostIP":""}],"EnvFrom":null,"Env":null,"Resources":{"Limits":null,"Requests":null},"VolumeMounts":[{"Name":"default-token-28mbg","ReadOnly":true,"MountPath":"/var/run/secrets/kubernetes.io/serviceaccount","SubPath":"","MountPropagation":null}],"VolumeDevices":null,"LivenessProbe":null,"ReadinessProbe":null,"Lifecycle":null,"TerminationMessagePath":"/dev/termination-log","TerminationMessagePolicy":"File","ImagePullPolicy":"Always","SecurityContext":null,"Stdin":false,"StdinOnce":false,"TTY":false}],"RestartPolicy":"Always","TerminationGracePeriodSeconds":30,"ActiveDeadlineSeconds":null,"DNSPolicy":"ClusterFirst","NodeSelector":null,"ServiceAccountName":"default","AutomountServiceAccountToken":null,"NodeName":"minikube","SecurityContext":{"HostNetwork":false,"HostPID":false,"HostIPC":false,"ShareProcessNamespace":null,"SELinuxOptions":null,"RunAsUser":null,"RunAsGroup":null,"RunAsNonRoot":null,"SupplementalGroups":null,"FSGroup":null},"ImagePullSecrets":null,"Hostname":"","Subdomain":"","Affinity":null,"SchedulerName":"default-scheduler","Tolerations":[{"Key":"node.kubernetes.io/not-ready","Operator":"Exists","Value":"","Effect":"NoExecute","TolerationSeconds":300},{"Key":"node.kubernetes.io/unreachable","Operator":"Exists","Value":"","Effect":"NoExecute","TolerationSeconds":300}],"HostAliases":null,"PriorityClassName":"","Priority":null,"DNSConfig":null}
B: 3000,"Protocol":"TCP","HostIP":""}],"EnvFrom":null,"Env":null,"Resources":{"Limits":null,"Requests":null},"VolumeMounts":[{"Name":"default-token-28mbg","ReadOnly":true,"MountPath":"/var/run/secrets/kubernetes.io/serviceaccount","SubPath":"","MountPropagation":null}],"VolumeDevices":null,"LivenessProbe":null,"ReadinessProbe":null,"Lifecycle":null,"TerminationMessagePath":"/dev/termination-log","TerminationMessagePolicy":"File","ImagePullPolicy":"Always","SecurityContext":null,"Stdin":false,"StdinOnce":false,"TTY":false}],"RestartPolicy":"Always","TerminationGracePeriodSeconds":30,"ActiveDeadlineSeconds":null,"DNSPolicy":"ClusterFirst","NodeSelector":null,"ServiceAccountName":"default","AutomountServiceAccountToken":null,"NodeName":"minikube","SecurityContext":{"HostNetwork":false,"HostPID":false,"HostIPC":false,"ShareProcessNamespace":null,"SELinuxOptions":null,"RunAsUser":null,"RunAsGroup":null,"RunAsNonRoot":null,"SupplementalGroups":null,"FSGroup":null},"ImagePullSecrets":null,"Hostname":"","Subdomain":"","Affinity":null,"SchedulerName":"default-scheduler","Tolerations":[{"Key":"node.kubernetes.io/not-ready","Operator":"Exists","Value":"","Effect":"NoExecute","TolerationSeconds":300},{"Key":"node.kubernetes.io/unreachable","Operator":"Exists","Value":"","Effect":"NoExecute","TolerationSeconds":300}],"HostAliases":null,"PriorityClassName":"","Priority":null,"DNSConfig":null}
Keep in mind that only the image can be changed.
Deployment
With a plain Pod, nothing other than the image can be changed. To get around this, let's add one more concept: the Deployment.
A Deployment holds the Pod configuration as a template.
When something in the Pod template changes, such as the port, the Deployment kills the existing Pod and brings up a new one.
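The exact client-deployment.yaml used below isn't shown here, but a minimal sketch of such a Deployment, wrapping the Pod spec above in a template, could look like this:

```yaml
# client-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: client
          image: bear2u/multi-client
          ports:
            - containerPort: 3000
```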
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
client-deployment 1 1 1 1 56d
Listing the pods, you can see that one was created automatically by the Deployment.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
client-deployment-848b54d879-ch26z 1/1 Running 5 56d
Now let's change the image and watch the Deployment replace the Pod with a new one, as shown by the two listings below.
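The exact command used isn't shown; one way to do it is to edit the image field in client-deployment.yaml and re-apply it, or to set the image directly, assuming the container in the template is named client:

```
$ kubectl apply -f client-deployment.yaml
# or, equivalently for just the image:
$ kubectl set image deployment/client-deployment client=bear2u/multi-worker
```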
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
client-deployment-848b54d879-ch26z 1/1 Running 5 56d
client-deployment-89bb69575-54pnn 0/1 ContainerCreating 0 5s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
client-deployment-89bb69575-54pnn 1/1 Running 0 43s
A command that shows more detail:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
client-deployment-89bb69575-54pnn 1/1 Running 0 7m 172.17.0.1 minikube
$ kubectl describe pods client-deployment
Watch how the numbers change when you modify the replica count in deployment.yaml.
...
replicas: 5
....
$ kubectl apply -f client-deployment.yaml
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
client-deployment 5 3 3 1 56d
# Pull the alpine image that includes Node
FROM node:10.15-alpine
# Set the working directory
WORKDIR "/app"
# Copy only package.json first so npm install can be cached
COPY ./package.json ./
RUN npm install
# Copy the source
COPY . .
# Run the client source
CMD ["npm","run","start"]
data <Buffer ed 85 8c ec 8a a4 ed 8a> 8
data <Buffer b8 31 ed 85 8c ec 8a a4> 8
data <Buffer ed 8a b8 32 ed 85 8c ec> 8
data <Buffer 8a a4 ed 8a b8 31 ed 85> 8
data <Buffer 8c ec 8a a4 ed 8a b8 32> 8
data <Buffer ed 85 8c ec 8a a4 ed 8a> 8
data <Buffer b8 31 ed 85 8c ec 8a a4> 8
data <Buffer ed 8a b8 32 ed 85 8c ec> 8
data <Buffer 8a a4 ed 8a b8 31 ed 85> 8
data <Buffer 8c ec 8a a4 ed 8a b8 32> 8
data <Buffer ed 85 8c ec 8a a4 ed 8a> 8
data <Buffer b8 31 ed 85 8c ec 8a a4> 8
data <Buffer ed 8a b8 32 ed 85 8c ec> 8
data <Buffer 8a a4 ed 8a b8 31 ed 85> 8
data <Buffer 8c ec 8a a4 ed 8a b8 32> 8
data <Buffer ed 85 8c ec 8a a4 ed 8a> 8
data <Buffer b8 31 ed 85 8c ec 8a a4> 8
data <Buffer ed 8a b8 32 ed 85 8c ec> 8
data <Buffer 8a a4 ed 8a b8 31 ed 85> 8
data <Buffer 8c ec 8a a4 ed 8a b8 32> 8
data <Buffer ed 85 8c ec 8a a4 ed 8a> 8
data <Buffer b8 31 ed 85 8c ec 8a a4> 8
data <Buffer ed 8a b8 32 ed 85 8c ec> 8
data <Buffer 8a a4 ed 8a b8 31 ed 85> 8
data <Buffer 8c ec 8a a4 ed 8a b8 32> 8
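No code is shown for the output above, but it looks like 'data' events from a readable stream reading the UTF-8 string '테스트1테스트2' 8 bytes at a time. A minimal sketch that would produce similar chunks (the file name and its contents are assumptions):

```js
const fs = require('fs');

// test.txt is assumed to contain '테스트1테스트2' repeated several times
const stream = fs.createReadStream('./test.txt', { highWaterMark: 8 });

stream.on('data', chunk => {
  // each chunk is a Buffer of at most 8 bytes
  console.log('data', chunk, chunk.length);
});
```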
const jumpData = tf.tensor([
  [70, 70, 70],
  [80, 70, 90],
  [70, 70, 70]
]);
const playerData = tf.tensor([
  [1, 160],
  [2, 160],
  [3, 160],
  [4, 160],
]);
jumpData.concat(playerData)
>> Error
Error: Error in concat2D: Shape of tensors[1] (4,2) does not match the shape of the rest (3,3) along the non-concatenated axis 1.
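For comparison, here is a minimal sketch (with made-up values and a hypothetical moreJumpData tensor) of a concat that does work: both tensors have the same number of columns, so concatenating along axis 0 is valid.

```js
const tf = require('@tensorflow/tfjs');

const jumpData = tf.tensor([
  [70, 70, 70],
  [80, 70, 90],
  [70, 70, 70]
]);

// Same number of columns (3) as jumpData, so an axis-0 concat is allowed
const moreJumpData = tf.tensor([
  [60, 65, 70]
]);

jumpData.concat(moreJumpData).print(); // resulting shape: [4, 3]
```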
// node myFile.js
const pendingTimers = [];
const pendingOSTasks = [];
const pendingOperations = [];

// New timers, tasks, operations are recorded from myFile running
myFile.runContents();

function shouldContinue() {
  // Check one: Any pending setTimeout, setInterval, setImmediate?
  // Check two: Any pending OS tasks? (Like server listening to port)
  // Check three: Any pending long running operations? (Like fs module)
  return pendingTimers.length || pendingOSTasks.length || pendingOperations.length;
}

// Entire body executes in one 'tick'
while (shouldContinue()) {
  // 1) Node looks at pendingTimers and sees if any functions
  //    are ready to be called (setTimeout, setInterval)
  // 2) Node looks at pendingOSTasks and pendingOperations
  //    and calls relevant callbacks
  // 3) Pause execution. Continue when...
  //    - a new pendingOSTask is done
  //    - a new pendingOperation is done
  //    - a timer is about to complete
  // 4) Look at pendingTimers. Call any setImmediate
  // 5) Handle any 'close' events
}

// exit back to terminal
Suppose there are two threads and one of them gets stuck on I/O. The scheduler can set that thread aside, let the second one finish, and then bring the first one back to complete its work.
We will come back to this code later and look at the thread-related details.
Event loop
Think of the event loop as a control structure that decides what a single thread should do next.
Let's look at the overall structure of the event loop in pseudocode.
Broadly, the loop keeps running until all three of the following checks come up empty:
pendingTimers: any pending setTimeout, setInterval, setImmediate?
pendingOSTasks: any pending OS tasks? (like a server listening on a port)
pendingOperations: any pending long-running operations? (like the fs module)
// node myFile.js
const pendingTimers = [];
const pendingOSTasks = [];
const pendingOperations = [];

// New timers, tasks, operations are recorded from myFile running
myFile.runContents();

function shouldContinue() {
  // Check one: Any pending setTimeout, setInterval, setImmediate?
  // Check two: Any pending OS tasks? (Like server listening to port)
  // Check three: Any pending long running operations? (Like fs module)
  return pendingTimers.length || pendingOSTasks.length || pendingOperations.length;
}

// Entire body executes in one 'tick'
while (shouldContinue()) {
  // 1) Node looks at pendingTimers and sees if any functions
  //    are ready to be called (setTimeout, setInterval)
  // 2) Node looks at pendingOSTasks and pendingOperations
  //    and calls relevant callbacks
  // 3) Pause execution. Continue when...
  //    - a new pendingOSTask is done
  //    - a new pendingOperation is done
  //    - a timer is about to complete
  // 4) Look at pendingTimers. Call any setImmediate
  // 5) Handle any 'close' events
}

// exit back to terminal
The contents are the same as the Dockerfile.dev file; only the last command line changes from dev to start.
FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
server / Dockerfile
FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
nginx / Dockerfile
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
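The default.conf for this outer nginx isn't shown in this section. A typical sketch for this kind of setup proxies / to the client container and /api to the server container; the upstream names and ports here are assumptions:

```
upstream client {
  server client:3000;
}

upstream api {
  server api:5000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}
```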
Client with another nginx
The client is a bit more involved.
When we built the single container, we served it with nginx plus the React production build.
But keep in mind that for the multi-container setup we need an nginx on the outside and another nginx inside the client container.
Let's see how to set this up for the multi-container build.
Configuring the nginx inside the client
client/nginx/default.conf
It listens on port 3000 and serves index.html as the main page.
server {
listen 3000;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
}
client/Dockerfile
Build the production files in a builder image,
copy in the nginx config,
and then copy the production files from the builder's /app/build folder into nginx's html directory.
FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
Fixing the client test
As things stand, running the client test can reportedly cause a conflict, so disable it for now and move on.
client/src/App.test.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
it('renders without crashing', () => {});
Everything is ready now. Let's hook up Travis and deploy.
Linking Travis and GitHub
If this is a new project, create a repository on GitHub, push to it, and enable the repository in Travis.
Push to GitHub
Enable it in Travis
If the repository doesn't show up in the list, click Sync Account at the top left to synchronize.
Travis configuration
.travis.yml
One thing to note when configuring Travis: each image tag must include your Docker Hub ID in the tag name.
sudo: required
services:
- docker
before_install:
- docker build -t bear2u/react-test -f ./client/Dockerfile.dev ./client
script:
- docker run bear2u/react-test npm test -- --coverage
after_success:
- docker build -t bear2u/multi-client ./client
- docker build -t bear2u/multi-nginx ./nginx
- docker build -t bear2u/multi-server ./server
- docker build -t bear2u/multi-worker ./worker
# Log in to the docker CLI
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
# Take those images and push them to docker hub
- docker push bear2u/multi-client
- docker push bear2u/multi-nginx
- docker push bear2u/multi-server
- docker push bear2u/multi-worker
This takes us as far as pushing to Docker Hub; deploying to Beanstalk is not covered here.