We run 3scale on ARO (Azure Red Hat OpenShift). After getting an alert that the development cluster was running short on resources, I took a quick look at adjusting 3scale's resource settings.
3scale default specifications

First, let's look at the default CPU/memory requests and limits for each 3scale pod.
| Component | CPU Requests | CPU Limits | Memory Requests | Memory Limits |
|---|---|---|---|---|
| system-app's system-master | 50m | 1000m | 600Mi | 800Mi |
| system-app's system-provider | 50m | 1000m | 600Mi | 800Mi |
| system-app's system-developer | 50m | 1000m | 600Mi | 800Mi |
| system-sidekiq | 100m | 1000m | 500Mi | 2Gi |
| system-sphinx | 80m | 1000m | 250Mi | 512Mi |
| system-redis | 150m | 500m | 256Mi | 32Gi |
| system-mysql | 250m | No limit | 512Mi | 2Gi |
| system-postgresql | 250m | No limit | 512Mi | 2Gi |
| backend-listener | 500m | 1000m | 550Mi | 700Mi |
| backend-worker | 150m | 1000m | 50Mi | 300Mi |
| backend-cron | 50m | 150m | 40Mi | 80Mi |
| backend-redis | 1000m | 2000m | 1024Mi | 32Gi |
| apicast-production | 500m | 1000m | 64Mi | 128Mi |
| apicast-staging | 50m | 100m | 64Mi | 128Mi |
| zync | 150m | 1 | 250M | 512Mi |
| zync-que | 250m | 1 | 250M | 512Mi |
| zync-database | 50m | 250m | 250M | 2G |
Judging by the CPU limits, this deployment can use up to 13 cores in total.
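A quick sanity check on that 13-core figure, summing the CPU limits from the table above (one entry per container, so system-app counts three times; system-mysql and system-postgresql have no CPU limit and are excluded):

```python
# Default CPU limits per container, in millicores, copied from the table above.
# system-mysql / system-postgresql have "No limit" and are left out.
cpu_limits_m = {
    "system-master": 1000,
    "system-provider": 1000,
    "system-developer": 1000,
    "system-sidekiq": 1000,
    "system-sphinx": 1000,
    "system-redis": 500,
    "backend-listener": 1000,
    "backend-worker": 1000,
    "backend-cron": 150,
    "backend-redis": 2000,
    "apicast-production": 1000,
    "apicast-staging": 100,
    "zync": 1000,
    "zync-que": 1000,
    "zync-database": 250,
}

total_m = sum(cpu_limits_m.values())
print(f"total CPU limit: {total_m}m = {total_m / 1000} cores")
# total CPU limit: 13000m = 13.0 cores
```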
ARO imposes hard per-node limits on CPU, memory, and the number of attachable PVs depending on the worker node VM size. Our worker nodes are Standard_E4s_v3: 4 vCPUs, 32 GB of memory, and up to 8 PVs per node. With 3 worker nodes, the cluster ceiling is 12 vCPUs, 96 GB of memory, and 24 PVs...
So, as a first step, I lowered the CPU requests to 10m.
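You can check the per-node allocatable capacity directly before deciding how much to trim; a minimal sketch (the worker label below is the usual OpenShift convention, adjust if your cluster labels differ):

```shell
# List each worker node's allocatable CPU and memory
oc get nodes -l node-role.kubernetes.io/worker \
  -o custom-columns="NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory"
```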
Adjusting 3scale resources
CLI version

You can simply edit the resource with the oc command:

```shell
$ oc edit apimanager <your APIManager name> -n <namespace>
```

For example:

```shell
$ oc edit apimanager apimanager-sample -n 3scale
```
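If you already know exactly which fields to change, a non-interactive patch also works; a sketch (the `apimanager-sample` name and the 10m value are from this post, and I'm only patching one container here as an illustration):

```shell
# Lower the CPU request of the backend worker without opening an editor.
# --type=merge applies an RFC 7386 JSON merge patch of just these fields.
oc patch apimanager apimanager-sample -n 3scale --type=merge \
  -p '{"spec":{"backend":{"workerSpec":{"resources":{"requests":{"cpu":"10m"}}}}}}'
```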
Web console version

To edit the resources, go to 3scale Operator > APIManager > YAML.

Edit the YAML file:
```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  annotations:
    apps.3scale.net/apimanager-threescale-version: '2.11'
    apps.3scale.net/threescale-operator-version: 0.8.1
spec:
  imageStreamTagImportInsecure: false
  resourceRequirementsEnabled: true
  system:
    appSpec:
      developerContainerResources:
        limits:
          cpu: 1
          memory: 800M
        requests:
          cpu: 10m
          memory: 600M
      masterContainerResources:
        limits:
          cpu: 1
          memory: 800M
        requests:
          cpu: 10m
          memory: 600M
      providerContainerResources:
        limits:
          cpu: 1
          memory: 800M
        requests:
          cpu: 10m
          memory: 600M
      replicas: 1
    database:
      mysql:
        resources:
          limits:
            cpu: 250m
            memory: 2Gi
          requests:
            cpu: 10m
            memory: 512M
    fileStorage:
      persistentVolumeClaim:
        storageClassName: azure-file
    memcachedResources:
      limits:
        cpu: 250m
        memory: 96M
      requests:
        cpu: 10m
        memory: 64M
    redisResources:
      limits:
        cpu: 500m
        memory: 32Gi
      requests:
        cpu: 10m
        memory: 256Mi
    sidekiqSpec:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 2Gi
        requests:
          cpu: 10m
          memory: 500M
    sphinxSpec:
      resources:
        limits:
          cpu: '1'
          memory: 512M
        requests:
          cpu: 10m
          memory: 250M
  appLabel: 3scale-api-management
  zync:
    appSpec:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 512M
        requests:
          cpu: 10m
          memory: 250M
    databaseResources:
      limits:
        cpu: 250m
        memory: 2Gi
      requests:
        cpu: 10m
        memory: 250M
    queSpec:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 512M
        requests:
          cpu: 10m
          memory: 250M
  backend:
    cronSpec:
      replicas: 1
      resources:
        limits:
          cpu: 500m
          memory: 500M
        requests:
          cpu: 10m
          memory: 100M
    listenerSpec:
      replicas: 1
      resources:
        limits:
          cpu: 1
          memory: 700M
        requests:
          cpu: 10m
          memory: 550M
    redisResources:
      limits:
        cpu: '2'
        memory: 32Gi
      requests:
        cpu: 10m
        memory: 1Gi
    workerSpec:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 300M
        requests:
          cpu: 10m
          memory: 50M
  tenantName: 3scale
  apicast:
    managementAPI: status
    openSSLVerify: false
    productionSpec:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 128M
        requests:
          cpu: 10m
          memory: 64M
    registryURL: 'http://apicast-staging:8090/policies'
    responseCodes: true
    stagingSpec:
      replicas: 1
      resources:
        limits:
          cpu: 100m
          memory: 128M
        requests:
          cpu: 10m
          memory: 64M
  wildcardDomain: apps.i3jhq2a5.australiaeast.aroapp.io
```
Verifying the result
```shell
$ oc get po -o custom-columns="Name:metadata.name,CPU-Request:spec.containers[*].resources.requests.cpu,CPU-Limit:spec.containers[*].resources.limits.cpu,MEM-Request:spec.containers[*].resources.requests.memory,MEM-Limit:spec.containers[*].resources.limits.memory" -n 3scale
```
| Name | CPU-Request | CPU-Limit | MEM-Request | MEM-Limit |
|---|---|---|---|---|
| apicast-production-6-fbc42 | 10m | 1 | 64M | 128M |
| apicast-staging-5-wtcnj | 10m | 100m | 64M | 128M |
| backend-cron-5-4ccgp | 10m | 500m | 100M | 500M |
| backend-listener-5-72vm7 | 10m | 1 | 550M | 700M |
| backend-redis-7-n28pq | 10m | 2 | 1Gi | 32Gi |
| backend-worker-5-cxqdz | 10m | 1 | 50M | 300M |
| system-app-8-hw9qh (master) | 10m | 1 | 600M | 800M |
| system-app-8-hw9qh (provider) | 10m | 1 | 600M | 800M |
| system-app-8-hw9qh (developer) | 10m | 1 | 600M | 800M |
| system-memcache-5-m88fh | 10m | 250m | 64M | 96M |
| system-mysql-5-wp8m6 | 10m | 250m | 512M | 2Gi |
| system-redis-5-dx66s | 10m | 500m | 256Mi | 32Gi |
| system-sidekiq-5-zkg9l | 10m | 1 | 500M | 2Gi |
| system-sphinx-5-srp8t | 10m | 1 | 250M | 512M |
| threescale-operator-controller-manager-v2-74dd5566f5-g2dqp | 100m | 100m | 100Mi | 100Mi |
| zync-5-987jb | 10m | 1 | 250M | 512M |
| zync-database-5-92kj5 | 10m | 250m | 250M | 2Gi |
| zync-que-5-j58hl | 10m | 1 | 250M | 512M |
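Comparing totals before and after shows how much schedulable headroom this freed; a quick tally of the per-container CPU requests from the two tables (the operator pod and system-memcache are left out, since they don't appear in the defaults table):

```python
# CPU requests per container in millicores: "before" from the defaults table,
# "after" from the verification table (every request lowered to 10m).
before_m = {
    "system-master": 50, "system-provider": 50, "system-developer": 50,
    "system-sidekiq": 100, "system-sphinx": 80, "system-redis": 150,
    "system-mysql": 250, "backend-listener": 500, "backend-worker": 150,
    "backend-cron": 50, "backend-redis": 1000, "apicast-production": 500,
    "apicast-staging": 50, "zync": 150, "zync-que": 250, "zync-database": 50,
}
after_m = {name: 10 for name in before_m}  # every request is now 10m

print(f"before: {sum(before_m.values())}m, after: {sum(after_m.values())}m")
# before: 3430m, after: 160m
```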
I tested this myself following the docs on the Red Hat site.