What prompted me to write this?
Today's Java, when running in a container environment, looks at the container's resource limits rather than the host's values, if such limits are set.
I wanted to check exactly which values it is looking at.
Note that I have almost no knowledge of cgroups. I'm following this purely from the angle of "which information does Java look at?"
JDK-8146115
Java used to reference the host-side CPU count and memory size, but since JDK-8146115 (and its backports) landed, it now sees the CPU count and memory size allocated to the container.
https://bugs.openjdk.java.net/browse/JDK-8146115
This is supported in Java 10 and later, and for Java 8 in 8u191 and later.
The feature is enabled by default; if you want to disable it explicitly, specify -XX:-UseContainerSupport.
The CPU count can also be overridden with -XX:ActiveProcessorCount=[number of CPUs].
In this post, I looked into which information Java consults, via the changes from JDK-8146115 and friends, to obtain the CPU count and memory size.
Environment
Here is the environment used for verification.
$ docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:33 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:42 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
The host is Ubuntu Linux 20.04 LTS.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal

$ uname -srvmpio
Linux 5.4.0-91-generic #102-Ubuntu SMP Fri Nov 5 16:31:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
The machine has 8 CPUs and 16 GB of memory.
$ grep ^processor /proc/cpuinfo
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7

$ head -n 1 /proc/meminfo
MemTotal:       16306452 kB
For verification, I'll use the Eclipse Temurin Docker image. The version used this time:
$ docker container run -it --rm --name java eclipse-temurin:17-jdk-focal bash

# java --version
openjdk 17.0.1 2021-10-19
OpenJDK Runtime Environment Temurin-17.0.1+12 (build 17.0.1+12)
OpenJDK 64-Bit Server VM Temurin-17.0.1+12 (build 17.0.1+12, mixed mode, sharing)
Resource limits on Docker containers
First, let's look at how to limit resources on a Docker container.
Runtime options with Memory, CPUs, and GPUs | Docker Documentation
For memory, --memory limits the amount of memory the container can use.
Runtime options with Memory, CPUs, and GPUs / Memory
For CPU, you use --cpu-quota to set the cap on CPU time per period, where the period itself is specified with --cpu-period.
Together with the host's CPU count, these values determine how many CPUs the container can effectively use. That said, as the documentation also recommends, it seems better to simply specify the CPU count with --cpus.
--cpus effectively means having Docker compute --cpu-period and --cpu-quota for you.
--cpu-shares controls the relative priority (weight).
Runtime options with Memory, CPUs, and GPUs / CPU
To assign specific CPUs to the container, use --cpuset-cpus.
So, what does it look like inside the container?
Let's take a quick look at what things look like inside the container. First, start a container without any limits at all.
$ docker container run -it --rm --name java eclipse-temurin:17-jdk-focal bash
Inside the container, look at the CPU and memory information.
# grep ^processor /proc/cpuinfo
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7

# head -n 1 /proc/meminfo
MemTotal:       16306452 kB
No difference from the host. Seen from Java, it's the same.
# jshell
Jan 04, 2022 5:06:09 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 17.0.1
|  For an introduction type: /help intro

jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 8

jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getAvailableProcessors()
$2 ==> 8

jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getTotalMemorySize()
$3 ==> 16697806848
docker container inspect also shows that no particular values are set.
$ docker container inspect java | jq '.[].HostConfig' | grep -iE 'cpu|memory' | grep -v Kernel
  "CpuShares": 0,
  "Memory": 0,
  "NanoCpus": 0,
  "CpuPeriod": 0,
  "CpuQuota": 0,
  "CpuRealtimePeriod": 0,
  "CpuRealtimeRuntime": 0,
  "CpusetCpus": "",
  "CpusetMems": "",
  "MemoryReservation": 0,
  "MemorySwap": 0,
  "MemorySwappiness": null,
  "CpuCount": 0,
  "CpuPercent": 0,
Next, start a container limited to 2 CPUs and 2 GB of memory.
$ docker container run -it --rm --name java --cpus 2 --memory 2G eclipse-temurin:17-jdk-focal bash
The information visible through /proc does not change; the host-side values are still shown.
# grep ^processor /proc/cpuinfo
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7

# head -n 1 /proc/meminfo
MemTotal:       16306452 kB
Java, however, does recognize the limits.
# jshell
Jan 04, 2022 5:05:11 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 17.0.1
|  For an introduction type: /help intro

jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getAvailableProcessors()
$2 ==> 2

jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getTotalMemorySize()
$3 ==> 2147483648
docker container inspect confirms that the limits are in place.
$ docker container inspect java | jq '.[].HostConfig' | grep -iE 'cpu|memory' | grep -v Kernel
  "CpuShares": 0,
  "Memory": 2147483648,
  "NanoCpus": 2000000000,
  "CpuPeriod": 0,
  "CpuQuota": 0,
  "CpuRealtimePeriod": 0,
  "CpuRealtimeRuntime": 0,
  "CpusetCpus": "",
  "CpusetMems": "",
  "MemoryReservation": 0,
  "MemorySwap": -1,
  "MemorySwappiness": null,
  "CpuCount": 0,
  "CpuPercent": 0,
So, without any special handling, even when resource limits are applied to a Docker container, a process still sees the host-side resource information.
Checking what JDK-8146115 actually does
Now let's see how JDK-8146115 computes the CPU count and memory size.
https://bugs.openjdk.java.net/browse/JDK-8146115
The answer is written in the issue itself. The CPU count is computed as follows:
- cpu_quota / cpu_period
  - Note: effective only when cpu_quota is set (i.e., not -1)
- If cpu_shares is specified for the container: cpu_shares / 1024
Number of CPUs
Use a combination of number_of_cpus() and cpu_sets() in order to determine how many processors are available to the process and adjust the JVMs os::active_processor_count appropriately. The number_of_cpus() will be calculated based on the cpu_quota() and cpu_period() using this formula: number_of_cpus() = cpu_quota() / cpu_period(). If cpu_shares has been setup for the container, the number_of_cpus() will be calculated based on cpu_shares()/1024. 1024 is the default and standard unit for calculating relative cpu usage in cloud based container management software.
Also add a new VM flag (-XX:ActiveProcessorCount=xx) that allows the number of CPUs to be overridden. This flag will be honored even if UseContainerSupport is not enabled.
The total memory is obtained from the memory_limit of the cgroup file system.
Total available memory
Use the memory_limit() value from the cgroup file system to initialize the os::physical_memory() value in the VM. This value will propagate to all other parts of the Java runtime.
As for memory in use, the OS's available memory (os::available_memory) is derived by subtracting memory_usage_in_bytes from the total.
Memory usage
Use memory_usage_in_bytes() for providing os::available_memory() by subtracting the usage from the total available memory allocated to the container.
So cgroups have entered the picture.
Related kernel documentation and the OpenJDK source code
The kernel documentation relevant to this topic:
- cgroup v1
- cgroup v2
cgroups come in v1 and v2. OpenJDK appears to support both cgroup v1 and v2, and the dispatch between them is done in cgroupSubsystem_linux.cpp.
For cgroup v1:
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
For cgroup v2:
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
To find out exactly which information inside the container is consulted, you can look at cgroupV1Subsystem_linux.cpp for cgroup v1 and cgroupV2Subsystem_linux.cpp for cgroup v2.
For example, cgroup v1's cpu_quota and cpu_period correspond to cpu.cfs_quota_us and cpu.cfs_period_us.
/* cpu_quota
 *
 * Return the number of microseconds per period
 * process is guaranteed to run.
 *
 * return:
 *    quota time in microseconds
 *    -1 for no quota
 *    OSCONTAINER_ERROR for not supported
 */
int CgroupV1Subsystem::cpu_quota() {
  GET_CONTAINER_INFO(int, _cpu->controller(), "/cpu.cfs_quota_us",
                     "CPU Quota is: %d", "%d", quota);
  return quota;
}

int CgroupV1Subsystem::cpu_period() {
  GET_CONTAINER_INFO(int, _cpu->controller(), "/cpu.cfs_period_us",
                     "CPU Period is: %d", "%d", period);
  return period;
}
You may notice the controller() part here; we'll leave that for later.
Incidentally, the method for calculating the available CPU count is also documented as a comment in cgroupSubsystem_linux.cpp.
Also, the term "CFS scheduler" appears in Docker's documentation on CPU limits:
Specify the CPU CFS scheduler period, which is used alongside --cpu-quota.
Runtime options with Memory, CPUs, and GPUs / CPU
That refers to this:
CFS Scheduler — The Linux Kernel documentation
Checking my own environment
Now let's check whether my local environment uses cgroup v1 or cgroup v2.
Looking at mount gives the output below; both cgroup v1 and cgroup v2 mounts are present.
$ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
It seems that the controllers are mounted under cgroup v1 by default, and only those not in use via v1 are available through the cgroup v2 hierarchy.
A cgroup v2 controller is available only if it is not currently in use via a mount against a cgroup v1 hierarchy. Or, to put things another way, it is not possible to employ the same controller against both a v1 hierarchy and the unified v2 hierarchy.
cgroups(7) - Linux manual page
So effectively, my environment is cgroup v1. Incidentally, checking inside a container shows that it is entirely cgroup v1.
$ docker container run -it --rm --name java eclipse-temurin:17-jdk-focal bash -c 'mount | grep cgroup'
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/rdma type cgroup (ro,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
Checking the resources allocated to a container under cgroup v1
From here on, I'll verify things in my local (cgroup v1) environment. Once again, let's enter a container limited to 2 CPUs and 2 GB of memory.
$ docker container run -it --rm --name java --cpus 2 --memory 2G eclipse-temurin:17-jdk-focal bash
A process's own cgroup information can be checked in /proc/self/cgroup.
# cat /proc/self/cgroup
12:pids:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
11:net_cls,net_prio:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
10:cpuset:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
9:memory:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
8:devices:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
7:freezer:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
6:perf_event:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
5:cpu,cpuacct:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
4:hugetlb:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
3:blkio:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
2:rdma:/
1:name=systemd:/docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba
0::/system.slice/containerd.service
The memory and cpu,cpuacct entries in the second field represent subsystems.
And looking at /proc/self/mountinfo tells you where each subsystem is mounted.
# grep docker /proc/self/mountinfo
1646 1569 0:135 / / rw,relatime master:726 - overlay overlay rw,lowerdir=/var/lib/docker/overlay2/l/GJRR3TKMSGCOQ6S7DE34V67ZDU:/var/lib/docker/overlay2/l/VSPKK6AJ62IU6LXW7CUURU5NBL:/var/lib/docker/overlay2/l/NM6ZKZ76GIHXKGJFFADB3UAEB7:/var/lib/docker/overlay2/l/L4PQYHUEGPQIFE5VDUMPZK3ZJ2:/var/lib/docker/overlay2/l/RXIPGFSFBRXJUBRP7FGTUIN5Y2,upperdir=/var/lib/docker/overlay2/dbf0492bd7fdc22e4755c6364b451cdc69acdb306cbda5f7c7a6897ed0262180/diff,workdir=/var/lib/docker/overlay2/dbf0492bd7fdc22e4755c6364b451cdc69acdb306cbda5f7c7a6897ed0262180/work,xino=off
1658 1657 0:30 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,xattr,name=systemd
1660 1657 0:35 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,blkio
1661 1657 0:36 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,hugetlb
1662 1657 0:37 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,cpu,cpuacct
1663 1657 0:38 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,perf_event
1664 1657 0:39 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,freezer
1665 1657 0:40 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,devices
1666 1657 0:41 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,memory
1667 1657 0:42 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,cpuset
1668 1657 0:43 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,net_cls,net_prio
1669 1657 0:44 /docker/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:26 - cgroup cgroup rw,pids
1672 1646 8:8 /var/lib/docker/containers/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/sda8 rw,errors=remount-ro
1673 1646 8:8 /var/lib/docker/containers/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba/hostname /etc/hostname rw,relatime - ext4 /dev/sda8 rw,errors=remount-ro
1674 1646 8:8 /var/lib/docker/containers/435086c1708f59eaae929124b129e64b2a93adea24c6cb570ed99323064d4dba/hosts /etc/hosts rw,relatime - ext4 /dev/sda8 rw,errors=remount-ro
For example, for the memory subsystem you should look under /sys/fs/cgroup/memory, and for the cpu,cpuacct subsystem under /sys/fs/cgroup/cpu,cpuacct.
And indeed, OpenJDK appears to consult this same information.
Now, recall that the CPU count can be computed by dividing cpu_quota by cpu_period, and that in cgroup v1, cpu_quota and cpu_period correspond to cpu.cfs_quota_us and cpu.cfs_period_us.
So, let's check these values inside the container.
# cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
200000

# cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us
100000
200000 / 100000 is 2, which matches the number of CPUs allocated to the container.
The container itself appears to recognize all 8 of the host's CPUs; the scheduler is what effectively limits it to 2.
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-7
For memory, look at /sys/fs/cgroup/memory/memory.limit_in_bytes.
# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
2147483648
This matches both what Java reported and what docker container inspect showed.
jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getTotalMemorySize()
$3 ==> 2147483648

$ docker container inspect java | jq '.[].HostConfig' | grep -iE 'cpu|memory' | grep -v Kernel
  "CpuShares": 0,
  "Memory": 2147483648,
(snip)
The current memory usage appears to be /sys/fs/cgroup/memory/memory.usage_in_bytes.
# cat /sys/fs/cgroup/memory/memory.usage_in_bytes
4104192
Incidentally, cpu_shares is read from /sys/fs/cgroup/cpu,cpuacct/cpu.shares. I won't be adjusting it this time.
# cat /sys/fs/cgroup/cpu,cpuacct/cpu.shares
1024
What if --cpuset-cpus is specified?
Using --cpuset-cpus, let's pin the container to 2 specific CPUs.
$ docker container run -it --rm --name java --cpuset-cpus 0,1 eclipse-temurin:17-jdk-focal bash
cpu.cfs_quota_us is -1, so the CFS quota is not in effect here.
# cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
-1

# cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us
100000
Even so, Java recognizes the limit.
# jshell
Jan 04, 2022 6:05:44 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 17.0.1
|  For an introduction type: /help intro

jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

jshell> ((com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean()).getAvailableProcessors()
$2 ==> 2
Looking at /sys/fs/cgroup/cpuset/cpuset.cpus, indeed 2 CPUs are assigned.
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-1
How is this detected? Java looks at the result of sched_getaffinity.
sched_getaffinity(2) - Linux man page
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/os_linux.cpp#L4678-L4685
With --cpuset-cpus, the limit appears to be realized via CPU affinity on the processes inside the container.
Code for checking this. (The reason sysconf and _SC_NPROCESSORS_CONF appear here will become clear later.)
print_processors_count.c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sched.h>
#include <stdlib.h>

int main() {
    printf("available processors = %lu\n", sysconf(_SC_NPROCESSORS_CONF));

    cpu_set_t cpu_set;
    CPU_ZERO(&cpu_set);
    sched_getaffinity(0, sizeof(cpu_set), &cpu_set);
    printf("number of cpus: %d\n", CPU_COUNT(&cpu_set));
}
Build this and copy it into the container,
$ gcc print_processors_count.c -o print_processors_count
$ docker container cp print_processors_count java:/
then run it inside the container; the result is as follows.
# /print_processors_count
available processors = 8
number of cpus: 2
The recognized CPU count is 8, but the result obtained from sched_getaffinity is 2.
docker container inspect shows this:
$ docker container inspect java | jq '.[].HostConfig' | grep -iE 'cpu|memory' | grep -v Kernel
  "CpuShares": 0,
  "Memory": 0,
  "NanoCpus": 0,
  "CpuPeriod": 0,
  "CpuQuota": 0,
  "CpuRealtimePeriod": 0,
  "CpuRealtimeRuntime": 0,
  "CpusetCpus": "0,1",
  "CpusetMems": "",
  "MemoryReservation": 0,
  "MemorySwap": 0,
  "MemorySwappiness": null,
  "CpuCount": 0,
  "CpuPercent": 0,
What if we look at the same information on the host?
What do these values look like when viewed on the host side?
$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
-1

$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us
100000

$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712

$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.shares
1024
The value of /sys/fs/cgroup/memory/memory.limit_in_bytes is an enormous number (effectively "no limit")... Incidentally, checking the same information inside a container started without resource limits gave the same results.
What if container support is disabled? (And where does Java look by default, anyway?)
When -XX:-UseContainerSupport is specified, the code paths that read cgroup information are skipped entirely. Which raises the question: where does Java look for this information in the first place?
Apparently via sysconf: the CPU count comes from _SC_NPROCESSORS_CONF, and the memory from _SC_PHYS_PAGES and _SC_PAGESIZE.
https://github.com/openjdk/jdk17u/blob/jdk-17.0.1%2B12/src/hotspot/os/linux/os_linux.cpp#L364-L379
sysconf(3) - Linux manual page
Summary
I was curious how Java obtains the container-side CPU count and memory size, so I looked into it. I think I managed to trace most of the information, but since I have no real understanding of cgroups, this really is just a collection of observations. I'd like to dig a bit deeper, but this is it for now... Still, I learned a lot.
Any program that needs to see the resources allocated to its container, or to adapt its behavior to them, will have to be aware of this kind of information.
References
第3回 Linuxカーネルのコンテナ機能[2] ─cgroupとは?(その1) | gihyo.jp