Context-switching/cgroup in Linux

Posted on 12/10/2023 (updated 12/12/2023) by user

context-switching

Context switching in Linux refers to saving the execution state of the currently running process and restoring the state of another, so that the CPU can resume executing a different process.

  • The Linux kernel performs context switches to schedule many processes efficiently on limited CPU resources, creating the illusion of parallel execution.
  • The execution state of a process includes its CPU registers, program counter, stack pointer, and the bookkeeping held in its process control block. This state is saved and restored when switching between processes.
  • Context switching involves saving the context of the current process and loading the context of the new process to be scheduled next. This has an associated CPU cost.
  • Context switches happen under several conditions: when a process blocks (for example on I/O during a system call), when its time slice expires and the scheduler preempts it, when an interrupt hands the CPU to another task, or when a process voluntarily yields.
  • The Linux scheduler decides when to context switch based on its algorithm and the process states. The default is the Completely Fair Scheduler (CFS), which replaced the earlier O(1) scheduler.
  • Frequent context switching wastes CPU cycles on scheduling and cache overhead, so reducing unnecessary switches, for example by avoiding needless blocking, improves performance.
  • Tools like vmstat and mpstat can be used to monitor context switches; a high switch rate can point to scheduling problems, starvation, or other bottlenecks (a small monitoring sketch follows this list).
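To make the monitoring point concrete, here is a minimal Go sketch (not part of the original post) that samples the cumulative context-switch counter the kernel exposes on the "ctxt" line of /proc/stat and prints a per-second rate, roughly what vmstat shows in its cs column. The one-second sampling interval is an arbitrary choice.

```go
// ctxtrate: sample the cumulative context-switch counter from /proc/stat
// and print a per-second rate, similar to vmstat's "cs" column.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCtxt returns the total number of context switches since boot,
// taken from the "ctxt" line in /proc/stat.
func readCtxt() (uint64, error) {
	f, err := os.Open("/proc/stat")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) == 2 && fields[0] == "ctxt" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("ctxt line not found in /proc/stat")
}

func main() {
	prev, err := readCtxt()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for {
		time.Sleep(time.Second)
		cur, err := readCtxt()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("context switches/sec: %d\n", cur-prev)
		prev = cur
	}
}
```

Running it on a loaded machine versus an idle one makes the relationship between workload and switch rate easy to see.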

control-groups

cgroups (control groups) is a Linux kernel feature that organizes processes into hierarchical groups whose resource usage can be limited, accounted for, and monitored. Here are some key points about cgroups:

  • cgroups limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.
  • Processes can be added to cgroups either manually or automatically based on predefined rules. This provides a mechanism for fine-grained resource control.
  • cgroups enables resource limits to be configured and enforced at a per-group level rather than a per-process level.
  • Hierarchical organization of cgroups provides flexibility in how resource allocation and limits are structured. Cgroups can have parent/child relationships.
  • Common use cases for cgroups include system resource management, prioritization, containers/virtualization, and process monitoring/auditing.
  • Major controllers (subsystems) for resource accounting and limitation include cpu, cpuset, memory, blkio, cpuacct (CPU accounting), and more.
  • cgroups exposes various statistics and metrics about resource usage by each control group, facilitating monitoring and chargeback models.
  • The cgroups API is exposed as a virtual file system (typically mounted under /sys/fs/cgroup), so limits and controls are configured by writing to plain files, as sketched after this list.
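As a concrete illustration of that file-based API, below is a minimal Go sketch that creates a child cgroup, caps its memory, and moves the current process into it. It assumes a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, the memory controller enabled in the parent group, and root privileges; the group name "demo" and the 256 MiB limit are arbitrary.

```go
// cgroup_limit: configure a cgroup purely through file operations on the
// cgroup virtual file system (cgroup v2 assumed).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a child cgroup named "demo" under the v2 hierarchy.
	cg := filepath.Join("/sys/fs/cgroup", "demo")
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}

	// Cap the group's memory at 256 MiB by writing to memory.max.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("268435456"), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the group by writing its PID to cgroup.procs.
	pid := fmt.Sprintf("%d", os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}

	fmt.Println("this process and its children are now limited to 256 MiB")
}
```

The same steps can be done from a shell by creating the directory and echoing values into the same files; the point is that cgroup configuration is just file I/O.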

Docker leverages cgroups to implement Linux containers and to manage their resources.

  • Docker creates a cgroup hierarchy when the Docker daemon starts; this defines where the resource limits for containers will live.
  • When a new container is created, Docker allocates a new child cgroup for it. This sets up resource accounting and limiting for that container.
  • Limits and settings can be passed at container startup to restrict memory, CPU, devices, etc. for the container. These are enforced through the corresponding cgroup controllers.
  • All processes spawned in the container are assigned to the container’s cgroup automatically, so the cgroup governs resource usage for every process in the container.
  • As the container’s processes execute, cgroups monitors and accounts for their usage, enforcing any resource limits set up via Docker.
  • Usage metrics and statistics are also available through the exposed cgroup interfaces; Docker reads these for its containers (see the sketch after this list).
  • When the container terminates, its cgroup is removed and the associated accounting and limits are released.
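To show what those exposed cgroup interfaces look like in practice, here is a small Go sketch that reads a running container's memory usage straight from its cgroup. It assumes a cgroup v2 host where Docker uses the systemd cgroup driver, so the container's group lives at /sys/fs/cgroup/system.slice/docker-<id>.scope; on cgroup v1 or with the cgroupfs driver the path differs, so treat this as an illustration rather than a portable tool.

```go
// container_mem: read a container's memory usage and limit from its cgroup
// files (path assumes cgroup v2 with Docker's systemd cgroup driver).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readValue returns the trimmed contents of a cgroup file, or an error note.
func readValue(dir, file string) string {
	b, err := os.ReadFile(filepath.Join(dir, file))
	if err != nil {
		return "unavailable (" + err.Error() + ")"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: container_mem <full-container-id>")
		os.Exit(1)
	}
	id := os.Args[1]
	dir := fmt.Sprintf("/sys/fs/cgroup/system.slice/docker-%s.scope", id)

	// memory.current is the group's current usage; memory.max is the limit
	// Docker configured from the --memory flag ("max" means unlimited).
	fmt.Println("memory.current:", readValue(dir, "memory.current"))
	fmt.Println("memory.max:    ", readValue(dir, "memory.max"))
}
```

docker stats reports similar numbers through the Docker API; reading the cgroup files directly just makes the underlying mechanism visible.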

The key differences between CNCF containers and Docker containers are:

  1. Standards – CNCF containers adhere to open standards like the OCI specs, while Docker containers historically used Docker’s own formats and runtime.
  2. Portability – CNCF prioritizes portability across platforms and environments. Docker initially tied containers to Docker Engine, only lately opening up to OCI standards.
  3. Runtimes – CNCF containers can run on standardized runtimes like containerd, CRI-O etc. Docker containers run on Docker Engine by design.
  4. Ecosystem – CNCF focuses on a vendor-neutral ecosystem. Docker promotes its own commercial ecosystem and tools.
  5. Scope – The CNCF container ecosystem covers container runtime, image distribution, networking, orchestration etc. Docker mainly focuses on containers and images.
  6. Governance – CNCF container technologies are open source and community driven. Docker innovations are driven more by Docker Inc’s business needs.

In summary, CNCF containers aim to provide loosely coupled, modular components that follow cloud-native principles and open standards, while Docker containers take a more monolithic approach, have only recently adopted interoperability standards, and prioritize ease of use over portability.


Running as a non-root user

Running an application in a container as a non-root user enhances security and aligns with best practices in containerized environments. Here are the key benefits, summarized (a small enforcement sketch follows the list):

  1. Enhanced Security: Running as a non-root user limits the privileges of the application, reducing the risk of system-level compromises. If an attacker exploits the application, their ability to cause harm is restricted to the permissions of that user.
  2. Minimized Attack Surface: Containers running as non-root have fewer permissions, thereby reducing the attack surface. This is crucial in mitigating the impact of potential vulnerabilities within the container or the application.
  3. Compliance with Security Policies: Many organizational and industry security policies require applications to run with the least privileges necessary. Running containers as non-root helps in complying with these security standards.
  4. Best Practice for Container Orchestration: Platforms like Kubernetes have policies that restrict the use of root containers. Running as non-root ensures compatibility with these orchestration tools and their security models.
  5. Prevention of Accidental Damage: Running as a non-root user prevents accidental changes to the host system or other containers, which could be catastrophic if the container had root privileges. This is particularly important in multi-tenant environments.
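In practice the image usually sets a non-root user (for example with a USER instruction in the Dockerfile), and Kubernetes can enforce the rule through securityContext settings such as runAsNonRoot. As an extra defense-in-depth guard, an application can also refuse to start when it detects it is running as root; the following Go sketch (not from the original post) shows that check.

```go
// refuse_root: exit early if the process was started as root, as a
// defense-in-depth complement to running the container as a non-root user.
package main

import (
	"fmt"
	"os"
)

func main() {
	// UID 0 is root; refuse to continue if the container was launched as root.
	if os.Geteuid() == 0 {
		fmt.Fprintln(os.Stderr, "refusing to run as root; start the container with a non-root user")
		os.Exit(1)
	}
	fmt.Printf("running as uid %d, proceeding\n", os.Geteuid())
	// ... application logic would go here ...
}
```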
