[{"data":1,"prerenderedAt":825},["ShallowReactive",2],{"/en-us/blog/tracking-down-missing-tcp-keepalives":3,"navigation-en-us":48,"banner-en-us":458,"footer-en-us":468,"blog-post-authors-en-us-Stan Hu":708,"blog-related-posts-en-us-tracking-down-missing-tcp-keepalives":722,"blog-promotions-en-us":762,"next-steps-en-us":815},{"id":4,"title":5,"authorSlugs":6,"authors":8,"body":10,"category":11,"categorySlug":11,"config":12,"content":16,"date":20,"description":17,"extension":30,"externalUrl":31,"featured":14,"heroImage":19,"isFeatured":14,"meta":32,"navigation":33,"path":34,"publishedDate":20,"rawbody":35,"seo":36,"slug":13,"stem":41,"tagSlugs":42,"tags":46,"template":15,"updatedDate":31,"__hash__":47},"blogPosts/en-us/blog/tracking-down-missing-tcp-keepalives.yml","What tracking down missing TCP Keepalives taught me about Docker, Golang, and GitLab",[7],"stan-hu",[9],"Stan Hu","This blog post was originally published on the GitLab Unfiltered blog. It was reviewed and republished on 2019-12-03.\n\n\nWhat began as failure in a GitLab static analysis check led to a\ndizzying investigation that uncovered a subtle [bug in the Docker client\nlibrary code](https://github.com/docker/for-linux/issues/853) used by\nthe GitLab Runner. We ultimately worked around the problem by upgrading\nthe Go compiler, but in the process we uncovered an unexpected change in\nthe Go TCP keepalive defaults that fixed an issue with Docker and GitLab\nCI.\n\nThis investigation started on October 23, when backend engineer [Luke\nDuncalfe](/company/team/#.luke) mentioned, \"I'm seeing\n[`static-analysis` failures with no output](https://gitlab.com/gitlab-org/gitlab/-/jobs/331174397).\nIs there something wrong with this job?\" He opened [a GitLab\nissue](https://gitlab.com/gitlab-org/gitlab/issues/34951) to discuss.\n\nWhen Luke ran the static analysis check locally on his laptop, he saw\nuseful debugging output when the test failed. For example, an extraneous\nnewline would accurately be reported by Rubocop. However, when the same\ntest ran in GitLab's automated test infrastructure, the test failed\nquietly:\n\n![Failed job](https://about.gitlab.com/images/blogimages/docker-tcp-keepalive-debug/job-failure.png)\n\nNotice how the job log did not include any clues after the `bin/rake\nlint:all` step. This made it difficult to determine whether a real\nproblem existed, or whether this was just a flaky test.\n\nIn the ensuing days, numerous team members reported the same problem.\nNothing kills productivity like silent test failures.\n\n## Was something wrong with the test itself?\n\nIn the past, we had seen that if that specific test generated enough\nerrors, [the output buffer would fill up, and the continuous integration\n(CI) job would lock\nindefinitely](https://gitlab.com/gitlab-org/gitlab-foss/issues/61432). We\nthought we had [fixed that issue months\nago](https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/28402). Upon\nfurther review, that fix seemed to eliminate any chance of a thread\ndeadlock.\n\nDid we have to flush the buffer? No, because the Linux kernel will do\nthat for an exiting process already.\n\n## Was there a change in how CI logs were handled?\n\nWhen a test runs in GitLab CI, the [GitLab\nRunner](https://gitlab.com/gitlab-org/gitlab-runner/) launches a Docker\ncontainer that runs commands specified by a `.gitlab-ci.yml` inside the\nproject repository. As the job runs, the runner streams the output to\nthe GitLab API via PATCH requests. The GitLab backend saves this data\ninto a file. 
The following sequence diagram shows how this works:

```text
== Get a job! ==
Runner -> GitLab: POST /api/v4/jobs/request
GitLab -> Runner: 201 Job was scheduled

== Job sends logs (1 of 2) ==
Runner -> GitLab: PATCH /api/v4/job/:id/trace
GitLab -> File: Save to disk
GitLab -> Runner: 202 Accepted

== Job sends logs (2 of 2) ==
Runner -> GitLab: PATCH /api/v4/job/:id/trace
GitLab -> File: Save to disk
GitLab -> Runner: 202 Accepted
```

[Heinrich Lee Yu](/company/team/#engwan) mentioned that we had recently [disabled a feature flag that changed how GitLab handled CI job logs](https://docs.gitlab.com/administration/job_logs/#new-incremental-logging-architecture). [The timing seemed to line up](https://gitlab.com/gitlab-org/gitlab/issues/34951#note_236723888).

This feature, called live CI traces, eliminates the need for a shared POSIX filesystem (e.g., NFS) when saving job logs to disk by:

1. Streaming data into memory via Redis
2. Persisting the data in the database (PostgreSQL)
3. Archiving the final data into object storage

When this flag is enabled, the flow of CI job logs looks something like the following:

```text
== Get a job! ==
Runner -> GitLab: POST /api/v4/jobs/request
GitLab -> Runner: 201 Job was scheduled

== Job sends logs ==
Runner -> GitLab: PATCH /api/v4/job/:id/trace
GitLab -> Redis: Save chunk
GitLab -> Runner: 202 Accepted
...
== Copy 128 KB chunks from Redis to database ==
GitLab -> Redis: GET gitlab:ci:trace:id:chunks:0
GitLab -> PostgreSQL: INSERT INTO ci_build_trace_chunks
...
== Job finishes ==

Runner -> GitLab: PUT /api/v4/job/:id
GitLab -> Runner: 200 Job was updated

== Archive trace to object storage ==
```

Looking at the flow diagram above, we see that this approach has more steps. After receiving data from the runner, something could have gone wrong with handling a chunk of data. However, we still had many questions:

1. Did the runners send the right data in the first place?
1. Did GitLab drop a chunk of data somewhere?
1. Did this new feature actually have anything to do with the problem?
1. Are they really making another Gremlins movie?

## Reproducing the bug: Simplify the `.gitlab-ci.yml`

To help answer those questions, we simplified the `.gitlab-ci.yml` to run only the `static-analysis` step. We inserted a known RuboCop error, replacing an `eq` with `eql`. We first ran this test on a separate GitLab instance with a private runner. No luck there – the job showed the right output:

```text
Offenses:

ee/spec/models/project_spec.rb:55:42: C: RSpec/BeEql: Prefer be over eql.
        expect(described_class.count).to eql(2)
                                         ^^^

12669 files inspected, 1 offense detected
```

However, we repeated the test on our staging server and found that we reproduced the original problem. In addition, the live CI trace feature flag had been activated on staging. Since the problem occurred with and without the feature, we could eliminate that feature as a possible cause.

Perhaps something in the GitLab server environment was causing the problem. For example, could the load balancers be rate-limiting the runners? As an experiment, we pointed a private runner at the staging server and re-ran the test. This time, it succeeded: the output was shown. That seemed to suggest that the problem had more to do with the runner than with the server.

## Docker Machine vs. Docker

One key difference stood out between the two tests: the first used a shared, autoscaled runner with a [Docker Machine](https://docs.docker.com/machine/overview/) executor, while the private runner used a [Docker executor](https://docs.gitlab.com/runner/executors/docker/).

What does Docker Machine do exactly? The following diagram may help illustrate:

![Docker Machine](https://docs.docker.com/machine/img/machine.png)

The top-left shows a local Docker instance. When you run Docker from the command-line interface (e.g., `docker attach my-container`), the program just makes [REST calls to the Docker Engine API](https://docs.docker.com/engine/api/v1.40/).
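To make this concrete, here is a minimal sketch of the same kind of Engine API call in Go, listing running containers over the daemon's local UNIX socket. This uses plain `net/http` rather than the official Docker client library, and assumes a daemon listening at `/var/run/docker.sock`:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Talk to the local Docker daemon over its UNIX socket instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// The host part of the URL is ignored; the transport above always
	// dials the socket. GET /containers/json is the Engine API call
	// behind `docker ps`.
	resp, err := client.Get("http://docker/v1.40/containers/json")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

Pointing the same kind of request at `https://<machine-ip>:2376` with client certificates is essentially what the runner does when it talks to a remote Docker Machine host.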
The rest of the diagram shows how Docker Machine fits into the picture. Docker Machine is an entirely separate program. The GitLab Runner shells out to `docker-machine` to create and destroy virtual machines using cloud-specific (e.g. Amazon, Google, etc.) drivers. Once a machine is running, the runner then uses the Docker Engine API to run, watch, and stop containers.

Note that this API is used securely over an HTTPS connection. This is an important difference between the Docker Machine executor and the Docker executor: The former needs to communicate across the network, while the latter can use either a local TCP socket or a UNIX domain socket.

## Google Cloud Platform timeouts

We've known for a while that Google Cloud [has a 10-minute idle timeout](https://cloud.google.com/compute/docs/troubleshooting/general-tips), which has caused issues in the past:

> Note that idle connections are tracked for a maximum of 10 minutes, after which their traffic is subject to firewall rules, including the implied deny ingress rule. If your instance initiates or accepts long-lived connections with an external host, you should adjust TCP keep-alive settings on your Compute Engine instances to less than 600 seconds to ensure that connections are refreshed before the timeout occurs.

Was the problem caused by this timeout? With the Docker Machine executor, we found that we could reproduce the problem with a simple `.gitlab-ci.yml`:

```yaml
image: "busybox:latest"

test:
  script:
    - date
    - sleep 601
    - echo "Hello world!"
    - date
    - exit 1
```

This would reproduce the failure, where we would never see the `Hello world!` output. Changing the `sleep 601` to `sleep 599` would make the problem go away. Hurrah! All we have to do is tweak the system TCP keepalives, right? Google provided these sensible settings:

```sh
sudo /sbin/sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=5
```

However, enabling these kernel-level settings didn't solve the problem. One subtlety here is that these sysctls only tune probe timing: the kernel sends keepalives only on sockets that have the `SO_KEEPALIVE` option enabled, so an application that never sets that option sends no probes at all.
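For Go programs like Docker and the GitLab Runner, that opt-in usually happens through `net.Dialer`. Here is a minimal sketch of a client enabling keepalives (the address is a placeholder; this is background on the mechanism, not the runner's actual code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dialer.KeepAlive controls TCP keepalive probes on the resulting
	// connection. Before Go 1.13, a zero value meant probes were not
	// enabled at all; since Go 1.13, zero enables them with a 15-second
	// default, and a negative value disables them.
	dialer := &net.Dialer{
		Timeout:   30 * time.Second,
		KeepAlive: 30 * time.Second, // send a probe every 30 seconds
	}

	conn, err := dialer.Dial("tcp", "example.com:2376")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected with keepalives enabled")
}
```

That zero-value behavior change in Go 1.13 is the "unexpected change in the Go TCP keepalive defaults" mentioned at the start of this post: after a compiler upgrade, clients that never asked for keepalives start sending them every 15 seconds.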
Were keepalives even being sent? Or was there some other issue? We turned our attention to network traces.

## Eavesdropping on Docker traffic

To understand what was happening, we needed to monitor the network communication between the runner and the Docker container. But how exactly does the GitLab Runner stream data from a Docker container to the GitLab server? The following diagram illustrates the flow:

```text
Runner -> Docker: POST /containers/name/attach
Docker -> Runner: <container output>
Docker -> Runner: <container output>
Runner -> GitLab: PATCH /api/v4/job/:id/trace
GitLab -> File: Save to disk
GitLab -> Runner: 202 Accepted
```

First, the runner makes a [POST request to attach to the container output](https://docs.docker.com/engine/api/v1.40/#operation/ContainerAttach). As soon as a process running in the container outputs some data, Docker will transmit the data over this HTTPS stream. The runner then copies this data to GitLab via the PATCH request.

However, as mentioned earlier, traffic between a GitLab Runner and the remote Docker machine is encrypted over HTTPS on port 2376. Was there an easy way to disable HTTPS? Searching through the code of Docker Machine, we found that it did not appear to be supported out of the box.

Since we couldn't disable HTTPS, we had two ways to eavesdrop:

1. Use a man-in-the-middle proxy (e.g. [mitmproxy](https://mitmproxy.org/))
1. Record the traffic and decrypt it later using the private keys

## OK, let's be the man-in-the-middle!

The first option seemed more straightforward, since [we already had experience doing this with the Docker client](https://docs.gitlab.com/administration/packages/container_registry/#running-the-docker-daemon-with-a-proxy).

However, after [defining the proxy variables for GitLab Runner](https://docs.gitlab.com/runner/configuration/proxy/#adding-proxy-variables-to-the-runner-config), we found we were only able to intercept the GitLab API calls with `mitmproxy`. The Docker API calls still went directly to the remote host. Something wasn't obeying the proxy configuration, but we didn't investigate further.
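We never chased this down, but Go offers one plausible explanation: `net/http` consults the `HTTP_PROXY`/`HTTPS_PROXY` environment variables only when a transport is explicitly wired up to do so. A small illustration (not the actual runner code):

```go
package main

import "net/http"

func main() {
	// http.DefaultTransport honors HTTP_PROXY/HTTPS_PROXY because its
	// Proxy field is set to http.ProxyFromEnvironment.
	proxied := &http.Transport{Proxy: http.ProxyFromEnvironment}

	// A Transport constructed with a nil Proxy field dials every
	// destination directly, silently ignoring the proxy variables.
	direct := &http.Transport{}

	_ = &http.Client{Transport: proxied}
	_ = &http.Client{Transport: direct}
}
```

If a library builds its own `http.Transport` without setting `Proxy`, its requests will bypass `mitmproxy` no matter what the environment says.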
We tried the second approach.

## Decrypting TLS data

To decrypt TLS data, we would need to obtain the encryption keys. Where were these located for a newly-created system with `docker-machine`? It turns out `docker-machine` worked in the following way:

1. Call the Google Cloud API to create a new machine
1. Create a `/root/.docker/machine/machines/:machine_name` directory
1. Generate a new SSH keypair
1. Install the SSH key on the server
1. Generate a new TLS certificate and key
1. Install and configure Docker on the newly-created machine with TLS certificates

As long as the machine runs, the directory will contain the information needed to decode this traffic. We ran `tcpdump` and saved the private keys.

Our first attempt at decoding the traffic failed. Wireshark could not decode the encrypted traffic, although general TCP traffic could still be seen. Researching more, we found out why: if the encrypted traffic used a [Diffie-Hellman key exchange](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange), having the private keys would not suffice! This is by design, a property called [perfect forward secrecy](https://en.m.wikipedia.org/wiki/Forward_secrecy).

To get around that limitation, we modified the GitLab Runner to disable cipher suites that used the Diffie-Hellman key exchange:

```diff
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/config_client_ciphers.go b/vendor/github.com/docker/go-connections/tlsconfig/config_client_ciphers.go
index 6b4c6a7c0..a3f86d756 100644
```
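The full patch is not reproduced above, but the idea is straightforward: pin the client to RSA-key-exchange cipher suites, which don't use Diffie-Hellman, so that a capture can be decrypted with the server's RSA private key alone. A minimal sketch with Go's `crypto/tls` (the host is a placeholder; this illustrates the technique, not the runner's exact change):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Restrict the client to RSA key exchange so Wireshark can decrypt
	// a capture using only the server's RSA private key. These suites
	// deliberately give up forward secrecy, which is the whole point
	// when debugging.
	cfg := &tls.Config{
		MaxVersion: tls.VersionTLS12, // all TLS 1.3 suites use (EC)DHE
		CipherSuites: []uint16{
			tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		},
	}

	conn, err := tls.Dial("tcp", "docker-machine.example.com:2376", cfg)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("negotiated:", tls.CipherSuiteName(conn.ConnectionState().CipherSuite))
}
```

A gentler alternative in modern Go is `tls.Config.KeyLogWriter`, which writes per-session secrets in the NSS key log format that Wireshark can consume, and works even with forward-secret cipher suites.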
statement",{"items":698},[699,702,705],{"text":700,"config":701},"Terms",{"href":528,"dataGaName":529,"dataGaLocation":476},{"text":703,"config":704},"Cookies",{"dataGaName":538,"dataGaLocation":476,"id":539,"isOneTrustButton":33},{"text":706,"config":707},"Privacy",{"href":533,"dataGaName":534,"dataGaLocation":476},[709],{"id":710,"title":9,"body":31,"config":711,"content":713,"description":31,"extension":30,"meta":717,"navigation":33,"path":718,"seo":719,"stem":720,"__hash__":721},"blogAuthors/en-us/blog/authors/stan-hu.yml",{"template":712},"BlogAuthor",{"name":9,"config":714},{"headshot":715,"ctfId":716},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749659504/Blog/Author%20Headshots/stanhu-headshot.jpg","stanhu",{},"/en-us/blog/authors/stan-hu",{},"en-us/blog/authors/stan-hu","KmQVCb_7YcWghHApaS2EI3J2bQ0dRustgOz4wYyOnVk",[723,737,750],{"content":724,"config":735},{"title":725,"description":726,"authors":727,"heroImage":729,"date":730,"body":731,"category":11,"tags":732},"How to build CI/CD observability at scale","This practical guide to GitLab pipeline analytics helps self-managed users gain operational insights using Prometheus and Grafana.",[728],"Paul Meresanu","https://res.cloudinary.com/about-gitlab-com/image/upload/v1774465167/n5hlvrsrheadeccyr1oz.png","2026-04-28","CI/CD optimization starts with visibility. Building a successful DevOps platform at enterprise scale **should include** understanding pipeline performance, job execution patterns, and quantifiable operational insights — especially for organizations running GitLab self-managed instances.\n\nTo help GitLab customers maximize their platform investments, we developed the GitLab CI/CD Observability solution as part of our Platform Excellence program, which transforms raw pipeline metrics into actionable operational insights.\n\nA leading financial services organization partnered with GitLab's customer success architect to gain visibility into their GitLab self-managed deployment. Together, we implemented a containerized observability solution combining the open-source gitlab-ci-pipelines-exporter with enterprise-grade Prometheus and Grafana infrastructure.\n\nIn this article, you'll learn the challenges they faced managing pipelines at scale and how GitLab CI/CD Observability addressed them with a practical, end-to-end implementation.\n\n## The challenge: Measuring CI/CD performance\nBefore implementing any observability solution, define your measurement landscape:\n*   **What metrics matter?** Pipeline duration, job success rates, queue times, runner utilization\n*   **Who needs visibility?** Developers, DevOps engineers, platform teams, leadership\n*   **What decisions will this drive?** Infrastructure investment, bottleneck remediation, capacity planning\n\n## Solution architecture: A full set of dashboards for observability\nOnce deployed, the observability stack provides a set of Grafana dashboards that give real-time and historical visibility into your CI/CD platform. A typical deployment includes:\n*   **Pipeline Overview Dashboard:** A top-level view showing total pipeline runs, success/failure rates over time (as stacked bar or time-series charts), and average pipeline duration trends. 
Panels use color-coded status indicators (green for success, red for failure, amber for cancelled) so platform teams can spot degradation at a glance.\n*   **Job Performance Dashboard:** Drill-down panels showing individual job duration distributions (histogram), the top 10 slowest jobs by average duration, and job failure heatmaps by project and stage. This is where teams identify specific bottleneck jobs worth optimizing.\n*   **Runner & Infrastructure Dashboard:** Combines Node Exporter host metrics (CPU, memory, disk) with pipeline queue-time data to correlate infrastructure saturation with pipeline wait times. Useful for capacity planning decisions such as scaling runner pools or upgrading instance sizes.\n*   **Deployment Frequency Dashboard:** Tracks deployment count and deployment duration over time per environment, aligned with DORA metrics. Helps engineering leadership assess delivery throughput and environment drift (commits behind main).\n\nEach dashboard is provisioned automatically via Grafana's file-based provisioning, so it deploys consistently across environments. The dashboards can be further customized with Grafana variables to filter by project, ref/branch, or time range.\n\n![Solution architecture](https://res.cloudinary.com/about-gitlab-com/image/upload/v1777382608/Blog/Imported/blog-building-ci-cd-observability-stack-for-gitlab-self-managed/image1.png)\n\nThe solution requires two exporters:\n*   **Pipeline Exporter:** Collects CI/CD metrics via GitLab API (pipeline duration, job status, deployments)\n*   **Node Exporter:** Collects host-level metrics (CPU, memory, disk) for infrastructure correlation\n\n**Prerequisites:**\n*   GitLab Self-Managed Version 18.1+\n*   **Container orchestration platform:** A Kubernetes cluster (recommended for enterprise deployments) or a container runtime such as Docker/Podman for smaller scale or proof-of-concept environments. The primary deployment guide below targets Kubernetes; a Docker Compose alternative is provided in the appendix for local testing and evaluation\n*   GitLab Personal Access Token (**read_api** scope)\n\n## Kubernetes deployment (recommended)\nFor enterprise environments, deploy each component as a separate Deployment within a dedicated namespace. This approach integrates with existing cluster infrastructure, secrets management, and network policies.\n\n### 1. Create namespace and secret\n```bash\nkubectl create namespace gitlab-observability\n\n# Create the GitLab token secret (see Secrets Management section below\n# for enterprise-grade approaches using external secret operators)\nkubectl create secret generic gitlab-token \\\n  --from-literal=token=glpat-xxxxxxxxxxxx \\\n  -n gitlab-observability\n```\n\n\n### 2. 
Deploy the Pipeline Exporter\n```yaml\n# exporter-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: gitlab-ci-pipelines-exporter\n  namespace: gitlab-observability\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: gitlab-ci-pipelines-exporter\n  template:\n    metadata:\n      labels:\n        app: gitlab-ci-pipelines-exporter\n    spec:\n      containers:\n        - name: exporter\n          image: mvisonneau/gitlab-ci-pipelines-exporter:latest\n          ports:\n            - containerPort: 8080\n          env:\n            - name: GCPE_GITLAB_TOKEN\n              valueFrom:\n                secretKeyRef:\n                  name: gitlab-token\n                  key: token\n            - name: GCPE_CONFIG\n              value: /etc/gcpe/config.yml\n          volumeMounts:\n            - name: config\n              mountPath: /etc/gcpe\n      volumes:\n        - name: config\n          configMap:\n            name: gcpe-config\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: gitlab-ci-pipelines-exporter\n  namespace: gitlab-observability\nspec:\n  selector:\n    app: gitlab-ci-pipelines-exporter\n  ports:\n    - port: 8080\n      targetPort: 8080\n```\n\n### 3. Deploy Node Exporter (DaemonSet)\n```yaml\n# node-exporter-daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: node-exporter\n  namespace: gitlab-observability\nspec:\n  selector:\n    matchLabels:\n      app: node-exporter\n  template:\n    metadata:\n      labels:\n        app: node-exporter\n    spec:\n      containers:\n        - name: node-exporter\n          image: prom/node-exporter:latest\n          ports:\n            - containerPort: 9100\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: node-exporter\n  namespace: gitlab-observability\nspec:\n  selector:\n    app: node-exporter\n  ports:\n    - port: 9100\n      targetPort: 9100\n```\n\n### 4. Deploy Prometheus\n```yaml\n# prometheus-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: prometheus\n  namespace: gitlab-observability\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: prometheus\n  template:\n    metadata:\n      labels:\n        app: prometheus\n    spec:\n      containers:\n        - name: prometheus\n          image: prom/prometheus:latest\n          ports:\n            - containerPort: 9090\n          volumeMounts:\n            - name: config\n              mountPath: /etc/prometheus\n      volumes:\n        - name: config\n          configMap:\n            name: prometheus-config\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: prometheus\n  namespace: gitlab-observability\nspec:\n  selector:\n    app: prometheus\n  ports:\n    - port: 9090\n      targetPort: 9090\n```\n\n### 5. 
### 5. Deploy Grafana\nThe Grafana deployment below starts with authentication disabled (`GF_AUTH_ANONYMOUS_ENABLED: true`) for initial setup convenience.\n\n**This setting allows anyone with network access to view all dashboards without logging in.** For production deployments, remove this variable or set it to false and configure a proper authentication provider (LDAP, SAML/SSO, or OAuth) to restrict access to authorized users.\n```yaml\n# grafana-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: grafana\n  namespace: gitlab-observability\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: grafana\n  template:\n    metadata:\n      labels:\n        app: grafana\n    spec:\n      containers:\n        - name: grafana\n          image: grafana/grafana:10.0.0\n          ports:\n            - containerPort: 3000\n          env:\n            # REMOVE or set to 'false' for production.\n            # When 'true', any user with network access can\n            # view dashboards without authentication.\n            - name: GF_AUTH_ANONYMOUS_ENABLED\n              value: 'true'\n          volumeMounts:\n            - name: dashboards-provider\n              mountPath: /etc/grafana/provisioning/dashboards\n            - name: datasources\n              mountPath: /etc/grafana/provisioning/datasources\n            - name: dashboards\n              mountPath: /var/lib/grafana/dashboards\n      volumes:\n        - name: dashboards-provider\n          configMap:\n            name: grafana-dashboards-provider\n        - name: datasources\n          configMap:\n            name: grafana-datasources\n        - name: dashboards\n          configMap:\n            name: grafana-dashboards\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: grafana\n  namespace: gitlab-observability\nspec:\n  selector:\n    app: grafana\n  ports:\n    - port: 3000\n      targetPort: 3000\n```\n\n### 6. Set network policy\nRestrict inter-pod traffic to only the required communication paths:\n```yaml\n# network-policy.yaml\n# Note: because podSelector is empty, this policy selects every pod in the\n# namespace, so it also blocks direct ingress to Grafana on port 3000.\n# Add a rule admitting your ingress controller if you expose the UI.\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n  name: observability-policy\n  namespace: gitlab-observability\nspec:\n  podSelector: {}\n  policyTypes:\n    - Ingress\n  ingress:\n    # Prometheus scrapes exporter and node-exporter\n    - from:\n        - podSelector:\n            matchLabels:\n              app: prometheus\n      ports:\n        - port: 8080\n        - port: 9100\n    # Grafana queries Prometheus\n    - from:\n        - podSelector:\n            matchLabels:\n              app: grafana\n      ports:\n        - port: 9090\n```\n\n
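If you expose Grafana outside the cluster rather than relying on port-forwarding, an Ingress with automated TLS is the usual approach. The sketch below is illustrative only: it assumes an NGINX ingress controller and cert-manager with a ClusterIssuer named `letsencrypt-prod`, none of which are part of this stack, and the hostname is a placeholder. Remember to also extend the network policy above to admit the ingress controller on port 3000.\n```yaml\n# grafana-ingress.yaml (hypothetical)\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: grafana\n  namespace: gitlab-observability\n  annotations:\n    # Assumes cert-manager is installed with this ClusterIssuer\n    cert-manager.io/cluster-issuer: letsencrypt-prod\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - grafana.your-domain.com\n      secretName: grafana-tls\n  rules:\n    - host: grafana.your-domain.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: grafana\n                port:\n                  number: 3000\n```\n\n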
### 7. Validate\n```bash\nkubectl get pods -n gitlab-observability\nkubectl port-forward svc/grafana 3000:3000 -n gitlab-observability\ncurl http://localhost:3000/api/health\n```\n\n## Configuration reference\n### Exporter configuration\n```yaml\n# gitlab-ci-pipelines-exporter.yml (ConfigMap: gcpe-config)\nlog:\n  level: info\ngitlab:\n  url: https://gitlab.your-domain.com\n  maximum_requests_per_second: 10\nproject_defaults:\n  pull:\n    pipeline:\n      jobs:\n        enabled: true\nwildcards:\n  - owner:\n      name: your-group-name\n      kind: group\n    archived: false\n```\n\n### Prometheus configuration\n```yaml\n# prometheus.yml (ConfigMap: prometheus-config)\nglobal:\n  scrape_interval: 15s\nscrape_configs:\n  - job_name: 'gitlab-ci-pipelines-exporter'\n    static_configs:\n      - targets: ['gitlab-ci-pipelines-exporter:8080']\n  - job_name: 'node-exporter'\n    static_configs:\n      - targets: ['node-exporter:9100']\n```\n\n### Grafana data sources\n```yaml\n# datasources.yml (ConfigMap: grafana-datasources)\napiVersion: 1\ndatasources:\n  - name: Prometheus\n    type: prometheus\n    access: proxy\n    url: http://prometheus:9090\n    isDefault: true\n```\n\n```yaml\n# dashboards.yml (ConfigMap: grafana-dashboards-provider)\napiVersion: 1\nproviders:\n  - name: 'default'\n    folder: 'GitLab CI/CD'\n    type: file\n    options:\n      path: /var/lib/grafana/dashboards\n```\n\n## Key metrics\n### Pipeline Exporter metrics\n| Metric | Description |\n| :---- | :---- |\n| `gitlab_ci_pipeline_duration_seconds` | Pipeline execution time |\n| `gitlab_ci_pipeline_status` | Pipeline success/failure by project |\n| `gitlab_ci_pipeline_job_duration_seconds` | Individual job execution time |\n| `gitlab_ci_pipeline_job_status` | Job success/failure status |\n| `gitlab_ci_pipeline_job_artifact_size_bytes` | Artifact storage consumption |\n| `gitlab_ci_pipeline_coverage` | Code coverage percentage |\n| `gitlab_ci_environment_deployment_count` | Deployment frequency |\n| `gitlab_ci_environment_deployment_duration_seconds` | Deployment execution time |\n| `gitlab_ci_environment_behind_commits_count` | Environment drift from main |\n\n### Node Exporter metrics\n| Metric | Description |\n| :---- | :---- |\n| `node_cpu_seconds_total` | CPU utilization |\n| `node_memory_MemAvailable_bytes` | Available memory |\n| `node_filesystem_avail_bytes` | Disk space available |\n| `node_load1` | 1-minute load average |\n\n## Troubleshooting\n### Air-gapped Grafana plugin installation\nFor offline environments, install plugins manually. Example for Kubernetes:\n```bash\n# Copy plugin zip into the Grafana pod\nkubectl cp grafana-polystat-panel-2.1.16.zip \\\n  gitlab-observability/grafana-\u003Cpod-id>:/tmp/\n# Extract plugin\nkubectl exec -it -n gitlab-observability deploy/grafana -- \\\n  sh -c \"unzip /tmp/grafana-polystat-panel-2.1.16.zip -d /var/lib/grafana/plugins/\"\n# Restart Grafana pod\nkubectl rollout restart deployment/grafana -n gitlab-observability\n# Verify installation\nkubectl exec -it -n gitlab-observability deploy/grafana -- \\\n  ls -al /var/lib/grafana/plugins/\n```\n\n## Enterprise considerations\nFor regulated industries, ensure:\n*   **Token security:** Store GitLab Personal Access Tokens in a dedicated secrets manager rather than hardcoding them in ConfigMaps. Enforce token rotation policies and limit scope to **read\\_api** only. A sketch using the External Secrets Operator follows this list.\n*   **Network segmentation:** Deploy behind a reverse proxy with TLS termination. In Kubernetes, use an Ingress controller with automated certificate provisioning.\n*   **Authentication:** Configure Grafana with your organization's identity provider (SAML, LDAP, or OAuth/OIDC) to enforce role-based access control on dashboards.\n\n
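As one example of the external secret operator approach, the `gitlab-token` Secret from step 1 can be sourced from an external store instead of created by hand. The sketch below assumes the External Secrets Operator is installed and that a `ClusterSecretStore` named `vault-backend` exists; the store name and remote path are placeholders for your own secrets manager.\n```yaml\n# external-secret.yaml (hypothetical; requires the External Secrets Operator)\napiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: gitlab-token\n  namespace: gitlab-observability\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: vault-backend\n    kind: ClusterSecretStore\n  target:\n    name: gitlab-token   # the Secret consumed by the exporter Deployment\n  data:\n    - secretKey: token\n      remoteRef:\n        key: observability/gitlab   # example path in the external store\n        property: token\n```\n\n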
## Why GitLab?\nGitLab's API-first design enables custom observability solutions that complement native capabilities like Value Stream Analytics and DORA metrics. The open architecture allows organizations to integrate proven open-source tooling — like the gitlab-ci-pipelines-exporter — directly with their existing enterprise infrastructure, without disrupting established workflows.\n\nAs your observability maturity grows, GitLab's built-in Observability capabilities provide a natural next step — offering deeper, integrated visibility without additional tooling. Learn more about what's available natively in the platform in the [GitLab Observability documentation](https://docs.gitlab.com/operations/observability/observability/).\n",[117,733,734],"product","tutorial",{"featured":14,"template":15,"slug":736},"how-to-build-ci-cd-observability-at-scale",{"content":738,"config":748},{"body":739,"title":740,"description":741,"authors":742,"heroImage":744,"date":745,"category":11,"tags":746},"Most CI/CD tools can run a build and ship a deployment. Where they diverge is what happens when your delivery needs get real: a monorepo with a dozen services, microservices spread across multiple repositories, deployments to dozens of environments, or a platform team trying to enforce standards without becoming a bottleneck.\n  \nGitLab's pipeline execution model was designed for that complexity. Parent-child pipelines, DAG execution, dynamic pipeline generation, multi-project triggers, merge request pipelines with merged results, and CI/CD Components each solve a distinct class of problems. Because they compose, understanding the full model unlocks something more than a faster pipeline. In this article, you'll learn about the five patterns where that model stands out, each mapped to a real engineering scenario with the configuration to match.\n  \nThe configs below are illustrative. The scripts use echo commands to keep the signal-to-noise ratio low. Swap them out for your actual build, test, and deploy steps and they are ready to use.\n\n\n## 1. Monorepos: Parent-child pipelines + DAG execution\n\n\nThe problem: Your monorepo has a frontend, a backend, and a docs site. Every commit triggers a full rebuild of everything, even when only a README changed.\n\n\nGitLab solves this with two complementary features: [parent-child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#parent-child-pipelines) (which let a top-level pipeline spawn isolated sub-pipelines) and [DAG execution via `needs`](https://docs.gitlab.com/ci/yaml/#needs) (which breaks rigid stage-by-stage ordering and lets jobs start the moment their dependencies finish).\n\n\nA parent pipeline detects what changed and triggers only the relevant child pipelines:\n\n```yaml\n# .gitlab-ci.yml\nstages:\n  - trigger\n\ntrigger-services:\n  stage: trigger\n  trigger:\n    include:\n      - local: '.gitlab/ci/api-service.yml'\n      - local: '.gitlab/ci/web-service.yml'\n      - local: '.gitlab/ci/worker-service.yml'\n    strategy: depend\n```\n\n\nEach child pipeline is a fully independent pipeline with its own stages, jobs, and artifacts. 
The parent waits for all of them via [strategy: depend](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#wait-for-downstream-pipeline-to-complete) so you get a single green/red signal at the top level, with full drill-down into each service's pipeline. This organizational separation is the bigger win for large teams: each service owns its pipeline config, changes in one cannot break another, and the complexity stays manageable as the repo grows.\n\n\nOne thing worth knowing: when you pass [multiple files to a single `trigger: include:`](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#combine-multiple-child-pipeline-configuration-files), GitLab merges them into a single child pipeline configuration. This means jobs defined across those files share the same pipeline context and can reference each other with `needs:`, which is what makes the DAG optimization possible. If you split them into separate trigger jobs instead, each would be its own isolated pipeline and cross-file `needs:` references would not work.\n\n\nCombine this with `needs:` inside each child pipeline and you get DAG execution. Your integration tests can start the moment the build finishes, without waiting for other jobs in the same stage.\n\n```yaml\n# .gitlab/ci/api-service.yml\nstages:\n  - build\n  - test\n\nbuild-api:\n  stage: build\n  script:\n    - echo \"Building API service\"\n\ntest-api:\n  stage: test\n  needs: [build-api]\n  script:\n    - echo \"Running API tests\"\n```\n\n\nWhy it matters: Teams with large monorepos typically report significant reductions in pipeline runtime after switching to DAG execution, since jobs no longer wait on unrelated work in the same stage. Parent-child pipelines add the organizational layer that keeps the configuration maintainable as the repo and team grow.\n\n![Local downstream pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738759/Blog/Imported/hackathon-fake-blog-post-s/image3_vwj3rz.png \"Local downstream pipelines\")\n\n## 2. Microservices: Cross-repo, multi-project pipelines\n\n\nThe problem: Your frontend lives in one repo, your backend in another. When the frontend team ships a change, they have no visibility into whether it broke the backend integration and vice versa.\n\n\nGitLab's [multi-project pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#multi-project-pipelines) let one project trigger a pipeline in a completely separate project and wait for the result. The triggering project gets a linked downstream pipeline right in its own pipeline view.\n\n\nThe frontend pipeline builds an API contract artifact and publishes it, then triggers the backend pipeline. The backend fetches that artifact directly using the [Jobs API](https://docs.gitlab.com/api/jobs/#download-a-single-artifact-file-from-specific-tag-or-branch) and validates it before allowing anything to proceed. 
If a breaking change is detected, the backend pipeline fails and the frontend pipeline fails with it.\n\n```yaml\n# frontend repo: .gitlab-ci.yml\nstages:\n  - build\n  - test\n  - trigger-backend\n\nbuild-frontend:\n  stage: build\n  script:\n    - echo \"Building frontend and generating API contract...\"\n    - mkdir -p dist\n    - |\n      echo '{\n        \"api_version\": \"v2\",\n        \"breaking_changes\": false\n      }' > dist/api-contract.json\n    - cat dist/api-contract.json\n  artifacts:\n    paths:\n      - dist/api-contract.json\n    expire_in: 1 hour\n\ntest-frontend:\n  stage: test\n  script:\n    - echo \"All frontend tests passed!\"\n\ntrigger-backend-pipeline:\n  stage: trigger-backend\n  trigger:\n    project: my-org/backend-service\n    branch: main\n    strategy: depend\n  rules:\n    - if: $CI_COMMIT_BRANCH == \"main\"\n```\n\n```yaml\n# backend repo: .gitlab-ci.yml\nstages:\n  - build\n  - test\n\nbuild-backend:\n  stage: build\n  script:\n    - echo \"Building backend service\"\n\nintegration-test:\n  stage: test\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"pipeline\"\n  script:\n    - echo \"Fetching API contract from frontend...\"\n    - |\n      curl --silent --fail \\\n        --header \"JOB-TOKEN: $CI_JOB_TOKEN\" \\\n        --output api-contract.json \\\n        \"${CI_API_V4_URL}/projects/${FRONTEND_PROJECT_ID}/jobs/artifacts/main/raw/dist/api-contract.json?job=build-frontend\"\n    - cat api-contract.json\n    - |\n      if grep -q '\"breaking_changes\": true' api-contract.json; then\n        echo \"FAIL: Breaking API changes detected - backend integration blocked!\"\n        exit 1\n      fi\n      echo \"PASS: API contract is compatible!\"\n```\n\n\nA few things worth noting in this config. The `integration-test` job uses `$CI_PIPELINE_SOURCE == \"pipeline\"` to ensure it only runs when triggered by an upstream pipeline, not on a standalone push to the backend repo. The frontend project ID is referenced via `$FRONTEND_PROJECT_ID`, which should be set as a [CI/CD variable](https://docs.gitlab.com/ci/variables/) in the backend project settings to avoid hardcoding it. If your instance restricts CI/CD job token access (the default on recent GitLab versions), the frontend project also needs to add the backend project to its job token allowlist before the artifact download will succeed.\n\n\nWhy it matters: Cross-service breakage that previously surfaced in production gets caught in the pipeline instead. The dependency between services stops being invisible and becomes something teams can see, track, and act on.\n\n\n![Cross-project pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738762/Blog/Imported/hackathon-fake-blog-post-s/image4_h6mfsb.png \"Cross-project pipelines\")\n\n\n## 3. Multi-tenant / matrix deployments: Dynamic child pipelines\n\n\nThe problem: You deploy the same application to 15 customer environments, or three cloud regions, or dev/staging/prod. Updating a deploy stage across all of them one by one is the kind of work that leads to configuration drift. Writing a separate pipeline for each environment is unmaintainable from day one.\n\n\nGitLab's [dynamic child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#dynamic-child-pipelines) let you generate a pipeline at runtime. A job runs a script that produces a YAML file, and that YAML becomes the pipeline for the next stage. 
The pipeline structure itself becomes data.\n\n\n```yaml\n# .gitlab-ci.yml\nstages:\n  - generate\n  - trigger-environments\n\ngenerate-config:\n  stage: generate\n  script:\n    - |\n      # ENVIRONMENTS can be passed as a CI variable or read from a config file.\n      # Default to dev, staging, prod if not set.\n      ENVIRONMENTS=${ENVIRONMENTS:-\"dev staging prod\"}\n      for ENV in $ENVIRONMENTS; do\n        cat > ${ENV}-pipeline.yml \u003C\u003C EOF\n      stages:\n        - deploy\n        - verify\n      deploy-${ENV}:\n        stage: deploy\n        script:\n          - echo \"Deploying to ${ENV} environment\"\n      verify-${ENV}:\n        stage: verify\n        script:\n          - echo \"Running smoke tests on ${ENV}\"\n      EOF\n      done\n  artifacts:\n    paths:\n      - \"*.yml\"\n    exclude:\n      - \".gitlab-ci.yml\"\n\n.trigger-template:\n  stage: trigger-environments\n  trigger:\n    strategy: depend\n\ntrigger-dev:\n  extends: .trigger-template\n  trigger:\n    include:\n      - artifact: dev-pipeline.yml\n        job: generate-config\n\ntrigger-staging:\n  extends: .trigger-template\n  needs: [trigger-dev]\n  trigger:\n    include:\n      - artifact: staging-pipeline.yml\n        job: generate-config\n\ntrigger-prod:\n  extends: .trigger-template\n  needs: [trigger-staging]\n  trigger:\n    include:\n      - artifact: prod-pipeline.yml\n        job: generate-config\n  when: manual\n```\n\n\nThe generation script loops over an `ENVIRONMENTS` variable rather than hardcoding each environment separately. Pass in a different list via a CI variable or read it from a config file and the pipeline adapts without touching the YAML. The trigger jobs use [extends:](https://docs.gitlab.com/ci/yaml/#extends) to inherit shared configuration from `.trigger-template`, so `strategy: depend` is defined once rather than repeated on every trigger job. Add a new environment by updating the variable, not by duplicating pipeline config. Add [when: manual](https://docs.gitlab.com/ci/yaml/#when) to the production trigger and you get a promotion gate baked right into the pipeline graph.\n\n\nWhy it matters: SaaS companies and platform teams use this pattern to manage dozens of environments without duplicating pipeline logic. The pipeline structure itself stays lean as the deployment matrix grows.\n\n\n![Dynamic pipeline](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738765/Blog/Imported/hackathon-fake-blog-post-s/image7_wr0kx2.png \"Dynamic pipeline\")\n\n\n## 4. MR-first delivery: Merge request pipelines, merged results, and workflow routing\n\n\nThe problem: Your pipeline runs on every push to every branch. Expensive tests run on feature branches that will never merge. Meanwhile, you have no guarantee that what you tested is actually what will land on `main` after a merge.\n\n\nGitLab has three interlocking features that solve this together:\n\n\n*   [Merge request pipelines](https://docs.gitlab.com/ci/pipelines/merge_request_pipelines/) run only when a merge request exists, not on every branch push. This alone eliminates a significant amount of wasted compute.\n\n*   [Merged results pipelines](https://docs.gitlab.com/ci/pipelines/merged_results_pipelines/) go further. GitLab creates a temporary merge commit (your branch plus the current target branch) and runs the pipeline against that. 
You are testing what will actually exist after the merge, not just your branch in isolation.\n\n*   [Workflow rules](https://docs.gitlab.com/ci/yaml/workflow/) let you define exactly which pipeline type runs under which conditions and suppress everything else. The `$CI_OPEN_MERGE_REQUESTS` guard below prevents duplicate pipelines firing for both a branch and its open MR simultaneously.\n\n\nWith those three working together, here is what a tiered pipeline looks like:\n\n```yaml\n# .gitlab-ci.yml\nworkflow:\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS\n      when: never\n    - if: $CI_COMMIT_BRANCH\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\nstages:\n  - fast-checks\n  - expensive-tests\n  - deploy\n\nlint-code:\n  stage: fast-checks\n  script:\n    - echo \"Running linter\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nunit-tests:\n  stage: fast-checks\n  script:\n    - echo \"Running unit tests\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nintegration-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running integration tests (15 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\ne2e-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running E2E tests (30 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nnightly-comprehensive-scan:\n  stage: expensive-tests\n  script:\n    - echo \"Running full nightly suite (2 hours)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\ndeploy-production:\n  stage: deploy\n  script:\n    - echo \"Deploying to production\"\n  rules:\n    - if: $CI_COMMIT_BRANCH == \"main\"\n      when: manual\n```\n\nWith this setup, the pipeline behaves differently depending on context. A push to a feature branch with no open MR runs lint and unit tests only. Once an MR is opened, the workflow rules switch from a branch pipeline to an MR pipeline, and the full integration and E2E suite runs against the merged result. Merging to `main` queues a manual production deployment. A nightly schedule runs the comprehensive scan once, not on every commit.\n\n\nWhy it matters: Teams routinely cut CI costs significantly with this pattern, not by running fewer tests, but by running the right tests at the right time. Merged results pipelines catch the class of bugs that only appear after a merge, before they ever reach `main`.\n\n\n![Conditional pipelines (within a branch with no MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738768/Blog/Imported/hackathon-fake-blog-post-s/image6_dnfcny.png \"Conditional pipelines (within a branch with no MR)\")\n\n\n\n![Conditional pipelines (within an MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738772/Blog/Imported/hackathon-fake-blog-post-s/image1_wyiafu.png \"Conditional pipelines (within an MR)\")\n\n\n\n![Conditional pipelines (on the main branch)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738774/Blog/Imported/hackathon-fake-blog-post-s/image5_r6lkfd.png \"Conditional pipelines (on the main branch)\")\n\n## 5. 
Governed pipelines: CI/CD Components\n\n\nThe problem: Your platform team has defined the right way to build, test, and deploy. But every team has their own `.gitlab-ci.yml` with subtle variations. Security scanning gets skipped. Deployment standards drift. Audits are painful.\n\n\nGitLab [CI/CD Components](https://docs.gitlab.com/ci/components/) let platform teams publish versioned, reusable pipeline building blocks. Application teams consume them with a single `include:` line and optional inputs — no copy-paste, no drift. Components are discoverable through the [CI/CD Catalog](https://docs.gitlab.com/ci/components/#cicd-catalog), which means teams can find and adopt approved building blocks without needing to go through the platform team directly.\n\n\nHere is a component definition from a shared library:\n\n```yaml\n# templates/deploy.yml\nspec:\n  inputs:\n    stage:\n      default: deploy\n    environment:\n      default: production\n---\ndeploy-job:\n  stage: $[[ inputs.stage ]]\n  script:\n    - echo \"Deploying $APP_NAME to $[[ inputs.environment ]]\"\n    - echo \"Deploy URL: $DEPLOY_URL\"\n  environment:\n    name: $[[ inputs.environment ]]\n```\nAnd here is how an application team consumes it:\n\n```yaml\n# Application repo: .gitlab-ci.yml\nvariables:\n  APP_NAME: \"my-awesome-app\"\n  DEPLOY_URL: \"https://api.example.com\"\n\ninclude:\n  - component: gitlab.com/my-org/component-library/build@v1.0.6\n  - component: gitlab.com/my-org/component-library/test@v1.0.6\n  - component: gitlab.com/my-org/component-library/deploy@v1.0.6\n    inputs:\n      environment: staging\n\nstages:\n  - build\n  - test\n  - deploy\n```\n\nThree lines of `include:` replace hundreds of lines of duplicated YAML. The platform team can push a security fix to `v1.0.7` and teams opt in on their own schedule — or the platform team can pin everyone to a minimum version. Either way, one change propagates everywhere instead of needing to be applied repo by repo.\n\n\nPair this with [resource groups](https://docs.gitlab.com/ci/resource_groups/) to prevent concurrent deployments to the same environment, and [protected environments](https://docs.gitlab.com/ci/environments/protected_environments/) to enforce approval gates - and you have a governed delivery platform where compliance is the default, not the exception.\n\n\nWhy it matters: This is the pattern that makes GitLab CI/CD scale across hundreds of teams. Platform engineering teams enforce compliance without becoming a bottleneck. Application teams get a fast path to a working pipeline without reinventing the wheel.\n\n\n![Component pipeline (imported jobs)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738776/Blog/Imported/hackathon-fake-blog-post-s/image2_pizuxd.png \"Component pipeline (imported jobs)\")\n\n## Putting it all together\n\nNone of these features exist in isolation. The reason GitLab's pipeline model is worth understanding deeply is that these primitives compose:\n\n*   A monorepo uses parent-child pipelines, and each child uses DAG execution\n\n*   A microservices platform uses multi-project pipelines, and each project uses MR pipelines with merged results\n\n*   A governed platform uses CI/CD components to standardize the patterns above across every team\n\n\nMost teams discover one of these features when they hit a specific pain point. 
The ones who invest in understanding the full model end up with a delivery system that actually reflects how their engineering organization works, not a pipeline that fights it.\n\n## Other patterns worth exploring\n\n\nThe five patterns above cover the most common structural pain points, but GitLab's pipeline model goes further. A few others worth looking into as your needs grow:\n\n\n*   [Review apps with dynamic environments](https://docs.gitlab.com/ci/environments/) let you spin up a live preview for every feature branch and tear it down automatically when the MR closes. Useful for teams doing frontend work or API changes that need stakeholder sign-off before merging.\n\n*   [Caching and artifact strategies](https://docs.gitlab.com/ci/caching/) are often the fastest way to cut pipeline runtime after the structural work is done. Structuring `cache:` keys around dependency lockfiles and being deliberate about what gets passed between jobs with [artifacts:](https://docs.gitlab.com/ci/yaml/#artifacts) can make a significant difference without changing your pipeline shape at all; a short sketch follows this list.\n\n*   [Scheduled and API-triggered pipelines](https://docs.gitlab.com/ci/pipelines/schedules/) are worth knowing about because not everything should run on a code push. Nightly security scans, compliance reports, and release automation are better modeled as scheduled or [API-triggered](https://docs.gitlab.com/ci/triggers/) pipelines with `$CI_PIPELINE_SOURCE` routing the right jobs for each context.\n\n
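As an example of the lockfile-keyed caching idea, here is a hypothetical Node.js job (the job name, image, and lockfile are placeholders; the same pattern applies to any package manager):\n\n```yaml\n# Hypothetical job: the cache key is derived from the lockfile, so the\n# cache is only rebuilt when dependencies actually change.\ninstall-dependencies:\n  stage: build\n  image: node:20\n  cache:\n    key:\n      files:\n        - package-lock.json\n    paths:\n      - node_modules/\n  script:\n    - npm ci\n```\n\n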
## How to get started\n\nModern software delivery is complex. Teams are managing monorepos with dozens of services, coordinating across multiple repositories, deploying to many environments at once, and trying to keep standards consistent as organizations grow. GitLab's pipeline model was built with all of that in mind.\n\nWhat makes it worth investing time in is how well the pieces fit together. Parent-child pipelines bring structure to large codebases. Multi-project pipelines make cross-team dependencies visible and testable. Dynamic pipelines turn environment management into something that scales gracefully. MR-first delivery with merged results ensures confidence at every step of the review process. And CI/CD Components give platform teams a way to share best practices across an entire organization without becoming a bottleneck.\n\nEach of these features is powerful on its own, and even more so when combined. GitLab gives you the building blocks to design a delivery system that fits how your team actually works, and grows with you as your needs evolve.\n\n> [Start a free trial of GitLab Ultimate](https://about.gitlab.com/free-trial/) to put these pipeline patterns to work today.\n\n## Read more\n\n*   [Variable and artifact sharing in GitLab parent-child pipelines](https://about.gitlab.com/blog/variable-and-artifact-sharing-in-gitlab-parent-child-pipelines/)\n*   [CI/CD inputs: Secure and preferred method to pass parameters to a pipeline](https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/)\n*   [Tutorial: How to set up your first GitLab CI/CD component](https://about.gitlab.com/blog/tutorial-how-to-set-up-your-first-gitlab-ci-cd-component/)\n*   [How to include file references in your CI/CD components](https://about.gitlab.com/blog/how-to-include-file-references-in-your-ci-cd-components/)\n*   [FAQ: GitLab CI/CD Catalog](https://about.gitlab.com/blog/faq-gitlab-ci-cd-catalog/)\n*   [Building a GitLab CI/CD pipeline for a monorepo the easy way](https://about.gitlab.com/blog/building-a-gitlab-ci-cd-pipeline-for-a-monorepo-the-easy-way/)\n*   [A CI/CD component builder's journey](https://about.gitlab.com/blog/a-ci-component-builders-journey/)\n*   [CI/CD Catalog goes GA: No more building pipelines from scratch](https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/)","5 ways GitLab pipeline logic solves real engineering problems","Learn how to scale CI/CD with composable patterns for monorepos, microservices, environments, and governance.",[743],"Omid Khan","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772721753/frfsm1qfscwrmsyzj1qn.png","2026-04-09",[117,747,734,29],"DevOps platform",{"featured":33,"template":15,"slug":749},"5-ways-gitlab-pipeline-logic-solves-real-engineering-problems",{"content":751,"config":760},{"title":752,"description":753,"authors":754,"heroImage":756,"date":757,"body":758,"category":11,"tags":759},"How to use GitLab Container Virtual Registry with Docker Hardened Images","Learn how to simplify container image management with this step-by-step guide.",[755],"Tim Rizzi","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772111172/mwhgbjawn62kymfwrhle.png","2026-03-12","If you're a platform engineer, you've probably had this conversation:\n  \n*\"Security says we need to use hardened base images.\"*\n\n*\"Great, where do I configure credentials for yet another registry?\"*\n\n*\"Also, how do we make sure everyone actually uses them?\"*\n\nOr this one:\n\n*\"Why are our builds so slow?\"*\n\n*\"We're pulling the same 500MB image from Docker Hub in every single job.\"*\n\n*\"Can't we just cache these somewhere?\"*\n\nI've been working on [Container Virtual Registry](https://docs.gitlab.com/user/packages/virtual_registry/container/) at GitLab specifically to solve these problems. It's a pull-through cache that sits in front of your upstream registries — Docker Hub, dhi.io (Docker Hardened Images), MCR, and Quay — and gives your teams a single endpoint to pull from. Images get cached on the first pull. Subsequent pulls come from the cache. 
Your developers don't need to know or care which upstream a particular image came from.\n\nThis article shows you how to set up Container Virtual Registry, specifically with Docker Hardened Images in mind, since that's a combination that makes a lot of sense for teams concerned about security and not making their developers' lives harder.\n\n## What problem are we actually solving?\n\nThe platform teams I usually talk to manage container images across three to five registries:\n\n* **Docker Hub** for most base images\n* **dhi.io** for Docker Hardened Images (security-conscious workloads)\n* **MCR** for .NET and Azure tooling\n* **Quay.io** for Red Hat ecosystem stuff\n* **Internal registries** for proprietary images\n\nEach one has its own:\n\n* Authentication mechanism\n* Network latency characteristics\n* Way of organizing image paths\n\nYour CI/CD configs end up littered with registry-specific logic. Credential management becomes a project unto itself. And every pipeline job pulls the same base images over the network, even though they haven't changed in weeks.\n\nContainer Virtual Registry consolidates this. One registry URL. One authentication flow (GitLab's). Cached images are served from GitLab's infrastructure rather than traversing the internet each time.\n\n## How it works\n\nThe model is straightforward:\n\n```text\nYour pipeline pulls:\n  gitlab.com/virtual_registries/container/1000016/python:3.13\n\nVirtual registry checks:\n  1. Do I have this cached? → Return it\n  2. No? → Fetch from upstream, cache it, return it\n```\n\nYou configure upstreams in priority order. When an image pull comes in, the virtual registry checks each upstream until it finds the image. The result gets cached for a configurable period (default 24 hours).\n\n```text\n┌─────────────────────────────────────────────────────────┐\n│                    CI/CD Pipeline                       │\n│                          │                              │\n│                          ▼                              │\n│   gitlab.com/virtual_registries/container/\u003Cid>/image   │\n└─────────────────────────────────────────────────────────┘\n                           │\n                           ▼\n┌─────────────────────────────────────────────────────────┐\n│            Container Virtual Registry                   │\n│                                                         │\n│  Upstream 1: Docker Hub ────────────────┐               │\n│  Upstream 2: dhi.io (Hardened) ────────┐│               │\n│  Upstream 3: MCR ─────────────────────┐││               │\n│  Upstream 4: Quay.io ────────────────┐│││               │\n│                                      ││││               │\n│                    ┌─────────────────┴┴┴┴──┐            │\n│                    │        Cache          │            │\n│                    │  (manifests + layers) │            │\n│                    └───────────────────────┘            │\n└─────────────────────────────────────────────────────────┘\n```\n\n## Why this matters for Docker Hardened Images\n\n[Docker Hardened Images](https://docs.docker.com/dhi/) are great because of the minimal attack surface, near-zero CVEs, proper software bills of materials (SBOMs), and SLSA provenance. 
If you're evaluating base images for security-sensitive workloads, they should be on your list.\n\nBut adopting them creates the same operational friction as any new registry:\n\n* **Credential distribution**: You need to get Docker credentials to every system that pulls images from dhi.io.\n* **CI/CD changes**: Every pipeline needs to be updated to authenticate with dhi.io.\n* **Developer friction**: People need to remember to use the hardened variants.\n* **Visibility gap**: It's difficult to tell if teams are actually using hardened images vs. regular ones.\n\nVirtual registry addresses each of these:\n\n**Single credential**: Teams authenticate to GitLab. The virtual registry handles upstream authentication. You configure Docker credentials once, at the registry level, and they apply to all pulls.\n\n**No CI/CD changes per team**: Point pipelines at your virtual registry. Done. The upstream configuration is centralized.\n\n**Gradual adoption**: Since images get cached with their full path, you can see in the cache what's being pulled. If someone's pulling `library/python:3.11` instead of the hardened variant, you'll know.\n\n**Audit trail**: The cache shows you exactly which images are in active use. Useful for compliance, useful for understanding what your fleet actually depends on.\n\n## Setting it up\n\nHere's a real setup using the Python client from this demo project.\n\n### Create the virtual registry\n\n```python\nfrom virtual_registry_client import VirtualRegistryClient\n\nclient = VirtualRegistryClient()\n\nregistry = client.create_virtual_registry(\n    group_id=\"785414\",  # Your top-level group ID\n    name=\"platform-images\",\n    description=\"Cached container images for platform teams\"\n)\n\nprint(f\"Registry ID: {registry['id']}\")\n# You'll need this ID for the pull URL\n```\n\n### Add Docker Hub as an upstream\n\nFor official images like Alpine, Python, etc.:\n\n```python\ndocker_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://registry-1.docker.io\",\n    name=\"Docker Hub\",\n    cache_validity_hours=24\n)\n```\n\n### Add Docker Hardened Images (dhi.io)\n\nDocker Hardened Images are hosted on `dhi.io`, a separate registry that requires authentication:\n\n```python\ndhi_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-docker-username\",\n    password=\"your-docker-access-token\",\n    cache_validity_hours=24\n)\n```\n\n### Add other upstreams\n\n```python\n# MCR for .NET teams\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://mcr.microsoft.com\",\n    name=\"Microsoft Container Registry\",\n    cache_validity_hours=48\n)\n\n# Quay for Red Hat stuff\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://quay.io\",\n    name=\"Quay.io\",\n    cache_validity_hours=24\n)\n```\n\n
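Since upstreams are consulted in the order you registered them, it can be worth a quick sanity check of that order before wiring anything into CI. A small sketch using only the demo client methods shown in this post:\n\n```python\n# Print upstreams in priority order (checked first to last on each pull)\nupstreams = client.list_registry_upstreams(registry['id'])\nfor position, upstream in enumerate(upstreams, start=1):\n    print(f\"{position}. {upstream['name']}\")\n```\n\n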
### Update your CI/CD\n\nHere's a `.gitlab-ci.yml` that pulls through the virtual registry:\n\n```yaml\nvariables:\n  VIRTUAL_REGISTRY_ID: \u003Cyour_virtual_registry_ID>\n\nbuild:\n  image: docker:24\n  services:\n    - docker:24-dind\n  before_script:\n    # Authenticate to GitLab (which handles upstream auth for you)\n    - echo \"${CI_JOB_TOKEN}\" | docker login -u gitlab-ci-token --password-stdin gitlab.com\n  script:\n    # All of these go through your single virtual registry\n\n    # Official Docker Hub images (use library/ prefix)\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/library/alpine:latest\n\n    # Docker Hardened Images from dhi.io (no prefix needed)\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/python:3.13\n\n    # .NET from MCR\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/dotnet/sdk:8.0\n```\n\n### Image path formats\n\nDifferent registries use different path conventions:\n\n| Registry | Pull URL Example |\n|----------|------------------|\n| Docker Hub (official) | `.../library/python:3.11-slim` |\n| Docker Hardened Images (dhi.io) | `.../python:3.13` |\n| MCR | `.../dotnet/sdk:8.0` |\n| Quay.io | `.../prometheus/prometheus:latest` |\n\n### Verify it's working\n\nAfter some pulls, check your cache:\n\n```python\nupstreams = client.list_registry_upstreams(registry['id'])\nfor upstream in upstreams:\n    entries = client.list_cache_entries(upstream['id'])\n    print(f\"{upstream['name']}: {len(entries)} cached entries\")\n```\n\n## What the numbers look like\n\nI ran tests pulling images through the virtual registry:\n\n| Metric | Without Cache | With Warm Cache |\n|--------|---------------|-----------------|\n| Pull time (Alpine) | 10.3s | 4.2s |\n| Pull time (Python 3.13 DHI) | 11.6s | ~4s |\n| Network roundtrips to upstream | Every pull | Cache misses only |\n\nThe first pull is the same speed (it has to fetch from upstream). Every pull after that, for the cache validity period, comes straight from GitLab's storage. No network hop to Docker Hub, dhi.io, MCR, or wherever the image lives.\n\nFor a team running hundreds of pipeline jobs per day, that's hours of cumulative build time saved.\n\n## Practical considerations\nHere are some considerations to keep in mind:\n\n### Cache validity\n\n24 hours is the default. For security-sensitive images where you want patches quickly, consider 12 hours or less:\n\n```python\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-username\",\n    password=\"your-token\",\n    cache_validity_hours=12\n)\n```\n\nFor stable, infrequently updated images (like specific version tags), longer validity is fine.\n\n### Upstream priority\n\nUpstreams are checked in order. If you have images with the same name on different registries, the first matching upstream wins.\n\n### Limits\n\n* Maximum of 20 virtual registries per group\n* Maximum of 20 upstreams per virtual registry\n\n## Configuration via UI\n\nYou can also configure virtual registries and upstreams directly from the GitLab UI—no API calls required. Navigate to your group's **Settings > Packages and registries > Virtual Registry** to:\n\n* Create and manage virtual registries\n* Add, edit, and reorder upstream registries\n* View and manage the cache\n* Monitor which images are being pulled\n\n## What's next\n\nWe're actively developing:\n\n* **Allow/deny lists**: Use regex to control which images can be pulled from specific upstreams.\n\nThis is beta software. 
It works, people are using it in production, but we're still iterating based on feedback.\n\n## Share your feedback\n\nIf you're a platform engineer dealing with container registry sprawl, I'd like to understand your setup:\n\n* How many upstream registries are you managing?\n* What's your biggest pain point with the current state?\n* Would something like this help, and if not, what's missing?\n\nPlease share your experiences in the [Container Virtual Registry feedback issue](https://gitlab.com/gitlab-org/gitlab/-/work_items/589630).\n## Related resources\n- [New GitLab metrics and registry features help reduce CI/CD bottlenecks](https://about.gitlab.com/blog/new-gitlab-metrics-and-registry-features-help-reduce-ci-cd-bottlenecks/#container-virtual-registry)\n- [Container Virtual Registry documentation](https://docs.gitlab.com/user/packages/virtual_registry/container/)\n- [Container Virtual Registry API](https://docs.gitlab.com/api/container_virtual_registries/)",[734,733,29],{"featured":14,"template":15,"slug":761},"using-gitlab-container-virtual-registry-with-docker-hardened-images",{"promotions":763},[764,778,789,801],{"id":765,"categories":766,"header":768,"text":769,"button":770,"image":775},"ai-modernization",[767],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":771,"config":772},"Get your AI maturity score",{"href":773,"dataGaName":774,"dataGaLocation":252},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":776},{"src":777},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":779,"categories":780,"header":781,"text":769,"button":782,"image":786},"devops-modernization",[733,576],"Are you just managing tools or shipping innovation?",{"text":783,"config":784},"Get your DevOps maturity score",{"href":785,"dataGaName":774,"dataGaLocation":252},"/assessments/devops-modernization-assessment/",{"config":787},{"src":788},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":790,"categories":791,"header":793,"text":769,"button":794,"image":798},"security-modernization",[792],"security","Are you trading speed for security?",{"text":795,"config":796},"Get your security maturity score",{"href":797,"dataGaName":774,"dataGaLocation":252},"/assessments/security-modernization-assessment/",{"config":799},{"src":800},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":802,"paths":803,"header":806,"text":807,"button":808,"image":813},"github-azure-migration",[804,805],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's Azure move?","GitHub is already rebuilding around Azure. Find out what it means for you.",{"text":809,"config":810},"See how GitLab compares to GitHub",{"href":811,"dataGaName":812,"dataGaLocation":252},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":814},{"src":788},{"header":816,"blurb":817,"button":818,"secondaryButton":823},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":819,"config":820},"Get your free trial",{"href":821,"dataGaName":59,"dataGaLocation":822},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":514,"config":824},{"href":63,"dataGaName":64,"dataGaLocation":822},1777493649848]