# Tutorial: Install VS Code on a cloud provider VM and set up remote access

*Cesar Saavedra · May 6, 2024*

DevSecOps teams can sometimes find they need to run an instance of Visual Studio Code (VS Code) remotely for team members to share when they don't have enough local resources. However, installing, running, and using VS Code on a remote virtual machine (VM) via a cloud provider can be a complex process full of pitfalls and false starts. This tutorial covers how to automate the installation of VS Code on a VM running on a cloud provider.

This approach involves two separate GitLab projects, each with its own pipeline. The first one uses Terraform to instantiate a Debian Linux virtual machine on GCP. The second one installs VS Code on the newly instantiated VM. Lastly, we show how to set up your local Mac laptop to connect to and use the VS Code instance installed on the remote VM.

## Create a Debian Linux distribution VM on GCP

Here are the steps to create a Debian Linux distribution VM on GCP.

### Prerequisites

1. A GCP account. If you don't have one, please [create one](https://cloud.google.com/free?hl=en).
2. A GitLab account on [gitlab.com](https://gitlab.com/users/sign_in).

**Note:** This installation uses:

- Debian 5.10.205-2 (2023-12-31) x86_64 GNU/Linux, a.k.a. Debian 11

### Create a service account and download its key

Before you create the first GitLab project, you need to create a service account in GCP and then generate and download a key. You will need this key so that your GitLab pipelines can communicate with GCP and the GitLab API.

1. To authenticate GCP with GitLab, sign in to your GCP account and create a [GCP service account](https://cloud.google.com/docs/authentication#service-accounts) with the following roles:
   - `Compute Network Admin`
   - `Compute Admin`
   - `Service Account User`
   - `Service Account Admin`
   - `Security Admin`
2. Download the JSON file with the service account key you created in the previous step.
3. On your computer, encode the JSON file to `base64` (replace `/path/to/sa-key.json` with the path where your key is located):

   ```shell
   base64 -i /path/to/sa-key.json | tr -d '\n'
   ```

**NOTE:** Save the output of this command. You will use it later as the value for the `BASE64_GOOGLE_CREDENTIALS` environment variable.
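If you prefer to script these steps, here is a minimal `gcloud` sketch of the same flow. The service account name (`gitlab-vm-sa`) and project ID (`my-gcp-project`) are placeholders, not values used elsewhere in this tutorial; adjust them to your environment.

```shell
# Sketch only: create the service account, grant the roles listed above,
# and download and base64-encode its key. Names below are placeholders.
PROJECT_ID="my-gcp-project"    # your GCP project ID
SA_NAME="gitlab-vm-sa"         # hypothetical service account name
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT_ID}"

for role in roles/compute.networkAdmin roles/compute.admin \
            roles/iam.serviceAccountUser roles/iam.serviceAccountAdmin \
            roles/iam.securityAdmin; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member "serviceAccount:${SA_EMAIL}" --role "${role}"
done

# Generate the JSON key, then base64-encode it for the CI/CD variable
gcloud iam service-accounts keys create sa-key.json --iam-account "${SA_EMAIL}"
base64 -i sa-key.json | tr -d '\n'
```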
### Configure your GitLab project

Next, you need to create and configure the first GitLab project.

1. Create a group in your GitLab workspace and name it `gcpvmlinuxvscode`.

1. Inside your newly created group, clone the following project:

   ```shell
   git clone git@gitlab.com:tech-marketing/sandbox/gcpvmlinuxvscode/gcpvmlnxsetup.git
   ```

1. Drill into your newly cloned project, `gcpvmlnxsetup`, and set up the following CI/CD variables to configure it:
   1. On the left sidebar, select **Settings > CI/CD**.
   1. Expand **Variables**.
   1. Set the variable `BASE64_GOOGLE_CREDENTIALS` to the `base64`-encoded JSON file you created in the previous section.
   1. Set the variable `TF_VAR_gcp_project` to your GCP `project` ID.
   1. Set the variable `TF_VAR_gcp_region` to your GCP `region` ID, e.g. `us-east1`, which is also its default value.
   1. Set the variable `TF_VAR_gcp_zone` to your GCP `zone` ID, e.g. `us-east1-d`, which is also its default value.
   1. Set the variable `TF_VAR_machine_type` to the GCP `machine type` ID, e.g. `e2-standard-2`, which is also its default value.
   1. Set the variable `TF_VAR_gcp_vmname` to the GCP `vm name` you want to give the VM, e.g. `my-test-vm`, which is also its default value.

**Note:** We have followed a minimalist approach to set up this VM. If you would like to customize the VM further, please refer to the [Google Terraform provider](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference) and the [Google Compute Instance Terraform provider](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance) documentation for additional resource options.

### Provision your VM

After configuring your project, manually trigger the provisioning of your VM as follows:

1. On the left sidebar, go to **Build > Pipelines**.
1. Next to **Play** (**{play}**), select the dropdown list icon (**{chevron-lg-down}**).
1. Select **Deploy** to manually trigger the deployment job.

When the pipeline finishes successfully, you can see your new VM on GCP:

- Check it on your [GCP console's VM instances list](https://console.cloud.google.com/compute/instances).

### Remove the VM

**Important note:** Only run the cleanup job when you no longer need the GCP VM and/or the VS Code instance you installed on it.

A manual cleanup job is included in your pipeline by default. To remove all created resources:

1. On the left sidebar, select **Build > Pipelines** and select the most recent pipeline.
1. For the `destroy` job, select **Play** (**{play}**).

## Install and set up VS Code on a GCP VM

Perform the steps in this section only after you have successfully finished the previous sections. In this section, you will create the second GitLab project, which installs VS Code and its dependencies on the running GCP VM.

### Prerequisites

1. A provisioned GCP VM, which we covered in the previous sections.

**Note:** This installation uses:

- VS Code Version 1.85.2

### Configure your project

**Note:** Since you will be using the `ssh` command multiple times on your laptop, we strongly suggest that you make a backup copy of your laptop's local `$HOME/.ssh` directory before continuing.

Next, you need to create and configure the second GitLab project.

1. Head over to your GitLab group `gcpvmlinuxvscode`, which you created at the beginning of this post.

1. Inside the group `gcpvmlinuxvscode`, clone the following project:

   ```shell
   git clone git@gitlab.com:tech-marketing/sandbox/gcpvmlinuxvscode/vscvmsetup.git
   ```

1. Drill into your newly cloned project, `vscvmsetup`, and set up the following CI/CD variables to configure it:
   1. On the left sidebar, select **Settings > CI/CD**.
   1. Expand **Variables**.
   1. Set the variable `BASE64_GOOGLE_CREDENTIALS` to the `base64`-encoded JSON file you created for project `gcpvmlnxsetup`. You can copy this value from the variable with the same name in that project.
   1. Set the variable `gcp_project` to your GCP `project` ID.
   1. Set the variable `gcp_vmname` to the name of the VM you created in the first project, e.g. `my-test-vm`.
   1. Set the variable `gcp_zone` to your GCP `zone` ID, e.g. `us-east1-d`.
   1. Set the variable `vm_pwd` to the password that you will use to SSH to the VM.
   1. Set the variable `gcp_vm_username` to the first portion (before the "@" sign) of the email associated with your GCP account, which should be your GitLab email.

### Run the project pipeline

After configuring the second GitLab project, manually trigger the provisioning of VS Code and its dependencies to the GCP VM as follows:

1. On the left sidebar, select **Build > Pipelines** and select **Run pipeline**. On the next screen, select **Run pipeline** again.

    The pipeline will (the sketch after these steps shows roughly the commands involved):

    - install `xauth` on the virtual machine, which is needed for X11 communication between your local desktop and the VM
    - install `git` on the VM
    - install Visual Studio Code on the VM

2. At this point, you can wait until the pipeline successfully completes. If you don't want to wait, you can continue with the first step of the next section. However, you must ensure the pipeline has successfully completed before you perform Step 2 of the next section.
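The installation itself is automated by the `vscvmsetup` pipeline, but if you are curious, or ever need to reproduce it by hand over SSH, the steps on Debian 11 look roughly like the sketch below. It follows the standard VS Code apt repository setup for Debian-based systems; treat it as an illustration rather than the project's exact implementation.

```shell
# Sketch only: roughly what the pipeline automates on the Debian 11 VM.
sudo apt-get update
sudo apt-get install -y xauth git wget gpg apt-transport-https

# Add Microsoft's signing key and the VS Code apt repository, then install VS Code
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | \
  sudo tee /usr/share/keyrings/packages.microsoft.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" | \
  sudo tee /etc/apt/sources.list.d/vscode.list > /dev/null
sudo apt-get update
sudo apt-get install -y code
```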
### Connect to your VM from your local Mac laptop

Now that you have an instance of VS Code running on a Linux VM on GCP, you need to configure your Mac laptop to act as a client to the remote VM. Follow these steps:

1. To connect to the remote VS Code from your Mac, you must first install `XQuartz` on your Mac. You can execute the following command on your Mac to install it:

```shell
brew install xquartz
```

Or, you can follow the instructions in this [tutorial](https://und.edu/research/computational-research-center/tutorials/mac-x11.html) from the University of North Dakota.

After the pipeline for project `vscvmsetup` (the pipeline you manually triggered in the previous section) successfully runs to completion, you can connect to the remote VS Code as follows:

2. Launch `XQuartz` on your Mac (it should be located in your Applications folder). Launching it should open an `xterm` on your Mac. If it does not, select **Applications > Terminal** from the `XQuartz` top menu.
3. On the `xterm`, enter the following command:

```shell
gcloud compute ssh --zone "[GCP zone]" "[name of your VM]" --project "[GCP project]" --ssh-flag="-Y"
```

Where:

- `[name of your VM]` is the name of the VM you created in the first project. Its value should be the same as the `gcp_vmname` variable.
- `[GCP zone]` is the zone where the VM is running. Its value should be the same as the `gcp_zone` variable.
- `[GCP project]` is your GCP project ID. Its value should be the same as the `gcp_project` variable.

***Note: If you have not installed the Google Cloud CLI, please do so by following the [Google documentation](https://cloud.google.com/sdk/docs/install).***
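For example, with the default values used earlier in this tutorial and a hypothetical project ID of `my-gcp-project`, the command would look like this:

```shell
# Example only: my-gcp-project is a hypothetical project ID; my-test-vm and
# us-east1-d are the default values used earlier in this tutorial.
gcloud compute ssh --zone "us-east1-d" "my-test-vm" --project "my-gcp-project" --ssh-flag="-Y"
```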
4. If you have not used SSH on your Mac before, you may not have a `.ssh` directory in your `HOME` directory. If this is the case, you will be asked whether you would like to continue and create this directory. Answer **Y**.

5. Next, you will be asked to enter the same password twice to generate a public/private key pair. Enter the same password you used when defining the variable `vm_pwd` in the required configuration above.

6. Once the SSH key is done propagating, you will need to enter the password again two times to log in to the VM.

7. You should now be logged in to the VM.

### Create a personal access token

The assumption here is that you already have a GitLab project that you want to open and work on in the remote VS Code. To do this, you will need to clone that GitLab project from the VM, using a personal access token (PAT).

1. Head over to your GitLab project (the one that you'd like to open from the remote VS Code).
2. From your GitLab project, create a [PAT](https://docs.gitlab.com/user/profile/personal_access_tokens/#create-a-personal-access-token), name it `pat-gcpvm`, and ensure that it has the following scopes: `read_repository`, `write_repository`, `read_registry`, `write_registry`, and `ai_features`.
3. Save the generated PAT somewhere safe; you will need it later.

### Clone the repository

1. On your local Mac, from the `xterm` where you are logged in to the remote VM, enter the following command:

```shell
git clone https://[your GitLab username]:[personal_access_token]@gitlab.com/[GitLab project name].git
```

Where:

- `[your GitLab username]` is your GitLab handle.
- `[personal_access_token]` is the PAT you created in the previous section.
- `[GitLab project name]` is the full path (group and name) of the project you'd like to open in the remote VS Code.

## Launch Visual Studio Code

1. From the `xterm` where you are logged in to the VM, enter the following command:

```shell
code
```

Wait for a few seconds and Visual Studio Code will appear on your Mac screen.

2. From the VS Code menu, select **File > Open Folder...**
3. In the File chooser, select the top-level directory of the GitLab project you cloned in the previous section.

That's it! You're ready to start working on your cloned GitLab project using the VS Code that you installed on a remote Linux-based VM.
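As a shortcut for steps 2 and 3, you can also pass the cloned project's directory straight to the `code` command from the `xterm`. The directory name below is just an example; use whatever folder `git clone` created in the previous section:

```shell
# Open the cloned project folder directly (equivalent to File > Open Folder...).
# Replace my-project with the directory created by git clone.
code ~/my-project
```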
### Troubleshooting

While using the remotely installed VS Code from your local Mac, you may encounter a few issues. In this section, we provide guidance on how to mitigate them.

#### Keyboard keys not mapped correctly

If, while running VS Code, your keyboard keys are not mapped correctly (e.g. the letter e acts as backspace, the letter r as tab, the letter s as clear line), do the following:

1. In VS Code, select **File > Preferences > Settings**.
1. Search for "keyboard". (If the letter e itself is affected, search for "board" instead.) Click the "Keyboard" entry under "Application."
1. Ensure that Keyboard Dispatch is set to "keyCode."
1. Restart VS Code.
1. If you need further help, this is a good resource for [keyboard problems](https://github.com/microsoft/vscode/wiki/Keybinding-Issues#troubleshoot-linux-keybindings).

#### Error loading webview: Error

If, while running VS Code, you get a message saying:

"Error loading webview: Error: Could not register service worker: InvalidStateError: Failed to register a ServiceWorker: The document is in an invalid state."

1. Exit VS Code and then enter this command from the `xterm` window:

   `killall code`

   You may need to execute this command two or three times in a row to kill all VS Code processes.

2. Ensure that all VS Code-related processes are gone by entering the following command from the `xterm` window:

   `ps -ef | grep code`

3. Once all the VS Code-related processes are gone, restart VS Code by entering the following command from the `xterm` window:

   `code`

#### Some useful commands to debug SSH

Here are some useful commands to run on the VM that can help you debug SSH issues:

1. To get the status, location, and latest events of sshd:

   `sudo systemctl status ssh`

2. To see the log of sshd:

   `journalctl -b -a -u ssh`

3. To restart the SSH daemon:

   `sudo systemctl restart ssh.service`

   or

   `sudo systemctl restart ssh`

4. To start a root shell:

   `sudo -s`

## Get started

This article described how to:

- instantiate a Linux-based VM on GCP
- install VS Code and its dependencies on the remote VM
- clone an existing GitLab project of yours onto the remote VM
- open your remotely cloned project from the remotely installed VS Code

As a result, you can use your laptop as a thin client that accesses a remote server, where all the work takes place.

> The automation to get all these parts in place was done by GitLab. Sign up for a [free GitLab Ultimate trial](https://about.gitlab.com/free-trial/) to get started today!
Validate\n```bash\nkubectl get pods -n gitlab-observability\nkubectl port-forward svc/grafana 3000:3000 -n gitlab-observability\ncurl http://localhost:3000/api/health\n```\n\n## Configuration reference\n### Exporter configuration\n```yaml\n# gitlab-ci-pipelines-exporter.yml (ConfigMap: gcpe-config)\nlog:\n  level: info\ngitlab:\n  url: https://gitlab.your-domain.com\n  maximum_requests_per_second: 10\nproject_defaults:\n  pull:\n    pipeline:\n      jobs:\n        enabled: true\nwildcards:\n  - owner:\n      name: your-group-name\n      kind: group\n    archived: false\n```\n\n### Prometheus configuration\n```yaml\n# prometheus.yml (ConfigMap: prometheus-config)\nglobal:\n  scrape_interval: 15s\nscrape_configs:\n  - job_name: 'gitlab-ci-pipelines-exporter'\n    static_configs:\n      - targets: ['gitlab-ci-pipelines-exporter:8080']\n  - job_name: 'node-exporter'\n    static_configs:\n      - targets: ['node-exporter:9100']\n```\n\n### Grafana data sources\n```yaml\n# datasources.yml (ConfigMap: grafana-datasources)\napiVersion: 1\ndatasources:\n  - name: Prometheus\n    type: prometheus\n    access: proxy\n    url: http://prometheus:9090\n    isDefault: true\n# dashboards.yml (ConfigMap: grafana-dashboards-provider)\napiVersion: 1\nproviders:\n  - name: 'default'\n    folder: 'GitLab CI/CD'\n    type: file\n    options:\n      path: /var/lib/grafana/dashboards\n```\n\n## Key metrics\n### Pipeline Exporter metrics\n| Metric | Description |\n| :---- | :---- |\n| `gitlab_ci_pipeline_duration_seconds` | Pipeline execution time |\n| `gitlab_ci_pipeline_status` | Pipeline success/failure by project |\n| `gitlab_ci_pipeline_job_duration_seconds` | Individual job execution time |\n| `gitlab_ci_pipeline_job_status` | Job success/failure status |\n| `gitlab_ci_pipeline_job_artifact_size_bytes` | Artifact storage consumption |\n| `gitlab_ci_pipeline_coverage` | Code coverage percentage |\n| `gitlab_ci_environment_deployment_count` | Deployment frequency |\n| `gitlab_ci_environment_deployment_duration_seconds` | Deployment execution time |\n| `gitlab_ci_environment_behind_commits_count` | Environment drift from main |\n\n### Node Exporter metrics\n| Metric | Description |\n| :---- | :---- |\n| `node_cpu_seconds_total` | CPU utilization |\n| `node_memory_MemAvailable_bytes` | Available memory |\n| `node_filesystem_avail_bytes` | Disk space available |\n| `node_load1` | 1-minute load average |\n\n## Troubleshooting\n### Air-gapped Grafana plugin installation\nFor offline environments, install plugins manually. Example for Kubernetes:\n```bash\n# Copy plugin zip into the Grafana pod\nkubectl cp grafana-polystat-panel-2.1.16.zip \\\n  gitlab-observability/grafana-\u003Cpod-id>:/tmp/\n# Extract plugin\nkubectl exec -it -n gitlab-observability deploy/grafana -- \\\n  sh -c \"unzip /tmp/grafana-polystat-panel-2.1.16.zip -d /var/lib/grafana/plugins/\"\n# Restart Grafana pod\nkubectl rollout restart deployment/grafana -n gitlab-observability\n# Verify installation\nkubectl exec -it -n gitlab-observability deploy/grafana -- \\\n  ls -al /var/lib/grafana/plugins/\n```\n\n## Enterprise considerations\nFor regulated industries, ensure:\n*   **Token security:** Store GitLab Personal Access Tokens in a dedicated secrets manager rather than hardcoded in ConfigMaps. Enforce token rotation policies and limit scope to **read\\_api** only.\n*   **Network segmentation:** Deploy behind a reverse proxy with TLS termination. 
In Kubernetes, use an Ingress controller with automated certificate provisioning.\n*   **Authentication:** Configure Grafana with your organization's identity provider (SAML, LDAP, or OAuth/OIDC) to enforce role-based access control on dashboards.\n\n## Why GitLab?\nGitLab's API-first design enables custom observability solutions that complement native capabilities like Value Stream Analytics and DORA metrics. The open architecture allows organizations to integrate proven open-source tooling — like the gitlab-ci-pipelines-exporter — directly with their existing enterprise infrastructure, without disrupting established workflows.\n\nAs your observability maturity grows, GitLab's built-in Observability capabilities provide a natural next step — offering deeper, integrated visibility without additional tooling. Learn more about what's available natively in the platform for [GitLab Observability](https://docs.gitlab.com/operations/observability/observability/).\n",[110,728,23],"product",{"featured":31,"template":15,"slug":730},"how-to-build-ci-cd-observability-at-scale",{"content":732,"config":743},{"body":733,"title":734,"description":735,"authors":736,"heroImage":738,"date":739,"category":11,"tags":740},"Most CI/CD tools can run a build and ship a deployment. Where they diverge is what happens when your delivery needs get real: a monorepo with a dozen services, microservices spread across multiple repositories, deployments to dozens of environments, or a platform team trying to enforce standards without becoming a bottleneck.\n  \nGitLab's pipeline execution model was designed for that complexity. Parent-child pipelines, DAG execution, dynamic pipeline generation, multi-project triggers, merge request pipelines with merged results, and CI/CD Components each solve a distinct class of problems. Because they compose, understanding the full model unlocks something more than a faster pipeline. In this article, you'll learn about the five patterns where that model stands out, each mapped to a real engineering scenario with the configuration to match.\n  \nThe configs below are illustrative. The scripts use echo commands to keep the signal-to-noise ratio low. Swap them out for your actual build, test, and deploy steps and they are ready to use.\n\n\n## 1. Monorepos: Parent-child pipelines + DAG execution\n\n\nThe problem: Your monorepo has a frontend, a backend, and a docs site. Every commit triggers a full rebuild of everything, even when only a README changed.\n\n\nGitLab solves this with two complementary features: [parent-child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#parent-child-pipelines) (which let a top-level pipeline spawn isolated sub-pipelines) and [DAG execution via `needs`](https://docs.gitlab.com/ci/yaml/#needs) (which breaks rigid stage-by-stage ordering and lets jobs start the moment their dependencies finish).\n\n\nA parent pipeline detects what changed and triggers only the relevant child pipelines:\n\n```yaml\n# .gitlab-ci.yml\nstages:\n  - trigger\n\ntrigger-services:\n  stage: trigger\n  trigger:\n    include:\n      - local: '.gitlab/ci/api-service.yml'\n      - local: '.gitlab/ci/web-service.yml'\n      - local: '.gitlab/ci/worker-service.yml'\n    strategy: depend\n```\n\n\nEach child pipeline is a fully independent pipeline with its own stages, jobs, and artifacts. 
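\n\nThe \"detects what changed\" part typically comes from [rules: changes](https://docs.gitlab.com/ci/yaml/#ruleschanges). As a minimal sketch, assuming the services live under `api/`, `web/`, and `worker/` (the paths and job name are placeholders), each service can get its own gated trigger job:\n\n```yaml\n# .gitlab-ci.yml (per-service trigger variant)\ntrigger-api:\n  stage: trigger\n  rules:\n    - changes:\n        - 'api/**/*'\n  trigger:\n    include:\n      - local: '.gitlab/ci/api-service.yml'\n    strategy: depend\n```\n\nSplitting the triggers this way trades away cross-file `needs:` references between services, a nuance covered below.\n\n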
The parent waits for all of them via [strategy: depend](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#wait-for-downstream-pipeline-to-complete) so you get a single green/red signal at the top level, with full drill-down into each service's pipeline. This organizational separation is the bigger win for large teams: each service owns its pipeline config, changes in one cannot break another, and the complexity stays manageable as the repo grows.\n\n\nOne thing worth knowing: when you pass [multiple files to a single `trigger: include:`](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#combine-multiple-child-pipeline-configuration-files), GitLab merges them into a single child pipeline configuration. This means jobs defined across those files share the same pipeline context and can reference each other with `needs:`, which is what makes the DAG optimization possible. If you split them into separate trigger jobs instead, each would be its own isolated pipeline and cross-file `needs:` references would not work.\n\n\nCombine this with `needs:` inside each child pipeline and you get DAG execution. Your integration tests can start the moment the build finishes, without waiting for other jobs in the same stage.\n\n```yaml\n# .gitlab/ci/api-service.yml\nstages:\n  - build\n  - test\n\nbuild-api:\n  stage: build\n  script:\n    - echo \"Building API service\"\n\ntest-api:\n  stage: test\n  needs: [build-api]\n  script:\n    - echo \"Running API tests\"\n```\n\n\nWhy it matters: Teams with large monorepos typically report significant reductions in pipeline runtime after switching to DAG execution, since jobs no longer wait on unrelated work in the same stage. Parent-child pipelines add the organizational layer that keeps the configuration maintainable as the repo and team grow.\n\n![Local downstream pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738759/Blog/Imported/hackathon-fake-blog-post-s/image3_vwj3rz.png \"Local downstream pipelines\")\n\n## 2. Microservices: Cross-repo, multi-project pipelines\n\n\nThe problem: Your frontend lives in one repo, your backend in another. When the frontend team ships a change, they have no visibility into whether it broke the backend integration and vice versa.\n\n\nGitLab's [multi-project pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#multi-project-pipelines) let one project trigger a pipeline in a completely separate project and wait for the result. The triggering project gets a linked downstream pipeline right in its own pipeline view.\n\n\nThe frontend pipeline builds an API contract artifact and publishes it, then triggers the backend pipeline. The backend fetches that artifact directly using the [Jobs API](https://docs.gitlab.com/api/jobs/#download-a-single-artifact-file-from-specific-tag-or-branch) and validates it before allowing anything to proceed. 
If a breaking change is detected, the backend pipeline fails and the frontend pipeline fails with it.\n\n```yaml\n# frontend repo: .gitlab-ci.yml\nstages:\n  - build\n  - test\n  - trigger-backend\n\nbuild-frontend:\n  stage: build\n  script:\n    - echo \"Building frontend and generating API contract...\"\n    - mkdir -p dist\n    - |\n      echo '{\n        \"api_version\": \"v2\",\n        \"breaking_changes\": false\n      }' > dist/api-contract.json\n    - cat dist/api-contract.json\n  artifacts:\n    paths:\n      - dist/api-contract.json\n    expire_in: 1 hour\n\ntest-frontend:\n  stage: test\n  script:\n    - echo \"All frontend tests passed!\"\n\ntrigger-backend-pipeline:\n  stage: trigger-backend\n  trigger:\n    project: my-org/backend-service\n    branch: main\n    strategy: depend\n  rules:\n    - if: $CI_COMMIT_BRANCH == \"main\"\n```\n\n```yaml\n# backend repo: .gitlab-ci.yml\nstages:\n  - build\n  - test\n\nbuild-backend:\n  stage: build\n  script:\n    - echo \"All backend tests passed!\"\n\nintegration-test:\n  stage: test\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"pipeline\"\n  script:\n    - echo \"Fetching API contract from frontend...\"\n    - |\n      curl --silent --fail \\\n        --header \"JOB-TOKEN: $CI_JOB_TOKEN\" \\\n        --output api-contract.json \\\n        \"${CI_API_V4_URL}/projects/${FRONTEND_PROJECT_ID}/jobs/artifacts/main/raw/dist/api-contract.json?job=build-frontend\"\n    - cat api-contract.json\n    - |\n      if grep -q '\"breaking_changes\": true' api-contract.json; then\n        echo \"FAIL: Breaking API changes detected - backend integration blocked!\"\n        exit 1\n      fi\n      echo \"PASS: API contract is compatible!\"\n```\n\n\nA few things worth noting in this config. The `integration-test` job uses `$CI_PIPELINE_SOURCE == \"pipeline\"` to ensure it only runs when triggered by an upstream pipeline, not on a standalone push to the backend repo. The frontend project ID is referenced via `$FRONTEND_PROJECT_ID`, which should be set as a [CI/CD variable](https://docs.gitlab.com/ci/variables/) in the backend project settings to avoid hardcoding it.\n\n\nWhy it matters: Cross-service breakage that previously surfaced in production gets caught in the pipeline instead. The dependency between services stops being invisible and becomes something teams can see, track, and act on.\n\n\n![Cross-project pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738762/Blog/Imported/hackathon-fake-blog-post-s/image4_h6mfsb.png \"Cross-project pipelines\")\n\n\n## 3. Multi-tenant / matrix deployments: Dynamic child pipelines\n\n\nThe problem: You deploy the same application to 15 customer environments, or three cloud regions, or dev/staging/prod. Updating a deploy stage across all of them one by one is the kind of work that leads to configuration drift. Writing a separate pipeline for each environment is unmaintainable from day one.\n\n\nGitLab's [dynamic child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#dynamic-child-pipelines) let you generate a pipeline at runtime. A job runs a script that produces a YAML file, and that YAML becomes the pipeline for the next stage. 
The pipeline structure itself becomes data.\n\n\n```yaml\n# .gitlab-ci.yml\nstages:\n  - generate\n  - trigger-environments\n\ngenerate-config:\n  stage: generate\n  script:\n    - |\n      # ENVIRONMENTS can be passed as a CI variable or read from a config file.\n      # Default to dev, staging, prod if not set.\n      ENVIRONMENTS=${ENVIRONMENTS:-\"dev staging prod\"}\n      for ENV in $ENVIRONMENTS; do\n        cat > ${ENV}-pipeline.yml \u003C\u003C EOF\n      stages:\n        - deploy\n        - verify\n      deploy-${ENV}:\n        stage: deploy\n        script:\n          - echo \"Deploying to ${ENV} environment\"\n      verify-${ENV}:\n        stage: verify\n        script:\n          - echo \"Running smoke tests on ${ENV}\"\n      EOF\n      done\n  artifacts:\n    paths:\n      - \"*.yml\"\n    exclude:\n      - \".gitlab-ci.yml\"\n\n.trigger-template:\n  stage: trigger-environments\n  trigger:\n    strategy: depend\n\ntrigger-dev:\n  extends: .trigger-template\n  trigger:\n    include:\n      - artifact: dev-pipeline.yml\n        job: generate-config\n\ntrigger-staging:\n  extends: .trigger-template\n  needs: [trigger-dev]\n  trigger:\n    include:\n      - artifact: staging-pipeline.yml\n        job: generate-config\n\ntrigger-prod:\n  extends: .trigger-template\n  needs: [trigger-staging]\n  trigger:\n    include:\n      - artifact: prod-pipeline.yml\n        job: generate-config\n  when: manual\n```\n\n\nThe generation script loops over an `ENVIRONMENTS` variable rather than hardcoding each environment separately. Pass in a different list via a CI variable or read it from a config file and the pipeline adapts without touching the YAML. The trigger jobs use [extends:](https://docs.gitlab.com/ci/yaml/#extends) to inherit shared configuration from `.trigger-template`, so `strategy: depend` is defined once rather than repeated on every trigger job. Add a new environment by updating the variable, not by duplicating pipeline config. Add [when: manual](https://docs.gitlab.com/ci/yaml/#when) to the production trigger and you get a promotion gate baked right into the pipeline graph.\n\n\nWhy it matters: SaaS companies and platform teams use this pattern to manage dozens of environments without duplicating pipeline logic. The pipeline structure itself stays lean as the deployment matrix grows.\n\n\n![Dynamic pipeline](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738765/Blog/Imported/hackathon-fake-blog-post-s/image7_wr0kx2.png \"Dynamic pipeline\")\n\n\n## 4. MR-first delivery: Merge request pipelines, merged results, and workflow routing\n\n\nThe problem: Your pipeline runs on every push to every branch. Expensive tests run on feature branches that will never merge. Meanwhile, you have no guarantee that what you tested is actually what will land on `main` after a merge.\n\n\nGitLab has three interlocking features that solve this together:\n\n\n*   [Merge request pipelines](https://docs.gitlab.com/ci/pipelines/merge_request_pipelines/) run only when a merge request exists, not on every branch push. This alone eliminates a significant amount of wasted compute.\n\n*   [Merged results pipelines](https://docs.gitlab.com/ci/pipelines/merged_results_pipelines/) go further. GitLab creates a temporary merge commit (your branch plus the current target branch) and runs the pipeline against that. 
You are testing what will actually exist after the merge, not just your branch in isolation.\n\n*   [Workflow rules](https://docs.gitlab.com/ci/yaml/workflow/) let you define exactly which pipeline type runs under which conditions and suppress everything else. The `$CI_OPEN_MERGE_REQUESTS` guard below prevents duplicate pipelines firing for both a branch and its open MR simultaneously.\n\n\nWith those three working together, here is what a tiered pipeline looks like:\n\n```yaml\n# .gitlab-ci.yml\nworkflow:\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS\n      when: never\n    - if: $CI_COMMIT_BRANCH\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\nstages:\n  - fast-checks\n  - expensive-tests\n  - deploy\n\nlint-code:\n  stage: fast-checks\n  script:\n    - echo \"Running linter\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nunit-tests:\n  stage: fast-checks\n  script:\n    - echo \"Running unit tests\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nintegration-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running integration tests (15 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\ne2e-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running E2E tests (30 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nnightly-comprehensive-scan:\n  stage: expensive-tests\n  script:\n    - echo \"Running full nightly suite (2 hours)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\ndeploy-production:\n  stage: deploy\n  script:\n    - echo \"Deploying to production\"\n  rules:\n    - if: $CI_COMMIT_BRANCH == \"main\"\n      when: manual\n```\n\nWith this setup, the pipeline behaves differently depending on context. A push to a feature branch with no open MR runs lint and unit tests only. Once an MR is opened, the workflow rules switch from a branch pipeline to an MR pipeline, and the full integration and E2E suite runs against the merged result. Merging to `main` queues a manual production deployment. A nightly schedule runs the comprehensive scan once, not on every commit.\n\n\nWhy it matters: Teams routinely cut CI costs significantly with this pattern, not by running fewer tests, but by running the right tests at the right time. Merged results pipelines catch the class of bugs that only appear after a merge, before they ever reach `main`.\n\n\n![Conditional pipelines (within a branch with no MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738768/Blog/Imported/hackathon-fake-blog-post-s/image6_dnfcny.png \"Conditional pipelines (within a branch with no MR)\")\n\n\n\n![Conditional pipelines (within an MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738772/Blog/Imported/hackathon-fake-blog-post-s/image1_wyiafu.png \"Conditional pipelines (within an MR)\")\n\n\n\n![Conditional pipelines (on the main branch)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738774/Blog/Imported/hackathon-fake-blog-post-s/image5_r6lkfd.png \"Conditional pipelines (on the main branch)\")\n\n## 5. 
Governed pipelines: CI/CD Components\n\n\nThe problem: Your platform team has defined the right way to build, test, and deploy. But every team has their own `.gitlab-ci.yml` with subtle variations. Security scanning gets skipped. Deployment standards drift. Audits are painful.\n\n\nGitLab [CI/CD Components](https://docs.gitlab.com/ci/components/) let platform teams publish versioned, reusable pipeline building blocks. Application teams consume them with a single `include:` line and optional inputs — no copy-paste, no drift. Components are discoverable through the [CI/CD Catalog](https://docs.gitlab.com/ci/components/#cicd-catalog), which means teams can find and adopt approved building blocks without needing to go through the platform team directly.\n\n\nHere is a component definition from a shared library:\n\n```yaml\n# templates/deploy.yml\nspec:\n  inputs:\n    stage:\n      default: deploy\n    environment:\n      default: production\n---\ndeploy-job:\n  stage: $[[ inputs.stage ]]\n  script:\n    - echo \"Deploying $APP_NAME to $[[ inputs.environment ]]\"\n    - echo \"Deploy URL: $DEPLOY_URL\"\n  environment:\n    name: $[[ inputs.environment ]]\n```\nAnd here is how an application team consumes it:\n\n```yaml\n# Application repo: .gitlab-ci.yml\nvariables:\n  APP_NAME: \"my-awesome-app\"\n  DEPLOY_URL: \"https://api.example.com\"\n\ninclude:\n  - component: gitlab.com/my-org/component-library/build@v1.0.6\n  - component: gitlab.com/my-org/component-library/test@v1.0.6\n  - component: gitlab.com/my-org/component-library/deploy@v1.0.6\n    inputs:\n      environment: staging\n\nstages:\n  - build\n  - test\n  - deploy\n```\n\nThree lines of `include:` replace hundreds of lines of duplicated YAML. The platform team can push a security fix to `v1.0.7` and teams opt in on their own schedule — or the platform team can pin everyone to a minimum version. Either way, one change propagates everywhere instead of needing to be applied repo by repo.\n\n\nPair this with [resource groups](https://docs.gitlab.com/ci/resource_groups/) to prevent concurrent deployments to the same environment, and [protected environments](https://docs.gitlab.com/ci/environments/protected_environments/) to enforce approval gates - and you have a governed delivery platform where compliance is the default, not the exception.\n\n\nWhy it matters: This is the pattern that makes GitLab CI/CD scale across hundreds of teams. Platform engineering teams enforce compliance without becoming a bottleneck. Application teams get a fast path to a working pipeline without reinventing the wheel.\n\n\n![Component pipeline (imported jobs)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738776/Blog/Imported/hackathon-fake-blog-post-s/image2_pizuxd.png \"Component pipeline (imported jobs)\")\n\n## Putting it all together\n\nNone of these features exist in isolation. The reason GitLab's pipeline model is worth understanding deeply is that these primitives compose:\n\n*   A monorepo uses parent-child pipelines, and each child uses DAG execution\n\n*   A microservices platform uses multi-project pipelines, and each project uses MR pipelines with merged results\n\n*   A governed platform uses CI/CD components to standardize the patterns above across every team\n\n\nMost teams discover one of these features when they hit a specific pain point. 
The ones who invest in understanding the full model end up with a delivery system that actually reflects how their engineering organization works, not a pipeline that fights it.\n\n## Other patterns worth exploring\n\n\nThe five patterns above cover the most common structural pain points, but GitLab's pipeline model goes further. A few others worth looking into as your needs grow:\n\n\n*   [Review apps with dynamic environments](https://docs.gitlab.com/ci/environments/) let you spin up a live preview for every feature branch and tear it down automatically when the MR closes. Useful for teams doing frontend work or API changes that need stakeholder sign-off before merging.\n\n*   [Caching and artifact strategies](https://docs.gitlab.com/ci/caching/) are often the fastest way to cut pipeline runtime after the structural work is done. Structuring `cache:` keys around dependency lockfiles and being deliberate about what gets passed between jobs with [artifacts:](https://docs.gitlab.com/ci/yaml/#artifacts) can make a significant difference without changing your pipeline shape at all.\n\n*   [Scheduled and API-triggered pipelines](https://docs.gitlab.com/ci/pipelines/schedules/) are worth knowing about because not everything should run on a code push. Nightly security scans, compliance reports, and release automation are better modeled as scheduled or [API-triggered](https://docs.gitlab.com/ci/triggers/) pipelines with `$CI_PIPELINE_SOURCE` routing the right jobs for each context.\n\n## How to get started\n\nModern software delivery is complex. Teams are managing monorepos with dozens of services, coordinating across multiple repositories, deploying to many environments at once, and trying to keep standards consistent as organizations grow. GitLab's pipeline model was built with all of that in mind.\n\nWhat makes it worth investing time in is how well the pieces fit together. Parent-child pipelines bring structure to large codebases. Multi-project pipelines make cross-team dependencies visible and testable. Dynamic pipelines turn environment management into something that scales gracefully. MR-first delivery with merged results ensures confidence at every step of the review process. And CI/CD Components give platform teams a way to share best practices across an entire organization without becoming a bottleneck.\n\nEach of these features is powerful on its own, and even more so when combined. 
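\n\nA rough sketch of how the pieces can sit together in one top-level `.gitlab-ci.yml` (the component path, version tag, and child pipeline file are placeholders borrowed from the examples above):\n\n```yaml\n# .gitlab-ci.yml\nworkflow:\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n\ninclude:\n  # Governed, versioned building blocks from the platform team\n  - component: gitlab.com/my-org/component-library/test@v1.0.6\n\nstages:\n  - test\n  - trigger\n\n# Parent-child split for services that own their own pipeline config\ntrigger-api-service:\n  stage: trigger\n  trigger:\n    include:\n      - local: '.gitlab/ci/api-service.yml'\n    strategy: depend\n```\n\n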
GitLab gives you the building blocks to design a delivery system that fits how your team actually works, and grows with you as your needs evolve.\n\n> [Start a free trial of GitLab Ultimate](https://about.gitlab.com/free-trial/) to use pipeline logic today.\n\n## Read more\n\n*   [Variable and artifact sharing in GitLab parent-child pipelines](https://about.gitlab.com/blog/variable-and-artifact-sharing-in-gitlab-parent-child-pipelines/)\n*   [CI/CD inputs: Secure and preferred method to pass parameters to a pipeline](https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/)\n*   [Tutorial: How to set up your first GitLab CI/CD component](https://about.gitlab.com/blog/tutorial-how-to-set-up-your-first-gitlab-ci-cd-component/)\n*   [How to include file references in your CI/CD components](https://about.gitlab.com/blog/how-to-include-file-references-in-your-ci-cd-components/)\n*   [FAQ: GitLab CI/CD Catalog](https://about.gitlab.com/blog/faq-gitlab-ci-cd-catalog/)\n*   [Building a GitLab CI/CD pipeline for a monorepo the easy way](https://about.gitlab.com/blog/building-a-gitlab-ci-cd-pipeline-for-a-monorepo-the-easy-way/)\n*   [A CI/CD component builder's journey](https://about.gitlab.com/blog/a-ci-component-builders-journey/)\n*   [CI/CD Catalog goes GA: No more building pipelines from scratch](https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/)","5 ways GitLab pipeline logic solves real engineering problems","Learn how to scale CI/CD with composable patterns for monorepos, microservices, environments, and governance.",[737],"Omid Khan","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772721753/frfsm1qfscwrmsyzj1qn.png","2026-04-09",[110,741,23,742],"DevOps platform","features",{"featured":14,"template":15,"slug":744},"5-ways-gitlab-pipeline-logic-solves-real-engineering-problems",{"content":746,"config":755},{"title":747,"description":748,"authors":749,"heroImage":751,"date":752,"body":753,"category":11,"tags":754},"How to use GitLab Container Virtual Registry with Docker Hardened Images","Learn how to simplify container image management with this step-by-step guide.",[750],"Tim Rizzi","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772111172/mwhgbjawn62kymfwrhle.png","2026-03-12","If you're a platform engineer, you've probably had this conversation:\n  \n*\"Security says we need to use hardened base images.\"*\n\n*\"Great, where do I configure credentials for yet another registry?\"*\n\n*\"Also, how do we make sure everyone actually uses them?\"*\n\nOr this one:\n\n*\"Why are our builds so slow?\"*\n\n*\"We're pulling the same 500MB image from Docker Hub in every single job.\"*\n\n*\"Can't we just cache these somewhere?\"*\n\nI've been working on [Container Virtual Registry](https://docs.gitlab.com/user/packages/virtual_registry/container/) at GitLab specifically to solve these problems. It's a pull-through cache that sits in front of your upstream registries — Docker Hub, dhi.io (Docker Hardened Images), MCR, and Quay — and gives your teams a single endpoint to pull from. Images get cached on the first pull. Subsequent pulls come from the cache. 
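\n\nFrom a pipeline's point of view, every image comes through the same prefix regardless of which upstream actually hosts it (the numeric registry ID below is a placeholder for your own virtual registry's ID):\n\n```shell\n# Official Docker Hub image (library/ prefix)\ndocker pull gitlab.com/virtual_registries/container/1000016/library/alpine:latest\n\n# Docker Hardened Image served from dhi.io\ndocker pull gitlab.com/virtual_registries/container/1000016/python:3.13\n\n# .NET SDK served from MCR\ndocker pull gitlab.com/virtual_registries/container/1000016/dotnet/sdk:8.0\n```\n\n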
Your developers don't need to know or care which upstream a particular image came from.\n\nThis article shows you how to set up Container Virtual Registry, specifically with Docker Hardened Images in mind, since that's a combination that makes a lot of sense for teams concerned about security and not making their developers' lives harder.\n\n## What problem are we actually solving?\n\nThe Platform teams I usually talk to manage container images across three to five registries:\n\n* **Docker Hub** for most base images\n* **dhi.io** for Docker Hardened Images (security-conscious workloads)\n* **MCR** for .NET and Azure tooling\n* **Quay.io** for Red Hat ecosystem stuff\n* **Internal registries** for proprietary images\n\nEach one has its own:\n\n* Authentication mechanism\n* Network latency characteristics\n* Way of organizing image paths\n\nYour CI/CD configs end up littered with registry-specific logic. Credential management becomes a project unto itself. And every pipeline job pulls the same base images over the network, even though they haven't changed in weeks.\n\nContainer Virtual Registry consolidates this. One registry URL. One authentication flow (GitLab's). Cached images are served from GitLab's infrastructure rather than traversing the internet each time.\n\n## How it works\n\nThe model is straightforward:\n\n```text\nYour pipeline pulls:\n  gitlab.com/virtual_registries/container/1000016/python:3.13\n\nVirtual registry checks:\n  1. Do I have this cached? → Return it\n  2. No? → Fetch from upstream, cache it, return it\n\n```\n\nYou configure upstreams in priority order. When a pull request comes in, the virtual registry checks each upstream until it finds the image. The result gets cached for a configurable period (default 24 hours).\n\n```text\n┌─────────────────────────────────────────────────────────┐\n│                    CI/CD Pipeline                       │\n│                          │                              │\n│                          ▼                              │\n│   gitlab.com/virtual_registries/container/\u003Cid>/image   │\n└─────────────────────────────────────────────────────────┘\n                           │\n                           ▼\n┌─────────────────────────────────────────────────────────┐\n│            Container Virtual Registry                   │\n│                                                         │\n│  Upstream 1: Docker Hub ────────────────┐               │\n│  Upstream 2: dhi.io (Hardened) ────────┐│               │\n│  Upstream 3: MCR ─────────────────────┐││               │\n│  Upstream 4: Quay.io ────────────────┐│││               │\n│                                      ││││               │\n│                    ┌─────────────────┴┴┴┴──┐            │\n│                    │        Cache          │            │\n│                    │  (manifests + layers) │            │\n│                    └───────────────────────┘            │\n└─────────────────────────────────────────────────────────┘\n```\n\n## Why this matters for Docker Hardened Images\n\n[Docker Hardened Images](https://docs.docker.com/dhi/) are great because of the minimal attack surface, near-zero CVEs, proper software bills of materials (SBOMs), and SLSA provenance. 
If you're evaluating base images for security-sensitive workloads, they should be on your list.\n\nBut adopting them creates the same operational friction as any new registry:\n\n* **Credential distribution**: You need to get Docker credentials to every system that pulls images from dhi.io.\n* **CI/CD changes**: Every pipeline needs to be updated to authenticate with dhi.io.\n* **Developer friction**: People need to remember to use the hardened variants.\n* **Visibility gap**: It's difficult to tell if teams are actually using hardened images vs. regular ones.\n\nVirtual registry addresses each of these:\n\n**Single credential**: Teams authenticate to GitLab. The virtual registry handles upstream authentication. You configure Docker credentials once, at the registry level, and they apply to all pulls.\n\n**No CI/CD changes per-team**: Point pipelines at your virtual registry. Done. The upstream configuration is centralized.\n\n**Gradual adoption**: Since images get cached with their full path, you can see in the cache what's being pulled. If someone's pulling `library/python:3.11` instead of the hardened variant, you'll know.\n\n**Audit trail**: The cache shows you exactly which images are in active use. Useful for compliance, useful for understanding what your fleet actually depends on.\n\n## Setting it up\n\nHere's a real setup using the Python client from this demo project.\n\n### Create the virtual registry\n\n```python\nfrom virtual_registry_client import VirtualRegistryClient\n\nclient = VirtualRegistryClient()\n\nregistry = client.create_virtual_registry(\n    group_id=\"785414\",  # Your top-level group ID\n    name=\"platform-images\",\n    description=\"Cached container images for platform teams\"\n)\n\nprint(f\"Registry ID: {registry['id']}\")\n# You'll need this ID for the pull URL\n```\n\n### Add Docker Hub as an upstream\n\nFor official images like Alpine, Python, etc.:\n\n```python\ndocker_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://registry-1.docker.io\",\n    name=\"Docker Hub\",\n    cache_validity_hours=24\n)\n```\n\n### Add Docker Hardened Images (dhi.io)\n\nDocker Hardened Images are hosted on `dhi.io`, a separate registry that requires authentication:\n\n```python\ndhi_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-docker-username\",\n    password=\"your-docker-access-token\",\n    cache_validity_hours=24\n)\n```\n\n### Add other upstreams\n\n```python\n# MCR for .NET teams\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://mcr.microsoft.com\",\n    name=\"Microsoft Container Registry\",\n    cache_validity_hours=48\n)\n\n# Quay for Red Hat stuff\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://quay.io\",\n    name=\"Quay.io\",\n    cache_validity_hours=24\n)\n```\n\n### Update your CI/CD\n\nHere's a `.gitlab-ci.yml` that pulls through the virtual registry:\n\n```yaml\nvariables:\n  VIRTUAL_REGISTRY_ID: \u003Cyour_virtual_registry_ID>\n\n  \nbuild:\n  image: docker:24\n  services:\n    - docker:24-dind\n  before_script:\n    # Authenticate to GitLab (which handles upstream auth for you)\n    - echo \"${CI_JOB_TOKEN}\" | docker login -u gitlab-ci-token --password-stdin gitlab.com\n  script:\n    # All of these go through your single virtual registry\n    \n    # Official Docker Hub images (use library/ prefix)\n    - docker pull 
gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/library/alpine:latest\n    \n    # Docker Hardened Images from dhi.io (no prefix needed)\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/python:3.13\n    \n    # .NET from MCR\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/dotnet/sdk:8.0\n```\n\n### Image path formats\n\nDifferent registries use different path conventions:\n\n| Registry | Pull URL Example |\n|----------|------------------|\n| Docker Hub (official) | `.../library/python:3.11-slim` |\n| Docker Hardened Images (dhi.io) | `.../python:3.13` |\n| MCR | `.../dotnet/sdk:8.0` |\n| Quay.io | `.../prometheus/prometheus:latest` |\n\n### Verify it's working\n\nAfter some pulls, check your cache:\n\n```python\nupstreams = client.list_registry_upstreams(registry['id'])\nfor upstream in upstreams:\n    entries = client.list_cache_entries(upstream['id'])\n    print(f\"{upstream['name']}: {len(entries)} cached entries\")\n\n```\n\n## What the numbers look like\n\nI ran tests pulling images through the virtual registry:\n\n| Metric | Without Cache | With Warm Cache |\n|--------|---------------|-----------------|\n| Pull time (Alpine) | 10.3s | 4.2s |\n| Pull time (Python 3.13 DHI) | 11.6s | ~4s |\n| Network roundtrips to upstream | Every pull | Cache misses only |\n\n\n\n\nThe first pull is the same speed (it has to fetch from upstream). Every pull after that, for the cache validity period, comes straight from GitLab's storage. No network hop to Docker Hub, dhi.io, MCR, or wherever the image lives.\n\nFor a team running hundreds of pipeline jobs per day, that's hours of cumulative build time saved.\n\n## Practical considerations\nHere are some considerations to keep in mind:\n\n### Cache validity\n\n24 hours is the default. For security-sensitive images where you want patches quickly, consider 12 hours or less:\n\n```python\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-username\",\n    password=\"your-token\",\n    cache_validity_hours=12\n)\n```\n\nFor stable, infrequently-updated images (like specific version tags), longer validity is fine.\n\n### Upstream priority\n\nUpstreams are checked in order. If you have images with the same name on different registries, the first matching upstream wins.\n\n### Limits\n\n* Maximum of 20 virtual registries per group\n* Maximum of 20 upstreams per virtual registry\n\n## Configuration via UI\n\nYou can also configure virtual registries and upstreams directly from the GitLab UI—no API calls required. Navigate to your group's **Settings > Packages and registries > Virtual Registry** to:\n\n* Create and manage virtual registries\n* Add, edit, and reorder upstream registries\n* View and manage the cache\n* Monitor which images are being pulled\n\n## What's next\n\nWe're actively developing:\n\n* **Allow/deny lists**: Use regex to control which images can be pulled from specific upstreams.\n\nThis is beta software. 
It works, people are using it in production, but we're still iterating based on feedback.\n\n## Share your feedback\n\nIf you're a platform engineer dealing with container registry sprawl, I'd like to understand your setup:\n\n* How many upstream registries are you managing?\n* What's your biggest pain point with the current state?\n* Would something like this help, and if not, what's missing?\n\nPlease share your experiences in the [Container Virtual Registry feedback issue](https://gitlab.com/gitlab-org/gitlab/-/work_items/589630).\n## Related resources\n- [New GitLab metrics and registry features help reduce CI/CD bottlenecks](https://about.gitlab.com/blog/new-gitlab-metrics-and-registry-features-help-reduce-ci-cd-bottlenecks/#container-virtual-registry)\n- [Container Virtual Registry documentation](https://docs.gitlab.com/user/packages/virtual_registry/container/)\n- [Container Virtual Registry API](https://docs.gitlab.com/api/container_virtual_registries/)",[23,728,742],{"featured":31,"template":15,"slug":756},"using-gitlab-container-virtual-registry-with-docker-hardened-images",{"promotions":758},[759,773,784,796],{"id":760,"categories":761,"header":763,"text":764,"button":765,"image":770},"ai-modernization",[762],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":766,"config":767},"Get your AI maturity score",{"href":768,"dataGaName":769,"dataGaLocation":245},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":771},{"src":772},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":774,"categories":775,"header":776,"text":764,"button":777,"image":781},"devops-modernization",[728,572],"Are you just managing tools or shipping innovation?",{"text":778,"config":779},"Get your DevOps maturity score",{"href":780,"dataGaName":769,"dataGaLocation":245},"/assessments/devops-modernization-assessment/",{"config":782},{"src":783},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":785,"categories":786,"header":788,"text":764,"button":789,"image":793},"security-modernization",[787],"security","Are you trading speed for security?",{"text":790,"config":791},"Get your security maturity score",{"href":792,"dataGaName":769,"dataGaLocation":245},"/assessments/security-modernization-assessment/",{"config":794},{"src":795},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":797,"paths":798,"header":801,"text":802,"button":803,"image":808},"github-azure-migration",[799,800],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's Azure move?","GitHub is already rebuilding around Azure. Find out what it means for you.",{"text":804,"config":805},"See how GitLab compares to GitHub",{"href":806,"dataGaName":807,"dataGaLocation":245},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":809},{"src":783},{"header":811,"blurb":812,"button":813,"secondaryButton":818},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":814,"config":815},"Get your free trial",{"href":816,"dataGaName":52,"dataGaLocation":817},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":508,"config":819},{"href":56,"dataGaName":57,"dataGaLocation":817},1777493585472]