How to test a core pipeline repo against another, dependent repo with Azure DevOps PR build validation?


In my Azure DevOps project, I have multiple project repos using a common Azure DevOps pipeline. I have accomplished this by defining the pipeline in a core pipeline repo and then referring to it from each project repo.

Repo: core-pipeline

File: pipeline.yml

jobs:
  - job: exampleJob
    displayName: Example job
    steps:
      - checkout: core_pipeline # Not magic; see azure-pipelines.yml below.
        path: $(PIPELINE_DIR_RELATIVE)
      - checkout: self
        path: $(PROJECT_DIR_RELATIVE)
      - task: ShellScript@2
        displayName: Show project repo info
        inputs:
          scriptPath: $(PIPELINE_DIR)/
          cwd: $(PROJECT_DIR)

variables:
  __SOURCES_DIR__: s
  # Pipeline repo paths, analogous to the project repo paths below.
  PIPELINE_DIR_RELATIVE: $(__SOURCES_DIR__)/core-pipeline
  PIPELINE_DIR: $(Pipeline.Workspace)/$(PIPELINE_DIR_RELATIVE)
  PROJECT_DIR_RELATIVE: $(__SOURCES_DIR__)/$(Build.Repository.Name)
  PROJECT_DIR: $(Pipeline.Workspace)/$(PROJECT_DIR_RELATIVE)



The Bash script referenced by scriptPath simply runs:

ls -al

Repo: example-project

File: azure-pipelines.yml

resources:
  repositories:
    - repository: core_pipeline # Must match `checkout:` in core pipeline repo.
      type: git
      name: MyAzureDevOpsProject/core-pipeline
      ref: master

trigger:
  - master

extends:
  template: pipeline.yml@core_pipeline # Must match `repository:` above.

(This is the only piece of code duplicated across project repos.)


The setup described above works well. For example, I can have build validation for pull requests in example-project, and I can trigger pipeline runs manually from the Azure DevOps web GUI. In the latter case, I can optionally select another branch for the core_pipeline repository resource, which is useful for testing changes to the core-pipeline repo before merging them.

However, there is nothing preventing me from merging PRs in core-pipeline without having tested them whatsoever. I have to remember to manually test each project repo against the core-pipeline PR branch before merging the PR, which is somewhat tedious and, above all, very error-prone. There is effectively no protection against making arbitrarily bad changes to the core pipeline, breaking the workflow of all project repos depending on it.

I can add example-project’s pipeline to the build validation policies for PRs in core-pipeline, but then core-pipeline’s master branch is used for the validation build, which is useless; the point is to use the PR branch.


I would like to have a build validation policy for PRs in the core-pipeline repo such that example-project must pass a pipeline run using that PR branch of core-pipeline before the PR can be merged.

It is not necessary that all project repos using the core pipeline repo automatically be covered by said build validation; I’d be perfectly happy if I had to manually select (once) which project repos should be included.

Important note

The core pipeline consists of two equally important parts:

  • the YAML template – processed at template compile time, before the pipeline run is started
  • the Bash script – cloned at runtime in the checkout task

Checking out a specific ref using inline syntax, e.g.

- checkout: 'git://MyAzureDevOpsProject/core_pipeline@${{ parameters.ref }}'

is not a (complete) solution, because even if parameters.ref can be populated with the name of the PR branch, only the Bash script will be affected; the YAML template will still be from the master branch, not the PR branch.
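To make the limitation concrete, here is a sketch of that partial approach inside the core pipeline template (the parameter name and its default are assumptions, not part of the original setup):

```yaml
# pipeline.yml (sketch): `ref` affects only the checked-out Bash script.
# The template itself is still resolved from whichever branch the project's
# azure-pipelines.yml pinned in its repository resource.
parameters:
  - name: ref # hypothetical parameter name
    type: string
    default: master

jobs:
  - job: exampleJob
    steps:
      - checkout: 'git://MyAzureDevOpsProject/core-pipeline@${{ parameters.ref }}'
      - checkout: self
```

Even if the validation build passed `ref: <PR branch>`, the YAML half of the core pipeline would remain untested.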

Go to Source
Author: Simon Alling

How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to a scaled AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials describing cloning a repo directly to a single instance, or using CodeDeploy to deploy to a single instance, and then using autoscaling groups. Basically equivalent to my Heroku example.

But what does it actually look like in production systems at say Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Are they pushing to a single “global” bastion instance, and then it fans out to the rest of the private instances across multiple regions in one swoop? Or is some sort of plan (like a terraform plan) created, which needs manual approval, and then there’s a custom deploy script which ssh’s into the bastion instance and then fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically produced? Is a Load Balancer switch switched? Etc.

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am just wondering about a single service. I would imagine this process would be repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. How would a deployment for that typically look? I am looking to create some GitHub actions to deploy to AWS as practice for a large project, but have no idea what the state of the art is, and haven’t been able to find any information on the topic.
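For reference, one common shape (not specific to any of the companies named) is: CI bakes an immutable artifact, publishes it, and asks a deployment service to roll it out gradually, region by region. A minimal GitHub Actions sketch using CodeDeploy, where all application names, bucket names, regions, and secret names are assumptions:

```yaml
# Hypothetical workflow: package a revision, upload it to S3, then trigger
# a rolling deployment for one region's deployment group. At scale this step
# is typically repeated (or fanned out) per region/deployment group.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}     # assumed secret names
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Package and upload revision
        run: |
          zip -r app.zip .
          aws s3 cp app.zip s3://my-artifact-bucket/app.zip
      - name: Trigger rolling deployment in this region
        run: |
          aws deploy create-deployment \
            --application-name webserver \
            --deployment-group-name prod-us-east-1 \
            --s3-location bucket=my-artifact-bucket,key=app.zip,bundleType=zip
```

The same artifact can then be promoted to further regions, often behind a manual approval gate, rather than rebuilt per region.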


Go to Source
Author: Lance Pollard

How to render assets when building a static website with GitLab Pages and Jekyll

I’m using Jekyll to build a static site and deploy it with GitLab Pages. This site is composed of posts (contained in the site/_posts folder) which refer to other HTML and Markdown documents that I placed in an assets folder at the project’s root, as recommended by the Jekyll docs.

The pipeline builds the website and deploys it successfully to GitLab Pages. However, while I can find my posts, the links to the files supposedly contained in assets return a 404. When I browse the public folder in the job artifacts, I can’t see said files in the assets folder, even though I put them there before building the site.

What am I missing here? It runs fine when serving locally, but as soon as it is deployed on GitLab Pages I can’t access the assets files.
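One possible explanation, based on the excerpts below: `jekyll build` is run from inside `site/`, and Jekyll only copies files that live under its source directory, so an `assets` folder at the project root never reaches the built output. A sketch of the relevant step with the reports placed under `site/` instead (the `TAG` value and file names here are invented for illustration):

```shell
# Sketch: place report files inside Jekyll's source dir (site/) so the
# build copies them into the generated site.
TAG="1.0.0"                      # assumed release tag from the pipeline
mkdir -p reports site
echo "report" > reports/yellow-ducky.html
mkdir -p "site/assets/v${TAG}"
mv reports/* "site/assets/v${TAG}/"
ls "site/assets/v${TAG}"
```

With the assets under `site/assets/`, `bundle exec jekyll build -d public` should include them in `public/assets/`.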

project architecture:

|__> site/
|____> _posts/
|__> assets/
|____> v{{ version }}/
|______> yellow-ducky.html

.gitlab-ci.yml excerpt:

pages:
  stage: deploy
  tags: [docker-connect]
  script:
    # is the blog post I want to publish
    - mv site/_posts/`date +'%Y-%m-%d'`
    # reports come from a previous job's artifacts and contain all files I want to place in assets
    - mkdir -p assets assets/v${TAG}
    - mv reports/* assets/v${TAG}/
    # build website
    - gem install bundler
    - cd site
    - bundle install
    - bundle exec jekyll build -d public
    # move built files to the folder where GitLab expects to find them
    - cd .. && mkdir -p public && mv site/public/* public/
  artifacts:
    paths:
      - public

_config.yml excerpt:

title: RubberDucky
description: >- # this means to ignore newlines until "baseurl:"
  This is the place where to find all info you need about rubber ducks. 
baseurl: "/toys/rubberduck" # the subpath of your site, e.g. /blog
url: "" # the base hostname & protocol for your site, e.g.
# Build settings
theme: minima
plugins:
  - jekyll-feed

blog post excerpt:

<ul style="list-style-type:none">
    {% for duck in ducks %}
        <h2>{{ }}</h2>
        <li><a href="assets/v{{ version }}/{{ duck.url }}">View</a> | <a href="assets/v{{ version }}/{{ duck.url }}" download>Download</a></li>
    {% endfor %}
</ul>

Go to Source
Author: avazula

Gitlab CI push to docker-registry fails

I have GitLab server and a separate docker-registry. I have created a testuser with a testpassword.

My .gitlab-ci.yml

image: docker:latest

services:
  - docker:18.09.7-dind

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  # See
  REGISTRY: myregistry:5000/golang-test

stages:
  - build
  # - test

before_script:
  - docker login -u testuser -p testpassword myregistry:5000

build:
  stage: build
  tags:
    - docker-ci
  script:
    - docker ps
    - docker build -t $REGISTRY .
    - docker push $REGISTRY

When CI pipeline runs, I get following error.

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://myregistry:5000/v2/: dial tcp: lookup worker1 on no such host
ERROR: Job failed: exit code 1

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-ci"
  url = ""
  token = "89MfxRTby83ny3eTkTf2"
  tls-ca-file = "/etc/gitlab-runner/certs/"
  executor = "docker"
  pre_build_script = "export DOCKER_HOST=tcp://docker:2375"
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

This was after suggested modifications here.

I can successfully log in to my docker-registry using docker login -u testuser -p testpassword https://myregistry:5000 from the server running GitLab.
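Independently of the DNS lookup failure, the warning in the log can be silenced by feeding the password on stdin from a masked CI variable instead of passing `-p` on the command line (`REGISTRY_PASSWORD` is an assumed variable name, to be defined in the project's CI/CD settings):

```yaml
# .gitlab-ci.yml fragment (sketch)
before_script:
  - echo "$REGISTRY_PASSWORD" | docker login -u testuser --password-stdin myregistry:5000
```

This also keeps the password out of the job log and the repository.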

Go to Source
Author: Sachith Muhandiram

Where would the Jenkins job run if I don’t mention any Agent labels?

In my Jenkins setup I have a few sets of jobs which use agents to build and deploy. I achieved this by specifying the agent label in the job configurations.

I have one job which has to execute on the Jenkins master. If I don’t specify any label in the job configuration and no agents are online, the job executes on the master. But if any agent is online, it picks some random agent and runs the job there instead. How can I execute it on the master even when agents are online?
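For comparison, a Pipeline job can pin its placement explicitly by targeting the controller's built-in node label; a sketch, assuming the default label has not been changed (`master` on older Jenkins, `built-in` on 2.307+):

```groovy
pipeline {
    // Pin the whole job to the controller instead of any connected agent.
    agent { label 'master' } // use 'built-in' on Jenkins 2.307 and later
    stages {
        stage('Deploy') {
            steps {
                sh 'echo Running on the controller'
            }
        }
    }
}
```

For a freestyle job, the equivalent is ticking "Restrict where this project can be run" and entering the controller's label.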

Go to Source
Author: B. Akshay