How to Fix Gradle Wrapper Permission Denied Error

I am getting a “gradlew permission denied” error after pushing my code to my GitLab repository. The build cannot continue because the Gradle wrapper is not executable.

What is causing this, and how can I fix it?


From a local terminal, run the following Git command to fix the issue:

git update-index --chmod=+x gradlew

Then commit the change to the gradlew file.

Push the changes to your Git repository.
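Put together, the whole fix is a short sequence (a sketch; the commit message is just an example):

```shell
# Mark the wrapper script as executable in Git's index. This works even
# on filesystems that don't track the executable bit (e.g. Windows):
git update-index --chmod=+x gradlew

# Confirm the index now records mode 100755 (executable) for the file:
git ls-files --stage gradlew

# Commit and push so the CI runner checks out an executable gradlew:
git commit -m "Make gradlew executable"
git push
```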

You can read more at this page:

How to render assets when building a static website with GitLab Pages and Jekyll

I’m using Jekyll to build a static site and deploy it with GitLab Pages. The site is composed of posts (contained in the site/_posts folder) that refer to other HTML and Markdown documents I placed in an assets folder at the project’s root, as recommended by the Jekyll docs.

When the pipeline builds the website, it runs and deploys successfully to GitLab Pages. However, while I can reach my posts, the links to the files supposedly contained in assets return a 404. When I browse the public artifacts folder, I can’t see those files in the assets folder, even though I put them there before building the site.

What am I missing here? Everything works when serving locally, but as soon as the site is deployed on GitLab Pages I can’t access the assets files.

Project architecture:

|__> _posts/
|__> v{{ version }}/
|____> yellow-ducky.html

.gitlab-ci.yml excerpt:

  stage: deploy
  tags: [docker-connect]
  script:
    # is the blog post I want to publish
    - mv site/_posts/`date +'%Y-%m-%d'`
    # reports come from a previous job's artifacts and contain all files I want to place in assets
    - mkdir -p assets assets/v${TAG}
    - mv reports/* assets/v${TAG}/
    # build website
    - gem install bundler
    - cd site
    - bundle install
    - bundle exec jekyll build -d public
    # move built files to the folder where GitLab expects to find them
    - cd .. && mkdir -p public && mv site/public/* public/
  artifacts:
    paths:
      - public
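One detail worth double-checking in a layout like this: the pipeline creates assets/ at the repository root, but jekyll build runs from inside site/, so a folder outside site/ is never part of Jekyll's source tree and may simply not be copied into public/. A minimal sketch of the path situation (directory and file names are placeholders mirroring the excerpt):

```shell
# Recreate the layout from the excerpt with placeholder names: assets/
# is created at the project root, while Jekyll's source dir is site/.
mkdir -p project/assets/v1.0 project/site/_posts
echo "report" > project/assets/v1.0/report.html

# Jekyll only copies files found inside its source directory (site/):
if ls project/site | grep -q assets; then
  echo "assets inside source"
else
  echo "assets outside source: not copied into public/"
fi
```

If that is the cause, moving the assets folder under site/ (or pointing the mv commands there) would put it inside the build.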

_config.yml excerpt:

title: RubberDucky
description: >- # this means to ignore newlines until "baseurl:"
  This is the place where to find all info you need about rubber ducks. 
baseurl: "/toys/rubberduck" # the subpath of your site, e.g. /blog
url: "" # the base hostname & protocol for your site, e.g.
# Build settings
theme: minima
plugins:
  - jekyll-feed

blog post excerpt:

<ul style="list-style-type:none">
    {% for duck in ducks %}
        <h2>{{ }}</h2>
        <li><a href="assets/v{{ version }}/{{ duck.url }}">View</a> | <a href="assets/v{{ version }}/{{ duck.url }}" download>Download</a></li>
    {% endfor %}
</ul>

Go to Source
Author: avazula

GitLab keeps loading and finally fails when deploying a dockerized node.js app

GitLab Job Log

Running with gitlab-runner 13.2.0-rc2 (45f2b4ec)
  on docker-auto-scale fa6cab46
Preparing the "docker+machine" executor
Using Docker executor with image gitlab/dind:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:d5d139be840a6ffa04348fc87740e8c095cade6e9cb977785fdba51e5fd7ffec for docker:dind ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-18378289-concurrent-0-31a688551619da9f-docker-0 probably didn't start properly.

Health check error:
service "runner-fa6cab46-project-18378289-concurrent-0-31a688551619da9f-docker-0-wait-for-service" timeout

Health check container logs:

Service container logs:
2020-07-20T08:21:19.734721788Z time="2020-07-20T08:21:19.734543379Z" level=info msg="Starting up"
2020-07-20T08:21:19.742928068Z time="2020-07-20T08:21:19.742802844Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-07-20T08:21:19.743943014Z time="2020-07-20T08:21:19.743853574Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-07-20T08:21:19.764021012Z time="2020-07-20T08:21:19.763898078Z" level=info msg="libcontainerd: started new containerd process" pid=23
2020-07-20T08:21:19.764159337Z time="2020-07-20T08:21:19.764107864Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:19.764207629Z time="2020-07-20T08:21:19.764179926Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:19.764319635Z time="2020-07-20T08:21:19.764279612Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:19.764371375Z time="2020-07-20T08:21:19.764344798Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:19.969344247Z time="2020-07-20T08:21:19.969193121Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13 
2020-07-20T08:21:19.969863044Z time="2020-07-20T08:21:19.969784495Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 
2020-07-20T08:21:19.970042302Z time="2020-07-20T08:21:19.969997665Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:19.970399514Z time="2020-07-20T08:21:19.970336671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
2020-07-20T08:21:19.970474776Z time="2020-07-20T08:21:19.970428684Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.019585153Z time="2020-07-20T08:21:20.019421401Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'nmodprobe: can't change directory to '/lib/modules': No such file or directoryn": exit status 1" 
2020-07-20T08:21:20.019709540Z time="2020-07-20T08:21:20.019668899Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.019934319Z time="2020-07-20T08:21:20.019887606Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.020299876Z time="2020-07-20T08:21:20.020218529Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.021038477Z time="2020-07-20T08:21:20.020887571Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.021162370Z time="2020-07-20T08:21:20.021121663Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 
2020-07-20T08:21:20.021406797Z time="2020-07-20T08:21:20.021348536Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'nmodprobe: can't change directory to '/lib/modules': No such file or directoryn": exit status 1" 
2020-07-20T08:21:20.021487917Z time="2020-07-20T08:21:20.021435946Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" 
2020-07-20T08:21:20.021581245Z time="2020-07-20T08:21:20.021533539Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
2020-07-20T08:21:20.030531741Z time="2020-07-20T08:21:20.030427430Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 
2020-07-20T08:21:20.030639854Z time="2020-07-20T08:21:20.030604536Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 
2020-07-20T08:21:20.030779501Z time="2020-07-20T08:21:20.030736875Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.030865060Z time="2020-07-20T08:21:20.030833703Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.030955439Z time="2020-07-20T08:21:20.030912981Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031027842Z time="2020-07-20T08:21:20.030998003Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031132325Z time="2020-07-20T08:21:20.031083782Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031202966Z time="2020-07-20T08:21:20.031174445Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031286993Z time="2020-07-20T08:21:20.031253528Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031370557Z time="2020-07-20T08:21:20.031312376Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 
2020-07-20T08:21:20.031709756Z time="2020-07-20T08:21:20.031650044Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 
2020-07-20T08:21:20.031941868Z time="2020-07-20T08:21:20.031897088Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 
2020-07-20T08:21:20.032929781Z time="2020-07-20T08:21:20.032846588Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.033064279Z time="2020-07-20T08:21:20.033014391Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 
2020-07-20T08:21:20.034207198Z time="2020-07-20T08:21:20.034120505Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034316027Z time="2020-07-20T08:21:20.034279582Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034402334Z time="2020-07-20T08:21:20.034369239Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034482782Z time="2020-07-20T08:21:20.034452282Z" level=info msg="loading plugin ""..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034564724Z time="2020-07-20T08:21:20.034533365Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034645756Z time="2020-07-20T08:21:20.034617060Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034722695Z time="2020-07-20T08:21:20.034689037Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034800005Z time="2020-07-20T08:21:20.034770572Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034873069Z time="2020-07-20T08:21:20.034837050Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 
2020-07-20T08:21:20.036608424Z time="2020-07-20T08:21:20.036525701Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036722927Z time="2020-07-20T08:21:20.036684403Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036799326Z time="2020-07-20T08:21:20.036769392Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036876692Z time="2020-07-20T08:21:20.036844684Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.037291381Z time="2020-07-20T08:21:20.037244979Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" 
2020-07-20T08:21:20.037493736Z time="2020-07-20T08:21:20.037445814Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" 
2020-07-20T08:21:20.037563487Z time="2020-07-20T08:21:20.037522305Z" level=info msg="containerd successfully booted in 0.069638s" 
2020-07-20T08:21:20.087933162Z time="2020-07-20T08:21:20.087804902Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-07-20T08:21:20.088415387Z time="2020-07-20T08:21:20.088327506Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:20.088533804Z time="2020-07-20T08:21:20.088465157Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:20.088620947Z time="2020-07-20T08:21:20.088562235Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:20.088709546Z time="2020-07-20T08:21:20.088654016Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:20.092857445Z time="2020-07-20T08:21:20.092749940Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:20.092962469Z time="2020-07-20T08:21:20.092914347Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:20.093060327Z time="2020-07-20T08:21:20.093013905Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:20.093142744Z time="2020-07-20T08:21:20.093102173Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:20.149109416Z time="2020-07-20T08:21:20.148965236Z" level=info msg="Loading containers: start."
2020-07-20T08:21:20.159351905Z time="2020-07-20T08:21:20.159146135Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'nbridge                167936  1 br_netfilternstp                    16384  1 bridgenllc                    16384  2 bridge,stpnip: can't find device 'br_netfilter'nbr_netfilter           24576  0 nbridge                167936  1 br_netfilternmodprobe: can't change directory to '/lib/modules': No such file or directoryn, error: exit status 1"
2020-07-20T08:21:20.280536391Z time="2020-07-20T08:21:20.280402152Z" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be used to set a preferred IP address"
2020-07-20T08:21:20.337028532Z time="2020-07-20T08:21:20.336889956Z" level=info msg="Loading containers: done."
2020-07-20T08:21:20.435200532Z time="2020-07-20T08:21:20.435033092Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
2020-07-20T08:21:20.436386855Z time="2020-07-20T08:21:20.436266338Z" level=info msg="Daemon has completed initialization"
2020-07-20T08:21:20.476621441Z time="2020-07-20T08:21:20.475137317Z" level=info msg="API listen on [::]:2375"
2020-07-20T08:21:20.477679219Z time="2020-07-20T08:21:20.477535808Z" level=info msg="API listen on /var/run/docker.sock"


Pulling docker image gitlab/dind:latest ...
Using docker image sha256:cc674e878f23bdc3c36cc37596d31adaa23bca0fc3ed18bea9b59abc638602e1 for gitlab/dind:latest ...
Preparing environment
Running on runner-fa6cab46-project-18378289-concurrent-0 via runner-fa6cab46-srm-1595233216-1bd30100...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/
Created fresh repository.
Checking out 257ffdf2 as stage...

Skipping Git submodules setup
Restoring cache
Checking cache for stage node:14.5.0-alpine-2...
Downloading from
Successfully extracted cache
Executing "step_script" stage of the job script
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
time="2020-07-20T08:22:14.844844859Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]" 
time="2020-07-20T08:22:14.846663310Z" level=info msg="libcontainerd: new containerd process, pid: 57" 
time="2020-07-20T08:22:14.906788853Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" 
time="2020-07-20T08:22:14.907996055Z" level=info msg="Loading containers: start." 
time="2020-07-20T08:22:14.910877638Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'nmodprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'n, error: exit status 1" 
time="2020-07-20T08:22:14.912665866Z" level=warning msg="Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.914201302Z" level=warning msg="Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.989456423Z" level=warning msg="Could not load necessary modules for IPSEC rules: Running modprobe xfrm_user failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.990108153Z" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be used to set a preferred IP address" 

time="2020-07-20T08:22:15.029286773Z" level=info msg="Loading containers: done." 
time="2020-07-20T08:22:15.029664106Z" level=info msg="Daemon has completed initialization" 
time="2020-07-20T08:22:15.029823541Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay2 version=1.12.1 
time="2020-07-20T08:22:15.048665494Z" level=info msg="API listen on /var/run/docker.sock" 
time="2020-07-20T08:22:15.049046558Z" level=info msg="API listen on [::]:7070" 

# Keeps loading and finally fails after a couple of seconds


cache:
  key: '$CI_COMMIT_REF_NAME node:14.5.0-alpine'
  paths:
    - node_modules/

stages:
  - release
  - deploy


.release:
  stage: release
  image: docker:19.03.12
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker version
    - docker info
    - echo "$CI_JOB_TOKEN" | docker login --username $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker build --pull --tag $TAGGED_IMAGE --cache-from $TAGGED_IMAGE --build-arg NODE_ENV=$CI_ENVIRONMENT_NAME .
    - docker push $TAGGED_IMAGE
    - docker logout $CI_REGISTRY

.deploy:
  stage: deploy
  image: gitlab/dind:latest
  services:
    - docker:dind
  variables:
    DOCKER_COMPOSE_PATH: '~/docker-composes/$CI_PROJECT_PATH/docker-compose.yml'
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - rsync -avR --rsync-path="mkdir -p ~/docker-composes/$CI_PROJECT_PATH/; rsync" ./docker-compose.yml root@$DEPLOYMENT_SERVER_IP:~/docker-composes/$CI_PROJECT_PATH/
    - ssh root@$DEPLOYMENT_SERVER_IP "echo \"$CI_REGISTRY_PASSWORD\" | docker login --username $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker-compose -f $DOCKER_COMPOSE_PATH rm -f -s -v $CI_COMMIT_REF_NAME; docker pull $TAGGED_IMAGE; docker-compose -f $DOCKER_COMPOSE_PATH up -d $CI_COMMIT_REF_NAME;"
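A note on the private-key step: the key is piped through tr to strip Windows carriage returns before ssh-add reads it; the argument should be the escape '\r', not the bare letter r (which would delete every “r” from the key). A tiny sketch of the intended behavior (the printf input is a placeholder for $DEPLOY_SERVER_PRIVATE_KEY):

```shell
# Strip carriage returns (\r) from a multi-line secret before use,
# as done when piping a private key to ssh-add. The printf input is a
# placeholder standing in for the real key material.
printf 'line1\r\nline2\r\n' | tr -d '\r'
```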

  extends: .release
  only:
    - stage
  environment:
    name: staging

  extends: .deploy
  only:
    - stage
  environment:
    name: staging



FROM node:14.5.0-alpine



# Set working directory
WORKDIR /var/www/

# Install app dependencies
COPY package.json package-lock.json ./
RUN npm ci --silent --only=production

COPY . ./

# Start the application
CMD [ "npm", "run", "start" ]


version: '3.8'

    container_name: redis-stage
    image: redis:6.0.5-alpine
    ports:
      - '7075:6379'
    restart: always
    networks:
      - my-proxy-net

    container_name: xxx-backend-stage
    build: .
    expose:
      - '7070'
    restart: always
    networks:
      - my-proxy-net
    depends_on:
      - redis-stage
    environment:
      VIRTUAL_PROTO: https

networks:
  my-proxy-net:
    name: mynetwork

Update 1

I got a warning on the page claiming I have used over 30% of my shared runner minutes. Maybe the problem is simply not having enough minutes left.

Update 2

The release stage completes successfully.

Update 3

Before I ran into this problem, I had deployed successfully once. I decided to re-run that same commit to see whether it still succeeds, but it fails too!

Go to Source
Author: Alireza

GitLab CI push to docker-registry fails

I have a GitLab server and a separate docker-registry. I have created a testuser with a testpassword.

My .gitlab-ci.yml

image: docker:latest
services:
  - docker:18.09.7-dind
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  # See
  REGISTRY: myregistry:5000/golang-test

stages:
  - build
 # - test

before_script:
  - docker login -u testuser -p testpassword myregistry:5000

  stage: build
  tags:
    - docker-ci
  script:
    - docker ps
    - docker build -t $REGISTRY .
    - docker push $REGISTRY

When the CI pipeline runs, I get the following error:

WARNING! Using --password via the CLI is insecure. Use
--password-stdin. Error response from daemon: Get https://myregistry:5000/v2/: dial tcp: lookup worker1 on no such host ERROR: Job failed: exit code 1

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-ci"
  url = ""
  token = "89MfxRTby83ny3eTkTf2"
  tls-ca-file = "/etc/gitlab-runner/certs/"
  executor = "docker"
  pre_build_script = "export DOCKER_HOST=tcp://docker:2375"
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
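The error above (“dial tcp: lookup worker1 … no such host”) suggests the Docker daemon the job talks to cannot resolve the registry hostname. Assuming name resolution is the culprit, one option is to inject a hosts entry through the Docker executor's extra_hosts setting; a hypothetical sketch (the IP is a placeholder for the registry host):

```toml
# Hypothetical addition under the existing [runners.docker] section;
# 10.0.0.5 is a placeholder for the real IP of myregistry.
  [runners.docker]
    extra_hosts = ["myregistry:10.0.0.5"]
```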

This was after the modifications suggested here.

I can successfully log in to my docker-registry using docker login -u testuser -p testpassword https://myregistry:5000 from the server running GitLab.

Go to Source
Author: Sachith Muhandiram

Uploading a package to the GitLab Maven Repository through Sbt

I am working with a fresh trial of GitLab 13.0.6-ee installed via docker-compose. I would like to investigate using the GitLab Package Registry in our projects, especially its support for Maven artifacts, Npm libraries and container images.

I can see that my registry is empty via the “Packages & Registries” menu entry, so I assume the registry is enabled.

Because this is my main concern, I have tried to upload a package through Sbt, but I cannot get it working.

I have created a “deploy key” for my project and configured the repository, but the publication fails:

credentials += Credentials(
    "GitLab repository",
publishTo := Some(
    "GitLab repository" at 

[error] (publish) Access to URL http://localhost/api/v4/projects/2/packages/maven/default/infrastructure_2.12/0.1.0-SNAPSHOT/infrastructure_2.12-0.1.0-SNAPSHOT.pom was refused by the server: Unauthorized; Response Body: {"message":"401 Unauthorized"}

Please note that when I create the token, I can only see and select the “read_repository” scope.

  • Have you successfully published a Maven artifact through Sbt?

I guess I should configure the HTTP headers in Sbt, but I have no idea how to do that.
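On that point, GitLab's Maven endpoint authenticates sbt through an HTTP header supplied via the credentials realm; a hedged build.sbt sketch, based on what GitLab's documentation describes (host and token are placeholders; the project id 2 comes from the error URL, and the realm string must match what the server sends):

```scala
// Hypothetical build.sbt sketch for publishing to a project-level
// GitLab Maven registry. gitlab.example.com and GITLAB_TOKEN are
// placeholders; "GitLab Packages Registry" is the realm and
// "Private-Token" the header name GitLab's docs describe.
credentials += Credentials(
  "GitLab Packages Registry",            // realm reported by the server
  "gitlab.example.com",                  // registry host (placeholder)
  "Private-Token",                       // header used as the "user"
  sys.env.getOrElse("GITLAB_TOKEN", "")  // personal or deploy token
)

publishTo := Some(
  "GitLab repository" at "https://gitlab.example.com/api/v4/projects/2/packages/maven"
)
```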

  • Why can’t I select the write scopes?

Are there requirements or specific configurations needed to obtain write scopes for a registry?


Go to Source
Author: gervais.b

sharing assets between domain and subdomain without sharing with public

I want to put my images on img.mysite.tld, hosted on GitLab, and use them on my static site mysite.tld and www.mysite.tld, hosted on Netlify, while ensuring that no other domain can use them (via CORS or something else). Can you shed some light on how to achieve this?

Go to Source
Author: Sk Samim