How to Fix Gradle Wrapper Permission Denied Error

I am getting a “gradlew permission denied” error after pushing my code to my GitLab repository. The build cannot continue because the Gradle wrapper is not allowed to execute.

What is causing this, and how can I fix it?

ANSWER

The gradlew file was committed without its executable bit set, so the CI runner that checks out the repository cannot execute it. From a local terminal/command line, use the following Git command to mark the file as executable in the index:

git update-index --chmod=+x gradlew

Then commit the change to the gradlew file.

Push the changes to your Git repository.
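
Putting the steps together, the full sequence from a local clone looks like this (the commit message is only an example):

git update-index --chmod=+x gradlew
git commit -m "Make gradlew executable"
git push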

You can read more at this page: https://www.joseyamut.xyz/2020/08/15/fix-gradlew-permission-denied-on-openshift-deploy/

How to deploy python microservices app updates?

I’m working on a Python microservices-based app and I’m wondering how to ship updates.

Let’s start with a summary of what I have:

  • Code base on Git (several repos)
  • The app runs on several remote Raspberry Pis
  • I access these via a VPN network
  • The Raspberry Pis have limited internet access via a GSM M2M network
  • The main custom lib is installed with pip
  • Dependencies are installed at first launch, in-house, with a good internet connection
  • The code of each service is obfuscated in place

At the moment, to update my services, I run a shell script I wrote that downloads the latest version of each service. It feels like such a repetitive task for each device.

I only see two simple options:

  • Connect to each device via VPN + SSH and run the update script (or an enhanced version of it)
  • At the end of the CI, run a script that sends latest.zip to each device and creates a service there (script + cron? Python + systemd?) that checks for a new .zip and updates the services (roughly sketched below)

Neither of those options looks particularly clean.
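
To make the second option concrete, the per-device updater I have in mind would run from cron or a systemd timer and look roughly like this (the paths and service name are made up):

#!/usr/bin/env bash
set -euo pipefail

DROP_ZIP="/srv/updates/latest.zip"      # made-up path where CI drops the archive
INSTALL_DIR="/opt/myservice"            # made-up install dir
STATE_FILE="/var/lib/myservice/current.sha256"

# Nothing to do if CI has not pushed an archive yet.
[ -f "$DROP_ZIP" ] || exit 0

# Compare the dropped archive's checksum with the last one installed.
new_sum=$(sha256sum "$DROP_ZIP" | cut -d' ' -f1)
old_sum=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

# Only unpack and restart when something actually changed.
if [ "$new_sum" != "$old_sum" ]; then
    unzip -qo "$DROP_ZIP" -d "$INSTALL_DIR"
    echo "$new_sum" > "$STATE_FILE"
    systemctl restart myservice.service  # made-up unit name
fi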

What is usually done in similar cases?

Do you have any ideas to share? I haven’t found anything that suits my needs.

Thanks for your input!

Go to Source
Author: AFZ84

How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to a scaled AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials describing cloning a repo directly to a single instance, or using CodeDeploy to deploy to a single instance, and then using autoscaling groups. Basically equivalent to my Heroku example.

But what does it actually look like in production systems at, say, Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Are they pushing to a single “global” bastion instance, which then fans the release out to the rest of the private instances across multiple regions in one go? Or is some sort of plan (like a terraform plan) created, which needs manual approval, and then a custom deploy script SSHes into the bastion instance and fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically produced? Is traffic switched over at the load balancer? Etc.

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am just wondering about a single service. I would imagine this process would be repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. What would a deployment for that typically look like? I am looking to create some GitHub Actions to deploy to AWS as practice for a large project, but have no idea what the state of the art is, and haven’t been able to find any information on the topic.
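
For concreteness, is the per-region rollout something roughly like the sketch below, with CI baking an AMI (e.g. with Packer) and then rolling it through the autoscaling group behind the load balancer? All names are made up, and it assumes the group is configured to track the $Latest launch template version:

#!/usr/bin/env bash
set -euo pipefail

AMI_ID="$1"                          # AMI id produced by the CI bake step
LT_NAME="webserver-launch-template"  # made-up launch template name
ASG_NAME="webserver-asg"             # made-up autoscaling group name (one per region)

# Look up the current latest version so the new one inherits its settings.
src_version=$(aws ec2 describe-launch-template-versions \
    --launch-template-name "$LT_NAME" \
    --versions '$Latest' \
    --query 'LaunchTemplateVersions[0].VersionNumber' \
    --output text)

# Create a new launch template version that only swaps in the new AMI.
aws ec2 create-launch-template-version \
    --launch-template-name "$LT_NAME" \
    --source-version "$src_version" \
    --launch-template-data "{\"ImageId\": \"$AMI_ID\"}"

# Replace instances gradually while keeping most capacity in service.
aws autoscaling start-instance-refresh \
    --auto-scaling-group-name "$ASG_NAME" \
    --preferences '{"MinHealthyPercentage": 90}'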

Go to Source
Author: Lance Pollard

How to automate the key exchange in WireGuard when you deploy a cluster of machines?

Let’s assume you want to deploy a cluster of machines on Hetzner Cloud. For simplicity, let’s call them worker1, worker2, and worker3. They need to communicate with a server called master, which will be running on a different account than the workers. Ideally, the whole setup should not be open to the internet. Unfortunately, Hetzner supports private networks only within the same account.

To make it work, you can set up your own VPN using WireGuard. Conceptually, it is not hard: you need to set up three connections (between the master and each worker). The tricky part is how to automate the key exchange. Ideally, it should not become more work when you deploy additional workers (e.g. 100 workers instead of 3).

Setting up such a VPN cluster sounds like a common problem, but I cannot find any recommendations on how to set up 1-to-n or n-to-m connections, only tutorials on how to peer two machines. I’m thinking of automating the key exchange with Ansible (generate keys, gather them, install them on the master), but wanted to check first whether there is an easier solution to the problem that I missed.

In SSH, workers could share their key, which would simplify the problem. In WireGuard, keys cannot be shared, as far as I understand. How would you automate the setup of a VPN with WireGuard, so that each worker can reach the master? Or is WireGuard the wrong choice for the problem?
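
To make the Ansible idea concrete, the kind of key generation and config assembly I have in mind looks roughly like this (hostnames, addresses, and file names are made up; in practice each host would generate its own keypair, e.g. in an Ansible task, so private keys never leave the machine and only public keys are gathered):

#!/usr/bin/env bash
set -euo pipefail

MASTER_CONF="master-wg0.conf"   # [Peer] blocks for the master's wg0.conf get appended here
i=2                             # workers get 10.0.0.2, 10.0.0.3, ...
for host in worker1 worker2 worker3; do
    # Generate a keypair for this worker.
    wg genkey | tee "${host}.key" | wg pubkey > "${host}.pub"

    # Append a [Peer] block for it to the master's config.
    cat >> "$MASTER_CONF" <<EOF

[Peer]
# ${host}
PublicKey = $(cat "${host}.pub")
AllowedIPs = 10.0.0.${i}/32
EOF
    i=$((i + 1))
done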

Clarification:

  • In my scenario, it is not possible to move the workers and master to the same account; otherwise, Hetzner networks would be the straightforward solution for setting up a private network.
  • If you are not familiar with Hetzner Cloud, that is not a problem. You can assume you get normal Linux machines, but you are otherwise on your own (it does not support VPC peering across accounts as AWS does). You can still use all the Linux tools available for creating the VPN setup. WireGuard would be my first choice, but I’m open to other techniques.

Go to Source
Author: Philipp Claßen

Improving deployment process of agent software

I’m a developer on an agent software team, researching ways to streamline the mass deployment of our agent on clients’ machines. Currently the process involves running a script to determine the exact package needed, downloading a bsx from our company’s FTP and running it, followed by running an executable with flags for registration.

I’ve looked into Ansible as a possible improvement; however, I’m unclear on the value it would provide in this case. What’s more, I’m not sure I’m the right person to be authoring the clients’ playbooks for such a thing. At this point it appears to me that a better solution would be to develop our own script that further automates the process, and let each customer’s DevOps team deploy it in whichever way they see fit.
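
For illustration, the kind of self-contained script I have in mind would roll the current steps into one, roughly like this (the server name, package naming scheme, paths, and flags are all placeholders):

#!/usr/bin/env bash
set -euo pipefail

FTP_HOST="ftp.example.com"                                  # placeholder FTP server
REG_TOKEN="${1:?usage: install_agent.sh <registration-token>}"

# 1. Determine the exact package for this machine (distro + architecture).
pkg="agent-$(. /etc/os-release && echo "$ID$VERSION_ID")-$(uname -m).bsx"

# 2. Download the self-extracting archive and run it.
curl -fsSL "ftp://${FTP_HOST}/agents/${pkg}" -o "/tmp/${pkg}"
chmod +x "/tmp/${pkg}"
"/tmp/${pkg}"

# 3. Register the freshly installed agent.
/opt/agent/bin/agent-register --token "$REG_TOKEN"          # placeholder registration binary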

Before I present these findings to my managers, I would appreciate any opinions on the matter the community could provide.

Go to Source
Author: Ekadanta

Deploy .NET application from Jenkins on Linux to MS Azure Web service

We have a .NET application that is deployed to Azure Web Services. Now it is time to create a deployment pipeline for it.

I want to know, at a high level, how a .NET application can be deployed to Azure from CentOS-based Linux servers.

Do we need the Azure CLI (az) and the Azure .NET SDK installed on the Linux server to deploy it?
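
For example, would a Jenkins shell step along these lines be the right direction? It assumes the .NET SDK, zip, and the Azure CLI (az) are installed on the CentOS agent; the resource group, app name, and service-principal variables are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Build and publish the application, then zip the output.
dotnet publish -c Release -o publish_output
(cd publish_output && zip -r ../app.zip .)

# Log in non-interactively with a service principal stored in Jenkins credentials.
az login --service-principal \
    --username "$AZURE_CLIENT_ID" \
    --password "$AZURE_CLIENT_SECRET" \
    --tenant "$AZURE_TENANT_ID"

# Push the zipped build output to the existing App Service.
az webapp deployment source config-zip \
    --resource-group my-resource-group \
    --name my-webapp \
    --src app.zip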

Go to Source
Author: pleyades

How to manage user Java deployments in a small team, across different Java applications (which require different Java versions)?

I provide a Java application (a desktop client) which requires JDK 11 (it really does).
We installed that JDK for all users and were happy for a while.

Then someone shared his Java application with us (also a desktop client, but unrelated to mine), which only works with JRE 8 (he is not willing to recompile for 11, and his status in the company is such that we must conform to him at all costs)…

I’ve instructed the team members to switch their JAVA_HOME between the installed JRE 8 and JDK 11 as needed, but this is no more than an annoying manual stopgap.

I am wondering what a good, modern approach would be to allow, for example, a system-wide JRE 8 for the applications I don’t control, while also coding a little something into my project that switches to JDK 11 for the duration of their use of my application.

Since I am delivering a project with source code (which compiles every time they run it, because they sometimes need to modify some of the Java code), I am thinking along the lines of including a portable JDK 11 and having the project refer to that as its SDK via a relative path (so that they don’t have to install anything to get my application going).
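
Roughly, the relative launcher I have in mind would look like this (the paths and the main class are made up): the project ships a portable JDK under tools/jdk-11 and only this script sees it, so the system-wide JRE 8 stays the default for everything else:

#!/usr/bin/env bash
set -euo pipefail

PROJECT_DIR="$(cd "$(dirname "$0")" && pwd)"
export JAVA_HOME="$PROJECT_DIR/tools/jdk-11"   # made-up path to the bundled JDK
export PATH="$JAVA_HOME/bin:$PATH"

# Rebuild the sources they may have edited, then run the client.
"$JAVA_HOME/bin/javac" -d "$PROJECT_DIR/out" $(find "$PROJECT_DIR/src" -name '*.java')
"$JAVA_HOME/bin/java" -cp "$PROJECT_DIR/out" com.example.MainClient "$@"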

I actually tried this a little but couldn’t get it working, and I am wondering whether it’s a good approach worth pursuing or whether there is a better idea.

Can you share your experience with this relatively common problem?

Edit: Also, I am not sure a containerized solution is viable. It sounds like I’ll have to help them set up their systems to support that and then wrap my project in a container, which sounds like a can of worms for someone with my skills (a middleweight Java dev, at best).

Go to Source
Author: DraxDomax