I am close to publishing my first open-source library to the Sonatype repository. Everything works fine locally, but I want to automate the process so that whenever I create a new tag/version, that version gets published directly from GitLab CI.
Now, I have two problems: the secring.gpg file and the passwords.
I know that I can have protected variables for my CI, but I don't know two things:
How to use them during my build process (I am using Gradle):
I need the signing.password, the signing.secretKeyRingFile path, and the ossrhPassword, all of which are located in the gradle.properties file.
How to store the secring.gpg file, which is a binary file, so I can't copy-paste it into a variable.
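A sketch of one way to wire this up in .gitlab-ci.yml (untested; GPG_KEY_BASE64, SIGNING_PASSWORD, and OSSRH_PASSWORD are protected CI/CD variables you would define yourself): base64-encode the keyring locally (e.g. base64 -w0 secring.gpg) and paste the output into GPG_KEY_BASE64; the job then decodes it back into a binary file and passes the secrets as -P properties, which take precedence over values in gradle.properties:

publish:
  stage: deploy
  only:
    - tags
  script:
    # Rebuild the binary keyring from the base64-encoded protected variable
    - echo "$GPG_KEY_BASE64" | base64 -d > "$CI_PROJECT_DIR/secring.gpg"
    # Pass the secrets on the command line instead of committing them in gradle.properties
    - ./gradlew publish -Psigning.password="$SIGNING_PASSWORD" -Psigning.secretKeyRingFile="$CI_PROJECT_DIR/secring.gpg" -PossrhPassword="$OSSRH_PASSWORD"

Whether the task is publish or uploadArchives depends on your Gradle setup, but the pattern is the same either way.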
I have a standalone Cloud Source Repository (not cloned from GitHub).
I am using it to automate the deployment of ETL pipelines, so I am following Google's recommended guidelines, i.e. committing each ETL pipeline as a .py file.
The Cloud Build trigger associated with the Cloud Source Repository runs the steps defined in the cloudbuild.yaml file and puts the resulting .py file in the Composer DAG bucket.
Composer then picks up this DAG and runs it.
Now my question is: how do I orchestrate the CI/CD across dev and prod? I did not find any proper documentation for this, so for now I follow a manual approach: if my code passes in dev, I commit the same code to the prod repo. Is there a better way to do this?
Cloud Build Triggers allow you to conditionally execute a cloudbuild.yaml file in various ways. Have you tried setting up a trigger that fires only on changes to a dev branch?
Further, you can add substitutions to your trigger and use them in the cloudbuild.yaml file to, for example, name the generated artifacts based on some aspect of the input event.
See: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values and https://cloud.google.com/build/docs/configuring-builds/use-bash-and-bindings-in-substitutions
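For instance, a minimal sketch of such a cloudbuild.yaml (the _COMPOSER_BUCKET substitution and the dags/ path are placeholders to adapt): you create one trigger on the dev branch and one on the prod branch, both pointing at this file, and each overriding the substitution with its own Composer bucket:

steps:
  # Copy the DAG file(s) into the Composer bucket for this environment
  - name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'dags/*.py', 'gs://${_COMPOSER_BUCKET}/dags/']
substitutions:
  _COMPOSER_BUCKET: 'my-dev-composer-bucket' # dev default; the prod trigger overrides it

This keeps a single repo and a single build definition; only the trigger (branch filter plus substitution value) differs between environments.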
For one of our repositories we set "Custom CI configuration path" inside GitLab to a remote gitlab-ci.yml. We do this to prevent Developers from changing the gitlab-ci.yml file (since protected files are only available in EE Premium and up). But apart from this purpose, the Custom CI configuration path feature should still work for Merge Requests.
Being in repo
group1/repo1
we set
.gitlab-ci.yml@group1/repo1-ci
The repo1-ci repository exists, and CI works correctly when we push to the configured branches, etc.
For Merge Request functionality GitLab tells us:
Detached merge request pipeline #123 failed for ...
Project group1/repo1-ci not found or access denied!
We added the developers to the repo1-ci repo as Developers so they could read the files, but it does not help. In any case, our expectation was that the pipeline does not run with user permissions, so it should simply find the gitlab-ci.yml file.
Any ideas on this?
So our expectations were right, and it seems that we have to add one important thing to our considerations:
If a user interacts with the Merge Request features in the GitLab UI and you are using a "Custom CI configuration path" for your gitlab-ci.yml file, please ensure that
the user has at least read permission on that remote file, even if you moved it to another repo on purpose (e.g. to use enhanced file protection in Premium/Ultimate, or to push/merge-protect the branches against the Developer role)
the permission change has actually taken effect in the user's running session
The last part was what failed for our users, as it only worked one day later. It seems they just continued working from their open merge request page, and GitLab checks accessibility from within this session (using a cookie, token, or something similar that was not updated with the new access to the remote repo/file).
It works!
I am using GitLab as a repository and want to push my code to EC2 whenever a commit is made on GitLab. The GitLab CI/CD documentation states that I have to add a .gitlab-ci.yml file at the root directory of my repo. This is actually a problem for me because I want the project repo to contain only code, not configuration-related info like build and deploy steps. Also, anybody who clones the repo would see the location where my code is pushed/deployed on EC2. Is there any workaround for this problem?
You'll need to use a gitlab-ci.yml file to deploy your application. The file provides instructions and a pipeline "infrastructure" which, if properly configured, will build, test, and automatically deploy your code.
If you are worried about leaking credentials, you should use masked CI/CD variables to hide your important bits, like "$SERVERNAME" or "$DB_PASSWORD" for instance; see the sketch below.
Lastly, you can use the power of .gitignore to avoid publishing credentials or other sensitive bits to your project's servers or instances.
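A minimal sketch of what such a job could look like (DEPLOY_HOST, SSH_PRIVATE_KEY, and the target path are hypothetical names for masked, protected CI/CD variables and values you would define yourself):

deploy:
  stage: deploy
  only:
    - master
  script:
    # Materialize the SSH key from a masked variable; it never lives in the repo
    - echo "$SSH_PRIVATE_KEY" > key.pem
    - chmod 600 key.pem
    # Push the code to the EC2 instance; the host address stays out of the repo too
    - scp -i key.pem -o StrictHostKeyChecking=no -r . "ubuntu@$DEPLOY_HOST:/srv/app"

The .gitlab-ci.yml file itself still has to be committed (or referenced from another project via GitLab's custom CI configuration path feature), but with variables it no longer reveals where the code is deployed.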
I want to create an automatic upload to FTP using the 'FTP Upload' runner, with a different build configuration that depends on a successful build of the main configuration. The thing is, I don't know the path pattern. As of now, the path looks like this:
C:\ProgramData\JetBrains\TeamCity\system\artifacts\<project_name>\<build config name>\528
What variable contains this last number?
The problem was a bad description of my problem; here is a more definitive one:
I have to store artifacts on FTP. The FTP server is on the same machine as the TeamCity server and agent (don't ask me why). So I have to somehow grab the artifacts and put them into ftp://"project"/msi and ftp://"project"/nuget, depending on the build configuration. I tried grabbing the artifacts directly from the folder shown in the initial post; that idea failed.
The solution is to create another build configuration and set up an artifact dependency; this makes the artifacts reachable from the new build configuration, which allows the FTP Upload runner to use them.
Thanks everyone!
How best should I accomplish the following deployment objectives with Git deployment for Azure?
Easily switch, when working locally, between fake in-memory data and (eventually) a non-production snapshot of real data
Deploy to a staging environment on Azure such that at first I can use fake in-memory data and eventually move to a non-production snapshot of real data
Deploy to production with real data
I currently deploy using GitHub and a staging branch to a staging Azure website. Since I deploy from a public repo, the web.config file is ignored by Git. (EDIT: I just learned that ignoring web.config actually causes a deployment error on Azure.)
Any help/suggestion is appreciated.
It's actually supposed to be simpler than that. Please see this page. Basically, the idea is that you set some AppSettings in the Azure portal to override the default values that are committed to your repo.
Well... Here's what I did that works for me right now.
To quickly switch to fake in-memory data when working locally, I use a compilation symbol LOCAL and a preprocessor directive #if LOCAL.
The same compilation symbol works when deploying to Azure, so I can work with fake data until I'm ready to switch to the real DB. I can also use app settings if I want to make the switch even easier.
The challenge was to keep a web.config with "secrets" (like the connection string) locally without exposing it to GitHub. I added it to .gitignore, but then my deployments started failing on Azure because it could not find the web.config. Just copying it to wwwroot via FTP did not help: Azure was looking for web.config in the repository.
So, to make this work, I "slightly" altered the deployment process by first copying the Web.config from wwwroot into the repository before running the default deploy.cmd. This was simple; here is what you do:
Create a .deployment file in the root of your repository with the following:
[config]
command = deploy.my.cmd
Create deploy.my.cmd with the following script:
:: Copy the Web.config previously uploaded to wwwroot (via FTP) into the cloned repo
xcopy %DEPLOYMENT_TARGET%\Web.config %DEPLOYMENT_SOURCE%\\ /Y
:: Then hand off to the default Kudu deployment script
deploy.cmd
Now, I have web.config with secrets locally. Git ignores this file. I uploaded the correct web.config to Azure via FTP, and it gets used whenever I deploy.