Convert .lic file to plain text - continuous-integration

I have a .lic file with credentials needed to download a program into a Docker container during build time. How do I convert the .lic file to a plain-text string that can be stored in a CI/CD platform's (GitHub Actions) environment variable, so I can use it during build and deployment?
I found a solution a while ago that involved converting the .lic file to a .b64, then to a string. Unfortunately, I didn't bookmark the solution and cannot find it again.
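That approach amounts to base64-encoding the file so it survives as a single text value. A minimal sketch, assuming GNU coreutils base64 and a GitHub Actions secret named LIC_FILE_B64 exposed as an environment variable (the secret and file names are only examples):
# locally: encode the file as a single line of text, then paste the output into the secret
base64 -w0 program.lic
# during the build: decode the secret back into the original file
echo "$LIC_FILE_B64" | base64 -d > program.lic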

Related

Add file as parameter

I want to attach a file as an environment parameter, but in the settings the only parameter type is text. I can't just copy-paste the file's contents and add them as plain text, because the file is binary (it can be opened with pickle, etc.). Is there another way to solve this problem?
A workaround for passing a file as a parameter would be to:
copy the file to all the build agents that can run the build (at location %FileAsParamLocation%)
pass its location as a build parameter: %FileAsParamLocation%
If you have only one build agent, or if all your build agents run on the same machine, then you only need to copy the file once. Otherwise, copy the parameter file to every agent, at the same %FileAsParamLocation%.
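A sketch of how a shell build step could then consume it, following the %FileAsParamLocation% convention from the answer above (the destination file name is only an example):
# the build parameter expands to the file's path on the agent
cp "%FileAsParamLocation%" ./input-file.bin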

How to put a file (.jks, .p12) as variable?

I'm working on a GitLab CI pipeline to automate building, signing the APK, and deploying to the Play Store.
The pipeline works fine, but the two files, the ".jks" used to sign the APK and the ".p12" for my Google Cloud Platform service, are currently in my repository, and that's not a secure way to do it. What I want to do is put these two files (".jks" and ".p12") into GitLab CI variables to avoid keeping them in my repo...
Yes, you can define a GitLab CI variable of file type.
Here is the doc for GitLab CE:
https://docs.gitlab.com/ee/ci/variables/#use-file-type-cicd-variables
In the settings of your project, you will need to define a variable (file type).
Then GitLab will create a temporary file for this variable with your file content.
You can use it directly,
or use the cp command like this:
cp $MY_SECRET_FILE $HOME/.m2/settings.xml
Edit:
Binary files do not seem to be supported, so follow and upvote the following issue: https://gitlab.com/gitlab-org/gitlab/-/issues/205379
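Until that issue is resolved, one workaround (a sketch, not part of the answer above; the variable and file names are only examples) is to base64-encode the binary file, store the encoded text in a variable, and decode it in the job script:
# locally: encode the keystore as one line of text and store the output in a CI/CD variable, e.g. KEYSTORE_B64
base64 -w0 release.jks
# in the job script: decode the variable back into the binary file
echo "$KEYSTORE_B64" | base64 -d > release.jks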

Can lftp execute a command on the downloaded files (as part of the mirroring process)?

This may be asking too much from an already very powerful tool, but is there a chance that lftp mirror can execute a command during the mirroring process (from a remote directory to the local machine)?
Specific example: lftp is asked to mirror a remote directory with xml files into a local folder and as soon as each file is downloaded/updated, it converts the file to JSON format using xml2json.
I can think of a solution that relies on monitoring the local copy of the mirrored folder for changes via find and then executing xml2json on the new/updated files, but perhaps there is a simpler way?
You can use xfer:verify and xfer:verify-command settings to run a local command on every transferred file.
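A sketch of what that could look like, assuming a small wrapper script (the script name and the xml2json invocation are only examples; lftp passes the transferred file's path to the verify command):
#!/bin/sh
# convert-to-json.sh: convert the file lftp just downloaded ($1) into JSON next to it
xml2json "$1" > "${1%.xml}.json"
The mirror run with verification enabled could then be:
lftp -e "set xfer:verify true; set xfer:verify-command ./convert-to-json.sh; mirror /remote/xml ./local/xml; quit" sftp://user@example.com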

Jekyll private deployment?

I have created a Jekyll site. Regarding deployment, I don't want to host it on GitHub Pages. To host it on a private domain, I learned from the documentation that I need to copy all the files from the _site folder. That all works.
Question:
Each time I add a new blog post, I run jekyll build on the command line and then copy the newly generated HTML to the hosting server. Is there an easy way to update the site without compiling each time?
The reason I am asking is that the site will be updated by a non-technical person.
Thanks for the help!!
If you don't want to use GitHub Pages, AFAIK there's no other way than to compile your site each time you make a change.
But of course you can script/automate as much as possible.
That's what I do with my own blog as well. I'm hosting it on my own webspace instead of GitHub Pages, so I need to do these steps for each update:
Compile on local machine
Upload via FTP
I can do this with a single click (okay, a single double-click).
Note: I'm on Windows, so the following solution is for Windows.
But if you're using Linux/MacOS/whatever, of course you can use the tools given there to build something similar.
I'm using a batch file (the Windows equivalent to a shell script) to compile my site and then call WinSCP, a free command-line FTP client.
WinSCP allows me to store session configurations, so I saved the connection to my server there once.
Because of this, I didn't want to commit WinSCP to my (public) repository, so my script expects WinSCP in the parent folder.
The batch file looks like this:
call jekyll build
echo If the build succeeded, press RETURN to upload!
pause
set uploadpath=%~dp0\_site
%~dp0\..\winscp.com /script=build-upload.txt /xmllog=build-upload.log
pause
The first parameter in the WinSCP call (/script=build-upload.txt) specifies the script file that contains the actual WinSCP commands.
This is in the script file:
option batch abort
option confirm off
open blog
synchronize remote -delete "%uploadpath%"
close
exit
Some explanations:
%~dp0 (in the batch file) is the folder that contains the batch file itself
The set uploadpath=... line (in the batch file) saves the complete path to the generated site into an environment variable
The open blog line (in the script file) opens a connection to the pre-saved session configuration (which I named blog)
The synchronize remote ... line (in the script file) uses the synchronize command to sync from the local folder (saved in %uploadpath%, the environment variable set by the batch file) to the server.
IMO this solution is suitable for non-technical persons as well.
If the technical person in your case doesn't know how to use source control, you could even script committing & pushing, too.
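If you do script it, the commit-and-push part could be as small as this (a sketch; the paths and commit message are only examples):
git add _posts
git commit -m "New post"
git push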
There are a number of options available which are mentioned in the documentation: http://jekyllrb.com/docs/deployment-methods/
If you are using Git, I would recommend the Git Post-Receive Hook approach. It simply builds the site after the new code is received:
# locations: the bare Git repo, a temporary clone, and the web root served by the host
GIT_REPO=$HOME/myrepo.git
TMP_GIT_CLONE=$HOME/tmp/myrepo
PUBLIC_WWW=/var/www/myrepo
# clone the freshly pushed code, build the site straight into the web root, then clean up
git clone $GIT_REPO $TMP_GIT_CLONE
jekyll build -s $TMP_GIT_CLONE -d $PUBLIC_WWW
rm -Rf $TMP_GIT_CLONE
exit
Since you mentioned that it will be updated by a non-technical person, you might try something like rack-jekyll to automatically rebuild when new files are FTP'd.

How can I track system-specific config files in a repo/project?

I have a ruby project, and the database host and port might be different on dev and production. I need a way to get different values for those into my scripts for the two environments.
The project should be complete - so there should be some way to specify default values. I don't want a clone to be missing the config files. So ignoring them completely won't work.
How do you solve this problem with git?
I would recommend using:
a template config file (a file with variable names in place of the host and port values)
a script able to replace those variable names with the appropriate values depending on the environment (detected by the script)
The Git solution is then a gitattributes filter driver (see also the Pro Git book).
A filter driver consists of a clean command and a smudge command, either of which can be left unspecified.
Upon checkout, when the smudge command is specified, the command is fed the blob object from its standard input, and its standard output is used to update the worktree file.
Similarly, the clean command is used to convert the contents of worktree file upon check-in.
That way, the script (managed with Git) referenced by the smudge command can replace all the variables with environment-specific values, while the clean script restores the content to the untouched config file.
When you checkout your Git repo on a prod environment, the smudge process will produce a prod-like config file in the resulting working tree.
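A minimal sketch of how such a filter could be wired up (the filter name, file path, and script names are only examples; both scripts read the file content on stdin and write the transformed content to stdout):
# .gitattributes: run the "envconfig" filter on the config template
config/database.yml filter=envconfig
# register the filter; smudge expands placeholders on checkout, clean restores the template on check-in
git config filter.envconfig.smudge ./scripts/expand-config.sh
git config filter.envconfig.clean ./scripts/restore-template.sh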
