How to use `.netrc` file in coursier fetch/launch - coursier

I have a very simple need: launching a jar that has all of its dependencies bundled in and its main class configured. All I need to do is fetch, cache (this is important because the jar is large), and launch, and I want to do all of this with a one-liner.
Coursier works just fine up to the point where it needs to authenticate:
...
unauthorized: https://
...
I already have .netrc configured, and I wonder if there is an option for coursier to use the credentials from there (curl and Python can do that).
Alternatively, is there a way to explicitly pass credentials for a repository via the command line?
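For reference, this is the .netrc layout that curl and Python's netrc module read (hypothetical host and credentials shown); whether coursier honors the file is exactly the open question here:

```shell
# Hypothetical host and credentials: substitute your repository's values.
cat > "$HOME/.netrc" <<'EOF'
machine repo.example.com
login deploy-user
password s3cret
EOF
chmod 600 "$HOME/.netrc"   # keep the credentials out of other users' reach
```

Tools that support .netrc match the `machine` entry against the host they are contacting and use the `login`/`password` pair for HTTP Basic auth.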

Related

Fetching artifacts from Nexus to Rundeck

I'm creating a Rundeck job which will be used to rollback an application. My .jar files are stored in a Nexus repository and I would like to add an option to Rundeck where I can choose a .jar version from Nexus and then run the rollback job on this.
I have tried using this plugin: https://github.com/nongfenqi/nexus3-rundeck-plugin, but it doesn't seem to be working. When I am logged in to Nexus I can access the JSON file listing the artifacts from my browser, but when I am logged off the JSON file is empty, even if the Nexus server is running.
When adding the JSON URL as a remote URL option in Rundeck like the picture below, I get no option to choose from when running the job, even if I am logged in to Nexus, as shown by picture number 2. Is there a way to pass user credentials with options, or any other workaround for this?
I would recommend installing Apache HTTPD locally on your Rundeck server and using a CGI script for this.
Write a CGI script that queries your Nexus3 service for versions available on the jar file, and echo the results in JSON format.
Place the script in /var/www/cgi-bin/ with executable bit enabled. You can test it like so:
curl 'http://localhost/cgi-bin/script-name.py'
In your job you can configure your remote URL accordingly.
I find using a local CGI script to be much more reliable and flexible. You can also handle any authentication requirements there.
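A minimal sketch of such a CGI script, written here as shell: the Nexus URL, repository name, artifact name, and credentials are all placeholders, and the `/service/rest/v1/search` endpoint is Nexus 3's search API (verify it against your Nexus version). The script queries Nexus with stored credentials and reduces the response to the plain JSON array of version strings Rundeck expects:

```shell
# Write the CGI script (destined for /var/www/cgi-bin/, executable bit set).
cat > nexus-versions.sh <<'EOF'
#!/bin/sh
# CGI header, then a blank line, then the body.
printf 'Content-Type: application/json\r\n\r\n'
# Query Nexus 3's search API and emit ["1.0","1.1",...] for Rundeck.
curl -s -u 'deploy-user:s3cret' \
  'http://nexus.example.com:8081/service/rest/v1/search?repository=maven-releases&name=my-app' |
python3 -c 'import json,sys; print(json.dumps([i["version"] for i in json.load(sys.stdin)["items"]]))'
EOF
chmod +x nexus-versions.sh
```

Because the script runs server-side, the Nexus credentials never leave the Rundeck host, and you can reshape the JSON however the job option needs.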

Fetching private repo using go mod in circle ci

I followed this and added the keys in SSH Permissions as well as the fingerprint in my circle config file.
I also added this to my ~/.gitconfig as part of my circle compile step.
[url "ssh://git@github.com/MYORGANIZATION/"]
    insteadOf = https://github.com/MYORGANIZATION/
following the official recommendation
When I SSH into the CircleCI image I can see the fingerprint being added using the command ssh-add -l -E md5. But there are no keys added in ~/.ssh/; I'd expect to have ~/.ssh/id_rsa_<fingerprint> in there.
However I still get access denied when I try to retrieve the package.
The easiest way to get this to work is to follow the instructions for adding a machine user: https://circleci.com/docs/2.0/gh-bb-integration/#enable-your-project-to-check-out-additional-private-repositories
For a more complicated solution, read on.
I recently attempted the same thing. The add_ssh_keys step should (and did in my case) add the id_rsa_<fingerprint> file.
The problem I ran into was that the key is added with an ssh config that contains:
Host !github.com *
I believe the problem is that it uses the default CircleCI key to authenticate with github. That key is valid, so github accepts it, but it most likely does not have access to the private repo in your dependencies.
To get it to work what I had to do was:
# Disable ssh-agent, which seemed to override `-i`
export SSH_AUTH_SOCK=none
# Tell git to ssh with the key I want to use
export GIT_SSH_COMMAND="ssh -i /root/.ssh/id_rsa_FINGERPRINT"
# Run some command to pull dependencies
go test ./...
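The same agent-override problem can also be handled without disabling the agent: an ~/.ssh/config entry with IdentitiesOnly forces ssh to offer only the named key when talking to github.com. This is an alternative sketch, with FINGERPRINT and the /root path being the placeholders from the steps above:

```shell
# Pin the deploy key for github.com so the agent's default key is not offered.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host github.com
  IdentityFile /root/.ssh/id_rsa_FINGERPRINT
  IdentitiesOnly yes
EOF
# Subsequent `go mod download` / `go test ./...` fetches over ssh use that key.
```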

Running a BASH command on a file located on the Puppet server and not the client

I'm trying to find a way to run a command on a SELinux .te file that is located on the puppet server, but not the client (I use the puppet-selinux module from puppetforge to compile the .te file into the .pp module file, so I don't need it on the client server). My basic idea is something like:
class security::selinux_module {
  exec { 'selinux_module_check':
    command => "grep module selinux_module_source.te | awk '{print $3}' | sed 's/;//' > /tmp/selinux_module_check.txt",
    source  => 'puppet:///modules/security/selinux_module_source.te',
  }
}
Though when trying to run it on the client server I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter source on Exec[selinux_module_check] at /etc/puppet/environments/master/security/manifests/selinux_module.pp:3 on node client.domain.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Any assistance on this would be greatly appreciated.
You can use Puppet's generate() function to run commands on the master during catalog compilation and capture their output, but this is rarely a good thing to do, especially if the commands in question are expensive. If you intend to transfer the resulting output to the client for some kind of use there, then you also need to pay careful attention to ensuring that it is appropriate for the client, which might not be the case if the client differs too much from the server.
I'm trying to find a way to run a command on a SELinux .te file that is located on the puppet server, but not the client (I use the puppet-selinux module from puppetforge to compile the .te file into the .pp module file, so I don't need it on the client server)
The simplest approach would be to run the needed command directly, once and for all, from an interactive shell, and to put the result in a file from which the agent can obtain it, via Puppet or otherwise. Only if the type enforcement file were dynamically generated would it make any sense to compile it every time you build a catalog.
I suggest, however, that you build a package (RPM, DEB, whatever) containing the selinux policy file and any needed installation scripts. Put that package in your local repository, and manage it via a Package resource.
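As a sketch of the "once, interactively" approach, the extraction pipeline from the question can simply be run by hand against the .te file; the one-line module header below is a made-up sample for illustration:

```shell
# Sample .te header for illustration (a real file has rules below this line).
printf 'module selinux_module_source 1.0;\n' > selinux_module_source.te
# The pipeline from the question: pull the third field and strip the semicolon.
grep module selinux_module_source.te | awk '{print $3}' | sed 's/;//' > /tmp/selinux_module_check.txt
cat /tmp/selinux_module_check.txt   # prints: 1.0
```

The resulting file can then be served to agents with an ordinary File resource instead of an Exec.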

Why does this user data script not pull from Git repo?

I have a launch configuration and auto-scaling group set up. The launch config uses an AMI that I've already created, based on Ubuntu 14.04, that installs Nginx, Git, and has my static files stored as a Git repo in Nginx's /usr/share/nginx/html/ directory.
The problem: the static files in my nginx/html directory are only as new as the files that were loaded in the AMI when I created it.
To remedy this, I have tried to add a "User Data" field into the launch config. The field is defined as:
#! /bin/bash
cd /usr/share/nginx/html/
git pull origin master
<my git repo's password>
But when I check to see if the instance has the latest version of the repo, I see that it doesn't. Something is wrong with my script, and I'm not sure what.
I have tested entering these commands one-by-one exactly as is into the EC2 instance via SSH, and it works exactly as expected.
Why doesn't this work in the user data field?
Note: I have verified that the 'bash' file is indeed present in /bin/bash.
You need to pass the username and password for your repository along with the repo URL.
Sample example:
#! /bin/bash
cd /usr/share/nginx/html/
git clone https://username:password@yourRepoURL.git
The problem is definitely in the bash script. Everything it contains is executed by bash, so it actually tries to execute your password as a command.
There are multiple ways to provide a password to Git in a script. See for example this question: How to provide username and password when run "git clone git@remote.git"?
It basically depends on how secure you need it to be. Maybe it's enough to have a plain-text password in Git's config (that doesn't have to be so bad if you set a restrictive mode on that file; it would be similar to using a private key without a passphrase).
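A sketch of that plain-text-but-restricted setup, using git's built-in 'store' credential helper (the credentials are placeholders): the password stays out of the URL and the script, and lives in a mode-600 file instead:

```shell
# Tell git to read HTTPS credentials from ~/.git-credentials.
git config --global credential.helper store
# Seed the file by hand (normally git writes it on the first successful auth).
printf 'https://username:password@github.com\n' > ~/.git-credentials
chmod 600 ~/.git-credentials   # restrict it like a passphrase-less private key
```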
It's been a while since I asked this; I've learned a lot since then.
You can pass your username and password as part of the URL, but that is bad form: if you share the code or give anyone access to your script, they will know your account credentials.
The best way to do this would be to set up your server to connect to your Git repo over SSH - I believe this is industry best practice as it is more secure and password-less.
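A sketch of that SSH setup, with the repo path and key name as placeholders: generate a passwordless key on the instance (or while baking the AMI), register its public half as a read-only deploy key on the Git host, and have the user-data script pull over SSH:

```shell
mkdir -p ~/.ssh
# Generate a passwordless deploy key; register the printed public key as a
# read-only deploy key on the Git host (one-time manual step).
ssh-keygen -t ed25519 -N '' -f ~/.ssh/deploy_key -q
cat ~/.ssh/deploy_key.pub

# In the user-data script, pull over SSH with that key.
SITE=/usr/share/nginx/html
if [ -d "$SITE/.git" ]; then
  GIT_SSH_COMMAND="ssh -i $HOME/.ssh/deploy_key" git -C "$SITE" pull origin master
fi
```

Because the deploy key is read-only and repo-scoped, a compromised instance exposes far less than embedded account credentials would.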

Global Subversion SSH config in Windows / Checking out Subversion project as SYSTEM on Windows

I'm trying to set up a scheduled Subversion commit from a Windows Server 2003 machine over SVN+SSH as a task. I'd like the commit script to be executed as the SYSTEM user. So I'm guessing that, for that to work, I need to check out the repository as SYSTEM too, but I have been unable to achieve that so far.
I'm already able to achieve the above with my own user over SSH. I've done the following:
I added a [tunnels] entity in my local subversion configuration:
ssh = plink.exe -i "C:/Keys/my_key.ppk"
Added the key to the authorized_keys file on the server running Subversion
I checked out the repository with a script as below:
svn co svn+ssh://user@server/path/to/repo/ C:\Local\Project\Path
I'd now like to reproduce the above steps for SYSTEM user, to be able to run a scheduled commit later. The problem I'm facing is I don't know how to check out the repository as SYSTEM, because:
I don't know the syntax to use to check out a repository as SYSTEM
I don't know where the global (or SYSTEM's) Subversion config is stored on a Windows Server 2003. I've already tried: C:\Documents and Settings\Default User\Application Data\Subversion and C:\Documents and Settings\Administrator\Application Data\Subversion, but without success.
I also read somewhere that I could possibly use svn switch for what I want, but I wouldn't know how to svn switch as SYSTEM either. I also considered writing scripts for the svn checkout or switch and running them as SYSTEM, but then I'd still need a global SVN config in which to add my_key.ppk.
I hope the above description is clear enough. I've been struggling with it for a long time now and am having problems summarizing it myself. Any hints appreciated.
As a side, that doesn't seem to be totally off-topic: https://serverfault.com/q/9325/122307
This is not a real answer to your question, yet it might solve your problem: Why not use svn <command> --config-dir ARG or svn <command> --config-option ARG?
You could specify the config file/option like this, thus being able to set [tunnels].
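As a sketch of the --config-option route (syntax is FILE:SECTION:OPTION=VALUE), the tunnel definition from the question's [tunnels] section can be supplied inline, using the same key path and URL as above:

```
svn checkout svn+ssh://user@server/path/to/repo/ C:\Local\Project\Path --config-option="config:tunnels:ssh=plink.exe -i C:/Keys/my_key.ppk"
```

This sidesteps the question of where SYSTEM's per-user Subversion config lives, since no config file is consulted for the tunnel at all.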
@cxxl really answered the question when he mentioned --config-dir. I'll just try to shed some light on the problem.
I'm guessing, for that to work I need to check-out the repository as SYSTEM
That's a wrong guess: the locally stored per-user auth data isn't used in the case of SSH auth, because for ssh the authentication is performed remotely. The per-user auth dir
\%AppData%\Subversion\auth>dir /W
...
[svn.simple] [svn.ssl.client-passphrase]
[svn.ssl.server] [svn.username]
...
contains stored credentials only for http|https|svn and cert-based client authentication, and nothing for ssh-accessed repositories.
I.e., your script executed under the LSA (SYSTEM) account must be able to:
* read the Working Copy files (checked out under any other real local user), and maybe write them (I can't recall the permission requirements for the .svn dir)
* read, and thus use, a predefined and fine-tuned Subversion config file (the tunnels section), which can be the config of any other user
PS: svn switch changes the repository URL a Working Copy is linked to and has nothing to do with users.
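For completeness, one way to actually get a shell as SYSTEM for the one-off checkout is Sysinternals PsExec (-s runs as SYSTEM, -i makes it interactive); the config dir path is a placeholder, and --config-dir points svn at a location SYSTEM can read:

```
psexec -s -i cmd.exe
svn co svn+ssh://user@server/path/to/repo/ C:\Local\Project\Path --config-dir C:\SvnConfig
```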
