Generate a certificate with letsencrypt locally - https

I'm new to generating certificates and using Let's Encrypt, so I'm not sure if this is a dumb question or even possible.
I want to create a small example web application using Node.js, and I want to test how to implement HTTPS and how to get a proper certificate.
So I tried to use Let's Encrypt, but it doesn't seem to work.
I'm using my local machine (Windows 10) and I cloned the Git repository. Afterwards I tried to execute ./letsencrypt-auto, but Windows won't recognize the script as a command.
How can I use Let's Encrypt locally on my Windows 10 machine, where no web server is (usually) running?

letsencrypt-auto only works with Apache on Debian-based OSes (for now). There's no way to use it on Windows, yet.
That said, people are trying. You might find this project interesting. (Disclaimer: I have no affiliation with that and haven't tried it myself.)

Alternatively, you can look at https://github.com/minio/concert. It is built with Go, so you can get a Windows binary quite naturally.
Install
You need to have golang installed to compile concert.
$ go get -u github.com/minio/concert
How to run?
Generates certs in the certs directory by default.
$ sudo concert gen <EMAIL> <DOMAIN>
Generate certificates in a custom directory.
$ sudo concert gen --dir my-certs-dir <EMAIL> <DOMAIN>
Renew certificates in the certs directory by default.
$ sudo concert renew <EMAIL>
Renew certificates in a custom directory.
$ sudo concert renew --dir my-certs-dir <EMAIL>
Run a server with automatic renewal.
$ sudo concert server <EMAIL> <DOMAIN>
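Once you have the certificates, you can sanity-check them before wiring them into your Node.js HTTPS server. For example (the file name is illustrative; check what concert actually writes into the certs directory):
$ openssl x509 -in certs/public.crt -noout -subject -issuer -dates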

Alternatively, you can use ngrok to expose your local port 80 and make it available to the world via a secure tunnel on subdomain.ngrok.io. You can also point your own domain at that subdomain with a CNAME record.
All you have to do is:
Create a free account with https://ngrok.com/. It works on all operating systems.
Run ngrok http 80 and note your subdomain.ngrok.io
Add the above subdomain to your /etc/hosts as 127.0.0.1 subdomain.ngrok.io. This way you will be able to access that domain locally with SSL, while ngrok makes sure Let's Encrypt is able to reach it via the Internet. A sketch of the whole flow follows below.
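Put together, the steps look roughly like this (a sketch only: it assumes certbot is installed, and that ngrok assigned you abc123.ngrok.io, which will differ for you):
# in one terminal
ngrok http 80
# in another terminal
echo "127.0.0.1 abc123.ngrok.io" | sudo tee -a /etc/hosts
sudo certbot certonly --standalone -d abc123.ngrok.io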
Edit: Note that this method might not work reliably. Let's Encrypt has a rate limit of 20 certificates per registered domain per week, which means at most 20 certificates in total can be issued for all ngrok users each week.
Disclaimer: I have no affiliation with ngrok.io.

Related

Puppet certificate store

I have a Puppet Enterprise Master Server 2018.1.3 which should fetch code via Code Manager from a Git repository over HTTPS, where the server certificate of the Git server is signed by a third-party CA.
After getting everything configured correctly (as far as I know), I get the following:
> puppet-code deploy --dry-run
Dry-run deploying all environments.
Errors while collecting a list of environments to deploy (exit code: 1).
ERROR -> Unable to determine current branches for Git source 'puppet'
(/etc/puppetlabs/code-staging/environments)
Original exception:
The SSL certificate is invalid
Executing r10k directly produces a similar error, which makes sense, since I have not installed the third-party CA certificate anywhere yet.
So I thought: r10k most likely runs on JRuby, which runs on Java (I have no idea about Ruby), so I will install the certificate in the JVM:
keytool -import -file gitCA.cer -alias gitCA -keystore /opt/puppetlabs/server/apps/java/lib/jvm/java/jre/lib/security/cacerts -storepass changeit
But I am still getting the same error, even after a system restart. So it seems r10k does not use JRuby but plain Ruby, which means I should also install the certificate in the OS: I put the certificate under /etc/pki/trust/anchors and called update-ca-certificates (on SLES12). After that, I can access the Git repo URL with wget without getting any certificate error, so the certificate is installed correctly in the OS. But still, even after a system restart, I am getting the same error with r10k.
After a lot of googling for certificate stores and Ruby, I found out that
export SSL_CERT_FILE=<path_to_cert>
fixes the direct call of r10k:
> r10k deploy display --fetch
---
:sources:
- :name: :puppet
  :basedir: "/etc/puppetlabs/code/environments"
  :remote: https://xxx@git.xxx/git/puppet
  :environments:
  - develop
  - master
  - production
  - puppet_test
But puppet-code is still not working, with the same error message. Then I realized that I am currently root, while puppet-code is executed by the user pe-puppet, so I put the export command in /etc/profile.local so that it is available to all users.
Still not working, even after a system restart and after deleting /opt/puppetlabs/server/data/puppetserver/r10k/, which had been created as user root while calling r10k directly.
First question: why does r10k work, but puppet-code does not?
Second question: where is the correct place for that certificate?
Many thanks,
Michael
UPDATE: 27 Aug 2018
I tried this:
sudo -H -u pe-puppet bash -c '/opt/puppetlabs/puppet/bin/r10k deploy display --fetch'
which did not work, even though I am setting the SSL_CERT_FILE variable in /etc/profile.local.
I did get it working by setting the variable in the /etc/environment file.
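For reference, /etc/environment takes plain KEY=value lines without the export keyword; the certificate path below is only an illustration:
SSL_CERT_FILE=/etc/pki/trust/anchors/gitCA.pem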
But puppet-code is still not working. Why?
For those looking for a solution to this problem, check out this post on the Puppet Support Base.
Simply put, you have two options:
Use a Git source instead of an HTTPS source to refer to your repository in your Puppetfile. This option requires adding SSH keys to your Puppet master and your repository.
Add a certificate authority (CA) cert for the repository to the list of trusted CAs in /opt/puppetlabs/puppet/ssl/cert.pem.
Option one: Use a Git source instead of an HTTPS source
To deploy code from your repository using a Git source, configure a private SSH key on your Puppet master and a public SSH key on your repository:
In your Puppetfile, change references to your Git repository from an HTTPS source to a Git source:
For example, change:
mod 'site_data', :git => 'https://example.com/user/site_data.git'
to:
mod 'site_data', :git => 'ssh://user@example.com:22/user/site_data.git'
Configure your SSH keys. Configure the private key using our documentation on how to Declare module or data content with SSH private key authentication for PE 2018.1.
Note: Use the version selector to choose the right version of our documentation for your deployment.
The details of configuring your public key depend on how your Git repository is configured. Talk to your Git repository administrator.
Option two: Add a trusted CA cert
If you are unable to specify a Git source, add your repository to the list of CAs trusted by Code Manager by adding a CA cert to the file /opt/puppetlabs/puppet/ssl/cert.pem.
Transfer the cert (ca.pem) file to your CA node.
On the CA node, add the cert to the list of CAs trusted by Code Manager: cat ca.pem >> /opt/puppetlabs/puppet/ssl/cert.pem
Agent runs won't revert changes made to cert.pem because the file isn't managed by PE, but upgrades to PE will overwrite the file. After you upgrade PE, you must add the CA cert to cert.pem again.
So, I got it working, but I am not happy with the solution.
I turned on debug logging in /etc/puppetlabs/puppetserver/logback.xml, confirming that puppet-code is indeed calling r10k:
2018-08-27T14:54:24.149+02:00 DEBUG [qtp462609859-78] [p.c.core] Invoking shell:
/opt/puppetlabs/bin/r10k deploy --config /opt/puppetlabs/server/data/code-manager/r10k.yaml --verbose warn display --format=json --fetch
2018-08-27T14:54:24.913+02:00 ERROR [qtp462609859-78] [p.c.app] Errors while collecting a list of environments to deploy (exit code: 1).
ERROR -> Unable to determine current branches for Git source 'puppet' (/etc/puppetlabs/code-staging/environments)
Original exception:
The SSL certificate is invalid
So I did it the very quick and dirty way:
cd /opt/puppetlabs/puppet/bin/
mv r10k r10k-bin
touch r10k
chmod +x r10k
vi r10k
and
#!/bin/bash
export SSL_CERT_FILE=<new_cert_path>
/opt/puppetlabs/puppet/bin/r10k-bin "$@"
Now it is working:
puppet:~ # puppet-code deploy --dry-run
Dry-run deploying all environments.
Found 5 environments.
But I am not happy with this. Any better ideas?

GitHub - Using multiple deploy keys on a single server

Background
I have a system where, when I push changes to my repository, a webhook sends a request to my site, which runs a bash script to pull the changes and copy any updated files.
I added a second repository with its own deploy key, but after doing so I was getting a permission denied error when trying to pull changes.
Question
Is there a way to use two deploy keys on the same server?
Environment Details
Site uses Laravel 5.6; Symfony is used to run the shell script
Git 1.7
Go Daddy web hosting (Basic Linux one)
Notes
The script just runs a git pull command
The error given is "Permission denied (publickey)"
SSH is used with a deploy key, so only read access; there is one other project also using a deploy key on the same server
Thank you in advance for your help! Any other suggestions are welcome!
Edit #1
Edited the post to reflect the true problem, as it was different from what I thought (feel free to revert if this is bad practice); please see the answer below for details and the solution.
What I thought was an authentication issue was actually an issue of the Git service not knowing which SSH key to use, as I had multiple keys on the server.
The solution was to use a config file in the .ssh folder and assign aliases to specify which SSH key to use for Git operations in separate repositories.
The solution is here: Gist with solution
The gist explains the general idea; it suggests using sub-domains, but a comment further down uses host aliases, which seems neater. A sketch follows below.
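For example, a minimal ~/.ssh/config along those lines might look like this (the host aliases and key file names are made up; point each IdentityFile at the matching deploy key):
Host github-repo-one
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo_one
    IdentitiesOnly yes
Host github-repo-two
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo_two
    IdentitiesOnly yes
Each repository's remote URL then uses the alias instead of github.com, e.g. git@github-repo-one:your-user/repo-one.git.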
I have now resolved the issue and the system is working fine with a read-only, passphrase-less deploy key.
This can be done by customizing GIT_SSH_COMMAND. Since the SSH config only sees the host, you have to create host aliases to handle different paths. Alternatively, since the git CLI passes the path of the repo to the GIT_SSH_COMMAND, you can intercept the request in a custom script inserted between git and ssh.
You can create a solution where you extract the path and add in the related identity file, if one is available on the server.
One approach to do this can be found here.
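A minimal sketch of such a wrapper is shown below. The key naming convention and the argument parsing are assumptions: git normally invokes the command as <script> <host> "git-upload-pack '<path>'" (or git-receive-pack for pushes), so the repository path is pulled out of the second argument.
#!/bin/bash
# custom_keys_git_ssh: pick an identity file based on the repository path.
# git calls this roughly as: custom_keys_git_ssh git@github.com "git-upload-pack 'github/practice.git'"
repo_path=$(echo "$2" | cut -d "'" -f 2)                                         # e.g. github/practice.git
key="$HOME/.ssh/git-keys/$(dirname "$repo_path")-$(basename "$repo_path" .git)"  # e.g. ~/.ssh/git-keys/github-practice
if [ -f "$key" ]; then
    exec ssh -i "$key" -o IdentitiesOnly=yes "$@"
else
    exec ssh "$@"   # fall back to the default keys
fi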
Usage:
cp deploy_key_file ~/.ssh/git-keys/github-practice
GIT_SSH_COMMAND=custom_keys_git_ssh git clone git@github.com:github/practice.git

Automate Heroku CLI login

I'm developing a bash script to automatically clone some projects and do other tasks in dev VMs, but we have one project in Heroku and its repository is hosted there. In my .sh file I have:
> heroku login
This prompts me to enter credentials. I read the help included with the binary and the documentation, but I can't find anything to insert the username and password automatically. I want something like this:
> heroku login -u someUser -p mySecurePassword
Does anything like that exist?
The Heroku CLI only uses your username and password to retrieve your API key, which it stores in your ~/.netrc file (_netrc in your home directory on Windows).
You can manually retrieve your API key and add it to your ~/.netrc file:
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Open your ~/.netrc file, or create it, with your favourite text editor
Add the following content:
machine api.heroku.com
  login <your-email@address>
  password <your-api-key>
machine git.heroku.com
  login <your-email@address>
  password <your-api-key>
Replace <your-email@address> with the email address registered with Heroku, and <your-api-key> with the API key you copied from Heroku.
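If you want to verify that the file is being picked up (assuming the Heroku CLI is installed), a quick check is:
heroku auth:whoami
which should print the email address associated with the API key.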
This should manually accomplish what heroku login does automatically. However, I don't recommend this. Running heroku login does the same thing more easily and with fewer opportunities to make a mistake.
If you decide to copy ~/.netrc files between machines or accounts you should be aware of two major caveats:
This file is used by many other programs; be careful to only copy the configuration stanzas you want.
Your API key offers full programmatic access to your account. You should protect it as strongly as you protect your password.
Please be very careful if you intend to log into Heroku using any mechanism other than heroku login.
You can generate a non-expiring OAuth token then pass it to the CLI via an environment variable. This is useful if you need to run Heroku CLI commands indefinitely from a scheduler and you don't want the login to expire. Do it like this (these are not actual Tokens and IDs, BTW):
$ heroku authorizations:create
Creating OAuth Authorization... done
Client: <none>
ID: 80fad839-876b-4ea0-a41e-6a9a2fb0cf97
Description: Long-lived user authorization
Scope: global
Token: ddf4a0e5-9294-4c5f-8820-b51c52fce4f9
Updated at: Fri Aug 02 2019 21:26:09 GMT+0100 (British Summer Time) (less than a minute ago)
Get the token (not the ID) from that authorization and pass it to your CLI:
$ HEROKU_API_KEY='ddf4a0e5-9294-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.2962 (Hobby)
<some file names>
$
By the way this also solves the problem of how to use the Heroku CLI when you have MFA enabled on your Heroku account but your machine doesn't have a web browser e.g., if you are working on an EC2 box via SSH:
$ heroku run ls --app my-app
heroku: Press any key to open up the browser to login or q to exit:
› Error: quit
$ HEROKU_API_KEY='ddf4a0e5-9299-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.5029 (Hobby)
<some file names>
$
EDIT: For Windows Machines
After you run heroku authorizations:create, copy the "Token", and run the following commands:
set HEROKU_API_KEY=ddf4a0e5-9299-4c5f-8820-b51c52fce4f9
heroku run ls --app my-app
If your goal is just to get the source code, you could use a simple Git client. You just need the API key.
Steps to get the API key
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Download the source code using git
Use this URL template for git clone:
https://my_user:my_password@git.heroku.com/name_of_your_app.git
In my case the user value was my email without the domain.
Example:
if your email is duke@gmail.com,
the user for Heroku auth will be duke
Finally just clone it like any other git repositories:
git clone https://duke:my_password@git.heroku.com/name_of_your_app.git
I agree that Heroku should by now have provided a way to do this with their higher-level CLI tool.
You can avoid extreme solutions (and you should, just like Chris mentioned in his answer) by simply using curl and the Heroku API. Heroku allows you to use your API token (obtainable through your user settings / profile page on the Heroku dashboard).
You can then use the API to achieve whatever it is you wanted to do with their command line tool.
For example, if I wanted to get all config vars for an app I would write a script that did something like the following:
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Authorization: Bearer YOUR_TOKEN```
If *YOUR_APP_NAME* had only one config variable called *my_var*, the response of the above call would be:
{
  "my_var": "some_value"
}
I find myself using this all the time in CI tools that need access to *Heroku* information / resources.

GoCD configuration issue

I've been having an issue trying to add GitHub materials from a private repo on a Windows server.
I've seen lots of suggestions about how and where to add the SSH keys, but only for Unix-based systems. I haven't found anything related to Windows servers.
I'm using the latest Go release and have installed the Go Server & Agent on Windows Server 2008 with Git installed.
I can connect to the private repo using Git Bash.
Whenever I try to add the materials it keeps saying "Checking Connection" and seems to stay there forever.
If I use basic auth it works but I would like to make it work without exposing my password in the URL.
Is there a way to do that?
If you run Go under the default Local System account, you can follow the suggestions from http://opensourcetester.co.uk/2013/06/28/jenkins-windows-ssh/ to set up the SSH keys for the Local System account.
If you run the Go server under a domain account (and not the default Local System account), check whether you have uploaded your SSH keys to the %USERPROFILE%/.ssh/ folder on the server machine, %USERPROFILE% being the HOME folder for the domain user. Once you set that up, the Go server will be able to pick up the required keys. The same holds for the agent machines. Just so you know, Go does not invoke Git Bash internally to run Git commands, so any setup done in Bash will not take effect when Git runs from within Go.
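If that account does not have keys yet, something like the following from Git Bash on the server (run as that account) should do; the key type, comment, and path are only examples, and the public key still has to be registered with your Git host as a deploy key:
ssh-keygen -t rsa -b 4096 -C "gocd@build-server" -f ~/.ssh/id_rsa -N ""
ssh -T git@github.com   # one-off connection test so the host key gets accepted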
If you are using a Windows machine to host the GoCD server and agents, they do not run under a normal user account; they run under the "Local System" account.
So even though you can access your Git repo from Git Bash (logged in as the current user), GoCD cannot access it.
So you need to add the SSH keys for the Local System account from your current user.
1. First, find the home directory for the Local System account (it will not reside under C:/Users).
2. Use any remote administration tool to find the home directory. If you go with http://download.sysinternals.com/files/PSTools.zip:
a) unzip it and run a command line as administrator
b) PsExec.exe -i -s cmd.exe (to start the tool)
c) run echo %userprofile% to get the home directory (e.g. C:\Windows\system32\config\systemprofile)
3. Now you can either copy the SSH key files from the current user or create new ones using ssh commands.
Try checking the connection after creating/copying the SSH keys; it should show "Connection OK!"

Hosting Git Repository in Windows

Is there currently a way to host a shared Git repository in Windows? I understand that you can configure the Git service in Linux with:
git daemon
Is there a native Windows option, short of sharing folders, to host a Git service?
EDIT:
I am currently using the cygwin install of git to store and work with git repositories in Windows, but I would like to take the next step of hosting a repository with a service that can provide access to others.
Here are some steps you can follow to get the git daemon running under Windows:
(Prerequisites: A default Cygwin installation and a git client that supports git daemon)
Step 1: Open a bash shell
Step 2: In the directory /cygdrive/c/cygwin64/usr/local/bin/, create a file named "gitd" with the following content:
#!/bin/bash
/usr/bin/git daemon --reuseaddr --base-path=/git --export-all --verbose --enable=receive-pack
Step 3: Run the following cygrunsrv command from an elevated prompt (i.e. as admin) to install the script as a service (Note: assumes Cygwin is installed at C:\cygwin64):
cygrunsrv --install gitd \
--path c:/cygwin64/bin/bash.exe \
--args c:/cygwin64/usr/local/bin/gitd \
--desc "Git Daemon" \
--neverexits \
--shutdown
Step 4: Run the following command to start the service:
cygrunsrv --start gitd
You are done. If you want to test it, here is a quick and dirty script that shows that you can push over the git protocol to your local machine:
#!/bin/bash
echo "Creating main git repo ..."
mkdir -p /git/testapp.git
cd /git/testapp.git
git init --bare
touch git-daemon-export-ok
echo "Creating local repo ..."
cd
mkdir testapp
cd testapp
git init
echo "Creating test file ..."
touch testfile
git add -A
git commit -m 'Test message'
echo "Pushing master to main repo ..."
git push git://localhost/testapp.git master
GitStack might be your best choice. It is currently free (for up to 2 users) and open source at the time of writing.
Here's a dedicated git server for windows: https://github.com/jakubgarfield/Bonobo-Git-Server/wiki
If you are working in a Windows environment, have you considered Mercurial? It is a distributed version control system like Git, but integrates far more neatly and easily with Windows.
Installing Cygwin is overkill; read this tutorial on how to do it faster and natively:
http://code.google.com/p/tortoisegit/wiki/HOWTO_CentralServerWindowsXP
If you get the error cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062: The service has not been started. after running the command:
cygrunsrv --start gitd
that means that you did not create the 'base-path' folder.
Creating the folder '/git' and rerunning the command will fix this.
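In other words, something like this should get the service going:
mkdir -p /git
cygrunsrv --start gitd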
I'm currently using Cygwin's SSH daemon on Windows to serve up and allow remote access to my repo. It works quite well: I have complete control over who accesses my repo via their SSH certificates, and performance is blazing fast, even over remote WAN and VPN links.
Another solution is to use Gitosis. It is a tool that makes hosting repos much easier.
You do not need to host a service; you can also create a shared repository on a shared drive. Just create a bare repository. You can clone an existing repo into a shared one using git clone --bare --shared [source] [dest]. You can also init a new repository using git init --bare --shared=all. A short example follows below.
Henk
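As a concrete illustration of that shared-repository approach (the share path is made up):
git init --bare --shared=all //server/share/myproject.git
git clone //server/share/myproject.git myproject
cd myproject
# ...work and commit as usual...
git push origin master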
Have you considered using the cygwin layer? See this link.
Now msysGit supports git daemon! It works fine (for me at least). I'm going to try to make it run as a service...
SCM Manager
Lightweight HTTP server for Git, Mercurial, and Subversion repos out of the box (only Java is needed)
Web interface for management of users, ACLs, and repos
On Windows, you can also serve Git repositories with Apache over HTTP or HTTPS, using the DAV extension.
The Git repository path can then be protected with Apache authentication checks such as restricting to certain IP addresses or htpasswd/htdigest type authentication.
The limitation of using htpasswd/htdigest authentication is that the username:password is passed in the requested Git URL, so restricting access to the Git URL to certain IP addresses is better.
Edit: Note, you can leave the password out of the Git URL and Git will prompt you for the password on push and fetch/pull instead.
Using HTTPS means all the data is encrypted in transfer.
It's easy enough to set up, and works.
The following example shows the combination of access control by IP address and user:password over standard HTTP.
Example Apache Virtualhost
## GIT HTTP DAV ##
<VirtualHost *:80>
    ServerName git.example.com
    DocumentRoot C:\webroot\htdocs\restricted\git
    ErrorLog C:\webroot\apache\logs\error-git-webdav.log
    <Location />
        DAV on
        # Restrict Access
        AuthType Basic
        AuthName "Restricted Area"
        AuthUserFile "C:\webroot\apache\conf\git-htpasswd"
        # To valid user
        Require valid-user
        # AND valid IP address
        Order Deny,Allow
        Deny from all
        # Example IP 1
        Allow from 203.22.56.67
        # Example IP 2
        Allow from 202.12.33.44
        # Require both authentication checks to be satisfied
        Satisfy all
    </Location>
</VirtualHost>
Example .git/config
[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*
    url = http://username:password@git.example.com/codebase.git
[branch "master"]
    remote = origin
    merge = refs/heads/master
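If you prefer not to edit .git/config by hand, the same remote URL can be set from the command line (the password then still ends up in plain text in the config, as noted above):
git remote set-url origin http://username:password@git.example.com/codebase.git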
At work I'm using GitBlit GO installed on a Windows server. It works flawlessly and integrates well with Active Directory for user authentication and authorization. It is also free and open source (Apache licensed).
GitBlit homepage
Only HTTP(S) access is supported, no SSH, but under Windows you shouldn't need anything more.
This is a 2015 answer to a question that is over 7 years old.
For a one-time payment of $10, from https://bitbucket.org/product/server, one can purchase a 64-bit Windows licence for up to 10 users.
Apparently 32-bit versions are only available via their archive.
Bitbucket Server was previously known as Stash.
Please note that I have not tried this version, but $10 seems like a good deal; I read that Atlassian gives the $10 to charity. FWIW.
I think what Henk is saying is that you can create a shared repository on a drive and then copy it to some common location that both of you have access to. If there is some company server that you both have SSH access to, you can put the repository someplace where you can SCP it back to your own computer, and then pull from that. I did this myself for a little while, since I have two computers. It's a hassle, but it does work.
For Windows 7 x64 and Cygwin 1.7.9, I needed to use /usr/bin/gitd as the args argument of cygrunsrv:
cygrunsrv --install gitd \
--path c:/cygwin/bin/bash.exe \
--args /usr/bin/gitd \
--desc "Git Daemon" \
--neverexits \
--shutdown
Also, I needed to run bash as an Administrator to install the service.
