nginx home environment variable? - bash

I have some nginx config in a repo, and the application root is sometimes different on different machines and setups:
server {
    listen 80;
    server_name admin.triface.local;
    root /Users/xxxxxx/Sites/triface-admin/public;
    index triface.html;
}
I want to set a variable somewhere (like a bash environment variable or equivalent) that lets me avoid hardcoding the server root. It seems like this should be straightforward, but I can't find anything on it. Any clues?

So the answer is, there isn't one! Intentionally! And once I read the reasoning, it actually made sense. It's a bummer that I can't do local nginx installs to people's $HOME dir, but I can live with that.
See this Stack Overflow answer:
How do I pass ImageMagick environment variables to nginx mongrels?

Sure:
set $homedir /Users/xxxxxx/Sites/triface-admin/public;
Then just reference $homedir wherever you need it.
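For example, a minimal sketch of the server block above using the set directive (from ngx_http_rewrite_module), so the path only appears once in the file:
server {
    listen 80;
    server_name admin.triface.local;
    set $homedir /Users/xxxxxx/Sites/triface-admin/public;
    root $homedir;
    index triface.html;
}
Note that set is evaluated at request time and cannot read the shell environment, so this centralizes the path within the config, but you still edit that one line per machine.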

Related

How to overwrite a default variable in ansible.cfg dynamically?

I have a playbook with the following task that must copy a 2 GB file from local to remote servers and extract it:
- name: Copy archived file to target server and extract
  unarchive:
    src: /path_to_source_dir/file.tar.gz
    dest: /path_to_dest_dir
This task fails because Ansible copies the file to the /home mount point on the target server, and there's not enough space there:
sftp> put /path_to_source_dir/file.tar.gz /home/my_user_name/.ansible/tmp/ansible-tmp-1551129648.53-14181330218552/source
scp: /home/my_user_name/.ansible/tmp/ansible-tmp-1551129648.53-14181330218552/source: No space left on device
The reason is that ansible.cfg has a default parameter:
remote_tmp = ~/.ansible/tmp
How can I overwrite this parameter from the playbook (if possible) and make Ansible copy the file to the destination directory specified in the task? So it would be like this:
remote_tmp = /path_to_dest_dir/.ansible/tmp
And the destination path is going to be different each time for a different target server!
Cleaning /home is not an option for me.
The answer here unfortunately is not very clear to me.
There are a few different ways to achieve what you are looking to do; which one you choose is a matter of preference and your use case.
You found the first way: setting an environment variable before running the playbook. Great for a quick on-the-fly job, though remembering to do that every time you run a certain playbook is indeed annoying. A slight variation of that is to use the environment keyword to set that variable for the play; you can also set environment variables in a role, block, or single task. https://docs.ansible.com/ansible/devel/reference_appendices/playbooks_keywords.html?highlight=environment%20directive. Here is an example of it in use: https://docs.ansible.com/ansible/devel/reference_appendices/faq.html?highlight=environment.
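For instance, the on-the-fly variant might look like this (a sketch: ANSIBLE_REMOTE_TEMP is the environment variable the sh shell plugin documentation lists for remote_tmp, and site.yml stands in for your playbook):
ANSIBLE_REMOTE_TEMP=/path_to_dest_dir/.ansible/tmp ansible-playbook site.yml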
Using the environment keyword in a play et al. works well for a specific piece of automation, but what if you want Ansible to always use a different remote tmp path for specific servers? Like all variables, remote_tmp can be sourced from inventory host and group variables, not just the config file or environment variables. (Mind your variable precedence if it is being set in different places.) With this approach you set remote_tmp in your inventory for a host or a group of hosts, and Ansible will always use that path for those hosts without it having to be defined in every play or role. If you need to change the path, you change it in your inventory and the behavior changes for all playbook runs without any additional edits. Here is an example of it being used as a host variable in static inventory: https://docs.ansible.com/ansible/devel/reference_appendices/faq.html?highlight=remote_tmp#running-on-solaris
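A sketch of that as a static INI inventory (host and group names are made up; ansible_remote_tmp is the variable name the shell plugins accept):
# inventory.ini
[app_servers]
target01.example.com ansible_remote_tmp=/path_to_dest_dir/.ansible/tmp
target02.example.com ansible_remote_tmp=/other_dest_dir/.ansible/tmp
Because the value is per-host, each target server can point at whichever filesystem actually has the space.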
So while the specific issue of "dynamically" setting the remote tmp directory on a host is not a best practice topic per se, it does become an example of the best practice of making the most of variables in Ansible.
For reference, remote temporary directories are handled by the shell plugins. While many parameters are shared, there are others that are specific to the shell plugin Ansible is using (sh by default). https://docs.ansible.com/ansible/latest/plugins/shell/sh.html
Hope that helps. Happy automating.

Is there any way to use dotenv with Bitbucket Pipelines?

As the title says, is there any way to use dotenv with Bitbucket Pipelines for CI purposes, while still adding the (perhaps multiple) (.stage).env to .gitignore?
I know Pipelines supports environment variables, and that they can be referenced in bitbucket-pipelines.yml, but I can't figure out how to use dotenv files instead, or how to vary which file is used based on e.g. branch patterns.
For example, I'd like commits to develop to use .test.env variables, while commits to master instead uses the variables from .prod.env.
Perhaps I'm going down the wrong path? Although other websites use examples of multiple .env files, the library authors discourage that approach. I'm using Zeit Now for hosting, so I can't just SSH a .env file onto the server.
Any advice is very welcome :-)
Create a base64 string out of your .env file, then copy this string into the environment variables of your pipeline; see here: https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
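To produce that string locally, something like this should do (a sketch; -w 0 disables line wrapping and exists on GNU coreutils base64, while BSD/macOS base64 does not wrap by default):
base64 -w 0 < .env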
For example, if your content is now defined in APP_ENV, you can use this line in your pipeline configuration file:
echo $APP_ENV | base64 --decode --ignore-garbage > ./www/.env
Now it is safe, because nobody knows the secrets in this file except your pipeline container itself.
This method can be used for all .env files, including staging files. :)
Rename the files inside your develop pipelines:
mv .test.env .env
or in your master pipelines:
mv .prod.env .env
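Put together, a sketch of a bitbucket-pipelines.yml that picks a file per branch (the commands after the mv are placeholders):
pipelines:
  branches:
    develop:
      - step:
          script:
            - mv .test.env .env
            - ./run-tests.sh
    master:
      - step:
          script:
            - mv .prod.env .env
            - ./deploy.sh
This variant assumes the .env files are committed; if they are gitignored, combine the rename with the base64 environment-variable approach above instead.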

Ruby hiding API keys and IP address?

I have a ruby script main.rb which takes in two parameters, ipaddress and apitoken.
$token = "VALUE"
$ip_addr = "ADDRESS"
These values are hard-coded into the script. When I push the project to a GitHub repo, I get a warning that my keys are visible.
What is the recommended way to hide these variables? Is it as simple as adding a separate file for these values and adding them to .gitignore?
Personally, I don't like using open and file operations in code. A better way would be to use one of the following approaches.
Put the keys in the system environment as follows:
export MY_TOKEN=xyz
export MY_IP_ADDR=a.b.c.d
If you want them to be available after you restart the shell, put the exports in ~/.bash_profile.
Then, in your code, use them as follows:
$token = ENV["MY_TOKEN"]
$ip_addr = ENV["MY_IP_ADDR"]
OR
You can use the dotenv gem if you don't want system-wide environment variables, and exclude .env from git by putting the file in .gitignore.
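A minimal sketch of the dotenv approach (the variable names are placeholders; require 'dotenv/load' is the gem's documented one-line loader):
# Gemfile
gem 'dotenv'
# .env (listed in .gitignore)
MY_TOKEN=xyz
MY_IP_ADDR=a.b.c.d
# main.rb
require 'dotenv/load' # reads .env and merges it into ENV
$token = ENV["MY_TOKEN"]
$ip_addr = ENV["MY_IP_ADDR"]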
Following this guide, a simple way to do this is to create files .auth_token and .ip_addr (here under lib/assets).
Add the necessary keys in them and access them by reading the files as follows (.strip drops the trailing newline):
$token = File.read("lib/assets/.auth_token").strip
$ip_addr = File.read("lib/assets/.ip_addr").strip
If pushing to a repository, make sure the files are added to .gitignore.
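For example, the matching .gitignore entries would be:
lib/assets/.auth_token
lib/assets/.ip_addr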

How to disable location in nginx from bash command line?

Say I have a config in /etc/nginx/conf.d/myscript.conf
server {
    listen 8080;
    server_name _;
    location = /a {...} # <-- needs to be disabled during maintenance
    location = /b {...}
    location = /c {...} # <-- needs to be enabled during maintenance
}
For maintenance I need to disable the /a location, run some commands/deployments, then enable the /a location again.
Can this be done automatically via bash, without programmatically modifying the config?
You can use includes and then just deal with creating and removing symlinks. Usually you see this done with server blocks (the base nginx.conf actually just includes conf.d/*, which is how it loads your server blocks), but it can be done with anything. Basically you'll have two folders, named something like locations-available and locations-enabled, and put all of your location blocks in individual files in locations-available. In your server block, include locations-enabled/*, then symlink every location you want enabled from locations-available into locations-enabled. Every time you add or remove symlinks, just reload nginx and you should be good to go.
In your case, just rm the symlink, reload, do whatever you want, recreate the symlink, and reload again.
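A sketch of that layout (paths and file names are illustrative):
# /etc/nginx/conf.d/myscript.conf
server {
    listen 8080;
    server_name _;
    include /etc/nginx/locations-enabled/*.conf;
}
# one file per location, e.g. /etc/nginx/locations-available/a.conf:
location = /a {...}
# disable /a for maintenance:
rm /etc/nginx/locations-enabled/a.conf
nginx -s reload
# ...deploy, then re-enable it:
ln -s /etc/nginx/locations-available/a.conf /etc/nginx/locations-enabled/a.conf
nginx -s reload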

RUBYLIB Environment Path

So currently I have included the following in my .bashrc file.
export RUBYLIB=/home/git/project/app/helpers
I am trying to run rspec with a spec that has
require 'output_helper'
This file is in the helpers directory. My question is that when I change the export line to:
export RUBYLIB=/home/git/project/
it no longer finds the helper file. I thought Ruby would search the entire path I supply, not just the outermost directory. Is that the correct way to think about it? And if not, how can I make Ruby search through all subdirectories, their subdirectories, and so on?
Thanks,
Robin
Similar to PATH, you need to explicitly name the directories under which to look for libraries. Ruby will not descend into child directories, so you will need to list any child sub-directories as well, delimiting them with a colon.
For example:
export RUBYLIB=/home/git/project:/home/git/project/app/helpers
As buruzaemon mentions, Ruby does not search subdirectories, so you need to include all the directories you want in your search path. However, what you probably want to do is:
require 'app/helpers/output_helper'
This way you aren't depending on the RUBYLIB environment variable being set a certain way. When you're deploying code to production, or collaborating with others, these little dependencies can make for annoying debugging sessions.
Also, as a side note, you can specify . as a search path, rather than using machine-specific absolute paths.
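A quick sketch combining both suggestions (run from the project root; the spec file name is made up):
cd /home/git/project
export RUBYLIB=.
rspec spec/output_helper_spec.rb # inside, require 'app/helpers/output_helper' now resolves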
