Import YAML file content as ENV in Ruby

Working on a Ruby/RSpec/Capybara project. I have different environments, like in this sample YAML file:

dev:
  url: google.com
  user: dev_user
  password: dev_password
test:
  url: google.com
  user: test_user
  password: test_password

I think YAML would help here. Is it possible to parse a YAML file and store its values as ENV variables so that they are available across the project in Ruby? This is purely a Ruby project; how can we do it without Rails credentials?

Perhaps you can use the dotenv gem?
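If you want to stay with plain Ruby instead, here is a minimal sketch of the parse-and-export idea (the config.yml filename and the APP_ENV switch are my assumptions, not from the question):

require 'yaml'

# Pick which environment block to load; APP_ENV is an assumed convention.
current_env = ENV.fetch('APP_ENV', 'dev')

# Parse the YAML file from the question (assumed to live at config.yml).
config = YAML.load_file('config.yml')

# Export each key of the chosen block, e.g. ENV['URL'], ENV['PASSWORD'].
# Note: plain assignment overwrites shell variables of the same name (USER!).
config.fetch(current_env).each do |key, value|
  ENV[key.upcase] = value.to_s
end

Require this from spec_helper.rb (or another early entry point) before anything reads those variables, and they will be visible across the whole run.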

Related

Can ansible vault encrypt values in plugin configuration files?

I'm writing a dynamic inventory plugin for Ansible which pulls device info from an API and adds it to the inventory. To configure my plugin, I need a username and password for the service, which I retrieve from my plugin configuration YAML file, plugin_conf.yaml:
plugin: my_inventory_plugin
host_location: api.example.com
port: 443
user: some_user
password: some_pass
Since storing credentials in a file under version control is bad, does Ansible Vault support encrypting values stored in a plugin configuration file? I.e., can the user of my plugin do something like

plugin: my_inventory_plugin
host_location: api.example.com
port: 443
user: !vault |
  $FOO;1.1;AES256
  blah blah
password: !vault |
  $BAR;1.1;AES256
  something else

and, regardless of whether they use insecure plaintext or Ansible Vault, my plugin can still get the values using the self.get_option('user') method?
I tested it out myself and the answer is yes.
If the user encrypts a string with ansible-vault encrypt_string, setting the name of the secret with -n, they can paste the resulting named block into my config file. No special handling is required in my plugin to support both plaintext credentials and Ansible Vault credentials.
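For reference, the encryption step that produces such a block looks roughly like this (the value and name here are placeholders):

ansible-vault encrypt_string 'some_pass' --name 'password'

The command prints a password: !vault | block that can be pasted straight into the plugin configuration file.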

Grafana provision elasticsearch datastore over ansible

Can anybody advise how to map the configuration properties seen in the Grafana UI to their equivalents in the configuration file over Ansible?
This is what I have that is working well:
grafana_datasources:
  - name: elasticsearch
    type: elasticsearch
    access: server
    database: "metricbeat-7.5.2"
    url: 'http://localhost:9200'
    readOnly: false
    editable: true
    basicAuth: false
    jsonData:
      timeField: "@timestamp"
      esVersion: 70
      maxConcurrentShardRequests: 5
I managed to set up everything except the Auth section. Actually, I have only set the "Basic auth" field, by adding basicAuth: false. Now I am stuck with setting up the following fields:
TLS Client Auth
Skip TLS Verify
Forward OAuth Identity
I tried adding:
tlsAuth: false
tlsAuthWithCACert: false
tlsSkipVerify: false
but nothing happens. I also tried adding the same under jsonData, but still no luck...
This is how I resolved it. To get these three fields I added the following to my playbook:
isDefault: false
How did I figure it out? I created the data source manually in the UI and then exported it to JSON with the following command:
mkdir -p data_sources && curl -s "http://localhost:3000/api/datasources"  -u admin:password | jq -c -M '.[]'|split -l 1 - data_sources/
Then I went through the exported data source JSON files and found the key and the value I used in my playbook.
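For anyone still mapping the remaining toggles: the Grafana provisioning documentation puts all three under jsonData, so a sketch like the following should cover them (key names taken from those docs, not verified against this exact setup):

jsonData:
  timeField: "@timestamp"
  esVersion: 70
  maxConcurrentShardRequests: 5
  tlsAuth: false          # TLS Client Auth
  tlsSkipVerify: false    # Skip TLS Verify
  oauthPassThru: false    # Forward OAuth Identity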

Install rails with WSL and postgresql

I'm trying to set up a Rails development environment on Windows 10.
I followed the 'go_rails' tutorial (https://gorails.com/setup/windows/10).
Most of the installation seems to have worked fine (when I type ruby -v or rails -v in the bash it returns the expected result).
My issue is with PostgreSQL, which the project I work on uses.
Following the instructions of the tutorial, I installed PostgreSQL (10) directly on Windows. It seems to work, since I can log in using pgAdmin on Windows or by typing psql -p 5432 -h localhost -U postgres in the bash.
So it looks like PostgreSQL is working, but when I run rake db:create in bash I get an error: could not connect to the server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket '/var/run/postgresql/.s.PGSQL.5432'
In postgresql.conf (C:/Programms/.../Data/postgresql.conf), listen_addresses is set to '*'.
A bit further down there is a commented line #unix_socket_directories = ''; do you think I should set something there?
I really need to get that project working.
The problem is likely that you've installed the Windows binary for PostgreSQL, but you're trying to connect to it from Windows Subsystem for Linux using a Unix socket, which doesn't exist.
You need to use TCP/IP to connect rather than a Unix socket. When typing psql on the command line, add the option --host=127.0.0.1 to connect via TCP/IP.
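For example (default port and the postgres superuser assumed):

psql --host=127.0.0.1 --port=5432 --username=postgres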
I just went through the entire GoRails tutorial and I also had trouble at the rake db:create step. All you need to do is add host: 127.0.0.1 to your database.yml as shown below. Make sure to start PostgreSQL before you run rake db:create.
default: &default
  host: 127.0.0.1
  adapter: postgresql
  encoding: unicode
  username: postgres
  password: <%= ENV['POSTGRESQL_PASSWORD'] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: app_development

test:
  <<: *default
  database: app_test

production:
  <<: *default
  database: app_production
  username: app_username
  password: <%= ENV['APP_DATABASE_PASSWORD'] %>
Add host: localhost to the default: section of your database.yml. Without something telling Rails which host to connect to, it looks for a Unix socket, which of course won't work, since Windows processes are not exposed to the WSL layer.

Chef Vault with Test-Kitchen, Vagrant and Chef-Zero provisioner

I have an environment set up with Test-Kitchen v1.5.0 and Vagrant v1.8.1. I have a recipe that uses Chef Vault to decrypt the encrypted passwords that are in our data_bags_path/passwords/pilot.json file.
I am using the solution that daxgames provides towards the end of https://github.com/chef/chef-vault/issues/58.
My .kitchen.yml:
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    environment: lgrid2-dev
    node_name: "ltylapp400a"
    client_key: "/etc/chef/ltylapp400a.pem"
platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["/Users/212466756/.chef", "/etc/chef", "disabled:false"]
suites:
  - name: ltylapp400a
    run_list:
      - role[lgrid-db]
    attributes:
      chef_client:
A snippet from my recipe that deals with chef-vault:
case node["customer_conf"]["status"]
when 'pilot'
  passwords = ChefVault::Item.load('passwords', 'pilot')
when 'production'
  passwords = ChefVault::Item.load('passwords', node[:hostname][1..3])
end
My directory structure for the relevant data bags:
data_bags/
  passwords/
    pilot.json
    pilot_keys.json
The error I am getting is that the client.pem that Vagrant generates at /etc/chef/ltylapp400a.pem cannot decrypt the contents of that data bag. Chef suggests that I run knife vault refresh, but I am not connected to a Chef server on my local machine, so running that just errors out about having no Chef server to connect to. My question is: how can I add the new key that Vagrant generated to pilot_keys.json so that it is able to decrypt that data bag?
More detailed answers are better; I am still somewhat new to Chef, Test-Kitchen, etc.
I was able to get this working; below are my results and conclusions. As I stated above, my issue was that I could not decrypt the data bag: I had no way to add the new key that Vagrant created to the pilot_keys.json file, because I was not connected to the Chef server and could not run knife vault refresh/update. What I had to do instead was take the client.pem key from a server that already had access to the pilot.json data bag. I used our utility server's key, since it will not be destroyed in the near future.
So on my local PC I have a .chef/ directory under my home directory containing the client.pem key I copied from the utility server, and I sync it with /tmp/kitchen/, which acts as the /etc/chef directory inside the Test-Kitchen environment.
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    node_name: "utilityServer"
    client_key: "/tmp/kitchen/client.pem" # Chef::Vault needs a client.pem file to authenticate back to the data bag to decrypt it; it must be stored at /tmp/kitchen/client.pem
    environment: dev
    no_proxy: 10.0.2.2
platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["~/.chef","/tmp/kitchen/","disabled:false"] # Gives the Vagrant box access to the .chef directory in your home directory, where the client.pem used for authentication is stored.
suites:
  - name: lzzzdbx400a
    run_list:
      - role[lgrid-db]
    attributes:
The data_bags/passwords/pilot_keys.json looks like this:
{
  "id": "pilot_keys",
  "admins": [
    "utilityServer"
  ],
  "clients": [
    "webserver",
    "database"
  ],
  "search_query": "*:*",
  "utilityServer": "key",
  "webserver": "key",
  "database": "key"
}
Since the utilityServer key was already able to decrypt the passwords/pilot data bag, everything ran through fine the next time I ran kitchen converge.
During previous struggles with Kitchen and chef-vault I used the synced_folders method to access the key. Revisiting this topic, I found another solution.
Kitchen Support
To make this work in Kitchen, just put a cleartext data bag in the data_bags folder that your Kitchen run refers to (probably in test/integration/data_bags). Then the vault commands fall back to using that dummy data when you use chef_vault_item to retrieve it.
reference: http://hedge-ops.com/chef-vault-tutorial/
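Under that approach the dummy item is just plaintext JSON at the path the run refers to; a sketch with placeholder fields (the real item's keys aren't shown in this question):

test/integration/data_bags/passwords/pilot.json:

{
  "id": "pilot",
  "db_password": "dummy_password",
  "admin_password": "dummy_password"
}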

Configure Phoenix for Dynamic Endpoint URLs on Heroku

I'm confused about how the host: parameter in the Endpoint configuration in Phoenix works.
I'm deploying to different Heroku apps (prod and staging), each with its own URL. I want the host URL to be dynamic, coming from an environment variable, like so:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: {:system, "HOST"}, port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
However, after deploying, my asset URLs no longer have the unique hash added by phoenix.digest, which is a deal breaker.
Interestingly, when I hardcode the URL:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: "someapp-staging.herokuapp.com", port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
Even if it doesn't match the Heroku app URL, everything still seems to work fine and the asset URLs are correct; e.g. I can deploy to an app with the URL foo.herokuapp.com and everything still works.
The configuration above is from prod.exs. I'm using the Elixir and Phoenix static custom buildpacks with the following config:
# elixir_buildpack.config
# Elixir version
elixir_version=1.2.3
# Always rebuild from scratch on every deploy?
always_rebuild=true
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
and
# phoenix_static_buildpack.config
# We can set the version of Node to use for the app here
node_version=5.10.0
# We can set the version of NPM to use for the app here
npm_version=3.8.3
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
I could probably introduce a separate staging.exs config file and set MIX_ENV=staging, but I would like to understand:
1) Why does using {:system, "HOST"} break digested asset URLs?
2) Why does any hardcoded string work fine across different applications and URLs?
Any help is appreciated!
Heroku does not have an environment variable named HOST by default (though PORT is available), so I'd double-check that you've added it as a config var in your Heroku settings.
The command heroku run printenv is handy, as it outputs both the base environment variables and any config vars added manually or by add-ons.
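If it is missing, it can be set per app (the app name and value here are placeholders):

heroku config:set HOST=someapp-staging.herokuapp.com --app someapp-staging

Since HOST is already listed in config_vars_to_export, redeploying after setting it should make the variable available during the build as well.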
