External configuration alongside app.conf and environment variables for the Revel Go framework

I have read the Revel app.conf manual covering custom configuration and environment variables; however, I couldn't find a way to use an additional external configuration file alongside app.conf.
My goal is to support an external configuration file in addition to the internal app.conf. Say I'm building a product called example: the product keeps its sensible defaults in app.conf (not exposed to the end user) and instead exposes configurable attributes via example.conf (a default location could be /etc/example/example.conf) for product users.
For example, the http config fields from app.conf:
http.addr =
http.port = 9000
would be extended in example.conf:
http.addr =
http.port = 9000
[database]
host = "localhost"
port = 8080
user = "username"
password = "password"
# etc...
Then example.conf would be read during application start and its values applied on top of app.conf (overriding them). Finally, the Revel server starts!
How can I achieve this with the Revel Go framework?

It appears you are working against the design of app.conf. It is already set up to be sectioned; for example, all of this lives in a single app.conf file:
[dev]
results.pretty = true
watch = true
http.addr = 192.168.1.2
[test]
results.pretty = true
watch = true
http.addr = 192.168.1.22
[prod]
results.pretty = false
watch = false
http.addr = 192.168.1.100
You can launch three different scenarios by using three different command-line invocations:
revel run bitbucket.org/mycorp/my-app dev
revel run bitbucket.org/mycorp/my-app test
revel run bitbucket.org/mycorp/my-app prod
I know this is not exactly your goal, but you can achieve a similar result.

In github.com/revel/revel/revel.go, around line 152, you have something like:
Config, err = LoadConfig("app.conf")
Maybe you can try modifying that to:
if len(os.Getenv("SOME_ENV_VAR")) > 0 {
    Config, err = LoadConfig("path/to/your/example.conf")
} else {
    Config, err = LoadConfig("app.conf")
}
You just need to set the environment variable on your prod server.
That way you will be using your example.conf instead of app.conf.

Related

Vault Error, Server gave HTTP response to HTTPS client

I'm using Hashicorp vault as a secrets store and installed it via apt repository on Ubuntu 20.04.
After that, I added the root key to access the UI, and I'm able to add or delete secrets using the UI.
Whenever I try to add or get a secret using the command line, I get the following error:
jarvis@saki:~$ vault kv get secret/vault
Get "https://127.0.0.1:8200/v1/sys/internal/ui/mounts/secret/vault": http: server gave HTTP response to HTTPS client
My vault config looks like this:
# Full configuration options can be found at https://www.vaultproject.io/docs/configuration
ui = true
#mlock = true
#disable_mlock = true
storage "file" {
path = "/opt/vault/data"
}
#storage "consul" {
# address = "127.0.0.1:8500"
# path = "vault"
#}
# HTTP listener
#listener "tcp" {
# address = "127.0.0.1:8200"
# tls_disable = 1
#}
# HTTPS listener
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/opt/vault/tls/tls.crt"
tls_key_file = "/opt/vault/tls/tls.key"
}
# Example AWS KMS auto unseal
#seal "awskms" {
# region = "us-east-1"
# kms_key_id = "REPLACE-ME"
#}
# Example HSM auto unseal
#seal "pkcs11" {
# lib = "/usr/vault/lib/libCryptoki2_64.so"
# slot = "0"
# pin = "AAAA-BBBB-CCCC-DDDD"
# key_label = "vault-hsm-key"
# hmac_key_label = "vault-hsm-hmac-key"
#}
I fixed the problem. Though this error can be common to more than one underlying problem, in my case the fix was to export the address and root token printed after running this command:
vault server -dev
The output is like this
...
You may need to set the following environment variable:
$ export VAULT_ADDR='http://127.0.0.1:8200'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: 1+yv+v5mz+aSCK67X6slL3ECxb4UDL8ujWZU/ONBpn0=
Root Token: s.XmpNPoi9sRhYtdKHaQhkHP6x
Development mode should NOT be used in production installations!
...
Then just export these variables by running the following commands:
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN="s.XmpNPoi9sRhYtdKHaQhkHP6x"
Note: Replace "s.XmpNPoi9sRhYtdKHaQhkHP6x" with your token received as output from the above command.
Then run the following command to check the status:
vault status
Again, the error message can be similar for many different problems.
In PowerShell on Windows 10, I was able to set it this way:
$Env:VAULT_ADDR='http://127.0.0.1:8200'
Then
vault status
returned correctly. This was on Vault 1.7.3 in dev mode.
You can echo VAULT_ADDR by entering it on the command line and pressing Enter (the same as the set line above, but omitting the = sign and everything after it):
$Env:VAULT_ADDR
Output of vault status:
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.7.3
Storage Type    inmem
Cluster Name    vault-cluster-80649ba2
Cluster ID      2a35e304-0836-2896-e927-66722e7ca488
HA Enabled      false
Try using a new terminal window. This worked for me.

Cannot get public IP address of spot instance with Terraform

I’m spinning up a spot instance as you can see in below config and then trying to get the IP address from the spot. It seems to work fine with a regular ec2 instance (ie. that is not spot instance).
The error that I get is:
aws_route53_record.staging: Resource
'aws_spot_instance_request.app-ec2' does not have attribute
'public_ip' for variable 'aws_spot_instance_request.app-ec2.public_ip'
Here is the config that I’m using:
resource "aws_spot_instance_request" "app-ec2" {
ami = "ami-1c999999"
spot_price = "0.008"
instance_type = "t2.small"
tags {
Name = "${var.app_name}"
}
key_name = "mykeypair"
associate_public_ip_address = true
vpc_security_group_ids = ["sg-99999999"]
subnet_id = "subnet-99999999"
iam_instance_profile = "myInstanceRole"
user_data = <<-EOF
#!/bin/bash
echo ECS_CLUSTER=APP-STAGING >> /etc/ecs/ecs.config
EOF
}
resource "aws_route53_record" "staging" {
zone_id = "XXXXXXXX"
name = "staging.myapp.com"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.app-ec2.public_ip}"]
}
The spot request is fulfilled on the AWS Console as per below:
Any help will be greatly appreciated!
So I've been trying to figure this out since last night, and the AWS Console kept showing the spot instance request as fulfilled. Likewise, I could see the public IP for the spot instance, which was misleading me.
It turns out I was missing one argument in my config:
wait_for_fulfillment = true
By default, it is set to false, and therefore when I tried to set the public_ip address it simply did not exist at that time.
Now Terraform will wait for the Spot Request to be fulfilled. According to the documentation, it will throw an error if the timeout of 10m is reached.
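With that argument applied, the resource from the question would look like the following abbreviated sketch (only wait_for_fulfillment is new; everything else is unchanged):

```hcl
resource "aws_spot_instance_request" "app-ec2" {
  ami                  = "ami-1c999999"
  spot_price           = "0.008"
  instance_type        = "t2.small"
  wait_for_fulfillment = true # block until the instance exists, so public_ip is populated
  # ... remaining arguments unchanged from the question ...
}
```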
I tried the code snippet you provided with Terraform version 0.12.10 and got the same error. I checked the terraform.tfstate file and saw that the fields were not populated yet (for example private_ip, public_ip, and public_dns were set to null). I checked the "Spot Requests" section in the AWS Console and saw the following Status: price-too-low: Your Spot request price of 0.0075 is lower than the minimum required Spot request fulfillment price of 0.008. The request state was still open so this is why all the variables in the state file were set to null.

When provisioning with Terraform, how does code obtain a reference to machine IDs (e.g. database machine address)

Let's say I'm using Terraform to provision two machines inside AWS:
An EC2 Machine running NodeJS
An RDS instance
How does the NodeJS code obtain the address of the RDS instance?
You've got a couple of options here. The simplest one is to create a CNAME record in Route53 for the database and then always point to that CNAME in your application.
A basic example would look something like this:
resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_route53_record" "database" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "database.example.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.mydb.endpoint}"]
}
Alternative options include taking the endpoint output from the aws_db_instance and passing that into a user data script when creating the instance or passing it to Consul and using Consul Template to control the config that your application uses.
You may try Sparrowform, a lightweight provisioning tool for Terraform-based instances. It can take an inventory of Terraform resources and provision the related hosts, passing in all the necessary data:
$ terraform apply # bootstrap infrastructure
$ cat sparrowfile # this scenario
# fetches the DB address from the Terraform cache
# and populates a configuration file
# on the server with the Node.js code:
#!/usr/bin/env perl6
use Sparrowform;

my $rdb-address;
for tf-resources() -> $r {
  my $r-id = $r[0]; # resource id
  if ( $r-id eq 'aws_db_instance.mydb' ) {
    my $r-data = $r[1];
    $rdb-address = $r-data<address>;
    last;
  }
}

# For instance, we can
# install a configuration file.
# The next chunk of code will be applied to
# the server with the Node.js code:
template-create '/path/to/config/app.conf', %(
  source => ( slurp 'app.conf.tmpl' ),
  variables => %(
    rdb-address => $rdb-address
  ),
);

$ sparrowform --ssh_private_key=~/.ssh/aws.pem --ssh_user=ec2 # run provisioning
P.S. Disclosure: I am the tool's author.

error parallel distribution in omnet++

When I try to run a parallel distributed simulation on Ubuntu 14.04,
I get this error: Cannot append hostname to file name results/General-0.elog: no HOST, HOSTNAME or COMPUTERNAME (Windows) environment variable.
[General]
network = Network
parallel-simulation = true
parsim-communications-class = "cMPICommunications"
parsim-synchronization-class = "cNullMessageProtocol"
**.scalar-recording = false
**.vector-recording = false
*.GCN.**.partition-id = 0
*.lcn[*].partition-id = 1
*.sn[*].partition-id = 2
You have to set the HOST environment variable.
Type this in the console where you start OMNeT++:
export HOST=host01
or, in the IDE, go to Run | Run Configurations | your configuration | Environment and add a new HOST variable with the value host01.

session error when using multiple uwsgi worker and beaker session.typ is memory

I'm running a Pyramid webapp, using velruse for OAuth. If I run the app alone, it succeeds.
But when running with multiple uwsgi workers and session.type = memory set,
request.session does not contain the necessary token info on the callback from OAuth.
production.ini:
session.type = memory
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
[uwsgi]
socket = 127.0.0.1:6543
master = true
workers = 8
max-requests = 65536
debug = false
autoload = true
virtualenv = /home/myname/my_env
pidfile = ./uwsgi.pid
daemonize = ./mypyramid-uwsgi.log
If you use memory as the session store, only the worker in which the session data was written will be able to use that info. You should use another session store (one that can be shared by all of the workers/processes).
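As a sketch of one such shared store: Beaker's file backend keeps session data on disk, where every uwsgi worker on the host can read it, and the production.ini from the question already defines the needed directories, so only the session.type line changes. (This assumes all workers run on one machine; for multiple hosts a networked backend such as ext:memcached or ext:database would be needed.)

```ini
session.type = file
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
```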
Your uWSGI config is not clear (it looks like it only contains the socket option). Can you re-paste it?
