I managed to interactively toggle the debugging mode of my symfony application on and off for a user session with something like this:
$configuration = ProjectConfiguration::getApplicationConfiguration($app, $env, $debugging);
I know that whether the Web Debug Toolbar shows up does not depend on the value of $debugging, but on the configuration of the current environment.
So far, the only way I can get the toolbar to appear is with $env = 'dev'.
I'd like to activate it when accessing the "prod" environment also.
I have this app setting:
prod:
  .settings:
    no_script_name: true
    logging_enabled: false
    web_debug: true
    error_reporting: <?php echo (E_ALL | E_STRICT)."\n" ?>

dev:
  .settings:
    error_reporting: <?php echo (E_ALL | E_STRICT)."\n" ?>
    web_debug: true
    cache: false
    no_script_name: false
    etag: false
The toolbar is not being shown, apparently ignoring the "web_debug" setting.
If I echo(sfConfig::get('sf_web_debug')) I get "true".
How can I get the toolbar working?
You also need to change factories.yml: by default, there is no logger in the prod environment. Just comment it out like this:
prod:
  # logger:
  #   class: sfNoLogger
  #   param:
  #     level: err
  #     loggers: ~
From memory, you have to change a value in your front controller PHP file. Compare the frontend.php and frontend_dev.php files in your web directory and look for a difference where one is true and the other false (I think it's the last parameter).
The lines are:
require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php');
$configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'prod', false);
sfContext::createInstance($configuration)->dispatch();
change to:
require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php');
$configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'prod', true);
sfContext::createInstance($configuration)->dispatch();
I need Rundeck to expand an option based on the value selected in another option. I have an option ${option.env} and other options like ${option.id_dev} and ${option.id_qa}.
I want to achieve something like the following for extra-vars, so that the "env" option value determines which id (dev or qa) to read:
ansible-playbook /build.yml -e id=${option.id_${option.env.value}}
Is this possible, or could I pass extra-vars conditionally based on the env value? I'm using Rundeck 3.0.x.
Update:
To be clear: if I select 'dev' for the 'env' option, I need to use its value like ${option.id_${option.env.value}}, so that it translates to ${option.id_dev} and reads the other option on the command line.
You can use cascade remote options in a tricky way. The explanation is at the end of this answer.
I made a little example to see how to achieve this:
The branches.json file (referenced on the job options as "branches"):
[
  {"name":"branch1", "value":"branch1.json"},
  {"name":"branch2", "value":"branch2.json"},
  {"name":"branch3", "value":"branch3.json"}
]
The branch1.json is the first tentative value of the branches option:
[
  {"name":"v1", "value":"1"},
  {"name":"v2", "value":"2"},
  {"name":"v3", "value":"3"}
]
The branch2.json is the second tentative value of the branches option:
[
  {"name":"v4", "value":"4"},
  {"name":"v5", "value":"5"},
  {"name":"v6", "value":"6"}
]
The branch3.json is the third tentative value of the branches option:
[
  {"name":"v7", "value":"7"},
  {"name":"v8", "value":"8"},
  {"name":"v9", "value":"9"}
]
Full job definition to test:
- defaultTab: summary
description: ''
executionEnabled: true
id: ed0d84fe-135b-41ee-95b6-6daeaa94894b
loglevel: INFO
name: CascadeTEST
nodeFilterEditable: false
options:
- enforced: true
name: branches
valuesUrl: file:/Users/myuser/branches.json
- enforced: true
name: level2
valuesUrl: file:/Users/myuser/${option.branches.value}
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- fileExtension: .sh
interpreterArgsQuoted: false
script: |+
#!/bin/sh
# getting the options
first=#option.branches#
second=#option.level2#
# this just an example
echo "this an example: ${first%%.*}.$second"
scriptInterpreter: /bin/bash
keepgoing: false
strategy: node-first
uuid: ed0d84fe-135b-41ee-95b6-6daeaa94894b
Explanation
As you can see, the value of the first option is always a file name; if you take the value directly, you always get a string like "branch1.json". So the trick here is to strip the file extension and keep only the name as the value; that is what the ${first%%.*} parameter expansion in the script does.
With the first and second selected values you can then do anything later in a script, for example launch the ansible-playbook command in the inline script.
UPDATED ANSWER:
The closest way is to use two options and concatenate them in the command step; it isn't possible to expand one option's value inside another, as you noted.
Just like this:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
  loglevel: INFO
  name: NewJob
  nodeFilterEditable: false
  options:
  - enforced: true
    name: env
    value: qa
    values:
    - qa
    - prod
    - stage
    valuesListDelimiter: ','
  - enforced: true
    name: id_dev
    value: '1'
    values:
    - '1'
    - '2'
    - '3'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.env}_${option.id_dev}
    keepgoing: false
    strategy: node-first
  uuid: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
(the cascade option is another approach)
I'm trying to use gomplate as a configuration generator. The problem I'm facing is that there are multiple mutations and environments in which the application needs to be configured differently. I'd like a user-friendly, readable approach with as little repetition as possible in the template and source data.
The motivation behind this is to have generated source data app_config that can be used in a gomplate template as follows:
feature_a={{ index (datasource "app_config").features.feature_a .Env.APP_MUTATION .Env.ENV_NAME | required }}
feature_b={{ index (datasource "app_config").features.feature_b .Env.APP_MUTATION .Env.ENV_NAME | required }}
Basically I'd like to have this source data
features:
  feature_a:
    ~: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
converted into this result (used as the app_config gomplate datasource):
features:
  feature_a:
    mut_a:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
  feature_b:
    mut_a:
      dev: true
      test: true
      load: false
      staging: false
      prod: false
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
given that the platform datasource is defined as
mutations:
  - mut_a
  - mut_b
environments:
  - dev
  - test
  - load
  - staging
  - prod
I chose to use ~ to state that every environment or mutation that is not defined explicitly gets the value behind ~.
This should work under the assumption that the lowest level is the environment and the level above it is the mutation. If environments are not defined, the mutation level is the lowest and applies to all environments. However, I know this brings extra complexity, so I'm willing to use a simplified variant where mutations are always defined:
features:
  feature_a:
    mut_a: true
    mut_b: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
However, since I'm fairly new to gomplate, I'm not sure whether it is the right tool for the job.
I welcome any feedback.
After further investigation I decided that this problem is better solved with a separate tool.
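Such a separate tool does not need to be complicated. Below is a minimal Ruby sketch of the expansion under the simplified shape above (the function name expand_features is illustrative; it is not part of gomplate): a mutation maps either to a bare boolean, which applies to every environment, or to a hash whose ~ (nil) key is the default for unlisted environments.

```ruby
require 'yaml'

# Expand the simplified source data: under each feature, a mutation maps
# either to a bare boolean (applies to every environment) or to a hash
# in which the '~' (nil) key is the default for unlisted environments.
def expand_features(features, environments)
  features.transform_values do |mutations|
    mutations.transform_values do |envs|
      # Normalize a bare boolean into a hash with only a default.
      envs = { nil => envs } unless envs.is_a?(Hash)
      default = envs[nil]
      environments.to_h { |env| [env, envs.fetch(env, default)] }
    end
  end
end

environments = %w[dev test load staging prod]

features = YAML.safe_load(<<~YAML)
  feature_a:
    mut_a: true
    mut_b: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
YAML

puts expand_features(features, environments).to_yaml
```

The output matches the expanded result shown earlier: feature_b/mut_a is true for dev and test and false for load, staging, and prod, while the bare booleans fan out to every environment.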
I'm trying to configure rsyslog to output in RFC5424 format, which means the PROCID must appear in the syslog header; if there is no PROCID, a single dash (-) should be output in its place. However, some of the output events have it blank, and some have an actual value.
This is rsyslogd 5.8.10 running on Amazon Linux.
Here are the config lines:
$template CustomFormat,"<%PRI%>1 %timegenerated:1:23:date-rfc3339%-00:00 %HOSTNAME% %app-name% b%procid%b %msgid% %STRUCTURED-DATA%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
$ActionFileDefaultTemplate CustomFormat
Note that I put a "b" on each side of %procid% to make it more visible (this part is not RFC5424-compliant). Here are two lines of sample output.
<87>1 2019-06-19T20:03:01.929-00:00 ip-10-90-0-15 crond b29408b - - pam_unix(crond:account): expired password for user root (password aged)
<85>1 2019-06-19T20:17:18.150-00:00 ip-10-90-0-15 sudo bb - - ssm-user : TTY=pts/0 ; PWD=/ ; USER=root ; COMMAND=/bin/vi /etc/rsyslog.conf
The first line is correct, but the second example should have "b-b" instead of "bb". What should I do to make a blank %procid% show up as a dash? This works fine for %msgid% and %STRUCTURED-DATA%.
Is there a better way to get RFC5424 output? (I have to use -00:00 instead of Z.)
There may be a better way, but one thing you can try is to use a RainerScript variable in the template instead of the property, and set that variable to "-" when procid is empty. For example:
$template CustomFormat,"<%PRI%>1 ... b%$.myprocid%b ..."
$ActionFileDefaultTemplate CustomFormat
if ($procid == "") then {
set $.myprocid = "-";
} else {
set $.myprocid = $procid;
}
*.* ./outputfile
Just make sure the if statement comes before any action statements. Note that you cannot change the procid property itself with set.
I am trying to use the Ruby Google API Client to create a deployment on the Google Compute Platform (GCP).
I have a YAML file for the configuration:
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: global/images/myvm-1487178154
    networkInterfaces:
    - network: $(ref.my-subnet.selfLink)
      networkIP: 172.31.54.11
# Create the network for the machines
- name: my-subnet
  type: compute.v1.network
  properties:
    IPv4Range: 172.31.54.0/24
I have tested that this works using the gcloud command line tool.
I now want to do this in Ruby using the API. I have the following code:
require 'google/apis/deploymentmanager_v2'
require 'googleauth'
require 'googleauth/stores/file_token_store'
SCOPE = Google::Apis::DeploymentmanagerV2::AUTH_CLOUD_PLATFORM
PROJECT_ID = "my-project"
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = "./service_account.json"
deployment_manager = Google::Apis::DeploymentmanagerV2::DeploymentManagerService.new
deployment_manager.authorization = Google::Auth.get_application_default([SCOPE])
All of this is working in that I am authenticated and I have a deployment_manager object I can work with.
I want to use the insert_deployment method which has the following signature:
#insert_deployment(project, deployment_object = nil, preview: nil, fields: nil, quota_user: nil, user_ip: nil, options: nil) {|result, err| ... } ⇒ Google::Apis::DeploymentmanagerV2::Operation
The deployment_object type is 'Google::Apis::DeploymentmanagerV2::Deployment'. I can create this object, but I do not know how to import my YAML file into it so that I can programmatically perform the deployment.
There is another class called ConfigFile which seems akin to the --config command-line option, but again I do not know how to load the file into it, nor how to turn it into the correct object for insert_deployment.
I have worked this out.
Different classes need to be nested so that the configuration is picked up. For example:
require 'google/apis/deploymentmanager_v2'
require 'googleauth'
SCOPE = Google::Apis::DeploymentmanagerV2::AUTH_CLOUD_PLATFORM
PROJECT_ID = "my-project"
ENV['GOOGLE_APPLICATION_CREDENTIALS'] = "./service_account.json"
# Create a target configuration
target_configuration = Google::Apis::DeploymentmanagerV2::TargetConfiguration.new(config: {content: File.read('gcp.yaml')})
# Now create a deployment object
deployment = Google::Apis::DeploymentmanagerV2::Deployment.new(target: target_configuration, name: 'ruby-api-deployment')
# Attempt the deployment
response = deployment_manager.insert_deployment(PROJECT_ID, deployment)
Hope this helps someone
I am looking for a Ruby gem (or an idea to develop one) which can refresh YAML config files during runtime, so that I can store them in variables and use them.
There's a config object in Configurability (disclosure: I'm the author) which you can use either on its own or as part of the Configurability mixin. From the documentation:
Configurability also includes Configurability::Config, a fairly simple
configuration object class that can be used to load a YAML configuration file,
and then present both a Hash-like and a Struct-like interface for reading
configuration sections and values; it's meant to be used in tandem with Configurability, but it's also useful on its own.
Here's a quick example to demonstrate some of its features. Suppose you have a
config file that looks like this:
---
database:
  development:
    adapter: sqlite3
    database: db/dev.db
    pool: 5
    timeout: 5000
  testing:
    adapter: sqlite3
    database: db/testing.db
    pool: 2
    timeout: 5000
  production:
    adapter: postgres
    database: fixedassets
    pool: 25
    timeout: 50
ldap:
  uri: ldap://ldap.acme.com/dc=acme,dc=com
  bind_dn: cn=web,dc=acme,dc=com
  bind_pass: Mut#ge.Mix#ge
branding:
  header: "#333"
  title: "#dedede"
  anchor: "#9fc8d4"
You can load this config like so:
require 'configurability/config'
config = Configurability::Config.load( 'examples/config.yml' )
# => #<Configurability::Config:0x1018a7c7016 loaded from
#    examples/config.yml; 3 sections: database, ldap, branding>
And then access it using struct-like methods:
config.database
# => #<Configurability::Config::Struct:101806fb816
#    {:development=>{:adapter=>"sqlite3", :database=>"db/dev.db", :pool=>5,
#    :timeout=>5000}, :testing=>{:adapter=>"sqlite3",
#    :database=>"db/testing.db", :pool=>2, :timeout=>5000},
#    :production=>{:adapter=>"postgres", :database=>"fixedassets",
#    :pool=>25, :timeout=>50}}>
config.database.development.adapter
# => "sqlite3"
config.ldap.uri
# => "ldap://ldap.acme.com/dc=acme,dc=com"
config.branding.title
# => "#dedede"
or using a Hash-like interface with either Symbols, Strings, or a mix of both:
config[:branding][:title]
# => "#dedede"
config['branding']['header']
# => "#333"
config['branding'][:anchor]
# => "#9fc8d4"
You can install it via the Configurability interface:
config.install
Check to see if the file it was loaded from has changed since you
loaded it:
config.changed?
# => false
# Simulate changing the file by manually changing its mtime
File.utime( Time.now, Time.now, config.path )
config.changed?
# => true
If it has changed (or even if it hasn't), you can reload it, which automatically re-installs it via the Configurability interface:
config.reload
You can make modifications via the same Struct- or Hash-like interfaces and write the modified config back out to the same file:
config.database.testing.adapter = 'mysql'
config[:database]['testing'].database = 't_fixedassets'
then dump it to a YAML string:
config.dump
# => "--- \ndatabase: \n  development: \n    adapter: sqlite3\n
#    database: db/dev.db\n    pool: 5\n    timeout: 5000\n  testing: \n
#    adapter: mysql\n    database: t_fixedassets\n    pool: 2\n    timeout:
#    5000\n  production: \n    adapter: postgres\n    database:
#    fixedassets\n    pool: 25\n    timeout: 50\nldap: \n  uri:
#    ldap://ldap.acme.com/dc=acme,dc=com\n  bind_dn:
#    cn=web,dc=acme,dc=com\n  bind_pass: Mut#ge.Mix#ge\nbranding: \n
#    header: \"#333\"\n  title: \"#dedede\"\n  anchor: \"#9fc8d4\"\n"
or write it back to the file it was loaded from:
config.write
Using, for example, Watchr or Guard, you can monitor files and act on changes to them.
The actual action to take when a file changes depends entirely on your specific setup and situation, so you're on your own there, or you'll need to provide more information.
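If you'd rather avoid a gem entirely, the mtime-based changed?/reload idea shown above can be done with the standard library alone. A minimal sketch (the class name ReloadingConfig is illustrative, not from any gem):

```ruby
require 'yaml'

# Watches a YAML file's mtime and reloads its contents when it changes.
class ReloadingConfig
  attr_reader :data

  def initialize(path)
    @path = path
    reload
  end

  # Re-read the file and remember its current mtime.
  def reload
    @mtime = File.mtime(@path)
    @data  = YAML.safe_load(File.read(@path))
  end

  # Has the file been modified since we last loaded it?
  def changed?
    File.mtime(@path) > @mtime
  end

  # Reload only if the file changed; returns true if a reload happened.
  def refresh
    return false unless changed?
    reload
    true
  end
end

# Usage: poll periodically from wherever suits your app.
# config = ReloadingConfig.new('settings.yml')
# loop { config.refresh; sleep 5 }
```

Polling mtimes is cruder than the inotify-style watching Watchr or Guard provide, but it has no dependencies and is often enough for slowly changing config files.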