Puppet Bolt plan in YAML format

I am trying to put together a Puppet Bolt plan in YAML format.
I got it working as a .pp file; here is the plan:
plan profiles::chg123456(
  TargetSpec $nodes,
) {
  apply($nodes) {
    logrotate::rule {'proftpd':
      path          => ['/var/log/proftpd/*.log', '/var/log/xferlog', '/var/log/proftpd.system.log', '/var/log/sftp.log', '/var/log/sftp-xferlog',],
      maxsize       => '100m',
      rotate_every  => 'week',
      compress      => true,
      ifempty       => true,
      missingok     => true,
      sharedscripts => true,
      postrotate    => 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :'
    }
  }
}
It worked and created /etc/logrotate.d/proftpd with all the correct settings.
Now I want to convert it to YAML format, but I have no idea how to do that.
Here is what I guessed, but bolt plan show keeps saying:
$ bolt plan show
Parse error in step "chg123456":
No valid action detected (file: C:/Users/puppet/msys64/home/puppet/.puppetlabs/bolt/modules/profiles/plans/chg123456.yaml)
My YAML plan looks as follows:
parameters:
  nodes:
    type: TargetSpec
steps:
  - name: chg123456
    target: $nodes
    logrotate::rules:
      proftpd:
        path:
          - '/var/log/proftpd/*.log'
          - '/var/log/xferlog'
          - '/var/log/proftpd.system.log'
          - '/var/log/sftp.log'
          - '/var/log/sftp-xferlog'
        maxsize: '100m'
        compress: true
        ifempty: true
        missingok: true
        sharedscripts: true
        postrotate: 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :'
return: $chg123456
What am I doing wrong?
Thanks

You'll want to use a resources step, and list the resources you want to use in yaml (documentation):
parameters:
  nodes:
    type: TargetSpec
steps:
  - name: chg123456
    target: $nodes
    resources:
      - logrotate::rule: proftpd
        parameters:
          path:
            - '/var/log/proftpd/*.log'
            - '/var/log/xferlog'
            - '/var/log/proftpd.system.log'
            - '/var/log/sftp.log'
            - '/var/log/sftp-xferlog'
          maxsize: '100m'
          compress: true
          ifempty: true
          missingok: true
          sharedscripts: true
          postrotate: 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :'
return: $chg123456
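Once bolt plan show accepts the plan, you should be able to run it much like the .pp version; the target names below are placeholders:

bolt plan show profiles::chg123456
bolt plan run profiles::chg123456 nodes=ftp01.example.com,ftp02.example.com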
In response to one comment, bolt plan convert is only used to convert yaml plans into Puppet plans, not the other way around.
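You can still use that direction as a sanity check, though: convert the YAML plan back to Puppet code and compare the output with your original .pp plan. Adjust the path to wherever your chg123456.yaml actually lives; the error message above shows it under .puppetlabs/bolt/modules/profiles/plans/:

bolt plan convert ~/.puppetlabs/bolt/modules/profiles/plans/chg123456.yaml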

Related

'Permission Denied' while configuring Filebeat and Logstash using Bash Script

I am writing a script that creates a logstash conf file and adds the configuration, and removes the existing filebeat config file and creates a new one.
I am using cat, but when I run the script, I get:
./script.sh: /etc/logstash/conf.d/apache.conf : Permission denied
./script.sh: /etc/filebeat/filebeat.yml: Permission denied
This is the script. I have tried using sudo chown -R.
Am I missing something or is there a better way to configure my file?
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
cat > "/etc/filebeat/filebeat.yml" <<EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo cat > "/etc/logstash/conf.d/apache.conf " <<EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
The main problem here is how redirections work.
According to this answer:
All redirections (including >) are applied before executing the actual command. In other words, your shell first tries to open /etc/php5/apache2/php.ini for writing using your account, then runs a completely useless sudo cat.
You can easily solve your problem by using sudo tee, which opens the target file as root, instead of the plain > redirection. Your script would then look like this:
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
sudo tee "/etc/filebeat/filebeat.yml" << EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo tee "/etc/logstash/conf.d/apache.conf" << EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
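One small optional variation: tee also echoes everything it writes to stdout, so if you don't want the file contents printed when the script runs, discard tee's standard output, for example:

sudo tee "/etc/filebeat/filebeat.yml" > /dev/null << EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF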

GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default some common conditions are defined in the .ifawsdeploy template and pulled into both jobs with the extends keyword.
Only one of the two should run: if the variable $ADMIN_SERVER_IP is provided, connect_admin_server should run, and that part works as expected.
If no value is provided for $ADMIN_SERVER_IP, create_admin_server should run, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"

#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem
connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user#${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user#${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt
create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)#gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How do I fix that?
I tried the step below, based on suggestions in the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And I included the above in the extends: of a job:
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
If no value provided to $ADMIN_SERVER_IP then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there are no rules that allow the job to run ever. You either need a case that will evaluate true for the job to run, or to have a default case (an item with no if: condition) in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
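Putting those pieces together, a rough sketch of what create-admin-server's rules could look like, assuming you still want the shared .ifawsdeploy conditions to gate the job (rules are evaluated top to bottom, so the never case has to come first):

create-admin-server:
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
    - !reference [.ifawsdeploy, rules]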
You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or:
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 on GitLab, along with pipeline records for the no-value and has-value cases.

log4j temporary fix in elasticsearch helm chart using Dlog4j2.formatMsgNoLookups

I was trying to set up an Elasticsearch cluster in AKS using the Helm chart, but due to the Log4j vulnerability I wanted to set it up with the option -Dlog4j2.formatMsgNoLookups set to true. I am getting an unknown flag error when I pass the argument in the helm command.
Ref: https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16
helm upgrade elasticsearch elasticsearch --set imageTag=6.8.16 esJavaOpts "-Dlog4j2.formatMsgNoLookups=true"
Error: unknown shorthand flag: 'D' in -Dlog4j2.formatMsgNoLookups=true
I have also tried to add the following in the values.yaml file:
esConfig: {}
#  elasticsearch.yml: |
#    key:
#      nestedkey: value
log4j2.properties: |
  -Dlog4j2.formatMsgNoLookups = true
but the values are not added to /usr/share/elasticsearch/config/jvm.options, /usr/share/elasticsearch/config/log4j2.properties, or the environment variables.
First of all, here's a good source of knowledge about mitigating the Log4j2 security issue, if that is the reason you came here.
Here's how you can write your values.yaml for the Elasticsearch chart:
esConfig:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
A ConfigMap will be generated by Helm:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  ...
data:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
And the Log4j configuration will be mounted into your Elasticsearch container as:
...
volumeMounts:
  ...
  - name: esconfig
    mountPath: /usr/share/elasticsearch/config/log4j2.properties
    subPath: log4j2.properties
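If you want to confirm the file actually reached the container, one quick check is to read it back from the pod; the pod name elasticsearch-master-0 is an assumption based on the chart's default naming:

kubectl exec elasticsearch-master-0 -- cat /usr/share/elasticsearch/config/log4j2.properties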
Update: How to set and add multiple configuration files.
You can set up other ES configuration files in your values.yaml. All the files you specify here will be part of the ConfigMap, and each of them will be mounted at /usr/share/elasticsearch/config/ in the Elasticsearch container. Example:
esConfig:
  elasticsearch.yml: |
    node.master: true
    node.data: true
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
  jvm.options: |
    # You can also place a comment here.
    -Xmx1g -Xms1g -Dlog4j2.formatMsgNoLookups=true
  roles.yml: |
    click_admins:
      run_as: [ 'clicks_watcher_1' ]
      cluster: [ 'monitor' ]
      indices:
        - names: [ 'events-*' ]
          privileges: [ 'read' ]
          field_security:
            grant: ['category', '#timestamp', 'message' ]
          query: '{"match": {"category": "click"}}'
ALL of the configurations above are for illustration only to demonstrate how to add multiple configuration files in the values.yaml. Please substitute these configurations with your own settings.
If you update and put a value under esConfig, you will need to remove the curly brackets:
esConfig:
  log4j2.properties: |
    key = value
I would rather suggest changing the /config/jvm.options file and adding this at the end:
-Dlog4j2.formatMsgNoLookups=true
The helm chart has an option to set java options.
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
In your case, setting it like this should be the solution:
esJavaOpts: "-Dlog4j2.formatMsgNoLookups=true"
As I see in the updated values.yml in the elastic repository:
esConfig: {}
#  log4j2.properties: |
#    key = value
You probably need to uncomment the log4j2.properties part.

Jenkins Job Builder tries to expand parameter in a text field

I'm having some issues with a Jenkins Job Builder YAML file which contains an attribute (message-content) with "{}" characters in it:
- job-template:
    id: senderjob
    name: '{job-prefix}{id}'
    command: 'echo "executed with $PARAMETER"'
    type: freestyle
    properties:
      - ownership:
          enabled: true
          owner: user1
      - build-discarder:
          num-to-keep: 100
    parameters:
      - string:
          name: PARAMETER
          default: 'param'
          description: 'default parameter for message.'
    # template settings
    builders:
      - shell: '{command}'
    publishers:
      - ci-publisher:
          override-topic: VirtualTopic.abcd.message
          message-type: 'Custom'
          message-properties: |
            release-name=$PARAMETER
            PARAMETER=$PARAMETER
          message-content: '{"release-name" : "1.0"}'
The error reported is:
jenkins_jobs.errors.JenkinsJobsException: release-name parameter missing to format {release-name : 1.0}
So it looks like it's trying to expand "release-name" with a parameter.
So I have added it as a parameter:
- string:
    name: release-name
    default: '1.0'
    description: 'default parameter for release.'
It still reports the same error. Should I include the parameter somewhere else, or escape it? I couldn't find any YAML escape for "{}" characters though.
Any idea?
Job templates are expanded as format strings, which is why JJB tries to substitute the {...} in message-content; allowing empty variables stops it from failing on names it cannot resolve. You should add the following to the jenkins-jobs configuration file:
[job_builder]
allow_empty_variables = True
Link: https://jenkins-job-builder.readthedocs.io/en/latest/definition.html#job-template-1
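An alternative, if you would rather keep strict variable checking enabled, is to escape the braces you want to appear literally: job templates are expanded with Python format strings, so doubled braces come out as single ones. For example, in place of the original message-content line:

message-content: '{{"release-name" : "1.0"}}'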

How do I pass boolean Environmental Variables to a `when` step in CircleCI?

I want to do something along the lines of
commands:
  send-slack:
    parameters:
      condition:
        type: env_var_name
    steps:
      - when:
          # only send if it's true
          condition: << parameters.condition >>
          steps:
            - run: # do some stuff if it's true
jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do Some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - send-slack:
          text: "yay"
          condition: success_message
      - send-slack:
          text: "nay"
          condition: failure_message
Based on this documentation, you cannot use environment variables as conditions in CircleCI. This is because the when logic is evaluated when the configuration is processed (i.e., before the job actually runs and the environment variables are set). As an alternative, I would add the logic to a separate run step (or to the same initial one).
jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do Some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - run:
          name: Send Message
          command: |
            if $success_message; then
              echo "send success message here"  # placeholder for the real success notification
            fi
            if $failure_message; then
              echo "send failure message here"  # placeholder for the real failure notification
            fi
Here is a relevant ticket on the CircleCI discussion board.
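If you want to keep the send-slack command interface from the question, a minimal sketch of the same idea is to move the check into the command's own run step so it happens at runtime; the parameter names and the echo placeholder are illustrative only, not a real Slack call:

commands:
  send-slack:
    parameters:
      condition:
        type: env_var_name
      text:
        type: string
    steps:
      - run:
          name: Maybe send Slack message
          command: |
            # The parameter expands to the environment variable *name* at config
            # time; the shell then reads its value at runtime.
            if [ "${<< parameters.condition >>}" = "true" ]; then
              echo "would send: << parameters.text >>"  # replace with the real notification call
            fi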
