I'm tasked with creating about a hundred files for use with puppet. I'm creating .yaml files with unique filenames that will contain site-specific IP and hostname information, these must have the same format (ideally from a template).
I want to create a file generator that fills in variables for IP, subnet, network, and hostname from an input file (.csv?) What's the best way to approach this?
sample format:
---
network::interfaces::interfaces:
  eth0:
    method: 'static'
    address: '10.20.30.1'
    netmask: '255.255.240.0'
    broadcast: '10.20.30.255'
    network: '10.20.30.0'
    gateway: '10.20.30.1'
network::interfaces::auto:
  - 'eth0'
hosts::host_entries:
  HOSTNAME:
    ip: '10.20.30.2'
hosts::purge_hosts: true
dhcpd::range_start: '10.20.30.11'
dhcpd::range_stop: '10.20.30.240'
dhcpd::gateway: '10.20.30.1'
hornetq::site: 'test'
Write a skeleton like this:
network::interfaces::interfaces:
  eth0:
    method: 'static'
    address: '__IP__'
    netmask: '__MASK__'
    broadcast: '__BC__'
    network: '__NET__'
    gateway: '__GW__'
etc.
Generate the files with a loop like:
while read -r OUTPUT IP MASK BC NET GW ; do
    sed -e "s/__IP__/$IP/" \
        -e "s/__MASK__/$MASK/" \
        -e "s/__BC__/$BC/" \
        -e "s/__NET__/$NET/" \
        -e "s/__GW__/$GW/" \
        <"$SKELETON" >"$OUTPUT"
done < input-file
This assumes that the fields in the input file are separated by whitespace, with the name of the respective output file in the first column.
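A runnable end-to-end sketch of the above, with hypothetical file names and a two-placeholder skeleton for brevity:

```shell
#!/usr/bin/env bash
# Hypothetical two-placeholder skeleton; a real one would mirror the
# full YAML sample above.
cat > skel.yaml <<'EOF'
address: '__IP__'
netmask: '__MASK__'
EOF

# Input: output file name first, then the values, whitespace-separated.
cat > input-file <<'EOF'
site-a.yaml 10.20.30.1 255.255.240.0
site-b.yaml 10.20.40.1 255.255.255.0
EOF

# One output file per input line.
while read -r OUTPUT IP MASK ; do
    sed -e "s/__IP__/$IP/" -e "s/__MASK__/$MASK/" < skel.yaml > "$OUTPUT"
done < input-file
```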
Related
I have this role task file in roles/make_elasticsearch_conf/tasks/main.yml:
---
# tasks file for make_elasticsearch_conf
#
- name: Get private IP address
  command:
    cmd: "hostname -I | awk '{print $2}'"
  register: "cluster_ip"
- name: Create /etc/elasticsearch/elasticsearch.yml File
  ansible.builtin.template:
    src: elasticsearch.yml.j2
    dest: /etc/elasticsearch/elasticsearch.yml
I also have a template in roles/make_elasticsearch_conf/templates/elasticsearch.yml.j2:
cluster.name: {{ ansible_host }}
node.name: {{ ansible_host }}
network.host: {{ cluster_ip }}
I use it in this make_elastic_search_conf.yml playbook:
---
- name: Make Elastic Search Config.
  hosts: all
  become: True
  gather_facts: True
  roles:
    - roles/make_elasticsearch_conf
When I run my playbook I get this error:
FAILED! => {"changed": true, "cmd": ["hostname", "-I", "|", "awk", "{print $2}"], "delta": "0:00:00.006257", "end": "2022-12-06 21:54:47.612238", "msg": "non-zero return code", "rc": 255, "start": "2022-12-06 21:54:47.605981", "stderr": "Usage: hostname [-b] {hostname|-F file} set host name (from file)\n hostname [-a|-A|-d|-f|-i|-I|-s|-y] display formatted name\n hostname display host name\n\n {yp,nis,}domainname {nisdomain|-F file} set NIS domain name (from file)\n {yp,nis,}domainname display NIS domain name\n\n dnsdomainname display dns domain name\n\n hostname -V|--version|-h|--help print info and exit\n\nProgram name:\n {yp,nis,}domainname=hostname -y\n dnsdomainname=hostname -d\n\nProgram options:\n -a, --alias alias names\n -A, --all-fqdns all long host names (FQDNs)\n -b, --boot set default hostname if none available\n -d, --domain DNS domain name\n -f, --fqdn, --long long host name (FQDN)\n -F, --file read host name or NIS domain name from given file\n -i, --ip-address addresses for the host name\n -I, --all-ip-addresses all addresses for the host\n -s, --short short host name\n -y, --yp, --nis NIS/YP domain name\n\nDescription:\n This command can get or set the host name or the NIS domain name. 
You can\n also get the DNS domain or the FQDN (fully qualified domain name).\n Unless you are using bind or NIS for host lookups you can change the\n FQDN (Fully Qualified Domain Name) and the DNS domain name (which is\n part of the FQDN) in the /etc/hosts file.", "stderr_lines": ["Usage: hostname [-b] {hostname|-F file} set host name (from file)", " hostname [-a|-A|-d|-f|-i|-I|-s|-y] display formatted name", " hostname display host name", "", " {yp,nis,}domainname {nisdomain|-F file} set NIS domain name (from file)", " {yp,nis,}domainname display NIS domain name", "", " dnsdomainname display dns domain name", "", " hostname -V|--version|-h|--help print info and exit", "", "Program name:", " {yp,nis,}domainname=hostname -y", " dnsdomainname=hostname -d", "", "Program options:", " -a, --alias alias names", " -A, --all-fqdns all long host names (FQDNs)", " -b, --boot set default hostname if none available", " -d, --domain DNS domain name", " -f, --fqdn, --long long host name (FQDN)", " -F, --file read host name or NIS domain name from given file", " -i, --ip-address addresses for the host name", " -I, --all-ip-addresses all addresses for the host", " -s, --short short host name", " -y, --yp, --nis NIS/YP domain name", "", "Description:", " This command can get or set the host name or the NIS domain name. You can", " also get the DNS domain or the FQDN (fully qualified domain name).", " Unless you are using bind or NIS for host lookups you can change the", " FQDN (Fully Qualified Domain Name) and the DNS domain name (which is", " part of the FQDN) in the /etc/hosts file."], "stdout": "", "stdout_lines": []}
I have tried all sorts of ways to get the private ip of the host but nothing I have tried gave the expected result.
The problem is caused by the | in the command.
As per the documentation of the command module:
If you want to run a command through the shell (say you are using <, >, |, and so on), you actually want the ansible.builtin.shell module instead. Parsing shell metacharacters can lead to unexpected commands being executed if quoting is not done correctly, so it is more secure to use the command module when possible.
You might want to use the shell module instead. Or, better, get the IP address from the Ansible facts.
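A minimal sketch of both routes (assuming the private address really is the second field of hostname -I on these hosts):

```yaml
# Route 1: run the pipeline through the shell module instead of command.
- name: Get private IP address
  ansible.builtin.shell:
    cmd: "hostname -I | awk '{print $2}'"
  register: cluster_ip

# The template must then reference the command's stdout, not the whole
# registered dict:
#   network.host: {{ cluster_ip.stdout }}

# Route 2: no command at all; use the facts Ansible already gathered, e.g.:
#   network.host: {{ ansible_default_ipv4.address }}
```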
I have a YAML file like below:
server:
  scheme: http
  host: localhost
  port: 8080
  context: myctx
report:
  output-dir-base: /tmp
---
spring:
  config:
    activate:
      on-profile: local
vertica:
  datasource:
    jdbc-url: jdbc:vertica://server:65534/database
    username: user
    password: db_pass
da-config:
  user: username
  password: da_pass
da-host:
  scheme: http
  server: server
  port: 65535
From a bash script I want to replace the Vertica password with a given value, say "first_pass", and the da password with "second_pass".
I tried this, but it didn't work:
sed '/^vertica:\([[:space:]]*password: \).*/s//\1first_pass/' common-properties.yaml
Can anyone help with this, please?
UPDATE:
This is what I did to handle the situation, in the bash script file:
DB_PASSWORD="first_pass/here" ## I realized later that the password could contain a '/' char as well
DA_PASSWORD="second_pass"
sed -i '/vertica:/{n;n;n;n;s/\(password\).*/\1: '"${DB_PASSWORD//\//\\/}"'/}' my.yaml ## backslash-escape the '/' in the variable
sed -i '/da-config:/{n;n;s/\(password\).*/\1: '"${DA_PASSWORD//\//\\/}"'/}' my.yaml
Assumptions:
input is nicely formatted as displayed in question
OP does not have access to tools designed specifically for yaml editing
New passwords stored in variables:
pass1='first_pass'
pass2='second_pass'
One sed idea using ranges to zero in on the sections we're interested in:
sed -E "/^vertica:/,/password:/ s/(password:).*/\1 ${pass1}/; /^da-config:/,/password:/ s/(password:).*/\1 ${pass2}/; " common-properties.yaml
# or, as OP has mentioned in an update, when the password contains a forward slash we change the sed script delimiter:
sed -E "/^vertica:/,/password:/ s|(password:).*|\1 ${pass1}|; /^da-config:/,/password:/ s|(password:).*|\1 ${pass2}|; " common-properties.yaml
One awk idea:
awk -v p1="${pass1}" -v p2="${pass2}" '
$1 == "password:" { if ( found1 ) { sub(/password: .*/,"password: " p1); found1=0 }
                    if ( found2 ) { sub(/password: .*/,"password: " p2); found2=0 }
                  }
/^vertica:/   { found1=1 }
/^da-config:/ { found2=1 }
1
' common-properties.yaml
Both of these generate:
server:
  scheme: http
  host: localhost
  port: 8080
  context: myctx
report:
  output-dir-base: /tmp
---
spring:
  config:
    activate:
      on-profile: local
vertica:
  datasource:
    jdbc-url: jdbc:vertica://server:65534/database
    username: user
    password: first_pass
da-config:
  user: username
  password: second_pass
da-host:
  scheme: http
  server: server
  port: 65535
Once OP is satisfied with the output:
for sed: add the -i option to update the file in place
if using GNU awk: add -i inplace to update the file in place; if not running GNU awk, save the output to a temp file and then overwrite the input file with the contents of the temp file
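A self-contained way to verify the sed range approach before adding -i (a trimmed sample, just enough to exercise both ranges; file name as in the question):

```shell
pass1='first_pass'
pass2='second_pass'

# Trimmed version of the question's file.
cat > common-properties.yaml <<'EOF'
vertica:
  datasource:
    username: user
    password: db_pass
da-config:
  user: username
  password: da_pass
EOF

# Each range runs from its section header to the first password: line,
# so each substitution touches only its own section.
sed -E "/^vertica:/,/password:/ s|(password:).*|\1 ${pass1}|; /^da-config:/,/password:/ s|(password:).*|\1 ${pass2}|;" common-properties.yaml > updated.yaml
```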
I'm trying to process a CSV file to find patterns like 'duser=', 'dhost=' and 'dproc=' and, once found, print the string that follows each. I have to use pattern matching first because the content of the CSV file is not constant, and the field separators are not constant either. Please take into consideration that the CSV file contains logs in CEF format with many other patterns and values. Sample log format:
CEF:0|Microsoft|Microsoft Windows|Windows 7|Microsoft-Windows-Security-Auditing:4688|A new process has been created.|Low| eventId=1010044130 externalId=4688 msg=Token Elevation Type indicates the type of token that was assigned to the new process in accordance with User Account Control policy.Type 1 is a full token with no privileges removed or groups disabled. Type 2 is an elevated token with no privileges removed or groups disabled.Type 3 is a limited token with administrative privileges removed and administrative groups disabled. type=1 start=1523950846517 categorySignificance=/Informational categoryBehavior=/Execute/Start categoryDeviceGroup=/Operating System catdt=Operating System categoryOutcome=/Success categoryObject=/Host/Resource/Process art=1523950885975 cat=Security deviceSeverity=Audit_success rt=1523950863727 dhost=A-Win7Test.*****.net dst=**.**.**.46 destinationZoneURI=/All Zones/ArcSight System/Public Address Space Zones/******* dntdom=****** oldFileHash=en_US|UTF-8 cnt=5 cs2=Process Creation cs6=TokenElevationTypeDefault (1) cs1Label=Mandatory Label cs2Label=EventlogCategory cs3Label=New Process ID cs4Label=Process Command Line cs5Label=Creator Process ID cs6Label=Token Elevation Type ahost=a-server09.****.net agt=**.**.**.9 agentZoneURI=/All Zones/ArcSight System/Public Address Space Zones/******** amac=00-50-56-B8-4F-BB av=7.7.0.8044.0 atz=GMT at=winc dvchost=A-Win7Test.*****.net dvc=**.**.**.46 deviceZoneURI=/All Zones/ArcSight System/Public Address Space Zones/********** deviceNtDomain=***** dtz=GMT _cefVer=0.1 aid=3AaTkhlEBABCABcfWDDqDbw\=\=
Ref: https://community.softwaregrp.com/t5/ArcSight-User-Discussions/Issue-with-Windows-Event-4688/td-p/1641345
It seems that the command below works:
... | awk 'sub(/.*duser=/,""){print "User:",$1}'
However, it works only for the first pattern; after execution, as you can guess, there are no more lines to process. Is there any option to execute the above command three times with different patterns to get a list of 3 columns?
I would like to achieve:
duser=AAA dhost=BBB dproc=CCC
duser=DDD dhost=EEE dproc=FFF
duser=GGG dhost=HHH dproc=III
Appreciate your help, thank you
Like this?
$ cat file
duser=AAA dhost=BBB dproc=CCC
duser=DDD dhost=EEE dproc=FFF
duser=GGG dhost=HHH dproc=III
$ awk '{print gensub("duser=([^ \t,]+)[ \t,]+dhost=([^ \t,]+)[ \t,]+dproc=([^ \t,]+)", "User: \\1, Host: \\2, Proc: \\3", 1);}' file
User: AAA, Host: BBB, Proc: CCC
User: DDD, Host: EEE, Proc: FFF
User: GGG, Host: HHH, Proc: III
If the three parts are in different positions and different orders, then try this (the three-argument form of match() is a GNU awk extension):
awk '{match($0,"duser=([^ \t,]+)",user); match($0,"dhost=([^ \t,]+)",host); match($0,"dproc=([^ \t,]+)",proc); print "User: " user[1] ", Host: " host[1] ", Proc: " proc[1];}' file
Please read mcve before you ask another question.
You can try Perl.
$ cat lack_of_threat.txt
duser=AAA dhost=BBB dproc=CCC
duser=DDD dhost=EEE dproc=FFF
duser=GGG dhost=HHH dproc=III
$ perl -ne ' /duser=(\S+)\s*dhost=(\S+)\s*dproc=(\S+)/; print "User:$1, Host:$2, Proc:$3\n" ' lack_of_threat.txt
User:AAA, Host:BBB, Proc:CCC
User:DDD, Host:EEE, Proc:FFF
User:GGG, Host:HHH, Proc:III
$
I have a Python script that generates an AWS signature key for S3. It produces two values:
GZkXNl6Leat71ckcwfxGuiHxt9fnkj47F1SbVjRu/t0=
20190129/eu-west-2/s3/aws4_request
Both are valid for 7 days. What I want is to run that script every five days using cron inside the Docker container, grab the output, and place/replace the values in the Nginx config.
config:
server {
    listen 80;
    aws_access_key 'AKIDEXAMPLE';
    aws_signing_key FIRST_VALUE;
    aws_key_scope SECOND_VALUE;
    aws_s3_bucket s3_bucket_name;
    location / {
        aws_sign;
        proxy_pass http://s3_bucket_name.s3.amazonaws.com;
    }
}
Then restart nginx in the container.
Assuming the values are stored in val_file and are to be slotted into nginx.conf, this simplistic solution ought to do:
$: cat script
#!/usr/bin/env bash
{ read -r val1   # read the first line
  read -r val2   # read the second line
  sed -i "s!FIRST_VALUE!$val1!;
          s!SECOND_VALUE!$val2!;
         " nginx.conf
} < val_file
$: script
$: cat nginx.conf
server {
    listen 80;
    aws_access_key 'AKIDEXAMPLE';
    aws_signing_key GZkXNl6Leat71ckcwfxGuiHxt9fnkj47F1SbVjRu/t0=;
    aws_key_scope 20190129/eu-west-2/s3/aws4_request;
    aws_s3_bucket s3_bucket_name;
    location / {
        aws_sign;
        proxy_pass http://s3_bucket_name.s3.amazonaws.com;
    }
}
The curly braces make the single input redirection supply both reads. Then it's just a sed, using ! as the delimiter because your data contains forward slashes. Double quotes around the sed script allow the embedded variables - not ideal, but it seems fine here.
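The cron and reload part of the question might then be wired up like this (every path, the generator name, and the schedule below are illustrative; nginx -s reload avoids a full restart):

```shell
# Illustrative crontab entry inside the container: every 5th day at 03:00,
# regenerate the two values, rewrite the config, and reload nginx.
# 0 3 */5 * * /usr/local/bin/gen_sig.py > /etc/nginx/val_file \
#     && /usr/local/bin/script && nginx -s reload
```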
I am using this script to check a list of IPs I own to see if they are on the spam block list.
auto.sh:
while read -r ip ; do
    ./blacklist.sh "$ip"
done < block.txt
blacklist.sh is the above linked script.
block.txt lists each of my IPs, one per line (I have several /22s).
A typical output of a blocked ip scan looks like this:
Warning: PTR lookup failed
b.barracudacentral.org : 127.0.0.2
bb.barracudacentral.org : 127.0.0.2
black.junkemailfilter.com : 127.0.0.2
cbl.abuseat.org : 127.0.0.2
cidr.bl.mcafee.com : 127.0.0.4
dnsbl.justspam.org : 127.0.0.2
hostkarma.junkemailfilter.com : 127.0.0.2
----------------------------------------------------------
Results for <my ip>
Tested: 117
Passed: 110
Invalid: 0
Blacklisted: 7
----------------------------------------------------------
What I want to do is have the script write its output to a file when the text above doesn't say "Blacklisted: 0".
I am not sure how to approach this; will something like this work?
sudo ./auto.sh "conditions where Blacklisted: is > 0" >> 12.txt
Thanks for any help
Put the output in a temporary file and then check its content:
./auto.sh > 12_temp.txt
grep -q 'Blacklisted:[[:blank:]]*0$' 12_temp.txt || cat 12_temp.txt >> 12.txt
rm -f 12_temp.txt
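Because auto.sh checks many IPs in one run, a single grep over the combined output stays quiet as soon as any one report shows "Blacklisted: 0", even if other IPs are blocked; filtering per IP avoids that. A sketch, runnable here only thanks to a stand-in checker (replace the stub with the real blacklist.sh):

```shell
# Stand-in for the linked blacklist.sh so the sketch runs end to end.
cat > blacklist.sh <<'EOF'
#!/bin/sh
echo "Results for $1"
case "$1" in
  10.0.0.1) echo "Blacklisted: 7" ;;
  *)        echo "Blacklisted: 0" ;;
esac
EOF
chmod +x blacklist.sh
printf '10.0.0.1\n10.0.0.2\n' > block.txt

# Keep a report only when its own Blacklisted count is non-zero.
while read -r ip ; do
    out=$(./blacklist.sh "$ip")
    printf '%s\n' "$out" | grep -q 'Blacklisted:[[:blank:]]*0$' \
        || printf '%s\n' "$out" >> 12.txt
done < block.txt
```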