I am writing a script that creates a Logstash conf file with its configuration, and that removes the existing Filebeat config file and creates a new one.
I am using cat, but when I run the script I get:
./script.sh: /etc/logstash/conf.d/apache.conf : Permission denied
./script.sh: /etc/filebeat/filebeat.yml: Permission denied
This is the script. I have already tried sudo chown -R.
Am I missing something, or is there a better way to write these files?
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
cat > "/etc/filebeat/filebeat.yml" <<EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo cat > "/etc/logstash/conf.d/apache.conf " <<EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
The main problem here is how redirections work.
According to this answer:
All redirections (including >) are applied before executing the actual command. In other words, your shell first tries to open /etc/php5/apache2/php.ini for writing using your account, then runs a completely useless sudo cat.
You can easily solve your problem by using tee (with sudo) instead of cat. Then, your script should be like this:
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
sudo tee "/etc/filebeat/filebeat.yml" << EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo tee "/etc/logstash/conf.d/apache.conf" << EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
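One note: tee also copies each here-document to stdout, so the script prints everything it writes. If you want it to stay quiet, add a redirect to /dev/null on the tee line (and use tee -a when you need to append instead of overwrite), for example:
sudo tee "/etc/logstash/conf.d/apache.conf" > /dev/null << EOF
...
EOF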
Example of a functional kubectl patch command:
# kubectl patch storageclass local-path \
-p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
In certain cases the patched keys/values are too numerous, so it is recommended to use a file instead:
# kubectl patch storageclass local-path --patch-file=file.yaml
I would like to use an alternative format like the one below, but it returns an error:
cat << 'EOF' | kubectl patch storageclass local-path --patch-file -
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: false
EOF
error: unable to read patch file: open -: no such file or directory
My goal is to use a dynamic way of pushing the patch data, without creating a file. What would be the correct format? Thank you.
Update: Based on provided documentation, I tried this format:
cat << 'EOF' | kubectl patch storageclass local-path --type=merge -p -
{
  "metadata": {
    "annotations": {
      "storageclass.kubernetes.io/is-default-class": "false"
    }
  }
}
EOF
Error from server (BadRequest): json: cannot unmarshal array into Go value of type map[string]interface {}
Or:
kubectl patch storageclass local-path --type=merge -p << 'EOF'
{
  "metadata": {
    "annotations": {
      "storageclass.kubernetes.io/is-default-class": "false"
    }
  }
}
EOF
error: flag needs an argument: 'p' in -p
What would be the correct format? I'm trying to avoid a very long line and keep a nice readable format.
If you look at the output of kubectl patch --help, passing the patch on stdin the way you are trying is not a supported feature: you either pass the patch inline as JSON with -p/--patch, or from a file that contains the data with --patch-file.
You can do something like this, but you still need to clean up the file it creates (auto.yaml).
$ cat <<EOF > auto.yaml && kubectl patch pod pod-name --patch-file=auto.yaml
> metadata:
>   labels:
>     app: testapp
> EOF
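If you do go through a temporary file, creating it with mktemp and removing it from an EXIT trap keeps the cleanup automatic. This is only a sketch; pod-name and the app label are the placeholders from the example above:
patch_file="$(mktemp)"
trap 'rm -f "$patch_file"' EXIT   # delete the temp file when the shell exits
cat > "$patch_file" <<'EOF'
metadata:
  labels:
    app: testapp
EOF
kubectl patch pod pod-name --patch-file="$patch_file"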
For more information about EOF, refer to the Here Document section in this document.
For the updated question:
You are actually missing the ' quote before the JSON starts, and you should not pass - after -p. Give it a try like this; this is working in our environment:
$ kubectl patch deployments nginx --type=merge --patch '{
>   "metadata":
>   {
>     "labels":
>     {
>       "check": "good"
>     }
>   }
> }'
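If you prefer to keep a here-document for readability without writing any file, command substitution also works, since -p just takes the patch as a string; a sketch against the storageclass from the question:
kubectl patch storageclass local-path --type=merge -p "$(cat <<'EOF'
{
  "metadata": {
    "annotations": {
      "storageclass.kubernetes.io/is-default-class": "false"
    }
  }
}
EOF
)"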
type: OS::Nova::Server
properties:
  name: { get_param: hostname }
  availability_zone: nova
  image: { get_param: wisdom_image_id }
  flavor: { get_param: flavor }
  config_drive: true
  networks:
    - port: { get_param: public_EDN_port }
    - port: { get_param: private_port }
  user_data_format: RAW
  user_data:
    str_replace:
      template: |
        #!/bin/sh
        # Any unix steps from IG leaving
        set -e
        cd wrongex
        trap 'last_command=$current_command; current_command=$BASH_COMMAND' DEBUG
        trap 'echo "\"${last_command}\" command failed with exit code $?."' EXIT
        cd wrongex
        printf "DEVICE=eth0\nBOOTPROTO=static\nONBOOT=yes\nTYPE=Ethernet\nUSERCTL=no\nIPADDR=public_EDNv4_ip\nNETMASK=public_EDNv4_Netmask\nGATEWAY=public_EDNv4_GATEWAY\nIPV6INIT=yes\nIPV6ADDR=public_EDNv6_ip\nIPV6_DEFAULTGW=public_EDNv6_IPV6_GATEWAY\n" > /etc/sysconfig/network-scripts/ifcfg-eth0
        printf "DEVICE=eth1\nBOOTPROTO=static
Using "openstack stack create -e envYAML -f HEATyaml stackname" command to instantiate the VM.
This is the portion of HEAT YAML wherein we are trying to run some bash commands in the VM once it is instantiated (all what is part of user_data). Just need to know if there is a way to exit and report the error if any shell command fails.
Currently I have tried trap, set -e etc. but none of them I believe is able to provide the feedback and the stack still is created successfully (even if any of the commands inside user_data fails).
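For reference, the mechanism Heat provides for this is a wait condition: the user_data script signals SUCCESS or FAILURE back to Heat, and the stack only reaches CREATE_COMPLETE after a SUCCESS signal (and fails on a FAILURE signal or timeout). Below is only a rough sketch; the resource names wait_handle/wait_condition, the 600-second timeout and the placement of the trap are assumptions, not part of the original template:
resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle
  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      timeout: 600
  server:
    type: OS::Nova::Server
    properties:
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            wc_notify: { get_attr: [wait_handle, curl_cli] }
          template: |
            #!/bin/bash
            set -e
            # On any failing command, report FAILURE (and the failing command) back to Heat
            trap 'wc_notify --data-binary "{\"status\": \"FAILURE\", \"reason\": \"$BASH_COMMAND failed\"}"' ERR
            # ... the configuration steps from the original template go here ...
            # Everything succeeded: tell Heat so the stack can complete
            wc_notify --data-binary '{"status": "SUCCESS"}'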
Here is what I have specified in my docker-compose.yml for Logstash. I've tried multiple variations with quotes, without quotes, etc.:
volumes:
  - "./logstash:/etc/logstash/conf:ro"
command:
  - "logstash -f /etc/logstash/conf/simplels.conf"
And simplels.conf contains this:
input {
  stdin{}
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout{}
}
The overall file structure is shown below. I'm running docker-compose up from the docker folder and getting Exit 1 on the Logstash container due to my command parameter:
/docker
  docker-compose.yml
  /logstash
    simplels.conf
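For what it's worth, when command is given as a YAML list, each list entry becomes one argv element, so a single entry containing spaces makes Docker look for an executable literally named "logstash -f /etc/logstash/conf/simplels.conf". A sketch of the two usual spellings, using the paths from the question:
command: ["logstash", "-f", "/etc/logstash/conf/simplels.conf"]
# or, as a plain string that Compose splits for you:
command: logstash -f /etc/logstash/conf/simplels.conf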
I want to execute a command on a remote machine using Ansible 2.9.10. First I tried like this:
ansible kubernetes-root -m command -a "cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors":[
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}"
Obviously it is not working, so I read this guide and tried like this:
- hosts: kubernetes-root
  remote_user: root
  tasks:
    - name: add docker config
      shell: >
        cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}
and execute it like this:
[dolphin#MiWiFi-R4CM-srv playboook]$ ansible-playbook add-docker-config.yaml
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
could not find expected ':'
The error appears to be in '/home/dolphin/source-share/source/dolphin/dolphin-scripts/ansible/playboook/add-docker-config.yaml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
cat > /etc/docker/daemon.json <<EOF
{
^ here
Is there any way to achieve this? How can I fix it?
Your playbook should work fine; you just have to add some indentation after the shell: line, and change the > to |.
Here is the updated playbook:
---
- name: play name
  hosts: dell420
  gather_facts: false
  vars:
  tasks:
    - name: run shell task
      shell: |
        cat > /tmp/temp.file << EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2",
          "registry-mirrors":[
            "https://kfwkfulq.mirror.aliyuncs.com",
            "https://2lqq34jg.mirror.aliyuncs.com",
            "https://pee6w651.mirror.aliyuncs.com",
            "http://hub-mirror.c.163.com",
            "https://docker.mirrors.ustc.edu.cn",
            "https://registry.docker-cn.com"
          ]
        }
        EOF
I'm not sure what is wrong with the ad-hoc command; I tried a few things but didn't manage to make it work.
Hope this helps.
EDIT:
As pointed out by Zeitounator, the ad-hoc command will work if you use the shell module instead of command. Example:
ansible -i hosts dell420 -m shell -a 'cat > /tmp/temp.file <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors":[
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
'
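As an aside, when the goal is just to drop a fixed file on the remote host, the copy module with inline content avoids the heredoc quoting entirely. This is a sketch of an equivalent task (same path and JSON as above; dest and content are standard copy options):
- name: add docker config
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "100m"
        },
        "storage-driver": "overlay2",
        "registry-mirrors": [
          "https://kfwkfulq.mirror.aliyuncs.com",
          "https://2lqq34jg.mirror.aliyuncs.com",
          "https://pee6w651.mirror.aliyuncs.com",
          "http://hub-mirror.c.163.com",
          "https://docker.mirrors.ustc.edu.cn",
          "https://registry.docker-cn.com"
        ]
      }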
I use Ansible for Fortinet firewall automation, but I get the error "Error reading running config". I already followed this issue: https://github.com/ansible/ansible/issues/33392
But I did not find any solution. Please tell me what I am doing wrong.
Ansible version: 2.7.0
Python version: 2.7.5
Fortinet: 60E
FortiOS version: 6.0.2
Here is what I am trying:
FortiOS.yml playbook:
---
- name: FortiOS Firewall Settings
  hosts: fortiFW
  connection: local
  vars_files:
    - /etc/ansible/vars/FortiOS_Settings_vars.yml
  tasks:
    - name: Backup current config
      fortios_config:
        host: 192.168.1.99
        username: admin
        password: Password#123
        backup: yes
        backup_path: /etc/ansible/forti_backup
Here is the error I get:
ok: [192.168.1.99] META: ran handlers Read vars_file
'/etc/ansible/vars/FortiOS_Settings_vars.yml'
TASK [Backup current config]
**************************************************************************************************************************************************************************************************************** task path: /etc/ansible/FortiOS_Settings_test.yml:8 <192.168.1.99>
ESTABLISH LOCAL CONNECTION FOR USER: root <192.168.1.99> EXEC /bin/sh
-c 'echo ~root && sleep 0' <192.168.1.99> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226" && echo
ansible-tmp-1539674386.05-16470854685226="echo
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226" ) &&
sleep 0' Using module file
/usr/lib/python2.7/site-packages/ansible/modules/network/fortios/fortios_config.py
<192.168.1.99> PUT
/root/.ansible/tmp/ansible-local-6154Uq5Dmw/tmpt6JukB TO
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py
<192.168.1.99> EXEC /bin/sh -c 'chmod u+x
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py
&& sleep 0' <192.168.1.99> EXEC /bin/sh -c '/usr/bin/python
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py
&& sleep 0' <192.168.1.99> EXEC /bin/sh -c 'rm -f -r
/root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/ >
/dev/null 2>&1 && sleep 0' The full traceback is: WARNING: The below
traceback may not be related to the actual failure. File
"/tmp/ansible_fortios_config_payload_b6IQmy/main.py", line 132, in
main
f.load_config(path=module.params['filter']) File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 212, in
load_config
config_text = self.execute_command(command) File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 154, in
execute_command
output = output + self._read_wrapper(o) File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 120, in
_read_wrapper
return py23_compat.text_type(data)
fatal: [192.168.1.99]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "backup": true,
            "backup_filename": null,
            "backup_path": "/etc/ansible/forti_backup",
            "config_file": null,
            "file_mode": false,
            "filter": "",
            "host": "192.168.1.99",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "src": null,
            "timeout": 60,
            "username": "admin",
            "vdom": null
        }
    },
    "msg": "Error reading running config"
}
When working with this module, I had the same issue. I looked into the source code of the module and found that this error occurs when filter is set to "" (an empty string). You can get facts about the device by changing filter to something like "firewall address", but then you will only get back the options from exactly that section, as if you had typed "show firewall address" on the CLI of the device.
I'm currently working on a solution to use Ansible for FortiGate automation, but it's not looking good. E.g. FortiGates additionally do not support Netconf, so you can't use Netconf to send commands to the device.
So you're not doing anything wrong; the module is either not optimized, or the full configuration is simply too big for it to read, so you have to use the filter option to shrink it.
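For example, restricting the backup task from the question to one section with filter looked roughly like this when I tried it (the value "firewall address" is just an example section; behaviour may differ between FortiOS versions):
- name: Backup firewall address section only
  fortios_config:
    host: 192.168.1.99
    username: admin
    password: Password#123
    filter: "firewall address"
    backup: yes
    backup_path: /etc/ansible/forti_backup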