I am using Elixir's Porcelain to invoke a shell script; in it I have a command like:
#!/usr/bin/env bash
aws s3 sync frontend/dist s3://$S3_BUCKET --delete
echo
Now, if the command fails (because of a wrong bucket name) it displays:
fatal error: An error occurred (InvalidBucketName) when calling the
ListObjects operation: The specified bucket is not valid.
But it doesn't return this "fatal error" message back to Porcelain. How can I echo this error back?
Edit:
Porcelain code:
Porcelain.shell(". #{Path.join(:code.priv_dir(:hub), "scripts/copy_site_to_s3.sh")}")
I know a possible solution would be to use exec instead of shell, but this is more of an example: I have a couple of slightly more complicated but similar shell scripts facing the same problem.
Another script/example (I am testing failures):
Invoking with:
result = Porcelain.shell(
  ". #{Path.join(:code.priv_dir(:hub), "scripts/git_clone_pull.sh")} #{github}"
)
IO.inspect result
Script:
if cd frontend; then git reset --hard && git pull; else git clone $1 frontend; fi
It properly fails with:
fatal: Authentication failed for
'https://github.com/x/frontend.git/'
But the Porcelain result fails to capture the message:
%Porcelain.Result{err: nil, out: "", status: 128}
If you check the documentation for the Porcelain.{exec,shell}/3 options, you'll see:
:err — specify the way stderr will be passed back to Elixir.
Possible values are the same as for :out. In addition, it accepts the atom :out which denotes redirecting stderr to stdout.
Caveat: when using Porcelain.Driver.Basic, the only supported values are nil (stderr will be printed to the terminal) and :out.
Emphasis is mine. That caveat can easily be demonstrated in a less cumbersome environment, without involving AWS or any other third parties:
iex|1 ▶ Porcelain.shell("ls --gg", err: {:append, "error.log"})
#⇒ ls: unrecognized option '--gg'
# Try 'ls --help' for more information.
# %Porcelain.Result{err: {:append, "error.log"}, out: "", status: 2}
iex|2 ▶ ls "error.log"
# [ERROR] No such file or directory error.log
But we still have the :out option!
iex|3 ▶ Porcelain.shell(">&2 echo 'error'", err: :out)
%Porcelain.Result{err: :out, out: "error\n", status: 0}
iex|4 ▶ Porcelain.shell("ls --gg", err: :out)
%Porcelain.Result{
err: :out,
out: "ls: unrecognized option '--gg'\nTry 'ls --help' for more information.\n",
status: 2
}
Luckily, even the Basic driver can redirect :err to :out. That said, you have two options:
use the err: :out option, pattern match on status > 0 and examine standard output (see the sketch below), or
use the Porcelain.Driver.Goon driver and handle your stderr stream properly.
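A minimal sketch of the first option, reusing the copy_site_to_s3.sh invocation from above (the {:ok, ...}/{:error, ...} return shapes are just illustrative):

script = Path.join(:code.priv_dir(:hub), "scripts/copy_site_to_s3.sh")

case Porcelain.shell(". #{script}", err: :out) do
  %Porcelain.Result{status: 0, out: out} ->
    {:ok, out}

  %Porcelain.Result{status: status, out: out} ->
    # stderr is merged into `out`, so the "fatal error" text ends up here
    {:error, status, out}
end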
One of the actions for fail2ban is configured to run a Ruby script; however, fail2ban fails when trying to execute the Ruby script with a "Command not found" error. I don't understand this error, because I'm providing the full path to the Ruby script and it has execute permissions:
Here's my fail2ban action:
[root:a17924e746f0:~]# cat /etc/fail2ban/action.d/404.conf
# Fail2Ban action configuration file for Subzero/Core
[Definition]
actionstart =
actionstop =
actioncheck =
actionban = /root/ban_modify.rb ban <ip>
actionunban = /root/ban_modify.rb unban <ip>
Here are the contents of the /root/ban_modify.rb script:
#!/usr/bin/env ruby
command = ARGV[0]
ip_address = ARGV[1]
blacklist = File.open("/root/blacklist.txt").read.split("\n")
if command == "unban"
  if blacklist.include? "#{ip_address} deny"
    blacklist.delete "#{ip_address} deny"
  end
elsif command == "ban"
  blacklist << "#{ip_address} deny"
end
File.open("/root/blacklist.txt", "w") {|f| f.write(blacklist.join("\n"))}
Very simple. This blacklist.txt file is used by Apache to permanently ban individuals from the web server when a fail2ban condition is met.
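For reference, a quick manual sanity check of the script outside of fail2ban (the test IP address is just an example) looks like this:

# Run the action script by hand with a test IP, then confirm the entry was written.
sudo /root/ban_modify.rb ban 192.0.2.1
grep '192.0.2.1 deny' /root/blacklist.txt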
However, when I issue the following command: sudo /usr/bin/fail2ban-client set 404 unbanip <my ip>
I get the following error:
2019-08-19 20:56:43,508 fail2ban.utils [16176]: Level 39 7ff7395873f0 -- exec: ban_modify.rb ban <myip>
2019-08-19 20:56:43,509 fail2ban.utils [16176]: ERROR 7ff7395873f0 -- stderr: '/bin/sh: 1: ban_modify.rb: not found'
2019-08-19 20:56:43,509 fail2ban.utils [16176]: ERROR 7ff7395873f0 -- returned 127
2019-08-19 20:56:43,509 fail2ban.utils [16176]: INFO HINT on 127: "Command not found". Make sure that all commands in 'ban_modify.rb ban <myip>' are in the PATH of fail2ban-server process (grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). You may want to start "fail2ban-server -f" separately, initiate it with "fail2ban-client reload" in another shell session and observe if additional informative error messages appear in the terminals.
2019-08-19 20:56:43,509 fail2ban.actions [16176]: ERROR Failed to execute ban jail '404' action '404' info 'ActionInfo({'ip': '<myip>', 'family': 'inet4', 'ip-rev': '<myip>.', 'ip-host': '<myip>', 'fid': '<myip>', 'failures': 1, 'time': 1566266203.3465006, 'matches': '', 'restored': 0, 'F-*': {'matches': [], 'failures': 1}, 'ipmatches': '', 'ipjailmatches': '', 'ipfailures': 1, 'ipjailfailures': 1})': Error banning <myip>
I'm not sure why this error is happening if the actionban is pointing to the full path of a Ruby script.
I even tried changing the contents of /root/ban_modify.rb to simply puts "Hello World". I also tried changing the banaction to iptables-allports, and that still failed. It seems like banaction simply doesn't work.
You can enable fail2ban debug mode and check the fail2ban log for more details.
# change fail2ban log level
sudo nano /etc/fail2ban/fail2ban.conf
loglevel = DEBUG
# restart fail2ban
sudo systemctl restart fail2ban
# check logs
tail -f /var/log/fail2ban.log
You can then restart fail2ban and check the log again:
sudo systemctl restart fail2ban
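The hint in the log above also suggests checking the PATH that the fail2ban-server process actually sees. A sketch of a readable variant of the command from that hint:

# Print the PATH seen by the running fail2ban-server process.
tr '\0' '\n' < /proc/"$(pidof -x fail2ban-server)"/environ | grep '^PATH='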
I am trying to troubleshoot an old Tcl accounting script called GOTS - Grant Of The System. What it does is create a time-stamped logfile entry for each user login and another for the logout. The problem is that it is not creating the second log file entry on logout. I think I have tracked down the area where it is going wrong and I have attached it here. FYI, the log file exists and the script does not exit with the error "GOTS was called incorrectly!!". It should be executing the if branch for [string match "$argv" "end_session"].
This software runs properly on RHEL 6.9 but fails as described on CentOS 7. I am thinking that there is a system variable or a difference in the $argv argument vector between the two systems that creates this behavior.
Am I correct in suspecting $argv, and if not, does anyone see the true problem?
How do I print or display the $argv values on logout?
# Find out if we're beginning or ending a session
if { [string match "$argv" "end_session"] } {
    if { ![file writable $Log] } {
        onErrorNotify "4 LOG"
    }
    set ifd [open $Log a]
    puts $ifd "[clock format [clock seconds]]\t$Instrument\t$LogName\t$GroupName"
    close $ifd
    unset ifd
    exit 0
} elseif { [string match "$argv" "begin_session"] == 0 } {
    puts stderr "GOTS was called incorrectly!!"
    exit -1
}
The end_session argument is passed in by the /etc/gdm/PostSession/Default file:
#!/bin/sh
### Begin GOTS PostSession
# Do not run GOTS if root is logging out
if test "${USER}" == "root" ; then
exit 0
fi
/usr/local/lib/GOTS/gots end_session > /var/tmp/gots_postsession.log 2> /var/tmp/gots_postsession.log
exit 0
### End GOTS PostSession
This is the postsession log file:
Application initialization failed: couldn't connect to display ":1"
Error in startup script: invalid command name "option"
while executing
"option add *Font "-adobe-new century schoolbook-medium-r-*-*-*-140-*-*-*-*-*-*""
(file "/usr/local/lib/GOTS/gots" line 26)
After a lot of troubleshooting, we have determined that for whatever reason CentOS is not allowing part of the /etc/gdm/PostSession/Default file to execute:
fi
/usr/local/lib/GOTS/gots end_session
But it does update the PostSession.log file as it should. Does anyone have any idea what could be interfering with only part of PostSession/Default?
Could it be that you are hitting Bug 851769?
That said, am I correct in stating that, as your investigation shows, this is not a Tcl-related issue or question anymore?
So it turns out that our script has certain elements that depend upon the X server running on logout to display some of the GUI error messages. This is from:
Gnome Configuration
"When a user terminates their session, GDM will run the PostSession script. Note that the Xserver will have been stopped by the time this script is run, so it should not be accessed.
Note that the PostSession script will be run even when the display fails to respond due to an I/O error or similar. Thus, there is no guarantee that X applications will work during script execution."
We are having to rewrite those error message callouts so they simply write the errors to a file instead of depending on the display. The errors are for things that should be there in the beginning anyway.
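A minimal sketch of such a file-based callout (the proc name onErrorNotify comes from the script above; the log path is just an example, not part of the original script):

# Hypothetical replacement for a GUI error popup: append the message to a
# log file instead, since no X display is available in PostSession.
proc onErrorNotify {msg} {
    set efd [open /var/tmp/gots_errors.log a]
    puts $efd "[clock format [clock seconds]]\tGOTS error: $msg"
    close $efd
}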
The problem
SNMPD is correctly delegating SNMP polling requests to another program, but the response from that program is not valid. A manual run of the program with the same arguments responds correctly.
The detail
I've installed the correct LSI raid drivers on a server and want to configure SNMP. As per the instructions, I've added the following to /etc/snmp/snmpd.conf to redirect SNMP polling requests with a given OID prefix to a program:
pass .1.3.6.1.4.1.3582 /usr/sbin/lsi_mrdsnmpmain
It doesn't work correctly for SNMP polling requests:
snmpget -v1 -c public localhost .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.1
I get the following response:
Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
Failed object: SNMPv2-SMI::enterprises.3582.5.1.4.2.1.2.1.32.1
What I've tried
SNMPD passes two arguments, -g and <oid>, and expects a three-line response: <oid>, <data-type> and <data-value>.
If I manually run the following:
/usr/sbin/lsi_mrdsnmpmain -g .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0
I get the correct three-line response:
.1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0
integer
30
This means that the pass command is working correctly and the /usr/sbin/lsi_mrdsnmpmain program is working correctly in this example.
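For reference, a minimal pass handler honouring this protocol might look like the sketch below (the hard-coded integer value is only there to illustrate the three-line format):

#!/bin/bash
# Minimal snmpd "pass" handler sketch: answer -g <oid> requests with the
# three-line <oid> / <data-type> / <data-value> format.
if [ "$1" = "-g" ]; then
    echo "$2"
    echo "integer"
    echo "30"
fi
# For anything else (e.g. -n GETNEXT requests), print nothing and snmpd
# will report the OID as missing.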
I tried replacing /usr/sbin/lsi_mrdsnmpmain with a bash script. The bash script delegates the call and logs the supplied arguments and output from the delegated call:
#!/bin/bash
echo "In: '$@" > /var/log/snmp-pass-test
RETURN=$(/usr/sbin/lsi_mrdsnmpmain "$@")
echo "$RETURN"
echo "Out: '$RETURN'" >> /var/log/snmp-pass-test
I then modified the pass command to redirect to the bash script. If I run the bash script manually (/usr/sbin/snmp-pass-test -g .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0), I get the correct three-line response, as I did when I ran /usr/sbin/lsi_mrdsnmpmain manually, and I get the following logged:
In: '-g .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0
Out: '.1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0
integer
30'
When I rerun the snmpget test, I get the same Error in packet... error, and the bash script's logging shows that the captured output of the delegated call is empty:
In: '-g .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.0
Out: ''
If I modify the bash script to only echo an empty line I also get the same Error in packet... message.
I've also tried ensuring that the environment variables that are present when I manually call /usr/sbin/lsi_mrdsnmpmain are the same for the bash script, but I get the same empty output.
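One way to capture the environment that snmpd actually gives the wrapper, so it can be diffed against an interactive shell, is to dump it from inside the script (the log path is just an example):

# Added inside the wrapper script: record the environment snmpd provides.
env | sort > /var/log/snmp-pass-env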
Finally, my questions
Why would the bash script behave differently in these two scenarios?
Is it likely that the problem that exists with the bash script is the same as originally noticed (manually running the program gives different output than when SNMPD runs it)?
Updates
eewanco's suggestions
What user is running the program in each scenario?
I added echo "$(whoami)" > /var/log/snmp-pass-test to the bash script, and root was added to the logs
Maybe try executing it in cron
I added the following to root's crontab, and the correct three-line response was logged:
* * * * * /usr/sbin/lsi_mrdsnmpmain -g .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.1 >> /var/log/snmp-test-cron 2>&1
Grisha Levit's suggestion
Try logging the stderr
There aren't any errors logged
Checking /var/log/messages
When I run it via SNMPD, I get MegaRAID SNMP AGENT: Error in getting Shared Memory(lsi_mrdsnmpmain) logged. When I run it directly, I don't. I've done a bit of googling and I may need lm_sensors installed; I'll try this.
I installed lm_sensors and compat-libstdc++-33.i686 (the latter because the instructions said it was a prerequisite and I was missing it), uninstalled and reinstalled the LSI drivers, and am experiencing the same issue.
SELinux
I accidentally stumbled upon a page about extending snmpd with scripts, and it says to check that the script has the right SELinux context. I ran grep AVC /var/log/audit/audit.log | grep snmp before and after running an snmpget, and the following entry is added as a direct result of running snmpget:
type=AVC msg=audit(1485967641.075:271): avc: denied { unix_read unix_write } for pid=5552 comm="lsi_mrdsnmpmain" key=558265 scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=shm
I'm now assuming that SELinux is causing the call to fail; I'll dig further... see the answer below for the solution.
strace (eewanco's suggestion)
Try using strace with and without snmp and see if you can catch a system call failure or some additional hints
For completeness, I wanted to see if strace would have hinted that SELinux was denying the call. I had to remove the policy packages using semodule -r <policy-package-name> to reintroduce the problem, then ran the following:
strace snmpget -v1 -c public localhost .1.3.6.1.4.1.3582.5.1.4.2.1.2.1.32.1 >> strace.log 2>&1
The end of strace.log is as follows, and unless I'm missing something, it doesn't seem to provide any hints:
...
sendmsg(3, {msg_name(16)={sa_family=AF_INET, sin_port=htons(161), sin_addr=inet_addr("127.0.0.1")}, msg_iov(1)= [{"0;\2\1\0\4\20public\240$\2\4I\264-m\2"..., 61}], msg_controllen=32, {cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=, ...}, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 61
select(4, [3], NULL, NULL, {0, 999997}) = 1 (in [3], left {0, 998475})
brk(0xab9000) = 0xab9000
recvmsg(3, {msg_name(16)={sa_family=AF_INET, sin_port=htons(161), sin_addr=inet_addr("127.0.0.1")}, msg_iov(1)= [{"0;\2\1\0\4\20public\242$\2\4I\264-m\2"..., 65536}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT) = 61
write(2, "Error in packet\nReason: (noSuchN"..., 81Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
) = 81
write(2, "Failed object: ", 15Failed object: ) = 15
write(2, "SNMPv2-SMI::enterprises.3582.5.1"..., 48SNMPv2- SMI::enterprises.3582.5.1.4.2.1.2.1.32.1
) = 48
write(2, "\n", 1
) = 1
brk(0xaa9000) = 0xaa9000
close(3) = 0
exit_group(2) = ?
+++ exited with 2 +++
It was SELinux that was denying snmpd a delegated call to /usr/sbin/lsi_mrdsnmpmain (and probably beyond).
To identify it, I ran grep AVC /var/log/audit/audit.log and for each entry, I ran the following:
echo "<grepped-output>" | audit2allow -a -M <filename>
This creates a SELinux policy package that should allow the delegated call through. The package is then loaded using the following:
semodule -i <filename>.pp
I had to do this 5 times as there were different causes of denial (unix_read unix_write, associate, read write). I'll look to combine the modules into one.
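A sketch of combining them into a single module (the module name is just an example; note that without -a, audit2allow reads the AVC messages from stdin):

# Build one policy module covering all snmp-related denials, then load it.
grep AVC /var/log/audit/audit.log | grep snmp | audit2allow -M snmp-lsi-pass
semodule -i snmp-lsi-pass.pp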
Now when I run snmpget I get the correct delegated output:
SNMPv2-SMI::enterprises.3582.5.1.4.2.1.2.1.32.1 = INTEGER: 34
I am trying to flush the scripts using the SCRIPT FLUSH command, running the code like this:
c.Send("SCRIPT FLUSH")
c.Flush()
spew.Dump(c.Receive())
But I get this output:
(interface {}) <nil>
(redis.Error) (len=33) ERR unknown command 'SCRIPT FLUSH'
When I run the command from the command line, I get an OK response.
How can I solve this problem?
Use two arguments:
c.Send("SCRIPT", "FLUSH")
c.Flush()
spew.Dump(c.Receive())
Also, use Do instead of the Send/Flush/Receive calls:
spew.Dump(c.Do("SCRIPT", "FLUSH"))
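A self-contained sketch of the Do approach (the address is an example, and the import path is the current gomodule one; older projects may use github.com/garyburd/redigo/redis instead):

package main

import (
    "fmt"
    "log"

    "github.com/gomodule/redigo/redis"
)

func main() {
    // Connect to a local Redis server (address is an example).
    c, err := redis.Dial("tcp", "localhost:6379")
    if err != nil {
        log.Fatal(err)
    }
    defer c.Close()

    // SCRIPT FLUSH must be passed as two arguments: the command
    // name and its subcommand.
    reply, err := redis.String(c.Do("SCRIPT", "FLUSH"))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(reply) // prints "OK"
}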
I'm starting to learn how to script bash and I've run into a problem with the echo command and a variable.
#!/bin/bash
LOGINOUTPUT = "`wget --no-check-certificate --post-data 'login=redacted&password=redacted' https://nessusserver:8834/login -O -`"
echo $LOGINOUTPUT
Running this script returns the following:
--2013-08-15 15:07:32-- https://nessusserver:8834/login
Resolving nessusserver (nessusserver)... 172.23.80.88
Connecting to nessusserver (nessusserver)|172.23.80.88|:8834... connected.
WARNING: cannot verify nessusserver's certificate, issued by ‘/C=FR/ST=none/L=Paris/O=Nessus Users United/OU=Certification Authority for nessusserver.healthds.com/CN=nessusserver.healthds.com/emailAddress=ca#besecmisc1.healthds.com’:
Unable to locally verify the issuer's authority.
WARNING: certificate common name ‘nessusserver.healthds.com’ doesn't match requested host name ‘nessusserver’.
HTTP request sent, awaiting response... 200 OK
Length: 461 [text/xml]
Saving to: ‘STDOUT’
100%[=============================================================================================================================================================>] 461 --.-K/s in 0s
2013-08-15 15:07:33 (90.4 MB/s) - written to stdout [461/461]
./nessus-output.sh: line 2: LOGINOUTPUT: command not found
Why does it think that LOGINOUTPUT is a command? Thanks in advance for any help!
EDIT: Updated script
#!/bin/bash
LOGINOUTPUT=$(wget --no-check-certificate --post-data 'login=redacted&password=redacted' https://nessusserver:8834/login -O -)
echo $LOGINOUTPUT
Still yields the same error; the same happens if I use backticks instead of $(...).
This happens because you have spaces before and after the = in the variable assignment. The correct assignment is:
LOGINOUTPUT="....
with no spaces.
If you add spaces, then the shell interprets LOGINOUTPUT as a command and tries to pass it two arguments: the "=" and the quoted string. This of course fails with the error LOGINOUTPUT: command not found.
As a side note, it is better to use the $(command) syntax than backticks when doing command substitution.
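A corrected sketch of the full script (credentials and server kept as placeholders; the 2>/dev/null is optional and just hides wget's progress output, which goes to stderr and is not captured anyway):

#!/bin/bash
# Command substitution with no spaces around "=".
LOGINOUTPUT="$(wget --no-check-certificate \
    --post-data 'login=redacted&password=redacted' \
    https://nessusserver:8834/login -O - 2>/dev/null)"

# Quote the variable when echoing so the captured XML is not word-split.
echo "$LOGINOUTPUT"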