Failed to send SMS with Gammu - gammu

As you can tell, I have a problem with Gammu. I'm using Gammu to send SMS in conjunction with PRTG Network Monitor, which monitors networking devices (i.e. servers, switches, routers, firewalls). Whenever one of the important services goes down, PRTG executes some batch files I have created, which call Gammu to send SMS.
But whenever one or more services go down, some of the batch files don't get executed at all and I receive the following error:
Error 102 Error opening device. Unknown, busy or no permissions.
I have seen many topics related to this problem, but I couldn't solve my problem through any of them.
I have configured logging for Gammu, so that everything that happens gets logged. Two errors are showing up in these logs:
[System error - CreateFile in serial_open, 32, "The process cannot access the file because it is being used by another process.
"]
[System error - SetCommState in serial_close, 6, "The handle is invalid. "]
I have a question about the first error, "The process cannot access the file because it is being used by another process." Which file is it talking about: Gammu or my batch file?
Here is an example of how I set up my batch files:
cd /d "C:\Program Files (x86)\Gammu\bin"
Rem User1
gammu sendsms TEXT 0123456789 -text "%*" -report
Rem User2
gammu sendsms TEXT 0123456789 -text "%*" -report
I tried adding a timeout before executing gammu sendsms, but it didn't help at all. Do you have any suggestions on how I can make the execution wait until other processes are finished, so that I don't get that error?
Here is the Gammu config:
[gammu]
device = com4:
connection = at115200
logfile = gammulog
logformat = errors
[gammu1]
device = com3:
connection = at115200
logfile = gammulog
logformat = errors
Thank you in advance.

This looks like one process is locking the whole system which sends the SMS (the serial port?).
In that case you need to rewrite your script so that it sends those messages sequentially rather than in parallel. Using a queue is the way to go.
One possibility is to let PRTG execute a batch (or Python) script which adds your message to a queue. Another process, running in a loop, would fetch new messages from the queue and process (send) them. This way you would never execute Gammu more than once at a time. The disadvantage of this solution is that if you have many alerts, it will take longer to send them all, since Gammu doesn't support parallel processing.
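The queue approach can be sketched in Python. This is a minimal illustration, not production code: the spool directory name and the message file format are my own choices, and the Gammu command line is taken from the batch files above.

```python
import subprocess
import time
from pathlib import Path

QUEUE_DIR = Path("sms-queue")  # hypothetical spool directory

def enqueue(number: str, text: str) -> Path:
    """Called from the PRTG notification: drop the message into the queue."""
    QUEUE_DIR.mkdir(exist_ok=True)
    # A timestamped name keeps messages ordered and unique.
    path = QUEUE_DIR / f"{time.time_ns()}.msg"
    path.write_text(f"{number}\n{text}", encoding="utf-8")
    return path

def send_with_gammu(number: str, text: str) -> None:
    """The single place that talks to the modem; never runs concurrently."""
    subprocess.run(
        ["gammu", "sendsms", "TEXT", number, "-text", text, "-report"],
        check=True,
    )

def drain_queue(send=send_with_gammu) -> int:
    """Process queued messages one at a time; returns how many were sent."""
    sent = 0
    for path in sorted(QUEUE_DIR.glob("*.msg")):
        number, _, text = path.read_text(encoding="utf-8").partition("\n")
        send(number, text)
        path.unlink()  # remove only after a successful send
        sent += 1
    return sent
```

The PRTG notification would call enqueue(), while a single long-running process (or a scheduled task) calls drain_queue() in a loop, so only one Gammu invocation ever touches the serial port at a time.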
Another option is to use an internet SMS gateway, which would allow you to send many SMS at a time.


How to test SMTP with automated telnet from a bash script?

I want to make a bash script that automatically connects to SMTP on port 25 and sends an email, then asserts that the email is queued.
This is the code I have so far:
https://github.com/kristijorgji/docker-mailserver/blob/main/tests/smpt.bash
but it is not working; I always get:
improper command pipelining after DATA from unknown[172.21.0.1]: subject: Test 2022-07-22_17_10_09\r\nA nice test\r\n\r\n
That might also be OK, but please double-check my script and give suggestions for improvements, or fixes if something is wrong.
How can I automate this process so I can re-run the script after every configuration change?
I want to know the best practices.
The PIPELINING extension to ESMTP, which is specified in RFC 2920, says:
The EHLO, DATA, VRFY, EXPN, TURN, QUIT, and NOOP commands can only appear as the last command in a group since their success or failure produces a change of state which the client SMTP must accommodate.
This means that you can remove the sleep commands after mail from and rcpt to, but you should introduce a sleep after echo "data" before continuing with the message which is to be delivered. (Just to be clear: You need to wait for the server's response; sleep is just a hack to achieve that.)
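In other words, rather than sleeping for a fixed time, read the server's reply and only continue once it is complete. A minimal Python sketch of the reply-reading logic (the helper name and the line-reader callback are my own; per the SMTP spec, continuation lines of a multiline reply have a hyphen after the 3-digit code and the final line has a space):

```python
def read_reply(recv_line):
    """Read one full SMTP reply and return (code, lines).

    `recv_line` is a callable returning the next line from the server
    without the trailing CRLF.  Multiline replies look like '250-...'
    and end with a line like '250 ...' (space after the code).
    """
    lines = []
    while True:
        line = recv_line()
        lines.append(line)
        # The reply is complete when the 4th character is a space
        # (or the line is just the bare 3-digit code).
        if len(line) < 4 or line[3] == " ":
            return int(line[:3]), lines
```

With a real socket you would pass something like lambda: sock_file.readline().rstrip("\r\n"), where sock_file = sock.makefile("r"), and only send DATA's message body after read_reply() returns code 354.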

Using fetchmail for one time email extraction from gmail

I'm trying to use fetchmail in terminal to extract e-mails from my gmail account.
I configured my ~/.fetchmailrc with:
poll imap.gmail.com protocol POP3
user "someuser#gmail.com" is oren here
password 'verysecretpassword'
(Of course with real username+password).
Then I tried to naively extract emails with: $ fetchmail.
Sadly nothing happened, and all I got was:
fetchmail: 6.3.26 querying imap.gmail.com (protocol POP3) at Mon 03 Feb 2020 14:34:46 IST: poll started
Trying to connect to <ADDRESS> ... connection failed.
Using Fetchmail
It looks like the config polls an IMAP server but specifies the POP3 protocol. Try something like this for the ~/.fetchmailrc file:
set postmaster "local_user"
set daemon 600
poll pop.gmail.com with proto POP3
user 'gmail_user_name' there with password 'app_password' is local_user here options ssl fetchlimit 400
where:
local_user is some local account where undeliverable mail should go (the "last ditch effort" before failing permanently).
gmail_user_name is everything to the left of the # in the email address.
app_password is a specially generated password that is restricted to the gmail application (go here: https://myaccount.google.com/ and click Security, then app passwords and generate a new app password)
What to do at this point will depend on your local setup. Fetchmail will... fetch mail (clearly).... and then deliver it to the local machine's delivery system. If you have sendmail (a pretty safe bet), this might work:
$ fetchmail -d0 -avNk -m "/usr/sbin/sendmail -i -f %F -- %T" pop.gmail.com
Mail should start flowing in. Messages can be read using the mail command or get the raw content from /var/mail/[username]. This might not get everything in one shot; it very likely won't if the address has accumulated even a small amount of history. Let it finish and check that it worked as expected. If everything looks good, then it's time to start fetchmail as a daemon process and let it download the entire mailbox. First, configure fetchmail with appropriate polling interval and batch size settings1.
Confirm that the polling interval is configured in the ~/.fetchmailrc by the line set daemon 600 (i.e. a 10-minute polling interval).
Confirm that the polling option fetchlimit 400 is set in the ~/.fetchmailrc under the options section in the poll pop.gmail.com stanza. This is the maximum number of messages to fetch per poll.
Start fetchmail using the same command as above, but omit the -d0 switch.
Fetchmail should start as a true daemon and continue to periodically download batches of messages until the whole mailbox is downloaded. You will need to remember to kill the daemon process if you don't want it to continue syncing until the next reboot.
Using Google Takeout
You can do this super easy using Google Takeout. Log in, click the "deselect" option at the top of the list, then scroll down to Mail and check just that. You can choose to get the data in a .zip or .tgz file. They will send you an email when the archive is ready for download. It is packaged in an mbox file but that is pretty straightforward to convert to other formats.
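For instance, the Takeout mbox can be split into individual .eml messages with Python's standard mailbox module. A sketch, with placeholder paths:

```python
import mailbox
from pathlib import Path

def export_eml(mbox_path: str, out_dir: str) -> int:
    """Split an mbox archive into individual .eml files; returns the count."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    box = mailbox.mbox(mbox_path)
    for i, msg in enumerate(box):
        # Each message is written out as raw RFC 822 bytes.
        (out / f"{i:05d}.eml").write_bytes(msg.as_bytes())
    return len(box)
```

The resulting .eml files can be opened by most mail clients, or parsed further with the standard email package.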
This is probably the easiest way to accomplish a one time export, and I think they have an option to set up a recurring export too. It probably isn't offering as much control compared to using the developer API directly, but it's a lot less hassle.
[1]: I believe Google has some rate limiting in place, so I am adding some steps to accommodate those limits. These are conservative values, since I don't know exactly what the limits are (or even for sure if they exist). If you know more, or care to research it, adjust these values to whatever you think is best.

send messages to users that connected to current computer

I need to show a list of all users connected to the current computer and send each of them a message from the command line (I'm using a '*.bat' file).
(I presume using 'net send', as described at http://technet.microsoft.com/en-us/library/bb490710.aspx, but I need only the active users, as shown in Task Manager -> Users, where the Status column says Active.)
Thanks :)
This is ancient, but as AndrewMedico says, msg now gives you this functionality in many versions of Windows.
If the only reason you wanted the usernames was to send them individual messages, you can just use msg * <message> which will send every user logged into that PC the same message.
If you wanted the usernames for another purpose, you can get them from a command prompt by typing query user. You could do some grepping of the results to get a bare list of users if you need it (and don't wish to use other methods of getting these users, such as C#).
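If you'd rather not grep, the same filtering is easy to sketch in Python. Note the assumptions: this relies on the English output format of query user (which varies by locale) and simply looks for an Active token on each row, so it is a rough sketch rather than a robust parser:

```python
def active_users(query_output: str) -> list[str]:
    """Parse `query user` output and return usernames of Active sessions."""
    users = []
    for line in query_output.splitlines()[1:]:   # skip the header row
        tokens = line.split()
        if not tokens:
            continue
        username = tokens[0].lstrip(">")          # '>' marks your own session
        if "Active" in tokens:                    # value of the STATE column
            users.append(username)
    return users
```

You could feed it subprocess.check_output(["query", "user"], text=True) and then shell out to msg for each name returned.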

How to check the status of a CICS region through a JCL/Shell script(unix box)

I want to check the status of a mainframe CICS region (whether it is active or inactive) using JCL, or better still, a shell script on a Unix box. This script/JCL will be used to send a mail to a group saying that the region is active/inactive at a scheduled time.
Please help me with the PROC/utility to use in the case of JCL, or with an example shell script to achieve this.
Solution:
I executed the below command on the main screen:
TSO STATUS <job-name>
and it told me whether the job is running or not. I then executed the same TSO command in my job and captured the output into a dataset.
Just thinking about this briefly, I see three alternatives, all of which involve use of SDSF (System Display and Search Facility):
SDSF in batch
SDSF from Rexx
SDSF from Java
Note that not all mainframe shops license SDSF, which is an IBM product. There exist ISV alternatives; I am aware of these but unfamiliar with them.
If this were being done in the shop where I work, I'd establish an SSH session with the mainframe and submit a batch job to execute the Rexx code described at the link. The batch job could check the status of the CICS region and send the email. My preference comes from having done all those things previously; I've just not put them together like this.
Your mainframe folks may have a ban on Rexx, or not allow SSH connections to their machine, or be unwilling to set up the Rexx interface to SDSF. They may feel similarly about Java.
There may be security implications: the logon ID and password will be in your script, yes? What will that ID be authorized to do? How will the script be secured? Will the password for the ID be required to expire periodically?
All of which is to say that you must work with (probably multiple) mainframe staff in order to make this process function correctly. None of these questions is designed to stop you from accomplishing your goal; your goal must be accomplished without compromising system security and integrity.
Some other things to think about...
Why are you checking to see if the CICS region is up? If it is because (for example) you will begin a batch process to send messages to the region if it is up and notify someone if it is down, then you are better served building that error handling into your batch process instead.
Mainframe shops typically have some automation software in place to notify people when a significant event occurs - bells ring, lights flash, pagers go off, emails are sent, etc. Maybe what you're trying to do is already being handled in a different manner.
A couple of other options:
You can also run the STATUS (ST) command: just run TSO in the background (IKJEFT01) and issue the STATUS command (see the Status command documentation) for the appropriate job names.
You could try running a CICS command from a batch job (see Batch CEMT New Copy). If it works, CICS is up; if it fails, CICS is not up. I would suggest an inquiry rather than a New Copy, though.
One method is to add a job step to the CICS region. Define this job step to ALWAYS run, or even better define it to run ONLY if the CICS job step fails.
In this job step invoke the program that sends e-mails to everyone in the interested parties group.
This way, when the CICS region abends everyone gets a notification, but if the job is brought down cleanly as part of regular operations, no one gets an e-mail.
Alternatively, you might want to always send out an e-mail when the region comes down regardless of the reasons.
This solution avoids having to poll the region to see if it is up. For example, what happens if the region name is changed? The scripts have to be changed! Or what if a second or third production region is added? Again, all the scripts have to be changed.
Finally, you could add a step zero to the region so that everyone gets notified when the region comes UP!!

Print jobs don't queue, pulled straight into printer memory - no way to see real status

I have been asked to develop a print monitoring utility which will receive a print job from a customer system, route it to a printer, and then trigger an update in the host system with a success status whenever the printer completes the job.
I found at least two ways to get print job status from a printer queue: use WMI to query Win32_PrintJob, or use the winspool API. Both methods worked just fine when I tried printing to a disconnected printer: I was able to get a list of jobs with statuses, waiting for the printer to become available.
Now I am trying to test the scenario where the printer is out of paper or jammed. Unfortunately, in this case the print jobs are removed from the print spooler queue and pulled into printer memory, waiting for the user to add paper or clear the jam. The print job is no longer in the queue but it hasn't been printed, so I can't really update the host system with a success status. I found a few articles about using PJL or printer-specific APIs to get that information from the printer itself, but I haven't succeeded with those. Is there any way to configure the Windows spooler queue to retain a job until it has actually been processed by the printer?
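For reference, the numeric status returned by the WMI and winspool routes is a bitmask of the JOB_STATUS_* flags from winspool.h. Decoding it makes states like PAPEROUT or USER_INTERVENTION visible, though whether they are ever reported depends on the printer driver. A small sketch of the decoding:

```python
# JOB_STATUS_* bit flags from the Windows spooler (winspool.h)
JOB_STATUS_FLAGS = {
    0x0001: "PAUSED",
    0x0002: "ERROR",
    0x0004: "DELETING",
    0x0008: "SPOOLING",
    0x0010: "PRINTING",
    0x0020: "OFFLINE",
    0x0040: "PAPEROUT",
    0x0080: "PRINTED",
    0x0100: "DELETED",
    0x0200: "BLOCKED_DEVQ",
    0x0400: "USER_INTERVENTION",
    0x0800: "RESTART",
}

def decode_job_status(mask: int) -> list[str]:
    """Turn a spooler job-status bitmask into a list of flag names."""
    return [name for bit, name in sorted(JOB_STATUS_FLAGS.items()) if mask & bit]
```

The mask would come from a WMI query of Win32_PrintJob or from the winspool job-info structures; "job printed" is only safe to report once PRINTED is set and the error bits are clear.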
Thank you!
Thank you, arx, for posting that MSDN link; that's exactly what I was suspecting. I guess the particular driver I was testing with didn't support that true status.
