AutoHotkey unable to kill a process - winapi

I have been facing some issues with svchost going out of control at times and making my system unstable. Mostly I just kill it manually, but I decided to write an AHK script to do that automatically every time it starts using too much memory.
#NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases.
#Warn ; Enable warnings to assist with detecting common errors.
#SingleInstance force
;--------------------------------------------------------------
; Variables
;--------------------------------------------------------------
minMemMB = 200
minCPUPercentage = 50
Loop
{
    for process in ComObjGet("winmgmts:").ExecQuery("Select IDProcess, PercentProcessorTime, WorkingSet from Win32_PerfFormattedData_PerfProc_Process where Name like '%svchost%'")
    {
        PID := process.IDProcess
        CPU := process.PercentProcessorTime
        MEM := Round(process.WorkingSet / 1000000)
        FormatTime, TIME
        if (CPU > minCPUPercentage or MEM > minMemMB)
        {
            Process, Close, %PID%
            Sleep, 2000
            if (ErrorLevel = PID) ; Process, Close sets ErrorLevel to the PID on success, 0 on failure
                FileAppend, Killed`, %PID%`, %CPU%`, %MEM%`, %TIME%`r`n, log.csv
            else
                FileAppend, Failed`, %PID%`, %CPU%`, %MEM%`, %TIME%`r`n, log.csv
        }
    }
}
My code works fine in identifying when svchost has exceeded the accepted amount of memory, but it fails to kill it. My log is full of entries like this:
Failed 624 0 1036 11:15 PM Wednesday May 13 2015
Failed 7408 68 65 12:36 AM Thursday May 14 2015
Failed 7408 92 121 12:37 AM Thursday May 14 2015
Failed 7408 80 142 12:39 AM Thursday May 14 2015
Failed 7408 55 176 12:39 AM Thursday May 14 2015
Failed 7408 99 149 12:46 AM Thursday May 14 2015
Failed 7408 80 150 12:53 AM Thursday May 14 2015
Can someone help me with this?
Should I use run + taskkill instead?
Or is there a WMI command I can use?
Thanks.

Killing svchost.exe (the service host process) is probably a bad idea. An instance of svchost usually hosts multiple services, and if you kill it, all the services running under it will stop.
You should instead try to find out which service is causing the system to become unstable, then find out which program that service belongs to, and then update or uninstall that program... or, in the worst case, stop the service.
You could also disable the service to keep it from starting automatically when Windows does.
I recommend Process Explorer from Sysinternals. Just hover over an svchost process and a tooltip will show you which services it currently hosts.
To disable or stop a service go here: Win+R -> services.msc -> Enter
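As a rough sketch of the "find out which services it hosts" step, the PID-to-services mapping can also be pulled from the output of "tasklist /svc /fo csv" and parsed. This is an illustration only; the sample string below stands in for a live call on Windows (e.g. via subprocess), and the service names in it are made up:

```python
import csv
import io

# Sample output of: tasklist /svc /fo csv /fi "imagename eq svchost.exe"
# (hardcoded here; on a live Windows box this would come from subprocess.run)
SAMPLE = '''"Image Name","PID","Services"
"svchost.exe","624","RpcSs,DcomLaunch"
"svchost.exe","7408","wuauserv"
'''

def services_by_pid(tasklist_csv):
    """Map each svchost PID to the list of services it currently hosts."""
    result = {}
    for row in csv.DictReader(io.StringIO(tasklist_csv)):
        result[int(row["PID"])] = row["Services"].split(",")
    return result

print(services_by_pid(SAMPLE))
```

With a mapping like this in hand, the offending service can be stopped individually instead of killing the whole svchost instance.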

Related

Memory builds up overtime on Kubernetes pod causing JVM unable to start

We are running a kubernetes environment and we have a pod that is encountering memory issues. The pod runs only a single container, and this container is responsible for running various utility jobs throughout the day.
The issue is that this pod's memory usage grows slowly over time. There is a 6 GB memory limit for this pod, and eventually, the memory consumption grows very close to 6GB.
A lot of our utility jobs are written in Java, and when the JVM spins up for them, they require -Xms256m in order to start. Yet, since the pod's memory is growing over time, eventually it gets to the point where there isn't 256MB free to start the JVM, and the Linux oom-killer kills the java process. Here is what I see from dmesg when this occurs:
[Thu Feb 18 17:43:13 2021] Memory cgroup stats for /kubepods/burstable/pod4f5d9d31-71c5-11eb-a98c-023a5ae8b224/921550be41cd797d9a32ed7673fb29ea8c48dc002a4df63638520fd7df7cf3f9: cache:8KB rss:119180KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:119132KB inactive_file:8KB active_file:0KB unevictable:4KB
[Thu Feb 18 17:43:13 2021] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[Thu Feb 18 17:43:13 2021] [ 5579] 0 5579 253 1 4 0 -998 pause
[Thu Feb 18 17:43:13 2021] [ 5737] 0 5737 3815 439 12 0 907 entrypoint.sh
[Thu Feb 18 17:43:13 2021] [13411] 0 13411 1952 155 9 0 907 tail
[Thu Feb 18 17:43:13 2021] [28363] 0 28363 3814 431 13 0 907 dataextract.sh
[Thu Feb 18 17:43:14 2021] [28401] 0 28401 768177 32228 152 0 907 java
[Thu Feb 18 17:43:14 2021] Memory cgroup out of memory: Kill process 28471 (Finalizer threa) score 928 or sacrifice child
[Thu Feb 18 17:43:14 2021] Killed process 28401 (java), UID 0, total-vm:3072708kB, anon-rss:116856kB, file-rss:12056kB, shmem-rss:0kB
Based on research I've been doing, here for example, it seems like it is normal on Linux to grow in memory consumption over time as various caches grow. From what I understand, cached memory should also be freed when new processes (such as my java process) begin to run.
My main question is: should this pod's memory be getting freed in order for these java processes to run? If so, are there any steps I can take to begin to debug why this may not be happening correctly?
Aside from this concern, I've also been trying to track down what is responsible for the growing memory in the first place. I was able to narrow it down to a certain job that runs every 15 minutes. I noticed that after every time it ran, used memory for the pod grew by ~.1 GB.
I was able to figure this out by running this command (inside the container) before and after each execution of the job:
cat /sys/fs/cgroup/memory/memory.usage_in_bytes | numfmt --to si
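For logging the usage before and after each job run, the same readout can be scripted; a small sketch, assuming the cgroup v1 path from the command above (the SI formatting only approximates what numfmt --to si prints):

```python
def human_si(n):
    """Format a byte count roughly the way `numfmt --to si` does."""
    for unit in ("", "K", "M", "G", "T"):
        if abs(n) < 1000:
            return f"{n:.1f}{unit}" if unit else str(n)
        n /= 1000
    return f"{n:.1f}P"

def cgroup_usage(path="/sys/fs/cgroup/memory/memory.usage_in_bytes"):
    """Read the pod's current memory usage (cgroup v1 layout, as in the question)."""
    with open(path) as f:
        return int(f.read().strip())
```

Calling cgroup_usage() immediately before and after the job and diffing the two values gives the same ~0.1 GB-per-run signal described below.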
From there I narrowed down the piece of bash code from which the memory seems to consistently grow. That code looks like this:
while [ "z${_STATUS}" != "z0" ]
do
    RES=`$CURL -X GET "${TS_URL}/wcs/resources/admin/index/dataImport/status?jobStatusId=${JOB_ID}"`
    _STATUS=`echo $RES | jq -r '.status.status' || exit 1`
    PROGRES=`echo $RES | jq -r '.status.progress' || exit 1`
    [ "x$_STATUS" == "x1" ] && exit 1
    [ "x$_STATUS" == "x3" ] && exit 3
    [ $CNT -gt 10 ] && PrintLog "WC Job ($JOB_ID) Progress: $PROGRES Status: $_STATUS " && CNT=0
    sleep 10
    ((CNT++))
done
[ "z${_STATUS}" == "z0" ] && STATUS=Success || STATUS=Failed
This piece of code seems innocuous to me at first glance, so I do not know where to go from here.
I would really appreciate any help, I've been trying to get to the bottom of this issue for days now.
I did eventually get to the bottom of this so I figured I'd post my solution here. I mentioned in my original post that I narrowed down my issue to the while loop that I posted above in my question. Each time the job in question ran, that while loop would iterate maybe 10 times. After the while loop completed, I noticed that utilized memory increased by 100MB each time pretty consistently.
On a hunch, I had a feeling the CURL command within the loop could be the culprit. And in fact, it did turn out that CURL was eating up my memory and not releasing it for whatever reason. Instead of looping and running the following CURL command:
RES=`$CURL -X GET "${TS_URL}/wcs/resources/admin/index/dataImport/status?jobStatusId=${JOB_ID}"`
I replaced this command with a simple python script that utilized the requests module to check our job statuses instead.
I am still not sure why CURL was the culprit in this case. After running curl --version, it appears the underlying library is libcurl/7.29.0. Maybe there is a bug in that library version causing issues with memory management, but that is just a guess.
In any case, switching from CURL to Python's requests module has resolved my issue.
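For reference, the replacement looked roughly like the sketch below. It is an assumption-laden illustration: stdlib urllib stands in for requests so the snippet is self-contained, the endpoint and JSON field names are taken from the bash loop above, and the HTTP call is injectable so the loop logic can be exercised without a live server:

```python
import json
import time
import urllib.request

def fetch_status(url):
    """One GET of the job-status endpoint, returning the parsed JSON body
    (this is the part that replaces the curl call in the bash loop)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def wait_for_job(url, fetch=fetch_status, poll_seconds=10, log_every=10):
    """Poll until status is '0' (success) or '1'/'3' (failure), mirroring
    the exit conditions of the original while loop."""
    count = 0
    while True:
        body = fetch(url)
        status = body["status"]["status"]
        progress = body["status"]["progress"]
        if status in ("1", "3"):
            return "Failed"
        if status == "0":
            return "Success"
        count += 1
        if count > log_every:
            print(f"WC Job progress: {progress} status: {status}")
            count = 0
        time.sleep(poll_seconds)
```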

Bash script for monitoring logs based upon last update time

I have a directory on a RHEL 6 server where logs are written as shown below. As you can see, there are 4 logs already written within 1 minute. I want to write a script that runs every 15 minutes (cron) and, if the log files are not updating, sends an email alert like "Adapter is in hang status, Restart Required". I know basic Linux commands and have some knowledge of crons. This is how I am trying:
-rw-r--r-- 1 root root 11M Oct 6 00:32 Adapter.log.3
-rw-r--r-- 1 root root 11M Oct 6 00:32 Adapter.log.2
-rw-r--r-- 1 root root 10M Oct 6 00:32 Adapter.log.1
-rw-r--r-- 1 root root 6.3M Oct 6 00:32 Adapter.log
$ ll Adapter.log >/tmp/test.txt
$ cat /tmp/test.txt | awk '{print $6,$7,$8}'
Oct 6 03:10
Now how can I get the time of the same log file after 15 minutes, so that I can compare the time difference and send the alert?
Given the description, it looks like the timestamp can be checked every 15 minutes:
If the file was updated in the last 15 minutes, do nothing.
If the file was updated 15 to 30 minutes ago, send an email alert.
If the file was updated more than 30 minutes ago, do nothing, as the error was already reported on a previous cycle.
Consider placing the following into cron, on 15 minute interval:
find /path/to/log/Adapter.log* -mmin +15 -mmin -30 | xargs -L1 send-alert
This solution will work in most situations. However, it's worth noting that if system load is very high, cron execution may be delayed, which would skew the age test. In those cases, an extra file storing the last test time is needed.
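The same window test can be done without find; a minimal Python sketch of the cron job body, where the path and thresholds are placeholders:

```python
import os
import time

def needs_alert(path, min_age_minutes=15, max_age_minutes=30):
    """True only when the log's last update falls inside the 15-30 minute
    window, so a stalled adapter is reported once, not on every cycle."""
    age_minutes = (time.time() - os.path.getmtime(path)) / 60
    return min_age_minutes <= age_minutes < max_age_minutes
```

From cron, a true result would trigger whatever mail command the server already uses for alerts.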

Jmeter run in ubuntu server none gui mode not show nothing result

I try to run the command ./jmeter.sh -n -t ../../apache-jmeter-4.0/test-case-2018/jmeter_cron.jmx and I get this log message:
Starting the test # Mon Jul 09 17:44:48 ICT 2018 (1531133088159)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Tidying up ... # Mon Jul 09 17:44:48 ICT 2018 (1531133088762)
... end of run
It seems nothing ran.
PS: When I run it on my desktop (Windows 10), it shows results as normal. Here is the log message from my desktop:
Starting the test # Mon Jul 09 17:09:03 ICT 2018 (1531130943233)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 1 in 00:00:01 = 1.4/s Avg: 346 Min: 346 Max: 346 Err: 1 (100.00%) Active: 1 Started: 1 Finished: 0
summary + 6 in 00:00:01 = 5.1/s Avg: 179 Min: 176 Max: 184 Err: 0 (0.00%) Active: 0 Started: 2 Finished: 2
summary = 7 in 00:00:02 = 3.7/s Avg: 203 Min: 176 Max: 346 Err: 1 (14.29%)
Tidying up ... # Mon Jul 09 17:09:05 ICT 2018 (1531130945291)
... end of run
Could anyone help me?
The CSV Data Set Config filename can be tricky in different environments:
Notice you can use an absolute path (not in distributed tests), but it may be specific to each OS.
You can use a relative path, resolved against the path of the active test plan.
Notice that Linux is case sensitive, so make sure driven_data.csv is all lower case.
Filename Name of the file to be read. Relative file names are resolved with respect to the path of the active test plan. For distributed testing, the CSV file must be stored on the server host system in the correct relative directory to where the JMeter server is started. Absolute file names are also supported, but note that they are unlikely to work in remote mode, unless the remote server has the same directory structure. If the same physical file is referenced in two different ways - e.g. csvdata.txt and ./csvdata.txt - then these are treated as different files. If the OS does not distinguish between upper and lower case, csvData.TXT would also be opened separately.
Double check that:
The file /data/driven_data.csv exists, you will have to copy it from the master node as JMeter doesn't do this automatically
The user account has read access to the /data/driven_data.csv path, if not - grant it using the following command:
sudo chmod -R a+rX /data/driven_data.csv
See online chmod manual page or type man chmod in your terminal to get full help on the command.
Just FYI: the easiest way to implement the data-driven distributed testing in JMeter is using HTTP Simple Table Server which allows sharing the same data file between multiple slave instances so you will not have to copy the file to the remote slaves.
You can install HTTP Simple Table Server using JMeter Plugins Manager
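Whichever option is used, a quick pre-flight check of the data file on each slave can save a silently empty run like the one above; a minimal sketch run outside JMeter (the path is the /data/driven_data.csv from the answer):

```python
import csv
import os

def preflight(path):
    """Check the CSV data file exists, is readable, and has at least one row."""
    if not os.path.isfile(path):
        return "missing: " + path
    if not os.access(path, os.R_OK):
        return "unreadable: " + path
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return "ok" if rows else "empty: " + path
```

Running preflight("/data/driven_data.csv") on each slave before the test catches the case-sensitivity and permission problems described above.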

Monit: 'Matching' functionality isn't working

I have a process that's kicked off from a custom script. The process does not write a '.pid' file, so I am trying to use 'matching'. However, it seems to break on the whitespace (the log just stops after '/bin/bash'), no matter how I format the command. The commands themselves work fine outside of monit.
Here is what I am trying to use:
check process example_process matching "example_process"
    start program = "/bin/bash -c 'nohup /mnt1/path/to/custom/bin/run.sh &'"
    stop program = "/usr/bin/killall example_process"
    if cpu > 80% for 2 cycles then alert
    if cpu > 95% for 5 cycles then restart
    if totalmem > 500.0 MB for 5 cycles then restart
    if children > 3 then restart
Errors logged:
[UTC Jun 18 02:01:46] info : 'system_ip-10-0-11-189' Monit started
[UTC Jun 18 02:01:46] error : 'example_process' process is not running
[UTC Jun 18 02:01:46] info : 'example_process' trying to restart
[UTC Jun 18 02:01:46] info : 'example_process' start: /bin/bash
[UTC Jun 18 02:02:16] error : 'example_process' failed to start

How to capture a process's memory and current CPU usage at specified intervals

The requirement is to capture the following information in a single log file every 5 minutes for 4 processes (7005.exe, 7006.exe, 7007.exe, 7008.exe):
filename, memory used (kb), Cpu%, timestamp
7005.exe, 10240, 75, 10:30 AM
7006.exe, 10240, 75, 10:30 AM
7005.exe, 10242, 75, 10:35 AM
7006.exe, 10000, 75, 10:35 AM
I tried using tasklist, but I am not good at batch file scripting.
Please advise,
Thanks.
Do you want something like Sysinternals pslist?
Check out Windows Management Instrumentation, and the tasks/process API in particular. There's an example on that page for gathering process CPU info via the Process object.
I have used these commands in a batch file; the only issue I have is capturing the time alongside the app name and memory usage. Any advice on how to capture the date and time on the same line as the process?
echo %Date% %TIME% >> c:\ym\tasklist.log
tasklist /fi "memusage gt 100000" >> c:\ym\tasklist.log
The output:
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
explorer.exe 476 Console 2 189,384 K
Maxthon.exe 8104 Console 2 275,540 K
OUTLOOK.EXE 5320 Console 2 189,992 K
Tue 10/09/2012 15:17:00.20
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
explorer.exe 476 Console 2 187,936 K
Maxthon.exe 8104 Console 2 275,520 K
OUTLOOK.EXE 5320 Console 2 190,076 K
Tue 10/09/2012 15:19:00.35
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
explorer.exe 476 Console 2 188,044 K
Maxthon.exe 8104 Console 2 300,520 K
OUTLOOK.EXE 5320 Console 2 190,080 K
The challenge for me is to bring the timestamp next to each process. Can someone advise, please?
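One way to get the timestamp onto each line is to parse tasklist's CSV output and re-emit one stamped row per process. A sketch under a few assumptions: the sample string stands in for a live "tasklist /fo csv" call on Windows, and plain tasklist reports memory but no CPU percentage, so that column would need typeperf or WMI counters instead:

```python
import csv
import io
from datetime import datetime

# Sample of: tasklist /fo csv /fi "imagename eq 7005.exe" (hardcoded; process
# names and figures echo the question's example, not a real capture)
SAMPLE = '''"Image Name","PID","Session Name","Session#","Mem Usage"
"7005.exe","1234","Console","2","10,240 K"
"7006.exe","1235","Console","2","10,240 K"
'''

def stamped_rows(tasklist_csv, now=None):
    """Yield 'name, mem_kb, timestamp' lines, one per process, so the
    time-stamp lands on the same line as the process it belongs to."""
    stamp = (now or datetime.now()).strftime("%I:%M %p")
    for row in csv.DictReader(io.StringIO(tasklist_csv)):
        mem_kb = int(row["Mem Usage"].rstrip(" K").replace(",", ""))
        yield f'{row["Image Name"]}, {mem_kb}, {stamp}'
```

Appending these lines to the log every 5 minutes (via Task Scheduler or the existing batch loop) produces the per-process rows shown in the requirement.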
