Running a Python script under a service account using Windows Task Scheduler

NOTE 1: All files run correctly and fetch the right results when launched from cmd under my profile, but not under Windows Task Scheduler.
NOTE 2: I finally got a lead that glob.glob and os.listdir are not working under Task Scheduler in the script that connects to a remote server, although both work locally from cmd and PyCharm.

print("before for loop::", os.path.join(file_path, '*'))
print(glob.glob(os.path.join(file_path, '*')))
for filename in glob.glob(os.path.join(file_path, '*')):
    print("after for loop")

Running the above .py script I got: before for loop:: c:\users\path\dir\*
but print(glob.glob(os.path.join(file_path, '*'))) prints "[]", and I cannot find out why.
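When a script behaves differently under Task Scheduler, the usual suspects are the working directory and the account the task runs as. A small diagnostic sketch (the path below is a placeholder) whose output can be logged from the scheduled run:

```python
import getpass
import glob
import os

def environment_report(file_path):
    """Gather the facts that usually differ between an interactive cmd
    session and a Task Scheduler run."""
    return {
        "cwd": os.getcwd(),                  # Task Scheduler often starts in C:\Windows\System32
        "user": getpass.getuser(),           # the account the task actually runs as
        "pattern": os.path.join(file_path, "*"),
        "matches": glob.glob(os.path.join(file_path, "*")),
    }

# Print (or write to a file the task can reach) before the loop that fails:
print(environment_report(r"c:\users\path\dir"))
```

Comparing the report from an interactive run with the one from the scheduled run shows immediately whether the pattern, the working directory, or the user differs.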
I followed this Stack Overflow link for setting up Task Scheduler for Python, referring to MagTun's comment: Scheduling a .py file on Task Scheduler in Windows 10
Currently, I have scheduler.py, which calls 4 other .py files.
When I run scheduler.py from Windows Task Scheduler, it runs scheduler.py, then after 1 minute runs the other 4 .py files, and exits within seconds, producing no output in Elasticsearch.
I used this cmd script:
@echo off
cmd /k "cd /d D:\folder\env\Scripts\ & activate & cd /d D:\folder\files & python scheduler.py" >> open.log
timeout /t 15
When run from Windows Task Scheduler, this saves nothing to open.log.
The scheduler script that runs the other .py files as subprocesses looks like this:
from apscheduler.schedulers.blocking import BlockingScheduler
from subprocess import call
import os

def a():
    call(['python', r'C:\Users\a.py'])
def b():
    call(['python', r'C:\Users\b.py'])
def c():
    call(['python', r'C:\Users\c.py'])
def d():
    call(['python', r'C:\Users\d.py'])

if __name__ == '__main__':
    scheduler = BlockingScheduler()
    scheduler.add_job(a, 'interval', minutes=1)
    scheduler.add_job(b, 'interval', minutes=2)
    scheduler.add_job(c, 'interval', minutes=1)
    scheduler.add_job(d, 'interval', minutes=2)
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
    try:
        scheduler.start()
        print("$$$$$$$$$$$$$$$$$$")
    except (KeyboardInterrupt, SystemExit):
        print("****#####")
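A scheduled task has no visible console, so one way to find out why the four child scripts exit immediately is to capture each subprocess's output in a log file. A minimal sketch (the log path is a placeholder; subprocess.run is used here instead of call):

```python
import subprocess
import sys
from datetime import datetime

def run_logged(script_path, log_path="scheduler.log"):
    """Run a child script with the current interpreter and append its
    output and exit code to a log file."""
    result = subprocess.run(
        [sys.executable, script_path],   # sys.executable avoids relying on 'python' being on PATH
        capture_output=True, text=True,
    )
    with open(log_path, "a") as log:
        log.write(f"{datetime.now()} {script_path} exit={result.returncode}\n")
        log.write(result.stdout)
        log.write(result.stderr)
    return result.returncode
```

Each job function could call run_logged(r'C:\Users\a.py') instead of call(...), so the log captures tracebacks even when Task Scheduler shows nothing.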

Having the same bizarre issue. It works like a charm when run as the user; with a Windows task, the glob query returns no results.
Edit: I was using a network share by its mapped drive letter. It only works when using the full UNC path (including the server name).
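Drive mappings are per-user, so a task running under another account cannot see them. A helper that rewrites a mapped letter to its UNC root before globbing makes this explicit (the mapping below is a placeholder):

```python
# Mapped drive letters exist only in the session that created them; a
# scheduled task running as a different account will not see them.
DRIVE_TO_UNC = {"Z:": r"\\fileserver\share"}   # placeholder mapping

def to_unc(path, mapping=DRIVE_TO_UNC):
    """Replace a known mapped drive letter with its full UNC root."""
    for drive, unc in mapping.items():
        if path.upper().startswith(drive.upper()):
            return unc + path[len(drive):]
    return path

# glob.glob(os.path.join(to_unc(r"Z:\data"), "*")) then matches under Task Scheduler too.
```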


curl command BashOperator in Cloud Composer

I am following the tutorial mentioned in this link - download_rocket_launches.py. As I am running this in Cloud Composer, I want to use the native path, i.e. /home/airflow/gcs/dags, but it fails with a path-not-found error.
What path can I give for this command to work? Here is the task I am trying to execute:
download_launches = BashOperator(
    task_id="download_launches",
    bash_command="curl -o /tmp/launches.json -L 'https://ll.thespacedevs.com/2.0.0/launch/upcoming'",  # noqa: E501
    dag=dag,
)
This worked on my end:
import json
import pathlib
import airflow.utils.dates
import requests
import requests.exceptions as requests_exceptions
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator
dag = DAG(
    dag_id="download_rocket_launches",
    description="Download rocket pictures of recently launched rockets.",
    start_date=airflow.utils.dates.days_ago(14),
    schedule_interval="@daily",
)
download_launches = BashOperator(
    task_id="download_launches",
    bash_command="curl -o /home/airflow/gcs/data/launches.json -L 'https://ll.thespacedevs.com/2.0.0/launch/upcoming' ",  # put a space between the single quote and the double quote
    dag=dag,
)
download_launches
The key was to put a space between the single quote ' and the double quote " at the end of the bash command.
Also, it is recommended to use the data folder for your output file, as stated in the GCP documentation:
gs://bucket-name/data is mounted at /home/airflow/gcs/data: it stores the data that tasks produce and use, and this folder is mounted on all worker nodes.

Using Python to run .exe automatically

I have some .exe applications that were given to me by a supplier of sensors. They allow me to grab data at specific times and convert file types, but I have to run them manually through cmd, and I am trying to automate the process with Python. I am having trouble getting this to work.
So far, I have:
import sys
import ctypes
import subprocess
def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    process = subprocess.Popen(
        'arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
        shell=True, cwd="C:/reftek/bin",
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )
    out = process.stdout.read()
    err = process.stderr.read()
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
But the "arcfetch" exe was not run.
Also, this requires me to allow Python to make changes to the hard drive each time, which won't work automatically.
Any assistance would be greatly appreciated!
After some playing around and assistance from comments, I was able to get it to work!
The final code:
import sys
import ctypes
import subprocess
def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    subprocess.run('arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
                   shell=True, check=True, cwd="C:/reftek/bin")
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
Edit: This still has the admin issue, but I can change the security settings on my computer for that.

pexpect timed out before script ends

I am using pexpect to connect to a remote server over SSH.
The following code works, but I have to use time.sleep to add delays, especially when sending a command that runs a script on the remote server.
The script takes up to a minute to run, and without a 60-second delay my script ends prematurely.
The same thing happens when I use sftp to download a file: if the file is large, it downloads only partially.
Is there a way to control this without fixed delays?
#!/usr/bin/python3
import pexpect
import time

siteip = "131.235.111.111"
ssh_new_conn = 'Are you sure you want to continue connecting'
password = 'xxxxx'
child = pexpect.spawn('ssh admin@' + siteip)
time.sleep(1)
child.expect('admin@.* password:')
child.sendline('xxxxx')
time.sleep(2)
child.expect('admin@.*')
print('ssh to abcd - takes 60 seconds')
child.sendline('backuplog\r')
time.sleep(50)
child.sendline('pwd')
Many pexpect functions take an optional timeout= keyword, and the value you give in spawn() sets the default, e.g.
child.expect('admin@', timeout=70)
You can pass the value None to never time out.
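Building on that, a small helper that waits for the shell prompt after each command can replace the fixed sleeps entirely. A sketch assuming the remote prompt matches the pattern below (the prompt pattern is an assumption):

```python
PROMPT = r"admin@.*\$ "   # assumption: how the remote shell prompt looks

def run_command(child, command, timeout=120):
    """Send a command and block until the prompt returns, instead of sleeping.

    `child` is a pexpect spawn object; child.expect raises pexpect.TIMEOUT
    if the prompt does not reappear within `timeout` seconds.
    """
    child.sendline(command)
    child.expect(PROMPT, timeout=timeout)
    return child.before.decode()   # everything printed before the prompt came back

# Usage against the host from the question (not run here):
#   import pexpect
#   child = pexpect.spawn('ssh admin@131.235.111.111', timeout=30)
#   child.expect('password:')
#   child.sendline('xxxxx')
#   child.expect(PROMPT)
#   output = run_command(child, 'backuplog', timeout=120)  # script takes ~60 s
```

Because expect() returns as soon as the prompt appears, a fast command finishes immediately and a slow one simply uses as much of the timeout as it needs.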

how to pass specific arguments in batch commands during jenkins builds

I'm trying to use Jenkins to automate performance testing with JMeter.
Each build runs a single JMeter test, and I want to increase the number of users (threads) for each Jenkins build if the previous one was successful.
I have configured most of the build: with the SSH plugin I can restart Tomcat and copy catalina.out, and with performance publishing I can open the .jtl file and determine whether the build was successful.
What I want is to execute a different batch command for the next build, to increase the number of users (threads) and user IDs.
For example:
jmeter -Jthreads=10 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=20 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=30 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl...
Is there a good JMeter plugin or counter that I can use to increase a variable by 10 each time:
jmeter -Jthreads=%variable1%...
I have tried setting an environment variable and then incrementing it with
"SET /A thread+=10"
but it doesn't change the variable, because Jenkins opens its own cmd in a new process:
("cmd /c call C:\WINDOWS\TEMP\jenkins556482303577128680.bat")
Use the following SET command to increase the threads variable by 10:
SET /A threads=threads+10
Or inside double quotes:
SET /A "threads+=10"
Not knowing your Jenkins configuration, which plugins you have installed, and how you run the test, it is quite hard to come up with the best solution.
The only "universal" workaround I can think of is writing the current number of threads into a file in the Jenkins workspace and reading the value from the file on the next execution:
Add setUp Thread Group to your Test Plan
Add JSR223 Sampler to your Thread Group
Put the following Groovy code into "Script" area:
import org.apache.jmeter.threads.ThreadGroup
import org.apache.jorphan.collections.SearchByClass
import org.apache.commons.io.FileUtils

SampleResult.setIgnore()
def file = new File(System.getenv('WORKSPACE') + System.getProperty('file.separator') + 'threads.number')
if (file.exists()) {
    def newThreadNum = (FileUtils.readFileToString(file, 'UTF-8') as int) + 10
    FileUtils.writeStringToFile(file, newThreadNum as String)
    def engine = ctx.getEngine()
    def test = org.apache.commons.lang.reflect.FieldUtils.getField(engine.getClass(), 'test', true)
    def testPlanTree = test.get(engine)
    SearchByClass<ThreadGroup> threadGroupSearch = new SearchByClass<>(ThreadGroup.class)
    testPlanTree.traverse(threadGroupSearch)
    def threadGroups = threadGroupSearch.getSearchResults()
    threadGroups.each {
        it.setNumThreads(newThreadNum)
    }
} else {
    FileUtils.writeStringToFile(file, props.get('threads'))
}
The code writes the current number of threads of all Thread Groups into a file called threads.number in the Jenkins workspace; on subsequent runs it reads the value, adds 10, and writes it back.
For now I am creating 20 .jmx files (1.jmx, 2.jmx, 3.jmx, ...), each with a different number of users, and calling them with this command:
jmeter -n -t C:\TestScripts\%BUILD_NUMBER%.jmx -l C:\TestScripts\%BUILD_NUMBER%.jtl
The first build will call 1.jmx, the second 2.jmx, and so on.
It isn't the best method, but it works for now. I will try your advice over the weekend when I have more time.
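Rather than hand-editing 20 files, the numbered plans could be generated from one template by rewriting JMeter's ThreadGroup.num_threads property, which is stored as a stringProp in the .jmx XML. A rough, regex-based sketch (assumes a single Thread Group per plan; the helper name is hypothetical):

```python
import re

def generate_plans(template_text, count=20, start=10, step=10):
    """Stamp out numbered test plans from one template, raising the
    thread count by `step` each time."""
    plans = {}
    for i in range(1, count + 1):
        threads = start + (i - 1) * step
        # Rewrite the Thread Group's user count inside the .jmx XML.
        plans[f"{i}.jmx"] = re.sub(
            r'(<stringProp name="ThreadGroup.num_threads">)\d+(</stringProp>)',
            rf"\g<1>{threads}\g<2>",
            template_text,
        )
    return plans
```

Each generated plan can then be saved as 1.jmx, 2.jmx, ... and picked up by the %BUILD_NUMBER% command above.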
I have found a solution that works for me; it isn't pretty. I created a Python script that changes a .csv file from which JMeter reads the number of threads and the starting user ID. The script increments the starting user ID by the number of threads of the previous build, and the number of threads by 10.
# read the current thread count (a) and starting user id (b)
file = open('C:\\Users\\mp\\AppData\\Local\\Programs\\Python\\Python37-32\\eggs.csv', 'r')
a, b = file.readlines()[0].split(",")
print(a, b)
b = int(b)
a = int(a)
b = a + b    # starting user id advances by the previous thread count
a = a + 10   # thread count grows by 10
print(a, b)
f = open("C:\\Users\\mp\\AppData\\Local\\Programs\\Python\\Python37-32\\eggs2.csv", "a")
f.write(str(a) + "," + str(b))
f.close()
I have Python on my PC, and I am calling the script in Jenkins as a Windows batch command:
C:\Users\mp\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\mp\AppData\Local\Programs\Python\Python37-32\rename_write_file.py
I am much better in Python than Java, so I implemented this in Python.
So for each new test, the CSV file from which JMeter reads its values is changed.

mpiexec - Credentials for user rejected connecting host

To get some practice and become more familiar with MPI, I installed MS-MPI on my Windows 10 machine, and then mpi4py (the Python MPI bindings). I tried a hello-world code:
from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    print("hello from " + str(rank) + " in " + str(size))

if __name__ == "__main__":
    main()
Then, from a Windows command prompt run as admin, I executed the following command:
mpiexec -n 8 python MPI_Test.py
I get:
User credentials needed to launch processes: account (domain\user)
[DESKTOP-3CFSBJ8\Hazem]:
I did a registration with mpiexec -register (username/password), then executed that command again, and I get the following error:
Credentials for user rejected connecting to host.
The problem occurs when executing the mpiexec command.
I got the same issue; the solution is:
Type "mpiexec -n 3 cpi.exe" to run the sample program. You will get a response like this:
"user credentials needed to launch process"
Type your Windows username and Windows password, and the sample program will run.
In order not to enter credentials every time you run mpiexec, you can register your username and password with the command "mpiexec -register".
source: https://www.cmpe.boun.edu.tr/sites/default/files/mpi_install_tutorial.pdf
