I have some .exe applications that were given to me by a supplier of sensors. They allow me to grab data at specific times and convert file types, but I need to run them through cmd manually, so I am trying to automate the process with Python. I am having trouble getting this to work.
So far, I have:
import sys
import ctypes
import subprocess
def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    process = subprocess.Popen('arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
                               shell=True, cwd="C:/reftek/bin",
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out = process.stdout.read()
    err = process.stderr.read()
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
But the "arcfetch" exe was not run.
Also, this requires me to allow Python to make changes to the hard drive (the UAC prompt) each time, which won't work for automation.
Any assistance would be greatly appreciated!
After some playing around and assistance from comments, I was able to get it to work!
The final code:
import sys
import ctypes
import subprocess
def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    subprocess.run('arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
                   shell=True, check=True, cwd="C:/reftek/bin")
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
Edit: This still has the admin issue, but I can change the security settings on my computer for that.
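If you also want the tool's output (as in the first attempt with Popen), subprocess.run can capture it and still raise on a non-zero exit in one call (Python 3.7+). A minimal sketch, using an inline Python command as a hypothetical stand-in for the arcfetch command line:

```python
import subprocess
import sys

# Sketch: capture stdout/stderr while still raising on a non-zero exit.
# The inline "print('fetched')" stands in for the real arcfetch command.
result = subprocess.run(
    [sys.executable, "-c", "print('fetched')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # -> fetched
```

result.stderr holds anything the tool wrote to standard error.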
I am using pexpect to connect to a remote server using ssh.
The following code works, but I have to use time.sleep to create a delay, especially when I am sending a command to run a script on the remote server.
The script can take up to a minute to run, and if I don't use a 60-second delay, the script ends prematurely.
The same issue occurs when I am using sftp to download a file: if the file is large, it downloads only partially.
Is there a way to control this without using a delay?
#!/usr/bin/python3
import pexpect
import time
from subprocess import call
siteip = "131.235.111.111"
ssh_new_conn = 'Are you sure you want to continue connecting'
password = 'xxxxx'
child = pexpect.spawn('ssh admin@' + siteip)
time.sleep(1)
child.expect('admin@.* password:')
child.sendline('xxxxx')
time.sleep(2)
child.expect('admin@.*')
print('ssh to abcd - takes 60 seconds')
child.sendline('backuplog\r')
time.sleep(50)
child.sendline('pwd')
Many pexpect functions take an optional timeout= keyword argument, and the one you pass to spawn() sets the default. For example:
child.expect('admin@.*', timeout=70)
You can pass timeout=None to never time out.
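As a runnable illustration (assuming pexpect is installed), spawning a short local command as a stand-in for the remote script shows the idea: expect() returns as soon as the expected text appears, rather than sleeping for a fixed interval.

```python
import sys
import pexpect  # third-party: pip install pexpect

# Spawn a short local process as a stand-in for the remote script; expect()
# blocks only until 'DONE' appears (or 70 seconds elapse), not a fixed sleep.
child = pexpect.spawn(sys.executable, ['-c', "print('DONE')"], encoding='utf-8')
child.expect('DONE', timeout=70)
child.expect(pexpect.EOF)
print('matched')  # -> matched
```

The same pattern handles the sftp case: expect the transfer-complete prompt (or pexpect.EOF) with a generous timeout instead of sleeping.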
NOTE 1 - All files run and fetch correct results when started from cmd in my profile, but not with the Windows Task Scheduler.
NOTE 2 - I finally got a lead that glob.glob and os.listdir are not working in the Windows Task Scheduler in my Python script (in which I make a connection to a remote server), but they work locally using cmd and PyCharm.

print("before for loop::", os.path.join(file_path, '*'))
print(glob.glob(os.path.join(file_path, '*')))
for filename in glob.glob(os.path.join(file_path, '*')):
    print("after for loop")
While running the above .py script I got: before for loop:: c:\users\path\dir\*
The print(glob.glob(os.path.join(file_path, '*'))) call prints "[]", and I am not able to find out why.
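One common cause (an assumption, not confirmed from the question): the Task Scheduler starts the script from a different working directory than cmd does, so a relative file_path no longer exists, and glob.glob silently returns [] for a non-existent directory. A small self-contained demonstration of the difference between an absolute and a broken relative pattern:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as file_path:
    # Create one file so the absolute pattern has something to match.
    open(os.path.join(file_path, 'a.txt'), 'w').close()
    # Absolute pattern: found regardless of the current working directory.
    print(len(glob.glob(os.path.join(file_path, '*'))))   # -> 1
    # A relative pattern that doesn't exist from here: silently [].
    print(glob.glob(os.path.join('no_such_dir', '*')))    # -> []
```

In a scheduled task, either set the "Start in" directory or build file_path from an absolute base.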
I followed this StackOverflow link for setting up the Windows scheduler for Python, referring to MagTun's comment: Scheduling a .py file on Task Scheduler in Windows 10.
Currently, I have scheduler.py, which calls 4 other .py files.
When I try to run scheduler.py from the Windows Task Scheduler, it runs scheduler.py, then after 1 minute it runs the other 4 .py files, and they exit within seconds, not giving any output in Elasticsearch.
I used this cmd script:
@echo off
cmd /k "cd /d D:\folder\env\Scripts\ & activate & cd /d D:\folder\files & python scheduler.py" >> open.log
timeout /t 15
When run with the Windows Task Scheduler, the above cmd script does not save anything in open.log.
The scheduler script that runs the .py files as subprocesses looks like this:
from apscheduler.schedulers.blocking import BlockingScheduler
from datetime import datetime
from subprocess import call
import os

def a():
    call(['python', r'C:\Users\a.py'])

def b():
    call(['python', r'C:\Users\b.py'])

def c():
    call(['python', r'C:\Users\c.py'])

def d():
    call(['python', r'C:\Users\d.py'])

if __name__ == '__main__':
    scheduler = BlockingScheduler()
    scheduler.add_job(a, 'interval', minutes=1)
    scheduler.add_job(b, 'interval', minutes=2)
    scheduler.add_job(c, 'interval', minutes=1)
    scheduler.add_job(d, 'interval', minutes=2)
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
    try:
        scheduler.start()
        print("$$$$$$$$$$$$$$$$$$")
    except (KeyboardInterrupt, SystemExit):
        print("****#####")
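One hedged guess for the "runs and exits within seconds" symptom: under the Task Scheduler, the bare 'python' in call(['python', ...]) may resolve to a different interpreter (or none at all) than in an interactive profile. sys.executable pins the interpreter that is running scheduler.py itself. A sketch:

```python
import sys
from subprocess import call

# Use the interpreter running this script rather than whatever 'python'
# happens to resolve to in the scheduler's environment.
def run_job(script_path):
    return call([sys.executable, script_path])

# Self-check with an inline command instead of a real .py file:
rc = call([sys.executable, '-c', "print('job ran')"])
print('exit code:', rc)  # prints "job ran" then "exit code: 0"
```

The run_job helper and the example command are illustrations, not part of the original script.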
Having the same bizarre issue. It works like a charm when running as a user; with a Windows task, the glob query returns no results.
Edit: I was using a network share via its mapped drive name. It only works when using the full UNC path (including the server name).
I created the following Python 3.5 script:
import sys
from pathlib import Path
def test_unlink():
    log = Path('log.csv')
    fails = 0
    num_tries = int(sys.argv[1])
    for i in range(num_tries):
        try:
            log.write_text('asdfasdfasdfasdfasdfasdfasdfasdf')
            with log.open('r') as logfile:
                lines = logfile.readlines()
            # Check the second line to account for the log file header
            assert len(lines) == 1
            log.unlink()
            assert not log.exists()
        except PermissionError:
            sys.stdout.write('! ')
            sys.stdout.flush()
            fails += 1
    assert fails == 0, '{:%}'.format(fails / num_tries)

test_unlink()
and run it like this: python test.py 10000. On Windows 7 Pro 64-bit with Python 3.5.2, the failure rate is not 0: it is small, but non-zero. Sometimes it is not even that small: 5%! If you print out the exception, it will be this:
PermissionError: [WinError 5] Access is denied: 'C:\\...\\log.csv'
but it will sometimes occur at the exists(), other times at the write_text(), and I wouldn't be surprised if it happens at the unlink() and the open() too.
Note that the same script, same Python (3.5.2), but on Linux (through http://repl.it/), does not have this issue: the failure rate is 0.
I realize that a possible workaround could be:
while True:
    try:
        log.unlink()
    except PermissionError:
        pass
    else:
        break
but this is tedious and error-prone (several methods on a Path instance would need it, and it is easy to forget), and it should (IMHO) not be necessary, so I don't think it is a practical solution.
So, does anyone have an explanation for this, and a practical workaround, maybe a mode flag somewhere that can be set when Python starts?
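For what it's worth, a bounded version of that retry loop avoids the infinite-loop risk and can be shared across the Path methods that need it. This is a sketch of the workaround idea only, not an explanation of the underlying Windows behavior (commonly attributed to antivirus or indexing services briefly holding the file open):

```python
import time
from pathlib import Path

def retry(op, attempts=10, delay=0.05):
    """Call op(), retrying a few times on PermissionError before giving up."""
    for i in range(attempts):
        try:
            return op()
        except PermissionError:
            if i == attempts - 1:
                raise  # still failing after all attempts: re-raise
            time.sleep(delay)

# Usage: wrap the flaky Path operations instead of calling them directly.
log = Path('log.csv')
log.write_text('hello')
retry(log.unlink)
print(log.exists())  # -> False
```

retry and its attempts/delay parameters are hypothetical names introduced here for illustration.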
Something like this would be nice:

from IPython.parallel import Client
dv = Client()[0]

import time
def waitprogress(n):
    global progress
    for i in range(n):
        time.sleep(1)
        progress = str(i) + '/' + str(n)

dv.block = False
dv.apply(waitprogress, 10)
dv['progress']
# the command waits 10 seconds, then returns 9/10
This doesn't work because IPython waits for dv.apply to finish before looking up the progress variable in the remote instance.
Any ideas, great people of SO?
From this SO answer: Reading the stdout of ipcluster, I found a solution using stdout:
from IPython.parallel import Client
c = Client()
dv = c[0]

import time
def waitprogress(n):
    for i in range(n):
        time.sleep(1)
        print(str(i) + '/' + str(n))

dv.block = False
res = dv.apply(waitprogress, 10)
print(c.spin() or c.metadata[res.msg_ids[0]].stdout.split()[-1])
# 1/10
time.sleep(3)
print(c.spin() or c.metadata[res.msg_ids[0]].stdout.split()[-1])
# 4/10
If someone has a better solution, that would be great.
Hello stackoverflow members,
I'm in need of a shell script which checks whether the VPN connection gets dropped. If it gets disconnected, the network should be disabled so that no data can go out and there is no IP leak. I have already checked many sites, and I found an interesting Python script. Here it is:
#!/usr/bin/env python
#
# licensed under GNU General Public License version 2
#
import sys
import traceback
import gobject
import dbus
import dbus.decorators
import dbus.mainloop.glib
import os
def catchall_signal_handler(*args, **kwargs):
    print("Caught signal: " + kwargs['member'])
    if args[0] >= 6:  # vpn disconnect (6) or failure (7)
        print("Killing internet connection...")
        # set eth0 to your network adapter
        os.system('ifconfig eth0 down')
        # if you are using python 3 no raw_input() exists so use input()
        raw_input("Press Enter to enable your network adapter...")
        # set eth0 to your network adapter
        os.system('ifconfig eth0 up')
        print("Your network adapter has been enabled.")

if __name__ == '__main__':
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    print("Monitoring your VPN connection...")
    bus = dbus.SystemBus()
    # let's make a catchall
    bus.add_signal_receiver(catchall_signal_handler, signal_name='VpnStateChanged',
                            interface_keyword='dbus_interface', member_keyword='member')
    loop = gobject.MainLoop()
    loop.run()
The problem is that it seems to work only one time. It's really weird.
So I ask you as experts: where could the issue be? If someone has another, more optimized script, I would appreciate it very much!
Thanks in advance!
Kind Regards,