Writing to Windows Event Log using win32evtlog from pywin32 library

I have a simple Python script that will be running on a Windows server, and I'd like to log specific events throughout the script to the Windows event log. Does anyone have a simple and precise example of writing to the Windows event log, so I can view the event from the Event Viewer? I've read through the docs for the pywin32 library and I can't find any clear examples. I've tried building an event using:
win32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory,
                            EventType, Inserts, Data, SID)
I've had no success; could someone explain ReportEvent in a bit more depth?

A simple example:
>>> import sys
>>> import win32evtlogutil
>>> import win32evtlog
>>> import time
>>>
>>>
>>> "Python {0:s} on {1:s}".format(sys.version, sys.platform)
'Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32'
>>>
>>> DUMMY_EVT_APP_NAME = "Dummy Application"
>>> DUMMY_EVT_ID = 7040 # Got this from another event
>>> DUMMY_EVT_CATEG = 9876
>>> DUMMY_EVT_STRS = ["Dummy event string {0:d}".format(item) for item in range(5)]
>>> DUMMY_EVT_DATA = b"Dummy event data"
>>>
>>> "Current time: {0:s}".format(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
'Current time: 2018-07-18 20:03:08'
>>>
>>> win32evtlogutil.ReportEvent(
... DUMMY_EVT_APP_NAME, DUMMY_EVT_ID, eventCategory=DUMMY_EVT_CATEG,
... eventType=win32evtlog.EVENTLOG_WARNING_TYPE, strings=DUMMY_EVT_STRS,
... data=DUMMY_EVT_DATA)
>>>
Output (screenshot of the Event Viewer window not reproduced here):
You can see the correspondence between the values input from the code and the event fields shown in the Event Viewer (mmc) window.
win32evtlogutil.ReportEvent is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs.
Everything you need to know is explained at [MS.Docs]: ReportEventW function, which is the WINAPI used to accomplish this task. Make sure to read it carefully (and some other URLs that it references) in order to get more familiar about the arguments, what their values could be, and other info.
Be careful not to abuse this (tests included), or you might end up polluting the event log with lots of garbage data.
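To verify that the event actually made it into the log, you can read it back with win32evtlog. A minimal sketch (assuming the event was just written to the local machine by the snippet above):

import win32evtlog

hand = win32evtlog.OpenEventLog(None, "Application")  # None = local machine
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
found = False
while not found:
    records = win32evtlog.ReadEventLog(hand, flags, 0)
    if not records:
        break  # reached the beginning of the log without a match
    for record in records:
        if record.SourceName == "Dummy Application":
            # The low 16 bits of EventID carry the ID passed to ReportEvent.
            print(record.EventID & 0xFFFF, record.StringInserts)
            found = True
            break
win32evtlog.CloseEventLog(hand)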

Related

Using Python to run .exe automatically

I have some .exe applications that were given to me by a supplier of sensors. They allow me to grab data at specific times and convert file types. But, I need to run them through the cmd manually, and I am trying to automate the process with Python. I am having trouble getting this to work.
So far, I have:
import sys
import ctypes
import subprocess

def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    process = subprocess.Popen(
        'arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
        shell=True, cwd="C:/reftek/bin",
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out = process.stdout.read()
    err = process.stderr.read()
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
But, the "arcfetch" exe was not ran.
Also, this requires me to allow Python to make changes to the hard drive each time, which won't work automatically.
Any assistance would be greatly appreciated!
After some playing around and assistance from comments, I was able to get it to work!
The final code:
import sys
import ctypes
import subprocess

def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except:
        return False

if is_admin():
    subprocess.run(
        'arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
        shell=True, check=True, cwd="C:/reftek/bin")
else:
    # Re-run the program with admin rights
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, __file__, None, 1)
Edit: This still has the admin issue, but I can change the security settings on my computer for that.
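As a side note, if the tool's output matters (as in the original Popen attempt), subprocess.run can capture it as well. A minimal sketch along the same lines (the arcfetch command line is taken from the question):

import subprocess

result = subprocess.run(
    'arcfetch C:/reftek/arc_pas *,2,*,20:280:12:00:000,+180',
    shell=True, cwd="C:/reftek/bin",
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Without check=True a non-zero exit code will not raise; inspect it manually.
print(result.returncode, result.stdout, result.stderr)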

Using expect() and interact() simultaneously in pexpect

The general problem is, that I want to use pexpect to call scripts that require sudo rights, but I don't always want to enter my password (only once).
My plan is to use pexpect to spawn a bash session with sudo rights and to call scripts from there. Basically I always want to keep the session busy: whenever one script stops, I want to start another. But while the scripts are running, I want the user to have control. Meaning:
The scripts should be called after something like expect("root#"), so whenever the session is idle, it starts another script. While the scripts are running, interact() gives the user control over any input they want to give.
My idea was to use different threads to solve this problem. My code (a proof of concept) looks like this:
import pexpect
import threading

class BashInteractThread(threading.Thread):
    def __init__(self, process):
        threading.Thread.__init__(self)
        self.pproc = process

    def run(self):
        self.pproc.interact()

s = pexpect.spawn("/bin/bash", ['-i', '-c', "sudo bash"])
it = BashInteractThread(s)
it.start()
s.expect("root#")
s.sendline("cd ~")
while s.isalive():
    pass
s.close()
When I call this script, it does not give me any output, but the process seems to have started. Still, I cannot CTRL-C or CTRL-D to kill the process; I have to kill it separately. The behavior I would expect would be to get prompted for a password, and after that it should automatically change to the home directory.
I don't exactly know why it does not work, but I guess the output only gets forwarded either to interact() or to expect().
Does anyone have an idea on how to solve this? Thanks in advance.
You can take advantage of interact(output_filter=func). I just wrote a simple example (not polished code!). What it does is spawn a Bash shell and repeatedly invoke Python for the user to interact with. To exit the trap, just input (or print) the magic words LET ME OUT.
expect() would not work anymore after interact(), so we need to do the pattern matching manually.
The code:
[STEP 101] # cat interact_with_filter.py
import pexpect, re

def output_filter(s):
    global proc, bash_prompt, filter_buf, filter_buf_size, let_me_out
    filter_buf += s
    filter_buf = filter_buf[-filter_buf_size:]
    if "LET ME OUT" in filter_buf:
        let_me_out = True
    if bash_prompt.search(filter_buf):
        if let_me_out:
            proc.sendline('exit')
            proc.expect(pexpect.EOF)
            proc.wait()
        else:
            proc.sendline('python')
    return s

filter_buf = ''
filter_buf_size = 256
let_me_out = False
bash_prompt = re.compile('bash-[.0-9]+[$#] $')
proc = pexpect.spawn('bash --noprofile --norc')
proc.interact(output_filter=output_filter)
print "BYE"
[STEP 102] #
Let's try it:
[STEP 102] # python interact_with_filter.py
bash-4.4# python
Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit() <---- user input
bash-4.4# python
Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit() <---- user input
bash-4.4# python
Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> LET ME OUT <---- user input
File "<stdin>", line 1
LET ME OUT
^
SyntaxError: invalid syntax
>>> exit() <---- user input
bash-4.4# BYE
[STEP 103] #

NumPy bitwise "and" fails for int32 & long, but not for long & int32; why?

Compare this:
>>> import numpy; numpy.int32(-1) & 0xFFFFFFFF00000000
TypeError: ufunc 'bitwise_and' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting rule ''safe''
With this:
>>> import numpy; 0xFFFFFFFF00000000 & numpy.int32(-1)
18446744069414584320L
Are both working as intended or is at least one of them a bug? Why does it occur?
The difference is in which object's __and__ or __rand__ method is being called. Normally, the left-hand expression has its __and__ called first. If it returns NotImplemented, then the right-hand expression gets a chance (and its __rand__ is called).
In this case, numpy.int32 has decided that it cannot be "anded" with a long -- at least not with a long whose value is above what can be represented by native types...
However, based on your experiments, Python's long is happy to "and" with a numpy.int32 -- or possibly your version of numpy did not implement __rand__ symmetrically with __and__. This may also be Python-version dependent (e.g. if your version of Python decided to return a value rather than NotImplemented).
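To make the dispatch order concrete, here is a minimal sketch with two hypothetical classes, showing how returning NotImplemented from __and__ hands control to the other operand's __rand__:

class Left(object):
    def __and__(self, other):
        return NotImplemented  # decline; Python then tries other.__rand__

class Right(object):
    def __rand__(self, other):
        return "handled by Right.__rand__"

print(Left() & Right())  # -> handled by Right.__rand__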
On my computer, neither works:
Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:43:17)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import numpy
>>> numpy.__version__
'1.11.2'
But we can see what is being called using the following script:
import sys
import numpy

class MyInt32(numpy.int32):
    def __and__(self, other):
        print('__and__')
        return super(MyInt32, self).__and__(other)

    def __rand__(self, other):
        print('__rand__')
        return super(MyInt32, self).__and__(other)

try:
    print(MyInt32(-1) & 0xFFFFFFFF00000000)  # Outputs `__and__` before the `TypeError`
except TypeError:
    pass

try:
    print(0xFFFFFFFF00000000 & MyInt32(-1))  # Outputs `__rand__` before the `TypeError`
except TypeError:
    pass

sys.maxint & MyInt32(-1)  # Outputs `__rand__`
print('great success')
(sys.maxint + 1) & MyInt32(-1)  # Outputs `__rand__`, then raises TypeError
print('do not see this')

Python PermissionError on some file operations

I created the following Python 3.5 script:
import sys
from pathlib import Path

def test_unlink():
    log = Path('log.csv')
    fails = 0
    num_tries = int(sys.argv[1])
    for i in range(num_tries):
        try:
            log.write_text('asdfasdfasdfasdfasdfasdfasdfasdf')
            with log.open('r') as logfile:
                lines = logfile.readlines()
            # The file should contain exactly the one line just written
            assert len(lines) == 1
            log.unlink()
            assert not log.exists()
        except PermissionError:
            sys.stdout.write('! ')
            sys.stdout.flush()
            fails += 1
    assert fails == 0, '{:%}'.format(fails / num_tries)

test_unlink()
and run it like this: python test.py 10000. On Windows 7 Pro 64-bit with Python 3.5.2, the failure rate is not 0: it is small, but non-zero. Sometimes it is not even that small: 5%! If you print out the exception, it is this:
PermissionError: [WinError 5] Access is denied: 'C:\\...\\log.csv'
but it will sometimes occur at the exists(), other times at the write_text(), and I wouldn't be surprised if it happens at the unlink() and the open() too.
Note that the same script, same Python (3.5.2), but on linux (through http://repl.it/), does not have this issue: failure rate is 0.
I realize that a possible workaround could be:
while True:
    try:
        log.unlink()
    except PermissionError:
        pass
    else:
        break
but this is tedious and error-prone (several methods on the Path instance would need it, and it is easy to forget), and it should (IMHO) not be necessary, so I don't think it is a practical solution.
So, does anyone have an explanation for this, and a practical workaround, maybe a mode flag somewhere that can be set when Python starts?
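For what it's worth, the retry workaround can at least be factored out so it is not repeated at every call site. A minimal sketch (retry_on_permission_error is a hypothetical helper, not a fix for the underlying cause):

import time

def retry_on_permission_error(func, *args, **kwargs):
    # Keep retrying the operation until it stops raising PermissionError.
    while True:
        try:
            return func(*args, **kwargs)
        except PermissionError:
            time.sleep(0.01)  # brief pause before retrying

# Usage (assuming `log` is the Path object from the script above):
# retry_on_permission_error(log.unlink)
# retry_on_permission_error(log.write_text, 'asdfasdfasdfasdfasdfasdfasdfasdf')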

using ipdb to debug python code in one cell (jupyter or Ipython)

I'm using a jupyter (or IPython) notebook with Firefox, and I want to debug some Python code in a cell. I am using 'import ipdb; ipdb.set_trace()' as a kind of breakpoint; for example, my cell has the following code:
a=4
import ipdb; ipdb.set_trace()
b=5
print a
print b
which after execution with Shift+Enter gives me this error:
--------------------------------------------------------------------------
MultipleInstanceError Traceback (most recent call last)
<ipython-input-1-f2b356251c56> in <module>()
1 a=4
----> 2 import ipdb; ipdb.set_trace()
3 b=5
4 print a
5 print b
/home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__init__.py in <module>()
14 # You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
15
---> 16 from ipdb.__main__ import set_trace, post_mortem, pm, run, runcall, runeval, launch_ipdb_on_exception
17
18 pm # please pyflakes
/home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__main__.py in <module>()
71 # the instance method will create a new one without loading the config.
72 # i.e: if we are in an embed instance we do not want to load the config.
---> 73 ipapp = TerminalIPythonApp.instance()
74 shell = get_ipython()
75 def_colors = shell.colors
/home/nnn/anaconda/lib/python2.7/site-packages/traitlets/config/configurable.pyc in instance(cls, *args, **kwargs)
413 raise MultipleInstanceError(
414 'Multiple incompatible subclass instances of '
--> 415 '%s are being created.' % cls.__name__
416 )
417
MultipleInstanceError: Multiple incompatible subclass instances of TerminalIPythonApp are being created.
The same error appears if I use this code not in the jupyter notebook in the browser, but in jupyter qtconsole.
What does this error mean and what to do to avoid it?
Is it possible to debug code in the cell step-by-step, using next, continue, etc commands of pdb debugger?
I had this problem too, and it seems to be related to the versions of jupyter and ipdb.
The solution is to use this instead of the ipdb library's set_trace call:
from IPython.core.debugger import Tracer
Tracer()() #this one triggers the debugger
Source: http://devmartin.com/blog/2014/10/trigger-ipdb-within-ipython-notebook/
Tracer() is deprecated.
Use:
from IPython.core.debugger import set_trace
and then place set_trace() where breakpoint is needed.
from IPython.core.debugger import set_trace

def add_to_life_universe_everything(x):
    answer = 42
    set_trace()
    answer += x
    return answer

add_to_life_universe_everything(12)
This works fine and brings us a little bit more comfort (e.g. syntax highlighting) than just using the built-in pdb.
If using Jupyter Notebook, begin your cell with the magic command %%debug.
An ipdb prompt will then be shown at the bottom of the cell, which will help you navigate through the debugging session. The following commands should get you started:
n: execute the current line and go to the next line.
c: continue execution until the next breakpoint.
Make sure you restart the kernel each time you decide to debug, so that all variables are freshly assigned. You can check the value of each variable through the ipdb prompt, and you will see that a variable is undefined until you execute the line that assigns a value to it.
%%debug
import pdb
from pdb import set_trace as bp

def function_xyz():
    print('before breakpoint')
    bp()  # This is a breakpoint.
    print('after breakpoint')
My version of Jupyter is 5.0.0 and my corresponding IPython version is 6.1.0. I am using:
import IPython.core.debugger
dbg = IPython.core.debugger.Pdb()
dbg.set_trace()
Tracer is listed as deprecated.
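Putting it together, a minimal usage sketch (mirroring the earlier add_to_life_universe_everything example):

import IPython.core.debugger

dbg = IPython.core.debugger.Pdb()

def add_to_life_universe_everything(x):
    answer = 42
    dbg.set_trace()  # execution pauses here, in the IPython debugger
    answer += x
    return answer

add_to_life_universe_everything(12)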
Update:
I tried using the method from another answer https://stackoverflow.com/a/43086430/8019692 below but got an error:
MultipleInstanceError: Multiple incompatible subclass instances of TerminalIPythonApp are being created.
I prefer my method to the %%debug magic since I can set breakpoints in functions defined in other cells and run the function in another cell. Jupyter/IPython drops into the debugger in my function where the breakpoint is set, and I can use the usual pdb commands. To each his own...
@lugger1, the accepted answer is deprecated.
