Error 1053 When Starting Windows Service Written In Python - windows

I have already looked at and tried the resolutions that others have posted for this question. One user suggested changing my setup.py file from:
from distutils.core import setup
import py2exe
setup(console=["dev.py"])
to
from distutils.core import setup
import py2exe
setup(service=["dev.py"])
I got the following results:
running py2exe
*** searching for required modules ***
Traceback (most recent call last):
File "C:\Python27\Scripts\distutils-setup.py", line 5, in <module>
setup(service=["C:\Python27\Scripts\dev.py"])
File "C:\Python27\lib\distutils\core.py", line 152, in setup
dist.run_commands()
File "C:\Python27\lib\distutils\dist.py", line 953, in run_commands
self.run_command(cmd)
File "C:\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 243, in run
self._run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 296, in _run
self.find_needed_modules(mf, required_files, required_modules)
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 1274, in
find_needed_modules
mf.import_hook(mod)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
return Base.import_hook(self,name,caller,fromlist,level)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 136, in import_hook
q, tail = self.find_head_package(parent, name)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 204, in find_head_package
raise ImportError, "No module named " + qname
ImportError: No module named dev
Now, when I run py2exe with "console" in my setup script it works fine, but the service doesn't start and I get the 1053 error. When I run py2exe with "service" in my setup script, py2exe doesn't run and tells me it can't find my module.
I have tried re-installing py2exe, to no avail. I have also tried to change:
def SvcDoRun(self):
    servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                          servicemanager.PYS_SERVICE_STARTED,
                          (self._svc_name_, ''))
to
def SvcDoRun(self):
    self.ReportServiceStatus(win32service.SERVICE_RUNNING)
    win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)
That didn't make a difference either. Can anyone help me, please? Here is what I am working on: it monitors a server and writes out a text file every 60 seconds, which I use to check on my servers at any given minute. Any help you can give would be great.
import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
import wmi
import _winreg
import time   # needed for time.strftime()/time.localtime() in main()
import re     # needed for re.sub() in main()
from time import sleep
import os

class SrvMonSvc (win32serviceutil.ServiceFramework):
    _svc_name_ = "SrvMonSvc"
    _svc_display_name_ = "Server Monitor"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))
        self.main()

    def main(self):
        host = wmi.WMI(namespace="root/default").StdRegProv
        try:
            result, api = host.GetStringValue(
                hDefKey=_winreg.HKEY_LOCAL_MACHINE,
                sSubKeyName="SOFTWARE\\Server Monitor",
                sValueName="API")
            if api == None:
                raise Exception
            else:
                pass
        except:
            exit()
        while 1 == 1:
            with open("C:/test.txt", "wb") as b:
                computer = wmi.WMI(computer="exsan100")
                for disk in computer.Win32_LogicalDisk(DriveType=3):
                    name = disk.caption
                    size = round(float(disk.Size) / 1073741824, 2)
                    free = round(float(disk.FreeSpace) / 1073741824, 2)
                    used = round(float(size), 2) - round(float(free), 2)
                for mem in computer.Win32_OperatingSystem():
                    a_mem = (int(mem.FreePhysicalMemory) / 1024)
                for me in computer.Win32_ComputerSystem():
                    t_mem = (int(me.TotalPhysicalMemory) / 1048576)
                    u_mem = t_mem - a_mem
                for cpu in computer.Win32_Processor():
                    load = cpu.LoadPercentage
                print >>b, api
                print >>b, name
                print >>b, size
                print >>b, used
                print >>b, t_mem
                print >>b, u_mem
                print >>b, load
                b.close()
            date_list = []
            stamp = time.strftime("%c", time.localtime(time.time()))
            date_list.append(stamp)
            name = re.sub(r"[^\w\s]", "", date_list[0])
            os.rename("C:/test.txt", ("C:/%s.txt" % name))
            try:
                sleep(60.00)
            except:
                exit()

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(SrvMonSvc)

Have you made any progress on your original problem? I had a similar problem with a Python service and found out that it was missing DLLs, since the 'System Path' (not the user path) was not complete.
Running pythonservice.exe with -debug from the command prompt was not a problem because it used the correct PATH environment variable, but if your service is installed as a system service it's worth checking whether the System Path variable has all the paths for the required DLLs (MSVC, Python, System32). For me it was missing the Python DLL path; after I added it, the service worked again.
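One way to check what the service itself sees: temporarily add something like this at the top of your service class's SvcDoRun (a debugging sketch of my own, not from the original answer; the C:\service_path.txt location is arbitrary) and compare the dumped PATH with the one you get in a normal command prompt:
import os

def SvcDoRun(self):
    # Dump the environment this service process actually runs with; a system
    # service gets the *system* PATH, which may be missing the Python/MSVC DLL
    # directories even when your user PATH has them.
    with open(r"C:\service_path.txt", "w") as f:
        f.write(os.environ.get("PATH", ""))
    self.ReportServiceStatus(win32service.SERVICE_RUNNING)
    win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)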

Related

How to solve ValueError('Invalid async_mode specified') for flask-socketio?

I'm testing a flask-socketio server in a Bitbucket pipeline. It failed with the following messages:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/build-3vGKWv3F/lib/python3.7/site-packages/flask_failsafe.py", line 29, in wrapper
return func(*args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/main.py", line 89, in create_app
return cell_data_api.create_app()
File "/opt/atlassian/pipelines/agent/build/cell_data_api/__init__.py", line 30, in create_app
socketio.init_app(app)
File "/root/.local/share/virtualenvs/build-3vGKWv3F/lib/python3.7/site-packages/flask_socketio/__init__.py", line 243, in init_app
self.server = socketio.Server(**self.server_options)
File "/root/.local/share/virtualenvs/build-3vGKWv3F/lib/python3.7/site-packages/socketio/server.py", line 127, in __init__
self.eio = self._engineio_server_class()(**engineio_options)
File "/root/.local/share/virtualenvs/build-3vGKWv3F/lib/python3.7/site-packages/engineio/server.py", line 145, in __init__
raise ValueError('Invalid async_mode specified')
ValueError: Invalid async_mode specified
Traceback (most recent call last):
File "/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/atlassian/pipelines/agent/build/main.py", line 117, in <module>
socketio.run(APP, host='0.0.0.0', port=8080, debug=True, use_reloader=True)
File "/root/.local/share/virtualenvs/build-3vGKWv3F/lib/python3.7/site-packages/flask_socketio/__init__.py", line 564, in run
if app.debug and self.server.eio.async_mode != 'threading':
AttributeError: 'NoneType' object has no attribute 'eio'
My main.py file looks like:
import os
from cell_data_api import socketio

# Detect if we are running in App Engine
# Make sure this does NOT start if we are running a Cloud Function
if os.getenv('APP_ENGINE', '') == 'TRUE':
    import cell_data_api
    APP = cell_data_api.create_app()

if __name__ == '__main__':
    from flask_failsafe import failsafe

    @failsafe
    def create_app():
        # note that the import is *inside* this function so that we can catch
        # errors that happen at import time
        import cell_data_api
        # If `entrypoint` is not defined in app.yaml, App Engine will look for an app
        # called `app` in `main.py`.
        return cell_data_api.create_app()

    APP = create_app()

    # This is used when running locally only. When deploying to Google App
    # Engine, a webserver process such as Gunicorn will serve the app. This
    # can be configured by adding an `entrypoint` to app.yaml.
    socketio.run(APP, host='0.0.0.0', port=8080, debug=True, use_reloader=True)
Main.py imports from cell_data_api.py, which looks like:
import os

from flask import Flask
from flask_cors import CORS
# import eventlet
from engineio.async_drivers import eventlet
from flask_socketio import SocketIO

socketio = SocketIO(
    always_connect=True,
    logger=True,
    async_mode=eventlet,
    cookie=...,
    ping_timeout=...
)

def create_app():
    # create and configure the app
    app = Flask(__name__)
    CORS(app)
    ......
    socketio.init_app(app)
    # ensure the instance folder exists
    try:
        os.makedirs(app.instance_path)
    except OSError:
        pass
    return app
My environment is Python 3.7 with installation packages:
[dev-packages]
alembic = "*"
flask_failsafe = "*"
wcwidth = "*"
[packages]
flask = "*"
absl-py = "*"
flask-cors = "*"
grpcio = "*"
transitions = "*"
sqlalchemy-json = "*"
sqlalchemy = "1.3.0"
flask_socketio = "*"
eventlet = "*"
Unlike the other two questions I found about the same error, I'm not using pyinstaller or cx_Freeze.
The async_mode parameter takes a string as an argument.
Instead of this:
async_mode=eventlet,
Do this:
async_mode='eventlet',
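Put together with the constructor from the question (omitting the cookie and ping_timeout values that the question elides), that would look something like this:
from flask_socketio import SocketIO

# async_mode expects the driver name as a string, not the imported module object.
socketio = SocketIO(
    always_connect=True,
    logger=True,
    async_mode='eventlet',
)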

Create class in a different file in MicroPython

I'm trying to use the MPU6050 module for MicroPython in NodeMCU, using uPyCraft as an editor. (https://github.com/larsks/py-mpu6050/blob/master/mpu6050.py)
I've run into some trouble, so I simplified the code to the point where I just try to call a class defined in a different file, and I still cannot get it to work. This is what I've done so far:
I created a new file (myFile.py) and it looks like:
import machine

class myClass(object):
    def __init__(self):
        print("init method called")
Then in my main.py I do:
import myFile
myclass = myFile.myClass()
I run main.py and I get this error on the myclass = myFile.myClass() line:
'module' has no method myClass
I have no experience with MicroPython (although I'm a C# coder) so I'm pretty sure there is some detail about the syntax I'm missing. Any help?
Here are some tests (the file name here is the real one; I simplified it in the question):
>>> print(open('mpuController.py').read())
import machine

class myClass(object):
    def __init__(self):
        print("init method called")
other:
>>> print (open('testMPU.py','r').read())
import time
import mpuController
print("starting")
mpu = mpuController.myClass()
print("finished")
and then when running:
exec(open('./testMPU.py').read(),globals())
starting
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 5, in <module>
AttributeError: 'module' object has no attribute 'myClass'
Change
import myFile.py
to
import myFile
You're importing a module (myFile), not a file. Let Python sort it out.
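For completeness, a minimal usage sketch (assuming myFile.py has been uploaded to the board's filesystem) would be:
# main.py
import myFile                      # import the module by name, no ".py"

myclass = myFile.myClass()         # prints "init method called"

# or, equivalently, import the class directly:
from myFile import myClass
myclass = myClass()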

Pyusb on Windows 7 - NotImplementedError: is_kernel_driver_active

I am using the escposprinter Python library for printing my data on a thermal printer. It depends on pyusb. The library works fine on Linux, but on Windows 7 I run into some issues. Here is the output I get:
File "main.py", line 1, in <module>
from app import app
File "D:\freeth-in-erp-60ab8eb96fad\app\__init__.py", line 94, in <module>
from .api_routes import *
File "D:\freeth-in-erp-60ab8eb96fad\app\api_routes.py", line 44, in <module>
from .printer import pos
File "D:\freeth-in-erp-60ab8eb96fad\app\printer\pos.py", line 14, in <module>
Epson = printer.Usb(idVendor=0x0416,idProduct=0x5011)
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\escposprinter\print
er.py", line 37, in __init__
self.open()
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\escposprinter\print
er.py", line 46, in open
if self.device.is_kernel_driver_active(0):
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\core.py", line
1064, in is_kernel_driver_active
interface)
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\backend\__init_
_.py", line 365, in is_kernel_driver_active
_not_implemented(self.is_kernel_driver_active)
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\backend\__init_
_.py", line 81, in _not_implemented
raise NotImplementedError(func.__name__)
NotImplementedError: is_kernel_driver_active
I downloaded libusb-1.20 from libusb.info, copied the file libusb-1.0.dll from MinGW32, and pasted it into "C:\windows\System32". I then get the following result:
from app import app
File "D:\freeth-in-erp-60ab8eb96fad\app\__init__.py", line 94, in <module>
from .api_routes import *
File "D:\freeth-in-erp-60ab8eb96fad\app\api_routes.py", line 44, in <module>
from .printer import pos
File "D:\freeth-in-erp-60ab8eb96fad\app\printer\pos.py", line 14, in <module>
Epson = printer.Usb(idVendor=0x0416,idProduct=0x5011)
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\escposprinter\printer.py", line 37, in __init__
self.open()
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\escposprinter\printer.py", line 46, in open
if self.device.is_kernel_driver_active(0):
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\core.py", line 1064, in is_kernel_driver_active
interface)
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\backend\libusb1.py", line 898, in is_kernel_driver_active
intf)))
File "D:\freeth-in-erp-60ab8eb96fad\venv\lib\site-packages\usb\backend\libusb1.py", line 593, in _check
raise NotImplementedError(_strerror(ret))
NotImplementedError: Operation not supported or unimplemented on this platform
My code is:
from escposprinter import *
from tabulate import tabulate

Epson = printer.Usb(0x0416,0x5011)
And this is the relevant class from the escposprinter library:
class Usb(Escpos):
    """ Define USB printer """

    def __init__(self, idVendor, idProduct, interface=0, in_ep=0x82, out_ep=0x01):
        """
        #param idVendor  : Vendor ID
        #param idProduct : Product ID
        #param interface : USB device interface
        #param in_ep     : Input end point
        #param out_ep    : Output end point
        """
        self.idVendor = idVendor
        self.idProduct = idProduct
        self.interface = interface
        self.in_ep = in_ep
        self.out_ep = out_ep
        self.open()

    def open(self):
        """ Search device on USB tree and set is as escpos device """
        self.device = usb.core.find(idVendor=self.idVendor, idProduct=self.idProduct)
        if self.device is None:
            print ("Cable isn't plugged in")
        if self.device.is_kernel_driver_active(0):
            try:
                self.device.detach_kernel_driver(0)
            except usb.core.USBError as e:
                print ("Could not detatch kernel driver: %s" % str(e))
Help me with your suggestions.
The problem, plain and simple, is that the code checks whether a kernel driver is active, which is not possible on Windows. It seems this library is simply not written for use on Windows.
You can use it on Windows if you delete the last five lines of the source code shown above, starting with if self.device.is_kernel_driver_active(0):, or if you check beforehand whether the code is running on Windows and skip them.
Remember that on Windows you need to install your own driver. I recommend using Zadig to create and/or install the driver.
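As an illustration of the second option, here is a sketch (mine, not from the library) of how open() could skip the kernel-driver check when running on Windows:
import sys
import usb.core

def open(self):
    """ Search device on USB tree and set it as the escpos device """
    self.device = usb.core.find(idVendor=self.idVendor, idProduct=self.idProduct)
    if self.device is None:
        print("Cable isn't plugged in")
    # pyusb's Windows backend does not implement is_kernel_driver_active(),
    # so only run the detach logic on non-Windows platforms.
    if sys.platform != 'win32' and self.device.is_kernel_driver_active(0):
        try:
            self.device.detach_kernel_driver(0)
        except usb.core.USBError as e:
            print("Could not detach kernel driver: %s" % str(e))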

How to run the playbook API in Ansible v2 with vault

Here is what I have. I know this works without encryption, and I can run
ansible-vault edit common.yml
with
ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
set in the env.
import os  # needed for os.environ below
from collections import namedtuple

from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook import Playbook
from ansible.executor.playbook_executor import PlaybookExecutor
variable_manager = VariableManager()
loader = DataLoader()
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='playbooks/hosts')
playbook_path = 'playbooks/' + PROJECT + '.yml'
Options = namedtuple('Options', ['connection', 'forks', 'become', 'become_method', 'become_user', 'check', 'listhosts', 'listtasks', 'listtags', 'syntax', 'module_path', 'vault_password_file'])
options = Options(connection='ssh', forks=5, become=None, become_method=None, become_user=None, check=False, listhosts=False, listtasks=False, listtags=False, syntax=False, module_path="", vault_password_file=os.environ['ANSIBLE_VAULT_PASSWORD_FILE'])
variable_manager.extra_vars = {'CAP_VERSION': CAP_VERSION, 'cluster': PROJECT + '-' + ENVIRONMENT, 'environ': ENVIRONMENT, 'rpm': rpmSource, 'VRSN': ARTI_BRANCH }
passwords = {}
pbex = PlaybookExecutor(playbooks=[playbook_path], inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords=passwords)
results = pbex.run()
It fails to decrypt the common.yml
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/ansible/ansible/lib/ansible/executor/playbook_executor.py", line 125, in run
all_vars = self._variable_manager.get_vars(loader=self._loader, play=play)
File "/opt/ansible/ansible/lib/ansible/vars/__init__.py", line 304, in get_vars
data = preprocess_vars(loader.load_from_file(vars_file))
File "/opt/ansible/ansible/lib/ansible/parsing/dataloader.py", line 119, in load_from_file
(file_data, show_content) = self._get_file_contents(file_name)
File "/opt/ansible/ansible/lib/ansible/parsing/dataloader.py", line 178, in _get_file_contents
data = self._vault.decrypt(data, filename=b_file_name)
File "/opt/ansible/ansible/lib/ansible/parsing/vault/__init__.py", line 264, in decrypt
raise AnsibleError(msg)
ansible.errors.AnsibleError: Decryption failed on /ansible/playbooks/vars/common.yml
In ansible 2.2.2 (not sure about other versions since the API can change frequently):
You can manually set the password in the python script like so:
loader = DataLoader()
loader.set_vault_password('mypass')
Or you could load the password from your vault password file:
import os
loader = DataLoader()
with open('{}/.vault_pass.txt'.format(os.path.expanduser('~')), 'r') as file:
loader.set_vault_password(file.read().splitlines()[0])
You can skip importing os and just put in your absolute path to the .vault_pass.txt file.
If you are sure your ANSIBLE_VAULT_PASSWORD_FILE is set in env:
import os
loader = DataLoader()
with open(os.environ['ANSIBLE_VAULT_PASSWORD_FILE'], 'r') as file:
loader.set_vault_password(file.read().splitlines()[0])
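Dropped into the script from the question, the password has to be set right after the DataLoader is created and before the inventory and playbook are loaded, roughly like this (assuming ANSIBLE_VAULT_PASSWORD_FILE is still set in the environment):
import os

variable_manager = VariableManager()
loader = DataLoader()

# Hand the vault password to the loader before any encrypted vars files are read.
with open(os.environ['ANSIBLE_VAULT_PASSWORD_FILE'], 'r') as f:
    loader.set_vault_password(f.read().splitlines()[0])

inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='playbooks/hosts')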

BrokenPipeError: [WinError 109] The pipe has been ended during data extraction

I am new to multiprocessing in Python. I am extracting some features from a list of 70,000 URLs, which come from 2 different files. After the feature extraction process I pass the results to a list and then to a CSV file.
The code runs but then stops with the error. I tried to catch the error, but it produced another one.
Python version = 3.5
from feature_extractor import Feature_extraction
import pandas as pd
from pandas.core.frame import DataFrame
import sys
from multiprocessing.dummy import Pool as ThreadPool
import threading as thread
from multiprocessing import Process, Manager, Array
import time

class main():
    lst = None

    def __init__(self):
        manager = Manager()
        self.lst = manager.list()
        self.dostuff()
        self.read_lst()

    def feature_extraction(self, url):
        if self.lst is None:
            self.lst = []
        features = Feature_extraction(url)
        self.lst.append(features.get_features())
        print(len(self.lst))

    def Pool(self, url):
        pool = ThreadPool(8)
        results = pool.map(self.feature_extraction, url)

    def dostuff(self):
        df = pd.read_csv('verified_online.csv', encoding='latin-1')
        df['label'] = df['phish_id'] * 0
        mal_urls = df['url']
        df2 = pd.read_csv('new.csv')
        df2['label'] = df['phish_id']/df['phish_id']
        ben_urls = df2['urls']
        t = Process(target=self.Pool, args=(mal_urls,))
        t2 = Process(target=self.Pool, args=(ben_urls,))
        t.start()
        t2.start()
        t.join()
        t2.join

    def read_lst(self):
        nw_df = DataFrame(list(self.lst))
        nw_df.columns = ['Redirect count','ssl_classification','url_length','hostname_length','subdomain_count','at_sign_in_url','exe_extension_in_request_url','exe_extension_in_landing_url',
                         'ip_as_domain_name','no_of_slashes_in requst_url','no_of_slashes_in_landing_url','no_of_dots_in_request_url','no_of_dots_in_landing_url','tld_value','age_of_domain',
                         'age_of_last_modified','content_length','same_landing_and_request_ip','same_landing_and_request_url']
        frames = [df['label'], df2['label']]
        new_df = pd.concat(frames)
        new_df = new_df.reset_index()
        nw_df['label'] = new_df['label']
        nw_df.to_csv('dataset.csv', sep=',', encoding='latin-1')

if __name__ == '__main__':
    start_time = time.clock()
    try:
        main()
    except BrokenPipeError:
        print("broken pipe....")
        pass
    print (time.clock() - start_time, "seconds")
Error Traceback
Process Process-3:
Traceback (most recent call last):
File "F:\Continuum\Anaconda3\lib\multiprocessing\connection.py", line 312, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "F:\Continuum\Anaconda3\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
File "F:\Continuum\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "H:\Projects\newoproject\src\main.py", line 33, in Pool
results = pool.map(self.feature_extraction, url)
File "F:\Continuum\Anaconda3\lib\multiprocessing\pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "F:\Continuum\Anaconda3\lib\multiprocessing\pool.py", line 608, in get
raise self._value
File "F:\Continuum\Anaconda3\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "F:\Continuum\Anaconda3\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "H:\Projects\newoproject\src\main.py", line 26, in feature_extraction
self.lst.append(features.get_features())
File "<string>", line 2, in append
File "F:\Continuum\Anaconda3\lib\multiprocessing\managers.py", line 717, in _callmethod
kind, result = conn.recv()
File "F:\Continuum\Anaconda3\lib\multiprocessing\connection.py", line 250, in recv
buf = self._recv_bytes()
File "F:\Continuum\Anaconda3\lib\multiprocessing\connection.py", line 321, in _recv_bytes
raise EOFError
EOFError
My response is late and does not address the posted problem directly, but hopefully it will provide a clue to others who encounter similar errors.
Errors that I encountered:
BrokenPipeError: [WinError 109] The pipe has been ended
BrokenPipeError: [WinError 232] The pipe is being closed
Observed with Python 3.6 on Windows 7, when:
(1) the same async function was submitted multiple times, each time with a different instance of a multiprocessing data store, a Queue in my case (multiprocessing.Manager().Queue())
AND
(2) the references to the Queues were saved in short-life local variables in the enveloping function.
The errors were occurring despite the fact that the Queues, shared with the successfully spawned and executing async-functions, had items and would still be in active use (put() & get()) at the time of exception.
The error consistently occurred when the same async_func was called the 2nd time with a 2nd instance of the Queue. Immediately after apply_async() of the function, the connection to the 1st Queue supplied to the async_func the 1st time, would get broken.
The issue was resolved when the references to the Queues were saved in non-overlapping, longer-lived variables in the enveloping function (for example a list of Queues, or variables returned to functions higher in the call stack).
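A rough sketch of that fix (my own illustration, not the original poster's code): keep every manager Queue in a container that outlives the apply_async() calls, instead of rebinding a single short-lived local name:
from multiprocessing import Manager, Pool

def worker(q):
    # Drain whatever is currently on the queue and sum it.
    total = 0
    while not q.empty():
        total += q.get()
    return total

if __name__ == '__main__':
    manager = Manager()
    pool = Pool(2)

    queues = []    # long-lived container: keeps every Queue proxy (and its pipe) alive
    results = []
    for batch in ([1, 2, 3], [4, 5, 6]):
        q = manager.Queue()
        for item in batch:
            q.put(item)
        queues.append(q)   # do NOT let the previous Queue reference go out of scope
        results.append(pool.apply_async(worker, (q,)))

    print([r.get() for r in results])   # e.g. [6, 15]
    pool.close()
    pool.join()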
