Return status of python unittest - windows

I'm trying to call a unittest from another Python file and evaluate the exit code. I was able to use unittest.TestLoader().loadTestsFromModule and unittest.TextTestRunner.run to call the unittest from another Python file, but that returns the entire results to the cmd. I would like to simply set a variable equal to the status code so I can evaluate it. I found the method unittest.TestResult.wasSuccessful, but I'm having trouble implementing it. When I add it to the test case, I get the following error:
AttributeError: 'ConnectionTest' object has no attribute 'failures'
I've included some code samples below and a mockup of the desired result as an illustration of what I'm trying to achieve. Thank you in advance.
""" Tests/ConnectionTest.py """
import unittest
from Connection import Connection
class ConnectionTest(unittest.TestCase):
def test_connection(self):
#my tests
def test_pass(self):
return unittest.TestResult.wasSuccessful(self)
if __name__ == '__main__':
unittest.main()
""" StatusTest.py """
import unittest
import Tests.ConnectionTest as test
#import Tests.Test2 as test2
#import Tests.Test3 as test3
#import other unit tests ...
suite = unittest.TestLoader().loadTestsFromModule(test)
unittest.TextTestRunner(verbosity=2).run(suite)
""" Return True if unit test passed
"""
def test_passed(test):
if test.test_pass() == 0:
return True
else:
return False
""" Run unittest for each module before using it in code
"""
def main():
tests = "test test2 test3".split()
for test in tests:
if test_passed(test):
# do something
else:
# log failure
pass
Update
To put the question more simply, I need to set the highlighted variable below to the highlighted value.

You mentioned you tried implementing result.wasSuccessful, but would something like the following work:
result = unittest.TextTestRunner(verbosity=2).run(suite)
test_exit_code = int(not result.wasSuccessful())
The value of test_exit_code would then be either 0 when the test suite ran successfully or 1 otherwise.
If you want to disable the output of the TextTestRunner you can specify your own stream, such as:
from io import StringIO
result = unittest.TextTestRunner(stream=StringIO(), verbosity=2).run(suite)
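Putting it all together, StatusTest.py could run each suite quietly and branch on the boolean result. A minimal sketch, reusing the module import from your example (the commented-out Test2/Test3 modules would be added the same way):
import unittest
from io import StringIO

import Tests.ConnectionTest as test

def test_passed(module):
    # Load and run the module's tests with output suppressed,
    # reporting only whether everything passed.
    suite = unittest.TestLoader().loadTestsFromModule(module)
    result = unittest.TextTestRunner(stream=StringIO(), verbosity=2).run(suite)
    return result.wasSuccessful()

def main():
    for module in [test]:  # extend with test2, test3, ...
        if test_passed(module):
            pass  # do something
        else:
            pass  # log failure

if __name__ == '__main__':
    main()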

Related

Vertex AI Pipelines (Kubeflow) skip step with dependent outputs on later step

I’m trying to run a Vertex AI Pipelines job where I skip a certain pipeline step if the value of a certain pipeline parameter (in this case do_task1) is False. But because there is another step that runs unconditionally and expects the output of the first potentially skipped step, I get the following error, independently of do_task1 being True or False:
AssertionError: component_input_artifact: pipelineparam--task1-output_path not found. All inputs: parameters {
  key: "do_task1"
  value {
    type: STRING
  }
}
parameters {
  key: "task1_name"
  value {
    type: STRING
  }
}
It seems like the compiler just cannot find the output output_path from task1. So I wonder if there is any way to have some sort of placeholder for the outputs of steps that sit under a dsl.Condition, so that they get filled with default values unless the actual steps run and fill them with non-default values.
The code below represents the problem and is easily reproducible.
I'm using google-cloud-aiplatform==1.14.0 and kfp==1.8.11
from typing import NamedTuple
from kfp import dsl
from kfp.v2.dsl import Dataset, Input, OutputPath, component
from kfp.v2 import compiler
from google.cloud.aiplatform import pipeline_jobs

@component(
    base_image="python:3.9",
    packages_to_install=["pandas"]
)
def task1(
    # inputs
    task1_name: str,
    # outputs
    output_path: OutputPath("Dataset"),
) -> NamedTuple("Outputs", [("output_1", str), ("output_2", int)]):
    import pandas as pd

    output_1 = task1_name + "-processed"
    output_2 = 2

    df_output_1 = pd.DataFrame({"output_1": [output_1]})
    df_output_1.to_csv(output_path, index=False)

    return (output_1, output_2)

@component(
    base_image="python:3.9",
    packages_to_install=["pandas"]
)
def task2(
    # inputs
    task1_output: Input[Dataset],
) -> str:
    import pandas as pd

    task1_input = pd.read_csv(task1_output.path).values[0][0]
    return task1_input

@dsl.pipeline(
    pipeline_root='pipeline_root',
    name='pipelinename',
)
def pipeline(
    do_task1: bool,
    task1_name: str,
):
    with dsl.Condition(do_task1 == True):
        task1_op = (
            task1(
                task1_name=task1_name,
            )
        )
    task2_op = (
        task2(
            task1_output=task1_op.outputs["output_path"],
        )
    )

if __name__ == '__main__':
    do_task1 = True  # <------------ The variable to modify ---------------

    # compile pipeline
    compiler.Compiler().compile(
        pipeline_func=pipeline, package_path='pipeline.json')

    # create pipeline run
    pipeline_run = pipeline_jobs.PipelineJob(
        display_name='pipeline-display-name',
        pipeline_root='pipelineroot',
        job_id='pipeline-job-id',
        template_path='pipelinename.json',
        parameter_values={
            'do_task1': do_task1,  # pipeline compilation fails with either True or False values
            'task1_name': 'Task 1',
        },
        enable_caching=False
    )

    # execute pipeline run
    pipeline_run.run()
Any help is much appreciated!
The real issue here is with dsl.Condition(): it creates a sub-group, where task1_op is an inner task only "visible" from within the sub-group. In the latest SDK it throws a more explicit error message saying that task2 cannot depend on any inner task.
So to resolve the issue, you just need to move task2 inside the condition: if the condition is not met, you don't have a valid input to feed into task2 anyway.
    with dsl.Condition(do_task1 == True):
        task1_op = (
            task1(
                task1_name=task1_name,
            )
        )
        task2_op = (
            task2(
                task1_output=task1_op.outputs["output_path"],
            )
        )

Streamlit Unhashable TypeError when i use st.cache

When I use the st.cache decorator to cache a Hugging Face transformer model, I get an
Unhashable TypeError
This is the code:
from transformers import pipeline
import streamlit as st
from io import StringIO

@st.cache(hash_funcs={StringIO: StringIO.getvalue})
def model():
    return pipeline("sentiment-analysis", model='akhooli/xlm-r-large-arabic-sent')
After searching the issues section of the Streamlit repo,
I found that the hashing argument is not required; you just need to pass this argument:
allow_output_mutation = True
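In other words, the cached function can be written like this (a sketch of the same model function with the flag applied):
from transformers import pipeline
import streamlit as st

# allow_output_mutation=True tells Streamlit not to hash the returned
# pipeline object, so no custom hash_funcs are needed.
@st.cache(allow_output_mutation=True)
def get_model():
    return pipeline("sentiment-analysis", model='akhooli/xlm-r-large-arabic-sent')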
This worked for me:
from transformers import pipeline
import tokenizers
import streamlit as st
import copy

@st.cache(hash_funcs={tokenizers.Tokenizer: lambda _: None, tokenizers.AddedToken: lambda _: None})
def get_model():
    return pipeline("sentiment-analysis", model='akhooli/xlm-r-large-arabic-sent')

input = st.text_input('Text')
bt = st.button("Get Sentiment Analysis")

if bt and input:
    model = copy.deepcopy(get_model())
    st.write(model(input))
Note 1:
Calling the pipeline with input, model(input), changes the model, and we shouldn't change a cached value, so we need to copy the model and run it on the copy.
Note 2:
The first run will load the model using the get_model function; every subsequent run will use the cache.
Note 3:
You can read more about advanced caching in Streamlit in their documentation.

Decorator function is not working as expected

I was doing some testing with imports, and I wanted to test how fast certain packages get imported using function decorators. Here is my code:
import time

def timeit(func):
    def wrapper():
        start = time.time()
        func()
        end = time.time()
        print(f'{func.__name__} executed in {end - start} second(s)')
    return wrapper

@timeit
def import_matplotlib():
    import matplotlib.pyplot

@timeit
def import_numpy():
    import numpy

import_matplotlib()
import_numpy()
Output
import_matplotlib executed in 0.4385249614715576 second(s)
import_numpy executed in 0.0 second(s)
This is not the expected output given that numpy isn't imported in an instant. What is happening here, and how can this be fixed? Thank you.
Edit
If I make this change to import_numpy():
@timeit
def import_numpy():
    import numpy
    time.sleep(2)
The output becomes this:
import_matplotlib executed in 0.4556155204772949 second(s)
import_numpy executed in 2.0041260719299316 second(s)
This tells me that there isn't anything wrong with my decorator function. Why is this behavior occurring?
Try using the timeit module? It was built for this purpose and makes that code simpler.
>>> import timeit
>>> timeit.timeit(stmt='import numpy')
0.13844075199995132
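As for why import_numpy reports 0.0 seconds: matplotlib.pyplot itself imports numpy, so by the time import_numpy runs, numpy is already cached in sys.modules and the second import is little more than a dictionary lookup. A quick sketch to confirm this:
import sys
import time

import matplotlib.pyplot        # pulls in numpy as a dependency

print('numpy' in sys.modules)   # True: numpy is already loaded

start = time.time()
import numpy                    # effectively just a sys.modules lookup
print(f'import numpy took {time.time() - start} second(s)')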

Python error: one of the arguments is required

I'm trying to run some code from GitHub that uses Python to classify images, but I'm getting an error.
Here is the code:
import argparse as ap
import cv2
import imutils
import numpy as np
import os
from sklearn.svm import LinearSVC
from sklearn.externals import joblib
from scipy.cluster.vq import *

# Get the path of the testing set
parser = ap.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-t", "--testingSet", help="Path to testing Set")
group.add_argument("-i", "--image", help="Path to image")
parser.add_argument('-v', "--visualize", action='store_true')
args = vars(parser.parse_args())

# Get the path of the testing image(s) and store them in a list
image_paths = []
if args["testingSet"]:
    test_path = args["testingSet"]
    try:
        testing_names = os.listdir(test_path)
    except OSError:
        print "No such directory {}\nCheck if the file exists".format(test_path)
        exit()
    for testing_name in testing_names:
        dir = os.path.join(test_path, testing_name)
        class_path = imutils.imlist(dir)
        image_paths += class_path
else:
    image_paths = [args["image"]]
and this is the error message I'm getting
usage: getClass.py [-h]
(- C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset/test TESTINGSET | - C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset/test/test_1.jpg IMAGE)
[- C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset]
getClass.py: error: one of the arguments - C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset/test/--testingSet - C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset/test/test_1.jpg/--image is required
Can you please help me with this? Where and how should I write the file path?
This is an error your own program is issuing. The message is not about the file path but about the number of arguments. This line
group = parser.add_mutually_exclusive_group(required=True)
says that only one of your command-line arguments (-t, -i) is permitted. But it appears from the error message that you are supplying both --testingSet and --image on your command line.
Since you only have 3 arguments, I have to wonder if you really need argument groups at all.
To get your command line to work, drop the mutually-exclusive group and add the arguments to the parser directly.
parser.add_argument("-t", "--testingSet", help="Path to testing Set")
parser.add_argument("-i", "--image", help="Path to image")
parser.add_argument('-v',"--visualize", action='store_true')
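With the group removed, a normal invocation such as python getClass.py -t <path-to-test-set> should parse (the path below is illustrative). Since parse_args also accepts an explicit argument list, you can sanity-check the parser in-process:
# Illustrative path; substitute your own test-set directory.
args = vars(parser.parse_args(
    ["-t", "C:/Users/Lenovo/Downloads/iris/bag-of-words-master/dataset/test"]))
print(args["testingSet"])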

dajaxice: passing an argument to a python function

Using Dajaxice, I want to pass a parameter to a Python function.
In the html file I have the following statement
<i class="icon"></i>
and in my ajax.py file I have the function
@dajaxice_register
def sayhello(request, dir):
    print(dir)
It works fine if I remove the second argument dir in both the HTML and the Python file, but when dir is present, I get the error message "Something goes wrong".
Does anybody know what the issue could be here?
If you use Python 3.x, make the following change in the dajaxice module, in the file venv/lib/python3.2/site-packages/dajaxice/views.py:
def safe_dict(d):
    """
    Recursively clone json structure with UTF-8 dictionary keys
    http://www.gossamer-threads.com/lists/python/bugs/684379
    """
    if isinstance(d, dict):
        return dict([(k, safe_dict(v)) for k, v in d.items()])
    elif isinstance(d, list):
        return [safe_dict(x) for x in d]
    else:
        return d
Then change sayhello to:
import json

def sayhello(request):
    my_dict = json.loads(request.POST['argv'])
    dir = my_dict['dir']
    print(dir)
