Upload multiple test cases at one time in ALM?

I'm trying to upload multiple test cases in one go. How can I upload multiple test cases at one time in ALM?

All flow files that you upload should be updated with the name attribute.
Make sure the src folder has a properties file named "multipleFlows.properties", or create it if it does not exist.
Update the multipleFlows.properties file with all the flow IDs and flow XML paths that you would like to upload through ALMSync, as mentioned below.
For example, the multipleFlows.properties file should follow this format:
flow1_id=flow1_xml_path
flow2_id=flow2_xml_path
flow3_id=flow3_xml_path
flow4_id=flow4_xml_path
Open the Run Configuration ALMSync >> Arguments tab and update the arguments as follows:
createTestCase flow_map multipleFlows
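A minimal concrete sketch of the properties file, with hypothetical flow IDs and file paths:
# hypothetical entries; use your own flow IDs and XML locations
LoginFlow_01=C:/ALMSync/flows/login_flow.xml
CheckoutFlow_02=C:/ALMSync/flows/checkout_flow.xml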


How to rename the title of the HTML Report generated by pytest-html plug in?

I am generating an HTML report using the pytest-html plugin. I'm executing the pytest file by running "pytest --html=report.html" on the command line.
So the name and title of the generated HTML report is report.html. I want to change the title of the generated report.
Please let me know how to do that.
Since v2.1.0, this plugin exposes a hook called before adding the title to the report. You can add this to conftest.py:
def pytest_html_report_title(report):
    report.title = 'your title!'
This is also explained in the plugin's User Guide.
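With this hook in conftest.py, running pytest --html=report.html should still write report.html, but the page title will show 'your title!' instead of the file name.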
Create a conftest.py file in the same folder as the test; this file is used to configure pytest.
Put this snippet inside:
from py.xml import html

def pytest_html_results_summary(prefix, summary, postfix):
    prefix.extend([html.h1("A GOOD TITLE")])
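Note that pytest_html_results_summary adds a heading to the summary section of the report rather than changing the page title itself; for the title, use the pytest_html_report_title hook shown above.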
If you need to change the HTML report file name, you can try something like this:
import os
from datetime import datetime

import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    # remove the environment section
    config._metadata = None
    if not os.path.exists('reports'):
        os.makedirs('reports')
    config.option.htmlpath = 'reports/' + datetime.now().strftime("%d-%m-%Y %H-%M-%S") + ".html"
My example will put the report file in a folder called reports, naming it with a timestamp instead of a static name.
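The @pytest.hookimpl(tryfirst=True) marker matters here: it should make this pytest_configure run before pytest-html's own configuration, so the programmatic htmlpath is already in place when the plugin reads the option.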
From what I see in the code, there is no way to change only the report's title yet; for now it is hardcoded as
html.h1(os.path.basename(self.logfile))
So the report title will always be the report file name. I've just pushed a merge request to the project to add a new hook that allows changing the title without changing the file name; we will see if it is accepted.

Placing file inside folder of S3 bucket

I have a Spring Boot application where I am trying to place a file inside a folder of an S3 target bucket: target-bucket/targetsystem-folder/file.csv
The targetsystem-folder name will differ for each file and is retrieved from a YAML configuration file.
The targetsystem-folder has to be created via code if the folder does not exist, and the file should be placed under that folder.
As I understand it, there is no folder concept in an S3 bucket and everything is stored as objects.
I have read in some documents that to place the file under a folder, you give a key-expression like targetsystem-folder/file.csv with bucket = target-bucket.
But it does not work. I would like to achieve this using spring-integration-aws without using the AWS SDK directly.
<int-aws:s3-outbound-channel-adapter id="filesS3Mover"
        channel="filesS3MoverChannel"
        transfer-manager="transferManager"
        bucket="${aws.s3.target.bucket}"
        key-expression="headers.targetsystem-folder/headers.file_name"
        command="UPLOAD">
</int-aws:s3-outbound-channel-adapter>
Can anyone guide me on this issue?
Your problem is that the SpEL in the key-expression is wrong. Just try to start from regular Java code and imagine how you would build such a value. You'll then see that you are missing a concatenation operation in your expression:
key-expression="headers['targetsystem-folder'] + '/' + headers.file_name"
(Bracket notation is needed for the first header because its name contains a hyphen, which SpEL would otherwise parse as a minus operator.)
Also, please provide more info about the error in the future. In most cases the stack trace is very helpful.
In the project that I worked on before, I just used the provided Java AWS SDK. In my implementation, I did something like this:
private void uploadFileTos3bucket(String fileName, File file) {
    // note: the key should not start with "/", or S3 creates an extra empty-named prefix
    s3client.putObject(new PutObjectRequest("target-bucket", "targetsystem-folder/" + fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}
I didn't create any more configuration. It automatically creates the targetsystem-folder prefix inside the bucket (and puts the file inside it) if it doesn't exist; otherwise, it just puts the file inside.
You can take this answer as reference, for further explanation of the subject.
There are no "sub-directories" in S3. There are buckets and there are keys within buckets.
You can emulate traditional directories by using prefix searches. For example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
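To make the prefix idea concrete, here is a minimal sketch using Python and boto3 (my assumption; the question itself asks for spring-integration-aws, so treat this as an illustration of the S3 key model rather than the answer's method). Bucket and key names are hypothetical:

import boto3

s3 = boto3.client("s3")

# "foo/bar1" is just a key; uploading it makes the "foo/" prefix appear as a folder
s3.put_object(Bucket="target-bucket", Key="foo/bar1", Body=b"data")

# emulate a directory listing with a prefix search
resp = s3.list_objects_v2(Bucket="target-bucket", Prefix="foo/")
for obj in resp.get("Contents", []):
    print(obj["Key"])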

assign variables directly to build in jenkins

I am new to Jenkins. I have a wrapper script that runs overnight in Jenkins.
This wrapper script takes its input from a .csv file which contains a list of projects; I had to invoke it this way: ./wrapper_script project.csv
This has one problem: it runs all the projects in one single build, but my requirement is to run one build per project. I have already installed the necessary plugins.
How can I give the project.csv content as input to the build, where I will trigger wrapper_script.sh directly?
Have a look at the Job DSL Plugin. You could create a seed job that reads the CSV file, iterates over the records, and creates a job for each record. If you need a more detailed code example, please include sample data from your CSV file.
OK. Given that the CSV you provided is so simple, you could skip using a CSV library. Your Job DSL seed job would be something like this:
// assumes each line of project.csv is "jobName,argument"
new File('project.csv').splitEachLine(',') { fields ->
    job(fields[0]) {
        steps {
            shell("your build command " + fields[1])
        }
    }
}
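If the seed job runs in a workspace on an agent, Job DSL's readFileFromWorkspace('project.csv') is usually a safer way to load the file than new File(...), since the DSL script's working directory is not guaranteed to be the workspace.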

HP UFT API Test - Saving Response/Checkpoint values

Is there a way to capture and store (or write to a file) the values returned in the Response? (Checkpoint values)
Using HP UFT 11.52
Thanks,
Lynn
I figured it out. In UFT API, under Standard Activities, there are File function modules, including "Write to File". I added the module to the test, set the path and other properties, passed the variable to the file, and it worked! Couldn't be easier.
I mentioned this in my other answer; you can also write it programmatically if you have a dynamic array response. Please refer below:
https://stackoverflow.com/a/28012383/3972994
After running a test, you can find a Snapshots/LastIteration directory in the test folder.
In it you can find the return value for each step, saved in a txt file.
Note that if you data-drive the step, only the last iteration will be saved to file.
However, in the test's log (Test dir/Log/vtd_user.log) you can find all the iterations persisted.
Thanks,
Yossi
You do not need to use the standard activities if you do this:
// directoryPath and fileName are placeholders for your own values
var iResponse = this.Activity.ResponseBody;
System.IO.File.WriteAllText(directoryPath + fileName, iResponse.ToString());
The above will write the response to the file, rewriting it on every run.

BIRT: Specifying XML Datasource file as parameter does not work

Using BIRT Designer 3.7.1, it's easy enough to define a report for an XML file data source; however, the input file name is initially written into the .rptdesign file as a constant value. Nice for a start, but useless in real life. What I want is to start the BIRT ReportEngine via the genReport.bat script, specifying the name of the XML data source file as a parameter. That should be trivial, but it is surprisingly difficult...
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime. Also, in BIRT Designer you can define the Report Parameter (datasource) and give it a default value, say "file://d:/sample.xml".
Yet, it doesn't work. This is the result of my Preview attempt in Designer:
Cannot open the connection for the driver: org.eclipse.datatools.enablement.oda.xml.
org.eclipse.datatools.connectivity.oda.OdaException: The xml source file cannot be found or the URL is malformed.
ReportEngine, started with 'genReport.bat -p "datasource=file://d:/sample.xml" xx.rptdesign' says nearly the same.
Of course, I have made sure that the XML file exists, and tried different spellings of the file URL. So, what's wrong?
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime.
No, it won't - at least, if you specify the value of &XML Data Source File as params["datasource"].value (instead of a valid XML file path) at design time then you will get an error when attempting to run the report. This is because it is trying to use the literal string params["datasource"].value for the file path, rather than the value of params["datasource"].value.
Instead, you need to use an event handler script - specifically, a beforeOpen script.
To do this:
Left-click on your data source in the Data Explorer.
In the main Report Design pane, click on the Script tab (instead of the Layout tab). A blank beforeOpen script should be visible.
Copy and paste the following code into the script:
this.setExtensionProperty("FILELIST", params["datasource"].value);
If you now run the report, you should find that the value of the parameter datasource is used for the XML file location.
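(FILELIST here is the extension property under which the XML ODA driver stores the source file location, which is why it is the key passed to setExtensionProperty.)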
You can find out more about parameter-driven XML data sources on BIRT Exchange.
Since this is an old thread but still useful, I'll add some info:
In the Edit Data Source dialog, enter a sample URL so you have data to work with.
Create your dataset.
Then remove the URL from the data source.
Add the beforeOpen script described above.
