How can we call sub-flows from another flow XML? - ipaf

I have a main flow and several sub-flows. I need to call a flow at a specific level, at an activity. Is it possible to do this in the pace automation framework (PAF)?

You are able to call a flow defined in another flow XML file, or to create a master flow for flows defined in multiple flow XMLs. This also helps PAF users create a master test suite with multiple flows in it in ALM, SynapseRT, etc.
Tag Name: call
Attributes:
flow – to specify the flow id
xml – to specify the flow XML path in which the flow is defined
Syntax: <call flow="flow_id" xml="flow_xml_path"></call>

Related

Create a custom backend listener for JMeter to send test variables and other data to InfluxDB

I have a requirement to send custom parameters, variables and properties to InfluxDB as part of my JMeter tests, so that we can analyze test data based on the functionality of the application.
As of now I can use the InfluxDB backend listener, but it only has limited fields, which is not helpful in my case since I want to send more relevant data based on the application's functionality.
Can someone point me to the right resources to develop a custom backend listener that sends custom data to InfluxDB from JMeter, instead of depending on the existing listener?
I want a flexible option to send data specific to our application, not restricted to the fields the listener provides.
Currently I am using a View Results in Table listener to save this custom data to CSV files.
To do this I have modified our user.properties file and specified the sample_variables, like below:
sample_variables=employee_code,user_id,transName,transType,transVer,deptID,deptType,deptName
But instead of using an additional listener, I would like to send these variables for EACH HIT (for every sample) to InfluxDB. How do I achieve this? Any further help would be appreciated.
The easiest option is just using a normal HTTP Request sampler to send the metrics you want via the InfluxDB API.
If you still want to implement a custom listener, you can first of all take a look at the existing Backend Listener code.
Then get familiarized with the following materials:
How to write a plugin for JMeter
How to write your own JMeter listener. A guide
and maybe see a reference project like jmeter-backend-azure
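To make the "custom listener" route more concrete, here is a minimal sketch, assuming JMeter 5.x's AbstractBackendListenerClient and the InfluxDB 1.x HTTP write API (POST /write?db=... with line protocol). The class name, the influxUrl parameter and the measurement/tag names are placeholders, and buildLine() only forwards a few core SampleResult fields; you would extend it (for example with values your samplers attach to the sample label or response message) to carry your application-specific variables.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.visualizers.backend.AbstractBackendListenerClient;
import org.apache.jmeter.visualizers.backend.BackendListenerContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical custom backend listener: posts one line-protocol point per sample
// to an InfluxDB 1.x /write endpoint. Extend buildLine() with your own tags/fields.
public class CustomInfluxBackendListener extends AbstractBackendListenerClient {

    private static final Logger log = LoggerFactory.getLogger(CustomInfluxBackendListener.class);

    private String influxUrl;

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("influxUrl", "http://localhost:8086/write?db=jmeter");
        return args;
    }

    @Override
    public void setupTest(BackendListenerContext context) throws Exception {
        influxUrl = context.getParameter("influxUrl");
    }

    @Override
    public void handleSampleResults(List<SampleResult> results, BackendListenerContext context) {
        StringBuilder body = new StringBuilder();
        for (SampleResult result : results) {
            body.append(buildLine(result)).append('\n');
        }
        try {
            post(body.toString());
        } catch (Exception e) {
            log.error("Failed to write {} samples to InfluxDB", results.size(), e);
        }
    }

    // One InfluxDB line per sample; add your own tags/fields (deptID, transType, ...) here.
    private String buildLine(SampleResult result) {
        String label = result.getSampleLabel().replace(" ", "\\ ").replace(",", "\\,");
        return "jmeter,transaction=" + label + ",success=" + result.isSuccessful()
                + " elapsed=" + result.getTime() + "i"
                + ",responseCode=\"" + result.getResponseCode() + "\""
                + " " + result.getTimeStamp() * 1_000_000; // ms epoch -> ns timestamp
    }

    private void post(String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(influxUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // read the status so the request is actually sent
        conn.disconnect();
    }
}

Packaged as a jar and dropped into JMeter's lib/ext, the class should then be selectable as the implementation in a Backend Listener element; the guides linked above cover the packaging details.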

How to log all microservices' logs in a single log file using Spring Boot

I have 5 web applications which I developed using Spring Boot (A, B, C, D and E).
Below are the flows of the 2 calls:
First Flow:
A's Controller --> A's Service --> B's Controller --> B's Service --> C's Controller --> C's Service --> C's Dao --> DB
Second Flow:
A's Controller --> A's Service --> D's Controller --> D's Service --> B's Controller --> B's Service --> C's Controller --> C's Service --> C's Dao --> DB
Once data is fetched from / pushed into the DB, the corresponding methods return some value. Each and every method logs its status (input details and return status). I am able to see the logs of each service separately, but I want to see the logs of one complete request-response cycle (from A's controller request to A's controller response) in one file.
How can I achieve this?
This is a very bad idea, but let's take a step back and look at the problem instead of guessing a solution.
You have multiple applications that collaborate to execute (distributed) transactions. You need to trace those interactions to see your dataflow. This is very useful for many reasons, so it's correct that you care about it. It is also correct to collect all your log entries in a single sink, even if it won't be a file, because a file is not well suited to manage production workloads. A typical scenario that many organizations implement is the following:
Each application sends logs to files or standard output
For each node of your infrastructure there is an agent that reads those streams, does some basic conversion (e.g. translates log entries into a common format) and sends the data to a certain sink
The sink is a database; the best technology option is a DBMS without strict requirements about data schema (you are storing everything in a single huge table after all) or transactional properties (if the data are logs, you are fine with optimistic concurrency control). You also want a tool that is better at reads than writes and has good performance in complex searches, to drill down into a large amount of structured data
A dashboard to read the logs, run searches and even build dashboards with synthetic stats about events
BONUS: use a buffer to manage load spikes
There are well-established tools to do the job, and they are:
Logstash/Beats/Fluentd
Elasticsearch....what else? ;)
Kibana, the favourite Elasticsearch client
BONUS: RabbitMQ/Kafka/another message broker, or Redis
But you are still missing a step.
Suppose you call a REST API, something simple like a POST /users/:userId/cart
API Gateway receives your request with a JWT
API Gateway calls Authentication-service to validate and decode the JWT
API Gateway calls Authorization-service to check if the client has the right to perform the request
API Gateway calls User-service to find :userId
User-Service calls Cart-service to add the product on :userId cart
Cart-Service calls Notification-Service to decide whether a notification needs to be sent for the completed task
Notification-Service calls Push-Gateway to invoke an external push notification service
....and back
To not get lost in this labyrinth you NEED just one thing: the correlation ID.
Correlation IDs attach a unique ID to every interaction between these microservices (headers in HTTP calls or AMQP messages, for instance), and your custom log library (because you've already built a custom logging library and shared it among all the teams) captures this ID and includes it in every log entry written in the context of the single request processed by each of those microservices. You can even add that correlation ID to the client response, catch it if the response carries an error code, and run a query on your logs DB to find all the entries with the given correlation ID. If the system clocks are consistent, the entries will be retrieved in the correct time order and you will be able to reconstruct the dataflow.
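As an illustration only (the answer doesn't prescribe a particular library), a Spring Boot service could populate such a correlation ID with a simple servlet filter plus SLF4J's MDC. The header name X-Correlation-Id and the filter class name are arbitrary choices here, and the javax.servlet imports assume Spring Boot 2.x (on Boot 3 they become jakarta.servlet).

import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Hypothetical filter: reads the correlation ID from the incoming request (or creates one),
// stores it in the logging MDC and echoes it back to the client.
@Component
public class CorrelationIdFilter extends OncePerRequestFilter {

    private static final String HEADER = "X-Correlation-Id";

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String correlationId = request.getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString();
        }
        MDC.put("correlationId", correlationId);   // every log line of this request can print it
        response.setHeader(HEADER, correlationId); // return it to the caller for error reports
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId");           // don't leak the ID to the next request on this thread
        }
    }
}

With %X{correlationId} added to the logging pattern, every entry shipped to the central sink carries the ID; the same header then has to be forwarded on every outgoing HTTP call or message the service makes.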
Distributed systems make everything more complicated and add a lot of overhead to things we have always done before, but if you keep the right tools in your pocket to manage the complexity, you can see the benefits.
You can implement a central logging system like:
The ELK stack (Elasticsearch, Logstash and Kibana) for centralized logging.

SCDF WSDL source: Spring Cloud Task, Spring Cloud Stream, or another solution?

We have a requirement to get data from a SOAP web service, where the same records are going to be exposed. Each record is then transformed and written to the DB.
We are the active side, and at certain intervals we are going to check whether a new record has appeared.
Our main goals are:
to have a scheduler for setting intervals
to have a mechanism to retry if something goes wrong (e.g. a lost connection)
to have visual control of the process - to check the places where something got stuck (like the dashboard in SCDF)
Since there is no sample WSDL source app, I guess the Task (or Stream?) should be written by ourselves. But what should we use for repeating and scheduling...
I need your advice on choosing the right approach.
I'm not tied to the SCDF solution if any other is more suitable.
If you intend to consume the SOAP messages directly from external services, you could either build a custom Spring Cloud Stream source or a simple Spring Batch/Spring Cloud Task application. Both options provide the resiliency patterns, including retries.
However, if the upstream data is not real-time, you would choose the Task path, because streams are long-running and never terminate. Tasks, on the other hand, run for a finite period of time, terminate, and free up resources. There's also the option to use the platform-specific scheduler implementation to trigger the Task launch periodically on a recurring window.
From the SCDF dashboard, you can design/build Composed Tasks, including the state transitions and the desired downstream operation.
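For the Task path, a minimal sketch could look like the following. It assumes the spring-cloud-starter-task dependency; the class name and the commented polling/persistence steps are placeholders rather than a ready-made WSDL source. Each launch does one poll-transform-write pass and exits, and the recurring interval is left to the platform scheduler (or SCDF's scheduling support).

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

// Hypothetical single-run task: each launch polls the SOAP service once,
// persists whatever is new, then terminates so the platform can free the resources.
@EnableTask
@SpringBootApplication
public class SoapPollingTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(SoapPollingTaskApplication.class, args);
    }

    @Bean
    public CommandLineRunner pollOnce() {
        return args -> {
            // 1. Call the SOAP endpoint (e.g. with Spring WS's WebServiceTemplate)
            //    and collect the records that appeared since the last successful run.
            // 2. Transform each record and write it to the DB (JdbcTemplate/JPA).
            // An exception thrown here marks the task execution as failed, which is
            // what a retry policy or the SCDF dashboard can then react to.
            System.out.println("Polling SOAP service for new records...");
        };
    }
}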

Multiple workflow instances with Windows Workflow Foundation

I'm new to WF. What I'm trying to do is create a simple Workflow Service and call it from various clients. So what I have done: I have created a Workflow Service. It has a .xamlx file, and that has a sequence with Receive and SendReply activities. I also have correlations. The first ReceiveAndSendReply activity has CanCreateInstance set to True. In addition to this I wrote some of my own code activities.
Now I have hosted this service in IIS and am trying to call it using a console app. I have added the web reference, created a service client and passed the values to the service. It gives me the expected results.
But when I try to run another client at the same time it gives me an instance error. I think the workflow is not initiating a new instance for the second client.
So I did a search and found that multiple instancing can be achieved by using WorkflowServiceHost, but I could not find a way to do it.
I think the way I'm calling the service is not correct; I'm just creating a new object from the service reference and calling the operation.
Can anyone help me with this?
Please have a look at the correlation rules you've set up for your workflow. If several clients pass parameters which correlate with the same instance, a new instance won't be created.
So, if you need a new instance, you need to set up different correlation rules, so that different clients' calls correlate with different workflow instances.

Expose a process definition as a web service in TIBCO Designer

I'm trying to expose a process definition in TIBCO BW Designer 5.7 as a web service, but I've run into some snags. For some reason, I cannot start the Generate Web Service Wizard, because my process does not appear in the "Add More Processes to interface" list.
I've been searching online but to not much avail. What I've gathered is that I need to reference external schemas (using XML Element Reference) in my input (Start) and output (End), which I have done. So what could possibly be wrong?
Do I need to include any Process Variables or Partners under the Process Definition?
I'm very new to Designer so would appreciate some help here!
To expose a BusinessWorks process as a web service you need to use a WSDL message as input and output (and optionally error output). If you already have a process that is used by other processes and do not want to change its input/output schema, you could create another process that essentially wraps your initial process but exposes its input/output as WSDL messages. My suggestion would be to follow these approximate steps:
Create an XML schema containing the input and output formats
Create a WSDL resource
Add two Message resources (input/output), reference the above XML schema
Add a PortType resource
Add an Operation resource referencing the two Message resources as input and output
Set input/output of the process to expose to the WSDL messages defined above
Create a Service resource
Add the WSDL operation to the Service interface
Set the implementation of the operation to your process definition
Add a SOAP endpoint with an HTTP transport
Add the Service resource to your Process Archive
For more details on the parameters that can be used, see the BusinessWorks Palette Reference documentation.
The most common mistake in this case is not using an XML schema for the input and output. Make sure that you have one for every process in your project, and then you can continue with your web service generation.
Kind Regards
