I'm trying to expose a process definition in TIBCO BW Designer 5.7 as a web service, but I've run into some snags. For some reason, I cannot start the Generate Web Service Wizard, because my process does not appear in the "Add More Processes to interface" list.
I've been searching online, but without much success. What I've gathered is that I need to reference external schemas (using XML Element Reference) in my input (Start) and output (End), which I have done. So what could possibly be wrong?
Do I need to include any Process Variables or Partners under the Process Definition?
I'm very new to Designer so would appreciate some help here!
To expose a BusinessWorks process as a web service you need to use WSDL messages as input and output (and optionally error output). If you already have a process that is used by other processes and don't want to change its input/output schema, you could create another process that essentially wraps your initial process but exposes its input/output as WSDL messages. My suggestion would be to follow these approximate steps (a sketch of the resulting WSDL follows the list):
1) Create an XML schema containing the input and output formats
2) Create a WSDL resource
3) Add two Message resources (input/output) that reference the above XML schema
4) Add a PortType resource
5) Add an Operation resource referencing the two Message resources as input and output
6) Set the input/output of the process you want to expose to the WSDL messages defined above
7) Create a Service resource
8) Add the WSDL operation to the Service interface
9) Set the implementation of the operation to your process definition
10) Add a SOAP endpoint with an HTTP transport
11) Add the Service resource to your Process Archive
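For orientation, this is roughly what the abstract part of the WSDL from steps 1)-5) ends up looking like once the Message, PortType, and Operation resources are wired together (Designer builds this from the palette resources; all names here are made up for illustration):

<wsdl:definitions name="OrderService"
    targetNamespace="http://example.com/order"
    xmlns:tns="http://example.com/order"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">

  <!-- Step 1: the XML schema with the input and output formats -->
  <wsdl:types>
    <xsd:schema targetNamespace="http://example.com/order">
      <xsd:element name="OrderRequest" type="xsd:string"/>
      <xsd:element name="OrderResponse" type="xsd:string"/>
    </xsd:schema>
  </wsdl:types>

  <!-- Step 3: two messages referencing the schema elements -->
  <wsdl:message name="OrderRequestMessage">
    <wsdl:part name="in" element="tns:OrderRequest"/>
  </wsdl:message>
  <wsdl:message name="OrderResponseMessage">
    <wsdl:part name="out" element="tns:OrderResponse"/>
  </wsdl:message>

  <!-- Steps 4 and 5: a port type with one operation wired to the messages -->
  <wsdl:portType name="OrderPortType">
    <wsdl:operation name="ProcessOrder">
      <wsdl:input message="tns:OrderRequestMessage"/>
      <wsdl:output message="tns:OrderResponseMessage"/>
    </wsdl:operation>
  </wsdl:portType>
</wsdl:definitions>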
For more details on the parameters that can be used, see the BusinessWorks Palette Reference documentation.
The most common mistake in this case is not using an XML schema for the input and output. Make sure you have one for every process in your project, and then you can continue with the web service generation.
Kind Regards
I'm trying out the .NET agent in Elastic APM with a C# application built on a framework called ASP.NET Boilerplate. I've added the core libraries as mentioned in the documentation and added the settings in appsettings.json. This enables the default instrumentation, and I get traces in APM, visualized through Kibana.
Currently I've got a Node.js application running, and I publish a message to a RabbitMQ queue with the traceparent in the message payload. The C# app reads the published message. I need to create a transaction or span using this traceparent / trace id so that Kibana shows the trace across the distributed systems.
I want to know if there is a way to create a transaction (or span) using a traceparent that is sent from another system not using HTTP. I've checked the Elastic APM agent documentation -> Public API for information but couldn't find anything on this. Is there a way? Thanks.
I want to know if there is a way to create a transaction (or span) using a traceparent that is being sent from another system not using a HTTP protocol.
Yes, this is possible and there is an API for it. This part of the documentation explains it.
So you'll need to do this when you start your transaction - I imagine in your scenario this will be when you read a message from RabbitMQ.
When you start the transaction there is an optional parameter called distributedTracingData. If you pass it, the transaction will reuse the trace id you passed through RabbitMQ, and this way the new transaction will be part of the whole trace. If you don't pass this parameter, a new trace id is generated and a new trace is started.
One more comment that may help: you pass the trace id into the method that starts the transaction, and every span within that transaction will inherit it. So you control this at the transaction level; accordingly, you don't pass it into a span.
Here is a small code snippet of how this could look (GetTraceParentFromMessage is a placeholder for however you read the value out of the RabbitMQ message):

// Read the serialized traceparent from the incoming RabbitMQ message
var serializedDistributedTracingData = GetTraceParentFromMessage();

var transaction = Agent.Tracer.StartTransaction("ReadFromQueue", "RabbitMQRead",
    DistributedTracingData.TryDeserializeFromString(serializedDistributedTracingData));
@gregkalapos, again thank you for the information. I checked how to acquire the necessary trace information as described in the Node.js agent documentation, and when I debugged I noticed that it was the trace id. Next, on the C# consumer end, I placed a code snippet as described for the .NET agent and gave it a run. Kibana displayed the transactions from the two different services in a single trace, as I hoped it would.
We have a requirement to fetch data from a SOAP web service that exposes records. Each record is then transformed and written to the DB.
We are the active side, and at certain intervals we are going to check whether a new record has appeared.
Our main goals are:
to have a scheduler for setting the intervals
to have a retry mechanism for when something goes wrong (e.g. a lost connection)
to have visual control of the process, i.e. to see where something got stuck (like the dashboard in SCDF)
Since there is no sample WSDL source app, I guess we have to write the Task (or Stream?) ourselves. But what should we use for the repetition and scheduling...
I need your advice on choosing the right approach.
I'm not tied to the SCDF solution if another one is more suitable.
If you intend to consume SOAP messages directly from external services, you could either build a custom Spring Cloud Stream source or a simple Spring Batch/Spring Cloud Task application. Both options provide resiliency patterns, including retries.
However, if the upstream data is not real-time, you would choose the Task path, because streams are long-running and never terminate. Tasks, on the other hand, run for a finite period of time, terminate, and free up resources. There's also the option of using the platform-specific scheduler implementation to launch the Task periodically on a recurring schedule.
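If the Task path fits, a minimal sketch could look like the following (one poll per launch, assuming the spring-cloud-starter-task dependency; the SOAP call itself is left as a comment because it depends on your WSDL):

import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask
public class PollSoapTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(PollSoapTaskApplication.class, args);
    }

    @Bean
    ApplicationRunner poll() {
        return args -> {
            // 1) call the SOAP service (e.g. with Spring WS's WebServiceTemplate)
            // 2) transform the new records
            // 3) write them to the DB
            // Wrap the call with spring-retry if you need retries on lost connections.
            // Because this is a Task, the JVM exits when the runner completes; the
            // platform scheduler (or SCDF) launches it again on the next interval.
        };
    }
}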
From the SCDF dashboard, you can design/build Composed Tasks, including the state transitions and the desired downstream operation.
I am trying out Spring Cloud Data Flow to see if it fits my needs. I am wondering how I can bind my Spring Cloud Stream app to multiple named destinations (in my case, RabbitMQ exchanges). From what I have read, you can bind multiple apps to one named destination (fan-in/out), but not one app to multiple destinations...
Any ideas?
The spring.cloud.stream.bindings.input/output.destination property allows you to specify the name(s) of the destinations. Note that "input" or "output" above corresponds to the channel name of your message handler (e.g., @StreamListener(Sink.INPUT)).
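For example, a minimal sketch (the exchange names are made up). For a consumer binding, the destination can even be a comma-separated list, which binds the single input channel to several RabbitMQ exchanges:

# consumer side: one input channel bound to two exchanges
spring.cloud.stream.bindings.input.destination=orders,audit
# producer side: the output channel bound to one exchange
spring.cloud.stream.bindings.output.destination=processed-orders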
Is that what you're looking for?
We have a microservice developed using Spring Boot. A couple of the functionalities it implements are:
1) A scheduler that triggers, at a specified time, a file download using WebHDFS, processes the file, and, once the data is processed, sends an email to users with a summary of the processed data.
2) Reading messages from Kafka and, once the data is read, sending an email to users.
We are now planning to make this application highly available, in either an Active-Active or Active-Passive setup. The problem we are facing is that if both instances of the application are running, both will try to download the file / read the data from Kafka, process it, and send emails. How can this be avoided? I mean, how do we ensure that only one instance triggers the download and processes it?
Please let me know if there is a known solution for this kind of scenario, as it seems to be common in most projects. Is a master-slave/leader-election approach the correct solution?
Thanks
Let the service download the file, extract the information, and publish it via Kafka.
Check beforehand whether the information was already processed by querying Kafka or a local DB.
You could also publish a DataProcessed event that triggers the EmailService, which sends the corresponding email.
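As a rough sketch of that idea (the Spring names and the table are illustrative, not a drop-in solution): record each processed file ID with a unique-key insert, so even if both instances fire at the same time only one of them can claim the file, then publish the event that triggers the EmailService:

import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class FileDownloadJob {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafka;

    public FileDownloadJob(JdbcTemplate jdbc, KafkaTemplate<String, String> kafka) {
        this.jdbc = jdbc;
        this.kafka = kafka;
    }

    @Scheduled(cron = "0 0 6 * * *") // the specified download time
    public void downloadAndProcess() {
        String fileId = currentFileId();
        try {
            // processed_files(file_id) has a unique constraint, so only one
            // instance can successfully insert the row and claim the file
            jdbc.update("INSERT INTO processed_files(file_id) VALUES (?)", fileId);
        } catch (DuplicateKeyException alreadyClaimed) {
            return; // the other instance won the race; do nothing
        }
        // ... download the file via WebHDFS and process the data ...
        kafka.send("data-processed", fileId); // DataProcessed event; EmailService listens here
    }

    private String currentFileId() {
        return java.time.LocalDate.now().toString(); // placeholder: derive from date or WebHDFS listing
    }
}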
I'm new to WF. What I'm trying to do is create a simple workflow service and call it from various clients. So far I have created a workflow service. It has a .xamlx file containing a sequence with Receive and SendReply activities, and I have set up correlations. The first ReceiveAndSendReply activity has CanCreateInstance set to True. In addition to this, I wrote some of my own code activities.
Now I have hosted this service in IIS and am trying to call it from a console app. I added the web reference, created a service client, and passed the values to the service. It gives me the expected results.
But when I try to run another client at the same time, it gives me an instance error. I think the workflow is not initiating a new instance for the second client.
So I did a search and found that multiple instancing can be achieved by using WorkflowServiceHost, but I could not find a way to do it.
I think the way I'm calling the service is not correct. I'm just creating a new object from the service reference and calling the operation.
Can anyone help me with this?
Please have a look at the correlation rules you've set up for your workflow. If several clients pass parameters that correlate with the same instance, a new instance won't be created.
So if you need a new instance, you need to set up different correlation rules, so that different clients' calls correlate with different workflow instances.