Hi, I am trying to get records from the incident table of a ServiceNow instance using the ServiceNow connector from ESB. I am able to get back the filtered query records from the incident table using the respond mediator. Can anybody tell me, in detail, how to cache these records? Thanks
You can use the cache mediator to cache the response message. You can read more about it here.
You can achieve this using the cache mediator. You don't necessarily need to split the configuration between the in-sequence and the out-sequence. Please try the following.
<cache timeout="20" scope="per-host" collector="false" hashGenerator="org.wso2.carbon.mediator.cache.digest.DOMHASHGenerator">
    <implementation type="memory" maxSize="100"/>
</cache>
This is where the request's hash is generated for storing in the cache. Placing this segment in the path of the request you need to cache is sufficient.
<cache scope="per-host" collector="true"/>
This is where the response is cached. You can add this right after the ServiceNow call. When a later request reaches the first configuration and matches a stored hash, the client is served the response from the cache.
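Putting both segments together, a minimal in-sequence could look like the sketch below. The endpoint URI is only an illustrative placeholder; substitute your actual ServiceNow connector operation or call.

```xml
<inSequence>
    <!-- collector="false": generate the request hash; if a cached response exists, reply from the cache -->
    <cache timeout="20" scope="per-host" collector="false" hashGenerator="org.wso2.carbon.mediator.cache.digest.DOMHASHGenerator">
        <implementation type="memory" maxSize="100"/>
    </cache>
    <!-- Placeholder for the ServiceNow call that fetches the incident records -->
    <call>
        <endpoint>
            <address uri="https://example.service-now.com/api/now/table/incident"/>
        </endpoint>
    </call>
    <!-- collector="true": store the response in the cache against the generated hash -->
    <cache scope="per-host" collector="true"/>
    <respond/>
</inSequence>
```

With this in place, repeated identical requests within the 20-second timeout are answered from the cache without hitting ServiceNow again.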
I'm attempting to collect details about all processors on two installed instances of Nifi, versions 1.19.0 and 1.20.0. I want to use Nifi to do this so I can perform some minor ETLs and save the resultant dataset in a database. Both instances are configured as standalone authentication with SSL.
However, when I configure the InvokeHTTP processor the way I think it should be to acquire a token, I'm getting a 400 status code and a response.body of "The username and password must be specified." on both instances.
Here's the super basic flow:
Nifi Flow
Here's what the error looks like:
Attributes of failed response
Here's the current config of the InvokeHTTP processor on the 1.20.0 instance, which we can focus on from here on since both instances respond similarly.
Postman - InvokeHTTP Config
When I run the request with Postman, I get the expected token in the response body. And I'm able to make subsequent requests (the above config was a response from https://nifi06:8443/nifi-api/processors/39870615-0186-1000-e7d7-c59fa621ca7d in Postman).
I've tried the following:
Adding dynamic attributes for username/password as in the above configuration.
I've added them as JSON to the preceding GenerateFlowFile processor's custom text.
And I've also tried using the Request Username/Request Password properties of the InvokeHTTP processor.
All return with the same response.body of "The username and password must be specified."
This seems like it should be really simple, and I'm sure I've been staring at it too long to see what I'm missing. Can anyone tell me where I need to specify the username/password?
Thanks.
I am using Azure ServiceBus Relay, and although I am able to fetch the count of the current listeners, I want to know the details of those listeners as well, for example their ID or address: an attribute that I can use to access or target a particular listener.
As of now I am sending a GET request to:
https://{our relay}.servicebus.windows.net/
with a SAS token as a header, and the returned object is:
<feed xmlns="http://www.w3.org/2005/Atom">
    <title type="text">Publicly Listed Services</title>
    <subtitle type="text">This is the list of publicly-listed services currently available.</subtitle>
    <id>uuid:85ad7592-cecf-4f31-bef2-4bb7f5cea444;id=2185</id>
    <updated>2019-12-17T10:27:55Z</updated>
    <generator>Service Bus 1.1</generator>
</feed>
I am a beginner in middleware technology. I have started with WSO2. Now I have learned that WSO2 has caching features in different places; two of them are key caching at the Key Manager and response caching.
My question is very simple (naive): if we cache a wrong response, we will keep getting that wrong response.
For example:
I hit this dummy API request http://dummy.restapiexample.com/api/v1/employees, which is supposed to give a list of employees, but it gives me null or something else. Now this response is stored in the response cache, which means I will keep on getting null, which is wrong. Caching makes sense, but it is caching all responses, wrong and right. So how is this handled? What's the concept?
Similarly for the Key Manager: what is the point of caching keys at both the API Gateway level and the Key Manager level? We have to regenerate the key anyway if it is wrong or expired.
My questions may sound naive, but I would appreciate it if you can explain.
Yes, I agree with you. Ideally, the cacheable responses should be picked based on the status code. I just created a feature improvement request.
However, this is already supported by the runtime; only the UI is missing it.
So you can make this work by changing a configuration file.
For that, open repository/resources/api_templates/velocity_template.xml and search for <cache scope="per-host" collector="false". (Note collector=false)
Then, add the <protocol> tag just above the <implementation> tag like this.
<cache scope="per-host" collector="false" hashGenerator="org.wso2.carbon.mediator.cache.digest.REQUESTHASHGenerator" timeout="$!responseCacheTimeOut">
    <protocol type="HTTP">
        <methods>*</methods>
        <headersToExcludeInHash/>
        <responseCodes>2[0-9][0-9]</responseCodes>
        <enableCacheControl>false</enableCacheControl>
        <includeAgeHeader>false</includeAgeHeader>
        <hashGenerator>org.wso2.carbon.mediator.cache.digest.REQUESTHASHGenerator</hashGenerator>
    </protocol>
    <implementation type="memory" maxSize="500"/>
</cache>
Note the regex for 2xx responses in responseCodes; only responses with matching status codes will be cached. I hope this answers your first question.
Regarding the key caches, yes, there are caches at both gateway and keymanager. But by default, only the gateway cache is enabled.
<CacheConfigurations>
    <!-- Enable/Disable token caching at the Gateway-->
    <EnableGatewayTokenCache>true</EnableGatewayTokenCache>
    <!-- Enable/Disable API resource caching at the Gateway-->
    <EnableGatewayResourceCache>true</EnableGatewayResourceCache>
    <!-- Enable/Disable API key validation information caching at key-management server -->
    <EnableKeyManagerTokenCache>false</EnableKeyManagerTokenCache>
</CacheConfigurations>
And there are cases where some want to disable the gateway cache and enable the Key Manager cache instead, for example when the gateway is in a DMZ.
Hi guys, I have a simple HTTP Mule flow that connects to a 3rd-party API using REST, with methods such as "create user", "create account", and "create site".
The thing is that whenever I run my flow, I see it makes multiple requests. Why is this happening? Shouldn't it be making only one request as the flow goes? I thought in the beginning that it was maybe doing retries or something, but it connects every time and I get the response quickly.
Should I be adding the flow code? I mean, it's a very simple Mule flow with a
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8082" doc:name="HTTP"/>
And a few String beans with a custom HTTP/HTTPS class to make REST connections during the flow (with their API keys, method, URL and such).
Thanks in advance.
The extra requests are most likely your browser automatically fetching /favicon.ico alongside the page. Simply place the following filter after your HTTP inbound endpoint to drop them:
<not-filter>
    <wildcard-filter pattern="/favicon.ico"/>
</not-filter>
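For context, here is a minimal sketch of where the filter could sit in the flow. The flow name and the logger are illustrative placeholders, not taken from the original configuration:

```xml
<flow name="exampleFlow">
    <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8082" doc:name="HTTP"/>
    <!-- Drop the browser's automatic /favicon.ico request before it reaches the rest of the flow -->
    <not-filter>
        <wildcard-filter pattern="/favicon.ico"/>
    </not-filter>
    <!-- The REST calls to the 3rd-party API would follow here, e.g. the custom HTTP/HTTPS components -->
    <logger level="INFO" doc:name="Logger"/>
</flow>
```

With the filter in place, only the real page request continues through the flow, so the 3rd-party API is called once per request instead of once per browser fetch.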
Let's say we have a simple node JS backend, paired with a standard NoSQL document store such as CouchDB. Since our database is just a document key-store with no schema, anything can get inserted. And since our server is built on JSON as well, ultimately POST requests that come in from the client with JSON payloads end up getting stored directly into our data store.
This of course is very convenient and makes for a lightweight application. I've been wondering, though, short of writing code for every possible insertion endpoint to verify that each POST or PUT request is well-formed, is there anything to prevent an attacker from firing up their developer console and spoofing POST/PUT requests, allowing them to insert any kind of junk data they wish into our datastore? It would not be too difficult to wreck an application's data this way.
Clearly token-based authentication can ensure that only authenticated users can access these service endpoints, but that doesn't prevent them from spoofing these requests with the same HTTP headers that valid requests have. This is all quite simple with today's browser developer tools.
In a traditional server language like Java, JSON PUTs and POSTs are unmarshalled into highly-structured, class-based objects. Requests whose payloads do NOT match these formats are rejected with HTTP errors.
Does anyone know of tools or paradigms for node which ensures that requests like this meet some basic structure criteria?