I've followed this sample: https://docs.wso2.com/display/ESB490/Governance+Partition+in+a+Remote+Registry
Everything works well: I have a list of endpoints in the governance partition of my G-Reg that are added through a Carbon Application artifact (.car). The ESB mounts this repository so it can see those endpoints and use them. Perfect!
But when I modify an endpoint in the G-Reg console (for example, updating its URL), the change is not refreshed in the ESB console. The update only appears after around 10 to 15 minutes.
I guess there is a cache configuration to add or update so that this remote governance registry is taken into account?
Any help appreciated, please :)
This is due to the caching timeout of the ESB, which is set to 15 minutes by default. When an artifact is deployed on the G-Reg node, it takes about 15 minutes to become visible on the ESB node.
You can reduce this caching timeout in the "<ESB_HOME>/repository/deployment/server/synapse-configs/default/registry.xml" file as below.
<registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
<parameter name="cachableDuration">15000</parameter>
</registry>
But changing this value also has a performance impact. If the value is too low, resource lookups will frequently go to the database, because most resources will be missing from the cache.
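For example, a sketch of the same block with a shorter cache window might look like this (illustrative only; verify the unit of cachableDuration against your ESB version's documentation, where it is described as milliseconds):
<registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
    <!-- Illustrative value: a lower cachableDuration gives a shorter cache window,
         so registry updates appear sooner but more lookups hit the registry database -->
    <parameter name="cachableDuration">5000</parameter>
</registry>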
I use WSO2 API Manager in my company. When I change the settings of the available methods (scopes) or the available authorisation methods (application-level security), applying these settings takes up to 15 minutes (I tested the methods through Postman). That is too long for running tests.
I followed the recommendation to change the timeout in deployment.toml:
[apim.cache.resource]
enable = true
expiry_time = "900s"
These settings were not present in my config, but I added them and changed the expiry to 60s. After the restart, the settings were applied instantly (not even after 60 seconds). However, after a while, changes again took 15 minutes to apply. I also disabled the cache altogether, but that didn't help either. Settings are applied quickly only the first time after restarting WSO2. Has anyone had the same problem?
In WSO2 APIM, if you update an API, the resource cache gets invalidated and the changes are reflected within a few minutes. If you want to apply the changes quickly, you can restart the server and check the flow.
The default cache size of any cache in a WSO2 product is 10,000 elements/records; cache eviction starts from the 10,001st element. All caches in WSO2 products can be configured using the <PRODUCT_HOME>/repository/conf/deployment.toml file. If you have not defined a value for the default cache timeout under the server configuration, the default of 15 minutes is applied:
[server]
default_cache_timeout = 15
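Putting the two related settings together, a deployment.toml fragment might look like the following (values are illustrative; as I understand the units, default_cache_timeout is in minutes and the resource cache expiry is a duration string in seconds, so both lines below correspond to 15 minutes):
# Illustrative values - check the units against the caching documentation for your APIM version
[server]
# fallback timeout applied to caches that have no explicit expiry (minutes)
default_cache_timeout = 15

[apim.cache.resource]
enable = true
# gateway resource cache expiry (seconds)
expiry_time = "900s"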
Please refer to https://apim.docs.wso2.com/en/3.0.0/administer/product-configurations/configuring-caching/ for further details on caching.
We are using Alfresco 5.2.3 Enterprise with ADF 3.4.0.
The web.xml files in both our alfresco and share WARs have the session timeout set to 60 (minutes).
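That is, the standard servlet session-config, which specifies the timeout in minutes, roughly like this:
<session-config>
    <!-- session idle timeout in minutes -->
    <session-timeout>60</session-timeout>
</session-config>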
And for ADF we have not found any session timeout settings or config.
So, ideally the session should not expire before 60 minutes, but the customer is complaining that after remaining idle for around 15 minutes, their session expires and logs them out, and they need to log in again.
So, what is the right way to make the session valid for the full 60 minutes and not just 15?
I tried overriding the session timeout using the following link, but it's not working:
Overriding alfresco timeout
I also tried setting the following property in the alfresco-global.properties file with different values:
authentication.ticket.validDuration=PT1H
But it does not work.
The same behaviour is seen whether we use the ADF URL or the Share URL.
The Share URL actually logs out the user; the ADF URL mostly invalidates the session, so our custom actions do not appear against the documents if the user remains idle for 15 minutes.
NOTE: There is no SSO integration done for our project.
Any suggestions or pointers would be really helpful.
I tried out multiple options, switching each of the following settings:
authentication.ticket.ticketsExpire=true -> authentication.ticket.ticketsExpire=false
authentication.ticket.expiryMode=AFTER_INACTIVITY -> authentication.ticket.expiryMode=DO_NOT_EXPIRE
authentication.ticket.useSingleTicketPerUser=false -> authentication.ticket.useSingleTicketPerUser=true
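For reference, the documented combination for a one-hour idle ticket at the repository level would be roughly the following in alfresco-global.properties (shown only as a sketch of what was attempted):
# Tickets expire, but only after a period of inactivity
authentication.ticket.ticketsExpire=true
authentication.ticket.expiryMode=AFTER_INACTIVITY
# Idle lifetime of a ticket, as an ISO-8601 duration (1 hour)
authentication.ticket.validDuration=PT1H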
But none of the above settings had any impact on the behaviour after a restart. So this session timeout is most likely being imposed by the proxy server or load balancer configuration in front of Alfresco rather than by these settings.
I have a really simple setup: an Azure load balancer for HTTP(S) traffic, two application servers running Windows, and one database, which also contains the session data.
The goal is being able to reboot or update the software on the servers, without a single request being dropped. The problem is that the health probe will do a test every 5 seconds and needs to fail 2 times in a row. This means when I kill the application server, a lot of requests during those 10 seconds will time out. How can I avoid this?
I have already tried running the health probe on a different port and then denying all traffic to that port using the Windows firewall. The load balancer then thinks the application is down on that node and no longer sends new traffic to it. However... the Azure LB does hash-based load balancing, so traffic that was already going to the now-killed node keeps going there for a few seconds!
First of all, could you give us additional details: is your database load balanced as well? Are you performing reads and writes on this database, or only reads?
For your information, you can change the Azure Load Balancer distribution mode; please refer to this article for details: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
I would suggest disabling the server you are updating at the load balancer level. Wait a couple of minutes (depending on your application) before starting your update; this should "purge" (drain) the endpoint. When the update is done, add the server back to the load balancer.
The cloud model is infrastructure as code: this can easily be scripted and included in your deployment/update procedure, for example along the lines sketched below.
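As a rough sketch only (assuming an ARM deployment and the current Azure CLI; resource and pool names are placeholders, so verify the commands against your own setup):
# Pull the server's NIC out of the load balancer backend pool, wait for
# in-flight connections to finish, update the server, then add it back.
az network nic ip-config address-pool remove \
    --resource-group MyResourceGroup \
    --nic-name app1-nic \
    --ip-config-name ipconfig1 \
    --lb-name MyLoadBalancer \
    --address-pool MyBackendPool

sleep 120   # give existing flows time to drain before touching the server

# ... reboot or update the application server here ...

az network nic ip-config address-pool add \
    --resource-group MyResourceGroup \
    --nic-name app1-nic \
    --ip-config-name ipconfig1 \
    --lb-name MyLoadBalancer \
    --address-pool MyBackendPool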
Another solution would be to use Traffic Manager. It gives you additional options to manage your endpoints (it might be a bit oversized for 2 VMs/endpoints).
A last solution is to migrate to a PaaS offering where these kinds of features are already available (deployment slots).
Hoping this will help.
Best regards
While looking into the resource balancer and dynamic load metrics on Service Fabric, we ran into some questions (Running devbox SDK GA 2.0.135).
In the Service Fabric Explorer (the portal and the standalone application) we can see that balancing is run very often, most of the time completing almost instantly, and this happens every second. However, when looking at the Load Metric Information on the nodes or partitions, the values are not updated as we report load.
We send a dynamic load report based on our interaction (an HTTP request to a service), increasing the reported load of a single partition by a large amount. This spike only becomes visible after roughly 5 minutes, at which point the balancer actually starts balancing. There seems to be an interval at which the load data is refreshed: the last-reported time is updated all the time, but without the new value.
We added the metrics to the application manifest and the cluster manifest to make sure they are used in the balancing.
This means the resource balancer uses the same data for 5 minutes. Is this a configurable setting? Is it constrained because it is running on a devbox?
We tried a lot of variables in the cluster manifest, but none seem to affect this refresh time.
If this is not adjustable, can someone explain why you would run the balancer with stale data, and why this 5-minute interval was chosen?
This is indeed a configurable setting, and the default is 5 minutes. The idea behind it is that in prod you have tons of replicas all reporting load all the time, and so you want to batch them up so you don't spam the Cluster Resource Manager with all those as independent messages.
You're probably right in that this value is way too long for local development. We'll look into changing that for the local clusters, but in the meantime you can add the following to your local cluster manifest to change the amount of time we wait by default. If there are other settings already in there, just add the SendLoadReportInterval line. The value is in seconds and you can adjust it accordingly. The below would change the default load reporting interval from 5 minutes (300 seconds) to 1 minute (60 seconds).
<Section Name="ReconfigurationAgent">
<Parameter Name="SendLoadReportInterval" Value="60" />
</Section>
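For orientation, in the cluster manifest this section sits under the FabricSettings element, roughly like this (a sketch of the surrounding structure only):
<FabricSettings>
  <!-- other sections ... -->
  <Section Name="ReconfigurationAgent">
    <!-- interval, in seconds, between load reports sent to the Cluster Resource Manager -->
    <Parameter Name="SendLoadReportInterval" Value="60" />
  </Section>
</FabricSettings>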
Please note that doing so does increase load on some of the system services (TANSTAAFL), and as always if you're operating on a generated or complete cluster manifest be sure to Test-ServiceFabricClusterManifest before deploying it. If you're working with a local development cluster the easiest way to get it deployed is probably just to modify the cluster manifest template (by default here: "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\ClusterManifestTemplate.xml") and just add the line, then right click on the Service Fabric Local Cluster Manager in your system tray and select "Reset Local Cluster". This will regenerate the local cluster with your changes to the template.
I observe the following weird behavior. I have an Azure web role which is deployed to the live Azure cloud. I click "Configure" in the Azure Management Portal and change the number of instances; the portal shows some "activity". Now I open the browser, navigate to the URL assigned to my deployment, and start refreshing the page roughly once every two seconds. The page reloads fine many times, and then for some time it stops reloading - the requests are rejected - and after something like half a minute the requests are handled normally again.
What is happening? Is the web server temporarily stopped? How do I change the number of instances so that HTTP requests to the role are handled at all times?
When you change the configuration, your current instances might be restarted. This might be the reason why your website didn't respond for about 30 seconds.
Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether it's because of the role restarting.
What you are doing is manual. Have you looked at the SDK for autoscaling Azure?
http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications
Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.
Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the config (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests always go to the same instance due to keep-alive.
It would be interesting to know what the behavior is if you disable keep-alives for your web role. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.
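If you want to try that experiment, one way to switch HTTP keep-alives off for an IIS-hosted web role is via web.config (a minimal sketch; allowKeepAlive defaults to true, so remove the override once you are done testing):
<configuration>
  <system.webServer>
    <!-- Disable HTTP keep-alive so successive requests are not pinned to one instance -->
    <httpProtocol allowKeepAlive="false" />
  </system.webServer>
</configuration>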