We have several services deployed in DSS, each with a different caching requirement:
no cache
1 hour cache
1 day cache
Is there any way to set this caching directly in each .dbs file, without using the administration console?
Alternatively, could we define these three caches in a configuration file and then refer to them from the .dbs files?
The solution we are looking for must not involve the administration console.
It is indeed possible to configure caching for data services via a configuration file, without using the management console. Each data service is deployed as an Axis2 service, so the "services.xml" file you would typically use to configure Axis2 service parameters can be used with data services too, with a slight modification: if the name of your data service is "TestDS", you have to name your services.xml file "TestDS_services.xml" and place it inside the data services deployment directory, located at "DSS_HOME/repository/deployment/server/dataservices". You can then include a caching policy with your own parameter values inside that configuration file. It is also important to note that caching can be engaged at three levels for a data service: per service group, per service, or per operation.
A sample services.xml is shown below.
<serviceGroup>
    <service name="TestDS">
        <!--parameter name="ServiceObjectSupplier">org.apache.axis2.engine.DefaultObjectSupplier</parameter-->
        <Description>Enabling caching through services.xml</Description>
        <!-- op1: caching engaged per operation via the wso2caching module -->
        <operation name="op1">
            <messageReceiver class="org.wso2.carbon.dataservices.core.DBInOutMessageReceiver"/>
            <module ref="wso2caching"/>
            <wsp:Policy wsu:Id="WSO2CachingPolicy"
                        xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                        xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                <wsch:CachingAssertion xmlns:wsch="http://www.wso2.org/ns/2007/06/commons/caching">
                    <wsp:Policy>
                        <wsp:All>
                            <wsch:XMLIdentifier>org.wso2.caching.digest.DOMHASHGenerator</wsch:XMLIdentifier>
                            <wsch:ExpireTime>70000</wsch:ExpireTime>
                            <wsch:MaxCacheSize>1000</wsch:MaxCacheSize>
                            <wsch:MaxMessageSize>1000</wsch:MaxMessageSize>
                        </wsp:All>
                    </wsp:Policy>
                </wsch:CachingAssertion>
            </wsp:Policy>
        </operation>
        <!-- op2: same policy shape, with a longer ExpireTime -->
        <operation name="op2">
            <messageReceiver class="org.wso2.carbon.dataservices.core.DBInOutMessageReceiver"/>
            <module ref="wso2caching"/>
            <wsp:Policy wsu:Id="WSO2CachingPolicy"
                        xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                        xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                <wsch:CachingAssertion xmlns:wsch="http://www.wso2.org/ns/2007/06/commons/caching">
                    <wsp:Policy>
                        <wsp:All>
                            <wsch:XMLIdentifier>org.wso2.caching.digest.DOMHASHGenerator</wsch:XMLIdentifier>
                            <wsch:ExpireTime>600000</wsch:ExpireTime>
                            <wsch:MaxCacheSize>1000</wsch:MaxCacheSize>
                            <wsch:MaxMessageSize>1000</wsch:MaxMessageSize>
                        </wsp:All>
                    </wsp:Policy>
                </wsch:CachingAssertion>
            </wsp:Policy>
        </operation>
        <!-- op3: no caching policy engaged, so responses of this operation are not cached -->
        <operation name="op3">
        </operation>
    </service>
</serviceGroup>
After placing your "data_service_name"_services.xml file inside the aforesaid directory, you have to comment out the following entry in the axis2.xml configuration file, located in the "DSS_HOME/repository/conf" directory.
<listener class="org.wso2.carbon.core.deployment.DeploymentInterceptor"/>
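That is, after editing, the entry is simply wrapped in an XML comment (shown here for clarity; presumably this stops the Carbon deployment interceptor from overriding the policies in your services.xml):
<!-- <listener class="org.wso2.carbon.core.deployment.DeploymentInterceptor"/> -->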
Now you're good to go: restart the server and the caching behaviour described above will take effect.
NOTE: A lot of improvements have been made in this area for the upcoming DSS release (DSS 3.0.0).
Regards,
Prabath
We use a WebSphere Liberty server behind a reverse proxy. We enabled the appSecurity-2.0 feature to add a custom TAI which validates HTTP requests between the proxy and Liberty. To use the batch framework that comes with WebSphere Liberty, we enabled the batchManagement-1.0 feature and added the required role configuration as described here: https://www.ibm.com/support/knowledgecenter/en/was_beta_liberty/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/twlp_batch_securing.html.
It is possible to submit a batch job through the REST API if the authorization-roles tag is added to server.xml and the batchAdmin role is assigned to a user from the basic registry. However, once we add the authorization-roles tag, Liberty restricts HTTP requests from the proxy (frontend users) to the deployed web app and reports that the user does not have the required permission to access the resources.
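For context, the role assignment that makes REST submission work looks roughly like this in server.xml (the user name and password are placeholders, not our real values):
<authorization-roles id="com.ibm.ws.batch">
    <security-role name="batchAdmin">
        <!-- placeholder user; must exist in the basic registry below -->
        <user name="batchSubmitter"/>
    </security-role>
</authorization-roles>
<basicRegistry id="basic">
    <user name="batchSubmitter" password="changeit"/>
</basicRegistry>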
Is it possible to disable the batch security in WebSphere Liberty independent of the appSecurity feature?
You could grant the batchAdmin role broadly instead, for example to all authenticated users:
<authorization-roles id="com.ibm.ws.batch">
    <security-role name="batchAdmin">
        <special-subject type="ALL_AUTHENTICATED_USERS" />
    </security-role>
</authorization-roles>
or to everyone, authenticated or not:
<authorization-roles id="com.ibm.ws.batch">
    <security-role name="batchAdmin">
        <special-subject type="EVERYONE" />
    </security-role>
</authorization-roles>
However, there is no way to disable batch security while security is enabled.
I'd like to enable versioning for a replicated cache in a locally-running Infinispan server (8.2.4.Final; two Infinispan servers form a cluster).
This is documented in the user guide.
Quote:
10.2.5. Configuration
By default versioning will be disabled.
and the user guide contains the following snippet:
<versioning scheme="SIMPLE|NONE" />
I am using locally-running Infinispan servers; the configuration is in clustered.xml.
A fragment thereof:
<subsystem xmlns="urn:infinispan:server:core:8.2" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default" statistics="true">
        [...]
        <replicated-cache name="demoCache" mode="ASYNC">
            <versioning scheme="SIMPLE"/>
        </replicated-cache>
So when I add the versioning element, startup fails with:
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[186,6]
Message: WFLYCTL0198: Unexpected element '{urn:infinispan:server:core:8.2}versioning' encountered
The XML element versioning indeed exists in urn:infinispan:config:8.2, but not in urn:infinispan:server:core:8.2 (which is used in clustered.xml).
urn:infinispan:config:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/infinispan-config-8.2.xsd.
urn:infinispan:server:core:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/jboss-infinispan-core_8_2.xsd
How can I enable (cluster aware) versioning when running Infinispan as a separate server?
Versioning does not make sense when using Infinispan remotely: it is used purely to detect write-skew situations with repeatable-read transactions, and that functionality is not really available to users in server mode.
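For comparison, the versioning element belongs to the embedded schema (urn:infinispan:config:8.2), where it is paired with repeatable-read locking and write-skew checking. A minimal embedded sketch, assuming a transactional replicated cache (this is not something you can port into clustered.xml):
<infinispan xmlns="urn:infinispan:config:8.2">
    <cache-container name="embedded">
        <replicated-cache name="demoCache">
            <!-- write-skew detection is what versioning is actually for -->
            <locking isolation="REPEATABLE_READ" write-skew="true"/>
            <transaction mode="NON_XA"/>
            <versioning scheme="SIMPLE"/>
        </replicated-cache>
    </cache-container>
</infinispan>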
I have configured Tomcat 6 with in-memory session replication. I am also using IIS 7 (I know, I know) and the AJP connector via isapi_redirector. The cluster is working properly, and I am able to replicate session attributes using the SessionExample in the examples war. The problem is that I am unable to do the same in my custom application. I have added the distributable tag to the web.xml file on both servers in my test cluster, yet I don't see any message in the logs mentioning the attributes getting sent to the cluster (I do see them for SessionExample). The only notable differences I can see between my app and the examples:
The examples war uses Servlet 2.5; I am still required to use 2.4.
My application uses SSO and requires the user to login.
The application is a portal application.
Also, in the application code I am setting a simple string in the attribute, so nothing fancy.
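For reference, the distributable marker in web.xml is just an empty element; in the servlet 2.4 schema that I'm required to use, it looks like this:
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">
    <distributable/>
</web-app>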
So, I was wondering if anyone has some tips to get this working?
Thanks
Here is the cluster section from my server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.104"
                    port="45564"
                    frequency="500"
                    dropTime="10000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="7000"
                  maxThreads="6"
                  timeout="15000"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
                       timeout="70000"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/apache-tomcat-6.0.37/war-deploy/war-temp/"
              deployDir="/apache-tomcat-6.0.37/webapps/"
              watchDir="/apache-tomcat-6.0.37/war-deploy/war-listen/"
              watchEnabled="true"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Sorry, I found the issue. I was expecting to see messages in the log about the creation of the session attributes, but I didn't realize that the examples project includes a session listener that writes those messages to the log. I had assumed they appeared simply because of the log level I had set.
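For anyone comparing their own app against the examples: the examples web.xml registers a session listener along these lines, which is where those log messages come from (class name quoted from memory of the stock examples webapp, so double-check your copy):
<listener>
    <listener-class>listeners.SessionListener</listener-class>
</listener>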
Thanks to anyone who read this post.
I am developing a webapp with an embedded webservice with Axis2 using Maven.
The service implementation is a POJO with RPC-style interaction, the target appserver is Tomcat running the Axis2 servlet.
The "Hello world" works but now I need to configure some global axis2 settings in the axis2.xml file (placed under WEB-INF/conf).
Please provide or point to a simple configuration for axis2.xml for this common environment.
The default taken from the binary distribution has too many features activated (hotdeploy?) and also causes this problem:
<soapenv:Reason>
    <soapenv:Text xml:lang="en-US">
        The ServiceClass object does not implement the required method
        in the following form: OMElement ping(OMElement e)
    </soapenv:Text>
</soapenv:Reason>
As a reference: http://axis.apache.org/axis2/java/core/docs/servlet-transport.html says to configure the servlet transport in this way, but it does not solve the issue.
<transportReceiver name="http" class="org.apache.axis2.transport.http.AxisServletListener"/>
Apparently the problem is that the default axis2.xml sets the raw XML message receivers instead of the RPC ones.
Try adding this to the services.xml of the developed service; it should fix the problem.
<messageReceivers>
<messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver" />
<messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
class="org.apache.axis2.rpc.receivers.RPCMessageReceiver" />
</messageReceivers>
"Solution that worked for me was adding the operation tag in the service.xml against the Java Service method name:
<operation name="sayHello">
    <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out" class="org.apache.axis2.rpc.receivers.RPCMessageReceiver" />
</operation>
<parameter name="ServiceClass" locked="false">com.learning.webservices.pojo.HelloService</parameter>
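Putting the two pieces together, a minimal services.xml for such a POJO service might look like this (the service name and description are assumptions; the ServiceClass value is the one from above):
<service name="HelloService">
    <description>POJO service exposed through the RPC message receivers</description>
    <parameter name="ServiceClass" locked="false">com.learning.webservices.pojo.HelloService</parameter>
    <operation name="sayHello">
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
                         class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </operation>
</service>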
I tested Tomcat clustering for session replication on Ubuntu servers with Apache as the front-end load balancer. From my testing experience I would say it's better not to use Tomcat clustering, but to run each node standalone, with the nodes not knowing one another and without any session replication: I found it slow, it takes much longer to start the Tomcat service, and it consumes more memory. Also, the FarmDeployer is not always reliable in deploying, and the whole configuration has to be placed under the <Host></Host> element for the farm deployer to work, repeated for each virtual host, resulting in a huge server.xml file. Below is the Tomcat virtual hosting with cluster configuration from one of the nodes I used.
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
            directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
    <Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false"
                 notifyListenersOnReplication="true"/>
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="192.168.1.8"
                      port="4001"
                      selectorTimeout="100"
                      maxThreads="6"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
                <Member className="org.apache.catalina.tribes.membership.StaticMember"
                        port="4002"
                        securePort="-1"
                        host="192.168.1.9"
                        domain="staging-cluster"
                        uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
                <!-- <Member className="org.apache.catalina.tribes.membership.StaticMember"
                        port="4002"
                        securePort="-1"
                        host="192.168.1.9"
                        domain="staging-cluster"
                        uniqueId="{0,1,2,3,4,5,6,7,8,9}"/> -->
            </Interceptor>
        </Channel>
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
        <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                  tempDir="/usr/share/tomcat/temp/"
                  deployDir="/usr/share/tomcat/webapps/"
                  watchDir="/usr/share/tomcat/watch/"
                  watchEnabled="true"/>
    </Cluster>
</Host>
Is Tomcat clustering good to use in production, or is there an alternative way to do session replication? Or am I missing anything in the above configuration that could be fine-tuned?
Any ideas are welcome. Thanks!
One session-failover / session-replication solution for Tomcat is memcached-session-manager (msm), which supports both sticky and non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as the backend for session backup/storage.
In sticky mode, sessions are still kept in Tomcat, and memcached is only used as an additional backup, for session failover.
In non-sticky mode, sessions are stored only in memcached and no longer in Tomcat; with non-sticky sessions the session store must be external (to avoid stale data).
There's also special support for membase / membase buckets, which is useful for hosted solutions where you get access to a certain bucket with the appropriate authentication.
Session serialization is pluggable, so you're not tied to Java serialization (and classes implementing Serializable). For example, there's a kryo serializer available, one of the fastest serialization strategies around.
The msm home page mainly describes the sticky-session approach; for details regarding non-sticky sessions you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).
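To give a flavor of the setup, sticky mode boils down to a Manager element in the webapp's context.xml, along these lines (the hostnames are placeholders and the kryo transcoder line is optional; treat this as a sketch and see the wiki for authoritative examples):
<Context>
    <!-- msm replaces Tomcat's standard session manager;
         memcachedNodes lists id:host:port pairs -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:memcached1:11211,n2:memcached2:11211"
             sticky="true"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
</Context>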