In Oracle Coherence 12, which backing-map-scheme can provide durable storage (not a database)?
For example, Redis writes to an RDB/AOF file and restores key-value entries after a restart.
I configured persistence environments in the operational configuration:
<persistence-environments>
   <!-- -Dcoherence.distributed.persistence.base.dir=override USR_HOME -->
   <persistence-environment id="stage_env_w_active_store">
      <persistence-mode>active</persistence-mode>
      <active-directory system-property="coherence.distributed.persistence.active.dir">/opt/datastore/staged/active</active-directory>
      <snapshot-directory system-property="coherence.distributed.persistence.snapshot.dir">/opt/datastore/staged/snapshot</snapshot-directory>
      <trash-directory system-property="coherence.distributed.persistence.trash.dir">/opt/datastore/staged/trash</trash-directory>
   </persistence-environment>
   <persistence-environment id="stage_env_w_ondemand_store">
      <persistence-mode>on-demand</persistence-mode>
      <active-directory system-property="coherence.distributed.persistence.active.dir">/opt/datastore/staged/dactive</active-directory>
      <snapshot-directory system-property="coherence.distributed.persistence.snapshot.dir">/opt/datastore/staged/dsnapshot</snapshot-directory>
      <trash-directory system-property="coherence.distributed.persistence.trash.dir">/opt/datastore/staged/dtrash</trash-directory>
   </persistence-environment>
</persistence-environments>
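Each directory element carries a system-property attribute, so (as the comment above hints) the locations can be overridden per node at launch time instead of editing the XML, for example with java -Dcoherence.distributed.persistence.active.dir=/mnt/node1/active com.tangosol.net.DefaultCacheServer (the mount path here is just a hypothetical example).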
Then I configured the persistence element in the distributed cache scheme, alongside the backing-map-scheme:
<distributed-scheme>
   <scheme-name>server</scheme-name>
   <service-name>PartitionedCache</service-name>
   <local-storage system-property="coherence.distributed.localstorage">true</local-storage>
   <backing-map-scheme>
      <local-scheme>
         <high-units>{back-limit-bytes 0B}</high-units>
      </local-scheme>
   </backing-map-scheme>
   <persistence>
      <environment>stage_env_w_active_store</environment>
   </persistence>
   <autostart>true</autostart>
</distributed-scheme>
The "Active Space Used on disk (MB)" shows apt space used in JMX JVisualVM.
My app uses Hibernate and c3p0, and it isn't starting under Windows 10 + Eclipse Oxygen + Tomcat 8, while under Linux it works fine with the same configuration.
These are some of the lines the console shows when starting Tomcat:
2017/11/27 18:21:03 WARN com.mchange.v2.async.ThreadPoolAsynchronousRunner - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#15cc07ce -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
This should be a connection problem, as reported here.
This seems confirmed by an exception I get:
2017/11/27 18:21:14 WARN com.mchange.v2.resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#1e723184 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:112)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
From the postgresql log I see there are many connections, until the db says it cannot allocate more. The connection count goes far above 10, even though I have this in my hibernate.cfg.xml:
<property name="hibernate.c3p0.min_size">1</property>
<property name="hibernate.c3p0.max_size">3</property>
<property name="hibernate.c3p0.timeout">1800</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
Connecting directly with psql -h localhost -U user db works fine.
I used Wireshark and RawCap under Windows to capture the connections. It seems the connections are acquired, as the log says, but right after Postgres reports "ready for query" they are closed, as far as I can tell.
What else can I look at to debug what's happening?
It turned out that I was using the wrong PostgreSQL driver. I'm on Java 8, and switching to the PostgreSQL JDBC 4.2 driver, version 42.1.4, solved the issue.
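For what it's worth, the org.postgresql.jdbc2/jdbc3 packages in the stack trace are the giveaway of a legacy driver jar. A small sketch to print which driver actually gets picked up from the classpath (the JDBC URL is a placeholder for your database):

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverCheck {
    public static void main(String[] args) throws SQLException {
        // Asks DriverManager which registered driver claims this URL
        Driver d = DriverManager.getDriver("jdbc:postgresql://localhost:5432/db");
        System.out.println(d.getClass().getName()
                + " version " + d.getMajorVersion() + "." + d.getMinorVersion());
    }
}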
I have followed this manual to migrate from GlassFish to WildFly:
http://wildfly.org/news/2014/02/06/GlassFish-to-WildFly-migration/
However, I'm getting the following error when running my application in WildFly:
ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "exampleProject-ear-1.0-SNAPSHOT.ear")]) - failure description: {"WFLYCTL0180: Services with missing/unavailable dependencies" => [
"jboss.persistenceunit.\"exampleProject-ear-1.0-SNAPSHOT.ear/exampleProject-web-1.0-SNAPSHOT.war#exampleProjectPU\".FIRST_PHASE is missing [jboss.naming.context.java.jdbc.__TimerPool]",
"jboss.persistenceunit.\"exampleProject-ear-1.0-SNAPSHOT.ear/exampleProject-web-1.0-SNAPSHOT.war#exampleProjectPU\" is missing [jboss.naming.context.java.jdbc.__TimerPool]"
]}
The error talks about jboss.naming.context.java.jdbc.__TimerPool. Any idea what I should do? I'm using WildFly 10 and MySQL as the database.
Forget about this. __TimerPool was the name of a datasource in GlassFish, and I was using it without knowing it. I simply removed the persistence.xml file that contained it, and it worked.
Check your standalone.xml. It probably has a datasource with pool-name "exampleProjectPU", something like the following. Remove the full XML block:
<datasources>
    <datasource jndi-name="xxx:exampleProjectPU" pool-name="exampleProjectPU" enabled="true">
        <connection-url>jdbc:oracle:thin:@//host:port/SID</connection-url>
        <driver>oracle</driver>
        <security>
            <user-name></user-name>
            <password></password>
        </security>
    </datasource>
</datasources>
Go to the deployments folder and check whether there is any sample project named "example project.war". If yes, remove it and start the server again. It should work fine.
Try changing your MySQL connector jar to the bin variant, e.g. mysql-connector-java-5.1.47-bin.
Also make sure the datasource name referenced in persistence.xml is the same as the jndi-name.
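If you want to verify what a JNDI name actually resolves to at runtime, a small probe can be run from inside the container; a sketch (the lookup string is a placeholder, substitute the jndi-name from your standalone.xml):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DatasourceProbe {
    // Call from a servlet or startup bean running inside WildFly
    public static void probe() throws NamingException {
        InitialContext ctx = new InitialContext();
        // Placeholder name; use the jndi-name declared in standalone.xml
        DataSource ds = (DataSource) ctx.lookup("java:jboss/datasources/ExampleDS");
        System.out.println("Datasource bound: " + ds);
    }
}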
./bin/accumulo shell -u root
Password: ******
2015-02-14 15:18:28,503 [impl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
2015-02-14 13:58:52,878 [tserver.NativeMap] ERROR: Tried and failed to load native map library from /home/hduser/hadoop/lib/native::/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
java.lang.UnsatisfiedLinkError: no accumulo in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
at java.lang.Runtime.loadLibrary0(Runtime.java:849)
at java.lang.System.loadLibrary(System.java:1088)
at org.apache.accumulo.tserver.NativeMap.<clinit>(NativeMap.java:80)
at org.apache.accumulo.tserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:155)
at org.apache.accumulo.tserver.TabletServer.config(TabletServer.java:3560)
at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:3671)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.accumulo.start.Main$1.run(Main.java:141)
at java.lang.Thread.run(Thread.java:745)
2015-02-14 13:58:52,915 [tserver.TabletServer] ERROR: Uncaught exception in TabletServer.main, exiting
java.lang.IllegalArgumentException: Maximum tablet server map memory 83,886,080 and block cache sizes 28,311,552 is too large for this JVM configuration 48,693,248
at org.apache.accumulo.tserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:166)
at org.apache.accumulo.tserver.TabletServer.config(TabletServer.java:3560)
at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:3671)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.accumulo.start.Main$1.run(Main.java:141)
at java.lang.Thread.run(Thread.java:745)
The above error is shown in tserver_localhost.log. Can anyone help me with this issue?
I have Hadoop running in single-node mode and ZooKeeper running, and I followed the instructions in the Accumulo README.
I don't know how to start a tablet server. There was no explanation regarding this in the README; could anyone help me with this?
This is the confluence of two problems.
First, your Accumulo can't find the native libraries it would use for off-heaping the in-memory map for live edits. Knowing your version of Accumulo, how you deployed it, and seeing your accumulo-env.sh would be needed to diagnose why this failed (asking on the user mailing list would be best). Take a look at the README for your version, under the Building section, for "native map support".
For example, the passage for version 1.6.1 gives the following advice for building them yourself without a full source tree:
Alternatively, you can manually unpack the accumulo-native tarball in the
$ACCUMULO_HOME/lib directory. Change to the accumulo-native directory in
the current directory and issue make. Then, copy the resulting 'libaccumulo'
library into the $ACCUMULO_HOME/lib/native/map.
$ mkdir -p $ACCUMULO_HOME/lib/native/map
$ cp libaccumulo.* $ACCUMULO_HOME/lib/native/map
Normally, not having the native libraries available is a soft failure; Accumulo will happily issue a WARN and then rely on a pure-java implementation.
Your second problem is caused by incorrect memory configuration. Accumulo relies on a single configuration parameter to tune memory use for both the native in-memory map and the Java one. The memory for the native implementation is allocated outside of the JVM heap and can be substantial (in the 1-16GB range depending on target workload). When running with the Java implementation, that same amount is instead carved out of the max heap size.
Based on your log output, you have configured a total max heap for tabletservers of ~46MB (48,693,248 bytes), while allocating 27MB (28,311,552 bytes) for the caches and 80MB (83,886,080 bytes) for the in-memory map: 80MB + 27MB = 107MB requested against a ~46MB heap. The error you see is Accumulo refusing to start because those two values would guarantee an OOM.
You can increase the total Java Heap in accumulo-env.sh:
# Probably looks like this
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx48m -Xms48m "
# change this part to give it more memory --^^^^^^
And/or you can tune how much space should be used for the native maps, block cache, and index cache in accumulo-site.xml
<!-- Amount of space to hold incoming random writes -->
<property>
<name>tserver.memory.maps.max</name>
<value>80M</value>
</property>
<!-- Amount of space for holding blocks of data read out of HDFS -->
<property>
<name>tserver.cache.data.size</name>
<value>7M</value>
</property>
<!-- Amount of space for holding indexes read out of HDFS -->
<property>
<name>tserver.cache.index.size</name>
<value>20M</value>
</property>
How you should balance these three will depend on how much memory you have and what your workload looks like. Keep in mind that more than just those values needs to fit into your total Java heap (like at least one copy of the current cell being written or read on each RPC).
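As a back-of-the-envelope check (a sketch only, not Accumulo's exact accounting), the map and cache sizes must fit inside the JVM heap when the native maps fail to load; plugging in the values from the log reproduces the mismatch:

public class HeapBudgetCheck {
    public static void main(String[] args) {
        long maps = 80L << 20;    // tserver.memory.maps.max, in bytes (83,886,080)
        long data = 7L << 20;     // tserver.cache.data.size
        long index = 20L << 20;   // tserver.cache.index.size (data + index = 28,311,552)
        long heap = Runtime.getRuntime().maxMemory(); // the -Xmx of this JVM
        System.out.printf("need %d MiB, have %d MiB%n",
                (maps + data + index) >> 20, heap >> 20);
        if (maps + data + index > heap) {
            // Mirrors the tserver's "too large for this JVM configuration" error
            System.out.println("Will not fit; raise -Xmx or shrink the sizes");
        }
    }
}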
I have found the solution to this.
I removed all the config files from Accumulo's conf folder and ran the bootstrap_config.sh script in the bin folder, which recreated the config files based on the input I gave. After that I initialized Accumulo again, I was able to open the shell, and the error was gone.
Thanks for the help.
I'm using OpenAM, with its embedded OpenDJ as the LDAP service, to protect my web application running on JBoss 7.
When I start my JBoss I get this error:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ldapUserDN'
...
Caused by: javax.naming.NameNotFoundException: ldap/idp/userDN -- service jboss.naming.context.java.ldap.idp.userDN
So apparently Spring is looking for the JNDI node ldap/idp/userDN. But the JBoss configuration file that I got with the project has these entries:
<simple name="ldap/opendj/url" value="ldap://localhost:50389"/>
<simple name="ldap/opendj/userDN" value="cn=Directory Manager"/>
<simple name="ldap/opendj/password" value="mypassword"/>
<simple name="ldap/opendj/baseDN" value="dc=opensso,dc=java,dc=net"/>
And these properties are added to my JNDI tree on JBoss.
If I change them to "ldap/idp/userDN", for instance, the error goes away, but I was wondering whether there's somewhere that "ldap/opendj/userDN" should be mapped to "ldap/idp/userDN" that I've missed.
If you're using Spring LDAP, the actual configuration of the ldap-context-source goes in a Spring config file and might look like this:
<jee:jndi-lookup jndi-name="ldap/idp/url" id="ldapUrl"/>
<jee:jndi-lookup jndi-name="ldap/idp/userDN" id="ldapUserDN"/>
<jee:jndi-lookup jndi-name="ldap/idp/password" id="ldapPassword"/>
<jee:jndi-lookup jndi-name="ldap/idp/baseDN" id="ldapBaseDN"/>
<ldap:context-source url="#{ldapUrl}"
username="#{ldapUserDN}"
password="#{ldapPassword}"
base="#{ldapBaseDN}"
native-pooling="true"/>
So the jndi entries in your jboss config file should match the ones above.
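For comparison, a programmatic sketch of the same context source using Spring LDAP's LdapContextSource, with the values taken from the JBoss bindings quoted in the question:

import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class LdapConfig {
    public static LdapTemplate ldapTemplate() throws Exception {
        // Values copied from the <simple> JNDI bindings shown above
        LdapContextSource source = new LdapContextSource();
        source.setUrl("ldap://localhost:50389");
        source.setUserDn("cn=Directory Manager");
        source.setPassword("mypassword");
        source.setBase("dc=opensso,dc=java,dc=net");
        source.afterPropertiesSet(); // validates the configuration
        return new LdapTemplate(source);
    }
}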
We are trying to set up an ActiveMQ cluster on our production environment on Amazon EC2, with auto-discovery and multicast.
I was able to successfully configure auto-discovery with multicast on my local ActiveMQ server, but on Amazon EC2 it is not working.
From the link
I found that Amazon EC2 does not support multicast; hence we have to use HTTP transport or a VPN for multicast. I tried HTTP-based discovery by downloading activemq-optional-5.6.jar (we are using ActiveMQ 5.6). It requires the httpcore and httpclient jars in its classpath.
In the broker configuration (activemq.xml) the following connectors are added:
<networkConnectors>
    <networkConnector name="default" uri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="activemq" uri="tcp://localhost:61616" discoveryUri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</transportConnectors>
But the broker is not finding the DiscoveryRegistryServlet.
Any help is much appreciated.
Finally figured out how to set up ActiveMQ auto-discovery with HTTP.
ActiveMQ broker configuration:
In the $ACTIVEMQ_HOME/webapps folder, create a new folder structure:
activemq
|_ WEB-INF
   |_ classes
   |_ web.xml
Create the web.xml file with the following contents:
<web-app>
    <display-name>ActiveMQ Message Broker Web Application</display-name>
    <description>
        Provides an embedded ActiveMQ Message Broker embedded inside a web application
    </description>
    <!-- context config -->
    <context-param>
        <param-name>org.apache.activemq.brokerURL</param-name>
        <param-value>tcp://localhost:61617</param-value>
        <description>The URL that the embedded broker should listen on in addition to HTTP</description>
    </context-param>
    <!-- servlet mappings -->
    <servlet>
        <servlet-name>DiscoveryRegistryServlet</servlet-name>
        <servlet-class>org.apache.activemq.transport.discovery.http.DiscoveryRegistryServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>DiscoveryRegistryServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
Place httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar in the $ACTIVEMQ_HOME/lib directory.
In the $ACTIVEMQ_HOME/conf directory, modify the jetty.xml file to expose the activemq web app:
<bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
...
<property name="handler">
<bean id="sec" class="org.eclipse.jetty.server.handler.HandlerCollection">
<property name="handlers">
...
...
<bean class="org.eclipse.jetty.webapp.WebAppContext">
<property name="contextPath" value="/activemq" />
<property name="resourceBase" value="${activemq.home}/webapps/activemq" />
<property name="logUrlOnStart" value="true" />
<property name="parentLoaderPriority" value="true" />
...
...
</list>
</property>
</bean>
</property>
</bean>
Modify the activemq.xml file in the $ACTIVEMQ_HOME/conf directory to use the HTTP protocol:
<broker name="brokerName">
    ...
    <networkConnectors>
        <networkConnector name="default" uri="http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/DiscoveryRegistryServlet?group=test"/>
        <!--<networkConnector name="default-nc" uri="multicast://default"/>-->
    </networkConnectors>
    <transportConnectors>
        <transportConnector name="http" uri="tcp://0.0.0.0:61618" discoveryUri="http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test"/>
    </transportConnectors>
    ...
</broker>
Make sure that the broker names are unique. "test" in the URL is the group name of the brokers.
Client configuration:
1. Keep httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar on the client's classpath.
2. The URL to be used by the client:
discovery:(http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test)?connectionTimeout=10000
Here "test" is the group name.
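For completeness, a minimal client sketch that uses the discovery URL (the load balancer host and port are placeholders, as above):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DiscoveryClient {
    public static void main(String[] args) throws Exception {
        // "test" is the broker group name; host and port are placeholders
        String url = "discovery:(http://loadbalancer:8161/activemq/test)?connectionTimeout=10000";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        System.out.println("Connected via HTTP discovery");
        connection.close();
    }
}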