I'm new to OpenShift and currently trying to set up my WebSocket application there, but I have some issues. When I try to connect to
ws://app-domain.rhcloud.com:8000/path
I get the following exception:
Caused by: java.lang.IllegalArgumentException: No 'javax.websocket.server.ServerContainer' ServletContext attribute. Are you running in a Servlet container that
supports JSR-356?
at org.springframework.util.Assert.notNull(Assert.java:112)
at org.springframework.web.socket.server.standard.AbstractStandardUpgradeStrategy.getContainer(AbstractStandardUpgradeStrategy.java:68)
at org.springframework.web.socket.server.standard.TomcatRequestUpgradeStrategy.getContainer(TomcatRequestUpgradeStrategy.java:83)
at org.springframework.web.socket.server.standard.TomcatRequestUpgradeStrategy.getContainer(TomcatRequestUpgradeStrategy.java:46)
at org.springframework.web.socket.server.standard.AbstractStandardUpgradeStrategy.getSupportedExtensions(AbstractStandardUpgradeStrategy.java:88)
at org.springframework.web.socket.server.support.DefaultHandshakeHandler.doHandshake(DefaultHandshakeHandler.java:214)
at org.springframework.web.socket.server.support.WebSocketHttpRequestHandler.handleRequest(WebSocketHttpRequestHandler.java:127)
... 25 more
When I run the application locally, everything works fine with the following URL:
ws://localhost:8090/path
For both cases I use Tomcat 7.
Spring Config:
<websocket:handlers allowed-origins="*">
<websocket:mapping path="/fight-core" handler="webSocketHandler"/>
<websocket:handshake-interceptors>
<ref bean="webSocketHandshakeInterceptor"/>
</websocket:handshake-interceptors>
</websocket:handlers>
I've also tried with the websocket-api dependency in my pom.xml and without it.
Kindly advise.
Apparently it's not about Spring or Tomcat but about OpenShift's routing layer; it looks like WebSocket support there is still experimental, see:
* a blog post from 2012 explaining the situation
* OpenShift official documentation
For plain WebSocket connections (ws://), requests are directed to port
8000, while WebSocket Secure connections (wss://) use port 8443, as
shown in the following example:
http://example.example.com:8000
https://example.example.com:8443
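To make the port mapping concrete, here is a minimal client-side sketch using the standard JSR-356 client API (this assumes a JSR-356 client implementation such as tomcat-websocket is on the classpath; only the URI comes from the question, the rest is illustration):

import java.net.URI;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.ContainerProvider;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class WsPortCheck {
    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Plain ws:// traffic must target port 8000 on OpenShift; wss:// would use 8443.
        URI uri = URI.create("ws://app-domain.rhcloud.com:8000/path");
        Session session = container.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig config) {
                System.out.println("Connected, session id: " + session.getId());
            }
        }, ClientEndpointConfig.Builder.create().build(), uri);
        session.close();
    }
}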
Related
I am trying to connect to an Apache Ignite server from a Spring Boot application.
Example code:
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient client = Ignition.startClient(cfg)) {
    Object cachedName = client.query(
        new SqlFieldsQuery("SELECT name from Person WHERE id=?").setArgs("foo").setSchema("PUBLIC")
    ).getAll().iterator().next().iterator().next();
}
I get this error:
Caused by: class org.apache.ignite.IgniteCheckedException: Remote node
has peer class loading enabled flag different from local
[locId8=459833a1, locPeerClassLoading=true, rmtId8=83ea88ca,
rmtPeerClassLoading=false,
rmtAddrs=[ignite-0.ignite.default.svc.cluster.local/0:0:0:0:0:0:0:1%lo,
/10.4.2.49, /127.0.0.1], rmtNode=ClusterNode
[id=83ea88ca-da77-4887-9357-267ac7397767, order=1,
addr=[0:0:0:0:0:0:0:1%lo, 10.x.x.x, 127.0.0.1], daemon=false]]
So peer class loading needs to be deactivated in my Java code. How can I do that?
As noted in the comments, the error is from a thick client (or another server) connecting to the cluster, but the code is from a thin client.
If you’re just reading/writing data and don’t need to execute code, the thin client is a perfectly good option.
To use a thick client, you need to make sure both the thick client and server have the same peer-class loading configuration. That would be either:
<property name="peerClassLoadingEnabled" value="false" />
in your Spring configuration file. Or:
IgniteConfiguration cfg = new IgniteConfiguration()
...
.setPeerClassLoadingEnabled(false);
(I’ve used false here as that’s your current server configuration. Having said that, you probably want it to be switched on.)
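For completeness, here is a minimal thick-client sketch with peer class loading matching the server's current setting (the class name is mine; only the setPeerClassLoadingEnabled call comes from the answer above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThickClientSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)                 // join the cluster as a thick client node
            .setPeerClassLoadingEnabled(false);  // must match the server-side setting

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Joined cluster of " + ignite.cluster().nodes().size() + " node(s)");
        }
    }
}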
Having deployed the activemq-web-console war into an embedded Tomcat application, how can one make it connect to an existing broker rather than create a new one?
The war comes with a set of predefined configurations; in particular, WEB-INF/activemq.xml contains a configuration for the BrokerService:
<broker brokerName="web-console" useJmx="true" xmlns="http://activemq.apache.org/schema/core">
<persistenceAdapter><kahaDB directory="target/kahadb"/></persistenceAdapter>
<transportConnectors>
<transportConnector uri="tcp://localhost:12345"/>
</transportConnectors>
</broker>
used from webconsole-embedded.xml in the following manner:
<bean id="brokerService" class="org.apache.activemq.xbean.BrokerFactoryBean">
<property name="config" value="/WEB-INF/activemq.xml"/>
</bean>
This configuration creates a new instance of BrokerService and tries to start the broker.
It is reported that the web console can be used to monitor an existing broker service rather than creating a new one. For this, one should set the following properties somewhere:
webconsole.type=properties
webconsole.jms.url=tcp://localhost:61616
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-trun
The question is: where does one have to set these properties within the embedded Tomcat app, and which changes to the XML above have to be made for them to be used? I cannot find any sensible explanation of how to configure this, and a BrokerService instance seems to be required by the remaining Spring config.
Any ideas?
Please do not suggest to use hawtio instead!
I had the same problem today. You can start the web console in "properties" mode, which gives you the opportunity to connect over JMX.
I added the following Java arguments to our JBoss 6.1 and it worked immediately. I didn't change any of the XMLs (it works out of the box)...
Example:
-Dwebconsole.type=properties -Dwebconsole.jms.url=tcp://<hostname>:61616 -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://<hostname>:1090/jmxrmi -Dwebconsole.jmx.user=admin -Dwebconsole.jmx.password=123456
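If you are embedding Tomcat yourself rather than passing JVM arguments, setting the same system properties programmatically before the war is deployed should presumably work as well. A minimal sketch (the property values are placeholders and the launcher class is mine, not part of the web console):

public class WebConsoleLauncher {
    public static void main(String[] args) throws Exception {
        // Same properties as the -D arguments above, set before Tomcat starts.
        System.setProperty("webconsole.type", "properties");
        System.setProperty("webconsole.jms.url", "tcp://localhost:61616");
        System.setProperty("webconsole.jmx.url",
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        System.setProperty("webconsole.jmx.user", "admin");
        System.setProperty("webconsole.jmx.password", "secret");
        // ... then start the embedded Tomcat and deploy the activemq-web-console war
    }
}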
Also discussed here: https://svn.apache.org/repos/infra/websites/production/activemq/content/5.7.0/web-console.html
Questions
* Why does it use localhost?
* What does Keystone have to do with it?
* I can't seem to configure a Keystone endpoint.
Context
* App: Spring Boot (1.5.6) REST API
* Hibernate 5.2
* Hazelcast 3.9 - as 2nd-level cache only
* hazelcast-jclouds 3.7.1
* jclouds-compute and jclouds-allcompute 2.0.2
* OpenStack cloud for VMs running the app
The Setup
I have my hazelcast.xml configured as follows:
<discovery-strategies>
<discovery-strategy class="com.hazelcast.jclouds.JCloudsDiscoveryStrategy" enabled="true">
<properties>
<property name="modules">org.jclouds.logging.slf4j.config.SLF4JLoggingModule</property>
<property name="provider">openstack-nova</property>
<property name="endpoint">http://dev.nova.cloud.youdontknow.net:8774/v2/</property>
<property name="identity">redacted</property>
<property name="credential">cens0red</property>
</properties>
</discovery-strategy>
</discovery-strategies>
The problem
App initialization fails. Here are some log tidbits:
[TRACE] o.j.r.internal.RestAnnotationProcessor : looking up default endpoint for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.r.internal.RestAnnotationProcessor : using default endpoint Optional.of(http://localhost:5000/v2.0/) for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.rest.internal.InvokeHttpMethod : << converted AuthenticationApi.authenticateWithTenantNameAndCredentials to POST http://localhost:5000/v2.0/tokens HTTP/1.1
And here's bits of the exception stack traces:
Caused by: com.hazelcast.core.HazelcastException: Failed to get registered addresses
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverNodes(JCloudsDiscoveryStrategy.java:93)
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverLocalMetadata(JCloudsDiscoveryStrategy.java:106)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.discoverLocalMetadata(DefaultDiscoveryService.java:91)
...
Caused by: org.jclouds.http.HttpResponseException: Connection refused: connect connecting to POST http://localhost:5000/v2.0/tokens HTTP/1.1
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:122)
...
at com.sun.proxy.$Proxy147.authenticateWithTenantNameAndCredentials(Unknown Source)
at org.jclouds.openstack.keystone.v2_0.functions.AuthenticatePasswordCredentials.authenticateWithTenantName(AuthenticatePasswordCredentials.java:43)
Other Notes
Looks like it's using the default keystone address configured in org.jclouds.openstack.keystone.v2_0.KeystoneApiMetadata - but I don't know how that's involved.
Looking at the code, I think hazelcast-jclouds is not prepared to manage generic APIs. When connecting to a provider you don't need to specify the endpoint, as it is well-known (the AWS endpoints, Google, Azure, etc.), but when using generic APIs such as OpenStack or CloudStack you need to tell jclouds where to connect. Unfortunately, it looks like hazelcast-jclouds lacks support for configuring custom endpoints for generic APIs.
A quick look at the code suggests that it could be easy to add, though. The properties that are taken into account are defined in the JCloudsDiscoveryStrategyFactory, and then read in the ComputeServiceBuilder to create the jclouds context.
I'm not familiar with Hazelcast, but I'd say that adding a definition for the "endpoint" property and then, if present, configuring it by calling the jclouds contextBuilder.endpoint(endpoint) method should do the trick.
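To illustrate, a rough sketch of what that change might look like in a patched builder class (the class and method names are mine; only ContextBuilder and its endpoint(...) method are the actual jclouds API being referred to):

import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeServiceContext;

public class EndpointAwareComputeServiceBuilder {
    public ComputeServiceContext build(String provider, String identity,
                                       String credential, String endpoint) {
        ContextBuilder builder = ContextBuilder.newBuilder(provider)
            .credentials(identity, credential);
        if (endpoint != null && !endpoint.isEmpty()) {
            // The missing piece: point jclouds at the generic API endpoint
            // (e.g. the openstack-nova URL from hazelcast.xml) instead of the
            // default Keystone address baked into KeystoneApiMetadata.
            builder.endpoint(endpoint);
        }
        return builder.buildView(ComputeServiceContext.class);
    }
}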
I am facing an issue connecting Spring Data Solr to Solr running on OpenShift.
<solr:solr-server id="solrServer"
url="http://solr-dashapramathi.rhcloud.com/" />
<bean id="solrTemplate" class="org.springframework.data.solr.core.SolrTemplate"
scope="singleton">
<constructor-arg ref="solrServer" />
</bean>
is my configuration. I have also tried the URL "http://solr-dashapramathi.rhcloud.com/#/dashapramathi". I am running Solr 4.10.1 on OpenShift.
The error is as below:
IOException occured when talking to server at: http://solr-dashapramathi.rhcloud.com/dashapramathi; nested exception is org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://solr-dashapramathi.rhcloud.com/dashapramathi
Caused by:
org.springframework.data.solr.UncategorizedSolrException: IOException occured when talking to server at: http://solr-dashapramathi.rhcloud.com/dashapramathi; nested exception is org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://solr-dashapramathi.rhcloud.com/dashapramathi
at org.springframework.data.solr.core.SolrTemplate.execute(SolrTemplate.java:136)
at org.springframework.data.solr.core.SolrTemplate.saveBean(SolrTemplate.java:175)
at org.springframework.data.solr.core.SolrTemplate.saveBean(SolrTemplate.java:169)
at org.springframework.data.solr.repository.support.SimpleSolrRepository.save(SimpleSolrRepository.java:149)
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to solr-dashapramathi.rhcloud.com:80 timed out
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:129)
Could anyone please help me?
Judging by the documentation listed here, http://docs.spring.io/spring-data/solr/docs/1.2.4.RELEASE/reference/html/solr.repositories.html, it looks like it tries to connect through port 8993, which is why you would be receiving the timeout.
OpenShift only allows external connections to be made to ports 80, 8080, 443, and 8443. For connections inside the gear you can alter things to connect to more ports, as per the following OpenShift doc: https://help.openshift.com/hc/en-us/articles/202185874.
I had the same problem, and I've managed to resolve it just by changing the timeout in the definition of the Solr server.
So, for your case it would be:
<solr:solr-server id="solrServer"
    url="http://solr-dashapramathi.rhcloud.com/" timeout="1000"/>
or you can set a higher value for the timeout, depending on your connection.
The 8993 port in the Spring Data Solr documentation is probably just the port that the application server hosting Solr runs on.
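For anyone using Java configuration instead of the XML namespace, an equivalent sketch with the same timeout would look roughly like this (assuming SolrJ 4.x's HttpSolrServer and Spring Data Solr 1.x; the timeout attribute above presumably maps to one of these setters):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.solr.core.SolrTemplate;

@Configuration
public class SolrConfig {

    @Bean
    public HttpSolrServer solrServer() {
        HttpSolrServer server = new HttpSolrServer("http://solr-dashapramathi.rhcloud.com/");
        server.setConnectionTimeout(1000); // connect timeout in milliseconds
        server.setSoTimeout(1000);         // socket read timeout in milliseconds
        return server;
    }

    @Bean
    public SolrTemplate solrTemplate() {
        return new SolrTemplate(solrServer());
    }
}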
I am using the JMeter JMS Point-to-Point sampler with a queue for load testing.
But I am getting the following error:
javax.naming.NamingException: Failed to create remoting connection [Root exception is java.lang.RuntimeException: javax.security.sasl.SaslException: Authentication failed: all available authentication mechanisms failed]
I am using JMeter version 2.11.
I added the username and password in the JNDI properties, but it is still not working. Here is the configuration I am using:
QueueConnectionFactory: RemoteConnectionFactory
initial context factory: org.jboss.naming.remote.client.InitialContextFactory
url : remote://localhost:4447
JNDI Properties:
username: ..............
password: ...........
Your JNDI properties seem wrong; check this:
http://docs.oracle.com/cd/E19182-01/820-7853/ghyco/index.html
The login/password properties are:
java.naming.security.principal
The identity of the principal for authenticating the caller to the service. For more information, see the Java API documentation for javax.naming.Context.SECURITY_PRINCIPAL.
java.naming.security.credentials
The credentials of the principal for authenticating the caller to the service. For more information, see the Java API documentation for javax.naming.Context.SECURITY_CREDENTIALS.
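As a rough illustration of how those two properties fit the JBoss remote naming setup from the question, the following standalone sketch performs the same kind of lookup JMeter does (host, port, credentials, and the lookup name jms/RemoteConnectionFactory are placeholders/assumptions, not values from the original post):

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiLoginSketch {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
            "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://localhost:4447");
        // These are the two properties described above; in JMeter they go into
        // the sampler's JNDI Properties table.
        env.put(Context.SECURITY_PRINCIPAL, "applicationUser");      // java.naming.security.principal
        env.put(Context.SECURITY_CREDENTIALS, "applicationPassword"); // java.naming.security.credentials
        Context ctx = new InitialContext(env);
        Object cf = ctx.lookup("jms/RemoteConnectionFactory");
        System.out.println("Looked up connection factory: " + cf);
    }
}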
I encountered a similar problem while using JMeter with Solace; hope this helps someone with a similar issue.
For Solace JMS testing you need to use a JNDI properties file, since there is no placeholder for the VPN name. The JNDI properties file will look something like this:
java.naming.factory.initial=com.solacesystems.jndi.SolJNDIInitialContextFactory
java.naming.provider.url=<IP:port>
Solace_JMS_VPN=<VPN Name>
java.naming.security.principal=<username>
java.naming.security.credentials=<password>
Here the JNDI properties have to be packaged as a JAR file and placed in the JMeter lib folder in order to be picked up at runtime:
jar cvf my-jndi-properties.jar jndi.properties
Hope this helps.