I have an HTTP2C (clear-text HTTP/2) embedded Jetty 9.x server running ... note the server connector shows h2c ...
2016-03-21 09:25:44.082:INFO:oejs.ServerConnector:main: Started ServerConnector@66c7bd3f{HTTP/1.1,[http/1.1, h2c, h2c-17, h2c-16, h2c-15, h2c-14]}{0.0.0.0:8080}
I have an OkHttpClient 3 client attempting to talk HTTP2C to this server; however, it always gets downgraded to HTTP/1.1. What am I missing? Which Java client API supports HTTP2C? My client code is below ...
package http2;
import java.util.Collections;
import okhttp3.ConnectionSpec;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
public class GetClear {
public static void main(String[] args) throws Exception {
ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT).build();
OkHttpClient client = new OkHttpClient.Builder().connectionSpecs(Collections.singletonList(spec)).build();
Request request = new Request.Builder().url("http://localhost:8080/test").build();
Response response = client.newCall(request).execute();
System.out.println (response.body().string());
System.out.println("****");
response.body().close();
}
}
[The server prints request.getProtocol() from a Jetty servlet, and that shows HTTP/1.1 instead of HTTP/2.]
An HTTP/2 server and client over TLS work just fine (the client code and server code are different, of course).
Any help will be truly appreciated.
Using a Jetty HTTP2C client, the same server code works (see the client sketch after the server example below). I guess OkHttpClient does not support HTTP2C.
A complete h2c server example, using the HelloHandler from the Jetty documentation:
public class HelloServer {
public static class HelloHandler extends AbstractHandler {
final String greeting;
final String body;
public HelloHandler() {
this("Hello World");
}
public HelloHandler(String greeting) {
this(greeting, null);
}
public HelloHandler(String greeting, String body) {
this.greeting = greeting;
this.body = body;
}
public void handle(String target,
Request baseRequest,
HttpServletRequest request,
HttpServletResponse response) throws IOException,
ServletException {
response.setContentType("text/html; charset=utf-8");
response.setStatus(HttpServletResponse.SC_OK);
PrintWriter out = response.getWriter();
out.println("<h1>" + greeting + "</h1>");
if (body != null) {
out.println(body);
}
baseRequest.setHandled(true);
}
}
public static void main(String[] args) throws Exception {
Server server = new Server();
server.setHandler(new HelloHandler());
HttpConfiguration httpConfig = new HttpConfiguration();
ConnectionFactory h1 = new HttpConnectionFactory(httpConfig);
ConnectionFactory h2c = new HTTP2CServerConnectionFactory(httpConfig);
ServerConnector serverConnector = new ServerConnector(server, h1, h2c);
serverConnector.setPort(8080);
server.setConnectors(new ServerConnector[] { serverConnector });
server.start();
server.join();
}
}
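For reference, the Jetty HTTP2C client mentioned above can look like the following. This is a minimal sketch, assuming the Jetty 9.x http2-client and http2-http-client-transport artifacts are on the classpath (the HelloClient class name is just for illustration); because no SslContextFactory is passed, the transport speaks clear-text HTTP/2 (h2c) directly to the server started above:
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
public class HelloClient {
    public static void main(String[] args) throws Exception {
        // HTTP/2 transport without TLS: the client uses clear-text HTTP/2 (h2c)
        HTTP2Client http2Client = new HTTP2Client();
        HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), null);
        client.start();
        try {
            ContentResponse response = client.GET("http://localhost:8080/");
            // should print HTTP/2.0 and 200 when h2c is used
            System.out.println(response.getVersion() + " " + response.getStatus());
            System.out.println(response.getContentAsString());
        } finally {
            client.stop();
        }
    }
}
With a client like this, the servlet's request.getProtocol() should report HTTP/2.0 rather than HTTP/1.1.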
The Jetty log line shows that you have configured the server connector with HTTP/1.1 as the default protocol (that is the upper-case "HTTP/1.1" before the brackets containing the list of supported protocols).
You don't show your server-side code, but you have two choices:
Explicitly configure the default protocol for the server connector:
serverConnector.setDefaultProtocol("h2c");
Pass the ConnectionFactory objects in the right order to the server connector, since the first one will be the default protocol:
HttpConfiguration httpConfig = new HttpConfiguration();
ConnectionFactory h1 = new HttpConnectionFactory(httpConfig);
ConnectionFactory h2c = new HTTP2CServerConnectionFactory(httpConfig);
ServerConnector serverConnector = new ServerConnector(server, h2c, h1);
Related
I'm migrating from the HLRC (High Level REST Client) to the new Elasticsearch Java API Client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration{
@Autowired
private InternalProperties conf;
public ElasticsearchClient sslClient(){
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
try {
SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setSSLContext(sslContext)
.setDefaultCredentialsProvider(credentialsProvider);
}
});
} catch (Exception e) {
e.printStackTrace();
}
RestClient restClient=restClientBuilder.build();
ElasticsearchTransport transport = new RestClientTransport(
restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);
return client;
}
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties{
public ThisDtoIndexClass() {
}
//client is declared in the class it's extending from
public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
this.client = esClient.sslClient();
}
@KafkaListener(topics = "esTopic")
public void in(@Payload(required = false) customDto doc)
throws ThisDtoIndexClassException, ElasticsearchException, IOException {
if (doc != null && doc.getId() != null) {
IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
indexReqBuilder.index("index-for-this-Dto");
indexReqBuilder.id(doc.getId());
indexReqBuilder.document(doc);
IndexResponse response = client.index(indexReqBuilder.build());
} else {
throw new ThisDtoIndexClassException("document is null");
}
}
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client and/or the Kafka listener is set up. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, Spring uses it instead of the extended constructor (or straight up doesn't acknowledge the extended one), so my client was always null. The error message it gave me was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring use the correct constructor, and I was able to index again. I assume this was a Spring Boot loading-related "issue".
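In other words, the fix is to leave only the injecting constructor, so Spring has no no-arg constructor to pick instead. A minimal sketch of the resulting class, using the same (hypothetical) names as in the question:
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    // no default constructor any more: Spring must use this one,
    // so the inherited client field is always initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
    // the @KafkaListener method stays exactly as shown in the question
}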
From reading the spring-boot docs, it seems like the standard way to customize the Jetty server is to implement a class like the following:
@Component
public class JettyServerCustomizer
implements WebServerFactoryCustomizer<JettyServletWebServerFactory> {
@Autowired
private ServerProperties serverProperties;
@Override
public void customize(final JettyServletWebServerFactory factory) {
factory.addServerCustomizers((server) -> {
// Customize
});
}
}
I'm specifically interested in modifying the SSLContextFactory.
Tracing through the Spring Boot code, right before the customizers are called, SSL is configured:
if (getSsl() != null && getSsl().isEnabled()) {
customizeSsl(server, address);
}
for (JettyServerCustomizer customizer : getServerCustomizers()) {
customizer.customize(server);
}
customizeSsl is a private method, so it cannot be overridden easily:
private void customizeSsl(Server server, InetSocketAddress address) {
new SslServerCustomizer(address, getSsl(), getSslStoreProvider(), getHttp2()).customize(server);
}
One option is to create the context factory and connector ourselves in the customizer and then overwrite the connectors on the server. This would probably work, but it feels like we are re-creating a bunch of code that Spring Boot already has, just to be able to call a method on the SslContextFactory.
It seems like if we could somehow provide our own SslServerCustomizer, then we could do the custom configuration we want.
Does anyone know of a better way to do this?
In my case it works just fine as:
@SpringBootApplication
@ComponentScan(basePackages = { "org.demo.jetty.*" })
public class DemoWebApplication {
public static void main(String[] args) {
SpringApplication.run(DemoWebApplication.class, args);
}
@Bean
public ConfigurableServletWebServerFactory webServerFactory() {
JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
factory.setContextPath("/demo-app");
factory.addServerCustomizers(getJettyConnectorCustomizer());
return factory;
}
private JettyServerCustomizer getJettyConnectorCustomizer() {
return server -> {
final HttpConfiguration httpConfiguration = new HttpConfiguration();
httpConfiguration.setSecureScheme("https");
httpConfiguration.setSecurePort(44333);
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStoreType("PKCS12");
sslContextFactory.setKeyStorePath("C:/jetty-demo/demo_cert.p12");
sslContextFactory.setKeyStorePassword("*****");
sslContextFactory.setKeyManagerPassword("****");
final HttpConfiguration httpsConfiguration = new HttpConfiguration(httpConfiguration);
httpsConfiguration.addCustomizer(new SecureRequestCustomizer());
ServerConnector httpsConnector = new ServerConnector(server,
new SslConnectionFactory(sslContextFactory, HttpVersion.HTTP_1_1.asString()),
new HttpConnectionFactory(httpsConfiguration));
httpsConnector.setPort(44333);
server.setConnectors(new Connector[] { httpsConnector });
server.setStopAtShutdown(true);
server.setStopTimeout(5_000);
};
}
}
You can also define an HTTP connector and add it in the customizer section:
...
ServerConnector connector = new ServerConnector(server);
connector.addConnectionFactory(new HttpConnectionFactory(httpConfiguration));
connector.setPort(8081);
server.setConnectors(new Connector[]{connector, httpsConnector});
...
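If the goal is only to tweak the SslContextFactory that Spring Boot has already configured (as in the original question), an alternative is to locate it on the existing connectors instead of rebuilding them. This is a sketch under that assumption, not a definitive implementation; the setExcludeProtocols call is just a placeholder for whatever customization is actually needed:
import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class SslCustomizerConfig {
    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> sslContextFactoryCustomizer() {
        return factory -> factory.addServerCustomizers(server -> {
            for (Connector connector : server.getConnectors()) {
                // find the SslConnectionFactory that Spring Boot created, if any
                SslConnectionFactory ssl = connector.getConnectionFactory(SslConnectionFactory.class);
                if (ssl != null) {
                    SslContextFactory sslContextFactory = ssl.getSslContextFactory();
                    // example tweak: exclude old TLS protocol versions
                    sslContextFactory.setExcludeProtocols("TLSv1", "TLSv1.1");
                }
            }
        });
    }
}
This relies on the ordering shown earlier: customizeSsl runs before the customizers, so the SSL connector already exists when the customizer is called.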
I am using the Java RestHighLevelClient to connect to my Elasticsearch instance hosted on AWS. I can make requests against the URL in Postman and from my browser just fine, but when I use the client library I receive
java.net.ConnectException: Connection Refused.
(I don't currently need any authentication as this is a small public test instance). This is my code:
RestHighLevelClient restHighLevelClient = new RestHighLevelClient(restClientBuilder);
GetRequest getRequest = new GetRequest("some_index", "some_type","some_id");
final String[] elasticGetResponse = new String[1];
restHighLevelClient.getAsync(getRequest, new ActionListener<GetResponse>() {
@Override
public void onResponse(GetResponse documentFields) {
try {
elasticGetResponse[0] = restHighLevelClient.get(getRequest).toString();
}
catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void onFailure(Exception e) {
e.printStackTrace();
}
});
Please let me know how I can fix this... thanks!
Update: Here is my code for the restClientBuilder:
MySSLHelper sslHelper = new MySSLHelper(SSLConfig.builder()
.withKeyStoreProvider(myKeyStoreProvider)
.withTrustStoreProvider(InternalTrustStoreProvider.INSTANCE)
.build());
RestClientBuilder restClientBuilder = RestClient.builder(new HttpHost("MY_ELASTICSEARCH_ENDPOINT")).setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpAsyncClientBuilder) {
return httpAsyncClientBuilder.setSSLContext(sslHelper.getContext());
}
});
I had the same problem and solved it by specifying the port and protocol, as shown on this page:
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.0/java-rest-high-getting-started-initialization.html
My code ended up like this:
RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticsearchHost, 9200, "http")));
Please try to do something like this:
RestClientBuilder restClientBuilder = RestClient.builder(new HttpHost("MY_ELASTICSEARCH_ENDPOINT", MY_ELASTICSEARCH_PORT, "MY_ELASTICSEARCH_PROTOCOL"))...
Hope this helps.
Good bye.
Using Spring Integration FTP with annotation configuration, I download files from the FTP server. After the download, our application still keeps polling the server to look for newly added files and downloads them if any are found. But I don't want to keep the FTP session alive; I want to disconnect from the server after the first connection, once the files have been downloaded the first time.
Code:
public class FtpServices {
@Bean(name = "ftpSessionFactory")
public DefaultFtpSessionFactory ftpSessionFactory() {
System.out.println("session");
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(21);
sf.setUsername("user");
sf.setPassword("password");
return sf;
}
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
System.out.println("2");
FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.afterPropertiesSet();
fileSynchronizer.setRemoteDirectory("/test/");
// fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.docx"));
fileSynchronizer.setFilter(filter);
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(value = "ftpChannel", poller = @Poller(fixedDelay = "50", maxMessagesPerPoll = "1"))
public FtpInboundFileSynchronizingMessageSource ftpMessageSource() {
System.out.println(3);
FtpInboundFileSynchronizingMessageSource source =
new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
source.setLocalDirectory(new File("D:/Test-downloaded/"));
//source.stop();
return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel", requiresReply = "false")
public MessageHandler handler() {
System.out.println(4);
MessageHandler handler = new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload()+" #ServiceActivator");
System.out.println(" Message Header :"+message.getHeaders());
}
};
return handler;
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setTrigger(triggerOnlyOnce());
return pollerMetadata;
}
}
I also overrode AbstractFtpSessionFactory to trace the FTP server connection and disconnection process:
protected void postProcessClientAfterConnect(T t) throws IOException {
System.out.println("After connect");
}
protected void postProcessClientBeforeConnect(T client) throws IOException {
System.out.println("Before connect");
}
Console:
INFO : org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase -2147483648
INFO : org.springframework.context.support.DefaultLifecycleProcessor - Starting beans in phase 0
Before connect
After connect
D:\Test-downloaded\demo 1.txt #ServiceActivator
Message Header :{id=e4a1fd7f-0bbf-9692-f70f-b0ac68b4dec4, timestamp=1477317086272}
D:\Test-downloaded\demo.txt #ServiceActivator
Message Header :{id=9115ee92-12b4-bf1f-d592-9c13bf7a27fa, timestamp=1477317086324}
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Before connect
After connect
Thanks.
That is really the purpose of any @InboundChannelAdapter: poll the target system for new data periodically.
To do that only once, we sometimes suggest an OnlyOnceTrigger:
public class OnlyOnceTrigger implements Trigger {
private final AtomicBoolean done = new AtomicBoolean();
@Override
public Date nextExecutionTime(TriggerContext triggerContext) {
return !this.done.getAndSet(true) ? new Date() : null;
}
}
But this might not work for your case, because the desired files might not be in the source FTP directory yet.
Therefore you have to keep polling until you receive the required files, and .stop() the adapter when that condition is met.
For this purpose you can use any downstream logic to determine the state, or consider implementing an AbstractMessageSourceAdvice to be injected into the PollerMetadata of the @Poller: http://docs.spring.io/spring-integration/reference/html/messaging-channels-section.html#conditional-pollers
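A minimal sketch of such an advice, assuming a Spring Integration version that provides AbstractMessageSourceAdvice (the class name here is made up): it lets the poller run until at least one file has been received and then suppresses further polls.
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.integration.aop.AbstractMessageSourceAdvice;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.Message;
public class StopAfterFirstFetchAdvice extends AbstractMessageSourceAdvice {
    private final AtomicBoolean received = new AtomicBoolean();
    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        // returning false skips the receive() call for this poll
        return !this.received.get();
    }
    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result != null) {
            // a file was fetched; stop polling from now on
            this.received.set(true);
        }
        return result;
    }
}
It could then be registered, for example, via pollerMetadata.setAdviceChain(Collections.singletonList(new StopAfterFirstFetchAdvice())) in the DEFAULT_POLLER bean from the question.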
Library in use:
AsyncHttpClient Library:
Version : 1.9.32
Location: https://github.com/AsyncHttpClient/async-http-client
Netty Version : 3.10.3.Final
Proxy: Squid Proxy
I am trying to create a websocket connection using the AsyncHttpClient library. It works fine without the proxy.
But when I start a proxy and pass in the host, port, username and password, I am unable to create a websocket connection.
I get a stack trace which says Invalid Status Code 400:
Caused by: java.lang.IllegalStateException: Invalid Status Code 400
at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:76)
at com.ning.http.client.ws.WebSocketUpgradeHandler.onCompleted(WebSocketUpgradeHandler.java:29)
at com.ning.http.client.providers.netty.future.NettyResponseFuture.getContent(NettyResponseFuture.java:177)
at com.ning.http.client.providers.netty.future.NettyResponseFuture.done(NettyResponseFuture.java:214)
... 35 more
I am setting the proxy object like this:
ProxyServer ps = new ProxyServer("host-name",portNo,"user_name","password");
AsyncHttpClientConfig cf = new AsyncHttpClientConfig.Builder().setProxyServer(ps).build();
AsyncHttpClient c = new AsyncHttpClient(cf); // client built from the config above
WebSocket websocket = c.prepareGet(url)
.execute(new WebSocketUpgradeHandler.Builder().addWebSocketListener(
new WebSocketTextListener() {
@Override
public void onMessage(String message) {
}
@Override
public void onFragment(String s, boolean b) {
}
@Override
public void onOpen(WebSocket websocket) {
}
@Override
public void onClose(WebSocket websocket) {
}
@Override
public void onError(Throwable t) {
}
}
).build()
).get();
Are there any other steps to configure a proxy for websocket connections?
I have also tried configuring the ProxyServer object like this:
ProxyServer ps = new ProxyServer(ProxyServer.Protocol.HTTPS,"host-name",portNo,"user_name","password");