Usage examples for Jetty's ProxyServlet.Transparent class

I am trying to use Jetty 7 to build a transparent proxy setup. The idea is to hide origin servers behind the Jetty server so that incoming requests can be forwarded transparently to the origin servers.
I want to know if I can use Jetty's ProxyServlet.Transparent implementation to do so. If so, can anyone give me some examples?

This example is based on Jetty 9. If you want to implement this with Jetty 8, implement the proxyHttpURI method instead (see the Jetty 8 javadocs). Here is some sample code.
import java.io.IOException;
import java.net.URI;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.eclipse.jetty.servlets.ProxyServlet;

/**
 * When a request cannot be satisfied on the local machine, it is asynchronously
 * proxied to the destination box. The routing rule is defined in getProxyTo().
 */
public class ContentBasedProxyServlet extends ProxyServlet {

    private int remotePort = 8080;

    public void setPort(int port) {
        this.remotePort = port;
    }

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    @Override
    public void service(ServletRequest request, ServletResponse response) throws IOException, ServletException {
        super.service(request, response);
    }

    /**
     * Applicable to Jetty 9+ only.
     */
    @Override
    protected URI rewriteURI(HttpServletRequest request) {
        String proxyTo = getProxyTo(request);
        if (proxyTo == null)
            return null;
        // getRequestURI() already starts with '/', so don't insert another one.
        String path = request.getRequestURI();
        String query = request.getQueryString();
        if (query != null)
            path += "?" + query;
        return URI.create(proxyTo + path).normalize();
    }

    private String getProxyTo(HttpServletRequest request) {
        /*
         * Implement this method: all the magic happens here. Use this method to
         * figure out your destination machine address. You can maintain a static
         * list of addresses and, depending on the URI or request content, route
         * your request transparently.
         */
        return null; // TODO: return the scheme://host:port of the chosen origin server
    }
}
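For illustration, here is one hedged sketch of a getProxyTo implementation that routes by path prefix against a static table (the prefixes and origin hosts are made up, and java.util.HashMap/Map imports are needed):
private static final Map<String, String> ROUTES = new HashMap<>();
static {
    ROUTES.put("/images", "http://image-origin.internal:8080");
    ROUTES.put("/api", "http://api-origin.internal:8080");
}

private String getProxyTo(HttpServletRequest request) {
    // Pick the origin server whose prefix matches the request path.
    for (Map.Entry<String, String> route : ROUTES.entrySet()) {
        if (request.getRequestURI().startsWith(route.getKey())) {
            return route.getValue();
        }
    }
    return null; // no match: let the request terminate locally
}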
Furthermore, you can implement a Filter that determines whether the request should terminate on the local machine or on the destination machine. If the request is meant for the remote machine, forward it to this servlet.
// Declare this method call in the filter.
request.getServletContext()
.getNamedDispatcher("ContentBasedProxyServlet")
.forward(request, response);
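A minimal sketch of such a Filter (the isLocal check is a placeholder for your own routing logic, and the servlet is assumed to be registered under the name ContentBasedProxyServlet):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class ProxyDecisionFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (isLocal((HttpServletRequest) request)) {
            chain.doFilter(request, response); // terminate on the local machine
        } else {
            request.getServletContext()
                    .getNamedDispatcher("ContentBasedProxyServlet")
                    .forward(request, response);
        }
    }

    @Override
    public void destroy() {
    }

    private boolean isLocal(HttpServletRequest request) {
        // Placeholder: decide based on the URI, headers, or request content.
        return request.getRequestURI().startsWith("/local");
    }
}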

Related

How can I implement slf4j MDC in Spring Boot WebFlux [duplicate]

I referenced the blog post Contextual Logging with Reactor Context and MDC, but I don't know how to access the Reactor context in a WebFilter.
@Component
public class RequestIdFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        List<String> myHeader = exchange.getRequest().getHeaders().get("X-My-Header");
        if (myHeader != null && !myHeader.isEmpty()) {
            MDC.put("myHeader", myHeader.get(0));
        }
        return chain.filter(exchange);
    }
}
Here's one solution based on the latest approach, as of May 2021, taken from the official documentation:
import java.util.List;
import java.util.UUID;
import java.util.function.Consumer;

import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpHeaders;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Signal;
import reactor.util.context.Context;

@Slf4j
@Configuration
public class RequestIdFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        ServerHttpRequest request = exchange.getRequest();
        String requestId = getRequestId(request.getHeaders());
        return chain
                .filter(exchange)
                .doOnEach(logOnEach(r -> log.info("{} {}", request.getMethod(), request.getURI())))
                .contextWrite(Context.of("CONTEXT_KEY", requestId));
    }

    private String getRequestId(HttpHeaders headers) {
        List<String> requestIdHeaders = headers.get("X-Request-ID");
        return requestIdHeaders == null || requestIdHeaders.isEmpty()
                ? UUID.randomUUID().toString()
                : requestIdHeaders.get(0);
    }

    public static <T> Consumer<Signal<T>> logOnEach(Consumer<T> logStatement) {
        return signal -> {
            String contextValue = signal.getContextView().get("CONTEXT_KEY");
            try (MDC.MDCCloseable cMdc = MDC.putCloseable("MDC_KEY", contextValue)) {
                logStatement.accept(signal.get());
            }
        };
    }

    public static <T> Consumer<Signal<T>> logOnNext(Consumer<T> logStatement) {
        return signal -> {
            if (!signal.isOnNext()) return;
            String contextValue = signal.getContextView().get("CONTEXT_KEY");
            try (MDC.MDCCloseable cMdc = MDC.putCloseable("MDC_KEY", contextValue)) {
                logStatement.accept(signal.get());
            }
        };
    }
}
Given you have the following line in your application.properties:
logging.pattern.level=[%X{MDC_KEY}] %5p
then every time an endpoint is called your server logs will contain a log like this:
2021-05-06 17:07:41.852 [60b38305-7005-4a05-bac7-ab2636e74d94] INFO 20158 --- [or-http-epoll-6] my.package.RequestIdFilter : GET http://localhost:12345/my-endpoint/444444/
Every time you want to manually log something within a reactive context, you will have to add the following to your reactive chain:
.doOnEach(logOnNext(r -> log.info("Something")))
If you want the X-Request-ID to be propagated to other services for distributed tracing, you need to read it from the reactive context (not from MDC) and wrap your WebClient code with the following:
Mono.deferContextual(
        ctx -> {
            RequestHeadersSpec<?> request = webClient.get().uri(uri);
            request = request.header("X-Request-ID", ctx.get("CONTEXT_KEY"));
            // The rest of your request logic, which must return a Publisher, e.g.:
            return request.retrieve().bodyToMono(String.class);
        });
You can do something similar to the example below. You can set the context with any class you like; for this example I just used headers, but a custom class will do just fine.
If you set it here, then any logging with handlers etc. will also have access to the context.
The logWithContext below sets the MDC and clears it afterwards. Obviously this can be replaced with anything you like.
public class RequestIdFilter implements WebFilter {

    private Logger LOG = LoggerFactory.getLogger(RequestIdFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        HttpHeaders headers = exchange.getRequest().getHeaders();
        return chain.filter(exchange)
                .doAfterSuccessOrError((r, t) -> logWithContext(headers, httpHeaders -> LOG.info("Some message with MDC set")))
                .subscriberContext(Context.of(HttpHeaders.class, headers));
    }

    static void logWithContext(HttpHeaders headers, Consumer<HttpHeaders> logAction) {
        try {
            headers.forEach((name, values) -> MDC.put(name, values.get(0)));
            logAction.accept(headers);
        } finally {
            headers.keySet().forEach(MDC::remove);
        }
    }
}
As of Spring Boot 2.2 there is Schedulers.onScheduleHook that enables you to handle MDC:
Schedulers.onScheduleHook("mdc", runnable -> {
    Map<String, String> map = MDC.getCopyOfContextMap();
    return () -> {
        if (map != null) {
            MDC.setContextMap(map);
        }
        try {
            runnable.run();
        } finally {
            MDC.clear();
        }
    };
});
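Note that the hook only decorates tasks scheduled after it is registered, so a natural place to install it is the application entry point (the placement below is an assumption, not part of the original answer):
import java.util.Map;

import org.slf4j.MDC;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import reactor.core.scheduler.Schedulers;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        // Install the MDC-propagating hook before any Reactor scheduler runs work.
        Schedulers.onScheduleHook("mdc", runnable -> {
            Map<String, String> map = MDC.getCopyOfContextMap();
            return () -> {
                if (map != null) {
                    MDC.setContextMap(map);
                }
                try {
                    runnable.run();
                } finally {
                    MDC.clear();
                }
            };
        });
        SpringApplication.run(Application.class, args);
    }
}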
Alternatively, Hooks.onEachOperator can be used to pass around the MDC values via subscriber context.
http://ttddyy.github.io/mdc-with-webclient-in-webmvc/
This is not a full MDC solution; e.g., in my case I cannot clean up MDC values in R2DBC threads.
UPDATE: this article really solves my MDC problem: https://www.novatec-gmbh.de/en/blog/how-can-the-mdc-context-be-used-in-the-reactive-spring-applications/
It provides the correct way of updating the MDC based on the subscriber context.
Combine it with the SecurityContext::class.java key populated by AuthenticationWebFilter and you will be able to put the user login in your logs.
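For reference, the core of the linked article's approach is an operator hook that copies Reactor Context entries into the MDC around each signal. A minimal Java sketch of that idea (class name and String-only filtering are illustrative, adapted from the article rather than taken from this thread):
import javax.annotation.PostConstruct;

import org.reactivestreams.Subscription;
import org.slf4j.MDC;
import org.springframework.context.annotation.Configuration;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Hooks;
import reactor.core.publisher.Operators;
import reactor.util.context.Context;

// Wraps every subscriber and mirrors String entries of its Context into the MDC.
class MdcContextLifter<T> implements CoreSubscriber<T> {

    private final CoreSubscriber<T> delegate;

    MdcContextLifter(CoreSubscriber<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onSubscribe(Subscription s) {
        delegate.onSubscribe(s);
    }

    @Override
    public void onNext(T t) {
        copyContextToMdc();
        delegate.onNext(t);
    }

    @Override
    public void onError(Throwable t) {
        copyContextToMdc();
        delegate.onError(t);
    }

    @Override
    public void onComplete() {
        delegate.onComplete();
    }

    @Override
    public Context currentContext() {
        return delegate.currentContext();
    }

    private void copyContextToMdc() {
        currentContext().stream()
                .filter(e -> e.getKey() instanceof String && e.getValue() instanceof String)
                .forEach(e -> MDC.put((String) e.getKey(), (String) e.getValue()));
    }
}

// Register the hook once at startup so every operator propagates the context.
@Configuration
class MdcContextLifterConfiguration {

    @PostConstruct
    void setupHook() {
        Hooks.onEachOperator("mdcContextLifter",
                Operators.lift((scannable, subscriber) -> new MdcContextLifter<>(subscriber)));
    }
}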
My solution is based on the Reactor 3 Reference Guide approach, but uses doOnSuccess instead of doOnEach.
The main idea is to use the Context for MDC propagation in the following way:
1. Fill a downstream Context (which will be used by derived threads) with the MDC state from the upstream flow (can be done with .contextWrite(context -> Context.of(MDC.getCopyOfContextMap()))).
2. Access the downstream Context in derived threads and fill the MDC in each derived thread with values from the downstream Context (the main challenge).
3. Clear the MDC in the downstream Context (can be done with .doFinally(signalType -> MDC.clear())).
The main problem is accessing the downstream Context in derived threads. You can implement step 2 with whatever approach is most convenient for you, but here is my solution:
webclient.post()
        .bodyValue(someRequestData)
        .retrieve()
        .bodyToMono(String.class)
        // By this action we wrap our response with a new Mono and also
        // in parallel fill the MDC with values from the downstream Context,
        // because we have access to it here
        .flatMap(wrapWithFilledMDC())
        .doOnSuccess(response -> someActionWhichRequiresFilledMdc(response))
        // Fill the downstream context with the current MDC state
        .contextWrite(context -> Context.of(MDC.getCopyOfContextMap()))
        // Allows us to clear the MDC from derived threads
        .doFinally(signalType -> MDC.clear())
        .block();

// Function which implements step 2 from the main idea above
public static <T> Function<T, Mono<T>> wrapWithFilledMDC() {
    // Using deferContextual we have access to the downstream Context, so
    // we can fill the MDC in derived threads with values from it
    return item -> Mono.deferContextual(contextView -> {
        // Fill the MDC with Context values (you can apply your own action here)
        fillMdcWithContextView(contextView);
        return Mono.just(item);
    });
}

public static void fillMdcWithContextView(ContextView contextView) {
    contextView.forEach(
            (key, value) -> {
                if (key instanceof String keyStr && value instanceof String valueStr) {
                    MDC.put(keyStr, valueStr);
                }
            });
}
This approach can also be applied to the doOnError and onErrorResume methods, since the main idea is the same.
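For instance, a sketch of the error path reusing the same helper (the fallback value and log call are illustrative, not from the original answer):
.onErrorResume(e -> Mono.deferContextual(contextView -> {
    fillMdcWithContextView(contextView); // same step-2 helper as above
    log.error("Request failed", e);
    return Mono.just("fallback-response");
}))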
Used versions:
spring-boot: 2.7.3
spring-webflux: 5.3.22 (from spring-boot)
reactor-core: 3.4.22 (from spring-webflux)
reactor-netty: 1.0.22 (from spring-webflux)
I achieved this with:
package com.nks.app.filter;

import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

/**
 * @author nks
 */
@Component
@Slf4j
public class SessionIDFilter implements WebFilter {

    private static final String APP_SESSION_ID = "app-session-id";

    /**
     * Process the Web request and (optionally) delegate to the next
     * {@code WebFilter} through the given {@link WebFilterChain}.
     *
     * @param serverWebExchange the current server exchange
     * @param webFilterChain    provides a way to delegate to the next filter
     * @return {@code Mono<Void>} to indicate when request processing is complete
     */
    @Override
    public Mono<Void> filter(ServerWebExchange serverWebExchange, WebFilterChain webFilterChain) {
        serverWebExchange.getResponse()
                .getHeaders().add(APP_SESSION_ID, serverWebExchange.getRequest().getHeaders().getFirst(APP_SESSION_ID));
        MDC.put(APP_SESSION_ID, serverWebExchange.getRequest().getHeaders().getFirst(APP_SESSION_ID));
        log.info("[{}] : Inside filter of SessionIDFilter, ADDED app-session-id in MDC Logs", MDC.get(APP_SESSION_ID));
        return webFilterChain.filter(serverWebExchange);
    }
}
With this in place, values associated with app-session-id for the thread can be logged.
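For instance, mirroring the pattern shown in the earlier answer, the key can be surfaced in the log layout via application.properties (this property line is an assumption, not part of the original answer):
logging.pattern.level=[%X{app-session-id}] %5p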

Is putting sqs-consumer to detect receiveMessage events in SQS scalable?

I am using AWS SQS as a message queue. After sqs.sendMessage sends the data, I want to detect sqs.receiveMessage via either an infinite loop or event triggering, in a scalable way. Then I came across sqs-consumer,
which handles sqs.receiveMessage events the moment it receives the messages. But I was wondering: is it the most suitable way to handle message passing between microservices, or is there a better way to handle this?
I wrote Java code for fetching data from an SQS queue with the AmazonSQSBufferedAsyncClient; the advantage of this API is that it buffers the messages in async mode.
package com.sxm.aota.tsc.config;

import java.net.UnknownHostException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.retry.RetryPolicy.BackoffStrategy;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient;
import com.amazonaws.services.sqs.buffered.QueueBufferConfig;

@Configuration
public class SQSConfiguration {

    /** The properties cache config. */
    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Bean
    public AmazonSQSAsync amazonSQSClient() {
        // Create the client configuration
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withMaxErrorRetry(5)
                .withConnectionTTL(10_000L)
                .withTcpKeepAlive(true)
                .withRetryPolicy(new RetryPolicy(
                        null,
                        new BackoffStrategy() {
                            @Override
                            public long delayBeforeNextRetry(AmazonWebServiceRequest req,
                                    AmazonClientException exception, int retries) {
                                // Delay between retries is 10s, unless it is an UnknownHostException,
                                // for which the retry delay is 60s
                                return exception.getCause() instanceof UnknownHostException ? 60_000L : 10_000L;
                            }
                        }, 10, true));

        // Create the Amazon client
        AmazonSQSAsync asyncSqsClient = null;
        if (propertiesCacheConfig.isIamRole()) {
            asyncSqsClient = new AmazonSQSAsyncClient(new InstanceProfileCredentialsProvider(true), clientConfig);
        } else {
            // Note: BasicAWSCredentials takes the access key first, then the secret key
            asyncSqsClient = new AmazonSQSAsyncClient(
                    new BasicAWSCredentials("accesskey", "secretkey"));
        }
        final Regions regions = Regions.fromName(propertiesCacheConfig.getRegionName());
        asyncSqsClient.setRegion(Region.getRegion(regions));
        asyncSqsClient.setEndpoint(propertiesCacheConfig.getEndPoint());

        // Buffer for request batching
        final QueueBufferConfig bufferConfig = new QueueBufferConfig();
        // Ensure the visibility timeout is maintained
        bufferConfig.setVisibilityTimeoutSeconds(20);
        // Enable long polling
        bufferConfig.setLongPoll(true);
        // Set batch parameters
        // bufferConfig.setMaxBatchOpenMs(500);
        // Set to receive messages only on demand
        // bufferConfig.setMaxDoneReceiveBatches(0);
        // bufferConfig.setMaxInflightReceiveBatches(0);
        return new AmazonSQSBufferedAsyncClient(asyncSqsClient, bufferConfig);
    }
}
Then I wrote the scheduler, which executes every 2 seconds and fetches the data from the queue, processes it, and deletes it from the queue before the visibility timeout; otherwise the message will be ready for processing again when the visibility timeout expires.
package com.sxm.aota.tsc.sqs;

import java.util.List;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlResult;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
import com.fasterxml.jackson.databind.ObjectMapper;

/**
 * The Class SQSScheduledTask.
 *
 * Sends the aggregated Vehicle data to TSC in batches
 */
@EnableScheduling
@Component("sqsScheduledTask")
@DependsOn({ "propertiesCacheConfig", "amazonSQSClient" })
public class SQSScheduledTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSScheduledTask.class);

    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Autowired
    public AmazonSQSAsync amazonSQSClient;

    /**
     * Timer task that will run after a specific interval of time. Majorly
     * responsible for sending the data in batches to TSC.
     */
    private String queueUrl;

    private final ObjectMapper mapper = new ObjectMapper();

    @PostConstruct
    public void initialize() throws Exception {
        LOGGER.info("SQS-Publisher: Publisher initializing for queue {}", propertiesCacheConfig.getSQSQueueName());
        // Get the queue URL
        final GetQueueUrlRequest request = new GetQueueUrlRequest().withQueueName(propertiesCacheConfig.getSQSQueueName());
        final GetQueueUrlResult response = amazonSQSClient.getQueueUrl(request);
        queueUrl = response.getQueueUrl();
        LOGGER.info("SQS-Publisher: Publisher initialized for queue {}, URL = {}",
                propertiesCacheConfig.getSQSQueueName(), queueUrl);
    }

    @Scheduled(fixedDelayString = "${sqs.consumer.delay}")
    public void timerTask() {
        final ReceiveMessageResult receiveResult = getMessagesFromSQS();
        String messageBody = null;
        if (receiveResult != null && receiveResult.getMessages() != null && !receiveResult.getMessages().isEmpty()) {
            try {
                messageBody = receiveResult.getMessages().get(0).getBody();
                String messageReceiptHandle = receiveResult.getMessages().get(0).getReceiptHandle();
                Vehicles vehicles = mapper.readValue(messageBody, Vehicles.class);
                processMessage(vehicles.getVehicles(), messageReceiptHandle);
            } catch (Exception e) {
                LOGGER.error("Exception while processing SQS message : {}", messageBody);
                // The message is not deleted on SQS and will be processed again after the visibility timeout
            }
        }
    }

    public void processMessage(List<Vehicle> vehicles, String messageReceiptHandle) throws InterruptedException {
        // processing code
        // Delete the SQS message once processing is completed.
        // Need to create an atomic counter that will be incremented by all TS.. Once it is 0 we will delete the message.
        amazonSQSClient.deleteMessage(new DeleteMessageRequest(queueUrl, messageReceiptHandle));
    }

    private ReceiveMessageResult getMessagesFromSQS() {
        try {
            // Create a new request and fetch data from the Amazon SQS queue
            return amazonSQSClient
                    .receiveMessage(new ReceiveMessageRequest().withMaxNumberOfMessages(1).withQueueUrl(queueUrl));
        } catch (Exception e) {
            LOGGER.error("Error while fetching data from SQS", e);
        }
        return null;
    }
}

Push/pull on the Couchbase server side through Couchbase Lite on the client side

I have tried to create a small Java program to handle a Couchbase Lite database and to do push/pull operations.
The scenario in depth is as follows.
What I did is create a bucket named sync_gateway,
and connected it with the Couchbase server via the config.json below:
{
    "interface": ":4984",
    "adminInterface": ":4985",
    "databases": {
        "db": {
            "server": "http://localhost:8091",
            "bucket": "sync_gateway",
            "sync": `function(doc) {
                channel(doc.channels);
            }`
        }
    }
}
This created metadata in the sync_gateway bucket on the server.
Then I wrote sample Java code for the local CBL database, with functions for the push/pull operations...
code:
package com.Testing_couchbaseLite;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.Document;
import com.couchbase.lite.JavaContext;
import com.couchbase.lite.Manager;
import com.couchbase.lite.replicator.Replication;

public class Test_syncGateWay {

    private URL createSyncURL(boolean isEncrypted) {
        URL syncURL = null;
        String host = "https://localhost"; // Sync Gateway IP
        String port = "4984";              // Sync Gateway port
        String dbName = "db";
        try {
            syncURL = new URL(host + ":" + port + "/" + dbName);
        } catch (MalformedURLException me) {
            me.printStackTrace();
        }
        return syncURL;
    }

    private void startReplications() throws CouchbaseLiteException {
        try {
            Map<String, Object> map = new HashMap<String, Object>();
            map.put("id", "1");
            map.put("name", "ram");
            Manager man = new Manager(new JavaContext(), Manager.DEFAULT_OPTIONS);
            Database db = man.getDatabase("sync_gateway");
            Document doc = db.createDocument();
            doc.putProperties(map);
            System.out.println("-------------done------------");
            System.out.println(man.getAllDatabaseNames());
            System.out.println(man.getDatabase("sync_gateway").getDocumentCount());
            System.out.println(db.getDocument("1").getCurrentRevisionId());
            System.out.println(db.exists());

            Replication pull = db.createPullReplication(this.createSyncURL(true));
            Replication push = db.createPushReplication(this.createSyncURL(true));
            pull.setContinuous(true);
            push.setContinuous(true);
            pull.start();
            push.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws CouchbaseLiteException, IOException {
        new Test_syncGateWay().startReplications();
    }
}
Now I start Sync Gateway with that config file and run the Java code to create a document on CBL and the Couchbase server via the push/pull operations.
But it is showing this error:
Jul 08, 2016 10:27:21 AM com.couchbase.lite.util.SystemLogger e
SEVERE: RemoteRequest: RemoteRequest{GET, https://localhost:4984/db/_local/2eafda901c4de2fe022af262d5cc7d1c0cb5c2d2}: executeRequest() Exception: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated. url: https://localhost:4984/db/_local/2eafda901c4de2fe022af262d5cc7d1c0cb5c2d2
So is there any misunderstanding in my concept? And how do I resolve this problem?
You have not set up your Sync Gateway for SSL. You need to add the SSL certificate and key settings (SSLCert and SSLKey) to your config file.
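For example, a minimal sketch of those keys in the config (the file paths are placeholders; check the Sync Gateway documentation for the exact property names in your version):
{
    "interface": ":4984",
    "SSLCert": "/path/to/cert.pem",
    "SSLKey": "/path/to/privkey.pem",
    ...
}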

How to use a custom SSH key location with Spring Cloud Config

I am trying to set up a Spring Cloud Config server that uses a custom location for the SSH private key.
The reason I need to specify a custom location for the key is that the user running the application has no home directory, so there is no way for me to use the default ~/.ssh directory for my key.
I know that there is the option of creating a read-only account and providing the user/password in the configuration, but the SSH way seems cleaner. Is there a way I can set this up?
After reading a lot more code... I found a relatively simple workaround that allows you to set whatever SSH keys you want.
First: Create a class as follows:
/**
 * @file FixedSshSessionFactory.java
 *
 * @date Aug 23, 2016 2:16:11 PM
 * @author jzampieron
 */

import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig.Host;
import org.eclipse.jgit.util.FS;

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

/**
 * Short Desc Here.
 *
 * @author jzampieron
 */
public class FixedSshSessionFactory extends JschConfigSessionFactory
{
    protected String[] identityKeyPaths;

    /**
     * @param identityKeyPaths paths of the private key files to load
     */
    public FixedSshSessionFactory( String... identityKeyPaths )
    {
        this.identityKeyPaths = identityKeyPaths;
    }

    /* (non-Javadoc)
     * @see org.eclipse.jgit.transport.JschConfigSessionFactory#configure(org.eclipse.jgit.transport.OpenSshConfig.Host, com.jcraft.jsch.Session)
     */
    @Override
    protected void configure( Host hc, Session session )
    {
        // nothing special needed here.
    }

    /* (non-Javadoc)
     * @see org.eclipse.jgit.transport.JschConfigSessionFactory#getJSch(org.eclipse.jgit.transport.OpenSshConfig.Host, org.eclipse.jgit.util.FS)
     */
    @Override
    protected JSch getJSch( Host hc, FS fs ) throws JSchException
    {
        JSch jsch = super.getJSch( hc, fs );
        // Clean out anything 'default' - any encrypted keys
        // that are loaded by default before this will break.
        jsch.removeAllIdentity();
        for( final String identKeyPath : identityKeyPaths )
        {
            jsch.addIdentity( identKeyPath );
        }
        return jsch;
    }
}
Then register it with jgit:
...
import org.eclipse.jgit.transport.SshSessionFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication
{
    public static void main(String[] args) {
        URL res = ConfigserverApplication.class.getClassLoader().getResource( "keys/id_rsa" );
        String path = res.getPath();
        SshSessionFactory.setInstance( new FixedSshSessionFactory( path ) );
        SpringApplication.run(ConfigserverApplication.class, args);
    }
}
For this example I'm storing the keys in the src/main/resources/keys folder and
I'm using the class loader to get at them.
The removeAllIdentity call is important because JSch was loading my default SSH key before the one I specified, and then Spring Cloud was crashing out because it's encrypted.
This allowed me to successfully authenticate with bitbucket.
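For context, the config server would then point at an SSH-style Git URI, e.g. in application.properties (the repository URI here is illustrative):
spring.cloud.config.server.git.uri=git@bitbucket.org:yourorg/config-repo.git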
The FixedSshSessionFactory solution of @Jeffrey Zampieron is good. However, it won't work if the Spring Boot app is packaged as a fat jar.
Here it is polished a bit to work with a fat jar:
/**
 * @file FixedSshSessionFactory.java
 * @date Aug 23, 2016 2:16:11 PM
 * @author jzampieron
 */

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import lombok.extern.slf4j.Slf4j;
import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig.Host;
import org.eclipse.jgit.util.FS;
import org.springframework.util.StreamUtils;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

/**
 * Short Desc Here.
 *
 * @author jzampieron
 */
@Slf4j
public class FixedSshSessionFactory extends JschConfigSessionFactory {

    protected URL[] identityKeyURLs;

    /**
     * @param identityKeyURLs classpath URLs of the private keys to load
     */
    public FixedSshSessionFactory(URL... identityKeyURLs) {
        this.identityKeyURLs = identityKeyURLs;
    }

    /* (non-Javadoc)
     * @see org.eclipse.jgit.transport.JschConfigSessionFactory#configure(org.eclipse.jgit.transport.OpenSshConfig.Host, com.jcraft.jsch.Session)
     */
    @Override
    protected void configure(Host hc, Session session) {
        // nothing special needed here.
    }

    /* (non-Javadoc)
     * @see org.eclipse.jgit.transport.JschConfigSessionFactory#getJSch(org.eclipse.jgit.transport.OpenSshConfig.Host, org.eclipse.jgit.util.FS)
     */
    @Override
    protected JSch getJSch(Host hc, FS fs) throws JSchException {
        JSch jsch = super.getJSch(hc, fs);
        // Clean out anything 'default' - any encrypted keys
        // that are loaded by default before this will break.
        jsch.removeAllIdentity();
        int count = 0;
        for (final URL identityKey : identityKeyURLs) {
            try (InputStream stream = identityKey.openStream()) {
                // Read the key bytes from inside the jar; a filesystem path won't work here.
                jsch.addIdentity("key" + ++count, StreamUtils.copyToByteArray(stream), null, null);
            } catch (IOException e) {
                log.error("Failed to load identity " + identityKey.getPath());
            }
        }
        return jsch;
    }
}
I am having a similar problem because my default SSH key is encrypted with a password and therefore doesn't "just work", which makes sense because this is a headless setup.
I went source-diving into Spring Cloud Config, org.eclipse.jgit, and eventually ended up in com.jcraft.jsch. The short answer is that neither JGit nor Spring Cloud exposes an obvious way to do this.
JSch clearly supports this feature within a JSch() instance, but you can't get at it from the Spring Cloud level. At least not that I could find in an hour or so of looking.

How to convert a string to an Apache HttpComponents HttpRequest

I have a String that contains an HTTP request. I want to turn it into an Apache HttpComponents HttpRequest object. Is there a way to do this without picking apart the string myself?
Neither this tutorial: http://hc.apache.org/httpcomponents-core-dev/tutorial/html/fundamentals.html#d5e56 nor the javadoc indicates as much.
A class to convert a string to an Apache HttpRequest:
import org.apache.http.*;
import org.apache.http.impl.DefaultHttpRequestFactory;
import org.apache.http.impl.entity.EntityDeserializer;
import org.apache.http.impl.entity.LaxContentLengthStrategy;
import org.apache.http.impl.io.AbstractSessionInputBuffer;
import org.apache.http.impl.io.HttpRequestParser;
import org.apache.http.io.HttpMessageParser;
import org.apache.http.io.SessionInputBuffer;
import org.apache.http.message.BasicHttpEntityEnclosingRequest;
import org.apache.http.message.BasicLineParser;
import org.apache.http.params.BasicHttpParams;

import java.io.ByteArrayInputStream;
import java.io.IOException;

public class ApacheRequestFactory {

    public static HttpRequest create(final String requestAsString) {
        try {
            SessionInputBuffer inputBuffer = new AbstractSessionInputBuffer() {
                {
                    init(new ByteArrayInputStream(requestAsString.getBytes()), 10, new BasicHttpParams());
                }

                @Override
                public boolean isDataAvailable(int timeout) throws IOException {
                    throw new RuntimeException("have to override but probably not even called");
                }
            };
            HttpMessageParser parser = new HttpRequestParser(inputBuffer, new BasicLineParser(new ProtocolVersion("HTTP", 1, 1)), new DefaultHttpRequestFactory(), new BasicHttpParams());
            HttpMessage message = parser.parse();
            if (message instanceof BasicHttpEntityEnclosingRequest) {
                BasicHttpEntityEnclosingRequest request = (BasicHttpEntityEnclosingRequest) message;
                EntityDeserializer entityDeserializer = new EntityDeserializer(new LaxContentLengthStrategy());
                HttpEntity entity = entityDeserializer.deserialize(inputBuffer, message);
                request.setEntity(entity);
            }
            return (HttpRequest) message;
        } catch (IOException e) {
            throw new RuntimeException(e);
        } catch (HttpException e) {
            throw new RuntimeException(e);
        }
    }
}
and a test class showing how to use it:
import org.apache.http.HttpRequest;
import org.apache.http.NameValuePair;
import org.apache.http.client.utils.URLEncodedUtils;
import org.apache.http.message.BasicHttpEntityEnclosingRequest;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.util.List;

import static org.junit.Assert.*;

public class ApacheRequestFactoryTest {

    @Test
    public void testGet() {
        String requestString = "GET /?one=aone&two=atwo HTTP/1.1\n" +
                "Host: localhost:7788\n" +
                "Connection: Keep-Alive\n" +
                "User-Agent: Apache-HttpClient/4.0.1 (java 1.5)";
        HttpRequest request = ApacheRequestFactory.create(requestString);
        assertEquals("GET", request.getRequestLine().getMethod());
        List<NameValuePair> pairs = URLEncodedUtils.parse(URI.create(request.getRequestLine().getUri()), "ISO-8859-1");
        checkPairs(pairs);
    }

    @Test
    public void testPost() throws IOException {
        String requestString = "POST / HTTP/1.1\n" +
                "Content-Length: 17\n" +
                "Content-Type: application/x-www-form-urlencoded; charset=ISO-8859-1\n" +
                "Host: localhost:7788\n" +
                "Connection: Keep-Alive\n" +
                "User-Agent: Apache-HttpClient/4.0.1 (java 1.5)\n" +
                "\n" +
                "one=aone&two=atwo";
        HttpRequest request = ApacheRequestFactory.create(requestString);
        assertEquals("POST", request.getRequestLine().getMethod());
        List<NameValuePair> pairs = URLEncodedUtils.parse(((BasicHttpEntityEnclosingRequest) request).getEntity());
        checkPairs(pairs);
    }

    private void checkPairs(List<NameValuePair> pairs) {
        for (NameValuePair pair : pairs) {
            if (pair.getName().equals("one")) assertEquals("aone", pair.getValue());
            else if (pair.getName().equals("two")) assertEquals("atwo", pair.getValue());
            else assertTrue("got more parameters than expected: " + pair.getName(), false);
        }
    }
}
And a small rant:
WHAT ARE THE APACHE HTTP TEAM THINKING? The API is incredibly awkward to use. Developers around the world are wasting time writing wrapper and conversion classes for what should be run-of-the-mill everyday usage (like this example, the simple act of converting a string to an Apache HTTP request, or the bizarre way you need to extract the form parameters, having to do it in two different ways depending on what type of request was made). The global time wasted because of this is huge. When you write an API from the bottom up, starting with the specs, you MUST then add a layer from the top down (top being an interface where you can get typical work done without having to understand or look at the way the code is implemented), making everyday usage of the library CONVENIENT and intuitive. The Apache HTTP libraries are anything but. It's almost a miracle that it's the standard library for this type of task.
