BOSH over HTTPS using Smack

I am trying to create a BOSH connection to Openfire over HTTPS. I have tried using BOSHConfiguration with the https argument set to true, but the connection times out at the remote server.
Does anyone have a working example of BOSH over HTTPS in Smack?

I've faced the same problem. I was able to connect and log in by changing the JBosh library, since its HttpClient usage does not take an SSLContext into account.
I followed the approach used in http://www.java-samples.com/showtutorial.php?tutorialid=211, with some modification: the subscribe() method returns the SSLContext, which is then used in the init() method of XLightWebSender.java, like this:
public void init(final BOSHClientConfig session) {
    lock.lock();
    try {
        cfg = session;
        SSLContext context = null;
        try {
            context = this.subscribe();
        } catch (Exception e) {
            e.printStackTrace();
        }
        client = new HttpClient(context);
    } finally {
        lock.unlock();
    }
}
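For reference, here is a minimal sketch of what such a subscribe() method could look like, based on the trust-all approach in the linked tutorial. The trust-all X509TrustManager below is an assumption for illustration and is only suitable for testing; production code should validate the server certificate.
// Sketch only. Requires javax.net.ssl.* and java.security.cert.X509Certificate imports.
private SSLContext subscribe() throws Exception {
    TrustManager[] trustAll = new TrustManager[] {
        new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            public void checkClientTrusted(X509Certificate[] certs, String authType) { }
            public void checkServerTrusted(X509Certificate[] certs, String authType) { }
        }
    };
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(null, trustAll, new java.security.SecureRandom());
    return context;
}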
PS: I'm still testing and don't guarantee that this workaround works well for long-lived connections.

Related

Determining when FileWrittenEvent has completed writing the entire file

This is my first question here, so please bear with me. In a recent release of Spring 5.2, some extremely helpful components were added to Spring Integration, as described here: https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-server-events. Apache MINA was integrated with a new listener, "ApacheMinaSftpEventListener", which
listens for certain Apache Mina SFTP server events and publishes them as ApplicationEvents
So far my application can capture the application events as noted in the documentation from the link provided, but I can't seem to figure out when the event finishes... if that makes sense (probably not). In the process flow, the application starts up and activates as an SFTP server on a specified port. I can use a username and password to connect and "put" a file on the system, which initiates the transfer. When I sign on I can capture the "SessionOpenedEvent". When I transfer a file I can capture the "FileWrittenEvent". When I sign off or break the connection I can capture the "SessionClosedEvent". When the file is larger, I capture ALL of the "FileWrittenEvent" events, which tells me the transfer occurs as a stream of writes of a predetermined or calculated buffer size. What I'm trying to determine is: "How can I find out when that stream is finished?" This will help me answer: "As an SFTP server accepting a file, when can I access the completed file?"
My listener bean (which is attached to Apache MINA on startup via the SftpSubsystemFactory):
@Configuration
public class SftpConfiguration {
    @Bean
    public ApacheMinaSftpEventListener apacheMinaSftpEventListener() {
        return new ApacheMinaSftpEventListener();
    }
}
SftpSubsystemFactory subSystem = new SftpSubsystemFactory();
subSystem.addSftpEventListener(listener);
My event listener: this is here so I can see some output in a logger, which is when I realized that, on a file of a few GB, the FileWrittenEvent fires a very large number of times.
@Async
@EventListener
public void sftpEventListener(ApacheMinaSftpEvent sftpEvent) {
    log.info("Capturing Event: {}", sftpEvent.getClass().getSimpleName());
    log.info("Event Details: {}", sftpEvent.toString());
}
These few pieces were all I really needed to start capturing the events. I was thinking that I would need to override a method to capture when the stream finishes, so I can move on with my business logic, but I'm not sure which one. I seem to be able to access the file (read/write) before the stream is done, so I can't rely on logic that attempts to "move" the file and waits for it to throw an error; that approach seemed like bad practice to me anyway. Any guidance would be greatly appreciated, thank you.
Versioning Information
Spring 5.2.3
Spring Boot 2.2.3
Apache Mina 2.1.3
Java 1.8
This may not be helpful for others, but I've found a way around my initial problem by combining a related solution with the new Apache MINA classes, found in this answer: https://stackoverflow.com/a/45513680/12806809
My solution:
Create a class that extends the new ApacheMinaSftpEventListener, overriding the 'open' and 'close' methods, so that my SFTP server business logic knows when a file is done writing.
public class WatcherSftpEventListener extends ApacheMinaSftpEventListener {
    ...
    ...
    @Override
    public void open(ServerSession session, String remoteHandle, Handle localHandle) throws IOException {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("File Open: {}", file.toString());
        }
        // Keep around the super call for now
        super.open(session, remoteHandle, localHandle);
    }

    @Override
    public void close(ServerSession session, String remoteHandle, Handle localHandle) {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("RemoteHandle: {}", remoteHandle);
            log.debug("File Closed: {}", file.toString());
            for (SftpFileUploadCompleteListener listener : fileReadyListeners) {
                try {
                    listener.onFileReady(file);
                } catch (Exception e) {
                    String msg = String.format("File '%s' caused an error in processing '%s'", file.getName(), e.getMessage());
                    log.error(msg);
                    try {
                        session.disconnect(0, msg);
                    } catch (IOException io) {
                        log.error("Could not properly disconnect from session {}; closing future state", session);
                        session.close(false);
                    }
                }
            }
        }
        // Keep around the super call for now
        super.close(session, remoteHandle, localHandle);
    }
}
When I start the SSHD server, I add my new listener bean to the SftpSubsystemFactory; the listener uses a customized event handler class to apply my business logic to the incoming files.
watcherSftpEventListener.addFileReadyListener(new SftpFileUploadCompleteListener() {
    @Override
    public void onFileReady(File file) throws Exception {
        new WatcherSftpEventHandler(file, properties.getSftphost());
    }
});
subSystem.addSftpEventListener(watcherSftpEventListener);
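For completeness, SftpFileUploadCompleteListener appears to be a custom callback interface rather than an Apache MINA or Spring type; a minimal sketch, inferred only from how it is used in the snippets above, could be:
// Hypothetical callback interface, inferred from its usage above (requires java.io.File).
public interface SftpFileUploadCompleteListener {
    // Called once the remote handle for an uploaded file has been closed.
    void onFileReady(File file) throws Exception;
}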
There was a bit more to this solution, but since this question isn't getting much traffic and it's more for my own reference and learning now, I won't provide anything more unless asked.

Connection timeout to Couchbase using JDBC

I am trying to connect to Couchbase through JDBC, but it behaves erratically and frequently gives a timeout exception. I also tried to increase the timeout, but it still errors out. The following is the code used to connect to Couchbase:
public static CouchbaseCluster connectToDB(String URL, String userid, String password) throws BusinessException {
    CouchbaseEnvironment env = null;
    CouchbaseCluster cluster = null;
    try {
        env = DefaultCouchbaseEnvironment.builder().connectTimeout(10000).queryEnabled(true).build();
        cluster = CouchbaseCluster.fromConnectionString(env, URL);
    } catch (Exception e) {
        LOGGER.error(e.getMessage());
    }
    return cluster;
}
Also, we are using the jars couchbase-core-io-1.2.7.jar and couchbase-java-client-2.2.6, and the Couchbase version we are trying to connect to is 4.5.1-2841 Enterprise Edition.
I also tried to increase the timeout using .connectTimeout(1000000), but the issue still persists.
Have you confirmed that all ports are open between the client and the server? The ports are listed here: http://developer.couchbase.com/documentation/server/current/install/install-ports.html
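As a quick sanity check (a sketch with placeholder host and port values; 8091 is the Couchbase REST/admin port), you can test from the client machine whether a given port is reachable:
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Replace "couchbase-host" with your server; repeat for the other Couchbase ports.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("couchbase-host", 8091), 5000);
            System.out.println("Port reachable");
        }
    }
}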

Redis, SpringBoot and HttpSession: should I encrypt the session data?

I'm using Spring Boot 1.3.3 to build a web application. I use Redis for handling the session.
I'll set some "crucial" data in the HttpSession and I'd like to understand how this will work with Redis. Is the information stored server-side with just a key on the browser side, or is all the data in a cookie in the user's browser?
I'd like to see a documentation reference for the answer, or to get an authoritative answer (e.g. from a Pivotal dev).
While I agree with most of what the other answers in here have said, none of the other answers actually answered the question.
I'm going to assume that you are using SpringSession with Redis in Spring Boot.
In order to use SpringSession, you have likely configured (directly or indirectly) a servlet filter that extends SessionRepositoryFilter.
SessionRepositoryFilter uses a SessionRepository. Since you are using Redis, it is likely that your configuration makes use of RedisOperationsSessionRepository.
RedisOperationsSessionRepository implements SessionRepository, as you might have guessed, and is ultimately responsible for fetching, storing, and deleting sessions based on a key (in your case, a key that is probably stored as a cookie in the user's browser).
RedisOperationsSessionRepository, by default, uses JdkSerializationRedisSerializer, which implements RedisSerializer, to serialize session data prior to handing said data off to Redis.
According to the documentation for RedisOperationsSessionRepository, it is possible to set the default serializer that RedisOperationsSessionRepository will use, via its setDefaultSerializer method.
You could theoretically extend JdkSerializationRedisSerializer and perform encryption and decryption there. JdkSerializationRedisSerializer looks like this:
public class JdkSerializationRedisSerializer implements RedisSerializer<Object> {

    private Converter<Object, byte[]> serializer = new SerializingConverter();
    private Converter<byte[], Object> deserializer = new DeserializingConverter();

    public Object deserialize(byte[] bytes) {
        if (SerializationUtils.isEmpty(bytes)) {
            return null;
        }
        try {
            return deserializer.convert(bytes);
        } catch (Exception ex) {
            throw new SerializationException("Cannot deserialize", ex);
        }
    }

    public byte[] serialize(Object object) {
        if (object == null) {
            return SerializationUtils.EMPTY_ARRAY;
        }
        try {
            return serializer.convert(object);
        } catch (Exception ex) {
            throw new SerializationException("Cannot serialize", ex);
        }
    }
}
So a potential way to add encryption might look like:
@Component
@Qualifier("springSessionDefaultRedisSerializer") // SB 2.0.0+
public class CrypticRedisSerializer extends JdkSerializationRedisSerializer {

    @Override
    public Object deserialize(byte[] bytes) {
        try {
            byte[] decrypted = EncryptionUtils.decrypt(bytes);
            return super.deserialize(decrypted);
        } catch (NoSuchPaddingException e) {
            e.printStackTrace();
        } catch (GeneralSecurityException e) {
            e.printStackTrace();
        }
        // handle exceptions or allow them to propagate, your choice!
        return null;
    }

    @Override
    public byte[] serialize(Object object) {
        byte[] bytes = super.serialize(object);
        try {
            return EncryptionUtils.encrypt(bytes);
        } catch (NoSuchPaddingException e) {
            e.printStackTrace();
        } catch (GeneralSecurityException e) {
            e.printStackTrace();
        }
        // handle exceptions or allow them to propagate, your choice!
        return null;
    }
}
Where EncryptionUtils might look like:
public class EncryptionUtils {

    private static SecretKeySpec skeySpec;

    static {
        try {
            ClassPathResource res = new ClassPathResource("key.key");
            if (res != null) {
                File file = res.getFile();
                FileInputStream input = new FileInputStream(file);
                byte[] in = new byte[(int) file.length()];
                input.read(in);
                skeySpec = new SecretKeySpec(in, "AES");
                input.close();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static byte[] encrypt(byte[] input) throws GeneralSecurityException, NoSuchPaddingException {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        return cipher.doFinal(input);
    }

    public static byte[] decrypt(byte[] input) throws GeneralSecurityException, NoSuchPaddingException {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, skeySpec);
        return cipher.doFinal(input);
    }
}
All you would need to do to implement this is make sure that you set your custom serializer as the default that RedisOperationsSessionRepository uses.
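For example, with Spring Boot and spring-session-data-redis, a sketch of registering the custom serializer might look like the following (assuming the CrypticRedisSerializer class above; Spring Session's Redis configuration picks up a RedisSerializer bean named springSessionDefaultRedisSerializer as the default):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    // The bean name matters: Spring Session looks for "springSessionDefaultRedisSerializer".
    @Bean("springSessionDefaultRedisSerializer")
    public RedisSerializer<Object> springSessionDefaultRedisSerializer() {
        return new CrypticRedisSerializer();
    }
}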
Please note:
I have not tested the above code
I am not advocating that the above code is an ideal solution or THE solution, but simply demonstrating a mechanism for introducing encryption into SpringSession with Redis.
Obviously, you can use whatever 2-way encryption algorithm you want. EncryptionUtils is just an example.
This will impact performance. How much? Hard to say without testing. Just be aware that there will be some performance impact.
If you are really worried about encrypting session data sent to Redis, then I highly recommend that you also make sure that your servers are secured. Make sure that only the servers that need to access your Redis server can. Place it behind a firewall. If you are using a cloud service like AWS, place your Redis server in a VPC and inside a private subnet. Check out this article.
Redis does not currently support connection encryption. However, as they suggest, you could use spiped to ensure your connections are encrypted.
Documentation and reference to check out:
SpringSession
RedisOperationsSessionRepository
SessionRepository
Very good article on Redis security from the creator of Redis: http://antirez.com/news/96. It's a pretty interesting read; read the comments as well.
One thing I am curious about: does your "crucial" data have to be stored in the session? If it's not super critical for performance, you can just save it in your DB. I use Redis in our product just for storing tokens along with basic user data. I have seen people dumping large amounts of data as session data, which works, but I don't think it's a really good idea.
In my opinion you should avoid encrypting data in Redis, as it adds performance overhead. Instead, you may want to put the Redis nodes in a protected (internal) zone where only traffic from your application is allowed to reach them. If that's not possible, then IPsec or stunnel can be used to secure the communication.
By the way, storing the session data as an in-memory HttpSession attribute would be faster than retrieving it from Redis, but I believe you chose Redis because of the volume of the data.

RMI very slow if client and server are not on the same machine

I have a strange problem. I developed a client-server application with Java RMI. It works fine on localhost, and also works well if I put the client and server on two different MacBook Pros, but it is very, very slow if I put the client and the server on two computers that are not MacBook Pros. I have this problem only when I try to send a reference to the client to the server through a remote method invocation.
This is my code
SERVER:
public class Server {
    public static void main(String[] args) {
        try {
            Server_Impl server = new Server_Impl();
            Registry reg = LocateRegistry.createRegistry(1099);
            reg.bind("Server", server);
            if (new Scanner(System.in).nextInt() == -1) {
                System.exit(0);
            }
        } catch (RemoteException e) {
            e.printStackTrace();
        } catch (AlreadyBoundException e) {
            e.printStackTrace();
        }
    }
}
CLIENT
public class Client {
    public static Interfaccia_Server Server;

    public static void main(String[] args) {
        try {
            Registry reg = LocateRegistry.getRegistry("10.0.1.5", 1099);
            Server = (Interfaccia_Server) reg.lookup("Server");
            Client_Impl c = new Client_Impl(Server);
            Server.connect_client(c);
            c.check_action();
        } catch (Exception e) {
            e.printStackTrace(); // don't swallow exceptions silently
        }
    }
}
All of the code works, but very, very slowly if the client and server are not on the same machine, or not on Apple Mac computers.
If I remove this line of code from the client, everything works well everywhere, but I need the reference to the client in the server:
Server.connect_client(c);
I have no idea what is going on; please help me.

Spring WS timeout on server side

I have some web services exposed using Spring Web Services.
I would like to set a maximum timeout on the server side; I mean, when a client invokes my web service, it should not take more than a fixed time. Is that possible?
I have found a lot of information about client timeouts, but not about server timeouts.
This is set at the level of the server itself and not the application, so it's application server dependent.
The reason for this is that it's the server code that opens the listening socket used by the HTTP connection, so only the server code can set a timeout by passing it to the socket API call that starts listening to a given port.
As an example, this is how to do it in Tomcat in file server.xml:
<Connector connectionTimeout="20000" ... />
You can work around this issue by making the web service trigger the real work on another thread, count down the timeout itself, and return a failure if it timed out.
Here is an example of how you can do it; it should time out after about 10 seconds:
public class Test {

    private static final int ONE_SECOND = 1_000;

    public String webserviceMethod(String request) {
        AtomicInteger counter = new AtomicInteger(0);
        final ResponseHolder responseHolder = new ResponseHolder();
        // Create another thread to do the actual work
        Runnable worker = () -> {
            // Do actual work...
            responseHolder.response = "Done"; // Actual response
            responseHolder.finished = true;
        };
        new Thread(worker).start();
        // Poll once per second until the worker finishes or the timeout is reached
        while (counter.addAndGet(1) < 10) {
            try {
                Thread.sleep(ONE_SECOND);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (responseHolder.finished) {
                return responseHolder.response;
            }
        }
        return "Operation Timeout"; // Can throw an exception here instead
    }

    private final class ResponseHolder {
        // volatile so the polling thread sees updates made by the worker thread
        private volatile boolean finished;
        private volatile String response; // Can be any type of response needed
    }
}
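As a side note (not part of the original answer), the same idea of running the work on another thread and giving up after a fixed time can be written more compactly with an ExecutorService and Future.get with a timeout; a sketch:
import java.util.concurrent.*;

public class TimeoutSketch {
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    public String webserviceMethod(String request) {
        Future<String> future = EXECUTOR.submit(() -> {
            // Do actual work...
            return "Done";
        });
        try {
            return future.get(10, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the worker if it supports interruption
            return "Operation Timeout";
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}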
