I use Hadoop's FileSystem class to delete some HDFS files. The problem is that the client only gets a connection timeout after a very long wait. I need to shorten the time it waits before timing out, so that the user gets a faster response if he/she is outside the network.
Here is my code snippet:
try {
    System.setProperty("HADOOP_USER_NAME", "test");
    Configuration conf = new Configuration();
    File csvFile = new File(pathCsvFile);
    FileSystem hdfs = FileSystem.get(new URI(csvFile.getPath()), conf);
    if (hdfs.exists(new Path(filterValuesPath))) {
        hdfs.delete(new Path(filterValuesPath), true);
        setInfo("File deleted!");
    } else {
        setInfo("No file to delete!");
    }
} catch (Exception ex) { // the timeout is too high!!!
    ex.printStackTrace();
    setInfo("No connection or no files to delete!");
}
Where and how can I set the timeout for my application? I don't want to change anything in the Hadoop config files, just set it locally for my Java app. Thank you!
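For reference, this is the kind of local override I have in mind (just a sketch; I'm assuming the ipc.client.* connection keys are the right knobs here, and the values are placeholders):

Configuration conf = new Configuration();
// Assumed Hadoop IPC client settings: fail faster when the cluster is unreachable
conf.setInt("ipc.client.connect.timeout", 3000);              // ms per connect attempt
conf.setInt("ipc.client.connect.max.retries.on.timeouts", 2); // retries after a timeout
FileSystem hdfs = FileSystem.get(new URI(csvFile.getPath()), conf);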
I've got a Spring @Component where a SmbSessionFactory is injected to create a RemoteFileTemplate<SmbFile>. When my application runs, this piece of code is called multiple times:
public void process(Message myMessage, String filename) {
    StopWatch stopWatch = StopWatch.createStarted();
    byte[] bytes = marshallMessage(myMessage);
    String destination = smbConfig.getDir() + filename + ".xml";
    if (log.isDebugEnabled()) {
        log.debug("Result: {}", new String(bytes));
    }
    Optional<IOException> optionalEx =
        remoteFileTemplate.execute(
            session -> {
                try (InputStream inputStream = new ByteArrayInputStream(bytes)) {
                    session.write(inputStream, destination);
                } catch (IOException e1) {
                    return Optional.of(e1);
                }
                return Optional.empty();
            });
    log.info("processed Message in {}", stopWatch.formatTime());
    optionalEx.ifPresent(
        ioe -> {
            throw new UncheckedIOException(ioe);
        });
}
The process code above works (i.e. the file is written) and all is fine, except that I see warnings appearing in my log:
DEBUG my.package.MyClass Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>....
INFO org.springframework.integration.smb.session.SmbSessionFactory SMB share init: XXX
WARN jcifs.smb.SmbResourceLocatorImpl Path consumed out of range 15
WARN jcifs.smb.SmbTreeImpl Disconnected tree while still in use SmbTree[share=XXX,service=null,tid=1,inDfs=true,inDomainDfs=true,connectionState=3,usage=2]
INFO org.springframework.integration.smb.session.SmbSession Successfully wrote remote file [path\to\myfile.xml].
WARN jcifs.smb.SmbSessionImpl Logging off session while still in use SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=3,usage=1]:[SmbTree[share=XXX,service=null,tid=1,inDfs=false,inDomainDfs=false,connectionState=0,usage=1], SmbTree[share=XXX,service=null,tid=5,inDfs=false,inDomainDfs=false,connectionState=2,usage=0]]
jcifs.smb.SmbTransportImpl Disconnecting transport while still in use Transport746[XXX/999.999.999.999:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=2,usage=1], SmbSession[credentials=XXX,targetHost=XXX,targetDomain=null,uid=0,connectionState=2,usage=0]]
INFO my.package.MyClass processed Message in 00:00:00.268
The process method is called from a REST method, which does little else.
What am I doing wrong here?
We are using Cerberus FTP Server, and for the client I am using the SSH.NET library to connect to the server and upload a file. I am able to connect and upload files to the FTP server without issue most of the time.
However, when the destination path does not exist on the FTP server, the SSH.NET library throws an exception as expected, but the exception's Message property is empty.
var sftpClient = new SftpClient(host, username, password);
sftpClient.Connect();
var destination = @"SomeInvalidPath\myfile.txt";
using (var fs = new FileStream(sourceFilePath, FileMode.Open, FileAccess.Read))
{
    try
    {
        sftpClient.UploadFile(fs, destination);
    }
    catch (Exception ex)
    {
        // ex.Message is empty ???
        Logger.Current.Error(ex, "Error while FTP");
    }
}
Not sure if this is an SSH.NET library issue, or whether the FTP server needs to propagate errors back to the client?
Update 1
Stack trace
" at Renci.SshNet.Sftp.SftpSession.RequestOpen(String path, Flags
flags, Boolean nullOnError)\r\n at
Renci.SshNet.SftpClient.InternalUploadFile(Stream input, String path,
Flags flags, SftpUploadAsyncResult asyncResult, Action1
uploadCallback)\r\n at Renci.SshNet.SftpClient.UploadFile(Stream
input, String path, Action1 uploadCallback)\r\n at XXXXXX
We have a very standard Elasticsearch setup with 3 master nodes, 6 data nodes and 3 client nodes. Here is the code we use to connect to the Elasticsearch client nodes from our Java application.
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", configuration.getString("clusterName"))
        .put("client.transport.sniff", false)
        .put("client.transport.ping_timeout", "5s")
        .build();
TransportClient client = TransportClient.builder().settings(settings).build();
for (String hostname : (Collection<String>) configuration.get("hostnames")) {
    try {
        client = client.addTransportAddresses(
                new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
        );
        break;
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
We currently have three different hosts in the hostnames list, but any time a single node from this list goes down, the Elasticsearch transport client stops responding. I have gone through the transport client documentation on the Elasticsearch site and have also looked at their GitHub issues; according to those, whenever a node goes down Elasticsearch should simply remove it from the list of nodes and continue working with the other nodes, but in our case things just break down. Does anyone have any idea what the problem might be?
We are using Elasticsearch 2.4.3 right now.
It looks like you are breaking out of the loop after a single node has been added, so the client only ever knows about one host and has nothing to fail over to when that node goes down. Try removing the break statement:
for (String hostname : (Collection<String>) configuration.get("hostnames")) {
    try {
        client = client.addTransportAddresses(
                new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
        );
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
I installed FileZilla Server, enabled the FTP over TLS settings in its Settings dialog, and started the server.
Through an Eclipse Java client I tried to connect to the server to upload and download a file with the code below, using the Apache Commons Net library.
FTPSClient ftpClient = new FTPSClient(false);
try {
    // Connect to host
    ftpClient.connect(mServer, mPort);
    int reply = ftpClient.getReplyCode();
    System.out.println("The reply code is " + reply);
    if (FTPReply.isPositiveCompletion(reply)) {
        // Login
        if (ftpClient.login("******", "*******")) {
            // Set protection buffer size
            ftpClient.execPBSZ(0);
            // Set data channel protection to private
            ftpClient.execPROT("P");
            // Enter local passive mode
            ftpClient.enterLocalPassiveMode();
            // Upload File using storeFile
            File firstLocalFile = new File("e:/Test.txt");
            String firstRemoteFile = "hello.txt";
            InputStream is = new FileInputStream(firstLocalFile);
            String result = getStringFromInputStream(is);
            System.out.println(result);
            Object output = ftpClient.storeFile(firstRemoteFile, is);
            System.out.println(output);
            is.close();
            // Download File using retrieveFile(String, OutputStream)
            String remoteFile1 = "/settings.xml";
            File downloadFile1 = new File("e:/testOutput.xml");
            OutputStream outputStream1 = new BufferedOutputStream(new FileOutputStream(downloadFile1));
            boolean success = ftpClient.retrieveFile(remoteFile1, outputStream1);
            outputStream1.close();
            if (success) {
                System.out.println("File #1 has been downloaded successfully.");
            }
            // Logout
            ftpClient.logout();
            // Disconnect
            ftpClient.disconnect();
        } else {
            System.out.println("FTP login failed");
        }
        // Disconnect
        ftpClient.disconnect();
    } else {
        System.out.println("FTP connect to host failed");
    }
} catch (IOException ioe) {
    System.out.println("FTP client received network error");
    ioe.printStackTrace();
} catch (Exception nsae) {
    System.out.println("FTP client could not use SSL algorithm");
    nsae.printStackTrace();
}
It creates the file hello.txt on the server, but its size is 0 KB (the source file is 10 KB), and it ends with the following error. Please help me resolve this.
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at org.apache.commons.net.ftp.FTPSClient._openDataConnection_(FTPSClient.java:619)
at org.apache.commons.net.ftp.FTPClient._storeFile(FTPClient.java:633)
at org.apache.commons.net.ftp.FTPClient.__storeFile(FTPClient.java:624)
at org.apache.commons.net.ftp.FTPClient.storeFile(FTPClient.java:1976)
at com.test.ftps.TestClass.main(TestClass.java:88)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(Unknown Source)
... 9 more
Just un-tick "Require TLS session resumption on data connection when using PROT P" in the FileZilla server settings: Settings -> FTP over TLS settings.
In addition to user2750213's answer (FileZilla's TLS session resumption), make sure you have the required protocols enabled. You can verify them by running this code or this other one on the JVM connecting to the FTPS server. Recent versions of FileZilla Server use TLSv1.2.
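Since the linked snippets aren't reproduced here, a minimal sketch of one way to list the TLS protocols on the JVM (not necessarily the exact code those links pointed to):

import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsProtocolCheck {
    public static void main(String[] args) throws Exception {
        SSLContext context = SSLContext.getDefault();
        // Protocols enabled by default vs. everything the JVM supports
        System.out.println("Enabled:   " + Arrays.toString(context.createSSLEngine().getEnabledProtocols()));
        System.out.println("Supported: " + Arrays.toString(context.getSupportedSSLParameters().getProtocols()));
    }
}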
If this works for you, you may then get a java.net.SocketException: Unconnected sockets not implemented. In this case you need to write your own class extending DefaultSocketFactory and set it on your FTPS client via ftpsClient.setSocketFactory(yourSocketFactory), overriding the createSocket() method so that it returns a new Socket().
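A rough sketch of such a factory (the class and variable names are hypothetical; it assumes Commons Net's DefaultSocketFactory and the setSocketFactory method inherited from SocketClient):

import java.io.IOException;
import java.net.Socket;
import org.apache.commons.net.DefaultSocketFactory;

// Returns a plain unconnected Socket so the FTPS client can connect it itself
class UnconnectedSocketFactory extends DefaultSocketFactory {
    @Override
    public Socket createSocket() throws IOException {
        return new Socket();
    }
}

// usage, with a hypothetical client variable:
// ftpsClient.setSocketFactory(new UnconnectedSocketFactory());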
I have the below code to SFTP to a location
public static void putFile(String username, String host, String password, String remotefile, String localfile) {
    JSch jsch = new JSch();
    Session session = null;
    try {
        session = jsch.getSession(username, host, 22);
        session.setConfig("StrictHostKeyChecking", "no");
        session.setPassword(password);
        session.connect();
        Channel channel = session.openChannel("sftp");
        channel.connect();
        ChannelSftp sftpChannel = (ChannelSftp) channel;
        sftpChannel.put(localfile, remotefile);
        sftpChannel.exit();
        session.disconnect();
    } catch (JSchException e) {
        e.printStackTrace();
    } catch (SftpException e) {
        e.printStackTrace();
    }
}
I am able to SFTP the document from my local machine using the above code. However, when I try from a different environment to SFTP to the same location, I get the following error.
com.jcraft.jsch.JSchException: invalid server's version string at
com.jcraft.jsch.Session.connect(Session.java:253)
Note: I am using the jsch-0.1.31.jar file.
On printing out session.getClientVersion() I get "SSH-2.0-JSCH-0.1.31".
I tried upgrading the jar to jsch-0.1.51.jar; then session.getClientVersion() returns "SSH-1.5-JSCH-0.1.51" and I get the following error:
com.jcraft.jsch.JSchException: Session.connect: java.net.SocketException: Connection reset at com.jcraft.jsch.Session.connect(Session.java:558)
Can you please help me with what parameters I should be looking into, and why the upload to the same SFTP location works from my local machine but not from the other environment?
As noted by @Kenster, the exception is about the server's version string, not the client's. The "invalid server's version string" exception is thrown by the following code in Session.connect:
if(i==buf.buffer.length ||
i<7 || // SSH-1.99 or SSH-2.0
(buf.buffer[4]=='1' && buf.buffer[6]!='9') // SSH-1.5
){
throw new JSchException("invalid server's version string");
}
First, I would try to connect with some client that logs the version string and see for yourself. For example with WinSCP, search its log for a pattern like:
. 2014-09-03 17:01:20.596 Server version: SSH-2.0-OpenSSH_5.3
(I'm the author of WinSCP)
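If you want to check from Java directly, here is a rough sketch that just reads the identification string the server sends after the TCP connection opens (the hostname is a placeholder):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class SshBannerCheck {
    public static void main(String[] args) throws Exception {
        // An SSH server sends its identification string, e.g. "SSH-2.0-OpenSSH_5.3",
        // as (usually) the first line on the connection.
        try (Socket socket = new Socket("sftp-host.example", 22); // placeholder host
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}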
Though possibly it's not about the version string at all. I would rather trust the error raised by the new version, the Connection reset. The old version may fail to detect that the connection was aborted prematurely and try to validate random or incomplete data.
The Connection reset may indicate a wide variety of different errors, for example:
- the server refusing a connection from the other location;
- some firewall or proxy not allowing the connection to pass through.