I'm using ActiveMQ to send files from one application to another, and I've encountered this error when the consumer attempts to connect to the ActiveMQ server.
Here is the connect method:
public void connect()
{
    connectionFactory = new ActiveMQConnectionFactory(
        "tcp://" + Configuration.getInstance().getServerAddress() +
        ":61616?jms.blobTransferPolicy.defaultUploadUrl=" + "http://" +
        Configuration.getInstance().getServerAddress() + ":8161/fileserver/"
        + "&connectionTimeout=0&soTimeout=0&soWriteTimeout=0"
        + "&useInactivityMonitor=false");
    try
    {
        connection = connectionFactory.createConnection();
        session = (ActiveMQSession) connection.createSession(false,
            Session.AUTO_ACKNOWLEDGE);
        destination = session.createQueue(Configuration.getInstance().getQueueName());
        consumer = session.createConsumer(destination);
        connection.start();
        // System.out.println("Consumer connected");
    } catch (JMSException e) {
        logger.error("PACS", e);
        e.printStackTrace();
    }
}
This is exactly the same method I use to connect the producer to ActiveMQ, and there it works perfectly. On the consumer side I get the following error:
javax.jms.JMSException: Unknown data type: 49
    at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54)
    at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1417)
    at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1522)
    at org.apache.activemq.ActiveMQConnection.createSession(ActiveMQConnection.java:328)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer.connect(CloudPacsConsumer.java:74)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer.start(CloudPacsConsumer.java:91)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer.run(CloudPacsConsumer.java:135)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer.init(CloudPacsConsumer.java:57)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer$1.call(CloudPacsConsumer.java:39)
    at com.kratossrl.pacs.consumer.CloudPacsConsumer$1.call(CloudPacsConsumer.java:1)
    at javafx.concurrent.Task$TaskCallable.call(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Unknown data type: 49
    at org.apache.activemq.openwire.OpenWireFormat.doUnmarshal(OpenWireFormat.java:348)
    at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:268)
    at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:221)
    at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:213)
    at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196)
    ... 1 more
I have searched the web for this error but have not found anything about the '49' value.
Has anyone encountered this situation, or does anyone know the cause/solution of this problem?
Thanks in advance, and sorry for my imperfect English.
Double-check your ActiveMQ versions. It almost looks as if the client is more recent than the server: when they negotiate the protocol, the client sends something the server does not understand, and the exception is thrown.
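If it helps to confirm the mismatch, you can print the client's JMS and provider metadata from the side that does connect (the producer). A minimal sketch, assuming the same connectionFactory setup as in the question:

import javax.jms.Connection;
import javax.jms.ConnectionMetaData;

// Print version info so the client and broker versions can be compared.
// Assumes "connectionFactory" is configured as in the question.
Connection c = connectionFactory.createConnection();
ConnectionMetaData md = c.getMetaData();
System.out.println("JMS version:      " + md.getJMSVersion());
System.out.println("Provider name:    " + md.getJMSProviderName());
System.out.println("Provider version: " + md.getProviderVersion());
c.close();

The broker prints its own version in its startup log, so the two can be compared directly.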
First, some context: my project produces a list of recommended articles. Every article has its own rule, so I use an AsyncTaskExecutor to query the different articles concurrently. Some rules are special, so I split the rules into two parts. Below is my code (I use Spring Boot + MyBatis).
@Bean
public AsyncTaskExecutor dataTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(16);
    executor.setThreadNamePrefix("data_task_executor-");
    return executor;
}
Here I initialize the AsyncTaskExecutor bean so it is ready to use.
Next is part of the code for the concurrent query.
// here I get the different rules
List<Rule> ruleList = JSON.parseArray(scene.getRules(), Rule.class);
Iterator<Rule> ruleIterator = ruleList.iterator();
CountDownLatch latch1 = new CountDownLatch(ruleList.size());
while (ruleIterator.hasNext()) {
    Rule ruleNext = ruleIterator.next();
    // unAsyncScenes is an array; rules found in it are queried here
    if (Arrays.binarySearch(unAsyncScenes, ruleNext.getSource()) >= 0) {
        dataTaskExecutor.execute(() -> {
            try {
                searchIDSByRule(idWithRtsMap, articleReferralList, sceneId, feedSum, userId,
                        isNewUserByHistory, discussHistoryList, discussList, graphHistorys, ruleNext);
                // record browsing history
                graphHistorys.addAll(idWithRtsMap.keySet());
            } catch (Exception e) {
                log.warn("sub-rule graph query failed", e);
            } finally {
                latch1.countDown();
            }
        });
        // remove the rule that has already been queried
        ruleIterator.remove();
    } else {
        latch1.countDown();
    }
}
try {
    latch1.await(10, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    log.error("multi-thread wait exception:", e);
}
// deal with duplicate articles
Set<Long> articleSet = new HashSet<>();
articleReferralList.forEach(article -> articleSet.add(article));
if (articleReferralList.size() != articleSet.size()) {
    log.warn("duplicate articles found");
    articleReferralList.clear();
    articleReferralList.addAll(articleSet);
}
final CountDownLatch latch = new CountDownLatch(ruleList.size());
for (Rule rule : ruleList) {
    // second concurrent query (query for the other articles)
    dataTaskExecutor.execute(() -> {
        try { // <-- the error occurs here!
            searchIDSByRule(idWithRtsMap, articleReferralList, sceneId, feedSum, userId,
                    isNewUserByHistory, discussHistoryList, discussList, graphHistorys, rule);
        } catch (Exception e) {
            log.warn("sub-rule graph query failed", e);
        } finally {
            latch.countDown();
        }
    });
}
try {
    latch.await(10, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    log.error("multi-thread wait exception:", e);
}
This is all the query code, but when I run it, it sometimes gives an error like this:
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.util.ConcurrentModificationException
### Cause: java.util.ConcurrentModificationException
    at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:77) ~[mybatis-spring-1.3.1.jar!/:1.3.1]
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:446) ~[mybatis-spring-1.3.1.jar!/:1.3.1]
    at com.sun.proxy.$Proxy91.selectList(Unknown Source) ~[?:?]
    at org.mybatis.spring.SqlSessionTemplate.selectList(SqlSessionTemplate.java:230) ~[mybatis-spring-1.3.1.jar!/:1.3.1]
    at org.apache.ibatis.binding.MapperMethod.executeForMany(MapperMethod.java:137) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:75) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:59) ~[mybatis-3.4.5.jar!/:3.4.5]
    at com.sun.proxy.$Proxy131.searchBySigAndExample(Unknown Source) ~[?:?]
    at com.coffee.ref.service.impl.ReferralServiceImpl.searchIDSByRule(ReferralServiceImpl.java:842) ~[classes!/:0.0.1]
    at com.coffee.ref.service.impl.ReferralServiceImpl.lambda$findArticleIDSByRule$7(ReferralServiceImpl.java:625) ~[classes!/:0.0.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.util.ConcurrentModificationException
### Cause: java.util.ConcurrentModificationException
    at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:30) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:150) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:141) ~[mybatis-3.4.5.jar!/:3.4.5]
    at sun.reflect.GeneratedMethodAccessor143.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:433) ~[mybatis-spring-1.3.1.jar!/:1.3.1]
    ... 11 more
Caused by: java.util.ConcurrentModificationException
    at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909) ~[?:1.8.0_212]
    at java.util.ArrayList$Itr.next(ArrayList.java:859) ~[?:1.8.0_212]
    at org.apache.ibatis.scripting.xmltags.ForEachSqlNode.apply(ForEachSqlNode.java:62) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.scripting.xmltags.MixedSqlNode.apply(MixedSqlNode.java:33) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.scripting.xmltags.IfSqlNode.apply(IfSqlNode.java:35) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.scripting.xmltags.MixedSqlNode.apply(MixedSqlNode.java:33) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.scripting.xmltags.DynamicSqlSource.getBoundSql(DynamicSqlSource.java:41) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.mapping.MappedStatement.getBoundSql(MappedStatement.java:292) ~[mybatis-3.4.5.jar!/:3.4.5]
    at com.github.pagehelper.PageInterceptor.intercept(PageInterceptor.java:83) ~[pagehelper-5.1.2.jar!/:?]
    at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:61) ~[mybatis-3.4.5.jar!/:3.4.5]
    at com.sun.proxy.$Proxy188.query(Unknown Source) ~[?:?]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:148) ~[mybatis-3.4.5.jar!/:3.4.5]
    at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:141) ~[mybatis-3.4.5.jar!/:3.4.5]
    at sun.reflect.GeneratedMethodAccessor143.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:433) ~[mybatis-spring-1.3.1.jar!/:1.3.1]
    ... 11 more
The place where the error is reported is marked above. I don't understand why; MyBatis should be thread-safe.
The official documentation says it is thread safe.
SqlSessionTemplate itself is thread-safe. The problem is in your code.
The exception shows that the error happens in the foreach element. Note this piece of the stack trace:
Caused by: java.util.ConcurrentModificationException
    at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909) ~[?:1.8.0_212]
    at java.util.ArrayList$Itr.next(ArrayList.java:859) ~[?:1.8.0_212]
    at org.apache.ibatis.scripting.xmltags.ForEachSqlNode.apply(ForEachSqlNode.java:62) ~[mybatis-3.4.5.jar!/:3.4.5]
So what happens here? In the mapper you build SQL dynamically by iterating over some collection, and that collection is modified concurrently by another thread. The iterator over the collection has a built-in check that the collection is not modified, and this check is telling you there is a problem.
To fix this you need to synchronize access to the collection that is used from multiple threads, so that querying based on it happens atomically and no modification can occur in the middle of the query generation.
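For example (a sketch, not your original code: the lock object and helper method are illustrative), you can hand each query a defensive copy of the shared list taken under a lock, so the foreach in the mapper never iterates a list another thread is mutating:

import java.util.ArrayList;
import java.util.List;

// Illustrative helper: snapshot the shared list under a lock before
// passing it to the MyBatis query. All writers must hold the same lock.
private final Object listLock = new Object();

private List<Long> snapshotForQuery(List<Long> sharedList) {
    synchronized (listLock) {
        return new ArrayList<>(sharedList);
    }
}

The snapshot is then passed to searchIDSByRule instead of the live articleReferralList, and any code that modifies the list (clear, addAll, and so on) synchronizes on the same lock.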
One possible reason for this is that the result of this await is not checked:
latch1.await(10, TimeUnit.SECONDS);
If the processing takes more than 10 seconds, the second part starts executing while the data the query is based on is still being modified. This can happen because the amount of work depends on the data.
You need to check the result of this await and not continue processing until all tasks in the first part of the procedure have finished.
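A minimal sketch of that check; await returns false when the timeout elapses before the count reaches zero:

// Do not start the second phase unless every first-phase task finished.
try {
    if (!latch1.await(10, TimeUnit.SECONDS)) {
        log.warn("first-phase rule queries did not finish in time; aborting");
        return; // or wait longer / fail the request
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    log.error("interrupted while waiting for rule queries", e);
    return;
}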
I am running a word count program from my Windows machine on a Hadoop cluster which is set up on a remote Linux machine.
The program runs successfully and I get output, but I get the following exception, and my waitForCompletion(true) does not return true.
java.io.IOException: java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:345)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:430)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:331)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:328)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:328)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:612)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1629)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1591)
at practiceHadoop.WordCount$1.run(WordCount.java:60)
at practiceHadoop.WordCount$1.run(WordCount.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at practiceHadoop.WordCount.main(WordCount.java:24)
Caused by: java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy16.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:326)
... 17 more
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1383)
... 26 more
My MapReduce program, which I run from Eclipse (Windows):
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("admin");
ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        try {
            Configuration configuration = new Configuration();
            configuration.set("yarn.resourcemanager.address", "192.168.33.75:50001"); // see step 3
            configuration.set("mapreduce.framework.name", "yarn");
            configuration.set("yarn.app.mapreduce.am.env",
                    "HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
            configuration.set("mapreduce.map.env", "HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
            configuration.set("mapreduce.reduce.env", "HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
            configuration.set("fs.defaultFS", "hdfs://192.168.33.75:54310"); // see step 2
            configuration.set("mapreduce.app-submission.cross-platform", "true");
            configuration.set("mapred.remote.os", "Linux");
            configuration.set("yarn.application.classpath",
                    "{{HADOOP_CONF_DIR}},{{HADOOP_COMMON_HOME}}/share/hadoop/common/*,{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*,"
                    + " {{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*,{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*,"
                    + "{{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*,{{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/*,"
                    + "{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*,{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*");
            configuration.set("mlv_construct", "min");
            configuration.set("column_name", "TotalCost");
            Job job = Job.getInstance(configuration);
            job.setJar("C:\\Users\\gauravp\\Desktop\\WordCountProgam.jar");
            job.setJarByClass(WordCount.class); // use this when the jar has been uploaded to the server
                                                // and the job is run directly and locally on the server
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            job.setMapperClass(MapForWordCount.class);
            job.setReducerClass(ReduceForWordCount.class);
            Path input = new Path("/user/admin/wordCountInput.txt");
            Path output = new Path("/user/admin/output");
            FileSystem fs = FileSystem.get(configuration);
            fs.delete(output, true); // recursive delete of the old output dir
            FileInputFormat.addInputPath(job, input);
            FileOutputFormat.setOutputPath(job, output);
            if (job.waitForCompletion(true)) {
                System.out.println("Job done...");
            }
One more observation: my connections from the Windows machine to the remote Linux machine's ports (54310 and 50001) vanish after some time.
(screenshots: HDFS port connection status, YARN port connection status)
I have been stuck on this for the last 5 days. Please help me. Thanks in advance.
Check whether your ResourceManager and NodeManager services are up and running, using the jps command. In my case only the NameNode and DataNode services were up, and the two above were not running. So when running an INSERT query on Hive, it failed with the above error when it tried to run a MapReduce job.
Starting the YARN services mentioned above fixed the issue for me.
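If you also want to confirm from the Windows side that the ResourceManager endpoint is reachable at all, a small probe like this can help (host and port taken from the question; it only checks the TCP connection, not YARN's health):

import java.net.InetSocketAddress;
import java.net.Socket;

// Quick TCP reachability probe for the ResourceManager address.
public class PortProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.33.75", 50001), 5000);
            System.out.println("ResourceManager port is reachable");
        }
    }
}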
I am quite new to Hive. I have a Java app which accesses Hive with Kerberos authentication, like below:
try
{
    System.setProperty("java.security.krb5.conf", "/haManage/krb5.conf");
    StringBuilder sBuilder = new StringBuilder();
    sBuilder.append("jdbc:hive2://ha-cluster/default");
    sBuilder.append(";zk.quorum=").append("x.x.x.x,x.x.x.x"); // ip list
    sBuilder.append(";zk.port=").append("24002");
    if (isSecureVer) {
        sBuilder.append(";user.principal=")
                .append("hadoop#HADOOP.COM")
                .append(";user.keytab=")
                .append("/home/hdclient/gyj/user.keytab")
                .append(";sasl.qop=auth-conf;auth=KERBEROS;principal=hive/" +
                        "hadoop.hadoop.com#HADOOP.COM;zk.principal=zookeeper/hadoop.hadoop.com");
    }
    url = sBuilder.toString();
    logger.info(url);
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    connToHive = DriverManager.getConnection(url, "", "");
} catch (Exception e)
{
    logger.error("Error occurs", e);
}
But an exception occurs, shown below:
Caused by: org.apache.thrift.transport.TTransportException: Cannot open without port.
at org.apache.thrift.transport.TSocket.open(TSocket.java:172) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:248) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) ~[hive-exec-0.14.0.jar:0.14.0]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656) ~[hadoop-common-2.6.4.jar:na]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190) ~[hive-jdbc-1.1.0.jar:1.1.0]
... 6 common frames omitted
Any help will be appreciated.
While you have the ZooKeeper port specified as a query-string parameter (needed for Kerberos auth), you also need the port for Hive after the hostname part of the URL. The normal port used by Hive is 10000, so your URL might start like this:
sBuilder.append("jdbc:hive2://ha-cluster:10000/default");
I installed FileZilla Server, enabled FTP over TLS in its settings, and started the server.
Through an Eclipse Java client I tried to connect to the server to upload and download a file, using the code below with the Apache commons-net library.
FTPSClient ftpClient = new FTPSClient(false);
try {
    // Connect to host
    ftpClient.connect(mServer, mPort);
    int reply = ftpClient.getReplyCode();
    System.out.println("The reply code is " + reply);
    if (FTPReply.isPositiveCompletion(reply)) {
        // Login
        if (ftpClient.login("******", "*******")) {
            // Set protection buffer size
            ftpClient.execPBSZ(0);
            // Set data channel protection to private
            ftpClient.execPROT("P");
            // Enter local passive mode
            ftpClient.enterLocalPassiveMode();
            // Upload file using storeFile
            File firstLocalFile = new File("e:/Test.txt");
            String firstRemoteFile = "hello.txt";
            InputStream is = new FileInputStream(firstLocalFile);
            // NOTE: this reads the stream to its end, so the storeFile call
            // below has nothing left to upload (which explains the 0 KB file);
            // a fresh InputStream should be opened for the actual upload.
            String result = getStringFromInputStream(is);
            System.out.println(result);
            boolean uploaded = ftpClient.storeFile(firstRemoteFile, is);
            System.out.println(uploaded);
            is.close();
            // Download file using retrieveFile(String, OutputStream)
            String remoteFile1 = "/settings.xml";
            File downloadFile1 = new File("e:/testOutput.xml");
            OutputStream outputStream1 = new BufferedOutputStream(new FileOutputStream(downloadFile1));
            boolean success = ftpClient.retrieveFile(remoteFile1, outputStream1);
            outputStream1.close();
            if (success) {
                System.out.println("File #1 has been downloaded successfully.");
            }
            // Logout
            ftpClient.logout();
            // Disconnect
            ftpClient.disconnect();
        } else {
            System.out.println("FTP login failed");
        }
        // Disconnect
        ftpClient.disconnect();
    } else {
        System.out.println("FTP connect to host failed");
    }
} catch (IOException ioe) {
    System.out.println("FTP client received network error");
    ioe.printStackTrace();
} catch (Exception nsae) {
    System.out.println("FTP client could not use SSL algorithm");
    nsae.printStackTrace();
}
It creates the file hello.txt on the server, but its size is 0 KB (the source file is 10 KB), and it ends with the following error. Please help me resolve this.
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at org.apache.commons.net.ftp.FTPSClient._openDataConnection_(FTPSClient.java:619)
at org.apache.commons.net.ftp.FTPClient._storeFile(FTPClient.java:633)
at org.apache.commons.net.ftp.FTPClient.__storeFile(FTPClient.java:624)
at org.apache.commons.net.ftp.FTPClient.storeFile(FTPClient.java:1976)
at com.test.ftps.TestClass.main(TestClass.java:88)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(Unknown Source)
... 9 more
Just un-tick "Require TLS session resumption on data connection when using PROT P" in FileZilla Server -> Settings -> FTP over TLS settings.
In addition to user2750213's answer (FileZilla's TLS session resumption), make sure the required protocols are enabled on the JVM connecting to the FTPS server: recent versions of FileZilla Server use TLSv1.2.
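A small sketch to list which TLS protocol versions the default JSSE context supports and enables (standard Java API; the output varies by JVM version):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Print the TLS protocol versions this JVM supports and enables by default.
public class TlsProtocols {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters supported = ctx.getSupportedSSLParameters();
        SSLParameters enabled = ctx.getDefaultSSLParameters();
        System.out.println("Supported: " + String.join(", ", supported.getProtocols()));
        System.out.println("Enabled:   " + String.join(", ", enabled.getProtocols()));
    }
}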
If this works for you, you may then get a java.net.SocketException: Unconnected sockets not implemented. In that case you need to write your own class extending commons-net's DefaultSocketFactory, overriding the createSocket() method so that it returns a new Socket(), and set it on your FTPS client via ftpsClient.setSocketFactory(yourSocketFactory).
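A minimal sketch of such a factory (the class name is illustrative; the no-arg createSocket() is the variant that otherwise throws "Unconnected sockets not implemented"):

import java.io.IOException;
import java.net.Socket;
import org.apache.commons.net.DefaultSocketFactory;

// Returns a plain unconnected Socket instead of throwing.
public class UnconnectedSocketFactory extends DefaultSocketFactory {
    @Override
    public Socket createSocket() throws IOException {
        return new Socket();
    }
}

and then: ftpClient.setSocketFactory(new UnconnectedSocketFactory());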
I am new to Storm. I am stuck with the error below:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
1178482 [Thread-11-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x1417cd58578000b for server null, unexpected error, closing socket connection and attempting reconnect
Sometimes my topology works fine, but when I try again I get the above error. I have searched a lot on Google but could not find any clue.
I am running my topology in a local cluster. Please suggest some solutions.
Please find more logs below:
2595 [Thread-11-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
2596 [Thread-11-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
2596 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
3592 [Thread-11-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x1417e6596c7000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
3895 [Thread-11-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x1417e6596c7000b for server null, unexpected error, closing socket connection and attempting reconnect
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:380)
at com.netflix.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:49)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:617)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
More logs:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
50454 [Thread-15] ERROR com.netflix.curator.ConnectionState - Connection timed out
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72)
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74)
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353)
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:149)
at com.netflix.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:138)
at com.netflix.curator.RetryLoop.callWithRetry(RetryLoop.java:85)
at com.netflix.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:134)
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:125)
at com.netflix.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:34)
at backtype.storm.zookeeper$exists_node_QMARK_.invoke(zookeeper.clj:78)
at backtype.storm.zookeeper$exists.invoke(zookeeper.clj:117)
at backtype.storm.cluster$mk_distributed_cluster_state$reify__1996.set_data(cluster.clj:70)
at backtype.storm.cluster$mk_storm_cluster_state$reify__2415.worker_heartbeat_BANG_(cluster.clj:276)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at backtype.storm.daemon.worker$do_executor_heartbeats.doInvoke(worker.clj:35)
at clojure.lang.RestFn.invoke(RestFn.java:439)
at backtype.storm.daemon.worker$fn__4348$exec_fn__1228__auto____4349$fn__4352.invoke(worker.clj:346)
at backtype.storm.timer$schedule_recurring$this__1776.invoke(timer.clj:69)
at backtype.storm.timer$mk_timer$fn__1759$fn__1760.invoke(timer.clj:33)
at backtype.storm.timer$mk_timer$fn__1759.invoke(timer.clj:26)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:680)
I have just encountered this problem too. My problem was that the running time was set too short: ZooKeeper did not have enough time to shut down properly. Look at the code below:
        builder.createTopology());
    try {
        Thread.sleep(20000);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    cluster.shutdown();
}
You should allow enough time before cluster.shutdown() is called. At first I set Thread.sleep(1000), and the same problem occurred as yours. After I increased the time, the problem never showed up again.
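A slightly tidier variant of the same idea (a sketch using the classic backtype.storm local-cluster API; the topology name is illustrative, and conf/builder are assumed to be set up as usual):

import backtype.storm.LocalCluster;

// Run the topology for a fixed window, stop it cleanly, then shut the
// local cluster (and its embedded ZooKeeper) down.
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("my-topology", conf, builder.createTopology());
Thread.sleep(20000);               // let the topology do its work
cluster.killTopology("my-topology");
Thread.sleep(2000);                // give ZooKeeper time to settle
cluster.shutdown();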