I have code that reads data from an FTP server using MapReduce. The code we use to connect to the FTP server is as follows:
String inputPath = args[0];
String outputPath = args[1];
Configuration conf1 = new Configuration();
String[] otherArgs = new GenericOptionsParser(conf1, args).getRemainingArgs();
Path arg = new Path(inputPath);
FTPFileSystem ftpfs = new FTPFileSystem();
Path arg1 =new Path(outputPath);
ftpfs.setConf(conf1);
String ftpUser = URLEncoder.encode("username", "UTF-8");
String ftpPass = URLEncoder.encode("password", "UTF-8");
String url = String.format("ftp://%s:%s@ftpserver.com",
ftpUser, ftpPass);
ftpfs.initialize(new URI(url), conf1);
JobConf conf = new JobConf(FTPIF.class);
FileOutputFormat.setOutputPath(conf, arg1);
FileInputFormat.setInputPaths(conf, ftpfs.makeQualified(arg));
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(NullWritable.class);
conf.setOutputFormat(TextOutputFormat.class);
conf.setInputFormat(CustomInputFormat.class);
conf.setMapperClass(CustomMap.class);
conf.setReducerClass(CustomReduce.class);
JobClient.runJob(conf);
The problem is that this code works perfectly fine in pseudo-distributed mode but gives a "login failed on server" error when run on a cluster. The error stack trace is:
ERROR security.UserGroupInformation: PriviledgedActionException as:username (auth:SIMPLE) cause:java.io.IOException: Login failed on server - 0.0.0.0, port - 21
Exception in thread "main" java.io.IOException: Login failed on server - 0.0.0.0, port - 21
at org.apache.hadoop.fs.ftp.FTPFileSystem.connect(FTPFileSystem.java:133)
at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:389)
at org.apache.hadoop.fs.FileSystem.getFileStatus(FileSystem.java:2106)
at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1566)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1503)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:174)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:205)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1041)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1033)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:870)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1319)
at FTPIF.run(FTPIF.java:164)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at FTPIF.main(FTPIF.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
The cluster has connectivity to the FTP server. The credentials used are correct. Any ideas why the code is not able to connect to FTP?
If you have many nodes in your cluster and multiple mappers are trying to open connections to your FTP server, you can exceed the limit of concurrent users that the FTP server supports.
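If that is the cause, one rough way to test it is to cap concurrency and see whether the failures stop. This is only a sketch against the JobConf from the question: setNumMapTasks is merely a hint to the framework, and mapreduce.job.running.map.limit is only honored on newer Hadoop releases.
// Fewer map tasks running at once means fewer simultaneous FTP logins.
conf.setNumMapTasks(1);                              // hint only (old mapred API)
conf.setInt("mapreduce.job.running.map.limit", 1);   // hard cap on newer Hadoop versions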
I am running a word count program from my Windows machine on a Hadoop cluster which is set up on a remote Linux machine.
The program runs successfully and I get the output, but I get the following exception and my waitForCompletion(true) does not return true.
java.io.IOException: java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:345)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:430)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:331)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:328)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:328)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:612)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1629)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1591)
at practiceHadoop.WordCount$1.run(WordCount.java:60)
at practiceHadoop.WordCount$1.run(WordCount.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at practiceHadoop.WordCount.main(WordCount.java:24)
Caused by: java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy16.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:326)
... 17 more
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1383)
... 26 more
My MapReduce program, which I run from Eclipse (Windows):
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("admin");
ugi.doAs(new PrivilegedExceptionAction<Void>() {
public Void run() throws Exception {
try {
Configuration configuration = new Configuration();
configuration.set("yarn.resourcemanager.address", "192.168.33.75:50001"); // see step 3
configuration.set("mapreduce.framework.name", "yarn");
configuration.set("yarn.app.mapreduce.am.env",
"HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
configuration.set("mapreduce.map.env", "HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
configuration.set("mapreduce.reduce.env", "HADOOP_MAPRED_HOME=/home/admin/hadoop-3.1.0");
configuration.set("fs.defaultFS", "hdfs://192.168.33.75:54310"); // see step 2
configuration.set("mapreduce.app-submission.cross-platform", "true");
configuration.set("mapred.remote.os", "Linux");
configuration.set("yarn.application.classpath",
"{{HADOOP_CONF_DIR}},{{HADOOP_COMMON_HOME}}/share/hadoop/common/*,{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*,"
+ " {{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*,{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*,"
+ "{{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*,{{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/*,"
+ "{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*,{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*");
configuration.set("mlv_construct", "min");
configuration.set("column_name", "TotalCost");
Job job = Job.getInstance(configuration);
job.setJar("C:\\Users\\gauravp\\Desktop\\WordCountProgam.jar");
job.setJarByClass(WordCount.class); // use this when uploaded the Jar to the server and
// running the job directly and locally on the server
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(MapForWordCount.class);
job.setReducerClass(ReduceForWordCount.class);
Path input = new Path("/user/admin/wordCountInput.txt");
Path output = new Path("/user/admin/output");
FileSystem fs = FileSystem.get(configuration);
fs.delete(output);
FileInputFormat.addInputPath(job, input);
FileOutputFormat.setOutputPath(job, output);
if (job.waitForCompletion(true)) {
System.out.println("Job done...");
}
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
});
One more observation: my connections from the Windows machine to the remote Linux machine's ports (54310 and 50001) vanish after some time.
(Screenshots: HDFS port connection status, YARN port connection status.)
I have been stuck here for the last 5 days. Please help me. Thanks in advance.
Check whether your ResourceManager and NodeManager services are up and running using the jps command. In my case only the NameNode and DataNode services were up and the ones above were not running, so when I ran an INSERT query on Hive and it tried to run a MapReduce job, it failed with the above error.
Starting the YARN services mentioned above fixed the issue for me.
I am quite new to Hive, and I have a Java app which accesses Hive with Kerberos authentication, like below:
try
{
System.setProperty("java.security.krb5.conf", "/haManage/krb5.conf");
StringBuilder sBuilder = new StringBuilder();
sBuilder.append("jdbc:hive2://ha-cluster/default");
sBuilder.append(";zk.quorum=").append("x.x.x.x,x.x.x.x");//ip list
sBuilder.append(";zk.port=").append("24002");
if (isSecureVer) {
sBuilder.append(";user.principal=")
.append("hadoop#HADOOP.COM")
.append(";user.keytab=")
.append("/home/hdclient/gyj/user.keytab")
.append(";sasl.qop=auth-conf;auth=KERBEROS;principal=hive/" +
"hadoop.hadoop.com#HADOOP.COM;zk.principal=zookeeper/hadoop.hadoop.com");
}
url = sBuilder.toString();
logger.info(url);
Class.forName("org.apache.hive.jdbc.HiveDriver");
connToHive = DriverManager.getConnection(url,"","");
} catch (Exception e)
{
logger.error("Error occurs",e);
}
But an exception occurs, shown below:
Caused by: org.apache.thrift.transport.TTransportException: Cannot open without port.
at org.apache.thrift.transport.TSocket.open(TSocket.java:172) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:248) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) ~[hive-exec-0.14.0.jar:0.14.0]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656) ~[hadoop-common-2.6.4.jar:na]
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) ~[hive-exec-0.14.0.jar:0.14.0]
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190) ~[hive-jdbc-1.1.0.jar:1.1.0]
... 6 common frames omitted
Any help will be appreciated.
While you have the ZooKeeper port specified as a query-string parameter (needed for Kerberos auth), you also need the port for Hive after the host part of the URL. The normal port used by Hive is 10000, so your URL might start like this:
sBuilder.append("jdbc:hive2://ha-cluster:10000/default");
I installed FileZilla Server, enabled the FTP over TLS settings, and started the server.
Through an Eclipse Java client I tried to connect to the server to upload and download a file using the code below,
which uses the Apache commons-net library.
FTPSClient ftpClient = new FTPSClient(false);
try {
// Connect to host
ftpClient.connect(mServer, mPort);
int reply = ftpClient.getReplyCode();
System.out.println("The reply code is "+reply);
if (FTPReply.isPositiveCompletion(reply)) {
// Login
if (ftpClient.login("******", "*******")) {
// Set protection buffer size
ftpClient.execPBSZ(0);
// Set data channel protection to private
ftpClient.execPROT("P");
// Enter local passive mode
ftpClient.enterLocalPassiveMode();
// Upload File using storeFile
File firstLocalFile = new File("e:/Test.txt");
String firstRemoteFile = "hello.txt";
InputStream is = new FileInputStream(firstLocalFile);
String result = getStringFromInputStream(is);
System.out.println(result);
Object output = ftpClient.storeFile(firstRemoteFile, is);
System.out.println(output);
is.close();
// Download File using retrieveFile(String, OutputStream)
String remoteFile1 = "/settings.xml";
File downloadFile1 = new File("e:/testOutput.xml");
OutputStream outputStream1 = new BufferedOutputStream(new FileOutputStream(downloadFile1));
boolean success = ftpClient.retrieveFile(remoteFile1, outputStream1);
outputStream1.close();
if (success) {
System.out.println("File #1 has been downloaded successfully.");
}
// Logout
ftpClient.logout();
// Disconnect
ftpClient.disconnect();
} else {
System.out.println("FTP login failed");
}
// Disconnect
ftpClient.disconnect();
} else {
System.out.println("FTP connect to host failed");
}
} catch (IOException ioe) {
System.out.println("FTP client received network error");
ioe.printStackTrace();
} catch (Exception nsae) {
System.out.println("FTP client could not use SSL algorithm");
nsae.printStackTrace();
}
It creates the file hello.txt on the server, but its size is 0 KB (the source file is 10 KB), and it ends with the following error. Please help me to resolve this.
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at org.apache.commons.net.ftp.FTPSClient._openDataConnection_(FTPSClient.java:619)
at org.apache.commons.net.ftp.FTPClient._storeFile(FTPClient.java:633)
at org.apache.commons.net.ftp.FTPClient.__storeFile(FTPClient.java:624)
at org.apache.commons.net.ftp.FTPClient.storeFile(FTPClient.java:1976)
at com.test.ftps.TestClass.main(TestClass.java:88)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(Unknown Source)
... 9 more
Just un-tick "Require TLS session resumption on data connection when using PROT P" in FileZilla Server -> Settings -> FTP over TLS settings.
In addition to user2750213's answer (FileZilla's TLS session resumption), make sure the required protocols are enabled. You can verify this by checking which TLS protocol versions are enabled on the JVM that connects to the FTPS server; recent versions of FileZilla Server use TLSv1.2. A small stand-in for the examples linked in the original answer is sketched below.
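This is my own sketch, not the code the answer originally linked to: run it on the client JVM to see which protocol versions are enabled and supported by default.
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
// Minimal check: print the TLS protocol versions this JVM enables/supports by
// default, to confirm TLSv1.2 (or newer) is available to the FTPS client.
public class ListTlsProtocols {
public static void main(String[] args) throws Exception {
SSLSocket socket = (SSLSocket) SSLContext.getDefault().getSocketFactory().createSocket();
System.out.println("Enabled protocols:   " + Arrays.toString(socket.getEnabledProtocols()));
System.out.println("Supported protocols: " + Arrays.toString(socket.getSupportedProtocols()));
}
}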
If this works for you, you may then get a java.net.SocketException: Unconnected sockets not implemented. In that case you need to write your own class which extends the DefaultSocketFactory class, overriding the createSocket() method so that it returns a new Socket(), and set it on your FTPS client via ftpsClient.setSocketFactory(yourSocketFactory). A minimal sketch of such a factory follows.
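The class name here is just illustrative; only the createSocket() override matters.
import java.io.IOException;
import java.net.Socket;
import org.apache.commons.net.DefaultSocketFactory;
// Returns an unconnected Socket so the FTPS client can connect it itself
// instead of failing with "Unconnected sockets not implemented".
public class UnconnectedSocketFactory extends DefaultSocketFactory {
@Override
public Socket createSocket() throws IOException {
return new Socket();
}
}
Then register it before connecting: ftpClient.setSocketFactory(new UnconnectedSocketFactory());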
I am running a MapReduce job using an Accumulo table as input and storing the data in another table in Accumulo. This is the run method:
public int run(String[] args) throws Exception {
Opts opts = new Opts();
opts.parseArgs(PivotTable.class.getName(), args);
Configuration conf = getConf();
conf.set("formula", opts.formula);
Job job = Job.getInstance(conf);
job.setJobName("Pivot Table Generation");
job.setJarByClass(PivotTable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setMapperClass(PivotTableMapper.class);
job.setCombinerClass(PivotTableCombiber.class);
job.setReducerClass(PivotTableReducer.class);
AccumuloInputFormat.setInputTableName(job, opts.dataTable);
BatchWriterConfig bwConfig = new BatchWriterConfig();
AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
AccumuloOutputFormat.setCreateTables(job, true);
job.setInputFormatClass(AccumuloInputFormat.class);
job.setOutputFormatClass(AccumuloOutputFormat.class);
opts.setAccumuloConfigs(job);
return job.waitForCompletion(true) ? 0 : 1;
}
The problem, though, is that when I run the job I get an exception saying that it cannot connect to ZooKeeper.
Error: java.lang.RuntimeException: Failed to connect to zookeeper (zookeeper.1:22181) within 2x zookeeper timeout period 30000
at org.apache.accumulo.fate.zookeeper.ZooSession.connect(ZooSession.java:124)
at org.apache.accumulo.fate.zookeeper.ZooSession.getSession(ZooSession.java:164)
at org.apache.accumulo.fate.zookeeper.ZooReader.getSession(ZooReader.java:43)
at org.apache.accumulo.fate.zookeeper.ZooReader.getZooKeeper(ZooReader.java:47)
at org.apache.accumulo.fate.zookeeper.ZooCache.getZooKeeper(ZooCache.java:59)
at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:159)
at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
at org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceID(ZooKeeperInstance.java:169)
at org.apache.accumulo.core.client.ZooKeeperInstance.<init>(ZooKeeperInstance.java:159)
at org.apache.accumulo.core.client.ZooKeeperInstance.<init>(ZooKeeperInstance.java:140)
at org.apache.accumulo.core.client.mapreduce.RangeInputSplit.getInstance(RangeInputSplit.java:364)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat$AbstractRecordReader.initialize(AbstractInputFormat.java:495)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
I checked whether ZooKeeper was up, and it was running. I ran telnet to see if the port was open, and it was.
I am using $ACCUMULO_HOME/bin/tool.sh to run the job. Any help would be appreciated.
It was an issue with the hosts file on my Hadoop slaves. The hostname mappings were not correct.
I'm building a monitoring application as a servlet running on my WebSphere 7 ND deployment manager. The tool uses JMX to query the deployment manager for various data. Global security is enabled on the dmgr.
I'm having problems getting this to work, however. My first attempt was to use the WebSphere client code:
String sslProps = "file:" + base +"/properties/ssl.client.props";
System.setProperty("com.ibm.SSL.ConfigURL", sslProps);
String soapProps = "file:" + base +"/properties/soap.client.props";
System.setProperty("com.ibm.SOAP.ConfigURL", pp);
Properties connectProps = new Properties();
connectProps.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
connectProps.setProperty(AdminClient.CONNECTOR_HOST, dmgrHost);
connectProps.setProperty(AdminClient.CONNECTOR_PORT, soapPort);
connectProps.setProperty(AdminClient.CONNECTOR_SECURITY_ENABLED, "true");
AdminClient adminClient = AdminClientFactory.createAdminClient(connectProps) ;
This results in the following exception:
Caused by: com.ibm.websphere.management.exception.ConnectorNotAvailableException: ADMC0016E: The system cannot create a SOAP connector to connect to host ssunlab10.apaceng.net at port 13903.
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1306)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.access$300(SOAPConnectorClient.java:128)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient$4.run(SOAPConnectorClient.java:370)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.reconnect(SOAPConnectorClient.java:363)
... 22 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:519)
at java.net.Socket.connect(Socket.java:469)
at java.net.Socket.<init>(Socket.java:366)
at java.net.Socket.<init>(Socket.java:209)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1286)
... 26 more
So I then tried to do it via RMI, adding sas.client.props to the environment and setting the connector type in the code to CONNECTOR_TYPE_RMI. Now, though, I got a NameNotFoundException out of CORBA:
Caused by: javax.naming.NameNotFoundException: Context: , name: JMXConnector: First component in name JMXConnector not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0]
To see if it was an IBM issue, I tried using the standard JMX connector as well with the same result (substitute AdminClient for JMXConnector in the above error)
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/JMXConnector");
Hashtable h = new Hashtable();
String providerUrl = "corbaloc:iiop:" + dmgrHost + ":" + rmiPort + "/WsnAdminNameService";
h.put(Context.PROVIDER_URL, providerUrl);
// Specify the user ID and password for the server if security is enabled on server.
String[] credentials = new String[] { "***", "***" };
h.put("jmx.remote.credentials", credentials);
// Establish the JMX connection.
JMXConnector jmxc = JMXConnectorFactory.connect(url, h);
// Get the MBean server connection instance.
mbsc = jmxc.getMBeanServerConnection();
At this point, in desperation, I wrote a wsadmin script to run both the RMI and SOAP methods. To my amazement, this works fine. So my question is: why does the code not work in a servlet installed on the dmgr?
regards,
Trevor
For the SOAP error, the ConnectException looks like the wrong SOAP host/port was used for the dmgr. I would double-check the server logs for the SOAP port. For the RMI error (NameNotFoundException), it looks like you're trying to use JMXConnectorFactory, which isn't supported by WAS.
If your application is installed on the dmgr, it's probably easiest to just use AdminServiceFactory.getAdminService to get an in-process reference to the AdminService rather than trying to open a new connection to the same process:
http://publib.boulder.ibm.com/infocenter/wasinfo/fep/topic/com.ibm.websphere.javadoc.doc/web/apidocs/com/ibm/websphere/management/AdminServiceFactory.html
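A minimal sketch of that in-process approach, assuming the servlet runs inside the dmgr JVM; the class name and the MBean query are only illustrative, so adjust the ObjectName pattern to the MBeans your monitor actually needs.
import java.util.Set;
import javax.management.ObjectName;
import com.ibm.websphere.management.AdminService;
import com.ibm.websphere.management.AdminServiceFactory;
public class DmgrLocalQuery {
// Get an in-process AdminService reference (no SOAP/RMI connector needed
// when the code runs inside the dmgr JVM) and list some MBeans.
public static void listServerMBeans() throws Exception {
AdminService adminService = AdminServiceFactory.getAdminService();
Set names = adminService.queryNames(new ObjectName("WebSphere:type=Server,*"), null);
for (Object name : names) {
System.out.println(name);
}
}
}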