I followed https://sqoop.apache.org/docs/1.99.4/RESTAPI.html to try out Sqoop2, but I am getting the error "Exception in thread "main" org.apache.sqoop.common.SqoopException: MODEL_011:Input do not exist - Input name: linkConfig.connectionString" on the line linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://localhost/my");
I tested Sqoop2, MySQL, the database, etc. from the terminal and everything works fine. Please help. Thanks in advance.
Here is the code I am trying:
import org.apache.sqoop.client.SqoopClient;
import org.apache.sqoop.model.MLink;
import org.apache.sqoop.model.MLinkConfig;
import org.apache.sqoop.validation.Status;

public class Sqoop2 {
    public static void main(String[] args) {
        // initialize the SqoopClient
        String url = "http://<myip>:12000/sqoop/";
        SqoopClient client = new SqoopClient(url);

        // create a placeholder for the link
        long connectorId = 1;
        MLink link = client.createLink(connectorId);
        link.setName("Vampire");
        link.setCreationUser("Buffy");
        MLinkConfig linkConfig = link.getConnectorLinkConfig();

        // fill in the link config values
        linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://<myip>/<dbname>");
        linkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver");
        linkConfig.getStringInput("linkConfig.username").setValue("root");
        linkConfig.getStringInput("linkConfig.password").setValue("root");

        // save the link object that was filled
        Status status = client.saveLink(link);
        if (status.canProceed()) {
            System.out.println("Created Link with Link Id : " + link.getPersistenceId());
        } else {
            System.out.println("Something went wrong creating the link");
        }
    }
}
I faced the same issue. As per the documentation, the generic-jdbc connector ID is 1 and the hdfs-connector ID is 2, but after we upgraded to 5.3.2 the IDs were swapped.
Don't hard-code the connector IDs (as the documentation does). Use client.getConnectors() from the Java client, or the show connector --all shell command, to list the existing connectors and find the ID you need; a sketch follows below. There is currently an issue logged for this: https://issues.apache.org/jira/browse/SQOOP-1965.
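A minimal sketch of that lookup, assuming the 1.99.x Java client where MConnector exposes getUniqueName() and getPersistenceId(), and assuming "generic-jdbc-connector" is the unique name your server reports:

import org.apache.sqoop.model.MConnector;

// Look the connector up by its unique name instead of assuming its ID.
long jdbcConnectorId = -1;
for (MConnector connector : client.getConnectors()) {
    if ("generic-jdbc-connector".equals(connector.getUniqueName())) {
        jdbcConnectorId = connector.getPersistenceId();
    }
}
MLink link = client.createLink(jdbcConnectorId);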
It looks like connector 1 already exists. Can you try another ID?
I was able to receive push notifications some months ago. A day ago I started working on the app again, and now it is not able to receive push notifications. It does provide an FCM token, but onMessageReceived never gets called, and if I try with Postman it gives a MismatchSenderId error. The scenario here is a bit confusing: if I change the package name (after creating a new project on the console and adding the new google-services.json file), it doesn't let me register for an FCM token. I've been stuck in this situation since yesterday. Can anybody please help? What am I doing wrong?
Here is the implementation of the FCM token service:
[Service]
[IntentFilter(new[] { "com.google.firebase.INSTANCE_ID_EVENT" })]
public class MyFirebaseIIDService : FirebaseInstanceIdService
{
    const string TAG = "MyFirebaseIIDService";

    public override void OnTokenRefresh()
    {
        var refreshedToken = FirebaseInstanceId.Instance.Token;
        Log.Debug(TAG, "Refreshed token: " + refreshedToken);
        SendRegistrationToServer(refreshedToken);
    }

    void SendRegistrationToServer(string token)
    {
        // Add custom implementation, as needed.
    }
}
Here it gives me the following error if I change my package name to anything else:
Error: Java.Lang.IllegalStateException: Default FirebaseApp is not
initialized in this process
try
{
    var refreshedToken = FirebaseInstanceId.Instance.Token;
    // PushNotificationManager.Initialize(this, false);
}
catch (Exception ee)
{
    // Note: swallowing the exception here hides the underlying initialization failure.
}
I've solved my issue by customizing the Firebase initialization after creating a new project on Firebase; here is my code. One bad thing remains, though: when a new token gets initialized, it never gets called on FirebaseInstanceIdReceiver.
var options = new FirebaseOptions.Builder()
    .SetApplicationId("<AppID>")
    .SetApiKey("<ApiKey>")
    .SetDatabaseUrl("<DBURl>")
    .SetStorageBucket("<StorageBucket>")
    .SetGcmSenderId("<SenderID>")
    .Build();
var fapp = FirebaseApp.InitializeApp(this, options);
I'm a beginner with Java and use the console to compile and run my programs. I'm trying to read data from an MS Access .accdb file with the UCanAccess driver. I have added the 5 UCanAccess files to C:\Program Files\Java\jdk1.8.0_60\jre\lib\ext, but I am still getting the exception java.lang.ClassNotFoundException: net.ucanaccess.jdbc.UcanaccessDriver.
Here is my code:
import java.sql.*;

public class jdbcTest
{
    public static void main(String[] args)
    {
        try
        {
            Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
            String url = "jdbc:ucanaccess://C:/javawork/PersonInfoDB/PersonInfo.accdb";
            Connection conctn = DriverManager.getConnection(url);
            Statement statmnt = conctn.createStatement();
            String sql = "SELECT * FROM person";
            ResultSet rsltSet = statmnt.executeQuery(sql);
            while (rsltSet.next())
            {
                String name = rsltSet.getString("name");
                String address = rsltSet.getString("address");
                String phoneNum = rsltSet.getString("phoneNumber");
                System.out.println(name + " " + address + " " + phoneNum);
            }
            conctn.close();
        }
        catch (Exception sqlExcptn)
        {
            System.out.println(sqlExcptn);
        }
    }
}
Please add the JDBC driver JAR to the lib folder.
Download URL: download jar
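Since the JARs are apparently not being picked up from jre\lib\ext, one option when compiling and running from the console is to put them on the classpath explicitly. A minimal sketch (the JAR file names and versions are placeholders; use the five JARs that ship with the UCanAccess distribution, i.e. ucanaccess, jackcess, hsqldb, commons-lang, and commons-logging):

javac jdbcTest.java
java -cp ".;ucanaccess-x.x.x.jar;jackcess-x.x.x.jar;hsqldb-x.x.x.jar;commons-lang-x.x.jar;commons-logging-x.x.jar" jdbcTest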
I tried the method mentioned by Gord in his post Manipulating an Access database from Java without ODBC and used Eclipse instead of compiling and running from the command line. Also, to learn the Eclipse basics, I watched the video tutorial https://www.youtube.com/watch?v=mMu-JlBrYXo.
Finally I was able to read my MS Access database file from my Java code.
OTAClient.dll version 10.0.0.2532
The following is the code I have used to connect to QC, apply a filter, and fetch the QC path and a field's data. It works fine with 32-bit Java 1.7.
import com.oracle.qcTasks.ClassFactory;
import com.oracle.qcTasks.IList;
import com.oracle.qcTasks.ISubjectNode;
import com.oracle.qcTasks.ITDConnection;
import com.oracle.qcTasks.ITDFilter;
import com.oracle.qcTasks.ITest;
import com.oracle.qcTasks.ITestFactory;
import com4j.Com4jObject;

public class qcClient {
    public static void main(String[] args) {
        ITest qcTestCase;
        ISubjectNode qcTestCasePath;
        Com4jObject SubjectField;
        // QC url
        String url = "http://fusionqc.us.oracle.com/";
        // username for login
        String username = "username";
        // password for login
        String password = "";
        // domain
        String domain = "domain";
        // project
        String project = "project";
        ITDConnection itdc = ClassFactory.createTDConnection();
        itdc.initConnectionEx(url);
        itdc.connectProjectEx(domain, project, username, password);
        boolean flag = itdc.connected();
        System.out.println(itdc.projectName());
        ITestFactory qcTestFactory = itdc.testFactory().queryInterface(ITestFactory.class);
        ITDFilter qcFilter = qcTestFactory.filter().queryInterface(ITDFilter.class);
        String query = "^Subject\\path^";
        qcFilter.clear();
        qcFilter.filter("TS_SUBJECT", query);
        IList qcTestList = qcFilter.newList();
        for (Com4jObject com4jObject : qcTestList) {
            qcTestCase = com4jObject.queryInterface(ITest.class);
            System.out.println(qcTestCase.name());
            System.out.println(qcTestCase.field("TS_USER_09"));
            SubjectField = (Com4jObject) qcTestCase.field("TS_SUBJECT");
            qcTestCasePath = SubjectField.queryInterface(ISubjectNode.class);
            System.out.println(qcTestCasePath.path());
            break;
        }
        System.out.println("command output :: " + flag);
        System.out.println("OUT");
        itdc.disconnectProject();
    }
}
Due to a project requirement, I've downgraded to the 64-bit version of Java 1.6. Since the downgrade, I'm receiving the following error.
Exception in thread "main" com4j.ExecutionException: com4j.ComException: 80040154 CoCreateInstance failed : Class not registered : .\com4j.cpp:153
at com4j.ComThread.execute(ComThread.java:203)
at com4j.Task.execute(Task.java:25)
at com4j.COM4J.createInstance(COM4J.java:97)
at com4j.COM4J.createInstance(COM4J.java:72)
at com.oracle.qcTasks.ClassFactory.createTDConnection(ClassFactory.java:16)
at com.oracle.qcCode.qcClient.main(qcClient.java:32)
Caused by: com4j.ComException: 80040154 CoCreateInstance failed : Class not registered : .\com4j.cpp:153
at com4j.Native.createInstance(Native Method)
at com4j.COM4J$CreateInstanceTask.call(COM4J.java:117)
at com4j.COM4J$CreateInstanceTask.call(COM4J.java:104)
at com4j.Task.invoke(Task.java:51)
at com4j.ComThread.run0(ComThread.java:153)
at com4j.ComThread.run(ComThread.java:134)
I've found similar threads but no particular solution for this. Please help. Is there any impact with respect to Java versions?
What version of com4j do you use? See this post about using com4j on a 64-bit Windows machine. Are there any COM objects you can use from your 64-bit environment?
According to this blog entry, you get a "Class not registered" exception when you try to access a 32-bit COM object in a 64-bit environment. It even contains a workaround using some registry hacks. Maybe that works?
I'm trying to run the benchmark software YCSB on ElasticSearch.
The problem I'm having is that after the load, the data seems to get removed during cleanup.
I'm struggling to understand what is supposed to happen.
If I comment out the cleanup, it still fails because it cannot find the index during the "run" phase.
Can someone please explain what is supposed to happen in YCSB? I would expect:
1. load phase: load, say, 1,000,000 records
2. run phase: query the records loaded during the load phase (roughly as in the invocations sketched below)
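My understanding is that the two phases are separate invocations against the same data; something like the following, where the workload file and parameters are only illustrative:

bin/ycsb load elasticsearch -P workloads/workloada -p recordcount=1000000
bin/ycsb run elasticsearch -P workloads/workloada -p operationcount=1000000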
Thanks,
Okay, I have discovered by running Couchbase in YCSB that the data shouldn't be removed.
Looking at cleanup() for ElasticSearchClient, I see no reason why the files would be deleted:
@Override
public void cleanup() throws DBException {
    if (!node.isClosed()) {
        client.close();
        node.stop();
        node.close();
    }
}
The init is as follows. Any reason this would not persist on the filesystem?
public void init() throws DBException {
    // initialize the embedded ElasticSearch node
    Properties props = getProperties();
    this.indexKey = props.getProperty("es.index.key", DEFAULT_INDEX_KEY);
    String clusterName = props.getProperty("cluster.name", DEFAULT_CLUSTER_NAME);
    Boolean newdb = Boolean.parseBoolean(props.getProperty("elasticsearch.newdb", "false"));
    Builder settings = settingsBuilder()
        .put("node.local", "true")
        .put("path.data", System.getProperty("java.io.tmpdir") + "/esdata")
        .put("discovery.zen.ping.multicast.enabled", "false")
        .put("index.mapping._id.indexed", "true")
        .put("index.gateway.type", "none")
        .put("gateway.type", "none")
        .put("index.number_of_shards", "1")
        .put("index.number_of_replicas", "0");
    // if the properties file contains user-defined ElasticSearch properties,
    // add them to the settings (this will overwrite the defaults)
    settings.put(props);
    System.out.println("ElasticSearch starting node = " + settings.get("cluster.name"));
    System.out.println("ElasticSearch node data path = " + settings.get("path.data"));
    node = nodeBuilder().clusterName(clusterName).settings(settings).node();
    node.start();
    client = node.client();
    if (newdb) {
        client.admin().indices().prepareDelete(indexKey).execute().actionGet();
        client.admin().indices().prepareCreate(indexKey).execute().actionGet();
    } else {
        boolean exists = client.admin().indices().exists(Requests.indicesExistsRequest(indexKey)).actionGet().isExists();
        if (!exists) {
            client.admin().indices().prepareCreate(indexKey).execute().actionGet();
        }
    }
}
Thanks,
Okay, what I am finding is as follows (any help from ElasticSearch-ers much appreciated, because I'm obviously doing something wrong): even when the load shuts down leaving the data behind, the "run" phase still cannot find the data on startup.
ElasticSearch node data path = C:\Users\Pl_2\AppData\Local\Temp\/esdata
org.elasticsearch.action.NoShardAvailableActionException: [es.ycsb][0] No shard available for [[es.ycsb][usertable][user4283669858964623926]: routing [null]]
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.perform(TransportShardSingleOperationAction.java:140)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.start(TransportShardSingleOperationAction.java:125)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:72)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:47)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:83)
The GitHub README has been updated.
It looks like you need to specify the data directory using:
-p path.home=<path to folder to persist data>
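On the command line that would look roughly like this (the path and workload are placeholders); use the same path.home for both phases so the run phase sees the data the load phase wrote:

bin/ycsb load elasticsearch -P workloads/workloada -p path.home=/var/ycsb/esdata
bin/ycsb run elasticsearch -P workloads/workloada -p path.home=/var/ycsb/esdata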
I am trying to run the Hive Thrift code below against HiveServer2 on CDH 4.3 and getting the error below. I can make a Hive JDBC connection to the same server successfully; it is just Thrift that is not working. Here is my code:
public static void main(String[] args) throws Exception
{
    TSocket transport = new TSocket("my.org.hiveserver2.com", 10000);
    transport.setTimeout(999999999);
    TBinaryProtocol protocol = new TBinaryProtocol(transport);
    TCLIService.Client client = new TCLIService.Client(protocol);
    transport.open();
    TOpenSessionReq openReq = new TOpenSessionReq();
    TOpenSessionResp openResp = client.OpenSession(openReq);
    TSessionHandle sessHandle = openResp.getSessionHandle();
    TExecuteStatementReq execReq = new TExecuteStatementReq(sessHandle, "SELECT * FROM testhivedrivertable");
    TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
    TOperationHandle stmtHandle = execResp.getOperationHandle();
    TFetchResultsReq fetchReq = new TFetchResultsReq(stmtHandle, TFetchOrientation.FETCH_FIRST, 1);
    TFetchResultsResp resultsResp = client.FetchResults(fetchReq);
    TRowSet resultsSet = resultsResp.getResults();
    List<TRow> resultRows = resultsSet.getRows();
    for (TRow resultRow : resultRows) {
        resultRow.toString();
    }
    TCloseOperationReq closeReq = new TCloseOperationReq();
    closeReq.setOperationHandle(stmtHandle);
    client.CloseOperation(closeReq);
    TCloseSessionReq closeConnectionReq = new TCloseSessionReq(sessHandle);
    client.CloseSession(closeConnectionReq);
    transport.close();
}
Here is the error log:
Exception in thread "main" org.apache.thrift.protocol.TProtocolException: Required field 'operationHandle' is unset! Struct:TFetchResultsReq(operationHandle:null, orientation:FETCH_FIRST, maxRows:1)
at org.apache.hive.service.cli.thrift.TFetchResultsReq.validate(TFetchResultsReq.java:465)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args.validate(TCLIService.java:12607)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args$FetchResults_argsStandardScheme.write(TCLIService.java:12664)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args$FetchResults_argsStandardScheme.write(TCLIService.java:12633)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args.write(TCLIService.java:12584)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
at org.apache.hive.service.cli.thrift.TCLIService$Client.send_FetchResults(TCLIService.java:487)
at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:479)
at HiveJDBCServer1.main(HiveJDBCServer1.java:26)
Are you really sure you set the operationHandle field to a valid value? The Thrift error indicates exactly what it says: the API expects a certain field (operationHandle in your case) to be set, and it has not been assigned a value. And your stack trace confirms this:
Struct:TFetchResultsReq(operationHandle:null, orientation:FETCH_FIRST,
maxRows:1)
In case anyone finds this, like I did, by googling that error message: I had a similar problem with a PHP Thrift library for HiveServer2. At least in my case, execResp.getOperationHandle() returned NULL because there was an error in the executed request that generated execResp. This didn't throw an exception for some reason, and I had to examine execResp in detail, and specifically check its status, before attempting to get an operation handle.
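For the Java code above, the equivalent guard is a minimal sketch like the following, using the standard Thrift-generated status accessors (verify the exact names against your Hive version):

TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
TStatus execStatus = execResp.getStatus();
// A failed statement can leave the operation handle null without throwing,
// so check the status before building the fetch request.
if (execStatus.getStatusCode() == TStatusCode.SUCCESS_STATUS
        || execStatus.getStatusCode() == TStatusCode.SUCCESS_WITH_INFO_STATUS) {
    TOperationHandle stmtHandle = execResp.getOperationHandle();
    TFetchResultsReq fetchReq = new TFetchResultsReq(stmtHandle, TFetchOrientation.FETCH_FIRST, 1);
    TFetchResultsResp resultsResp = client.FetchResults(fetchReq);
} else {
    System.err.println("Statement failed: " + execStatus.getErrorMessage());
}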