Cassandra DevCenter fails to recognize "COPY" command - cassandra-2.0

I am trying to run the COPY command using DataStax DevCenter. I am using Cassandra 2.0.11 and DevCenter 1.2.1:
COPY myks.table1 FROM 'D:/test.csv' WITH DELIMITER = ',';
But DevCenter fails to recognize the COPY command and keeps giving me an error. I can create keyspaces and tables and insert values. Any ideas will be appreciated. There is another thread here, but it also has no answer.

DevCenter does not support the COPY, DESCRIBE, SHOW, or SOURCE commands; those are cqlsh-specific commands, not CQL statements. Use cqlsh instead.
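If installing cqlsh is not an option, one workaround is to turn the CSV into plain INSERT statements, which DevCenter can execute. A minimal Python sketch of that idea (the table and column names here are hypothetical, and every value is quoted as a string; adjust for numeric columns):

```python
import csv
import io

def csv_to_inserts(csv_text, table, columns):
    """Convert comma-delimited rows into CQL INSERT statements."""
    statements = []
    for row in csv.reader(io.StringIO(csv_text)):
        # Quote each value as a CQL string literal, escaping embedded quotes.
        values = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
        statements.append(
            "INSERT INTO %s (%s) VALUES (%s);" % (table, ", ".join(columns), values)
        )
    return statements

# Example with two made-up columns:
for stmt in csv_to_inserts("1,alice\n2,bob\n", "myks.table1", ["id", "name"]):
    print(stmt)
```

This is only practical for small files; for anything large, cqlsh's COPY is the right tool.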

Related

Flyway windows command line gives no output, at all

I have been using Flyway for a while and have successfully executed migrations. For the past week or so, however, every Flyway command completes successfully but produces no output at all.
This is not such a big issue for the migrate command, since I can check the flyway_schema_history table in the database to see what has been done. It is quite frustrating with the info command, however, which I previously used to preview what was about to happen.
I've tried running the command in as many ways as I can think of, and I've now tried three versions of Flyway.
I am currently using flyway-9.7.0 on a Windows 10 computer, executing my migration scripts against an Oracle 19c database.
I am executing these commands:
flyway migrate -url=jdbc:oracle:thin:@//<URL>:<PORT>/<DB_NAME> -user=<DB_USER> -password=<Password>
-locations=filesystem:C:\flyway-9.7.0\migrations -outputFile=C:\flyway-9.7.0\migrations\out.log
flyway info -url=jdbc:oracle:thin:@//<URL>:<PORT>/<DB_NAME> -user=<DB_USER> -password=<Password>
-outputFile=C:\flyway-9.7.0\migrations\out.log
Both of these commands execute successfully (apparently).
For the migrate command, the database objects are created, an entry is added to the schema's flyway_schema_history table, and the log file is created as specified (but is empty).
For the info command, the log file is again created (empty).
Nothing at all is written to the Windows CMD window I am executing the command in!
Please, what can I do to see some feedback from my commands?

How to use the DBInstanceID from ROS to query the database URLs with the Aliyun CLI

I am trying to work with Alibaba Cloud from the command line using the Aliyun CLI and the Aliyun ROS CLI.
When I create a stack from a ROS template, the outputs I get are DBInstanceStatus and DBInstanceID.
How can I use the DBInstanceID from ROS to query the database URLs with the Aliyun CLI?
One more question: can we back up and restore the database with Aliyun CLI commands? If yes, how?
Thanks,
You cannot query the MongoDB connection information with the ROS CLI. You should use the MongoDB API instead:
DescribeReplicaSetRole or DescribeShardingNetworkAddress
The related Python SDK is aliyun-python-sdk-dds:
pip install aliyun-python-sdk-dds
https://github.com/aliyun/aliyun-openapi-python-sdk/tree/master/aliyun-python-sdk-dds

PuTTY "unknown option -o" when trying to connect

Following the getting-started guide, I attempt to create and connect to a Datalab VM instance with the command:
datalab create demo
but I get a PuTTY error pop-up ("unknown option -o"). Then, on OK-ing the error, the command prompt shows:
connection broken
Attempting to reconnect...
Any idea how to have the keys generated a different way to allow me to connect?
As a workaround, you can try either running the datalab connect demo command from inside of Cloud Shell, or downgrading to version 153.0.0 of the Cloud SDK.
As for your error, this seems to be a newly introduced bug in the 154.0.0 release of the Cloud SDK.
Prior to that, running a command like gcloud compute ssh --ssh-flag=-o --ssh-flag=LogLevel=info demo would have resulted in the "-o LogLevel=info" flags being stripped out of the command before it ran on Windows.
With the most recent release (154.0.0), however, those flags are now passed to the SSH command as-is. This causes an error on Windows, as the PuTTY CLI does not support the -o flag.
I've filed https://github.com/googledatalab/datalab/issues/1356 to track fixing this issue.
Sorry that you got hit by this.
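The stripping behavior described above amounts to filtering each -o flag and its paired value out of the argument list before handing it to PuTTY. A rough Python sketch of that kind of filtering (an illustration of the idea, not the Cloud SDK's actual code):

```python
def strip_o_flags(args):
    """Remove each '-o' and the option value that follows it from an ssh arg list."""
    out = []
    skip = False
    for arg in args:
        if skip:
            skip = False      # drop the value paired with the preceding -o
            continue
        if arg == "-o":
            skip = True       # drop -o itself and mark its value for removal
            continue
        out.append(arg)
    return out

print(strip_o_flags(["-o", "LogLevel=info", "-p", "22", "demo"]))
# → ['-p', '22', 'demo']
```

With release 154.0.0 this filtering step no longer happens, so PuTTY sees the -o flag it does not understand.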

Running Cassandra on Mac OS X

I am trying to run Cassandra on my mac.
I installed it following the steps detailed here: http://www.datastax.com/docs/1.0/getting_started/install_singlenode_root
but when I run:
bin/nodetool ring -h localhost
I get the following error message:
Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
How can I make Cassandra work?
Many thanks
You are using ancient docs. On a recent version of Cassandra, run the command like this:
bin/nodetool -h localhost ring (see http://www.datastax.com/documentation/cassandra/2.1/cassandra/tools/toolsRing.html)
If you are using vnodes (the default), nodetool status gives easier-to-read output.
Please use these docs, or the docs that match your installation; I doubt you installed Cassandra 1.0. Check the installation instructions that match the version you downloaded.
CORRECTION: on 2.0.10 the nodetool ring command worked for me with the options in either position:
bin/nodetool -h localhost ring
bin/nodetool ring -h localhost
and also with --h instead of -h.
It is a known JDK bug, but it is not going to stop you from running Cassandra.
What you can do is set the JAVA_HOME variable explicitly.
It will not fix the bug, but it might suppress the message.
This is a problem with the JDK version, so do the following:
unset JAVA_HOME in your terminal;
edit nodetool and point its JAVA variable at a JDK older than JDK 7, e.g.:
JAVA=/Library/Java/JavaVirtualMachines/jdk1.6.0_xx.jdk/Contents/Home/bin/java
then run nodetool; you should be able to proceed without any issue.

set hive.cli.print.current.db and hive.cli.print.header not working

I have tried all the changes suggested in other replies, like creating a .hiverc file with set hive.cli.print.current.db=true; and set hive.cli.print.header=true;. It did not work.
I also tried placing it in /etc/hive/conf. That did not work either.
FYI I am using a cloudera training environment.
Thanks in advance!!
If you are using Hive in remote mode (CliDriver connecting to HiveServer), these options probably won't work.
You can try using the CLI in local mode, or Beeline with HiveServer2.