I am new to Cassandra and am trying some simple operations, such as inserting data. I am using the cassandra gem for this:
require 'cassandra'
client = Cassandra.new('tags_logs', 'ec2-xxx-xxx-xxx.com:9160')
client.disable_node_auto_discovery!
client.get('tag_data','red')
And I get the following error:
ThriftClient::NoServersAvailable - No live servers in ...
I'm running this code from my local machine. I have no problem connecting with cassandra-cli (so it is not a firewall issue), but this code refuses to work, even though it runs perfectly against a Cassandra instance on my own local machine.
Any ideas?
Thanks,
Eden.
I recommend using this gem I'm developing: https://github.com/hsgubert/cassandra_migrations
It gives access to Cassandra through CQL3 and manages schema with migrations.
Note: it requires Rails.
For future generations: simply increase the connection timeout:
client = Cassandra.new('tags_logs', 'ec2-example-example-example.com:9160', :connect_timeout => 10000)
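Going a bit further, the options hash is passed through to the underlying thrift_client gem, which also accepts a per-request timeout and a retry count. A hedged sketch (option names taken from thrift_client's defaults; verify against your gem version, and note this still needs a reachable server to run):

```ruby
require 'cassandra'

# :connect_timeout, :timeout, and :retries are thrift_client options
# passed through by the cassandra gem.
client = Cassandra.new('tags_logs', 'ec2-example-example-example.com:9160',
                       :connect_timeout => 10_000,  # allow a slow first connect
                       :timeout         => 10_000,  # per-request timeout
                       :retries         => 3)       # retry before giving up

begin
  client.get('tag_data', 'red')
rescue ThriftClient::NoServersAvailable => e
  warn "Still no live servers: #{e.message}"
end
```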
I have a local CockroachDB cluster up and running, following the instructions at https://www.cockroachlabs.com/docs/stable/start-a-local-cluster.html
I am trying to run the TPC-C benchmark following the instructions at https://www.cockroachlabs.com/docs/stable/performance-benchmarking-with-tpc-c.html
It looks like the TPC-C binary workload.LATEST assumes the cluster is on Google Cloud, and so it reports the following error:
$ ./workload.LATEST fixtures load tpcc --warehouses=1000 "postgres://root@localhost:26257?sslmode=disable"
Error: failed to create google cloud client (You may need to setup the GCS application default credentials: 'gcloud auth application-default login --project=cockroach-shared'): dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
What can I change to run the benchmark?
If you upgrade to v2.1, workload is a built-in command that you can run against your cluster, and it makes no Google Cloud assumptions: https://www.cockroachlabs.com/docs/stable/cockroach-workload.html
It's not nearly as fast as using the fixtures stored in Google Cloud, but you can load the data into your cluster using normal SQL statements by running something like:
cockroach workload init tpcc --warehouses=1000
Note that while I'm not sure exactly how long it will take to load 1000 warehouses in this way locally, I expect it will take quite some time.
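Concretely, the two steps with the built-in command look roughly like this (connection URL and flags assumed from the linked docs; adjust host, port, and duration to your cluster):

```shell
# Load the TPC-C dataset over ordinary SQL (slow for 1000 warehouses).
cockroach workload init tpcc --warehouses=1000 \
  'postgresql://root@localhost:26257?sslmode=disable'

# Then run the benchmark against the same cluster.
cockroach workload run tpcc --warehouses=1000 --duration=10m \
  'postgresql://root@localhost:26257?sslmode=disable'
```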
I tried to put together a Horizon app with an externally hosted RethinkDB and I couldn't seem to get it to work with existing tools. I understand Horizon includes a server-side API component, which may be why.
I want to be able to directly insert and/or update documents in my RethinkDB from an external server, and have those updates be pushed to subscribed browsers. Is this possible and/or wise?
Preferably this would not involve my Horizon express server at all. I would prefer to not have to expose my own API to do this.
This is totally possible as long as the RethinkDB instance is visible to the service pushing data into it. You'd then just connect to RethinkDB via a standard driver connection in your language of choice. A simple example in Python would look like this:
import rethinkdb as r
conn = r.connect('localhost', 28015)
r.db("horizon_project_name").table("things").insert({'text': 'Hello, World!'}).run(conn)
Then when you start Horizon, you'll want to make sure to use the --connect flag and provide the hostname and port of that same RethinkDB instance.
An example, if RethinkDB is running on the same machine as Horizon:
hz serve --connect localhost:28015
In Horizon, you'd be able to listen to these messages like so in the browser:
const horizon = Horizon();
horizon('things').subscribe((result) => {
  // `result` is the entire collection as an array
  console.log("result!", result);
});
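On the server side, you can also watch the same table's changefeed to confirm that external writes are being pushed. A minimal sketch with the Python driver (database and table names assumed to match the insert above; needs a running RethinkDB):

```python
import rethinkdb as r

conn = r.connect('localhost', 28015)

# Every insert/update on the table arrives here as a document with
# 'old_val' and 'new_val' keys; Horizon's browser subscriptions are
# built on these same changefeeds.
for change in r.db('horizon_project_name').table('things').changes().run(conn):
    print(change['new_val'])
```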
If you need further help with this, feel free to tweet me @dalanmiller or create a new topic on discuss.horizon.io!
I am able to create a successful connection to a remote mongodb server and database. When I try and insert the document into a collection I get the following error:
Unable to connect to server xxx.xxx.x.xx:28017: Attempted to read past the end of the stream.
after a short lag. I am not sure what the issue is, given that the connection itself seems fine.
server = New MongoClient("mongodb://admin:password@xxx.xxx.x.xx:28017/").GetServer
db = server.GetDatabase("TestDB")
mongoC = db.GetCollection("TestCpo")
Have you made sure that Mongo is up and running? Open a command line, navigate to the bin directory of your Mongo installation, and type "mongod". This starts the Mongo server; it needs to be running before you can do anything with it.
It's also useful to add that bin directory to your PATH environment variable so it's easier to start.
Another thing you could do is use
Process.Start(@"C:\[Directory of the mongo installation]\bin\mongod.exe");
(this is probably more useful if you want to start Mongo under test conditions)
I need to store stock market data and stock graph data for my site, so I was choosing between two databases, MongoDB and Cassandra, and settled on Cassandra. I am using Windows 7 with XAMPP, and I need to use Cassandra from Yii. I installed DataStax; it works in the CLI and also at localhost:8888. So now, how can I connect to it from my Yii site?
Is this requirement solvable? I need the steps to connect, because different sites give different solutions.
Thanks in advance
You need a client to connect your PHP framework to Cassandra:
PDO driver - https://code.google.com/a/apache-extras.org/p/cassandra-pdo/
A popular php driver - https://github.com/thobbs/phpcassa
The PDO driver also supports CQL, which is good if you are just getting started.
Note: that project appears to be dead, but there is a GitHub fork of it with recent updates: https://github.com/Orange-OpenSource/YACassandraPDO
phpcassa has lots of examples on its GitHub page showing how to get started. One thing to be aware of: the examples list super columns; try to stay away from super columns, as they are somewhat deprecated.
The final option is to put your Cassandra server behind a web service that can be queried (for example via REST); this gives you access to the more powerful Java drivers, although the PHP drivers above should do the job.
I'm pretty sure the answer is "no" but I thought I'd check.
Background:
I have some legacy data in Access, need to get it into MySQL, which will be the DB server for a Ruby application that uses this legacy data.
Data has to be processed and transformed. Access and MySQL schemas are totally different. I want to write a rake task in Ruby to do the migration.
I'm planning to use the techniques outlined in this blog post: Using Ruby and ADO to Work with Access Databases. But I could use a different technique if it solves the problem.
I'm comfortable working on Unix-like computers, such as Macs. I avoid working in Windows because it fills me with deep existential horror.
Is there a practical way that I can write and run my rake task on my Mac and have it reach across the network to the grunting Mordor that is my Windows box and delicately pluck the data out like a team of commandos rescuing a group of hostages? Or do I have to just write this and run it on Windows?
Why don't you export it from MS-Access into Excel or CSV files and then import it into a separate MySQL database? Then you can rake the new one to your heart's content.
Mac ODBC drivers that open Access databases are available for about $30.00; http://www.actualtechnologies.com/product_access.php is one. I just run Access inside VMware on my Mac and export to CSV/Excel as CodeSlave mentioned.
ODBC might be handy in case you want to use the Access database for a more direct transfer.
Hope that helps.
I had a similar issue where I wanted to use Ruby with SQL Server. The best solution I found was using JRuby with the Java JDBC drivers. I'm guessing this will work with Access as well, but I don't know anything about Access.
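A hedged sketch of that JRuby route, using the UCanAccess JDBC driver (a pure-Java Access driver, so it runs on a Mac too; the jar names, file path, and table/column names below are all assumptions for illustration):

```ruby
# Run under JRuby with the UCanAccess jars on the classpath, e.g.:
#   jruby -J-cp 'ucanaccess.jar:lib/*' migrate.rb
require 'java'

# Hypothetical path to the legacy database on a mounted network share.
url  = 'jdbc:ucanaccess:///Volumes/winbox/legacy.mdb'
conn = java.sql.DriverManager.get_connection(url)

stmt = conn.create_statement
rs   = stmt.execute_query('SELECT * FROM customers')  # example table
while rs.next
  puts rs.get_string('name')  # transform and write to MySQL here
end
conn.close
```

Because the driver reads the .mdb file directly, the Windows box only has to expose the file over the network, not run anything.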