CouchDB Performance: 1.6.1 vs 2.1.1

We are looking at upgrading CouchDB on our RHEL servers from 1.6.1 to 2.1.1. Before we do that, though, we wanted to run a performance test, so we created a JMeter test that goes directly against the database. It does not use any random values, so each run is exactly the same and the two sets of results are comparable. This is just a standalone server; we are not using clustering. I ran the tests the exact same way for both versions: first against 1.6.1, then against 2.1.1 installed on the same machine, creating the database fresh for each test run. [I also updated Erlang to R19.3.]
The results were shocking:
Average response times:
1.6.1: 271.15 ms
2.1.1: 494.32 ms
POSTs and PUTs were really bad ...
POST:
1.6.1: 38.25 ms
2.1.1: 250.18 ms
PUT:
1.6.1: 37.33 ms
2.1.1: 358.76 ms
We are just using the default values for all the config options, except that we changed 1.6.1 to have delayed_commits = false (that is now the default in 2.1.1). I'm wondering if there's some default that changed that would make 2.1.1 so bad.
When I ran the CouchDB setup from the Fauxton UI, it added the following to my local.ini:
[cluster]
n = 1
Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all? One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.

Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all?
It's not obvious from your description. If you set up CouchDB 2.0 as clustered, then that's how it will work. This is something you should know based on the setup instructions you followed: http://docs.couchdb.org/en/2.1.1/install/setup.html
You can tell by locating the files on disk and seeing if they are in a shards directory or not.
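For example (the data directory below is a guess at the RHEL default; check database_dir in your config):

    # a clustered 2.x layout keeps database files under shards/<range>/
    ls /var/lib/couchdb/shards/
    # a non-sharded 1.x-style layout has flat .couch files instead
    ls /var/lib/couchdb/*.couch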
I'm pretty sure you want at least two, so setting n = 1 doesn't seem like something you should be doing.
If you're trying to run a single node, follow the instructions I linked above to do that.
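For what it's worth, even a single-node 2.x install splits each database into q shard files (the default in 2.1 is q = 8), so every write touches several files where 1.6 touched one. A hedged local.ini sketch for a standalone box (q only applies to databases created after the change):

    [cluster]
    ; one copy of each document, since there is only one node
    n = 1
    ; one shard per database, matching 1.6's single-file layout
    q = 1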
One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.
You probably don't want to delete random parts of your database unless there are instructions saying this is OK.
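That said, if you do need it back, _global_changes is an ordinary database and can be recreated with a single PUT (the admin credentials below are placeholders):

    curl -X PUT http://admin:password@127.0.0.1:5984/_global_changes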

Related

phpLDAPadmin returning another tree instead of edit window

I just installed phpLDAPadmin on a test server and a production server. The instance on the test server tells me that it can't start TLS, but it works fine otherwise, so I'm ignoring the error. The problem with the installation on the production server can't be ignored: when I click on a user in the tree, instead of getting an edit window on the right, I get another copy of the tree.
[screenshot of duplicated tree]
It's also returning a warning that using curly braces with an array is deprecated, which isn't happening on the test server. I wouldn't expect a deprecation warning to cause this, though.
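For reference, that warning comes from PHP 7.4 deprecating curly-brace offset access, which phpLDAPadmin's older code still uses; the fix is mechanical (the variable name below is illustrative):

    // Deprecated as of PHP 7.4:
    $first = $entry{0};
    // Equivalent, non-deprecated form:
    $first = $entry[0];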
Other than the fact that I screwed things up five times on the test server before getting them right, the test server and the production server should be the same.
I'm on RHEL 7.9 with phpLDAPadmin 1.2.5 and PHP 7.4.30. (Yes, I know I should be on RHEL 9 and using Identity Management but I need to buy some time until I can fit that upgrade into my schedule.)
What could be causing this tree duplication?

Simple Local Database Solution for Ruby?

I'm attempting to write a simple Ruby/Nokogiri scraper to get event information from multiple pages and then output it to a CSV that is attached to an email sent out weekly.
I have completed the scraping and CSV components, and they're working perfectly. However, I now realize that I need to know when new events are added, which means I need some sort of database. Ideally I would just store this locally.
I've dabbled a bit with the Ruby gem 'sequel', but the data does not seem to persist beyond the run of the program. Do I need to download some database software to work with 'sequel'? Also, I'm not using the Rails framework, just Ruby.
Any and all guidance is deeply appreciated!
I'm guessing you did Sequel.sqlite, as in the first example in the Sequel README, which creates an in-memory SQLite database. To create a database in your filesystem instead of memory, just pass it a path, e.g.:
Sequel.sqlite("./my-database.db")
This is, of course, assuming that you have the sqlite3 gem installed. If the given file doesn't exist, it will be created.
This is covered in the Sequel docs.
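Putting it together, a minimal sketch of persisting scraped events to a local SQLite file (the table and column names are invented for illustration; insert_conflict requires a reasonably recent Sequel):

    require "date"
    require "sequel"

    # Opens ./events.db, creating the file on first run.
    DB = Sequel.sqlite("./events.db")

    # Create the table only if it doesn't exist yet, so reruns are safe.
    DB.create_table? :events do
      primary_key :id
      String :title
      String :url, unique: true   # unique key used to spot already-seen events
      Date   :starts_on
    end

    events = DB[:events]

    # insert_conflict (INSERT OR IGNORE on SQLite) skips rows whose :url
    # already exists, so only genuinely new events are stored each week.
    events.insert_conflict.insert(
      title:     "Example event",
      url:       "https://example.com/events/1",
      starts_on: Date.today
    )

    puts "#{events.count} events stored"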

mongo shell not showing all dbs

Good Day.
I've been developing with MeteorJS, which uses MongoDB. No problems there. I've been using the mongo shell to access the database on my dev machine (OS X 10.11). This is my first project with Mongo; when the shell loaded, it would connect to the test db, and I'd always run show dbs to get the list of databases, then use myApp.
Yesterday, whenever I went into the shell and typed show dbs, the only database shown was local 0.078GB. However, my app is still working, pulling and pushing data to the database.
I've checked the dbpath in mongod.conf and that seems OK. I'm not entirely sure about the exact order of things, but two things were different (I'm not sure whether these happened before show dbs stopped showing everything or after, and I'm not sure which came first):
1. When loading the mongo shell, I was getting this error:
WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
I followed these directions, which seemed to stop that error from appearing (https://github.com/basho/basho_docs/issues/1402).
2. I used Meteor Toys and, for the first time, updated user.profile.companyName (a custom field within the standard profile) from within the Meteor Toys widget.
It's just odd that the app can still access the database and collections, but the mongo shell doesn't show them. I've updated mongod via brew upgrade mongodb from 3.0.2 to 3.0.7, to no avail.
Any ideas?
If you want to use the regular mongo console, you have to specify port 3001 for Meteor apps instead of the default 27017. Otherwise, it's much simpler to just type meteor mongo and connect that way. Then you can type 'show collections' and it will show them all, just like normal.
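For example (assuming the app was started on Meteor's default port 3000):

    # Meteor's bundled mongod listens on the app port + 1
    mongo --port 3001
    # or, from the project directory, let Meteor figure it out
    meteor mongo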
MongoDB does not show a database unless it contains at least one collection with at least one document in it.
Refer to this link
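In other words, an empty database won't appear until something is written to it. A quick way to confirm from the shell (the collection name is a throwaway example; insert() is the 3.0-era API):

    use myApp
    db.placeholder.insert({ created: new Date() })
    show dbs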

Neo4j Cypher queries really slow after upgrade to 2.1.3

This morning, with some struggles (see: Upgrading a neo4j database from 2.0.1 to 2.1.3 fails), I upgraded my database from version 2.0.1 to 2.1.3. My main goal with the upgrade was to gain performance on certain queries (see: Cypher SORT performance).
Everything seems to be working, except for the fact that all Cypher queries - without exception - have become much, much, much slower. Queries that used to take 75ms now take nearly 2000ms.
As I was running on an A1 (1 CPU, ~2GB RAM) VM in Azure, I thought that giving Neo4j some more RAM and an extra core would help, but after upgrading to an A2 VM I get more or less the same results.
I'm now wondering: did I lose my indexes by taking a backup and upgrading/using that db? I have perhaps 50K nodes in my db, so it's not that spectacular, right?
I'm still running on an A2 VM (2 CPUs, ~4GB RAM), but have had to downgrade to 2.0.1 again.
UPDATE: #1 2014-08-12
After reading Michael's first comment on how to inspect my indexes using the shell, I did the following:
With my 2.0.1 database service running (and performing well), I executed Neo4jShell.bat and ran the schema command. This yielded the following response:
[schema output listing my indexes and constraints]
I uninstalled the 2.0.1 service using the Neo4jInstall.bat remove command.
I installed the 2.1.3 service using the Neo4jInstall.bat install command.
With my 2.1.3 database service running, I again executed Neo4jShell.bat and ran the schema command. This yielded the following response:
[schema output listing no indexes or constraints]
I think it is safe to conclude that either the migration process (in 2.1.3) or the backup process (in 2.0.1) removed the indexes from my database. This would explain why my backed-up database is much smaller (~110MB) than the online database (~380MB). After migration to 2.1.3, my database became even smaller (~90MB).
Question is now, is it just a matter of recreating my indexes and be done with it?
UPDATE: #2 2014-08-12
I guess I have answered my own question. After recreating the constraints and indexes, my queries perform like they used to (some even faster, as expected).
Eventually, it turned out that in the process of backing up my database (in version 2.0.1), or during the migration process at startup (in version 2.1.3), I lost my indexes and constraints. The obvious solution is to manually recreate them (http://docs.neo4j.org/chunked/stable/cypher-schema.html) and be on your way.
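For reference, the 2.x Cypher schema syntax for recreating them looks like this (the label and property names below are placeholders, not my actual schema):

    // recreate a plain index
    CREATE INDEX ON :Person(name);
    // recreate a uniqueness constraint (which is also backed by an index)
    CREATE CONSTRAINT ON (u:User) ASSERT u.email IS UNIQUE;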

Unable to get performance counters from certain servers when running load tests

I am running load tests using the built-in system in Visual Studio 2010. The setup is a test controller with four agents. The tests I am running put load on an application server and a database server. The problem I am having is that I am unable to get values from the performance counters on the application server and the database server. I have followed the instructions at http://msdn.microsoft.com/en-us/library/ms404661%28v=vs.100%29.aspx and http://blogs.catapultsystems.com/tlingenfelder/archive/2009/06/18/performance-counters-timeouts-and-load-testing-with-visual-studio-2008.aspx to troubleshoot, but to no avail.
Using Performance Monitor (perfmon) I can connect and get values from the performance counters on the application server and database, tested from several computers. But when running the load tests, I get nothing.
I am trying to get system performance counters like CPU% and memory usage, so there are no custom counters involved.
Any hints as to what I should do next?
The main problem was that I was not aware that, in order to get performance data from a computer during load testing, that computer needs to have a test agent (or controller) installed on it. What I did:
1. Install test agents and register them with the controller on all machines used for load testing (for me, the app server and the database server, which did not have test agents installed).
2. In the actual load test in Visual Studio, remove the app server and database from the counter sets.
3. Add them again.
4. Run the test!
It seems the old references to the app server and database in the load test did not work as expected, hence the need to remove and re-add them.
Voila! Performance counters appear and return values!
I also had the same problem.
The only solution I found was to remove some counters from the counter sets and to increase the sampling interval (see here).
Another thing I have in my rig is roles: agents run the tests but do not collect data; that is done by my web server (I installed a test agent on it). Please look at the link.
