MongoDB: How to find out the data directory using the Java driver

I am using an instance of MongoDB with just one node. I would like to write a web service that fsyncs the data files and zips them into a backup folder.
Ideally, I would get the location of the data directory programmatically (rather than reading a config file) so I can easily port this from a development to a production machine, where the installation paths differ. Is there any way to do this using the Java driver?

Try running use admin followed by db.runCommand({getCmdLineOpts: 1}), as outlined here, and then inspecting the returned data.
Example return data:
{
    "argv" : [
        "mongod",
        "--port",
        "6669",
        "--dbpath=c:\\data\\mongo2",
        "--rest"
    ],
    "parsed" : {
        "dbpath" : "c:\\data\\mongo2",
        "port" : 6669,
        "rest" : true
    },
    "ok" : 1
}
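The same command can be issued from the Java driver by running it against the admin database. Here is a minimal sketch using the 2.x driver API (host, port and class name are placeholders; adjust for your setup):

import com.mongodb.BasicDBObject;
import com.mongodb.CommandResult;
import com.mongodb.DB;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class DbPathLookup {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the single-node instance.
        MongoClient mongo = new MongoClient("localhost", 27017);
        try {
            // getCmdLineOpts must be run against the admin database.
            DB admin = mongo.getDB("admin");
            CommandResult result = admin.command(new BasicDBObject("getCmdLineOpts", 1));
            DBObject parsed = (DBObject) result.get("parsed");
            // Older servers report the path as parsed.dbpath (as in the sample output above);
            // newer releases may nest it under parsed.storage instead.
            Object dbpath = parsed.get("dbpath");
            System.out.println("dbpath = " + dbpath);
        } finally {
            mongo.close();
        }
    }
}

Note that if dbpath was not passed on the command line, it may be missing from "parsed" and the server is using its default or a config-file value, so you may still need a fallback.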

You could use mongoexport to get the data; run it from the production machine and specify the host/port/collection of the development machine. The data can be imported to the production machine using mongoimport.
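For example (a sketch; host names, database and collection names are placeholders):

# Run on the production machine: pull a collection from the development instance...
mongoexport --host dev-host --port 27017 --db mydb --collection mycoll --out mycoll.json

# ...then load it into the local (production) instance.
mongoimport --host localhost --port 27017 --db mydb --collection mycoll --file mycoll.json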

Related

Issues with loading Maxmind Data into Clickhouse Database using a local file

I'm trying to load MaxMind data into a ClickHouse dictionary, defining its source as a local file on the machine I'm running my client from.
To define the dictionary I use the query:
CREATE DICTIONARY usage_analytics.city_locations(
    geoname_id UInt64 DEFAULT 0,
    ...
    ...
    ...
    ...
)
PRIMARY KEY geoname_id
SOURCE(File(path '/home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv' format 'CSVWithNames'))
SETTINGS(format_csv_allow_single_quotes = 0)
LAYOUT(HASHED())
LIFETIME(300);
yet I keep getting hit with this error:
Failed to load dictionary 'usage_analytics.city_locations': std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in canonical: No such file or directory [\home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv] [/],
According to the documentation, I have to use its absolute path, which I did by using readlink, and it still cannot find my file. I am running a ClickHouse client from a remote machine and have the files on that remote machine. Am I supposed to keep my files elsewhere, or what?
It looks like the file is not accessible; to fix it you need to set the right ownership on the file:
chown clickhouse:clickhouse /home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv
# chown -R clickhouse:clickhouse /home/ubuntu/maxmind_csv
An XML-defined dictionary allows reading files from any folder; a dictionary created via SQL DDL does not.
https://clickhouse.tech/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources/#dicts-external_dicts_dict_sources-local_file
When a dictionary with source FILE is created via a DDL command (CREATE DICTIONARY ...), the source file needs to be located in the user_files directory, to prevent DB users from accessing arbitrary files on the ClickHouse node.
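So a way forward (a sketch, assuming the default user_files location configured by user_files_path in config.xml, shown below) is to move the CSV under user_files, fix its ownership, and point the dictionary source at the new path:

# Copy the CSV into the user_files directory and hand it to the clickhouse user.
# Both the directory and the ownership are assumptions based on a default install.
cp /home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv /var/lib/clickhouse/user_files/
chown clickhouse:clickhouse /var/lib/clickhouse/user_files/GeoLite2-City-Locations-en.csv

Then recreate the dictionary with path '/var/lib/clickhouse/user_files/GeoLite2-City-Locations-en.csv' in the SOURCE(File(...)) clause.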
/etc/clickhouse-server/config.xml
<!-- Directory with user provided files that are accessible by 'file' table function. -->
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>

Create database by manifest when deploying with Jelastic Packaging Standard

I'm trying to write a manifest for JPS deployment of a Jelastic application.
Creating nodes and deploying webapps works fine, but I can't create a database and load an SQL dump into it using the manifest directives.
My configs section looks like this:
"configs": [
    {
        "nodeType": "postgres9",
        "restart": false,
        "database": [{
            "name": "somedbname",
            "user": "someusername",
            "dump": "http://www.somehost.de/jelastic/somedump.sql"
        }]
    },
    ...
]
...
It seems that the database section is completely ignored.
Any ideas?
Most likely you have extra square brackets around the database object definition, i.e. you should have "database" : { ... } instead of "database" : [{ ... }].
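Applied to the snippet above, the entry would look roughly like this (the values are the ones from the question):

"configs": [
    {
        "nodeType": "postgres9",
        "restart": false,
        "database": {
            "name": "somedbname",
            "user": "someusername",
            "dump": "http://www.somehost.de/jelastic/somedump.sql"
        }
    },
    ...
]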
I can also suggest reviewing the example from Cyclos. Their idea is to download an executable bash script that is started by cron and does everything required to set the database up, including adding a new user, extensions, etc.
Best regards.

Cannot load index to elasticsearch from external file, using logstash

I am running one instance of Elasticsearch and one of Logstash in parallel on the same computer.
When trying to load a file into Elasticsearch using Logstash with the config file below, I get the following output messages from Elasticsearch and no file is loaded (when the input is configured to be stdin, everything seems to work just fine).
Any ideas?
Elasticsearch log output:
[2014-06-17 22:42:24,748][INFO ][cluster.service ] [Masked Marvel] removed {[logstash-Eitan-PC-5928-2010][Ql5fyvEGQyO96R9NIeP32g][Eitan-PC][inet[Eitan-PC/10.0.0.5:9301]]{client=true, data=false},}, reason: zen-disco-node_failed([logstash-Eitan-PC-5928-2010][Ql5fyvEGQyO96R9NIeP32g][Eitan-PC][inet[Eitan-PC/10.0.0.5:9301]]{client=true, data=false}), reason transport disconnected (with verified connect)
[2014-06-17 22:43:00,686][INFO ][cluster.service ] [Masked Marvel] added {[logstash-Eitan-PC-5292-4014][m0Tg-fcmTHW9aP6zHeUqTA][Eitan-PC][inet[/10.0.0.5:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-Eitan-PC-5292-4014][m0Tg-fcmTHW9aP6zHeUqTA][Eitan-PC][inet[/10.0.0.5:9301]]{client=true, data=false}])
config file:
input {
  file {
    path => "c:\testLog.txt"
  }
}
output {
  elasticsearch {
    host => localhost
    index => amat1
  }
}
When you use "elasticsearch" as your output (http://logstash.net/docs/1.4.1/outputs/elasticsearch), as opposed to "elasticsearch_http" (http://logstash.net/docs/1.4.1/outputs/elasticsearch_http), you are going to want to set "protocol".
The reason is that it can have three different values, "node", "http" or "transport", with different behavior for each, and the default selection is not well documented.
From the look of your log files it appears Logstash is trying to use the "node" protocol: I see connection attempts on port 9301, which indicates (along with other log entries) that Logstash is trying to join the cluster as a node. This can fail for any number of reasons, including a mismatch on the cluster name.
I'd suggest setting protocol to "http"; that change has fixed similar issues before.
See also:
http://logstash.net/docs/1.4.1/outputs/elasticsearch#cluster
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
EDIT:
A few other issues I see in your config:
Your host and index should be strings, which in a Logstash config file should be wrapped in double quotes: "localhost" and "amat1". Leaving the quotes off may work, but quotes are recommended.
http://logstash.net/docs/1.4.1/configuration#string
If you don't use "http" as the protocol or don't use "elasticsearch_http" as the output, you should set cluster to your ES cluster name (as Logstash will be trying to become a node of the cluster).
You should set start_position under file in the input to "beginning". Otherwise it will default to reading from the end of the file and you won't see any data. This is a particular problem on Windows right now, as the other way of tracking position within a file, sincedb, is broken on Windows:
https://logstash.jira.com/browse/LOGSTASH-1587
http://logstash.net/docs/1.4.1/inputs/file#start_position
You should change the path to your log file to "C:/testLog.txt". Logstash prefers forward slashes and upper-case drive letters under Windows.
https://logstash.jira.com/browse/LOGSTASH-430
A config with all of these changes applied is sketched below.
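A minimal corrected config, assuming a default single-node Elasticsearch on localhost (if you stay with the "node" or "transport" protocol you would also need cluster set to your cluster name):

input {
  file {
    path => "C:/testLog.txt"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "amat1"
  }
}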

qooxdoo: Building scss using scss.py

I tried to compile SCSS to CSS with scss.py.
I finally found that I had to create the folder structure qooxdoo-3.0-sdk/tool/pylib/scss/sass/frameworks and copy qooxdoo-3.0-sdk/framework/source/resource/qx/mobile/scss/* into it.
Do I have to add some path reference?
"compile-css" :
{
  "let" :
  {
    "SCSS_CMD" : "${PYTHON_CMD} ${QOOXDOO_PATH}/tool/bin/scss.py"
  },
  "shell" :
  {
    "command" :
    [
      "${SCSS_CMD} --output=${QOOXDOO_PATH}/framework/source/qx/mobile/css/ios.css ${QOOXDOO_PATH}/framework/source/resource/qx/mobile/scss/ios.scss"
    ]
  }
}
I'm not sure what you mean by "path reference". Normally you won't have to create the sass/frameworks directory path and copy files manually; you would be working and generating files in your app directories only.
Can you provide more context on what you are trying to achieve? :)
I assume you created a mobile app (./qooxdoo-3.0-sdk/create-application.py -n myApp -t mobile). This already provides you with a watch job for SCSS (watch-scss) [1] in your config.json, so there you can see how we use tool/bin/scss.py [2]. This is also covered in the dedicated manual page, which you might have found already [3].
[1] http://manual.qooxdoo.org/3.0/pages/tool/generator/default_jobs_actions.html#watch-scss
[2] https://github.com/qooxdoo/qooxdoo/blob/master/component/skeleton/mobile/config.tmpl.json
[3] http://manual.qooxdoo.org/3.0/pages/mobile/theming.html

Puppet server and client working fine but the manifest file still doesn't get executed

I am currently working on Puppet using Amazon Fedora EC2 instances. Both the Puppet server and client are working fine. I am able to create a certificate from the client and the server is able to sign it, but whatever code I have written in the manifest files still doesn't get executed.
Below is my code in the Site.pp file:
class test_class {
  file { "/tmp/testfile":
    ensure => present,
    mode   => 644,
    owner  => root,
    group  => root
  }
}
node puppetclient {
  include test_class
}
Here, puppetclient is the hostname of the client. But even after signing the certificate, /tmp/testfile doesn't get created.
DNS is also working perfectly fine; I can ping the Puppet server (named puppet) from the Puppet client.
Can you please tell me what the possible mistake might be?
It's probably just a typo in the question, but the default site manifest is 'site.pp', not 'Site.pp', so try it with 'site.pp' instead.
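A quick way to check (a sketch; /etc/puppet/manifests is the usual default manifest directory on older Puppet releases, so adjust the path if your master is configured differently):

# On the master: give the manifest the expected lower-case name.
mv /etc/puppet/manifests/Site.pp /etc/puppet/manifests/site.pp

# On the client: trigger an immediate agent run instead of waiting for the run interval.
puppet agent --test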
