The plain command doesn't work, even though the cartridge command does - tarantool

I installed tarantool cartridge following the docs:
tarantoolctl rocks install cartridge-cli
export PATH=$PWD/.rocks/bin/:$PATH
Now, according to the docs I should be able to use the plain and the cartridge commands. But I'm not able to use the plain command. There's no plain script in .rocks/bin.

I suppose you are trying to create a Cartridge application using the plain template, according to this page: https://www.tarantool.io/en/doc/2.2/book/cartridge/cartridge_dev/#application-templates
Unfortunately, this part of our doc is slightly outdated. There are no built-in templates anymore, although you can still create your own. The right command will look like
cartridge create --template templates/plain ~/tmp/
See the latest Cartridge CLI documentation on GitHub: https://github.com/tarantool/cartridge-cli#usage
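For instance, a minimal sketch of that workflow (the template location and the application name here are assumptions, not something the docs prescribe):

# keep your own template as a plain directory of files
mkdir -p templates/plain
# ...put your template files there, then point cartridge at it;
# the last argument is the directory where the application is created
cartridge create --name myapp --template templates/plain ~/tmp/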

UPD: the plain command really was present in the docs.
This is a mistake in the docs, which will be fixed. Only the cartridge command is available.

Related

Clickhouse can't authenticate in MongoDB nor passing empty credentials

Hi everyone. I've been following the progress of this issue on GitHub, and I believe all is OK now. I just need you to tell me what to do in my deployments. I've just installed Clickhouse 21.8.9, and I'm trying to run some tests in order to extract data from MongoDB and fill an AggregatingMergeTree table in Clickhouse. I've been reading a lot of tech doc about Clickhouse's possibilities, so I know this is not the only way to accomplish what I want to do. But it's a valid way, so I want to test it. My Clickhouse installation comes from downloaded DEB files (I'm using Ubuntu 20.04 on my laptop). According to some changes I saw in the Clickhouse repo on GitHub, it seems I might have to re-compile Clickhouse; is that correct? What do you advise me to do? Thanks in advance.
PS: I've just tried with MongoDB 3.6 and MongoDB 4.0; the outcome is the same: it doesn't admit an empty username, and it cannot authenticate if I use credentials.
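For context, the kind of table I'm testing looks roughly like this sketch (the host, database, collection, and credentials are placeholders):

CREATE TABLE mongo_source
(
    key String,
    value UInt64
)
ENGINE = MongoDB('localhost:27017', 'testdb', 'events', 'testuser', 'secret');

-- reading from the table is where authentication fails
SELECT count() FROM mongo_source;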

Using Nutch and Elasticsearch with indexer-elastic-rest

I've used Nutch and Elasticsearch many times before; however, I believe I was using the default setup, where Nutch used the binary transport method for communicating with Elasticsearch. It was simple and worked out of the box, so I've used it a lot.
I've been in the process of updating my crawl system, and it seems the better option now is to use the Jest REST API library.
However, I'm a bit confused about it...
First, how do I install the Jest library for use with Nutch and Elasticsearch? I know I can download or clone it via GitHub, but how is it connected?
Do I literally just update the dependencies in the /indexer-elastic-rest *.xml files for Nutch and then just build again with ant?
My first install of Nutch was from the binary zip. I only recently started using the src package, so ant/maven is somewhat new to me, which is why this is all a bit confusing. All the blogs and articles just say "and then rebuild with ant"...
Second - does the Jest library take care of all the Java REST API code, or do I have to write Java code now?
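To make the first question concrete, here is what I'm guessing the change looks like, assuming a Nutch 1.x source checkout (the plugin list below is illustrative; you'd keep whatever else you already enable):

<!-- conf/nutch-site.xml: swap the old indexer for the REST-based one -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-elastic-rest</value>
</property>

and then, as the articles say, rebuild so that ivy pulls in the plugin's declared dependencies (including Jest):

ant runtime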

Spring-shell along with spring-boot NullPointerException

I am trying to build a simple app using Spring Boot and Spring Shell. My pom.xml is like this:
I am getting the below error when trying to connect to the shell using PuTTY.
I am not able to get any pointers.
I downloaded PuTTY for Windows. I try connecting with "user" as the username, the password shown in the console, and port 2000. The logs say authentication was successful, but I still get a NullPointerException.
I am using JDK 1.8 only, not the JRE.
There are many open issues in this area right now. One of them is the same issue as yours, but specific to macOS. I do not have Windows, but perhaps you can reproduce it the same way and comment on that Jira issue.
You can check it here:
https://jira.exoplatform.org/projects/CRASH/issues/CRASH-252?filter=allopenissues
If you want to keep using it, you'd better go back to an older version :)
Good luck!
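"An older version" in pom.xml terms would look roughly like the sketch below; the Boot version is only an example, so check the jira for the versions reported to work:

<!-- pin an older Spring Boot parent; 1.2.8.RELEASE is illustrative -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.2.8.RELEASE</version>
</parent>

<!-- the remote shell (CRaSH) starter that provides the ssh shell on port 2000 -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-remote-shell</artifactId>
</dependency>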

How do I use my Sinatra-powered Ruby application to scrape data to a Heroku PostgreSQL database

I successfully pushed my Sinatra-powered Ruby app to Heroku.
One of the files I pushed is a Ruby script which scrapes the web and puts data into a PostgreSQL database (that's the non-Sinatra one).
I set up a PostgreSQL add-on for the Heroku app, but I haven't gotten further than that.
What I'm trying to figure out is how I'd edit the scraping script (which uses the Sequel gem) to add the data it scrapes to the Heroku PostgreSQL add-on database.
I took a look at this tutorial on it, but I got stuck on the first step. I'm afraid I don't understand the command-prompt syntax they listed.
Furthermore, when I tried to follow their alternate instructions using PGAdmin III, I ran into another problem. The Heroku tutorial says:
As an alternative, you can also create a dump file using the PGAdmin GUI tool. To do so, select a database from the Object Browser and click Tools > Backup. Set the filename to data.dump, use the “COMPRESS” format, and (under “Dump Options #1”) choose not to save Privilege or Tablespace.
The problem here is I see no "COMPRESS" format in PGAdmin. Instead, I just save the file "data.dump" as an "All files" type without any formatting.
I'm not sure if this is correct, and if it is, what exactly I need to do next.
Can anyone confirm that I'm on the right path, and if so, what specifically I must do next?
EDIT: For clarification, I'm trying to get my scraping script to add its scraping data to the Heroku app's PostgreSQL database. Right now, it's still written as if it were on my local machine, scraping to my local PostgreSQL database.
It looks like you can run
heroku pg:credentials DATABASE --app your-app-name
where "DATABASE" is literally the word "DATABASE". Once you have the credentials, configure your script to access that database.

Running gem server in passenger

I'm running a few Rails/Rake apps under Apache/Passenger, and I want to add the documentation app served by gem server alongside them, so I can easily give it a special (sub)domain, like docs.example.org. That way it's easily available to all members of our team, and nobody has to start the server himself or remember port numbers (like 8808, the default gem server port).
I would recommend looking into bdoc instead of gem server; it allows the user to access all their gem docs without a server running at all. It would also be trivial to modify bdoc to output to a specific directory, and then you could easily add a step to regenerate the docs.
The nice thing about having them in static files is that the Apache config is dead simple.
If you do want to make bdoc output to a specific dir, look at this line.
Edit:
I actually went ahead and branched this on GitHub and made the change. Now you can supply the output directory on the command line and it will generate the static RDoc pages for you.
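For completeness, the Apache side really is dead simple once the pages are static; a sketch assuming bdoc wrote its output to /var/www/gemdocs (the path and hostname are placeholders):

<VirtualHost *:80>
    ServerName docs.example.org
    DocumentRoot /var/www/gemdocs
</VirtualHost>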
I'm running http://gems.local on my machine in case I want to do some Ruby hacking offline (plane journeys, trains, etc.).
This is really easy, you can actually run passenger with all the Ruby gems' documentation locally without having to access the net.
I was following Jason's tips and got everything working. See the following article and you should be ready to go:
http://jasonseifer.com/2009/02/22/offline-gem-server-rdocs
Attila
I wrote a blog post on how I keep my gems, Ruby, Rails and jQuery docs locally, using the yard server and nginx for proxying on Mac OS X. The steps for Linux are almost the same; the only thing that changes is the way you configure the daemons.
https://makarius.posterous.com/offline-rails-ruby-jquery-and-gems-docs-with
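The proxy part of that post boils down to something like this sketch, with yard server --gems listening on its default port 8808 (the hostname is a placeholder):

server {
    listen 80;
    server_name docs.example.org;

    location / {
        proxy_pass http://127.0.0.1:8808;
    }
}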
