I am trying to install Elasticsearch 2.x through RPM. When I run
rpm -ivh elasticsearch-2.1.0.rpm it says
error: elasticsearch-2.1.0.rpm: Header V4 RSA/SHA1 signature: BAD, key ID d88e42b4
error: elasticsearch-2.1.0.rpm cannot be installed
I tried to sign the RPM with
rpm --define="%_gpg_name " --addsign elasticsearch-2.1.0.rpm
When it asks for the passphrase I tried both my email and d88e42b4, but it says Pass phrase check failed.
Any help would be greatly appreciated.
Thanks
I landed in the same boat today, and after digging around for a while I found that Elastic no longer supports RHEL < 6.
Have a look at the Elastic matrix at https://www.elastic.co/support/matrix.
Also refer to the thread https://github.com/elastic/elasticsearch/issues/10843.
The question is nearly a year old, but I'm answering it in the hope that a fellow passerby might benefit.
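If you are on RHEL/CentOS 6 or newer and hit the same BAD-signature error, it is usually enough to import Elastic's public signing key (the one ending in d88e42b4) before installing. A minimal sketch, using the key URL documented for the 2.x packages:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
rpm -K elasticsearch-2.1.0.rpm   # should now report the signature as OK
rpm -ivh elasticsearch-2.1.0.rpm

On RHEL 5 and older the bundled rpm cannot verify this V4 RSA signature at all (see the GitHub issue above), which is why the install fails there regardless.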
Hello, I was following this tutorial: https://docs.elrond.com/developers/tutorials/your-first-dapp/
With the help of: https://www.youtube.com/watch?v=IdkgvlK3rb8
But I think there are some differences between the dApp repository and the tutorial: first, src/config.devnet.tsx has disappeared and we now have a src/config.tsx already present (not a big deal).
I'm blocked when I try to do the ping; in the console I get the error Sender not allowed with value erd1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq6gq4hu.
So my guess is that I've done something wrong deploying the contract, but even when I tried to redeploy other contracts I always ended up with the same error.
I tried natively on my Ubuntu 20.04, and then in a devcontainer using an Ubuntu 22.04 image.
I'm pretty new to Elrond, crypto (and also Node), so I might be missing something.
Thanks for your help!
I've just completed the tutorial, and I think you put the wrong SC address in src/config.tsx. The address you provided is the SC that handles other SC deployments; you have to replace it with your own SC's address, generated after deployment.
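For example, in the current dapp template the address lives in src/config.tsx as a contractAddress constant (the exact constant name may differ between template versions, and the erd1... value below is just a placeholder for your own deployed address):

// src/config.tsx
// Replace the system/deployer address with the address returned by your own deployment
export const contractAddress = 'erd1...your-deployed-contract-address...';

You get that address from the output of the deploy transaction (it is also shown in the Devnet explorer for that transaction).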
We're using the odoo.sh platform with Odoo 14. The installed wkhtmltopdf is wkhtmltopdf_paas_wrapper 0.12.5; we can't upgrade to 0.12.6 because access is very limited and we can't use sudo to apt-install. To temporarily work around this, we decided to use the 0.12.5 version, but it returns "Unable to call host printing service (HTTPError)" even with the right arguments. I've already tried it on the staging and production servers, but still the same result. The ticket I've sent hasn't been replied to yet. This is so frustrating, I'm going bonkers... please help.
Here's a screenshot:
PS: the unrecognized-argument error was intentional, so I could display the available args. I've also crossed out the project domain. Thank you.
Apparently, to properly execute the binary, the name should not be "wkhtmltopdf" but "wkhtmltopdf.bin". I've overridden ir_actions_report.py to change the binary name. Here's the snippet of the original source code:
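The original snippet isn't reproduced here, but a minimal sketch of that kind of override could look like the following; the helper name _get_wkhtmltopdf_bin, the module path, and the monkey-patch approach are assumptions based on stock Odoo 14, not the actual source:

# custom_module/models/ir_actions_report.py
# Point report generation at "wkhtmltopdf.bin" instead of "wkhtmltopdf".
from odoo.addons.base.models import ir_actions_report
from odoo.tools.misc import find_in_path

def _get_wkhtmltopdf_bin():
    # odoo.sh ships the wrapper as "wkhtmltopdf.bin", so look that name up instead
    return find_in_path('wkhtmltopdf.bin')

# Replace the helper Odoo uses to locate the binary
ir_actions_report._get_wkhtmltopdf_bin = _get_wkhtmltopdf_bin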
They should've known better; it's a paid platform.
I am trying to run the upgrade assistant in order to upgrade Elasticsearch from 5 to 6, but I am hitting the following error:
Deprecated script security settings changes
This issue must be resolved to upgrade. Read Documentation
Details: [node[instance-0000000093] used these script-security settings:[script.inline, script.stored, script.file, script.engine.painless.stored, script.engine.painless.file, script.engine.expression.stored, script.engine.expression.file, script.engine.mustache.inline, script.engine.mustache.stored, script.engine.mustache.file]]
I have looked over the settings and our Kibana config and cannot find any references to any scripts being used.
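For reference, this is roughly how I have been looking for them; the path and host below are from a default self-managed install, so adjust for a hosted deployment:

# Any script.* entries in the node configuration?
grep -n "script\." /etc/elasticsearch/elasticsearch.yml

# Any script.* entries in persistent/transient cluster settings?
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'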
Has anyone else run into a similar issue and how did you solve it?
Thanks.
I'm new to Nutch 1.10 and trying to learn how to crawl with it, using Elasticsearch as my indexer. Not sure why, but I cannot get this crawl command to work:
bin/crawl -i --elastic -D elastic.server.url=http://localhost:9200/elastic/ urls elasticTestCrawl 1
UPDATE: just used
bin/crawl -i -D elastic.server.url=http://localhost:9200/elastic/ urls/ elasticTestCrawl/ 2
Almost successfully; I received the following error when it came to the indexing part of the command:
Error running:
/home/david/apache-nutch-1.10/bin/nutch clean -Delastic.server.url=http://localhost:9200/elastic/ elasticTestCrawl//crawldb
Failed with exit value 255.
What is exit value 255 for Nutch 1.x? And why does the space get deleted between "-D" and "elastic..."?
I have these Elasticsearch properties from here in my nutch-site.xml file:
If someone can point me to the error of my ways, that would be great!
Update
I just posted my own answer below; it's the second one. I had already accepted the first answer months ago when I initially got it working. My answer is simply clearer and more concise, to make it easier (and quicker) to get started with Nutch.
Unfortunately I can't tell you where you're going wrong, as I'm in the same boat, although from what I can see you are running Nutch and Elasticsearch on the same box, whereas I've split it across two.
I haven't got it to work, but according to a guide I found on integrating Nutch 1.7 with Elasticsearch, it should just be
bin/crawl urls/ TestCrawl -depth 3 -topN 5
It may just be that it isn't working for me because I've added the extra complication of networking.
I also assume you have created an index called elasticTestIndex in your Elasticsearch instance and launched it on the box before trying to run your crawl?
Should it be of help, the guide I got that command from is
https://www.mind-it.info/integrating-nutch-1-7-elasticsearch/
Update:
I'm not sure I'm quite there yet, but using your update I've got further than I had.
You are putting in port 9200, which is the web administration port, but you need to use port 9300 to interact with the service, so change the port to 9300.
I'm not sure, but I think the portion after the slash refers to the index, so in your example make sure you have "elastic" set up as an index, or change
blah (low rep score so I can't put in too many URLs) blah localhost:9300/[index name]/
so that it uses an index you have created. If you haven't created one, you can do so from PuTTY with the following command.
curl -XPUT 'http://localhost:9200/[index name]/'
Using the command you supplied with the alternative port, it did run, although I've yet to extract the crawl data from Elasticsearch.
Supplemental Update:
It's successfully dumping data crawled by Nutch into Elasticsearch for me, and having put a different index in on the command line, I can tell you it ignores that and uses whatever is in your nutch-site.xml.
To help anyone else get it working
Start off by reading this blog post to help you get Elasticsearch configured to work with Nutch.
After that, read this Nutch doc to get familiar with the new CLI command for running the crawl script (works for 1.9+).
Follow the example in the new Nutch crawl script command on that page. You have to change it a bit for Elasticsearch:
solr.server.url=http://localhost:8983/solr/ to something like
elastic.server.url=http://localhost:9300/yourelasticindex/
So basically there are two steps:
1. Configure Elasticsearch to work with Nutch (click on the first link above).
2. Change the new CLI command for Solr to work with Elasticsearch (its default is Solr).
Hope that helps!
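For completeness, the indexer-elastic plugin reads its connection details from nutch-site.xml; a typical set of properties looks roughly like this (the host, port, cluster and index values below are examples, adjust them to your setup):

<property>
  <name>elastic.host</name>
  <value>localhost</value>
</property>
<property>
  <name>elastic.port</name>
  <value>9300</value>
</property>
<property>
  <name>elastic.cluster</name>
  <value>elasticsearch</value>
</property>
<property>
  <name>elastic.index</name>
  <value>elasticTestIndex</value>
</property>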
I've been using Twilio's library just fine locally (Mac OS X with XAMPP), but when I upload it to an Amazon EC2 instance, the ability to send SMS messages breaks.
$sms = $client->account->sms_messages->create(
"xxx-xxx-xxxx", $users[pnumber], "Testing!");
(the x's are numbers)
The above code seems to be what breaks it. I have uploaded the Twilio library to the correct directory. I have also tried enabling all permissions to see if it was a permission issue.
I'm rather inexperienced at running things on my own server. Any guidance, guesses, and tips would be appreciated!
Edit: Clarification - by "breaks", I mean the rest of the page does not load. If I add echo "Hi";, it will not be printed. However, echoing before the code above works.
The problem was that I had not installed cURL on my server yet; it was not included in my PHP installation. Thanks to Kevin Burke's advice, I ran it on the command line and realized that it was calling a non-existent function. Some googling led to me installing cURL, which fixed the problem. Thanks Kevin!
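For anyone else debugging this, a quick sanity check (just a sketch) to confirm the cURL extension is actually available to PHP before blaming the Twilio library:

<?php
// The Twilio PHP library depends on the cURL extension.
if (!function_exists('curl_init')) {
    die('The cURL extension is not installed/enabled for this PHP installation.');
}
$info = curl_version();
echo 'cURL is available, version ' . $info['version'] . PHP_EOL;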