Elasticsearch search_phase_execution_exception Reason: "all shards failed"

The Elasticsearch query is as follows:
var searchResponse = client.Search<ItemSearch>(s => s
    .Query(q => q
        .MultiMatch(mm => mm
            .Query(searchQuery)
            .Type(TextQueryType.BestFields)
            .Fields(f => f
                .Field(p => p.Description).Field(p => p.Comment).Field(p => p.CommentSmall)
                .Field(p => p.DisplaySequence).Field(p => p.ImageUrl).Field(p => p.ItemBrandDescription)
                .Field(p => p.ItemBrandSequence).Field(p => p.ItemCode)
                .Field(p => p.ItemGroupID).Field(p => p.ItemGroupSpecification).Field(p => p.ItemMaximumOrderAmount)
                .Field(p => p.ItemMinimumOrderAmount).Field(p => p.ItemSpecsDescription).Field(p => p.ItemSupplierCode)
                .Field(p => p.PackSize).Field(p => p.PriceUnit).Field(p => p.Sequence).Field(p => p.Stock).Field(p => p.StockToday).Field(p => p.StockTomorrow)
                .Field(p => p.SupplierCode).Field(p => p.UOM).Field(p => p.UOMTypeID).Field(p => p.UOMTypeDescription)
            )
        )
    ));
I am getting the following error. Can anyone please help me out?
ServerError:ServerError: 400Type: search_phase_execution_exception Reason: "all shards failed"
DebugInformation:Invalid NEST response built from a unsuccessful low level call on POST: /myindex2-solvi/itemsearch/_search
# Audit trail of this API call:
- [1] BadResponse: Node: http://localhost:9200/ Took: 00:00:00.2664112
# ServerError: ServerError: 400Type: search_phase_execution_exception Reason: "all shards failed"
# OriginalException: System.Net.WebException: The remote server returned an error: (400) Bad Request.
at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData)
# Request:
<Request stream not captured or already read to completion by serializer. Set DisableDirectStreaming() on ConnectionSettings to force it to be set on the response.>
# Response:
<Response stream not captured or already read to completion by serializer. Set DisableDirectStreaming() on ConnectionSettings to force it to be set on the response.>
IsValid:False
OriginalException:System.Net.WebException: The remote server returned an error: (400) Bad Request.
at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData)
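
The debug output above already suggests the first step: set DisableDirectStreaming() on ConnectionSettings so the request body and the server's actual error are captured. A common cause of "all shards failed" with a query like this is running a text multi_match over numeric fields (Stock, DisplaySequence, and so on), which makes every shard fail with a parse error. A minimal sketch, assuming the index name from the question and that Description, Comment, and ItemBrandDescription are the text fields:
// Capture raw request/response so DebugInformation shows the real server error
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DefaultIndex("myindex2-solvi")
    .DisableDirectStreaming();
var client = new ElasticClient(settings);

// Hypothetical fix: query only text fields, and/or use Lenient() so
// format errors on non-text fields are ignored instead of failing the shard
var searchResponse = client.Search<ItemSearch>(s => s
    .Query(q => q
        .MultiMatch(mm => mm
            .Query(searchQuery)
            .Type(TextQueryType.BestFields)
            .Lenient()
            .Fields(f => f
                .Field(p => p.Description)
                .Field(p => p.Comment)
                .Field(p => p.ItemBrandDescription)))));

Console.WriteLine(searchResponse.DebugInformation);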

Solution if you have multiple websites (this applies to Magento's Elasticsearch integration):
Go to System > Store > Configuration > Catalog > Catalog Search > Elasticsearch Index Prefix.
Change the prefix for each store.
Save the settings, clear the cache, and run the index again.
Another official solution: based on the bug filed in my previous reply, I modified the following file to fix the search problem.
./vendor/magento/module-elasticsearch/Model/Adapter/FieldMapper/Product/FieldProvider/FieldType/Converter.php
private const ES_DATA_TYPE_DOUBLE = 'double';
--> private const ES_DATA_TYPE_FLOAT = 'float';

self::INTERNAL_DATA_TYPE_FLOAT => self::ES_DATA_TYPE_DOUBLE,
--> self::INTERNAL_DATA_TYPE_FLOAT => self::ES_DATA_TYPE_FLOAT,

Related

"Invalid NEST response built from a unsuccessful () low level call on POST"

The following code works most of the time but sometimes it throws an exception with this message:
Invalid NEST response built from a unsuccessful () low level call on POST: /queries2020-09/_search?typed_keys=true
var response = await client.SearchAsync<LogEntry>(s => s
    .Query(q => q
        .Bool(b => b
            .Must(
                m => m.DateRange(r => r
                    .Field(l => l.DateTimeUTC)
                    .GreaterThanOrEquals(new DateMathExpression(since))),
                m => m.Term(term))))
    .Aggregations(a => a
        .Sum("total-cost", descriptor => descriptor
            .Field(f => f.Cost)
            .Missing(1)))
    .Size(0));

if (!response.IsValid)
{
    throw new Exception("Elasticsearch response error. " + response.ToString());
}
This seems to be a very generic message that pops up a lot on Q&A websites. How do I debug it to see the root cause?
Using NEST 7.6.1.
It may be better to write out the debug information rather than calling .ToString():
if (!response.IsValid)
{
    throw new Exception("Elasticsearch response error. " + response.DebugInformation);
}
The debug information includes the audit trail and details about an error/exception, if there is one. It's a convenient way of collecting the pertinent information available on IResponse in a human-readable form.
If a response is always checked for validity and an exception thrown, you may want to set ThrowExceptions() on ConnectionSettings to throw when an error occurs.
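A minimal sketch of the two settings together (the URL and index name are taken from the questions above; the rest is stock NEST 7.x):
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DisableDirectStreaming() // keep raw request/response bytes for the audit trail
    .ThrowExceptions();       // surface failures as exceptions instead of IsValid == false
var client = new ElasticClient(settings);

try
{
    var response = await client.SearchAsync<LogEntry>(s => s
        .Index("queries2020-09")
        .Size(0));
}
catch (ElasticsearchClientException e)
{
    Console.WriteLine(e.DebugInformation); // same details as response.DebugInformation
}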

Unable to use Filter_Path with NEST - ElasticSearch 5.0

When using Filter_Path to reduce the metadata returned by Elasticsearch, I get the following error:
: 'Method not found: 'Elasticsearch.Net.SearchRequestParameters Elasticsearch.Net.SearchRequestParameters.FilterPath(System.String)'.'
var response = client.Search<dynamic>(s => s
    .FilterPath("hits.hits._source")
    .Index(Indices.Index(indices))
    .Query(q => q
        .Bool(b => b
            .Must(mustQueries.ToArray())
            .Must(shouldQueries.ToArray())))
    .Source(o => source)
    .Size(10000)
    .AllTypes());
Any ideas how to use it?
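A runtime "Method not found" error in .NET usually means the loaded Elasticsearch.Net assembly is older than the NEST assembly that was compiled against it, so a likely first step is aligning the NEST and Elasticsearch.Net package versions. A minimal sketch of the call once they match (the index name here is hypothetical):
// Assumes NEST and Elasticsearch.Net are the same 5.x version: the
// SearchRequestParameters.FilterPath method named in the error lives in
// Elasticsearch.Net, so a mismatched assembly can't resolve it at runtime.
var response = client.Search<dynamic>(s => s
    .FilterPath("hits.hits._source") // return only the _source of each hit
    .Index("my-index")
    .MatchAll());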

Error with elasticsearch_http Logstash

I'm trying to run this:
input {
  twitter {
    # add your data
    consumer_key => "shhhhh"
    consumer_secret => "shhhhh"
    oauth_token => "shhhhh"
    oauth_token_secret => "shhhhh"
    keywords => ["words"]
    full_tweet => true
  }
}
output {
  elasticsearch_http {
    host => "shhhhh"
    index => "idx_ls"
    index_type => "tweet_ls"
  }
}
This is the error I got:
Sending Logstash's logs to /usr/local/Cellar/logstash/5.2.1/libexec/logs which is now configured via log4j2.properties
[2017-02-24T04:48:03,060][ERROR][logstash.plugins.registry] Problems loading a plugin with {:type=>"output", :name=>"elasticsearch_http", :path=>"logstash/outputs/elasticsearch_http", :error_message=>"NameError", :error_class=>NameError, :error_backtrace=>["/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:221:in `namespace_lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:157:in `legacy_lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:133:in `lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:175:in `lookup_pipeline_plugin'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugin.rb:129:in `lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/pipeline.rb:452:in `plugin'", "(eval):12:in `initialize'", "org/jruby/RubyKernel.java:1079:in `eval'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/pipeline.rb:98:in `initialize'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/agent.rb:246:in `create_pipeline'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/agent.rb:95:in `register_pipeline'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/runner.rb:264:in `execute'", "/usr/local/Cellar/logstash/5.2.1/libexec/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/runner.rb:183:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/lib/bootstrap/environment.rb:71:in `(root)'"]}
[2017-02-24T04:48:03,073][ERROR][logstash.agent ] fetched an invalid config {:config=>"input { \n twitter {\n # add your data\n consumer_key => \"shhhhh\"\n consumer_secret => \"Shhhhhh\"\n oauth_token => \"shhhh\"\n oauth_token_secret => \"shhhhh\"\n keywords => [\"word\"]\n full_tweet => true\n }\n}\noutput { \n elasticsearch_http {\n host => \"shhhhh.amazonaws.com\"\n index => \"idx_ls\"\n index_type => \"tweet_ls\"\n }\n}\n", :reason=>"Couldn't find any output plugin named 'elasticsearch_http'. Are you sure this is correct? Trying to load the elasticsearch_http output plugin resulted in this error: Problems loading the requested plugin named elasticsearch_http of type output. Error: NameError NameError"}
I've tried installing elasticsearch_http, but it doesn't seem to be a package. I've also tried
logstash-plugin install logstash-input-elasticsearch
and
logstash-plugin install logstash-output-elasticsearch
which did install, but I got the same error.
I'm totally new to Logstash, so this might be very simple.
I am trying to follow this guide: https://www.rittmanmead.com/blog/2015/08/three-easy-ways-to-stream-twitter-data-into-elasticsearch/
I tried Val's answer and got this:
[2017-02-24T05:12:45,385][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x4c2332e0 URL:http://shhhhh:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://sshhhhhh:9200/][Manticore::ConnectTimeout] connect timed out"}
I can go to the URL and get a response in the browser, and I have the permissions set open, so I'm not sure what the issue would be.
The elasticsearch_http output no longer exists. You need to use the elasticsearch output instead:
elasticsearch {
  hosts => "localhost:9200"
  index => "idx_ls"
  document_type => "tweet_ls"
}
Just an addition to @Val's answer: you can also give the hosts parameter without the port:
output {
  elasticsearch {
    index => "idx_ls"
    document_type => "tweet_ls"
    hosts => "localhost"
  }
}
By default, ES runs on port 9200, so you don't have to set it explicitly.

How to update all fields in MailChimp API batch subscribe using Ruby and Gibbon

I am using Ruby 1.9.3 without Rails and version 1.0.4 of the Gibbon gem.
I have referrals populated with my list and can send the following to MailChimp with Gibbon. However, only the email address and email type fields are populated in the list in MailChimp. What am I doing wrong that prevents the merge fields from being imported via the API?
Here is the batch and map of the list.
referrals.each_slice(3) do |batch|
  begin
    prepared_batch = batch.map do |referral|
      {
        :EMAIL => {:email => referral['client_email']},
        :EMAIL_TYPE => 'html',
        :MMERGE6 => referral['field_1'],
        :MMERGE7 => referral['field_2'],
        :MMERGE8 => referral['field_3'],
        :MMERGE9 => referral['field_4'],
        :MMERGE11 => referral['field_5'],
        :MMERGE12 => referral['field_6'],
        :MMERGE13 => referral['field_7'],
        :MMERGE14 => referral['field_8'],
        :MMERGE15 => referral['field_9'],
        :FNAME => referral['client_first_name']
      }
    end
    @log.info("prepared_batch : #{prepared_batch}")
    result = @gibbon.lists.batch_subscribe(
      :id => @mc_list_id,
      :batch => prepared_batch,
      :double_optin => false,
      :update_existing => true
    )
    @log.info("#{result}")
  rescue Exception => e
    @log.warn("Unable to load batch into mailchimp because #{e.message}")
  end
end
The above executes successfully. However, only the email address and email type are populated, even though most of the other fields have values.
Here is my log output for one of the prepared_batches. I replaced the real values with Value. I used my own email for testing.
I, [2013-11-11T09:01:14.778907 #70827] INFO -- : prepared_batch : [{:EMAIL=>
{:email=>"jason+6@marketingscience.co"}, :EMAIL_TYPE=>"html", :MMERGE6=>"Value",
:MMERGE7=>"Value", :MMERGE8=>nil, :MMERGE9=>nil, :MMERGE11=>"8/6/13 0:00",
:MMERGE12=>"Value", :MMERGE13=>nil, :MMERGE14=>"10/18/13 19:09", :MMERGE15=>"Value",
:FNAME=>"Value"}, {:EMAIL=>{:email=>"jason+7@marketingscience.co"}, :EMAIL_TYPE=>"html",
:MMERGE6=>"Value", :MMERGE7=>"Value", :MMERGE8=>nil, :MMERGE9=>nil, :MMERGE11=>"8/6/13 0:00",
:MMERGE12=>"Value", :MMERGE13=>nil, :MMERGE14=>nil, :MMERGE15=>"Value",
:FNAME=>"Value"}, {:EMAIL=>{:email=>"jason+8@marketingscience.co"}, :EMAIL_TYPE=>"html",
:MMERGE6=>"Value", :MMERGE7=>"Value", :MMERGE8=>nil, :MMERGE9=>nil, :MMERGE11=>"8/7/13 0:00",
:MMERGE12=>"Value", :MMERGE13=>nil, :MMERGE14=>nil, :MMERGE15=>"Value", :FNAME=>"Value"}]
Here is the log output of the result from the MailChimp call.
I, [2013-11-11T09:01:14.778691 #70827] INFO -- : {"add_count"=>3, "adds"=>
[{"email"=>"jason+3@marketingscience.co", "euid"=>"ab512177b4", "leid"=>"54637465"},
{"email"=>"jason+4@marketingscience.co", "euid"=>"eeb8388524", "leid"=>"54637469"},
{"email"=>"jason+5@marketingscience.co", "euid"=>"7dbc84cb75", "leid"=>"54637473"}],
"update_count"=>0, "updates"=>[], "error_count"=>0, "errors"=>[]}
Any advice on how to get all the fields to update in MailChimp is appreciated. Thanks.
It turns out the documentation for using the Gibbon gem to batch subscribe is not correct. You need to add a :merge_vars struct to contain the fields other than email and email type. My final code looks like the following. I'm also posting this code in its entirety at: https://gist.github.com/analyticsPierce/7434085.
referrals.each_slice(3) do |batch|
  begin
    prepared_batch = batch.map do |referral|
      {
        :EMAIL => {:email => referral['email']},
        :EMAIL_TYPE => 'html',
        :merge_vars => {
          :MMERGE6 => referral['field_1'],
          :MMERGE7 => referral['field_2'],
          :MMERGE8 => referral['field_3'],
          :MMERGE9 => referral['field_4'],
          :MMERGE11 => referral['field_5'],
          :MMERGE12 => referral['field_6'],
          :MMERGE13 => referral['field_7'],
          :MMERGE14 => referral['field_8'],
          :MMERGE15 => referral['field_9'],
          :FNAME => referral['first_name']
        }
      }
    end
    @log.info("prepared_batch : #{prepared_batch}")
    result = @gibbon.lists.batch_subscribe(
      :id => @mc_list_id,
      :batch => prepared_batch,
      :double_optin => false,
      :update_existing => true
    )
    @log.info("#{result}")
  rescue Exception => e
    @log.warn("Unable to load batch into mailchimp because #{e.message}")
  end
end

Ruby real-time Google Analytics API

I am trying to get activeVisitors with the google-api-ruby-client. The client is listed in the real-time Google Analytics API docs here, but I see nothing in the docs about using it for the real-time API.
I see the discovered_api function, but I see no list of possible parameters for the API name.
Example for Regular Analytics API:
# Get the analytics API
analytics = client.discovered_api('analytics','v3')
Does anyone know how to use this client to get real-time active visitors?
Here is the code I am trying to use:
require 'google/api_client'
require 'date'

# Update these to match your own app's credentials
service_account_email = 'xxxxxxxxxxxxx@developer.gserviceaccount.com' # Email of service account
key_file = '/path/to/key/privatekey.p12' # File containing your private key
key_secret = 'notasecret' # Password to unlock private key
profileID = '111111111' # Analytics profile ID.

# Get the Google API client
client = Google::APIClient.new(:application_name => '[YOUR APPLICATION NAME]',
                               :application_version => '0.01')

# Load your credentials for the service account
key = Google::APIClient::KeyUtils.load_from_pkcs12(key_file, key_secret)
client.authorization = Signet::OAuth2::Client.new(
  :token_credential_uri => 'https://accounts.google.com/o/oauth2/token',
  :audience => 'https://accounts.google.com/o/oauth2/token',
  :scope => 'https://www.googleapis.com/auth/analytics.readonly',
  :issuer => service_account_email,
  :signing_key => key)

# Start the scheduler
SCHEDULER.every '1m', :first_in => 0 do
  # Request a token for our service account
  client.authorization.fetch_access_token!

  # Get the analytics API
  analytics = client.discovered_api('analytics', 'v3')

  # Execute the query
  visitCount = client.execute(:api_method => analytics.data.ga.get, :parameters => {
    'ids' => "ga:" + profileID,
    'metrics' => "ga:activeVisitors",
  })

  # Update the dashboard
  send_event('current_visitors', { current: visitCount.data.rows[0][0] })
end
Error returned:
Missing required parameters: end-date, start-date.
Assuming that the Ruby client lib uses the discovery service and the method is actually available, instead of:
visitCount = client.execute(:api_method => analytics.data.ga.get, :parameters => {
  'ids' => "ga:" + profileID,
  'metrics' => "ga:activeVisitors",
})
try this (change ga to realtime in the api_method):
visitCount = client.execute(:api_method => analytics.data.realtime.get, :parameters => {
  'ids' => "ga:" + profileID,
  'metrics' => "ga:activeVisitors",
})
If you're a member of the Real-time reporting product forum, this post may be helpful - https://groups.google.com/forum/m/#!topic/google-analytics-realtime-api/zgAsKFBenV8
You might try:
analytics = client.discovered_api('realtime', 'v3')
Or 'real-time', or without the 'v3'.
If that works, update your get method too.
Wish I could be more help, but there is absolutely no documentation on this.
