Cannot see the logs on Kibana in the logs tab - elasticsearch

I am running my project, and after creating the index pattern and adding the log indices under the Settings tab, I apply the changes to save them. When I go to the Stream tab, for some reason I do not see anything in it. I then try using Postman to send a GET request so that messages load in the logs stream, but they do not. Is there a reason why I cannot see them?
These are the logs in the Discover section:
I then have the index applied in the settings, so it should populate:
But when I go to Stream it does not show up, even with live streaming on and when sending requests with Postman.

Related

Can't figure out why I get a 403

I am trying to set up the YouTube API, but I get a 403.
I have tried setting it up several times, without success.
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails&id=-DIJvggBrg8&key=xyz
Maybe someone is able to help me, or even log in to the console to do the setup?
The 403 error by itself is not of immediate use. Its attached error code (as I already pointed out above) sheds light on what actually happened.
The API responds to any query you make with text structured in JSON format. That response text contains the needed error code.
I suggest you proceed with the following steps:
delete the old API key (right now it is still accessible!);
create a new API key and
run your API query using the new key and then post here the API response text.
Note that I checked the video -DIJvggBrg8 to which your query refers with my own API key and got no error, just a JSON text describing the video.
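If it helps, here is a minimal Go sketch (any HTTP client works the same way) that runs the query and prints the raw response text so you can read the attached error code; the KEY placeholder stands in for your new API key:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder: substitute your own (new) API key for KEY.
	url := "https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails&id=-DIJvggBrg8&key=KEY"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// On a 403 the body is a JSON document whose error.errors[].reason
	// field names the concrete cause (e.g. quota or key restrictions).
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}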

How to debug HyperLedger Composer Transaction code in Playground

I am using a local install of Playground on macOS.
I successfully created my business network, added my model file and logic to this network, and created asset and participant instances.
So now I am ready to submit my first transaction, but I get an error message in the popup window in response to my request. The message per se is not the problem (it's about some Undefined asset); my problem is that I want to debug this transaction code by producing execution traces, using old-school printf or log messages.
I tried inserting console.log(message) instructions in my transaction code, but I was not able to retrieve those log traces (e.g. using a command like docker logs -f composer).
Is there another way to produce log traces? Or did I miss a config setting to unfilter logs in docker logs?
Any help greatly appreciated!
Olivier.
On console logging (and seeing the output in the browser developer console), see this Stack Overflow question: "(hyperledger composer playground) Can you see results of console.log('something') in browser?" (it also has a link to more info).
See here https://hyperledger.github.io/composer/latest/problems/diagnostics.html for more on logging / where to find debug logs.
As for setting breakpoints: these are set by the editor tooling 🙂 In Hyperledger Composer, you can use the embedded connector to try out your transaction processor (TP) functions and step through each breakpoint. For more info, see VS Code -> https://code.visualstudio.com/docs/editor/debugging and Atom -> "How do I set a breakpoint inside of atom's package?", and I posted the link to diagnostics/logging above.
One quick way I used to insert breakpoints with debug messages is to throw an exception using throw new Error(...) in the transaction method.
This shows up in the playground interface as well.

Zap stack traces vs. error messages on google cloud

I'm using zap to log error messages on a service hosted on Google Cloud, and I'm seeing that while errors are logged successfully, the text stored in the "message" field of the Google Cloud log is the stack trace, not the error message I have logged.
Example code:
log, _ := zap.NewProduction()
if err := doStuff(); err != nil {
	log.Error("<error message I want to log>", zap.Error(err))
}
This works well, except that Google Cloud Logging and Stackdriver use the stack trace captured by the call to zap.Error as the message field of the structured log. The message I've defined appears in the msg field, but the former is the one displayed prominently in the logging console and used by Stackdriver to index errors.
This means that when navigating logs and errors via the console, I only see stack traces and no indication of the associated error string.
The tricky thing is that I have no idea whether this issue is cloud-side or zap-side. I've spent some time digging around in zap to no avail, and am out of ideas.
Zap by default puts the message under the msg key and the stack trace under stacktrace, and prints log lines as JSON to stdout. You should be able to see this in action by just running your binary locally.
Your logging system presumably processes these log lines as they're printed. It will read them, parse them, and maybe do some restructuring or add some metadata, and then send them off somewhere else to be saved or processed more.
Since zap is in all likeliness working as intended, you need to look at the system that processes your logs. How does it expect them to look? Does it have special rules for any particular keys? Will it inject any keys of its own?
Note that you can configure zap to use different keys for all of its standard fields.
To log correctly and effectively on GCP, you first have to map Zap's keys onto the fields of Cloud Logging's LogEntry.
In case you are looking for a working example, I wrote a simple Zap config here:
https://github.com/uber-go/zap/discussions/1110#discussioncomment-2955566
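For reference, here is a minimal sketch of the idea, assuming the LogEntry special fields message, severity and timestamp; the key names and the newGCPLogger helper are illustrative, and the linked discussion has a fuller config:

package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// newGCPLogger remaps zap's default keys so that Cloud Logging picks the
// logged message and the severity out of each JSON line.
func newGCPLogger() (*zap.Logger, error) {
	cfg := zap.NewProductionConfig()
	cfg.EncoderConfig = zapcore.EncoderConfig{
		MessageKey:     "message",     // what the console displays as the message
		LevelKey:       "severity",    // LogEntry severity
		TimeKey:        "timestamp",
		StacktraceKey:  "stack_trace", // keep the trace out of the message field
		CallerKey:      "caller",
		EncodeLevel:    zapcore.CapitalLevelEncoder,
		EncodeTime:     zapcore.RFC3339NanoTimeEncoder,
		EncodeDuration: zapcore.SecondsDurationEncoder,
		EncodeCaller:   zapcore.ShortCallerEncoder,
	}
	return cfg.Build()
}

func main() {
	log, err := newGCPLogger()
	if err != nil {
		panic(err)
	}
	defer log.Sync()
	log.Error("the message I want to see in the console") // lands under "message"
}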

Kibana 4 'Discover' search error

I indexed a dataset of geo-data records in ElasticSearch for analysis in Kibana. My issue is that the 'Discover' tab doesn't pick up the data but instead displays the error message
Discover: An error occurred with your request. Reset your inputs and try again.
In 'Settings', I could configure my data index just fine, and Kibana is picking up all the mapping fields with correct type/analysis/indexing metadata. 'Visualize' works fine, too. I can create my charts, add them to the dashboard, drill down - everything. Just the 'Discover' tab is broken for me.
I'm running ElasticSearch 1.5.2, and tried with Kibana 4.0.1, 4.0.2 and 4.1-snapshot now (on Ubuntu 14.04), all with the same results.
Another effect I'm noticing: the sidebar is not showing any 'Available Fields'. Only if I unfold the field settings and untick 'Hide Missing Fields' do I get my list of schema fields. (These are greyed out as they are considered 'missing' by Kibana. But interestingly, clicking on 'Visualize' on one of them to chart their distribution works, again, perfectly fine.)
My only suspicion is: my data doesn't have a timestamp field, so maybe that's what's messing things up. Although judging from the docs I'd assume that non-timeseries data should be supported.
Any hints appreciated!
In my case, the cause was that I had indexed malformed JSON into elasticsearch. It was valid JavaScript, but not valid JSON. In particular, I neglected to quote the keys in the objects.
I had inserted my (test) data using curl, e.g.
curl -X PUT http://localhost:9200/foo/doc/1 -d '{ts: "2015-06-24T01:07:00.000Z", employeeId: 105, action: "PICK", quantity: 8}'
Note that ts: should have been "ts":, etc.
Seems like elasticsearch tolerates such things, but Kibana does not. Once I fixed that, Discover worked fine.
Note that the error you are seeing is generated client-side when an error arises. If you open your browser's developer console (e.g. in Firefox) you will see the error in the console log. In my case, the error message was
Error: Unable to parse/serialize body
If your error is different, it will be a different cause.
It was my fault for entering bad JSON to begin with. Odd that elasticsearch is more tolerant than Kibana.
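If you build your documents programmatically, a strict JSON check catches this before anything reaches the index. A small Go sketch of the idea (the sample documents are made up):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The unquoted-keys form that elasticsearch accepted but Kibana choked on,
	// next to the corrected version.
	bad := `{ts: "2015-06-24T01:07:00.000Z", employeeId: 105}`
	good := `{"ts": "2015-06-24T01:07:00.000Z", "employeeId": 105}`

	// json.Valid enforces strict JSON, so documents like bad are rejected
	// before they are ever indexed.
	fmt.Println(json.Valid([]byte(bad)))  // false
	fmt.Println(json.Valid([]byte(good))) // true
}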
It happened to me as well. I tried everything:
Deleting all the indices (.kibana, my own, etc.) didn't work.
Restarting the ES, Kibana and LS services didn't help.
I didn't have the Request Timeout problem in kibana.yml either.
My problem was that the timestamp field was using an incorrect time format. I changed it to this format and it worked: "date": "2015-05-13T00:00:00"
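For what it's worth, a tiny Go sketch that produces timestamps in that layout (only the layout string matters here):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Emit timestamps without a timezone suffix, matching the layout that worked above.
	ts := time.Date(2015, time.May, 13, 0, 0, 0, 0, time.UTC)
	fmt.Println(ts.Format("2006-01-02T15:04:05")) // 2015-05-13T00:00:00
}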
I had the same problem. None of the suggested solutions helped. I finally found the problem while comparing a working version with a non-working version in Wireshark.
Don't emit a UTF-8 byte order mark in front of your JSON. Somehow, my serializer was set up to do that... ElasticSearch is fine with it, but Kibana cannot handle it on the Discover page.
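If you can't change the serializer, stripping the mark before sending the document also works. A tiny Go helper as a sketch (stripBOM is just an illustrative name):

package main

import (
	"bytes"
	"fmt"
)

// stripBOM removes a leading UTF-8 byte order mark (EF BB BF) from a JSON
// payload before it is sent to Elasticsearch.
func stripBOM(doc []byte) []byte {
	return bytes.TrimPrefix(doc, []byte{0xEF, 0xBB, 0xBF})
}

func main() {
	payload := append([]byte{0xEF, 0xBB, 0xBF}, []byte(`{"ts": "2015-05-13T00:00:00"}`)...)
	fmt.Printf("before: %d bytes, after: %d bytes\n", len(payload), len(stripBOM(payload)))
}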

Kibana dashboard - error saving to ElasticSearch

I have a logstash-elasticsearch-kibana local setup and I have a problem when it comes to saving Kibana dashboards.
Selecting the "Save" option I get the following error: "Save failed Dashboard could not be saved to Elasicsearch"
I'm using the logstash dashboard that comes with Kibana, and after making some modifications I tried to save it and got this error.
As far as I understand dashboards loaded from templates (json files located in kibana3/app/dashboards) cannot be saved to Elasticsearch (as stated in kibana templates). But I haven't been able to figure out how to create a new dashboard for logstash and save it to Elasticsearch, nor find instructions to do that. I would like to have different dashboards and be able to modify them and load them as needed.
I have exported the dashboard schema and successfully load it back, which works as far as saving a dashboard after all customization is done. But I would prefer to save them to elasticsearch rather than to template files.
Communication between ES and Kibana works fine (no errors show up in the logs, and information is retrieved and shown in Kibana).
Can someone tell me what I'm missing here?
Thanks!
I got the error when I had a '/' (slash) in the name of the dashboard. Changing this to '-' solved the problem. See the following issue on GitHub: https://github.com/elasticsearch/kibana/issues/837
