I want to talk to my Loki instance at an address in the following format:
http://my.domain.com/monitoring/loki/
But I cannot find the correct place to configure it.
I assumed that, since Loki is based on Prometheus, I could use flags like --web.external-url. But that seems not to be the case; I have checked all available flags with docker run grafana/loki --help.
Am I missing something or do I have to add a reverse proxy between Loki and the rest of the world?
Not possible. Use Nginx.
Even though Loki is based on Prometheus, it is currently not possible to make Loki listen on a subpath.
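A minimal Nginx sketch of the reverse-proxy approach, assuming Loki listens on its default HTTP port 3100 on the same host:
# The trailing slash on proxy_pass strips the /monitoring/loki/ prefix
# before the request reaches Loki.
location /monitoring/loki/ {
    proxy_pass http://127.0.0.1:3100/;
}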
Just in case someone needs this and comes across this post: you can use -server.path-prefix=/loki for that case.
All the API paths will then be served under this prefix.
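For example, with the Docker image (a sketch; the config-file path is the image's default and may differ in your setup):
docker run grafana/loki \
    -config.file=/etc/loki/local-config.yaml \
    -server.path-prefix=/loki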
I have a graylog instance that's running a UDP-Syslog-Input on Port 1514.
It's working wonderfully well for all the system logs of the Linux servers.
When I try to ingest Payara logs though [1], the "source" of the message is set to "localhost" in Graylog, whereas it is normally the hostname of the sending server.
This is suboptimal, because I want the application logs in Graylog with the correct source as well.
I googled around and found:
https://github.com/payara/Payara/blob/payara-server-5.2021.5/nucleus/core/logging/src/main/java/com/sun/enterprise/server/logging/SyslogHandler.java#L122
It seems like the syslog "source" is hard-coded in Payara as "localhost".
Is there a way to accomplish sending payara-logs with the correct "source" set?
I have nothing to do with the application server itself, I just want to receive the logs with the correct source (the hostname of the sending server).
Example log entry in /var/log/syslog for Payara:
Mar 10 10:00:20 localhost [ INFO glassfish ] Bootstrapping Monitoring Console Runtime
I suspect I want the "localhost" in the above example set to the FQDN of the host.
Any ideas?
Best regards
[1]
logging.properties:com.sun.enterprise.server.logging.SyslogHandler.useSystemLogging=true
Try enabling "store full message" in the syslog input settings.
That will add the full_message field to your log messages, containing the full header in addition to what you see in the message field. Then you can check whether the source IP is present in the UDP packet. If so, collect those messages via a raw/plaintext UDP input and the source should show up correctly.
You may have to parse the rest of the message via an extractor or pipeline rule, but at least you'll have the source.
Well, this might not exactly be a good solution, but I tweaked the rsyslog template for Graylog.
I deploy the rsyslog config via Puppet, so I can generate "YOURHOSTNAME-PAYARA" dynamically using the facts.
This way, I at least have the correct source set.
$template GRAYLOGRFC5424,"<%PRI%>%PROTOCOL-VERSION% %TIMESTAMP:::date-rfc3339% YOURHOSTNAME-PAYARA %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n"
if $msg contains 'glassfish' then {
    *.* @loghost.domain:1514;GRAYLOGRFC5424
    & ~
} else {
    *.* @loghost.domain:1514;RSYSLOG_SyslogProtocol23Format
}
The other thing we did is actually activate application logging through Log4j and its Syslog appender:
<Syslog name="syslog_app" appName="DEMO" host="loghost" port="1514" protocol="UDP" format="RFC5424" facility="LOCAL0" enterpriseId="">
    <LoggerFields>
        <KeyValuePair key="thread" value="%t"/>
        <KeyValuePair key="priority" value="%p"/>
        <KeyValuePair key="category" value="%c"/>
        <KeyValuePair key="exception" value="%ex"/>
    </LoggerFields>
</Syslog>
This way, we can ingest the glassfish server logs and the independent application logs into graylog.
The "LoggerFields" in log4j.xml appear to be key-value pairs for the "StructuredDataElements" according to RFC5424.
https://logging.apache.org/log4j/2.x/manual/appenders.html
https://datatracker.ietf.org/doc/html/rfc5424
That's the problem with UDP Syslog. The sender gets to set the source in the header. There is no "best answer" to this question. When the information isn't present, it's hard for Graylog to pass it along.
It sounds like you may have found an answer that works for you. Go with it. Using log4j solves two problems and lets you define the source yourself.
For those who face a similar issue, a simpler way to solve the source problem might be to use a static field. If you send the Payara syslog messages to their own input, you can add a static field on that input to stand in for the source and identify traffic from it. Call it "app_name" or "app_source" or something and use that field for whatever sorting you need to do.
Alternatively, if you have just one source for application messages, you could use a pipeline to set the value of the source field to the IP or FQDN of the payara server. Then it displays like all the rest.
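A sketch of such a pipeline rule (the input ID and the FQDN are placeholders for your own values):
rule "set payara source"
when
    has_field("gl2_source_input") &&
    to_string($message.gl2_source_input) == "<payara-input-id>"
then
    set_field("source", "payara01.example.com");
end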
My team has data stored in Elasticsearch and has given me an API key, the URL of a remote cluster, and a username/password combination (to what, I don't know) to GET data.
How do I use this API key to get data from the Elasticsearch cluster with Python? I've looked through the docs, but none include the use of a raw API key, and most involve localhost rather than a remote host as in my case.
Surely I need to know the names of nodes or indices at least? And what would I need the username/password combo for? There must be more details needed to connect than what I've been given?
We're moving from Node.js+Couchbase to Elasticsearch+Python, so I'm more than a bit lost.
TYIA
Most probably X-Pack basic security is enabled in your Elasticsearch (ES) cluster, which you can check by hitting http://<your-es-host>:9200; if it asks for a username/password, you can provide the one you have.
Please refer to the X-Pack page for more info.
In short, it's used to secure your cluster and indices. There are various types of authentication, and basic auth (which requires a username/password) is the one your team might be using.
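To actually query with the API key, a minimal sketch with the official Python client (elasticsearch 8.x); the URL, key, and index name are placeholders:

from elasticsearch import Elasticsearch

# The api_key may be passed as the single base64-encoded string you were
# given, or as an (id, api_key) tuple.
es = Elasticsearch(
    "https://my-remote-cluster.example.com:9243",
    api_key="BASE64_ENCODED_API_KEY",
)

print(es.cat.indices())  # discover which indices you can read

resp = es.search(index="my-index", query={"match_all": {}}, size=5)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])

You don't need node names: the client talks to the cluster URL, and you can list the indices as above rather than knowing them up front.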
I am trying to set up jaeger-collector on one server, with jaeger-agent running on another server.
If I run the exe jaeger-all-in-one, everything works as expected (using in-memory storage).
To see the options available with ES, I tried the help command, but when I run jaeger-collector --help it shows only Cassandra-related flags. How do I check the Elasticsearch-specific flags?
Now, my requirement is to specify an Elasticsearch URL.
I have set the environment variables SPAN_STORAGE_TYPES and ES_SERVER_URLS, but couldn't find how to run jaeger-collector.exe so that it takes these environment variables into account.
Thanks,
Minu
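For reference, Jaeger selects its storage backend via the SPAN_STORAGE_TYPE environment variable (note the singular), and the flags shown by --help change accordingly. A sketch, assuming a Unix shell (on Windows, set the variables with set before running the exe):
SPAN_STORAGE_TYPE=elasticsearch ./jaeger-collector --help
SPAN_STORAGE_TYPE=elasticsearch ES_SERVER_URLS=http://elasticsearch-host:9200 ./jaeger-collector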
Currently I have a Go web application containing over 50 .go files. Each file writes logs to STDOUT for now.
I want to use fluentd to capture these logs and then send them to Elasticsearch/Kibana.
I searched the internet for a solution to this. There is one package, https://github.com/fluent/fluent-logger-golang.
To use it I would need to change all the logging-related code in each Go file,
and there would be many data structures that I would need to post to fluentd.
In short, I don't want to use this approach.
Please let me know if there are any other ways to do this.
Thank you
Ideally (at least in my opinion), you would essentially just pipe stdout to Fluentd.
If you happen to be also using Docker for your application, you can do this easily using the built-in logging drivers:
https://docs.docker.com/engine/admin/logging/overview/
Otherwise, there seem to be a few options to help get stdout to Fluentd:
12Factor App: Capturing stdout/stderr logs with Fluentd
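As a sketch of the Docker route (the address, tag, and image name are placeholders), run the container with the fluentd logging driver:

docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=mywebapp \
    my-go-app-image

Then have Fluentd accept the forwarded records and ship them on; the elasticsearch output assumes the fluent-plugin-elasticsearch plugin is installed:

<source>
  @type forward
  port 24224
</source>

<match mywebapp>
  @type elasticsearch
  host elasticsearch-host
  port 9200
  logstash_format true
</match>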
I wrote a simple function for my Puppet module. It makes some requests using the PuppetDB API, and I need the IP address of the PuppetDB server. Is there a proper way to get the settings the Puppet master uses to connect to PuppetDB, or should I parse puppet.conf by hand?
Parsing puppetdb.conf by hand would be the least desirable way to go about it.
Looking at the code that loads the config, it should be possible to access it using
settings_value = Puppet::Util::Puppetdb.config['main'][setting_name]
for configuration options from the [main] section.
Looking at even more code, you should also be able to use
Puppet::Util::Puppetdb.server
Puppet::Util::Puppetdb.port
I'm not entirely sure whether those APIs are available from parser functions, but it's worth a shot.
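If they are, here is a sketch of a legacy parser function wrapping them; the function name is made up, and this assumes the puppetdb-termini code is on the master's load path:

require 'puppet/util/puppetdb'

module Puppet::Parser::Functions
  newfunction(:puppetdb_server_address, :type => :rvalue,
              :doc => 'Return the PuppetDB host:port the master is configured to use.') do |args|
    "#{Puppet::Util::Puppetdb.server}:#{Puppet::Util::Puppetdb.port}"
  end
end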