Kibana String URL image template authentication - elasticsearch

I am trying to work out whether it is possible to attach authentication, or make Kibana send some sort of authentication, with the URL templates used by field formatters in Kibana.
Field formatters are found in:
Management -> Kibana -> Indices -> INDEX_NAME -> Field.
It is possible to display images from URLs with this. For this purpose, I have configured my URL template to be something along the lines of:
localhost:8080/resolve/{imageId}
The imageId is provided via the {{value}} variable, and this works fine.
Now, the server running the image resolver has access to data beyond the scope of the image, so I would like to add some authentication to the requests coming in from Kibana. I printed the available headers and only got this:
{host=[localhost:8082], connection=[keep-alive], user-agent=[Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36], accept=[image/webp,image/apng,image/*,*/*;q=0.8], referer=[http://localhost:5601/app/kibana], accept-encoding=[gzip, deflate, br], accept-language=[en-GB,en;q=0.9,de-DE;q=0.8,de;q=0.7,en-US;q=0.6], cookie=[JSESSIONID.1f47f03c=node01x3mhn2jmd7g4qby84ryfcxmd1.node0; screenResolution=1920x1080; JSESSIONID.d86b4be4=node01gzefm5lc0i3c9itul3p0zoil1.node0; JSESSIONID.9211a7ee=node01v32dtus1uphtcvg72es74h681.node0]}
I can't find any basic authentication in there that I can take advantage of. I am not sure whether the cookies could somehow be used to resolve the authentication.
My question is: can I send the logged-in user's basic authentication as part of my request? If so, how?
I realise this is not much to go on. I am attaching a screenshot in the hope of providing a little more clarity.

I asked this on the Elastic discussion board as well, and was told:
The cookies won't be sent to the third party because of the same-origin
restrictions that browsers put in place. It's not possible.
https://discuss.elastic.co/t/kibana-string-url-image-template-authentication/165786/2
Thanks!
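
Since the browser won't attach custom headers or cross-origin cookies on Kibana's behalf, the only part of the request Kibana actually controls is the URL itself. One workaround, then, is to bake a static token into the URL template (e.g. localhost:8080/resolve/{{value}}?token=change-me) and check it on the resolver. A minimal server-side sketch, assuming a Servlet 4.0+ based resolver (the Jetty-style JSESSIONID cookies above suggest a Java server); the parameter name and secret value are illustrative, not from the original setup:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter guarding /resolve/*; requires Servlet 4.0+
// (init() and destroy() have default implementations there).
public class ImageTokenFilter implements Filter {

    private static final String EXPECTED_TOKEN = "change-me"; // illustrative shared secret

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Kibana's URL template appends ?token=... to every generated image URL
        String token = req.getParameter("token");
        if (EXPECTED_TOKEN.equals(token)) {
            chain.doFilter(req, res); // token matches: serve the image
        } else {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }
}

Bear in mind the token ends up visible in the browser and in the saved field formatter, so this is a weak shared secret rather than per-user authentication; the conclusion from the linked thread still stands.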

Related

Diagnosing 403 forbidden error from wget command

When I run the following command, I get a 403 Forbidden error, and I can't work out why.
wget --random-wait --wait 1 --no-directories \
     --user-agent="Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36" \
     --no-parent --span-hosts --accept jpeg,jpg,bmp,gif,png \
     --secure-protocol=auto --referer=https://pixabay.com/images/search/ \
     --recursive --level=2 -e robots=off --load-cookies cookies.txt \
     --input-file=pixabay_background_urls.txt
It returns:
--2021-09-01 18:12:06-- https://pixabay.com/photos/search/wallpaper/?cat=backgrounds&pagi=2
Connecting to pixabay.com (pixabay.com)|104.18.20.183|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-09-01 18:12:06 ERROR 403: Forbidden.
Notes:
- The input file has the URL 'https://pixabay.com/photos/search/wallpaper/?cat=backgrounds&pagi=2', plus page 3, page 4, etc., on separate lines.
- I used the long form of the flags just so I could remember what they were.
- I used a cookie file, 'cookies.txt', generated from the website, and made sure it was up to date.
- I used the referer 'https://pixabay.com/images/search/', which I found by looking at the headers in Chrome DevTools.
- I'm able to visit these URLs normally without any visible CAPTCHA requirements.
- I noticed one of the cookies, __cf_bm, had Secure = TRUE, so it needs to be sent over HTTPS. I'm not sure whether I'm doing that or not.
It might not actually be possible to do; perhaps Cloudflare is the deciding factor. But I'd like to know whether it's something that can be circumvented, and whether it's feasible to download a large number of files from this website.
Any solutions, insights, or any other way of downloading large numbers of image files would be much appreciated. I know Pixabay has an API, which I might use as a last resort, but I think it's heavily rate-limited.
It seems these image download sites detect that a server is querying them rather than a real person in a normal browser. Trying to circumvent this is probably as futile as trying to fool Google with SEO tricks, as these sites are likely in an ongoing battle to stop people doing mass downloads.
I quit a company that was trying to do exactly that, manipulating images from Google Images to pass off as its own.
A 403 is usually associated with failed logins, but it is equally applicable for rejecting non-standard access to resources.
I think these image download sites should return a 200 response to HEAD-only HTTPS requests so that links to their images can be checked for validity. That would protect their resources while still allowing legitimate automated site-maintenance checks, such as validating external links.
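
To make the HEAD-only idea concrete, here is a minimal link-check sketch using Java's built-in HttpClient (Java 11+). The URL is the one from the question; whether Pixabay actually answers a HEAD request with a 200 is precisely what the suggestion above proposes, not something this sketch guarantees:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HeadLinkCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // HEAD asks only for the status line and headers, never the body,
        // so it cannot be used to mass-download the actual images.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://pixabay.com/photos/search/wallpaper/?cat=backgrounds&pagi=2"))
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println(response.statusCode() + " for " + response.uri());
    }
}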

Performance testing in a specific browser

I need to test the performance of a website in a specific browser (IE). Is it possible to do this in JMeter? Or is there another tool that can?
Websites identify client browsers based on the User-Agent HTTP header, so if you need to mimic the IE browser, just add an HTTP Header Manager configured like this:
Name: User-Agent
Value: the relevant User-Agent string for whichever IE version you are trying to simulate; for Internet Explorer 11 it would be
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko
See Internet Explorer User Agent Strings for the full list.
JMeter doesn't actually "render" the page; it downloads the response as plain text, so you won't be able to measure any rendering time. Nor does it execute JavaScript.
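
As a quick sanity check of the User-Agent trick outside JMeter, here is a minimal sketch that sends the same IE11 string with Java's HttpClient (Java 11+); https://httpbin.org/headers is used purely as an illustrative endpoint that echoes request headers back:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FakeIe11Request {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The same value you would put in the HTTP Header Manager
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://httpbin.org/headers")) // echoes headers back as JSON
                .header("User-Agent",
                        "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // should show the IE11 User-Agent
    }
}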
Links for reference:
https://guide.blazemeter.com/hc/en-us/articles/206733719-How-to-make-JMeter-behave-more-like-a-real-browser
https://www.blazemeter.com/blog/jmeter-webdriver-sampler

jmeter - Authorization header goes missing

I have a fairly simple JMeter script for our site. As part of the flow through the site, I use our API to update the user's application.
The API uses OAuth authentication, which I'm familiar with from using our own proprietary testing tool.
First I get an auth token via a call to our authorization endpoint. This returns a bit of JSON like this:
{"access_token":"a really long auth token string"}
In my script I use a regex to capture this token string. As part of investigating this problem, I've used a Debug PostProcessor to check that I get the correct string out, which I do. It's saved as the variable 'authToken'.
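
For reference, the question doesn't show the extractor itself, so the regex below is only an illustrative equivalent of what a JMeter Regular Expression Extractor might capture from the sample response:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenExtractDemo {
    public static void main(String[] args) {
        // Sample response body from the authorization endpoint (from the question)
        String body = "{\"access_token\":\"a really long auth token string\"}";
        // Illustrative equivalent of a Regular Expression Extractor pattern
        Matcher m = Pattern.compile("\"access_token\":\"(.+?)\"").matcher(body);
        if (m.find()) {
            // In JMeter, this capture group is what gets stored as ${authToken}
            System.out.println("authToken = " + m.group(1));
        }
    }
}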
In the very next step in the script, I add the header via an HTTP Header Manager (screenshots of the header and the relevant part of the script were attached to the original post). I know this header is correct, as we have many instances of it in our API tests.
Each time I run the script, however, the step that uses the token/header returns a 401 Unauthorized.
I've tested the actual URL and header in a Chrome plugin and the call works as expected.
In the 'View Results Tree' listener there is no evidence that the Authorization header is being set at all. I've tried hard-coding an auth token, but no joy: it still doesn't seem to be part of the request.
From the results tree, the request looks like this:
POST <correct URL>
POST data:{"id":"<item id>"}
Cookie Data: SessionProxyFilter_SessionId=<stuff>; sessionToken=<stuff>
Request Headers:
Content-Length: 52
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36
Connection: keep-alive
Content-Type: application/json
The results tree also shows no redirects.
I've tried the solutions here and here but neither of these worked.
Oddly, I'm almost certain this worked about a month ago, and as far as I can tell nothing has changed on the machine, in the script, or in the JMeter installation. Obviously one of those is not true, but I'm at my wit's end.
Another member of my team answered this for me and it's fairly simple. I just needed to set the 'Implementation' for the problem step to 'HttpClient4'.
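
For reference, the implementation can reportedly also be set as a global default via a JMeter property rather than per sampler; this is version-dependent, so treat the line below as a sketch:

# jmeter.properties: default implementation for HTTP Request samplers
# (a per-sampler 'Implementation' setting still overrides this)
jmeter.httpsampler=HttpClient4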

How to identify when yammer is making a request for a page

In our ASP.NET code we look at the user agent to gather statistics. Yes, I know the user agent can be spoofed.
How can I identify when a web request for one of our pages comes from Yammer? The user agent doesn't appear to be set to anything Yammer-specific.
If someone shares a link on Yammer, the user agent set when the link is accessed is:
Mozilla/5.0 (compatible; Embedly/0.2; +http://support.embed.ly/)
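
So the practical check is for the Embedly marker rather than anything containing "yammer". The question's ASP.NET code isn't shown, so here is the idea sketched in Java for illustration:

import javax.servlet.http.HttpServletRequest;

public final class YammerPreviewDetector {
    // Yammer link previews arrive via Embedly, per the answer above
    private static final String EMBEDLY_MARKER = "Embedly";

    public static boolean isLinkPreviewFetch(HttpServletRequest request) {
        String userAgent = request.getHeader("User-Agent");
        // User-Agent can be spoofed, so treat this as a statistics hint only
        return userAgent != null && userAgent.contains(EMBEDLY_MARKER);
    }
}

Note that Embedly crawls on behalf of other services too, so strictly this identifies an Embedly fetch rather than Yammer specifically.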

Want to get html content of Microsoft Live Login page

I have this URL:
https://login.live.com/login.srf?wa=wsignin1.0&wtrealm=http%3a%2f%2fcorp.sts.microsoft.com&wctx=7b4cd04b-7dc2-4880-9f77-20c8c6ef64c4&wct=2013-03-11T06%3a54%3a42Z&whr=uri%3aWindowsLiveID
I want to get the HTML content of this web page as a string. My code looks like this:
WebClient wc = new WebClient();
string html = wc.DownloadString("https://login.live.com/login.srf?wa=wsignin1.0&wtrealm=http%3a%2f%2fcorp.sts.microsoft.com&wctx=7b4cd04b-7dc2-4880-9f77-20c8c6ef64c4&wct=2013-03-11T06%3a54%3a42Z&whr=uri%3aWindowsLiveID");
When I examine the content of the html string, I see an error message:
Microsoft account requires JavaScript to sign in. This web browser
either does not support JavaScript, or scripts are being blocked. To
find out whether your browser supports JavaScript, or to allow
scripts, see the browser's online help.
You could set the User-Agent request header to some known browser, which will trick the website into thinking that the client supports JavaScript:
using (WebClient wc = new WebClient())
{
    wc.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.152 Safari/537.22";
    string html = wc.DownloadString("https://www.microsoft.com/en-/itacademy/members/default.aspx");
}
Obviously, if the site performs JavaScript tasks they will not be executed, and you cannot rely on them, because WebClient doesn't support JavaScript.
If, on the other hand, you are attempting to authenticate against Live ID, I would strongly recommend using OAuth for that purpose. Here's the documentation explaining how to integrate this type of authentication with Live ID once you have registered your application as a relying party.
