ffmpeg: How to set a custom HTTP header when using PUT as the output method?

I'm trying to use ffmpeg to PUT encoded files to object storage, and I need to include an API key in a header. I've tried -http_opts '"headers='AccessKey: mykey'" and -headers 'AccessKey: mykey', but neither ends up with the header in the request when I use -v trace to see what is getting sent.
Here's the relevant part of my command:
-method PUT -headers 'AccessKey: mykey' \
https://storage/store/stream.mpd
Is this a known issue or have I just got the order of the options wrong?

Your header is missing a trailing CRLF. Try -headers 'AccessKey: mykey'$'\r\n' with your ffmpeg version.
Newer ffmpeg versions automatically append the trailing CRLF to the headers, so your command
ffmpeg -v trace -headers 'AccessKey: mykey' -method PUT -i http://localhost/
works as-is with my ffmpeg version 4.3.4-0+deb11u1+mx21+1, built with gcc 10 (Debian 10.2.1-6).
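Putting that together with the PUT output from your question, a minimal sketch would be (input.mp4 and the encoding options stand in for your real pipeline; the URL and key are the placeholders from the question):
ffmpeg -v trace -i input.mp4 -method PUT -headers 'AccessKey: mykey'$'\r\n' https://storage/store/stream.mpd
On versions that add the CRLF automatically, the $'\r\n' suffix can be dropped.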

Related

ffmpeg header how to allow certain domain

I'm new to ffmpeg and I want to restrict my streams so that they are only allowed on my website.
The stream server is nginx and my website is written in PHP; the stream server is on a different host than my website, so I want to restrict stream requests to my website's domain only.
I have applied this command, but it is not working, and searching Google I didn't find much information on ffmpeg's headers option.
ffmpeg -headers "Accept-Encoding:gzip, deflate, br" -headers "Accept-Language:zh-CN,zh;q=0.9,en;q=0.8" -headers "Origin:https://mywebsite.com" -headers "Referer:https://mywebsite.com"
Appreciate your support.

Windows Batch: wget to download Nirsoft tools - leads to corrupt files

While making a batch file to update NirSoft tools, I had a strange experience with wget.
First I downloaded a text file with pad links:
wget http://www.nirsoft.net/pad/pad-links.txt --backups=20 --append-output=C:\Path\Update\LOG\Nirsoft\%Timestamp%_NirSoft.log
Afterwards, I used fart-js to delete the rows I did not need from the pad-links.txt file. I also used that program to change the download links to https://www.nirsoft.net/utils and the file extensions to .zip:
fart ".\pad-links.txt" "http://www.nirsoft.net/pad" "http://www.nirsoft.net/utils" | tee --append C:\Path\Update\LOG\Nirsoft\%Timestamp%_NirSoft.log
and
fart ".\pad-links.txt" ".xml" ".zip" | tee --append C:\Path\Update\LOG\Nirsoft\%Timestamp%_NirSoft.log
Afterwards, to download the programs, I used:
wget --timestamping --input-file=C:\Path\UtilSuit\NirLauncher\Download\pad-links.txt --append-output=C:\Path\Update\LOG\Nirsoft\%Timestamp%_NirSoft.log
Having a look at the log file, I found out that not all programs are stored at that location. For example, WirelessKeyView is stored at https://www.nirsoft.net/toolsdownload/wirelesskeyview.zip.
Trying to get this file with wget leads to a corrupt 4 KB download. The same happens with cURL and aria2. When I download it with Mozilla or IDM, I have no problem getting the file. So I tried wget --auth-no-challenge and wget --header="Accept: text/html" --user-agent="Mozilla/5.0 …"
I also tried cliget, using the wget/aria2/curl lines it produced during a normal download with Mozilla.
wget --header 'Host: www.nirsoft.net' --user-agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' --header 'Accept-Language: de,en-US;q=0.7,en;q=0.3' --referer 'https://www.nirsoft.net/utils/wirelesskeyview.html' --header 'Upgrade-Insecure-Requests: 1' --header 'Sec-Fetch-Dest: document' --header 'Sec-Fetch-Mode: navigate' --header 'Sec-Fetch-Site: same-origin' --header 'Sec-Fetch-User: ?1' --header 'DNT: 1' --header 'Sec-GPC: 1' 'https://www.nirsoft.net/toolsdownload/wirelesskeyview.zip' --output-document 'wirelesskeyview.zip'
I googled and found this reference for PowerShell (same error), but I cannot reproduce the working answer in batch (I am not familiar with PowerShell scripting).
So how is it possible to download the single wirelesskeyview.zip file with wget/curl or aria2 in a batch script?
A workaround I found is downloading it directly from the pad panel, but I want the .zip file, including the updated .chm file, and also the 64-bit versions, if available.
One more note: in my anti-virus tool the NirSoft site is exempted from scanning, so that is not the answer.
Any solutions?
Aah, this one is simple. If you look at the actual page downloaded, it's called "403.html". So, let's open it. The first thing that strikes you is this:
<title>Error 403: Missing HTTP referer in the HTTP request</title>
So, the server wants a Referer header. Sure, let's give it one:
$ wget --referer foo <URL>
And it downloads the zip file correctly as expected.
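For example, applied to the file from the question (a sketch; per the test above any non-empty referer satisfies the server, but the tool's own page is a natural choice):
wget --referer https://www.nirsoft.net/utils/wirelesskeyview.html https://www.nirsoft.net/toolsdownload/wirelesskeyview.zip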
Now, really, the server should not be returning an HTTP 200 response with a file called 403; it really should have sent back an HTTP 403 response. But what can you do? There are broken servers everywhere.

Ansible Tower REST API: Is there any way to get the logs/output of a job?

I have an Ansible job started by another process. Now I need to check the status of the currently running job in Ansible Tower.
I am able to track whether its status is running/successful/failed/canceled via /jobs/{id} using the REST API.
But I also need the console logs/output of the task for processing. Is there a direct API call for that?
You can access the job log via a link similar to:
https://tower.yourcompany.com/api/v1/jobs/12345/stdout?format=txt_download
Your curl command would be similar to:
curl -O -k -J -L -u ${username}:${password} "https://tower.company.com/api/v1/jobs/${jobnumber}/stdout?format=txt_download"
obviously replacing ${username}, ${password}, and ${jobnumber} with your own values.
The curl flags used:
-O : save the output to a local file named after the remote file
-k : insecure SSL (don't require trusted CAs)
-J : use the filename from the server's Content-Disposition header (https://curl.haxx.se/docs/manpage.html#-J)
-L : follow redirects
-u : username and password
You can do this via their REST API.
To get the job number, use a GET against https://yourtowerinstance/api/v2/job_templates/
This will return your templates and their IDs.
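For example, with curl (a sketch reusing the placeholders from the answer above; keep -k only if your Tower certificate is not trusted):
curl -k -u ${username}:${password} https://yourtowerinstance/api/v2/job_templates/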
To get the output in (near) real time I use this PowerShell code:
# $templateResult.id is the id of the job launched from the template;
# $authHeader holds the authentication header used for the earlier calls.
$stdouturl = "https://yourtowerinstance/api/v2/jobs/$($templateResult.id)/stdout/?format=txt"
$resultstd = Invoke-RestMethod -Uri $stdouturl -Method 'Get' -Headers $authHeader
# Poll the stdout endpoint every 5 seconds until the play recap appears.
while ($resultstd -notmatch 'PLAY RECAP') {
    $resultstd = Invoke-RestMethod -Uri $stdouturl -Method 'Get' -Headers $authHeader
    Start-Sleep -Seconds 5
}
$resultstd
Once you launch a template you get the job id in the response, but I don't think there is an API to get the output of the job. However, from the dashboard, under the Jobs section, you can download the individual job output.

Invoke-RestMethod : Content-Length or Chunked Encoding cannot be set for an operation that does not write data

I was looking for the cURL-equivalent command in PowerShell and found the URL below:
https://superuser.com/questions/344927/powershell-equivalent-of-curl
Based on that URL, I tried the PowerShell command below on a system with PowerShell version 4.0:
Invoke-RestMethod -Uri www.discoposse.com/index.php/feed -Method Get -OutFile C:\Temp\DiscoPosseFeed.xml
Once I run the above command I see an XML file in the specified location, but if I now specify the transfer encoding as shown below:
Invoke-RestMethod -Uri www.discoposse.com/index.php/feed -TransferEncoding compress -Method Get -OutFile C:\Temp\DiscoPosseFeed.xml
I am getting an error:
Can anyone help me understand what I am missing here?
The explanation is that your command does not write any data in the request body (it is a plain GET). TransferEncoding specifies a value for the transfer-encoding HTTP response header; valid values are Chunked, Compress, Deflate, GZip and Identity.

Using CURL to download file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl, and I'd like to do these three things with a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump-header option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three options: -i, -I, and -D.
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
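For reference, a sketch that does all three in one request (headers.txt, snapshot.png, and $url are placeholders; -o saves the body, -D saves the headers, and -w '%{http_code}' prints the status code so the script can check it):
code=$(curl -s -D headers.txt -o snapshot.png -w '%{http_code}' "$url")
The status code ends up in $code, the response headers in headers.txt, and the image (if the request succeeded) in snapshot.png.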
