Have kafkacat produce both name of a file and its contents - bash

I want kafkacat to write both the name of a file and its contents in a single bash command, if possible. The following apparently does not work. What am I missing?
kafkacat -T -P -b ... -t ... <(echo $file) $file
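One way this could be approached (a sketch only, with a placeholder broker localhost:9092 and topic "files"): produce the filename and the file contents as two separate messages, since kafkacat in producer mode sends each file named on the command line as a single message.
# Sketch, not a verified fix for the command above; broker and topic are placeholders.
# First message: the filename. Second message: the whole file, because kafkacat -P
# sends each file argument as one message.
echo "$file" | kafkacat -P -b localhost:9092 -t files
kafkacat -P -b localhost:9092 -t files "$file"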

Related

curl script to extract tar files

I have a curl script that I use to fetch files from a URL. I now need to extract the tar files after they are downloaded.
for file in $(/usr/bin/curl 'https://linktofile.com/yourfile.txt' );
do
echo $file;
/usr/bin/curl -L -J -O "$file";
done;
The output to the screen is:
curl: Saved to filename 'your_file_1.tar'
How do I extract the file that was saved?
I tried adding tar -xvf $file but nothing happens.
How do I get the name of the file that was just saved?
Using -J makes the saved file's name depend on the Content-Disposition header, so you need to retrieve the filename it specifies. You can do so with -w, which controls what curl prints after the transfer; here you want it to print the effective filename:
for url in $(/usr/bin/curl 'https://linktofile.com/yourfile.txt' ); do
filename=$(/usr/bin/curl -sLJOw '%{filename_effective}' "$url")
tar -xvf "$filename"
done
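To see what %{filename_effective} reports on its own, a one-off check like the following can help (the URL is a placeholder):
# Placeholder URL; -J needs -O, and -w prints the name curl actually saved to.
/usr/bin/curl -sLJO -w '%{filename_effective}\n' 'https://example.com/some_file.tar'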

Pass stdin to plistbuddy

I have a script to show the content of the Info.plist of .ipa files:
myTmpDir=`mktemp -d 2>/dev/null || mktemp -d -t 'myTmpDir'`
unzip -q "$1" -d "${myTmpDir}";
pathToFile=${myTmpDir}/Payload/*.app/Info.plist
/usr/libexec/PlistBuddy -c "Print" ${pathToFile}
With large files this can take some time, since the whole archive is extracted to the temp folder just to read a small Info.plist (XML) file.
I wondered if I can just extract Info.plist file and pass that to plistbuddy? I've tried:
/usr/libexec/PlistBuddy -c "Print" /dev/stdin <<< \
$(unzip -qp test.ipa Payload/*.app/Info.plist)
but this yields
Unexpected character b at line 1
Error Reading File: /dev/stdin
The extraction is working fine. When running unzip -qp test.ipa Payload/*.app/Info.plist I get the output of the Info.plist file to the terminal:
$ unzip -qp test.ipa Payload/*.app/Info.plist
bplist00?&
!"#$%&'()*+5:;*<=>?ABCDECFGHIJKXYjmwxyIN}~N_BuildMachineOSBuild_CFBundleDevelopm...
How can I pass the content of the Info.plist to plistbuddy?
Usually commands support "-" as a synonym for stdin, but the PlistBuddy tool doesn't.
You can still extract just that one file from your .ipa, save it as a temporary file, and then run PlistBuddy on that file:
tempPlist="$(mktemp)"
unzip -qp test.ipa "Payload/*.app/Info.plist" > "$tempPlist"
/usr/libexec/PlistBuddy -c Print "$tempPlist"
rm "$tempPlist"
I ended up with plutil as chepner suggested:
unzip -qp test.ipa Payload/*.app/Info.plist | plutil -convert xml1 -r -o - -- -
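If only a single key is needed, the temp-file approach from the first answer can be narrowed down along these lines (CFBundleVersion is just an example key, not something from the question):
# Sketch based on the temp-file answer above; CFBundleVersion is an example key.
tempPlist="$(mktemp)"
unzip -qp test.ipa "Payload/*.app/Info.plist" > "$tempPlist"
/usr/libexec/PlistBuddy -c "Print :CFBundleVersion" "$tempPlist"
rm "$tempPlist"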

Creating a loop with curl -F argument

I have this bash script code:
#!/bin/bash
FILES=/home/user/Downloads/asd/
for f in $FILES
do
curl -F dir="#/home/user/Downloads/asd;$f" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
done
Inside the asd folder there are 6 files, and I want them uploaded one by one, each passed as the argument of -F "dir=#...."
When I run my code I get the error:
Warning: skip unknown form field: /home/user/Downloads/asd/
curl: (43) A libcurl function was given a bad argument
Here is a working version of the code for a single file:
curl -F dir="#/home/user/Downloads/asd/count.txt" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
So I want to have all the files in asd folder to be read and uploaded like this. I don't see what's wrong with my do loop.
The issues appear to be that you only give a path, not a glob (*) matching all the files in that path, and that there is a stray semicolon (;) in your path:
#!/bin/bash
FILES=/home/user/Downloads/asd/*
for f in $FILES
do
curl -F dir="#$f" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
done
I'm not sure what the # is for or if it is needed, but $f should already contain the path.
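A slightly more defensive variant of that loop might look like the following (same hypothetical endpoint and form fields as in the question); it skips anything in the folder that isn't a regular file:
#!/bin/bash
# Sketch: endpoint and form fields are taken from the question and may need adjusting.
for f in /home/user/Downloads/asd/*; do
    [ -f "$f" ] || continue          # skip directories and anything that isn't a regular file
    curl -F dir="#$f" -F Match=3 -F "Name=DrKla" \
         -F countNo=1 -F outputFormat=json "http://somelink.com"
done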

Passing curl results to wget with bash

I have a small script that I'd like to use with cron.
Purpose: Get webpage with links, extract dates from link and download files.
The script below is not working 100% and I can't see the problem.
#!/bin/bash
for i in $(curl http://107.155.72.213/anarirecap.php 2>&1 | grep -o -E 'href="([^"#]+)"' | cut -d'"' -f2 | grep '_whole_1_3000.mp4'); do
GAMEDAY=$(echo "$i" | grep -Eo '[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}')
wget "$i" --output-document="$GAMEDAY.mp4"
done
It gets the webpage ("curl http://...etc") - works.
$GAMEDAY - extracts the date - works.
The wget part is not working when I add $GAMEDAY. Am I blind ... what am I missing?
Look at your output format here:
wget "$i" -O 2015/05/12.mp4
This is looking for a directory named 2015 with a subdirectory named 05 in which to place the file 12.mp4. Those directories don't exist, so you get 2015/05/12.mp4: No such file or directory.
If you want to replace the /s with underscores:
wget -O "${GAMEDAY//\//_}" "$i"
Alternately, if you want to create the directories if they don't exist:
mkdir -p -- "$(dirname "$GAMEDAY")"
wget -O "$GAMEDAY" "$i"
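Putting the pieces together, the whole loop might end up roughly like this (URL and filename filter taken from the question; the underscore variant is used so no directories are needed):
#!/bin/bash
# Sketch combining the answer with the original loop; URL and pattern come from the question.
for i in $(curl -s http://107.155.72.213/anarirecap.php | grep -o -E 'href="([^"#]+)"' | cut -d'"' -f2 | grep '_whole_1_3000.mp4'); do
    GAMEDAY=$(echo "$i" | grep -Eo '[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}')
    wget -O "${GAMEDAY//\//_}.mp4" "$i"   # e.g. 2015/05/12 becomes 2015_05_12.mp4
done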

Bash interpreter changes argument order

I have a bash script and try to run a command inside it.
That's ok
echo ${something:="zip -r -q $TAG -P $PASS $LOCPATH"}
>zip -r -q evolution -P evolution ~/.gconf/apps/evolution
That's ok too
zip -r -q evolution -P evolution ~/.gconf/apps/evolution
But here the order gets changed when the values are passed, and a strange ". -i" gets added:
zip -r -q $TAG -P $PASS $LOCPATH
>zip error: Nothing to do! (try: zip -r -q -P evolution evolution . -i ~/.gconf/apps/evolution
Thanks for any advice.
BASH FAQ entry #50: "I'm trying to put a command in a variable, but the complex cases always fail!"
something=(zip -r -q "$TAG" -P "$PASS" "$LOCPATH")
"${something[#]}"
Try doing type zip; it seems it's aliased to something.
Maybe put the full path of zip to override this, something like:
/usr/bin/zip
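To check whether zip is being shadowed and to bypass a shell function or (interactive) alias, something along these lines can be used as a quick test:
# Quick diagnostic sketch: see what "zip" resolves to, then bypass functions/aliases.
type zip                                        # prints alias/function/path information
command zip -r -q "$TAG" -P "$PASS" "$LOCPATH"  # 'command' skips shell functions and aliases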
