I want to store an .ogg file inside a bash script and play it later in the script. I have tried:
Archiving the .ogg into a .7z file (saves some space), encoding the .7z archive into base64, storing that base64 into my script, and decoding->unzipping->playing the raw .ogg stream.
Encoding the .ogg into base64, storing that base64 into my script, and decoding->playing the raw ogg stream.
Creating a hex dump of the .ogg file, storing that hex into my script, using sed to place \x before every two characters of the hex, using printf to print the hex and <<< it into ogg123 (my ogg player)
Archiving the .ogg into a .7z file (saves some space), creating a hex dump of the .7z file, storing that hex into my script, using sed to place \x before every two characters of the hex, using printf to print the hex, pipe the output into 7za e -si and <<< it into ogg123 (my ogg player)
None of these work. The most successful approach I have had is:
ogg123 <<< cat sound.ogg
However I would really prefer to have no files written to the disk (want to keep it all stored in my script) and, if possible, not use variable(s) to store any of the raw data.
Another problem is that ogg123 does not support reading from stdin, so I can't pipe any raw ogg data into it.
Commands I have tried: (hex and base64 are truncated of course)
$ ogg123 <<< printf '\xae\x0f\x00\xad\x83' # .ogg data
/usr/local/bin/ogg123: Argument list too long
$ ogg123 <(printf '\xae\x0f\x00\xad\x83') # .ogg data
Error opening /dev/fd/63 using the oggvorbis module. The file may be corrupted.
$ S=<<SOUND
dGhpcyBiYXNlNjQgd291bGQgYmUgdGhlIGJhc2U2NCBvZiBteSBvZ2cgZmlsZQ==
SOUND
$ ogg123 <(echo $S | openssl base64 -d)
Error opening /dev/fd/63 using the oggvorbis module. The file may be corrupted.
$ ogg123 <<< echo $S | openssl base64 -d
5?w?k譸?
I did try several other commands, but I accidentally quit Terminal and these were the only ones saved in my .bash_history. Believe me, though: everything I tried got me nowhere (I've spent 3.5 hours on this already with no success).
Using macOS High Sierra 10.13.6, bash 3.2.57(1)-release, ogg123 from vorbis-tools 1.4.0, 7za 16.02 (x64), openssl base64 (LibreSSL 2.2.7).
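For what it's worth, the hex-escape step from the last two attempts (sed placing \x before every byte pair) does round-trip bytes correctly in isolation; here is a minimal check with a two-byte placeholder file standing in for the .ogg data:

```shell
# Hex-dump a file, prefix every byte pair with \x, and print it back.
printf 'Hi' > f.bin                               # placeholder for the .ogg data
hex=$(od -An -v -tx1 f.bin | tr -d ' \n')         # -> 4869
esc=$(printf '%s' "$hex" | sed 's/../\\x&/g')     # -> \x48\x69
printf "$esc"                                     # prints the original bytes: Hi
rm f.bin
```

So the encoding side works; the failures above come from ogg123 refusing non-seekable input, not from the hex trick itself.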
Not a complete fix, but I did get it to work with the following:
mplayer <(openssl base64 -d <<SND
dGhpcyBiYXNlNjQgd291bGQgYmUgdGhlIGJhc2U2NCBvZiBteSBvZ2cgZmlsZQ==
SND
)
Props to mplayer for reading raw data like that (it also worked when reading from stdin!)
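To automate the embedding itself, the heredoc pattern above can be generated from the source file; a sketch, where "sound.ogg" and "play.sh" are placeholder names (the demo writes a fake payload so it runs standalone):

```shell
# Sketch: build a self-contained player script around a base64 payload.
printf 'OggS placeholder' > sound.ogg           # stand-in for your real audio file
{
  echo '#!/bin/bash'
  echo "mplayer <(openssl base64 -d <<'SND'"
  openssl base64 < sound.ogg                    # embed the file as base64 text
  echo 'SND'
  echo ')'
} > play.sh
chmod +x play.sh                                # ./play.sh plays without writing the .ogg to disk
```

The quoted heredoc delimiter ('SND') keeps bash from expanding anything inside the base64 payload.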
Related
I want to extract a huge wordlist and use its contents like a "stream" to the shell.
This would permit one to perform dictionary attacks without having to decompress the entire wordlist.
A little bit more searching, and I've just found this here:
7z e -so -bd compressed_file.7z 2>/dev/null | tail
This would print the last 10 lines of the compressed file without having to extract or store it, sending all errors to /dev/null. This is just what I needed!
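The same streaming pattern works with other compressors; here is a quick sketch using gzip and a throwaway three-word "wordlist", so nothing uncompressed ever touches the disk:

```shell
# Build a tiny compressed wordlist, then stream its tail without extracting.
printf 'alpha\nbravo\ncharlie\n' | gzip > wordlist.gz
gzip -dc wordlist.gz | tail -n 1    # prints: charlie
rm wordlist.gz
```

Any consumer that reads stdin (a dictionary-attack tool, grep, wc) can replace tail in that pipeline.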
How can I pipe an image into exiv2 or imagemagick, strip the EXIF tag, and pipe it out to stdout for more manipulation?
I'm hoping for something like:
exiv2 rm - - | md5sum
which would read an image from stdin, write the stripped image to stdout, and calculate its md5sum.
Alternatively, is there a faster way to do this?
Using exiv2
I was not able to find a way to get exiv2 to output to stdout -- it only wants to overwrite the existing file. You could use a small bash script to make a temporary file and get the md5 hash of that.
image.sh:
#!/bin/bash
cat <&0 > tmp.jpg # Take input on stdin and dump it to temp file.
exiv2 rm tmp.jpg # Remove EXIF tags in place.
md5sum tmp.jpg # md5 hash of stripped file.
rm tmp.jpg # Remove temp file.
You would use it like this:
cat image.jpg | ./image.sh
Using ImageMagick
You can do this using ImageMagick instead by using the convert command:
cat image.jpg | convert -strip - - | md5sum
Caveat:
I found that stripping an image of EXIF tags using convert resulted in a smaller file size than using exiv2. I don't know why this is, or what exactly these two commands do differently.
From man exiv2:
rm Delete image metadata from the files.
From man convert:
-strip strip image of all profiles and comments
Using exiftool
You could use ExifTool by Phil Harvey (I got the idea from https://stackoverflow.com/a/2654314/3565972):
cat image.jpg | exiftool -all= - -out - | md5sum
This too, for some reason, produces a slightly different image size from the other two.
Conclusion
Needless to say, all three methods (exiv2, convert, exiftool) produce outputs with different md5 hashes. Not sure why this is. But perhaps if you pick a method and stick to it, it will be consistent enough for your needs.
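One practical detail when comparing the three pipelines: md5sum labels stdin output with a trailing "-", so keep only the hash field when storing or comparing results:

```shell
# md5sum on stdin prints "<hash>  -"; cut keeps just the 32-character hash.
hash=$(printf 'example' | md5sum | cut -d' ' -f1)
echo "$hash"
```

That way hashes computed from stdin and hashes computed from named files compare cleanly.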
I tested with a NEF file. It seems only
exiv2 rm
works well; exiftool and convert can't remove all the metadata from a .nef file.
Note that the output file of exiv2 rm can no longer be displayed by most image viewers. But all I need is for the MD5 hash to stay the same after I update any metadata of the .NEF file, and for that it works perfectly.
What I want to do is transfer some exe files from my local PC to a server through RDP.
Copy-pasting the file doesn't work, and I don't want to do it that way anyway.
What I tried was: open the exe in Notepad on my local PC, copy the contents, paste them into a text file on the server, and then rename it to .exe. This, however, did not work; it corrupted the exe file.
Is there any other way to convert the exe/binary file into a series of strings only, so that I can copy-paste it to the server and then decode it back to the exe without corrupting it?
Will base64 work?
(I can use VBScript to encode/decode.)
Emails use base64 encoding to transfer file attachments, so yes, base64 will work.
Here is the proof (on Linux) with a simple text file:
$ echo -n "abc" > file
$ hexdump file
0000000 6261 0063
0000003
$ sha1sum file
a9993e364706816aba3e25717850c26c9cd0d89d file
$ base64 ./file > BASE64
$ base64 --decode < BASE64 > newFile
$ sha1sum newFile
a9993e364706816aba3e25717850c26c9cd0d89d newFile
base64 encoding should work. It would be easier to just connect one of your local drives in the RDP session, though.
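The text-file demo above carries over to arbitrary binary data, which is what matters for an exe; a sketch with a random blob (the file names are placeholders):

```shell
# Round-trip a random 1 KiB blob through base64 and verify byte-for-byte.
head -c 1024 /dev/urandom > blob.bin
base64 blob.bin > blob.txt              # paste-able ASCII text
base64 --decode blob.txt > blob2.bin
cmp blob.bin blob2.bin && echo "identical"
rm blob.bin blob.txt blob2.bin
```

blob.txt is plain ASCII, so it survives any copy-paste channel that would mangle raw binary, which is exactly why the Notepad approach corrupted the exe.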
I have a bunch of files encoded in GB2312, and I want to convert them to UTF-8, so I tried the command below:
find . | xargs iconv -f GB2312 -t UTF-8
It converts them successfully, but the output is printed to the console.
I want the converted text saved back to the original files. How do I do that?
You could always use a loop instead of xargs. I wouldn't recommend overwriting files in a one-shot command-line call. How about moving them aside first:
for file in $(find . -type f); do
  mv "$file" "$file.old" && iconv -f GB2312 -t UTF-8 < "$file.old" > "$file"
done
Just take care with this: if your file names contain spaces, this loop will not work correctly.
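If file names may contain whitespace, a null-delimited find loop avoids the word-splitting problem (this assumes GNU find and bash; the '*.txt' filter is just an example, adjust it to your files):

```shell
# Null-delimited loop: handles spaces and newlines in file names.
find . -type f -name '*.txt' -print0 | while IFS= read -r -d '' f; do
  mv "$f" "$f.old" && iconv -f GB2312 -t UTF-8 < "$f.old" > "$f"
done
```

The .old copies also serve as backups in case a file turns out not to be GB2312 after all.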
I want to extract just the first filename from a remote zip archive without downloading the entire zip. In particular, I'm trying to get the build number of dartium (link to zip file). Since the file is quite large, I don't want to download the entire thing.
If I download the entire thing, unzip -l reports the first file as being: 0 2013-04-07 12:18 dartium-lucid64-inc-21033.0/. I want to get just this filename so I can parse out the 21033 portion as the build number.
I was doing this (total hack):
_url="https://storage.googleapis.com/dartium-archive/continuous/dartium-lucid64.zip"
curl -s $_url | head -c 256 | sed -n "s:.*dartium-lucid64-inc-\([0-9]\+\).*:\1:p"
It was working when I had my shell in ASCII mode, but I recently switched it to UTF-8 and it seems sed is now honoring that, which breaks my script.
I thought about hacking it by doing:
export LANG=
curl -s ...
But that seemed like an even bigger hack.
Is there a better way?
First, you can give curl a byte range so that only the start of the file is downloaded.
Next, use strings to extract the printable text from the binary stream.
Finally, add q after the p flag so sed quits as soon as it prints the first match.
curl -s $_url -r0-256 | strings | sed -n "s:.*dartium-lucid64-inc-\([0-9]\+\).*:\1:p;q"
Or this:
curl -s $_url -r0-256 | strings | sed -n "/dartium-lucid64/{s:.*-\([^-]\+\)\/.*:\1:p;q}"
This should be a bit faster and more reliable. The second form also extracts the full version, including the sub-version, if you need it.
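The sed extraction itself can be sanity-checked without the network by feeding it a canned line (strings only matters when the input is binary; note the \+ escape is a GNU sed extension):

```shell
# Simulate the interesting line from the start of the zip stream.
printf 'dartium-lucid64-inc-21033.0/\n' |
  sed -n "s:.*dartium-lucid64-inc-\([0-9]\+\).*:\1:p;q"
# prints: 21033
```

This also makes it easy to confirm the pattern still matches if the archive's naming scheme ever changes.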