Google speech recognition API returns null result

I am trying to use Google's Speech Recognition API from a shell command, but I am having issues.
My shell script contains the following code:
arecord -D plughw:1,0 -q -f cd -t wav -R 16000 | flac - -f --best --sample-rate=16000 -s -o test.flac
wget -q -U "Mozilla/5.0" --post-file test.flac --header "Content-Type: audio/x-flac; rate=16000" -O - "http://www.google.com/speech-api/v2/recognize?client=chromium&lang=en-US&key=MyKey" > stt.txt
I have validated that the test.flac file does contain my recording, and I have confirmed that the Google server is indeed receiving my requests. However, the web server returns a null result.

The syntax I used to create my file was wrong. It should have been the following:
arecord -D plughw:1,0 -q -t wav -r 16000 file.wav
flac -f --sample-rate=16000 -s file.wav

Use Audacity (http://www.audacityteam.org/) to double-check that your file is 16-bit PCM and mono.
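If you'd rather verify this from the command line than open Audacity, metaflac (which ships alongside the flac encoder) can print the relevant stream parameters; for this pipeline you'd want to see 16000, 1, and 16 (sample rate, channels, bits per sample). A quick sketch, assuming the corrected flac command above produced file.flac:
# Print sample rate, channel count, and bits per sample of the encoded file
metaflac --show-sample-rate --show-channels --show-bps file.flac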

Related

Bash Script Using API for Online Image Vectorizing

I am a user of vectorizer.io services, but I now need to batch convert, which requires a script that uses their API. Their customer service sent me a script (put together by other users with ChatGPT), but it isn't working for me (API credentials were removed before posting). I receive this error:
curl: (26) Failed to open/read local data from file/application
Script:
#!/bin/bash
# Set the api credits code: found in menu / "email address button" / "Api Settings" (enable 'pay for additional credits' if you want to use more than the included 100 api calls)
API_CREDITS_CODE=""
# Set the directory to search for GIF files
DIRECTORY="/Users/User/Desktop/folderofimages"
# Find all GIF files in the specified directory
for file in $(find "$DIRECTORY" -name "*.gif"); do
# Extract the filename without the file extension
filename="${file%.*}"
# Call the curl command, replacing the X-CREDITS-CODE header with the api credits key and the input and output filenames
curl --http1.1 -H "Expect:" --header "X-CREDITS-CODE: $API_CREDITS_CODE" "https://api.vectorizer.io/v4.0/vectorize" -F "image=@$file" -F "format=svg" -F "colors=0" -F "model=auto" -F "algorithm=auto" -F "details=auto" -F "antialiasing=off" -F "minarea=5" -F "colormergefactor=5" -F "unit=auto" -F "width=0" -F "height=0" -F "roundness=default" -F "palette=" -vvv -o "${filename}.svg"
done
It would also be ideal if the script were to find and use images of all formats, not just GIF. That is not a necessity though.
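No accepted fix is shown here, but a curl (26) error means curl could not read the file it was asked to attach, and the unquoted for file in $(find ...) loop will mangle any path containing spaces, which is a common way to hit exactly that. A hedged sketch, not a verified fix, that also matches a few more image extensions (the extension list is an arbitrary choice, and most of the -F tuning options from the original call are omitted for brevity):
#!/bin/bash
# Hypothetical rewrite of the loop: find -print0 plus read -d '' keeps
# paths with spaces intact, and several extensions are matched at once.
API_CREDITS_CODE=""
DIRECTORY="/Users/User/Desktop/folderofimages"
find "$DIRECTORY" \( -name "*.gif" -o -name "*.png" -o -name "*.jpg" -o -name "*.jpeg" \) -print0 |
while IFS= read -r -d '' file; do
    filename="${file%.*}"
    # @ tells curl to attach the file itself rather than send a literal string
    curl --http1.1 -H "Expect:" --header "X-CREDITS-CODE: $API_CREDITS_CODE" \
        "https://api.vectorizer.io/v4.0/vectorize" \
        -F "image=@$file" -F "format=svg" \
        -o "${filename}.svg"
done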

How to upload a file to a slack channel using a bot

I have a Slack bot, and its token (starting with xoxb) is used to upload a file to a channel.
I am using the format below:
curl -F token="${SLACK_TOKEN}" -F file=e2e.sh -F channel="${SLACK_CHANNEL}" -F as_user=true https://slack.com/api/files.upload
This throws
{"ok":false,"error":"no_file_data"}
You are missing the @ in your file=e2e.sh argument to let curl know you want to transmit a file. The following should do the trick:
curl \
-F token="${SLACK_TOKEN}" \
-F file=@e2e.sh \
-F channel="${SLACK_CHANNEL}" \
-F as_user=true \
https://slack.com/api/files.upload
P.S. Breaking a long curl command into multiple lines can help you see things more clearly ;)
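For context on why the prefix matters: with curl's -F option, @file attaches the file itself as an upload, <file sends the file's contents as the text value of the field, and a plain value is sent literally. A minimal illustration (the URL is only a placeholder):
# Attach e2e.sh as an uploaded file
curl -F "file=@e2e.sh" https://example.com/upload
# Send the contents of e2e.sh as the text value of a form field
curl -F "notes=<e2e.sh" https://example.com/upload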

OSX equivalent of piping sound to Linux's aplay

On Ubuntu I am able to use aplay to play sound generated live by a script, by piping the script's output to aplay's stdin:
./generate_sound.py | aplay -r 2000 -c2 -f MU_LAW
cat sample.wav | aplay
Is there a way to do the same from the terminal in OSX? I think afplay doesn't support this...
Maybe someone knows another OSX command-line sound player that would do the trick?
I had high hopes for redirection/piping, but afplay /dev/stdin <<< $(generate_sound.py) failed for all the formats I tried. Sadly, afplay doesn't let you specify the format, so it tries to sniff it instead, which probably involves seeking, and seeking doesn't work with pipes.
I think you'd better find another command-line player. sox seems like a good candidate, and it's installable via Homebrew: brew install sox. You can pipe data to it like so:
cat whatever.raw | play -t raw -e floating-point -b 32 -c 2 -r 44100 -
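Mapped onto the stream from the original question (2000 Hz, two channels, MU_LAW, i.e. 8-bit mu-law), the same idea would look roughly like this; untested, but the flags mirror the aplay invocation above:
./generate_sound.py | play -t raw -e mu-law -b 8 -c 2 -r 2000 -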
To listen to an FM station on a Mac:
rtl_fm -f 95.3e6 -M wbfm -s 200000 -r 48000 - | aplay -r 48k -f S16_LE
To record for 10 seconds:
export AUDIOSAMPLERATE=48000
export SAMPLERATE=200000
export FREQ="127.2m"
rtl_fm -f $FREQ -M am -s $SAMPLERATE -r $AUDIOSAMPLERATE | sox -r $AUDIOSAMPLERATE -t raw -e signed-integer -b 16 -c 1 -V1 - FILENAME.wav &
sleep 10
killall rtl_fm
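Since aplay isn't available on macOS, the FM one-liner above presumably needs the same sox substitution. rtl_fm writes 16-bit signed mono samples (which is what the aplay -r 48k -f S16_LE side expects), so a hedged equivalent would be something like:
rtl_fm -f 95.3e6 -M wbfm -s 200000 -r 48000 - | play -t raw -e signed-integer -b 16 -c 1 -r 48000 -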

How to download multiple URLs using wget in a single command?

I am using the following command to download a single webpage with all its images and JS using wget on Windows 7:
wget -E -H -k -K -p -e robots=off -P /Downloads/ http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html
It downloads the HTML as required, but when I tried to pass it a text file containing a list of 3 URLs to download, it didn't give any output. Below is the command I am using:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt -B 'http://'
I tried this also:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt
In this case the URLs in the text file already had http:// prepended.
list.txt contains the list of 3 URLs which I need to download using a single command. Please help me resolve this issue.
From man wget:
2 Invoking
By default, Wget is very simple to invoke. The basic syntax is:
wget [option]... [URL]...
So, just use multiple URLs:
wget URL1 URL2
Or using the links from comments:
$ cat list.txt
http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html
http://www.verizonwireless.com/smartphones-2.shtml
http://www.att.com/shop/wireless/devices/smartphones.html
and your command line:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt
works as expected.
First create a text file with the URLs that you need to download.
e.g. download.txt
download.txt will look as below:
http://www.google.com
http://www.yahoo.com
Then use the command wget -i download.txt to download the files. You can add many URLs to the text file.
If you have a list of URLs separated on multiple lines like this:
http://example.com/a
http://example.com/b
http://example.com/c
but you don't want to create a file and point wget to it, you can do this:
wget -i - <<< 'http://example.com/a
http://example.com/b
http://example.com/c'
Pedantic version:
for x in {'url1','url2'}; do wget $x; done
The advantage of this is that you can treat it as a single wget command for multiple URLs.
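If the URLs live in a file but you want more control than wget -i offers, feeding the list to wget through xargs is another common approach; the -P 4 below (four parallel downloads) is just an arbitrary illustration, and wget's directory prefix is spelled out long-form to avoid clashing visually with xargs' -P:
# One wget invocation per URL, up to four running at a time
xargs -n 1 -P 4 wget -E -H -k -K -p -e robots=off --directory-prefix=/Downloads < list.txt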

curl upload command using bash & terminal

When I use a bash script to upload files to Dropbox, it works fine, but when I manually run the command from the command line it does not work.
I'm thinking it might be the & in the URL... I'm not sure.
Bash code:
CURL_BIN="/usr/bin/curl"
#Note: This option explicitly allows curl to perform "insecure" SSL connections and transfers.
#CURL_ACCEPT_CERTIFICATES="-k"
CURL_PARAMETERS="--progress-bar"
APPKEY="zrwv8z3bycfk3m8"
OAUTH_ACCESS_TOKEN="aaaaaaaa"
APPSECRET="aaaaaaaaaa"
OAUTH_ACCESS_TOKEN_SECRET="aaaaaaaaa"
ACCESS_LEVEL="dropbox"
API_UPLOAD_URL="https://api-content.dropbox.com/1/files_put"
RESPONSE_FILE="temp2.txt"
FILE_SRC="temp.txt"
$CURL_BIN $CURL_ACCEPT_CERTIFICATES $CURL_PARAMETERS -v -i -o "$RESPONSE_FILE" --upload-file "$FILE_SRC" "$API_UPLOAD_URL/$ACCESS_LEVEL/$FILE_DST?oauth_consumer_key=$APPKEY&oauth_token=$OAUTH_ACCESS_TOKEN&oauth_signature_method=PLAINTEXT&oauth_signature=$APPSECRET%26$OAUTH_ACCESS_TOKEN_SECRET"
Manual code:
curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa
curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt "https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa"
The solution is to add the inverted commas ( " ) around the URL. Without them, the shell treats each & in the query string as an instruction to run the preceding command in the background, so everything after the first & never reaches curl.
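To see the shell behaviour the quotes are guarding against, compare (the URL is just a placeholder):
# Unquoted: the shell backgrounds curl at the first &, then parses b=2 as a
# variable assignment, so curl only ever sees https://example.com/api?a=1
curl https://example.com/api?a=1&b=2
# Quoted: the whole URL, query string included, reaches curl as one argument
curl "https://example.com/api?a=1&b=2"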
