Unable to use Select query while using curl - macos

Out of interest, I am trying to use the curl query below:
Delphi$ curl -o /dev/null -s -w %{time_total} http://localhost:8080/getquery?db=EmpDt&col=HRDt&query=select * from emp where id=1111
but unable to execute it:
[1] 2784
[2] 2785
0.003799invalid option or syntax: 10
[1]- Done curl -o /dev/null -s -w %{time_total} http://localhost:8080/getquery?db=EmpDt
[2]+ Done col=HRDt
Something is not correct here, but I am not able to figure out what. Any help would be really appreciated. Thanks.

In the shell, an unquoted & terminates a command and runs the command to its left in the background; thus your post contains three separate commands run concurrently. Either quote each & individually with a backslash as \&, or surround at least the &s (and usually the whole string) with single quotes 'http://host/q?x&y&z' or double quotes "http://host/q?x&y&z".
? and * are also special in the shell, although not command terminators, and in general must also be quoted, although in your case this becomes less critical after the spaces are fixed (below).
A URL cannot contain a space; it must be encoded as + (preferred in query strings) or %20. Other special characters (here * and =) may not work depending on how your server handles URL parsing, which in turn depends on what your server is, and you didn't give any hint; in that case they too must be percent-encoded. (If you want an actual +, which you don't here, it is encoded as %2B.)
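Putting those points together, one way to write the original command is to quote the fixed part of the URL and let curl percent-encode the query parameters itself (a sketch; it assumes the service behind /getquery accepts ordinary percent-encoding):
# -G appends the --data-urlencode pairs to the URL as a percent-encoded query string
# (the spaces, * and = in the query get encoded for you) instead of sending a POST body.
curl -o /dev/null -s -w '%{time_total}' -G 'http://localhost:8080/getquery' \
  --data-urlencode 'db=EmpDt' \
  --data-urlencode 'col=HRDt' \
  --data-urlencode 'query=select * from emp where id=1111'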

Related

Escape single and double quotes in K8s Lifecycle hook

lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "curl -v -X PUT -d '\"SHUTTING_DOWN\"' http://localhost:8080/v1/info/state"]
I am expecting this will produce a curl url like
curl -v -X PUT -d '"SHUTTING_DOWN"' http://localhost:8080/v1/info/state
However, I am getting it surrounded by extra single quotes, ''"SHUTTING_DOWN"'':
curl -v -X PUT -d ''"SHUTTING_DOWN"'' http://localhost:8080/v1/info/state
Any pointers, where am I going wrong?
I'd suggest getting rid of as many layers of quotes as you can. In the original example you have a layer of quotes from YAML, plus a layer of quotes from the sh -c wrapper. Since you need the HTTP PUT body itself to have both single and double quotes – you need to send the string '"SHUTTING_DOWN"' with both kinds of quotes over the wire – getting rid of as much quoting as you can is helpful.
In both the shell and YAML, the two kinds of quotes behave differently. Backslash escaping only works in double-quoted strings and so you probably need that at the outer layer; then you need single quotes inside the double quotes; and then you need backslash-escaped double quotes inside that.
In YAML specifically, the quotes around strings are usually optional, unless they're required to disambiguate things (forcing 'true' or '12345' to be strings). This lets you get rid of one layer of quoting. You may also find this slightly clearer if you use YAML block style with one list item per line.
command:
- /bin/sh
- -c
- curl -v -X PUT -d "'\"SHUTTING_DOWN\"'" http://localhost:8080/v1/info/state
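To sanity-check that quoting locally, without a cluster, you can let your shell parse the same string and print one resulting argument per line (a quick throwaway check, not part of the manifest):
# printf prints each argument on its own line, i.e. exactly the words curl would receive
# after the shell removes the quotes; the -d value should come out as '"SHUTTING_DOWN"'.
printf '%s\n' curl -v -X PUT -d "'\"SHUTTING_DOWN\"'" http://localhost:8080/v1/info/state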
I might even go one step further here, though. You're not using environment variable expansion, multiple commands, or anything else that requires a shell. That means you don't need the sh -c wrapper. If you remove this, then the only layer of quoting you need is YAML quoting; you don't need to worry about embedding a shell-escaped string inside a YAML-quoted string.
You do need to make sure the quotes are handled correctly. If the string begins with a ' or " then YAML will parse it as a quoted string, and if not then there are no escaping options in an unquoted string. So again you probably need to put the whole thing in a double-quoted string and backslash-escape the double quotes that are part of the value.
Remember that each word needs to go into a separate YAML list item. curl, like many commands, will let you combine options and arguments, so you can have -XPUT as a single argument or -X and PUT as two separate arguments, but -X PUT as a single word will include the space as part of that word and confuse things.
command:
- curl
- -v
- -X
- PUT
- -d
- "'\"SHUTTING_DOWN\"'"
- http://localhost:8080/v1/info/state

How to execute a cURL request which requires uploading a file in Golang?

I have a cURL request as follows.
$(curl --request PUT --upload-file "<path to catalog file on your local machine>" "<presigned URL>")
Let's say that I have to upload a bin/test.txt file with the presigned URL being https://www.someurl.com
I execute the command in my terminal
curl --request PUT --upload-file "bin/test.txt" "https://www.someurl.com" and it works fine.
How do I write a piece of Golang which does the same? I have tried
cmd := exec.Command("curl", "--request", "PUT", "--upload-file", fmt.Sprintf("\"%s\"", catalogPath), fmt.Sprintf("\"%s\"", presignedURL))
err = cmd.Run()
but found no success.
I see one obvious problem preventing that curl call from working properly, one quite possible problem, and another possible one.
The obvious problem is that string quoting — such as in curl … --upload-file "bin/test.txt" … — is interpreted by the shell which executes the command. Quoting — using either double or single quotes — is used to inhibit interpretation of otherwise special characters by the shell; chiefly it's used to prevent the shell from splitting a string into separate "words" on whitespace characters (or runs of them).
The key takeaway is that the command run by the shell, after the shell has fully parsed the command line (and interpreted the quotes), does not "see" these quotes, because they are removed by the shell.
os/exec.Cmd calls the specified program directly and does not "pass it through" the shell. Hence if you include double quotes in the command-line parameters of the program to execute, they are passed to that program unchanged. This means curl would try to find the file named test.txt" located in the directory named "bin — which is most probably not what you expected.
The same applies to the URL.
The second — possible — problem is that your call relies on the current directory of your Go program because you pass a relative path to curl.
This might or might not be a problem but you might check this anyway.
The third problem is that you might want to pass your URL through the "percent escaping" algorithm before passing it to curl.
You might look at PathEscape and QueryEscape functions of the net/url package.
Two pieces of advice follow.
First, I would try very hard not to call out to curl to perform such a ridiculously simple task. Go has excellent support for making HTTP requests (and serving them, FWIW) in its standard library, and PUTting a file is really a no-brainer, with solutions googleable in, like, five minutes.
Second, if, for some reason, you intend to stick with calling curl, consider passing it some options to make it fail loudly on errors — otherwise you're doomed to stay in that "but found no success" situation in your attempts. For instance, consider passing curl the -s and -S command-line options (together).
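For illustration, with the example path and URL from the question, such a command line could look like this (a sketch; --fail is an extra option, not mentioned above, that makes curl exit non-zero on HTTP errors so cmd.Run() can actually see the failure):
# -s silences the progress meter, -S still prints an error message when the transfer
# fails, and --fail turns HTTP 4xx/5xx responses into a non-zero exit status.
curl -sS --fail --request PUT --upload-file bin/test.txt "https://www.someurl.com"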
That's not how you quote shell arguments; that would break if your argument starts or ends with \ or ". The proper way to quote shell arguments on Unix would be:
func quoteshellarg(str string) string {
if strings.Contains(str, "\x00") {
panic("argument contains null bytes, it is impossible to escape null bytes in shell arguments!")
}
return "'" + strings.ReplaceAll(str, "'", "'\\''") + "'"
}
and with that, just
cmd := exec.Command("sh", "-c", "curl --request PUT --upload-file "+quoteshellarg(catalogPath)+" "+quoteshellarg(presignedURL))
... at least that's how to do it on Unix systems (the quoted string has to be parsed by a shell, hence the sh -c). As for how to do it on Windows, it seems nobody knows for sure, not even Microsoft.

Strange characters appearing in bash variable expansion

Trying to do the following on CentOS 7 works as I expect:
pod_in_question=$(curl -u uname:password -k very.cluster.com/api/v1/namespaces/default/pods/ | grep -i '"name": "myapp-' | cut -d '"' -f 4)
echo "$pod_in_question"
curl -u uname:password -k -X DELETE "very.cluster.com/api/v1/namespaces/default/pods/${pod_in_question}"
However, trying the same thing on MacOS (10.12.1) yields:
curl: (3) [globbing] bad range in column 92
When I try to curl the last line with a -g option it substitutes with a malformed name such as: myapp-\\x1b[m\\x1b[Kl1eti\
The echo statement would always execute just fine and show something like myapp-v7454 which I later want to put into the last curl statement. So where are these other characters coming from?
A robust solution - Basic cURL CLI debugging.
This answer has been revised after it was identified that the OP's issue relates to grep applying color output.
There's a proposed answer which explains clearly what the embedded special characters mean, and gives instructions to override the grep behaviour so it does not output color. Certainly this is good practice for grep use in piping. There are, however, a number of best practices that can help diagnose this or a similar issue with cURL and ultimately lead to the most robust solution.
Re-creating the problem
Assuming it's a JSON Content-Type, we use echo {'"name": "myapp-7414"'} to simulate the output from cURL
We filter the text and set a variable with it that we use in a cURL command
We force grep to output color, since by default it doesn't colorize when its output is piped; it only does so when writing to a tty (and only if color is enabled).
Recreation:
myvar=$(echo {'"name": "myapp-7414"'} | grep --color=always -i '"name": "myapp-' | cut -d '"' -f 4)
curl "https://www.google.com/${myvar}"
Output:
curl: (3) [globbing] bad range in column 32
First up:
'{}' are special characters to cURL, period.
The best practise for URL syntax in cURL:
If Variable Expansion is required:
Apply the -g switch to disable potential globbing done by cURL
Otherwise:
Use $variable as part of a "quoted" url string, instead of ${variable}
Second: In addition to -g, we add --libcurl /tmp/libcurl so we can get some insight into what cURL is seeing.
   Recreation with -g and --libcurl:
curl -g --libcurl /tmp/libcurl "https://www.google.com/${myvar}"
Output:
<p>Your client has issued a malformed or illegal request <ins>That’s all we know.
Perfect, at least now everything is getting to the server and back! Let's see what cURL sent out to the server:
cat /tmp/libcurl
Sure enough, we find this line (note the embedded escape sequence):
curl_easy_setopt(hnd, CURLOPT_URL, "https://www.google.com/myapp-\033[m7414");
So we know that:
The shell is doing something strange with our variable.
cURL knows not to try to glob once we send the -g switch. That way, if there is an error with the shell variable, we can actually see what it is. We shouldn't be debugging a globbing error if we're not trying to use URL ranges.
The special characters are colors. They represent the --color=always that we added to simulate the OPs environment.
At this point, since it looks like we're working with JSON data, why not just use a widely available, high-performance JSON parsing tool? That has a number of benefits, including:
Not relying on any environment that could affect string filtering
Can request the data we want (aka. "name")
The app name "myapp" can change and we won't have to re-write the code to retrieve it.
It's cleaner and accounts for things I haven't considered yet.
If we used jq, for example (while we're at it, we don't need the -g switch, because we don't need '{}' for the variable and we're already double-quoting the URL):
myvar=$(echo {'"name": "myapp-7414"'} | jq -r .name)
curl --libcurl /tmp/libcurl "https://www.google.com/$myvar"
Now we get:
<p>The requested URL /myapp-7414 was not found on this server. That’s all we know.
Great. It's all working now. Obviously the test URL here, www.google.com, is not going to know what myapp-7414 was.
So we've gone from:
Globbing bad range, to:
Malformed URL, to:
URL not found on server.
We could also, as suggested elsewhere, change the grep output by overriding it with --color=never (as I have noted: if grep has to be used, --color=never is a good practice when piping strings, period). However, given the portability issues already experienced because of string filtering, and the fact that we are already handed structured data on a plate that can be parsed reliably, the more robust solution is to do just that, if possible.
The substitution you showed in the last part looks like one of your calls injected ANSI escape sequences. It's possible that grep isn't detecting that its output is not a TTY and is colorizing it anyway.
On a terminal that supports ANSI escape sequences, your particular codes might not be visible. The codes ^[[m^[[K reset the character attributes and clear to the end of the line. That's why you thought the echo command proved your data was correct.
You can examine the raw data with:
echo "$pod_in_question" | hexdump -C
And you should see there are other characters in there which did not appear in your terminal before. When you put these "invisible" codes into the URL, curl tries to encode them and then fails when it encounters a control character (ESC).
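If you want to see what such a sequence looks like in hexdump output without touching the cluster, you can fabricate one locally with bash's printf (purely illustrative; the 1b bytes are the ESC characters):
# \033 is ESC; this reproduces a grep-colorized fragment similar to the one in the question.
printf 'myapp-\033[m\033[K7414\n' | hexdump -C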
The solution is to add the argument --color=never to your grep call, which will disable colorization.
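Applied to the original commands from the question, only the grep call changes:
# --color=never guarantees grep emits no ANSI escape sequences into the captured value.
pod_in_question=$(curl -u uname:password -k very.cluster.com/api/v1/namespaces/default/pods/ | grep --color=never -i '"name": "myapp-' | cut -d '"' -f 4)
curl -u uname:password -k -X DELETE "very.cluster.com/api/v1/namespaces/default/pods/${pod_in_question}"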

Append bash parameters and pass forward to other script

I need to pass the original parameters further on, and I also want to add some others. Something like this:
#!/bin/bash
params="-D FOREGROUND"
params+=" -c Include conf/dev.conf"
/usr/local/apache/bin/apachectl $params "$@"
The code above doesn't work as expected if params contains two or more parameters; it is treated as one parameter.
The code in your example should work if the following command is valid when executed at the command line written exactly like this :
/usr/local/apache/bin/apachectl -D FOREGROUND -c Include conf/dev.conf "$@"
A quick web search leads me to think that what you want is this (notice the additional double quotes) :
/usr/local/apache/bin/apachectl -D FOREGROUND -c "Include conf/dev.conf" "$@"
Here is how to achieve that simply and reliably with arrays, in a way that sidesteps "quoting inside quotes" issues:
#!/bin/bash
declare -a params=()
params+=(-D FOREGROUND)
params+=(-c "Include conf/dev.conf")
/usr/local/apache/bin/apachectl "${params[@]}" "$@"
The params array contains 4 strings ("-D", "FOREGROUND", "-c" and "Include conf/dev.conf"). The array expansion ("${params[@]}"; note that the double quotes are important here) expands them to these 4 strings, as if you had written them with double quotes around them (i.e. without further word splitting).
Using arrays with this kind of expansion is a flexible and reliable way to build commands and then execute them with a simple expansion.
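A quick way to see those word boundaries is to print one array element per line (a throwaway check, not part of the script):
# Prints exactly four lines: -D, FOREGROUND, -c, and Include conf/dev.conf.
printf '%s\n' "${params[@]}"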
If the issue is the space in the parameter "-c Include conf/dev.conf", then you could just use a backslash to preserve the space character:
params+="-c Include\ conf/dev.conf"

Curl and wget: why isn't the GET parameter used?

I am trying to fetch data from this page using wget and curl in PHP. As you can see by using your browser, the default result is 20 items, but by setting the GET parameter iip to a number x, I can fetch x items, i.e. http://www.example.com/foo?a=26033&iip=100
The problem is that the iip parameter only works in browsers. If I try to fetch the last link using wget or curl, only 20 items are returned. Why? Try this at the command-line:
curl -O http://www.example.com/foo?a=26033&iip=100
wget http://www.example.com/foo?a=26033&iip=100
Why can't I use the GET parameter iip?
Try adding quotes:
curl -O 'http://www.objektvision.se/annonsorer?ai=26033&iip=100'
wget 'http://www.objektvision.se/annonsorer?ai=26033&iip=100'
The & has special functionality on the command line which is likely causing the issues.
Try quoting the argument. At least in cmd, & is used to delimit two commands that are run individually.
You'll have to enclose your URL in either " or ', since the & has a special meaning in shellscript... That'll give you:
curl -O "http://www.objektvision.se/annonsorer?ai=26033&iip=100"
wget "http://www.objektvision.se/annonsorer?ai=26033&iip=100"
& is a special character in the shell. Just escape it like this: \&
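For example, with the URL from the other answers (only the ampersand needs the backslash here):
# The backslash keeps the shell from treating & as the background operator;
# quoting the whole URL, as in the other answers, works just as well.
curl -O http://www.objektvision.se/annonsorer?ai=26033\&iip=100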
