Is there a way to make the JSON output of curl commands pretty-printed by default? That is, without appending ?pretty=true to the curl URL, is it possible to display the output pretty-printed every time?
I was able to accomplish this by adding a new alias to my .bashrc (or .bash_profile on a Mac):
alias pp='python -mjson.tool'
Then, after reloading the .bashrc / .bash_profile configuration by opening a new terminal or by running
$ source ~/.bashrc
you can pipe curl output to the 'pp' alias as follows:
$ curl -XGET http://localhost:9200/_search | pp
Source: http://ruslanspivak.com/2010/10/12/pretty-print-json-from-the-command-line/
Elasticsearch has no such permanent setting, and I wouldn't want one. Quite often I see developers forget to undo such settings in production, and overall product performance degrades as a result. A similar example is leaving DEBUG logging enabled, which is a very popular performance killer.
You have plenty of tools to ease your development:
RestClient Firefox plugin
ElasticSearch Head - an excellent ES admin tool which even pretty-prints your input
plus the earlier-mentioned ElasticShell.
However, if you really want to do it in curl, here is a simple trick. Run this in your bash shell or script:
curl() { command curl "$@" | python -mjson.tool ; }
And you use curl as before:
curl http://localhost:9200/
To undo:
unset -f curl
Of course, it would be better to name the function ppcurl if you don't like the above :)
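If you'd rather not shadow curl at all, a separate wrapper is another option. A minimal sketch (the ppcurl name is just the suggestion above; python3 is assumed to be on the PATH):

```shell
# ppcurl: fetch with curl, then pretty-print the JSON body.
# -s silences the progress meter so only the JSON reaches json.tool.
ppcurl() { command curl -s "$@" | python3 -m json.tool ; }

# The pretty-printing stage on its own, without a network call:
echo '{"took":3,"hits":{"total":1}}' | python3 -m json.tool
```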
In most cases you use the REST API from other clients (jQuery, PHP, Perl, Ruby...), and these clients don't need pretty-printed responses.
So I assume that for 99% of requests you don't want to slow things down by pretty-rendering JSON.
IMHO, you only need pretty-printing when debugging or in dev mode.
An option could be to have it as an elasticsearch property in elasticsearch.yml file. Open an issue for it?
You can easily use jq instead:
curl --location --request GET 'http://localhost:9200/_cluster/health' | jq .
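Beyond pretty-printing, jq can also pull individual fields out of the response. A small sketch with an inline document standing in for the cluster-health output (field names are illustrative):

```shell
# jq '.' pretty-prints the whole document; a filter like '.status'
# extracts a single field, and -r strips the surrounding quotes.
echo '{"cluster_name":"es","status":"green","number_of_nodes":3}' | jq -r '.status'
# → green
```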
Related
I'm trying to set up a Bash Script (shl) that will use curl to download a file.
I really can't find a good bash script tutorial. I need assistance.
I've tried testing it with a Windows bat file that has something like:
: curl ${url} > file name [trying to see it work from Windows]
and getting
Protocol "https" not supported or disabled in libcurl
The URL that I can use to extract the file looks something like this (example only):
https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b
I'm trying to put the file here: \aiken\ProdITFileTrans\cofc_aw_disbursement.csv
so my bat file looks like:
#Echo On
curl --verbose -g ${https://bigstate.academicworks.com/api/v1/disbursements.csv?per_page=3&fields=id,disbursement_amount,portfolio_name,user_uid,user_display_name,portfolio_code,category_name&token=fcc28431bcb6771437861378aefe4a4474dbf9e503c78fd9a4db05924600c03b} >\\aiken\ProdITFileTrans\cofc_aw_disbursement.csv
PAUSE
Again, the goal is to take a working version of this call and put it in a Bash shell script that I can call from ATOMIC/UC4.
Once I have the bash script I want to be able to do a daily download of my file.
Well, perhaps something like:
#!/bin/bash
curl --verbose -g yourlongurlhere -o /path/to/your/file.csv
Make the file executable (chmod +x).
EDIT: check Advanced Bash Scripting Guide for tons of examples. It covers just about everything.
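For the daily download mentioned in the question, the call can live in a small script scheduled by cron, if ATOMIC/UC4 isn't handling the schedule itself. This is only a sketch: the URL and output path are placeholders, not the real token URL or UNC path from the question.

```shell
#!/bin/bash
# download_disbursements.sh -- placeholder URL and output path.
# The single quotes matter: an unquoted & would background the command
# and truncate the query string; ${url} is shell syntax, not part of curl.
url='https://example.com/api/v1/disbursements.csv?per_page=3&token=PLACEHOLDER'
curl --fail --silent --show-error -g "$url" -o /tmp/cofc_aw_disbursement.csv
```

A crontab entry such as 0 6 * * * /path/to/download_disbursements.sh would then fetch the file each morning.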
OK, this question has been asked before (e.g. Is there a /dev/null on Windows?), but I'm trying to find an answer that works, and all of the prior answers just say "change the command to point to NUL".
If I have a curl request (or whatever) which someone ran on a Unix/Mac which includes this:
-o /dev/null
it will throw an error if I try to run it as-is on my Windows box. Therefore, I need to change the command by replacing that with:
-o NUL
My question is, is there something I can do so that I can run the original curl request without needing to make that change?
IOW, can I create a symlink or something similar, so that I don't need to change the curl statement? Basically so I can use the *nix syntax on a Windows box?
Before someone asks "how much hassle is it to change the curl?": I'm running hundreds of curls a day, often ones which were originally run on a *nix box. Also, if I change to the Windows syntax, then when someone tries to run the command on a *nix box, they run into issues.
I am using a custom editor for an embedded systems project. For source code, I would like to get ctags working from the command line and giving me search results on the command line. Another option is to work with cscope in non-interactive mode so I can integrate it into my editor at a later date. I did some initial web searching but couldn't find anything relevant.
Does anyone know how to use either of these tools from the command line? Any tutorial?
Using the readtags program (built from readtags.c, shipped as part of the ctags implementation), you can search for a tag in a given tags file.
Let me show an example:
$ ctags -R main
$ readtags -t tags kindDefinition
kindDefinition main/types.h /^typedef struct sKindDefinition kindDefinition;$/
$ readtags -t tags -e kindDefinition
kindDefinition main/types.h /^typedef struct sKindDefinition kindDefinition;$/;" kind:t typeref:struct:sKindDefinition
I am having issues pulling from a YAML config file:
Fatal error: while parsing a block mapping; expected <block end>, but found block entry
While there are plenty of online YAML validators, which I have tried and have helped, I'd like to validate my YAML files from the command line and integrate this into my continuous integration pipeline.
With a basic Ruby installation this should work:
ruby -ryaml -e "p YAML.load(STDIN.read)" < data.yaml
Python version (thanks @Murphy):
pip install pyyaml
python -c 'import yaml, sys; print(yaml.safe_load(sys.stdin))' < data.yaml
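Both one-liners exit non-zero on invalid input, which is what a CI step needs. A sketch that reproduces the "expected <block end>, but found block entry" error from the question (the file path is illustrative, and PyYAML is assumed to be installed):

```shell
# A mapping entry followed by a sequence entry at the same level is a
# classic trigger for "expected <block end>, but found block entry".
printf 'a: 1\n- b\n' > /tmp/bad.yaml
if python3 -c 'import yaml, sys; yaml.safe_load(sys.stdin)' < /tmp/bad.yaml 2>/dev/null; then
  echo valid
else
  echo invalid
fi
# → invalid
```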
You could use yamllint. It's available in Homebrew, etc. It can be used for syntax validation as well as for linting.
Given that you have Perl installed on the server you are working on, with some of the basic YAML modules, you can use:
perl -MYAML -e 'YAML::LoadFile("./file.yaml")'
It should be noted that this will be strict in its interpretation of the file, but useful.
To correct your .yaml files I recommend the tool yamllint. It can be launched easily from the local console.
The package yamllint is available for all major operating systems.
It's installable from the system's package sources. (e.g. sudo apt-get install yamllint).
See documentation for quick start and installation.
My preferred way is:
yamllint -d "{extends: default, rules: {quoted-strings: enable}}" .
Since I really want to catch quoting errors, e.g.:
validate: bash -c ' ' \""
This is valid YAML, since YAML will just quote the string and turn it into:
validate: "bash -c ' ' \\\"\""
even though there is clearly a quote missing at the beginning of the validate command.
A normal YAML checker will not detect this, and yamllint will not detect it in its default configuration either, so turn on the quoted-strings checker.
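The same settings can be kept in a .yamllint file at the repository root instead of being passed with -d each time; a sketch of the equivalent configuration:

```yaml
# .yamllint -- equivalent of -d "{extends: default, rules: {quoted-strings: enable}}"
extends: default

rules:
  quoted-strings: enable
```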
If you have no interpreter installed in your environment but still have curl, you can use an online linter such as Lint-Trilogy:
curl -X POST --data "data=$(cat myfile.yml)" https://www.lint-trilogy.com/lint/yaml/json
It delivers the validation result, including error descriptions (if any), as JSON or CSV or, where sufficient, as plain-text true or false.
It's available as a Dockerfile, too. So if you often need a REST-based linter, perhaps in a CI/CD pipeline, it may be handy to host your own instance.
Alternatively, install the (free) Eclipse IDE and the YEdit YAML editor, and view your YAML with syntax highlighting, error flags, and outline views. The one-time setup cost works pretty well for me.
I aim to filter my Google results right at the terminal so that I get only Google's definitions.
I am trying to run the following in Mac's terminal:
open 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=vim'
A similar command for Firefox is:
open 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=firefox-a'
Which client can you use to get Google's HTML page on standard output?
To use Google search outside their web interface, you're almost certainly better off using their API.
However, I think curl is the right tool for downloading a web page, if that's what you have to do (and it probably isn't).
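For completeness, the curl equivalent is a one-liner. Note the quotes around the URL: an unquoted & would background the command at the first ampersand (the client=vim parameter is copied from the question):

```shell
# -s hides the progress meter; the page's HTML goes to stdout.
curl -s 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8&client=vim'
```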
"GET"
GET 'http://www.google.com/search?q=define:cars&ie=utf-8&oe=utf-8:en-GB:official&client=vim'
See also "HEAD".
The command is installable on GNU/Linux:
[elcuco@pinky ~]$ rpm -qf `which GET`
perl-libwww-perl-5.808-2mdv2008.1
In theory you could also use wget and write the page to stdout with something like:
wget http://www.google.com -O - --quiet
However, I cannot get it to work with this example URL.