TopoJSON equivalent to ogr2ogr -clipdst?

I'm digging into reprojection, which is often followed by clipping so I can work on just the needed data. I noticed that both ogr2ogr and topojson can be used for reprojection:
# using ogr2ogr (clipping can then follow via ogr2ogr -clipsrc):
ogr2ogr -f 'ESRI Shapefile' -t_srs 'EPSG:...' output.shp input.shp
# or using topojson:
topojson --projection 'd3.geo.albers()' -q 1e5 --id-property=name -o output.json -- input.shp
ogr2ogr provides clipping capabilities on shapefiles:
ogr2ogr -clipsrc $(WEST) $(SOUTH) $(EAST) $(NORTH) output.shp input.shp
But is there a clip function in topojson that I'm not aware of? And if not, how could I crop my TopoJSON data to, say, the bounding box W: 0, N: 55, E: 15, S: 40 (Europe), so I don't have to load JSON for the whole earth?
TopoJSON API Reference.
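One possible workaround, sketched below under the assumption that the topojson command line offers no clipping option of its own: clip the shapefile to the Europe bounding box with ogr2ogr first, then convert only the clipped result to TopoJSON (europe.shp and europe.json are hypothetical names).
# clip to W:0 S:40 E:15 N:55 first (-clipsrc takes xmin ymin xmax ymax),
# then feed only the clipped shapefile to topojson
ogr2ogr -f 'ESRI Shapefile' -clipsrc 0 40 15 55 europe.shp input.shp
topojson --id-property=name -q 1e5 -o europe.json -- europe.shp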

Related

gdal_translate only translating first three bands from .vrt to .tif

I'm trying to translate a set of .tif images to one multiband .tif following this tutorial. I'm not getting any errors but every time I run the script it generates a three-band image when I have a lot more input .tifs than that. I have tried the -r flag and it didn't change anything.
I have:
wdir="/Users/<mydir>"
creoleLA="$wdir/data/s2l2a_tiffs/T15RVP"
dhaka="$wdir/data/s2l2a_tiffs/T45QZG"
easthouston="$wdir/data/s2l2a_tiffs/T15RTN"
kolkata="$wdir/data/s2l2a_tiffs/T45QXF"
murcia="$wdir/data/s2l2a_tiffs/T30SXH"
wuhan="$wdir/data/s2l2a_tiffs/T50RKU"
#=============================================
# Main script
#=============================================
read -p 'ROI: ' roi
indir=${!roi}                     # resolve the ROI name to its tile prefix via indirect expansion
outdir="$wdir/data/s2l2a_tiffs"
# list the per-band GeoTIFFs for this tile into a text file
ls ${indir}*"_B"*.tif > "$outdir/btif_${roi}.txt"
# stack the listed files as separate bands of a virtual raster
gdalbuildvrt -separate -overwrite -input_file_list "$outdir/btif_${roi}.txt" "$outdir/S2L2A_${roi}.vrt"
# translate the VRT into a single multiband GeoTIFF
gdal_translate -strict -ot uint16 "$outdir/S2L2A_${roi}.vrt" "$outdir/S2L2A_${roi}_mb.tif"
rm "$outdir/S2L2A_${roi}.vrt"
rm "$outdir/btif_${roi}.txt"
Where "$outdir/btif_${roi}.txt" is a textile of the GeoTIFF file paths like this:
/Users/<mydir>/data/s2l2a_tiffs/T15RVP_20200831T164849_B01_60m.tif
/Users/<mydir>/data/s2l2a_tiffs/T15RVP_20200831T164849_B02_60m.tif
...
I am processing Sentinel-2 imagery on macOS 11.6.
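A quick sanity check, not from the original thread but a reasonable first step, is to compare the number of files in the band list with the number of bands gdalbuildvrt actually wrote into the VRT before translating:
# how many band files were listed vs. how many bands the VRT reports
wc -l < "$outdir/btif_${roi}.txt"
gdalinfo "$outdir/S2L2A_${roi}.vrt" | grep -c "^Band "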

Find a specific color's coordinates in an image

I would like to find the coordinates of the first appearance of a certain color, for example green, in an image so it can be used in a bash script.
I've been trying to use ImageMagick but can't find a way to solve the problem.
Can this be done with ImageMagick or should I use something else?
Here is one way in ImageMagick using sparse-color:, provided the image is fully opaque. sparse-color: reads out the x,y coordinates and color of every fully opaque pixel; you can then use Unix tools to get the first one.
Create a red and blue test image:
convert -size 10x10 xc:red xc:blue -append x.png
Find the first rgb(0,0,255), i.e. blue-colored, pixel:
convert x.png sparse-color: | tr " " "\n" | grep "srgb(0,0,255)" | head -n 1
Result
0,10,srgb(0,0,255)
Similar results can be achieved using txt: in place of sparse-color:, but the Unix commands for filtering would be a bit different.
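For completeness, a sketch of the txt: variant mentioned above; the grep pattern assumes the blue pixel is enumerated with the hex value #0000FF:
# txt: enumerates every pixel as "x,y: (r,g,b) #HEX name"; take the first blue match
convert x.png txt:- | grep -m 1 "#0000FF"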

How to use imagemagick "convert" to create Google Earth pyramid files

I have a large image that I am using ImageMagick to convert into tiles for use in a Google Earth KML, as explained here:
instructions on image pyramid construction
The idea is to chop up the images into 4 pieces, then 16, then 64, etc.
To keep things simple, I made the image canvas 4096x4096 so that dividing it repeatedly produces equal-size files. The basic command is very simple. For example:
convert large.png -crop 512x512 tiles.png
The issue is that the convert command creates file names sequentially, while Google needs a row-column format. For instance, if there were four output files, the file names should be:
tiles00.png
tiles01.png
tiles10.png
tiles11.png
I brute-forced renaming scripts for up to 64 files, but before doing the 256-file case, I'd like to know if there is a simpler way to generate the file names. I'm using Linux.
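As a stopgap before the answers below, a sketch of a generic rename loop; it assumes convert wrote sequentially numbered files such as tiles-0.png through tiles-63.png and that the number of tiles per row is known:
# hypothetical rename of convert's sequential output to row/column names
ncols=8   # tiles per row in the grid being generated
for f in tiles-*.png; do
    n=${f#tiles-}; n=${n%.png}                        # sequential tile index
    mv "$f" "tiles$((n / ncols))$((n % ncols)).png"   # e.g. tiles-9.png -> tiles11.png
done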
Here is one way in ImageMagick 6 using for loops.
The input image, lena.png, is 256x256. I chose 128x128 tiles, so there will be 2 rows and 2 columns, for a total of four output images.
infile="lena.png"
tx=128
ty=128
ncols=`convert -ping "$infile" -format "%[fx:floor(w/$tx)]" info:`
nrows=`convert -ping "$infile" -format "%[fx:floor(h/$ty)]" info:`
for ((j=0; j<nrows; j++)); do
offy=$((j*ty))
for ((i=0; i<ncols; i++)); do
offx=$((i*tx))
convert lena.png -crop ${tx}x${ty}+${offx}+${offy} +repage lena_tile${j}${i}.png
done
done
The four output tiles are lena_tile00.png, lena_tile01.png, lena_tile10.png and lena_tile11.png.
An alternate, more compact way is to use the -set filename: option with fx calculations to name the files in the image chain.
infile="lena.png"
tx=128
ty=128
ncols=`convert -ping "$infile" -format "%[fx:floor(w/$tx)]" info:`
nrows=`convert -ping "$infile" -format "%[fx:floor(h/$ty)]" info:`
convert "$infile" -crop ${tx}x${ty} -set filename:row_col "%[fx:floor(t/$nrows)]%[fx:mod(t,$ncols)]" "lena_tile%[filename:row_col].png"
See:
https://imagemagick.org/Usage/basics/#set
https://imagemagick.org/script/fx.php
Are you trying to make your own in order to learn about the process? If not, existing tools like dzsave can build complete pyramids for you very quickly in a single command. For example:
$ vipsheader wtc.jpg
wtc.jpg: 10000x10000 uchar, 3 bands, srgb, jpegload
$ /usr/bin/time -f %M:%e vips dzsave wtc.jpg x --layout google
211224:1.52
$ ls -R x | wc
2404 2316 15186
So that's making a Google-style pyramid of about 2,400 tiles in directory x from a 10,000 x 10,000 pixel JPEG image. It takes about 1.5s and 210MB of RAM.
There's a chapter in the manual introducing dzsave:
http://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html

Tile images of different aspect ratios using ImageMagick without gaps

I want to be able to tile together images of different aspect ratios in a way that looks good and avoids as much whitespace between the images as possible.
What I've done so far is rename all the images using a script that changes the image name to the aspect ratio, which makes ImageMagick tile the narrowest images first:
for i in *.jpg; do
    # prefix each file name with its aspect ratio so narrow images sort (and tile) first
    mv "$i" "$(printf '%.4f' "$(echo "scale=4; $(identify -format '%w' "$i") / $(identify -format '%h' "$i")" | bc)")$i"
done
Then I run ImageMagick:
montage -mode concatenate -tile 6x -geometry 250x+10+20 -background black *.jpg out.jpg
Which gives me something like this:
Unfortunately, I want something like this, where there isn't as much vertical space between the images with the smaller aspect ratios and bigger ones:
Anyone have any ideas?
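Not a full answer, but one idea to sketch: build each column with its own montage call so that rows are not padded to the tallest image, then join the finished columns side by side. The col1_/col2_ groupings below are hypothetical and would have to be produced by splitting the renamed files into per-column lists.
# stack each column at natural heights, with no row padding
montage -mode concatenate -tile 1x -background black col1_*.jpg col1.png
montage -mode concatenate -tile 1x -background black col2_*.jpg col2.png
# join the columns horizontally; shorter columns are padded with the background color
convert col1.png col2.png -background black +append out.png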

Detecting mostly empty images using imagemagick

I'd like to use imagemagick or graphicsmagick to detect whether an image has basically no content.
Here is an example:
https://s3-us-west-2.amazonaws.com/idelog/token_page_images/120c6af0-73eb-11e4-9483-4d4827589112_embed.png
I've scoured Fred's ImageMagick scripts, but I can't figure out if there is a way to do this:
http://www.fmwconcepts.com/imagemagick/
The easiest way would be to use -edge detection followed by histogram: & text: output. This will generate a large list of pixel information that can be passed to another process for evaluation.
convert 120c6af0-73eb-11e4-9483-4d4827589112_embed.png \
-edge 1 histogram:text:- | cut -d ' ' -f 4 | sort | uniq -c
The above example will generate a nice report of:
50999 #000000
201 #FFFFFF
As the count of white pixels is less than 1% of the black pixels, I can say the image is empty.
This can probably be simplified by passing fx information to the awk utility.
convert 120c6af0-73eb-11e4-9483-4d4827589112_embed.png \
-format '%[mean] %[max]' info:- | awk '{print $1/$2}'
#=> 0.00684814
If you are talking about the number of opaque pixels vs. the number of transparent pixels, then the following will tell you the percentage of opaque pixels.
convert test.png -alpha extract -format "%[fx:100*mean]\n" info:
39.0626
Or if you want the percentage of transparent pixels, use
convert test.png -alpha extract -format "%[fx:100*(1-mean)]\n" info:
60.9374
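If a single yes/no answer is wanted in a script, a small sketch building on the mean/max ratio above might look like this (the 1% threshold is an assumption, not something given in the answers):
# hypothetical wrapper around the mean/max check: print a verdict instead of a number
convert test.png -format '%[mean] %[max]' info:- | \
    awk '{ if ($1/$2 < 0.01) print "mostly empty"; else print "has content" }'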
