Can geoserver fetch tiles in EPSG:32633 from one server, convert and serve EPSG:4326 locally? - geoserver

The maps I would like to use serve tiles in EPSG:32633:
https://geodata.npolar.no/arcgis/rest/services/Basisdata/NP_Basiskart_Svalbard_WMTS_25833/MapServer/WMTS/1.0.0/WMTSCapabilities.xml
I need to access those maps as tiles at, for example, http://localhost/svalbard/z/x/y, served in EPSG:4326.
Can GeoServer do the job? If yes, please point me to a configuration example.
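GeoServer can cascade a remote WMS/WMTS as a store and reproject on output, and its built-in GeoWebCache can expose an EPSG:4326 gridset, so this setup is plausible. As a rough sketch of the tile math involved, here is how a slippy-map z/x/y index translates to an EPSG:4326 bounding box for a WMS GetMap request against the cascaded layer (the localhost URL and layer name below are hypothetical placeholders, not actual configuration):

```python
import math

def tile_to_bbox_4326(z, x, y):
    """Convert slippy-map z/x/y tile indices (Web-Mercator tiling
    scheme) to an EPSG:4326 lon/lat bounding box."""
    n = 2 ** z
    lon_min = x / n * 360.0 - 180.0
    lon_max = (x + 1) / n * 360.0 - 180.0
    lat_max = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / n))))
    lat_min = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * (y + 1) / n))))
    return lon_min, lat_min, lon_max, lat_max

def getmap_url(z, x, y, layer="svalbard:basemap"):
    """Build a GetMap request for one 256x256 tile in EPSG:4326.
    Endpoint and layer name are hypothetical."""
    bbox = ",".join(f"{v:.6f}" for v in tile_to_bbox_4326(z, x, y))
    return ("http://localhost:8080/geoserver/wms?service=WMS&version=1.1.1"
            f"&request=GetMap&layers={layer}&srs=EPSG:4326"
            f"&bbox={bbox}&width=256&height=256&format=image/png")
```

In practice you would add the remote WMTS as a cascaded store in the GeoServer admin UI, publish the layer, and let GeoWebCache handle the z/x/y addressing; the math above is only what happens under the hood.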

Related

GCP Vertex AI AutoML for images requires the image data to also reside in us-central1

We have our image data in us-west1, but since AutoML is only available in us-central1, we are not able to use the image data in us-west1. We do not want to copy the data to us-central1. Is there any other option to use Vertex AI AutoML without moving the data to us-central1?
Thanks
Based on the documentation [1] that I went through, and since AutoML for image data is solely available in us-central1 (Iowa), there is no option other than copying or moving the data to us-central1.
[1] - https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability

Seeking satellite image API

I’m seeking a satellite image API which I can call with lat/long, date, resolution, among other things, and pull a satellite image to analyze.
What are some go-to APIs for this purpose? I’m willing to pay as I anticipate heavy usage.
ESRI has many public satellite images consumable through their ArcGIS API for JavaScript.
Here's an example: https://developers.arcgis.com/javascript/latest/sample-code/widgets-basemapgallery/index.html
As long as you only need to display layers from public services and add some graphics on top (polygons, points, etc.), you can use it for free. Otherwise, if you are planning to create your own maps, you should pay or find another open-source GIS solution.
Check out a similar post on the same topic: https://gis.stackexchange.com/questions/9687/seeking-satellite-imagery-providers
A list of satellite APIs can be found here: https://www.programmableweb.com/category/satellites/api

GeoServer: Set sequence of layers to be rendered in a WMS

How do I configure the sequence of layers in my GeoServer workspace so that when I read the layers via WMS in, say, Tableau (below) or QGIS, the list of layers available to be checked is in the sequence I need?
In ArcGIS Server, this can be easily set by aligning the layers in ArcMap before publishing it to the ArcGIS Server.
However, I can't seem to find such a configuration in GeoServer Admin.
Thanks!
The ordering of the layers is entirely under the control of the client. GeoServer just provides a list of layers that the client can pick from and display in any order it likes.
If you need to combine certain layers in a specific order you could use a LayerGroup.
Got a solution for this.
I found out that the order in which layers appear on the client can be controlled by prefixing each layer's "Name" field with a number.
E.g., (1 Coastline), (2 Waterbodies), (3 Roads) will display the coastline first, followed by the waterbodies and the roads.

Using varnish to cache batched backend operations

I'm using Mapnik to generate map tiles (PNG). I have a url where tiles can be generated on-the-fly individually:
http://tiles.example.com/dynamic/MAPID/ZOOM/X/Y.png
Each map tile is 256x256 pixels.
However, generating tiles individually is expensive. It's much more efficient to generate them in batches (i.e. generate one large PNG and split it into smaller files). I have a URL that can do that too:
http://tiles.example.com/dynamic/MAPID
which batch-generates all the tiles for a map, saves them to disk, and returns "OK" when complete; from there they are available statically at:
http://tiles.example.com/static/MAPID/ZOOM/X/Y.png
which is NGINX serving raw files.
Is it possible to configure Varnish to trigger a batch generation, wait for it to complete, then cache and serve individual tiles until they expire (in my case, 5 minutes)?
Currently Varnish 3 doesn't support backend fetching; this feature should be implemented in Varnish 4. Instead, I would suggest triggering the batch generations as cron jobs; Varnish would then fetch and cache each tile when the first user requests it.
I would also recommend doing the generation in a separate folder/file location and moving the files into place only when they are ready; that would spare you the hassle of people hitting your server during the generation.
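The cron-plus-staging-folder idea can be sketched as follows: generate into a scratch directory, then rename the finished directory into the NGINX-served location so clients never see a half-written tile set. The paths and the function shape are illustrative assumptions, on the premise that the batch generator can hand back tile bytes:

```python
import os
import shutil
import tempfile

def publish_tiles(map_id, tiles, static_root="/var/tiles/static"):
    """Batch-write tiles into a scratch directory under static_root,
    then swap the finished directory into place in one rename.
    `tiles` is an iterable of ((zoom, x, y), png_bytes) pairs."""
    staging = tempfile.mkdtemp(prefix=f"{map_id}.", dir=static_root)
    for (z, x, y), png in tiles:
        tile_dir = os.path.join(staging, str(z), str(x))
        os.makedirs(tile_dir, exist_ok=True)
        with open(os.path.join(tile_dir, f"{y}.png"), "wb") as f:
            f.write(png)
    final = os.path.join(static_root, map_id)
    old = final + ".old"
    if os.path.isdir(final):
        os.rename(final, old)   # keep the previous set until the swap
    os.rename(staging, final)   # atomic on the same filesystem
    shutil.rmtree(old, ignore_errors=True)
```

Since both renames happen on the same filesystem, readers either see the old tile set or the complete new one, never a partial mix.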

Store Images to display in SOLR search results

I have built a SOLR index containing image thumbnail URLs, and I want to render an image alongside each search result. The problem is that those images can run into the millions, and I think storing them in the index as binary data would make the index humongous.
I am seeking guidance on how to efficiently store those images after fetching them from the URLs. Should I use the plain file system and have them served by Tomcat, or should I use a JCR repository like Apache Jackrabbit?
Any guidance would be greatly appreciated.
Thank You.
I would evaluate the actual requirements before deciding how to persist the images.
Do you require versioning?
Are you planning to store only the images, or additional metadata as well?
Do you have any requirements for horizontal scaling?
Do you require any image processing or scaling?
Do you need access to the image metadata?
Do you require additional tooling for managing the images?
Are you willing to invest time in learning an additional technology?
Storing the images on the file system and making them available through an image spooler implementation is the simplest way to persist them.
But if you identify some of the above-mentioned requirements (which are typical for a content repository or a DAM system), then you would end up reinventing the wheel with the filesystem approach.
The other option is using some kind of content repository. A JCR repository, for example Jackrabbit or its commercial implementation CRX, is one option. Alfresco (which supports CMIS) would be another valid one.
Features like versioning, post-processing (scaling, ...), and metadata extraction and management are supported by both of the mentioned repository solutions. But this requires you to learn a new technology, which can be time-consuming, and both repository technologies can get complex.
If horizontal scaling is a requirement, I would consider the commercially supported repository implementations (CRX or Alfresco Enterprise), because the community releases are lacking this functionality.
Personally, I would base any decision on the requirements mentioned above.
I have worked extensively with Jackrabbit, CRX, and Alfresco CE and EE, and personally I would go for Alfresco, as I experienced it scaling better with larger amounts of data.
I'm not aware of an image spooling solution that fits your needs exactly, but it shouldn't be too difficult to implement one, apart from the fact that recurring scaling operations may be very resource-intensive.
I would go for the following approach if the file system is enough for you:
Separate the images and thumbnails into two locations. The images root folder remains permanent; the thumbnails folder is temporary.
Create a temporary thumbnail folder for each indexing run. All thumbnails for that run are stored under that location; scaling can be achieved with e.g. ImageMagick.
The temporary thumbnail folder can then easily be dropped as soon as the next run has completed.
If you are planning to store millions of images, avoid putting all files in the same directory. Browsing flat hierarchies with too many entries will be a nightmare.
Better to create a tree structure, e.g. derived from the current datetime (year/month/day/hour/minute, such as 2013/06/01/08/45).
This makes sure the number of files inside the leaf folder doesn't get too big (Alfresco uses the same pattern for storing binary objects on the FS, and it has proven to work nicely).
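A minimal sketch of such a date-based tree (the helper name is made up for illustration):

```python
from datetime import datetime

def storage_path(filename, now=None):
    """Build a year/month/day/hour/minute path prefix so that no
    single directory accumulates too many entries."""
    now = now or datetime.utcnow()
    return now.strftime("%Y/%m/%d/%H/%M") + "/" + filename
```

Each indexing run then writes under the path for its start time, which also makes dropping an entire obsolete run a single directory delete.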