How can I minimise the run time of fetching location details? - laravel

I am using the GeoIP plugin in my Laravel system. Everything is working correctly, but the response time of GeoIP is very high and I want to reduce it.
I tried fetching just the IP address, but that did not work, because I also need to do country validation.
use \Torann\GeoIP\Facades\GeoIP;
$respondentLocation = GeoIP::getLocation();
I just need a quick response, nothing else.
Thank you in advance.

I'm not familiar with that particular library, but I'm assuming it's looking up remote data (hence the speed issues). If so, there isn't a lot you can do beyond caching records and possibly pre-fetching data (if your scenario allows for that).
Looking at their docs, it appears they do have caching mechanisms in place: http://lyften.com/projects/laravel-geoip/doc/commands.html
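For example, a minimal sketch of caching the lookup per IP with Laravel's Cache facade (the cache key, the one-day TTL, and the iso_code attribute are illustrative assumptions; the package's own caching options in the docs above may already cover this):

use Illuminate\Support\Facades\Cache;
use \Torann\GeoIP\Facades\GeoIP;

$ip = request()->ip();

// Cache the remote lookup per IP so repeat requests skip the slow call.
// TTL is one day here; adjust to taste (and to your Laravel version's TTL units).
$respondentLocation = Cache::remember("geoip:{$ip}", now()->addDay(), function () use ($ip) {
    return GeoIP::getLocation($ip);
});

// Country validation can then run against the cached result
// (attribute name assumed from the package's location data).
$countryCode = $respondentLocation->iso_code;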

Related

Updating global.xml on disk doesn't update geoserver config

I'm using GeoServer 2.19 and tried to edit global.xml directly to update the contact information, but when I reload the cache it just discards my changes and writes global.xml back without them.
I tried modifying logging.xml as well; that change is visible in the GUI after reloading the cache, but the logs themselves don't reflect the modifications I've made.
Am I missing something?
To give a bit more information, I have two instances of GeoServer, and when I make changes to one instance I call the REST reload to apply the changes on the other instance too. I've read about JMS clustering, but it seemed a bit too complex and rigid for what I need to do. Advice is welcome.
I am trying to achieve this: https://docs.geoserver.geo-solutions.it/edu/en/clustering/clustering/passive/passive.html, but I'm having trouble with the synchronization between the instances.
Thank you
Basically, don't do that! GeoServer manages that file, and you can break things very badly if you mess with it.
You should only change things like contact information through the GUI or the REST API.
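For example, a sketch of updating the contact details over REST with curl (the host, credentials, and field names are placeholders; verify the /rest/settings/contact endpoint and payload against the REST API reference for your GeoServer version):

# PUT the new contact details instead of editing global.xml by hand
curl -u admin:geoserver -X PUT -H "Content-Type: text/xml" \
  -d '<contact><contactPerson>Jane Doe</contactPerson><contactEmail>jane@example.org</contactEmail></contact>' \
  http://localhost:8080/geoserver/rest/settings/contact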

Setting up multiple network layers in Relay Modern

I am using a react-native app with relay modern.
Currently our app's fetchQuery implementation just does a fetch on the network (like in https://facebook.github.io/relay/docs/en/network-layer.html).
There is also the possibility of another, local network layer like https://github.com/relay-tools/relay-local-schema, which returns data from a local DB like SQLite/Realm.
Is there a way to set up an offline-first response from the local network layer, followed by an automatic request to the real network which also populates the store with fresher data (along with writing to the local DB)?
Also should/can they share the same store?
From the requirements of Network.create(), the fetch function should return a promise containing the payload; there does not seem to be a possibility of returning multiple values.
Any ideas/help/suggestions are appreciated.
What you're trying to achieve is complex, so I'll go for the easy approach, which is a long-lived cache.
As you might know, Relay Modern uses a local store that is an exact copy of the data you are fetching. You can configure this store cache as per your needs; there is no cache on mutations.
The best library around for customising the Relay Modern or Classic network layer is https://github.com/nodkz/react-relay-network-modern
My recommendation: set up your cache and watch your requests (you're going to love it).
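A sketch of that long-lived cache with react-relay-network-modern (the size/ttl values and the /graphql URL are placeholders; this caches query responses in memory rather than giving true offline-first behaviour):

import { Environment, RecordSource, Store } from 'relay-runtime';
import {
  RelayNetworkLayer,
  cacheMiddleware,
  urlMiddleware,
} from 'react-relay-network-modern';

const network = new RelayNetworkLayer([
  // Serve repeated queries from an in-memory response cache.
  cacheMiddleware({
    size: 100,            // keep up to 100 cached requests
    ttl: 15 * 60 * 1000,  // 15 minutes
  }),
  urlMiddleware({
    url: () => 'https://example.com/graphql', // placeholder endpoint
  }),
]);

export const environment = new Environment({
  network,
  store: new Store(new RecordSource()),
});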
Thinking in Relay: https://facebook.github.io/relay/docs/en/thinking-in-relay.html

Can WebLogic cache responses to GET requests?

I don't mean using Coherence. I am looking for a way to avoid hitting my application to look something up that I've already looked up. When the client performs a GET on a resource, I want it to hit the application the first time only and after that return a cached copy.
I think I can do this with Apache and mod_mem_cache, but I was hoping there was a built-in WebLogic solution that I'm just not able to find.
Thanks.
I don't believe there are built-in features to do that across the entire app server, but if you want to do it programmatically, perhaps the CacheFilter might work.
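If you try the CacheFilter, it is declared in web.xml like any servlet filter. A sketch, assuming WebLogic's weblogic.cache.filter.CacheFilter class; the init-param names and values (e.g. timeout) should be checked against the CacheFilter documentation for your WebLogic version:

<!-- web.xml: cache responses for URLs under /cacheable/* -->
<filter>
  <filter-name>ResponseCache</filter-name>
  <filter-class>weblogic.cache.filter.CacheFilter</filter-class>
  <init-param>
    <!-- how long a cached copy is served before the app is hit again (assumed param name) -->
    <param-name>timeout</param-name>
    <param-value>60s</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>ResponseCache</filter-name>
  <url-pattern>/cacheable/*</url-pattern>
</filter-mapping>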

How to store DNS records (Go authoritative name server)

I need to deploy an authoritative name server using Go, and I've found the miekg/dns package to almost fit the bill, but I can't find how to store/persist records (i.e. on disk). Currently it seems to store everything in a map, but I guess it's all gone when the server is shut down. Is there anything I'm missing, or an easy way to plug in a persistent storage engine?
miekg/dns is a library, not a fully functional DNS server. It has built-in support for RFC 1035 zone files (originally used by BIND): zscan.go to parse a zone file and zgenerate.go to generate a zone string.
If you're looking for a complete DNS server based on this library, check the Users section in the README, or take a look at the discodns server, which is based on the library and reads zones from etcd.
The easiest solution, IMO, is Redis, which has a few Go libs available. Even if this isn't your long-term solution, it works well for prototyping. From there you just need to serialize/deserialize your data to/from a string/[]byte and write a simple caching layer that loads a saved state and pushes changes when they happen.
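A minimal sketch of that serialize/deserialize idea, using the records' RFC 1035 text form and a plain file instead of Redis (the file path and helper names are illustrative):

package main

import (
	"bufio"
	"os"
	"strings"

	"github.com/miekg/dns"
)

// saveRecords writes each record in its RFC 1035 text form, one per line.
func saveRecords(path string, records []dns.RR) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	for _, rr := range records {
		if _, err := f.WriteString(rr.String() + "\n"); err != nil {
			return err
		}
	}
	return nil
}

// loadRecords parses the saved lines back into dns.RR values on startup.
func loadRecords(path string) ([]dns.RR, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var records []dns.RR
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		rr, err := dns.NewRR(line)
		if err != nil {
			return nil, err
		}
		records = append(records, rr)
	}
	return records, scanner.Err()
}

func main() {
	rr, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.1")
	_ = saveRecords("zone.txt", []dns.RR{rr})
	loaded, _ := loadRecords("zone.txt")
	_ = loaded // serve these from your handler's in-memory map
}

Swapping the file for Redis just means storing the same strings under a key instead of lines in a file.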

proxy for scale, performance (to load external content)?

I am sure the answers to this question will be very subjective; I simply want to know what options are out there for building a proxy to load external content.
Typically I use cURL in PHP and pass a variable like proxy.url to fetch content, then make an AJAX call with JavaScript to populate the contents.
EDIT:
YQL (Yahoo Query Language) seems a very promising solution to me; however, it has a daily usage limit, which essentially prevents me from using it for large-scale projects.
What other options do I have? I am open to any language and any platform; the key criteria are performance and scalability.
Please share your ideas, thoughts and experience on this topic.
Thanks,
You don't need a proxy server or anything else.
Just create a cronjob to fetch the contents every 5 minutes (or however often you want).
You just need to create a script, started by the cronjob, that grabs the content from the web and saves it (to a file, a database, ...).
When somebody requests your page, you just send out the cached content and do with it whatever you want.
I think scalability and performance will be no problem.
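For example, a minimal PHP sketch of the cron-driven fetch (the URL, cache path, and 10-second timeout are placeholders):

<?php
// fetch_cache.php - run from cron, e.g. */5 * * * * php /path/to/fetch_cache.php
$url = 'https://example.com/external-content';
$cacheFile = __DIR__ . '/cache/external-content.html';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$content = curl_exec($ch);
curl_close($ch);

if ($content !== false) {
    // Write to a temp file first so readers never see a half-written cache.
    file_put_contents($cacheFile . '.tmp', $content);
    rename($cacheFile . '.tmp', $cacheFile);
}

The page (or AJAX endpoint) then just does readfile($cacheFile); instead of hitting the external site.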
Depending on what you need to do with the content, you might consider Erlang. It's lightning fast, ridiculously reliable, and great for scaling.
