I need to inspect the cache and see which IPs are blocked, and for how long.
NB: I am on a developer/free plan.
Update:
I'm using the rack-throttle gem, which stores blocked IPs in the cache.
If you click on the Memcachier add-on name in your Heroku control panel, you are redirected to the Memcachier dashboard for the instance. More details are available in the add-on documentation.
The dashboard provides some information such as limits, memory consumption, and the number of items in the cache.
However, keep in mind that memcached itself doesn't provide a supported command to list all keys in the cache.
As far as I know, there is no IP blocking feature in Memcachier.
I don't know whether this will help you: https://gist.github.com/bkimble/1365005
I changed it a little in my apps to get the keys; it shouldn't be used frequently.
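For what it's worth, that gist works by talking to memcached's text protocol directly: "stats items" to find the slabs that hold data, then "stats cachedump" to list keys per slab. Below is a rough Python sketch of the same idea, assuming a plain memcached server you can reach without SASL auth (host, port, and the per-slab limit are placeholders); cachedump is an undocumented command, so MemCachier's proxy may not support it at all.

import socket

def _read_until_end(sock):
    """Read from the socket until memcached's END terminator."""
    data = b""
    while not data.endswith(b"END\r\n"):
        data += sock.recv(4096)
    return data.decode()

def list_keys(host="127.0.0.1", port=11211, per_slab=100):
    """List cached keys via `stats items` + `stats cachedump` (undocumented)."""
    sock = socket.create_connection((host, port))
    try:
        # Find the slab IDs that currently hold items.
        sock.sendall(b"stats items\r\n")
        slabs = set()
        for line in _read_until_end(sock).splitlines():
            # Lines look like: STAT items:<slab_id>:number <count>
            if line.startswith("STAT items:"):
                slabs.add(line.split(":")[1])

        # Dump up to `per_slab` keys from each slab.
        keys = []
        for slab in sorted(slabs):
            sock.sendall(f"stats cachedump {slab} {per_slab}\r\n".encode())
            for line in _read_until_end(sock).splitlines():
                # Lines look like: ITEM <key> [<size> b; <expiry> s]
                if line.startswith("ITEM "):
                    keys.append(line.split()[1])
        return keys
    finally:
        sock.close()

if __name__ == "__main__":
    for key in list_keys():
        print(key)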
NEFilterProvider, or more specifically its two subclasses NEFilterDataProvider and NEFilterPacketProvider, has the functionality to allow or deny network activity. However, I couldn't find any way to log the activity for debugging purposes.
I know the documentation says this:
it runs in a very restrictive sandbox. The sandbox prevents the Filter Data Provider extension from moving network content outside of its address space by blocking all network access, IPC, and disk write operations.
but is there any trick to log this anyway in debug mode? Maybe using os_log or something like that?
Yes, you can use os_log and read the output in the Console app. If you want to work around the privacy restrictions (while developing/testing), use the %{public} prefix, like so:
import os.log
// ...somewhere in the provider class
os_log("something i want to log %{public}#", someVar)
You're right, the documentation is really, really lacking for this area, other than the SimpleFirewall sample code and the WWDC video. I have an app in production using NEFilterDataProvider, but it just about cost me my sanity to figure out how to put it all together. At some point I'm going to try to write some blog posts or make a demo repo to help create a central community resource that shares hard-won knowledge and fills in the gaps in the documentation.
I am deploying a Python app with a web interface on Heroku. I know that I can check the Metrics tab when looking at the app, but that does not give me much. Is there any other add-on where I can check (and save) the metrics?
What I want to see is:
* Traffic over time (hours/minutes)
* What kind of browsers the requests come from
* What kind of device was used (tablet/phone, etc.)
I also need it to be saved so I can check traffic for a month back. In the Metrics tab I can only see 1-2 weeks back.
I have been looking at the keen.io add-on for this, but I don't know exactly how to use it. What I'm looking for is more like www.similarweb.com.
Do you have any tips on which add-on or solution I can use?
There are two add-ons you can use here:
New Relic will give you performance information regarding your slow transactions and external calls.
Librato will graph your memory usage and traffic, and it also lets you send any data of your own and graph it.
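To illustrate the "send any data" part, here is a rough sketch that posts a custom gauge to Librato's legacy v1 metrics REST API with requests; the credentials, metric name, and source are placeholders, and the endpoint/payload shape may have changed since this was written.

import requests

LIBRATO_USER = "you@example.com"   # placeholder: your Librato account email
LIBRATO_TOKEN = "your-api-token"   # placeholder: your Librato API token

def send_gauge(name, value, source="web.1"):
    """Post one gauge measurement to Librato's legacy v1 metrics API."""
    resp = requests.post(
        "https://metrics-api.librato.com/v1/metrics",
        auth=(LIBRATO_USER, LIBRATO_TOKEN),
        json={"gauges": [{"name": name, "value": value, "source": source}]},
    )
    resp.raise_for_status()

# e.g. record how many requests the app served in the last minute
send_gauge("myapp.requests_per_minute", 123)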
I understand page caching isn't a good option on Heroku since each dyno has an ephemeral file system (so they wouldn't share files, and the cache would get wiped out on each restart).
So I'm wondering what the best alternative is. I have a large number of potential files that could get generated in a traditional page caching scenario (say 10GB-100GB), so Redis/Memcached don't seem like good options here. Redis can write out to disk, but my understanding is that once you exceed its memory capacity, it's not the right solution to start reading off of disk.
Has anyone found a good solution here? I'm thinking maybe MongoStore. (And some way to run this in conjunction with Redis, since I'm using Redis for some other scenarios.) Thanks!
If your site is 100% static content and never going to be dynamic, S3 may be a good option. You can then create a CNAME to the S3 domain. This allows you to leverage CloudFront should you need it. Otherwise, 100GB would have to go into the database, which your application then pulls up.
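The write side of that S3 "page cache" is just uploading each rendered page as you generate it. Here is a rough sketch of the idea in Python with boto3 (the bucket name, key layout, and cache headers are placeholders; the same pattern works from Ruby with the aws-sdk gem):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-page-cache"  # placeholder bucket fronted by a CNAME/CloudFront

def cache_page(path, html):
    """Store a rendered page in S3 so it can be served without hitting a dyno."""
    s3.put_object(
        Bucket=BUCKET,
        Key=path.lstrip("/") or "index.html",
        Body=html.encode("utf-8"),
        ContentType="text/html; charset=utf-8",
        CacheControl="public, max-age=300",  # tune to how stale a page may be
    )

# e.g. right after rendering a page in the app:
cache_page("/products/42.html", "<html>...rendered output...</html>")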
Heroku's Cedar stack allows for custom buildpacks, and there is one that vendors nginx. This would be good if you envision transitioning to a more dynamic site.
I'm using the Selenium Client (v 1.2.18) to do automated navigation of retail websites for which there exists no external API. My goal is to determine real-time, site-specific product availability using the "Check Availability" button that exists on a lot of these sites.
In case there's any concern, each of these checks will be initiated by a real live consumer who is actually interested in whether or not something's available at that store. There will be no superfluous requests or other internet badness.
I'm using Selenium's Grid framework so that I can run stuff in parallel and I'm keeping each of the controlled browsers open between requests. The issue I'm experiencing is that I need to perform these checks across a number of different domains, and I won't know in advance which one I will have to check next. I didn't think this would be too big an issue, but it turns out that when a Selenium browser instance gets made, it gets linked to a specific domain and I haven't been able to find any way to change what domain that is. This requires restarting a browser each time a request comes in for a domain we're not already linked to.
Oh, and the reason we're using Selenium instead of something more lightweight (e.g., Mechanize) is because we need something that can handle JavaScript.
Any help on this would be greatly appreciated. Thanks in advance.
I suppose you are restricted from changing domains because of the same-origin policy. Did you try using a browser launcher with elevated security privileges, like *iehta for Internet Explorer or *chrome for Firefox? While using these browser modes, use the open method in your tests and pass the URL you want to open. This might solve your problem.
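For illustration, here is a rough sketch of that suggestion using the old Selenium RC Python bindings (the host, port, and store URLs are made up; the same RC commands exist in the other language bindings). The point is to keep one browser session and point open at whichever domain the next check needs:

from selenium import selenium  # the old Selenium RC bindings (pre-WebDriver)

# "*chrome" launches Firefox with elevated privileges; use "*iehta" for IE.
browser = selenium("localhost", 4444, "*chrome", "https://store-one.example.com/")
browser.start()

# Reuse the one browser session and navigate to whichever domain the
# next availability check needs, instead of restarting the browser.
browser.open("https://store-one.example.com/product/123")
# ... click "Check Availability" and read the result ...
browser.open("https://store-two.example.com/item/456")
# ... repeat for the next request ...

browser.stop()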
We're working on developing user widgets that our members can embed on their websites and blogs. To reduce the load on our servers we'd like to be able to compile the necessary data file for each user ahead of time and store it on our Amazon S3 account.
While that sounds simple enough, I'm hoping there might be a way for S3 to automatically ping our script if a file is requested that for some reason doesn't exist (say, for instance, if it failed to upload properly). Basically we want Amazon S3 to act as our cache and to notify a script on a cache miss. I don't believe Amazon provides a way to do this directly, but I was hoping that some hacker out there could come up with a smart way to accomplish this (such as mod_rewrite, a hash table, etc.).
Thanks in advance for your help & advice!
Amazon doesn't currently support this, but it looks like it might be coming. For now, what you could do is enable logging and parse the log for 404s once an hour or so.
It's certainly not instant, but it would prevent long-term 404s and give you some visibility about what files are missing.
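A rough sketch of that log-parsing approach in Python with boto3 is below; the log bucket, prefix, and field positions are assumptions based on the standard S3 server access log format, so adjust them to what your logs actually contain.

import boto3

s3 = boto3.client("s3")
LOG_BUCKET = "my-widget-logs"   # placeholder: bucket receiving S3 access logs
LOG_PREFIX = "access-logs/"     # placeholder: log key prefix

def missing_keys():
    """Scan the access logs and collect object keys that returned 404."""
    missing = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read()
            for line in body.decode("utf-8", "replace").splitlines():
                # An access log line has the requested key just before the quoted
                # request line, and the HTTP status right after it.
                parts = line.split('"')
                if len(parts) < 3:
                    continue
                tail = parts[2].split()
                if tail and tail[0] == "404":
                    fields = parts[0].split()
                    if len(fields) > 8:
                        missing.add(fields[8])  # the object key
    return missing

for key in sorted(missing_keys()):
    print("missing:", key)  # regenerate and re-upload this widget file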