Micro-caching with Nginx can really speed up an app.
Would it be possible to use micro-caching (or something similar) with Varnish?
Yes, you can configure Varnish to micro-cache content.
For other readers: micro-caching is a transparent process in which a cache keeps a local copy of content that is not meant to be cached and serves it for a short, specified time.
For example, you may have a home page that updates often and no caching set in the site's headers. However, your application runs on a low-performance VM and cannot cope with many requests. Micro-caching can mitigate this problem by silently serving the home page from cache (without sending any caching headers) for a short time.
In Varnish this is achieved with the TTL setting, which tells Varnish to cache the content for the time specified.
If you are using TTL you should also use the GRACE setting, which tells Varnish to continue serving cached content for a specified time should the backend not respond in a timely manner.
The other advantage of TTL (the default is, I believe, 120 seconds) is that Varnish sends only the first request for uncached content to the backend, queuing any other requests to wait for the cache to be ready.
The Varnish Book has some examples of what is possible with various grace and TTL settings.
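A minimal sketch of that in VCL (assuming Varnish 4 or later; the 10-second and 2-minute values are placeholders to tune for your site):

    sub vcl_backend_response {
        # Micro-cache: keep each backend response for a short TTL, even
        # though the backend sends no caching headers of its own.
        set beresp.ttl = 10s;
        # Grace: keep serving the stale copy for up to two minutes if
        # the backend is down or slow to respond.
        set beresp.grace = 120s;
    }

With this in place, at most one request per URL every 10 seconds reaches the backend; Varnish queues concurrent requests for the same object onto that single backend fetch, as described above.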
I have a Cloudflare Worker whose responses can be cached for a long time. I know I can use the Cache API inside the worker, but I want requests never to reach the worker at all while the cache TTL has not expired.
There will be more than 10 million requests to this URL, and I don't see the point in paying for a Worker that will, most of the time, just fetch a response from the Cache API.
I know a workaround: host the worker code on a server and use Page Rules to cache everything from that origin. But I'm wondering if I could use the Worker as the origin and somehow make Page Rules work with it. Setting a Page Rule to cache everything with a cache TTL of 1 month still routes all requests to the Worker and doesn't cache anything.
There's currently no way to do this.
It's important to understand that this is really a pricing question, not a technical one. Cloudflare has chosen to price Workers based on the traffic level of a site that is served using Workers. This pricing decision isn't necessarily based on Cloudflare's costs: the cost of deployment would not change, and the cost of executing a worker is quite low, so Cloudflare's costs wouldn't necessarily be lower if your Worker ran less often. It therefore doesn't necessarily make sense for Cloudflare to offer a discount for Worker-based sites that manage to serve most responses from cache.
With that said, Cloudflare could very well decide to offer this discount in the future for competitive or other reasons. But, at this time, there are no plans for this.
There's a longer explanation on the Cloudflare forums: https://community.cloudflare.com/t/cache-in-front-of-worker/171258/8
How should a Go app handle missing external dependencies?
When an app starts and cannot find the database it is supposed to persist its data to, knowing the app is useless in that state, should I make it panic?
I could otherwise log the error repeatedly, print to stderr, or notify some other way, but I'm not sure when to choose each method.
An application that cannot reach an external network service should not panic. Networks tend to fail, so this situation should be expected. I would wrap the error and pass it further up.
Consider the following scenario. You have multiple application servers connected to two database servers, and you are upgrading the database servers one at a time. When one is turned off, half of your application servers panic and crash. You upgrade the second database server, and now every application server is gone. Instead, when the database is not available, just report an error, for instance by sending HTTP status 500. If you have a load balancer, it will pass the request to the working application servers. When the database server is back, the application servers reconnect and continue to work.
Another scenario: you are running an interactive application that processes a database to create a report, and the connection is not available. If the application panics and crashes, from the user's perspective it looks like a bug. I would expect a message saying the connection cannot be established.
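As a minimal sketch of "report an error instead of panicking" (the /report path, DSN, and driver import are hypothetical; swap in your own):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"

        _ "github.com/lib/pq" // hypothetical driver choice; use whichever you need
    )

    // reportHandler answers with HTTP 500 instead of crashing when the
    // database is unreachable, so a load balancer can route around us.
    func reportHandler(db *sql.DB) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            if err := db.PingContext(r.Context()); err != nil {
                // Wrap the error for the logs and report the failure upstream.
                log.Printf("report: %v", fmt.Errorf("database ping: %w", err))
                http.Error(w, "database unavailable", http.StatusInternalServerError)
                return
            }
            fmt.Fprintln(w, "report goes here")
        }
    }

    func main() {
        // sql.Open validates its arguments but does not connect, so a down
        // database does not stop the app from starting.
        db, err := sql.Open("postgres", "postgres://localhost/app") // hypothetical DSN
        if err != nil {
            log.Fatal(err) // a malformed DSN is a bug in the program itself
        }
        http.Handle("/report", reportHandler(db))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Because database/sql maintains a connection pool, once the database comes back the next Ping succeeds with no restart or reconnect logic in the app.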
In the standard library it is accepted to panic when an internal resource is not available; see template.Must. A missing internal resource means something is wrong with the application itself.
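A toy sketch of that idiom (the template names are made up):

    package main

    import "text/template"

    // Valid template: Must returns the parsed template.
    var greet = template.Must(template.New("greet").Parse("Hello, {{.Name}}!"))

    // Malformed template: Parse fails on the unclosed action, so Must
    // panics during package initialization, before main even runs. That
    // is acceptable here because a broken template literal is a bug in
    // the program itself, not an environmental failure.
    var broken = template.Must(template.New("bad").Parse("Hello, {{.Name"))

    func main() {}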
Here's another example, from the Go standard library's crypto package:

    func (h Hash) New() hash.Hash

New returns a new hash.Hash calculating the given hash function. New panics if the hash function is not linked into the binary.
We have a custom Windows service that runs under a user account. Whenever we reboot the server, the service stops, and to start it again we have to re-enter the password on the service's Log On tab. What is causing this, and how can we resolve it?
The behavior you describe can occur when the service's user account is in a domain and the domain policy periodically overwrites the local policy, dropping the "Log on as a service" right for that user.
To fix the problem, edit your domain group policy (with gpmc.msc) and ensure that the service's user has the "Log on as a service" right.
The Microsoft Windows Service Control Manager controls the state (i.e., started, stopped, paused, etc.) of all installed Windows services. By default, the Service Control Manager will wait 30,000 milliseconds (30 seconds) for a service to respond. Certain configurations, technical restrictions, or performance issues may result in the service taking longer than 30 seconds to start and report ready to the Service Control Manager.
By editing or creating the ServicesPipeTimeout DWORD value, the Service Control Manager timeout period can be overridden, thereby giving the service more time to start up and report ready to the Service Control Manager.
How to do it:
1. Go to Start > Run and type regedit
2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
3. With the Control key selected, right-click in the right-hand pane and select New > DWORD Value
4. Name the new DWORD ServicesPipeTimeout
5. Right-click ServicesPipeTimeout, and then click Modify
6. Click Decimal, type 180000, and then click OK
7. Restart the computer
Note: The recommendation above increases the timeout to 180,000 milliseconds (3 minutes), but this may need to be increased further depending on your environment. Keep in mind that increasing this value will likely yield longer server boot times.
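If you prefer to script the change, here is a sketch of the same edit from an elevated PowerShell prompt (180000 is the 3-minute example above):

    # Create or overwrite the ServicesPipeTimeout DWORD (milliseconds).
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control' `
        -Name 'ServicesPipeTimeout' -Value 180000 -PropertyType DWord -Force

    # The Service Control Manager reads the value at boot.
    Restart-Computer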
Every time I connect to a server with SSH, the connection is reset after a few minutes if there is no input. However, I want to remove these timeouts so as to keep the connection alive for as long as possible.
Looking at different forums, I saw it is possible to set the ServerAliveInterval option in the /etc/ssh_config file. However, that option doesn't seem to be in my file. Where could it be?
I'm running OpenSSH_5.2p1 on Snow Leopard.
Thanks!
ServerAliveInterval simply makes the client send a null packet to the server at a set interval to keep the connection alive. You should be able to add something like the following to your per-user config file, ~/.ssh/config:

    Host *
        ServerAliveInterval 60

Indenting the second line is conventional but not required; every option that follows a Host line applies to the hosts that pattern matches, until the next Host line.
* matches any host; if you wanted, you could restrict this to particular destinations with a pattern like *.somedomain.com.
Check out http://kehlet.cx/articles/129.html
I am using Drupal, and my website uses approximately 30 MB per page load for nodes and user profiles. The site has around 150 contributed modules in addition to a few optional core modules, but most of them are small and installed to improve the user experience.
My PHP memory limit is 128 MB.
Is 30 MB per page acceptable? And how many page loads can be handled easily within 128 MB?
Any idea?
Honestly, at 30 MB your app is just sipping on memory; default PHP memory limits are set pretty low.
As for how many "page loads can be handled by 128 MB" of memory: that's not really how it works. When a request comes in, Apache (or whatever server you're using) hands the request to mod_php or FastCGI, and your PHP code is interpreted, compiled, run, and then exits. The "application" doesn't act like a daemon waiting for requests to come in, so the memory it consumes is used for the duration of the request and then released for use by other requests and processes.
The 128 MB limit is per request. That means that as long as you have enough memory (and Apache child processes, etc.) you can handle additional requests. If you want to see how your application performs under load, check out ApacheBench (ab).
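For example, a quick smoke test with ab (the URL is a placeholder):

    # 1,000 requests total, 10 concurrent, against the page in question
    ab -n 1000 -c 10 http://www.example.com/

What actually bounds you is physical memory times concurrency: ten concurrent requests at roughly 30 MB each need about 300 MB of RAM, regardless of the per-request 128 MB limit.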