Closed 11 years ago. This question is off-topic and is not currently accepting answers.
Every time I connect to a server with ssh, the connection is reset after a few minutes if there is no input. I want to remove these timeouts so as to keep the connection alive for as long as possible.
Looking at different forums, I saw it was possible to set the ServerAliveInterval option in the /etc/ssh_config file. However, that option doesn't seem to be in my file. Where could it be?
I'm running OpenSSH_5.2p1 on Snow Leopard.
Thanks!
ServerAliveInterval simply sends a null packet to the server at a set interval to keep the connection alive. You should just be able to add something like the following to your config file, ~/.ssh/config:
Host *
ServerAliveInterval 60
Indenting the second line is conventional, though OpenSSH does not strictly require it.
* will match any host; if you wanted, you could restrict this to particular destinations such as *.somedomain.com.
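If you also want to bound how many missed keep-alives are tolerated before the client drops the connection, a host-restricted block might look like this (ServerAliveCountMax is a standard OpenSSH client option; the domain below is only a placeholder):

Host *.somedomain.com
    ServerAliveInterval 60
    ServerAliveCountMax 3

With these values the client sends a keep-alive every 60 seconds and gives up after 3 unanswered probes, i.e. after roughly three minutes of silence from the server.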
Check out http://kehlet.cx/articles/129.html
Closed 1 year ago. This question was caused by a typo or a problem that can no longer be reproduced; it is not currently accepting answers.
I have a deployment with 5 containers.
Two of them take --endpoint as an argument, whose value is set from ENV.
After deployment I see this error:
/home/xxx-csi-drivers/xxx-vpc-block-csi-driver flag redefined: endpoint
panic: /home/xxx-csi-drivers/xxx-vpc-block-csi-driver flag redefined: endpoint
The code from which container A is built has
endpoint = flag.String("endpoint", "/tmp/storage-secret-sidecar.sock", "Storage secret sidecar endpoint")
The code from which container B is built also has
endpoint = flag.String("endpoint", "unix:/tmp/csi.sock", "CSI endpoint")
Is defining the same var endpoint in the code the reason for the above bug?
I have tried changing the argument names in the deployment file and other options, which didn't help. Changing the flag name in the code fixed the issue, but I need to understand more about why this works, so I posted this question.
It has nothing to do with the different containers. Whichever process is crashing is just broken: its code has a bug where it registers the same flag twice, which isn't allowed.
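For reference, here is a minimal sketch (not the actual driver code) of why this panics: the standard flag package refuses to register the same flag name twice on a FlagSet, and the panic message has exactly the shape shown above.

package main

import "flag"

func main() {
	// First registration of "endpoint" succeeds.
	_ = flag.String("endpoint", "/tmp/storage-secret-sidecar.sock", "Storage secret sidecar endpoint")

	// Registering the same name again on the default FlagSet panics with
	// "<program name> flag redefined: endpoint" before Parse is even called.
	_ = flag.String("endpoint", "unix:/tmp/csi.sock", "CSI endpoint")

	flag.Parse()
}

So the fix is what you found: within a single binary, each flag name may be defined only once (or the two definitions must live on separate flag.FlagSet instances).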
Closed 2 years ago. This question needs debugging details and is not currently accepting answers.
I'm using a private SDK made by a company, from which I have to get messages over an SSE stream. Right now I run a function in my controller which sends a GET request to the server and keeps the connection alive. I'm triggering this from my console: I've made a command, php artisan fetch:messages, which calls my controller method and starts the connection.
Right now, if I close my console, the SSE stream also closes. How can I keep this alive when I'm away? It has to be active at all times.
I tried making a Laravel schedule that triggers the php artisan fetch:messages command every few minutes, but I cannot see whether the previous command is still active or not.
Closed 4 years ago. This question is opinion-based and is not currently accepting answers.
How should a Golang app handle missing external dependencies?
When an app starts and doesn't find the database it is supposed to persist its data in, knowing the app is useless in that state, should I make it panic?
I could otherwise log something endlessly, print to stderr, or use another method to notify, but I'm not sure when to choose each method.
An application that cannot reach an external network service should not panic. This situation should be expected, as networks tend to fail. I would wrap the error and pass it further up.
Consider the following scenario: you have multiple application servers connected to two database servers, and you are upgrading the database servers one at a time. When the first one is turned off, half of your application servers panic and crash. When you upgrade the second database server, every application server is gone. Instead, when the database is not available, just report an error, for instance by sending HTTP status 500. If you have a load balancer, it will pass the request to the working application servers. When the database server is back, the application servers reconnect and continue to work.
Another scenario: you are running an interactive application that reads a database to create a report. The connection is not available, and the application panics and crashes. From the user's perspective it looks like a bug; I would expect a message saying that the connection cannot be established.
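Here is a minimal sketch of the wrap-and-report approach described above. The DSN, the route, and the lib/pq driver are placeholders chosen for the example, not anything from the question; a real server would also keep one shared *sql.DB pool instead of connecting per request.

package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	_ "github.com/lib/pq" // hypothetical driver choice; any database/sql driver works
)

// connect wraps the error and passes it up instead of panicking,
// so each caller decides how to react: retry, answer 500, or print a message.
func connect(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, fmt.Errorf("open database: %w", err)
	}
	if err := db.Ping(); err != nil {
		return nil, fmt.Errorf("connect to database: %w", err)
	}
	return db, nil
}

func main() {
	dsn := "postgres://localhost/app?sslmode=disable" // placeholder

	http.HandleFunc("/report", func(w http.ResponseWriter, r *http.Request) {
		db, err := connect(dsn)
		if err != nil {
			// Database unavailable: log it, answer 500, and keep running,
			// so a load balancer can route to healthy instances and this
			// one recovers automatically once the database is back.
			log.Printf("report: %v", err)
			http.Error(w, "database unavailable", http.StatusInternalServerError)
			return
		}
		defer db.Close()
		fmt.Fprintln(w, "report generated")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}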
In the standard library it is accepted to panic when an internal resource is not available. See template.Must. This means something is wrong with the application itself.
Here's an example from the Go standard library.
Package crypto:

func (h Hash) New() hash.Hash

New returns a new hash.Hash calculating the given hash function. New panics if the hash function is not linked into the binary.
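For comparison, here is a minimal sketch of the template.Must pattern mentioned above (the template text and names are made up for the example). The template is part of the binary itself, so a parse failure means the program is broken and a panic at start-up is the accepted behaviour:

package main

import (
	"html/template"
	"log"
	"os"
)

// The template source is compiled into the binary. If it is malformed,
// template.Must panics at package initialisation, which is acceptable
// because it signals a bug in the application itself, not a network failure.
var page = template.Must(template.New("page").Parse("Hello, {{.Name}}!"))

func main() {
	if err := page.Execute(os.Stdout, struct{ Name string }{"Gopher"}); err != nil {
		log.Fatal(err)
	}
}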
Closed 4 years ago. This question is opinion-based and is not currently accepting answers.
I am working on a project developed in Laravel 4.2.
I am not able to understand Laravel's queue service. I have read many documents about it, but things are still not clear.
Should I compare a queue to a cron job?
When we put a cron job on a server, we specify a time at which the job will run. But in the case of a queue, I could not find where the run time is specified.
There are some files in the App/command directory and the code is running on my server, but I cannot find out when it runs or how to stop these queues.
Please guide me on this problem.
A queue is a service where you add tasks to be processed later.
Generally you ask another service provider, such as iron.io, to call your application so that the task is processed asynchronously, and to repeat the call if it fails the first time. This allows you to respond to the user quickly and leave the task to be processed in the background.
If you use the local sync driver, the task will be done immediately, during the same request.
Closed 10 years ago. This question is off-topic and is not currently accepting answers.
Micro-caching with Nginx can really speed up an app.
Would it be possible to use micro-caching (or something similar) with Varnish?
Yes, you can set up Varnish to micro-cache content.
For other readers: micro-caching is a transparent process where a cache keeps a local copy of content that is not meant to be cached, and serves that copy for a specified (usually very short) time.
For example, you may have a home page that updates often and no caching set in the headers for the site. However, your application is running on a low-performance VM and cannot cope with many requests. Micro-caching can mitigate this problem by silently serving the home page from cache (and sending no caching headers) for a short time.
In Varnish this is achieved with the ttl setting, which tells Varnish to cache the content for the specified time.
If you are using ttl, you should also use the grace setting, which tells Varnish to continue serving cached content for a specified time should the backend not respond in a timely manner.
The other advantage of ttl (the default is, I believe, 120 seconds) is that Varnish sends only the first request for uncached content to the backend, queuing any other requests until the cache is ready.
The Varnish Book has some examples of what is possible with various settings of grace and ttl.
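As a rough illustration (assuming Varnish 4+ VCL syntax; the backend address and the one-second/ten-second values are only placeholders), micro-caching boils down to forcing a short ttl plus a grace window in vcl_backend_response:

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Micro-cache: hold each cacheable object for one second, regardless
    # of what the backend's Cache-Control headers say.
    set beresp.ttl = 1s;

    # Grace: keep serving the stale copy for up to ten seconds if the
    # backend is slow or down when the object expires.
    set beresp.grace = 10s;
}

Tune the values to how stale your content may acceptably be; even one second of caching collapses a burst of identical requests into a single backend hit.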