Occasionally, I will leave an aria2c connection seeding after it has finished downloading, and then hop on a network which doesn't like me seeding.
I'd like to close the connection immediately after the download finishes, so that I never seed. How can I do this?
Note that the general opinion of the torrent community is that you should seed back at least as much data as you downloaded (a ratio of 1.0). You can do this by providing the --seed-ratio=1.0 option:
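aria2c --seed-ratio=1.0 path-to-torrent-file.torrent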
However, what you are asking for can be done by providing the --seed-time=0 option, which stops seeding immediately after the download is complete:
aria2c --seed-time=0 path-to-torrent-file.torrent
Note that it will still upload already-downloaded pieces while the download is in progress, so some seeding may still occur.
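If you also want to keep uploads to a minimum while the download itself is still running, one possible approach (just a sketch; --max-overall-upload-limit is a standard aria2c option) is to pair --seed-time with an overall upload limit:

aria2c --seed-time=0 --max-overall-upload-limit=1K path-to-torrent-file.torrent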
When I issue a $.ajax request with a timeout: parameter and the timeout is reached, so that error: is invoked, what does that mean?
More specifically:
does that mean the server received the request but is still processing it? If so, some side effect may have occurred, so I may have to cancel it on the server, or somehow invalidate data that was already partially written to a database.
Or does that mean I was never able to reach the server at all? This is nice to know, since then I don't have to deal with partial data from a server-side "save".
Or does that mean the request made it part of the way, and now we have lost track of it? In that case I'd have to actually ask the server, "Oh hey, about that request I sent a while ago... did you get that one? Yeah? Okay, ignore that last save."
OS commands like tracert make it clear that a request may pass through many hops on its way to the server, so if one becomes unresponsive it's hard to tell whether the request arrived or not. But some protocols require an acknowledgement before a message counts as received (so I'm not sure how HTTP or Apache factor into this).
The timeout is how long the client will wait to hear back from the server before giving up.
The server may or may not have done its part. The only way for the client to know is to be notified by the server. Since you don't want to leave a process or a human waiting forever, the timeout specifies how long to wait for success before giving up.
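To make that concrete, here is a minimal sketch (the URL, payload, and timeout value are placeholders). When the timer expires, jQuery aborts the request on the client side and invokes error: with textStatus set to "timeout"; that by itself tells you nothing about how far the request got or whether the server acted on it, so any of the three scenarios you describe is still possible.

$.ajax({
    url: "/api/save",          // placeholder endpoint
    method: "POST",
    data: { value: 42 },       // placeholder payload
    timeout: 5000,             // stop waiting after 5 seconds
    success: function (data) {
        // the server answered within the time limit
    },
    error: function (jqXHR, textStatus) {
        if (textStatus === "timeout") {
            // the client gave up waiting and aborted the request;
            // the server may or may not have received or processed it,
            // so a follow-up check (or an idempotent save) may be needed
        }
    }
});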
I have done something silly and written a script for a website that does an AJAX check every 2 seconds. In this case it's hitting WordPress's admin-ajax.php file every 2 seconds. This essentially burned up all the CPU power of the server and made every site on the server run really slowly.
After a lot of detective work, I finally found the script and stopped it, so that it doesn't happen on new loads of that website. But looking at my Apache log, I can see that it is still running in one browser somewhere.
Is there a way for me to stop that browser from making that AJAX call, or perhaps block it from my server? Or will I just have to wait until that browser is refreshed or closed?
Try using netstat or something similar over SSH to detect the IP and port of the unknown browser. You could also reboot the server so that the client loses its connection.
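A rough sketch of that approach, assuming a Linux server you can reach as root over SSH (203.0.113.45 below is just a placeholder for whatever IP you find):

# list established TCP connections on the web server ports
netstat -tn | grep -E ':(80|443) '

# once you have identified the offending IP, drop its traffic at the firewall
iptables -A INPUT -s 203.0.113.45 -j DROP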
PS: It's pretty hard to point you in the right direction without any logs or other evidence, so I can't be sure this answers your question correctly.
I like the Panic Transmit client for FTP and SFTP, but I have lost work a couple of times because the file list is cached and can't be completely refreshed easily.
The Refresh option in the View menu only refreshes the current directory, and doesn't do the subdirectories.
I've contacted Panic about this and got a response that it's the way it works; they would like to change it, but not in this release. I've tried a couple of other FTP clients and found them lacking, e.g. Fetch only shows the remote side and uses the Finder for the local side, which gets confusing quite quickly.
Does anyone know where Transmit keeps the cache of the file list so I can delete it and get a full refresh?
If not, it's back to the future with SCP, RSYNC and command line FTP.
I found a crude workaround for this. Transmit keeps the cache in memory, so if you quit the application it is cleared. I just make a habit of always quitting from the Dock before any usage that requires up-to-date timestamps.
Practical Challenge:
I have a LoadRunner (LR) script that runs against a mocked app which does not have a logout button (yet).
The test runs fine with stable response times for about 10 minutes, but after that the response times peak, the server goes to 99% memory usage, and transactions start to fail.
I suspect this is because the script does not terminate the vusers' sessions after each run, so it builds up a lot of running sessions against the server that are never terminated. But I might be wrong.
Anyway, I want to programmatically close each session after it has completed the business process.
I have read somewhere that web_set_sockets_option ("SHUTDOWN_MODE", "ABRUPT") could be used for this, but I want to be sure that this function actually does what I want, and what does 'ABRUPT' mean?
Are there better ways of closing sessions? Clicking the browser's close button during recording does not result in anything being captured in the script.
This is a server-side session-aging issue. Your website's server admin can adjust the timeout after which a session with no activity is expired. By default most places have this set to 30 minutes. Trim it to what you need rather than taking the server's default value.
Also, you may have hit a leak situation if resources constantly accumulate on the server side but are never released.
Based on your question, I assume you're using the Web/HTML protocol. I agree that the core issue is that your app's sessions should expire more gracefully and probably sooner. But in order to get past this while testing, you can try the following. It isn't a guarantee, but it has worked for me sometimes in similar situations. Try changing your Run-time Settings for the script:
Run-time Settings > Browser > Browser Emulation
Make sure you have the box checked for "Simulate a new user on each iteration". You can also try playing with the other settings here, like clearing the cache on each iteration. This could cause a new connection to the web server for each iteration, depending on the server's session settings. Again, this isn't 100%, but it has worked for me from time to time.
Try this:
web_set_sockets_option("CLOSE_KEEPALIVE_CONNECTIONS", "1");
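If that does what you need, one reasonable place for the call is at the end of your Action section, after the last business-process step, so that the keep-alive connections are dropped once each iteration completes; the web_set_sockets_option entry in the LoadRunner function reference describes the exact behavior of CLOSE_KEEPALIVE_CONNECTIONS and of the SHUTDOWN_MODE values mentioned in the question.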
I'm trying to upload several hundred files to 10+ different servers. I previously accomplished this using FileZilla, but I'm trying to make it go using just common command-line tools and shell scripts so that it isn't dependent on working from a particular host.
Right now I have a shell script that takes a list of servers (in ftp://user:pass@host.com format) and spawns a new background instance of 'ftp ftp://user:pass@host.com < batch.file' for each server.
This works in principle, but as soon as the connection to a given server times out/resets/gets interrupted, it breaks. While all the other transfers keep going, I have no way of resuming whichever transfer(s) have been interrupted. The only way to know if this has happened is to check each receiving server by hand. This sucks!
Right now I'm looking at wput and lftp, but these would require installation on whichever host I want to run the upload from. Any suggestions on how to accomplish this in a simpler way?
I would recommend using rsync. It's really good at transferring only the data that has changed, which makes it much more efficient than FTP! More info on how to resume interrupted transfers, with an example, can be found here. Hope that helps!
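A minimal sketch of what that could look like for one server (the user, host, and paths are placeholders; you could loop over your server list the same way your current script does):

rsync -avz --partial --progress local-dir/ user@host.com:/remote/dir/

Because rsync only sends missing or changed data, re-running the exact same command after an interruption effectively resumes the transfer, and --partial keeps partially transferred files around instead of discarding them. Note that this assumes rsync is available on the remote servers as well.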