How to refresh Panic Transmit file list? - caching

I like the Panic Transmit client for FTP and SFTP, but have lost work a couple of times because the file list is cached and can't be completely refreshed easily.
The Refresh option in the View menu only refreshes the current directory, and doesn't do the subdirectories.
I've contacted Panic about this and was told that this is simply how it works; they would like to change it, but not in this release. I've tried a couple of other FTP clients and found them lacking. For example, Fetch only shows the remote side and uses the Finder for the local side, which gets confusing quite quickly.
Does anyone know where Transmit keeps the cache of the file list so I can delete it and get a full refresh?
If not, it's back to the future with scp, rsync, and command-line FTP.

I found a crude workaround for this. Transmit keeps the cache in memory, so quitting the application clears it. I just make a habit of quitting it from the Dock before any usage that requires up-to-date timestamps.

Related

How to implement synchronization of browser-based online games when users refresh their browser

In implementing a browser-based simple game involving multiple users, I have the server save the game state at certain sync points (not time-based but event-specific). I identify each state by an integer.
When a user refreshes his browser, the server provides the latest state and restores the content in the browser. However, in those few seconds while the browser is loading the latest content after browser-refresh, the state could change again. I do not know how to handle this situation because sending the next state will again raise the same issue.
I want a seamless refresh so none of the other players are impacted when one user refreshes his browser (or for that matter leaves and comes back).
The implementation language is not relevant. I use websockets to communicate between the browser and the server. The server is the intermediary for all communication between users (I am not using WebRTC data channels). What is the best way to sync the application content in multiple browsers?
This is indeed a programming-based question though no code is provided.
Forget the fact that your client exists in a browser. Let's just talk about replication.
The usual approach in databases is to separate snapshots from write-ahead logs (WAL). When you bring a new client up, you select a snapshot and transfer that. Then, when the client is ready, it asks for the WAL entries from that snapshot forward. The same mechanism is used after crashes: the last available snapshot is loaded, the WAL is replayed, and then the database comes up.
I would suggest the same strategy. This does require efficient storage of snapshots, some kind of log, and some kind of replay mechanism, which is a lot of easy-to-mess-up code. If you can use something that already exists, that would be good.
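Here is a rough sketch of that strategy; the state and event types below are just stand-ins for whatever your game actually tracks:

```typescript
// Illustrative types only; your real state and events will differ.
type GameState = { score: Record<string, number> };
type GameEvent = { seq: number; player: string; points: number };

// One reducer shared by server and clients, so replaying the log is deterministic.
function applyEvent(state: GameState, ev: GameEvent): GameState {
  const score = { ...state.score };
  score[ev.player] = (score[ev.player] ?? 0) + ev.points;
  return { score };
}

// Server side: what a refreshing client receives -- the latest snapshot plus
// every event recorded after it. Events that arrive while the client is still
// loading just extend the log, so nothing is lost.
function bootstrapPayload(snapshot: GameState, snapshotSeq: number, log: GameEvent[]) {
  return { snapshot, snapshotSeq, log: log.filter((ev) => ev.seq > snapshotSeq) };
}

// Client side: rebuild the state from the payload, then keep applying live
// events that come in over the websocket.
function restore(payload: { snapshot: GameState; log: GameEvent[] }): GameState {
  return payload.log.reduce(applyEvent, payload.snapshot);
}
```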
The first thing I looked into was using Emscripten to compile Redis to JavaScript and then trying to use Redis' built-in asynchronous replication to replicate to your browser. That may be possible, but the fact that Redis is single-threaded and wants to run as its own server is probably a showstopper.
The next best option that I found is https://isomorphic-git.org/. Here is how that could be used to build what you need. You simply maintain your current state in a git repository and keep a log of everything you've done to it. When a client connects, it clones the repository. Once done, it connects to the websocket, tells you which commit it is at, and you send it the log from that point forward. Locally, in the browser, you replay those git commands. If the client simply loses its connection and then rejoins, it can do a git pull and then follow the same strategy.
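A sketch of the client side with isomorphic-git, assuming a LightningFS in-browser filesystem; the repository URL, websocket URL, and message format are made up for illustration:

```typescript
import * as git from "isomorphic-git";
import http from "isomorphic-git/http/web";
import LightningFS from "@isomorphic-git/lightning-fs";

const fs = new LightningFS("game");
const dir = "/state";

async function join() {
  // 1. Clone the repository that holds the current game state.
  await git.clone({ fs, http, dir, url: "https://game.example.com/state.git", singleBranch: true });

  // 2. Find out which commit we are at.
  const head = await git.resolveRef({ fs, dir, ref: "HEAD" });

  // 3. Tell the server where we are; it answers with the log from that commit
  //    forward, then keeps streaming new entries as they happen.
  const ws = new WebSocket("wss://game.example.com/sync");
  ws.onopen = () => ws.send(JSON.stringify({ type: "hello", commit: head }));
  ws.onmessage = (msg) => applyLogEntry(JSON.parse(msg.data));
}

function applyLogEntry(entry: unknown): void {
  // Replay the recorded operation against the local clone / in-memory state.
}
```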
This will be a bunch of work for you. But a lot less work than implementing everything from scratch.

How do live sessions work (with multiple users)?

This has been on my mind for a while now, and I guess I am asking it now. My question is: how do live sessions work? For instance, a live chat session, or the live multi-user updater on JSfiddle.net. How do both items update instantly? In the case of the live chat, is it making an AJAX request to the server every second?
Sorry if my question is misunderstood, but my question is simply, how do live sessions work with multiple users?
EDIT
How does Stack Overflow do it? Every time something happens I get a notification. Is it querying the database every second to see if something happened, or is there a better (more efficient) way of going about doing this?
There are a couple of ways of doing it.
The most common way people do it nowadays is through websockets. You can just google that term and learn about it. Basically, the web server notifies you through a socket whenever it decides to.
Another way is polling. People used to do it like this back in the day. Polling is pretty much the dumb way: constantly (or every other second or so) sending an AJAX request to the web server asking if there is any new content.
Another interesting way is sending a GET request that stays open for a certain amount of time, even after it gets a response. It sort of functions like a stream that you opened to a file or connection: it stays open until you close it (or until some other condition is met). I'm not too familiar with this method, but I know Google Drive uses it for its multi-user file editing. Just open two sessions to the same Google Drive document and inspect the page. You'll see in the console that every time you type a block of text it sends a POST, and you'll have at least one GET request pending at all times. At some point it closes, and right away a new one starts.
So in short: websockets, polling, and long polling (which is what that last method is usually called).
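A rough sketch of the first two approaches from the browser side; the endpoints and message shapes are placeholders:

```typescript
// Websockets: the server pushes a message whenever it decides to.
const ws = new WebSocket("wss://chat.example.com/live");
ws.onmessage = (msg) => renderMessage(JSON.parse(msg.data));

// Polling: the client keeps asking every couple of seconds, whether or not
// anything actually changed.
let lastSeen = 0;
setInterval(async () => {
  const res = await fetch(`/messages?since=${lastSeen}`);
  const messages: { id: number; text: string }[] = await res.json();
  for (const m of messages) {
    renderMessage(m);
    lastSeen = m.id;
  }
}, 2000);

function renderMessage(m: { text: string }): void {
  console.log(m.text); // stand-in for updating the page
}
```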

How to only read a few lines from a remote file?

Before downloading a file, I need to set up the way it (typically a .csv, but not always) will be parsed.
I don't want to download the whole file especially if the "headers" do not match what is expected.
Is there a way to only download up until a certain number of bytes and then gracefully kill the connection?
There's no explicit support for this in the FTP protocol.
There's an expired draft for a RANG command that would allow this:
https://datatracker.ietf.org/doc/html/draft-bryan-ftp-range-08
But that's obviously supported only by newer FTP servers.
Though there's nothing that prevents you from initiating a normal (full) download and forcefully breaking it as soon as you get the amount of data you need.
All you need to do is close the data transfer connection. This is basically what all FTP clients do when an end user decides to abort a transfer.
This approach might result in a few error messages in the FTP server log.
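One way to sketch this in Node, using the basic-ftp package (just one option; the host and credentials are placeholders): download into a writable stream and error it out once you have enough bytes, which breaks the transfer partway through.

```typescript
import { Client } from "basic-ftp";
import { Writable } from "stream";

async function readHeader(remotePath: string, maxBytes: number): Promise<Buffer> {
  const client = new Client();
  const chunks: Buffer[] = [];
  let received = 0;

  // A sink that accepts data until we have enough, then errors out,
  // which aborts the transfer before the rest of the file is sent.
  const sink = new Writable({
    write(chunk: Buffer, _enc, cb) {
      chunks.push(chunk);
      received += chunk.length;
      cb(received >= maxBytes ? new Error("got enough") : null);
    },
  });

  try {
    await client.access({ host: "ftp.example.com", user: "anonymous", password: "guest" });
    await client.downloadTo(sink, remotePath); // rejects once the sink errors
  } catch {
    // Expected: we broke the transfer on purpose; the server may log an error.
  } finally {
    client.close();
  }
  return Buffer.concat(chunks).subarray(0, maxBytes);
}
```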
If you can use the SFTP protocol instead, then it's easy: SFTP supports this natively.
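For example, with the ssh2 package for Node you can read just a byte range (the host and credentials below are placeholders):

```typescript
import { Client } from "ssh2";

function readFirstBytes(remotePath: string, maxBytes: number): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const conn = new Client();
    conn
      .on("ready", () => {
        conn.sftp((err, sftp) => {
          if (err) return reject(err);
          const chunks: Buffer[] = [];
          // createReadStream mirrors fs.createReadStream; start/end limit the byte range.
          const stream = sftp.createReadStream(remotePath, { start: 0, end: maxBytes - 1 });
          stream.on("data", (chunk: Buffer) => chunks.push(chunk));
          stream.on("end", () => {
            conn.end();
            resolve(Buffer.concat(chunks));
          });
          stream.on("error", reject);
        });
      })
      .on("error", reject)
      .connect({ host: "sftp.example.com", username: "user", password: "secret" });
  });
}
```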

Heroku - letting users download files from tmp

Let me start by saying I understand that Heroku's dynos are temporary and unreliable. I only need them to persist for at most 5 minutes, and from what I've read that generally won't be an issue.
I am making a tool that gathers files from websites and zips them up for download. My tool does everything and creates the zip; I'm just stuck at the last part: providing the user with a way to download the file. I've tried direct links to the file location and HTTP GET requests, and Heroku didn't like either. I really don't want to have to set up AWS just to host a file that only needs to persist for a couple of minutes. Is there another way to download files stored in /tmp?
As far as I know, you have absolutely no guarantee that a request goes to the same dyno as the previous request.
The best way to do this would probably be to either host the file somewhere else, like S3, or to send it immediately in the same request.
If you're generating the file in a background worker, then it most definitely won't work. Every process runs on a separate dyno.
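A minimal sketch of the "same request" option using Express; the route and the buildZip step stand in for whatever your tool already does:

```typescript
import express from "express";
import path from "path";

const app = express();
app.use(express.json());

app.post("/archive", async (req, res) => {
  // Build the zip exactly as the tool already does; /tmp is fine because the
  // file only has to live for the duration of this one request.
  const zipPath = path.join("/tmp", `bundle-${Date.now()}.zip`);
  await buildZip(req.body, zipPath); // placeholder for the existing zipping code

  // Stream it back before the dyno (and its filesystem) goes away.
  res.download(zipPath, "bundle.zip");
});

async function buildZip(_input: unknown, _dest: string): Promise<void> {
  // existing zip-creation logic goes here
}

app.listen(Number(process.env.PORT) || 3000);
```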
See How Heroku Works for more information on their backend.

IE save onunload bug

I have a dynamic AJAX-heavy app, and I save the state when the user closes the browser window.
It works OK in all browsers, but in IE there is a problem: after I close the application tab twice, I can't connect to the server anymore.
My theory is that the connection to the server fails to complete while the tab is being closed, and somehow IE7 thinks that it has 2 outstanding connections to the server and therefore queues new connections indefinitely.
Has anyone experienced this? Any workaround or solution?
In IE, if you use long-polling AJAX requests, you have to close the XHR connection on 'unload'. Otherwise it will be kept alive by the browser, even if you navigate away from your site. These kept-alive connections will then cause the hang, because the browser hits its maximum open-connection limit.
This problem does not happen in other browsers.
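A sketch of that fix: keep a handle to the pending long-poll request and abort it on unload (the endpoint below is a placeholder):

```typescript
let pendingXhr: XMLHttpRequest | null = null;

function longPoll(): void {
  const xhr = new XMLHttpRequest();
  pendingXhr = xhr;
  xhr.open("GET", "/updates?wait=30");
  xhr.onload = () => {
    handleUpdate(xhr.responseText);
    longPoll(); // immediately start the next long poll
  };
  xhr.send();
}

window.onunload = () => {
  // Without this, IE keeps counting the request against its connection limit
  // even after the tab is closed, and eventually refuses to open new ones.
  if (pendingXhr) pendingXhr.abort();
};

function handleUpdate(_payload: string): void {
  // apply the server's update to the page
}
```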
Well, you can get around the connection limit easily enough: simply create a wildcard domain and instruct your app to round-robin the subdomains, e.g. a.rsrc.dmvnoc.com, b.rsrc.dmvnoc.com, etc., as I do for my netMail application. Without this trick, preloading all the images takes almost 30 seconds on a LAN (because of MSIE's low connection limit), but with it, the images download in about a second.
If you need to combine scripts with this trick, just set document.domain to the parent in the new scripts.
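A sketch of the rotation, with placeholder hostnames:

```typescript
// Spread asset requests across several subdomains so each host gets its own
// per-host connection limit in the browser.
const hosts = ["a.rsrc.example.com", "b.rsrc.example.com", "c.rsrc.example.com"];
let next = 0;

function assetUrl(path: string): string {
  const host = hosts[next++ % hosts.length];
  return `https://${host}${path}`;
}

// Usage: point image sources at the rotated hosts instead of the main domain.
const img = new Image();
img.src = assetUrl("/images/logo.png");
```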
However, you might want to checkpoint the state on change anyway: the user might lose their network connection, or their computer might crash. If you want to reduce network traffic, have the client simply set a cookie that contains the relevant state. You can fit an awful lot in there (3000 bytes or so), and the server gets it automatically on the next connection anyway, where it can save the results (as it presently does) and remove the cookie to signal that it has saved the state.
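A sketch of that cookie checkpoint; the cookie name and state shape are made up for illustration:

```typescript
interface AppState {
  draft: string;
  openFolder: string;
}

function checkpoint(state: AppState): void {
  const value = encodeURIComponent(JSON.stringify(state));
  // Keep it well under the ~4 KB per-cookie limit; trim the state if necessary.
  document.cookie = `app-state=${value}; path=/`;
}

// Call this whenever something worth preserving changes:
checkpoint({ draft: "unsent message", openFolder: "Inbox" });
```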
