I'm pretty new to tmux. The commands I run on my ssh server produce pretty large output, and I need to be able to scroll through and search it. I've read some answers about increasing the scrollback buffer, but the downside is that it consumes a huge amount of RAM, as mentioned here. I'm looking for a solution that doesn't stress the RAM while still allowing me to search the output no matter how large the logs are.
If I don't use tmux and just do a plain ssh server from my Mac, I can easily browse the logs because there are no buffers involved. I want something like that.
Is this possible at all?
I can easily browse the logs because there are no buffers involved
That's not how that works. To be able to scroll back through output, it has to be saved in RAM. There is no way to scroll back through previous command output without storing that information in RAM (you could also save all command output to a file, and then open that file, but when you open that file, you're loading that data into memory).
That answer recommends against a huge scrollback buffer for tmux because tmux has: a) multiple panes; and b) persistent terminals. Together, these mean that if you use a ton of panes and never stop your tmux session, tmux will eventually be storing a lot of data... but if you need access to that data, you have to store it somewhere, and if you need instant access, you need to store it in RAM.
TL;DR: tmux is not somehow worse at storing scrollback data than any other program, it just makes it more likely that you have lots of data.
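If you still want RAM usage bounded while keeping everything searchable, one compromise worth sketching (not from the answer above; the log path is just an example) is to keep tmux's in-memory scrollback modest and mirror a pane's output to a file on disk, then search that file with less:

# keep the in-RAM scrollback modest (in ~/.tmux.conf)
set-option -g history-limit 10000

# mirror everything the current pane prints to a file on disk
tmux pipe-pane -o 'cat >> ~/pane-output.log'

# search the file without holding all of it in tmux's scrollback
less ~/pane-output.log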
Related
I'm trying to look at the Freebase data dump, which is stored on a server that I access through ssh. The trouble is I don't know how I can view it in a way that doesn't take forever, make things freeze, or crash. I had been trying to view it with nano, and it produces precisely the behaviour just described.
The operating system is Darwin.
How can I examine this data?
Basically, you can use the more or less command to scroll through the file. If you know which lines in the file you are interested in, say lines 3000 to 3999, you can print them with sed -n '3000,3999p' your_file_name.
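For example (your_file_name stands in for whatever the dump file is called on your server):

# page through the file; /pattern searches, n jumps to the next match, q quits
less your_file_name

# print only lines 3000 through 3999
sed -n '3000,3999p' your_file_name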
One of my scripts takes a lot of time to run. I would like to pause it (e.g. by pressing p) and save it to the HDD (e.g. by pressing s) so I can resume it later from the HDD. Libraries like Thread or gems like Celluloid can pause some parts of the code, but as far as I have seen, they cannot save the current process to disk.
Ideally, I would like to put a few lines of code at the beginning of the script, or something equally easy.
TL;DR
You are solving the wrong problem. If your script takes too long to run, speed up your script rather than try to serialize an OS process.
Alternative Approaches
If you insist on being able to freeze processes and save state to disk, you may want to consider running your processes inside a virtual machine like VirtualBox or VMware. Both of these products support the ability to pause a virtual machine and save the VM's current state to disk.
I'm unaware of any way to store running OS processes on disk other than inside some sort of virtualization layer. If you really need this functionality, that's the approach I'd recommend. However, you'll probably get more bang for your buck by improving the efficiency of your code (profile or benchmark it to find bottlenecks), scaling up your system, or scaling out your program's tasks in a distributed way.
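As a rough sketch of the virtual machine route (this assumes VirtualBox; "my-vm" is a placeholder for your VM's name):

# save the VM's entire state, including your running script, to disk and stop it
VBoxManage controlvm "my-vm" savestate

# later, resume exactly where it left off
VBoxManage startvm "my-vm"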
When I open a file, does Vim read it all into memory? I experience significant slowdowns when I open large files. Or is it busy computing something (e.g., line numbers)?
Disabling features like syntax highlighting, cursorline, line numbers and so on will greatly reduce the load and make Vim snappier in these cases.
There's even a plugin to handle that for you and a Vim tip for some background info.
Yes, Vim loads the whole file into memory.
If you run htop in another pane you can watch it happen in real time.
If you don't have enough memory available, it will start hitting the swap which makes it take even longer.
You can disable plugins and heavy features (like syntax highlighting) to get improved performance (-u NONE effectively tells Vim not to load your ~/.vimrc or any plugins):
vim -u NONE mysqldump.sql
However, unless you really need to edit the file, I prefer to just use a different tool. I typically use less. I mostly search the files in vim with / and less supports that just fine.
less mysqldump.sql
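A few less keystrokes cover most of what I would otherwise open Vim for (illustrative, not exhaustive):

# inside less: /pattern searches forward, n repeats the search,
# G jumps to the end of the file, q quits
# open the file already positioned at the last line:
less +G mysqldump.sql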
I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to. That way, the file being written to would never get too big.
Is this possible? Either Windows or Linux is fine.
To be specific, what I'm trying to do is log video with FFMPEG and create hour-long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file in shared mode. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (or more) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need some kind of locking mechanism (either by locking part of the file or by using a shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte beyond the physical end of the file). If one application were appending to the file at the exact moment the other was truncating it, the result would be unpredictable.
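On Linux, for example, a rough sketch of that kind of coordination with flock(1) might look like the following (the file names are placeholders, and the real writer would have to take the same lock around its own appends):

# writer: append only while holding an exclusive lock
( flock -x 9; echo "new data" >> app.log ) 9> /tmp/app.log.lock

# rotator: copy and truncate only while holding the same lock
( flock -x 9; cp app.log app.log.1 && : > app.log ) 9> /tmp/app.log.lock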
Back to the comment about both applications needing to be aware of each other... It is possible that if both applications opened the file exclusively and kept retrying until they succeeded, performed their operation, and then closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
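For the FFMPEG case specifically, that "write to new files periodically" idea is built in via the segment muxer, so nothing has to move data out of a file that is still being written (the input URL and output pattern below are just examples):

# record from the source and start a new output file every hour
ffmpeg -i rtsp://camera.example/stream -c copy -f segment -segment_time 3600 -reset_timestamps 1 out%03d.mp4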
logrotate would be a good option on Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.
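A minimal logrotate sketch along those lines (paths and limits are examples only); copytruncate is the relevant directive here, since it copies and then truncates the file in place while the writer keeps it open:

/var/log/myapp/output.log {
    size 100M
    rotate 10
    copytruncate
    compress
    missingok
}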
How can I efficiently send a file from my own process to a program such as Photoshop, Word, or Paint?
I do not want to save the whole file to disk and then have the program open it via startup parameters using CreateProcess, ShellExecute, etc.
Maybe the only way out is Memory Mapped Files?
Maybe I should look to COM, IPC, Pipes?
You cannot tell these programs that your file data is actually a memory-mapped file. That doesn't really matter, though: files are already memory-mapped by default, and much more efficiently than with an MMF, since the file data is cached in RAM and doesn't take any space in the paging file.
The file system cache takes care of that. Think of it as a large RAM disk that you don't actually have to pay for. This works so well that there has never been a need for these programs to accept their input from anything other than a file.