How to share terminal command output in realtime via a web page?

As programming work often requires us to share our terminal output, I am looking for a persistent way to share the output (stdout and stderr) via a web page.
The old-school approach was to copy/paste into a gist or a similar service; even piping would work. Still, this gives you neither colored console output nor real-time updates.
Another nice-to-have feature would be the ability to use a cloud storage service to store the uploaded content. Still, something like AWS S3 is not usable because it has no support for streamed upload and download: an object becomes available only when you have finished uploading it, which means the shared command output could not be accessed before the command is finished.
Identified requirements:
persistence: the uploaded content needs to stay up for at least 30 days
ANSI color support, because plain text is hard to read
live output streaming: the content should be accessible even if the command hasn't finished yet
open-source client
(optional) open-source server, so you can host your own and not rely on a service that could go offline without notice or change its TOS.
I did some research before and here are my current findings:
seashells is working but has some serious problems: only the last 5 executions are kept, and all of them are recycled after 24 hours. The second problem is that the server is not open-source, and the entire project has a single student behind it with no plans to open-source it. This makes it a risky option to depend on.
streamhut - 4-year-old abandonware?
rtail - 4-year-old abandonware
Notable but not usable:
tmate - shares terminal sessions, not command output
gotty - same as above
Do you know another approach that would work?
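To make the live-streaming requirement more concrete, this is roughly the behaviour I am after, as a rough self-hosted sketch (plain Python standard library; the file name and port are just examples, and ANSI rendering and the 30-day retention are left out):

    # stream_sketch.py - not a real tool, just the behaviour I'm after:
    # serve a growing log file over HTTP so the output can be read before
    # the command has finished. Usage (names are examples):
    #   some_command 2>&1 | tee output.log
    #   python3 stream_sketch.py output.log 8000
    import sys, time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOG_FILE = sys.argv[1] if len(sys.argv) > 1 else "output.log"
    PORT = int(sys.argv[2]) if len(sys.argv) > 2 else 8000

    class TailHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # required for chunked responses

        def do_GET(self):
            # Chunked transfer lets the client render output as it arrives.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Transfer-Encoding", "chunked")
            self.end_headers()
            with open(LOG_FILE, "rb") as f:
                while True:  # keep streaming as the file grows
                    chunk = f.read(4096)
                    if chunk:
                        self.wfile.write(b"%x\r\n" % len(chunk) + chunk + b"\r\n")
                        self.wfile.flush()
                    else:
                        time.sleep(0.5)  # wait for the command to write more

    if __name__ == "__main__":
        HTTPServer(("", PORT), TailHandler).serve_forever()

Something like this keeps the output readable in a browser or via curl while the command is still running, but it obviously covers neither persistence nor ANSI-to-HTML rendering, which is why I'm looking for an existing tool.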

Related

Sync a local folder with a server via REST API calls?

I currently have the following problem and can't decide which way to go:
I have a local directory with subfolders and files and want to mirror and sync that with a remote directory on a server. The problem is that I don't have any direct access to the server itself. The only access point I have is a bunch of REST API calls, such as uploading a file, downloading a file, getting the metadata of a file (including creation and change date) and getting a file/directory list.
I have already spent some time looking for possible programs/implementations, but none of them have really convinced me. Here are some of the possibilities I have considered so far:
Use a PowerShell or Python script and manually check each file and folder for changes, then schedule a task to call the script every x minutes/hours (a rough sketch of this is below)
Use the Microsoft Sync Framework (MSF) and implement a custom SyncProvider which handles the REST calls and translates them into MSF format. Here I can't really tell if it's feasible at all and how complex it would be
Use tools like Syncthing or similar, but I couldn't find anything that supports a remote sync directory only accessible via REST calls; as there are quite a lot of tools, though, I might have missed some that do
I'm working under Windows 10, so the solution should run on Windows and preferably not require too many additional resources.
Furthermore the solution should be somewhat resilient to errors as the REST API calls seem to have a tendency to fail sometimes (roughly 1 in 10 calls fails)
Any ideas and suggestions are welcome :)
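For the first option, this is roughly what I have in mind (a rough Python sketch; the endpoint paths, response fields and the one-way local-to-remote direction are assumptions, the real API will differ):

    # sync_sketch.py - rough sketch of the "script + scheduled task" option.
    # Endpoint paths and field names below are made up; the real REST API differs.
    import os, time, requests

    BASE_URL = "https://example.com/api"   # placeholder
    LOCAL_ROOT = r"C:\data\to_sync"        # placeholder

    def call_with_retry(method, url, retries=3, **kwargs):
        # The API fails roughly 1 in 10 calls, so retry with a short backoff.
        for attempt in range(retries):
            try:
                resp = requests.request(method, url, timeout=30, **kwargs)
                resp.raise_for_status()
                return resp
            except requests.RequestException:
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)

    def remote_index():
        # Assumes the file-list call returns [{"path": ..., "modified": <unix ts>}, ...]
        return {f["path"]: f["modified"]
                for f in call_with_retry("GET", f"{BASE_URL}/files").json()}

    def sync():
        remote = remote_index()
        for dirpath, _, filenames in os.walk(LOCAL_ROOT):
            for name in filenames:
                local_path = os.path.join(dirpath, name)
                rel = os.path.relpath(local_path, LOCAL_ROOT).replace("\\", "/")
                # Upload anything that is new locally or newer than the remote copy.
                if rel not in remote or os.path.getmtime(local_path) > remote[rel]:
                    with open(local_path, "rb") as fh:
                        call_with_retry("POST", f"{BASE_URL}/files/{rel}", data=fh)

    if __name__ == "__main__":
        sync()  # run via Windows Task Scheduler every x minutes

Scheduled via the Task Scheduler this would cover the local-to-remote direction and the flaky calls, but deletions and remote-to-local changes would still need extra handling, which is where something like MSF starts to look attractive.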

Performance of a java application rendering video files

I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and starts running.
If the user clicks mp4/flash/pdf files, it is taking too much time to get opened.
Is there anything at the application level we need to configure? Is it a configuration for load on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there are not many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks mp4/flash/pdf files, it is taking too much time to get opened.
A PDF is basically just a string. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, if we assume it can handle the number of requests.
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
Regards.

What is a good framework for deploying a portable HTML/JavaScript Windows application?

I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application must run fully local to the PC that it runs on. Therefore, sending the survey results to a remote server is not permissible. Obviously, saving to a local file from a Web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a late comer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) outside of the Web control to where the rest of the application could access it.
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Scripting Host to write to local drives and UNC shares...
Maybe you can use a portable web server with a scripting language on the server side, for example Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, etc. scripts. Then simply create a script that saves a file to your hard drive, and handle the rest of the application in the same manner.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to open the application. Or point the system browser at your localhost URL.
The only problem may be that the user has to allow the server through the firewall the first time it runs.
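If PHP is not an option on those machines, the same idea works with a portable Python interpreter and only the standard library; a rough sketch of the "save the results" part (the file name and port are just examples):

    # save_server.py - rough stand-in for the PHP/CGI script described above:
    # a tiny local web server that accepts the survey results via POST and
    # writes them to a file next to the script (e.g. on the USB drive).
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SaveHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # One file per submission; the name is just an example.
            with open("results_%d.json" % int(time.time()), "wb") as out:
                out.write(body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"saved")

    if __name__ == "__main__":
        # Binding to 127.0.0.1 keeps it local-only and may avoid the firewall prompt.
        HTTPServer(("127.0.0.1", 8080), SaveHandler).serve_forever()

Since the survey pages would be served from the same local server, the page can simply POST JSON.stringify(results) back to it without any cross-origin issues.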

Downloading large files to PC from OAS Server

We have an Oracle 10g forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) from the PC to the OAS server, using Oracle's webutil utility.
The problem is that with large files (anything over 25 Megs or so), it takes a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, will cause it to error out part way through the download.
I've been testing with a 189Meg file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24Megs. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and finally got the entire file to download, but it took 22 minutes. Doing without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to accomplish this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan
This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
For upload, you have a couple of options. One is to use the mechanism you've got, then when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternately, you could create a very simple file upload form separate from your application that will upload the file to a temp directory via Apache, then route the user back into your application and provide the token in the URL HTTP GET-style or else in a cookie.
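The Forms/PL/SQL side is yours to fill in, but the hand-off itself is simple; here is a rough illustration of the download half in Python (a stand-in language only, and the directory and URL are made up):

    # token_download_sketch.py - rough illustration of the hand-off described above
    # (the real implementation would live in the Forms/PL/SQL layer; the paths and
    # base URL here are made up).
    import os, shutil, uuid

    APACHE_DOWNLOAD_DIR = "/var/www/downloads"          # directory served by Apache
    DOWNLOAD_BASE_URL = "http://server.example/downloads"

    def publish_for_download(source_path):
        # Name the copy after a unique session token so the URL is not guessable
        # and so an uploaded file can later be matched back to the session.
        token = uuid.uuid4().hex
        ext = os.path.splitext(source_path)[1]
        shutil.copy2(source_path, os.path.join(APACHE_DOWNLOAD_DIR, token + ext))
        # Remember the token in the web application, then give the user this link.
        return token, f"{DOWNLOAD_BASE_URL}/{token}{ext}"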
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
I hope that helps.
I ended up using CLIENT_HOST to call an FTP command to download the files. My 189MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So, we're implementing this for now, and looking for a more secure but still performant long term solution.

How would you make an RSS-feeds entries available longer than they're accessible from the source?

My computer at home is set up to automatically download some stuff from RSS feeds (mostly torrents and podcasts). However, I don't always keep this computer on. The sites I subscribe to have a relatively large throughput, so when I turn the computer back on it has no idea what it missed between the time it was turned off and the latest update.
How would you go about storing the feeds entries for a longer period of time than they're available on the actual sites?
I've checked out Yahoo Pipes and found no such functionality; Google Reader can sort of do it, but it requires manually marking each item. Magpie RSS for PHP can do caching, but that's only to avoid retrieving the feed too often, not really storing more entries.
I have access to a web server (LAMP) that's on 24/7, so a solution using PHP/MySQL would be excellent; any existing web service would be great too.
I could write my own code to do this, but I'm sure this has to be an issue previously encountered by someone?
What I did:
I wasn't aware you could share an entire tag using Google reader, thanks to Mike Wills for pointing this out.
Once I knew I could do this, it was simply a matter of adding the feeds to a separate Google account (so as not to clog up my personal reading list). I also did some selective matching using Yahoo Pipes, just to get the specific entries I was interested in, again to minimize the risk that anything would be missed.
It sounds like Google Reader does everything you're wanting. Not sure what you mean by marking individual items--you'd have to do that with any RSS aggregator.
I use Google Reader for my podiobooks.com subscriptions. I add all of the feeds to a tag, in this case podiobooks.com, that I share (but don't share the URL). I then add the RSS feed to iTunes. Example here.
Sounds like you want some sort of service that checks the RSS feed every X minutes, so you can download every single article/item published to the feed while you are "watching" it, rather than only seeing the items displayed on the feed when you go to view it. Do I have that correct?
Instead of coming up with a full-blown software solution, can you just use cron or some other sort of job scheduling on the webserver with whatever solution you are already using to read the feeds and download their content?
Otherwise it sounds like you'll end up coming close to re-writing a full-blown service like Google Reader.
Writing an aggregator for keeping longer history shouldn't be too hard with a good RSS library.
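For example, here is a rough sketch of such an aggregator (shown in Python with the feedparser library; the same shape works as a PHP/MySQL script on your LAMP box, run from cron, and the feed URL is a placeholder):

    # rss_archive.py - rough sketch of a cron-driven aggregator that keeps every
    # entry it has ever seen, long after the source feed has rotated them out.
    import sqlite3
    import feedparser

    FEEDS = ["http://example.com/feed.xml"]  # placeholder list of subscriptions

    db = sqlite3.connect("archive.db")
    db.execute("""CREATE TABLE IF NOT EXISTS entries (
                      guid TEXT PRIMARY KEY,
                      feed TEXT, title TEXT, link TEXT, published TEXT)""")

    for url in FEEDS:
        parsed = feedparser.parse(url)
        for e in parsed.entries:
            guid = e.get("id") or e.get("link")
            # INSERT OR IGNORE keeps old rows, so nothing is lost when the
            # source feed drops older items.
            db.execute("INSERT OR IGNORE INTO entries VALUES (?, ?, ?, ?, ?)",
                       (guid, url, e.get("title", ""), e.get("link", ""),
                        e.get("published", "")))
    db.commit()

A second script (or a small PHP page) can then regenerate a feed from the entries table, so whatever downloads it always sees the full backlog rather than only the items currently published at the source.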
