I have a hosting server that allows SSH connections, and I have successfully managed to install Compass on it. I have an application that requires me to compile Compass CSS to plain W3C CSS on the fly. I am using PHP on the server side and want to compile Compass files as the user requests them.
I know how to do that, but my question is about performance: how can I handle concurrent compilation requests through a single SSH connection? Assuming the user should receive the compiled CSS quickly, what are my options for handling this kind of thing?
To add more detail: I want my users to upload their SCSS files to my server and have them compiled into normal CSS on the server; once compiled, the file is sent back to the user for download. To the user it would seem seamless, as they just upload and then download a file, but behind the scenes the compilation takes place. I can do it this way...
The user uploads a file (sends a POST request).
I process the request by getting the file, reading its contents, and writing them to an .scss file.
I connect to the server terminal using SSH, compile the file using compass compile, and then send the compiled file back to the user as the (POST) response.
There would be a huge number of SSH connections and my server would be exhausted. How can I optimize this so that there is just a single SSH connection, with that connection processing all the requests coming to it?
"wants to compile compass files as the user requests it" this is not the intended use of compass or sass they are meant to be compiled to static files and served as static assets. If you still want to go down this road take a look into the sass --compass command and pipe the output to the used via php. We get this question a lot and my only real answer is you may need to build a worker queue system if compile times are too long when serving on demand.
I have recently published a web app to Heroku, built using the Plotly Dash library. The web app depends on uploading files using Dash Core Components (dcc) dcc.Upload and then saving the files using dcc.Store. This works perfectly on localhost; however, when trying to upload files to the live version hosted by Heroku, the app.callback that depends on the uploaded files won't fire.
Are there any issues with using dcc.Upload or dcc.Store on Heroku? I haven't found any related issues on forums. The files are not large (< 1 MB) and, as I said, it all works on localhost.
Since it's not on my localhost, I'm having trouble troubleshooting. Is there an easy way to troubleshoot a live web app on Heroku?
It is possible that the user your app is running as does not have write permission to the directory in which you are saving the file after upload.
Try checking the permissions on the directory. If it still doesn't work, please share the errors you are getting; that would make it easier to suggest solutions.
After some digging I found that the abrupt failure of the callbacks was due to a timeout error with code H12 from Heroku. H12 means that any request that does not get a response within 30 seconds is terminated. The solution was therefore to break the callbacks into smaller pieces so that no single function exceeded the 30-second limit.
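As a rough illustration of that idea (component ids and the Dash 2 style imports are assumptions, not the original app's code), the slow work can be split across two callbacks chained through dcc.Store, so each one responds well within the 30-second window:

import base64

from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Upload(id="upload", children=html.Button("Upload")),
    dcc.Store(id="raw-store"),
    html.Div(id="result"),
])

@app.callback(Output("raw-store", "data"), Input("upload", "contents"))
def stash_upload(contents):
    # Fast first step: just decode the uploaded bytes and stash them.
    if contents is None:
        return None
    _, b64 = contents.split(",", 1)
    return base64.b64decode(b64).decode("utf-8", errors="replace")

@app.callback(Output("result", "children"), Input("raw-store", "data"))
def process(data):
    # Second, separately timed step: do the (lighter) processing here.
    if data is None:
        return "No file uploaded yet."
    return f"Uploaded file contains {len(data)} characters."

if __name__ == "__main__":
    app.run_server(debug=True)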
I have a bunch of HTML5 canvas tests in my library which I run via Karma and Jasmine. If I detect differences in my tests, I show the canvas DOM elements with my generated image output and a diff on the browser page. But when I run my tests in Chrome Headless and/or in my CI environment, I have no way of checking the test results if they fail. And that's the issue I'm currently facing: when I run the tests with a UI they are green; if I run them in Chrome Headless they fail, but I have no chance of checking my visual output.
As a solution I'd like to write my generated images to disk. For local tests I can then check this folder to see what happened, and on CI I can publish the resulting images as artifacts. But here comes the catch: Karma and Jasmine seem to have no proper mechanism in place for this task. I also could not find any plugin tackling this challenge of properly accessing the local file system from your tests.
A tricky aspect is also that I cannot use promise (async/await) operations at the place where I want to save the files: I am within a Jasmine CustomMatcher, which does not have promise support; I could try rewriting my tests again to handle the error reporting to Jasmine.
My attempts so far:
I started with a custom Karma reporter listening to browser logs, potentially using this as a channel to hand the image bytes over to Node for writing to disk. But this additional plugin registration messed with my overall Karma configuration: Karma and Rollup stopped working once I registered my custom reporter, and I never found out whether such large amounts of bytes can even be transferred via this channel.
Then I started on an API via karma-express-http-server to which I "upload" the files to be saved. But I stopped halfway, as such a simple task again seemed to require a bunch of libraries and custom implementation just to get a simple file upload running (karma-express-http-server, multer). I also need to rely on synchronous Ajax calls here, which is not really future-proof.
The Native File System API relies heavily on Promises, so I cannot use it.
There must be a simpler way of just writing a file to disk as part of your tests when using Karma and Jasmine.
My project: go - 1.12.5; gin-gonic; vue-cli - 3.8.2.
On Windows Server 2008, under my local account, I run main.exe and it works well. But when I log off my account, all of the local account's programs are closed, including my Go server.
The first thing I tried was configuring IIS for my Go server. Nothing good came of it.
Then I tried to run main.exe from the SYSTEM account with psexec -s c:\rafd\main.exe. When I log off, the process does not close. But the frontend lives in my account, and SYSTEM does not see the local files (js, html, css) of my project.
How can I start the Go server so that my project does not stop running after I log off?
Two ways to approach it.
Go with IIS (or another web server).
Should you pick this option, you have further choices:
Leave your project's code as is, but
Make sure it's able to be told which socket to listen for connections on—so that you can tell it to listen, say, on localhost:8080.
For instance, teach your program to accept a command-line parameter for that, such as -listen or whatever (a sketch follows below).
Configure IIS so that it reverse-proxies incoming HTTP requests on a certain virtual host and/or path prefix to a running instance of your server. You'll have to make the IIS configuration (the socket it proxies the requests to) and the way IIS starts your program agree with each other.
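A minimal sketch of that first choice (the flag names and the static-file handler are assumptions, not your actual code):

package main

import (
	"flag"
	"log"
	"net/http"
)

func main() {
	// Let the caller (you, or the IIS-side process manager) pick the socket.
	listen := flag.String("listen", "localhost:8080", "host:port to listen on")
	assets := flag.String("assets", ".", "directory holding the js/html/css files")
	flag.Parse()

	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.Dir(*assets)))

	log.Printf("listening on %s", *listen)
	log.Fatal(http.ListenAndServe(*listen, mux))
}

IIS would then be configured to reverse-proxy, say, http://yourhost/ to http://localhost:8080/.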
Rework the code to use FastCGI protocol instead.
This basically amounts to using net/http/fcgi instead of plain net/http for serving (a sketch follows below).
The upside is that IIS (even its dirt-old versions) support FastCGI out of the box.
The downsides are that FastCGI is believed to be slightly slower than plain HTTP in Go, and that you'll lose the ability to run your program in standalone mode.
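A minimal sketch of the FastCGI variant, assuming the web server spawns the process and speaks FastCGI over its standard input (which is what fcgi.Serve does when handed a nil listener):

package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/fcgi"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from Go over FastCGI")
	})
	// A nil listener means: talk FastCGI over stdin, the way a
	// FastCGI-spawning web server such as IIS expects.
	if err := fcgi.Serve(nil, handler); err != nil {
		log.Fatal(err)
	}
}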
Turn your program into a proper Windows™ service or "wrap" it with some helper tool to make it a Windows™ service.
The former is cleaner, as it allows your program to actually be aware of the control requests the Windows Service Management subsystem sends to it. You could also easily turn your program into a shrink-wrapped product if/when needed. You could start with golang.org/x/sys/windows/svc (a bare-bones skeleton is sketched below).
The latter may be a bit easier, but YMMV.
If you'd like to explore this way, look for tools like srvany, nssm, winsw etc.
Note that of these, only srvany is provided by Microsoft® and, AFAIK, it has been missing since Win7/W2k8, so your best built-in bet might be messing with sc.exe.
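If you go the "proper service" route, a skeleton built on golang.org/x/sys/windows/svc might look roughly like this (the service name, listen address and error handling are placeholders):

package main

import (
	"net/http"

	"golang.org/x/sys/windows/svc"
)

type webService struct{}

func (s *webService) Execute(args []string, req <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
	status <- svc.Status{State: svc.StartPending}

	// Run the actual HTTP server in the background.
	go func() {
		_ = http.ListenAndServe("localhost:8080", nil)
	}()

	status <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown}
	for c := range req {
		switch c.Cmd {
		case svc.Interrogate:
			status <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			status <- svc.Status{State: svc.StopPending}
			return false, 0
		}
	}
	return false, 0
}

func main() {
	// Registered once with something like:
	//   sc.exe create mygoserver binPath= "C:\rafd\main.exe"
	_ = svc.Run("mygoserver", &webService{})
}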
In either case, should you pick this route, you'll have to deal with the question of setting up proper permissions on your app's assets.
This question is reasonably complex in itself since there are many moving parts involved.
For a start, you have to make sure your assets are not accessed relative to "the process's current directory" (which may be essentially random when the program runs as a service), but rather from a place the process was explicitly told about when run (via a command-line option or whatever), or from a location figured out using a reasonably engineered guess (and this is a complicated topic in itself).
Next, you have to make sure the account your Windows™ uses to run your service really has the permissions to access the place your assets are stored in.
Another possibility is to add a dedicated account and make the SCM use it for running your service.
Note that in either case, proper error handling and reporting are paramount: when your program runs non-interactively, you want to know when something goes wrong: the socket failed to be opened or listened on, assets were not found, access was denied when trying to open an asset file, and so on. In all these cases you have to 1) handle the error, and 2) report it in a way that lets you deal with it.
For a non-interactive Windows™ program the best way may be to use the Event Log (say, via golang.org/x/sys/windows/svc/eventlog).
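As a hedged illustration of that last point, reporting a fatal startup error to the Event Log could look roughly like this (the source name "mygoserver" is an assumption and has to be registered once, for instance with eventlog.InstallAsEventCreate):

package main

import (
	"net/http"

	"golang.org/x/sys/windows/svc/eventlog"
)

func main() {
	// Open the Event Log under our (previously registered) source name.
	elog, elogErr := eventlog.Open("mygoserver")
	if elogErr == nil {
		defer elog.Close()
	}

	if err := http.ListenAndServe("localhost:8080", nil); err != nil {
		// Running non-interactively, stderr goes nowhere useful; the Event Log
		// is where an operator will actually look. Event ID 1 is arbitrary here.
		if elogErr == nil {
			elog.Error(1, "listen failed: "+err.Error())
		}
	}
}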
The simplest solution would be to use the Windows Task Scheduler.
Start your exe file at system logon, with the highest privileges, in the background. That way, whenever the system logs on, it will start your exe and keep it running in the background (an example command follows below).
You can refer to this answer:
How do I set a Windows scheduled task to run in the background?
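For example (hedged; the task name is a placeholder and the path comes from your question), a one-time setup from an elevated prompt could look like this:

rem /SC ONSTART starts the task at boot (use /SC ONLOGON to start it at logon instead);
rem /RU SYSTEM keeps it running after you log off; /RL HIGHEST runs it with highest privileges.
schtasks /Create /TN "GoServer" /TR "C:\rafd\main.exe" /SC ONSTART /RU SYSTEM /RL HIGHEST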
How should I download a file from an FTP server to my local machine using PHP? Is curl good for this?
You can use wget or curl from PHP. Be aware that the PHP script will wait for the download to finish, so if the download takes longer than your PHP max_execution_time, your PHP script will be killed during runtime (a curl sketch follows below).
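A hedged sketch of the curl variant (host, credentials and paths are placeholders):

<?php
// Download ftp://.../file.zip to a local path using PHP's curl extension.
$remote = 'ftp://user:password@ftp.example.com/path/to/file.zip';
$local  = '/tmp/file.zip';

$fh = fopen($local, 'w');
$ch = curl_init($remote);
curl_setopt($ch, CURLOPT_FILE, $fh);          // stream the body straight into the file
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // don't hang forever on connect

// Remember: curl_exec() blocks, so a large file can still hit max_execution_time.
if (curl_exec($ch) === false) {
    error_log('FTP download failed: ' . curl_error($ch));
}
curl_close($ch);
fclose($fh);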
The best way to implement something like this is to do it asynchronously; that way you don't slow down the execution of the PHP script, which is probably supposed to serve a page afterwards.
There are many ways to implement it asynchronously. The cleanest one is probably to use some queue like RabbitMQ or ZeroMQ over AMQP. A less clean one, which works as well, would be writing the URLs to download into a file, and then implementing a cron job that checks this file for new URLs every minute and executes the downloads (a rough worker sketch follows below).
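As a rough sketch of that second approach (file names, paths and the schedule are assumptions): the web request only appends a URL to a queue file, and a worker run from cron every minute works through it.

<?php
// download_worker.php -- run from cron, e.g.:  * * * * * php /var/www/bin/download_worker.php
$queueFile = '/var/www/data/download_queue.txt';
$targetDir = '/var/www/data/downloads/';

// Grab the queued URLs and empty the queue (good enough for a sketch; a real
// implementation would lock the file or use a proper queue).
$urls = file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) ?: [];
file_put_contents($queueFile, '');

foreach ($urls as $url) {
    $local = $targetDir . basename(parse_url($url, PHP_URL_PATH));
    $fh = fopen($local, 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fh);
    if (curl_exec($ch) === false) {
        error_log("Download failed for $url: " . curl_error($ch));
    }
    curl_close($ch);
    fclose($fh);
}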
just some ideas...
I have designed a stylesheet/JavaScript file bundler and minifier that uses a simple cache mechanism. It simply writes the timestamp of each bundled file into a file and compares those timestamps to avoid rewriting the "master file" again. That way, after an application update (here, my website) in which CSS or JS files were modified, a single request triggers the caching again only once. That request, and all subsequent requests, would then see a compiled file such as master.css?v=1234567.
The thing is, in my development environment every test passes, integration works great and everything works as expected. However, in my staging environment, on a server with PHP 5.3 compiled with FastCGI, my cached files seem to get rewritten with invalid data, but only when they are not requested from the same browser.
Use case:
I make the first request in Firefox, under Linux. Everything works as expected for every subsequent request from that browser.
As soon as I make a request from Windows/Linux (IE7, IE8, Chrome, etc.), my cache file gets invalid data, but only on the staging server running under FastCGI, not under development!
Running another request from Firefox re-caches the file correctly.
I was then wondering: does FastCGI have anything to do with it? I thought browser clients or even operating systems didn't have anything to do with server-side code.
I know this problem is described abstractly, but pasting any concrete code would be too heavy IMO; I will do it if it can clear up my question.
I have tried remote-debugging my code and found that everything was still working as expected; even the cached file gets written correctly. I saw that when the bug occurs, the file is first written with the expected data, but then gets rewritten with invalid data after two seconds, after PHP has finished its execution!
Is there a way to disable that FastCGI caching for specific requests through a PHP function maybe?
Depending on your environment, you could look at working something out using .htaccess in Apache to serve those requests in regular CGI mode. This could probably be done with just a simple AddHandler and an Action that points to the CGI binary directly. This kind of assumes that you are deploying to some kind of shared hosting environment where you don't have direct access to Apache's config.
Since fastcgi persists the process for a certain amount of time, it makes sense that it could be clobbering the file at a later point after initial execution, although what the particular bug might be is beyond me.
Not much help, I know, but might give you a few ideas...
EDIT:
Here is the .htaccess code from my comment below
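# Assumption: the host's PHP CGI binary is reachable as /cgi-bin/php5.cgi; adjust the Action path for your server.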
Options -Indexes +FollowSymLinks +ExecCGI
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php5.cgi