I'm using Sails on Heroku and I would like to optimize concurrency on 2X dynos by using pm2 to run two processes per dyno.
However, I'm facing a quite annoying problem: when I start the app with pm2 start app.js, the first process runs fine but the second one triggers a Grunt error:
2015-09-14T10:41:41.897208+00:00 app[web.3]: Running "clean:dev" (clean) task
2015-09-14T10:41:41.897209+00:00 app[web.3]: Cleaning .tmp/public...ERROR
2015-09-14T10:41:41.897211+00:00 app[web.3]: Warning: Unable to delete ".tmp/public" file (ENOTEMPTY, directory not empty '.tmp/public/images').
Aborted due to warnings.
Has anyone encountered this problem? It's quite annoying, as for the moment I'm paying for a 2X dyno but using only one processor...
Thank you
It can't be achieved by running multiple instances from a single folder, because Sails will modify, concatenate, minify, etc. the files in the assets folder and place them in the .tmp folder. So if you do run multiple instances, the Grunt processes will conflict. If you still want to do it, run the instances from multiple folders: copy your project folder to any other folder, as in the sketch below.
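For example, a minimal pm2 ecosystem file for this copy-the-folder approach might look like the following (a sketch only — the app names and folder paths are placeholders, not anything Sails or pm2 prescribes). Because each instance gets its own working directory, each Grunt build writes to its own .tmp/public and the two no longer race each other:

// ecosystem.config.js — a sketch, assuming the project has been
// copied to two separate folders as described above
module.exports = {
  apps: [
    { name: "sails-1", script: "app.js", cwd: "/app/instance-1" }, // first copy
    { name: "sails-2", script: "app.js", cwd: "/app/instance-2" }  // second copy
  ]
};

Starting both instances is then a single pm2 start ecosystem.config.js.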
I published my first simple app on Heroku with a free dyno. This app writes a simple .txt file, which seems to be written correctly, because my API services are working fine.
But when I try to check this file by entering the file system using "heroku run bash -a MYAPP", I can't see it in the folder where I expected it. It is as if the file doesn't exist. Can someone tell me why?
Thanks.
I found this on https://devcenter.heroku.com/articles/active-storage-on-heroku:
In addition, any files stored on disk will not be visible from one-off dynos such as a heroku run bash instance or a scheduler task because these commands use new dynos.
It is still not entirely clear to me, but at least I know it is a normal (if strange) behaviour of Heroku!
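One way to see this for yourself is to inspect the filesystem from inside the running web dyno rather than from a one-off dyno. A minimal sketch, assuming an Express app (the framework and the /debug-files route name are assumptions for illustration, not part of the original app):

// Temporary diagnostic route: lists the app directory as seen by the
// running web dyno itself — unlike heroku run bash, which boots a
// brand-new one-off dyno with a fresh filesystem
const express = require("express");
const fs = require("fs");

const app = express();

app.get("/debug-files", (req, res) => {
  res.json(fs.readdirSync(__dirname));
});

app.listen(process.env.PORT || 3000);

Hitting that route over HTTP should show the .txt file that heroku run bash cannot see.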
I have created a Node application that subscribes to an OPC UA server and stores the data in our S3 bucket. I am using the node-opcua module for that purpose.
I am working on a Windows server via RDP. As part of its operation, the node-opcua module creates and uses some files under %LOCALAPPDATA%\Temp. I am using pm2 to run the application, and it gets the path of those files via the TMP and TEMP environment variables; the files themselves are generated dynamically by the process.
When the Windows server restarts, those files are deleted and the location of the new files changes. I had already run pm2 save and put the pm2 resurrect command in a batch file with a shortcut in the Windows startup folder, to make sure the process gets started automatically.
The issue was that the pm2 process was resurrected, but the node-opcua code running under pm2 still threw a %LOCALAPPDATA%\Temp\{some_path} file-not-found error. I ran pm2 restart manually, but that didn't work either.
At first I treated it as a problem with the node-opcua module and thought about how to make it use the new system variables, but that was not in my hands, as the process keeps creating and deleting temporary files. So I needed pm2 to use the new system variables holding the updated path after the system reboot, and they were not updating even after pm2 restart.
So, to update the variables, I figured out two solutions:
Either delete the old process and start a new pm2 process for the application, putting that in the batch file that is called at server reboot,
or add pm2 restart {name} --update-env after pm2 resurrect, and the system variables will be updated (see the sketch below).
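If you prefer to keep the boot step in Node rather than a plain batch file, the same two commands can live in a small script launched from the Windows startup shortcut. A minimal sketch — the process name opcua-app is a placeholder for whatever pm2 list shows for your app:

// startup.js — a sketch of the reboot step described above
const { execSync } = require("child_process");

// Bring back the saved process list first...
execSync("pm2 resurrect", { stdio: "inherit" });

// ...then force pm2 to re-read the environment, picking up the
// TMP/TEMP paths that Windows regenerated on reboot
execSync("pm2 restart opcua-app --update-env", { stdio: "inherit" });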
I am running R scripts on a self-hosted DevOps agent. My Windows agent is able to access the directory on the system where it is hosted. Below is the directory structure for my code:
Agent location: F:/agent
Source code: F:/agent/deployment/projects/project1/sourcecode
DWH dump: F:/agent/deployment/DWH_dump/2021/
Output location: F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to invoke R on the system and use the libraries from the system directory.
Problem statement: I am unable to save the output from my R script into the output location directory. It fails with a permission-denied error pointing at that directory.
Output file format: file_name.rds, but the same issue happens even for a CSV file.
Command leading to failure (simplified, with output_loc holding the output location path): saveRDS(result, paste0(output_loc, "/file_name.rds"))
Workaround: I found a workaround: save the files to the source code directory first, then save the same files to the output location directory. This works perfectly fine but costs me two extra hours of run time, because I have to save all the intermediate files and delete them at the end; keeping the intermediate files in memory eats up my RAM.
I have not opened that directory anywhere on the machine. The only application open is the browser where the pipeline is running. I spent hours trying to figure out the reason, with no success. I even checked the system PATH to see whether I had mentioned that directory there, and it is not present.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I have spent two full days already. Any pointers to figure out the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent was not configured as an admin during creation, so after reconfiguring the service to run under my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
I am working on a project using Laravel + Vue.js. There is a form created in a .vue file, and I want to add some text to that file. As I understand it, when any change is made to a .vue file, the application needs to be rebuilt, and I see that the commands to rebuild the application are npm run dev, npm run prod, npm run watch, and npm run watch-poll. I have tried all of these commands after saving the file through FTP, but sometimes the changes are applied (not immediately, but after some delay) and sometimes no changes appear in the browser at all. When I execute the commands themselves, no error occurs and the rebuild finishes successfully. So what could the issue be? Can you please suggest something that I need to configure?
Below is the code of my package.json and webpack.mix.js files, and after that I have attached an image of PuTTY in which the application rebuild is done.
Thanks in advance!
I think you are misunderstanding what npm run dev/prod/watch do. If you alter the .vue file in your resources folder, have npm rebuild your assets, and then FTP only the .vue file to your server, nothing should happen.
Depending on how your Laravel Mix file is set up, the file you need to FTP to the server is most likely public/js/app.js (see the sketch below).
You should really consider getting your local environment set up for development; there is nothing I can imagine worse than viewing your changes by FTP-ing files to a server.
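For reference, a typical webpack.mix.js looks something like this (a sketch only — your actual paths may differ, and the .vue() call applies to Mix 6, while older versions compile Vue components without it):

// webpack.mix.js — compiles resources/js/app.js (and the .vue
// components it imports) into public/js/app.js; the browser loads
// that compiled bundle, never the .vue source file
const mix = require("laravel-mix");

mix.js("resources/js/app.js", "public/js")
   .vue()
   .sass("resources/sass/app.scss", "public/css");

So after every rebuild, it is public/js/app.js (and public/css/app.css) that has to reach the server for changes to show up in the browser.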
Has anyone run into an issue like this before?
I have deployed TiddlyWiki5 to Heroku as an app at https://jameswiki.herokuapp.com. It displays and works as expected at runtime. However, after the server (web dyno) sleeps and wakes up (often after one hour of inactivity), everything is wiped.
I checked my console in Heroku while creating a new tiddler: it still says the new tiddler has been saved, but in fact no new tiddler is saved to the tiddlers folder. Below are my scripts to install and run it:
In package.json
{
  ...
  "scripts": {
    "start": "tiddlywiki . --server",
    "postinstall": "npm install -g tiddlywiki"
  }
}
In Procfile
web: tiddlywiki . --server $PORT $:/core/save/all text/plain text/html "" "" 0.0.0.0
Please help me fix this issue. Thanks.
Heroku's filesystem is ephemeral - it exists only while that dyno exists. When the dyno restarts or ends (as it does when the app goes to sleep), the new one will have a fresh, empty filesystem. If you want files to persist, you need to save them off to something like a database or Amazon S3 for long-term storage.
https://devcenter.heroku.com/articles/dynos#isolation-and-security
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
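As a sketch of the S3 approach (the bucket name, key, and region below are placeholders, and this is not something TiddlyWiki provides out of the box — you would call something like this wherever your app writes a file it needs to keep):

// persist.js — copy a locally written file to S3 so it survives
// dyno restarts, using the AWS SDK v3
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const fs = require("fs");

const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption

async function persist(localPath, key) {
  await s3.send(new PutObjectCommand({
    Bucket: "my-tiddlywiki-backup", // placeholder bucket name
    Key: key,
    Body: fs.readFileSync(localPath)
  }));
}

// e.g. persist("tiddlers/NewTiddler.tid", "tiddlers/NewTiddler.tid");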