pm2 process using the old system environment variables on resurrection - Windows

I have created a Node application that subscribes to an OPC UA server and stores the data in our S3 bucket. I am using the node-opcua module for that purpose.
I am working on a Windows server via RDP. The node-opcua module creates some files under %LOCALAPPDATA%\Temp as part of its operation and uses them, and since I run the application with pm2, the application picks up the path of those files from the TMP and TEMP environment variables, which are generated dynamically for the session.
When the Windows server restarts, those files are deleted and the temporary location changes. I have already run pm2 save and put the pm2 resurrect command in a batch file, with a shortcut in the Windows Startup folder, so that the process starts automatically after a reboot.
The issue was that the pm2 process was resurrected, but the node-opcua code running under pm2 still threw a "%LOCALAPPDATA%\Temp\{some_path} file not found" error. I ran pm2 restart manually, but that didn't fix it either.

At first I treated this as a node-opcua problem and thought about how to make the module use the new system variables, but that was not in my hands, since the process keeps creating and deleting temporary files. So I needed pm2 itself to use the new environment variables, which hold the updated path after a system reboot, and they were not being refreshed even after pm2 restart.
So, to get the variables updated, I figured out two solutions:
Either delete the old process and initiate a new pm2 process for the application, and put that in the batch file that is called at server reboot.
Or add pm2 restart {name} --update-env after pm2 resurrect, and the system variables will be updated (see the sketch below).
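For reference, a minimal sketch of the startup batch file with the second option applied (assumptions: pm2 is on PATH for the user running the batch file, and the app was saved under the hypothetical name opcua-app):

rem startup.bat - runs via a shortcut in the Windows Startup folder
pm2 resurrect
rem Push the fresh TMP/TEMP values of this session into the resurrected process
pm2 restart opcua-app --update-env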

Related

How can I test a bash script for setting up a new machine

I have a couple of bash scripts to automate the setup process for new devices, e.g. installing packages, configuring environment variables, etc.
I'm working on making the process more automated with autoexpect, and adding a few other things; however, it's difficult to test, since every time I run the install script I have to go back and manually undo the changes it made. Is there a way to run the scripts without actually installing anything, so I can observe the behaviour for testing? Something like the --dry-run option in rsync.
To configure your machine, test it quickly, and know you won't cause problems on your local PC: create a VM using VirtualBox or VMware Player and snapshot it so you can revert to the state before you ran the script. Then run your script on this VM and check which configuration has been applied successfully.
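With VirtualBox, for example, the snapshot/restore cycle can be scripted from the host (a sketch; "setup-test" is a hypothetical VM name and "clean-state" a hypothetical snapshot name):

VBoxManage snapshot "setup-test" take "clean-state"        # capture the pristine state
VBoxManage startvm "setup-test"                            # boot the VM and run the install script inside it
VBoxManage controlvm "setup-test" poweroff                 # shut it down after inspecting the result
VBoxManage snapshot "setup-test" restore "clean-state"     # revert, ready for the next test run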

Is there any way to inspect the contents of the RocksDB instance used by NEAR Protocol?

Disclosure: I work with NEAR and am currently on-boarding.
When I start up a local node on a clean machine, I see that a .near folder is created in my home directory with a few configuration files (the exact files seem to depend on which start_ script I run). Another folder called data appears inside the .near folder.
Running strings ~/.near/data/*.sst in that folder spits out a few lines starting with the string "rocksdb", which led me to this reference to RocksDB.
Is there any way to inspect the contents of a node's RocksDB instance?
I found Keylord but it crashes when I try to configure a new connection to the database (by pointing the connection to ~/.near/data). I didn't pursue that thread.
PSA1: sometimes it's useful to back up the ~/.near folder between node restarts if you want to reset the environment or avoid reusing old data while troubleshooting:
mv ~/.near ~/.near_`date +%Y-%m-%d.%s`
PSA2: on macOS you can watch what happens to the contents of the ~/.near folder while the node boots up and runs (brew install watch):
watch -d -c -n 0.5 find ~/.near
The contents of RocksDB are serialized using our own binary serialization format (http://borsh.io/), so you won't be able to examine them with general-purpose third-party tools.

Docker: Oracle database 18.4.0 XE wants to configure a new database on startup

I'm trying to configure an Oracle Database container. My problem is that whenever I try to restart the container, the startup script wants to configure a new database and fails to do so, because there is already a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the DB without passing any environment variables, and after restarting the container the database is up and running. This is an annoying workaround, but it is the one that seems to work. If you have other ideas, please let me know.
I think that you should connect to the Linux image with:
docker exec -ti containerid bash
Once there, you should manually check the same things the script checks: whether $ORACLE_BASE/oradata/$ORACLE_SID exists, and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
Another thing that you should execute manually is:
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
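Roughly, those manual checks could look like this inside the container (a sketch; it assumes ORACLE_BASE and ORACLE_SID are already set in the container's environment):

# The data files the script looks for
[ -d "$ORACLE_BASE/oradata/$ORACLE_SID" ] && echo "oradata for $ORACLE_SID exists"
# If this directory is missing, the script treats the DB as unconfigured
[ -d "$ORACLE_BASE/admin/$ORACLE_SID/adump" ] || echo "adump directory is missing"
# grep -qc exits 0 when the message is found, i.e. the script thinks no DB is configured
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured" && echo "script reports: not configured"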
UPDATE AFTER COMMENT:
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
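For example (assuming the script in question is the init script itself):

# -x prints each command before it runs, so you can see which check fails
bash -x /etc/init.d/oracle-xe-18c start 2>&1 | less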
What makes no sense is that you say $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist; if the container deployed and you have a database running, the script should have created it the first time it ran.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, but I'm not sure) was only partially written with this fact in mind, which means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database will be up and running, but with two suspicious errors when it tries to move two files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the Docker container, I get an error message, because the configuration script created folder/file names according to those variables; the Docker image, however, is built in a way that only supports the default names, so on restart it tries to configure a new database and then finds that one already exists.
I hope it makes sense.
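For what it's worth, the working setup is roughly the following, with the defaults kept, i.e. no ORACLE_SID or ORACLE_PWD passed (a sketch; the image tag oracle/database:18.4.0-xe and the /opt/oracle/oradata volume path match what the stock Oracle docker-images build produces, so adjust them to your own build):

# First run: the script configures the default XE database on the named volume
docker run -d --name oracle-xe \
  -p 1521:1521 -p 5500:5500 \
  -v oradata:/opt/oracle/oradata \
  oracle/database:18.4.0-xe
# Later restarts reuse the already-configured database on that volume
docker stop oracle-xe
docker start oracle-xe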

GCE Windows startup Script is not running

I have a simple Django app which I want to keep running on a specific GCE instance. Sometimes the instance gets restarted for reasons outside my control. I created a batch script and tried putting it in the Startup folder, both for my user and in the common folder; it didn't work. I also tried supplying the script via sysprep-specialize-script-url (using Cloud Storage), sysprep-specialize-script-cmd and sysprep-specialize-script-bat; that didn't work either. Here's the content of the batch script:
cd C:\Users\kartik_domadiya\Desktop\happierMiscGoogleCloud
manage.py runserver 0.0.0.0:80
pause
I tried running C:\Program Files\Google\Compute Engine\metadata_scripts\run_startup_scripts.cmd manually and it worked (with any metadata key). So I can see that there's no problem with the script itself.
I even tried putting the batch script in Task Scheduler, which didn't work either.
So is there any way I can debug the problem and find out why the batch script isn't working? I am using Windows 2012 R2, if that matters.
PS: I know that's a development server and should not be used in production.
I moved the code to C:\code (basically out of any particular user's folder), gave all users access to it (Right Click > Properties > Security), updated the batch file, and put it into the Startup folder (Run > shell:startup).
It started working after that. I suppose the issue was due to access permissions.
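The updated batch file ended up looking roughly like this (a sketch; the project folder name under C:\code and the explicit python call are assumptions, adjust them to your layout):

rem startup.bat - placed in shell:startup
cd /d C:\code\happierMiscGoogleCloud
rem Calling python explicitly avoids depending on a particular user's .py file association
python manage.py runserver 0.0.0.0:80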

monitor folder and execute command

I'm looking for a solution for monitoring a folder for new file creation and then executing a shell command on the created file. The scenario: I have a host machine that runs a virtual machine, and they share a folder. What I want is that when I create or copy a new file into that shared folder on my host machine, the VM should be able to detect the change. I have tried incron and inotify, but they only work when I do the copy or create as a user inside the VM. Thanks.
Method 1 in this answer may help: Bash script, watch folder, execute command
Just run that script in your VM, and you should be able to detect changes made by the host.
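If inotify events don't fire for changes made on the host side of the shared folder, a plain polling loop is a fallback that only relies on the files themselves (a sketch, not necessarily the linked answer's Method 1; /mnt/shared and my_command are placeholders):

#!/bin/bash
# Poll the shared folder and run a command on every file that appears.
WATCH_DIR=/mnt/shared
SEEN=$(mktemp)

while true; do
    find "$WATCH_DIR" -type f | sort > "$SEEN.new"
    # Lines present in the new listing but not in the previous one are new files
    comm -13 "$SEEN" "$SEEN.new" | while read -r f; do
        echo "New file detected: $f"
        # my_command "$f"     # replace with the real command
    done
    mv "$SEEN.new" "$SEEN"
    sleep 2
done

Polling is cruder than inotify but works regardless of which side of the shared folder the file was created on.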
