I have just installed a new instance of Rocket.Chat (on heroku), and I'm beginning to play with the configuration. I changed the colors, added some Incoming and Outgoing Integration Scripts and changed some other parameters here and there.
Is it possible to export those changes to a configuration file, so that if I have to redeploy another Rocket.Chat instance elsewhere I could apply this configuration easily?
I will share my own experience.
I installed Rocket.Chat manually (step by step).
When I later needed to reinstall my VPS, I dumped all the data from the MongoDB database to a .gz archive and was able to restore it without trouble using the following commands:
# for dump
mongodump --archive=rocketchat.gz --gzip
# for restore
mongorestore --gzip --archive=rocketchat.gz
MongoDB's dump doc - MongoDB's restore doc
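If the server hosts other MongoDB databases, the dump can be limited to just the Rocket.Chat database with --db, and given a date-stamped archive name. This is only a sketch; it assumes the database uses the default name rocketchat:

```shell
# date-stamped dump of only the Rocket.Chat database
# (assumes the database is named "rocketchat", the default)
ARCHIVE="rocketchat-$(date +%F).gz"
if command -v mongodump >/dev/null 2>&1; then
    mongodump --db rocketchat --archive="$ARCHIVE" --gzip
else
    echo "mongodump not installed; skipping" >&2
fi
```

Restoring is the mirror image: mongorestore --gzip --archive="$ARCHIVE".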
Good luck
I've deployed the "Database Backup/Restore Add-On" on a PostgreSQL node in one of my Jelastic environments (configured to use a Backup Storage environment that is also deployed in my Jelastic).
But no backup is performed, and no error message is displayed in the Jelastic console.
(screenshots: Database Backup/Restore Add-On, Add-On Configuration, Backup Storage environment)
Despite the recurring backup set in the add-on configuration, and despite my attempts to trigger a manual backup (using the "Backup Now" button on the add-on), nothing happens. I was expecting to find an SQL dump file (reflecting the content of my PostgreSQL database) in the /data/environment_name folder of my Backup Storage environment, but the folder remains empty.
(screenshot: empty folder - no backup file generated)
Do you have any advice on how to configure this add-on correctly? Where can I find the logs generated by this add-on, to check whether there are any issues?
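As a sanity check, independent of the add-on, you could run pg_dump by hand against the PostgreSQL node to confirm credentials and connectivity before digging further. The host, user, and database names below are hypothetical placeholders:

```shell
# hypothetical manual dump to verify the database is reachable at all;
# host, user, and database names are placeholders, not real values
manual_dump() {
    PGPASSWORD="$DB_PASSWORD" pg_dump \
        -h node12345-env.jelastic.example.com \
        -U webadmin \
        -d mydatabase \
        -f /tmp/manual_dump.sql
}
```

If a manual dump succeeds, the problem is likely in the add-on's scheduling or its connection settings rather than in the database itself.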
Thanks in advance for your support.
We would like to use NiFi Registry with git as the storage engine. For that, I modified providers.xml and was able to save the flows there.
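For reference, the git-backed flow storage is configured in providers.xml roughly like this; the directory, remote name, and credentials below are placeholders, and the property names are those of the GitFlowPersistenceProvider:

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <!-- local clone of the flow-storage repository; must already be a git work tree -->
    <property name="Flow Storage Directory">./flow_storage</property>
    <!-- push to this remote after each commit (leave empty to keep commits local) -->
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">nifi-registry-bot</property>
    <property name="Remote Access Password">changeme</property>
</flowPersistenceProvider>
```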
Challenges:
There is no two-way sync. We can only save flows modified through NiFi; if we modify a flow directly in the git location, it will not be reflected in NiFi Registry.
There is no review or approval process for NiFi Registry. A user has to log in to the nifi-registry server, create a branch and issue a pull request.
As a workaround, we can delete the database file (H2) and restart NiFi Registry.
Lastly, everything should be automated in CI/CD, like what we do for a regular Maven project.
Any suggestions?
The purpose of the git storage is mostly to let users visualize the differences through tools like GitHub, or any other tool that supports diffs; plus, by pushing to a remote you also get a remote backup of the flow content. It is not meant to be modified outside of the application, just like you wouldn't bypass an application, go right into its database, and start changing data.
This is my setup:
Bitbucket Repo of HTML docs.
Elastic Beanstalk Environment
EC2 c3 Instance (8GB Elastic Block Store attached)
So I connect Deploybot to Elastic Beanstalk successfully and deploy. Path is default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find the files anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; instead, all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and thus constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post, and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documents.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd) in the process.
In the process of writing this question I looked more closely at the Deploybot log and traced that Deploybot pushes a .zip to an S3 bucket which it creates, and then triggers an Elastic Beanstalk environment refresh, using whatever built-in webhook there must be.
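The mechanism traced above (zip to S3, new application version, environment update) can be approximated with the AWS CLI. This is only a sketch of what Deploybot appears to do; the bucket, application, and version names are hypothetical, and only the environment ID comes from the log above:

```shell
# rough sketch of the deploy steps, expressed as AWS CLI calls
# (bucket/application/version names are hypothetical placeholders)
deploy_sketch() {
    aws s3 cp site.zip "s3://deploybot-elastic-beanstalk-mysite/site.zip"
    aws elasticbeanstalk create-application-version \
        --application-name mysite \
        --version-label mysite-v1 \
        --source-bundle S3Bucket=deploybot-elastic-beanstalk-mysite,S3Key=site.zip
    aws elasticbeanstalk update-environment \
        --environment-id e-txarhpt4wp \
        --version-label mysite-v1
}
```

Running the equivalent calls by hand (with real names) is one way to rule out Deploybot itself when a deploy silently fails.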
So whatever webhook that is looks for Apache (httpd), and the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched my Deploybot to SFTP with a public key and it works. (I'm not clever enough to edit/write my own Beanstalk environment refresh webhook yet.)
I know how to create a partition in a local ApacheDS instance from this article. The current problem is that I don't know how to create a partition in a remote ApacheDS.
I am accessing the remote ApacheDS server (on CentOS) from Apache Directory Studio (on Windows).
Any help would be appreciated.
ApacheDS
Version: 2.0.0-M14
Apache Directory Studio
Version: 2.0.0.v20130517
I don't know if your problem is that you can't access the remote instance, or something else.
But if you want to create a partition follow this "guide".
ApacheDS seems to have a very bad tutorial.
Contrary to the other answers, here I explain the real problem. The sad truth is the following:
You can't manipulate the partitions of a non-local Apache Directory Server with Apache Directory Studio.
You can't even do this with a locally running one. The only partitions you can manipulate are those of Apache Directory Server instances running inside your Apache Directory Studio.
However, there is a workaround for the problem. It is particularly useful if you are using Linux, or at least have Cygwin at hand.
The Apache Directory Server has a complex directory structure, full of small files, partly binary and partly text data.
This data structure doesn't contain any filesystem references, so you can freely clone it.
Create an LDAP server inside your Apache Directory Studio. Open its properties. You get a popup form. Inside this form, you will see something like this:
Location /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
This is what you want!
This is the directory structure, where your local ApacheDS is running!
And you can now easily synchronize this data structure, ideally with a simple rsync command, into your server or back!
So,
You create the new Apache Directory Server instance inside the Apache Directory Studio
You check its properties
You stop it, and synchronize the server-side server directory into this one! For example: rsync -va --delete you@your.server.com:/srv/apacheds/instance/ /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
You play with the partitions as you wish
You synchronize it back.
Of course, if you are playing with the Apache Directory Server file structure at such a low, filesystem level, the server needs to be stopped!
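The two synchronization steps above can be wrapped in a pair of helper functions. The host and paths are the example values from this answer, not real ones; remember to stop both servers before pulling or pushing:

```shell
# sync helpers for the workaround above; REMOTE and LOCAL are example values
REMOTE="you@your.server.com:/srv/apacheds/instance/"
LOCAL="$HOME/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11/"

pull_instance() { rsync -va --delete "$REMOTE" "$LOCAL"; }  # server -> Studio
push_instance() { rsync -va --delete "$LOCAL" "$REMOTE"; }  # Studio -> server
```

The trailing slashes matter to rsync: they make it copy the contents of the instance directory rather than the directory itself.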
I'm planning to migrate my TeamCity server to a new physical location. The process is pretty straightforward: export the database, install a vanilla TeamCity server, and import the database via maintainDB.sh.
Since I have a large installation, I decided to back up only the server settings, projects, build configurations, and plugins. My thinking was that I could manually move build logs and artifacts later (I'd rather not try to restore from a 500GB zip file). However, after importing the backup I was unable to see any build agents in the agent pool.
Any ideas? Do you have to install each build agent from scratch just because the server got migrated to a new location? Or do you just have to point the agents to the new server and that's it (and if so, why does the agent pool on the server seem empty)?
Thanks,
If you are changing the server's URL in your migration, which from your question I am assuming you are, then you will need to edit each build agent's properties.
In your ~TeamCity\Install\buildAgent\conf, you will have a buildAgent.properties file. You need to modify this file to point to your new TeamCity location via the serverUrl value. Then you will want to restart the build agent service, and authorize and enable the build agent from your TeamCity interface.
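A minimal buildAgent.properties fragment after the move might look like this (the hostname and agent name are placeholders):

```properties
# <agent home>/conf/buildAgent.properties
# point the agent at the migrated server (placeholder hostname)
serverUrl=http://new-teamcity.example.com:8111
name=agent-1
```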
There is an extremely brief explanation of this here at the bottom of the "Move TeamCity Installation to a New Machine" section.
And to answer your question as to why the agent pool seems empty: it is because the agent is not looking for the server at its new location.