H2O Steam uses tmp directory to store deployments

I'm currently using H2O Steam version 1.1.6 to deploy model endpoints, which is working great!
However, Steam uses the /tmp directory to store these deployments, even though /tmp is only meant for temporary files. Because /tmp has been cleared on my server, I've lost some deployments.
Is there a way to change where these files are stored?
Additionally, it's not possible to delete the deployments through the Steam UI because the files are gone. Is there a way to remove those entries as well?

I was wrong; the deployments can actually be found in steam/var/master/model.
The files and directories located in /tmp are created by Jetty.

Related

Heroku and Nuxt file uploader not working

I have a PWA made with NuxtJS correctly deployed and working on Heroku.
I would like to implement a file uploader and manager so that I can manage some files in a directory (~/static/files) from my front-end through some APIs.
On localhost it works fine: I have my directory, and when I add or delete a file, it is created or deleted on the file system (as it should be).
My question is: why can't I do the same on Heroku? I tried uploading a file and deleting it, and that works, but the problem comes when I restart the app (through heroku ps:restart -a appname): the file is gone, as if it were saved in RAM and not on the file system.
If I try to look at the directory where the files should be through heroku run bash -a appname, no files are shown.
How can I fix this?
The Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy. This is similar to how many container based systems, such as Docker, operate.
In addition, under normal operations dynos will restart every day in a process known as "Cycling".
These two facts mean that the filesystem on Heroku is not suitable for persistent storage of data. In cases where you need to store data we recommend using a database addon such as Postgres (for data) or a dedicated file storage service such as AWS S3 (for static files).
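To make that concrete, here is a minimal sketch of uploading a file to S3. It uses the AWS SDK for Java purely for illustration (the Node.js SDK follows the same pattern for a Nuxt app), and the bucket name and file are hypothetical:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import java.io.File;

    public class S3Upload {
        public static void main(String[] args) {
            // Credentials come from the default provider chain
            // (environment variables, ~/.aws/credentials, etc.).
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // "my-app-uploads" is a hypothetical bucket; the object survives
            // dyno restarts because it lives outside the Heroku filesystem.
            s3.putObject("my-app-uploads", "files/report.pdf", new File("report.pdf"));
        }
    }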

Backup strategy for an Ubuntu Laravel application

I am searching for a backup strategy for my web application files.
I am hosting my (Laravel) application on an Ubuntu (18.04) server in the cloud and currently have around 80 GB of storage that needs to be backed up (and this grows fast). The biggest files are around 30 MB; the rest are small jpg/txt/pdf files.
I want to make a full backup of the storage directory at least twice a day and store it as a zip file on a local server. I have two reasons for this: independence from cloud providers, and archiving.
My first backup strategy was to zip all the contents of the storage folder and rsync the zip. This goes well up to a couple of gigabytes, but then the server gets completely stuck on CPU usage.
My second approach was plain rsync, but with that I can't track when a file is added or deleted.
I am looking for a good backup strategy that preferably generates zips before or after the backup and stores them, so we can browse and examine them back in time.
Strangely enough, I could not find anything that suits me; I hope someone can help me out.
I agree with @RobertFridzema that the whole server becomes unresponsive when using the ZIP functionality from the Spatie package.
I had the same situation with a customer project. My suggestion is to keep the source code files under version control, back up only the dynamic/changing files with rsync (incremental works best and is fast), and create a separate database backup strategy. For example, with MySQL/MariaDB: run mysqldump, encrypt the resulting file, and move it to external storage as well.
If ZIP creation is still a problem, I would use storage that is already set up with RAID functionality, or, if that is not possible, I would definitely not run the ZIP functionality on the live server: rsync incrementally to another server and run the backup strategy there.
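As a rough sketch of that dump-encrypt-ship pipeline, assuming mysqldump, gpg, and rsync are available (the database name, passphrase file, and backup host below are all hypothetical; in practice you would likely drive this from cron or the Laravel scheduler rather than from Java):

    import java.io.IOException;

    // Rough sketch of the dump -> encrypt -> ship strategy described above.
    public class NightlyBackup {
        public static void main(String[] args) throws IOException, InterruptedException {
            String[] steps = {
                // Hypothetical database name; --single-transaction avoids locking InnoDB tables.
                "mysqldump --single-transaction app_db > /backups/app_db.sql",
                // Hypothetical passphrase file; GnuPG 2.1+ may also need --pinentry-mode loopback.
                "gpg --batch --symmetric --passphrase-file /root/.backup_pass /backups/app_db.sql",
                // Hypothetical backup host; ships the encrypted dump off-site.
                "rsync -a /backups/app_db.sql.gpg backup@storage.example.com:/archive/"
            };
            for (String step : steps) {
                // Run each step through a shell so the output redirection works.
                Process p = new ProcessBuilder("bash", "-c", step).inheritIO().start();
                if (p.waitFor() != 0) {
                    throw new IllegalStateException("Backup step failed: " + step);
                }
            }
        }
    }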
Spatie has a package for Laravel backups that can be scheduled with the Laravel job scheduler. It will create zips of the entire project, including the storage directories:
https://github.com/spatie/laravel-backup

How to create a partition in remote ApacheDS, LDAP server?

I know how to create a partition in a local ApacheDS instance from this article. My current problem is that I don't know how to create a partition in a remote ApacheDS instance.
I am accessing the remote ApacheDS server (on CentOS) from Apache Directory Studio (on Windows).
Any help would be appreciated.
ApacheDS
Version: 2.0.0-M14
Apache Directory Studio
Version: 2.0.0.v20130517
I don't know if your problem is that you can't access the remote instance, or something else.
But if you want to create a partition, follow this "guide".
ApacheDS seems to have a very poor tutorial.
Contrary to the other answers, here I explain the real problem. The sad truth is the following:
You can't manipulate the partitions of a non-local Apache Directory Server with Apache Directory Studio.
You can't even do this with a locally running one. The only partitions you can manipulate are those of Apache Directory Server instances running inside your Apache Directory Studio.
However, there is a workaround for the problem. It is particularly useful if you are using Linux, or at least have Cygwin at hand.
The Apache Directory Server has a complex directory structure, full of small files, partly binary and partly text data.
This data structure doesn't contain any filesystem references, so you can freely clone it.
Create an LDAP server inside your Apache Directory Studio. Open its properties. You get a popup form. Inside this form, you will see something like this:
Location /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
This is what you want!
This is the directory structure, where your local ApacheDS is running!
And you can now easily synchronize this data structure, ideally with a simple rsync command, to your server or back!
So:
You create the new Apache Directory Server instance inside Apache Directory Studio.
You check its properties.
You stop it, and synchronize the server-side instance directory into this local one, for example: rsync -va --delete you@your.server.com:/srv/apacheds/instance/ /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
You play with the partitions as you wish.
You synchronize it back.
Of course, if you are playing with the Apache Directory Server file structure at such a low, file-system level, the server needs to be stopped!

Access to filesystem on AppHarbor

I want to try AppHarbor, but I have an application which stores uploaded files in a certain place on the filesystem. Is it compatible with AppHarbor? Can I store files in the file system and access them later?
(what kind of path can I expect, like c:\blabla something or what?)
Thank you.
You can store files on the local filesystem, but the application directory is wiped on each new deployment so it's not recommended to rely on for file storage.
Instead we recommend that you use a cloud storage service such as Amazon S3, Google Cloud Storage or similar. There are .NET libraries for both services.
We recently wrote a blog post about uploading files directly to S3 and GCS from the browser that you might want to read.
If you are using a background worker, you need to enable 'File System Write Access' in the settings of your application.
Then, you are permitted access to write to: Path.GetTempPath()
Sourced from this support question: http://support.appharbor.com/discussions/problems/5868-create-directory-in-background-worker

What's the proper way to access the filesystem from a bundle independent of the launcher?

I have a few resources (log files, database files, separate configuration files, etc.) that I would like to be able to access from my OSGi bundles. Up until now, I've been using a relative file path to access them. However, now my same bundles are running in different environments (plain old Felix and Glassfish).
Of course, the working directories are different and I would like to be able to use a method where the directory is known and deterministic. From what I can tell, the working directory for Glassfish shouldn't be assumed and isn't spec'ed (glassfish3/glassfish/domains/domain1/config currently).
I could try to embed these files in the bundle themselves, but then they would not be easily accessible. For instance, I want it to be easy to find the log files and not have to explode a cached bundle to access it. Also, I don't know that I can give my H2 JDBC driver a URL to something inside a bundle.
A good method is to store persistent files in a subdirectory of the current working directory (System.getProperty("user.dir")) or of the user's home directory (System.getProperty("user.home")).
Temporary and bundle-specific files should be stored in the bundle's data area (BundleContext.getDataFile()). Uninstalling the bundle will then automatically clean them up. If different bundles need access to the same files, use a service to pass this information.
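A minimal sketch of reading those locations from inside a bundle (the class name and file name are hypothetical):

    import java.io.File;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Hypothetical activator showing the storage locations discussed above.
    public class StorageLocator implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            // Launcher-dependent, but sometimes acceptable, locations:
            File workingDir = new File(System.getProperty("user.dir"));
            File homeDir = new File(System.getProperty("user.home"));

            // The bundle's private data area; the framework deletes it
            // when the bundle is uninstalled.
            File dataArea = context.getDataFile("");      // root of the data area
            File dbFile = context.getDataFile("app.db");  // hypothetical file inside it
        }

        @Override
        public void stop(BundleContext context) {
        }
    }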
The last option, for really long-lived, critically important files such as major databases: these should be stored in /var (or the Windows equivalent). In those cases I would point out the location with Config Admin.
In general it is a good idea to deliver the files in a bundle and expand them to their proper place. This makes managing the system easier.
You have some options here. The first is to use the Configuration Admin service to specify a configuration directory, so you can access files if you have to.
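As a sketch of that first option, a component could receive the configured directory through a ManagedService callback; the PID and property key below are hypothetical:

    import java.util.Dictionary;
    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedService;

    // Register this service with the property service.pid = "com.example.storage"
    // (a hypothetical PID); Config Admin then calls updated() with the
    // configuration bound to that PID.
    public class StorageConfig implements ManagedService {
        private volatile String configDir;

        @Override
        public void updated(Dictionary<String, ?> props) throws ConfigurationException {
            if (props != null) {
                // "config.dir" is a hypothetical property key.
                configDir = (String) props.get("config.dir");
            }
        }
    }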
For log files I recommend Ops4J Pax Logging. It allows you to simply use a logging API like slf4j and Pax Logging does the log management. It can be configured using a log4j config.
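With Pax Logging deployed, bundle code only needs the slf4j API; a minimal sketch (the class name is hypothetical):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical bundle class; with Pax Logging in the framework, this
    // call is routed to the central log configuration, so the bundle never
    // has to know where the log files live.
    public class InventoryService {
        private static final Logger log = LoggerFactory.getLogger(InventoryService.class);

        public void restock(String sku, int amount) {
            log.info("Restocking {} by {}", sku, amount);
        }
    }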
I think you should install the DB as a bundle too. For example, I use Derby a lot in smaller projects. Derby can simply be started as a bundle and then manages the database files itself. I'm not sure about H2, but I guess it could work similarly.
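For illustration, here is a minimal sketch of opening an embedded Derby database over JDBC, assuming the Derby embedded driver is available; the database and table names are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DerbyDemo {
        public static void main(String[] args) throws SQLException {
            // ";create=true" tells the embedded driver to create the
            // database directory (here "notesDB") if it doesn't exist yet;
            // Derby manages all the files under that directory itself.
            try (Connection conn = DriverManager.getConnection("jdbc:derby:notesDB;create=true");
                 Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(255))");
            }
        }
    }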
