Heroku: Free dyno file system limitations

So I have a Heroku free account and I'm trying to run my portfolio on it. It uses a JSON file to store my blog posts and portfolio information, which I can update through a basic CMS I built for it.
I wrote an article and saved it, but when I checked the next day the article was gone. I tested this theory by trying again with a test article; again, the next day the article had gone. I was left with just the initial content I pushed to Heroku when I deployed the project.
Does this mean the Heroku free dyno does not retain the file system, and in fact rebuilds the entire project every time it spins down and is spun up again? It certainly appears that way.
Can somebody confirm this for me?
Thanks.

I did a little more research; it turns out I had missed the daily cycling.
This is what Heroku has to say about its file system:
https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted
The Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy. This is similar to how many container based systems, such as Docker, operate.
In addition, under normal operations dynos will restart every day in a process known as "Cycling".
That last part answers my question. I did not realise that dynos are cycled daily; I had assumed the filesystem was only reset on a server restart.
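To make the failure mode concrete, here is a minimal sketch (assuming a Ruby app and the aws-sdk-s3 gem purely for illustration; the original post doesn't specify the stack, file names or storage choice). Anything written to the dyno's local disk disappears on the next restart or daily cycle, so the CMS would need to persist the JSON to some external store instead:

require "json"
require "aws-sdk-s3"

posts = JSON.parse(File.read("posts.json"))
posts << { "title" => "New article", "body" => "..." }

# This write only lasts until the dyno restarts or is cycled:
File.write("posts.json", JSON.pretty_generate(posts))

# Persisting to an external store (here S3, as one option) survives cycling:
s3 = Aws::S3::Client.new(region: ENV.fetch("AWS_REGION", "us-east-1"))
s3.put_object(bucket: ENV["S3_BUCKET"], key: "posts.json",
              body: JSON.pretty_generate(posts))

A database add-on such as Heroku Postgres would work just as well; the point is that durable state has to live outside the dyno's filesystem.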

Related

Heroku appears to be leaking memory after api-maintenance deploy - Enable allow-multiple-sni-endpoints feature

My Heroku app seems to be consuming more and more memory, so I think I may have a memory leak. It started at midnight, but I haven't made any changes to the app in several days.
The latest change was a Heroku maintenance deploy, so I'm wondering if that might be the cause.
I plan to restart soon to avoid going over my memory limit.
It seems that Heroku had several platform issues yesterday. There were a number of "platform initiated" dyno restarts, and things appear to be back to normal now.

Looking at old logs on Heroku

I have a Ruby Heroku app. It crashed. I rebooted it and it works. Fine. Such is the life of a computer program.
Now I want to look at the error logs to see WHY it crashed. However, when I go to view the logs, they start at the reboot. How do I find the logs from 30 minutes ago, when the app crashed?
It appears that restarting an instance clears its logs, so it's best to restart with care.
If you'd like to keep logs long-term, look at implementing Log Drains.
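The simplest way to add a drain is the CLI, e.g. heroku drains:add <url> -a <app>. As a rough sketch of doing the same from Ruby with the platform-api gem (the token, app name and endpoint URL below are placeholders, and the resource/method names are my assumption based on that gem's generated client, not something the answer above specifies):

require "platform-api"

heroku = PlatformAPI.connect_oauth(ENV["HEROKU_API_TOKEN"])

# Attach an HTTPS log drain so log lines are shipped off the dyno as they are produced
heroku.log_drain.create("your-app-name", url: "https://logs.example.com/ingest")

# List the app's drains to confirm it was added
heroku.log_drain.list("your-app-name").each { |drain| puts drain["url"] }

Once a drain is in place, the external service keeps the history, so a crash followed by a restart no longer takes the evidence with it.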

How long should a fresh installation take?

I have a remote database and a copy of nopCommerce running locally (from Visual Studio). During the first run I hit the install button, and some 40 minutes later the page still appears to be loading.
I looked into the database and all the tables seem to be there. I didn't bother copying the sample data, so there shouldn't be much left to do beyond creating the tables, should there?
So the question is: how long should the installation take? Is it safe to assume it's done and ignore the fact that the page still appears to be loading?
Your assumption that it's done and you can ignore the still-loading page is absolutely correct. Usually installation takes 1-4 minutes, depending on your hosting specifications.
But we have also run into this issue several times, where the installation page hangs even though we know for sure the installation has already completed.
It can take 5 minutes or more to install nopCommerce against a remote database when nopCommerce itself is running on a local machine. It's probably better to do the installation against a local DB, or to put nopCommerce on the same remote server as the database.
In my case nopCommerce would hang after the installation was done, but that likely wouldn't happen with a local DB (the hang is probably caused by the long installation time).

Install heroku cli on linux *without* root, and *no auto update*

I am a CS professor trying to teach web app development (Flask, Rails, SparkJava, etc.) using Heroku.
Our computing environment is a centrally managed Linux system where neither the students nor I have root permission. The students also have very limited file and disk quotas: 200 MB of space and 4000 individual files.
I used to be able to give them access to the Heroku toolbelt by hacking the "standalone install" to work around the default assumption that the person doing the install has root permission.
But that no longer works. When I install into a directory and run from there, the Heroku toolbelt keeps trying to "auto-update" itself into ~/.local/share for each individual user, and since the Heroku CLI installation contains over 12,000 files (!), it blows their file quota.
This is madness. I want to have just ONE installation of the Heroku toolbelt client, update it centrally, and NOT have each student keep their own copy. Is this too much to ask? Is there any way to do it?
There used to be some trick to make the client think it was already up to date, or some way to configure it NOT to auto-update, but I can't find how to do it any more.
(Thanks in advance for all of your good ideas such as: have them work on their own laptops, make a VM, have them work on AWS, etc. Those are all great ideas for some parallel universe in which they are feasible. If I could use any other computing environment, I'd already be doing that. This is the one I have. If I can't make Heroku work here, I just can't use Heroku in class. And it's frustrating, because it used to work.)
As a quick and dirty solution you can change the following in lib/heroku/updater.rb:

def self.needs_update?
  compare_versions(latest_version, latest_local_version) > 0
end

to

def self.needs_update?
  false
end

and you will not be nagged about updates anymore. You will have to reapply this change each time you update manually, since an update overwrites the patched file.
A better, more maintainable solution would be to get a config value (or something similar) for controlling this behaviour accepted upstream in the toolbelt, which is open source at https://github.com/heroku/heroku
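As a hedged sketch of what such an upstream change might look like (the HEROKU_SKIP_UPDATE variable name is purely hypothetical, not an existing toolbelt option):

def self.needs_update?
  # Hypothetical opt-out so a centrally managed, shared install can disable auto-update
  return false if ENV["HEROKU_SKIP_UPDATE"] == "1"
  compare_versions(latest_version, latest_local_version) > 0
end

That would let one shared installation be updated centrally, with each student simply exporting an environment variable instead of carrying a patched copy of the toolbelt.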

Cloud9 - Workspaces that update on their own

Sometimes I see that my workspaces have been updated on their own.
Cloud9 says "Updated X time ago", but gives no more information than that.
Could this be a Cloud9 error? It happens in workspaces where I am the only member, and I have already changed my password.
Does this happen to anyone else?
UPDATE:
Response from Cloud9.
"Hi Sebastian,
Thanks for getting in touch about this issue. I looked into your issue, and it turns out to be our automated backups. We actually take periodic backups of both active and inactive workspaces, and this updates the "updated" time stamp. I'll talk to the teams about not updating the time stamp when backups are performed. This should be very helpful to avoid any future concerns users may have :)"
