Why is `extract_css` set to true for production by default?

I don't see any mention of the reason behind the defaults in either the docs or in the default generated config/webpacker.yml.
I saw someone mention (can't find the link now) that they had set extract_css: false in production without any problems, and I was interested in learning the pros and cons of both options for use in production.
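For reference, the relevant keys in a generated config/webpacker.yml look roughly like this (an illustrative sketch; exact defaults vary by Webpacker version):

default: &default
  source_path: app/javascript
  source_entry_path: packs
  extract_css: false   # CSS stays in the JS bundles and is injected via style-loader

development:
  <<: *default
  compile: true

production:
  <<: *default
  compile: false
  extract_css: true    # CSS is emitted as separate, fingerprinted .css files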


Caching on Heroku CI

I am setting up Heroku CI with the Elixir Phoenix buildpack, and I want to start using Dialyzer.
Dialyzer is a static analysis tool that, before its first run, takes at least a couple of minutes to build a "persistent lookup table" (PLT) of types from Erlang, Elixir, and the project's dependencies. After that, project analysis is much quicker, so I want to cache the PLT.
I've found this section on caching during build: https://devcenter.heroku.com/articles/buildpack-api#caching but I can't find anything about caching in the test-setup or test scripts.
Is there test/CI cache or is it only usable in buildpacks?
(Tomasz, I know that you have already found a way to approach this problem, but I will share publicly here what I shared with you privately so that others might also benefit.)
Is there test/CI cache or is it only usable in buildpacks?
It seems that in test/CI you cannot do it; you have to use a buildpack. Or maybe hold the cache somewhere outside of Heroku (which does not seem like a good approach to me, though).
Have you seen https://github.com/tsloughter/heroku-buildpack-erlang-dialyzer? It seems dated, but maybe it has some hint that can be useful to you.
Setting up buildpacks is rather straightforward, and for your need this seems to be the only option that supports caching.
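For illustration, here is a minimal sketch of how a custom buildpack's compile script could persist the PLT between builds using the cache directory that the Buildpack API provides (the priv/plts location is an assumption based on a typical dialyxir setup):

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir> <env-dir>
set -e
BUILD_DIR="$1"
CACHE_DIR="$2"   # persisted by Heroku between builds

PLT_CACHE="$CACHE_DIR/dialyzer"
PLT_DIR="$BUILD_DIR/priv/plts"   # assumed PLT location for a dialyxir setup

mkdir -p "$PLT_CACHE" "$PLT_DIR"

# Restore a previously built PLT, if one was cached
cp "$PLT_CACHE"/*.plt "$PLT_DIR"/ 2>/dev/null || true

# ... fetch dependencies and run the Dialyzer/mix tasks here ...

# Save the (re)built PLT back into the cache for the next build
cp "$PLT_DIR"/*.plt "$PLT_CACHE"/ 2>/dev/null || true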

JavaScript quality profile use is flipping after each new analysis

I noticed that after each Sonar analysis, use of the 'Sonar way' (JavaScript) profile toggles.
Then, each time it's re-enabled, all JavaScript issues are tagged as new!
What could be the cause of this behavior?
How can I fix it?
Thanks for any advice.
I see three possibilities:
you have someone with too much time on his/her hands manually flipping the configuration
you have sonar.profile somewhere in your analysis configuration. The question is how/why it would be getting set/unset
you have a person, or more likely a process, that is resetting the default JavaScript profile.
I'm guessing there was some attempt to automate/ensure the use of the Sonar way profile that has somehow gone awry.
I would closely check your job configuration to see whether sonar.profile appears anywhere in it, and whether there are any web service calls that might be (re)setting the default.
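If it does appear, it will typically be a single line in the analysis configuration, for example in sonar-project.properties (an illustrative excerpt; sonar.profile is the legacy analysis parameter for forcing a profile, and it overrides the server-side default for that analysis):

# sonar-project.properties (illustrative excerpt)
sonar.projectKey=my:project
# Forcing a profile at analysis time can fight with the server-side default,
# which is one way issues end up re-tagged as new after each run.
sonar.profile=Sonar way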

NIFI - Dev to Test to Prod

We are struggling to figure out the best approach for updating processor configurations as a flow progresses through the dev, test, and prod stages. We would really like to avoid manipulating host, port, etc. references in the processors when the flow is deployed to a specific environment. At least in our case, we will have different hosts for things like Elasticsearch, Postgres, etc. How have others handled this?
Things we have considered:
Pull the config from a properties file using expression language. This is great for processors that have EL enabled, but it doesn't help where EL isn't supported.
Manipulate the flow.xml and overwrite the host, port, etc. configurations. We are a bit concerned about inadvertently corrupting the XML, and about how portable this will be across NiFi versions.
Any tips or suggestions would be greatly appreciated. There is a good chance that there is an obvious solution we have neglected to consider.
EDIT:
We are going with the templates that Bryan suggested. They will definitely meet our needs and appear to be a good way for us to control configurations across numerous environments.
https://github.com/aperepel/nifi-api-deploy
This discussion comes up frequently, and there is definitely room for improvement here...
You are correct that currently one approach is to extract environment-related property values into bootstrap.conf and then reference them through expression language, so the flow.xml.gz can be moved from one environment to the other. As you mentioned, this only works for properties that support expression language.
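As a rough sketch of that approach (the property names and java.arg indexes are illustrative assumptions): the values are exposed as JVM system properties in conf/bootstrap.conf, and expression language falls back to system properties when resolving a name, so an EL-enabled processor property can stay identical across environments.

# conf/bootstrap.conf (one -D entry per environment-specific value)
java.arg.15=-Delasticsearch.host=es-dev.example.com
java.arg.16=-Delasticsearch.port=9200

# EL-enabled processor property, unchanged from dev to prod:
# ${elasticsearch.host}:${elasticsearch.port}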
In order to make this easier in the future, there is a feature proposal for an idea called a Variable Registry:
https://cwiki.apache.org/confluence/display/NIFI/Variable+Registry
An interesting approach you may want to look at is using templates. There is a GitHub project that can be used to help with this:
https://github.com/aperepel/nifi-api-deploy
You can look at this post on automating NiFi template deployment.
For automating NiFi template deployment, there is a tool that works well: https://github.com/hermannpencole/nifi-config
Prepare your NiFi development:
Create a template in NiFi and download it
Extract a sample configuration with the tool
Deploy it on production:
Undeploy the old version with the tool
Deploy the template with the tool
Update the production configuration with the tool

What's the standard way of specifying the environment for a Ruby app without using RAILS_ENV?

When developing a Ruby application, how can I distinguish in code between development, test and production environments without using RAILS_ENV? I've seen some apps that don't even use Rails using that variable, which doesn't make much sense to me.
Of course I could just use a different name, but is there a standard one? Also, would it be bad to set this in code, in some sort of configuration object, instead of using the system's environment variables?
PS: sorry if this is too basic, but it's hard to search for an answer since the results are always Rails-related.
The Standard
Rails.env.development?
Rails.env.test?
Rails.env.production?
Don't use RAILS_ENV
RAILS_ENV is being deprecated and will cause warnings and/or errors.
References
Rails.env | API Dock
Rails Environment Settings | Rails Guides
You are free to invent your own semantics, and your own ways of determining which configuration is in use. The environment names of test, development, production have become well-known standards, sometimes with the addition of release-management steps such as smoke, uat, staging etc. However, there is no requirement to use environments as a concept in the first place, nor is there a generic approach that could be applied across all Ruby projects - the set of possible applications is too broad.
If you are creating a web application that conforms to the Rack API (for hosting in Apache/Passenger, Thin, or another server that supports Rack), it is common to use the RACK_ENV environment variable to control the choice of named environment (and which part of the config to use); Sinatra's config uses this, for example, and Rails will fall back to it. See the sketch after the snippet below.
# Rails.env is a writable StringInquirer, so it can also be set from code:
Rails.env = 'production'
Rails.env.production? # true
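For a plain Ruby (non-Rails) app, here is a minimal sketch of the RACK_ENV-style approach; APP_ENV is just a conventional fallback name chosen for this example, not something Ruby or Rack defines:

# Read the environment name once, defaulting to development.
APP_ENV = ENV.fetch('APP_ENV') { ENV.fetch('RACK_ENV', 'development') }

def production?
  APP_ENV == 'production'
end

puts "Booting in #{APP_ENV} mode"   # => "Booting in development mode" unless overridden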

Why is caching usually disabled in test environments?

On our applications we have a lot of functional tests that run through Selenium.
We understand that it is good practice to have the server where the tests are run be as similar as possible to the production servers, and we try to follow that as much as possible.
But that is very hard to achieve 100%, so we have a different settings file for our server with some changes that we want in the staging environment (for example, we opt to turn e-mail sending off because of the additional architecture it requires).
In fact, lots of server frameworks recommend having an isolated front controller (environment) for testing, to easily achieve these small changes.
By default, most frameworks such as ours recommend that their testing environment should have its cache turned off. WHY?
If we want to emulate production as much as possible, what's the possible advantage of having the server's cache turned off when performing functional tests? There can be bugs that are only found with the cache on, and having it on might also have the benefit of speeding up our test runs!
Don't we just need to make sure that the cache is cleared before starting a new batch of functional tests, the same way we clear the cache when deploying a new version to production?
A colleague of mine suggests that the reason for this could be that the cache can generate false positives: errors that are not caused by badly implemented features (which are the main target of those tests) but by the cache system itself. But even if those really happen (I suppose it depends on how the cache is used), why would they be false positives?
To best answer this question I will clarify some points.
(be aware that this is based on my experience)
Integration tests using the browser are typically "black-box tests", which means they are written without knowledge of the code, that is, without knowing whether the cache is being used or not.
These tests are usually designed around certain tasks that are performed during normal use of the system. But these tasks are chosen for automation depending on certain conditions of use (mainly reusability and criticality/importance, but also the cost of implementation). So most of the time we will not need/want to test caching behaviour.
By convention, any test should be created with a single purpose and have as few dependencies as possible. Why?
When the test fails, we can quickly find the source of the failure.
Smaller tests are easier to extend, fix, remove...
We do not spend too much time first debugging the test code and then debugging the system code.
Integration testing should follow this convention.
Answering the question:
If we want to check a particular task, we must isolate it as much as possible.
For example, if we want to verify that the user logs in correctly, we have to delete the cookies to be sure they do not influence the result (because they may). If, on the other hand, we want to test the cookies, we have to somehow use an environment where they are not deleted.
So, in short:
If there is a need to test caching behaviour, then we need to create an "isolated" environment where this is possible.
The usual purpose of integration tests is to test functionality, so the framework's default is to have the cache disabled.
This does not mean that we shouldn't create our own environment to test caching behaviour.
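As a framework-specific illustration of that default (a sketch in the spirit of a Rails-generated test config, not an exact copy):

# config/environments/test.rb (illustrative sketch)
Rails.application.configure do
  # Tests see the code's real behaviour on every request: nothing is cached.
  config.action_controller.perform_caching = false
  config.cache_store = :null_store
end

# A suite that deliberately exercises caching can opt back in and clear the
# store before each batch, mirroring the cache flush done on deploy:
#   config.cache_store = :memory_store
#   Rails.cache.clear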

Resources