Retention time of packages on conda-forge

What is the retention time of packages on conda-forge? For example, if I export an environment to YAML today, will I be able to re-create it in 5 years' time?

Related

Shared volume across multiple docker-compose projects [duplicate]

This question already has answers here:
How to cache package manager downloads for docker builds?
(4 answers)
Closed 2 years ago.
I'm using docker-compose to orchestrate containers for multiple separate projects. Each of these projects has its own set of containers and does not relate to the other projects.
For example:
/my-projects/project-1/docker-compose.yml
/my-projects/project-2/docker-compose.yml
/my-projects/project-3/docker-compose.yml
These projects are, however, similar in that they are all PHP projects and use webpack for front-end assets, and thus share the same package managers: Composer and Yarn.
I was wondering, in the interest of performance, if it would be possible to mount a shared volume outside the directory root of all the projects for package manager caches?
For example:
/my-projects/caches/composer
/my-projects/caches/npm
/my-projects/project-1/docker-compose.yml
/my-projects/project-2/docker-compose.yml
/my-projects/project-3/docker-compose.yml
Where /my-projects/caches/composer and /my-projects/caches/npm get mounted inside the relevant containers within each project. In case it's not clear, only one project would be spun up at a time.
At the moment, if two projects share the same dependencies, each downloads and caches them individually. A more performant approach (in terms of build times) would be to mount a common volume and point the package managers' caches there, so that when "Project A" downloads an update to a dependency, "Project B" can load it from the cache.
You can simply mount the same directories as volume binds on each of the containers that require them. You can use absolute paths; even one of the examples in the docs uses an absolute path as a bind mount.
However, volumes are not available during the image build (docker-compose build), which is where commands like composer install, npm install or yarn install should be run in any case.
If you are running these commands at container runtime instead, nothing stops you from mounting these cache directories into each container.
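As a rough sketch of that idea, each project's docker-compose.yml could bind-mount the shared host-side cache directories. The service name and the in-container cache paths below are assumptions, so adjust them to wherever Composer and npm/Yarn actually keep their caches inside your images:

    # /my-projects/project-1/docker-compose.yml (sketch)
    services:
      app:                    # hypothetical PHP service name
        build: .
        volumes:
          # shared host-side caches, bind-mounted into the container
          - /my-projects/caches/composer:/root/.composer/cache
          - /my-projects/caches/npm:/root/.npm

Because every project points at the same host directories, whichever project runs first warms the cache for the others; the same two mounts can simply be repeated in project-2 and project-3.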

Wercker: How to trigger builds a few times a day?

Right now Wercker runs a build when changes are made to the git repo. How do I set up Wercker so that it builds, for example, every 8 or 12 hours each day?
This is not possible yet, although it has been suggested here. You could, however, create a small program that calls the API to trigger builds periodically.
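A minimal sketch of that workaround using cron and curl follows; the trigger URL and token are placeholders rather than real Wercker API details, so check the API docs for the actual endpoint and authentication:

    # Hypothetical crontab entries: POST to a build-trigger endpoint every 8 hours.
    # The URL and token are placeholders, not documented Wercker values.
    WERCKER_TOKEN=your-api-token
    WERCKER_TRIGGER_URL=https://example.invalid/wercker/trigger-build
    0 */8 * * * curl -s -X POST -H "Authorization: Bearer $WERCKER_TOKEN" "$WERCKER_TRIGGER_URL"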

What is the best way to save composer dependencies across multiple builds

I am currently using the Atlassian Bamboo build server (cloud-based, using AWS) and have an initial task that simply does a composer install.
This single task can take quite a bit of time, which can be a pain when developers have committed multiple times, giving the build server four builds that all download dependencies (these do not run in parallel).
I wish to speed this process up but cannot figure out a way to save the dependencies to a common location for use across multiple builds while still allowing the application (Laravel) to run as intended.
Answer
Remove composer.lock from your .gitignore
Explanation
When you run composer install for the first time, Composer has to check all of your dependencies (and their dependencies, and so on) for compatibility. Running through the whole dependency tree is quite expensive, which is why it takes so long.
After figuring out all of your dependencies, Composer writes the exact versions it uses into the composer.lock file, so that subsequent composer install commands will not have to spend that much time running through the whole graph.
If you commit your composer.lock file, it will come along to your Bamboo server, and the composer install command will be much faster.
Committing composer.lock is a best practice regardless. To quote the docs:
Commit your application's composer.lock (along with composer.json) into version control.
This is important because the install command checks if a lock file is present, and if it is, it downloads the versions specified there (regardless of what composer.json says).
This means that anyone who sets up the project will download the exact same version of the dependencies. Your CI server, production machines, other developers in your team, everything and everyone runs on the same dependencies, which mitigates the potential for bugs affecting only some parts of the deployments.
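Concretely, assuming composer.lock is currently listed in .gitignore, the change boils down to something like this (shown with GNU sed; editing .gitignore by hand works just as well):

    # Drop the composer.lock entry from .gitignore
    sed -i '/^composer\.lock$/d' .gitignore
    # Make sure a lock file exists, then commit it alongside composer.json
    composer install
    git add .gitignore composer.json composer.lock
    git commit -m "Track composer.lock so builds install pinned versions"

From then on, the Bamboo build only has to download the pinned versions instead of resolving the whole dependency graph on every run.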

Flex Builder 4 - Compilation issue

I'm using FB4 and it takes at least 30 minutes to compile my entire project, since it has 20 modules.
Every day I have to log off my machine because of company policy.
My problem is:
Every day I have to spend 30 minutes compiling my entire project.
I have to spend another 30 minutes recompiling all the modules in my project if I accidentally close my project or close the Flex Builder IDE.
Do you have any suggestions to stop Flex Builder from recompiling my entire project after it has been closed?
Thanks in advance.
Eclipse's Java VM could probably use more memory to speed that up.
How can you speed up Eclipse?
Max value of Xmx and Xms in Eclipse?
Is 'Build Automatically' selected?
How you link your projects may also impact performance: linking SWCs instead of using project references should improve build time.
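For the memory suggestion, the JVM heap settings live in the IDE's launcher .ini file (eclipse.ini for the plugin install, or the standalone Flex/Flash Builder equivalent); the values below are only illustrative and belong after the -vmargs line at the end of the file:

    -vmargs
    -Xms512m
    -Xmx1024m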

How long would it take to set up a new CI repository?

I wonder how long it would usually take for:
Professional
Average
Beginner
to set up and configure CI for a new project?
I have never set up CI before, which puts me squarely in your "Beginner" category. Your question nudged me to try to set up a CI system for my projects, something I had always avoided because I thought it would cost me a lot of effort and time.
It took me all of 20 minutes.
I used a fantastic project called CInABox (Continuous Integration in a Box). It consists of two simple scripts which download and compile Ruby and download, install and configure CruiseControl.rb for Ubuntu 8.04.
In just 20 minutes, I downloaded Ubuntu JeOS 8.04, configured a VirtualBox VM, installed Ubuntu in that VM, set up networking, installed Ruby, installed CruiseControl.rb, added my first project to CC.rb and watched the light go green! Most of that time was actually spent downloading Ubuntu, downloading Ruby and installing Ubuntu. The actual CI setup took less than 5 minutes.
Don't let the name fool you: CC.rb is written in Ruby, but you can build anything with it. In the default configuration, it assumes that you are using rake to build your project, but by setting just one configuration option, you can just as well use a shell script.
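That option lives in the project's cruise_config.rb; here is a sketch under the assumption that build_command is still the right setting (double-check the CC.rb docs for your version):

    # cruise_config.rb (sketch): run a shell script instead of the default rake build
    Project.configure do |project|
      project.build_command = './build.sh'   # assumed setting name; verify in the docs
    end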
It depends on how much other infrastructure you already have in place and whether you have issues tying everything together. Even with that in mind, you should be able to get TeamCity and all the infrastructure up and running within a day or so if you have a decent idea of what you're doing. The documentation for TeamCity is pretty good and should get you past any bumps.
It depends on many factors:
Which features of CI you want to use.
Whether your project is already installed in your CI environment.
What type of project it is, and how easily it can be installed on a fresh environment.
To name just a few.
I think that if the project is not trivial, then all the time spent on the CI environment is worth the price, whether it takes 20 minutes or 3 days.
CI Factory
TeamCity
CC.NET sample configs
Try.
