Capistrano - Previewing deploy and manually updating symlink - Magento

I am using a Capistrano deployment workflow for a Magento project.
On deploy, Capistrano builds the Magento project on the server using https://github.com/Cotya/magento-composer-installer.
The issue is that sometimes my Magento modules don't install correctly and I need to clear the cache, reindex, or run some other task to get everything working 100%. The issues occur sporadically, so I haven't been able to script a fix into the deployment process.
What I would like is for Capistrano not to change the symlink to the new build straight away on deploy. Instead, I want to be able to preview the site at another link, fix whatever needs fixing, and then change the symlink manually.
Is this possible to set up using Capistrano?
If not, my other solution is to use the Magento maintenance flag; however, I would rather avoid putting the site into maintenance mode. Open to other ideas as well!
Thanks

It is probably possible to do this by telling Capistrano not to include the symlink change as part of the deploy process (something like Rake::Task["deploy:symlink:release"].clear_actions) and then running that step manually (cap [env] deploy:symlink:release).
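A minimal sketch of that idea, assuming Capistrano 3 (the deploy:publish task and the RELEASE variable are hypothetical additions, not stock Capistrano; one wrinkle is that outside a full deploy run the stock symlink task no longer knows which release to point at, so the sketch links the newest release explicitly):

# config/deploy.rb
# Skip the automatic "current" symlink flip during a normal deploy.
Rake::Task["deploy:symlink:release"].clear_actions

namespace :deploy do
  desc "Point 'current' at a release once it has been checked"
  task :publish do
    on release_roles(:all) do
      # Default to the newest timestamped release; override with RELEASE=<dir>.
      release = ENV["RELEASE"] || capture(:ls, "-1", releases_path).split("\n").sort.last
      execute :ln, "-sfn", releases_path.join(release), current_path
    end
  end
end

After cap production deploy, the new code sits in releases/<timestamp> and can be previewed through a second vhost or link pointing at that directory; when it checks out, run cap production deploy:publish to flip the symlink.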
However, under the category of "Open to other ideas as well!" I'd suggest that you set up a staging site. Create a process to automatically restore a prod database back to stage, then deploy your code to stage and check it there. Once you have confirmed it works, deploy to prod and let the symlinks automatically do their job.

Related

OctopusDeploy not deploying package to Custom Installation Directory

I'm pretty new to OctopusDeploy and am trying to set up a process to deploy our artifact to multiple Windows Servers.
As of right now it is deploying the package to the default working directory of C:\Octopus\Applications\..., but I need it to be deployed to a different path.
I have defined a Custom Install Directory in the process editor; however, this seems to be overlooked during the deployment, and the package just goes to the default directory.
I have tried substituting the path with a variable, but that didn't fix it either. There are no errors or warnings in the deployment logs.
Can anyone help?
Sounds like you're taking the right steps to change your custom installation directory on your deployment.
One thing to check is that you've created a new release since updating your step configuration. Because releases in Octopus snapshot the deployment process, any updates you make won't show up in your deployments until you've created a new release.
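For reference, a sketch of what that configuration typically looks like once a fresh release picks it up (the variable name and path are hypothetical, not from the question):

Project variable:               AppInstallDir = D:\Sites\MyApp   (scope per environment if needed)
Custom Installation Directory:  #{AppInstallDir}

After changing either of these, create a new release and deploy it; any existing release will keep deploying to the default C:\Octopus\Applications\... path because its snapshot predates the change.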

How to keep user-uploaded files when using github actions

The question is rather simple, and I am surprised that I could not find an answer. Maybe I just do not know what to ask. Here is the thing: I have a Laravel app that serves as some sort of cloud-storage/file-system, and I am using GitHub Actions to deploy my latest code. I do not have access to the Git repo at the moment, but the yml file is pretty simple: use actions checkout v2, then run some commands that I need, and voila, the latest code is up. The only issue is that when that happens, everything gets deleted, including my storage folder, which is by convention used as the default upload location. So, is there some command that I should run so I just pull the latest changes and not delete everything, or do I move the storage folder outside of the repo scope?
P.S. I am self-hosted.
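One option along the lines of the question (a sketch, not from the original thread): actions/checkout has a clean input which defaults to true and runs git clean / git reset before fetching, which is what wipes untracked files such as the storage folder on a self-hosted runner. Turning it off keeps them:

- uses: actions/checkout@v2
  with:
    clean: false   # do not delete untracked files (e.g. storage/) before pulling the new code

The alternative is to keep uploads outside the checkout directory entirely and symlink them into the app, which is how most dedicated deployment setups handle shared data.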

How do you deploy build artifacts to Heroku from Codeship?

In starting a new project, I put together the skeleton for a Node app that has tests and generates some build artifacts, like asset compilation and compression. I have the tests running in Codeship so successful builds initiate a deploy to Heroku. They've made it all super easy, except I can't find any way to deploy built files, just a copy of what's in the repo.
Has anyone done this successfully? I feel like writing a custom deploy script to rebuild the assets after the tests and manually deploy them would be working against the existing toolset, and I know I can't possibly be the first person to want to do this...
Turns out that Codeship doesn't keep anything; in fact, the deployment runs on different servers than the testing. It seems that the best practice here is to recreate the assets on the Heroku side with a custom buildpack which, directly after the git pull, does the dependency installation and compiles the app slug.
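With the standard Heroku Node.js buildpack, a common way to do that rebuild without a fully custom buildpack (a sketch; the gulp build task is a placeholder for whatever generates your artifacts) is a heroku-postbuild script in package.json, which the buildpack runs after installing dependencies:

{
  "scripts": {
    "build": "gulp build",
    "heroku-postbuild": "npm run build"
  }
}

The compiled assets then end up in the slug even though they are never committed to the repo.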

Laravel, Composer, Git workflow

I am new to Laravel, Laravel Homestead, Composer, and the development workflow associated with commiting changes to a Git repository and then pulling those changes to a development/production server. So far after much trial and error, I have managed to:
Set up my local Homestead environment with vagrant.
Create a new Laravel application.
Run Composer to fetch dependencies.
Access the application locally.
Create a Git repository for my application, commit changes, and push to an origin master branch.
Clone the repository on my remote server (shared hosting on 1and1) and pull the changes in.
For a long time, I couldn't understand why, when I pulled the changes to the remote site, I would get PHP errors while the local site ran just fine. It came down to the fact that the Laravel .gitignore file was ignoring the /vendor directory, which Laravel requires to function. Some Google-fu searches indicate that some people simply run composer update (or composer install?) on their production servers. (I don't have access to Composer on my shared hosting server, so I am unable to do this.)
My question to the community: what do you feel is the best workflow for my given situation? Remove the /vendor directory from the .gitignore file? Something else?
Replies are much appreciated.
It looks like you are using Git as a deployment tool, which I don't think is a good idea.
Composer update/install is just for managing dependencies. Some servers don't allow you to run scripts from the console, or running them is complicated. In this situation you can run Composer locally before deployment and send your code to the server with all dependencies (a sketch of that flow follows the list below).
Here are some things that you should keep in mind when designing your workflow:
Use Git to keep source code and configuration.
Use Composer to manage dependencies (downloaded dependencies shouldn't be under version control in your Git repository; the vendor directory and its contents are a dependency too).
For deployment, use one of the deployment tools, e.g. https://github.com/rocketeers/rocketeer
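A sketch of the "run Composer locally, ship everything" flow mentioned above (the user, host, and target path are placeholders):

composer install --no-dev --prefer-dist --optimize-autoloader
rsync -az --delete --exclude='.git' ./ deployuser@example-host:~/htdocs/myapp/

This keeps vendor/ out of Git while still getting it onto a host that cannot run Composer itself.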
Use the -f flag to forcefully include the vendor directory when using git add.
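For example (-f overrides the .gitignore entry for that path):

git add -f vendor/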
You are on the right track here, and many will do what you are doing.
The real trouble comes when you are doing multiple server deployments (load balanced, auto-scaling).
Typically what I've seen is a shell script that you would include and run whenever something happens that would require these commands to be run.
Inside of this shell script would be the commands that you want completed every time a new server instance is booted up.
You can do this with a number of tools for a single server environment as well.
I might look into continuous integration tools like Travis CI, Jenkins, etc., if this is a major headache of yours.
Otherwise it might be overkill... in that case, just keep doing what you are doing.
Adding the vendor directory to your Git repo is against best practices.
This is also a decent option involving webhooks:
http://losstopschade.de/post/96967373358
Look at Deploy Laravel Webapp to 1and1

SVN Post-Commit to Update Working Copy when Working Copy is on a Network Drive

I work for a fairly new web development company and we are currently testing subversion installations to implement a versioning system. One of the features we need the versioning system to perform is to update the development server with an edited file once it has been committed.
We would like to maintain one server for all of our SVN repositories, even though, due to system requirements, we need to maintain several separate development servers. I understand that the updates are fairly simple when the development server resides in the same location as SVN, but that is just not possible for us. So, we need to map separate network drives to the SVN server for each development server.
However, this errors on commit. Here is my working copy test directory, as referenced in the post-commit.bat file:
SET WORKING_COPY=Z:\testweb
This, however, results in an error...
post-commit hook failed (exit code 1) with output: svn: Error resolving case of 'Z:\testweb'
I'm sure this is because the server is not running as the same user as me and therefore does not have the share I need mapped to "Z:" - I just have no idea how to work around this. Can anyone help?
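For reference, if you do stick with the hook, the usual workaround (a sketch; the UNC path is hypothetical and svn must be on the service account's PATH) is to skip the mapped drive letter entirely and reference the share by UNC path, since drive mappings from an interactive session are not visible to the account the SVN server runs under:

REM post-commit.bat (sketch)
SET WORKING_COPY=\\devserver01\testweb
svn update "%WORKING_COPY%"

The service account also needs read/write permission on that share.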
UPDATE: The more I look into these issues, the more it appears that the real solution to the problem is to use a CI server to accomplish what I am attempting. I am currently looking into TeamCity and what it might do for us.
Don't do this through a post-commit hook. If you ever manage to get the hook to succeed, you'll be making the person who did the commit wait until the update is complete. Instead, I recommend that you use Jenkins, which is a continuous build engine.
It is possible that you don't have anything to build. After all, if you're using PHP or JavaScript, there's nothing to compile. However, you can still use Jenkins to do the update for you.
I can't get into the nitty-gritty details here, but one of the things you can do with Jenkins is redefine its working directory. You can do this by clicking on the Advanced button when you define a job, and it'll ask you where you want the working directory. In this case, you can specify your server's working directory.
One of the things you can do with Jenkins is have it automatically run tests, or do a somewhat smoother update. For example, you might have to restart your web server when you change a few files, or maybe you need to make sure that if you're changing 100 files they all get changed at once, or else your server isn't in a stable state. You could use Jenkins to do this too. And if there are any problems, you can have Jenkins email the person responsible for the server that the update failed.
Jenkins is easy to set up and use. You can download it and start up Jenkins in 10 minutes. Setting up a job in Jenkins might take you another 15 minutes, even if you had never seen Jenkins before and had no idea how it works.
