I am new to Azure (a beginner) and have been given the task of implementing an Azure pipeline for one of our applications.
We have multiple scripts on a Windows server. Normally, if we need to change anything in those scripts, we log in to the server and modify them. But now we need to modify the scripts locally, upload them to an Azure repo, and then use a pipeline to copy the modified scripts to the Windows server.
We have created a separate username & password for the connection to the server.
Can anyone help me achieve this (any sample pipeline or any steps/docs)?
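Not an official recipe, but a minimal sketch of an Azure Pipelines YAML definition that copies a scripts folder from the repo to a Windows server using the built-in WindowsMachineFileCopy task. The machine name, target path, and variable names are placeholders, and the credentials are assumed to be stored as secret pipeline variables:

    # Minimal sketch: copy repo scripts to a Windows server on every push to main.
    trigger:
      - main

    pool:
      vmImage: 'windows-latest'

    steps:
      - task: WindowsMachineFileCopy@2
        inputs:
          SourcePath: '$(Build.SourcesDirectory)/scripts'
          MachineNames: 'your-server.yourdomain.com'   # placeholder hostname
          AdminUserName: '$(serverUser)'               # secret pipeline variable
          AdminPassword: '$(serverPassword)'           # secret pipeline variable
          TargetPath: 'C:\Scripts'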
I have just started developing a Golang app and have deployed it on Google App Engine. When I try to connect my local server to the Cloud SQL instance through the proxy, I am able to connect only over TCP.
However, when connecting to the same Cloud SQL instance from App Engine, I am able to connect only over a Unix socket.
To cope with this, I have made changes to my local environment handler file so that it can adapt to both the local and GCloud configurations, but I'm not sure how I can skip uploading just this file to GCloud. To be clear, I don't want App Engine to delete this file; I just want the CLI to avoid uploading the new version of the handler file.
I use this command for deploying: gcloud app deploy
Currently, I deploy directly to App Engine instead of pushing it through VCS. Also, if there is a way to detect whether the app is running on App Engine, that would be really great.
TIA
Got it. In case anyone gets stuck in a similar situation: you can make use of the environment variables that App Engine sets. Although the documentation lists these environment variables, I would still recommend checking them in the Cloud Console.
Documentation for the Go 1.12+ runtime environment:
https://cloud.google.com/appengine/docs/standard/go/runtime
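Building on that answer, here is a small Go sketch of the pattern: the Go 1.12+ standard runtime sets GAE_ENV=standard, so you can branch on it to choose between the Unix-socket and TCP connection strings. The user, password, and instance names below are placeholders, and the DSN format assumes the go-sql-driver/mysql driver:

    package main

    import "os"

    // onAppEngine reports whether we are running on the App Engine standard
    // environment: the Go 1.12+ runtime sets GAE_ENV=standard, while locally
    // the variable is unset.
    func onAppEngine() bool {
        return os.Getenv("GAE_ENV") == "standard"
    }

    // dsn picks the Cloud SQL connection string for the current environment.
    func dsn() string {
        if onAppEngine() {
            // App Engine exposes Cloud SQL through a Unix socket under /cloudsql.
            return "user:pass@unix(/cloudsql/project:region:instance)/dbname"
        }
        // Local development goes through the Cloud SQL proxy over TCP.
        return "user:pass@tcp(127.0.0.1:3306)/dbname"
    }

    func main() {
        println("using DSN:", dsn())
    }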
I'm using the Amazon WorkSpaces Windows desktop client. Every time I want to start the workspace, I need to log in manually. My ID and password are quite long, and I want to write a script that automates this process.
I have tried pywinauto for this purpose, but since the Amazon WorkSpaces client presents its login form as a web form, I am not able to automate it.
Any other solution, or any improvement to my approach, is appreciated.
It's not a completely automatic solution, but if you use a password manager, many of them have an auto-fill option that will work with the Amazon WorkSpaces client.
I've used AHK (AutoHotkey) to automate the login procedure before.
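For what it's worth, a rough AutoHotkey (v1) sketch of that idea. The window title, hotkey, and field order are assumptions that may differ between client versions, and keeping a password in a plain-text script is a security trade-off to weigh:

    ; Rough sketch: on Ctrl+Alt+L, focus the WorkSpaces client window and
    ; type the credentials. Title, username, and password are placeholders.
    ^!l::
    WinActivate, Amazon WorkSpaces
    WinWaitActive, Amazon WorkSpaces
    Send, your-username{Tab}
    Send, your-password{Enter}
    return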
I'm trying to use CodeDeploy with Auto Scaling in order to automate the deployment of my application.
I have everything ready. While developing the various parts (hook scripts, roles, etc.) I installed the CodeDeploy agent manually. Now I want to make it production-ready, which means the CodeDeploy agent should be installed at sysprep time (by providing PowerShell commands via user data in the launch configuration).
The problem is that it's not working. Either the script runs and fails for some reason (are there any logs to confirm this?) or it doesn't run at all. My AMI is based on a standard AWS Windows AMI, and the EC2ConfigService is present.
Do you have any idea what the problem could be, or how I can find out (logs)?
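For reference, user data that installs the CodeDeploy agent at first boot typically looks something like this sketch. The region in the bucket name is a placeholder (each region has its own aws-codedeploy-<region> bucket), and on Windows the commands must be wrapped in <powershell> tags:

    <powershell>
    # Sketch: download the agent installer from the regional S3 bucket
    # (region is a placeholder) and install it silently, logging to a file.
    New-Item -Path "C:\temp" -ItemType Directory -Force
    Read-S3Object -BucketName aws-codedeploy-us-east-1 -Key latest/codedeploy-agent.msi -File "C:\temp\codedeploy-agent.msi"
    Start-Process "C:\temp\codedeploy-agent.msi" -ArgumentList "/quiet /l C:\temp\agent-install.log" -Wait
    </powershell>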
You could take a look at C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt
On Linux AMIs you can also find the user data script execution output in the EC2 console by right-clicking your instance -> Instance Settings -> Get System Log.
Does anyone know if it's possible to deploy to Parse.com hosting from CloudBees, Travis, or Circle?
I'm aware of the command-line tool, but I'm not sure how to integrate it with CI, or whether there is another way.
I've found a solution that has worked well for me. Using travis-ci.com, you can set it up to work with Parse.com and GitHub: users commit to the master branch and the code is automatically deployed to Parse.com. Basically, your credentials are encrypted using Travis's Ruby tooling (see http://docs.travis-ci.com/user/encryption-keys/). Once your keys are made, you set up a .yml config file that, on Travis, downloads the Parse SDK into a virtual environment, uses the encrypted credentials to log in to Parse, and then runs the parse deploy command, resulting in a push to Parse.
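A rough .travis.yml along those lines; the installer URL and deploy command follow Parse's CLI docs of that era, and the secure value is a placeholder for the encrypted credentials you would generate with the travis CLI:

    # Rough sketch: deploy Cloud Code to Parse.com on each build of master.
    language: node_js
    node_js:
      - "0.10"
    install:
      # Install the Parse command-line tool.
      - curl -s https://www.parse.com/downloads/cloud_code/installer.sh | sudo /bin/bash
    script:
      - parse deploy
    env:
      global:
        # Placeholder: encrypted Parse credentials generated with `travis encrypt`.
        - secure: "your-encrypted-credentials-here"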
We've been experimenting with Octopus Deploy on a development PC and now want to transfer the environment we've created onto our main Octopus Deploy server (which is used by other teams and already has a few environments set up on it).
So we would like to back up/restore this one environment. However, it looks like Octopus only allows you to back up/restore the entire database.
Is it possible to move a single environment from one Octopus server to another using backup/restore or some other means?
What worked for me was simply doing the following, in order:
Shutting down the Octopus service so that no transactions are going through.
Copying the Raven database (usually stored in Program Files\Data) to your new server.
Installing the new Octopus server and, during setup, on the Storage tab, specifying the location of the data copied in the second step above.
The Octopus developer, Paul, mentions that the great thing about RavenDB is the installation: it requires no running services the way SQL Server does. It's just a copy-paste of the data itself, which is great for installation and portability.
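In PowerShell terms, the first two steps amount to something like this sketch. The service name and paths are placeholders; check your own install with Get-Service and confirm your configured data directory first:

    # Sketch: stop Octopus so the Raven data files are quiescent, then copy
    # them to the new machine. Service name and paths are placeholders.
    Stop-Service -Name "OctopusServer"
    Copy-Item -Path "C:\Program Files\Octopus Deploy\Data" -Destination "\\new-server\c$\OctopusData" -Recurse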
There's currently no way to back up/restore just part of the database - you'd need to restore a full backup and then delete the information you don't need.
Octopus 2.0 (which is now in public beta) has a comprehensive REST API, so it would be possible to use that API to fetch a subset of information and import it into your new Octopus server.
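As a hedged sketch of that approach in PowerShell, using the X-Octopus-ApiKey header the API expects; the hosts, API key, and environment name are placeholders:

    # Sketch: read one environment from the old server and re-create it on
    # the new one via the REST API. Hosts, key, and name are placeholders.
    $headers = @{ "X-Octopus-ApiKey" = "API-XXXXXXXXXXXXXXXX" }

    $envs = Invoke-RestMethod -Uri "http://old-octopus/api/environments/all" -Headers $headers
    $dev  = $envs | Where-Object { $_.Name -eq "Development" }

    $body = @{ Name = $dev.Name; Description = $dev.Description } | ConvertTo-Json
    Invoke-RestMethod -Uri "http://new-octopus/api/environments" -Method Post -Headers $headers -Body $body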