How do you delete a node from Octopus Deploy

If you rename the VM where Octopus Deploy is installed, you will get the following error, even though the previous installation has been shut down:
"You are using the trial version of Octopus Deploy, which only allows 1 active node. You currently have 2 active nodes."
That is because Octopus Deploy thinks you have two nodes.
How do you delete the old node?

There currently isn't a GUI for this, so you will need to do it from the command line.
The MSSQL database contains a table called 'OctopusServerNode', which holds the list of server nodes. Deleting the row for the old node removes it.
Install the mssql command line utility.
npm install -g sql-cli
Connect to the SQL server (replace everything inside angle brackets <>):
mssql -s octopusxxxxxxxxxx.database.windows.net -u <sqladminusername> -p '<password>' -d OctopusDeploy -e
List all Octopus Servers
SELECT * FROM OctopusServerNode
Verify that you can select the correct row from the table. If the node you wanted to delete was called 'foobar', you would run the following
SELECT * FROM OctopusServerNode WHERE Name = 'foobar'
Once you are 100% positive you are selecting just the 1 node that you want to delete, go ahead and delete it.
DELETE FROM OctopusServerNode WHERE Name = 'foobar'
Run the SELECT statement again to verify that the row was deleted from the table, then exit:
exit()
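If you prefer Microsoft's sqlcmd utility over the Node-based sql-cli, the same queries can be run non-interactively; a minimal sketch (server name and credentials are placeholders, as above):
# Run the SELECT first; only run the DELETE once you are sure of the row:
sqlcmd -S octopusxxxxxxxxxx.database.windows.net -U <sqladminusername> -P '<password>' -d OctopusDeploy -Q "SELECT * FROM OctopusServerNode"
sqlcmd -S octopusxxxxxxxxxx.database.windows.net -U <sqladminusername> -P '<password>' -d OctopusDeploy -Q "DELETE FROM OctopusServerNode WHERE Name = 'foobar'"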

Related

Azure DevOps third-party tools for build / deployment

pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY
environment variables:
  $OCCS_CODE_LOCATION: path to the location of all OCCS code
  $OCCS_ADMIN_URL: URL of the administration interface on the target Commerce Cloud instance
  $OCCS_APPLICATION_KEY: application key used to log into the target Commerce Cloud administration interface
So I want to use an Azure DevOps repository for CI/CD.
In the code block above, you can see I have specified the dcu and python code in two tasks.
dcu is a third-party Node.js tool from Oracle that is needed to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, there is the Python (or Node.js) script that I want to invoke against the REST API to publish the changes.
So where do I place those files, and how do I invoke them?
*********** Update **************
I hosted the self-hosted agent pool and am able to access the system.
I started executing basic bash code, but ran into two issues:
1) When git extracts files from the repository, they go to _work/1/s, and I am not sure how that path is decided. How can I change that location?
2) I did 'pwd' in the correct path, but the 'dcu' command fails. I tried npm and a few other commands and they fail too, yet things like mkdir and rmdir create and remove folders correctly in the desired path. When I run the 'dcu' command manually from a terminal on the system, it works fine as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1. Create an Azure git repo to hold the dcu zip file and your .py files. You can follow the steps in this thread to create an Azure git repo and push local files to it.
2. Create an Azure build pipeline. Please check here to create a YAML pipeline. Here is a good tutorial to get you started.
To create a classic UI pipeline, please choose Use the classic editor in the pipeline setup wizard, and choose Start with an Empty job to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3. Click "+" and search for the Extract files task to unzip the DCU zip file. Click the 3 dots on the Destination folder field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4. Click "+" to add a PowerShell task. Run the script below to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), define them on the Variables tab of the pipeline.
cd $(agent.builddirectory) #the folder where the unzipped dcu files reside. eg. $(agent.builddirectory)
npm install -g
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5. Add a Use Python version task to define a Python version to execute your .py file.
6. Add a Python script task to run your .py file. Click the 3 dots on the Script path field to locate your publishDCUAuthoredChanges.py file (this .py file and the dcu zip file were pushed to the Azure git repo in step 1 above).
You should be able to run the script from the question above in the Azure DevOps pipeline.
Update:
_work/1/s is the default working folder for the agent. You cannot change it. Although there are ways to change the location where the source code is cloned from git, the tasks' working directory still defaults to that folder.
However, you can change the working directory inside the tasks, and there are predefined variables you can use to refer to locations on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
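In script tasks these predefined variables are also exposed as environment variables (upper-cased, with dots replaced by underscores), so a bash step can read them directly; a small illustration:
# Predefined pipeline variables are available to scripts as environment variables,
# upper-cased and with '.' replaced by '_':
echo "Build directory:   $AGENT_BUILDDIRECTORY"
echo "Sources directory: $BUILD_SOURCESDIRECTORY"
cd "$BUILD_SOURCESDIRECTORY"   # work directly in the cloned sources (_work/1/s)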
The .sh scripts in the _temp folder are generated automatically by the agent; they contain the scripts from the bash task.
For the 'dcu command not found' error above, you can try adding the dcu command's path to the system Path variable in your local machine's environment variables. (A path set only in the user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine.)
Or you can use the physical path to the dcu command in the bash task. For example, let's say dcu.cmd is at c:\dcu\dcu.cmd on the local machine. Then, in the bash task, use the script below to run the dcu command.
c:/dcu/dcu.cmd --putAll ...
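Alternatively, you can extend PATH inside the bash task itself before calling dcu. A minimal sketch, assuming dcu.cmd was installed to c:\dcu on the agent machine (the install folder is an assumption, adjust to your own):
# Add the dcu install folder to PATH for this task only (Git Bash style path on Windows)
export PATH="/c/dcu:$PATH"
cd "$AGENT_BUILDDIRECTORY"
dcu.cmd --putAll "$OCCS_CODE_LOCATION" --node "$OCCS_ADMIN_URL" --applicationKey "$OCCS_APPLICATION_KEY"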

Docker: Oracle database 18.4.0 XE wants to configure a new database on startup

I'm trying to configure an Oracle Database container. My problem is that whenever I try to restart the container, the startup script wants to configure a new database and fails to do so, because there is already a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the DB without passing any environment variables, and after restarting the container the database is up and running. This is an annoying workaround, but it is the one that seems to work. If you have other ideas, please let me know.
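For reference, the workaround described above amounts to running the container without ORACLE_SID/ORACLE_PWD so the defaults are used. A sketch; the image name, volume name and ports are assumptions based on Oracle's docker-images build, adjust them to your own setup:
# The named volume keeps the datafiles, so restarts reuse the existing database.
docker run -d --name oracle-xe \
  -p 1521:1521 -p 5500:5500 \
  -v oradata:/opt/oracle/oradata \
  oracle/database:18.4.0-xe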
I think that you should connect to the Linux container with:
docker exec -ti containerid bash
Once there, you should check manually for the following:
whether $ORACLE_BASE/oradata/$ORACLE_SID exists (as the script does) and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
Another thing that you should execute manually is:
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
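Inside the container, those manual checks look roughly like this (a sketch; $ORACLE_BASE and $ORACLE_SID are whatever the image set them to):
# Run these after 'docker exec -ti containerid bash':
ls -d "$ORACLE_BASE/oradata/$ORACLE_SID"        # should exist if a database was already configured
ls -d "$ORACLE_BASE/admin/$ORACLE_SID/adump"    # the startup script also checks for this directory
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured" \
  && echo "not configured" \
  || echo "already configured"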
UPDATE AFTER COMMENT:
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
What makes no sense is that you say $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist; but if the Docker container deployed and you have a database running, the first time the script ran it should have created this directory.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, though I am not sure) was only partially made with this fact in mind. This means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database will be up and running, but with 2 suspicious errors when it tries to copy 2 files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the Docker container, I'll get an error message, because the configuration script created folder and file names according to those variables; however, the Docker image is built in a way that only supports the default names, so it tries to configure a new database, only to find that one already exists.
I hope it makes sense.

TeamCity upgrade from 9.1.6 to 10

I want to upgrade TeamCity 9.1.6 to 10. I want to proceed with a manual backup and then restore it. I am using an external database (MySQL), and I want to upgrade the database as well. How should I proceed with this?
TeamCity documentation states that
Backups created with TeamCity 6.0+ can be restored using the same or
more recent TeamCity versions
so you should be able to create a backup in TC9 and then restore it in TC10.
The simplest way to create a backup is to navigate to the Administration | Backup section in the server UI to specify some parameters and run the backup, as described here.
The other options are
backup via the maintainDB command-line tool (basically the same as backup via the UI)
manual backup
which are described on the corresponding page of the TC documentation.
Restoring data from backup is performed using the maintainDB tool, basically the steps for your case are:
install new TeamCity (but do not start the server)
create a new empty Data Directory
create and configure an empty database
configure a temporary database.properties file
place the database drivers into the lib/jdbc in new data directory
use the maintainDB utility located in <TeamCity Home>/bin to run the restore command:
maintainDB.[cmd|sh] restore -A <absolute path to the Data Directory> -F <path to the TeamCity backup file> -T <absolute path to the database.properties file>
If the process completes successfully, copy over /system/artifacts from the old Data Directory.
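Put together, the restore on the new server might look something like this (a sketch on Linux; every path below is a placeholder for your own installation):
NEW_DATA_DIR=/var/teamcity/datadir           # new, empty Data Directory
BACKUP_FILE=/backups/TeamCity_Backup.zip     # backup file created on the old server
DB_PROPS=/tmp/database.properties            # temporary file pointing at the new, empty MySQL database
cd /opt/TeamCity/bin                         # <TeamCity Home>/bin of the freshly installed server
./maintainDB.sh restore -A "$NEW_DATA_DIR" -F "$BACKUP_FILE" -T "$DB_PROPS"
# After a successful restore, copy the artifacts over from the old Data Directory:
cp -a /old/teamcity/datadir/system/artifacts "$NEW_DATA_DIR/system/"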
More details could be found on the corresponding page.

Chain automated builds in the same Docker Hub repository

Due to build time restrictions on Docker Hub, I decided to split the Dockerfile of a time-consuming automated build into three files.
Each one of those "sub-builds" finishes within Docker Hub's time limits.
I have now the following setup within the same repository:
| branch | dockerfile | tag |
| ------ | ------------------ | ------ |
| master | /step-1.Dockerfile | step-1 |
| master | /step-2.Dockerfile | step-2 |
| master | /step-3.Dockerfile | step-3 |
The images build on each other in the following order:
step-1.Dockerfile : FROM ubuntu
step-2.Dockerfile : FROM me/complex-image:step-1
step-3.Dockerfile : FROM me/complex-image:step-2
A separate web application triggers the building of step-1 using the "build trigger" URL provided by Docker Hub (to which the {"docker_tag": "step-1"} payload is added). However, Docker Hub doesn't provide a way to automatically trigger step-2 and then step-3 afterwards.
How can I automatically trigger the following build steps in their respective order? (i.e., trigger step-2 after step-1 finishes; then, trigger step-3 after step-2 finishes.)
NB: I don't want to set up separate repositories for each of step-i then link them using Docker Hub's "Repository Links." I just want to link tags in the same repository.
Note: Until now, my solution is to attach a Docker Hub Webhook to a web application that I've made. When step-n finishes, (i.e., calls my web application's URL with a JSON file containing the tag name of step-n) the web application uses the "build trigger" to trigger step-n+1. It works as expected, however, I'm wondering whether there's a "better" way of doing things.
As requested by Ken Cochrane, here are the initial Dockerfile as well as the "build script" that it uses. I was just trying to dockerize Cling (a C++ interpreter). It needs to compile LLVM, Clang and Cling. As you might expect, depending on the machine, it needs a few hours to do so, and Docker Hub allows "only" 2-hour builds at most :) The "sub build" images that I added later (still in the develop branch) build a part of the whole thing each. I'm not sure that there is any further optimization to be made here.
Also, in order to test various ideas (and avoid waiting h-hours for the result) I have setup another repository with a similar structure (the only difference is that its Dockerfiles don't do as much work).
UPDATE 1: On Option 5: as expected, the curl from step-1.Dockerfile has been ignored:
Settings → Build Triggers → Last 10 Trigger Logs
| Date/Time | IP Address | Status | Status Description | Request Body | Build Request |
| ------------------------- | --------------- | ------- | ------------------------ | -------------------------- | ------------- |
| April 30th, 2016, 1:18 am | <my.ip.v4.addr> | ignored | Ignored, build throttle. | {u'docker_tag': u'step-2'} | null |
Another problem with this approach is that it requires me to put the build trigger's (secret) token in the Dockerfile for everyone to see :) (hopefully, Docker Hub has an option to invalidate it and regenerate another one)
UPDATE 2: Here is my current attempt:
It is basically a Heroku-hosted application that has an APScheduler periodic "trigger" that starts the initial build step, and a Flask webhook handler that "propagates" the build (i.e., it has the ordered list of build tags. Each time it is called by the webhook, it triggers the next build step).
I recently had the same requirement to chain dependent builds, and achieved it this way using Docker Cloud automated builds:
Create a repository with build rules for each Dockerfile that needs to be built.
Disable the Autobuild option for all build rules in dependent repositories.
Add a shell script named hooks/post_push in each directory containing a Dockerfile that has dependents, with the following code:
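# hooks/post_push is executed by Docker Hub after the image built by this rule has been pushed.
# BUILD_TRIGGERS holds the dependents' trigger URLs (set as a build environment variable, described below);
# SOURCE_BRANCH is provided by the build environment and names the branch or tag being built.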
for url in $(echo $BUILD_TRIGGERS | sed "s/,/ /g"); do
curl -X POST -H "Content-Type: application/json" --data "{ \"build\": true, \"source_name\": \"$SOURCE_BRANCH\" }" $url
done
For each repository with dependents, add a Build Environment Variable named BUILD_TRIGGERS to the automated build, and set the Value to a comma-separated list of the build trigger URLs of each dependent automated build.
Using this setup, a push to the root source repository will trigger a build of the root image. Once it completes and is pushed, the post_push hook will be executed. In the hook, a POST is made to each dependent repository's build trigger, containing the name of the branch or tag being built in the request body. This causes the appropriate build rule of the dependent repository to be triggered.
How long is the build taking? Can you post your Dockerfile?
Option 1: is to find out what is taking so long with your automated build to see why it isn't finishing in time. If you post it here, we can see if there is anything you can do to optimize.
Option 2: Is what you are already doing now, using a 3rd party app to trigger the builds in the given order.
Option 3: I'm not sure if this will work for you, since you are using the same repo, but normally you would use repo links for this feature and then chain them: when one finishes, it triggers the next. But since you have one repo, it won't work.
Option 4: Break it up into multiple repos, then you can use repo links.
Option 5: Total hack, last resort (not sure if it will work). You add a curl command as the last line of your Dockerfile, to POST to the build trigger link of the repo with the tag for the next step. You might need to add a sleep in the next step to wait for the image to finish getting pushed to the hub, if the next step needs that tag.
Honestly, the best one is Option 1: whatever you are doing should be able to finish in the allotted time; you are probably doing some things we can optimize to make the whole thing faster. If you get it to come in under the time limit, then everything else isn't needed.
It's possible to do this by tweaking the Build Settings in the Docker Hub repositories.
First, create an Automated Build for /step-1.Dockerfile of your GitHub repository, with the tag step-1. This one doesn't require any special settings.
Next, create another Automated Build for /step-2.Dockerfile of your GitHub repository, with the tag step-2. In the Build Settings, uncheck When active, builds will happen automatically on pushes. Also add a Repository Link to me/step-1.
Do the same for step-3 (linking it to me/step-2).
Now, when you push to the GitHub repository, it will trigger step-1 to build; when that finishes, step-2 will build, and after that, step-3 will build.
Note that you need to wait for the previous stage to successfully build once before you can add a repository link to it.
I just tried the other answers and they are not working for me, so I invented another way of chaining builds by using a separate branch for each build rule, e.g.:
master # This is for docker image tagged base
docker-build-stage1 # tag stage1
docker-build-latest # tag latest
docker-build-dev # tag dev
in which stage1 depends on base, latest depends on stage1, and dev is based on latest.
In each dependency's post_push hook, I call the script below with its direct dependents as the argument:
#!/bin/bash -x
git clone https://github.com/NobodyXu/llvm-toolchain.git
cd llvm-toolchain
git checkout ${1}
git merge --ff-only master
# Set up push.default for push
git config --local push.default simple
# Set up username and passwd
# About the credential, see my other answer:
# https://stackoverflow.com/a/57532225/8375400
git config --local credential.helper store
echo "https://${GITHUB_ROBOT_USER}:${GITHUB_ROBOT_ACCESS_TOKEN}#github.com" > ~/.git-credentials
exec git push origin HEAD
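For example, the hooks/post_push of the branch that builds the base image would call this script with its direct dependent as the argument (the script path and file name below are hypothetical):
#!/bin/bash
# hooks/post_push on the master branch: fast-forward the docker-build-stage1 branch to master
# and push it, which kicks off the stage1 build rule.
bash ./hooks/trigger_dependent.sh docker-build-stage1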
The variables GITHUB_ROBOT_USER and GITHUB_ROBOT_ACCESS_TOKEN are environment variables set in Docker Hub auto build configuration.
Personally, I prefer to register a new robot account on GitHub specifically for this, with two-factor authentication enabled, invite it as a collaborator, and use an access token instead of a password. This is safer than using your own account, which has access to far more repositories than needed, and it is also easier to manage.
You need to disable the repository link, otherwise there will be a lot of unexpected build jobs in Docker Hub.
If you want to see a demo of this solution, check NobodyXu/llvm-toolchain.

How to install and run Nutch in Windows 7 x64

I want to run Nutch on my Windows 7 x64 machine. I have Nutch versions 1.5.1 and 2 from apache.spinellicreations.com/nutch/.
I used the tutorial at wiki.apache.org/nutch/NutchTutorial, but I messed up the second step and I can't verify the installation. The other steps are hard to understand...
What are the steps to install and use Nutch?
Follow these steps to install Nutch on Windows:
1) download and install cygwin from : https://www.cygwin.com/
2) download nutch from : http://nutch.apache.org/downloads.html
3) extract the downloaded Nutch archive and paste the folder into C:\cygwin64\home\
4) rename the folder to apache-nutch
5) open the Cygwin terminal and type the following commands:
- $ cd C:
- $ cd cygwin64
- $ cd home
- $ cd apache-nutch
- $ cd src/bin
- $ ./nutch
You will get the following output:
Usage: nutch COMMAND
where COMMAND is one of:
inject inject new urls into the database
hostinject creates or updates an existing host table from a text file
generate generate new batches to fetch from crawl db
fetch fetch URLs marked during generate
parse parse URLs marked during fetch
updatedb update web table after parsing
updatehostdb update host table after parsing
readdb read/dump records from page database
readhostdb display entries from the hostDB
index run the plugin-based indexer on parsed batches
elasticindex run the elasticsearch indexer - DEPRECATED use the index command instead
solrindex run the solr indexer on parsed batches - DEPRECATED use the index command instead
solrdedup remove duplicates from solr
solrclean remove HTTP 301 and 404 documents from solr - DEPRECATED use the clean command instead
clean remove HTTP 301 and 404 documents and duplicates from indexing backends configured via plugins
parsechecker check the parser for a given url
indexchecker check the indexing filters for a given url
plugin load a plugin and run one of its classes main()
nutchserver run a (local) Nutch server on a user defined port
webapp run a local Nutch web application
junit runs the given JUnit test
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
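To go one step further than just printing the usage, you can verify that the parser and indexing filters work against a single URL with the checker commands listed above (a quick sketch; the URL is just an example):
# Run from the same directory used above:
./nutch parsechecker http://nutch.apache.org/
./nutch indexchecker http://nutch.apache.org/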
You didn't mess up the second step - you simply don't have Cygwin installed (I'm guessing), so you can't run a bash script. Either install Cygwin (simplest) or you could try porting the bash script to a Windows cmd file. (If you do that, you may find other dependencies down the line.)
Hope this helps.
