With Bluemix DevOps Services, I want a deploy script that always creates a new service instance (for example, when I am deploying to a QA stack).
My deploy script looks something like this:
echo "Deleting app ${CF_APP}"
cf delete "${CF_APP}" -f -r
if cf services | grep -q "Insights for Twitter-test"
then
echo "Twitter Service found, deleting in order to create a new one."
cf delete-service "Insights for Twitter-test" -f
else
echo "Twitter Service doesn't exist yet, will create new one."
fi
echo "Creating new service instance for Twitter"
cf create-service twitterinsights Free "Insights for Twitter-test"
echo "Pushing app ${CF_APP}"
cf push "${CF_APP}"
Every time I run it, the service creation step times out:
Server error, status code: 504, error code: 10001, message: Service instance Insights for Twitter-test: The request to the service broker timed out: https://provision-broker.ng.bluemix.net/bmx/provisioning/brokers/832dfb83-50e9-42b5-9516-ac54ab1eeaf4/v2/service_instances/c3a42482-398c-4148-bacb-297c1f6670ef?accepts_incomplete=true&plan_id=a888c333-41b6-4384-97d1-f89d11d48be9&service_id=4176989f-0bf7-4cf2-987a-6a57320744d1
If I manually run this script with the CF CLI it works fine. Only in DevOps Services does the service creation time out.
For the moment I am not concerned with the fact that the Twitter service doesn't hold any state and that I might as well leave it alone. Imagine a database service instead, one that I want to ensure has been created anew.
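One thing I have considered but not yet tested: the broker URL in the error contains accepts_incomplete=true, so provisioning may be asynchronous, and the script could poll cf service until the instance reports ready. A sketch of that idea:
# Untested sketch: wait for asynchronous provisioning to complete.
echo "Waiting for the Twitter service to finish provisioning"
until cf service "Insights for Twitter-test" | grep -q "create succeeded"
do
    sleep 10
done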
Any and all help is appreciated.
Related
I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually I can log in using
ssh -i myKeyFile.pem myuser@IP
Once I access the remote machine, I can execute some docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***@IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created an SSH service connection in the project settings in Azure DevOps
I created the CD pipeline
I added an SSH task with the following parameters
When I manually trigger it to test whether it works, the release starts fine, but after roughly 1 minute 43 seconds I get the error:
Then, when I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools from Azure DevOps. Lots of people have reported this, and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of that is detailed; I will add it later on).
The workaround is to create a self-hosted agent. After it has been created, you will need to re-create your CD pipeline using the new self-hosted agent.
The rest of the SSH task configuration depends on your needs. But if you want to test that the SSH connection works, just print something:
echo "I'm connected"
After this your CD pipeline should be working fine.
More details on how to create the self-hosted agent on Windows. There are also links for Linux and Mac.
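As a rough sketch of what the unattended setup looks like on Linux, once you have downloaded and extracted the agent package (the organization URL, pool name, and personal access token below are placeholders):
# Run from the extracted agent folder: register the agent, then start it.
./config.sh --url https://dev.azure.com/yourorg --auth pat --token YOUR_PAT --pool Default
./run.sh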
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and Azure DevOps agents obviously run in a Microsoft network and were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
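If you prefer the Azure CLI to the portal for that change, a minimal sketch looks like this (the resource group, NSG name, and priority are placeholders for your own values):
# Allow inbound SSH from any source address on the network security group.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNsg \
    --name AllowSshFromAll \
    --priority 300 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 22 \
    --source-address-prefixes '*'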
I'm using a Kubernetes cluster in Azure running an ingress controller. The ingress controller routes to different services via a given context root.
To add another service and connect it to my ingress, I built a simple shell script that looks like this:
kubectl apply -f "$1-svc.yaml"
# some script magic here to add a new route in the hello-world-ingress.json
kubectl apply -f hello-world-ingress.json
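For reference, I call it with the service name as its only argument; the script name here is just a placeholder:
./add-service.sh hello-world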
I tested the script on my local machine and everything works as expected. Now I want to trigger the script with an HTTP REST call on Azure.
Does anyone have an idea how to do that? So far I know:
I need the Azure CLI with Kubernetes to run the kubectl command
I need something to build the HTTP trigger. I tried using Azure Functions, but I wasn't able to install the Azure CLI in Azure Functions on the Azure Portal, and I wasn't able to install the Azure CLI + Azure Functions in a Docker container.
Does anyone have an idea how to trigger my shell script via HTTP in Azure, in an environment where the Azure CLI exists?
The easiest way, in my opinion, is to set up an Azure instance with kubectl and the Azure CLI configured to talk to your cluster, and on that same server set up something like shell2http. For example:
shell2http -export-all-vars /mybash "yourbash.sh"
shell2http -form /apply 'kubectl apply -f "$v-svc.yaml"'
shell2http -export-all-vars /domore "domore.sh"
Where $v above is the name of your deployment.
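Assuming shell2http is listening on its default port 8080 and following the $v convention above, the endpoints could then be hit with plain HTTP calls like these (the host name is a placeholder):
# Trigger the plain script endpoint.
curl "http://your-vm-host:8080/mybash"
# Trigger the apply endpoint, passing the deployment name as the v parameter.
curl "http://your-vm-host:8080/apply?v=hello-world"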
I am looking to push a custom docker image to OpenShift Online 3 to run container instances there. I have seen many instructions on forums / blogs about how to do this, but the first part of the process seems to be eluding me.
This is one of the references I'm using: link
I log in using the oc command:
oc login https://api.starter-us-west-2.openshift.com --token=xxxxxxx
This gets me in and I can run the command to return the running services (one of which should be the docker instance):
oc get svc
But the response I get is simply:
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
test-phil4   172.30.217.192   <none>        8080/TCP   13h
I was expecting to see lines for a docker instance that I could connect to. I think I need to 'expose' this; the command should be:
oc expose service docker-registry
but without seeing the service there in the list of services, I'm not sure how I can do that, and the result is, predictably:
error: services "docker-registry" not found
I feel like this has to do with the permissions on my user: I have currently granted my user 'image-pusher', 'image-builder', 'registry-admin' and 'cluster-status'. There are many more options, most of which I don't seem to be able to apply.
Perhaps this is not possible with the free tier, or perhaps not available within the online version at all? Would anyone know how to go about connecting my existing docker repo to the OpenShift repo I'm connected to and uploading my custom images?
Thanks,
Phil
OpenShift Online clusters have their registry exposed at registry.<cluster-id>.openshift.com. So, for your example, to log in to the registry for starter-us-west-2, after logging in to the cluster, you would run
docker login registry.starter-us-west-2.openshift.com -u $(oc whoami) -p $(oc whoami -t)
You can then push and pull from your project with
docker push registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
docker pull registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
Note: to docker push you have to have already tagged your local image as registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
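Putting that together for the starter-us-west-2 example, a full tag-and-push might look like this (the project and image names are placeholders):
# Tag the local image with the registry route for your project, then push it.
docker tag myimage:latest registry.starter-us-west-2.openshift.com/myproject/myimage:latest
docker push registry.starter-us-west-2.openshift.com/myproject/myimage:latest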
I am trying to push my .Net Native API to Pivotal Cloud Foundry. Below is the command I am using to push my API.
cf push API-Name -s windows2012R2 -b binary_buildpack -c "start" -m 1G -p C:/Path
While running, it says "No start command detected", but when I ran -c ? it showed me that start was a valid command. Then when I look at the log file it shows me:
ERR Could not determine a start command. Use the -c flag to 'cf push' to specify a custom start command.
and at the end it will say:
ERR Failed to create container
"reason"=>"CRASHED", "exit_description"=>"failed to initialize container"
Am I running the command wrong or is there something I need to do to my API to make it compatible?
I figured out that I had to turn the health check off, and my app and all instances are started now.
cf set-health-check NAME none
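Combined with the original push, the whole sequence would look something like this sketch (the app name and path are from the question; --no-start is my assumption, to avoid a failed health check on the first push):
# Push without starting, disable the health check, then start the app.
cf push API-Name -s windows2012R2 -b binary_buildpack -c "start" -m 1G -p C:/Path --no-start
cf set-health-check API-Name none
cf start API-Name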
I am new to Informatica and looking for assistance from available experts.
I am able to log in to the admin console (http://localhost:6008/administrator/#admin), where I can see that my node, my repository service, and my integration service are all available.
Through the PowerCenter Designer tool, I am able to view my mappings. I am also able to connect to the PowerCenter Workflow Manager. However, while trying to execute my workflow, it says that it cannot connect to the integration service.
I am getting the following error in the log:
CCM_10322
The following error occurred while logging to Log Service: [[DOM_10022] The master gateway node for the domain is not available.
Electing another master gateway. Wait for the election of the master gateway node to complete.
If the problem persists, verify that the master gateway node is running.].
Thanks,
Manish.
Prerequisites:
1) Once the repository files have been restored, make sure all services (repository and integration services) are up.
2) Make sure the INFA_DOMAIN_FILE environment variable has been created and has the right values, and make sure the file's path has been added to PATH on both the server and client machines.
Solution: as a last resort, update the domain info on the server and client machines as shown below, then restart the Informatica services and the server. Now it works!!!
Update the domains:
On the server:
cd $INFA_HOME/isp/bin
sh infacmd.sh updateGatewayInfo -dn Domain_name -dg Servername.net:6005
On the client:
cd e:\Informatica\9.0.1\clients\PowercenterClient\CommandLineUtilities\Pc\server\bin\
infacmd updateGatewayInfo -dn Domain_name -dg servername.net:6005
Restart the server:
cd $INFA_HOME/tomcat/bin
sh infaservices.sh shutdown
sh startserver.sh shutdown
sh startserver.sh startup
sh infaservices.sh startup
Done. Now check!!!
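To check from the command line as well, infacmd's ping can confirm that the domain and a service are reachable; a sketch, with the domain and service names as placeholders:
# Ping a service in the domain to verify the gateway is answering.
cd $INFA_HOME/isp/bin
sh infacmd.sh ping -dn Domain_name -sn Repository_Service_Name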
You must define one node as the master gateway using infacmd or infasetup; then you can run all the services from the admin console.