How to restart a Databricks cluster at a specific time, either using a script or a job manager - cluster-computing

A restart command is available, but it has no option to run at a specific time:
databricks clusters restart --cluster-id <CLUSTER_ID>

You can run the command below either from a script or in a Databricks notebook (using the %sh magic command):
curl \
-X POST \
-H 'Authorization: Bearer <TOKEN>' \
-d '{"cluster_id": "<CLUSTER_ID>"}' \
https://<URL>/api/2.0/clusters/restart
From there, you can schedule the script with a tool like cron, or schedule the notebook with a Databricks Job.
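For example, a minimal sketch of the cron route (the same <TOKEN>, <CLUSTER_ID>, and <URL> placeholders apply, and the script path is hypothetical):
#!/bin/bash
# restart_cluster.sh -- restart a Databricks cluster via the REST API
curl \
-X POST \
-H 'Authorization: Bearer <TOKEN>' \
-d '{"cluster_id": "<CLUSTER_ID>"}' \
https://<URL>/api/2.0/clusters/restart
A crontab entry then runs it at the chosen time, e.g. every day at 02:00:
0 2 * * * /path/to/restart_cluster.sh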

Related

Azure DevOps: triggering a pipeline via REST fails for a newly created pipeline

The script below triggers a pipeline in Azure DevOps via REST. It works fine on an existing pipeline, but when I try it on a newly created pipeline that has never run, it throws the error below. Any help or suggestion would be really appreciated.
{"$id":"1","innerException":null,"message":"No pool was specified.\nUnexpected parameter 'pool'","typeName":"Microsoft.Azure.Pipelines.WebApi.PipelineValidationException, Microsoft.Azure.Pipelines.WebApi","typeKey":"PipelineValidationException","errorCode":0,"eventId":3000}
#!/bin/bash
echo "Enter PAT Token"
read -r PAT
echo "Enter Organization name"
read -r OrganizationName
echo "Enter Project ID"
read -r projectId
pipelineId=$(jq -r '.id' PipeOutput.txt) # Get the pipeline definition ID from an external JSON file
Trigger_Pipeline=$(curl --write-out "%{http_code}\n" -X POST -L \
-u :"$PAT" "https://dev.azure.com/${OrganizationName}/${projectId}/_apis/pipelines/${pipelineId}/runs?api-version=6.0-preview.1" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"resources": {
"repositories": {
"self": {
"refName": "refs/heads/master"
}
}
}
}' --output Triggeroutput.txt --silent)
echo "Output: ${Trigger_Pipeline}"
I tested this REST API with both a newly created classic pipeline and a newly created YAML pipeline, and it works fine on both.
For us to investigate this issue further, please share the YAML file of the newly created pipeline definition.
In addition, please also try the following steps:
Try executing the REST API with another method, such as the Postman client, to see if it works (a minimal equivalent request is sketched after this list).
Try triggering the new pipeline manually to see if it works.
If the manual trigger works, try the REST API trigger again.
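For that first step, a minimal curl equivalent of the same call (PAT, organization, project, and pipeline ID are placeholders; this is a sanity-check sketch, not a verified fix for the "No pool was specified" error):
curl -u :<PAT> -X POST \
-H "Content-Type: application/json" \
-d '{"resources": {"repositories": {"self": {"refName": "refs/heads/master"}}}}' \
"https://dev.azure.com/<ORG>/<PROJECT>/_apis/pipelines/<PIPELINE_ID>/runs?api-version=6.0-preview.1"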

curl command on Windows not passing user:pass correctly

I'm using curl to execute some API calls against a Bitbucket server. The command is like this:
curl https://bitbucket.myserver.com/rest/api/1.0/projects/PRJ/repos/repo-slug/tags \
-u 'user:pass' \
-X POST \
-H 'Content-type: application/json' \
-d '{"name" : "test-tag", "message": "Test tag from curl", "startPoint": "master" }'
This is expected to create a tag on the master branch in the repo. However, it fails, complaining of an incorrect username/password.
When I copy/paste the command into a git-bash prompt, it works as expected and completes successfully. I tried this with multiple user accounts. I also tried specifying only the username and then entering the password at the prompt, with the same results.
How can I get curl on Windows to pass the correct username/password to the server?
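One likely cause, though the thread leaves it unconfirmed: cmd.exe does not treat single quotes as quoting characters, so -u 'user:pass' sends the quote marks as literal parts of the credentials (git-bash, where the command works, strips them). A sketch of the same call rewritten with double quotes for cmd.exe, with the inner JSON quotes backslash-escaped:
curl https://bitbucket.myserver.com/rest/api/1.0/projects/PRJ/repos/repo-slug/tags -u "user:pass" -X POST -H "Content-type: application/json" -d "{\"name\": \"test-tag\", \"message\": \"Test tag from curl\", \"startPoint\": \"master\"}"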

AWS CLI command list for checking limits?

I'm working on a project where a single section of our deployment pipeline can easily take up to an hour to deploy onto AWS. We have about 30 steps in our pipeline, and one of the primary time killers when spinning up a new environment is hitting a random limit in AWS. I've searched the AWS website for ways to check limits and have found a few commands for specific services, but are there commands (and if so, a list of them) that can check each limit, such as 'NatGatewayLimitExceeded', for example? It would be great if I could make a script that checked all of our limits before we wasted time spinning up half an environment only to be blocked by something like this. Thank you in advance!
From here they say that if you have AWS Premium Support, you can do this:
CHECK_ID=$(aws --region us-east-1 support describe-trusted-advisor-checks \
--language en --query 'checks[?name==`Service Limits`].{id:id}[0].id' \
--output text)
aws support describe-trusted-advisor-check-result --check-id "$CHECK_ID" \
--query 'result.sort_by(flaggedResources[?status!="ok"],&metadata[2])[].metadata' \
--output table --region us-east-1
If you do not have AWS Premium Support, I hacked together this:
awscommands=($(COMP_LINE='aws' aws_completer))
for command in "${awscommands[@]}"; do COMP_LINE="aws $command" \
aws_completer | xargs -n1 -I% printf "aws $command %\n"; done | grep limit | \
bash 2>/dev/null
This uses AWS's own bash-completion program to find all possible aws commands (mutatis mutandis for your environment), then all subcommands of those commands that have "limit" in their name, and then runs them. Some of those "limit" subcommands have required options; my trick does not account for those, and they just error out, so I redirected stderr to /dev/null. The results are therefore incomplete. Suggestions for improvement are welcome.
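To give a flavor of what the loop discovers and runs, one such subcommand with "limit" in its name is the DynamoDB account-limits call, which you can also run directly (region is a placeholder):
aws dynamodb describe-limits --region us-east-1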

Why can't I use the API for a common user integrated with OIDC in ICP (IBM Cloud Private)?

https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/apis/auth_manage_api.html
I tried to use the API for a common user integrated with OIDC, but the error message shows:
{"error_description":"invalid_resource_owner_credential","error":"server_error"}
The command is as follows:
curl -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password&username=abc\#test\.com&password=ChangeMe\!\#\#&scope=openid" https://<cluster_access_ip>:8443/idprovider/v1/auth/identitytoken --insecure
But it works fine for the administrator (admin/admin), which is strange.
The issue is with the special character "!", which is used for history expansion at the command-line prompt.
You can use the command below, which works:
curl -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password&username=abc#test.com&password=ChangeMe"'!'"##&scope=openid" https://<cluster_access_ip>:8443/idprovider/v1/auth/identitytoken --insecure
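Alternatively, a sketch assuming an interactive bash session: disable history expansion first, and the plain double-quoted form works without the extra quoting gymnastics:
set +H  # turn off bash history expansion so "!" is taken literally
curl -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password&username=abc#test.com&password=ChangeMe!##&scope=openid" https://<cluster_access_ip>:8443/idprovider/v1/auth/identitytoken --insecure
set -H  # re-enable history expansion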
Have you configured LDAP, created teams, and added users to the team? Did you check the logs on the master node under /var/log/containers for platform-identity-manager, platform-auth-service, and platform-identity-provider?
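On that log-checking suggestion, a minimal sketch (the exact file names vary by pod, so the pattern and path below are assumptions):
# list identity-related container logs on the master node, then inspect one
ls /var/log/containers | grep -i platform-identity
tail -n 100 /var/log/containers/<platform-identity-provider-pod>.log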

Apache Kylin's fact table is not getting incremental data from Hive?

I am working on Apache Kylin. I am able to get incremental data from Hive dimension tables into Kylin using the REST API (via curl commands), but my fact table is not getting incremental data from Hive when I use the curl command. If I do a manual build in the Kylin GUI, then I do get incremental data into the fact table. I am using these curl commands:
/usr/bin/curl -c /home/hdfs/.mozilla/firefox/a7ec5aak.default/cookies.sqlite -X POST -H "Authorization: Basic QURNSU46S1lMSU4=" -H 'Content-Type: application/json' http://192.168.1.135:7070/kylin/api/user/authentication
/usr/bin/curl -b /home/hdfs/.mozilla/firefox/a7ec5aak.default/cookies.sqlite -X PUT -H 'Content-Type: application/json' -d '{"startTime": 1425384000000, "endTime": 1488907200000, "buildType": "BUILD"}' http://192.168.1.135:7070/kylin/api/cubes/incident_analytics_cube/rebuild
What do I have to do to get incremental data into the fact table in Kylin using a curl command? Please advise.
Also, in Kylin, am I able to use a join query statement without using the fact table?
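As an aside on the rebuild command above: startTime and endTime are epoch milliseconds, and a sketch with GNU date can generate them instead of hard-coding (the timestamp here is just an example):
# epoch milliseconds for a given UTC timestamp (whole seconds, so append three zeros)
date -d '2015-03-03 12:00:00 UTC' +%s000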
