I'm trying to add a policy to my Vault test container, but it is giving me an error.
Below is the command I'm running.
Container.ExecResult result = vaultContainer.execInContainer("vault", "policy", "write", "admin", "- <<EOF\n" +
"path \"secret/*\" {\n" +
" capabilities = [ \"read\" ]\n" +
"}\n" +
"EOF");
Error:
Container.ExecResult(exitCode=2, stdout=, stderr=Error opening policy file: open - <<EOF
path "secret/*" {
capabilities = [ "read" ]
}
EOF: no such file or directory
)
And the policy is not getting written in the Vault container. Any help would be greatly appreciated.
I was able to run the command with the following steps (a combined sketch follows the list):
Create a policy file in the resources folder.
Map this file into the container with .withClasspathResourceMapping("policy.hcl", "/opt/policy.hcl", BindMode.READ_ONLY).
Run the command to create the policy: vaultContainer.execInContainer("vault", "policy", "write", "full_access", "/opt/policy.hcl").
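The original command fails because execInContainer runs the vault binary directly rather than through a shell, so the "- <<EOF ..." heredoc is passed to the CLI as one literal file-name argument. Here is a minimal sketch combining the steps above; the image tag and root token are assumptions for illustration, and policy.hcl is the file placed in src/test/resources:

import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.Container;
import org.testcontainers.vault.VaultContainer;

public class VaultPolicyExample {
    public static void main(String[] args) throws Exception {
        // Image tag and root token are assumptions; use whatever your test already configures.
        try (VaultContainer<?> vaultContainer = new VaultContainer<>("hashicorp/vault:1.13")
                .withVaultToken("my-root-token")
                // policy.hcl sits in src/test/resources and is mounted read-only into the container
                .withClasspathResourceMapping("policy.hcl", "/opt/policy.hcl", BindMode.READ_ONLY)) {
            vaultContainer.start();

            // Point the CLI at the mounted file instead of a heredoc
            Container.ExecResult result = vaultContainer.execInContainer(
                    "vault", "policy", "write", "full_access", "/opt/policy.hcl");

            System.out.println("exit code: " + result.getExitCode());
            System.out.println(result.getStdout());
        }
    }
}

Wrapping the original heredoc in a shell (execInContainer("/bin/sh", "-c", "vault policy write admin - <<EOF ... EOF")) should also work, but the mounted file keeps the test readable.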
I have an .exe file downloaded to a specific folder in the VM, and I am trying to install Adobe using PowerShell DSC code.
The script fails with the error below during execution (the configuration is called through an ARM template); however, if I check inside the VM, Adobe is installed.
I tried running the same script manually inside the VM and did not face any error.
[{"code":"VMExtensionProvisioningError","message":"VM has reported a failure when processing extension 'configureWindowsServer'. Error message: "DSC Configuration 'Adobe' completed with error(s). Following are the first few: PowerShell DSC resource MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1618 was not expected. Configuration is likely not correct The SendConfigurationApply function did not succeed."\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionDSCWindowsTroubleshoot "}]}
Configuration Adobe
{
    $PackagesFolder = "C:\Packages\Adobe"

    $AcrobatReader = @{
        "Name"            = "Adobe Acrobat Reader DC"
        "ProductId"       = "XXXXXX-XXXXX-XXXXXXX"
        "Installer"       = "AcroRdrDC.exe"
        "FileHash"        = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        "HashAlgorithm"   = "SHA256"
        "DestinationPath" = "$PackagesFolder\AdobeAcrobatReaderDC"
        "Arguments"       = "/msi EULA_ACCEPT=YES /qn"
    }

    Package AdobeAcrobatReaderDC {
        Ensure    = "Present"
        Name      = $AcrobatReader.Name
        ProductId = $AcrobatReader.ProductId
        Path      = ("{0}\{1}" -f $AcrobatReader.DestinationPath, $AcrobatReader.Installer)
        Arguments = $AcrobatReader.Arguments
    }
}
I tried adding a service principal to an Azure Databricks workspace using Cloud Shell, but I am getting an error. I am able to see all the clusters in the workspace, and I was the one who created that workspace. Do I need to be in the admin group if I want to add a service principal to the workspace?
curl --netrc -X POST \
  https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net/api/2.0/preview/scim/v2/ServicePrincipals \
  --header 'Content-type: application/scim+json' \
  --data @create-service-principal.json \
  | jq .
The file has the following contents:
{ "displayName": "sp-name", "applicationId": "a9217fxxxxcd-9ab8-dxxxxxxxxxxxxx", "entitlements": [ { "value": "allow-cluster-create" } ], "schemas": [ "urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal" ], "active": true }
Here is the error I am getting:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   279  100   279    0     0   5166      0 --:--:-- --:--:-- --:--:--  5264
parse error: Invalid numeric literal at line 2, column 0
Do I need to be in the admin group if I want to add a service principal to the workspace?
The issue is with the JSON file, not with access to the admin group.
You need to check the double quotes on line 2 of your JSON file.
You can refer to this GitHub link.
Try this Python code, which you can run in a Databricks notebook:
import pandas
import json
import requests
# COMMAND ----------
# MAGIC %md ### define variables
# COMMAND ----------
pat = 'EnterPATHere' # paste PAT. Get it from settings > user settings
workspaceURL = 'EnterWorkspaceURLHere' # paste the workspace url in the format of 'https://adb-1234567.89.azuredatabricks.net'
applicationID = 'EnterApplicationIDHere' # paste ApplicationID / ClientID of Service Principal. Get it from Azure Portal
friendlyName = 'AddFriendlyNameHere' # paste FriendlyName of ServicePrincipal. Get it from Azure Portal
# COMMAND ----------
# MAGIC %md ### add service principal
# COMMAND ----------
payload_raw = {
    'schemas': ['urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal'],
    'applicationId': applicationID,
    'displayName': friendlyName,
    'groups': [],
    'entitlements': []
}
payload = json.loads(json.dumps(payload_raw))

response = requests.post(workspaceURL + '/api/2.0/preview/scim/v2/ServicePrincipals',
                         headers={'Authorization': 'Bearer ' + pat,
                                  'Content-Type': 'application/scim+json'},
                         data=json.dumps(payload))
response.content
I have actually published a blog post where a Python script is provided to fully manage service principals and access control in Databricks workspaces.
I'm trying to send an email after my pipeline ends that will include some logs I'm collecting.
emailext subject: "${env.JOB_NAME} #" + env.BUILD_NUMBER + " - " + currentBuild.currentResult + " for branch: " + branch_Name + " commit: " + "${git_commit_hash}", body: """
Installation : ${create_cluster_result}
unit test results: ${run_unit_tests_result}
error logs: ${error_logs}
""", attachLog: true, attachmentsPattern: "${error_logs}", to: "$extendedEmailRec"
but I get only one attachment, build.log. What am I missing here?
It seems that "${error_logs}" is the path to only one log file. You can use wildcards in your pattern as well. For example, treating error_logs as your logs directory (*.log in the code below):
""", attachLog: true, attachmentsPattern: "${error_logs}/*.log", to: "$extendedEmailRec"
This way you end up with every file with the .log extension in your attachments.
Have a look at the code first:
resource "aws_instance" "ec2-lab-cis01-${var.aue2a}-rgs" {
ami = "ami-0356d62c1d9705bdf"
instance_type = "t3.small" #t3.small
key_name = "${var.key_pair}"
}
output "lab-cis01" {
value = ["Private IP = ${aws_instance.ec2-lab-cis01-${var.aue2a}-rgs.private_ip}"]
}
I have multiple servers and I want to use variable names in the names of resources. How can I do this? I also cannot reference this EC2 name while creating Route 53 entries.
The error VS Code is giving me is:
"expected "}" but found invalid sequence "$""
When I run terraform init, it gives me the following error:
Error loading /test/test.tf: Error reading config for output lab-cis01: parse error at 1:47: expected "}" but found invalid sequence "$"
I am using the net.schmizz.sshj.xfer.scp.SCPFileTransfer class to upload a file from the local machine to a remote server. It is failing with the following error:
net.schmizz.sshj.xfer.scp.SCPException: EOF while expecting response
to protocol message. Additional info: bash: -c: line 0: unexpected EOF while looking for matching
bash: -c: line 1: syntax error: unexpected end of file
I am facing this issue only when the remote machine is Windows. For a Linux machine the upload succeeds.
I have tried the following steps in my code:
1. Download a file from the remote machine to local.
2. Upload the same file back to the remote machine.
It is failing in step 2.
@Override
public boolean upload(String localLocation, String remoteLocation) throws SSHClientException {
    this.ensureConnected();
    SCPFileTransfer scp = this.sshj.newSCPFileTransfer();
    try {
        scp.upload(localLocation, remoteLocation);
    } catch (IOException e) {
        log.error("Failed to copy file from local path {} to remote location {} on host {}",
                localLocation, remoteLocation, hostname, e);
        return false;
    }
    return true;
}
Any leads will be really helpful.
Thanks.
I got the solution.
The remote file path that I used looks like:
'/cygdrive/c/Program Files/XXX/'
The issue is the "'" in the path. Removing the "'" from the path results in a successful upload of the file.
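For reference, a minimal before/after sketch using the upload method above; the local file name is a made-up example:

// Failing call: the remote path is wrapped in literal single quotes
upload("/tmp/report.pdf", "'/cygdrive/c/Program Files/XXX/'");

// Working call: the same path without the surrounding quotes
upload("/tmp/report.pdf", "/cygdrive/c/Program Files/XXX/");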
Thanks to all who gave me leads.
Thanks,
Shruti