Unable to Authenticate Application Default Credentials - bash

I am following the docs here and it says:
The simplest way for applications to authenticate to a Google Cloud Platform API service is by using Application Default Credentials (ADC). Services using ADC first search for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable; Google Cloud recommends you set this environment variable to point to your service account key file (the .json file downloaded when you created a service account key, as explained in Set Up a Service Account).
And it says to use this command:
$ export GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
In the Google Shell, I have attempted this:
<INSERT_SOMETHING>"~$ $ export GOOGLE_APPLICATION_CREDENTIALS=</Users/grantespanet/Downloads/myfile.json>
But I get this error: -bash: syntax error near unexpected token newline
I have also tried this:
<INSERT_SOMETHING>:~$ $ export GOOGLE_APPLICATION_CREDENTIALS=/Users/grantespanet/Downloads/myfile.json
but nothing happens
I know the command is pointing to the correct file location. How can I successfully authenticate Application Default Credentials?

The command you are executing is a variable assignment. Variable GOOGLE_APPLICATION_CREDENTIALS is given the value that follows the = sign.
The export keyword has the effect of making this variable available to child processes of the shell you are executing it from. In plain terms, it means any program you launch from the shell will have a copy of that variable (with its value) and can use it.
It is entirely normal for that command to produce no visible result or output.
The instructions you are following probably require you to run other commands after this one, which will use this value. Try performing the next steps.
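For example, with a placeholder path standing in for your own key file location, the export itself prints nothing, but the variable becomes visible to child processes:

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/myfile.json   # no output is expected here
echo "$GOOGLE_APPLICATION_CREDENTIALS"                       # the current shell sees the value
bash -c 'echo "$GOOGLE_APPLICATION_CREDENTIALS"'             # so does any child process, e.g. gcloud or your app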

Related

How to pass dynamic variables in firebase database set command?

I want to update the build version number in Firebase, so I am running a firebase command from a bash file that looks like this:
sudo firebase database:set /build_version --data '"1.1.13.12.34"' --project <PROJECT_ID>
This is working fine.
But I want to pass the version number dynamically, so I would like to write the command something like this:
VERSION=<ANY_NUMBER>
sudo firebase database:set /build_version --data $VERSION --project <PROJECT_ID>
But passing the variable dynamically gives me an error:
Error: Unexpected error while setting data: FirebaseError: HTTP Error:
400, Invalid data; couldn't parse JSON object, array, or value.
I have read that Firebase doesn't allow some special characters and that we cannot wrap Firebase data in double quotes.
In Firebase, "build_version" is not a JSON object. It's a single variable.
My Realtime Database structure looks like this:
Can anyone suggest how I can pass dynamic data to the firebase database command?
Sharing just a trick
If Firebase doesn't allow special characters, you can put a placeholder string (say, BUILD_VERSION) in place of your shell variable $VERSION, then use shell syntax to search and replace that string with your build version before running the firebase command.
Afterwards, you can replace the build version with the string BUILD_VERSION again.
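A minimal sketch of that trick, assuming the firebase command lives in a file called deploy.sh (the file name and version number are made up for illustration):

VERSION="1.1.13.12.35"
# deploy.sh contains the literal placeholder BUILD_VERSION in place of the version string
sed -i "s/BUILD_VERSION/$VERSION/" deploy.sh   # GNU sed; on macOS use: sed -i '' ...
bash deploy.sh
# put the placeholder back so the next build can run the same substitution
sed -i "s/$VERSION/BUILD_VERSION/" deploy.sh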

Creating VM with Azure CLI

I'm posting to see if someone can point me in the right direction. I've been trying to get this working for a few days now and I'm at a dead end.
I have tried removing the quotation marks around the variables and removing the variables completely. I have tried to run this in Azure Cloud Shell, WSL, and PowerShell as a .ps1 script. However, I continue to get the same type of errors.
Here is the script.
Here is the error I am getting.
validation error: Parameter 'resource_group_name' must conform to the
following pattern: '^[-\w\._\(\)]+$'. ' not recognized. ' not found. Check the spelling and casing and
try again.
If I run a one-liner without variables, I get this error.
az vm create: error: local variable 'images' referenced before
assignment
Any help would be greatly appreciated. Thank you!
So for the first error, it looks like the VM creation is initiating too fast to recognize the new resource group.
For the image, you need to use the full URN: MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest
First, it's a bash script, and variable definition syntax differs between Windows and Linux, so I suggest you run the script in the Azure Cloud Shell in bash mode, or in WSL.
Second, the error shows the resource group name does not meet the Naming rules and restrictions for Azure resources; you need to read that document to check whether the group name is valid.
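For reference, a rough sketch of the kind of bash script both answers describe; the resource group and VM names below are placeholders, not values from the original script:

#!/bin/bash
resourceGroup="myResourceGroup"   # must match Azure's naming rules for resource groups
vmName="myVM"
az group create --name "$resourceGroup" --location eastus
az vm create \
  --resource-group "$resourceGroup" \
  --name "$vmName" \
  --image MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest \
  --admin-username azureuser \
  --admin-password '<YOUR_PASSWORD>'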
So I found out what the issue was: it was how VS Code formats bash scripts. I'm not exactly sure what VS Code is doing to the formatting, but I think it might have something to do with ASCII. I recreated the script with nano in the Cloud Shell and it runs fine as a bash shell script. Thanks everyone for your help.

Script "heroku login" in a CI environment

Is there a sanctioned way to either script or bypass the Heroku Toolbelt's login prompt? I've come across a number of hacks which claim to provide a solution (expect, environment variables, interpolating environment variables in .netrc, etc.), but I'd really like to find a stable solution.
From what I see in the docs, there are three ways one can go about this.
Method 1: Login via CLI
The first one is to authenticate via login and password (bleh). Knowing the input format (login on one line, password on the other), we can cat or echo the data in:
Via secure env vars:
(
echo "$HEROKU_CREDENTIALS_EMAIL" # or you can plaintext it, if you're feeling adventurous
echo "$HEROKU_CREDENTIALS_PASSWORD"
) | heroku login
The important parts here are the variable names and the fact that the values are stored as secure environment variables.
Or via an encrypted file:
Prepare a file named .heroku_cred in the repo root:
pdoherty926@gmail.com
IAmPdohertyAndThisIsMyPasswordIWorkHereWithMyOldMan
Then encrypt it:
travis encrypt-file .heroku_cred
That'll give you two things: a file named .heroku_cred.enc in the repo root and a command that'll decrypt the file on Travis. git add the encrypted file (be careful to not grab the unencrypted file by accident!) and add the command to before_install. Then, to the place where you want to authenticate with Heroku add:
cat .heroku_cred | heroku login
Now, this method sucks for two reasons: first, you're using your literal password, which is terrible because if it leaks you're 100% fucked; second, if you ever change the password your builds will start spuriously failing.
Method 2: Environment Variable
The next method is using the HEROKU_API_KEY env var, which might "interfere with the normal functioning of auth commands", but that doesn't matter, because you're not authenticating in other ways anyway.
Doing this requires no changes to .travis.yml, only a secure environment variable named HEROKU_API_KEY containing the output from
heroku auth:token
run on your machine (where you're probably authenticated).
Again, the important parts are the variable name and keeping it a secure environment variable.
This method combines both security (OAuth token used, which can just be revoked) and simplicity of setup.
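For example, assuming you already have the Heroku CLI installed and are logged in on your own machine (the heroku apps call at the end is just an arbitrary smoke test):

# locally: print the OAuth token, then store it in a secure Travis
# environment variable named HEROKU_API_KEY
heroku auth:token

# on CI: with HEROKU_API_KEY set, the CLI is already authenticated,
# so commands work without running heroku login
heroku apps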
Method 3: Write directly to token storage file
There's the third way, too: using ~/.netrc, which'll cooperate with the whole ecosystem as if you authenticated via the CLI with username and password (but you're using an OAuth token instead, which is better).
The steps to follow here are similar to the encrypted-file variant of Method 1:
First create a file named .heroku-netrc, which contains the part of your ~/.netrc responsible for authenticating with Heroku (details) like this:
machine api.heroku.com
login me@example.com
password c4cd94da15ea0544802c2cfd5ec4ead324327430
machine git.heroku.com
login me@example.com
password c4cd94da15ea0544802c2cfd5ec4ead324327430
Then, to encrypt it, run:
travis encrypt-file .heroku-netrc
You'll get a decryption command (add it to before_install) and .heroku-netrc.enc, which you should git add (be careful not to add the unencrypted .heroku-netrc). Afterwards, add this to the install step:
cat .heroku-netrc >> $HOME/.netrc
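To sanity-check that the CLI picks up the token from .netrc, you could follow that with something like this (heroku auth:whoami simply prints the account the CLI is authenticated as):

heroku auth:whoami   # should print the account email if the token is valid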

AWS Data Pipeline: setting local variable in shell command

I am trying to make use of the uuid library within a shell command invoked by an AWS Data Pipeline. It seems like the uuid function works fine, but when I try to assign its value to a variable, the data is lost.
A snippet of my testing script below:
sudo yum -y install uuid-devel
myExportId=$(uuid)
echo 'myExportId:' $myExportId
uuid
When I look at the activity log for the pipeline I see that the uuid function seems to be working, but the variable does not seem to contain anything?
myExportId:
b6cf791a-1d5e-11e6-a581-122c089c2e25
I notice this same behavior with other local variables in my scripts. Am I expressing these incorrectly?
My pipeline parameters are working fine within the script, so no issues there.
Shortly after posting this I realized that I had wrapped some of the above steps in subshells within my pipeline definition. It can be difficult to debug shell scripts embedded in JSON, so I think I will move on to using the scriptUri parameter pointing to a bash file.
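A minimal standalone illustration of the subshell behaviour (plain bash, not the actual pipeline definition):

myExportId=""
( myExportId=$(uuid) )               # runs in a subshell, so the assignment is lost
echo "after subshell: $myExportId"   # prints nothing

myExportId=$(uuid)                   # same shell: the value survives
echo "same shell: $myExportId"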

Access build folder in Xcode Server CI bot run (env variables?)

I need to access the folder that is created dynamically during each bot integration. On one of the runs it is something like this:
/Library/Developer/XcodeServer/Integrations/Caches/a3c682dd0c4d569a3bc84e58eab88a48/DerivedData/Build/Products/Debug-iphonesimulator/my.app
I would like to get to this folder in a post trigger; how do I go about it? Based on the WWDC talk, it seems like environment variables such as XCS_INTEGRATION_RESULT and XCS_ERROR_COUNT are being used. I can also see something like PROJECT_DIR in the logs.
But I can't access any of these variables from my command line (is it because I am a different user than the bot?).
Also where can I find the list of variables created by this CI system?
I have been echoing set to the bot log; the first line of my bot script is simply
set
When you view the log after the integration is complete, the output will be in your trigger output.
XCS_ANALYZER_WARNING_CHANGE=0
XCS_ANALYZER_WARNING_COUNT=0
XCS_ARCHIVE=/Library/Developer/XcodeServer/Integrations/Integration-76eb5292bd7eff1bfe4160670c2d4576/Archive.xcarchive
XCS_BOT_ID=4f7c7e65532389e2a741d29758466c18
XCS_BOT_NAME='Reader'
XCS_BOT_TINY_ID=00B0A7D
XCS_ERROR_CHANGE=0
XCS_ERROR_COUNT=0
XCS_INTEGRATION_ID=76eb5292bd7eff1bfe4160670c2d4576
XCS_INTEGRATION_NUMBER=15
XCS_INTEGRATION_RESULT=warnings
XCS_INTEGRATION_TINY_ID=FF39BC2
XCS_OUTPUT_DIR=/Library/Developer/XcodeServer/Integrations/Integration-76eb5292bd7eff1bfe4160670c2d4576
XCS_PRODUCT='Reader.ipa'
XCS_SOURCE_DIR=/Library/Developer/XcodeServer/Integrations/Caches/4f7c7e65532389e2a741d29758466c18/Source
XCS_TESTS_CHANGE=0
XCS_TESTS_COUNT=0
XCS_TEST_FAILURE_CHANGE=0
XCS_TEST_FAILURE_COUNT=0
XCS_WARNING_CHANGE=36
XCS_WARNING_COUNT=36
@Viktor is correct, these variables only exist during their respective sessions. @Pappy gave a great list of those variables.
They can be used in a script like so:
IPA_PATH="${XCS_OUTPUT_DIR}/${XCS_BOT_NAME}.ipa"
echo $IPA_PATH
I'm not familiar with Xcode Server, but generally, when Unix/CI systems export environment variables, they only export them to the current session.
If you want to set an environment variable persistently, you have to set it in an initializer file like ~/.bash_profile or ~/.bashrc so it always gets set/loaded when a shell session starts (for example, when you log in with Terminal; the exact file depends on what kind of shell you start).
It wouldn't make much sense to export these persistently either, because then different integrations would simply overwrite each other's exported environment variables (they would set the same environment variables).
That's why systems which communicate through environment variables usually don't write the variables into a persistent initializer file; they just export them. With export, the variable is accessible from the process which exports it and from the child processes that process starts.
For example, in a bash script, if you export a variable you can access it from the script after the export and from any command/program you start from that script, but once the bash script finishes, the environment is no longer accessible.
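A quick way to see this for yourself (the variable name is made up for the example):

#!/bin/bash
export DEMO_VAR="hello"
bash -c 'echo "child sees: $DEMO_VAR"'   # child processes inherit exported variables
# once this script exits, DEMO_VAR is gone; a separate Terminal session will not see it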
edit
Just to clarify it a bit: You should be able to access these environment variables from a post trigger script, run by Xcode Server but you most likely won't be able to access these from your Terminal/command line.
Also where can I find the list of variables created by this CI system?
You can print all the available environment variables with the env command. In a bash script simply type env in a new line like this:
#!/bin/bash
env
This will print all the available environment variables (not just the ones defined by Xcode Server!). You can simply redirect the output to a file for inspection if you want to, like this:
#!/bin/bash
env > $HOME/envinspect.txt
After this script runs you can simply open the envinspect.txt file in the user's home folder.
