I have pulled this from a dumpsys of com.google.android.wearable.app
Service Resolver Table:
  Non-Data Actions:
      com.google.android.clockwork.home.action.BIND_HOME:
        adc4ff78 com.google.android.wearable.app/com.google.android.clockwork.home.watchfaces.HomeBackgroundService filter adc79b80
          Action: "com.google.android.clockwork.home.action.BIND_HOME"
      com.google.android.clockwork.action.TUTORIAL_FORCE:
        adc9acc8 com.google.android.wearable.app/com.google.android.clockwork.home.tutorial.TutorialService filter adc9adf8
          Action: "com.google.android.clockwork.action.TUTORIAL_START"
          Action: "com.google.android.clockwork.action.TUTORIAL_FORCE"
          Action: "com.google.android.clockwork.action.TUTORIAL_NEXT_STAGE"
          Action: "com.google.android.clockwork.action.TUTORIAL_SKIP"
          Action: "com.google.android.clockwork.action.TUTORIAL_NOTIFICATION_DISMISSED"
          Action: "com.google.android.clockwork.action.TUTORIAL_DONE"
      com.google.android.clockwork.action.TUTORIAL_START:
        adc9acc8
I've tried to run:
am start -a com.google.android.clockwork.action.TUTORIAL_START -n com.google.android.wearable.app/com.google.android.clockwork.home.tutorial.TutorialService
However, I have not been able to restart the initial tutorial. Any advice would be helpful.
The answer was to use am startservice (TutorialService is a service, not an activity, so am start cannot launch it) with the TUTORIAL_FORCE action:
am startservice -a com.google.android.clockwork.action.TUTORIAL_FORCE -n com.google.android.wearable.app/com.google.android.clockwork.home.tutorial.TutorialService
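If you're driving the watch from a connected host rather than from a shell on the device itself, the same command can be sent through adb (assuming the watch is visible to adb):
adb shell am startservice -a com.google.android.clockwork.action.TUTORIAL_FORCE -n com.google.android.wearable.app/com.google.android.clockwork.home.tutorial.TutorialService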
I am trying to pass a JWT token between jobs, but something prevents it from being passed correctly. According to the docs, if I want to pass variables between jobs I need to use outputs, as explained here. What I am doing is the following:
name: CI
on:
  pull_request:
    branches:
      - main
jobs:
  get-service-url:
    ...does something not interesting to us...
  get-auth-token:
    runs-on: ubuntu-latest
    outputs:
      API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
    steps:
      - name: Get Token
        id: getauthtoken
        run: |
          API_TOKEN=<there is a full JWT token here>
          echo -n "API_TOKEN=$API_TOKEN" >> $GITHUB_OUTPUT
  use-token:
    runs-on: ubuntu-latest
    needs: [get-service-url, get-auth-token]
    name: Run Tests
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: |
          newman run ${{ github.workspace }}/tests/collections/my_collection.json --env-var "service_url=${{ needs.get-service-url.outputs.service_URL }}" --env-var "auth_token=${{ needs.get-auth-token.outputs.API_TOKEN }}"
So, during a run, in my output I see:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token="
At first I thought there was something wrong with passing the token itself between jobs, so I tried putting in a dummy token and exporting it in the output. In my get-auth-token job, the output call became:
echo -n "API_TOKEN=test" >> $GITHUB_OUTPUT
and in the log I saw it there:
--env-var "auth_token=test"
so the way I am passing it between jobs is fine. Moreover, the token is there and is correct, because I hardcoded one to simplify my tests. Indeed, if I echo $API_TOKEN in my get-auth-token job, I see *** in the logs, which tells me GitHub is correctly masking it.
I then tried not passing it between jobs: I created the same token, hardcoded, right before the newman run command and referenced it in newman run directly, and tada! The log now is:
Run newman run /home/runner/work/my-repo/my-repo/tests/collections/my_collection.json --env-var "service_url=https://test.net" --env-var "auth_token=***"
So the token is there! But I need it to come from another job. Something is preventing the token from being passed between jobs, and I don't know how to work around it.
I found a trick to make this happen. It consists of temporarily "obfuscating" the secret in the eyes of GitHub: GitHub skips job outputs that contain a secret value, so the token is base64-encoded before being exported and decoded again where it is used.
In the job where I retrieve the secret, I encode it and export it to GITHUB_OUTPUT:
API_TOKEN_BASE64=`echo -n <my_secret> | base64 -w 0`
echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
In the job where I need the secret, I decode it (and use it where needed):
API_TOKEN=`echo -n ${{needs.get-auth-token.outputs.API_TOKEN}} | base64 --decode`
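Put together, a minimal sketch of the two jobs (the job and step names are illustrative, and MY_SECRET stands in for wherever the JWT actually comes from) might look like:
jobs:
  get-auth-token:
    runs-on: ubuntu-latest
    outputs:
      API_TOKEN: ${{ steps.getauthtoken.outputs.API_TOKEN }}
    steps:
      - name: Get Token
        id: getauthtoken
        run: |
          # encode first, so GitHub's secret masking does not blank the job output
          API_TOKEN_BASE64=$(echo -n "$MY_SECRET" | base64 -w 0)
          echo -n "API_TOKEN=$API_TOKEN_BASE64" >> $GITHUB_OUTPUT
  use-token:
    runs-on: ubuntu-latest
    needs: [get-auth-token]
    steps:
      - name: Run tests
        run: |
          # decode back to the original token before use
          API_TOKEN=$(echo -n ${{ needs.get-auth-token.outputs.API_TOKEN }} | base64 --decode)
          newman run tests/collections/my_collection.json --env-var "auth_token=$API_TOKEN"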
I'm trying to create my own hook (defined in terraform_plan.sh; see terraform_plan.sh and .pre-commit-config.yaml below) that requires user input to determine whether the hook succeeds or fails (the hook asks the user to confirm something before committing). To enable user input, I added exec < /dev/tty, following How do I prompt the user from within a commit-msg hook?.
The script looks like this (terraform_plan.sh):
#!/bin/sh
location=$(pwd)
echo "location: ${location}"
cd ./tf_build/
# pull the project ID and credentials from gcloud for the plan
project_id=$(gcloud config get-value project)
credentials=$(gcloud secrets versions access latest --secret="application-default-credentials")
echo "PROJECT_ID: ${project_id}"
echo "CREDENTIALS: ${credentials}"
terraform plan -var "PROJECT_ID=${project_id}" -var "APPLICATION_DEFAULT_CREDENTIALS=${credentials}"
# reattach stdin to the terminal so `read` can prompt the user
exec < /dev/tty
read -p "Do you agree with this plan? (Y/N): " answer
echo "answer: ${answer}"
# for testing purposes, exit 0 directly
exit 0
I expect the prompt Do you agree with this plan? (Y/N) to appear before I enter my answer. But actually nothing is shown; the hook just hangs there waiting for input.
(.venv) ➜ ✗ git commit -m "test"
sqlfluff-lint........................................(no files to check)Skipped
sqlfluff-fix.........................................(no files to check)Skipped
black................................................(no files to check)Skipped
isort................................................(no files to check)Skipped
docformatter.........................................(no files to check)Skipped
flake8...............................................(no files to check)Skipped
blackdoc.............................................(no files to check)Skipped
Terraform plan...........................................................
Only after I give the input "Y" do all the output strings defined in this hook (e.g., the echo output and the terraform plan) come out.
Terraform plan...........................................................Y
Passed
- hook id: terraform_plan
- duration: 222.33s
location: [remove due to privacy]
PROJECT_ID: [remove due to privacy]
CREDENTIALS: [remove due to privacy]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_bigquery_table.bar will be created
+ resource "google_bigquery_table" "bar" {
+ creation_time = (known after apply)
+ dataset_id = "default_dataset"
+ deletion_protection = true
...
I also tried read -p "Do you agree with this plan? (Y/N): " answer < /dev/tty, but got the same issue.
Here is the relevant part of my .pre-commit-config.yaml file.
repos:
  - repo: local
    hooks:
      # there are other hooks above, removed for readability
      - id: terraform_plan
        name: Terraform plan
        entry: hooks/terraform_plan.sh
        language: script
        verbose: true
        files: (\.tf|\.tfvars)$
        exclude: \.terraform\/.*$
  - repo: ../pre-commit-terraform
    rev: d7e049d0b72eebcb09b719bb296589a47f4fa806
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
        args: [--args=-no-color]
So far I do not know what the root cause is or how to solve it. I'd appreciate any help or suggestions. Thanks.
pre-commit intentionally does not allow interactivity, so there is no working solution within the framework. You can sidestep the framework and use a legacy shell script (.git/hooks/pre-commit.legacy), but I would not recommend that either:
terraform plan is also kind of a poor choice for a pre-commit hook -- especially if you are blocking the user to confirm it. They are likely to be either surprised by such a hook or fatigued by it, and will ignore the output (mash yes) or ignore the entire process (SKIP / --no-verify).
disclaimer: I wrote pre-commit
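For completeness, the sidestep mentioned above relies on pre-commit's legacy-hook handling: if a plain shell hook already exists at .git/hooks/pre-commit when you run pre-commit install, it is moved to .git/hooks/pre-commit.legacy and still runs ahead of the framework's hooks, outside the framework's output capture. A minimal sketch (not recommended, for the reasons given above), assuming the interactive check lives in hooks/terraform_plan.sh:
#!/bin/sh
# Save as .git/hooks/pre-commit *before* running `pre-commit install`;
# the installer renames this file to .git/hooks/pre-commit.legacy
# and keeps executing it before the framework-managed hooks.
sh hooks/terraform_plan.sh || exit 1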
I am trying to trigger a particular job in CI when either of these 2 conditions is met:
it is triggered by another job in the same pipeline
OR
changes: somefile.txt
My CI is as described below:
job1:
  stage: build
  script:
    - echo "Job1"
    - curl -X POST -F token=2342344444 -F "variables[TRIGGER_JOB]=job1" -F ref=master https://main.gitlab.myconmpanyxyz.com/api/v4/projects/1234/trigger/pipeline
  only:
    changes:
      - job1.md

job2: # this does NOT run, as expected, because TRIGGER_JOB is set to job1
  stage: test
  script:
    - echo "Job2"
  rules:
    - if: $TRIGGER_JOB == "job2"

job3: # this RUNS as expected because of the variable TRIGGER_JOB
  stage: test
  script:
    - echo "Job3"
  rules:
    - if: $TRIGGER_JOB == "job1"

job4: # this also RUNS, but this should not be the expected behavior
  stage: test
  script:
    - echo "job4"
  rules:
    - if: $TRIGGER_JOB == "xyz"
    - changes:
        - job4.md
After job1 finishes, it also needs to trigger job4 and not any other jobs (job2 in this case), so I am using curl to trigger the pipeline with a variable set. If there are better ways of calling a specific job in the same CI, please let me know.
I have already seen this stack-overflow page, but it does not help, because my job needs to be triggered by either of 2 conditions, which gitlab-ci does not allow.
I need job4 to run on either of the 2 conditions: if TRIGGER_JOB == "job1" or if there are any changes in the job4.md file.
Currently job4 runs if changes are made to job4.md, but it also runs whenever job1 triggers the pipeline, which afaik should not be the expected behavior according to the docs. Can anyone please give me some leads on how to create this kind of design?
Your solution was almost correct, but the changes keyword with only or except only works if the pipeline is triggered by a push or a merge_request event. This is recorded in the variable CI_PIPELINE_SOURCE. When you trigger the pipeline by calling the API, CI_PIPELINE_SOURCE contains the value trigger, and therefore only:changes always returns true, which triggers job1 again and ends in an endless loop. You can add a simple except rule to job1 to prevent that:
job1:
  stage: build
  script:
    - echo "Job1"
    - curl -X POST -F token=2342344444 -F "variables[TRIGGER_JOB]=job1" -F ref=master https://main.gitlab.myconmpanyxyz.com/api/v4/projects/1234/trigger/pipeline
  only:
    changes:
      - job1.md
  except:
    variables:
      - $CI_PIPELINE_SOURCE == "trigger"
You can find more information on only/except:changes in the documentation.
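The same idea extends to job4. Entries under rules are OR-ed together, while an if and a changes clause inside a single entry are AND-ed, so a sketch of job4 that runs either when TRIGGER_JOB is "job1" or when job4.md changes in a non-trigger pipeline might look like:
job4:
  stage: test
  script:
    - echo "job4"
  rules:
    # run when job1 triggered the pipeline via the API
    - if: $TRIGGER_JOB == "job1"
    # run on changes to job4.md, but only for non-trigger pipelines,
    # since `changes` always evaluates to true in trigger pipelines
    - if: $CI_PIPELINE_SOURCE != "trigger"
      changes:
        - job4.md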
I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container Registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run', 'affected:apps', '--', '--base=origin/master', '--head=HEAD']
That command returns a result like this:
> project-name@0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do with Google Cloud with that result. With a node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');

function getApps() {
  exec('npm run affected:apps', (err, out) => {
    if (err) {
      console.log(null);
    } else {
      // the affected apps are on the second-to-last line, space-separated
      const lines = out.split('\n');
      const apps = lines[lines.length - 2].split(' ');
      console.log(JSON.stringify(apps));
    }
  });
}

getApps();
That prints an array of the affected apps, and null on error. Still, even with that, I'm not sure what I'd do for the next step in Google Cloud Build. With the results of either the command or that script, ideally I'd be able to run a docker build command like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and do nothing if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx Workspaces? Or, if you've just got Google Cloud Build experience and know what my next step should be, that'd be great.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step can read the file and call any other command, e.g. to build a Docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
One trick might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process for each project:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the node script, I'd suggest writing a file with the affected apps in a Cloud Build custom step. The file will be written to the /workspace directory, which is shared with every later custom step, so a subsequent step can read it and run the docker build command.
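A minimal sketch of such a step (the Node image tag is carried over from the question and --plain from the answer above; the file name is arbitrary):
- name: node:10.15.1
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # --plain prints the affected apps space-separated on one line;
      # convert to one app per line so later steps can `while read` it
      npx nx affected:apps --base=origin/master --head=HEAD --plain | tr ' ' '\n' > /workspace/affected.txt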
I have created an Angular application using Composer and Yeoman, where transactions are happening correctly. Now I want to add users with different operational roles. I have added details in the permissions file and created participants accordingly.
permissions.acl looks like:
rule Govt {
    description: "Allow all participants access to all resources"
    participant: "org.acme.<network-name>.Govt"
    operation: ALL
    resource: "org.acme.<network-name>.*"
    action: ALLOW
}

rule Farmer {
    description: "Allow all participants access to all resources"
    participant: "org.acme.<network-name>.Farmer"
    operation: READ
    resource: "org.acme.<network-name>.*"
    action: ALLOW
}

rule SystemACL {
    description: "System ACL to permit all access"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
Participants 'govt1' and 'farmer1' were successfully added as suggested in https://hyperledger.github.io/composer/managing/participant-add.html
To issue an identity, I run the command:
composer identity issue -p hlfv1 -n ‘<networkname>’ -i admin -s adminpw -u govt1id1 -a "resource:org.acme.cphnetwork.Govt#govt1”
The issue is that the command does not give any output: neither success nor error.
On q1: you can use the --issuer, -x flag on the composer identity issue command to create an identity (associated with a participant) that will also have 'issuer' authority -> https://hyperledger.github.io/composer/reference/composer.identity.issue.html ... On q2: your Playground would need to be connected via a v1 connection profile (and then you would need to connect to that deployed business network in Playground) to the same runtime Fabric where you originally deployed your business network (which your REST APIs are consuming via rest-server).
To address the original question, as per discussion on rocketchat:
Original command:
composer identity issue -p hlfv1 -n ‘<networkname>’ -i admin -s adminpw -u govt1id1 -a "resource:org.acme.cphnetwork.Govt#govt1”
Solution:
Remove the 'single quotes' from the <networkname>
Remove the word 'resource:'
Remove "double quotes" from namespace#participant
So if the business network name is govt-application, the command should look like:
composer identity issue -p hlfv1 -n govt-application -i admin -s adminpw -u govt1id1 -a org.acme.cphnetwork.Govt#govt1
As per the discussion on rocketchat, simply removing the word 'resource:' and retaining the quotes also works.
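That variant (with straight double quotes rather than the smart quotes from the original command) would look like:
composer identity issue -p hlfv1 -n govt-application -i admin -s adminpw -u govt1id1 -a "org.acme.cphnetwork.Govt#govt1"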