How to use --directories-to-pull to save files created by app in Firebase Test Lab

I'm using Firebase Test Lab to run integration tests of my app written in Flutter. I would like to pull all files created by the app and save them in the results bucket. The --directories-to-pull argument seems to be what I need: https://cloud.google.com/sdk/gcloud/reference/firebase/test/android/run#--directories-to-pull
I'm using the following code to run my tests:
gcloud firebase test android run \
--type instrumentation \
--app build/app/outputs/apk/debug/app-debug.apk \
--test build/app/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--timeout 5m \
--directories-to-pull /sdcard,/data/local/tmp
However, when using --directories-to-pull /sdcard,/data/local/tmp, some files are missing from the results bucket. In particular, the logcat file is missing, which is really important for debugging.
Without the --directories-to-pull argument, the bucket contains the full set of result files (including logcat).
With --directories-to-pull /sdcard,/data/local/tmp, only a subset of those files appears.
Questions:
How do I pull all data created by the app and save it in the results bucket?
Am I using the --directories-to-pull argument correctly?
Why are there fewer files in my results bucket when using the --directories-to-pull argument?
The documentation states that one can use --directories-to-pull /storage. However, this results in an error saying that the path /storage is not known. Why is that?
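For completeness, here is the shape of the command I would expect based on the docs. This is only a sketch: /sdcard/test_output is a hypothetical directory my app could write to, and --results-bucket is the documented flag for choosing the destination bucket.
gcloud firebase test android run \
--type instrumentation \
--app build/app/outputs/apk/debug/app-debug.apk \
--test build/app/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--timeout 5m \
--results-bucket my-test-results-bucket \
--directories-to-pull /sdcard/test_output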

Related

Getting "too many arguments in call to spdy.NewRoundTripperWithProxy" error when I try to run my Terratest Go code for validating an EKS cluster on AWS

Getting a "too many arguments in call to spdy.NewRoundTripperWithProxy" error when I try to run my Terratest Go code, which deploys, validates, and un-deploys a k8s pod to/from AWS EKS.
My scripts were working perfectly fine 3 months back, but it looks like some library change happened on the k8s side in between which is affecting my scripts.
The most problematic part is that I am unable to find out at which line number my script is failing.
go mod init and go mod tidy work fine, but as soon as I run the go test command I get the error shown in the attached screenshot.
My code is present in Dropbox
I got a resolution from the Gruntwork community which helped me fix this issue.
We need to run the go get -u k8s.io/apimachinery@v0.20.6 command, which pins the required library to the relevant version.
The steps below are the resolution for the problem:
go mod init
go get -u k8s.io/apimachinery@v0.20.6
go mod tidy -compat=1.17
go test -v -timeout 120m | tee test_output.log
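To double-check that the pin took effect, go list can print the version the module graph actually resolved (a quick sanity check, not part of the original fix):
go list -m k8s.io/apimachinery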

AWS-CLI Upload only one file in lambda (update-function-code)

I am using the AWS CLI to upload my files into my Lambda (because I want to ship my own boto3).
The problem is that I have to upload the whole project (my files + boto3) into my Lambda.
I have to wait ~5 min each time (my connection is kinda bad).
The question is: can I upload only the files that I want (as with git)?
Currently I use this command:
zip -r function.zip . && aws lambda update-function-code --function-name MYFUNC --zip-file fileb://function.zip && rm function.zip
Thanks
Create a Lambda layer for your common files and attach it to your Lambda.
That way you can directly upload only the frequently changing files.
Lambda layers
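A minimal sketch of that approach, assuming a Python runtime; the layer name my-deps, the handler file lambda_function.py, and the ARN placeholders are all hypothetical:
# Package the rarely changing dependencies (e.g. your own boto3) as a layer;
# for Python runtimes the libraries must sit under a python/ directory.
zip -r layer.zip python/
aws lambda publish-layer-version --layer-name my-deps --zip-file fileb://layer.zip
# Attach the layer once, using the LayerVersionArn from the previous output.
aws lambda update-function-configuration --function-name MYFUNC \
--layers arn:aws:lambda:REGION:ACCOUNT_ID:layer:my-deps:1
# From then on, zip and upload only your own (small) source files.
zip function.zip lambda_function.py
aws lambda update-function-code --function-name MYFUNC --zip-file fileb://function.zip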

How to avoid AWS SAM rebuild and reupload a gradle function with unchanged code?

I'm developing an application with Micronaut, using SAM CLI to deploy it on AWS Lambda. As I was including dependencies and developing new features, the function packages got bigger and bigger (they are now around 250 MB). This makes deployment take a while.
On top of that, every time I edit template.yaml and then run sam build && sam deploy to try a new configuration on S3, RDS, etc., I have to wait for Gradle to build the function again (even though it's unchanged since the last deployment) and upload the whole package to S3.
As I'm trying to configure this application with many trials and errors on SAM, waiting for this process to complete just to get an error because of some misconfiguration is getting quite counterproductive.
Also, my SAM S3 bucket is at 10 GB after just a single day of work. This may get expensive in the long run.
Is there a way to avoid those Gradle rebuilds and re-uploads when the function code is unchanged?
If you are only updating the template.yml file, you can copy the new version to the ./.aws-sam/build folder and then run sam deploy:
$ cp template.yml ./.aws-sam/build/template.yml
$ sam deploy
If you are editing a lambda you could try to update the function code by itself (after you create it in the template and deploy of course). That can be done via the AWS CLI update-function-code command:
rm index.zip
cd lambda
zip -X -r ../index.zip *
cd ..
aws lambda update-function-code --function-name MyLambdaFunction --zip-file fileb://index.zip
More info can be found here:
Alexa Blogs - Publishing Your Skill Code to Lambda via the Command Line Interface
AWS CLI Command Reference - lambda - update-function-code
my SAM S3 bucket is at 10 GB
Heh. Yeah, start deleting stuff. Maybe you can write a cleanup script using aws s3?
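If deleting by hand gets old, a lifecycle rule can expire old deployment artifacts automatically. A sketch, assuming a hypothetical bucket name and a 7-day retention:
aws s3api put-bucket-lifecycle-configuration \
--bucket my-sam-deploy-bucket \
--lifecycle-configuration '{
  "Rules": [{
    "ID": "expire-old-sam-artifacts",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Expiration": {"Days": 7}
  }]
}'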

How do I use a CloudFormation output value in a script with Ansible

I'm trying to set up some automation for a school project. The gist of it is:
Install an EC2 instance via CloudFormation. Then
Use cfn-init to
Install a very basic Ansible configuration
Download an Ansible playbook from S3
Run said playbook to install a Redshift cluster via CloudFormation
Install some necessary packages
Install some necessary Python modules
Download a Python script that will
Connect to the Redshift database
Create a table
Use the COPY command to import data into the table
It all works up to the point of executing the script. Doing so manually works a treat, but that is because I can copy the created Redshift endpoint into the script for the database connection. The issue I have is that I don't know how to extract that output value from CloudFormation so it can be inserted into the script for a fully automated (save the initial EC2 deployment) solution.
I see that Ansible has at least one means of doing so (cloudformation_facts, for instance), but I'm a bit foggy on how to implement it. I've looked at examples but it hasn't become any clearer. Without context I'm lost and so far all I've seen are standalone snippets.
In order to ensure an answer is listed:
I figured out the describe-stacks and describe-stack-resources subcommands of the aws cloudformation CLI command. Using these, I was able to track down the information I needed. In particular, I needed to access a role. This is the command that I used:
aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
--query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
--output text
I first used the describe-stacks subcommand to get a list of my stacks. The relevant stack is the first in the list (an array), so I used Stacks[0] at the top of my query. I then used Outputs, since I am interested in a value from the CloudFormation output list. I know the name of the key (RedshiftClusterEndpointAddress), so I used that to filter, and finally OutputValue to return the value of RedshiftClusterEndpointAddress.
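To feed that value into the Python script, one option is to capture it in a shell variable at deploy time. A sketch (the script name load_data.py and its --host flag are hypothetical stand-ins for the actual script):
ENDPOINT=$(aws cloudformation describe-stacks --stack-name=StackName --region=us-west-2 \
--query 'Stacks[0].Outputs[?OutputKey==`RedshiftClusterEndpointAddress`].OutputValue' \
--output text)
python load_data.py --host "$ENDPOINT"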

Parse Cloud Code Deploy

I am currently setting up Parse Cloud Code and I have gotten to the final step, which is to deploy the main.js file, but when I do this it just opens the file in Adobe Dreamweaver.
I was having problems understanding how to deploy the main.js file.
Read the following steps: https://www.parse.com/apps/quickstart#cloud_code/unix
It contains:
Get the Parse tool
Download the command line tool by running this command:
curl -s https://www.parse.com/downloads/cloud_code/installer.sh | sudo /bin/bash
This installs a tool named parse to /usr/local/bin/parse. That is the only thing that's added, so to uninstall, just delete that file. Running this command will also update your existing Parse command line tool if you already have it installed.
Set up a Cloud Code directory
Open a new terminal window, run the command parse new and follow the instructions.
$ parse new
Please login to Parse using your email and password.
Email: ninja@gmail.com
Password (will be hidden):
Would you like to create a new app, or add Cloud Code to an existing app?
Type "new" or "existing": existing
1:MyApp
2:MyOtherApp
Select an App to add to config: 1
Awesome! Now it's time to setup some Cloud Code for the app: "MyApp",
Next we will create a directory to hold your Cloud Code.
Please enter the name to use for this directory, or hit ENTER to use
"MyApp" as the directory name.
Directory Name: MyCloudCode
Your Cloud Code has been created at ${CLOUD_CODE_DIR}.
Next, you might want to deploy this code with "parse deploy".
This includes a "Hello world" cloud function, so once you deploy
you can test that it works, with:
curl -X POST \
-H "X-Parse-Application-Id: ${APP_ID}" \
-H "X-Parse-REST-API-Key: ${REST_API_KEY}" \
-H "Content-Type: application/json" \
-d '{}' \
https://api.parse.com/1/functions/hello
$ cd MyCloudCode
Go to the directory where your main.js file exists and use the command parse deploy to deploy your code to the Parse cloud. Refer to this link if you are using a Linux system: https://www.parse.com/apps/quickstart#cloud_code/unix, and use this link if you are using a Windows system: https://parse.com/apps/quickstart#cloud_code/windows.
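Concretely, with the directory name from the transcript above, the deploy step boils down to:
cd MyCloudCode
parse deploy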
Hope it will help.
As far as I understand from your question, you want to deploy the main.js file to the Parse cloud. The problem you face, the file opening in Adobe Dreamweaver, is related to your computer's configuration, where .js files are set to open with Adobe Dreamweaver.
The answer to your question is: as detailed in the Parse Cloud tutorial, you first have to install the Parse command line tool. This tool enables you to manage your code in the Parse cloud. Then you can use "parse new" to set up a cloud directory, where you replace the generated main.js file with your own. Following that, "parse deploy" will deploy your js file to the Parse cloud. You can find detailed information in the Parse Cloud tutorials. Hope this helps.
Regards.
