AccuRev CLI list snapshots - snapshot

Is there any way to retrieve a list of all snapshots in a Depot/Stream using the AccuRev CLI?

You will need to write a small script that parses the snapshot streams out of the output.
Look at the accurev "show" command; it lists all the streams in your depot:
accurev show -p <depot> streams
Use the -fx option to get XML output, which you can then filter down to just the snapshot streams.
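For example, a minimal sketch, assuming a depot named MyDepot and that snapshot streams are marked with type="snapshot" in the XML output (worth verifying against your own output):
accurev show -fx -p MyDepot streams | grep 'type="snapshot"'
A proper XML parser (xmllint, a small Python script, etc.) would be more robust than grep, but this is enough to see which streams are snapshots.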

Related

I need to deploy all the files in a diff file (txt) to an org Salesforce

I have to deploy from a git diff to a Salesforce org.
I have all the file names written in a txt file and need to bring them all into the Salesforce org.
The files are on my local machine and, as I said, need to be deployed to the org.
I tried doing it with sfdx, listing them all, but it gives me
"C:\Program" is not a recognized command
I tried adding a " at the start and at the end and separating each file with a "," but it still doesn't work.
I know I can do it from an xml file, but I have the diff in a txt.
The error sounds like your sfdx isn't installed correctly and you may have to reinstall, or maybe there were newlines in your command that broke it.
You need to read up on the force:source:deploy command and its -p parameter.
This is a decent example of what you can do. It's a bit boring and repetitive, but it deploys exactly these files and nothing more, not whole folders:
sfdx force:source:deploy -u prod -p "force-app/main/default/objects/MyObject__c/fields/Description__c.field-meta.xml,force-app/main/default/objects/MyObject__c/fields/Amount__c.field-meta.xml,force-app/main/default/objects/MyObject__c/fields/Quantity__c.field-meta.xml,force-app/main/default/classes/MyObjectTriggerHandler.cls" -l RunSpecifiedTests -r "SomeTestClass" --verbose --loglevel fatal
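If the diff is already one path per line in that txt file, you could build the comma-separated -p value instead of typing it out by hand; a rough sketch, assuming a bash-style shell (Git Bash works on Windows) and a hypothetical file name changed-files.txt:
sfdx force:source:deploy -u prod -p "$(paste -sd, changed-files.txt)" -l RunSpecifiedTests -r "SomeTestClass" --verbose --loglevel fatal
Here paste -sd, simply joins the lines of the file with commas; the sfdx flags are the same ones as in the example above.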
There are also some useful sfdx plugins that will generate the xml file (package.xml) for you based on the difference between two commits. Search the list at https://github.com/mshanemc/awesome-sfdx-plugins

How to download multiple files in multiple sub-directories using curl?

I am downloading multiple files using curl. The base URL for all the files is the same:
https://mydata.gov/daily/2017
The data in these directories is further grouped by date and file type. So the first file I need is at
https://mydata.gov/daily/2017/001/17d/Roger001.gz
the second at
https://mydata.gov/daily/2017/002/17d/Roger002.gz
and I need to download everything up to the data for the last day of 2017, which is
https://mydata.gov/daily/2017/365/17d/Roger365.gz
How can I use curl or any other similar tool to download all the files to a single local folder, preferably keeping the original file names?
In a bash terminal, use:
for f in {001..365}; do curl "https://mydata.gov/daily/2017/${f}/17d/Roger${f}.gz" -o "/your-directory/Roger${f}.gz"; done
Replace your-directory with the directory where you want to save the files.
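If the downloads are slow, a parallel variant of the same loop is possible; a sketch assuming GNU seq and xargs are available (the URL pattern is the one from the question):
seq -w 1 365 | xargs -P 4 -I{} curl --fail -o "/your-directory/Roger{}.gz" "https://mydata.gov/daily/2017/{}/17d/Roger{}.gz"
seq -w produces the zero-padded day numbers (001..365), xargs -P 4 runs four curl processes at a time, and --fail stops curl from saving an HTML error page if a day is missing on the server.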

Generate summary from robot framework log file

We have been using Jenkins for installation in our product for many years.
Now I want to enhance it: we run some basic tests for each component, and each component has its own Robot Framework log file placed in a folder.
I want to create a batch file using Windows cmd that parses all the log files in that folder and gives me the test cases which failed.
Any idea how it can be done?
Thanks
I think what you need is the rebot tool of Robot Framework.
Consider that you have two test suites which are executed separately:
valid_login.txt
invalid_login.txt
Each test suite run creates three log files in Robot Framework:
report.html
log.html
output.xml
First, you need to give all of these logs unique names.
This can be done using:
-r for renaming report.html
-l for renaming log.html
-o for renaming output.xml
Example:
pybot -d "F:\robot framework\WebDemo\Logs" -o "invalid_login.xml" -l "invalid_login.html" -r "invalid_login_report.html" invalid_login.robot
This way, after execution, you will have two sets of reports.
Since you want to merge the reports into one combined report, we will first create only the XML log for each test suite:
pybot -d "F:\robot framework\WebDemo\Logs" -o "invalid_login.xml" -l None -r None invalid_login.robot
The example above creates only the XML log file, in the directory given with the -d option.
Once you have executed both test suites, you will have two XML log files:
invalid_login.xml
valid_login.xml
Now you can merge both XML files and create a combined report using the command below:
F:\robot framework\WebDemo>rebot Logs\*.xml
Log: F:\robot framework\WebDemo\log.html
Report: F:\robot framework\WebDemo\report.html
rebot takes all the XML files in the Logs directory and creates a combined log and report file.
See the Robot Framework user guide for more details on rebot.
Once you have combined all the reports, you can easily filter the failed test cases in report.html.
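Wrapped up as the batch file you asked about, it could look roughly like this; the paths are placeholders and it assumes each component has already written its uniquely named output XML into C:\robot\logs:
rebot --outputdir "C:\robot\logs" --log combined_log.html --report combined_report.html "C:\robot\logs\*.xml"
if %ERRORLEVEL% GTR 0 echo %ERRORLEVEL% failed test cases - see combined_report.html
rebot's return code is by default the number of failed test cases, so the second line gives a quick summary on the console while the combined report.html lets you drill into the individual failures.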

Upload to S3 via shell script without aws-cli, possible?

As the title says, is it possible to upload to S3 via shell script without aws-cli-tools?
If so, how?
What I'm trying to do is read from a txt file on S3 (which is public, so no authentication is required).
But I want to be able to overwrite whatever is in the file (which is just a number).
Thanks in advance,
Fadi
Yes you can! You basically emulate the API calls the SDK would make for you, using standard Linux command-line utilities.
Look at:
https://aws.amazon.com/code/Amazon-S3/943
and/or
http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash
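The approach in those links boils down to signing the request yourself with openssl and sending it with curl. A rough sketch of the idea, using AWS Signature Version 2 (older regions accept it; newer regions require the more involved Signature Version 4); the bucket name, object key, and credentials below are placeholders:
# reading the public file needs no signing at all:
curl -O "https://my-bucket.s3.amazonaws.com/number.txt"
# overwriting it requires credentials and a signed PUT:
file="number.txt"
bucket="my-bucket"
contentType="text/plain"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n/${bucket}/${file}"
s3Key="AKIAIOSFODNN7EXAMPLE"
s3Secret="your-secret-key"
signature=$(printf "%b" "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -T "${file}" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  "https://${bucket}.s3.amazonaws.com/${file}"
Note that the object has to stay publicly readable for the unsigned GET to keep working, and that the write itself always needs credentials of some kind.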
I use s3cmd, a command-line tool written in Python that talks to the (RESTful) S3 web API.
The interesting bits for you would be s3cmd put --recursive and s3cmd sync.
To synchronize a directory tree to S3:
s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
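For the single small text file in the question, the non-recursive form is enough; something like this (bucket and key are placeholders):
s3cmd put number.txt s3://my-bucket/number.txt
s3cmd needs your access key and secret once up front (s3cmd --configure), since writing to the bucket is never anonymous even when reading is.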

How to rename output file(s) of Hive on EMR?

The output of Hive on EMR is a file named 000000_0 (perhaps a different number if there is more than 1 reducer).
How do I get this file to be named differently? I see two options:
1) Get Hive to write it differently
2) Rename the file(s) in S3 after they are written. This could be a problem: from what I've read, S3 doesn't really have a "rename"; you have to copy the object and delete the original. When dealing with a file that is 1 TB in size, for example, couldn't this cause performance problems or increase usage cost?
The AWS Command Line Interface (CLI) has a convenient mv command that you could add to a script:
aws s3 mv s3://my-bucket/000000_0 s3://my-bucket/data1
Or, you could do it programmatically via the Amazon S3 CopyObject API call.
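If there is more than one reducer you will have several numbered files; a rough loop over them could look like this (bucket and prefix are placeholders, and aws s3 ls prints the object key in the fourth column):
for f in $(aws s3 ls s3://my-bucket/hive-output/ | awk '{print $4}'); do
  aws s3 mv "s3://my-bucket/hive-output/$f" "s3://my-bucket/hive-output/data_$f"
done
Keep in mind that mv is still a copy-plus-delete under the hood, so for very large outputs the copy cost the question worries about applies here too.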
