I am trying to create a table in DynamoDB using the CLI.
I am using the command below:
aws dynamodb create-table \ --table-name my_table \--attribute-definitions 'AttributeName=Username, AttributeType=S' 'AttributeName=Timestamp, AttributeType=S' \--key-schema 'AttributeName=Username, KeyType=HASH' 'AttributeName=Timestamp, KeyType=RANGE' \--provisioned-throughput 'ReadCapacityUnits=5, WriteCapacityUnits=5' \--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \--region us-east-1
On running the above, I am getting the error below:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: the following arguments are required: --attribute-definitions, --key-schema
I am new to AWS. In my command I am declaring the attributes and the key schema, so what is causing the error?
The backslashes in the command you typed tell the shell, when there is a line break, that the command continues on the next line.
Based on the screenshot and the command you typed, you are trying to execute it on a single line.
As a solution, you could remove the backslashes from your command, or copy the original command (the one from the tutorial) as it is, including the line breaks.
Without line breaks:
aws dynamodb create-table --table-name my_table --attribute-definitions 'AttributeName=Username, AttributeType=S' 'AttributeName=Timestamp, AttributeType=S' --key-schema 'AttributeName=Username, KeyType=HASH' 'AttributeName=Timestamp, KeyType=RANGE' --provisioned-throughput 'ReadCapacityUnits=5, WriteCapacityUnits=5' --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES --region us-east-1
With line breaks:
aws dynamodb create-table \
--table-name my_table \
--attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \
--key-schema AttributeName=Username,KeyType=HASH AttributeName=Timestamp,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
--region us-east-1
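Once the corrected command goes through, a quick sanity check (a hypothetical follow-up, assuming the same table name and region) would be:
aws dynamodb describe-table --table-name my_table --region us-east-1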
I would try using a JSON file for both the key schema and the attribute definitions. See https://docs.aws.amazon.com/cli/latest/reference/dynamodb/create-table.html for the JSON syntax and examples. You shouldn't need any arguments other than the table name to get your table running.
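For example, a minimal sketch of that approach, assuming hypothetical file names attribute-definitions.json and key-schema.json in the directory you run the command from:
attribute-definitions.json:
[
  {"AttributeName": "Username", "AttributeType": "S"},
  {"AttributeName": "Timestamp", "AttributeType": "S"}
]
key-schema.json:
[
  {"AttributeName": "Username", "KeyType": "HASH"},
  {"AttributeName": "Timestamp", "KeyType": "RANGE"}
]
aws dynamodb create-table \
    --table-name my_table \
    --attribute-definitions file://attribute-definitions.json \
    --key-schema file://key-schema.json \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region us-east-1
The file:// prefix tells the CLI to read the parameter value from the file, which sidesteps the shorthand quoting and line-continuation problems entirely.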
I have been trying to get the status of ADF pipelines using an az CLI script.
I am using
az datafactory pipeline-run query-by-factory --factory-name "adfname" --filters operand="Status" operator="Equals" values="Failed" --last-updated-after "2023-01-17T00:00:00.3345758Z" --last-updated-before "2023-01-17T11:59:59.3686473Z" --resource-group "rgname"
command, and I am getting the full JSON of the pipeline runs, but I only want the name and status of each pipeline. I have tried using the --query flag (JMESPath), like --query "pipelineName" and --query "status". The command succeeds, but I am not getting any results.
Please help me with this issue if anyone has knowledge about it.
I am expecting a result like pipelineName -- status,
e.g.,
pl_databricks -- Failed
pl_databricks_mq -- Succeeded.
If possible, the date and time also:
pl_databricks -- Failed -- 23/12/22 10:29:27
pl_databricks_mq -- Succeeded -- 23/12/22 08:20:50
I have reproduced this in my environment and got the output as below:
# Collect the results
$Target = @()
# Query pipeline runs with the given status in the time window
$x = az datafactory pipeline-run query-by-factory --factory-name "adfname" --filters operand="Status" operator="Equals" values="Succeeded" --last-updated-after "2023-01-15T00:00:00.3345758Z" --last-updated-before "2023-06-16T00:36:44.3345758Z" --resource-group "rgname"
# Parse the JSON response and output the pipeline names and statuses
$r = $x | ConvertFrom-Json
$Target = $r.value.pipelineName + " " + $r.value.status
$Target
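If you would rather skip the JSON parsing, the az CLI's --query flag takes a JMESPath expression (not jQuery), so something along these lines should project just the fields you are after. A sketch, assuming the response fields pipelineName and status (which appear in the script above) plus lastUpdated, which is an assumption:
az datafactory pipeline-run query-by-factory --factory-name "adfname" --resource-group "rgname" --filters operand="Status" operator="Equals" values="Failed" --last-updated-after "2023-01-17T00:00:00Z" --last-updated-before "2023-01-17T11:59:59Z" --query "value[].[pipelineName, status, lastUpdated]" --output tsv
The earlier --query "pipelineName" returned nothing because the runs live under the top-level value array, so the expression has to start from value[].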
I want to use the value of the DOMAIN_ID variable to filter the EFS file systems and get a FileSystemId. I used the commands below. The first command works and stores the domain ID. The second one returns an empty list, even though the DOMAIN_ID variable is set.
DOMAIN_ID=$(aws sagemaker list-domains --query 'Domains[0].DomainId')
aws efs describe-file-systems --query 'FileSystems[?CreationToken==`$DOMAIN_ID`].FileSystemId'
Output:
[]
Expected output:
<Some EFS identifier>
This works (using double quotes so the shell expands $DOMAIN_ID, and escaping the backticks so they reach JMESPath instead of being treated as command substitution) -
aws efs describe-file-systems --query "FileSystems[?CreationToken==\`$DOMAIN_ID\`].FileSystemId"
You can also use the describe-domain command instead -
$ DOMAIN_ID=$(aws sagemaker list-domains --query 'Domains[0].DomainId' | tr -d '"')
$ aws sagemaker describe-domain --domain-id $DOMAIN_ID --query 'HomeEfsFileSystemId'
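As a side note, adding --output text to the first command should return the DomainId without the surrounding quotes, so the tr -d '"' step can presumably be dropped:
$ DOMAIN_ID=$(aws sagemaker list-domains --query 'Domains[0].DomainId' --output text)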
I have two AWS CLI commands. Command 1 produces an output. Command 2 should take that output and use it as an input parameter. In more detail:
Command 1 : aws datasync list-tasks | jq '.Tasks[0].TaskArn'
Returns: arn:aws:datasync:us-west-2:123456789102:task/task-1234567890abc01 as output
Command 2: aws datasync start-task-execution --task-arn <output_from_command1>
When I try command 2:
aws datasync start-task-execution --task-arn "$(aws datasync list-tasks | jq '.Tasks[0].TaskArn')"
I get:
An error occurred (InvalidRequestException) when calling the StartTaskExecution operation: Invalid parameter: DataSync ARN ("arn:aws:datasync:us-west-2:123456789102:task/task-1234567890abc01") must match datasync resource regex.
BUT when I try:
aws datasync start-task-execution --task-arn "arn:aws:datasync:us-west-2:123456789102:task/task-1234567890abc01" - it works fine.
I suspect it's a bash string/quoting issue.
How do I deal with this Invalid parameter issue?
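For what it's worth, the quotes wrapped around the ARN inside the error message suggest the problem is jq's output rather than bash: without -r, jq prints the JSON string including its quotes. A minimal sketch of that fix, assuming the same task selection:
aws datasync start-task-execution --task-arn "$(aws datasync list-tasks | jq -r '.Tasks[0].TaskArn')"
Alternatively, the AWS CLI's own --query 'Tasks[0].TaskArn' with --output text would give the unquoted value without jq.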
I am using the AWS CLI command below to modify the UI.
aws cognito-idp set-ui-customization --user-pool-id us-west-2_XXXXXXX --client-id ALL --css ".submitButton-customizable{background-color: #0091e1;} " --region us-west-2 --image-file Logo.png
But it is giving me an error that my PNG file is not valid.
I have looked at the documentation and found that --image-file should be a Base64-encoded binary data object.
I am using a Linux instance (Ubuntu) and running this command from the terminal.
How can I correct this?
I had the same problem; try using the fileb://./Logo.png syntax with the --image-file flag (the fileb:// prefix tells the CLI to read the file as raw binary rather than as text), e.g.
aws cognito-idp set-ui-customization --user-pool-id us-west-2_XXXXXXX --client-id ALL --css ".submitButton-customizable{background-color: #0091e1;} " --region us-west-2 --image-file fileb://./Logo.png
I'm running through Amazon's example of running Elastic MapReduce and keep getting hit with the following error:
Error launching job , Output path already exists.
Here is the command to run the job that I am using:
C:\ruby\elastic-mapreduce-cli>ruby elastic-mapreduce --create --stream \
--mapper s3://elasticmapreduce/samples/wordcount/wordSplitter.py \
--input s3://elasticmapreduce/samples/wordcount/input \
--output [A path to a bucket you own on Amazon S3, such as, s3n://myawsbucket] \
--reducer aggregate
The example comes from here.
I'm following Amazon's directions for the output directory. The bucket name is s3n://mp.maptester321mark/. I've looked through all their suggestions for problems on this url.
Here is my credentials.json info:
{
"access_id": "1234123412",
"private_key": "1234123412",
"keypair": "markkeypair",
"key-pair-file": "C:/Ruby/elastic-mapreduce-cli/markkeypair",
"log_uri": "s3n://mp-mapreduce/",
"region": "us-west-2"
}
Hadoop jobs won't clobber directories that already exist. You just need to run:
hadoop fs -rmr <output_dir>
before your job, or just use the AWS console to remove the directory.
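For example, assuming a hypothetical output path of s3n://mp.maptester321mark/output, the cleanup would look something like:
hadoop fs -rmr s3n://mp.maptester321mark/output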
Use:
--output s3n://mp.maptester321mark/output
instead of:
--output s3n://mp.maptester321mark/
I suppose EMR creates the output bucket before running, which means you'll already have your output directory / if you specify --output s3n://mp.maptester321mark/, and that might be the reason why you get this error.
---> If the folder (bucket) already exists, then remove it.
---> If you delete it and still get the above error, make sure your output looks like s3n://some_bucket_name/your_output_bucket rather than s3n://your_output_bucket/.
It's an issue with EMR: I think it first creates the bucket on the path (some_bucket_name) and then tries to create your_output_bucket.
Thanks
Hari