I'm currently working on a CloudFormation script that requires a lot of MSK configuration details to run. I am working on a Makefile that runs the command
aws kafka list-clusters
This returns a JSON structure that can be found here. Most of the details I require are in that structure; is there a way to retrieve each of them without having to save the output and then parse through the structure? All of this would be done within the Makefile, so that it can be plugged directly into the CloudFormation deployment and wouldn't require manual input or hardcoded values.
I hope I'm not missing something simple. Thanks!
Right, it seems it was as simple as using --query.
So let's say I wanted to get the ARN of the cluster (there is only one in the region in my case).
I'd call:
CLUSTER_ARN := $(shell aws kafka list-clusters --query 'ClusterInfoList[0].ClusterArn' --output text)
This uses Make's $(shell ...) function to run the AWS CLI, pick the ClusterArn of the first (and, in my case, only) cluster out of the response, and store it in CLUSTER_ARN so it can be used throughout the Makefile. Adding --output text strips the surrounding quotes so the value can be passed straight to CloudFormation.
Please read here to see more about filtering with --query (it uses JMESPath expressions) :)
I have a Spring Boot application where I am trying to place a file inside a folder of an S3 target bucket: target-bucket/targetsystem-folder/file.csv
The targetsystem-folder name will differ for each file and is retrieved from a YML configuration file.
The targetsystem-folder has to be created via code if it does not exist, and the file should be placed under that folder.
As far as I know, there is no folder concept in an S3 bucket and everything is stored as objects.
I have read in some documents that to place the file under a folder, you have to give a key-expression like targetsystem-folder/file.csv with bucket = target-bucket.
But it does not work out. I would like to achieve this using spring-integration-aws without using the aws-sdk directly:
<int-aws:s3-outbound-channel-adapter id="filesS3Mover"
channel="filesS3MoverChannel"
transfer-manager="transferManager"
bucket="${aws.s3.target.bucket}"
key-expression="headers.targetsystem-folder/headers.file_name"
command="UPLOAD">
</int-aws:s3-outbound-channel-adapter>
Can anyone guide me on this issue?
Your problem is that the SpEL in the key-expression is wrong. Just try to start from regular Java code and imagine how you would build such a value. Then you'll figure out that you are missing a concatenation operation in your expression:
key-expression="headers.targetsystem-folder + '/' + headers.file_name"
Also, please, in the future provide more info about the error. In most cases the stack trace is very helpful.
In a project that I was working on before, I just used the Java AWS SDK directly. In my implementation, I did something like this:
private void uploadFileTos3bucket(String fileName, File file) {
    // s3client is an already configured com.amazonaws.services.s3.AmazonS3 instance;
    // note that the key should not start with "/" - "targetsystem-folder/..." is enough.
    s3client.putObject(new PutObjectRequest("target-bucket", "targetsystem-folder/" + fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}
I didn't create any more configuration. It automatically creates the targetsystem-folder prefix inside the bucket (and puts the file inside it) if it doesn't exist; otherwise it just puts the file under the existing prefix.
You can take this answer as a reference for further explanation of the subject:
There are no "sub-directories" in S3. There are buckets and there are
keys within buckets.
You can emulate traditional directories by using prefix searches. For
example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
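As an illustration with the AWS SDK for Java v1 (the same SDK style as the upload snippet above), a prefix search that treats foo/ as a "directory" could look roughly like this; the bucket name is only an example:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class PrefixSearchExample {
    public static void main(String[] args) {
        AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();
        // The prefix plays the role of the "directory"; S3 itself only sees flat keys.
        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("target-bucket")
                .withPrefix("foo/");
        for (S3ObjectSummary summary : s3client.listObjectsV2(request).getObjectSummaries()) {
            System.out.println(summary.getKey());   // foo/bar1, foo/bar2, foo/bar3
        }
    }
}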
I have an issue. I built myTalendJob and I am running myShell successfully by adding a context variable. The command I use is:
./mainJob_run.sh --context_param myVar="/myDirectory/file.txt"
Is it possible to simply run ./mainJob_run.sh and pass --context_param myVar="/myDirectory/file.txt" dynamically, avoiding having to rewrite it every time?
Thank you in advance!
I am not sure I understand your question, but this is my attempt to answer.
Either:
When exporting your job, override the context "myVar" with this given value
Write a caller script that calls mainJob_run.sh, appending this additional parameter. I prefer this one as it gives more flexibility.
Implicit Context Load
You can read your context params from a file.
With this, you don't need to pass the context params through the shell command; instead the job reads them from a file while it is executing. Ideally, you should put this in your tPrejob.
After reading the values, you can also pass the context params through a tJavaRow for further processing. This way you can format your context params, or generate new context params based on the input values.
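As a rough illustration (not taken from the question), the body of such a tJavaRow might look like the snippet below; it assumes the incoming flow has the usual key/value columns expected by tContextLoad and that a String context variable named myVar is already defined in the job:
// Pass the row through unchanged so the following component still sees it.
output_row.key = input_row.key;
output_row.value = input_row.value;

// Example of "formatting" a context param: trim whitespace read from the file
// before the value is used as a path. "myVar" is an assumed context variable name.
if ("myVar".equals(input_row.key)) {
    context.myVar = input_row.value == null ? null : input_row.value.trim();
}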
TalendByExample has provided a great guide on how to build a reusable context loading job which you can call from any of your jobs.
https://www.talendbyexample.com/talend-reusable-context-load-job.html
I'm trying to copy data from one bucket to another using the ruby "aws-sdk" gem version 3.
My code is shown below:
temporary_object = @temporary_bucket.object(temporary_path)
permanent_object = @permanent_bucket.object(permanent_path)
temporary_object.copy_to(permanent_object)
However, I keep getting the error Aws::S3::Errors::NoSuchKey: The specified key does not exist. This makes sense, as the permanent bucket doesn't exist at this moment; however, I thought that using copy_to creates the bucket if it does not exist.
Any advice would be very helpful.
Thanks
I'm trying to create a user-data script with cloud-init using the chef features. I've run into a limitation and I'm wondering if there's a way around it. I need my node names to be unique since the chef server will only accept a client with a unique name. I've tried several things to pass either a datetime variable or the instance ID, but I can't seem to pass variables to the node_name section.
node_name: "server-app-$INSTANCE_ID"
or
node_name: "server-app-$(date +%s)"
Is there a way to escape this so that it doesn't get interpreted literally?
Chef encountered an error attempting to create the client "server-app-$INSTANCE_ID"
You'll have to do the expansion before you write out the client.rb:
echo "node_name '$(date +%s)' >> /etc/chef/client.rb"
Note that this could still collide if two machines boot in the same second. I would highly recommend using the EC2 instance ID or IP address instead. You can fetch these from the metadata server. See https://github.com/coderanger/brix/blob/master/packer/client-bootstrap.sh#L26-L37 for an example.
Is there a way to capture and store (or write to a file) the values returned in the Response? (Checkpoint values)
Using HP UFT 11.52
Thanks,
Lynn
I figured it out. In UFT API under Standard Activities, there are File function modules including "Write to File". I added the module to the test, set the path and other properties, passed the variable to the file and it worked! Couldn't be easier.
I mentioned this in my other answer; you can also write it programmatically if you have a dynamic array response. Please refer below:
https://stackoverflow.com/a/28012383/3972994
After running a test, in the test folder, you can find a Snapshots/LastIteration directory.
In it you can find the return value for each step saved in a txt file.
Note that if you data-drive the step, only the last iteration will be saved to the file.
However, in the test's log (Test dir/Log/vtd_user.log) you can find all the iterations persisted.
Thanks,
Yossi
You do not need to use the Standard Activities if you do this:
var iResponse = this.Activity.responsebody;
// directoryPath and fileName are placeholders for wherever you want the output written
System.IO.File.WriteAllText(System.IO.Path.Combine(directoryPath, fileName), iResponse);
The above will write the response to the file and overwrite it on every run.