I want to use QuickSight for visualization with AWS Timestream as the data source, but I cannot see any icon for AWS Timestream.
Both services are in the same region (ap-northeast-1), and when I set up QuickSight I checked the box to grant it permission to access AWS Timestream.
I have already checked the YouTube video (https://youtu.be/TzW4HWl-L8s?t=118) and the documentation (https://docs.aws.amazon.com/timestream/latest/developerguide/Quicksight.html#Quicksight.accessing).
I asked AWS directly and found the reason: QuickSight does not currently support Timestream in the Asia Pacific (Tokyo) region, ap-northeast-1.
I am looking for a way to send data from an Oracle DB to AWS Data Exchange without any manual intervention.
In January 2022, AWS Data Exchange launched support for data sets backed by Amazon Redshift; the same guide referenced by John Rotenstein, above, shows you how you can create a data set using Amazon Redshift datashares. If you are able to move data from the Oracle database to Amazon Redshift, this option may work for you.
AWS Data Exchange also just announced a preview of data sets using AWS Lake Formation, which allows you to share data from your Lake Formation data lake; Lake Formation supports Oracle databases running in Amazon Relational Database Service (RDS) or hosted on Amazon Elastic Compute Cloud (EC2). Steps to create this kind of product can be found here.
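For illustration, creating a Redshift-backed data set programmatically with boto3 starts roughly like this (a minimal sketch; the region, names, and description are placeholders, and importing the actual datashare into a revision is a separate job step you would add afterwards):

    import boto3

    dx = boto3.client("dataexchange", region_name="us-east-1")  # placeholder region

    # Create a data set backed by an Amazon Redshift datashare
    data_set = dx.create_data_set(
        AssetType="REDSHIFT_DATA_SHARE",
        Name="oracle-data-via-redshift",  # placeholder name
        Description="Data moved from Oracle into a Redshift datashare",
    )

    # Each published version of the data is a revision on the data set
    revision = dx.create_revision(DataSetId=data_set["Id"], Comment="Initial revision")
    print(data_set["Id"], revision["Id"])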
I came across this question in my AWS study:
A user is designing a new service that receives location updates from 3600 rental cars every second. The cars' locations need to be uploaded to an Amazon S3 bucket. Each location must also be checked for distance from the original rental location. Which services will process the updates and automatically scale?
Options:
A. Amazon EC2 and Amazon EBS
B. Amazon Kinesis Firehose and Amazon S3
C. Amazon ECS and Amazon RDS
D. Amazon S3 events and AWS Lambda
My question is: how can Option D be used as the solution? Or should I use Firehose to ingest (capture and transform) data into S3?
Thanks.
I would choose Option B:
a) Kinesis Firehose provides a managed service to ingest data directly into S3.
b) Firehose can apply a Lambda transformation that computes the distance from the original rental location, and the result can be stored in S3 along with each record; a sketch of such a transformation is shown below.
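For illustration, here is a minimal sketch of such a Firehose transformation Lambda. The field names (lat, lon, origin_lat, origin_lon) are hypothetical; your record schema will differ. It decodes each record, computes the great-circle distance from the original rental location, and hands the enriched record back to Firehose for delivery to S3:

    import base64
    import json
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        r = 6371.0  # Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def lambda_handler(event, context):
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))
            # Assumed fields: lat/lon (current position), origin_lat/origin_lon (rental location)
            payload["distance_km"] = haversine_km(
                payload["lat"], payload["lon"],
                payload["origin_lat"], payload["origin_lon"],
            )
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode((json.dumps(payload) + "\n").encode("utf-8")).decode("utf-8"),
            })
        return {"records": output}

Firehose then buffers the transformed records and writes them to the S3 bucket, so both the ingestion and the distance check scale without managing servers.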
I have a large volume of data in my Oracle database, and I want to migrate it to an AWS S3 bucket. I cannot find good documentation for this. Please share if someone has already done it.
Thanks
You can use AWS Data Pipeline
[Copied from above link]
With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR.
I also found some code on GitHub to back up Oracle data to S3 (link); a rough sketch of that kind of approach is below.
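As an illustration of that kind of backup script (not the linked code itself), here is a minimal sketch that dumps one table to CSV with the python-oracledb driver and uploads it to S3 with boto3; the connection details, table name, and bucket name are placeholders:

    import csv
    import boto3
    import oracledb  # python-oracledb driver

    # Placeholder connection and destination details
    conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost:1521/ORCLPDB1")
    table = "RENTALS"            # hypothetical table name
    bucket = "my-backup-bucket"  # hypothetical bucket name
    local_file = "/tmp/" + table.lower() + ".csv"

    # Dump the table to a local CSV file
    with conn.cursor() as cur, open(local_file, "w", newline="") as f:
        cur.execute("SELECT * FROM " + table)
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row
        for row in cur:
            writer.writerow(row)

    # Upload the CSV to S3
    boto3.client("s3").upload_file(local_file, bucket, "oracle-backup/" + table.lower() + ".csv")

For very large tables you would page through the data rather than write a single local dump, which is where a managed service such as Data Pipeline helps.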
I have a requirement to launch multiple EC2 instances in the Tokyo region, based on the number of AMIs owned by our account in that same region. The AMIs are backed up daily from another region.
What this CloudFormation template needs to achieve is:
Retrieve a list of AMIs created today
Attempt to launch each of them in the same region
For example, if today there are 10 different AMIs created in the Tokyo region, then CloudFormation will then create 10 EC2 instances based on these 10 AMIs.
I have looked at some examples in Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation, but found that the code does not suit the requirement.
I already have the Lambda function retrieve-today-ami.py; the challenge is to include it in the CloudFormation template from that walkthrough.
Normally, CloudFormation is used to launch pre-defined infrastructure. Your requirement to launch a variable number of instances, with information that changes for each instance every day, does not match the model CloudFormation is designed for.
Based on your use-case, I would recommend writing a script to perform the operation you want.
For example, a Python script that lists the AMIs, identifies the ones you want to use, and then launches EC2 instances from those AMIs.
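A minimal sketch of such a script might look like this (it assumes the AMIs are owned by your account and selects those with today's creation date; the region and instance type are placeholders):

    import datetime
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-northeast-1")  # Tokyo region

    # Find AMIs owned by this account that were created today (UTC)
    today = datetime.datetime.utcnow().strftime("%Y-%m-%d")
    images = ec2.describe_images(Owners=["self"])["Images"]
    todays_amis = [img["ImageId"] for img in images if img["CreationDate"].startswith(today)]

    # Launch one instance per AMI
    for ami_id in todays_amis:
        ec2.run_instances(
            ImageId=ami_id,
            InstanceType="t3.micro",  # placeholder instance type
            MinCount=1,
            MaxCount=1,
        )

Run on a schedule (cron or a scheduled Lambda), this launches exactly as many instances as there are AMIs created that day.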
You might be able to achieve this by using a Lambda-backed custom resource to fetch the names of the AMIs. The outputs of your custom resource could then be used in the EC2 stanzas of the template. You could have the template that defines the Lambda export the values and import them in your EC2 templates; a sketch of such a custom-resource handler is below.
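If you take the custom-resource route, a minimal sketch of the backing Lambda handler (roughly what retrieve-today-ami.py would need to do to report its result back to CloudFormation) could look like this; the AmiIds output name is just an example:

    import datetime
    import boto3
    import cfnresponse  # available to inline (ZipFile) Lambda code in CloudFormation

    def handler(event, context):
        try:
            if event["RequestType"] == "Delete":
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return
            ec2 = boto3.client("ec2")
            today = datetime.datetime.utcnow().strftime("%Y-%m-%d")
            images = ec2.describe_images(Owners=["self"])["Images"]
            ami_ids = [i["ImageId"] for i in images if i["CreationDate"].startswith(today)]
            # The template can read this attribute from the custom resource with Fn::GetAtt
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {"AmiIds": ",".join(ami_ids)})
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})

Keep in mind the template itself still declares a fixed set of EC2 resources, which is why the previous answer's point about a variable instance count still applies.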
I want to make an exact replica of my server from a non-China region in a China region. AWS won't allow an AMI copy into China. Is there a way of achieving my goal?
I already tried [1], but it didn't work for me. I would like to hear if anyone has done [1] successfully.
[1] https://forums.aws.amazon.com/thread.jspa?threadID=178941
You can easily transfer or copy your AMI to another region. There are a few ways; the easiest is to use the Copy AMI option.
Click on your instance -> Create Image.
From the AMI list -> select your newly created AMI -> Copy AMI -> select the region you want to copy it to. It will be available in that region after a couple of minutes. Then, from that region, you can simply re-create your instance from this AMI. The same copy can also be done with boto3, as sketched below.
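For reference, a minimal sketch of the same copy step with boto3 (the AMI ID and regions are placeholders); the client must be created in the destination region, which pulls the image in:

    import boto3

    # Create the client in the destination region; copy_image pulls the AMI into it
    ec2 = boto3.client("ec2", region_name="ap-northeast-1")  # placeholder destination region

    response = ec2.copy_image(
        Name="my-server-copy",                   # name for the new AMI
        SourceImageId="ami-0123456789abcdef0",   # placeholder source AMI ID
        SourceRegion="us-east-1",                # placeholder source region
    )
    print(response["ImageId"])  # ID of the new AMI being created in the destination region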
Another way: if AWS doesn't allow you to copy your AMI into the China region, you can create another account in the China region and then, from your AMI list, grant that China-region AWS account permission on the AMI; it will then be available to your China-region account and you can re-create the instance from there. Though I have never dealt with the China region in AWS, it should work.
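Granting another account launch permission on an AMI is normally done like this (the account ID, AMI ID, and region are placeholders); whether this works into the separate China partition is something you would need to verify:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region where the AMI lives

    # Grant launch permission on the AMI to another AWS account
    ec2.modify_image_attribute(
        ImageId="ami-0123456789abcdef0",                          # placeholder AMI ID
        LaunchPermission={"Add": [{"UserId": "111122223333"}]},   # placeholder account ID
    )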