We have a lot of AWS QuickSight reports in one account that need to be migrated to another account.
Within the same account, we can use the 'save as' feature of a dashboard to create a copy of a report, but is there any way to export an analysis from one account and import it into another account?
At present, it appears the only way is to recreate all the reports from scratch in the new account, but does anyone have any other options?
You can do this programmatically through the API:
QuickSight API
However, it will take a bit of scripting. You will need to pull out the pieces with the API and then rebuild them in the new account.
For example, DescribeTemplate will pull the JSON defining a template. You can then use CreateTemplate to create it in another account.
In my organization, we use the QuickSight APIs in AWS Lambda functions and save the analysis template in JSON format in an S3 bucket. This S3 bucket is accessible from multiple environments (Dev, QA, Staging, and Production). Leveraging the API again, we create the analysis in the other environments from the template JSON file. We also store version information for the templates in a PostgreSQL database.
PS - The dataset in each environment needs to be created prior to migrating the analysis.
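For illustration, here is a minimal boto3 sketch of that Lambda flow, assuming a hypothetical bucket and key layout (the export_template helper and all IDs are placeholders, not our actual setup):
import json
import boto3

quicksight = boto3.client('quicksight')
s3 = boto3.client('s3')

def export_template(account_id, template_id, bucket):
    # DescribeTemplate returns the full template definition
    template = quicksight.describe_template(
        AwsAccountId=account_id, TemplateId=template_id)['Template']
    # Save the JSON to S3 so the other environments can pick it up
    s3.put_object(
        Bucket=bucket,
        Key='quicksight-templates/' + template_id + '.json',
        Body=json.dumps(template, default=str))  # default=str for datetimes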
1. Create a template in Account1 from the analysis
--template.json:
{
    "SourceAnalysis": {
        "Arn": "arn:aws:quicksight:us-east-1:<AccountID-1>:analysis/<AnalysisID>",
        "DataSetReferences": [
            {
                "DataSetPlaceholder": "DS1",
                "DataSetArn": "arn:aws:quicksight:us-east-1:<AccountID-1>:dataset/<DatasetID>"
            }
        ]
    }
}
aws quicksight create-template --aws-account-id <AccountID-1> --template-id <templateId> --source-entity file://template.json --profile default
2. Update the template permissions in Account1 to grant access to Account2 (root):
--TemplatePermission.json
[
    {
        "Principal": "arn:aws:iam::<AccountID-2>:root",
        "Actions": ["quicksight:UpdateTemplatePermissions", "quicksight:DescribeTemplate"]
    }
]
aws quicksight update-template-permissions --aws-account-id <AccountID-1> --template-id <templateId> --grant-permissions file://TemplatePermission.json --profile default
3. Create an analysis in Account2 using the Account1 template
--createAnalysis.json
{
    "SourceTemplate": {
        "DataSetReferences": [
            {
                "DataSetPlaceholder": "DS1",
                "DataSetArn": "arn:aws:quicksight:us-east-1:<AccountID-2>:dataset/<DatasetID in account 2>"
            }
        ],
        "Arn": "arn:aws:quicksight:us-east-1:<AccountID-1>:template/<templateId>"
    }
}
aws quicksight create-analysis --aws-account-id <AccountID-2> --analysis-id testanalysis --name test1 --source-entity file://createAnalysis.json
4. Update permissions on the analysis so your user can view it
--UpdateAnalysisPermission.json
[
    {
        "Principal": "arn:aws:quicksight:us-east-1:<AccountID-2>:user/default/<username>",
        "Actions": [
            "quicksight:RestoreAnalysis",
            "quicksight:UpdateAnalysisPermissions",
            "quicksight:DeleteAnalysis",
            "quicksight:DescribeAnalysisPermissions",
            "quicksight:QueryAnalysis",
            "quicksight:DescribeAnalysis",
            "quicksight:UpdateAnalysis"
        ]
    }
]
aws quicksight update-analysis-permissions --aws-account-id <AccountID-2> --analysis-id testanalysis --grant-permissions file://UpdateAnalysisPermission.json
UPDATE.
As @yottabrain clarified, at the moment (February 2020) you can share an analysis only with other users within the same Amazon QuickSight account.
Sure, you can share your analysis as well:
Go to Share > Share analysis > Manage analysis access > Invite Users
See the detailed manual from AWS: Sharing an Analysis
Our team has decided to adopt AWS CDK v2 to build and manage our AWS resources. We are also using the AWS Deployment Framework (ADF) to manage the deployment process by creating CodePipelines and using AWS CodeBuild.
The setup we currently have works for the most part, but we stumbled upon an issue when we attempt to deploy any of our resources that contain assets, such as Lambdas. Specifically, I am talking about Lambdas whose code is not included inline within the synthesized CloudFormation template, as per this example.
In other words, our Lambda code is expected to be uploaded to S3 before being deployed. I am looking for best-practice guides on how to configure our accounts and ADF with the CDK to deploy assets which require uploading to S3. At the moment, all I can think of is either bootstrapping the accounts we deploy to and/or customising the CDK synthesizer as part of our stack definition. Any guidance or thoughts would be appreciated!
In other words, our Lambda code is expected to be uploaded to S3 before being deployed
Luckily, no. The CDK's Lambda constructs automagically handle local asset bundling and S3 uploading out of the box. The CDK also accepts inline code and existing S3 buckets as code sources. And Docker images.
// aws_cdk.aws_lambda
import * as path from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';

const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_12_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler')),
});
Even better, the CDK provides runtime-specific Lambda constructs (in dedicated modules) to make life even easier. For instance, the CDK will transpile your TypeScript or build your Go executable for you and send the artifacts to a CDK-managed S3 bucket on cdk deploy.
// aws_cdk.aws_lambda_nodejs
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

new NodejsFunction(this, 'MyFunction', {
  entry: '/path/to/my/file.ts', // accepts .js, .jsx, .ts and .tsx files
  handler: 'myExportedFunc', // defaults to 'handler'
});

// aws_cdk.aws_lambda_go_alpha
import { GoFunction } from '@aws-cdk/aws-lambda-go-alpha';

new GoFunction(this, 'handler', {
  entry: 'app/cmd/api',
});

// aws_cdk.aws_lambda_python_alpha
import { PythonFunction } from '@aws-cdk/aws-lambda-python-alpha';
import { Runtime } from 'aws-cdk-lib/aws-lambda';

new PythonFunction(this, 'MyFunction', {
  entry: '/path/to/my/function', // required
  runtime: Runtime.PYTHON_3_8, // required
  index: 'my_index.py', // optional, defaults to 'index.py'
  handler: 'my_exported_func', // optional, defaults to 'handler'
});
I am trying to get all the instance (server) IDs based on the app. Let's say I have an app running on a server. How do I know which apps belong to which server? I want my code to find all the instances (servers) that belong to each app. Is there a way to look through the apps in the EC2 console and figure out which servers are associated with each app, preferably using the tag method?
import boto3
client = boto3.client('ec2')
my_instance = 'i-xxxxxxxx'
(Disclaimer: I work for AWS Resource Groups)
Seeing in your comments that you use tags for all apps, you can use AWS Resource Groups to create a group. The example below assumes you used App:Something as the tag; it first creates a Resource Group and then lists all the members of that group.
Using this group you can, for example, automatically get a CloudWatch dashboard for those resources, or use the group as a target in Run Command.
import json
import boto3

RG = boto3.client('resource-groups')

RG.create_group(
    Name='Something-App-Instances',
    Description='EC2 Instances for Something App',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::EC2::Instance'],
            'TagFilters': [{
                'Key': 'App',
                'Values': ['Something']
            }]
        })
    },
    Tags={
        'App': 'Something'
    }
)

# List all resources in the group using a paginator
paginator = RG.get_paginator('list_group_resources')
resource_pages = paginator.paginate(GroupName='Something-App-Instances')
for page in resource_pages:
    for resource in page['ResourceIdentifiers']:
        print(resource['ResourceType'] + ': ' + resource['ResourceArn'])
Another option, to just get the list without saving it as a group, would be to use the Resource Groups Tagging API directly.
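A rough sketch of that approach, again assuming the instances carry the App=Something tag:
import boto3

tagging = boto3.client('resourcegroupstaggingapi')

# Page through all EC2 instances carrying the App=Something tag
paginator = tagging.get_paginator('get_resources')
pages = paginator.paginate(
    TagFilters=[{'Key': 'App', 'Values': ['Something']}],
    ResourceTypeFilters=['ec2:instance'])
for page in pages:
    for resource in page['ResourceTagMappingList']:
        print(resource['ResourceARN'])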
What you install on an Amazon EC2 instance is totally up to you. You do this by running code on the instance itself. AWS is not involved in the decision of what you install on the instance, nor does it know what you installed on an instance.
Therefore, you will need to keep track of "what apps are installed on what server" yourself.
You might choose to take advantage of tags on instances to add some metadata, such as the purpose of the server. You could also use AWS Systems Manager to run commands on instances (e.g. to install software), or even use AWS CodeDeploy to roll out software to fleets of servers.
However, even with all of these deployment options, AWS cannot track what you have put on each individual server. You will need to do that yourself.
Update: You can use AWS Resource Groups to view/manage resources by tag.
Here's some sample Python code to list tags by instance:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')

instances = ec2_resource.instances.all()
for instance in instances:
    for tag in instance.tags or []:  # tags is None for untagged instances
        print(instance.instance_id, tag['Key'], tag['Value'])
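If you only want the servers for a particular app, you can also filter on the tag server-side. A minimal sketch, assuming a hypothetical App=Something tag:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')

# Only instances tagged App=Something ('App'/'Something' are placeholder values)
filtered = ec2_resource.instances.filter(
    Filters=[{'Name': 'tag:App', 'Values': ['Something']}])
for instance in filtered:
    print(instance.instance_id)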
So I have some different types of AWS resources tagged as xxx/yyy/<generated_id>. I need to fetch them using the Go SDK.
Here is a code sample for subnets; the filters look the same for every other resource.
This doesn't work:
var resp *ec2.DescribeSubnetsOutput
resp, err = d.ec2Client().DescribeSubnets(&ec2.DescribeSubnetsInput{
    Filters: []*ec2.Filter{
        {
            Name:   aws.String("vpc-id"),
            Values: []*string{&d.VpcId},
        },
        {
            Name:   aws.String(`tag:"xxx/yyy.[*]"`),
            Values: []*string{aws.String("owned")},
        },
    },
})
This does:
aws ec2 describe-subnets --filters 'Name=tag:"xxx/yyy.[*]",Values=owned'
I'm obviously doing something wrong; can someone point out what?
There is nothing in the API documentation to suggest that DescribeSubnets accepts a regular expression in filter names: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html
If it works in the CLI, that's likely something the CLI is doing on top of what the SDK offers. The Go SDK is like any other AWS SDK; it exposes the AWS API in a language-specific way. The AWS CLI adds convenience features on top of the API to make it more useful on the command line, but that doesn't mean those features are exposed by the API or any published SDK.
I ran into this problem recently; my issue was the version of the SDK I was using.
Filters: []*ec2.Filter{
is for the v1 SDK module and was not working because I was importing github.com/aws/aws-sdk-go-v2/aws, while
Filters: []types.Filter{
is for v2, and this one worked in my case.
https://aws.amazon.com/blogs/developer/aws-sdk-for-go-version-2-general-availability/
I'm using the Jovo framework (version 1.0.0) and I'm facing the following problem:
In app.js:
app.setHandler({
    'LAUNCH': function() {
        if (this.user().isNewUser()) {
            this.tell('This will never be told on AWS Lambda.');
        }
    }
});
Running locally, I can distinguish between isNewUser === true and isNewUser === false, but as soon as I execute it as a Lambda function on AWS, isNewUser is always false. Why is that?
Additionally,
'NEW_USER': function() {
},
isn't triggered either.
System Environment on local machine:
Windows 10 Home
NodeJS: v8.9.1
Lambda function:
NodeJS 6.10
I really appreciate any help you can provide.
Both 'NEW_USER' and this.user().isNewUser() need access to a database where the number of sessions is stored for each user.
When you're prototyping locally, it uses the default File Persistence database integration, which saves the data to a local db/db.json file.
However, on AWS Lambda, the local db doesn't work, so you need to set up a DynamoDB configuration. Learn more here: Jovo Framework Docs > Database Integrations > DynamoDB.
Please remember to give your Lambda function role the right permission to access DynamoDB data: AWS Lambda Permissions Model.
On parse.com, when I want to create a new app, I use:
curl -X POST \
-H "X-Parse-Email: <PARSE_ACCOUNT_EMAIL>" \
-H "X-Parse-Password: <PARSE_ACCOUNT_PASSWORD>" \
-H "Content-Type: application/json" \
-d '{"appName":"my new app","clientClassCreationEnabled":false}' \
https://api.parse.com/1/apps
But when I deployed a Parse Server to Heroku and DigitalOcean, I didn't know how to create a new app, because my server doesn't have PARSE_ACCOUNT_EMAIL and PARSE_ACCOUNT_PASSWORD. When I deployed the Parse Dashboard, it didn't have a "Create a new app" option like Parse.com.
How can I create a new app with my self-hosted Parse Server?
The self-hosted Parse Server can only handle one app per server, at least for now.
This means that you will have to use several installations of Parse, one app per installation, using multiple servers or multiple instances of Parse on the same server, with each instance configured to use a different port.
To answer your question: no, you do not need to use parse.com to create new apps.
To create a new app, you set the appId and master key in the Parse config/start file on your DigitalOcean or other hosted server.
The appId and master key can be anything you make up; they do not need to come from parse.com.
Below is an example of the environment settings in a startup file:
Example file: ~/parse-server-example/my_app.js
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

// Configure the Parse API
var api = new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev',
    cloud: __dirname + '/cloud/main.js',
    appId: 'myOtherAppId',
    masterKey: 'myMasterKey'
});

var app = express();

// Serve the Parse API on the /myparseapp URL prefix
app.use('/myparseapp', api);

// Listen for connections on port 9999
var port = 9999;
app.listen(port, function() {
    console.log('parse-server-example running on port ' + port + '.');
});
Then run the file with:
node my_app.js
You can read more here: Parse Server at Digital Ocean
There is an open issue for that: https://github.com/ParsePlatform/parse-dashboard/issues/188
For the moment, I just use Parse's hosted dashboard to create new apps. They say that on January 28th calls to their API will cease to function, but they don't say that the hosted dashboard will be going away. I imagine that, if they don't get this feature into the self-hosted version, you'll still be able to create new apps within the hosted dashboard.
In any case, for now what I am doing is creating the app as I normally would in the hosted dashboard. I then run the migration tool at app > app settings > general > Migrate to external database option. You have to add at least one class to the database in order for the migration tool to work; basically, the migration tool will fail with an ambiguous error message if it's a completely fresh app with a clean database.
Once the migration is done and reads/writes are hooked up to my self-hosted Parse Server, I then provide the app's keys, etc. in the parse-dashboard-config.json file of my self-hosted Parse Dashboard. You can add multiple apps to this config file and thus manage all of your apps from a single self-hosted Parse Dashboard.
Here's an example of that config file with two apps:
{
    "apps": [
        {
            "serverURL": "https://my-parse-server-1.herokuapp.com/parse",
            "appId": "b44gL7uAB1z...lwUJneaoKdX9",
            "masterKey": "HrSqFbH...hfiwuCCOLDvHF",
            "appName": "parse-server-1"
        },
        {
            "serverURL": "https://my-parse-server-2.herokuapp.com/parse",
            "appId": "b44gL7uAB1z...lwUJneaoKdX9",
            "masterKey": "HrSqFbH...hfiwuCCOLDvHF",
            "appName": "parse-server-2"
        }
    ],
    "users": [
        {
            "user": "admin",
            "pass": "somePasswordHere"
        }
    ]
}
This seems to be the only way currently to create apps that you can connect to your self-hosted Parse Dashboard.
It's also important to note that, at the moment, it appears as though the self-hosted Parse Server package only supports a single app. I have no idea if there are any plans to support multiple apps as they have done with Parse Dashboard.
And finally, you can use the Parse Command Line tool to create new apps as well: https://parse.com/docs/cloudcode/guide#command-line-creating-a-parse-app
They also have some interesting integrations with Heroku which facilitate the entire process; that might be worth looking into. You could also create a simple Node app yourself with a GUI for creating new Parse apps: a simple form that, when submitted, is validated and then executes the command-line methods to create a new app via the ShellJS node package. You could even modify the Parse Dashboard package to include this feature yourself within the self-hosted Dashboard.