I'm trying to test the Azure-Samples Event Hubs example.
I want to run it from the Cloud Shell and I'm getting this error:
Caused by: com.azure.core.exception.ClientAuthenticationException: ERROR: Tenant shouldn't be specified for Cloud Shell account
I checked the code and the tenant is not used anywhere in this simple example. So how do I run it in the Cloud Shell, and, more generally, what causes this error and how can I avoid it?
I followed all the steps in the guide above, creating a Spring Cloud Azure Starter Event Hubs application to send and receive messages with Azure Event Hubs.
We used a Terraform configuration to create the resource group and the Event Hubs namespace through the Azure CLI.
provider "azurerm"{
features{}
}
resource "azurerm_resource_group" "examplegroup" {
name = "springterra"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "mystorage01acc"
resource_group_name = azurerm_resource_group.examplegroup.name
location = azurerm_resource_group.examplegroup.location
account_tier = "Standard"
account_replication_type = "GRS"
}
resource "azurerm_eventhub_namespace" "example" {
name = "evnt01hub"
location = azurerm_resource_group.examplegroup.location
resource_group_name = azurerm_resource_group.examplegroup.name
sku = "Standard"
capacity = 2
tags = {
environment = "Production"
}
}
Created a Spring Boot app by adding the Event Hubs starter dependency to the pom.xml file to send and receive messages from the event hub.
Everything works fine as per the above document. In my repro I did not hit the tenant ID error you got in the exception.
Usually, this type of error occurs only when logging in with the Azure CLI.
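If you do hit it, one plausible cause (an assumption on my side, I could not reproduce it) is that a tenant ID is reaching the credential chain even though the code never references one, e.g. via the AZURE_TENANT_ID environment variable or a spring.cloud.azure.profile.tenant-id property; Cloud Shell credentials reject an explicit tenant. A quick check in the Cloud Shell:

# Hypothetical fix: Cloud Shell credentials refuse an explicit tenant,
# so make sure none is configured before running the sample.
echo $AZURE_TENANT_ID   # if this prints a value...
unset AZURE_TENANT_ID   # ...clear it for this session
# Also remove any spring.cloud.azure.profile.tenant-id entry from
# application.yaml / application.properties before re-running.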
References:
How to install the Azure CLI | Microsoft Learn
azurerm_eventhub | Resources | hashicorp/azurerm | Terraform Registry
WebSphere version 9.0 is installed on our RHEL 8.3 OS.
I have deployed a web app there: a .war file which contains multiple modules (web service, web module, etc.).
This WAR deploys successfully and I am able to start it by going to WebSphere Enterprise Applications - AppName - START.
The app gets started with a success message.
Now the problem lies ahead. Our application requires a certain file bootstrap.properties.
This file holds several configurations: JDBC params, JMX ports, JMS configurations, JVM arguments, logging paths, etc.
Once the web module of this app is accessed at the <SERVER_IP>:9080/Context URL, it throws an error in the GUI saying "Unable to locate bootstrap.properties".
Analysing at the code level, I found that the code below is throwing this error:
private static Properties config;
private static final String CONFIG_ROOT = System.getProperty("bootstrap.system.propertiespath");
private static final String configFile = "bootstrap.properties";

private JMXConfig() {
}

public static String getConfigRoot() {
    if (CONFIG_ROOT == null) {
        System.err.println("Not able to locate bootstrap.properties. Please configure bootstrap.system.propertiespath property.");
        throw new ConfigException("Unable to locate bootstrap.properties.");
    } else {
        return CONFIG_ROOT + File.separator;
    }
}
I wanted to know where in the WebSphere console we can specify the absolute path so that our property file is read as a system argument once the application is loaded.
Since you're using System.getProperty() to read the property, it needs to be specified as a Java system property passed into the JVM. You can do that from the JVM config panel, adding it as either a custom property on the JVM or as a -D option in the server's generic JVM arguments.
Custom property: https://www.ibm.com/docs/en/was/9.0.5?topic=jvm-java-virtual-machine-custom-properties
Generic JVM argument: https://www.ibm.com/docs/en/was/9.0.5?topic=jvm-java-virtual-machine-settings (search for "Generic JVM arguments")
Note that if you use a custom property, you would simply set the "name" field to "bootstrap.system.propertiespath" and the "value" to the path you need; if you use a generic JVM argument, you'd add an argument with the structure "-Dbootstrap.system.propertiespath=/path/to/file".
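After restarting the server, a quick way to confirm the JVM actually received the property is a throwaway log line inside the application (a sketch, not part of the original code):

// Minimal check: log the property from inside the deployed app,
// e.g. in a startup listener, to confirm the server JVM received it.
String path = System.getProperty("bootstrap.system.propertiespath");
System.out.println(path != null
        ? "bootstrap path resolved: " + path
        : "bootstrap.system.propertiespath not set");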
I followed the tutorial below to create a Lambda deployment pipeline using CDK. When I keep everything in the same account it works well.
https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
But my scenario is slightly different from the example because it involves two AWS accounts instead of one. I maintain the application source code and the pipeline in the OPS account, and this pipeline deploys the Lambda application to the UAT account.
OPS Account (12345678) - CodeCommit repo & CodePipeline
UAT Account (87654321) - Lambda application
As per the following AWS documentation (the Cross-account actions section), I made the changes below to the source code.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html
The Lambda stack exposes the deploy action role as follows:
export class LambdaStack extends cdk.Stack {
  public readonly deployActionRole: iam.Role;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    this.deployActionRole = new iam.Role(this, 'ActionRole', {
      assumedBy: new iam.AccountPrincipal('12345678'), // pipeline account
      // the role has to have a physical name set
      roleName: 'DeployActionRole',
    });
  }
}
In the pipeline stack,
new codePipeline.Pipeline(this, 'MicroServicePipeline', {
  pipelineName: 'MicroServicePipeline',
  stages: [
    {
      stageName: 'Deploy',
      actions: [
        new codePipelineAction.CloudFormationCreateUpdateStackAction({
          role: props.deployActionRole,
          ....
        })
      ]
    }
  ]
});
The following is how I instantiate the stacks:
const app = new cdk.App();

const opsEnv: cdk.Environment = {account: '12345678', region: 'ap-southeast-2'};
const uatEnv: cdk.Environment = {account: '87654321', region: 'ap-southeast-2'};

const lambdaStack = new LambdaStack(app, 'LambdaStack', {env: uatEnv});
const lambdaCode = lambdaStack.lambdaCode;
const deployActionRole = lambdaStack.deployActionRole;

new MicroServicePipelineStack(app, 'MicroServicePipelineStack', {
  env: opsEnv,
  stackName: 'MicroServicePipelineStack',
  lambdaCode,
  deployActionRole
});

app.synth();
My AWS credentials profile looks like:
[profile uatadmin]
role_arn=arn:aws:iam::87654321:role/PigletUatAdminRole
source_profile=opsadmin
region=ap-southeast-2
When I run cdk diff or cdk deploy, I get an error saying:
➜ infra git:(master) ✗ cdk diff MicroServicePipelineStack --profile uatadmin
Including dependency stacks: LambdaStack
Stack LambdaStack
Need to perform AWS calls for account 87654321, but no credentials have been configured.
What have I done wrong here? Is it my CDK code or is it the way I have configured my AWS profile?
Thanks,
Kasun
The problem is with your AWS CLI configuration. You cannot natively use the CDK CLI to deploy resources into two separate accounts with one CLI command. There is a recent blog post on how to tell CDK which credentials to use, depending on the stack's environment parameter:
https://aws.amazon.com/blogs/devops/cdk-credential-plugin/
The way we use it is to deploy stacks into separate accounts with multiple CLI commands, specifying the required profile each time. All parameters that need to be exchanged (such as the location of your lambdaCode) are passed via, e.g., environment variables.
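As a sketch (profile names taken from the question; adjust to your setup), that workflow boils down to one deploy per account:

# Deploy the Lambda stack with credentials for the UAT account...
cdk deploy LambdaStack --profile uatadmin
# ...then the pipeline stack with credentials for the OPS account.
cdk deploy MicroServicePipelineStack --profile opsadmin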
Just try using the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Or
~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****
~/.aws/config
[default]
region=us-west-2
output=json
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
It works for me.
I'm using CDK version 1.57.0.
The issue is that you have resources in multiple accounts, so different credentials are required to create those resources. However, CDK does not natively understand how to get credentials for those different accounts or when to swap between them. One way to fix this is to use the cdk-assume-role-credential-plugin, which allows you to use a single cdk deploy command to deploy to many different accounts.
I wrote a detailed tutorial here: https://johntipper.org/aws-cdk-cross-account-deployments-with-cdk-pipelines-and-cdk-assume-role-credential-plugin/
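For reference, wiring the plugin in is roughly this (a sketch: the app entry point below is an assumption, and the plugin must be installed where the CDK CLI can resolve it). In cdk.json:

{
  "app": "npx ts-node bin/app.ts",
  "plugin": ["cdk-assume-role-credential-plugin"]
}

Alternatively it can be passed per invocation, e.g. cdk deploy --plugin cdk-assume-role-credential-plugin.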
I am trying to develop a Google Assistant app with the Actions SDK. I found a lot of samples online, all of which use Google's Firebase Cloud Functions for deployment.
From this link (https://actions-on-google.github.io/actions-on-google-nodejs/) I also found that it is possible to deploy Actions SDK fulfillment to AWS Lambda.
But unfortunately I did not find any sample showing how to write and deploy an Actions SDK app to AWS Lambda.
Can anybody help me write an application similar to the one shown here (https://github.com/actions-on-google/actionssdk-say-number-nodejs) and deploy it to AWS Lambda?
I tried the following to do so myself, but it did not work.
Created a folder and initialized it with "npm init".
Added an index.js file.
Then ran the command "npm install actions-on-google". It appeared in the package.json file.
Created a zip archive of the entire source inside that folder.
Created an AWS Lambda function, uploaded the zip archive, and set the "Handler" of the Lambda function to "index.fulfillment".
Created an API Gateway, linked it to the Lambda function, and deployed it.
Then took the URL, edited the "actions.json" file, and ran the gactions command.
Then when I started testing the app in the Actions console using the simulator, I got the error: "UnparseableJsonResponse API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: error_message: Cannot find field".
Here is the code inside the index.js file:
'use strict';

const {actionssdk, SimpleResponse} = require('actions-on-google');
const app = actionssdk({debug: true});

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask("welcome");
});

app.intent('actions.intent.TEXT', async (conv, input) => {
  conv.ask('You said ' + input);
});

exports.fulfillment = app;
Here are the CloudWatch logs from AWS:
2018-11-10T08:35:46.715Z 9dbb17f8-e4c3-11e8-bce3-730a5244a300
{
"errorMessage": "Cannot convert undefined or null to object",
"errorType": "TypeError",
"stackTrace": [
"Function.keys (<anonymous>)",
"Lambda.<anonymous> (/var/task/node_modules/actions-on-google/dist/framework/lambda.js:36:36)",
"Generator.next (<anonymous>)",
"/var/task/node_modules/actions-on-google/dist/framework/lambda.js:22:71",
"new Promise (<anonymous>)",
"__awaiter (/var/task/node_modules/actions-on-google/dist/framework/lambda.js:18:12)",
"/var/task/node_modules/actions-on-google/dist/framework/lambda.js:30:46",
"omni (/var/task/node_modules/actions-on-google/dist/assistant.js:44:53)"
]
}
The code changes to host it on AWS are fairly straightforward. Instead of importing the firebase-functions library and using it, you just need to establish the lambda endpoint with the dialogflow app itself. So the code might look something like:
const { dialogflow } = require('actions-on-google')
const app = dialogflow()
// Setup intent handlers with app.intent() here
exports.factsAboutGoogle = app
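Applied to the actionssdk app from the question, the same pattern looks like this (a sketch; the exported name must match the Lambda "Handler" setting, which was "index.fulfillment" in your steps):

'use strict';

// Same app as in the question; no firebase-functions wrapper is
// needed, because the app object itself acts as the Lambda handler.
const {actionssdk} = require('actions-on-google');
const app = actionssdk({debug: true});

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('welcome');
});

app.intent('actions.intent.TEXT', (conv, input) => {
  conv.ask('You said ' + input);
});

// Must match the Lambda "Handler" setting: index.fulfillment
exports.fulfillment = app;

Separately, the stack trace (Object.keys failing on the request headers inside the library's lambda.js) suggests, though this is an assumption, that the function is not receiving an API Gateway proxy event; it is worth confirming the API Gateway method uses Lambda proxy integration so the event carries headers and a body.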
I have tried a few ways to get SonarQube running in our AWS environment, all successfully. However, SonarQube is unstable: whenever Elastic Beanstalk recycles an instance, my SonarQube environment is wiped out.
Here is what I tried:
Attempt 1: EC2 instance. I created the EC2 instance off of a Bitnami AMI, imageId: ami-0f9cf81913a6dce27.
This seemed like a pretty simple process, but I prefer an Elastic Beanstalk environment to manage our SonarQube EC2 instances.
Attempt 2: Create an EB environment using a single Docker instance, with this Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sonarqube:7.1"
  },
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
This created the EB environment. It creates an RDS instance (with MySQL 5.x) to store the scan data (in a database called ebdb). The SonarQube server hosts an internal Elasticsearch instance locally for its search data.
I then have to add a few environment variables to support the RDS instance (JDBC username, password, URL endpoint, etc.).
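Specifically, the SONARQUBE_JDBC_* variables the image reads, with placeholder values (the exact JDBC options may vary):

SONARQUBE_JDBC_USERNAME=<rds-username>
SONARQUBE_JDBC_PASSWORD=<rds-password>
SONARQUBE_JDBC_URL=jdbc:mysql://<rds-endpoint>:3306/ebdb?useSSL=false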
I then have to configure the SonarQube security side.
No marketplace features are installed, so I add SonarJava, Groovy, and SonarJS.
I add a login user for scans. All good.
Except, occasionally Elastic Beanstalk will have a health issue, drop the current instance, and re-create a new one.
In this case, everything is still intact: security, users, passwords, etc. Except the marketplace features are gone, so code scans will fail until I manually add them back.
The schema for a single-instance Docker container is pretty sparse; I did not see any way to customize it further with the Dockerrun file.
Attempt 3: Use a multi-instance Docker container. The schema is more robust, so perhaps I can configure SonarQube more explicitly, e.g. pass environment variables, MySQL settings, etc.
I was unable to get this to work. I did learn I needed to set the memory above 2 GB for Elasticsearch to start up, but I was still unable to get the SonarQube environment to come up.
I might revisit this later.
Attempt 4: Use the AMI in Elastic Beanstalk (with the Terraform AWS provider).
main.tf
resource "aws_elastic_beanstalk_application" "sonarqube" {
name = "SonarQube"
description = "SonarQube for nano-services"
}
resource "aws_elastic_beanstalk_environment" "nonprod" {
name = "${var.application-name}"
application = "${aws_elastic_beanstalk_application.sonarqube.name}"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.10.0 running Docker 17.12.1-ce"
wait_for_ready_timeout = "30m"
setting {
namespace = "aws:autoscaling:updatepolicy:rollingupdate"
name = "Timeout"
value = "PT1H"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "DeploymentPolicy"
value = "Rolling"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSizeType"
value = "Fixed"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSize"
value = "1"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "IgnoreHealthCheck"
value = "true"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "EC2KeyName"
value = "web-aws-key"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "arn:aws:iam::<redacted>:instance-profile/aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "instanceType"
value = "t2.xlarge"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "InstanceProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "SSLCertificateId"
value = "arn:aws:acm:<redacted>"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerEnabled"
value = "true"
}
}
Initially I included the SonarQube AMI:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "ImageId"
  value     = "ami-0f9cf81913a6dce27"
}
This does create everything. However, the EC2 instances respond too slowly, and EB goes to Grey status; even though SonarQube is up and running, EB is unaware of it. So I commented this out and manually modified the image ID as a one-off.
wait_for_ready_timeout does assist with this, in that it simply keeps Terraform from timing out, e.g. it finishes in 22.5 minutes instead of a hard stop at 20.
In this case, it creates SonarQube with a local MySQL database (no RDS instance), with Elasticsearch being local as well.
SonarQube's marketplace features are also included, except for Groovy, which I added.
However, the same issue as before: when EB drops an instance and re-creates it, the SonarQube environment is wiped out. This time, the credentials, marketplace features, and everything.
Has anyone run into this problem and figured it out?
I resolved the issue by using ECS (Fargate) instead of the Elastic Beanstalk container.
Steps:
Create an RDS MySQL instance in AWS for Sonar.
Open a MySQL shell for this instance and configure it for Sonar; see: Sonar setup with MySql.
Create a Dockerfile with the plugins you care about, e.g.:
FROM sonarqube:latest

ENV SONARQUBE_JDBC_USERNAME=[YOUR-USERNAME] \
    SONARQUBE_JDBC_PASSWORD=[YOUR-PASSWORD] \
    SONARQUBE_JDBC_URL=jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false&useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance

RUN wget "https://sonarsource.bintray.com/Distribution/sonar-java-plugin/sonar-java-plugin-5.7.0.15470.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-javascript-plugin/sonar-javascript-plugin-4.2.1.6529.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-groovy-plugin/sonar-groovy-plugin-1.4.jar" \
    && mv *.jar $SONARQUBE_HOME/extensions/plugins \
    && ls -lah $SONARQUBE_HOME/extensions/plugins

EXPOSE 9000
EXPOSE 9092
I exposed 9092 in case I wanted to comment out the MySQL connection and test locally with the internal H2 database at some point.
Verify the Docker image runs locally:
eval $(docker-machine env)
docker build -t sonar .
docker run -it -d --rm --name sonar -p 9000:9000 -p 9092:9092 sonar:latest
echo $DOCKER_HOST
Open a browser to this IP address on port 9000, e.g. http://192.x.x.x:9000.
Create a new ECR repository called sonar to store the Docker image.
The AWS interface actually tells you how to publish your Docker image, so this should be self-evident.
Tag and push the Docker image to the sonar repository:
$(aws ecr get-login --no-include-email --region [YOUR-AWS-REGION])
docker tag sonar:latest [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
docker push [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
Create a new Fargate cluster called sonar.
Create a new task definition.
For your container, use the ECR Docker image URI. I gave mine 6 GB memory and 2 CPUs, with 1024 CPU units. Here I exposed ports 9000 and 9092. I added the environment vars from the Dockerfile here as well.
Create an ECS service and include the task. Run it, verify the logs in CloudWatch, hit the public endpoint on port 9000, and done.
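For reference, a minimal sketch of that task definition (values taken from the steps above; the image URI is a placeholder, and the environment variables could be set here instead of baked into the image):

{
  "family": "sonar",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "2048",
  "memory": "6144",
  "containerDefinitions": [
    {
      "name": "sonar",
      "image": "[YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest",
      "cpu": 1024,
      "portMappings": [
        { "containerPort": 9000 },
        { "containerPort": 9092 }
      ]
    }
  ]
}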
I largely borrowed from this: https://www.infralovers.com/en/articles/2018/05/04/sonarqube-on-aws-fargate/
I hope this helps others.
Using TF 0.7.2 on a Win 10 machine.
I'm trying to set up an edit/upload cycle for development of my lambda functions in AWS, using the new "archive_file" resource introduced in TF 0.7.1
My configuration looks like this:
resource "archive_file" "cloudwatch-sumo-lambda-archive" {
source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
type = "zip"
}
resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
function_name = "cloudwatch-sumo-lambda"
description = "managed by source project"
filename = "${archive_file.cloudwatch-sumo-lambda-archive.output_path}"
source_code_hash = "${archive_file.cloudwatch-sumo-lambda-archive.output_sha}"
handler = "cloudwatchSumologic.handler"
...
}
This works the first time I run it - TF creates the lambda zip file, uploads it and creates the lambda in AWS.
The problem comes with updating the lambda.
If I edit the cloudwatchSumologic.js file in the above example, TF doesn't appear to know that the source file has changed - it doesn't add the new file to the zip and doesn't upload the new lambda code to AWS.
Am I doing something wrong in my configuration, or is the archive_file resource not meant to be used in this way?
You could be seeing a bug. I'm on 0.7.7, and the issue now is that the SHA changes even when you don't make changes. HashiCorp will be converting this resource to a data source in 0.7.8:
https://github.com/hashicorp/terraform/pull/8492
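Once it becomes a data source, the configuration would presumably look like the sketch below (based on your resource above; data sources are re-evaluated on every plan, so source changes get picked up, and later provider versions expose output_base64sha256, which is the form aws_lambda_function expects for source_code_hash):

# Sketch: archive_file as a data source rather than a resource.
data "archive_file" "cloudwatch-sumo-lambda-archive" {
  source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
  output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
  type        = "zip"
}

resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
  function_name    = "cloudwatch-sumo-lambda"
  description      = "managed by source project"
  filename         = "${data.archive_file.cloudwatch-sumo-lambda-archive.output_path}"
  source_code_hash = "${data.archive_file.cloudwatch-sumo-lambda-archive.output_base64sha256}"
  handler          = "cloudwatchSumologic.handler"
  # ... runtime, role, etc. as before
}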