How to create a mount path in an Azure Container Instance for LUIS setup - azure-language-understanding

I am trying to set up the LUIS image on an Azure Container Instance. I have set up the environment variables (Eula, API Key, and Endpoint),
but I am not sure how to create the mount path /input where I can place my exported LUIS file.
Please let me know if anyone has info on this.

Related

Webhooks for Oracle Cloud Infrastructure - container registry

Looking for a solution to this use case:
A Docker image is pushed to Oracle Cloud Infrastructure Container Registry (OCIR).
Jenkins has a webhook on OCIR, and a Jenkins pipeline is triggered when a new image becomes available in OCIR.
How is it possible to have a webhook, or some similar mechanism, for letting Jenkins know there is a new push to OCIR?
This blog post walks you through how to set up a continuous integration pipeline that may be usable, in full or in part, to accomplish this:
https://blogs.oracle.com/cloud-infrastructure/build-a-continuous-integration-pipeline-using-github,-docker-and-jenkins-on-oracle-cloud-infrastucture
You can listen to OCI Container Registry events via Service Connector. You can configure Service Connector to invoke your custom functions on the specific event 'Container Image - Upload' under the service name 'Registry'.
You can find a sample illustration below that performs some custom tasks during an image upload to the OCI Container Registry.
Ref: https://github.com/RahulMR42/oci-devops-deploy-on-imageupload
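For illustration, here is a minimal sketch of a function handler that such a Service Connector target could use to trigger a Jenkins job through Jenkins' remote build-trigger API. The handler shape, Jenkins URL, job name, user, and API token below are placeholders, not taken from the sample repository above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class OcirUploadHandler {

    // Invoked with the raw event payload (JSON) delivered by Service Connector.
    // The payload is only logged here; the Jenkins job is triggered unconditionally.
    public String handleRequest(String eventJson) throws Exception {
        System.out.println("Received OCIR event: " + eventJson);

        // Placeholder values -- replace with your own Jenkins endpoint and credentials.
        String jenkinsUrl = "https://jenkins.example.com";
        String jobName = "build-from-ocir";
        String user = "jenkins-user";
        String apiToken = "jenkins-api-token";

        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + apiToken).getBytes());

        // POST to Jenkins' remote build-trigger endpoint for the job.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jenkinsUrl + "/job/" + jobName + "/build"))
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        return "Jenkins responded with HTTP " + response.statusCode();
    }
}

If the Jenkins job has "Trigger builds remotely" enabled, the same POST could instead go to /job/<name>/build?token=<trigger-token> rather than using basic authentication.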

Do I need to ask Google Cloud support to enable the exchange of custom routes in VPC peering?

I have created two Google Cloud projects, one for Cloud SQL and one for a Kubernetes cluster. For accessing the Cloud SQL instance in project one, I have set import/export of custom routes. Do I need to get confirmation from Google Cloud support for this, or is this enough? I have read somewhere that after these steps you should ask Google Cloud support to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is created automatically when the Cloud SQL instance is created.
As far as I know, this step is not included in the public documentation and is not necessary: if you are connecting from within the same project, or from any service project (if configured with Shared VPC), you don't need to export those routes. Exporting custom routes is generally required only if you are connecting from on-premises.
If you are having any issues, please let us know.

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple of things in my application.properties. I attempted to use environment variables to configure application.properties, but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties:
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered getting CloudFormation to set the environment variables, but couldn't find how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work and isn't the solution I'm looking for, as I'm using Elastic Beanstalk and want this project to use fully automated deployments. Please let me know whether this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding either CodeDeploy or CloudFormation.
In general, it is a bad practice to include access and secret keys in any sort of file or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region, and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
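As a sketch (assuming the AWS SDK for Java v1 client that spring-data-dynamodb works against, with illustrative class and bean names), a Spring configuration like the following builds the client from the default provider chains instead of explicit keys:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamoDBConfig {

    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        // The builder with no explicit settings uses the default credentials
        // provider chain (instance profile on EC2/Elastic Beanstalk, ~/.aws on a
        // dev machine) and the default region provider chain, so no keys or
        // endpoints need to be hard-coded in application.properties.
        return AmazonDynamoDBClientBuilder.standard().build();
    }
}

A local DynamoDB endpoint could still be supplied via AwsClientBuilder.EndpointConfiguration if needed for development, keeping credentials out of the properties file.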
To set up your development machine with these properties in a way that the AWS SDK can retrieve them without explicitly putting them in properties files, you can run the aws configure command of the AWS CLI, which sets up your ~/.aws/ folder with the region and credentials to use on your dev machine.

Created an instance on Azure environment and the site URL

We have created an instance in the Azure environment, and the site URL is http://mydomainname.cloudapp.net/
We need ‘www’ to be used in the site name, as in http://www.mydomainname.com
Kindly let us know the steps to do so at the earliest.
Take a look at how you could map a custom domain to a cloud service: http://www.windowsazure.com/en-us/develop/net/common-tasks/custom-dns/.

Windows + CloudFormation: User doesn't have permission to call IAM:CreateUser

I cannot find decent documentation about using CloudFormation with Windows 2008 R2 AMI. AWS recently released a new Windows AMI which has CloudFormation tools pre-installed.
The AMI itself can be found here :
https://aws.amazon.com/amis/microsoft-windows-server-2008-r2-base-cloudformation
Aim: I want to use CloudFormation so that during boot the instance can download the latest DLLs and config files of my application from S3.
In that AMI, by default, where are these tools located under C:\? (I did a search in the file system and couldn't find them.)
Do these tools already run automatically on boot, or do I have to write a script to do so and re-bundle (remake) an EBS-backed AMI? I would like to test this!
To try out the sample templates provided by AWS for Windows, I tried launching the Windows SharePoint template given here: https://s3.amazonaws.com/cloudformation-templates-us-east-1/Windows_Single_Server_SharePoint_Foundation.template. When I try to launch the stack from that template, it gives me the following error and rolls back:
AccessDenied. User doesn't have permission to call iam:CreateUser
As per the "Account Owner", my IAM account belongs to the Administrators Group which "cannot create new users", if that's the case how should I tackle this issue.
As per my understanding, if I have to use CloudFormation to retrieve metadata, the CloudFormation stack creates a new IAM user with only "DescribeStackResource" action permission and this new IAM user lives as long as that stack lives.
The tools will be available under C:\Program Files (x86)\Amazon\cfn-bootstrap, but I'm not sure whether they run on boot; that I have to verify.
