Hi, can anyone explain how to mask EKS secrets in Dynatrace? - Spring Boot

Masking EKS secrets in Dynatrace.
I have tried to mask the values in the logback-spring XML by using some specific character patterns, but I am not able to get a full overview. Can anyone explain how to mask API keys?

Related

Is it possible to monitor 2 different infrastructures? - Elastic Cloud

I'm currently using the trial of Elastic Cloud for my project.
I would like to be able to monitor 2 infrastructures at the same time. I have created a space for each infrastructure, as well as 2 agent policies linked to the agents of their respective infrastructure.
I was wondering if there is a way to separate the agents by agent policy, for example with a filter, to get only the agents belonging to the space of the chosen infrastructure, or whether there is another way to do this.
Thanks in advance for your help
It's definitely possible: you can create filtered aliases, and then in each Kibana space create an index pattern over the corresponding alias so that only the data from the agents belonging to that space is shown.
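For illustration, here is a rough sketch of creating one such filtered alias with the Python Elasticsearch client. The deployment URL, API key, index pattern, alias name, and the "agent.policy_id" filter field/value are all assumptions for the example; use whatever field actually distinguishes the agents of each infrastructure in your documents.

```python
# Sketch: one filtered alias per infrastructure. All names below are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://your-deployment.es.example:9243", api_key="YOUR_API_KEY")

# Depending on the client version, the filter is passed via body= (7.x)
# or a filter= keyword argument (8.x).
es.indices.put_alias(
    index="metrics-*",            # indices your agents write into
    name="metrics-infra-a",       # alias used by the "infra A" Kibana space
    body={"filter": {"term": {"agent.policy_id": "policy-infra-a"}}},
)
# Repeat with a different alias and filter value for the second infrastructure,
# then create an index pattern over each alias in its own Kibana space.
```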

Customised IPFS dashboard

I have a customised IPFS setup (created and maintained by someone else). I want to design a dashboard for this customised private IPFS cluster (like IPFS Desktop's node information view). I am researching the Prometheus and Grafana services. What are the ways to achieve this task? I am new to IPFS. Please guide.
Edit: Recently I tried to get IPFS metrics using Prometheus.
http://localhost:5001/debug/metrics/prometheus gives some metric information, but I'm not sure it has complete information such as peers, files, etc.
Are there any Prometheus exporters for IPFS? Or how could I use the HTTP API (https://docs.ipfs.io/reference/http/api/#getting-started) data for Grafana?
You may need to export custom metrics, but the Prometheus endpoint seems like a reasonable place to start.
Some additional reading:
https://github.com/ipfs/go-ipfs/pull/6688
https://github.com/ipfs/go-metrics-prometheus
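Building on the custom-metrics suggestion above, here is a rough sketch of a small custom exporter that polls the IPFS HTTP API and re-exposes a couple of values for Prometheus. It assumes the go-ipfs API on localhost:5001 and the requests and prometheus_client packages; the endpoint names and response fields should be checked against your IPFS version.

```python
# Rough sketch of a custom IPFS exporter. Newer go-ipfs daemons require POST
# for API calls; older ones accept GET. Check the fields against your version.
import time
import requests
from prometheus_client import Gauge, start_http_server

IPFS_API = "http://localhost:5001/api/v0"

peer_count = Gauge("ipfs_swarm_peer_count", "Number of connected swarm peers")
repo_size = Gauge("ipfs_repo_size_bytes", "Size of the local IPFS repo in bytes")

def collect() -> None:
    peers = requests.post(f"{IPFS_API}/swarm/peers", timeout=10).json()
    peer_count.set(len(peers.get("Peers") or []))

    stat = requests.post(f"{IPFS_API}/repo/stat", timeout=10).json()
    repo_size.set(stat.get("RepoSize", 0))

if __name__ == "__main__":
    start_http_server(9401)   # point a Prometheus scrape job at this port
    while True:
        collect()
        time.sleep(15)
```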

How to distribute / Where to store keys that applications need to access HashiCorp Vault

We want to use HashiCorp Vault to save the passwords used by our applications.
What is not clear to me is how to distribute, and where to store, the keys our applications need to access Vault in a secure way.
I think this issue is not addressed by the Vault documentation; at least, I couldn't find it. But clearly, it should be a problem every Vault user has to handle.
Can someone give me a hint or provide an external tutorial, please?
Thx in advance!
What you need to figure out is which authentication method is available to you.
https://www.vaultproject.io/docs/auth/index.html
For example, if you are running your app in AWS, you could use IAM to authenticate. In this case, you don't need to provide anything to your application, as it is handled behind the scenes by Vault and AWS.
Another way would be token authentication, where you'd need to provide your application with a valid Vault token so that it can be used to get credentials.
This has more information about auth.
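To make the token option concrete, here is a minimal sketch using the Python hvac client. The Vault address, token source, and secret path are placeholders; the point of the original question is exactly that something (your orchestrator, CI system, or a cloud auth method like IAM) has to inject that token or credential rather than it being hard-coded.

```python
# Minimal sketch of token-based auth with hvac. VAULT_ADDR, VAULT_TOKEN and
# the secret path "myapp/db" are placeholders; in practice the token would be
# injected by your platform, not stored in the code or repo.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],   # the key-distribution problem: who sets this?
)

secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
password = secret["data"]["data"]["password"]
```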

Global borderless implementation website/app on Serverless AWS

I am planning to use AWS to host a global website that has customers all around the world. We will have a website and an app, and we will use a serverless architecture. I will also consider multi-region DynamoDB so that users can access the database instance closest to their region.
My question is about the best design for a solution that is not locked down to one particular region, since we are a borderless implementation. I am also looking at high traffic and a high number of users across different countries.
I am looking at this https://aws.amazon.com/getting-started/serverless-web-app/module-1/ but it requires me to choose a region. I almost need a router in front of this with multiple S3 buckets, but I don't know how. For example, how do users access a copy of the landing page closest to their region? How do mobile app users call Lambda functions in their region?
If you could point me to a posting or article or simply your response, I would be most grateful.
Note: I would also be interested to know whether Google Cloud Platform is an option.
thank you!
S3
Instead of setting up an S3 bucket per-region, you could set up a CloudFront distribution to serve the contents of a single bucket at all edge locations.
During the Create Distribution process, select the S3 bucket in the Origin Domain Name dropdown.
Caveat: when you update the bucket contents, you need to invalidate the CloudFront cache so that the updated contents get distributed. This isn't such a big deal.
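If your deployments are scripted, that invalidation step can be automated. A rough boto3 sketch, where the distribution ID is a placeholder:

```python
# Sketch: invalidate the CloudFront cache after uploading new bucket contents.
# "E1234EXAMPLE" is a placeholder for your distribution's ID.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},   # invalidate everything
        "CallerReference": str(time.time()),         # must be unique per request
    },
)
```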
API Gateway
Setting up an API Gateway gives you the choice of Edge-Optimized or Regional.
In the Edge-Optimized case, AWS automatically serves your API via the edge network, but requests are all routed back to your original API Gateway instance in its home region. This is the easy option.
In the Regional case, you would need to deploy multiple instances of your API, one per region. From there, you could do a latency-based routing setup in Route 53. This is the harder option, but more flexible.
Refer to this SO answer for more detail.
Note: you can always start developing in an Edge-Optimized configuration, and then later on redeploy to a Regional configuration.
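As a sketch of what the Route 53 side of the Regional option could look like with boto3 (the hosted zone ID, record name, and regional API Gateway domain names are placeholders):

```python
# Sketch: latency-based routing across two regional API deployments.
import boto3

route53 = boto3.client("route53")

def latency_record(region: str, target: str) -> dict:
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "SetIdentifier": region,   # must be unique per record in the set
            "Region": region,          # Route 53 answers with the lowest-latency one
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [
            latency_record("us-east-1", "abc123.execute-api.us-east-1.amazonaws.com"),
            latency_record("eu-west-1", "def456.execute-api.eu-west-1.amazonaws.com"),
        ]
    },
)
```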
DynamoDB / Lambda
DynamoDB and Lambda are regional services, but you could deploy instances to multiple regions.
In the case of DynamoDB, you could set up cross-region replication using stream functions.
Though I have never implemented it, AWS provides documentation on how to set up replication.
Note: Like with Edge-Optimized API Gateway, you can start developing DynamoDB tables and Lambda functions in a single region and then later scale out to a multi-regional deployment.
Update
As noted in the comments, DynamoDB has a feature called Global Tables, which handles the cross-regional replication for you. Appears to be fairly simple -- create a table, and then manage its cross-region replication from the Global Tables tab (from that tab, enable streams, and then add additional regions).
For more info, here are the AWS Docs
At the time of writing, this feature is only supported in the following regions: US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Frankfurt), EU West (Ireland). I imagine when enough customers request this feature in other regions it would become available.
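If you prefer scripting it over the console flow described above, the same (2017-version) Global Tables setup can be done with a single boto3 call. This assumes a table of the same name already exists in each region with streams (NEW_AND_OLD_IMAGES) enabled; the table and region names are placeholders.

```python
# Sketch: create a Global Table from existing per-region tables.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_global_table(
    GlobalTableName="MyTable",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```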
Also noted, you can run Lambda@Edge functions to respond to CloudFront events.
The Lambda function can inspect the AWS_REGION environment variable at runtime and then invoke a region-appropriate service (e.g. API Gateway), forwarding the request details to it. This means you could also use Lambda@Edge as an API Gateway replacement by inspecting the query string yourself (YMMV).
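Here is a rough sketch of what such a Lambda@Edge origin-request handler could look like in Python. The region-to-endpoint map is made up for illustration; the idea is just to re-point the CloudFront origin at a region-local API based on AWS_REGION.

```python
# Sketch of a Lambda@Edge origin-request handler. The endpoints below are
# placeholders for your regional API Gateway deployments.
import os

REGIONAL_APIS = {
    "us-east-1": "abc123.execute-api.us-east-1.amazonaws.com",
    "eu-west-1": "def456.execute-api.eu-west-1.amazonaws.com",
}
DEFAULT_API = REGIONAL_APIS["us-east-1"]

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    region = os.environ.get("AWS_REGION", "us-east-1")
    target = REGIONAL_APIS.get(region, DEFAULT_API)

    # Re-point the origin at the closest regional API and let CloudFront
    # forward the request there.
    request["origin"] = {
        "custom": {
            "domainName": target,
            "port": 443,
            "protocol": "https",
            "path": "",
            "sslProtocols": ["TLSv1.2"],
            "readTimeout": 30,
            "keepaliveTimeout": 5,
            "customHeaders": {},
        }
    }
    request["headers"]["host"] = [{"key": "host", "value": target}]
    return request
```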

Application Deployment - Web + REST backend

I have an application with an Ember.js-based front end and Express.js-based REST APIs, with PostgreSQL as the DB. There is also an Android application consuming the REST APIs.
I want to deploy this application in the cloud. I am very new to this area and not sure what approach to take that will also be economical. It's a startup application and will not have huge traffic at the start. I have been doing R&D on Heroku and Amazon AWS.
Can anyone please advise what deployment setup will be reliable and economical for me? Should I use a cloud DB? Any guidelines or reference material would be a great help.
Sorry if you find this question too generic.
Cheers
You can use an AWS EC2 micro instance initially, which is low cost.
Once you create the instance, you can install the tools you need on it.
What you need to do is create an AWS account and then create the instance; you can do this from the console. Later you can access the instance using your access key and secret key. If you are a new AWS user, you can get the usage free for one year.
As part of AWS’s Free Usage Tier, new AWS customers can get started with Amazon EC2 for free.
More details about free usage and pricing
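If you would rather script the launch than click through the console, here is a rough boto3 sketch; the AMI ID and key pair name are placeholders.

```python
# Sketch: launch a free-tier-eligible micro instance with boto3. Pick an AMI
# valid for your region and a key pair you have already created.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",           # free-tier eligible
    KeyName="my-keypair",              # used to SSH into the instance later
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```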
As your application grows, you can use Opscode Chef or Puppet for configuration management.
