We are trying to migrate our Spring Cloud Config Server from a Git backend to S3.
With Git, when we want to get the config for the service my-service and profile development, we hit
https://my-config-server/my-service/development
and this returns something like:
{
"name": "my-service",
"profiles": [
"development"
],
"label": null,
"version": "fgdfgdfg",
"state": null,
"propertySources": [
{
"name": "https://bitbucket.org/company/config-repo.git/my-service-development.yml",
"source": {
[properties]
}
},
{
"name": "https://bitbucket.org/company/config-repo.git/application-development.yml",
"source": {
[properties]
}
},
{
"name": "https://bitbucket.org/company/config-repo.git/my-service.yml",
"source": {
[properties]
}
},
{
"name": "https://bitbucket.org/company/config-repo.git/application.yml",
"source": {
[properties]
}
}
]
}
And we have the files my-service-development.yml, application-development.yml, my-service.yml and application.yml, so that properties in the more specific files override properties in the more general ones.
E.g. we can set a property in my-service.yml but override it for the development environment in my-service-development.yml. Likewise we can set common properties for an environment (such as database connections or messaging connections) in application-development.yml.
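To illustrate the layering (the property name here is made up):
# my-service.yml – applies to every environment
connection:
  timeout: 30

# my-service-development.yml – overrides it for the development profile
connection:
  timeout: 5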
However, when we switch to S3 and hit the same URL, we only get:
{
"name": "my-service",
"profiles": [
"development"
],
"label": null,
"version": "twtwert",
"state": null,
"propertySources": [
{
"name": "s3:my-service-development",
"source": {
[properties]
}
}
]
}
i.e. we only get the most specific property source and not the others.
Is there something we are missing, or is this a bug in the S3 implementation?
We are using Spring Boot 2.5.4, Spring Cloud 2020.0.3 and spring-cloud-aws 2.3.2.
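For reference, the config server's S3 backend is configured along these lines (bucket name and region are placeholders, not our real values):
spring:
  profiles:
    active: awss3
  cloud:
    config:
      server:
        awss3:
          region: us-east-1
          bucket: example-config-bucket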
Similar to How to get Micrometer to output all custom metrics, but I am trying to do it for Graphite.
I have a custom metric counter defined as:
@Bean
Counter successfulAuthenticationRequests(MeterRegistry meterRegistry) {
    return Counter.builder(metricPrefix + ".auth.authentication.success")
            .tag("group", "authentication")
            .tag("state", "ok")
            .register(meterRegistry);
}
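The counter is then incremented on the success path; the surrounding class below is a hypothetical call site, shown only for context:
@Component
class AuthenticationEventListener {

    private final Counter successfulAuthenticationRequests;

    AuthenticationEventListener(Counter successfulAuthenticationRequests) {
        this.successfulAuthenticationRequests = successfulAuthenticationRequests;
    }

    void onAuthenticationSuccess() {
        // Bumps the COUNT statistic visible through the actuator endpoint below
        successfulAuthenticationRequests.increment();
    }
}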
In actuator I can see the value
❯ curl http://localhost:28082/actuator/metrics/spring.cloud.gateway.auth.authentication.success | jq
{
"name": "spring.cloud.gateway.auth.authentication.success",
"measurements": [
{
"statistic": "COUNT",
"value": 2
}
],
"availableTags": [
{
"tag": "node",
"values": [
"docker-desktop"
]
},
{
"tag": "service",
"values": [
"ds_gateway"
]
},
{
"tag": "state",
"values": [
"ok"
]
},
{
"tag": "group",
"values": [
"authentication"
]
}
]
}
The environment is set up as
management.graphite.metrics.export.host: graphite
management.graphite.metrics.export.enabled: "true"
management.graphite.metrics.export.port: 2004
And I can see that there is data added to Graphite, so I presume the connection is valid.
However, I can't see my custom counter. I tried a different prefix as well, and none of it worked.
Is there anything I may be missing?
I also tried to disable tag support by adding
management.graphite.metrics.export.graphiteTagsEnabled: "false"
With this set I get more metrics in Spring, but still not my custom metrics.
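One thing I am unsure about (an assumption on my part, not verified): with Graphite tags disabled, Micrometer's hierarchical name mapper folds the tags into the dotted metric name, so the counter might be sitting in Graphite under a longer path such as spring.cloud.gateway.auth.authentication.success.group.authentication.state.ok. Selected tags can also be mapped to name prefixes instead:
# Assumed property: tagsAsPrefix moves the listed tags to the front of the hierarchical name
management.graphite.metrics.export.tagsAsPrefix: "service,node"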
I am capturing the logins of my org's users into Slack using the Google Workspace Admin Reports API, as described in the doc here: https://developers.google.com/admin-sdk/reports/v1/appendix/activity/saml#login_success
I want to identify the workspace that these users are logging into. How can I identify this?
Here's the sample response that I get from the Reports API:
{
"kind": "admin#reports#activities",
"etag": "\"SsISqFfgRYY11XaGpPyQF5FTf1EAwqUmKLMPaD85FHw/evu1UTmScwnBzMj7rPtBftM3N2k\"",
"items": [
{
"kind": "admin#reports#activity",
"id": {
"time": "2022-05-25T17:51:08.913Z",
"uniqueQualifier": "35251594669533645",
"applicationName": "token",
"customerId": "C02a9qd29"
},
"etag": "\"SsISqFfgRYY11XaGpPyQF5FTf1EAwqUmKLMPaD85FHw/U-RQigEfldlDShA5VdJAIizlnsQ\"",
"actor": {
"email": "vibhu#cloudeagle.ai",
"profileId": "116721330888590133060"
},
"ipAddress": "18.206.76.246",
"events": [
{
"name": "authorize",
"parameters": [
{
"name": "client_id",
"value": "606092904014-s1u3idjanlbhr4ns5b1hcjgfn63cr9nh.apps.googleusercontent.com"
},
{
"name": "app_name",
"value": "Slack"
},
{
"name": "client_type",
"value": "WEB"
},
{
"name": "scope_data",
"multiMessageValue": [
{
"parameter": [
{
"name": "scope_name",
"value": "openid"
},
{
"name": "product_bucket",
"multiValue": [
"IDENTITY"
]
}
]
},
{
"parameter": [
{
"name": "scope_name",
"value": "https://www.googleapis.com/auth/userinfo.email"
},
{
"name": "product_bucket",
"multiValue": [
"IDENTITY"
]
}
]
},
{
"parameter": [
{
"name": "scope_name",
"value": "https://www.googleapis.com/auth/userinfo.profile"
},
{
"name": "product_bucket",
"multiValue": [
"IDENTITY"
]
}
]
}
]
},
{
"name": "scope",
"multiValue": [
"openid",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile"
]
}
]
}
]
}
]
}
I am wondering if it is possible to identify the Slack workspace from the above response, or whether it would need other API endpoints and parameters.
Keep in mind that SAML is an authentication method that allows a Service Provider (Slack, in this scenario) to use Google credentials as the Identity Provider (IdP). That being said, once the authentication flow is completed, the IdP usually has no control over, or access to, the app's activity.
In other words, once the login is completed, Google is blind to what users do inside the app.
For that reason I am afraid what you are trying to achieve is not possible. In the Google Reports API link you shared, the data you can obtain is limited to failed/successful login details.
Testing the call to the Reports API, you can see there are no additional details useful for your purpose.
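For reference, the call can be made along these lines (the bearer token is a placeholder; the endpoint and the optional eventName filter come from the documented Reports API):
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://admin.googleapis.com/admin/reports/v1/activity/users/all/applications/token?eventName=authorize"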
I'm developing in AWS Cloud9, and have a basic "Hello, World" API set up using Lambda.
Now I would like to iterate so that the API can accept parameters. Cloud9 used to have a convenient UI for modifying the payload when running "local" (in the IDE, without deploying), but I can't find where this has moved, and the documentation still references the previous UI.
To test this, I've included a simple print(event) in my Lambda and started modifying various components. So far it only prints an empty dict ({}).
I suspect it's in launch.json, but so far nothing I've modified has been picked up. It is shown below:
{
"configurations": [
{
"type": "aws-sam",
"request": "direct-invoke",
"name": "API token-to-geojson:HelloWorldFunction (python3.9)",
"invokeTarget": {
"target": "api",
"templatePath": "token-to-geojson/template.yaml",
"logicalId": "HelloWorldFunction"
},
"api": {
"path": "/hello",
"httpMethod": "get",
"payload": {
"json": {}
}
},
"lambda": {
"runtime": "python3.9"
}
},
{
"type": "aws-sam",
"request": "direct-invoke",
"name": "token-to-geojson:HelloWorldFunction (python3.9)",
"invokeTarget": {
"target": "template",
"templatePath": "token-to-geojson/template.yaml",
"logicalId": "HelloWorldFunction"
},
"lambda": {
"payload": {
"ticky": "tacky"
},
"environmentVariables": {},
"runtime": "python3.9"
}
}
]
}
The only thing I saw is that we need to nest the actual JSON data under a "json" key. In the example below, the payload then arrives as the handler's event, so the id is available as event["id"] (note that event is the first argument of the handler).
"lambda": {
"payload": {
"json": {
"id": 1001
}
},
"environmentVariables": {}
}
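Applying the same pattern to the second configuration in my launch.json above, the payload would presumably need to be wrapped the same way (untested sketch):
"lambda": {
    "payload": {
        "json": {
            "ticky": "tacky"
        }
    },
    "environmentVariables": {},
    "runtime": "python3.9"
}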
We have an Azure Data Factory fully integrated with DevOps. Every change I make to the Data Factory is deployed to all environments (OTAP), except alerts & metrics. I cannot find anything on how to deploy these to the other environments. Is this possible at all?
Is this possible at all?
Quick answer is NO so far. I contacted Microsoft ADF team and got below response:
Azure Data Factory utilizes Azure Resource Manager templates to store the configuration of your various ADF entities. Entities on Alerts & Metrics do not get exported in the ARM template, so Alerts & Metrics won't be integrated using DevOps.
I did two verifications:
1. Checked the ARM-template-supported entities in ADF; Alerts & Metrics is not among them.
2. Tried to export the ARM template in the ADF UI; still no Alerts & Metrics.
I really understand that you would like to integrate all elements of Data Factory, including Alerts & Metrics, with DevOps. I suggest submitting feedback here to push improvements of ADF; any voice is welcome.
There is a way to work around this one.
An ADF alert is a "Microsoft.Insights/metricalerts" resource that you can deploy using an ARM deployment operation from Azure DevOps.
You can create an alert in ADF and then go to the Portal, search for Monitor > Alert > Alert Rule, and find the alert you created in ADF. In my case there is an alert called Test.
Here is the ARM template exported from the alert:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"metricalerts_Alert_name": {
"defaultValue": "Alert",
"type": "String"
},
"factories_test_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/Microsoft.DataFactory/factories/test",
"type": "String"
},
"actionGroups_actiongroup1_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/microsoft.insights/actionGroups/actiongroup1",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Insights/metricalerts",
"apiVersion": "2018-03-01",
"name": "[parameters('metricalerts_Alert_name')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2022-09-13T05:28:46.0663823Z"
},
"properties": {
"severity": 0,
"enabled": true,
"scopes": [
"[parameters('factories_test_externalid')]"
],
"evaluationFrequency": "PT1M",
"windowSize": "PT15M",
"criteria": {
"allOf": [
{
"threshold": 1,
"name": "PipelineFailedRuns",
"metricNamespace": "Microsoft.DataFactory/factories",
"metricName": "PipelineFailedRuns",
"dimensions": [
{
"name": "Name",
"operator": "Include",
"values": [
"pipeline2"
]
},
{
"name": "FailureType",
"operator": "Include",
"values": [
"UserError",
"SystemError",
"BadGateway"
]
}
],
"operator": "GreaterThanOrEqual",
"timeAggregation": "Total",
"criterionType": "StaticThresholdCriterion"
}
],
"odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria"
},
"actions": [
{
"actionGroupId": "[parameters('actionGroups_actiongroup1_externalid')]",
"webHookProperties": {}
}
]
}
}
]
}
For the action group, you can refer to this template:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"groupName": {
"defaultValue": "actiongroup1",
"type": "string"
},
"email_receiver_address": {
"defaultValue": "someEmail#gmail.com",
"type": "string"
}
},
"variables": {},
"resources": [
{
"type": "microsoft.insights/actionGroups",
"apiVersion": "2019-03-01",
"name": "[parameters('groupName')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2020-10-21T07:24:08.2808723Z",
},
"properties": {
"groupShortName": "test",
"enabled": true,
"emailReceivers": [
{
"name": "test email received",
"emailAddress": "[parameters('email_receiver_address')]",
"useCommonAlertSchema": false
}
],
"smsReceivers": [],
"webhookReceivers": [],
"itsmReceivers": [],
"azureAppPushReceivers": [],
"automationRunbookReceivers": [],
"voiceReceivers": [],
"logicAppReceivers": [],
"azureFunctionReceivers": []
}
}
]
}
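To push these templates from Azure DevOps, a pipeline step along the following lines should work (a sketch only: the service connection, resource group, and file path are placeholders):
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-resource-group'
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: 'arm/metricalert-template.json'
    overrideParameters: '-metricalerts_Alert_name Alert'
    deploymentMode: 'Incremental'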
I'm using qpid-broker for integration-testing my spring-boot-starter-amqp application, which uses basicGet (autoAck=false), basicAck and basicReject for handling messages.
I have tested my code with an external RabbitMQ instance, where everything (including basicReject with requeue=false) works fine, but with the embedded Apache Qpid broker the test fails because basicReject is not working properly.
Getting the message and rejecting it:
rabbitTemplate.execute {
    // Fetch a single message without auto-acknowledging it
    val response = it.basicGet(config.queueName, false)
    // Reject without requeueing – the message should be dropped
    it.basicReject(response.envelope.deliveryTag, false)
}
Check if the message is still in the queue:
rabbitTemplate.execute {
    val response = it.basicGet(config.queueName, false)
    Assertions.assertThat(response).isNull()
}
My Qpid config:
{
"name": "EmbeddedBroker",
"modelVersion": "7.0",
"authenticationproviders": [
{
"name": "password",
"type": "Plain",
"secureOnlyMechanisms": [],
"users": [
{
"name": "guest",
"password": "guest",
"type": "managed"
}
]
}
],
"ports": [
{
"name": "AMQP",
"port": "${qpid.amqp_port}",
"protocols": [ "AMQP_0_9_1" ],
"authenticationProvider": "password",
"virtualhostaliases": [
{
"name": "defaultAlias",
"type": "defaultAlias"
}
]
}
],
"virtualhostnodes": [
{
"name": "default",
"defaultVirtualHostNode": "true",
"type": "Memory",
"virtualHostInitialConfiguration": "{\"type\": \"Memory\" }"
}
]
}
Why does basicReject work fine with my RabbitMQ instance but doesn't work properly with the Apache Qpid embedded broker?
Edit: My solution was to move away from Qpid and use a RabbitMQ Testcontainer.
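For anyone going the same route, a minimal sketch of that setup (Java, JUnit 5; the image tag and class name are illustrative, the property names are the usual Spring AMQP ones):
@SpringBootTest
@Testcontainers
class MessagingIntegrationTest {

    // A real RabbitMQ instead of an embedded broker emulation
    @Container
    static final RabbitMQContainer RABBIT = new RabbitMQContainer("rabbitmq:3.9-management");

    // Point Spring AMQP at the container's dynamically mapped port
    @DynamicPropertySource
    static void rabbitProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.rabbitmq.host", RABBIT::getHost);
        registry.add("spring.rabbitmq.port", RABBIT::getAmqpPort);
        registry.add("spring.rabbitmq.username", RABBIT::getAdminUsername);
        registry.add("spring.rabbitmq.password", RABBIT::getAdminPassword);
    }
}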