Is the no public IP address feature for Databricks in GA? - azure-databricks

Is the feature enabled via the ARM template option enableNoPublicIP currently in general availability? The webinar https://databricks.com/p/webinar/azure-databricks-security-best-practices suggests this was in a private preview a while ago. I need to use this feature to comply with a client's requirements, but they also require that no features in preview are used.

Currently, the no-public-IP feature is in public preview.
With secure cluster connectivity enabled, customer virtual networks have no open ports and Databricks Runtime cluster nodes have no public IP addresses.
Documentation: https://learn.microsoft.com/en-us/azure/databricks/security/secure-cluster-connectivity
Have you tried this template with enableNoPublicIp?
To create a Microsoft.Databricks/workspaces resource, add the following JSON to the resources section of your template.
{
  "type": "Microsoft.Databricks/workspaces",
  "apiVersion": "2018-04-01",
  "name": "[parameters('workspaceName')]",
  "location": "[parameters('location')]",
  "dependsOn": [],
  "tags": {
    "data platform": "warehouse"
  },
  "sku": {
    "name": "standard"
  },
  "properties": {
    "managedResourceGroupId": "[variables('managedResourceGroupId')]",
    "parameters": {
      "enableNoPublicIp": {
        "value": true
      }
    }
  }
}
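The snippet assumes a workspaceName parameter, a location parameter, and a managedResourceGroupId variable defined elsewhere in the template. A minimal sketch of those sections (the variable expression and names are illustrative, not the only valid form) could look like:
"parameters": {
  "workspaceName": {
    "type": "string"
  },
  "location": {
    "type": "string",
    "defaultValue": "[resourceGroup().location]"
  }
},
"variables": {
  "managedResourceGroupId": "[concat(subscription().id, '/resourceGroups/databricks-rg-', parameters('workspaceName'))]"
}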
If you receive the error message below during deployment, you need to create a support ticket.
{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "CustomerUnauthorized",
        "message": "CUSTOMER_UNAUTHORIZED: Subscription: XXXX-XXXX-XXXXX-XXXXX Is Not Authorized to Create No Public IP Workspaces."
      }
    ]
  }
}
Once you create the support ticket, the backend team will help you enable enableNoPublicIp for your subscription.

Related

How to share a private file in Slack with everyone in the workspace but not publicly?

After my app is installed, users can upload files in a direct message with my bot.
Now I want to send a message to everyone and include this uploaded file, but since it was uploaded in a private chat with my bot, this file is private.
I don't want to make it publicly (internet) available (also, the user would need the extra workspace permission to make files publicly available); I just want to share it with everyone in the workspace. So app.client.files.sharedPublicURL is not an option.
Maybe download the file using url_private and upload it again to Slack using files.upload (sketched at the end of this question)? That seems crazy!
Or is there some other method that lets me change the shares part of the file to the "general" channel so everyone can see it? Do we have such a method?
{
  files: [
    {
      id: '...',
      ...
      user: '....',
      editable: false,
      mode: 'hosted',
      is_external: false,
      external_type: '',
      is_public: false,
      public_url_shared: false,
      display_as_bot: false,
      username: '',
      url_private: 'https://files.slack.com/files-pri/.../file.png',
      url_private_download: 'https://files.slack.com/files-pri/..../download/file.png',
      ....
      permalink: 'https://myworkspace.slack.com/files/....png',
      permalink_public: 'https://slack-files.com/...',
      has_rich_preview: false,
      file_access: 'visible',
      shares: {
        public: {
          'C0T8SE4AU': [
            {
              ...
              channel_name: 'general', // <<<<<< means everyone? is this possible?
              ...
            }
          ]
        }
      }
    }
  ]
}
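For reference, a minimal sketch of the download-and-reupload idea mentioned above, assuming Node 18+ (global fetch) and @slack/web-api; the helper name and channel ID are illustrative, not a confirmed best practice:
import { WebClient } from "@slack/web-api";

const token = process.env.SLACK_BOT_TOKEN!;
const client = new WebClient(token);

// Hypothetical helper: copy a file that was uploaded privately to the bot
// into a channel, so everyone in that channel (e.g. #general) can see it
// without making the file public on the internet.
async function reshareFile(urlPrivate: string, filename: string, channelId: string) {
  // url_private is only accessible with the bot token as a Bearer header
  const res = await fetch(urlPrivate, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const file = Buffer.from(await res.arrayBuffer());

  // Re-upload the copy directly into the target channel; the new file is
  // shared there, while the original stays private to the DM.
  await client.files.upload({ channels: channelId, file, filename });
}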

google-api-nodejs-client - Service Account credentials authentication issues

I am trying to use the google-api-nodejs library to manage some resources in the Google Campaign Manager API.
I have confirmed that we currently have a project configured, and that this project has the Google Campaign Manager API enabled (see screenshot at the bottom).
I have tried several ways of authenticating myself (particularly API keys, OAuth2, and Service Account credentials). This question will focus on using a Service Account for authentication purposes.
Now, I have generated a new service account keyfile (see screenshot at the bottom), and I configured my code as follows, following the service-account-credentials section of the library's repo. I've also extended the auth scope to include the necessary scope according to this endpoint's API docs.
import { assert } from "chai";
import { google } from "googleapis";
it("can query userProfiles using service account keyfile", async () => {
try {
const auth = new google.auth.GoogleAuth({
keyFile:
"/full-path-to/credentials-service-account.json",
scopes: [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/dfatrafficking",
"https://www.googleapis.com/auth/ddmconversions",
"https://www.googleapis.com/auth/dfareporting",
],
});
const authClient = await auth.getClient();
// set auth as a global default
google.options({
auth: authClient,
});
const df = google.dfareporting("v3.5");
const res = await df.userProfiles.list({});
console.log("res: ", res);
assert(true);
} catch (e) {
console.error("error: ", e);
assert(false);
}
});
This results in the following error:
{
  "code": 403,
  "errors": [
    {
      "message": "Version v3.5 is no longer supported. Please upgrade to the latest version of the API.",
      "domain": "global",
      "reason": "forbidden"
    }
  ]
}
This is an interesting error, because v3.5 is the latest version of that API as of 14 April 2022 (this page shows the deprecation schedule: https://developers.google.com/doubleclick-advertisers/deprecation; notice that v3.3 and v3.4 are deprecated, while v3.5 is not).
In any case, using a different version of the dfareporting API still results in errors:
// error thrown: "Version v3.5 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.5");
// error thrown: "Version v3.4 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.4");
// error thrown: 404 "The requested URL <code>/dfareporting/v3.3/userprofiles</code> was not found on this server"
const df = google.dfareporting("v3.3");
// Note 1: There are no other versions available
// Note 2: It is not possible to leave the version blank
const df = google.dfareporting();
// results in typescript error: "An argument for 'version' was not provided."
I also tried to query the floodlightActivities API, which failed with an authentication error.
// const res = await df.userProfiles.list({});
const res = await df.floodlightActivities.list({
  profileId: "7474579",
});
This, in its turn, results in the following error:
{
  "code": 401,
  "errors": [
    {
      "message": "1075 : Failed to authenticate. Google account can not access the user profile/account requested.",
      "domain": "global",
      "reason": "authError",
      "location": "Authorization",
      "locationType": "header"
    }
  ]
}
Now, my question is:
am I doing something wrong while trying to authenticate using the service account credentials?
Or, is it possible that these endpoints do not support service-account-credentials?
Or, is something else going wrong here?

Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a solution for a reverse-proxy service using Traefik v1 (1.7) and ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic: requests to the /user/1234/* path should go to the ECS task running with the appropriate docker labels:
docker_labels = {
  traefik.frontend.rule = "Path:/user/1234"
  traefik.backend = "trax1"
  traefik.enable = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the docker labels are a property of the ECS TaskDefinition, not of the ECS task itself. Is it possible to create only one TaskDefinition and pass the Traefik rules in ECS task tags, within the task's key/value properties?
This would require some modification of the Traefik source code. Are there any other available options or ways this could be implemented that I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies; real-world examples are more than welcome.
Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover the task's internal IP and update the Traefik configuration on the fly, pairing traefik.frontend.rule = "Path:/user/1234" with the task's internal IP:port in the backends section.
It should first GET the current Traefik configuration from the /api/providers/rest endpoint, remove or add the corresponding part (if a task was stopped or started), and push the updated configuration back with a PUT to the same endpoint, as in the example configuration and sketch below.
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
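A minimal sketch of that GET/modify/PUT cycle, assuming Node 18+ and that the Traefik v1 API is reachable at http://traefik:8080 (host, IDs, and naming are illustrative):
const TRAEFIK_REST = "http://traefik:8080/api/providers/rest";

// Register a freshly started one-off task: add a backend for its internal
// IP:port and a frontend rule Path:/user/<id>, then push the whole
// configuration back to Traefik.
async function registerTask(userId: string, taskAddress: string) {
  // 1. Fetch the current dynamic configuration
  const config = await (await fetch(TRAEFIK_REST)).json();

  config.backends = config.backends ?? {};
  config.backends[`backend-${userId}`] = {
    servers: { [`server-${userId}`]: { url: `http://${taskAddress}` } },
  };

  config.frontends = config.frontends ?? {};
  config.frontends[`frontend-${userId}`] = {
    entryPoints: ["http"],
    backend: `backend-${userId}`,
    routes: { [`route-${userId}`]: { rule: `Path:/user/${userId}` } },
  };

  // 2. PUT the updated configuration back to the same endpoint
  await fetch(TRAEFIK_REST, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });
}

// e.g. registerTask("1234", "172.16.0.5:32793");
Deregistering a stopped task is the same cycle with the corresponding backend and frontend keys deleted before the PUT.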

typeProviders with private GKE clusters

I am new to Google Cloud Deployment Manager (GCM) and am writing some code in order to practice. I have read some interesting articles that detail how I can use deploymentmanager.v2beta.typeProvider in order to extend GCM and use it to configure Kubernetes objects themselves as additional deployment resources. This is very appealing behavior for extension and seems to open up great opportunities to extend declarative automation to any API, which is cool.
I am attempting to create a private node/public endpoint GKE cluster that is managed by custom typeProvider resources which correspond to GKE API calls. It seems that a public node/public endpoint GKE cluster is the only way to support GCM custom typeProviders, and this seems wrong considering a private node/public endpoint GKE configuration is possible.
It seems it would be weird for deploymentmanager.v2beta.typeProvider not to support a private node/public endpoint GKE configuration.
As an aside, I feel a private node/private endpoint cluster with a Cloud Endpoint exposing it to meet the GCM typeProvider public API endpoint requirement should also be a valid architecture, but I have yet to test it.
Using the following code
def GenerateConfig(context):
    # Some constant type vars that are not really
    resources = []
    outputs = []
    gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
    extension_prefix = 'k8s'
    api_version = 'v1'
    kube_initial_auth = {
        'username': 'luis',
        'password': 'letmeinporfavors',
        "clientCertificateConfig": {
            'issueClientCertificate': True
        }
    }
    # EXTEND API TO CONTROL KUBERNETES BEGIN
    kubernetes_exposed_apis = [
        {
            'name': '{}-{}-api-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'api/v1'
        },
        {
            'name': '{}-{}-apps-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/apps/v1'
        },
        {
            'name': '{}-{}-rbac-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/rbac.authorization.k8s.io/v1'
        },
        {
            'name': '{}-{}-v1beta1-extensions-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/extensions/v1beta1'
        }
    ]
    for exposed_api in kubernetes_exposed_apis:
        descriptor_url = 'https://{}/swaggerapi/{}'.format(
            '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
            exposed_api['endpoint']
        )
        resources.append(
            {
                'name': exposed_api['name'],
                'type': gcp_type_provider,
                'properties': {
                    'options': {
                        'validationOptions': {
                            'schemaValidation': 'IGNORE_WITH_WARNINGS'
                        },
                        'inputMappings': [
                            {
                                'fieldName': 'name',
                                'location': 'PATH',
                                'methodMatch': '^(GET|DELETE|PUT)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'metadata.name',
                                'location': 'BODY',
                                'methodMatch': '^(PUT|POST)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'Authorization',
                                'location': 'HEADER',
                                'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
                            }
                        ],
                    },
                    'descriptorUrl': descriptor_url
                },
            }
        )
    # EXTEND API TO CONTROL KUBERNETES END
    # NETWORK DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-network".format(context.env['deployment']),
            'type': "compute.{}.network".format(api_version),
            'properties': {
                'description': "{} network".format(context.env['deployment']),
                'autoCreateSubnetworks': False,
                'routingConfig': {
                    'routingMode': 'REGIONAL'
                }
            },
        }
    )
    resources.append(
        {
            'name': "{}-subnetwork".format(context.env['deployment']),
            'type': "compute.{}.subnetwork".format(api_version),
            'properties': {
                'description': "{} subnetwork".format(
                    context.env['deployment']
                ),
                'network': "$(ref.{}-network.selfLink)".format(
                    context.env['deployment']
                ),
                'ipCidrRange': '10.64.1.0/24',
                'region': 'us-east1',
                'privateIpGoogleAccess': True,
                'enableFlowLogs': False,
            }
        }
    )
    # NETWORK DEFINITION END
    # EKS DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-k8s-cluster".format(context.env['deployment']),
            'type': "container.{}.cluster".format(api_version),
            'properties': {
                'zone': 'us-east1-b',
                'cluster': {
                    'description': "{} kubernetes cluster".format(
                        context.env['deployment']
                    ),
                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },
                    'ipAllocationPolicy': {
                        'useIpAliases': True
                    },
                    'nodePools': [
                        {
                            'name': "{}-cluster-pool".format(
                                context.env['deployment']
                            ),
                            'initialNodeCount': 1,
                            'config': {
                                'machineType': 'n1-standard-1',
                                'oauthScopes': [
                                    'https://www.googleapis.com/auth/compute',
                                    'https://www.googleapis.com/auth/devstorage.read_only',
                                    'https://www.googleapis.com/auth/logging.write',
                                    'https://www.googleapis.com/auth/monitoring'
                                ],
                            },
                            'management': {
                                'autoUpgrade': False,
                                'autoRepair': True
                            }
                        }],
                    'masterAuth': kube_initial_auth,
                    'loggingService': 'logging.googleapis.com',
                    'monitoringService': 'monitoring.googleapis.com',
                    'network': "$(ref.{}-network.selfLink)".format(
                        context.env['deployment']
                    ),
                    'clusterIpv4Cidr': '10.0.0.0/14',
                    'subnetwork': "$(ref.{}-subnetwork.selfLink)".format(
                        context.env['deployment']
                    ),
                    'enableKubernetesAlpha': False,
                    'resourceLabels': {
                        'purpose': 'expiramentation'
                    },
                    'networkPolicy': {
                        'provider': 'CALICO',
                        'enabled': True
                    },
                    'initialClusterVersion': 'latest',
                    'enableTpu': False,
                }
            }
        }
    )
    outputs.append(
        {
            'name': '{}-cluster-endpoint'.format(
                context.env['deployment']
            ),
            'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
        }
    )
    # EKS DEFINITION END
    # bring it all together
    template = {
        'resources': resources,
        'outputs': outputs
    }
    # give it to google
    return template


if __name__ == '__main__':
    GenerateConfig({})
I will also note the subsequent hello world template, which uses the typeProviders created above.
def current_config():
    '''
    get the current configuration
    '''
    return {
        'name': 'atrium'
    }


def GenerateConfig(context):
    resources = []
    conf = current_config()
    resources.append(
        {
            'name': '{}-svc'.format(conf['name']),
            'type': "{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods".format(
                context.env['project'],
                conf['name'],
                '{namespace}'
            ),
            'properties': {
                'namespace': 'default',
                'apiVersion': 'v1',
                'kind': 'Pod',
                'metadata': {
                    'name': 'hello-world',
                },
                'spec': {
                    'restartPolicy': 'Never',
                    'containers': [
                        {
                            'name': 'hello',
                            'image': 'ubuntu:14.04',
                            'command': ['/bin/echo', 'hello', 'world'],
                        }
                    ]
                }
            }
        }
    )
    template = {
        'resources': resources,
    }
    return template


if __name__ == '__main__':
    GenerateConfig({})
If I leave enablePrivateNodes as False
'privateClusterConfig': {
    'enablePrivateNodes': False,
    'masterIpv4CidrBlock': '10.0.0.0/28'
}
I get this as a response
~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-svc atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods COMPLETED []
~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s
This is a good response, and my custom typeProvider resource creates correctly using the APIs of my freshly created cluster.
If I make this cluster have private nodes however... with
'privateClusterConfig': {
    'enablePrivateNodes': True,
    'masterIpv4CidrBlock': '10.0.0.0/28'
},
I fail with
~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME TYPE STATE ERRORS INTENT
atrium-k8s-api-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-apps-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-cluster container.v1.cluster COMPLETED []
atrium-k8s-rbac-v1-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-k8s-v1beta1-extensions-type deploymentmanager.v2beta.typeProvider COMPLETED []
atrium-network compute.v1.network COMPLETED []
atrium-subnetwork compute.v1.subnetwork COMPLETED []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
reason: ERROR_EXCLUDED_IP'
The 10.0.0.2 seems to be the private endpoint of my cluster. I am having a difficult time tracking down where I can override the https://10.0.0.2:443/api/v1/namespaces/default/pods URL host so that it would attempt to contact the publicEndpoint rather than the privateEndpoint.
If this call were to go out to the public endpoint, it would create successfully, I believe. As an interesting aside, the typeProvider declarations in the descriptorUrl do attempt to hit the publicEndpoint of the cluster and succeed in doing so. However, despite this, the creation of the actual API resources, such as the hello world example, attempts to interface with the private endpoint.
I feel this behavior should be overridable somewhere, but I am failing to find the clue.
I have tried both a working public node configuration and a non-working private node configuration.
So I ran into the same problem writing this: https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging. While trying to debug it I ran into 2 issues:
1. The public endpoint is used initially to fetch the swagger definition of the various k8s APIs and resources; however, that response includes the private endpoint for the API server, which causes the k8s type to try to use that IP.
2. While further trying to debug that error, plus a new one I was hitting, I discovered that GKE 1.14.x or later does not support insecure calls to the k8s API, which caused the k8s type to fail even with fully public clusters.
Since the functionality seems to fail with newer versions of GKE, I just stopped trying to debug it, though I'd recommend reporting an issue on the GitHub repo for it. This functionality would be great to keep going.

Cannot fire Bigcommerce webhooks

So far I've managed to create two webhooks using their official gem (https://github.com/bigcommerce/bigcommerce-api-ruby) with the following events:
store/order/statusUpdated
store/app/uninstalled
The destination URL is a localhost tunnel managed by ngrok (the https version).
status_update_hook = Bigcommerce::Webhook.create(connection: connection, headers: { is_active: true }, scope: 'store/order/statusUpdated', destination: 'https://myapp.ngrok.io/bigcommerce/notifications')
uninstall_hook = Bigcommerce::Webhook.create(connection: connection, headers: { is_active: true }, scope: 'store/app/uninstalled', destination: 'https://myapp.ngrok.io/bigcommerce/notifications')
The webhooks seem to be active and correctly created, as I can retrieve and list them:
Bigcommerce::Webhook.all(connection:connection)
I manually created an order in my store dashboard, but no matter which state I change it to or how many states I cycle through, no notification is fired. Am I missing something?
The exception that I'm seeing in the logs is:
ExceptionMessage: true is not a valid header value
The "is-active" flag should be sent as part of the request body--your headers, if you choose to include them, would be an arbitrary key value pair that you can check at runtime to verify the hook's origin.
Here's an example request body:
{
  "scope": "store/order/*",
  "headers": {
    "X-Custom-Auth-Header": "{secret_auth_password}"
  },
  "destination": "https://app.example.com/orders",
  "is_active": true
}
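If the gem keeps putting the flag in the wrong place, sending that body yourself against the REST API is another option. A minimal sketch, under the assumption that the V2 hooks endpoint and the usual OAuth headers apply to your store (store hash and credential names are placeholders):
// Assumes the V2 webhooks endpoint and the OAuth credentials from the app install.
const STORE_HASH = "your-store-hash";

async function createWebhook(scope: string, destination: string) {
  const res = await fetch(`https://api.bigcommerce.com/stores/${STORE_HASH}/v2/hooks`, {
    method: "POST",
    headers: {
      "X-Auth-Client": process.env.BC_CLIENT_ID!,
      "X-Auth-Token": process.env.BC_ACCESS_TOKEN!,
      "Content-Type": "application/json",
      Accept: "application/json",
    },
    // is_active lives in the body, not in the headers object
    body: JSON.stringify({ scope, destination, is_active: true }),
  });
  return res.json();
}

// e.g. createWebhook("store/order/statusUpdated", "https://myapp.ngrok.io/bigcommerce/notifications");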
Hope this helps!
