Does anyone know how to manually create shared static durable subscriptions in TIBCO EMS?
There is no checkbox for "shared" in the TIBCO web interface.
I also tried it with GEMS, but there was no such option there either.
The only way I could create shared durables was by starting up my application and letting them be created automatically. But then the durable is not static.
The only solution I found was to use the TIBCO EMS CLI or to edit durables.conf:
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-01D08BD3-6A07-4A11-A809-21436D479391.html
https://docs.tibco.com/pub/ems/8.5.1/doc/html/GUID-65F9AC62-C78E-411D-A7B0-50485812CE71.html
However, it is recommended to use the CLI:
To configure durable subscriptions in this file, we recommend using the create durable command in the tibemsadmin tool; see create durable.
If the create durable command detects an existing dynamic durable subscription with the same topic and name, it promotes it to a static subscription, and writes a specification to the file durables.conf.
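As a rough sketch of both approaches (my.topic and myDurable are placeholders, and the exact properties that mark a durable as shared should be taken from the durables.conf reference linked above rather than from this example):

    # In the tibemsadmin tool, connected to the EMS server:
    create durable my.topic myDurable

    # Equivalent static entry in durables.conf, one durable per line:
    # <topic-name> <durable-name> [property[,property...]]
    my.topic myDurable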
I have an S3 connector deployed on MSK Connect, and a repository on GitHub with the JSON connector configuration file. I'd like to update the connector's configuration on demand via MSK's REST API. I've checked the API documentation, but it seems like the UpdateConnector API only allows modifying the capacity configuration. The CreateConnector API does allow providing the connector configuration, but it returns an error if the connector already exists.
I could delete and then recreate the connector, but this doesn't seem like a good approach.
Is there another way to update a running connector's configuration?
If the Connect REST API is not directly accessible in other ways, then it seems that delete/recreate is the only option.
For sink connectors, that's a relatively safe option because consumer offsets are tracked by the connector name itself and there's no state stored outside of the internal Connect topics.
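If you do go the delete/recreate route, it can be scripted with the AWS CLI against the same JSON you keep in the repository. A sketch with placeholder names and ARNs (wait for the old connector to finish deleting before recreating it):

    # Find the connector ARN
    aws kafkaconnect list-connectors --connector-name-prefix my-s3-sink

    # Delete the existing connector (poll describe-connector until it is gone)
    aws kafkaconnect delete-connector \
        --connector-arn arn:aws:kafkaconnect:eu-west-1:123456789012:connector/my-s3-sink/abc123

    # Recreate it from a CreateConnector-shaped JSON file kept in the repo
    aws kafkaconnect create-connector --cli-input-json file://connector.json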
Based on an answer I got from AWS, it's indeed not supported at the moment. It's on their roadmap, but they don't have an ETA yet.
They suggest following the information here for upcoming improvements.
All environments are in the same tenant, same Azure Active Directory.
I need to push data from one environment's Common Data Service (Line of Business) to another environment's Common Data Service (Central Enterprise CDS), which reporting runs from.
I've looked into using OData dataflows; however, this seems like more of a manually triggered option.
The OData dataflows connector is meant for and designed to support migration and synchronization of large datasets in Common Data Service in scenarios such as:
A one-time cross-environment or cross-tenant migration is needed (for example, geo-migration).
A developer needs to update an app that is being used in production.
Test data is needed in their development environment to easily build out changes.
Reference: Migrate data between Common Data Service environments using the dataflows OData connector
For continuous data synchronization, use the Common Data Service connector in Power Automate with attribute filters, so that source CDS record updates are pushed to the target CDS entities.
I have created two Google Cloud projects, one for Cloud SQL and one for a Kubernetes cluster. For accessing SQL from the other project I have enabled import/export of custom routes. Do I need confirmation from Google Cloud for this, or is this enough? I have read somewhere that after these steps you should ask Google Cloud support to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is automatically created when the Cloud SQL instance is created.
As far as I know, this step is not included in the public documentation and is not necessary: if you are connecting from within the same project or from any service project (if configured with Shared VPC), you don't need to export those routes. Exporting custom routes is generally only required if you are connecting from on-premises.
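If you do end up needing custom route exchange (for example, for on-premises access), the side of the peering that lives in your own project can be inspected and updated yourself; only the service producer side would need support. A sketch with placeholder names (my-vpc, and the peering is usually, but not always, called servicenetworking-googleapis-com):

    # List the VPC peerings on your network
    gcloud compute networks peerings list --network=my-vpc

    # Enable custom route exchange on your side of the private services access peering
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=my-vpc \
        --export-custom-routes \
        --import-custom-routes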
If you are having any issues, please let us know.
I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do exactly the same thing as Custom Resources in AWS CloudFormation:
https://github.com/mobfox/terraform-provider-multiverse
You can even use AWS Lambda and any language you like to manage your resources. It also keeps state for your resources, so you can read, update, and delete them too. It creates a real resource, so it is not like the External Data source.
Yes, the process is outlined here
https://github.com/hashicorp/terraform#developing-terraform
Your customised Terraform resources can live in your own version of the AWS provider plugin.
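For a sense of what that looks like in the provider code, here is a minimal sketch of registering an extra resource (resourceAwsFoo is a hypothetical stub, the import path depends on the SDK version you build against, and in a real fork you would add the entry to the provider's existing ResourcesMap):

    package aws

    import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

    // Registering the resource under the "aws_foo" key is what makes it
    // addressable as aws_foo in .tf files and lets it reuse the credentials
    // configured on the aws provider block.
    func Provider() *schema.Provider {
        return &schema.Provider{
            ResourcesMap: map[string]*schema.Resource{
                "aws_foo": resourceAwsFoo(),
            },
        }
    }

    // resourceAwsFoo is a stub; a real resource also defines its
    // Create/Read/Update/Delete functions.
    func resourceAwsFoo() *schema.Resource {
        return &schema.Resource{
            Schema: map[string]*schema.Schema{
                "name": {Type: schema.TypeString, Required: true},
            },
        }
    }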
I need to configure service accounts for connecting to some of the services, and for that we are required to configure the details in a template file.
So basically, I want to configure the service account at run time.
We are using Oracle Service Bus 11g.
Since I've never worked on service accounts before, any suggestions will be helpful.
I found that we can do this at run time with the fn-bea:lookupBasicCredentials XQuery function, but this is not what we want. We want it generated dynamically through the template files.