Creating cross region autonomous database failing with 'message': "The following tag namespaces / keys are not authorized or not found: 'oracle-tags'" - oci-python-sdk

I need help creating a cross-region standby database via Python. I have tried creating it with
oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails
but I was unable to find an example for this, so I put together what I could find in the SDK documentation:
response = oci_client.get_autonomous_database(autonomous_database_id=primary_db_id)
primary_db_details = response.data

def create_cross_region_standby_db(db_client, primary_db_details: oci.database.models.AutonomousDatabase):
    adw_request = oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails()
    adw_request.compartment_id = primary_db_details.compartment_id
    adw_request.db_name = primary_db_details.db_name
    adw_request.data_storage_size_in_tbs = primary_db_details.data_storage_size_in_tbs
    adw_request.data_storage_size_in_gbs = primary_db_details.data_storage_size_in_gbs
    adw_request.cpu_core_count = primary_db_details.cpu_core_count
    adw_request.db_version = primary_db_details.db_version
    adw_request.db_workload = primary_db_details.db_workload
    adw_request.license_model = primary_db_details.license_model
    adw_request.is_mtls_connection_required = primary_db_details.is_mtls_connection_required
    adw_request.is_auto_scaling_enabled = primary_db_details.is_auto_scaling_enabled
    adw_request.source_id = primary_db_details.id
    adw_request.subnet_id = <standby subnet id>
    adw_response = db_client.create_autonomous_database(create_autonomous_database_details=adw_request)
    print(adw_response.data)
    adw_id = adw_response.data.id
    oci.wait_until(db_client, db_client.get_autonomous_database(adw_id), 'lifecycle_state', 'AVAILABLE')
    print("Created ADW {}".format(adw_id))
    return adw_id
create_cross_region_standby_db is called using standby-region credentials. Creating the primary database in the same region works fine.
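For reference, a minimal sketch of how the two clients could be set up, assuming a standard ~/.oci/config file; the profile handling, region override and OCIDs shown here are placeholders and are not part of the original post:

import oci

# Client in the primary region: only used to read the source Autonomous Database.
primary_config = oci.config.from_file()
oci_client = oci.database.DatabaseClient(primary_config)

# Client in the standby region: the create call for the cross-region standby is
# sent with standby-region credentials, so the region is overridden here.
standby_config = dict(primary_config)
standby_config["region"] = "<standby region>"  # placeholder region identifier
standby_db_client = oci.database.DatabaseClient(standby_config)

primary_db_id = "<primary autonomous database OCID>"  # placeholder
primary_db_details = oci_client.get_autonomous_database(
    autonomous_database_id=primary_db_id
).data

standby_id = create_cross_region_standby_db(standby_db_client, primary_db_details)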

Related

Google Analytics Reporting v4 with streams instead of views

I've just created a new Google Analytics property and it now defaults to data streams instead of views.
I had some code that was fetching reports through the API, which I now need to update to work with those data streams, since there are no views anymore.
I've looked in the docs but I don't see anything related to data streams; does anybody know how this is done now?
Here's my current code that works with a view ID (I'm using the Ruby google-api-client gem):
VIEW_ID = "XXXXXX"
SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
client = AnalyticsReportingService.new
# server-to-server auth mechanism using a service account
@creds = ServiceAccountCredentials.make_creds({:json_key_io => File.open('account.json'), :scope => SCOPE})
@creds.sub = "myserviceaccount@example.iam.gserviceaccount.com"
client.authorization = @creds
#metrics
metric_views = Metric.new
metric_views.expression = "ga:pageviews"
metric_unique_views = Metric.new
metric_unique_views.expression = "ga:uniquePageviews"
#dimensions
dimension = Dimension.new
dimension.name = "ga:hostname"
#range
range = DateRange.new
range.start_date = start_date
range.end_date = end_date
#sort
orderby = OrderBy.new
orderby.field_name = "ga:pageviews"
orderby.sort_order = 'DESCENDING'
rr = ReportRequest.new
rr.view_id = VIEW_ID
rr.metrics = [metric_views, metric_unique_views]
rr.dimensions = [dimension]
rr.date_ranges = [range]
rr.order_bys = [orderby]
grr = GetReportsRequest.new
grr.report_requests = [rr]
response = client.batch_get_reports(grr)
I would expect that there would be a stream_id property on the ReportRequest object that I could use instead of the view_id but that's not the case.
Your existing code uses the Google Analytics Reporting API to extract data from a Universal Analytics account.
Your new Google Analytics property is a GA4 property. To extract data from it you need to use the Google Analytics Data API. These are two completely different systems, so you will not be able to just port your code.
You can find info on the new API and the new library here: Ruby Client for the Google Analytics Data API
$ gem install google-analytics-data
Thanks to Linda's answer I was able to get it working. Here's the same code ported to the Data API; it might end up being useful to someone:
client = Google::Analytics::Data.analytics_data do |config|
config.credentials = "account.json"
end
metric_views = Google::Analytics::Data::V1beta::Metric.new(name: "screenPageViews")
metric_unique_views = Google::Analytics::Data::V1beta::Metric.new(name: "totalUsers")
dimension = Google::Analytics::Data::V1beta::Dimension.new(name: "hostName")
range = Google::Analytics::Data::V1beta::DateRange.new(start_date: start_date, end_date: end_date)
# screenPageViews is a metric, so the ordering uses MetricOrderBy rather than DimensionOrderBy
order_metric = Google::Analytics::Data::V1beta::OrderBy::MetricOrderBy.new(metric_name: "screenPageViews")
orderby = Google::Analytics::Data::V1beta::OrderBy.new(desc: true, metric: order_metric)
request = Google::Analytics::Data::V1beta::RunReportRequest.new(
  property: "properties/#{PROPERTY_ID}",
  metrics: [metric_views, metric_unique_views],
  dimensions: [dimension],
  date_ranges: [range],
  order_bys: [orderby]
)
response = client.run_report request

Trouble to access azure containers from Azure/databricks

I am having trouble accessing an Azure container from Azure Databricks.
I followed the instructions from this tutorial, so I started by creating my container and generating a SAS.
Then, in a Databricks notebook, I ran the following command:
dbutils.fs.mount( source = endpoint_source, mount_point = mountPoint_folder, extra_configs = {config : sas})
where I replaced endpoint_source, mountPoint_folder and sas with the following:
container_name = "containertobesharedwithdatabricks"
storage_account_name = "atabricksstorageaccount"
storage_account_url = storage_account_name + ".blob.core.windows.net"
sas = "?sv=2021-06-08&ss=bfqt&srt=o&sp=rwdlacupiytfx&se=..."
endpoint_source = "wasbs://"+ storage_account_url + "/" + container_name
mountPoint_folder = "/mnt/projet8"
config = "fs.azure.sas."+ container_name + "."+ storage_account_url
but I ended up with the following exception:
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container $root in account atabricksstorageaccount.blob.core.windows.net not found, and we can't create it using anoynomous credentials, and no credentials found for them in the configuration.
I cannot figure out why Databricks cannot find the root container.
Any help would be much appreciated. Thanks in advance.
The storage account and folder exist, as can be seen from this capture, so I am puzzled.
Using the same approach as yours, I got the same error:
Using the following code, I was able to mount successfully. Change the endpoint_source value to the format wasbs://<container-name>@<storage-account-name>.blob.core.windows.net.
endpoint_source = 'wasbs://data@blb2301.blob.core.windows.net'
mp = '/mnt/repro'
config = "fs.azure.sas.data.blb2301.blob.core.windows.net"
sas = "<sas>"
dbutils.fs.mount( source = endpoint_source, mount_point = mp, extra_configs = {config : sas})
My bad..., I put a "/" instead of "@" between container_name and storage_account_url and inverted the order, so the right syntax is:
endpoint_source = "wasbs://" + container_name + "@" + storage_account_url
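Putting that correction together with the variables from the question, the full mount call would look something like the sketch below; the SAS value is a placeholder and the code is assumed to run in a Python notebook cell:

container_name = "containertobesharedwithdatabricks"
storage_account_name = "atabricksstorageaccount"
storage_account_url = storage_account_name + ".blob.core.windows.net"
sas = "<sas token>"  # placeholder for the generated SAS

# container name comes first, separated from the account URL by "@"
endpoint_source = "wasbs://" + container_name + "@" + storage_account_url
mountPoint_folder = "/mnt/projet8"
config = "fs.azure.sas." + container_name + "." + storage_account_url

dbutils.fs.mount(
    source=endpoint_source,
    mount_point=mountPoint_folder,
    extra_configs={config: sas}
)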

Very slow connection to Snowflake from Databricks

I am trying to connect to Snowflake using R in Databricks. My connection works and I can make queries and retrieve data successfully; however, it can take more than 25 minutes simply to connect, although once connected all my queries are quick.
I am using the sparklyr function 'spark_read_source', which looks like this:
query <- spark_read_source(
  sc = sc,
  name = "query_tbl",
  memory = FALSE,
  overwrite = TRUE,
  source = "snowflake",
  options = append(sf_options, client_Q)
)
where 'sf_options' is a list of connection parameters that looks similar to this:
sf_options <- list(
  sfUrl = "https://<my_account>.snowflakecomputing.com",
  sfUser = "<my_user>",
  sfPassword = "<my_pass>",
  sfDatabase = "<my_database>",
  sfSchema = "<my_schema>",
  sfWarehouse = "<my_warehouse>",
  sfRole = "<my_role>"
)
and my query is a string appended to the 'options' argument, e.g.
client_Q <- 'SELECT * FROM <my_database>.<my_schema>.<my_table>'
I can't understand why it is taking so long; if I run the same query from RStudio using a local Spark instance and 'dbGetQuery', it is instant.
Is spark_read_source the problem? Is it an issue between Snowflake and Databricks? Or something else? Any help would be great. Thanks.

Sphinx + Oracle : Data source name not found error

I want to connect to a remote Oracle database server and index some data from there with the Sphinx search engine. My OS is Ubuntu 16.04. I have installed Sphinx on it and tested it with a local MySQL database, and everything was OK (all the data was indexed, I could search, and the results were correct). I have also installed unixODBC and tested remote access to the Oracle database server with the isql tool, and everything was OK, but when I try to index data with Sphinx's indexer command, this error occurs:
sql_connect: [unixODBC][Driver Manager]Data source name not found, and no default driver specified
Here is the source block of my sphinx.conf file:
source src2
{
    type = odbc
    sql_host = hostName
    sql_user = user
    sql_pass = pass
    sql_db = dbname
    sql_port = 1521
    odbc_dsn = DSN = mydsn; Driver={Oracle};Dbq=hostname:1521/dbname;Uid=user;Pwd=pass
    sql_query = \
        SELECT tableId, Name \
        FROM sampleTable
}
And the odbc.ini file:
[mydsn]
Application Attributes = T
Attributes = W
BatchAutocommitMode = IfAllSuccessful
BindAsFLOAT = F
CloseCursor = F
DisableDPM = F
DisableMTS = T
Driver = Oracle
DSN = mydsn
EXECSchemaOpt =
EXECSyntax = T
Failover = T
FailoverDelay = 10
FailoverRetryCount = 10
FetchBufferSize = 64000
ForceWCHAR = F
Lobs = T
Longs = T
MaxLargeData = 0
MetadataIdDefault = F
QueryTimeout = T
ResultSets = T
ServerName = MYDATABASE
SQLGetData extensions = F
Translation DLL =
Translation Option = 0
DisableRULEHint = T
UserID = user
Password = pass
StatementCache=F
CacheBufferSize=20
UseOCIDescribeAny=F
SQLTranslateErrors=F
MaxTokenSize=8192
AggregateSQLType=FLOAT
and the odbcinst.ini file:
[Oracle]
Description= ODBC for Oracle
Driver = /opt/oracle/instantclient_12_2/libsqora.so.12.1
Setup =
FileUsage = 1
CPTimeout =
CPReuse = /usr/local/etc/odbcinst.ini
Try
odbc_dsn = DSN=mydsn;
i.e. without spaces around the = after DSN. Since you have everything else specified in the ini file, just the DSN should be enough. Of the sql_* options you also only need sql_query, like this:
source src2
{
    type = odbc
    odbc_dsn = DSN=mydsn;
    sql_query = \
        SELECT tableId, Name \
        FROM sampleTable
}

Error while configuring EMS with Database in Fault Tolerant mode

I am trying to set up my EMS in FT mode. I have configured all the parameters in the two EMS config files.
But I'm getting the warning:
Unable to initialize fault tolerant connection, remote server returned 'invalid user name'
The server name and password are exactly the same in both config files, so I don't know where the error is.
I am attaching the EMS config files that I am using for the EMS servers:
tibemsd.conf:
authorization = enabled
password =
server=EMS-HakanLAL
listen=tcp://7222
Ft_active=tcp://8222
users = users.conf
groups = groups.conf
topics = topics.conf
queues = queues.conf
acl_list = acl.conf
factories = factories.conf
routes = routes.conf
bridges = bridges.conf
transports = transports.conf
tibrvcm = tibrvcm.conf
durables = durables.conf
channels = channels.conf
stores = stores.conf
store = "C:/temp"
tibemsdft.conf:
authorization = enabled
password =
server=EMS-HakanLAL
listen=tcp://8222
Ft_active=tcp://7222
users = C:\Tibco\ems\8.1\BackUp\users.conf
groups = C:\Tibco\ems\8.1\BackUp\groups.conf
topics = C:\Tibco\ems\8.1\BackUp\topics.conf
queues = C:\Tibco\ems\8.1\BackUp\queues.conf
acl_list = C:\Tibco\ems\8.1\BackUp\acl.conf
factories = C:\Tibco\ems\8.1\BackUp\factories.conf
routes = C:\Tibco\ems\8.1\BackUp\routes.conf
bridges = C:\Tibco\ems\8.1\BackUp\bridges.conf
transports = C:\Tibco\ems\8.1\BackUp\transports.conf
tibrvcm = C:\Tibco\ems\8.1\BackUp\tibrvcm.conf
durables = C:\Tibco\ems\8.1\BackUp\durables.conf
channels = C:\Tibco\ems\8.1\BackUp\channels.conf
stores = C:\Tibco\ems\8.1\BackUp\stores.conf
store = "C:\ProgramData\TIBCO3\tibco\cfgmgmt\ems\data"
Your tibemsd.conf and tibemsdft.conf look fine. What you are probably missing is registering the server name as a user within users.conf.
If you make that entry, both servers should be able to connect to each other.
