Unable to list Google Cloud Logging sink for BigQuery - google-cloud-stackdriver

I have created a Google Cloud Logging sink for BigQuery (reference), and I was able to share the BQ dataset with the service account that was created during sink creation. Logs are being stored in the BQ dataset tables. But I am unable to list the sink that I created, and I can neither recreate it nor delete it: the commands error out saying the sink "already exists" and the sink "does not exist", respectively.
I have the admin role, and I believe I followed the instructions correctly when creating the sink. Still, something is missing, and I am unable to figure out how the sink itself can be missing. It's been almost two days, so I am looking for guidance. I appreciate your time.
P.S. This is my first Stack Overflow post; sorry if I missed anything.

I recommend you take a look at the storage logs [1] to see if you can locate the issue.
Also, you can try executing the list and delete commands with the --organization=ORGANIZATION_ID or --project=PROJECT_ID flags, since the sink may live at a different level of the resource hierarchy than the one gcloud targets by default.
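For example (a sketch; SINK_NAME, ORGANIZATION_ID, and PROJECT_ID are placeholders to substitute with your own values):

# List sinks at the project level, then at the organization level,
# to find out where the sink actually lives.
gcloud logging sinks list --project=PROJECT_ID
gcloud logging sinks list --organization=ORGANIZATION_ID

# Once located, delete the sink at the same level at which it was created.
gcloud logging sinks delete SINK_NAME --organization=ORGANIZATION_ID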
For more reference, follow the documentation [2].
Cheers,
[1] https://cloud.google.com/logging/docs/storage#overview
[2] https://cloud.google.com/sdk/gcloud/reference/logging/sinks/list

Related

How to view and interpret Vertex AI logs

We have deployed models to a Vertex AI endpoint.
Now we want to understand and interpret the logs regarding events
such as node creation, pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for analysis?
As you did not specify your question in detail, I will provide a fairly general answer which might help other members.
There is documentation which explains Vertex AI logging: Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the question, "Who did what, where, and when?" within your Google Cloud resources.
Currently, Vertex AI supports two types of audit logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The two other types, System Event audit logs and Policy Denied audit logs, are currently not supported by Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command-line tool, or the API. Depending on how you want to get them, follow the steps in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
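As a sketch, assuming the gcloud CLI is authenticated against your project (MY_PROJECT is a placeholder), the Admin Activity audit logs that Vertex AI writes can be read with a filter on its service name:

# Read the 10 most recent Admin Activity audit log entries for Vertex AI.
gcloud logging read \
  'logName="projects/MY_PROJECT/logs/cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="aiplatform.googleapis.com"' \
  --project=MY_PROJECT --limit=10

For Data Access audit logs, swap the log name suffix %2Factivity for %2Fdata_access (note that Data Access audit logs must be explicitly enabled first).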
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model) you currently can't:
the entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" field.
The answer above by @PjoterS suggests a workaround to that limitation, which isn't easy in my opinion.
It would have been better if Vertex AI offered some mechanism by which you could log directly to the endpoint resource from the container using their Cloud Logging library, or better, unpacked the captured log fields as subfields of the "jsonPayload" parent field or into "message".
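For what it's worth, a sketch of pulling those container logs so you can post-process the "message" strings yourself (the resource type and the ENDPOINT_ID label are assumptions based on how Vertex AI names endpoint resources; verify them in the Logs Explorer first):

# Fetch recent log entries emitted by the containers behind one endpoint.
gcloud logging read \
  'resource.type="aiplatform.googleapis.com/Endpoint" AND resource.labels.endpoint_id="ENDPOINT_ID"' \
  --limit=10 --format=json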

Azure Blob Storage lifecycle management - send report or log after run

I am considering using Azure Blob Storage's built-in lifecycle management feature for deleting blobs of a certain age, with a ruleset of the shape sketched below.
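For context, the ruleset would look roughly like this (a sketch; the rule name, blob type, and 30-day threshold are example values):

{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-blobs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 30 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}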
However, due to a business requirement, it must be possible to generate a report or log statement after each daily execution of the defined ruleset. The report or log must state the number of block blobs that were affected, e.g., deleted, during the run.
I have read through the documentation and Googled to see if others have had similar inquiries, but so far without any luck.
So my question: does anyone know if and how I can get the built-in lifecycle management system to do one of the following after each daily run:
Add a log statement to the storage account containing the Blob storage.
Generate and send a report to an endpoint I define.
If the above can't be done, I will have to code the daily deletion job and report generation myself, which surely I can do, but I would like to use the built-in feature if possible.
I'll summarize the solution below.
If you want to know which blobs are deleted every day, we can configure Diagnostic settings on the storage account. After doing that, we will get logs for the read, write, and delete requests made against the blobs. For more detail, please refer to here and here.
Regarding how to enable it, we can use the PowerShell command Set-AzStorageServiceLoggingProperty.
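A minimal sketch of enabling logging for the Blob service (the account name, key variable, and retention period are placeholders; this assumes the Az.Storage module is installed and you are signed in with Connect-AzAccount):

# Build a storage context for the target account.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" `
    -StorageAccountKey $accountKey

# Log read, write, and delete operations for the Blob service,
# and retain the logs for 7 days.
Set-AzStorageServiceLoggingProperty -ServiceType Blob -Context $ctx `
    -LoggingOperations Read,Write,Delete -RetentionDays 7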

How to use graph.openManagement().updateIndex()?

I have a JanusGraph database in which all indexes were built, but the status of some of those indexes is INSTALLED. Now I am trying to update those indexes to REGISTERED and then to ENABLED. I have done some research and found SchemaAction, but I don't know the syntax or how to use graph.openManagement().updateIndex().
Any suggestions regarding this issue, or if there is any procedure other than this one, please let me know.
Thanks in advance!
If there are any open transactions while the indexes are being created, they might get stuck in the INSTALLED state.
You can find a clear explanation here.
After rolling back your open transactions and closing open instances, try reindexing.
Note: block until the SchemaStatus transitions from INSTALLED to REGISTERED before running the reindex command. Try running a Groovy script instead of running the commands directly in the Gremlin console when building the indexes. Please find the sample script below.
import org.janusgraph.core.schema.SchemaAction
import org.janusgraph.core.schema.SchemaStatus
import org.janusgraph.graphdb.database.management.ManagementSystem

// Move the index from INSTALLED to REGISTERED, committing before awaiting
// so the status change propagates to all instances.
mgmt = graph.openManagement()
mgmt.updateIndex(mgmt.getGraphIndex("giftIdByGift"), SchemaAction.REGISTER_INDEX).get()
mgmt.commit()
ManagementSystem.awaitGraphIndexStatus(graph, "giftIdByGift").status(SchemaStatus.REGISTERED).call()

// Reindex the existing data, then wait for the index to become ENABLED.
mgmt = graph.openManagement()
mgmt.updateIndex(mgmt.getGraphIndex("giftIdByGift"), SchemaAction.REINDEX).get()
mgmt.commit()
ManagementSystem.awaitGraphIndexStatus(graph, "giftIdByGift").status(SchemaStatus.ENABLED).call()

Unable to Create Common Data Service DB in Default Environment Power Apps

I am unable to create a new Common Data Service Database in my Power Apps default environment. Please see the error text below.
It looks like you don't have permission to use the Common Data Service
in this environment. Switch to a different environment, or create your
own.
As I understand it, I should be able to create this after the Microsoft Business Applications October 2018 update, as described in the article at the following link.
https://community.dynamics.com/365/b/dynamicscitizendeveloper/archive/2018/10/17/demystifying-dynamics-365-and-powerapps-environments-part-1
Also, when I try to create a Common Data Service app in my default environment, I encounter the following error.
The data did not load correctly. Please try again.
The environment 'Default-57e1485d-1197-4afd-b792-5c423ab508d9' is not
linked to a new CDS 2.0 instance. The operation 'ListInstanceMetadata'
is forbidden for unlinked environments
Moreover, I am unable to see the default environment at https://admin.powerapps.com/environments; I can only see the Sandbox environment there.
Any ideas what I am missing here?
Thank you.
Someone else faced a similar issue, and I read in one of the threads that deleting the browser cache and trying again, or trying a different browser, resolved it. Could you try these first-level steps and check if you still have these issues?
Ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Default-Environment-Error-on-CDS/m-p/233582#M1281
Also, for your permission error ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Common-Data-Service-Business-Flows/td-p/142053
I have not validated these findings, but as these answers are from the MS and PowerApps team, I hope they help!

Thinkaurelius Titan configuring BerkeleyDB

I am trying to create a Thinkaurelius Titan datastore using:
TitanGraph graph = TitanFactory.open("/tmp/graph")
The documentation can be found at https://github.com/thinkaurelius/titan/wiki/Using-BerkeleyDB
But each time I open the graph, a new datastore is created. I even tried using the configuration object, but it did not help. Has anyone worked on this before? I want to create a Titan datastore that is reusable, i.e., it should not create a new datastore each time I open it.
Any suggestions please?
It sounds like the changes aren't being committed to the database, so they are lost when the graph is closed. Look more into how transactions work.
https://github.com/thinkaurelius/titan/wiki/Transaction-Handling
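A minimal sketch under that assumption, using the Blueprints-era Titan API (the property key and value are placeholders):

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;

TitanGraph graph = TitanFactory.open("/tmp/graph");

Vertex v = graph.addVertex(null);   // null id: Blueprints assigns one
v.setProperty("name", "alice");

graph.commit();     // without the commit, the writes are discarded
graph.shutdown();   // close cleanly so the store can be reopened

// Reopening the same directory should now return the stored data
// instead of an empty graph.
graph = TitanFactory.open("/tmp/graph");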
