I am trying to find out who created a dataset in BigQuery and, if possible, whether it was done via the GUI, the CLI, etc.
Currently, using the Google Cloud SDK, I am checking against every project my account is linked with.
With the following command inside a loop over every project, I get the value of the userByEmail field.
Command: bq show --format=prettyjson ${dataset} | awk /userByEmail/'{gsub ("\"", ""); print proj",",dat",",$2}' proj=${project} dat=${dataset}
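For context, the full loop looks roughly like this (a sketch: I take the project list from gcloud projects list and the dataset IDs from bq ls via jq, so the details may differ in your setup):

# Sketch of the loop: iterate over all projects and their datasets,
# printing the userByEmail entries for each dataset.
for project in $(gcloud projects list --format="value(projectId)"); do
  for dataset in $(bq ls --project_id="${project}" --format=json | jq -r '.[].datasetReference.datasetId'); do
    bq show --format=prettyjson "${project}:${dataset}" \
      | awk -v proj="${project}" -v dat="${dataset}" \
          '/userByEmail/ {gsub("\"",""); print proj "," dat "," $2}'
  done
done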
This gives me info about who has access to these datasets, but it's not what I am looking for.
Any ideas on how to get the correct info in an automated fashion?
The only place creator information would be exposed is via the BigQuery audit logs.
Even with the audit information at your disposal, the second part of your question (what tool issued the request) is likely to remain ambiguous.
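If audit logging is enabled, something along these lines can surface the creator (a sketch, assuming the legacy datasetservice.insert method name; newer logs use google.cloud.bigquery.v2.DatasetService.InsertDataset, and the field paths may differ accordingly):

# Search the audit logs for dataset-creation events in one project.
# protoPayload.requestMetadata.callerSuppliedUserAgent, if present, is the
# closest hint to which tool (GUI, CLI, client library) issued the request.
gcloud logging read \
  'resource.type="bigquery_resource" AND protoPayload.methodName="datasetservice.insert"' \
  --project="${project}" \
  --format='value(protoPayload.authenticationInfo.principalEmail)'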
I'm trying to run an active scan from OWASP ZAP using only my Ubuntu (22.04) terminal, by importing an external OpenAPI definition. This can easily be done through the GUI, but I need to do the same process using only the command line. I couldn't find proper documentation to follow either.
I have tried the following command structure to do the active scan, but it seems to fail.
/path/to/zap.sh -daemon -openapifile /path/to/swagger.json -openapitargeturl /path/to/targetUrl -quickout /path/to/output.html
Can anyone suggest a proper way to run this active scan through the Ubuntu terminal?
We have lots of documentation for automating ZAP - see https://www.zaproxy.org/docs/automate/
I recommend looking at the API packaged scan and the Automation Framework.
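For example, the API packaged scan can be run via Docker along these lines (a sketch, assuming Docker is available and your definition sits in the current directory; see https://www.zaproxy.org/docs/docker/api-scan/ for the full option list):

# Import the OpenAPI definition, actively scan the API, write an HTML report.
docker run --rm -v "$(pwd):/zap/wrk/:rw" -t ghcr.io/zaproxy/zaproxy:stable \
  zap-api-scan.py -t /zap/wrk/swagger.json -f openapi -r report.html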
I am considering using Azure Blob Storage's built-in lifecycle management feature for deleting blobs of a certain age.
However, due to a business requirement, it must be possible to generate a report or log statement after each daily execution of the defined ruleset. The report or log must state the number of blobs that were affected, e.g. deleted, during the run.
I have read through the documentation and Googled to see if others have had similar inquiries, but so far without any luck.
So my question: does anyone know if and how I can get the built-in lifecycle management feature to do one of the following after each daily run:
Add a log statement to the storage account containing the Blob storage.
Generate and send a report to an endpoint I define.
If the above can't be done I will have to code the daily deletion job and report generation myself, which surely I can do, but I would like to use the built-in feature if possible.
To summarize the solution:
If you want to know which blobs are deleted every day, we can configure diagnostic settings in the storage account. After doing that, we will get the logs for read, write, and delete requests for the blob. For more detail, please refer to here and here.
Regarding how to enable it, we can use PowerShell command Set-AzStorageServiceLoggingProperty.
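For instance (a sketch in PowerShell, assuming the Az.Storage module is installed; the account name and retention period are placeholders):

# Enable classic Storage Analytics logging for delete operations on the Blob service.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
Set-AzStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations Delete `
    -RetentionDays 30 -PassThru -Context $ctx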
Cross-posting from https://groups.google.com/forum/#!topic/kythe/86kNuSCeorI, since I was directed here by the Beam FAQ for Beam questions.
In short, I run a job written using the Go SDK successfully with the direct runner, but when trying the Dataflow runner I get the following error in the Google Cloud console:
2019-02-17 (12:03:53) Step with name e19 already exists. Duplicates are not allowed.
I attach the plan that was printed to stderr at https://pastebin.com/vpu3U52j. Grepping for e19: https://pastebin.com/L24L1guT.
I'm not very familiar with Beam yet. I wonder which part is responsible for generating the step names? What are the likely causes of a collision?
Thank you!
It was a bug, actually; I sent a PR to Beam.
I have looked everywhere but cannot seem to figure out how to set up Cloud Code on the open-source Parse Server using Heroku.
I see this link, which tells me what to put in the index.js and main.js files: Implementing Cloud Code on Open Source Parse Server. However, I cannot seem to find those files, nor can I find the "cloud" folder.
How do I find the cloud folder?
I created the Parse Server on MongoDB using the "Deploy to Heroku" link on this page: https://github.com/ParsePlatform/parse-server-example. After creating my application by filling out all the information, I ran the command heroku git:clone -a yourAppName to clone the application files. However, when I use the command, I obtain an empty repository and get the following message in my terminal:
Cloning into 'hyv3-moja'...
warning: You appear to have cloned an empty repository.
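For reference, the exact sequence I ran was roughly this sketch (yourAppName stands in for my real app name):

# Reproduction of the steps described above.
heroku login
heroku git:clone -a yourAppName
cd yourAppName
ls cloud/   # fails: the cloned repository is empty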
So, how/where do I find the cloud folder with main.js? Did I miss any step in creating the Parse Server?
I also tried using the Parse command line. However, when I try to use the parse new command, it requires me to log in to a Parse account. Since Parse is going down, they are not accepting new accounts, and I did not have an account before. Regardless, this seems like a dead end.
So can someone please explain to me how to set up Cloud Code? I want to create code that decrements a column in the database every second, so it operates like a timer. Basically, I want my application to create objects in the database that last a certain amount of time chosen by the user; for this example, I'll say 24 hours. From the moment an object is created, I want to decrement those 24 hours in the database, so that when a user of my application clicks to view the object, I can translate the time remaining from the database and just output that value, showing how much time is left in the life of the object.
I'm working on an app that uses Jena for storage (with the TDB backend). I'm looking for something like the equivalent of Squirrel that lets me see what's being stored, run queries, etc. This seems like an obvious thing to need, but my (perhaps badly phrased) Google queries aren't turning up anything promising.
Any suggestions, please? I'm on XP. Even a command line tool would be helpful.
Take a look at my Store Manager tool which is part of the dotNetRDF Toolkit which I develop as part of the wider dotNetRDF project I maintain.
It provides a fairly basic GUI through which you can connect to various triple stores, including TDB, provided that you expose your dataset via Joseki/Fuseki. You need to have .Net 3.5 installed to run the apps in the toolkit.
If you don't already expose your TDB dataset via HTTP, try using Fuseki: it is ridiculously easy to use and can be run just on your local machine when necessary, making your TDB store available via HTTP for use with my tool, e.g.
java -jar fuseki-0.1.0-server.jar --update --loc data /dataset
Please see the Fuseki wiki for more information on running Fuseki and the various options. In the above example Fuseki is run with SPARQL Update enabled (the --update flag), using the TDB dataset located in the directory data (the --loc data argument) and with a base URI of /dataset for the data.
Once running you can use my tool to connect to a Fuseki server by going to File > New Generic Store Manager, selecting the "Fuseki" tab from the dialog that appears, entering the URI http://localhost:3030/dataset/data and then clicking "Connect to Fuseki".
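To sanity-check that the dataset is reachable over HTTP first, a simple query from the terminal works too (a sketch, assuming Fuseki's default port 3030 and the /dataset service name from the example above):

# List ten triples from the store via the SPARQL query endpoint.
curl http://localhost:3030/dataset/query \
  --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'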
Twinkle is a handy SPARQL client: http://www.ldodds.com/projects/twinkle/
As it happens, I'm working on something similar myself, but it still needs a lot of work (check back in a month :) http://hyperdata.org/wiki/Scute
First, download Jena Fuseki from
https://jena.apache.org/download/index.cgi
Unzip the file and copy the "jena-fuseki-1.0.1" folder to the C drive.
Open cmd and type the following to access the folder:
cd C:\jena-fuseki-1.0.1
Then type:
java -jar fuseki-server.jar --update --loc data /dataset
At last, open a browser and go to:
localhost:3030/
Remember, you must first declare the environment variable (located in System Properties, under the Advanced tab): edit the variable called "Path" under "System variables" to include
C:\jena-fuseki-1.0.1
I also develop a SPARQL client, open source, in Java Swing: EulerGUI.
In fact it does a lot more; see the manual:
http://eulergui.svn.sourceforge.net/viewvc/eulergui/trunk/eulergui/html/documentation.html
For the SPARQL feature, it's better to take the EulerGUI minimal build:
http://sourceforge.net/projects/eulergui/files/eulergui/1.11/