Trigger Google Workflow from document update in Firestore - google-workflows

When there is an update in my Cloud Firestore, I would like it to trigger a workflow that consumes the document ID as a parameter.

You can leverage Eventarc: an event reported to Audit Logs can be picked up by Eventarc and used as a trigger for your Workflow.
Essentially the command is from this tutorial page:
gcloud eventarc triggers create TRIGGER \
--location=LOCATION \
--destination-workflow=DESTINATION_WORKFLOW \
--destination-workflow-location=DESTINATION_WORKFLOW_LOCATION \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=SERVICE_NAME" \
--event-filters="methodName=METHOD_NAME" \
--service-account="MY_SERVICE_ACCOUNT#PROJECT_ID.iam.gserviceaccount.com"
where the event filters are:
serviceName=firestore.googleapis.com
methodName=one of the Firestore audit log method names
The methodName options are described on this page.
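As a concrete sketch (all names below are placeholders, and the methodName is just one option picked from that list), the full command might look like this:
# Placeholders: my-firestore-trigger, my-workflow, us-central1, my-project, my-sa.
# Pick the methodName that matches the write path you care about.
gcloud eventarc triggers create my-firestore-trigger \
--location=us-central1 \
--destination-workflow=my-workflow \
--destination-workflow-location=us-central1 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=firestore.googleapis.com" \
--event-filters="methodName=google.firestore.v1.Firestore.Write" \
--service-account="my-sa@my-project.iam.gserviceaccount.com"
Inside the workflow, the document ID should then be recoverable from the event's protoPayload.resourceName, which is a path ending in .../documents/<collection>/<documentId>.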

Related

How to create local "copy" of remote Hasura server?

I want to set up a dev environment of Hasura on my local machine, that replicates my existing production (same tables, same schema, same data).
What are the required steps to achieve this task?
I've found this process to work well.
Create a clean empty local postgresql database and Hasura instance. To update an existing local database, drop it and recreate it.
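A minimal sketch of this step, assuming Docker and a local Postgres; the database name, password and host.docker.internal (Docker Desktop) are placeholders:
# Recreate an empty local database.
dropdb -h localhost -U postgres --if-exists hasura_local
createdb -h localhost -U postgres hasura_local
# Start a local Hasura instance pointed at it.
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgrespassword@host.docker.internal:5432/hasura_local \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
If you use a named database like this, point the later psql import at the same database with -d hasura_local.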
Dump the schema and data from your existing Hasura server (as per the answer by @protob, but with clean_output set so that manual changes to the output do not have to be made; see pg_dump for details):
curl --location --request POST 'https://example.com/v1alpha1/pg_dump' \
--header 'Content-Type: application/json' \
--header 'X-Hasura-Role: admin' \
--header 'Content-Type: text/plain' \
--header 'x-hasura-admin-secret: {SECRET}' \
--data-raw '{ "opts": ["-O", "-x","--inserts", "--schema", "public"], "clean_output": true}' > hasura-db.sql
Import the schema and data locally:
psql -h localhost -U postgres < hasura-db.sql
The local database has all the migrations because we copied the latest schema, so just mark them as applied:
# A simple `hasura migrate apply --skip-execution` may work too!
for x in $(hasura migrate status | grep "Not Present" | awk '{ print $1 }'); do
hasura migrate apply --version $x --skip-execution
done
# and confirm the updated status
hasura migrate status
Now finally apply the Hasura metadata using the hasura CLI:
hasura metadata apply
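If your local project folder does not yet contain the server's metadata, one way to get it first (a sketch using the hasura CLI's metadata export/apply commands; the endpoints and secret are placeholders) is:
# Pull metadata from the remote into the local project directory, then apply it locally.
hasura metadata export --endpoint https://example.com --admin-secret {SECRET}
hasura metadata apply --endpoint http://localhost:8080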
Enjoy your new instance!
1) Backup the database.
2) Run Hasura with the database.
3) Make sure Hasura metadata is synced.
Hasura has a special endpoint for executing pg_dump on the Postgres instance.
Here is a sample CURL request:
curl --location --request POST 'https://your-remote-hasura.com/v1alpha1/pg_dump' \
--header 'Content-Type: application/json' \
--header 'X-Hasura-Role: admin' \
--header 'Content-Type: text/plain' \
--data-raw '{
"opts": ["-O", "-x","--inserts", "--schema", "public"]
}'
It outputs the schema and data in psql format.
You can use a tool such as Postman for convenience to import, test and run the CURL query.
Please follow the pg_dump documentation to adjust needed opts.
For example, the above query uses the "--inserts" opt, which produces "INSERT INTO" statements in the output.
The output can be copied, pasted and imported directly to Hasura Panel SQL Tab ("COPY FROM stdin" statements result in errors when inserted in the panel).
http://localhost:8080/console/data/sql
Before the import, comment out or delete the line CREATE SCHEMA public; from the query, because it already exists.
You also have to select tables and relations to be tracked, during or after executing the query.
If the amount of data is larger, it might be better to use the CLI for the import.
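For a larger dump, a hedged example of such a CLI import, reusing the file and connection details from earlier in this answer thread:
# Stop on the first error instead of ploughing through a large file.
psql -h localhost -U postgres -v ON_ERROR_STOP=1 -f hasura-db.sql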

How to override a render ISML template in Intershop7

In Enfinity Suite 6.4 we used to customize storefront pages by overriding ISML templates of the PrimeTech cartridges. For example, it was possible to add a dependency in our custom cartridge to "sld_ch_consumer_app" and replace any Primetech ISML template by adding the template with the same name and hierarchy in our custom cartridge.
Is something like that possible in Intershop7? For example, we would like to change the order of the ISML elements in ProductTile.isml, without overriding the pagelet model. If we add a dependency in our custom cartridge to "app_sf_responsive_cm" and create the ISML template with the same name and folder hierarchy in our custom cartridge, the system still loads the ISML template from the "app_sf_responsive_cm".
The only way we managed to see the changes in the storefront was by overriding the pagelet model and changing the render template name to "ProductTileCustom". Like this:
If we don't use the custom name for the render template, the system will first load the ProductTile.isml from app_sf_responsive_cm instead of the one in our custom cartridge (app_sf_a1_shop_cm).
The order of the cartridges in the cartridgelist.properties is this:
....
bc_urlrewrite_test \
bc_product_rating_orm_test \
commerce_management_b2c_component \
app_core_a1 \
app_sf_a1_shop \
app_sf_a1_shop_cm \
app_bo_a1 \
app_sf_responsive \
app_sf_responsive_cm \
app_sf_responsive_b2c \
app_sf_responsive_smb \
as_responsive \
as_a1 \
Is there some easier way of overriding the responsive store ISML templates other than overriding the pagelet model?
In addition to what Bas de Groot mentioned regarding using the Intershop Studio wizard to override an ISML template, I want to point out that your problem lies in the wrong order of cartridges in your cartridgelist.properties. So instead of:
bc_urlrewrite_test \
bc_product_rating_orm_test \
commerce_management_b2c_component \
app_core_a1 \
app_sf_a1_shop \
app_sf_a1_shop_cm \
app_bo_a1 \
app_sf_responsive \
app_sf_responsive_cm \
app_sf_responsive_b2c \
app_sf_responsive_smb \
as_responsive \
as_a1 \
You must use this order:
bc_urlrewrite_test \
bc_product_rating_orm_test \
commerce_management_b2c_component \
app_sf_responsive \
app_sf_responsive_cm \
app_sf_responsive_b2c \
app_sf_responsive_smb \
as_responsive \
as_a1 \
app_core_a1 \
app_sf_a1_shop \
app_sf_a1_shop_cm \
app_bo_a1 \
In other words, your project cartridges must be loaded after the Intershop cartridges.
There should be no need to override the pagelet model, just overriding the ISML template should do the trick. You can easily override ISML templates in Intershop 7 by doing the following:
Right click the custom cartridge to which you want to add the new template and select new > ISML Template.
In the popup window that appears click Override Existing....
Select the template you want to override and click open.
Click Finish.
Intershop Studio will now automatically create the template and correct folder structure inside your specified cartridge.
Depending on your settings in the appserver.properties file you might need to restart or even re-deploy the application server before the new template will show up in the frontend.

Create Parse objects fail when Class Level Permission says only owner can create the class

Issue Description
Let's say I have a Parse "Diary" class with text and owner properties, where owner is a pointer.
Using the default Class Level Permissions (Pointer permissions) on the dashboard, I imagine anyone can create new objects and claim that they are owned by another random user, like this:
curl -X POST \
-H "X-Parse-Application-Id: myAppId" \
-H "Content-Type: application/json" \
-d '{"text":"hacked","owner": {"__type": "Pointer","className": "_User","objectId": "ANYTHING_THE_HACKER_WANT"}}' \
http://server.com/parse/classes/Diary
So I try to modify the CLP on the dashboard, hoping that diaries that belong to a user can only be created by that user, but I couldn't get this to work.
Steps to reproduce
1) Configure Class Level Permissions on a class like this:
2) Try to create new objects, saying they are owned by the logged in user:
curl -X POST \
-H "X-Parse-Application-Id: myAppId" \
-H "Content-Type: application/json" \
-H "X-Parse-Session-Token: r:4613f36ba383022378780d4c2bcdf1cd" \
-d '{"text":"...","owner": {"__type": "Pointer","className": "_User","objectId": "CSosrTAkxL"}}' \
https://server.com/parse/classes/Diary
Expected Results
The Diary object should be created successfully, since the person calling this is the owner, i.e. the session token matches the objectId of the owner.
Actual Outcome
It returns an error instead:
{"code":119,"error":"Permission denied for action create on class Diary."}
Why? Am I expecting the right behaviour?
Check the Create box for Public. Pointer permissions for CLPs affect only the specific object in the database. Since these objects are not yet in the database, the user does not pass the CLP to be able to save this object without the master key override from cloud code. Checking the Create box will mean any user can create one of these objects, but they must then be the Owner in order to update or delete that object.
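As a hedged illustration of that behaviour once the Public Create box is checked (the app ID, session tokens and object IDs below are placeholders):
# Creating as the logged-in user now succeeds, because Public Create allows it:
curl -X POST \
-H "X-Parse-Application-Id: myAppId" \
-H "X-Parse-Session-Token: r:OWNER_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{"text":"...","owner": {"__type": "Pointer","className": "_User","objectId": "OWNER_OBJECT_ID"}}' \
https://server.com/parse/classes/Diary
# But an update attempted with another user's session token should be rejected
# by the owner pointer permission, since the object is now in the database and
# the caller is not its owner:
curl -X PUT \
-H "X-Parse-Application-Id: myAppId" \
-H "X-Parse-Session-Token: r:OTHER_USER_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{"text":"tampered"}' \
https://server.com/parse/classes/Diary/DIARY_OBJECT_ID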

TeamCity API setting configuration parameters

I have a configuration parameter current_build_date (User Defined Parameter). I just want to set this parameter to the current date via the TeamCity API.
In the docs I have seen this:
http://teamcity:8111/httpAuth/app/rest/buildTypes/<buildTypeLocator>/parameters/<parameter_name>
I know my Build configuration ID, but I can't understand how to construct the buildTypeLocator from it.
I assume result will be something like this:
curl -u Login:Password \
-X PUT \
-d 'valueOfMyParam' \
-H 'Content-Type: text/plain' \
http://teamcity:8111/httpAuth/app/rest/buildTypes/<buildTypeLocator>/parameters/current_build_date
I will really appreciate it if somebody who knows the TeamCity API helps me with this problem.
I made an attempt to just pass my Build configuration ID instead of the buildTypeLocator and I got an error:
[17:08:25][Step 3/3] Error has occurred during request processing (Not Found).
[17:08:25][Step 3/3] Error: jetbrains.buildServer.server.rest.errors.NotFoundException: No project found by name or internal/external id 'BuildConfigurationID'.
If there are any problems or ambiguities with my question, please add a comment; I'll try to fix it.
If you browse the REST API endpoints in a browser you'll be able to see the format of the build locator.
Visit http://teamcity:8111/httpAuth/app/rest/buildTypes/ and you'll see the entries have a href attribute that contains the buildLocator (generally a property:value combination)
You'll then be able to navigate using that url / communicate via the API
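For example (credentials and host are placeholders), listing the build types shows each entry's id and href, and the href contains the locator you need:
# Each <buildType> element in the response has an href such as
# /httpAuth/app/rest/buildTypes/id:MyProject_MyBuildConfig
curl -u Login:Password http://teamcity:8111/httpAuth/app/rest/buildTypes/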
Hope this helps
I solved the problem: the build type locator is id:<Build configuration ID>.
current_build_date=`date +%%Y-%%m-%%d:%%H:%%M:%%S`
echo $current_build_date;
curl -u Login:Password \
-X PUT \
-d "$current_build_date" \
-H 'Content-Type: text/plain' \
https://teamcity.billing.ru/httpAuth/app/rest/buildTypes/id:BuildConfigurationID/parameters/current_build_date

Can I set the TTL for documents loaded into Couchbase from HDFS using Sqoop?

I am attempting to load a JSON document from Hadoop HDFS into Couchbase using sqoop. I am able to load the documents correctly, but the TTL of the document is 0. I would like to expire the documents over a period of time and not have them live forever. Is that possible with the Couchbase connector for Sqoop?
As I said, the documents are loaded correctly, just without a TTL.
The document looks like this:
key1#{"key": "key1", "message": "A message here"}
key2#{"key": "key2", "message": "Another message"}
The sqoop call looks like this:
sqoop export -D mapred.map.child.java.opts="-Xmx4096m" \
-D mapred.job.map.memory.mb=6000 \
--username ${COUCHBASE_BUCKET} \
--password-file ${COUCHBASE_PASSWORD_FILE} \
--table ignored \
--connect ${COUCHBASE_URL} \
--export-dir ${INPUT_DIR} \
--verbose \
--input-fields-terminated-by '#' \
--lines-terminated-by '\n' \
-m 2
Thank you for your help.
I do not think there is a straightforward UI or setting to do it. The code of the connector would have to be modified.
There is no TTL option in the current sqoop plugin version. However, if you just want to set the same TTL for all the imported objects, you can quite easily add the code yourself. Take a look at line 212 here: https://github.com/couchbase/couchbase-hadoop-plugin/blob/master/src/java/com/couchbase/sqoop/mapreduce/db/CouchbaseOutputFormat.java#L212
You just need to add a TTL parameter to the set calls. If you want to be thorough about it, you can take the TTL value from the command line and put it in the DB configuration object, so you can use it in code.

Resources