Square variation stock status shows "None" even though there is a valid in-stock quantity - square-connect

I'm trying to update the on-hand quantity of a variation using the inventory/batch-change API. My system is the source of record for the item's on-hand quantity, so I'm posting a PHYSICAL_COUNT to the variant. Everything looks fine if you drill down into the stock section of the variant; however, the main item dashboard shows a dash (-) and the variation shows "None" in the stock section. I'm not sure what the issue is, because when I post the PHYSICAL_COUNT I also set state=IN_STOCK.
Here is the JSON used to update inventory.
API URL: https://connect.squareup.com/v2/inventory/batch-change
{
  "idempotency_key": "XXXXXXXX",
  "changes": [
    {
      "type": "PHYSICAL_COUNT",
      "physical_count": {
        "catalog_object_id": "XXXXXXXX",
        "state": "IN_STOCK",
        "location_id": "XXXXXXXX",
        "quantity": "3",
        "occurred_at": "2020-04-20T15:02:00Z"
      }
    }
  ],
  "ignore_unchanged_counts": true
}
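For reference, the recorded counts can be read back to confirm the PHYSICAL_COUNT was applied; a sketch, assuming the same Inventory API's BatchRetrieveInventoryCounts endpoint (IDs elided as above):
POST https://connect.squareup.com/v2/inventory/batch-retrieve-counts
{
  "catalog_object_ids": ["XXXXXXXX"],
  "location_ids": ["XXXXXXXX"]
}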
(Screenshots: the item dashboard showing a dash for stock, and the variation's stock section showing "None".)

There is a known issue around this: when using the Inventory API, the dashboard will appear out of sync unless you update the item variation via the Catalog API to use individual location_overrides (regardless of whether it is going to be available in every location). This field lives at CatalogObject->item_variation_data->location_overrides (https://developer.squareup.com/reference/square/catalog-api/upsert-catalog-object#modal__property-item_variation_data).
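A minimal sketch of such an UpsertCatalogObject call; the IDs, version, and field values here are placeholders you would replace with the variation's current data:
POST https://connect.squareup.com/v2/catalog/object
{
  "idempotency_key": "XXXXXXXX",
  "object": {
    "type": "ITEM_VARIATION",
    "id": "XXXXXXXX",
    "version": 1234567890,
    "item_variation_data": {
      "item_id": "XXXXXXXX",
      "name": "Regular",
      "pricing_type": "FIXED_PRICING",
      "location_overrides": [
        {
          "location_id": "XXXXXXXX",
          "track_inventory": true
        }
      ]
    }
  }
}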

Related

Is there a way to write an expression in Power Automate to retrieve an item from SurveyMonkey?

There is no dynamic content you can get from the SurveyMonkey trigger in Power Automate except for the Analyze URL, Created Date, and Link. Is it possible to retrieve the data with an expression, so I could add fields to SharePoint or send emails based on answers to questions?
For instance, here is some JSON data for a county multiple-choice field; I would like to extract the county so the email can be sent to the correct person:
{
  "id": "753498214",
  "answers": [
    {
      "choice_id": "4963767255",
      "simple_text": "Williamson"
    }
  ],
  "family": "single_choice",
  "subtype": "menu",
  "heading": "County where the problem is occurring:"
}
And basically, is there a way to create dynamic fields from the content so it would be more usable?
I am a novice, so your answer will have to assume I know nothing!
Thanks for considering the question.
So far, everything I have tried has been unsuccessful!
I was able to get an answer on Microsoft Power Users support.
Put this data in a Compose action:
{
  "id": "753498214",
  "answers": [
    {
      "choice_id": "4963767255",
      "simple_text": "Williamson"
    }
  ],
  "family": "single_choice",
  "subtype": "menu",
  "heading": "County where the problem is occurring:"
}
Then use these expressions in additional Compose actions:
To get choice_id:
outputs('Compose')?['answers']?[0]?['choice_id']
To get simple_text:
outputs('Compose')?['answers']?[0]?['simple_text']
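The same pattern reaches any other property in the payload; for example, a sketch for the question heading (assuming the first Compose action is named 'Compose', as above):
outputs('Compose')?['heading']
The ?[] operator is safe navigation, so the expression returns null instead of failing if a property is missing.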
The reference link where I retrieved the answer is here:
https://powerusers.microsoft.com/t5/General-Power-Automate/How-to-write-an-expression-to-retrieve-answer/m-p/1960784#M114215

Azure Data Factory REST API paging with Elasticsearch

While developing a pipeline that will use Elasticsearch as a source, I ran into an issue related to paging. I am using the Elasticsearch SQL API. Basically, I started by making the request in Postman, and it works well. The request body looks like the following:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"core\" ORDER BY Id",
  "fetch_size": 20,
  "cursor": ""
}
After the first run, the response body contains a cursor string, which is a pointer to the next page. If I send the request in Postman with the cursor value from the previous response, it returns the data for the second page, and so on. I am trying to achieve the same result in Azure Data Factory. For this I am using a Copy activity, which stores the response to Azure Blob Storage. The setup for the source is as follows:
(Screenshot: Copy activity source configuration.)
This is the expression for the body:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"#{variables('TableName')}\" ORDER BY Id",
  "fetch_size": #{variables('Rows')},
  "cursor": ""
}
I have no idea how to correctly set up the pagination rule. The pipeline works properly, but only for the first request. I've tried setting Headers.cursor with the expression $.cursor, but this setup leads to an infinite loop, and the pipeline fails with the Elasticsearch restriction.
I've also tried to read the documentation at https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support, but it seems pretty limited in terms of usage examples and difficult to understand.
Could somebody help me understand how to build a pipeline that makes use of pagination?
The response with the cursor looks like:
{
  "columns": [
    { "name": "companyId", "type": "integer" },
    { "name": "name", "type": "text" },
    { "name": "ownership", "type": "keyword" },
    { "name": "modifiedDate", "type": "datetime" }
  ],
  "rows": [
    [
      2,
      "mic Inc.",
      "manufacture",
      "2021-03-31T12:57:51.000Z"
    ]
  ],
  "cursor": "g/WuAwFaAXNoRG5GMVpYSjVWR2hsYmtabGRHTm9BZ0FBQUFBRUp6VGxGbUpIZWxWaVMzcGhVWEJITUhkbmJsRlhlUzFtWjNjQUFBQUFCQ2MwNWhaaVIzcFZZa3Q2WVZGd1J6QjNaMjVSVjNrdFptZDP/////DwQBZgljb21wYW55SWQBCWNvbXBhbnlJZAEHaW50ZWdlcgAAAAFmBG5hbWUBBG5hbWUBBHRleHQAAAABZglvd25lcnNoaXABCW93bmVyc2hpcAEHa2V5d29yZAEAAAFmDG1vZGlmaWVkRGF0ZQEMbW9kaWZpZWREYXRlAQhkYXRldGltZQEAAAEP"
}
I finally found the solution; hopefully it will be useful for the community.
Basically, what needs to be done is to split the solution into a few steps.
Step 1: Make the first request as in the question description and stage the file to blob storage.
Step 2: Read the blob file, get the cursor value, and set it to a variable (see the expression sketch after this list).
Step 3: Keep requesting data with a changed body:
{"cursor" : "#{variables('cursor')}" }
The pipeline looks like this:
(Screenshot: pipeline layout.)
The configuration of pagination looks like the following:
(Screenshot: pagination configuration.) It is a workaround, as the server ignores this header, but we need something that allows sending the request in a loop.

How to prevent unnecessary G Suite API data consumption?

I am currently consuming data from the G Suite API.
An inconvenience I have found is that for some of the APIs the number of resources available might be quite large.
For instance, when I consume the Users:list API (https://www.googleapis.com/admin/directory/v1/users), given the number of resources and the maximum number of results per query, I need to perform a significant number of queries. Find below an example JSON response:
{
  "kind": "admin#directory#users",
  "etag": "\"WqpSTs-zelqnIvn63V............................/v3ENarMfXkTh9ijs3OVkQRoUSVU\"",
  "users": [
    {
      "kind": "admin#directory#user",
      "id": "7720745322191632224007",
      "etag": "\"WqpSTs-zelqnIvn63V........................PfcSmik3zEJwHAl1UbgSk\"",
      "primaryEmail": ...,
      ...
    },
    {
      "kind": "admin#directory#user",
      "id": "227945583287518253104",
      "etag": "\"WqpSTs-zelqnIvn63V..........-zY30eInIGOmLI\"",
      "primaryEmail": ...,
      ...
    },
    ... N-users ...
  ]
}
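For context, each of those queries is a paged list call along these lines (a sketch; maxResults for users.list caps at 500, and the pageToken comes from the previous response):
GET https://www.googleapis.com/admin/directory/v1/users?domain=example.com&maxResults=500&pageToken=<nextPageToken-from-previous-response>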
I am running this query several times a day.
Ideally, I would only retrieve the resources that have changed plus any new ones, excluding from the response those that have not changed.
Is it possible to do that? If so, how?
Thank you in advance for your answers.
You could create custom attributes for your users, and then filter your requests using the query parameter according to your custom attribute.
Alternatively, define exactly what you mean by "changed" or "not changed", as the user properties will change on every login to update the last-login attribute.
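A hedged sketch of what such a filtered request could look like, assuming a hypothetical custom schema SyncData with a boolean field needsSync (both names are made up for illustration):
GET https://www.googleapis.com/admin/directory/v1/users?domain=example.com&projection=full&query=SyncData.needsSync=true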
Update:
You can watch for changes on the list of users in your domain by supplying an address to receive notifications in a POST request to the watch endpoint:
https://www.googleapis.com/admin/directory/v1/users/watch
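The body of that watch request is a notification channel; a minimal sketch (the channel id and address are placeholders you supply):
{
  "id": "01234567-89ab-cdef-0123-456789abcdef",
  "type": "web_hook",
  "address": "https://example.com/notifications"
}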
References:
Users.watch
Custom User Fields
Query string for User fields

How to index and query nested documents in Elasticsearch

I have 1 million users in a Postgres table. It has around 15 columns of different data types (like integer, array of strings, string). Currently I am using normal SQL queries to filter the data as per my requirements.
I also have "N" projects (max 5) under each user. I have indexed these projects in Elasticsearch and am doing fuzzy search on them. Currently, for each project (a text file) I have created a document in Elasticsearch.
Both systems are working fine.
Now I need to query the data across both systems. Ex: I want all the records which have the keyword java (in Elasticsearch) and more than 10 years of experience (available in Postgres).
Since the user count will be increasing drastically, I have moved all the Postgres data into Elasticsearch.
Filters will only ever be applied on the fields related to the user (not on project-related fields).
Now I need to nest the projects under the corresponding users. I tried parent-child types, and that didn't work for me.
Could anyone help me with the following things?
What would be the correct way of indexing projects associated with the users?
Since each project document has a field called category, is it possible to get the matched category name in the response?
Is there any better way to implement this?
From your description, we can tell that the "base document" is based on users.
Now, regarding your questions:
Based on what I said before, you can add all the projects associated with each user as an array, like this:
{
  "user_name": "John W.",
  ..., #More information from this user
  "projects": [
    {
      "project_name": "project_1",
      "role": "Dev",
      "category": "Business Intelligence"
    },
    {
      "project_name": "project_3",
      "role": "QA",
      "category": "Machine Learning"
    }
  ]
},
{
  "user_name": "Diana K.",
  ..., #More information from this user
  "projects": [
    {
      "project_name": "project_1",
      "role": "Project Leader",
      "category": "Business Intelligence"
    },
    {
      "project_name": "project_4",
      "role": "DataBase Manager",
      "category": "Mobile Devices"
    },
    {
      "project_name": "project_5",
      "role": "Project Manager",
      "category": "Web services"
    }
  ]
}
The goal of this structure is to add all of the user's info to each document, even if some of that info is repeated. Doing this will allow you to bring back, for example, all the users that work on a specific project, with queries like this:
{
  "query": {
    "match": {
      "projects.project_name": "project_1"
    }
  }
}
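One caveat worth noting: with the default object mapping, the fields of different projects in the array are flattened together. If you need each project's fields to match as a unit, a sketch of the alternative is to declare projects as the nested type in the mapping (the index name users is assumed):
PUT users
{
  "mappings": {
    "properties": {
      "projects": {
        "type": "nested",
        "properties": {
          "project_name": { "type": "keyword" },
          "role": { "type": "keyword" },
          "category": { "type": "text" }
        }
      }
    }
  }
}
Queries against nested fields must then be wrapped in a nested clause:
{
  "query": {
    "nested": {
      "path": "projects",
      "query": {
        "match": { "projects.project_name": "project_1" }
      }
    }
  }
}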
Yes. Like the query above, you can match all the projects by their "category" field. However, keep in mind that since your base document is per user, it will bring back the whole user document.
For that case, you might want to use the Terms aggregation, which will bring you the unique values of certain fields. This can be combined with a query, like this:
{
  "size": 0, #Set this to 0 since you want to focus on the aggregation's result.
  "query": {
    "match": {
      "projects.category": "Mobile Devices"
    }
  },
  "aggs": {
    "unique_projects_names": {
      "terms": { "field": "projects.project_name" }
    }
  }
}
(If project_name is mapped as text rather than keyword, the terms aggregation needs the projects.project_name.keyword sub-field.)
That last query will bring back, in the aggregation results, all the unique project names within the category "Mobile Devices".
You could create a new index where you store all the information related to your projects. However, the relationships between users and projects won't be easy to maintain (remember that ES is NOT intended to be a structured or ER database, like SQL), and the queries will become very complex, even if you decide to name both of your indices (users and projects) in a way that lets you call them with a wildcard.
EDIT: Additionally, you can consider storing all the info related to your projects in Postgres and making the calls separately: first get the project ID (or name) from ES, and then the project's info from Postgres (since I assume that is the info that is less likely to change).
Hope this is helpful! :D

Google Places API - application-specific search

I am trying to do an application-specific place search with the Google Places API. Here is how I am adding a place:
Request:
{
  "location": {
    "lat": 37.760538,
    "lng": -121.900879
  },
  "accuracy": 50,
  "name": "p2p",
  "types": ["other"]
}
I get a success response, as shown below:
Response:
{
  "id": "dfe583b1ac058750cf524f958afc5e82ade455d7",
  "place_id": "qgYvCi0wMDAwMDBhNWE4OWU4NTMzOjgwOGZlZTBhNjI3OjBjNTU1OTU4M2Q2NDI5YmM",
  "reference": "CkQxAAAAsPE72V-jhHUjj6vPy2HdC__2MhAdXanL6mlFBA4bcayRabKyMlfKFiah7U2vkoCj1P_0w9ESFSv5mfDkyufaZhIQTHBHY_jPGRHEE3EmEAGElhoUXTSylMslwHSTK5tYdstW2rOZKbw",
  "scope": "APP",
  "status": "OK"
}
When I search for this place using radar search, I get ZERO_RESULTS.
Request:
https://maps.googleapis.com/maps/api/place/radarsearch/json?key=key&radius=5000&location=37.761926,-121.891856&keyword=p2p
Response:
{
  "html_attributions": [],
  "results": [],
  "status": "ZERO_RESULTS"
}
Is there something that I am not doing the right way? Please help.
Thanks & Regards,
--Rajani
Your scope is "APP". That means you can access it (via place ID) only from the application that created the entry. If the location passes Google's moderation process, it will gain scope "GOOGLE" and be accessible from general searches.
scope — Indicates the scope of the place_id. The possible values are:
APP: The place ID is recognised by your application only. This is because your application added the place, and the place has not yet passed the moderation process.
GOOGLE: The place ID is available to other applications and on Google Maps.
Note: The scope field is included only in Nearby Search results and Place Details results. You can only retrieve app-scoped places via the Nearby Search and the Place Details requests. If the scope field is not present in a response, it is safe to assume the scope is GOOGLE.
See: https://developers.google.com/places/documentation/search
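In the meantime, the app-scoped place can still be looked up from the creating application with a Place Details request; a sketch (the key and place_id placeholders are yours to fill in):
https://maps.googleapis.com/maps/api/place/details/json?placeid=<place_id-from-the-add-response>&key=<key>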
