Laravel Lighthouse: how to delete records that match certain conditions (rather than deleting via primary key)

In Laravel Lighthouse GraphQL, I'd love to be able to delete records that match certain conditions rather than passing just an individual ID.
I get this error:
The @delete directive requires the field deletePostTag to only contain a single argument.
This functionality seems currently unsupported, but if I'm wrong and this is actually supported, please let me know, because this would be the most straightforward approach.
So my second approach was to try first running a @find query to retrieve the ID of the record I want to delete (based on certain fields equaling certain values).
But https://lighthouse-php.com/4.16/api-reference/directives.html#find shows:
type Query {
  userById(id: ID! @eq): User @find
}
and does not show how I could provide two arguments (a foreign key ID and a string) instead of the primary key ID.
How can I most simply accomplish my goal of deleting records that match certain conditions (rather than deleting via primary key)?

I'm not sure about the @delete functionality regarding multiple arguments, but from what you've posted that appears to be unsupported at the moment. For your query, you should instead use something like @all in conjunction with @eq or @where, which lets you filter the collection by as many args as you'd like. If your argument list grows beyond three or so, take a look at Complex Where Conditions. They have worked very well for my team so far and allow a lot of filtering flexibility.
Also take a look at the directive's docs stating:
You can also delete multiple models at once. Define a field that takes a list of IDs and returns a collection of the deleted models.
So if you return multiple models you'd like to delete from your query, you may use this approach to delete them all at once.
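As a minimal sketch of combining the two (untested, and assuming a hypothetical PostTag model with post_id and tag columns), the schema could pair a filtered query with the documented multi-ID delete:

type Query {
  postTags(post_id: ID @eq, tag: String @eq): [PostTag!]! @all
}

type Mutation {
  deletePostTags(id: [ID!]!): [PostTag!]! @delete
}

The client would first run postTags with the desired conditions to collect the matching IDs, then pass that list to deletePostTags.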

Related

Map multiple values to a unique column in Elasticsearch

I want to work with Elasticsearch to process some WhatsApp chats, so I am initially planning the data load.
The problem is that the data exported from WhatsApp doesn't contain a real unique ID per user; it only contains the name of the user taken from the contact directory of the device where the chat is exported (i.e. a user can change their number or have two numbers in the same group).
Because of that, I need to create a custom explicit mapping table between the user names and a self-generated unique ID, which gets populated in an additional column.
My question is therefore: how can I implement this kind of explicit mapping in Elasticsearch to generate an additional unique column? Alternatively, a valid answer could be a totally different approach to the problem.
P.S. As I write this, I think the solution could be in the ingestion process, such as a Python script, but I still want to post the question to understand whether this is something Elasticsearch can do by itself.
Yes, do it during the indexing process.
If you store the data that maps the name and the ID in a separate index, you can do this with an enrich processor: when you index the data, an ingest pipeline adds whichever values you want to the document.
Also, Elasticsearch doesn't have columns, only fields.
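A hedged sketch of that flow with the official Python Elasticsearch client (8.x-style calls; all index, policy, and field names here are made up for illustration):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Lookup index holding the name -> generated-ID mapping
es.index(index="user-ids", document={"user_name": "Alice", "user_id": "u-001"})

# 2. Enrich policy that matches on user_name and copies user_id
es.enrich.put_policy(
    name="user-id-policy",
    match={
        "indices": "user-ids",
        "match_field": "user_name",
        "enrich_fields": ["user_id"],
    },
)
es.enrich.execute_policy(name="user-id-policy")

# 3. Ingest pipeline that applies the policy to incoming chat documents
es.ingest.put_pipeline(
    id="add-user-id",
    processors=[
        {"enrich": {"policy_name": "user-id-policy", "field": "user_name", "target_field": "user"}}
    ],
)

# 4. Index a chat message through the pipeline; it gains user.user_id
es.index(
    index="whatsapp-chats",
    pipeline="add-user-id",
    document={"user_name": "Alice", "message": "hi"},
)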

Can I somehow tag data in Redis?

I have an object Company and multiple methods that can be used to get this object, e.g. GetById, GetByEmail, GetByName.
What I'd like is to cache those method calls with a possibility to invalidate all cache entries related to one object at once.
For example, a company is cached. There are 3 entries in the cache with the following keys:
Company:GetById:123
Company:GetByEmail:foo@bar.com
Company:GetByName:Acme
All three keys are related to one company.
Now let's assume that company has changed. Then I would like to invalidate all keys related to this company. I didn't find any built-in solution for that purpose.
Tagging cache entries with some common id (companyId for example) and then removing all entries by it would be great, but this feature doesn't seem to exist.
So to answer your question directly: you'd probably want to maintain all the keys related to your company in a list, scan through that list, and delete all the associated keys with a DEL command.
So something like:
LPUSH companies-keys:Acme Company:GetById:123 Company:GetByEmail:foo@bar.com Company:GetByName:Acme
Then
RPOP companies-keys:Acme
and for each entry you get out of the list:
UNLINK keyname
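Here's a minimal redis-py sketch of that tagging pattern (names are illustrative; it uses a set rather than the list above, so repeated caching doesn't duplicate tag entries):

import redis

r = redis.Redis()

def cache_company(company_id, email, name, payload):
    # Cache the same company under each lookup key and record
    # every key in a per-company tag set
    keys = [
        f"Company:GetById:{company_id}",
        f"Company:GetByEmail:{email}",
        f"Company:GetByName:{name}",
    ]
    pipe = r.pipeline()
    for key in keys:
        pipe.set(key, payload)
        pipe.sadd(f"companies-keys:{company_id}", key)
    pipe.execute()

def invalidate_company(company_id):
    # Delete every key recorded under the tag, then the tag itself
    tag = f"companies-keys:{company_id}"
    keys = r.smembers(tag)
    if keys:
        r.unlink(*keys)
    r.unlink(tag)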
To answer it not so directly: you may want to consider using a Hash rather than separate keys; that way you can modify one of the fields in the hash rather than having to invalidate all the keys associated with it.
So you could create it with:
HSET companies:123 id 123 email foo@bar.com name acme
Then you could update a particular entry in the company record with HMSET:
HMSET companies:123 email bar@foo.com
Since being able to look up a given record by different fields sounds really important to your use case, you may also want to consider adding RediSearch and indexing the fields you want to search on. For the set of fields listed above, an index like:
FT.CREATE companies-idx ON HASH PREFIX 1 companies: SCHEMA id TAG email TEXT name TEXT
might be appropriate. Then you could look up a company with a given email like:
FT.SEARCH companies-idx "@email: foo"

The generally accepted way to do bulk product importing in a Shopify App

I'm currently working on my first Shopify app and it requires bulk importing products. I'm looking around the web, and it appears there's no query to do bulk importing. It also looks like if I want to add the price to the items I import, I'll have to make a separate query from the one that creates the products in the first place.
I'm thinking an easier way would be to create a .csv, but there's no query to upload a .csv either.
Has anyone tackled something like this before, and what's the usual way to go about it?
If bulk importing is not supported directly, then standard GraphQL capabilities should make it possible (not tested):
Aliases let you make multiple queries/mutations within one request.
Mutations at the root level execute in a guaranteed order.
So you could send multiple (aliased) chains of mutations (see the sketch at the end of this answer):
m1a: insert #1,
m1b: define #1 price,
m2a: insert #2,
m2b: define #2 price
...
You can probably use the arguments used for element creation (one mutation) to find the item for the following mutation (if an ID is not required).
You can try this scenario in the GraphQL playground.
Of course, in this case you need to construct the query dynamically in the app (usually not advised); you will need dynamic aliases and input variable names.
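A sketch of such an aliased request against the Shopify Admin GraphQL API (untested; productCreate exists in the Admin API, but the exact input shape and how prices attach to variants depend on your API version):

mutation BulkImport {
  # Aliases m1/m2 let the same mutation appear twice in one request
  m1: productCreate(input: { title: "Product 1" }) {
    product { id }
    userErrors { field message }
  }
  m2: productCreate(input: { title: "Product 2" }) {
    product { id }
    userErrors { field message }
  }
}

Each aliased mutation runs in order, and the returned product IDs can feed a follow-up request that sets prices.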

How can I search users by ID

Correct me if I'm wrong, but it appears that the Admin SDK's Users > list operation doesn't support searching users by ID (according to the docs).
For example, I use the Members API to get all the members of a given group. It returns a list of user IDs.
The only way to fetch data about those users is to call the get operation for each user. That seems pretty inefficient to me.
How come this functionality is not implemented (or perhaps I'm missing something)?
A search feature means you have a pattern and want the list of all entities that match that pattern. It assumes you don't have the unique ID of the entity you need. The output of a search is a list of unique IDs, optionally with minimal additional information matching the pattern. To get the full information for an individual entity, you use its unique ID with the get operation.
However, if you already have the unique ID, then you don't need the search function; use the get operation directly.
So Google has provided sufficient functionality: if you already have the user ID, there is no reason to use a search call; use the retrieve-user call directly.
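If the per-user get calls are the worry, the client libraries can at least batch them into a single HTTP request. A hedged sketch with the Google API Python client (creds and member_ids are assumed to already exist; batch support can vary by API):

from googleapiclient.discovery import build

def handle_user(request_id, response, exception):
    # Called once per batched users().get result
    if exception is None:
        print(response.get("primaryEmail"))

# creds (admin-scoped credentials) and member_ids are assumed to exist
service = build("admin", "directory_v1", credentials=creds)

batch = service.new_batch_http_request()
for user_id in member_ids:
    batch.add(service.users().get(userKey=user_id), callback=handle_user)
batch.execute()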

Deleting multiple columns from a Parse Class

I have accidentally created hundreds of columns in my User class. Can anyone tell me how to easily delete a number of columns? Currently the only way I can delete them is individually through the Parse online interface.
You can use the Schema API to delete columns from your class; check the documentation for details. A good strategy might be to fetch the list of columns the class currently has, build a list of the columns you want to keep, and then delete every column that is not in that list.
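A hedged sketch of that strategy via the Schema API's REST interface (the server URL and the keep list are placeholders; the Schema API requires the master key):

import requests

SERVER = "https://YOUR_PARSE_SERVER/parse"  # placeholder URL
headers = {
    "X-Parse-Application-Id": "YOUR_APP_ID",
    "X-Parse-Master-Key": "YOUR_MASTER_KEY",
    "Content-Type": "application/json",
}

# 1. Fetch the current schema of the class
schema = requests.get(f"{SERVER}/schemas/_User", headers=headers).json()

# 2. Columns to keep (placeholder list); built-ins should always stay
keep = {"objectId", "createdAt", "updatedAt", "ACL", "username", "email"}

# 3. Mark every other column for deletion with the Delete field operation
to_delete = {
    field: {"__op": "Delete"}
    for field in schema["fields"]
    if field not in keep
}

# 4. One PUT drops all unwanted columns at once
requests.put(
    f"{SERVER}/schemas/_User",
    headers=headers,
    json={"className": "_User", "fields": to_delete},
)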
