Error when deleting a node using Cypher in Memgraph - memgraphdb

When I try to delete the node
MATCH (c:Customer {name: 'John Smith'})
DELETE c;
I get the message "Failed to remove node because of it's existing connections. Consider using DETACH DELETE." What should I do?

Just follow the error message and replace DELETE with DETACH DELETE:
MATCH (c:Customer {name: 'John Smith'})
DETACH DELETE c;
DELETE can only be used on nodes that have no relationships. To delete a node along with all of its relationships, you need to add DETACH.
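If you are issuing the query from application code, the same statement works over Bolt. Below is a minimal sketch using the neo4j Python driver (5.x) against a local Memgraph instance; the address and the empty credentials are assumptions, so adjust them to your setup:

from neo4j import GraphDatabase  # Memgraph speaks the Bolt protocol

# Assumed: local Memgraph on the default port, no authentication.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

def detach_delete_customer(tx, name):
    # DETACH DELETE drops the node together with all of its relationships.
    tx.run("MATCH (c:Customer {name: $name}) DETACH DELETE c", name=name)

with driver.session() as session:
    session.execute_write(detach_delete_customer, "John Smith")

driver.close()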

Related

Flow is deleting list values

I am attempting to create a flow which will be used to update the members of various SharePoint permission groups. I ran into an issue with one of the actions not executing because it said the value could not be found. After much trial and error I still could not figure out why it was failing, so I started to remove actions and steps from the flow. I've taken it all the way back to my trigger and one action, and I can't figure out what is causing my issue. Here is the setup.
I have a list with the following fields:
Employee Name - Person or Group
Folder - Choice Column
Action - Choice Column
Flow is triggered when an item is created or modified and has the trigger condition @not(equals(triggerBody()?['Action'],'Updated'))
1st action is just a Get items
When I add an entry to the list and select a person, a Folder, and an Action, the flow will run. But when it does, it deletes or removes the selected choice in the Folder column, leaving it blank. Why would it do that? In the two steps I'm not even specifically calling that field, and if it is due to the fact that it is a choice field, why isn't the Action column value also removed? It is not my intent to delete or remove field values.
I need the value in that field to not be removed, as I intend to call on it later in a concat string, but I can't call what isn't there.
What is going on?
Update #1: As an update, I deleted the original flow and rebuilt it with just the 2 steps but without the trigger condition. Re-ran the flow, and immediately the option selected in the "Folder" column was removed from the list. None of the list columns are set as "required" and the choice fields are not multi-select.
Update #2: Looking at the trigger action settings, the Split On statement is @triggerOutputs()?['body/value']. In the sample I was using to build my flow, they show the statement as @triggerBody()?['value']. There doesn't seem to be any way for me to change the statement; could this have anything to do with why my field value is being removed from the list?

How to remove unwanted nested columns?

I've been tasked to alter the company's Event Export from our PlayFab Environment into Azure. Initially, we set it up to export all events, but after looking at the data we have some exported data that we don't want for legal reasons. I was exploring the Use custom query method and was trying to build a query to get all data except the columns I want to exclude. The problem is that these columns are nested. I've tried using the project-away operator to exclude one column for now, but when I run the below query
['events.all']
| project-away EventData.ColumnToExclude
| limit 100
I get an error.
I'm assuming it's because it doesn't support nested columns. Is there an easy way to exclude the column without having to flatten the data or list all my columns (our developers might create new events without notice, so that won't work)?
UPDATE 1:
I've found that project-away is the syntax to remove a column from a table, but what I needed was a way to remove a key from a JSON/dynamic object, so I found that bag_remove_keys() is the correct approach:
['events.all']
| project EventData=bag_remove_keys(EventData, dynamic(['Key1', 'Key2', '$.Key3.SubKey1']))
But now I am facing another issue. When I use the '$.' notation for subkeys, I get the below error:
Query execution has resulted in error (0x8000FFFF): Partial query failure: Catastrophic failure (message: 'PropertyBagEncoder::Add: non-contiguous entries: ', details: '').
[0]Kusto.Data.Exceptions.KustoDataStreamException: Query execution has resulted in error (0x8000FFFF): Partial query failure: Catastrophic failure (message: 'PropertyBagEncoder::Add: non-contiguous entries: ', details: '').
Timestamp=2022-01-31T13:54:56.5237026Z
If I don't list any subkeys I don't get this issue, and I can't understand why.
UPDATE 2:
I found that bag_remove_keys() has a bug. On the below query I get the error described in UPDATE 1:
datatable(d:dynamic)
[
dynamic(
{
"test1": "val",
"test2": {},
"test3": "val"
}
)
]
| extend d1=bag_remove_keys(d, dynamic(['$.SomeKey.Sub1', '$.SomeKey.Sub2']))
However, if I move the "test2" key to the end, I don't get an error, but d1 will not show the "test2" key in the output.
Also, if I have a key in bag_remove_keys() that matches a key from the input, like | extend d1=bag_remove_keys(d, dynamic(['$.SomeKey.Sub1', '$.SomeKey.Sub2', 'test1'])), then again it will not error, but it will remove "test2" from the output.
Thanks for reporting it, Andrei. It is a bug and we are working on a fix.
Update: the fix has been checked in and will be deployed within two weeks; please open a support ticket if you need it earlier.
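While waiting for the fix, the intended semantics are easy to reproduce outside Kusto. Here's a minimal Python sketch of what bag_remove_keys() is documented to do (the helper and the sample keys are illustrative assumptions, not part of this thread): plain strings remove top-level keys, while '$.'-prefixed paths remove nested ones.

import copy

def bag_remove_keys(bag, keys):
    # Mirrors the documented behavior of Kusto's bag_remove_keys():
    # plain strings remove top-level keys, '$.a.b' removes nested ones.
    result = copy.deepcopy(bag)
    for key in keys:
        if key.startswith("$."):
            *parents, leaf = key[2:].split(".")
            node = result
            for part in parents:
                node = node.get(part)
                if not isinstance(node, dict):
                    break  # path doesn't exist; nothing to remove
            else:
                node.pop(leaf, None)
        else:
            result.pop(key, None)
    return result

# The bag from UPDATE 2: removing a non-existent nested key should
# leave everything except the explicitly listed top-level keys intact.
print(bag_remove_keys(
    {"test1": "val", "test2": {}, "test3": "val"},
    ["$.SomeKey.Sub1", "test1"],
))  # -> {'test2': {}, 'test3': 'val'}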

mongodb inserting in spite of a document with a specific name being present

I have the following piece of code:
coll_name = "#{some_name}_#{differentiator}"
coll_object = @database[coll_name]
idExist = coll_object.find({"adSet.name" => NAME}).first() # look up an existing document by name
if idExist.nil?
  docId = coll_object.insert(request) # not found: insert a new document
else
  docId = idExist["_id"] # found: reuse the existing _id
end
return docId
differentiator can be the same or different across iterations of the loop this code is called from, so each time there can be a new collection or the same collection. Now, if the same collection is received, there might already be a document with name = NAME; in that case no insert should be carried out. However, I have observed that documents with the same NAME are getting inserted. Can anybody help out with this problem?
The explanation for this behavior could be a race condition: the duplicate is inserted by another thread/process between line 3 and line 5 of your snippet. Two threads try to create the same name at the same time, the database returns for both that the name doesn't exist yet, and when those replies arrive, both insert the document.
To prevent this from happening, create a unique index on the name field. This will prevent MongoDB from inserting two documents with the same name. When you do this, you can remove the check for existence before inserting: just try to insert the document, and then call getLastError to find out if it worked. If it didn't, retrieve the existing document with an additional query.
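The original snippet is Ruby, but the same idea looks like this in Python with pymongo; this is a sketch, with the database and collection names made up, and a DuplicateKeyError exception playing the role the answer assigns to the legacy getLastError call:

from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError

# Assumed connection and names; adjust to your environment.
coll = MongoClient()["mydb"]["ad_sets"]

# The unique index makes the database itself reject a second document
# with the same adSet.name, closing the race window described above.
coll.create_index([("adSet.name", ASCENDING)], unique=True)

def insert_or_get_id(request, name):
    try:
        return coll.insert_one(request).inserted_id
    except DuplicateKeyError:
        # Another writer inserted this name first; reuse its _id.
        return coll.find_one({"adSet.name": name})["_id"]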

propel nested set behavior

I would like to build a directed graph using Propel. The behavior I am looking for is similar to the nested set, but with multiple parents for one child.
What exists:
P: Parent Node
C: Child Node
(0,1)P <- (0,n)C
What I need:
(0,n)P <- (0,n)C
I have read this:
http://propelorm.org/behaviors/nested-set.html
and that: https://github.com/CraftyShadow/EqualNestBehavior
Could you give me some direction please?
I found a solution: I am using a many-to-many relationship with the same table on both sides of the relation.
Table Node
Table ParentNode (ParentFK: Node, NodeFK: Node)
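Propel itself is PHP, but the shape of that solution is easy to sketch. Here's an illustrative Python version of the two tables (the table names come from the answer; the sample data is made up):

# Node rows, keyed by id.
nodes = {1: "A", 2: "B", 3: "C"}

# ParentNode rows: each (ParentFK, NodeFK) pair links two Node rows,
# so a child can have any number of parents: (0,n)P <- (0,n)C.
parent_node = [
    (1, 3),  # node 1 is a parent of node 3
    (2, 3),  # node 2 is also a parent of node 3
]

def parents_of(node_id):
    return [p for (p, c) in parent_node if c == node_id]

print(parents_of(3))  # -> [1, 2]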

Ignore error in SSIS

I am getting a "Violation of UNIQUE KEY constraint 'AK_User'. Cannot insert duplicate key in object 'dbo.tblUsers'." error when trying to copy data from an Excel file to a SQL database using SSIS.
Is there any way of ignoring this error and letting the package continue to the next record without stopping?
What I need is: if it inserts three records but the first record is a duplicate, instead of failing it should continue with the other records and insert them.
There is a system variable called Propagate which can be used to continue or stop the execution of the package.
1. Create an OnError event handler for the task which is failing. Generally it is created for the entire Data Flow Task.
2. Press F4 to get the list of all variables and click on the icon at the top to show system variables. By default the Propagate variable will be True; you need to change it to False, which basically means that SSIS won't propagate the error to other components and will let the execution continue.
Update 1:
To skip the bad rows, there are basically two ways to do so:
1. Use Lookup
Try to match the primary key column values in source and destination, and then route the Lookup No Match Output to your destination. If the value doesn't exist in the destination, insert the rows; otherwise just skip them, or redirect them to some table or flat file using the Lookup Match Output.
2. Or you can redirect the error rows to a flat file or a table. Every SSIS Data Flow component has an Error Output.
For example, the Derived Column component has an error output dialogue box where the redirection can be configured. But this option may not be helpful in your case, as redirecting error rows at the destination doesn't work properly: if an error occurs, it redirects the entire data set without inserting any row into the destination. I think this happens because the OLE DB destination does a bulk insert or inserts data using transactions. So try to use a Lookup to achieve your functionality.
