Cannot flush an nftables set's elements

I am currently learning nftables in a test environment, and I'm working with nftables sets.
I was on version 0.7, and since my tests weren't working I updated to version 0.9.4, but my problem was still the same.
I can create sets on my table without any problem, and my set elements must contain IPv4 addresses.
I have worked with nftables tables, chains and sets without problems; my rules worked, etc.
What I want to do, but can't find how to do, is to delete all my set's elements without specifying the IPv4 addresses one by one.
Let's say my table's name is test and my set's name is tmp, with an ipv4_addr type; my configuration looks like this:
table ip test {
    set tmp {
        type ipv4_addr
    }
}
I can add an element to this set successfully with this command:
nft add element ip test tmp { 10.10.10.10 }
Now what I want to do is delete all the elements of my set. I looked in the man page of nft, and it says that I can flush all elements from my set with the flush command:
SETS
[...]
flush Remove all elements from the specified set.
So I tried this command to delete all the elements from my set:
nft flush set test tmp
But it returns this error:
Error: Could not process rule: Invalid argument
flush set test tmp
^^^^^^^^^^^^^^^^^^^
I tried a lot of commands in the same way (adding "table" before "set", not specifying the table); it always returns an error, but not always the same one.
I think I must be doing something wrong, but I can't figure out what. If you have any idea, I would be very thankful!
Maybe my overall configuration is bad, and I shouldn't think of sets that way?
And if it's not possible to flush the elements of a set, is there another way to delete all elements from a set (besides defining a timeout flag)?
Sorry if my message isn't clear; I'm French and it's a little hard to describe a problem in another language.
Thanks!
Regards.

So I contacted the netfilter team and they gave me an answer.
The flush option for a set only works from Linux 4.10 onwards, and my kernel version was older.
I found a way to flush the set anyway with these commands on Debian, if you are interested:
Store the elements from the set in a variable:
elements=$(nft list set test tmp | awk '/{ /,/}/' | cut -d '=' -f 2)
Delete the elements with the delete command:
nft delete element test tmp $elements
Hope it'll help some people.
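If it helps, here is the same workaround as a small one-shot script (an untested sketch; it assumes the example table test and set tmp from above, and a kernel too old for "nft flush set"):
#!/bin/sh
# Grab the "elements = { ... }" part of the set listing.
elements=$(nft list set ip test tmp | awk '/{ /,/}/' | cut -d '=' -f 2)
# Only run delete if the set actually contains elements.
if [ -n "$elements" ]; then
    nft delete element ip test tmp $elements
fi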

If you are using an nft script for the atomic update functionality, you can also store your set as a variable in that script.
Using a named set, you would do something like:
#!/usr/sbin/nft -f
flush set ip test tmp
table ip test {
    set tmp {
        type ipv4_addr
        elements = { 10.10.10.10 }
    }
    chain somechain {
        type filter hook input priority 0; policy accept;
        ip saddr @tmp meta nftrace set 1
    }
}
but since that doesn't work, you can do this instead:
#!/usr/sbin/nft -f
define tmp = { 10.10.10.10 }
flush chain ip test somechain
table ip test {
    chain somechain {
        type filter hook input priority 0; policy accept;
        ip saddr $tmp meta nftrace set 1
    }
}
This keeps the readability of using the named set, but it is converted to an inline anonymous set upon execution.
This updates the whole rule, and thus "flushes" the old values in the set.
Of course, this requires that the rule is in a chain you can flush and recreate on every run.
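For example, assuming the second script is saved as /etc/nftables/update-test.nft (the path and file name are just examples), each run then atomically swaps in the new set contents:
nft -f /etc/nftables/update-test.nft
nft list table ip test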

Logging and asserting the number of previously-unknown DOM elements

It's my first time using Cypress and I have almost finalized my first test, but to do so I need to assert against an unknown number. Let me explain:
When the test starts, a random number of elements is generated, and I shouldn't have control over that number (it's a requirement). So I'm trying to get that number this way:
var previousElems = cy.get('.list-group-item').its('length');
I'm not really sure if I'm getting the right data, since I cannot log it (the "cypress console" shows me "[Object]" when I print it). But let's say that line returns (5), to exemplify.
During the test, I simulate a user creating extra elements (2) and removing one (1) element; in effect, the user ends up creating one single extra element.
So, at the end of the test, I need to check that the number of elements with the same class equals (5+2-1) = (6) elements. I'm doing it this way:
cy.get('.list-group-item').its('length').should('eq', (previousTasks + 1));
But I get the following message:
CypressError: Timed out retrying: expected 10 to equal '[object Object]1'
So, how can I log and assert this?
Thanks in advance.
PS: I also tried:
var previousTasks = (Cypress.$("ul").children)? Cypress.$("ul").children.length : 0;
But it always returns a fixed number (2), even if I put a wait before it to make sure all the items are fully loaded.
I also tried the same with childNodes, but it always returns 0.
Your problem stems from the fact that Cypress test code is run all at once before the test starts. Commands are queued to be run later, and so storing variables as in your example will not work. This is why you keep getting objects instead of numbers; the object you're getting is called a chainer, and is used to allow you to chain commands off other commands, like so: cy.get('#someSelector').should('...');
Cypress has a way to get around this though; if you need to operate on some data directly, you can provide a lambda function using .then() that will be run in order with the rest of your commands. Here's a basic example that should work in your scenario:
cy.get('.list-group-item').its('length').then(previousCount => {
// Add two elements and remove one...
cy.get('.list-group-item').its('length').should('eq', previousCount + 1);
});
If you haven't already, I strongly suggest reading the fantastic introduction to Cypress in the docs. This page on variables and aliases should also be useful in this case.
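As a side note, the same counting idea can also be written with an alias instead of nesting everything inside one callback. This is only a sketch (the alias name previousCount is mine), in the spirit of the aliases page linked above:
cy.get('.list-group-item').its('length').as('previousCount');
// ... simulate the user creating two elements and removing one ...
cy.get('@previousCount').then(previousCount => {
    cy.get('.list-group-item').should('have.length', previousCount + 1);
});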

Chef Ruby hash.merge vs hash[new_key]

I ran into an odd issue when trying to modify a Chef recipe. I have an attribute that contains a large hash of hashes. For each of those sub-hashes, I wanted to add a new key/value to a 'tags' hash within. In my recipe, I create a 'tags' local variable for each of those large hashes and assign the tags hash to that local variable.
I wanted to add a modification to the tags hash, but the modification had to be done at compile time, since the value was dependent on a value stored in an input JSON. My first attempt was to do this:
tags = node['attribute']['tags']
tags['new_key'] = json_value
However, this resulted in a spec error that indicated I should use node.default, or the equivalent attribute assignment function. So I tried that:
tags = node['attribute']['tags']
node.normal['attribute']['tags']['new_key'] = json_value
While I did not have a spec error, the new key/value was not sticking.
At this point I reached my "throw stuff at a wall" phase and used the hash.merge function, which I used to think was functionally identical to hash['new_key'] for a single key/value pair addition:
tags = node['attribute']['tags']
tags.merge({ 'new_key' => 'json_value' })
This ultimately worked, but I do not understand why. What functional difference is there between the two methods that causes one to be seen as a modification of the original chef attribute, but not the other?
The issue is you can't use node['foo'] like that. That accesses the merged view of all attribute levels. If you then want to set things, it wouldn't know where to put them. So you need to lead off by telling it where to put the data:
tags = node.normal['attribute']['tags']
tags['new_key'] = json_value
Or just:
node.normal['attribute']['tags']['new_key'] = json_value
Beware of setting things at the normal level, though: it is not reset at the start of each run. That is probably what you want here, but it does mean that even if you remove the recipe code doing the set, the value will still be in place on any node that has already run it. If you want to actually remove things, you have to do it explicitly.
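For example, a minimal sketch of such an explicit removal (node.rm_normal is available in newer Chef versions; the attribute path is the one from the question):
# Removes only the normal-level value; default and override levels
# (if any) are left untouched.
node.rm_normal('attribute', 'tags', 'new_key')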

Darwin Streaming Server install problems on OS X

My problem is the same as the one mentioned in this answer. I've been trying to understand the code, and this is what I learned:
It is failing in the file parse_xml.cgi, which tries to get messages (return $message{$name}) from a file named messages (located in the html_en directory).
The $messages value comes from the method GetMessageHash in the file adminprotocol-lib.pl:
sub GetMessageHash
{
    return $ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"}
}
The $ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"} entry is set in the file streamingadminserver.pl:
$ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"} = $messages{"en"};
I don't know anything about Perl, so I have no idea what the problem can be. From what I saw, $messages{"en"} has the correct value (if I do print($messages{"en"}{'SunStr'}) I get the value "Sun").
However, if I try print($ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"}{'SunStr'}) I get nothing. It seems like $ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"} is not set.
I tried this simple example:
$ENV{"HELLO"} = "hello";
print($ENV{"HELLO"});
and it works fine; it prints "hello".
Any idea of what the problem can be?
Looks like $messages{"en"} is a HashRef: a pointer to some memory address holding a key-value store. You could even print the associated memory address:
perl -le 'my $hashref = {}; print $hashref;'
HASH(0x1548e78)
0x1548e78 is the address, but it's only valid within the same running process. Re-run the sample command and you'll get a different address each time.
HASH(0x1548e78) is also just a human-readable representation of the real stored value. Setting $hashref2 = "HASH(0x1548e78)"; won't create a real reference, just a copy of the human-readable string.
You could easily prove this theory using print $ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"} in both scripts.
Data::Dumper is typically used to show the contents of the referenced hash (memory location):
use Data::Dumper;
print Dumper($messages{"en"});
# or
print Dumper($ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"});
This will also show if the pointer/reference could be dereferenced in both scripts.
The solution for your problem is probably passing the value instead of the HashRef:
$ENV{"QTSSADMINSERVER_EN_SUN"} = $messages{"en"}->{SunStr};
Best practice is using a -> between both keys. The " or ' quotes around the key are also optional if the key is a plain word.
But passing everything through environment variables feels wrong. They might not be able to hold references on OS X (I don't know). You might want to extract the string storage to an include file and load it via require.
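A rough sketch of that require approach (the file name messages_en.pl and the variable name are my own invention):
# messages_en.pl -- plain Perl file holding the string storage
our %MESSAGES_EN = (
    SunStr => 'Sun',
);
1;    # a required file must return a true value

# in the consuming script:
require './messages_en.pl';
print $MESSAGES_EN{'SunStr'};    # prints "Sun"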
See http://www.perlmaven.com/ or http://learn.perl.org for more about Perl.
Fixed code (note that $$ENV{...} here dereferences a plain Perl scalar $ENV holding a hash reference, not the %ENV environment hash, so the reference stays valid within the process):
$$ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"} = $messages{"en"};
sub GetMessageHash
{
    return $$ENV{"QTSSADMINSERVER_EN_MESSAGEHASH"};
}
ref:
https://github.com/guangbin79/dss6.0.3-linux-patch

What does the "Name=SWEIPS" parameter mean in Siebel?

I am writing a script in LoadRunner for Siebel Open UI. All my requests contain this parameter, with different values. What does it mean?
Examples (from different requests):
"Name=SWEIPS", Value = #0'0'1'0'GetProfileAttr'3'attrName'SBRF Position Id'"
"Name=SWEIPS", Value = #0'0''0'3'1-SQE21A, 1-SQL21E, 1SQE31"
And so on.
Can I simply delete it?
Can I simply delete it? - No, you're not supposed to delete it.
Compare the SWEIPS values by recording twice or thrice with different data sets, and check whether there are any date/time values in SWEIPS. If there is nothing to correlate, leave it as it is; there is no need to delete it.
Make sure to correlate values like SWET, ROWID, SWECount, SWEC, and so on.
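For those, the usual LoadRunner pattern is a web_reg_save_param call placed before the step whose response returns the value. This is only a sketch: the boundaries below are placeholders to illustrate the idea, and you must take the real ones from your own recording:
// Capture the server-generated token so it can be replayed later.
web_reg_save_param("SWECount_val", "LB=SWEC=", "RB=&", LAST);
// Later requests then send {SWECount_val} instead of the recorded literal.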

Validating IP addresses in DataStage

I have a source file that contains two fields: IP_ADDRESS and USER_NAME. I want to check whether the IP address is valid before loading it into the data warehouse using DataStage. How do I do this?
I was browsing Stack Overflow and think I might have a solution to your question. The gist of it:
1. Create a job that grabs all of the IP_ADDRESS values from the file and sends them to a BASIC transformer (search for "BASIC transformer" in DataStage; it is NOT the one that is normally on the palette).
2. In that transformer, set a stage variable that calls SetUserStatus(), and write the column out to a Peek stage (you don't need the output at all; the SetUserStatus call is the important part). This lets you pass the command output (the list of IP addresses) up to a sequence.
3. In the sequence, start with the job you just created (the BASIC transformer job) and link it to a User Variables Activity. In that stage, set the name to something like 'IP Address' and the expression to IP_ADDRESS.$UserStatus.
4. Use a Loop to take that output, which is now a list, and send each individual IP address to an Execute Command stage running a ping command, to see whether it is a valid, reachable IP address.
5. If it is valid, have the job that writes USER_NAME and IP_ADDRESS do a 'Select' where IP_ADDRESS equals the valid address. For the ones that aren't valid, send them down a different path and write them out to a '.txt' file somewhere so you know which ones weren't valid.
I'm sure you will need a few more steps in there, but that should be the gist of it.
Hope my quick stab at your issue helps.
Yes, you can use a transformer, or a transformer and a filter, to do that, depending on the version of DataStage you're using. If you're using PX, just encode the validation logic in a transformer stage, and then, on the output link, set up a filter that doesn't allow rows to pass forward if they didn't pass the validation logic.
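Purely as an illustration of what that validation logic might look like, here is a sketch in DataStage BASIC (for the BASIC transformer mentioned above; PX transformer expression syntax differs, so treat the exact functions as assumptions to verify on your version):
* Four groups of digits separated by literal dots, and every octet <= 255.
Valid = IP_ADDRESS MATCHES "1N0N'.'1N0N'.'1N0N'.'1N0N" AND FIELD(IP_ADDRESS, ".", 1) <= 255 AND FIELD(IP_ADDRESS, ".", 2) <= 255 AND FIELD(IP_ADDRESS, ".", 3) <= 255 AND FIELD(IP_ADDRESS, ".", 4) <= 255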
