When I print the attributes of the XCUIApplication(), what does the value of the traits key represent? - xcode-ui-testing

Example output from dumping the XCUIApplication()
Application, pid: 34372, {{0.0, 0.0}, {320.0, 568.0}}, label: 'MyApp'
Window, {{0.0, 0.0}, {320.0, 568.0}}
Other, {{0.0, 0.0}, {320.0, 568.0}}
Other, traits: 8589934592, {{0.0, 0.0}, {320.0, 568.0}}
NavigationBar, traits: 35192962023424, {{0.0, 20.0}, {320.0, 44.0}}, identifier: 'SillyDashboardView'
In the output above, what does traits: 8589934592 represent?
Reviewing the XCUIApplication object doesn't help, nor can I find any documentation from Apple. It would be useful to know what these values represent.

According to the official documentation, UIAccessibilityTraits is:
A mask that contains the OR combination of the accessibility traits that best characterize an accessibility element.
What actually is UIAccessibilityTraits? It is just another alias for a 64-bit integer value, which means there are 64 possible traits a view can have, each bit representing one trait. Looking at the list of all possible traits, you can see that there are about 17 known traits (as Oletha pointed out, there may be some unknown traits that Apple uses but doesn't share with us).
If you print some of them, like this:
print(UIAccessibilityTraitNone) //Prints 0
print(UIAccessibilityTraitButton) //Prints 1
print(UIAccessibilityTraitLink) //Prints 2
print(UIAccessibilityTraitImage) //Prints 4
//...
You can see that every trait is a power of 2, i.e. a value with exactly one bit set. So OR-ing the individual traits together gives the final number you see when you print out the XCUIApplication() hierarchy.
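For example, OR-ing two of the constants above just combines their bits (a quick sketch using the same constant names):
let combined = UIAccessibilityTraitButton | UIAccessibilityTraitImage
print(combined) //Prints 5, i.e. 1 | 4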
So, in your example, if you pick the one with number 35192962023424, you have:
35192962023424 or in binary:
0000 0000 0000 0000 0010 0000 0000 0010 0000 0000 0000 0000 0000 0000 0000 0000
                      ^              ^
Which means that there are two traits applied for this view. The one with value 35184372088832, or in binary:
0000 0000 0000 0000 0010 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
                      ^
and the one with value 8589934592, or in binary:
0000 0000 0000 0000 0000 0000 0000 0010 0000 0000 0000 0000 0000 0000 0000 0000
                                     ^
Looking at the list of known traits, you can conclude that neither of those two values corresponds to a documented trait.
My guess, looking at the output, is that 35184372088832 is a trait for navigation bars and 8589934592 is a trait for other elements. Maybe this is how the navigationBars and otherElements queries work.
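If you want to do this decomposition programmatically, here is a small sketch (not part of the original answer) that splits a traits bitmask into its individual power-of-two components, so each one can be compared against the documented trait constants:
let traits: UInt64 = 35192962023424

for bit in 0..<64 {
    let value = UInt64(1) << bit
    if traits & value != 0 {
        print("bit \(bit) is set, value \(value)")
    }
}
//Prints:
//bit 33 is set, value 8589934592
//bit 45 is set, value 35184372088832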

The traits number will be different based on which accessibilityTraits are set on the object. Some objects come with accessibility traits out of the box, and you can add or remove them as you please. These traits mean different things to XCTest, e.g. the .button trait means the element will show up when you query for buttons, and the .selected trait affects the value of XCUIElement.isSelected...
It's possible that this number is also affected by other properties that Apple doesn't share with us, but for the purposes of a UI test, you should only need to observe the value of accessibilityTraits.
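As a hedged illustration of that point (the test class name, timeout and use of firstMatch are illustrative, not from the answer above), a UI test only sees what the traits expose:

import XCTest

final class TraitsExampleTests: XCTestCase {
    func testTraitDrivenQueries() {
        let app = XCUIApplication()
        app.launch()

        // Anything exposing the button trait is matched by the buttons query
        let firstButton = app.buttons.firstMatch
        XCTAssertTrue(firstButton.waitForExistence(timeout: 5))

        // The selected trait is surfaced through isSelected
        print(firstButton.isSelected)

        // The navigation bar from the dump above can be reached through its identifier
        XCTAssertTrue(app.navigationBars["SillyDashboardView"].exists)
    }
}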

Related

How to find tag bit in cache given word address

Caches are important to providing a high-performance memory hierarchy
to processors. Below is a list of 32-bit memory address references,
given as word addresses.
3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253
For each of these references, identify the binary address, the tag,
and the index given a direct-mapped cache with two-word blocks and a
total size of 8 blocks. Also list if each reference is a hit or a
miss, assuming the cache is initially empty.
the answer is: (given as a figure, not reproduced here)
I understand that the task is to find the tag, index, and offset values from the 32-bit memory address and use them in the cache table, but I don't quite understand what it means for the memory address to be given as a word. For example, does the word address 3 actually mean 0000 0000 0000 0000 0000 0000 0000 0011? Given a word address, how can it be treated as the 32-bit address in the figure below?
For the word address 3 (0000 0000 0000 0000 0000 0000 0000 0011), the offset would be 1, the index would be 001, and the tag would be 0000 0000 0000 0000 0000 0000 0000.
2 words in block = 1 bit for offset (2^1).
8 blocks in cache = 3 bits for index (2^3).
32 - 4 = 28 bits for tag.
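To make the splitting concrete, here is a small sketch (not part of the original answer, written in Swift for illustration) that decomposes each word address in the list and replays the references through a direct-mapped cache with 2-word blocks and 8 blocks:

let references = [3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253]

let offsetBits = 1                                           // 2 words per block -> 2^1
let indexBits = 3                                            // 8 blocks -> 2^3
var cache = [Int?](repeating: nil, count: 1 << indexBits)    // one stored tag per block

for addr in references {
    let offset = addr & ((1 << offsetBits) - 1)
    let index = (addr >> offsetBits) & ((1 << indexBits) - 1)
    let tag = addr >> (offsetBits + indexBits)
    let hit = cache[index] == tag
    cache[index] = tag
    print("addr \(addr): tag \(tag), index \(index), offset \(offset) -> \(hit ? "hit" : "miss")")
}
//e.g. the first line printed is: addr 3: tag 0, index 1, offset 1 -> miss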

What is the byte/bit order in this Microsoft document?

This is the documentation for the Windows .lnk shortcut format:
https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-shllink/16cb4ca1-9339-4d0c-a68d-bf1d6cc0f943
The ShellLinkHeader structure is described like this:
This is a file:
Looking at HeaderSize, the bytes are 4c 00 00 00 and it's supposed to mean 76 decimal. This is a little-endian integer, no surprise here.
Next is the LinkCLSID with the bytes 01 14 02 00 00 00 00 00 c0 00 00 00 00 00 00 46, representing the value "00021401-0000-0000-C000-000000000046". This answer seems to explain why the byte order changes: the last 8 bytes are a byte array, while the others are little-endian numbers.
My question is about the LinkFlags part.
The LinkFlags part is described like this:
And the bytes in my file are 9b 00 08 00, or in binary:
9    b    0    0    0    8    0    0
1001 1011 0000 0000 0000 1000 0000 0000
 ^
By comparing different files I found out that the bit marked with ^ is bit 6/G in the documentation (marked in red).
How to interpret this? The bytes are in the same order as in the documentation but each byte has its bits reversed?
The issue here springs from the fact that the list of bits shown in these specs is not meant to have a number fitted underneath it at all. It is meant to have a list of bits underneath it, and that list runs from the lowest bit to the highest bit, which is the complete inverse of how we read numbers from left to right.
The list clearly shows bits numbered from 0 to 31, though, meaning this is indeed one 32-bit value and not four separate bytes. Specifically, this means the original bytes need to be interpreted as a single 32-bit integer before doing anything else. As with all other values in the format, that means reading it as a little-endian number, with its bytes reversed.
So your 9b 00 08 00 becomes 0008009b, or, in binary, 0000 0000 0000 1000 0000 0000 1001 1011.
But, as I said, that list in the specs shows the bits from lowest to highest. So to fit them under that, reverse the binary version:
0           1            2           3
0123 4567 8901 2345 6789 0123 4567 8901
ABCD EFGH IJKL MNOP QRST UVWX YZ#_ ____
---------------------------------------
1101 1001 0000 0000 0001 0000 0000 0000
       ^
So bit 6, indicated in the specs as 'G', is 0.
This whole thing makes a lot more sense if you invert the specs, though, and list the bits logically, from highest to lowest:
 3           2            1           0
1098 7654 3210 9876 5432 1098 7654 3210
____ _#ZY XWVU TSRQ PONM LKJI HGFE DCBA
---------------------------------------
0000 0000 0000 1000 0000 0000 1001 1011
                               ^
0    0    0    8    0    0    9    b
This makes the alphabetic references look a lot less intuitive, but it does fit the numeric version underneath perfectly. The marked bit matches your findings (the third bit, counting from the lowest, of the nibble you wrote as '9'), and you can also clearly see that the highest 5 bits are unused.
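To tie this back to code, here is a hedged Swift sketch (not from the spec or the original answer) that assembles the four LinkFlags bytes from the question into the little-endian 32-bit value and then tests a flag by its bit number, using bit 6 (the flag the spec labels G) as the example:

let linkFlagBytes: [UInt8] = [0x9b, 0x00, 0x08, 0x00]

// assemble the little-endian 32-bit value: byte 0 is the least significant byte
let linkFlags = linkFlagBytes.enumerated().reduce(UInt32(0)) { acc, pair in
    acc | (UInt32(pair.element) << (8 * pair.offset))
}

print(String(format: "LinkFlags = 0x%08X", linkFlags)) //LinkFlags = 0x0008009B

let flagG = (linkFlags >> 6) & 1                       // bit 6, labelled G in the spec
print("bit 6 (G) =", flagG)                            //bit 6 (G) = 0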

Why does the CRC of "1" yield the generator polynomial itself?

While testing a CRC implementation, I noticed that the CRC of 0x01 usually (?) seems to be the polynomial itself. When trying to manually do the binary long division however, I keep ending up losing the leading "1" of the polynomial, e.g. with a message of "0x01" and the polynomial "0x1021", I would get
      1 0000 0000 0000    (zero padded value)
(XOR) 1 0000 0010 0001
      ----------------
      0 0000 0010 0001    = 0x0021
But any sample implementation (I'm dealing with XMODEM-CRC here) results in 0x1021 for the given input.
Looking at https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks, I can see how XOR-ing the shift register with the generator polynomial whenever the upper bit leaves the register produces this result. What I don't get is why this step is performed in that manner at all, seeing as it clearly alters the result of a true polynomial division.
I just read http://www.ross.net/crc/download/crc_v3.txt and noticed that in section 9, there is mention of an implicitly prepended 1 to enforce the desired polynomial width.
In my example case, this means that the actual polynomial used as divisor would not be 0x1021, but 0x11021. This results in the leading "1" being dropped, and the remainder being the "intended" 16-bit polynomial:
      1 0000 0000 0000 0000    (zero padded value)
(XOR) 1 0001 0000 0010 0001
      ---------------------
      0 0001 0000 0010 0001    = 0x1021
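For reference, a minimal bit-by-bit sketch of the CRC-16/XMODEM computation (polynomial 0x1021, initial value 0x0000, no reflection); this is an illustrative implementation rather than the one being tested in the question, and it reproduces 0x1021 for the single input byte 0x01:

func crc16Xmodem(_ data: [UInt8]) -> UInt16 {
    var crc: UInt16 = 0x0000
    for byte in data {
        crc ^= UInt16(byte) << 8            // feed the next byte into the top of the register
        for _ in 0..<8 {
            if crc & 0x8000 != 0 {
                // the top bit leaves the register: XOR with the low 16 bits
                // of the implicit 17-bit polynomial 0x11021
                crc = (crc << 1) ^ 0x1021
            } else {
                crc <<= 1
            }
        }
    }
    return crc
}

print(String(format: "%04X", crc16Xmodem([0x01]))) //Prints 1021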

Reverse Engineering - AND 0FF

I'm doing some reverse engineering on a simple crackme app and I'm debugging it with OllyDbg.
I'm stuck on the behavior of the AND instruction with the operand 0x0FF. I assumed it was the C++ equivalent of something like
if (... == true).
So what's confusing is that:
ECX = CCCCCC01
ZF = 1
AND ECX, 0FF
### After instruction
ECX = 00000001
ZF = 0
ZF - I expected it to be set
I don't know why the result in ECX is 1 and why ZF isn't set.
My understanding of AND: 1, 1 = 1 (both bits set), otherwise 0.
Can someone explain this to me?
Thanks for the help.
It's a bit-wise AND, so in binary you have
1100 1100 1100 1100 1100 1100 0000 0001
AND 0000 0000 0000 0000 0000 0000 1111 1111
----------------------------------------
0000 0000 0000 0000 0000 0000 0000 0001
The result is 1, which is non-zero, so AND clears ZF; the zero flag is only set when the result of the operation is 0.
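The same masking in Swift, as a quick sketch with the values from the question:

let ecx: UInt32 = 0xCCCCCC01
let masked = ecx & 0xFF                 // keep only the low byte
print(String(format: "%08X", masked))   //Prints 00000001
print(masked == 0)                      //Prints false: the result is non-zero, so ZF stays 0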

Are these endian transformations correct?

I am struggling to figure this out: I am trying to represent a 32-bit variable in both big and little endian. For the sake of argument, let's say we try the number 666.
Big Endian: 0010 1001 1010 0000 0000 0000 0000
Little Endian: 0000 0000 0000 0000 0010 1001 1010
Is this correct, or is my thinking wrong here?
666 (decimal) as 32-bit binary is represented as:
[0000 0000] [0000 0000] [0000 0010] [1001 1010] (big endian, most significant byte first)
[1001 1010] [0000 0010] [0000 0000] [0000 0000] (little endian, least significant byte first)
Ref.
(I have used square brackets to group 4-bit nibbles into bytes)
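A quick Swift sketch (not part of the original answer) that prints both byte orders for 666; it assumes a little-endian host, which is what current Apple platforms use:

let value: UInt32 = 666

withUnsafeBytes(of: value.bigEndian) { bytes in
    print("big endian:   ", bytes.map { String(format: "%02X", $0) }.joined(separator: " "))
}
//big endian:    00 00 02 9A

withUnsafeBytes(of: value.littleEndian) { bytes in
    print("little endian:", bytes.map { String(format: "%02X", $0) }.joined(separator: " "))
}
//little endian: 9A 02 00 00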
