While capturing packets on Windows 7 x64, timestamps don't seem to have float precision. The code is given below:
from scapy.all import sniff
pkts = sniff(count=10)
pkts[0].time
# for higher-precision output
print('%.6f' % pkts[0].time)
Output:
1506009934
1506009934.000000
Any ideas on how to get precise timestamp values?
This has been fixed in Scapy's development version. Get it from the official repository and your code should give the results you expect.
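As a quick check after installing the development version (for example with pip install git+https://github.com/secdev/scapy.git), the same code should now print sub-second digits:

from scapy.all import sniff

pkts = sniff(count=10)
print('%.6f' % pkts[0].time)  # expect nonzero digits after the decimal point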
Title may be wildly incorrect for what I'm trying to work out.
I'm trying to interpret packets I am receiving from a racing game in a way that I understand, but I honestly don't know what I'm looking at or what to search for to understand it.
Information on the packets I am receiving is here:
https://forums.codemasters.com/topic/54423-f1%C2%AE-2020-udp-specification/?tab=comments#comment-532560
I'm using Python to print the packets. Here's a snippet of the output, which I don't understand how to interpret:
received message: b'\xe4\x07\x01\x03\x01\x07O\x90.\xea\xc2!7\x16\xa5\xbb\x02C\xda\n\x00\x00\x00\xff\x01\x00\x03:\x00\x00\x00 A\x00\x00\xdcB\xb5+\xc1#\xc82\xcc\x10\t\x00\xd9\x00\x00\x00\x00\x00\x12\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$tJ\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01
I'm very new to coding, and not sure what my next step is, so a nudge in the right direction will help loads, thanks.
This is the Python code:
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 20777

sock = socket.socket(socket.AF_INET,    # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(4096)
    print("received message:", data)
The website you link to describes the data format. All data is represented as a series of 1s and 0s; a byte is a group of eight of them. However, just because you have a series of bytes doesn't mean you know how to interpret them. Do they represent a character? An integer? Can that integer be negative? All of that is defined by whoever crafted the data in the first place.
The type descriptions you see at the top tell you how to actually interpret that series of 1s and 0s. When you see "uint8", that is an "unsigned integer that is 8 bits (1 byte) long"; in other words, a positive number between 0 and 255. An "int8", on the other hand, is an "8-bit signed integer", i.e. a number that can be positive or negative (so the range is -128 to 127). The same basic idea applies to the *16 and *64 variants, just with 16 or 64 bits. A float represents a floating-point number (a number with a fractional part, such as 1.2345), generally 4 bytes long. Additionally, you need to know the order in which to interpret the bytes within a word (left-to-right or right-to-left). This is referred to as endianness, and every computer architecture has a native endianness (big-endian or little-endian).
Given all of that, you can interpret the PacketHeader. The easiest way is probably to use the struct module in Python. Details can be found here:
https://docs.python.org/3/library/struct.html
As a proof of concept, the following will interpret the first 24 bytes:
import struct
data = b'\xe4\x07\x01\x03\x01\x07O\x90.\xea\xc2!7\x16\xa5\xbb\x02C\xda\n\x00\x00\x00\xff\x01\x00\x03:\x00\x00\x00 A\x00\x00\xdcB\xb5+\xc1#\xc82\xcc\x10\t\x00\xd9\x00\x00\x00\x00\x00\x12\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$tJ\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01'
# Note that I am only taking the first 24 bytes. You must pass data of
# exactly the length required by the format to the unpack function. We don't
# know what everything else is until after we parse out the header.
header = struct.unpack('<HBBBBQfIBB', data[:24])
print(header)
You basically want to read the first 24 bytes to get the header of the message. From there, you need to use the m_packetId field to determine what the rest of the message is. As an example, this particular packet has a packetId of 7, which is a "Car Status" packet. So you would look at the packing format for the struct CarStatus further down on that page to figure out how to interpret the rest of the message. Rinse and repeat as data arrives.
Update: In the format string, the < tells struct to interpret the bytes as little-endian with no alignment padding (the documentation says the data is little-endian and packed). I'd recommend reading the entire Format Characters section of the struct documentation above to fully understand what happens with alignment, but in a nutshell: in native mode, struct lays fields out the way your computer would align them in memory, which may not match the packed format you expect. In this case, HBBBBQ takes up 2 bytes more than you'd expect, because a Q must start on an 8-byte boundary while HBBBB only occupies 6 bytes, so Python pads 2 extra bytes to make everything line up. Using < at the front ensures both that the bytes are interpreted in the correct order and that no alignment padding is added.
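You can see that padding directly with struct.calcsize:

import struct

print(struct.calcsize('HBBBBQfIBB'))   # 26 on a typical 64-bit build: 2 pad bytes before the Q
print(struct.calcsize('<HBBBBQfIBB'))  # 24: little-endian mode adds no padding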
Just for information, if someone else is looking for this: in Python there is an existing library, f1-2020-telemetry. Its documentation is missing a "how to use" part, so here is a snippet:
from f1_2020_telemetry.packets import *
...
udp_socket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
udp_socket.bind((host, port))
while True:
    udp_packet = udp_socket.recv(2048)
    packet = unpack_udp_packet(udp_packet)
    if isinstance(packet, PacketSessionData_V1):  # refer to the docs for classes/attributes
        print(packet.trackTemperature)            # for example
    if isinstance(packet, PacketParticipantsData_V1):
        for i, participant in enumerate(packet.participants):
            print(DriverIDs[participant.driverId])  # the library has mappings for driver/track names
Situation:
I'm trying to manipulate/hack a Debian kernel to be able to use 9-bit UART via the parity hack that you can find referenced everywhere on the net. For some reason Tx is working just fine, but on Rx we're dropping bytes. We're expecting 9 bytes back, including the CRC, but we get a variable amount back, between 4 and 7. It varies with the exact same code run a few times in a row, so it almost appears to be buffer related. Hooking it up to the logic analyzer, the response from the slave device is proper (all 9 expected bytes), but it is getting "mangled" somewhere low-level that I can't track down.
Attempted:
So I started dumping all the bytes (chars) that come in here:
static void serial_omap_rdi(struct uart_omap_port *up, unsigned int lsr)
Which still results in only seeing 4-7 bytes of bad data. I'm assuming this is due to parity/framing/FIFO errors, but I just can't seem to get any deeper to see exactly where it's failing.
Questions:
Has anyone seen this type of discarded data on the UART line with a Linux kernel? If so, do you know any possible avenues for tracking it down?
Is there any place in the kernel where I can dump the bit-by-bit data coming in on the UART Rx line?
Any help is greatly appreciated, as I'm at my wits' end understanding how and where exactly the kernel is discarding these bytes.
Platform:
BBB Debian Linux 3.8.13-bone53 armv7l GNU/Linux
16750 UART (I believe)
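One avenue worth ruling out before patching deeper into the driver: depending on the termios flags, the tty layer may silently discard or zero out bytes that arrive with parity errors, which would look exactly like this Rx loss. A minimal userspace sketch of keeping and marking those bytes instead (the /dev/ttyO1 device path is an assumption for the BBB, and parity itself is presumed already enabled by the 9-bit hack):

import termios

# Ask the tty layer to keep and mark parity-errored bytes instead of dropping them.
port = open('/dev/ttyO1', 'rb', buffering=0)  # assumed BBB UART device node
iflag, oflag, cflag, lflag, ispeed, ospeed, cc = termios.tcgetattr(port.fileno())
iflag &= ~termios.IGNPAR                 # do not silently drop parity errors
iflag |= termios.INPCK | termios.PARMRK  # check parity; prefix bad bytes with \xff\x00
termios.tcsetattr(port.fileno(), termios.TCSANOW,
                  [iflag, oflag, cflag, lflag, ispeed, ospeed, cc])

data = port.read(32)  # the 9 expected bytes may now arrive with \xff\x00 markers
print(data)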
I am using an Arduino Leonardo to transmit a string to a wifi module. The format of command that the wifi module recognizes is:
AT60,1,content to a server
I am using a virtual server (TCP/IP Builder) to test the content I receive.
Here is the content I want to send:
smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON
Since I need to send it again and again, I use a loop. On the virtual server side, the content I got is:
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&eviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devieId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003deviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&deiceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
This is the QUESTION: there is one persistent mistake in the content I received: the deviceId part is never correct. It's so weird.
Here is part of the related code:
//In Uart.cpp
//These three lines send a formatted string: "AT60,1,content"
Serial1.write("AT60,");
Serial1.write(channelID); //channel ID = 1 here
Serial1.write(reportIsFire, 76);
//In Uart.h
//Definition of the string I need to send; it fills 76 bytes
//(75 characters plus the terminating null).
char reportIsFire[76] = ",smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON \n";
Here is a little background on this application:
I am using the Arduino 1.5.8 IDE with Visual Studio.
Since the serial buffer of the Arduino is only 64 bytes, I have already changed the buffer size to 128 bytes in "HardwareSerial.h" to send out this large string.
The baud rate is 115200 and I am using Serial1. I have used Serial1 to transmit a few other characters and it works fine.
I would appreciate it if you have any ideas about this question.
I am betting that the serial baud rate of the Arduino is not 100% accurate. Increasing the buffer size will not matter if the data is being lost due to a timing issue on the physical link.
I'd recommend double-checking the code that initializes the serial baud-rate generator. It may be possible to get a rate closer to 115,200 by adjusting the available settings, altering the main clock speed (if possible), implementing some form of flow control, or all of the above.
In extreme cases, you may consider using a special-frequency oscillator. Many Microchip PICs use an internal or external 4 MHz or 8 MHz crystal, but this can produce far too much timing error for lengthy serial transmissions at high speed. In that case, something special like a 7.3728 MHz crystal can be used, bringing the accuracy to exactly 100% (at least on some PIC devices).
Lastly, another consideration is if any pre-emptive code is running on the device, such as interrupts or timers which could inadvertently interfere with the serial output.
I don't have an answer, but I suspect the most likely problem is that the Wifi card can't read characters at a sustained 115200 baud rate. If possible, set the Wifi baud rate and the Arduino Serial.begin() to a lower rate, such as 57600 or 19200.
If the Arduino baud rate was simply inaccurate, I'd expect to see the problem appearing at random locations in the string, rather than about 40 characters in.
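To put rough numbers on the timing question: assuming the Leonardo's 16 MHz clock and the AVR double-speed UART formula baud = f_cpu / (8 * (UBRR + 1)) from the ATmega32U4 datasheet, the achievable rates and their errors can be estimated like this:

# Estimate AVR UART baud-rate error, assuming a 16 MHz clock and double-speed mode.
f_cpu = 16_000_000

for target in (115200, 57600, 19200):
    ubrr = round(f_cpu / (8 * target)) - 1   # nearest divider setting
    actual = f_cpu / (8 * (ubrr + 1))
    err = 100 * (actual - target) / target
    print(f'{target:>6}: actual {actual:8.0f} baud, error {err:+.2f}%')

Under those assumptions, 115200 comes out about 2% fast, while 57600 and 19200 are well under 1% off, which is consistent with the advice to try a lower rate.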
I am debugging some SNMP code for an integer overflow problem. Basically, we use an integer to store disk/RAID capacity in KB. However, when a disk/RAID of more than 2 TB is used, it overflows.
I read on some internet forums that SNMP v2c supports Integer64 or Unsigned64. In my test it still just sends the lower 32 bits, even though I have set the type to Integer64 or Unsigned64.
Here is how I did it:
A standalone program obtains the capacity and writes the data to a file. Example lines for RAID capacity:
my-sub-oid
Counter64
7813857280
/etc/snmp/snmpd.conf has a clause to pass through the OIDs:
pass_persist mymiboid /path/to/snmpagent
In the mysnmpagent source, I read the OID map from the file into an oid/type/value structure and print it to stdout:
printf("%s\n", it->first.c_str());
printf("%s\n", it->second.type.c_str());
printf("%s\n", it->second.value.c_str());
fflush(stdout);
Using snmpget to get the sub-OID returns:
mysuboid = Counter32: 3518889984
I used tcpdump, and the last segment of the value portion is:
41 0500 d1be 0000
0x41 should be the tag, 0x05 the length, and the value is only carrying the lower 32 bits of the capacity. (Note that 7813857280 is 0x1.d1.be.00.00.)
I did find that using the string type sends the correct value (in OctetString format), but I want to know if there is a way to use a 64-bit integer in SNMP v2c.
I am running NET-SNMP 5.4.2.1, though.
Thanks a lot.
Update:
Found the following in the net-snmp documentation for snmpd.conf, regarding pass (and probably also pass_persist). I guess it's forcing the Counter64 down to Counter32.
Note:
The SMIv2 type counter64 and SNMPv2 noSuchObject exception are not supported.
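Note that the quoted restriction is for the pass directive; newer net-snmp releases document counter64 as supported for pass_persist, though it's worth verifying against the snmpd.conf(5) man page shipped with 5.4.2.1. If it is supported, a minimal pass_persist responder could look like this sketch (the OID is a placeholder, and error/getnext handling is omitted):

#!/usr/bin/env python3
# Minimal pass_persist responder sketch. Assumes this net-snmp's pass_persist
# accepts the "counter64" type string; the OID below is a placeholder.
import sys

OID = '.1.3.6.1.4.1.99999.1.1'   # hypothetical sub-OID
VALUE = 7813857280               # capacity in KB, wider than 32 bits

for line in sys.stdin:
    cmd = line.strip()
    if cmd == 'PING':
        print('PONG', flush=True)
    elif cmd in ('get', 'getnext'):
        requested = sys.stdin.readline().strip()  # requested OID arrives on the next line
        print(OID)
        print('counter64')
        print(VALUE, flush=True)
    else:
        print('NONE', flush=True)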
You are supposed to use two Unsigned32 objects for the lower and upper 32-bit halves of your large number.
Counter64 is not meant to be used for large numbers this way.
For reference: 17 Common MIB Design Errors (the last one).
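For illustration, splitting the value from the question into two 32-bit halves (a quick sketch; the variable names are arbitrary):

value = 7813857280                  # 0x1D1BE0000
low = value & 0xFFFFFFFF            # 3518889984, the Counter32 value snmpget returned
high = (value >> 32) & 0xFFFFFFFF   # 1
print(high, low)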
SNMP SMIv2 defines a new type, Counter64:
https://www.rfc-editor.org/rfc/rfc2578#page-24
It is in fact an unsigned 64-bit integer, so if your data falls into that range, using Counter64 is proper.
"In my test it'll still just send the lower 32 bits even though I have set the type to integer64 or unsigned64" sounds like a problem, but unless you show more details (like showing some code) on how you tested it out and received the result, nobody might help further.
I'm in a situation that involves the manual reconstruction of raw data, including MFT records and other Windows artifacts. I understand that timestamps in MFT records are 64-bit integers, big endian, calculated as the number of 100-nanosecond intervals since 01/01/1601 00:00:00 UTC. I am also familiar with Windows email header timestamps, which consist of two 32-bit values that combine to form a single 64-bit value, also calculated as the number of 100-nanosecond intervals since 01/01/1601 00:00:00 UTC.
But there are other Windows timestamps with different epochs, such as SQL Server timestamps, which I believe use a date in the 1800s. I cannot find much documentation on any of this. What timestamps are used in Windows other than the two listed above, and how do you break them down?
Here is some code for decoding Windows timestamps:
#include <cstdint>
#include <ctime>
#include <string>

using std::string;

// 100-nanosecond ticks per second
static const uint64_t ONE_HUNDRED_NANO_SEC_TO_SECONDS = 10000000ULL;
// Seconds between the Win32 epoch (1601-01-01) and the UNIX epoch (1970-01-01)
static const uint64_t SECONDS_BETWEEN_WIN32_EPOCH_AND_UNIX_EPOCH = 11644473600ULL;

static string microsoftDateToISODate(const uint64_t &time) {
    // Convert the Win32 tick count to UNIX time_t, then format as ISO 8601.
    time_t tmp = (time / ONE_HUNDRED_NANO_SEC_TO_SECONDS)
                 - SECONDS_BETWEEN_WIN32_EPOCH_AND_UNIX_EPOCH;
    struct tm time_tm;
    gmtime_r(&tmp, &time_tm);
    char buf[256];
    strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S", &time_tm);
    return string(buf);
}
And for reference on Mac timestamps:
Apple Mac and Unix timestamps
http://developer.apple.com/library/mac/#qa/qa1398/_index.html
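For the two-32-bit-value form mentioned in the question (e.g. email header timestamps), here is a minimal Python sketch; the function name and sample values are hypothetical:

from datetime import datetime, timedelta, timezone

def filetime_to_iso(high: int, low: int) -> str:
    # Combine two 32-bit halves into a 64-bit FILETIME value
    # (100 ns ticks since 1601-01-01 UTC) and format it as ISO 8601.
    ticks = (high << 32) | low
    win32_epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return (win32_epoch + timedelta(microseconds=ticks // 10)).isoformat()

print(filetime_to_iso(0x01D90000, 0x00000000))  # a value landing in late 2022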