SCSI Inquiry Data

I am new to SCSI programming, so apologies for the basic question. I sent a SCSI INQUIRY command to a tape device through a 6-byte CDB:
ccb = (Exec_IO_CCB *)( buffer + header_size );
ccb->ccb_length = sizeof(Exec_IO_CCB);
ccb->cam_opcode = 0x1;
ccb->connect_id = 0;
ccb->sense_buf_ptr = (long)(header_size + ccb->ccb_length);
ccb->sense_buf_length = MAX_SENSE_LEN;
ccb->time_out = CAM_TIMEOUT;
ccb->cdb_length = 6;
/* For INQUIRY sets cam_flags and cdb[0] */
ccb->cam_flags = NO_DATA;
ccb->cdb[0] = INQUIRY; /* 0x12 SCSI Opcode for Inquiry Command */
ccb->cdb[1] = 0;
ccb->cdb[2] = 0;
ccb->cdb[3] = 0;
ccb->cdb[4] = 3200;
ccb->cdb[5] = 0;
The SCSI command is successful. How do I capture the output of the INQUIRY command so that I can get the Vendor ID / Product ID?
I have declared the Execute I/O CCB structure as follows:
typedef struct {
    long  ccb_address;       /* Address of this CCB */
    short ccb_length;        /* CAM Control Block Length */
    char  cam_opcode;        /* CAM Operation Code */
    char  status;            /* CAM Status */
    long  connect_id;        /* Connect ID - no fields supported */
    long  cam_flags;         /* CAM Flags */
    long  pd_pointer;        /* Peripheral driver pointer */
    long  next_ccb_ptr;      /* Next CCB Pointer */
    long  req_map_info;      /* Request mapping information */
    long  call_on_comp;      /* Callback on completion */
    long  data_buf_ptr;      /* Data Buffer Pointer */
    long  data_xfer_length;  /* Data transfer length */
    long  sense_buf_ptr;     /* Sense information buffer pointer */
    char  sense_buf_length;  /* Sense information buffer length */
    char  cdb_length;        /* Command Descriptor Block (CDB) length */
    short num_sg_entries;    /* Number of scatter/gather entries */
    long  vendor_unique;     /* Vendor Unique field */
    char  scsi_status;       /* SCSI status */
    char  auto_resid;        /* Auto sense residual length */
    short reserved;          /* Reserved */
    long  resid_length;      /* Residual length */
    char  cdb[12];           /* Command Descriptor Block (CDB) */
    long  time_out;          /* Time-out value */
    long  msg_buf_ptr;       /* Message buffer pointer */
    short msg_buf_length;    /* Message buffer length */
    short vu_flags;          /* Vendor-unique flags */
    char  tag_queue_act;     /* Tagged Queue action */
    char  tag_id;            /* Tag ID (target only) */
    char  init_id;           /* Initiator ID (target only) */
    char  reserved2;         /* Reserved */
} Exec_IO_CCB;
Will this structure ever capture the SCSI output?
I have declared the Inquiry structure as follows, but I am not sure how the INQUIRY command will populate the Inquiry_Data structure:
typedef struct {
    short data_valid;        /* Flag that indicates whether or not the structure */
                             /* has been filled in with inquiry data from the    */
                             /* device.                                          */
    byte periph_qual;
    byte periph_dev_type;
    byte rmb;
    byte iso_version;
    byte ecma_version;
    byte ansi_version;
    byte resp_data_format;
    byte rel_adr;
    byte sync;
    byte linked;
    byte cmd_que;
    byte sft_rst;
    char vendor_id[9];
    char prod_id[17];
    char prod_rev[5];
    char reserved;
} Inquiry_Data;

The first thing is that you have assigned a short to cdb[4], but cdb[4] is a byte, so the compiler truncates 3200 (0x0C80) to its low-order byte, 0x80. Since bytes 3 and 4 are the allocation length, the target has not been told to return the amount of data you expect. Maybe you meant to assign 32 to cdb[4]; but since your Inquiry_Data structure is roughly 44 bytes, you probably want to assign 44 to cdb[4].
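To actually capture the response, the CCB also needs a data-in transfer and a buffer for the target to fill; the INQUIRY data does not land in the CCB itself. The following is only a sketch: CAM_DIR_IN is a placeholder for whatever "data in" flag your CAM headers define, and whether data_buf_ptr takes a pointer or an offset into the shared buffer depends on your driver (the sense buffer above is set up as an offset). The standard 36-byte INQUIRY data has a fixed layout, with the vendor ID at bytes 8-15, the product ID at bytes 16-31 and the product revision at bytes 32-35, so once the command completes you can copy those fields out (memcpy needs <string.h>):

unsigned char inq[36];                  /* standard INQUIRY data */

ccb->cam_flags        = CAM_DIR_IN;     /* placeholder: use your driver's "data in" flag */
ccb->data_buf_ptr     = (long)inq;      /* or an offset, depending on the driver */
ccb->data_xfer_length = sizeof(inq);
ccb->cdb[3]           = 0;
ccb->cdb[4]           = sizeof(inq);    /* allocation length = 36 */

/* ... issue the command; after it completes successfully: */
char vendor_id[9], prod_id[17], prod_rev[5];
memcpy(vendor_id, &inq[8],  8);  vendor_id[8] = '\0';
memcpy(prod_id,   &inq[16], 16); prod_id[16]  = '\0';
memcpy(prod_rev,  &inq[32], 4);  prod_rev[4]  = '\0';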

Need help understanding stack frame layout

While implementing a stack walker for a debugger I am working on, I have reached the point of extracting the arguments to a function call and displaying them. To keep it simple I started with the cdecl convention in pure 32-bit (both debugger and debuggee) and a function that takes 3 parameters. However, I cannot understand why the arguments in the stack trace are out of order compared to what cdecl defines (right-to-left, nothing in registers), despite trying to figure it out for a few days now.
Here is a representation of the function call I am trying to stack trace:
#include <stdio.h>

void Function(unsigned long long a, const void * b, unsigned int c) {
    printf("a=0x%llX, b=%p, c=0x%X\n", a, b, c);
    _asm { int 3 }; /* Because I don't have stepping or dynamic breakpoints implemented yet */
}

int main(int argc, char* argv[]) {
    Function(2, (void*)0x7A3FE8, 0x2004);
    return 0;
}
This is what the function (unsurprisingly) printed to the console:
a=0x2, b=0x7a3fe8, c=0x2004
This is the stack trace generated at the breakpoint (the debugger catches the breakpoint and there I try to walk the stack):
0x3EF5E0: 0x10004286 /* previous pc */
0x3EF5DC: 0x3EF60C /* previous fp */
0x3EF5D8: 0x7A3FE8 /* arg b --> Wait... why is b _above_ c here? */
0x3EF5D4: 0x2004 /* arg c */
0x3EF5D0: 0x0 /* arg a, upper 32 bit */
0x3EF5CC: 0x2 /* arg a, lower 32 bit */
The code that's responsible for dumping the stack frames (implemented using the DIA SDK, though I don't think that is relevant to my problem) looks like this:
ULONGLONG stackframe_top = 0;
m_frame->get_base(&stackframe_top); /* IDiaStackFrame */

/* dump 30 * 4 bytes */
for (DWORD i = 0; i < 30; i++)
{
    ULONGLONG address = stackframe_top - (i * 4);
    DWORD value;
    SIZE_T read_bytes;
    if (ReadProcessMemory(m_process, reinterpret_cast<LPVOID>(address), &value, sizeof(value), &read_bytes) == TRUE)
    {
        debugprintf(L"0x%llX: 0x%X\n", address, value); /* wrapper around OutputDebugString */
    }
}
I am compiling the test program without any optimization in VS2015 Update 3.
I have validated that I am indeed compiling it as cdecl by looking in the pdb with the dia2dump sample application.
I do not understand what is causing the stack to look like this, it doesn't match anything I learned, nor does it match the documentation provided by Microsoft.
I also checked google a whole lot (including osdev wiki pages, msdn blog posts, and so on), and checked my (by now probably outdated) books on 32-bit x86 assembly programming (that were released before 64-bit CPUs existed).
Thank you very much in advance for any explanations or links!
I had somehow misunderstood where the arguments to a function call end up in memory relative to the base of the stack frame, as pointed out by Raymond: the arguments live at addresses at and above the frame base, so the walker has to read upwards rather than downwards. This is the fixed code snippet:
ULONGLONG stackframe_top = 0;
m_frame->get_base(&stackframe_top); /* IDiaStackFrame */

/* dump 30 * 4 bytes */
for (DWORD i = 0; i < 30; i++)
{
    ULONGLONG address = stackframe_top + (i * 4); /* <-- Read before the stack frame */
    DWORD value;
    SIZE_T read_bytes;
    if (ReadProcessMemory(m_process, reinterpret_cast<LPVOID>(address), &value, sizeof(value), &read_bytes) == TRUE)
    {
        debugprintf(L"0x%llX: 0x%X\n", address, value); /* wrapper around OutputDebugString */
    }
}
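For context, here is a sketch of the cdecl layout the fix relies on (assuming an unoptimized 32-bit MSVC build that keeps the frame pointer): inside the callee the saved EBP sits at EBP, the return address at EBP+4, and the cdecl arguments start at EBP+8, left to right. A quick way to confirm it from inside the debuggee:

#include <stdio.h>

void Function(unsigned long long a, const void *b, unsigned int c)
{
    void *frame;
    _asm { mov frame, ebp }            /* 32-bit MSVC inline assembly */
    printf("ebp = %p\n", frame);
    printf("&a  = %p\n", (void *)&a);  /* expected: ebp + 8 (a occupies 8 bytes) */
    printf("&b  = %p\n", (void *)&b);  /* expected: ebp + 16 */
    printf("&c  = %p\n", (void *)&c);  /* expected: ebp + 20 */
}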

Why is stat's st_size field offset 96 on 64bit OSX and can it be calculated?

Using the latest sources from Apple's open source repo I have derived the following structure for the "stat" struct (in Go syntax):
type timespec struct {
    tv_sec  int32
    tv_nsec uint32
}
type stat64 struct {
    st_dev           int32    /* [XSI] ID of device containing file */
    st_mode          uint16   /* [XSI] Mode of file (see below) */
    st_nlink         uint16   /* [XSI] Number of hard links */
    st_ino           uint64   /* [XSI] File serial number */
    st_uid           uint32   /* [XSI] User ID of the file */
    st_gid           uint32   /* [XSI] Group ID of the file */
    st_rdev          int32    /* [XSI] Device ID */
    st_atimespec     timespec /* time of last access */
    st_mtimespec     timespec /* time of last data modification */
    st_ctimespec     timespec /* time of last status change */
    st_birthtimespec timespec /* time of file creation(birth) */
    st_size          int64    /* [XSI] file size, in bytes */
    st_blocks        int64    /* [XSI] blocks allocated for file */
    st_blksize       int32    /* [XSI] optimal blocksize for I/O */
    st_flags         uint32   /* user defined flags for file */
    st_gen           uint32   /* file generation number */
    st_lspare        int32    /* RESERVED: DO NOT USE! */
    st_qspare        [2]int64 /* RESERVED: DO NOT USE! */
}
but in practice it turns out st_size has an offset of 96 bytes instead of the 60 shown above. What's the cause of this discrepancy and how can this be seen from the original source code?
On OS X, both fields of struct timespec are long, which is 64-bit in the usual LP64 convention. Therefore sizeof(struct timespec) == 16 (you can check this yourself), and it is aligned on a 64-bit boundary, so there are 4 bytes of padding after st_rdev and the four timespec members start at offset 32, giving an offset of 96 for st_size.
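A quick way to see the numbers yourself (a sketch; the exact values assume a 64-bit LP64 build on macOS):

#include <stdio.h>
#include <stddef.h>
#include <time.h>
#include <sys/stat.h>

int main(void)
{
    printf("sizeof(struct timespec) = %zu\n", sizeof(struct timespec));       /* 16, not 8 */
    printf("offsetof st_size        = %zu\n", offsetof(struct stat, st_size)); /* 96, after the pad behind st_rdev */
    return 0;
}

So in the Go definition both tv_sec and tv_nsec need to be 64-bit fields (and the 4-byte pad after st_rdev accounted for) for the offsets to line up.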

copy_from_user is fetching unexpected data

I want to use the write syscall to copy a struct from userspace to the kernel.
In both user and kernel space, the struct is defined as
struct packet {
    unsigned char packet[256];
    int length;
} __attribute__ ((packed));
User space uses a local variable of type struct packet and passes it to the write syscall.
struct packet p;
/* ... (fill in data) */
printf("packet.length: %d\n", p.length); /* looks correct */
result = write(uartFD, &p, sizeof(struct packet));
The kernel side looks like this; checking for the correct length is done, just removed from the example.
/* write syscall */
ssize_t packet_write(
        struct file *file_ptr,
        const char __user *user_buffer,
        size_t count, loff_t *position)
{
    struct packet p;
    int retval;

    if (copy_from_user((void*)&p, user_buffer, sizeof(struct packet))) {
        retval = -EACCES;
        goto err;
    }

    /* looks wrong - different numbers like 96373062 or 96373958 */
    printk("packet length: %d\n", p.length);
The opposite direction using the read syscall is working as expected:
/* read syscall */
struct packet p;
/* ... (fill in data) */
copy_to_user(user_buffer, (void*)&p, sizeof(struct packet));
/* userspace */
read(uartFD, (void*)&packet, sizeof(struct packet));
What am I doing wrong with write syscall?
(Posted on behalf of the OP).
This is solved - it was my own silly mistake. Copying an integer and an unsigned char buffer separately both worked, so it had to be something about the struct.
One side was packed, the other was not... reusing old code...
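For anyone hitting the same thing: copy_from_user still copies the requested number of bytes, but if the userspace build and the kernel build disagree about packing, the length field sits at a different offset on each side, so the kernel prints whatever bytes happen to be there. A small illustration with a hypothetical struct where packing changes the layout:

#include <stdio.h>
#include <stddef.h>

struct with_packing {
    unsigned char flag;
    int length;
} __attribute__ ((packed));     /* length at offset 1, sizeof == 5 */

struct without_packing {
    unsigned char flag;
    int length;
};                              /* length at offset 4, sizeof == 8 */

int main(void)
{
    printf("packed:   offsetof(length) = %zu, sizeof = %zu\n",
           offsetof(struct with_packing, length), sizeof(struct with_packing));
    printf("unpacked: offsetof(length) = %zu, sizeof = %zu\n",
           offsetof(struct without_packing, length), sizeof(struct without_packing));
    return 0;
}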

Trouble capturing IP packets with libpcap

First the structs:
/* Ethernet addresses are 6 bytes */
#define ETHER_ADDR_LEN 6

/* Ethernet header */
struct sniff_ethernet {
    u_char ether_dhost[ETHER_ADDR_LEN]; /* Destination host address */
    u_char ether_shost[ETHER_ADDR_LEN]; /* Source host address */
    u_short ether_type;                 /* IP? ARP? RARP? etc */
};

#define ETHERTYPE_IP 0x0800 /* IP */

/* IP header */
struct sniff_ip {
    u_char ip_vhl;                 /* version << 4 | header length >> 2 */
    u_char ip_tos;                 /* type of service */
    u_short ip_len;                /* total length */
    u_short ip_id;                 /* identification */
    u_short ip_off;                /* fragment offset field */
    u_char ip_ttl;                 /* time to live */
    u_char ip_p;                   /* protocol */
    u_short ip_sum;                /* checksum */
    struct in_addr ip_src, ip_dst; /* source and dest address */
};

#define IP_HL(ip) (((ip)->ip_vhl) & 0x0f)
#define IP_V(ip)  (((ip)->ip_vhl) >> 4)
I have opened the network device with pcap_open_live, and pcap_datalink reports DLT_EN10MB for that device, but I am receiving lots of IP headers with 0 length, weird version numbers, etc.
Here's a snippet that outputs this:
eth = (struct sniff_ethernet*)(packet);
ip = (struct sniff_ip*)(eth + 14); /* ETHERNET = 14 */
int version_ip = IP_V(ip);
int size_ip = IP_HL(ip)*4;
printf("caplen=%d len=%d eth->type=%d version_ip=%d size_ip=%d !\n", header.caplen, header.len, eth->ether_type, version_ip, size_ip);
And some sample output:
caplen=94 len=94 eth->type=8 version_ip=0 size_ip=0 !
caplen=159 len=159 eth->type=8 version_ip=9 size_ip=12 !
caplen=110 len=110 eth->type=8 version_ip=0 size_ip=12 !
caplen=200 len=336 eth->type=8 version_ip=4 size_ip=20 ! (this one is OK)
What is going on here?
Why is eth type 0x0008 and not 0x0800? Endianness?
What's up with the weird ip header versions?
How can the IP header length be below 20 bytes?
Found the problem...
eth = (struct sniff_ethernet*)(packet);
ip = (struct sniff_ip*)(eth + 14); /* should be (packet + 14) */
The 'smart' C pointer arithmetic doesn't add 14 bytes, it adds 14 * sizeof(struct sniff_ethernet).
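A corrected sketch, reusing the structs and macros above (ntohs comes from <arpa/inet.h>). The eth->type=8 output is simply 0x0800 read in network byte order on a little-endian host, which is why the usual idiom compares against ntohs(eth->ether_type):

eth = (struct sniff_ethernet*)packet;
ip  = (struct sniff_ip*)(packet + sizeof(struct sniff_ethernet)); /* packet is a u_char*, so this really adds 14 bytes */

if (ntohs(eth->ether_type) == ETHERTYPE_IP) {
    int version_ip = IP_V(ip);      /* 4 for IPv4 frames */
    int size_ip = IP_HL(ip) * 4;    /* at least 20 for a valid IPv4 header */
    printf("caplen=%d len=%d version_ip=%d size_ip=%d !\n", header.caplen, header.len, version_ip, size_ip);
}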

What is the use of ssi_code in signalfd_siginfo structure?

I am using signalfd() to monitor the death of child processes created by my process. If I kill a child process with a signal, the parent gets a read event on the signal fd with a signalfd_siginfo structure populated. It has a field ssi_code which is set to the signal number the child received (for example 9 if I sent SIGKILL to the child).
Can I always rely on this behavior? Do all versions of the Linux kernel where signalfd is supported use this field the same way?
Note: If the child calls exit() then the code passed to exit is populated in ssi_code.
The man page of signalfd states:
The format of the signalfd_siginfo structure(s) returned by read(2)s from a signalfd file descriptor is as follows:
struct signalfd_siginfo {
    uint32_t ssi_signo;   /* Signal number */
    int32_t  ssi_errno;   /* Error number (unused) */
    int32_t  ssi_code;    /* Signal code */
    uint32_t ssi_pid;     /* PID of sender */
    uint32_t ssi_uid;     /* Real UID of sender */
    int32_t  ssi_fd;      /* File descriptor (SIGIO) */
    uint32_t ssi_tid;     /* Kernel timer ID (POSIX timers) */
    uint32_t ssi_band;    /* Band event (SIGIO) */
    uint32_t ssi_overrun; /* POSIX timer overrun count */
    uint32_t ssi_trapno;  /* Trap number that caused signal */
    int32_t  ssi_status;  /* Exit status or signal (SIGCHLD) */
    int32_t  ssi_int;     /* Integer sent by sigqueue(2) */
    uint64_t ssi_ptr;     /* Pointer sent by sigqueue(2) */
    uint64_t ssi_utime;   /* User CPU time consumed (SIGCHLD) */
    uint64_t ssi_stime;   /* System CPU time consumed (SIGCHLD) */
    uint64_t ssi_addr;    /* Address that generated signal
                             (for hardware-generated signals) */
    uint8_t  pad[X];      /* Pad size to 128 bytes (allow for
                             additional fields in the future) */
};
It seems clear: ssi_signo contains the signal number. About ssi_code, the man page says:
Not all fields in the returned signalfd_siginfo structure will be valid for a specific signal; the set of valid fields can be determined from the value returned in the ssi_code field. This field is the analog of the siginfo_t si_code field; see sigaction(2) for details.
See the sigaction(2) man page for more details about this code, which is not the signal number.
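So for SIGCHLD specifically: ssi_signo is SIGCHLD, ssi_code is one of the CLD_* values (CLD_EXITED, CLD_KILLED, CLD_DUMPED, ...) telling you why the child changed state, and ssi_status carries the exit code or the terminating signal. A minimal sketch of reading it (error handling omitted):

#include <sys/signalfd.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGCHLD);
    sigprocmask(SIG_BLOCK, &mask, NULL);   /* block SIGCHLD so it is delivered via the fd */

    int sfd = signalfd(-1, &mask, 0);

    /* ... fork children, then in the event loop: */
    struct signalfd_siginfo fdsi;
    if (read(sfd, &fdsi, sizeof(fdsi)) == sizeof(fdsi) && fdsi.ssi_signo == SIGCHLD) {
        if (fdsi.ssi_code == CLD_EXITED)
            printf("pid %u exited with status %d\n", fdsi.ssi_pid, fdsi.ssi_status);
        else if (fdsi.ssi_code == CLD_KILLED || fdsi.ssi_code == CLD_DUMPED)
            printf("pid %u terminated by signal %d\n", fdsi.ssi_pid, fdsi.ssi_status);
        waitpid(fdsi.ssi_pid, NULL, 0);    /* the child still has to be reaped */
    }
    return 0;
}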
