I have an issue with a stack-use-after-scope error from the C++ Armadillo library within an OpenMP block in an R package, and I cannot figure out what is wrong. The complete gcc log from the CRAN GCC ASAN check of the R package is here. I have kept the relevant part of the log below.
==33791==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7ffd03364940 at pc 0x7ff8127abc07 bp 0x7ffd03364680 sp 0x7ffd03364670
WRITE of size 4 at 0x7ffd03364940 thread T0
#0 0x7ff8127abc06 in arma::Mat<double>::Mat(double*, unsigned int, unsigned int, bool, bool) /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Mat_meat.hpp:1215
#1 0x7ff8129fb0c2 in GMA<logistic>::solve() [clone ._omp_fn.0] /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Col_meat.hpp:411
#2 0x7ff825ae2cde in GOMP_parallel (/lib64/libgomp.so.1+0xdcde)
#3 0x7ff812a0c9f8 in GMA<logistic>::solve() ddhazard/GMA_solver.cpp:83
#4 0x7ff81276421d in ddhazard_fit_cpp(...
Address 0x7ffd03364940 is located in stack of thread T0 at offset 416 in frame
#0 0x7ff8129fa82f in GMA<logistic>::solve() [clone ._omp_fn.0] ddhazard/GMA_solver.cpp:83
This frame has 5 object(s):
[32, 40) 'dest'
[96, 104) 'src'
[160, 176) 'ans'
[224, 384) 'my_X_cross'
[416, 576) '<unknown>' <== Memory access at offset 416 is inside this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-use-after-scope /data/gannet/ripley/R/test-3.5/RcppArmadillo/include/armadillo_bits/Mat_meat.hpp:1215 in arma::Mat<double>::Mat(double*, unsigned int, unsigned int, bool, bool)
Shadow bytes around the buggy address:
0x1000206648d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000206648e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000206648f0: 00 00 00 00 f1 f1 f1 f1 00 f2 f2 f2 f2 f2 f2 f2
0x100020664900: 00 f2 f2 f2 f2 f2 f2 f2 f8 f8 f2 f2 f2 f2 f2 f2
0x100020664910: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x100020664920: 00 00 00 00 f2 f2 f2 f2[f8]f8 f8 f8 f8 f8 f8 f8
0x100020664930: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f3 f3 f3 f3
0x100020664940: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664950: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664960: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100020664970: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==33791==ABORTING
The WRITE that causes the error is in dynamichazard/src/ddhazard/GMA_solver.cpp, particularly in this OpenMP block:
#ifdef _OPENMP
  int n_threads = std::max(1, std::min(omp_get_max_threads(),
                                       (int)r_set.n_elem / 1000 + 1));
#pragma omp parallel num_threads(n_threads) if(n_threads > 1)
  {
#endif
    arma::mat my_X_cross(q, q, arma::fill::zeros);

#ifdef _OPENMP
#pragma omp for schedule(static)
#endif
    for(arma::uword i = 0; i < r_set.n_elem; i++){
      auto trunc_eta = T::truncate_eta(
        is_event[i], eta[i], exp(eta[i]), at_risk_length[i]);
      h_1d[i] = w[i] * T::d_log_like(
        is_event[i], trunc_eta, at_risk_length[i]);
      double h_2d_neg = - w[i] * T::dd_log_like(
        is_event[i], trunc_eta, at_risk_length[i]);

      sym_mat_rank_one_update(h_2d_neg, X_t.unsafe_col(i), my_X_cross);
    }

#ifdef _OPENMP
#pragma omp critical(gma_lock)
    {
#endif
      X_cross += my_X_cross;
#ifdef _OPENMP
    }
  }
#endif
As far as I can tell, the error is at the X_t.unsafe_col(i) call in the call to sym_mat_rank_one_update. The declaration of the function is
void sym_mat_rank_one_update(const double, const arma::vec&, arma::mat&);
It should trigger a call to the arma::Col&lt;double&gt; constructor in line 411 of include/armadillo_bits/Col_meat.hpp, which calls the arma::Mat&lt;double&gt; constructor in line 1215 of include/armadillo_bits/Mat_meat.hpp. I gather this is where the 4-byte write occurs to one of the unsigned ints, since the arma::Mat&lt;double&gt; constructor is
template<typename eT>
inline
Mat<eT>::Mat(eT* aux_mem, const uword aux_n_rows, const uword aux_n_cols, const bool copy_aux_mem, const bool strict)
  : n_rows   ( aux_n_rows )
  , n_cols   ( aux_n_cols )
  , n_elem   ( aux_n_rows*aux_n_cols )
  , vec_state( 0 )
  , mem_state( copy_aux_mem ? 0 : ( strict ? 2 : 1 ) )
  , mem      ( copy_aux_mem ? 0 : aux_mem )
  {
  arma_extra_debug_sigprint_this(this);

  if(copy_aux_mem == true)
    {
    init_cold();

    arrayops::copy( memptr(), aux_mem, n_elem );
    }
  }
where
template<typename eT>
class Mat : public Base< eT, Mat<eT> >
{
public:
typedef eT elem_type; //!< the type of elements stored in the matrix
typedef typename get_pod_type<eT>::result pod_type; //!< if eT is std::complex<T>, pod_type is T; otherwise pod_type is eT
const uword n_rows; //!< number of rows (read-only)
const uword n_cols; //!< number of columns (read-only)
const uword n_elem; //!< number of elements (read-only)
const uhword vec_state; //!< 0: matrix layout; 1: column vector layout; 2: row vector layout
const uhword mem_state;
...
See include/armadillo_bits/Mat_bones.hpp and notice that arma::uword is unsigned int. However, I cannot figure out why this would cause a stack-use-after-scope.
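To make the chain concrete, here is roughly what the call inside the loop body boils down to. This is only a sketch based on the Armadillo headers quoted above, not the package's actual source, and the exact flag values passed by unsafe_col are my assumption:

// X_t.unsafe_col(i) builds a temporary Col<double> that borrows the column's
// memory without copying, roughly equivalent to (flag values assumed):
arma::vec tmp(X_t.colptr(i), X_t.n_rows, /* copy_aux_mem */ false, /* strict */ true);
sym_mat_rank_one_update(h_2d_neg, tmp, my_X_cross);
// The temporary's header (n_rows, n_cols, n_elem, vec_state, mem_state) lives on
// the stack frame of the OpenMP-outlined function, and the reported 4-byte WRITE
// is the initialisation of one of these uword members in Mat_meat.hpp:1215.

So the write itself is just the constructor filling in the temporary's header; the question is why ASan considers that stack slot to be out of scope at that point.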
A similar error is in the Morpho package. See the current CRAN log here and src/createL.cpp.
Setup
The above check is on CRAN. As far as I can tell, it is with gcc 7.2 on Fedora 26 with the following config.site used to build R
CXX="g++ -fsanitize=address,undefined,bounds-strict -fno-omit-frame-pointer"
CFLAGS="-g -O2 -Wall -pedantic -mtune=native -fsanitize=address"
FFLAGS="-g -O2 -mtune=native"
FCFLAGS="-g -O2 -mtune=native"
CXXFLAGS="-g -O2 -Wall -pedantic -mtune=native"
MAIN_LDFLAGS=-fsanitize=address,undefined
Further, the following ~/.R/Makevars is used
CC = gcc -std=gnu99 -fsanitize=address,undefined -fno-omit-frame-pointer
F77 = gfortran -fsanitize=address
FC = gfortran -fsanitize=address
FCFLAGS = -g -O2 -mtune=native -fbounds-check
FFLAGS = -g -O2 -mtune=native -fbounds-check
The error does not happen with clang 5.0.0 or valgrind on the same machine. Further, I cannot reproduce it on a local Ubuntu 17.04 with gcc version 6.3 and clang version 4.0.0.
Minimal, Complete, and Verifiable example
I will work on making one.
Related
I'm writing some code to reformat some CSV data.
It's been a while since I last worked in C or did programming with memory management this raw, and I don't have a lot of experience with many of the tools for debugging memory allocation.
Diving right in, I found some forum posts suggesting compiling like this to figure out where the segmentation fault was originating:
gcc -o CSVreader_v0.0-memcheck -static-libasan -O -g -fsanitize=address -fno-omit-frame-pointer -Wall -Wno-unused-result CSVreader.c
My only problem is I have no idea how to interpret the output. Can someone help walk me through this or point me to a guide on what all this means?
==9474==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000000015 at pc 0x7f1a330533a6 bp 0x7ffedc0840d0 sp 0x7ffedc083878
WRITE of size 6 at 0x602000000015 thread T0
#0 0x7f1a330533a5 (/usr/lib/x86_64-linux-gnu/libasan.so.4+0x663a5)
#1 0x55a46460155a in parseRow (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x155a)
#2 0x55a4646018e5 in parseCSV (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x18e5)
#3 0x55a464601c7b in main (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x1c7b)
#4 0x7f1a32e220b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#5 0x55a46460122d in _start (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x122d)
0x602000000015 is located 0 bytes to the right of 5-byte region [0x602000000010,0x602000000015)
allocated by thread T0 here:
#0 0x7f1a330cbb40 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb40)
#1 0x55a4646014f1 in parseRow (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x14f1)
#2 0x55a4646018e5 in parseCSV (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x18e5)
#3 0x55a464601c7b in main (/home/kodachi/workspace/Tactical Engram/CSVreader_v0.0+0x1c7b)
#4 0x7f1a32e220b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
SUMMARY: AddressSanitizer: heap-buffer-overflow (/usr/lib/x86_64-linux-gnu/libasan.so.4+0x663a5)
Shadow bytes around the buggy address:
0x0c047fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c047fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c047fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c047fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c047fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c047fff8000: fa fa[05]fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==9474==ABORTING
I'd like to understand what all this means, but equally, if someone can point me to how I can get a more human-readable output, that would also be helpful. I tried valgrind, but I'm not sure I was using it correctly.
Right now my code is taking the first line of the CSV file using
fscanf(fp, "%[^\n]\n", line_buffer);
With char* line_buffer and char** hStrings to store the column headers, I pass &line_buffer and &hStrings into the following function, which parses the column headers in the line_buffer string into hStrings, with the delimiting character being '|'.
/**
 * Split str around the '|' character and store the data in dStr
 **/
int parseRow(char** str, char*** dStr)
{
    unsigned int columns = 0;
    char* col;

    printf("Splitting: %s\n", *str);
    col = strsep(str, "|");
    while(col != NULL && hasPrint(col, 0, strlen(col)))
    {
        printf("\n");
        (*dStr)[columns] = malloc(strlen(col)*sizeof(char));
        strcpy((*dStr)[columns], col);
        columns++;
        col = strsep(str, "|");
    }
    //printf("\n");
    printf("counted: %d columns\n", columns);
    return columns;
}
Before I was using `-fsanitize=address`, the program would run this part correctly, and a segmentation fault would occur later when I was parsing the rows of data, not the column headers. Now it generates the output I provided while it is working on the first row containing the headers. Not sure if that helps explain what's going on here.
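For what it's worth, the ASan report above matches the allocation in parseRow: the 5-byte region is the malloc(strlen(col)) buffer for the first (5-character) column header, while strcpy writes 6 bytes, the string plus its terminating '\0'. A minimal sketch of the fix (copy_field is just an illustrative helper name, not part of the original code):

#include <stdlib.h>
#include <string.h>

/* strcpy writes strlen(col) characters plus a terminating '\0',
 * so the buffer must be strlen(col) + 1 bytes, not strlen(col). */
static char *copy_field(const char *col)
{
    char *dst = malloc(strlen(col) + 1);   /* +1 for the '\0' terminator */
    if (dst != NULL)
        strcpy(dst, col);
    return dst;
}

In parseRow, the two lines that allocate and copy would then become (*dStr)[columns] = copy_field(col);.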
Suppose I want to debug this program using the WinDbg, cdb, or ntsd debuggers for Windows:
/* test.c */
#include <stdio.h>

int rip = 42;

int main(void)
{
    puts("Hello world!");
    return (0);
}
I compile the program for AMD64 and run it under WinDbg. I set a breakpoint at main(), and when the breakpoint hits, I want to inspect the value at the RIP register (program counter), and the memory around that value if the value is treated as a pointer.
I can see the value of the register directly with r rip, but when I try to look at the memory around that address, WinDbg shows me a different address! Having read the symbols in test.pdb, WinDbg sees that rip is a global variable declared in the C code and shows me the memory around &rip.
0:000> bu test!main
0:000> g
Breakpoint 0 hit
test!main:
00007ff6`de1868d0 4883ec28 sub rsp,28h
0:000> r rip
rip=00007ff6de1868d0
0:000> db rip
00007ff6`de1f2000 2a 00 00 00 ff ff ff ff-01 00 00 00 00 00 00 00 *...............
00007ff6`de1f2010 01 00 00 00 02 00 00 00-ff ff ff ff ff ff ff ff ................
00007ff6`de1f2020 00 00 00 00 00 00 00 00-43 46 92 e5 1b df 00 00 ........CF......
00007ff6`de1f2030 bc b9 6d 1a e4 20 ff ff-00 00 00 00 00 00 00 00 ..m.. ..........
00007ff6`de1f2040 00 01 00 00 00 00 00 00-ca b0 1e de f6 7f 00 00 ................
00007ff6`de1f2050 00 00 00 00 00 80 00 00-00 00 00 00 00 80 00 00 ................
00007ff6`de1f2060 d0 66 fc c2 f2 01 03 00-ab 90 ec 5e 22 c0 b2 44 .f.........^"..D
00007ff6`de1f2070 a5 dd fd 71 6a 22 2a 15-00 00 00 00 00 00 00 00 ...qj"*.........
0:000> ? rip
Evaluate expression: 140698265264128 = 00007ff6`de1f2000
0:000> ? dwo(rip)
Evaluate expression: 42 = 00000000`0000002a
This is really annoying, but as long as I'm aware of it, it isn't a problem when manually reading data like this. But if I want to use the register value, for example in scripting the debugger, then there is no easy workaround:
0:000> bu test!main ".if (dwo(rip) == 0n42) { .echo Whoops! I don't want to get here! }"
0:000> g
Whoops! I don't want to get here!
test!main:
00007ff6`de1868d0 4883ec28 sub rsp,28h
This problem, that symbols in the program hide register names, makes things really difficult for me. An actual scenario this broke:
I wanted to set a breakpoint on CreateFileW(), a very commonly called Windows API function.
Since I only cared about one particular file, I wanted to inspect the filename, which is passed in the RCX register, and continue past the breakpoint unless the filename matched the file I wanted.
But I couldn't write this condition, because another module in the program defined a symbol foobar!rcx, and any references to rcx I make in the command to execute on the breakpoint refer to that global variable!
So how do I tell WinDbg that yes, I really meant to read the register? And what if I want to write that register? There must be a simple thing I am missing here.
As noted in passing by another question, you can put an at sign (@) in front of a register name to force it to be interpreted as a register or pseudo-register, bypassing the attempt to parse it as a hexadecimal number or a symbol.
Registers and Pseudo-Registers in MASM Expressions
You can use registers and pseudo-registers within MASM expressions. You can add an at sign (@) before all registers and pseudo-registers. The at sign causes the debugger to access the value more quickly. This at sign is unnecessary for the most common x86-based registers. For other registers and pseudo-registers, we recommend that you add the at sign, but it is not actually required. If you omit the at sign for the less common registers, the debugger tries to parse the text as a hexadecimal number, then as a symbol, and finally as a register.
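Applied to the session above, the commands look like this (commands only, no output shown; the exact addresses will of course differ):

0:000> r @rip
0:000> db @rip
0:000> ? dwo(@rip)
0:000> bu test!main ".if (dwo(@rip) == 0n42) { .echo Whoops! I don't want to get here! }"

With the @ prefix, db and ? operate on the actual program-counter value rather than on &rip, so the breakpoint condition above no longer fires. The same applies to the CreateFileW scenario: @rcx unambiguously means the register even if some module defines a symbol named rcx.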
It's a simple program.
test environment: debian 8, go 1.4.2
union.go:
package main

import "fmt"

type A struct {
	t int32
	u int64
}

func test() (total int64) {
	a := [...]A{{1, 100}, {2, 3}}
	for i := 0; i < 5000000000; i++ {
		p := &a[i%2]
		total += p.u
	}
	return
}

func main() {
	total := test()
	fmt.Println(total)
}
union.c:
#include <stdio.h>

struct A {
    int t;
    long u;
};

long test()
{
    struct A a[2];
    a[0].t = 1;
    a[0].u = 100;
    a[1].t = 2;
    a[1].u = 3;

    long total = 0;
    long i;
    for (i = 0; i < 5000000000; i++) {
        struct A* p = &a[i % 2];
        total += p->u;
    }
    return total;
}

int main()
{
    long total = test();
    printf("%ld\n", total);
}
result compare:
go:
257500000000
real 0m9.167s
user 0m9.196s
sys 0m0.012s
C:
257500000000
real 0m3.585s
user 0m3.560s
sys 0m0.008s
It seems that the Go compiler generates a lot of weird assembly code (you can use objdump -D to check it).
For example, why does movabs $0x12a05f200,%rbp appear twice?
400c60: 31 c0 xor %eax,%eax
400c62: 48 bd 00 f2 05 2a 01 movabs $0x12a05f200,%rbp
400c69: 00 00 00
400c6c: 48 39 e8 cmp %rbp,%rax
400c6f: 7d 46 jge 400cb7 <main.test+0xb7>
400c71: 48 89 c1 mov %rax,%rcx
400c74: 48 c1 f9 3f sar $0x3f,%rcx
400c78: 48 89 c3 mov %rax,%rbx
400c7b: 48 29 cb sub %rcx,%rbx
400c7e: 48 83 e3 01 and $0x1,%rbx
400c82: 48 01 cb add %rcx,%rbx
400c85: 48 8d 2c 24 lea (%rsp),%rbp
400c89: 48 83 fb 02 cmp $0x2,%rbx
400c8d: 73 2d jae 400cbc <main.test+0xbc>
400c8f: 48 6b db 10 imul $0x10,%rbx,%rbx
400c93: 48 01 dd add %rbx,%rbp
400c96: 48 8b 5d 08 mov 0x8(%rbp),%rbx
400c9a: 48 01 f3 add %rsi,%rbx
400c9d: 48 89 de mov %rbx,%rsi
400ca0: 48 89 5c 24 28 mov %rbx,0x28(%rsp)
400ca5: 48 ff c0 inc %rax
400ca8: 48 bd 00 f2 05 2a 01 movabs $0x12a05f200,%rbp
400caf: 00 00 00
400cb2: 48 39 e8 cmp %rbp,%rax
400cb5: 7c ba jl 400c71 <main.test+0x71>
400cb7: 48 83 c4 20 add $0x20,%rsp
400cbb: c3 retq
400cbc: e8 6f e0 00 00 callq 40ed30 <runtime.panicindex>
400cc1: 0f 0b ud2
...
while the C assembly is cleaner:
0000000000400570 <test>:
400570: 48 c7 44 24 e0 64 00 movq $0x64,-0x20(%rsp)
400577: 00 00
400579: 48 c7 44 24 f0 03 00 movq $0x3,-0x10(%rsp)
400580: 00 00
400582: b9 64 00 00 00 mov $0x64,%ecx
400587: 31 d2 xor %edx,%edx
400589: 31 c0 xor %eax,%eax
40058b: 48 be 00 f2 05 2a 01 movabs $0x12a05f200,%rsi
400592: 00 00 00
400595: eb 18 jmp 4005af <test+0x3f>
400597: 66 0f 1f 84 00 00 00 nopw 0x0(%rax,%rax,1)
40059e: 00 00
4005a0: 48 89 d1 mov %rdx,%rcx
4005a3: 83 e1 01 and $0x1,%ecx
4005a6: 48 c1 e1 04 shl $0x4,%rcx
4005aa: 48 8b 4c 0c e0 mov -0x20(%rsp,%rcx,1),%rcx
4005af: 48 83 c2 01 add $0x1,%rdx
4005b3: 48 01 c8 add %rcx,%rax
4005b6: 48 39 f2 cmp %rsi,%rdx
4005b9: 75 e5 jne 4005a0 <test+0x30>
4005bb: f3 c3 repz retq
4005bd: 0f 1f 00 nopl (%rax)
Could somebody explain it? Thanks!
The main difference is the array bounds checking. In the disassembly dump for the Go program, there is:
400c89: 48 83 fb 02 cmp $0x2,%rbx
400c8d: 73 2d jae 400cbc <main.test+0xbc>
...
400cbc: e8 6f e0 00 00 callq 40ed30 <runtime.panicindex>
400cc1: 0f 0b ud2
So if %rbx is greater than or equal to 2, then it jumps down to a call to runtime.panicindex. Given you're working with an array of size 2, that is clearly the bounds check. You could make the argument that the compiler should be smart enough to skip the bounds check in this particular case where the range of the index can be determined statically, but it seems that it isn't smart enough to do so yet.
While you're seeing a noticeable performance difference for this micro-benchmark, it might be worth considering whether this is actually representative of your actual code. If you're doing other stuff in your loop, the difference is likely to be less noticeable.
And while bounds checking does have a cost, in many cases it is better than the alternative of the program continuing with undefined behaviour.
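If the bounds check really is the bottleneck in a loop like this, one workaround is to hoist the two values out of the loop so that no indexing happens per iteration. This is only a sketch to illustrate the point; I have not measured it against Go 1.4.2:

package main

import "fmt"

type A struct {
	t int32
	u int64
}

func test() (total int64) {
	a := [...]A{{1, 100}, {2, 3}}
	// Read the two fields once, outside the loop, so the loop body no longer
	// indexes into the array and therefore carries no per-iteration bounds check.
	u0, u1 := a[0].u, a[1].u
	for i := 0; i < 5000000000; i++ {
		if i%2 == 0 {
			total += u0
		} else {
			total += u1
		}
	}
	return
}

func main() {
	fmt.Println(test())
}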
I initially asked for help and wrote a basic program in the 6502 PET emulator which added two n-byte integers. However, my feedback was that it was simply adding two 16-bit integers (not adding n-byte integers).
Can anyone help me understand this feedback by looking at my code and point me in the right direction to make a program that adds two n-byte integers?
Thank You for the collaboration!
Documentation:
Adds two n-byte integers using absolute indexed addressing. The addends begin at memory locations $0700 and $0800, and the answer is at $0900. The byte length of the integers is at $0600 (up to 256).
Machine Code:
18 a2 00 ac 00 06 bd 00
07 7d 00 08 9d 00 09 e8
00 88 00 d0
Op Codes, Documentation, Variables:
A1 = $0600
B1 = $0700
B2 = $0800
Z1 = $0900
[START] = $0500
CLC 18
LDX A2 00 // loads x with 0
LDY A1 AC 00 06 // loads length on Y
loop: LDA B1, x BD 00 07 // load first operand
ADC B2, x 7D 00 08 // adds second operand
STA Z1, x 9D 00 09 // store result
INX E8 00 // go to next byte
DEY 88 00 // count how many are left
BNE loop D0 // do more if needed
It looked to me like your code does what you claim -- adds two N-byte operands in little-endian byte order. I vaguely remembered the various addressing modes of the 6502 from my misspent youth, and the code seems fine. X is used to index the current byte of the two numbers, Y is a counter for the length of the operands in bytes, and you loop over those bytes, stored at addresses 0x0700 and 0x0800, and write the result at address 0x0900.
Rather than get the Commodore 64 out of the attic and try it out, I used an online virtual 6502 simulator. On this site we can set a memory address and load the byte values in. They even link to a page to assemble opcodes too. So, setting memory location 0x0600 to "04" and both 0x0700 and 0x0800 to "04 03 02 01", we should see this code add these two 32-bit values (0x01020304 + 0x01020304 == 0x02040608).
Stepping through the code by clicking on the PC register, setting it to 0x0500, and then single stepping, we see there is a bug in your machine code. After INX, which assembles to E8, we hit a spurious 0x00 value (BRK) which terminates the program. The corrected code below runs to completion, and the expected value can be seen by reading the memory at 0x0900.
0000 CLC 18
0001 LDX #$00 A2 00
0003 LDY $0600 AC 00 06
0006 LOOP: LDA $0700,X BD 00 07
0009 ADC $0800,X 7D 00 08
000C STA $0900,X 9D 00 09
000F INX E8
0010 DEY 88
0011 BNE LOOP: D0 F3
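For convenience, here is the same corrected routine as a raw byte stream (read directly off the opcode bytes above) that can be pasted into the simulator's memory at $0500:
18 A2 00 AC 00 06 BD 00
07 7D 00 08 9D 00 09 E8
88 D0 F3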
Memory dump:
:0900 08 06 04 02 00 00 00 00
:0908 00 00 00 00 00 00 00 00
I have a byte array:
00 01 00 00 00 12 81 00 00 01 00 C8 00 00 00 00 00 08 5C 9F 4F A5 09 45 D4 CE
It is read via a StreamReader using UTF-8 encoding:
// Note: I can't change this code, too many components depend on it.
using (StreamReader streamReader =
new StreamReader(responseStream, Encoding.UTF8, false))
{
string streamData = streamReader.ReadToEnd();
if (requestData.Callback != null)
{
requestData.Callback(response, streamData);
}
}
When that function runs, I get the following returned to me (I converted it to a byte array):
00 01 00 00 00 12 EF BF BD 00 00 01 00 EF BF BD 00 00 00 00 00 08 5C EF BF BD 4F EF BF BD 09 45 EF BF BD
Somehow I need to take what's returned to me and get it back to the right encoding and the right byte array, but I've tried a lot of things.
Please be aware that I'm working with the limited WP7 API.
Hopefully you guys can help.
Thanks!
Update for help...
If I do the following code, it's almost right; the only thing that is wrong is that the 5th-to-last byte gets split out.
byte[] writeBuf1 = System.Text.Encoding.UTF8.GetBytes(data);
string buf1string = System.Text.Encoding.BigEndianUnicode.GetString(writeBuf1, 0, writeBuf1.Length);
byte[] writeBuf = System.Text.Encoding.BigEndianUnicode.GetBytes(buf1string);
The original byte array is not encoded as UTF-8. The StreamReader therefore replaces each invalid byte with the replacement character U+FFFD. When that character gets encoded back to UTF-8, this results in the byte sequence EF BF BD. You cannot construct the original byte value from the string because the information is completely lost.
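If you can get at the stream before it is wrapped in the StreamReader (the code above reportedly cannot change, so treat this purely as a sketch), the only reliable option is to read the raw bytes rather than decode them as text. StreamHelpers and ReadAllBytes below are illustrative names; the code only uses Stream.Read and MemoryStream, which are available in the WP7 API:

using System.IO;

static class StreamHelpers
{
    // Read the response as raw bytes instead of decoding it as UTF-8 text,
    // so no information is lost to the U+FFFD replacement character.
    public static byte[] ReadAllBytes(Stream responseStream)
    {
        var buffer = new byte[4096];
        using (var ms = new MemoryStream())
        {
            int read;
            while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                ms.Write(buffer, 0, read);
            }
            return ms.ToArray();
        }
    }
}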