C++/CLI serial port send command - visual-studio-2010

I have a piece of hardware here which communicates over a serial port. I use MS Visual C++ 2010, and I want to send the command: <-S->
I am doing this:
SerialPort^ serialPort = gcnew SerialPort(portName, 9600, Parity::None, 8, StopBits::One);
serialPort->Open();
serialPort->WriteLine("<-S->");
serialPort->Close();
But the command that goes out is <-S->. rather than <-S->
(note the dot that gets appended to the outgoing command).
I use Free Serial Port Monitor to watch my incoming/outgoing data.
So how can I get rid of that dot in <-S->. ?
This is what is going out:
3C 2D 53 2D 3E 0A = <-S->.
This is what I want:
3C 2D 53 2D 3E = <-S->
Thanks for the help.

You are using WriteLine(), which appends a newline (character 0x0A) to your output; your monitor displays that byte as a ., but it's not really a dot. Try Write() instead.

It is the value of the SerialPort.NewLine property, a line feed by default. Use Write() instead.
There's more trouble: your code will only work well when you single-step it with the debugger. Without that, the Close() call instantly purges the transmit buffer and only a random number of characters will manage to get sent, possibly none at all. Only close serial ports when your program terminates.
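A minimal sketch of both suggestions combined (as in the question, this assumes using namespace System::IO::Ports and a valid portName): send the bytes with Write() and keep the port open until the application shuts down.
SerialPort^ serialPort = gcnew SerialPort(portName, 9600, Parity::None, 8, StopBits::One);
serialPort->Open();
serialPort->Write("<-S->");   // sends exactly 3C 2D 53 2D 3E, no NewLine appended
// ... keep the port open for the lifetime of the program ...
// serialPort->Close();       // only when the application exits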

Since you're using C++/CLI, there's really no reason to use the horrid .NET SerialPort class. The native Win32 serial port functions are much more powerful and reliable. Turn on OVERLAPPED mode, attach an event, and you can get background I/O operations going without having to mess with the thread pool and synchronization.
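For illustration, a rough native Win32 sketch of that overlapped approach (the port name \\.\COM3 and the fixed 9600-8-N-1 settings are my own placeholders, not from the question): open the port with FILE_FLAG_OVERLAPPED, attach an event, and let the write complete in the background.
#include <windows.h>

int main() {
    HANDLE h = CreateFileW(L"\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    DCB dcb = { sizeof(DCB) };                                 // DCBlength must be set
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600; dcb.ByteSize = 8; dcb.Parity = NOPARITY; dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    OVERLAPPED ov = {};
    ov.hEvent = CreateEventW(nullptr, TRUE, FALSE, nullptr);   // signaled when the write completes
    DWORD written = 0;
    if (!WriteFile(h, "<-S->", 5, nullptr, &ov) && GetLastError() == ERROR_IO_PENDING)
        GetOverlappedResult(h, &ov, &written, TRUE);           // block only if you need completion now

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}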

Related

Concurrent DOS + QEMU losing data through parallel port emulation

I have a piece of software running on Concurrent DOS 3.1, which I emulate with QEMU 5.1.
In this program there are several options to print data. The problem is that the data arriving at my host does not match the data that was sent.
The command to start QEMU:
qemu-system-i386 -chardev file,id=imp0,path=/path/to/file -parallel chardev:imp0 -hda DISK.Raw
So the output sent to the parallel port of my guest is redirected to /path/to/file.
When I send the character 'é' from CDOS:
echo é>>PRN
The code page used on CDOS is Code Page 437, and in this character set the character é is represented by 0x82, but on my host I receive the following instead:
cp437 é -> 0x82 ---------> host -> x1b52 x017b x1b52 x00
So I tried something else: I wrote the character 'é' to a file and sent the file with nc.exe (from brutman and libmtcp), and with nc the value stays 0x82.
So my question: what happens when I send my data to the virtual parallel port? Where does my data get transformed? Is it the parallel port driver on Concurrent DOS? Is it QEMU? I can't figure out how to send my data through LPT1 properly.
I also tried this:
qemu-system-i386 -chardev socket,id=imp0,host=127.0.0.1,port=2222,server,nowait -parallel chardev:imp0 -hda DISK.Raw
I can read the socket fine, but I get the same output as when writing to a file: the é gets transformed to x1b52 x017b x1b52 x00.
The byte sequence "1B 52 01 7B 1B 52 00" uses Epson FX-style printer escape sequences (there's a reference here). Specifically, "1B 52 nn" is ESC R n, which selects the international character set, where character set 0 is USA and 1 is France. So the sequence as a whole is "select the French character set; print byte 0x7B; select the US character set". In the French character set for this printer standard, 0x7B is the e-acute é.
This is almost certainly the CDOS printer driver assuming that the thing on the end of PRN: is an Epson printer and emitting the appropriate escape sequences to output your text on that kind of printer.
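If you ever need to salvage an existing capture, a small post-processing sketch along these lines could undo that substitution (this is my own illustration, not part of the answer; "capture.bin" and "fixed.bin" are placeholder names, and only the é mapping is handled):
#include <cstdio>

int main() {
    FILE *in  = std::fopen("capture.bin", "rb");
    FILE *out = std::fopen("fixed.bin", "wb");
    if (!in || !out) return 1;
    int c, charset = 0;                              // 0 = USA, 1 = France (set by ESC R n)
    while ((c = std::fgetc(in)) != EOF) {
        if (c == 0x1B) {                             // possible escape sequence
            int op = std::fgetc(in);
            if (op == 'R') { charset = std::fgetc(in); continue; }  // ESC R n: remember the set
            std::fputc(c, out);                      // not ESC R: pass it through untouched
            if (op != EOF) std::fputc(op, out);
            continue;
        }
        if (charset == 1 && c == 0x7B) c = 0x82;     // French-set é back to CP437 0x82
        std::fputc(c, out);
    }
    std::fclose(in);
    std::fclose(out);
}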
OK, so I finally figured it out... After hours of searching for where this printer.sys driver could be, or how to remove it: on Concurrent DOS, the setup command is "n". And of course, it is not listed in the "help" command...
Anyway, in there you can set up your printer and select "no conversion" for the port you want. And it was indeed set to Epson MX-80/MX-100.
So thanks, Peter, your answer led me down the right path!

Is there a way to remove "getKey"'s input lag?

I've recently decided to try TI-BASIC programming, and while I was playing with getKey I noticed that it has a ~1 s input lag after the first input. Is this built into the calculator, or can it be changed?
I recognize that "Quick Key" code above ;) (I'm the original author and very glad to see it spread around!).
Anyway, here is my low-level knowledge of the subject:
The operating system uses what is known as an interrupt to handle reading the keyboard, link port, USB port, and the run indicator, among other things. The interrupt is just software code, nothing implemented in hardware, so it is hardwired into the OS, not the calculator.
The gist of the code TI uses is that once it reads that a key press occurred, it resets a counter to 50 and decrements it as long as the user holds down the key. Once the counter reaches zero, it tells getKey to recognize it as a new keypress and then resets the counter to 10. This causes the initial delay to be longer than subsequent delays.
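To make the timing concrete, here is a rough C-style model of that repeat logic (my own sketch, not the actual OS source; reportKeypress() is a hypothetical stand-in for "tell getKey a key arrived"):
void reportKeypress() { /* stand-in for notifying getKey */ }

int  counter = 0;
bool keyHeld = false;

void onInterruptTick(bool keyDown) {
    if (keyDown && !keyHeld) {            // fresh press is recognized immediately
        keyHeld = true;
        counter = 50;                     // long delay before the first repeat
        reportKeypress();
    } else if (keyDown && --counter == 0) {
        counter = 10;                     // shorter delay between later repeats
        reportKeypress();
    } else if (!keyDown) {
        keyHeld = false;                  // key released: next press starts over
    }
}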
The TI-OS allows third-party "hooks" to jump in and modify the getKey process, and I used such a hook in another, more complicated program (Speedy Keys). However, this hook is never called during BASIC program execution except at a Pause or Menu( command, where it isn't too helpful.
Instead, what we can do is set up a parser hook that modifies the getKey counters. Alternatively, you can use the Quick Key code above, or you can use hybrid BASIC, which requires you to download a third-party app. A few of these apps (BatLib [by me], Celtic 3, DoorsCS7, and xLIB) offer a very fast getKey alternative as well as many other powerful functions.
The following is the code for setting up the parser hook. It works very well in my tests! See notes below:
#include "ti83plus.inc" ; ~~This column is the stuff for manually
_EnableParserHook = 5026h ; creating the code on calc. ~~
.db $BB,$6D ;AsmPrgm
.org $9D95 ;
ld hl,hookcode ;21A89D
ld de,appbackupscreen ;117298
ld bc,hookend-hookcode ;010A00
ldir ;EDB0
ld hl,appbackupscreen ;217298
ld a,l ;7D
bcall(_EnableParserHook);EF2650
ret ;C9
hookcode: ;
.db 83h ;83
push af ;F5
ld a,1 ;3E01
ld (8442h),a ;324284
pop af ;F1
cp a ;BF
ret ;C9
hookend: ;
Notes: other apps or programs may use parser hooks. Using this program will disable those hooks and you will need to reinstall them. This is pretty easy.
Finally, if you are manually putting this on your calculator, use the right-column code. Here is an animated .gif showing how to make such a program:
You will need to run the program once either on the homescreen or at the start of your main program. After this, all getKeys will have no delay.
I figured this out myself too when I was experimenting with my TI-84 during the summer. This lag cannot be changed; it is built into the calculator. I think this is because the microchip used in the TI-84 is a Zilog Z80 microprocessor, a design dating back to 1976.
This is unfortunately just an inefficiency of the calculator. TI-BASIC is a fairly high-level language, meant to be easy to use, and is thus not very efficient or fast, especially with respect to input and output, i.e. printing messages and getting input.
Quick Key
:AsmPrgm3A3F84EF8C47EFBF4AC9
This is a getKey routine that makes all keys repeat, not just the arrows, and there is no delay between repeats. The key codes are different, so you might need to experiment.

How to know where the address is through Turbo Debugger?

I would like to understand how Turbo Debugger works. So, for example, I have a message, and I move its offset into the DX register.
(I show you how it looks in the debugger):
MOV DX,009E ;This is a version which debugger shows me in debugger mode;
;It takes all information from this code: MOV DX,OFFSET MSG.
In fact, the debugger takes the message's first element to be at address 9E. But in the debugger screen I can see that, within the data segment, MSG actually starts at A0. How can that be?
I know that code is more preferable, but this time a screenshot is more suitable:
As you can see, I marked two addresses, but they are not the same. I can see that my MSG begins at the marked address A0, yet the debugger treats it as 9E and moves that into DX. Can someone explain how this can be?
By the way, the program works and prints everything fine; the purpose here is just to understand how the debugger resolves addresses.
MSG code is simply:
MSG db 'Hello, how do you do!!!!','$'
I believe that if you single-step your code, instruction by instruction, you will see that
You are correct that your message starts at DS:00A0
Turbo Debugger is also correct that it starts two bytes before that
Look carefully at what is located at DS:009E.
What do you see there? Two bytes: 0A and 0D.
That's an ASCII "Line Feed" and an ASCII "Carriage Return".
Your confusion can be reduced by understanding the historical perspective...
Way back when printers used ink and paper, and telephones carried modem signals at 1200 BPS, and you paid something like ten hours of minimum wage pay for one hour of that connection to a city only three states away, there really was an economic imperative in choosing between running the little print head back to the left, or just jacking the platen down a line while the print head stayed in the same position.
I mean, you really saw it in your phone bill.
No joke, this one change, using the 0A byte without the 0D byte, could mean a 10 or 20 dollar difference in your phone bill; and remember to factor in inflation back then.
The reason you see the message properly is that your machine first outputs a "line feed" (the cursor drops to the next line) and a "carriage return" (the cursor jumps back to the left edge) before putting your message on the screen. This happens much faster than your eye can see.
With the miracle of Turbo Debugger, you can single step this and watch it happen.
So, you are correct when you write that your message "starts" at 00A0, but Turbo Debugger is also right when it tells you that the message starts two bytes before that.

Is it possible to use midiOutLongMsg to play a chord? (Win32 API)

This guy says yes:
http://web.tiscalinet.it/giordy/midi-tech/lowmidi.htm
Same with a really old book from 1998 (Maximum MIDI).
MSDN doesn't mention it.
I'm not getting any sound.
I fill a char buffer with status|note|velocity|status|note|velocity...
Set lpData, dwBufferLength, and dwFlags of a MIDIHDR struct
call midiOutPrepareHeader (MMSYSERR_NOERROR)
call midiOutLongMsg (MMSYSERR_NOERROR)
Still no sound! Spamming midiOutShortMsg works, but will that work on slower machines? Did they change the functionality?
Thanks.
I'm an idiot! I figured it out: Microsoft GS Wavetable Synth does NOT support sending multiple short messages in midiOutLongMsg. The MIDI Mapper DOES!
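For anyone who hits the same wall, here is a minimal sketch of that working configuration (the C-major chord and note numbers 60/64/67 are my own illustrative choices): open the MIDI Mapper, pack several note-on events into one buffer, and send it with midiOutLongMsg.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main() {
    HMIDIOUT out;
    if (midiOutOpen(&out, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    // three note-on events (status 0x90 = note on, channel 0): C4, E4, G4 at velocity 100
    unsigned char chord[] = { 0x90, 60, 100, 0x90, 64, 100, 0x90, 67, 100 };

    MIDIHDR hdr = {};
    hdr.lpData = reinterpret_cast<LPSTR>(chord);
    hdr.dwBufferLength = sizeof(chord);
    hdr.dwBytesRecorded = sizeof(chord);

    midiOutPrepareHeader(out, &hdr, sizeof(hdr));
    midiOutLongMsg(out, &hdr, sizeof(hdr));
    while (!(hdr.dwFlags & MHDR_DONE)) Sleep(1);    // wait until the driver is done with the buffer
    Sleep(1000);                                    // let the chord ring
    midiOutUnprepareHeader(out, &hdr, sizeof(hdr));
    midiOutClose(out);
    return 0;
}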
midiOutShortMsg should be plenty fast, even on slow machines. MIDI interfaces themselves (the hardware, that is, though some software will throttle itself) run at 31,250 baud. This of course ignores any slow code you may have wrapped around your calls to midiOutShortMsg.
Anyway, technically you should also be able to get away with one status byte, if the following notes use the same status byte. So, if you want to do note on/off (using velocity 0 for off) and those notes are on the same channel, you could do this:
status|note|velocity|note|velocity|note|velocity|note|velocity
This is called running status.
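With running status, the chord buffer from the sketch above would shrink to a single status byte followed by note/velocity pairs:
unsigned char chord[] = { 0x90, 60, 100, 64, 100, 67, 100 };   // one 0x90, then note/velocity pairs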

Fix hard-coded display setting without source (24-bit, need 32-bit)

I wrote a program about 10 years ago in Visual Basic 6 which was basically a full-screen game similar to Breakout / Arkanoid but had 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and the program crashes whenever I try to run it as a result. No virtual machine seems to support 24-bit display when the host display mode is 16/32-bit. It uses DirectX 7 so DOSBox is no use.
I've tried all sorts of decompilers and at best they give me the form names and a bunch of assembly calls which mean nothing to me. The display mode setting was a DirectX 7 call, but there's no clear reference to it in the decompilation.
In this situation, are there any pointers on how I can:
pin-point the function call in the program which is setting the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so it sets 800x600x32
view/intercept DirectX calls being made while it's running
or if that's not possible, at least
run the program in an environment that emulates a 24-bit display
I don't need to recover the source code (as nice as it would be) so much as just want to get it running.
One technique you could try in your disassembler is to search for the constants you remember, but as the actual bytes they would occupy within the executable. I guess you used the DirectDraw SetDisplayMode call, which is a method on a COM object and so can't be as easily traced to/from an entry point in a DLL. It takes parameters for width, height and bits per pixel, and they are DWORDs (32-bit), so search for the little-endian encodings of 600, 800 and 24: "58 02 00 00", "20 03 00 00" and "18 00 00 00". Hopefully that will narrow it down to what you need to change.
By the way, which disassembler are you using?
This approach may be complicated somewhat if your VB6 program was compiled to p-code rather than native code, as you'll just get a huge chunk of data representing the program rather than useful assembler instructions.
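If it helps, here is a small helper sketch (my own, not part of the answer) that scans the executable for those little-endian DWORD constants; "game.exe" is a placeholder file name.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    std::ifstream f("game.exe", std::ios::binary);
    std::vector<unsigned char> exe((std::istreambuf_iterator<char>(f)),
                                    std::istreambuf_iterator<char>());
    const std::uint32_t targets[] = { 800, 600, 24 };        // width, height, bits per pixel
    for (std::size_t i = 0; i + 4 <= exe.size(); ++i) {
        std::uint32_t v;
        std::memcpy(&v, &exe[i], 4);                         // x86 executables store DWORDs little-endian
        for (std::uint32_t t : targets)
            if (v == t)
                std::printf("offset 0x%08zx: %u\n", i, static_cast<unsigned>(t));
    }
}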
Check this:
http://www.sevenforums.com/tutorials/258-color-bit-depth-display-settings.html
If your graphics card doesn't have an entry for a 24-bit display... I guess hacking your code is the only possibility. That, or finding an old machine to throw Windows 95 on :P.
