When one uses the "ACSII BEL" character i.e. '\a' in any programming language? - ascii

I stumbled upon the fancy ASCII BEL character in a book. Then I briefly read the Wikipedia article on ASCII BEL and learned that it produces a noise (an alert sound).
I tried it in Python 3 and in C, and it didn't work.
Probably people don't need the bell these days or something.
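For reference, a minimal way to try it in Python; whether anything is audible depends on the terminal, since many modern terminal emulators mute the bell or replace it with a visual flash:

```python
import sys

# BEL is ASCII code 7; '\a' is the escape for it in both Python and C.
# Whether anything is audible is up to the terminal emulator, not the
# language: many terminals mute the bell or turn it into a visual flash.
sys.stdout.write('\a')
sys.stdout.flush()  # make sure the byte actually reaches the terminal
```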

It is seldom used now, and many consoles do not support it. In case you are interested in its history, I found this from Microsoft:
A long time ago, all PC computers shared a common 8254 programmable interval timer chip for the generation of primitive sounds. The Beep function was written specifically to emit a beep on that piece of hardware.
On these older systems, muting and volume controls have no effect on Beep; you would still hear the tone. To silence the tone, you used the following commands:
net stop beep
sc config beep start= disabled
Since then, sound cards have become standard equipment on almost all PC computers. As sound cards became more common, manufacturers began to remove the old timer chip from computers. The chips were also excluded from the design of server computers. The result is that Beep did not work on all computers without the chip. This was okay because most developers had moved on to calling the MessageBeep function, which uses whatever the default sound device is instead of the 8254 chip.
Eventually, because of the lack of hardware to communicate with, support for Beep was dropped in Windows Vista and Windows XP 64-Bit Edition.
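For what it's worth, MessageBeep is reachable from Python on Windows through the standard winsound module, and it plays through whatever the default sound device is:

```python
import winsound

# Plays the default system sound through the current audio device;
# unlike the legacy Beep, this does not depend on the 8254 timer chip.
winsound.MessageBeep(winsound.MB_OK)
```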

Related

Real keyboard simulation

I'm trying to write a bot for a game. The huge problem I've been running into for months now is keyboard simulation.
My operating system is Windows 10. On that OS, when a key press is simulated via code, Windows 10 adds a flag to the event. This flag indicates that the key press came from a program and not from the hardware.
That way, games can check the user input and filter out all of the simulated key presses.
My game does that, so I have been trying to find a way around it.
VirtualBox
The first solution is to run the game in VirtualBox and run the bot program from the host. When the VirtualBox window has focus, the program is able to simulate the keyboard with no problem. (That is because VirtualBox does the hard work of "fooling" the guest OS into thinking the key simulation came from hardware.)
This way works pretty well, but the main disadvantage is that running the game in a VM is super slow; the game stutters a lot. I have tried multiple tutorials on how to get the best gaming performance in a VM, but nothing really worked.
Real keyboard simulation
This idea came to me recently. I wonder if I can somehow fool my PC into thinking a key press came from the hardware.
Maybe by using a male-to-male USB cable to connect the PC to itself and then doing real keyboard simulation (sending keyboard events out one end of the USB cable and receiving them through the other).
Or maybe some other way to achieve that?
What I don't want
There are some solutions that will probably work, but I don't want to try:
Downgrading my OS to Windows 7: I don't want to lower my OS version
Dual-booting Windows 7: I have tried dual-booting with Linux before, and it was hideous to get working
The question is: do you have any idea how to simulate the keyboard in such a way that Windows 10 won't add the "rat" flag?
Option 1
A Windows driver is exactly what you need. In your driver, create a keyboard HID device, then send your keys through this HID device.
Pros:
Software-only
Cons:
Complicated
The driver must be signed (you must pay for a certificate), or you must put Windows 10 into Test Mode to load it
Option 2
Use an Arduino to send your keys (see the host-side sketch after this list); ref: https://www.arduino.cc/reference/en/language/functions/usb/keyboard/
Pros
Easy to learn
Cons
Hardware is required.
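To keep the example on the PC side in Python, here is a sketch of driving such a board, assuming a hypothetical Arduino sketch that reads bytes from its serial port and replays each one through the Keyboard library:

```python
import serial  # third-party pyserial package

# Hypothetical setup: an Arduino with native USB (e.g. a Leonardo) runs a
# sketch that calls Keyboard.write() for every byte received over serial.
# From Windows's point of view, the resulting key presses come from real
# hardware, so no "injected" flag is attached.
arduino = serial.Serial("COM4", 115200, timeout=1)  # port name is an assumption
arduino.write(b"w")  # ask the board to type 'w' into the focused game
arduino.close()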
You need driver-level key simulation,
because some games built on D3D engines block system-level simulation such as WinAPI calls or pykeyboard.
I have used driver-level key simulation to cheat in games like LoL, CS, PUBG...
So, if you use Python, you can use the keyboard and mouse libraries, ctypes, etc.
If you are on the Win32 platform, you can use winio and pydamo.
They are all driver-level simulation.
PS: If your game blocks one solution, please try another one.
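For reference, the ctypes route means calling the Win32 SendInput function directly. Below is a minimal scan-code sketch; note that input injected this way still carries the injected flag the question complains about, so games that filter on it will ignore it:

```python
import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL("user32", use_last_error=True)

INPUT_KEYBOARD = 1
KEYEVENTF_KEYUP = 0x0002
KEYEVENTF_SCANCODE = 0x0008
ULONG_PTR = wintypes.WPARAM  # pointer-sized unsigned integer

class KEYBDINPUT(ctypes.Structure):
    _fields_ = (("wVk", wintypes.WORD),
                ("wScan", wintypes.WORD),
                ("dwFlags", wintypes.DWORD),
                ("time", wintypes.DWORD),
                ("dwExtraInfo", ULONG_PTR))

class MOUSEINPUT(ctypes.Structure):
    _fields_ = (("dx", wintypes.LONG),
                ("dy", wintypes.LONG),
                ("mouseData", wintypes.DWORD),
                ("dwFlags", wintypes.DWORD),
                ("time", wintypes.DWORD),
                ("dwExtraInfo", ULONG_PTR))

class HARDWAREINPUT(ctypes.Structure):
    _fields_ = (("uMsg", wintypes.DWORD),
                ("wParamL", wintypes.WORD),
                ("wParamH", wintypes.WORD))

class INPUT(ctypes.Structure):
    # All three union members are declared so sizeof(INPUT) matches what
    # SendInput expects; only the keyboard member is used here.
    class _U(ctypes.Union):
        _fields_ = (("ki", KEYBDINPUT),
                    ("mi", MOUSEINPUT),
                    ("hi", HARDWAREINPUT))
    _anonymous_ = ("_u",)
    _fields_ = (("type", wintypes.DWORD), ("_u", _U))

def tap_scancode(scan):
    # Press and release one key, addressed by hardware scan code rather
    # than virtual-key code; DirectInput-style games read scan codes.
    for flags in (KEYEVENTF_SCANCODE, KEYEVENTF_SCANCODE | KEYEVENTF_KEYUP):
        inp = INPUT(type=INPUT_KEYBOARD,
                    ki=KEYBDINPUT(wScan=scan, dwFlags=flags))
        user32.SendInput(1, ctypes.byref(inp), ctypes.sizeof(INPUT))

tap_scancode(0x11)  # 0x11 is the set-1 scan code for 'W'
```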
Considerations about other solutions here:
I tested most of the solutions (python libraries, mostly):
keyboard
ctypes simulation
However, only two approaches seem to work: simulating a virtual environment or using an Arduino.
Both solutions require a lot of effort in terms of installation and programming (you may be able to find ready-made Arduino code, but you will still have to deal with it).
Solution:
A possible workaround I tested that worked quite well is using keyboard software such as HyperX NGENUITY to save a macro. The macro is associated with a button in the keyboard software, and the keyboard hardware executes it. This way, the keyboard works like an Arduino.

What causes poor network performance when playing audio or video in Windows Vista and newer?

The software in question is a native C++/MFC application that receives a large amount of data over UDP and then processes the data for display, sound output, and writing to disk among other things. I first encountered the problem when the application's CHM help document was launched from its help menu and then I clicked around the help document while gathering data from the hardware. To replicate this, an AutoHotkey script was used to rapidly click around in the help document while the application was running. As soon as any sound occurred on the system, I started getting errors.
If I have the sound card completely disabled, everything processes fine with no errors, though sound output is obviously disabled. However, if I have sound playing (in this application, a different application or even just the beep from a message box) I get thousands of dropped packets (we know this because each packet is timestamped). As a second test, I didn't use my application at all and just used Wireshark to monitor incoming packets from the hardware. Sure enough, whenever a sound played in Windows, we had dropped packets. In fact, sound doesn't even have to be actively playing to cause the error. If I simply create a buffer (using DirectSound8) and never start playing, I still get these errors.
This occurs on multiple PCs with multiple combinations of network cards (both fiber optic and RJ45) and sound cards (both integrated and separate cards). I've also tried different driver versions for each NIC and sound card. All tests have been on Windows 7 32bit. Since my application uses DirectSound for audio, I've tried different CooperativeLevels (normal operation is DSSCL_PRIORITY) with no success.
At this point, I'm pretty convinced it has nothing to do with my application and was wondering if anyone had any idea what could be causing this problem before I started dealing with the hardware vendors and/or Microsoft.
It turns out that this behavior is by design. Windows Vista and later implemented something called the Multimedia Class Scheduler service (MMCSS) that is intended to make all multimedia playback as smooth as possible. Since multimedia playback relies on hardware interrupts to ensure smooth playback, any competing interrupts will cause problems. One of the major hardware interrupt sources is network traffic. Because of this, Microsoft decided to throttle the network traffic when a program was running under MMCSS.
I guess this was a big deal back in 2007 when Vista came out, but I missed it. There was an article by Mark Russinovich (thanks ypnos) describing MMCSS. It seems that my entire problem boiled down to this:
Because the standard Ethernet frame size is about 1500 bytes, a limit of 10,000 packets per second equals a maximum throughput of roughly 15MB/s. 100Mb networks can handle at most 12MB/s, so if your system is on a 100Mb network, you typically won't see any slowdown. However, if you have a 1Gb network infrastructure and both the sending system and your Vista receiving system have 1Gb network adapters, you'll see throughput drop to roughly 15%. Further, there's an unfortunate bug in the NDIS throttling code that magnifies throttling if you have multiple NICs. If you have a system with both wireless and wired adapters, for instance, NDIS will process at most 8000 packets per second, and with three adapters it will process a maximum of 6000 packets per second. 6000 packets per second equals 9MB/s, a limit that's visible even on 100Mb networks.
I haven't verified that the multiple adapter bug still exists in Windows 7 or Vista SP1, but it is something to look for if you are running into problems.
From the comments on Russinovich's post, I found that Vista SP1 introduced some registry settings that allow one to adjust how MMCSS affects Windows; specifically, the NetworkThrottlingIndex key.
The solution to my issue was to completely disable network throttling by setting the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\NetworkThrottlingIndex key to 0xFFFFFFFF and then rebooting. This completely disables the network throttling portion of MMCSS. I had tried simply upping the value to 70, but it didn't stop causing errors until I completely disabled it.
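For convenience, a minimal sketch of making that change from Python (it assumes you run it with administrator rights; a reboot is still required afterwards):

```python
import winreg

# Disable MMCSS network throttling by maxing out NetworkThrottlingIndex.
# Requires administrator rights; takes effect only after a reboot.
path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NetworkThrottlingIndex", 0, winreg.REG_DWORD, 0xFFFFFFFF)
```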
Thus far I have not seen any adverse effects on other multimedia applications (nor the video capture and audio output portions of my own application) from this change. I will report back here if that changes.
It is known that Microsoft built a weird anti-feature into the Windows Vista kernel that preemptively degrades I/O performance to make sure that multimedia applications (Windows Media Player, DirectX) get 100% responsiveness. I don't know if that also means packet loss with UDP. Read this lame justification for the method: http://blogs.technet.com/b/markrussinovich/archive/2007/08/27/1833290.aspx
One of the comments there summarizes this quite well: "Seems to me Microsoft tried to 'fix' something that wasn't broken."

Does using a barcode scanner as a keyboard wedge imply you can't confirm receipt of the scan?

I have an extremely simple application, running off a series of deprecated scanners, that picks up a barcode scan from a serial port and sends back to the scanner an OK that it received the scan. Based on that, the scanner flashes green and the user knows they can continue.
I like this model over my understanding of a keyboard wedge because if something happens to the application picking up the scan (the application hangs, the form with focus changes, the PC hangs, the PC can't keep up with the scans), the person holding the scan gun will know there is a problem because they won't receive the green flash, and they won't be able to continue scanning.
I'm looking at adding some scanners, and it seems many people are using barcode scanners that effectively act as keyboard wedges. Some of these scanners have ranges that exceed 100 feet, implying people are using them far away from the PC (as my users are). So I'm wondering if I'm missing something regarding the keyboard wedge model. Is there some mechanism I'm missing that ensures a scan decoded by a wedge scanner actually reaches the application running on the PC? A full-blown hand-held computer running something like Windows Mobile seems like massive overkill just to ensure my user isn't scanning data that never makes it into the application, and so does even a mid-range scanner with a keypad and screen. But is the latter the entry point for any sort of programmability of the scanner?
You are correct: there isn't a feedback loop to the scanner when running as a wedge. We use wedge scanners a lot, and in a modern environment (i.e., Windows, multiple apps, etc.), focus, "dropped scans", and the like are all real problems.
We're in the middle of switching over to a different approach. If you have your choice of hardware, many new USB barcode scanners can operate in a serial emulation mode that allows the same kind of interaction you describe (where you can prevent a second scan until the host has ACK'd the first, or beep/flash something on the scanner as an ACK). There is also a USB HID POS (point of sale) mode that some higher-end USB scanners support, which gives you an even greater degree of flexibility, with the added bonus of "driver free" installation (it looks like a generic HID device to the system, like a joystick or keyboard, but with two-way communication). The downside of POS mode is that it's a little harder than serial programming, but there are abstraction layers available for different platforms.
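As a rough illustration of the serial-emulation round trip (Python with pyserial; the COM port, line ending, and ACK byte here are assumptions, since the real handshake depends on how the scanner is configured):

```python
import serial  # third-party pyserial package

# Assumed setup: a USB scanner in serial-emulation mode on COM3 that
# sends one decoded barcode per line and flashes green when it sees ACK.
port = serial.Serial("COM3", baudrate=9600, timeout=5)
while True:
    barcode = port.readline().strip()
    if not barcode:
        continue  # read timed out; no scan arrived
    print("got scan:", barcode.decode("ascii", "replace"))  # hand off to the app here
    port.write(b"\x06")  # ACK: tell the scanner the host really received it
```

The point is that the host, not the scanner, decides when the green flash happens, so a hung application simply never sends the ACK and the user stops scanning.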
RF mobile computers with built-in scanners, like the Symbol MC9090-G, are by far the most flexible and what we use the most. As for wedges, depending on the distance from the PC and the factory environment, we have used visual feedback via the PC screen and audio feedback via the PC speakers. The users listen for the audio feedback after each scan, and when they don't hear it they look back at the PC screen for visual feedback about the problem. Not perfect, but it has worked well.

Do Turbo C 3.0 and lower versions really use high CPU power?

I am using Turbo C 3.0 and Turbo C 2.0 for programming, on Windows XP. Under Windows 98 these programs worked fine, but after installing XP they really slow down my system. They use a lot of CPU power even when idle (idle meaning no interaction between the program and the user).
Has anybody solved this issue before? Please post here.
Also, I want to know what is causing the slowdown!
Those are 16-bit DOS programs, so they cannot run natively on XP; they are probably running in the NT Virtual DOS Machine. Use Task Manager, or better yet, Process Explorer, to check this. You will probably not see your programs listed; look for instances of ntvdm.exe instead.
I have noticed that several antivirus programs (Checkpoint, Proventia Desktop) seem to have a problem with ntvdm. It is as if they eat up quite a bit of CPU whenever an ntvdm instance is running.
Also, wasn't Turbo C finicky about its extended memory settings? If you still have your Autoexec.bat and Config.sys files from the Win98 system, you could try changing XP's settings to match. The XP equivalents of these files are autoexec.nt and config.nt; they are in the Windows\System32 directory.
I suspect Adrian's comment is the correct answer: old DOS programs did not account for multitasking and so tended to put themselves in tight loops when "idle". Back in the day, it didn't matter as nothing else was running at the same time and the operating system would interrupt the running program to handle hardware, well, interrupts.
I would highly recommend avoiding such tools on modern hardware, because the programs they generate are likewise not multitasking friendly. They are also optimized for ancient processors and have limited memory addressing. If you have some old hardware and want to goof around with it, then knock yourself out. But there are plenty of modern compilers that are free (whether free as Visual C++ Express is, to get you hooked, or open source).
This can be partially avoided by setting the process priority:
Start the app, e.g. Turbo C++ 3.0
Minimize it and go to Task Manager
Find ntvdm.exe
Right Click > Set Priority > Low > Yes
Then it runs at a less annoying speed.
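The same change can be scripted; here is a sketch using the third-party psutil package (the package choice is an assumption; anything that ends up calling SetPriorityClass would do):

```python
import psutil  # third-party; pip install psutil

# Drop every ntvdm.exe instance to the lowest priority class so a DOS
# program busy-waiting when "idle" stops starving everything else.
for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == "ntvdm.exe":
        proc.nice(psutil.IDLE_PRIORITY_CLASS)  # Windows-only priority class
```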

How to emulate/replace/re-enable classical Sound Mixer controls (or commands) in Windows Vista?

I have a problem (and have been having it for some time now): the new sound mixer stack in Vista features cool new things, but it also re-invents the wheel. Many applications that used to use the Volume Mixer on a Windows system to mix different voiced outputs into one input (for example, Wave-out + Line-in --> Stereo Mix) have since stopped working. The prime example of this behavior is the Shoutcast DSP plugin (which could be useful for testing a solution).
How can I re-enable the XP-style mixer controls, or emulate this behavior somehow, so that the program (SC DSP) can properly manage Microphone/Line-In playback volume along with Wave-out playback volume?
My thinking would be to emulate a program hooked into the Vista mixer for Wave-out and Line-out (or Mic speaker volume; all playback, shown as separately adjustable "programs" so that the Vista mixer could refer to them) and hook it into the system under some emulation layer that presents itself to the program as the old volume mixer control interface, but I frankly have no idea how to do that.
To clarify: this is not my PC (it is an HP Pavilion laptop). The problem seems to exist mostly because the Vista mixer controls separate programs, not separate inputs/outputs. The hardware is fully capable of doing what is needed when running Windows XP. I am well aware that this is a driver issue, but the driver simply implements what Vista presents to the programmer through its interfaces. The mixer device, as seen in the operating system (however it might look in software), is based on the mixer APIs for Windows audio control.
Search using Google on Vista and line-in playback volume control for more info on the problem (and the sheer amount of users affected by it). Of course, a re-write of the Shoutcast Source DSP plug-in for WinAMP would do the trick, but that is not likely to happen...
Controlling the volume levels of a sound card's individual inputs/outputs in the Windows Vista mixer is possible using the audio EndPoint API.
This should allow you to adjust the master volume and the volume of any connected audio inputs. One wrinkle is that when you enumerate the endpoints, if there isn't a microphone plugged into your sound card, nothing will be enumerated. This means you'll need to change your application to respond to "microphone plugged in" events and notify the user appropriately.
Another option is to dip below Microsoft Core Audio and access the WaveRT driver directly. This is a lot more work than using the WASAPI/Endpoint APIs, but it will give you the most control over access to the inputs/outputs of the sound card.
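The question concerns native C++/MFC, but as a quick illustration of the endpoint-volume route, here is a sketch using the third-party pycaw wrapper around the Core Audio endpoint APIs (the library choice is an assumption; the underlying interface is IAudioEndpointVolume either way):

```python
from ctypes import POINTER, cast
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# Grab the default render endpoint (the speakers) and set its master level.
device = AudioUtilities.GetSpeakers()
interface = device.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))
volume.SetMasterVolumeLevelScalar(0.5, None)  # scalar in 0.0..1.0, i.e. 50%
```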
The audio driver controls which mixer controls are available, and this will depend largely on the capabilities of the hardware.
If the Vista driver doesn't have certain controls, then it's likely to be a shortcoming of that driver and not of Vista.
(Please tell us which sound card/device you are using.)
It would be possible to write a program to create your own mixer controls (this would be a software-only driver for a virtual sound card), but this program wouldn't be able to affect the audio routing inside the device if the actual driver doesn't have some mixer control for this.
If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back.
If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back.
This is true, but as of Vista SP1 patch KB957388, included in SP2, and with some sound card drivers, the old mixer API (winmm.dll) functions can hang when the app is in XP compatibility mode. In particular, mixerGetNumDevs, and less often mixerOpen, will not return on some computers.
I've had reports from 5 out of around 200 Vista users in total that my app hangs when starting up, and I have tracked it down to these functions hanging.
I would like to report this to Microsoft but cannot find anywhere to do so.
All I can do now is release my software without compatibility mode enabled, but this loses functionality in my app, and the software cannot control the line-in or microphone mixers.
I don't have time to work with low-level API functions directly. I rely on high-level components, and I cannot find any for the new audio APIs for my development system (Delphi).
I would be interested in paying someone to write a DLL for me!!!
e mail ross att stationplaylist dott com
