How to stop blue lines from appearing on a Windows console during an ASCII game? - c++11

I've run into a very unusual problem. So unusual, in fact, that I haven't found even a mention of anything like it on Google. Here's the deal: I was working on an ASCII-based computer game for a programming class. I had created a simple program that moved a one-character rectangle around the screen when you pressed the four arrow keys. But I quickly noticed that when the rectangle moved right, it left a trail of vertical blue lines in its wake. I have absolutely no idea why this is happening, and as mentioned, Google doesn't seem to have the answers. So I'm wondering if there is any way to fix this problem. For some technical information: I am running this on Windows 10 with Dev-C++.
#include <ctime>
#include <iostream>
#include <windows.h>

void writeToConsole(char chr, COORD pos) {
    static const HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    std::cout.flush();                    // makes sure we aren't changing the cursor during a cout
    SetConsoleCursorPosition(hOut, pos);  // sets where the next characters are printed
    std::cout << chr;
}

int main() {
    COORD squareXY = {0, 0};
    const int width = 30;
    for (int i = 0; i < width; i++)
        std::cout << ' ';                 // prints spaces that will be overwritten
    writeToConsole((char)219, squareXY);  // (char)219 is a solid rectangle
    clock_t lastTimeMove = clock();       // time measured in clock ticks, not seconds
    while (true) {
        if ((clock() - lastTimeMove) > .1 * CLOCKS_PER_SEC) { // triggers approximately every tenth of a second
            writeToConsole(' ', squareXY);         // erase previous position
            squareXY.X = (squareXY.X + 1) % width;
            writeToConsole((char)219, squareXY);
            lastTimeMove = clock();
        }
    }
}
UPDATE:
I found someone having a similar problem with the Windows WriteConsoleOutput function. I changed my code to use that function, and now I have red and blue lines. Here are some screenshots:
Blue
Blue & Red
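For what it's worth, WriteConsoleOutput takes a buffer of CHAR_INFO structures, and any attribute bits that are left uninitialized, including the COMMON_LVB_GRID_* flags, are rendered as colored gridlines over the cells. Below is a minimal sketch (an illustration, not the code from the update) of a write where both the character and the Attributes field of every cell are set explicitly:

#include <windows.h>

// Sketch: write one row of characters with every CHAR_INFO field set explicitly,
// so no stray attribute bits (such as the COMMON_LVB_GRID_* gridline flags) survive.
void drawRow(HANDLE hOut, SHORT y, SHORT width, char fill) {
    CHAR_INFO buf[128] = {};  // zero-initialized buffer; width <= 128 for this sketch
    for (SHORT x = 0; x < width; ++x) {
        buf[x].Char.AsciiChar = fill;
        buf[x].Attributes = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE; // plain white on black
    }
    COORD bufSize   = { width, 1 };
    COORD bufCoord  = { 0, 0 };
    SMALL_RECT dest = { 0, y, (SHORT)(width - 1), y }; // Left, Top, Right, Bottom
    WriteConsoleOutputA(hOut, buf, bufSize, bufCoord, &dest);
}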

Related

Alpha channel in C++Builder

In Borland/Embarcadero C++Builder with VCL, I am trying to develop an application with an image where some parts (in fact, circles) fade in or out over time.
My code is mostly as follows:
void __fastcall TfmMain::FormCreate(TObject *Sender)
{
    img = new TBitmap;
    img->Width = 800;
    img->Height = 600;
    fmMain->DoubleBuffered = true;
    ...
}

void __fastcall TfmMain::tmMainTimer(TObject *Sender)
{
    for (int i = 0; i < nbParts; i++) {
        ...
        img->Brush->Color = clRed | (alpha(t_time) << 24);
        // alpha is a function returning 0 to 0xFF, depending on the required level of fade at time t_time
        img->Canvas->Ellipse(....);
    }
    fmMain->Canvas->Draw(0, 0, img);
}
But the result is not at all what I want: for example, a part that is supposed to fade out has its color alternating between red and black, and the same happens for a part supposed to fade in.
I tried DrawTransparent(), but had the error:
DrawTransparent is not accessible
Besides, it takes a transparency value for the whole bitmap, not for individual parts.
I tried a separate bitmap for each part, but I may have hundreds of them, and the animation becomes too slow.
Please, can someone help, and tell me what I should do?
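One thing to keep in mind: the high byte of a TColor is not an alpha channel as far as GDI brushes are concerned, so OR-ing alpha(t_time) << 24 into the color does not produce transparency, which would be consistent with the red/black alternation described above. A workaround sketch (illustrative only, assuming the parts fade against a flat background color and that the brush is set through the bitmap's canvas):

#include <vcl.h>

// Hypothetical helper: pre-blend the foreground color with the background
// color by 'alpha' (0..255), since the brush itself cannot carry transparency.
static TColor BlendToward(TColor fg, TColor bg, int alpha)
{
    int fgRgb = ColorToRGB(fg), bgRgb = ColorToRGB(bg);
    int r = (GetRValue(fgRgb) * alpha + GetRValue(bgRgb) * (255 - alpha)) / 255;
    int g = (GetGValue(fgRgb) * alpha + GetGValue(bgRgb) * (255 - alpha)) / 255;
    int b = (GetBValue(fgRgb) * alpha + GetBValue(bgRgb) * (255 - alpha)) / 255;
    return static_cast<TColor>(RGB(r, g, b));
}

// usage in the timer, in place of the OR with the shifted alpha value:
//   img->Canvas->Brush->Color = BlendToward(clRed, clBlack, alpha(t_time));
//   img->Canvas->Ellipse(....);

clBlack is used here only as an example; whatever color the circles actually fade against would go in its place.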

Processing: How to add audio into a webcam effect?

How can I add a song to this code using Processing, and synchronize it with a PIR sensor in Arduino?
import processing.video.*;
import ddf.minim.*;
import ddf.minim.AudioPlayer;

// Size of each cell in the grid
int cellSize = 20;
// Number of columns and rows in our system
int cols, rows;
// Variable for capture device
Capture video;
Minim minim;
AudioPlayer song;

void setup() {
  size(1280, 720);
  frameRate(30);
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 255, 255, 255, 100);
  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, width, height);
  // Start capturing the images from the camera
  video.start();
  background(0);
}

{
  // we pass this to Minim so that it can load files from the data directory
  minim = new Minim(this);
  // loadFile will look in all the same places as loadImage does.
  // this means you can find files that are in the data folder and the
  // sketch folder. you can also pass an absolute path, or a URL.
  song = minim.loadFile("untitled.wav");
}
void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    // Begin loop for columns
    for (int i = 0; i < cols; i++) {
      // Begin loop for rows
      for (int j = 0; j < rows; j++) {
        // Where are we, pixel-wise?
        int x = i*cellSize;
        int y = j*cellSize;
        int loc = (video.width - x - 1) + y*video.width; // Reversing x to mirror the image
        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);
        // Make a new color with an alpha component
        color c = color(r, g, b, 75);
        // Code for drawing a single rect
        // Using translate in order for rotation to work properly
        pushMatrix();
        translate(x+cellSize/2, y+cellSize/2);
        // Rotation formula based on brightness
        rotate((2 * PI * brightness(c) / 255.0));
        rectMode(CENTER);
        fill(c);
        noStroke();
        // Rects are larger than the cell for some overlap
        rect(0, 0, cellSize+6, cellSize+6);
        popMatrix();
      }
    }
  }
}
I am interested in detecting movement to activate or deactivate this feature.
Please, can you help me?
This is the error that I got:
The sketch path is not set.
==== JavaSound Minim Error ====
==== java.lang.reflect.InvocationTargetException

=== Minim Error ===
=== Couldn't load the file untitled.wav
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Get a simple example working. If you're asking about audio, then forget about the webcam for a second. Create a simple sketch that just plays a sound. Separately from that, create a simple sketch that just gets a webcam working. When you have those working perfectly, then you can think about combining them. But work your way forward in small steps. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
Then if you get stuck, you can post a more specific question along with a MCVE. Good luck.

Drawing image(PGraphics) gives unwanted double image mirrored about x-axis. Processing 3

The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window but translated, rotated, and scaled. I am trying to create an effect like a feedback loop when you point a camera plugged into a TV at the TV.
I have tried everything I can think of, logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Pic of the double image mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;

void setup() {
  size(1000, 1000, P2D);
  f = createGraphics(width, height, P2D);
  midX = width/2;
  midY = height/2;
  rect_size = 300;
  imageMode(CENTER);
  rectMode(CENTER);
  smooth();
  background(0, 0, 0);
  fill(0, 0);
  stroke(255, 255);
}
void draw() {
  fade_and_copy_pixels(f); // fades window pixels and then copies pixels to f

  background(0, 0, 0); // without this the corners don't get repainted.

  // transform the display window (instead of f)
  pushMatrix();
  float scaling = 0.90; // x>1 makes the image bigger
  float rot = 5;        // angle in degrees
  translate(midX, midY); // makes it so rotations are always around the center
  rotate(radians(rot));
  scale(scaling);
  imageMode(CENTER);
  image(f, 0, 0); // weird double image; must have something not working around here
  popMatrix();    // returns window matrix to normal

  int x = mouseX;
  int y = mouseY;
  rectMode(CENTER);
  rect(x, y, rect_size, rect_size);
}
// fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f) {
  loadPixels();   // load window's pixels. don't need because I am only reading pixels?
  f.loadPixels(); // loads feedback loop's pixels

  // Loop through every pixel in the window.
  // It is faster to grab data from the pixels[] array, so don't use get() and set(); use this.
  for (int i = 0; i < pixels.length; i++) {
    ////////////// FADE PIXELS in window and COPY to f: ///////////////
    color p = pixels[i];

    // get color values: mask, then shift
    int r = (p & 0x00FF0000) >> 16;
    int g = (p & 0x0000FF00) >> 8;
    int b = p & 0x000000FF; // no need for shifting

    // reduce the value of each color proportionally,
    // with fade_percent between 0-1: 0 being totally transparent, and 1 totally none
    // min is 0.0039 (when using the floor function and 255 as the colorMode max for colors)
    float fade_percent = 0.005; // 0.05 = 5%
    int r_new = floor(float(r) - (float(r) * fade_percent));
    int g_new = floor(float(g) - (float(g) * fade_percent));
    int b_new = floor(float(b) - (float(b) * fade_percent));
    // maybe later rewrite in a way that saves what the difference is and rounds it differently, e.g. faster at first and slower later
    // round() doesn't work because it never first subtracts one to get the ball rolling
    // floor() has a minimum of always subtracting 1 from each value each time; can't just subtract 1 every n loops
    // keep a list of all the pixels as floats? too much memory?
    // I'll stick with floor for now
    // the lowest percent that will make a difference with floor is 0.0039?... because that's slightly more than 1/255

    // shift back and OR together
    p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new; // OR-ing all the new hex back together into AARRGGBB
    f.pixels[i] = p;
    //////// pixels now copied
  }
  f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;

void setup() {
  size(500, 500, P2D);
  f = createGraphics(width, height, P2D);
}

void draw() {
  background(0);
  rect(mouseX, mouseY, 100, 100);
  copyPixels(f);
  image(f, 0, 0);
}

void copyPixels(PGraphics f) {
  loadPixels();
  f.loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color p = pixels[i];
    f.pixels[i] = p;
  }
  f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, it's going from screen space to OpenGL space... or something. That definitely seems buggy though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix, and it works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels(); //load windows pixels. dont need because I am only reading pixels?
f.loadPixels(); //loads feedback loops pixels
// Loop through every pixel in window
//it is faster to grab data from pixels[] array, so dont use get and set, use this
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing must know when it's drawing in a buffer and when not.
In this image you can see that it works.

How to avoid perceived flicker during scrolling in Qt?

I'm trying to use the Qt framework (4.7.4) to demonstrate a sliding display in which new pixel data is added to the first row of the screen and the previous pixels are scrolled one pixel down on every refresh.
It is refreshed 20 times per second, and on every refresh random green points (pixels) are drawn on a black background.
The problem is that there is highly noticeable flicker on every refresh. I have researched through the web and optimized my code as much as possible. I tried raster rendering with both QPainter (on a QWidget) and QGraphicsScene (on a QGraphicsView), and I even tried OpenGL rendering on a QGLWidget. However, I still end up with the same flicker problem.
What may cause this flickering? I am beginning to suspect that my LCD monitor cannot refresh the display for black-to-green transitions. I have also noticed that if I select a gray background instead of black, there is no flicker.
The effect you're seeing is purely psychovisual. It's a human defect, not a software defect. I'm serious. You can verify by fixing the value of x - you'll still be repainting the entire pixmap on the window, there won't be any flicker - because there is no flicker per se.
The psychovisual flicker occurs when the scroll rate is not tied to the passage of real time. When occasionally the time between updates varies due to CPU load, or due to system timer inaccuracies, our visual system integrates two images and it appears as if the overall brightness is changed.
You've correctly noticed that the perceived flicker is reduced as you reduce the contrast ratio of the image by setting the background to grey. This is an additional clue that the effect is psychovisual.
Below is a way of preventing this effect. Notice how the scroll distance is tied to the time (here: 1ms = 1pixel).
#include <QElapsedTimer>
#include <QPaintEvent>
#include <QBasicTimer>
#include <QApplication>
#include <QPainter>
#include <QPixmap>
#include <QWidget>
#include <QDebug>

static inline int rand(int range) { return (double(qrand()) * range) / RAND_MAX; }

class Widget : public QWidget
{
    float fps;
    qint64 lastTime;
    QPixmap pixmap;
    QBasicTimer timer;
    QElapsedTimer elapsed;

    void timerEvent(QTimerEvent * ev) {
        if (ev->timerId() == timer.timerId()) update();
    }
    void paintEvent(QPaintEvent * ev) {
        qint64 time = elapsed.elapsed();
        qint64 delta = time - lastTime;
        lastTime = time;
        if (delta > 0) {
            const float weight(0.05);
            fps = (1.0-weight)*fps + weight*(1E3/delta);
            if (pixmap.size() != size()) {
                pixmap = QPixmap(size());
                pixmap.fill(Qt::black);
            }
            int dy = qMin((int)delta, pixmap.height());
            pixmap.scroll(0, dy, pixmap.rect());
            QPainter pp(&pixmap);
            pp.fillRect(0, 0, pixmap.width(), dy, Qt::black);
            for (int i = 0; i < 30; ++i) {
                int x = rand(pixmap.width());
                pp.fillRect(x, 0, 3, dy, Qt::green);
            }
        }
        QPainter p(this);
        p.drawPixmap(ev->rect(), pixmap, ev->rect());
        p.setPen(Qt::yellow);
        p.fillRect(0, 0, 100, 50, Qt::black);
        p.drawText(rect(), QString("FPS: %1").arg(fps, 0, 'f', 0));
    }
public:
    explicit Widget(QWidget *parent = 0) : QWidget(parent), fps(0), lastTime(0), pixmap(size())
    {
        timer.start(1000/60, this);
        elapsed.start();
        setAttribute(Qt::WA_OpaquePaintEvent);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    Widget w;
    w.show();
    return a.exec();
}
I'd recommend you do not scroll the pixmap in-place, but create a second pixmap and use drawPixmap() to copy everything but one line from pixmap 1 to pixmap 2 (with the scroll offset). Then continue painting on pixmap 2. After the frame, exchange the references to both pixmaps, and start over.
The rationale is that copying from one memory area to a different one can be optimised more easily than modifying one memory area in-place.
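A rough sketch of that double-buffered scroll (the names and the one-pixel step are illustrative; this is not a drop-in replacement for the example above):

#include <QPainter>
#include <QPixmap>
#include <QtGlobal>

// Sketch: scroll by copying everything but the last row from the previous
// frame ('front') into a second pixmap ('back'), shifted down one pixel,
// then paint the new top row into 'back' and swap the two.
void scrollOneLine(QPixmap &front, QPixmap &back)
{
    QPainter p(&back);
    p.fillRect(back.rect(), Qt::black);                                  // fresh background
    p.drawPixmap(0, 1, front, 0, 0, front.width(), front.height() - 1); // copy, offset by one row
    // ... draw the new topmost row into 'back' here ...
    p.end();
    qSwap(front, back); // 'front' now holds the frame to paint onto the widget
}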

OpenCV cvblob - Render Blob

I'm trying to detect an object using cvblob, so I use the cvRenderBlob() method. The program compiles successfully, but at run time it throws an unhandled exception. When I break, the debugger points to the statement CvLabel *labels = (CvLabel *)imgLabel->imageData + imgLabel_offset + (blob->miny * stepLbl); in the definition of cvRenderBlob() in cvblob.cpp. But if I use the cvRenderBlobs() method, it works fine. I need to detect only one blob, the largest one. Can someone please help me handle this exception?
Here is my VC++ code:
CvCapture* capture = 0;
IplImage* frame = 0;
int key = 0;
CvBlobs blobs;
CvBlob *blob;

capture = cvCaptureFromCAM(0);
if (!capture) {
    printf("Could not initialize capturing....\n");
    return 1;
}
int screenx = GetSystemMetrics(SM_CXSCREEN);
int screeny = GetSystemMetrics(SM_CYSCREEN);

while (key != 'q') {
    frame = cvQueryFrame(capture);
    if (!frame) break;

    IplImage* imgHSV = cvCreateImage(cvGetSize(frame), 8, 3);
    cvCvtColor(frame, imgHSV, CV_BGR2HSV);
    IplImage* imgThreshed = cvCreateImage(cvGetSize(frame), 8, 1);
    cvInRangeS(imgHSV, cvScalar(61, 156, 205), cvScalar(161, 256, 305), imgThreshed); // for light blue color
    IplImage* imgThresh = imgThreshed;
    cvSmooth(imgThresh, imgThresh, CV_GAUSSIAN, 9, 9);

    cvNamedWindow("Thresh");
    cvShowImage("Thresh", imgThresh);

    IplImage* labelImg = cvCreateImage(cvGetSize(imgHSV), IPL_DEPTH_LABEL, 1);
    unsigned int result = cvLabel(imgThresh, labelImg, blobs);
    blob = blobs[cvGreaterBlob(blobs)];
    cvRenderBlob(labelImg, blob, frame, frame);
    /*cvRenderBlobs(labelImg, blobs, frame, frame);*/
    /*cvFilterByArea(blobs, 60, 500);*/
    cvFilterByLabel(blobs, cvGreaterBlob(blobs));

    cvNamedWindow("Video");
    cvShowImage("Video", frame);
    key = cvWaitKey(1);
}
cvDestroyWindow("Thresh");
cvDestroyWindow("Video");
cvReleaseCapture(&capture);
First off, I'd like to point out that you are actually using the old C syntax; the C++ interface uses the Mat class. I've been working on some blob extraction based on green objects in the picture. Once the image is thresholded properly, which means we have a "binary" background/foreground image, I use
findContours() //this function expects quite a bit, read documentation
It is described more clearly in the documentation on structural analysis. It will give you the contours of all the blobs in the image, in a vector holding another vector, which holds points in the image, like so:
vector<vector<Point>> contours;
I too need to find the biggest blob, and though my approach can be faulty to some extent, it is good enough for my needs. I use
minAreaRect() // expects a set of points (contained in a vector or Mat)
It is also described under structural analysis.
Then access the size of the rect:
float sizeOfObject = 0;
int idxBiggestObject = 0;                        // will track the biggest object
if (contours.size() != 0)                        // only runs if there are any blobs / contours in the image
{
    for (int i = 0; i < (int)contours.size(); i++) // runs once per "blob" in the image
    {
        RotatedRect myVector = minAreaRect(contours[i]);
        if (myVector.size.area() > sizeOfObject)
        {
            sizeOfObject = myVector.size.area(); // saves area to compare with further blobs
            idxBiggestObject = i;                // saves index so you know which is biggest; alternatively, .push_back into another vector
        }
    }
}
So okay, we really only measure a rotated bounding box, but in most cases it will do. I hope that you will either switch to the C++ syntax, or get some inspiration from the basic algorithm.
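For reference, the whole idea put together with the C++ API might look roughly like this (my own sketch, not tested against your camera setup; cv::contourArea is used here instead of the rotated rect simply to keep the size comparison short, and the HSV range is copied from the question):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, hsv, mask;
    while (cv::waitKey(1) != 'q') {
        if (!cap.read(frame)) break;

        // same thresholding idea as above, just with the C++ API
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(61, 156, 205), cv::Scalar(161, 255, 255), mask);
        cv::GaussianBlur(mask, mask, cv::Size(9, 9), 0);

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // keep only the contour with the largest area
        int biggest = -1;
        double biggestArea = 0.0;
        for (size_t i = 0; i < contours.size(); ++i) {
            double area = cv::contourArea(contours[i]);
            if (area > biggestArea) { biggestArea = area; biggest = (int)i; }
        }
        if (biggest >= 0)
            cv::rectangle(frame, cv::boundingRect(contours[biggest]), cv::Scalar(0, 0, 255), 2);

        cv::imshow("Video", frame);
    }
    return 0;
}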
Enjoy.
