Upscaling an image using nearest-neighbor in Processing

While doing something like this in Processing 3.5.4:
void draw() {
  image(img.source, 0, 0, 50000, 50000);
}
the image comes out blurry.
Is there a way I can do this using nearest-neighbor interpolation?
The image is 1000 by 1000 pixels.

If it's the default (JAVA2D) renderer you're using with size(), then simply calling noSmooth() in setup() should do the trick.
If you use P2D, you need to tell OpenGL to use nearest-neighbour sampling in setup():
((PGraphicsOpenGL) g).textureSampling(2);
(I learned this from Vallentin's answer.)
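For instance, here's a minimal sketch of the JAVA2D case; the file name img.png is just a placeholder for whatever image sits in the sketch's data folder:
PImage img;

void setup() {
  size(1000, 1000);  // default JAVA2D renderer
  noSmooth();        // disable interpolation so upscaling uses nearest-neighbour
  img = loadImage("img.png");
}

void draw() {
  image(img, 0, 0, 5000, 5000);  // scaled up without the blur
}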

Related

Running background in P5

I'm trying to make a side-scroller game and I got stuck on the running background part. I've looked for solutions and found some, but they were using plain JavaScript, not the p5 library.
I started from the tutorials found on The Coding Train, and looked over all the examples and references on their site.
Although I could avoid this by using something else, just for the sake of having it here in case someone gets stuck on the same issue, could anyone offer a solution to this in p5? Disclaimer: I'm a total noob at p5.js.
Later edit: by "running background" I mean moving a background image in a loop from left to right.
Honestly, from the discussion we had in the comments, it sounds like you're overthinking it.
The general approach to animation (that tutorial is for Processing, but the principles apply to P5.js as well) is as follows:
Step 1: Create a set of variables that represent the state of your scene.
Step 2: Use those variables to draw your scene every frame.
Step 3: Change those variables over time to make your scene move.
You already know what to do: load an image that contains your background, then draw that image, and move it a little bit each frame.
You've said you want to call the background() function instead of the image() function, which doesn't make a ton of sense. The background() function is not any more efficient than the image() function. In fact, the background() function just calls the image() function for you!
From the P5.js source:
p5.prototype.background = function() {
  if (arguments[0] instanceof p5.Image) {
    this.image(arguments[0], 0, 0, this.width, this.height);
  } else {
    this._renderer.background.apply(this._renderer, arguments);
  }
  return this;
};
P5.js simply checks whether the argument is an image, and if so, calls the image() function for you. So it doesn't really make sense to say that using the image() function is "less efficient" than using the background() function.
Taking a step back, you should really avoid thinking about these kinds of micro-optimizations until you A: understand the problem and B: actually have a problem. Don't make assumptions about "efficiency" until you've actually measured your code for performance.
Anyway, back to your question. You also said that you're loading the image twice, which you shouldn't have to do. You can just load the image once (make sure you do that in the preload() or setup() function, not the draw() function), and then draw that image twice:
var img;

function preload() {
  img = loadImage("image.jpg");
}

function setup() {
  image(img, 0, 0);
  image(img, 100, 100);
}
And since you can draw two images, you'd then just draw them next to each other. Here's an example using colored rectangles to show the approach more clearly:
var offsetX = 0;

function setup() {
  createCanvas(200, 200);
}

function draw() {
  background(0);

  fill(0, 255, 0);
  rect(offsetX, 0, width, height);

  fill(0, 0, 255);
  rect(offsetX + width, 0, width, height);

  offsetX--;
  if (offsetX <= -width) {
    offsetX = 0;
  }
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.14/p5.js"></script>
There are other ways to do it, like creating an image that contains the wrapping itself. But the general approach is pretty much the same.
If you're still stuck, please try to break your problem down into smaller pieces like I've done here. For example, notice that I created a simple sketch that deals with images, and another simple sketch that deals with moving rectangles. Then if you get stuck, please post a MCVE in a new question post and we'll go from there. Good luck.
Maybe it is a late answer, but you can make the environment 3D and then move the camera.
Docs: https://p5js.org/reference/#/p5/camera
Example:
let ang = 0;

function setup() {
  createCanvas(windowWidth - 200, windowHeight - 200, WEBGL);
  background(175);
  frameRate(30);
}

function draw() {
  background(175);
  // move the camera along the X axis when the mouse is moved
  let camX = map(mouseX, 0, width, 0, width);
  camera(camX, 0, (height / 2.0) / tan(PI * 30.0 / 180.0), camX, 0, 0, 0, 1, 0);
  normalMaterial();
  noStroke();
  ambientLight(251, 45, 43);
  box(100, 100, 50);
  ang += 0.3;
  rotateY(ang * 0.03);
}
Keep calm and Happy Coding!

Eliminating View Angles in Processing

I'm working on a Quad-copter and for testing purposes I have decided to use Processing to give me a visual example of what the micro-controller is processing and calculating (and possibly some control algorithm simulation later on). So I have made a simple model of a Quad-copter and was displaying it in the upper right of my screen. In the "rest position," I want a perfect side view of the Quad-copter, like this:
Instead, I get an image like this:
The second image was when I rendered the Quad in the upper right, and the first is when I rendered it dead center in the window.
I understand what is happening here, but I don't know how to fix it. The rendering system assumes my point of view is dead center in the screen, so anything up and to the right of that point is seen slightly from below and from the front. I poked around in the Reference tab on their website and nothing seems to do exactly what I want. I would think that there would be a solution to this, but I currently can't find one. Does anyone know how to fix this? Thanks.
It sounds like you might be looking for the ortho() function. You can read about it in the reference here.
Sets an orthographic projection and defines a parallel clipping volume. All objects with the same dimension appear the same size, regardless of whether they are near or far from the camera.
Consider this little example program without calling the ortho() function:
void setup() {
  size(500, 500, P3D);
}

void draw() {
  background(255);
  translate(300, 100);
  noFill();
  stroke(0);
  box(100, 100, 100);
}
Now let's add the call to the ortho() function:
void setup() {
  size(500, 500, P3D);
}

void draw() {
  background(255);
  translate(300, 100);
  ortho();
  noFill();
  stroke(0);
  box(100, 100, 100);
}
You no longer see the "depth" of the box. You can add parameters to the ortho() function to make it do exactly what you want, but those are the basics.
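For reference, the parameterized form lets you define the clipping volume yourself; the values below just reproduce the default and are only illustrative:
// ortho(left, right, bottom, top) -- centered on the origin by default
ortho(-width / 2, width / 2, -height / 2, height / 2);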
Alternatively, you could set up a separate view that you draw your model to the middle of, and then draw that view in the upper-right corner of your main view, as in the sketch below.
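Here's a rough sketch of that approach using createGraphics(); the window and view sizes are only placeholders:
PGraphics view;

void setup() {
  size(500, 500, P3D);
  view = createGraphics(200, 200, P3D);  // off-screen view for the model
}

void draw() {
  background(255);

  // draw the model dead center in the off-screen view
  view.beginDraw();
  view.background(255);
  view.translate(view.width / 2, view.height / 2);
  view.noFill();
  view.stroke(0);
  view.box(100);
  view.endDraw();

  // place the finished view in the upper-right corner of the main window
  image(view, width - view.width, 0);
}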

In Processing, how can I save part of the window as an image?

I am using Processing under Fedora 20, and I want to display an image of the extending tracks of objects moving across part of the screen, with each object displayed at its current position at the end of its track. To avoid having to record all the co-ordinates of the tracks, I use save("image.png"); to save the tracks so far, then draw the objects. In the next frame I use img = loadImage("image.png"); to restore the tracks made so far, without the objects, which would still be in their previous positions. I extend the tracks to their new positions, then use save("image.png"); to save the extended tracks, still without the objects, ready for the next loop round. Then I draw the objects in their new positions at the end of their extended tracks. In this way successive loops show the objects advancing, with their previous positions as tracks behind them.
This has worked well in tests where the image is the whole frame, but now I need to put that display in a corner of the whole frame and leave the rest unchanged. I expect that createImage(...) will be the answer, but I cannot find any details of how to do so.
A similar question asked here has this recommendation: "The PImage class contains a save() function that exports to file. The API should be your first stop for questions like this." Of course I've looked at that API, but I don't think it helps here, unless I have to create the image to save pixel by pixel, in which case I would expect it to slow things down a lot.
So my question is: in Processing, can I save and restore just part of the frame as an image, without affecting the rest of the frame?
I have continued to research this. It seems strange to me that I can find oodles of sketch references, tutorials, and examples that save and load the entire frame, but no easy way of saving and restoring just part of the frame as an image. I could probably do it using PImage, but that appears to require an awful lot of "image." prefixes in front of everything to be drawn there.
I have got round it with a kludge: I created a mask image (see this Processing reference) the size of the whole frame. The mask is defined as grey levels representing opacity, so that black (0) makes the masked image fully transparent and white (255) makes it fully opaque, completely concealing whatever is behind it, thus:
void setup() {
  size(1280, 800);
  background(0);            // whole frame is black: transparent in the mask
  fill(255);                // ..and..
  rect(680, 0, 600, 600);   // ..the smaller image area is white: opaque
  save("[path to sketch]/mask01.jpg");
}

void draw() {}
Then in my main code I use:
PImage img, mimg;
img = loadImage("image4.png");   // the image I want to see ..
// .. including the rest of the frame, which would obscure previous work
mimg = loadImage("mask01.jpg");  // load the mask
// apply the mask, allowing previous work to show through
img.mask(mimg);
// display the masked image
image(img, 0, 0);
I will accept this as an answer if no better suggestion is made.
void setup() {
  size(640, 480);
  background(0);
  noStroke();
  fill(255);
  rect(40, 150, 200, 100);
}

void draw() {
}

void mousePressed() {
  // get(x, y, w, h) copies just that region of the display window into a PImage
  PImage img = get(40, 150, 200, 100);
  img.save("test.jpg");
}
Old news, but here's an answer: you can use the pixel array and math.
Let's say that this is your viewport:
You can use loadPixels(); to fill the pixels[] array with the current content of the viewport, then fish the pixels you want from this array.
In the given example, here's a way to filter the unwanted pixels:
void exportImage() {
  // create the image at the desired size
  PImage img = createImage(600, 900, RGB);

  loadPixels();
  int index = 0;
  for (int i = 0; i < pixels.length; i++) {
    // filter out the unwanted first 200 pixels on every row;
    // remember that the pixels[] array is 1-dimensional, so some math is
    // unavoidable. For this simple example I use the modulo operator.
    if (i % width >= 200) { // "magic numbers" are bad, remember. This is only a simplification.
      img.pixels[index] = pixels[i];
      index++;
    }
  }
  img.updatePixels();
  img.save("test.png");
}
It may be too late to help you, but maybe someone else will need this. Either way, have fun!

In a Processing sketch, can I control the position of the display window?

I hope that this is the correct site for this question, apologies if not.
In a Processing sketch, can I control the initial position of the display window? The size() function that must be called first allows only the width and height to be specified.
This has become a likely problem now that I am starting to use the G4P (GUI for Processing) library, where a GWindow() takes position parameters, but they do not seem to be relative to the main display window but to the whole monitor screen, and I want the extra windows to appear within the main window. This will especially matter when I want to transfer use of the program from my desktop (monitor 1280 by 1024 pixels) to my laptop (1280 by 800 pixels).
Assuming you're talking about Java mode, then you have access to a variable called frame.
However, the frame variable does not seem to be available until after the setup() function finishes, so you have to access it after that. This works for me:
boolean frameMoved = false;

void setup() {
  size(500, 500);
}

void draw() {
  if (!frameMoved) {
    frame.setLocation(100, 100);
    frameMoved = true;
  }
  background(0);
}
There is probably a smarter way to do this, but it works for me.
The frame.setLocation() solution did not work for me in Processing 3.5.1 after using surface.setSize() to size the sketch from a variable.
The code below works without needing a throwaway boolean.
void setup() {
  int myVariableW = 600;
  surface.setSize(myVariableW, 400);
}

void draw() {
  background(0);
  if (frameCount == 1) surface.setLocation(150, 100);
  ellipse(300, 300, 200, 100);
}

LibGDX - How to smooth out actor drawable when scaling a Scene2d stage?

This is my set-up:
stage = new Stage(1280, 800, false);
button = new Button(drawableUp, drawableDown);
stage.addActor(button);
This gets rendered as follows:
@Override
public void render(float delta) {
  Gdx.gl.glClearColor(RED, GREEN, BLUE, ALPHA);
  Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
  stage.act(delta);
  stage.draw();
}
The issue is that when the stage is shown on 1280x800 the button looks like that:
If the stage is rescaled to e.g. 1280x736, the button drawable is scaled in the following way:
Is there a way to smooth out the edges somehow? Because right now it looks to me that the scaling is simply done by removing one pixel line in the upper half and one in the lower half of the picture.
Are you using filters anywhere in your code? If not, then try this:
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
Where texture is the texture object that you're using.
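For context, here's a rough sketch of where that call could fit when building the button's drawables from a texture; the file name button.png and the drawable setup are only assumptions about your code:
Texture buttonTexture = new Texture(Gdx.files.internal("button.png"));
// Linear filtering interpolates between texels when the stage is scaled,
// instead of dropping or duplicating whole pixel rows (nearest-neighbour).
buttonTexture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);

Drawable drawableUp = new TextureRegionDrawable(new TextureRegion(buttonTexture));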
