Why can I use size(1920, 1080) in setup(), but if I use
void setup() {
visualContext = new VisualContext(
new Area(0, 0, 1920, 1080),
new Area(158, 150, 1340, 950)
);
size(visualContext.getGlobalArea().getWidth(), visualContext.getGlobalArea().getHeight());
}
I get this error:
When not using the PDE, size() can only be used inside settings().
Remove the size() method from setup(), and add the following:
public void settings() {
size(1920, 1080);
}
I can't find any documentation on this topic.
Out of curiosity: can size() only be called with constants, and not with variables?
Use settings() if you want to invoke size() with variable parameters:
VisualContext visualContext = new VisualContext(
new Area(0, 0, 1920, 1080),
new Area(158, 150, 1340, 950)
);
void settings() {
size(visualContext.getGlobalArea().getWidth(), visualContext.getGlobalArea().getHeight());
}
https://processing.org/reference/settings_.html
I'm using the blue_thermal_printer package with Flutter on Android to try to create an image from a Canvas recording, but the image prints as a solid block instead of the actual image.
This class is responsible for creating the ByteData of the image:
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
class LabelPainter {
Future<ByteData> getImageByteData() async {
int _width = 60;
int _height = 60;
ui.PictureRecorder recorder = new ui.PictureRecorder();
Paint _paint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 4.0;
Canvas c = new Canvas(recorder);
c.drawRRect(RRect.fromLTRBAndCorners(20, 30, 40, 50), _paint);
_paint.color = Colors.red;
c.drawRect(Rect.fromLTWH(10, 10, 10, 10), _paint);
_paint.color = Colors.blue;
c.drawRect(
Rect.fromCenter(center: Offset(50, 50), height: 50, width: 50), _paint);
_paint.color = Colors.black;
c.drawRect(
Rect.fromPoints(
Offset(0, 0), Offset(_width.toDouble(), _height.toDouble())),
_paint);
// c.drawPaint(Paint()); // etc
ui.Picture p = recorder.endRecording();
ui.Image _uiImg = await p.toImage(
_width, _height); //.toByteData(format: ImageByteFormat.png);
ByteData _byteData =
await _uiImg.toByteData(format: ui.ImageByteFormat.png);
return _byteData;
}
}
This is part of the widget's State class that gets the ByteData and then saves the image to the app's documents directory:
// Imports used by this snippet (File, ByteData, getApplicationDocumentsDirectory).
import 'dart:io';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
class _PrinterState extends State<Printer> {
String pathImage;
LabelPainter _labelPainter = new LabelPainter();
@override
void initState() {
super.initState();
initSavetoPath();
}
initSavetoPath() async {
//read and write
//image max 300px X 300px
final filename = 'yourlogo.png';
// var bytes = await rootBundle.load("images/logo.png");
ByteData bytes = await _labelPainter.getImageByteData();
String dir = (await getApplicationDocumentsDirectory()).path;
writeToFile(bytes, '$dir/$filename');
setState(() {
pathImage = '$dir/$filename';
});
}
@override
Widget build(BuildContext context) {
return Container();
}
//write to app path
Future<void> writeToFile(ByteData data, String path) {
final buffer = data.buffer;
return new File(path).writeAsBytes(
buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
}
}
This is the method I call when I want to print the image:
void _tesPrint() async {
//SIZE
// 0- normal size text
// 1- only bold text
// 2- bold with medium text
// 3- bold with large text
//ALIGN
// 0- ESC_ALIGN_LEFT
// 1- ESC_ALIGN_CENTER
// 2- ESC_ALIGN_RIGHT
bluetooth.isConnected.then((isConnected) async {
if (isConnected) {
// bluetooth.printImageBytes(await _labelPainter.getImageBytesUint());
bluetooth.printImage(pathImage);
// bluetooth.printNewLine();
// bluetooth.printCustom("Terimakasih", 2, 1);
// bluetooth.printNewLine();
// bluetooth.printQRcode("Insert Your Own Text to Generate", 50, 50, 0);
// bluetooth.paperCut();
}
});
}
I already had this problem. The only difference is that I generate the image from a widget, like a screenshot.
So the same problem occurred when sharing and printing: the image was totally black.
The solution was to provide a white background; with that, the problem was solved and the image content can be seen.
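For example, applied to the LabelPainter from the question, the idea would be to paint a white rectangle over the whole canvas before drawing the shapes (a minimal sketch; everything else in getImageByteData() stays as in the question):
// Inside getImageByteData(), right after creating the Canvas:
Canvas c = new Canvas(recorder);
// Fill the whole label with white first so the PNG has an opaque background;
// transparent pixels are often rendered as a solid block by thermal printers.
c.drawRect(
Rect.fromLTWH(0, 0, _width.toDouble(), _height.toDouble()),
Paint()..color = Colors.white);
// ...then draw the rest of the shapes with _paint as before.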
I'm looking to test Flutter memory/CPU usage, but I'm totally lost on which widget to pick for:
a widget that will contain custom canvas drawing (full screen)
a widget that must update itself 30 times per second (repainting from scratch each time)
In general, we have our own engine that revolves around UIView/SurfaceView. I want to write the same thing in Dart: connect to the server, get the same data, and draw the same picture. But I don't understand which widget to use. As far as I can tell, I'd pick a StatefulWidget and change its state 30 times per second with a timer, but that doesn't sound right to me.
You can use Ticker, which is the same mechanism that animations (e.g. AnimationController) use to update every frame.
import 'dart:math' as math;
import 'package:flutter/material.dart';
import 'package:flutter/scheduler.dart';
void main() => runApp(const CanvasWidget());
class CanvasWidget extends StatefulWidget {
const CanvasWidget({super.key});
@override
State<CanvasWidget> createState() => _CanvasWidgetState();
}
class _CanvasWidgetState extends State<CanvasWidget> with SingleTickerProviderStateMixin {
final drawState = DrawState();
Ticker? ticker;
@override
void initState() {
super.initState();
ticker = createTicker(tick);
ticker!.start();
}
@override
void dispose() {
ticker?.stop();
ticker?.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Container(
width: double.infinity,
height: double.infinity,
color: Colors.white,
child: CustomPaint(
painter: MyPainter(drawState),
),
);
}
void tick(Duration elapsed) {
var t = elapsed.inMicroseconds * 1e-6;
double radius = 100;
drawState.x = radius * math.sin(t);
drawState.y = radius * math.cos(t);
setState(() {});
}
}
class DrawState {
double x = 0, y = 0;
}
class MyPainter extends CustomPainter {
final DrawState state;
MyPainter(this.state);
@override
void paint(Canvas canvas, Size size) {
var paint = Paint()..color = Colors.red;
canvas.drawCircle(Offset(state.x + size.width * 0.5, state.y + size.height * 0.5), 20, paint);
}
@override
bool shouldRepaint(covariant CustomPainter oldDelegate) {
return true;
}
}
I'm using Processing to make something, and my keyPressed() is not working. It's supposed to be triggered when any key is pressed, but the function does not appear to be called. Code below:
int playerno=0; //determines player
boolean ready=true;
void setup() {
size(700, 700);
background(#FFFFFF);
fill(#000000);
textSize(50);
text("Press Any Key To Start", 350, 350);
}
void keyPressed() {
if (ready) {
fill(#FFFFFF);
rect(350, 350, 200, 100);
fill(#000000);
textSize(50);
text("Game Ready", 350, 350);
boolean ready=false;
}
}
This won't work without a draw() function. Also, you are declaring a new local variable ready inside keyPressed(); that is a mistake, because it shadows the global ready. Try moving your drawing code from keyPressed() into draw(), like this:
void draw() {
if (ready == false) {
background(#FFFFFF); //This is needed for redrawing whole scene
fill(#FFFFFF);
rect(350, 350, 200, 100);
fill(#000000);
textSize(50);
text("Game Ready", 350, 350);
}
}
void keyPressed() {
if (ready) {
ready=false;
}
}
How do I create more than one window from a single sketch in Processing?
Actually, I want to detect and track a particular color (through the webcam) in one window and display the detected coordinates as a point in another window. So far I'm able to display the points in the same window where the detection happens, but I want to split this into two different windows.
You need to create a new frame and a new PApplet... here's a sample sketch:
import javax.swing.*;
SecondApplet s;
void setup() {
size(640, 480);
PFrame f = new PFrame(width, height);
frame.setTitle("first window");
f.setTitle("second window");
fill(0);
}
void draw() {
background(255);
ellipse(mouseX, mouseY, 10, 10);
s.setGhostCursor(mouseX, mouseY);
}
public class PFrame extends JFrame {
public PFrame(int width, int height) {
setBounds(100, 100, width, height);
s = new SecondApplet();
add(s);
s.init();
show();
}
}
public class SecondApplet extends PApplet {
int ghostX, ghostY;
public void setup() {
background(0);
noStroke();
}
public void draw() {
background(50);
fill(255);
ellipse(mouseX, mouseY, 10, 10);
fill(0);
ellipse(ghostX, ghostY, 10, 10);
}
public void setGhostCursor(int ghostX, int ghostY) {
this.ghostX = ghostX;
this.ghostY = ghostY;
}
}
One option might be to create a sketch twice the width of your original window and just offset the detected coordinates by the camera's width.
Here's a very rough code snippet (assuming blob is a detected color blob):
int camWidth = 320;
int camHeight = 240;
Capture cam;
void setup() {
size(camWidth * 2, camHeight);
//init cam/opencv/etc.
}
void draw() {
//update cam and get data
image(cam, 0, 0);
//draw the detection in the right half, offset by the camera width
rect(camWidth + blob.x, blob.y, blob.width, blob.height);
}
To be honest, it might be easier to overlay the tracked information. For example, if you're doing color tracking, just display the outlines of the bounding box of the tracked area.
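A minimal sketch of that overlay idea, reusing the hypothetical cam and blob from the snippet above (one window, with the detection drawn on top of the video):
void draw() {
//update cam and get data
image(cam, 0, 0);
//outline the tracked area on top of the camera image instead of using a second window
noFill();
stroke(0, 255, 0);
rect(blob.x, blob.y, blob.width, blob.height);
}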
If you really really want to display another window, you can use a JPanel.
Have a look at this answer for a running code example.
I would recommend using G4P, a GUI library for Processing that has some functionality built in for handling multiple windows. I have used this before with a webcam and it worked well. It comes with a GWindow object that will spawn a window easily. There is a short tutorial on the website that explains the basics.
I've included some old code that I have that will show you the basic idea. What is happening in the code is that I make two GWindows and send them each a PImage to display: one gets a webcam image and the other an effected image. The way you do this is to augment the GWinData object to also include the data you would like to pass to the windows. Instead of making one specific object for each window I just made one object with the two PImages in it. Each GWindow gets its own draw loop (at the bottom of the example) where it loads the PImage from the overridden GWinData object and displays it. In the main draw loop I read the webcam and then process it to create the two images and then store them into the GWinData object.
Hopefully that gives you enough to get started.
import guicomponents.*;
import processing.video.*;
private GWindow window;
private GWindow window2;
Capture video;
PImage sorted;
PImage imgdif; // image with pixel thresholding
MyWinData data;
void setup(){
size(640, 480,P2D); // Change size to 320 x 240 if too slow at 640 x 480
// Uses the default video input, see the reference if this causes an error
video = new Capture(this, 640, 480, 24);
numPixels = video.width * video.height;
data = new MyWinData();
window = new GWindow(this, "TEST", 0,0, 640,480, true, P2D);
window.isAlwaysOnTop();
window.addData(data);
window.addDrawHandler(this, "Window1draw");
window2 = new GWindow(this, "TEST", 640,0 , 640,480, true, P2D);
window2.isAlwaysOnTop();
window2.addData(data);
window2.addDrawHandler(this, "Window2draw");
loadColors("64rev.csv");
colorlength = mycolors.length;
distances = new float[colorlength];
noCursor();
}
void draw()
{
if (video.available())
{
background(0);
video.read();
image(video,0,0);
loadPixels();
imgdif = get(); // clones the last image drawn to the screen v1.1
sorted = get();
/// Removed a lot of code here that did the processing
// hand data to our data class to pass to other windows
data.sortedimage = sorted;
data.difimage = imgdif;
}
}
class MyWinData extends GWinData {
public PImage sortedimage;
public PImage difimage;
MyWinData(){
sortedimage = createImage(640,480,RGB);
difimage = createImage(640,480,RGB);
}
}
public void Window1draw(GWinApplet a, GWinData d){
MyWinData data = (MyWinData) d;
a.image(data.sortedimage, 0,0);
}
public void Window2draw(GWinApplet a, GWinData d){
MyWinData data = (MyWinData) d;
a.image(data.difimage,0,0);
}
I'm a fairly experienced programmer, but new to GUI programming. I'm trying to port a plotting library I wrote for DFL to gtkD, and I can't get drawings to show up. The following code produces a blank window for me. Can someone please tell me what's wrong with it, and/or post minimal example code for getting a few lines onto a DrawingArea and displaying the results in a MainWindow?
import gtk.DrawingArea, gtk.Main, gtk.MainWindow, gdk.GC, gdk.Drawable,
gdk.Color;
void main(string[] args) {
Main.init(args);
auto win = new MainWindow("Hello, world");
win.setDefaultSize(800, 600);
auto drawingArea = new DrawingArea(800, 600);
win.add(drawingArea);
drawingArea.realize();
auto drawable = drawingArea.getWindow();
auto gc = new GC(drawable);
gc.setForeground(new Color(255, 0, 0));
gc.setBackground(new Color(255, 255, 255));
drawable.drawLine(gc, 0, 0, 100, 100);
drawingArea.showAll();
drawingArea.queueDraw();
win.showAll();
Main.run();
}
I have no experience whatsoever in D, but lots in GTK, so with the help of the gtkD tutorial I managed to hack up a minimal example:
import gtk.DrawingArea, gtk.Main, gtk.MainWindow, gdk.GC, gdk.Drawable,
gdk.Color, gtk.Widget;
class DrawingTest : MainWindow
{
this()
{
super("Hello, world");
setDefaultSize(800, 600);
auto drawingArea = new DrawingArea(800, 600);
add(drawingArea);
drawingArea.addOnExpose(&drawStuff);
showAll();
}
bool drawStuff(GdkEventExpose *event, Widget self)
{
auto drawable = self.getWindow();
auto gc = new GC(drawable);
gc.setForeground(new Color(cast(ubyte)255, cast(ubyte)0, cast(ubyte)0));
gc.setBackground(new Color(cast(ubyte)255, cast(ubyte)255, cast(ubyte)255));
drawable.drawLine(gc, 0, 0, 100, 100);
return true;
}
}
void main(string[] args) {
Main.init(args);
new DrawingTest();
Main.run();
}
In GTK, a DrawingArea is actually just a blank widget for you to paint on, and painting on widgets must always be done in the expose-event handler. (Although I understand this will change in GTK 3!)
I understand you can't connect functions as signal callbacks, only delegates, so that's the reason for the DrawingTest class.