I've written a Java program that uses JOGL.
Everything worked fine until I wanted to render an image (JPG, GIF, PNG) as a texture.
I found the following code snippet:
import javax.media.opengl.*;
...
GL2 gl = drawable.getGL().getGL2();
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexEnvf(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb);
int left = 100;
int top = 100;
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glBindTexture(GL.GL_TEXTURE_2D, 13);
gl.glBegin(GL.GL_POLYGON);
...
The problem is that Eclipse can't resolve GL.GL_CLAMP, GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE and GL.GL_POLYGON.
As an alternative to GL.GL_CLAMP there is only GL.GL_CLAMP_TO_EDGE.
I can't even find GL.GL_QUADS, just GL.GL_TRIANGLES.
What is wrong? Did I miss an import?
Other GL constants can be resolved.
Regards,
Peter
The proper way in JOGL 2 is GL2.GL_CLAMP (not GL.GL_CLAMP); see the API docs.
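As far as I can tell, the other names Eclipse can't resolve live on the GL2 interface as well, since they belong to the fixed-function/compatibility profile rather than the base GL interface. Here is a sketch (untested) of how the snippet above would look with the GL2-level constants:
import javax.media.opengl.*;   // older JOGL 2 package name; newer releases use com.jogamp.opengl
...
GL2 gl = drawable.getGL().getGL2();

// Fixed-function constants come from GL2 (compatibility profile), not from the base GL interface.
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bb);

gl.glEnable(GL.GL_TEXTURE_2D);
gl.glBindTexture(GL.GL_TEXTURE_2D, 13);
gl.glBegin(GL2.GL_POLYGON);   // GL2.GL_QUADS is also available on the GL2 interface
...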
I am trying to set up imgui (imgui-rs) for a custom renderer I am porting to Rust.
I am stuck on two fronts: getting the peripheral callbacks, and the rendering.
In C++ the setup was fairly simple:
ImGuiContext* InitImgui(ModuleStorage::ModuleStorage& module, NECore::Gallery& gallery)
{
    ImGuiContext* imgui_context = ImGui::CreateContext();
    ImGuiIO& io = ImGui::GetIO();

    // Build the font atlas as RGBA32 and store it in the renderer's image gallery.
    unsigned char* pixels;
    int width, height;
    io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height);
    CpuImage font_image(pixels, width, height, 4);
    uint font_id = gallery.StoreImage<CpuImage::GetImageData>(
        font_image, "__ImguiFont", NECore::ImageFormat::R8G8B8A8_UNORM);
    io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);

    // Hook up the GLFW platform backend and load the UI shaders.
    ImGui_ImplGlfw_InitForVulkan(module.GetWindow().GetGLFWWindow(), true);
    imgui_shader = module.AddShader(
        {"./CommonShaders/imgui.vert",
         "./CommonShaders/imgui.frag"});

    return imgui_context;
}
30 lines of code and we have the initialization done.
There are some issues in Rust, though. io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height) does not exist; I assume the equivalent is let font = fonts.build_rgba32_texture();.
Assuming that's the case, the next issue is setting the texture ID, which I cannot find anywhere in the docs or the source code:
io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);
That function does not exist in the Rust bindings, and ImGui_ImplGlfw_InitForVulkan is nowhere to be found either.
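For reference, this is roughly what I am assuming the font-atlas part should look like in imgui-rs; upload_rgba32 stands in for my own gallery/texture upload, and I have not verified the exact field and method names:
use imgui::{Context, TextureId};

// Placeholder for the engine-side upload; returns whatever handle the gallery hands back.
fn upload_rgba32(_data: &[u8], _width: u32, _height: u32) -> usize {
    0
}

fn init_imgui() -> Context {
    let mut ctx = Context::create();

    // Assumed equivalent of io.Fonts->GetTexDataAsRGBA32(...): build the atlas as RGBA32.
    let font_id = {
        let mut fonts = ctx.fonts();
        let tex = fonts.build_rgba32_texture();
        upload_rgba32(tex.data, tex.width, tex.height)
    };

    // Assumed equivalent of io.Fonts->SetTexID(...): the atlas exposes a public tex_id field.
    ctx.fonts().tex_id = TextureId::new(font_id);

    ctx
}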
The examples (https://github.com/imgui-rs/imgui-rs/blob/main/imgui-examples/examples/support/mod.rs) seem to use pre-existing renderers and do not do a good job of showing how to integrate the tool into an existing renderer other than the ones the author chose. That is baffling, since one of the biggest selling points of imgui is how simple it is to integrate into pre-existing codebases.
I am at a loss: how do you bootstrap the library in Rust?
I am new to Halide and have written a simple program to compute max(127, pix(x, y)) for every pixel in an image.
Though the code runs fine on the CPU, it gives me wrong outputs when I set Target::CUDA. I'm not able to find the issue.
The following is part of my code. Let me know if there is a mistake in it, or whether I have to rebuild Halide with support that enables CUDA.
Halide::Var x, y;
Halide::Buffer<uint8_t> inputImageBuf(inpImg, imgSizes);
Halide::Func reluOp("ReLU Operation");
reluOp(x,y) = Halide::max(127, inputImageBuf(x, y));
int numTiles = 4;
Halide::Var threads_x, threads_y, blocks_x, blocks_y;
Halide::Target targetCUDA = Halide::get_host_target();
targetCUDA.set_feature(Halide::Target::CUDA);
targetCUDA.set_feature(Halide::Target::Debug);
reluOp.gpu_tile(x, y, blocks_x, blocks_y, threads_x, threads_y, numTiles, numTiles, Halide::TailStrategy::Auto, Halide::DeviceAPI::CUDA);
// reluOp.compile_jit(targetCUDA);
reluOp.print_loop_nest();
Halide::Buffer<uint8_t> result = reluOp.realize(cols, rows, targetCUDA);
result.copy_to_host();
One thing to try is adding an inputImageBuf.set_host_dirty(). If that helps, I would consider it a bug in Halide.
You can also scroll through the debug output and check whether the expected number of copies to and from the host are happening.
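Concretely, that would look something like this around the realize() call from the code above, keeping the same names (a sketch, untested):
// The input was filled in on the CPU, so mark it dirty on the host; with the CUDA
// target Halide should then copy it to the device before running the kernel.
inputImageBuf.set_host_dirty();

Halide::Buffer<uint8_t> result = reluOp.realize(cols, rows, targetCUDA);

// Copy the GPU results back before reading them on the CPU side.
result.copy_to_host();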
Pango markup supports some text-only styling. As far as I can see, this does not extend to embedding images.
Looking around, I cannot find much in the way of an existing implementation, but I haven't done Pango+Cairo work before, so I might be missing the obvious community for it.
As far as I can tell, a reasonable approach would be to analyse a string, pull out any tags, create Cairo images, and then modify the Pango layout around them accordingly.
It also seems like something someone might have done before.
I'm specifically looking for answers to these questions:
Does Pango+Cairo already solve this and I have just misread the docs?
Has something like this been done before, and where is a reference?
Is this a reasonable approach, or should I try something else, and what?
(Also note that I am using Ruby, so that may affect my options.)
I've been through the source of the markup parser, and it does not allow for "shape" attributes (the way Pango almost incorporates graphics), but it is possible to do it "by hand".
Since there is absolutely no example code on the Web, here's Pango/Cairo/Images 101.
For a simple demo, I created an 800x400 window, added a GtkDrawingArea and connected up the "draw" signal. Before entering the main program loop, I initialized it with the following code:
PangoLayout *Pango;

void init_drawingArea (GtkWidget *pWidget)
{
    cairo_surface_t *pImg = cairo_image_surface_create_from_png ("linux.png");
    PangoRectangle r = {0, 0, PANGO_SCALE * cairo_image_surface_get_width (pImg),
                              PANGO_SCALE * cairo_image_surface_get_height(pImg)};
    PangoContext *ctxt = gtk_widget_get_pango_context (pWidget);
    PangoAttrList *attList = pango_attr_list_new();
    PangoAttribute *attr;

    Pango = pango_layout_new (ctxt);
    pango_cairo_context_set_shape_renderer (ctxt, render, NULL, NULL);
    pango_layout_set_text (Pango, pszLorem, -1);
    pango_layout_set_width(Pango, PANGO_SCALE * 800);

    attr = pango_attr_shape_new_with_data(&r, &r, pImg, NULL, NULL);
    attr->start_index = 0; attr->end_index = 1;
    pango_attr_list_insert (attList, attr);

    attr = pango_attr_shape_new_with_data(&r, &r, pImg, NULL, NULL);
    attr->start_index = 152; attr->end_index = 153;
    pango_attr_list_insert (attList, attr);

    pango_layout_set_attributes (Pango, attList);
}
The context's shape renderer is set to render(), and a PangoLayout is created and initialized. The code then creates two shape attributes, sets their user data to a Cairo surface populated from a PNG file, and applies the attributes to characters 0 and 152 of the text.
The "draw" signal processing is straightforward.
gboolean onDraw (GtkWidget *pWidget, cairo_t *cr, gpointer user_data)
{
    pango_cairo_show_layout (cr, Pango);
    return 1;
}
and the render () PangoCairoShapeRenderFunc function is called as needed:
void render (cairo_t *cr, PangoAttrShape *pShape, gboolean do_path, gpointer data)
{
    cairo_surface_t *img = (cairo_surface_t *)pShape->data;
    double dx, dy;

    cairo_get_current_point(cr, &dx, &dy);
    cairo_set_source_surface(cr, img, dx, dy);
    cairo_rectangle (cr, dx, dy, pShape->ink_rect.width/PANGO_SCALE,
                                 pShape->ink_rect.height/PANGO_SCALE);
    cairo_fill(cr);
}
Taking the current point from Cairo, it draws a rectangle and fills it with the image.
And that's pretty much all it does. Images were added as an afterthought, and it shows: they are subject to the same rules as any other glyph, so they are limited to the equivalent of CSS's display: inline.
I've put the code up at http://immortalsofar.com/PangoDemo/ if anyone wants to play with it. Me, I arrived here trying to get around GtkTextBuffer's limitations. Guess I'll just have to go deeper.
I want to save a view's snapshot to a file, but I get an error.
My code is as follows:
View decor = ***; //
int width = decor.getWidth();
int height = decor.getHeight();
Bitmap bitmap = Bitmap.createBitmap(width, height, Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
decor.draw(canvas);
int bytes = bitmap.getByteCount() + 8;
ByteBuffer buffer = ByteBuffer.allocate(bytes);
buffer.putInt(width);
buffer.putInt(height);
bitmap.copyPixelsToBuffer(buffer);
byte[] array = buffer.array();
But bitmap.copyPixelsToBuffer(buffer) crashes.
The error is like this:
12-09 08:36:43.107: E/libEGL(14642): call to OpenGL ES API with no current context (logged once per thread)
This error only happens on Android 5.0; are there any changes in the new platform? I know Android 5.0 uses a ThreadedRenderer to render surfaces. How can I handle this problem? Thanks very much!
I am facing the same problem! It seems that in pre-Lollipop versions the context was leaking and being used by default (I assume), but on Lollipop you strictly need to create or pass the context explicitly.
Hope this helps!
https://stackoverflow.com/a/27092070
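If the end goal is just getting the snapshot into a file, another thing worth trying is skipping the raw copyPixelsToBuffer() path and writing the bitmap out with Bitmap.compress() instead; whether that sidesteps the EGL error on 5.0 would need testing. A minimal sketch (the helper class and file path are just examples):
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public final class SnapshotUtil {

    /** Draws the view into a software bitmap and saves it as a PNG file. */
    public static void saveSnapshot(View view, File file) throws IOException {
        Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
                Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);   // software canvas, no GL context involved
        view.draw(canvas);

        try (FileOutputStream out = new FileOutputStream(file)) {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        }
    }
}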
I've printed a few short QR codes (like "HAEB16653") on a page using this algorithm:
private void CreateQRCodeFile(int size, string filename, string codecontent)
{
    QRCodeWriter writer = new QRCodeWriter();
    com.google.zxing.common.ByteMatrix matrix;
    matrix = writer.encode(codecontent, BarcodeFormat.QR_CODE, size, size, null);

    Bitmap img = new Bitmap(size, size);
    Color Color = Color.FromArgb(0, 0, 0);

    for (int y = 0; y < matrix.Height; ++y)
    {
        for (int x = 0; x < matrix.Width; ++x)
        {
            Color pixelColor = img.GetPixel(x, y);

            //Find the colour of the dot
            if (matrix.get_Renamed(x, y) == -1)
            {
                img.SetPixel(x, y, Color.White);
            }
            else
            {
                img.SetPixel(x, y, Color.Black);
            }
        }
    }

    img.Save(filename, ImageFormat.Png);
}
The printed barcodes scan quickly and reliably with the integrated WP7 Bing scan & search.
When I try to scan the very same printed QR codes with Stéphanie Hertrich's sample app, scanning is very slow; most do not scan at all, or are only recognized when I slowly rotate the camera around.
How do I get my scanning to be as reliable as the integrated barcode recognition? I only need to scan QR codes, so I disabled all the other formats, but it still does not work most of the time.
Is there perhaps some other barcode scanning library that works better?
The Silverlight port in Stéphanie Hertrich's sample app is very old. It seems the project on CodePlex hasn't been maintained for more than a year. You should try one of the newer, maintained ports like ZXing.Net.
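For illustration, decoding with ZXing.Net's BarcodeReader, restricted to QR codes, looks roughly like this (a sketch against the desktop System.Drawing build; on Windows Phone you would pass a WriteableBitmap instead of a Bitmap):
using System.Collections.Generic;
using System.Drawing;
using ZXing;

public static class QrDecoder
{
    // Decodes a single QR code from a bitmap; returns null if nothing was recognized.
    public static string DecodeQr(Bitmap image)
    {
        var reader = new BarcodeReader();

        // Only look for QR codes, as in the question; TryHarder is left at its default (off).
        reader.Options.PossibleFormats = new List<BarcodeFormat> { BarcodeFormat.QR_CODE };

        Result result = reader.Decode(image);
        return result != null ? result.Text : null;
    }
}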
zxing works very well -- just try it on Android. I would not be surprised if it is what powers the Bing search.
The problems are likely in the port. Any non-Java port is at best old and incomplete. I also can't speak to the efficiency of the approach used in the sample you are looking at. For example, is it really binarizing the image from the APIs correctly? Also make sure it is not using TRY_HARDER mode.
There is no objective answer to this question...
My personal opinion is that the ZXing lib you tried (Stéphanie Hertrich's sample app) is the best you can get. As far as I know, it is used on other platforms too (e.g. Android).
When I tested the lib a few months ago, I had the impression it worked very reliably and quickly, but your circumstances (lighting, camera, angle, etc.) may have been different.