No matter what I try, I can't get my text to load into a texture in SDL 2.0 using SDL_ttf.
Here is my textToTexture code:
void sdlapp::textToTexture(string text, SDL_Color textColor, SDL_Texture* textTexture)
{
    // free the previous texture in textTexture if one exists
    if (textTexture != nullptr || NULL)
    {
        SDL_DestroyTexture(textTexture);
    }

    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    textTexture = SDL_CreateTextureFromSurface(m_renderer, textSurface);

    // free the surface
    SDL_FreeSurface(textSurface);
}
And here is where I load the font and the text:
bool sdlapp::loadMedia()
{
    bool success = true;

    // load media here

    // load font
    m_font = TTF_OpenFont("Fonts/MotorwerkOblique.ttf", 28);

    // load text
    SDL_Color textColor = { 0x255, 0x255, 0x235 };
    textToTexture("im a texture thing", textColor, m_font_texture);

    return success;
}
And this is the code I am using to render it:
void sdlapp::render()
{
    // clear the screen
    SDL_RenderClear(m_renderer);

    // do render stuff here
    SDL_Rect rect = { 32, 64, 128, 32 };
    SDL_RenderCopy(m_renderer, m_font_texture, NULL, NULL);

    // update the screen to the current render
    SDL_RenderPresent(m_renderer);
}
Does anyone know what I am doing wrong?
Thanks in advance, JustinWeq.
textToTexture renders the text with SDL_ttf, and the address of the resulting SDL_Texture is then assigned to a variable called textTexture. The problem is that textTexture is a local variable pointing to the same address as m_font_texture. They're not the same variable; they're two different variables pointing to the same place, so you're never changing the caller's variable.
For clarification on pointers, I'd recommend reading question 4.8 of the C-FAQ.
I'd make textToTexture return the new texture's address, and not have it free resources it doesn't manage (m_font_texture belongs to sdlapp, so it should be managed by sdlapp).
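For illustration, here is a minimal sketch of that approach (class and member names are taken from the snippets above; error handling is kept minimal):

// Renders text to a new texture and returns it; the caller owns the result.
SDL_Texture* sdlapp::textToTexture(const string& text, SDL_Color textColor)
{
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    if (textSurface == nullptr)
    {
        return nullptr;
    }

    SDL_Texture* texture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    SDL_FreeSurface(textSurface);
    return texture;
}

The caller (loadMedia in this case) then updates its own member, destroying the old texture first:

    if (m_font_texture != nullptr)
    {
        SDL_DestroyTexture(m_font_texture);
    }
    m_font_texture = textToTexture("im a texture thing", textColor);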
I'm trying to create an EGLSurface in a Windows UWP app. The creation code is in a xaml.cpp file, as shown below.
When I try creating the surface using the optional property EGLRenderResolutionScaleProperty, it fails with an EGL_BAD_ALLOC error. Two alternate approaches work, but I need to try to use the resolution scale option for my app.
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // NOTE: in practice, I only have one of the three following implementations in the code;
        // all are included together here for ease of comparison.

        // 1. This works
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, nullptr);

        // 2. and this works (here I hardwired the size to twice the size of the window
        //    I happen to be using, because Windows display settings is set at 200%)
        Size size;
        size.Height = 1448; // hardwired value for testing, in this case window height is 724 pix
        size.Width = 1908;  // hardwired value for testing, in this case window width is 954 pix
        mRenderSurface = CreateSurface(mSwapChainPanel, &size, nullptr);

        // 3. but this fails (and this is the one I want to use)
        float resolutionScale = 1.0;
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
    }
}
EGLSurface MyClass::CreateSurface(SwapChainPanel^ panel, const Size* renderSurfaceSize, const float* resolutionScale)
{
    if (!panel)
    {
        throw Exception::CreateException(E_INVALIDARG, L"SwapChainPanel parameter is invalid");
    }

    if (renderSurfaceSize != nullptr && resolutionScale != nullptr)
    {
        throw Exception::CreateException(E_INVALIDARG, L"A size and a scale can't both be specified");
    }

    EGL _egl = this->HelperClass->GetEGL();
    EGLSurface surface = EGL_NO_SURFACE;

    const EGLint surfaceAttributes[] =
    {
        EGL_ANGLE_SURFACE_RENDER_TO_BACK_BUFFER, EGL_TRUE,
        EGL_NONE
    };

    // Create a PropertySet and initialize with the EGLNativeWindowType.
    PropertySet^ surfaceCreationProperties = ref new PropertySet();
    surfaceCreationProperties->Insert(ref new String(EGLNativeWindowTypeProperty), panel);

    // If a render surface size is specified, add it to the surface creation properties
    if (renderSurfaceSize != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderSurfaceSizeProperty), PropertyValue::CreateSize(*renderSurfaceSize));
    }

    // If a resolution scale is specified, add it to the surface creation properties
    if (resolutionScale != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderResolutionScaleProperty), PropertyValue::CreateSingle(*resolutionScale));
    }

    surface = eglCreateWindowSurface(_egl._display, _egl._config, reinterpret_cast<IInspectable*>(surfaceCreationProperties), surfaceAttributes);
    EGLint err = eglGetError();

    if (surface == EGL_NO_SURFACE)
    {
        throw Exception::CreateException(E_FAIL, L"Failed to create EGL surface");
    }

    return surface;
}
where
const wchar_t EGLNativeWindowTypeProperty[] = L"EGLNativeWindowTypeProperty";
const wchar_t EGLRenderSurfaceSizeProperty[] = L"EGLRenderSurfaceSizeProperty";
const wchar_t EGLRenderResolutionScaleProperty[] = L"EGLRenderResolutionScaleProperty";
I have tried changing the cast of the EGLNativeWindowType argument (as in How to create EGLSurface using C++/WinRT and ANGLE?) - that only creates other problems. As indicated, this code does work to create a surface in the basic case, just not when using the EGLRenderResolutionScaleProperty.
My guess is that something about the way I'm supplying that property is failing, because it fails on what should be reasonable values (e.g., 1.0).
Solved this by first checking that the SwapChainPanel size is not zero:
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // only create the surface once the panel has a non-zero size
        if (0 != mSwapChainPanel->ActualHeight && 0 != mSwapChainPanel->ActualWidth)
        {
            float resolutionScale = 1.0f;
            mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
        }
    }
}
(The code checks elsewhere whether the render surface has been created, and will call this again if needed.)
Interestingly, the original code that used nullptr for both size and resolution arguments (case 1 in original snippet above) didn't need that check.
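One way to arrange that retry (not from the original post, just a sketch assuming a C++/CX XAML code-behind with the same mSwapChainPanel and mRenderSurface members) is to hook the panel's SizeChanged event and attempt the creation again once the panel has been laid out:

// Hypothetical wiring, e.g. in the constructor after mSwapChainPanel is assigned:
mSwapChainPanel->SizeChanged +=
    ref new Windows::UI::Xaml::SizeChangedEventHandler(
        [this](Platform::Object^ sender, Windows::UI::Xaml::SizeChangedEventArgs^ e)
        {
            // retry once the panel has a real size and no surface exists yet
            if (mRenderSurface == EGL_NO_SURFACE)
            {
                CreateRenderSurface();
            }
        });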
Can we copy (Ctrl+C) and paste (Ctrl+V) an image from the user's system (desktop or any folder) onto a canvas using Fabric.js? I have seen copy and paste working inside the canvas, and I found this example while searching Google, but I didn't find any relevant example for copying from the desktop to the canvas. Here is the snippet for copy and paste:
function onKeyDownHandler(event) {
    //event.preventDefault();
    var key;
    if (window.event) {
        key = window.event.keyCode;
    }
    else {
        key = event.keyCode;
    }

    switch (key) {
        //////////////
        // Shortcuts
        //////////////

        // Copy (Ctrl+C)
        case 67: // Ctrl+C
            if (ableToShortcut()) {
                if (event.ctrlKey) {
                    event.preventDefault();
                    copy();
                }
            }
            break;

        // Paste (Ctrl+V)
        case 86: // Ctrl+V
            if (ableToShortcut()) {
                if (event.ctrlKey) {
                    event.preventDefault();
                    paste();
                }
            }
            break;

        default:
            // TODO
            break;
    }
}

function ableToShortcut() {
    /*
    TODO check all cases for this
    if($("textarea").is(":focus")){
        return false;
    }
    if($(":text").is(":focus")){
        return false;
    }
    */
    return true;
}

function copy() {
    if (canvas.getActiveGroup()) {
        for (var i in canvas.getActiveGroup().objects) {
            var object = fabric.util.object.clone(canvas.getActiveGroup().objects[i]);
            object.set("top", object.top + 5);
            object.set("left", object.left + 5);
            copiedObjects[i] = object;
        }
    }
    else if (canvas.getActiveObject()) {
        var object = fabric.util.object.clone(canvas.getActiveObject());
        object.set("top", object.top + 5);
        object.set("left", object.left + 5);
        copiedObject = object;
        copiedObjects = new Array();
    }
}

function paste() {
    if (copiedObjects.length > 0) {
        for (var i in copiedObjects) {
            canvas.add(copiedObjects[i]);
        }
    }
    else if (copiedObject) {
        canvas.add(copiedObject);
    }
    canvas.renderAll();
}
Is it actually possible to do this? I have heard that it may not be possible. Can anyone guide me on how to do it, please?
If you're targeting modern browsers you can combine two new (but widely adopted) HTML5 features to accomplish your task:
1. You can create a dropzone on your page using the dragover and drop events.
2. Then you can use the FileReader API to read the image files into an image object.
3. Then it's back to FabricJS to load the image as usual.
Here's a tutorial describing how to do the hard bits (#1, #2): http://www.html5rocks.com/en/tutorials/file/dndfiles/
[ Added code that SOMETIMES allows cut/paste of image files ]
Most modern browsers support binding the “paste” event.
// listen for the paste event
window.addEventListener("paste",pasteImage);
But...!!
Support for non-text MIME types (i.e. "image") is scarce. Chrome seems to support it "off-and-on".
…And browsers are constantly revising their cut/paste capabilities because of security concerns.
Here is code that sometimes works in Chrome.
// listen for the paste event
window.addEventListener("paste", pasteImage);

function pasteImage(event) {
    // get the raw clipboardData
    var cbData = event.clipboardData;

    for (var i = 0; i < cbData.items.length; i++) {
        // get the clipboard item
        var cbDataItem = cbData.items[i];
        var type = cbDataItem.type;

        // warning: most browsers don't support the image data type
        if (type.indexOf("image") != -1) {
            // grab the imageData (as a blob)
            var imageData = cbDataItem.getAsFile();

            // format the imageData into a URL
            var imageURL = window.webkitURL.createObjectURL(imageData);

            // We've got an imageURL, add code to use it as needed
            // the imageURL can be used as src for an Image object
        }
    }
}
In my application I'm loading almost 40 to 50 images from the SD card into a GridView. For this I created an adapter whose constructor loads the images from the SD card into a global ArrayList of bitmaps, and getView returns the bitmap from that list. The problem is that from this activity the user moves to another activity where they can delete an image; when the user comes back to the grid activity, the adapter has to be loaded again, and this produces a "bitmap exceeds VM budget" error. I don't know whether this way of loading images from the SD card is right. Is there a better approach, or are there any libraries for loading images from the SD card without memory leaks?
public class PicAdapter extends BaseAdapter {

    int defaultItemBackground;
    private Context galleryContext;
    Bitmap placeholder;

    public PicAdapter(Context c) {
        // instantiate context
        galleryContext = c;

        // create bitmap array
        bitmaplist = new ArrayList<Bitmap>();
        GlobalData.imageBitmaps = new WeakReference<ArrayList<Bitmap>>(bitmaplist);
        imageDescription = new ArrayList<String>();
        imagetitle = new ArrayList<String>();

        picturesCursor.moveToFirst();
        int i = 0;
        for (i = 0; i < picturesCursor.getCount(); i++) {
            String imagepath = picturesCursor.getString(picturesCursor
                    .getColumnIndex("img"));
            File cacheimage = new File(imagepath);
            if (!cacheimage.exists()) {
                dBopenHelper.deleteHappyMoments(imagepath);
            } else {
                Bitmap bitmap =
                        ViewUtils.decodeSampledBitmapFromResource(imagepath,
                                AppConstants.FRAME_WIDTH,
                                AppConstants.FRAME_HEIGHT);
                ViewUtils.recycleBitmapFrame();
                GlobalData.imageBitmaps.get().add(bitmap);
                imageDescription.add(picturesCursor.getString(picturesCursor
                        .getColumnIndex("image_description")));
                imagetitle.add(picturesCursor.getString(picturesCursor
                        .getColumnIndex("image_title")));
                picturesCursor.moveToNext();
            }
        }

        if (i == picturesCursor.getCount()) {
            Bitmap bitmap = (Bitmap) BitmapFactory.decodeResource(getResources(), R.drawable.add_new);
            GlobalData.imageBitmaps.get().add(bitmap);
            imageDescription.add("Click Add to new Pictures..............");
        }
        // get the styling attributes - use default Android system resources
    }

    // BaseAdapter methods

    // return number of data items i.e. bitmap images
    public int getCount() {
        return bitmaplist.size();
    }

    // return item at specified position
    public Object getItem(int position) {
        return position;
    }

    // return item ID at specified position
    public long getItemId(int position) {
        return position;
    }
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = (ImageView) getLayoutInflater().inflate(
                    R.layout.item_grid_image, null);
            convertView.setLayoutParams(new GridView.LayoutParams(
                    AppConstants.FRAME_WIDTH, AppConstants.FRAME_HEIGHT));
        }
        galleryImageView = (ImageView) convertView;
        galleryImageView.setImageBitmap(GlobalData.imageBitmaps.get().get(position));
        return convertView;
    }

    public void addPic(Bitmap newPic) {
        // set at currently selected index
        GlobalData.imageBitmaps.get().add(currentPic, newPic);
    }

    // return bitmap at specified position for larger display
    public Bitmap getPic(int posn) {
        // return bitmap at posn index
        return GlobalData.imageBitmaps.get().get(posn);
    }
}
Sample code would help me a lot.
Thanks in advance.
If your images have large dimensions, you need to load them one at a time using some kind of queue.
Make sure that you decode them correctly (downsampled to the size you actually display, rather than at full resolution).
Here is my code:
void MainWindow::on_actionOpen_Image_triggered()
{
    QString fileName = QFileDialog::getOpenFileName(this, "Open Image File", QDir::currentPath());
    if (!fileName.isEmpty())
    {
        QImage image(fileName);
        if (image.isNull())
        {
            QMessageBox::information(this, "Image Viewer", "Error Displaying image");
            return;
        }

        QGraphicsScene scene;
        QGraphicsView view(&scene);
        QGraphicsPixmapItem item(QPixmap::fromImage(image));
        scene.addItem(&item);
        view.show();
    }
}
I want to display an image from a file. The code works, but the image disappears almost immediately.
How can I keep the image on screen?
And how can I load the image into the "graphicsView" widget?
My code:
void MainWindow::on_actionOpen_Image_triggered()
{
    QString fileName = QFileDialog::getOpenFileName(this, "Open Image File", QDir::currentPath());
    if (!fileName.isEmpty())
    {
        QImage image(fileName);
        if (image.isNull())
        {
            QMessageBox::information(this, "Image Viewer", "Error Displaying image");
            return;
        }

        QGraphicsScene scene;
        QGraphicsPixmapItem item(QPixmap::fromImage(image));
        scene.addItem(&item);
        ui->graphicsView->setScene(&scene);
        ui->graphicsView->show();
    }
}
It does not work.
How can I fix this?
You need to create all your objects on the heap, otherwise they get deleted when they go out of scope:
QGraphicsScene* scene = new QGraphicsScene();
QGraphicsView* view = new QGraphicsView(scene);
QGraphicsPixmapItem* item = new QGraphicsPixmapItem(QPixmap::fromImage(image));
scene->addItem(item);
view->show();
Your second question is likely related: the scene is assigned to ui->graphicsView, but it gets deleted immediately afterwards when it goes out of scope, so again, create all your objects on the heap.
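As a minimal sketch of that second case (assuming the ui->graphicsView member from the question, and letting Qt own the scene by parenting it to the window):

    // Allocate the scene on the heap so it outlives the slot;
    // parenting it to `this` lets Qt delete it with the window.
    QGraphicsScene* scene = new QGraphicsScene(this);
    scene->addItem(new QGraphicsPixmapItem(QPixmap::fromImage(image)));
    ui->graphicsView->setScene(scene);
    ui->graphicsView->show();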
If you don't have to stick with QGraphicsView, one possibility is to use QLabel instead. I didn't manage to solve it for QGraphicsView...
QString filename = "X:/my_image";
QImage image(filename);
ui->label->setPixmap(QPixmap::fromImage(image));
I'm trying to ascertain whether there is a limitation on camera access in the J2ME implementation on the HTC Touch2. The native camera is 3 MP, but the quality seems notably reduced when accessed via J2ME; in fact, the only size and format the getSnapshot() method seems able to return is a 240x320 pixel JPEG. I'm trying to confirm that this is a limitation of the J2ME implementation and not my coding. Here's an example of some of the things I have tried:
private void showCamera() {
    try {
        mPlayer = Manager.createPlayer("capture://video");
        // mPlayer = Manager.createPlayer("capture://video&encoding=rgb565&width=640&height=480");
        mPlayer.realize();

        mVideoControl = (VideoControl) mPlayer.getControl("VideoControl");

        canvas = new CameraCanvas(this, mVideoControl);
        canvas.addCommand(mBackCommand);
        canvas.addCommand(mCaptureCommand);
        canvas.setCommandListener(this);

        mDisplay.setCurrent(canvas);
        mPlayer.start();
    }
    catch (Exception ex) {}
}

public void capture() {
    try {
        // Get the image.
        byte[] raw = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot("encoding=png&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot(null);

        Image image = Image.createImage(raw, 0, raw.length);
        // Image thumb = createThumbnail(image);

        // Place it in the main form.
        if (mMainForm.size() > 0 && mMainForm.get(0) instanceof StringItem)
            mMainForm.delete(0);
        mMainForm.append(image);
If anyone could help it would be much appreciated.
I have received word from a number of sources that there is indeed a limitation on the camera access available to the JVM, which is put in place by the operating system.