I am trying to render a 3D bar chart in an SCNView using the SceneKit framework.
My rendering code is:
int height = 10, y = 0, x = 0;
for (int i = 0; i < 10; i++) {
    SCNBox *box1 = [SCNBox boxWithWidth:4 height:height length:2 chamferRadius:0];
    SCNNode *boxNode1 = [SCNNode nodeWithGeometry:box1];
    boxNode1.position = SCNVector3Make(x, y, 0);
    SCNMaterial *material = [SCNMaterial material];
    material.diffuse.contents = (NSColor *)[self.colorArray objectAtIndex:i % 6];
    material.specular.contents = [NSColor whiteColor];
    material.shininess = 1.0;
    box1.materials = @[material];
    //boxNode1.transform = rot;
    [scene.rootNode addChildNode:boxNode1];
    x += 6;
    height += 10;
    y += 5;
}
The chart renders, but when the view is resized the chart bars move to the center of the view.
I need the chart to extend to the margins of the view and to adjust accordingly when the view is resized. The images below show my problem.
Original image:
Image with less stretching of both windows:
Can anyone please help me fix this issue?
The windows in the image you linked to in your original question were very stretched, and that made it very hard to see what was going on. When I made the windows in that image less stretched, it was easier to get some idea of what is happening.
I think you are seeing a general resizing issue. Either you are using springs and struts and have configured flexible margins on the left and right, or you are using Auto Layout with a centered view of fixed width.
I assume that the red boxes I have drawn in the image below are the bounds of your scene view in both cases. You can easily check whether this is so by giving the scene view a different background color and resizing again.
My solution to your problem would be to change how your view resizes as the window resizes, to better meet your expectations.
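As a minimal sketch of that (assuming the SCNView is laid out with springs and struts inside the window's content view; the sceneView name is illustrative), you can make the view track its superview instead of staying centered:
// Make the scene view fill its superview and resize with it.
sceneView.frame = sceneView.superview.bounds;
sceneView.autoresizingMask = NSViewWidthSizable | NSViewHeightSizable;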
I have an ImageViewer with a couple of images. When I zoom in on the first image and want to see its right side, the following image overlaps the zoomed-in image, which is very annoying. Is there a way to put the zoomed-in image in front?
How can I keep the zoomed image always in front?
See picture: https://i.stack.imgur.com/5f5JZ.jpg
My ImageViewer is in a container:
Image red = EncodedImage.create("/HW_Delfzijl_Waddenzee_Oost.jpg");
Image blue = Image.createImage(500, 500, 0xff0000ff);
Image red2 = Image.createImage(500, 500, 0xffff0000);
Image[] List1 = new Image[3];
List1[0] = red;
List1[1] = blue;
List1[2] = red2;
iv = new ImageViewer();
iv.setWidth(500);
iv.setHeight(500);
iv.setImageList(new DefaultListModel<>(List1));
Container1 = BoxLayout.encloseY(Kaarten.AuvHW, AdvSpr, iv, Up, progressbar);
I was able to reproduce this with the following code:
Form hi = new Form("ImageViewer", BoxLayout.y());
Image red = Image.createImage(2000, 800, 0xffff0000);
Image blue = Image.createImage(2000, 500, 0xff0000ff);
ImageViewer viewer = new ImageViewer();
viewer.setImageList(new DefaultListModel<>(red, blue));
hi.add(BoxLayout.encloseY(viewer, new Label("Dummy")));
hi.show();
The problem only happens if I'm on the blue image (the second one), scale it up, and then try to pan. It doesn't happen when moving from the red image to the blue one.
I believe this is due to the following method in the ImageViewer code. Since the background isn't painted there, the old image isn't cleaned up. We need to add a condition that disables that optimization while dragging. I think changing the code in that method as follows will fix the problem, but it's a bit risky and might trigger flickering:
protected void paintBackground(Graphics g) {
    // disable background painting for performance when zooming
    if(imageDrawWidth < getWidth() || imageDrawHeight < getHeight() || panPositionX != 0) {
        super.paintBackground(g);
    }
}
I would suggest filing an issue so we can consider the options here as this isn't trivial. One partial workaround is to use:
viewer.setImageInitialPosition(ImageViewer.IMAGE_FILL);
which minimizes the impact of the issue.
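For context, a sketch of that workaround applied to the reproduction above (same viewer and images as before):
ImageViewer viewer = new ImageViewer();
viewer.setImageInitialPosition(ImageViewer.IMAGE_FILL); // scale images to fill before any zooming
viewer.setImageList(new DefaultListModel<>(red, blue));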
So I created a snake game with a border built from 2D sprites. My game window is set to 16:9, and at this resolution the images look fine. However, scaling to anything else makes the game look weird. I want the game window to be resizable. How can I make my sprites stretch and shrink based on the current resolution?
I have already tried creating a sprite that is 120 in width and 1 in height, then using the x, y, z scale to change the scale to 16. This produced a huge sprite.
I am experimenting with a Canvas Scaler, but with no success.
My end goal isn't to have my game fit pre-defined resolutions like 16:9, but to scale according to the current window size, so that if the window is made extremely thin, the game only makes the top and bottom borders extremely thin while still confining the gameplay within the borders.
Below I post screenshots of how my sprites are set up and how they are placed in the hierarchy.
Border sprite - this sprite's width is now 70 pixels, because this is how it was given to me.
Border in hierarchy, position, scale, and rotation are defaults. Then for example the BorderTop is moved 25 on the y axis to move it to the top of the screen.
Camera setup
Example resolutions and current output
16:9
5:4
Add a simple script to every border:
public class Border : MonoBehaviour {
    enum BorderTypes
    {
        bottom, top, left, right
    }

    [SerializeField] float borderOffset = 0.1f;
    [SerializeField] BorderTypes type = BorderTypes.top;

    // Use this for initialization
    void Start () {
        switch (type)
        {
            case BorderTypes.bottom:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, borderOffset, 10));
                break;
            case BorderTypes.top:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 1 - borderOffset, 10));
                break;
            case BorderTypes.left:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(borderOffset, 0.5f, 10));
                break;
            case BorderTypes.right:
                transform.position = Camera.main.ViewportToWorldPoint(new Vector3(1 - borderOffset, 0.5f, 10));
                break;
            default:
                break;
        }
    }
}
You can use a simple approach:
First, create a Canvas.
In the Canvas component in the Inspector, set Render Mode to Screen Space - Camera and drag your main camera to the Render Camera field.
Then, in the Canvas Scaler component in the Inspector window, set UI Scale Mode to Scale With Screen Size and the other settings as in this image.
Now drag your game objects onto the Canvas.
First, you should not change your sprites. Your problem is the game viewport. You simply cannot have a fixed 16:9 aspect ratio full-screen on every device. You have two options here:
Don't care about the aspect ratio and adapt your gameplay: set your Canvas Scaler mode to "Scale With Screen Size" with a reference resolution of something like 1920x1080, 1600x900, or 800x450. Your game logic must take into account that screens come in different sizes. You can experiment in the editor by switching between aspect ratios in the Game view.
Maintain 16:9 and calculate where to add "bars" (at the sides, or top and bottom) when the game is initialized, as in the sketch below.
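A rough sketch of the second option (the component name and the 16:9 target are illustrative): shrink the camera's viewport rect at startup so the visible area keeps a 16:9 ratio and the leftover window space becomes bars.
using UnityEngine;

// Attach to the main camera; letterboxes or pillarboxes to a fixed aspect ratio.
// For live window resizing you would re-run this from Update() as well.
public class ForceAspect : MonoBehaviour {
    const float targetAspect = 16f / 9f;

    void Start () {
        Camera cam = GetComponent<Camera>();
        float windowAspect = (float)Screen.width / Screen.height;
        float scale = windowAspect / targetAspect;

        if (scale < 1f) {
            // Window is taller than 16:9: bars on top and bottom.
            cam.rect = new Rect(0f, (1f - scale) / 2f, 1f, scale);
        } else {
            // Window is wider than 16:9: bars on the sides.
            float inverse = 1f / scale;
            cam.rect = new Rect((1f - inverse) / 2f, 0f, inverse, 1f);
        }
    }
}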
I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
One problem is that I need to do exactly the opposite, meaning I want the inverse of that. I want everything outside of the specified rectangle to be displayed, and everything inside to be taken out.
In my test I'm using the position of a sprite, which updates every frame, so the clipping rect will need to be redefined every frame as well.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocs internally, but I'm not using ARC (the project is too big and complex to convert), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask fits over the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what cocos2d provides out of the box.
You have to define a CCClippingNode with a stencil, and then set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites in the clipping node in order for it to be taken into account.
@implementation ClippingTestScene
{
    CCClippingNode *_clip;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2, self.boundingBox.size.height/2);
[self addChild:_clip];
_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work though, but Cocos will spit out what you need to do in the console.
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So in your case: if you know the rectangular area you don't want to show (the one to invert) and you know the screen area, you can subtract the rectangle from the screen area. That gives you the inverse area, as sketched below. I have not done this myself; maybe tomorrow I can post some code.
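A rough sketch of that idea (assuming the CCScissorNode class from the link above; the file name and hole rect are illustrative): cover the complement of the hole with four scissor rects, left, right, bottom and top.
// The area outside `hole` is exactly these four rects.
CGSize s = [CCDirector sharedDirector].winSize;
CGRect hole = CGRectMake(100, 100, 275, 325); // the region to leave visible underneath
CGRect parts[4] = {
    CGRectMake(0, 0, CGRectGetMinX(hole), s.height),                             // left strip
    CGRectMake(CGRectGetMaxX(hole), 0, s.width - CGRectGetMaxX(hole), s.height), // right strip
    CGRectMake(CGRectGetMinX(hole), 0, hole.size.width, CGRectGetMinY(hole)),    // bottom strip
    CGRectMake(CGRectGetMinX(hole), CGRectGetMaxY(hole), hole.size.width,
               s.height - CGRectGetMaxY(hole))                                   // top strip
};
for (int i = 0; i < 4; i++) {
    CCScissorNode *piece = [CCScissorNode scissorNodeWithRect:parts[i]];
    // each scissor node needs its own copy of the overlay, since a node can have only one parent
    [piece addChild:[CCSprite spriteWithFile:@"darkMap.png"]];
    [self addChild:piece z:100];
}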
I load an image (NSImage) from disk and draw it into an NSImageView on Mac; no problem, the image looks fine and clear.
After drawing it into the NSImageView, I call the function below with the same image and draw the returned value into the same NSImageView. The resulting image is extremely blurry, even though all I do is lockFocus and unlockFocus without doing anything else.
-(NSImage*)addTarget:(NSImage*)image
{
    [image lockFocus]; // this image is sharp and clear
    [image unlockFocus];
    return image; // this image is extremely blurry
}
Anybody know why, or how to fix that? Thanks.
While doing some research, I realized that this is related to Retina displays. Locking focus always draws at the best representation, so if a Retina display is attached anywhere to the computer, it renders at that scale factor. So in order to maintain the proper dimensions and DPI, I created a method that iterates through all screens and returns the largest backingScaleFactor relative to the current screen's:
func maximumScaleFactor(screen: NSScreen) -> CGFloat {
    var max: CGFloat = 0
    for s in NSScreen.screens()! {
        if s.backingScaleFactor > max { max = s.backingScaleFactor }
    }
    return max / screen.backingScaleFactor
}
Then for the NSImage I did the following:
func addTarget(image: NSImage) -> NSImage {
    let scale = self.maximumScaleFactor(currentScreen)
    let originalSize = image.size

    // shrink the image's point size by the scale factor so lockFocus
    // doesn't inflate the pixel count
    var size = image.size
    size.width /= scale
    size.height /= scale
    image.size = size

    image.lockFocus()
    // do whatever drawing you need here
    image.unlockFocus()

    // set the image back to its original size
    image.size = originalSize
    return image
}
So far this has worked well for me and the image quality subjectively appears the same to me.
Fixing your problem is difficult because you don't say what you want to achieve. Why did you write that method, why do you call it, and what do you expect it to do? You said: all I do is lockFocus and unlockFocus without doing anything else. Indeed, it looks as if calling lockFocus and unlockFocus (with nothing in between) does nothing. But that is wrong: [image lockFocus] alone changes the image dramatically.
An NSImage object contains zero, one (in most cases), or more (icons and some TIFFs) objects of class NSImageRep. A call of lockFocus on the image selects the NSImageRep best suited for depicting on the screen. It then computes how many pixels (for a screen) are needed to render the image at the given size, but at a resolution of only 72 dpi (or 144 dpi for Retina screens). It then removes the chosen NSImageRep from the list of representations and creates a new NSImageRep in its place. In former OS versions (before 10.6) an NSCachedImageRep was created, but now an NSCGImageSnapshotRep is created, which under the hood is a CGImage. Make an
NSLog(@"image is:\n%@", image);
before lockFocus and another one after the call of unlockFocus and you will see what happens: for a high-resolution image the number of pixels goes down, which is nothing other than a reduction in quality. And that makes your image blurry.
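A sketch of one way to avoid the lossy cache entirely (assuming you only need to composite something over the image; imageWithSize:flipped:drawingHandler: exists since OS X 10.8):
// Build a new NSImage from a drawing handler instead of lockFocus.
// The handler is re-invoked at the backing scale of whatever the image
// is finally drawn into, so no pixels are discarded up front.
NSImage *result = [NSImage imageWithSize:image.size
                                 flipped:NO
                          drawingHandler:^BOOL(NSRect dstRect) {
    [image drawInRect:dstRect];
    // ... draw the "target" overlay here ...
    return YES;
}];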
I've been doing some programming for iPhone lately, and now I'm venturing into the iPad domain. The concept I want to realise relies on a navigation similar to Time Machine in OS X. In short, I have a number of views that can be panned and zoomed, like any normal view. However, the views are stacked upon each other using a third dimension (in this case depth). The user will then navigate to any view by, in this case, picking a letter, whereupon the app flies through the views until it reaches the view for the selected letter.
My question is: can somebody give the complete final code for how to do this? Just kidding. :) What I need is a push in the right direction, since I'm unsure how to even start doing this, and whether it is at all possible using the frameworks available. Any tips are appreciated
Thanks!
Core Animation, or more specifically the UIView animation model that's built on Core Animation, is your friend. You can make a Time Machine-like interface with your views by positioning them in a vertical line within their parent view (using their center properties), scaling the ones farther up that line slightly smaller than the ones below ("in front of") them (using their transform properties with the CGAffineTransformMakeScale function), and setting their layers' z-index (get the layer using the view's layer property, then set its zPosition) so that the ones farther up the line appear behind the others. Here's some sample code.
// animate an array of views into a stack at an offset position (0 has the first view in the stack at the front; higher values move "into" the stack)
// took the shortcut here of not setting the views' layers' z-indices; this will work if the backmost views are added first, but otherwise you'll need to set the zPosition values before doing this
int offset = 0;
[UIView animateWithDuration:0.3 animations:^{
    CGFloat maxScale = 0.8; // frontmost visible view will be at 80% scale
    CGFloat minScale = 0.2; // farthest-back view will be at 20% scale
    CGFloat centerX = 160;  // horizontal center
    CGFloat frontCenterY = 280; // vertical center of frontmost visible view
    CGFloat backCenterY = 80;   // vertical center of farthest-back view
    for (int i = 0; i < [viewStack count]; i++)
    {
        float distance = (float)(i - offset) / [viewStack count];
        UIView *v = [viewStack objectAtIndex:i];
        v.transform = CGAffineTransformMakeScale(maxScale + (minScale - maxScale) * distance, maxScale + (minScale - maxScale) * distance);
        v.alpha = (i - offset >= 0) ? (1 - distance) : 0; // views that have disappeared behind the screen get no opacity; views still visible fade as their distance increases
        v.center = CGPointMake(centerX, frontCenterY + (backCenterY - frontCenterY) * distance);
    }
}];
And here's what it looks like, with a couple of randomly-colored views:
Do you mean something like this, on the right?
If yes, it should be possible. You would have to arrange the views like in the image and animate them going forwards and backwards. As far as I know, there aren't any frameworks for this.
It's called Cover Flow, and it is also used in iTunes to view artwork/albums. Apple appears to have bought the technology from a third party and to have patented it. However, if you google for ios cover flow you will get plenty of hits and code to point you in the right direction.
I have not looked, but I would think it might be in the iOS library; I do not know for sure.