Why are the dimensions in my Figma file so much bigger than they should be when I try to apply them? Is there a way to scale them down? - user-interface

Okay, so I have used Figma before, although it was long ago, and I didn't have this problem. I am in a team of developers. We aren't very good at design, so we paid a designer to do the design for us. Everything was fine until I started coding the landing page. The dimensions I got from Figma for basically everything are way too big and unrealistic. For example, it told me the height of the navbar is 135px. 135?! That's way too big for a navbar. I have to scale everything down myself and use trial and error until it looks somewhat like the design. Normally I wouldn't complain if I were programming for myself, but this is not a personal project and I am on a deadline, so having to match the design manually by trial and error is adding time I didn't account for. Is this the fault of the designer or of Figma itself? Is there a way to scale down the dimensions to make them browser-sized?

The first thing to understand is that screens differ: on a display with a higher resolution or a different pixel density, the same design may appear smaller or larger than intended.
Second, your designer worked at a different scale than your target.
You will have to set up a scale factor. If he gave you the height as 135px and you think 80px is actually the correct value, your scale factor is roughly 0.6 (80 / 135 ≈ 0.59), and every value should be applied as height × 0.6. The same goes for widths.
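In CSS, one way to apply such a factor without trial and error is to keep it in a single custom property; a minimal sketch, assuming the 0.6 factor above (the class name is illustrative):

:root {
    /* Ratio of the size you want to the size Figma reports (80 / 135 ≈ 0.6) */
    --design-scale: 0.6;
}

.navbar {
    /* Figma says 135px; render it scaled down */
    height: calc(135px * var(--design-scale));
}

If the designer later corrects the file, there is only one number to change.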
The easiest way is for the designer to resize his design to the correct size. The one drawback is that he will have to edit multiple elements on each page individually, which can take hours. However, it's his mistake.

Related

How to adapt text and/or element sizes when designing for smaller screens?

I have a personal desktop project that I previously created in Adobe XD, and now I would like to put it on Behance. To do so, I need to adapt the desktop layout to mobile.
I don't usually design for smaller screens, so I am wondering how much I need to decrease text and element sizes? For example, if I have a text with a font size of 40px, what calculations should I use to decrease the size for mobile? Is there a default percentage to reduce desktop values? Alternatively, are there visual rules that other designers follow?
I always design for Bootstrap, but I'm not sure if I am thinking about mobile the right way.
I've also posted this on the User Experience Stack Exchange forum, but I'm not sure which one is the best for my question.
Thank you for sharing your thoughts and advice.
I have designed mostly for desktops as a traditional web designer, and now I'm trying to migrate to UI/UX.
Modern devices do most of the scale conversion work for you by adequately scaling the viewport to compensate for the smaller screens and often higher resolutions. Depending on the type of application you are designing, the technology is different, but the result is very similar.
For example, if you were implementing the design for the Web, you would likely need to use browser features like media queries to manage your content.
However, because you are focusing on the design of the site, you should not need to worry about the 'how', so you can focus on what to do.
Here are some tips:
Elements and text appear roughly the same size on desktop and mobile if you hold the device at a casual but comfortable distance and compare it to the size it appears on your desktop's screen at an average viewing distance. You can try this by going to a website built for mobile like Apple's.
Because of the similar apparent size but reduced screen dimensions, you need to simplify your design and avoid multiple columns (especially for phones).
Because you see a smaller portion of your design at once on mobile, there is less need for significant visual hierarchy. For example, if you have multiple heading levels with a significant visual size difference on the desktop, you can probably get away with making them closer in size on mobile.
If you want to see what your design looks like on mobile, try emailing the design to your phone, save it to your pictures, and load the image full screen. You may need to zoom the image in a bit so that the left and right of the design are touching the sides of your phone's screen. If your text looks too small or your elements are too large, adjust the design and load it on your phone again. Keep doing this until you get it right.
With a little practice and effort, you will get the hang of mobile design. And if you want to take it to the next level, try researching mobile-first design; there are many good articles on the subject.
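If you do later want to peek at the "how" mentioned above, here is a minimal CSS media-query sketch; the 600px breakpoint and the font sizes are illustrative assumptions, not standard values:

/* Desktop-first: the large heading from the desktop design */
h1 {
    font-size: 40px;
}

/* On narrow screens, shrink it and bring it closer to body size */
@media (max-width: 600px) {
    h1 {
        font-size: 28px;
    }
}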

Adjusting hard-coded values in Processing for any screen size

So I'm making a game with my group in Processing for a project, and we all have different computers. The problem is we built the game on one computer, and at this point we have realized that the (1200, 800) size we used does not work on our professor's computer. Unfortunately, we have hard-coded thousands of values to fit that resolution. Is there any way to make it fit on all computers?
From my own research I found you can use screen.width and screen.height to get the size of the screen, and I set the game window to about half the screen size. However, all the images I loaded for the background and so on are 1200x800, so I am unsure how to go about modifying ALL of my pictures (backgrounds) and hard-coded values.
Is there any way to fix this without manually changing the thousands of hard-coded values? (Yes, I am fully aware how bad it is that I hard-coded the numbers.)
Any help would be greatly appreciated. As mentioned in the title, the language is Processing.
As I'm sure you have learned your lesson about hard-coding numbers, I won't say anything about it :)
You may have heard of embedding a Processing PApplet inside a traditional Java JFrame or similar. If you are okay with scaling the image that your PApplet draws (i.e., it draws at the resolution you've coded, and the resulting image is scaled up or down to match the screen), then you could embed your PApplet in a frame, capture the PApplet's output to an image, scale the image, and then draw it to the screen. A quick googling yielded this SO question. It may make your game look funny if the resolutions are too different, but this is a quick and dirty way. It's possible that you'll want to do this in a separate thread, as suggested here.
Having said that, I do not recommend it. One of the best things (IMO) about Processing is not having to mess directly with AWT/Swing. It's also a messy kludge, and the "right thing to do" is just to go back and change the hard-coded numbers to variables. For your images, you can use PImage's resize(). You say your code is several hundred lines long, but in reality that isn't a huge amount; the best thing to do is just to suck it up and be unhappy for a few hours. Good luck!

Is it better to use images or CSS to keep performance of a webpage or application as high as possible?

My project's creative designer and I have had some amicable disagreements about whether it is better to use slices of his comp or to rely on the browser's rendering engine to create certain aspects of the user experience.
One specific example is with a horizontal "bar" that runs the entire width of the widget. He created a really snazzy gradient that involves several stops of different colors and opacities, and he feels that the small size of the image is preferable or at least comparable to the added lines of CSS code needed to generate the gradient natively.
We're already using the CSS sprite technique to reduce the number of round trips to the server, so that's not an issue for us. Right now it's simply the pros and cons of using sliced-up imagery versus rendering with CSS and HTML.
Is there a definitive rule as to how large an image needs to be to be the "worse" option of the two?
UPDATE: Since a few people have mentioned the complexity of the gradient as a factor, I'm going to present it here:
-webkit-gradient(linear, left top, left bottom, color-stop(50%, rgb(0,0,0)), to(rgba(0,0,0,0.9)));
Very complex! ;-)
Measure it! Not the answer you'd like, I think, but it really depends on how complex the CSS will be and therefore how long it takes to render.
In most cases it'll be render time (CSS version) vs. request overhead and transmission time (image version). That is where you will most probably see the big numbers. Since you're already using image sprites, you're reducing the request overhead to a minimum.
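For anyone unfamiliar with the sprite technique mentioned here, the idea is to ship many small images as one sheet and shift it into view per element. A minimal sketch (the filename, sizes, and offsets are illustrative):

.icon {
    width: 16px;
    height: 16px;
    /* One combined sheet instead of many single-icon files */
    background: url("icons-sprite.png") no-repeat;
}

.icon-search {
    /* Shift the sheet so the third 16x16 tile shows */
    background-position: -32px 0;
}

One HTTP request serves every icon, which is exactly the request-overhead reduction described above.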
Browser compatibility is also something you should be aware of. Here, images will often win over CSS when it comes to gradients and the like.
A very complex CSS3 site to demonstrate what I mean: http://lea.verou.me/css3patterns/
This is a VERY nice case study, but incredibly slow. It lags when loading. It lags even more when scrolling. And I am sure it is much slower than a solution using an image sprite for all of that.
Don't get me wrong! I love CSS, but images are fine too. Sometimes even finer.
Summary
Measure it! When you don't have time to measure, estimate how complex the CSS would be. When it tends toward complex, use images. When you have compatibility issues, use images.
In general, if it can be done with CSS, definitely skip the image. If for no other reason, it's easier to maintain.
HOWEVER, a complex gradient is likely where I'd give in and throw in an image. Having recently implemented some CSS gradients, I can say it is a bit of a mess right now.
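For context, the "mess" at the time meant stacking vendor-prefixed declarations on top of an image fallback. A rough sketch reusing the asker's colour stops; the selector and fallback filename are hypothetical:

.gradient-bar {
    /* Image fallback for browsers with no gradient support (hypothetical file) */
    background: url("bar-gradient.png") repeat-x;
    /* Old WebKit syntax, as in the question */
    background: -webkit-gradient(linear, left top, left bottom, color-stop(50%, rgb(0, 0, 0)), to(rgba(0, 0, 0, 0.9)));
    /* Newer prefixed syntaxes */
    background: -webkit-linear-gradient(top, rgb(0, 0, 0) 50%, rgba(0, 0, 0, 0.9));
    background: -moz-linear-gradient(top, rgb(0, 0, 0) 50%, rgba(0, 0, 0, 0.9));
    background: -o-linear-gradient(top, rgb(0, 0, 0) 50%, rgba(0, 0, 0, 0.9));
    /* Standard syntax */
    background: linear-gradient(to bottom, rgb(0, 0, 0) 50%, rgba(0, 0, 0, 0.9));
}

Each declaration overrides the previous one wherever it is understood, so every browser ends up using the best version it supports.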
Ultimately, Fabian Barney has the correct answer. If it's an issue of performance, you need to measure things. In the grand scheme of things, this is likely low on the performance issues to dwell on.
I think CSS is more reusable. Imagine you have to repeat the same image over and over again in the same web document; lots of HTTP requests there. But that's my opinion, so please correct me if I'm wrong. I think CSS is way more expressive and flexible stylistically.
However, there are things you have to watch out for. Stuff like browser-specific prefixing can kill pages that rely on too much CSS.

Tips for optimizing performance of -webkit-transform?

I'm using webkit-transform: translate3d and a few other properties pretty extensively on a mobile app for iPhone because they are hardware-accelerated. With about 98% of the features in place, performance is great. I'm aware of not trying to do too much at once.
I'm successfully simulating swiping in a very excellent, native-feeling way. What I've noticed now is that when I add the last 2% of features, I see some image redrawing issues in the element that is being animated while swiping. After you swipe through all 4 images and they load, performance is perfectly smooth again. However, when this section is hidden and shown, the same thing happens.
What I hypothesize is happening is there's an internal buffer being hit and it has to reload each time.
So with that background, the general question is: what kinds of performance optimizations have other developers been making for -webkit-transform? I'm not necessarily asking about my particular situation, but rather what wider range of optimizations people have figured out for their individual needs.
Hopefully if this question gets some answers, it can be a resource for other folks asking the same question down the road.
It's a fairly well-known thing, but making sure the element you transform uses 3D transforms where possible helps a lot on devices that hardware-accelerate transforms (iOS at the moment).
The easiest way to do that is to add:
transform: translate3d(0,0,0);
with the appropriate prefixes to the css of the element in question, then just animate it as normal, either by using 2d or 3d transforms.
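For completeness, a sketch of what "the appropriate prefixes" typically meant at the time (the class name is illustrative):

.swipe-panel {
    /* A no-op 3D transform that promotes the element to its own GPU-composited layer */
    -webkit-transform: translate3d(0, 0, 0);
    transform: translate3d(0, 0, 0);
}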
It might sound a bit weird, but I had a similar issue and I solved it by using -webkit-perspective: 1000.
I don't know why this acts in favor of the transitions, but in my case it did.

Dilemma about image cropping algorithm - is it possible?

I am building a web application using .NET 3.5 (ASP.NET, SQL Server, C#, WCF, WF, etc) and I have run into a major design dilemma. This is a uni project btw, but it is 100% up to me what I develop.
I need to design a system whereby I can take an image and automatically crop out a certain object within it, without user input. So, for example, cut out the car in a picture of a road. I've given this a lot of thought, and I can't see any feasible method. I guess this thread is to discuss the issues and the feasibility of achieving this goal. Eventually, I would get the dimensions of a car (or whatever it may be) and then pass these into a custom 3D modelling app as parameters to render a 3D model. That last step is a lot more feasible. It's the cropping that is the real issue. I have thought of all sorts of ideas, like getting the colour of the car and then tracing the outline around that colour. So if the car (for example) is yellow, wherever there is a yellow pixel in the image, trace around it. But this would fail if there were two yellow cars in the photo.
Ideally, I would like the system to be completely automated. But I guess I can't have everything my way. Also, my skills are in what I mentioned above (.NET 3.5, SQL Server, AJAX, web design) as opposed to C++, but I would be open to any solution just to explore the feasibility.
I also found this patent: US Patent 7034848 - System and method for automatically cropping graphical images
Thanks
This is one of the problems that needed to be solved to finish the DARPA Grand Challenge. Google Video has a great presentation by the project lead from the winning team, where he talks about how they went about their solution and how some of the other teams approached it. The relevant portion starts around 19:30 of the video, but it's a great talk, and the whole thing is worth a watch. Hopefully it gives you a good starting point for solving your problem.
What you are talking about is an open research problem, or even several research problems. One way to tackle this is by image segmentation. If you can safely assume that there is one object of interest in the image, you can try a figure-ground segmentation algorithm. There are many such algorithms, and none of them are perfect. They usually output a segmentation mask: a binary image where the figure is white and the background is black. You would then find the bounding box of the figure and use it to crop. The thing to remember is that none of the existing segmentation algorithms will give you what you want 100% of the time.
Alternatively, if you know ahead of time what specific type of object you need to crop (car, person, motorcycle), then you can try an object detection algorithm. Once again, there are many, and none of them are perfect either. On the other hand, some of them may work better than segmentation if your object of interest is on very cluttered background.
To summarize, if you wish to pursue this, you will have to read a fair number of computer vision papers and try a fair number of different algorithms. You will also increase your chances of success if you constrain your problem domain as much as possible: for example, restrict yourself to a small number of object categories, assume there is only one object of interest in an image, or restrict yourself to a certain type of scene (nature, sea, etc.). Also keep in mind that even the accuracy of state-of-the-art approaches to this type of problem has a lot of room for improvement.
And by the way, the choice of language or platform for this project is by far the least difficult part.
A method often used for face detection in images is through the use of a Haar classifier cascade. A classifier cascade can be trained to detect any objects, not just faces, but the ability of the classifier is highly dependent on the quality of the training data.
This paper by Viola and Jones explains how it works and how it can be optimised.
Although it is C++, you might want to take a look at the image processing libraries provided by the OpenCV project, which include code to both train and use Haar cascades. You will need a set of car and non-car images to train a system!
Some of the best attempts I've seen at this use a large database of images to help understand the image you have. These days you have Flickr, which is not only a giant corpus of images but is also tagged with meta-information about what each image shows.
Some projects that do this are documented here:
http://blogs.zdnet.com/emergingtech/?p=629
Start with analyzing the images yourself. That way you can formulate the criteria on which to match the car. And you get to define what you cannot match.
If all cars have the same background, for example, it need not be that complex. But your example states a car on a street. There may be parked cars. Should they be recognized?
If you have access to MATLAB, you could test your pattern recognition filters with specialized software like PRTools.
When I was studying (a long time ago :)) I used Khoros Cantata and found that an edge filter can simplify the image greatly.
But again, first define the conditions on the input. If you don't do that, you will not succeed, because pattern recognition is really hard (think about how long it took to crack captchas).
I did say photo, so this could be a black car with a black background. I did think of specifying the colour of the object and then, when that colour is found, tracing around it (high-level explanation). But with a black object on a black background (no contrast, in other words), it would be a very difficult task.
Better still, I've come across several sites with 3D models of cars. I could always use one of those, stick it into the 3D modelling app, and render it.
A 3D model would be easier to work with; a real-world photo is much harder. It does suck :(
If I'm reading this right... This is where AI shines.
I think the "simplest" solution would be to use a neural-network based image recognition algorithm. Unless you know that the car will look the exact same in each picture, then that's pretty much the only way.
If it IS the exact same, then you can just search for the pixel pattern, and get the bounding rectangle, and just set the image border to the inner boundary of the rectangle.
I think that you will never get good results without a real user telling the program what to do. Think of it this way: how should your program decide when there is more than one interesting object present (for example, two cars)? What if the object you want is actually the mountain in the background? What if nothing of interest is in the picture, so there is nothing to select and crop out? Etc., etc...
With that said, if you can make assumptions like "only one object will be present", then you can have a go with image recognition algorithms.
Now that I think of it: I recently attended a lecture about artificial intelligence in robots and robotic research techniques. Their research covered language interaction, evolution, and language recognition. But in order to do that, they also needed some simple image recognition algorithms to process the perceived environment. One of the tricks they used was to make a 3D plot of the image, where x and y were the normal x and y axes and the z axis was the brightness of that particular point; then they used the same technique for red-green values, and for blue-yellow. And lo and behold, they had something (relatively) easy they could use to pick out the objects from the perceived environment.
(I'm terribly sorry, but I can't find a link to the nice charts they had that showed how it all worked).
Anyway, the point is that they were not that interested in image recognition, so they created something that worked well enough, using something less advanced and thus less time-consuming; so it is possible to create something simple for this complex task.
Also, any good image editing program has some kind of magic wand that will select, with the right amount of tweaking, the object of interest you point it at; maybe it's worth your time to look into that as well.
So, basically, it means that you:
have to make some assumptions, otherwise it will fail terribly
will probably best be served with techniques from AI, and more specifically image recognition
can take a look at paint.NET and their algorithm for their magic wand
try to use the fact that a good photo will have the object of interest somewhere in the middle of the image
...but I'm not saying that this is the solution to your problem; maybe something simpler can be used.
Oh, and I will continue to look for those links, they hold some really valuable information about this topic, but I can't promise anything.
