Isolate OpenSCAD view

I occasionally do some work in OpenSCAD on Fiverr. Instead of sending 100 screenshots each day, I would like to provide my clients with a live 3D preview of the object. But I need to do this without giving the source away (in the past I have been naive enough to get scammed this way).
I want my clients to be able to look at the live 3D view without being able to see the source code.
For example, here is a possible solution I was thinking of: hardcode the contents of the .scad file into a string inside an executable, then start OpenSCAD with this string but only show the preview window, without the client being able to look at the code.
You can, in fact, use the openscad.exe to generate a preview from a .scad file:
& "C:\Program Files\OpenSCAD\openscad.exe" --preview --camera=0,0,0,45,45,0,200 test.scad -o test.png
However, there are two problems with this method: 1. it only generates a PNG, and I need my clients to be able to pan and zoom; 2. it needs a local file. I could generate a temporary file, open it with the above command and then quickly delete the file.
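For instance, a rough sketch of that temp-file idea in Python (the install path and the hard-coded model source here are just placeholders):
import os
import subprocess
import tempfile

SCAD_SOURCE = "cube([10, 10, 10]);"   # model source embedded as a string

fd, path = tempfile.mkstemp(suffix=".scad")
try:
    with os.fdopen(fd, "w") as f:
        f.write(SCAD_SOURCE)           # write the hidden source to a temp file
    subprocess.run([
        r"C:\Program Files\OpenSCAD\openscad.exe",
        "--preview", "--camera=0,0,0,45,45,0,200",
        path, "-o", "preview.png",
    ], check=True)
finally:
    os.remove(path)                    # remove the temporary .scad file again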

Consider sending an STL file for them to look at. OpenSCAD can export to STL. There are two versions of STL, ASCII and binary; OpenSCAD outputs the binary form. Your clients could view your STL file in something like viewstl.com.
Since they could 3D print from the STL, you might consider adding some watermark-type textures or features that would be difficult to remove. Another option would be to change some key dimensions enough to make the model unusable, but not so much that it looks bad.
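If it helps, the same command line shown in the question can produce the STL directly; OpenSCAD picks the export format from the output file extension:
& "C:\Program Files\OpenSCAD\openscad.exe" test.scad -o test.stl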

Related

Cropping SVG to range in Inkscape?

Say I have a range – something like a 400x400 rectangle at 60, 60 – which is dynamically generated by a separate program. I'm wondering how it's possible to crop my document to that range in the command line?
Everything I've read has suggested I'd need to add a rectangle to the document, resize the document to that rectangle (resize to selection), and then remove the rectangle.
But I'm having trouble with adding and removing that rectangle. I found the ToolRect verb, but I can't seem to find anything related to actually drawing that rectangle (or removing it).
So, am I doing this wrong, or is there just no way to add (and select) the rectangle using only the command line? Using another program is also fine, but I haven't had much luck with that (I couldn't get the Python modules installed for the only possibly helpful thing I found).
In this email discussion from 2012, someone said:
There is no way to pass parameters to verbs (with the current implementation, they don't take parameters by design).
In case they add this capability later, the required verbs to crop the page would be:
EditSelectAll
SelectionGroup
ToolRect (requires parameters, i.e. where to crop)
EditSelectAll
ObjectSetClipPath
FitCanvasToDrawing
FileVacuum
FileSaveAs (requires a parameter, so that we don't have to overwrite the original)
Since Inkscape can edit any valid SVG, I'd rather look into other available SVG libraries, like this one for Python.
If you are OK with rasterising your image, take a look at this question. Unfortunately, Inkscape ignores the --export-area option when exporting to SVG or PDF.
My – admittedly, unsatisfying – solution was to create a separate program that adds a viewBox to the SVG text.
The program I made is built into a separate part of my project, so I don't have a good command-line version, but if you plan on making one yourself, whatever XML editing library you have for your language of choice should be all you need. I used xmldom for Node.js with relative ease.
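For reference, a minimal sketch of the same viewBox idea using only Python's standard library (the crop range and file names are placeholders; the range would come from your other program):
import xml.etree.ElementTree as ET

def crop_svg(in_path, out_path, x, y, w, h):
    # Keep the default SVG namespace so the output stays a plain <svg> document.
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    tree = ET.parse(in_path)
    root = tree.getroot()
    # The viewBox maps the chosen region onto the document canvas,
    # which effectively crops the drawing to that range.
    root.set("viewBox", f"{x} {y} {w} {h}")
    root.set("width", str(w))
    root.set("height", str(h))
    tree.write(out_path)

crop_svg("drawing.svg", "cropped.svg", 60, 60, 400, 400)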

Processing: save color and depth images

I just started working with Processing, because I need to get a sequence of images, color and depth. When I save those images while drawing (I save each image as I get it), I only get around 2 fps. Is there a way to improve this?
My thought was to store the images in an ArrayList. Since there is a setup() function, I assumed there would also be a shutdown() function or something similar that gets called when I hit the Esc key or close the window (like a destructor), where I could run a loop through that list and save the images. But I can't find such a function.
I am working on a MacBook Air (2013).
If you use OpenNI/SimpleOpenNI, I recommend a nicer option: use the .oni format (which stores both the depth and RGB streams). All you have to do is:
Record to an .oni file (fast/realtime)
Read the depth/color streams back from the recorded .oni file when you need them.
To record to an .oni file you've got two options:
Use the Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay sketch to record (some explanations at the bottom of this answer)
Use OpenNI SDK's NiViewer utility which can also save/load .oni files. (You can easily install this using homebrew: brew install homebrew/science/openni2. The path in this case will be something like /usr/local/Cellar/openni2/2.2.0.33/share/openni2/tools/NiViewer)
Once you have your .oni file, you can easily read it and play it back at different rates, and access the depth/RGB streams to save them to disk.
Regarding your existing program
The frame rate drops because it's encoding and writing two images to disk per frame in the same thread. You can improve this by:
saving to an uncompressed format (like tiff)
threading the image save operation (see the bottom of this answer for some ideas)
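To illustrate the threading idea, here is a rough sketch of the pattern in Python (in Processing itself you would hand the frame to a Java Thread, but the structure is the same): the draw loop only enqueues frames, and a background worker does the slow disk writes.
import queue
import threading

save_queue = queue.Queue()

def save_worker():
    while True:
        item = save_queue.get()
        if item is None:                  # sentinel: stop the worker
            break
        filename, data = item
        with open(filename, "wb") as f:   # write the already-encoded bytes
            f.write(data)
        save_queue.task_done()

threading.Thread(target=save_worker, daemon=True).start()

# In the capture loop: hand the frame off and keep drawing.
# save_queue.put(("frame_0001.tif", frame_bytes))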

Any CLI tool to perform 3d texture mapping on the fly

I'm currently looking for a way to create a 'configurator' for an upholsterer, similar to http://digitaldraping.com/configurator/furniture-sofa/?Cushions_Plain-Cream.png,Sofa_Stripe-Orange.png – you select your fabrics and they are 'drawn' on the sofa automatically.
Unfortunately, all the sites I've looked at seem to use pre-rendered transparent PNGs that are overlaid on each other to build up the full picture. The problem here is that we've figured out we'd require over 120,000 different images to cover all models, fabrics etc.!
I've looked at a few 3D texture tools such as http://www.arahne.si/products/arah-drape.html, hoping that one of them would have a CLI option where you give it a pre-created wireframe and a fabric to overlay, and it generates the required image on the fly, but so far everything seems to require real-time use of the GUI.
So, is there a CLI tool that would do what I'm after, or can anyone suggest a way to manipulate the GUI automatically? (from a tech point of view, I'm comfortable with C, Bash, Python or PHP as a solution!)
Thanks!
ArahDrape 2.2 can now work from a command line without any GUI interface. You can also call ArahDrape as a C library. In this way, it can be used in a web server to create texture mapped images on the fly. The command line options are explained below.
ArahDrape 2.2j command line version, ©2015 Arahne
usage:
adCommand -o /tmp/outputImage.png -tN /home/user/texture.png [-hidemodel] [-divide 2] [-filterPNG] [-compressPNG 2] [-m /home/user/model.png] -owner name -activation 174b3cfb49e9 /home/user/project.drape
Input and output images can have png, .tif or .jpg extensions
-o output_image_file
-tN texture_image_file [N goes from 0 to 199]
-hidemodel will render all areas not in region as white
-divide N [N goes from 2 to 5] divide resulting image pixel size
-filterPNG if you do not filter it, rendering is faster
-compressPNG N [N goes from 0 to 9] lower number saves faster, but bigger files
-m model_image_file use this if you want to replace model image from the project; must have same pixel size
-owner owner_name pass the given owner name
-activation activation_code pass the given activation code
last parameter should be ArahDrape project file
All files should be entered with full path.
If you need spaces in filenames, use quotes "" around the filename.
If you provide only Owner name, without activation code, program returns registration code.
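As an illustration, here is roughly how the command line above could be driven from a Python web backend (the paths, texture slot, owner name and activation code are placeholders):
import subprocess

def drape(texture_path, output_path):
    subprocess.run([
        "adCommand",
        "-o", output_path,
        "-t0", texture_path,               # replace the texture in slot 0
        "-compressPNG", "2",
        "-owner", "YOUR_OWNER_NAME",
        "-activation", "YOUR_ACTIVATION_CODE",
        "/home/user/project.drape",        # ArahDrape project file goes last
    ], check=True)

drape("/home/user/texture.png", "/tmp/outputImage.png")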
ArahDrape also supports batch export.
Open the ArahDrape project, click on the texture you wish to replace, put all your textures in a directory, and select Textures > Browse textures from the menu; as you click each texture to load it, the program will save the draped picture. If you have thousands of images, use the keyboard shortcut = and the program will do them all automatically.
Alpha channel transparency is supported when loading model images or textures and when saving the draped images, as long as you use PNG or TIFF.
Please check this video to see how ArahDrape works in batch mode.
We (http://digitaldraping.com/) can do just what you are asking. We have two options: pre-creating the images, or rendering a meshed image on the fly. Just get in touch if you still need this solution.

Very large images in web browser

We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe they are, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is whether a solution exists which will allow us to view the image without a conversion beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50mb file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that, because it will place a significant load on your server.
Alternatively, you might use something like google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you need to generate a zoomable image format, as mentioned here, and then follow the getting started guide here.
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 meg files across the line when someone only wants to get an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels, the server opens the larger image, creates a smaller image that fits the desired view, and sends that back to the web browser.
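A rough sketch of that server-side step using PIL (mentioned in another answer here); the source file name is a placeholder and the web framework plumbing is left out:
from io import BytesIO
from PIL import Image

def render_view(x, y, w, h, out_w, out_h):
    with Image.open("large_source.tif") as img:
        # Crop the requested region, then scale it to the requested output size.
        view = img.crop((x, y, x + w, y + h)).resize((out_w, out_h))
        buf = BytesIO()
        view.save(buf, format="PNG")   # PNG keeps this step lossless
        return buf.getvalue()          # bytes to send back to the browser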
As far as compression goes, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are the exact values you had prior to compression). So consider changing the compression you are using, and don't just rely on the default settings in an image editor.

How to handle images during software development

For software development one often needs images. But when I start working on an image, I very quickly end up with dozens of versions, like so:
Start with a nice large scale image, let's say a photo from my camera(x.nef)
I do some adjustments on exposure correction and white balance, and convert it to x.jpg
start to add some little stuff by copying in various pieces from two other images (a.jpg, b.jpg), resulting in a layered image x.pdn
now I scale it to the required size and save it as x_small.jpg
By now I have 6 different image files floating around, and nobody but me knows the process behind them.
So the question is: How do you handle images in the development process?
Edit:
Thanks for all the great input. I combined the various answers into my own personal best answer, but I accepted jiinx0r's answer because it contained the idea I was missing: applying a naming convention for the kind of changes done.
You could just put your images under source control.
That would handle the revision history and notes. If you really need to keep all the transitional versions of the image around and don't want that in your project folder, most source control trees have a 'tools' area for that type of thing.
EDIT:
If what you're after is keeping track of the various sizes (thumbnails, etc), I would go with convention over configuration and implement a uniform file (or directory) naming system.
For instance, I would probably have separate folders for the 100px and 500px versions of the same image. Or maybe I would put them in the same folder with a special naming convention: logo-100.jpg and logo-500.jpg. Either way is probably fine, just make a decision and be sure to stay consistent throughout the project.
One last thought: some folks like to include a ton of metadata in the file name. To me it depends on the scope of your operation and your individual needs. I would personally default to a less-is-more approach: if you're thinking about investing in maintaining something like that (or creating a tool to do it for you), make sure it's actually a net gain of time and not just something for your OCD to fiddle with!
As developers, we do tend to make glaring mistakes in this area. I know I've been guilty a bunch of times.
File naming should be handled via a naming convention:
{name}-{mod type}-{size}-{version}-{create date}.png
{name}-final.png
e.g.
file-white_balance-800x600-v01-20090831.png
file-white_balance-800x600-v02-20090831.png
file-final.jpg
The real point is to create an agreed-upon convention that people see the value in following (however simple or complex is necessary for your group). In my organization we do this for input/output data files, images, scripts, etc. (not necessarily the same convention for all, but each follows something that was agreed upon).
Hope that helps.
I try hard to have only a single "source" image and then pour all the changes into a short Python script or some other piece of code, so that I can recreate the effects and/or adjust them at any time later.
The original image is saved either as PNG or TIFF (to avoid quality loss caused by saving) and converted into the final type as the very last step. That's also when I do the scaling and other lossy operations.
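For example, such a script might look roughly like this with Pillow (the concrete adjustments, sizes and file names are just placeholders):
from PIL import Image, ImageEnhance

src = Image.open("x_source.png")                    # the single lossless source

img = ImageEnhance.Brightness(src).enhance(1.1)     # exposure-style tweak
img = ImageEnhance.Color(img).enhance(1.05)         # rough white-balance tweak

img.thumbnail((800, 600))                           # scale as the very last step
img.convert("RGB").save("x_small.jpg", quality=90)  # the lossy conversion happens once, here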
We developed a downloadable game and a web game with a few hundred graphic assets, most of which were stored as PSD files during development. We needed JPG and PNG versions for the release version of the game, and lower-quality JPG and PNG versions for the web version.
We checked the originals into source control to handle versioning.
In order to remain flexible and able to alter the original without having to re-pack the image twice after each update, we had a Perl / ImageMagick script that would update the packed images automatically.
The file name remained the same, but the compressed images would go to different directories, depending on which version of the game each image was packed for.
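A comparable script in Python, standing in for the Perl original (which is not shown here; the directory names and quality value are made up), might call ImageMagick along these lines:
import subprocess
from pathlib import Path

Path("release").mkdir(exist_ok=True)
Path("web").mkdir(exist_ok=True)

for psd in Path("originals").glob("*.psd"):
    # "[0]" asks ImageMagick for the flattened composite of the layered PSD.
    subprocess.run(["convert", f"{psd}[0]", f"release/{psd.stem}.png"],
                   check=True)
    subprocess.run(["convert", f"{psd}[0]", "-quality", "60",
                    f"web/{psd.stem}.jpg"], check=True)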
We typically have the image title and resolution appended together in the name.
myimage_800_600.png
This way all of the like images are grouped together in the folder view, and you can easily select the size you want without having to wonder what "medium" means.
I agree that source control might be your best bet for this. However, conventional source control doesn't really fit images.
Have you looked at http://www.alienbrain.com ?
It's commercial, but it may be something that could help. I was also looking and saw something about Photoshop or ImageReady having version control in it too. You could look into that.
I put all the bits and pieces together from the various answers, for a system that fits my needs:
Images go into source control. This includes images of intermediate steps.
If multiple images are needed based on one source image, but with different transformations, this can be integrated into the automatic build (scaling, compressing, tinting); see the sketch after this list.
Based on a naming convention or folder structure, files can be categorized into: source (e.g. the original photo), intermediate (for the various processing steps), and base (an image that is actually used in the software, possibly after automatic processing as in step 2).
For the processing steps, a naming convention should ensure that the kind of processing can be recognised, as well as the order of the steps, so one can move from the source image through the various processing steps to the final image.
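A minimal sketch of such a build step in Python with Pillow (the folder names, target size and output format are assumptions):
from pathlib import Path
from PIL import Image

Path("images/base").mkdir(parents=True, exist_ok=True)

for src in Path("images/source").glob("*.png"):
    with Image.open(src) as img:
        img.thumbnail((800, 600))                         # scaling
        out = Path("images/base") / (src.stem + "_small.jpg")
        img.convert("RGB").save(out, quality=85)          # compressing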
