How to Calculate the Accurate Paint Area in Revit or the Revit API?

I have completed the 3D model of the building in Revit, and now I want to calculate the area of the building that needs to be painted, in order to estimate the project price.
However, I have spent a lot of time trying several methods (Create Parts, Split Face, Paint) and could not get the desired result.
Next, I will describe the problem I encountered in detail:
① The picture shows the beam system of the building. Using the "Paint" function, I add yellow paint to one of the surfaces; its area is 7.15 m * 0.9 m = 6.435 ㎡.
② Using "Isolate Element", we can see the yellow paint on this surface (including the part that intersects other elements). The material schedule reports the yellow paint area as 6.44 ㎡.
③ In reality, since this beam intersects other beams, the intersecting part cannot be painted during construction, so the calculated 6.44 ㎡ is larger than the actual paint area.
④ To illustrate the problem better, I added two structural columns occupying the same volume as the intersections (they do not exist in reality).
⑤ Now, using "Isolate Element", we can see two gaps in the painted surface (each 0.2 m * 0.7 m), so the remaining area is 7.15 * 0.9 - 0.2 * 0.7 * 2 = 6.155 ㎡. The material schedule likewise reports the yellow paint area as 6.16 ㎡. This is the real paint area in actual construction.
However, this workaround is clumsy and it corrupts the true geometry of the building model, so I would like to ask: is there a more sensible approach? I can program, and add-in development against the Revit API is acceptable to me, as long as the problem can be solved.
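One Revit API direction that might avoid the dummy elements (a minimal, untested sketch: subtract the solids of intersecting elements from the beam's solid in memory and measure the matching faces of the result; `beam`, `painted_normal` and the pyRevit-style `__revit__` entry point are assumptions, and the boolean operation may need a try/except in practice since it can fail on coincident geometry):

# Minimal sketch (IronPython under pyRevit/RevitPythonShell)
from Autodesk.Revit.DB import (FilteredElementCollector, Options, Solid,
                               ElementIntersectsElementFilter,
                               BooleanOperationsUtils, BooleanOperationsType)

doc = __revit__.ActiveUIDocument.Document

def first_solid(elem):
    # first non-empty solid of the element's geometry
    for g in elem.get_Geometry(Options()):
        if isinstance(g, Solid) and g.Volume > 0:
            return g

def net_painted_area_m2(beam, painted_normal):
    solid = first_solid(beam)
    # every element whose geometry intersects the beam
    others = (FilteredElementCollector(doc)
              .WherePasses(ElementIntersectsElementFilter(beam))
              .ToElements())
    for other in others:
        if other.Id == beam.Id:
            continue
        s = first_solid(other)
        if s is not None:
            # cut the intersecting volume away, in memory only
            solid = BooleanOperationsUtils.ExecuteBooleanOperation(
                solid, s, BooleanOperationsType.Difference)
    # sum the areas of planar faces matching the painted face's normal
    area_ft2 = sum(f.Area for f in solid.Faces
                   if hasattr(f, 'FaceNormal')
                   and f.FaceNormal.IsAlmostEqualTo(painted_normal))
    return area_ft2 * 0.09290304  # Revit internal ft^2 -> m^2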

Related

Algorithms for selecting 3D elements (vertices, edges, faces) under a 2D rectangle region

I'm trying to write my own CAD program, and it's pretty important that I give the user the ability to select vertices/edges/faces (triangles) of interest by drawing a box or polygon on the 2D screen, and then highlight whatever is underneath in the 3D view (both with and without back faces).
How is this done usually? Is there any open-source example I can look at? What's generally the process for this?
This is especially hard when you are trying to handle 1 million+ triangles.
There are many ways to do this; here are the two most often used:
Ray picking
First see:
OpenGL 3D-raypicking with high poly meshes
The idea is to cast a ray (or rays) from the camera focal point in the mouse (or cursor) direction; whatever the ray hits is selected. The link above exploits OpenGL rendering, where you can do this very easily and fast (almost for free) and the result is pixel perfect. To support a selection box/polygon, you need to read all of its pixels back on the CPU side and convert them into a list of selected entities. This is slightly slower but can still be done very fast (regardless of the complexity of the rendered scene). This approach is O(1); if you do the same on the CPU it will be much, much slower, with complexity O(n) where n is the total number of entities (unless a BVH or octree is used). This method, however, selects only what is visible (no back faces or objects behind).
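A sketch of the CPU-side readback this describes (assuming per-entity IDs have already been rendered into an offscreen buffer, e.g. read back with glReadPixels into a numpy array; the id_buffer name and the 0-means-background convention are assumptions):

import numpy as np

def ids_under_rect(id_buffer, x0, y0, x1, y1):
    # id_buffer: HxW array, one entity ID per pixel, 0 = background
    region = id_buffer[min(y0, y1):max(y0, y1) + 1,
                       min(x0, x1):max(x0, x1) + 1]
    ids = np.unique(region)        # unique entity IDs under the rectangle
    return ids[ids != 0]           # drop the background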
Geometry tests
Basically, your 2D rectangle slices the perspective 3D frustum into a smaller one, and whatever is inside or intersecting it should be selected. You can compute this with geometry on the CPU side in the form of tests (object inside cuboid or box), something like this:
Cone to box collision
The complexity is also O(n) unless a BVH or octree is used. This method selects all objects (even invisible ones).
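A sketch of such a test (Python/numpy for brevity): the rectangle's four corner rays from the camera span the side planes of the sub-frustum, and a bounding sphere is tested against them. The corner ordering, the inward-normal convention, and the use of only the four side planes (no near/far) are assumptions; like most frustum tests it is conservative near the corners.

import numpy as np

def subfrustum_planes(corner_dirs):
    # corner_dirs: 4 unit vectors through the rectangle corners, ordered
    # so consecutive cross products give inward-pointing normals;
    # every plane passes through the camera position
    planes = []
    for i in range(4):
        n = np.cross(corner_dirs[i], corner_dirs[(i + 1) % 4])
        planes.append(n / np.linalg.norm(n))
    return planes

def sphere_selected(center, radius, cam_pos, planes, crossing=True):
    for n in planes:
        d = np.dot(n, center - cam_pos)   # signed distance to the plane
        if crossing and d < -radius:      # fully outside one plane
            return False
        if not crossing and d < radius:   # must be fully inside
            return False
    return True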
Also I think this might be interesting reading for you:
simple Drag&Drop in C++
It shows a simple C++ app architecture capable of placing and moving objects in 2D. It's a basic starting point for a CAD-like app. For 3D you just add the matrix math and/or editing controls...
Also a hint for selecting in CAD/CAM software (IIRC AutoCAD started with this): if the selection box was dragged left-to-right and top-to-bottom, you select all objects that are fully inside or intersecting, and if the selection box was dragged in the reverse direction, you select only what is fully inside. This allows for more comfortable editing.
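A trivial sketch of that rule, following the convention described above (which drag direction maps to which mode is a UI choice):

def selection_mode(drag_start, drag_end):
    # left-to-right drag: select fully inside or intersecting;
    # right-to-left drag: select only what is fully inside
    if drag_end[0] >= drag_start[0]:
        return "inside_or_intersecting"
    return "fully_inside"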

How to identify if a set of lines is similar to a shape

Currently I have a program that lets the user paint by capturing the mouse position every 0.05 seconds and drawing a line from each point to the next. With that setup I am looking for a way to identify shapes like a circle, a rectangle or the letter 'P'.
My current algorithm divides the screen into sections, marks the sections containing points recorded by the player, builds a matrix from the marked sections, and then compares that matrix with every shape matrix.
This lacks any kind of support for rotation, size or position. Also, tuning the threshold is tricky and in most cases it returns false matches.
I need an algorithm that can identify, for example, a 'P' as a 'P'.
Note: My current application runs on a C++ framework, so any libraries or tools are welcome, but I am mainly interested in the algorithm behind them.
Edit: After thinking the problem over, I have dropped the fixed grid on the screen; instead I capture the points and shift and scale them so the shape fits on a grid, and over that grid compare with the known shapes.
Picture of the process
This solves the position and size problems while being fast enough. Rotating the input and then resizing in a loop may also solve the rotation problem (though it seems that would have a high cost and would not be very reliable).
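A sketch of the normalize-to-grid step described in the edit (Python/numpy for brevity, though the application is C++; the grid size and the cell-matching score are arbitrary choices):

import numpy as np

def to_grid(points, n=16):
    pts = np.asarray(points, dtype=float)
    pts -= pts.min(axis=0)                    # shift shape to the origin
    extent = pts.max(axis=0)
    pts /= np.where(extent > 0, extent, 1)    # scale into the unit square
    cells = np.minimum((pts * n).astype(int), n - 1)
    grid = np.zeros((n, n), dtype=bool)
    grid[cells[:, 1], cells[:, 0]] = True     # rasterize the stroke
    return grid

def similarity(grid, template):
    # fraction of matching cells; IoU or distance transforms work too
    return np.mean(grid == template)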
I would gladly welcome alternative methods of handling shape comparison or the rotation.

Future prospects for improvement of depth data on Project Tango tablet

I am interested in using the Project Tango tablet for 3D reconstruction using arbitrary point features. In the current SDK version, we seem to have access to the following data.
A 1280 x 720 RGB image.
A point cloud with 0 to ~10,000 points, depending on the environment; this seems to average between 3,000 and 6,000 in most environments.
What I really want is to be able to identify a 3D point for key points within an image. Therefore, it makes sense to project depth into the image plane. I have done this, and I get something like this:
The problem with this process is that the depth points are sparse compared to the RGB pixels. So I took it a step further and performed interpolation between the depth points. First, I did a Delaunay triangulation, and once I got a good triangulation, I interpolated between the 3 points of each facet and got a decent, fairly uniform depth image. Here are the zones where the interpolated depth is valid, superimposed on the RGB image.
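For reference, scipy's LinearNDInterpolator performs exactly this Delaunay-plus-barycentric interpolation, so the step can be sketched as follows (u, v, z are the pixel coordinates and depths of the projected points; the names are placeholders):

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def densify_depth(u, v, z, width=1280, height=720):
    # Delaunay triangulation + linear interpolation on each facet
    interp = LinearNDInterpolator(np.column_stack([u, v]), z)
    uu, vv = np.meshgrid(np.arange(width), np.arange(height))
    return interp(uu, vv)  # HxW depth image, NaN outside the triangulation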
Now, given the camera model, it's possible to project depth back into Cartesian coordinates at any point on the depth image (since the depth image was made such that each pixel corresponds to a point on the original RGB image, and we have the camera parameters of the RGB camera). However, if you look at the triangulation image and compare it to the original RGB image, you can see that depth is valid for all of the uninteresting points in the image: blank, featureless planes mostly. This isn't just true for this single set of images; it's a trend I'm seeing for the sensor. If a person stands in front of the sensor, for example, there are very few depth points within their silhouette.
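The back-projection itself is the standard pinhole relation X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy; as a sketch (the intrinsics fx, fy, cx, cy come from the RGB camera calibration):

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    # depth: HxW depth image aligned to the RGB camera (NaN = invalid)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])   # HxWx3 Cartesian coordinates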
As a result of this characteristic of the sensor, if I perform visual feature extraction on the image, most of the areas with corners or interesting textures fall in areas without associated depth information. Just an example: I detected 1000 SIFT keypoints in an RGB image from an Xtion sensor, and 960 of those had valid depth values. If I do the same thing with this system, I get around 80 keypoints with valid depth. At the moment, this level of performance is unacceptable for my purposes.
I can guess at the underlying reasons for this: it seems like some sort of plane extraction algorithm is being used to get depth points, whereas Primesense/DepthSense sensors are using something more sophisticated.
So anyway, my main question here is: can we expect any improvement in the depth data at a later point in time, through improved RGB-IR image processing algorithms? Or is this an inherent limit of the current sensor?
I am from the Project Tango team at Google. I am sorry you are experiencing trouble with depth on the device. Just so we are sure your device is in good working condition, could you please test the depth performance against a flat wall? Instructions are below:
https://developers.google.com/project-tango/hardware/depth-test
Even with a device in good working condition, the depth library is known to return sparse depth points on scenes with low IR reflectance objects, small sized objects, high dynamic range scenes, surfaces at certain angles and objects at distances larger than ~4m. While some of these are inherent limitations in the depth solution, we are working with the depth solution provider to bring improvements wherever possible.
Attached is an image of a typical conference room scene and the corresponding point cloud. As you can see, no depth points are returned from (1) the laptop screen (low reflectance), (2) tabletop objects such as Post-its and the pencil holder (small object size), (3) large portions of the table (surface at an angle), and (4) the room corner at the far right (distance > 4 m).
But as you move the device around, you will start getting depth returns from those areas. Accumulating depth points is a must to get denser point clouds.
Please also keep us posted on your findings at project-tango-hardware-support@google.com
In my very basic initial experiments you are correct with respect to the depth information returned from the visual field; however, the return of surface points is anything but constant. I find that as I move the device I get major shifts in where depth information is returned, i.e. there is a lot of transitory opacity in the image with respect to depth data, probably due to the characteristics of the surfaces.
So while no single return frame is enough, the real question seems to be the construction of a larger model (point clouds to start with, possibly voxel spaces as one scales up) that brings successive scans into a common model. It is reminiscent of synthetic-aperture algorithms in spirit, but the letters in the equations come from a whole different set of laws.
In short, I think the more interesting approach is to synthesize a more complete model by successive accumulation of point cloud data. For this to work, the device team has to have their dead reckoning on the money at whatever scale this is done. This also addresses an issue that no sensor improvement can fix: even a perfect visual sensor does nothing to help you relate the sides of an object to the front of the object.
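The accumulation step itself is simple once the per-frame poses are trusted; a sketch (assuming 4x4 world-from-device pose matrices, e.g. from the device's motion tracking):

import numpy as np

def accumulate(clouds, poses):
    # clouds: list of Nx3 arrays in the device frame;
    # poses: matching list of 4x4 world-from-device transforms
    world = []
    for pts, T in zip(clouds, poses):
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        world.append((homog @ T.T)[:, :3])   # into the common world frame
    return np.vstack(world)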

Shading mask algorithm for radiation calculations

I am working on software (Ruby, SketchUp) to calculate the radiation (sun, sky and surrounding buildings) within an urban development at pedestrian level. The final goal is to create a contour map showing the level of total radiation. By total radiation I mean shortwave (light) and longwave (heat). (To give you an idea: http://www.iaacblog.com/maa2011-2012-digitaltools/files/2012/01/Insolation-Analysis-All-Year.jpg)
I know there is existing software that does this, but I need to write my own, as this calculation is only part of a more complex workflow.
The (obvious) pseudocode is the following:
Select and mesh the surface for analysis
For each point of the mesh:
    For each of the n (see below) precalculated rays in the upper hemisphere:
        Cast the ray and check whether it is in shade
        If in shade => extract properties from the intersected surface
        If not in shade => flag it
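A runnable skeleton of this loop (Python for brevity, though the project is Ruby/SketchUp; intersect(origin, direction) is a placeholder for the scene's ray test, e.g. SketchUp's model.raytest, returning the hit surface or None):

import math

def shading_mask(mesh_points, intersect, n_az=36, n_tilt=9):
    # constant azimuth/tilt subdivision of the sky dome (see "n rays" below)
    results = {}
    for p in mesh_points:                      # p is an (x, y, z) tuple
        hits = []
        for i in range(n_az):
            az = 2 * math.pi * (i + 0.5) / n_az
            for j in range(n_tilt):
                tilt = 0.5 * math.pi * (j + 0.5) / n_tilt
                d = (math.cos(tilt) * math.cos(az),
                     math.cos(tilt) * math.sin(az),
                     math.sin(tilt))           # upper hemisphere only
                hits.append(intersect(p, d))   # None => ray sees the sky
        results[p] = hits
    return results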
The approach above is brute force, but it is the only one I can think of. The calculation time increases with the fourth power of the accuracy (Dx, Dy, Dazimuth, Dtilt). I know that software like Radiance uses a Monte Carlo approach to reduce the number of rays.
As you can imagine, the accuracy of the calculation for a specific point of the mesh depends strongly on the accuracy of the skydome subdivision. Similarly, the accuracy on the surface depends on the coarseness of the mesh.
I was thinking of a different approach using adaptive refinement based on the results of the calculations. The refinement could work on both the analyzed surface and the skydome: if the results for two adjacent points differ by more than a threshold value, then a refinement is performed. This is commonly done in fluid simulation, but I could not find anything about light simulation.
Also, I wonder whether there are algorithms, from computer graphics for example, that would minimize the number of calculations. For example: check the maximum height of the surroundings so as to exclude certain parts of the skydome for certain points.
I don't need extreme accuracy as I am not doing rendering. My priority is speed at this moment.
Any suggestions on the approach?
Thanks
n rays
At the moment I subdivide the sky by constant azimuth and tilt steps; this causes irregular solid angles. There are other subdivisions (e.g. Tregenza) that maintain a constant solid angle.
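The irregularity is easy to quantify: a dome patch spanning [az1, az2] x [alt1, alt2] has solid angle omega = (az2 - az1)(sin alt2 - sin alt1), so with constant 10-degree steps a horizon patch is roughly an order of magnitude larger than a zenith patch:

import math

def patch_solid_angle(az1, az2, alt1, alt2):
    # integral of cos(altitude) over the patch
    return (az2 - az1) * (math.sin(alt2) - math.sin(alt1))

step = math.radians(10)
print(patch_solid_angle(0, step, 0, step))                          # ~0.0303 sr, horizon
print(patch_solid_angle(0, step, math.pi / 2 - step, math.pi / 2))  # ~0.0027 sr, zenith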
EDIT: Response to the great questions from Spektre
Time frame. I run one simulation for each hour of the year. The weather data is extracted from an EPW weather file; it contains, for each hour, solar altitude and azimuth, direct radiation, diffuse radiation and cloudiness (for atmospheric longwave diffuse). My algorithm calculates the shadow mask separately, then uses this shadow mask to calculate the radiation on the surface (and on a typical pedestrian) for each hour of the year. It is in this second step that I add the actual radiation; in the first step I just gather information on the geometry and properties of the various surfaces.
Sun paths. No, I don't. See point 1.
Include reflections from buildings? Not at the moment, but I plan to include them as an overall diffuse shortwave reflection based on the sky view factor. I only consider shortwave reflection from the ground for now.
Include heat dissipation from buildings? Absolutely yes. That is the reason I wrote this code myself. Here in Dubai this is key, as building surfaces get very, very hot.
Surface albedo? Yes. In SketchUp I have associated a dictionary with every surface, and in this dictionary I include all the surface properties: temperature, emissivity, etc. At the moment the temperatures are fixed (ambient temperature if not assigned), but I plan, in the future, to combine this with the results of a dynamic building thermal simulation that already calculates all the surface temperatures.
Map resolution. The resolution is chosen by the user and the mesh is generated by the algorithm. In terms of scale, I use this for masterplans; the scale goes from 100 m x 100 m up to 2000 m x 2000 m. I usually use a minimum resolution of 2 m; the limits are memory and simulation time. I also have the option to refine specific areas with a much finer mesh, for example areas with restaurants or other amenities.
Framerate. I do not need to make an animation. Results are exported in a VTK file and visualized in Paraview and animated there just to show off during presentations :-)
Heat and light. Yes, shortwave and longwave are handled separately; see point 4. The geolocation is only used to select the correct weather file. I do not calculate all the radiation components; the weather files I use contain measured data. They are not great, but good enough for now.
https://www.lucidchart.com/documents/view/5ca88b92-9a21-40a8-aa3a-0ff7a5968142/0
visible light
For a relatively flat, global base-ground light map I would use projected shadow-texture techniques instead of ray-traced angular integration. It is way faster, with almost the same result. This will not work on non-flat ground (many bigger bumps that cast large shadows and also make the active light-absorption area anisotropic), but urban areas are usually flat enough (inclination does not matter), so the technique is as follows:
camera and viewport
The ground map is the render target, so set the viewpoint underground, looking up along the Sun direction. The resolution is at least your map resolution, and there is no perspective (use an orthographic projection).
rendering light map 1st pass
First clear the map with the full radiation (direct + diffuse) (light blue), then render the buildings/objects with diffuse radiation only (shadow). This produces the base map, without reflections or soft shadows, in the magenta render target.
rendering light map 2nd pass
Now you need to add the reflections from building faces (walls). For that I would take every outdoor face of a building that faces the Sun or is heated enough, compute its reflection points onto the light map, and render the reflection directly into the map.
In this part you can add ray tracing for the vertices only to make it more precise, and also include multiple reflections (but in that case do not forget to add scattering).
project target screen to destination radiation map
Just project the magenta render-target image onto the ground plane (green). It is only a simple linear affine transform...
post processing
You can add soft shadows by blurring/smoothing the light map. To make it more precise you can tag each pixel as shadow or wall; actual walls are just pixels at 0 m height above ground, so you can use the Z-buffer values directly for this. The level of blurring depends on the scattering properties of the air, and of course pixels at 0 m ground height are not blurred at all.
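A sketch of such a masked blur (scipy; the wall mask would come from the 0 m height / Z-buffer test described above):

import numpy as np
from scipy.ndimage import gaussian_filter

def soften(light_map, wall_mask, sigma=2.0):
    blurred = gaussian_filter(light_map, sigma)
    return np.where(wall_mask, light_map, blurred)   # keep walls sharp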
IR
This can be done in a similar way, but temperature behaves a bit differently, so I would make several layers of the scene at a few altitudes above ground, forming a volume render, and then post-process the energy transfers between pixels and layers. Also do not forget to add the cooling effect of green plants and water evaporation.
I do not have enough experience in this field to make more suggestions; I am more used to temperature maps for very high temperature variances in specific conditions and materials, not outdoor conditions.
PS: I forgot that the albedo for IR and visible light is very different for many materials, especially aluminium and some wall paints.

Best approach for specific Object/Image Recognition task?

I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all. So no fill, just lines. The rectangle has a fixed length-to-width ratio, but it could be any size and at any rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image: the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners due to noise or objects in the background. That reduces the problem to finding a set of 9 points in the right configuration out of a given set of points.
My first try would be to look at each corner point A, then iterate over the points B close to A. If I assume that (e.g.) A is the upper-left corner of the rectangle and B is the lower-right corner, I can easily calculate where I would expect the other corner points to be in the image. I'd use a nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol is, if it is present in the image; see the sketch below.
You have to try whether that is good enough for your application. If you have too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check whether there are lines (i.e. high contrast in the right direction) where you would expect them, and whether there is low contrast where the pattern has no lines. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings of the object you're looking for.
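A sketch of this matching loop (Python with scipy's cKDTree standing in for FLANN; representing points as complex numbers makes the two-point similarity transform a one-liner, which also covers the rotation the question allows):

import numpy as np
from scipy.spatial import cKDTree

def find_symbol(corners, model, tol=3.0):
    # corners: Nx2 detected points; model: 9x2 template points, with
    # model[0] and model[1] being two opposite rectangle corners.
    pts = corners[:, 0] + 1j * corners[:, 1]
    mod = model[:, 0] + 1j * model[:, 1]
    tree = cKDTree(corners)
    for a in pts:
        for b in pts:
            if abs(b - a) < 1e-9:
                continue
            # similarity transform (rotation + uniform scale) mapping
            # mod[0] -> a and mod[1] -> b; valid since there is no
            # perspective distortion
            expected = a + (b - a) * (mod - mod[0]) / (mod[1] - mod[0])
            exp_xy = np.column_stack([expected.real, expected.imag])
            dist, _ = tree.query(exp_xy)       # nearest detected corner
            if np.all(dist < tol):
                return exp_xy                  # candidate symbol location
    return None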
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The Generalized Hough Transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
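A minimal translation-only sketch of the GHT, to show the R-table mechanics (the full method adds rotation and scale as extra accumulator dimensions; template and image are assumed to be binary edge maps, e.g. from Canny):

import numpy as np

def build_r_table(template, n_bins=64):
    gy, gx = np.gradient(template.astype(float))
    ys, xs = np.nonzero(template)
    ref = np.array([ys.mean(), xs.mean()])       # reference point
    table = [[] for _ in range(n_bins)]
    for y, x in zip(ys, xs):
        phi = np.arctan2(gy[y, x], gx[y, x])     # coarse edge orientation
        b = int((phi + np.pi) / (2 * np.pi) * (n_bins - 1))
        table[b].append(ref - (y, x))            # displacement to ref
    return table

def vote(image, table, n_bins=64):
    gy, gx = np.gradient(image.astype(float))
    acc = np.zeros(image.shape)
    for y, x in zip(*np.nonzero(image)):
        phi = np.arctan2(gy[y, x], gx[y, x])
        b = int((phi + np.pi) / (2 * np.pi) * (n_bins - 1))
        for dy, dx in table[b]:
            ry, rx = int(y + dy), int(x + dx)
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return acc   # peak = most likely reference-point location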
You can use local features to identify the object in the image; see the feature detection wiki.
For example, you can compute features on a reference image that contains only the object you're looking for and save the results, say, to a plain text file. After that you can search for the object just by comparing features newly computed on images of complex scenes containing the object with the reference ones.
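A sketch of that workflow with OpenCV's ORB features (any local feature would follow the same describe-then-match pattern; the file names are placeholders):

import cv2

orb = cv2.ORB_create()
# descriptors of the object alone, computed once and saved
ref_img = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)
kp_ref, des_ref = orb.detectAndCompute(ref_img, None)

# descriptors of a scene that may contain the object
scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
kp_sc, des_sc = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_sc)
# many good matches clustered in one region => object likely present
print(len(matches), 'matches')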
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey
