CAD secondary development - AutoCAD

I've been learning about CAD secondary development. I'm using the Teigha DLL to parse DWG files, but I can't find any API that returns a drawing's xrefs. How can I get a drawing's xrefs?
I tried getting each xref's path and parsing it as a normal drawing (saving the xref data into the main drawing, expecting the result to become a single drawing), but I got nothing. Why?
Then I realized that even with all the data (the main drawing plus the xrefs), I still can't get the correct result, because the xrefs as they appear in the main drawing aren't a simple copy-paste of the source files; I can't pick up those changes, I can only read the original xrefs.
So what can I do?

In Teigha, use the OdDbXrefGraph::getFrom method to obtain the graph of xrefs. After that, you can walk the graph along these lines.
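For example, a minimal C++ sketch against the Teigha (ODA) Drawings API; if you're using the .NET wrapper, look for the equivalent XrefGraph class in your version's docs, as the names below are from the C++ SDK:

#include "DbXrefGraph.h"

void dumpXrefs(OdDbDatabase* pDb)
{
    OdDbXrefGraph graph;
    OdDbXrefGraph::getFrom(pDb, graph);   // build the xref graph for this drawing

    // The host drawing appears as the root node; the other nodes are the xrefs.
    for (int i = 0; i < graph.numNodes(); ++i)
    {
        OdDbXrefGraphNode* pNode = graph.xrefNode(i);
        odPrintConsoleString(L"Xref: %ls\n", pNode->name().c_str());

        // Each node also exposes the xref's own database, which you can
        // walk just like the main drawing's database.
        OdDbDatabase* pXrefDb = pNode->database();
    }
}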


TM1py upload of different cubes

I'm very new to TM1 and have to implement a new function in my code.
Is there any way to undo your last action?
For better understanding, I'll give an example:
I have 8 different cubes and I upload them one after another.
If one cube cannot be uploaded, none of the others should be uploaded either.
Every cube that has already been uploaded should be reset to its previous state.
Is there a way to implement this?
You have to nest your 8 loading processes under a master process (the others are slaves). If some condition makes one of your cubes unloadable, call the ProcessError function. In the master process, fetch the result of the execution of each slave process; if one is in error, call ProcessError in the master as well. That way the whole chain won't be committed.
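A rough TI sketch of the master process (ExecuteProcess, ProcessExitNormal and ProcessError are standard TI functions; the process names are made up, so substitute your own):

# Master process, sketch:
vRet = ExecuteProcess('load.cube1');
IF(vRet <> ProcessExitNormal());
    ProcessError;
ENDIF;
vRet = ExecuteProcess('load.cube2');
IF(vRet <> ProcessExitNormal());
    ProcessError;
ENDIF;
# ...and so on for the remaining cubes; ProcessError aborts the master,
# so none of the chained work is committed.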
The answer above from Wuzardor provides an approach using TI processes but your question seems to suggest you're using TM1py/Python to do the upload to TM1, either directly, or by triggering TI processes through the REST API.
In general, there's no easy way to roll back changes to cube data. However, it should be simple enough to structure your Python code such that the existence and validity of all the load files is established before you push anything to any of your cubes. It's difficult to suggest the best approach without more details about what you're trying to achieve and how.
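For instance, a minimal Python sketch of that idea (TM1Service and cells.write_values are TM1py calls; parse_and_validate is a placeholder for whatever checks make sense for your files):

from TM1py.Services import TM1Service

def load_all_or_nothing(files_by_cube, tm1_params):
    # Step 1: fail fast - parse and validate every file before touching TM1.
    cellsets = {}
    for cube, path in files_by_cube.items():
        cellsets[cube] = parse_and_validate(path)  # raises if a file is bad

    # Step 2: only now push the data, cube by cube.
    with TM1Service(**tm1_params) as tm1:
        for cube, cellset in cellsets.items():
            tm1.cells.write_values(cube, cellset)

If any file fails validation, the function raises before a single cell has been written, so there is nothing to roll back.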
Updated in response to OP comment:
OK, it's not clear in what way IT isn't cooperating, but if you are unable to verify the source before extracting it, you can always load it into a staging cube first, where the data can be checked before anything is copied to your main cubes. Depending on the issues you tend to face with the data, you might be able to automate this check, or you might need to rely on a human looking at it. Either way, just don't overwrite your historic data until you've checked the new data.
Furthermore, you might want to think about your overall design. Might it make sense to retain a copy of the previous data in the cubes anyway? Why not build your cubes so that you can keep the history, rather than overwriting it each time? Finding a sensible design really depends on the details of your application, but you might benefit from looking at it with fresh eyes.
Cheers
Alex

How do I output the current dB level on Mac?

Hello and thanks for checking out my question,
I am working on a project analysing film and visualizing the data I get from it. I'm quite new to programming and only have some basic experience in Java and JavaScript.
For my project I want to store the dB levels of a movie in a CSV file, to later work with the data in Processing. I couldn't find anything for Mac (OS X) that wasn't too complex for me to comprehend.
Help would be much appreciated!
Thank you.
You're going to have to break your problem down into smaller steps.
Step 1: Generating the CSV file.
There are probably a million different ways to do this, and that can be pretty confusing. But break this down into smaller sub-steps and then take those steps one at a time. Can you get a movie playing in Processing? There is a Video library that does just that. Then can you get the volume level every X seconds? You might start with a separate sketch that just prints something to the console every X seconds. For getting the volume, you might try out the Minim library. If that doesn't work, Google is your friend, and remember to keep breaking your problem down into smaller steps!
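As a concrete starting point, here's a rough Processing sketch of the sampling idea using Minim's line-in level. It assumes the movie's audio is playing through an input Minim can monitor, and the 500 ms interval and dB conversion are just one way to do it:

import ddf.minim.*;

Minim minim;
AudioInput in;
Table table;
int lastSample = 0;

void setup() {
  minim = new Minim(this);
  in = minim.getLineIn();            // monitor the audio input
  table = new Table();
  table.addColumn("millis");
  table.addColumn("db");
}

void draw() {
  if (millis() - lastSample > 500) {                      // sample every 500 ms
    float level = in.mix.level();                         // linear RMS level
    float db = 20 * (log(max(level, 0.00001)) / log(10)); // convert to dB
    TableRow row = table.addRow();
    row.setInt("millis", millis());
    row.setFloat("db", db);
    lastSample = millis();
  }
}

void keyPressed() {
  saveTable(table, "data/levels.csv");                    // press any key to save
}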
Step 2: Loading the CSV file.
Now that you have the CSV file, you have to load it into Processing. There are several functions in the reference that might come in handy. Again, start with an example program that just prints the values to the console. Get that working perfectly before moving on.
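For example, with Processing's built-in Table API (assuming the CSV has a header row like the one written in the sketch above):

Table levels = loadTable("levels.csv", "header");
for (TableRow row : levels.rows()) {
  println(row.getInt("millis") + " -> " + row.getFloat("db") + " dB");
}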
Step 3: Visualizing the data.
Now that you have the data in your Processing code, you can start thinking about how you want to visualize it. Maybe start with a simple line chart that shows the volume over time.
If you get stuck on a specific step, then try to break it down into smaller sub-steps. Create an example program that just tests one of those smaller sub-steps (also known as an MCVE), and you'll be able to ask a more specific code-oriented question. Good luck, sounds like an interesting project!

three.js: load a big STL file

I want to use three.js to display STL files, and I've read some samples online.
The problem is that my STL files are quite big (binary format, 28 MB), so they take too long to load.
Is it possible to load part of the STL file and display a basic shape, then load the rest of the file?
Make sure you're working with BufferGeometry, and take a look at setDrawRange, which limits how many vertices are drawn. You can then set up a setInterval callback in which you increase the count of items (positions, normals) rendered in the WebGL context.
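A rough JavaScript sketch of that approach (STLLoader comes from the three.js examples; scene and the 10,000-vertex step are assumptions to adapt):

var loader = new THREE.STLLoader();
loader.load('model.stl', function (geometry) {     // geometry is a BufferGeometry
  var mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
  scene.add(mesh);

  var total = geometry.attributes.position.count;  // total vertex count
  var shown = 0;
  geometry.setDrawRange(0, 0);                     // start by drawing nothing

  var timer = setInterval(function () {
    shown = Math.min(shown + 10000, total);        // reveal 10k more vertices
    geometry.setDrawRange(0, shown);
    if (shown === total) clearInterval(timer);
  }, 50);
});

Note that this still downloads the whole file before anything appears; it only spreads the rendering out, so the page stays responsive while the model fills in.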

What does an Area Description File (ADF) look like?

I'm starting to work with the Google Tango Tablet, hopefully to create (basic) 2D / 3D maps from scanned areas. But first I would like to read as much about the Tango (sensors / API) as I can, in order to create a plan to be as time efficient as possible.
I instantly noticed the ability to learn areas, which is a very interesting concept; nevertheless, I couldn't find anything about these so-called Area Description Files (ADF).
I know that ADF files can be geographically referenced, and that they contain metadata and a unique UUID. Furthermore, I know their basic functionality, but that's about it.
In some parts of the modules ADF files are referred to as 'maps', in other parts they are just called 'descriptions'.
So what do these files look like? Are they already basic (GRID) (2D) maps, or are they just descriptions?
I know there are people who already extracted the ADF files, so any help would be greatly appreciated!
From the Tango ADF documentation:
Important: Saved area descriptions do not directly record images or video of the location, but rather contain descriptions of images of the environment in a very compressed form. While those descriptions can't be directly viewed as images, it is in principle possible to write an algorithm that can reconstruct a viewable image. Therefore, you must ask the user for permission before saving any of their learned areas to the cloud or sharing areas between users to protect the user's privacy, just as you would treat images and video.
Other than that, there doesn't seem to be much info about the file internals. I use a lot of them, but I've never been compelled to look inside; curious, yes, but not compelled.
Without any direct info from the Project Tango folks, anything we provide would be merely speculation. I'm with Mark, not much compelling reason to get the details. My speculation: they probably contain a set of image descriptors, like SIFT, and whatever other known device settings are available, like GPS location, orientation (gravity), time(?), etc.
I got the ADF file; it's basically encoded binary and seems difficult to decode.
I'll be happy to share the file if anyone is still interested.

What makes a JavaFX 1.2 scene graph refresh?

My first question =). I'm writing a video game with a user interface written in JavaFX. The behavior is correct, but I'm having performance problems, and I'm trying to work out what is queuing up the refreshes that slow down the app.
I've got a relatively complex scene graph that represents a hexagonal map. It scales, so you could have 100 or 1,000 hexagons in the map. As the number of hexagons grows, the responsiveness of the GUI decreases. I've used YourKit (a Java profiler) to trace these delays to major redraw operations.
I've spent most of the night trying to figure out how to do two things and understand one thing:
1) Cause a CustomNode to print something to the console whenever it is painted. This would help me identify exactly when these paints are being queued.
2) Identify when a CustomNode is put on the repaint queue.
If I answered 1 and 2, I might be able to figure out what it is that is binding all these different nodes together. Is it possible that JavaFX only works through global refreshes (doubtful)?
JavaFX Script is a powerful UI language, but certain practices will kill performance. Best performance generally boils down to:
keeping the Scene Graph small
keeping use of bind to a minimum (you can look at using triggers instead, which are more performant; see the sketch below)
This blog post by Jim Weaver expands these points.
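To illustrate the bind-versus-trigger point, a tiny JavaFX Script sketch (hexCount and recalculate() are made-up names):

// A trigger runs only when the variable itself is replaced:
var hexCount: Integer = 0 on replace oldValue {
    recalculate();  // do the update work explicitly, only when needed
};

// bind is re-evaluated automatically, which is convenient but adds
// dependency-tracking overhead when scattered across a large scene graph:
var label = bind "Hexes: {hexCount}";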
I'm not sure as to the specific answers to your questions. If you examine the 1.2.1 docs, you might be able to find a point in the Node documentation that you can override and add println statements to, but I'm not sure it can be done. You could try posting on forums.sun.com.
This is a partial post. I expect to expand it after I've done some more work. I wanted to put in what I've done to date so I don't forget.
I realized that I'd need to get my IDE running with a full complement of the JavaFX 1.2 source. This would allow me to put breakpoints into the core code to figure out what is going on. I decided to do this configuration in Eclipse for remote debugging. I'm developing my FX in NetBeans but am more comfortable with Eclipse, so that's what I want to debug in if I can.
To get this info into Eclipse, I first made a project with the Java source that my code uses. I then added external Jars to the project. On my Mac, the Jars I linked to were in /Library/Frameworks/JavaFX.framework/Versions/1.2
Then I went searching for the Source to link to these Jars. Unfortunately, it's not available. I could find some of it in /Library/Frameworks/JavaFX.framework/Versions/1.2/src.zip.
I did some research and found that the only remaining option was to install a Java decompiler. I used this one because it was easy to install into Eclipse 3.4: http://java.decompiler.free.fr/
This is where I am now. I can navigate into the Core FX classes and believe I'll be able to set break points and begin real analysis. I'll update this post as I progress.
I found a helpful benchmarking tool:
If you run with the JVM arg:
-Djava.util.logging.config.file=/path/to/logging/file/logging.properties
And you've put the following properties into the file referenced by that arg:
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = ALL
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.sun.scenario.animation.fps.level = ALL
You'll get console output that includes your frame count per second. For FX 1.2 it wasn't working for me, but it appears to be working for 1.2.1 (which was released Sept. 9, 2009). I don't have a NetBeans that runs 1.2.1 yet.
You may want to read this article.
http://fxexperience.com/2009/09/performance-improving-insertion-times/
Basically, insertions into the scene graph are slow, and you can see benefits by batching up inserts.
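In JavaFX Script terms, that means building the whole sequence first and assigning it once, rather than inserting node by node (hexes, makeNode() and group are made-up names for illustration):

// Slow: each insert can trigger scene graph bookkeeping.
for (h in hexes) {
    insert makeNode(h) into group.content;
}

// Faster: build the sequence up front, then assign in one batch.
group.content = for (h in hexes) makeNode(h);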
