How to copy faces and preserve transformation in SketchUp Ruby

I am new to SketchUp Ruby and am blown away that this is not simpler, but here goes...
I would like to copy all the groups matching a certain layer name into a new temporary group. I have basically given up on trying to copy the whole group, because that appears to be fraught with peril and BugSplats if not done in some super-anal-retentive way that considers context, immediate exploding of objects, etc...
So I have resorted to looping through all matching groups' entities and copying faces instead, which seems much more straightforward. My goal here is not to become a Ruby wizard but just to get this one script done.
I have been able to copy faces, BUT the faces lose their transformation on copy and just end up at some arbitrary size at the origin rather than where they were in the model.
Here is the code:
SKETCHUP_CONSOLE.clear
mod = Sketchup.active_model # Open model
ent = mod.entities # All entities in model
temp_wall_primitives = ent.add_group # create a new empty temporary group
mod.definitions.each { |d|
  next if d.image? || d.group? || d.name != "WALL"
  d.entities.each { |wall_primative_group|
    if wall_primative_group.layer.name == "WALL_PRIMITIVES"
      wall_primative_group.entities.each { |wall_primative_group_entity|
        if wall_primative_group_entity.is_a? Sketchup::Face
          new_face = temp_wall_primitives.entities.add_face(wall_primative_group_entity.vertices)
        end
      }
    end
  }
}
I believe I need to somehow get the transformation of each face and apply it to the new faces as they are created?

Instead of trying to copy the entities from one instance to another, place a new instance:
# Let's say we have a group (grep returns an array, so take the first match):
model = Sketchup.active_model
source_group = model.entities.grep(Sketchup::Group).first
# We can "copy" this into another group by fetching its definition and adding a new instance of it:
new_group = model.entities.add_group
new_group.entities.add_instance(source_group.definition, source_group.transformation)
In your current solution, where you re-create each face, the reason for the transformation being lost is that vertex positions are relative to their parent. You pass in the vertices you copy from directly: temp_wall_primitives.entities.add_face(wall_primative_group_entity.vertices)
But you need to apply the transformation for the instance they relate to as well.
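For illustration, a minimal sketch of that idea (it assumes the faces sit one group level below a single placed instance, and wall_instance is a hypothetical handle to that instance): transform each vertex position into model coordinates before re-creating the face.
# wall_instance is hypothetical: the placed instance that contains wall_primative_group
to_model = wall_instance.transformation * wall_primative_group.transformation
points = wall_primative_group_entity.vertices.map { |v| v.position.transform(to_model) }
temp_wall_primitives.entities.add_face(points)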
Additionally, your current solution doesn't seem to take nested instances into account, nor the fact that faces can have holes in them - in which case face.vertices would not form a single valid loop. Manually recreating faces quickly gets complicated. If you want the whole content of a group or component instance, just make a copy of the instance itself. You can explode the new instance if you want.
But I would question why you have a temporary group in the first place. (Often this turns out to not be necessary. It would help if you explained the higher level task you are trying to perform here.)

Related

Selection script for maya

I'm a noob at scripting but good at animation, and I need some help creating a selection script.
I found an example:
import maya.cmds as cmds
# Get selected objects
curSel = cmds.ls(sl=True)
# Or, you can also specify a type in the listRelatives command
nurbsNodes = cmds.listRelatives(curSel, allDescendents=True, noIntermediate=True,
                                fullPath=True, type="nurbsCurve", path=True)
cmds.select(nurbsNodes)
But it doesn't select all of the character's controllers...
What I would like: if I select one of a character's controller curves or locators and run the script, all controls that can be keyed should be selected, without the referenced character name.
Thanks a lot to anyone who can help.
Currently the listRelatives command is being used to list all child nodes under the currently selected transforms whose type is a NURBS curve, i.e. type="nurbsCurve". Typically, all nodes in Maya inherit from some other node type (it's worth checking the nodes in Maya Help -> Technical Documents -> Nodes). Luckily, locator nodes and curves both inherit from 'geometryShape', so you should be able to replace "nurbsCurve" with "geometryShape", and that will probably get you most of the way there. You may need to ignore certain returned nodes though - e.g. polygonal meshes you are using for rendering.
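As a rough sketch of that suggestion (untested, and following the same call pattern as the code in the question):
import maya.cmds as cmds

cur_sel = cmds.ls(sl=True)
# geometryShape covers both nurbsCurve shapes and locator shapes
shapes = cmds.listRelatives(cur_sel, allDescendents=True, noIntermediate=True,
                            fullPath=True, type="geometryShape") or []
# Ignore polygon meshes that are only there for rendering
shapes = [s for s in shapes if cmds.nodeType(s) != "mesh"]
if shapes:
    cmds.select(shapes)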

YAML - Assigning alias to anchor alternatives

In YAML, we are not allowed to assign an alias to an anchor. How can I achieve similar functionality, so that I can use one generic key throughout the YAML file while only needing to make an update in one location?
t_shirt_sizes:
  - &t_shirt_xs EXTRA_SMALL
  - &t_shirt_sm SMALL
  - &t_shirt_md MEDIUM
  - &t_shirt_lg LARGE
  - &t_shirt_xl EXTRA_LARGE
t_shirt:
  &t_shirt_size *t_shirt_md
# Use the *t_shirt_size further down the YAML file
store:
  order_shirt_sizes: *t_shirt_size
This is possible:
t_shirt:
  &t_shirt_size EXTRA_SMALL
It fulfils your requirement that only a single change is needed to change the size everywhere. If you need the indirection, the closest thing you can do is:
t_shirt:
  &t_shirt_size [ *t_shirt_md ]
Then you'd need to handle the size value as a one-element sequence during loading.
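For example, a small sketch of that unwrapping in Ruby (the file name sizes.yml is illustrative, the data is assumed to use the flow-sequence variant above with the MEDIUM anchor from the question, and safe_load with the aliases: true keyword assumes a reasonably recent Psych):
require 'yaml'

doc = YAML.safe_load(File.read('sizes.yml'), aliases: true)
# order_shirt_sizes arrives as a one-element sequence, so unwrap it:
size = doc.dig('store', 'order_shirt_sizes').first  # => "MEDIUM"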
YAML serializes a directed graph of nodes. Using an alias makes another connection to the referenced node; it therefore does not create a new node and thus cannot itself be anchored. The purpose of anchors & aliases is to be able to serialize cyclic graphs.

Setting the power spectral density from a file

How does one set the power spectral density (PSD) from a file, and is it possible to use a different PSD for generating the data and for likelihood evaluation?
Question asked by Vivien Raymond by email.
Setting the PSD from file
To set the PSD from a file, first initialise a list of interferometers; here we just use Hanford:
>>> ifos = bilby.gw.detector.InterferometerList(['H1'])
Every element of the list is initialised with a default PSD using the Advanced LIGO noise curve. To check this:
>>> ifos[0].power_spectral_density
PowerSpectralDensity(psd_file='/home/user1/miniconda3/lib/python3.6/site-packages/bilby-0.3.5-py3.6.egg/bilby/gw/noise_curves/aLIGO_ZERO_DET_high_P_psd.txt', asd_file='None')
Note, no data has yet been generated. To overwrite the PSD, simply create a new PowerSpectralDensity object and assign it (if you have multiple detectors, you'll need to do this for every element of the list):
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(psd_file=PATH_TO_FILE)
Next, generate an instance of the strain data from the PSD:
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=4096, duration=4, start_time=-3)
You can check what the data looks like by doing
ifos[0].plot_data()
Note, you can also inject signals using the ifos.inject_signal method.
Using a different PSD for likelihood evaluation
Each ifo in the ifos list contains both the data and a PSD (or equivalent ASD). For inference, we pass that list into the bilby.gw.GravitationalWaveLikelihood object as the first argument and the PSD for each element of the list is used in calculating the likelihood.
So, if you want to use a different PSD for the likelihood evaluation: first generate the data (as above), then assign the PSD you want to use for sampling to each element of ifos and pass that list into the likelihood instead. This won't overwrite the data (provided you don't call set_strain_data_from_power_spectral_densities again, of course).
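Putting both steps together, a rough sketch (the file paths are placeholders; the classes and methods are the ones used in the snippets above):
import bilby

ifos = bilby.gw.detector.InterferometerList(['H1'])

# PSD used to generate the simulated noise
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
    psd_file='/path/to/data_generation_psd.txt')
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=4096, duration=4, start_time=-3)

# Swap in a different PSD before building the likelihood; the strain data stays as generated
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
    psd_file='/path/to/likelihood_psd.txt')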

Chef Ruby hash.merge VS hash[new_key]

I ran into an odd issue when trying to modify a chef recipe. I have an attribute that contains a large hash of hashes. For each of those sub-hashes, I wanted to add a new key/value to a 'tags' hash within. In my recipe, I create a 'tags' local variable for each of those large hashes and assign the tags hash to that local variable.
I wanted to add a modification to the tags hash, but the modification had to be done at compile time since the value was dependent on a value stored in an input json. My first attempt was to do this:
tags = node['attribute']['tags']
tags['new_key'] = json_value
However, this resulted in a spec error that indicated I should use node.default, or the equivalent attribute assignment function. So I tried that:
tags = node['attribute']['tags']
node.normal['attribute']['tags']['new_key'] = json_value
While I did not have a spec error, the new key/value was not sticking.
At this point I reached my "throw stuff at a wall" phase and used the hash.merge function, which I used to think was functionally identical to hash['new_key'] for a single key/value pair addition:
tags = node['attribute']['tags']
tags.merge({ 'new_key' => 'json_value' })
This ultimately worked, but I do not understand why. What functional difference is there between the two methods that causes one to be seen as a modification of the original chef attribute, but not the other?
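As an aside, in plain Ruby (outside Chef's attribute machinery) the two are not identical: merge without a bang returns a new Hash and leaves its receiver untouched, while []= mutates the receiver in place. A quick illustration (the hash contents are made up):
tags = { 'env' => 'prod' }
copy = tags.merge('new_key' => 'json_value')  # returns a NEW hash; tags is unchanged
tags['other_key'] = 'other_value'             # mutates tags in place
tags  # => {"env"=>"prod", "other_key"=>"other_value"}
copy  # => {"env"=>"prod", "new_key"=>"json_value"}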
The issue is that you can't use node['foo'] like that. That accesses the merged view of all attribute levels, so if you then want to set things, Chef wouldn't know which level to put them at. You need to lead off by telling it where to put the data:
tags = node.normal['attribute']['tags']
tags['new_key'] = json_value
Or just:
node.normal['attribute']['tags']['new_key'] = json_value
Beware of setting things at the normal level, though: it is not reset at the start of each run. That is probably what you want here, but it does mean that even if you remove the recipe code doing the set, the value will still be in place on any node that has already run it. If you want to actually remove things, you have to do so explicitly.
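As an illustration of that explicit removal (assuming Chef 12 or later, where the node.rm_* helpers are available):
# Removes the key from the normal level; returns the removed value, or nil if it wasn't set
node.rm_normal('attribute', 'tags', 'new_key')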

Using Kiba: Is it possible to define and run two pipelines in the same file? Using an intermediate destination & a second source

My processing has a "condense" step before needing further processing:
Source: Raw event/analytics logs of various users.
Transform: Insert each row into a hash according to UserID.
Destination / Output: An in-memory hash like:
{
  "user1" => [event, event, ...],
  "user2" => [event, event, ...]
}
Now, I've got no need to store these user groups anywhere, I'd just like to carry on processing them. Is there a common pattern with Kiba for using an intermediate destination? E.g.
# First pass
source EventSource # 10,000 rows of single events
transform {|row| insert_into_user_hash(row)}
#users = Hash.new
destination UserDestination, users: #users
# Second pass
source UserSource, users: #users # 100 rows of grouped events, created in the previous step
transform {|row| analyse_user(row)}
I've been digging around in the code and it appears that all transforms in a file are applied to the source, so I was wondering how other people have approached this, if at all. I could save to an intermediate store and run another ETL script, but I was hoping for a cleaner way - we're planning lots of these "condense" steps.
To directly answer your question: you cannot define 2 pipelines inside the same Kiba file. You can have multiple sources or destinations, but the rows will all go through each transform, and through each destination too.
That said, you have quite a few options before resorting to splitting into 2 pipelines, depending on your specific use case.
I'm going to email you to ask a few more detailed questions in private, in order to properly reply here later.
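For illustration only (this is not necessarily what the answer's author has in mind): if the main thing you want to avoid is a separate ETL script, one pattern is to build two job definitions with Kiba.parse and run them back-to-back from a single plain Ruby script, sharing an in-memory hash. The source/destination classes and the helper methods below are the assumed ones from the question:
require 'kiba'

users = {}

condense = Kiba.parse do
  source EventSource                              # assumed: yields one raw event row at a time
  transform { |row| insert_into_user_hash(row) }  # assumed helper from the question
  destination UserDestination, users: users       # assumed: writes grouped rows into the shared hash
end

analyse = Kiba.parse do
  source UserSource, users: users                 # assumed: yields one grouped user per row
  transform { |row| analyse_user(row) }           # assumed helper from the question
end

Kiba.run(condense)  # first pass fills `users`
Kiba.run(analyse)   # second pass consumes it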
