Is there any way to force acedTraceBoundary to always return regions? - autocad-plugin

Acad::ErrorStatus acedTraceBoundary( const AcGePoint3d& seedPoint, bool detectIslands, AcDbVoidPtrArray& resultingBoundarySet )
Here we can read that resultingBoundarySet "Contains the resulting boundary in the form of AcDbPolyline* objects", but sometimes we get a set of AcDbRegion* objects instead (perhaps when the boundary contains a spline). And regions are what I need. Do you know any way to force acedTraceBoundary to always create AcDbRegion objects rather than AcDbPolylines?

There isn't a way to force acedTraceBoundary to return one kind of entity alone; if it is returning a set of regions, you can always extract the plines/primitive entities from a region.
Use getSplitCurves on AcDbRegion

Related

Selection script for Maya

I'm a noob at scripting but good at animation, and I need some help to create a selection script.
I found an example:
import maya.cmds as cmds
# Get selected objects
curSel = cmds.ls(sl=True)
# Or, you can also specify a type in the listRelatives command
nurbsNodes = cmds.listRelatives(curSel, allDescendents=True, noIntermediate=True, fullPath=True, type="nurbsCurve")
cmds.select(nurbsNodes)
But it doesn't select all of the character's controllers...
I would like it so that if I select a character controller curve or locator and run the script, all of the controls that can be keyed get selected, without the referenced character name.
Thanks a lot to anyone who can help.
Currently the listRelatives command is being used to list all child nodes under the currently selected transforms whose type is a NURBS curve, e.g. type="nurbsCurve". Typically, all nodes in Maya inherit from some other node type (it's worth checking the nodes in Maya Help -> Technical Documents -> Nodes). Luckily, locator nodes and curves both inherit from 'geometryShape', so you should be able to replace "nurbsCurve" with "geometryShape", and that will probably get you most of the way there. You may need to ignore certain returned nodes though, e.g. polygonal meshes you are using for rendering.
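A minimal sketch of that approach, assuming all controls live under a single character root and that render meshes are the only shapes you need to filter out (adjust the filtering to your rig):
import maya.cmds as cmds
# Start from the current selection (e.g. one controller curve or locator).
curSel = cmds.ls(sl=True, long=True)
# Walk up to the top group of each selected node so the whole character is covered.
topGroups = list({'|' + path.split('|')[1] for path in curSel})
# geometryShape covers both nurbsCurve and locator shapes.
shapes = cmds.listRelatives(topGroups, allDescendents=True, noIntermediate=True,
                            fullPath=True, type='geometryShape') or []
# Ignore render geometry, then keep only the parent transforms that have keyable attributes.
shapes = [s for s in shapes if cmds.nodeType(s) != 'mesh']
controls = cmds.listRelatives(shapes, parent=True, fullPath=True) or []
controls = [c for c in controls if cmds.listAttr(c, keyable=True)]
if controls:
    cmds.select(controls, replace=True)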

How to load a value from a dynamically specified parameter in NiFi

I have several processes with almost the same flow, like "Get some parameters, extract data from a database according to them, and upload it to a target". The parameters vary slightly across processes, as do the targets, but only a bit. Most of the process is the same. I would like to extract those differences into a parameter context and load them dynamically. My idea is to have parameters defined in the following way and then to use them.
So core of question is:
How do I dynamically choose which parameter group to load and use?
Having several parameter contexts with same-named/different-valued parameters and dynamically switching between them would probably be the best, but as far as I know that is not possible.
Also, duplicating flows is off the table. Any error correction would be spread out over several places and maintenance would be a nightmare.
Moreover, I know I can do it like "in GenerateFlowFile for process A set value1=#{A_value1}, and in GenerateFlowFile for process B set value1=#{B_value1}". But this is tedious, error-prone, and scales badly, not to mention the situation where I have dozens of parameters and several processes. It is also a kind of hardcoding, not configuring...
I was hoping for something like defining group=A and then using it like value1=#{ ${ group:append('_value1') } }, but this does not work - it is evaluated as a parameter literally named ${ group:append('_value1') }.
TL;DR: Use evaluateELString().
The actual solution is to set group=A in the GenerateFlowFile processor and then, in the next UpdateAttribute processor, set the following:
value1=${ group:prepend('hash{ '):append('_value1 }'):replace('hash', '#'):evaluateELString() }
The magic being done here is: "Take the value of group, wrap it with #{ and _value1 } to make it a valid NiFi Expression Language statement, and then evaluate it." (Note: the word hash and the replace function are there because I didn't manage to escape the # character right before {.)
If you would like to have your value1 at the beginning of the statement, you can use the following code. The result is the same and it is easier to use (the often-changed value1 sits at the beginning of the statement), but it is less readable in a "what is really going on?" sense.
value1=${ literal('value1'):prepend('_'):prepend(${ group }):prepend('hash{ '):append(' }'):replace('hash', '#'):evaluateELString() }

Web2py Number Formatting for Thousands

I'm sort of new to Web2py. I have a system that's working just fine, but I want to make an improvement regarding visualization. There's a couple of fields that use numbers (defined as double in their respective define_table methods) to represent currency, but I want them to also show with a separator for thousands, such as 183,403,293.34. I checked some documentation, but I couldn't find a direct way to handle this form of customization, though I could be missing something.
Any suggestions regarding this? Cheers!
First, if representing currency, you should use the decimal field type rather than double (some calculations using double values may yield incorrect results due to the use of floating point representations internally). However, if using SQLite, there is no distinction between decimal and double, so in that case, you might want to multiply all values by 100 and instead store integers.
In any case, to display a given numeric value with thousands separators in Python, you can do:
'{:,}'.format(myvalue)
For more details, see https://stackoverflow.com/a/10742904/440323 and https://stackoverflow.com/a/21208495/440323.
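For currency you will usually also want exactly two decimal places, which the format spec supports directly (the value below is only an illustration):
'{:,.2f}'.format(183403293.336)  # -> '183,403,293.34'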
If you are using the values via web2py functionality that makes use of the field's represent function (e.g., the grid or the .render() method), you can define a custom represent function, such as:
Field('amount', 'decimal(12, 2)',
      represent=lambda v, r: '{:,}'.format(v) if v is not None else '')
You could use the format function of the Python locale module in a view:
{{= locale.format('%.2f', your_value, grouping=True) }}
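If you go the locale route, note that locale.format is deprecated in recent Python versions in favor of locale.format_string, and a locale has to be set first. A minimal standalone sketch (the en_US.UTF-8 locale name is an assumption; use whatever locale your server provides):
import locale
# Assumes the en_US.UTF-8 locale is installed on the system.
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
# format_string is the non-deprecated replacement for locale.format.
print(locale.format_string('%.2f', 183403293.336, grouping=True))  # -> 183,403,293.34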

What does the "Name=SWEIPS" parameter mean in Siebel?

I am writing a script in LR for Siebel Open UI. All my requests contain this parameter, with different values. What does it mean?
Examples (from different requests):
"Name=SWEIPS", Value = #0'0'1'0'GetProfileAttr'3'attrName'SBRF Position Id'"
"Name=SWEIPS", Value = #0'0''0'3'1-SQE21A, 1-SQL21E, 1SQE31"
And so on.
Can I simply delete it?
Can I simply delete it? - No, you’re not supposed to delete it.
Compare the SWEIPS value by recording twice or thrice with different data sets, and check whether there are any date/time values in SWEIPS. If there is nothing to correlate, leave it as it is; there is no need to delete it.
Make sure to correlate values like SWET, ROWID, SWECount, SWEC, and so on.

Interpreting Cascading dot diagrams

Can someone explain how to read these diagrams? I understand the flow from head to tail, but I am specifically wondering about how to read the field (bracket) transitions between ellipses (Pipes/Taps).
By way of example, using the Fields following the Every pipe in the image, the way I have been able to interpret these is that the first field set, i.e. [{2}:'token', 'count'], is what goes into the next Pipe/Tap, but what is the significance of the second field set, [{1}: 'token']?
Is this the field set that went into the previous Pipe above? Is there a programmatic significance to the second bracket, i.e. are we able to access it within that pipe with particular Cascading code? (In the case where the second field set is larger than the first.)
[Cascading flow diagram - source: cascading.org]
The second field set represents which fields are available for subsequent operations in that map or reduce.
In your example above, in the reduce step, since you grouped by 'token', only 'token' is available for subsequent aggregations (Everys) in that reduce step. You could, for example, add another aggregation which outputs the average token length, but you could not use an aggregation which utilizes the 'count' yet.
The reason for this behaviour is that subsequent aggregations on the same group happen in parallel, so the Count won't have completed in time to feed into any other aggregations you chain on.
