Wind scaling in MAXScript (3ds Max)

I'm trying to write a script in 3ds Max that creates a smoke animation. I recorded the process of creating the scene, but I'm having trouble recreating it by script. I want to set the wind parameters:
select $'Smoke wind'
$.frequency = 0.78
$.turbulence = 0.03
$.scale = 0.03 -- problem
But I received the following error:
Unable to convert: 0.03 to type: Point3
And I have no idea what could have gone wrong, because when I set the parameters in the 3ds Max UI, everything's fine. The problem occurs only when I type the scale instruction in the Listener. Does anyone know what's going on?

Every scene node also has an object-level scale property (a Point3 of X, Y, Z factors), and $.scale resolves to that node-level property, which is the reason for the ambiguity and the Point3 conversion error. Access the Wind baseObject instead, like this:
$.baseObject.scale = 0.03

Why does Trackpy give me an error when I try to compute the overall drift speed?

I'm going through the Trackpy walkthrough (http://soft-matter.github.io/trackpy/v0.3.0/tutorial/walkthrough.html) but using my own pictures. When I get to calculating the overall drift velocity, I get an error I don't understand (screenshot of the traceback attached).
I don't have a ton of coding experience so I'm not even sure how to look at the source code to figure out what's happening.
Your screenshot shows the traceback of the error: you called a function, tp.compute_drift(), but this function called another function, pandas_sort(), which called another function, and so on, until raise ValueError(msg) was reached, which interrupts the chain. The last line is the actual error message:
ValueError: 'frame' is both an index level and a column label, which is ambiguous.
To understand it, you have to know that Trackpy stores data in DataFrame objects from the pandas library. The tracking data you want to extract drift motion from is stored in such an object, t2. If you print t2 it will probably look like this:
                 y            x      mass  ...        ep  frame  particle
frame                                      ...
0        46.695711  3043.562648  3.881068  ...  0.007859      0         0
3979   3041.628299  1460.402493  1.787834  ...  0.037744      0         1
3978   3041.344043  4041.002275  4.609833  ...  0.010825      0         2
The word "frame" appears twice: once as the name of the index (the leftmost column) and once as a regular column label. As the error message says, sorting the table by "frame" is therefore ambiguous.
Solution
The index (leftmost) column does not need a name here, so remove it with
t2.index.name = None
and try again. Check if you have the newest Trackpy and Pandas versions.
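The ambiguity and the fix can be reproduced in a few lines of plain pandas (toy values for illustration, not real Trackpy output):

```python
import pandas as pd

# Toy stand-in for the tracking table: the index and one
# column are both named "frame", just like in t2 above.
t2 = pd.DataFrame({"frame": [0, 0, 1], "particle": [0, 1, 0], "x": [3.1, 2.7, 3.2]})
t2.index.name = "frame"

msg = ""
try:
    t2.sort_values("frame")  # ambiguous: the index level or the column?
except ValueError as err:
    msg = str(err)  # the same "is both an index level and a column label" error

t2.index.name = None                 # drop the redundant index name
t2_sorted = t2.sort_values("frame")  # now unambiguously refers to the column
```

After clearing the index name, sort_values resolves "frame" to the column and the drift computation can proceed.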

Load obj and mtl files hosted on Google Cloud Storage in Three.js

I am building a web app for dealing with 3D data, and want to render the 3D data belonging to each user. The 3D files (.obj, .mtl, .png, etc.) are stored in GCS, and users should only be able to access their own 3D files.
I'm now trying to render the 3d files using three.js, but can't for the life of me find a good way to load the files using OBJLoader and MTLLoader. It works fine to load and render if I serve files as static content from my server, but I can't find a way of loading the non-public files stored in GCS.
I am using this code to load static files
const mtlFile = "test.mtl";
const objFile = "test.obj";

var mtlLoader = new THREE.MTLLoader();
mtlLoader.load( mtlFile, function ( materials ) {
    materials.preload();
    var objLoader = new THREE.OBJLoader();
    objLoader.setMaterials( materials );
    objLoader.load( objFile, function ( object ) {
        object.position.z = 40;
        scene.add( object );
    }, onProgress, onError );
});
The problem is that my MTL file normally looks like this:
newmtl default
Ka 1.00 1.00 1.00
Kd 1.00 1.00 1.00
Ks 0.00 0.00 0.00
Ns 20.00
illum 2
newmtl test
Ka 1.00 1.00 1.00
Kd 1.00 1.00 1.00
Ks 0.00 0.00 0.00
Ns 20.00
illum 2
map_Ka test.png
map_Kd test.png
i.e. it points to a file called test.png. It's possible to set a base path, so if I could serve the file at a URL like http://example.com/test.png, that would work here.
However, I want the files to only be accessible to the user that created them. And if I hosted the test.png in GCS I could get a signed link that looks like this:
https://storage.googleapis.com/test-bucket/test.png?GoogleAccessId=blajlakdsjflasjdflkj&Expires=14954&Signature=V92B0........
which I can't find a way to make play nicely with the OBJ/MTL loaders.
So, any tips on how to solve this? Do I need to build some sort of file proxy that serves the files only connected to the user?
Btw. I'm hosting my own user database and authentication, so there's no connection to the user's google account.
For ideal security, a proxy might suit your needs best. It wouldn't have to actually proxy all of the data: if three.js handles redirects well (browser XHR generally follows them transparently), your server could simply issue a redirect to a signed URL for each authenticated request.
However, it depends how you model the privacy of these objects. If you just give all of these objects lengthy random names that are effectively impossible to guess, you have mostly solved your security issue so long as no bad parties can learn the object name. That will quickly solve your problem without the need to build elaborate extra services.
Of course, security modeling is complicated. Do you care about the need to revoke access? Do you need to be able to limit how long a user can access an object? Should a user be able to share access to an object? I don't know the answers to these questions for your case. But long, unguessable names accessed anonymously via HTTPS may be sufficient for many of them.
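If you do go with signed URLs, one way to make them play with the loaders is a URL modifier on the LoadingManager (THREE.LoadingManager.setURLModifier in newer three.js releases), which rewrites every URL the loaders request, including the texture names inside the MTL. A sketch, where signedUrls is a hypothetical filename-to-signed-URL map your backend would return:

```javascript
// Hypothetical glue code: map the bare file names referenced by the MTL
// (e.g. "test.png") to per-user signed GCS URLs fetched from your server.
function makeURLModifier(signedUrls) {
  return function (url) {
    var name = url.split('/').pop();  // strip any base path: "test.png"
    return signedUrls[name] || url;   // fall back to the original URL
  };
}

// Usage with the loaders (assumes signedUrls looks like
// { "test.mtl": "https://storage.googleapis.com/...", "test.png": "..." }):
// var manager = new THREE.LoadingManager();
// manager.setURLModifier(makeURLModifier(signedUrls));
// var mtlLoader = new THREE.MTLLoader(manager);
// mtlLoader.load("test.mtl", function (materials) { /* ... as before ... */ });
```

This keeps the MTL files untouched in GCS; only the loader's view of the URLs changes.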

How to pyplot pause() for zero time?

I'm plotting an animation of circles. It looks and works great as long as speed is set to a positive number. However, I want to set speed to 0.0. When I do that, something changes and it no longer animates. Instead, I have to click the 'x' on the window after each frame. I tried using combinations of plt.draw() and plt.show() to get the same effect as plt.pause(), but the frames don't show up. How do I replicate the functionality of plt.pause() precisely either without the timer involved or with it set to 0.0?
speed = 0.0001
plt.ion()
for i in range(timesteps):
    fig, ax = plt.subplots()
    for j in range(num):
        circle = plt.Circle((a[j], b[j]), r[j], color='b')
        fig.gca().add_artist(circle)
    plt.pause(speed)
    #plt.draw()
    #plt.show()
    plt.clf()
    plt.close()
I've copied the code of pyplot.pause() here:
def pause(interval):
    """
    Pause for *interval* seconds.

    If there is an active figure it will be updated and displayed,
    and the GUI event loop will run during the pause.

    If there is no active figure, or if a non-interactive backend
    is in use, this executes time.sleep(interval).

    This can be used for crude animation. For more complex
    animation, see :mod:`matplotlib.animation`.

    This function is experimental; its behavior may be changed
    or extended in a future release.
    """
    backend = rcParams['backend']
    if backend in _interactive_bk:
        figManager = _pylab_helpers.Gcf.get_active()
        if figManager is not None:
            canvas = figManager.canvas
            canvas.draw()
            show(block=False)
            canvas.start_event_loop(interval)
            return

    # No on-screen figure is active, so sleep() is all we need.
    import time
    time.sleep(interval)
As you can see, it calls start_event_loop, which starts a separate crude event loop for interval seconds. What happens when interval == 0 seems backend-dependent. For instance, for the WX backend a value of 0 means that this loop is blocking and never ends (I had to look in the code here; it doesn't show up in the documentation. See line 773).
In short, 0 is a special case. Can't you set it to a small value, e.g. 0.1 seconds?
The pause docstring above says it can only be used for crude animations; you may have to resort to the animation module if you want something more sophisticated.
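If a zero-length pause is really what you want, one workaround (a sketch, not the only option) is to do the draw-and-flush part of pause() yourself and skip canvas.start_event_loop entirely:

```python
import matplotlib
matplotlib.use("Agg")  # assumption: in a real session use an interactive backend, e.g. "TkAgg"
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
circle = plt.Circle((0.5, 0.5), 0.2, color='b')
ax.add_artist(circle)

# The draw-and-show part of pause(), with no timed event loop at all:
fig.canvas.draw()
fig.canvas.flush_events()  # process pending GUI events (a no-op on Agg)
```

Whether the window actually repaints without an event-loop run is backend-dependent, so test this against the backend you use.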

How to slow down Framer animations

I'm looking for a solution to slow down FramerJS animations by a certain amplitude.
In the Velocity animation framework it's possible to do Velocity.mock = 10 to slow down everything by a factor of 10.
Either the docs are lacking in this respect, or this feature doesn't currently exist and should really be implemented.
You can use
Framer.Loop.delta = 1 / 120
to slow down all the animations by a factor of 2. The default value is 1 / 60.
While Javier's answer works for most animations, it doesn't apply to delays. While not ideal, the method I've adopted is to set up a debugging variable and function, and pass every time-related value through it:
slowdown = 5
s = (ms) ->
  return ms * slowdown
Then use it like so:
Framer.Defaults.Animation =
  time: s 0.3
…and:
Utils.delay s(0.3), ->
  myLayer.sendToBack()
Setting the slowdown variable to 1 will use your standard timing (anything times 1 is itself).

How to use output of CIFilter recursively as new input?

I've written an own CIFilter kernel which is doing some image processing on the camera signal. It takes two arguments:
Argument one is "inputImage" (the current camera image); argument two is "backgroundImage", which is initialized with the first camera image.
The filter is supposed to work recursively: the result of the filter should be used as the new "backgroundImage" in the next iteration. I am calculating a background image and some variances, and therefore need the result of the previous render.
Unfortunately I cannot use the output CIImage of the CIFilter in the next iteration, because memory usage climbs and climbs. After 10 seconds of processing it ends up at 1.4 GB of RAM. When the filter is used in the standard manner (without recursion), memory management is fine.
How can I reuse the output of a filter as input in the next iteration?
I've done an NSLog on the result image, and it told me:
background {
CISampler:0x1002e0360 image {
FEPromise: 0x1013aa230 extent [0 0 1280 720]; DOD [0 0 1280 720]; filter MyFeatureDetectFilter: 0x101388dd0; kernel coreImageKernel; image {
CISampler:0x10139e200 image {
FEBufferImage: 0x10139bee0 metadata.colorspace: HDTV; extent: [0 0 1280 720]; format: BGRA_8; uid 5
}
After some seconds the log becomes something like
}
}
}
}
}
This tells me that CIImages are 'always' prototypes of the desired operation, and using them recursively just adds the resulting CIImage 'prototype' as input to the new 'prototype'. Over time, the "rule" for rendering blows up into a huge structure of nested prototypes.
Is there any way to force CIImages to flatten the structure inside memory?
I would be happy if I could do recursive processing, because this would push the power of QuartzCore to the extreme.
I tried the same thing in Quartz Composer. Connecting the output to the input works, but it also takes a lot of memory, and after some time it crashes. Then I tried using the Queue from QC, and everything worked fine. What is the Xcode equivalent of the QC Queue? Or is there any mechanism to rewrite my kernel so that "results" are kept in memory for the next iteration?
It seems like what you're looking for is the CIImageAccumulator class. This allows you to use the output of a filter as its input on the next iteration.
Edit:
For an example of how to use it, you can check out this Apple sample code.
