I have a FlxSprite variable named mySpr and two png files, one.png and two.png, inside the assets/ folder of my project.
Each png file includes 4 frames and has a size of 200x200px. Can I do something like this?
mySpr.loadGraphic("assets/one.png",true,100,100);
mySpr.loadGraphic("assets/two.png",true,100,100);
mySpr.animation.add("run", [0, 1, 2, 3, 4, 5, 6, 7], 10, true);
Or is there any way to achieve a similar result?
You can try using the function FlxSprite.stamp() to "append" the second image to the side of the first (or at any other location); after that it is just a matter of adding another animation.
FlxSprite.stamp function documentation: https://api.haxeflixel.com/flixel/FlxSprite.html#stamp
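A rough sketch of the idea, assuming each 200x200 sheet holds its four 100x100 frames in a 2x2 grid (stacking the second sheet below the first keeps the frame indices contiguous after slicing; the variable names are just illustrative):

```haxe
// Sketch: stack two.png below one.png on one 200x400 canvas,
// then cut the combined graphic into 100x100 frames.
var combined = new FlxSprite();
combined.makeGraphic(200, 400, FlxColor.TRANSPARENT, true);

var first = new FlxSprite();
first.loadGraphic("assets/one.png");
combined.stamp(first, 0, 0);    // becomes frames 0-3 after slicing

var second = new FlxSprite();
second.loadGraphic("assets/two.png");
combined.stamp(second, 0, 200); // becomes frames 4-7 after slicing

mySpr.loadGraphic(combined.graphic, true, 100, 100);
mySpr.animation.add("run", [0, 1, 2, 3, 4, 5, 6, 7], 10, true);
```

Stamping side by side would also work, but then the 400px-wide sheet slices row by row, so the frame order in animation.add would need reshuffling.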
Using OpenSCAD 2019.05 on Mac OS 10.15.7,
difference() {
cube(size = [14, 24, 17], center = false);
% cube(size = [10, 20, 17], center = false);
}
fails (sometimes with the familiar "No geometry" error, sometimes with a cube with no subtraction).
However, with only the substitution of # for %, the expected behavior is produced. Am I misunderstanding the semantics of # and %?
(As you'd expect, I didn't write the code this way, I took CSG output from my original program and boiled it down to this example.)
I used to get confused by them too. % lets you put things in the F5 preview of the model that will not show up in the actual F6 render. See https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Modifier_Characters#Background_Modifier
"Ignore this subtree for the normal rendering process and draw it in transparent gray (all transformations are still applied to the nodes in this tree)."
So the line with % is being ignored when you render, which is exactly why you get the first cube unmodified.
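Side by side, the two modifiers on your example behave like this (comments are mine):

```scad
difference() {
    cube(size = [14, 24, 17], center = false);
    # cube(size = [10, 20, 17], center = false); // # = debug: highlighted, but still subtracted
    // % cube(...)                               // % = background: dropped from the CSG tree,
    //                                           //     so nothing would be subtracted
}
```

In short, # only highlights a subtree; % removes it from the rendered geometry entirely.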
I'm developing a real-time editor with Ace and call editor.setValue() whenever there is a change in the editor, but the cursor then ends up at random positions below the text. What I want is pretty obvious: the cursor should be positioned exactly where the text ends after setValue loads the new text. Any ideas?
That is not very simple at all, because if the text has changed, the same position will not mean the same thing anymore.
If you want to restore the same selection, you can use the toJSON method:
before = editor.selection.toJSON();
editor.setValue(editor.getValue() + "xxx")
editor.selection.fromJSON(before)
but that, in addition to being wasteful for performance, will not work correctly when lines are shifted, e.g. editor.setValue("xxx\n" + editor.getValue()).
In general it is better to use the session.insert/session.remove methods:
editor.session.insert({row: 0, column: 0}, "xxx\n")
editor.session.remove({start: {row: 0, column: 1}, end: {row: 1, column: 1}})
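For the specific case in the question (cursor at the end of the text after loading a new value), Ace's setValue also accepts a second argument controlling where the cursor lands, which may be all that is needed here:

```javascript
// setValue's second argument positions the cursor:
//   1 moves it to the end of the document, -1 to the start.
editor.setValue(newText, 1);

// Equivalently, set the value and then jump to the end:
editor.setValue(newText);
editor.navigateFileEnd();
```

(newText stands in for whatever string your real-time editor is loading.)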
I am running a loop which generates an image fg_modify on every iteration, i.e. the image is generated inside the loop each time with the same name and gets overwritten on the next run. I want to make a video using all these images. Please help me.
for i=1:numframes
%blah blah
%blah blah
%some code
figure; imshow(fg_modify,[])
end
I want to make a video using all images fg_modify which is generated in the for loop.
The first part has to go after your plot in the loop:
plot(x,y)
drawnow
F(i) = getframe(fig)
And then you can play the movie afterwards:
fig = figure;
movie(fig,F,2)
This would repeat the movie twice.
You can find more at:
http://de.mathworks.com/help/matlab/ref/getframe.html
An Example:
for i=1:10
    x = 1:10;
    y = (1:10)*i;
    plot(x,y)
    M(i) = getframe;
end
movie(M,5)
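If the goal is a video file rather than in-MATLAB playback, the same getframe loop can feed VideoWriter; a minimal sketch (filename and frame rate are just examples, and "fg_modify" is produced by your own loop code):

```matlab
% Sketch: write each displayed frame of fg_modify to an .avi file.
v = VideoWriter('out.avi');   % example filename
v.FrameRate = 10;             % example frame rate
open(v);
for i = 1:numframes
    % ... your code that produces fg_modify ...
    imshow(fg_modify, []);
    drawnow;
    writeVideo(v, getframe(gcf));  % capture the current figure
end
close(v);
```

This avoids keeping every frame in memory, since each frame is written to disk as it is captured.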
Use MuPad:
plot(plot::Line3d([0, 0, 0], [a, a, 1], a = 0..1),
plot::Line3d([1, 0, 0], [a, 0, 1], a = 1..0))
To start the video, click on the picture, or choose start from the menu.
Then right-click on the image to save the video. Choose the .avi file format.
I can set a scale like this:
~pp = Scale.phrygian(\pythagorean);
I can then create a Pbind which plays the scale like this:
(
Pbind(
*[
instrument: \default,
scale: ~pp,
degree: Pseq([0, 1, 2, 3, 4, 5, 6, 7], inf),
amp: 0.5,
dur: 0.5
]
).play;
)
But Synth.new doesn't seem to get it at all (just results in silence):
b = Synth.new(\default, [\scale: ~pp, \degree: 3, \amp, 0.5]);
Interestingly, if I remove the scale parameter:
b = Synth.new(\default, [\degree: 3, \amp, 0.5]);
then I get a note, but it's always the same note. It doesn't respond to the degree parameter.
Ultimately, I would like to be able to trigger notes from an external OSC device (my phone!). This means hooking up OSCFunc to listen out for certain triggers, and play notes from a scale when those OSC events occur. I thought I could use Synth.new inside OSCFunc to actually play the notes, but it doesn't seem to know about scales, so I'm a bit stuck.
Can anyone provide any advice about how to achieve this?
Have a good read of the Pattern Guide, in particular Pattern Guide 07: Value Conversions. It's a good tutorial. It will tell you that these magical conversions are not used everywhere in SuperCollider, but only when you use Event-based scheduling such as Patterns (e.g. your Pbind). The value conversions are actually defined in "the default Event", as described further in that tutorial article.
One consequence of all of this is that, if you want to launch just one note but you still want value conversions, you can do it with the Event style of playing notes, which creates an event using () and then calls .play on it:
~synth = (instrument: \default, scale: [0,2,4,5,7,9,11], degree: 12.rand, amp: 0.5).play;
~synth = (instrument: \default, scale: [0,3,6,9], degree: 12.rand, amp: 0.5).play;
This still returns a Synth object.
See the Event helpfile for more on this way of doing it.
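Tying this back to the OSC goal in the question, an OSCFunc can simply build and play one of these Events per incoming message. A sketch, where the OSC path '/note' and the message layout are assumptions to be adjusted to whatever the phone app actually sends:

```supercollider
// Sketch: play one scale degree per incoming OSC message.
(
~pp = Scale.phrygian(\pythagorean);
OSCFunc({ |msg|
    var degree = msg[1];  // assumed: first argument after the path
    (instrument: \default, scale: ~pp, degree: degree,
     amp: 0.5, dur: 0.5).play;
}, '/note');
)
```

Because the Event machinery does the scale/degree-to-frequency conversion, no change to the \default SynthDef is needed.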
Please consider this data:
var data = [];
data.segments = [
{ "id": "A", "start": 0, "end": 4},
{ "id": "B", "start": 5, "end": 9},
{ "id": "C", "start": 10, "end": 14},
];
data.stream = [
[ 0, 0, 0, 0, 0,
65, 60, 75, 85, 60,
20, 30, 20, 25, 15,
],
];
I want to display it as three distinct parts, where the A segment (ie: the first 5 entries in the stream) would be red (or whatever the color), the B segment (the middle 5 entries) green and the C segment (the last 5 entries) blue.
Here's what it would look like with help from a photo-editing program:
So far, I'm able to display data.stream as a stream. However, I'm stuck at breaking it into segments.
If my data was structured differently (as in this question), things would be easier. However, the way the data is structured right now is sort of ideal, as it lets me separate the segment definitions from the stream data. This is useful as I want to be able to use different segments down the line. (You can look at those segments as sounds or words inside an audio file. Sometimes I would focus on individual sounds, sometimes on individual words, but the stream would always be the same.)
I put a working demo on JSFiddle here: http://jsfiddle.net/vsFhf/
How can I color the different parts of the stream?
Let me know if you need more details.
Thank you for the help-
Fabien
No matter what, you still need individual <path> elements for each segment. You could construct a segmented data array as #ValarDohaeris suggests, but you can also do it without transforming the data:
Instead of binding to data.stream, bind to data.segments, which lets you create that one <path> per segment. Then you call pathGenerator for each of those <path> elements, passing in a slice of the stream you're rendering, data.stream[0]. You'll also need to X-translate each <path> to the appropriate position, using your x scale function.
Here's the modified fiddle.
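In code, the approach looks roughly like this (pathGenerator, x, svg and the color mapping are stand-ins for the corresponding pieces of your fiddle, not exact names from it):

```javascript
// Sketch: one <path> per segment, each fed only its slice of the stream.
var color = { A: "red", B: "green", C: "blue" };

svg.selectAll("path")
    .data(data.segments)           // bind segments, not the stream
  .enter().append("path")
    .attr("d", function(seg) {
        // generate the shape from this segment's slice of the stream
        return pathGenerator(data.stream[0].slice(seg.start, seg.end + 1));
    })
    .attr("transform", function(seg) {
        // shift the path to its horizontal position via the x scale
        return "translate(" + x(seg.start) + ",0)";
    })
    .style("fill", function(seg) { return color[seg.id]; });
```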
Would it help to split the data according to your segment definitions?
var segmentdata = data.segments.map(function(segment, i) {
return data.stream[0].slice(segment.start, segment.end + 1);
});
This will create:
segmentdata = [[0,0,0,0,0], [65,60,75,85,60], [20,30,20,25,15]]
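Put together with the data from the question, the split can be checked end to end:

```javascript
// Runnable check of the split, using the question's data.
var data = [];
data.segments = [
  { "id": "A", "start": 0, "end": 4 },
  { "id": "B", "start": 5, "end": 9 },
  { "id": "C", "start": 10, "end": 14 }
];
data.stream = [
  [ 0, 0, 0, 0, 0,
    65, 60, 75, 85, 60,
    20, 30, 20, 25, 15 ]
];

// slice is end-exclusive, hence end + 1
var segmentdata = data.segments.map(function(segment) {
  return data.stream[0].slice(segment.start, segment.end + 1);
});

console.log(JSON.stringify(segmentdata));
// -> [[0,0,0,0,0],[65,60,75,85,60],[20,30,20,25,15]]
```

Each inner array can then be handed to the path generator for its own <path>.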