How to do pitch replacement in Praat?

I tried to replace the pitch, but nothing happened when I pressed "Replace pitch tier". What am I doing wrong?
In order to clone a pitch contour and apply it to a different signal (replacing its original pitch contour):
1. Select a sound in the Objects window.
2. Choose "To Manipulation..." to create a Manipulation object.
3. Select the Manipulation object and choose "Extract pitch tier".
4. Create a Manipulation object from a second sound (the one whose pitch will be replaced).
5. Select both the second Manipulation object and the pitch tier of the first one, then press "Replace pitch tier".
6. Listen to the second Manipulation object (Play): the pitch of the first one is now applied to it.
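If you need to do this for many files, the same steps can be scripted. Below is a minimal script sketch; the file names and the Manipulation parameters (0.01 s time step, 75–600 Hz pitch range) are placeholder assumptions to adapt to your recordings:

# Read the donor (pitch source) and target sounds; file names are placeholders.
donor = Read from file: "donor.wav"
target = Read from file: "target.wav"
# Create Manipulation objects (time step 0.01 s, pitch floor 75 Hz, ceiling 600 Hz).
selectObject: donor
donorManip = To Manipulation: 0.01, 75, 600
donorPitch = Extract pitch tier
selectObject: target
targetManip = To Manipulation: 0.01, 75, 600
# Select the target Manipulation together with the donor PitchTier and replace.
selectObject: targetManip
plusObject: donorPitch
Replace pitch tier
# Resynthesise and listen to the result.
selectObject: targetManip
result = Get resynthesis (overlap-add)
selectObject: result
Play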

Related

How to set a Sprite to a specific Frame in Godot

I have the player move around, and when he enters a new room (via instancing) his sprite shows him facing in the default direction (in my case, down). So if you enter a room from any other direction it looks weird, because for a short moment you can see the player facing down even if you came from the right. How can I tell Godot to set the player sprite to a specific frame in code, so I can set it to the proper frame for each direction? I'm new to Godot and I used HeartBeast's Action RPG tutorial for my movement, so it's using an AnimationTree and an AnimationPlayer. I tried "set_frame" but Godot just says it doesn't know the method.
If you are following the tutorial series I think you are following (Godot Action RPG), you are using an AnimationTree with an AnimationNodeBlendSpace2D (BlendSpace2D).
The BlendSpace2D picks an animation based on an input vector called "blend_position". This way you can pick an animation based on the direction of motion or the direction the player character is looking. For example, you can have "idle_up", "idle_down", "idle_left", and "idle_right" animations, and let the BlendSpace2D pick one at runtime based on a direction vector.
Thus, you need to set the "blend_position" of the BlendSpace2D like this:
animationTree.set("parameters/NameOfTheBlendSpace2D/blend_position", vector)
Where:
animationTree is a variable holding a reference to the AnimationTree node.
"NameOfTheBlendSpace2D" is the name of the BlendSpace2D you want to set (e.g. "Idle").
vector is a Vector2 with the direction you want (e.g. Vector2.UP).
This is shown in episode 6 of the tutorial series (Animation in all directions with an AnimationTree).
There is a reference project by HeartBeast at arpg-reference, which includes a function update_animation_blend_positions that looks like this:
func update_animation_blend_positions():
    animationTree.set("parameters/Idle/blend_position", input_vector)
    animationTree.set("parameters/Run/blend_position", input_vector)
    animationTree.set("parameters/Attack/blend_position", input_vector)
    animationTree.set("parameters/Roll/blend_position", input_vector)
Here "Idle", "Run", "Attack", and "Roll" are BlendSpace2Ds, each configured with animations for the corresponding action, and this function updates them in sync so that each picks the correct animation.
As far as I can tell, the code in the repository has been refactored beyond what is shown in the tutorial series. The repository code is under the MIT licence.
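For completeness, here is a sketch of how such a function is typically driven from the movement code. The $AnimationTree node path and the ui_* input actions are my assumptions, not taken from the repository:

onready var animationTree = $AnimationTree

func _physics_process(delta):
    var input_vector = Vector2(
        Input.get_action_strength("ui_right") - Input.get_action_strength("ui_left"),
        Input.get_action_strength("ui_down") - Input.get_action_strength("ui_up")
    )
    # Only update the blend positions while there is input, so the last
    # facing direction is remembered when the player stops moving.
    if input_vector != Vector2.ZERO:
        animationTree.set("parameters/Idle/blend_position", input_vector)
        animationTree.set("parameters/Run/blend_position", input_vector)

Because the Idle BlendSpace2D keeps its last blend_position, the character keeps facing the direction it last moved in, which addresses the original question.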

Frame-by-frame values for pitch and intensity?

Is there a way I can view the frame-by-frame values for pitch and intensity in Praat? As of now, I can only view the mean pitch and intensity across the whole time window of my recording. I have included the code that I have used so far. When I run the code, I get the error message shown in the attached screenshot. Thanks in advance for your help!
#creating a table for the values
timeseriesp = Create Table with column names: "timeseriesp", 0, "p_ts"
#reading in the file
sound = Read from file: fileName$
#selecting the sound object and extracting continuous pitch
selectObject: sound
View & Edit
#610 is length of video
Select: 0, 610
p_ts = Pitch listing
#putting data into the table
selectObject: timeseriesp
Append row
row = Get number of rows
Set string value: row, "test", p_ts
Two points here.
First, the error message you are getting is due to trying to use an editor command ("Select") from outside the editor. To avoid the error, embed the command in an editor ... endeditor structure:
editor sound
Select: 0, 610
endeditor
Second, and more generally, you can get pitch and intensity values more quickly without resorting to the editor window at all. This will speed up your script if you are processing a lot of files.
Create Pitch and Intensity objects from your sound file. You can query those objects for pitch/intensity values at specific times/frames and save them in your table.
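For example, here is a minimal sketch; the analysis parameters (75 Hz floor, 600 Hz ceiling) are placeholders to adapt to your data:

sound = Read from file: fileName$
# Pitch: automatic time step, floor 75 Hz, ceiling 600 Hz.
selectObject: sound
pitch = To Pitch: 0.0, 75, 600
# Intensity: minimum pitch 75 Hz, automatic time step, subtract mean.
selectObject: sound
intensity = To Intensity: 75, 0.0, "yes"
# Walk through the pitch frames and print time, F0, and intensity per frame.
# Unvoiced frames come out as undefined.
selectObject: pitch
nFrames = Get number of frames
for iframe to nFrames
    selectObject: pitch
    t = Get time from frame number: iframe
    f0 = Get value in frame: iframe, "Hertz"
    # Intensity has its own frame grid, so query it by time instead.
    selectObject: intensity
    db = Get value at time: t, "cubic"
    appendInfoLine: t, tab$, f0, tab$, db
endfor

Instead of printing to the Info window, you can write each value into your table with Append row and Set numeric value.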

Using AutoLISP to select multiple copies of an object one by one

I am trying out AutoLISP for the first time.
In my AutoCAD drawing I have around 300 copies of an object spread at different places.
I want to mirror each object around a fixed axis through the middle of the object.
The first roadblock that I am getting is selecting each copy of the object one by one for doing the mirroring operation.
Can anyone help me with that? Is it possible?
You can obtain a selection using the AutoLISP ssget function with an appropriate mode string and filter list argument, permitting selection of objects whose properties meet your selection criteria.
If your selection is to be automated with no user input (for example, using the X mode string to query the drawing database), you will need a property by which to distinguish the target objects from other objects in the drawing - this may depend on the type of object that you are looking to select.
For example, you can filter for all objects of the same type using DXF group 0; within the same layout using DXF group 410; residing on the same layer using DXF group 8; or by other properties, such as colour, linetype, or lineweight.
Filters based on the object geometry will be dependent on the type of object that you are looking to target, for example, a selection of circles of the same radius could be acquired by filtering on DXF group 40; or standard (non-dynamic) blocks of the same name using DXF group 2.
Upon obtaining your selection, you'll then need to iterate over the selection such that the mirror operation can be performed on each object individually (since the mirror axis will be different for every object in the selection). To accomplish this, you can choose one of the methods that I describe in my tutorial on Selection Set Processing.
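As a rough sketch of how those pieces fit together (the INSERT entity type, the block name "MYBLOCK", and the vertical mirror axis are placeholder assumptions; the tutorial covers more robust iteration patterns):

(defun c:mirrorall ( / ss i ent pt p2 )
  ;; "_X" queries the whole drawing database; this filter list matches
  ;; insertions of a block named "MYBLOCK" - substitute your own criteria.
  (if (setq ss (ssget "_X" '((0 . "INSERT") (2 . "MYBLOCK"))))
    (repeat (setq i (sslength ss))
      (setq ent (ssname ss (setq i (1- i)))
            pt  (cdr (assoc 10 (entget ent)))  ; DXF group 10 = insertion point
            p2  (mapcar '+ pt '(0.0 1.0 0.0))) ; vertical axis through that point
      ;; Mirror each insertion about its own axis; "_Y" erases the source
      ;; object, so each copy is replaced by its mirrored counterpart.
      (command "_.MIRROR" ent "" "_non" pt "_non" p2 "_Y")
    )
  )
  (princ)
)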

Define timeframe for animation in KML

Is there any possibility to define a time window for an animation of KML objects? For example, I have two occurrences animated (polygon 1 appears on 1/1/2018 and polygon 2 on 6/10/2018). Is there any way to define that the whole animation should last for, e.g., 30 or 45 seconds? I only see that Google Earth always interpolates the animation time depending on the given
<TimeSpan><begin>2004-03</begin><end>2004-04</end></TimeSpan>
dates of the document.
In the current Google Earth Pro interface, there is no way to specify the duration (in playback-time) of an animation like this. As you noted, it expands the time slider to include the dates & times from all loaded KMLs, and plays across the slider at a preset speed (adjustable in the slider settings).
One way you could apply time playback with precise control is to set up a KML Tour which animates between two views (with time tags applied), over a specific number of seconds. Then you could have your user play it back using the Tour interface instead, and see the timing you want. Unfortunately KML Touring is rather complex with a long learning curve. There are simple things you can do (possibly including something like your request) using the tour recording interfaces in Earth Pro, but to really harness the full power of touring you'll need to create custom KML code, so consider yourself warned. :-)
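For illustration, here is a minimal tour sketch; the coordinates, dates, and durations are placeholders. It pins the time slider to the first date, then sweeps it to the second date over 30 seconds of playback:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <gx:Tour>
    <name>30-second time animation</name>
    <gx:Playlist>
      <!-- Jump to the start date... -->
      <gx:FlyTo>
        <gx:duration>0.1</gx:duration>
        <LookAt>
          <gx:TimeStamp><when>2018-01-01</when></gx:TimeStamp>
          <longitude>16.37</longitude><latitude>48.21</latitude>
          <range>50000</range>
        </LookAt>
      </gx:FlyTo>
      <!-- ...then advance to the end date over 30 seconds of playback. -->
      <gx:FlyTo>
        <gx:duration>30</gx:duration>
        <gx:flyToMode>smooth</gx:flyToMode>
        <LookAt>
          <gx:TimeStamp><when>2018-06-10</when></gx:TimeStamp>
          <longitude>16.37</longitude><latitude>48.21</latitude>
          <range>50000</range>
        </LookAt>
      </gx:FlyTo>
    </gx:Playlist>
  </gx:Tour>
</kml>

Because both FlyTo elements share the same LookAt, the camera holds still while the time slider sweeps between the two dates, animating any time-tagged features in between.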

Image similarity detection

I've been playing around writing a scraper that scrapes Deviantart.com. It saves a copy of new images locally, and also creates a record in a Postgresql DB for the image. My problem: as new images come in, how do I know if this new image corresponds to an image I've seen before? Dupes are fairly rare on DA, but at the same time, this is an interesting problem in a more general sense.
Thoughts on ways to proceed?
Right now the Postgresql DB is populated as I scrape images, and it has a table which looks like:
CREATE TABLE Image
(
id SERIAL PRIMARY KEY NOT NULL,
url varchar(5000) UNIQUE NOT NULL,
dateadded timestamp without time zone default (now() at time zone 'utc'),
width int,
height int
);
Where url is the link to the image as I scraped it from DA (ex: http://th05.deviantart.net/fs70/PRE/f/2014/222/2/3/sketch_dump_56_by_lilaira-d7uj8pe.png), dateadded is the datetime the scraper found the image, and width & height are the image dimensions.
I currently don't store the image itself in the database, but I do keep a local mirror -- I take the url for the image and wget -r -nc the file. So for a url: http://th05.deviantart.net/fs70/PRE/f/2014/222/2/3/sketch_dump_56_by_lilaira-d7uj8pe.png I keep a local copy at <somedir>/th05.deviantart.net/fs70/PRE/f/2014/222/2/3/sketch_dump_56_by_lilaira-d7uj8pe.png
Now, image recognition in the general case is quite hard. I want to be able to handle things like slight resizes, which I could account for by normalizing all images kept to a specific resolution, and normalize the query image to that same resolution at query time. I want to be able to handle things like change of format (PNG vs JPG vs etc) which I could do by reading an image file into a normalized format (ex: uncompressed RGB values for each pixel, though ideally some "slack" would be tolerated here).
Nice to haves (would be willing to give up for simplification/better accuracy):
I'd like to be able to handle cropping an image (ex: I've previously seen imageA, and somebody takes imageA, crops it, and uploads it as imageB; I'd like to notice that as a duplicate).
I'd like to be able to handle watermarking an image with a logo.
I'd like to be able to handle cropping in the case where the new image to classify is a subimage of a previously seen image (i.e. I have imageA stored, somebody crops imageA, and I'd like to be able to map that cropped image back to imageA).
Constraints/extra info:
I'm not at all interested in finding images that are different yet similar (ex: two distinct photos of the same Red Bus should be reported as two distinct images)
while I'm not entirely opposed to using metadata (ex: artist, image category, etc.), I'd like to keep this as constrained to just the image data (EXIF data, resolution, RGB colour values) as possible.
an image that is sized down and appears inside a new, larger image I wish to consider as different. Ex: I have imageA, I resize it to 50x50, and that 50x50 grid appears in a new image; I would not consider the new image "the same" as imageA (though I suppose by the criteria outlined previously I would consider imageA a duplicate of the new image)
It would be nice, but not required, if one could detect "minor" revisions in the image (ex: a blanket change to the gamma value of an image, etc.)
Thoughts? Suggestions?
For my use case I'm far more concerned about false positives than false negatives, and as such a "fuzzy match" approach should err on the side of caution.
In case it matters I'm writing all of this in Python, though TBH I'm happy to use an alternate tech if it solves my problem elegantly/efficiently.
I would grab a small subimage somewhere not near the edges, and cross-correlate it within the vicinity of its source location in your database images. You can resample it prior to cross-correlation to account for small resizes, and you can choose the size of the vicinity that you match against to account for asymmetrical crops up to a certain percentage.
To avoid perfect fits on featureless regions (e.g. the sky), you could use local image variation as a selection criterion for the subimage location.
This would still be quite slow, so it will be necessary to use a global image metric to first select candidate duplicates from the database (e.g. the color histograms mentioned by danf).
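As a rough Python sketch of the matching step (the question mentions Python), using OpenCV; the patch location, normalised width, and thresholds are placeholder choices:

# Cross-correlate a central patch of the candidate image inside a stored image.
# Requires opencv-python; all parameters below are placeholders to tune.
import cv2

def looks_like_duplicate(stored_path, candidate_path,
                         width=512, score_threshold=0.95, min_variation=10.0):
    stored = cv2.imread(stored_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    if stored is None or candidate is None:
        return False
    # Normalise both images to a common width to absorb small resizes.
    stored = cv2.resize(stored, (width, stored.shape[0] * width // stored.shape[1]))
    candidate = cv2.resize(candidate, (width, candidate.shape[0] * width // candidate.shape[1]))
    # Grab a patch from the middle of the candidate, away from the edges.
    h, w = candidate.shape
    patch = candidate[h // 4: h // 2, w // 4: w // 2]
    # Skip featureless patches (e.g. sky), per the variation criterion above.
    if patch.std() < min_variation:
        return False
    if patch.shape[0] > stored.shape[0] or patch.shape[1] > stored.shape[1]:
        return False
    # Normalised cross-correlation of the patch within the stored image.
    scores = cv2.matchTemplate(stored, patch, cv2.TM_CCOEFF_NORMED)
    return scores.max() >= score_threshold

In practice you would run this only against the few candidates surfaced by the cheap global prefilter, not against every row in the table.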
