I am using AudioKit in my project. Following the process suggested in the Mixing Nodes playground example, I am playing multiple audio files. My requirement is to upload the mixed audio to the server so it can be displayed and played on some other screens. I followed the suggestion in "How to get and save the mix of multiple audios into a single audio in Swift", but it's not working.
Please give suggestions for getting the mixed audio output so it can be uploaded to the server.
AudioKit provides a node recorder that can be attached to any node in your signal chain (though it seems to prefer being connected to mixer nodes).
First set up a place for the recording to be kept:
let file = try AKAudioFile()
Then assign a recorder to record to that file:
let recorder = try AKNodeRecorder(node: nodeYouWantToRecord, file: file)
Start recording:
try recorder.record()
Stop recording at a later time:
recorder.stop()
Then save your file:
file.exportAsynchronously(name: "nameString",
                          baseDir: .documents,
                          exportFormat: .caf) { [weak self] _, _ in
    // optionally do something after exporting
}
Check out how this playground saves the AudioKit output: https://audiokit.io/playgrounds/Basics/Mixing%20Nodes/
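Putting those steps together, here is a minimal sketch assuming AudioKit 4 with two already-configured players feeding an AKMixer; the player names, the export name, and the upload call are placeholders rather than part of the answer above:
// Minimal sketch (AudioKit 4 assumed); player1/player2 are your existing player nodes
let mixer = AKMixer(player1, player2)
AudioKit.output = mixer
try AudioKit.start()

// Record whatever the mixer produces
let file = try AKAudioFile()
let recorder = try AKNodeRecorder(node: mixer, file: file)
try recorder.record()
player1.play()
player2.play()

// ... later, once playback has finished ...
recorder.stop()

file.exportAsynchronously(name: "mixdown",
                          baseDir: .documents,
                          exportFormat: .caf) { exportedFile, error in
    guard let exportedFile = exportedFile, error == nil else { return }
    // exportedFile.url is the single mixed file; upload it to your server here,
    // e.g. URLSession.shared.uploadTask(with: request, fromFile: exportedFile.url).resume()
}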
We see an issue on Stream Analytics when using a blob reference input: upon restarting the stream, it outputs duplicate values for things joined to it. I assume this is an issue with having more than one blob active at the time it restarts. Currently we pull the files from a folder path in ADLS structured as Output/{date}/{time}/Output.json, which ends up being Output/2021/04/16/01/25/Output.json. These files have a key that the data matches on in the stream with:
IoTData
LEFT JOIN kauiotblobref kio
ON kio.ParentID = IoTData.ConnectionString
which I don't see any issue with, but those files are actually getting created every minute, on the minute, by an Azure function. So it may be possible that during the start of Stream Analytics it grabs both the last file and the one that gets created next. (That would be my guess, but I'm not sure how we would fix that.)
Here's a visual of the issue in Power BI (screenshots showing the peak and the trough):
This is easily explained when looking at the Cosmos DB for the device it's capturing from: there are two entries with the same value, assetID, and timestamp but different recordIDs (which just means Cosmos DB counted it as two separate events). This shouldn't be possible, because we can't send duplicates with the same timestamp from a device.
This seems to be a core issue with blob storage inputs on Stream Analytics, since the job traditionally takes more than a minute to start. The best way I've found to resolve it is to stop the corresponding functions before starting the stream back up. I'm working to automate this through CI/CD pipelines, which is good practice anyway for editing the stream.
I am currently trying to import a single-label dataset that contains ~7300 images. I use a single CSV file in the following format to create the dataset (paths shortened):
gs://its-2018-40128940-automl-vis-vcm/[...].jpg,CAT_00
gs://its-2018-40128940-automl-vis-vcm/[...].jpg,CAT_00
gs://its-2018-40128940-automl-vis-vcm/[...].jpg,CAT_00
[...]
However, the import process failed after processing for over 7 hours (which I find unusually long based on previous experience) with the following error:
File unreadable or invalid gs://[...]
The strange thing is: The files were there and I was able to download and view them on my machine. And once I removed all entries from the CSV except the two "unreadable or invalid" ones and imported this CSV file (same bucket), it worked like a charm and took just a few seconds.
Another dataset with 500 other images caused the same strange behavior.
I have imported and trained a few AutoML Vision models before and I can't figure out what is going wrong this time. Any ideas or debugging tips appreciated. The GCP project is "its-2018-40128940-automl-vis".
Thanks in advance!
"File unreadable or invalid" is returned when a file either cannot be accessed from GCS (it cannot be read due to file size or permissions) or when the file format is considered invalid, for example when an image is in a different format than its extension suggests, or in a format that is not supported by the image service.
When there are errors, the pipeline may be slow because it currently retries with exponential backoff. It tries to detect non-retryable errors and fail fast, but errs on the side of retrying when unsure.
It would be best to ensure the images are in the right format, for example by re-converting them into one of the supported formats. Depending on your platform, there are tools to do that.
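As one option (not from the original answer), a short Python sketch using Pillow could re-encode everything as baseline JPEG before re-uploading to the bucket; the folder names here are assumptions:
# Re-encode every image as JPEG so the bytes match the .jpg extension.
# src_dir / dst_dir are hypothetical local folders mirroring the GCS bucket.
from pathlib import Path
from PIL import Image

src_dir = Path("images_original")
dst_dir = Path("images_reencoded")
dst_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.jpg"):
    with Image.open(path) as img:
        # convert("RGB") drops alpha/palette modes that JPEG cannot store
        img.convert("RGB").save(dst_dir / path.name, "JPEG", quality=95)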
When I checked a file uploaded via the UI of GCP Storage, I found that to match it we have to upload the file with the following configuration:
storage.bucket(bucketName).upload(`./${csv_file}`, {
  destination: `csv/${csv_file}`,
  // Support for HTTP requests made with `Accept-Encoding: gzip`
  gzip: false,
  metadata: {},
});
I'm using Xamarin-Forms-Labs ISoundService in Xamarin to play audio in a Timer.
Every 30 seconds I have it play an audio file (each file is different)
I can get it to play subsequent files if the file size is small (less than 5 KB), but if it is larger, it keeps replaying the "larger audio" file in place of the subsequent clips.
Any thoughts on how I can resolve this? Am I "stopping" the audio properly? Should I stop asynchronously, since the play is asynchronous?
I appreciate the time and expertise.
Xamarin-Forms-Labs ISoundService
https://github.com/XLabs/Xamarin-Forms-Labs/blob/master/src/Platform/XLabs.Platform/Services/Media/ISoundService.cs
My code
var audioFile = intervalSettingsAudioDict[currentInterval];
Console.WriteLine(audioFile);
soundService = DependencyService.Get<ISoundService>();
if (soundService.IsPlaying)
{
    soundService.Stop();
}
soundService.Volume = 1.0;
soundService.PlayAsync(audioFile);
I think the problem is that the default behavior of DependencyService.Get() is to act as a Singleton: http://forums.xamarin.com/discussion/22174/is-a-dependencyservice-implementation-class-treated-as-a-singleton.
I solved it using the following:
var SoundService = DependencyService.Get<ISoundService>(DependencyFetchTarget.NewInstance);
This worked on iOS, but I'm still troubleshooting an unrelated problem on Android.
HTH.
I am building a simple real-time delay system on my Mac (2010-11 model; OS X Mavericks; serial audio input) using Simulink (MATLAB 2014a), consisting of an 'Audio Input' block, an 'Audio Output' block, a 'delay' block, and an adder (to add the delayed signal to the original signal), but I receive the error 'Error in 'untitled/From Audio Device': A given audio device may only be opened once.' twice for the audio input block.
When I try the same thing using an audio file as my input, I get the desired results. Also, the same diagram works fine on a Windows machine.
Please help.
Thank you.
I think the issue is that you are trying to output a sound to the audio device while at the same time trying to read from the audio device. That won't work; you can't do that. See "Keep playing a sound over and over again in Matlab?" for a similar issue in MATLAB. You need to somehow wait for the reading part to complete before outputting the sound back to the audio device, or use two different devices, one for reading and one for writing.
I suspect the same model worked on a Windows machine because it probably had two audio devices (maybe a built-in and an external), and the model automatically detected this, reading from one device, and outputting to the other. The documentation for both blocks says:
Use the Device parameter to specify the device from which to acquire
audio. This parameter is automatically populated based on the audio
devices installed on your system.
which again, reinforces that theory. If you still have access to the Windows machine, you can double-check that this is the case.
I am using an external soundfont to play MusicStrings and everything is working fine. When I use player.saveMidi(etc, etc), the files are saved with the original MIDI soundfont.
Soundbank soundbank = MidiSystem.getSoundbank(new File("SGM-V2.01.sf2"));
Synthesizer synth = MidiSystem.getSynthesizer();
synth.open();
synth.loadAllInstruments(soundbank);
Player player = new Player(synth);
Pattern pattern = new Pattern("C5majw C5majw C5majw");
player.play(pattern); // works fine with external soundbank
player.saveMidi(pattern, filename); //Doesn't save with external soundbank instruments
Is there any workaround or built in feature that supports this functionality?
Thanks!
Keep in mind that MIDI is a set of musical instructions. Regardless of whether you load a soundbank into the Java program, when you save as MIDI, you're only saving musical instructions. (By "musical instructions", I mean things like "NOTE ON" or "INSTRUMENT CHANGE" but not actual musical sound data)
It sounds like what you want to do is render your music into a WAV file using the sounds from the soundbank that you have loaded. To do this, you'll want to use the Midi2WavRenderer available here: http://www.jfugue.org/code/Midi2WavRenderer.java