Recording and sending audio in NativeScript with nativescript-audio

I'm working with Angular/NativeScript and the plugin 'nativescript-audio'. I need to record audio with the best possible quality and send it to an API with HttpClient.
There are a few things I need to do, like recording at good quality and converting the audio to base64, but I don't have much knowledge of native audio.
Currently, using this plugin, I can record the audio and play it back locally, but when I send it as base64 it arrives unrecognizable at the API.
What I'm doing:
private async _startRecord(args) {
    if (!this._recorder.hasRecordPermission()) await this._askRecordPermission(args);
    try {
        this._isRecording = true;
        await this._recorder.start(this._getRecorderOptions());
    } catch (err) {
        console.log(err);
    }
}

private _getRecorderOptions(): AudioRecorderOptions {
    let audioFileName = 'record';
    let audioFolder = knownFolders.currentApp().getFolder('audio');
    let recordingPath = `${audioFolder.path}/${audioFileName}.${this.getPlatformExtension()}`;
    let androidFormat, androidEncoder;
    if (platform.isAndroid) {
        androidFormat = 4; // android.media.MediaRecorder.OutputFormat.AMR_WB
        androidEncoder = 2; // android.media.MediaRecorder.AudioEncoder.AMR_WB
    }
    return {
        filename: recordingPath,
        format: androidFormat,
        encoder: androidEncoder,
        metering: true,
        infoCallback: info => { console.log(JSON.stringify(info)); },
        errorCallback: err => this._isRecording = false
    };
}
Then:
let audioFileName = 'record';
let audioFolder = knownFolders.currentApp().getFolder('audio');
let file: File = audioFolder.getFile(`${audioFileName}.${this.getPlatformExtension()}`);
let b = file.readSync();
var javaString = new java.lang.String(b);
var encodedString = android.util.Base64.encodeToString(
    javaString.getBytes(),
    android.util.Base64.DEFAULT
);
this.service.sendFile(encodedString)
    .subscribe(e => {
        console.log(e);
    }, error => {
        console.log('ERROR ////////////////////////////////////////////');
        console.log(error.status);
    });
The service:
sendFile(fileToUpload: any): Observable<any> {
    let url: string = `myapi.com`;
    let body = { "base64audioFile": fileToUpload };
    return this.http.post<any>(url, body, {
        headers: new HttpHeaders({
            // 'Accept': 'application/json',
            // 'Content-Type': 'multipart/form-data',
        }),
        observe: 'response'
    });
}
I've already tried changing the recording options in several ways, but I don't know which ones are right for the best audio quality, or which formats and encoders I need:
androidFormat = 3; // android.media.MediaRecorder.OutputFormat.AMR_NB
androidEncoder = 1; // android.media.MediaRecorder.AudioEncoder.AMR_NB
channels: 2,
sampleRate: 96000,
bitRate: 1536000,
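(Editor's note: for reference, a minimal sketch of settings that should give noticeably better quality than AMR, assuming the plugin accepts the same AudioRecorderOptions fields shown above. The specific values are an untested assumption, not the plugin's documented defaults:)

import { AudioRecorderOptions } from 'nativescript-audio';

// A hypothetical high-quality preset: AAC in an MPEG-4 container.
// 2 = android.media.MediaRecorder.OutputFormat.MPEG_4
// 3 = android.media.MediaRecorder.AudioEncoder.AAC
const highQualityOptions: AudioRecorderOptions = {
    filename: recordingPath,  // e.g. `${audioFolder.path}/record.m4a`
    format: 2,                // MPEG_4 container (Android only)
    encoder: 3,               // AAC encoder (Android only)
    channels: 2,              // stereo
    sampleRate: 44100,        // CD-quality sample rate; AMR is limited to 8/16 kHz mono
    bitRate: 128000,          // 128 kbps is a typical AAC music bitrate
    metering: true,
    errorCallback: err => console.log(err)
};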
The resulting base64 varies a lot depending on the encoding I use, but so far I haven't managed to get anything recognizable, just some hissing and unrecognizable noise.

First of all, you can pass the bytes directly to the encodeToString(...) method. You shouldn't create a string from the bytes; that round trip isn't valid for binary data and will corrupt it.
Also, use the NO_WRAP flag instead of DEFAULT so the output contains no line breaks.
var encodedString = android.util.Base64.encodeToString(
    b,
    android.util.Base64.NO_WRAP
);
Here is a Playground Sample I wrote a while ago to test base64 encoding on iOS & Android. You might have to update the file URL; the one in the source gives a 404 (I had just picked a random mp3 file from the internet for testing).
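(Editor's note: to verify the audio survives the trip, it can help to decode the payload on the server and play the resulting file. A minimal sketch in Node/TypeScript, assuming the { base64audioFile } JSON body from the service above; the Express endpoint and file name are illustrative, not part of the original API:)

import express from 'express';
import { writeFileSync } from 'fs';

const app = express();
// Raise the body limit: base64 audio payloads are ~33% larger than the raw file.
app.use(express.json({ limit: '50mb' }));

app.post('/audio', (req, res) => {
    // Decode the field the client sends in its JSON body.
    const buf = Buffer.from(req.body.base64audioFile, 'base64');
    // Write it out with the same extension the client recorded (assumed .m4a here);
    // if this file plays, the encode/transfer pipeline is intact.
    writeFileSync('received.m4a', buf);
    res.sendStatus(200);
});

app.listen(3000);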


How to make a bot send a message to a specific channel after receiving a reaction from a certain message

So I'm trying to develop a bot for a very small project (I'm not a programmer or anything, just need to do this one thing). All I need it to do is collect reactions from a specific message I sent, and send another message to a channel as soon as it detects a reaction. The message would contain the reactor's tag and some text. I would need it to be actively collecting reactions all the time, without any limit. I tried looking through the documentation but I don't really know how to implement the .awaitmessageReactions or whatever it's called. Can you help me out?
You can use the createReactionCollector method for this. Note that if the bot goes down, the collector stops with it.
const Discord = require('discord.js');
const bot = new Discord.Client();

let targetChannelId = '1232132132131231';

bot.on('ready', () => {
    console.log(`${bot.user.tag} is ready on ${bot.guilds.cache.size} servers!`);
});

bot.on('message', (message) => {
    if (message.content === 'test') {
        message.channel.send(`I'm awaiting reactions on this message`).then((msg) => {
            // Ignore reactions added by bots.
            const filter = (reaction, user) => !user.bot;
            const collector = msg.createReactionCollector(filter);
            collector.on('collect', (reaction, user) => {
                let channel = message.guild.channels.cache.get(targetChannelId);
                if (channel) {
                    let embed = new Discord.MessageEmbed();
                    embed.setAuthor(
                        user.tag,
                        user.displayAvatarURL({
                            dynamic: true,
                            format: 'png',
                        }),
                    );
                    embed.setDescription(`${user} (${user.tag}) reacted with: ${reaction.emoji}`);
                    channel.send(embed);
                }
            });
            collector.on('end', (collected, reason) => {
                msg.reactions.removeAll();
            });
        });
    }
});

bot.login('token');
Or you can use the messageReactionAdd event and handle reactions on a specific message.
const Discord = require('discord.js');
const token = require('./token.json').token;

// Partials are needed to receive reactions on messages sent before the bot started.
const bot = new Discord.Client({ partials: ['MESSAGE', 'CHANNEL', 'REACTION'] });

bot.once('ready', () => {
    console.log(`${bot.user.tag} is ready on ${bot.guilds.cache.size} guilds`);
});

let targetChannelId = '668497133011337224';

bot.on('messageReactionAdd', async (reaction, user) => {
    if (reaction.partial) {
        // If the message this reaction belongs to was removed, fetching might
        // result in an API error, which we need to handle.
        try {
            await reaction.fetch();
        } catch (error) {
            console.log('Something went wrong when fetching the message: ', error);
            // Return as `reaction.message.author` may be undefined/null
            return;
        }
    }
    if (reaction.message.id === '730000158745559122') {
        let channel = reaction.message.guild.channels.cache.get(targetChannelId);
        console.log(reaction.emoji);
        if (channel) {
            let embed = new Discord.MessageEmbed();
            embed.setAuthor(
                user.tag,
                user.displayAvatarURL({
                    dynamic: true,
                    format: 'png',
                }),
            );
            embed.setDescription(`${user} reacted with: ${reaction.emoji}`);
            channel.send(embed);
        }
    }
});

bot.login(token);

Use of HEIC files with filepond?

I'm trying to upload an HEIC file with filepond. Which file type should I specify?
At the moment I have this:
accepted-file-types="image/jpeg, image/png, image/gif, image/jpg"
I can't find anything in the docs about this, and my experimentation doesn't work.
Here's the test file I'm trying to upload:
https://github.com/tigranbs/test-heic-images/raw/master/image1.heic
Thanks to @Rik for the pointers. Here is some code which does this using FilePond in Vue. It feels slightly hacky but does the job.
Accept the image/heic content type and add a custom validator:
<file-pond
  v-if="supported"
  ref="pond"
  name="photo"
  :allow-multiple="multiple"
  accepted-file-types="image/jpeg, image/png, image/gif, image/jpg, image/heic"
  :file-validate-type-detect-type="validateType"
  :files="myFiles"
  image-resize-target-width="800"
  image-resize-target-height="800"
  image-crop-aspect-ratio="1"
  label-idle="Drag & Drop photos or <span class='btn btn-white action'> Browse </span>"
  :server="{ process, revert, restore, load, fetch }"
  @init="photoInit"
  @processfile="processed"
  @processfiles="allProcessed"
/>
Then, in the validator, check the filename, to handle browsers that don't set the correct MIME type:
validateType(source, type) {
  return new Promise((resolve, reject) => {
    // Fall back to the extension when the browser doesn't report image/heic.
    if (source.name.toLowerCase().indexOf('.heic') !== -1) {
      resolve('image/heic')
    } else {
      resolve(type)
    }
  })
}
Then, in the process callback, spot the HEIC file, use heic2any to convert it to PNG, and upload the converted data.
async process(fieldName, file, metadata, load, error, progress, abort) {
  await this.$store.dispatch('compose/setUploading', true)

  const data = new FormData()
  const fn = file.name.toLowerCase()

  if (fn.indexOf('.heic') !== -1) {
    // Re-wrap the file as an image/heic blob and convert it to PNG client-side.
    const blob = file.slice(0, file.size, 'image/heic')
    const png = await heic2any({ blob })
    data.append('photo', png, 'photo')
  } else {
    data.append('photo', file, 'photo')
  }

  data.append(this.imgflag, true)
  data.append('imgtype', this.imgtype)
  data.append('ocr', this.ocr)
  data.append('identify', this.identify)

  if (this.msgid) {
    data.append('msgid', this.msgid)
  } else if (this.groupid) {
    data.append('groupid', this.groupid)
  }

  const ret = await this.$axios.post(process.env.API + '/image', data, {
    headers: {
      'Content-Type': 'multipart/form-data'
    },
    onUploadProgress: e => {
      progress(e.lengthComputable, e.loaded, e.total)
    }
  })

  if (ret.status === 200 && ret.data.ret === 0) {
    this.imageid = ret.data.id
    this.imagethumb = ret.data.paththumb
    this.image = ret.data.path

    if (this.ocr) {
      this.ocred = ret.data.ocr
    }

    if (this.identify) {
      this.identified = ret.data.items
    }

    load(ret.data.id)
  } else {
    error(
      ret.status === 200 ? ret.data.status : 'Network error ' + ret.status
    )
  }

  return {
    abort: () => {
      // We don't need to do anything - the server will tidy up hanging images.
      abort()
    }
  }
},
The server is then blissfully unaware that the original file was HEIC at all, and everything proceeds as normal.
The format is image/heic. I tested this using this tool:
https://codepen.io/rikschennink/pen/NzRvbj
It's possible that not all browsers assign the correct MIME type. You can work around that by using the fileValidateTypeDetectType property; see here for an example:
https://pqina.nl/filepond/docs/patterns/plugins/file-validate-type/#custom-type-detection
Please note that uploading will work, but previewing the image won't.
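(Editor's note: for non-Vue setups, the same workaround can be expressed directly against the FilePond JS API. A minimal sketch, assuming the filepond package and its file-validate-type plugin as documented at the link above:)

import * as FilePond from 'filepond';
import FilePondPluginFileValidateType from 'filepond-plugin-file-validate-type';

FilePond.registerPlugin(FilePondPluginFileValidateType);

const pond = FilePond.create(document.querySelector('input[type="file"]'), {
    acceptedFileTypes: ['image/jpeg', 'image/png', 'image/gif', 'image/heic'],
    // Fall back to the .heic extension when the browser reports no/wrong MIME type.
    fileValidateTypeDetectType: (source, type) =>
        new Promise((resolve) => {
            if (source.name && source.name.toLowerCase().endsWith('.heic')) {
                resolve('image/heic');
            } else {
                resolve(type);
            }
        }),
});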

How to use the MediaFilePicker and PhotoEditor plugins in NativeScript

I am trying to use MediaFilePicker in NativeScript and at the same time use the PhotoEditor plugin to crop/edit the photo taken from the camera, but I can't make it work... here is part of my code:
let options: ImagePickerOptions = {
    android: {
        isCaptureMood: true, // if true then camera will open directly.
        isNeedCamera: true,
        maxNumberFiles: 1,
        isNeedFolderList: true
    },
    ios: {
        isCaptureMood: true, // if true then camera will open directly.
        maxNumberFiles: 1
    }
};

let mediafilepicker = new Mediafilepicker();
mediafilepicker.openImagePicker(options);

mediafilepicker.on("getFiles", function (res) {
    let results = res.object.get('results');
    let result = results[0];
    let source = new imageSourceModule.ImageSource();
    source.fromAsset(result.rawData).then((source) => {
        const photoEditor = new PhotoEditor();
        photoEditor.editPhoto({
            imageSource: source,
            hiddenControls: [],
        }).then((newImage) => {
        }).catch((e) => {
            reject();
        });
    });
});
The result object of the FilePicker comes back like this:
{
    "type": "capturedImage",
    "file": {},
    "rawData": "[Circular]"
}
I believe that if the picture was taken with the camera I should use the rawData field, but I don't know what format it's in or how to hand it to the PhotoEditor plugin.
Any suggestions?
Thanks!
The issue is at the line source.fromAsset(result.rawData): here result.rawData is not an ImageAsset but a PHAsset. You have to create an ImageAsset from the PHAsset and pass that on to fromAsset. So it would look like:
import { ImageAsset } from "tns-core-modules/image-asset";
....
....
// `img` here is the PHAsset returned by the picker (result.rawData).
imgSource.fromAsset(new ImageAsset(img)).then((source) => {
    const photoEditor = new PhotoEditor();
    console.log(source === imgSource);
    photoEditor.editPhoto({
        imageSource: source,
        hiddenControls: [],
    }).then((newImage: ImageSource) => {
        console.log('Get files...');
        // Here you can save newImage, send it to your backend or simply display it in your app
    }).catch((e) => {
        //reject();
    });
});
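(Editor's note: for context, a hedged sketch of how this fix slots into the getFiles handler from the question, using the question's own variable names; untested:)

import { ImageAsset } from "tns-core-modules/image-asset";
import { ImageSource } from "tns-core-modules/image-source";

mediafilepicker.on("getFiles", function (res) {
    const results = res.object.get('results');
    const result = results[0];

    const imgSource = new ImageSource();
    // Wrap the native PHAsset in an ImageAsset before handing it to fromAsset.
    imgSource.fromAsset(new ImageAsset(result.rawData)).then((source) => {
        const photoEditor = new PhotoEditor();
        photoEditor.editPhoto({
            imageSource: source,
            hiddenControls: [],
        }).then((newImage: ImageSource) => {
            // Save newImage, upload it, or display it here.
        }).catch((e) => {
            console.log(e);
        });
    });
});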

Spot the difference between these two images

Programmatically, my code is detecting a difference between two classes of images: it always rejects one class and always allows the other.
I have yet to find any difference between the images that yield the error and the ones that don't. But there has to be some difference, because the ones that yield an error do so 100% of the time, and the others work as expected 100% of the time.
In particular, I have inspected color format (RGB in both groups), size (no notable difference), datatype (uint8 in both), and magnitude of pixel values (similar in both).
Below are two images that never work, followed by two images that always work:
This image never works: https://www.colourbox.com/preview/11906131-maple-tree-and-grass-silhouette.jpg
This image never works: http://feldmanphoto.com/wp-content/uploads/awe-inspiring-house-clipart-black-and-white-disney-coloring-pages-big-clipartxtras-illistration-background-housewives-bouncy.jpeg
This image always works: http://www.spacedesign.us/wp-content/uploads/landscape-with-old-tree-and-grass-over-white-background-black-and-black-and-white-trees.jpg
This image always works: http://www.modernhouse.co/wp-content/uploads/2017/07/1024px-RoseSeidlerHouseSulmanPrize.jpg
How can I spot the difference?
The scenario is that I am using Firebase with a Swift iOS front end to send these images to a convnet hosted on Google Cloud ML Engine. Some images work all the time and certain others never work, as above. Further, all images work when I use the gcloud versions predict CLI. To me the issue is necessarily something in the images, hence I am posting here for the imaging group. Code is included as requested, for completeness.
The code of my index.js file is included:
'use strict';

const functions = require('firebase-functions');
const gcs = require('@google-cloud/storage');
const admin = require('firebase-admin');
const exec = require('child_process').exec;
const path = require('path');
const fs = require('fs');
const google = require('googleapis');
const sizeOf = require('image-size');

admin.initializeApp(functions.config().firebase);
const db = admin.firestore();
const rtdb = admin.database();
const dbRef = rtdb.ref();

// Send a base64-encoded image to the model hosted on Cloud ML Engine.
function cmlePredict(b64img) {
    return new Promise((resolve, reject) => {
        google.auth.getApplicationDefault(function (err, authClient) {
            if (err) {
                reject(err);
            }
            if (authClient.createScopedRequired && authClient.createScopedRequired()) {
                authClient = authClient.createScoped([
                    'https://www.googleapis.com/auth/cloud-platform'
                ]);
            }
            var ml = google.ml({
                version: 'v1'
            });
            const params = {
                auth: authClient,
                name: 'projects/myproject-18865/models/my_model',
                resource: {
                    instances: [
                        {
                            "image_bytes": {
                                "b64": b64img
                            }
                        }
                    ]
                }
            };
            ml.projects.predict(params, (err, result) => {
                if (err) {
                    reject(err);
                } else {
                    resolve(result);
                }
            });
        });
    });
}

// Shrink the image in place with ImageMagick so it fits the model's input width.
function resizeImg(filepath) {
    return new Promise((resolve, reject) => {
        exec(`convert ${filepath} -resize 224x ${filepath}`, (err) => {
            if (err) {
                console.error('Failed to resize image', err);
                reject(err);
            } else {
                console.log('resized image successfully');
                resolve(filepath);
            }
        });
    });
}

exports.runPrediction = functions.storage.object().onChange((event) => {
    fs.rmdir('./tmp/', (err) => {
        if (err) {
            console.log('error deleting tmp/ dir');
        }
    });

    const object = event.data;
    const fileBucket = object.bucket;
    const filePath = object.name;
    const bucket = gcs().bucket(fileBucket);
    const fileName = path.basename(filePath);
    const file = bucket.file(filePath);

    if (filePath.startsWith('images/')) {
        const destination = '/tmp/' + fileName;
        console.log('got a new image', filePath);
        return file.download({
            destination: destination
        }).then(() => {
            if (sizeOf(destination).width > 224) {
                console.log('scaling image down...');
                return resizeImg(destination);
            } else {
                return destination;
            }
        }).then(() => {
            console.log('base64 encoding image...');
            let bitmap = fs.readFileSync(destination);
            return new Buffer(bitmap).toString('base64');
        }).then((b64string) => {
            console.log('sending image to CMLE...');
            return cmlePredict(b64string);
        }).then((result) => {
            console.log(`results just returned and is: ${result}`);
            let predict_proba = result.predictions[0];
            const res_pred_val = Object.keys(predict_proba).map(k => predict_proba[k]);
            const res_val = Object.keys(result).map(k => result[k]);
            const class_proba = [1 - res_pred_val, res_pred_val];
            const opera_proba_init = 1 - res_pred_val;
            const capitol_proba_init = res_pred_val - 0;
            // convert fraction double to percentage int
            let opera_proba = (Math.floor((opera_proba_init.toFixed(2)) * 100)) | 0;
            let capitol_proba = (Math.floor((capitol_proba_init.toFixed(2)) * 100)) | 0;
            let feature_list = ["houses", "trees"];
            let outlinedImgPath = '';
            let imageRef = db.collection('predicted_images').doc(filePath.slice(7));
            outlinedImgPath = `outlined_img/${filePath.slice(7)}`;
            imageRef.set({
                image_path: outlinedImgPath,
                opera_proba: opera_proba,
                capitol_proba: capitol_proba
            });
            let predRef = dbRef.child("prediction_categories");
            let arrayRef = dbRef.child("prediction_array");
            predRef.set({
                opera_proba: opera_proba,
                capitol_proba: capitol_proba,
            });
            arrayRef.set({
                first: {
                    array_proba: [opera_proba, capitol_proba],
                    brief_description: ["a", "b"],
                    more_details: ["aaaa", "bbbb"],
                    feature_list: feature_list
                },
                zummy1: "",
                zummy2: ""
            });
            return bucket.upload(destination, { destination: outlinedImgPath });
        });
    } else {
        return 'not a new image';
    }
});
The issue was that the bad images were grayscale, not RGB as expected by my model. I had initially checked this by looking at the shape, and the 'bad' images did have 3 color channels, but each of those 3 channels stored the same values, so my model was refusing to accept them. Also, contrary to what I initially thought I had observed, it turns out the gcloud ML Engine predict CLI actually also failed for these images. It took me 2 days to figure this out!
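(Editor's note: for anyone hitting the same wall, a minimal sketch of how such "grayscale stored as RGB" images could be detected. It assumes the sharp library, which is not used in the original code, and simply checks whether R, G and B are identical at every pixel:)

import sharp from 'sharp';

// Returns true when every pixel has R === G === B, i.e. the image is
// effectively grayscale even though it is stored with 3 channels.
async function isEffectivelyGrayscale(path: string): Promise<boolean> {
    const { data, info } = await sharp(path)
        .raw()
        .toBuffer({ resolveWithObject: true });
    if (info.channels < 3) return true; // truly single-channel
    for (let i = 0; i < data.length; i += info.channels) {
        if (data[i] !== data[i + 1] || data[i] !== data[i + 2]) {
            return false; // found a pixel with distinct channel values
        }
    }
    return true;
}

// Usage: isEffectivelyGrayscale('/tmp/image.jpg').then(console.log);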

Is there a way to get a PDF document through apollo2angular and GraphQL?

The following is the response info I get back:
"%PDF-1.4\n%... [the rest of the payload is raw PDF bytes rendered as a mangled string]
Hey, yes there is, but it takes a little custom work!
Here is how you can do it to work with scaphold.io, but you could extend this to your own implementation as well.
Basically, you attach the file at the root level of a multipart/form-data request, and then the input in your GraphQL variables points to the key of the file in the request. Here is some code showing how you can retrofit an ApolloClient instance with a file-ready network interface that you can then feed into Angular2Apollo.
import { ApolloClient, createNetworkInterface } from 'apollo-client';
// Helpers assumed by the original snippet: lodash's find and graphql's printer.
import find from 'lodash/find';
import { print as printGraphQL } from 'graphql/language/printer';

// Create a file-ready ApolloClient instance.
export function makeClientApolloFileHandler() {
    const graphqlUrl = `https://scaphold.io/graphql/my-app`;
    const networkInterface = createNetworkInterface(graphqlUrl);

    networkInterface.query = (request) => {
        const formData = new FormData();

        // Parse out the file and append it at the root level of the form data.
        const varDefs = request.query.definitions[0].variableDefinitions;
        let vars = [];
        if (varDefs && varDefs.length) {
            vars = varDefs.map(def => def.variable.name.value);
        }
        const activeVars = vars.map(v => request.variables[v]);
        const fname = find(activeVars, 'blobFieldName');
        if (fname) {
            const blobFieldName = fname.blobFieldName;
            formData.append(blobFieldName, request.variables[blobFieldName]);
        }

        formData.append('query', printGraphQL(request.query));
        formData.append('variables', JSON.stringify(request.variables || {}));
        formData.append('operationName', JSON.stringify(request.operationName || ''));

        const headers = { Accept: '*/*' };
        if (localStorage.getItem('scaphold_user_token')) {
            headers.Authorization = `Bearer ${localStorage.getItem('scaphold_user_token')}`;
        }

        return fetch(graphqlUrl, {
            headers,
            body: formData,
            method: 'POST',
        }).then(result => result.json());
    };

    const clientFileGraphql = new ApolloClient({
        networkInterface,
        initialState: {},
    });

    return clientFileGraphql;
}
After this function does its work on the request, you might have FormData that looks like this (if it were encoded as JSON):
{
    "query": "mutation CreateFile($input: CreateFileInput) { createFile(input: $input) { ... } }",
    "variables": {
        "input": {
            "blobFieldName": "myFile",
            "name": "MyFileName!"
        }
    },
    // Note how blobFieldName points to this key
    "myFile": <Buffer of file data in multipart/form-data>
}
The server then needs to understand how to interpret this: look for the file in the payload under the key named by blobFieldName and associate it with the correct object, etc., as in the sketch below.
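(Editor's note: a minimal server-side sketch of that lookup, assuming Express with multer; neither library nor the endpoint shape is part of the original answer:)

import express from 'express';
import multer from 'multer';

const app = express();
const upload = multer(); // default memory storage: uploaded files stay in RAM

// Accept any file field, since the field name is chosen by the client.
app.post('/graphql', upload.any(), (req, res) => {
    const variables = JSON.parse(req.body.variables || '{}');
    const blobFieldName = variables.input && variables.input.blobFieldName;

    // Find the uploaded file whose multipart field name matches blobFieldName.
    const files = (req.files || []) as Express.Multer.File[];
    const file = files.find(f => f.fieldname === blobFieldName);

    if (!file) {
        return res.status(400).json({ error: `no file under key ${blobFieldName}` });
    }
    // file.buffer now holds the bytes; hand them to the createFile resolver.
    res.json({ data: { createFile: { name: variables.input.name } } });
});

app.listen(4000);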
Hope this helps!
