Copy or download a generated QR code (vue-qrcode) image with Vue.js

I use the plugin "vue-qrcode" to generate a QR code for each of my users that links to their public profile, so they can share it e.g. on their business card.
The idea is to give my users one button to download the QR code and another button to copy it to the clipboard, to make it easier to send e.g. via mail (especially for smartphone users).
Problem: I don't know how I can download or copy the QR code. Does anybody know how to copy or download it? Currently I use 'vue-clipboard2' to copy links etc., but it seems it can't copy images.
I use the code below to display the QR code on our site:
<template>
  <qrcode-vue
    style="cursor: pointer;"
    :value="linkToProfile"
    :size="160"
    level="M"
    :background="backgroundcolor_qrcode"
    :foreground="color_qrcode"
  ></qrcode-vue>
</template>
<script>
import Vue from 'vue'
import QrcodeVue from 'qrcode.vue'

Vue.component('qrcode-vue', QrcodeVue)

export default {
  data: () => ({
    linkToProfile: "http://www.example.com/johnDoe",
  }),
}
</script>
Thanks -
Christian

I figured it out. The solution looks like this:
TEMPLATE AREA:
<qrcode-vue
  id="qrblock"
  :value="linkToSki"
  :size="220"
  level="M"
  ref="qrcode"
></qrcode-vue>
FUNCTIONS AREA:
// -- FUNCTIONS AREA TO COPY / DOWNLOAD QR CODE - START ---
function selectText(element) {
  if (document.body.createTextRange) {
    const range = document.body.createTextRange();
    range.moveToElementText(element);
    range.select();
  } else if (window.getSelection) {
    const selection = window.getSelection();
    const range = document.createRange();
    range.selectNodeContents(element);
    selection.removeAllRanges();
    selection.addRange(range);
  }
}
function copyBlobToClipboardFirefox(href) {
  const img = document.createElement('img');
  img.src = href;
  const div = document.createElement('div');
  div.contentEditable = true;
  div.appendChild(img);
  document.body.appendChild(div);
  selectText(div);
  const done = document.execCommand('Copy');
  document.body.removeChild(div);
  return done;
}

function copyBlobToClipboard(blob) {
  // eslint-disable-next-line no-undef
  const clipboardItemInput = new ClipboardItem({
    'image/png': blob
  });
  return navigator.clipboard
    .write([clipboardItemInput])
    .then(() => true)
    .catch(() => false);
}
function downloadLink(name, href) {
  const a = document.createElement('a');
  a.download = name;
  a.href = href;
  document.body.append(a);
  a.click();
  a.remove();
}
// -- FUNCTIONS AREA TO COPY / DOWNLOAD QR CODE - END ---
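The helpers above still need to be hooked up to the rendered QR code. qrcode-vue draws the code on a <canvas>, so you can read the image from that canvas and pass it to the helpers. Below is a minimal sketch of two button handlers; the method names, the "qrcode.png" filename, and the canvas lookup are my own additions, while ref="qrcode" comes from the template above.
methods: {
  getQrCanvas() {
    // Depending on the qrcode-vue version, $el is either the canvas itself or a wrapper around it.
    const el = this.$refs.qrcode.$el;
    return el.tagName === 'CANVAS' ? el : el.querySelector('canvas');
  },
  downloadQrCode() {
    // Turn the canvas into a data URL and trigger a download via the helper above.
    const canvas = this.getQrCanvas();
    downloadLink('qrcode.png', canvas.toDataURL('image/png'));
  },
  copyQrCode() {
    const canvas = this.getQrCanvas();
    if (navigator.clipboard && window.ClipboardItem) {
      // Modern browsers: copy a PNG blob via the async Clipboard API.
      canvas.toBlob(blob => copyBlobToClipboard(blob), 'image/png');
    } else {
      // Fallback for browsers without ClipboardItem (e.g. older Firefox).
      copyBlobToClipboardFirefox(canvas.toDataURL('image/png'));
    }
  },
},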

Related

Google address autocomplete api in Stenciljs

I am trying to add a search field for an address using Google's address autocomplete in a Stencil.js component. There aren't any resources on it.
First you'll need to load the Google Maps API script so that you can interact with the global google.maps object. You can either do that by including a script tag, or write something like the following helper function.
const googleApiKey = '...';

export const importMapsApi = async () =>
  new Promise<typeof google.maps>((resolve, reject) => {
    if ('google' in window) {
      return resolve(google.maps);
    }
    const script = document.createElement('script');
    script.onload = () => resolve(google.maps);
    script.onerror = reject;
    script.src = `https://maps.googleapis.com/maps/api/js?key=${googleApiKey}&libraries=places`;
    document.body.appendChild(script);
  });
In order to get the TypeScript types for the global google object, you should install @types/googlemaps into your dev dependencies.
Then you'll need to implement a function that allows you to search for places, e.g.:
export const searchForPlaces = async (input: string, sessionToken: google.maps.places.AutocompleteSessionToken) => {
  const maps = await importMapsApi();
  const service = new maps.places.AutocompleteService();
  return new Promise<google.maps.places.AutocompletePrediction[]>((resolve) =>
    service.getPlacePredictions({ input, sessionToken }, (predictions, status) => {
      if (status !== maps.places.PlacesServiceStatus.OK) {
        return resolve([]);
      }
      resolve(predictions);
    }),
  );
};
None of this is specific to Stencil btw. All that is left to do is to use the searchForPlaces function in your component. A very simple example would be something like:
import { Component, Fragment, h, State } from '@stencil/core';
// plus importMapsApi and searchForPlaces from wherever you put the helpers above

@Component({ tag: 'maps-place-search' })
export class MapsPlaceSearch {
  sessionToken: google.maps.places.AutocompleteSessionToken;

  @State()
  predictions: google.maps.places.AutocompletePrediction[] = [];

  async componentWillLoad() {
    const maps = await importMapsApi();
    this.sessionToken = new maps.places.AutocompleteSessionToken();
  }

  search = async (e: InputEvent) => {
    const searchTerm = (e.target as HTMLInputElement).value;
    if (!searchTerm) {
      this.predictions = [];
      return;
    }
    this.predictions = await searchForPlaces(searchTerm, this.sessionToken);
  }

  render() {
    return (
      <Fragment>
        <input onInput={this.search} />
        <ul>
          {this.predictions.map(prediction => <li key={prediction.description}>{prediction.description}</li>)}
        </ul>
      </Fragment>
    );
  }
}
The place search will give you a placeId for each prediction. You can pass that, together with the session token, to a maps.places.PlacesService to get the details for the place and auto-fill your form or whatever you're trying to achieve.
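For completeness, here is a minimal sketch of that last step, reusing the importMapsApi helper from above. The getPlaceDetails name and the fields list are my own; adjust the fields to whatever your form needs.
export const getPlaceDetails = async (
  placeId: string,
  sessionToken: google.maps.places.AutocompleteSessionToken,
) => {
  const maps = await importMapsApi();
  // PlacesService needs an element (or a map) to render attributions into.
  const service = new maps.places.PlacesService(document.createElement('div'));
  return new Promise<google.maps.places.PlaceResult | null>((resolve) =>
    service.getDetails(
      { placeId, sessionToken, fields: ['address_components', 'formatted_address', 'geometry'] },
      (place, status) => {
        // Resolve with the place details, or null if the lookup failed.
        resolve(status === maps.places.PlacesServiceStatus.OK ? place : null);
      },
    ),
  );
};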

CKEditor 5 - writer.setAttribute('title', ...) on img element doesn't work

I am creating a plugin for CKEditor 5, and I can't figure out how to set the title attribute on an <img> element using writer.setAttribute('title', ...).
I have tried stuff like schema.extend but to no avail.
The thing is, the code works flawlessly when operating on the alt attribute.
Am I missing something?
My plugin code:
const Plugin = require('@ckeditor/ckeditor5-core/src/plugin').default;
const ButtonView = require('@ckeditor/ckeditor5-ui/src/button/buttonview').default;
const imageIcon = require('@ckeditor/ckeditor5-core/theme/icons/low-vision.svg').default;

export default class ImageTextTitle extends Plugin {
  init() {
    const editor = this.editor;
    editor.ui.componentFactory.add('imageTextTitle', locale => {
      const view = new ButtonView(locale);
      view.set({
        label: 'Insert image title',
        icon: imageIcon,
        tooltip: true
      });
      view.on('execute', () => {
        const newTitle = prompt('New image title');
        const selection = editor.model.document.selection;
        const imageElement = selection.getSelectedElement();
        if (newTitle !== null) {
          editor.model.change(writer => {
            writer.setAttribute('title', newTitle, imageElement);
          });
        }
      });
      return view;
    });
  }
}

Nativescript imagepicker not working in iOS :: not picking up image path?

I am using NativeScript with Angular and have a page where I photograph a receipt or add one from the gallery, add a couple of text inputs, and send everything to the server.
The "pick from gallery" flow works fine on Android but not on iOS.
Here is the template code:
<Image *ngIf="imageSrc" [src]="imageSrc" [width]="previewSize" [height]="previewSize" stretch="aspectFit"></Image>
<Button text="Pick from Gallery" (tap)="onSelectGalleryTap()" class="btn-outline btn-photo"> </Button>
and the component:
public onSelectGalleryTap() {
  let context = imagepicker.create({
    mode: "single"
  });
  let that = this;
  context
    .authorize()
    .then(() => {
      that.imageAssets = [];
      that.imageSrc = null;
      return context.present();
    })
    .then((selection) => {
      alert("Selection done: " + JSON.stringify(selection));
      that.imageSrc = selection.length > 0 ? selection[0] : null;
      // convert ImageAsset to ImageSource
      fromAsset(that.imageSrc).then(res => {
        var myImageSource = res;
        var base64 = myImageSource.toBase64String("jpeg", 20);
        this.expense.receipt_data = base64;
      })
      that.cameraImage = null;
      that.imageAssets = selection;
      that.galleryProvided = true;
      // set the images to be loaded from the assets with optimal sizes (optimize memory usage)
      selection.forEach(function (element) {
        element.options.width = that.previewSize;
        element.options.height = that.previewSize;
      });
    }).catch(function (e) {
      console.log(e);
    });
}
I have posted below the Android and iOS screenshots of the line:
alert("Selection done: " + JSON.stringify(selection));
On Android there is a path to the location of the image in the file system, but on iOS there are just empty curly brackets where I'd expect to see the path. Then, when the form is submitted, the message that comes back is "Unable to save image", although the image preview is displayed on the page in the Image element.
[Screenshots of the Android and iOS alerts omitted: Android shows a file path, iOS shows only empty curly brackets.]
Any ideas why it is failing in iOS?
Thanks
==========
UPDATE
I am now saving the image to a temporary location and it is still not working in iOS. It works in Android.
Here is my code now.
import { ImageAsset } from 'tns-core-modules/image-asset';
import { ImageSource, fromAsset, fromFile } from 'tns-core-modules/image-source';
import * as fileSystem from "tns-core-modules/file-system";
...
...
public onSelectGalleryTap() {
  alert("in onSelectGalleryTap");
  var milliseconds = (new Date).getTime();
  let context = imagepicker.create({
    mode: "single"
  });
  let that = this;
  context
    .authorize()
    .then(() => {
      that.imageAssets = [];
      that.previewSrc = null;
      that.imageSrc = null;
      return context.present();
    })
    .then((selection) => {
      that.imageSrc = selection.length > 0 ? selection[0] : null;
      // convert ImageAsset to ImageSource
      fromAsset(that.imageSrc)
        .then(res => {
          var myImageSource = res;
          let folder = fileSystem.knownFolders.documents();
          var path = fileSystem.path.join(folder.path, milliseconds + ".jpg");
          var saved = myImageSource.saveToFile(path, "jpg");
          that.previewSrc = path;
          const imageFromLocalFile: ImageSource = <ImageSource>fromFile(path);
          var base64 = imageFromLocalFile.toBase64String("jpeg", 20);
          this.expense.receipt_data = base64;
        })
      that.cameraImage = null;
      that.imageAssets = selection;
      that.galleryProvided = true;
      // set the images to be loaded from the assets with optimal sizes (optimize memory usage)
      selection.forEach(function (element) {
        element.options.width = that.previewSize;
        element.options.height = that.previewSize;
      });
    }).catch(function (e) {
      console.log(e);
    });
}
Any ideas? Thanks.
This is an already-reported issue that several of us have subscribed to; check issue #321 for updates.

Spot the difference between these two images

Programmatically, my code is detecting a difference between two classes of images, and always rejecting one class, while always allowing the other.
I have yet to find any difference between the images that yield the error and the ones that don't. But there has to be some difference, because the ones that yield an error do so 100% of the time, and the others work as expected 100% of the time.
In particular, I have inspected color format: RGB in both groups; size: no notable difference; datatype: uint8 in both; magnitude of pixel values: similar in both.
Below are two images that never work, followed by two images that always work:
This image never works: https://www.colourbox.com/preview/11906131-maple-tree-and-grass-silhouette.jpg
This image never works: http://feldmanphoto.com/wp-content/uploads/awe-inspiring-house-clipart-black-and-white-disney-coloring-pages-big-clipartxtras-illistration-background-housewives-bouncy.jpeg
This image always works: http://www.spacedesign.us/wp-content/uploads/landscape-with-old-tree-and-grass-over-white-background-black-and-black-and-white-trees.jpg
This image always works: http://www.modernhouse.co/wp-content/uploads/2017/07/1024px-RoseSeidlerHouseSulmanPrize.jpg
How can I spot the difference?
The scenario is that I am using Firebase with a Swift iOS front end to send these images to a convnet hosted on Google Cloud ML Engine. Some images work all the time and certain others never work, as described above. Further, all images work when I use the gcloud ML-engine predict CLI. To me the issue is necessarily something in the images, hence I am posting here for the imaging group. Code is included as requested for completeness.
The code of my index.js file is included below:
'use strict';

const functions = require('firebase-functions');
const gcs = require('@google-cloud/storage');
const admin = require('firebase-admin');
const exec = require('child_process').exec;
const path = require('path');
const fs = require('fs');
const google = require('googleapis');
const sizeOf = require('image-size');

admin.initializeApp(functions.config().firebase);
const db = admin.firestore();
const rtdb = admin.database();
const dbRef = rtdb.ref();
function cmlePredict(b64img) {
  return new Promise((resolve, reject) => {
    google.auth.getApplicationDefault(function (err, authClient) {
      if (err) {
        reject(err);
      }
      if (authClient.createScopedRequired && authClient.createScopedRequired()) {
        authClient = authClient.createScoped([
          'https://www.googleapis.com/auth/cloud-platform'
        ]);
      }
      var ml = google.ml({
        version: 'v1'
      });
      const params = {
        auth: authClient,
        name: 'projects/myproject-18865/models/my_model',
        resource: {
          instances: [
            {
              "image_bytes": {
                "b64": b64img
              }
            }
          ]
        }
      };
      ml.projects.predict(params, (err, result) => {
        if (err) {
          reject(err);
        } else {
          resolve(result);
        }
      });
    });
  });
}
function resizeImg(filepath) {
  return new Promise((resolve, reject) => {
    exec(`convert ${filepath} -resize 224x ${filepath}`, (err) => {
      if (err) {
        console.error('Failed to resize image', err);
        reject(err);
      } else {
        console.log('resized image successfully');
        resolve(filepath);
      }
    });
  });
}
exports.runPrediction = functions.storage.object().onChange((event) => {
  fs.rmdir('./tmp/', (err) => {
    if (err) {
      console.log('error deleting tmp/ dir');
    }
  });

  const object = event.data;
  const fileBucket = object.bucket;
  const filePath = object.name;
  const bucket = gcs().bucket(fileBucket);
  const fileName = path.basename(filePath);
  const file = bucket.file(filePath);

  if (filePath.startsWith('images/')) {
    const destination = '/tmp/' + fileName;
    console.log('got a new image', filePath);
    return file.download({
      destination: destination
    }).then(() => {
      if (sizeOf(destination).width > 224) {
        console.log('scaling image down...');
        return resizeImg(destination);
      } else {
        return destination;
      }
    }).then(() => {
      console.log('base64 encoding image...');
      let bitmap = fs.readFileSync(destination);
      return new Buffer(bitmap).toString('base64');
    }).then((b64string) => {
      console.log('sending image to CMLE...');
      return cmlePredict(b64string);
    }).then((result) => {
      console.log(`results just returned and is: ${result}`);
      let predict_proba = result.predictions[0];
      const res_pred_val = Object.keys(predict_proba).map(k => predict_proba[k]);
      const res_val = Object.keys(result).map(k => result[k]);
      const class_proba = [1 - res_pred_val, res_pred_val];
      const opera_proba_init = 1 - res_pred_val;
      const capitol_proba_init = res_pred_val - 0;
      // convert fraction double to percentage int
      let opera_proba = (Math.floor((opera_proba_init.toFixed(2)) * 100)) | 0;
      let capitol_proba = (Math.floor((capitol_proba_init.toFixed(2)) * 100)) | 0;
      let feature_list = ["houses", "trees"];
      let outlinedImgPath = '';
      let imageRef = db.collection('predicted_images').doc(filePath.slice(7));
      outlinedImgPath = `outlined_img/${filePath.slice(7)}`;
      imageRef.set({
        image_path: outlinedImgPath,
        opera_proba: opera_proba,
        capitol_proba: capitol_proba
      });
      let predRef = dbRef.child("prediction_categories");
      let arrayRef = dbRef.child("prediction_array");
      predRef.set({
        opera_proba: opera_proba,
        capitol_proba: capitol_proba,
      });
      arrayRef.set({
        first: {
          array_proba: [opera_proba, capitol_proba],
          brief_description: ["a", "b"],
          more_details: ["aaaa", "bbbb"],
          feature_list: feature_list
        },
        zummy1: "",
        zummy2: ""
      });
      return bucket.upload(destination, {destination: outlinedImgPath});
    });
  } else {
    return 'not a new image';
  }
});
The issue was that the bad images were effectively grayscale, not RGB as expected by my model. I had initially checked this by looking at the shape, but the 'bad' images had 3 color channels where each of the 3 channels stored the same value, so my model was refusing to accept them. Also, as expected (and contrary to what I initially thought I had observed), it turns out the gcloud ML-engine predict CLI actually failed for these images too. Took me 2 days to figure this out!
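To illustrate that root cause: an RGB-stored image is effectively grayscale when all three channels hold the same value for every pixel. Below is a minimal Node sketch of such a check; the sharp dependency and the helper name are my own choice and not part of the setup above.
import sharp from 'sharp';

// Returns true if every pixel has R === G === B, i.e. the image is
// grayscale even though it is stored with three channels.
async function isEffectivelyGrayscale(filePath: string): Promise<boolean> {
  const { data, info } = await sharp(filePath)
    .removeAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });
  if (info.channels < 3) {
    return true; // stored as a single grayscale channel
  }
  for (let i = 0; i < data.length; i += info.channels) {
    if (data[i] !== data[i + 1] || data[i] !== data[i + 2]) {
      return false;
    }
  }
  return true;
}

// Example usage: isEffectivelyGrayscale('/tmp/test.jpg').then(console.log);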

Nativescript how to save image to file in custom component

I have created a custom component that accesses the device's camera to snap a picture, sets it as the source of an ImageView, and then saves it to a file.
Here is the JavaScript code.
CAMERA.JS
Here is the initialization of the imageView
export function cameraLoaded(args): void {
  cameraPage = <Page>args.object;
  imageView = <Image>cameraPage.getViewById("img_upload"); ...
}
Here I set the imageView's source to the picture that was just taken:
export function takePicture(): void {
  camera.takePicture({})
    .then(
      function (picture) {
        imageView.imageSource = picture;
      },
      function (error) {
      });
}
This works perfectly.
Now I try to save the picture to a file.
export function saveToFile(): void {
  try {
    let saved = imageView.imageSource.saveToFile(path, enums.ImageFormat.png);
    HalooseLogger.log(saved, "upload");
  } catch (e) {
    ...
  }
}
Here I get the error: cannot read property 'saveToFile' of undefined.
This is very unusual; in fact, if I console.log(imageView), this is the output:
Image<img_upload>#file:///app/views/camera/camera.xml:4:5;
but if I console.log(imageView.imageSource) I see it is `undefined`.
How is this possible? What am I doing wrong?
ADDITIONAL INFO
The previous code and related XML are included in another view as follows:
MAIN.XML
<Page
  xmlns="http://schemas.nativescript.org/tns.xsd"
  xmlns:cameraPage="views/camera"
  loaded="loaded">
  <StackLayout orientation="vertical">
    <StackLayout id="mainContainer">
      <!-- DYNAMIC CONTENT GOES HERE -->
    </StackLayout>
  </StackLayout>
</Page>
MAIN.JS
This is where the camera view gets loaded dynamically:
export function loaded(args): void {
  mainPage = <Page>args.object;
  contentWrapper = mainPage.getViewById("mainContainer");
  DynamicLoaderService.loadPage(mainPage, contentWrapper, mainViewModel.currentActive);
}
The loadPage method does the following:
public static loadPage(pageElement, parentElement, currentActive): void {
  let component = Builder.load({
    path: "views/camera",
    name: "camera",
    page: pageElement
  });
  parentElement.addChild(component);
}
The thing is that, as of NativeScript 2.4.0, the Image created for Android will always return null for its imageSource property. Optimisations are under way to prevent out-of-memory issues when working with multiple large images, which is why image-asset was introduced in NativeScript 2.4.0.
Now I am not sure if you are using the latest nativescript-camera (highly recommended), but if so you should consider that the promise from takePicture() returns an ImageAsset. Due to the memory optimization, imageSource will always return undefined (on Android) unless you specifically create one. You can do that with the fromAsset() method, providing the ImageAsset returned from the camera callback.
Example:
import { EventData } from 'data/observable';
import { Page } from 'ui/page';
import { Image } from "ui/image";
import { ImageSource, fromAsset } from "image-source";
import { ImageAsset } from "image-asset";
import * as camera from "nativescript-camera";
import * as fs from "file-system";

var imageModule = require("ui/image");
var img;
var myImageSource: ImageSource;

// Event handler for Page "navigatingTo" event attached in main-page.xml
export function onLoaded(args: EventData) {
  // Get the event sender
  let page = <Page>args.object;
  img = <Image>page.getViewById("img");
  camera.requestPermissions();
}

export function takePhoto() {
  camera.takePicture()
    .then(imageAsset => {
      console.log("Result is an image asset instance");
      img.src = imageAsset;
      fromAsset(imageAsset).then(res => {
        myImageSource = res;
        console.log(myImageSource);
      })
    }).catch(function (err) {
      console.log("Error -> " + err.message);
    });
}

export function saveToFile(): void {
  var knownPath = fs.knownFolders.documents();
  var folderPath = fs.path.join(knownPath.path, "CosmosDataBank");
  var folder = fs.Folder.fromPath(folderPath);
  var path = fs.path.join(folderPath, "Test.png");
  var saved = myImageSource.saveToFile(path, "png");
  console.log(saved);
}
