Unity: Loading image from URL failed

I am loading an image from a URL into a texture via script. The download succeeds, but the image does not show up on the texture.
How can I fix this?

You should UV unwrap the model you want to put your image on as a texture, save the UVs, and then import it into Unity; after that you can apply the texture to it. Here is a tutorial that will show you how to do it in Blender.

Alright, if I'm reading your screenshot correctly, it looks like you're using NGUI. If so, you don't use the Mesh Renderer. It's been a little while since I used NGUI, but you need a reference either to the game object or to the UITexture component on it, so I'll provide code samples for both scenarios below. You only use a mesh renderer when you're wrapping the texture around a 3D model; if this is for a UITexture, you attach the image there.
Game Object Reference:
imageObject.GetComponent<UITexture>().mainTexture = textureToUse;
UITexture Reference:
uiTextureReference.mainTexture = textureToUse;
I think that is what you were looking for; otherwise you need to do UV unwrapping and so on.
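To tie it together, here is a minimal sketch of the whole flow with NGUI: download the texture in a coroutine, then assign it to the UITexture. The imageUrl and uiTexture fields are placeholder names, not something from your project:

using System.Collections;
using UnityEngine;

// Minimal sketch: download an image from a URL and show it on an NGUI UITexture.
public class UrlTextureLoader : MonoBehaviour
{
    public string imageUrl;     // placeholder: the URL you are downloading from
    public UITexture uiTexture; // placeholder: reference to the UITexture component

    IEnumerator Start()
    {
        WWW request = new WWW(imageUrl);
        yield return request; // wait for the download to finish

        if (string.IsNullOrEmpty(request.error))
            uiTexture.mainTexture = request.texture; // assign the downloaded texture
        else
            Debug.Log("Image download failed: " + request.error);
    }
}

If you are wrapping the texture around a 3D model instead, assign request.texture to the renderer's material (GetComponent<Renderer>().material.mainTexture) rather than to a UITexture.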

It is very easy if you are using NGUI.
Create a UITexture.
Reference it in your class, either by declaring it as a public variable or by getting it at runtime.
Import the MiniJSON class, which is easily available on the internet. (Do not worry if you cannot find MiniJSON.Json; depending on the version it may be MiniJSON.Deserialize or just Json.Deserialize.)
The following coroutine fetches the Facebook display picture for a given User_API_ID:
IEnumerator LoadFacebookProfilePicture() // wrapper method (illustrative name) so the yields can run as a coroutine
{
    // Ask the Graph API for the picture URL (redirect=0 returns JSON instead of the image itself).
    string myImageUrl = "https://graph.facebook.com/" + API_ID + "/picture?type=large&redirect=0";
    WWW myImageGraphRequest = new WWW(myImageUrl);
    yield return myImageGraphRequest;

    if (!string.IsNullOrEmpty(myImageGraphRequest.error))
    {
        Debug.Log("Could not fetch own user image due to: " + myImageGraphRequest.error);
    }
    else
    {
        // The actual picture URL sits under data.url in the returned JSON.
        var myImageData = MiniJSON.Json.Deserialize(myImageGraphRequest.text) as Dictionary<string, object>;
        var myImageLinkWithData = myImageData["data"] as Dictionary<string, object>;
        string myImageLink = (string)myImageLinkWithData["url"];

        // Download the picture itself and put it on the UITexture.
        WWW myImageRequest = new WWW(myImageLink);
        yield return myImageRequest;

        if (!string.IsNullOrEmpty(myImageRequest.error))
            Debug.Log("Could not get user facebook image: " + myImageRequest.error);
        else
            userDP.mainTexture = myImageRequest.texture;
    }
}
userDP is your UITexture.
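Because the snippet uses yield return, it has to be run as a coroutine; assuming you keep the wrapper method name used above, you would kick it off with:

StartCoroutine(LoadFacebookProfilePicture());

(On newer Unity versions WWW is deprecated in favour of UnityWebRequestTexture, but the overall flow stays the same.)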

Related

How to detect if image is all black using flutter?

I'm building an app where the user is required to send a selfie, but some users just cover the camera when taking the picture and the result is all black. Is there a way to detect whether the image is black?
I'm thinking of using face detection, but I haven't tried it and it seems like way too much for my simple app.
One way you can try is using the palette_generator package to extract the colors in the image and then computing the average luminance of the image, like so.
import 'package:flutter/painting.dart';
import 'package:palette_generator/palette_generator.dart';

Future<bool> isImageBlack(ImageProvider image) async {
  final double threshold = 0.2; // <-- play around with different images and set an appropriate threshold
  final double imageLuminance = await getAvgLuminance(image);
  print(imageLuminance);
  return imageLuminance < threshold;
}

Future<double> getAvgLuminance(ImageProvider image) async {
  final List<Color> colors = await getImagePalette(image);
  double totalLuminance = 0;
  colors.forEach((color) => totalLuminance += color.computeLuminance());
  return totalLuminance / colors.length;
}

// Helper (not in the original snippet): extracts the palette colors used for the luminance average.
Future<List<Color>> getImagePalette(ImageProvider image) async {
  final PaletteGenerator palette = await PaletteGenerator.fromImageProvider(image);
  return palette.colors.toList();
}
Before using the camera, check whether your app has permission for it.
For this purpose I'd recommend using the permission_handler package.
A few lines from the official documentation:
var status = await Permission.camera.status;
if (status.isDenied) {
  // We didn't ask for permission yet or the permission has been denied before but not permanently.
}

Setting link URL with Google Docs API doesn't result in update of image

I'm trying to update a placeholder image with a new image that has an updated URL. The URL is in fact a valid Google Static Maps URL that I'm using successfully in other contexts. I'm using the Google Docs API to manipulate the document. Below is the code I've been using:
var element = body.findElement(DocumentApp.ElementType.INLINE_IMAGE).getElement();
var imageMap = element.asInlineImage();
// if there was an image found in document
if (imageMap != null) {
  // get current parent and index inside parent
  var parent = imageMap.getParent();
  var childIndex = parent.getChildIndex(imageMap);
  // remove image from paragraph
  imageMap = imageMap.removeFromParent();
  // get static image url for territory
  var url = getStaticMapURLForTerritory(id);
  Logger.log(url);
  imageMap.setLinkUrl(url);
  // create a new image
  parent.insertInlineImage(childIndex, imageMap)
}
This seems to work fine in that it does update the image url correctly. However, the image itself (the result of the url) is not updated. When I click on the link URL it does return the correct image.
Is there a way to force a refetch of the image blob associated with the URL? I've also attempted to use UrlFetchApp but that complains about a missing size parameter (google static api) which is certainly included in the url string and within the max 640x640 bounds.
I've exhausted all my options unless....
TIA, --Paul
setLinkUrl only does that: sets the link. To actually add a new image you'll have to get its blob:
function replaceImage() {
  // [...]
  // get static image url for territory
  const url = getStaticMapURLForTerritory(id)
  const response = UrlFetchApp.fetch(url)
  // create a new image from the fetched blob, copying the old image's properties
  parent.insertInlineImage(childIndex, response.getBlob())
    .setAltDescription(imageMap.getAltDescription())
    .setAltTitle(imageMap.getAltTitle())
    .setWidth(imageMap.getWidth())
    .setHeight(imageMap.getHeight())
    .setLinkUrl(url)
}
References
Class InlineImage (Google Apps Script reference)

Is this scenario possible with Vuforia User Defined Target?

I am trying to store the Vuforia user-defined image targets locally so users will not have to create the targets every time they leave the scene.
I tried to store the variables needed to create the new image target in a static class so I can retrieve them when I am back in the scene, create the image targets, and then add them to the dataset.
//****************************************************************************************************
// Trying to load image target between scenes
//****************************************************************************************************
// Deactivate the dataset in order to add the image target
m_ObjectTracker.DeactivateDataSet(m_UDT_DataSet);
var LastImageTrackable = ImageTargetStorage.newTrackable;
if (LastImageTrackable != null)
{
    // Find UDT behavior and GameObject
    ImageTargetBehaviour imageTargetCopy = Instantiate(ImageTargetTemplate);
    System.Random rnd = new System.Random();
    int RandomNum = rnd.Next(1, 1000);
    imageTargetCopy.gameObject.name = "UserDefinedTarget-" + RandomNum;
    // Add the target to the dataset
    try
    {
        m_UDT_DataSet.CreateTrackable(LastImageTrackable, imageTargetCopy.gameObject);
    }
    catch (Exception ex)
    {
        Debug.Log("An error occurred while trying to create an image target from a different scene: " + ex.Message);
    }
    // Activate the dataset again
    m_ObjectTracker.ActivateDataSet(m_UDT_DataSet);
}
//****************************************************************************************************
Unity crashes and never gives me any useful data on the issue.
The step that causes the crash is
m_UDT_DataSet.CreateTrackable(LastImageTrackable, imageTargetCopy.gameObject);
which uses the LastImageTrackable from the previous creation.
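For context, the ImageTargetStorage referenced above would be a plain static holder along these lines (a sketch only; the TrackableSource field type is an assumption based on Vuforia's user-defined-target API):

// Sketch of the static class assumed to keep the trackable alive between scene loads.
public static class ImageTargetStorage
{
    // TrackableSource captured from the ImageTargetBuilder in the previous scene (assumed type).
    public static Vuforia.TrackableSource newTrackable;
}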
Please advise me on this error and kindly suggest a different framework if Vuforia is not suitable for this simple task.

How to extract text from image Android app

I am working on a feature for my Android app. I would like to read text from a picture and then save that text in a database. Is using OCR the best way? Is there another way? Google suggests in its documentation that the NDK should only be used if strictly necessary, but what are the drawbacks exactly?
Any help would be great.
You can use the Google Vision library to convert an image to text; it gives good results.
Add the library below to your build.gradle:
compile 'com.google.android.gms:play-services-vision:10.0.0+'
TextRecognizer textRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();

Frame imageFrame = new Frame.Builder()
        .setBitmap(bitmap) // your image bitmap
        .build();

String imageText = "";
SparseArray<TextBlock> textBlocks = textRecognizer.detect(imageFrame);
for (int i = 0; i < textBlocks.size(); i++) {
    TextBlock textBlock = textBlocks.get(textBlocks.keyAt(i));
    imageText += textBlock.getValue() + "\n"; // append each detected block of text
}
From this simple example of an OCR reader in Android, you can read text from an image and also scan for text using the camera, with very simple code.
The library is developed using the Mobile Vision Text API.
To scan text from the camera:
OCRCapture.Builder(this)
        .setUseFlash(true)
        .setAutoFocus(true)
        .buildWithRequestCode(CAMERA_SCAN_TEXT);
To extract text from an image:
String text = OCRCapture.Builder(this).getTextFromUri(pickedImage);
// You can also use the getTextFromBitmap(Bitmap bitmap) or getTextFromImage(String imagePath) public APIs from the OCRLibrary library.
Text from an image can be extracted using Firebase ML Kit. There are two versions of the text recognition API: the on-device API (free) and the on-cloud API.
To use the API, first create a Bitmap of the image, which should be upright. Then create a FirebaseVisionImage object, passing the bitmap object:
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
Then create a FirebaseVisionTextRecognizer object:
FirebaseVisionTextRecognizer textRecognizer = FirebaseVision.getInstance()
.getCloudTextRecognizer();
Then pass the FirebaseVisionImage object to the processImage() method, add listeners to the resulting task, and capture the extracted text in the success callback:
textRecognizer.processImage(image)
        .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
            @Override
            public void onSuccess(FirebaseVisionText firebaseVisionText) {
                // process success
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // process failure
            }
        });
For a complete example showing how to use the Firebase ML text recognizer, see https://www.zoftino.com/extracting-text-from-images-android
There is a different option: you can upload your image to a server, run OCR on the server, and then fetch the result.

AForge.NET Image Color Manipulation

I discovered AForge a few days ago with a goal in mind. I wanted to be able to manipulate an image's colors. However, after trying several different methods I have not been able to find a resolution.
I looked thoroughly through the documentation they give, but it hasn't been any help to me. The specific part of the documentation I have been using is:
http://www.aforgenet.com/framework/docs/html/3aaa490f-8dbe-f179-f64b-eedd0b9d34ac.htm
The example they give:
// create filter
YCbCrLinear filter = new YCbCrLinear( );
// configure the filter
filter.InCb = new Range( -0.276f, 0.163f );
filter.InCr = new Range( -0.202f, 0.500f );
// apply the filter
filter.ApplyInPlace( image );
I replicated it in a button click event, but the 'image' part of it wasn't specified. I converted the image inside my PictureBox to a bitmap and referenced it in the last line, thinking that it would work, but it had no effect at all.
My code is the following:
private void ColManButton_Click(object sender, EventArgs e)
{
Bitmap newimage = new Bitmap(pictureBox1.Image);
YCbCrLinear filter = new YCbCrLinear();
filter.InCb = new Range(-0.276f, 0.163f);
filter.InCr = new Range(-0.202f, 0.500f);
filter.ApplyInPlace(newimage);
}
My question essentially is, to anyone familiar or willing to help with this framework, how do I take my image and manipulate its color using AForge's YCbCrLinear Class under my button's click event?
Remember to set the picture box image after you have applied the filtering.
private void ColManButton_Click(object sender, EventArgs e)
{
Bitmap newimage = new Bitmap(pictureBox1.Image);
YCbCrLinear filter = new YCbCrLinear();
filter.InCb = new Range(-0.276f, 0.163f);
filter.InCr = new Range(-0.202f, 0.500f);
filter.ApplyInPlace(newimage);
pictureBox1.Image = newimage;
}
On the AForge website you can download the source code of a sample filter application; did you try it?
