I have code that opens the image I want to upload, converts it to grayscale, and then to a binary image. My question: how do I get the (0, 1) values of the binary image so that I can build a matrix from those values with Emgu CV in C#?
OpenFileDialog Openfile = new OpenFileDialog();
if (Openfile.ShowDialog() == DialogResult.OK)
{
Image<Gray, Byte> My_Image = new Image<Gray, byte>(Openfile.FileName);
pictureBox1.Image = My_Image.ToBitmap();
My_Image = My_Image.ThresholdBinary(new Gray(69), new Gray(255));
pictureBox2.Image = My_Image.ToBitmap();
}
}
Second Post
I think I misunderstood the question, sorry for giving the wrong info. But you may get some understanding from this post: Work with matrix in emgu cv.
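If what you actually need is an Emgu CV Matrix rather than an array, here is a minimal sketch (the ToZeroOneMatrix name is mine; it assumes the image has already been through ThresholdBinary(), so every pixel is 0 or 255):
public Matrix<byte> ToZeroOneMatrix(Image<Gray, byte> binaryImage)
{
    //one matrix entry per pixel, holding 0 or 1
    Matrix<byte> matrix = new Matrix<byte>(binaryImage.Rows, binaryImage.Cols);
    for (int y = 0; y < binaryImage.Rows; y++)
    {
        for (int x = 0; x < binaryImage.Cols; x++)
        {
            //Data is indexed [row, column, channel]; 255 becomes 1, 0 stays 0
            matrix[y, x] = (byte)(binaryImage.Data[y, x, 0] == 0 ? 0 : 1);
        }
    }
    return matrix;
}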
First Post
By passing My_Image (the result of ThresholdBinary()) to the following function, you get an array that contains only zeros and ones for the binary image.
public int[] ZeroOneArray(Image<Gray, byte> binaryImage)
{
//get the byte values after Image.ThresholdBinary():
//each pixel is either 0 or 255
byte[] imageBytes = binaryImage.Bytes;
//map 255 to 1 and keep 0 as 0
var binaryZeroOrOne = from byteInt in imageBytes select byteInt / 255;
//convert to an array
int[] arrayOnlyOneOrZero = binaryZeroOrOne.ToArray();
//checking the array content
foreach (var bin in arrayOnlyOneOrZero)
{
Console.WriteLine(bin);
}
return arrayOnlyOneOrZero;
}
Is this what you want? Thanks.
Third Post
Building on Chris's answer in "Error copying image to array", I wrote a function for you that transfers your gray binary image into a gray matrix image:
public Image<Gray, double> GrayBinaryImageToMatrixImage(Image<Gray, byte> binaryImage)
{
//note: binaryImage.Bytes may include row padding, so use the Data property instead,
//which is indexed [row, column, channel]
Image<Gray, double> gray_image_div = new Image<Gray, double>(binaryImage.Size);//empty image, method one
//or
Image<Gray, double> gray_image_div_II = binaryImage.Convert<Gray, double>().CopyBlank();//empty image, method two
//transfer the binary image pixels to the gray image matrix
for (int y = 0; y < binaryImage.Height; y++)
{
for (int x = 0; x < binaryImage.Width; x++)
{
if (binaryImage.Data[y, x, 0] == 0)
{
//gray image has only one channel
gray_image_div.Data[y, x, 0] = 0;
}
else if (binaryImage.Data[y, x, 0] == 255)
{
gray_image_div.Data[y, x, 0] = 255;
}
}
}
return gray_image_div;
}
Related
I am trying to get an image using the camera. The image is to be 256x256 and I want it to come from the centre of a photo taken using the camera on a phone. I found this code at: https://forums.xamarin.com/discussion/37647/cross-platform-crop-image-view
I am using this code for Android...
public byte[] CropPhoto(byte[] photoToCropBytes, Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
using (var photoOutputStream = new MemoryStream())
{
// Load the bitmap
var inSampleSize = CalculateInSampleSize((int)rectangleToCrop.Width, (int)rectangleToCrop.Height, (int)outputWidth, (int)outputHeight);
var options = new BitmapFactory.Options();
options.InSampleSize = inSampleSize;
//options.InPurgeable = true; see http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html
using (var photoToCropBitmap = BitmapFactory.DecodeByteArray(photoToCropBytes, 0, photoToCropBytes.Length, options))
{
var matrix = new Matrix();
var matrixScale = outputWidth / rectangleToCrop.Width * inSampleSize;
matrix.PostScale((float)matrixScale, (float)matrixScale);
using (var photoCroppedBitmap = Bitmap.CreateBitmap(photoToCropBitmap, (int)(rectangleToCrop.X / inSampleSize), (int)(rectangleToCrop.Y / inSampleSize), (int)(rectangleToCrop.Width / inSampleSize), (int)(rectangleToCrop.Height / inSampleSize), matrix, true))
{
photoCroppedBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, photoOutputStream);
}
}
return photoOutputStream.ToArray();
}
}
public static int CalculateInSampleSize(int inputWidth, int inputHeight, int outputWidth, int outputHeight)
{
//see http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
int inSampleSize = 1; //default
if (inputHeight > outputHeight || inputWidth > outputWidth) {
int halfHeight = inputHeight / 2;
int halfWidth = inputWidth / 2;
// Calculate the largest inSampleSize value that is a power of 2 and keeps both
// height and width larger than the requested height and width.
while ((halfHeight / inSampleSize) > outputHeight && (halfWidth / inSampleSize) > outputWidth)
{
inSampleSize *= 2;
}
}
return inSampleSize;
}
and this code for iOS...
public byte[] CropPhoto(byte[] photoToCropBytes, Xamarin.Forms.Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
byte[] photoOutputBytes;
using (var data = NSData.FromArray(photoToCropBytes))
{
using (var photoToCropCGImage = UIImage.LoadFromData(data).CGImage)
{
//crop image
using (var photoCroppedCGImage = photoToCropCGImage.WithImageInRect(new CGRect((nfloat)rectangleToCrop.X, (nfloat)rectangleToCrop.Y, (nfloat)rectangleToCrop.Width, (nfloat)rectangleToCrop.Height)))
{
using (var photoCroppedUIImage = UIImage.FromImage(photoCroppedCGImage))
{
//create a 24bit RGB image to the output size
using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, (int)outputWidth, (int)outputHeight, 8, (int)(4 * outputWidth), CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
{
var photoOutputRectangleF = new RectangleF(0f, 0f, (float)outputWidth, (float)outputHeight);
// draw the cropped photo resized
cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
//get cropped resized photo
var photoOutputUIImage = UIKit.UIImage.FromImage(cGBitmapContext.ToImage());
//convert cropped resized photo to bytes and then stream
using (var photoOutputNsData = photoOutputUIImage.AsJPEG())
{
photoOutputBytes = new Byte[photoOutputNsData.Length];
System.Runtime.InteropServices.Marshal.Copy(photoOutputNsData.Bytes, photoOutputBytes, 0, Convert.ToInt32(photoOutputNsData.Length));
}
}
}
}
}
}
return photoOutputBytes;
}
I am struggling to work out exactly what parameters to pass to the function.
Currently, I am doing the following:
double cropSize = Math.Min(DeviceDisplay.MainDisplayInfo.Width, DeviceDisplay.MainDisplayInfo.Height);
double left = (DeviceDisplay.MainDisplayInfo.Width - cropSize) / 2.0;
double top = (DeviceDisplay.MainDisplayInfo.Height - cropSize) / 2.0;
// Get a square resized and cropped from the top image as a byte[]
_imageData = mediaService.CropPhoto(_imageData, new Rectangle(left, top, cropSize, cropSize), 256, 256);
I was expecting this to crop the image to the central square (in portrait mode the side length would be the width of the photo) and then scale it down to a 256x256 image. But it never picks the centre of the image.
Has anyone ever used this code and can tell me what I need to pass in for the 'rectangleToCrop' parameter?
Note: Both Android and iOS give the same image, just not the central part that I was expecting.
In the end I computed the crop rectangle from the photo's own pixel dimensions rather than from the display size. Here are the two routines I used:
Android:
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
// Create object of bitmapfactory's option method for further option use
BitmapFactory.Options options = new BitmapFactory.Options();
// InPurgeable is used to free up memory while required
options.InPurgeable = true;
// Get the original image
using (var originalImage = BitmapFactory.DecodeByteArray(rawPhoto, 0, rawPhoto.Length, options))
{
// The shortest edge will determine the size of the square image
int cropSize = Math.Min(originalImage.Width, originalImage.Height);
int left = (originalImage.Width - cropSize) / 2;
int top = (originalImage.Height - cropSize) / 2;
using (var squareImage = Bitmap.CreateBitmap(originalImage, left, top, cropSize, cropSize))
{
// Resize the square image to the correct size of an Avatar
using (var resizedImage = Bitmap.CreateScaledBitmap(squareImage, outputSize, outputSize, true))
{
// Return the raw data of the resized image
using (MemoryStream resizedImageStream = new MemoryStream())
{
// Resize the image maintaining 100% quality
resizedImage.Compress(Bitmap.CompressFormat.Png, 100, resizedImageStream);
return resizedImageStream.ToArray();
}
}
}
}
}
iOS:
private const int BitsPerComponent = 8;
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
using (var data = NSData.FromArray(rawPhoto))
{
using (var photoToCrop = UIImage.LoadFromData(data).CGImage)
{
nint photoWidth = photoToCrop.Width;
nint photoHeight = photoToCrop.Height;
nint cropSize = photoWidth < photoHeight ? photoWidth : photoHeight;
nint left = (photoWidth - cropSize) / 2;
nint top = (photoHeight - cropSize) / 2;
// Crop image
using (var photoCropped = photoToCrop.WithImageInRect(new CGRect(left, top, cropSize, cropSize)))
{
using (var photoCroppedUIImage = UIImage.FromImage(photoCropped))
{
// Create a 24bit RGB image of output size
using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, outputSize, outputSize, BitsPerComponent, outputSize << 2, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
{
var photoOutputRectangleF = new RectangleF(0f, 0f, outputSize, outputSize);
// Draw the cropped photo resized
cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
// Get cropped resized photo
var photoOutputUIImage = UIImage.FromImage(cGBitmapContext.ToImage());
// Convert cropped resized photo to bytes and then stream
using (var photoOutputNsData = photoOutputUIImage.AsPNG())
{
var rawOutput = new byte[photoOutputNsData.Length];
Marshal.Copy(photoOutputNsData.Bytes, rawOutput, 0, Convert.ToInt32(photoOutputNsData.Length));
return rawOutput;
}
}
}
}
}
}
}
So I made a small application that basically takes whatever image is in the clipboard (memory) and tries to draw it.
This is a sample of the code:
private EventHandler<KeyEvent> copyPasteEvent = new EventHandler<KeyEvent>() {
final KeyCombination ctrl_V = new KeyCodeCombination(KeyCode.V, KeyCombination.CONTROL_DOWN);
@Override
public void handle(KeyEvent event) {
if (ctrl_V.match(event)) {
System.out.println("Ctrl+V pressed");
Clipboard clipboard = Clipboard.getSystemClipboard();
System.out.println(clipboard.getContentTypes());
//Change canvas size if necessary to allow space for the image to fit
Image copiedImage = clipboard.getImage();
if (copiedImage.getHeight()>canvas.getHeight()){
canvas.setHeight(copiedImage.getHeight());
}
if (copiedImage.getWidth()>canvas.getWidth()){
canvas.setWidth(copiedImage.getWidth());
}
gc.drawImage(clipboard.getImage(), 0,0);
}
}
};
This is the image that was drawn and the corresponding data type:
A print from my screen.
An image from the internet.
However, when I copy and paste a raw image directly from Paint...
Object Descriptor is an OLE format from Microsoft.
This is why when you copy an image from a Microsoft application, you get these descriptors from Clipboard.getSystemClipboard().getContentTypes():
[[application/x-java-rawimage], [Object Descriptor]]
As for getting the image out of the clipboard... let's try two possible ways to do it: AWT and JavaFX.
AWT
Let's use the AWT toolkit to get the system clipboard and, if there is an image on it, retrieve a BufferedImage. Then we can easily convert it to a JavaFX Image and place it in an ImageView:
try {
DataFlavor[] availableDataFlavors = Toolkit.getDefaultToolkit().
getSystemClipboard().getAvailableDataFlavors();
for (DataFlavor f : availableDataFlavors) {
System.out.println("AWT Flavor: " + f);
if (f.equals(DataFlavor.imageFlavor)) {
BufferedImage data = (BufferedImage) Toolkit.getDefaultToolkit().getSystemClipboard().getData(DataFlavor.imageFlavor);
System.out.println("data " + data);
// Convert to JavaFX:
WritableImage img = new WritableImage(data.getWidth(), data.getHeight());
SwingFXUtils.toFXImage((BufferedImage) data, img);
imageView.setImage(img);
}
}
} catch (UnsupportedFlavorException | IOException ex) {
System.out.println("Error " + ex);
}
It prints:
AWT Flavor: java.awt.datatransfer.DataFlavor[mimetype=image/x-java-image;representationclass=java.awt.Image]
data BufferedImage@3e4eca95: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 350 height = 364 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
and displays your image:
This part was based on this answer.
JavaFX
Why didn't we try it with JavaFX in the first place? Well, we could have tried directly:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
imageView.setImage(content);
and you will get a valid Image, but when you add it to an ImageView it will be blank (as you already noticed) or will have invalid colors.
So how can we get a valid image? If you check the BufferedImage above, it shows type = 1, which is BufferedImage.TYPE_INT_RGB; in other words, it is an image with 8-bit RGB color components packed into integer pixels, without an alpha component.
My guess is that the JavaFX implementation for Windows doesn't process this image format correctly, as it probably expects an RGBA format. You can check here how the image is extracted. And if you want to dive into the native implementation, check the native-glass/win/GlassClipboard.cpp code.
So we can try to do it with a PixelReader. Let's read the image and return a byte array:
private byte[] imageToData(Image image) {
int width = (int) image.getWidth();
int height = (int) image.getHeight();
byte[] data = new byte[width * height * 3];
int i = 0;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int argb = image.getPixelReader().getArgb(x, y);
int r = (argb >> 16) & 0xFF;
int g = (argb >> 8) & 0xFF;
int b = argb & 0xFF;
data[i++] = (byte) r;
data[i++] = (byte) g;
data[i++] = (byte) b;
}
}
return data;
}
Now, all we need to do is use this byte array to write a new image and set it to the ImageView:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
byte[] data = imageToData(content);
WritableImage writableImage = new WritableImage((int) content.getWidth(), (int) content.getHeight());
PixelWriter pixelWriter = writableImage.getPixelWriter();
pixelWriter.setPixels(0, 0, (int) content.getWidth(), (int) content.getHeight(),
PixelFormat.getByteRgbInstance(), data, 0, (int) content.getWidth() * 3);
imageView.setImage(writableImage);
And now you will get the same result, but only using JavaFX:
Is there a way to get an image's RGB matrix representation, and vice versa? I would like to perform masking/filtering on the original image, so it needs to be applied to its RGB matrix representation. Currently I am using this library to get an image from a device: https://pub.dartlang.org/packages/image_picker
https://pub.dartlang.org/packages/image provides image conversion and manipulation utility functions.
import 'dart:async';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
// for a local image example
List RGBAList;
// 1. get an [ImageProvider] instance
// [ExactAssetImage] extends [AssetBundleImageProvider] extends [ImageProvider]
ExactAssetImage provider = ExactAssetImage('$local_img_uri');
// 2. resolve the [ImageProvider] to a [ui.Image]
ImageStream stream = provider.resolve(ImageConfiguration.empty);
Completer<ui.Image> completer = Completer<ui.Image>();
ImageStreamListener listener;
listener = ImageStreamListener((frame, sync) {
  completer.complete(frame.image);
  stream.removeListener(listener);
});
stream.addListener(listener);
// 3. get the rgba list from the [ui.Image]
completer.future.then((ui.Image image) {
  image.toByteData(format: ui.ImageByteFormat.rawRgba).then((ByteData data) {
    RGBAList = data.buffer.asUint8List().toList();
  });
});
You can use this package - https://pub.dartlang.org/packages/image
import 'package:image/image.dart' as Imagi;
Here's how to use it to obtain the RGB matrix of a file from ImagePicker():
final image = await ImagePicker().pickImage(source: ImageSource.gallery);
if (image == null) return;
final imageTemp = File(image.path);
controlImage = imageTemp;
Now here's a function for obtaining the RGB matrix:
List<List<int>> imgArray = [];
void readImage() async {
  final bytes = await controlImage!.readAsBytes();
  // decodeImage picks a decoder from the file contents (the gallery image may not be a JPEG)
  final decodedImg = Imagi.decodeImage(bytes);
  // 3 bytes per pixel (r, g, b), laid out row by row
  final decodedBytes = decodedImg!.getBytes(format: Imagi.Format.rgb);
  print(decodedBytes.length); // width * height * 3
  // loop over every pixel in row-major order
  int pixelCount = decodedImg.width * decodedImg.height;
  for (int i = 0; i < pixelCount; i++) {
    int red = decodedBytes[i * 3];
    int green = decodedBytes[i * 3 + 1];
    int blue = decodedBytes[i * 3 + 2];
    imgArray.add([red, green, blue]);
  }
  print(imgArray.length);
}
The list imgArray will then contain one [red, green, blue] entry per pixel, in row-major order.
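The question also asked about the reverse direction (matrix back to image). Here is a minimal sketch, assuming package:image 3.x and the per-pixel list built above; matrixToImage and the output file name are my own placeholders:
import 'dart:io';
import 'package:image/image.dart' as Imagi;

// Rebuild an image from a flat list of [r, g, b] entries in row-major order.
Imagi.Image matrixToImage(List<List<int>> pixels, int width, int height) {
  final out = Imagi.Image(width, height);
  for (int i = 0; i < pixels.length; i++) {
    int x = i % width;
    int y = i ~/ width;
    out.setPixelRgba(x, y, pixels[i][0], pixels[i][1], pixels[i][2]);
  }
  return out;
}

// Usage: encode to PNG and write it back to disk.
// File('rebuilt.png').writeAsBytesSync(Imagi.encodePng(matrixToImage(imgArray, w, h)));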
Using Processing, I am trying to run a script that will process a folder full of frames.
The script is a combination of PixelSortFrames and SortThroughSeamCarving.
I am new to Processing and what I want does not seem to be working. I would like the script to go back and pick the next file in the folder to be processed. At the moment it stops at the end and does not move on to the next file (there are three other modules also involved).
Any help would be much appreciated. :(
/* ASDFPixelSort for video frames v1.0
Original ASDFPixelSort by Kim Asendorf <http://kimasendorf.com>
https://github.com/kimasendorf/ASDFPixelSort
Fork by dx <http://dequis.org> and chinatsu <http://360nosco.pe>
*/
// Main configuration
String basedir = ".../Images/Seq_002"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".jpg"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
boolean reverseIt = true;
boolean saveIt = true;
int mode = 2; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
// -------
PImage img, original;
float[][] sums;
int bottomIndex = 0;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(fileext);
}
};
void setup() {
if (resumeprocess > 0) {i = resumeprocess - 1;frameCount = i;}
size(1504, 1000); // Resolution of the frames. It's likely there's a better way of doing this..
filenames = folder.list(extfilter);
size(1504, 1000);
println(" " + width + " x " + height + " px");
println("Creating buffer images...");
PImage hImg = createImage(1504, 1000, RGB);
PImage vImg = createImage(1504, 1000, RGB);
// draw image and convert to grayscale
if (i +1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
img = loadImage(basedir+"/"+filenames[i]);
original = loadImage(basedir+"/"+filenames[i]);
image(img, 0, 0);
filter(GRAY);
img.loadPixels(); // updatePixels is in the 'runKernals'
// run kernels to create "energy map"
println("Running kernals on image...");
runKernels(hImg, vImg);
image(img, 0, 0);
// sum pathways through the image
println("Getting sums through image...");
sums = getSumsThroughImage();
image(img, 0, 0);
loadPixels();
// get start point (smallest value) - this is used to find the
// best seam (starting at the lowest energy)
bottomIndex = width/2;
// bottomIndex = findStartPoint(sums, 50);
println("Bottom index: " + bottomIndex);
// find the pathway with the lowest information
int[] path = new int[height];
path = findPath(bottomIndex, sums, path);
for (int bi=0; bi<width; bi++) {
// get the pixels of the path from the original image
original.loadPixels();
color[] c = new color[path.length]; // create array of the seam's color values
for (int i=0; i<c.length; i++) {
try {
c[i] = original.pixels[i*width + path[i] + bi]; // set color array to values from original image
}
catch (Exception e) {
// when we run out of pixels, just ignore
}
}
println(" " + bi);
c = sort(c); // sort (use better algorithm later)
if (reverseIt) {
c = reverse(c);
}
for (int i=0; i<c.length; i++) {
try {
original.pixels[i*width + path[i] + bi] = c[i]; // reverse! set the pixels of the original from sorted array
}
catch (Exception e) {
// when we run out of pixels, just ignore
}
}
original.updatePixels();
}
// when done, update pixels to display
updatePixels();
// display the result!
image(original, 0, 0);
if (saveIt) {
println("Saving file...");
//filenames = stripFileExtension(filenames);
save("results/SeamSort_" + filenames + ".tiff");
}
println("DONE!");
}
// strip file extension for saving and renaming
String stripFileExtension(String s) {
s = s.substring(s.lastIndexOf('/')+1, s.length());
s = s.substring(s.lastIndexOf('\\')+1, s.length());
s = s.substring(0, s.lastIndexOf('.'));
return s;
}
This code works, processing all images in the selected folder:
String basedir = "D:/things/pixelsortframes"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".png"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
int mode = 1; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
PImage img;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(fileext);
}
};
void setup() {
if (resumeprocess > 0) {i = resumeprocess - 1;frameCount = i;}
size(1920, 1080); // Resolution of the frames. It's likely there's a better way of doing this..
filenames = folder.list(extfilter);
}
void draw() {
if (i +1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
row = 0;
column = 0;
img = loadImage(basedir+"/"+filenames[i]);
image(img,0,0);
while(column < width-1) {
img.loadPixels();
sortColumn();
column++;
img.updatePixels();
}
while(row < height-1) {
img.loadPixels();
sortRow();
row++;
img.updatePixels();
}
image(img,0,0);
saveFrame(basedir+"/out/"+filenames[i]);
println("Frames processed: "+frameCount+"/"+filenames.length);
i++;
}
Essentially I want to do the same thing, only with a different image process, but my code is not doing this to all the files in the folder... just one file.
You seem to be confused about what the setup() function does. It runs once, and only once, at the beginning of your code's execution. You don't have any looping structure for processing the other files, so it's no wonder that it only processes the first one. Perhaps wrap the entire thing in a for loop, or move the per-file work into draw() the way the working code above does (see the sketch below)? It looks like you kind of thought about this, judging by the global variable i, but you never increment it to go to the next image, and you overwrite its value in several for loops later anyway.
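A rough sketch of that restructuring, reusing the globals from your sketch (processOneFrame() is a hypothetical stand-in for the kernel/seam/sort steps currently sitting in setup()):
void setup() {
  size(1504, 1000);
  filenames = folder.list(extfilter);
}

void draw() {
  // draw() runs repeatedly, so each call handles one file
  if (i >= filenames.length) { println("Done!"); exit(); }
  img = loadImage(basedir + "/" + filenames[i]);
  original = loadImage(basedir + "/" + filenames[i]);
  processOneFrame(); // hypothetical: the seam-carving and sorting code moved out of setup()
  save("results/SeamSort_" + stripFileExtension(filenames[i]) + ".tiff");
  i++; // move on to the next file on the next draw() call
}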
I'm trying to create a PDF using iTextSharp. I have added one table containing two columns, one holding text and the other an image. I want the image size to stay constant.
My image automatically resizes when the text in the neighbouring cell gets longer, so the images in different cells end up with different sizes.
for (int i = 0; i < visitInfo.VisitsiteComplience.Count; ++i)
{
cellprop.Colspan = 1;
cellprop.Pharse = visitInfo.VisitsiteComplience[i].Compliencedescription;
cellprop.BaseColor = null;
table.AddCell(AddCelltoTable(cellprop));
yesicon.ScaleAbsolute(35f, 35f);
noicon.ScaleAbsolute(35f, 35f);
if (visitInfo.VisitsiteComplience[i].Status == "1")
{
statuscell.AddElement(new Chunk(noicon, 0, 0));
}
else
{
// statuscell.AddElement(new Chunk(noicon, 0, 0));
}
statuscell.FixedHeight = 10;
//headerLeftCell.Border = PdfPCell.NO_BORDER;
table.AddCell(statuscell);
}
Then I changed the code, but now the image size increases and occupies the full cell:
for (int i = 0; i < visitInfo.VisitsiteComplience.Count; ++i)
{
cellprop.Colspan = 1;
cellprop.Pharse = visitInfo.VisitsiteComplience[i].Compliencedescription;
cellprop.BaseColor = null;
table.AddCell(AddCelltoTable(cellprop));
yesicon.ScaleAbsolute(35f, 35f);
noicon.ScaleAbsolute(35f, 35f);
if (visitInfo.VisitsiteComplience[i].Status == "1")
{
statuscell.AddElement(new Chunk(noicon, 0, 0));
}
else
{
// statuscell.AddElement(new Chunk(noicon, 0, 0));
}
//headerLeftCell.Border = PdfPCell.NO_BORDER;
table.AddCell(statuscell);
}
I think you're scaling the image yourself like this: noicon.ScaleAbsolute(35f, 35f);
It also puzzles me why you're wrapping the image inside a Chunk. You can create a PdfPCell that takes an Image as a parameter as well as a bool that defines whether or not iText should scale the Image. See page 109 of the book iText in Action (of which I'm the author) and take a look at the XMen example of chapter 4.
Image image = Image.getInstance("D:/star.png");
PdfPCell cell = new PdfPCell();
cell.setFixedHeight(40f);
cell.addElement(image);
table.addCell(cell);
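Since the question uses iTextSharp, here is a minimal C# sketch of that constructor-based approach (assuming iTextSharp 5.x and the variable names from the question's code):
// noicon was already scaled with ScaleAbsolute(35f, 35f)
PdfPCell statusCell = new PdfPCell(noicon, false); // false: keep the image's own size instead of fitting it to the cell
statusCell.FixedHeight = 40f;
statusCell.HorizontalAlignment = Element.ALIGN_CENTER;
table.AddCell(statusCell);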