Images from our website do not display in Safari for some Mac users; they report seeing either no image or a black image. Here is an example:
http://s3-eu-west-2.amazonaws.com/bp18.boxcleverpress.com/Boxclever_logo_chartreuse.png
What I have discovered is:
Images display on PC
Images display on SOME Macs (I have an older one that is OK)
Images display on iPhones and iPads
Images are PNG
I have optimised the images with pngtastic
When the images are copied to the Mac and opened with Adobe Photoshop, they give the error "the file format module cannot parse the file"
When I tried to open a pngtastic-optimised file in Photoshop Elements on Windows, I also got that error
When I tried to open the optimised file in Photoshop on Windows, I got the error "IDAT: incorrect data check"
I will replace the optimised images with unoptimised ones, but I am not sure whether this problem lies with pngtastic, with Adobe's image libraries, or with something else.
The problem lies in Zopfli.java, included by pngtastic.
It uses this Java code to calculate the Adler-32 checksum:
/**
 * Calculates the adler32 checksum of the data
 */
private static int adler32(byte[] data) {
    int s1 = 1;
    int s2 = 1 >> 16;

    int i = 0;
    while (i < data.length) {
        int tick = Math.min(data.length, i + 1024);
        while (i < tick) {
            s1 += data[i++];
            s2 += s1;
        }
        s1 %= 65521;
        s2 %= 65521;
    }
    return (s2 << 16) | s1;
}
However, bytes in Java are always signed, and so this may return a wrong checksum value for some data inputs: a negative byte drives s1 (and then s2) negative, and because s1 and s2 are plain signed ints, Java's % operator can then leave a negative remainder.
With (my C version of) the same code and data explicitly declared as signed char and both s1 and s2 as signed int, I get a wrong checksum FFFF9180 – exactly the one in your damaged PNG.
If I change the declaration to use unsigned char and unsigned int, it returns the correct checksum 1BCD6EB2 again.
The original C code for the Adler-32 checksum in zopfli uses unsigned types throughout, so it's just the Java implementation that suffers from this.
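For illustration only, here is a sketch of how the Java routine could be made sign-safe by masking each byte to its unsigned value before adding it (my own corrected version, not the actual pngtastic fix):

/**
 * Adler-32 of the data, treating every byte as unsigned (0..255).
 */
private static int adler32(byte[] data) {
    int s1 = 1;
    int s2 = 0;

    int i = 0;
    while (i < data.length) {
        // Blocks of 1024 bytes keep s1 and s2 well inside the int range.
        int tick = Math.min(data.length, i + 1024);
        while (i < tick) {
            s1 += data[i++] & 0xff; // mask off the sign extension
            s2 += s1;
        }
        s1 %= 65521;
        s2 %= 65521;
    }
    return (s2 << 16) | s1;
}

With the masking in place, s1 and s2 can never go negative, so the % results stay in range and the routine should produce the correct checksum (1BCD6EB2 in the example above) rather than FFFF9180.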
The problem appears to be due to the use of the zopfli compression in the PNGs that I optimised using pngtastic. The workaround is to use a different pngtastic compression option and the PNGs are then readable in Photoshop.
Using a different compression algorithm will result in less optimisation.
I am not sure why the zopfli compression is a problem; it could be that there is a fault in my code (although the same code works fine when only the zopfli option is changed), in pngtastic, or that macOS and Adobe don't support zopfli.
usr2564301 has done some investigation and it appears the Adler-32 checksum on the compressed data in my example image is incorrect. usr2564301 has also tested the pngtastic code and found it to produce the correct checksum, so the problem might be in how I handle the bytestream out of pngtastic.
The code below performs the PNG optimisation using pngtastic (com.googlecode.pngtastic.core)
public static final String OPT_ZOPFLI = "zopfli";
public static final String OPT_DEFAULT = "default";
public static final String OPT_IMAGEOPTIM = "imageoptim";

private String optimization = OPT_ZOPFLI;

public void optimizePng(File infile, String out) {
    final InputStream in;
    try {
        in = new BufferedInputStream(new FileInputStream(infile));
        final PngImage image = new PngImage(in);

        // optimize
        final PngOptimizer optimizer = new PngOptimizer();
        optimizer.setCompressor(optimization, 1);
        final PngImage optimizedImage = optimizer.optimize(image, false, 9);

        // export the optimized image to a new file
        final ByteArrayOutputStream optimizedBytes = new ByteArrayOutputStream();
        optimizedImage.writeDataOutputStream(optimizedBytes);
        optimizedImage.export(out, optimizedBytes.toByteArray());
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
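Until the Zopfli.java checksum is fixed, the workaround mentioned above is just a one-line change to this code, switching to one of the other compressor constants already defined here:

// Workaround: avoid the zopfli compressor until its Adler-32 bug is fixed.
// The PNGs produced with the default compressor open fine in Photoshop,
// at the cost of slightly less optimisation.
private String optimization = OPT_DEFAULT;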
I'm having trouble with saveBytes(). When I call saveBytes(), it doesn't actually save the bytes into the file like it should. The file is in the same folder and is correctly named; the bytes just aren't being written into it.
Here is my code:
int varOne = 0;
int varTwo = 4;
int varThree = 2;

void setup(){
  size(500, 500);
}

void draw(){
  saveTheBytes();
}

void saveTheBytes(){
  byte[] byteArray = {(byte)varOne, (byte)varTwo, (byte)varThree}
  saveBytes("filename.txt", byteArray)
}
Any help is appreciated. Thanks!
Other than the missing semicolons at the end of each statement in saveTheBytes(), the code looks legit.
One note: you're overwriting this file multiple times a second in draw(). Maybe you meant to do that once in setup()?
Double check the filesize of your file: it should be exactly 3 bytes.
These aren't going to be visible in a text editor (as they are ASCII characters NULL, END OF TRANSMISSION and START OF TEXT).
You should see the bytes with a hex editor as 0x00 0x04 0x02.
Here's a preview using HexFiend on OSX:
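If a single write was the intent, a minimal sketch with the call moved into setup() (and the semicolons restored) would look like this:

int varOne = 0;
int varTwo = 4;
int varThree = 2;

void setup() {
  size(500, 500);
  saveTheBytes();  // write the file once, not on every frame
}

void draw() {
  // drawing only; no file I/O here
}

void saveTheBytes() {
  byte[] byteArray = {(byte)varOne, (byte)varTwo, (byte)varThree};
  saveBytes("filename.txt", byteArray);  // produces a 3-byte file: 0x00 0x04 0x02
}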
I use the new 3D reconstruction APIs (MIRA release). I have a problem when I call the Tango3DR_update function. It returns the TANGO_3DR_INVALID code when I set the parameters associated with the camera image (const Tango3DR_ImageBuffer * image, const Tango3DR_Pose * image_pose, const Tango3DR_CameraCalibration * calibration). I have checked my parameters and they seem to be correct. When I call this function without the image parameters, it works properly... Is this a known bug?
Thank you in advance for your answers.
TL;DR: The support library ImageBufferManager has a bug with strides. Do color_image.stride = image_buffer->width; when creating your Tango3DR_ImageBuffer.
I think there are two things:
Image Format
First, you have to make sure to use the TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP. You can do that by using the ImageBufferManager from the support library.
ImageBufferManager and strides
Second, there is a catch if you use the support library ImageBufferManager. TangoSupport_getLatestImageBuffer seems to fail to initialize the stride of the returned image (I got 0 and some other very large values), which the 3DR library doesn't like. The original TangoImageBuffer from OnColorAvailable has stride=1280 (= image_width), and forcing that value on the TangoImageBuffer returned
from the ImageBufferManager seems to fix the issue. I believe this is a bug in ImageBufferManager.
This means doing
color_image.stride = image_buffer->width;
instead of
color_image.stride = image_buffer->stride
when creating the Tango3DR_ImageBuffer.
Full code example
I got it working with the following code in my Render method:
TangoImageBuffer* image_buffer;
ret = TangoSupport_getLatestImageBuffer(
    image_buffer_manager_, &image_buffer);
if (ret != TANGO_SUCCESS) {
  LOG(ERROR) << "Error in TangoSupport_getLatestImageBuffer";
}
...
Tango3DR_ImageBuffer color_image;
color_image.width = image_buffer->width;
color_image.height = image_buffer->height;
// VERY Important - The support library ImageBufferManager seems to have
// a bug where it will always put the stride of the returned buffer
// at 0, which causes 3DR to fail
color_image.stride = image_buffer->width;
color_image.timestamp = image_buffer->timestamp;
color_image.format = (Tango3DR_ImageFormatType)image_buffer->format;
color_image.data = image_buffer->data;
ret = Tango3DR_update(
    tango_3dr_context_,
    &cloud,
    &depth_pose_3dr,
    &color_image,
    &color_pose_3dr,
    &tango_3dr_calibration_,
    &updated_indices);
I am using the ImageBufferManager from the support library, so my OnColorAvailable looks like this:
void SynchronizationApplication::OnColorAvailable(
    const TangoImageBuffer* buffer) {
  if (tango_3dr_enabled_ && tango_3dr_use_color_) {
    TangoErrorType ret = TangoSupport_updateImageBuffer(
        image_buffer_manager_, buffer);
    if (ret != TANGO_SUCCESS) {
      LOG(ERROR) << "Error in TangoSupport_updateImageBuffer";
    }
  }
}
And the image_buffer_manager_ is initialized as follows (the pixel format might be important).
TangoSupport_createImageBufferManager(
    TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP,
    image_width_,
    image_height_,
    &image_buffer_manager_
);
I am copying the calibration as follows:
void CopyCalibrationTangoTo3DR(const TangoCameraIntrinsics& tango,
                               Tango3DR_CameraCalibration* out) {
  out->calibration_type =
      (Tango3DR_TangoCalibrationType)tango.calibration_type;
  out->cx = tango.cx;
  out->cy = tango.cy;
  memcpy(out->distortion, tango.distortion, sizeof(double) * 5);
  out->fx = tango.fx;
  out->fy = tango.fy;
  out->height = tango.height;
  out->width = tango.width;
}
I am trying to understand how NPN_RequestRead should be used when writing an NPAPI plugin. The documentation looked pretty clear at first, but I still cannot make the plugin work.
Here is my goal: implement a JPEG 2000 plugin using NPAPI. To have a proper implementation I need to access the JPEG 2000 stream using random access. In my case the images are huge (100000x100000 RGB), but they can be displayed efficiently using only the first few bytes (thanks to multiresolution!).
As far as I can tell, I cannot make the plugin stop the GET. I cannot use local file access in Firefox since it appears to be broken. However, I can use a local apache2 installation and have the plugin called in NPP_NewStream( ... seekable=true ) mode:
$ HEAD http://localhost/test.jp2 | grep Accept-Ranges
Accept-Ranges: bytes
Since seekable is set to true, I create the plugin with *stype = NP_SEEK. It seems that from this point I should be able to stop the GET with:
NPError NPP_NewStream(NPP instance, NPMIMEType type, NPStream* stream, NPBool seekable, uint16_t* stype)
[...]
  NPByteRange range;
  range.offset = 0;
  range.length = 0;
  range.next = NULL;
  NPError e = s_pBrowserFunctions->requestread(stream, &range);
However the requestread call returns an error. I've had a little more luck with:
int32_t NPP_Write(NPP instance, NPStream* stream, int32_t offset, int32_t len, void* buffer)
[...]
  NPByteRange range;
  range.offset = 0;
  range.length = 0;
  range.next = NULL;
  NPError e = s_pBrowserFunctions->requestread(stream, &range);
But still, from the network console I can see that the entire stream has been downloaded.
Does anyone have a minimal example of a working NPAPI plugin using the NPN_RequestRead API?
You're requesting 0 bytes (.length = 0).
Firefox will therefore skip the range. Since there are no other valid ranges, there are no actual requests and hence Firefox returns an error.
From nsPluginStreamListenerPeer.cpp, unrelated parts stripped:
int32_t requestCnt = 0;
for (NPByteRange * range = aRangeList; range != nullptr; range = range->next) {
  // XXX zero length?
  if (!range->length)
    continue;

  // ...
  requestCnt++;
}
// ...
*numRequests = requestCnt;
// ...
if (numRequests == 0)
  return NS_ERROR_FAILURE;
So, you'll need to actually request something!
(Admittedly, the implementation looks kinda broken/limited; e.g. you cannot request bytes=0- with it.)
I am using an external .txt file to save the incrementing name index for whenever someone "takes a picture" in my app (i.e. image_1.jpg, image_2.jpg, etc.). I am trying to save the data externally so that a user does not overwrite their pictures each time they run the program. However, because of the way that Processing packages its contents for export, I cannot both read and write to the same file. It reads the appropriate file located in the app's package contents; however, when it tries to write to that file, it creates a new folder in the same directory as the app itself and writes to a new file with the same name instead.
Essentially, it reads the proper file but refuses to write to it, instead making a copy and writing to that one. The app runs fine but every time you open it and take pictures you overwrite the images you already had.
I have tried naming the "write to" location explicitly the same path as where the exported app stores the data folder inside the package contents (Contents/Resources/Java/data/assets), but this just creates a copy of that directory in the same folder as the app.
I have also tried excluding the file I am trying to read/write from my data folder when I export the app, by changing the read code to ../storage/pictureNumber.txt and then putting this file next to the app itself. When I do this the app doesn't launch at all, because it looks in its own data folder for storage and refuses to go outside of itself with ../ . Has anyone had luck both reading from and writing to the same file in an exported Processing .app?
Here is the code for the class that is handling the loading and saving of the file:
class Camera {
  PImage cameraImage;
  int cameraPadding = 10;
  int cameraWidth = 60;
  int opacity = 0;
  int flashDecrementer = 50; //higher number means quicker flash
  int pictureName;

  Camera() {
    String[] pictureIndex = loadStrings("assets/pictureNumber.txt");
    pictureName = int(pictureIndex[0]);
    cameraImage = loadImage("assets/camera.jpg");
    String _pictureName = "" + char(pictureName);
    println(pictureName);
  }

  void display(float mx, float my) {
    image(cameraImage, cameraPadding, cameraPadding,
          cameraWidth, cameraWidth-cameraWidth/5);
  }

  boolean isOver(float mx, float my) {
    if (mx >= cameraPadding &&
        mx <= cameraPadding+cameraWidth &&
        my >= cameraPadding &&
        my <= cameraPadding+cameraWidth-cameraWidth/5) {
      return true;
    }
    else {
      return false;
    }
  }

  void captureImage() {
    save("pictures/"+lines.picturePrefix+"_"+pictureName+".jpg");
    pictureName++;
    String _null = "";
    // String _tempPictureName = _null.valueOf(pictureName);
    String[] _pictureName = {_null.valueOf(pictureName)};
    saveStrings("assets/pictureNumber.txt", _pictureName);
    println(_pictureName);
  }

  void flash() {
    fill(255, opacity);
    rect(0,0,width,height);
    opacity -= flashDecrementer;
    if(opacity <= 0) opacity = 0;
  }
}
After a lot of searching I found that you have to use savePath() in order to read from a directory outside of the project.jar. The camera class constructor now looks like this:
path = savePath("storage");
println(path);
String[] pictureIndex = loadStrings(path+"/pictureNumber.txt");
pictureName = int(pictureIndex[0]);
cameraImage = loadImage("assets/camera.jpg");
String _pictureName = ""+char(pictureName);
and everything works!
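For completeness, the write side presumably has to point at the same savePath() directory, otherwise the counter would still be written to a fresh copy. A sketch of captureImage() adjusted to reuse the same path field (my own illustration, keeping the original picturePrefix usage):

void captureImage() {
  save("pictures/" + lines.picturePrefix + "_" + pictureName + ".jpg");
  pictureName++;
  // Write the new index back to the same external file that the constructor read,
  // so the count survives between runs of the exported app.
  String[] _pictureName = { str(pictureName) };
  saveStrings(path + "/pictureNumber.txt", _pictureName);
  println(_pictureName);
}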
I have an application where I am reading and writing small blocks of data (a few hundred bytes) hundreds of millions of times. I'd like to generate a compression dictionary based on an example data file and use that dictionary forever as I read and write the small blocks. I'm leaning toward the LZW compression algorithm. The Wikipedia page (http://en.wikipedia.org/wiki/Lempel-Ziv-Welch) lists pseudocode for compression and decompression. It looks fairly straightforward to modify it such that the dictionary creation is a separate block of code. So I have two questions:
Am I on the right track or is there a better way?
Why does the LZW algorithm add to the dictionary during the decompression step? Can I omit that, or would I lose efficiency in my dictionary?
Thanks.
Update: Now I'm thinking the ideal case would be to find a library that lets me store the dictionary separate from the compressed data. Does anything like that exist?
Update: I ended up taking the code at http://www.enusbaum.com/blog/2009/05/22/example-huffman-compression-routine-in-c and adapting it. I am Chris in the comments on that page. I emailed my modifications back to that blog author, but I haven't heard back yet. The compression rates I'm seeing with that code are not at all impressive. Maybe that is due to the 8-bit tree size.
Update: I converted it to 16 bits and the compression is better. It's also much faster than the original code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace Book.Core
{
    public class Huffman16
    {
        private readonly double log2 = Math.Log(2);

        private List<Node> HuffmanTree = new List<Node>();

        internal class Node
        {
            public long Frequency { get; set; }
            public byte Uncoded0 { get; set; }
            public byte Uncoded1 { get; set; }
            public uint Coded { get; set; }
            public int CodeLength { get; set; }
            public Node Left { get; set; }
            public Node Right { get; set; }

            public bool IsLeaf
            {
                get { return Left == null; }
            }

            public override string ToString()
            {
                var coded = "00000000" + Convert.ToString(Coded, 2);
                return string.Format("Uncoded={0}, Coded={1}, Frequency={2}", (Uncoded1 << 8) | Uncoded0, coded.Substring(coded.Length - CodeLength), Frequency);
            }
        }

        public Huffman16(long[] frequencies)
        {
            if (frequencies.Length != ushort.MaxValue + 1)
            {
                throw new ArgumentException("frequencies.Length must equal " + (ushort.MaxValue + 1));
            }
            BuildTree(frequencies);
            EncodeTree(HuffmanTree[HuffmanTree.Count - 1], 0, 0);
        }

        public static long[] GetFrequencies(byte[] sampleData, bool safe)
        {
            if (sampleData.Length % 2 != 0)
            {
                throw new ArgumentException("sampleData.Length must be a multiple of 2.");
            }
            var histogram = new long[ushort.MaxValue + 1];
            if (safe)
            {
                for (int i = 0; i <= ushort.MaxValue; i++)
                {
                    histogram[i] = 1;
                }
            }
            for (int i = 0; i < sampleData.Length; i += 2)
            {
                histogram[(sampleData[i] << 8) | sampleData[i + 1]] += 1000;
            }
            return histogram;
        }

        public byte[] Encode(byte[] plainData)
        {
            if (plainData.Length % 2 != 0)
            {
                throw new ArgumentException("plainData.Length must be a multiple of 2.");
            }

            Int64 iBuffer = 0;
            int iBufferCount = 0;

            using (MemoryStream msEncodedOutput = new MemoryStream())
            {
                //Write Final Output Size 1st
                msEncodedOutput.Write(BitConverter.GetBytes(plainData.Length), 0, 4);

                //Begin Writing Encoded Data Stream
                iBuffer = 0;
                iBufferCount = 0;
                for (int i = 0; i < plainData.Length; i += 2)
                {
                    Node FoundLeaf = HuffmanTree[(plainData[i] << 8) | plainData[i + 1]];

                    //How many bits are we adding?
                    iBufferCount += FoundLeaf.CodeLength;

                    //Shift the buffer
                    iBuffer = (iBuffer << FoundLeaf.CodeLength) | FoundLeaf.Coded;

                    //Are there at least 8 bits in the buffer?
                    while (iBufferCount > 7)
                    {
                        //Write to output
                        int iBufferOutput = (int)(iBuffer >> (iBufferCount - 8));
                        msEncodedOutput.WriteByte((byte)iBufferOutput);
                        iBufferCount = iBufferCount - 8;
                        iBufferOutput <<= iBufferCount;
                        iBuffer ^= iBufferOutput;
                    }
                }

                //Write remaining bits in buffer
                if (iBufferCount > 0)
                {
                    iBuffer = iBuffer << (8 - iBufferCount);
                    msEncodedOutput.WriteByte((byte)iBuffer);
                }
                return msEncodedOutput.ToArray();
            }
        }

        public byte[] Decode(byte[] bInput)
        {
            long iInputBuffer = 0;
            int iBytesWritten = 0;

            //Establish Output Buffer to write unencoded data to
            byte[] bDecodedOutput = new byte[BitConverter.ToInt32(bInput, 0)];

            var current = HuffmanTree[HuffmanTree.Count - 1];

            //Begin Looping through Input and Decoding
            iInputBuffer = 0;
            for (int i = 4; i < bInput.Length; i++)
            {
                iInputBuffer = bInput[i];

                for (int bit = 0; bit < 8; bit++)
                {
                    if ((iInputBuffer & 128) == 0)
                    {
                        current = current.Left;
                    }
                    else
                    {
                        current = current.Right;
                    }

                    if (current.IsLeaf)
                    {
                        bDecodedOutput[iBytesWritten++] = current.Uncoded1;
                        bDecodedOutput[iBytesWritten++] = current.Uncoded0;

                        if (iBytesWritten == bDecodedOutput.Length)
                        {
                            return bDecodedOutput;
                        }

                        current = HuffmanTree[HuffmanTree.Count - 1];
                    }

                    iInputBuffer <<= 1;
                }
            }
            throw new Exception();
        }

        private static void EncodeTree(Node node, int depth, uint value)
        {
            if (node != null)
            {
                if (node.IsLeaf)
                {
                    node.CodeLength = depth;
                    node.Coded = value;
                }
                else
                {
                    depth++;
                    value <<= 1;
                    EncodeTree(node.Left, depth, value);
                    EncodeTree(node.Right, depth, value | 1);
                }
            }
        }

        private void BuildTree(long[] frequencies)
        {
            var tiny = 0.1 / ushort.MaxValue;
            var fraction = 0.0;

            SortedDictionary<double, Node> trees = new SortedDictionary<double, Node>();
            for (int i = 0; i <= ushort.MaxValue; i++)
            {
                var leaf = new Node()
                {
                    Uncoded1 = (byte)(i >> 8),
                    Uncoded0 = (byte)(i & 255),
                    Frequency = frequencies[i]
                };
                HuffmanTree.Add(leaf);
                if (leaf.Frequency > 0)
                {
                    trees.Add(leaf.Frequency + (fraction += tiny), leaf);
                }
            }

            while (trees.Count > 1)
            {
                var e = trees.GetEnumerator();
                e.MoveNext();
                var first = e.Current;
                e.MoveNext();
                var second = e.Current;

                //Join smallest two nodes
                var NewParent = new Node();
                NewParent.Frequency = first.Value.Frequency + second.Value.Frequency;
                NewParent.Left = first.Value;
                NewParent.Right = second.Value;

                HuffmanTree.Add(NewParent);

                //Remove the two that just got joined into one
                trees.Remove(first.Key);
                trees.Remove(second.Key);

                trees.Add(NewParent.Frequency + (fraction += tiny), NewParent);
            }
        }
    }
}
Usage examples:
To create the dictionary from sample data:
var freqs = Huffman16.GetFrequencies(File.ReadAllBytes(@"D:\nodes"), true);
To initialize an encoder with a given dictionary:
var huff = new Huffman16(freqs);
And to do some compression:
var encoded = huff.Encode(raw);
And decompression:
var raw = huff.Decode(encoded);
The hard part in my mind is how you build your static dictionary. You don't want to use the LZW dictionary built from your sample data. LZW wastes a bunch of time learning, since it can't build the dictionary faster than the decompressor can (a token will only be used the second time it's seen by the compressor, so the decompressor can add it to its dictionary the first time it's seen). The flip side of this is that it's adding things to the dictionary that may never get used, just in case the string shows up again. (e.g., to have a token for 'stackoverflow' you'll also have entries for 'ac', 'ko', 've', 'rf', etc...)
However, looking at the raw token stream from an LZ77 algorithm could work well. You'll only see tokens for strings seen at least twice. You can then build a list of the most common tokens/strings to include in your dictionary.
Once you have a static dictionary, using LZW sans the dictionary update seems like an easy implementation, but to get the best compression I'd consider a static Huffman table instead of the traditional 12-bit fixed-size token (as George Phillips suggested). An LZW dictionary will burn tokens for all the sub-strings you may never actually encode (e.g., if you can encode 'stackoverflow', there will be tokens for 'st', 'sta', 'stac', 'stack', 'stacko' etc.).
At this point it really isn't LZW - what makes LZW clever is how the decompressor can build the same dictionary the compressor used only seeing the compressed data stream. Something you won't be using. But all LZW implementations have a state where the dictionary is full and is no longer updated, this is how you'd use it with your static dictionary.
LZW adds to the dictionary during decompression to ensure it has the same dictionary state as the compressor. Otherwise the decoding would not function properly.
However, if you were in a state where the dictionary was fixed then, yes, you would not need to add new codes.
Your approach will work reasonably well and it's easy to use existing tools to prototype and measure the results. That is, compress the example file and then the example and test data together. The size of the latter less the former will be the expected compressed size of a block.
LZW is a clever way to build up a dictionary on the fly and gives decent results. But a more thorough analysis of your typical data blocks is likely to generate a more efficient dictionary.
There's also room for improvement in how LZW represents compressed data. For instance, each dictionary reference could be Huffman encoded to a closer to optimal length based on the expected frequency of their use. To be truly optimal the codes should be arithmetic encoded.
I would look at your data to see if there's an obvious reason it's so easy to compress. You might be able to do something much simpler than LZ78. I've done both LZ77 (lookback) and LZ78 (dictionary).
Try running an LZ77 on your data. There's no dictionary with LZ77, so you could use a library without alteration. Deflate is an implementation of LZ77.
Your idea of using a common dictionary is a good one, but it's hard to know whether the files are similar to each other or just self-similar without doing some tests.
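As a concrete illustration of the library route: zlib-style Deflate (mentioned above as an LZ77 implementation) also supports a preset dictionary that is stored separately from each compressed block, which is close to what the question asks for. Below is a sketch using Java's built-in java.util.zip bindings; the class name and the sample data are my own illustration, and the dictionary here is just an example rather than a tuned one.

import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class PresetDictionaryDemo {

    // Compress one small block against a preset dictionary. The dictionary
    // itself is not stored in the output; both sides must already have it.
    static byte[] compress(byte[] block, byte[] dictionary) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setDictionary(dictionary);
        deflater.setInput(block);
        deflater.finish();
        byte[] buf = new byte[block.length + 64]; // enough for small blocks plus overhead
        int n = deflater.deflate(buf);
        deflater.end();
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }

    static byte[] decompress(byte[] compressed, byte[] dictionary, int maxSize) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] buf = new byte[maxSize];
        int n = inflater.inflate(buf);
        if (n == 0 && inflater.needsDictionary()) {
            // zlib signals that the stream was produced with a preset dictionary
            inflater.setDictionary(dictionary);
            n = inflater.inflate(buf);
        }
        inflater.end();
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] dictionary = "typical field names and values taken from a sample file".getBytes(StandardCharsets.UTF_8);
        byte[] block = "typical field names: 42".getBytes(StandardCharsets.UTF_8);
        byte[] packed = compress(block, dictionary);
        byte[] unpacked = decompress(packed, dictionary, 1024);
        System.out.println(packed.length + " bytes compressed, round-trip ok: "
                + new String(unpacked, StandardCharsets.UTF_8).equals(new String(block, StandardCharsets.UTF_8)));
    }
}

Each compressed block records only a short checksum of the dictionary, not the dictionary itself, so hundreds of millions of small blocks can share one dictionary that is distributed once.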
The right track is to use a library -- almost every modern language has a compression library. C#, Python, Perl, Java, VB.NET, whatever you use.
LZW saves some space by making the dictionary depend on previous inputs. It has an initial dictionary, and when you decompress something, you add entries to the dictionary -- so the dictionary grows. (I am omitting some details here, but this is the general idea.)
You can omit this step by supplying the whole (complete) dictionary as the initial one, but this would cost some space.
I find this approach quite interesting for repeated log entries, and it is something I would like to explore.
Can you share the compression statistics for using this approach for your use case so I can compare it with other alternatives?
Have you considered having the common dictionary grow over time or is that not a valid option?