Hidden message in a picture - ascii

I'd like to read a hidden message in the following picture:
The message is supposed to look like CTF{Something}.
I've tried to find out how to read it for hours without success.
So far, I've tried to read the RGB values of each cell.
For instance, first cell (1, 1) is rgb(88, 101, 114) or #586572.
First three cells would give: Xer, ddnc, which obviously makes no sense.
Last cell #587c00 rgb(88, 124, 0) is then supposed to be a }.
The only clue I have to solve that is RGB is a kind of ASCII.
Could you help me solve this?

This is an absolute spoiler for the puzzle, but here goes. I did truncate the actual flag out of the message, though, so you'll have some work to do.
With the original 6x7 image in hand (the 192x224 image in the original post can be losslessly downscaled to that), convert it to an uncompressed format such as Netpbm PPM, then simply look at the raw data (as the clue says):
$ convert 9TwkJ-6x7.png 9t.ppm
$ cat 9t.ppm
P6
6 7
255
Yes, decoding the colors as ASCII characters was the solution. [...]
You can get the same result with e.g. Python:
>>> from PIL import Image
>>> im = Image.open("9TwkJ-6x7.png")
>>> im.tobytes()
b'Yes, decoding the colors as ASCII characters was the solution. [...]
A more devilish CTF would have e.g. rotated the source 90 degrees...
As for
For instance, first cell (1, 1) is rgb(88, 101, 114) or #586572. First three cells would give: Xer, ddnc which obviously makes non sense.
that smells like a different color profile wreaking havoc on your data; Yes, deco and Xer, ddnc differ by just one RGB value here and there...
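To make the decoding step concrete, here is a minimal Python sketch of the same idea; the in-memory stand-in image and its payload are mine (for the real puzzle, open "9TwkJ-6x7.png" instead):

```python
from PIL import Image

# Stand-in for the puzzle image: raw RGB bytes that spell an ASCII message.
# For the real puzzle, use Image.open("9TwkJ-6x7.png") instead.
payload = b"CTF{demo_flag1}"              # 5 x 1 pixels x 3 channels = 15 bytes
im = Image.frombytes("RGB", (5, 1), payload)

# The decoding step: every R, G and B byte is one ASCII character.
message = im.tobytes().decode("ascii")
print(message)  # CTF{demo_flag1}
```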

Related

How to see the graphical representation of emoji codepoints in R Studio Windows?

I have a column in a data frame with code points corresponding to emoji.
They look something like this:
1F1E8
I am using the remoji library, but as you can see my code points do not have \U in front of them, which is necessary for the methods of this library, as far as I know.
Example:
#This works
message(sub_emoji("This is silly \U1f626"))
#This does not work
message(sub_emoji("This is silly 1f626"))
The most I've managed is to transform the code points to \\U1f626, but that doesn't work either.
Thanks in advance.
The solution I was trying was to paste the string "\U" at the beginning of the code points, but since \ is an escape character, I couldn't do it directly. With some "tricks", though, it can be done:
I transformed all the code points to the following structure (8 hex digits):
\\U000xxxxx (000 if 5 hex digits in the original code point)
\\U0000xxxx (0000 if 4 hex digits in the original code point)
I have not delved into the implications of padding with 0, but they work the same way, as far as I've tried:
message(sub_emoji("This is silly \U0001f626"))
This is silly 🤦
and
message(sub_emoji("This is silly \U1f626"))
#This is silly 🤦
I "filled" with 0 because I used the function stri_unescape_unicode() to un-escape the code points \\Uxxxxxxxx and get the desired result \Uxxxxxxxx (one \) to pass to sub_emoji().
This function, stri_unescape_unicode(), only gives that result (one \) if the code point has 8 hex digits; I did not study why, I only noticed it by messing around. I also noticed that a lowercase u has a different effect.
For example:
#it does not work
stri_unescape_unicode("\\U1F926")
#[1] NA
#Warning message: .....
stri_unescape_unicode("\\U1F926\\U1F3FB")
#[1] NA
#Warning message: .....
#it works
stri_unescape_unicode("\\U0001F926")
#[1] "\U0001f926"
stri_unescape_unicode("\\U0001F926\\U0001F3FB")
# [1] "\U0001f926\U0001f3fb"
A complete example:
em = stri_unescape_unicode("\\U0001f626")
message(sub_emoji(paste("This is silly", em)))
#This is silly 🤦
emc = stri_unescape_unicode("\\U0001F926\\U0001F3FB")
message(sub_emoji(paste("This is silly", emc)))
#This is silly 🤦🏻
Pay attention to this last emoji: it has a different skin tone, which is the effect of appending the skin-tone modifier U+1F3FB to the base emoji.
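For comparison only (the question is about R): in Python the padding issue doesn't arise, because a bare hex code point string can be converted with chr, no \U escape needed. A minimal sketch; cp_to_char is my own helper name:

```python
def cp_to_char(cp: str) -> str:
    # "1F1E8" -> the actual character; no \U prefix or zero-padding needed
    return chr(int(cp, 16))

print(cp_to_char("1F626"))  # U+1F626, the emoji from the question
print(cp_to_char("1F1E8"))  # U+1F1E8, regional indicator symbol C
```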

How to interpret a binary Bitmap picture, so I know which color on which pixel i change when changing something in the Code?

I just used this code to convert a picture to binary:
import io

tme = input("Name: ")
with io.open(tme, "rb") as se:
    print(se.read())  # the with-block closes the file automatically
Now it looks like this:
5MEMMMMMMMMMMMMM777777777777777777777777777777777\x95\x95\x95\x95\x95\x95\x95\x95\x95\x95\x95\x95\x95MEEMMMMEEMM\x96\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97\x97
And now I want to be able to interpret what this binary data is telling me exactly... I know roughly, but not enough to be able to change anything on purpose. I searched the web but didn't find anything that could help me on that point. Can you tell me how it works, or send me a link where I can read how it's done?
You can't just change random bytes in an image. There's a header at the start with the height and width, possibly a palette, and information about the number of channels and bits per pixel. Then there is the image data, which is often padded and/or compressed.
So you need an imaging library like PIL/Pillow and code something like this:
from PIL import Image

im = Image.open('image.bmp').convert('RGB')
px = im.load()
# Look at pixel [4,4]
print(px[4, 4])
# Make it red
px[4, 4] = (255, 0, 0)
# Save to disk
im.save('result.bmp')
Documentation and examples are available in the Pillow documentation.
The output should be printed in hexadecimal format. The first two bytes of a bitmap file are 'B' and 'M'.
You are printing the content as ASCII, and the start of the output has probably scrolled out of view. Add a print("start") marker to make sure you see the beginning of the output.
import io
import binascii

tme = 'path.bmp'
print("start")  # make sure this line appears in the console output
with io.open(tme, "rb") as se:
    content = se.read()
    print(binascii.hexlify(content))
Now you should see something like
start
b'424d26040100000000003...
42 is the hex value for 'B', 4d is the hex value for 'M', and so on.
The first 14 bytes in the file are the BITMAPFILEHEADER.
The next 40 bytes are the BITMAPINFOHEADER.
The bytes after that are the color table (if any), and finally the actual pixels.
See BITMAPFILEHEADER and BITMAPINFOHEADER
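If you would rather read those header fields programmatically than eyeball the hex dump, struct can unpack them. This is a sketch assuming an uncompressed BMP with the common 40-byte BITMAPINFOHEADER; read_bmp_header is my own name:

```python
import struct

def read_bmp_header(data: bytes) -> dict:
    # BITMAPFILEHEADER (14 bytes): magic "BM", file size, two reserved
    # words, and the offset of the pixel data from the start of the file.
    magic, file_size, _r1, _r2, offset = struct.unpack_from("<2sIHHI", data, 0)
    if magic != b"BM":
        raise ValueError("not a BMP file")
    # First fields of BITMAPINFOHEADER (starts at byte 14): header size,
    # width, height (both signed), colour planes, bits per pixel.
    _size, width, height, _planes, bpp = struct.unpack_from("<IiiHH", data, 14)
    return {"file_size": file_size, "pixel_offset": offset,
            "width": width, "height": height, "bits_per_pixel": bpp}

# Demo on a hand-built header; on a real file, pass open("image.bmp", "rb").read()
demo = struct.pack("<2sIHHI", b"BM", 70, 0, 0, 54) \
     + struct.pack("<IiiHH", 40, 2, 2, 1, 24)
print(read_bmp_header(demo))
```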

Pillow: image.getcolors(maxcolors) returns None unless maxcolors ≥ 148279

I'm using Pillow to read an image, and get all its colours with Image.getcolors(). From the getcolors() reference page:
Image.getcolors(maxcolors=256)
Returns a list of colors used in this image.
Parameters: maxcolors – Maximum number of colors. If this number is exceeded, this method returns None. The default limit is 256 colors.
Returns: An unsorted list of (count, pixel) values.
I am using the following code snippet to load an image, lena.png.
from PIL import Image

def main(filename):
    with Image.open(filename) as image:
        colors = image.getcolors(maxcolors)  # where maxcolors is some value, as explained below
        print(colors)

if __name__ == "__main__":
    main("lena.png")
This will print None if image.getcolors() receives no argument or maxcolors < 148279. If maxcolors >= 148279, the values are printed as expected. The output is along the lines of [(1, (233, 130, 132)), (1, (243, 205, 168)), ... (1, (223, 140, 118))], a list of tuples containing the occurrence count of the colour first and a tuple of the RGB values second. I checked the output, and there is not a single value greater than 255.
When using another image, test.jpg, the same occurs. There seems to be no correlation with filetype or dimensions. Adding image.load() above colors = ... does not change this either. 148279 is a prime, which makes this seem even weirder to me.
Why does getcolors() not work as intended with the default value, and why does it work with 148279?
I think it works correctly, but is maybe not described very well.
If you use ImageMagick on your Lena image, as follows, you will find that your image has exactly 148,279 unique colours in it - see explanatory link here:
identify -format %k lena.png
148279
So, I think the Pillow description means that you get a list of the unique colours only as long as there are no more than the number you specify. You can change the number of colours, also with ImageMagick, to say 32:
convert lena.png -colors 32 lena.png
Then run your Python again, and you will have to make maxcolors 32 or more to get a list of the colours. If you set it to 31 or less, it outputs None.
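If you want the exact unique-colour count without ImageMagick, you can bound maxcolors by the pixel count, since an image can never contain more colours than pixels. A sketch with an in-memory demo image (for the question's case, open "lena.png" instead); unique_colors is my own name:

```python
from PIL import Image

def unique_colors(image):
    # An image has at most width*height distinct colours, so this bound
    # guarantees getcolors() never returns None.
    return image.getcolors(maxcolors=image.width * image.height)

# Demo: a 3x1 image with two distinct colours (red, green, red).
im = Image.frombytes("RGB", (3, 1), b"\xff\x00\x00\x00\xff\x00\xff\x00\x00")
colors = unique_colors(im)
print(colors)  # two (count, colour) tuples, in some order
```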

Other options to resize barcode for zebra printer using ZPL?

I want to print a Code 128 barcode with a Zebra printer, but I just can't get it exactly where I want: the barcode is either too small or too big for the label size of 40x20 mm. Is there anything else I can try besides the ^BY (Bar Code Field Default) module width and ratio?
^XA^PQ2^LH0,0^FS
^MUM
^GB40,20,0.1,B^FS
^FO1.5,4
^BY0.2
^BCN,10,N,N
^FD*030493LEJCG002999*^FS
^FO8,15
^A0N,3,3
^FD*030493LEJCG002830*^FS
^MUD
^XZ
Above script gives me a label that looks like this:
But when I decrease the module width to 0.1 (the lowest), the barcode becomes too small and may be problematic to scan with a hand scanner:
Code-128 is a fixed-ratio code, so you would appear to have the choice of two sizes. You may be able to solve the problem by using a 300dpi printer in place of a 200.
If you can change the format (and I'm intrigued that the barcode and the human-readable line contain different values), then you could save a little by printing one numeric sequence and one alpha sequence, since an even count of numerics will be encoded in code set C, so you'd save one change-alphabet element.
Do you really need the * on each end?
Otherwise, perhaps code 39 (which prints the * if you use the print-interpretation-line option) would suit your purposes better.
Another possibility is to do on-the-fly code-set changes. Try something like:
^XA^PQ2^LH0,0^FS
^MUM
^GB60,20,0.1,B^FS
^FO1.5,4
^BY0.2
^BCN,10,N,N
^FD>:*>5030493>6LEJCG>5002830>6*^FS
^FO8,15
^A0N,3,3
^FD*030493LEJCG002830*^FS
^MUD
^XZ
This will allow fewer symbols to encode your data, if you can structure the content so that all the alpha characters are at one end or the other.
Or (depending on your firmware) you could use automatic mode: ^BCN,10,N,N,N,A

MATLAB - image huffman encoding

I have a homework assignment in which I have to convert some images to grayscale and compress them using Huffman encoding. I converted them to grayscale, and then I tried to compress them, but I get an error. I used code I found online.
Here is the code i'm using:
A=imread('Gray\36.png');
[symbols,p]=hist(A,unique(A))
p=p/sum(p)
[dict,avglen]=huffmandict(symbols,p)
comp=huffmanenco(A,dict)
This is the error I get. It occurs on the second line.
Error using eps
Class must be 'single' or 'double'.
Error in hist (line 90)
bins = xx + eps(xx);
What am I doing wrong?
Thanks.
P.S. How can I find the compression ratio for each image?
The problem is that when you specify the bin locations (the second input argument of 'hist'), they need to be single or double. The vector A itself does not, though. That's nice because sometimes you don't want to convert your whole dataset from an integer type to floating precision. This will fix your code:
[symbols,p]=hist(A,double(unique(A)))
This issue is discussed in more detail elsewhere.
First, try:
whos A
It seems like its type must be single or double. If not, just do A = double(A) after the imread line. It should work that way; however, I'm surprised hist is not doing the conversion...
[EDIT] I have just tested it, and I am right: hist won't work on uint8, but it's fine as soon as I convert my image to double.
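As for the P.S.: the compression ratio is just the original size in bits divided by the encoded size in bits (for an 8-bit grayscale image, 8 bits per pixel before encoding). Here is a sketch of the idea in Python rather than MATLAB; all the names are mine, and the Huffman tree is built only to obtain the code lengths:

```python
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    # Build a Huffman tree over symbol frequencies; return each symbol's
    # code length in bits (the tree depth of its leaf).
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): 1}
    # Heap items: (subtree frequency, tiebreaker, {symbol: depth so far})
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)       # two least-frequent subtrees
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compression_ratio(data: bytes) -> float:
    lengths = huffman_code_lengths(data)
    freq = Counter(data)
    encoded_bits = sum(freq[s] * lengths[s] for s in freq)
    return (8 * len(data)) / encoded_bits    # original bits / encoded bits

# A heavily skewed "image" compresses well: two symbols -> 1 bit per pixel.
pixels = bytes([0] * 900 + [255] * 100)
print(compression_ratio(pixels))  # 8.0
```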
