Dot "cpbitmap" Images (imgaename.cpbitmap) - image

How can I convert .cpbitmap images to .png or another common image format?
Thank you :)

Actually, the idea of writing Python code for this is excellent, since it is easier to run than some Xcode project.
As the previous author stated, he did not test his code, but I did. What I found is that it produces an image in which the RED and BLUE components are swapped.
That is why I am posting a corrected version of the code here:
#!/usr/bin/python
from PIL import Image,ImageOps
import struct
import sys
if len(sys.argv) < 3:
    print "Need two args: filename and result_filename\n"
    sys.exit(0)

filename = sys.argv[1]
result_filename = sys.argv[2]

with open(filename, 'rb') as f:
    contents = f.read()

# the last 24 bytes hold six little-endian ints; width and height are the 2nd and 3rd
unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width, height), contents, 'raw', 'RGBA', 0, 1)

# the pixel data is really BGRA, so swap the red and blue channels
r, g, b, a = im.split()
im = Image.merge('RGBA', (b, g, r, a))
im.save(result_filename)
Put this code in a file named decode_cpbitmap, run
chmod 755 decode_cpbitmap
to make it executable, and then call it as follows:
./decode_cpbitmap input_filename output_filename
where input_filename is the '*.cpbitmap' file you already have and want to decode, and output_filename is some name ending in .png (it will be created by this script).
You may get the error
ImportError: No module named PIL
In that case you need to install the PIL Python module; I won't explain how to install Python modules here, since you can find that elsewhere.
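A side note, not from the original answer: if what you end up installing is the Pillow fork of PIL, Image.fromstring is gone from recent releases (removed around Pillow 3.0). A minimal sketch of the adjustment, assuming Pillow and touching only the reading/decoding lines of the script above:
# Assumption: Pillow >= 3.0, where fromstring() was removed; its replacement
# frombytes() takes the same arguments. Only these lines of the script change.
from PIL import Image
import struct

with open(filename, 'rb') as f:
    contents = f.read()
unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.frombytes('RGBA', (width, height), contents, 'raw', 'RGBA', 0, 1)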

Here's a quick Python program to do it. I wasn't able to test it because I don't have any .cpbitmap images to use.
from PIL import Image
import struct
with open(filename) as f:
    contents = f.read()
unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width,height), contents, 'raw', 'RGBA', 0, 1)
im.save('converted.png')

I tried to convert images from iOS 11, and these scripts do not work there. Nowadays the length of each row is rounded up to a multiple of 8 pixels by padding.
So I wrote a Node.js script instead. Before running it, install the jimp module (npm install jimp). Tested on Node.js v9.2.0 and jimp 0.2.28.
const fs = require('fs')
const util = require('util')
const Jimp = require('jimp')
const main = async () => {
    if (process.argv.length != 4) {
        console.log('Need two args: input filename and result filename')
        console.log(`Example: ${process.argv[0]} ${process.argv[1]} HomeBackground.cpbitmap HomeBackground.png`)
        return
    }
    const inpFileName = process.argv[2]
    const outFileName = process.argv[3]

    const readFile = util.promisify(fs.readFile)
    const cpbmp = await readFile(inpFileName)

    // width and height sit in the trailing block of 32-bit little-endian ints
    const width = cpbmp.readInt32LE(cpbmp.length - 4 * 5)
    const height = cpbmp.readInt32LE(cpbmp.length - 4 * 4)
    console.log(`Image height: ${height}, width: ${width}`)

    const image = await new Jimp(width, height, 0x000000FF)

    // each source row is padded to a multiple of 8 pixels, 4 bytes per pixel
    const calcOffsetInCpbmp = (x, y, width) => {
        const lineSize = Math.ceil(width / 8) * 8
        return x * 4 + y * lineSize * 4
    }
    const calcOffsetInImage = (x, y, width) => {
        return x * 4 + y * width * 4
    }
    // swap the first and third byte of each pixel (BGRA in the file vs RGBA in Jimp)
    const swapRBColors = (c) => {
        const r = c & 0xFF
        const b = (c & 0xFF0000) >> 16
        c &= 0xFF00FF00
        c |= r << 16
        c |= b
        return c
    }

    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const color = cpbmp.readInt32LE(calcOffsetInCpbmp(x, y, width))
            image.bitmap.data.writeInt32LE(swapRBColors(color), calcOffsetInImage(x, y, width))
        }
    }

    await image.write(outFileName)
    console.log('Done')
}
main()
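For readers who prefer Python, here is a rough sketch of the same idea, untested on my side, assuming Python 3 with Pillow installed and the same layout the Node.js script relies on (BGRA pixels, rows padded to a multiple of 8 pixels, width and height in the trailing 24-byte block):
#!/usr/bin/python3
# Rough sketch, not a tested tool: decode an iOS 11 .cpbitmap with Pillow.
# Assumes BGRA pixel data, rows padded to a multiple of 8 pixels, and the
# width/height stored in the trailing block of six 32-bit little-endian ints.
import struct
import sys
from PIL import Image

with open(sys.argv[1], 'rb') as f:
    contents = f.read()

unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
stride = -(-width // 8) * 8 * 4  # row length in bytes, padded to 8 pixels

# The 'raw' decoder's 'BGRA' mode swaps red and blue for us, and the stride
# argument skips the padding bytes at the end of each row.
im = Image.frombytes('RGBA', (width, height), contents, 'raw', 'BGRA', stride, 1)
im.save(sys.argv[2])
Run it with the input and output filenames as arguments, as with the shell-callable script above.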

Related

Why is this shell script to collect data from a website not working

I want to collect wind data from this URL, http://nomads.ncep.noaa.gov, using a shell script for an OpenGL application. Online it was advised to install ecCodes, which I did (at least the package; I am honestly not sure whether it is working correctly). This is my shell script so far:
#!/bin/bash
GFS_DATE="20161120"
GFS_TIME="06"; # 00, 06, 12, 18
RES="1p00" # 0p25, 0p50 or 1p00
BBOX="leftlon=0&rightlon=360&toplat=90&bottomlat=-90"
LEVEL="lev_10_m_above_ground=on"
GFS_URL="http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_${RES}.pl?file=gfs.t${GFS_TIME}z.pgrb2.${RES}.f000&${LEVEL}&${BBOX}&dir=%2Fgfs.${GFS_DATE}${GFS_TIME}"
curl "${GFS_URL}&var_UGRD=on" -o utmp.grib
curl "${GFS_URL}&var_VGRD=on" -o vtmp.grib
grib_set -r -s packingType=grid_simple utmp.grib utmp.grib
grib_set -r -s packingType=grid_simple vtmp.grib vtmp.grib
printf "{\"u\":`grib_dump -j utmp.grib`,\"v\":`grib_dump -j vtmp.grib`}" > tmp.json
rm utmp.grib vtmp.grib
DIR="c:\\Users\My Name\Documents\CGTutorial\CGTutorial - Minimal"
node ${DIR}/prepare.js ${1}/${GFS_DATE}${GFS_TIME}
rm tmp.json
There is also a JavaScript file called prepare.js, which looks like this:
const PNG = require('pngjs').PNG;
const fs = require('fs');
const data = JSON.parse(fs.readFileSync('tmp.json'));
const name = process.argv[2];
const u = data.u;
const v = data.v;
const width = u.Ni;
const height = u.Nj - 1;
const png = new PNG({
    colorType: 2,
    filterType: 4,
    width: width,
    height: height
});

for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
        const i = (y * width + x) * 4;
        const k = y * width + (x + width / 2) % width;
        png.data[i + 0] = Math.floor(255 * (u.values[k] - u.minimum) / (u.maximum - u.minimum));
        png.data[i + 1] = Math.floor(255 * (v.values[k] - v.minimum) / (v.maximum - v.minimum));
        png.data[i + 2] = 0;
        png.data[i + 3] = 255;
    }
}

png.pack().pipe(fs.createWriteStream(name + '.png'));

fs.writeFileSync(name + '.json', JSON.stringify({
    source: 'http://nomads.ncep.noaa.gov',
    date: formatDate(u.dataDate + '', u.dataTime),
    width: width,
    height: height,
    uMin: u.minimum,
    uMax: u.maximum,
    vMin: v.minimum,
    vMax: v.maximum
}, null, 2) + '\n');

function formatDate(date, time) {
    return date.substr(0, 4) + '-' + date.substr(4, 2) + '-' + date.substr(6, 2) + 'T' +
        (time < 10 ? '0' + time : time) + ':00Z';
}
If I run the shell script, a window opens and closes again within a second.
I have been trying to change different parts of the script for days, and to reinstall ecCodes, but I think the problem is that my understanding of how these scripts should work isn't good enough. Could someone help me figure out what is going wrong here?

Experimenting with inverting pictures in Octave

This is my first time using Octave, experimenting with inverting an image. My file is named LinearAlgebraLab1.m, and when I run it with Octave I get the error "error: no such file, '/home/LinearAlgebraLab1.m'".
However, before this, I was getting an error that my .jpg file couldn't be found. What should I change so that Octave runs my script correctly without any errors?
%% import image
C = imread('MonaLisa2.jpg');
%% set slopes and intercepts for color transformation
redSlope = 1;
redIntercept = -80;
greenSlope = -.75;
greenIntercept = 150;
blueSlope = -.50;
blueIntercept = 200;
%%redSlope = 1;
%%redIntercept = -80;
%%greenSlope = -.75;
%%greenIntercept = 150;
%%blueSlope = -.50;
%%blueIntercept = 200; redSlope = 1;
%% store RGB channels from image separately
R = C(:,:,1);
G = C(:,:,2);
B = C(:,:,3);
C2 = C;
S=size(C);
m=S(1,1);
n=S(1,2);
%h=S(1,3);
%% change red channel
M = R;
%%M2 = redSlope*cast(M,'double') + redIntercept*ones(786,579);
M2 = redSlope*cast(M,'double') + redIntercept*ones(m,n);
C2(:,:,1) = M2;
%% change green channel
M = G;
M2 = greenSlope*cast(M,'double') + greenIntercept*ones(m,n);
C2(:,:,2) = M2;
%% change blue channel
M = B;
M2 = blueSlope*cast(M,'double') + blueIntercept*ones(m,n);
C2(:,:,3) = M2;
%% visualize new image
image(C2)
axis equal tight off
set(gca,'position',[0 0 1 1],'units','normalized')

I use multiprocessing to search for images, but I get an error

import cv2
import numpy as np
from PIL import ImageGrab
from multiprocessing import Pool
def getGrayBase():
    base = ImageGrab.grab(bbox=(0, 0, 1920, 1080))
    base.save(f'Screen\\base.png')
    base = f'Screen\\base.png'
    rgbBase = cv2.imread(base)
    grayBase = cv2.cvtColor(rgbBase, cv2.COLOR_BGR2GRAY)
    return grayBase

def findPict(sabjektSearch):
    grayBase = getGrayBase()
    template = cv2.imread(sabjektSearch, 0)
    rezult = {'flag': False, 'x': 0, 'y': 0, 'func': 0, 'scroll': 0}
    # w, h = template.shape[::-1]  # height, width
    # We find the center of the desired image
    M = cv2.moments(template)
    centr_x = int(M['m10'] / M['m00'])
    centr_y = int(M['m01'] / M['m00'])
    # print(f'centr {centr_x, centr_y}')
    rez_search = cv2.matchTemplate(grayBase, template, cv2.TM_CCOEFF_NORMED)
    threshold = 0.8
    # Check if the image you are looking for is in the base
    flag = False
    for i in rez_search:
        if np.amax(rez_search) > threshold:
            flag = True
    rezult['flag'] = flag
    if flag == True:
        loc = np.where(rez_search >= threshold)
        for pt in zip(*loc[::-1]):
            x = int(pt[0]) + centr_x
            y = int(pt[1]) + centr_y
        rezult['x'] = x
        rezult['y'] = y
    nameFunc = sabjektSearch.split('\\')[-1]
    rez = f'{nameFunc} - YES ' if flag else f' {nameFunc} - no '
    print(rez, end=" ")
    return rezult

arr_pick_all = [f'Screen\p1.png', f'Screen\\p2', f'Screen\p3.png', f'Screen\p4.PNG', f'Screen\p5.png', f'Screen\\p6.png']

if __name__ == '__main__':
    p = Pool(len(arr_pick_all))
    rezult = p.map(findPict, arr_pick_all)
    print()
    print(rezult.get(timeout=1))
    p.close()
    p.join()
It finds 1-3 pictures, but with a larger number this error occurs:
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

Convolution of image with kernel gives white output

I have code that filters an image with a 3x3 Gaussian kernel, but the output is white. The GaussianFilter function works (its output is correct), but there is a problem in the convolution function.
What could the problem be? I checked the code again but couldn't solve this.
import math
import numpy as np
import cv2
path="funny_hats.jpg"
inputImage = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
def GaussianFilter(img):
    # generating 3x3 kernel
    kernel = np.ones((3,3), dtype='float64')
    size = 3
    mean = int(size/2)
    sigma = 1 # standard deviation is 1
    sumAll = 0
    for i in range(size):
        for j in range(size):
            kernel[i,j] = math.exp(-1* ((math.pow( (i-mean)/sigma, 2.0) + (math.pow((j-mean)/sigma, 2.0)) ) / (2* math.pow(sigma,2)) )) / (sigma * math.pow(2*math.pi, 1/2))
            sumAll += kernel[i,j]
    # normalizing kernel
    for i in range(size):
        for j in range(size):
            kernel[i,j] /= sumAll

    # Filter image with created kernel
    img = convolution(img, kernel) # filtered image

    print(img)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

def convolution(img, dest):
    res = img
    [h,w] = img.shape
    [kh, kw] = dest.shape # kernel shape
    kr = int(kh/2) # kernel radius
    res = np.zeros(img.shape)
    for i in range(0+kr, h-kr):
        for j in range(0+kr, w-kr):
            for k in range(-1 * kr, kr + 1):
                for m in range(-1 * kr, kr + 1):
                    res[i,j] += dest[k,m]*img[i+k, j+m]
    res[:,0] = res[:, 1]
    res[:,w-1] = res[:, w-2]
    res[0,:] = res[1,:]
    res[h-1,:] = res[h-2,:]
    return res

GaussianFilter(inputImage)
res = img
This is wrong. You must create an image where all pixels are zero (black).
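As a rough illustration (my own sketch, not the original code): start the accumulator as an all-zero float array, as the answer says. One additional assumption on my part: cv2.imshow treats floating-point images as lying in the 0-1 range, so a float64 result with values up to 255 will also render as white unless it is converted back to uint8 before display.
import numpy as np
import cv2

# Sketch only: zero-initialized accumulator plus conversion back to uint8 before
# display. A fixed 3x3 blur kernel stands in for the computed Gaussian above.
img = cv2.imread("funny_hats.jpg", cv2.IMREAD_GRAYSCALE).astype('float64')
kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype='float64') / 16.0

h, w = img.shape
res = np.zeros((h, w), dtype='float64')  # all pixels start at zero (black)
for k in range(-1, 2):
    for m in range(-1, 2):
        res[1:h-1, 1:w-1] += kernel[k+1, m+1] * img[1+k:h-1+k, 1+m:w-1+m]

# convert back to uint8 so imshow does not interpret 0..255 floats as all-white
cv2.imshow('filtered', np.clip(res, 0, 255).astype('uint8'))
cv2.waitKey(0)
cv2.destroyAllWindows()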

Python 2.7 - How to compare two images?

In Python 2.7, I want to compare two images to check whether they are the same. How can I do this? Please show me step by step. Thanks!
There are many ways to do this, using open-source libraries such as OpenCV, scikit-learn, or TensorFlow.
To compare two images, you can do something like Template Matching in OpenCV:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('img.jpg', 0)
img2 = img.copy()
template = cv2.imread('img2.jpg', 0)
w, h = template.shape[::-1]
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED']
for meth in methods:
    img = img2.copy()
    method = eval(meth)

    res = cv2.matchTemplate(img, template, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

    # for TM_SQDIFF and TM_SQDIFF_NORMED the best match is the minimum value
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        top_left = min_loc
    else:
        top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)

    cv2.rectangle(img, top_left, bottom_right, 255, 2)

    plt.subplot(121), plt.imshow(res)
    plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(img, cmap='gray')
    plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
    plt.suptitle(meth)
    plt.show()
or Histogram comparison
import cv2
import numpy as np
base = cv2.imread('test4.jpg')
test1 = cv2.imread('test3.jpg')
test2 = cv2.imread('test5.jpg')
rows,cols = base.shape[:2]
basehsv = cv2.cvtColor(base,cv2.COLOR_BGR2HSV)
test1hsv = cv2.cvtColor(test1,cv2.COLOR_BGR2HSV)
test2hsv = cv2.cvtColor(test2,cv2.COLOR_BGR2HSV)
halfhsv = basehsv[rows/2:rows-1,cols/2:cols-1].copy() # Take lower half of the base image for testing
hbins = 180
sbins = 255
hrange = [0,180]
srange = [0,256]
ranges = hrange+srange # ranges = [0,180,0,256]
ranges=None
histbase = cv2.calcHist(basehsv,[0,1],None,[180,256],ranges)
cv2.normalize(histbase,histbase,0,255,cv2.NORM_MINMAX)
histhalf = cv2.calcHist(halfhsv,[0,1],None,[180,256],ranges)
cv2.normalize(histhalf,histhalf,0,255,cv2.NORM_MINMAX)
histtest1 = cv2.calcHist(test1hsv,[0,1],None,[180,256],ranges)
cv2.normalize(histtest1,histtest1,0,255,cv2.NORM_MINMAX)
histtest2 = cv2.calcHist(test2hsv,[0,1],None,[180,256],ranges)
cv2.normalize(histtest2,histtest2,0,255,cv2.NORM_MINMAX)
for i in xrange(5):
    base_base = cv2.compareHist(histbase, histbase, i)
    base_half = cv2.compareHist(histbase, histhalf, i)
    base_test1 = cv2.compareHist(histbase, histtest1, i)
    base_test2 = cv2.compareHist(histbase, histtest2, i)
    print "Method: {0} -- base-base: {1} , base-test1: {2}, base_test2: {3}".format(i, base_base, base_test1, base_test2)
