How can I import images into containerd?

I imported an image with ctr image import alpine.tar, but running ctr images ls afterwards returns an empty list.
I first ran skopeo copy docker://docker.io/library/alpine:latest docker-archive:alpine.tar to produce the tar file.
How can I import this image into containerd so that it shows up?
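One common cause (my own suggestion, not part of the original post) is that ctr is namespace-scoped, so the image may have been imported into a different namespace than the one being listed. A rough troubleshooting sketch:
skopeo copy docker://docker.io/library/alpine:latest docker-archive:alpine.tar
ctr namespaces ls                          # check which namespaces exist (e.g. default, k8s.io)
ctr -n default images import alpine.tar    # import into an explicit namespace
ctr -n default images ls                   # list in that same namespace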

Related

curl command BashOperator in Cloud Composer

I am following the tutorial mentioned in this link: download_rocket_launches.py. Since I am running this in Cloud Composer, I want to use the native path, i.e. /home/airflow/gcs/dags, but it fails with a "path not found" error.
What path can I give for this command to work? Here is the task I am trying to execute:
download_launches = BashOperator(
    task_id="download_launches",
    bash_command="curl -o /tmp/launches.json -L 'https://ll.thespacedevs.com/2.0.0/launch/upcoming'",  # noqa: E501
    dag=dag,
)
This worked on my end:
import json
import pathlib

import airflow.utils.dates
import requests
import requests.exceptions as requests_exceptions
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

dag = DAG(
    dag_id="download_rocket_launches",
    description="Download rocket pictures of recently launched rockets.",
    start_date=airflow.utils.dates.days_ago(14),
    schedule_interval="@daily",
)

download_launches = BashOperator(
    task_id="download_launches",
    bash_command="curl -o /home/airflow/gcs/data/launches.json -L 'https://ll.thespacedevs.com/2.0.0/launch/upcoming' ",  # put a space between the single quote and the double quote
    dag=dag,
)

download_launches
The key was to put a space between the single quote ' and the double quote " at the end of your bash command.
Also, it is recommended to use the data folder for your output file, as stated in the GCP documentation:
gs://bucket-name/data is mapped to /home/airflow/gcs/data and stores the data that tasks produce and use. This folder is mounted on all worker nodes.
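To illustrate the shared data folder (my own addition, not part of the original answer; the check_launches task and the wc command are just examples), a downstream task running on any worker can read the same file through that mount:
check_launches = BashOperator(
    task_id="check_launches",  # hypothetical follow-up task
    bash_command="wc -c /home/airflow/gcs/data/launches.json",  # the data/ mount is visible on every worker
    dag=dag,
)

download_launches >> check_launches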

How do I fix cv2.imwrite so that I don't get an error and a copy of the image saves in my desired folder?

(Beginner)
import cv2
from google.colab.patches import cv2_imshow
from google.colab import files
uploaded = files.upload()
lena.jpg(image/jpeg) - 91814 bytes, last modified: n/a - 100% done
Saving lena.jpg to lena.jpg
img = cv2.imread('lena.jpg', 0)
print(img)
cv2_imshow(img)
cv2.imwrite('C:/Users/borby/Desktop/image',img)
The output shows the image matrix and the image, but afterwards I receive this error:
error Traceback (most recent call last)
in ()
5 cv2_imshow(img)
6
----> 7 cv2.imwrite('C:/Users/borby/Desktop/image',img)
error: OpenCV(4.1.2) /io/opencv/modules/imgcodecs/src/loadsave.cpp:661: error: (-2:Unspecified error) could not find a writer for the specified extension in function 'imwrite_'
How do I fix it so a copy of img saves in the image folder?
When you are working in Google Colab you are working on a virtual machine, so you cannot access your desktop files directly; that is why you used files.upload in the first place, instead of just specifying the path of the image.
So just write:
cv2.imwrite('image.jpg',img)
Now you can access it on the left side and download it from there or you can add this code:
files.download('image.jpg')
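Putting it together (a minimal sketch of the same fix; image.jpg is just an example filename), the end of the notebook becomes:
cv2.imwrite('image.jpg', img)  # write into the Colab VM's working directory; the .jpg extension tells OpenCV which encoder to use
files.download('image.jpg')    # then download the file to your own machine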

Twitch Stream as input for ffmpeg

My objective is to take a Twitch video stream and generate an image sequence from it without having to create an intermediary file. I found out that ffmpeg can take a video and turn it into an image sequence. The ffmpeg website says that its input option can take network streams, although I really can't find any clear documentation for it. I've searched through Stack Overflow and I haven't found any answers either.
I've tried adding the link to the stream:
ffmpeg -i www.twitch.tv/channelName
But the program either reported the error "No such file or directory" or caused a segmentation fault when I added https:// to the link.
I'm also using streamlink, and used it with ffmpeg in a Python script to try the stream URL:
import streamlink
import subprocess
streams = streamlink.streams("http://twitch.tv/channelName")
stream = streams["worst"]
fd = stream.open()
url = fd.writer.stream.url
fd.close()
subprocess.run(['/path/to/ffmpeg', '-i', url], shell=True)
But that is producing the same error as the website URL. I'm pretty new to ffmpeg and streamlink so I'm not sure what I'm doing wrong. Is there a way for me to add a twitch stream to the input for ffmpeg?
I've figured it out. ffmpeg won't pull the online files for you; you have to fetch them yourself. This can be done by issuing a GET request to the stream URL, which returns a playlist containing the addresses of .ts files; curl can then be used to download those files to your drive. Combining this with my image-sequencing goal, the process looks like this in Python:
import streamlink
import subprocess
import requests

if __name__ == "__main__":
    streams = streamlink.streams("http://twitch.tv/twitchplayspokemon")
    stream = streams["worst"]
    fd = stream.open()
    url = fd.writer.stream.url
    fd.close()
    res = requests.get(url)
    tsFiles = list(filter(lambda line: line.startswith('http'), res.text.splitlines()))
    print(tsFiles)
    for i, ts in enumerate(tsFiles):
        vid = 'vid{}.ts'.format(i)
        process = subprocess.run(['curl', ts, '-o', vid])
        process = subprocess.run(['ffmpeg', '-i', vid, '-vf', 'fps=1', 'out{}_%d.png'.format(i)])
It's not a perfect answer; you still have to create the intermediary video files, which I was hoping to avoid. Maybe there's a better and faster answer, but this will suffice.

How to read jpeg image with Adobe RGB colorspace in OpenCV?

I am trying to read and write JPEGs with the Adobe RGB colorspace in OpenCV. OpenCV assumes the JPEG has the sRGB colorspace, and when displaying or writing to file, the image loses some of its color intensity. I found this intensity loss was due to the colorspace difference, thanks to the answers given to my previous question.
Is there any way I can make OpenCV read the Adobe RGB colorspace without casting it to sRGB?
Some information that is hopefully useful for anyone looking for a work-around for dealing with ICC and other profiles...
You can see what profiles are present in an image using ImageMagick which is installed on most Linux distros and is available for macOS and Windows. In the Terminal, or Command Prompt on Windows, run:
magick identify -verbose frog.jpg | grep 'Profile-.*bytes'
Profile-icc: 578 bytes
That tells you this image has a 578 byte ICC profile embedded.
If you are on Windows and don't have grep, you can equally use the following, though you may need to double up the percent sign, or prefix it with a caret (^) or somehow escape it:
magick identify -format "%[profiles]" frog.jpg
icc
You can extract that profile from the image, using this command:
magick frog.jpg frog.icc
And, you'll get a 578 byte ICC profile:
ls -l *icc
-rw-r--r-- 1 mark staff 578 24 Apr 10:36 frog.icc
You can check that the profile looks correct using the file command:
file *icc
frog.icc: ColorSync color profile 2.1, type ADBE, RGB/XYZ-mntr device by ADBE, 560 bytes, 11-8-2000 19:51:59 "Adobe RGB (1998)"
You can apply that profile to some other file like this:
magick other.jpg -profile "icc:frog.icc" otherWithProfile.jpg
Once you have extracted the profile using the above method, you can apply it to an image that you plan to use with OpenCV using PIL/Pillow's ImageCMS Module.
For that, I think you need to use these steps or something very similar, though I have not tested it:
from PIL import Image, ImageCms
import numpy as np
# Open frog with PIL/Pillow
im = Image.open('frog.jpg')
# Load the extracted Adobe RGB profile and build a transform to sRGB
# (input profile comes first, output profile second)
iccp = ImageCms.getOpenProfile("frog.icc")
rgbp = ImageCms.createProfile("sRGB")
icc2rgb = ImageCms.buildTransformFromOpenProfiles(iccp, rgbp, "RGB", "RGB")
result = ImageCms.applyTransform(im, icc2rgb)
You should then be able to convert the resulting image to a Numpy array that OpenCV can work with using:
OpenCVim = np.array(result)
and remember to then convert from RGB ordering to BGR with cv2.cvtColor().
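For completeness (my own addition, not in the original answer; it assumes cv2 has been imported), those two steps would look something like:
import cv2
OpenCVim = cv2.cvtColor(np.array(result), cv2.COLOR_RGB2BGR)  # PIL gives RGB order, OpenCV expects BGR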
Rather than detect and extract the ICC profile with ImageMagick, you could equally use PIL/Pillow like this:
from PIL import Image
im = Image.open('frog.jpg')
# Now look at "im.info"
{'jfif': 257,
'jfif_version': (1, 1),
'dpi': (72, 72),
'jfif_unit': 1,
'jfif_density': (72, 72),
'icc_profile': b'\x00\x00\x020ADBE\x02\x10\x00\x00mntrRGB XYZ \x07\xd0\x00\x08\x00\x0b\x00\x13\x003\x00;acspAPPL\x00\x00\x00\x00none\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf6\xd6\x00\x01\x00\x00\x00\x00\xd3-ADBE\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ncprt\x00\x00\x00\xfc\x00\x00\x002desc\x00\x00\x010\x00\x00\x00kwtpt\x00\x00\x01\x9c\x00\x00\x00\x14bkpt\x00\x00\x01\xb0\x00\x00\x00\x14rTRC\x00\x00\x01\xc4\x00\x00\x00\x0egTRC\x00\x00\x01\xd4\x00\x00\x00\x0ebTRC\x00\x00\x01\xe4\x00\x00\x00\x0erXYZ\x00\x00\x01\xf4\x00\x00\x00\x14gXYZ\x00\x00\x02\x08\x00\x00\x00\x14bXYZ\x00\x00\x02\x1c\x00\x00\x00\x14text\x00\x00\x00\x00Copyright 2000 Adobe Systems Incorporated\x00\x00\x00desc\x00\x00\x00\x00\x00\x00\x00\x11Adobe RGB (1998)\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00XYZ \x00\x00\x00\x00\x00\x00\xf3Q\x00\x01\x00\x00\x00\x01\x16\xccXYZ \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00curv\x00\x00\x00\x00\x00\x00\x00\x01\x023\x00\x00curv\x00\x00\x00\x00\x00\x00\x00\x01\x023\x00\x00curv\x00\x00\x00\x00\x00\x00\x00\x01\x023\x00\x00XYZ \x00\x00\x00\x00\x00\x00\x9c\x18\x00\x00O\xa5\x00\x00\x04\xfcXYZ \x00\x00\x00\x00\x00\x004\x8d\x00\x00\xa0,\x00\x00\x0f\x95XYZ \x00\x00\x00\x00\x00\x00&1\x00\x00\x10/\x00\x00\xbe\x9c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'}
Keywords: Python, ImageMagick, image, image processing, profile, ICC profile, extract, insert, apply, transform, PIL, Pillow, OpenCV, CMS, pyCMS.

jpg won't optimize (jpegtran, jpegoptim)

I have an image and it's a jpg.
I tried running through jpegtran with the following command:
$ jpegtran -copy none -optimize image.jpg > out.jpg
The file is written, but the image seems unmodified (no size change).
I tried jpegoptim:
$ jpegoptim image.jpg
image.jpg 4475x2984 24bit P JFIF [OK] 1679488 --> 1679488 bytes (0.00%), skipped.
I get the same results when I use --force with jpegoptim, except that it reports the file as optimized but there is no change in file size.
Here is the image in question: http://i.imgur.com/NAuigj0.jpg
But I can't seem to get it to work with any other jpegs I have either (only tried a couple though).
Am I doing something wrong?
I downloaded your image from imgur, but the size is 189,056 bytes. Is it possible that imgur did something to your image?
Anyway, I managed to optimize it to 165,920 bytes using Leanify (I'm the author) and it's lossless.
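For reference (my own addition, not part of the original answer; it assumes Leanify is built and on your PATH), the tool is run simply as:
leanify image.jpg   # optimizes the file in place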
