creating Initramfs in Yocto - linux-kernel

I wanted to create an initramfs in Yocto, so I created a custom recipe and added the following lines:
require recipes-core/images/core-image-minimal.bb
IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
It built successfully, but I am not sure whether everything works properly.
I guess the kernel and U-Boot also need to be configured.
My question is: does Yocto configure the kernel and U-Boot automatically after seeing
IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}", or should I configure them myself?
Thank you.
Best regards

You can do this by creating a new image-initramfs.bb image recipe and adding:
LICENSE = "CLOSED"
include original-image.bb
IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
PACKAGE_INSTALL = "${IMAGE_INSTALL}"
And then in conf/local.conf
INITRAMFS_IMAGE = "image-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
Where image-initramfs is the initramfs image recipe name.
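Regarding the kernel side of the original question: with INITRAMFS_IMAGE_BUNDLE = "1" the kernel is rebuilt with the initramfs linked into it, but to my understanding the kernel configuration itself still needs initramfs support enabled; U-Boot typically only needs changes if you load the initramfs as a separate image instead of bundling it. A minimal sketch of a configuration fragment you could add to the kernel recipe (the fragment name, and whether your defconfig already sets these options, are assumptions to verify):
# initramfs.cfg -- kernel config fragment (sketch), added via SRC_URI in a kernel bbappend
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_GZIP=y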

Related

How do I enable a crate feature like rand's small_rng?

I'm trying to use rand::SmallRng. The documentation says
This PRNG is feature-gated: to use, you must enable the crate feature small_rng.
I've been searching and can't figure out how to enable "crate features". The phrase isn't even used anywhere in the Rust docs. This is the best I could come up with:
[features]
default = ["small_rng"]
But I get:
Feature default includes small_rng which is neither a dependency nor another feature
Are the docs wrong, or is there something I'm missing?
Specify the dependencies in Cargo.toml like so:
[dependencies]
rand = { version = "0.7.2", features = ["small_rng"] }
Alternatively:
[dependencies.rand]
version = "0.7.2"
features = ["small_rng"]
Both work.
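For completeness, a small usage sketch once the feature is enabled (rand 0.7 API; from_entropy needs rand's default features, which the snippets above keep enabled):
use rand::rngs::SmallRng;
use rand::{Rng, SeedableRng};

fn main() {
    // Only compiles because the small_rng feature is enabled in Cargo.toml.
    let mut rng = SmallRng::from_entropy();
    let x: u32 = rng.gen();
    println!("random u32: {}", x);
}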

Yocto PREMIRROR/SOURCE_MIRROR_URL with url arguments (SAS_TOKEN) possible?

I successfully created a premirror for our Yocto builds on an Azure Storage Blob,
which works if I set the access level to "Blob (Anonymous read)..".
Now I want to keep the blob completely private and access it only via SAS tokens.
SAS_TOKEN = "?sv=2019-12-12&ss=bf&srt=co&sp=rdl&se=2020-08-19T17:38:27Z&st=2020-08-19T09:38:27Z&spr=https&sig=abcdef_TEST"
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "https://somewhere.blob.core.windows.net/our-mirror/downloads/BASENAME${SAS_TOKEN}"
BB_FETCH_PREMIRRORONLY = "1"
In general this works, but Yocto (or, to be exact, the bitbake fetch module) will then try to fetch from https://somewhere.blob.core.windows.net/our-mirror/downloads/bash-5.0.tar.gz%3Fsv%3D2019-12-12%26ss%3Dbf%26srt%3Dco%26sp%3Drdl%26se%3D2020-08-19T17%3A38%3A27Z%26st%3D2020-08-19T09%3A38%3A27Z%26spr%3Dhttps%26sig%3Dabcdef_TEST/bash-5.0.tar.gz
which also percent-encodes the special characters of the parameters, so of course the fetch will fail.
Has anybody solved this or a similar issue already?
Or is it possible to patch files inside the poky layer (namely in ./layers/poky/bitbake/lib/bb/fetch2) without changing them, so I can roll my own encodeurl function there?
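To illustrate the failure (a standalone sketch, not bitbake's actual fetch code): once the SAS token is treated as part of the path rather than as a query string, its ?, & and = characters get percent-encoded, which is exactly what the URL above shows:
# sketch: reproduces the kind of encoding seen in the failing mirror URL
from urllib.parse import quote

sas_token = "?sv=2019-12-12&ss=bf&srt=co&sp=rdl&se=2020-08-19T17:38:27Z"
base = "https://somewhere.blob.core.windows.net/our-mirror/downloads/bash-5.0.tar.gz"

# Encoding the token as a path component yields .../bash-5.0.tar.gz%3Fsv%3D2019-12-12%26ss%3Dbf...
print(base + quote(sas_token, safe=""))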

sparkR 1.4.0 : how to include jars

I'm trying to hook SparkR 1.4.0 up to Elasticsearch using the elasticsearch-hadoop-2.1.0.rc1.jar jar file (found here). It's requiring a bit of hacking together, calling the SparkR:::callJMethod function. I need to get a jobj R object for a couple of Java classes. For some of the classes, this works:
SparkR:::callJStatic('java.lang.Class',
'forName',
'org.apache.hadoop.io.NullWritable')
But for others, it does not:
SparkR:::callJStatic('java.lang.Class',
'forName',
'org.elasticsearch.hadoop.mr.LinkedMapWritable')
Yielding the error:
java.lang.ClassNotFoundException:org.elasticsearch.hadoop.mr.EsInputFormat
It seems like Java isn't finding the org.elasticsearch.* classes, even though I've tried including them with the command line --jars argument, and the sparkR.init(sparkJars = ...) function.
Any help would be greatly appreciated. Also, if this is a question that more appropriately belongs on the actual SparkR issue tracker, could someone please point me to it? I looked and was not able to find it. Also, if someone knows an alternative way to hook SparkR up to Elasticsearch, I'd be happy to hear that as well.
Thanks!
Ben
Here's how I've achieved it:
# environments, packages, etc ----
Sys.setenv(SPARK_HOME = "/applications/spark-1.4.1")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
# connecting Elasticsearch to Spark via ES-Hadoop-2.1 ----
spark_context <- sparkR.init(master = "local[2]", sparkPackages = "org.elasticsearch:elasticsearch-spark_2.10:2.1.0")
spark_sql_context <- sparkRSQL.init(spark_context)
spark_es <- read.df(spark_sql_context, path = "index/type", source = "org.elasticsearch.spark.sql")
printSchema(spark_es)
(Spark 1.4.1, Elasticsearch 1.5.1, ES-Hadoop 2.1 on OS X Yosemite)
The key idea is to link to the ES-Hadoop package and not the jar file, and to use it to create a Spark SQL context directly.
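As a follow-up, once read.df succeeds the Elasticsearch-backed DataFrame can be queried with Spark SQL in the same session. A small sketch using the SparkR 1.4-style API (the table name es_docs is just a placeholder):
# register the DataFrame as a temporary table and query it
registerTempTable(spark_es, "es_docs")
results <- sql(spark_sql_context, "SELECT * FROM es_docs LIMIT 10")
head(results)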

JavaCV IplImage.createFrom hangs in place

Not sure why this is happening. The method:
IplImage.createFrom(image);
is hanging without returning any value. I've tried multiple images and confirmed that they exist. I'm writing an application that harnesses template matching, but this initial step is giving me a headache. Does anyone know why this method would suspend the thread and not return any value? I've done some research and confirmed that my OpenCV path is set up and that all my libraries are properly set up.
Before converting a BufferedImage to an IplImage, we need to create an IplImage that has the same height and width as the BufferedImage. Try this code:
IplImage ipl_image = IplImage.create(buffered_image.getWidth(), buffered_image.getHeight(), IPL_DEPTH_8U, 1);
ipl_image = IplImage.createFrom(buffered_image);
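For context, here is a self-contained sketch of the whole flow; the import paths assume the pre-bytedeco com.googlecode.javacv packages, and the image path is a placeholder:
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import static com.googlecode.javacv.cpp.opencv_core.IPL_DEPTH_8U;

public class Convert {
    public static void main(String[] args) throws Exception {
        // Load the source image first and make sure it actually decoded.
        BufferedImage buffered = ImageIO.read(new File("template.png"));
        if (buffered == null) {
            throw new IllegalStateException("image could not be decoded");
        }
        // Pre-allocate an IplImage of the same size, then convert.
        IplImage ipl = IplImage.create(buffered.getWidth(), buffered.getHeight(), IPL_DEPTH_8U, 1);
        ipl = IplImage.createFrom(buffered);
        System.out.println("converted: " + ipl.width() + "x" + ipl.height());
    }
}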

get the operating system of an Amazon image via Fog

My end goal is to get the operating system of an Amazon image. When I do:
connection = Fog::Compute.new(provider: 'AWS',
aws_access_key_id: 'blah',
aws_secret_access_key: 'thing')
images = connection.describe_images('Owner' => 'self').body['imagesSet']
The data I get back does not include platform, even though this documentation suggests it should. However, I do get values like:
architecture: "x86_64",
imageType: "machine",
kernelId: "aki-825ea7eb",
And if I Google for that kernel ID I find this page saying it's Linux. Is there a way I can pass kernelId to Amazon via Fog and get back data about that kernelId, such as linux?
On a separate note, sometimes my images don't have kernelId, so are there any other fields in a <DescribeImagesResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/"> that are definite indicators of operating system?
Here's a solution using http://thecloudmarket.com, if you have the kernel ID.
Pass the kernel ID into a variable in Ruby and build the image page URL:
ker_id = images.first['kernelId']
url_0 = "http://thecloudmarket.com/image/"
url_1 = ker_id
url_2 = "#/definition"
new_url = url_0 + url_1 + url_2
There are many ways to build this URL; I split it up just to make it easy to read.
Then use nokogiri to parse the webpage and put the image name back into your script.
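A minimal sketch of that Nokogiri step, assuming the image name is exposed in the page markup (the h1 selector is a guess; inspect the actual page):
require 'open-uri'
require 'nokogiri'

# fetch and parse the image page built above
doc = Nokogiri::HTML(URI.open(new_url))
# assumption: the image name appears in the first <h1>; adjust the selector as needed
image_name = doc.at_css('h1') && doc.at_css('h1').text.strip
puts image_name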
I didn't see any other indicators in the documentation.
