SWUpdate multiple bootenv sections - embedded-linux

I use SWUpdate to update different hardware revisions of the same device with a double-copy strategy.
The bootloader environment of all of them looks very similar. However, I have to set the mmc partition to boot from depending on the active copy, and the boot_file depending on the hardware revision.
To keep the sw-description file as comprehensible as possible and easy to maintain, I would like to set a "basic" boot environment for all devices in a first step, and in a second step overwrite some variables depending on hardware revision and active copy:
software =
{
    version = "1.1";
    hardware-compatibility = ["0.1", "1.0"];
    device1 =
    {
        copy-1:
        {
            images:
            (
                {
                    filename = "rootfs.ext3.gz";
                    device = "/dev/mmcblk0p3";
                    compressed = true;
                },
                {
                    filename = "u-boot-env-base"; # basic boot environment
                    type = "uboot";
                }
            );
            bootenv: # device-specific boot variables
            (
                {
                    name = "boot_file";
                    value = "uImage1";
                },
                {
                    name = "mmcpart";
                    value = "3";
                }
            );
        }
    }
}
While parsing, both bootloader environments are reported, but apparently only one is applied, or both are applied in the wrong order: when checking via fw_printenv, the "u-boot-env-base" contents are unaltered, without the overrides from the bootenv section.
I am using SWUpdate v2018.11.0 with U-Boot 2018.09. I feel that I had this working in an older setup (SWUpdate 2016).

I raised this question on the mailing list, and Stefano Babic, SWUpdate developer and maintainer, answered it; I am just summarizing his answer here.
What I described is the desired behaviour: SWUpdate does not foresee setting bootloader variables twice during an update. U-Boot variables defined in a file take priority over the name-value pairs in the bootenv section, because the file is processed at the very end of the update. The solution in my case is to set the variables only in the bootenv section.
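For illustration, a minimal sketch of what the copy-1 section could look like with the base environment folded into bootenv (the bootcmd entry is a hypothetical placeholder for whatever "u-boot-env-base" contained):

bootenv: # base environment plus device-specific variables, applied in one pass
(
    {
        name = "bootcmd"; # hypothetical entry taken over from u-boot-env-base
        value = "run mmcboot";
    },
    {
        name = "boot_file";
        value = "uImage1";
    },
    {
        name = "mmcpart";
        value = "3";
    }
);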

Packer with vagrant post-processor "ovf file couldn't be found"

I'm new to Packer. I've heard that you can add a Vagrant post-processor to get an easy VM to test your new image in. Based on the examples and such, I thought the code below would work; however, I get this error:
* Post-processor failed: ovf file couldn't be found
Here's my packer config/code.
source "digitalocean" "test" {
image = "ubuntu-20-10-x64"
region = "nyc1"
size = "s-1vcpu-1gb"
snapshot_name = "me-image-{{isotime \"2006-01-02T15:04\"}}"
snapshot_regions = [
"nyc1", "sgp1", "lon1", "nyc3", "ams3", "fra1", "tor1", "sfo2", "blr1",
"sfo3"
]
tags = ["delete"]
ssh_username = "root"
}
# a build block invokes sources and runs provisioning steps on them.
build {
  sources = ["source.digitalocean.test"]

  provisioner "file" {
    source      = "jump_host"
    destination = "/tmp"
  }

  post-processor "vagrant" {
    keep_input_artifact = true
    provider_override   = "virtualbox"
    output              = "out.box"
  }
}
My packer version is 1.6.6
My vagrant version is 2.2.10
Had the same(ish) issue - found the answer by brute force/chance
So I'm in the same boat as you, but I managed to find the hint for my solution here.
Caveat: I'm working with an exported .vmdk, so this may not be a solution for you, since you're looking for a way to get it straight from DigitalOcean.
The Hint
build {
  sources = ["source.null.autogenerated_1"]

  post-processor "shell-local" {
    inline = ["echo Doing stuff..."]
  }

  post-processors {
    post-processor "vagrant" {
      include              = ["image.iso"]   # <-- the hint
      output               = "proxycore_{{.Provider}}.box"
      vagrantfile_template = "vagrantfile.tpl"
    }
    post-processor "vagrant-cloud" {
      access_token = "${var.cloud_token}"
      box_tag      = "hashicorp/precise64"
      version      = "${local.version}"
    }
  }
}
The include option isn't listed on the Vagrant Post-Processor page, but it is on the Vagrant Cloud Post-Processor page. I just decided to try my luck, and it worked.
Working Example
source "null" "example" {
communicator = "none"
}
build {
sources = ["source.null.example"]
post-processor "artifice" {
files = ["example-disk001.vmdk", "example.ovf"]
keep_input_artifact = true
}
post-processor "vagrant" {
include = ["example-disk001.vmdk", "example.ovf"]
keep_input_artifact = true
provider_override = "virtualbox"
}
}
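Assuming the template is saved as example.pkr.hcl (a hypothetical name) next to the exported example-disk001.vmdk and example.ovf files, it can be exercised with a normal validate/build cycle:

packer validate example.pkr.hcl
packer build example.pkr.hcl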
TL;DR: it's not possible.
What I wanted Packer to do was build something for DigitalOcean and then give me a copy so I could test it without paying for a DigitalOcean VM and without needing internet. That isn't possible, and after some reflection it makes sense why.
DigitalOcean isn't just downloading the Ubuntu 20 ISO and throwing it on their servers. They configure and change the image so it's optimized for their hardware. Expecting their special images to run on some standard VM on consumer hardware isn't realistic. Plus, I'm not sure there's even a way to download a snapshot from DO.
But in trying to do this I also kind of missed the entire point of Vagrant. If I'm testing a DigitalOcean image, I will always need to connect to and pay for DigitalOcean. Vagrant is designed to make it easy for me to do that without having to click through the interface every single time. So I shouldn't even be trying to get this onto my home computer.
PS: Thank you so much @RedGrin-Grumble for taking the time to add to this months-old post.

Enabling Closed-Display Mode w/o Meeting Apple's Requirements

EDIT:
I have heavily edited this question after making some significant new discoveries, since it had not received any answers yet.
Historically, and as far as I know, keeping your Mac awake in closed-display mode without meeting Apple's requirements has only been possible with a kernel extension (kext) or a command run as root. Recently, however, I have discovered that there must be another way. I could really use some help figuring out how to get this working for use in a (100% free, no IAP) sandboxed Mac App Store (MAS) compatible app.
I have confirmed that some other MAS apps are able to do this, and it looks like they might be writing YES to a key named clamshellSleepDisabled. Or perhaps there's some other trickery involved that causes the key value to be set to YES? I found the function in IOPMrootDomain.cpp:
void IOPMrootDomain::setDisableClamShellSleep( bool val )
{
    if (gIOPMWorkLoop->inGate() == false) {
        gIOPMWorkLoop->runAction(
            OSMemberFunctionCast(IOWorkLoop::Action, this, &IOPMrootDomain::setDisableClamShellSleep),
            (OSObject *)this,
            (void *)val);
        return;
    }
    else {
        DLOG("setDisableClamShellSleep(%x)\n", (uint32_t) val);
        if ( clamshellSleepDisabled != val )
        {
            clamshellSleepDisabled = val;
            // If clamshellSleepDisabled is reset to 0, reevaluate if
            // system need to go to sleep due to clamshell state
            if ( !clamshellSleepDisabled && clamshellClosed)
                handlePowerNotification(kLocalEvalClamshellCommand);
        }
    }
}
I'd like to give this a try and see if that's all it takes, but I don't really have any idea how to go about calling this function. It's certainly not part of the IOPMrootDomain documentation, and I can't seem to find any helpful example code for functions that are in the IOPMrootDomain documentation, such as setAggressiveness or setPMAssertionLevel. Console shows some evidence of what's going on behind the scenes (screenshot not reproduced here).
I've had a tiny bit of experience working with IOPMrootDomain via adapting some of ControlPlane's source for another project, but I'm at a loss for how to get started on this. Any help would be greatly appreciated. Thank you!
EDIT:
With @pmdj's contribution/answer, this has been solved!
Full example project:
https://github.com/x74353/CDMManager
This ended up being surprisingly simple/straightforward:
1. Import header:
#import <IOKit/pwr_mgt/IOPMLib.h>
2. Add this function in your implementation file:
IOReturn RootDomain_SetDisableClamShellSleep (io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint32_t input_count = 1;
    uint64_t input[input_count];
    input[0] = (uint64_t) { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(root_domain_connection, kPMSetClamshellSleepState, input, input_count, NULL, &num_outputs);
}
3. Use the following to call the above function from somewhere else in your implementation:
io_connect_t connection = IO_OBJECT_NULL;
io_service_t pmRootDomain = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPMrootDomain"));
IOServiceOpen(pmRootDomain, current_task(), 0, &connection);

// 'enable' is a bool you should assign a YES or NO value to prior to making this call
RootDomain_SetDisableClamShellSleep(connection, enable);
IOServiceClose(connection);
I have no personal experience with the PM root domain, but I do have extensive experience with IOKit, so here goes:
You want IOPMrootDomain::setDisableClamShellSleep() to be called.
A code search for sites calling setDisableClamShellSleep() quickly reveals a location in RootDomainUserClient::externalMethod(), in the file iokit/Kernel/RootDomainUserClient.cpp. This is certainly promising, as externalMethod() is what gets called in response to user space programs calling the IOConnectCall*() family of functions.
Let's dig in:
IOReturn RootDomainUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    IOReturn ret = kIOReturnBadArgument;
    switch (selector)
    {
        …
        …
        …
        case kPMSetClamshellSleepState:
            fOwner->setDisableClamShellSleep(arguments->scalarInput[0] ? true : false);
            ret = kIOReturnSuccess;
            break;
        …
So, to invoke setDisableClamShellSleep(), you'll need to:
1. Open a user client connection to IOPMrootDomain. This looks straightforward, because:
   - Upon inspection, IOPMrootDomain has an IOUserClientClass property of RootDomainUserClient, so IOServiceOpen() from user space will by default create a RootDomainUserClient instance.
   - IOPMrootDomain does not override the newUserClient member function, so there are no access controls there.
   - RootDomainUserClient::initWithTask() does not appear to place any restrictions (e.g. root user, code signing) on the connecting user space process.
So it should simply be a case of running this code in your program:
io_connect_t connection = IO_OBJECT_NULL;
IOReturn ret = IOServiceOpen(
    root_domain_service,
    current_task(),
    0, // user client type, ignored
    &connection);
2. Call the appropriate external method.
   - From the code excerpt earlier on, we know that the selector must be kPMSetClamshellSleepState.
   - arguments->scalarInput[0] being zero will call setDisableClamShellSleep(false), while a nonzero value will call setDisableClamShellSleep(true).
This amounts to:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint64_t inputs[] = { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(
        root_domain_connection, kPMSetClamshellSleepState,
        inputs, 1, // 1 = length of array 'inputs'
        NULL, &num_outputs);
}
When you're done with your io_connect_t handle, don't forget to IOServiceClose() it.
This should let you toggle clamshell sleep on or off. Note that there does not appear to be any provision for automatically resetting the value to its original state, so if your program crashes or exits without cleaning up after itself, whatever state was last set will remain. This might not be great from a user experience perspective, so perhaps try to defend against it somehow, for example in a crash handler.
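Since nothing resets the flag for you, one defensive pattern (a sketch, not from the original answer; note that IOKit calls are not guaranteed async-signal-safe, so treat the signal path as best-effort) is to keep the connection open for the app's lifetime and restore the default on the way out, reusing the RootDomain_SetDisableClamShellSleep() helper from above:

#include <IOKit/IOKitLib.h>
#include <signal.h>
#include <stdlib.h>

static io_connect_t g_connection = IO_OBJECT_NULL; // opened via IOServiceOpen() as shown earlier

// Re-enable clamshell sleep and drop the connection; safe to call more than once.
static void restore_clamshell_sleep(void)
{
    if (g_connection != IO_OBJECT_NULL) {
        RootDomain_SetDisableClamShellSleep(g_connection, false);
        IOServiceClose(g_connection);
        g_connection = IO_OBJECT_NULL;
    }
}

static void on_fatal_signal(int sig)
{
    restore_clamshell_sleep();
    signal(sig, SIG_DFL); // restore the default handler and re-raise,
    raise(sig);           // so the process still terminates as usual
}

// Once, right after disabling clamshell sleep:
//   atexit(restore_clamshell_sleep);
//   signal(SIGINT,  on_fatal_signal);
//   signal(SIGTERM, on_fatal_signal);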

ContainerLaunchContext.setResource() missing of hadoop yarn

http://hadoop.apache.org/docs/r2.1.0-beta/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
I am trying to make the example from the link above work, but I can't compile the code below:
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(512);
amContainer.setResource(capability);
// Set the container launch content into the
// ApplicationSubmissionContext
appContext.setAMContainerSpec(amContainer);
amContainer is a ContainerLaunchContext, and my Hadoop version is 2.1.0-beta.
I did some investigation and found that there is no method setResource() in ContainerLaunchContext.
I have three questions about this:
1) Has the method been removed, or something like that?
2) If the method has been removed, what should I do instead?
3) Is there any more detailed documentation about YARN? The docs on the website are very basic; I would like a manual of some sort. For example, for capability.setMemory(512), I can't tell from the comments in the code whether that is 512 KB or 512 MB.
This is actually the proper solution to the question; the previous answer might cause incorrect execution!
@Dyin, I couldn't fit it in a comment ;) Validated for 2.2.0 and 2.3.0.
Driver setting up resources for AppMaster:
ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
ApplicationId appId = appContext.getApplicationId();
appContext.setApplicationName(this.appName);

// Set up the container launch context for the application master
ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);

// The resource capability now goes on the submission context, not the container
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(amMemory);
appContext.setResource(capability);
appContext.setAMContainerSpec(amContainer);

Priority pri = Records.newRecord(Priority.class);
pri.setPriority(amPriority);
appContext.setPriority(pri);
appContext.setQueue(amQueue);

// Submit the application to the applications manager
yarnClient.submitApplication(appContext); // this.yarnClient = YarnClient.createYarnClient();
In ApplicationMaster this is how you should specify resources for containers (workers).
private AMRMClient.ContainerRequest setupContainerAskForRM() {
    // setup requirements for hosts
    // using * as any host will do for the distributed shell app
    // set the priority for the request
    Priority pri = Records.newRecord(Priority.class);
    pri.setPriority(requestPriority);

    // Set up resource type requirements
    // For now, only memory is supported so we set memory requirements
    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(containerMemory);

    AMRMClient.ContainerRequest request = new AMRMClient.ContainerRequest(capability, null, null, pri);
    return request;
}
In some run() or main() method in your AppMaster:
AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
resourceManager = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
resourceManager.init(conf);
resourceManager.start();

for (int i = 0; i < numTotalContainers; ++i) {
    AMRMClient.ContainerRequest containerAsk = setupContainerAskForRM();
    resourceManager.addContainerRequest(containerAsk);
}
Launching containers
You can use the original answer's solution (the java command); it's just a cherry on top and should work anyway.
You can set the memory available to the ApplicationMaster via the command, like this:
// Set the necessary command to execute the application master
Vector<CharSequence> vargs = new Vector<CharSequence>(30);
...
vargs.add("-Xmx" + amMemory + "m"); // notice "m" indicating megabytes; you can also use -Xms combined with -Xmx
... // transform vargs to String commands
amContainer.setCommands(commands);
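The "transform vargs to String commands" step that is elided above is typically just a join; a sketch along the lines of the YARN distributed-shell example (vargs and amContainer refer to the snippet above; java.util.List and ArrayList are assumed imported):

// Join the argument fragments into the single shell command line the NodeManager will run
StringBuilder command = new StringBuilder();
for (CharSequence str : vargs) {
    command.append(str).append(" ");
}
List<String> commands = new ArrayList<String>();
commands.add(command.toString());
amContainer.setCommands(commands);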
This should solve your problem. As for your three questions: YARN is rapidly evolving software. My advice: forget the documentation, get the source code, and read it; it will answer a lot of your questions.

debugging core files

I want to write a program that can read core files on Linux, but I cannot find any documentation to guide me in this respect. Can someone please point me to some resources?
You can also take a look at GDB source code, gdb/core*.
For instance, in gdb/corelow.c, you can read at the end:
static struct target_ops core_ops;
core_ops.to_shortname = "core";
core_ops.to_longname = "Local core dump file";
core_ops.to_doc = "Use a core file as a target. Specify the filename of the core file.";
core_ops.to_open = core_open;
core_ops.to_close = core_close;
core_ops.to_attach = find_default_attach;
core_ops.to_detach = core_detach;
core_ops.to_fetch_registers = get_core_registers;
core_ops.to_xfer_partial = core_xfer_partial;
core_ops.to_files_info = core_files_info;
core_ops.to_insert_breakpoint = ignore;
core_ops.to_remove_breakpoint = ignore;
core_ops.to_create_inferior = find_default_create_inferior;
core_ops.to_thread_alive = core_thread_alive;
core_ops.to_read_description = core_read_description;
core_ops.to_pid_to_str = core_pid_to_str;
core_ops.to_stratum = process_stratum;
core_ops.to_has_memory = core_has_memory;
core_ops.to_has_stack = core_has_stack;
core_ops.to_has_registers = core_has_registers;
The struct target_ops defines a generic interface that the upper part of GDB uses to communicate with a target. This target can be a local Unix process, a remote process, a core file, ...
So if you only investigate what's behind these functions, you won't be overwhelmed by the generic parts of the debugger implementation.
(Depending on your final goal, you may also want to reuse this interface and its implementation in your app; it shouldn't rely on too many other things.)
Having a look at the source of gcore http://people.redhat.com/anderson/extensions/gcore.c might be helpful.
Core files can be examined by using the dbx(1) or mdb(1) or one of the proc(1) tools.
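If you want to parse core files yourself rather than through a debugger: a Linux core dump is an ELF file of type ET_CORE, so a minimal first step is to walk its program headers with <elf.h>. A sketch, assuming a 64-bit core; a robust reader would also honour e_ident[EI_CLASS] and endianness:

/* Dump the program headers of a 64-bit ELF core file.
 * PT_LOAD segments describe the memory mappings; PT_NOTE segments
 * carry register sets and process info (NT_PRSTATUS, NT_PRPSINFO, ...). */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <corefile>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    Elf64_Ehdr ehdr;
    if (read(fd, &ehdr, sizeof ehdr) != sizeof ehdr ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 || ehdr.e_type != ET_CORE) {
        fprintf(stderr, "%s: not an ELF core file\n", argv[1]);
        return 1;
    }

    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        if (pread(fd, &phdr, sizeof phdr,
                  ehdr.e_phoff + (off_t)i * ehdr.e_phentsize) != sizeof phdr)
            break;
        printf("phdr %2d: type=%u vaddr=0x%llx memsz=0x%llx\n",
               i, phdr.p_type,
               (unsigned long long)phdr.p_vaddr,
               (unsigned long long)phdr.p_memsz);
    }
    close(fd);
    return 0;
}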

How to get the installation directory?

The MSI stores the installation directory for future uninstall tasks.
Using the INSTALLPROPERTY_INSTALLLOCATION property (that is, "InstallLocation") works only if the installer set the ARPINSTALLLOCATION property during the installation. But this property is optional, and almost nobody uses it.
How can I retrieve the installation directory?
Use a registry key to keep track of your install directory; that way you can reference it when upgrading and removing the product.
Using WiX, I would create a Component that creates the key, right after the Directory tag declaration of the install directory.
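A minimal sketch of such a component (WiX 3 syntax; the GUID, registry key, and directory id are hypothetical placeholders):

<Directory Id="INSTALLDIR" Name="MyApp">
  <Component Id="InstallDirRegistry" Guid="PUT-GUID-HERE">
    <!-- Records [INSTALLDIR] so later installs and the uninstaller can read it back -->
    <RegistryValue Root="HKLM"
                   Key="Software\MyCompany\MyApp"
                   Name="InstallDir"
                   Type="string"
                   Value="[INSTALLDIR]"
                   KeyPath="yes" />
  </Component>
</Directory>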
I'd use MsiGetComponentPath(): you need the ProductCode and a ComponentId, and you get back the full path to the installed file; just pick one that lives in your installation directory. If you want to get the value of a directory for any random MSI, I do not believe there is an API that lets you do that.
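A sketch of that call (C, Win32; both GUIDs are hypothetical placeholders for your ProductCode and for a ComponentId known to install into the installation directory):

#include <windows.h>
#include <msi.h>
#include <stdio.h>
#pragma comment(lib, "msi.lib")

int wmain(void)
{
    /* Hypothetical GUIDs - replace with your real ProductCode/ComponentId. */
    LPCWSTR product   = L"{00000000-0000-0000-0000-000000000000}";
    LPCWSTR component = L"{11111111-1111-1111-1111-111111111111}";

    WCHAR path[MAX_PATH];
    DWORD cch = MAX_PATH;
    INSTALLSTATE state = MsiGetComponentPathW(product, component, path, &cch);
    if (state == INSTALLSTATE_LOCAL)
        wprintf(L"component file: %s\n", path); /* strip the file name to get the directory */
    else
        wprintf(L"component not installed locally (state %d)\n", (int)state);
    return 0;
}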
I would try to use Installer.OpenProduct(productcode). This opens a session, on which you can then ask for Property("TARGETDIR").
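For instance, via the Windows Installer automation interface from script (a sketch; error handling omitted, and the product code is a hypothetical placeholder):

' WindowsInstaller.Installer is available on any machine with Windows Installer
Set installer = CreateObject("WindowsInstaller.Installer")
Set session = installer.OpenProduct("{00000000-0000-0000-0000-000000000000}")
WScript.Echo session.Property("TARGETDIR")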
Try this:
var sPath = this.Context.Parameters["assemblypath"].ToString();
As stated elsewhere in the thread, I normally write a registry key in HKLM to be able to easily retrieve the installation directory for subsequent installs.
In cases where I am dealing with a setup that hasn't done this, I use the built-in Windows Installer feature AppSearch (http://msdn.microsoft.com/en-us/library/aa367578(v=vs.85).aspx) to locate the directory of the previous install by specifying a file signature to look for.
A file signature can consist of the file name, file size, file version, and other file properties. Each signature can be specified with a certain degree of flexibility, so you can find different versions of the same file, for instance by specifying a version range to look for. Please check the SDK documentation: http://msdn.microsoft.com/en-us/library/aa371853(v=vs.85).aspx
In most cases I use the main application EXE and set a tight signature by looking for a narrow version range of the file with the correct version and date.
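In WiX terms, such a signature-based search might look like this (a sketch; the property name, file name, and version range are hypothetical):

<!-- AppSearch: sets PREVIOUSEXE to the full path of a matching MyApp.exe, if one is found -->
<Property Id="PREVIOUSEXE">
  <DirectorySearch Id="FindPreviousDir" Path="[ProgramFilesFolder]" Depth="3">
    <FileSearch Id="FindMainExe"
                Name="MyApp.exe"
                MinVersion="1.2.0.0"
                MaxVersion="1.2.99.0" />
  </DirectorySearch>
</Property>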
Recently I needed to automate the Natural Docs install through Ketarin. I could have assumed it was installed into the default path (%ProgramFiles(x86)%\Natural Docs), but I decided to take a safer approach. Sadly, even though the installer created a key under HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall, none of its values led me to the install dir.
Stein's answer suggests the AppSearch MSI function, which looks interesting, but sadly the Natural Docs MSI installer doesn't provide a Signature table, so that approach doesn't work here.
So I decided to search the registry for any reference to the Natural Docs install dir, and I found one under the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components key.
I developed a Reg class in C# for Ketarin that allows recursion. I look through all values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components, and if the main application executable (NaturalDocs.exe) is found in one of the subkeys' values, its directory is extracted (C:\Program Files (x86)\Natural Docs\NaturalDocs.exe becomes C:\Program Files (x86)\Natural Docs) and added to the system environment variable %PATH% (so I can call "NaturalDocs.exe" directly instead of using the full path).
The registry "class" (functions, actually) can be found on GitHub (RegClassCS).
System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo("NaturalDocs.exe", "-h");
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
var process = System.Diagnostics.Process.Start(startInfo);
process.WaitForExit();

if (process.ExitCode != 0)
{
    string Components = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components";
    bool breakFlag = false;
    string hKeyName = "HKEY_LOCAL_MACHINE";
    if (Environment.Is64BitOperatingSystem)
    {
        hKeyName = "HKEY_LOCAL_MACHINE64";
    }

    string[] subKeyNames = RegGetSubKeyNames(hKeyName, Components);
    // Array.Reverse(subKeyNames);
    for (int i = 0; i <= subKeyNames.Length - 1; i++)
    {
        string[] valueNames = RegGetValueNames(hKeyName, subKeyNames[i]);
        foreach (string valueName in valueNames)
        {
            string valueKind = RegGetValueKind(hKeyName, subKeyNames[i], valueName);
            switch (valueKind)
            {
                case "REG_SZ":
                // case "REG_EXPAND_SZ":
                // case "REG_BINARY":
                    string valueSZ = (RegGetValue(hKeyName, subKeyNames[i], valueName) as String);
                    if (valueSZ.IndexOf("NaturalDocs.exe") != -1)
                    {
                        startInfo = new System.Diagnostics.ProcessStartInfo("setx", "path \"%path%;" + System.IO.Path.GetDirectoryName(valueSZ) + "\" /M");
                        startInfo.Verb = "runas";
                        process = System.Diagnostics.Process.Start(startInfo);
                        process.WaitForExit();
                        if (process.ExitCode != 0)
                        {
                            Abort("SETX failed.");
                        }
                        breakFlag = true;
                    }
                    break;
                /*
                case "REG_MULTI_SZ":
                    string[] valueMultiSZ = (string[])RegGetValue("HKEY_CURRENT_USER", subKeyNames[i], valueKind);
                    for (int k = 0; k <= valueMultiSZ.Length - 1; k++)
                    {
                        Ketarin.Forms.LogDialog.Log("valueMultiSZ[" + k + "] = " + valueMultiSZ[k]);
                    }
                    break;
                */
                default:
                    break;
            }
            if (breakFlag)
            {
                break;
            }
        }
        if (breakFlag)
        {
            break;
        }
    }
}
Even if you don't use Ketarin, you can easily paste the functions and build them with Visual Studio or csc.
A more general approach can be taken using RegClassVBS, which allows registry key recursion and doesn't depend on the .NET Framework platform or build processes.
Please note that enumerating the Components key can be CPU-intensive. The example above has subKeyNames.Length available, which you can use to show some progress to the user (maybe something like "i of (subKeyNames.Length - 1) keys remaining"; be creative). A similar approach can be taken in RegClassVBS.
Both classes (RegClassCS and RegClassVBS) have documentation and examples that can guide you. You can use them in any software and contribute to their development by making a commit on the git repo, and (of course) by opening an issue on their GitHub pages if you find a problem you couldn't resolve yourself, so we can try to reproduce it and figure out what we can do about it. =)
