I am investigating how to sign a kernel module from the kernel source in the Android Open Source Project. I don't know how to do it. How can I do it?
There's sign-file in the scripts/ directory of the Linux source. All details are in Documentation/module-signing.txt. Basically, once you've generated signing keys as described under the section GENERATING SIGNING KEYS in that document, run scripts/sign-file sha512 kernel-signkey.priv kernel-signkey.x509 module.ko as described under the section MANUALLY SIGNING MODULES.
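For reference, here is a rough sketch of both steps; the x509.genkey config file and the key/certificate file names are placeholders for illustration, so check Documentation/module-signing.txt for the exact openssl options that match your kernel version:
# generate a private key and certificate (x509.genkey is a config file you prepare per the document)
openssl req -new -nodes -utf8 -sha512 -days 36500 -batch -x509 \
    -config x509.genkey -outform DER -out kernel-signkey.x509 -keyout kernel-signkey.priv
# sign the module in place using the generated key and certificate
scripts/sign-file sha512 kernel-signkey.priv kernel-signkey.x509 module.ko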
Please note: I have created this GitHub project right here that can be used to perfectly reproduce the problem I'm seeing.
Java 8 here attempting to use Launch4J via the gradle-launch4j Gradle plugin to build a Windows native EXE application. I am doing the development of a Java Swing app on my Mac but the app must run as a Windows EXE on Windows 10. I am also using ShadowJar to build my self-contained "fat jar".
I can build my (Swing) app's fat jar and then run it on my Mac via java -jar build/lib/myapp.jar. It starts and runs no problem.
Here is my Gradle config for Launch4J:
launch4j {
    mainClassName = 'com.example.windows.hello.HelloWindowsApp'
    icon = "${projectDir}/icon.ico"
    jdkPreference = 'jdkOnly'
    initialHeapSize = 128
    jreMinVersion = '1.8.0'
    jreMaxVersion = '1.8.9'
    maxHeapSize = 512
    stayAlive = false
    bundledJre64Bit = true
    bundledJrePath = '../hello-windows/jre8'
}
When I run ./gradlew clean build shadowJar createExe createDistro it produces:
hello-windows.zip/
hello-windows.exe --> The Windows EXE built by the 'createExe' task
lib/* --> The lib/ dir for the EXE that is also built by the 'createExe' task
jre8/ --> OpenJDK JRE8 (copied from the libs/jre8 dir)
So I copy that ZIP file and port it over to a Windows 10 (64-bit) machine. I extract the ZIP and run the EXE by double-clicking it inside Windows Explorer (which I can confirm does see the EXE as an Application type). First I see this (a screenshot of the Windows SmartScreen warning blocking the app):
Why is this happening? Are there any Launch4J configurations/settings I can change so that this doesn't happen?
Thanks in advance!
You need to sign the executable created by launch4j, as described here, to prevent SmartScreen from blocking it from running. See also the related discussion in the support forum.
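For example, if you have a code-signing certificate in a .pfx file, you can sign the EXE with Microsoft's signtool; the certificate path, password and timestamp server below are placeholders:
signtool sign /f mycert.pfx /p <password> /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 hello-windows.exe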
Your first question is more of a Windows question. When you extract an application from a zip file that was downloaded from the internet, Windows marks it as unsafe; in fact, if you open the file's Properties tab, you will see a checkbox where you can remove that unsafe attribute. It's much like running chmod +x on an executable script in Linux.
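If you want to clear that mark without going through the Properties dialog, PowerShell can do it as well; as a small sketch (run it on the downloaded zip before extracting, so the mark is not inherited by the extracted files):
Unblock-File -Path .\hello-windows.zip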
For the second part, I assume you are using the Gradle plugin for Launch4j. There are two main ways to configure the bundled JRE, assuming your project is laid out in the common way, with the JRE directory next to the folder that contains your executable.
By specifying the directory path only, like
../jre
By specifying the full relative path to the Java executable, like
../jre/bin/javaw.exe
In the first case, the generated XML should end up looking like this:
<jre>
    <path>../jre</path>
</jre>
The main point is that the path to the JRE is relative to the location of the executable, not to the current working directory. In this case, we step up one directory from the executable's folder to the folder containing the JRE.
Try setting the bundledJrePath in your build.gradle to just jre8:
launch4j {
    ...
    bundledJrePath = 'jre8'
}
Because, in your case, that is the relative path where the JRE ends up after extracting the zip.
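With that setting, the <jre> section of the XML that launch4j generates should look something like this (mirroring the example further above):
<jre>
    <path>jre8</path>
</jre>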
http://launch4j.sourceforge.net/docs.html
<path>, <minVersion>, <maxVersion>
The <path> property is used to specify the absolute or relative path (to the executable) of a bundled JRE; it does not rely on the current directory or <chdir>. Note that this path is not checked until the actual application execution.
Beware that the path must not contain /bin/javaw.exe.
When running the exe with the debug flag like this:
hello-windows.exe --l4j-debug
it will create a file launch4j.log in the same directory.
There you can check that the correct jre is picked up, for example:
...
Bundled JRE: jre8
Check launcher: C:\Users\IEUser\Downloads\hello-windows\jre8\bin\javaw.exe (OK)
...
I upvoted the answer above from sschuberth, as that is the best answer to your question. Signing the executable will make SmartScreen happy.
As an addition: I would rather avoid creating a plain executable, even a signed one, and create an MSI instead, for example by using javapackager. See also this question; that author created his own tool after using the Nullsoft installer (NSIS).
It is very cumbersome to get an executable accepted by every virus scanner around the world. In my own experience I used the WiX Toolset to create an MSI, wrapped it in a bootstrapper executable, and signed it with the company signing certificate. Even so, in the end I had to send whitelisting requests to McAfee, Norton, Avast, AVG, Kaspersky and Trend Micro. Thankfully, almost all of them accepted it over time; only Trend Micro never even responded.
I extracted the dSYM file from the xcarchive of the release version and uploaded it to Crittercism, but I am not able to see symbolicated crash reports in the tool.
I contacted the Crittercism help desk, and all I learned was that I need to upload a dSYM file with all symbols... so how can I validate whether the file I'm uploading is valid or not?
Build setting: GCC_GENERATE_DEBUGGING_SYMBOLS: Yes
File extraction steps: Organizer > xcarchive > release build > Show Package Contents > dSYMs > dSYM file
Apteligent also provides an automatic way, with which you'll never have to worry about tracking down your dSYMs again. This is achieved by adding the build_script file, which is available inside the Apteligent iOS library (v4.0.1 and higher), as a Run Script build phase. All you need to do is specify the App ID, API key and the source path in order to configure your Xcode project. Refer to the following article for a complete description:
http://support.crittercism.com/articles/knowledge_base/Uploading-dSYMs-to-Apteligent-automatically
Once done, whenever you build your Xcode application, the dSYM files for your application (and any dependent modules to which you added the Run Script) will be uploaded to Apteligent and become available for crash symbolication.
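To answer the validation part of the question: a quick local check (the app and dSYM names below are placeholders) is to dump the UUIDs of the dSYM and of the app binary and confirm they match; if the dSYM lists the same UUIDs as the binary that produced the crash, it is the right file to upload:
dwarfdump --uuid MyApp.app.dSYM
dwarfdump --uuid MyApp.app/MyApp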
I am getting myself familiar with LLVM, and my goal is to implement a back-end for my custom processor.
Before I jump into my back-end implementation, I am first trying to learn how the build procedure works, so I copied lib/Target/MSP430 to lib/Target/myproc and built LLVM targeting "myproc" (even though it is actually still a back-end for MSP430; I did this just to learn how to add a new target to LLVM).
When I configure/make LLVM, I get the following error message:
...
/bin/cp: cannot stat `/mydir/build/lib/Target/myproc/Debug+Asserts/MSP430GenRegisterInfo.inc.tmp': No such file or directory
...
I checked the build directory's lib/Target/myproc and saw there was only one file, the Makefile copied over from the source lib/Target/myproc.
Here is what I did before running configure and make:
In my LLVM source directory, copy lib/Target/MSP430 to lib/Target/myproc.
Modify configure and projects/sample/configure to add "myproc".
Go to lib/Target/myproc and change "MSP430" to "myproc" in MSP430.td, LLVMBuild.txt, and the Makefile (I also modified the files in the subdirectories).
As the LLVM build works for other targets on my machine, I believe the problem is not with my machine or the tools I am using, but with my modifications.
Am I missing something? Are there any further modifications that I am supposed to make?
There's a decent tutorial for writing backends here:
http://llvm.org/docs/WritingAnLLVMBackend.html
There's also this tutorial from a dev meeting:
http://llvm.org/devmtg/2012-04-12/Slides/Workshops/Anton_Korobeynikov.pdf
*GenRegisterInfo.inc comes from running tblgen on the target .td file. The .inc output file name depends on what the .td files are named in the myproc/ target directory.
It would be helpful to see more of your make log, but my guess is that you're getting a tblgen error when processing the .td files in myproc/. That tblgen error is the real problem you need to diagnose and address.
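To narrow it down, you can try running tblgen by hand on your renamed .td file and see whether it reports any errors. A rough sketch (the exact binary name, include paths and output file name depend on your LLVM version and build setup):
llvm-tblgen -gen-register-info -I include -I lib/Target/myproc lib/Target/myproc/myproc.td -o myprocGenRegisterInfo.inc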
The following command is used to build a specific kernel module:
make modules M=net/sctp
On second thought, I figured out that one of the options was not enabled, namely:
CONFIG_SCTP_DBG_OBJCNT=y
However, the file that the option controls was still not compiled after a "make modules" command. Do I need to build the whole kernel to let the option take effect?
All configuration options are converted into macros and written to the file include/generated/autoconf.h once you run make to build the kernel.
After that, whenever you change a configuration option, you need to run make again so that it regenerates the files required to pick up the new option. If you just run "make M=net/sctp modules" after changing your configuration, the change will not take effect. Instead of building the whole kernel, you can simply run "make modules", which regenerates the required files and builds your module with the options you selected. This is the best way, and it also resolves any dependencies of your newly configured option.
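A quick way to check whether the option actually reached the generated header is, for example:
grep CONFIG_SCTP_DBG_OBJCNT include/generated/autoconf.h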
But in your case, if you know that objcnt.c doesn't depend on anything else, you can change the sctp Makefile to include your file:
vim net/sctp/Makefile
sctp-y += objcnt.o
Then you can run "make M=net/sctp modules".
According to https://www.kernel.org/doc/Documentation/kbuild/modules.txt:
To build external modules, you must have a prebuilt kernel available
that contains the configuration and header files used in the build.
[..] use the make target modules_prepare. This will
make sure the kernel contains the information required. The target
exists solely as a simple way to prepare a kernel source tree for
building external modules.
vim .config
make modules_prepare
Answer any kconfig prompts, as changes to .config may enable new options that were not manually configured previously.
make M=net/sctp
Today I got the task of integrating a floating license server for install4j into our build process. Therefore I read the README.txt and the following two threads:
install4jc-specifying-floating-server and floating-license-setup-on-a-headless-ubuntu-server
Now there are still some questions left, so I will briefly describe our build environment. We are using Maven to build our software and installers, and we started to test install4j by using the maven plugin for install4j:
We installed the install4j application as a zip file in our Maven repository
During the Maven build we download and extract this file to a target directory
Until now we installed the demo license via the maven plugin into the install4j folder
During the Maven package phase we use the maven plugin to build the installer media
That works very nicely. Now the company has decided to buy a license bundle including a floating license server. But how do we include this in our build process? My suggestion would be the following:
Install the license server on a machine reachable over the network
Modify the config.xml file in the zipped application in the Maven repository to add the floating license server address, or set the maven plugin license property to something like FLOAT:[server-ip]:11862 in the build configuration?
Is the license key obtained from ej-technologies only necessary for using the UI? Does anybody have a similar build environment and can give me some information on how to set this up correctly?
Thanks in advance
set the maven plugin license property to something like FLOAT:[server-ip]:11862 in the build configuration?
That would work for the multi-platform edition. If you have the Windows edition, you have to go with:
modify the config.xml file in the zipped application in the Maven repository to add the floating license server address
As for:
Is the license key obtained from ej-technologies only necessary for using the UI?
The license key is entered in the license server. Both the IDE and the command line compiler contact the license server. Only the IDE actually checks out a license for as long as it is open; the command line compiler just verifies the validity of the license.
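As a sketch of the Maven side, assuming the Sonatype install4j-maven-plugin is being used and that it exposes a license parameter (verify the exact parameter name against the plugin's documentation); the FLOAT:[server-ip]:11862 value is taken from the question:
<plugin>
    <groupId>org.sonatype.install4j</groupId>
    <artifactId>install4j-maven-plugin</artifactId>
    <configuration>
        <!-- assumed parameter name; the FLOAT: syntax is from the question above -->
        <license>FLOAT:[server-ip]:11862</license>
    </configuration>
</plugin>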