I am currently building Android 4.4 with seek-for-android to get UICC support on my Nexus 5.
So far without success. I applied all the patches mentioned here and here, but the SIM1 reader says there is no Secure Element present (only the embedded Secure Element reader returns true on .isSecureElementPresent()).
My question is: has anyone managed to activate SWP on a Nexus 5 to successfully route APDUs to the SIM card, and if so, how?
Edit: I found this comment, but it didn't help either.
I finally found the root of the problem and with that also the solution!
Although Google offers a package of allegedly all vendor-specific libraries, the makefiles it contains list a few (17) libraries that are not actually included in the package.
To add those libraries to your source folder, you either need another hammerhead device with stock firmware or the system.img file of the ROM.
You can pull the libraries from a device like this:
adb pull system/app/OmaDmclient.apk
adb pull system/etc/DxHDCP.cfg
adb pull system/vendor/bin/vss_init
adb pull system/vendor/firmware/discretix/dxhdcp2.b00
adb pull system/vendor/firmware/discretix/dxhdcp2.b01
adb pull system/vendor/firmware/discretix/dxhdcp2.b02
adb pull system/vendor/firmware/discretix/dxhdcp2.b03
adb pull system/vendor/firmware/discretix/dxhdcp2.mdt
adb pull system/vendor/lib/libDxHdcp.so
adb pull system/vendor/lib/libvdmengine.so
adb pull system/vendor/lib/libvdmfumo.so
adb pull system/vendor/lib/libvss_common_core.so
adb pull system/vendor/lib/libvss_common_idl.so
adb pull system/vendor/lib/libvss_common_iface.so
adb pull system/vendor/lib/libvss_nv_core.so
adb pull system/vendor/lib/libvss_nv_idl.so
adb pull system/vendor/lib/libvss_nv_iface.so
If you use the system.img file, mount the image and copy them from there, as sketched below.
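For example, on a Linux machine (a sketch; paths are illustrative, and factory system.img files are usually sparse images, so they need to be converted with simg2img from android-tools first):
simg2img system.img system.raw.img
sudo mkdir -p /mnt/system
sudo mount -o loop,ro system.raw.img /mnt/system
cp /mnt/system/vendor/lib/libvss_*.so vendor/lge/hammerhead/proprietary/
sudo umount /mnt/system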
Now that we have the missing libraries, we need to place them in the vendor directory vendor/lge/hammerhead/proprietary and add them to the PRODUCT_COPY_FILES list in the makefile vendor/lge/hammerhead/device-partial.mk, like this:
vendor/lge/hammerhead/proprietary/libvss_nv_iface.so:system/vendor/lib/libvss_nv_iface.so:lge \
vendor/lge/hammerhead/proprietary/libvss_nv_idl.so:system/vendor/lib/libvss_nv_idl.so:lge \
vendor/lge/hammerhead/proprietary/libvss_nv_core.so:system/vendor/lib/libvss_nv_core.so:lge \
vendor/lge/hammerhead/proprietary/libvss_common_iface.so:system/vendor/lib/libvss_common_iface.so:lge \
vendor/lge/hammerhead/proprietary/libvss_common_idl.so:system/vendor/lib/libvss_common_idl.so:lge \
vendor/lge/hammerhead/proprietary/libvss_common_core.so:system/vendor/lib/libvss_common_core.so:lge \
vendor/lge/hammerhead/proprietary/libvdmfumo.so:system/vendor/lib/libvdmfumo.so:lge \
vendor/lge/hammerhead/proprietary/libvdmengine.so:system/vendor/lib/libvdmengine.so:lge \
vendor/lge/hammerhead/proprietary/libDxHdcp.so:system/vendor/lib/libDxHdcp.so:lge \
Now recompile, flash the image, and everything should work.
Related
I'm wondering if rclone is able to download a file from a shared folder of Google Drive. If yes, what is the command to do it?
rclone sync cloud_name:(what is the shared folder name?)file_name destination_path
You need to use rclone config to create a remote for the Google 'Shared Drive'. See https://rclone.org/drive/
Answer yes at the line Configure this as a Shared Drive (Team Drive)?
Then the sync would be:
rclone sync SharedDriveName:"Directory/Directory" YourOtherRemote:"Directory/Directory"
Useful flags are:
-P or --progress (see progress during transfer)
-vv (see detailed logs)
--create-empty-src-dirs (to recreate empty directories)
-u or --update (Skip files that are newer on the destination)
--drive-server-side-across-configs (if you want to sync native Google docs)
--dry-run (to do a trial run without copying anything)
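For example, a first trial run might look like this (remote name and paths are placeholders):
rclone sync SharedDriveName:"Reports/2020" /home/user/reports -P -vv --create-empty-src-dirs --dry-run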
I have a Python script with a Windows .exe dependency, which in turn relies on a (closed-source) Windows DLL. The Python script runs just fine on Ubuntu via a call to Wine.
Is it possible (and practical) to run this on AWS Lambda?
What would be involved in preparing the code package?
Update: the Lambda container image feature supports images up to 10 GB. I haven't tried it, but I think that would be a viable approach, and it wouldn't require the hacks I used below to reduce the Wine build size.
TL;DR:
Is it possible? Yes.
Is it practical? The approach I tried is not. A better approach might be to put Wine into separate Lambda layers or a custom execution environment.
Will it work for you? It depends, deployment package size and disk space are the limiting factors.
Old, somewhat hacky method to fit wine into the regular lambda environment:
I compiled a custom wine with minimal dependencies for lambda, compressed it and then put it onto S3.
Then, in the lambda at runtime, I downloaded the archive, extracted it to /tmp and ran it with a custom empty wine prefix.
My test windows executable was 64bit curl.exe.
1. Compile Wine for Lambda
From https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html, I first tried amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2, but it had an older compilation environment and wouldn't configure.
With AMI amzn2-ami-hvm-2.0.20190313-x86_64-gp2 on a t3.2xlarge EC2 instance, I was able to configure and compile. These are the commands I used (references: aws-compile and building-wine):
> sudo yum groupinstall "Development Tools"
> mkdir -p ~/wine-dirs/wine-source
> git clone git://source.winehq.org/git/wine.git ~/wine-dirs/wine-source
> cd ~/wine-dirs/wine-source
> ./configure --enable-win64 --without-x --without-freetype --prefix /opt/wine
> make -j8
> sudo mkdir -p /opt/wine
> sudo chown ec2-user.ec2-user /opt/wine/
> make install
> cd /opt/
> tar zcvf ~/wine-64.tar.gz wine/
This was only a 64-bit build. It also had almost no other optional wine dependencies.
2. Reduce the size of the Wine build further
I removed a lot of optional dependencies from the wine build at compilation time, but it was still too big. /tmp is limited to 500MB.
I deleted files in the package subdirectories, including what looked like optional libs, until I got it down to around 300MB uncompressed.
I verified that wine would still run curl.exe after deleting files from the build.
3. Compress it
I created a tar.bz2 of wine and curl with default bz2 options; it ended up around 80MB. The compressed and extracted files together required about 390MB.
That way there is enough room to both download the archive and extract it to /tmp inside the lambda.
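For example, with the directory layout shown below, the archive can be created like this:
> tar -jcvf wine.tar.bz2 wine/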
> du -h .
290M ./wine/lib64/wine
292M ./wine/lib64
276K ./wine/share/wine
8.0K ./wine/share/applications
288K ./wine/share
5.0M ./wine/curl-7.66.0-win64-mingw/bin
5.0M ./wine/curl-7.66.0-win64-mingw
12M ./wine/bin
308M ./wine
390M .
> ls
wine wine.tar.bz2
4. Upload wine.tar.bz2 to S3
Create an S3 bucket and upload the wine.tar.bz2 file to it.
5. Create the Lambda
Create an AWS Lambda function using the Python 3.7 runtime. While this uses a different underlying AMI than what Wine was built on above, it still worked.
In the lambda execution role, grant access to the S3 bucket.
RAM: 1024MB. I chose this because lambda CPU power scales with the memory.
Timeout: 1 min
6. Lambda code:
I needed to follow the advice from this question and answer to change the wine prefix inside the lambda. I also turned off the display as it suggested.
e.g., a minimal sketch (bucket and key names are placeholders; error handling omitted):
import os
import subprocess
import boto3

def handler(event, context):
    # download the archive from S3 to /tmp (bucket/key are placeholders)
    boto3.client("s3").download_file("my-wine-bucket", "wine.tar.bz2", "/tmp/wine.tar.bz2")
    os.chdir("/tmp")
    subprocess.call(["tar", "-jxvf", "./wine.tar.bz2"])
    os.environ["DISPLAY"] = ""                    # no X display in Lambda
    os.environ["WINEARCH"] = "win64"              # 64-bit-only build
    os.environ["WINEPREFIX"] = "/tmp/wineprefix"  # /tmp is the only writable path
    subprocess.call(["./wine/bin/wine64", "./wine/curl-7.66.0-win64-mingw/bin/curl.exe", "http://www.stackoverflow.com"])
Success!
I need to push an APK to /system/app (I need my APK to work as a system app). I am using a Nexus 10 tablet (4.4.2, KitKat), which I have rooted.
When I try to push the APK using adb, it shows "failed to copy - Read only file system".
I also tried to remount /system with read-write permission, but I still get the same error.
After remounting, I changed the permissions of the system and app folders to "777". Still I get the same error.
Can someone tell me how I can make this work?
adb shell
mount -o rw,remount /system
go to /system/app and remove your_app.apk if available
exit from shell
adb pull /data/system/packages.xml - this will download the file to the current directory on your PC
adb pull /data/system/packages.list
remove the <package> tag whose name attribute is your_app_package (and all its content) from packages.xml, and remove its entry from packages.list
upload the files back using adb push
clear the Dalvik cache using adb shell rm /data/dalvik-cache/*
remove your_app_package from the data folder using adb shell rm -r /data/data/your_app_package
reboot the phone using adb reboot; after the restart you will be able to install the new app, as shown in the combined session below
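Put together, the whole session might look like this (a sketch; your_app.apk and your_app_package are placeholders):
adb shell mount -o rw,remount /system
adb shell rm /system/app/your_app.apk
adb pull /data/system/packages.xml
adb pull /data/system/packages.list
# edit both files locally, removing the entries for your_app_package, then:
adb push packages.xml /data/system/packages.xml
adb push packages.list /data/system/packages.list
adb shell rm /data/dalvik-cache/*
adb shell rm -r /data/data/your_app_package
adb reboot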
Some of these steps can be omitted; please experiment.
Hope this helps.
I made a Ruby web application on nitrous.io. The tool is very nice and it helped a lot, but now I want to download the project to my computer, and I didn't find any option to do that...
You can download and upload projects by any of the following options:
Utilize Nitrous Desktop to Sync your files locally.
Upload your project to Github, and pull the project from there. Here is a guide on adding the SSH key to Github if needed.
Upload the content via SCP. To do this, you will need to add an SSH Key to your account.
Next, run this command on your local machine, replacing {PORT} with the port number assigned to your Nitrous.IO box, and changing usw1 to the proper region found in the SSH URI of your boxes page.
To Upload:
scp -P{PORT} -r path/to/yourFolder action@usw1-2.nitrousbox.com:~/workspace
To Download:
scp -P{PORT} -r action@usw1-2.nitrousbox.com:~/workspace path/to/yourLocalFolder
I do not know the service, but apparently they offer SSH access, so you can use scp to copy the files to your machine. Anyway, you should probably ask their support...
...post a summary of their answer here and close the question :)
The easiest way is to store your project in a Git repository and then push this repository to an external host. You will then be able to clone your project from the external repository to any machine you want.
Personally, I use Bitbucket, as it is free and very easy to set up. Have a look at the tutorials there.
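For example, from the project directory on your Nitrous box (a sketch; the remote URL is a placeholder for your own repository):
git init
git add .
git commit -m "initial commit"
git remote add origin git@bitbucket.org:username/myproject.git
git push -u origin master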
OK, replying really late, but I hope this will help anyone still looking for this. Here is how I download stuff from Nitrous: no desktop utility download needed, and no ssh/scp or adding keys.
What you do is simply make an archive of the folder you want to download:
tar -zcvf myarchive.tar.gz mydir/
Now you have a .tar.gz file. Go to whichever folder your archive is in and type:
python3.3 -m http.server 8080
You have just started a cute little HTTP server, ready to serve you your download. Now, from the Preview menu, click "Port 8080"; this opens a new browser tab showing your .gz file in the file listing (sample URL: http://yourboxes.apse1.nitrousbox.com:8080/). Click your .gz file and it will start downloading. Once the download is done, press Ctrl+C in the terminal to terminate the HTTP server.
This is not limited to Nitrous; you can make this work on many online VMs, like Cloud9 etc.
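If your box only has Python 2, the equivalent built-in server is:
python -m SimpleHTTPServer 8080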
Background:
I recently installed MAMP, and am using it as a production server. The server setup did not come with an FTP server, and from what I've read, you can set up an FTP server via mod_ftp, an Apache module. I am not an expert with Apache software or server admin, although I can learn quickly. I can get to the following point and then I get stuck. Can someone please help me out?
I checked out the mod_ftp module files from the repository, here:
http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/
and I unzipped the contents into:
/Applications/MAMP/mod_ftp
I opened the README-FTP file (here):
http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/README-FTP
README-FTP:
To build and install as a DSO outside of the httpd source
build, from the ftp source root directory, simply;
./configure.apxs
make
make install
...
To build static, or as a DSO but within the same build as httpd,
copy the entire ftp source directory tree on top of your existing
httpd source tree, and from the httpd source root directory
./buildconf (to pick up ftp)
./configure --enable-ftp {your usual options}
and proceed as usual.
Some Questions:
"build and install a DSO outside of the httpd source build, from the ftp source root directory" -- is the ftp source root directory the mod_ftp folder that I created from the zipped files I checked out from the repository?
What does it mean "outside of the httpd source build"? -- is this the ServerRoot value I set in the httpd.conf as "/Applications/MAMP/Library" ?
Likewise, what does "within the same httpd build" mean -- what location is this referring to?
How do I know whether I want a static or DSO build?
What is the statement "copy the entire ftp source directory tree on top of your existing httpd source tree" actually asking me to do? (On top of?? As in, in the parent directory of the httpd source tree, or in the same directory?)
If you've made it this far, I'd like to commend you!
From this point, I chose the first option, and entered the commands seen in README-FTP into my Terminal.
Here's what my terminal looks like:
$ ./configure.apxs
Configuring mod_ftp for APXS in /usr/sbin/apxs
Detecting features
Finished, run 'make' to compile mod_ftp
Run 'make FTPPORT=8021 install' to install mod_ftp
(The default FTPPORT is 21 if not specified)
The manual pages ftp/index.html and mod/mod_ftp.html
will be installed to help get you started.
The conf/extra/ftpd.conf will be installed as an example
for you to work from. In your configuration file,
/private/etc/apache2/httpd.conf
uncomment the line '#Include conf/extra/ftpd.conf'
to activate this example mod_ftp configuration.
$ make
Making all in modules/ftp
$ sudo make install
Password:
Making install in modules/ftp
/usr/share/apr-1/build-1/libtool --silent --mode=install cp mod_ftp.la /usr/libexec/apache2/
Installing configuration files
for i in /private/etc/apache2/httpd.conf /private/etc/apache2/original/httpd.conf; do \
if test -f $i; then \
(awk -f /applications/mamp/library/mod_ftp/build/addloadexample.awk \
-v MODULE=ftp -v DSO=.so -v LIBPATH=libexec/apache2 \
-v EXAMPLECONF=/private/etc/apache2/extra/ftpd.conf \
< $i > $i.new && \
mv $i $i.bak && mv $i.new $i \
) || true; \
fi; \
done
Preserving existing FTP documents
Installing header files
Installing online manual
$
So what do I do from here?
I don't see mod_ftp.so anywhere, and I am particularly looking in this directory:
/Applications/MAMP/Library/modules (where all of Apache's other mod_*.so files are...)
and this directory:
/Applications/MAMP/mod_ftp/modules/ftp (where all of mod_ftp's various .c, .h and other files are)
Ultimately, I think the problem I am running into is that I don't understand how the file structures between my mod_ftp source folder and the httpd source folders need to be integrated in order to get the module running properly. Also, I don't know what I don't know, so there is probably one simple question to ask, but unfortunately I can't figure out how to ask it. Thank you for your help and patience!
Cheers!
P.S., yes, I have scoured the internet for hours.
I ended up scrapping MAMP and using the Mac's built in server. Through the System Preferences > Sharing menu, you can enable file sharing, which has an Options pane that allows you to "Share files and folders using FTP." I was able to obtain a static IP address through Comcast Business, and configured port forwarding on port 21 in my router to accept traffic. Then, I could use my FTP client to connect to my router with something like "123.456.789:21" as my host. Wasn't the best or most secure solution, but it worked, so take it with a grain of salt.
Right, I finally managed to install this on Ubuntu 16.04 LTS.
First of all, you should install svn and the apxs functionality by running
sudo apt-get install subversion apache2-dev.
Then, cd to a convenient folder, and run svn co http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/. This downloads everything in the folder named /trunk. Then, cd into the /trunk folder of the downloaded repo.
Then, run the stated instructions:
./configure.apxs --> generates the makefile against your installed apxs so that make will work
make --> compiles the module
make install --> you may want to run this with the suggested FTPPORT flag; it essentially copies everything to where it needs to be and installs the compiled module.
The suggested ./buildconf and ./configure should only be done if you are compiling apache2 with ftp at the same time. Since you should already have apache2 installed, this is not the option that you should be doing. Just stick with the first set of instructions, which are used to compile mod_ftp independently of apache2 and patch things in as needed.
At this point, the installation should technically work. However, you are not fully out of the woods yet: if you restart apache2 now, it will fail to start. If you run journalctl -xe, you will see that this is due to syntax errors in various places in the config files, where someone forgot to prepend a forward slash; rather than being relative to root, the directories specified end up relative to /etc/apache2 instead. Fix those, using the line numbers as a guide. The omissions can be found in the apache2.conf file, in the line that specifies the location of the mod_ftp module, and in the ftpd.conf file, in the line that specifies the location of the error log.
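For example, the kind of fix needed looks like this (illustrative only; the exact lines and paths on your system may differ):
# broken: no leading slash, so Apache resolves the path relative to the server root (/etc/apache2)
LoadModule ftp_module usr/lib/apache2/modules/mod_ftp.so
# fixed
LoadModule ftp_module /usr/lib/apache2/modules/mod_ftp.so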
You now need to mess around with apache2.conf and ftpd.conf (found in the /extra subfolder of the /etc/apache2 folder). Make sure that the lines
Include /etc/apache2/extra/ftpd.conf
LoadModule ftp_module /usr/lib/apache2/modules/mod_ftp.so
are present and uncommented.
The first basically tells the main apache2.conf file to include the configuration files for mod_ftp to help with partitioning your http and ftp configuration settings. The second just makes sure that the ftp module is loaded so that it can interpret the directives in the ftpd.conf file. Thus, you won't need to add the line "FTP on" or specify ports, as those are handled in ftpd.conf perfectly well.
You should now be good to go. Just note that, for some weird reason, if you set the document root in ftpd.conf to be the same as that in apache2.conf, apache2 will still run normally and the FTP server will work, but the HTTP server will not. No idea why, but if you want to do that, a simple workaround is to create a symlink to the HTTP document root and set that as the FTP document root.
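For reference, the FTP part of the configuration can be quite small. A minimal sketch of ftpd.conf, assuming the stock example file's layout (the document root is a placeholder):
# listen on the FTP port and enable the FTP protocol for this virtual host
Listen 21
<VirtualHost *:21>
    FTP On
    DocumentRoot /var/ftp
</VirtualHost>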