What is the function of the uiautomator APKs present in the Python wrapper for uiautomator2?

I am trying to read and understand how the Python wrapper for uiautomator2 works. What is the function of the uiautomator APKs inside libs, and how does this whole framework work?
Also, where did they come from? I could not find the source code for these APKs.
https://github.com/openatx/uiautomator2

The uiautomator APKs are installed on the phone to handle UI commands; they are the on-device agent (built from the companion openatx/android-uiautomator-server project) that the Python client talks to.
python-uiautomator2 is a Python wrapper which allows:
scripting with Python on a computer
controlling the phone from the computer, with or without a USB connection
real-time screen casting and device control
Installation
Connect ONLY ONE phone to the computer with developer mode enabled, and make sure adb devices lists it
Install the packages: pip3 install -U uiautomator2 weditor
Install the daemons on the phone: python3 -m uiautomator2 init
weditor is a standalone web server that lets you interact with the phone through a browser.
Basic Usage
Connection
Connect the phone over Wi-Fi and run the Python script below:
import uiautomator2 as u2
d = u2.connect('192.168.31.37')
print(d.info)
(or)
Connect the phone over USB and run the Python script below:
import uiautomator2 as u2
d = u2.connect('mobile-serial') # get from "adb devices"
print(d.info)
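As the two snippets show, u2.connect() accepts either a Wi-Fi IP address or a USB serial and dispatches accordingly. A tiny standalone sketch of that distinction (the helper name and regex are mine, not part of uiautomator2):

```python
import re

def connect_target_kind(target: str) -> str:
    """Classify a u2.connect() argument the way the wrapper treats it.

    Illustrative only: uiautomator2 performs the equivalent dispatch
    internally when you call u2.connect().
    """
    # An IPv4 address (optionally with :port) means "connect over Wi-Fi".
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", target):
        return "wifi"
    # Anything else is taken as a USB serial from `adb devices`.
    return "usb-serial"

print(connect_target_kind("192.168.31.37"))   # wifi
print(connect_target_kind("emulator-5554"))   # usb-serial
```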
Key events
d.screen_on()
d.screen_off()
d.press('home')
d.press('back')
For full details, follow the link below:
uiautomator2 docs


Running IPFS Desktop and CLI simultaneously

This is a rather beginner question. Apologies for it not being more challenging :)
I am running IPFS Desktop on my computer. I downloaded it via the Ubuntu Software Center. I believe it's a snap install. I am using Ubuntu 20.04
I want to be able to access some of the CLI commands for the node being run by IPFS Desktop, but when I enter any ipfs command in the terminal, it says command not found.
If I install the ipfs cli then it runs a different node through the terminal. Am I missing something obvious here? How can I access the IPFS Desktop node through the command line?
Thanks!
Without getting into distribution/package specifics, here are two ways that should work on all systems.
Quick ad-hoc solution: point the ipfs CLI at the node run by IPFS Desktop by passing an explicit API endpoint (ipfs --api=/ip4/127.0.0.1/tcp/5001). You can find the exact address via Status → Advanced → API in the WebUI provided by the Desktop app.
The alternative is to set the IPFS_PATH environment variable to the repository directory used by IPFS Desktop, ensuring the ipfs CLI uses the same repo as the Desktop app. This is especially useful when you need to run a command that does not work over the API and requires direct access to the repository (like ipfs key export|rotate).
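The value passed to --api is a multiaddr: protocol/value pairs read left to right. A small hypothetical helper (not part of the ipfs CLI) showing how such an endpoint string is put together:

```python
def api_multiaddr(host: str = "127.0.0.1", port: int = 5001) -> str:
    """Compose a multiaddr API endpoint for the ipfs CLI.

    5001 is the usual default API port, but check the WebUI
    (Status → Advanced → API) for the one your node actually uses.
    """
    return f"/ip4/{host}/tcp/{port}"

print(api_multiaddr())  # /ip4/127.0.0.1/tcp/5001
```

You would then run something like ipfs --api=/ip4/127.0.0.1/tcp/5001 id to confirm the CLI is talking to the Desktop node.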
Thank you all for your answers. I believe the problem was installing it via the Snap store (Ubuntu Software Center), because that changes the default installation path. So in effect, the Desktop app and the CLI were installed at separate paths.
I followed the installation on the IPFS site which uses the install script and that put it in the correct path.
So I re-installed only the CLI and use the webUI in place of the desktop. Along with IPFS Companion, desktop is not really needed.
But I still wanted the functionality of having the desktop run the daemon behind the scenes without having a terminal open, so I created the following service unit file to do that:
Paste the following into the file /etc/systemd/system/ipfs.service (replace user with your own username):
[Unit]
Description=IPFS Daemon
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon
User=user
Restart=on-failure
[Install]
WantedBy=default.target
Then I simply ran sudo systemctl start ipfs in a terminal to get the daemon running as a service (sudo systemctl enable ipfs additionally makes it start on boot).
Thanks!
Yes, IPFS should not be installed as a snap; as you discovered, it creates a second path. Installing from the .deb is preferable to the AppImage, since the AppImage limits you to the GUI. Another possible pitfall in the future is the distinction between "daemon" and "cluster": the daemon is a long-running background process, in the Unix sense, serving the node on your machine, while IPFS Cluster coordinates multiple nodes that are physically separated across different machines and locations. Other than that, I'd say you are on the right path!

Dockerize react-native dev environment & connect Android device to WSL2

I want to dockerize my React Native development environment. Currently I have Windows on my laptop, and I don't have the option to change that. I also have WSL2 installed, and I started to build my Dockerfile from this image.
I also want to use my physical device, so I have to somehow connect the container (Docker actually runs inside WSL2) to my Android device. WSL2 doesn't currently support USB devices connected to the host Windows system, so I was thinking about setting up a wireless adb connection on the local network. But since my Android version is below 11, I have to do some initial setup that requires connecting adb over USB, which, for the reasons above, is not possible... I don't want to use a USB server. Any ideas?
Assuming you have the whole dev environment set up and working on Linux/WSL2, you can use socat to relay adb requests from WSL2 to the adb server on Windows. You may have to deal with some firewall issues, but this works for me:
On Windows:
Run adb -a -P 5037 nodaemon server (credit to this redditor)
In another terminal, run adb devices to ensure your device is properly connected over usb.
On linux/wsl2:
Run socat -d -d TCP-LISTEN:5037,reuseaddr,fork TCP:$(cat /etc/resolv.conf | tail -n1 | cut -d " " -f 2):5037 (credit to this gist)
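The command substitution in the socat line just extracts the nameserver IP from /etc/resolv.conf, which on WSL2 points at the Windows host. The same parsing, sketched as a small Python function to make it explicit (the helper is mine, for illustration):

```python
def windows_host_ip(resolv_conf_text: str) -> str:
    """Return the IP of the first 'nameserver' entry in a resolv.conf.

    On WSL2 the auto-generated /etc/resolv.conf names the Windows host,
    so this is the address the socat relay should forward port 5037 to.
    """
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            return parts[1]
    raise ValueError("no nameserver entry found")

sample = "# This file was automatically generated by WSL\nnameserver 172.22.32.1\n"
print(windows_host_ip(sample))  # 172.22.32.1
```

Matching on the nameserver keyword is a bit more robust than tail -n1 | cut, which silently breaks if WSL ever appends another line to the file.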
If you are developing for android with react-native and need to launch the metro bundler, you will need to open a port for the metro bundler on your windows firewall. I will assume you are using port 8081.
Run as administrator in powershell (Windows):
$WSL_CLIENT = bash.exe -c "ip addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'"
iex "netsh interface portproxy add v4tov4 listenport=8081 listenaddress=127.0.0.1 connectport=8081 connectaddress=$WSL_CLIENT"
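Conceptually, netsh interface portproxy is just a TCP relay: anything arriving on the Windows listen address is copied byte-for-byte to the WSL2 address. A minimal single-connection sketch of that idea in Python (illustration only; the real portproxy handles many connections and runs as a system service):

```python
import socket
import threading

def relay_once(target_host: str, target_port: int) -> int:
    """Listen on an ephemeral local port, accept one connection, and pipe
    bytes in both directions to the target, portproxy-style.

    Returns the port it is listening on.
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # 0 = let the OS pick a free port
    srv.listen(1)

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        try:
            while True:
                data = src.recv(4096)
                if not data:  # peer closed its sending side
                    break
                dst.sendall(data)
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    def serve() -> None:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        pipe(upstream, client)

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]
```

In the setup above, Windows listens on 127.0.0.1:8081 and the "target" is the WSL2 eth0 address, so the device talking to the bundler never notices the VM boundary.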
You should now be able to deploy your project to Android.
On linux/wsl2:
npx react-native start
npx react-native run-android
If this doesn't work, the credited sources for steps (1) and (3) provide similar methods for achieving this. If you make a docker image, please share it here!

Can't run Fuchsia components with shell

So I am trying to get started developing on Fuchsia, and I wanted to get the hello world component to run. However, following these steps doesn't work for me. I'm using core.qemu-x64 running on an Ubuntu 20.04 VM in VirtualBox. I have gotten the emulator to run with fx qemu -N, but fx vdl start -N hasn't worked for me.
I run fx serve-updates, but it just outputs "Discovery..." and never changes. Then I try to run fx shell run fuchsia-pkg://fuchsia.com/hello-world-cpp#meta/hello-world-cpp.cmx, but it says "No devices found." It seems like this shouldn't be an issue, because on Linux the device finder should pick the emulator up automatically. Regardless, I tried following the Mac instructions and setting the device with fx set-device 127.0.0.1:22. That just makes the run command say "ssh: connect to host 127.0.0.1 port 22: Connection refused". I also tried to set the device to the nodename output by the fx qemu -N command, which is "fuchsia-####-####-####", but that just makes the run command say no devices are found again.
I have verified that I actually have the hello-world packages with fx list-packages hello-world, which outputs all the hello-world packages as expected.
Is there any way I can get the device to be discoverable by the shell command? Alternatively, can I run components like the hello-world component from the qemu emulator directly?
Please let me know if I can provide any additional information.
I guess I just wasn't patient enough. I assumed the emulator was done setting up because it had stopped giving console output and allowed me to input commands, but it seems I just had to wait longer. After 50 minutes of the fx qemu -N command running, the terminal with fx serve-updates finally picked up the device, and then I was able to execute the hello world component. It would be nice if the documentation at least gave an idea of how long the different commands take before they're usable.

Mount NodeMCU with Micropython filesystem?

I've recently bought a NodeMCU board and flashed Micropython in it.
I've read about the boot.py and main.py scripts, but I can't understand how to access them. I have successfully connected to the Python REPL with the screen command and everything works fine.
Is there a way to mount it as an external drive on Mac OS X? Because I haven't found a way till now.
Thanks in advance!
You can enable the WebREPL and upload files via that.
I've found this package very helpful for uploading over serial:
pip install mpfshell
python -m mp.mpfshell
> open COM3
> put main.py
micropython is cool...
Due to some questions I had about the NodeMCU and running Python on it, I set up a fairly end-to-end documentation project, python2nodemcu, on GitHub.
Viewing, downloading, uploading, and listing files of the MicroPython filesystem has its own section there.
It uses ampy, a Python library for talking to MicroPython boards over their serial connection. To list all files, for example, simply run python3 ampy/cli.py --port /dev/tty.{device-file} --baud 115200 ls.
The mpy-utils package contains a tool called mpy-fuse that lets you mount a MicroPython device on Linux or macOS using FUSE. I found this tool through this video, which describes how to set it up and shows what it looks like in action.

Run window manager with chromium and go through proxy server on RPI

I am using Raspberry Pis as workstations in an office setup. I want the users to have access to the intranet and a couple of websites. I have a proxy set up with a whitelist that works fine.
I want to boot the RPI and show only a web browser and connect through the proxy. I understand I need a window manager for this.
I have been experimenting with Chromium (as it makes it very easy to pass the proxy address as a flag when opening Chromium via the command line). The problem is, Chromium is a demanding browser and struggles with jQuery-heavy pages on the RPi.
I am looking for a browser I can run through a proxy, in a window manager from a start up script that won't be slow as hell!
Does this exist? Or am I going down the wrong path for this?
I don't know if this is really what you want but here is one way to boot directly into a web-browser:
1. Make sure you use a Raspberry with the latest version of raspbian installed and updated:
sudo apt-get update
sudo apt-get upgrade
2. If you don't want to use Midori, install the browser you want. I prefer Chromium (the open-source base of Google Chrome), so that's what I will use here. To install Chromium, run the following command:
sudo apt-get install chromium
3. Configure raspi-config to start in GUI mode:
sudo raspi-config -> Enable boot to desktop/Scratch -> Desktop login as....
4. To start Chromium on boot, open the autostart file, comment out the existing entries, and add the following at the end:
sudo nano /etc/xdg/lxsession/LXDE/autostart
@xset s off
@xset -dpms
@xset s noblank
@chromium
The @ prefix tells LXDE to restart the command if it crashes, and the xset lines keep the screen from blanking. On the last line you can also add switches after @chromium, such as kiosk mode (--kiosk) and your proxy (--proxy-server=host:port). See this link for more switches:
List of commandline switches
Hope this was of some help! Good luck!
