UiAutomator - How to know if an object is present at certain coordinates - android-uiautomator

I want to know whether it is possible, using the UiAutomator APIs, to determine if there is a widget at certain coordinates (for example (100, 100)) on the home screen of an Android device.
Is there any existing method that can help me determine this?

AndroidViewClient/culebra implements ViewClient.findViewsContainingPoint(), which returns the list of Views that contain a specified point.
Using culebra to generate a script and adding this call at the end to find the Views at (544, 567) gives you something like this:
#! /usr/bin/env python
# -*- coding: utf-8 -*-
'''
Copyright (C) 2013-2016 Diego Torres Milano
Created on 2016-07-19 by Culebra v11.5.9
(ASCII-art snake omitted)
#author: Diego Torres Milano
#author: Jennifer E. Swofford (ascii art snake)
'''
import re
import sys
import os
from com.dtmilano.android.viewclient import ViewClient
TAG = 'CULEBRA'
_s = 5
_v = '--verbose' in sys.argv
kwargs1 = {'ignoreversioncheck': False, 'verbose': False, 'ignoresecuredevice': False}
device, serialno = ViewClient.connectToDeviceOrExit(**kwargs1)
kwargs2 = {'forceviewserveruse': False, 'useuiautomatorhelper': False, 'ignoreuiautomatorkilled': True, 'autodump': False, 'startviewserver': True, 'compresseddump': True}
vc = ViewClient(device, serialno, **kwargs2)
vc.dump()
print(vc.findViewsContainingPoint((544, 567)))
If you want to implement it in Java using UiAutomator, you can just copy the implementation; everything can be reproduced in more or less the same way.
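For example, a rough sketch with the UiAutomator 2 API (androidx.test.uiautomator) could iterate over all visible objects and keep those whose bounds contain the point. The helper name findObjectsContainingPoint below is made up for illustration and is not part of the UiAutomator API:

import android.graphics.Rect;
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.By;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.UiObject2;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PointFinder {
    // Hypothetical helper: returns the visible objects whose bounds contain (x, y).
    public static List<UiObject2> findObjectsContainingPoint(int x, int y) {
        UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
        List<UiObject2> result = new ArrayList<>();
        // Match every object of the foreground app (any package), then filter by visible bounds.
        for (UiObject2 obj : device.findObjects(By.pkg(Pattern.compile(".*")))) {
            Rect bounds = obj.getVisibleBounds();
            if (bounds.contains(x, y)) {
                result.add(obj);
            }
        }
        return result;
    }
}

An empty result then means nothing is drawn at that point, which answers the original question for a point such as (100, 100).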

Related

Output training losses over iterations/epochs to file from trainer.py in HuggingFace Transformers

In HuggingFace's Transformers library, only the evaluation-step metrics are written to a file named eval_results_{dataset}.txt in the "output_dir" when running run_glue.py. The eval_results file contains the metrics associated with the dataset, e.g. the accuracy for MNLI and the evaluation loss.
Can a parameter be passed to run_glue.py to generate a training_results_{dataset}.txt file that tracks the training loss, or would I have to build that functionality myself?
My file named run_python_script_glue.bash:
GLUE_DIR=../../huggingface/GLUE_SMALL/
TASK_NAME=MNLI
ID=OT
python3 run_glue.py \
--local_rank -1 \
--seed 42 \
--model_type albert \
--model_name_or_path albert-base-v2 \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--gradient_accumulation_steps 2 \
--learning_rate 3e-5 \
--max_steps -1 \
--warmup_steps 1000 \
--doc_stride 128 \
--num_train_epochs 3.0 \
--save_steps 9999 \
--output_dir ./results/GLUE_SMALL/$TASK_NAME/ALBERT/$ID/ \
--do_lower_case \
--overwrite_output_dir \
--label_noise 0.2 \
--att_kl 0.01 \
--att_se_hid_size 16 \
--att_se_nonlinear relu \
--att_type soft_attention \
--adver_type ot \
--rho 0.5 \
--model_type whai \
--prior_gamma 2.70 \
--three_initial 0.0
In the trainer.py file of the Transformers library, the training-loss variable in the training step is called tr_loss.
tr_loss = self._training_step(model, inputs, optimizer, global_step)
loss_scalar = (tr_loss - logging_loss) / self.args.logging_steps
logs["loss"] = loss_scalar
logging_loss = tr_loss
In the code, the training loss accumulated since the last log is divided by the number of logging steps and put into a logs dictionary. logs["loss"] is later printed to the terminal but not to a file. Is there a way to extend this so it also updates a txt file?
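One option (a minimal sketch, assuming you edit your local copy of trainer.py; the file name training_results.txt is just an example, not something the library produces) is to append the logs dictionary to a text file in output_dir right after logs["loss"] is set:

import json
import os

# Hypothetical addition, placed directly after logs["loss"] = loss_scalar:
# append the current logs dictionary to a text file in output_dir.
train_log_file = os.path.join(self.args.output_dir, "training_results.txt")
with open(train_log_file, "a") as writer:
    writer.write(json.dumps(logs) + "\n")

This reuses the same logs values that are printed to the terminal, so no extra computation is needed.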

Pandoc fails to render markdown with image: "! Missing endcsname inserted."

I am having trouble rendering a .md file with pandoc since I inserted an image. Before that, everything worked quite well.
My configuration
MacBook Air / macOS 12.5
multiple markdown files, one per chapter: 1_einleitung.md / 2_theorie.md / ...
there is a short YAML block that references the bibliography file within the same folder: bib.bib
I use a special Lua filter (pangb4e) for numbered examples and interlinear glossing
a metadata.yml ➞ see the YAML code below
a Makefile ➞ see the makefile code below
the image is stored in images/danes1.png (I even tried the full path instead of the relative one)
2_theorie.md
---
bibliography: [bib.bib]
---
[...]
![Die einfache lineare Progression](images/danes1.png){width=300px #fig:danes1}
metadata.yml
---
author: ...
affiliation: ...
title: ...
date: \today
# number-sections: true
# abstract: This is the abstract.
# Formatting
bibliography: [bib.bib]
csl: linguistics-and-education.csl
lang: de-DE
link-citations: true
linkReferences: true
nameInLink: true
fontsize: 12pt
papersize: a4
indent: true
fontfamily: sourcesanspro
fontfamilyoptions: default
geometry: margin=2.5cm
linestretch: 1.5
header-includes:
- \usepackage{gb4e}
- \usepackage[nottoc]{tocbibind}
figureTitle: "Abbildung"
tableTitle: "Tabelle"
figPrefix:
- "Fig."
- "Figs."
tblPrefix:
- "Tab."
secPrefix:
- Kapitel
loftitle: "# Abbildungsverzeichnis"
lottitle: "# Tabellenverzeichnis"
...
makefile
1_einleitung:
	pandoc 1_einleitung.md -o 1_einleitung.pdf \
	--metadata-file=metadata.yml \
	--number-sections \
	--strip-comments \
	--filter pandoc-crossref \
	--citeproc \
	--lua-filter addons/pangb4e.lua \
2_theorie:
	pandoc 2_theorie.md -o 2_theorie.pdf \
	--metadata-file=metadata.yml \
	--number-sections \
	--strip-comments \
	--filter pandoc-crossref \
	--citeproc \
	--lua-filter addons/pangb4e.lua \
The error output
Currently, I do my writing in 2_theorie.md and run the command make 2_theorie to produce a PDF. I just inserted an image in 2_theorie.md and I get the following error:
san@MacBook-Air Doktorarbeit % make 2_theorie
pandoc 2_theorie.md -o 2_theorie.pdf \
--metadata-file=metadata.yml \
--number-sections \
--strip-comments \
--filter pandoc-crossref \
--citeproc \
--lua-filter addons/pangb4e.lua \
Error producing PDF.
! Missing endcsname inserted.
<to be read again>
let
l.591 }
When I delete the image, the code runs as usual but I need to be able to use images in my work.
If you need more information, please let me know!
Thank you!
I found a solution to my problem myself here: https://tex.stackexchange.com/questions/448314/not-able-to-use-images-if-i-call-gb4e-sty
The problem was the package gb4e (its automath feature makes _ and ^ active characters, which can clash with the LaTeX that pandoc generates). I disabled that feature right after loading the package and pandoc worked just fine.
Edits in the metadata.yml
[...]
header-includes:
- \usepackage{gb4e}
- \noautomath # <-- I added this line to the metadata.yml
[...]

How to set a flag `/LFSM:n[KMG]` with robocopy: `Invalid Parameter #8 : "/LFSM:n"`

I can't find where I'm going wrong with the /LFSM:n[KMG] flag to robocopy. I'm trying to set a free-space floor of 100 MB. The error I'm getting: ERROR : Invalid Parameter #8 : "/LFSM:n"
I tried:
robocopy "C: \ Users \ test" "C: \ Users \ b2b \ Desktop \ test" / E / XC / XO / V / TEE / LFSM: n M: 100
robocopy "C: \ Users \ test" "C: \ Users \ b2b \ Desktop \ test" / E / XC / XO / V / TEE / LFSM: n [M: 100]
robocopy "C: \ Users \ test" "C: \ Users \ b2b \ Desktop \ test" / E / XC / XO / V / TEE / LFSM: n [, M: 100,]
robocopy "C: \ Users \ test" "C: \ Users \ b2b \ Desktop \ test" / E / XC / XO / V / TEE / LFSM: n [, 100,]
As Mathias R. Jessen said, the error was simple: the n in /LFSM:n[KMG] is a placeholder for the number itself, followed by the unit. So, for example:
1 giga: /LFSM:1G
1 mega: /LFSM:1M
1 kilo: /LFSM:1K
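For the 100 MB floor from the question, the whole call should therefore look something like this (paths and remaining flags taken unchanged from the question):
robocopy "C:\Users\test" "C:\Users\b2b\Desktop\test" /E /XC /XO /V /TEE /LFSM:100M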

Spring native generation with multi module spring boot application

I have a Spring Boot multi-module application that I want to convert into a Spring Native exe on Windows. It also needs to support third-party libraries. My Spring Boot application has 5 modules:
Security (user management; generates a JWT and validates it before each request)
Spring Boot main application
Configuration (used for my application configuration)
UI (Angular code)
Reporting (generates reports from the DB)
I am able to create the exe via the Maven AOT plugin (after a lot of initialize-at-run-time and initialize-at-build-time configuration). The exe is generated together with the following files:
target\application.exe (executable)
target\awt.dll (jdk_lib)
target\jaas.dll (jdk_lib)
target\javaaccessbridge.dll (jdk_lib)
target\jawt.dll (jdk_lib)
target\management.dll (jdk_lib)
target\sunmscapi.dll (jdk_lib)
target\w2k_lsa_auth.dll (jdk_lib)
target\java.dll (jdk_lib_shim)
target\jvm.dll (jdk_lib_shim)
target\cpos-admin-portal-application.build_artifacts.txt
target\cpos-admin-portal-application.exe
[Spring Boot startup banner]
:: Spring Boot :: (v2.5.6)
When I start the exe, the application starts up and prints the banner above on the console, but no request ever reaches any controller and no other application-startup logs are printed. I also don't understand why so many different kinds of files have been generated.

SpringBootTest context not loading?

I have an extremely basic SpringBootTest that I am trying to run (the contextLoads() test).
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
@SpringBootTest
@ActiveProfiles("test")
public class ApplicationTest {
    @Test
    void contextLoads() {
    }
}
I have a config file application-test.yml under src/test/resources/ with some basic configuration that I'd like to use to test my application. However, even with the simple test above, the context isn't loading. I'm using IntelliJ to run my tests, and when I run the unit test with Maven it just sits in a running state; it seems to be frozen while loading the context. The only thing I see is
[Spring Boot startup banner]
:: Spring Boot :: (v2.3.1.RELEASE)
and that it says "Running tests..." and never resolves. I'm not exactly sure what's wrong here. I've noticed that when I remove the @ActiveProfiles annotation, the test fails to run and throws an exception, Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Failed to process import candidates for configuration class, where it looks like it's trying to load my actual configuration file and can't, because I haven't set the environment variables that I use in that config file. So how can I get SpringBootTest to pick up application-test.yml and proceed with the test?
