When one version of a set of scripts that apply RRDTool runs, you try more of the same ...
I made a version of the Lua script, which now collects power/energy info; the related file create_pipower1A_graph.sh is a direct derivative of the error-free sh-file described in RRDTool, How to get png-files by means of os-execute-call from lua-script?
The derivative sh-file should produce a graph with the output of three inverters and the parallel consumption.
That sh-file for the graphic output is below.
#!/bin/bash
rrdtool graph /home/pi/pipower1.png \
DEF:Pwr_MAC=/home/pi/pipower1.rrd:Power0430:AVERAGE \
DEF:Pwr_SAJ=/home/pi/pipower1.rrd:Power1530:AVERAGE \
DEF:Pwr_STECA=/home/pi/pipower1.rrd:Power2950:AVERAGE \
DEF:Pwr_Cons=/home/pi/pipower1.rrd:Power_Cons:AVERAGE \
LINE1:Pwr_MAC#ff0000:Output Involar \
LINE1:Pwr_SAJ#0000ff:Output SAJ1.5 \
LINE1:Pwr_STECA#5fd00b:Output STECA \
LINE1:Pwr_Cons#00ffff:Consumption \
COMMENT:"\t\t\t\t\t\t\l" \
COMMENT:"\t\t\t\t\t\t\l" \
GPRINT:Pwr_MAC:LAST:"Output_Involar Latest\: %2.1lf" \
GPRINT:Pwr_MAC:MAX:" Max.\: %2.1lf" \
GPRINT:Pwr_MAC:MIN:" Min.\: %2.1lf" \
COMMENT:"\t\t\t\t\t\t\l" \
GPRINT:Pwr_SAJ:LAST:"Output SAJ1.5k Latest\: %2.1lf" \
GPRINT:Pwr_SAJ:MAX:" Max.\: %2.1lf" \
GPRINT:Pwr_SAJ:MIN:" Min.\: %2.1lf" \
COMMENT:"\t\t\t\t\t\t\l" \
GPRINT:Pwr_STECA:LAST:"Output STECA Latest\: %2.1lf" \
GPRINT:Pwr_STECA:MAX:" Max.\: %2.1lf" \
GPRINT:Pwr_STECA:MIN:" Min.\: %2.1lf" \
COMMENT:"\t\t\t\t\t\t\l" \
GPRINT:Pwr_Cons:LAST:"Consumption Latest\: %2.1lf" \
GPRINT:Pwr_Cons:MAX:" Max.\: %2.1lf" \
GPRINT:Pwr_Cons:MIN:" Min.\: %2.1lf" \
COMMENT:"\t\t\t\t\t\t\l" \
--width 700 --height 400 \
--title="Graph B: Power Production & Consumption for last 24 hour" \
--vertical-label="Power(W)" \
--watermark "`date`"
The Lua script again runs without errors and, as a result, the rrd-file is periodically updated and the graph-generation call is executed, but no graph appears! Tested on two different Raspberries, with no difference in behaviour.
Running the sh-file create_pipower1A_graph.sh from the command line produces the following errors.
pi@raspberrypi:~$ sudo /home/pi/create_pipower1A_graph.sh
ERROR: 'I' is not a valid function name
pi@raspberrypi:~$ ./create_pipower1A_graph.sh
ERROR: 'I' is not a valid function name
Question: I'm puzzled, because nowhere in the sh-file is an I applied as a function command. Explanation? Any hint for remedying this error?
Your problem is here:
LINE1:Pwr_MAC#ff0000:Output Involar \
LINE1:Pwr_SAJ#0000ff:Output SAJ1.5 \
LINE1:Pwr_STECA#5fd00b:Output STECA \
LINE1:Pwr_Cons#00ffff:Consumption \
These lines need to be quoted, as they contain spaces and hash symbols. Unquoted, the shell splits each of them at the space, so Involar, SAJ1.5, STECA and Consumption are passed to rrdtool as separate arguments; rrdtool then tries to parse Involar as a new graph element, which is why it complains that 'I' is not a valid function name.
LINE1:"Pwr_MAC#ff0000:Output Involar" \
LINE1:"Pwr_SAJ#0000ff:Output SAJ1.5" \
LINE1:"Pwr_STECA#5fd00b:Output STECA" \
LINE1:"Pwr_Cons#00ffff:Consumption" \
Related
In HuggingFace's Transformers framework, only the evaluation-step metrics are written to a file named eval_results_{dataset}.txt in the output_dir when running run_glue.py. The eval_results file contains the metrics associated with the dataset, e.g. the accuracy for MNLI and the evaluation loss.
Can a parameter be passed to run_glue.py to generate a training_results_{dataset}.txt file that tracks the training loss? Or would I have to build the functionality myself?
My file named run_python_script_glue.bash:
GLUE_DIR=../../huggingface/GLUE_SMALL/
TASK_NAME=MNLI
ID=OT
python3 run_glue.py \
--local_rank -1 \
--seed 42 \
--model_type albert \
--model_name_or_path albert-base-v2 \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--gradient_accumulation_steps 2 \
--learning_rate 3e-5 \
--max_steps -1 \
--warmup_steps 1000 \
--doc_stride 128 \
--num_train_epochs 3.0 \
--save_steps 9999 \
--output_dir ./results/GLUE_SMALL/$TASK_NAME/ALBERT/$ID/ \
--do_lower_case \
--overwrite_output_dir \
--label_noise 0.2 \
--att_kl 0.01 \
--att_se_hid_size 16 \
--att_se_nonlinear relu \
--att_type soft_attention \
--adver_type ot \
--rho 0.5 \
--model_type whai \
--prior_gamma 2.70 \
--three_initial 0.0
In the trainer.py file in the Transformers library, the training-loss variable during the training step is called tr_loss.
tr_loss = self._training_step(model, inputs, optimizer, global_step)
loss_scalar = (tr_loss - logging_loss) / self.args.logging_steps
logs["loss"] = loss_scalar
logging_loss = tr_loss
In this code, the training loss is first scaled by the logging steps and then passed to a logs dictionary. logs['loss'] is later printed to the terminal, but not written to a file. Is there a way to extend this so that it also updates a txt file?
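One way this could be done (my own sketch, not built-in run_glue.py behaviour) is to subclass Trainer and hook into its logging method, so that every logs dict, which carries the 'loss' entry during training, is also appended to a text file in output_dir. Depending on the transformers version the hook is named log or _log; the override below targets the newer log name, and the file name training_results.txt is my own choice:
import json
import os

from transformers import Trainer

class FileLoggingTrainer(Trainer):
    # Mirror every logged metrics dict into a text file next to the checkpoints.
    def log(self, logs, *args, **kwargs):
        super().log(logs, *args, **kwargs)  # keep the normal console logging
        log_path = os.path.join(self.args.output_dir, "training_results.txt")
        with open(log_path, "a") as f:
            f.write(json.dumps(logs) + "\n")
You would then use FileLoggingTrainer in place of Trainer inside run_glue.py (or your own copy of it).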
Background:
Syntax highlighting for perl files is extremely slow at times for large files (1k+ lines).
I profiled using:
:syntime on
"*** Do some slow actions ***
:syntime report
There were many slowly performing regions, like perlStatementProc.
I significantly improved performance by removing some of the slowly performing syntax regions (there are more):
:syntax clear perlStatementProc
Now I want to use this vimrc with these improvements on a different machine which may not have a specific region defined.
I am seeing this ERROR when opening Vim:
E28: No such highlight group name: perlStatementProc
How can I check if the syntax region name perlStatementProc exists?
I found out about hlexists and implemented this solution in my vimrc:
" Remove some syntax highlighting from large perl files.
function! RemovePerlSyntax()
if line('$') > 1000
let perl_syntaxes = [
\ "perlStatementProc",
\ "perlMatch",
\ "perlStatementPword",
\ "perlQR",
\ "perlQW",
\ "perlQQ",
\ "perlQ",
\ "perlStatementIndirObjWrap",
\ "perlVarPlain",
\ "perlVarPlain",
\ "perlOperator",
\ "perlStatementFiledesc",
\ "perlStatementScalar",
\ "perlStatementInclude",
\ "perlStatementNumeric",
\ "perlStatementSocket",
\ "perlFloat",
\ "perlFormat",
\ "perlStatementMisc",
\ "perlStatementFiles",
\ "perlStatementList",
\ "perlStatementIPC",
\ "perlStatementNetwork",
\ "perlStatementTime",
\ "perlStatementIOfunc",
\ "perlStatementFlow",
\ "perlStatementControl",
\ "perlHereDoc",
\ "perlHereDocStart",
\ "perlVarPlain2",
\ "perlVarBlock",
\ "perlVarBlock2",
\ "perlDATA",
\ "perlControl",
\ "perlStatementHash",
\ "perlStatementVector",
\ "perlIndentedHereDoc",
\ "perlLabel",
\ "perlConditional",
\ "perlRepeat",
\ "perlNumber",
\ "perlStatementRegexp",
\ ]
for perl_syntax in perl_syntaxes
" NEW - Was missing this check before.
if hlexists( perl_syntax )
exec "syntax clear " . perl_syntax
endif
endfor
let b:remove_perl_syntax = 1
else
let b:remove_perl_syntax = 0
endif
endfunction
augroup remove_perl_syntax
autocmd!
autocmd BufNewFile,BufRead,BufReadPost,FileType perl call RemovePerlSyntax()
augroup END
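For a quick interactive check before wiring this into the vimrc, hlexists() can also be called by hand; it prints 1 when a highlight group of the given name is defined and 0 when it is not. Syntax groups register a highlight group name, which is why this guard works for syntax regions such as perlStatementProc:
:echo hlexists('perlStatementProc')
:echo hlexists('perlDoesNotExist')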
I want to run (or resume) the run_mlm.py script with a specific learning rate, but it doesn't seem like setting it in the script arguments does anything.
os.system(
f"python {script} \
--model_type {model} \
--config_name './models/{model}/config.json' \
--train_file './content/{data}/train.txt' \
--validation_file './content/{data}/test.txt' \
--learning_rate 6e-4 \
--weight_decay 0.01 \
--warmup_steps 6000 \
--adam_beta1 0.9 \
--adam_beta2 0.98 \
--adam_epsilon 1e-6 \
--tokenizer_name './tokenizer/{model}' \
--output_dir './{out_dir}' \
--do_train \
--do_eval \
--num_train_epochs 40 \
--overwrite_output_dir {overwrite} \
--ignore_data_skip"
)
After warm-up, the log indicates that the learning rate tops out at 1e-05 (a default from somewhere, I guess, but I'm not sure where), and certainly not the 6e-4 I set:
{'loss': 3.9821, 'learning_rate': 1e-05, 'epoch': 0.09}
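A quick way to rule out an argument-parsing problem (my own sketch, not part of run_mlm.py) is to feed the same flag to HfArgumentParser and see what TrainingArguments ends up with. If this prints 0.0006, the flag is parsed correctly and the 1e-05 must come from somewhere else, for instance an optimizer/scheduler state restored from the checkpoint when resuming:
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
# output_dir is required by TrainingArguments; the path here is only a placeholder.
(training_args,) = parser.parse_args_into_dataclasses(
    args=["--output_dir", "./tmp_check", "--learning_rate", "6e-4"]
)
print(training_args.learning_rate)  # expected: 0.0006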
I found two ways to fine-tune for sequence classification with Transformers:
1. BertForSequenceClassification.from_pretrained (https://huggingface.co/transformers/training.html):
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.train()
2. python run_glue.py (https://github.com/huggingface/transformers/tree/master/examples/text-classification):
export GLUE_DIR=/path/to/glue
export TASK_NAME=MRPC
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
Are there any differences between them?
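To make the comparison concrete, here is a rough sketch (my own, not taken from either link) of what option 1 leaves to you: BertForSequenceClassification.from_pretrained only gives you the model, so batching, the optimizer and the training loop are yours to write, whereas run_glue.py wires all of that up through the Trainer. The toy sentence pair below merely stands in for a real MRPC DataLoader:
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One toy paraphrase pair; a real run would iterate a DataLoader over MRPC.
batch = tokenizer(["He said hello."], ["He greeted them."],
                  return_tensors="pt", padding=True)
batch["labels"] = torch.tensor([1])

model.train()
outputs = model(**batch)
loss = outputs[0]   # classification loss, since labels were supplied
loss.backward()
optimizer.step()
optimizer.zero_grad()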
I've created quite a few RRDTool graphs monitoring various aspects of a Raspberry Pi server.
I'm displaying 36 hours, 10 days, 45 days and 18 months for things like transferred data, CPU temperature, load averages etc.
However, the only "continuous"-looking graphs are the 10-day ones; all the others have gaps in them. I'm recording each data point at a one-minute interval.
There are 28 (29) images, so I'm not going to put them all here; I've put them on imgur for your perusal.
But here's an example of what I'm talking about:
10-days works fine!
45-days, not so much.
Here's my .rrd creation script:
rrdtool create data.rrd \
--start N --step '60' \
'DS:rx:GAUGE:60:0:U' \
'DS:tx:GAUGE:60:0:U' \
'DS:rxc:COUNTER:60:0:U' \
'DS:txc:COUNTER:60:0:U' \
'DS:wrx:GAUGE:60:0:U' \
'DS:wtx:GAUGE:60:0:U' \
'DS:wrxc:COUNTER:60:0:U' \
'DS:wtxc:COUNTER:60:0:U' \
'RRA:AVERAGE:0.5:1:129600' \
'RRA:AVERAGE:0.5:2:64800' \
'RRA:AVERAGE:0.5:60:14400' \
'RRA:AVERAGE:0.5:300:12960' \
'RRA:AVERAGE:0.5:3600:13140'
rrdtool create load.rrd \
--start N \
--step '60' \
'DS:load:GAUGE:60:0:4' \
'RRA:AVERAGE:0.5:1:129600' \
'RRA:AVERAGE:0.5:2:64800' \
'RRA:AVERAGE:0.5:60:14400' \
'RRA:AVERAGE:0.5:300:12960' \
'RRA:AVERAGE:0.5:3600:13140'
rrdtool create mem.rrd \
--start N \
--step '60' \
'DS:mem:GAUGE:60:0:100' \
'RRA:AVERAGE:0.5:1:129600' \
'RRA:AVERAGE:0.5:2:64800' \
'RRA:AVERAGE:0.5:60:14400' \
'RRA:AVERAGE:0.5:300:12960' \
'RRA:AVERAGE:0.5:3600:13140'
rrdtool create pitemp.rrd \
--start N \
--step '60' \
'DS:pitemp:GAUGE:60:U:U' \
'RRA:AVERAGE:0.5:1:129600' \
'RRA:AVERAGE:0.5:2:64800' \
'RRA:AVERAGE:0.5:60:14400' \
'RRA:AVERAGE:0.5:300:12960' \
'RRA:AVERAGE:0.5:3600:13140'
My entire draw script is over 900 lines long, so I'll just include the actual draw code here for one set of graphs ($RRDTOOL is a variable containing the path /usr/bin/rrdtool):
$RRDTOOL graph /var/www/html/images/graphs/data36h.png \
--title 'Odin Absolute Traffic (eth0)' \
--watermark "Graph Drawn `date`" \
--vertical-label 'Bytes' \
--lower-limit '0' \
--rigid \
--alt-autoscale \
--units=si \
--width '640' \
--height '300' \
--full-size-mode \
--start end-36h \
'DEF:rx=/usr/local/bin/system/data.rrd:rx:AVERAGE' \
'CDEF:cleanrx=rx,UN,PREV,rx,IF' \
'DEF:tx=/usr/local/bin/system/data.rrd:tx:AVERAGE' \
'AREA:rx#00CC00FF:Download\:' \
'GPRINT:rx:LAST:\:%8.2lf %s]' \
'STACK:tx#0000FFFF:Upload\:' \
'GPRINT:tx:LAST:\:%8.2lf %s]\n'
$RRDTOOL graph /var/www/html/images/graphs/data10d.png \
--title 'Odin Absolute Traffic (eth0) 10 days' \
--watermark "Graph Drawn `date`" \
--vertical-label 'Bytes' \
--lower-limit '0' \
--rigid \
--alt-autoscale \
--units=si \
--width '640' \
--height '300' \
--full-size-mode \
--start end-10d \
'DEF:rx=/usr/local/bin/system/data.rrd:rx:AVERAGE' \
'DEF:tx=/usr/local/bin/system/data.rrd:tx:AVERAGE' \
'AREA:rx#00CC00FF:Download\:' \
'GPRINT:rx:LAST:\:%8.2lf %s]' \
'STACK:tx#0000FFFF:Upload\:' \
'GPRINT:tx:LAST:\:%8.2lf %s]\n'
$RRDTOOL graph /var/www/html/images/graphs/data45d.png \
--title 'Odin Absolute Traffic (eth0) 45 days' \
--watermark "Graph Drawn `date`" \
--vertical-label 'Bytes' \
--lower-limit '0' \
--rigid \
--alt-autoscale \
--units=si \
--width '640' \
--height '300' \
--full-size-mode \
--start end-45d \
'DEF:rx=/usr/local/bin/system/data.rrd:rx:AVERAGE' \
'DEF:tx=/usr/local/bin/system/data.rrd:tx:AVERAGE' \
'AREA:rx#00CC00FF:Download\:' \
'GPRINT:rx:LAST:\:%8.2lf %s]' \
'STACK:tx#0000FFFF:Upload\:'
$RRDTOOL graph /var/www/html/images/graphs/data18m.png \
--title 'Odin Absolute Traffic (eth0) 18 month' \
--watermark "Graph Drawn `date`" \
--vertical-label 'Bytes' \
--lower-limit '0' \
--rigid \
--alt-autoscale \
--units=si \
--width '640' \
--height '300' \
--full-size-mode \
--start end-1y6m \
'DEF:rx=/usr/local/bin/system/data.rrd:rx:AVERAGE' \
'DEF:tx=/usr/local/bin/system/data.rrd:tx:AVERAGE' \
'AREA:rx#00CC00FF:Download\:' \
'GPRINT:rx:LAST:\:%8.2lf %s]' \
'STACK:tx#0000FFFF:Upload\:'
And yes, I know that the title on one of the graphs is wrong; I've fixed that, but only after saving all the images to imgur.
If you choose a --step of 60 seconds, I would choose an mrhb (minimum required heartbeat) of 120 s rather than also 60 s, because rrdtool will disregard any updates that arrive more than 60 s apart.
rrdtool create data.rrd \
--start N --step '60' \
'DS:rx:GAUGE:120:0:U' \
'DS:tx:GAUGE:120:0:U' \
'DS:rxc:COUNTER:120:0:U' \
'DS:txc:COUNTER:120:0:U' \
'DS:wrx:GAUGE:120:0:U' \
'DS:wtx:GAUGE:120:0:U' \
'DS:wrxc:COUNTER:120:0:U' \
'DS:wtxc:COUNTER:120:0:U' \
'RRA:AVERAGE:0.5:1:129600' \
'RRA:AVERAGE:0.5:2:64800' \
'RRA:AVERAGE:0.5:60:14400' \
'RRA:AVERAGE:0.5:300:12960' \
'RRA:AVERAGE:0.5:3600:13140'
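If you want to keep the history you have already collected instead of recreating the files, the heartbeat of each existing data source can also be raised in place with rrdtool tune, something like the following (repeat for load.rrd, mem.rrd and pitemp.rrd with their own DS names):
rrdtool tune data.rrd \
--heartbeat rx:120 --heartbeat tx:120 \
--heartbeat rxc:120 --heartbeat txc:120 \
--heartbeat wrx:120 --heartbeat wtx:120 \
--heartbeat wrxc:120 --heartbeat wtxc:120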