How to fix 'Too few arguments to function Http\Adapter\Guzzle6\Client::buildClient()' when using 'GrahamCampbell/Laravel-GitHub' - laravel

I am building a web application for the management of an educational establishment with Laravel, and I need to add a collaborative workspace.
The idea I came up with is to work with GitHub repositories; after searching the web I found 'GrahamCampbell/Laravel-GitHub'.
I did the installation as the documentation describes, but when I test it I get the following error:
Too few arguments to function Http\Adapter\Guzzle6\Client::buildClient(), 0 passed in C:\Users\Fehmi\Dropbox\GRASP\vendor\php-http\guzzle6-adapter\src\Client.php on line 31 and exactly 1 expected
use GrahamCampbell\GitHub\Facades\GitHub;

class GitController extends Controller
{
    public function FuncName()
    {
        dd(GitHub::me()->organizations());
    }
}
The result I get is:
Symfony\Component\Debug\Exception\FatalThrowableError (E_RECOVERABLE_ERROR)
Too few arguments to function Http\Adapter\Guzzle6\Client::buildClient(), 0 passed in C:\Users\Fehmi\Dropbox\GRASP\vendor\php-http\guzzle6-adapter\src\Client.php on line 31 and exactly 1 expected

Make sure you are using the latest php-http/guzzle6-adapter version.
Only the one from May 2016 has a line 31 with $client = static::buildClient();, and it had an issue, fixed in PR 32, to allow calling buildClient() with no parameters.
GrahamCampbell/Laravel-GitHub only imposes a guzzle6-adapter version as a range between 1.0 (included) and 2.0 (excluded).
Using ^2.0, or at least ^1.1, might help.
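For instance, a minimal sketch of the Composer steps (assuming the adapter is managed through your project's composer.json; the exact constraint that resolves depends on the rest of your dependency tree):
# check which adapter version is currently installed
composer show php-http/guzzle6-adapter
# then require a release that includes the PR 32 fix
composer require php-http/guzzle6-adapter:^1.1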

Related

Output training losses over iterations/epochs to file from trainer.py in HuggingFace Transformers

In HuggingFace's Transformers framework, only the evaluation-step metrics are output to a file, named eval_results_{dataset}.txt in the output_dir, when running run_glue.py. In the eval_results file there are the metrics associated with the dataset, e.g. accuracy for MNLI, and the evaluation loss.
Can a parameter be passed to run_glue.py to generate a training_results_{dataset}.txt file that tracks the training loss? Or would I have to build the functionality myself?
My file named run_python_script_glue.bash:
GLUE_DIR=../../huggingface/GLUE_SMALL/
TASK_NAME=MNLI
ID=OT
python3 run_glue.py \
--local_rank -1 \
--seed 42 \
--model_type albert \
--model_name_or_path albert-base-v2 \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--gradient_accumulation_steps 2 \
--learning_rate 3e-5 \
--max_steps -1 \
--warmup_steps 1000 \
--doc_stride 128 \
--num_train_epochs 3.0 \
--save_steps 9999 \
--output_dir ./results/GLUE_SMALL/$TASK_NAME/ALBERT/$ID/ \
--do_lower_case \
--overwrite_output_dir \
--label_noise 0.2 \
--att_kl 0.01 \
--att_se_hid_size 16 \
--att_se_nonlinear relu \
--att_type soft_attention \
--adver_type ot \
--rho 0.5 \
--model_type whai \
--prior_gamma 2.70 \
--three_initial 0.0
In the trainer.py file in the Transformers library, the training-loss variable during the training step is called tr_loss.
tr_loss = self._training_step(model, inputs, optimizer, global_step)
loss_scalar = (tr_loss - logging_loss) / self.args.logging_steps
logs["loss"] = loss_scalar
logging_loss = tr_loss
In this code, the training loss is first averaged over the logging steps and then put into a logs dictionary. logs["loss"] is later printed to the terminal but not to a file. Is there a way to extend this so it also appends to a txt file?
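One approach, as a minimal sketch, is to edit your local copy of trainer.py right after the lines quoted above; the training_results.txt name mirrors the eval file and is my own choice, not an existing run_glue.py option:
# at the top of trainer.py (if not already imported there)
import json
import os

# right after logging_loss = tr_loss, append each logged loss to a file
log_path = os.path.join(self.args.output_dir, "training_results.txt")
with open(log_path, "a") as f:
    f.write(json.dumps({"step": global_step, **logs}) + "\n")
Here self.args.output_dir and the logs dictionary come from the surrounding Trainer code, and global_step is the same variable passed to _training_step in the snippet above.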

Transformers fine-tune model warning

python run_clm.py \
--model_name_or_path ctrl \
--train_file df_finetune_train.csv \
--validation_file df_finetune_test.csv \
--do_train \
--do_eval \
--preprocessing_num_workers 72 \
--block_size 256 \
--output_dir ./finetuned
I am trying to fine-tune the ctrl model on my own dataset, where each row represents a sample.
However, I get the warning below.
[WARNING|tokenization_utils_base.py:3213] 2021-03-25 01:32:22,323 >>
Token indices sequence length is longer than the specified maximum
sequence length for this model (934 > 256). Running this sequence
through the model will result in indexing errors
What is the cause of this? Any solutions?
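For context, a quick sketch (assuming the text sits in the first column of the CSV) to check how many rows tokenize to more than block_size tokens:
import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ctrl")
df = pd.read_csv("df_finetune_train.csv")

# count tokens per row and report how many exceed block_size (256)
lengths = df.iloc[:, 0].astype(str).map(lambda t: len(tokenizer(t)["input_ids"]))
print((lengths > 256).sum(), "rows tokenize to more than 256 tokens")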

error with the use of the script $ this-> load-> library ('upload', $ config)

I have this error: Fatal error: Call to a member function load() on string in C:\xampp\htdocs\cloud.ikwook.com\ikwook_system\libraries\Upload.php on line 1136 when I execute this script. What exactly could it be due to?
$ this-> load-> library ('upload', $ config);
I see spaces in between all of this, and I think it is the same in your code as well. It would be really helpful if you pasted the full code. The call should be written without those spaces, and with a comma between the two arguments:
$this->load->library('upload', $config);
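For completeness, a minimal sketch of a typical CodeIgniter upload call around that line (the upload path and allowed types here are assumptions; adapt them to your app):
// configure and load the Upload library, then run the upload
$config['upload_path']   = './uploads/';
$config['allowed_types'] = 'gif|jpg|png';
$this->load->library('upload', $config);

if (!$this->upload->do_upload('userfile')) {
    // show why the upload failed
    echo $this->upload->display_errors();
}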

gcloud endpoints deploy error unresolved type

I'm trying to deploy a service that requires Google protobuf's Timestamp, but I am receiving an error.
gcloud endpoints services deploy api_descriptor.pb api_config.yaml --validate-only
ERROR: (gcloud.endpoints.services.deploy) INVALID_ARGUMENT: Cannot convert to service config.
ERROR: unknown location: Unresolved type '.google.protobuf.Timestamp'
My command to generate api_descriptor.pb:
protoc \
--plugin=protoc-gen-go=${GOBIN}/protoc-gen-go \
-I . proto/service.proto \
--descriptor_set_out=api_descriptor.pb \
--go_out=plugins=grpc:.
The relevant bit from the proto file, which requires google.protobuf.Timestamp:
syntax = "proto3";
package proto;
import "vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto";
message CandleStick {
string ID = 1;
double Open = 2;
double Close = 3;
double High = 4;
double Low = 5;
google.protobuf.Timestamp TimeStamp = 6;
}
I have tried for hours, unsuccessfully, to resolve this issue. Thanks in advance!
In your protoc command line invocation, I think you need to include all the imports in the generated descriptor. You can do this using --include_imports:
protoc \
--plugin=protoc-gen-go=${GOBIN}/protoc-gen-go \
--include_imports \
-I . proto/service.proto \
--descriptor_set_out=api_descriptor.pb \
--go_out=plugins=grpc:.
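Once the descriptor is regenerated with --include_imports, the validate-only deploy from the question can confirm whether the type now resolves:
gcloud endpoints services deploy api_descriptor.pb api_config.yaml --validate-only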

Disable all but few warnings in gcc

In gcc, -w is used to disable all warnings. However, in that case I can't enable specific ones (e.g. -Wreturn-type).
Is it possible to disable all warnings but enable a few specific ones?
As a workaround, is there a way to generate the list of all -Wno-xxx options at once? And will it help? I wouldn't want to do this manually just to find out that it is not equivalent to -w.
You can use the following command to get a WARN_OPTS variable suitable for injecting directly into your Makefile:
gcc --help=warnings | awk '
BEGIN { print "WARN_OPTS = \\" }
/-W[^ ]/ { print $1" \\"}
' | sed 's/^-W/ -Wno-/' >makefile.inject
This gives you output (in makefile.inject) like:
WARN_OPTS = \
-Wno- \
-Wno-abi \
-Wno-address \
-Wno-aggregate-return \
-Wno-aliasing \
-Wno-align-commons \
-Wno-all \
-Wno-ampersand \
-Wno-array-bounds \
-Wno-array-temporaries \
: : :
-Wno-variadic-macros \
-Wno-vector-operation-performance \
-Wno-vla \
-Wno-volatile-register-var \
-Wno-write-strings \
-Wno-zero-as-null-pointer-constant \
Once that's put in your actual Makefile, simply use $(WARN_OPTS) as part of your gcc command.
It may need a small amount of touch-up to:
get rid of invalid options such as -Wno-;
fix certain -Wno-<key>=<value> types; and
remove the final \ character.
but that's minimal effort compared to the generation of the long list, something you can now do rather simply.
When you establish that you want one of those warnings, simply switch from the -Wno-<something> back to -W<something>.
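For example, a minimal sketch of how the generated file might be wired into a Makefile (file and target names here are assumptions):
# pull in the generated WARN_OPTS list
include makefile.inject

CC     = gcc
# a later -W<x> re-enables a warning over an earlier -Wno-<x>
CFLAGS = $(WARN_OPTS) -Wreturn-type

prog: prog.c
	$(CC) $(CFLAGS) -o prog prog.c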
