OS X Laravel 5 composer install zlib_decode(): data error

Other projects run composer install fine, but this Laravel 5 project will not:
Failed to decode response: zlib_decode(): data error
Retrying with degraded mode, check https://getcomposer.org/doc/articles/troubleshooting.md#degraded-mode for more info

This is usually caused by a poor or interrupted internet connection. To solve the problem, switch from WiFi to Ethernet, and make sure you don't have proxy programs running in the background, which sometimes interrupt the download. Worth mentioning: some users may need to enable HTTP checking in the settings section of ESET antivirus. Good luck.
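One way to confirm it really is the connection is to re-run the install with composer's standard verbose flag and watch which download stalls (purely a diagnostic, not a fix):
composer install -vvv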

If you have dev-master in your dependencies, that may cause the problem. Try specifying the exact version of the package that causes the problem.
For instance
instead of
"dimsav/laravel-translatable": "dev-master",
write
"dimsav/laravel-translatable": "~5.1.1",
To find out which package is causing trouble, run composer diagnose. Hope this helps.
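For context, those lines live in the require block of composer.json. Once the version is pinned, a single-package update (a standard composer invocation; the package name is just the one from the example above) avoids re-resolving everything else:
composer update dimsav/laravel-translatable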

Related

Can't Create an Environment with Conda

I have tried many of the suggestions on the web (such as setting ssl_verify to false in the conda config) to no avail, and I'm hoping someone can help. I am trying to create a new conda environment and am getting the following error (I also cannot update my existing environments). This is running on an AWS box, if relevant. I'd greatly appreciate any guidance. The link that is being tried seems fishy: atoti is the name of one of my other environments (but I am trying to create the new environment from base).
(base) PS C:\Users\ncosgrov> conda create --name HREnv
Collecting package metadata (current_repodata.json): done
Solving environment: done
CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://conda.atoti/win-64/current_repodata.json
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https://conda.atoti/win-64'
I have tried conda config --set ssl_verify false as recommended in one article. I have also opened a hole in the firewall for conda.exe.
The one thing I note is that the URL it is trying to hit, https://conda.atoti/win-64/current_repodata.json, seems off. What should it be, and where can I override/fix it?
Found the issue. It was a bad channel that had been added a while back; it was that channel URL that was failing, not the URL for the update, but it was blocking everything else.
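For anyone else hitting this, the channel list can be inspected and the stale entry dropped with standard conda config commands (the URL below is just the one inferred from the error above):
conda config --show channels
conda config --remove channels https://conda.atoti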
Many thanks to the community; the exercise of having to describe the issue to someone else actually caused me to realize what was going on.

odoo.sh ver. 14 WKHTMLTOPDF 0.12.25: Unable to call host printing service (HTTPError). How to circumvent this?

We're using the odoo.sh platform with Odoo 14. The installed wkhtmltopdf is wkhtmltopdf_paas_wrapper 0.12.5; we can't upgrade to 0.12.6 because access is very limited and we can't use 'sudo' to apt-install. To solve this temporarily, we decided to stay on the 0.12.5 version, but it returns "Unable to call host printing service (HTTPError)" even with the right arguments. I've already tried it on both the staging and production servers, but with the same result. The ticket I've sent hasn't been replied to yet. This is so frustrating, I'm going bonkers... please help.
Here's a screenshot (the unrecognized-argument error was intentional so I could display the available args; I've also crossed out the project domain). Thank you.
Apparently, to execute the package properly, the binary name should not be "wkhtmltopdf" but "wkhtmltopdf.bin". I've overridden ir_actions_report.py to change the name it calls.
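The original snippet isn't reproduced here, but a minimal sketch of that kind of override, assuming the binary is resolved by the module-level helper _get_wkhtmltopdf_bin() in odoo/addons/base/models/ir_actions_report.py (true for recent Odoo versions, but verify against your own copy of the source), could look like this:
# Hedged sketch: make report generation call "wkhtmltopdf.bin" instead of "wkhtmltopdf".
from odoo.addons.base.models import ir_actions_report
from odoo.tools.misc import find_in_path

def _get_wkhtmltopdf_bin():
    # Point PDF rendering at the PaaS wrapper's actual executable.
    return find_in_path('wkhtmltopdf.bin')

# Replace the helper so every PDF report uses the new binary name.
ir_actions_report._get_wkhtmltopdf_bin = _get_wkhtmltopdf_bin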
They should've known better; it's a paid platform.

Deleting ChainCode from peer

I made a mistake in my chaincode and installed it on the peers in my network. When I tried to instantiate the chaincode on the channels, I got the error.
Is there a way to debug chaincode before installing it on peers? It seems to only get flagged when you instantiate it.
Is there a way to delete the chaincode from the peers without having to restart the network?
Depends on what you mean by mistake / debug. You should make sure it compiles first. That eliminates all typos, syntax, missing libraries, etc. But there is no way to debug functionality except to install and instantiate.
Technically, no. You can delete all the storage (/var/hyperledger/production/peer, /var/hyperledger/production/orderer, the Kafka/ZooKeeper files, and CouchDB). Not a real big deal, but you do have to restart the network and recreate the channel, join it, install and instantiate the chaincode, etc. But you can install it under a different name; just change the name in your app's connection definition to match. You can also upgrade by changing the version number while keeping the same name.
I just change the name until I get to a fairly settled spot and then do the deletes and restart everything to clean up. A full cleanup (4 peers, 3 orderers, 4 Kafka, 3 ZooKeeper) takes me maybe 30 minutes. Normally, I keep a CLI open with install ccname1 and instantiate ccname1 in the buffer and can easily increment to ccname2, 3, 4, 5. It only takes a few seconds that way.
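As a rough sketch of that rename/upgrade workflow with the pre-2.0 Fabric CLI (the chaincode names, path, channel, and orderer address below are placeholders, not values from this question):
peer chaincode install -n ccname2 -v 1.0 -p github.com/chaincode/mycc
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n ccname2 -v 1.0 -c '{"Args":["init"]}'
# or keep the name, bump the version, and upgrade instead:
peer chaincode install -n ccname1 -v 1.1 -p github.com/chaincode/mycc
peer chaincode upgrade -o orderer.example.com:7050 -C mychannel -n ccname1 -v 1.1 -c '{"Args":["init"]}'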
If the error is that the chaincode is already present on the peers, you can try installing the chaincode with a different version number or a different chaincode name.
You can instantiate chaincode on a channel only once; after that you have to follow the chaincode upgrade procedure.
Note: before installing chaincode, you can check for syntax errors on your own machine by installing Go and compiling the chaincode, as sketched below.
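A quick sketch of that local compile check (the directory is a placeholder for wherever your chaincode lives):
cd ~/go/src/github.com/chaincode/mycc
go build ./...   # catches syntax errors, missing imports, and type errors
go vet ./...     # optional: flags common mistakes that still compile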

Unable to use activator on my Mac - get a timeout exception when I try and make an app from template

So I'm following this tutorial:
https://www.playframework.com/documentation/2.3.x/Installing
Everything seems installed, i.e. all the commands work, but when I try to call:
activator new my-first-app play-scala
I get the following:
Fetching the latest list of templates...
Could not fetch the updated list of templates. Using the local cache.
Check your proxy settings or increase the timeout. For more details see:
http://typesafe.com/activator/docs
OK, application "another-app" is being created using the "play-scala" template.
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://default/user/template-cache#1575831997]] after [10000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
And nothing happens.
I just installed it on a PC in my house on the same network, so I don't think my connection is the issue. I'm not using a proxy either.
Got any ideas? I've been trying to get this working for over a day now.
I'm on OSX Yosemite by the way.
I sometimes have timeouts too, especially while working at the university on a sloppy WLAN.
There are two types of Activator: the usual lightweight one and the offline version. In the latter, all repositories are bundled, so Activator does not need to fetch anything from the internet.
When you go to https://www.playframework.com/download, look for the offline distribution (around 400 MB) and install it like the normal Activator.
If this solves your problem, there was something wrong with Activator trying to fetch something from a repository (you said that you can run the project but get server timeouts).
[EDIT]: You can also set the timeout to 30 seconds and see if this helps
activator -Dactivator.timeout=30s new "project name"
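If a proxy does turn out to be in the way, the standard JVM proxy properties can be passed the same way (host and port here are placeholders, not values from this question):
activator -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 new my-first-app play-scala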

Laravel 3 APC session lifetime is ignored

I have a Laravel 3 project running on Plesk 11.5, CentOS 4 (dedicated). It used to be on an IIS server, but I had to migrate it to Plesk, since the company I'm working for is dumping the IIS server. Everything seemed to be running smoothly until I logged out from my application: at first I got a WSOD (white screen of death), then I enabled PHP error reporting, and this is the error that was displayed:
Fatal error: Cannot override final method Laravel\Database\Eloquent\Model::sync()
This is a very strange error, since I have no method called sync() in any of my classes, and needless to say there was no such error while the project was running on IIS.
I tried several different combinations of session/cache drivers; the only one that seems to work is the APC driver.
When I have the APC driver enabled for cache and session, the above fatal error is not displayed and everything works correctly. The PROBLEM is that I have given the session lifetime a value of 60 (minutes) but it is completely ignored, meaning that the user is logged out after 2 or 3 minutes.
I've been to the Laravel IRC channel with this issue; some people kindly suggested tweaking the APC memory and TTL (time to live) settings, but with no luck, unfortunately :(.
Here are some APC settings from my server configuration:
apc.gc_ttl 3600
apc.shm_size 1024M
apc.shm_strings_buffer 32M
I desperately need help if anyone has any to offer! This is for a live project and I need to find a solution ASAP.
I had the exact same issue and couldn't find a solution. I was going round in circles trying to figure out what on earth was going wrong.
I finally came across this post:
Fatal error: Cannot override final method
You need to make sure that the apc.include_once_override setting is set to 0. In your apc.ini file, set it like so:
apc.include_once_override=0
This error seems to be caused by caching of included classes.
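To confirm the setting actually took effect for the SAPI serving the site (CLI and web configs can differ), one quick check is to grep the loaded configuration, or look at phpinfo() output from a web request:
php -i | grep apc.include_once_override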
I solved the problem after looking around the Plesk panel.
The problem was that I had "Run PHP as FastCGI application" selected.
I switched to "Run PHP as CGI application" and everything works perfectly.
I'm not sure what the exact source of the problem was, only that FastCGI triggered the error.
