OpenWhisk: Unable to obtain the API list

I set up Apache OpenWhisk locally following this guide: http://jamesthom.as/blog/2018/01/19/starting-openwhisk-in-sixty-seconds/. In general it seems to work correctly, but whenever I try to execute any API-related command, e.g.
wsk -i api list
it gives me an error:
Unable to obtain the API list: The requested resource does not exist. (code 153)
Any idea how to fix this?

Unfortunately, this is a temporary issue with the docker-compose setup, and work is in progress to fix it.
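In the meantime, one sanity check worth doing (a hedged sketch; the container name below is a guess and may differ in your compose setup) is to confirm that the API gateway container actually came up, since the wsk api commands are routed through it:
docker ps --filter "name=apigateway"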

Related

How to run a k6 script locally and send data to a remote InfluxDB instance (No Docker)

I'm extremely new to k6 + InfluxDB + Grafana, and I was given a task to execute certain k6 scripts locally but save/pass the data to a remote InfluxDB instance.
As of now I'm stuck, since I'm not sure what configuration I'm missing: every time I run the script pointing at the InfluxDB instance, I get an error.
The command that I'm executing is:
k6 run --out influxdb="https://my_influxdb_url/write" //sampleScript.js
But the original URL that was handed over to me was something like this:
https://my_influxdb_url/write?db=DB_NAME&u=USERNAME&p=PASSWORD
And when I execute the first command mentioned above, I get the following error:
ERRO[000X] Couldn't write stats error="404 page not found\n" output=InfluxDB1
So I've tried creating K6_INFLUXDB_USERNAME and K6_INFLUXDB_PASSWORD as environment variables but I'm still getting the same error.
I'm not sure if I'm missing some .yaml file, like a datasource definition, in which I should fill in those three values (DB_NAME, USERNAME, PASSWORD)?
Or maybe I'm just doing it all wrong and not calling the execution command properly for this scenario.
Another odd thing I noticed is that the error's output field shows InfluxDB1 instead of my actual InfluxDB URL, which I guess might be where my issue lies.
Any kind of tip would be greatly appreciated, since all the documentation I've found so far either runs Grafana+InfluxDB in Docker containers or runs everything locally, which is not my case :(
Thanks a lot in advance as always!!
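One hedged guess, based on how k6's InfluxDB v1 output usually works (placeholders kept from the question above): k6 appends /write on its own, so the URL should carry only the host, with the database name as the path, and the credentials can stay in the environment variables you already created:
export K6_INFLUXDB_USERNAME=USERNAME
export K6_INFLUXDB_PASSWORD=PASSWORD
k6 run --out influxdb=https://my_influxdb_url/DB_NAME sampleScript.js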

odoo.sh ver. 14, wkhtmltopdf 0.12.5: Unable to call host printing service (HTTPError). How to circumvent this?

We're using the odoo.sh platform with Odoo 14. The installed wkhtmltopdf is wkhtmltopdf_paas_wrapper 0.12.5; we can't upgrade to 0.12.6 because access is very limited and we can't use 'sudo' to apt-install. To solve this temporarily, we decided to stay on the 0.12.5 version, but it returns "Unable to call host printing service (HTTPError)" even with the right arguments. I've already tried it on both the staging and production servers, but with the same result. The ticket I've sent hasn't been replied to yet. This is so frustrating, I'm going bonkers... please help.
Here's a screenshot:
P.S. The "unrecognized argument" error was intentional so I could display the available args. I've also crossed out the project domain. Thank you.
Apparently, to execute the package properly, the binary name should not be "wkhtmltopdf" but "wkhtmltopdf.bin". I've overridden ir_actions_report.py to change the binary name it looks up.
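A quick, hedged way to confirm that from the odoo.sh shell before overriding anything (binary names taken from this answer; what is actually installed may differ per instance):
command -v wkhtmltopdf wkhtmltopdf.bin
wkhtmltopdf.bin --version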
They should've known better; it's a paid platform.

Unable to Create Common Data Service DB in Default Environment Power Apps

I am unable to create a new Common Data Service Database in my Power Apps default environment. Please see the error text below.
It looks like you don't have permission to use the Common Data Service in this environment. Switch to a different environment, or create your own.
As I understand it, I should be able to create one after the Microsoft Business Application October 2018 update, as described in the article at the following link.
https://community.dynamics.com/365/b/dynamicscitizendeveloper/archive/2018/10/17/demystifying-dynamics-365-and-powerapps-environments-part-1
Also, when I try to create a Common Data Service app in my default environment, I encounter the following error.
The data did not load correctly. Please try again.
The environment 'Default-57e1485d-1197-4afd-b792-5c423ab508d9' is not linked to a new CDS 2.0 instance. The operation 'ListInstanceMetadata' is forbidden for unlinked environments.
Moreover, I am unable to see the default environment on https://admin.powerapps.com/environments; I can only see the Sandbox environment there.
Any ideas what I am missing here?
Thank you.
Someone else faced a similar issue, and I read in one of the threads that clearing the browser cache and trying again, or trying a different browser, resolved it. Could you try these first-level steps and check if you still have the issues?
Ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Default-Environment-Error-on-CDS/m-p/233582#M1281
Also, for your permission error ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Common-Data-Service-Business-Flows/td-p/142053
I have not validated these findings, but as these answers are from Microsoft and the PowerApps team, I hope they help!

Google Cloud Dataflow jobs failing with error 'Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5...'

SDK: Apache Beam SDK for Go 0.5.0
We are running Apache Beam Go SDK jobs on Google Cloud Dataflow. They had been working fine until recently, when they intermittently stopped working (no changes were made to code or config). The error that occurs is:
Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5 for /var/opt/google/staged/worker: ..., want ; bad MD5 for /var/opt/google/staged/worker: ..., want ;
(Note: the message seems to be missing the expected hash value after each 'want'.)
As best I can guess, there's something wrong with the worker binary: the runner seems to be comparing MD5 hashes of the staged worker, and one of the values is missing. I don't know exactly what it's comparing against, though.
Does anybody know what could be causing this issue?
The fix seems to have been rebuilding the worker_harness_container_image with the latest changes. I had tried this, but I didn't have the latest release when I built it locally. After I pulled the latest from the Beam repo, rebuilt the image (as per the notes here: https://github.com/apache/beam/blob/master/sdks/CONTAINERS.md), and reran the job, it worked again.
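As a hedged illustration (the pipeline name, project, bucket, and image URL are placeholders, not values from the original job), after rebuilding and pushing the container you can pin the job to it explicitly, so the staged worker and the image can't drift apart:
go run ./mypipeline.go \
  --runner=dataflow \
  --project=my-gcp-project \
  --staging_location=gs://my-bucket/staging \
  --worker_harness_container_image=gcr.io/my-gcp-project/beam/go:latest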
I'm seeing the same thing. If I look into the Stackdriver logs, I see this:
Handler for GET /v1.27/images/apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515/json returned error: No such image: apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
However, I can pull the image just fine locally. Any idea why Dataflow cannot pull it?
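For reference, this is the pull that succeeds locally (same image and tag as in the log line above):
docker pull apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515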

H2O Steam deploy can't connect to Prediction Service Builder

I am trying to use H2O Steam (running on localhost) to deploy a model. After importing the model from H2O Flow, clicking the "deploy model" option in the "models" section of the project, filling out the resulting dialog box, and clicking the "deploy" button, an error message is displayed saying Steam can't connect to the Prediction Service Builder.
At first I thought it was because I needed to start the service builder myself, so I started it following the docs here, but I still got the same error. Any suggestions would be appreciated. Thanks :)
Just make sure the Jetty HTTP server is running locally by executing the following in your shell:
java -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war
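If the builder is up, a quick probe of the default port (8080, as mentioned below) should return an HTTP status code rather than a connection error:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080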
Looking here, it seems like I would need to "override" some kind of default restriction on accessing localhost:8080, which is what I assume Steam is trying to do to launch the service builder (I don't know much about networking-related stuff).
$ ./steam serve master --prediction-service-host=localhost --prediction-service-port-range=12345:22345
where the ports are an arbitrary range within (1025, 65535), which I got by word-searching a page of the Steam source code (line 182 as of the date of this posting).
Doing this lets me deploy models through the Steam dialog without any error messages. Again, I don't know much about networking-related stuff, so if anyone has a better way to solve this problem (i.e. allowing access to localhost:8080), please post or comment. Thanks.
