Error logging in for OS X - H2O

This is a continuation of
https://community.h2o.ai/questions/2165/error-logging-in.html
and
https://community.h2o.ai/questions/1665/can-not-login-to-steam-osx.html
from my original post on community.h2o:
"""
Hi there, I am currently trying to set up an API on my local machine using Steam.
The documentation at http://docs.h2o.ai/steam/latest-stable/steam-docs/Installation.html#steam-start-flags is out of date: it says that to start the steam service for the first time and set the password, you should execute ./steam serve master --admin-name=admin --admin-password=admin. The current command is ./steam serve master --superuser-name=admin --superuser-password=admin, but when I try to log in at 0.0.0.0:9000 (which is where it asks me to point the browser) I get: 2017/06/14 13:03:48 User admin does not exist. I tried using single, double and no quotes.
Thank you for your time!
"""

The default username/password is superuser/superuser. The flags are not honored by the steam serve command, which looks like a bug. See the config.toml file in the Steam folder.
This breaks the very first developer experience with h2o/steam and is not encouraging.

./steam serve master --superuser-name=superuser --superuser-password=superuser works fine

Related

Executing gcloud commands in bash

I've spent 3 days beating my head against this before coming here in desperation.
So, long story short: I thought I'd fire up a simple PHP site to give moderators of a gaming group I'm in the ability to start GCP servers on demand. I'm no developer, so I'm looking at this from a systems perspective, trying to find the simplest solution that does the job.
I fired up an Ubuntu 18.04 machine on GCP, set it up with the Google Cloud SDK, authorised it for access to the project, and was able to simply run gcloud commands, which worked fine. I had some issues with the PHP file calling the shell script that runs the same commands, but with some testing I can see it's now calling the shell script no worries: it broadcasts wall "test" to the console every time I click the button on the PHP page.
However, what does not happen is the execution of the gcloud command. If I run the shell script manually it starts up the instance no worries and broadcasts wall; if I click the button it broadcasts, but that's it. I've set the files to have execute rights, and I've even given the user nginx runs as sudo rights; putting sudo sh in front of the command in the PHP file also made no difference. Please find the bash script below:
#!/bin/bash
/usr/lib/google-cloud-sdk/bin/gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
wall "test"
Any help would be greatly appreciated; this, coupled with an automated shutdown, would allow our gaming group to save money by only running the servers people want to play on.
If you want any more detail about the underlying system, please let me know.
So I asked a PHP dev at work about this, and in two seconds flat she pointed out the issue; now I feel stupid. In /etc/passwd the www-data user had /usr/sbin/nologin as its shell, and after I fixed that and ran the script, gcloud wanted permission to write a log file to /var/www. Fixed those and it works fine. I'm not terribly worried about the page or even the server being hacked and destroyed; I can recreate them pretty easily.
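For anyone hitting the same thing, a minimal sketch of those two fixes, assuming the web server user is www-data and gcloud is asking to write under /var/www (adjust users and paths to your setup):
# give www-data a real login shell instead of /usr/sbin/nologin
sudo usermod -s /bin/bash www-data
# let www-data write the log/config files gcloud asks for
sudo chown -R www-data:www-data /var/www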
Thanks for the help though! Sometimes I think I just need to take a step back and get a fresh set of eyes on the problem.
When you launch a command while logged in, you have your own account's access rights to the Google Cloud API, but the PHP account doesn't have those.
Even if you give the www-data user root privileges, that won't fix the problem; it may create some security issues, but nothing more.
If you really want to do this, you should create a service account that only has rights on the Compute Engine instances inside your project, and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at its JSON key; this way your PHP code should have enough rights to do what you are asking of it.
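A rough sketch of that setup with the gcloud CLI, where my-project and instance-starter are placeholder names, might look like this:
# create a service account limited to managing compute instances
gcloud iam service-accounts create instance-starter --project=my-project
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:instance-starter@my-project.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"
# download its JSON key and expose it to the PHP environment
gcloud iam service-accounts keys create /etc/gcloud/key.json \
    --iam-account=instance-starter@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=/etc/gcloud/key.json
# the gcloud CLI itself needs the key activated explicitly
gcloud auth activate-service-account --key-file=/etc/gcloud/key.json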
Note that the issue with this method is that if you are hacked, there is a chance the instance hosting your PHP could be deleted too.
You could also try making a call to a prepared Cloud Function which creates the instance; this way, even if your instance is deleted, the Cloud Function would still be there.

H2O Steam deploy can't connect to Prediction Service Builder

I am trying to use h2o steam (running on localhost) to deploy a model. After importing the model from h2o flow, clicking the "deploy model" option in the "models" section of the project, filling out the resulting dialog box, and clicking the "deploy" button, the following messages are displayed:
At first I thought that it was because maybe I needed to start up the service builder on my own, so I started it up following the docs here, but still got the same error. Any suggestions would be appreciated. Thanks :)
Just make sure the Jetty HTTP server is running locally by executing the following in your shell:
java -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war
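Once that is running, a quick sanity check that the service builder answers (assuming it listens on localhost:8080, as discussed below) is:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080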
Looking here, it seems like I would need to "override" some kind of default restriction on the browser accessing localhost:8080, which is what I assume Steam is trying to do to launch the service builder (I don't know much about networking-related stuff). I got around this by launching Steam with the command:
$ ./steam serve master --prediction-service-host=localhost --prediction-service-port-range=12345:22345
where the ports are some arbitrary range between (1025, 65535), which I got by word-searching a page of the Steam source code (line 182 as of the date of this posting).
Doing this lets me deploy the models through the Steam dialog without any error messages. Again, I don't know much about networking-related stuff, so if anyone has a better way to solve this problem (i.e. allowing access to localhost:8080), please post or comment. Thanks.

How to set up Cloud Code on open source Parse Server using Heroku

I have looked everywhere but cannot seem to figure out how to set up Cloud Code on the open source Parse Server using Heroku.
I see this link, which tells me what to put in the Index.js and Main.js files: Implementing Cloud Code on Open Source Parse Server. However, I cannot seem to find those files, nor can I find the "cloud" folder.
How do I find the cloud folder?
I created the Parse Server on MongoDB using the "Deploy to Heroku" link on this page: https://github.com/ParsePlatform/parse-server-example. After creating my application by filling out all the information, I ran the command heroku git:clone -a yourAppName to clone the application files. However, when I use the command I obtain an empty repository and get the following message in my terminal:
Cloning into 'hyv3-moja'...
warning: You appear to have cloned an empty repository.
So, how/where do I find the cloud folder with main.js? Did I miss any step in creating the Parse Server?
I also tried using the Parse Command Line. However, when I try to use the parse new command, it requires me to log in to a Parse account; since Parse is shutting down, they are not accepting new accounts, and I did not have an account before. Regardless, this seems like a dead end.
So can someone please explain to me how to set up Cloud Code? I want to write code that decrements a column in the database every second so it operates like a timer. Basically, I want my application to create objects in the database that last a certain amount of time chosen by the user; for this example, I'll say 24 hours. From the moment the object is created, I want to count down those 24 hours in the database. That way, when a user of my application clicks to view the object, I read the time remaining from the database and just output that value to the user to show how much time is remaining in the life of the object.
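In case it helps while waiting for a fuller answer, a rough sketch of one way to get a cloud folder into the Heroku app, given that heroku git:clone of a fresh app returns an empty repository: clone the example repo (which already contains cloud/main.js) and push it to the app. yourAppName is a placeholder, as above.
# start from the example project instead of the empty Heroku repository
git clone https://github.com/ParsePlatform/parse-server-example.git
cd parse-server-example
ls cloud            # main.js lives in this folder
# attach the existing Heroku app and push the code to it
heroku git:remote -a yourAppName
git push heroku master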

Using maketorrent in libtorrent examples

So I am trying to build an application that uses libtorrent. However, before I start I would like to make sure that I have compiled the lib correctly and that I have a functioning environment for testing.
I am currently running a VM with opentracker, and I am trying to connect to it using the example client in libtorrent.
First I start by creating a .torrent file using libtorrent (I am currently not sitting in front of a computer with libtorrent available so I might be remembering the exact commands a bit wrong):
maketorrent.exe dummy.txt -t "http://10.XXX.XXX.XXX/announce"
This gives me a .torrent file called a.torrent. Opening the file, everything looks OK: the bencoding is correct and the announce address is there.
Next I try to add it to the example client hoping it starts to seed:
client_test.exe a.torrent
Everything starts up OK, but no tracker is found. Then if I press t to show tracker information I see an error (maybe not the exact phrasing):
Alert: {null} unsupported URL protocol
OK, so maybe something is wrong with how I built libtorrent. So I got the Halite client instead, since that is also supposed to be built upon libtorrent. But there I have the same problem.
So I had a look at the code and found where this error message is generated. The code checks whether I am supplying an address using the HTTP or HTTPS protocol, which I am. So could it be that I am not able to use a bare IP address, or am I doing something wrong?
I found the problem. It was not a problem with the IP address or the torrent itself. Instead it was a problem with caching.
The first time I added the torrent I used http:\XXX.XXX.XXX.XXX instead of http://XXX.XXX.XXX.XXX, which didn't work. However, whatever change I made to the torrent file after that did not stick; the client kept falling back to that original file until I removed the .resume folder.
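For anyone else who hits this, the recovery boils down to the following (Windows command line, matching the .exe tools above; the file names and tracker address are the same placeholders used earlier):
rem forget the cached resume data that still holds the malformed http:\ URL
rmdir /s /q .resume
rem rebuild the torrent with a well-formed announce URL and add it again
maketorrent.exe dummy.txt -t "http://10.XXX.XXX.XXX/announce"
client_test.exe a.torrent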

service doesn't behave the same as command line

I am running on a Windows Server 2003. This is my problem:
I wrote a Perl script to automate copying some files from my server machine to some network drives. I am using xcopy to copy the files. My problem is permissions.
If I run the script from the command line, it works, all the copies are successful.
If I try to run the script from a service, all the copies fail. This service is a program I wrote that takes the script and runs it; in the background, all it does is call the C function 'system' to run the same program that I can run from the command line.
I have tried many variations of this to figure out what is wrong with it but I can't see why the service would not run the same way I run it from the command line.
I set up the service to run as the same user I am using from the command line.
I also tried mapping the network drives as the user that has write permission, but the result is the same: run manually, the script works; run from the service, it doesn't.
Any suggestion is appreciated.
Thanks
Tony
The service may be running as the system and not have access to the network drives. In the Service settings, change the service to run under your account (or an account with the relevant permissions/mappings).
When the service runs, it uses whatever credentials you specify in the Services manager of Windows. The default, LOCAL SERVICE, probably does not have permission to access the resources to be copied.
Create a new user account with the minimum set of permissions needed to perform the copy and configure your service to run under that account.
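If you prefer to do that from the command line rather than through the Services snap-in, a rough sketch (the service name MyCopyService and the account are placeholders) is:
rem run the service under a specific account instead of LOCAL SERVICE
sc config MyCopyService obj= "DOMAIN\copyuser" password= "secret"
rem restart the service so the new credentials take effect
net stop MyCopyService
net start MyCopyService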
I did figure out the issue (I think), and that matches what I later found in another post:
https://serverfault.com/questions/4623/windows-can-i-map-a-network-drive-for-a-service-account
<...Persistent drive mappings are only restored during an interactive login, which the service does not use. I believe the only way to get a service to use a network drive is for that service to map the drive itself or alternatively for it to use a UNC path instead of a mapped drive.>
What I did was map the drive from the service, and that seems to work. It turns out that if I map the drive and save the credentials, I can later access the drive without having to map it again. I don't know why this approach works, though.
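For reference, a rough sketch of what that kind of mapping plus copy can look like (the share, account, and paths are made up; the Perl script can issue these commands via system()), with the UNC-path variant as an alternative:
rem map the drive inside the service's own session, non-persistently
net use Z: \\fileserver\share secret /user:DOMAIN\copyuser /persistent:no
xcopy C:\data\*.* Z:\backup\ /E /Y
net use Z: /delete
rem or skip the mapping entirely and copy straight to the UNC path
xcopy C:\data\*.* \\fileserver\share\backup\ /E /Y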
-Thanks everybody for your help.
Tony

Resources