Docpad Livereload plugin + Cloud9 IDE

Has anyone successfully got this combination working?
It seems to run correctly on the client side, but something about Cloud9's file system prevents changes from being detected when files are saved, so I'm having to restart the app every time.

The problem is that Cloud9 gives you only one port (process.env.PORT), and you are already using it to run the web server, so you don't have an additional port for the LiveReload server.
For CSS you can use Live.js.

Safareli is correct that Cloud9 gives you only one port, but Live.js, the library Safareli linked, does in fact work for refreshing everything, although I don't know how taxing it is on C9, since it polls the headers pretty much all the time.
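For reference, Live.js is usually dropped into a page with a single script tag during development (this assumes the hosted copy at livejs.com; you can also download and serve the file yourself):

<script type="text/javascript" src="https://livejs.com/live.js"></script>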

Related

Error 502 with Laravel when exporting to Excel on Azure Web App Linux

I have a Laravel App running on Azure Web App Linux service, all running nice and smoothly until I reach a feature that exports a query to an XLS for download. Then I receive the ERROR 502.
On my local environment it works normally; I can export the query to XLS with no issues, and it is not a large query, just a few rows.
In the same app, I have a function that exports to XLS just one row at a time and it works fine, so it is only when I go for a larger(ish) query.
Any ideas? I have tried scaling up, restarting the app, apache, changed .ini (via .htaccess to increase execution time).
There is no trace in the logs either, there is something about the container crashing but cannot trace it to this particular error.
OK, managed to solve it... it was not straightforward at all. It has to do with the size of the query: even though it is not big by any means (a couple of thousand rows max), raising the memory limit to 1024M or beyond still ended in a 502 error. I decided to try something different and moved from Laravel Excel to Fast-Excel, which is less featured, but man... it works. Now everything downloads perfectly. In case you are having this issue, give fast-excel a try.
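For anyone making the same switch, here is a minimal sketch of a Fast-Excel export, assuming a hypothetical Order model (the package's download helper streams the collection as an .xlsx response):

use App\Models\Order;                 // hypothetical model
use Rap2hpoutre\FastExcel\FastExcel;

// In a controller action: fetch the rows and stream them as a download.
public function export()
{
    return (new FastExcel(Order::all()))->download('orders.xlsx');
}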

Laravel Sail how to change local dev domain

I have recently decided to try out Laravel Sail instead of my usual setup with Vagrant/Homestead. Everything seems to be beautifully and easily laid out but I cannot seem to find a workaround for changing domain names in the local environment.
I tried serving the application on, say, port 89 with the APP_PORT=89 sail up command, which works fine using localhost:89, however it seems cumbersome to try to remember which port belongs to which project before starting it up.
I am looking for a way to change the default port so that I don't have to specify which port to serve every time I want to sail up. Then I can use an alias like laravel.test for localhost:89, so I don't have to remember ports anymore; I can just type the project names.
I tried changing the /etc/hosts file but found out it doesn't actually help with different ports.
I essentially am trying to access my project by simply typing 'laravel.test' on the browser for example.
Also open for any other suggestions to achieve this.
Thanks
Update:
After all this searching I actually decided to change my workflow to only have one app running at a time, so now I am just using localhost, and my CPU and RAM love it. If you are here moving from Homestead to Docker, ask yourself: do you really need to run all these apps at the same time? If the answer is yes, research on; if not, just use localhost, there is nothing wrong with it.
To change the local name in Sail from the default 'laravel.test' and the port, add the following to your .env file:
APP_SERVICE="yourProject.local"
APP_PORT=89
This will take effect when you build (or rebuild using sail build --no-cache) your Sail container.
And to be able to type 'yourProject.local' in your web browser and have it load your web page, ensure you have your hosts file updated by adding
127.0.0.1 yourProject.local
to your hosts file. This file is located:
Windows 10 – “C:\Windows\System32\drivers\etc\hosts”
Linux – “/etc/hosts”
Mac OS X – “/private/etc/hosts”
You'll need to close all browser instances and reopen them after making changes to the hosts file. With this, try entering the alias both with and without the port number to see which works. Since you already set the port via .env, you may not need to include it in your alias.
If this doesn't work, change the .env to APP_URL=http://yourProject.local:89
OK, another option: in routes/web.php I assume you have a route set up that either returns a view or calls a controller method. You could test whether you can have it
return redirect('http://yourProject.local:89');
This may involve a little playing around with the routes/controller, but it may be worth looking into; see the sketch below.
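A minimal sketch of what that route could look like in routes/web.php (purely illustrative; the URL is the alias configured above):

// routes/web.php
use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    // Redirect to the aliased host and port set up in .env and the hosts file.
    return redirect('http://yourProject.local:89');
});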

H2O Steam deploy can't connect to Prediction Service Builder

I am trying to use h2o steam (running on localhost) to deploy a model. After importing the model from h2o flow, clicking the "deploy model" option in the "models" section of the project, filling out the resulting dialog box, and clicking the "deploy" button, the following messages are displayed:
At first I thought that it was because maybe I needed to start up the service builder on my own, so I started it up following the docs here, but still got the same error. Any suggestions would be appreciated. Thanks :)
Just make sure the Jetty HTTP server is running locally by executing the following in your shell:
java -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war
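If you want to confirm the service builder is actually up before retrying the deploy, a quick check from the shell (assuming Jetty is listening on its default port 8080):

curl -I http://localhost:8080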
Looking here, it seems like I would need to "override" some kind of default restriction on accessing localhost:8080 (which is what I assume steam is trying to do to launch the service builder; I don't know much about networking-related stuff). I got around this by launching steam with the command:
$ ./steam serve master --prediction-service-host=localhost --prediction-service-port-range=12345:22345
where the ports are some arbitrary range within (1025, 65535), which I got by word-searching a page of the steam source code (line 182 as of the date of this posting).
Doing this lets me deploy the models through the steam dialog without any error messages. Again, I don't know much about networking-related stuff, so if anyone has a better way to solve this problem (i.e. allow access to localhost:8080), please post or comment. Thanks.

Unable to use activator on my Mac - get a timeout exception when I try and make an app from template

So I'm following this tutorial:
https://www.playframework.com/documentation/2.3.x/Installing
It all seems installed - i.e. all the commands work - but when I try to call:
activator new my-first-app play-scala
I get the following:
Fetching the latest list of templates...
Could not fetch the updated list of templates. Using the local cache.
Check your proxy settings or increase the timeout. For more details see:
http://typesafe.com/activator/docs
OK, application "another-app" is being created using the "play-scala" template.
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://default/user/template-cache#1575831997]] after [10000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
And nothing happens.
I just installed it on a PC in my house on the same network, so I don't think my connection is the issue. I'm not using a proxy either.
Got any ideas? I've been trying to get this working for over a day now.
I'm on OSX Yosemite by the way.
I sometimes have timeouts too, especially while working in the university on some sloppy WLAN.
There are two types of activator: the usual lightweight one and the offline version. In the latter, all repositories are present, so the activator does not need to fetch anything from the internet.
When you go to https://www.playframework.com/download ... look for the offline distribution (around 400MB) and install it like the normal activator.
If this solves your problem, there was something wrong with the activator trying to get something from a repository (you said that you can run the project but get server timeouts).
[EDIT]: You can also set the timeout to 30 seconds and see if this helps
activator -Dactivator.timeout=30s new "project name"

Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2

After following this simple tutorial http://www.louisaslett.com/RStudio_AMI/ and video guide http://www.louisaslett.com/RStudio_AMI/video_guide.html I have set up an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
OK, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on. The default home directory size for AWS is less than 8-10GB regardless of the size of your instance. As the upload was going to home, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully any other Windows users new to this who come across this problem will see this. It can be solved by uploading to a different drive on the instance. As the Louis Aslett RStudio AMI is based in this 8-10GB space, you will have to set your working directory outside it, i.e. outside the home directory; this is not intuitively apparent from the RStudio Server interface. Whilst this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense to you?
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2Gb limit); use proper unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
Run on your local PC command line (install Cygwin or another unixy compatibility system); it will transfer your huge file to your Amazon server, compressing the data for transfer, and if interrupted it can resume from that point.
For a Windows GUI for something like this, WinSCP is what we used to use in the bad old days before Linux.
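One caveat on resuming: plain rsync restarts an interrupted file from scratch unless partial transfers are kept, so a hedged variant worth trying is the -P flag, which combines --partial and --progress (username and host are placeholders, as above):

rsync -avzP myHugeFile.dat amazonusername@my.amazon.host.ip: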
This could have something to do with your web server. Are you using nginx or Apache as your web server? If so, you can raise the upload size limit in nginx. If you are running nginx on the front end of the web server, I would recommend the following fix in your nginx.conf file.
http {
...
client_max_body_size 100M;
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
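After editing nginx.conf, the new limit only takes effect once nginx reloads its configuration; a minimal sketch (-t validates the file before reloading):

sudo nginx -t && sudo nginx -s reload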
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the CSV file that I needed. Use SQLite code to create the database, then use a function in RStudio to communicate with the local database. In that way, I was able to bring in the CSV file. I can track down the R code that I used if you like.
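As a sketch of that first step, the sqlite3 command-line shell can import a CSV directly (file, database, and table names here are hypothetical; when the table does not already exist, .import creates it using the first row as column names):

sqlite3 mydata.db
.mode csv
.import myHugeFile.csv mytable
.quit

RStudio can then query that database instead of loading the whole file into memory.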
