Keep a Play 2 application private on Heroku

I'm using Heroku to host a Play 2 application for the purpose of testing and playing around. I'd like the application to be "private" at this point which means that every aspect of the application should only be visible to certain users.
Normally, I would just use an htaccess file with a single user/password, but that is of course specific to Apache and doesn't help me in this case.
The protection doesn't have to be "strong". The main aim is to keep away bots and random visitors.
It would be great if I didn't have to "pollute" the code of my play application. I'd prefer to have some external mechanism to achieve that. If there is no other way than to realize it using play itself, the solution should be loosely coupled from the rest of my play application.
How could I achieve that?
Edit: to emphasize it: what I want to achieve won't be part of the final application in production mode, so it neither has to be super secure nor super engineered.

Adreas's example is correct, but it is from Play 2.1; in Play 2.2 the signature of Filter.apply has changed slightly, so this version should work better with 2.2:
import play.api.mvc.{Filter, RequestHeader, Results, SimpleResult}
import scala.concurrent.Future

class BasicAuth extends Filter {
  val username = "stig"
  val password = "secretpassword"

  override def apply(next: RequestHeader => Future[SimpleResult])(request: RequestHeader): Future[SimpleResult] = {
    request.headers.get("Authorization").flatMap { authorization =>
      // The header has the form "Basic base64(user:password)"; decode it and compare
      authorization.split(" ").drop(1).headOption.filter { encoded =>
        new String(org.apache.commons.codec.binary.Base64.decodeBase64(encoded.getBytes)).split(":").toList match {
          case u :: p :: Nil if u == username && password == p => true
          case _ => false
        }
      }.map(_ => next(request))
    }.getOrElse {
      Future.successful(Results.Unauthorized.withHeaders("WWW-Authenticate" -> """Basic realm="MyApp Staging""""))
    }
  }
}
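For quick manual testing it helps to know what the filter expects in the Authorization header: the word Basic followed by the Base64 encoding of user:password. A small sketch of building that value, using the same commons-codec dependency the filter already uses and the hard-coded credentials from above:

// Build the header value the BasicAuth filter checks for
val credentials = "stig:secretpassword"
val encoded = org.apache.commons.codec.binary.Base64.encodeBase64String(credentials.getBytes("UTF-8"))
val headerValue = s"Basic $encoded" // send as: Authorization: <headerValue>

With curl, curl -u stig:secretpassword https://yourapp.herokuapp.com/ produces the same header automatically (the host name here is just a placeholder).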

I don't think Heroku offers a solution for this. I ended up implementing a Basic access authentication filter and using it in the Global object. It looks something like this:
import play.api.mvc.{Filter, RequestHeader, Result, Results}

class HerokuHttpAuth extends Filter {
  object Conf {
    val isStaging = true // read a config instead of hard coding
    val user = "theusername"
    val password = "thepassword"
  }

  override def apply(next: RequestHeader => Result)(request: RequestHeader): Result = {
    if (Conf.isStaging) {
      request.headers.get("Authorization").flatMap { authorization =>
        authorization.split(" ").drop(1).headOption.filter { encoded =>
          new String(org.apache.commons.codec.binary.Base64.decodeBase64(encoded.getBytes)).split(":").toList match {
            case u :: p :: Nil if u == Conf.user && Conf.password == p => true
            case _ => false
          }
        }.map(_ => next(request))
      }.getOrElse {
        Results.Unauthorized.withHeaders("WWW-Authenticate" -> """Basic realm="MyApp Staging"""")
      }
    } else {
      next(request)
    }
  }
}
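To wire the filter in through the Global object mentioned above, a minimal sketch of that wiring, assuming the standard Play 2.1/2.2 WithFilters helper and a Global.scala in the default package:

import play.api.GlobalSettings
import play.api.mvc.WithFilters

// app/Global.scala: apply the basic-auth filter to every request
object Global extends WithFilters(new HerokuHttpAuth) with GlobalSettings

This keeps the check entirely outside the controllers, which matches the loose coupling the question asks for.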

Related

Function to read role, environment file in masterless puppet

I'm working with Puppet 4.5 in a masterless configuration and am trying to create a Puppet function to read a simple config file that assigns roles and environments. I don't have any integration with hiera/facter that I can change.
The file format is:
host1::java_app_node::qa
host2::nodejs_app_node::prod
The Puppet function that will read this file is in a module called homebase. I want the function to return a hash or an array of hashes that splits out the config values. This will let me use them in templates.
In modules/homebase/manifests/init.pp I define:
$role_file = 'puppet://role.lst'
I then created modules/homebase/functions/get_roles.pp as follows:
function homebase::get_roles() {
  $func_name = 'homebase::get_roles()'
  if ! File.exists?($::homebase::role_file) {
    fail("Could not find #{$::homebase::role_file}")
  }
  hosts = { }
  File.open($::homebase::role_file).each |line| {
    parts = line.split(/::/)
    hosts[parts[0]] = { 'host' => parts[0], 'role' => parts[1], 'env' => parts[2] }
  }
  return hosts
}
In other classes, I then want to call:
class myapp {
  $servers = homebase::get_roles().each | k, v | {
    $v['host'] if $v['role'] =~ /myapp/ && $v['env'] == $environment
  }
  file { 'myapp.cfg':
    ensure => file,
    path   => '/opt/myapp/myapp.cfg',
    source => template("/myapp/myapp.cfg.erb"),
    mode   => '0644',
    owner  => myuser,
    group  => myuser,
  }
}
Seems like there would be a better way to do this. Am I completely off base?
There turned out to be a much easier way to do this than trying to create a function to read a non-standard configuration file. Instead, I used a site.pp file to create node {} entries. I also parameterized the myapp class to take inputs based on the node.
So my site.pp looks like:
node 'server1.mydomain', 'server2.mydomain' {
  $myvar = [ 'val1', 'val2' ]
  class { 'myapp':
    values => $myvar,
  }
}
This could probably be improved. One advantage of the non-Puppet configuration file was that I could also use it to control execution in my bash wrapper script; much of the need for that went away, though, with the node definitions.

Secured users created in grails integration test are unauthorized but bootstrapped ones are

I'm using Grails Spring Security Core and the Grails Spring Security REST plugin and I'm just starting to get things set up. I initialized the plugins with a User class and an Authority class (defaults) and went to write an integration test, following a guide I found on the Grails website.
It said to put the following in an integration test:
def "test a user with the role ROLE_BOSS is able to access /api/announcements url"() {
when: 'login with the sherlock'
RestBuilder rest = new RestBuilder()
def resp = rest.post("http://localhost:${serverPort}/api/login") {
accept('application/json')
contentType('application/json')
json {
username = 'sherlock'
password = 'elementary'
}
}
then:
resp.status == 200
resp.json.roles.find { it == 'ROLE_BOSS' }
}
I went ahead and did something similar and it worked with a bootstrapped User, but when I tried to do the exact same test with a User created in the test method itself, it would fail with a 401 HTTP response code.
The code I'm trying to run:
void "check get access token"() {
given:
RestBuilder rest = new RestBuilder()
new User(username: "securitySpecTestUserName", password: "securitySpecTestPassword").save(flush: true)
assert User.count == 2
when:
def resp = rest.post("http://localhost:${serverPort}/api/login") {
accept('application/json')
contentType('application/json')
json {
username = "securitySpecTestUserName"
password = "securitySpecTestPassword"
}
}
then:
resp.status == 200
}
Note that the User.count == 2 assertion passes because there is one User created in Bootstrap.groovy and one created in the test method.
Why does this work and pass with the bootstrapped User without any issues, but not with the one created in the method? Is there a way I can write this integration test so that I can test the /api/login endpoint included in the grails-spring-security-rest plugin in this way?
The User you create in the given section is in a transaction that has not been committed. When you make the REST call, the api/login controller will run in a new transaction that cannot see your uncommitted User.
A few options (there are others)...
Create User in BootStrap.groovy
def init = { servletContext ->
    environments {
        test {
            new User(username: "securitySpecTestUserName", password: "securitySpecTestPassword").save(flush: true)
        }
    }
}
Make REST calls to create the User - assuming you have such functionality
Create User in setup
@Integration
@Rollback
class UserIntSpec extends Specification {

    def setup() {
        new User(username: "securitySpecTestUserName", password: "securitySpecTestPassword").save(flush: true)
    }

    void "check get access token"() {
        given:
        RestBuilder rest = new RestBuilder()

        when:
        def response = rest.post("http://localhost:${serverPort}/api/login") {
            accept('application/json')
            contentType('application/json')
            json {
                username = "securitySpecTestUserName"
                password = "securitySpecTestPassword"
            }
        }

        then:
        response.status == HttpServletResponse.SC_OK

        when:
        def token = response.json.access_token

        then:
        token
    }
}
Note: In Grails >= 3.0, setup() runs in a separate transaction that is not rolled back, so the data it creates is persisted (which is why this solves your problem). Any such data will need to be cleaned up manually.
I suggest you read the Grails documentation on testing: Integration Testing

Akka.Net Clustering Simple Explanation

I'm trying to build a simple cluster using Akka.NET.
The goal is to have a server receive requests and have Akka.NET process them through its cluster.
For testing and learning, I created a simple WCF service that receives a math equation, and I want to send this equation off to be solved.
I have one server project and one client project.
The configuration on the server side is:
<![CDATA[
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
    deployment {
      /math {
        router = consistent-hashing-group #round-robin-pool # routing strategy
        routees.paths = [ "/user/math" ]
        virtual-nodes-factor = 8
        #nr-of-instances = 10 # max number of total routees
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 2
          allow-local-routees = off
          use-role = math
        }
      }
    }
  }
  remote {
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 8081
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of seed node
  }
}
]]>
On the client side the configuration is like this:
<![CDATA[
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote {
    log-remote-lifecycle-events = DEBUG
    log-received-messages = on
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 0
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
  }
  actor.deployment {
    /math {
      router = round-robin-pool # routing strategy
      routees.paths = ["/user/math"]
      nr-of-instances = 10 # max number of total routees
      cluster {
        enabled = on
        allow-local-routees = on
        use-role = math
        max-nr-of-instances-per-node = 10
      }
    }
  }
}
]]>
The cluster connection seems to be made correctly: I see the status [UP] and the association with the role "math" appear on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered; I always end up with deadletters.
I try it like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does someone know a good tutorial, or could someone help me?
Thanks
Some remarks:
First: assuming you're sending work from the server to the client, you are effectively remote-deploying actors on your client.
That means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Try to get that working, and work your way up from there.
This way it's easier to eliminate configuration/network/other issues.
Your usage, actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");, is correct.
A sample of how your round-robin-pool config could look:
deployment {
  /math {
    router = round-robin-pool # routing strategy
    nr-of-instances = 10 # max number of total routees
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
      allow-local-routees = off
      use-role = math
    }
  }
}
Try this out. And let me know if that helps.
Edit:
OK, after looking at your sample, some things I changed:
ActorManager->Process: You're creating a new router actor per request. Don't do that. Create the router actor once and reuse the IActorRef.
Got rid of the minimal cluster size settings in the MathAgentWorker project.
Since you're not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you're using the consistent-hashing-group router, you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you're sending to your router in a ConsistentHashableEnvelope. Check the docs for more information.
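As a rough sketch of that envelope idea (written in Scala against the JVM Akka API, which exposes the same ConsistentHashableEnvelope shape as Akka.NET's Akka.Routing namespace; the MathWorker actor and router setup below are just stand-ins):

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.ConsistentHashingPool
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

class MathWorker extends Actor {
  def receive = { case equation: String => println(s"${self.path.name} solving: $equation") }
}

object EnvelopeDemo extends App {
  val system = ActorSystem("demo")
  val router = system.actorOf(ConsistentHashingPool(5).props(Props[MathWorker]), "math")
  val equation = "2 + 2"
  // Wrap the message so the consistent-hashing router knows which key to route on
  // (here the equation text itself is used as the hash key).
  router ! ConsistentHashableEnvelope(message = equation, hashKey = equation)
}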
Finally the akka deployment sections looked like this:
deployment {
  /math {
    router = round-robin-group # routing strategy
    routees.paths = ["/user/math"]
    cluster {
      enabled = on
      allow-local-routees = off
      use-role = math
    }
  }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
  seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
  roles = ["math"] # roles this member is in
}
And the only thing that the ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));

Using side effects in Akka Streams to implement commands received from a websocket

I want to be able to click a button on a website, have it represent a command, issue that command to my program via a websocket, have my program process that command (which will produce a side effect), and then return the results of that command to the website to be rendered.
The websocket would be responsible for pushing state changes applied by different actors that are within the user's view.
Example: changing AI instructions via the website. This modifies some values, which would get reported back to the website. Other users might change other AI instructions, or the AI might react to current conditions by changing position, requiring the client to update the screen.
I was thinking I could have an actor responsible for updating the client with changed information, and just have the receiving stream update the state with the changes.
Is this the right library to use? Is there a better method to achieve what I want?
You can use akka-streams and akka-http for this just fine. Here's an example using an actor as the handler:
package test

import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash, Status}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{Flow, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.pattern.pipe

import scala.concurrent.{ExecutionContext, Future}
import scala.io.StdIn

object Test extends App {
  implicit val actorSystem = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit def executionContext: ExecutionContext = actorSystem.dispatcher

  val routes =
    path("talk") {
      get {
        val handler = actorSystem.actorOf(Props[Handler])
        val flow = Flow.fromSinkAndSource(
          Flow[Message]
            .filter(_.isText)
            .mapAsync(4) {
              case TextMessage.Strict(text) => Future.successful(text)
              case TextMessage.Streamed(textStream) => textStream.runReduce(_ + _)
            }
            .to(Sink.actorRefWithAck[String](handler, Handler.Started, Handler.Ack, Handler.Completed)),
          Source.queue[String](16, OverflowStrategy.backpressure)
            .map(TextMessage.Strict)
            .mapMaterializedValue { queue =>
              handler ! Handler.OutputQueue(queue)
              queue
            }
        )
        handleWebSocketMessages(flow)
      }
    }

  val bindingFuture = Http().bindAndHandle(routes, "localhost", 8080)

  println("Started the server, press enter to shutdown")
  StdIn.readLine()

  bindingFuture
    .flatMap(_.unbind())
    .onComplete(_ => actorSystem.terminate())
}

object Handler {
  case object Started
  case object Completed
  case object Ack
  case class OutputQueue(queue: SourceQueueWithComplete[String])
}

class Handler extends Actor with Stash {
  import context.dispatcher

  override def receive: Receive = initialReceive

  def initialReceive: Receive = {
    case Handler.Started =>
      println("Client has connected, waiting for queue")
      context.become(waitQueue)
      sender() ! Handler.Ack
    case Handler.OutputQueue(queue) =>
      println("Queue received, waiting for client")
      context.become(waitClient(queue))
  }

  def waitQueue: Receive = {
    case Handler.OutputQueue(queue) =>
      println("Queue received, starting")
      context.become(running(queue))
      unstashAll()
    case _ =>
      stash()
  }

  def waitClient(queue: SourceQueueWithComplete[String]): Receive = {
    case Handler.Started =>
      println("Client has connected, starting")
      context.become(running(queue))
      sender() ! Handler.Ack
      unstashAll()
    case _ =>
      stash()
  }

  case class ResultWithSender(originalSender: ActorRef, result: QueueOfferResult)

  def running(queue: SourceQueueWithComplete[String]): Receive = {
    case s: String =>
      // do whatever you want here with the received message
      println(s"Received text: $s")
      val originalSender = sender()
      queue
        .offer("some response to the client")
        .map(ResultWithSender(originalSender, _))
        .pipeTo(self)
    case ResultWithSender(originalSender, result) =>
      result match {
        case QueueOfferResult.Enqueued => // okay
          originalSender ! Handler.Ack
        case QueueOfferResult.Dropped => // due to the OverflowStrategy.backpressure this should not happen
          println("Could not send the response to the client")
          originalSender ! Handler.Ack
        case QueueOfferResult.Failure(e) =>
          println(s"Could not send the response to the client: $e")
          context.stop(self)
        case QueueOfferResult.QueueClosed =>
          println("Outgoing connection to the client has closed")
          context.stop(self)
      }
    case Handler.Completed =>
      println("Client has disconnected")
      queue.complete()
      context.stop(self)
    case Status.Failure(e) =>
      println(s"Client connection has failed: $e")
      e.printStackTrace()
      queue.fail(new RuntimeException("Upstream has failed", e))
      context.stop(self)
  }
}
There are lots of places here which could be tweaked, but the basic idea remains the same. Alternatively, you could implement the Flow[Message, Message, _] required by the handleWebSocketMessages() method using a GraphStage. Everything used above is also described in detail in the akka-streams documentation.
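If the side effects are simple enough that you don't need the actor in the middle, the Flow[Message, Message, _] that handleWebSocketMessages() expects can also be built from a single stage. A minimal sketch (the "processed:" reply is just a placeholder for whatever your command handling produces):

import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Flow

// Minimal stateless handler: run the side effect inline and answer the client directly.
// Streamed text messages and binary messages are simply ignored in this sketch.
val simpleFlow: Flow[Message, Message, Any] =
  Flow[Message].collect {
    case TextMessage.Strict(command) =>
      // execute the command here, then report back
      TextMessage(s"processed: $command")
  }

val simpleRoute = path("talk") {
  handleWebSocketMessages(simpleFlow)
}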

Can Websocket and normal get route be same in Akka Http?

I have a scenario where I want my websocket route and my GET route paths to be the same. Is that possible in Akka HTTP?
Consider the code below:
def flow: Flow[Message, Message, Any] =
  Flow.fromSinkAndSource(Sink.ignore,
    Source.single(TextMessage.Strict("Hello from websocket")))

val route =
  path("hello") {
    get {
      complete(HttpEntity(ContentTypes.`application/json`, "Simple hello"))
    }
  } ~ path("hello") {
    handleWebSocketMessages(flow)
  }
If I access ws://localhost:8080/hello through a websocket client, I get a websocket error, but a normal curl request returns Simple hello. Is it possible to somehow achieve both actions on the same route?
Something along the lines of the below should do (note the import for UpgradeToWebSocket):
import akka.http.scaladsl.model.ws.UpgradeToWebSocket

val route = path("hello") {
  optionalHeaderValueByType[UpgradeToWebSocket](()) {
    case Some(upgrade) => complete(upgrade.handleMessages(flow))
    case None =>
      get {
        complete("Simple hello")
      }
  }
}
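Another variant that should also work, relying on the fact that handleWebSocketMessages rejects requests that are not websocket upgrades, so the ~ alternative falls through to the plain GET (a sketch reusing the flow from the question, not checked against every Akka HTTP version):

val route = path("hello") {
  // Upgrade requests are completed as a websocket; ordinary requests are rejected
  // by handleWebSocketMessages and fall through to the get route below.
  handleWebSocketMessages(flow) ~
    get {
      complete("Simple hello")
    }
}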
