Empty response when running REST application on Docker with macOS

I've created a simple Scala application (an akka-http REST service) using SBT.
This is the application main class:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory
import scala.concurrent.ExecutionContext

object UserApiWebService extends App {
  override def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("user-api-system")
    implicit val executionContext: ExecutionContext = system.dispatcher
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    val userApiRoute = new UserApiRoute
    val userApiRoutes = new UserApiRoutes(userApiRoute)
    val config = ConfigFactory.load()
    val host = config.getString("http.host")
    val port = config.getInt("http.port")
    println(s"Starting server on $host:$port..")
    Http().bindAndHandle(userApiRoutes.routes, host, port)
    println(s"Server started on $host:$port..")
  }
}
and my application.conf contains http.host = "127.0.0.1" and http.port = 9000
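For reference, that corresponds to an application.conf along these lines (the nested layout is just one equivalent way of writing the same two keys):
http {
  host = "127.0.0.1"
  port = 9000
}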
When I run the application locally with sbt run, everything works fine.
So I decided to try docker and create a container for my Akka application.
I'm using the sbt DockerPlugin, and running sbt docker:publishLocal creates the Docker image on my local machine.
So I've started the docker container using the command
docker run -p 9000:9000 --name user-api user-api:0.1.0-SNAPSHOT
and I can see that the container is running correctly.
If I check whether the REST API is working correctly using the command:
curl -XGET 127.0.0.1:9000/user/34
I get curl: (52) Empty reply from server as a response.
If I try the same command after entering the container with docker exec -it a556b8846340 /bin/bash, I get the correct response.
I'm working on a Mac with macOS 10.12.6 and Docker version 17.09.1-ce.
Can anybody help me?

Since you are running on a Mac, you need the docker-machine IP to talk to the local VM (reference).
So your curl needs to be updated to curl -X GET http://$(docker-machine ip):9000/user/34
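For example (the address below is only the typical docker-machine default; your VM's IP may differ):
$ docker-machine ip
192.168.99.100
$ curl -X GET http://192.168.99.100:9000/user/34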

Related

Cannot connect to Postgres via Jenkins Groovy

Using Scriptler, I cannot connect to PostgreSQL from Jenkins Groovy, neither by executing the psql command nor via JDBC:
psql
command = """
PGPASSWORD=1111\
psql -h xxxx.rds.amazonaws.com\
-U master -d yyy -c "select * from table"
"""
proc = command.execute()
proc.waitFor()
return proc.in.text
I receive the error
Cannot run program "PGPASSWORD=1111": error=2, No such file or directory
jdbc
import groovy.sql.Sql
def dbUrl = "jdbc:postgresql://xxxx.rds.amazonaws.com/yyy"
def dbUser = "master"
def dbPassword = "1111"
def dbDriver = "org.postgresql.jdbcDriver"
def sql = Sql.newInstance(dbUrl, dbUser, dbPassword, dbDriver)
it returns
java.lang.ClassNotFoundException: org.postgresql.jdbcDriver
I have installed the database, PostgreSQL API Plugin, and database-postgresql plugins. Jenkins v2.176.1.
Your first attempt via command.execute() will not work because you are using shell command syntax but you are not actually running a shell.
The second one will not work because you have to tell Groovy where to find the PostgreSQL JDBC library. You may be able to do this with Groovy Grape.
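For example, a rough sketch with Grape (the driver version is an assumption, and whether @Grab works depends on how the Scriptler script is executed; note the class name is org.postgresql.Driver, not org.postgresql.jdbcDriver):
@GrabConfig(systemClassLoader = true)
@Grab('org.postgresql:postgresql:42.2.5')
import groovy.sql.Sql

// Placeholders (host, database, user, password, query) are the ones from the question
def sql = Sql.newInstance(
    "jdbc:postgresql://xxxx.rds.amazonaws.com/yyy",
    "master", "1111", "org.postgresql.Driver")
return sql.rows("select * from table")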
Personally, I would run the psql command through an actual shell.
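A minimal sketch of that idea, assuming bash and psql are available on the node where the script runs: hand the whole command line to a shell so the PGPASSWORD=... prefix and the quoted query are interpreted.
// Run the command through bash so the environment assignment and quoting work
def cmd = ['bash', '-c',
    'PGPASSWORD=1111 psql -h xxxx.rds.amazonaws.com -U master -d yyy -c "select * from table"']
def proc = cmd.execute()
proc.waitFor()
return proc.in.text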

Putting to a local DynamoDB table with Python boto3 times out

I am attempting to programmatically put data into a locally running DynamoDB container by invoking a Python Lambda function.
I'm trying to follow the template provided here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.03.html
I am using the amazon/dynamodb-local you can download here: https://hub.docker.com/r/amazon/dynamodb-local
Using Ubuntu 18.04.2 LTS to run the container and the Lambda server
AWS SAM CLI to run my Lambda API
Docker version 18.09.4
Python 3.6 (you can see this in the sam logs below)
The startup command for the Python Lambda is just "sam local start-api"
First, my Lambda code:
import json
import boto3

def lambda_handler(event, context):
    print("before grabbing dynamodb")
    # dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000", region_name='us-west-2',
    #                           aws_access_key_id='RANDOM', aws_secret_access_key='RANDOM')
    dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    table = dynamodb.Table('ContactRequests')
    try:
        response = table.put_item(
            Item={
                'id': "1234",
                'name': "test user",
                'email': "testEmail@gmail.com"
            }
        )
        print("response: " + str(response))
    except Exception as exc:
        print("put_item failed: " + str(exc))
        raise
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world"
        }),
    }
I know that the ContactRequests table should be available at localhost:8000, because I can run the command below to view my Docker container's DynamoDB tables.
I have tested this with a variety of values in the boto3.resource call, including access keys, region names, and secret keys, with no improvement in the result.
dev@ubuntu:~/Projects$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
    "TableNames": [
        "ContactRequests"
    ]
}
I am also able to successfully hit the localhost:8000/shell endpoint that DynamoDB Local offers.
Unfortunately, when I hit the endpoint that triggers this handler, the function times out and logs the following:
Fetching lambci/lambda:python3.6 Docker container image......
2019-04-09 15:52:08 Mounting /home/dev/Projects/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro inside runtime container
2019-04-09 15:52:12 Function 'HelloWorldFunction' timed out after 3 seconds
2019-04-09 15:52:13 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received:
2019-04-09 15:52:13 127.0.0.1 - - [09/Apr/2019 15:52:13] "GET /hello HTTP/1.1" 502 -
Notice that none of my print statements are reached; if I remove the call to table.put_item, the prints are executed successfully.
I've seen similar questions on Stack Overflow, such as lambda python dynamodb write gets timeout error, which state that the problem is that I am using a local DB. But shouldn't I still be able to write to a local DB with boto3 if I point it at my locally running DynamoDB instance?
Your Docker container running the Lambda function can't reach the DynamoDB at 127.0.0.1. Try instead the name of your DynamoDB local docker container as the host name for the endpoint:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
You can use docker ps to find the <DynamoDB_LOCAL_NAME> or give it a name:
docker run --name dynamodb amazon/dynamodb-local
and then connect:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://dynamodb:8000")
I found the solution to the problem here: connecting AWS SAM Local with dynamodb in docker
The question asker noted that they had seen online that they might need to connect everything to the same Docker network, using:
docker network create local-lambda
So I created this network, then updated my sam command and my docker commands to use it, like so:
docker run --name dynamodb -p 8000:8000 --network=local-lambda amazon/dynamodb-local
sam local start-api --docker-network local-lambda
After that I no longer experienced the timeout issue.
I'm still working on understanding exactly why this was the issue.
To be fair though, it was also important that I use the DynamoDB container name as the host in my boto3 resource call.
So in the end, it was a combination of the solution above and the answer provided by "Reto Aebersold" that produced the final solution:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
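Putting the two answers together, the working setup was roughly (same names as above):
docker network create local-lambda
docker run --name dynamodb -p 8000:8000 --network local-lambda amazon/dynamodb-local
sam local start-api --docker-network local-lambda
with the Lambda code pointing its endpoint_url at http://dynamodb:8000 as shown above.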

Spark app unable to write to Elasticsearch cluster running in Docker

I have an Elasticsearch Docker image listening on 127.0.0.1:9200. I tested it using Sense and Kibana, and it works fine; I am able to index and query documents. But when I try to write to it from a Spark app:
val sparkConf = new SparkConf().setAppName("ES").setMaster("local")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes", "127.0.0.1")
sparkConf.set("es.port", "9200")
sparkConf.set("es.resource", "spark/docs")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
val rdd = sc.parallelize(Seq(numbers, airports))
rdd.saveToEs("spark/docs")
It fails to connect and keeps on retrying:
16/07/11 17:20:07 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Operation timed out
16/07/11 17:20:07 INFO HttpMethodDirector: Retrying request
I tried using the IP address given by docker inspect for the Elasticsearch container; that also does not work. However, when I use a native installation of Elasticsearch, the Spark app runs fine. Any ideas?
Also, set the config es.nodes.wan.only to true, as mentioned in this answer, if you are having issues writing to ES.
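In this setup that would be one extra line on the SparkConf from the question, something like:
sparkConf.set("es.nodes.wan.only", "true")
so that the connector only talks to the declared node (the port published by Docker) instead of the node addresses Elasticsearch advertises internally.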
A couple of things I would check:
The Elasticsearch-Hadoop Spark connector version you are working with. Make sure it is not a beta; there was a bug related to IP resolving that has since been fixed.
Since 9200 is the default port, you can remove the line sparkConf.set("es.port", "9200") and check.
Check that there is no proxy configured in your Spark environment or config files.
I assume you run Elasticsearch and Spark on the same machine. Can you try configuring your machine's IP address instead of 127.0.0.1?
Hope this helps! :)
I had the same problem, and a further issue was that the confs set using sparkConf.set() didn't have any effect. But supplying the confs with the saving function worked, like this:
rdd.saveToEs("spark/docs", Map("es.nodes" -> "127.0.0.1", "es.nodes.wan.only" -> "true"))

Cannot start Meteor and Mongo on Windows 10?

I get an error:
C:\Users\pavle\AppData\Local\.meteor\packages\meteor-tool\1.3.4_1\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\fibers\future.js:280
Error: URL must be in the format mongodb://user:pass@host:port/dbname
I do not understand why this is happening.
I tried the following:
set MONGO_URL="mongodb://127.0.0.1:7777/mongo" && meteor --port 8031 --settings local-settings.json
set MONGO_URL="mongodb://root:password#127.0.0.1:7777/mongo" && meteor --port 8031 --settings local-settings.json
Set in the settings file:
"env": {
  "MONGO_URL": "mongodb://root:password@localhost:7777/zenmarket"
}
"env": {
  "MONGO_URL": "mongodb://@localhost:7777/zenmarket"
}
mongod port 7777
Windows 10 64
Meteor v1.3.4.1
Mongo shell v3.2.7
I'm desperate
I spent a few hours on it to understand the reason.
As always, it is simple: the URL must be set without the quotes:
set MONGO_URL=mongodb://root:password@127.0.0.1:7777/mongo
To check it, simply run
echo %MONGO_URL%
Hopefully this will help someone

FunkLoad monitor doesn't show any graphs in report

I set everything up according to the tutorial here: http://funkload.nuxeo.org/monitoring.html, started the monitor server, ran a bench test, and built the report. But the report contains no monitoring graphs... Any idea? I am using the credential server as well, and that was and still is working correctly... it's just that after I added the monitoring configuration, nothing seems to change.
monitor.conf
[server]
host = localhost
port = 8008
interval = .5
interface = eth0
[client]
host = localhost
port = 8008
my_test.conf:
[main]
title= some title
description= some descr
url=http://localhost:8000
... some other not important lines here
[monitor]
hosts=localhost
[localhost]
port=8008
description=The benching machine
Use
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload
instead of just
pip install funkload
It looks like pip has an old, broken version of FunkLoad.
