Phoenix framework - invalid argument at new Socket - Windows

I'm not able to run a new Phoenix app. This is the error I'm getting; I'm not sure what the reason could be.
I tried changing the port, which didn't change the behaviour. Also, it seems like I'm able to run Node correctly.
Compiled web/views/error_view.ex
Compiled web/controllers/page_controller.ex
Compiled web/views/page_view.ex
Compiled web/views/layout_view.ex
Compiled lib/test_phoenix/endpoint.ex
Generated test_phoenix app
[info] Running TestPhoenix.Endpoint with Cowboy on port 4000 (http)
net.js:156
this._handle.open(options.fd);
^
Error: EINVAL, invalid argument
at new Socket (net.js:156:18)
at process.stdin (node.js:664:19)
at bindWatcherEvents (c:\Desarrollo\Phoenix\test_phoenix\node_modules\brunch\lib\watch.js:597:12)
at c:\******\Phoenix\test_phoenix\node_modules\brunch\lib\watch.js:667:9
at c:\******\Phoenix\test_phoenix\node_modules\brunch\lib\watch.js:557:16
at c:\******\Phoenix\test_phoenix\node_modules\brunch\lib\watch.js:188:12
at c:\******\Phoenix\test_phoenix\node_modules\brunch\node_modules\async-each\index.js:24:44
at c:\******\Phoenix\test_phoenix\node_modules\brunch\lib\watch.js:175:14
at Object.cb [as oncomplete] (fs.js:168:19)

I was just running into a similar problem and I updated Node to the latest version as Jose Valim suggested. That fixed the problem.
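If you want to confirm which version you're on before and after the upgrade, a quick check (assuming node is on your PATH):
node --version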

Related

Pybullet error: physics server version mismatch (expected 202010061 got 201902120)
I am trying to connect pybullet with VR by running the example VR code shipped with pybullet, vr_kuka_setup_vrSyncPlugin.py. I first ran the "build_visual_studio_vr_pybullet_double.bat" script and App_PhysicsServer_SharedMemory_VR*.exe, and then ran vr_kuka_setup_vrSyncPlugin.py, but I get this error message:
b3Error[examples/SharedMemory/PhysicsClientSharedMemory.cpp,359]:
Error: physics server version mismatch (expected 202010061 got 201902120)
I am pretty sure there is only one version of pybullet on my computer (3.0.9), and the Python file and App_PhysicsServer_SharedMemory_VR*.exe come from the same repo, so I don't know what is causing this error. Is it a version error or something else?
I don't think pybullet connects to the shared-memory server successfully; I think it connects to the GUI instead.
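For anyone debugging the same symptom, here is a minimal sketch (using the standard pybullet Python API; the error message is my own) that checks whether a shared-memory physics server is actually reachable:

import pybullet as p

# connect() returns a non-negative client id on success, -1 on failure.
cid = p.connect(p.SHARED_MEMORY)
if cid < 0:
    raise RuntimeError("No shared-memory physics server found; is the VR server exe running?")

# Confirm how we are connected; connectionMethod should equal p.SHARED_MEMORY.
print(p.getConnectionInfo(cid))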

Running my Revel application on Windows 10 fails

I had a problem when running my Revel app on Windows.
The app is created fine, but it doesn't run when I try; I only get this. Any idea?
C:\Desarrollo\Web\webpro>revel run -a webpro
Revel executing: run a Revel application
WARN 05:53:33 harness.go:175: No http.addr specified in the app.conf listening on localhost interface only. This will not allow external access to your application
Changed detected, recompiling
Parsing packages, (may require download if not cached)... Completed
ERROR 05:53:38 build.go:406: Build errors errors="C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2: no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:\n\tgo get github.com/bradfitz/gomemcache/memcache\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\redis.go:10:2: no required module provides package github.com/garyburd/redigo/redis; to add it:\n\tgo get github.com/garyburd/redigo/redis\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\inmemory.go:12:2: no required module provides package github.com/patrickmn/go-cache; to add it:\n\tgo get github.com/patrickmn/go-cache\n"
C:\Users\Mario\go\src\webpro\C:\Users\Mario\go\pkg\mod\github.com\revel\revel#v1.0.0\cache\memcached.go:11
WARN 05:53:38 build.go:420: Could not find in GO path file=C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11
ERROR 05:53:38 harness.go:239: Build detected an error error="Go Compilation Error (in C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2): no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:"
Error compiling code, to view error details see proxy running on http://:9000
Time to recompile 5.3684655s
I'm new to this as well, but this worked for me:
Check your IPv4 address with the ipconfig command.
Open webpro/conf/app.conf and paste the IPv4 address into the http.addr parameter.
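For illustration, assuming ipconfig reported 192.168.1.10 (a hypothetical address; use your own), the relevant lines in webpro/conf/app.conf would look something like:

http.addr = 192.168.1.10
http.port = 9000

(9000 matches the proxy port shown in the log above.)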

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4, I am not able to run it. When I build and start the containers and check the logs, Caddy is not working; I get an error like this. Any idea? Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported this bug, and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The fix was made at the Mercure level rather than in the API Platform source code itself, so you can keep your current version.
You just have to run docker-compose up and it will work.
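To verify, checking the Caddy service logs (assuming the service is named caddy, as the caddy_1 container name above suggests) should show the panic is gone:

docker-compose logs caddy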

Installing Meteor at Koding

I'm trying to install Meteor at Koding and I get an error on the last step, meteor -p port. This is what I get:
app/packages/mongo-livedata/mongo_driver.js:33
throw err;
^
Error: failed to connect to [127.0.0.1:1994]
at Server.connect.connectionPool.on.server._serverState (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/server.js:482:73)
at EventEmitter.emit (events.js:126:20)
at connection.on._self._poolState (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:96:15)
at EventEmitter.emit (events.js:99:17)
at Socket.errorHandler (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/connection.js:411:10)
at Socket.EventEmitter.emit (events.js:96:17)
at Socket._destroy.self.errorEmitted (net.js:329:14)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
Exited with code: 1
Your application is crashing. Waiting for file change.
There is a section about Meteor in the Koding wiki.
Also, please note that you should select a port in the range 1024 to 10000. Some ports may be in use, so you might have to try a few different ones.
Not sure if you've gotten past this, but I had a similar issue. I ended up having to create an environment variable named MONGO_URL:
export MONGO_URL=mongodb://user:pass@host:port/dbname
Of course, replace user, pass, host, port and dbname with what Koding assigned to you. Not the most secure approach, so I'll look for a more elegant solution, but for the moment it works.
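A hypothetical example with placeholder values (27017 is MongoDB's usual port; 3000 is the app port passed to meteor):

export MONGO_URL=mongodb://alice:s3cret@127.0.0.1:27017/dbname
meteor -p 3000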

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
but it gives me this error:
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives this error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your hadoop install location, and arch is the machine’s architecture (i386-32 or amd64-64).
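If it isn't there, a generic search over the install tree will find any copy that did ship (plain find, nothing connector-specific):

find /path/to/hadoop -name 'libhdfs.so*'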
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
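For example, if Hadoop lives in /usr/local/hadoop (a hypothetical location, and assuming the file uses shell-style assignments), the line would read:

HADOOP_LOCATION=/usr/local/hadoop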
Good luck.
