I'm trying to make my SUMO simulation work on OMNeT++, but after modifying the .launchd.xml file I couldn't get permission to run SUMO.
I'm currently using Ubuntu 18.04.2 LTS, sumo 0.32.0, Veins 4.7.1 and OMNeT++ 5.3.
I've searched for ways to make a SUMO simulation other than the erlangen example work. What I found so far was to modify the erlangen.launchd.xml file so it runs my simulation, and to run
sudo python sumo-launchd.py -vv -c /home/gustavo/Downloads/sumo-0.32.0/bin/
However, every time I tried to run it, a message appeared saying it lost the connection. So I tried creating a poly.xml file with nothing in it (because I didn't want any buildings or anything like that in the simulation), and it didn't work. I looked at the Linux terminal and saw a message saying there was no .sumo.cfg file in the sumo-0.32.0/bin folder (I don't understand why there should be; the erlangen example's .sumo.cfg isn't in that folder either), so I copied all the files for the simulation (.net, .rou, .sumo.cfg and .poly) into the folder and tried again. That problem was solved, but another error showed up in the terminal:
Could not start SUMO (/home/gustavo/Downloads/sumo-0.32.0/bin/ -c simulation.sumo.cfg): [Errno 13] Permission denied
I tried running the command with sudo, but it didn't solve it. Does anyone know how to make it work, or another way of making my own SUMO simulation work in OMNeT++?
<?xml version="1.0"?>
<!-- debug config -->
<launch>
<copy file="simulation.net.xml" />
<copy file="simulation.rou.xml" />
<copy file="simulation.poly.xml" />
<copy file="simulation.sumo.cfg" type="config" />
</launch>
I hoped to get my SUMO simulation working on OMNeT++; none of the other websites I looked at mentioned this problem.
The -c parameter of sumo-launchd.py expects the full path to the executable, so you need to include sumo at the end:
sudo python sumo-launchd.py -vv -c /home/gustavo/Downloads/sumo-0.32.0/bin/sumo
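A quick sanity check that the path points at an executable file rather than at the bin directory (using the paths from the question); the earlier Errno 13 came from trying to execute the directory itself:
ls -l /home/gustavo/Downloads/sumo-0.32.0/bin/sumo    # should list a file with execute permission
/home/gustavo/Downloads/sumo-0.32.0/bin/sumo --version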
When I try to run any kubectl command, including kubectl version, I get a pop-up saying "This app can't run on your PC. To find a version for your PC, check with the software publisher". When this is closed, the terminal shows "access denied".
The weird thing is, when I run the kubectl version command in the directory where I downloaded kubectl.exe, it works fine.
I have even added this path to my PATH variable.
Thank you for the answer, @rally.
Apparently, on my machine, it was an issue of missing administrative rights during installation. My workplace's IT added the permission and it worked for me.
Adding this answer here so that if anyone else comes across this problem they can try this solution as well.
Not knowing what exactly you downloaded, I would suggest deleting everything in the folder and following the instructions for installing kubectl on Windows from here:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Note: downloading the .exe is not enough. You also need a kubeconfig file named "config", which contains the configuration needed to access your cluster.
kubectl looks for this file in a hidden folder under your user profile directory: C:\Users\<me>\.kube.
Just so you can try it out, I would suggest activating Kubernetes in your Docker Desktop installation. I guess you have this installed; if not, install it from the Docker site: https://www.docker.com/products/docker-desktop/
Activating Kubernetes inside Docker Desktop will also install kubectl and save the config in the .kube folder.
After the installation has finished, in a new terminal:
kubectl get node
You should see the single node of the docker-desktop cluster.
Now if you want to access another cluster, you need the kubeconfig file for that cluster. If you have it, just rename the config in the .kube folder (so you don't lose it) and put the other config in its place.
If the new config file is correct you should be able to access that cluster.
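For example, in a Windows terminal (my-other-cluster.yaml is just a placeholder for the kubeconfig file you were given):
ren %USERPROFILE%\.kube\config config.docker-desktop
copy my-other-cluster.yaml %USERPROFILE%\.kube\config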
The config file can be structured to hold more than one cluster configuration, and you can switch between them using a so-called context.
Here you can get the information how to do that, according to your needs:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
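For example, with kubectl's built-in config commands:
kubectl config get-contexts
kubectl config use-context docker-desktop
kubectl config current-context
get-contexts lists every context in the merged config, use-context switches the active cluster, and current-context shows which one is selected.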
Hope this helps you get started with Kubernetes.
I am failing at integrating pio into the CLion IDE. I added the directory to the PATH variable in my ~/.profile file (in three different ways):
export PATH=$PATH:~/.platformio/penv/bin
export PATH=$PATH:home/jonas/.platformio/penv/bin
export PATH="~/.platformio/penv/bin":$PATH
and I can run pio --version (and also the equivalent platformio) without sudo privileges.
But when I try to create a new pio project in CLion, I always get:
Cannot run program "./home/jonas/.platformio/penv/bin" (in directory "/tmp"): error=13, Permission denied
OK, I got it working. The problem was that the path to pio was not complete: /home/jonas/.platformio/penv/bin points to the folder, but not to the file to run. The full path is:
/home/jonas/.platformio/penv/bin/pio (or platformio).
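You can verify the full path quickly from a shell (paths as in the question):
which pio
/home/jonas/.platformio/penv/bin/pio --version
If which prints nothing, the ~/.profile changes have not yet been picked up by that shell.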
However, in the Create Project window you don't even get a chance to correct the path, and it was created automatically during installation. This is a bit confusing.
I have a ROS (Kinetic) environment set up on a Raspberry Pi 3 and am trying to get ROS to execute on startup via a simple bash script which calls roslaunch. ROS works in the user domain but fails when called from root.
Here is my launch_ros.sh script:
#!/bin/bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
roslaunch my_pkg pkg_launch.launch
When I run sudo /home/pi/Desktop/ros_launch.sh, roscore crashes with
ERROR: cannot launch node of type [rosout/rosout]: can't locate node [rosout] in package [rosout]
failed to start core service [/rosout]
The traceback for the exception was written to the log file
But if I comment out
source /home/pi/ros_catkin_ws/devel/setup.bash
and execute /home/pi/Desktop/ros_launch.sh, ROS works fine.
Also worth noting: if I leave the above source line uncommented when running in the user domain, I get the same error as I do as root. I think this might be pointing me to the solution, but I am still very new to ROS.
Has anyone come across this issue and found a solution?
To run a node as root after switching to a root shell (with commands like sudo -i), you can source the same setup files that your normal user's .bashrc sources and use them inside the root shell.
Try the following code:
#!/bin/bash
# Source the base ROS installation so roslaunch, roscd, etc. are available
source /opt/ros/kinetic/setup.bash
# Source the catkin workspace so your own packages can be found
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
roslaunch my_pkg pkg_launch.launch
You need to source your workspace's devel/setup.bash to be able to find your own package, but you also need to source the ROS installation's setup.bash to be able to use roscd, roslaunch, and so on. In the code above I added:
source /opt/ros/kinetic/setup.bash
to source ROS and make those tools available.
PS: If it's still not working, you should try a short delay before running roslaunch.
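A minimal sketch of that variant, with an arbitrary five-second delay added to the script above:
#!/bin/bash
source /opt/ros/kinetic/setup.bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
sleep 5   # arbitrary delay; gives networking and roscore time to come up
roslaunch my_pkg pkg_launch.launch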
I was accidentally in a conda environment (base only) and it was messing up big-time. Try disabling any Python virtual environments.
It really worked, and it inspired me to report that rosnode was not available as a program when I wanted to call ROS scripts from my own applications. Running source .bashrc directly on the Raspberry Pi would refresh an interactive terminal, but there was no way for my program to pick that environment up. The solution was to place the required ROS environment setup in a separate script, named e.g. init_env.sh, and then call source init_env.sh before any other ROS scripts are executed.
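A minimal sketch of such a script, assuming the Kinetic paths used earlier in this thread:
# init_env.sh: collect the required ROS environment setup in one place
source /opt/ros/kinetic/setup.bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
Any application can then run source init_env.sh before calling other ROS scripts.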
I'm trying to use wkhtmltopdf on GCF for PDF generation.
When my function tries to spawn the child process I get the following error:
Error: ./services/wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directory
The problem is clearly due to the fact that the wkhtmltopdf binary depends on external shared libraries which are not installed in the GCF environment.
Is there a way to solve this issue, or should I give up and use other solutions (AWS Lambda or GAE)?
Thank you in advance
Indeed, I’ve found a way to solve this issue by copying all required libraries in the same folder (/bin for me) containing wkhtmltopdf binary. In order to let the binary file use uploaded libraries I added the following lines to wkhtmltopdf.js:
const path = require('path'); // if not already present at the top of wkhtmltopdf.js
// Point the dynamic loader at the uploaded libraries before invoking the binary
wkhtmltopdf.command = 'LD_LIBRARY_PATH=' + path.resolve(__dirname, 'bin') + ' ./bin/wkhtmltopdf';
wkhtmltopdf.shell = '/bin/bash';
module.exports = wkhtmltopdf;
Everything worked fine for a while. Then all of a sudden I started receiving many connection errors or timeouts from GCF, but I think it was not related to my implementation but rather to Google.
I've ended up setting up a dedicated server.
I have managed to get it working. There are 2 things that need to be dealt with, as wkhtmltopdf won't work if:
1. libXrender.so.1 can't be loaded
2. you are using stdout to collect the resulting PDF; wkhtmltopdf has to write the result into a file
First you need to obtain the correct version of libXrender.
I found out which Docker image Cloud Functions uses as the base for Node.js functions, ran it locally, installed libxrender, and copied the library into my function's directory:
docker run -it --rm=true -v /tmp/d:/tmp/d gcr.io/google-appengine/nodejs bash
Then, inside the running container:
apt update
apt install libxrender1
cp /usr/lib/x86_64-linux-gnu/libXrender.so.1 /tmp/d
I put the library into my function's project directory, under a lib subdirectory. In my function's source file, I then set up LD_LIBRARY_PATH to include the /user_code/lib directory (/user_code is the directory where your function will eventually end up on Google's side):
process.env['LD_LIBRARY_PATH'] = '/user_code/lib'
This is enough for wkhtmltopdf to be able to execute. It will still fail, though, as it won't be able to write to stdout, and the function will eventually time out and be killed (as Matteo experienced). I think this is because Google runs the containers without a tty (just speculation); I can run my code in their container if I run it with the docker run -it flags. To solve this, I invoke wkhtmltopdf so that it writes the output into a file under /tmp (this is an in-memory tmpfs). I then read the file back and send it as my response body. Note that the tmpfs might be reused between function calls, so you need to use a unique file every time.
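A rough sketch of this approach, assuming the patched wrapper module from the earlier answer and an HTTP-triggered function (the handler name, request shape, and file naming are illustrative, not part of the original answer):
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const wkhtmltopdf = require('./wkhtmltopdf'); // the patched wrapper from the earlier answer

exports.generatePdf = (req, res) => {
  // Unique file name, since the /tmp tmpfs may be shared between invocations
  const outFile = path.join('/tmp', crypto.randomBytes(8).toString('hex') + '.pdf');
  wkhtmltopdf(req.body.html, { output: outFile }, (err) => {
    if (err) {
      res.status(500).send(err.toString());
      return;
    }
    const pdf = fs.readFileSync(outFile); // read the result back from tmpfs
    fs.unlinkSync(outFile);               // remove it so the in-memory fs doesn't fill up
    res.set('Content-Type', 'application/pdf').send(pdf);
  });
};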
This seems to do the trick, and I am able to run wkhtmltopdf as a Google Cloud Function.
I want to start using the Titan database, and I have followed the instructions at http://oren.github.io/blog/titan.html. But when I try to start Titan in Docker, it gives me the following error:
/opt/titan-0.5.4-hadoop2/run.sh: 2: /opt/titan-0.5.4-hadoop2/run.sh: : not found
The run.sh file is located in C:\Users\Modeso\titan, but I can't find a way to change the folder location in Docker.
Has anyone faced this problem before or found a solution for it?
I suspect that in this case the "not found" message may not be because the file is not found, but because the wrong line endings are used in the file. If a shell script uses Windows line endings, Linux can produce weird errors such as this one.
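You can check for and fix Windows line endings like this (dos2unix may need to be installed first; the sed line is an equivalent fallback):
file run.sh
dos2unix run.sh
sed -i 's/\r$//' run.sh
file reports "with CRLF line terminators" for affected scripts.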
Did you try building from the GitHub repository? https://github.com/apobbati/titan-rexster
You can build an image from that repository with:
docker build -t titan-rexster github.com/apobbati/titan-rexster
And run it:
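For example (the exact flags are an assumption; add -p port mappings as needed to reach Rexster from the host):
docker run -it titan-rexster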