Is it possible to install pfSense on a Raspberry Pi 3 Model B+?
I found a lot of posts about this on Google, but most of them are outdated, since both platforms have evolved a lot in the past few years.
You can't install pfSense on the Raspberry Pi using the images downloaded from the official website.
The problem is that the Raspberry Pi has an ARM processor, but pfSense requires an x86-based or AMD64 processor.
So, you can't install it.
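A quick way to see the mismatch for yourself, assuming a standard Raspbian image on the Pi (the exact string depends on the OS image):
# On the Raspberry Pi 3 B+ this prints armv7l (or aarch64 on a 64-bit OS);
# the official pfSense images expect x86_64 / amd64 here.
uname -m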
Related
I have a 32 GB memory card and a Raspberry Pi 3 Model B board. I want to do one project on the Windows 10 IoT Core OS and another project on the Raspbian OS.
Is it possible to install both operating systems on the same SD card?
Your memory card is 32 GB, but you cannot install two operating systems on one memory card.
Use two 16 GB memory cards with the one Raspberry Pi instead; you can't use a single card with two operating systems, because the Raspberry Pi has no virtualization software or partition/boot selection to switch between them. If you only need one OS, Raspbian is the best choice.
I just bought the USB Kinect adapter and realised it only works with USB 3.0. When connecting it to the USB 3.0 port of the PC everything works fine, but when I tried to connect it to my Raspberry Pi 3 it just won't work. I've already installed all the drivers (OpenNI, SensorKinect), but when I execute the "Sample-NiSimpleRead" example from OpenNI I get the following message:
One or more of the following nodes could not be enumerated:
Device: PrimeSense/SensorKinect/5.1.2.1: The device is not connected!
Besides, when I try sudo lsusb -v|grep -i nui I get:
iProduct 2 NuiSensor Adaptor
Still, there's no way to make it work. I've seen some projects on YouTube that use a Raspberry Pi 3 and a Kinect, so there should be a way to solve this. Do you have any idea?
The Kinect v2 is USB 3.0 only, while the RPi only has USB 2.0 (as you stated). Maybe you've seen videos of the Kinect v1, which uses USB 2.0?
The Kinect v2 needs USB 3.0, so if you need to plug it into an embedded device then the NVIDIA Jetson TK1/TX1/TX2 boards are the best bet. Here is a link that shows a demo of the Kinect v2 with the Jetson TK1. I have tried the same steps on a TX2 and it works fine after a successful installation of "libfreenect2".
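You can confirm the bus-speed limitation on the Pi itself; a minimal check (device names and layout vary per system):
# Show the USB topology with negotiated speeds.
# On a Raspberry Pi 3 every port reports 480M (USB 2.0);
# the Kinect v2 adapter needs a 5000M (USB 3.0) link.
lsusb -t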
I want to upload an Arduino sketch via a Raspberry Pi running the Windows 10 IoT platform (a Visual Studio universal application). Does anybody have any idea how to do this?
Sketch from cloud ---> Raspberry Pi ---> Arduino
In this diagram the sketch is downloaded from the cloud and uploaded to the Arduino via the RPi.
The solution is pretty straightforward, as the question already states the flow:
Cloud --> RPi --> Arduino
Step 1
Upload the sketch in the Cloud Instance (ftp or http)
Step 2
On the RPi, fetch the sketch file with wget and install the Arduino IDE (note that these commands assume a Debian-based OS such as Raspbian; Windows 10 IoT Core does not provide apt-get):
wget http://cloud-server/sketches/program1.ino
sudo apt-get update
sudo apt-get install arduino
Step 3
Reboot the RPi
sudo reboot
Step 4
When the RPi has rebooted, open the Arduino IDE and select the port to upload the sketch:
/dev/ttyUSB0
If Step 2 is not successful, then try installing the Windows GNU toolchain
http://gnutoolchains.com/raspberry/
and use it to compile and install piduino as follows:
mkdir -p hardware/RaspberryPi
cd hardware/RaspberryPi
git clone https://github.com/me-no-dev/RasPiArduino piduino
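If running the full IDE on the Pi turns out to be too heavy, a lighter alternative (not part of the original answer, and assuming an Uno-class board on /dev/ttyUSB0) is the headless arduino-cli tool:
# arduino-cli expects the sketch in a directory of the same name,
# e.g. program1/program1.ino; adjust the FQBN and port for your board.
mkdir -p program1 && mv program1.ino program1/
arduino-cli core install arduino:avr
arduino-cli compile --fqbn arduino:avr:uno program1
arduino-cli upload -p /dev/ttyUSB0 --fqbn arduino:avr:uno program1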
It would have been useful if you had described the whole problem you have. I will try to answer as best I can:
Solution One:
You can maintain shared storage between the Raspberry Pi and the Arduino board (it can be an external SD card). Get the code from the cloud using the Raspberry Pi and write it into the shared storage using file streams. Then read this file from the same shared storage on the Arduino.
Solution Two:
If you are not worried about the exact file and your only concern is the logic, interface the Arduino to the Raspberry Pi as a slave device. Use the Arduino I/O ports to read digital signals or values from the Raspberry Pi, generated according to the code you got from the cloud using the Windows 10 IoT platform.
I hope this could help you to some extent.
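For Solution Two above, a minimal sketch of the Pi side of the signalling, assuming a Linux image such as Raspbian (rather than Windows 10 IoT Core) and a spare pin such as GPIO17 wired to an Arduino digital input:
# Hypothetical example: drive GPIO17 high/low from the Pi; the Arduino reads it
# on a digital input. Run as root if your user lacks permission on /sys/class/gpio.
# The Pi's GPIO is 3.3 V, so never feed 5 V Arduino outputs back into it
# without a level shifter.
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction
echo 1 > /sys/class/gpio/gpio17/value   # signal "on" to the Arduino
echo 0 > /sys/class/gpio/gpio17/value   # signal "off"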
From TensorFlow's "Getting Started" page:
# Only CPU-version is available at the moment.
$ pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
I'm not super familiar with using GPU or CUDA libraries, but if I installed TensorFlow inside a Linux VM (say the precise32 available through Vagrant), then would TensorFlow utilize the GPU when running inside that VM?
Probably not. VirtualBox, for example, does not support PCI Passthrough on a MacOS host, only a Linux host (and even then, I'd... uh, not get my hopes up). MacOS ends up so tightly integrated with its GPU(s) that I'd be very dubious that any VM can do it at this point.
As an update: TensorFlow can now use GPUs on Mac OS X. The relevant PR is https://github.com/tensorflow/tensorflow/pull/664, and after a brew install coreutils the Linux 'build from source' installation instructions should work. I see a 10x speedup compared to the CPU version with an NVIDIA GeForce 960 and an Intel i7-6700K.
Edit/(downdate?): Starting with macOS Mojave, due to some API changes and what appears to be some long-standing beef between Apple and NVIDIA, drivers for NVIDIA graphics cards are no longer available. No NVIDIA means no CUDA means no TensorFlow, nor really any other respectable machine learning. It appears something like Google Colaboratory is the way to go for now.
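Whatever the setup, you can quickly check whether a given TensorFlow install actually sees a GPU; a minimal check using the 1.x-era Python client (the exact API may differ in other versions):
# Lists the devices TensorFlow can use; a GPU-enabled build on supported
# hardware shows a '/gpu:0' (or '/device:GPU:0') entry besides the CPU.
python -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"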
I have some basic knowledge of Linux (RHEL 5.4) device drivers and kernel internals and wish to gain expertise in the same. I came to know about the Raspberry Pi board.
My question is whether the same code that I write on a Linux server will work there, i.e. are the architecture and concepts the same? Kindly note that if this is not the case, then I would need to buy a desktop PC instead for offline practice.
Yes, it depends on the architecture, and the same code compiled for x86 will not work on the Pi. However, there are ways to get around it.
As mentioned in the above post, use a cross-compile toolchain (one that comes with its own libc) to compile your code (kernel or userspace) so it can run on the R Pi. Even after doing this, you will still not be able to test your code on the PC; for that, get a VM tool like QEMU. I am not sure if there is a QEMU machine model for the R Pi, but in general an ARM11 (ARMv6) based QEMU target should do. The following link should get you going with initial kernel development on your PC without owning an R Pi.
http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/
Cheers
Subbu
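For reference, the QEMU invocation from that tutorial generally looks like the following (a sketch, assuming you have downloaded a Raspbian image and the matching kernel-qemu kernel; file names are placeholders):
# Boot a Raspbian image under QEMU's ARM11 (versatilepb) machine model.
qemu-system-arm -M versatilepb -cpu arm1176 -m 256 \
  -kernel kernel-qemu \
  -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
  -hda raspbian.img \
  -serial stdio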
Are their architecture and concepts the same?
I would like to clarify that the Raspberry Pi is an ARM-based board, while your server is most likely running on x86.
Device drivers are meant for devices, so the Raspberry Pi needs to have the device that you are writing the driver for.
I suggest you study the Raspberry Pi datasheet and the Linux driver model.
The Linux driver model itself is architecture independent, so you need only some effort to port your x86 driver to ARM; you need to concentrate on the hardware part.
You might need to cross-compile your code for the ARM architecture if your Linux server is an x86 machine. You can cross-compile your modules for ARM using the GNU ARM toolchain and then run them on the Raspberry Pi.
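As a rough sketch of what that cross-compilation looks like for an out-of-tree module (assuming the arm-linux-gnueabihf- toolchain is installed, the Raspberry Pi kernel source is configured under ~/linux, and the current directory holds your module source plus a Makefile with an obj-m entry; all names here are illustrative):
# Cross-build the module against the Pi kernel tree from an x86 host.
make -C ~/linux ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- M=$(pwd) modules
# Copy the resulting .ko to the Pi and load it there:
scp hello.ko pi@raspberrypi.local:~
ssh pi@raspberrypi.local 'sudo insmod hello.ko && dmesg | tail'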