I'm wondering whether it is possible to run Android Things on a Raspberry Pi 3 Model B+.
I've been following this tutorial:
Android Things - LED Blinker
It seems to state that the Raspberry Pi 3 Model B is supported, but it does not directly say the B+ is.
Each time I download a build and flash it to the SD card, my Pi 3 B+ gets stuck at the rainbow screen after I insert the card.
Any help would be really appreciated.
Thank you
Let me start by saying that I'm new to neural networks, machine learning, etc. So far I have made only a few very simple experiments to learn, so please be patient with me if I ask very naive or long questions.
My favorite programming language is Java, and I'm looking at playing with Weka, an API that looks very clear and complete to me. This time, to have something more than a set of ideal data to train on and check the success rate against, I started not from the software alone but from a real-world problem that I created for myself.
The problem I created, and that I'd like to solve with neural networks, is a four-legged, spider-shaped robot controlled by a Raspberry Pi with some ADC and servo HATs. This rather strange robot has 4 legs, each leg made of 3 parts, and each part is moved by a servo motor. In total I therefore have 4 legs * 3 leg parts = 12 servo motors. Each servo motor has a 3-axis analog accelerometer attached (12 in total) that I can read from the Raspberry Pi. From each of these accelerometers I read 2 axes to determine the position of each servo, that is, the position of each leg segment of each leg. In addition, the "sole of the foot" of each of the 4 legs has a button to determine whether the leg has reached the floor and is supporting the spider. The spider construction blog is here, for those interested: https://thestrangespider.blogspot.com/
The purpose of this experiment is to make the spider able to balance itself and level its body horizontally starting from any condition. In a few words, the spider should level its body regardless of whether I have placed it on a horizontal or an oblique surface. The hardware platform is ready; a few details are missing, but let's assume I can read all the signals I need from a Java Spring Boot application using the Pi4J API to interface with the hardware (24 values from the ADCs for the 12 servos, and 4 digital inputs (true/false) from the buttons on the soles of the spider's feet). The intent is to solve the problem of moving the leg servo motors using neural networks built with Weka, reading the various input signals until the system reaches the success condition (static balance and the body in a horizontal position). The main problems are how to use all the data I have to build a data set in the best way, and which neural networks to use to put the adaptive corrective feedback in place, until the spider reaches the success condition of having its body horizontal and in static balance.
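To make the data-set question concrete, one conceivable representation is a single Weka ARFF relation per snapshot of the robot's state. This is only a sketch under my own assumptions: all attribute names are hypothetical, and the header is abbreviated.

    % Hypothetical ARFF header: 24 accelerometer axes + 4 foot buttons per snapshot.
    @relation spider-state

    @attribute servo1_x numeric
    @attribute servo1_y numeric
    % ... servo2_x through servo12_y follow the same pattern, 24 numeric attributes in total
    @attribute foot1 {true,false}
    @attribute foot2 {true,false}
    @attribute foot3 {true,false}
    @attribute foot4 {true,false}
    % the class attribute depends on what the model should predict, e.g. the most inclined leg
    @attribute most_inclined_leg {leg1,leg2,leg3,leg4}

    @data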
Going deeper into this topic, let me provide my analysis.
Each leg should perform these steps (a sketch of the resulting control loop follows the list):
Move one servo at a time independently, starting from a random position (within the angle range allowed by that leg segment)
Re-read all the leg segment positions after each move, waiting for the next static condition
Determine whether the last move brought an advantage or a disadvantage to the system as a whole
If the last move caused no change, keep the latest change/move and continue
If any maximum angle has been reached, reverse the direction of that angle for the next move
Check whether the system is closer to success now than at the previous step, and determine the next action type
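Here is a minimal Java sketch of that loop. Every helper method at the bottom is a hypothetical stub standing in for the real Pi4J reads and servo commands, not an actual library call:

    // Sketch of the per-leg trial-and-error loop described in the list above.
    public class BalanceLoop {
        static final double TOLERANCE = 0.05;  // "close enough to horizontal" (made-up threshold)

        public static void balance() {
            int servo = 0;
            int direction = 1;                                 // +1 / -1: increase / decrease angle
            double previous = score(readState());
            while (previous > TOLERANCE) {
                if (atAngleLimit(servo)) direction = -direction; // reverse at the mechanical limit
                moveServo(servo, direction);                     // move one servo at a time
                waitUntilStatic();                               // wait for the next static condition
                double current = score(readState());             // re-read positions and foot buttons
                if (current <= previous) {
                    previous = current;                          // helped or changed nothing: keep it
                } else {
                    moveServo(servo, -direction);                // undo a harmful move,
                    servo = (servo + 1) % 12;                    // then try the next servo
                }
            }
        }

        // --- hypothetical hardware stubs, to be backed by Pi4J ---
        static double[] readState() { return new double[28]; }  // 24 accelerometer axes + 4 buttons
        static double score(double[] s) { return 0.0; }         // distance from "horizontal + balanced"
        static void moveServo(int servo, int direction) { }
        static void waitUntilStatic() { }
        static boolean atAngleLimit(int servo) { return false; }
    }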
To me the problem is a mix of open questions, one of which is how to represent this system from a data perspective:
Is the data coming from the accelerometers of all the servos in the same leg a candidate for a data cluster or a robot sub-system?
Could each servo and its position measurement instead be a subsystem of the main system?
Could these really be multiple problems instead of one?
How would you approach this problem from a neural network perspective?
After studying Weka classification and regression, I first concluded that the processing should be treated as a regression problem. That too turned out to be overly complex: it would have needed regression with multi-label output, which, beyond Weka, would have required another framework on top of it called Meka.
I finally arrived at the approach I will now try: first use Weka to classify the data and detect which leg the inclination is on, then solve the specific problem of acting on that leg, possibly with a different Weka model that works on a single leg at a time.
Problem resolution will therefore have these steps (a minimal Weka sketch follows the list):
First, use a legs-level Weka classification model to detect which of the four legs has the highest inclination.
Then, use a leg-specific Weka classification model to determine, for the leg detected in the first step, the actions the three leg segments must take to correct the problem.
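A minimal sketch of the first step with the Weka API. The ARFF file name is hypothetical, the attribute layout mirrors the 24 + 4 inputs described earlier, and MultilayerPerceptron is just one possible classifier:

    import weka.classifiers.Classifier;
    import weka.classifiers.functions.MultilayerPerceptron;
    import weka.core.DenseInstance;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class LegClassifier {
        public static void main(String[] args) throws Exception {
            // Load training snapshots (e.g. the hypothetical spider-state.arff sketched above).
            Instances data = new DataSource("spider-state.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);   // last attribute = most inclined leg

            Classifier nn = new MultilayerPerceptron();     // Weka's feed-forward neural network
            nn.buildClassifier(data);

            // Classify a fresh snapshot (the 28 input values read from the hardware).
            Instance snapshot = new DenseInstance(data.numAttributes());
            snapshot.setDataset(data);
            // ... fill the 28 attribute values from the Pi4J readings here ...
            int legIndex = (int) nn.classifyInstance(snapshot);
            System.out.println("Most inclined leg: " + data.classAttribute().value(legIndex));
        }
    }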
Stefano
I am a beginner with Veins. I am trying to simulate dynamic adjustment of the transmission power and speed of 2 moving vehicles based on their distance from each other, using Veins 4.5, OMNeT++ 5.0, and SUMO 0.29.
So far I have built the SUMO model and run it in OMNeT++, but without any programming, meaning the transmission power and speed of the 2 vehicles are all set in the .ini file. Now I want to implement an algorithm to adjust them dynamically.
As a beginner I barely know how to start. The Veins tutorial doesn't explain how to apply the functions Veins provides. I have now created a new .cc file based on our .ned file, and in MyVeinsApp.cc I found some methods I need to implement. But I still need some programming guidance for my problem:
1. How do I get the real-time distance between 2 moving vehicles?
2. Is it possible to control the transmission power and speed with Veins 4.5?
I am sorry for these basic questions, but I really don't know how to develop a Veins simulation from the very beginning, step by step, at the programming level.
Thank you very much!
To get the distance between two vehicles you can use the built-in function distance() from Coord (see this post).
To control the transmit power you can use the parameter txPower from Mac1609_4.
For changing the speed of the vehicle you can check this post.
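For the static starting point, txPower is an ordinary module parameter you can set in omnetpp.ini. The module path below follows the Veins 4.5 example scenario; adjust it to your own network and treat the value as an example:

    # omnetpp.ini excerpt (Veins 4.5 example style)
    *.**.nic.mac1609_4.txPower = 20mW

For run-time changes, the application layer has to obtain a pointer to the Mac1609_4 module and change the power from C++; the exact hook differs between Veins versions, so check the public methods of Mac1609_4 in your checkout. The distance check itself typically looks like mobility->getCurrentPosition().distance(otherPosition) in the application code.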
I have looked on the Stack Overflow forums and can't find the answer to my 2 questions, so here they are.
Can I power a motor from an Arduino Uno by driving it directly from an output pin, the way I can with an LED, without a motor shield?
This builds on question 1, but let's say the answer is yes for now, until I find out the answer to question 1. My small DC motor can take a maximum of 3 volts, and I know the Arduino outputs 5 volts, so by simple math I am 2 volts over the limit. I have just about every resistor value you can think of, so which one do I need to put into my circuit? I am confused about why I can't find anything on converting the 2 excess volts into an ohm value for the resistor.
Thanks in advance.
Powering your motor directly from the output pin is typically not advised. The digital pins on an Arduino can only source a small amount of current, typically far too little to drive a motor, though it depends on the motor, of course. More important than your motor's voltage is knowing how much current is needed to drive it.
For your resistor question, look into voltage dividers built from resistors; a worked example follows.
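As a sketch of the divider math (keep in mind this only holds for a light, steady load; a motor drawing real current will pull the divider's output well below the calculated value):

    Vout = Vin * R2 / (R1 + R2)
    3 V  = 5 V * R2 / (R1 + R2)   =>   R2 / (R1 + R2) = 0.6
    e.g. R1 = 2 kΩ and R2 = 3 kΩ:  5 V * 3k / (2k + 3k) = 3 V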
Mike
I am looking to build an application as a birthday gift for my boyfriend and would like some help getting started!
I'd like to be able to input a photo (of a potato chip or something else with distinct edges) and have the app select and output the US state the chip looks most like. I plan on building this in Java and am wondering what the best approach to designing the algorithm would be. I've never done anything with edge detection or image comparison, and I'm wondering if anyone could point me in the right direction to get started.
Thank you!
Look into Caffe, developed at Berkeley. It uses neural networks to identify objects based on learning from models.
For your specific task, you can create models of different potato chips and Caffe will "learn" them. Then you can write an application using a video camera: place a potato chip in the video feed and Caffe will identify it.
Any identification process needs both feature detection and object matching against a database of similar images, and Caffe can help with that.
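Caffe itself is C++/Python rather than Java, so as a hedged alternative for the edge-detection-and-shape-comparison idea from the question, here is a sketch using OpenCV's Java bindings. It assumes you have one outline image per state; all file names are hypothetical:

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class ChipShapeMatcher {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }  // load the native OpenCV library

        // Extract the largest external contour of an image (the chip or a state outline).
        static MatOfPoint largestContour(String path) {
            Mat gray = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
            Mat edges = new Mat();
            Imgproc.Canny(gray, edges, 50, 150);                  // edge detection
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(edges, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            return contours.stream()
                    .max((a, b) -> Double.compare(Imgproc.contourArea(a), Imgproc.contourArea(b)))
                    .orElseThrow(IllegalStateException::new);
        }

        public static void main(String[] args) {
            MatOfPoint chip = largestContour("chip.jpg");         // hypothetical input photo
            String[] states = { "texas.png", "ohio.png" };        // hypothetical state outlines
            String best = null;
            double bestScore = Double.MAX_VALUE;
            for (String s : states) {
                // Lower score = more similar shapes (CV_CONTOURS_MATCH_I1 on OpenCV 3.x).
                double score = Imgproc.matchShapes(chip, largestContour(s),
                        Imgproc.CONTOURS_MATCH_I1, 0);
                if (score < bestScore) { bestScore = score; best = s; }
            }
            System.out.println("Closest state outline: " + best);
        }
    }

matchShapes compares Hu moments, which are invariant to translation, scale, and rotation, so the chip would not need to be photographed at a fixed size or orientation.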
I am using the AndroidProximityLibrary for a project where I'm measuring the distance to a beacon, and when the distance reaches or passes a certain threshold the app will do something.
Everything is working fine except that the distances I'm receiving from the library vary widely. Even when I'm standing in front of the beacon with a clear line of sight, I can get distance values ranging from 1.5 to 4 meters (while standing about 3 meters from the beacon).
My real question is whether I can somehow get more distance values so I can get rid of those spikes; currently I'm receiving beacon information at around 2 distance values per second. Is it the beacon that is only sending information at that frequency, or is it the library that is only issuing the callbacks at that frequency?
As the beacon, I'm using a Raspberry Pi configured following the Radius Networks tutorial, and a Nexus 5 hosting the client application.
The values vary so much because of a bug in that library: it used only a single signal-strength measurement to estimate distance. The latest version of the Android Beacon Library (which shares much of its code with the library you mention) uses a running average of signal-strength samples over a 20-second window, which smooths out the noise significantly.
Unfortunately, the AndroidProximityLibrary has been discontinued and no new updates are being provided. If you are not using the library's cloud data features, your best option is to migrate to the Android Beacon Library 2.0, which has all the other features. A migration guide is available here.
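For reference, ranging with the Android Beacon Library 2.x looks roughly like this; the smoothed estimate comes back through getDistance(), and the region name and the 3.0 m threshold here are arbitrary examples:

    import android.app.Activity;
    import android.os.Bundle;
    import android.os.RemoteException;
    import java.util.Collection;
    import org.altbeacon.beacon.*;

    public class RangingActivity extends Activity implements BeaconConsumer {
        private BeaconManager beaconManager;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            beaconManager = BeaconManager.getInstanceForApplication(this);
            // Depending on your beacon's advertisement format you may also need
            // to add a BeaconParser layout here.
            beaconManager.bind(this);                       // connect to the beacon service
        }

        @Override
        public void onBeaconServiceConnect() {
            beaconManager.setRangeNotifier(new RangeNotifier() {
                @Override
                public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
                    for (Beacon beacon : beacons) {
                        // getDistance() is the library's running-average estimate, in meters.
                        if (beacon.getDistance() < 3.0) {
                            // close enough: trigger your action here
                        }
                    }
                }
            });
            try {
                beaconManager.startRangingBeaconsInRegion(new Region("myRegion", null, null, null));
            } catch (RemoteException e) { /* service not bound yet */ }
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            beaconManager.unbind(this);                     // release the service connection
        }
    }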