I followed these steps to get the Host Light example device registered on Weave and running on a Raspberry Pi 3. I'm able to control it with Home and the Weave Console.
Now I'm trying to do the same for a Host Hvac device (I looked at the Hvac example for the MW302 as a reference), but I'm not able to register the device with ./out/host/examples/hvac/hvac -r xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
It gets stuck at the following, with no visible errors:
[(4068588.179)I daemon.c:146] Heap state at daemon_connected: free=0, iota_allocated=15721, iota_max_allocated=15754
[(4068588.180)I daemon.c:152] Daemon connected.
With the Host Light example, I do see the device registering and it works fine:
[(4069131.290)I daemon.c:268] Waiting for registration message to be sent.
[(4069131.290)I daemon.c:146] Heap state at daemon_connected: free=0, iota_allocated=15234, iota_max_allocated=15268
[(4069131.290)I daemon.c:152] Daemon connected.
[(4069131.290)I daemon.c:137] Registering with ticket xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
[(4069131.290)I weave_http.c:98] Sending PATCH Request https://www.googleapis.com/weave/v1/registrationTickets/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx?key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[(4069133.198)I weave_register.c:205] Sending Registration Finalize Request
[(4069133.198)I weave_http.c:98] Sending POST Request https://www.googleapis.com/weave/v1/registrationTickets/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/finalize?key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[(4069135.880)I settings.c:71] Device id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
[(4069135.880)I weave_register.c:233] Sending Registration OAuth Request
[(4069135.880)I weave_http.c:98] Sending POST Request https://accounts.google.com/o/oauth2/token
[(4069135.196)I weave_register.c:270] Registration Complete
[(4069135.196)I dev_framework.c:295] Heap state at daemon_registered: free=0, iota_allocated=23704, iota_max_allocated=35344
[(4069135.196)I dev_framework.c:296] Registration Succeeded.
[(4069135.197)I weave.c:550] Fetching Command Queue
Has anyone managed to create a Host Hvac device successfully?
not able to register the device with ./out/host/examples/hvac/hvac -r xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
It gets stuck at the following, with no visible errors:
I think you need to accept argc and argv in main() and pass them through to the HostIotaFrameworkConfig, so the framework can pick up the -r registration ticket (which would explain why your log stops right after "Daemon connected."). For example:
int main(int argc, char** argv) {
  HostIotaFrameworkConfig config = (HostIotaFrameworkConfig){
      .base =
          (IotaFrameworkConfig){
              .cli_commands = NULL,
              .num_commands = 0,
              .builder = create_daemon_,
          },
      .argc = argc,
      .argv = argv,
      .user_data = NULL,
  };
  /* ... then create and run the framework with this config, as in the light example ... */
}
There's now an Hvac host example in the Libiota repo: https://weave.googlesource.com/weave/libiota/+/master/examples/host/hvac_controller/
For those of you working on running any of the host example devices on a Raspberry Pi, note that you might need the following dependencies:
sudo apt-get install libssl-dev libldap2-dev libidn11-dev libssh2-1-dev libkrb5-dev librtmp-dev
Also, weave_client (used to register the device) needs pycurl on your Linux machine. Download the source from pycurl.io and install it with python setup.py install (a quick sketch follows below). You may need to install these dependencies as well:
sudo apt-get install libcurl4-gnutls-dev python-dev
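A minimal sketch of that pycurl source install (the tarball and directory names are placeholders for whatever version you download from pycurl.io):

tar xzf pycurl-<version>.tar.gz     # placeholder for the downloaded tarball
cd pycurl-<version>
python setup.py install             # may need sudo, depending on your Python setup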
Related
I would like to use Node-RED on my Raspberry Pi 3 B+ with GrovePi, but I keep getting the same error message again and again:
[node-red-contrib-grovepi/grovepi] Error: Cannot find module 'i2c-bus'
i2c is enabled in raspi-config, and I have installed everything described in the Node-RED/Raspberry Pi configuration docs.
Thanks to all!
I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be able to update itself from a remote RPM repository over HTTPS.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \
    file://yocto-adv-rpm.repo \
"

do_install_append () {
    install -d ${D}/etc/yum.repos.d
    install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}
FILES_${PN} += "/etc/yum.repos.d"
This is the content of repository configuration file included by dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the build process of the image. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the "baseurl" variable:
baseurl=http://storage.googleapis.com/my_repo/
then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine; I think it must be something related to the Google certificates and the Yocto tooling.
I found some relevant information inside this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can anyone provide useful advice on how to fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by completely removing my dnf bbappend recipe from my custom layer and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after finishing the build of the image:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it uses the repo to update packages with the dnf client.
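For reference, once the image built with PACKAGE_FEED_URIS is running on the target, the feed can then be consumed with the dnf client roughly like this (the package name is just a placeholder):

$ dnf makecache           # fetches the repodata generated by 'bitbake package-index'
$ dnf install <package>   # or 'dnf upgrade' to update already-installed packages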
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue in the right way. I'll come back with the final solution.
When I start the dnsmasq service in CentOS 7, it fails to start. This is because I added a wblist.conf at /etc/dnsmasq.d/wblist.conf:
cat wblist.conf
# for router itself
server=/google.com.tw/192.168.8.20#53
ipset=/google.com.tw/gfwlist
ipset -L gfwlist
Name: gfwlist
Type: hash:net
Revision: 3
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16784
References: 0
Members:
But if I COMMENT the ipset line, the service can be restarted successfully.
I don't know why. I have used dnsmasq/ipset for a long time, but suddenly got this problem.
Has anyone run into this situation?
Disabling SELinux is not recommended.
You can solve this problem by creating and installing a SELinux policy module.
First, you need to create a type enforcement rules file called my-dnsmasq.te, with content like the following:
module my-dnsmasq 1.0;
require {
type dnsmasq_t;
class netlink_socket { bind create write };
}
#============= dnsmasq_t ==============
allow dnsmasq_t self:netlink_socket { bind create write };
Now you can compile it into a policy module package file:
checkmodule -M -m -o my-dnsmasq.mod my-dnsmasq.te
semodule_package -o my-dnsmasq.pp -m my-dnsmasq.mod
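As an aside, on CentOS 7 you can usually let audit2allow (from the policycoreutils-python package) generate an equivalent module straight from the logged denials instead of writing the .te file by hand, roughly like this:

grep dnsmasq /var/log/audit/audit.log | audit2allow -M my-dnsmasq
# this also produces my-dnsmasq.pp, ready to be installed as below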
Once you get the policy module package file my-dnsmasq.pp, install it:
sudo semodule -i my-dnsmasq.pp
Finally, restart the dnsmasq.service:
sudo systemctl restart dnsmasq
And run a test like below:
nslookup google.com.tw
ipset list gfwlist
If everything is fine, you will see an IP address added to the ipset.
I found this article, SELinux prevents ipset from creating a netlink socket, and I disabled SELinux; then it worked. I don't know why.
I have installed PredictionIO through brew on OS X (Mavericks) and I can start the admin service (http://0.0.0.0:9000) and the API server (http://0.0.0.0:8000).
But the docs for the Ruby SDK say:
# Create a client object.
client = PredictionIO::EventClient.new(<ACCESS KEY>, <URL OF EVENTSERVER>)
At first I used the API server's URL, but other docs (like the Python SDK's) say that the event server runs on http://0.0.0.0:7070.
If I try to create an event:
client.create_event('rate', 'user', rate.user_id, { 'targetEntityType'=> 'item', 'targetEntityId' => rate.rateable_id, 'properties'=> {'rating'=> 3 }})
it always returns the same response: 'PredictionIO::EventClient::NotCreatedError: Your request is not supported'
The guide says that the command to run this server is:
pio eventserver
But I don't have this binary. I start everything with the 'predictionio-start-all.sh' script, but that doesn't start the event server.
Thanks in advance !!
The Homebrew formula is maintained by the community and has not yet been updated to 0.8.4. It is using 0.7.3 (http://braumeister.org/formula/predictionio), which does not work with the current documentation.
Please follow the instructions here to install the latest version: http://docs.prediction.io/install/
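Once the newer version is installed, the event server from the question is started with the pio tool (a quick sketch; 7070 is the default port mentioned in the question):

pio eventserver    # starts the event server, by default on 0.0.0.0:7070
pio status         # (if your version provides it) sanity-check the installation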
I have installed net-snmp 5.7.2 on my system. I have written my app_agent.conf for my application with
agentXSocket udp:X.X.X.X:1610
and exported SNMPCONFIGPATH=path_to_app_agent.conf
I have also written snmpd.conf in /usr/etc/snmp/snmp.conf:
trap2sink X.X.X.Y
agentXSocket udp:X.X.X.X:1610
I have two more snmpd.conf files present in /etc/snmp/ and /var/net-snmp/.
Config from /etc/snmp:
com2sec notConfigUser default public
group notConfigGroup v1 notConfigUser
group notConfigGroup v2c notConfigUser
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25.1.1
access notConfigGroup "" any noauth exact systemview none none
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
Config from /var/net-snmp:
setserialno 1322276014
ifXTable .1 14:0 18:0x $
ifXTable .2 14:0 18:0x $
ifXTable .3 14:0 18:0x $
engineBoots 14
oldEngineID 0x80001f888000e17f6964b28450
I have started snmpd and snmptrapd. Now in my code I am calling
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
init_agent("app_agent");
init_snmp("app_agent");
init_snmp is throwing a warning
Warning: Failed to connect to the agentx master agent ([NIL]):
I have no idea why. Thanks in advance for any help.
This is basically saying that the sub-agent you wrote failed to connect to the Net-SNMP master agent, as the message suggests. On Linux, by default AgentX will attempt to make the connection via the Unix socket /var/agentx/master. The following hints might help:
Run your sub-agent with a privilege level that has access to the socket, e.g. with sudo.
Check the socket settings in your snmpd.conf (its location varies) if not already specified, such as agentxsocket /var/agentx/master and agentxperms 777 777 (see the fragment after these hints).
Restart Net-SNMP for any change to take effect with sudo service snmpd restart; or, as an option, stop the service with sudo service snmpd stop and run an instance in debugging mode with snmpd -f -Lo -Dagentx, which will most likely output useful information about the sub-agent connection.
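A minimal snmpd.conf fragment along the lines of the second hint might look like this (the wide-open 777 permissions are for debugging only):

# snmpd.conf (location varies by distribution)
master       agentx
agentxsocket /var/agentx/master
agentxperms  777 777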
I ran into this problem just now with Quagga and ospfd, and after doing strace -f -p PID, I noticed this among the output:
connect(14, {sa_family=AF_FILE, path="/var/agentx/master"}, 110) = -1 EACCES (Permission denied)
so I:
$ ls -al /var/agentx/
total 8
drwx------ 2 root root 4096 Sep 12 20:50 .
drwxr-xr-x. 27 root root 4096 Sep 12 20:13 ..
srwxrwxrwx 1 root root 0 Sep 12 20:50 master
and then I:
$ chmod 755 /var/agentx/
and immediately zebra and ospfd had their AgentX subagents connect.
$ tail -10f /var/log/quagga/zebra.log
2014/09/12 20:52:59 ZEBRA: snmp[info]: NET-SNMP version 5.5 AgentX subagent connected
$ tail -10f /var/log/quagga/ospfd.log
2014/09/12 20:52:59 OSPF: snmp[info]: NET-SNMP version 5.5 AgentX subagent connected
This is running quagga-0.99.23-2014062401 on RHEL 6. Hope this helps.
I had a similar problem: whether with the Unix socket or tcp:localhost:750, I was still getting the same error message:
/var/log/quagga/ospfd.log: warning, failed to connect to Master AgentX [nill] or [tcp:localhost:750].
I resolved the issue by disabling SELinux.
This is not the answer to your problem, but I too got the "Warning: Failed to connect to the agentx master agent ([NIL]):" message when my snmpd service didn't start up properly or went down. For my SNMP sub-agent, I used the example they provide, example-demon.c, and found I get this message nonstop (about every second) when calling agent_check_and_process(0) on every loop iteration:
while (true) {
    agent_check_and_process(0); /* 0 == don't block */
}
This is how I fixed it.
netsnmp_transport *snmpTransport;
bool snmpAgentDown = false;  /* added declaration: tracks whether the master agent is down */
int i = 0;                   /* added declaration for the loop counter below */

while( true ) {
    // Check to see snmpd is still running
    snmpTransport = netsnmp_transport_open_client("agentx", NULL);
    if (snmpTransport == NULL)
    {
        // Just went down?
        if (snmpAgentDown == false)
        {
            snmp_log( LOG_INFO, "Net-SNMP Agent is down\n" );
            snmpAgentDown = true;
        }
        Sleep(5000); // Sleep for 5 sec
    } else
    {
        if (snmpAgentDown)
        {
            snmp_log( LOG_INFO, "Net-SNMP Agent is back up\n" );
            snmpAgentDown = false;
        }
        // Close the test connection
        snmpTransport->f_close(snmpTransport); // this burned me when missing; it is needed
        netsnmp_transport_free(snmpTransport);
        // Process SNMP requests and notifications
        agent_check_and_process( 0 ); // 0 == don't block, 1 == block
        Sleep(1); // Sleep for 1 ms; yield the thread but keep the sub-agent responsive
    }
    i++;
}
Now if snmpd goes down, my app can detect it and skip agent_check_and_process(), stopping the "Warning: Failed to connect to the agentx master agent ([NIL]):" message from ever appearing. If snmpd comes back up, processing resumes.
Final note: I based that code on the subagent_open_master_session() function in subagent.c in the net-snmp-5.7.2 package. snmpTransport->f_close(snmpTransport) is also needed; I determined that by following what snmp_close() does at the end of subagent_open_master_session().
As the Net-SNMP sub-agent is sometimes unable to read the address of the master agent from the configuration file, you can also try:
/* set the location of master agent */
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
NETSNMP_DS_AGENT_X_SOCKET, "udp:X.X.X.X:1610");
Write these lines in the agentx code before calling init_agent().
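Putting this together with the calls from the question, the startup order in the sub-agent would look roughly like this (a sketch only, reusing the same placeholder address):

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

int main(void)
{
    /* run as an AgentX sub-agent rather than a master agent */
    netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
    /* point the sub-agent at the master agent explicitly */
    netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
                          NETSNMP_DS_AGENT_X_SOCKET, "udp:X.X.X.X:1610");
    init_agent("app_agent");
    /* ... register your MIB handlers here ... */
    init_snmp("app_agent");
    while (1)
        agent_check_and_process(1); /* 1 == block until a request arrives */
    return 0;
}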
I solved the problem with the following steps on Ubuntu 17.07.
Change the config (add a line):
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.2
view systemview included .1.3.6.1.2.1.25.1.1
instead of
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25.1.1
Add a new line, master agentx, in /etc/snmpd.conf.
Restart the snmpd daemon:
sudo /etc/init.d/snmpd restart or sudo service snmpd restart