Install an NTP client in a Yocto-based embedded Linux distribution - embedded-linux

I'm developing a Yocto-based Linux distribution using the zeus Yocto release.
I need to install an NTP client into the distribution, and I don't want to install the NTP server inside the image.
In the zeus release I have found the following recipe:
meta-openembedded/meta-networking/recipes-support/ntp/ntp_4.2.8p15.bb
which relates to the Network Time Protocol (NTP).
The recipe contains the following information about itself:
SUMMARY = "Network Time Protocol daemon and utilities"
DESCRIPTION = "The Network Time Protocol (NTP) is used to
synchronize the time of a computer client or server to
another server or reference time source, such as a radio
or satellite receiver or modem."
The previous information doesn't explain whether the recipe can be used to install an NTP server, an NTP client, or both in the distribution.
What I need is an NTP client application that is able to connect to an external NTP server.
The following instruction:
IMAGE_INSTALL += "ntp"
is not suitable, because it adds to the Linux distribution the NTP server, which is called ntpd.
Which package do I have to add to the image to include an NTP client? Is it included in the previous recipe, or do I have to find a different recipe?
Thanks

OK, I have found and tested the answer to the post: ntp recipe didn't install ntpdate files.
The answer to that question solves my problem perfectly.
With the instruction:
IMAGE_INSTALL += "ntpdate"
only ntpdate and the ntpdate.service unit are installed in the distribution, without installing ntpd and its service.
Obviously, to avoid installing the ntpd program (the NTP server), I must remove the instruction:
IMAGE_INSTALL += "ntp"
from the distribution configuration.
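Putting the two changes together, a minimal sketch for conf/local.conf (or the image recipe) on zeus could look like this; the _remove override is the pre-honister syntax, and is only needed if something else still pulls in ntp:

```conf
# conf/local.conf (or the image recipe) -- sketch for the zeus release.
# Keep the server package (ntpd) out of the image:
IMAGE_INSTALL_remove = "ntp"
# Install only the client pieces produced by the ntp recipe:
IMAGE_INSTALL += "ntpdate"
```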
A useful lesson
The comment that I found in the post ntp recipe didn't install ntpdate files is very useful:
Explanation: Take a look at the recipe, and at the PACKAGES variable:
PACKAGES += "ntpdate sntp ${PN}-tickadj ${PN}-utils"
This means that the ntp recipe provides the packages ntp (the default, ${PN}), ntpdate, sntp, ntp-tickadj and ntp-utils.
From this comment I have learnt that, in general, a recipe can define (and so contain) many packages, and every package can contain many programs, configuration files, and so on.
What you assign with IMAGE_INSTALL += ... depends on what must be installed.
In my case I have excluded the default package ntp and included the package ntpdate.


Starting a subsystem process with sshd on a constrained embedded system

I am trying to get the yuma123 open source implementation of a NETCONF Server running on an embedded Linux system.
The NETCONF Server uses sshd, and yuma123 appears to assume the OpenSSH implementation of sshd, as it uses the /etc/ssh/sshd_config file.
In particular, the README file in yuma123 states:
Tell sshd to listen on port 830. Add the following 2 lines to /etc/ssh/sshd_config:
Port 830
Subsystem netconf "/usr/sbin/netconf-subsystem --ncxserversockname=830#/tmp/ncxserver.sock"
However, the embedded system currently uses the dropbear cut-down implementation of sshd due to memory constraints, and I am having difficulty getting OpenSSH (together with yuma123) installed on the embedded system due to the size of the executables, dependent libraries, etc.
Can I get/amend the dropbear sshd to give me similar functionality? Can I cut-down the openssh sshd drastically to a small enough size? Any (other) suggestions on a good way forward to resolve this?
According to the dropbear source code, the only subsystem built into the dropbear SSH server is optional support for SFTP, which I'll describe below.
Supporting another subsystem requires making source code changes to dropbear. If that's an option, the code at issue is in the function sessioncommand() in the file svr-chansession.c:
#if DROPBEAR_SFTPSERVER
    if ((cmdlen == 4) && strncmp(chansess->cmd, "sftp", 4) == 0) {
        m_free(chansess->cmd);
        chansess->cmd = m_strdup(SFTPSERVER_PATH);
    } else
#endif
    {
        m_free(chansess->cmd);
        return DROPBEAR_FAILURE;
    }
Basically, you'd add some code similar to the "sftp" section which checks for your subsystem name and invokes the desired command.
If you don't want to build a custom version of dropbear, then you might be able to make do with a forced command. There are two ways to do this.
If the users of this subsystem will be authenticating with an SSH key, then you could put a "command" directive in the user's authorized_keys file:
command="/usr/sbin/netconf-subsystem..." ssh-rsa AAAAB3NzaC1yc2...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When a client authenticates with the key from this line in the file, then any command, shell or subsystem request from the client will cause dropbear to invoke the listed command instead of the command requested by the client. This is an OpenSSH server feature that's also supported by dropbear.
Alternately, dropbear has a command-line option -c which lets you specify a forced command for all clients:
dropbear -p 830 ... -c '/usr/sbin/netconf-subsystem ...'
Dropbear will invoke the specified command for any client connecting to this dropbear instance. If you want to permit clients to do other things besides running that command, you'd have to run two instances of dropbear, listening on different ports or addresses.
SFTP Subsystem support:
I'll describe this because it doesn't seem to be well documented.
If dropbear is compiled with the option DROPBEAR_SFTPSERVER, then dropbear will invoke the command "/usr/libexec/sftp-server" for clients requesting the "sftp" subsystem. For that to work, the administrator would normally build a copy of the OpenSSH sftp-server program from the OpenSSH source code, and install the resulting program as /usr/libexec/sftp-server.
Changing dropbear to run a different command requires altering the dropbear source code and recompiling dropbear.

Ansible on Ubuntu

I have created two Ubuntu machines on VirtualBox. I am able to ping each machine from the other's terminal.
However, when I ping from Ansible I get the following error.
My /etc/ansible/hosts file is :
Can I get the solution for this ?
If you read the documentation you will notice:
This is NOT ICMP ping
So the way the ping command works and the way the Ansible ping module works are different.
Reading further, Ansible ping module is described as:
Try to connect to host, verify a usable Python and return pong on success.
So Ansible tries to connect (and the default connection method is SSH) and execute Python code.
In your case Ansible failed to connect.
SSH connectivity is a prerequisite, so you need to configure that before you'll be able to use Ansible. For Ubuntu 16.04 you might need to additionally install OpenSSH.
Refer to the official guide for the installation and configuration steps.
On top of that, Ubuntu Server 16.04 does not install Python 2 by default, so you need to manually add it (Ansible support for Python 3 is still experimental).
Refer to answers under this question on AskUbuntu.
Then you still might need to set a parameter in the inventory file to tell Ansible to use Python 2. Or make Python 2 the default interpreter.
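As a sketch, with hypothetical VM addresses, the inventory could then look like this (ansible_python_interpreter is the parameter that forces Python 2):

```ini
# /etc/ansible/hosts -- hypothetical example
[vms]
192.168.56.101 ansible_python_interpreter=/usr/bin/python2
192.168.56.102 ansible_python_interpreter=/usr/bin/python2
```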

How do I fix startup delays in chef-solo

Using chef-solo and calling it directly, with logging set as high as possible, this is what I see in a tail:
[2015-03-24T12:21:48+11:00] INFO: Forking chef instance to converge...
[2015-03-24T12:21:48+11:00] DEBUG: Fork successful. Waiting for new chef pid: 5571
[2015-03-24T12:21:48+11:00] DEBUG: Forked instance now converging
[2015-03-24T12:21:49+11:00] INFO: *** Chef 11.8.2 ***
[2015-03-24T12:21:49+11:00] INFO: Chef-client pid: 5571
[2015-03-24T12:25:41+11:00] DEBUG: Building node object for localhost.localdomain
...
and then it continues and goes on to work perfectly. Notice that between the last two lines there is a delay which varies between 3 and 5 minutes, with no explanation of what it's doing and nothing obvious in netstat or top. I'm at a loss for how to troubleshoot this.
I thought it might be a proxy thing, but setting the correct proxies in /etc/chef/client.rb changed nothing. Any ideas how I get rid of this delay?
The first thing that Chef does when it starts - whether it's chef-solo or chef-client, is profile the system with ohai.
A main difference between chef-solo and chef-client is that debug log level will show the ohai output with chef-client, but it does not with chef-solo.
Depending on your system's configuration, this can take a long time, as Ohai runs through a plethora of plugins. In particular, if you have a Linux system that is connected to Active Directory, it can take a while to retrieve the user/group records via AD, which is why Ohai supports disabling plugins. Also, if you're running Chef and Ohai on a Windows system, it can take a long time.
To disable plugins, you need to edit the appropriate application's configuration file.
chef-solo uses /etc/chef/solo.rb by default
chef-client uses /etc/chef/client.rb by default
Add the following line to the appropriate config:
Ohai::Config[:disabled_plugins] = [:Passwd]
to disable the user/group lookup that might use Active Directory.
Also, I see from the output that you're using Chef 11.8.2 which came out December 3, 2013 (over a year ago as of this answer). It's possible that a performance improvement was introduced since then.
However, if you're not specifically beholden to chef-solo, you might try using chef-client with local mode. There is more information about how to switch in a blog post by Julian Dunn on Chef's site. If you need further assistance I strongly suggest the Chef irc channel on irc.freenode.net, or the chef mailing list.

How to add DataNode to Cloudera hadoop

I am trying to add a DataNode to my existing single-DataNode cluster. Since my Unix server does not have access to the Internet, Cloudera Manager is unable to perform the installation and throws the error below. Is there any other CLI method to add a DataNode instead of using CM?
BEGIN yum info jdk
Loaded plugins: product-id, subscription-manager
Updating Red Hat repositories.
http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/4.7.2/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'archive.cloudera.com'"
Yes, there are two approaches (I'm keeping this Cloudera-specific since this is what you mentioned).
1) Download tarballs and install everything manually. There is a guide available here, and I think it is not a good candidate to be copied here on Stack Overflow, because it is very long and vendor-specific (if the document moves, the title is "Installation Path C - Installation Using Tarballs").
2) For large internal installations, you may consider setting up your own repositories with RPM packages that you can access with yum. For this you'll need to add a repo file under /etc/yum.repos.d pointing to some accessible host that's going to be your repo server (of course, you'll have to put your files there in advance). More details here. You can download RPMs here. I have never had to do this myself, so I hope this will point you in the right direction.

Systemtap for production server

I want to use SystemTap to extract details of my Linux production server via remote access. I have some doubts regarding this:
Is it necessary to have the same kernel on both the Linux production server and the Linux development server? If not, how do I add support for that?
What are the minimum requirements on the production server? Is it necessary to compile the kernel of the production server with debuginfo?
How do I enable users in a particular group to run the stap scripts?
The kernels running on the production server and the development server do not need to be identical. The SystemTap Beginners Guide describes a cross-compile workflow, where instrumentation for one kernel version is built on a machine currently running a different kernel version. This is described in:
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/cross-compiling.html
The production server just needs the systemtap-runtime package. The production server does not need the kernel-devel or kernel-debuginfo installed when using the cross compile method.
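As a sketch of that cross-compile workflow (the probe, module name, and kernel version below are placeholder examples):

```shell
# On the development host: build a module (-m) for the production
# kernel release (-r), stopping after the compile pass (-p 4).
stap -r 2.6.32-431.el6.x86_64 -e 'probe begin { println("hi"); exit() }' \
     -m stap_hello -p 4

# Copy stap_hello.ko to the production server, then run it there with
# only systemtap-runtime installed:
staprun stap_hello.ko
```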
There are stapusr and stapdev groups that allow people to run scripts. stapusr allows one to run existing scripts in the /lib/modules/$(uname -r)/systemtap directory (probably what is wanted in the case of running cross-compiled SystemTap scripts). stapdev allows one to compile a script.
The stapusr and stapdev groups are described in:
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/using-usage.html
Another capability in SystemTap version 1.4 and later is remote execution:
development_host% stap --remote=user@deployment_host -e 'probe begin { exit() }'
where cross-compilation, module transfer, trace data transfer are all automagically done via an ssh transport, as long as the deployment_host has corresponding systemtap-runtime bits installed.
