nftables ipv4 not loading - linux-kernel

I have a question about nftables in Ubuntu. I wanted to use nftables in a scientific application that sends image files from a remote telescope over UDP/IP from within a C++ telescope control program, and someone suggested that nftables might be useful.
So I used the following commands to get nftables:
sudo apt update
sudo apt install nftables
Following the installation guidelines on the nftables wiki, I loaded the module:
modprobe nf_tables
lsmod | grep nf_tables
nf_tables 143360 0
nfnetlink 16384 3 nf_conntrack_netlink,nf_tables
but I cannot load the family modules:
modprobe nf_tables_ipv4
modprobe: FATAL: Module nf_tables_ipv4 not found in directory /lib/modules/5.4.0-70-generic
This is similar for nf_tables_ipv6.
I am running Ubuntu 18.04 with kernel 5.4.0-70. My first question is how I should load nf_tables_ipv4. Is this a general problem with the distribution packages that building from source would solve, or is there a different underlying reason why I cannot load these family modules?
Second, are these modules actually necessary for my application?
And third: given that I do not have any experience with it, is it actually a good idea to use nftables in my application? It is a very basic question, but I thought maybe someone could tell me if I am at least on the right track here.
Thanks a lot

Related

C/C++ Build standalone executable (including libraries)

I wrote a C++ program with multiple classes and divided it into multiple files, which is intended to run on an embedded device (raspi 2, to be specific) that has no internet access. Building the source and installing the dependencies on every device would therefore be very laborious.
Is there a way to compile the program on one of the devices (as an exception to the others, this one has internet access), so that I can just transfer the build files, e.g. via USB, to the other devices? This should also include the various libraries I used, so that I don't have to install them on every device. These are mainly std, but also a self-cloned and built library and a library installed with apt (I linked the libraries used as an example, but they shouldn't affect the process, I guess).
I'm using CMake. Is there an option to make CMake compile a program into a (set of) files that run independently of the system-installed libraries? In other words: they run without the required libraries being installed on the system, and are instead shipped with the build files.
Edit:
My main problem is that I cannot get a certain dependency onto the target devices due to the lack of internet access. Can I build the package and also include the library in that build, without having to install it?
I'm not sure I fully understand why you need internet for your deployment, but I can give you several methods and you can choose the one that seems best to you.
Method A: Cloning the SD card image
During your development phase, you ended up with a working RPi device and you want to replicate it. You can use tools to duplicate your image onto another SD card, N times, and eventually this could be sufficient to make it work.
Pros: Very quick method
Cons: Usually, your development phase involves adjustments, trying different tools, different versions, etc., so your original RPi image is not clean and you would replicate that. Definitely not valid for an industrial project, but it could be sufficient for a personal one.
Method B: Create deployments scripts
You can create a deployment script on your computer to copy, configure, and install what you need. Assuming you start with a certain version of Raspberry Pi OS, you flash it, then you boot your Pi, which is connected via Ethernet for example. You can then start a script on your computer that will:
Copy needed sources / packages / binaries
(Optional) compile sources (if you have a compiler that suits your need on RPi OS)
Miscellaneous configuration
To do all these, a script like this can do the job:
#!/bin/bash
PI_USERNAME="pi"
PI_PASSWORD="raspberry"
PI_IPADDRESS="192.168.0.3"
# example of how to execute a command remotely
sshpass -p ${PI_PASSWORD} ssh ${PI_USERNAME}@${PI_IPADDRESS} sudo apt update
# example of how to copy a local file onto the RPi
sshpass -p ${PI_PASSWORD} scp local_dir/nlohmann/json.hpp ${PI_USERNAME}@${PI_IPADDRESS}:/home/pi/sources
Important notes:
Hard-coded credentials are not recommended.
This script assumes you are using Linux, but you'll find equivalent tools under Windows.
This assumes your RPi has a fixed IP, but you can still improve the script to automatically find the RPi on the network (lots of possibilities).
Pros: While creating this deployment script, you force yourself to start from a clean image, and no dirty environment is duplicated.
Cons: Takes a bit longer than method A
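To illustrate the "copy needed packages" part of method B when the target has no internet access, here is a rough sketch (libfoo-dev is just a placeholder package name): download the .deb archives and their dependencies on the Pi that has internet, then install them offline with dpkg on the others.
# on the Pi with internet access: fetch the .deb files without installing them
sudo apt-get clean
sudo apt-get install --download-only libfoo-dev
# the package and its dependencies now sit in the apt cache
mkdir -p ~/offline-debs && cp /var/cache/apt/archives/*.deb ~/offline-debs/
# on each offline Pi, after copying ~/offline-debs over (USB stick, scp, ...):
sudo dpkg -i ~/offline-debs/*.deb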
Method C: Create your own Raspberry PI image using Yocto
Yocto is a tool for creating your own images, and it supports the Raspberry Pi. You can customize absolutely everything and produce an SD card image that you can simply flash onto your RPis' SD cards.
Pros: Very complete tool, industrial process
Cons: Quite complicated to deal with, not suitable for beginners, time cost
Since you said in the comments that it's only for 10 devices and that you are a bit wary of cross-compiling, I would not recommend the Yocto method for you. I would not recommend method A either, mostly because of the dirty environment duplication (but that is up to you in the end). Method B with the deployment script is probably the best way to go.

Installing glibc-2.29 from source in Kali Linux

I need a debug version of glibc. I have some doubts regarding the installation of glibc-2.29 from source in Kali Linux. Based on the post https://www.tldp.org/HOWTO/html_single/Glibc-Install-HOWTO/,
To install glibc you need a system with nothing running on it, since many processes (for example sendmail) always try to use the library and therefore block the files from being replaced. Therefore we need a "naked" system, running nothing except the things we absolutely need. You can achieve this by passing the boot option
init=/bin/bash to your kernel.
it says that we need to install glibc in a single-user-mode environment. In another post, https://www.tldp.org/HOWTO/Glibc2-HOWTO-5.html,
single user mode is not required for installation, only backing up the old libraries. I don't know which one to follow. Can anyone help?
I found that we can use glibc without installing it, by building it from source with the '-g' flag added via ./configure and setting the LD_LIBRARY_PATH variable as follows after building:
LD_LIBRARY_PATH=/path/to/the/build_directory gdb -q application
Note: this solution only works when the system GLIBC and the built-from-source GLIBC exactly match, as explained here.
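For reference, here is a rough sketch of that build-without-installing approach, assuming the glibc-2.29 sources are unpacked in ~/glibc-2.29 (glibc requires a separate build directory and must be built with optimization; the prefix is only there so an accidental make install cannot clobber the system libc):
mkdir ~/glibc-build && cd ~/glibc-build
~/glibc-2.29/configure --prefix=/opt/glibc-2.29 CFLAGS="-O2 -g"
make -j"$(nproc)"
# do not "make install" over the system glibc; point the debugger at the build tree instead
LD_LIBRARY_PATH=~/glibc-build gdb -q ./application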
I need a debug version of glibc.
Most distributions supply ready-made libc6-dbg packages that match your installed GLIBC. This is the best approach unless you are a GLIBC developer (or plan to become one).
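On a Debian-based system such as Kali, that typically amounts to the following; gdb then picks up the separate debug symbols from /usr/lib/debug automatically:
sudo apt install libc6-dbg
gdb -q ./application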
I have some doubts regarding the installation of glibc-2.29 from source in Kali Linux.
Installing / replacing system libc is almost guaranteed to render your system unbootable if there are any mistakes. Recent example.
Before you begin, make sure you either know how to recover from such a mistake (have a rescue disk ready and know how to use it), or you have nothing of value on the system and can re-image it from installation media in the likely case that you do make a mistake.
The document you referenced talks about upgrading from libc5 to libc6. It was last updated on 22 June 1998, and is more than 20 years old. I suggest you find some more recent sources. Current documentation does suggest doing make install while in single-user mode.

docker on OSX slow volumes

I'm trying to use Docker beta on OSX, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6 s page load times. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication between the Docker containers and your Mac, instead of the standard OSX file sharing, which is currently very slow, either due to bugs or due to the way it works.
Follow these steps exactly.
1.) Clone this repo here (https://github.com/IFSight/d4m-nfs) in your home directory. To do this open up terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can also do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder, create a new file in its etc folder (i.e. ~/d4m-nfs/etc), and title it d4m-nfs-mounts.txt
3.) Add the following lines of code to this.
/Users/yourusername:/Users/yourusername:0:0
What the above does is allow you to still use relative folders with docker-compose; the trailing 0:0 is the uid:gid mapping applied to the export.
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences (File Sharing) and do the following:
Make sure only /tmp is showing and NOTHING ELSE. I mean nothing else; it won't work if anything else is there, since it will create conflicts with the NFS exports that the script will set up for you later. Restart Docker and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
Edit: The correct way to run the command above, as another user on GitHub (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to decide which shell should run it.
If done correctly there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
This is what my exports file looks like.
EDIT: IMPORTANT -- remove the /private and /Volumes entries! It should only contain /Users/username now!
If you see anything other than this, you were not running it with bash. If you make any errors, you can quickly get to the exports file on a Mac as follows and just clear it out to start over.
In Finder, just select Go to Folder
and then type /etc/exports
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this: https://github.com/IFSight/d4m-nfs/issues/3
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you cloned the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be under /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right when running the script it will look like this.
Then simply run your docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
Also note that if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is where we mounted it. If your project is somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credits to Eugen Mayer for setting this up.
EDIT:
The first improvement is implemented in the edge release; https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanations from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it increased performance 16 times! (1.8 s vs ~30 s)
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster unless you want to change the format of your disk.
But maybe the bottleneck is the CPU or RAM.
You can check that with the command docker stats. These are by default set to 2 cores and 2 GB RAM. You can change this in the Docker for Mac GUI.
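As an illustration of that check (the same numbers are also visible in the Docker for Mac preferences):
# how many CPUs / how much RAM the Docker VM currently has
docker info | grep -E 'CPUs|Total Memory'
# live per-container usage, to see whether CPU or RAM is the real bottleneck
docker stats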
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
In the latest Docker (17.06.0-ce-mac18), volumes mounted with :cached seem to perform quite decently.
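For example, a minimal sketch (image and paths are made up; the relevant part is the :cached suffix on the bind mount, and the same suffix works in a docker-compose volumes entry such as .:/var/www/html:cached):
docker run -d -v "$PWD":/var/www/html:cached php:7-apache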
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Not able to install Gentoo Linux

Here is my situation: when I downloaded Gentoo and started to run it, I downloaded the stage III tarball from links and then tried to extract it. A stream of white sentences flows down my screen really fast for about a minute, just like in the YouTube tutorial I was viewing. However, after that, instead of going to the correct stage, it says it cannot write, not enough space on device. I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue, though in general I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it... Here is some advice I hope can help you (most important: digest the handbook and follow it carefully; not that I'm saying "RTFM", it's just that for Gentoo the handbook is essential, and without it you can get lost if you're just starting).
From my experience, the "stream of white sentences" would be from verbosely un-tar'ing your stage3. Usually I only want to see the errors, so my suggestion is to remove the "v" (i.e. go from "tar xjvpf" to "tar xjpf") so that only errors appear when un-tar'ing. The caveat is that you'll be wondering whether it hung or is busy un-tar'ing. Use Alt-F1 and Alt-F2 (if in console/tty mode) to log in on another TTY and run 'ps auxf' to see if it's still tar'ing. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
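In command form, that might look roughly like this (the stage3 filename and its location are just examples):
cd /mnt/gentoo
tar xjpf stage3-*.tar.bz2      # no "v": only errors are printed
# on another TTY (Alt-F2) or terminal tab, check whether tar is still busy
ps auxf | grep tar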
Also, learn the command "df"; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 onto your ramdisk (grin) rather than onto your mounted target (i.e. "/mnt/gentoo"). Mount your root '/' device at '/mnt/gentoo', cd to that mounted path, and then try it (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot; again, follow the handbook as carefully as you can. Oh, also: distros such as Debian hybrids, including Ubuntu, use a symlink for shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part).
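A rough sketch of those mount steps, assuming your root partition is /dev/sda3 and your boot partition is /dev/sda1 (adapt to your actual layout, and follow the handbook for the complete chroot preparation):
mount /dev/sda3 /mnt/gentoo
mkdir -p /mnt/gentoo/boot && mount /dev/sda1 /mnt/gentoo/boot
cd /mnt/gentoo
# pseudo-filesystems needed before chrooting
mount -t proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash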
Other useful commands, if you're confused by (or new to) mounting devices: learn to experiment with commands such as 'lsblk' and 'mount' (by itself) to inspect the sizes of your partitions (again, 'df' comes in handy as well) and to see which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or for some filesystems "-N") to label/name your devices, so that when you use commands such as 'mount' or 'lsblk' you can spot them more easily. If you're using a GUI/desktop version of some distro, there are hopefully tools such as "gparted" which can give you visual information about your devices, which can be helpful. One thing I'd advise you to stay away from if you're just starting is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and the MBR written (i.e. grub-install), try booting, and have fun first (oh, and if you can avoid installing a GUI like Gnome/KDE from the get-go, avoid that as well - you'll get into questions such as "should I use systemd or OpenRC", and then get hit by the obstacle that some Gnome parts need systemd while you've chosen OpenRC, and so on).
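For instance (the device name is only an example):
mkfs.ext4 -L gentoo-root /dev/sda3            # label the filesystem while creating it
lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT    # labels make partitions easy to spot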
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why packages matter rather than downloading each library manually and compiling them one by one, etc.). I hope this won't discourage you from switching to another distro, but if the installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you don't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build it with hand-picked libraries so that leaner can be faster, but if you have a fast machine, why compile binaries when somebody who is an expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating because they were able to detect hardware (legacy and new), but these days I usually tell people, "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well... how about some 411, like the size of your HDD and exactly how you partitioned it? Linux will look for specific directories and, if they are missing, will instead start to install into the root dir. How you partition is an important first step. Once you have a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var, and a swap.

"Hacking: The Art of Exploitation" - Assembly Inconsistencies in book examples vs. my system's gcc

I am studying "Hacking: The Art of Exploitation". I am trying to follow the code examples, but for some reason the assembly code simply does not match what I get on my actual Linux (running in VirtualBox as a guest). I have made sure that I installed a 32-bit Linux OS. Are there any args I can pass to gcc that let me compile the code into assembly that closely matches what is given in the book?
I would be fine reconciling the differences between the book and what I see if they were minor, but the difference is stark. I somehow don't like running the code from the "preconfigured incubator environment", as this inhibits my skill development.
I've actually been in the same boat -- for the last week or two I've tried a ton of ways to produce comparable assembly code in my normal development environment (LMDE), including chroot, compiling with the -m32 flag, installing an x86 Ubuntu, etc., and nothing really worked. Today I found http://www.nostarch.com/hackingCD.htm, followed the instructions, and was able to get the LiveCD to boot in VMware Workstation 10. Here's what I did:
Download the ISO from the link above (though it should work with the LiveCD as well)
Create a .vmx file and copy and paste the config from the link
I took out the section defining the cdrom device, since I was using an ISO
Open the file with VMware Workstation -- if you are using the ISO, go to "Edit VM Settings" and set up a cdrom device and point it to the ISO
The VM booted without any issues
I know this isn't as convenient as going through the examples in your main OS/system, and that you were trying to avoid using the LiveCD, but after doing a lot of research I've discovered that this is an extremely common issue and hopefully this answer helps someone. Using the LiveCD might not be ideal but it is still a heck of a lot better than dual booting.
for some reason the assembly code simply does not match what I get on my actual Linux
The most likely reason is that the book was published in 2008, and used then-stable GCC (you can see GCC release history here).
GCC that you are using now is likely much newer, and so generates significantly different (and one hopes better) code.
Is there any args that I can pass to gcc that lets me compile the code into an assembly that matches closely with the ones given in the book?
No. You can try to compile and install a version from 2008, perhaps 4.2.3 or 4.3.0, and check whether that gives you closer output.
P.S. It looks like the first revision of the book is from 2003, and it's unlikely that the authors rebuilt all of their examples for the second edition in 2008, so perhaps try GCC 3.3 instead?
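If you want to try that, a rough sketch of building an old GCC into its own prefix follows; note that releases this old often need patches or an older host toolchain to compile at all, so treat it as a starting point rather than a recipe:
wget https://ftp.gnu.org/gnu/gcc/gcc-4.3.0/gcc-4.3.0.tar.bz2
tar xjf gcc-4.3.0.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.3.0/configure --prefix=$HOME/gcc-4.3.0 --enable-languages=c,c++
make && make install
# then build the book examples with the old compiler
$HOME/gcc-4.3.0/bin/gcc -g -o example example.c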
This is why the book comes with a LiveCD with a linux distro and all of the example source code from the book on there. All of the examples in the book match exactly with what will happen in the LiveCD.
Just run the included LiveCD using VirtualBox or VMware and follow along with the book using that. If you don't have the CD, it can be downloaded from a torrent provided by No Starch (linked from their website)
It doesn't matter whether the output of gcc is different; the only thing that changes is the memory addresses. Plus, you said you are using a VM to run it, meaning that the memory you get is dummy memory. Try booting the ISO and running it directly; it will be almost the same.
https://www.youtube.com/watch?v=pIN7oFkz5rM
