How to compile QEMU for 64 bit - compilation

I'm trying to compile the QEMU source code for 64-bit, but it is being compiled as 32-bit.
These are the commands which I'm using:
#!/bin/bash
cd qemu-1.6.0/
export PKG_CONFIG_PATH=`pwd`/../support_libs/libs/glib/lib/pkgconfig:`pwd`/../support_libs/libs/zlib/lib/pkgconfig
export CFLAGS="-mabi=64"
QEMU_CFLAGS="-mabi=64" sudo ./configure --prefix=`pwd`/../support_libs/libs/qemuu --target-list=mips64-softmmu --enable-kvm --enable-fdt --with-coroutine=sigaltstack --extra-cflags="-I`pwd`/../support_libs/libs/glib/include/glib-2.0/"
sudo make && sudo make install
I'm saving it in a file named "build.sh" and running this script as ./build.sh.
Any help would be appreciated.

You are executing the script as ./build.sh, so the variables you export are only set in that child shell session; and when you then run sudo make inside the script, sudo does not pass the exported variables through to make (the default sudoers env_reset policy strips them).
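You can see this for yourself (a quick check; under the default sudoers env_reset policy an exported variable such as CFLAGS is not visible to the command run through sudo):
export CFLAGS="-mabi=64"
sudo sh -c 'echo "CFLAGS inside sudo: $CFLAGS"'   # prints an empty value under the default policy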
Change the script a little by removing sudo from the make and make install lines, and run the whole script with sudo instead:
#!/bin/bash
cd qemu-1.6.0/
export PKG_CONFIG_PATH=`pwd`/../support_libs/libs/glib/lib/pkgconfig:`pwd`/../support_libs/libs/zlib/lib/pkgconfig
export CFLAGS="-mabi=64"
QEMU_CFLAGS="-mabi=64" sudo ./configure --prefix=`pwd`/../support_libs/libs/qemuu --target-list=mips64-softmmu --enable-kvm --enable-fdt --with-coroutine=sigaltstack --extra-cflags="-I`pwd`/../support_libs/libs/glib/include/glib-2.0/"
make && make install
Now run the script as:
sudo ./build.sh

Related

Trying to set GOPATH and GOROOT in AWS EC2 user data, but it is not working

I am trying to set up GOPATH and GOROOT in my AWS EC2 Ubuntu 20.04 user data, but it never works: every time I connect to the EC2 instance and view the log in /var/log/cloud-init-output.log it says
go: not found, yet if I type the echoed export commands in by hand they work.
I am setting up multiple EC2 instances from this template, so I can't type the commands into every instance myself.
The UserData part of the CloudFormation YAML is below:
UserData:
  Fn::Base64: |
    #!/bin/bash
    wget https://dl.google.com/go/go1.14.4.linux-amd64.tar.gz
    tar -C /usr/local -zxvf go1.14.4.linux-amd64.tar.gz
    mkdir -p ~/go/{bin,pkg,src}
    echo 'export GOPATH=$HOME/go' >> ~/.bashrc
    echo 'export GOROOT=/usr/local/go' >> ~/.bashrc
    echo 'export PATH=$PATH:$GOPATH/bin:$GOROOT/bin' >> ~/.bashrc
    echo 'export GO111MODULE=auto' >> ~/.bashrc
    source ~/.bashrc
    apt -y update
    apt -y install mongodb wget git
    systemctl start mongodb
    apt -y install git gcc cmake autoconf libtool pkg-config libmnl-dev libyaml-dev
    go get -u github.com/sirupsen/logrus
    cd ~
    git clone --recursive https://github.com/williamlin0504/free5gcWithOCF.git
    cd free5gcWithOCF
    make
And here is the error inside /var/log/cloud-init-output.log
Error while user data runs
Is anyone familiar with this? Please, I need some help.
In your error message, the Makefile at line 30 runs a program, bin/amf.
That program appears to be a shell script with a problem in line 1; the nature of the problem is "go: not found".
If line 1 of that shell script invokes the bare word go and the PATH cannot find it, then this is exactly what happens.
You probably need to alter the last line of your user-data shell script to say
PATH=/usr/local/go/bin:$PATH make
I know you have a source ~/.bashrc command earlier in the script that is supposed to set this up, but it doesn't do what you think it does.
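For instance, the tail of the user-data script could set PATH explicitly before anything calls go (a sketch; /usr/local/go/bin matches where the tarball above is extracted, and the remaining lines are unchanged from the question):
export PATH=$PATH:/usr/local/go/bin
go get -u github.com/sirupsen/logrus
cd ~
git clone --recursive https://github.com/williamlin0504/free5gcWithOCF.git
cd free5gcWithOCF
make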

How to auto run commands when log on to Windows Subsystem for Linux

I have Ubuntu 20.04 running within WSL 2 on a Windows 10 computer.
Every time I log in to Ubuntu, I have to manually execute these four lines by pasting them one by one into the Windows 10 Terminal.
sudo apt-get update && sudo apt-get install -yqq daemonize dbus-user-session fontconfig
sudo daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
exec sudo nsenter -t $(pidof systemd) -a su - $LOGNAME
sudo /etc/init.d/xrdp start
May I know if there is a way to skip this manual process?
You can use the .bashrc file to execute commands whenever you open the terminal. It is located in your $HOME directory.
cd $HOME
nano .bashrc
Place your commands at the end of the file, then press Ctrl+X and Y to save.
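For example, the block appended to the end of ~/.bashrc could look like this (a sketch based on the four commands in the question; the two guards and the sleep are assumptions I added so the block is skipped once systemd is up and so the exec line does not re-trigger inside the su - login shell):
if [ "$(ps -p 1 -o comm=)" != "systemd" ]; then
    if [ -z "$(pidof systemd)" ]; then
        sudo apt-get update && sudo apt-get install -yqq daemonize dbus-user-session fontconfig
        sudo daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
        sleep 2                          # give systemd a moment to start
        sudo /etc/init.d/xrdp start      # started before the exec below, since exec replaces this shell
    fi
    exec sudo nsenter -t "$(pidof systemd)" -a su - "$LOGNAME"
fi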

skip installing confirm('yes' or 'no') in Dockerfile [duplicate]

How do I install Anaconda / Miniconda without prompts on the Linux command line?
Is there a way to pass a -y kind of option to agree to the T&Cs, the suggested installation location, etc. by default?
This can be achieved with bash miniconda.sh -b (thanks @darthbith).
The command-line usage for this can only be seen with the -h flag, not --help, so I missed it.
To install Anaconda to another place, use the -p option:
bash anaconda.sh -b -p /some/path
AFAIK pyenv lets you install Anaconda/Miniconda.
After a successful installation of pyenv:
pyenv install --list
pyenv install miniconda3-4.3.30
For a quick silent installation of Miniconda I use a wrapper script that can be executed from the terminal without even downloading it first. It takes the installation destination path as an argument (in this case ~/miniconda) and does some validation too.
curl -s https://gist.githubusercontent.com/mherkazandjian/cce01cf3e15c0b41c1c4321245a99096/raw/03c86dae9a212446cf5b095643854f029b39c921/miniconda_installer.sh | bash -s -- ~/miniconda
Silent installation can be done like this, but it doesn't update the PATH variable, so you can't simply run conda afterwards:
cd /tmp/
curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -u
Here -b means batch/silent mode, and -u means update the existing installation of Miniconda at that path, rather than failing.
You need to run additional commands to initialize PATH and other shell init scripts, e.g. for Bash:
source ~/miniconda3/bin/activate
conda init bash
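Putting those pieces together, a fully non-interactive install could look like this (a sketch; the ~/miniconda3 prefix is an assumption chosen to match the activate path above):
cd /tmp/
curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -u -p "$HOME/miniconda3"   # -b batch, -u update existing, -p install prefix
source "$HOME/miniconda3/bin/activate"   # make conda available in this shell
conda init bash                          # and in future interactive shells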

How to run (./) a bash script located in the cloud?

Using Ubuntu 16.04, what I do is:
Download the .sh script: wget https://gist.githubusercontent.com/...
Make the .sh file executable: sudo chmod guo+x sysInit.sh
Execute the script: sudo ./sysInit.sh
I was wondering if it is possible to run the script directly from the web.
It would be something like: sudo ./ https://gist.githubusercontent.com/....
Is it possible to do that?
You can use curl to download and run your script. I don't think it's installed by default on Ubuntu, so you'll have to run sudo apt-get install curl first if you want to use it. To download and run your script with sudo, just run:
curl -sL https://gist.githubusercontent.com/blah.sh | sudo sh
Be warned this is very risky and not advised for security reasons; see this related question: why-using-curl-sudo-sh-is-not-advised
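A safer pattern (a sketch using the same gist URL placeholder) is to download the script to a file first so you can read it before running it as root:
curl -fsSL -o sysInit.sh https://gist.githubusercontent.com/...
less sysInit.sh          # inspect the script before executing anything with sudo
chmod +x sysInit.sh
sudo ./sysInit.sh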
Yes, it is possible using curl and piping the result to sh.
Try the following command.
curl https://url-to-your-script-file/scriptFile.sh | sh
No, sudo only works from a command line prompt in a shell

Bash Scripting; giving commands to programs stdin

I am very new to bash scripting. I have the following script:
cp /etc/apt/sources.list /var/chroot/etc/apt/sources.list
chroot /var/chroot/
apt-get update
apt-get --simulate install $a > output
I actually want the last two commands to run in the chroot environment, but I do not know how to pass them to it; I searched but could not find out how. I also want chroot to exit after the commands have run, but it currently hangs. What can I do to prevent this?
EDIT: For future visitors:
cp /etc/apt/sources.list /var/chroot/etc/apt/sources.list
chroot /var/chroot apt-get update > /dev/null
chroot /var/chroot apt-get --simulate install nodejs
The command you want to run in the chroot environment must be given to chroot as an argument; see the manual page. Without a command argument, chroot starts an interactive shell inside the new root, which is why your original script appears to hang.
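If you need several commands in one chroot session, one option is to feed them to a shell inside the chroot via a here-document (a sketch; it assumes /bin/bash exists inside /var/chroot):
chroot /var/chroot /bin/bash <<'EOF'
apt-get update > /dev/null
apt-get --simulate install nodejs > /root/output
EOF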
