Identifying RHEL8 version details

I used to identify whether the installed RHEL OS was the "Server" or "Workstation" variant like this:
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Today, with RHEL 8, I get this output:
cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
It looks like the Server/Workstation detail is no longer available.
How is this possible? How can I get the information I need?
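For what it's worth, /etc/os-release on RHEL 8 tells the same story; illustrative (trimmed) output, exact fields vary by minor release:
cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.5 (Ootpa)"
ID="rhel"
PRETTY_NAME="Red Hat Enterprise Linux 8.5 (Ootpa)"
The Server/Workstation distinction no longer appears in any of these fields.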

Is there any ISO for RHEL 8/9 Server or Workstation?
SOLUTION VERIFIED by Red Hat
The same RHEL 8/9 ISO can be used as a Server, Workstation, or Compute Node.
Environment
Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 9
Issue
Cannot find ISO of Server or Workstation for RHEL 8/9
Selecting 'Switch to Version 8' changes the product variant from 'Red Hat Enterprise Linux Server' to 'Red Hat Enterprise Linux for x86_64' on the Customer Portal downloads page.
Resolution
Separate ISOs for Workstation or Server are NOT provided from RHEL 8 onwards.
The offered variant for RHEL 8 is "Red Hat Enterprise Linux for x86_64".
You can use the 'System Purpose Role' to record the intended use of a Red Hat Enterprise Linux 8 system, i.e. Server, Workstation, or Compute Node; however, using this feature is optional.
You can configure 'System Purpose' in one of the following ways:
During image creation
During a GUI installation when using the Connect to Red Hat screen to register your system and attach your Red Hat subscription
During a Kickstart installation, via Kickstart automation scripts (see the snippet after this list)
Post-installation, with the syspurpose command-line tool
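A hedged Kickstart sketch (as I recall from the RHEL 8 Kickstart reference; verify the command against your release, and the values are the ones listed below):
# Kickstart snippet: record System Purpose at install time
syspurpose --role="Red Hat Enterprise Linux Server" --sla="Premium" --usage="Production"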
Possible components in System Purpose:
Role:
Red Hat Enterprise Linux Server
Red Hat Enterprise Linux Workstation
Red Hat Enterprise Linux Compute Node
Service Level Agreement:
Premium
Standard
Self-Support
Usage:
Production
Disaster Recovery
Development/Test
The valid values for these components are listed on your RHEL 8 machine in:
# cat /etc/rhsm/syspurpose/valid_fields.json
To display the currently set components:
# syspurpose show
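On a system where all three components have been set, the output is JSON along these lines (illustrative; the values are stored in /etc/rhsm/syspurpose/syspurpose.json):
{
  "role": "Red Hat Enterprise Linux Server",
  "service_level_agreement": "Premium",
  "usage": "Production"
}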
ROLE
To set the intended ROLE of the system: # syspurpose set-role "VALUE"
Replace VALUE with the role that you want to assign:
Red Hat Enterprise Linux Server
Red Hat Enterprise Linux Workstation
Red Hat Enterprise Linux Compute Node
To unset the ROLE: # syspurpose unset-role
SLA
To set the intended SLA for the system: # syspurpose set-sla "VALUE"
Replace VALUE with the entitlement support level that you want to assign:
Premium
Standard
Self-Support
To unset the SLA: # syspurpose unset-sla
USAGE
To set the intended USAGE of the system: # syspurpose set-usage "VALUE"
Replace VALUE with the usage that you want to assign:
Production
Disaster Recovery
Development/Test
To unset the USAGE: # syspurpose unset-usage
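Putting the three commands together, a minimal end-to-end sketch (run as root; the values are from the lists above):
syspurpose set-role "Red Hat Enterprise Linux Server"
syspurpose set-sla "Premium"
syspurpose set-usage "Production"
syspurpose show          # confirm what was recorded
syspurpose unset-role    # clears only the role; SLA and usage stay set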
Note:
To learn more about System Purpose, check # man syspurpose or the documentation:
RHEL 8: 6.6.5. Configuring System Purpose
RHEL 9: 12.3.5.1. Introduction to System Purpose
Root Cause
The same RHEL 8/9 ISO can be used as a Server, Workstation, or Compute Node.

Related

Oracle docker container not working properly on Mac M1 BigSur [duplicate]

This question already has answers here: Oracle 12c docker setup on Apple M1 (6 answers). Closed 10 months ago.
I was recently trying to create a Docker container and connect it with my SQLDeveloper, but I started facing some strange issues.
I downloaded the Docker image using the pull command below:
docker pull store/oracle/database-enterprise:12.2.0.1-slim
Then I started the container from Docker Desktop using port 1521. The container started with a warning.
Terminal output:
docker run -d -it -p 1521:1521 --name oracle store/oracle/database-enterprise:12.2.0.1-slim
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
5ea14c118397ce7ef2880786ac1fac061e8c92f9b09070edffe365653dcc03af
Now when I try to connect to the DB using the command below:
docker exec -it 5ea14c118397 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"
SQL> connect sys as sysdba;
Enter password:
ERROR:
ORA-12547: TNS:lost contact
it shows this error. The password I am using is Oradoc_db1.
After seeing some suggestions, I tried the command below for connecting to sqlplus:
docker exec -it f888fa9d0247 bash -c "source /home/oracle/.bashrc; sqlplus / as sysdba"
SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 6 06:15:58 2021
Copyright (c) 1982, 2016, Oracle. All rights reserved.
ERROR:
ORA-12547: TNS:lost contact
I also tried adding execute permissions to the oracle binary in $ORACLE_HOME, but it didn't work.
Please help me out, as I am stuck and don't know what to do.
There are two issues here:
Oracle Database is not supported on ARM processors, only Intel. See here: https://github.com/oracle/docker-images/issues/1814
Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance
Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1)
The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.
Linux being what it is (flexible), lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors, and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and it's always when, not if, in IT) things don't work as expected. You might not even be able to tell when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the supported hardware architecture and operating system version. Anything else is a complete gamble.
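To see the platform mismatch from the first point concretely, a quick check (illustrative; the tag is the one from the question):
docker image inspect --format '{{.Os}}/{{.Architecture}}' store/oracle/database-enterprise:12.2.0.1-slim
# prints linux/amd64, while uname -m on an M1 host reports arm64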
I will answer my own question; it is similar to the accepted answer.
Since the Oracle DB is very large and complex to handle and maintain on systems like a Mac, it is recommended to stick to the system configurations mentioned in the Docker image documentation when running the container.
I used MariaDB and also tried MongoDB (NoSQL) instead of Oracle, as these are lightweight, don't need strict configurations to run, and were up in no time. I could easily connect both of them: MariaDB with SQLDeveloper and MongoDB with MongoDB Compass.
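If it helps anyone taking the same route, a minimal sketch of running MariaDB this way (the password and tag are placeholders; MARIADB_ROOT_PASSWORD is the variable documented by the official image):
docker run -d --name mariadb -p 3306:3306 -e MARIADB_ROOT_PASSWORD=changeme mariadb:latest
# then point SQLDeveloper at localhost:3306 as user root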
There are some custom Oracle lightweight versions as well, but they are not foolproof on every system.

Exposing USB COM serial devices to Docker Windows container

I have a script that I want to run in a container. It depends on a USB camera to run.
My host is a Windows 10 IoT Enterprise machine. I tried exposing my host's COM serial ports through WSL 2 to a Linux container, but exposing COM serial ports through WSL 2 is currently not supported, so my only option is Windows containers.
I followed the process from the Microsoft documentation to expose devices (mirroring my host's OS and version, which is 20H2, and using the GUID for COM serial ports):
docker run --isolation=process --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windows:20H2
The container is created and runs for a couple of seconds, but when I connect to the CLI it displays this error: container 5245931c246cc8181221354158bf7ddad64eb5099a493354a68a39d94feed6f0 encountered an error during hcsshim::System::CreateProcess: failure in a Windows system call: The requested virtual machine or container operation is not valid in the current state. (0xc0370105)
Docker information:
Server: Docker Engine - Community
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.24)
Go version: go1.13.15
Git commit: b0f5bc3
Built: Wed Jun 2 11:56:41 2021
OS/Arch: windows/amd64
Experimental: false
Any feedback will be appreciated, thanks!
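One thing worth ruling out first: with --isolation=process, the container image build must match the host build exactly, and hcsshim errors like this are commonly discussed in that context. A hedged way to compare the two (the image tag is the one from the question):
ver
# e.g. Microsoft Windows [Version 10.0.19042.1348]; build 19042 corresponds to 20H2
docker image inspect --format "{{.OsVersion}}" mcr.microsoft.com/windows:20H2
# the two build numbers should match for process isolation to work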

Jenkins Master on VM Linux Centos and Jenkins Slave on VM Windows 7

Hi,
Please can you give me your opinions on this subject:
1. I have installed Jenkins on a Linux CentOS VM.
2. I have installed Jenkins on a Windows 7 VM, which contains the Java Maven project.
3. I shared the project from the Windows 7 VM to the Linux CentOS VM, and I succeeded in accessing the project from my Linux VM.
My goal is to make the Linux VM the Jenkins master and the Windows VM the Jenkins slave.
What should I do to set up the communication between the Jenkins instances?
Thanks in advance.
Best regards,
Two ways of making a Windows slave (after defining the node on the master), on the Windows server:
-browse to the Jenkins node listing and launch the web agent (Java Web Start), which has to be done manually each time the server starts up
-after starting the agent, register the slave as a service
There is plenty of documentation; I found this just now.
Definitely prefer a service: you don't have to worry about the web agent dying for some unknown reason with nobody noticing, and it is much better to have it auto-start.
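For reference, a hedged sketch of the manual launch for Jenkins of that era (the node name win7-node, the master host, and the secret are placeholders; the exact URL is shown on the node's page on the master):
java -jar agent.jar -jnlpUrl http://<master-host>:8080/computer/win7-node/slave-agent.jnlp -secret <secret>
Once the agent window is up, its File menu offers "Install as Windows Service", which turns it into the auto-starting service recommended above.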

Windows cluster - SSH seems to be failing

Two physical systems, each is running Server 2008
Installed DataStax Community (version 2.0.7, 64-bit) on each (that is the version number in the DataStax package I downloaded, according to the file name).
OpsCenter running locally shows a running 1-node cluster. I can execute IO on the system at the command line (using cassandra-stress).
The system names are "5017-cassandra-1" and "5017-cassandra-2"
I'd like to create a cluster in which both nodes participate. This is not a production environment (I'm just trying to learn).
From OpsCenter on 5017-cassandra-1 I go to Nodes (I see 1 node, of course), then Add Nodes.
I leave the "Package" drop-down as default (but the latest version shown in the drop-down is 2.0.6) and enter the IP address of 5017-cassandra-2. I add the Administrator user name and password in the "Node Credentials (sudo)" fields, press "Add Nodes", and get:
Error provisioning cluster: Unable to SSH to some of the hosts
Unable to SSH to 10.108.14.224:
global name 'get_output' is not defined
Reading that I needed to add OpenSSL, I installed the runtime redistributables (on both systems) and Win64 OpenSSL-1_0_1h.
The error persists.
Any suggestions or a link to a step-by-step guide would be appreciated.
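One hedged first check: OpsCenter provisions new nodes over SSH, and Windows Server 2008 does not ship an SSH server by default, so it is worth confirming that anything is listening on port 22 of the target before digging into the OpenSSL side (address taken from the error above):
ssh Administrator@10.108.14.224
# or, without an ssh client, just probe the port:
telnet 10.108.14.224 22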

Systemtap for production server

I want to use SystemTap to extract details from my Linux production server via remote access. I have some doubts regarding this:
Is it necessary to have the same kernel on both the Linux production server and the Linux development server? If not, how do I add support for that?
What are the minimum requirements on the production server? Is it necessary to compile the production server's kernel with debuginfo?
How do I enable users in a particular group to run stap scripts?
The kernel running on the production server and the development server do not need to be identical. The SystemTap Beginners Guide describes a cross-compile, where instrumentation for one kernel version is built on a machine currently running a different kernel version. This is described in:
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/cross-compiling.html
The production server just needs the systemtap-runtime package. When using the cross-compile method, the production server does not need kernel-devel or kernel-debuginfo installed.
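A hedged sketch of that workflow (the kernel release, script, and module names are illustrative):
# On the development host: build a module for the production kernel,
# stopping after pass 4 so only the compiled module (.ko) is produced:
stap -r 2.6.32-431.el6.x86_64 -p4 -m stap_example some_script.stp
# Copy stap_example.ko to the production server (which only needs
# systemtap-runtime) and run it there:
staprun stap_example.ko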
There are stapusr and stapdev groups that allow people to run scripts. stapusr allows one to run existing scripts in the /lib/modules/`uname -r`/systemtap directory (probably what is wanted in the case of running cross-compiled systemtap scripts). stapdev allows one to compile a script.
The stapusr and stapdev groups are described in:
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/using-usage.html
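Membership is granted with ordinary group management; a minimal sketch (user names are placeholders):
# Allow "alice" to run pre-built modules from the systemtap module directory:
usermod -aG stapusr alice
# Allow "bob" to compile and run his own scripts as well:
usermod -aG stapdev bob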
Another capability in systemtap >1.4 is remote execution:
development_host% stap --remote=user@deployment_host -e 'probe begin { exit() }'
where cross-compilation, module transfer, trace data transfer are all automagically done via an ssh transport, as long as the deployment_host has corresponding systemtap-runtime bits installed.
