Error Handling in CI - codeigniter

I read the Error Handling documentation, but if I use log_message('debug', 'Hi I m in Cart Controller'); or log_message('info', 'Hi I m in Cart Controller'); no message is logged; it only works for log_message('error', 'Hi I m in Cart Controller');
Any idea what my mistake is?

You have to set the log threshold in application/config/config.php:
/*
|--------------------------------------------------------------------------
| Error Logging Threshold
|--------------------------------------------------------------------------
|
| You can enable error logging by setting a threshold over zero. The
| threshold determines what gets logged. Threshold options are:
|
| 0 = Disables logging, Error logging TURNED OFF
| 1 = Error Messages (including PHP errors)
| 2 = Debug Messages
| 3 = Informational Messages
| 4 = All Messages
|
| For a live site you'll usually only enable Errors (1) to be logged otherwise
| your log files will fill up very fast.
|
*/
$config['log_threshold'] = 2;
Note that each threshold also covers the levels below it: a threshold of 2 logs error and debug messages, but the info call in your example is only logged at a threshold of 3 or higher.

Related

How to get execution times of Azure Function steps?

I have an Azure Function set up as illustrated below. I need to understand the execution times for the Trigger, Function, and Output steps, because even after the function is "warm", the first request takes up to 7 seconds. After that the execution time drops to something like 100 ms.
So far I switched the logging level in host.json to
"logging": {
"fileLoggingMode": "always",
"logLevel": {
"default": "Information",
"Host.Results": "Error",
"Function": "Trace",
"Host.Aggregator": "Trace"
}
}
and watched the simple telemetry in live logs:
8:48:07 AM | Trace Request successfully matched the route with name 'main' and template 'api/{*segments}'
8:48:06 AM | Trace Executing 'Functions.main' (Reason='This function was programmatically called via the host APIs.', Id=...)
That's pretty much all I see. Also, when opening the Application - Functions - Function - main log from Visual Studio, the log levels still have an [Information] preamble.
What I would like to see is basically a duration-time output as in the monitor (from Functions - Main in web portal) section, but split by steps. For example:
date | step | success | result code | duration (ms)
---------------------------------------------------------
.... | trigger | success | 200 | 39
.... | function | success | 200 | 32
.... | output | success | 200 | 37
How to get the duration time for Trigger, Function, and Output steps on every execution?

Why does this AWS Config Rule have no results available?

I created an AWS Config rule and lambda operating on resource type AWS::RDS::DBInstance and Trigger Type = 'Configuration changes'. CloudWatch logs verify that the function return is ...
{ "ResultToken": "<Redacted>",
"Evaluations": [
{"ComplianceResourceId": "db-<Redacted>",
"ComplianceResourceType": "AWS::RDS::DBInstance",
"ComplianceType": "COMPLIANT",
"OrderingTimestamp": 1576676501.52}
]
}
And although the rule is successfully invoked, the AWS console claims that the compliance status of the rule is 'No results available'. Additionally, this bit of PowerShell script using the AWSPowerShell module ...
Get-CFGComplianceByConfigRule -configrulename security-group-of-rds | select -expandProperty Compliance
... returns ...
INSUFFICIENT_DATA
Why isn't the reported compliance status COMPLIANT?
My first thought is that I've got the schema for the return object wrong, but based on the example functions that AWS has supplied, it looks correct to me.
The short answer is:
Evaluation results need to be reported via a call to the Config service's PutEvaluations API (put_evaluations in boto3) rather than through the actual Lambda return; AWS Config ignores the return value.
The Lambda return can then simply be the evaluations list.
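To make the schema concrete, here is a minimal, hypothetical sketch of building one evaluation record. The field names follow the PutEvaluations API; the configuration item shown is a made-up fragment, and in a real rule the resulting list would be passed to configClient.put_evaluations() together with the ResultToken from the triggering event.

```python
import json

def build_evaluation(configuration_item, compliance_type, annotation='Ok'):
    # Shape one record the way config.put_evaluations() expects it; a list of
    # these records goes into the Evaluations= parameter.
    return {
        'ComplianceResourceType': configuration_item['resourceType'],
        'ComplianceResourceId': configuration_item['resourceId'],
        'ComplianceType': compliance_type,  # COMPLIANT / NON_COMPLIANT / NOT_APPLICABLE
        'OrderingTimestamp': configuration_item['configurationItemCaptureTime'],
        'Annotation': annotation,
    }

# Trimmed-down configuration item, invented for illustration only.
item = {
    'resourceType': 'AWS::RDS::DBInstance',
    'resourceId': 'db-EXAMPLE',
    'configurationItemCaptureTime': '2019-12-18T12:21:41.520Z',
}
print(json.dumps(build_evaluation(item, 'COMPLIANT'), indent=2))
```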
The long answer: here is my solution that works.
AWS Lambda function (Python 3.8) for the Config rule:
'''
#####################################
## Gherkin ##
#####################################
Rule Name:
security-group-of-rds
Description:
Checks that all Oracle databases are using the correct security group and only that group.
Trigger:
Configuration Change on AWS::RDS::DbInstance . Scope of changes == Resources.
Reports on:
AWS::RDS::DbInstance
Parameters:
| ----------------------|-----------|-----------------------------------------------|
| Parameter Name | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| vpcSecurityGroupId | string | Id of the required vpc Security Group. |
| ----------------------|-----------|---------------------------------------------- |
| Assume-Rule-Role | boolean | If true, switch to the config role. |
| | | Defaults to false. |
|-----------------------|-----------|-----------------------------------------------|
| Mode                  | Enum      | Range: Fully-Operational-DeathStar            |
|                       |           |        Put-Evaluations-Test                   |
|                       |           |        Lambda-Console-Test                    |
|                       |           | Defaults to Fully-Operational-DeathStar .     |
|                       |           | Meanings:                                     |
|                       |           | Fully-Operational-DeathStar:                  |
|                       |           |   Normal operation.                           |
|                       |           | Put-Evaluations-Test: Set TestMode to True    |
|                       |           |   when invoking put_evaluations.              |
|                       |           | Refer: https://docs.aws.amazon.com/config/latest/APIReference/API_PutEvaluations.html
|                       |           | Lambda-Console-Test:                          |
|                       |           |   Do not call put_evaluations() at all.       |
|-----------------------|-----------|-----------------------------------------------|
Envars:
| ----------------------|-----------|-----------------------------------------------|
| Envar Name | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| PROXY | string | http(s) proxy. Default to no proxy. |
|-----------------------|-----------|-----------------------------------------------|
| NO_PROXY | comma- | list of exemptions to proxy. |
| | separated-| Defaults to no exemptions |
| | list | |
|-----------------------|-----------|-----------------------------------------------|
| TURN_OFF_SSL | boolean | Turns off SSL verification. Defaults to False |
|-----------------------|-----------|-----------------------------------------------|
| REGION | string | Region for config service. |
| | | Defaults to the lambda region |
|-----------------------|-----------|-----------------------------------------------|
| CONFIG_ENDPOINT | string | Customised end-point for config service |
| | | Defaults to the standard end-point. |
|-----------------------|-----------|-----------------------------------------------|
Feature:
In order to: protect data confidentiality for oracle-ee RDS databases.
As: a Developer
I want: To ensure that all databases have the correct security group attached.
Scenarios:
Scenario 1:
Given: Wrong security group
And: The group is inactive
Then: No conclusion.
Scenario 2:
Given: Wrong security group
And: The group is active
And: type == oracle-ee
Then: return NON_COMPLIANT
Scenario 3:
Given: Right security group
And: The group is active
And: type == oracle-ee
Then: return COMPLIANT
Scenario 4:
Given: No security group
And: type == oracle-ee
Then: return NON_COMPLIANT
Scenario 5:
Given: type != oracle-ee
Then: return NOT_APPLICABLE
Required Role Policy Statements:
If you are not assuming the config rule role, then the lambda role needs all these
actions, except sts:AssumeRole.
If you ARE assuming the config rule role, then the lambda role needs the logs and sts
actions, and the config rule role needs the logs and config actions.
| ----------------------|-------------|-----------------------------------------------|
| Action | Resource | Condition | Why do we need it? |
| ----------------------|-------------|---------------------------------------------- |
| logs:CreateLogGroup | * | Always | For logging. |
| logs:CreateLogStream | | | |
| logs:PutLogEvents | | | |
| ----------------------|-------------|------------|----------------------------------|
| sts:AssumeRole | Your AWS | if Assume-Rule-Role == True | If you want the |
| | config role | | lambda to execute in the main |
| | | | config role. |
| ----------------------|-------------|------------|----------------------------------|
| config:PutEvaluations | * | Always | To put the actual results. |
| ----------------------|-------------|------------|----------------------------------|
Inline Constants Configuration:
| ----------------------|-----------|-----------------------------------------------|
| Identifier | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| defaultRegion | string | Default region, if we can't get it from the |
| | | Lambda environment. |
| ----------------------|-----------|---------------------------------------------- |
'''
import json
import datetime
import time
import boto3
import botocore
import os
proxy = None
noProxy = None
configClient = None
defaultRegion = 'ap-southeast-2'
def setEnvar(name, value):
    # Set, or remove (when value == ''), an environment variable.
    if os.environ.get(name, '') != value:
        if value != '':
            os.environ[name] = value
        else:
            del os.environ[name]
def setProxyEnvironment():
    # Sometimes lambdas sit in VPCs which require proxy forwards
    # in order to access some or all internet services.
    global proxy
    global noProxy
    proxy = os.environ.get('PROXY', None)
    noProxy = os.environ.get('NO_PROXY', None)
    if proxy is not None:
        setEnvar('http_proxy', proxy)
        setEnvar('https_proxy', proxy)
    if noProxy is not None:
        setEnvar('no_proxy', noProxy)
def jpath(dict1, path, sep='.', default=None):
    # Traverse a hierarchy of dictionaries, as described by a path, and find a value.
    ret = dict1
    if isinstance(path, str):
        particleList = path.split(sep)
    else:
        particleList = path
    for particle in particleList:
        if isinstance(ret, dict):
            ret = ret.get(particle, None)
        elif isinstance(ret, (list, tuple)) and particle.isdigit():
            idx = int(particle)
            if 0 <= idx < len(ret):
                ret = ret[idx]
            else:
                ret = None
        else:
            ret = None
        if ret is None:
            break
    if ret is None:
        ret = default
    return ret
def coerceToList(val):
    # Make it into a list.
    if val is None:
        return list()
    else:
        return val

def coerceToBoolean(val):
    if isinstance(val, str):
        return val.lower() == 'true'
    else:
        return bool(val)
def get_region():
    # Find the region for AWS services.
    return os.environ.get('REGION', os.environ.get('AWS_REGION', defaultRegion))
def get_assume_role_credentials(role_arn):
    # Switch to a role. We need sts:AssumeRole for this.
    if coerceToBoolean(os.environ.get('TURN_OFF_SSL', False)):
        sts_client = boto3.client('sts', verify=False)
    else:
        sts_client = boto3.client('sts')
    try:
        assume_role_response = sts_client.assume_role(RoleArn=role_arn,
                                                      RoleSessionName='configLambdaExecution')
        print('Switched role to ' + role_arn)
        return assume_role_response['Credentials']
    except botocore.exceptions.ClientError as ex:
        # Scrub error message for any internal account info leaks.
        if 'AccessDenied' in ex.response['Error']['Code']:
            ex.response['Error']['Message'] = "AWS Config does not have permission to assume the IAM role."
        else:
            ex.response['Error']['Message'] = "InternalError"
            ex.response['Error']['Code'] = "InternalError"
        print(str(ex))
        raise ex
def get_client(service, event):
    # Get the AWS service client for the specified service.
    # If specified, switch roles and go through a custom service end-point.
    region = get_region()
    ruleRole = jpath(event, 'executionRoleArn')
    doAssumeRuleRole = (coerceToBoolean(jpath(event, 'ruleParameters-parsed.Assume-Rule-Role', '.', False))
                        and (ruleRole is not None))
    parms = {}
    if coerceToBoolean(os.environ.get('TURN_OFF_SSL', False)):
        parms['verify'] = False
    if region is not None:
        parms['region_name'] = region
    if doAssumeRuleRole:
        credentials = get_assume_role_credentials(ruleRole)
        parms['aws_access_key_id'] = credentials['AccessKeyId']
        parms['aws_secret_access_key'] = credentials['SecretAccessKey']
        parms['aws_session_token'] = credentials['SessionToken']
    endPointEnvarName = service.upper() + '_ENDPOINT'
    endPointEnvarValue = os.environ.get(endPointEnvarName, '')
    if endPointEnvarValue != '':
        parms['endpoint_url'] = endPointEnvarValue
    return boto3.client(service, **parms)
def get_configClient(event):
    # Get the AWS 'config' service client, and store it in a global singleton.
    global configClient
    if configClient is None:
        configClient = get_client('config', event)
    return configClient
def initiate_Globals():
    # Reset the client singleton and set up the proxy forward, if required.
    global configClient
    configClient = None
    setProxyEnvironment()
def evaluate_compliance(configuration_item, ruleParameters):
    # Evaluate the compliance of the given changed resource.
    # Return a dictionary in the standard 'evaluation' schema.
    referenceVpcSecurityGroupId = ruleParameters.get('vpcSecurityGroupId', '')
    annotation = 'Ok'
    if ((jpath(configuration_item, 'configuration.engine') == 'oracle-ee') and
            (configuration_item.get('resourceType', '') == 'AWS::RDS::DBInstance')):
        ok = False
        for vpcSecurityGroup in coerceToList(jpath(configuration_item, 'configuration.vpcSecurityGroups')):
            actualId = vpcSecurityGroup.get('vpcSecurityGroupId', '')
            ok = ((actualId == referenceVpcSecurityGroupId) or
                  (vpcSecurityGroup.get('status', 'inactive') != 'active'))
            if not ok:
                # The security group was active, but was not equal to the prescribed one.
                annotation = 'Wrong security group'
                break
        if ok:
            # All active security groups, and at least one, are the prescribed one.
            compliance_type = 'COMPLIANT'
        else:
            if referenceVpcSecurityGroupId == '':
                annotation = 'Malformed rule parameter configuration'
            if annotation == 'Ok':
                annotation = 'No security groups'
            compliance_type = 'NON_COMPLIANT'
    else:
        # This rule only deals with oracle-ee RDS databases.
        compliance_type = 'NOT_APPLICABLE'
    evaluation = dict()
    evaluation['ComplianceResourceType'] = configuration_item['resourceType']
    evaluation['ComplianceResourceId'] = configuration_item['resourceId']
    evaluation['OrderingTimestamp'] = configuration_item['configurationItemCaptureTime']
    evaluation['ComplianceType'] = compliance_type
    evaluation['Annotation'] = annotation
    return evaluation
def printEnvars(envarList):
    # Log selected environment variables, if they are set.
    for envarName in envarList.split(','):
        envarValue = os.environ.get(envarName, None)
        if envarValue is not None:
            print(f'Envar {envarName} == {envarValue}')
def lambda_handler(event, context):
    global configClient
    # Phase 1: Setup and parsing input.
    # Uncomment this when debugging:
    # print('event == ' + json.dumps(event))
    printEnvars('PROXY,NO_PROXY,TURN_OFF_SSL,REGION,CONFIG_ENDPOINT')
    initiate_Globals()
    invokingEvent = json.loads(event.get('invokingEvent', '{}'))
    event['invokingEvent-parsed'] = invokingEvent
    ruleParameters = json.loads(event.get('ruleParameters', '{}'))
    event['ruleParameters-parsed'] = ruleParameters
    print('Config rule Arn == ' + event.get('configRuleArn', ''))
    print('Rule parameters == ' + json.dumps(ruleParameters))
    get_configClient(event)
    configuration_item = invokingEvent['configurationItem']
    # Phase 2: Evaluation.
    evaluation = evaluate_compliance(configuration_item, ruleParameters)
    # Phase 3: Reporting.
    evaluations = list()
    evaluations.append(evaluation)
    mode = ruleParameters.get('Mode', 'Fully-Operational-DeathStar')
    if mode == 'Fully-Operational-DeathStar':
        response = configClient.put_evaluations(Evaluations=evaluations, ResultToken=event['resultToken'])
    elif mode == 'Put-Evaluations-Test':
        response = configClient.put_evaluations(Evaluations=evaluations, ResultToken=event['resultToken'], TestMode=True)
    else:
        response = {'mode': mode}
    # Uncomment this when debugging:
    # print('response == ' + json.dumps(response))
    print('evaluations == ' + json.dumps(evaluations))
    return evaluations
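As a side note, the jpath helper above implements a simple dotted-path lookup in which digit particles index into lists, which is how paths like 'configuration.vpcSecurityGroups' are resolved. A condensed, self-contained copy with a couple of probes on a made-up event fragment:

```python
def jpath(dict1, path, sep='.', default=None):
    # Condensed copy of the helper above, for illustration only.
    ret = dict1
    particles = path.split(sep) if isinstance(path, str) else path
    for particle in particles:
        if isinstance(ret, dict):
            ret = ret.get(particle)
        elif isinstance(ret, (list, tuple)) and particle.isdigit():
            idx = int(particle)
            ret = ret[idx] if 0 <= idx < len(ret) else None
        else:
            ret = None
        if ret is None:
            break
    return default if ret is None else ret

# Invented fragment of a configuration item.
event = {'configuration': {'vpcSecurityGroups': [{'status': 'active'}]}}
print(jpath(event, 'configuration.vpcSecurityGroups.0.status'))  # digit particle indexes the list
print(jpath(event, 'configuration.missing', default='n/a'))      # falls back to the default
```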

Protect Responsive FileManager from direct access

I am using responsive FileManager 9.14.0 with TinyMCE 5.0.16 and Laravel 6 running on Nginx 1.16.1
I have the following folder structure:
| public
| |- uploads
| |- thumbs
| |- filemanager
| |- js
| | |- tinymce
| | | |- plugins
| | | | |- responsivefilemanager
| | | | | |- plugin.min.js
I use Laravel authentication to protect a 'create' page where the user can add text using TinyMCE and upload images using RFM as a TinyMCE plugin.
But RFM is directly accessible at the following URL:
http://www.myhost.test/filemanager/dialog.php
How can I prevent this? I want RFM to be accessible only from the TinyMCE editor.
I'm not familiar with Laravel, but ...
in Responsive File Manager 9.0 there is a folder called config that contains config.php:
| public
| |- uploads
| |- thumbs
| |- filemanager
| | |- config
| | | |- config.php
| |- js
| | |- tinymce
| | | |- plugins
| | | | |- responsivefilemanager
| | | | | |- plugin.min.js
Open config.php and change
define('USE_ACCESS_KEYS', false); // TRUE or FALSE -------- to ------> define('USE_ACCESS_KEYS', true); // TRUE or FALSE
This forces Responsive File Manager to use access keys, blocking all unauthorised attempts to access your files and folders.
In the same file, at line 190, add an auth_key for each user who needs to use the file manager.
For example:
username: jim auth_key: a1s2d3f4g5h6j7k8l9mm
username: lisa auth_key: zqxwd3f4vrbth6j7btny
So line 190 should be rewritten as below:
'access_keys' => array( "a1s2d3f4g5h6j7k8l9mm" , "zqxwd3f4vrbth6j7btny"),
Go to your form and add a button/link to access Responsive File Manager:
<a href="https://www.example.com/admin/responsive-filemanager/filemanager/dialog.php?akey=<?php echo {{{your authenticated user AUTH_KEY}}}; ?>">Open file manager</a>
If there is no {{{your authenticated user AUTH_KEY}}}, there are two ways:
1) Add an auth_key column to your users table and generate keys, identical in both the database and the config.php file, for the users who need access to the file manager.
2) Use the username instead of an auth_key, so your config at line 190 becomes:
'access_keys' => array( "jim" , "lisa"),
and your Responsive File Manager access link will look like this:
<a href="https://www.example.com/admin/responsive-filemanager/filemanager/dialog.php?akey=jim">Open file manager</a>
jim is static here; you should make it dynamic by calling a function that returns the authenticated user's username and putting it after akey= in the link.
Now, if the akey value in the link is found in the access_keys array, the Responsive File Manager page will work; otherwise it shows ACCESS DENIED!
If it's still relevant, I can show you how I did it in Laravel 8.
I proceeded from the opposite direction: if the user is logged in as an admin, there is no need to check him, so USE_ACCESS_KEYS becomes FALSE; otherwise TRUE.
Therefore, if he is NOT authenticated as an administrator, he will NOT get access to ResponsiveFileManager.
To do this, add such a function to the responsive_filemanager/filemanager/config/config.php file, somewhere at the beginning of the file.
(Specify your own paths to the files '/vendor/autoload.php' and '/bootstrap/app.php'.)
function use_access_keys() {
    require dirname(__DIR__, 4) . '/vendor/autoload.php';
    $app = require_once dirname(__DIR__, 4) . '/bootstrap/app.php';
    $request = Illuminate\Http\Request::capture();
    $request->setMethod('GET');
    $app->make('Illuminate\Contracts\Http\Kernel')->handle($request);
    if (Auth::check() && Auth::user()->hasRole('admin')) {
        return false;
    }
    return true;
}
and then this line:
define('USE_ACCESS_KEYS', false);
replace with this:
define('USE_ACCESS_KEYS', use_access_keys());
And one more thing.
If, after that, opening the FileManager suddenly produces the error "Undefined variable: lang",
then open responsive_filemanager/filemanager/dialog.php,
find the array $get_params, and in it change:
'lang' => 'en',

Laravel session doesn't work for other sub-domains

I have a problem with sessions in Laravel 5.3.
For our project, we have 2 sub-domains:
1 for the development environment
1 for the preproduction environment
For the first sub-domain, no problem, it all works. But on the second sub-domain nothing works. I concluded that the problem is the session, because on my second sub-domain no cookies are created.
I have looked over the session file in my config and updated the data. This is my config/session.php file:
<?php
return [
/*
|--------------------------------------------------------------------------
| Default Session Driver
|--------------------------------------------------------------------------
|
| This option controls the default session "driver" that will be used on
| requests. By default, we will use the lightweight native driver but
| you may specify any of the other wonderful drivers provided here.
|
| Supported: "file", "cookie", "database", "apc",
| "memcached", "redis", "array"
|
*/
'driver' => env('SESSION_DRIVER', 'file'),
/*
|--------------------------------------------------------------------------
| Session Lifetime
|--------------------------------------------------------------------------
|
| Here you may specify the number of minutes that you wish the session
| to be allowed to remain idle before it expires. If you want them
| to immediately expire on the browser closing, set that option.
|
*/
'lifetime' => 3600,
'expire_on_close' => false,
/*
|--------------------------------------------------------------------------
| Session Encryption
|--------------------------------------------------------------------------
|
| This option allows you to easily specify that all of your session data
| should be encrypted before it is stored. All encryption will be run
| automatically by Laravel and you can use the Session like normal.
|
*/
'encrypt' => false,
/*
|--------------------------------------------------------------------------
| Session File Location
|--------------------------------------------------------------------------
|
| When using the native session driver, we need a location where session
| files may be stored. A default has been set for you but a different
| location may be specified. This is only needed for file sessions.
|
*/
'files' => storage_path('framework/sessions'),
/*
|--------------------------------------------------------------------------
| Session Database Connection
|--------------------------------------------------------------------------
|
| When using the "database" or "redis" session drivers, you may specify a
| connection that should be used to manage these sessions. This should
| correspond to a connection in your database configuration options.
|
*/
'connection' => null,
/*
|--------------------------------------------------------------------------
| Session Database Table
|--------------------------------------------------------------------------
|
| When using the "database" session driver, you may specify the table we
| should use to manage the sessions. Of course, a sensible default is
| provided for you; however, you are free to change this as needed.
|
*/
'table' => 'sessions',
/*
|--------------------------------------------------------------------------
| Session Cache Store
|--------------------------------------------------------------------------
|
| When using the "apc" or "memcached" session drivers, you may specify a
| cache store that should be used for these sessions. This value must
| correspond with one of the application's configured cache stores.
|
*/
'store' => null,
/*
|--------------------------------------------------------------------------
| Session Sweeping Lottery
|--------------------------------------------------------------------------
|
| Some session drivers must manually sweep their storage location to get
| rid of old sessions from storage. Here are the chances that it will
| happen on a given request. By default, the odds are 2 out of 100.
|
*/
'lottery' => [2, 100],
/*
|--------------------------------------------------------------------------
| Session Cookie Name
|--------------------------------------------------------------------------
|
| Here you may change the name of the cookie used to identify a session
| instance by ID. The name specified here will get used every time a
| new session cookie is created by the framework for every driver.
|
*/
'cookie' => 'name_cookie', // identical for all sub-domains, but I have tried changing the name for each domain
/*
|--------------------------------------------------------------------------
| Session Cookie Path
|--------------------------------------------------------------------------
|
| The session cookie path determines the path for which the cookie will
| be regarded as available. Typically, this will be the root path of
| your application but you are free to change this when necessary.
|
*/
'path' => '/',
/*
|--------------------------------------------------------------------------
| Session Cookie Domain
|--------------------------------------------------------------------------
|
| Here you may change the domain of the cookie used to identify a session
| in your application. This will determine which domains the cookie is
| available to in your application. A sensible default has been set.
|
*/
'domain' => env('SESSION_DOMAIN', null),
/*
|--------------------------------------------------------------------------
| HTTPS Only Cookies
|--------------------------------------------------------------------------
|
| By setting this option to true, session cookies will only be sent back
| to the server if the browser has a HTTPS connection. This will keep
| the cookie from being sent to you if it can not be done securely.
|
*/
'secure' => env('SESSION_SECURE_COOKIE', false),
/*
|--------------------------------------------------------------------------
| HTTP Access Only
|--------------------------------------------------------------------------
|
| Setting this value to true will prevent JavaScript from accessing the
| value of the cookie and the cookie will only be accessible through
| the HTTP protocol. You are free to modify this option if needed.
|
*/
'http_only' => true,
];
I haven't found a solution despite much searching.
Thanks
I found the solution: my problem came from a bad CloudFront configuration, not from Laravel. Thanks to everyone for your help.
Did you try this? SESSION_DOMAIN=*.mydomain.com, so cookies produced in the dev environment would be accessible to the preprod environment.
EDIT: SESSION_DOMAIN=.mydomain.com was the right answer, my bad.
Test it out:
'domain' => '*.domain.com'
You have to specify a wildcard domain in the SESSION_DOMAIN environment variable.
Edit: too late, Stack Exchange didn't show me the existing answers.
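Putting the working answer into config terms, a sketch of the relevant settings (note the leading dot rather than a * wildcard; the dot is what makes the cookie valid for all sub-domains):

```ini
# .env (same value in both environments)
SESSION_DOMAIN=.mydomain.com
```

config/session.php then picks this up through 'domain' => env('SESSION_DOMAIN', null).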

run a program with tcsetattr raw mode in background

I need to run a program as-is in the background. The catch is that the program makes a tcsetattr() call and sets raw mode as follows:
struct termios tio;
if (tcgetattr(fileno(stdin), &tio) == -1) {
    perror("tcgetattr");
    return;
}
_saved_tio = tio;
tio.c_iflag |= IGNPAR;
tio.c_iflag &= ~(ISTRIP | INLCR | IGNCR | ICRNL | IXON | IXANY | IXOFF);
tio.c_lflag &= ~(ISIG | ICANON | ECHO | ECHOE | ECHOK | ECHONL);
// #ifdef IEXTEN
tio.c_lflag &= ~IEXTEN;
// #endif
tio.c_oflag &= ~OPOST;
tio.c_cc[VMIN] = 1;
tio.c_cc[VTIME] = 0;
if (tcsetattr(fileno(stdin), TCSADRAIN, &tio) == -1)
    perror("tcsetattr");
else
    _in_raw_mode = 1;
The implication is that as soon as I run my program with '&' and press enter, the process shows as 'stopped'. The ps aux output shows 'T' as the process state, meaning it is not running.
How can I keep this program running in the background? The issue is that I can't modify the program.
For complete details: I actually need to use ipmitool with 'sol' as a background process.
Any help is appreciated!
Thanks
It is hard to give a complete answer about what is going wrong without knowing how ipmitool is actually used/started, but I'll try to add some details.
All the options in the question are needed to adjust I/O for the program (see comments):
// ignore parity errors
tio.c_iflag |= IGNPAR;
// remove any interpretation of special characters (CR, NL) and flow control on input
tio.c_iflag &= ~(ISTRIP | INLCR | IGNCR | ICRNL | IXON | IXANY | IXOFF);
// switch off signal generation for special characters, enable non-canonical mode,
// no echo, no reaction to the kill character, etc.
tio.c_lflag &= ~(ISIG | ICANON | ECHO | ECHOE | ECHOK | ECHONL);
// disable recognition of some further special characters
// #ifdef IEXTEN
tio.c_lflag &= ~IEXTEN;
// #endif
// disable implementation-defined output processing
tio.c_oflag &= ~OPOST;
// minimum number of characters to read in non-canonical mode
tio.c_cc[VMIN] = 1;
// no read timeout
tio.c_cc[VTIME] = 0;
// apply all the adjustments once pending output has drained (TCSADRAIN)
if (tcsetattr(fileno(stdin), TCSADRAIN, &tio) == -1)
    perror("tcsetattr");
else
    _in_raw_mode = 1;
More details on terminal configuring are here and here.
In other words, this part of the code configures the standard input of the process into a completely silent, "raw" mode. That is also why the process stops when backgrounded: a background process that reads from, or calls tcsetattr() on, its controlling terminal receives SIGTTIN/SIGTTOU and is stopped, which is the 'T' state you see in ps.
Despite the lack of information, you may also try sending kill -CONT %PID% to the process, or provide a file as its standard input so that it never touches the terminal.
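If it helps to see the effect concretely, the same flag changes can be reproduced from Python's termios/tty modules. This is only an illustrative sketch; it opens a pseudo-terminal pair so it can run non-interactively, instead of touching the real stdin:

```python
import os
import termios
import tty

# A pty pair stands in for the controlling terminal / stdin.
master_fd, slave_fd = os.openpty()

before = termios.tcgetattr(slave_fd)
tty.setraw(slave_fd, when=termios.TCSADRAIN)  # roughly the same changes as the C snippet
after = termios.tcgetattr(slave_fd)

LFLAG = 3  # index of c_lflag in the list returned by tcgetattr()
print('canonical before:', bool(before[LFLAG] & termios.ICANON))
print('canonical after: ', bool(after[LFLAG] & termios.ICANON))

os.close(master_fd)
os.close(slave_fd)
```

Running the same tcgetattr/tcsetattr sequence against the real fileno(stdin) of a background process is exactly what triggers the stop described above.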
