How to automatically get basic monitoring info such as CPU and Memory Usage from the IBM i? - performance

I'm trying to get some basic performance data (such as CPU and Memory Usage) from the IBM i every minute or so.
I'm then creating a web app that will display all of this in a centralized dashboard and also notify the user of any unusual values/events.
All I need is some kind of parsable data output from IBM i; could be JSON, CSV, perhaps even ODBC,...
I already tried running commands to get spool output, but that's not consistent so it can't really be parsed. The latest thing I found is collecting CSV files, but that is not automatic.
Inside the "IBM i Navigator -> Performance -> Investigate Data" there is an option to show a graph with my required data and it's even possible to export it as CSV.
However, I was wondering if it's possible to GET this data via an HTTP request as JSON? I searched around and found mentions of "Integrated Web Services" and "CICS Transaction Server HTTP Requests", but nothing specific about getting existing data, only about creating your own.
https://www.ibm.com/docs/en/cics-ts/5.3?topic=protocol-http-requests
https://www.ibm.com/docs/en/i/7.3?topic=tasks-integrated-web-application-server
Thank you!

I don't know if the data you are looking for is available through a web request. What is the greater goal you want to achieve? Just curiosity? Centralized monitoring for erratic values?
Usually, this class of data is exposed in more or less real time via SNMP and is easily accessible to existing monitoring applications. SNMP uses UDP and is much more efficient in terms of processor overhead than web requests.
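For illustration, here is a minimal PHP sketch of such an SNMP poll (it assumes PHP's snmp extension, that the SNMP agent on the IBM i is started and reachable with the given community string, and that it serves HOST-RESOURCES-MIB; the host name, community and OIDs are placeholders to verify against your system):
<?php
// Sketch: poll CPU load and storage usage over SNMP (HOST-RESOURCES-MIB).
$host      = 'ibmi.example.com';   // hypothetical host
$community = 'public';             // hypothetical community string
// hrProcessorLoad: per-processor load over the last minute (0-100).
$cpuLoads = snmp2_walk($host, $community, '1.3.6.1.2.1.25.3.3.1.2');
// hrStorageUsed: used units per storage entry (multiply by hrStorageAllocationUnits for bytes).
$storageUsed = snmp2_walk($host, $community, '1.3.6.1.2.1.25.2.3.1.6');
echo json_encode(array('cpu' => $cpuLoads, 'storageUsed' => $storageUsed));
?>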
The graphs you mention might be derived from the Performance Tools, something akin to sar & friends on Linux/Unix. However, this data is also not exported via web request. I think there are API calls within the OS to access this data. See Performance Tools for an overview.
Of course, this data is saved in tables and can be accessed via ODBC from outside IBM i, but given the probable lack of documentation about the table structure, I question whether the effort would be worthwhile.
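For what it's worth, the ODBC route from an external machine could look roughly like this (a sketch only: the DSN is whatever you configured for the IBM i Access ODBC driver, and the collection library and QAPM* file name are assumptions to verify against the Performance Data Files documentation for your release):
<?php
// Sketch: read Collection Services data over ODBC from outside IBM i.
$conn = odbc_connect('IBMI', 'MONUSER', 'secret');   // hypothetical DSN and credentials
if (!$conn) {
    die('ODBC connection failed: ' . odbc_errormsg());
}
// QPFRDATA/QAPMSYSTEM are examples; check the actual collection library and file names.
$result = odbc_exec($conn, 'SELECT * FROM QPFRDATA.QAPMSYSTEM FETCH FIRST 10 ROWS ONLY');
while ($row = odbc_fetch_array($result)) {
    print_r($row);
}
odbc_close($conn);
?>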

The system exposes all sorts of performance info as SQL table functions. Here is the active job info table function.
PHP can be used to write a web service which first calls the table function, then returns the resulting data as a JSON data stream.
<?php
$showColNameArr = array("JOB_NAME", "SUBSYSTEM", "JOB_TYPE", "FUNCTION", "FUNCTION_TYPE",
                        "JOB_STATUS", "CPU_TIME");

header("Content-type: text/javascript; charset=utf-8;");

// access an input, posted json object.
$postContents = file_get_contents('php://input');
$postObject = json_decode($postContents);
$action = isset($postObject->action) ? $postObject->action : '';

{
    $conn = as400Connect('qgpl qtemp');
    $sql = "SELECT *
            from TABLE(QSYS2.ACTIVE_JOB_INFO( ))";
    $stmt = db2_prepare($conn, $sql);
    $result = db2_execute($stmt);
    $colNames = db2Stmt_GetColNames($stmt);
    $finalArr = array();
    while ($row = db2_fetch_array($stmt))
    {
        $assocArr = array();
        for ($jx = 0; $jx < sizeof($row); ++$jx)
        {
            $colName = $colNames[$jx];
            if (in_array($colName, $showColNameArr))
            {
                $vlu = $row[$jx];
                $assocArr[$colName] = $vlu;
            }
        }
        $finalArr[] = $assocArr;
    }
    echo json_encode($finalArr);
}

// ---------------------------- as400Connect ------------------------
function as400Connect($libl)
{
    $options = array('i5_naming' => DB2_I5_NAMING_ON);
    if (strlen($libl) > 0)
    {
        $options['i5_libl'] = $libl;
    }
    $conn = db2_connect("*LOCAL", "", "", $options);
    if (!$conn) {
        echo "Connection failed";
        echo "<br>";
        echo db2_conn_errormsg();
        exit();
    }
    return $conn;
}

// --------------------- db2Stmt_GetColNames ----------------
// build and return array of column names from a db2_execute
// executed $stmt.
function db2Stmt_GetColNames($stmt)
{
    $colCx = db2_num_fields($stmt);
    $colNames = array();
    for ($ix = 0; $ix < $colCx; $ix++)
    {
        array_push($colNames, db2_field_name($stmt, $ix));
    }
    return $colNames;
}
?>
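Since the question asks for CPU and memory rather than per-job detail, the same pattern can be pointed at other QSYS2 services. A rough sketch reusing as400Connect() from above (QSYS2.SYSTEM_STATUS() is a documented IBM i service, but verify the exact column names, such as ELAPSED_CPU_USED or MAIN_STORAGE_SIZE, against your release before relying on them):
<?php
// Sketch: system-level CPU/storage figures via the QSYS2.SYSTEM_STATUS table function.
$conn = as400Connect('qgpl qtemp');
$stmt = db2_prepare($conn, "SELECT * FROM TABLE(QSYS2.SYSTEM_STATUS()) X");
db2_execute($stmt);
$row = db2_fetch_assoc($stmt);
header('Content-type: application/json; charset=utf-8');
echo json_encode($row);
?>
The dashboard can then request such a script every minute and parse the returned JSON directly.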

Related

Google Fit API - International users sync issue

I'm using the Google Fit REST API.
Here is the data I'm retrieving from Google Fit using the API:
2021-03-21 29989 Steps
2021-03-20 12 Steps
Here is the data the user exported from Google:
3/22/2021 16,480 Steps
3/21/2021 13,521 Steps
In both circumstances, the steps total 30,001.
The dates are clearly off by one day because of the time zone. The daily counts are also off for the same reason; however, they add up to the same number of steps.
What general approach/strategy can I take to make the steps obtained from the API match those shown in Google Fit when I don't have a time zone?
My API currently loops through the database and syncs all user data, not distinguishing domestic vs international users.
Here is the code snippet used to get steps:
//***** Get steps
case DATATYPE_STEP_COUNT_DELTA:
    if ($dataStreamId == 'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps') {
        $listDatasets = $dataSets->get("me", $dataStreamId, $startTime . '000000000' . '-' . $endTime . '000000000');
        if ($debug == 1) PrintR($listDatasets, "DATATYPE_STEP_COUNT_DELTA");
        $step_count = 0;
        foreach ($listDatasets as $dataSet) {
            if ($dataSet['startTimeNanos']) {
                $sec = $dataSet['startTimeNanos'] / 1000000000;
                $activity_date = date('Y-m-d', $sec);
                $dataSetValues = $dataSet['value'];
                if ($dataSetValues && is_array($dataSetValues)) {
                    foreach ($dataSetValues as $dataSetValue) {
                        if (!isset($stepsArr[$studentencodedid][$activity_date])) $stepsArr[$studentencodedid][$activity_date] = 0;
                        $stepsArr[$studentencodedid][$activity_date] += $dataSetValue['intVal'];
                        $step_count += $dataSetValue['intVal'];
                    }
                }
            }
        }
    }
    break;
//***** End get steps
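Not an answer to the sync strategy itself, but it helps to see how the date bucket shifts with the zone passed to the conversion; a minimal sketch, assuming a per-user IANA time zone can be stored or looked up (the zone below is a placeholder):
<?php
// Sketch: derive the activity date from startTimeNanos in the user's zone, not the server's.
$startTimeNanos = '1616284800000000000';                 // 2021-03-21 00:00:00 UTC
$userTimeZone   = new DateTimeZone('America/New_York');  // hypothetical per-user setting
$dt = new DateTime('@' . intdiv((int) $startTimeNanos, 1000000000)); // '@' timestamps are UTC
$dt->setTimezone($userTimeZone);
echo $dt->format('Y-m-d');   // prints 2021-03-20, one day earlier than the UTC date
?>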

comparing 2 data sets possibly with concurrency/asynchronous/parallel approach

I am currently trying to improve upon an existing mechanism (to compare data from 2 sources, implemented in perl5) and would like to use perl6 instead.
My target data volume range is about 20-30 GB in uncompressed flat files.
In terms of lines, a file can contain anywhere from 18 million to 28 million lines.
It has around 40-50 columns per line.
I do this type of data reconciliation on a daily basis, and it takes about ~10 minutes to read a file and populate the hash, so ~20 minutes to read both files and populate the hashes.
The comparison process takes about ~30-50 minutes, including iterating over the hash, collecting the desired results, and writing them to an output file (CSV, PSV).
All in all, it can take anywhere between 30 and 60 minutes on a 32-core dual Xeon server with 256 GB of RAM, including intermittent server load, to perform the whole process.
Now I am trying to bring down the total processing time even further.
Here is my current single-threaded approach using Perl 5.
Fetch data from the 2 sources (let's say s1 and s2) one by one and populate my hash based on key-value pairs. The source of the data can be either a flat CSV or PSV file, or a database query array-of-arrays result via a DBI client. The data is always unsorted to start with.
To be specific, I read the file line by line, split the fields, choose the desired indexes for the key/value pair, and insert them into the hash.
After collecting the data and populating the hash with the desired key/value pairs, I start to compare and collect results (mainly comparing what is missing or different in s2 w.r.t. s1 and vice versa).
Dump the output to an Excel file (very costly if the number of lines is large, ~1 million or more) or to a simple CSV (a cheap operation and the preferred method).
I was wondering whether I could somehow do the first step in parallel, i.e. collect data from both sources at once, populate my global hash, and then proceed to compare and dump the output.
What options does Perl 6 provide to deal with this situation? I have read about concurrency, asynchronous and parallel operations in Perl 6, but I am not sure which of them can help me here.
I would really appreciate any general guidance on the matter. I hope I explained my problem well, but sadly I don't have much to show for what I have tried so far; the reason is that I am just beginning to tackle this one. I am just unable to see past the single-threaded approach and need some help.
Thanks.
EDIT
As my existing problem statement has been deemed 'too broad' by the community, allow me to attempt to highlight my pain points below:
I would like to do the file comparison utilizing all 32 cores if possible. I am just not able to come up with a strategy or an initial idea.
What new techniques are available or applicable with Perl 6 to tackle this problem or this type of problem?
If I spawn 2 processes to read the file(s) and collect data, is it possible to get the results back as an array or hash?
Is it possible to compare the data (stored in hashes) in parallel?
My current Perl 5 comparison logic is shown below for your reference. I hope this helps and keeps the question from being shut down.
package COMP;
use strict;
use Data::Dumper;
sub comp
{
my ($data,$src,$tgt) = @_;
my $result = {};
my $ms = ($result->{ms} = {});
my $mt = ($result->{mt} = {});
my $diff = ($result->{diff} = {});
foreach my $key (keys %{$data->{$src}})
{
my $src_val = $data->{$src}{$key};
my $tgt_val = $data->{$tgt}{$key};
next if ($src_val eq $tgt_val);
if (!exists $data->{$tgt}{$key}) {
push (@{$mt->{$key}}, "$src_val|NULL");
}
if (exists $data->{$tgt}{$key} && $src_val ne $tgt_val) {
push (@{$diff->{$key}}, "$src_val|$tgt_val")
}
}
foreach my $key (keys %{$data->{$tgt}})
{
my $src_val = $data->{$src}{$key};
my $tgt_val = $data->{$tgt}{$key};
next if ($src_val eq $tgt_val);
if (!exists $data->{$src}{$key}) {
push (@{$ms->{$key}},"NULL|$tgt_val");
}
}
return $result;
}
1;
If someone would like to try it out, here is the sample output and the test script used.
script output
[User@Host:]$ perl testCOMP.pl
$VAR1 = {
'mt' => {
'Source' => [
'source|NULL'
]
},
'ms' => {
'Target' => [
'NULL|target'
]
},
'diff' => {
'Sunday_isit' => [
'Yes|No'
]
}
};
Test Script
[User@Host:]$ cat testCOMP.pl
#!/usr/bin/env perl
use lib $ENV{PWD};
use COMP;
use strict;
use warnings;
use Data::Dumper;
my $data2 = {
f1 => {
Amitabh => 'Bacchan',
YellowSun => 'Yes',
Sunday_isit => 'Yes',
Source => 'source',
},
f2 => {
Amitabh => 'Bacchan',
YellowSun => 'Yes',
Sunday_isit => 'No',
Target => 'target',
},
};
my $result = COMP::comp ($data2,'f1','f2');
print Dumper $result;
[User@Host:]$
If you have an existing and working toolchain, you don't have to rewrite it all to use Perl 6. Its parallelism mechanisms work fine with external processes too. Consider
allnum.pl6
use v6;
my @processes =
[ "num1.txt", "num2.txt", "num3.txt", "num4.txt", "num5.txt" ]
.map( -> $filename {
[ $filename, run "perl", "num.pl", $filename, :out ];
})
.hyper;
say "Lazyness Here!";
my $time = time;
for @processes
{
say "<{$_[0]} : {$_[1].out.slurp}>";
}
say time - $time, "s";
num.pl
use warnings;
use strict;
my $file = shift @ARGV;
my $start = time;
my $result = 0;
open my $in, "<", $file or die $!;
while (my $thing = <$in>)
{
chomp $thing;
$thing =~ s/ //g;
$result = ($result + $thing) / 2;
}
print $result, " : ", time - $start, "s";
On my system
C:\Users\holli\tmp>perl6 allnum.pl6
Lazyness Here!
<num1.txt : 7684.16347578616 : 3s>
<num2.txt : 3307.36261498186 : 7s>
<num3.txt : 5834.32817942962 : 10s>
<num4.txt : 6575.55944995197 : 0s>
<num5.txt : 6157.63100049619 : 0s>
10s
Files were set up like so
C:\Users\holli\tmp>perl -e "for($i=0;$i<10000000;$i++) { print chr(32) ** 100, int(rand(1000)), chr(32) ** 100, qq(\n); }">num1.txt
C:\Users\holli\tmp>perl -e "for($i=0;$i<20000000;$i++) { print chr(32) ** 100, int(rand(1000)), chr(32) ** 100, qq(\n); }">num2.txt
C:\Users\holli\tmp>perl -e "for($i=0;$i<30000000;$i++) { print chr(32) ** 100, int(rand(1000)), chr(32) ** 100, qq(\n); }">num3.txt
C:\Users\holli\tmp>perl -e "for($i=0;$i<400000;$i++) { print chr(32) ** 100, int(rand(1000)), chr(32) ** 100, qq(\n); }">num4.txt
C:\Users\holli\tmp>perl -e "for($i=0;$i<5000;$i++) { print chr(32) ** 100, int(rand(1000)), chr(32) ** 100, qq(\n); }">num5.txt

Check memory type (ECC or not) by using PowerShell

I am trying to check the memory type on all PCs across the company. My testing code is below, based on info from here:
Get-WmiObject Win32_PhysicalMemory |
Select-Object -Property PSComputerName, DeviceLocator, Manufacturer, PartNumber, @{label = "Size/GB" ; Expression = {$_.capacity / 1GB}}, Speed, datawidth, totalwidth, @{label = "ECC" ; Expression = {
if ( $_.totalwidth > $_.datawidth ) {
"$($_.DeviceLocator) is ECC memory type"
}
else {
"$($_.DeviceLocator) is non-ECC Memory Type"
}
}
} | Out-GridView
The results show me that the memory type is non-ECC:
But if I use a 3rd-party tool like "HWiNFO64 v4.30", the result is ECC memory. See the pic below. How can I get the same memory info as in the pic below by using PowerShell? Especially "Memory type", "Speed" and "ECC".
Vikas could have some good points about the accuracy of the information, which should be considered. The linked post alludes to other issues as well.
The issue you are running into with this code is your use of PowerShell comparison operators.
They are written as -gt and -lt, for example, which are greater-than and less-than respectively. Assuming your logic is what you intend, you should just have to update
if ( $_.totalwidth > $_.datawidth )
to
if ( $_.totalwidth -gt $_.datawidth )

Codeigniter CSV upload then explode

I have some code that uploads the CSV file to the specified folder, but it doesn't update the database.
public function do_upload()
{
$csv_path = realpath(APPPATH . '/../assets/uploads/CSV/');
$config['upload_path'] = $csv_path;
$config['allowed_types'] = '*'; // All types of files allowed
$config['overwrite'] = true; // Overwrites the existing file
$this->upload->initialize($config);
$this->load->library('upload', $config);
if ( ! $this->upload->do_upload('userfile'))
{
$error = array('error' => $this->upload->display_errors());
$this->layout->buffer('content', 'program/upload', $error);
$this->layout->render();
}
else
{
$image_data = $this->upload->data();
$fname = $image_data['file_name'];
$fpath = $image_data['file_path'].$fname;
$fh = fopen($fpath, "r");
$insert_str = 'INSERT INTO wc_program (JobRef, Area, Parish, AbbrWorkType, WorkType, Timing, TrafficManagement, Location, Duration, Start, Finish) VALUES '."\n";
if ($fh) {
// Create each set of values.
while (($csv_row = fgetcsv($fh, 2000, ',')) !== false) {
foreach ($csv_row as &$row) {
$row = strtr($row, array("'" => "\'", '"' => '\"'));
}
$insert_str .= '("'
// Implode the array and fix pesky apostrophes.
.implode('","', $csv_row)
.'"),'."\n";
}
// Remove the trailing comma.
$insert_str = rtrim($insert_str, ",\n");
// Insert all of the values at once.
$this->db->set($insert_str);
echo '<script type="text/javascript">
alert("Document successfully uploaded and saved to the database.");
location = "program/index";
</script>';
}
else {
echo '<script type="text/javascript">
alert("Sorry! Something went wrong please proceed to try again.");
location = "program/upload";
</script>';
}
}
}
When I run var_dump($fh); it shows: resource(89) of type (stream)
When I run var_dump($fpath) it shows: string(66) "/Applications/MAMP/htdocs/site/assets/uploads/CSV/wc_program.csv"
So it all uploads but what is wrong with it not updating the database?
I have tried all kinds of changes to the fopen method but still no joy. I really need it to add the data to the database; the insert query and set query should do the trick, but they don't.
Any help greatly appreciated!
You are not running any query on the database. You are mixing Active Record syntax with simple query syntax. The Active Record insert query will be executed by calling:
$this->db->insert('my_table');
db::set() does not actually query the database. It takes in a key/value pair that will be inserted or updated after db::insert() or db::update() is called. If you build the query yourself you need to use the db::query() function.
Review the Active Record documentation.
You can use $this->db->query('put your query here'), but you lose the benefit of CodeIgniter's built in security. Review CodeIgniter's query functions.
I'll give you examples of just a few of the many ways you can insert into a database using CodeIgniter. The examples will generate the query from your comment. You will need to adjust your code accordingly.
EXAMPLE 1:
$result = $this->db
->set('JobRef', 911847)
->set('Area', 'Coastal')
->set('Parish', 'Yapton')
->set('AbbrWorkType', 'Micro')
->set('WorkType', 'Micro-Asphalt Surfacing')
->set('Timing', 'TBC')
->set('TrafficManagement', 'No Positive Traffic Management')
->set('Location', 'Canal Road (added PMI 16/07/12)')
->set('Duration', '2 days')
->set('Start', '0000-00-00')
->set('Finish', '0000-00-00')
->insert('wc_program');
echo $this->db->last_query() . "\n\n";
echo "RESULT: \n\n";
print_r($result);
EXAMPLE 2 (Using an associative array):
$row = array(
'JobRef' => 911847,
'Area' => 'Coastal',
'Parish' => 'Yapton',
'AbbrWorkType' => 'Micro',
'WorkType' => 'Micro-Asphalt Surfacing',
'Timing' => 'TBC',
'TrafficManagement' => 'No Positive Traffic Management',
'Location' => 'Canal Road (added PMI 16/07/12)',
'Duration' => '2 days',
'Start' => '0000-00-00',
'Finish' => '0000-00-00'
);
$this->db->insert('wc_program', $row);
// This will do the same thing
// $this->db->set($row);
// $this->db->insert('wc_program');
echo $this->db->last_query();
Examples 1 and 2 use Active Record. The information is stored piece by piece and the query is built when you make the final call. This has several advantages: it lets you build queries dynamically without worrying about SQL syntax and keyword order, and it escapes your data.
EXAMPLE 3 (Simple Query):
$query = 'INSERT INTO
wc_program
(JobRef, Area, Parish, AbbrWorkType, WorkType, Timing, TrafficManagement, Location, Duration, Start, Finish)
VALUES
("911847","Coastal","Yapton","Micro","Micro-Asphalt Surfacing","TBC","No Positive Traffic Management","Canal Road (added PMI 16/07/12)","2 days","0000-00-00","0000-00-00")';
$result = $this->db->query($query);
echo $this->db->last_query() . "\n\n";
echo "RESULT: \n";
print_r($result);
This way leaves all the protection against injection up to you, can lead to more errors, and is harder to change/maintain.
If you are going to do it this way you should use the following syntax, which will protect against injection.
EXAMPLE 4:
$query = 'INSERT INTO
wc_program
(JobRef, Area, Parish, AbbrWorkType, WorkType, Timing, TrafficManagement, Location, Duration, Start, Finish)
VALUES
(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);';
$row = array(
911847,
'Coastal',
'Yapton',
'Micro',
'Micro-Asphalt Surfacing',
'TBC',
'No Positive Traffic Management',
'Canal Road (added PMI 16/07/12)',
'2 days',
'0000-00-00',
'0000-00-00'
);
$result = $this->db->query($query, $row);
echo $this->db->last_query() . "\n\n";
echo "RESULT: \n";
print_r($result);
CodeIgniter will replace each "?" in the query with the corresponding value from the array after it is escaped. You can use this to run many queries that are of the same form, but have different data just by updating the $row array and benefit from CI's built in security.
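As a side note for the original CSV loop: instead of hand-building one big INSERT string, you can collect each parsed line into an array of rows and let Active Record insert them in one call with insert_batch(), which also escapes the values. A rough sketch, assuming the CSV columns arrive in the same order as the table columns:
// Sketch: build rows from the uploaded CSV and insert them all at once.
$columns = array('JobRef', 'Area', 'Parish', 'AbbrWorkType', 'WorkType', 'Timing',
                 'TrafficManagement', 'Location', 'Duration', 'Start', 'Finish');
$rows = array();
while (($csv_row = fgetcsv($fh, 2000, ',')) !== false) {
    if (count($csv_row) == count($columns)) {
        $rows[] = array_combine($columns, $csv_row);
    }
}
if (!empty($rows)) {
    $this->db->insert_batch('wc_program', $rows);   // escapes the values and runs the query
}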

Simultaneous AJAX calls during HTTP streaming

I have an Apache server running a web application. In this web application I show a video using JWPlayer. JWPlayer uses HTTP pseudo-streaming to fetch the video from a PHP script which serves it up. All this works well and the video streams fine.
The problem I am having is that while the video is streaming, the AJAX calls I use to fetch some XML files (used by Adobe Flash files on the same page) stay 'pending' until the entire video is loaded. Using Chrome I can see that the video gets loaded byte by byte; only when the video is entirely loaded are the XML files fetched. Also, if I open another tab in my browser while a video is streaming and try to load the web application again, it will not show until the video is entirely loaded.
This seems to be an Apache setting of some sort. The MPM settings for Apache are:
ThreadsPerChild 150
MaxRequestsPerChild 0
This seems to be correct. Any ideas what could be wrong?
If you are using PHP sessions then this is probably what is causing the IO blocking.
php blocking when calling the same file concurrently
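PHP's default session handler locks the session file from session_start() until the script ends or the lock is released, so a long-running streaming script blocks every other request from the same browser session. If that is the cause, releasing the lock before streaming should be enough; a minimal sketch:
<?php
session_start();                                   // do any session-based access checks here
$allowed = !empty($_SESSION['user_id']);           // hypothetical check
session_write_close();                             // release the lock so parallel AJAX calls can proceed
if (!$allowed) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... stream the video here without holding the session lock ...
?>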
I was building a system with private video streaming, so I needed to stream via PHP, because with PHP I was able to restrict user access.
I was having trouble streaming the video and executing other scripts on the server at the same time.
Using session_write_close() solved the problem of opening other scripts, and I found a script on the web that helped me a lot.
I want to share it, because that script does real streaming.
I found it on the http://www.tuxxin.com/php-mp4-streaming/ website.
All thanks to the author of this code =D
ENJOY !
<?php
$file = 'video360p.mp4';
$fp = @fopen($file, 'rb');
$size = filesize($file); // File size
$length = $size; // Content length
$start = 0; // Start byte
$end = $size - 1; // End byte
header('Content-type: video/mp4');
//header("Accept-Ranges: 0-$length");
header("Accept-Ranges: bytes");
if (isset($_SERVER['HTTP_RANGE'])) {
$c_start = $start;
$c_end = $end;
list(, $range) = explode('=', $_SERVER['HTTP_RANGE'], 2);
if (strpos($range, ',') !== false) {
header('HTTP/1.1 416 Requested Range Not Satisfiable');
header("Content-Range: bytes $start-$end/$size");
exit;
}
if ($range == '-') {
$c_start = $size - substr($range, 1);
}else{
$range = explode('-', $range);
$c_start = $range[0];
$c_end = (isset($range[1]) && is_numeric($range[1])) ? $range[1] : $size;
}
$c_end = ($c_end > $end) ? $end : $c_end;
if ($c_start > $c_end || $c_start > $size - 1 || $c_end >= $size) {
header('HTTP/1.1 416 Requested Range Not Satisfiable');
header("Content-Range: bytes $start-$end/$size");
exit;
}
$start = $c_start;
$end = $c_end;
$length = $end - $start + 1;
fseek($fp, $start);
header('HTTP/1.1 206 Partial Content');
}
header("Content-Range: bytes $start-$end/$size");
header("Content-Length: ".$length);
$buffer = 1024 * 8;
while(!feof($fp) && ($p = ftell($fp)) <= $end) {
if ($p + $buffer > $end) {
$buffer = $end - $p + 1;
}
set_time_limit(0);
echo fread($fp, $buffer);
flush();
}
fclose($fp);
exit();
?>
