I have a query whose output looks like this:
1 2 3 4 5 6 7 8 9 10 11 12 13
- - - - - - - - - - - - -
40 20 22 10 0 0 0 0 0 0 0 0 0
I want to convert the output to a single column that looks like this:
output
-----------
{"1":40,"2":20,"3":22,"4":10,"5":0,"6":0,"7":0,"8":0,"9":0,"10":0,"11":0,"12":0,"13":0}
You can use the JSON formatting 'trick' in SQL Developer.
Full scenario:
CREATE TABLE JSON_SO (
"1" INTEGER,
"2" INTEGER,
"3" INTEGER,
"4" INTEGER,
"5" INTEGER,
"6" INTEGER
);
INSERT INTO JSON_SO VALUES (
40,
20,
22,
10,
0,
0
);
select /*json*/ * from json_so;
And the output when executing with F5 (Execute as Script):
{
    "results": [
        {
            "columns": [
                {
                    "name": "1",
                    "type": "NUMBER"
                },
                {
                    "name": "2",
                    "type": "NUMBER"
                },
                {
                    "name": "3",
                    "type": "NUMBER"
                },
                {
                    "name": "4",
                    "type": "NUMBER"
                },
                {
                    "name": "5",
                    "type": "NUMBER"
                },
                {
                    "name": "6",
                    "type": "NUMBER"
                }
            ],
            "items": [
                {
                    "1": 40,
                    "2": 20,
                    "3": 22,
                    "4": 10,
                    "5": 0,
                    "6": 0
                }
            ]
        }
    ]
}
Note that the JSON output happens client-side via SQL Developer (this also works in SQLcl) and I formatted the JSON output for display here using https://jsonformatter.curiousconcept.com/
This will work with any version of Oracle Database that SQL Developer supports. If you want the database itself to format the result set as JSON for you, the JSON_OBJECT() function was introduced in Oracle Database 12cR2.
If you want the Oracle database server to return the result as JSON, you can run a query like the one below:
SELECT JSON_OBJECT ('1' VALUE col1, '2' VALUE col2, '3' VALUE col3) FROM table
Import.php
...
return new Statement([
    'account_number' => $row['accountno'],
    'account_name'   => $row['name'],
    'reading_date'   => \Carbon\Carbon::createFromFormat('m/d/Y', $row['billdate']),
    'due_date'       => \Carbon\Carbon::createFromFormat('m/d/Y', $row['duedate']),
]);
...
Error:
Illuminate\Database\QueryException PHP 8.1.6 9.37.0
SQLSTATE[22007]: Invalid datetime format: 1292 Incorrect date value: '10/18/2022' for column `mubsdb`.`statements`.`due_date` at row 1
INSERT INTO
`statements` (`due_date`, `reading_date`)
VALUES
( 10 / 18 / 2022, 10 / 03 / 2022),
(
10 / 18 / 2022,
10 / 03 / 2022
),
( 10 / 18 / 2022, 10 / 03 / 2022),
( 10 / 18 / 2022, 10 / 03 / 2022),
(10/18/2022, 10/03/2022), (10/18/2022, 10/03/2022), (10/18/2022, 10/03/2022),
DB Structure:
Name Type Null Default
reading_date date Yes NULL
due_date date Yes NULL
I'm trying to import and save CSV rows to my DB, but I get an error with the dates. I tried \Carbon\Carbon::createFromFormat('m/d/Y', $row['billdate']) and \Carbon\Carbon::parse($row['billdate'])->format('Y-m-d'), but neither seems to work.
Weirdly, this worked:
'reading_date' => $row['billdate'] ? \Carbon\Carbon::createFromFormat('m/d/Y', $row['billdate'])->format('m/d/Y') : null,
'due_date' => $row['duedate'] ? \Carbon\Carbon::createFromFormat('m/d/Y', $row['duedate'])->format('m/d/Y') : null,
If you're using the newest version of laravel-excel, you'll notice on this page that a date column is exported using Date::dateTimeToExcel:
// ...
Date::dateTimeToExcel($invoice->created_at),
// ...
That is because dates are stored as numbers in Excel, so a datetime object needs to be converted first in order to show the value correctly.
This rule also applies to imports. So personally I would add a rule in the import class to make sure that the date we're receiving is a number (which is actually how a date is represented in Excel):
use Maatwebsite\Excel\Concerns\WithValidation;

class MyImport implements WithValidation
{
    public function rules(): array
    {
        return [
            'created_at' => ['numeric'],
        ];
    }
}
And then, when importing the data using a model, convert the number to a datetime before working with it in Carbon:
use PhpOffice\PhpSpreadsheet\Shared\Date;
// ...
return new Import([
    // ...
    'created_at' => Carbon::instance(Date::excelToDateTimeObject($row['created_at'])),
    // or any other Carbon methods
    // ...
]);
Is there any way to convert ":" to "." in a query constructed by the JDBC connector?
"SELECT * FROM database : user : table WHERE database : user : table "
connector configuration:
"name": "jdbc_source_connector",
"config": {
"connector.class" : "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url" : "jdbc:informix-sqli://IP:PORT/databasa:informixserver=oninit;user=user;password=password",
"topic.prefix" : "table-",
"poll.interval.ms" : "100000",
"mode" : "incrementing",
"table.whitelist" : "table",
"query.suffix" : ";",
"incrementing.column.name" : "lp"
ERROR:
[2021-02-03 14:03:02,809] INFO Begin using SQL query: SELECT * FROM database : user : table WHERE database : user : table : lp > ? ORDER BY database : user : table : lp ASC ; (io.confluent.connect.jdbc.source.TableQuerier:164)
[2021-02-03 14:03:02,853] ERROR Failed to run query for table TimestampIncrementingTableQuerier{table="database "." user "." table", query='null', topicPrefix='database-', incrementingColumn='lp', timestampColumns=[]}: {} (io.confluent.connect.jdbc.source.JdbcSourceTask:404)
java.sql.SQLSyntaxErrorException: A syntax error has occurred.
connector session:
informix database:/usr/informix$ onstat -g ses 544271
IBM Informix Dynamic Server Version 12.10.FC13 -- On-Line -- Up 60 days 18:26:06 -- 4985440 Kbytes
session effective #RSAM total used dynamic
id user user tty pid hostname threads memory memory explain
544271 user - - 1266352 kafkahost 1 172032 102672 off
Program :
Thread[id:175, name:task-thread-jdbc_source_connector_database, path:/app/kafka_2.13-2.7.0/plugins/kafka-connect-jdbc-10.0.1/lib/jdbc-4.50.4.1.jar]
tid name rstcb flags curstk status
583042 sqlexec 7000000437451a8 Y--P--- 6224 cond wait netnorm -
Memory pools count 2
name class addr totalsize freesize #allocfrag #freefrag
544271 V 70000004f6c7040 167936 68592 113 35
544271*O0 V 700000065ad0040 4096 768 1 1
name free used name free used
overhead 0 6656 scb 0 144
opentable 0 11352 filetable 0 1040
log 0 16536 temprec 0 22688
keys 0 816 gentcb 0 1592
ostcb 0 3472 sqscb 0 25128
hashfiletab 0 552 osenv 0 2056
sqtcb 0 9336 fragman 0 640
sapi 0 144 udr 0 520
sqscb info
scb sqscb optofc pdqpriority optcompind directives
7000000343d3360 700000039194028 0 0 0 1
Sess SQL Current Iso Lock SQL ISAM F.E.
Id Stmt type Database Lvl Mode ERR ERR Vers Explain
544271 - database CR Not Wait 0 0 9.28 Off
Last parsed SQL statement :
SELECT * FROM database : user : table WHERE database : user : table : lp > ? ORDER BY database : user : table : lp ASC
So the only workaround is to add the query option to the connector configuration.
config after modifications:
"name": "jdbc_source_connector",
"config": {
"connector.class" : "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url" : "jdbc:informix-sqli://IP:PORT/databasa:informixserver=oninit;user=user;password=password",
"topic.prefix" : "table-tablename",
"poll.interval.ms" : "100000",
"mode" : "incrementing",
"query" : "select * from table ",
"incrementing.column.name" : "lp"
I'm a super duper newb with elasticsearch
I have a bunch of products in my Elasticsearch. Each Elasticsearch record has title, pid, product_group, color, size, qty, etc., plus many more fields.
Now when I'm doing my request, what I want to happen is for it to group the results by pid, and then inside the _group part of the response, I also want those grouped as well, by product_group.
So in other words, if I have
pid: 1, product_group: 1, size: 1
pid: 1, product_group: 1, size: 2
pid: 1, product_group: 2, size: 1
pid: 1, product_group: 2, size: 2
pid: 2, product_group: 3, size: 1
pid: 2, product_group: 3, size: 2
pid: 2, product_group: 4, size: 1
pid: 2, product_group: 4, size: 2
I would want my top level search array to have 2 results: 1 for pid1 and 1 for pid2, and then inside of each of those results, inside the _group part of the json, I would expect 2 results each: pid1 would have a result for product_group 1 and product_group 2, and pid2 would have a _group result for product_group 3 and product_group 4.
Is this possible?
At the moment, this is how I'm modifying my query to group it based on pid:
group: {field: "pid", collapse: true}
I don't really know if I want collapse to be true or false, and I don't know how, or if it's even possible, to do a second layer of grouping like I'm asking for. Would appreciate any help.
The most straightforward way is to go with child terms aggs:
{
    "size": 0,
    "aggs": {
        "by_pid": {
            "terms": {
                "field": "pid"
            },
            "aggs": {
                "by_group": {
                    "terms": {
                        "field": "product_group"
                    },
                    "aggs": {
                        "underlying_docs": {
                            "top_hits": {}
                        }
                    }
                }
            }
        }
    }
}
Note that the last aggs group is optional -- I've put it there in case you'd like to know which docs have been bucketized to which particular band.
I am working with an RDBMS that contains a list of hierarchical objects stored like this:
Id Name ParentId
====================================
1 Food NULL
2 Drink NULL
3 Vegetables 1
4 Fruit 1
5 Liquor 2
6 Carrots 3
7 Onions 3
8 Strawberries 4
...
999 Celery 3
I do not know the specific reason why this was chosen, but it is fixed insofar as the rest of the system relies on fetching the structure in this form.
I want to expose this data via JSON using a RESTful API, and I wish to output this in the following format (array of arrays):
item:
{
    id: 1, Description: "Food",
    items: [
        {
            id: 3, Description: "Vegetables",
            items: [ ... ]
        },
        {
            id: 4, Description: "Fruit",
            items: [ ... ]
        }
    ]
},
item:
{
    id: 2, Description: "Drink",
    items: [ ... ]
}
What would be a sensible way of looping through the data and producing the desired output? I'm developing in C# but if there are libraries or examples for other languages I would be happy to re-implement where possible.
Thanks :)
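A minimal sketch of one way to do the looping in C#: index the flat rows by Id, then attach each node to its parent in a single pass. The class and method names here (CategoryRow, CategoryNode, CategoryTreeBuilder) and the use of System.Text.Json are illustrative assumptions, not part of the original system.
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

// Flat row as stored in the RDBMS (names assumed for illustration).
public class CategoryRow
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int? ParentId { get; set; }
}

// Nested node as it should appear in the JSON output.
public class CategoryNode
{
    public int id { get; set; }
    public string Description { get; set; }
    public List<CategoryNode> items { get; set; } = new List<CategoryNode>();
}

public static class CategoryTreeBuilder
{
    public static List<CategoryNode> Build(IList<CategoryRow> rows)
    {
        // Index every row's node by Id so parents can be found in O(1).
        var nodesById = rows.ToDictionary(
            r => r.Id,
            r => new CategoryNode { id = r.Id, Description = r.Name });

        var roots = new List<CategoryNode>();

        // Single pass: attach each node to its parent, or to the root list if it has none.
        foreach (var row in rows)
        {
            var node = nodesById[row.Id];
            if (row.ParentId.HasValue && nodesById.TryGetValue(row.ParentId.Value, out var parent))
                parent.items.Add(node);
            else
                roots.Add(node);
        }

        return roots;
    }
}

// Usage sketch: var json = JsonSerializer.Serialize(CategoryTreeBuilder.Build(rows));
Because every child is attached by reference through the dictionary, the whole tree is built in one pass regardless of the order the rows arrive in.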
I would like to create a Linq query that compares date from multiple rows in a single table.
The table contains data from polling a web service for balance data per account. Unfortunately the polling interval is not 100% deterministic, which means there can be zero, one, or more entries for each account per day.
For the application I need this data reformatted in a certain format (see the expected output below).
I included sample data and descriptions of the table.
Can anybody help me with an EF LINQ query that will produce the required output?
table:
id The account id
balance The available credits in the account at the time of the measurement
create_date The datetime when the data was retrieved
Table name:Balances
Field: id (int)
Field: balance (bigint)
Field: create_date (datetime)
sample data:
id balance create_date
3 40 2012-04-02 07:01:00.627
1 55 2012-04-02 13:41:50.427
2 9 2012-04-02 03:41:50.727
1 40 2012-04-02 16:21:50.027
1 49 2012-04-02 16:55:50.127
1 74 2012-04-02 23:41:50.627
1 90 2012-04-02 23:44:50.427
3 3 2012-04-02 23:51:50.827
3 -10 2012-04-03 07:01:00.627
1 0 2012-04-03 13:41:50.427
2 999 2012-04-03 03:41:50.727
1 50 2012-04-03 15:21:50.027
1 49 2012-04-03 16:55:50.127
1 74 2012-04-03 23:41:50.627
2 -10 2012-04-03 07:41:50.727
1 100 2012-04-03 23:44:50.427
3 0 2012-04-03 23:51:50.827
expected output:
id The account id
date The data component which was used to produce the date in the row
balance_last_measurement The balance at the last measurement of the date
difference The difference in balance between the first- and last measurement of the date
On 2012-04-02, id 2 only has one measurement, which sets the difference value equal to the last (and only) measurement.
id date balance_last_measurement difference
1 2012-04-02 90 35
1 2012-04-03 100 10
2 2012-04-02 9 9
2 2012-04-03 -10 -19
3 2012-04-02 3 -37
3 2012-04-03 0 37
update 2012-04-10 20:06
The answer from Raphaël Althaus is really good, but I did make a small mistake in the original request. The difference field in the 'expected output' should be either:
the difference between the last measurement of the previous day and the last measurement of the day, or
if there is no previous day, the difference between the first and the last measurement of the day
Is this possible at all? It seems to be quite complex.
I would try something like this:
var query = db.Balances
    .OrderBy(m => m.Id)
    .ThenBy(m => m.CreationDate)
    .GroupBy(m => new
    {
        id = m.Id,
        year = SqlFunctions.DatePart("yyyy", m.CreationDate),
        month = SqlFunctions.DatePart("mm", m.CreationDate),
        day = SqlFunctions.DatePart("dd", m.CreationDate)
    }).ToList() // enumerate here, this is what we need from the db
    .Select(g => new
    {
        id = g.Key.id,
        date = new DateTime(g.Key.year.Value, g.Key.month.Value, g.Key.day.Value),
        last_balance = g.Select(m => m.BalanceValue).LastOrDefault(),
        difference = (g.Count() == 1 ? g.First().BalanceValue : g.Last().BalanceValue - g.First().BalanceValue)
    });
Well, a probably not-optimized solution, but see if it seems to work.
First, we create a result class:
public class BalanceResult
{
    public int Id { get; set; }
    public DateTime CreationDate { get; set; }
    public IList<int> BalanceResults { get; set; }
    public int Difference { get; set; }
    public int LastBalanceResultOfDay { get { return BalanceResults.Last(); } }
    public bool HasManyResults { get { return BalanceResults != null && BalanceResults.Count > 1; } }
    public int DailyDifference { get { return HasManyResults ? BalanceResults.Last() - BalanceResults.First() : BalanceResults.First(); } }
}
Then we change our query a little bit:
var balanceResults = db.Balances
    .GroupBy(m => new
    {
        id = m.Id,
        year = SqlFunctions.DatePart("yyyy", m.CreationDate),
        month = SqlFunctions.DatePart("mm", m.CreationDate),
        day = SqlFunctions.DatePart("dd", m.CreationDate)
    }).ToList() // enumerate here, this is what we need from the db
    .Select(g => new BalanceResult
    {
        Id = g.Key.id,
        CreationDate = new DateTime(g.Key.year.Value, g.Key.month.Value, g.Key.day.Value),
        BalanceResults = g.OrderBy(l => l.CreationDate).Select(l => l.BalanceValue).ToList()
    }).ToList();
And finally:
foreach (var balanceResult in balanceResults)
{
    var previousDayBalanceResult = balanceResults.FirstOrDefault(m => m.Id == balanceResult.Id && m.CreationDate == balanceResult.CreationDate.AddDays(-1));
    balanceResult.Difference = previousDayBalanceResult != null
        ? balanceResult.LastBalanceResultOfDay - previousDayBalanceResult.LastBalanceResultOfDay
        : balanceResult.DailyDifference;
}
As indicated, performance (using dictionaries, for example) and code readability should of course be improved, but... that's the idea!
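As a rough sketch of the dictionary idea mentioned above (the tuple key and variable names are my assumptions, not part of the original answer), the per-iteration FirstOrDefault scan could be replaced with a single lookup table keyed by account id and day:
// Index the results once so the previous-day lookup is O(1);
// assumes at most one BalanceResult per (Id, CreationDate) pair.
var byIdAndDay = balanceResults.ToDictionary(r => (r.Id, r.CreationDate));

foreach (var balanceResult in balanceResults)
{
    balanceResult.Difference =
        byIdAndDay.TryGetValue((balanceResult.Id, balanceResult.CreationDate.AddDays(-1)), out var previousDay)
            ? balanceResult.LastBalanceResultOfDay - previousDay.LastBalanceResultOfDay
            : balanceResult.DailyDifference;
}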