I was wondering how to get the table from the following webpage. I imagine this is possible using the IMPORTHTML function, but I could not identify the path to the table. I think it is masked or not accessible.
Does someone know how to get the correct path or how to import this table?
Thank you
I don't think Google Sheets can handle it. As stated by @wp78de, a lot of scripts are running in the background. But you can access the data directly in JSON format here:
Data
Using Power BI/Power Query, I'm using the web connector to query a REST API which returns JSON. I'm converting that to a table and ending up with a single column of values (IDs). It looks like this:
These IDs now need to be used in subsequent web queries, one query for each ID. This will then yield the 'real' data which I need to get to.
How can I iterate through those IDs, using each in a new web query?
I have a complete solution in Python and can just import the json files from the Python script into PBI, but I really want to be able to give the PBI report to a colleague who would not touch Python, so I'm keen to find a simple way to achieve this in PBI/PQ.
Would appreciate any pointers on how this could be achieved as simply as possible.
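One way to do this in Power Query is to wrap the per-id request in a custom function and invoke it once per row with Table.AddColumn. The sketch below assumes a hypothetical endpoint (https://api.example.com/items/{id}) and an id column named "Id"; substitute your real API and column name:

```m
let
    // Hypothetical endpoint; replace with the real REST API base URL
    GetDetail = (id as text) as any =>
        Json.Document(Web.Contents("https://api.example.com/items/" & id)),

    // "Ids" stands in for the existing single-column table of ids
    Ids = Table.FromColumns({{"1001", "1002"}}, {"Id"}),

    // One web request per row; wrap in try ... otherwise to tolerate failures
    WithDetail = Table.AddColumn(Ids, "Detail", each GetDetail([Id]))
in
    WithDetail
```

After this step, the "Detail" column holds one record per id, which you can expand into ordinary columns with the expand button in the query editor. Everything stays inside Power BI, so a colleague can refresh the report without touching Python.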
I'm pulling some data from Bloomberg into a Google spreadsheet, and two of the lines read as:
=importxml("https://www.bloomberg.com/quote/ELIPCAM:BZ";"(//span)[28]")
=importxml("https://www.bloomberg.com/quote/ELIPCAM:BZ";"(//span)[31]")
However, there is a large number of IMPORTXML and IMPORTHTML calls overall, many of them querying the same web page. As a result, too many cells are stuck in an eternal "Loading..." state. Google even presents the message:
"Loading data may take a while because of the large number of requests. Try to reduce the amount of IMPORTHTML, IMPORTDATA, IMPORTFEED or IMPORTXML functions across spreadsheets you've created."
So, is there any way to merge requests like the ones above? I could of course open a new tab and import everything (i.e. use only "(//span)" for the query), but other than being messy, I'm afraid I'd still be querying more than I need. Ideally, there would be some query for multiple numbered nodes, something like "(//span)[28,31]", but this obviously returns an error.
Try it this way and see if it works:
=importxml("https://www.bloomberg.com/quote/ELIPCAM:BZ","(//span)[position()=28 or position()=31]")
Use | between your XPaths, like:
=IMPORTXML("https://www.bloomberg.com/quote/ELIPCAM:BZ",
"(//span)[28] | (//span)[31]")
I have a financial system with all its business logic located in the database, and I have to code an automated workflow for batch processing of transactions, which consists of the steps listed below:
A user or an external system inserts some data in a table
Before further processing, a snapshot of this data, in the form of a CSV file with a digital signature, has to be made. The CSV snapshot itself and its signature have to be saved in the same input table. The program then updates the successfully signed rows to make them available for the further steps of the code
...further steps of code
The obvious trouble is step #2: I don't know how to assign the results of a query, as a BLOB representing a CSV file, to a variable. It seems like basic stuff, but I couldn't find it. The CSV format was chosen by the users because it is human-readable. Signing itself can be done with a request to an external system, so that's not an issue.
Restrictions:
there is no application server which could process the data, so I have to do it with PL/SQL
there is no way to save a local file, everything must be done on the fly
I know that normally one would do all the work on the application layer or with some local files, but unfortunately this is not the case.
Any help would be highly appreciated, thanks in advance
I agree with @William Robertson. You just need to create a comma-delimited values string (assuming a header and data rows) and write that to a CLOB. I recommend an "insert" trigger. (There are lots of SQL tricks you can do to make that easier.) Usage of that CSV string will need to be owned by the part of the application that reads it in and needs to do something with it.
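A minimal sketch of the CLOB-building approach follows. The table name (txn_input), its columns, and the snapshot_csv CLOB column are hypothetical placeholders, and quoting/escaping of embedded commas and quotes is omitted for brevity:

```sql
DECLARE
  l_csv CLOB := 'id,amount,created_at' || CHR(10);  -- header row
BEGIN
  -- txn_input and its columns stand in for the real input table
  FOR r IN (SELECT id, amount, created_at
              FROM txn_input
             WHERE status = 'NEW') LOOP
    l_csv := l_csv || r.id || ',' || r.amount || ',' ||
             TO_CHAR(r.created_at, 'YYYY-MM-DD HH24:MI:SS') || CHR(10);
  END LOOP;

  -- save the snapshot back into the same input table
  UPDATE txn_input
     SET snapshot_csv = l_csv
   WHERE status = 'NEW';
  COMMIT;
END;
```

For large volumes, building the CLOB with DBMS_LOB.CREATETEMPORARY and DBMS_LOB.WRITEAPPEND is more efficient than repeated concatenation.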
I understand you stated you need to create a CSV, but see if you could do XML instead. Then you could use DBMS_XMLGEN to generate the necessary snapshot into a database column directly from the query for it.
I do not accept the notion that a CSV is human-readable (actually try it sometime as straight text). What is valid is that Excel displays it in human-readable form. But Excel should also be able to display the XML as human-readable. Further, if needed, the data in it can be directly back-ported into the original columns.
Just an alternate idea.
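For the XML alternative, a minimal sketch of DBMS_XMLGEN producing the snapshot directly from a query (the table name is again a hypothetical placeholder):

```sql
DECLARE
  l_xml CLOB;
BEGIN
  -- GETXML runs the query and returns the result set as an XML document in a CLOB
  l_xml := DBMS_XMLGEN.GETXML(
             'SELECT id, amount, created_at FROM txn_input WHERE status = ''NEW''');
  -- l_xml can then be signed and stored in the input table like the CSV would be
END;
```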
I am trying to generate a complex table with rows and columns spanning multiple cells. Below is a snapshot of my reST code.
However, the LaTeX-generated PDF output from Sphinx does not represent the format correctly.
Could someone let me know what might be wrong in my reST format, so I can correct this issue?
The HTML snapshot, as per comment, is attached below and it is correct.
Thank you!
This is a known bug: the Docutils LaTeX writer fails with (some) complex tables.
http://docutils.sourceforge.net/docs/user/latex.html#tables
I have not tried it because I don't want to type that whole table, but rst2pdf should have no problem processing it.
I've been using Core Data for about a week now, and really loving it, but one minor issue is that setting default values requires setting up a temp interface to load the data, which I then do away with once the data is seeded. Is there any way to edit values in a table, like how you can use phpMyAdmin to manipulate values in a MySQL database? Alternatively, is there a way to write a function to import seed values from something like a Numbers spreadsheet if the app doesn't detect the store data XML file?
For your first question, you could edit the file directly but it's highly recommended you don't. How to edit it depends entirely on the store type you selected.
Regarding importing or setting up pre-generated data, of course: you can write code to manually insert entity instances into your Managed Object Context. There's a dedicated section on this very topic in the documentation. Further, if you have a lot of data to import, there's even a section on how to do this efficiently.
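As a rough illustration of inserting seed data in code (a sketch only: the "Item" entity, its "name" attribute, and the seed values are all hypothetical, and your context would come from your own Core Data stack):

```swift
import CoreData

// Seeds a few default objects into the given managed object context.
func seedDefaults(into context: NSManagedObjectContext) throws {
    let names = ["Alpha", "Beta", "Gamma"]  // example seed values
    for name in names {
        // "Item"/"name" are placeholders for your real entity and attribute
        let item = NSEntityDescription.insertNewObject(
            forEntityName: "Item", into: context)
        item.setValue(name, forKey: "name")
    }
    try context.save()  // persist the seeded objects to the store
}
```

You would typically run something like this once at first launch, guarded by a check that the store is empty (or that the store file does not yet exist), so the seed is not duplicated.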
> Is there any way to edit values in a table, like how you can use phpMyAdmin to manipulate values in a MySQL database?
Xcode has a means of creating a quick-and-dirty interface for a data model. You just drag the data model file into a window in Interface Builder and it autogenerates an interface for you. This lets you view the data without having your entire app up and running.