Read CDPOS/CDHDR SAP tables using VBScript

I am trying to read the SAP change log using RFC, with VBScript as the buffer.
I know that I need to use the CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS functions to do this, but I have not found any guidance on how to do this properly from VBScript.
I have already found out how to read normal tables using BBP_RFC_READ_TABLE, but it doesn't work with CDPOS...
Any ideas?

First, if you want to use VBScript to integrate with SAP, you will go through the RFC channel using the SAP NW RFC library or the SAP .NET Connector 3.0, and the SAP functions or BAPIs you call must be remote-enabled. Unfortunately, the two functions, CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS, are not remote-enabled. I imagine CDPOS is difficult for you because it has wide fields which cannot be processed by BBP_RFC_READ_TABLE.
Once we are aligned on these constraints, there are two options to help you move forward:
1) Write your own custom "Z" function module, which is remote-enabled, and call CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS inside that function;
2) Use a third-party commercial library (our company AecorSoft developed such an ADO.NET-compliant library for SAP integration).
I would suggest you explore #1 first. You can follow this blog https://blogs.sap.com/2017/02/09/how-to-use-dotnet-connector-nco-inside-vba/ to get started.

I don't know about BBP_RFC_READ_TABLE, but RFC_READ_TABLE reads CDPOS perfectly well.
If you need a header-based query, you will need two sequential reads: the first for the CDHDR headers and the second for the CDPOS positions, constructing the second query from the results of the first.
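A minimal sketch of that two-read pattern, shown here with the node-rfc client purely to illustrate the RFC_READ_TABLE parameters (the same calls can be made from VBScript through an RFC-capable connector, as described above); the connection details, object class, and date filter are placeholders:

```typescript
// Sketch only (assumptions: npm package "node-rfc" with the SAP NW RFC SDK installed;
// host, credentials, object class and date filter below are placeholders).
import { Client } from "node-rfc";

async function readChangeDocuments(): Promise<void> {
  const rfc = new Client({
    ashost: "sap.example.com", sysnr: "00", client: "100",
    user: "RFC_USER", passwd: "secret",
  });
  await rfc.open();

  // 1) Read the change document headers from CDHDR.
  const headers: any = await rfc.call("RFC_READ_TABLE", {
    QUERY_TABLE: "CDHDR",
    DELIMITER: "|",
    OPTIONS: [{ TEXT: "OBJECTCLAS = 'MATERIAL' AND UDATE >= '20240101'" }],
    FIELDS: [{ FIELDNAME: "OBJECTID" }, { FIELDNAME: "CHANGENR" }],
  });

  // 2) For each header, read the matching positions from CDPOS.
  for (const row of headers.DATA as { WA: string }[]) {
    const [objectId, changeNr] = row.WA.split("|").map((s) => s.trim());
    const positions: any = await rfc.call("RFC_READ_TABLE", {
      QUERY_TABLE: "CDPOS",
      DELIMITER: "|",
      // Each OPTIONS row is limited to 72 characters, so the WHERE clause is split.
      OPTIONS: [
        { TEXT: "OBJECTCLAS = 'MATERIAL'" },
        { TEXT: ` AND OBJECTID = '${objectId}'` },
        { TEXT: ` AND CHANGENR = '${changeNr}'` },
      ],
      // Select only narrow columns: the wide CDPOS value columns would otherwise
      // overflow RFC_READ_TABLE's 512-character row buffer.
      FIELDS: [
        { FIELDNAME: "TABNAME" }, { FIELDNAME: "FNAME" },
        { FIELDNAME: "CHNGIND" }, { FIELDNAME: "VALUE_NEW" },
      ],
    });
    console.log(positions.DATA);
  }
  await rfc.close();
}
```

Restricting the selected CDPOS columns like this is usually what makes the table readable through RFC_READ_TABLE at all, since the wide value fields are what break generic table reads.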


Fuchsia: how to use a built-in capability in a component

I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
As I understand, using the network (in my case I'd like to utilize fuchsia.net.http.Loader) is a capability, which has to be granted to a running component. Makes sense, that's pretty much the core of the OS.
I also understand that the initiating component, the one that runs my component, needs to grant this capability to my component. That's fair.
What I don't understand, and I'd very much appreciate any additional information (pretty please!), is: how can I grant this capability to my component?
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
What am I missing? Thanks in advance!
I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
Thanks for your interest in Fuchsia! First of all, if you haven't already gone through Fuchsia Fundamentals I would strongly suggest that as a starting point for many of the foundational concepts.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
This is primarily because there isn't necessarily a concept of any set of components or capabilities being "built in" to the system. The capabilities available to components in the system are entirely dependent on the rest of the components in a particular product build and how they are organized (this is called the component topology).
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
The answer has a few sharp edges to it at the moment, as Fuchsia is a rapidly evolving open source project. Hopefully some of the details below will help you move forward.
Determine the capability routes
So you'll have to do a bit of work to figure out where the capability you need is provided and routed. In fact, one of the components exercises shows you how to do this for the fuchsia.net.http.Loader capability. Knowing where a capability is offered/used allows you to determine where your component would need to be instantiated to obtain the necessary capability.
You might also find some of the content in the Connect components developer guide useful in accessing the capability.
Run the component
Knowing where a capability is routed allows you to determine how to run your component. The most straightforward way of instantiating a component in the topology is to do so dynamically using ffx component. However, this requires a collection somewhere on the system with the capabilities you need. The ffx-laboratory realm where most examples are run has a very limited set of capabilities that does not include fuchsia.net.http.Loader.
You'll likely need to add your component statically to the topology using a core realm shard so that the necessary routes can be declared explicitly between the components that offer fuchsia.net.http.Loader and your component. With the component included statically in your product build, you can execute it using ffx component commands.
For more details on component execution, check out the Run components developer guide as well.
Run a CLI binary
Since this is a learning exercise, another option is to build your code as a binary that runs within the context of a component that already has the capabilities you need, instead of creating and running an entirely new component. This is commonly done for CLI tools. With the ffx component explore command and its --tools argument, you can run your code as a binary inside an existing component that provides the HTTP capability you are looking for, without needing to work through all the capability routing pieces described above.
For more details on ffx component explore, see Explore components.

Off-Chain Worker Framework

I haven’t entirely given up on the idea of validators moonlighting as oracles for off-chain computation…based on this extensive discussion: https://gov.near.org/t/off-chain-computation-framework/1400/6
So far from studying Sputnik’s code, I have figured out the mechanics of how to upload a blob to a smart contract. Let's say that a blob represents a storage-less contract, having only stateless functions that act only on input to the function, and return those inputs modified.
Now I’m missing the piece of how Validators can download and execute the blob. As mentioned by Ilya in the link above, the NearSDK would be able to interpret the blob (if the blob is essentially a compiled contract), but it needs to be a modified version of the SDK...
Think of this like sandbox mode…blob cannot modify state of any other contract, but can read state (forget about the internet access part for now). Results of the blob execution are then fed back to a smart contract, where they have to match the results of every other validator who executed the blob. This can be done by hash comparison (rather than looping through the results individually), so it’s not an expensive comparison, especially because it’s all or nothing.
Question: how can a Validator download the blob and execute it via a sandboxed SDK, and post the result via the regular SDK to the blockchain? I am missing a lot of architectural context…and this is bringing me to the edge of giving up on the idea. Please help prevent that from happening!
If you are implementing this as a separate binary, your binary will need to do the following:
Use RPC to load the WASM file from the blockchain (see the RPC reference; a minimal sketch of this step follows this answer).
Use runtime-standalone to run this WASM with specific inputs. An example of using runtime-standalone is here, but you will need to customize it in a few ways.
The result should be sent as a transaction signed by this binary, again via RPC.
If you want these WASM files to have access to state, you will need to load state inside this binary. There are two options:
Modify a nearcore node to also do the above items
Run nearcore in parallel, and open the database read-only when you are initializing the Trie (e.g. here, load from disk instead).
If you want to add more host functions (like accessing internet), you will need to fork runtime-standalone to expose those functions.
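A minimal sketch of the first step above (fetching a contract's deployed WASM blob over the NEAR JSON-RPC endpoint); the RPC URL and account id are placeholders:

```typescript
// Sketch only: pulls the contract code for an account via the "view_code" RPC query.
// Assumes Node.js 18+ (global fetch); endpoint and account id are placeholders.
const RPC_URL = "https://rpc.testnet.near.org";

async function fetchContractWasm(accountId: string): Promise<Buffer> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "query",
      params: {
        request_type: "view_code", // returns the account's deployed contract code
        finality: "final",
        account_id: accountId,
      },
    }),
  });
  const { result } = await response.json();
  // The RPC returns the WASM blob base64-encoded.
  return Buffer.from(result.code_base64, "base64");
}
```

The bytes returned here are what you would then hand to runtime-standalone in the second step; posting the result back is an ordinary signed transaction sent through the same RPC endpoint.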

Convert reusable ErrorHandling flow into connector/component in Mule 4

I'm using Mule 4.2.2 runtime. We use the error handling generated by APIkit and we customized it according to the customer's requirements; it is quite standard across all the upcoming APIs.
We are thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of copy-pasting it every time.
Something like REST Connect for API specifications, which automatically converts into a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Do we have any option like the above for publishing a common Mule flow so that it is converted into a component/connector?
If not, which of the following best suits my scenario?
1) using SDK
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule (or)
2) creating jar as mentioned in this page
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which one is best and easy way in this case? Thanks in advance.
Using the Mule SDK (1) is useful to create a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. My understanding is that you want to share parts of a flow as a connector in the palette, which is different. The XML SDK seems to be more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the method described by that link is actually incorrect for Mule 4. The right way to implement sharing flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in the POM file. An even better solution: include it as a flow in the domain app and use it along with your shared HTTPS connector.
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them. But the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that render components incompatible with the runtime.

Token based authentication using Dapper micro-orm

I am looking for a tutorial or sample for Dapper using token-based authentication in Web API 2. I would appreciate it if anyone could suggest where to start. I have found a tutorial at http://www.c-sharpcorner.com/UploadFile/ff2f08/token-based-authentication-using-Asp-Net-web-api-owin-and-i/, but the sample uses EF and I haven't tried EF, only Dapper; I am also using MySQL for my database. Thanks in advance and good day.
Dapper is a very different tool to EF (which is the DbContext described in your step 3 / step 4). It simply will not be compatible with those steps, and isn't designed to be used with those steps.
But here's the thing: dapper is just a tool. EF is just a tool. It is ok to use more than one tool. If it suits your purposes, then use EF to do one set of jobs (for example, to help you use a particular library that is designed with that in mind), and use another tool (such as dapper) elsewhere in the same project. That's OK. No one will mind.
If you really really don't want to use EF at all, then you'll need to find out everything that the library needs to support what you are doing, and implement it manually. If the library is designed around IQueryable<T> etc, then this may be very difficult.

Ajax without backend script

I have a simple database application in mind and I am thinking of making it browser-accessible instead of creating a standalone one.
I almost finished creating the DB schema in a PostgreSQL server and I will now start developing. My first idea was to use PHP or Ruby on Rails to manage the backend logic and interface with the DB, but since this application is fairly simple, I think I can easily implement all the business and data manipulation logic with JavaScript or with DB triggers.
So I am now wondering: is there a way to directly send the queries to a PostgreSQL Server, without server-side scripting?
More generally: can a PostgreSQL(9.3) Server receive the queries in Http requests and provide the results in Http responses?
I know this might sound stupid, and I am not looking for answers like "Use JS for presentation, PHP for logic and DB for data storage". I believe this is a lightweight solution for a very simple application, so I want to try it if possible!
Yes, that is possible.
What you can do is send the queries via a REST API (POST and GET requests).
Here are some reference for you:
https://github.com/begriffs/postgrest
https://github.com/pgrest/pgrest
Please take a look at these for more HTTP API options.
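As a rough illustration of the PostgREST approach (assumptions: a PostgREST instance running against your database and exposed at localhost:3000, and a table called items in the exposed schema), the browser can then query the table directly:

```typescript
// Sketch only: PostgREST maps each table/view in the exposed schema to a REST endpoint,
// so a plain fetch from the browser is enough. URL, table and column names are placeholders.
async function loadItems(): Promise<void> {
  const response = await fetch("http://localhost:3000/items?select=id,name&limit=10");
  if (!response.ok) {
    throw new Error(`PostgREST request failed: ${response.status}`);
  }
  const rows: { id: number; name: string }[] = await response.json();
  console.log(rows);
}
```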
[update!]
This idea is currently not possible (contrary to what I thought when I answered you before).
I thought it was possible after checking the node-postgres library, which is written in JavaScript, but it uses Node.js-specific functions that are not present in the web browser, as stated by the library's creator himself and by this answer on Stack Overflow.
There is a package called browserify that converts a Node.js JavaScript file into a browser-ready front-end JavaScript file. The problem with node-postgres + browserify is that it throws errors during the browserification process, precisely when it tries to access libpq (a C library for accessing PostgreSQL).
I'm sorry I misled you.
Yet I still have a suggestion for you. You can try CouchDB if you really want to build a backendless/serverless application. It is natively RESTful, handles authentication and authorization to some extent, and is open source, but unfortunately it is NoSQL. It processes queries based on the Map/Reduce paradigm and the Mango query language, so it's an entirely different world to discover if you are used to SQL.
[old answer, I'm leaving it here for learning purposes]
Have you considered using a PostgreSQL driver for JavaScript? It is not RESTful, but it can connect to PostgreSQL and query it!
The library is called node-postgres and you can download it via npm
https://www.npmjs.com/package/pg
Just don't forget to enable SSL connection in the PostgreSQL server and in the client to avoid man-in-the-middle attacks.
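For reference (and, per the update above, this runs under Node.js, not in the browser), a minimal sketch of how node-postgres is typically used; the connection values, table, and column names are placeholders:

```typescript
// Sketch only: basic node-postgres usage with a parameterized query.
// Connection details are placeholders; SSL is enabled as recommended above.
import { Client } from "pg";

async function main(): Promise<void> {
  const client = new Client({
    host: "localhost",
    port: 5432,
    database: "mydb",
    user: "app_user",
    password: "secret",
    ssl: true,
  });
  await client.connect();
  const result = await client.query("SELECT id, name FROM items WHERE id = $1", [1]);
  console.log(result.rows);
  await client.end();
}
```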
And here's a tip: if you need an ACL for allowing or denying selects or inserts for specific users, you can manage that through PostgreSQL user management and privileges. PostgreSQL has row-level security, allowing you to define which rows in a table can be selected, updated, and deleted by a given set of users or groups.
