I wonder if anyone can provide a simple program that connects to an Oracle database and runs a query against it using SqlKata.
Please also tell me what I need to install.
This is my code so far:
using SqlKata.Compilers;
using SqlKata.Execution;
using Oracle.ManagedDataAccess.Client; // System.Data.OracleClient is obsolete; use the Oracle managed provider instead

// SqlKata wraps an ADO.NET connection; for Oracle that should be OracleConnection, not SqlConnection
var connection = new OracleConnection("Data Source=tellus:1526/aom19c;User Id=Test1;Password=TEST1");
var compiler = new OracleCompiler();
var db = new QueryFactory(connection, compiler);

var parts = db.Query("part").Get(); // returns IEnumerable<dynamic>
foreach (var part in parts)
    Console.WriteLine(part);
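To answer the install question: the NuGet packages involved should be SqlKata, SqlKata.Execution, and an Oracle ADO.NET provider such as Oracle.ManagedDataAccess.Core (an assumption based on the current package names). As a usage sketch on top of the code above, filtered queries are built the same way; the column name id is only an assumption, not something from the original schema:

// hypothetical column "id"; adjust to the real schema
var onePart = db.Query("part").Where("id", 123).FirstOrDefault();
var fewParts = db.Query("part").Limit(10).Get();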
With some VB.NET code I am trying to retrieve data from an Oracle database (simplified example):
strQuery = "Select 2.3, 2.3/1, 2.3/3.1 From Owner.TableName Where ROWNUM < 10"
Dim da As OracleDataAdapter
da = New OracleDataAdapter(strQuery, ConnectionString)
da.Fill(GetData)
This results in a "Specified cast is invalid" error.
The 2.3/3.1 is the problem.
I learned from "Specified cast is not valid" when populating DataTable from OracleDataAdapter.Fill() that Oracle works with a higher precision than .NET can handle and that I should use SuppressGetDecimalInvalidCastException in the OracleDataAdapter. But I don't know how to code it in VB.NET. Can anyone help me?
Automatic translation from the C# code did not work.
The C# code itself did not work for me (probably because I don't know how to handle the async parts), and if I simplify it to
string queryString = "Select 2.3, 3.1 From owner.table";
string connectionString = "Data Source=Data.plant.be/database;User ID=****;Password=****";
var table = new DataTable();
var connection = new OracleConnection(connectionString);
var command = new OracleCommand(queryString, connection);
var adapter = new OracleDataAdapter(command) {
    SuppressGetDecimalInvalidCastException = true
};
adapter.Fill(table);
I get error CS0117: 'OracleDataAdapter' does not contain a definition for 'SuppressGetDecimalInvalidCastException'.
Extra info:
As proposed by @Andrew-Morton - thank you Andrew - I wrote:
Dim table = New DataTable()
Dim connection = New OracleConnection(ConnectionString)
Dim cmd = New OracleCommand(strQuery, connection)
Dim adapter = New OracleDataAdapter(cmd) With {.SuppressGetDecimalInvalidCastException = True}
adapter.Fill(GetData)
But I get BC30456: SuppressGetDecimalInvalidCastException is not a member of 'OracleDataAdapter'.
Remark: I have version 19.6 of Oracle.ManagedDataAccess.
I could not install the package 'Oracle.ManagedDataAccess 21.9.0'. I get: "You are trying to install this package into a project that targets '.NETFramework,Version=v4.5', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author."
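For context, SuppressGetDecimalInvalidCastException appears to have been added in an ODP.NET release newer than 19.6 (hence BC30456/CS0117), and the 21.x packages no longer support .NET Framework 4.5, which is why the NuGet install fails as well. A possible interim workaround while stuck on 19.6, shown only as a sketch and not verified against the real schema, is to reduce the precision on the Oracle side so the value fits into System.Decimal, or to cast to BINARY_DOUBLE so the driver returns a Double:

// sketch: avoid the huge-precision NUMBER instead of suppressing the cast exception
string queryString =
    "Select 2.3, 2.3/1, ROUND(2.3/3.1, 28) From Owner.TableName Where ROWNUM < 10";
// or return the value as floating point instead of a decimal:
// "Select CAST(2.3/3.1 AS BINARY_DOUBLE) From Owner.TableName Where ROWNUM < 10"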
I set up an Elastic Cloud deployment to offload my local Elasticsearch config (as one does), but for reasons unknown to me, I can't get it to show any logs in Elastic Cloud, despite it working fine locally.
The code I have (modified for privacy reasons):
//var uri = new Uri("http://localhost:9200"); // old one
var uri = new Uri("https://my-server.kb.eastus2.azure.elastic-cloud.com:9243");
var sinkOptions = new ElasticsearchSinkOptions(uri)
{
    AutoRegisterTemplate = true,
    ModifyConnectionSettings = x => x.BasicAuthentication("elastic", "the password I was given"),
    IndexFormat = $"test-logs-{env.EnvironmentName?.ToLower().Replace('.', '-')}-{DateTime.Now:yyyy-MM}",
};
Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(config)
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .WriteTo.Console()
    .WriteTo.Elasticsearch(sinkOptions)
    .Enrich.WithProperty("Environment", env.EnvironmentName)
    .CreateLogger();
There are two possible reasons I can think of that might be the cause of this not working:
The credentials are wrong
The Uri is wrong
Every solution I've been given so far has provided the data in this fashion, and nowhere does it say what the URI I'm supposed to use looks like.
I get no errors.
I get no warnings.
I get no logs.
What am I doing wrong here?
The issue was an incorrect URI. I had written
my-server.kb.eastus2.azure.elastic-cloud.com:9243 rather than
my-server.es.eastus2.azure.elastic-cloud.com:9243.
Note the very tiny difference, kb vs es, in the URL: kb is the Kibana endpoint, while es is the Elasticsearch endpoint.
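In code, the only change is the host name; a sketch of the corrected line from the snippet above:

// point the sink at the Elasticsearch endpoint (.es.), not the Kibana endpoint (.kb.)
var uri = new Uri("https://my-server.es.eastus2.azure.elastic-cloud.com:9243");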
I am trying to connect to Snowflake using R in Databricks. The connection works and I can run queries and retrieve data successfully; my problem, however, is that it can take more than 25 minutes simply to connect, although once connected all my queries are quick.
I am using the sparklyr function 'spark_read_source', which looks like this:
query <- spark_read_source(
  sc = sc,
  name = "query_tbl",
  memory = FALSE,
  overwrite = TRUE,
  source = "snowflake",
  options = append(sf_options, client_Q)
)
where 'sf_options' is a list of connection parameters that looks similar to this:
sf_options <- list(
  sfUrl = "https://<my_account>.snowflakecomputing.com",
  sfUser = "<my_user>",
  sfPassword = "<my_pass>",
  sfDatabase = "<my_database>",
  sfSchema = "<my_schema>",
  sfWarehouse = "<my_warehouse>",
  sfRole = "<my_role>"
)
and my query is a string appended to the 'options' argument, e.g.
client_Q <- 'SELECT * FROM <my_database>.<my_schema>.<my_table>'
I can't understand why it is taking so long; if I run the same query from RStudio using a local Spark instance and 'dbGetQuery', it is instant.
Is spark_read_source the problem? Is it an issue between Snowflake and Databricks? Or something else? Any help would be great. Thanks.
The following code is a simple test to check how many entities can be added per second or minute.
createAsset calls the backend (http://localhost:3000) and adds data using POST.
When I ran a test with this code, it took 23 seconds to add 10 entities.
I am using Composer 0.19.12 and Fabric 1.1. In a thread on GitHub I read that performance improved by indexing CouchDB. How can I use that feature? (I need to check again, but it seems to be a default feature of recent Composer versions.)
addEntities: async function() {
    var start = 0;
    var end = start + 100;
    var sd = new Date();
    console.log(sd.getHours()+':'+sd.getMinutes()+':'+sd.getSeconds()+'.'+sd.getMilliseconds());
    for(var i = start; i<end; i++) {
        entityData.id = i.toString();
        await this.createAsset('/Entity', 'model.Entity', entityData);
    }
    var ed = new Date();
    var totalTime = new Date(ed.getTime()-sd.getTime());
    console.log(totalTime.getMinutes()+':'+totalTime.getSeconds()+'.'+totalTime.getMilliseconds());
},
My model is really simple as follows.
asset Entity identified by id {
    o String id
}
Following david_k's advice, I have changed the test code to send multiple transactions at once, as follows.
addEntities: async function() {
    var start = 15000;
    var dataNumber = 1200;
    var loopNumber = 400;
    var end = start + dataNumber;
    var sd = new Date();
    console.log(sd.getHours()+':'+sd.getMinutes()+':'+sd.getSeconds()+'.'+sd.getMilliseconds());
    var tasks = [];
    for(var i = start; i<end; i++) {
        entityData.id = i.toString();
        if((i-start)%loopNumber === loopNumber - 1) {
            // await every loopNumber-th request so the submissions go out in batches
            await this.createAsset('/Entity', 'model.Entity', entityData);
            console.log('--- i: ' + i + ' loops completed');
        }
        else {
            // fire and forget: submit without waiting for the previous request to finish
            this.createAsset('/Entity', 'model.Entity', entityData);
        }
    }
    var ed = new Date();
    var totalTime = new Date(ed.getTime()-sd.getTime());
    console.log(totalTime.getMinutes()+':'+totalTime.getSeconds()+'.'+totalTime.getMilliseconds());
},
The purpose of the change is to send multiple requests at the same time, and it seems to work well, as it shows much better performance than the previous code. However, the throughput is still only around 8 TPS. Compared with the original test code, which managed roughly one transaction every 2-3 seconds, that is a big improvement, but 8 TPS is not usable for a commercial application at all, and it isn't really adequate even for testing. Could someone give me some advice on this?
That sounds about right looking at your example code. I am assuming you are using either the fabric-dev-servers package, which is a very simple Fabric network intended to help users get started developing a business network and try it out on a Hyperledger Fabric network, or the byfn network from the multi-org tutorial, which is a Hyperledger Fabric example of a two-organisation consortium demonstrating the operational steps required to run Composer in a multi-org Fabric setup.
Hyperledger Fabric is a distributed ledger technology based around eventual consistency. Composer implements a submit/notify model: once a transaction has been submitted, the client is notified when that transaction has been committed to the ledger. You can configure which peers in the network you want to be notified by when that occurs, but the default is all of them, so the REST server responds only once all peers have committed the transaction to the ledger.
Hyperledger Fabric doesn't commit individual transactions; it batches them into blocks, and these blocks are committed to the ledger. The orderer waits a period of time before building a block from the transactions currently submitted for ordering, so a block can contain one or more transactions. You need to configure Fabric for your use case to determine how transactions are batched into blocks.
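For reference, the batching knobs live in the Orderer section of Fabric's configtx.yaml. This is only a sketch using the Fabric 1.x sample values, not a recommendation; BatchTimeout and MaxMessageCount are the usual ones to tune for throughput:

Orderer: &OrdererDefaults
    OrdererType: solo
    BatchTimeout: 2s              # how long the orderer waits before cutting a block
    BatchSize:
        MaxMessageCount: 10       # maximum number of transactions in a block
        AbsoluteMaxBytes: 99 MB   # hard upper bound on block size
        PreferredMaxBytes: 512 KB # preferred block size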
I am writing a vs2012 extension that will talk to TFS 2010 (though I would prefer if it could also work with tfs2012).
I need to invoke a compare operations on a file from the extension.
I want to use the default compare tool that is configured in Visual Studio at the moment of the invocation (because the user can configure a different compare tool).
I have the location of the file and I want to be able to invoke the following:
open the default compare.
open a compare with latest version
open a compare with workspace version
Use IVsDifferenceService to invoke the Visual Studio diff tool from your VSPackage:
private void Compare(string leftFile, string rightFile)
{
    var diffService = (IVsDifferenceService)GetService(typeof(SVsDifferenceService));
    if (diffService != null)
    {
        ErrorHandler.ThrowOnFailure(
            diffService.OpenComparisonWindow(leftFile, rightFile).Show()
        );
    }
}
To test it, you need to get the workspace and download the version of the file you want to compare against:
// TODO: add some error handling
var tpc = new TfsTeamProjectCollection(new Uri("http://tfs.company.com:8080/tfs"));
var vcs = tpc.GetService<VersionControlServer>();
var workspace = vcs.GetWorkspace(Environment.MachineName, vcs.AuthorizedUser);
string localItem = @"C:\workspace\project\somefile.cs";
var folder = workspace.GetWorkingFolderForLocalItem(localItem);
var item = vcs.GetItem(folder.ServerItem, VersionSpec.Latest);
var latestItem = string.Format("{0}~{1}", localItem, item.ChangesetId);
item.DownloadFile(latestItem);
Compare(localItem, latestItem);
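The snippet above compares against the latest version. For the "compare with workspace version" case, a sketch along the same lines using WorkspaceVersionSpec; treat it as an assumption about the intended usage rather than verified code:

// download the version the workspace currently has (the "workspace version") and diff against it
var wsVersionItem = vcs.GetItem(folder.ServerItem, new WorkspaceVersionSpec(workspace));
var workspaceVersionFile = string.Format("{0}~ws{1}", localItem, wsVersionItem.ChangesetId);
wsVersionItem.DownloadFile(workspaceVersionFile);
Compare(localItem, workspaceVersionFile);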
References:
using Microsoft.VisualStudio;
using Microsoft.VisualStudio.Shell;
using Microsoft.VisualStudio.Shell.Interop;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;