go-pg is a Golang library for PostgreSQL. In SQL one can update an entire column by applying a regular expression, e.g.:
update <some-table> set x = regexp_replace(x, '^.*\/[0-9]+(.*)$', '\1hello');
Problem
According to the README, one can perform a bulk update. However, no information regarding regular expressions was found in either the issue tracker or the documentation.
Question
Does this library support regexp_replace updates?
It does not support it as an ORM, but it supports plain SQL. I personally do not like to run it as such, but there seems to be no other choice when using this library at the moment. One benefit is that the statement runs in the flow of the Go app; for example, once the file paths have been changed on disk, the database can be updated in a controlled way.
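A minimal sketch of the plain-SQL route with go-pg v10 (the table name files, the column path, and the connection details are all hypothetical):

package main

import (
	"fmt"

	"github.com/go-pg/pg/v10"
)

func main() {
	// Hypothetical connection options; adjust to your environment.
	db := pg.Connect(&pg.Options{
		Addr:     "localhost:5432",
		User:     "postgres",
		Password: "secret",
		Database: "mydb",
	})
	defer db.Close()

	// Run the regexp_replace update as plain SQL.
	res, err := db.Exec(`UPDATE files SET path = regexp_replace(path, '^.*\/[0-9]+(.*)$', '\1hello')`)
	if err != nil {
		panic(err)
	}
	fmt.Println("rows updated:", res.RowsAffected())
}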
Hi there, I need to change the format of a Parquet file to CSV using only Logic App native tools. Is that even possible?
I researched similar issues and found how to use Azure Functions to change the format, but that is not a native Logic App tool.
There's a custom connector that will transform Parquet to JSON for you.
It will also allow you to perform filter and sorting operations on the data prior to it being returned.
Documentation can be found here: https://www.statesolutions.com.au/parquet-to-json/
I have a huge SQLite file containing my DB. I need to know whether it is possible, and how, to connect to this DB as an embedded one with JPA.
I'm developing an app that packs this database inside its own JAR, so that when I use it on another system I don't have to import a copy of my DB back and forth.
The technologies I'd like to use are Angular and Spring, since those are the ones I know best. If there are technologies better suited to this purpose, I'd like some suggestions.
Thanks :)
I hope I understood your question correctly, so I made a small project for you that you can have a look into: spring-jpa-sqlite-sample. It may guide you a bit, though I don't claim correctness or completeness.
The path to the sqlite file can easily be changed by inserting the correct url in the persistence.properties file:
driverClassName=org.sqlite.JDBC
# relative paths may be used
url=jdbc:sqlite:src/main/resources/chinook.db
hibernate.dialect=dev.mutiny.semo.config.SQLiteDataTypesConfig
hibernate.hbm2ddl.auto=none
hibernate.show_sql=true
You can also use environment variables from your system, which Spring tries to read, so that you can reference the correct path to the file. This can be found here: Read system environment var (SO)
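For example (a sketch assuming a hypothetical environment variable named SQLITE_DB_PATH, and that Spring's property placeholder resolution is active for this file), the url line could become:

# SQLITE_DB_PATH is a hypothetical environment variable holding the path to the .db file
url=jdbc:sqlite:${SQLITE_DB_PATH}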
Last but not least: beware of using huge SQLite files. Find another way, and first transfer the data into a 'real' database, i.e. any client/server RDBMS you know (Oracle, MariaDB, MSSQL; it depends on your scenario/taste).
Have a closer look at the documentation: When to use SQLite (and when not to!)
Very new to Datadog and need some help. I have crafted 2 SQL queries (one for an on-prem database and one for a cloud database) and I would like to run those queries through Datadog, display the query results, and validate that the daily results fall within an expected variance between the two systems.
I have already set up Datadog on the cloud environment and believe I should use DogStatsD to create a custom metric, but I am pretty lost as to how I can incorporate my SQL queries in the code to create the metric for eventual display on a dashboard. Any help will be greatly appreciated!!!
You probably want to use the MySQL integration and configure the 'custom queries' option: https://docs.datadoghq.com/integrations/faq/how-to-collect-metrics-from-custom-mysql-queries
You can follow those instructions after you configure the base integration: https://docs.datadoghq.com/integrations/mysql/#pagetitle (this will give you a lot of useful metrics in addition to the custom queries you want to run).
As you mentioned, DogStatsD is a library you can import into whatever script or application in order to submit metrics. But it isn't common practice to modify the underlying code of your database. Instead, it makes more sense to externally run a query against the database, take those results, and send them to Datadog. You could write a Python script to do this, but the Datadog Agent already has this capability built in, so it's probably easier to just use that.
I am also assuming SQL refers to MySQL; there are other integrations for things like SQL Server and PostgreSQL, and pretty much every implementation of SQL. The same pattern applies: you configure the integration, then add an extra section to the config file where you have the check run your queries.
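As a rough sketch, the custom query section in conf.d/mysql.d/conf.yaml could look something like this (the connection details, query, metric name, and tag are all hypothetical; check the docs above for the exact schema of your Agent version):

init_config:

instances:
  - host: localhost
    port: 3306
    username: datadog
    password: "<PASSWORD>"
    custom_queries:
      # each column of the result set maps, in order, to an entry under 'columns'
      - query: SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE()
        columns:
          - name: myapp.orders.daily_count
            type: gauge
        tags:
          - system:cloud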
I have a side project I'm working on currently that requires me to copy a .csv file from a remote FTP server and save it locally. I figured I would use DBMS_SCHEDULER.GET_FILE, but I do not have permission. When I asked my manager, he said that I won't be able to get privileges to do this and should look up other ways.
After researching for a couple of days I keep coming back to DBMS_SCHEDULER; am I out of luck, or are my searching skills terrible?
Thanks
I'm not certain you'd want to use DBMS_SCHEDULER for this; from what I understand from the documentation (never used this myself) the FTP site would have to be completely open to all; there is a parameter destination_permissions, but it's only "Reserved for future use", i.e. there's no way of specifying any permissions at the moment.
If I'm right about this then I agree with your manager, though not necessarily for the same reasons (it seems you'll never get permission to use DBMS_SCHEDULER, which I hope is incorrect).
There are other methods of doing this:
UTL_TCP; this is simply a method of interacting over a TCP/IP protocol. Oracle Base has an article which includes an FTP package based on UTL_TCP and instructions on how to use it. This also requires the UTL_FILE package, which can write OS files.
UTL_HTTP; I'm 99% certain it's possible to connect to an FTP server using this; it's certainly possible to connect to an SFTP/any server. It'll require a little more work, but it would be worth it in the long run. It would also require the use of UTL_FILE.
A Java stored procedure to FTP directly; this is probably the best approach; create one using one of the many Java FTP libraries.
A Java stored procedure to call OS commands; this is the easiest method but the least extensible. Oracle released a white paper on calling OS commands from within PL/SQL back in 2008, but there's plenty of other stuff out there (including Oracle Base again).
Lastly, you could question whether this is actually what you want to do...
What scheduler do you use? Does it have event-driven scheduling? If so, there's no need to FTP from within Oracle; use UTL_FILE to write a file to the OS and then use OS commands from there.
Was the other file originally in a database? If that's the case you don't need to extract it. You could use DBMS_FILE_TRANSFER to collect it straight from the database or even create a JDBC connection or (more simply) a database link to SELECT the data directly.
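As a minimal sketch of the UTL_FILE step mentioned above (EXPORT_DIR is an assumed directory object that a DBA would have to create and grant you write access on):

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- EXPORT_DIR is hypothetical; a DBA would create it with something like:
  --   CREATE DIRECTORY export_dir AS '/path/on/db/server';
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'data.csv', 'w');
  UTL_FILE.PUT_LINE(l_file, 'id,name');
  UTL_FILE.PUT_LINE(l_file, '1,example');
  UTL_FILE.FCLOSE(l_file);
END;
/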
I've been scouring the rsyslog documentation for a way to anonymize MySQL log data by removing quoted strings. I've successfully managed to detect strings with sensitive data using the :contains property, but I can't seem to find a way to replace them.
I've looked through the property options and the regex functionality. I believe I may be missing something, because none of those provide a straightforward way to find and replace.
AFAIK, there's currently no way to do a regex replace in rsyslog. The cleanest way (that I see) to achieve what you need is to parse your logs with mmnormalize (more documentation can be found at liblognorm, the library mmnormalize uses). Then you can access all the parsed properties and put whatever you want in templates. Templates let you select which properties from the messages get written to MySQL.
The benefit of this solution is that mmnormalize should be faster than using regular expressions. The problem is that you'll probably need a new version of rsyslog (probably 8.x) to get it working properly.
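A rough sketch of the idea (the rulebase path, field names, and output file are hypothetical, and the rule assumes exactly one quoted string per message; check the liblognorm docs for the field syntax your version supports):

# /etc/rsyslog.d/mysql.rb -- split the message around one quoted string
rule=:%prefix:char-to:'%'%quoted:char-to:'%'%rest:rest%

And in rsyslog.conf:

module(load="mmnormalize")
action(type="mmnormalize" rulebase="/etc/rsyslog.d/mysql.rb")

# re-emit the message with the quoted value masked; messages that don't
# match the rule would need their own handling
template(name="anon" type="string" string="%$!prefix%'***'%$!rest%\n")
action(type="omfile" file="/var/log/mysql-anon.log" template="anon")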