Replica database is not accessible in Read Scale Availability Group in SQL Server 2017 Standard edition - high-availability

I'm looking into ways of replicating databases from On-Premise environments to Azure and one of the options I found was setting up a Read-Scale availability group.
The reason I'm using a Read-Scale and not an Always On availability group is that I don't want to use SQL Server Enterprise edition due to the cost.
I followed a tutorial from Microsoft (MS TUTORIAL) to set this all up, and in the end I think I got it working, as my database appeared in the Azure environment.
However, the problem is that my replica always stays in the Synchronizing state - which is probably because I chose asynchronous replication via the AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT option - but, even worse, I can't access the database itself.
Each time I try to fire a query against it, it comes back with an Object is not accessible exception.
After some reading, I found that the cause of this might be that my replica doesn't have a secondary role. When I try to set this via the ... SECONDARY_ROLE({ALLOW_CONNECTIONS = ALL})... command, SQL Server clearly states that this feature is not available in the Standard edition.
My whole confusion comes from the fact that the Microsoft documentation (MS DOCS) says that With availability groups, one or more secondary replicas can be configured to support read-only access to secondary databases, which is exactly what I'm failing to achieve.
Has anybody had the same issue, or does anybody know how to configure a Read-Scale availability group on SQL Server Standard so that my secondary replica is accessible and readable as well?
P.S. I did look at actual SQL Replication with Transactional Replication, but there are quite a few moving parts there, so I'm exploring all options before making a decision.

Based on a Twitter conversation, I found out that you need to create a snapshot of the database on the secondary replica in order to read from it.
Please read this Twitter thread.
I also added a suggestion in the feedback channel to fix the documentation.
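To illustrate the workaround, here is a minimal T-SQL sketch (the database, logical file, and path names are hypothetical) of creating a database snapshot on the secondary replica and querying it:
-- Run on the secondary replica. Names below are examples only.
-- A database snapshot gives a static, readable copy of the replica database.
CREATE DATABASE SalesDb_Snapshot
ON ( NAME = SalesDb_Data, FILENAME = 'D:\Snapshots\SalesDb_Snapshot.ss' )
AS SNAPSHOT OF SalesDb;
GO
-- Queries then go against the snapshot, not the replica itself:
SELECT TOP (10) * FROM SalesDb_Snapshot.dbo.Orders;
Note that the snapshot is static, so it has to be dropped and re-created whenever you need fresher data from the replica.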

Related

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key and gets no results.
If I try it manually afterwards I find the record. If I insert a 1-second sleep in the code it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to have the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts and the write occurs on the master, the data may not have been replicated yet at the time of the read.
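If you want to confirm that lag is the cause, a quick (hedged) check is to ask the read host how far behind it is; the exact statement and column name vary by MySQL version:
-- Run on the read replica / slave host:
SHOW SLAVE STATUS;
-- Look at the Seconds_Behind_Master column (SHOW REPLICA STATUS / Seconds_Behind_Source on newer versions).
-- Any value above 0 at the moment of your read explains the "missing" record.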
There are a couple of ways to rectify the error:
The read immediately after must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel. Beware: this may cause performance issues if it is not used carefully and limited to the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post.

BulkDeleteFailureBase - lots of "Not Enough Privilege" records

I've taken over support of a CRM 2016 On-Premise system. I don't know the history of the particular instance, but I suspect it's been copied and/or imported many times.
The BulkDeleteFailureBase table has just short of 2 million rows, almost all of which contain an error description like:
Not enough privilege to access the Microsoft Dynamics CRM object or perform the requested operation. The current Organizationid '<GUID1>' does not match with userOrTeam's organization id '<GUID2>'.
OrganisationBase has only one record with <GUID2> in it.
Has this happened because the instance has been copied/moved around incorrectly? If so, is this likely an indication more problems are heading my way in the future?
How can I recover from this?
BulkDeleteFailureBase is one of the system async-job logging tables, where the platform captures run/success/failure logs.
Someone probably tried to clean up data (such as the plug-in trace log) that was copied over from a different DB backup/restore or CRM organization restoration. They used a bulk delete, and everything that failed ended up here.
The MS Support recommendation includes a script to clean those tables safely. Leaving them alone only gives you performance headaches.
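I don't have the exact MS Support script to hand, but as a rough, hypothetical sketch of what such a cleanup usually looks like (batched deletes so transaction log growth and blocking stay under control - get the supported script from MS Support before touching a production organization):
-- Illustrative sketch only, not the official Microsoft script.
DECLARE @BatchSize INT = 5000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.BulkDeleteFailureBase;
    IF @@ROWCOUNT = 0 BREAK;
END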

Cube database renamed - where is it logged?

Over the weekend someone renamed one of the cube databases that we have leading to massive headaches and SQL job failures. I would like to know if a cube database rename action is logged anywhere and related details. I tried replicating the same in development environment and searching in eventvwr, without much luck. Any leads will be appreciated!
The key mechanisms for maintaining error logs for Analysis Services are:
1. Keep track of the data stored in msmdsrv.log. It will be necessary to copy the log off before it gets overwritten.
2. If you are using Analysis Services 2005, 2008, or 2008 R2, you can generate your own trace events as noted in the System-wide Trace file section of the post Analysis Services Processing Best Practices at: http://technet.microsoft.com/en-us/library/cc966525.aspx#EBAA
3. If you are using SQL Server 2012, you can use the XEvents feature as noted in the SSAS documentation Use SQL Server Extended Events (XEvents) to Monitor Analysis Services at: http://msdn.microsoft.com/en-us/library/gg492139.aspx
By using the above mechanisms, you can keep track of the logs going forward.
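For the specific rename you are chasing, one additional place to check (a hedged suggestion - it only helps if a trace was already running at the time) is the list of traces and log files the Analysis Services instance knows about, which you can query via a DMV:
-- Run against the Analysis Services instance, e.g. from an SSMS MDX query window.
SELECT * FROM $SYSTEM.DISCOVER_TRACES;
-- The LogFileName column shows where each active trace (including the flight recorder) is writing.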

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the reports server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server'
    ||'&report='||p_report_name||'&userid='||p_connstring
    ||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist
    ||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist
    ||'&subject=%22'||REPLACE(p_subject,' ','%20')||'%22'
    ||'&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem however is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without really any warning, so I can't just hardcode the report server name (the ?server= parameter above) with one of the report server names because it will work for a while, then when that server goes down, it will stop working.
My server team asked me to look for a way to pull the server from the formsweb.cfg file or from default.env value within the job (there are parameters in there that hold the server name). The idea there is that the "http://server" piece will direct the report to be run on the appropriate server, and the first part of the job could get the reports server name from the config file that the report is run on. I'm not sure if this is possible from the database level, or how to do this. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
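The job itself then barely changes; it just targets the VIP (and, if the Reports layer still needs a server name, the cluster name rather than a physical box). A hedged sketch reusing the parameters from the call in the question, with reports.yourdomain.com and rep_cluster as hypothetical names:
-- Same request as before, pointed at the load-balanced VIP instead of one physical server.
UTL_HTTP.REQUEST('http://reports.yourdomain.com/reports/rwservlet?server=rep_cluster'
    ||'&report='||p_report_name||'&userid='||p_connstring
    ||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist
    ||'&paramform=no&DESformat=pdf&ENVID='||p_envid);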

MySQL database backup: performance issues

Folks,
I'm trying to set up a regular backup of a rather large production database (half a gig) that has both InnoDB and MyISAM tables. I've been using mysqldump so far, but I find that it's taking increasingly longer periods of time, and the server is completely unresponsive while mysqldump is running.
I wanted to ask for your advice: how do I either
Make mysqldump backup non-blocking - assign low priority to the process or something like that, OR
Find another backup mechanism that will be better/faster/non-blocking.
I know of the existence of MySQL Enterprise Backup product (http://www.mysql.com/products/enterprise/backup.html) - it's expensive and this is not an option for this project.
I've read about setting up a second server as a "replication slave", but that's not an option for me either (this requires hardware, which costs $$).
Thank you!
UPDATE: more info on my environment: Ubuntu, latest LAMPP, Amazon EC2.
If replication to a slave isn't an option, you could leverage the filesystem, depending on the OS you're using:
Consistent backup with Linux Logical Volume Manager (LVM) snapshots.
MySQL backups using ZFS snapshots.
The joys of backing up MySQL with ZFS...
I've used ZFS snapshots on a quite large MySQL database (30GB+) as a backup method and it completes very quickly (never more than a few minutes) and doesn't block. You can then mount the snapshot somewhere else and back it up to tape, etc.
Edit: (my previous answer suggested a slave DB to back up from, then I noticed Alex ruled that out in his question.)
There's no reason your replication slave can't run on the same hardware, assuming the hardware can keep up. Grab a source tarball, ./configure --prefix=/dbslave; make; make install; and you'll have a second mysql server living completely under /dbslave.
EDIT2: Replication has a bunch of other benefits as well. For instance, with replication running, you may be able to recover the binlog and replay it on top of your last backup to recover the extra data after certain kinds of catastrophes.
EDIT3: You mention you're running on EC2. Another, somewhat contrived idea to keep costs down is to try setting up another instance with an EBS volume. Then use the AWS api to spin this instance up long enough for it to catch up with writes from the binary log, dump/compress/send the snapshot, and then spin it down. Not free, and labor-intensive to set up, but considerably cheaper than running the instance 24x7.
Try the mk-parallel-dump utility from Maatkit (http://www.maatkit.org/).
Something you might consider is using binary logs here, through a method called 'log shipping'. Just before every backup, issue a command to flush the binary logs, and then you can copy all except the current binary log out via your regular file system operations.
The advantage of this method is that you're not locking up the database at all: when MySQL opens up the next binary log in sequence, it releases all the file locks on the prior logs, so processing shouldn't be affected. Tar 'em, zip 'em in place, do as you please, then copy them out as one file to your backup system.
Another advantage of using binary logs is that you can restore to any point in time for which the logs are available. I.e. you have last year's full backup and every log from then to now, but you want to see what the database was on Jan 1st, 2011. You can issue a restore 'until 2011-01-01', and when it stops, you're at Jan 1st, 2011 as far as the database is concerned.
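As a rough sketch of the flush-and-copy step described above (standard MySQL statements; the binary log file name is just an example):
-- Just before the backup: close the current binary log and start a new one.
FLUSH LOGS;
-- See which binary logs exist; everything except the newest one can be copied off now.
SHOW BINARY LOGS;
-- Once the copied logs are safely stored elsewhere, they can be purged on the server:
PURGE BINARY LOGS TO 'mysql-bin.000042';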
I've had to use this once to reverse the damage a hacker caused.
It is definitely worth checking out.
Please note... binary logs are USUALLY used for replication. Nothing says you HAVE to.
Adding to what Rich Adams and timdev have already suggested, write a cron job which gets triggered during a low-usage period to perform the replication/backup task as suggested, to avoid high CPU utilization.
Check mysql-parallel-dump also.
