How to restore custom rules after a SonarQube migration

A few days ago, after receiving an answer to this question, I defined two rules for forbidding the use of specific classes in our Java code. The starting point for defining them was the Squid:S3688 rule template ("Track uses of disallowed classes").
Yesterday, however, I migrated the SonarQube 6.7.2 instance installed on my Windows 10 laptop to the most recent LTS (v6.7.4). As prescribed, I installed the newer version in a different folder while reusing the existing database. Upon inspection, everything looked fine.
My problem is that the web UI of SonarQube no longer shows the two custom rules. I thought the migration had somehow removed them. But when I tried to redefine the first one, I got this message:
A rule with the key 'NO_APACHE_COMMONS_LOGGING' already exists.
I agree that the rule already exists: I created it about a week ago. But I do not see it in the UI, no matter how I search for it:
the page of the rule template does not list any custom rules implementing it,
the custom rule does not appear in the quality profile used by my team,
search results are empty.
I went to the PostgreSQL database to see whether the rule was in the rules table, and it is indeed there, just like the other rule I created, "NO_LOG4J_LOGGING". All values in the columns of the record are consistent with what I defined.
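For reference, a query along these lines is enough to confirm the rows are present (the column names are taken from the 6.7-era schema, so treat them as an assumption and adjust to your instance):

-- check that both custom rules are still present, and what status they carry
select plugin_rule_key, plugin_name, status, template_id
from rules
where plugin_rule_key in ('NO_APACHE_COMMONS_LOGGING', 'NO_LOG4J_LOGGING');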
I have set the log levels to TRACE, hoping to see the generated SQL, but I didn't learn much from the logs.
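(For anyone reproducing this: the switch lives in conf/sonar.properties; the property name below is my recollection, so verify it against the documentation for your version.)

sonar.log.level=TRACE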
My question: is there anything to do to restore/retrieve custom rules after a migration? I mean, apart from tampering with the database.

Related

Original SQL script now invalid according to Flyway

We have a Spring Boot application that has been in production for a while. We use Flyway to manage database migrations. I just upgraded from Spring Boot 2.4.5 to 2.5.4, which brings with it an upgrade to Flyway 7.7.3.
When executing all the migrations in a fresh local environment, the migration now fails due to a syntax issue with this comment:
---*********************---
-- ** AUDITING TABLES ** --
---*********************---
I imagine this won't be an issue in environments that have already executed this migration, but what is the best way to fix it for new environments with a fresh database, given that the original file cannot be edited because of the checksum comparison on migration?
My current versioning just includes a major version, i.e. V2, V3, etc. My thinking is to get rid of V2 (the script with the issue) and introduce V2.1, which would be an exact copy of V2 with the erroneous comment section removed. I would then set both ignoreMissingMigrations and ignoreIgnoredMigrations to true.
Does this sound like the right way to solve this?
Thanks in advance.
Changing the script and then executing flyway repair would be the ideal solution - this would rectify the checksums.
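If the repair route is open to you, it is a single command pointed at the same database, e.g. with the Flyway CLI (the connection details here are placeholders):

flyway -url=jdbc:postgresql://localhost:5432/mydb -user=myuser -password=secret repair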
Assuming this option is not available for some reason (it would be helpful to know what that is in case we can fix it!), the above sounds correct. ignoreMissingMigrations means your old deployments won't object to V2 not being there, and ignoreIgnoredMigrations means they won't object to V2.1 being present. The downside is that these ignores may not be valid in the longer term - so they won't, for example, catch a later script that goes missing unintentionally.
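For a Spring Boot application the two flags can be set in application.properties; a minimal sketch, assuming the Spring-managed Flyway (property names as of Spring Boot 2.5, so verify them for your version):

spring.flyway.ignore-missing-migrations=true
spring.flyway.ignore-ignored-migrations=true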

D365 Can I update systemuserid?

In our D365 online environment we have multiple sandbox and production instances. In each of these the systemuserid is different (the user import was done before I joined!!). The mismatch in SystemUserId also happens when a new user is added (my own user record, for example, which was added last week).
I know that updating systemuserid on-premises was unsupported but possible; with an online environment, what are my best options to fix this issue? With different GUIDs, all references (workflows etc.) fail when moving a solution across environments.
Coming here as my last option, as I have already googled and looked into the SDK.
Thanks,
Hardcoding data into processes is a bad practice; it makes your processes really rigid. You can create a configuration entity, stash the sysadmin id there, and retrieve it. If you have a custom workflow activity you will be able to retrieve the record and use it in every configuration task.
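As a sketch of the retrieval side, the FetchXML for pulling the stored id back out could look like this (the entity and attribute names new_configuration, new_name and new_value are invented for the example):

<fetch top="1">
  <entity name="new_configuration">
    <attribute name="new_value" />
    <filter type="and">
      <condition attribute="new_name" operator="eq" value="SysAdminUserId" />
    </filter>
  </entity>
</fetch>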
You can't update an ID at all. I usually copy my production database into all my dev environments to avoid this problem, and D365 also makes it easy to do so. You should take a moment between two sprints to do it, because it helps to have the system user IDs and entity object type codes identical everywhere.

Why would I suddenly get 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors?

We've been using Kerberos auth with several (older) Cloudera instances without a problem but are now getting 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets, which verifies that the configuration and the keytab file are correct.
However, my standalone app fails with the error mentioned.
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where it fails, and it seems to succeed when run in the debugger!!! Obviously there's something different between the environments, not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have the "DEFAULT" rule at the very end of auth_to_local.
If none of the rules before it match, at least the DEFAULT rule will kick in.
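On a Hadoop/Cloudera cluster the rules typically live in the hadoop.security.auth_to_local property in core-site.xml. A minimal sketch, with EXAMPLE.COM standing in for your realm (the first rule strips the realm from single-component principals; adjust to your naming scheme):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>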

Is there a way to suppress SQL03006 error in VS2010 database project?

First of all, I know that the error I am getting can be resolved by creating a reference project (of type Database Server) and then referencing it in my Database project...
However, I find this to be overkill, especially for small teams where there is no specific role separation between developers and DB admins... but let's leave this discussion for another time. The same goes for DACs: I can't use a DAC because of the limited set of objects it supports.
Question
Now, the question is: can I (and if so, how) disable the SQL03006 error when building my Database project? In my case this error is generated because I am creating some users whose logins are "unresolved". I think this should be possible, since I "know" that the logins will exist on the server before I deploy the script. I also don't want to maintain a database server project just to keep the references resolved (I have nothing besides logins at the server level).
Workaround
Using pre/post-deployment scripts, it is trivial to get the script working...
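For example, a post-deployment script along these lines keeps the user creation out of the schema (the user and login names are placeholders, and it assumes the login already exists on the server):

-- create the database user only if it is not already present
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'MyAppUser')
BEGIN
    CREATE USER [MyAppUser] FOR LOGIN [MyAppLogin];
END
GO
-- re-grant the permissions that had to be moved out of the .sqlpermissions file
GRANT SELECT ON SCHEMA::dbo TO [MyAppUser];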
Workaround Issue
You have to comment out the user scripts (the ones that use login references) for the workaround...
As soon as you do that, the .sqlpermissions file bombs out, saying there are no referenced users... And then you have to comment the permissions out and put them in post-deployment scripts...
The main disadvantage of this workaround is that you cannot leverage schema compare to its fullest extent (you have to tell it to ignore users/logins/permissions).
So again, all I want is
1. to maintain only DB project (no references to DB Server projects)
2. disable/suppress SQL03006 error
3. be able to use schema compare in my DB project
Am I asking for the impossible? :)
Cheers
P.S.
If someone is aware of better VS2010 database project templates/tools (for SQL Server 2008 R2) please do share...
There are two workarounds:
1. Turn off any schema checking (Tools > Options > Database Tools > Schema Compare > SQL Server 200x, then the Object Type tab) for anything user or security related. This is a permanent fix.
2. Go through the schema comparison and mark anything user or security related as Skip, and then generate your SQL compare script. This is a per-comparison fix.
It should be obvious, but if you already have scripts in your project that reference logins or roles, delete them and they won't get created.

Sitecore proxy items published, still seem to have a link to the source

On the project I am working on, there are some proxy items that were added at some point from source location A to location B. However, right now it is not possible to check the source of the proxy, and the proxy folder in B does not show anything to suggest that it is a proxy, besides the visual cue that it is grayed out.
When I analysed this article, I looked into the web.config and found this:
<proxiesEnabled>false</proxiesEnabled>
<publishVirtualItems>true</publishVirtualItems>
This seems to suggest that when the proxies were published, they were published as regular items, losing any connection to their source. I want to recreate the proxies because of some weird issues where layout settings on the template's standard values item do not propagate correctly to the proxied items. So I tried to rename the old proxy folder in order to create a new one; however, when I attempted the rename I got a modal alert with this message:
"This item occurs in other locations. If you rename it, the item will be renamed in the other locations as well. Are you sure you want to rename 'MyFoo'?"
Does this mean the item is still attached to the source?
I am using Sitecore 6.2.0 (rev. 100701)
I suppose that the settings you mentioned are for the master database. Now if you take a closer look at the article you reference, it lists two valid cases of proxy setup:
when web database also relies on proxies
when web database contains regular items only which came from publishing
Both of these cases assume that the master database has proxiesEnabled='true'. It doesn't make sense otherwise: if proxies are disabled, the rest of the mechanism doesn't work and there are no virtual items.
And I can see proxiesEnabled='false' in the example you mentioned.
I'm not sure about the message you get. But if I need to change the proxy definition, I would do the following:
make sure proxiesEnabled='false' for the web database (I guess this is your intention)
disable proxies for the master database and change the proxy definitions the way you want
set publishVirtualItems to true for the master database
turn the proxies back on for the master database
make sure the virtual items are in place and publish the site
Try this on some test environment and experiment until you get the behavior you'd like - playing with the live site is bad karma :)
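For orientation, the settings in question sit under each database node in web.config, so the end state described above would look roughly like this (a sketch of the Sitecore 6.x layout with attributes trimmed; check it against your actual file):

<databases>
  <database id="master">
    <proxiesEnabled>true</proxiesEnabled>
    <publishVirtualItems>true</publishVirtualItems>
  </database>
  <database id="web">
    <proxiesEnabled>false</proxiesEnabled>
  </database>
</databases>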
