MS Exchange 2019 Install Error - Active Directory Operation Failed on dc.domain - Directory Object Not Found - exchange-server

A completely new install of MS Exchange 2019: I installed all the prerequisites for the Mailbox role successfully, and the schema was created across the 4 domain controllers in the environment.
I had to extend the schema manually on the root domain because the Exchange server is in a child domain; however, the schema replicated successfully across the domains in the environment.
During installation on the Exchange server, setup always stops at stage 8/12 (Installation of Mailbox role). I checked the logs and found the following error messages:
***[ERROR] Active Directory operation failed on dc.domain. The error is not retriable. Additional information: Directory object not found.
Active Directory response: 0000208D: NameErr: DSID-03100288, problem 2001 (NO_OBJECT), data 0, best match of: 'DC= dc, Dc= dc, Dc= dc'
[ERROR] The object does not exist
[ERROR-REFERENCE] Id=MailboxServiceControlLast_05b3bbd421504e0c93fefa6d5d1ae590
Component=EXCHANGE14:\Current\Release\Shared\Datacenter\Setup
Setup is stopping now because of one or more critical errors
Finished executing components tasks
Ending process Install-MailboxRole***
P.S. I am in the right security groups to execute the installation (Schema Admins and Enterprise Admins).
Any help or advice will be highly appreciated!

Have you changed the Users container CN=Users,DC=domain,DC=com, or converted the object in question from type Container to an Organizational Unit? Setup expects certain objects to be of type Container, and Users is one of them.
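One way to verify is to export the object and inspect its objectClass values (on a domain controller you could use ldifde or the ActiveDirectory PowerShell module's Get-ADObject). The decision logic, sketched in Python against an illustrative LDIF snippet:

```python
def object_type(ldif_text):
    """Classify an AD object from the objectClass values in an LDIF export (illustrative)."""
    classes = [line.split(":", 1)[1].strip()
               for line in ldif_text.splitlines()
               if line.lower().startswith("objectclass:")]
    if "organizationalUnit" in classes:
        return "organizationalUnit"  # Exchange setup would choke on this for CN=Users
    if "container" in classes:
        return "container"           # what Exchange setup expects
    return "unknown"

# Sample LDIF for a healthy Users container (hypothetical child-domain DN):
sample = """dn: CN=Users,DC=child,DC=example,DC=com
objectClass: top
objectClass: container
"""
print(object_type(sample))  # container
```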


Unable to build extended domain with WLS, Forms, Reports: expected directories not created

I am in the middle of building a 4-node application layer using WLS, Oracle Forms and Oracle Reports. I have built an ADMIN node and successfully built the RCU and have run config.sh.
I fully defined the entire domain (all 4 nodes) while running config.sh. I have copied and moved the domain definition to the 2nd node using pack & unpack.
When I attempt to install and build on the 2nd node (ADMIN does not run here), and run Forms and/or Reports for the first time, many directories are automatically created.
But some I expect to be created are missing.
For example:
$DOMAIN_HOME/config/fmwconfig/components/FORMS/instances/forms2/server/
did not get created.
What step did I miss here that results in some of the necessary directories not being created?
This is because the FORMS SystemComponents are not installed in the new instance locations under <domain_name>/config/fmwconfig/components/FORMS/<instance> and the FORMS components are not carried across by the pack command.
Re-running the config wizard will allow you to install the components on the new instances.
Alternatively, the instance definitions can be added on the Admin Server with WLST in offline mode only:
readDomain('<$DOMAIN_HOME>')  # open the domain for offline editing
print('Create FORMS SystemComponent forms2')
cd('/')
create('forms2', 'SystemComponent')  # register a new SystemComponent named forms2
cd('/SystemComponent/forms2')
cmo.setComponentType('FORMS')  # mark it as a FORMS component
set('Machine', machineName)  # machineName: name of the target machine, defined beforehand
updateDomain()  # persist the changes
closeDomain()
The above only works if the Managed Server shares the domain's filesystem with the Admin Server.
Also see:
https://github.com/galiacheng/oracle-forms-reports-weblogic-on-azure#create-managed-servers-and-forms-component

Terraform and OCI: "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resources) under 5 existing db_homes. All of these resources are under a single DB System.
When applying my code, the first database is created successfully; then, when Terraform attempts to create the second one, I get the following error message: "Error: Service error:IncorrectState. The existing Db System with ID has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System status is not up to date yet (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation completed only when the creation itself is finished AND the associated db home and DB System are back to 'AVAILABLE'.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a GitHub issue about this. What you are experiencing should not happen: Terraform should retry after seeing the error. Per your GitHub post, the person helping you needs your log with timestamps to troubleshoot further. At this stage I would recommend following up there and sharing the requested info.
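As a stopgap until the provider handles this, creations can be serialized outside Terraform, waiting for the DB System to leave UPDATING between steps. A minimal sketch of such a wait loop, where get_state is a hypothetical callable standing in for an OCI SDK lookup of the DB System lifecycle state:

```python
import time

def wait_until_available(get_state, timeout_s=1800, poll_s=10):
    """Poll get_state() until it returns 'AVAILABLE' or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == "AVAILABLE":
            return True
        time.sleep(poll_s)
    return False

# Simulated example: the DB System reports UPDATING twice, then AVAILABLE.
states = iter(["UPDATING", "UPDATING", "AVAILABLE"])
print(wait_until_available(lambda: next(states), poll_s=0))  # True
```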

How to overcome error 400 in Watson Discovery Upload Data

I am new to IBM cloud. I deleted my Watson Discovery service by mistake. Afterwards, I re-created a new service and there was no issue. But when I try to upload data to Watson Discovery, I'm given error 400 "Only one free environment is allowed per resource group". I'm on the Lite plan.
Any help?
Log in to your IBM Cloud account, go to https://cloud.ibm.com/shell, and run the following commands:
ibmcloud resource reclamations
The command above lists all resource reclamations under your account. To find out which resource to delete, check the Entity CRN and copy its ID, then use the command below to delete the resource:
ibmcloud resource reclamation-delete [ID] --force
Replace [ID] with the ID of the resource to delete.
Maybe it is too late, but I found some information at this link: https://cloud.ibm.com/docs/discovery?topic=discovery-gs-api.
It mentions: "If you have recently deleted a Lite instance and then receive a 400 - Only one free environment is allowed per resource group error message when creating a new environment in a new Lite instance, you need to finish deleting the original Lite instance. See ibmcloud resource reclamations and follow the reclamation-delete instructions."
Further information can be gathered here: https://cloud.ibm.com/docs/cli?topic=cloud-cli-ibmcloud_commands_resource#ibmcloud_resource_reclamations
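If many reclamations are listed, pulling the IDs out of the table can be scripted; a sketch that assumes the ID is the first whitespace-separated column of the `ibmcloud resource reclamations` output (the sample row is illustrative):

```python
def reclamation_ids(table_text):
    """Extract the first column (the reclamation ID) from a CLI table, skipping the header."""
    lines = [l for l in table_text.strip().splitlines() if l.strip()]
    return [line.split()[0] for line in lines[1:]]

# Illustrative output shape of `ibmcloud resource reclamations`:
sample = """ID        Resource Instance ID   Entity CRN                                State
abc-123   inst-1                 crn:v1:bluemix:public:discovery:us-south   SCHEDULED
"""
print(reclamation_ids(sample))  # ['abc-123']
```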

Search fails after upgrade to TFS 2018 Update 2

After performing an upgrade of a TFS server to 2018 Update 2, search and indexing seem to be broken in one of our environments.
Any search gives "We encountered an unexpected error when processing your request", and I have worked through all the troubleshooting docs to clean and reindex all collections. I also completely reinstalled the search package on the separate server we run for search and indexing, to make sure we have the correct version running.
In the event logs on the TFS App Server a large number of these exceptions are logged:
Events (81277) completed with status FailedAndRetry. Event 81277
completed with message 'BeginBulkIndex-PushEventNotification: The
operation did not complete successfully because of exception
Microsoft.VisualStudio.Services.Search.Common.FeederException: Lots of
files rejected by Elasticsearch, failing this job. Failure Reason :
Microsoft.VisualStudio.Services.Search.Common.SearchPlatformException:
ES Exception: [HTTP Status Code: [200] BULK_API_ERROR: [ index
returned 404 _index: codesearchshared_1_0 _type:
SourceNoDedupeFileContractV3 _version: 0 error: Type:
type_missing_exception Reason: "type[SourceNoDedupeFileContractV3]
missing"
Another exception type is also logged many times, indicating failure to index work items:
Microsoft.VisualStudio.Services.Search.Common.SearchPlatformException:
ES Exception: [HTTP Status Code: [200] BULK_API_ERROR: [ update
returned 404 _index: workitemsearchshared_0_2 _type: workItemContract
_version: 0 error: Type: type_missing_exception Reason: "type[workItemContract] missing" update returned 404 _index:
workitemsearchshared_0_2 _type: workItemContract _version: 0 error:
Type: type_missing_exception Reason: "type[workItemContract] missing"
The exceptions seem to indicate that some type registrations are missing, such as workItemContract and SourceNoDedupeFileContractV3, but I cannot find any errors in the search server installation logs.
Does anyone have suggestions on how to solve this and get Elasticsearch back into a working state?
We resolved the situation by completely uninstalling and then reinstalling everything related to search:
Uninstalled all Code/Work/Wiki search extensions from all collections via extension management in the web admin
Removed the TFS Search Service feature from the TFS Admin Console
Uninstalled the Elasticsearch service from the separate search server, using the PowerShell script .\Configure-TFSSearch.ps1 -Operation remove
Restarted the TFS Job Agent service
Deleted old Search-related content from ALL collection databases using:
DELETE FROM [Search].[tbl_IndexingUnit]
DELETE FROM [Search].[tbl_IndexingUnitChangeEvent]
DELETE FROM [Search].[tbl_IndexingUnitChangeEventArchive]
DELETE FROM [Search].[tbl_JobYield]
DELETE FROM [Search].[tbl_TreeStore]
DELETE FROM [Search].[tbl_DisabledFiles]
DELETE FROM [Search].[tbl_ItemLevelFailures]
DELETE FROM [Search].[tbl_ResourceLockTable]
Restarted the TFS Job Agent service again
Rebooted the search server
Ran the Configure Search feature wizard from the TFS Admin Console, using the existing search server
Installed the search package according to the instructions, via PowerShell:
.\Configure-TFSSearch.ps1 -Operation install -TFSSearchInstallPath D:\ES -TFSSearchIndexPath D:\ESDATA -Port 9200 -Verbose
Completed the Search configuration wizard from the TFS Admin Console, enabling code search for all existing collections
Checked that the services were running and tested searching from the web application: it works!
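The database cleanup step above repeats the same DELETE across eight tables; a tiny generator (table names taken from the steps above) can emit the script to run against each collection database:

```python
# Search-related tables cleared during the reinstall (from the steps above).
SEARCH_TABLES = [
    "tbl_IndexingUnit", "tbl_IndexingUnitChangeEvent",
    "tbl_IndexingUnitChangeEventArchive", "tbl_JobYield",
    "tbl_TreeStore", "tbl_DisabledFiles",
    "tbl_ItemLevelFailures", "tbl_ResourceLockTable",
]

statements = [f"DELETE FROM [Search].[{t}]" for t in SEARCH_TABLES]
print("\n".join(statements))
```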
According to your error info and TFS version, this issue is similar to Unable to start search after upgrade to TFS 2018 Update 2.
Try the solution from that question:
It seemed that I had an invalid/problematic setting in the following registry key that the update/install did not fix:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\elasticsearch-service-x64\Parameters\Java\Options
The value contained:
-Xms0m
-Xmx0m
Changing both from '0m' to '1g' fixed the issue. As far as I understand, '0m' defaults to 4 GB, which might collide with my server only having 2.8 GB of RAM. I will upgrade the server to follow the minimum requirements.
Maybe the configuration tool could warn about this problem or set the value to something that is possible.
Also take a look at this article, which maybe helpful: Elasticsearch 6.2.2 fails to run as a windows service
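The heap values above can be sanity-checked with a small script; a sketch (the regex and unit table are my own and only cover the k/m/g suffixes seen here):

```python
import re

def parse_heap(opt):
    """Parse a JVM heap option like '-Xmx1g' into bytes; return None if it doesn't match."""
    m = re.fullmatch(r"-Xm[sx](\d+)([kmg])", opt)
    if not m:
        return None
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    return int(m.group(1)) * units[m.group(2)]

# Flag the problematic zero-heap values from the registry key above:
for opt in ("-Xms0m", "-Xmx0m", "-Xmx1g"):
    size = parse_heap(opt)
    print(opt, "suspicious (zero heap)" if size == 0 else "ok")
```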
[This post refers to an Azure DevOps Server 2019 to 2020 upgrade]
I also got
"We encountered an unexpected error when processing your request"
for any search. This was after migrating Azure DevOps Server 2019 to 2020 on a new box (I was unable to perform the upgrade on the same box).
http://localhost:9200/_cat/health?v showed the only cluster, TFS_Search_AZURE-DEVOPS, in red status.
In my particular case everything runs on the same box: Windows Server 2019 + SQL Server 2019 + the Search service.
I followed Per Salmi's instructions, but unfortunately that did not resolve the issue.
The solution in my case was rebuilding the Elasticsearch indexes (details and scripts):
Reindex repo
To reindex a Git or TFVC repository, execute the appropriate version of the script Re-IndexingCodeRepository.ps1 for your Azure DevOps Server or TFS version with administrative privileges. You're prompted to enter: [my values in brackets]
The SQL server instance name where the Azure DevOps Server or TFS configuration database is [azure-devops]
The name of the Azure DevOps Server or TFS collection database [Tfs_DefaultCollection]
The name of the Azure DevOps Server or TFS configuration database [Tfs_Configuration]
The type of reindexing to execute. Can be one of the following values: Git_Repository TFVC_Repository [Git_Repository]
The name of the collection [DefaultCollection]
The name of the repository to reindex. [your repo name here]
Reindex collection
To reindex a collection, execute the script TriggerCollectionIndexing.ps1 with administrative privileges. You're prompted to enter: [my values in brackets]
The SQL server instance name where the Azure DevOps Server or TFS configuration database is [azure-devops]
The name of the Azure DevOps Server or TFS collection database [Tfs_DefaultCollection]
The name of the Azure DevOps Server or TFS configuration database [Tfs_Configuration]
The name of the collection [DefaultCollection]
The entities to reindex. Can be one of the following values: All Code WorkItem Wiki [All]
The scripts above queue reindexing jobs that typically take a few minutes to complete.
Right after execution I was getting an encouraging "We are not able to show results because one or more projects in your organization are still being indexed".
After a few minutes, results started to come in, and http://localhost:9200/_cat/health?v showed the only cluster, TFS_Search_AZURE-DEVOPS, in green status.
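The `_cat/health?v` check used here returns a small whitespace-separated table; pulling out the status column can be scripted. A sketch with an illustrative sample response (real responses include more columns, but the header/row pairing works the same way):

```python
def cluster_status(cat_health_text):
    """Return the 'status' column from an Elasticsearch _cat/health?v response."""
    header, row = [line.split() for line in cat_health_text.strip().splitlines()[:2]]
    return dict(zip(header, row))["status"]

# Illustrative sample of what http://localhost:9200/_cat/health?v returns:
sample = """epoch      timestamp cluster                 status node.total node.data shards
1593500000 07:33:20  TFS_Search_AZURE-DEVOPS green  1          1         5
"""
print(cluster_status(sample))  # green
```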

TFS 2017 search internal server error

We have a multi-tier TFS 2017 Update 3 deployment.
Recently I moved Elasticsearch from one of the application tiers to another server.
What I did:
Deleted the Elasticsearch service using cmd: sc delete servicename
Deleted the IndexStore folder
Deleted these tables
[Search].[tbl_IndexingUnit]
[Search].[tbl_IndexingUnitChangeEvent]
[Search].[tbl_IndexingUnitChangeEventArchive]
[Search].[tbl_JobYield]
[Search].[tbl_TreeStore]
[Search].[tbl_DisabledFiles]
[Search].[tbl_ResourceLockTable]
from all collection DBs
Installed Elasticsearch on another server (actually one of the SQL servers of our TFS instance)
Checked that the service is available at the URL http://SearchUrl:9200
Set up the search feature on both ATs to use http://SearchUrl:9200
Tried to search for something.
Result:
1. Search returns "There was a problem processing your request. Internal Server Error".
2. The index folder is only 36.5 KB.
3. In the Windows event logs there is an unhandled exception:
Assembly: Microsoft.TeamFoundation.Framework.Server, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v4.0.30319
Service Host: 9025d248-2b1b-48a7-bb43-8abf779eeeaa (Development)
Process Details:
Process Name: w3wp
Process Id: 2580
Thread Id: 4688
Account name: Username
Detailed Message: TF30065: An unhandled exception occurred.
How can I fix it?
According to the error message "TF30065: An unhandled exception occurred.", it seems similar to this issue: Search Code and Work Items does not work after upgrade to 2017 Update 2.
Based on David Jansen [MSFT]'s comments, you can try calling the re-index scripts manually to fix it. The solution is also mentioned in the link Resetting Search Index in Team Foundation Server provided by Daniel's comment above.
There is a dedicated "code search server". I used the latest Code Search Tools scripts (TFS2017.2) and did a WipeAndRestart, followed by manually calling the Re-index scripts. This did the trick.
More related information you can reference below articles:
Manage Search in Team Foundation Server
Re-index a repository or collection
Well, now it works.
What I did:
1) Restored all the deleted tables :)
2) Set up Elasticsearch on a separate server
3) Removed the search feature from the AT servers
4) Deleted all data from the tables
[Search].[tbl_IndexingUnit]
[Search].[tbl_IndexingUnitChangeEvent]
[Search].[tbl_IndexingUnitChangeEventArchive]
[Search].[tbl_JobYield]
[Search].[tbl_TreeStore]
[Search].[tbl_DisabledFiles]
[Search].[tbl_ResourceLockTable]
5) Removed all search extensions
6) Installed the search feature and configured it to use the Elasticsearch server
7) Waited for indexing
Actually, I noticed that the "unhandled exception" in search disappeared after indexing finished.
