Exchange Server 2013 Database Mounting Error

I got an error message in Exchange Server: Microsoft.Exchange.Data.Storage.MailboxOfflineException.
So I checked my Exchange Server database status, and it shows Active and Dismounted.
I am trying to mount the database, but the mount command fails and
I get the error message below. How do I solve it?
Failed to mount database "Mailbox Database 0389974439". Error: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionDatabaseError: Unable to mount database. (hr=0x80004005, ec=1108) Diagnostic context: Lid: 65256 Lid: 10722 StoreEc: 0x454 Lid: 1494 ---- Remote Context Beg ---- Lid: 45120 dwParam: 0x2649A2 Lid: 57728 dwParam: 0x264A8D Lid: 46144 dwParam: 0x2650B7 Lid: 34880 dwParam: 0x2650B7 Lid: 34760 StoreEc: 0xFFFFFC06 Lid: 41344 Guid: b1f22c9b-393c-41a5-bc7c-0fc65a5dd783 Lid: 35200 dwParam: 0x3D40 Lid: 46144 dwParam: 0x265654 Lid: 34880 dwParam: 0x265654 Lid: 54472 StoreEc: 0x1388 Lid: 42184 StoreEc: 0x454 Lid: 1750 ---- Remote Context End ---- Lid: 1047 StoreEc: 0x454 [Database: Mailbox Database 0389964565, Server: xxxx.mydomain.com]
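For reference, this is how I was checking the database copy status (the database name is the one from the error above):
Get-MailboxDatabaseCopyStatus -Identity "Mailbox Database 0389974439" | Format-List Name,Status,ContentIndexState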

This page is the source of the answer, but I'll copy it here just in case...
First, check with eseutil /mh whether the database is in a clean shutdown state; if it is not, run eseutil with the /p switch to bring it to a clean shutdown.
Once it is in a clean shutdown state, try moving the checkpoint and log files to a temporary location and mount the database to see whether it works.
Also check how many active databases the server is allowed to mount:
Get-MailboxServer | select MaximumActiveDatabases
If the value shows as 0, then you need to change that:
Get-MailboxServer | Set-MailboxServer -MaximumActiveDatabases 1
The other thing you can do is try restarting the Microsoft Exchange Information Store service. The error (hex) above shows a DB lock, but that could be either the bug from RTM (if you are unpatched) or something else locking it. Try that and see if it works.
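Put together, the sequence looks roughly like this (a sketch only: the .edb path and the E00 log prefix are placeholders for your own environment, and take a backup before running eseutil /p, since a hard repair can discard data):
eseutil /mh "D:\Databases\Mailbox Database 0389974439\Mailbox Database 0389974439.edb"   # look for State: Clean Shutdown vs. Dirty Shutdown
eseutil /p "D:\Databases\Mailbox Database 0389974439\Mailbox Database 0389974439.edb"    # only if the state is Dirty Shutdown
# move E00*.log and E00.chk out of the log folder to a temporary location, then:
Mount-Database "Mailbox Database 0389974439"
Restart-Service MSExchangeIS   # if the database still appears locked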

There are five possible causes of the "unable to mount database (hr=0x80004005, ec=1108)" error:
Missing Exchange log files
“Dirty Shutdown” state
Corrupt Exchange database files
Information Store unable to start
Not enough free disk space on the database volume (a quick check is sketched below)
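A quick way to check the last point from PowerShell:
Get-Volume | Format-Table DriveLetter, FileSystemLabel, SizeRemaining, Size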
Please refer to the following article to resolve the "unable to mount database (hr=0x80004005, ec=1108)" error: https://www.linkedin.com/pulse/how-resolve-unable-mount-database-hr0x80004005-ec1108-shelly-bhardwaj


Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference

Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference: QueryBlockConfig failed: QueryBlockConfig failed: queryChaincode failed: Transaction processing for endorser [peer-node-endpoint]: Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection on target [peer-node-endpoint]: connection is in TRANSIENT_FAILURE
Getting this error when trying to connect fabric-sdk-go with the network using connection-profile.yaml in Hyperledger Fabric.
NOTE: the chaincode is deployed and works just fine when I invoke a transaction from the terminal, so no doubts on that side.
I saw the same problem posted on Stack Overflow already, but that is outdated, as Hyperledger Fabric v2.2 changes a lot compared to v1.
Here is my connection profile from the fabric-samples test-network. (The only difference is that I gave the path of the TLS cert file instead of pasting the private key.)
Here is the connection-profile.yaml of the test-network, which works fine on the local machine:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      hostnameOverride: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
But if I just change the peer's (logical) name, i.e. from peer0.org1.example.com to peer0.org1.com (just removing the word "example" from the name), it gives the same error I posted above.
So I am wondering why this is the case, because the Hyperledger Fabric connection-profile documentation says this name is just a logical name and nothing else; we can give it any name, and all that matters is the (peer/CA) URL endpoint.
And the new connection-profile looks like this:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.com
    certificateAuthorities:
      - ca.org1.example.com
peers:
  peer0.org1.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.com
      hostnameOverride: peer0.org1.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
And in docker logs it shows this error:
peer0.org1.example.com|2022-10-03 06:47:51.442 UTC 0c95 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 3.5334ms with error remote error: tls: bad certificate server=PeerServer remoteaddress=172.20.0.1:61090
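For anyone reproducing this: one way to see which hostnames the peer's TLS certificate actually covers is to dump its Subject Alternative Names (the path below assumes the default fabric-samples test-network layout):
openssl x509 -noout -text \
  -in organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt \
  | grep -A1 "Subject Alternative Name"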
Now, the path to the cert file is correct; that I am sure about.
So can anyone guide me as to why changing just the peer's (logical) name locally hits this error?
If someone wants to reproduce the same case, it can also be produced locally (by running the test-network from fabric-samples).
And as far as the code is concerned, it is the same as in fabric-samples; I am just trying to run it using connection-profile.yaml (the assetTransferBasic case).
If anyone needs more detail, I will be available.

How to use gpload utility?

I have the YAML control file below:
---
VERSION: 1.0.0.1
DATABASE: xxx
USER: xxx
HOST: xxx
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        LOCAL_HOSTNAME:
          - 192.168.0.21
        PORT: 8081
        FILE:
          - /home/root/test_input.txt
    - COLUMNS:
        - age: int4
        - name: varchar
        - surname: varchar
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 2
    - LOG_ERRORS: true
  OUTPUT:
    - TABLE: sf_dfs.test_gpload
    - MODE: INSERT
  PRELOAD:
    - REUSE_TABLES: true
But I receive an error: error when connecting to gpfdist http://192.168.0.21:8081//home/root/test_input.txt, quit after 11 tries (seg0 slice1 192.168.0.23:6000 pid=2021845)
encountered while running INSERT INTO
Does anybody have experience with this program?
Looks like it is a port issue. If the database is up, then please rerun the job with a different port. Ensure that the firewall is not blocking this port.
A couple of questions:
Are you running gpload as root? root generally does not have access permissions to the database; it needs to be run as gpadmin or a superuser.
The input file is in /home/root. If you are running as gpadmin, can gpadmin access this file? What are the permissions on the file?
Finally, does the target table (sf_dfs.test_gpload) exist in the database? Was it created and distributed across all segments? The error would seem to indicate the table is not there.
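If the table is indeed missing, a minimal definition matching the COLUMNS section of your control file would be something like the following (the schema must already exist, and the distribution key here is an arbitrary choice for illustration); the job can then be rerun with gpload -f <control-file>.yml as gpadmin:
CREATE TABLE sf_dfs.test_gpload (
    age     int4,
    name    varchar,
    surname varchar
) DISTRIBUTED BY (name);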

Microsoft Orleans in kubernetes StatefulSet POD crashes after several restart

Microsoft Orleans v3.4.3
Consul Clustering
Running in K8S
siloBuilder
    .UseConsulClustering(opt =>
    {
        opt.Address = new Uri(AppConfig.Orleans.ConsulUrl);
        opt.AclClientToken = AppConfig.Orleans.AclClientToken;
    })
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = AppConfig.Orleans.ClusterID;
        options.ServiceId = AppConfig.Orleans.ServiceID;
    })
    .UseKubernetesHosting();
I configured the labels and environment variables for my POD according to the docs:
- name: ORLEANS_SERVICE_ID # Required by Orleans
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['orleans/serviceId']
- name: ORLEANS_CLUSTER_ID # Required by Orleans
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['orleans/clusterId']
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['statefulset.kubernetes.io/pod-name']
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
It is a StatefulSet with only 1 POD for testing.
On initial startup, it works well.
However, every time I restart the POD, a new entry is created in Consul.
And it crashes on the subsequent startup.
The log says:
System.AggregateException: One or more errors occurred. (Failed to get ping responses from 1 of 1 active silos. Newly joining silos validate connectivity with all active silos that have recently updated their 'I Am Alive' value before joining the cluster. Successfully contacted: []. Failed to get response from: [S10.18.123.218:11111:361110184])
---> Orleans.Runtime.MembershipService.OrleansClusterConnectivityCheckFailedException: Failed to get ping responses from 1 of 1 active silos. Newly joining silos validate connectivity with all active silos that have recently updated their 'I Am Alive' value before joining the cluster. Successfully contacted: []. Failed to get response from: [S10.18.123.218:11111:361110184]
at Orleans.Runtime.MembershipService.MembershipAgent.ValidateInitialConnectivity()
at Orleans.Runtime.MembershipService.MembershipAgent.BecomeActive()
at Orleans.Runtime.MembershipService.MembershipAgent.<>c__DisplayClass26_0.<<Orleans-ILifecycleParticipant<Orleans-Runtime-ISiloLifecycle>-Participate>g__OnBecomeActiveStart|6>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Orleans.Runtime.SiloLifecycleSubject.MonitoredObserver.OnStart(CancellationToken ct)
at Orleans.LifecycleSubject.OnStart(CancellationToken ct)
at Orleans.Runtime.Scheduler.AsyncClosureWorkItem.Execute()
at Orleans.Runtime.Silo.StartAsync(CancellationToken cancellationToken)
at Orleans.Hosting.SiloHost.StartAsync(CancellationToken cancellationToken)
at Orleans.Hosting.SiloHostedService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at UBS.OrleansServer.EntryPoint.Start() in /app/UBS/OrleansServer/EntryPoint.cs:line 102
--- End of inner exception stack trace ---
I have to remove all the entries in Consul and then restart the POD; then everything works fine.
The POD_NAME is the same for the StatefulSet's POD, so is it correct that each POD restart creates a new entry in Consul?
What could be the cause?
Thanks in advance
UPDATE
After several rounds of crashes and restarts, it finally does not crash any more, and in the log I see the following message:
ProcessTableUpdate (called from DeclareDead) membership table: 5 silos, 1 are Active, 4 are Dead, Version=<31, 28123>. All silos: [SiloAddress=S10.18.123.244:11111:361163684 SiloName=ubs-job-dev-0 Status=Active, SiloAddress=S10.18.123.200:11111:361158057 SiloName=ubs-job-dev-0 Status=Dead, SiloAddress=S10.18.123.210:11111:361161905 SiloName=ubs-job-dev-0 Status=Dead, SiloAddress=S10.18.123.217:11111:361157424 SiloName=ubs-job-dev-0 Status=Dead, SiloAddress=S10.18.123.244:11111:361163558 SiloName=ubs-job-dev-0 Status=Dead]
The SiloName never changes and there is only one POD in the StatefulSet, but it sees 5 silos, 4 of which are dead. It seems each new POD, even when the pod name does not change, is seen as a new silo. Is that expected?
(Failed to get ping responses from 1 of 1 active silos.
Newly joining silos validate connectivity with all active silos that have recently updated their 'I Am Alive' value before joining the cluster.
Successfully contacted: []. Failed to get response from: [S10.18.123.218:11111:361110184])
Looks like your membership table (in Consul) thinks that you already have active silos in it. When your 'new' silo comes up and looks in the membership table, it sees those silos listed as active at the table's IP addresses.
To keep the cluster correct, a newly joining silo must be able to communicate with the existing silos. However, if the membership table is stale (IP addresses still marked with status 3/active), then you have a problem: the new silo tries to ping those "active" silos, cannot reach them, and therefore fails to join and fails fast itself.
You have a couple of solutions:
clear the Consul table when deploying your solution
change the deployment id on every deployment (a sketch follows below)
You obviously found the first solution (clearing the table).
See also: silo lifecycle
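A sketch of the second option with UseKubernetesHosting: the ClusterId is taken from the orleans/clusterId pod label (that is what the ORLEANS_CLUSTER_ID fieldRef in the manifest above reads), so stamping that label per rollout gives every deployment a fresh membership table. The label values below are hypothetical examples:
metadata:
  labels:
    orleans/serviceId: ubs-job
    orleans/clusterId: ubs-job-dev-20221003   # change this value on every deployment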

gluster_volume module in Ansible

Requesting your help with the following issue.
I am writing a highly available LAMP app on Ubuntu 14.04 with Ansible (on my home lab). All the tasks execute successfully up to the GlusterFS installation; however, creating the GlusterFS volume has been a challenge for me for a week. If I use the command module, the GlusterFS volume is created:
- name: Creating the Gluster Volume
  command: sudo gluster volume create var-www replica 2 transport tcp server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/data/glusterfs/var-www/brick02/brick
But if I use the gluster_volume module, I get an error. The task is:
- name: Creating the Gluster Volume
  gluster_volume:
    state: present
    name: var-www
    bricks: /server01-private:/data/glusterfs/var-www/brick01/brick,/server02-private:/data/glusterfs/var-www/brick02/brick
    replicas: 2
    transport: tcp
    cluster:
      - server01-private
      - server02-private
    force: yes
  run_once: true
The error is
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume add-brick var-www replica 2 server01-private:/server01-private:/data/glusterfs/var-www/brick01/brick server01-private:/server02-private:/data/glusterfs/var-www/brick02/brick server02-private:/server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/server02-private:/data/glusterfs/var-www/brick02/brick force) command (rc=1): internet address 'server01-private:/server01-private' does not conform to standards\ninternet address 'server01-private:/server02-private' does not conform to standards\ninternet address 'server02-private:/server01-private' does not conform to standards\ninternet address 'server02-private:/server02-private' does not conform to standards\nvolume add-brick: failed: Host server01-private:/server01-private is not in 'Peer in Cluster' state\n"
}
May I know the mistake I am committing?
The bricks: declaration of the Ansible gluster_volume module requires only the path of the brick. The nodes participating in the volume are identified by cluster:.
The <hostname>:<brickpath> format is required for the gluster command line; however, when you use the Ansible module, it is not required.
So your task should be something like:
- name: Creating the Gluster Volume
  gluster_volume:
    name: 'var-www'
    bricks: '/data/glusterfs/var-www/brick01/brick,/data/glusterfs/var-www/brick02/brick'
    replicas: '2'
    cluster:
      - 'server01-private'
      - 'server02-private'
    transport: 'tcp'
    state: 'present'
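After the play has run, the volume can be verified on either node with:
gluster volume info var-www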

Can't start Auditbeat

Hi, I am using the ELK stack version 7.1.1 with X-Pack installed, and I'm trying to configure and set up Auditbeat, but it shows the following error on startup:
ERROR instance/beat.go:916 Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
My auditbeat.yml config:
auditbeat.modules:
- module: auditd
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
- module: file_integrity
  paths:
    - /bin
    - /usr/bin
    - /sbin
    - /usr/sbin
    - /etc
- module: system
  datasets:
    - host
    - login
    - package
    - process
    - socket
    - user
  state.period: 12h
  user.detect_password_changes: true
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "localhost:5601"
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "mypassword"
Please help me solve it.
I would assume you have launched Auditbeat under an unprivileged user.
Because Auditbeat has to interact with auditd, most of its activities need to be performed by root (at least, root rights solved the same issue in my case).
PS: if you can't switch to root, try this:
link
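If running as root is acceptable, a quick way to confirm that permissions are the only problem is to start the beat once in the foreground as root (assuming the default config path of a package install):
sudo auditbeat -e -c /etc/auditbeat/auditbeat.yml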
