VS2010 Database project deploy - "SqlDeployTask" task failed unexpectedly, NullReferenceException - visual-studio-2010

I have a solution in Visual Studio 2010 with a number of SQL Server 2008 database projects. When I try a 'Deploy Solution' I get the following error for one of the database projects:
------ Deploy started: Project: MyDBProj, Configuration: Sandbox Any CPU ------
C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
Done executing task "SqlDeployTask" -- FAILED.
Done building target "DspDeploy" in project "MyDBProj.dbproj" -- FAILED.
Done executing task "CallTarget" -- FAILED.
Done building target "DBDeploy" in project "MyDBProj.dbproj" -- FAILED.
Done building project "MyDBProj.dbproj" -- FAILED.
Does anybody know what could be causing this?
My projects are configured to create the deployment script and run it against the target database.
I've tried dropping the target database and creating an empty database before running the deploy.
I've tried 'cleaning' the solution in Visual Studio.

Tom,
I've documented a workaround (and a very easy one at that) here: http://sqlblog.com/blogs/jamie_thomson/archive/2011/11/21/workaround-for-datadude-deployment-bug.aspx

I encountered a similar NullReferenceException (attached below) in the following scenario:
I edited the definition of partition scheme PS1 in my SQL database project, along with all the tables that were using it (T1, T2, T3).
However, my database contained an old table (T_old) that was no longer defined in the code but had never been deleted from the database (it was no longer used, but a dacpac doesn't remove objects for you; it only changes or adds them). This old table used the same partition scheme, but the dacpac had no reference to its definition, so it was unable to back up T_old in order to change PS1.
You can hit similar issues whenever objects you have changed are still referenced by old objects that are no longer defined in code.
To solve this, check the old dependencies of the object and delete them. Try to change things only in code rather than mixing in ad-hoc changes in the database.
I hope that helps, since the error message doesn't give much explanation.
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeploymentPlanGenerator.DeploymentScriptDomGenerator.UnbindTableDatamotion(SqlTable sourceTable, SqlTable targetTable, Boolean unbindPartitionScheme, HashSet`1 unboundColumns)
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeploymentPlanGenerator.DeploymentScriptDomGenerator.GenerateUnbindTableSteps(SqlTable sourceTable, SqlTable targetTable)
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeploymentPlanGenerator.DeploymentScriptDomGenerator.GenerateSteps(Int32 operation, IModelElement element)
...
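For the partition-scheme scenario above, a query along these lines (a sketch; PS1 is the example scheme name from my case) can list the objects in the target database that are still bound to the scheme, which is where the stale table showed up for me:

```sql
-- Sketch: find tables/indexes still bound to partition scheme PS1 (example name)
SELECT OBJECT_NAME(i.object_id) AS bound_object,
       i.name                   AS index_name
FROM sys.indexes AS i
JOIN sys.partition_schemes AS ps
    ON i.data_space_id = ps.data_space_id
WHERE ps.name = N'PS1';
```

Any row naming an object that no longer exists in the project source is a candidate for the stale dependency described above.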

I've been able to reproduce this error with a test database project containing a single inline function, as follows:
CREATE FUNCTION [dbo].[Function1]
()
RETURNS TABLE
AS
RETURN (
    WITH cte AS (
        SELECT 1 AS [c1]
        FROM [$(Database3)].[dbo].[Table1]
    )
    SELECT 1 AS [c1]
    FROM cte
)
$(Database3) is a database variable that references another database project's .dbschema file. That dbschema file contains a single table, [Table1].
It seems that you need an inline function with a CTE that contains a reference to another database using a database variable. Additionally, the function must already exist on the target database.
You may get the following error under some circumstances (e.g. the inline function doesn't use a CTE):
------ Deploy started: Project: Database2, Configuration: Debug Any CPU ------
Database2.dbschema(0,0): Warning TSD00560: If this deployment is executed, changes to [dbo].[Function2] might introduce run-time errors in [dbo].[Procedure1].
Deployment script generated to:
C:\temp\Database2\sql\debug\Database2.sql
Altering [dbo].[Function2]...
C:\temp\Database2\sql\debug\Database2.sql(74,0): Error SQL01268: .Net SqlClient Data Provider: Msg 208, Level 16, State 1, Procedure Function2, Line 9 Invalid object name 'Database3.dbo.Table1'.
An error occurred while the batch was being executed.
Done executing task "SqlDeployTask" -- FAILED.
Done building target "DspDeploy" in project "Database2.dbproj" -- FAILED.
Done executing task "CallTarget" -- FAILED.
Done building target "DBDeploy" in project "Database2.dbproj" -- FAILED.
Done building project "Database2.dbproj" -- FAILED.
Build FAILED.
So the only workaround seems to be to drop the function in the target before the deployment.
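Assuming a pre-deployment script is available (VS2010 database projects support a Script.PreDeployment.sql), a minimal guard along these lines would drop the function before deployment so it is recreated cleanly; Function1 is the example name from the repro above:

```sql
-- Pre-deployment guard (sketch): drop the inline function if it already
-- exists on the target so the deployment recreates it.
IF OBJECT_ID(N'[dbo].[Function1]', N'IF') IS NOT NULL
    DROP FUNCTION [dbo].[Function1];
```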
I'll raise a Microsoft Connect issue.
UPDATE
I've created a Connect issue - https://connect.microsoft.com/VisualStudio/feedback/details/693158/vs2010-database-project-deploy-sqldeploytask-task-failed-unexpectedly-nullreferenceexception

Related

Error while trying to obtain information from source and destination in table RSBASIDOC

There is a package that is used to extract data from SAP systems to SQL Server, but we are getting the error below while running the package:
Error while trying to obtain information from source and destination in table RSBASIDOC
I was not able to understand why I got this error or what its cause is. Please help me address the issue.
Thanks in advance.
Full log:
7/5/2021 2:04:43 PM Data Flow Task BW -> SQL: Error
XtractKernel.Extractors.XtractException: Errors occurred during extraction
at XtractIS.XtractSourceDeltaQ.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
7/5/2021 2:04:43 PM Data Flow Task BW -> SQL: Error: [2021-07-05T14:04:43.336+02:00]
XtractKernel.Extractors.XtractException: Error while trying to obtain information about source and destination in table RSBASIDOC. This is an indication that the customizing in SAP is not done properly. ---> XtractKernel.Extractors.XtractException: Found != 1 rows in table RSBASIDOC
at XtractKernel.Extractors.DeltaQDefinition.GetSystemParameters(R3Connection connection)
--- End of inner exception stack trace ---
at XtractKernel.Extractors.DeltaQDefinition.GetSystemParameters(R3Connection connection)
at XtractKernel.Extractors.DeltaQExtractor.Extract()
at XtractKernel.Extractors.ExtractorBase`1.Extract(ProcessResultCallback processResult, LoggerBase logger)
at XtractIS.XtractSourceDeltaQ.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)

ElasticSearch randomly fails when running tests

I have a test Elasticsearch box (2.3.0), and my tests that use ES fail in random order, which is really frustrating (they fail with an "All shards failed" exception).
Looking at the elastic_search.log file it only showed me this
[2017-05-04 04:19:15,990][DEBUG][action.search.type ] [es-testing-1] All shards failed for phase: [query]
RemoteTransportException[[es-testing-1][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];
Caused by: [derp_test][[derp_test][3]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]
at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:993)
at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:814)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:641)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:369)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Any idea what's going on? So far my research suggests this is most likely due to a corrupt translog, but I don't think deleting the translog will help, because the test drops the test index for every namespace.
The ES test box has 3.5 GB RAM and uses a 2.5 GB heap; CPU usage is quite normal during the test (peaking at 15%).
To clarify: by failing tests I mean errors with the weird exception mentioned above, not tests failing due to incorrect values. I do a manual refresh after every insert/update operation, so the values are correct.
After investigating the Elasticsearch log file (at DEBUG level) and the source code, it turns out that after an index is created its shards enter the RECOVERING state, and sometimes my test tried to query Elasticsearch while the shards were not yet active, hence the exception.
The fix is simple: after creating an index, wait until its shards are active using the setWaitForActiveShards function. To be more paranoid I also added setWaitForYellowStatus.
I'd recommend using ESIntegTestCase for integration tests.
ESIntegTestCase has helper methods such as ensureGreen and refresh to ensure that Elasticsearch is ready to continue testing, and it lets you configure node settings for the test.
If you use Elasticsearch directly as a test box, it can cause various problems:
like your exception, where the shards for the index derp_test are apparently still recovering;
even after you have indexed your data, an immediate search can fail, since the cluster still needs a flush or refresh;
...
Most of these problems can be papered over by using Thread.sleep to wait some time :), but that's a bad way to do it.
Try manually refreshing your indices after inserting the data and before performing a query to ensure the data is searchable.
Either:
As part of the index request - https://www.elastic.co/guide/en/elasticsearch/reference/2.3/docs-index_.html#index-refresh
Or separately - https://www.elastic.co/guide/en/elasticsearch/reference/2.3/indices-refresh.html
There could be another reason. I had the same problem with my Elasticsearch unit tests. At first I thought the root cause was somewhere in .NET Core, or NEST, or elsewhere outside my code, because the tests would run successfully in Debug mode (when debugging tests) but randomly fail in Release mode (when running tests).
After a lot of investigation and trial and error, I found that the root cause (in my case) was concurrency; in other words, a race condition.
The tests run concurrently, and I was recreating and seeding my index (initializing and preparing it) in the test class constructor, which means the setup executed at the beginning of every test. Because the tests ran concurrently, a race condition could occur and make my tests fail.
Here is the initialization code that caused tests to fail randomly when running them (in Release mode):
public BaseElasticDataTest(RootFixture fixture)
    : base(fixture)
{
    ElasticHelper = fixture.Builder.Build<ElasticDataProvider<FakePersonIndex>>();
    deleteFakeIndex();
    createFakeIndex();
    fillFakeIndexData();
}
The code above ran concurrently for every test. I fixed the problem by executing the initialization code only once per test class (once for all the test cases inside the class), and the problem went away.
Here is my fixed test class constructor code:
static bool initialized = false;

public BaseElasticDataTest(RootFixture fixture)
    : base(fixture)
{
    ElasticHelper = fixture.Builder.Build<ElasticDataProvider<FakePersonIndex>>();
    if (!initialized)
    {
        deleteFakeIndex();
        createFakeIndex();
        fillFakeIndexData();
        // for concurrency
        System.Threading.Thread.Sleep(100);
        initialized = true;
    }
}
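A more defensive variant of the once-only guard above (a sketch reusing the same hypothetical helper names) takes a lock instead of relying on a bare static flag plus Thread.Sleep, so concurrent test constructors cannot interleave the index setup:

```csharp
static readonly object initLock = new object();
static bool initialized = false;

public BaseElasticDataTest(RootFixture fixture)
    : base(fixture)
{
    ElasticHelper = fixture.Builder.Build<ElasticDataProvider<FakePersonIndex>>();
    lock (initLock)                 // serialize concurrent constructors
    {
        if (!initialized)
        {
            deleteFakeIndex();      // hypothetical helpers from the answer above
            createFakeIndex();
            fillFakeIndexData();
            initialized = true;     // set only after setup completes
        }
    }
}
```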
Hope it helps

indexing many with nest and elasticsearch - unable to perform post on any of the nodes

I'm trying to index many documents into Elasticsearch using NEST. Things run fine when there's a limited number of documents, but when I ramp up the number, say from 1,000 to 50,000, it throws an error. I'm not convinced it's due to the number of documents; it could be bad data.
I'm trying to safeguard against bad data, though: I'm only indexing documents that have an id. The id is generated from one of my fields (upc), so I'm positive there's an id for every document. I'm also making sure the class object it's serialized to/from has all nullable properties.
Still, there's nothing informational that I can see in this error that helps me.
The error I get is:
Unable to perform request: 'POST' on any of the nodes after retrying 0 times
And here's the stack trace when it throws the error:
at Elasticsearch.Net.Connection.Transport.RetryRequest[T](TransportRequestState`1 requestState, Uri baseUri, Int32 retried, Exception e) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 241
at Elasticsearch.Net.Connection.Transport.DoRequest[T](TransportRequestState`1 requestState, Int32 retried) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 215
at Elasticsearch.Net.Connection.Transport.DoRequest[T](String method, String path, Object data, IRequestParameters requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 163
at Elasticsearch.Net.ElasticsearchClient.DoRequest[T](String method, String path, Object data, BaseRequestParameters requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\ElasticsearchClient.cs:line 75
at Elasticsearch.Net.ElasticsearchClient.Bulk[T](Object body, Func`2 requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\ElasticsearchClient.Generated.cs:line 45
at Nest.RawDispatch.BulkDispatch[T](ElasticsearchPathInfo`1 pathInfo, Object body) in c:\Projects\NEST\src\Nest\RawDispatch.generated.cs:line 34
at Nest.ElasticClient.<Bulk>b__d6(ElasticsearchPathInfo`1 p, BulkDescriptor d) in c:\Projects\NEST\src\Nest\ElasticClient-Bulk.cs:line 20
at Nest.ElasticClient.Dispatch[D,Q,R](D descriptor, Func`3 dispatch, Boolean allow404) in c:\Projects\NEST\src\Nest\ElasticClient.cs:line 86
at Nest.ElasticClient.Dispatch[D,Q,R](Func`2 selector, Func`3 dispatch, Boolean allow404) in c:\Projects\NEST\src\Nest\ElasticClient.cs:line 72
at Nest.ElasticClient.Bulk(Func`2 bulkSelector) in c:\Projects\NEST\src\Nest\ElasticClient-Bulk.cs:line 15
at Nest.ElasticClient.IndexMany[T](IEnumerable`1 objects, String index, String type) in c:\Projects\NEST\src\Nest\ElasticClient-Index.cs:line 44
at ElasticsearchLoad.Program.BuildBulkApi() in c:\Projects\ElasticsearchLoad\ElasticsearchLoad\Program.cs:line 258
Any help would be appreciated!
You are going to be limited in the effective bulk size you can send to Elasticsearch by a combination of your documents and your Elasticsearch configuration. There is no single best answer here, but with some testing and configuration changes you should be able to reach a suitable bulk indexing performance threshold. Here are some resources to assist you:
elasticsearch bulk indexing gets slower over time with constant number of indexes and documents
Write heavy elasticsearch
Scaling Elasticsearch Part 1: Overview
And for overall sizing of Elasticsearch I would highly recommend reading Sizing Elasticsearch - Scaling up and out.
If you are running a multi-node cluster, make sure your setup is the same for all nodes.
I am not sure if this will help you, but I had a similar issue in a 2-node cluster. I was adding synonyms and had set up the synonym file only on the master machine; I completely forgot to copy it over to the 2nd node. This caused the error above whenever I created a new index that depended on that synonym file.
After I added the synonym file and restarted the 2nd node, everything went back to normal.

Appfabric max object size

While caching some large objects (maybe around 10 MB) in the AppFabric cache, it throws the following exception:
ErrorCode :SubStatus:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown.
Here are the transport channel settings:
<transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
maxBufferSize="50000000" maxOutputDelay="2" channelInitializationTimeout="60000"
receiveTimeout="600000"/>
Even though maxBufferSize is set to 50 MB, well above the size of the object, why would storing a 10 MB object throw this exception? Please let me know if I am missing something here.
The WCF transport settings need to be set on both the client and the server to take effect.
AFAIK the maximum possible value for both maxBufferSize and maxBufferPoolSize is 2 GB, but I wouldn't set them arbitrarily large, as that will chew through memory.
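To illustrate the first point, here is a sketch of the matching client-side configuration (the host name and port are hypothetical); the same transportProperties values shown for the server above would need to appear under the client's dataCacheClient section:

```xml
<dataCacheClient>
  <hosts>
    <host name="CacheServer1" cachePort="22233" />
  </hosts>
  <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
                       maxBufferSize="50000000" maxOutputDelay="2"
                       channelInitializationTimeout="60000" receiveTimeout="600000" />
</dataCacheClient>
```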

Error received after submitting translated 4010 xml to legacy webservice

Good Afternoon.
I am sending a 270 to the state (Michigan) and receiving a 271, which I then transform into a 4010 version of the 271 so that a legacy web service can attempt to absorb the data. The web service uses dbml and LINQ to translate the message into a series of classes that represent the database; after translation it performs a transaction and updates the client. However, I am getting an error that says:
The adapter failed to transmit message going to send port "SendEDI" with URL "http://biz05/WriteEligibilityResponse/service.svc". It will be retransmitted after the retry interval specified for this Send Port. Details: "System.ServiceModel.FaultException: a:InternalServiceFault: An attempt was made to remove a relationship between a X12_NM1 and a X12_271_2120C. However, one of the relationship's foreign keys (X12_271_2120C.X12_NM1_Id) cannot be set to null.
at EligibilityLookup.Service.ResponseToSQL.WriteResponse(Message message)
at SyncInvokeWriteResponse(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
System.InvalidOperationException
at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".
Keeping in mind that I cannot change the LINQ code (I cannot edit the client, as part of a management decision; rebuilding the front end is Stage 2 of the project), is there any suggested way to get around this? I have already removed the 5010-to-4010 link in the map for this element, and I also do not care whether I get a complete 271 dataset into the legacy system.
Just googling the error turned up this:
http://blogs.msdn.com/b/bethmassi/archive/2007/10/02/linq-to-sql-and-one-to-many-relationships.aspx
If you can't change the LINQ model, then it appears you are going to have to map data into the 4010 document you send to the web service so that the X12_NM1 that maps to the X12_271_2120C table is populated.