I am trying to create VDMs (Virtual Data Models) from an SFSF EDMX, following this blog.
I create an SCP Business Application from the template, and then in the srv module I try to add a new data model from an external source - in this case the API Business Hub.
I try to use SuccessFactors Employee Central - Personal Information.
https://api.sap.com/api/ECPersonalInformation/overview
The process starts but then fails with the message "OData models with multiple schemas are not supported", followed by "Could not generate Virtual Data Model classes."
The external folder is generated as expected, with the XML in the edmx folder, but the csn folder is empty.
As I understand it, this should work with any API from the API Business Hub. Am I doing something wrong, or am I missing something?
Thanks.
Update:
There seems to be an issue with the conversion from EDMX into CSN used by the Web IDE (which is not part of the SAP Cloud SDK).
The Java VDM generated by the OData Generator from the SAP Cloud SDK (used as a component by the Web IDE) should work without any problem.
This looks like an unexpected behavior. We will investigate this further.
In the meantime, as a workaround, you can use our Maven plugin or the CLI to create the data model for you. This is described in detail in this blog post.
The tl;dr version (for the CLI) is:
Determine which version of the SAP Cloud SDK you are using (search for sdk-bom in your parent pom.xml). I assume this to be version 2.16.0 for this example.
Download the CLI library from maven central: https://search.maven.org/artifact/com.sap.cloud.s4hana.datamodel/odata-generator-cli/2.16.0/jar
Download the metadata file (edmx) from the API Business Hub (as linked in your question)
Run the CLI with e.g. the following command:
java -jar odata-generator-cli-2.16.0.jar -i <input-directory> -o <output-directory> -b <base-path>
The <base-path> is the service-independent prefix that sits between your host configuration and the actual service name (see the example after these steps).
Add the generated code manually to your project.
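For example, a concrete invocation might look like this (the directories are placeholders, and /odata/v2 is an assumption - it is the usual base path of the SuccessFactors OData APIs):

java -jar odata-generator-cli-2.16.0.jar -i ./edmx -o ./srv/src/main/java -b /odata/v2

The generator reads the EDMX file(s) from the input directory and writes the generated Java classes to the output directory.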
I will update this answer with the results of the investigation.
I am trying to implement a state machine using Java Lambda functions. I have created a state machine and some Java Lambda functions, but the inline code editor does not support Java.
The "Upload from" option is available here with two different formats:
.zip or .jar file
Amazon S3 location
What kind of file do we need to upload here? Can anyone show me some sample files? Is there a pom file we need to upload for the state machine to work?
For Java Lambdas you can upload a jar file as well as a zip, which can be created with the Gradle and Maven plugins mentioned in the article.
Lambda now also supports containers, so you can use a container image as well.
There are also a few popular frameworks, such as Quarkus or Micronaut, that you can use to deploy a Java Lambda as a native image.
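For illustration, here is a minimal sketch of the handler class such a jar would contain (package and class names are placeholders; the interfaces come from the aws-lambda-java-core dependency):

package com.example.lambda; // placeholder package

import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Minimal pass-through handler for a Step Functions task state.
public class StateHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {
    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        context.getLogger().log("State input: " + input);
        return input; // return the (possibly transformed) state
    }
}

Package it into a fat jar (for example with the Maven Shade plugin), upload that jar, and set the handler to com.example.lambda.StateHandler::handleRequest.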
Is it possible to take the existing services in an Azure subscription as a reference and create similar services, with updated parameters, in another subscription, either by using PowerShell or an ARM template?
We are missing a few details when capturing the reference configuration manually and then recreating it with ARM templates. We want this to be end-to-end automation.
You can export the ARM template from existing resources using Export-AzureRmResourceGroup or Save-AzureRmResourceGroupDeploymentTemplate (https://azure.microsoft.com/en-us/blog/export-template/) and then redeploy that template to a new environment.
However, if you are using Export-AzureRmResourceGroup to try to dynamically create an ARM template from existing resources then the generated template will likely not be ready to automatically redeploy. There may be issues with resource dependencies, resources not getting exported correctly, template limitations, etc. It will take a fair bit of manual effort to update the generated ARM template to get it to a point where it can be correctly redeployed into another subscription.
If you are able to use Save-AzureRmResourceGroupDeploymentTemplate (i.e. if your existing resources were all deployed via ARM templates with no post-deployment ad-hoc changes), then the templates should be ready to deploy.
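For example, a minimal sketch using those cmdlets (resource group names, location, and file paths are placeholders):

# Export the resource group from the source subscription
Export-AzureRmResourceGroup -ResourceGroupName "source-rg" -Path .\template.json

# Switch to the target subscription, create the target group, and redeploy
Select-AzureRmSubscription -SubscriptionId "<target-subscription-id>"
New-AzureRmResourceGroup -Name "target-rg" -Location "westeurope"
New-AzureRmResourceGroupDeployment -ResourceGroupName "target-rg" -TemplateFile .\template.json

As noted above, expect to edit template.json (parameters, names, dependencies) before the redeployment succeeds.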
For future reference, the best solution is to always deploy all of your resources via ARM templates (or something like Terraform) where your configuration is all saved in a source repository and you are deploying via a CI/CD pipeline.
I would like to know how to automatically manage the portal's custom code, just like in TFS/VSTS.
At present, I am using XrmToolBox to manage, push, or pull the portal's code into the CRM instance, but the disadvantage is the lack of check-in and check-out.
Can anyone help me manage the code with automatic pull and push into the CRM instance, with check-in and check-out options?
Thanks in advance!
I'm afraid the XrmToolBox plugin doesn't support it yet.
Ref: https://github.com/MscrmTools/MscrmTools.PortalCodeEditor/issues/13
But there is nothing stopping you from creating your own pipeline - at the end of the day, portal code is just a bunch of CRM entities. Part of the CRM SDK is the Configuration Migration tool - the latest version is here:
https://www.nuget.org/packages/Microsoft.CrmSdk.XrmTooling.ConfigurationMigration.Wpf
So the idea is:
1) Get this tool
2) Define the entities you want to back up and create a schema xml file for them (a sketch follows after these steps). I think you'd want adx_webpage, adx_webfile, adx_pagetemplate (and all attributes from them)
3) Export the data using this schema - this exports it to a .zip package that contains a simple structure (a schema file and a data file), so you can unzip it and store it in your git branch (pull)
4) For push, zip those files again and use the Configuration Migration tool to import the data
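For step 2, a heavily simplified sketch of what such a schema file might look like (field lists abbreviated and exact attributes may differ - in practice you generate it with the tool's UI):

<entities>
  <entity name="adx_webpage" displayname="Web Page">
    <fields>
      <field name="adx_name" displayname="Name" type="string" />
      <!-- ...the remaining adx_webpage attributes... -->
    </fields>
  </entity>
  <!-- repeat for adx_webfile and adx_pagetemplate -->
</entities>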
This also gives you the opportunity to have a separate dev version and production version of the portal code (which is always a good thing).
Portal code is made up of configuration changes to a solution (which can be extracted as xml) and data (records such as web pages, web roles, etc.).
There are several tools available to help you source control both.
xrm-ci-framework provides automation tools to extract your CRM solution as xml, and then source control it. You can do this locally or in the cloud with Azure DevOps or other.
msbuild-xrm-sourcecontrol is similar. It integrates into Visual Studio to help you extract CRM customisations locally. It also has a partner project xrm-datamigration which helps you extract data from CRM, version control it and deploy it to other environments in your release pipeline. Both have documentation on the GitHub pages I've linked; this blog post is informative too.
I received an email with the title above as the subject. Says it all. I'm not directly using the specified endpoint (storage#v1). The project in question is a postback catcher that funnels data into BigQuery:
App Engine > Pub Sub > Dataflow > Cloud Storage > BigQuery
A related question here indicates that Dataflow might be using it indirectly. I'm only using the Cloud PubSub to GCS Text template.
What is the recommended course of action if I'm relying on a template?
I think the warning may come from a Dataflow job that uses an old version of the storage API. Please upgrade your Dataflow/Beam SDK version beyond 2.5.
Since you're using our PubsubToText template, the easiest way to do it would be:
Stop your pipeline. Be sure to select "Drain" when asked.
Relaunch the pipeline using the newest version (which is done automatically if you're using the UI), from the same subscription.
Check the SDK version. It should be at least 2.7.
After that you should not see any more warnings.
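If you prefer the command line, a rough sketch of the same steps with gcloud (job, project, topic, and bucket names are placeholders):

# Drain the running job
gcloud dataflow jobs drain <job-id>

# Relaunch from the latest template version
gcloud dataflow jobs run my-pubsub-to-gcs --gcs-location gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text --parameters inputTopic=projects/<project>/topics/<topic>,outputDirectory=gs://<bucket>/output/,outputFilenamePrefix=output-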
I have a solution that contains multiple integration test projects and one web application project. Each integration test project connects to the web application when running its tests, and I would like each test project to access the website with its own database connection.
I have been trying to use the web deploy functionality built into Visual Studio, but I have been unable to figure out what I need to add to the deployment package and/or the post-build events of the test projects to declare the binding port for the website when deployed. For example, I want integration project A to create and access the website located at http://localhost:83, and integration project B to create and access the website located at http://localhost:82. Could someone please explain:
Is there anything I need to do to the deployment package?
What do I need to add to my post-build events for my integration projects when deploying the package, so that the website is created at the correct port when building the project?
Update:
I want to deploy the same site to two different locations on my machine so that I can run both sets of integration tests at the same time.
Update 2:
I have researched the Web Deploy tool, and it allows you to specify parameters that modify what is deployed when you call it from the command line. However, I have found the documentation very confusing. http://technet.microsoft.com/en-us/library/dd568968(WS.10).aspx
Update 3:
I expect these to be two different websites, each pointing to their own database. If possible, I would like a single package that can be deployed using msdeploy, which would then be called in a post-build event from each of the integration test projects. I would like to specify the connection string and deployment location from the post-build script of the integration project.
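For reference, a rough sketch of the kind of msdeploy call this would involve (package path, site names, and parameter names are placeholders - the real parameter names are defined in the Parameters.xml inside the package):

msdeploy -verb:sync -source:package="WebApp.zip" -dest:auto -setParam:name="IIS Web Application Name",value="Default Web Site/SiteA" -setParam:name="MyDb-Web.config Connection String",value="Server=.;Database=TestA;Integrated Security=true"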
You can try WebDev.WebServer, which is included with Visual Studio. Visual Studio uses it to start a web server when you debug. With it you can start a web server on the desired port (if the port is not currently in use).
I made a bat file to change some options. Check it out:
::Begin of bat file
cd C:\Program Files\Common Files\microsoft shared\DevServer\10.0\
WebDev.WebServer40.exe /port:80 /path:"C:\PATHTOYOURWEBPROJECT" /vpath:"/NAMEOFYOURWEBPROJECT"
::End of bat file
You can access it at: http://localhost:80
I use WebServer40, but if you don't have .NET 4 or VS2010 you can try to find WebDev.WebServer[xx version].exe.
I hope this helps, and sorry for my broken English.
First off, you're approaching this the wrong way.
> I would like for each test project to access the website with its own database connection.
Who is creating the DB connection - your web site or the test project? For the rest of your question to make sense, I presume it's the web site (otherwise, Project A and Project B cannot share a connection out of the box).
If your website is making the connection, then unless you're caching it or holding a static connection, a new connection will be made as each request to your site runs on a new thread. Another, simpler alternative is to take a query parameter and initiate a new connection based on it. If you seed it off the caller, you can also use it for more detailed logging.
Web Deployment projects are meant for deploying to integration servers, which means you cannot access them via http://localhost... but only via the full FQDN of the server.
Most importantly, http://localhost:82/myApp and http://localhost:83/myApp are two different sites running the same codebase (unless you redirect from one of them to the other, which in itself can cause additional issues).
Having said that, you would then need to deploy your website twice, and then all you need is to change the config/settings entry in Projects A and B to point to these two different sites.
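For example, each test project could carry its own setting in its App.config (the key name here is just an illustration):

<!-- Project A -->
<appSettings>
  <add key="SiteBaseUrl" value="http://localhost:83/myApp" />
</appSettings>

<!-- Project B -->
<appSettings>
  <add key="SiteBaseUrl" value="http://localhost:82/myApp" />
</appSettings>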
Hope this makes sense.
You can define a virtual host configuration.
Refer to this guide for more information:
http://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch07.html
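As a rough sketch (host names are placeholders; see the guide for the exact details), you would add a Host entry to JBoss Web's server.xml:

<Host name="siteA.localhost" autoDeploy="false">
  <Alias>siteA</Alias>
</Host>

and then bind the application to that host in its WEB-INF/jboss-web.xml:

<jboss-web>
  <virtual-host>siteA.localhost</virtual-host>
</jboss-web>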