Sitecore with Oracle Commerce/ATG 11.3 and Endeca

Can anyone advise whether it is possible for Sitecore to work with Oracle Commerce/ATG 11.3 and Endeca?

There is no reason why it can't be integrated; it depends on how deeply you want to integrate it. For example, Sitecore exposes a RESTful API that allows you to get to individual content items.
Since you mention Endeca as well, I assume you want to be able to index the content too? For that you will probably have to develop your own CAS connector, potentially hooking into the Sitecore search functionality.
There may be other APIs to hook into as well, but since your question is simply whether it is possible, the answer is still 'Yes'.
As for whether it is the right solution, that is a different question. What does Sitecore give you that the BCC doesn't? Can you migrate from Sitecore to the BCC? Does Sitecore expose other APIs that would allow it to act as a 'read-only' content store in an ATG application (BCC or Storefront)? Many options exist.
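As a rough sketch of the shallow integration described above, here is what pulling a single content item over Sitecore's RESTful item service might look like from Java. The endpoint path (/sitecore/api/ssc/item/...), the host name and the item ID are assumptions for illustration; check the Sitecore Services Client documentation for your version and configure authentication accordingly.

    // Minimal sketch: fetch one Sitecore content item as JSON so it can be fed
    // to ATG/Endeca code. URL, item ID and auth handling are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SitecoreItemFetcher {
        private static final String BASE = "https://sitecore.example.com/sitecore/api/ssc/item/";

        public static String fetchItemJson(String itemId) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(BASE + itemId))          // e.g. the GUID of a content item
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();                          // JSON describing the item's fields
        }
    }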

Related

Using Magento APIs for an ecommerce website

I am a beginner in Magento and am working on creating a website with it. I have noticed that Magento has a good number of APIs that expose all of the functionality I would need to create an ecommerce website. So, I would like to use Magento's APIs to fetch data, but develop the UI separately, without any dependencies on Magento. I have found a lot of references that build the website via Magento theming, but not ones where the UI is developed in a separate MVC framework and Magento is used purely as a service layer. Are there any problems/issues with my approach?
Edit: I have gained a lot of clarity on the DB performance issues in the APIs and how external caching can alleviate them, but I still don't understand the underwhelming use of Magento as a service layer (i.e. fueling the website by using Magento's APIs). Are there any other gotchas?
Here is how we overcame slowness in Magento APIs:
Created a web service provider in J2EE/Spring MVC that acts as a proxy between Magento and end users.
The J2EE web service provider exposes pretty much all the APIs that Magento has, but also supports JSON over REST along with SOAP & RPC.
The J2EE web service provider uses a document-based database (MongoDB) to store a snapshot of the product catalog.
The J2EE web service provider uses native MongoDB caching to serve data fast without running any expensive SQL queries.
To avoid dirty-cache issues, we created a hook in the Magento Admin to push data into MongoDB whenever data changes in Magento.
This might sound like overkill to some but we have been able to achieve pretty high throughput without any slowness.
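To make the proxy idea above concrete, here is a minimal sketch of the read path, assuming Spring MVC and the MongoDB Java driver. The database, collection and field names ("catalogCache", "products", "sku") are invented for illustration and are not Magento's schema.

    // Rough sketch of the caching proxy: a Spring MVC endpoint that serves product
    // data from a MongoDB snapshot instead of calling Magento directly.
    import org.bson.Document;
    import org.springframework.web.bind.annotation.*;
    import com.mongodb.client.*;

    @RestController
    public class ProductProxyController {
        private final MongoCollection<Document> products =
                MongoClients.create("mongodb://localhost:27017")
                            .getDatabase("catalogCache")
                            .getCollection("products");

        @GetMapping(value = "/api/products/{sku}", produces = "application/json")
        public String getProduct(@PathVariable String sku) {
            Document doc = products.find(new Document("sku", sku)).first();
            return doc != null ? doc.toJson() : "{}";   // the Magento Admin hook keeps this collection fresh
        }
    }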
The Magento APIs are slow; you would encounter serious performance issues trying to run a site off of them.
Due to the complex nature of the EAV model, you may find it difficult to manage products through the API alone.
Are there any particular concerns you have about using Magento's own frontend? It is daunting at first but once you understand the layout system it's actually very powerful and customisable.
Technically it is possible to run a site only through the API.
The issue you might face is a practical one: instead of spending your time trying to learn all the API calls, you could spend it learning how to implement your current UI in Magento.
The advantage of this approach is that you will also better understand how Magento works internally, thus allowing you to leverage its functionality for your unique business needs.
Another issue is that when using the APIs you have a little less control over how things are processed/calculated, whereas when working in Magento itself there is a lot of control over the specifics.
I regularly see "session expiration" issues when accessing Magento's API, through both SOAP and XMLRPC. All my calls require exception handling to avoid halting execution. I imagine that alone would create a nightmare when building everything on top of the API.
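For illustration, the kind of guard described above might look like the following sketch: re-authenticate and retry once rather than letting a single "session expired" fault halt execution. MagentoSession is a stand-in for whatever SOAP or XML-RPC client stub you actually generate; it is not a real Magento class.

    // Hedged sketch: wrap every API call so a session fault triggers one re-login and retry.
    import java.util.function.Function;

    public class RetryingCaller {
        public interface MagentoSession { void login(); }   // placeholder for your API stub

        public static <T> T callWithRetry(MagentoSession session, Function<MagentoSession, T> call) {
            try {
                return call.apply(session);
            } catch (RuntimeException sessionExpired) {
                session.login();             // re-establish the API session
                return call.apply(session);  // single retry; let it propagate if it fails again
            }
        }
    }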
The best answer you're going to get is to load test the API before you start coding. Log the tests extensively and look for errors. If you see errors on a regular basis, that should answer your question. Even if you find documentation that says it's okay to do what you're trying, you're still going to have to tune the API to work properly under the load required to run the store.
It will be good to know what you're up against before sinking hours into development.
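In the spirit of the advice above, a quick-and-dirty probe can be as simple as hammering one endpoint in a loop and counting failures before committing to the architecture. The URL below is a placeholder, not a real Magento endpoint on your store.

    // Minimal load probe: hit one API endpoint repeatedly, log the failure rate.
    import java.net.URI;
    import java.net.http.*;

    public class ApiLoadProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            int failures = 0;
            for (int i = 0; i < 500; i++) {
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("https://shop.example.com/api/rest/products")).GET().build();
                try {
                    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
                    if (resp.statusCode() >= 500) failures++;
                } catch (Exception e) {
                    failures++;                  // network/session errors count as failures too
                }
            }
            System.out.println("failures: " + failures + " / 500");
        }
    }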

Any way to use MvcMiniProfiler in a Windows application? Or is there a sister application?

So I've started using MvcMiniProfiler on our websites and quite like it. We have a Windows Application component/framework that is leveraged by the website and I was wondering if it was possible to use the profiler on that. I'm assuming not, but maybe there is a subcomponent of the code that could be used? I see that there is a way to configure where the results are stored (i.e. Sql Server) so maybe it is close to possible?
We have the following flow:
Website submits job to 'broker' then returns a 'come back later' page.
Broker runs and eventually data in the websites database gets updated by the broker.
Website displays the results.
It'd be great if there was a way I could get the entire workflow profiled. If there is no way, or no intention from the developers to make MvcMiniProfiler available to Windows applications, are there any recommendations for similarly styled profilers?
You could get this working by using SqlServerStorage. There is very little in the code base that heavily depends on ASP.NET; in fact, the SQL interceptor is generalized, and so is the stack used to capture the traces.
I imagine that a few internal changes would need to be made, e.g. using Thread.SetData as opposed to HttpContext, but they are pretty superficial.
The way you would get this going is by passing the "profiling identity" into the app and then continuing the tracking there. Eventually, when the user hits the site afterwards, the results would show up as the little "chiclets" on the left side.
A patch is totally welcome here, but it is not something MiniProfiler does in its current version.
(note to future readers, this is probably going to be out of date at some point, if it is please suggest an edit)
Yes, there's a Windows port of MiniProfiler: http://nootn.github.io/MiniProfiler.Windows/

Calendar integration with Domino (Lotus Notes)?

How do I integrate with a Lotus Notes Domino server? I know there are several versions and the answer would be different for each one, but advice on any version would be great at the moment as I haven't gotten the info on what server it is I'm supposed to integrate with yet. Assume version 6+.
I'm assuming I need to do the integration with the server and not the local Lotus Notes client, but that might not be correct?
I need to both read and write to the calendar appointments of a select number of users.
For instance, I should be able to create/update/delete an appointment for a certain user.
The appointments are the only thing I need access to, at the moment I have no need for the mails.
From what I have read on the internet, there is no standard interface to do this?
Should I develop a Domino app that does what I want?
Maybe there is a server API that I can use to connect and retrieve information?
Hopefully this can be done in C#? If not, what is the preferred way? I read something about Java, and that is doable also.
If you don't have any concrete answers but you have useful links, please post those as comments.
I have used Java and the C++ APIs to read a Domino calendar. Depending on the scenario, a server side solution can run into trouble if you want to do more than read -- the workflow sometimes needs the Notes client. Need to understand more about what you intend to do.
API documentation:
http://www.ibm.com/developerworks/lotus/downloads/toolkits.html
I'd use Java.
Here's the Domino Designer help section on Java:
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.designer.domino.main.doc/H_9_CODING_GUIDELINES_JAVA.html?resultof=%22%6a%61%76%61%22%20
First read the "Running a Java program" section.
Then you'll be interested in the "Accessing databases" link.
Here's an example of how to access a user's mail db (calendar items live inside the mail db in Lotus):
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.designer.domino.main.doc/H_EXAMPLES_OPENMAIL_METHOD_JAVA.html
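In the same spirit as that help example, here is a rough sketch of creating a calendar entry with the Domino Java back-end classes over DIIOP. The server name, database path, credentials and date strings are placeholders, and the item names shown ("Form", "Subject", "StartDateTime", "EndDateTime") are only the common mail-template ones; a fully well-formed appointment needs more items than this, so check the C&S documentation for your template version.

    // Hedged sketch: create a minimal appointment document in a user's mail database.
    // Requires NCSO.jar (remote/DIIOP session); values below are illustrative only.
    import lotus.domino.*;

    public class CreateAppointment {
        public static void main(String[] args) throws NotesException {
            Session session = NotesFactory.createSession("domino.example.com", "apiuser", "password");
            Database mailDb = session.getDatabase("", "mail/jdoe.nsf");   // the user's mail db

            Document appt = mailDb.createDocument();
            appt.replaceItemValue("Form", "Appointment");
            appt.replaceItemValue("Subject", "Project review");
            appt.replaceItemValue("StartDateTime", session.createDateTime("06/01 09:00 AM"));
            appt.replaceItemValue("EndDateTime", session.createDateTime("06/01 10:00 AM"));
            appt.save(true, false);
        }
    }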
GooCalSync (OpenNTF) and LotusNotes-Google Calendar Synchronizer (SourceForge) are great examples of how to do this in Java.
The best way to do this without the pain of having to write code is to use iCal. You will run into all sorts of issues with access, reading appointments, etc. that are best left to Domino to handle.
There are some good documents on the web on iCal support in Domino.
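For illustration only, a minimal iCalendar event looks like the snippet built below; how you deliver it to Domino (a mail-in meeting invitation, a calendar import, etc.) depends on your environment, so treat this purely as a sketch of the data format.

    // Builds a bare-bones iCalendar VEVENT string; UID, dates and summary are placeholders.
    public class ICalExample {
        public static String buildEvent() {
            return String.join("\r\n",
                "BEGIN:VCALENDAR",
                "VERSION:2.0",
                "PRODID:-//Example//CalendarIntegration//EN",
                "METHOD:REQUEST",
                "BEGIN:VEVENT",
                "UID:12345@example.com",
                "DTSTART:20240601T090000Z",
                "DTEND:20240601T100000Z",
                "SUMMARY:Project review",
                "END:VEVENT",
                "END:VCALENDAR");
        }
    }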
I've done this before for a CRM product (clearc2.com). iCal is easy, but if you want to do more than insert items and actually do a bi-directional sync with the calendars (which are mail databases on a Domino server), then I would look at the appendix of the Lotus Notes C API first. There is a section that explains the C&S piece fairly well. You do not need to use the C API to do the work, but it will explain what the many C&S items (fields) are for.
Click here for documentation.
My advice is to keep it simple, e.g. do not try to tackle repeating items (appts/tasks) on the first attempt. And try not to re-use any custom product objects you find in the mail template. These are undocumented Notes classes and can go away anytime. Furthermore, they may not work the same from each point release or even incremental release. The mail template code can be evil.

Google Visualization API

I want a real and honest opinion: what do you think of the Google Visualization API?
Is it reliable to use? When I was reading the documentation, I noticed that there are a lot of issues and defects to overcome. Also, can I use it to retrieve data from a MySQL database?
Thank you.
I am currently evaluating it. Compared to other JavaScript data visualization frameworks, I think it has a lot going for it:
dynamic loading is built-in
diverse, many things to choose from.
looks really great!
framework mostly takes care of picking whatever implementation fits the current browser
service based, you don't need to download anything in advance
unified data source: just create one data table, and have multiple visualizations draw from that data.
As a disadvantage, I'd like to mention security. I mean, because it's all service based, it is not so transparent what happens when you pass data into these API calls. And as far as I know, the API is free, but not open source, so I can't really check what is going on behind the covers.
I think the Google Visualization API really shines if you want to very quickly whip up a visualization gadget for use in a blog or so, and you are not interested in deploying all kinds of plugins and libraries (for example, with jQuery-based frameworks, you may need to manage multiple JavaScript libraries that work together to deliver the goods). If, on the other hand, you are creating an application that you want to sell, you might want to keep more control over what components you are using, and I would probably consider using something like Flot.
But like I said, I am only evaluating at the moment; I am not using this in production.
Works really great for me. Can be customized fairly easily. Haven't seen any scaling issues. No data is exposed so security should not be an issue. - Arunabh Das
One point I want to add here is that the Google Visualization API cannot be downloaded; it is not available for offline use. So an application that uses it must always be connected to the internet, otherwise it won't be able to render charts. Due to this limitation, this API cannot be used in applications for which an internet connection is not available.
I am currently working on a web-based application that will have the Google Visualization API added to it, and from the perspective of a developer, the Google Visualization API is very limited in what you can do with each individual chart. If I had a choice, I would probably look at dojox charting instead, just because of the extra flexibility that that framework gives you.
If you are doing any kind of large web application that will use charting extensively, then I would not recommend the Google Visualization API; it does not have enough flexibility for a large web application.
I am using the Google Visualization API and I want to stress that they still won't let you download it, which means that if their servers are down, your app will be down if you depend on it. I have been using it for about 4 months, and it has crashed on me once, so I'd say it's pretty reliable, and the documentation is really nice.

How can I integrate Oracle BI into an existing application?

I have an existing application written in Perl. Now I need to integrate this application with OBI. The plan is to have a button the user can click on to open OBI in an iframe. OBI resides on a different server from the running application.
Has anyone done this before? What is the best practice for doing this, and how much effort is involved?
Another question: is it possible to add customizations to the OBI content displayed in the iframe?
There are two ways to address the problem that I know of and tried out. According to your needs, one or the other might be more appropriate (or both, they're not mutually exclusive). In both cases, the documentation is good and readily available.
The "Go URL"
The Go URL is documented more thoroughly in the Oracle Business Intelligence Presentation Services Administration Guide. It provides a quick and easy interface to the reports you've already defined, in the form of a URL. All that's needed to get it running is to fill in a few query parameters to direct to the report you want. You might need to include authentication tokens too.
Pros: very easy to try out.
Cons: harder to get security right.
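As an illustration of the Go URL approach, the sketch below assembles the kind of URL you would drop into the iframe. The host, port, catalog path and credentials are placeholders, and passing NQUser/NQPassword on the URL is only sensible for a quick test given the security caveat above; see the Presentation Services Administration Guide for the full parameter list.

    // Hedged sketch: build an OBIEE "Go URL" pointing at a saved analysis.
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class GoUrlBuilder {
        public static String buildGoUrl(String host, String catalogPath, String user, String password) {
            return "http://" + host + ":9704/analytics/saw.dll?Go"
                 + "&Path=" + URLEncoder.encode(catalogPath, StandardCharsets.UTF_8)  // e.g. /shared/Sales/Dashboard1
                 + "&NQUser=" + user
                 + "&NQPassword=" + password;   // for quick testing only
        }
    }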
The web services
The presentation server comes with a series of web services that enable a more programmatic way of querying your repository. The functionality offered through this channel goes further: for example most catalog management, including report creation and editing is possible. The full list fills a guide of its own: the Oracle Business Intelligence Web Services Guide.
Pros: better integration (i.e., no need for an IFRAME); easier to get the security right.
Cons: harder to set up; lots of XML; more advanced features (e.g. in-place drilldown) need an HTTP bridge that was a bitch to get right in my case. The generated HTML might clash a bit with yours and require cleaning up, notably in the CSS.
Embedding OBIEE reports inside a non-ADF web app is tough. If you have an option to re-write your web application in ADF, your life will be a lot easier. Just drag and drop reports and visualizations into your web application. Oracle's own Fusion Applications also follow this approach. If your app is analytics heavy, it might be a good option to explore. Here's a link to the Oracle doc.
