
Monday, 10 September 2018

Published Summary Alchemy Plugin

In my previous blog post, we saw how the Published Summary plugin was conceptualized, its use cases, how it works, and its different functionalities.

In this plugin, I wrote the C# code that interacts with the Core Service and forwards the JSON response to be consumed on the front end.

Based on the use cases and features of this plugin, I wrote endpoints for the following scenarios and provided the JSON.
  1. Get all published items from a Publication, Structure Group, or Folder.
    • If we select a Publication, the response will be all published Pages, Components, and Categories in that publication.
    • If we run this plugin from a Structure Group, we get all the published Pages in that structure group; likewise, for a Folder selection, all published Component Templates and Components.
  2. Publish items that are selected by the user from the plugin UI.
  3. Unpublish items that are selected by the user from the plugin UI.
  4. Summary panel, where we'll have a snapshot of how many items are published in a particular publication: basically a GroupBy of item type with the count.
  5. And finally, get all Publications and Target Types.
Now, let's look at the code at a high level.

Get All Published Items

The GetAllPublishedItems method handles a POST request and deserializes the incoming JSON into a C# model.

JSON format is {"IDs":["tcm:14-65-4"]}
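
The model classes aren't shown in the post; a minimal sketch of what the deserialization target for this JSON could look like (the property name is taken from the JSON and the method signature, everything else is an assumption):

 // Hypothetical request model matching {"IDs":["tcm:14-65-4"]};
 // the plugin's real class may differ.
 public class TcmIds
 {
   public List<string> IDs { get; set; }
 }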

 [HttpPost, Route("GetAllPublishedItems")]  
     public object GetAllPublishedItems(TcmIds tcmIDs)  
     {  
       GetPublishedInfo getFinalPublishedInfo = new GetPublishedInfo();  
       var multipleListItems = new List<ListItems>();  
       XmlDocument doc = new XmlDocument();  
       try  
       {  
         foreach (var tcmId in tcmIDs.IDs)  
         {  
           TCM.TcmUri iTcmUri = new TCM.TcmUri(tcmId.ToString());  
           XElement listXml = null;  
           switch (iTcmUri.ItemType.ToString())  
           {  
             case CONSTANTS.PUBLICATION:  
               listXml = Client.GetListXml(tcmId.ToString(), new RepositoryItemsFilterData  
               {  
                 ItemTypes = new[] { ItemType.Component, ItemType.ComponentTemplate, ItemType.Category, ItemType.Page },  
                 Recursive = true,  
                 BaseColumns = ListBaseColumns.Extended  
               });  
               break;  
             case CONSTANTS.FOLDER:  
               listXml = Client.GetListXml(tcmId.ToString(), new OrganizationalItemItemsFilterData  
               {  
                 ItemTypes = new[] { ItemType.Component, ItemType.ComponentTemplate },  
                 Recursive = true,  
                 BaseColumns = ListBaseColumns.Extended  
               });  
               break;  
             case CONSTANTS.STRUCTUREGROUP:  
               listXml = Client.GetListXml(tcmId.ToString(), new OrganizationalItemItemsFilterData()  
               {  
                 ItemTypes = new[] { ItemType.Page },  
                 Recursive = true,  
                 BaseColumns = ListBaseColumns.Extended  
               });  
               break;  
             case CONSTANTS.CATEGORY:  
               listXml = Client.GetListXml(tcmId.ToString(), new RepositoryItemsFilterData  
               {  
                 ItemTypes = new[] { ItemType.Category },  
                 Recursive = true,  
                 BaseColumns = ListBaseColumns.Extended  
               });  
               break;  
             default:  
               throw new ArgumentOutOfRangeException();  
           }  
           if (listXml == null) throw new ArgumentNullException(nameof(listXml));  
           doc.LoadXml(listXml.ToString());  
           multipleListItems.Add(TransformObjectAndXml.Deserialize<ListItems>(doc));  
         }  
         // Flatten the filtered lists, fetch publish info per item, and build the final response.
         return getFinalPublishedInfo.FilterIsPublishedItem(multipleListItems)
           .SelectMany(publishedItem => publishedItem,
                       (publishedItem, item) => new { publishedItem, item })
           .Select(@t => new { @t, publishInfo = Client.GetListPublishInfo(@t.item.ID) })
           .SelectMany(@t => getFinalPublishedInfo.ReturnFinalList(@t.publishInfo, @t.@t.item))
           .ToList();
       }  
       catch (Exception ex)  
       {  
         throw new HttpResponseException(Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));  
       }  
     }  

Publish and UnPublish Items

The Publish and UnPublish methods handle POST requests and deserialize the incoming JSON into a C# model.

{"TcmIds":[{Id:"tcm:14-65-4",Target:"staging"},{Id:"tcm:14-77-64",Target:"staging"}]}


  #region Publish the items  
     /// <summary>  
     /// Publishes the items.  
     /// </summary>  
     /// <param name="IDs">The item IDs with their targets.</param>  
     /// <returns>System.String.</returns>  
     /// <exception cref="ArgumentNullException">result</exception>  
     [HttpPost, Route("PublishItems")]  
     public string PublishItems(PublishUnPublishInfoData IDs)  
     {  
       try  
       {  
         var pubInstruction = new PublishInstructionData()  
         {  
           ResolveInstruction = new ResolveInstructionData() { IncludeChildPublications = false },  
           RenderInstruction = new RenderInstructionData()  
         };  
         PublishTransactionData[] result = null;  
         var tfilter = new TargetTypesFilterData();  
         var allPublicationTargets = Client.GetSystemWideList(tfilter);  
         if (allPublicationTargets == null) throw new ArgumentNullException(nameof(allPublicationTargets));  
         foreach (var pubdata in IDs.IDs)  
         {  
           var target = allPublicationTargets.Where(x => x.Title == pubdata.Target).Select(x => x.Id).ToList();  
           if (target.Any())  
           {  
             result = Client.Publish(new[] { pubdata.Id }, pubInstruction, new[] { target[0] }, PublishPriority.Normal, null);  
             if (result == null) throw new ArgumentNullException(nameof(result));  
           }  
         }  
         return "Item send to Publish";  
       }  
       catch (Exception ex)  
       {  
         throw new HttpResponseException(Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));  
       }  
     }  
     #endregion  
     #region Unpublish the items  
     /// <summary>  
     /// Unpublishes the items.  
     /// </summary>  
     /// <param name="IDs">The item IDs with their targets.</param>  
     /// <returns>System.String.</returns>  
     /// <exception cref="ArgumentNullException">result</exception>  
     /// <exception cref="HttpResponseException"></exception>  
     [HttpPost, Route("UnPublishItems")]  
     public string UnPublishItems(PublishUnPublishInfoData IDs)  
     {  
       try  
       {  
         var unPubInstruction = new UnPublishInstructionData()  
         {  
           ResolveInstruction = new ResolveInstructionData()  
           {  
             IncludeChildPublications = false,  
             Purpose = ResolvePurpose.UnPublish,  
           },  
           RollbackOnFailure = true  
         };  
         PublishTransactionData[] result = null;  
         var tfilter = new TargetTypesFilterData();  
         var allPublicationTargets = Client.GetSystemWideList(tfilter);  
         if (allPublicationTargets == null) throw new ArgumentNullException(nameof(allPublicationTargets));  
         foreach (var tcmID in IDs.IDs)  
         {  
           var target = allPublicationTargets.Where(x => x.Title == tcmID.Target).Select(x => x.Id).ToList();  
           if (target.Any())  
           {  
             result = Client.UnPublish(new[] { tcmID.Id }, unPubInstruction, new[] { target[0] }, PublishPriority.Normal, null);  
             if (result == null) throw new ArgumentNullException(nameof(result));  
           }  
         }  
         return "Items send for Unpublish";  
       }  
       catch (Exception ex)  
       {  
         throw new HttpResponseException(Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));  
       }  
     }  
     #endregion  

Summary Panel

The GetSummaryPanelData method handles a POST request and deserializes the incoming JSON into a C# model.

JSON format is {"IDs":["tcm:14-65-4"]}


 #region GetSummaryPanelData  
     /// <summary>  
     /// Gets the summary panel (analytics) data.  
     /// </summary>  
     /// <returns>System.Object.</returns>  
     [HttpPost, Route("GetSummaryPanelData")]  
     public object GetSummaryPanelData(TcmIds tcmIDs)  
     {  
       try  
       {  
         GetPublishedInfo getFinalPublishedInfo = new GetPublishedInfo();  
         var multipleListItems = new List<ListItems>();  
         XmlDocument doc = new XmlDocument();  
         foreach (var tcmId in tcmIDs.IDs)  
         {  
           var listXml = Client.GetListXml(tcmId.ToString(), new RepositoryItemsFilterData  
           {  
             ItemTypes = new[] { ItemType.Component, ItemType.ComponentTemplate, ItemType.Category, ItemType.Page },  
             Recursive = true,  
             BaseColumns = ListBaseColumns.Extended  
           });  
           if (listXml == null) throw new ArgumentNullException(nameof(listXml));  
           doc.LoadXml(listXml.ToString());  
           multipleListItems.Add(TransformObjectAndXml.Deserialize<ListItems>(doc));  
         }  
         List<Item> finalList = new List<Item>();  
         foreach (var publishedItem in getFinalPublishedInfo.FilterIsPublishedItem(multipleListItems))  
           foreach (var item in publishedItem)  
           {  
             var publishInfo = Client.GetListPublishInfo(item.ID);  
             foreach (var item1 in getFinalPublishedInfo.ReturnFinalList(publishInfo, item)) finalList.Add(item1);  
           }  
         // Group the published items by target and item type to build the summary counts.
         IEnumerable<Analytics> analytics = finalList
           .GroupBy(x => new { x.PublicationTarget, x.Type })
           .Select(g => new Analytics
           {
             Count = g.Count(),
             PublicationTarget = g.Key.PublicationTarget,
             ItemType = g.Key.Type,
           });
         var tfilter = new TargetTypesFilterData();  
         List<ItemSummary> itemssummary = getFinalPublishedInfo.SummaryPanelData(analytics, Client.GetSystemWideList(tfilter));  
         return itemssummary;  
       }  
       catch (Exception ex)  
       {  
         throw new HttpResponseException(Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));  
       }  
     }  
     #endregion  

Get All Publications and Target Types

 #region Get list of all publications  
     /// <summary>  
     /// Gets the publication list.  
     /// </summary>  
     /// <returns>List&lt;Publications&gt;.</returns>  
     [HttpGet, Route("GetPublicationList")]  
     public List<Publications> GetPublicationList()  
     {  
       GetPublishedInfo getPublishedInfo = new GetPublishedInfo();  
       XmlDocument publicationList = new XmlDocument();  
       PublicationsFilterData filter = new PublicationsFilterData();  
       XElement publications = Client.GetSystemWideListXml(filter);  
       if (publications == null) throw new ArgumentNullException(nameof(publications));  
       List<Publications> publicationsList = getPublishedInfo.Publications(publicationList, publications);  
       return publicationsList;  
     }  
     #endregion  
     #region Get List of all publication targets  
     /// <summary>  
     /// Gets the publication target.  
     /// </summary>  
     /// <returns>System.Object.</returns>  
     [HttpGet, Route("GetPublicationTarget")]  
     public object GetPublicationTarget()  
     {  
       var filter = new TargetTypesFilterData();  
       var allPublicationTargets = Client.GetSystemWideList(filter);  
       if (allPublicationTargets == null) throw new ArgumentNullException(nameof(allPublicationTargets));  
       return allPublicationTargets;  
     }  
     #endregion  


You can download the code from GitHub.

Happy Coding and Keep Sharing !!!

SDL Web Hackathon 2018 - Published Summary Plugin

The Published Summary Alchemy plug-in project was conceptualized and initiated for the SDL Tridion Developer Summit Amsterdam hackathon for the year 2018.

The hackathon entry was submitted under the team name "CB Ke Cheete" for the "Published Summary" Alchemy plugin project.

Team Members 
The team - "CB ke Cheete" - having meanings as "Cheetahs of CB", comprises of three young and dynamic Content Bloom professionals. The team has following members:
  • Pankaj Gaur - Director, Content Bloom | SDL Certified Dev and BA | SDL Tridion/Web MVP
  • Hem Kant - Consultant, Content Bloom | SDL Certified Dev and BA | SDL Web MVP
  • Priyank Gupta - Consultant, Content Bloom | SDL Certified Dev

This plugin is intended to do the following:
  1. Get all items within a publication, folder, or structure group published to one or more Publishing Target Types.
    • If we select a Publication, the response will be all published Pages, Components, and Categories in that publication.
    • If we run this plugin from a Structure Group, we get all the published Pages in that structure group; likewise, for a Folder selection, all published Component Templates and Components.
  2. Republish, Unpublish and open a specific item.
  3. Republish or Unpublish multiple items on their respective Publishing Target Types.
  4. Export in CSV.
  5. Filter based on Publishing Target Types, Item Type (Components, Pages, etc.), and Published Date Range.
  6. Sorting based on Title, Published Date, Published By, Targets, and TCM URIs.
  7. Searching.
  8. Summary of published building blocks across all publishing target types.

The Published Summary Alchemy Plug-in can help in following scenarios:
  1. You want to export a list of all published items from Tridion for a specific website (or for a specific structure group/folder within a website) in CSV format.
  2. You want to see/export all published items from Tridion for a specific Publishing Target Type and compare among all publishing target types for a specific CMS instance.
  3. You need to know what all needs to be published from Tridion to get a specific website up and running, similar to an existing website.
  4. You need to know a summary of "how many" specific items are published from Tridion to individual publishing target types.
  5. You need to know about "delta" of published items across publishing target types of a CMS instance.
  6. You need to sync your non-live websites with the live websites in terms of content managed from Tridion.
Published Summary

Happy Coding and Keep Sharing !!!

Monday, 29 May 2017

Universal WorkList Alchemy Plugin for SDL WEB

Universal Work List is a centralized way of managing all your tasks: it lets you manage, respond to, and assign daily tasks, or delegate tasks to peers. With the help of this plugin you will be able to manage all your tasks and communication from the CMS.

Features of this plugin

  1. Create tasks and assign them to peers.
  2. Update task status and add your comments.
  3. Re-assign a task to any user.
  4. Assign any task to yourself.
  5. Dashboard.
  6. Powered by SOLR 6.5.
Let's go through each screen one by one and understand how it works and what it does.


The landing page is where you see all the tasks assigned to you, and you can filter them as well. To render the data I have used jsGrid, which provides features such as pagination, sorting, etc.
Landing page
Here, all the tasks assigned to me are rendered and I can create a task; clicking on the unique key takes the user to the task update page, where the user can update comments and status.
Update Task
Here the user can update comments, update the task status, or re-assign the task to another user. All the data is stored in SOLR, as sketched below.
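
The Solr access code isn't shown in the post; here is a minimal sketch of how the task documents could be stored and queried with SolrNet (the document class, field names, and the "tasks" core are all assumptions):

 using Microsoft.Practices.ServiceLocation;
 using SolrNet;
 using SolrNet.Attributes;

 // Hypothetical task document; field names are assumptions.
 public class WorkListTask
 {
   [SolrUniqueKey("id")] public string Id { get; set; }
   [SolrField("assignedto")] public string AssignedTo { get; set; }
   [SolrField("status")] public string Status { get; set; }
   [SolrField("comments")] public string Comments { get; set; }
 }

 public static class TaskStore
 {
   public static void Init()
   {
     // Assumed Solr 6.5 core named "tasks" on the default port.
     Startup.Init<WorkListTask>("http://localhost:8983/solr/tasks");
   }

   public static void Save(WorkListTask task)
   {
     var solr = ServiceLocator.Current.GetInstance<ISolrOperations<WorkListTask>>();
     solr.Add(task);    // add or update the document
     solr.Commit();     // make it visible to queries
   }

   public static SolrQueryResults<WorkListTask> TasksFor(string user)
   {
     var solr = ServiceLocator.Current.GetInstance<ISolrOperations<WorkListTask>>();
     return solr.Query(new SolrQueryByField("assignedto", user));
   }
 }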

Next is how to create a task. In the create-task panel you have:
  1. A task assignee field.
  2. A watcher, or you could say a second pair of eyes.
  3. Priority.
  4. Environment.
  5. Issue description.
    Create Task
Happy Coding and keep Sharing !!!

Friday, 5 May 2017

Alchemy Plugin for WEB8 Dashboard

This is in continuation of my previous post, where we built an Alchemy plugin to download and search CMS items. I have enhanced it further and added a new feature: a CMS-level dashboard with multiple data-point options and multiple chart options to represent the data in pictorial form.

Let's see the new addition in this plugin: the DASHBOARD.
Dashboards often provide at-a-glance views. A data dashboard is an information management tool that visually tracks, analyzes, and displays key performance indicators using pictorial representations of the data. With the help of this dashboard we can identify the usage of different CMS items, based on date, using charts.

Here, we have three filters.
Dashboard
Let's run this and see how it looks with all the charts and the information they represent. Let's generate charts based on all the components created month-wise, using all three chart types.
1. Bar and Column chart.
Bar and Column chart
 2. All three charts representing the number of content items created month-wise in pictorial form; the same information is available year-wise as well.
All three charts.
3. Similar to components, we can generate information for pages month-wise and year-wise.
4. Schema utilization: the number of components created using any particular schema.
Schema utilization
With the help of all this information we can do CMS DB forecasting, CMS clean-up activity, and content-growth analysis. I have used the Core Service, jQuery, and the Google Visualization API to create the dashboard.
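
The aggregation code isn't included in the post; a rough sketch of the month-wise grouping that could feed the charts (the CmsItem class and its CreatedDate property are hypothetical stand-ins for whatever the Core Service list call returns):

 using System;
 using System.Collections.Generic;
 using System.Linq;

 // Hypothetical item shape built from a Core Service list call.
 public class CmsItem
 {
   public string Title { get; set; }
   public DateTime CreatedDate { get; set; }
 }

 public static class DashboardData
 {
   // Group items by creation month; the result can be serialized to JSON
   // and handed to the Google Visualization API on the client side.
   public static IEnumerable<object> ItemsPerMonth(IEnumerable<CmsItem> items)
   {
     return items
       .GroupBy(i => new { i.CreatedDate.Year, i.CreatedDate.Month })
       .OrderBy(g => g.Key.Year).ThenBy(g => g.Key.Month)
       .Select(g => new { g.Key.Year, g.Key.Month, Count = g.Count() });
   }
 }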

Let's see the enhancements in the existing report plugin.

  1. A Download button to export the search data in .csv format.
  2. An <a> link to open any item in the CMS to verify or cross-check.
  3. You can open any item in the same or a new window.
  4. Improved UI.
  5. jQuery searching to filter data.
Anchor tag with UI improvement

Download and data searching.
Happy coding and keep sharing !!!!

Monday, 1 May 2017

Alchemy plugin to download report from SDL WEB8


We have all come across a situation where we need to deploy/port CMS items from one environment to another. For that we have Content Porter, but we still need to manually identify the items to migrate (the list of items created or modified after a certain date). We used a Bundle Schema to club all the items together in one location, but then again, what if we missed an item? Content porting might fail due to some dependency or other.

Here is an Alchemy plugin which will allow you to download a report from the CMS based on date. I have used the Core Service and jQuery to build this plugin, plus an .ASPX page for the popup.

This plugin will also help you in migrations where you need to identify items across the CMS; a sketch of the underlying date-based query follows.
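
The plugin's exact code isn't reproduced here, but assuming a standard Core Service client, the date-based report query could look roughly like this (Client is an already-created Core Service client; the cut-off date is illustrative):

 // Query the CM database for items modified after a given date.
 var query = new SearchQueryData
 {
   ModifiedAfter = new DateTime(2017, 4, 1), // report cut-off date
   ItemTypes = new[] { ItemType.Component, ItemType.Page }
 };
 IdentifiableObjectData[] results = Client.GetSearchResults(query);
 foreach (var item in results)
 {
   Console.WriteLine("{0}\t{1}", item.Id, item.Title);
 }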


Steps 
1.    Download the Developer Pack from the Visual Studio gallery link.
2.    The Alchemy version that I have used can be downloaded from the Alchemy4Tridion link.
3.    Create a project using the "Starter Plugin Project" template and change the project name.
4.    Build the project and navigate to the generated .a4t file.
5.    Drag and drop the file into the Alchemy window.

Alchemy Plugin Installed.
Let's go to the CME.
Plugin is installed and ready for use
Let's run the plugin and get some records from the CM database.
Custom Popup to Download data

Using jQuery table searching you can filter the records as well; it looks for the text in all the columns. With the help of jQuery searching you can also filter data based on date.
Search



You can download the sample code from here.

Happy coding and keep sharing !!!!

Monday, 13 March 2017

Microservice over SDL WEB 8 Core Service


Self Hosted REST Microservice using OWIN and ASP.NET Web API 2

Open Web Interface for .NET (OWIN) defines an abstraction between .NET web servers and web applications. OWIN decouples the web application from the server, which makes OWIN ideal for self-hosting a web application in your own process, outside of IIS.

ASP.NET Web API - CORS Support in ASP.NET Web API 2. Cross-origin resource sharing (CORS) is a World Wide Web Consortium (W3C) specification (commonly considered part of HTML5) that lets JavaScript overcome the same-origin policy security restriction imposed by browsers.

For more details on how to set up and create a microservice, read about it here.

This service is used to get data from the SDL WEB 8 CM database.
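
The hosting code isn't shown in the post; a minimal self-host sketch along the lines described above, assuming the Microsoft.AspNet.WebApi.OwinSelfHost and Microsoft.AspNet.WebApi.Cors packages (the port matches the URLs listed below):

 using System;
 using System.Web.Http;
 using System.Web.Http.Cors;
 using Microsoft.Owin.Hosting;
 using Owin;

 public class Startup
 {
   public void Configuration(IAppBuilder app)
   {
     var config = new HttpConfiguration();
     // Allow cross-origin calls to the service (CORS).
     config.EnableCors(new EnableCorsAttribute("*", "*", "*"));
     config.MapHttpAttributeRoutes();
     app.UseWebApi(config);
   }
 }

 public class Program
 {
   public static void Main()
   {
     // Self-host the Web API outside IIS, as OWIN allows.
     using (WebApp.Start<Startup>("http://127.0.0.1:8080"))
     {
       Console.WriteLine("Service running. Press Enter to stop.");
       Console.ReadLine();
     }
   }
 }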

The following methods are available; a sample controller sketch follows the list.
  1. GetComponentByTcmUri
    • http://127.0.0.1:8080/Coreservice/getComponentByTcm/{tcmuri}
  2. GetSchemaByTcmUri
    • http://127.0.0.1:8080/Coreservice/getSchemaByTcm/{tcmuri}
  3. GetAllCategoriesWithInPubByTcmUri
    • http://127.0.0.1:8080/Coreservice/GetAllCategoriesWithInPubByTcmUri/{tcmuri}
  4. GetKeywordByCategoryID
    • http://127.0.0.1:8080/Coreservice/GetKeywordByCategory/{tcmuri}
  5. GetPageTempletByPubID
    • http://127.0.0.1:8080/Coreservice/GetPageTempletByPubID/{tcmuri}
  6. GetComponentTemplateByPubID
    • http://127.0.0.1:8080/Coreservice/GetComponentTemplateByPubID/{tcmuri}
  7. GetTemplateBuildingBlockByPubID
    • http://127.0.0.1:8080/Coreservice/GetTemplateBuildingBlockByPubID/{tcmuri}
  8. GetPageByPubID
    • http://127.0.0.1:8080/Coreservice/GetPageByPubID/{tcmuri}
  9. GetStructureGroupByPubID
    • http://127.0.0.1:8080/Coreservice/GetStructureGroupByPubID/{tcmuri}
  10. GetMultimediaComponentByPubID
    • http://127.0.0.1:8080/Coreservice/GetMultimediaComponentByPubID/{tcmuri}
  11. GetPublicationList
    • http://127.0.0.1:8080/Coreservice/GetPublicationList
  12. GetUserList 
    • http://127.0.0.1:8080/Coreservice/GetUserList
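
As an illustration, the first endpoint could be implemented roughly like this on top of a Core Service client (Client is assumed to be an already-initialized ISessionAwareCoreService; the real service code may differ):

 [RoutePrefix("Coreservice")]
 public class CoreServiceController : ApiController
 {
   [HttpGet, Route("getComponentByTcm/{tcmuri}")]
   public object GetComponentByTcm(string tcmuri)
   {
     // Read the component via the Core Service and return it as JSON.
     var component = (ComponentData)Client.Read(tcmuri, new ReadOptions());
     return new { component.Id, component.Title, component.Content };
   }
 }
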
You can download the code from here.

Happy Coding and Keep Sharing !!!

Saturday, 4 February 2017

Connect SDL WEB 8 with Elasticsearch

What is Elasticsearch?

Elasticsearch is an Apache Lucene-based search server. It was developed by Shay Banon and published in 2010.

Elasticsearch is a real-time, distributed, open-source full-text search and analytics engine. It is accessible through a RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data. It is built in the Java programming language, which enables Elasticsearch to run on different platforms. It enables users to explore very large amounts of data at very high speed.


Features of Elasticsearch are as follows:
  1. Elasticsearch is scalable up to petabytes of structured and unstructured data.
  2. Elasticsearch is open source and available under the Apache license version 2.0.
  3. Elasticsearch uses denormalization to improve the search performance.
  4. Elasticsearch can be used as a replacement of document stores like MongoDB.
  5. Elasticsearch is one of the popular enterprise search engines, which is currently being used by many big organizations like Wikipedia, The Guardian, StackOverflow, GitHub etc.
Key concepts of Elasticsearch:

Node:- It refers to a single running instance of Elasticsearch. A single physical or virtual server can accommodate multiple nodes, depending upon the capabilities of its physical resources like RAM, storage, and processing power.

Cluster:- It is a collection of one or more nodes. Cluster provides collective indexing and search capabilities across all the nodes for entire data.

Index:- It is a collection of different types of documents and document properties. An index also uses the concept of shards to improve performance. For example, a set of documents may contain the data of a social networking application.

Type/Mapping:- It is a collection of documents sharing a set of common fields present in the same index. For example, an index contains the data of a social networking application; there can then be a specific type for user profile data, another type for messaging data, and another for comments data.

Replicas:- Elasticsearch allows a user to create replicas of their indexes and shards. Replication not only helps in increasing the availability of data in case of failure, but also improves the performance of searching by carrying out a parallel search operation in these replicas.

Shard:- Indexes are horizontally subdivided into shards. This means each shard contains all the properties of a document but fewer JSON objects than the index. The horizontal separation makes a shard an independent unit that can be stored on any node. The primary shard is the original horizontal part of an index, and these primary shards are then replicated into replica shards.


Advantages
Elasticsearch is developed in Java, which makes it compatible with almost every platform.
Elasticsearch is real time; in other words, a document becomes searchable in the engine one second after it is added.
Elasticsearch is distributed, which makes it easy to scale and integrate into any big organization.
Creating full backups is easy by using the concept of a gateway, which is present in Elasticsearch.


Now, let's connect Elasticsearch and SDL WEB 8 and create components in the CMS.

  1. Let's create a console-based application using .NET 4.5.
  2. Run the following command in the Package Manager Console: Install-Package NEST. This installs the Elasticsearch .NET high-level client.
  3. Download Elasticsearch from the download link and unzip it.
  4. Run ../bin/elasticsearch
  5. Browse to http://localhost:9200/
  6. The default Elasticsearch instance is now up and running.
  7. With the help of the Elasticsearch .NET high-level client, we can do all the CRUD operations in Elasticsearch.
  8. Here, I have created a default index fromelasticstoweb8 (it should be all lower case, otherwise you will get an invalid request error) and the type esnews.
  9. So the data URL would be http://localhost:9200/fromelasticstoweb8/esnews/AVoFKrUbqeqnhk6aE3Ca, where the last parameter is the ID.
  10. To get all the records: http://localhost:9200/fromelasticstoweb8/_search?q=*:*
  11. A code snippet to write data into the Elasticsearch index is sketched after this list.
  12. Then we read all the data back from fromelasticstoweb8.
  13. Using the Core Service we can create components in WEB 8:
    1. Create a Core Service client using ISessionAwareCoreService.
    2. Create a model and serialize that model into XML.
    3. Using the Core Service client.Create method we can create the components in WEB 8.
    4. I have created all the items available in the ES index fromelasticstoweb8 as components in SDL WEB 8.
  14. To know more about how to use the Core Service to interact with SDL WEB 8 and create CMS items such as components, keywords, etc., you can refer to my previous blogs.
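
Here is a minimal NEST sketch of steps 11 and 12: writing a document into fromelasticstoweb8 and reading everything back (the EsNews fields are assumptions):

 using System;
 using Nest;

 // Hypothetical document type for the "esnews" items.
 public class EsNews
 {
   public string Id { get; set; }
   public string Title { get; set; }
   public string Body { get; set; }
 }

 public class EsDemo
 {
   public static void Main()
   {
     var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
       .DefaultIndex("fromelasticstoweb8"); // index names must be lower case
     var client = new ElasticClient(settings);

     // Write a document into the index.
     client.Index(new EsNews { Id = "1", Title = "Hello", Body = "From ES to WEB 8" });

     // Read everything back (equivalent to _search?q=*:*).
     var response = client.Search<EsNews>(s => s.Query(q => q.MatchAll()));
     foreach (var doc in response.Documents)
       Console.WriteLine(doc.Title);
   }
 }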


Happy Coding and Keep Sharing !!!

Sunday, 4 December 2016

Salesforce Integration with SDL WEB 8


Salesforce-CRM

A CRM system is a business tool that allows you to manage all your customers, partners, and prospects information in one place. The Sales Cloud (Salesforce.com's CRM system) is a cloud-based CRM system.
For example, it helps:
1. sales teams close deals faster
2. marketing manage campaigns and track lead generation
3. service call centres reduce the time to resolve customer complaints

SDL WEB 8-CMS

SDL Web covers four main core functional areas:
1. Web Content Management
2. Experience Optimization (including personalization)
3. Digital Media
4. Localization

Steps to get the SF API and how to create a .NET-based client

1. Go to ap2.salesforce.com and create your developer account.
2. Click on Setup and, from Quick Find, search for API.
3. Select the API. Salesforce's WSDL allows you to easily integrate salesforce.com with your applications, and to build new applications that work with salesforce.com. To get started, download a WSDL file to a place accessible to your development environment.
4. I used the Enterprise WSDL:
   a. A strongly typed WSDL for customers who want to build an integration with their salesforce.com organization only.
SF API

Create dummy data in SF

Dummy data is created in SF
I have created some dummy data in the Lead object, which we will access using the Enterprise WSDL API.


Read Data using .NET Client and SF API

Add SF Enterprise API as service reference in your project 
    1. Log in to the Force.com API.
    2. Change the binding to the new endpoint.
    3. Create a new session header object and set the session id to the one returned by the login.
    4. Queries are executed against the Force.com API using SOQL (Salesforce.com Object Query Language), as sketched below.
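
A rough sketch of those four steps using the proxy classes generated from the Enterprise WSDL (SforceService, SessionHeader, QueryResult; generated names can vary by WSDL version, and the credentials are placeholders):

 var binding = new SforceService();
 LoginResult lr = binding.login("user@example.com", "password+securitytoken");

 // Point the binding at the returned endpoint and attach the session id.
 binding.Url = lr.serverUrl;
 binding.SessionHeaderValue = new SessionHeader { sessionId = lr.sessionId };

 // Execute a SOQL query against the Lead object.
 QueryResult qr = binding.query("SELECT Id, FirstName, LastName, Company FROM Lead");
 foreach (Lead lead in qr.records)
 {
   Console.WriteLine("{0} {1} ({2})", lead.FirstName, lead.LastName, lead.Company);
 }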


In the screenshot, we can see that the data created in SF is accessible using the .NET client.

Write Data in SF using the .NET Client

  1. Instantiate the object being created: before we can execute a create() against the API, we must establish the type of object being created. In this case, we will create a new Lead.
  2. Establish field values and object properties.
  3. Execute the create call and capture the save results (see the sketch after this list).
  4. Working with the results: the create call will result in either success or failure. Successful create calls return the ID value of the record created, and the API is built to allow us to handle both scenarios within our application.
  5. Data is successfully created.
  6. Let's go and check in SF online. Here you can see the details which we supplied via the .NET client are saved in SF.
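
Continuing the sketch above, the create() call could look like this (field values are placeholders):

 // Instantiate the object being created and set its field values.
 var lead = new Lead
 {
   FirstName = "Jane",
   LastName = "Doe",
   Company = "Content Bloom"
 };

 // Execute the create call and capture the save results.
 SaveResult[] results = binding.create(new sObject[] { lead });
 foreach (var result in results)
 {
   Console.WriteLine(result.success
     ? "Created Lead with Id " + result.id
     : "Create failed: " + result.errors[0].message);
 }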

Let's integrate SF with SDL WEB 8

    1. We already have the SF API; before we start the integration of SF with SDL WEB 8, we first need to create a WEB 8 Core Service client to get the objects.
    2. The Core Service is a WCF-based web service.
    3. I have already created a Core Service client using ISessionAwareCoreService.
    4. After this we need to read the component data using the Core Service client.
    5. Deserialize the data into the object and send it to the SF API to create the records in SF.
    6. Now that we have the components list from WEB 8, run the SF API create method to create new records in SF.
    7. I have created a sample component in WEB 8 which is pushed into SF.
      • The image below shows the CMS component which is pushed to SF.
      • In the Salesforce UI we can see the data which was pushed from WEB 8 is successfully created in SF.
      • So, we are successfully able to push content from WEB 8 to SF. Let's work on the other way around, so that we can say SF and WEB 8 are in sync.
    8. To create content in WEB 8:
      • We need a Core Service client; I have re-used the same Core Service utility.
      • The folder location where you want to create content.
      • Read data from SF; you can check the "Read Data" section above for more information.
      • Transform the object into XML.
      • We need to call the Core Service create method.
Run this service in the background as a Windows service or via Task Scheduler to keep Salesforce and Tridion in sync. Also, if you want to create custom Salesforce objects and add custom fields, you can do that too; in this demo I have used the default LEAD object provided by Salesforce.

Prerequisites to create custom objects in SF:

A Salesforce account in a sandbox Professional, Enterprise, Performance, or Unlimited Edition org, or an account in a Developer org.


Happy Coding and Keep Sharing !!

Sunday, 27 November 2016

CMS Talk to CMS

The Value of website content


Content is what sets your website apart from the masses. The success of your website is determined primarily by its content.
The key to a successful website is having managed, relevant, clear, and keyword-rich content.

Why is content important?

  1. It makes your brand an authority.
    • Quality content delivered on a regular basis makes your brand an authority.
  2. You get to know your customers.
    • Content, when delivered through websites and social media, allows for feedback. It means you can gauge your customers.
  3. Generating content improves SEO.
    • Managed, relevant, and keyword-rich content is very important for SEO.
  4. Content adds value.

Today we are going to see how we can manage and share our content between multiple CMSs. The whole idea here is how we can share content across multiple CMSs.
Architecture Diagram
High-Level Architecture diagram of CmsTalkToCms
Let's discuss the diagram
  1. Point A
    1. Here I have created a C# console-based application which takes input from the user and forwards that request to the business connector.
    2. Inputs are things like:
      • Read data from CMS A and process it in a manner which is accepted by CMS B.
      • Mention the type of the data (in the case of Tridion a schema; for Umbraco a parent document type).
      • It returns the count and details of the transaction.
      • User authentication is done at this level.
  2. Point B
    1. The business connector is the core of this architecture.
      1. It will consume all the input provided by the client.
      2. Based on that, it will first call the source CMS, which could be point (C) or (D).
        • Here we extract all the items which need to be shared.
        • Transform the data in a manner which is accepted by the destination CMS.
        • Once data transformation is done, we call the destination CMS service, which could be point (C) or (D).
        • The service will validate the data.
        • Finally, the data is pushed to the destination CMS.
      3. Destination service
        1. It will also validate the data before performing CRUD operations.
        2. The service will check if any item in the list sent by the source already exists.
        3. If yes, it will run an update; otherwise it creates a new item.
    2. After all the data is either updated, created, or rejected, the response is sent back to the business connector, and the business connector shares that response with the client at point A.
  3. Point C
    1. I have created a custom .ASMX web service.
    2. It uses the Umbraco.Core DLL provided by Umbraco.
    3. Using this DLL we can perform all the CRUD operations.
      service
      Point (C) Content Service used for CRUD operations
  4. Point D
    1. The Tridion Core Service is used for CRUD operations.
    2. ISessionAwareCoreService is used to create the Core Service client, as sketched below.
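
A minimal sketch of the point D client creation (the endpoint address assumes a standard installation exposing the 2013 netTcp Core Service endpoint; binding details depend on the environment):

 using System.ServiceModel;
 using Tridion.ContentManager.CoreService.Client;

 public static class CoreServiceFactory
 {
   public static ISessionAwareCoreService Create()
   {
     var binding = new NetTcpBinding { MaxReceivedMessageSize = int.MaxValue };
     var endpoint = new EndpointAddress(
       "net.tcp://localhost:2660/CoreService/2013/netTcp");
     // The channel implements ISessionAwareCoreService for CRUD calls.
     return new ChannelFactory<ISessionAwareCoreService>(binding, endpoint)
       .CreateChannel();
   }
 }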

Monday, 30 May 2016

Exchange Integration 4 Tridion-Ei4T

Introduction 

Exchange Integration 4 Tridion (Ei4T) is intended to provide Tridion integration with Microsoft Exchange Server, reading emails and creating components in SDL Tridion.


Ei4T-High Level Architecture
SETUP

Below are the various setup steps and prerequisites:
  1. A dedicated email ID is required; the service will keep checking it for emails.
  2. Task Scheduler is used to schedule the service.
  3. A custom validator will get the EWS URL.
  4. The Task Scheduler interval and the Exchange service SearchFilter on ItemSchema.DateTimeReceived should be in sync (see the sketch after this list).
  5. Make sure you select the right version of Exchange; I am using ExchangeVersion.Exchange2013.
  6. Publish the component only if it is marked as item.Importance == "High"; again, this is configurable.
  7. Create the Tridion Core Service client using ISessionAwareCoreService.
  8. Create the model as Serializable.
  9. Copy the configuration folder/logging.config file and update the path in app.config.
  10. Update App.Config:
    1. Core Service URL
    2. Exchange dedicated email ID/password
    3. Schema ID
    4. Folder ID
    5. Publication Target ID
    6. CM user id/password to run the Core Service
  11. Email format
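
The polling logic isn't listed in the post; a minimal sketch of the mailbox check described above using the EWS Managed API (the mailbox address, password, and the 15-minute window are assumptions):

 using System;
 using Microsoft.Exchange.WebServices.Data;

 var service = new ExchangeService(ExchangeVersion.Exchange2013)
 {
   Credentials = new WebCredentials("ei4t@example.com", "password")
 };
 service.AutodiscoverUrl("ei4t@example.com");

 // Only fetch mails received since the last scheduled run (kept in
 // sync with the Task Scheduler interval, per step 4).
 var filter = new SearchFilter.IsGreaterThan(
   ItemSchema.DateTimeReceived, DateTime.Now.AddMinutes(-15));
 FindItemsResults<Item> mails =
   service.FindItems(WellKnownFolderName.Inbox, filter, new ItemView(50));

 foreach (Item mail in mails)
 {
   // Create a component only for high-importance mails (configurable).
   if (mail.Importance == Importance.High)
   {
     // ... map the mail to the component model and create it via the Core Service.
   }
 }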


You can download the code from here.

Happy Coding and Keep Sharing !!!