Saturday, 15 June 2019

Extending Content Delivery Storage in SDL WEB 8.5 - Part 1

In this blog, we are going to extend the Storage layer in SDL WEB 8.5. First, we need to set up the project and make sure that the custom code interacts with the default storage process.

Pre-Requisites
  1. Eclipse, or your favorite Java IDE. You can also follow my previous blog on how to set up the project in Eclipse and generate a custom JAR [here].
  2. JAVA 8
  3. Default SDL JARs
We can customize the way existing items are stored by the Storage Layer using the DAO ("Data Access Object") implementation pattern. To extend the Storage Layer for Dynamic Component Presentations, we need to extend JPAComponentPresentationDAO and implement ComponentPresentationDAO.

Step 1. Create a Java class file and add the following imports.
import com.tridion.broker.StorageException;
import com.tridion.storage.ComponentPresentation;
import com.tridion.storage.dao.ComponentPresentationDAO;
import com.tridion.storage.persistence.JPAComponentPresentationDAO;
import com.tridion.storage.util.ComponentPresentationTypeEnum;
import java.io.UnsupportedEncodingException;
import java.util.Collection;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Step 2. Extend the class JPAComponentPresentationDAO and implement ComponentPresentationDAO. We also need to add the @Component and @Scope("prototype") annotations between the import statements and the class definition, as shown below.

Custom Storage Extension
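The extension class is only shown in the screenshot above, so here is a minimal sketch of what it could look like. The package, class name and log message are illustrative, the constructors mirror the ones exposed by the default JPA DAO, and the overridden create method simply logs the Component ID before delegating to the default behaviour. Double-check the exact signatures against the SDL JARs you compile against.

 package com.tridion.storage.extension;

 import com.tridion.broker.StorageException;
 import com.tridion.storage.ComponentPresentation;
 import com.tridion.storage.dao.ComponentPresentationDAO;
 import com.tridion.storage.persistence.JPAComponentPresentationDAO;
 import com.tridion.storage.util.ComponentPresentationTypeEnum;
 import javax.persistence.EntityManager;
 import javax.persistence.EntityManagerFactory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.context.annotation.Scope;
 import org.springframework.stereotype.Component;

 @Component("JPAComponentPresentationDAOExtension")
 @Scope("prototype")
 public class JPAComponentPresentationDAOExtension extends JPAComponentPresentationDAO implements ComponentPresentationDAO {

     private static final Logger log = LoggerFactory.getLogger(JPAComponentPresentationDAOExtension.class);

     public JPAComponentPresentationDAOExtension(String storageId, EntityManagerFactory entityManagerFactory, String storageName) {
         super(storageId, entityManagerFactory, storageName);
     }

     public JPAComponentPresentationDAOExtension(String storageId, EntityManagerFactory entityManagerFactory, EntityManager entityManager, String storageName) {
         super(storageId, entityManagerFactory, entityManager, storageName);
     }

     @Override
     public void create(ComponentPresentation itemToCreate, ComponentPresentationTypeEnum type) throws StorageException {
         // Custom logic goes here; for now just log the Component ID and fall back to the default behaviour.
         log.debug("Custom storage extension - create called for Component ID: " + itemToCreate.getComponentId());
         super.create(itemToCreate, type);
     }
 }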

Step 3. Create a storage DAO bundle config file.
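The bundle file itself is only visible in the screenshot, but as a rough sketch (using the illustrative package and class name from Step 2), it maps the ComponentPresentation type to our custom DAO:

 <?xml version="1.0" encoding="UTF-8"?>
 <StorageDAOBundles>
     <StorageDAOBundle type="persistence">
         <!-- typeMapping tells the Storage Layer which item type this custom DAO handles -->
         <StorageDAO typeMapping="ComponentPresentation" class="com.tridion.storage.extension.JPAComponentPresentationDAOExtension"/>
     </StorageDAOBundle>
 </StorageDAOBundles>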



Step 4. Copy this config file into the Deployer config folder, then open the cd_storage config file in your favorite editor and add the following element.

Edit storage config file.
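The element we add is a StorageBindings entry pointing at the bundle file created in Step 3. A sketch, assuming the bundle file is named CustomStorageDAOBundles.xml and the element is placed inside the Storages section of cd_storage_conf.xml:

 <StorageBindings>
     <Bundle src="CustomStorageDAOBundles.xml"/>
 </StorageBindings>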

Step 5. Copy the custom JAR file into the deployer microservice lib folder and restart the deployer.

Custom JAR.


Step 6. We are done with the code and configuration; now we need to publish a DCP and check the logs.

Logs

In the log file, we can see the custom log entry, and it returns the Component ID, which means the custom code is interacting with the default storage process.


In the next blog, we will look at the other DAO methods, how the code behaves on publishing, unpublishing and re-publishing, and how to read the DCP content. Until then,

Happy Coding and Keep Sharing !!!





Monday, 3 June 2019

Custom Deployer Extension in SDL WEB 8.5 - Purge Cache Part - 2

In the previous post, we set up the custom deployer project and discussed when and why a deployer extension is useful. Today, we are going to continue with that same project and implement a real-life use-case.

Background:- In our current project we are using SDL WEB 8.5 and .NET DXA 1.8. To improve the performance of the web application we implemented custom caching with a 15-minute expiry, but the business team wants to see content in real time, with no delay whenever they publish new content.

Based on this requirement we decided to write a custom deployer extension: whenever an author publishes a page, the extension triggers and clears the cache of that particular page, based on the URL extracted from PageMetaData.

In the code snippet below we read the page's relative URL from PageMetaData, build the absolute URL with simple string manipulation, and call it with a query string parameter that triggers the cache purge on the web application. Check the HTTP response code in the logs (it should be 200) and refresh the web page to verify the latest content.

 // This is called once for each Transport Package that is deployed.
 // Note: this method also needs java.net.URL, java.net.HttpURLConnection, java.util.List
 // and the Tridion transport package metadata classes (PageMetaData, Page) to be imported.
 @Override
 public void process(TransportPackage data) throws ProcessingException
 {
     log.debug("This is custom logs");
     String USER_AGENT = "Mozilla/5.0";
     log.debug("PublicationId : " + String.valueOf(data.getProcessorInstructions().getPublicationId().getItemId()));
     try
     {
         log.debug("Custom Code :- Entered in the try block");
         PageMetaData pageFile = (PageMetaData) data.getMetaDataFile("Pages");
         log.debug("Custom code :- Reading PageMetaData");
         if (pageFile != null)
         {
             List<Page> pages = pageFile.getPages();
             for (Page page : pages)
             {
                 log.debug("Custom code :- For loop to get the page url");
                 // Build the absolute URL of the published page and append the purge trigger.
                 String url = "http://localhost:92" + page.getURLPath() + "?ClearCache=true";
                 log.debug(url);
                 URL obj = new URL(url);
                 HttpURLConnection con = (HttpURLConnection) obj.openConnection();
                 con.setConnectTimeout(5000);
                 log.debug("Custom code :- timeout");
                 con.setRequestMethod("GET");
                 con.setRequestProperty("User-Agent", USER_AGENT);
                 int responseCode = con.getResponseCode();
                 log.debug("Response Code : " + responseCode);
                 if (responseCode == HttpURLConnection.HTTP_OK)
                 {
                     log.debug("Custom Deployer, Cache has been removed and the responseCode is : " + responseCode + " - for URL " + url);
                 }
                 log.debug("Page ID :" + String.valueOf(page.getId().getItemId()));
             }
         }
         log.debug("Purging is done !! Please check the web page!!");
     }
     catch (Exception e)
     {
         log.error("Could not get path for publication.", e);
     }
 }

Logs generated by the deployer.

Deployer Logs


With the help of the deployer extension, we were able to deliver the real-time content experience while still maintaining site performance through caching.


Happy Coding and Keep Sharing !!!



Sunday, 2 June 2019

Custom Deployer Extension in SDL WEB 8.5 Part - 1

Today, we are going to see how to extend the default SDL Tridion deployment functionality using a Custom Deployer Extension. Before we start building and setting up the project, let's first understand what a Deployer Extension is for. When a user publishes content, the Content Deployer unpacks the incoming Transport Package and processes its transport instructions. We can extend this default behavior by creating a custom Module and adding it to a Step, or by extending an existing Module.

Deployer Extension:- 
  • A Deployer Extension is used to inject additional functionality into the default SDL Tridion deployment process. 
  • Use it if you want to implement your own custom logic/data ("the Event System might be useful here as well 🤔; we need to be absolutely sure about it").
  • Use it if you need specific data/info which is only available at the time of deployment.   
  • It is based on Java.
  • You can extend the default behavior of the Content Deployer by creating a custom Module and adding it to a Step, or by extending an existing Module.
In order to set up the JAVA project for the Deployer Extension, I have used Eclipse IDE, default JARs provided by SDL and JAVA 8. 

First, we need to configure the JAVA project using Eclipse.

1. Click File --> New --> Other..., then select Java Project and click Next.

Create New JAVA Project

2. Enter the Project Name

Enter the Project Name.

3. Create a new folder called "lib" which will contain all the JARs.

Create new Folder lib

4. Copy all the JARs into lib; you can find the default JAR files in the deployer microservice lib folder. Once copied, add them to the Build Path.

Add JARs Build Path


5. Next, create a Java package and then create a class file.

Create a new Java Package

Create a New Class
6. Import all the packages that are required in order to extend the deployer.

Tridion Default Namespaces.

7. Next, we must implement the abstract method Module.process(TransportPackage).

 package com.tridion.deployer.extension;  
 import org.slf4j.Logger;  
 import org.slf4j.LoggerFactory;  
 import com.tridion.configuration.Configuration;  
 import com.tridion.configuration.ConfigurationException;  
 import com.tridion.deployer.Module;  
 import com.tridion.deployer.ProcessingException;  
 import com.tridion.deployer.Processor;  
 import com.tridion.transport.transportpackage.TransportPackage;  
 public class PurgeCache extends Module {  
      protected static Logger log = LoggerFactory.getLogger(PurgeCache.class);  
        
      public PurgeCache(Configuration config, Processor processor) throws ConfigurationException  
      {  
           super(config,processor);  
      }  
      // This is called once for each Transport Package that is deployed.  
      @Override  
      public void process(TransportPackage data) throws ProcessingException   
      {  
            log.debug("This is custom logs");  
            int publicationId =data.getProcessorInstructions().getPublicationId().getItemId();  
            log.debug("PublicationId : " + String.valueOf(publicationId));  
      }  
 }  


That's it. Export this into a JAR; next is the configuration in the deployer_config.xml of the Deployer Microservice. "Don't forget to restart the service".


Click Export

Click Jar file and then Next
Custom Deployer Extension JAR file is ready


Finally, we need to deploy the custom JAR in the deployer microservice lib folder, open the deployer_config.xml file in your favorite editor, add the below configuration and save the file. Don't forget to take a backup before you start editing.


 <Step Factory="com.sdl.delivery.deployer.steps.TridionExecutableStepFactory" Id="PurgeCacheSteps">  
   <Module Type="PurgeCache" Class="com.tridion.deployer.extension.PurgeCache">  
   </Module>  
 </Step>  

 We also need to edit the logback.xml and add a logger for the extension package:

 <logger name="com.tridion.deployer.extension">  
     <appender-ref ref="rollingDeployerLog"/>  
 </logger>  

Let's do some publishing and see if everything is working fine. Ideally, we should see the log entries. If you see the custom logs in the log file, it means your custom code is interacting with the default deployment process and you've built the extension successfully.

Log File


In the next blog, we will use the deployer extension with a very interesting use-case, until then.

Happy Coding and Keep Sharing !!!



Saturday, 13 April 2019

SDL Tridion sites 9 PCA with REACT - Part 2

In the previous post, we saw how to set up the project using Apollo Client and ReactJS and interact with the Tridion Sites 9 PCA. We also built a new schema and a Dynamic CT, and published 3 components using that Dynamic CT in order to retrieve them via the PCA (GraphQL).

In the last exercise, we rendered the list of dynamic News components on the landing page. Today we are going to continue with the same code and build the details page, and later on we will look at how to build and deploy the ReactJS app.

So, let's start with the News details page. We already created the landing page, and while doing that we captured enough information to fetch the DCP of a particular component. If you noticed in the code or screenshot from my last blog, the href I've created for each News and Article item contains the combination of ItemId and CT, and we need that same combination to get the Component Presentation.

DCPs with ItemID-CT available on Landing Page

Here is the JSX and GraphQL code to render the News details page.

News Detail Page
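The details-page code is only visible in the screenshot above, but the query behind it follows the same pattern as the landing-page query from the previous post. Below is a rough sketch that assumes the singular componentPresentation field of the content GraphQL endpoint, with illustrative namespace, publication, component and template ids taken from the query string; verify the exact field and argument names in GraphiQL.

 # ids below are illustrative; they come from the ItemId-CT pair in the landing-page href
 query {
   componentPresentation(namespaceId: 1, publicationId: 11, componentId: 788, templateId: 790) {
     itemType
     rawContent {
       data
     }
   }
 }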

Here is the News and Article details page; again, not a very attractive UI 😊.

Details Page

Let's build the app and see what the production-ready build looks like. The commands below build the deployment package; after npm run build we need to run serve -s from the build folder, and this will spin up the website.

 $ npm run build
 $ serve -s

Build Package

Final Output 

Final Build

The updated code is available on GitHub. In the next blog, we will look at more features; until then,

Happy Coding and Keep Sharing !!!


SDL Tridion sites 9 PCA with REACT

Tridion Sites 9 PCA is getting more and more popular every day, and in this blog we will create an app that uses Apollo Client to communicate with the SDL Tridion GraphQL API. We will integrate Apollo Client with ReactJS, but you can use it with several other client platforms as well.

Setting Up The Project


To get started we first need to set up a new React project. The easiest way to do so is to use create-react-app. This script creates a new React project.
 npm install -g create-react-app  
 create-react-app react-graphql  
 cd react-graphql  
 npm start  

With this, you will have the default app up and running on port 3000. This initiates a new basic React project in the newly created project folder react-graphql. The npm start command starts the live-reloading development web server, and the output of the default React project can be accessed in the browser:

Default ReactJS

In order to work with GraphQL, the next step is to install the needed dependencies.

  • apollo-boost: Package containing everything you need to set up Apollo Client
  • react-apollo: View layer integration for React
  • graphql-tag: Necessary for parsing your GraphQL queries
  • graphql: Also parses your GraphQL queries
The installation of these dependencies can be done by using the following NPM command

 $ npm install apollo-boost react-apollo graphql-tag graphql  

Project Structure 
As you can see, I've imported react-apollo in order to work with GraphQL. In today's demo, we will see how to get Dynamic Component Presentations and render them in a React-based web app. To start, I've created a new schema with Title and Description as content fields, and used the default DXA standard Metadata Schema as well.

Based on this new schema I've created 3 new components and published them using Dynamic CT. They are now available via PCA.
3 DCPs for Demo Purpose

DCPs via GraphiQL

Next, we call the PCA using Apollo Client and ReactJS and render the DCPs.

Here is the query to get all the DCPs to render them on the News and Article Landing page.
 const repoQuery = gql`  
   query  
   {  
     componentPresentations(namespaceId: 1, publicationId: 11, filter: {schema: {id: 789}}) {  
       edges {  
         cursor  
         node {  
           itemType  
           rawContent {  
             data  
           }  
         }  
       }  
     }  
   }  
 `  

Based on the query criteria we get 3 DCPs, which is correct.

PCA output

Next, Display the Data

Let's write some JSX to display the fetched data. I'm fetching/rendering the data from both Metadata and Content fields, just to test the feasibility/flexibility and syntax.

 class News extends Component {  
   render() {  
     console.log(this.props)  
     return (  
       <div>  
         <h2>News and Articles Landing Page</h2>  
         {  
           this.props.data.loading === true ? "Loading" :  
             this.props.data.componentPresentations.edges.map(data =>  
               <ul key={data.node.rawContent.data.Id}>  
                 <li style={{fontWeight: 'bold'}}>  
                   <a href={"newsdetails?ids=" + data.node.rawContent.data.Id +  
                     "&name=" + data.node.rawContent.data.Content.title.replace(/[^a-zA-Z0-9]/g, '-')}>  
                     {data.node.rawContent.data.Content.title}  
                   </a>  
                 </li>  
                 <p>  
                   {data.node.rawContent.data.Metadata.metadata.description}  
                 </p>  
               </ul>  
             )  
         }  
       </div>  
     );  
   }  
 }  


And finally, our News and Article landing page with DCPs is ready; not a very attractive UI though 😊 

DCPs Rendering on ReactJS app
In the next blog, we will create the News details page and try to implement search and other features as well. 
You can download the sample application from GitHub; don't forget to update the PCA URL. 

Happy Coding and Keep Sharing !!!! 

Saturday, 6 April 2019

Hybrid Architecture SDL WEB 8.5 and SDL Tridion Sites 9

Well, in my previous post we saw how we can use GraphQL with SDL WEB 8.5 by simply installing the Tridion Sites 9 Content Service, which runs in isolation and reads data from the 8.5 Broker. But this has some limitations: there are differences between the 8.5 and Tridion 9 Broker databases. For example, the "Component Presentation" table has a new column, CONTENT_ID, in the Tridion 9 Broker. This field is not there in the 8.5 Broker, and this caused an issue while accessing Dynamic Component Presentations using GraphQL.

To sort out any such issues, we discussed a new hybrid approach where we have a Tridion 9 Broker and State Store database, with the Tridion 9 Discovery, Deployer and Content services running in parallel with the 8.5 CMS.


High-Level Architecture Diagram

  1. Install the Tridion 9 Broker and State Store databases (as per the SDL recommendation we need SQL Server 2016 SP2 or 2017).
  2. Update the storage configuration with the new Tridion 9 DB details in the Tridion 9 Discovery, Deployer and Content services (these services could be on the same box or a new one) and then install the services.
  3. After installing the services, run “java -jar discovery-registration.jar update” from the Discovery service config folder. This command registers the Discovery service capabilities.
  4. The above steps/command will set up the services for the content delivery environment.
  5. Next, set up publishing through Topology Manager:
    1. We need to keep the 8.5 and Tridion 9 Broker DBs in sync, so that the existing 8.5 setup keeps running and at the same time we can query the Tridion 9 Broker DB using GraphQL.
    2. We need targets that publish to both Brokers (e.g. WEB8_Staging, Tridion9_Staging, WEB8_LIVE and Tridion9_LIVE). This keeps the existing environment working and at the same time lets us use GraphQL.
      • WEB8_Staging will point to the WEB 8 Staging Discovery 
      • Tridion9_Staging will point to the Tridion 9 Staging Discovery
  6. There is another advantage here: when you migrate from the 8.5 CM to Tridion 9, your Broker DB is already up to date and the content delivery microservices are already working.
The above setup lets you continue with 8.5 while also giving you the ability to use GraphQL.


Happy Coding and Keep Sharing !!!

Friday, 5 April 2019

OutScale Publishing in SDL Tridion

This was my topic at this year's SDL Tridion DX DEV India summit. We recently upgraded one of our clients' systems from SDL Tridion 2013 to SDL WEB 8.5; with more than 700 websites and 200K items to publish, we needed to build a system that supports this scale with high publishing throughput.

So, to maintain scalability, we decided to use a Deployer-Endpoint and Deployer-Workers rather than going with the Deployer-Combined, and we also used Redis for binary storage and ActiveMQ for the message queue/notifications. Below is the architecture diagram.



Let's understand the difference between the split Deployer and the Deployer Combined:
  • Deployer Endpoint:- Receives the package sent by the Transport Service and stores it in the binary storage. 
  • Deployer Worker:- Processes the package and saves the content to the Broker DB or the file system, depending on the configuration.  

Next, let's see how this architecture works step by step.

  • Content is passed to the Deployer after a user Publishes an item in the CME.
  • The Deployer Endpoint passes the Transport Package (.zip file) to the defined Binary Storage (File System or Redis Database). We are using Redis right now.
  • The Deployer Endpoint also passes the item to the Queue in ActiveMQ (JMS).
  • ActiveMQ triggers an event that informs the Worker Deployers that a new package has been received.
  • The first available Worker Deployer picks up the job from the Queue and contacts the Binary Storage to get the respective Transport Package.
  • After processing the Transport Package, the Worker Deployer passes the content to the Broker Database.
  • The Deployer then gets the status of the job from the Broker Database, which is updated by the Worker Deployer responsible for that job.

Below is a screenshot of Redis holding the "Transport Package", and in ActiveMQ we can see the "Queue/notifications" being enqueued and dequeued.



 Happy Coding and Keep Sharing !!!