
Sunday 16 June 2019

Extending Content Delivery Storage in SDL WEB 8.5 - Part 3

In the last blog, we saw what happens when we publish a new component: only the create method is called. When we re-publish an item, however, the remove method is called first and then create.

Remove Method


@Override
public void remove(int publicationId, int componentId, int componentTemplateId, ComponentPresentationTypeEnum componentPresentationType)
        throws StorageException
{
    log.debug("Custom storage remove method");
    // Delegate to the default JPA implementation so the standard removal still happens
    super.remove(publicationId, componentId, componentTemplateId, componentPresentationType);
    log.debug("Custom storage remove method, componentId :- " + componentId);
}
Re-Publish an item


As you can see, when I re-published the item, the remove method was invoked first and then the create method. Next, let's un-publish an item and see which methods run, and in what sequence.


Un-Publish an Item

To summarize: publishing a new item calls only the create method, re-publishing calls the remove method and then create, and un-publishing calls only the remove method.


In the last two blogs, we learned how to set up and configure the project to build a storage extension and what happens when we publish a new item. In this blog, we saw the remaining operations, how they are invoked and in what order. With this approach we can add, update and remove DCPs in a custom store, e.g. SOLR, Elasticsearch, MongoDB, etc.

This data can further be used for analytics and for third-party applications.
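As an illustration of that idea, below is a minimal sketch of a helper that the create method could call to push DCP content to an external index over plain HTTP. Everything environment-specific here is an assumption: the Elasticsearch endpoint at localhost:9200, the index name dcp and the document ID scheme are all hypothetical, and the JSON escaping is deliberately naive.

// Hypothetical helper inside the custom DAO class. Additional imports needed:
// java.io.IOException, java.io.OutputStream, java.net.HttpURLConnection, java.net.URL
private void indexDcp(int publicationId, int componentId, String content)
{
    try
    {
        // Assumed Elasticsearch endpoint and index name -- adjust for your environment
        URL url = new URL("http://localhost:9200/dcp/_doc/" + publicationId + "-" + componentId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // Naive escaping for illustration only; use a proper JSON library in real code
        String json = "{\"content\":\"" + content.replace("\\", "\\\\").replace("\"", "\\\"") + "\"}";
        try (OutputStream os = conn.getOutputStream())
        {
            os.write(json.getBytes("UTF-8"));
        }
        log.debug("Index responded with HTTP " + conn.getResponseCode());
    }
    catch (IOException ex)
    {
        log.error("Failed to index DCP", ex);
    }
}

Calling a helper like this at the end of create, with a matching delete in remove, would keep the external store in sync on every publish, re-publish and un-publish.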


Happy Coding and Keep Sharing !!!



Extending Content Delivery Storage in SDL WEB 8.5 - Part 2

In the previous blog, we set up the project and put all the required configuration in place; based on that, we published a DCP and saw our custom logs appear, which means everything is working fine.

Today we are going to read the content of the DCP and see how the storage extension behaves when we publish, re-publish and un-publish an item.

The content is available in the form of bytes and we need to convert it; below is the code snippet that will allow you to access the DCP content in your custom code.

@Override
public void create(ComponentPresentation itemToCreate, ComponentPresentationTypeEnum componentPresentationType)
        throws StorageException
{
    // Let the default JPA implementation store the DCP first
    super.create(itemToCreate, componentPresentationType);
    log.debug("Custom storage create method, componentId :- " + itemToCreate.getComponentId());

    // The DCP content comes back as a byte array; decode it as UTF-8
    byte[] dcpComponent = itemToCreate.getContent();
    try
    {
        String componentPresentation = new String(dcpComponent, "UTF-8");
        log.debug("Custom storage create DCP :- " + componentPresentation);
    }
    catch (UnsupportedEncodingException ex)
    {
        log.error("Custom storage create, unsupported encoding", ex);
    }
}

DCP content

Publish a new component and you will see the DCP content in the deployer log file. But what happens when we re-publish the same component, or un-publish it? How to handle that in the custom code is what we will see in the next blog, until then.


Happy Coding and Keep Sharing !!!





Saturday 15 June 2019

Extending Content Delivery Storage in SDL WEB 8.5 - Part 1

In this blog, we are going to extend the storage layer in SDL WEB 8.5, but first we need to set up the project and make sure that the custom code interacts with the default process.

Pre-Requisites
  1. Eclipse, or your favorite Java IDE. You can also follow my previous blog on how to set up the project in Eclipse and generate a custom JAR. [here]
  2. JAVA 8
  3. Default SDL JARs
We can customize the way existing items are stored by the storage layer using the DAO ("Data Access Object") implementation pattern. We need to extend JPAComponentPresentationDAO and implement ComponentPresentationDAO, and this will allow us to extend the storage layer for Dynamic Component Presentations.

Step 1. Create a Java class file and add the following imports.
import com.tridion.broker.StorageException;
import com.tridion.storage.ComponentPresentation;
import com.tridion.storage.dao.ComponentPresentationDAO;
import com.tridion.storage.persistence.JPAComponentPresentationDAO;
import com.tridion.storage.util.ComponentPresentationTypeEnum;
import java.io.UnsupportedEncodingException;
import java.util.Collection;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Step 2. Extend the class JPAComponentPresentationDAO and implement ComponentPresentationDAO. We also need to add a @Component and a @Scope("prototype") annotation between the import statements and the class definition, as shown in the screenshot and in the sketch below.

Custom Storage Extension
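For reference, here is a minimal sketch of what the class declaration can look like. The class and bean names are illustrative, and the constructor signatures follow the commonly documented storage-extension pattern, so verify them against the Javadoc shipped with your SDL version:

@Component("JPAComponentPresentationDAOExtension")
@Scope("prototype")
public class JPAComponentPresentationDAOExtension extends JPAComponentPresentationDAO implements ComponentPresentationDAO
{
    private static final Logger log = LoggerFactory.getLogger(JPAComponentPresentationDAOExtension.class);

    // The storage layer instantiates DAOs through constructors like these
    public JPAComponentPresentationDAOExtension(String storageId, EntityManagerFactory entityManagerFactory, String storageName)
    {
        super(storageId, entityManagerFactory, storageName);
    }

    public JPAComponentPresentationDAOExtension(String storageId, EntityManagerFactory entityManagerFactory, EntityManager entityManager, String storageName)
    {
        super(storageId, entityManagerFactory, entityManager, storageName);
    }
}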

Step 3. Create a storage DAO bundle config file.
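The bundle file is a small XML document along these lines (a sketch: the file name is whatever you choose, and the class attribute must point at your own DAO class from Step 2; double-check the element names against the samples in the installation media):

<!-- CustomStorageDAOBundles.xml (hypothetical name) -->
<StorageDAOBundles>
    <StorageDAOBundle type="persistence">
        <StorageDAO typeMapping="ComponentPresentation" class="your.package.name.JPAComponentPresentationDAOExtension"/>
    </StorageDAOBundle>
</StorageDAOBundles>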



Step 4. Copy this config file into the Deployer's config folder, then open the cd_storage config file in your favorite editor and add the following element.

Edit storage config file.
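What gets added is a StorageBindings entry pointing at the bundle file from Step 3, roughly like this (again a sketch, using the hypothetical file name from above; the cd_storage config's own comments show the exact placement inside the Storages section):

<StorageBindings>
    <Bundle src="CustomStorageDAOBundles.xml"/>
</StorageBindings>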

Step 5. Copy the custom JAR file into the Deployer's lib folder and restart the Deployer.

Custom JAR.


Step 6. We are done with the code and configuration; now we need to publish a DCP and check the logs.

Logs

In the log file, we can see that our custom log entry is present and returns the component ID, which means the code is interacting with the default process.


In the next blog, we will look at the other methods, how our code behaves on publishing, un-publishing and re-publishing, and how to read the DCP content. Until then.

Happy Coding and Keep Sharing !!!





Sunday 2 June 2019

Custom Deployer Extension in SDL WEB 8.5 Part - 1

Today, we are going to see how to extend the default SDL Tridion deployment functionality using a custom Deployer extension. Before we move forward and set up the project, let's first understand the use of a Deployer extension. When a user publishes content, the Content Deployer unpacks the incoming Transport Package and processes its transport instructions; we can extend this default behavior by creating a custom Module and adding it to a Step, or by extending an existing Module.

Deployer Extension:- 
  • A Deployer extension is used to inject additional functionality into the default SDL Tridion deployment process. 
  • Use it when you want to implement custom logic or data handling. "The Event System might be useful here as well 🤔 We need to be absolutely sure about it".
  • Use it when you need specific data/info that is only available at the time of deployment.   
  • It is based on Java.
  • You can extend the default behavior of the Content Deployer by creating a custom Module and adding it to a Step, or by extending an existing Module.
To set up the Java project for the Deployer extension, I have used the Eclipse IDE, the default JARs provided by SDL, and Java 8.

First, we need to configure the JAVA project using Eclipse.

1. Click on File --> New --> Other..., then select Java Project and click Next.

Create New JAVA Project

2. Enter the Project Name

Enter the Project Name.

3. Create a new folder called "lib" which will contain all the JARs.

Create new Folder lib

4. Copy all the JARs into lib; you can find the default JAR files in the deployer microservice's lib folder. Once copied, add them to the Build Path.

Add JARs Build Path


5. Next, create a Java package and then create a class file.

Create a new Java Package

Create a New Class
6. Import all the namespaces required to extend the deployer.

Tridion Default Namespaces.

7. Next, we must implement the abstract method Module.process(TransportPackage).

package com.tridion.deployer.extension;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.tridion.configuration.Configuration;
import com.tridion.configuration.ConfigurationException;
import com.tridion.deployer.Module;
import com.tridion.deployer.ProcessingException;
import com.tridion.deployer.Processor;
import com.tridion.transport.transportpackage.TransportPackage;

public class PurgeCache extends Module {

    protected static Logger log = LoggerFactory.getLogger(PurgeCache.class);

    public PurgeCache(Configuration config, Processor processor) throws ConfigurationException {
        super(config, processor);
    }

    // This is called once for each Transport Package that is deployed
    @Override
    public void process(TransportPackage data) throws ProcessingException {
        log.debug("This is a custom log entry");
        int publicationId = data.getProcessorInstructions().getPublicationId().getItemId();
        log.debug("PublicationId : " + publicationId);
    }
}


That's it. Export this into a JAR; next comes the configuration in the deployer-conf.xml of the Deployer microservice. "Don't forget to restart the service".


Click Export

Click Jar file and then Next
Custom Deployer Extension JAR file is ready


Finally, we need to deploy the custom JAR into the Deployer microservice's lib folder, open the deployer-conf.xml file in your favorite editor and add the configuration below, then save the file. Don't forget to take a backup before you start editing.


 <Step Factory="com.sdl.delivery.deployer.steps.TridionExecutableStepFactory" Id="PurgeCacheSteps">  
   <Module Type="PurgeCache" Class="com.tridion.deployer.extension.PurgeCache">  
   </Module>  
 </Step>  

We also need to edit the logback.xml so that our package's log output is written to the deployer log:

 <logger name="com.tridion.deployer.extension">  
     <appender-ref ref="rollingDeployerLog"/>  
 </logger>  

Let's do some publishing and see if everything is working fine. Ideally, we should be able to see the log entries. If you see the custom logs in the log file, that means your custom code is interacting with the default deployment process and you've built the extension successfully.

Log File


In the next blog, we will use the deployer extension with a very interesting use-case, until then.

Happy Coding and Keep Sharing !!!



Saturday 13 April 2019

SDL Tridion sites 9 PCA with REACT - Part 2

In the previous post, we saw how to set up the project using Apollo Client and ReactJS and interact with the Tridion Sites 9 PCA. We also built a new schema and a dynamic CT, and published 3 components using the dynamic CT in order to retrieve them via the PCA (GraphQL).

In the last exercise, we rendered the list of dynamic News components on the landing page. Today we are going to continue with the same code and build the details page, and later on we will look at the deployment process and how to deploy a ReactJS app.

So, let's start with the News details page. We already created the landing page, and while doing that we had sufficient information to get the DCP of each particular component: as you may have noticed in the code or screenshot from my last blog, the href I created for each News and Article item contains the combination of ItemId and CT, and we need that same combination to get the component presentation.

DCPs with ItemID-CT available on Landing Page

Here is the JSX and GraphQL code to render the News details page.

News Detail Page
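For readers who can't copy from a screenshot, the query is shaped roughly like this. This is a sketch from memory of the Sites 9 schema with made-up IDs, so verify the exact query and field names against your own PCA endpoint:

query {
  componentPresentation(namespaceId: 1, publicationId: 5, componentId: 123, templateId: 456) {
    rawContent {
      data
    }
  }
}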

Here is the News and Article details page, again not a very attractive UI 😊.

Details Page

Let's build the app and see what the production-ready build looks like. The commands below build the deployment package; after npm run build finishes, we run serve -s from the build folder and this spins up the website.

 $ npm run build
 $ serve -s

Build Package

Final Output 

Final Build

The updated code is available on GitHub. In the next blog, we will look at more features, until then.

Happy Coding and Keep Sharing !!!


Friday 5 April 2019

OutScale Publishing in SDL Tridion

This was my topic at this year's SDL Tridion DX DEV India summit. We recently upgraded one of our clients' systems from SDL Tridion 2013 to SDL WEB 8.5, with more than 700 websites and 200K items to publish, so we needed to build a system that supports this scale at a high performance rate.

So, to maintain scalability, we decided to use a Deployer-Endpoint and Deployer-Workers rather than going with the Deployer-Combined; we also used Redis for binary storage and ActiveMQ for the message queue/notifications. Below is the architecture diagram.



Let's understand the difference between the split Deployer and the combined Deployer:
  • Deployer Endpoint:- Receives the package sent by the Transport Service and stores it in the binary storage. 
  • Deployer Worker:- Processes the package and saves the content in the Broker DB or on the file system, depending upon the configuration.  

Next, let's see how this architecture works step by step.

  • Content is passed to the Deployer after a user Publishes an item in the CME.
  • The Deployer Endpoint passes the Transport Package (.zip file) to the defined Binary Storage (File System or Redis Database). We are using Redis right now.
  • The Deployer Endpoint also passes the item to the Queue in ActiveMQ (JMS).
  • ActiveMQ triggers an event that informs the Worker Deployers that a new package has been received.
  • The first available Worker Deployer picks up the job from the Queue and contacts the Binary Storage to get the respective Transport Package.
  • After rendering the Transport Package the Worker Deployer passes the item to the Broker Database.
  • The Deployer then gets the status of the job from the Broker Database, which is updated by the Worker Deployer responsible for that job.

Below is a screenshot of Redis holding the Transport Package, and of ActiveMQ with the queue notifications being enqueued and dequeued.



 Happy Coding and Keep Sharing !!!

Monday 4 March 2019

SDL Tridion Sites 9 Content API GraphQL with SDL WEB 8.5

Recently, one of our customers asked: can we have the new SDL Tridion Sites 9 GraphQL feature in SDL WEB 8.5? They had the following questions before we could proceed to a POC.

The client was very excited about this new GraphQL feature, as it would bring a lot of improvements to their existing implementation, so they wanted to know whether they could have it on 8.5.

  1. Can the CD side be upgraded to Sites 9 to have the GraphQL API, while the CM is left on WEB 8.5?
  2. Can only the Content microservice be updated to Sites 9 (while leaving the Broker and Deployer on 8.5)?
  3. Or do the Broker and Deployer also have to be upgraded to 9?

We also needed to think about downtime while doing all this.

We did some investigation and came up with an approach in which we don't upgrade anything at all just to get the new Content API feature (GraphQL).

So, we decided to install only the Tridion Sites 9 Content microservice, which reads the data from the existing WEB 8.5 Broker DB. NOTE:- We are still running the 8.5 services here, no upgrade yet on the CM or CD side.

With this approach, we were able to deliver what the client requested. They now run their existing site on 8.5 content delivery and, in parallel, have started planning apps built on the 8.5 broker data using the Tridion Sites 9 GraphQL API.

It was an interesting POC.

WEB 8.5 Content Service


Tridion Sites 9 Content Service running on the same environment and hitting the 8.5 Broker to fetch the data.


Happy Coding and Keep Sharing !!!



Thursday 8 November 2018

Configure Workers, ActiveMQ and Redis for Scalable Deployment in SDL WEB 8.5

This is in continuation of my previous blogs, where we discussed how to install the CM and Publisher on dedicated machines, and then how to implement scaled-out content deployment using Workers, ActiveMQ and Redis. Today, we are going to see how to configure the Workers, ActiveMQ and Redis.

To read the last two blogs in this series:-
  1. Scaling SDL WEB 8.5 Installing CM and Publisher on Dedicated Machines
  2. Scaling Deployers in SDL WEB 8.5
In order to configure scalable content deployment, we need a Deployer (Endpoint) and Deployer-Workers.

Deployer Installation Media
Pre-requisites: we need to install ActiveMQ and Redis.
  1. For how to install Redis, please check my previous blogs.
  2. Download Apache ActiveMQ:
    1. Unzip the package to a suitable location and, from the command line, execute activemq start.
    2. Open your browser and navigate to the Admin Console at http://localhost:8161
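A quick smoke test to confirm both are up (assuming default ports and ActiveMQ's default admin/admin console credentials):

 $ redis-cli ping
 PONG
 $ curl -u admin:admin -s -o /dev/null -w "%{http_code}\n" http://localhost:8161/admin/
 200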

The next step is to configure the Deployer-Endpoint and Deployer-Worker microservices.
  1. Both services require an updated deployer-conf.xml, and the Deployer-Workers additionally require cd_storage_conf.xml.
  2. In the Deployer-Endpoint installation media, open deployer-conf.xml.
    1. Here we need to update the <BinaryStorage> node to point to the Redis data store (see the sketch after this list).
      Configure Redis in Deployer-Endpoint
    2. We need to configure the <State> node so that the Workers know where to update the status of a job and the Endpoint knows where to read that status from.
      Configure State
    3. Next, we need to configure ActiveMQ; we also need to make sure that all the other <Queue> entries related to the FileSystem are commented out.
      Configure Apache ActiveMQ JMS
    4. Save and close the deployer-conf.xml file, and use this same file for the Workers as well.
    5. The next step is to configure cd_storage_conf.xml, for the Deployer-Workers only.
      1. Here we need to update the <Storage> node with the Broker database details. The Workers will deploy the content to this DB.
      2. Update the license file path.
      3. Save and close the file; we can use this same file on the second Deployer-Worker.
    6. Now, we need to configure the ports for the Deployer-Endpoint and the Deployer-Workers.
    7. Finally, run installService.ps1 from the Deployer-Endpoint folder and from the Deployer-Worker folder; run it once per Deployer-Worker you want.
      Deployer-Endpoint and Deployer-Workers are installed  
    8. We now have all the services configured and up and running.
    9. The final step is to register your Deployer (Endpoint) with your Discovery Service by running discovery-registration.jar.
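For orientation, the Redis <BinaryStorage> node from step 2.1 ends up looking roughly like this. This is a sketch from memory: the Adapter value and child elements may differ between versions, so check the sample deployer-conf.xml shipped with the installation media.

<BinaryStorage Id="RedisStorage" Adapter="RedisBlobStorage">
    <Redis>
        <Host>localhost</Host>
        <Port>6379</Port>
    </Redis>
</BinaryStorage>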



Happy Coding and Keep Sharing



Sunday 4 November 2018

Scaling Deployers in SDL WEB 8.5

This is in continuation of my previous post, where we discussed how to set up the Publisher and CM on dedicated machines to improve publishing efficiency; click here to Read More.

Today we are going to see how to scale out Deployers, using multiple deployers and workers.

High-Level Architecture diagram

Steps:

  1. Content is passed to the Deployer after a user Publishes an item in the CME.
  2. The Deployer Endpoint passes the Transport Package (.zip file) to the defined Binary Storage (File System or Redis Database). We are using Redis right now.
  3. The Deployer Endpoint also passes the item to the Queue in ActiveMQ (JMS).
  4. ActiveMQ triggers an event that informs the Worker Deployers that a new package has been received.
  5. The first available Worker Deployer picks up the job from the Queue and contacts the Binary Storage to get the respective Transport Package.
  6. After rendering the Transport Package the Worker Deployer passes the item to the Broker Database.
  7. The Deployer then gets the status of the job from the Broker Database, which is updated by the Worker Deployer responsible for that job.
In the next blog, we'll see how to configure Workers, ActiveMQ and Redis.

Happy Coding and Keep Sharing 


Multiple Destination Publishing Using - MIRROR strategy in SDL WEB 8.5

What is Multiple Destination Publishing?


Multiple destination publishing means that the transport service can publish to multiple deployer destinations (if you have more than one deployer).

The DEFAULT strategy means that there is only one Deployer (only one URL is registered in the discovery-service).

The MIRROR strategy means that you can have more than one Deployer (multiple URLs are registered in the discovery-service) and the Transport Service will send the package to all of the Deployer URLs registered in the discovery service.

How to configure Multiple Destination Publishing?


We can do this by defining the strategy on the DeployerCapability Role, which is registered in the Discovery microservice's cd_storage_conf.xml:

If you have set up multiple Content Deployer destinations, do the following in order to set up the MIRROR strategy:

  1. Ensure that this Role element has a Strategy attribute, set to MIRROR.
  2. Inside the Role element, insert a Urls subsection, itself containing two or more Url subelements. Each Url element must have a Value attribute, set to the URL of the destination, and can have a DestinationName attribute, set to the name of this destination (see the sketch below).
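Putting that together, the Role entry ends up shaped like this (a sketch; the hostnames, port and destination names are made up):

<Role Name="DeployerCapability" Strategy="MIRROR">
    <Urls>
        <Url Value="http://deployer-01:8084/httpupload" DestinationName="deployer-01"/>
        <Url Value="http://deployer-02:8084/httpupload" DestinationName="deployer-02"/>
    </Urls>
</Role>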

Strategy Attribute 

This Role has either one Capability URL (Strategy DEFAULT, specified in the Url attribute) or multiple ones (Strategy MIRROR, specified in the Urls subsection). The Strategy defaults to DEFAULT if not specified.

Happy Coding and Keep Sharing !!!