In a development environment, the LiveServer Delivery Server is usually set up as a standalone application without a front controlling web server. While this works for development purposes, it does not hold up in a live environment where several thousand users fire off HTTP requests every hour, minute or even second.
This post discusses the best practice for deploying the Open Text Delivery Server alongside a front controlling web server. It provides a high-level overview of what to set up and how the necessary components work together. Depending on feedback, I may follow up with further posts on the details of each step.
The Open Text Delivery Server is a dynamic web server component whose strengths lie in coarse-grained personalization, dynamic behaviour and system integration. All you need to know is where to get hands-on, what to do, and what you had better not do.
The Open Text Delivery Server is housed within a Servlet Container. A Servlet Container is not the ideal place from which to serve static content: it handles requests in a way that limits the number of concurrent requests, which can lead to severe performance issues.
There are ways to mitigate this, but they require quite a lot of Java experience and are still not recommended. Unless you wish to maintain a level of access control over the static content, let's put it simply:
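To make that limitation concrete: in a Tomcat-based install, each incoming request ties up one worker thread from a fixed pool configured on the HTTP connector in server.xml. The following is only an illustrative sketch with typical default values, not a tuning recommendation:

Tomcat server.xml (sketch)
<!-- Each request occupies one worker thread; once maxThreads are busy, -->
<!-- further connections queue up to acceptCount and are then refused. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />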
Don’t run the Delivery Server as a standalone web server.
Leveraging a front controlling web server facilitates an optimised site deployment, as web servers such as Microsoft’s IIS or Apache’s HTTP Server can deliver static content in an optimised way. For example, it is easy to configure a far-future ‘Expires’ header on a given folder (and therefore its content) in either Apache or IIS, which promotes the caching of content in the user’s browser and thereby reduces page load times. Another example is the use of the mature compression features of such web servers to gain performance for your intranet or extranet application. Although these examples can be achieved with some Servlet Containers, it is certainly not straightforward and doesn’t necessarily make sense from an architectural perspective.
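As a minimal sketch of the Apache side (assuming mod_expires and mod_deflate are loaded; the /var/www/static folder is just a hypothetical location for published assets), both techniques boil down to a few directives:

Apache configuration (sketch)
# Far-future Expires header for static assets (requires mod_expires)
<Directory "/var/www/static">
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
</Directory>
# Compress text-based responses before delivery (requires mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript text/xml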
Use a web server like IIS or Apache as the gateway for your static files and to forward the requests that require personalization to the Delivery Server.
It is for this architectural reason that best practice dictates we delegate only the relevant HTTP requests to Delivery Server. Your web server should handle the static, bulky content and forward only those requests to Delivery Server which require personalization. In most cases, this means that Delivery Server is delegated requests for .htm and .xml resources. The rest, such as images, videos, documents, …, can be served from the front controlling web server (or better still, a CDN).
This step can be easily achieved using the Tomcat Connector for both IIS and Apache. To find out more see the Tomcat Connector documentation here.
This connector uses the Apache JServ Protocol (AJP), which connects to port 8009 on Tomcat by default and is optimised to reuse a single connection between the web server and the Delivery Server for many HTTP requests. This makes it a better option than the reverse proxy functionality within the web server, which slows down when many requests are fired off in the same period of time.
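As a sketch of the two ends of that connection (assuming the Tomcat reference install and mod_jk on Apache; file paths are illustrative):

Tomcat server.xml (sketch)
<!-- AJP connector the front controlling web server talks to -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

Apache httpd.conf (sketch)
# Load the Tomcat Connector and point it at its configuration files
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMountFile conf/uriworkermap.properties

JkMountFile refers to the uriworkermap.properties file discussed below, which decides which URLs are handed over to the Delivery Server worker.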
If we take a typical Delivery Server install (i.e. the reference install using Tomcat), a page can be accessed with a URL like the following:
URL http://<host>:8080/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>
where <resource> could be any text-based file like index.html or action.xml.
The result of correctly installing the Tomcat Connector is that we can access that same resource through the web server on port 80 rather than directly on the Tomcat instance on port 8080:
URL http://<host>/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>
Many confuse this step with URL rewriting or redirecting, as the Tomcat Connector is often called the Jakarta Redirector. I therefore prefer to differentiate by saying that this step delegates HTTP requests between the two systems and nothing more.
In every install, I have always used the defaults in the workers.properties file and just used the following rule in the uriworkermap.properties file:
Rule
/cps/*=wlb
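For reference, a minimal workers.properties along the lines of those defaults might look like the following sketch, assuming a single Tomcat node on localhost and the worker name wlb used in the rule above:

workers.properties (sketch)
# Load-balancer worker referenced by the /cps/* rule
worker.list=wlb
worker.wlb.type=lb
worker.wlb.balance_workers=node1
# Single AJP13 node pointing at the Delivery Server's Tomcat instance
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009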
With the delegation set up, deciding which HTTP requests should be forwarded to Delivery Server is a simple matter of performing some URL rewrites.
As we have decided to use a mature web server, there are best-practice ways to achieve this. For IIS6, Helicon Tech created a very useful ISAPI filter which ports the widely adopted Apache mod_rewrite functionality. For both of these, the same rewrite rules can be used. The following provides a couple of typical examples:
Apache Rules
# Default landing page redirect
RewriteRule ^/$ /cps/rde/xchg/<project>/<xsl_stylesheet>/index.htm [L]
# Rewrite to delegate all *.html or *.htm HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.html?)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]
# Rewrite to delegate all *.xml HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.xml)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]
Those of you who are well versed in regular expressions will see that the last two rules could be combined but I tend to leave them separate to aid readability.
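For completeness, a combined version might look like the following sketch, using an alternation on the file extension:

Apache Rules
# Combined rule delegating *.htm, *.html and *.xml requests in one go
RewriteRule ^/?.*/(.+\.(?:html?|xml))$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]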
The beauty of using regular expressions in this way is that you can also create useful SEO benefits for your site. Take, for example, the following rule:
Apache Rules
RewriteRule ^/?.*/([0-9a-zA-Z_]+)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1.htm [L]
This rule maps a URL with many apparent subdirectories to the corresponding Delivery Server file.
This means that you can publish a page with a “virtual” path within the Management Server which appears to a browser (and search engines) as something like the following:
URL http://<host>/this/is/a/descriptive/directory/structure/page.htm
and yet this maps to:
URL /cps/rde/xchg/<project>/<xsl_stylesheet>/page.htm
Being a Microsoft product, IIS7 has some quirks with regard to rewriting (of course), which I explained in a previous post here.
Sharing the required tasks between the appropriate applications gives you a more stable and reliable system with even better performance. All you have to do is let the web server serve the static content and let Delivery Server handle the requests that actually need personalization.
This approach has led to many successful installations where sites could additionally be optimised for SEO and page load times by taking advantage of compression and of the way a web server is built to serve HTTP requests.
This article is based on a blog post by Danny Baggs. Danny has a strong developer background and works for Open Text.
Although this should give you a fair bit of guidance on how to set up your web server environment for a high-performance solution, there are always questions remaining. Feel free to share them in the comments below and let us know what you think.