Friday, July 15, 2011

Cocoon 3 and Hippo CMS at work


I recently presented some aspects of the renewed Apache Cocoon power through its latest (and not yet completed) release, 3.0.

Today I am going to present some features of the Hippo Cocoon Toolkit, whose aim is to provide an alternative, Cocoon 3.0 based, toolkit for building front-end web sites while relying upon Hippo CMS and Repository.

This project is still rather immature, but it already provides some interesting features like XML and PDF generation of documents stored in the Hippo repository.

HCT can be used either as a standalone webapp - in which case it takes control of the whole navigation - or embedded in the official Hippo Site Toolkit: the latter option lets you benefit from HCT's (and Apache Cocoon 3.0's) features while staying with the traditional way of building Hippo-powered websites.

Here is what I did:
  1. generated a new project (I used archetype version 1.04.00 just to stay on the edge :-P)
    Update: as reported in Hippo wiki, "The archetypes for Hippo CMS 7.6 are in the 1.03.xx range and are production ready. The latest micro version of that range is the one you will want to use. Archetype versions in the 1.04.xx range are unstable development releases. Unless you want to check out the new and upcoming features we strongly advice you not to use these."
    This means that the code attached to this post is not meant to be used in any production environment.
  2. went to the CMS console web interface and added a couple of new sitemap items for news/**.xml and news/**.PDF (the capital PDF is needed because otherwise the HST components seem to try loading a PDF asset)
  3. wrote a couple of Java classes - namely HST components - HCTXml and HCTPdf
  4. prepared a couple of JSPs to handle the results provided by the two new HST components

Both HST components inherit from a common abstract class in which a basic Cocoon 3 pipeline is set up; the relevant part of this source code is shown below:

final Pipeline<SAXPipelineComponent> pipeline =
        new NonCachingPipeline<SAXPipelineComponent>();

pipeline.addComponent(new XMLGenerator("<hct:document "
        + "xmlns:hct=\"http://forge.onehippo.org/gf/project/hct/1.0\" "
        + "path=\"" + hippoBean.getPath() + "\"/>"));

final Map<String, String> hrtParams = new HashMap<String, String>();
hrtParams.put(HippoRepositoryTransformer.PARAM_REPOSITORY_ADDRESS,
        "rmi://localhost:1099/hipporepository");
hrtParams.put(HippoRepositoryTransformer.PARAM_USERNAME, "admin");
hrtParams.put(HippoRepositoryTransformer.PARAM_PASSWORD, "admin");
final HippoRepositoryTransformer hrt = new HippoRepositoryTransformer();
hrt.setConfiguration(hrtParams);
pipeline.addComponent(hrt);

A basic pipeline is created, starting from an XML string that simply contains a request to be interpreted by the subsequent HippoRepositoryTransformer instance.
Note that the repository URL and credentials are passed explicitly to the transformer, and that the document is re-read from the repository even though it is already available in hippoBean: HCT is not yet mature, as noted above...

Generating an XML output is now pretty straightforward:

final XMLSerializer serializer = XMLSerializer.createXMLSerializer();
serializer.setIndent(true);
pipeline.addComponent(serializer);

final ByteArrayOutputStream baos = new ByteArrayOutputStream();
try {
    pipeline.setup(baos);
    pipeline.execute();
} catch (Exception e) {
    throw new HstComponentException(e);
}

// use an explicit charset (java.nio.charset.Charset) instead of the
// platform default, since the JSP below declares UTF-8
request.setAttribute("xml",
        new String(baos.toByteArray(), Charset.forName("UTF-8")));

and (JSP):

<%@page contentType="text/xml" pageEncoding="UTF-8" trimDirectiveWhitespaces="true"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<c:out value="${requestScope.xml}" escapeXml="false"/>

Consider that you could add an additional XSLT transformation here to customize the XML output as desired.
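To give an idea of what such a customization step does, here is a minimal sketch using plain JAXP (bundled with the JDK) instead of the Cocoon pipeline; the stylesheet, the document path and the resulting <doc> element are made-up placeholders, not part of HCT:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltSketch {

    // Toy stylesheet: rewrites an hct:document element into a plain <doc>
    // element, keeping only the path attribute.
    private static final String XSL =
            "<xsl:stylesheet version=\"1.0\""
            + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\""
            + " xmlns:hct=\"http://forge.onehippo.org/gf/project/hct/1.0\">"
            + "<xsl:template match=\"hct:document\">"
            + "<doc path=\"{@path}\"/>"
            + "</xsl:template>"
            + "</xsl:stylesheet>";

    public static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<hct:document"
                + " xmlns:hct=\"http://forge.onehippo.org/gf/project/hct/1.0\""
                + " path=\"/content/news/item1\"/>"));
    }
}
```

Within the HCT pipeline itself you would instead insert an XSLTTransformer component before the serializer, exactly as done for the PDF case below.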

Generating a PDF file requires a little more work, since an intermediary XSLT transformation from the source XML to XSL-FO (required by Apache FOP) is needed:

final Map<String, Object> params = new HashMap<String, Object>();
params.put("scheme", request.getScheme());
params.put("servername", request.getServerName());
params.put("serverport", Integer.valueOf(request.getServerPort()));
params.put("contextPath", request.getContextPath());
final XSLTTransformer xslt = new XSLTTransformer(
        getClass().getResource("/xslt/document2fo.xsl"));
xslt.setParameters(params);
pipeline.addComponent(xslt);

pipeline.addComponent(new FopSerializer());

final ByteArrayOutputStream baos = new ByteArrayOutputStream();
try {
    pipeline.setup(baos);
    pipeline.execute();
} catch (Exception e) {
    throw new HstComponentException(e);
}

request.setAttribute("pdfArray", baos.toByteArray());

and (JSP):

<%@page contentType="application/pdf" trimDirectiveWhitespaces="true"%>
<%
    response.getOutputStream().write((byte[]) request.getAttribute(
            "pdfArray"));
%>

To test all this, build and run the source code in the usual way and point your favorite browser to http://localhost:8080/site/news/.

Now click on one of the three news items shown, go to your browser's address bar, and replace .html with .xml or .PDF to get a raw XML or PDF view of your Hippo document.

Thursday, July 7, 2011

AOP – Spring – JPA for background threads / jobs


I recently ran into a very tricky problem in Syncope, and a life-saving blog post pointed me in the right direction:

Getting your persistence access right when working with background jobs in Spring can be tricky. Most people rely on the Open Session In View pattern using Filters or Interceptors that act on the regular app server threads and close and open sessions for each request.

Nevertheless, I had to refactor the source code a bit to make it JPA 2.0 compliant (rather than strictly Hibernate-bound): the result is available here. I have also added some @Transactional support.
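The gist of the pattern, stripped down to a toy sketch with no Spring or JPA dependencies, is binding the persistence resource to the worker thread for the duration of the job and always releasing it afterwards - roughly what Spring's transaction machinery does behind the scenes. All names below are hypothetical; a String stands in for the real EntityManager:

```java
// Toy illustration of per-job resource binding for background threads.
// "Session" stands in for an EntityManager/Hibernate Session; in real code
// Spring's TransactionSynchronizationManager does this bookkeeping for you.
public class JobResourceBinder {

    private static final ThreadLocal<String> SESSION = new ThreadLocal<String>();

    public static String currentSession() {
        return SESSION.get();
    }

    public static void runWithSession(Runnable job) {
        SESSION.set("session-" + Thread.currentThread().getName()); // "open"
        try {
            job.run();
        } finally {
            SESSION.remove(); // always "close", even if the job fails
        }
    }

    public static void main(String[] args) {
        runWithSession(new Runnable() {
            public void run() {
                System.out.println("job sees: " + currentSession());
            }
        });
        System.out.println("after job: " + currentSession());
    }
}
```

The finally block is the whole point: a background job has no servlet filter to clean up after it, so the job runner itself must guarantee the release.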