Sunday, October 16, 2011

A new place for "Forte e Gentile"

The new place for the technical stuff is the Tirasa blog, where I have also copied all the old posts from this blog.

See you there!

Wednesday, September 21, 2011

Getting started with Activiti (with Maven)

[See this post in the new blog]

Syncope needs a new workflow engine, for many good reasons: here's why I've started playing around with Activiti.

Activiti looks really interesting because of its features and its Apache 2.0 license; moreover, its spicy story makes it even more attractive.

Unfortunately, the documentation is fully Eclipse and Ant oriented: this sounds a bit cumbersome to people (like me, of course) who are in love with Maven and quite allergic to the dark side (ahem, let's say, more in love with everything that used to be connected to the Sun).
Anyway, the Activiti team did not completely forget the rest of us and regularly publishes artifacts to Alfresco's Maven repository.

Hence, I've downloaded the latest 5.7 version, grabbed the source code examples, and wrote a simple multi-module Maven project able to compile and run all the tests defined there. The source code is available on GitHub.
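As a quick smoke test of the Maven setup, something along these lines can be used. This is just a minimal sketch, assuming org.activiti:activiti-engine:5.7 (which pulls an in-memory H2 database for the default configuration) on the classpath; the sample.bpmn20.xml resource and the "sample" process key are made up for illustration:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;

public final class ActivitiSmokeTest {

    public static void main(final String[] args) {
        // build a process engine backed by an in-memory database
        final ProcessEngine engine = ProcessEngineConfiguration.
                createStandaloneInMemProcessEngineConfiguration().
                buildProcessEngine();

        // deploy a process definition from the classpath
        // and start an instance of it
        engine.getRepositoryService().createDeployment().
                addClasspathResource("sample.bpmn20.xml").deploy();
        engine.getRuntimeService().startProcessInstanceByKey("sample");

        engine.close();
    }
}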

Friday, July 15, 2011

Cocoon 3 and Hippo CMS at work

[See this post in the new blog]

I recently presented some aspects of the renewed power of Apache Cocoon through its latest (and not yet completed) release, 3.0.

Today I am going to present some features of the Hippo Cocoon Toolkit, whose aim is to provide an alternative, Cocoon 3.0 based toolkit for building front-end web sites on top of Hippo CMS and Repository.

This project is still rather immature, but it already provides some interesting features like XML and PDF generation of documents stored in the Hippo repository.

HCT can be used either as a standalone webapp - in which case it takes control of the whole navigation - or embedded in the official Hippo Site Toolkit: the latter allows you to benefit from HCT's (and Apache Cocoon 3.0's) features while staying with the traditional way of dealing with Hippo-powered websites.

Here is what I did:
  1. generated a new project (I used archetype version 1.04.00 just to stay on the edge :-P)
    Update: as reported in Hippo wiki, "The archetypes for Hippo CMS 7.6 are in the 1.03.xx range and are production ready. The latest micro version of that range is the one you will want to use. Archetype versions in the 1.04.xx range are unstable development releases. Unless you want to check out the new and upcoming features we strongly advice you not to use these."
    This means that the code attached to this post is not meant to be used in any production environment.
  2. went to the CMS console web interface and added a couple of new sitemap items for news/**.xml and news/**.PDF (the capital PDF is needed because otherwise the HST components seem to try loading a PDF asset)
  3. wrote a couple of Java classes - namely HST components - HCTXml and HCTPdf
  4. prepared a couple of JSPs to handle the results provided by the two new HST components

Both HST components inherit from a common abstract class in which a basic Cocoon 3 pipeline is set up; the relevant part of this source code is shown below:

final Pipeline<SAXPipelineComponent> pipeline =
        new NonCachingPipeline<SAXPipelineComponent>();

// the generator emits a small XML request carrying the path of the
// document to be fetched
pipeline.addComponent(new XMLGenerator("<hct:document "
        + "xmlns:hct=\"http://forge.onehippo.org/gf/project/hct/1.0\" "
        + "path=\"" + hippoBean.getPath() + "\"/>"));

// the transformer connects to the Hippo repository and replaces the
// request above with the actual document content
final Map<String, String> hrtParams = new HashMap<String, String>();
hrtParams.put(HippoRepositoryTransformer.PARAM_REPOSITORY_ADDRESS,
        "rmi://localhost:1099/hipporepository");
hrtParams.put(HippoRepositoryTransformer.PARAM_USERNAME, "admin");
hrtParams.put(HippoRepositoryTransformer.PARAM_PASSWORD, "admin");
final HippoRepositoryTransformer hrt = new HippoRepositoryTransformer();
hrt.setConfiguration(hrtParams);
pipeline.addComponent(hrt);

A basic pipeline is created, starting with an XML string that simply contains a request to be interpreted by the subsequent HippoRepositoryTransformer instance.
Note here that the repository URL and credentials are passed to the transformer, and that the document is read again from the repository even though it is already contained in hippoBean: HCT is not yet mature, as written above...

Generating an XML output is now pretty straightforward:

// serialize the SAX events back to (indented) XML
final XMLSerializer serializer = XMLSerializer.createXMLSerializer();
serializer.setIndent(true);
pipeline.addComponent(serializer);

final ByteArrayOutputStream baos = new ByteArrayOutputStream();
try {
    pipeline.setup(baos);
    pipeline.execute();
} catch (Exception e) {
    throw new HstComponentException(e);
}

// make the XML available to the JSP below
request.setAttribute("xml", new String(baos.toByteArray()));

and (JSP):

<%@page contentType="text/xml" pageEncoding="UTF-8" trimDirectiveWhitespaces="true"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<c:out value="${requestScope.xml}" escapeXml="false"/>

Consider that you could add an additional XSLT transformation here, as sketched below, to customize the XML output in the desired way.
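For instance, something like the following, added before the serializer, would do; the stylesheet path is made up for illustration:

// hypothetical stylesheet reshaping the repository output
pipeline.addComponent(new XSLTTransformer(
        getClass().getResource("/xslt/custom.xsl")));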

Generating a PDF file requires a little more work, since an intermediary XSLT transformation from the source XML to XSL-FO (required by Apache FOP) is needed:

// XSLT parameters made available to the stylesheet
final Map<String, Object> params = new HashMap<String, Object>();
params.put("scheme", request.getScheme());
params.put("servername", request.getServerName());
params.put("serverport", Integer.valueOf(request.getServerPort()));
params.put("contextPath", request.getContextPath());

// transform the source XML into XSL-FO...
final XSLTTransformer xslt = new XSLTTransformer(
        getClass().getResource("/xslt/document2fo.xsl"));
xslt.setParameters(params);
pipeline.addComponent(xslt);

// ...and let Apache FOP serialize the XSL-FO as PDF
pipeline.addComponent(new FopSerializer());

final ByteArrayOutputStream baos = new ByteArrayOutputStream();
try {
    pipeline.setup(baos);
    pipeline.execute();
} catch (Exception e) {
    throw new HstComponentException(e);
}

// make the PDF bytes available to the JSP below
request.setAttribute("pdfArray", baos.toByteArray());

and (JSP):

<%@page contentType="application/pdf" trimDirectiveWhitespaces="true"%>
<%
    response.getOutputStream().write((byte[]) request.getAttribute(
            "pdfArray"));
%>

To test all this, build and run the source code in the usual way and point your favorite browser to http://localhost:8080/site/news/.

Now you can click on one of the three news items shown, go to the address bar of your browser and replace .html with .xml or .PDF to get a raw XML or PDF view of your Hippo document.

Thursday, July 7, 2011

AOP – Spring – JPA for background threads / jobs

[See this post in the new blog]

I recently ran into a very wicked problem in Syncope, and a saving blog post pointed me in the right direction:

Getting your persistence access right when working with background jobs in Spring can be tricky. Most people rely on the Open Session In View pattern using Filters or Interceptors that act on the regular app server threads and close and open sessions for each request.

Nevertheless, I had to refactor the source code a bit to make it JPA 2.0 (and not strictly Hibernate) compliant: the result is available here. I have also added some @Transactional support.
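Just to give the idea, here is a minimal sketch of the pattern (class and method names are made up, not Syncope's actual code): the background job delegates to a Spring bean whose method is annotated with @Transactional, so the EntityManager and the transaction are bound to the job thread for the duration of the call, with no Open Session In View filter involved.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BackgroundJobDelegate {

    @PersistenceContext
    private EntityManager entityManager;

    // invoked by the background thread through the Spring proxy, so a
    // transaction (and a thread-bound EntityManager) is opened and
    // closed around each call
    @Transactional
    public void execute() {
        // ... JPA work via entityManager ...
    }
}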

Thursday, June 30, 2011

Build rich XML-enabled applications with Apache Cocoon 3.0 and Apache Wicket

[See this post in the new blog]

Some articles are already around about Apache Cocoon 3.0, a deep rewrite of an Apache project that has been bringing innovative concepts to the community since 1998.

To be honest, the latest release is only slowly approaching a stable level, especially if compared to the wide spread and appreciation that the 2.x series used to enjoy - and still does, to a certain extent - all around the world. Consider only the date of this post reporting the official announcement of the initial work: almost three years ago now, normally enough to consider an Open Source project all but dead.

Anyway, the user base seems to be wider than (at least I had) expected, and messages still flow through Apache Cocoon's mailing lists asking for help, offering considerations, requesting features. Moreover, some blog entries like this and this recently appeared about Apache Cocoon 3.0, showing that there still seems to be room for the "Cocoon way" of building Internet applications.

Ok, I might not be completely objective, but I really do believe that there is still nothing around comparable to Apache Cocoon when it comes to dealing with XML content.
An example of this is the Hippo Cocoon Toolkit project, aiming to provide an alternative, Apache Cocoon 3.0 based toolkit for building front-end web sites on top of Hippo CMS.

Apache Cocoon 3.0 has a very slimmed-down and targeted nature compared to its ancestors (especially 2.1), which were conceived for implementing any kind of web interaction, from portals to CRUD applications. On the other hand, it provides every means for a smooth integration into almost any environment.

Let's briefly see how simple and yet extremely powerful it can be to build a web application capable of fancy AJAX stuff and, at the same time, strong XML processing.
Start by downloading the source code of the sample web application: as you can see, all you need to run it is Apache Maven (2.2.1 or 3.0.3) installed on your workstation; then uncompress, cd and launch
 # mvn clean package jetty:run
Now point your favorite browser to http://localhost:8888/: voilà! You can now see three different kinds of interaction available in this sample web application:
  1. Embed content produced by Cocoon pipelines in Wicket pages (source code: Homepage.java): you can then, for example, place somewhere in your Wicket form a snippet generated by a Cocoon pipeline; note here that Cocoon pipelines are written as pure Java code, no XML (see the sketch after this list);
  2. Use full featured Cocoon pipelines (source code: sitemap.xmap): just empower Cocoon the good old way;
  3. Use full featured Wicket pages (I just grab the source code from the AJAX section of Wicket samples).
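As a taste of point 1, here is a minimal sketch of such a Java-only pipeline, along the same lines as the HCT code above; it assumes the cocoon-pipeline and cocoon-sax modules on the classpath:

import java.io.ByteArrayOutputStream;

import org.apache.cocoon.pipeline.NonCachingPipeline;
import org.apache.cocoon.pipeline.Pipeline;
import org.apache.cocoon.sax.SAXPipelineComponent;
import org.apache.cocoon.sax.component.XMLGenerator;
import org.apache.cocoon.sax.component.XMLSerializer;

public final class JavaPipeline {

    public static void main(final String[] args) throws Exception {
        // generator -> serializer, assembled in plain Java: no XML sitemap
        final Pipeline<SAXPipelineComponent> pipeline =
                new NonCachingPipeline<SAXPipelineComponent>();
        pipeline.addComponent(new XMLGenerator(
                "<greeting>Hello, Apache Cocoon 3.0!</greeting>"));
        pipeline.addComponent(XMLSerializer.createXMLSerializer());

        final ByteArrayOutputStream baos = new ByteArrayOutputStream();
        pipeline.setup(baos);
        pipeline.execute();
        System.out.println(baos.toString("UTF-8"));
    }
}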

Nice, isn't it? ;-)

All of the above can be considered a very first insight into the many facets of Apache Cocoon 3.0: take a tour of its features to get a better idea; did I tell you, for example, about its RESTful attitude?

Friday, May 20, 2011

HSQLDB 2.0.0, BLOB & Hibernate

[See this post in the new blog]

HSQLDB is a very nice and complete all-Java DBMS, particularly useful for quick try-outs or Maven tests.

A while ago, a bug was discovered in release 2.0.0 that causes issues with BLOB management: this is a considerable problem, especially with @Lob fields in Hibernate.

The bug was actually fixed in 2.1.0; the latest stable release is, as of today, 2.2.1.

Unfortunately, the latest release available at the Maven Central Repository is "only" 2.0.0. [Update: release 2.2.4 is now available (June 25th 2011)]

Taking inspiration from this StackOverflow question, I've elaborated a simple solution that works for Hibernate.

First of all, create a simple Java class like the following:

import java.sql.Types;

import org.hibernate.dialect.HSQLDialect;

public class HSQLSafeDialect extends HSQLDialect {

    public HSQLSafeDialect() {
        super();

        // map BLOB and CLOB to the pre-2.0.0 column types
        registerColumnType(Types.BLOB, "longvarbinary");
        registerColumnType(Types.CLOB, "longvarchar");
    }
}

Then configure your Hibernate instance to use xxx.yyy.HSQLSafeDialect instead of the standard org.hibernate.dialect.HSQLDialect.
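For instance, when building the configuration programmatically (the same hibernate.dialect property can equally go into hibernate.cfg.xml or persistence.xml; xxx.yyy is the package placeholder used above):

import org.hibernate.cfg.Configuration;

// point Hibernate to the custom dialect instead of the standard one
final Configuration cfg = new Configuration()
        .setProperty("hibernate.dialect", "xxx.yyy.HSQLSafeDialect");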

Basically, this disables the BLOB and CLOB support introduced as new in HSQLDB 2.0.0, reverting to the 1.8.X style.
Not fancy, but enough to make your Maven tests run smoothly. Enjoy.

Wednesday, March 9, 2011

Still surprised by Linux? Yeah :-)

[See this post in the new blog]

Yesterday evening I had to print out a photo of my son: since the only ready-to-go PC in the neighbourhood was my brand new Linux-powered laptop, I just plugged in the USB cable of my HP Deskjet F300 all-in-one.
Without paying much attention, I switched it on, prepared to go through the usual CUPS story, when some popups appeared on the bottom right of my KDE desktop saying that the scanner and printer had been successfully configured. Just a quick test to verify that everything was actually working and voilà. Great!

I still remember what, years ago, I had to do to make my old Epson Stylus Color 740 operate, with no guarantee of success: modprobe, lsusb, /var/log/syslog, dmesg...

Apache Cocoon PMC Member!!

[See this post in the new blog]

After the very good news I reported before, this morning I received a very pleasant e-mail saying that, as of today, I have been accepted as a member of the Apache Cocoon PMC.

Wow, I am still astonished by all this new stuff :-)

Wednesday, March 2, 2011

Apache Cocoon committer!

[See this post in the new blog]

Yes, it's true - I am still realizing it, but it's definitely true: since February 21st, 2011 I have been granted committer rights at the Apache Software Foundation, on the Cocoon3 project.

As I wrote to the PMC members, it has been a real honour for me to accept this proposal, and I hope I'll be able to contribute to this project to the best of my capabilities.

If you want, and especially if you don't believe me ;-) take a look at my personal page at people.apache.org!

Monday, February 28, 2011

Linux Mint 10 KDE on HP Envy 14 1100

[See this post in the new blog]

I come from four (very good) laptops in a row made by mummy Apple: an iBook G3, a PowerBook G4, a MacBook Pro Core2Duo and another shining MacBook Pro, powered by a dual-core i7 and an SSD.
Because of some relevant changes in my work life (I'll write something about it in the near future), I had to return the latest laptop to my former company - with some regret, though: it's a damn powerful machine.

Of course I could buy another MacBook Pro, maybe one of the latest, like some of my colleagues are doing right now, but for a few months now I have had a growing, uncomfortable feeling that Apple and Mac OS X resemble a cage more every day, with their Apple Store, bells and whistles.

So I decided: back to the roots, back to Linux, possibly Debian, like when I was younger and I met THE Linux guy: I am expecting possibly more headaches, but I am free again, and nothing else matters.

After some research throughout the whole Internet, I found this HP Envy 14 1100: solid, elegant and powerful; here are some key hardware points:
  1. Intel® Core™ i7-720QM 1.6 GHz (2.8 GHz with TurboBoost™)
  2. 4 GB DDR3 in a single slot; another slot free for additional 4 GB
  3. 500 GB (7200 rpm) SATA Hard disk
  4. ATI Mobility Radeon™ HD 5650, 1 GB dedicated memory
  5. LED 14.5'' display (1366 x 768)
General notes
  • Most of the information here can be applied, with minor modifications, to other laptops of the HP Envy family
  • Most of the information here can be applied to any flavor of *Ubuntu 10.10
  • The Realtek network card has some issues with its deep sleep mode: be sure to read this post before starting any Linux installation activity; if it's too late (you happily jumped over this point and are now scrolling back to find out why you have that issue), please read how to remove the memory in order to reset the motherboard.
  • The laptop comes with four primary partitions on disk, so there is no way to create additional partitions for Linux unless you remove the HP recovery partition.
  • Other general information about installing and running Linux on similar hardware can be found here and here.
Installing Linux Mint KDE 10

Ok, Linux Mint is not Debian: it's an Ubuntu derivative, and Ubuntu is in turn a Debian derivative. But Linux Mint has an ongoing pure Debian edition that will eventually replace the current one.
About the desktop environment: ever since 1.0 I've preferred KDE over Gnome, as I've always felt it more organic and stable.

As a first step, download the bootable DVD ISO image from the Linux Mint website, then burn a DVD and keep it ready for re-partitioning your hard drive.

Then, make all necessary backups and preparations, as reported in this guide about using GParted to resize a Windows 7 partition, and keep a safe copy of a Windows 7 repair disc.

Finally, reboot your system with the Linux Mint KDE 10 DVD inserted (you have to enter the BIOS at power-on in order to boot from the DVD drive) and let everything roll on.

The installation procedure runs quite smoothly, with some relevant items:
  • after resizing the Windows 7 partition (having removed the HP recovery partition as specified above), create an extended partition in which the additional logical partitions for Linux can be created;
  • right after first boot, install the proprietary drivers for display card (ATI Radeon HD 5650) and wireless adapter (Broadcom BCM43224) by running jockey-kde from command-line (or "Additional Drivers" from the menu);
  • you might prefer to use an external USB mouse, since the touchpad - a Synaptics ClickPad with promising gesture features - is almost unusable; the situation improves significantly by applying this advice: I now have left click, two-finger scrolling and right click (via two-finger tap; the physical right button is still not working);
  • in order to profit from an external monitor, you need an "HP mini DisplayPort to VGA" or "HP mini DisplayPort to DVI" adapter (similar Apple adapters won't work);
  • suspension and hibernation work by default, but you need some tweaking to be able to suspend/resume more than once: edit /etc/default/grub, add "usbcore.autosuspend=-1" to GRUB_CMDLINE_LINUX, then issue "sudo update-grub".
Finally, if you want to verify that your system is actually using its Intel Core i7 at the best of its power, just download i7z from Google Code, then compile and run it.