
Sunday, December 27, 2009

[Tech] Simple Java Template Engine

Template engines are widely used in Web Frameworks, such as Struts, JSF and many other technologies. Beyond classical Web Frameworks, template engines can also be very useful in integration projects. In a recent integration project that involved a lot of XML data exchange, I discovered the Java template engine library FreeMarker. This Open Source library is a generic template engine that generates any kind of output, such as HTML, XML or any other user-defined format, based on a given template.
"[...]FreeMarker is designed to be practical for the generation of HTML Web pages, particularly by servlet-based applications following the MVC (Model View Controller) pattern. The idea behind using the MVC pattern for dynamic Web pages is that you separate the designers (HTML authors) from the programmers. Everybody works on what they are good at. Designers can change the appearance of a page without programmers having to change or recompile code, because the application logic (Java programs) and page design (FreeMarker templates) are separated. Templates do not become polluted with complex program fragments. This separation is useful even for projects where the programmer and the HTML page author is the same person, since it helps to keep the application clear and easily maintainable[...]"
I think HTML is only one application area of FreeMarker. Consider 3rd party systems providing APIs that consume XML data or their own data structures. Constructing their data format directly in the code is a grubby approach, and furthermore the code becomes unmaintainable. Using such a library you can manage your data exchange templates outside your code and produce the final data with the template engine. I see such template engines as classical transformers, as in an Enterprise Service Bus:

In the above example you see that you can use placeholders in your template files, which are replaced by the real data when the transformation takes place. FreeMarker provides enhanced constructs such as if statements, loops and more, which can be used in your template files.
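To make the placeholder idea concrete, here is a toy sketch of what such a transformer does internally. This is deliberately NOT the FreeMarker API (the class and method names below are invented for illustration); it only shows the core mechanism of replacing `${...}` placeholders with model data:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration of placeholder substitution as done by template engines.
// This is NOT the FreeMarker API -- just a sketch of the underlying idea.
public class ToyTemplate {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)\\}");

    // Replaces every ${name} in the template with the value from the model.
    public static String render(String template, Map<String, String> model) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "<order><customer>${customer}</customer></order>";
        // prints <order><customer>Alice</customer></order>
        System.out.println(render(template, Map.of("customer", "Alice")));
    }
}
```

A real engine like FreeMarker adds, on top of this substitution, the if statements, loops and other constructs mentioned above.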

Template engines are often used in Web Frameworks, but they are also very useful when you must produce specific output for other systems.

Thursday, July 02, 2009

[Pub] Mule Tutorial

In the current issue of the Java Magazin I published a tutorial on developing loosely coupled systems with Mule. The tutorial illustrates the usage of an Enterprise Service Bus in an airport domain, where different airport systems communicate with each other over the ESB. In the example I use a set of important Enterprise Integration Patterns and show how these patterns are implemented in Mule. Some of the patterns I used are:
  • Event Driven Consumer
  • Content Based Router
  • Filter
  • Transformation
  • Message Splitter
The Mule transports and connectors I used are:
  • JMS (Active MQ as message broker)
  • Quartz Transport
  • File Transport
  • XMPP transport for instant messaging
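To give an idea of one of the patterns listed above: a Content Based Router inspects a message and forwards it to a destination depending on its content. Stripped of all Mule specifics, the idea can be sketched in plain Java (all names in this sketch are invented for illustration, not taken from the tutorial code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal content-based router: each route pairs a predicate on the
// message with a destination; the first matching route wins.
public class ContentBasedRouter<M> {

    private record Route<M>(Predicate<M> condition, Consumer<M> destination) {}

    private final List<Route<M>> routes = new ArrayList<>();
    private Consumer<M> defaultDestination = m -> {};

    public ContentBasedRouter<M> when(Predicate<M> condition, Consumer<M> destination) {
        routes.add(new Route<>(condition, destination));
        return this;
    }

    public ContentBasedRouter<M> otherwise(Consumer<M> destination) {
        this.defaultDestination = destination;
        return this;
    }

    // Dispatches the message to the first route whose condition matches.
    public void route(M message) {
        for (Route<M> r : routes) {
            if (r.condition().test(message)) {
                r.destination().accept(message);
                return;
            }
        }
        defaultDestination.accept(message);
    }
}
```

In Mule the predicates and destinations are declared in the configuration rather than in code, but the routing decision works along these lines.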
The source code of the tutorial can be downloaded here.

Have Fun!

Monday, June 29, 2009

[Misc] Hot deployment with Mule 3 M1

Some interesting news from the Open Source ESB Mule: the first milestone of the third version of Mule is out and comes with an important new feature: Hot Deployment.

What is the meaning of hot deployment?

Hot deployment is the process of deploying or redeploying service components without having to restart your application container. This is very useful in a production environment where multiple applications are connected over the enterprise service bus, since components can be updated without impacting the users of those applications.

Check out the example on the Mule homepage.

Tuesday, May 26, 2009

[Tech] Kent Beck's JUnit Max

JUnit is a testing framework well known to every Java developer (corresponding ports to other languages exist). Kent Beck and Erich Gamma were the core developers of JUnit, which was published around 2000 as an Open Source framework. It is fair to say that JUnit and its ports had a huge influence on quality assurance and can be found in nearly every modern software project.

Now Kent Beck has announced a new project: JUnit Max. The core concept is "continuous testing". JUnit Max is an Eclipse plugin that starts the unit test execution every time a class is saved and orders the test execution according to the classes currently being worked on and the tests that have failed recently.

In my opinion this seems to be an interesting and logical next step for unit testing frameworks. JUnit Max, however, is not Open Source and follows (in my opinion) a rather strange license model (more can be found on the website). I wonder whether the additional benefit justifies the license fees and this particular model and, more importantly, how long it will take until this functionality is provided by an Open Source solution...

Tuesday, April 21, 2009

[Tech] What about Maven 3

At the last Maven Meetup Jason van Zyl talked about the future of Maven and important milestones of Maven 3, including:
  • Support for incremental builds
  • Changes to the Plugin API
  • Better multi-language support
  • and more
The video and slides about the presentation are available here.

[Misc] Sun and Oracle

Now it finally happened: Oracle bought Sun for 7.4 billion dollars. It sure is a little bit surprising, as the deal with IBM seemed to be settled already. From a developer's point of view, the Oracle deal might be better for the community, although it also has certain risks.

For IBM, Java is strategically very important, so Java would have been "safe" with IBM. Additionally IBM has developed (similar to Sun) a solid Open Source strategy over the last decade which would also have fit Sun. However, a significant amount of their product lines would have overlapped: both have middleware products like WebSphere and the Sun Glassfish project portfolio. Both have a series of database products: mySQL on Sun's side and of course the DB2 line on IBM's side, and a similar story on the OS front: the probably superior Solaris versus IBM's AIX. Finally Sun has the Netbeans IDE as central development tool whereas IBM has Eclipse. I doubt that IBM would have had a lot of interest in doubling all these product lines. Not to mention the Sun hardware.

Now, on paper, Oracle looks much more "compatible" with Sun. True, there are some overlaps in the middleware section. Most "afraid" might be the mySQL folks, as Oracle has already shown some hostility towards mySQL in the past. Then again, once they own the product, they can probably sell it in their database portfolio for the "low-end" market. Java is also important for Oracle, and probably even more important are the operating system Solaris and the Sun hardware, with a tight integration to e.g. the Oracle database. With these assets Oracle can offer "end-to-end" solutions spanning hardware, operating system, storage, database, middleware, web frameworks and an integrated development environment.

What worries me a little bit about Oracle is the lack of experience with the Open Source community. Oracle is in my opinion a rather closed shop compared to IBM and Sun. Maybe Oracle can learn a little bit from Sun's experience here. However, my conclusion is that there is significant potential in the combination of Sun and Oracle (probably more than with Sun/IBM) but also some significant risks in terms of openness and for certain parts of the Sun product line. I am particularly curious about the consequences for the Open Source middleware portfolio, Java and mySQL.

Update: Larry Dignan from zdnet blog writes about mysql:
"Oracle gets to kill MySQL. There’s no way Ellison will let that open source database mess with the margins of his database. MySQL at best will wither from neglect. In any case, MySQL is MyToast."
Well, I would not bet on that (but probably would not start a new project with mySQL either...), but it is for sure an option.

Thursday, April 09, 2009

[Tech] Mavenizing AppEngine!

As I nagged yesterday about the fact that AppEngine has no proper Maven build system, the guys at Sonatype reacted already today ;-)

They describe preliminary attempts at "Mavenizing" AppEngine projects; I hope they will also be able to fix the last remaining issues!

Wednesday, April 08, 2009

[Tech] Google AppEngine (and Java)

AppEngine is a rather recent service from Google. It is probably Google's answer to Amazon's cloud computing platform, yet it targets a very different market. Where Amazon offers a broad range of services and high flexibility (with the disadvantage of higher administration effort), Google targets web developers who want to publish web applications. AppEngine started with a Python environment; since a few days ago the long anticipated Java version (Java 5 and 6) is online. Now what are the benefits of using AppEngine?

Java

First of all, it is possible to deploy applications without having to install, administrate and maintain one's own server (instance). Google provides a runtime environment (sandbox) into which Python or Java applications can be deployed. Access to these applications is (for clients) only possible via http(s). So this is a feasible approach for web applications or RESTful services.

An additional advantage is that Google deals with scaling issues, i.e. it scales the applications dynamically to the demand. This is a significant advantage for startups that have no clear idea about the number of customers they are going to have or how fast this number will grow. For the scaling to work, though, some restrictions have to be considered. Most notably this concerns the persistence strategy: e.g. applications (and libraries!) are allowed to read files from the filesystem, but are not allowed to write. For all persistence issues, the Google datastore has to be used. However, what is nice about the new Java sandbox is the fact that Google apparently tries to follow established standards. For persistence, Java developers can use JDO or JPA, or a low-level interface to the distributed datastore.

I wonder, however, how logging can be handled in that environment. Logging is usually done to a file or to a JDBC datasource. A JDO logging target is something I have not seen before; ideas anyone?

Generally speaking, arbitrary Java libraries can be deployed and used in the AppEngine as long as they do not violate the AppEngine sandbox. Also, due to the scaling approach, not all libs/frameworks will run unchanged. As yet it seems not quite clear, for example, which Java web frameworks will run seamlessly in the AppEngine. Google's web toolkit (GWT) should work; other framework communities are currently testing their frameworks for compatibility, e.g. in Apache Tapestry and the JSF framework Apache MyFaces discussions are running on the mailing lists.

Build Automation and Development Process

The development process is, from my point of view, as with other Google environments like GWT, a mixed blessing. Everything is Eclipse centered, which is not really a good thing: Google provides an Eclipse plugin for the AppEngine including a runtime environment for testing applications offline. This is great for daily development activity, but not for a stable build and testing environment. Unfortunately Maven support (like archetypes) is completely missing at the moment. Google is apparently pretty hostile towards Maven and focuses mostly on IDE integration, which is definitely not a sound way towards modern build automation. IDE "wizard-based" approaches usually turn out to be unstable and problematic, particularly in team projects. This might be nice for a fast hack, but is no basis for a larger project. It seems that some support is given for Apache Ant, though.

Hopefully other developers will provide a Maven integration for the Java AppEngine. With the current approach not even an IDE-less deployment is possible.

Conclusion

So, despite the build issues, I believe that the AppEngine is a great option to deploy web applications in Java or Python. For small applications (small in the sense of "low web traffic"), the AppEngine is free; after exceeding certain thresholds (CPU, storage, bandwidth...) one pays according to the resources needed. Google provides a web interface to set daily financial limits for individual resources, e.g. if one wants to spend a maximum of $5 a day on CPU time and so on.

Looking forward to the first experience reports, particularly with web-frameworks like Wicket, Tapestry or Cocoon.

Wednesday, March 25, 2009

[Tech] HSQLDB Version 1.9 alpha is out

Finally hsqldb 1.9 is released (alpha, though). This release had been announced for, I believe, nearly a year, and it already seemed to me that hsqldb was a rather dead project. I am glad they made the next round, because in a way I still like that system a lot. Sure, Apache Derby is most likely the superior system, and H2 looks very promising too (but is still, as I understand it, a "one man show" without a community); however, hsqldb has some tiny details that make it very nice: first, it always had a really tiny footprint and was extremely easy to understand and use.

And I particularly liked the feature to fine-tune the memory management, i.e., whether the data should be stored on disk or purely in memory... and this on a per-table basis. Plus, with one simple command it is possible to write the whole database as SQL statements into a file, from which it is then loaded again. A feature that is e.g. missing in Derby. This often turned out handy during the development phase.

Now for version 1.9 they seem to have rewritten significant parts of the software and added an impressive list of new features. What I have to figure out is whether they have finally implemented proper transaction isolation. In my opinion this was (besides the single-threaded kernel) the biggest issue in the previous versions, where dirty reads could not be avoided. I am a little bit confused by the announcement(s) now: they wrote that they have rewritten the core; however, in a forum posting the developers announced that transaction isolation is not handled in the new release 1.9 but is planned for 2.0. The news announcements on SourceForge are a little bit confusing to me. Does anyone have a better idea about this issue?

Anyway, good luck for the stabilisation phase of the new release!

Friday, January 23, 2009

[Tech] An easy to use XML Serializer

XML processing is an important part of present software systems, especially when communicating with other software components in the IT infrastructure. Pretty often you must provide your object data as XML. The Open Source market provides a wide range of XML tools, above all XML mapping tools like Castor, JAXB and others. A very interesting and compact tool, existing since 2004, is XStream, hosted on Codehaus. XStream is a proven XML serialization library and provides the following key features:
  • Easy to use API (see example)
  • You do not need explicit mapping files, unlike other serialization tools
  • Good performance
  • Full object graph support
  • You can modify the XML output
Let us consider a simple business object, Person, implemented as a POJO (taken from the XStream homepage):

public class Person {
    private String firstname;
    private String lastname;
    private PhoneNumber phone;
    private PhoneNumber fax;
    // ... constructors and methods
}

public class PhoneNumber {
    private int code;
    private String number;
    // ... constructors and methods
}
In order to get an XML representation of the Person object we simply use the XStream API. We also set alias names which are used in the output XML.
XStream xstream = new XStream();
xstream.alias("person", Person.class);
xstream.alias("phonenumber", PhoneNumber.class);
String resultXml = xstream.toXML(myPerson);
When we create a new instance of the Person object and serialize it via XStream (toXML) we get the following XML result. As we can see, our alias names are used.

<person>
  <firstname>Joe</firstname>
  <lastname>Walnes</lastname>
  <phone>
    <code>123</code>
    <number>1234-456</number>
  </phone>
  <fax>
    <code>123</code>
    <number>9999-999</number>
  </fax>
</person>

The example illustrates that the framework is very compact and easy to use. Look at the Two Minute Tutorial on the XStream homepage for more examples. You can also implement custom converters and transformation strategies to adapt XStream to your requirements.

Have fun with XStream.

Saturday, December 20, 2008

[Pub] Mule IDE

I published an article about the new Mule IDE in the current issue of the Eclipse Magazin. In the article I give an overview of Mule and how the IDE supports developers in modelling their Mule applications. The IDE provides the following features:
  • Mule project wizard
  • Mule runtime configuration (you can define different Mule runtimes)
  • Graphical Mule Configuration Editor
  • Start your Mule Server from your IDE
More information about the Mule IDE can be found on the Mule IDE homepage.

Tuesday, December 09, 2008

[Misc] Glassfish

I recently took a closer look at the (Sun) Glassfish J2EE server. I never took it as a serious competitor in the field, as I had the impression it was just a reference implementation from Sun... However, I had to change my opinion. In recent years the Glassfish community seems to have worked hard on their baby, and currently it seems to be a solid competitor in the field.

The Glassfish universe "not only" contains a J2EE server, but actually a set of enterprise tools such as a message broker, a clustering framework, an enterprise service bus (JBI compatible), a library to implement SIP applications and the like. Additionally it is well supported by the Netbeans IDE. The recent (preview) version contains a J2EE runtime that additionally supports scripting languages like Ruby and Groovy and is based on the OSGi framework.

What I additionally like is the fact that Glassfish comes with a decent installation tool, provides a solid web-based administration interface and seems to be reasonably well documented. And, of course, the whole stack is Open Source.

I must say, I am quite impressed so far. Any comments on that one?

Wednesday, November 12, 2008

[Arch] RESTful applications with NetKernel

The architectural style REST has gained some popularity and is often brought up against SOAP for interoperable web services. REST stands for Representational State Transfer and has some characteristics that distinguish it from other architectural styles:
  • Resources such as a person, an order, or a collection of the ten most recent stock quotes are identified by a (not necessarily unique) URL.
  • Requests for a resource return a representation of the resource (e.g. an HTML page describing the person) rather than an object that IS the resource. A resource representation represents the current state of the resource and as such is immutable.
  • Representations typically contain links to other resources, so that the application can be discovered interactively.
  • There is typically a fixed and rather limited set of actions that can be called upon resources to retrieve or manipulate them. HTTP is the best known example of a RESTful system and defines e.g. the GET, PUT, POST and DELETE actions.
Applications based on REST are typically very extensible, provide good caching support, and can be easily mashed up to bigger applications.
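The uniform interface described above can be made concrete with a tiny sketch: every resource is addressed by a URI and manipulated only through the same fixed set of verbs, no matter what the resource is (plain Java, all names invented for illustration; a real system would of course return typed representations rather than strings):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of REST's uniform interface: resources are addressed by URI and
// manipulated only through a fixed verb set (GET/PUT/DELETE shown here).
public class ResourceStore {

    private final Map<String, String> representations = new HashMap<>();

    // GET: return the current representation of the resource, or null.
    public String get(String uri) {
        return representations.get(uri);
    }

    // PUT: create or replace the resource state at the given URI.
    public void put(String uri, String representation) {
        representations.put(uri, representation);
    }

    // DELETE: remove the resource.
    public void delete(String uri) {
        representations.remove(uri);
    }
}
```

Note that adding a new resource type requires no new verbs, only new URIs; this is one reason REST applications are easy to extend and mash up.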

NetKernel

Using the RESTful application pattern in non web based applications is currently not very well supported by programming languages and frameworks. NetKernel is an open source framework designed to provide a simple to use environment to program RESTful applications.

Its architecture is rather simple: programmers write modules and register them with the kernel. Each module registers its address space, which states which logical addresses (URIs) the module will handle and which Java class, script (Python, JavaScript, Groovy, …), or static resource will act upon the request and return a resource representation. A module can also register rewrite rules that translate from one address to another.

Resources within NetKernel are accessed from the outside via Transports. Each module can have Transports that monitor for external system events (e.g. JMS events, HTTP requests, CRON events, etc.), translate these events into NetKernel requests, and place these requests into the NetKernel infrastructure, which routes each request to the appropriate resource.

NetKernel supports a wide range of scripting languages and uses resource representation caching to speed things up transparently for the developer. The internal request-response dispatching is done asynchronously, so callers can easily state that they do not care for an answer after 10 seconds, are not interested in the response at all, or place several requests first and then wait for the responses coming back. REST is most often associated with HTTP; with NetKernel one can apply the REST architectural style also to applications that do not use HTTP, as it is completely decoupled from the HTTP stack.
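This asynchronous dispatching style can be approximated with standard Java futures: a caller issues requests first and then decides how long it is willing to wait for each response. The sketch below uses java.util.concurrent, not NetKernel's actual API, and the request method is a stand-in that returns a dummy representation:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Fire requests asynchronously; the caller chooses whether (and how long)
// to wait for each response -- the pattern NetKernel applies internally.
public class AsyncRequestDemo {

    // Stand-in for an asynchronous resource request.
    static CompletableFuture<String> request(ExecutorService pool, String uri) {
        return CompletableFuture.supplyAsync(() -> "representation of " + uri, pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Place several requests first, then collect the responses.
        CompletableFuture<String> a = request(pool, "/stock/quotes");
        CompletableFuture<String> b = request(pool, "/person/42");
        try {
            // The caller only cares about answers arriving within 10 seconds.
            System.out.println(a.get(10, TimeUnit.SECONDS));
            System.out.println(b.get(10, TimeUnit.SECONDS));
        } catch (TimeoutException | java.util.concurrent.ExecutionException
                 | InterruptedException e) {
            System.out.println("gave up waiting");
        } finally {
            pool.shutdown();
        }
    }
}
```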

Compared to other REST frameworks such as Restlet, NetKernel is extremely well documented and several large sample applications can be downloaded from their homepage to get started quickly.

Related Links
Benedikt Eckhard (edited by Alexander Schatten)

Sunday, November 02, 2008

[Arch] Mule 2 and Beyond

For anybody who doesn't know the Open Source ESB Mule, I can recommend the presentation by Ross Mason held at the Javapolis conference. In this presentation Ross gives an overview of Mule, including the component architecture and new features in Mule 2. Other relevant topics are:
  • Develop services in Mule
  • Dealing with web services in Mule
  • Exception strategies
  • Transaction Management
  • Scalability
  • Projects around Mule: Mule HQ and Mule IDE
He also points to the projects on MuleForge, which provides Mule users with interesting enhancement modules and connectors, like LDAP or SAP.

Monday, October 20, 2008

[Conf] CEE-SET Conference

Last week I attended the CEE-SET conference in Brno, where I presented a paper written by Robert Thullner, Josef Schiefer and myself: we analyse the application of Open Source frameworks in implementing enterprise integration patterns. For this purpose a series of scenarios was implemented with (combinations of) different frameworks like Apache ActiveMQ, Apache Camel, Apache ServiceMix and Mule.

The paper is available for download.

Monday, October 06, 2008

[Arch] OpenSource ESBs

Over the last couple of years the major Open Source ESBs, including Mule and ServiceMix, have been extended and are used in critical business solutions. Tijs Rademakers and Jos Dirksen offer a book which gives an overview of Open Source ESBs and of the combinations of Open Source technologies that are used with them. The main open source solutions covered in this book are Mule and ServiceMix; therefore most of the examples in the book are based on these two technologies. Other Open Source ESBs that are covered are Apache Synapse, Open ESB and the new integration framework from Spring, called Spring Integration.

In the TechBrief the authors mention that all Open Source ESBs focus on Enterprise Integration Patterns. If you understand these patterns it is very easy to understand the implementation and handling of ESBs.

The book is divided into three parts. The first part concentrates on readers who are not yet familiar with ESBs:
  • Overview of ESB functionality and which Open Source ESBs are available on the market
  • Taking a deep look into the Mule and ServiceMix architecture
  • Installation of Mule and ServiceMix and how to run them
The second part focuses on the ESB core functionality, which covers some of the Enterprise Integration Patterns. Here the reader gets connector examples for e.g. JMS, JDBC, POP3 and Web Services.

The third part covers case studies and also illustrates integration scenarios with BPM engines, like jBPM and Apache ODE.

In the tech brief there was also a short comparison between Mule and ServiceMix. When to use which one is hard to say; it depends on your requirements. But in this interview one of the authors said that in a web service based architecture the JBI approach is often the better choice; Mule, however, is very often used because you can also transfer plain Java objects, which is often very comfortable and faster. They also talk about the integration of legacy systems, which is sometimes easier with Mule, because with ServiceMix all messages must be transformed into XML.

You can download chapter 1 and chapter 4 from the book homepage.

Tuesday, September 30, 2008

[Arch] A Comparative Analysis of State-of-the-Art Component Frameworks

Andreas Pieber and Jakob Spoerk wrote a thorough and very good thesis about software components and Java-based component frameworks. The authors introduce component-based software development, derive criteria to compare frameworks and eventually discuss OSGi, Spring, J2EE, SCA, and JBI on an individual basis and in connection with each other as some problems require the combination of several component frameworks.

Download the thesis here.

Thursday, September 18, 2008

[Arch] Pattern Based Development of Business Applications

In a recent series of articles Jos Dirksen and Tijs Rademakers describe "pattern based development" on the basis of Open Source middleware (ESBs). Specifically, their first article describes how to implement and integrate applications using Mule, and the second article gives a good introduction to the Java Business Integration standard (JBI) and its implementation ServiceMix plus a message broker.

Thursday, June 12, 2008

[Tech] Update on "Maven: The Definitive Guide"

I am happy that the guys from Sonatype are continuously improving their free book on the "de facto standard" Apache Maven build-automation framework: "Maven: The Definitive Guide". The book covers most topics typical Maven users will encounter, including generation of documentation (site) and writing Maven plugins (mojos).

I think this book is very useful for the newbie as well as for more experienced Java developers. The book is frequently updated and available for online reading and as PDF download; in the recent update they put their book under a Creative Commons license.

Tuesday, April 29, 2008

[Tech] Database Migration

I found an interesting project, migrate4j, which was introduced at Javalobby today. The idea behind this tool is to alleviate the issues that come up when applications are developed against a relational database and the database schema changes between versions. I.e., databases at customers' sites or used by other developers have to be modified to the needs of the new version of the software; possibly you even want to downgrade again.

The main page of the project already gives a good insight into the functionality of this tool. The idea is to describe "up" and "down" grading steps in Java classes that can be executed within the build automation cycle. Up and down are relative to the current version of the database. So it should be possible to up- and downgrade the database to the desired level automatically when needed.
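The concept of up/down steps is easy to sketch independently of migrate4j's concrete API (the interface and class names below are invented for illustration, not taken from the project):

```java
import java.util.List;

// Sketch of versioned schema migrations: each step knows how to apply
// and how to undo itself, so the schema can move to any target version.
interface Migration {
    void up();      // apply the schema change
    void down();    // revert it
}

public class Migrator {

    private final List<Migration> steps;   // index i = migration to version i+1
    private int currentVersion;            // 0 = empty schema

    public Migrator(List<Migration> steps, int currentVersion) {
        this.steps = steps;
        this.currentVersion = currentVersion;
    }

    // Runs up() or down() steps until the target version is reached.
    public void migrateTo(int target) {
        while (currentVersion < target) {
            steps.get(currentVersion).up();
            currentVersion++;
        }
        while (currentVersion > target) {
            currentVersion--;
            steps.get(currentVersion).down();
        }
    }

    public int version() { return currentVersion; }
}
```

In the real tool the up and down steps would of course issue DDL statements against the database; hooked into the build cycle, such a migrator can bring any database to the level the software expects.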

A very interesting idea; however, I wonder why there are not more tools like that around. Everyone developing database applications fights with such issues, I suppose. Have I overlooked such tools? Recommendations?