Setting up JNDI with Jetty (Embedded)

I was running embedded Jetty in my development workspace to save time on the tedious cycle of compiling and deploying. I had not worked with Jetty much, and its ease of use got me hooked. I needed to set up JNDI in order to retrieve a connection pool for my database-related work. Though comprehensive documentation exists in some places, most of it is scattered, so this post is intended to be your one-stop reference for setting up JNDI with Jetty. If it falls short, please leave a comment and I will be glad to help you out. So, starting off, first let us see how to set up Jetty to run as an embedded server. The folder structure of my Eclipse project is as follows: the etc folder holds all the configuration files required by Jetty. You can download Jetty from here. For this example I have used jetty-6.1.26. Include the following jars from the given folder locations:

lib: jetty-x.x.xx.jar, jetty-util-x.x.xx.jar, servlet-api-x.x.jar
lib/plus: jetty-plus-x.x.xx.jar
lib/naming: jetty-naming-x.x.xx.jar

For my example I have set up MySQL, so the mysql-connector jar is also on my library path. Copy all the files in your Jetty installation’s etc directory to the etc directory of your Eclipse project. In order to enable JNDI, we first need to include jetty-plus. There are several ways to do this, such as providing it as a run-time argument, including it within your own jetty-env.xml residing in WEB-INF, or copying the required XML snippets from jetty-plus.xml into your jetty.xml. I have chosen the latter.
Hence, I have included the following snippet within my jetty.xml:

<Array id="plusConfig" type="java.lang.String">
  <Item>org.mortbay.jetty.webapp.WebInfConfiguration</Item>
  <Item>org.mortbay.jetty.plus.webapp.EnvConfiguration</Item>
  <Item>org.mortbay.jetty.plus.webapp.Configuration</Item>
  <Item>org.mortbay.jetty.webapp.JettyWebXmlConfiguration</Item>
  <Item>org.mortbay.jetty.webapp.TagLibConfiguration</Item>
</Array>

<Call name="addLifeCycle">
  <Arg>
    <New class="org.mortbay.jetty.deployer.WebAppDeployer">
      <Set name="contexts"><Ref id="Contexts"/></Set>
      <Set name="webAppDir"><SystemProperty name="jetty.home" default="."/>/webapps</Set>
      <Set name="parentLoaderPriority">false</Set>
      <Set name="extract">true</Set>
      <Set name="allowDuplicates">false</Set>
      <Set name="defaultsDescriptor"><SystemProperty name="jetty.home" default="."/>/etc/webdefault.xml</Set>
      <Set name="configurationClasses"><Ref id="plusConfig"/></Set>
    </New>
  </Arg>
</Call>

Next up, you need to add the XML fragment for your data source to jetty.xml. I have added the snippet required for MySQL; for any other database, please check this link.

<New id="myds" class="org.mortbay.jetty.plus.naming.Resource">
  <Arg>jdbc/MySQLDS</Arg>
  <Arg>
    <New class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
      <Set name="Url">jdbc:mysql://localhost:3306/test</Set>
      <Set name="User">root</Set>
      <Set name="Password">password</Set>
    </New>
  </Arg>
</New>

Now that we have set up everything, all you need to do is run Jetty in your embedded environment.
The following code shows how to run Jetty in embedded mode from your main class:

import java.io.File;

import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.DefaultHandler;
import org.mortbay.jetty.handler.HandlerList;
import org.mortbay.jetty.webapp.WebAppContext;
import org.mortbay.xml.XmlConfiguration;

public class JettyTest {

    public static void main(String[] args) throws Exception {
        Server jetty = new Server();
        String[] configFiles = {"etc/jetty.xml"};
        for (String configFile : configFiles) {
            XmlConfiguration configuration =
                new XmlConfiguration(new File(configFile).toURI().toURL());
            configuration.configure(jetty);
        }
        WebAppContext appContext = new WebAppContext();
        appContext.setContextPath("/myapp");
        File rd = new File("path_to_your_war_file");
        appContext.setWar(rd.getAbsolutePath());
        HandlerList handlers = new HandlerList();
        handlers.setHandlers(new Handler[]{appContext, new DefaultHandler()});
        jetty.setHandler(handlers);
        jetty.start();
    }
}

That’s about it. Now you can look up the data source exposed by Jetty. For convenience, I have configured it with Spring’s JndiObjectFactoryBean. One important aspect to note is the JNDI provider URL and the initial context factory entries required for Jetty.

<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
  <property name="environment">
    <props>
      <prop key="java.naming.factory.initial">org.mortbay.naming.InitialContextFactory</prop>
      <prop key="java.naming.provider.url">org.mortbay.naming</prop>
    </props>
  </property>
</bean>

<bean id="jndiDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiTemplate">
    <ref bean="jndiTemplate"/>
  </property>
  <property name="jndiName">
    <value>jdbc/MySQLDS</value>
  </property>
</bean>

With that you have everything you need to configure JNDI and access it through Spring’s JNDI template. One other thing I was interested in was remote debugging with the Jetty server.
After some searching I found that you need to include the following VM arguments in your run configuration:

-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000

This enables you to remote-debug your application on port 8000. If there are any queries, please leave a comment and I will be more than happy to help. And of course, if you spot any errors, a reply is much appreciated :). Reference: Setting up JNDI with Jetty (Embedded) from our JCG partner Dinuka Arseculeratne at the My Journey Through IT blog.
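As a footnote to the Spring configuration above: the same two environment entries can also be handed to plain JNDI without Spring. This is only a sketch of my own (the class name JndiEnvExample is made up for illustration); the commented-out lookup itself works only when jetty-naming/jetty-plus are on the classpath and the embedded server has bound jdbc/MySQLDS.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class JndiEnvExample {
    public static void main(String[] args) {
        // The same two entries the Spring JndiTemplate above supplies.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.mortbay.naming.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "org.mortbay.naming");

        // With jetty-naming on the classpath, the pooled data source is then:
        // DataSource ds = (DataSource) new InitialContext(env).lookup("jdbc/MySQLDS");

        System.out.println(env.get(Context.INITIAL_CONTEXT_FACTORY));
    }
}
```

Note that Context.INITIAL_CONTEXT_FACTORY and Context.PROVIDER_URL are just the standard constants for the java.naming.factory.initial and java.naming.provider.url keys used in the Spring properties.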

FXML: Custom components using BuilderFactory

When you want to use FXML, you will need to be able to add your own components. That’s fairly easy: you simply add an import statement. Elements in your FXML file that start with a capital letter will be interpreted as instances, and if they’re Java Beans — most important: if they have a parameterless standard constructor — everything is fine. If not, it’s a bit more complicated: you will need to provide a Builder and a BuilderFactory to the loader. As an example, FXExperience Tools uses a nice ColorPicker control that needs a Color passed to its constructor. So in FXML we want to write something like this:

<?import com.fxexperience.javafx.scene.control.colorpicker.ColorPicker?>
<!-- ... -->
<ColorPicker fx:id="colorPicker" id="colorPicker" color="GREEN" />

Now we need to create a BuilderFactory and a Builder:

import com.fxexperience.javafx.scene.control.colorpicker.ColorPicker;

import javafx.fxml.JavaFXBuilderFactory;
import javafx.scene.paint.Color;
import javafx.util.Builder;
import javafx.util.BuilderFactory;

/**
 * @author eppleton
 */
public class ColorPickerBuilderFactory implements BuilderFactory {

    public static class ColorPickerBuilder implements Builder<ColorPicker> {
        private Color color = Color.WHITE;
        private String id = "colorPicker";

        public String getId() { return id; }

        public void setId(String id) { this.id = id; }

        public Color getColor() { return color; }

        public void setColor(Color color) { this.color = color; }

        @Override
        public ColorPicker build() {
            ColorPicker picker = new ColorPicker(color);
            picker.setId(id);
            return picker;
        }
    }

    private JavaFXBuilderFactory defaultBuilderFactory = new JavaFXBuilderFactory();

    @Override
    public Builder<?> getBuilder(Class<?> type) {
        return (type == ColorPicker.class)
                ? new ColorPickerBuilder()
                : defaultBuilderFactory.getBuilder(type);
    }
}

And finally, when loading the FXML, you need to pass the factory to your loader:

(Parent) FXMLLoader.load(
    TestTool.class.getResource("GradientEditorControl.fxml"),
    null,
    new ColorPickerBuilderFactory());

That’s it. It would be cool if I could make SceneBuilder understand that as well. Reference: Add custom components to FXML using BuilderFactory from our JCG partner Toni Epple at the Eppleton blog.

The Greatest Developer Fallacy Or The Wisest Words You’ll Ever Hear?

“I will learn it when I need it“! I’ve heard that phrase a lot over the years; it seems like a highly pragmatic attitude to foster when you’re in an industry as fast-paced as software development. On some level it actually IS quite pragmatic, but on another level I am annoyed by the phrase. It has become a mantra for our whole industry, which hasn’t changed said industry for the better. The problem is this: in the guise of sounding like a wise and practical developer, people use it as an excuse to coast. There is too much stuff to know, and it is necessary to be able to pick certain things up as you go along – part of the job. But there is a difference between having to “pick up” some knowledge as you go along and doing absolutely everything just-in-time. The whole industry has become a bunch of generalists; maybe it has always been this way and I just wasn’t around to see it — either way, I don’t like it. No one wants to invest the time to learn anything really deeply: not computer science fundamentals, not the latest tech you’re working with, not even the language you’ve been coding in every day for the last few years. Why bother? It will be replaced, superseded, marginalised and out of fashion before you’re halfway done. I’ve discussed this with various people many times, but no one seems to really see it as a problem. “Just being pragmatic dude“. In the meantime we’ve all become clones of each other. You want a Java developer, I am a Java developer, you’re a Java developer, my neighbour is a Java developer. What differentiates us from each other – not much! Well, I’ve got some jQuery experience. That’s great, so you know how to build an accordion menu then? Sure, I Google it and steal the best code I find :). In the meantime, if you need to hire a REAL expert (in anything — maybe you’re writing a fancy parser or need to visualise some big data), I hope you’ve stocked up on beer and sandwiches cause you’re gonna be here a while.
Ok, there are ways to differentiate yourself: I have better communication skills, which is why I do better. That’s important too, but developers differentiating themselves based on soft skills rather than developer skills seems a bit twisted. We all communicate really well but the code is a mess :). Hell, I shouldn’t really talk, I am a bit of a generalist too. Of course I’d like to think of myself as a T-shaped individual, but if we’re completely honest, it’s more of a dash-shape or underscore-shape with maybe a few bumps :). To the uninitiated those bumps might look like big giant stalactites – T-shaped indeed. You seem like an expert without ever being an expert — just one advantage of being in a sea of generalists.

Investing In Your Future

I don’t want to preach about how we should all be investing in our professional future; everybody knows we should be. Most people probably think they are in fact investing: they rock up to work, write a lot of code, maybe even do some reading on the side — surely that must make them an expert in about 10 years, and a senior expert in 20 (I keep meaning to write more about this, one day I’ll get around to it :))? But if that were the way, every old person would be an expert in a whole bunch of stuff, and that is emphatically not the case. Maybe it is just that people don’t know how to build expertise (there is an element of truth to this), but I have a sneaking suspicion that it’s more about lack of desire than lack of knowledge. What was that saying about the will and the way? Totally applicable in this case. I’ve gone completely off-track. “Investing in professional future” is just one of those buzzword things; the mantra is “I will learn it when I need it“. It was good enough for my daddy and it has served me well so far. Let’s apply this thinking to finance: “I will invest my money when I think I need the money“. Somehow it doesn’t quite have the same kind of pragmatic ring to it.
You Don’t Know What You Don’t Know

We’ve all had those moments where you’re going through major pain trying to solve a problem until someone comes along and tells you about algorithm X or technology Y and it makes everything fast and simple. It was lucky that person just happened to be there to show you the “easy” way; otherwise you would have spent days/weeks trying to figure it out and it would have been a mess. You can’t be blamed for this, though: you don’t know what you don’t know. For me, this is where the “I will learn it when I need it” mentality falls over. You can’t learn something if you don’t know it exists. Google goes a long way towards mitigating this problem, but not all the way. There are plenty of problems you will encounter in the wild where you can beat your head against the wall ad infinitum unless you know what class of problem you’re looking at (e.g. if you know a bit about searching and constraint propagation, solving sudoku is easy; otherwise it’s really quite hard). You can’t learn about an algorithm if you’re not aware of it or its applicability. You can’t utilise a technology to solve a problem if you don’t even realise it has that capability. You’re not always going to have someone there to point you in the right direction. I am willing to bet there are a billion lines of code out there right now that could be replaced with a million lines of faster, cleaner, better code simply because whoever wrote them didn’t know what they didn’t know. I seem to be making a case for the opposite side here: if knowing what you don’t know is the ticket, then surely we should be focusing on breadth of knowledge. Superficial awareness of as much stuff as possible should see us through; we’ll be able to recognise the problems when we see them and then learn what we need more deeply. Except it doesn’t work like that — skimming subjects doesn’t allow you to retain anything; our brain doesn’t work that way.
If we don’t reinforce and dig deeper into the concepts, we quickly page that information out as unimportant — it is a waste of time (think back to cramming for exams: how much do you remember the next day?). However, if you focus on building a deeper understanding of a subject, then — in an interesting twist — you will gain broad knowledge as well (which you will actually be able to retain). My grandad is a nuclear physicist. Several decades of working to gain deeper knowledge of the subject have made him an expert, but they have also made him an excellent mathematician, a decent chemist, a pretty good geologist, a fair biologist, etc. Just some empirical evidence that seeking depth leads to breadth as a side-effect.

Can You Learn It Fast Enough

Some stuff just takes a long time to learn. I am confident I can pick up an ORM framework I haven’t seen before without even breaking stride; I’ve used them before, and the concepts are the same. But what if you need to do some speech-to-text conversion? Not quite as simple — not enough background. Hopefully Google will have something for us to copy/paste. That was a bad example; only research boffins at universities need to do that crap. How about building a website then — we all know how to do that. But what if you need to do it for 10 million users a day? We just need to learn everything about scaling; I am sure the users will wait a month or two for us to get up to speed :). Yeah, I am just being stupid, all we need to do is hire an expert and … errr … oh wait, we’re all out of beer and sandwiches.

Why Should I Care

Working with experts is freaking awesome. You may have experienced it before: everything they say is something new and interesting, you learn new tricks with every line of code, you can almost feel your brain expanding :). You want to learn from the experts, so it’s really sad when you can’t find any. Since everyone is only learning when they “need it“, no one can teach anything to anyone.
The chunk of wisdom here is this: you want to work with experts, but the experts also want to work with experts, so what are you doing to make sure the experts want to work with you? Being able to learn something when you need it is a good skill to have, but you cannot let it be your philosophy as a developer. Yes, it is a big industry and you can’t learn everything, so pick something and make sure you know it backwards. If you’re curious enough to follow up on the interesting bits, you’ll find you have a decent grasp of a lot of other stuff at the end. And if you do a good enough job, other super-awesome-smart people are going to want to come and hang around you, cause they’ll be able to learn something from you and you’ll be able to learn much from them. Everybody will be a winner. Reference: The Greatest Developer Fallacy Or The Wisest Words You’ll Ever Hear? from our JCG partner Alan Skorkin at the Skorks blog.

Complex Event Processing – a beginner’s view

Using a Complex Event Processing engine is not so complex. Well, initially at least. A substantial amount of information is available on the web about CEP products and functionality. But if you are like me, you want to test-run a product/application with little patience for reading detailed documentation. So when I was evaluating CEP as an engine for one of our future products, I decided to just try it out using a business scenario I knew from my past experience working with a financial company. For impatient developers like me, what could be better than a free and open-source product? So I decided to use ‘Esper’, an open-source product based on Java, and was able to write the code (merely 3 Java classes) to address the business case below. But first a little about CEP, and a shameless plug of our product. My apologies. :-) Complex Event Processing has been gaining significant ground recently. The benefits of CEP are widely understood in some verticals such as the financial and insurance industries, where it is actively deployed to perform various business-critical tasks. Monitoring, fraud detection and algorithmic trading are some of those critical tasks that depend on CEP to integrate multiple streams of real-time data, identify patterns and generate actionable events for an organization. My current employer, Sybase Inc, is one of the leading suppliers of CEP. Aleri, the Sybase CEP product, is widely used in the financial services industry and is the main component of Sybase’s leading solution, ‘RAP – The Trading Edition’. Aleri is also sold as a separate product. Detailed information about the product is available here: http://www.sybase.com/products/financialservicessolutions/complex-event-processing. The high-level architecture of a CEP application is shown in the diagram below. Now on to the best part.
The business requirement – The important aspect of CEP that fascinates me is its ability to correlate events or data points from different streams, or from within the same data stream. To elaborate, take the example of a retail bank that has a fraud monitoring system in place. The system flags every cash transaction over $10,000 for a manual review: a large cash transaction (a deposit or withdrawal) in an account raises an anti-money-laundering event from the monitoring system. Such traditional monitoring systems can easily be circumvented/exploited by simple tricks such as depositing more than one check with smaller amounts. What happens if an account holder deposits 2 checks of $6,000 in a day, or 5 checks of $2,500 in a day? Nothing — the system can’t catch it. CEP provides a way to define rules with a time-frame criterion. For example, you could specify a rule to raise a flag when someone deposits more than $10,000 in cash within a 12-hour window. Get it? Follow the steps below to see how easy it is to implement CEP to meet this business requirement. Download the latest Esper version (4.5.0 at the time of this writing) from here: http://espertech.com/download/. Unzip the package into a separate folder. Create a Java project and reference the Esper jar files from this folder. Create a standard Java bean for an event — here, a deposit against an account, with name and amount attributes.

package com.sybase.testTools.util;

public class DepositEvent {
    private String accountName;
    private int amount;

    public DepositEvent(String accountName, int amount) {
        this.accountName = accountName;
        this.amount = amount;
    }

    public String getAccountName() {
        return accountName;
    }

    public int getAmount() {
        return amount;
    }
}

The next listing creates an event type, the SQL-like query for the event, and registers a listener on that query.
The code generates an event any time one of the two deposit accounts, AccountA and AccountB, is deposited with more than 100,000 within a time frame of 10 seconds (this is where you specify the time window). Because this is just a test, I have put the event-generation functionality together with the rest of the code, but in real life the deposit amounts would be fed from a deposit transaction processing system over some messaging framework. The code is easy enough to follow. First we create the initial configuration, then we add the type of event we want. A query with the selection criteria is created next: as you can see, the amount is summed over a sliding window of 10 seconds, and an event is created when the total for a particular account in that time frame exceeds 100,000. A listener is created and registered on the query.

package com.sybase.testTools;

import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.sybase.testTools.util.DepositEvent;
import com.sybase.testTools.util.MyListener;

public class EsperTest {
    public static void main(String[] args) {
        try {
            Configuration config = new Configuration();
            config.addEventType("DepositEvent",
                    com.sybase.testTools.util.DepositEvent.class.getName());
            EPServiceProvider epService =
                    EPServiceProviderManager.getDefaultProvider(config);
            String expression =
                "select accountName, sum(amount) from com.sybase.testTools.util.DepositEvent.win:time(10 seconds)"
                + " group by accountName having sum(amount) > 100000";
            EPStatement statement = epService.getEPAdministrator().createEPL(expression);
            MyListener listener = new MyListener();
            statement.addListener(listener);
            int amount = 0;
            for (int i = 0; i < 1000; i++) {
                amount = i;
                DepositEvent event;
                if (i % 2 == 0) {
                    event = new DepositEvent("AccountA", amount);
                } else {
                    event = new DepositEvent("AccountB", amount);
                }
                epService.getEPRuntime().sendEvent(event);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The next listing is the listener. Every time an event is generated within the time window specified in the query, it gets added to the newEvents collection.

package com.sybase.testTools.util;

import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class MyListener implements UpdateListener {
    public void update(EventBean[] newEvents, EventBean[] oldEvents) {
        try {
            if (newEvents == null) {
                return;
            }
            EventBean event = newEvents[0];
            System.out.println("Account: " + event.get("accountName")
                    + ", exceeded the sum, actual " + event.get("sum(amount)"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Easy enough, right? The expression language itself is fairly easy to understand because of its similarity to standard SQL syntax. Although a real-life implementation can become complex depending on the type and number of feeds and events you want to monitor, the product itself is simple to understand. Many of the commercial CEP products offer excellent user interfaces to create the event types, queries and reports. Complex event processing is still a growing field, and the pace of its adoption will only increase as companies try to make sense of all the streams of data flowing in. The amount of semi-structured and other types of data (audio, video) has already surpassed the amount of traditional relational data. It’s easy to gauge the impact of a good CEP application at a time when stock trading companies are already gleaning clues from Twitter feeds. Reference: Complex Event Processing – a beginner’s view from our JCG partner Mahesh Gadgil at the Simple yet Practical blog.
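As a footnote to the example above: Esper's win:time(10 seconds) view is, conceptually, a time-bounded running aggregate. The plain-Java sketch below is my own illustration of that mechanic, not Esper code; the class name SlidingWindowSum and the hard-coded amounts are made up for the example. It expires events that have fallen out of the window, then tests the remaining sum against the threshold.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowSum {
    private final long windowMillis;
    private final Deque<long[]> events = new ArrayDeque<>(); // each entry: {timestamp, amount}
    private long sum;

    SlidingWindowSum(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** Records a deposit and returns the total of deposits still inside the window. */
    long add(long timestamp, long amount) {
        // Expire events older than the window before aggregating, like win:time does.
        while (!events.isEmpty() && timestamp - events.peekFirst()[0] >= windowMillis) {
            sum -= events.pollFirst()[1];
        }
        events.addLast(new long[]{timestamp, amount});
        sum += amount;
        return sum;
    }

    public static void main(String[] args) {
        SlidingWindowSum window = new SlidingWindowSum(10_000); // 10-second window
        System.out.println(window.add(0, 6_000) > 100_000);      // 6,000 in window
        System.out.println(window.add(5_000, 96_000) > 100_000); // 102,000 in window: flag
        System.out.println(window.add(20_000, 2_500) > 100_000); // earlier deposits expired
    }
}
```

The real engine does far more (grouping per account, event delivery to listeners, wall-clock time), but the expiry-then-aggregate loop is the essence of the time window.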

Being a better enterprise architect

Enterprise architects seem to become more and more involved in “trying out new things” or pushing down technology or implementation advice — nay, dictation — without having a dog in the fight, or having to code any part of it. I’ve observed this in quite a few places, both working with the architects as a fellow architect and as a developer. From these observations, I’ve come up with three rules for myself for being a good enterprise architect that I believe are worth sharing and discussing.

#1 Gain the respect of the developers

I would like to generalize and say that developers seem to be the type of people who don’t want to put up with more bullshit than they absolutely have to. So the typical political maneuvering you find in big companies won’t impress developers. That includes salesmanship, PowerPoint presentations, etc. Those skills can be important for relaying a direction or vision, but they’re not going to impress the developers. The most tried and true way to gain their respect is to code with them. Yes, indeed. Good architects code. Bad ones pontificate. And there seem to be *way* more of the latter than the former. Coding your brilliantly “architected” solution will help gain their respect. But it also helps in another area, which is the second rule I follow.

#2 Realize that you cannot design a system on paper

The source code is not the product that you’re engineering; the source code itself is the design. So when I sit in an architecture role, I remind myself that coming up with diagrams and flow visualizations is not the design. It’s a brainstorm to help develop a model in my head. But without putting that model to code, you don’t know how it will truly behave, or how the architected solution should be altered. And believe me, in almost all cases it should be altered. In other words, there should be a feedback loop between the developers and the architects. And if you follow rule #1, you’ll be right there to observe first hand how your solution plays out in code.

#3 Don’t resume-build

Don’t glom onto the latest and shiniest technology and push it onto the developers without putting it through some rigorous, real-life situations. Playing with new technology is fun. I do it all the time, but I do it outside of my day job. Sacrificing the stability of the team, the software, and the business model just because some technology seems cool and Google might hire you if you know it is not a respectable way to go about solving enterprise problems. Even if you’ve seen enough sales presentations about how this new technology is going to be such a magic bullet, resist the temptation to indoctrinate the rest of the team until you’ve put the new technology to real-life software problems in an incubator. I’ve been on both sides of the fence and have worked with a bunch of good developers and architects, and these are my three rules. Anyone want to add anything? Reference: Being a better enterprise architect from our JCG partner Christian Posta at the Christian Posta Software blog.

What is ActiveMQ?

Although the ActiveMQ website already gives a pithy, to-the-point explanation of ActiveMQ, I would like to add some more context to their definition. From the ActiveMQ project’s website: “ActiveMQ is an open sourced implementation of JMS 1.1 as part of the J2EE 1.4 specification.” Here’s my take: ActiveMQ is open-source messaging software that can serve as the backbone for an architecture of distributed applications built upon messaging. The creators of ActiveMQ were driven to create this open-source project for two main reasons:

- The existing solutions available at the time were proprietary and very expensive.
- Developers with the Apache Software Foundation were working on a fully J2EE-compliant application server (Geronimo), and they needed a JMS solution with a license compatible with Apache’s.

Since its inception, ActiveMQ has turned into a strong competitor of the commercial alternatives, such as WebSphereMQ, EMS/TIBCO and SonicMQ, and is deployed in production at some of the top companies in industries ranging from financial services to retail.
Using messaging as an integration or communication style leads to many benefits, such as:

- Integration – applications built with different languages and on different operating systems can integrate with each other
- Location transparency – client applications don’t need to know where the service applications are located
- Reliable communication – the producers/consumers of messages don’t have to be available at the same time, and segments along the route of a message can go down and come back up without preventing the message from reaching the service/consumer
- Scaling – you can scale horizontally by adding more services to handle the messages if too many are arriving
- Asynchronous communication – a client can fire a message and continue with other processing instead of blocking until the service has sent a response; it handles the response message when it is ready
- Reduced coupling – the assumptions made by clients and services are greatly reduced as a result of the previous five benefits; a service can change details about itself, including its location, protocol, and availability, without affecting or disrupting the client

Please see Gregor Hohpe’s description of messaging, or the book he and Bobby Woolf wrote about messaging-based enterprise application integration. There are other advantages as well (hopefully someone can add other benefits or drawbacks in the comments), and ActiveMQ is free, open-source software that can facilitate delivering those advantages and has proven to be highly reliable and scalable in production environments. Reference: What is ActiveMQ? from our JCG partner Christian Posta at the Christian Posta Software blog.
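The asynchronous-communication benefit above is easiest to see in miniature. The sketch below is my own illustration using an in-JVM BlockingQueue as a stand-in for a broker destination; with ActiveMQ the same shape would go through the JMS MessageProducer/MessageConsumer API against a real queue. The producer fires its messages and moves on immediately; the consumer picks them up whenever it starts.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FireAndForget {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a broker destination such as an ActiveMQ queue.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Consumer: may start later than the producer; messages wait on the queue.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("processed: " + queue.take());
                System.out.println("processed: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Producer fires messages and continues immediately; no blocking on a response.
        queue.put("order-1");
        queue.put("order-2");
        System.out.println("producer done");

        // The consumer only starts now, yet no message is lost.
        consumer.start();
        consumer.join();
    }
}
```

A real broker adds what this toy cannot: persistence across restarts, delivery across processes and machines, and transactional semantics. The decoupling in time, however, is exactly the property shown here.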

GlassFish 3.1.2 is Full of MOXy (EclipseLink JAXB)

I am very happy to announce that EclipseLink JAXB (MOXy) is now a JAXB (JSR-222) provider in GlassFish 3.1.2. I would like to thank the EclipseLink and GlassFish committers for all their hard work to make this happen. In this post I will introduce how MOXy can be leveraged to create a JAX-WS service. In future posts I will cover more of the extensions in greater detail. GlassFish can be downloaded from the following link: http://glassfish.java.net/public/downloadsindex.html

Web Service (JAX-WS)

For this post we will implement a simple service that finds a customer by ID. As this is just a "Hello World" type example, the service will always return a customer with the name "Jane Doe".

package blog.jaxws.service;

import javax.jws.*;
import blog.jaxws.model.Customer;

@WebService
public class FindCustomer {

    @WebMethod
    public Customer findCustomer(int id) {
        Customer customer = new Customer();
        customer.setId(id);
        customer.setFirstName("Jane");
        customer.setLastName("Doe");
        return customer;
    }
}

WEB-INF/sun-jaxws.xml

There are multiple ways to specify MOXy as the JAXB provider. My preference is to use the sun-jaxws.xml file located in the WEB-INF directory.

<?xml version="1.0" encoding="UTF-8"?>
<endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
  <endpoint name='FindCustomer'
            implementation='blog.jaxws.service.FindCustomer'
            url-pattern='/FindCustomerService'
            databinding='eclipselink.jaxb'/>
</endpoints>

Model

When MOXy is specified as the JAXB provider we can leverage all of its mapping extensions. In this example we will use @XmlPath to do XPath-based mapping.
package blog.jaxws.model;

import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlPath;

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(propOrder={"firstName", "lastName"})
public class Customer {

    @XmlAttribute
    private int id;

    @XmlPath("personal-info/first-name/text()")
    private String firstName;

    @XmlPath("personal-info/last-name/text()")
    private String lastName;

    public int getId() { return id; }

    public void setId(int id) { this.id = id; }

    public String getFirstName() { return firstName; }

    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }

    public void setLastName(String lastName) { this.lastName = lastName; }
}

WSDL

Below is the WSDL that was generated for this service:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Published by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is Metro/2.2-b13 (branches/2.2-6964; 2012-01-09T18:04:18+0000) JAXWS-RI/2.2.6-promoted-b20 JAXWS/2.2 svn-revision#unknown. -->
<!-- Generated by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is Metro/2.2-b13 (branches/2.2-6964; 2012-01-09T18:04:18+0000) JAXWS-RI/2.2.6-promoted-b20 JAXWS/2.2 svn-revision#unknown. -->
<definitions xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
             xmlns:wsp="http://www.w3.org/ns/ws-policy"
             xmlns:wsp1_2="http://schemas.xmlsoap.org/ws/2004/09/policy"
             xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://service.jaxws.blog/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             targetNamespace="http://service.jaxws.blog/"
             name="FindCustomerService">
  <types>
    <xsd:schema>
      <xsd:import namespace="http://service.jaxws.blog/"
                  schemaLocation="http://www.example.com:8080/Blog-JAXWS2/FindCustomerService?xsd=1"/>
    </xsd:schema>
  </types>
  <message name="findCustomer">
    <part name="parameters" element="tns:findCustomer"/>
  </message>
  <message name="findCustomerResponse">
    <part name="parameters" element="tns:findCustomerResponse"/>
  </message>
  <portType name="FindCustomer">
    <operation name="findCustomer">
      <input wsam:Action="http://service.jaxws.blog/FindCustomer/findCustomerRequest"
             message="tns:findCustomer"/>
      <output wsam:Action="http://service.jaxws.blog/FindCustomer/findCustomerResponse"
              message="tns:findCustomerResponse"/>
    </operation>
  </portType>
  <binding name="FindCustomerPortBinding" type="tns:FindCustomer">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
    <operation name="findCustomer">
      <soap:operation soapAction=""/>
      <input>
        <soap:body use="literal"/>
      </input>
      <output>
        <soap:body use="literal"/>
      </output>
    </operation>
  </binding>
  <service name="FindCustomerService">
    <port name="FindCustomerPort" binding="tns:FindCustomerPortBinding">
      <soap:address location="http://www.example.com:8080/Blog-JAXWS/FindCustomerService"/>
    </port>
  </service>
</definitions>

XML Schema

Below is the XML schema referenced by the WSDL that was generated for the model. Notice how it includes the "personal-info" element that was specified in the @XmlPath annotation.
<?xml version="1.0" encoding="UTF-8"?> <!-- Published by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is Metro/2.2-b13 (branches/2.2-6964; 2012-01-09T18:04:18+0000) JAXWS-RI/2.2.6-promoted-b20 JAXWS/2.2 svn-revision#unknown. --> <xsd:schema xmlns:ns0="http://service.jaxws.blog/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="http://service.jaxws.blog/"> <xsd:complexType name="findCustomerResponse"> <xsd:sequence> <xsd:element name="return" type="ns0:customer" minOccurs="0" /> </xsd:sequence> </xsd:complexType> <xsd:complexType name="findCustomer"> <xsd:sequence> <xsd:element name="arg0" type="xsd:int" /> </xsd:sequence> </xsd:complexType> <xsd:complexType name="customer"> <xsd:sequence> <xsd:element name="personal-info" minOccurs="0"> <xsd:complexType> <xsd:sequence> <xsd:element name="first-name" type="xsd:string" minOccurs="0" /> <xsd:element name="last-name" type="xsd:string" minOccurs="0" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> <xsd:attribute name="id" type="xsd:int" use="required" /> </xsd:complexType> <xsd:element name="findCustomerResponse" type="ns0:findCustomerResponse" /> <xsd:element name="findCustomer" type="ns0:findCustomer" /> </xsd:schema>Service Request Below is what a request to our service looks like: <?xml version="1.0" encoding="UTF-8"?> <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"> <S:Header/> <S:Body> <ns2:findCustomer xmlns:ns2="http://service.jaxws.blog/"> <arg0>123</arg0> </ns2:findCustomer> </S:Body> </S:Envelope>Service Response The response leverages the @XmlPath annotation we used on the Customer class to map the firstName and lastName properties to XML. 
<?xml version="1.0" encoding="UTF-8"?> <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"> <S:Body> <ns0:findCustomerResponse xmlns:ns0="http://service.jaxws.blog/"> <return id="123"> <personal-info> <first-name>Jane</first-name> <last-name>Doe</last-name> </personal-info> </return> </ns0:findCustomerResponse> </S:Body> </S:Envelope>Further Reading If you enjoyed this post, then you may be interested in:EclipseLink MOXy is the JAXB Provider in WebLogic Server 12c XPath Based Mapping – Geocode Example Mapping Objects to Multiple XML Schemas – Weather Example MOXy’s XML Metadata in a JAX-RS Service JPA Entities to XML – Bidirectional RelationshipsReference: GlassFish 3.1.2 is Full of MOXy (EclipseLink JAXB) from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog....
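To sanity-check a response like the one above, the SOAP body can be parsed with the JDK's built-in DOM API. The following standalone sketch (class and method names are mine, not from the original post) inlines the response and extracts the fields that the @XmlPath annotations mapped:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ResponseCheck {
    // The SOAP response from the article, inlined for a self-contained check
    public static final String SAMPLE =
        "<S:Envelope xmlns:S=\"http://schemas.xmlsoap.org/soap/envelope/\"><S:Body>"
      + "<ns0:findCustomerResponse xmlns:ns0=\"http://service.jaxws.blog/\">"
      + "<return id=\"123\"><personal-info>"
      + "<first-name>Jane</first-name><last-name>Doe</last-name>"
      + "</personal-info></return></ns0:findCustomerResponse></S:Body></S:Envelope>";

    // Returns "firstName/id" pulled out of the response
    public static String extract(String soap) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(soap.getBytes(StandardCharsets.UTF_8)));
            // @XmlPath put the names under personal-info; the id became an attribute
            String first = doc.getElementsByTagName("first-name").item(0).getTextContent();
            String id = ((Element) doc.getElementsByTagName("return").item(0)).getAttribute("id");
            return first + "/" + id;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(extract(SAMPLE));  // Jane/123
    }
}
```

Running it confirms that the "Jane" value really does travel inside the nested personal-info element rather than as a direct child of the return element.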

ANTLR Tutorial – Hello Word

Antlr stands for ANother Tool for Language Recognition. The tool is able to generate a compiler or interpreter for any computer language. Besides the obvious uses, e.g. the need to parse a real ‘big’ programming language such as Java, PHP or SQL, it can help with smaller, more common tasks. It is useful any time you need to evaluate expressions unknown at compile-time or to parse non-trivial user input or files in a weird format. Of course, it is possible to create a custom hand-made parser for any of these tasks. However, it usually takes much more time and effort. A little knowledge of a good parser generator may turn these time-consuming tasks into easy and fast exercises. This post begins with a small demonstration of ANTLR's usefulness. Then, we explain what ANTLR is and how it works. Finally, we show how to compile a simple ‘Hello word!’ language into an abstract syntax tree. The post also explains how to add error handling and how to test the language. The next post shows how to create a real expression language. Real World Examples ANTLR seems to be popular in the open source world. Among others, it is used by Apache Camel, Apache Lucene, Apache Hadoop, Groovy and Hibernate. They all needed a parser for a custom language. For example, Hibernate uses ANTLR to parse its query language HQL. All those are big frameworks and thus more likely to need a domain-specific language than a small application. The list of smaller projects using ANTLR is available on its showcase list. We also found one stackoverflow discussion on the topic. To see where ANTLR could be useful and how it could save time, try to estimate the following requirements:Add a formula calculator into an accounting system. It will calculate values of formulas such as (10 + 80)*sales_tax. Add an extended search field into a recipe search engine. 
It will search for recipes matching expressions such as (chicken and orange) or (no meat and carrot).Our safe estimate is a day and a half including documentation, tests, and integration into the project. ANTLR is worth looking at if you are facing similar requirements and made a significantly higher estimate. Overview ANTLR is a code generator. It takes a so-called grammar file as input and generates two classes: a lexer and a parser. The lexer runs first and splits the input into pieces called tokens. Each token represents a more or less meaningful piece of input. The stream of tokens is passed to the parser, which does all the necessary work. It is the parser that builds the abstract syntax tree, interprets the code or translates it into some other form. The grammar file contains everything ANTLR needs to generate a correct lexer and parser: whether it should generate Java or Python classes, whether the parser generates an abstract syntax tree, assembler code or directly interprets the code, and so on. As this tutorial shows how to build an abstract syntax tree, we will ignore the other options in the following explanations. Most importantly, the grammar file describes how to split input into tokens and how to build a tree from tokens. In other words, the grammar file contains lexer rules and parser rules. Each lexer rule describes one token: TokenName: regular expression;Parser rules are more complicated. The most basic version is similar to a lexer rule: ParserRuleName: regular expression;They may contain modifiers that specify special transformations on input, roots and children in the resulting abstract syntax tree, or actions to be performed whenever the rule is used. Almost all work is usually done inside parser rules. Infrastructure First, we show tools that make development with ANTLR easier. Of course, nothing of what is described in this chapter is necessary. All examples work with Maven, a text editor and an internet connection only. The ANTLR project produced a stand-alone IDE, an Eclipse plugin and an IDEA plugin. We did not find a NetBeans plugin. 
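To make the lexer/parser split concrete, here is a hand-rolled sketch (my own illustration, not ANTLR output) of what a generated lexer does for the grammar built later in this tutorial: it turns the character stream 'Hello word!' into a SALUTATION token followed by an ENDSYMBOL token, and complains about anything else.

```java
import java.util.ArrayList;
import java.util.List;

public class TinyLexer {
    // Token types mirroring the two lexer rules of the tutorial grammar
    public enum Type { SALUTATION, ENDSYMBOL }

    // Splits the input into a stream of tokens, as a generated lexer would
    public static List<Type> tokenize(String input) {
        List<Type> tokens = new ArrayList<>();
        int pos = 0;
        while (pos < input.length()) {
            if (input.startsWith("Hello word", pos)) {
                tokens.add(Type.SALUTATION);
                pos += "Hello word".length();
            } else if (input.charAt(pos) == '!') {
                tokens.add(Type.ENDSYMBOL);
                pos++;
            } else {
                // roughly what ANTLR reports as "no viable alternative"
                throw new IllegalArgumentException("no viable alternative at " + pos);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Hello word!"));  // [SALUTATION, ENDSYMBOL]
    }
}
```

A real generated lexer produces token objects carrying text and positions rather than bare enum values, and the parser then consumes that stream to build the tree; this sketch only shows the division of labor.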
ANTLRWorks The stand-alone IDE is called ANTLRWorks. Download it from the project download page. ANTLRWorks is a single jar file; use the java -jar antlrworks-1.4.3.jar command to run it. The IDE has more features and is more stable than the Eclipse plugin. Eclipse Plugin Download and unpack ANTLR v3 from the ANTLR download page. Then, install the ANTLR plugin from the Eclipse Marketplace:Go to Preferences and configure the ANTLR v3 installation directory:To test the configuration, download the sample grammar file and open it in Eclipse. It will open in the ANTLR editor. The editor has three tabs:Grammar – a text editor with syntax highlighting, code completion and so on. Interpreter – compiles test expressions into syntax trees; it may produce different results than the generated parser. It tends to throw failed predicate exceptions on correct expressions. Railroad View – paints nice graphs of your lexer and parser rules.An Empty Project – Maven Configuration This chapter shows how to add ANTLR to a Maven project. If you use Eclipse and do not have the m2eclipse plugin installed yet, install it from the http://download.eclipse.org/technology/m2e/releases update site. It will make your life much easier. Create Project Create a new Maven project and specify maven-archetype-quickstart on the ‘Select an Archetype’ screen. If you do not use Eclipse, the command mvn archetype:generate achieves the same. Dependency Add the ANTLR dependency into pom.xml: org.antlr antlr 3.3 jar compileNote: As ANTLR does not have a history of being backward-compatible, it is better to specify the required version. Plugins The ANTLR Maven plugin runs during the generate-sources phase and generates both lexer and parser Java classes from grammar (.g) files. Add it into pom.xml: org.antlr antlr3-maven-plugin 3.3 run antlr generate-sources antlrCreate the src/main/antlr3 folder. The plugin expects all grammar files there. Generated files are put into the target/generated-sources/antlr3 directory. 
As this directory is not on the default Maven build path, we use build-helper-maven-plugin to add it there: org.codehaus.mojo build-helper-maven-plugin add-source generate-sources add-source ${basedir}/target/generated-sources/antlr3If you use Eclipse, you have to update the project configuration: right click on the project -> ‘maven’ -> ‘Update Project Configuration’. Test It Invoke Maven to test the project configuration: right click on the project -> ‘Run As’ -> ‘Maven generate-sources’. Alternatively, use the mvn generate-sources command. The build should be successful. Console output should contain the antlr3-maven-plugin output: [INFO] --- antlr3-maven-plugin:3.3:antlr (run antlr) @ antlr-step-by-step --- [INFO] ANTLR: Processing source directory C:\meri\ANTLR\workspace\antlr-step-by-step\src\main\antlr3 [INFO] No grammars to process ANTLR Parser Generator Version 3.3 Nov 30, 2010 12:46:29It should be followed by the build-helper-maven-plugin output: [INFO] --- build-helper-maven-plugin:1.7:add-source (add-source) @ antlr-step-by-step --- [INFO] Source directory: C:\meri\ANTLR\workspace\antlr-step-by-step\target\generated-sources\antlr3 added.The result of this phase is located on GitHub, tag 001-configured_antlr. Hello Word We will create the simplest possible language parser – a hello word parser. It builds a small abstract syntax tree from a single expression: ‘Hello word!’. We will use it to show how to create a grammar file and generate ANTLR classes from it. Then, we will show how to use the generated files and create a unit test. First Grammar File The antlr3-maven-plugin searches the src/main/antlr3 directory for grammar files. It creates a new package for each sub-directory with a grammar and generates parser and lexer classes into it. As we wish to generate classes into the org.meri.antlr_step_by_step.parsers package, we have to create the src/main/antlr3/org/meri/antlr_step_by_step/parsers directory. The grammar name and file name must be identical, and the file must have the .g suffix. 
Moreover, each grammar file begins with a grammar name declaration. Our S001HelloWord grammar begins with the following line: grammar S001HelloWord;The declaration is always followed by generator options. We are working on a Java project and wish to compile expressions into an abstract syntax tree: options { // antlr will generate java lexer and parser language = Java; // generated parser should create abstract syntax tree output = AST; }ANTLR does not generate a package declaration at the top of the generated classes. We have to use @parser::header and @lexer::header blocks to enforce it. Headers must follow the options block: @lexer::header { package org.meri.antlr_step_by_step.parsers; }@parser::header { package org.meri.antlr_step_by_step.parsers; }Each grammar file must have at least one lexer rule. Each lexer rule must begin with an upper case letter. We have two rules: the first defines a salutation token, the second defines an endsymbol token. The salutation must be ‘Hello word’ and the endsymbol must be ‘!’. SALUTATION:'Hello word'; ENDSYMBOL:'!';Similarly, each grammar file must have at least one parser rule. Each parser rule must begin with a lower case letter. We have only one parser rule: any expression in our language must be composed of a salutation followed by an endsymbol. expression : SALUTATION ENDSYMBOL;Note: the order of grammar file elements is fixed. If you change it, the ANTLR plugin will fail. Generate Lexer and Parser Generate the lexer and parser from the command line using the mvn generate-sources command or from Eclipse:Right click on the project. Click ‘Run As’. Click ‘Maven generate-sources’.The ANTLR plugin will create the target/generated-sources/antlr/org/meri/antlr_step_by_step/parsers folder and place the S001HelloWordLexer.java and S001HelloWordParser.java files inside. Use Lexer and Parser Finally, we create the compiler class. 
It has only one public method, which:calls the generated lexer to split input into tokens, calls the generated parser to build an AST from tokens, prints the resulting AST to the console, returns the abstract syntax tree.The compiler is located in the S001HelloWordCompiler class: public CommonTree compile(String expression) { try { //lexer splits input into tokens ANTLRStringStream input = new ANTLRStringStream(expression); TokenStream tokens = new CommonTokenStream( new S001HelloWordLexer( input ) ); //parser generates abstract syntax tree S001HelloWordParser parser = new S001HelloWordParser(tokens); S001HelloWordParser.expression_return ret = parser.expression(); //acquire parse result CommonTree ast = (CommonTree) ret.tree; printTree(ast); return ast; } catch (RecognitionException e) { throw new IllegalStateException("Recognition exception is never thrown, only declared."); } }Note: Do not worry about the RecognitionException declared on the S001HelloWordParser.expression() method. It is never thrown. Testing It We finish this chapter with a small test case for our new compiler. Create the S001HelloWordTest class: public class S001HelloWordTest { /** * Abstract syntax tree generated from "Hello word!" should have an * unnamed root node with two children. First child corresponds to * salutation token and second child corresponds to end symbol token. * * Token type constants are defined in generated S001HelloWordParser * class. */ @Test public void testCorrectExpression() { //compile the expression S001HelloWordCompiler compiler = new S001HelloWordCompiler(); CommonTree ast = compiler.compile("Hello word!"); CommonTree leftChild = ast.getChild(0); CommonTree rightChild = ast.getChild(1);//check ast structure assertEquals(S001HelloWordParser.SALUTATION, leftChild.getType()); assertEquals(S001HelloWordParser.ENDSYMBOL, rightChild.getType()); }}The test will pass successfully. 
It will print the abstract syntax tree to the console: 0 null -- 4 Hello word -- 5 !Grammar in IDE Open S001HelloWord.g in the editor and go to the interpreter tab.Highlight the expression rule in the top left view. Write ‘Hello word!’ into the top right view. Press the green arrow in the top left corner.The interpreter will generate a parse tree:Copy Grammar Each new grammar in this tutorial is based on the previous one. We compiled a list of steps needed to copy an old grammar into a new one. Use them to copy an OldGrammar into a NewGrammar:Copy OldGrammar.g to NewGrammar.g in the same directory. Change the grammar declaration to grammar NewGrammar; Generate the parser and lexer. Create a new class NewGrammarCompiler analogous to the previous OldGrammarCompiler class. Create a new test class NewGrammarTest analogous to the previous OldGrammarTest class.Error Handling No task is really finished without appropriate error handling. Generated ANTLR classes try to recover from errors whenever possible. They do report errors to the console, but there is no out-of-the-box API to programmatically find out about syntax errors. This could be fine if we were building a command-line-only compiler. However, let's assume that we are building a GUI for our language, or using the result as input to another tool. In such a case, we need API access to all generated errors. In the beginning of this chapter, we will experiment with the default error handling and create test cases for it. Then, we will add naive error handling, which throws an exception whenever the first error happens. Finally, we will move to the ‘real’ solution. It will collect all errors in an internal list and provide methods to access them. As a side product, the chapter shows how to:add a custom catch clause to parser rules, add new methods and fields to generated classes, override generated methods.Default Error Handling First, we will try to parse various incorrect expressions. The goal is to understand the default ANTLR error handling behavior. 
We will create a test case from each experiment. All test cases are located in the S001HelloWordExperimentsTest class. Expression 1: Hello word? The result tree is very similar to the correct one: 0 null -- 4 Hello word -- 5 <missing ENDSYMBOL>Console output contains errors: line 1:10 no viable alternative at character '?' line 1:11 missing ENDSYMBOL at '<eof>'Test case: the following test case passes with no problem. No exception is thrown and the abstract syntax tree node types are the same as in the correct expression. @Test public void testSmallError() { //compile the expression S001HelloWordCompiler compiler = new S001HelloWordCompiler(); CommonTree ast = compiler.compile("Hello word?");//check AST structure assertEquals(S001HelloWordParser.SALUTATION, ast.getChild(0).getType()); assertEquals(S001HelloWordParser.ENDSYMBOL, ast.getChild(1).getType()); }Expression 2: Bye! The result tree is very similar to the correct one: 0 null -- 4 <missing SALUTATION> -- 5 !Console output contains errors: line 1:0 no viable alternative at character 'B' line 1:1 no viable alternative at character 'y' line 1:2 no viable alternative at character 'e' line 1:3 missing SALUTATION at '!'Test case: the following test case passes with no problem. No exception is thrown and the abstract syntax tree node types are the same as in the correct expression. 
@Test public void testBiggerError() { //compile the expression S001HelloWordCompiler compiler = new S001HelloWordCompiler(); CommonTree ast = compiler.compile("Bye!");//check AST structure assertEquals(S001HelloWordParser.SALUTATION, ast.getChild(0).getType()); assertEquals(S001HelloWordParser.ENDSYMBOL, ast.getChild(1).getType()); }Expression 3: Incorrect Expression The result tree has only a root node with no children: 0Console output contains a lot of errors: line 1:0 no viable alternative at character 'I' line 1:1 no viable alternative at character 'n' line 1:2 no viable alternative at character 'c' line 1:3 no viable alternative at character 'o' line 1:4 no viable alternative at character 'r' line 1:5 no viable alternative at character 'r' line 1:6 no viable alternative at character 'e' line 1:7 no viable alternative at character 'c' line 1:8 no viable alternative at character 't' line 1:9 no viable alternative at character ' ' line 1:10 no viable alternative at character 'E' line 1:11 no viable alternative at character 'x' line 1:12 no viable alternative at character 'p' line 1:13 no viable alternative at character 'r' line 1:14 no viable alternative at character 'e' line 1:15 no viable alternative at character 's' line 1:16 no viable alternative at character 's' line 1:17 no viable alternative at character 'i' line 1:18 no viable alternative at character 'o' line 1:19 no viable alternative at character 'n' line 1:20 mismatched input '<EOF>' expecting SALUTATIONTest case: we finally found an expression that results in a different tree structure. @Test public void testCompletelyWrong() { //compile the expression S001HelloWordCompiler compiler = new S001HelloWordCompiler(); CommonTree ast = compiler.compile("Incorrect Expression");//check AST structure assertEquals(0, ast.getChildCount()); }Error Handling in Lexer Each lexer rule ‘RULE’ corresponds to an ‘mRULE’ method in the generated lexer. 
For example, our grammar has two rules: SALUTATION:'Hello word'; ENDSYMBOL:'!';and the generated lexer has two corresponding methods: public final void mSALUTATION() throws RecognitionException { // ... }public final void mENDSYMBOL() throws RecognitionException { // ... }Depending on which exception is thrown, the lexer may or may not try to recover from it. However, each error ends in the reportError(RecognitionException e) method. The generated lexer inherits it: public void reportError(RecognitionException e) { displayRecognitionError(this.getTokenNames(), e); }The result: we have to change either the reportError or the displayRecognitionError method in the lexer. Error Handling in Parser Our grammar has only one parser rule ‘expression': expression : SALUTATION ENDSYMBOL;The expression rule corresponds to the expression() method in the generated parser: public final expression_return expression() throws RecognitionException { //initialization try { //parsing } catch (RecognitionException re) { reportError(re); recover(input,re); retval.tree = (Object) adaptor.errorNode(input, retval.start, input.LT(-1), re); } finally { } //return result; }If an error happens, the parser will:report the error to the console, recover from the error, add an error node (instead of an ordinary node) to the abstract syntax tree.Error reporting in the parser is a little more complicated than error reporting in the lexer: /** Report a recognition problem. * * This method sets errorRecovery to indicate the parser is recovering * not parsing. Once in recovery mode, no errors are generated. * To get out of recovery mode, the parser must successfully match * a token (after a resync). So it will go: * * 1. error occurs * 2. enter recovery mode, report error * 3. consume until token found in resynch set * 4. try to resume parsing * 5. next match() will reset errorRecovery mode * * If you override, make sure to update syntaxErrors if you care about that. 
*/ public void reportError(RecognitionException e) { // if we've already reported an error and have not matched a token // yet successfully, don't report any errors. if ( state.errorRecovery ) { return; } state.syntaxErrors++; // don't count spurious state.errorRecovery = true;displayRecognitionError(this.getTokenNames(), e); }This time we have two possible options:replace the catch clause in a parser rule method with our own handling, or override parser methods.Changing Catch in Parser ANTLR provides two ways to change the generated catch clause in the parser. We will create two new grammars, each demonstrating one way to do it. In both cases, we will make the parser exit upon the first error. First, we can add a rulecatch to the parser rule of the new S002HelloWordWithErrorHandling grammar: expression : SALUTATION ENDSYMBOL; catch [RecognitionException e] { //Custom handling of an exception. Any java code is allowed. throw new S002HelloWordError(":(", e); }Of course, we had to add an import of the S002HelloWordError exception into the headers block: @parser::header { package org.meri.antlr_step_by_step.parsers;//add imports (see full line on Github) import ... .S002HelloWordWithErrorHandlingCompiler.S002HelloWordError; }The compiler class is almost the same as before. It declares a new exception: public class S002HelloWordWithErrorHandlingCompiler extends AbstractCompiler {public CommonTree compile(String expression) { // no change here }@SuppressWarnings("serial") public static class S002HelloWordError extends RuntimeException { public S002HelloWordError(String arg0, Throwable arg1) { super(arg0, arg1); } } }ANTLR will then replace the default catch clause in the expression rule method with our own handling: public final expression_return expression() throws RecognitionException { //initialization try { //parsing } catch (RecognitionException e) { //Custom handling of an exception. Any java code is allowed. 
throw new S002HelloWordError(":(", e); } finally { } //return result; }As usual, the grammar, the compiler class and the test class are available on GitHub. Alternatively, we can put a rulecatch rule between the header block and the first lexer rule. This method is demonstrated in the S003HelloWordWithErrorHandling grammar: //change error handling in all parser rules @rulecatch { catch (RecognitionException e) { //Custom handling of an exception. Any java code is allowed. throw new S003HelloWordError(":(", e); } }We have to add an import of the S003HelloWordError exception into the headers block: @parser::header { package org.meri.antlr_step_by_step.parsers;//add imports (see full line on Github) import ... .S003HelloWordWithErrorHandlingCompiler.S003HelloWordError; }The compiler class is exactly the same as in the previous case. ANTLR will replace the default catch clause in all parser rules: public final expression_return expression() throws RecognitionException { //initialization try { //parsing } catch (RecognitionException e) { //Custom handling of an exception. Any java code is allowed. throw new S003HelloWordError(":(", e); } finally { } //return result; }Again, the grammar, the compiler class and the test class are available on GitHub. Unfortunately, this method has two disadvantages. First, it does not work in the lexer, only in the parser. Second, the default report-and-recovery functionality works in a reasonable way. It attempts to recover from errors. Once it starts recovering, it does not generate new errors. Error messages are generated only if the parser is not in error recovery mode. We liked this functionality, so we decided to change only the default implementation of error reporting.Add Methods and Fields to Generated Classes We will store all lexer/parser errors in a private list. 
Moreover, we will add two methods to the generated classes:hasErrors – returns true if at least one error occurred, getAllErrors – returns all generated errors.New fields and methods are added inside a @members block: @lexer::members { //everything you need to add to the lexer }@parser::members { //everything you need to add to the parser }Members blocks must be placed between the header block and the first lexer rule. The example is in the grammar named S004HelloWordWithErrorHandling: //add new members to generated lexer @lexer::members { //add new field private List<RecognitionException> errors = new ArrayList<RecognitionException>(); //add new method public List<RecognitionException> getAllErrors() { return new ArrayList<RecognitionException>(errors); }//add new method public boolean hasErrors() { return !errors.isEmpty(); } }//add new members to generated parser @parser::members { //add new field private List<RecognitionException> errors = new ArrayList<RecognitionException>(); //add new method public List<RecognitionException> getAllErrors() { return new ArrayList<RecognitionException>(errors); }//add new method public boolean hasErrors() { return !errors.isEmpty(); } }Both the generated lexer and the generated parser contain all fields and methods written in the members block. Overriding Generated Methods To override a generated method, do the same thing as when adding a new one, e.g. add it inside the @members block: //override generated method in lexer @lexer::members { //override method public void reportError(RecognitionException e) { errors.add(e); displayRecognitionError(this.getTokenNames(), e); } }//override generated method in parser @parser::members { //override method public void reportError(RecognitionException e) { errors.add(e); displayRecognitionError(this.getTokenNames(), e); } }The reportError method now overrides the default behavior in both the lexer and the parser. Collect Errors in Compiler Finally, we have to change our compiler class. 
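Stripped of the ANTLR specifics, the members block above is a plain error-collecting pattern: remember each error in a list, but keep the default console output. It can be sketched in standalone Java (class and method names are mine; a real grammar stores RecognitionException objects rather than strings):

```java
import java.util.ArrayList;
import java.util.List;

// Mimics the fields and methods the @members blocks add to the
// generated lexer and parser: errors are stored, not just printed.
public class CollectingReporter {
    private final List<String> errors = new ArrayList<>();

    // Stands in for the overridden reportError(RecognitionException e)
    public void reportError(String message) {
        errors.add(message);          // remember the error...
        System.err.println(message);  // ...but keep the default console report
    }

    public boolean hasErrors() {
        return !errors.isEmpty();
    }

    // Returns a defensive copy, like getAllErrors() in the grammar
    public List<String> getAllErrors() {
        return new ArrayList<>(errors);
    }
}
```

Returning a copy from getAllErrors matters: callers can inspect or sort the list without mutating the reporter's internal state between parses.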
The new version collects all errors after the input parsing phase: private List<RecognitionException> errors = new ArrayList<RecognitionException>();public CommonTree compile(String expression) { try {... init lexer ... ... init parser ... ret = parser.expression();//collect all errors if (lexer.hasErrors()) errors.addAll(lexer.getAllErrors()); if (parser.hasErrors()) errors.addAll(parser.getAllErrors()); //acquire parse result ... as usually ... } catch (RecognitionException e) { ... } } /** * @return all errors found during last run */ public List<RecognitionException> getAllErrors() { return errors; }We must collect the lexer errors after the parser has finished its work; the lexer is invoked from the parser and contains no errors before then. As usual, we placed the grammar, the compiler class, and the test class on GitHub. Download tag 003-S002-to-S004HelloWordWithErrorHandling of the antlr-step-by-step project to find all three error handling methods in the same Java project. Reference: ANTLR Tutorial – Hello Word from our JCG partner Maria Jurcovicova at the This is Stuff blog....

Connect to RabbitMQ (AMQP) using Scala, Play and Akka

In this article we’ll look at how you can connect from Scala to RabbitMQ so you can support the AMQP protocol from your applications. In this example I’ll use the Play Framework 2.0 as container (for more info on this see my other article on this subject) to run the application in, since Play makes developing with Scala a lot easier. This article will also use Akka actors to send and receive the messages from RabbitMQ. What is AMQP First, a quick introduction to AMQP. AMQP stands for “Advanced Message Queueing Protocol” and is an open standard for messaging. The AMQP homepage states their vision as this: “To become the standard protocol for interoperability between all messaging middleware”. AMQP defines a transport-level protocol for exchanging messages that can be used to integrate applications from a number of different platforms, languages and technologies. There are a number of tools implementing this protocol, but one that is getting more and more attention is RabbitMQ. RabbitMQ is an open source, Erlang-based message broker that uses AMQP. All applications that can speak AMQP can connect to and make use of RabbitMQ. In this article we’ll show you how to connect from your Play2/Scala/Akka based application to RabbitMQ and implement the two most common scenarios:Send / receive: We’ll configure one sender to send a message every couple of seconds, and use two listeners that will read the messages, in a round robin fashion, from the queue. Publish / subscribe: For this example we’ll create pretty much the same scenario, but this time, the listeners will both get the message at the same time.I assume you’ve got an installation of RabbitMQ. If not, follow the instructions from their site. Setup basic Play 2 / Scala project For this example I created a new Play 2 project. 
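Before any RabbitMQ code, the difference between the two scenarios can be illustrated with plain collections (a toy in-memory model of my own, no broker involved): a work queue hands each message to exactly one consumer, round robin, while publish/subscribe delivers a copy of every message to every subscriber.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class Scenarios {
    // Send/receive: consumers take turns pulling from one shared queue,
    // so each message is consumed exactly once.
    public static List<List<String>> roundRobin(List<String> messages, int consumers) {
        Queue<String> queue = new ArrayDeque<>(messages);
        List<List<String>> received = new ArrayList<>();
        for (int i = 0; i < consumers; i++) received.add(new ArrayList<>());
        int turn = 0;
        while (!queue.isEmpty()) {
            received.get(turn % consumers).add(queue.poll());
            turn++;
        }
        return received;
    }

    // Publish/subscribe: every subscriber gets its own copy of each message.
    public static List<List<String>> publishSubscribe(List<String> messages, int subscribers) {
        List<List<String>> received = new ArrayList<>();
        for (int i = 0; i < subscribers; i++) received.add(new ArrayList<>(messages));
        return received;
    }
}
```

In RabbitMQ terms, the first shape is two consumers on one queue; the second is achieved by binding one queue per subscriber to a fanout exchange, which is what the rest of the article builds up to.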
Doing this is very easy: jos@Joss-MacBook-Pro.local:~/Dev/play-2.0-RC2$ ./play new Play2AndRabbitMQ _ _ _ __ | | __ _ _ _| | | '_ \| |/ _' | || |_| | __/|_|\____|\__ (_) |_| |__/   play! 2.0-RC2, http://www.playframework.org   The new application will be created in /Users/jos/Dev/play-2.0/PlayAndRabbitMQ   What is the application name? > PlayAndRabbitMQ   Which template do you want to use for this new application?   1 - Create a simple Scala application 2 - Create a simple Java application 3 - Create an empty project   > 1   OK, application PlayAndRabbitMQ is created.   Have fun! I am used to working from Eclipse with the scala-ide plugin, so I execute play eclipsify and import the project in Eclipse. The next step is to set up the correct dependencies. Play uses sbt for this and allows you to configure your dependencies from the build.scala file in your project directory. The only dependency we’ll add is the Java client library from RabbitMQ. Even though Lift provides a Scala-based AMQP library, I find using the RabbitMQ one directly just as easy. After adding the dependency my build.scala looks like this: import sbt._ import Keys._ import PlayProject._   object ApplicationBuild extends Build {   val appName = "PlayAndRabbitMQ" val appVersion = "1.0-SNAPSHOT"   val appDependencies = Seq( "com.rabbitmq" % "amqp-client" % "2.8.1" )   val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings( ) } Add rabbitMQ configuration to the config file For our examples we can configure a couple of things: the queue to send the message to, the exchange to use, and the host where RabbitMQ is running. In a real-world scenario we would have more configuration options to set, but for this case we’ll just have these three. Add the following to your application.conf so that we can reference it from our application. 
#rabbit-mq configuration rabbitmq.host=localhost rabbitmq.queue=queue1 rabbitmq.exchange=exchange1 We can now access these configuration values using the ConfigFactory. To allow easy access create the following object: object Config { val RABBITMQ_HOST = ConfigFactory.load().getString("rabbitmq.host"); val RABBITMQ_QUEUE = ConfigFactory.load().getString("rabbitmq.queue"); val RABBITMQ_EXCHANGEE = ConfigFactory.load().getString("rabbitmq.exchange"); } Initialize the connection to RabbitMQ We’ve got one more object to define before we’ll look at how we can use RabbitMQ to send and receive messages. To work with RabbitMQ we require a connection. We can get a connection to a server by using a ConnectionFactory. Look at the javadocs for more information on how to configure the connection. object RabbitMQConnection {   private var connection: Connection = null;   /** * Return the existing connection if one exists, else create * and cache a new one */ def getConnection(): Connection = { connection match { case null => { val factory = new ConnectionFactory(); factory.setHost(Config.RABBITMQ_HOST); connection = factory.newConnection(); connection } case _ => connection } } } Start the listeners when the application starts We need to do one more thing before we can look at the RabbitMQ code. We need to make sure our message listeners are registered on application startup and our senders start sending. Play 2 provides a GlobalSettings object for this which you can extend to execute code when your application starts. For our example we’ll use the following object (remember, this needs to be stored in the default namespace): import play.api.mvc._ import play.api._ import rabbitmq.Sender   object Global extends GlobalSettings {   override def onStart(app: Application) { Sender.startSending } } We’ll look at this Sender.startSending operation, which initializes all the senders and receivers, in the following sections. 
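The caching intent of getConnection — create the connection lazily on first use, then hand back the same instance — can be sketched in plain Java; the Connection class here is a hypothetical stand-in for the RabbitMQ client class, not the real API:

```java
// Minimal sketch of a lazily initialized, cached connection holder.
// "Connection" below is a hypothetical stand-in for com.rabbitmq.client.Connection.
public class LazyConnectionHolder {

    static class Connection {
        final String host;
        Connection(String host) { this.host = host; }
    }

    private static Connection connection; // stays null until first use

    // Return the existing connection if one exists, else create and cache one.
    // synchronized keeps two threads from racing to create duplicates.
    public static synchronized Connection getConnection(String host) {
        if (connection == null) {
            connection = new Connection(host);
        }
        return connection;
    }

    public static void main(String[] args) {
        Connection a = getConnection("localhost");
        Connection b = getConnection("localhost");
        System.out.println(a == b); // both calls see the same cached instance
    }
}
```

Note that this only works because the field is mutable; a `val`-style final field initialized to null could never hold the cached instance.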
Setup send and receive scenario Let’s look at the Sender.startSending code that will setup a sender that sends a message to a specific queue. For this we use the following piece of code: object Sender {   def startSending = { // create the connection val connection = RabbitMQConnection.getConnection(); // create the channel we use to send val sendingChannel = connection.createChannel(); // make sure the queue exists we want to send to sendingChannel.queueDeclare(Config.RABBITMQ_QUEUE, false, false, false, null);   Akka.system.scheduler.schedule(2 seconds, 1 seconds , Akka.system.actorOf(Props( new SendingActor(channel = sendingChannel, queue = Config.RABBITMQ_QUEUE))) , "MSG to Queue"); } }   class SendingActor(channel: Channel, queue: String) extends Actor {   def receive = { case some: String => { val msg = (some + " : " + System.currentTimeMillis()); channel.basicPublish("", queue, null, msg.getBytes()); Logger.info(msg); } case _ => {} } } In this code we take the following steps: use the factory to retrieve a connection to RabbitMQ; create a channel on this connection to use in communicating with RabbitMQ; use the channel to create the queue (if it doesn’t exist yet); schedule Akka to send a message to an actor every second. This all should be pretty straightforward. The only (somewhat) complex part is the scheduling: the schedule operation tells Akka to send a message to an actor repeatedly, with a two-second delay before the first message is fired and a one-second interval after that. The actor used for this is the SendingActor you can also see in this listing. This actor needs access to a channel to send a message, and it also needs to know which queue to send the messages it receives to. So every second this Actor will receive a message, append a timestamp, and use the provided channel to send this message to the queue: channel.basicPublish("", queue, null, msg.getBytes());. 
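Akka’s schedule(initialDelay, interval, actor, msg) pattern maps directly onto the JDK’s ScheduledExecutorService, which we can use to sketch the same “delay, then repeat” behaviour without Akka (the intervals are shortened here so the sketch finishes quickly; the counter stands in for the actor receiving messages):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ScheduleSketch {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger sent = new AtomicInteger();

        // Like Akka's schedule(2 seconds, 1 seconds, actor, msg):
        // wait 50 ms, then "send" a message every 20 ms.
        scheduler.scheduleAtFixedRate(
            () -> sent.incrementAndGet(), 50, 20, TimeUnit.MILLISECONDS);

        Thread.sleep(200);      // let several ticks fire
        scheduler.shutdownNow();

        System.out.println(sent.get() >= 3); // several messages were delivered
    }
}
```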
Now that we send a message each second it would be nice to have listeners on this queue that can receive messages. For receiving messages we’ve also created an Actor that listens indefinitely on a specific queue. class ListeningActor(channel: Channel, queue: String, f: (String) => Any) extends Actor {   // called on the initial run def receive = { case _ => startReceiving }   def startReceiving = {   val consumer = new QueueingConsumer(channel); channel.basicConsume(queue, true, consumer);   while (true) { // wait for the message val delivery = consumer.nextDelivery(); val msg = new String(delivery.getBody());   // send the message to the provided callback function // and execute this in a subactor context.actorOf(Props(new Actor { def receive = { case some: String => f(some); } })) ! msg } } } This actor is a little bit more complex than the one we used for sending. When this actor receives a message (the kind of message doesn’t matter) it starts listening on the queue it was created with. It does this by creating a consumer using the supplied channel and telling the consumer to start listening on the specified queue. The consumer.nextDelivery() method will block until a message is waiting in the configured queue. Once a message is received, a new Actor is created to which the message is sent. This new actor passes the message on to the supplied function, where you can put your business logic. To use this listener we need to supply the following arguments: Channel: allows access to RabbitMQ. Queue: the queue to listen to for messages. f: the function that we’ll execute when a message is received. The final step for this first example is gluing everything together. We do this by adding a couple of method calls to the Sender.startSending method. def startSending = { ... 
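The blocking contract of consumer.nextDelivery() — the loop parks until a message arrives — is the same one the JDK’s BlockingQueue.take() offers, so the consume loop can be sketched without a broker (this is an illustration of the blocking pattern, not the RabbitMQ API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ConsumeLoopSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Consumer thread: take() blocks until a message is available,
        // just like QueueingConsumer.nextDelivery() does.
        StringBuilder received = new StringBuilder();
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    String msg = queue.take(); // blocks here until put() happens
                    received.append(msg).append(";");
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Producer side: publishing unblocks the waiting consumer.
        queue.put("msg1");
        queue.put("msg2");

        consumer.join(); // join() also makes the writes to "received" visible
        System.out.println(received);
    }
}
```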
val callback1 = (x: String) => Logger.info("Recieved on queue callback 1: " + x);   setupListener(connection.createChannel(),Config.RABBITMQ_QUEUE, callback1);   // create an actor that starts listening on the specified queue and passes the // received message to the provided callback val callback2 = (x: String) => Logger.info("Recieved on queue callback 2: " + x);   // setup a second listener on the same queue setupListener(connection.createChannel(),Config.RABBITMQ_QUEUE, callback2); ... }   private def setupListener(receivingChannel: Channel, queue: String, f: (String) => Any) { Akka.system.scheduler.scheduleOnce(2 seconds, Akka.system.actorOf(Props(new ListeningActor(receivingChannel, queue, f))), ""); } In this code you can see that we define a callback function and use it, together with the queue and the channel, to create the ListeningActor. We use the scheduleOnce method to start this listener in a separate thread. Now with this code in place we can run the application (play run), open up localhost:9000 to start the application, and we should see something like the following output. [info] play - Starting application default Akka system. 
[info] play - Application started (Dev) [info] application - MSG to Exchange : 1334324531424 [info] application - MSG to Queue : 1334324531424 [info] application - Recieved on queue callback 2: MSG to Queue : 1334324531424 [info] application - MSG to Exchange : 1334324532522 [info] application - MSG to Queue : 1334324532522 [info] application - Recieved on queue callback 1: MSG to Queue : 1334324532522 [info] application - MSG to Exchange : 1334324533622 [info] application - MSG to Queue : 1334324533622 [info] application - Recieved on queue callback 2: MSG to Queue : 1334324533622 [info] application - MSG to Exchange : 1334324534722 [info] application - MSG to Queue : 1334324534722 [info] application - Recieved on queue callback 1: MSG to Queue : 1334324534722 [info] application - MSG to Exchange : 1334324535822 [info] application - MSG to Queue : 1334324535822 [info] application - Recieved on queue callback 2: MSG to Queue : 1334324535822 Here you can clearly see the round-robin way messages are processed. Setup publish and subscribe scenario Once we’ve got the above code running, adding publish / subscribe functionality is trivial. Instead of the SendingActor we now use a PublishingActor: class PublishingActor(channel: Channel, exchange: String) extends Actor {   /** * When we receive a message we send it using the configured channel */ def receive = { case some: String => { val msg = (some + " : " + System.currentTimeMillis()); channel.basicPublish(exchange, "", null, msg.getBytes()); Logger.info(msg); } case _ => {} } } An exchange is used by RabbitMQ to allow multiple recipients to receive the same message (and a whole lot of other advanced functionality). The only change in the code from the other actor is that this time we send the message to an exchange instead of to a queue. The listener code is exactly the same; the only thing we need to do is connect a queue to a specific exchange. 
This ensures that listeners on that queue receive the messages sent to the exchange. We do this, once again, from the setup method we used earlier. ... // create a new sending channel on which we declare the exchange val sendingChannel2 = connection.createChannel(); sendingChannel2.exchangeDeclare(Config.RABBITMQ_EXCHANGEE, "fanout");   // define the two callbacks for our listeners val callback3 = (x: String) => Logger.info("Recieved on exchange callback 3: " + x); val callback4 = (x: String) => Logger.info("Recieved on exchange callback 4: " + x);   // create a channel for the listener and setup the first listener val listenChannel1 = connection.createChannel(); setupListener(listenChannel1,listenChannel1.queueDeclare().getQueue(), Config.RABBITMQ_EXCHANGEE, callback3);   // create another channel for a listener and setup the second listener val listenChannel2 = connection.createChannel(); setupListener(listenChannel2,listenChannel2.queueDeclare().getQueue(), Config.RABBITMQ_EXCHANGEE, callback4);   // schedule the publisher: after a two-second delay, send "MSG to Exchange" // to the PublishingActor every second Akka.system.scheduler.schedule(2 seconds, 1 seconds, Akka.system.actorOf(Props( new PublishingActor(channel = sendingChannel2 , exchange = Config.RABBITMQ_EXCHANGEE))), "MSG to Exchange"); ... We also created an overloaded method for setupListener, which, as an extra parameter, also accepts the name of the exchange to use. private def setupListener(channel: Channel, queueName : String, exchange: String, f: (String) => Any) { channel.queueBind(queueName, exchange, "");   Akka.system.scheduler.scheduleOnce(2 seconds, Akka.system.actorOf(Props(new ListeningActor(channel, queueName, f))), ""); } In this small piece of code you can see that we bind the supplied queue (which has a random name in our example) to the specified exchange. After that we create a new listener as we’ve seen before. 
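The difference between the two scenarios comes down to queue topology: with one shared queue, each message goes to exactly one of the competing consumers; with a fanout exchange, every bound queue gets its own copy. A broker-free sketch of the fanout side (the FanoutExchange class is a toy model for illustration, not the RabbitMQ API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class FanoutSketch {
    // A fanout "exchange" copies each published message into every bound queue.
    static class FanoutExchange {
        private final List<Deque<String>> boundQueues = new ArrayList<>();

        // Mirrors queueDeclare() + queueBind(): a fresh queue bound to the exchange.
        Deque<String> declareAndBindQueue() {
            Deque<String> q = new ArrayDeque<>();
            boundQueues.add(q);
            return q;
        }

        void publish(String msg) {
            for (Deque<String> q : boundQueues) {
                q.addLast(msg); // one copy per bound queue
            }
        }
    }

    public static void main(String[] args) {
        FanoutExchange exchange = new FanoutExchange();
        Deque<String> subscriber1 = exchange.declareAndBindQueue();
        Deque<String> subscriber2 = exchange.declareAndBindQueue();

        exchange.publish("hello");

        // Both subscribers see the same message, unlike the shared-queue case.
        System.out.println(subscriber1.peek() + " " + subscriber2.peek());
    }
}
```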
Running this code now will result in the following output: [info] play - Application started (Dev) [info] application - MSG to Exchange : 1334325448907 [info] application - MSG to Queue : 1334325448907 [info] application - Recieved on exchange callback 3: MSG to Exchange : 1334325448907 [info] application - Recieved on exchange callback 4: MSG to Exchange : 1334325448907 [info] application - MSG to Exchange : 1334325450006 [info] application - MSG to Queue : 1334325450006 [info] application - Recieved on exchange callback 4: MSG to Exchange : 1334325450006 [info] application - Recieved on exchange callback 3: MSG to Exchange : 1334325450006 As you can see, in this scenario both listeners receive the same message. That pretty much wraps it up for this article. As you’ve seen, the Java-based client API for RabbitMQ is more than sufficient, and easy to use from Scala. Note though that this example is not production ready: you should take care to close connections and shut down listeners and actors cleanly. None of that shutdown code is shown here. Reference: Connect to RabbitMQ (AMQP) using Scala, Play and Akka from our JCG partner Jos Dirksen at the Smart Java blog....

AOP made easy with AspectJ and Spring

I recently started looking at Aspect Oriented Programming (AOP) and I’m finding it exciting to say the least. Of course I was acquainted with it, since I saw it used for transaction management within Spring, but I had never looked at it in depth. In this article I want to show how quick it is to get up to speed with AOP and Spring thanks to AspectJ. The material in this article is based on the excellent AOP book AspectJ in Action by Ramnivas Laddad. AOP is not a language, but rather an approach to software engineering. Like any methodology it has got different implementations, and AspectJ is currently the richest and most complete of all. Since AspectJ and AspectWerkz merged, it is now possible to create aspects using annotations. The reason developers write code is to provide functionality of some sort. The kind of functionality is not important for this discussion: some might want to deliver business functionality, others might write code for research purposes, others for sheer fun. The point is that any information system has got a core motive, a key functionality which it wants to deliver. For instance, I recently wrote PODAM, a testing tool which has as its ultimate goal that of automatically filling POJO / JavaBean properties. Every information system has also got needs for orthogonal services (what AOP calls crosscutting concerns); for instance logging, security, auditing, exception management and so on. While an information system can be divided into discrete pieces of functionality (exposed at what AOP calls join points), orthogonal services are required across the board. 
For instance, if one wanted to log how long the execution of every single public method took, each public method should have something like the following pseudo-code: public void someBusinessMethod() { long start = System.currentTimeInMilliseconds(); doTheBusinessFunctionality(); long end = System.currentTimeInMilliseconds(); log.debug("The execution of someBusinessMethod took " + (end - start) + " milliseconds"); } In the above method, the core functionality is identified solely by someBusinessMethod() whereas everything else is just logging activity. It would be nice to have something like: //Some external magic happens before the invocation of this method to take the start time public void someBusinessMethod() { doTheBusinessFunctionality(); } //Some external magic happens after the invocation of this method to take the end time and log how long the execution took. Developers typically want logging, security, etc. throughout their application, not for a single method; AOP allows developers to achieve this goal by defining somewhere externally (in what AOP calls an Aspect) the behaviour to apply to all code matching some pattern (AOP actually allows for a broader set of functionalities, such as the possibility to add interfaces, instance variables, methods, etc. to a class, just to name one). This empowered behaviour is then woven into the final executing code by what AOP calls a Weaver. There are various ways that this can be achieved: weaving can happen at the source level, at the binary level and at load time. You could think of the weaver as the linker in C and C++; sources and libraries are linked together to create an executable; the weaver combines Java code and aspects to create the empowered behaviour. Spring achieves this by creating an AOP proxy around the code whose behaviour must be enriched. 
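Even before reaching for an AOP framework, the duplication above can be factored out by hand, which makes the idea of advice concrete: wrap the business call in a reusable timing function, a hand-rolled “around advice”. The names (timed, someBusinessMethod) are illustrative, not from any framework:

```java
import java.util.function.Supplier;

public class TimingSketch {

    // A hand-rolled "around advice": times any piece of work passed to it.
    // The crosscutting concern (timing + logging) lives here, once,
    // instead of being repeated inside every business method.
    static <T> T timed(String name, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get(); // the actual business functionality
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(name + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        // The business method body contains only business logic.
        int result = timed("someBusinessMethod", () -> 21 + 21);
        System.out.println(result);
    }
}
```

What AOP adds on top of this manual pattern is that the wrapping is applied by the weaver to every join point matching a pattern, so the business code does not even have to call timed explicitly.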
The code that follows shows a very simple example based on AspectJ; the example surrounds the execution of a simple method with an Authentication service. The Authentication service looks very simple (the point is not how the functionality has been implemented but rather that an authentication service is available): package uk.co.jemos.aop; /** * A simple authenticator service. * * @author mtedone */ public class Authenticator { public void authenticate() { System.out.println("Authenticated"); } } Now let’s have a look at the business logic: package uk.co.jemos.aop; /** * A simple service which delivers messages * @author mtedone */ public class MessageCommunicator { public void deliver(String message) { System.out.println(message); } public void deliver(String person, String message) { System.out.println(person + ", " + message); } } What we would like is for the Authenticator to be invoked before the invocation of any of the business methods of MessageCommunicator. Using the AspectJ annotation syntax, we write an Aspect in pure Java: package uk.co.jemos.aop; import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Before; import org.aspectj.lang.annotation.Pointcut; @Aspect public class SecurityAspect { private Authenticator authenticator = new Authenticator(); @Pointcut("execution(* uk.co.jemos.aop.MessageCommunicator.deliver(..))") public void secureAccess() { } @Before("secureAccess()") public void secure() { System.out.println("Checking and authenticating user..."); authenticator.authenticate(); } }   The code above is a bit more interesting. An Aspect is marked with the @Aspect annotation. A Pointcut is some point of interest in our code where we would like our Aspect to kick in. 
The syntax @Pointcut(“execution(* uk.co.jemos.aop.MessageCommunicator.deliver(..))”) public void secureAccess() { } means: “Define a Pointcut named secureAccess which applies to all deliver methods within the MessageCommunicator class, regardless of the return type of such methods”. What follows is called an advice, and it’s where AOP empowers the behaviour of our class: @Before("secureAccess()") public void secure() { System.out.println("Checking and authenticating user..."); authenticator.authenticate(); } The code above says: “Before any match of the secureAccess() Pointcut, apply the code within the block”. All of the above is pure Java, although the annotations belong to the AspectJ runtime. To use the above aspect with Spring, I defined a Spring context file: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd"><aop:aspectj-autoproxy /> <bean id="messageCommunicator" class="uk.co.jemos.aop.MessageCommunicator" /> <bean id="securityAspect" class="uk.co.jemos.aop.SecurityAspect" /></beans> The XML element <aop:aspectj-autoproxy /> instructs Spring to create AOP proxies for the beans matched by the declared aspects. 
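Spring’s proxy-based weaving can be illustrated with the JDK’s own java.lang.reflect.Proxy: a simplified sketch of a before advice that runs a check ahead of every call on the proxied interface. This is a toy model of what the container does for us, not Spring’s actual implementation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class BeforeAdviceSketch {

    interface MessageCommunicator {
        void deliver(String message);
    }

    public static void main(String[] args) {
        // The "target" bean: plain business logic, no security code in it.
        MessageCommunicator target = message -> System.out.println(message);

        // The "before advice": run the crosscutting concern, then delegate.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("Checking and authenticating user...");
            return method.invoke(target, methodArgs);
        };

        // The AOP proxy: callers hold this instead of the target,
        // so every interface call passes through the advice first.
        MessageCommunicator proxied = (MessageCommunicator) Proxy.newProxyInstance(
            MessageCommunicator.class.getClassLoader(),
            new Class<?>[] { MessageCommunicator.class },
            handler);

        proxied.deliver("Hello World");
    }
}
```

This is why the client must fetch the bean from the Spring context rather than constructing it with new: only the container-supplied reference is the advised proxy.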
Now when I use the MessageCommunicator from a client: /** * @param args */ public static void main(String[] args) { ApplicationContext ctx = new ClassPathXmlApplicationContext( "classpath:aop-appContext.xml"); MessageCommunicator communicator = ctx.getBean("messageCommunicator", MessageCommunicator.class); communicator.deliver("Hello World"); communicator.deliver("Marco", "Hello World"); } I get the following output: INFO: Loading XML bean definitions from class path resource [aop-appContext.xml] 15-May-2011 11:51:41 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@21b64e6a: defining beans [org.springframework.aop.config.internalAutoProxyCreator,messageCommunicator,securityAspect]; root of factory hierarchy Checking and authenticating user… Authenticated Hello World  Checking and authenticating user… Authenticated Marco, Hello World AOP substantially changes the way we think about software engineering, by allowing us to externalise crosscutting concerns into separate components which are then woven into our code when needed. This allows for cleaner and more maintainable code, and the possible applications are limitless. Additionally, if we are careful in writing our Aspects by making them reusable, we can quickly come up with a library of general-purpose, reusable aspects which add functionality to our code in an injected way. There are obviously drawbacks in the adoption of AOP, mainly the learning curve required for developers to get acquainted with the technology. AspectJ defines its own language and syntax, as the example above demonstrates; the @Before annotation is just one possibility: advice can be applied before, after, or around objects; additionally, the syntax to define Pointcuts is not Java but rather script-like. 
AspectJ aspects also have keywords and native objects to capture the context of the join points they advise, and this syntax needs to be learned. However, the potential gains far outweigh the extra effort required in learning this new and exciting technology. Reference: AOP made easy with AspectJ and Spring from our JCG partner Marco Tedone at the Marco Tedone’s blog....