
High Performance Webapps – Data URIs

I continue to write tips for performance optimization of websites. The last post was about jQuery objects. This post is about data URIs. Data URIs are an interesting concept on the Web. Please read "Data URIs explained" first if you don't know what they are. Data URIs are a technique for embedding resources as base64-encoded data, avoiding the need for extra HTTP requests. They give you the ability to embed files, especially images, inside other files, especially CSS. Images are not the only resources supported by data URIs, but embedded inline images are the most interesting part of this technique. The technique allows separate images to be fetched in a single HTTP request rather than multiple HTTP requests, which can be more efficient. Decreasing the number of requests results in better page performance. "Minimize HTTP requests" is actually the first rule of the "Yahoo! Exceptional Performance Best Practices", and it specifically mentions data URIs: "Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages... 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience."

The data URI format is specified as

data:[<mime type>][;charset=<charset>][;base64],<encoded data>

We are only interested in images, so the MIME type can be e.g. image/gif, image/jpeg or image/png. The charset should be omitted for images. The encoding is indicated by ;base64. An example of a valid data URI:

<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO 9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot">

HTML fragments with inline images like the example above are not really interesting because they are not cached. Data URIs in CSS files (style sheets) are cached along with the CSS files, and that is what brings the benefits.
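As a quick illustration (my own sketch, not from the original post), here is how such a data URI could be produced in Java with java.util.Base64:

```java
import java.util.Base64;

public class DataUriDemo {

    // Builds a data URI for the given image bytes and MIME type.
    static String toDataUri(String mimeType, byte[] imageBytes) {
        return "data:" + mimeType + ";base64,"
                + Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        // In a real build you would read the bytes from an image file, e.g.:
        // byte[] bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("logosmall.gif"));
        // Here we just use the first bytes of a PNG header for illustration.
        byte[] bytes = {(byte) 0x89, 'P', 'N', 'G'};
        System.out.println(toDataUri("image/png", bytes));
    }
}
```

Base64 encodes every 3 input bytes as 4 output characters, which is where the roughly 33% size overhead mentioned below comes from.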
Some advantages described on Wikipedia:

- HTTP request and header traffic is not required for embedded data, so data URIs consume less bandwidth whenever the overhead of encoding the inline content as a data URI is smaller than the HTTP overhead. For example, the required base64 encoding for an image 600 bytes long would be 800 bytes, so if an HTTP request required more than 200 bytes of overhead, the data URI would be more efficient.
- For transferring many small files (less than a few kilobytes each), this can be faster. TCP transfers tend to start slowly. If each file requires a new TCP connection, the transfer speed is limited by the round-trip time rather than the available bandwidth. Using HTTP keep-alive improves the situation, but may not entirely alleviate the bottleneck.
- When browsing a secure HTTPS web site, web browsers commonly require that all elements of a web page be downloaded over secure connections, or the user will be notified of reduced security due to a mixture of secure and insecure elements. On badly configured servers, HTTPS requests have significant overhead over common HTTP requests, so embedding data in data URIs may improve speed in this case.
- Web browsers are usually configured to make only a certain number (often two) of concurrent HTTP connections to a domain, so inline data frees up a download connection for other content.

Furthermore, data URIs are better than sprites. Images organized as CSS sprites (many small images combined into one big one) are difficult to maintain. Maintenance costs are high. Imagine you want to change some small images in the sprite: their position, size, color or whatever. Well, there are tools that generate sprites, but later changes are not easy. Especially changes in size cause a shift of all positions and a lot of CSS changes. And don't forget – a sprite still requires one HTTP request :-). What browsers support data URIs?
Data URIs are supported by all modern browsers: Gecko-based (Firefox, SeaMonkey, Camino, etc.), WebKit-based (Safari, Google Chrome), Opera, Konqueror, and Internet Explorer 8 and higher. In Internet Explorer 8, data URIs must be smaller than 32 KB; Internet Explorer 9 does not have this limitation. IE versions 5-7 lack support for data URIs, but there is MHTML for when you need data URIs in IE7 and under. Are there tools helping with automatic data URI embedding? Yes, there are some. The most popular is the command line tool CSSEmbed. Especially if you need to support old IE versions, you can use this tool, which can deal with MHTML. The Maven plugin for web resource optimization, which is part of the PrimeFaces Extensions project, now supports data URIs too. The plugin can embed data URIs for referenced images in style sheets at build time. It doesn't support MHTML, which is problematic because you would need to include CSS files with conditional comments separately – one set for IE7 and under and one for all other browsers. How does the conversion to data URIs work? The plugin reads the content of CSS files. A special java.io.Reader implementation looks for tokens #{resource[...]} in CSS files. This is the syntax for image references in JSF 2. A token starts with #{resource[ and ends with ]}. The content inside contains the image path in JSF syntax. Theoretically we could also support other tokens (they are configurable), but we're not interested in that kind of support :-) Examples:

.ui-icon-logosmall { background-image: url("#{resource['images/logosmall.gif']}") !important; }
.ui-icon-aristo { background-image: url("#{resource['images:themeswitcher/aristo.png']}") !important; }

In the next step the image resource for each background image is localized. Image directories are specified according to the JSF 2 specification and suit WAR as well as JAR projects.
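The plugin itself uses a custom Reader implementation, but the token scanning can be sketched with a plain regular expression (a simplified illustration of the idea, not the plugin's actual code; the "resources/" prefix below is a hypothetical replacement — the real plugin emits a base64 data URI instead):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceTokenDemo {

    // Matches tokens like #{resource['images/logosmall.gif']} and
    // captures the image path between the quotes.
    private static final Pattern TOKEN =
            Pattern.compile("#\\{resource\\['([^']+)'\\]\\}");

    // Replaces each token with a hypothetical resolved path; the real
    // plugin would locate the image on disk and embed it as a data URI.
    static String rewrite(String css) {
        Matcher m = TOKEN.matcher(css);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String path = m.group(1); // e.g. images/logosmall.gif
            m.appendReplacement(sb, Matcher.quoteReplacement("resources/" + path));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String css = ".ui-icon-logosmall { background-image: "
                + "url(\"#{resource['images/logosmall.gif']}\") !important; }";
        System.out.println(rewrite(css));
    }
}
```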
These are ${project.basedir}/src/main/webapp/resources and ${project.basedir}/src/main/resources/META-INF/resources. The plugin tries to find every image in those directories. If an image is not found in the specified directories, it doesn't get transformed. Otherwise, the image is encoded into a base64 string. The encoding is performed only if the resulting data URI string is less than 32 KB, in order to support the IE8 browser. Images larger than that are not transformed. The resulting data URIs look like

.ui-icon-logosmall { background-image: url("data:image/gif;base64,iVBORw0KGgoAAAANSUhEUgA ... ASUVORK5CYII=") !important; }
.ui-icon-aristo { background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA ... BJRU5ErkJggg==") !important; }

Configuration in pom.xml is simple. To enable this feature, set the useDataUri flag to true. Example:

<plugin>
  <groupId>org.primefaces.extensions</groupId>
  <artifactId>resources-optimizer-maven-plugin</artifactId>
  <configuration>
    <useDataUri>true</useDataUri>
    <resourcesSets>
      <resourcesSet>
        <inputDir>${project.build.directory}/webapp-resources</inputDir>
      </resourcesSet>
    </resourcesSets>
  </configuration>
</plugin>

Enough theory. Now I will describe the practical part. I will show some measurements and screenshots, and give tips on how large images should be, where the CSS should be placed, what the size of a CSS file with data URIs is, and whether a GZIP filter can help here. Read on. The first question is whether it's worth putting data URIs in style sheets at all. Yes, it is. First, I would like to point you to the great article "Data URIs for CSS Images: More Tests, More Questions", where you can test all three scenarios for your location. Latency differs depending on your location, but you can see a tendency: a web page containing data URIs loads faster. Here we can see one of the main tricks to achieve better performance with data URIs: split your CSS into two files – one with the main rules and one with data URIs only – and place the second one in the footer.
"In the footer" means close to the closing HTML body tag. Page rendering then feels faster because of progressive rendering. In the second article you can see that this technique really accelerates page rendering. A style sheet in the footer leads to a nice effect: large images download in parallel with the data URI style sheet. Why? Well, the browser assumes that stuff placed in the footer cannot have any impact on the page structure above the included files, and doesn't block resource loading. I have also read that in this case all browsers (except old IE versions) render the page immediately, without waiting until the CSS with data URIs has been loaded. The same is valid for JavaScript files, as far as I know. Is it valid at all to put CSS files in the page footer? Well, it's not recommended by the HTML specification, but it works in practice and it's not bad at all in special cases. There is an interesting discussion on Stack Overflow: "How bad is it to put a CSS include in the middle of the body?" The second tip is to use data URIs only for small images, up to 1-2 KB. It's not worth using data URIs for large images. A large image has a very long data URI string (the base64 encoded string), which can increase the size of the CSS file. Large files can block the loading of other files. Remember, browsers have connection limitations: they can normally open 2-8 connections to the same domain, which means only 2-8 files can be loaded in parallel at the same time. After reading some comments on the internet, I found confirmation of my assumption about 1-2 KB images. We can soften this behavior by using a GZIP filter. A GZIP filter reduces the size of resources. I have read that sometimes the size of an image encoded as a data URI is even smaller than the size of the original image. A GZIP filter is applied to web resources like CSS, JavaScript and (X)HTML files, but it's not recommended to apply it to images or PDF files, for example. So, unencoded images don't go through the filter, but CSS files do.
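The effect of gzipping a data URI style sheet can be demonstrated in a few lines (my own sketch, not from the original article): base64 data in CSS is highly repetitive, so it compresses well.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipCssDemo {

    // Gzips a byte array in memory and returns the compressed bytes.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a style sheet full of (truncated) data URIs.
        StringBuilder css = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            css.append(".icon-").append(i)
               .append(" { background-image: url(\"data:image/png;base64,")
               .append("iVBORw0KGgoAAAANSUhEUgAAAAUA\"); }\n");
        }
        byte[] raw = css.toString().getBytes(StandardCharsets.UTF_8);
        byte[] zipped = gzip(raw);
        System.out.println("raw: " + raw.length + " bytes, gzipped: " + zipped.length + " bytes");
    }
}
```

Real compression ratios depend on the images, of course; truly random pixel data encoded as base64 compresses far less than this repetitive example.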
In 99% of cases, if you gzip your CSS file, the resulting size is about the same as with regular image URL references! And that was the third tip – use a GZIP filter. I would now like to show my test results. My test environment: Firefox 11 on Kubuntu Oneiric. I prepared the showcase of PrimeFaces Extensions with 31 images which I added to the start page. These images display small theme icons in PNG format. Every image has the same dimensions, 30 x 27 px. Sizes in kilobytes lie in the range 1.0 - 4.6 KB. The CSS file without data URIs was 4.8 KB; with data URIs it was 91.6 KB. The CSS files were included quite normally in the HTML head section, by the way. I deployed the showcases with and without data URIs on my VPS with a Jetty 8 server, first without a GZIP filter. I cleared the browser cache and opened Firebug for each showcase. Here are the results:

Without data URIs: 65 requests. Page loading time 3.84s (onload: 4.14s). That means the document ready event occurred after 3.84 s and window onload after 4.14 s. Subsequent calls for the same page (resources were fetched from the browser cache) took 577 ms, 571 ms, 523 ms, ...

With data URIs: 34 requests. Page loading time 3.15s (onload: 3.33s). That means fewer requests (remember the 31 embedded images); the document ready event occurred after 3.15 s and window onload after 3.33 s. Subsequent calls for the same page (resources were fetched from the browser cache) took 513 ms, 529 ms, 499 ms, ...

There isn't much difference for subsequent calls (page refreshes), but there is a significant difference for first-time visits. Especially the onload event occurs faster with data URIs. No wonder: images are loaded after the document is ready, and because they cannot all be loaded in parallel (the number of open connections is limited), they get blocked. I took some pictures with the Google Chrome Web Inspector.
Below you can see the timing for an image (vader.png) in the first (regular) case, without a data URI. And then the second case for the same image encoded as a data URI. You can see in the second picture that there isn't any blocking at all. Tests with a GZIP filter didn't have much impact in my case (I don't know why; maybe I don't have enough resources). Average times after a couple of tests with an empty cache: Without data URIs: 65 requests. Page loading time 3.18s (onload: 3.81s). With data URIs: 34 requests. Page loading time 3.03s (onload: 3.19s). Reference: High Performance Webapps. Use Data URIs. Practice, High Performance Webapps. Use Data URIs. Theory from our JCG partner Oleg Varaksin at the Thoughts on software development blog....

Aleri – Complex Event Processing

Sybase's Aleri streaming platform is one of the more popular products in the CEP market segment. It is used in Sybase's trading platform, the RAP edition, which is widely used in capital markets to manage positions in a portfolio. Today, in the first part of this multi-part series, I want to provide an overview of the Aleri platform and provide some code samples where required. In the second part, I will present the Aleri Studio, the Eclipse-based GUI that simplifies the task of modeling a CEP workflow and monitoring the Aleri server through a dashboard. In my previous blog post on Complex Event Processing, I demonstrated the use of Esper, the open source CEP software, and the Twitter4J API to handle a stream of tweets from Twitter. A CEP product is much more than handling just one stream of data, though. A single stream of data could easily be handled through standard asynchronous messaging platforms and does not pose very challenging scalability or latency issues. But when it comes to consuming more than one real-time stream of data, analyzing it in real time, and correlating between the streams, nothing beats a CEP platform. The sources feeding a streaming platform can vary in speed, volume and complexity. A true enterprise-class CEP should deal with various kinds of real-time data – high-speed feeds like stock tickers as well as slower but voluminous offline batch uploads – with equal ease. Apart from providing standard interfaces, a CEP should also provide an easy programming language to query the streaming data and to generate continuous intelligence through such features as pattern matching and snapshot querying. To keep it simple and at a high level, CEP can be broken down into three basic parts. The first is the mechanism to grab/consume source data.
Next is the process of investigating that data, identifying events and patterns, and then interacting with target systems by providing them with actionable items. The actionable events take different forms and formats depending on the application you are using the CEP for. An action item could be selling an equity position based on calculated risk in a risk monitoring application, indicating potential fraud events in a money laundering application, or alerting on a catastrophic event in a monitoring system reading thousands of sensors in a chemical plant. There are literally thousands of scenarios where a manual and offline inspection of data is simply not an option. After you go through the following section, you may want to try Aleri yourself. This link http://www.sybase.com/aleriform takes you directly to the Aleri download page. An evaluation copy valid for 90 days is freely available from Sybase's official website. A good amount of documentation, an excellent tutorial and some sample code on the website should help you get started quickly. If you are an existing user of any CEP product, I encourage you to compare Aleri with that product and share your findings with the community or comment on this blog. By somewhat dated estimates, Tibco CEP is the biggest CEP vendor in the market. I am not sure how much market share another leading product, StreamBase, has. There is also a webinar you can watch on YouTube.com that explains CEP benefits in general and some key features of StreamBase in particular. For newcomers, it serves as an excellent introduction to CEP and a capital markets use case. An application on Aleri CEP is built by creating a model using the Studio (the GUI), using Splash (the language), or using the Aleri Modeling Language (ML) – the final stage before it is deployed. The following is a list of the key features of Splash.

Data Types – supports standard data types and XML; also supports 'typedef' for user-defined data types.
Access Control – granular access control enabling access to a stream or to modules (containing many streams).
SQL – another way of building a model. Building a model in the Aleri Studio can take longer due to its visual paradigm; someone proficient with SQL should be able to do it much faster using Aleri SQL, which is very similar to the regular SQL we all know.
Joins – supported joins are Inner, Left, Right and Full.
Filter expressions – include Where, Having and Group Having.
ML – Aleri SQL produces a data model in the Aleri Modeling Language (ML). A proficient ML user might use only ML (in place of the Aleri Studio and Aleri SQL) to build a model.
The pattern matching language – includes constructs such as 'within' to indicate an interval (sliding window), 'from' to indicate the stream of data, and the interesting 'fby' that indicates a sequence (followed by).
User defined functions – the user-defined function interface provided in Splash allows you to create functions in C++ or Java and use them within a Splash expression in the model.
Advanced pattern matching – capabilities are explained through examples here. The following three code segments and their explanations are taken directly from Sybase's documentation on Aleri. The first example checks whether a broker sends a buy order on the same stock as one of his or her customers, then inserts a buy order for the customer, and then sells that stock. It creates a "buy ahead" event when those actions have occurred in that sequence.

within 5 minutes
from
  BuyStock[Symbol=sym; Shares=n1; Broker=b; Customer=c0] as Buy1,
  BuyStock[Symbol=sym; Shares=n2; Broker=b; Customer=c1] as Buy2,
  SellStock[Symbol=sym; Shares=n1; Broker=b; Customer=c0] as Sell
on Buy1 fby Buy2 fby Sell
{
  if ((b = c0) and (b != c1)) {
    output [Symbol=sym; Shares=n1; Broker=b];
  }
}

This example checks for three events, one following the other, using the fby relationship.
Because the same variable sym is used in the three patterns, the values in the three events must be the same. Different variables might have the same value, though (e.g., n1 and n2). It outputs an event if the Broker and Customer from the Buy1 and Sell events are the same, and the Customer from the Buy2 event is different. The next example shows Boolean operations on events. The rule describes a possible theft condition, when there has been a product reading on a shelf (possibly through RFID), followed by a non-occurrence of a checkout on that product, followed by a reading of the product at a scanner near the door.

within 12 hours
from
  ShelfReading[TagId=tag; ProductName=pname] as onShelf,
  CounterReading[TagId=tag] as checkout,
  ExitReading[TagId=tag; AreaId=area] as exit
on onShelf fby not(checkout) fby exit
output [TagId=tag; ProductName=pname; AreaId=area];

The next example shows how to raise an alert if a user tries to log in to an account unsuccessfully three times within 5 minutes.

from
  LoginAttempt[IpAddress=ip; Account=acct; Result=0] as login1,
  LoginAttempt[IpAddress=ip; Account=acct; Result=0] as login2,
  LoginAttempt[IpAddress=ip; Account=acct; Result=0] as login3,
  LoginAttempt[IpAddress=ip; Account=acct; Result=1] as login4
on (login1 fby login2 fby login3) and not(login4)
output [Account=acct];

People wishing to break into computer systems often scan a number of TCP/IP ports for an open one, and attempt to exploit vulnerabilities in the programs listening on those ports. Here's a rule that checks whether a single IP address has attempted connections on three ports, and whether those have been followed by the use of the "sendmail" program.

within 30 minutes
from
  Connect[Source=ip; Port=22] as c1,
  Connect[Source=ip; Port=23] as c2,
  Connect[Source=ip; Port=25] as c3,
  SendMail[Source=ip] as send
on (c1 and c2 and c3) fby send
output [Source=ip];

Aleri provides many interfaces out of the box for easy integration with source and target systems.
Through these interfaces/adapters the Aleri platform can communicate with standard relational databases, messaging frameworks like IBM MQ, sockets and file system files. Data in various formats like CSV, FIX, Reuters market data, SOAP, HTTP and SMTP is easily consumed by Aleri through standardized interfaces. The following techniques are available for integrating Aleri with other systems: a pub/sub API provided in Java, C++ and .NET (a standard pub/sub mechanism); a SQL interface with SELECT, UPDATE, DELETE and INSERT statements used through ODBC and JDBC connections; and built-in adapters for market data and FIX. In the next part of this series we will look at the Aleri Studio, the GUI that helps us build CEP applications the easy way.

Aleri, the complex event processing platform from Sybase, was reviewed at a high level in my last post. This week, let's review the Aleri Studio, the user interface to the Aleri platform, and the use of the pub/sub API, one of many ways to interface with the Aleri platform. The Studio is an integral part of the platform and comes packaged with the free evaluation copy. If you haven't already done so, please download a copy from here. The fairly easy installation process of the Aleri product gets you up and running in a few minutes. The Aleri Studio is an authoring platform for building the model that defines the interactions and sequencing between various data streams. It can also merge multiple streams to form one or more new streams. With this Eclipse-based studio, you can test the models you build by feeding them test data and monitor the activity inside the streams in real time. Let's look at the various types of streams you can define in Aleri and their functionality.

Source Stream – only this type of stream can handle incoming data. The operations that can be performed on the incoming data are insert, update, delete and upsert. Upsert, as the name suggests, updates the data if the key defining a row is already present in the stream.
Otherwise, it inserts a record into the stream.
Aggregate Stream – creates a summary record for each group defined by a specific attribute. This provides functionality equivalent to 'group by' in ANSI SQL.
Copy Stream – created by copying another stream but with a different retention rule.
Compute Stream – allows you to apply a function to each row of data to get a new computed element for each row of the data stream.
Extend Stream – derived from another stream by additional column expressions.
Filter Stream – you can define a filter condition for this stream. Just like extend and compute streams, this stream applies its filter condition to other streams to derive a new stream.
Flex Stream – significant flexibility in handling streaming data is achieved through custom coded methods. Only this stream allows you to write your own methods to meet special needs.
Join Stream – creates a new stream by joining two or more streams on some condition. Both inner and outer joins can be used to join streams.
Pattern Stream – pattern matching rules are applied with this stream.
Union Stream – as the name suggests, this joins two or more streams with the same row data structure. Unlike the join stream, this stream includes all the data from all the participating streams.

Using some of these streams and the pub API of Aleri, I will demonstrate the segregation of a Twitter live feed into two different streams. The Twitter live feed is consumed by a listener from the Twitter4J library. If you just want to try the Twitter4J library first, please follow my earlier post 'Tracking user sentiments on Twitter'. The data received by the Twitter4J listener is fed to a source stream in our model by using the publication API from Aleri. In this exercise we will try to separate tweets based on their content. Building on the example from my previous post, we will divide the incoming stream into two streams based on content.
One stream will get any tweets that contain 'lol' and the other gets tweets with a smiley ":)" face in the text. First, let's list the tasks we need to perform to make this a working example:
1. Create a model with three streams.
2. Validate that the model is error free.
3. Create a static data file.
4. Start the Aleri server and feed the static data file to the stream manually to confirm that the model works correctly.
5. Write Java code to consume the Twitter feed.
6. Use the publish API to publish the tweets to the Aleri platform.
7. Run the demo and see the live data as it flows through the various streams.

This image is a snapshot of the Aleri Studio with the three streams: the one on the left named "tweets" is a source stream, and the two on the right named "lolFilter" and "smileyFilter" are of the filter type. The source stream accepts incoming data while the filter streams receive the filtered data. Here is how I defined the filter conditions: like(tweets.text, '%lol%'). Here tweets is the name of the stream and text is the field in the stream we are interested in. '%lol%' means: select any tweets that have the string 'lol' in the content. Each stream has only two fields, id and text, which map to the id and text message sent by Twitter. Once you define the model, you can check it for errors by clicking on the check mark in the ribbon at the top. Errors, if any, will show up in the panel at the bottom right of the image. Once your model is error free, it's time to test it. The following image shows the test interface of the Studio. Try running your model with a static data file first. The small red square at the top indicates that the Aleri server is currently running. The console window at the bottom right shows server messages, like successful starts and stops. The Run-test tab in the left pane is where you pick a static data file to feed the source stream.
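For readers more comfortable with Java than with Splash, the two filter conditions behave like a SQL LIKE on the tweet text; roughly equivalent Java predicates would be (my own illustration, not Aleri code):

```java
public class LikeFilterDemo {

    // Rough Java equivalent of the filter condition like(tweets.text, '%lol%')
    static boolean isLol(String text) {
        return text != null && text.contains("lol");
    }

    // Rough Java equivalent of a filter matching the smiley ":)" in the text
    static boolean isSmiley(String text) {
        return text != null && text.contains(":)");
    }

    public static void main(String[] args) {
        System.out.println(isLol("that was lol funny")); // routed to the lolFilter stream
        System.out.println(isSmiley("nice demo :)"));    // routed to the smileyFilter stream
    }
}
```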
The pane on the right shows all the currently running streams and the live data processed by the streams. The image below shows the format of the data file used to test the model:

tweets ALERI_OPS="i" id="1" text="324test 1234" ;
tweets ALERI_OPS="i" id="2" text="test 12345" ;
tweets ALERI_OPS="i" id="3" text="test 1234666" ;
tweets ALERI_OPS="i" id="4" text="test 1234888" ;
tweets ALERI_OPS="i" id="5" text="test 1234999" ;

The source code for this exercise is at the bottom. Remember that you need to have the Twitter4J library on the build path and the Aleri server running before you run the program. Because I have not added any timer to the execution thread, the only way to stop the execution is to abort it. For brevity and to keep the code lines short, I have deleted all the exception handling and logging. The code uses only the publishing part of the pub/sub API of Aleri. I will demonstrate the use of the sub side of the API in my next blog post.

package com.sybase.aleri;

import java.io.IOException;
import java.util.Collection;
import java.util.Vector;

import twitter4j.TwitterException;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;
import twitter4j.conf.Configuration;
import twitter4j.conf.ConfigurationBuilder;

import com.aleri.pubsub.SpGatewayConstants;
import com.aleri.pubsub.SpPlatform;
import com.aleri.pubsub.SpPlatformParms;
import com.aleri.pubsub.SpPlatformStatus;
import com.aleri.pubsub.SpPublication;
import com.aleri.pubsub.SpStream;
import com.aleri.pubsub.SpStreamDataRecord;
import com.aleri.pubsub.SpStreamDefinition;
import com.aleri.pubsub.impl.SpFactory;

public class TwitterTest_2 {

    // Make sure that the Aleri server is running prior to running this program
    static {
        // creates the publishing platform
        createPlatform();
    }

    // Important objects from the publish API
    static SpStream stream;
    static SpPlatformStatus platformStatus;
    static SpPublication pub;

    public static void main(String[] args) throws TwitterException, IOException {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(true);
        // use your twitter id and passcode
        cb.setUser("Your user name");
        cb.setPassword("Your Password");

        // creating the twitter4j listener
        Configuration cfg = cb.build();
        TwitterStream twitterStream = new TwitterStreamFactory(cfg).getInstance();
        StatusListener_1 listener = new StatusListener_1();
        twitterStream.addListener(listener);
        // runs the sample stream that comes with twitter4j
        twitterStream.sample();
    }

    private static int createPlatform() {
        // Aleri platform configuration - a better alternative is a properties file
        String host = "localhost";
        int port = 22000;
        // Aleri is configured to run with empty userid and password strings
        String user = "";
        String password = "";
        // name of the source stream - the one that gets the data from twitter4j
        String streamName = "tweets";
        String name = "TwitterTest_2";
        SpPlatformParms parms = SpFactory.createPlatformParms(host, port, user, password, false, false);
        platformStatus = SpFactory.createPlatformStatus();
        SpPlatform sp = SpFactory.createPlatform(parms, platformStatus);
        stream = sp.getStream(streamName);
        pub = sp.createPublication(name, platformStatus);
        // Then get the stream definition containing the schema information.
        SpStreamDefinition sdef = stream.getDefinition();
        /*
        int numFieldsInRecord = sdef.getNumColumns();
        Vector colTypes = sdef.getColumnTypes();
        Vector colNames = sdef.getColumnNames();
        */
        return 0;
    }

    static SpStream getStream() {
        return stream;
    }

    static SpPlatformStatus getPlatformStatus() {
        return platformStatus;
    }

    static SpPublication getPublication() {
        return pub;
    }

    static int publish(SpStream stream, SpPlatformStatus platformStatus,
                       SpPublication pub, Collection fieldData) {
        int rc = pub.start();

        SpStreamDataRecord sdr = SpFactory.createStreamDataRecord(stream, fieldData,
                SpGatewayConstants.SO_UPSERT, SpGatewayConstants.SF_NULLFLAG, platformStatus);

        Collection dataSet = new Vector();
        dataSet.add(sdr);
        System.out.println("\nAttempting to publish the data set to the Platform for stream <"
                + stream.getName() + ">.");

        rc = pub.publishTransaction(dataSet, SpGatewayConstants.SO_UPSERT,
                SpGatewayConstants.SF_NULLFLAG, 1);

        // commit blocks the thread until data is consumed by the platform
        System.out.println("before commit() call to the Platform.");
        rc = pub.commit();

        return 0;
    }
}

Reference: Aleri – Complex Event Processing – Part I, Understanding Aleri – Complex Event Processing – Part II from our JCG partner Mahesh Gadgil at the Simple yet Practical blog....

Maven Does Not Suck . . . but the Maven Docs Do

I'm not going to go into the whole Maven debate, but suffice it to say that I'm a strong proponent of everything best practice, and, to me, Maven is an embodiment of best practice. By this I mean that Maven is built around a specific best-practice build methodology. Note, I said a specific best-practice build methodology. In the real world, there are more than a handful of build methodologies that could qualify for the best-practice accolade, but Maven assumes a single one of them. This does not mean that the others are not good; it just means that if you use Maven, you're going to need to buy in to the conventions it assumes . . . or suffer. This is true for any Convention over Configuration (CoC) tool, and Maven is pretty darn CoC. I think the occasionally discussed notion of Maven as a design pattern for builds is a powerful metaphor. It's useful because it emphasizes that Maven, like all design patterns, is a reusable solution to the process of building software. It's a best-practice solution that has been refined by a community of smart people over years of heavy use. The most obvious benefits of leveraging a design pattern for building software are the same as those for writing software. Namely: you get a bunch of functionality without having to write it yourself, and an engineer who understands the pattern as applied to one project can instantly understand the pattern as applied to another project. Nominally, the first point is about productivity and the second is about simplicity. Obviously, everybody wants to be more productive, i.e. to accomplish more with fewer lines of code. But I actually think the second point — simplicity — is far more important. In my opinion, the entire field of engineering boils down, most elegantly, to the concept of "managing complexity".
By complexity, I refer directly to that headache you get when bombarded with piles of spaghetti code. Design patterns help eliminate this intellectual discord by sealing off a big chunk of complexity in a higher level notation. In case you’ve forgotten, this is what frees our minds up for the bigger and cooler tasks that inevitably reside on the next level. It is this point of view that makes me rank learning a new project’s ad hoc build as one of the most annoying aspects of my profession. Even if an ant or make build is very cleanly implemented, follows a localized best practice, and automates a broad scope of the software lifecycle, it still punishes new developers with a mountain of raw data, i.e. lines of script code. Note, it’s only the ad hoc-ness that is a problem here. This is certainly not a knock on these tools. ant in particular is very good at automating your tasks and providing a reusable set of build widgets. But it does nothing to provide a reusable solution to the entire process of building software, and, accordingly, it does nothing to ease new developers on their road to comprehending the build.

it’s the conventions that matter most with a CoC tool like Maven

So, as I see it, it’s the conventions that matter most with a CoC tool like Maven. You have to know and follow the assumed conventions in order to be successful with Maven. Projects that don’t follow the conventions quickly run afoul of Maven. First, they struggle to implement their own build process with a tool that assumes a build process of its own. It’s easy to fall into being upset that you can’t easily do what you’ve been doing, but the preceding paragraphs are meant to suggest that it’s actually you who needs to change, at least if you plan to continue on with Maven. When you choose Maven, you need to accept the conventions. If you can’t, I suggest you stick with Ant, which is flexible enough to meet you on your terms.
Just remember that you are losing the ability to leverage the design pattern aspect of Maven to manage the complexity of your build. And if you think your build doesn’t have complexity issues, ask yourself these questions:

- Can every engineer on our team easily build all the components of our software system?
- Do our engineers have the confidence to modify build scripts without angst?
- Do our engineers flee the room when someone is needed to address a build problem?

So, if you’re with me so far, you’d probably agree that following the conventions assumed by Maven is a critical prerequisite for entering Maven nirvana. And this is what leads me to conclude that the Maven docs suck. They are not only inadequate, but perhaps detrimental; they mostly document the configuration while utterly failing on the critical topic of conventions. The emphasis on configuration, which I assume is largely by accident, leads newbies into thinking it’s okay, and perhaps even normal, to configure Maven.

The Maven documentation is not only inadequate, but perhaps detrimental; it mostly documents the configuration while utterly failing on the critical topic of conventions.

By documentation, I mostly mean all that stuff you find when you visit the Maven or Codehaus plugin pages. For instance, consider the extremely core maven-assembly-plugin. Browse through the docs on the Maven site and you’ll find that it’s almost entirely about configuration. The problem, as I’ve stated and restated, is that you don’t really want to configure Maven; you want to follow the conventions. Configuration should be only an option of last resort. Use the configuration to change where one plugin puts things, and then the next plugin can’t find that stuff. Use a profile to tell Maven where to find something, and then nothing else can find that thing without the profile. Configuring Maven gets you into a bit of a configuration feedback loop, and geometric growth of configuration does not lend itself to pom readability.
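To make the contrast concrete, here is a minimal sketch of a conventional pom; the coordinates are hypothetical, chosen only for illustration. Because the project follows the conventions (sources in src/main/java, tests in src/test/java), it needs essentially no plugin configuration at all:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Hypothetical coordinates, for illustration only -->
  <groupId>com.example</groupId>
  <artifactId>conventional-app</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <!-- No build section needed: by convention Maven already knows where the
       sources, tests and resources live, and how to compile, test and
       package them -->
</project>
```

The absence of a build section is the whole point: every engineer who knows the conventions can read this build instantly.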
Even if you can get Maven to do what you need by configuring it to death, you quickly get an incomprehensible build. So, avoid configuration! Stick instead to the conventional path. Your engineers will know and love their build, and you will easily leverage the many benefits offered by the Maven ecosystem — from the rich plugin libraries to the repository servers and build servers. But how does one go about learning the Maven conventions? It’s all about community. Luckily, it’s a pretty friendly community. Here are some of the most important resources that I use when trying to determine how things should be done in Maven:

- Sonatype Blog
- Stackoverflow
- Maven Users List

Additionally, in an effort to be a friendly community member, I’m using this blog entry as an introduction to a series of Maven entries. Each of these entries will outline important Maven conventions. I’ll detail the convention as well as offer example poms. So, keep in touch if you want to learn about Maven conventions.

Reference: Maven Does Not Suck . . . but the Maven Docs Do from our W4G partner Chad Davis at the zeroInsertionForce blog....

Which Private Cloud is Best and How to Select One

This litmus test is proposed to compare private clouds:

1. How long does it take to place in production an application delivered as a service in your private cloud (comparing apples to apples)? Less than 1 hour? Less than 1 day? Less than 1 week? More than 1 week?
2. What is the skill level required for (1)? Rate 1: any user; 2: any sysadmin, no training; 3: only trained computer science sysadmins.
3. Does it have a ready-to-use billing system to be used internally and externally? Most reply "it has hooks to external unnamed billing systems". The reply is either Yes or No.
4. How does the server scalability work, manual or automatic, and where are the additional servers located? (a) More servers on site, or at other sites inside the same organization, are added as a function of aggregated demand? Or (b) servers are added from public sites, for additional costs, whenever they are needed? If (b), how are outside bills allocated to internal and external users?

Now read on to see why.

OpenStack vs Eucalyptus vs OpenNebula is an animated discussion on LinkedIn. Here is my take.

Don’t compete on features

This discussion assumes that the winner will be determined by technical features. This is wrong, of course. Experience shows that the executives who back the product (Eucalyptus has Marten Mickos) and who know all the right people will win. OpenStack has yet to produce a startup backed by some big, well-connected names. If you think this is not important, read the blog from Andreessen Horowitz http://bit.ly/ww37ZZ. You will see how Opsware was transformed from one product with a single customer, and full of bugs, into something that HP bought for $1.6B. Flaunting product features to win the war with competitors is a mistake, because no one knows the winning features anyway.
Marten Mickos tweeted a quote: "Remember that not getting what you want is sometimes a wonderful stroke of luck." I had a look at Andrew Chen's blog post Don't compete on features http://bit.ly/xu9iZn. He says there are three key ramifications for teams building the first version of a product:

- Don’t compete on features. Find an interesting way to position yourself differently – not better, just differently – than your competitors and build a small featureset that addresses that use case well.
- If your product initially doesn’t find a fit in the market (as is common), don’t react by adding additional new features to “fix” the problem. That rarely works.
- Make sure your product reflects the market positioning. If your product is called the Ultimate Driving Machine, …, bring that positioning into the core of your product so that it’s immediately obvious to anyone using it…. Your product will be fundamentally differentiated from the start.

I was the product manager of Sun Grid Engine for a decade, and the most frequent request I had was to produce comparisons with competing products: LSF, PBSpro, and so on. Each time such a document was produced, it was leaked to competitors, who immediately added (or claimed they added) the features we claimed as exclusive. Some of the features were so esoteric (see A glimpse into the new features of Sun Grid Engine 6.2 Update 5, due in December 2009) that you can count the users who demanded them on your fingers. The vast majority of users did not need them.

Private Clouds versus wishful thinking Private Clouds

Tom Morse has an excellent web site where he lists most private cloud offerings which are claimed to be products: http://www.cloudcomputeinfo.com/private-clouds It is a very nice work.
Here are the companies he lists:

http://aws.amazon.com/vpc/
http://www.bmc.com/solutions/cloud-computing
http://www.ca.com/us/cloud-solutions.aspx
http://www.cisco.com/web/about/ent/cloud/index.html
http://www.cloud.com/?ntref=prod_top
http://www.cloupia.com/en/
http://content.dell.com/us/en/enterprise/cloud-computing.aspx
http://www.enomaly.com/
http://www.eucalyptus.com/
http://www.hexagrid.com/
http://www.hp.com
http://www.hpchost.com/pages-Private_Cloud.html
http://tinyurl.com/3wvj864 (IBM)
http://www.microsoft.com/virtualization/en/us/private-cloud.aspx
http://nebula.com/
http://nimbula.com/
http://opennebula.org/start
http://openstack.org/
http://www.os33.com/
http://www.platform.com/private-cloud-computing
http://www.redhat.com/solutions/cloud/foundations/
http://silver.tibco.com/run.php
http://www.suse.com/solutions/platform.html#cloud
http://www.vmware.com/solutions/cloud-computing/private-cloud/

However, the completeness of this list pays as its price the inclusion of wishful-thinking companies who believe they are a private cloud, like, IMO, Cisco. In December 2011 Cisco claimed they had integrated 3rd-party cloud software into their solutions, creating complicated, labyrinthine implementations. On February 1, Padmasree Warrior, Cisco's CTO, claimed at the Cisco Live Europe event that …Cisco also has plans to build out its cloud offerings, with a four-pillar strategy to help customers build private, public and hybrid clouds on its Unified Computing System (UCS). This statement, a surprise for many engineers at execution level in Cisco who are reading on the Internet what their company is up to, contradicts the claim that Cisco has a Private Cloud solution now.

The litmus test to identify the real Private Cloud

Can someone do a one-table comparison of all the private cloud offerings on Paul Morse's web site? I do not mean comparing features, just a few categories:

1. How long does it take to make an application delivered as a service (comparing apples to apples)? Less than 1 hour? Less than 1 day? Less than 1 week? More than 1 week?
2. What is the skill level required for (1)? Rate 1: any user; 2: any sysadmin, no training; 3: only trained computer science sysadmins.
3. Does it have a ready-to-use billing system to be used internally and externally? (Most reply "it has hooks to external unnamed billing systems".) The reply is either Yes or No.
4. How does the scalability work? (a) More servers on site, or at other sites inside the same organization, are added automatically as a function of aggregated demand? Or (b) servers are added from public sites for additional costs? If (b), how are outside bills allocated to internal and external users?

I don't think there is even one person among the people I know – and I know some very competent people – who is able to answer these questions for each product on Paul Morse's rather complete private clouds list. IMHO, if the resulting data center cannot provide satisfactory replies to questions (1) through (4) without exception, no matter what product is used, we do not have a cloud, but just another, slightly less cumbersome to run, data center. Note that none of the litmus test questions mentions virtualization. Virtualization is just one tool, not an end in itself.

Reference: Which Private Cloud is Best and How to Select One from our JCG partner Miha Ahronovitz at The memories of a Product Manager blog. (Copyright 2012 – Ahrono Associates)...

OSGI – Modularizing your application

Since I am a big proponent of modularity, low coupling, high cohesion, etc., I believe that this technology is a breakthrough in how we create applications on the Java platform. With OSGi, it is very simple to create highly extensible applications; see for example the Eclipse IDE. My goal here is not to show in depth how the technology works, but to demonstrate a small example of some of its advantages. The sample consists of a system for sending messages. The user types a message into a TextField and this message can be sent in several ways, such as Email or SMS. In this example, we have four modules: the graphical user interface, the domain, the sender of email messages, and the sender via SMS. Following the nomenclature of OSGi, each module is a Bundle. A Bundle is nothing more than a jar with some additional information in its MANIFEST.MF. This information is used by the OSGi framework. Like almost everything in Java, OSGi is a specification, and therefore there are different implementations to choose from. Among the most famous are Equinox (Eclipse project), Felix (Apache) and Knopflerfish. In this article we will use Equinox.

Download Equinox. For this article we only need the jar. Run the jar to access the Equinox console:

C:\osgi>java -jar org.eclipse.osgi_3.5.1.R35x_v20090827.jar -console

To view the installed bundles, simply type the command ss:

osgi> ss

Framework is launched.

id State Bundle
0 ACTIVE org.eclipse.osgi_3.5.1.R35x_v20090827

osgi> _

As we can see, at this point we only have one bundle installed: the Equinox bundle itself. Now we'll create our own bundle and add it to Equinox. Creating a bundle is very simple.
Create a simple project with the following class:

package br.com.luiscm.helloworld;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    /*
     * (non-Javadoc)
     * @see org.osgi.framework.BundleActivator#start(org.osgi.framework.BundleContext)
     */
    public void start(BundleContext context) throws Exception {
        System.out.println("Hello World!");
    }

    /*
     * (non-Javadoc)
     * @see org.osgi.framework.BundleActivator#stop(org.osgi.framework.BundleContext)
     */
    public void stop(BundleContext context) throws Exception {
        System.out.println("Good Bye World!");
    }
}

This class is the Activator of our bundle. The Activator is used by the OSGi framework to start or stop a bundle. In this first example, the Activator will only print messages when started and stopped. Now we need to modify the MANIFEST of the jar to make it an OSGi bundle:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: LuisCM Plug-in
Bundle-SymbolicName: br.com.luiscm.helloworld
Bundle-Version: 1.0.0
Bundle-Activator: br.com.luiscm.helloworld.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"

The MANIFEST passes some of our information to the OSGi framework, among it the name of the bundle (SymbolicName) and the Activator class. Now let's install this bundle in Equinox. Generate a jar of the project; installing it in Equinox is simple (install file:<path-to-jar>):

osgi> install file:bundle.jar
Bundle id is 1
osgi>

To verify that the bundle was properly installed, simply run the command ss:

osgi> ss

Framework is launched.

id State Bundle
0 ACTIVE org.eclipse.osgi_3.5.1.R35x_v20090827
1 INSTALLED br.com.luiscm.helloworld_1.0.0

osgi> _

The bundle is properly installed; you just need to start it now:

osgi> start 1
Hello World!
osgi>

To stop the bundle:

osgi> stop 1
Good Bye World!
osgi>

Now that we know how to create a bundle, let's start our example.
In the example, we have four bundles:

* Domain: as the name says, it stores the domain classes of our example. We will have two classes: Message and IMessageSender.
* SenderSMS: implementation of IMessageSender that sends messages via SMS.
* SenderEmail: implementation of IMessageSender that sends messages by email.
* UI: the GUI of the example.

Bundle UI

We'll start with the UI bundle. The Activator will just build the frame for the user to enter the message.

package br.com.luiscm.helloworld.ui;

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTextField;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

import br.com.luiscm.helloworld.core.service.Message;

public class Activator implements BundleActivator {

    private Message message;
    private JFrame frame;

    public void start(BundleContext context) throws Exception {
        buildInterface();
    }

    public void stop(BundleContext context) throws Exception {
        destroyInterface();
    }

    private void destroyInterface() {
        frame.setVisible(false);
        frame.dispose();
    }

    private void buildInterface() {
        frame = new JFrame("Hello");
        frame.setSize(200, 80);
        frame.getContentPane().setLayout(new BorderLayout());
        final JTextField textField = new JTextField();

        final JButton button = new JButton("Send");
        button.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent event) {
                message.send(textField.getText());
            }
        });

        frame.getContentPane().add(textField, BorderLayout.NORTH);
        frame.getContentPane().add(button, BorderLayout.SOUTH);
        frame.setVisible(true);
    }
}

Note that the bundle depends on a class called Message. This class belongs to our domain, so it is not part of this bundle.
Here comes another detail of OSGi: communication between bundles is done through services. We can consider this model as an SOA within the VM. The UI bundle will use services from the core bundle. Let's look at the MANIFEST of the UI bundle:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: UI Plug-in
Bundle-SymbolicName: br.com.luiscm.helloworld.ui
Bundle-Version: 1.0.0
Bundle-Activator: br.com.luiscm.helloworld.ui.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: br.com.luiscm.helloworld.core.service,
 javax.swing,
 org.osgi.framework;version="1.3.0",
 org.osgi.util.tracker;version="1.3.6"

See the Import-Package statement. We are importing a package from the core bundle. In this package are the services that our domain provides. We also import the javax.swing package. Now we need to create the service.

Bundle Core

The core bundle has two domain classes: the interface of the senders and the Message class.

Interface:

package br.com.luiscm.helloworld.core.service;

public interface IMessageSender {
    void send(String message);
}

Domain:

package br.com.luiscm.helloworld.core.service;

import java.util.ArrayList;
import java.util.List;

public class Message {

    private final List<IMessageSender> services = new ArrayList<IMessageSender>();

    public void addService(final IMessageSender messageSender) {
        services.add(messageSender);
    }

    public void removeService(final IMessageSender messageSender) {
        services.remove(messageSender);
    }

    public void send(final String message) {
        for (final IMessageSender messageSender : services) {
            messageSender.send(message);
        }
    }
}

See that the Message class holds a list of services. These services are the message senders to be used. Note that the send method simply iterates over the list of senders, delivering the message to each one. So far everything is very simple. Now we need to export the Message class as a service of the core bundle. The UI module will interact directly with this service to send messages. First we need to tell OSGi to export the package to other bundles.
See the MANIFEST:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Helloworld Plugin
Bundle-SymbolicName: br.com.luiscm.helloworld.core
Bundle-Version: 1.0.0
Bundle-Activator: br.com.luiscm.helloworld.core.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0",
 org.osgi.util.tracker;version="1.3.6"
Export-Package: br.com.luiscm.helloworld.core.service

See the Export-Package information. For a class to be visible to another bundle, it must be exported within a package. In our case, the UI bundle needs the Message class, so we need to export the package where the class is. Remember that the UI bundle imported this package.

To register the Message component as a service, we need to interact directly with the OSGi API. When the core bundle is started, we will register the service in the OSGi context. The code is simple:

package br.com.luiscm.helloworld.core;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    public Message messageService = new Message();

    public void start(BundleContext context) throws Exception {
        context.registerService(Message.class.getName(), messageService, null);
    }

    public void stop(BundleContext context) throws Exception {
        messageService = null;
    }
}

The registerService method expects as parameters the service name (the recommendation is to use the class name), the service itself, and some additional settings. Now we need to change the UI bundle to use the Message service.
In the UI bundle's Activator, we just do the service lookup using its name (the class name); ServiceTracker comes from the org.osgi.util.tracker package, which the UI manifest already imports:

private Message message;
private JFrame frame;
private ServiceTracker serviceTracker;

public void start(BundleContext context) throws Exception {
    serviceTracker = new ServiceTracker(context, Message.class.getName(), null);
    serviceTracker.open();
    message = (Message) serviceTracker.getService();
    buildInterface();
}

public void stop(BundleContext context) throws Exception {
    destroyInterface();
    serviceTracker.close();
}

If we add the two bundles to Equinox, we will see that they communicate. Now we need to create the bundles that actually send the messages.

Bundle Sender Email and SMS

The email and SMS sending services will be new services in our system. Therefore, we create a bundle for each. This way we can control them separately. For example, we can stop the SMS sending service and leave only the email one, without affecting system operation. The two bundles have virtually the same structure, so I'll save some lines here. Each sender bundle will have only one class that implements the IMessageSender interface, plus the Activator class.
This interface is in the core bundle, so we need to import its package in the same way we did in the UI bundle:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: SMS Plug-in
Bundle-SymbolicName: br.com.luiscm.helloworld.sms
Bundle-Version: 1.0.0
Bundle-Activator: br.com.luiscm.helloworld.sms.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: br.com.luiscm.helloworld.core.service,
 org.osgi.framework;version="1.3.0"

The bundle's only sender class implements our interface:

package br.com.luiscm.helloworld.sms;

import br.com.luiscm.helloworld.core.service.IMessageSender;

public class MessageSenderSMS implements IMessageSender {

    @Override
    public void send(final String message) {
        System.out.println("Sending by SMS : " + message);
    }
}

Sending by SMS is a service of our system. Therefore, we must register it in the OSGi context:

public class Activator implements BundleActivator {

    private IMessageSender service;

    public void start(BundleContext context) throws Exception {
        service = new MessageSenderSMS();
        context.registerService(IMessageSender.class.getName(), service, null);
    }

    public void stop(BundleContext context) throws Exception {
        service = null;
    }
}

The email bundle is virtually the same code. The only difference is the message printed to System.out. Note that we registered the service under the name of the interface, so now we have two services with the same name. Whenever we ask the context for a service by the interface name, it will apply a priority ranking to return only one implementation. Now that we have two services for sending messages, we need to change our core bundle to use them. To achieve this, we use a ServiceTrackerCustomizer.
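For completeness, a sketch of what that email sender could look like, mirroring the SMS one. The IMessageSender interface is redeclared here only to keep the snippet self-contained; in the real project it lives in the core bundle's br.com.luiscm.helloworld.core.service package.

```java
// Sketch only: the real email bundle mirrors the SMS bundle shown above.
// IMessageSender is redeclared here just so the snippet compiles on its own.
interface IMessageSender {
    void send(String message);
}

public class MessageSenderEmail implements IMessageSender {

    @Override
    public void send(final String message) {
        // The only difference from MessageSenderSMS is this output line.
        System.out.println("Sending by Email : " + message);
    }
}
```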
ServiceTrackerCustomizer and ServiceTracker

As we saw, we used a ServiceTracker to look up the Message service. However, in the case of the senders, we need to know when a new sender service becomes available or when a sender is removed. This information is important to feed the list of services inside the Message object. To access this information, we use a ServiceTrackerCustomizer. The code is simple:

package br.com.luiscm.helloworld.core;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

import br.com.luiscm.helloworld.core.service.IMessageSender;
import br.com.luiscm.helloworld.core.service.Message;

public class MessageSenderServiceTracker implements ServiceTrackerCustomizer {

    private final BundleContext context;
    private final Message message;

    public MessageSenderServiceTracker(final BundleContext context, final Message message) {
        this.context = context;
        this.message = message;
    }

    @Override
    public Object addingService(final ServiceReference serviceReference) {
        final IMessageSender sender = (IMessageSender) context.getService(serviceReference);
        message.addService(sender);
        System.out.println("tracker : " + sender.getClass().getName());
        return sender;
    }

    @Override
    public void modifiedService(final ServiceReference serviceReference, final Object service) {
        // nothing to do when a sender service is modified
    }

    @Override
    public void removedService(final ServiceReference serviceReference, final Object service) {
        final IMessageSender sender = (IMessageSender) context.getService(serviceReference);
        message.removeService(sender);
    }
}

Just implement the ServiceTrackerCustomizer interface and code what you want to happen when a service is added, modified or removed. Simple! In our case we add or remove the service from the list of services in our Message object. There is also a "log" message to help us with testing. Now we need to make one more minor change in the core bundle's Activator: we must register our ServiceTrackerCustomizer as a listener for services of type IMessageSender.
public class Activator implements BundleActivator {

    public Message messageService = new Message();
    private ServiceTracker messageSenderServiceTracker;

    public void start(BundleContext context) throws Exception {
        context.registerService(Message.class.getName(), messageService, null);

        final MessageSenderServiceTracker serviceTracker =
            new MessageSenderServiceTracker(context, messageService);
        messageSenderServiceTracker =
            new ServiceTracker(context, IMessageSender.class.getName(), serviceTracker);
        messageSenderServiceTracker.open();
    }

    public void stop(BundleContext context) throws Exception {
        messageSenderServiceTracker.close();
        messageService = null;
    }
}

We use the ServiceTrackerCustomizer together with the ServiceTracker. Whenever a service is added, modified or removed, our component will be called.

Testing the application

Now that the code is done, let's test the application.
Create four jars:

* bundleCore.jar
* bundleUI.jar
* bundleSenderEmail.jar
* bundleSenderSMS.jar

Install the four bundles in Equinox:

Framework is launched.

id State Bundle
0 ACTIVE org.eclipse.osgi_3.5.1.R35x_v20090827

osgi> install file:bundleCore.jar
Bundle id is 5
osgi> install file:bundleUI.jar
Bundle id is 6
osgi> install file:bundleSenderEmail.jar
Bundle id is 7
osgi> install file:bundleSenderSMS.jar
Bundle id is 8
osgi> ss

Framework is launched.

0 ACTIVE org.eclipse.osgi_3.5.1.R35x_v20090827
5 INSTALLED br.com.luiscm.helloworld.core_1.0.0
6 INSTALLED br.com.luiscm.helloworld.ui_1.0.0
7 INSTALLED br.com.luiscm.helloworld.email_1.0.0
8 INSTALLED br.com.luiscm.helloworld.sms_1.0.0

Start the bundles and test the application:

C:\osgi>java -jar org.eclipse.osgi_3.5.1.R35x_v20090827.jar -console
osgi>
tracker : br.com.luiscm.helloworld.sms.MessageSenderSMS
tracker : br.com.luiscm.helloworld.email.MessageSenderEmail

See that the messages are sent by both email and SMS. In the Equinox console, stop the email service:

osgi> stop 7

Try to send a message again. Because the email service is no longer available, the message is sent only by SMS. Being able to stop application modules without suffering side effects is sensational. Imagine that you discover a critical error in the SMS module. You don't need to take the whole application down to correct the problem; just stop the SMS module. The rest of the system will continue operating normally. Experiment with this small example: stop and start the services. This will not affect the core, much less the UI. I hope I managed to explain a little of what OSGi is. Note that there is much more detail on bundle classpath control and configuration that I did not cover here. It is left as a task for those interested to take a look at the other features. It's also worth looking at the Spring-DM project. Spring makes it very easy to configure services, in addition to providing an excellent IoC container.
Reference: OSGI – Modularizing your application from our JCG partner Luis Carlos Moreira da Costa at the Eclipse Brazil blog....

Key Success Factors for Open Source Strategies

Last week at EclipseCon Europe, John Swainson gave a keynote address that reflected on IBM’s decision to start Eclipse as an open source project and consortium. During his speech, John identified 5 key things IBM thought they needed to be successful. These success factors are still relevant today for any company interested in an open source strategy, so I think they warrant repeating:

1. You need really good technology. Open source doesn’t make bad or mediocre technology look good. Open source can’t save a bad product, so please don’t try.
2. A governance structure that encourages broad industry adoption. If you want broad adoption, including your competitors, make sure you have a governance structure that is vendor neutral. Obviously, I believe open source foundations like the Eclipse Foundation, Apache Foundation or Linux Foundation offer the best solution.
3. A company needs to spend significant technical and marketing resources to kick-start the project. Putting code on a download server and expecting a community to emerge is unrealistic. It takes a lot of work and resources to start an open source project. Someone needs to spend the time and money to nurture the community and promote the project. This is one criterion a LOT of companies overlook.
4. A commercial-friendly license model. If you want industry adoption and participation, you need to allow other companies to commercialize the technology. Licenses like EPL, Apache or BSD provide a commercial-friendly license model.
5. IT infrastructure that encourages collaboration and distributed development. Someone needs to implement and support the tools for collaboration. Today there are lots of services that can be used, but someone still needs to be responsible for administering them.

Most of these should be obvious. However, I am still surprised that a lot of companies forget that all 5 have to be implemented if your strategy is going to be successful.
Reference: Key Success Factors for Open Source Strategies from our JCG partner Ian Skerrett at the Ian Skerrett’s blog blog....

JMeter: Load Testing Relational Databases

Apache JMeter is a performance testing tool written entirely in Java. Any application that works on a request/response model can be load tested with JMeter. A relational database is not an exception: it receives SQL queries, executes them and returns the results of the execution. I'm going to show you how easy it is to set up test scenarios with the graphical user interface of JMeter. But before diving into details, let's go over the basic terms:

Test plan: describes a test scenario.
Thread Group: represents users running your test scenario.
Samplers: a way of sending a request and waiting for the response. HTTP request, JDBC request, SOAP/XML-RPC request and Java object request are examples of samplers.
Logic Controller: used to customize the logic that JMeter uses to decide when to send requests.
Listeners: receive test results and display reports.
Timers: cause JMeter to delay a certain amount of time before each request that a thread makes.
Assertions: test that the application returns the expected responses.

Note: This post is not meant to be an alternative documentation for JMeter. JMeter has great documentation. You can find the details in its User's Manual (http://jakarta.apache.org/jmeter/usermanual/index.html).

Suppose we have an application that logs every transaction into a relational database. We are going to create a test plan, step by step, in order to answer the questions below:

1. How many transaction records can be inserted into the transaction table in a second?
2. How much time does it take to insert a single transaction record into the transaction table?
3. How does the number of concurrent threads (users) affect the inserts/sec and average response times?
4. How does the number of records affect the inserts/sec and average response times?

Step 1

Copy the MySQL JDBC driver into the lib folder of your JMeter installation. JMeter needs a suitable JDBC driver on the classpath to connect to the database.
Example: ~/tools/jakarta-jmeter-2.3.4/lib/mysql-connector-java-5.0.5.jar

We are going to store the orders of the customers and the result of each order in the transactions table.

CREATE TABLE transactions (
  id INT NOT NULL AUTO_INCREMENT,
  customer_id INT NOT NULL,
  order_id INT NOT NULL,
  result INT,
  PRIMARY KEY (id)
);

Step 2 Create a test plan and name it “Test MYSQL DB”. Then add the following JMeter components to the test plan:

A thread group named ‘Database Users’
A sampler of type JDBC Request
A config element of type JDBC Connection Configuration
Three config elements of type Random Variable
A listener of type Summary Report

After adding these components the JMeter test plan looks like the following picture.

Step 3 Configure the database users. The thread group component simulates the database users. 1. Number of users (threads) 2. How many times a user will send a request (loop count). If you select ‘Forever’, threads will run in a while(true) {…} loop until you decide to stop the test.

Step 4 Configure the JDBC connection pool. The JDBC Connection Configuration component is used to create JDBC connection pools. The database URL, JDBC driver, database user and password are configured with this component. Connection pools are identified by “Variable Name”. JDBC Samplers (requests) use this variable name (connection pool name) to pop and push connections. I named the test connection pool “my db pool”.

Step 5 Define the random variables that will be used in the INSERT statements. In this test I am using three random variables: user id, order id and result. The following picture shows the random number configuration for user id. The random number generator will give us a random integer between 1 and 1000000. We can refer to the generated random number by the name user_id.

Step 6 The JDBC Request component is the place where we tell our users (threads) what to do. The name of the pool that was configured in Step 4, “my db pool”, will be used as the “variable name bound to pool”. All threads will execute prepared statements.
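Under the hood, each thread iteration amounts to binding three random values into a parameterized INSERT. This plain-Java sketch mirrors what the sampler configuration does (the SQL and column names follow the CREATE TABLE above; the random range mirrors the Random Variable config — with a real connection you would feed these values into a java.sql.PreparedStatement):

```java
import java.util.concurrent.ThreadLocalRandom;

public class JdbcSamplerSketch {
    // The parameterized INSERT configured in the JDBC Request sampler
    static final String INSERT_SQL =
        "INSERT INTO transactions (customer_id, order_id, result) VALUES (?, ?, ?)";

    // Mirrors the Random Variable config: a random integer between 1 and 1000000
    static int randomValue() {
        return ThreadLocalRandom.current().nextInt(1, 1_000_001);
    }

    public static void main(String[] args) {
        // In JMeter the three ?s are bound to ${user_id}, ${order_id} and ${result}
        int userId = randomValue();
        int orderId = randomValue();
        int result = randomValue();
        System.out.println(INSERT_SQL);
        System.out.println(userId >= 1 && userId <= 1_000_000
                && orderId >= 1 && orderId <= 1_000_000
                && result >= 1 && result <= 1_000_000);
    }
}
```

JMeter performs exactly this bind-and-execute cycle on a pooled connection for every loop iteration of every thread.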
User id, order id and result will be generated by the random number configuration described in Step 5.

Step 7 Now we have our threads configured to insert transaction records into the transactions table. In this last step we will add a listener of type Summary Report in order to view the test results. The results tell us that 10 concurrent users (threads) working in an infinite loop can insert nearly 3300 rows per second into our transactions table, and the average time spent inserting a row is 2 ms. You can also choose the “Graph Results” listener to view a visual representation of the results. I created and ran a simple DB test plan. I hope you’ll find this post helpful. Keep this motto in mind: if you can’t measure it, you can neither manage it nor improve it. Happy testing… Reference: Load Testing Relational Databases With JMeter from our JCG partner Ilkin Ulas at the All your base are belong to us blog....
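As a back-of-the-envelope cross-check on summary report numbers like those above, the average response time puts a hard ceiling on throughput. This sketch uses the figures from the report (10 threads, 2 ms average, ~3300 inserts/sec observed); treat the arithmetic, not the exact numbers, as the point:

```java
public class ThroughputCeiling {
    public static void main(String[] args) {
        int threads = 10;        // concurrent users in the thread group
        int avgResponseMs = 2;   // average insert time from the summary report
        // Each thread can complete at most 1000/avg inserts per second
        long ceiling = threads * (1000L / avgResponseMs);
        System.out.println(ceiling);
        // The observed ~3300 inserts/sec sits below this bound; the gap is
        // JMeter bookkeeping, network latency and connection pool overhead
        System.out.println(3300 <= ceiling);
    }
}
```

If your measured throughput ever exceeds this ceiling, one of the two numbers in the report is being misread.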

Hibernate Tip: Sort and Order

Let’s introduce another Hibernate performance tip. Do you remember the model of the previous Hibernate post? We had a starship and officer related with a one to many association.

@Entity
public class Starship {

    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE)
    private Long id;
    public Long getId() {return id;}
    protected void setId(Long id) {this.id = id;}

    @OneToMany(mappedBy="starship", cascade={CascadeType.ALL})
    private List<Officer> officers = new ArrayList<Officer>();
    public List<Officer> getOfficers() {return Collections.unmodifiableList(officers);}
    protected void setOfficers(List<Officer> officers) {this.officers = officers;}
    public void addOfficer(Officer officer) {
        officer.setStarship(this);
        this.officers.add(officer);
    }

    //more code
}

@Entity
public class Officer {

    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE)
    private Long id;
    public Long getId() {return id;}
    protected void setId(Long id) {this.id = id;}

    @ManyToOne
    private Starship starship;
    public Starship getStarship() {return starship;}
    protected void setStarship(Starship starship) {this.starship = starship;}

    //more code
}

Now we have the next requirement: we shall get all officers assigned to a starship in alphabetical order. To solve this requirement we can:

implement an HQL query with an order by clause.
use the sort approach.
use the order approach.

The first solution is good in terms of performance, but implies more work for us as developers, because we should write a query finding all officers of a given starship ordered by name and then create a finder method in the DAO layer (in case you are using the DAO pattern). Let’s explore the second solution: we could use the SortedSet class for the association and make Officer implement Comparable, so that Officer has a natural order.
This solution implies less work than the first one, but requires using the @Sort Hibernate annotation on the association definition. Note that there is no equivalent annotation in the JPA specification. So let’s modify the previous model to meet our new requirement. First we are going to implement the Comparable interface in the Officer class:

@Entity
public class Officer implements Comparable<Officer> {

    //getters, setters, equals, ... code

    public int compareTo(Officer officer) {
        return this.name.compareTo(officer.getName());
    }
}

We are ordering officers by name by simply comparing the name field. The next step is annotating the association with @Sort.

@Entity
public class Starship {

    //more code

    @OneToMany(mappedBy="starship", cascade={CascadeType.ALL})
    @Sort(type=SortType.NATURAL)
    private SortedSet<Officer> officers = new TreeSet<Officer>();
    public SortedSet<Officer> getOfficers() {return Collections.unmodifiableSortedSet(officers);}
    protected void setOfficers(SortedSet<Officer> officers) {this.officers = officers;}
    public void addOfficer(Officer officer) {
        officer.setStarship(this);
        this.officers.add(officer);
    }
}

Notice that now the officers association is implemented using a SortedSet instead of a List. Furthermore, we are adding the @Sort annotation to the relationship, stating that officers should be naturally ordered. Before finishing this post we will return to the @Sort topic, but for now this is sufficient. And finally, a method that gets all officers of a given starship ordered by name, printing them to the log file.
EntityManager entityManager = this.entityManagerFactory.createEntityManager();
EntityTransaction transaction = entityManager.getTransaction();

transaction.begin();
log.info("Before Find Starship By Id");

Starship newStarship = entityManager.find(Starship.class, starshipId);
SortedSet<Officer> officers = newStarship.getOfficers();
for (Officer officer : officers) {
    log.info("Officer name {} with rank {}", officer.getName(), officer.getRank());
}
log.info("After Find Starship By Id and Before Commit");

transaction.commit();
entityManager.close();

All officers are sorted by their names, but let’s examine which queries are sent to the RDBMS.

Hibernate: select starship0_.id as id1_0_, starship0_.affiliationEnum as affiliat2_1_0_, starship0_.launched as launched1_0_, starship0_.height as height1_0_, starship0_.length as length1_0_, starship0_.power as power1_0_, starship0_.width as width1_0_, starship0_.registry as registry1_0_, starship0_.starshipClassEnum as starship9_1_0_ from Starship starship0_ where starship0_.id=?

Hibernate: select officers0_.starship_id as starship7_1_1_, officers0_.id as id1_, officers0_.id as id0_0_, officers0_.affiliationEnum as affiliat2_0_0_, officers0_.homePlanet as homePlanet0_0_, officers0_.name as name0_0_, officers0_.rank as rank0_0_, officers0_.speciesEnum as speciesE6_0_0_, officers0_.starship_id as starship7_0_0_ from Officer officers0_ where officers0_.starship_id=?

The first query results from calling the find method on the EntityManager instance to find the starship. Because one to many relationships are lazy by default, when we call the getOfficers method and access the SortedSet for the first time, the second query is executed to retrieve all officers. See that no order by clause is present in the query, but looking carefully at the output, the officers are retrieved in alphabetical order.
<Officer name Beverly Crusher with rank COMMANDER>
<Officer name Data with rank LIEUTENANT_COMMANDER>
<Officer name Deanna Troi with rank COMMANDER>
<Officer name Geordi La Forge with rank LIEUTENANT>
<Officer name Jean-Luc Picard with rank CAPTAIN>
<Officer name William Riker with rank COMMANDER>
<Officer name Worf with rank LIEUTENANT>

So who is sorting the officer entities? The explanation is in the @Sort annotation. In Hibernate a sorted collection is sorted in memory, with Java responsible for sorting the data using the compareTo method. Obviously this is not the most performant way to sort a collection of elements. What we would really like is a hybrid: ordering via an SQL clause, but configured with an annotation instead of a handwritten query. And this leads us to the third possibility: using the ordering approach. The @OrderBy annotation, available as both a Hibernate annotation and a JPA annotation, lets us specify how to order a collection by adding an “order by” clause to the generated SQL. Keep in mind that javax.persistence.OrderBy allows us to specify the order of the collection via object properties, while org.hibernate.annotations.OrderBy orders a collection by appending a fragment of SQL (not HQL) directly to the order by clause. Now the Officer class does not need to be touched; we don’t need to implement the compareTo method nor a java.util.Comparator. We only need to annotate the officers field with the @OrderBy annotation. Since in this case we are ordering by a simple attribute, the JPA annotation is used to maintain full compatibility with other “JPA ready” ORM engines. By default ascending order is assumed.
@Entity
public class Starship {

    //code

    @OneToMany(mappedBy="starship", cascade={CascadeType.ALL})
    @OrderBy("name")
    private List<Officer> officers = new ArrayList<Officer>();
    public List<Officer> getOfficers() {return Collections.unmodifiableList(officers);}
    protected void setOfficers(List<Officer> officers) {this.officers = officers;}
    public void addOfficer(Officer officer) {
        officer.setStarship(this);
        this.officers.add(officer);
    }
}

And if we rerun the “get all officers” method, the following queries are sent:

Hibernate: select starship0_.id as id1_0_, starship0_.affiliationEnum as affiliat2_1_0_, starship0_.launched as launched1_0_, starship0_.height as height1_0_, starship0_.length as length1_0_, starship0_.power as power1_0_, starship0_.width as width1_0_, starship0_.registry as registry1_0_, starship0_.starshipClassEnum as starship9_1_0_ from Starship starship0_ where starship0_.id=?

Hibernate: select officers0_.starship_id as starship7_1_1_, officers0_.id as id1_, officers0_.id as id0_0_, officers0_.affiliationEnum as affiliat2_0_0_, officers0_.homePlanet as homePlanet0_0_, officers0_.name as name0_0_, officers0_.rank as rank0_0_, officers0_.speciesEnum as speciesE6_0_0_, officers0_.starship_id as starship7_0_0_ from Officer officers0_ where officers0_.starship_id=? order by officers0_.name asc

Both queries are still executed, but note that now the select query contains an order by clause too. With this solution you are saving processing time by letting the RDBMS sort the data quickly, rather than ordering the data in Java once it is received. Furthermore, the OrderBy annotation does not force you to use a SortedSet or SortedMap collection. You can use any collection like HashMap, HashSet, or even a Bag, because Hibernate will internally use a LinkedHashMap, LinkedHashSet or ArrayList respectively. In this example we have seen the importance of correctly choosing an order strategy.
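When natural order is not what you want, @Sort can also be given a custom java.util.Comparator. The in-memory sorting it performs is plain Java; here is a minimal runnable sketch with a hypothetical Watch class and TimeComparator (neither is part of the original model):

```java
import java.util.Comparator;
import java.util.SortedSet;
import java.util.TreeSet;

public class ComparatorSortSketch {
    // Hypothetical entity: an officer's watch shift, not part of the original model
    static class Watch {
        final String officer;
        final int startHour;
        Watch(String officer, int startHour) {
            this.officer = officer;
            this.startHour = startHour;
        }
    }

    // The kind of class you would reference with
    // @Sort(type=SortType.COMPARATOR, comparator=TimeComparator.class)
    static class TimeComparator implements Comparator<Watch> {
        public int compare(Watch a, Watch b) {
            return Integer.compare(a.startHour, b.startHour);
        }
    }

    public static void main(String[] args) {
        // A comparator-sorted collection keeps elements ordered much like this TreeSet
        SortedSet<Watch> watches = new TreeSet<Watch>(new TimeComparator());
        watches.add(new Watch("Worf", 20));
        watches.add(new Watch("Data", 0));
        watches.add(new Watch("Riker", 8));
        for (Watch w : watches) {
            System.out.println(w.officer);
        }
    }
}
```

The comparator keeps the sorting logic out of the entity itself, which is exactly why it is useful when you cannot (or do not want to) touch the model classes.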
Whenever possible you should try to take advantage of the capabilities of the RDBMS, so your first option should be using the OrderBy annotation (Hibernate or JPA) instead of Sort. But sometimes an OrderBy clause will not be enough. In that case, I recommend using the Sort annotation with a custom type (using a java.util.Comparator class) instead of relying on natural order, to avoid touching the model classes.

@Sort(type=SortType.COMPARATOR, comparator=TimeComparator.class)

I hope this post helped you to understand the differences between “sort” and “order” in Hibernate. Keep learning. Reference: Hibernate Tip: Sort and Order from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Java Swing to Android

Here is a quick, brief 10,000ft overview/observations on Android from a Swing developer’s point of view. Firstly, if you’re coming to Android from Swing you are approaching it with a big advantage over J2EE developers. Now I realise Android doesn’t actually have the AWT/Swing APIs, but that doesn’t matter, as the general front end/event driven mechanism in it is so close to Swing that you will find it easy. Now Android does rely a lot more on XML defs than Swing ever did, and that took me a while to get my head around and realise how powerful a feature it actually is if used correctly. I started off by buying a couple of books: Learning Android and Programming Android. I’d recommend both of these as they cover the basics very well; from then on you can use the online Android SDK docs. Other books I’ve seen are not great to be honest, you will be better off googling. The Android GUI is like Swing: it’s on a single thread, and if you do some heavy lifting on that thread it will cause some issues, just like with Swing. There are mechanisms and classes available that, like SwingWorker, help you out (more of this in later blogs). A typical Android screen is created by an Activity class that you have extended. This class typically inflates an XML definition of the screen layout. The XML defines the textfields, comboboxes (called Spinners) and so on. Eclipse is typically used to create the XML (I use the Graphical Layout Tool in Eclipse, which is still very buggy but OK). You can also hard code the layouts but I wouldn’t bother, as it’s easier to use the XML. The Activities are state driven: they are created, paused, resumed and so on. The states are there because they run on a small device where calls can come in, other programs can be started and so on. The screens for your app can be hidden, displayed again, killed off and restarted …. You need to handle this and in some cases persist the data.
The Activity states are a bit of a head scratcher to start with, but once you try a few examples out it’s straightforward enough. Handling events from components (Views in Android) is so similar to Swing it makes me wonder if somebody copied the event mechanism ;) Any Activity you create needs to be defined in your application’s Manifest file; this file is used by the Android device to figure out your app’s main class, the icon, the privileges it needs and so on. If you don’t specify an Activity in the manifest then the app will stack trace when you attempt to display it. Jumping from screen to screen is done by the use of Intents. Intents are used by Android to signal that you want something to happen. You can create an Intent passing in the class that you want to process the Intent (say, an Activity which displays another screen), or you can ask for some operation, for instance to display a web page; Android in this case would open a browser and display the page. If you had several browsers installed it might ask you which one to use. Android keeps an internal stack so that when the back key is pressed it will switch back from the browser to your app’s Activity (remember the states I mentioned, your Activity has just had its resume state called ;). Android has some nice features to let you define backgrounds and the look of buttons. This seems to be similar to JavaFX. Basically you can define gradients, 9-patch images (a way of taking a small image and extending it to fit a background nicely), shapes ……. Android comes complete with SQLite3, which is a nice lightweight database. I’ve found it very quick and nice to use. I’ll return to this in another blog, as I’m not sure the preferred way to use the database in most books is actually a good idea. Finally, there is an emulator to test your apps on.
It’s OK: it lets you test things on different versions of Android and different device specs (screens, memory, input methods (rollers, soft/hard keyboard)), but it’s slow, and really I’d say if you want to seriously write an app you need at least one Android device. I have a nice Nexus S. Mmmmmm, lovely. Anyway, that’s a very brief overview, and when I finish my app I will have some time to spend adding some nice new blogs. Enjoy. Reference: Swing to Android from our JCG partner Steve Webb at the Java Desktop Development at the Coal Face blog....

Integrating Spring & JavaServer Faces : Improved Templating

With the release of version 2.0, Facelet templating became a core part of the JSF specification. Using <ui:composition> and <ui:decorate> tags it becomes pretty easy to build up complicated pages whilst still keeping your mark-up clean. Templates are particularly useful when creating HTML forms but, unfortunately, do tend to cause repetition in your xhtml files, breaking the DRY (Don’t Repeat Yourself) principle of good software design. As part of a project to provide deeper integration between JSF and Spring I have developed a couple of new components that aim to make templating easier. Before diving into the new components, let’s look at how a typical form might be built up using standard JSF templates. Often the initial starting point with form templates is to add some boilerplate surround to each input; often you need extra <div> or <span> tags for your CSS to use. Here is a typical example:

<!-- /WEB-INF/pages/entername.xhtml -->
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="firstName" label="First Name" value="#{bean.firstName}"/>
  <ui:param name="label" value="First Name"/>
  <ui:param name="for" value="firstName"/>
</ui:decorate>
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="lastName" label="Last Name" value="#{bean.lastName}"/>
  <ui:param name="label" value="Last Name"/>
  <ui:param name="for" value="lastName"/>
</ui:decorate>
<!-- Many additional form elements -->

<!-- /WEB-INF/layout/form.xhtml -->
<ui:composition>
  <div class="formElement">
    <span class="formLabel">
      <h:outputLabel for="#{for}" value="#{label}"/>
    </span>
    <ui:insert/>
  </div>
</ui:composition>

Here we can see that each item on the form is contained within a <div> and form labels are wrapped in an additional <span>. There is already some repetition in the mark-up, with the “for” parameter mirroring the component ID.
I have also given each <h:inputText> element a label attribute for better validation error messages; this is repeated in the “label” <ui:param>. Things start getting worse if we want to mark required fields with an asterisk:

<!-- /WEB-INF/pages/entername.xhtml -->
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="firstName" label="First Name" value="#{bean.firstName}" required="false"/>
  <ui:param name="label" value="First Name"/>
  <ui:param name="for" value="firstName"/>
  <ui:param name="showAsterisk" value="false"/>
</ui:decorate>
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="lastName" label="Last Name" value="#{bean.lastName}" required="true"/>
  <ui:param name="label" value="Last Name"/>
  <ui:param name="for" value="lastName"/>
  <ui:param name="showAsterisk" value="true"/>
</ui:decorate>
<!-- Many additional form elements -->

<!-- /WEB-INF/layout/form.xhtml -->
<ui:composition>
  <div class="formElement">
    <span class="formLabel">
      <h:outputLabel for="#{for}" value="#{label}#{showAsterisk ? ' *' : ''}"/>
    </span>
    <ui:insert/>
  </div>
</ui:composition>

It’s pretty frustrating that we need to pass <ui:param> items that duplicate attributes already specified on the <h:inputText>. It is easy to see how, even for relatively small forms, we are going to end up with a lot of duplication in our mark-up. What we need is a way to get information about the inserted component inside the template, even though we don’t know what type of component it will be. What we need is <s:componentInfo>. The <s:componentInfo> component exposes a variable containing information about the inserted component. This information includes the label, the component clientId and whether the component is required.
By inspecting the inserted item we can remove a lot of duplication:

<!-- /WEB-INF/pages/entername.xhtml -->
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="firstName" label="First Name" value="#{bean.firstName}" required="false"/>
</ui:decorate>
<ui:decorate template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="lastName" label="Last Name" value="#{bean.lastName}" required="true"/>
</ui:decorate>
<!-- Many additional form elements -->

<!-- /WEB-INF/layout/form.xhtml -->
<ui:composition>
  <s:componentInfo var="info">
    <div class="formElement">
      <span class="#{info.valid ? 'formLabel' : 'formErrorLabel'}">
        <h:outputLabel for="#{info.for}" value="#{info.label}#{info.required ? ' *' : ''}"/>
      </span>
      <ui:insert/>
    </div>
  </s:componentInfo>
</ui:composition>

Something else that we can now do is tell if the inserted component has failed validation. Notice that the example above will pick the “formErrorLabel” CSS class for components that are not valid. One interesting feature of having the new <s:componentInfo> component is that all the <ui:decorate> tags become identical. We have removed all the repetition inside the tag, but the tag itself is still repeated many times. Here we have one more trick that can help, by introducing a new <s:decorateAll> tag. Using <s:decorateAll> allows us to apply a template once for every child component. Here is the updated form mark-up:

<!-- /WEB-INF/pages/entername.xhtml -->
<s:decorateAll template="/WEB-INF/layout/form.xhtml">
  <h:inputText id="firstName" label="First Name" value="#{bean.firstName}" required="false"/>
  <h:inputText id="lastName" label="Last Name" value="#{bean.lastName}" required="true"/>
  <!-- Many additional form elements -->
</s:decorateAll>

<!-- /WEB-INF/layout/form.xhtml -->
<ui:composition>
  <s:componentInfo var="info">
    <div class="formElement">
      <span class="#{info.valid ? 'formLabel' : 'formErrorLabel'}">
        <h:outputLabel for="#{info.for}" value="#{info.label}#{info.required ? ' *' : ''}"/>
      </span>
      <ui:insert/>
    </div>
  </s:componentInfo>
</ui:composition>

If you want to look at the source code for these components, check out the org.springframework.springfaces.template.ui package in the springfaces GitHub project. Reference: Integrating Spring & JavaServer Faces: Improved Templating from our JCG partner Phillip Webb at the Phil Webb’s blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.