


Java Debuggers and Timeouts

How to use your debugger in the presence of timeouts in your code.

My kingdom for a debugger! So you’ve been coding away merrily on a project and everything is going well until a bug appears. You reach into your developer’s toolbox and pull out a debugger. It’s great – you can set breakpoints, you can interrupt when there’s an exception and you can inspect expressions at runtime. Whatever challenge awaits, you can be sure that a debugger will help!

Unfortunately life isn’t that easy. A lot of code needs to have some kind of timeout – an event that happens after a period of time. The problem with this is that timeouts tend to ruin the debugging experience. You’re sitting there looking at your breakpoint, thinking “Now why is x 2 instead of 1?” Poof! The timeout kicks in and you are no longer able to continue. Even worse, the JVM itself quits! So you go through the process of increasing your timeout, debugging and fixing your problem. Afterwards you either return the timeout to its original setting and have to go through the same tedious process again, or accidentally commit the changed value into your source tree, thus breaking a test or maybe even production. To me this seems less than ideal.

“For somehow this is timeout’s disease, to trust no friends”

There are many reasons that people introduce timeouts. I’ve listed a few below, a couple of good and a couple of bad, and I’m sure you can think of a few more yourself:

- Checking that an asynchronous event has been responded to within a certain period of time.
- Avoiding starvation of a time-based resource, such as a thread pool.
- You’ve got a race condition that needs a quick fix.
- You are waiting for an event to happen and decide to hard-code an assumption about how long it’ll take. (Most frequently spotted in tests.)

Now obviously if your timeout has been introduced as a hack then it’s a good time to clean and boy-scout the code. If you need to rely on an event happening in tests then you should treat those tests as clients of your API and be able to know when the event has occurred. This might involve injecting a mock which gets called when an event happens, or subscribing to a stream of events. If you’ve got a race condition – fix it! I know it’s painful and hard, but do you really want a ticking timebomb in your codebase ready to generate a support call at 3am?

Managing your timeouts

Having said that we should remove the bad uses of timeouts, it’s pretty clear that there are perfectly legitimate uses of timeouts. They are especially common in event-driven and asynchronous code. It would still be good to be able to debug with them around. Good practice, regardless of other factors, is to standardise your timeouts into configuration properties which can be set at runtime. This lets you easily alter them when running in a local IDE vs production. It can also help with managing the different performance properties that you encounter from differing hardware setups.

Having externalised your timeouts into configuration, you can then detect whether your code is running inside a debugger and set timeouts to significantly longer periods if this is the case. The trick to doing this is to recognise that a debugger involves running a Java agent, which shows up in the command-line arguments of the program it runs under. You can check whether these command-line arguments contain the debug agent. The following code snippet shows how to do this and has been tested to work under both Eclipse and IntelliJ IDEA.
RuntimeMXBean runtimeMXBean = ManagementFactory.getRuntimeMXBean();
String jvmArguments = runtimeMXBean.getInputArguments().toString();
boolean hasDebuggerAttached = jvmArguments.contains("-agentlib:jdwp");

I can see why some people would view this as a hack as well: you’re actively discovering something about your environment by looking at your own command-line arguments and then adapting around it. From my perspective, I’ve found this to be a useful technique. It does make it easier to debug in the presence of timeouts.

Reference: Java Debuggers and Timeouts from our JCG partner Richard Warburton at the Insightful Logic blog.
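As a rough illustration of the pattern described above (externalised timeouts that are stretched when a debugger is attached), here is a minimal sketch. The Timeouts class, the property names and the multiplier are illustrative assumptions, not part of the original post.

import java.lang.management.ManagementFactory;

public final class Timeouts {

    // Multiply configured timeouts by this factor when a debugger is detected (arbitrary choice).
    private static final long DEBUG_MULTIPLIER = 1000;

    private static final boolean DEBUGGER_ATTACHED = ManagementFactory.getRuntimeMXBean()
            .getInputArguments().toString().contains("-agentlib:jdwp");

    private Timeouts() {
    }

    // Reads a timeout from a system property and stretches it if a debugger is attached.
    public static long millis(String propertyName, long defaultMillis) {
        long configured = Long.getLong(propertyName, defaultMillis);
        return DEBUGGER_ATTACHED ? configured * DEBUG_MULTIPLIER : configured;
    }
}

A caller would then use something like Timeouts.millis("app.request.timeout", 5000) wherever a timeout value is needed, instead of a hard-coded constant.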

JavaFX Tip 8: Beauty Is Skin Deep

If you are developing a UI framework for JavaFX, then please make it a habit to always split your custom controls into a control class and a skin class. Coming from Swing myself, this was not obvious to me right away. Swing also uses an MVC concept and delegates the actual component rendering to a UI delegate, but people extending Swing mostly subclassed one of its controls and added extensions / modifications to the subclass. Only very few frameworks actually worked with the UI delegates (e.g. MacWidgets).

I have the luxury of being able to compare the implementation of the same product / control once done in Swing and once done in JavaFX, and I noticed that the JavaFX implementation is so much cleaner, largely because of the split into controls and skins (next in line: CSS styling and property binding). In Swing I was exposing a lot of things to the framework user that I personally considered “implementation detail” but that became public API nevertheless. The JavaFX architecture makes it much more obvious where the framework developer draws the line between public and internal API.

The Control

The control class stores the state of the control and provides methods to interact with it. State information can be: the data visualized by the control (e.g. the items in TableView), visual attributes (show this, hide that), factories (e.g. cell factories). Interaction can be: scroll to an item, show a given time, do this, do that. The control class is the contract between your framework code and the application using the framework. It should be well designed, clean, stable, and final.

The Skin

This is the place to go nuts, the Wild West. The skin creates the visual representation of your control by composing already existing controls or by extending the very basic classes, such as Node or Region. Skins are often placed in separate packages with package names that imply that the API contained inside of them is not considered for public use. If somebody does use them, then it is at their own risk, because the framework developer (you) might decide to change them from release to release.

Reference: JavaFX Tip 8: Beauty Is Skin Deep from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
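To make the split concrete, here is a minimal sketch of a control / skin pair; the class names and the single text property are illustrative assumptions, not taken from the original post.

import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;
import javafx.scene.control.Control;
import javafx.scene.control.Label;
import javafx.scene.control.Skin;
import javafx.scene.control.SkinBase;

// The control: state and public API only, no visuals.
public class StatusControl extends Control {

    private final StringProperty text = new SimpleStringProperty(this, "text", "");

    public StringProperty textProperty() { return text; }
    public String getText() { return text.get(); }
    public void setText(String value) { text.set(value); }

    @Override
    protected Skin<?> createDefaultSkin() {
        return new StatusControlSkin(this);
    }
}

// The skin: builds the scene graph for the control and binds it to the control's state.
class StatusControlSkin extends SkinBase<StatusControl> {

    StatusControlSkin(StatusControl control) {
        super(control);
        Label label = new Label();
        label.textProperty().bind(control.textProperty());
        getChildren().add(label);
    }
}

The application only ever works with StatusControl; the skin can be reorganised or replaced without touching the public contract.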

From framework to platform

When I started my career as a Java developer close to 10 years ago, the industry was going through a revolutionary change. Spring, released in 2003, was quickly gaining ground and became a serious challenger to the bulky J2EE platform. Having gone through that transition, I quickly found myself in favour of the Spring framework over the J2EE platform, even though the earlier versions of Spring were very tedious when it came to declaring beans.

What happened next was the revamping of the J2EE standard, which was later renamed to JEE. Still, what dominated this era was the use of open-source frameworks over the platform proposed by Sun. This practice gives developers full control over the technologies they use, but inflates the deployment size. Slowly, as cloud applications became the norm for modern applications, I observed the trend of moving infrastructure services from framework to platform again; this time, it is motivated by Cloud applications.

Framework vs Platform

I had never heard of, or had to use, any framework in school. However, after joining the industry, it is tough to build scalable and configurable software without the help of any framework. From my understanding, any application consists of code that implements business logic and other code that provides helpers, utilities or infrastructure setup. The code that is not related to business logic, being used repeatedly in many projects, can be generalised and extracted for reuse. The output of this extraction process is a framework. To make it shorter, a framework is any code that is not related to business logic but helps to address common concerns in applications and is fit to be reused. Following this definition, MVC, Dependency Injection, Caching, JDBC Template and ORM are all frameworks.

A platform is similar to a framework in that it also helps to address common concerns in applications, but in contrast to a framework, the service is provided outside the application. Therefore, a common service endpoint can serve multiple applications at the same time. The services provided by a JEE application server or by Amazon Web Services are examples of platforms.

Comparing the two approaches, a platform is more scalable and easier to use than a framework, but it also offers less control. Because of these advantages, platforms seem to be the better approach when we build Cloud applications.

When should we use platform over framework

Moving toward platforms does not mean that developers will get rid of frameworks. Rather, platforms only complement frameworks in building applications. However, on some special occasions we have a choice between using a platform or a framework to achieve the final goal. In my personal opinion, a platform is better than a framework when the following conditions are met:

- The framework is tedious to use and maintain.
- The service has some common information to be shared among instances.
- The service can utilize additional hardware to improve performance.

In the office, we still use the Spring framework, Play framework or RoR in our applications, and this will not change any time soon. However, to move to the Cloud era, we migrated some of our existing products from internal hosting to Amazon EC2 servers. In order to make the best use of the Amazon infrastructure and improve software quality, we have done some major refactoring of our current software architecture.
Here are some platforms that we are integrating our product with:

Amazon Simple Storage Service (Amazon S3) & Amazon CloudFront

We found that Amazon CloudFront is pretty useful for boosting the average response time of our applications. Previously, we hosted most of the applications in our internal server farms, located in the UK and the US. This led to a noticeable increase in response time for customers on other continents. Fortunately, Amazon has a much larger infrastructure, with server farms built all around the world. That helps to guarantee a consistent delivery time, no matter where the customer is located.

Currently, due to the manual effort of setting up new instances for applications, we feel that the best use of Amazon CloudFront is for static content, which we host separately from the application in Amazon S3. This practice gives us a double benefit in performance: more consistent delivery time offered by the CDN, plus a separate connection count in the browser for the static content.

Amazon ElastiCache

Caching has never been easy in a cluster environment. The word “cluster” means that your object will not be stored in and retrieved from local memory. Rather, it is sent and retrieved over the network. This was quite tricky in the past because developers needed to sync the records from one node to another, and not every caching framework supports this automatically. Our best framework for distributed caching was Terracotta. Now, we have turned to Amazon ElastiCache because it is cheap and reliable and saves us the huge effort of setting up and maintaining a distributed cache. It is worth highlighting that distributed caching is never meant to replace local caching. The difference in performance suggests that we should only use distributed caching over local caching when the user needs to access real-time, temporary data.

Event Logging for Data Analytics

In the past, we used Google Analytics for analysing user behaviour but later decided to build an internal data warehouse. One of the motivations was the ability to track events from both browsers and servers. The Event Tracking system uses MongoDB as the database, as it allows us to quickly store huge amounts of events. To simplify the creation and retrieval of events, we chose JSON as the format for events. We cannot simply send events from the browser directly to the event tracking server because browsers block cross-domain requests. For this reason, Google Analytics sends its events to the server in the form of a GET request for a static resource. As we have full control over how the application is built, we chose to let the events be sent back to the application server first and routed to the event tracking server later. This approach is much more convenient and powerful.

Knowledge Portal

In the past, applications accessed data from a database or an internal file repository. However, to be able to scale better, we gathered all knowledge to build a knowledge portal. We also built a query language to retrieve knowledge from this portal. This approach adds one additional layer to the knowledge retrieval process but, fortunately for us, our system does not need to serve real-time data, so we can utilize caching to improve performance.

Conclusion

Above is some of our experience of transforming a software architecture when moving to the Cloud. Please share your experience and opinion with us.

Reference: From framework to platform from our JCG partner Nguyen Anh Tuan at the Developers Corner blog.
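To illustrate the point that distributed caching complements rather than replaces local caching, here is a minimal, hypothetical two-level lookup in Java. The RemoteCache interface stands in for whatever client your distributed cache exposes (ElastiCache is typically accessed through a Memcached or Redis client); it is an assumption for the sketch, not an API from the post.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TwoLevelCache {

    // Hypothetical abstraction over the distributed cache client.
    public interface RemoteCache {
        String get(String key);
        void put(String key, String value);
    }

    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final RemoteCache remoteCache;

    public TwoLevelCache(RemoteCache remoteCache) {
        this.remoteCache = remoteCache;
    }

    public String get(String key) {
        // 1. Cheap in-process lookup first.
        String value = localCache.get(key);
        if (value != null) {
            return value;
        }
        // 2. Fall back to the distributed cache over the network.
        value = remoteCache.get(key);
        if (value != null) {
            localCache.put(key, value);
        }
        return value;
    }
}

Real-time or shared data would bypass the local layer (or use a very short local TTL), which is the trade-off the post describes.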

How to use bloom filter to build a large in memory cache in Java

Background

A cache is an important concept for solving day-to-day software problems. Your application may perform CPU-intensive operations which you do not want to perform again and again; instead you derive the result once and cache it in memory. Sometimes the bottleneck is IO: you do not want to hit the database repeatedly, so you cache the results and update the cache only if the underlying data changes.

Similarly, there are other use cases where we need to perform a quick lookup to decide what to do with an incoming request. For example, consider the use case where you have to identify whether a URL points to a malware site or not. There could be many such URLs; if we cache all the malware URLs in memory in order to do the check instantly, that would require a lot of space to hold them. Another use case could be to identify whether a user-typed string refers to a place in the USA. Like “museum in Washington” – in this string, Washington is the name of a place in the USA. Should we keep all the places in the USA in memory and then do a lookup? How big would the cache be? Is it effective to do this without any database support?

This is where we need to move away from a basic map data structure and look for answers in a more advanced data structure like a Bloom filter. You can treat a Bloom filter like any other Java collection: you can put items in it and ask whether an item is already present or not (like a HashSet). If the Bloom filter says it does not contain the item, then the item is definitely not present. But if it says it has seen the item, that may be wrong. If we are careful enough, we can design a Bloom filter such that the probability of being wrong is controlled.

Explanation

A Bloom filter is designed as an array (A) of m bits. Initially all these bits are set to 0.

To add an item: the item is fed through k hash functions. Each hash function produces a number which can be treated as a position in the bit array (hash modulo array length gives us the index), and we set the value at that position to 1. For example, the first hash function (hash1) on item I produces a bit position x; similarly, the second and third hash functions produce positions y and z. So we set:

A[x] = A[y] = A[z] = 1

To find an item: the same process is repeated; the item is hashed through the same k hash functions. Each hash function produces an integer which is treated as a position in the array. We inspect those x, y, z positions of the bit array and see whether they are set to 1 or not. If not, then for sure no one ever tried to add this item to the Bloom filter; but if all the bits are set, it could be a false positive.

Things to tune

From the above explanation, it becomes clear that to design a good Bloom filter we need to keep track of the following things:

- Good hash functions that can generate a wide range of hash values as quickly as possible.
- The value of m (size of the bit array) is very important. If the size is too small, all the bits will be set to 1 quickly and false positives will grow rapidly.
- The number of hash functions (k) is also important, so that the values get distributed evenly.

If we can estimate how many items we are planning to keep in the Bloom filter, we can calculate the optimal values of k and m. Skipping the mathematical details, the formulas for m and k below are enough for us to write a good Bloom filter.
The formula to determine m (the number of bits in the Bloom filter) is:

m = -n * ln(p) / (ln 2)^2, where p is the desired false positive probability and n is the number of items in the filter.

The formula to determine k (the number of hash functions) is:

k = (m / n) * ln(2), where k is the number of hash functions, m is the number of bits and n is the number of items in the filter.

Hashing

Hashing is an area which affects the performance of a Bloom filter. We need to choose a hash function that is effective yet not time-consuming. The paper “Less Hashing, Same Performance: Building a Better Bloom Filter” discusses how we can use two hash functions to simulate k hash functions. First we calculate two hash values h1(x) and h2(x). Then we can use them to simulate k hash functions of the form gi(x) = h1(x) + i * h2(x), where i ranges over {1..k}.

The Google Guava library uses this trick in its Bloom filter implementation; the hashing logic is outlined here:

long hash64 = ...; // calculate a 64 bit hash value
// split it into two halves of 32 bit hash values
int hash1 = (int) hash64;
int hash2 = (int) (hash64 >>> 32);
// generate k different hash values with a simple loop
for (int i = 1; i <= numHashFunctions; i++) {
    int nextHash = hash1 + i * hash2;
}

Applications

It is clear from the mathematical formulas that to apply a Bloom filter to a problem, we need to understand the domain very well. For example, we can apply a Bloom filter to hold all the city names in the USA. This number is deterministic and we have prior knowledge, so we can determine n (the total number of elements to be added to the Bloom filter). Fix p (the probability of false positives) according to business requirements. In that case, we have a perfect cache which is memory efficient and whose lookup time is very low.

Implementations

The Google Guava library has an implementation of a Bloom filter. Check the factory method of this class, which asks for the expected number of insertions and the false positive rate.

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

// Create the Bloom filter
int expectedInsertions = ...;
double fpp = 0.03; // desired false positive probability
BloomFilter<CharSequence> bloomFilter =
    BloomFilter.create(Funnels.stringFunnel(Charset.forName("UTF-8")), expectedInsertions, fpp);

Resources:

- http://en.wikipedia.org/wiki/Bloom_filter
- http://billmill.org/bloomfilter-tutorial/
- http://www.eecs.harvard.edu/~kirsch/pubs/bbbf/esa06.pdf
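As a small, hedged usage sketch (the numbers are chosen arbitrarily), the two formulas and the Guava API above can be combined like this: size the filter for an expected item count and false positive rate, then use put and mightContain for the lookup-cache behaviour described in the post.

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class MalwareUrlFilter {

    public static void main(String[] args) {
        int n = 1_000_000;  // expected number of malware URLs (illustrative)
        double p = 0.01;    // acceptable false positive probability (illustrative)

        // For reference, the formulas from the post give roughly:
        double m = -n * Math.log(p) / (Math.log(2) * Math.log(2)); // ~9.6 million bits (~1.2 MB)
        double k = (m / n) * Math.log(2);                          // ~7 hash functions
        System.out.printf("m = %.0f bits, k = %.1f hash functions%n", m, k);

        // Guava does this sizing internally from n and p.
        BloomFilter<CharSequence> urls = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), n, p);

        urls.put("http://evil.example.com/payload");

        // "Definitely not present" answers are exact; "present" answers may be false positives.
        System.out.println(urls.mightContain("http://evil.example.com/payload")); // true
        System.out.println(urls.mightContain("http://safe.example.com/"));        // false (almost certainly)
    }
}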

Spring, REST, Ajax and CORS

Assuming you’re working on a project with a JavaScript client that makes Ajax requests to a server through REST web services, you may run into some trouble, especially if the two sides live on separate domains. Indeed, for security reasons, Ajax requests from one domain A to a different domain B are not authorized. Fortunately, the W3C introduced what is known as CORS (Cross-Origin Resource Sharing), which gives a server better control over cross-domain requests. To do that, the server must add HTTP headers to the response, indicating to the client side which origins are allowed. Moreover, if you use custom headers, your browser will not be able to read them for security reasons, so you must also specify which headers to expose. So, if in your JavaScript code you can’t retrieve your custom HTTP header value, you should read what comes next.

List of headers:

Access-Control-Allow-Origin

Access-Control-Allow-Origin: <origin> | *

The origin parameter specifies a URI that may access the resource. The browser must enforce this. For requests without credentials, the server may specify “*” as a wildcard, thereby allowing any origin to access the resource.

Access-Control-Expose-Headers

Access-Control-Expose-Headers: X-My-Header

This header lets a server whitelist headers that browsers are allowed to access. It is very useful when you add custom headers, because by adding them to the Access-Control-Expose-Headers header you can be sure that your browser will be able to read them.

Access-Control-Max-Age

Access-Control-Max-Age: <delta-seconds>

This header indicates how long the results of a preflight request can be cached.

Access-Control-Allow-Methods

Access-Control-Allow-Methods: <method>[, <method>]*

Specifies the method or methods allowed when accessing the resource. This is used in response to a preflight request.

Access-Control-Allow-Headers

Access-Control-Allow-Headers: <field-name>[, <field-name>]*

Used in response to a preflight request to indicate which HTTP headers can be used when making the actual request.
Now let’s see how to add these headers with Spring. First we need to create a class implementing the Filter interface:

package hello;

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CORSFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;

        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
        response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
        response.setHeader("Access-Control-Expose-Headers", "x-requested-with");
        chain.doFilter(req, res);
    }

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void destroy() {
    }
}

Now, we just have to add our filter to the servlet context:

@Configuration
public class ServletConfigurer implements ServletContextInitializer {

    @Override
    public void onStartup(javax.servlet.ServletContext servletContext) throws ServletException {
        javax.servlet.FilterRegistration.Dynamic corsFilter =
                servletContext.addFilter("corsFilter", new CORSFilter());
        // Map the filter to all requests so the CORS headers are always added.
        corsFilter.addMappingForUrlPatterns(null, false, "/*");
    }
}

And that’s all folks, you’re now able to make cross-domain requests and use custom HTTP headers!
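One detail worth noting (an addition, not part of the original post): browsers send a preflight OPTIONS request before “non-simple” cross-domain calls, and that request must also receive the CORS headers with a successful status. A common way to handle this is to short-circuit OPTIONS in the same filter; a minimal sketch of the adjusted doFilter:

public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    HttpServletResponse response = (HttpServletResponse) res;
    HttpServletRequest request = (HttpServletRequest) req;

    response.setHeader("Access-Control-Allow-Origin", "*");
    response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
    response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
    response.setHeader("Access-Control-Expose-Headers", "x-requested-with");

    if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
        // Answer the preflight directly; there is no need to hit the REST controller.
        response.setStatus(HttpServletResponse.SC_OK);
    } else {
        chain.doFilter(req, res);
    }
}

This variant needs an extra import of javax.servlet.http.HttpServletRequest compared to the filter above.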

HBase: Generating search click events statistics for customer behavior

In this post we will explore HBase to store customer search click event data and use it to derive customer behaviour information based on the search query string and facet filter clicks. We will cover using MiniHBaseCluster, HBase schema design, and integration with Flume using HBaseSink to store JSON data.

This is in continuation of the previous posts on:

- Customer product search clicks analytics using big data,
- Flume: Gathering customer product search clicks data using Apache Flume,
- Hive: Query customer top search query and product views count using Apache Hive,
- ElasticSearch-Hadoop: Indexing product views count and customer top search query from Hadoop to ElasticSearch,
- Oozie: Scheduling Coordinator/Bundle jobs for Hive partitioning and ElasticSearch indexing,
- Spark: Real time analytics for big data for top search queries and top product views.

We have already explored storing search click event data in Hadoop and querying it with different technologies. Here we will use HBase to achieve the same:

- HBase mini cluster setup
- HBase template using Spring Data
- HBase schema design
- Flume integration using HBaseSink
- HBaseJsonSerializer to serialize JSON data
- Query the top 10 search query strings in the last hour
- Query the top 10 search facet filters in the last hour
- Get recent search query strings for a customer in the last 30 days

HBase

HBase “is the Hadoop database, a distributed, scalable, big data store.”

HBaseMiniCluster/MiniZooKeeperCluster

To set up and start the mini cluster, check HBaseServiceImpl.java:

...
miniZooKeeperCluster = new MiniZooKeeperCluster();
miniZooKeeperCluster.setDefaultClientPort(10235);
miniZooKeeperCluster.startup(new File("taget/zookeper/dfscluster_" + UUID.randomUUID().toString()).getAbsoluteFile());
...
Configuration config = HBaseConfiguration.create();
config.set("hbase.tmp.dir", new File("target/hbasetom").getAbsolutePath());
config.set("hbase.master.port", "44335");
config.set("hbase.master.info.port", "44345");
config.set("hbase.regionserver.port", "44435");
config.set("hbase.regionserver.info.port", "44445");
config.set("hbase.master.distributed.log.replay", "false");
config.set("hbase.cluster.distributed", "false");
config.set("hbase.master.distributed.log.splitting", "false");
config.set("hbase.zookeeper.property.clientPort", "10235");
config.set("zookeeper.znode.parent", "/hbase");

miniHBaseCluster = new MiniHBaseCluster(config, 1);
miniHBaseCluster.startMaster();
...

MiniZooKeeperCluster is started on client port 10235; all client connections will be on this port. Make sure the configured HBase server ports do not collide with your other local HBase server. Here we are only starting one HBase region server in the test case.
HBase Template using Spring Data

We will be using the Spring HBase template to connect to the HBase cluster:

<hdp:hbase-configuration id="hbaseConfiguration" configuration-ref="hadoopConfiguration" stop-proxy="false" delete-connection="false" zk-quorum="localhost" zk-port="10235">
</hdp:hbase-configuration>
<bean id="hbaseTemplate" class="org.springframework.data.hadoop.hbase.HBaseTemplate" p:configuration-ref="hbaseConfiguration" />

HBase Table Schema Design

We have search click event JSON data in the following format:

{"eventid":"24-1399386809805-629e9b5f-ff4a-4168-8664-6c8df8214aa7","hostedmachinename":"192.168.182.1330","pageurl":"http://blahblah:/5","customerid":24,"sessionid":"648a011d-570e-48ef-bccc-84129c9fa400","querystring":null,"sortorder":"desc","pagenumber":3,"totalhits":28,"hitsshown":7,"createdtimestampinmillis":1399386809805,"clickeddocid":"41","favourite":null,"eventidsuffix":"629e9b5f-ff4a-4168-8664-6c8df8214aa7","filters":[{"code":"searchfacettype_color_level_2","value":"Blue"},{"code":"searchfacettype_age_level_2","value":"12-18 years"}]}

One way to handle the data is to store it directly under one column family as a single JSON column. It won’t be easy or flexible to scan the JSON data that way. Another option is to store it under one column family but with different columns; however, storing the filters data in a single column would still be hard to scan. The hybrid approach below is to divide the data into multiple column families and dynamically generate columns for the filters data. The converted schema is:

{
  "client:eventid" => "24-1399386809805-629e9b5f-ff4a-4168-8664-6c8df8214aa7",
  "client:eventidsuffix" => "629e9b5f-ff4a-4168-8664-6c8df8214aa7",
  "client:hostedmachinename" => "192.168.182.1330",
  "client:pageurl" => "http://blahblah:/5",
  "client:createdtimestampinmillis" => 1399386809805,
  "client:customerid" => 24,
  "client:sessionid" => "648a011d-570e-48ef-bccc-84129c9fa400",
  "search:querystring" => null,
  "search:sortorder" => desc,
  "search:pagenumber" => 3,
  "search:totalhits" => 28,
  "search:hitsshown" => 7,
  "search:clickeddocid" => "41",
  "search:favourite" => null,
  "filters:searchfacettype_color_level_2" => "Blue",
  "filters:searchfacettype_age_level_2" => "12-18 years"
}

The following three column families are created:

- client: stores client- and customer-specific information for the event.
- search: search information related to the query string and pagination is stored here.
- filters: to support additional facets in the future and allow more flexible scanning of the data, the column names are dynamically created based on the facet name/code and the column value is stored as the facet filter value.

To create the HBase table:

...
TableName name = TableName.valueOf("searchclicks");
HTableDescriptor desc = new HTableDescriptor(name);
desc.addFamily(new HColumnDescriptor(HBaseJsonEventSerializer.COLUMFAMILY_CLIENT_BYTES));
desc.addFamily(new HColumnDescriptor(HBaseJsonEventSerializer.COLUMFAMILY_SEARCH_BYTES));
desc.addFamily(new HColumnDescriptor(HBaseJsonEventSerializer.COLUMFAMILY_FILTERS_BYTES));
try {
    HBaseAdmin hBaseAdmin = new HBaseAdmin(miniHBaseCluster.getConf());
    hBaseAdmin.createTable(desc);
    hBaseAdmin.close();
} catch (IOException e) {
    throw new RuntimeException(e);
}
...

The relevant column families have been added at table creation time to support the new data structure. In general, it is recommended to keep the number of column families as small as possible; keep in mind how you structure your data based on the usage.
Based on the above examples, we have kept the scan scenarios as follows:

- Scan the client family if you want to retrieve customer or client information based on total traffic information for the website.
- Scan the search information to see what free-text searches the end customers are looking for which are not met by the navigational search. See on which page the relevant product was clicked and whether you need to apply boosting to push the product higher.
- Scan the filters family to see how the navigational search is working for you. Is it giving end customers the products they are looking for? See which facet filters are clicked more and whether you need to push them up a bit in the ordering so they are easily available to the customer.
- Scans across families should be avoided; use the row key design to reach specific customer information instead.

Row key design info

In our case the row key design is based on customerId-timestamp-randomuuid. As the row key is the same for all the column families, we can use a Prefix Filter to scan only the rows relevant to a specific customer.

final String eventId = customerId + "-" + searchQueryInstruction.getCreatedTimeStampInMillis() + "-" + searchQueryInstruction.getEventIdSuffix();
...
byte[] rowKey = searchQueryInstruction.getEventId().getBytes(CHARSET_DEFAULT);
...
# 24-1399386809805-629e9b5f-ff4a-4168-8664-6c8df8214aa7

Each column family here will have the same row key, and you can use a prefix filter to scan rows only for a particular customer.

Flume Integration

HBaseSink is used to store the search event data directly in HBase. Check the details in FlumeHBaseSinkServiceImpl.java:

...
channel = new MemoryChannel();
Map<String, String> channelParamters = new HashMap<>();
channelParamters.put("capacity", "100000");
channelParamters.put("transactionCapacity", "1000");
Context channelContext = new Context(channelParamters);
Configurables.configure(channel, channelContext);
channel.setName("HBaseSinkChannel-" + UUID.randomUUID());

sink = new HBaseSink();
sink.setName("HBaseSink-" + UUID.randomUUID());
Map<String, String> paramters = new HashMap<>();
paramters.put(HBaseSinkConfigurationConstants.CONFIG_TABLE, "searchclicks");
paramters.put(HBaseSinkConfigurationConstants.CONFIG_COLUMN_FAMILY, new String(HBaseJsonEventSerializer.COLUMFAMILY_CLIENT_BYTES));
paramters.put(HBaseSinkConfigurationConstants.CONFIG_BATCHSIZE, "1000");
paramters.put(HBaseSinkConfigurationConstants.CONFIG_SERIALIZER, HBaseJsonEventSerializer.class.getName());

Context sinkContext = new Context(paramters);
sink.configure(sinkContext);
sink.setChannel(channel);

sink.start();
channel.start();
...

The client column family is used only for validation by HBaseSink.

HBaseJsonEventSerializer

A custom serializer is created to store the JSON data:

public class HBaseJsonEventSerializer implements HBaseEventSerializer {
    public static final byte[] COLUMFAMILY_CLIENT_BYTES = "client".getBytes();
    public static final byte[] COLUMFAMILY_SEARCH_BYTES = "search".getBytes();
    public static final byte[] COLUMFAMILY_FILTERS_BYTES = "filters".getBytes();
    ...
    byte[] rowKey = searchQueryInstruction.getEventId().getBytes(CHARSET_DEFAULT);
    Put put = new Put(rowKey);

    // Client info
    put.add(COLUMFAMILY_CLIENT_BYTES, "eventid".getBytes(), searchQueryInstruction.getEventId().getBytes());
    ...
    if (searchQueryInstruction.getFacetFilters() != null) {
        for (SearchQueryInstruction.FacetFilter filter : searchQueryInstruction.getFacetFilters()) {
            put.add(COLUMFAMILY_FILTERS_BYTES, filter.getCode().getBytes(), filter.getValue().getBytes());
        }
    }
    ...
For further details, check HBaseJsonEventSerializer.java. The event body is converted from JSON to a Java bean, and the data is then processed to be serialized into the relevant column families.

Query Raw Cell Data

To query the raw cell data:

...
Scan scan = new Scan();
scan.addFamily(HBaseJsonEventSerializer.COLUMFAMILY_CLIENT_BYTES);
scan.addFamily(HBaseJsonEventSerializer.COLUMFAMILY_SEARCH_BYTES);
scan.addFamily(HBaseJsonEventSerializer.COLUMFAMILY_FILTERS_BYTES);
List<String> rows = hbaseTemplate.find("searchclicks", scan, new RowMapper<String>() {
    @Override
    public String mapRow(Result result, int rowNum) throws Exception {
        return Arrays.toString(result.rawCells());
    }
});
for (String row : rows) {
    LOG.debug("searchclicks table content, Table returned row: {}", row);
}

Check HBaseServiceImpl.java for details. The data is stored in HBase in the following format:

searchclicks table content, Table returned row: [84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:createdtimestampinmillis/1404832918166/Put/vlen=13/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:customerid/1404832918166/Put/vlen=2/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:eventid/1404832918166/Put/vlen=53/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:hostedmachinename/1404832918166/Put/vlen=16/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:pageurl/1404832918166/Put/vlen=19/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/client:sessionid/1404832918166/Put/vlen=36/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/filters:searchfacettype_product_type_level_2/1404832918166/Put/vlen=7/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/search:hitsshown/1404832918166/Put/vlen=2/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/search:pagenumber/1404832918166/Put/vlen=1/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/search:querystring/1404832918166/Put/vlen=13/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/search:sortorder/1404832918166/Put/vlen=3/mvcc=0, 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923/search:totalhits/1404832918166/Put/vlen=2/mvcc=0]

Query Top 10 Search Query Strings in the Last Hour

To query only the search string, we only need the search column family. To scan within a time range we could use the createdtimestampinmillis column of the client column family, but it will be an expensive scan.

...
Scan scan = new Scan();
scan.addColumn(HBaseJsonEventSerializer.COLUMFAMILY_CLIENT_BYTES, Bytes.toBytes("createdtimestampinmillis"));
scan.addColumn(HBaseJsonEventSerializer.COLUMFAMILY_SEARCH_BYTES, Bytes.toBytes("querystring"));
List<String> rows = hbaseTemplate.find("searchclicks", scan, new RowMapper<String>() {
    @Override
    public String mapRow(Result result, int rowNum) throws Exception {
        String createdtimestampinmillis = new String(result.getValue(HBaseJsonEventSerializer.COLUMFAMILY_CLIENT_BYTES, Bytes.toBytes("createdtimestampinmillis")));
        byte[] value = result.getValue(HBaseJsonEventSerializer.COLUMFAMILY_SEARCH_BYTES, Bytes.toBytes("querystring"));
        String querystring = null;
        if (value != null) {
            querystring = new String(value);
        }
        if (new DateTime(Long.valueOf(createdtimestampinmillis)).plusHours(1).compareTo(new DateTime()) == 1 && querystring != null) {
            return querystring;
        }
        return null;
    }
});
...
// sort the keys based on the counts collection of the query strings
List<String> sortedKeys = Ordering.natural().onResultOf(Functions.forMap(counts)).immutableSortedCopy(counts.keySet());
...

Query Top 10 Search Facet Filters in the Last Hour

Based on the dynamic column creation, you can scan the data to return the most clicked facet filters. The dynamic columns will be based on your facet codes, which can be any of:

#searchfacettype_age_level_1
#searchfacettype_color_level_2
#searchfacettype_brand_level_2
#searchfacettype_age_level_2

for (String facetField : SearchFacetName.categoryFacetFields) {
    scan.addColumn(HBaseJsonEventSerializer.COLUMFAMILY_FILTERS_BYTES, Bytes.toBytes(facetField));
}

To retrieve the values:

...
hbaseTemplate.find("searchclicks", scan, new RowMapper<String>() {
    @Override
    public String mapRow(Result result, int rowNum) throws Exception {
        for (String facetField : SearchFacetName.categoryFacetFields) {
            byte[] value = result.getValue(HBaseJsonEventSerializer.COLUMFAMILY_FILTERS_BYTES, Bytes.toBytes(facetField));
            if (value != null) {
                String facetValue = new String(value);
                List<String> list = columnData.get(facetField);
                if (list == null) {
                    list = new ArrayList<>();
                    list.add(facetValue);
                    columnData.put(facetField, list);
                } else {
                    list.add(facetValue);
                }
            }
        }
        return null;
    }
});
...

You will get the full list of all facets; you can process the data further to count the top facets and order them. For full details check HBaseServiceImpl.findTopTenSearchFiltersForLastAnHour.

Get Recent Search Query Strings for a Customer

If we need to check what a customer is currently looking for, we can create a scan across the two column families “client” and “search”. Another way is to design the row key so that it gives you the relevant information. In our case the row key design is based on customerId-timestamp-randomuuid. As the row key is the same for all the column families, we can use a Prefix Filter to scan only the rows relevant to a specific customer.

final String eventId = customerId + "-" + searchQueryInstruction.getCreatedTimeStampInMillis() + "-" + searchQueryInstruction.getEventIdSuffix();
...
byte[] rowKey = searchQueryInstruction.getEventId().getBytes(CHARSET_DEFAULT);
...
# 84-1404832902498-7965306a-d256-4ddb-b7a8-fd19cdb99923

To scan the data for a particular customer:

...
Scan scan = new Scan();
scan.addColumn(HBaseJsonEventSerializer.COLUMFAMILY_SEARCH_BYTES, Bytes.toBytes("customerid"));
Filter filter = new PrefixFilter(Bytes.toBytes(customerId + "-"));
scan.setFilter(filter);
...

For details check HBaseServiceImpl.getAllSearchQueryStringsByCustomerInLastOneMonth.

Hope this helps you get hands-on with HBase schema design and handling data.

Reference: HBase: Generating search click events statistics for customer behavior from our JCG partner Jaibeer Malik at the Jai’s Weblog blog.
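The post leaves the actual counting and trimming of the top 10 query strings to the reader (the counts map is not shown). As a hedged sketch of that missing step, assuming the RowMapper above has already produced a list of query strings, the counting and ordering could look like this; the class and method names are assumptions, not from the original code.

import com.google.common.base.Functions;
import com.google.common.collect.Lists;
import com.google.common.collect.Ordering;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TopQueries {

    // Counts occurrences and returns the ten most frequent query strings (ties broken arbitrarily).
    public static List<String> topTen(List<String> queryStrings) {
        Map<String, Integer> counts = new HashMap<>();
        for (String query : queryStrings) {
            if (query != null) {
                Integer current = counts.get(query);
                counts.put(query, current == null ? 1 : current + 1);
            }
        }
        // Same Guava idiom as in the post: order keys by their count, ascending.
        List<String> sortedKeys = Ordering.natural()
                .onResultOf(Functions.forMap(counts))
                .immutableSortedCopy(counts.keySet());
        // The highest counts sit at the end of the ascending list, so reverse and trim.
        List<String> reversed = Lists.reverse(sortedKeys);
        return reversed.subList(0, Math.min(10, reversed.size()));
    }
}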

Abstraction in Java – The ULTIMATE Tutorial

Table Of Contents

1. Introduction
2. Interfaces
   2.1. Defining Interfaces
   2.2. Implementing Interfaces
   2.3. Using Interfaces
3. Abstract Classes
   3.1. Defining Abstract Classes
   3.2. Extending Abstract Classes
   3.3. Using Abstract Classes
4. A Worked Example – Payments System
   4.1. The Payee Interface
   4.2. The Payment System
   4.3. The Employee Classes
   4.4. The Application
   4.5. Handling Bonuses
   4.6. Contracting Companies
   4.7. Advanced Functionality: Taxation
5. Conclusion

1. Introduction

In this tutorial we will give an introduction to Abstraction in Java and define a simple Payroll System using Interfaces, Abstract Classes and Concrete Classes. There are two levels of abstraction in Java – Interfaces, used to define expected behaviour, and Abstract Classes, used to define incomplete functionality. We will now look at these two different types of abstraction in detail.

2. Interfaces

An interface is like a contract. It is a promise to provide certain behaviours, and all classes which implement the interface guarantee to also implement those behaviours. To define the expected behaviours the interface lists a number of method signatures. Any class which uses the interface can rely on those methods being implemented in the runtime class which implements the interface. This allows anyone using the interface to know what functionality will be provided without having to worry about how that functionality will actually be achieved. The implementation details are hidden from the client; this is a crucial benefit of abstraction.

2.1. Defining Interfaces

You can use the keyword interface to define an interface:

public interface MyInterface {

    void methodA();

    int methodB();

    String methodC(double x, double y);

}

Here we see an interface called MyInterface defined; note that you should use the same case conventions for Interfaces that you do for Classes. MyInterface defines 3 methods, each with different return types and parameters. You can see that none of these methods has a body; when working with interfaces we are only interested in defining the expected behaviour, not its implementation.

Note: Java 8 introduced the ability to create a default implementation for interface methods; however, we will not cover that functionality in this tutorial.

Interfaces can also contain constants; fields declared in an interface are implicitly public, static and final, so they hold fixed values rather than per-instance state:

public interface MyInterfaceWithState {

    int SOME_NUMBER = 42;

    void methodA();

}

All the methods in an interface are public by default, and in fact you can’t create a method in an interface with an access level other than public.

2.2. Implementing Interfaces

Now that we have defined an interface, we want to create a class which will provide the implementation details of the behaviour we have defined. We do this by writing a new class and using the implements keyword to tell the compiler which interface this class should implement.

public class MyClass implements MyInterface {

    public void methodA() {
        System.out.println("Method A called!");
    }

    public int methodB() {
        return 42;
    }

    public String methodC(double x, double y) {
        return "x = " + x + ", y = " + y;
    }

}

We took the method signatures which we defined in MyInterface and gave them bodies to implement them. We just did some arbitrary silliness in the implementations, but it’s important to note that we could have done anything in those bodies, as long as they satisfied the method signatures. We could also create as many implementing classes as we want, each with different implementation bodies for the methods from MyInterface.
We implemented all the methods from MyInterface in MyClass; if we had failed to implement any of them, the compiler would have given an error. This is because the fact that MyClass implements MyInterface means that MyClass is guaranteeing to provide an implementation for each of the methods from MyInterface. This lets any clients using the interface rely on the fact that at runtime there will be an implementation in place for the method they want to call, guaranteed.

2.3. Using Interfaces

To call the methods of the interface from a client we just need to use the dot (.) operator, just like with the methods of classes:

MyInterface object1 = new MyClass();
object1.methodA(); // Guaranteed to work

We see something unusual above: instead of something like MyClass object1 = new MyClass(); (which is perfectly acceptable) we declare object1 to be of type MyInterface. This works because MyClass is an implementation of MyInterface; wherever we want to call a method defined in MyInterface, we know that MyClass will provide the implementation. object1 is a reference to any runtime object which implements MyInterface; in this case it’s an instance of MyClass. If we tried to do MyInterface object1 = new MyInterface() we’d get a compiler error, because you can’t instantiate an interface, which makes sense because there are no implementation details in the interface – no code to execute.

When we make the call to object1.methodA() we are executing the method body defined in MyClass, because the runtime type of object1 is MyClass, even though the reference is of type MyInterface. We can only call methods on object1 that are defined in MyInterface; for all intents and purposes we can refer to object1 as being of type MyInterface even though the runtime type is MyClass. In fact, if MyClass defined another method called methodD(), we couldn’t call it on object1, because the compiler only knows that object1 is a reference to a MyInterface, not that it is specifically a MyClass. This important distinction is what lets us create different implementation classes for our interfaces without worrying which specific one is being called at runtime.

Take the following interface:

public interface OneMethodInterface {

    void oneMethod();

}

It defines one void method which takes no parameters. Let’s implement it:

public class ClassA implements OneMethodInterface {

    public void oneMethod() {
        System.out.println("Runtime type is ClassA.");
    }

}

We can use this in a client just like before:

OneMethodInterface myObject = new ClassA();
myObject.oneMethod();

Output:

...
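As a hedged illustration of the point above (the same interface reference can drive different runtime behaviour), a second implementation could be swapped in without the client code changing shape. ClassB is an assumed name, not from the original tutorial.

public class ClassB implements OneMethodInterface {

    public void oneMethod() {
        System.out.println("Runtime type is ClassB.");
    }

}

// Client code stays identical; only the constructed class changes.
OneMethodInterface myObject = new ClassB();
myObject.oneMethod(); // prints "Runtime type is ClassB."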

Jersey SSE capability in Glass Fish 4.0.1

GlassFish bundles different Reference Implementations for various Java EE specifications, e.g. Weld for CDI, Mojarra for JSF, Tyrus for WebSocket, Jersey for JAX-RS. GlassFish 4.0.1 is in the pipeline and slated to cover updates for many of the components/modules, which include both new features and bug fixes. The Server-Sent Events feature in Jersey will be supported with GlassFish 4.0.1. Let’s try and test out this feature:

- Download the latest GlassFish build from here.
- Unzip the contents of the ZIP installer and configure it in your IDE (I am using NetBeans). Note: I am using JDK 8. Remember to configure GlassFish to use the same.
- Make sure you include the highlighted JARs (below) in your classpath. These are available under GF_INSTALL/glassfish/modules.

Now, the sample code for the Jersey SSE feature demonstration. It is relatively simple. There are three primary classes involved:

AResource.java

It serves as a producer of a stream of events and is modeled as a JAX-RS resource which emits events when invoked with a GET method. The returned event stream is abstracted in the form of org.glassfish.jersey.media.sse.EventOutput, onto which an org.glassfish.jersey.media.sse.OutboundEvent object is written. The OutboundEvent contains the actual event data.

ATestServlet.java

This class serves as a consumer of the events produced by the AResource.java class. It is a simple JAX-RS client which sends a GET request to the published JAX-RS resource, reads the org.glassfish.jersey.client.ChunkedInput and extracts the actual event data from the org.glassfish.jersey.media.sse.InboundEvent instance.

RESTConfig.java

As is commonly the case with JAX-RS, this serves as the bootstrap class.

To test the SSE functionality from server (producer) to client (consumer), deploy your application and access the servlet at http://you_gfish_ip:port/JerseySSE/SSETest. You should see the corresponding logs.

About FishCAT – the GlassFish Community Acceptance Testing program: everyone is welcome to participate! More on Jersey and Server-Sent Events here.

This was a rather quick one… not bad! Now you have time to go and do something more useful! Cheers!!!

Reference: Jersey SSE capability in Glass Fish 4.0.1 from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
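Since the post describes AResource only in prose, here is a minimal hedged sketch of what such an SSE resource can look like with the Jersey API mentioned above; the path, event name and payloads are assumptions, not the author's code.

import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.OutboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import java.io.IOException;

@Path("events")
public class AResource {

    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput streamEvents() {
        final EventOutput eventOutput = new EventOutput();
        new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 5; i++) {
                        // Each OutboundEvent carries one chunk of event data to the client.
                        OutboundEvent event = new OutboundEvent.Builder()
                                .name("message")
                                .data(String.class, "event " + i)
                                .build();
                        eventOutput.write(event);
                        Thread.sleep(1000);
                    }
                } catch (IOException | InterruptedException e) {
                    throw new RuntimeException("Error writing SSE event", e);
                } finally {
                    try {
                        eventOutput.close();
                    } catch (IOException ignored) {
                        // nothing sensible to do here
                    }
                }
            }
        }).start();
        return eventOutput;
    }
}

On the consumer side, the ChunkedInput/InboundEvent reading the post refers to is typically obtained by registering SseFeature on a JAX-RS Client and reading the event stream from the GET response.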

Develop, test and deploy standalone apps on CloudBees

CloudBees is a cloud platform providing a repository, a CI service (Jenkins) and servers for your apps – so everything you need to develop, test and deploy. There are many options: the repository can be Git or SVN, and for the server you can choose Jetty, Tomcat, Glassfish, JBoss, Wildfly etc. It is also possible to run standalone applications, which are provided with a port number, so you can start your own server. That’s the case we’ll cover here.

spray.io is a Scala framework for web apps. It allows you to create standalone web apps (starting their own server, spray-can) or somewhat limited .war ones (spray-servlet), which you can deploy on a JEE server like Glassfish, JBoss etc. We are going to use the standalone variant here. You can clone the app from GitHub. Let’s take a quick look at it now.

The app

Boot

The Boot file is a Scala App, so it’s like a Java class with a main method. It’s runnable. It creates the Service actor, which handles all the HTTP requests. It also reads the port number from the app.port system property and binds the service to the host and port. app.port is provided by CloudBees; if you want to run the app locally, you need to set it, e.g. with the JVM command-line option -Dapp.port=8080.

Service

Service has the MyService trait, which handles routing to the empty path only. Yes, the app is not very complicated!

Buildfile

The build.gradle file is a bit more interesting. Let’s start from its end.

- The mainClassName attribute is set to the Scala App. This is the class that is going to be run when you run the app locally from the command line with gradlew run.
- applicationDefaultJvmArgs is set to -Dapp.port=8080, which is also necessary for running locally from Gradle. This way we set the port which Service is going to be bound to.
- jar.archiveName is used to set the generated .jar name. Without it, the name depends on the project directory name.

You can run the application by issuing gradlew run (make sure the gradlew file is executable). When it’s running, you can point your browser to http://localhost:8080 and you should see “Say hello to spray-routing on spray-can!” Nothing fancy, sorry.

There is also a “cb” task defined for Gradle. If you issue gradlew cb, it builds a zip file with all the dependency .jars and szjug-sprayapp-1.0.jar in its root. This layout is necessary for CloudBees standalone apps.

Deploy to CloudBees

First you need to create an account on CloudBees. If you have one, download the CloudBees SDK – so you can run commands from your command line. On a Mac, I prefer the brew install, but you are free to choose your own way. When installed, run the bees command. When run for the first time, it asks for your login/password, so you don’t need to provide it every time you want to use bees.

Build the .zip we’ll deploy to the cloud. Go into the app directory (szjug-sprayapp) and issue the gradlew cb command. This command not only creates the .zip file, it also prints the list of .jars that is useful to pass to the bees command as the classpath.
Deploy the application with the following command run from the szjug-sprayapp directory:

bees app:deploy -a spray-can -t java -R class=pl.szjug.sprayapp.Boot -R classpath=spray-can-1.3.1.jar:spray-routing-1.3.1.jar:spray-testkit-1.3.1.jar:akka-actor_2.10-2.3.2.jar:spray-io-1.3.1.jar:spray-http-1.3.1.jar:spray-util-1.3.1.jar:scala-library-2.10.3.jar:spray-httpx-1.3.1.jar:shapeless_2.10-1.2.4.jar:akka-testkit_2.10-2.3.0.jar:config-1.2.0.jar:parboiled-scala_2.10-1.1.6.jar:mimepull-1.9.4.jar:parboiled-core-1.1.6.jar:szjug-sprayapp-1.0.jar build/distributions/szjug-sprayapp-1.0.zip

And here is an abbreviated version for readability:

bees app:deploy -a spray-can -t java -R class=pl.szjug.sprayapp.Boot -R classpath=...:szjug-sprayapp-1.0.jar build/distributions/szjug-sprayapp-1.0.zip

spray-can is the application name and -t java is the application type. The -R options are CloudBees properties, like the class to run and the classpath to use. The files for the classpath are helpfully printed when Gradle runs the cb task, so you just need to copy & paste them.

And that’s it! Our application is running on the CloudBees server. It’s accessible at the URL shown in the CloudBees console.

Use CloudBees services

The app is deployed on CloudBees, but is that all? As I mentioned, we could also use a Git repository and Jenkins. Let’s do that now.

Repository (Git)

Create a new Git repository in your CloudBees account. Choose “Repos” on the left, “Add Repository”… it’s all pretty straightforward. Name it “szjug-app-repo” and remember it should be Git.

Next add this repository as a remote to your local Git repo. On the repositories page of your CloudBees console there is a very helpful cheat sheet on how to do it. First add the Git remote repository; let’s name it cb:

git remote add cb ssh://git@git.cloudbees.com/pawelstawicki/szjug-app-repo.git

Then push your commits there:

git push cb master

Now you have your code on CloudBees.

CI build server (Jenkins)

It’s time to configure the app build on the CI server. Go to “Builds”. This is where Jenkins lives. Create a new “free-style” job.

Set your Git repository on the job, so that Jenkins always checks out a fresh version of the code. You’ll need the repository URL; you can take it from the “Repos” page and set it there.

The next thing to set up is the Gradle task. Add a build step of type “Invoke gradle script”. Select “Use Gradle Wrapper” – this way you can use the Gradle version provided with the project. Set “cb” as the Gradle task to run.

Well, that’s all you need to have the app built. But we want to deploy it, don’t we? Add the post-build action “Deploy applications”. Enter the Application ID (spray-can in our case; the region should change automatically). This way we tell Jenkins where to deploy. It also needs to know what to deploy, so enter build/distributions/szjug-app-job-*.zip as the “Application file”.

Because you deployed the application earlier from the command line, settings like application type, main class, classpath etc. are already there, and you don’t need to provide them again.

It might also be useful to keep the zip file from each build, so we can archive it. Just add the post-build action “Archive the artifacts” and set the same zip file.

Ok, that’s all for the build configuration on Jenkins. Now you can hit the “Build now” link and the build should be added to the queue. When it is finished, you can see the logs, status etc. But what’s more important, the application should be deployed and accessible to the whole world. You can now change something in it, hit “Build now”, and after it’s finished check whether the changes are applied.
Tests

You probably also noticed there is a test attached. You can run it with gradlew test. It’s a specs2 test, with the MyService trait mixed in so we have access to myRoute, and Specs2RouteTest so we have access to the spray.io testing facilities. @RunWith(classOf[JUnitRunner]) is necessary to run the tests in Gradle.

Now that we have tests, we’d like to see the test results. That’s another post-build step in Jenkins. Press “Add post-build action” -> “Publish JUnit test result report”. Gradle doesn’t put test results where Maven does, so you’ll need to specify the location of the report files. When that’s done, the next build should show the test results.

Trigger build job

You now have a build job able to build, test and deploy the application. However, this build is going to run only when you trigger it by hand. Let’s make it run every day, and after every change pushed to the repository.

Summary

So now you have everything necessary to develop an app: a Git repository, a continuous integration build system, and infrastructure to deploy the app to (actually, also continuously). Think of your own app, and… happy devopsing!

Reference: Develop, test and deploy standalone apps on CloudBees from our JCG partner Pawel Stawicki at the Java, the Programming, and Everything blog.

Examining Red Hat JBoss BRMS deployment architectures for rules and events (part I)

(Article guest authored together with John Hurlocker, Senior Middleware Consultant at Red Hat in North America)

In this week’s tips & tricks we will be slowing down and taking a closer look at possible Red Hat JBoss BRMS deployment architectures. When we talk about deployment architectures we are referring to the options you have for deploying a rules and/or events project in your enterprise. This is the actual runtime architecture that you need to plan for at the start of your design phases, determining for your enterprise and infrastructure what the best way would be to deploy your upcoming application. It will also most likely have an effect on how you design the actual application that you want to build, so being aware of your options should help make your projects a success.

This will be a multi-part series that introduces the deployment architectures in phases, starting this week with the first two architectures.

The possibilities

A rule administrator or architect works with the application team(s) to design the runtime architecture for rules, and depending on the organization’s needs the architecture could be any one of the following architectures or a hybrid of the designs below. In this series we will present four different deployment architectures and discuss one design-time architecture, providing the pros and cons for each one so that you can evaluate them for your own needs.

The basic components in these architectures, shown in the accompanying illustrations, are:

- JBoss BRMS server
- Rules developer / business analyst
- Version control (Git)
- Deployment servers (JBoss EAP)
- Clients using your application

Illustration 1: Rules in application

Rules deployed in application

The first architecture is the most basic and static of all the options you have to deploy rules and events in your enterprise architecture. A deployable rule package (e.g. a JAR) is included in your application’s deployable artifact (e.g. an EAR or WAR). In this architecture the JBoss BRMS server acts as a repository to hold your rules and as a design-time tool. Illustration 1 shows how the JBoss BRMS server is, and remains, completely disconnected from the deployment or runtime environment.

Pros

- Typically better performance than using a rule execution server, since the rule execution occurs within the same JVM as your application.

Cons

- You do not have the ability to push rule updates to production applications:
  - a rule change requires a complete rebuild of the application,
  - and a complete re-test of the application (Dev – QA – PROD).

Illustration 2: KieScanner deployment

Rules scanned from application

A second architecture, a slight modification of the previous one, adds a scanner to your application that monitors for new rule and event updates, pulling them in as they are deployed into your enterprise architecture. The JBoss BRMS API contains a KieScanner that monitors the rules repository for new rule package versions. Once a new version is available, it is picked up by the KieScanner and loaded into your application, as shown in Illustration 2. The Cool Store demo project provides an example that demonstrates the usage of the JBoss BRMS KieScanner, with an example implementation showing how to scan your rule repository for the latest freshly built package.
Pros

- No need to restart your application servers:
  - in some organizations the deployment process for applications can be very lengthy,
  - and this allows you to push rule updates to your application(s) in real time.

Cons

- You need to create a deployment process for testing the rule updates with the application(s):
  - there is a risk of pushing incorrect logic into the application(s) if that process does not test thoroughly.

Next up

Next time we will dig into the two remaining deployment architectures, which provide you with an Execution Server deployment and a hybrid deployment model that leverages several elements in a single architecture. Finally, we will cover a design-time architecture for your teams to use while crafting and maintaining the rules and events in your enterprise.

Reference: Examining Red Hat JBoss BRMS deployment architectures for rules and events (part I) from our JCG partner Eric Schabell at the Eric Schabell’s blog blog.
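For the second architecture, here is a minimal hedged sketch of wiring up a KieScanner with the public KIE API; the group/artifact coordinates and the polling interval are assumptions, not taken from the article.

import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RuleScannerBootstrap {

    public static void main(String[] args) {
        KieServices kieServices = KieServices.Factory.get();

        // Coordinates of the rule package (kjar) published from the BRMS repository; illustrative values.
        ReleaseId releaseId = kieServices.newReleaseId("com.example.rules", "coolstore-rules", "LATEST");
        KieContainer kieContainer = kieServices.newKieContainer(releaseId);

        // The scanner polls the repository and swaps in new rule package versions as they appear.
        KieScanner kieScanner = kieServices.newKieScanner(kieContainer);
        kieScanner.start(10_000L); // poll every 10 seconds (arbitrary choice)

        // Sessions created from the container use the newest rules that have been scanned in.
        KieSession kieSession = kieContainer.newKieSession();
        // ... insert facts and fire rules as usual ...
        kieSession.dispose();
    }
}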