How the Cloud makes Windows irrelevant

Windows has been running on the majority of PCs for many years now, and Microsoft successfully translated its client monopoly into a stronghold server position. However, times are changing, and it is no surprise that the new CEO of Microsoft is a cloud expert. The cloud can make Windows irrelevant. Why?

On the cloud you no longer use a client-server architecture. HTML5 has come a long way and is close to feature parity with most Windows GUI applications. HTML5 means that you can target mobile, tablet and PC without installation or client-side version management. This is how Salesforce, Google Apps, Workday and other SaaS solutions became enterprise successes seemingly overnight. Mobile first means Android and iOS first.

The cloud is also bringing deeper changes. Innovation has never been cheaper. You don't need to invest in anything: hardware is almost free, software solutions are just an API away, storage is infinite, distribution is global. Mobile game companies were the first to experience overnight successes, launching 2 servers on Monday and managing 5000 by Sunday. The next frontier will be business software: small and nimble SaaS players will become overnight successes. Their software stacks will be different, however. SQL Server, and even worse Oracle and DB2 database clusters, are no longer enough. They don't scale technically. They don't make sense financially. They are extremely hard to manage compared to nimble alternatives.

Windows on the server is in no better shape. Docker and CoreOS are promising lightweight, fast scale-out. Ubuntu's Juju is showing instant integration everywhere. The operating system is fast becoming a liability instead of an asset. Restarts of several minutes to apply an upgrade are not in line with 24×7, 100% SLAs. In a time where each container tries to be as small and efficient as possible, and upgrades need to be transactional and measured in microseconds, Windows is no longer the platform of choice.
The cloud gave Ubuntu, an open source Linux operating system, up to 70% market share, and growing. Remember what happened to Netscape and RealPlayer the moment Windows reached 80-90% penetration. So what should Microsoft do? The first thing is to acknowledge the new reality and embrace and extend Linux. Many companies would love to migrate their .Net solutions to efficient Linux containers. Office on Linux desktops is overdue. Why not give governments open source desktop solutions? They will gladly pay millions to boost their national pride. China did. Why would India, Russia, France, Germany, Brazil, Spain, Italy, Turkey, Saudi Arabia, Israel and the UK be different? Active Directory, Sharepoint and Exchange will lose market dominance if they do not embrace Linux. Windows phones with a Linux core could actually run Android apps and would level the playing field. Linux developers have been secretly jealous of how easy it is to build great looking GUI apps. A visual designer for .Net on Linux, and, to be disruptive, for Go, Rails and Python, would win developers' mind share. IoT and embedded solutions built on a Microsoft Linux kernel would make Android sweat. Microsoft open source solutions, in which you get the platform for free but developers can resell apps and extensions, would deliver Microsoft revenue shares, support and customisation revenues. Pivotal is showing how to do just this: instant SaaS/PaaS enablement and integration solutions are hot, but CloudFoundry is not a Windows play.

All of this is unlikely to thrive if Microsoft keeps its current internal structures, and simply buying some Linux thought leaders is unlikely to be enough. Microsoft could take inspiration from EMC, where most people don't know that RSA, VMware and Pivotal all flow into the same pockets: consulting services and sales from one company are rewarded for selling products owned by the group.
Office, Cloud, Phone, IoT and Business Software as independent units that can each determine how they interact with the Windows and Linux business units would accelerate innovation. Let's see if Redmond is up for change. The new CEO at least seems to have vastly improved the chances of change...

Reference: How the Cloud makes Windows irrelevant from our JCG partner Maarten Ectors at the Telruptive blog.

How to create MIDlet in J2ME

Overview

Java's mobile application platform is called J2ME. When we work in the mobile technology field, J2ME is usually the first thing to consider: it lets us develop a mobile application and install it on a device through a JAD or JAR file. In recent years the largest enhancement in mobile phone development was the introduction of Java-hosted MIDlets. MIDlets are executed on a Java virtual machine that abstracts the underlying hardware and lets developers create applications that run on the wide variety of devices that support the Java runtime. Unfortunately, this convenience comes at the price of restricted access to the device hardware. In mobile development it was considered normal for third-party applications to receive different hardware access and execution rights from those given to native applications written by the phone manufacturers. The introduction of the Java MIDlet expanded developers' audiences, but the lack of low-level hardware access and the sandboxed execution meant that most mobile applications were similar to desktop programs or web sites designed to render on a smaller screen. In this article MIDlet creation will be discussed in detail.

Introduction

J2ME stands for Java 2, Micro Edition. It is a slimmed-down version of Java targeted at devices with limited processing power, limited storage, and intermittent or fairly low-bandwidth network connections. Such devices include the mobile phones, pagers and wireless devices that we use in daily life. MIDlets are the mobile phone counterpart of applets: they run in a protected sandbox, and that sandbox is extremely limited. MIDP 1.0 is currently found on most Java-capable phones and is fairly good.
As an example, the KVM does not yet support floating-point numbers, and MIDlets written for MIDP 1.0 cannot access anything outside of the sandbox without proprietary APIs (Application Programming Interfaces) from phone makers. So we can put our dreams of developing the ultimate MIDlet with hooks into each part of our phone's operating system on the back burner. To find out exactly how limited MIDP 1.0 is, you should probably read the spec; after that you might also want to check out MIDP 2.0 and its upgrades. For the time being we are going to write our first MIDlet: a "Hello MIDlet" application.

The MIDlet Lifecycle

Every system has a lifecycle, and through it we can understand the system's step-by-step behavior. Mobile devices, whether emulators or real hardware, interact with a MIDlet through their own software layer known as the Application Management Software (AMS). The AMS is responsible for initializing, starting, pausing, resuming, and destroying a MIDlet, and may also be responsible for installing and removing it. To facilitate this lifecycle management, a MIDlet can be in one of three states, which are controlled via the MIDlet class methods that every MIDlet extends and overrides. These states are active, paused and destroyed.

Virtual Machines

Virtual machines also play a vital role here. The CLDC (J2ME Connected, Limited Device Configuration) and the CDC (J2ME Connected Device Configuration) each require their own virtual machine because of their different memory and display capabilities. The CLDC virtual machine is smaller than the one required by the CDC and supports fewer features.
The virtual machine for the CLDC is called the Kilo Virtual Machine (KVM), and the virtual machine for the CDC is called the CVM.

J2ME Connected, Limited Device Configuration (CLDC): specifies the Java environment for mobile phones, pagers and wireless devices, and supports other devices as well.

- CLDC devices are usually wireless, i.e. mobility is supported.
- Memory is at a premium: typically 160-512 KB is available for Java.
- Power is limited: devices are often battery operated.
- Network connectivity is wireless, intermittent and low-bandwidth (9600 bps or less).

J2ME Connected Device Configuration (CDC): describes the Java environment for digital television set-top boxes, high-end wireless devices and automotive telematics systems.

- The device is powered by a 32-bit processor.
- 2 MB or more memory is available for the Java platform.
- Network connectivity is often wireless, discontinuous and low-bandwidth (9600 bps or less).

Process to Create our own MIDlet in NetBeans

To work with NetBeans we first have to download and install the latest 32-bit version of the Java SE Development Kit (JDK); the JDK is required for compiling the Java classes. The NetBeans installer asks you to browse to the JDK location on the local drive during installation. Note: when installing NetBeans, choose to customize the installation and clear the Features On Demand option. Then download and install a software development kit (SDK) that supports Java ME (Micro Edition). The SDK provides the Java ME class libraries that the IDE needs for building MIDlets for a particular device platform.
When we generate MIDlets for Series 40 devices we use a Nokia SDK (Software Development Kit) for Java. To create MIDlets for Series 40 6th edition or earlier Series 40 devices, use the corresponding Series 40 SDK. Note: make sure the SDK is properly integrated with the IDE, and install the SDK on the same logical drive as the IDE.

Process to configure NetBeans

After installing the required software, integrate NetBeans with the installed SDK. Step by step:

1. Open NetBeans (version 7.2.1 at the time of writing; this may change as the software is updated).
2. Select Tools -> Java Platforms.
3. Click Add Platform.
4. Select Java ME CLDC Platform Emulator and click Next. NetBeans searches our computer for SDKs that support Java ME.
5. If NetBeans does not find the SDK, click Find More Java ME Platform Folders and select the folder where we installed the SDK. NetBeans searches the selected folder for SDKs that support Java ME.
6. Select the SDK and click Next. NetBeans detects the SDK capabilities.
7. After the configuration completes, click Finish and then Close.

Your development environment is now set up and you can create the MIDlet in NetBeans.

To create the HelloBCEI MIDlet (when installing NetBeans, select an installation bundle that supports Java Micro Edition):

1. Select File -> New Project.
2. Select Java ME -> Mobile Application and click Next.
3. In the Project Name field, enter "HelloBCEI". Clear the checkbox Create Default Package and Main Executable Class. Click Next. The MIDlet setup continues with device platform selection.
4. In the Emulator Platform drop-down menu, select the device platform for which you want to create the MIDlet. For Series 40 devices, select a Nokia Software Development Kit for Java.
5. Select CLDC-1.1 and MIDP-2.0 (you may also select MIDP-2.1) and click Finish.
NetBeans sets up the MIDlet project for us. Now we create the program as follows.

To create the main class for the MIDlet: select File -> New File. Select CLDC -> MIDlet and click Next. In the MIDlet Name field, enter "HelloBCEI". In the MIDP Class Name field, enter "HelloBCEIMIDlet". Click Finish. The HelloBCEIMIDlet class is created in the default package. Here is the code that is generated for it (with the class and constructor names made consistent).

Listing 1: The MIDlet class

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.midlet.MIDlet;

public class HelloBCEIMIDlet extends MIDlet {

    public HelloBCEIMIDlet() {}

    // Sets the MIDlet's current Display to a HelloBCEIScreen object.
    public void startApp() {
        Displayable current = Display.getDisplay(this).getCurrent();
        if (current == null) {
            HelloBCEIScreen helloScreen = new HelloBCEIScreen(this, "Hello, BCEI!");
            Display.getDisplay(this).setCurrent(helloScreen);
        }
    }

    public void pauseApp() {}

    public void destroyApp(boolean unconditional) {}
}
```

To create the HelloBCEIScreen class: select File -> New File. Select Java -> Java Class and click Next. In the Class Name field, enter "HelloBCEIScreen". Click Finish. The class is created in the default package.

Listing 2: The screen class created in the default package

```java
import javax.microedition.lcdui.*;

class HelloBCEIScreen extends Form implements CommandListener {

    private final HelloBCEIMIDlet midlet;
    // Exit command for closing the MIDlet in the device UI.
    private final Command exitCommand;

    public HelloBCEIScreen(HelloBCEIMIDlet midlet, String string) {
        super("");
        StringItem helloText = new StringItem("", string);
        super.append(helloText);
        this.midlet = midlet;
        exitCommand = new Command("Exit", Command.EXIT, 1);
        addCommand(exitCommand);
        setCommandListener(this);
    }

    public void commandAction(Command command, Displayable displayable) {
        if (command == exitCommand) {
            midlet.notifyDestroyed();
        }
    }
}
```

Save the project by selecting File -> Save All.
In the Projects pane, right-click the HelloBCEI project and select Deploy. After selecting Deploy, the program is ready to install on the device: NetBeans builds the MIDlet and creates the JAR and JAD files used for deploying the MIDlet to a device. You can find the generated files in the Files pane under the dist folder.

Debugging a MIDlet

Before we can debug a MIDlet, we must have versions of the midp executable and of the MIDlet that contain debugging symbols in their class files. To see whether we have an acceptable version of the midp executable, run the midp command with the -help option. If the executable has Java debugging capabilities, the -debugger option will be listed. For example:

```shell
C:\midp2.0fcs> bin\midp -help
Usage: midp [<options>]
    Run the Graphical MIDlet Suite Manager
    ...
or  midp [<options>] -debugger ...
```

If the -debugger option is not listed, the version of the midp executable we are using does not support Java programming language debugging. To produce a version of the MIDlet that contains debugging symbols, pass the -g option to the javac compiler.

To debug a MIDlet, follow these steps:

1. Open a command prompt or terminal window.
2. Change the current directory to the MIDP installation directory. For example, if the MIDP Reference Implementation were installed in c:\midp2.0fcs, we would run:

```shell
c:\> cd midp2.0fcs
```

3. Start the MIDP Reference Implementation executable in debug mode, using the midp command with the -debugger and -port switches. The port number should be 2800, the port on which the KVM debug proxy expects the debugger to be running. For example:

```shell
c:\midp2.0fcs\> bin\midp -debugger -port 2800 -classpath classes
```

4. Start the KVM debug proxy. Check the KVM documentation for information on the correct syntax, arguments, and options.
For example, the following command makes the KVM debug proxy connect to the midp executable started in the previous step and then listen on port 5000 for software compliant with the Java Platform Debugger Architecture:

```shell
c:\midp2.0fcs\> java -jar c:/kvm/bin/kdp.jar kdp.KVMDebugProxy -l 5000 -p -r localhost 2800 -cp <paths including MIDlet class files>
```

5. Connect to the KVM debug proxy from any debugger compliant with the Java Platform Debugger Architecture. Compliant debuggers include jdb, Sun ONE Studio (formerly known as Forte for Java), JBuilder, CodeWarrior, Visual Café, etc.

Deploy the Project

Now let us discuss the deployment process. We have reached the stage where we can deploy the MIDlet directly on our mobile device and run it. There are basically two ways to do this. The first is via a network connection between our computer and our handset, either over a USB (Universal Serial Bus) cable or a Bluetooth wireless connection, depending on the device. Most Java-enabled devices will allow us to install J2ME applications via such a connection. The second, more interesting option opens up our MIDlet to the outside world via the Internet: the device downloads the application using its internal browser.

```html
<HTML>
Click <a href="DateTimeAppliction.jad">here</a> to download DateTimeApplication MIDlet!
</HTML>
```

Getting the code onto our own device: once we have created our gorgeous little MIDlet and ensured that everything works smoothly in the emulator, the next step is to get it running on an actual device.

Over The Air (OTA) Provisioning: OTA provisioning allows users to download our application wirelessly using the WAP browsers built into their phones.
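OTA provisioning revolves around the JAD descriptor. As a rough reference point, a minimal JAD file for the HelloBCEI MIDlet built earlier could look like the sketch below; the size, version and vendor values are purely illustrative and will differ for your build:

```
MIDlet-1: HelloBCEI, , HelloBCEIMIDlet
MIDlet-Jar-Size: 30124
MIDlet-Jar-URL: HelloBCEI.jar
MIDlet-Name: HelloBCEI
MIDlet-Vendor: TechAlpine
MIDlet-Version: 1.0
MicroEdition-Configuration: CLDC-1.1
MicroEdition-Profile: MIDP-2.0
```

The phone's browser fetches this small text file first, checks that the device satisfies the declared configuration and profile, and only then downloads the JAR referenced by MIDlet-Jar-URL.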
To start, we need to take a look at the Java Application Descriptor (JAD) file that is produced when we package a MIDlet using the J2ME Wireless Toolkit. To edit a JAD file using the Wireless Toolkit, open the project and click Settings. That opens a new window with a number of tabs: API Selection, Required, Optional, User Defined, MIDlets, Push Registry and Permissions. All of these settings matter in practice; the following table gives a brief idea.

Conclusion

In the above discussion we have covered Java Micro Edition and seen that it is a suitable mobile software development platform: J2ME is reliable and helpful when building mobile software, and the Java platform used by mobile developers is highly secure. I hope you now have an understanding of MIDlet creation and its practical implementation. Keep watching TechAlpine!

Reference: How to create MIDlet in J2ME from our JCG partner Kaushik Pal at the TechAlpine – The Technology world blog.

How does Spring @Transactional Really Work?

In this post we will do a deep dive into Spring transaction management: we will go over how @Transactional really works under the hood. Other upcoming posts will cover:

- how to use features like propagation and isolation
- what the main pitfalls are and how to avoid them

JPA and Transaction Management

It's important to notice that JPA by itself does not provide any type of declarative transaction management. When using JPA outside of a dependency injection container, transactions need to be handled programmatically by the developer:

```java
EntityTransaction utx = entityManager.getTransaction();

try {
    utx.begin();

    businessLogic();

    utx.commit();
} catch (Exception ex) {
    utx.rollback();
    throw ex;
}
```

This way of managing transactions makes the scope of the transaction very clear in the code, but it has several disadvantages:

- it's repetitive and error prone
- any error can have a very high impact
- errors are hard to debug and reproduce
- it decreases the readability of the code base
- what if this method calls another transactional method?

Using Spring @Transactional

With Spring @Transactional, the above code gets reduced to simply this:

```java
@Transactional
public void businessLogic() {
    ... use entity manager inside a transaction ...
}
```

This is much more convenient and readable, and is currently the recommended way to handle transactions in Spring. By using @Transactional, many important aspects such as transaction propagation are handled automatically. In this case, if another transactional method is called by businessLogic(), that method will have the option of joining the ongoing transaction. One potential downside is that this powerful mechanism hides what is going on under the hood, making it hard to debug when things don't work.

What does @Transactional mean?
One of the key points about @Transactional is that there are two separate concepts to consider, each with its own scope and life cycle:

- the persistence context
- the database transaction

The transactional annotation itself defines the scope of a single database transaction. The database transaction happens inside the scope of a persistence context. The persistence context in JPA is the EntityManager, implemented internally using a Hibernate Session (when using Hibernate as the persistence provider). The persistence context is just a synchronizer object that tracks the state of a limited set of Java objects and makes sure that changes on those objects are eventually persisted back into the database. This is a very different notion than that of a database transaction: one entity manager can be used across several database transactions, and it actually often is.

When does an EntityManager span multiple database transactions?

The most frequent case is when the application uses the Open Session In View pattern to deal with lazy initialization exceptions; see this previous blog post for its pros and cons. In that case the queries that run in the view layer are in separate database transactions from the one used for the business logic, but they are made via the same entity manager. Another case is when the persistence context is marked by the developer as PersistenceContextType.EXTENDED, which means that it can survive multiple requests.

What defines the EntityManager vs Transaction relation?

This is actually a choice of the application developer, but the most frequent way to use the JPA entity manager is with the "Entity Manager per application transaction" pattern. This is the most common way to inject an entity manager:

```java
@PersistenceContext
private EntityManager em;
```

Here we are by default in "Entity Manager per transaction" mode. In this mode, if we use this entity manager inside a @Transactional method, then the method will run in a single database transaction.
How does @PersistenceContext work?

One question that comes to mind is: how can @PersistenceContext inject an entity manager only once at container startup time, given that entity managers are so short lived and that there are usually multiple per request? The answer is that it can't: EntityManager is an interface, and what gets injected in the Spring bean is not the entity manager itself but a context-aware proxy that will delegate to a concrete entity manager at runtime. Usually the concrete class used for the proxy is SharedEntityManagerInvocationHandler; this can be confirmed with the help of a debugger.

How does @Transactional work then?

The persistence context proxy that implements EntityManager is not the only component needed for making declarative transaction management work. Actually three separate components are needed:

- the EntityManager proxy itself
- the Transactional Aspect
- the Transaction Manager

Let's go over each one and see how they interact.

The Transactional Aspect

The Transactional Aspect is an 'around' aspect that gets called both before and after the annotated business method. The concrete class implementing the aspect is TransactionInterceptor. The Transactional Aspect has two main responsibilities:

- At the 'before' moment, the aspect provides a hook point for determining if the business method about to be called should run in the scope of an ongoing database transaction, or if a new separate transaction should be started.
- At the 'after' moment, the aspect needs to decide if the transaction should be committed, rolled back or left running.

At the 'before' moment the Transactional Aspect itself does not contain any decision logic; the decision to start a new transaction if needed is delegated to the Transaction Manager.

The Transaction Manager

The transaction manager needs to provide an answer to two questions:

- should a new entity manager be created?
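As a mental model, the 'around' behavior of the aspect boils down to a few lines. The sketch below is a toy, not Spring's actual TransactionInterceptor: the ToyTransactionManager and all its method names are invented for illustration, and the real propagation logic is much richer.

```java
import java.util.concurrent.Callable;

// Toy sketch of an 'around' transactional aspect: begin or join a transaction
// before the business method, then commit on success or roll back on failure.
public class TransactionalAspectSketch {

    // Stand-in for a real transaction manager (hypothetical, for illustration only).
    static class ToyTransactionManager {
        boolean active;
        String lastOutcome;

        void begin()    { active = true; }
        void commit()   { active = false; lastOutcome = "commit"; }
        void rollback() { active = false; lastOutcome = "rollback"; }
    }

    static <T> T invokeWithinTransaction(ToyTransactionManager tm,
                                         Callable<T> businessMethod) throws Exception {
        boolean newTransaction = !tm.active;   // join if a transaction is already running
        if (newTransaction) tm.begin();        // 'before' moment
        try {
            T result = businessMethod.call();
            if (newTransaction) tm.commit();   // 'after' moment: only the outermost caller commits
            return result;
        } catch (Exception ex) {
            if (newTransaction) tm.rollback(); // 'after' moment on failure
            throw ex;
        }
    }

    public static void main(String[] args) throws Exception {
        ToyTransactionManager tm = new ToyTransactionManager();
        String ok = invokeWithinTransaction(tm, () -> "done");
        System.out.println(ok + " -> " + tm.lastOutcome);   // done -> commit
        try {
            invokeWithinTransaction(tm, () -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) { }
        System.out.println(tm.lastOutcome);                 // rollback
    }
}
```

Nested calls fall through the `newTransaction` check, which is the essence of the default REQUIRED propagation: an inner transactional method joins the outer transaction instead of starting its own.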
- should a new database transaction be started?

This needs to be decided at the moment the Transactional Aspect's 'before' logic is called. The transaction manager will decide based on:

- whether a transaction is already ongoing or not
- the propagation attribute of the transactional method (for example, REQUIRES_NEW always starts a new transaction)

If the transaction manager decides to create a new transaction, it will:

- create a new entity manager
- bind the entity manager to the current thread
- grab a connection from the DB connection pool
- bind the connection to the current thread

The entity manager and the connection are both bound to the current thread using ThreadLocal variables. They are stored in the thread while the transaction is running, and it's up to the Transaction Manager to clean them up when no longer needed. Any part of the program that needs the current entity manager or connection can retrieve it from the thread. One program component that does exactly that is the EntityManager proxy.

The EntityManager proxy

The EntityManager proxy (which we introduced before) is the last piece of the puzzle. When the business method calls, for example, entityManager.persist(), this call does not invoke the entity manager directly. Instead the business method calls the proxy, which retrieves the current entity manager from the thread, where the Transaction Manager put it. Knowing now what the moving parts of the @Transactional mechanism are, let's go over the usual Spring configuration needed to make this work.

Putting It All Together

Let's go over how to set up the three components needed to make the transactional annotation work correctly. We start by defining the entity manager factory.
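The thread-binding trick described above can be sketched in a few lines of plain Java. This is not Spring's code (Spring uses TransactionSynchronizationManager internally); the class and method names below are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Resources for the current transaction live in a ThreadLocal, so any code
// running on the same thread (such as the EntityManager proxy) can find them
// without being passed a reference explicitly.
public class ThreadBoundResources {

    private static final ThreadLocal<Map<String, Object>> RESOURCES =
            ThreadLocal.withInitial(HashMap::new);

    // Called by the transaction manager when a new transaction starts.
    static void bind(String key, Object resource) { RESOURCES.get().put(key, resource); }

    // Called by the proxy, deep inside the call stack, to find the current resource.
    static Object get(String key) { return RESOURCES.get().get(key); }

    // Called by the transaction manager when the transaction completes.
    static void clear() { RESOURCES.get().clear(); }

    public static void main(String[] args) {
        bind("entityManager", "em-for-" + Thread.currentThread().getName());
        System.out.println(get("entityManager")); // the bound resource
        clear();
        System.out.println(get("entityManager")); // null after cleanup
    }
}
```

Because each thread sees its own map, two concurrent requests never observe each other's entity manager or connection, which is what makes the single shared proxy safe.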
This will allow the injection of entity manager proxies via the persistence context annotation:

```java
@Configuration
public class EntityManagerFactoriesConfiguration {

    @Autowired
    private DataSource dataSource;

    @Bean(name = "entityManagerFactory")
    public LocalContainerEntityManagerFactoryBean emf() {
        LocalContainerEntityManagerFactoryBean emf = ...
        emf.setDataSource(dataSource);
        emf.setPackagesToScan(new String[] {"your.package"});
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }
}
```

The next step is to configure the Transaction Manager and to apply the Transactional Aspect in @Transactional annotated classes:

```java
@Configuration
@EnableTransactionManagement
public class TransactionManagersConfig {

    @Autowired
    EntityManagerFactory emf;

    @Autowired
    private DataSource dataSource;

    @Bean(name = "transactionManager")
    public PlatformTransactionManager transactionManager() {
        JpaTransactionManager tm = new JpaTransactionManager();
        tm.setEntityManagerFactory(emf);
        tm.setDataSource(dataSource);
        return tm;
    }
}
```

The annotation @EnableTransactionManagement tells Spring that classes with the @Transactional annotation should be wrapped with the Transactional Aspect. With this, @Transactional is now ready to be used.

Conclusion

The Spring declarative transaction management mechanism is very powerful, but it can easily be misused or wrongly configured. Understanding how it works internally is helpful when troubleshooting situations where the mechanism is not working at all or is working in an unexpected way. The most important thing to bear in mind is that there are really two concepts to take into account: the database transaction and the persistence context, each with its own not readily apparent life cycle. A future post will go over the most frequent pitfalls of the transactional annotation and how to avoid them.

Reference: How does Spring @Transactional Really Work? from our JCG partner Aleksey Novik at the JHades blog.

Java 8 Optional: How to Use it

Java 8 comes with a new Optional type, similar to what is available in other languages. This post will go over how this new type is meant to be used, namely what its main use case is.

What is the Optional type?

Optional is a new container type that wraps a single value, if the value is available. So it's meant to convey that the value might be absent. Take for example this method:

```java
public Optional<Customer> findCustomerWithSSN(String ssn) {
    ...
}
```

Returning Optional explicitly adds the possibility that there might not be a customer for the given social security number. This means that the caller of the method is forced by the type system to think about and deal with the possibility that there might not be a customer with that SSN. The caller will have to do something like this:

```java
Optional<Customer> optional = findCustomerWithSSN(ssn);

if (optional.isPresent()) {
    Customer customer = optional.get();
    ... use customer ...
} else {
    ... deal with absence case ...
}
```

Or otherwise provide a default value:

```java
Long value = findOptionalLong(ssn).orElse(0L);
```

This use of Optional is somewhat similar to the more familiar case of throwing checked exceptions. By throwing a checked exception, we use the compiler to force callers of the API to somehow handle an exceptional case.

What is Optional trying to solve?

Optional is an attempt to reduce the number of null pointer exceptions in Java systems, by adding the possibility to build more expressive APIs that account for the possibility that sometimes return values are missing. If Optional had been there since the beginning, most libraries and applications would likely deal better with missing return values, reducing the number of null pointer exceptions and the overall number of bugs in general.

How should Optional be used then?

Optional should be used as the return type of functions that might not return a value.
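To make the idiom concrete, here is a small self-contained sketch; the customer data and the findCustomerWithSSN implementation are made up for illustration:

```java
import java.util.Optional;

// Hypothetical repository method illustrating the optional-return idiom:
// the return type forces callers to handle the missing-customer case.
public class CustomerLookup {

    // Pretend lookup; a real application would query a database here.
    static Optional<String> findCustomerWithSSN(String ssn) {
        return "123-45-6789".equals(ssn)
                ? Optional.of("Alice")
                : Optional.empty();
    }

    public static void main(String[] args) {
        // Present: the value is used directly.
        System.out.println(findCustomerWithSSN("123-45-6789").orElse("unknown")); // Alice
        // Absent: the default kicks in; no NullPointerException is possible.
        System.out.println(findCustomerWithSSN("000-00-0000").orElse("unknown")); // unknown
    }
}
```

Compare this with a null-returning version of the same method: there, nothing in the signature warns the caller, and forgetting the null check compiles silently.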
This is a quote from the OpenJDK mailing list:

"The JSR-335 EG felt fairly strongly that Optional should not be on any more than needed to support the optional-return idiom only. Someone suggested maybe even renaming it to OptionalReturn."

In the context of domain-driven development, this means that Optional should be used as the return type of certain service, repository or utility methods such as the one shown above.

What is Optional not trying to solve?

Optional is not meant to be a mechanism to avoid all types of null pointers. The mandatory input parameters of methods and constructors still have to be tested, for example. Like when using null, Optional does not help with conveying the meaning of an absent value: in a similar way that null can mean many different things (value not found, etc.), so can an absent Optional value. The caller of the method will still have to check the javadoc of the method to understand the meaning of the absent Optional, in order to deal with it properly. Also, in a similar way that a checked exception can be caught in an empty block, nothing prevents the caller from calling get() and moving on.

What is wrong with just returning null?

The problem is that the caller of the function might not have read the javadoc for the method, and may forget about handling the null case. This happens frequently and is one of the main causes of null pointer exceptions, although not the only one.

How should Optional NOT be used?

Optional is not meant to be used in these contexts, as it won't buy us anything:

- in the domain model layer (not serializable)
- in DTOs (same reason)
- in input parameters of methods
- in constructor parameters

How does Optional help with functional programming?
In chained function calls, Optional provides the method ifPresent(), which allows chaining functions that might not return values: findCustomerWithSSN(ssn).ifPresent(customer -> System.out.println("customer exists!")); Useful Links This blog post from Oracle goes further into Optional and its uses, comparing it with similar functionality in other languages – Tired of Null Pointer Exceptions? This cheat sheet provides a thorough overview of Optional – Optional in Java 8 Cheat Sheet.Reference: Java 8 Optional: How to Use it from our JCG partner Aleksey Novik at The JHades Blog....

How to use a JPA Type Converter to encrypt your data

A few days ago, I read an interesting article by Bear Giles about Database encryption using JPA listeners from 2012. He discusses his requirements for an encryption solution and provides a code example with JPA listeners. His main requirements are: provide a transparent encryption that does not affect the application, be able to add the encryption at deployment time, and develop application and security/encryption by two different teams/persons. And I completely agree with him. But after 1.5 years and a spec update to JPA 2.1, JPA listeners are not the only solution anymore. JPA 2.1 introduced type converters, which can be used to create an arguably better solution. General information and setup This example expects that you have some basic knowledge about JPA type converters. If you want to read about type converters in more detail, check my previous article on JPA 2.1 – How to implement a Type Converter. The setup for the following example is quite small. You just need a Java EE 7 compatible application server. I used Wildfly 8.0.0.Final, which contains Hibernate 4.3.2.Final as the JPA implementation. Creating the CryptoConverter Payment information like a credit card number is confidential information that should be encrypted. The following code snippet shows the CreditCard entity which we will use for this example. @Entity public class CreditCard { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; private String ccNumber; private String name; ... } As we pointed out in the beginning, the encryption should work in a transparent way. That means that the application is not affected by the encryption and that it can be added without any changes to the existing code base. For me, this also includes the data model in the database, because it is often created by some application-specific scripts which shall not be changed. So we need a type converter that does not change the data type while encrypting and decrypting the information.
The following code snippet shows an example of such a converter. As you can see, the converter is quite simple. The convertToDatabaseColumn method is called by Hibernate before the entity is persisted to the database. It gets the unencrypted String from the entity and uses the AES algorithm with a PKCS5Padding for encryption. Then a base64 encoding is used to convert the encrypted byte[] into a String which will be persisted to the database. When the persistence provider reads the entity from the database, the method convertToEntityAttribute gets called. It takes the encrypted String from the database, uses a base64 decoding to transform it to a byte[] and performs the decryption. The decrypted String is assigned to the attribute of the entity. For a real application, you might want to put some more effort into the encryption or move it to a separate class. But this should be good enough to explain the general idea.

@Converter
public class CryptoConverter implements AttributeConverter<String, String> {

    private static final String ALGORITHM = "AES/ECB/PKCS5Padding";
    private static final byte[] KEY = "MySuperSecretKey".getBytes();

    @Override
    public String convertToDatabaseColumn(String ccNumber) {
        // do some encryption
        Key key = new SecretKeySpec(KEY, "AES");
        try {
            Cipher c = Cipher.getInstance(ALGORITHM);
            c.init(Cipher.ENCRYPT_MODE, key);
            return Base64.encodeBytes(c.doFinal(ccNumber.getBytes()));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public String convertToEntityAttribute(String dbData) {
        // do some decryption
        Key key = new SecretKeySpec(KEY, "AES");
        try {
            Cipher c = Cipher.getInstance(ALGORITHM);
            c.init(Cipher.DECRYPT_MODE, key);
            return new String(c.doFinal(Base64.decode(dbData)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

OK, we have a type converter that encrypts and decrypts a String. Now we need to tell Hibernate to use this converter to persist the ccNumber attribute of the CreditCard entity.
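To see the conversion logic in isolation, here is a minimal sketch of the same encrypt/decrypt round trip outside of JPA. It uses the JDK's java.util.Base64 instead of the Base64 helper class the converter above relies on, and the hard-coded 16-byte key is a placeholder for demonstration only:

```java
import java.security.Key;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Standalone sketch of the converter's encrypt/decrypt round trip,
// using java.util.Base64 instead of the article's Base64 helper.
// The hard-coded key is for demonstration only.
public class CryptoRoundTrip {

    private static final String ALGORITHM = "AES/ECB/PKCS5Padding";
    private static final byte[] KEY = "MySuperSecretKey".getBytes();

    // Mirrors convertToDatabaseColumn: plaintext -> AES -> base64 String
    public static String encrypt(String plain) {
        try {
            Key key = new SecretKeySpec(KEY, "AES");
            Cipher c = Cipher.getInstance(ALGORITHM);
            c.init(Cipher.ENCRYPT_MODE, key);
            return Base64.getEncoder().encodeToString(c.doFinal(plain.getBytes()));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Mirrors convertToEntityAttribute: base64 String -> AES decrypt -> plaintext
    public static String decrypt(String encrypted) {
        try {
            Key key = new SecretKeySpec(KEY, "AES");
            Cipher c = Cipher.getInstance(ALGORITHM);
            c.init(Cipher.DECRYPT_MODE, key);
            return new String(c.doFinal(Base64.getDecoder().decode(encrypted)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Running encrypt on a card number and decrypt on the result should give back the original value, which is exactly the contract the JPA provider exercises on write and read.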
As described in one of my previous articles, we could use the @Convert annotation for this. But that would change the code of our application. Another option, and for our requirements the better one, is to assign the converter in the XML configuration. This can be done in the orm.xml file. The following snippet assigns the CryptoConverter to the ccNumber attribute of the CreditCard entity.

<entity-mappings version="2.1"
                 xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence/orm http://xmlns.jcp.org/xml/ns/persistence/orm_2_1.xsd">
    <entity class="blog.thoughts.on.java.jpa21.enc.entity.CreditCard">
        <convert converter="blog.thoughts.on.java.jpa21.enc.converter.CryptoConverter" attribute-name="ccNumber"/>
    </entity>
</entity-mappings>

That is everything we need to do to implement and configure a type-converter-based encryption for a single database field. Entity Listeners or Type Converter? The answer to this question is not as easy as it seems. Both solutions have their advantages and disadvantages. The entity listener described by Bear Giles can use multiple attributes of the entity during encryption. So you can join multiple attributes, encrypt them and store the encrypted data in one database field. Or you can use different attributes for the encrypted and decrypted data to avoid the serialization of the decrypted data (as described by Bear Giles). But using an entity listener also has drawbacks. Its implementation is specific to an entity and more complex than the implementation of a type converter. And if you need to encrypt an additional attribute, you need to change the implementation. As you saw in the example above, the implementation of a type converter is easy and reusable. The CryptoConverter can be used to encrypt any String attribute of any entity.
And by using the XML-based configuration to register the converter for the entity attribute, it requires no change in the source code of the application. You could even add it to the application at a later point in time, if you migrate the existing data. A drawback of this solution is that the encrypted entity attribute cannot be marked as transient. This might result in vulnerabilities if the entity gets written to disk. You see, both approaches have their pros and cons. You have to decide which advantages and disadvantages are more important to you. Conclusion At the beginning of this post, we defined 3 requirements: provide a transparent encryption that does not affect the application, be able to add the encryption at deployment time, and develop application and security/encryption by two different teams/persons. The described implementation of the CryptoConverter fulfills all of them. The encryption can be added at deployment time and does not affect the application, if the XML configuration is used to assign the type converter. The development of the application and the encryption are completely independent and can be done by different teams. On top of this, the CryptoConverter can be used to convert any String attribute of any entity. So it has high reusability. But this solution also has some drawbacks, as we saw in the last paragraph. You have to make the decision which approach you want to use. Please write me a comment about your choice.Reference: How to use a JPA Type Converter to encrypt your data from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog....

Is there a future for Map/Reduce?

Google’s Jeffrey Dean and Sanjay Ghemawat filed the patent request and published the map/reduce paper 10 years ago (2004). According to Wikipedia, Doug Cutting and Mike Cafarella created Hadoop, with its own implementation of Map/Reduce, one year later at Yahoo – both of these implementations were done for the same purpose – batch indexing of the web. Back then, the web began its “web 2.0″ transition, pages became more dynamic, people began to create more content – so an efficient way to reprocess and build the web index was needed, and map/reduce was it. Web indexing was a great fit for map/reduce since the initial processing of each source (web page) is completely independent from any other – i.e. a very convenient map phase – and you need to combine the results to build the reverse index. That said, even the core Google algorithm – the famous PageRank – is iterative (so less appropriate for map/reduce), not to mention that as the internet got bigger and the updates became more and more frequent, map/reduce wasn’t enough. Again Google (who seem to be consistently a few years ahead of the industry) began coming up with alternatives like Google Percolator or Google Dremel (both papers were published in 2010; Percolator was introduced in that year, and Dremel had been used inside Google since 2006).
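The map/reduce shape described above – an independent map over each source followed by a combine step – can be sketched with the classic word-count example (a minimal illustration in plain Java streams, not Hadoop code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal word-count illustration of the map/reduce shape:
// each document is tokenized independently (map phase), then the
// per-word occurrences are combined into totals (reduce phase).
public class WordCount {

    public static Map<String, Long> count(List<String> documents) {
        return documents.stream()
                // map phase: each document is processed independently of the others
                .flatMap(doc -> Arrays.stream(doc.toLowerCase().split("\\s+")))
                // reduce phase: combine the per-word results across all documents
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }
}
```

The point of the sketch is the independence of the map step: because no document's tokenization depends on any other, the work parallelizes trivially, which is exactly why batch web indexing fit the model so well.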
So now it is 2014, and it is time for the rest of us to catch up with Google and get over Map/Reduce, for multiple reasons: end-users’ expectations (who hear “big data” but interpret that as “fast data”); iterative problems like graph algorithms, which are inefficient as you need to load and reload the data each iteration; continuous ingestion of data (increments coming in as small batches or streams of events), where joining to existing data can be expensive; and real-time problems – both queries and processing. In my opinion, Map/Reduce is an idea whose time has come and gone – it won’t die in a day or a year; there are still a lot of working systems that use it and the alternatives are still maturing. I do think, however, that if you need to write or implement something new that would build on map/reduce, you should use other options, or at the very least carefully consider them. So how is this change going to happen? Luckily, Hadoop has recently adopted YARN (you can see my presentation on it here), which opens up the possibility to go beyond map/reduce without changing everything … even though in effect, a lot will change. Note that some of the new options do have migration paths, and we also still retain access to all that “big data” we have in Hadoop, as well as the extended reuse of some of the ecosystem. The first type of effort to replace map/reduce is to actually subsume it by offering more flexible batch. After all, saying Map/Reduce is not relevant doesn’t mean that batch processing is not relevant. It does mean that there’s a need for more complex processes. There are two main candidates here, Tez and Spark, where Tez offers a nice migration path as it is replacing map/reduce as the execution engine for both Pig and Hive, and Spark has a compelling offer by combining batch and stream processing (more on this later) in a single engine. The second type of effort or processing capability that will help kill map/reduce is MPP databases on Hadoop.
Like the “flexible batch” approach mentioned above, this is replacing a functionality that map/reduce was used for – unleashing the data already processed and stored in Hadoop. The idea here is twofold: to provide fast query capabilities* – by using specialized columnar data formats and database engines deployed as daemons on the cluster – and to provide rich query capabilities – by supporting more and more of the SQL standard and enriching it with analytics capabilities (e.g. via MADlib). Efforts in this arena include Impala from Cloudera, Hawq from Pivotal (which is essentially Greenplum over HDFS), startups like Hadapt, or even Actian trying to leverage their ParAccel acquisition with the recently announced Actian Vector. Hive is somewhere in the middle, relying on Tez on one hand and using vectorization and a columnar format (ORC) on the other. The third type of processing that will help dethrone Map/Reduce is stream processing. Unlike the two previous types of effort, this is covering ground that map/reduce can’t cover, even inefficiently. Stream processing is about handling a continuous flow of new data (e.g. events) and processing it (enriching, aggregating, etc.) in seconds or less. The two major contenders in the Hadoop arena seem to be Spark Streaming and Storm though, of course, there are several other commercial and open source platforms that handle this type of processing as well. In summary – Map/Reduce is great. It has served us (as an industry) for a decade, but it is now time to move on and bring the richer processing capabilities we have elsewhere to solve our big data problems as well. Last note – I focused on Hadoop in this post even though there are several other platforms and tools around. I think that regardless of whether Hadoop is the best platform, it is the one becoming the de facto standard for big data (remember Betamax vs. VHS?).
One really, really last note – if you read up to here, and you are a developer living in Israel, and you happen to be looking for a job – I am looking for another developer to join my Technology Research team @ Amdocs. If you’re interested, drop me a note: arnon.rotemgaloz at amdocs dot com or via my twitter/linkedin profiles. *esp. in regard to analytical queries – operational SQL on Hadoop with efforts like Phoenix, IBM’s BigSQL or Splice Machine is also happening, but that’s another story! Illustration idea found in James Mickens’s talk at Monitorama 2014 (which is, by the way, a really funny presentation – go watch it) – ohh yeah… and Pulp Fiction!Reference: Is there a future for Map/Reduce? from our JCG partner Arnon Rotem Gal Oz at the Cirrus Minor blog....

The Mouse is a Programmer’s Enemy

One of the first programming management books I was encouraged to read was Peopleware – Productive Projects and Teams. It was a great read and I try to re-read it every once in a while. One of the topics covered is actually a term that comes from psychology – flow. Flow carries the idea of being completely mentally immersed in a task. There are a lot of things that can break us out of flow, or prevent us from ever entering that state, that are out of our control. But I want to focus on something that is completely within our control and that could be interrupting our flow hundreds of times per day. The Mouse Reaching for the mouse (trackpad, touchpad, etc.) is a very natural instinct for many of us. But removing one of our hands from the keyboard can actually disrupt our thinking and hamper productivity – albeit not to the extent of many of the other distractions we contend with. Modern IDEs are so feature rich that, if you are deeply involved in a development task, know your requirements and have a well-thought-out design that meets those requirements, you can perform much of the development without having your fingers leave your keyboard and maintain your blissful state of flow. Keyboard Shortcuts Most of us would agree that using the mouse to perform frequent operations like copy/cut/paste/save/undo/redo is unnecessary. But we can go much further in our effort to keep our minds focused and increase our productivity. I won’t bother outlining what the shortcuts are, since every IDE has its own set of built-in keyboard shortcuts, most allow for customization of these, and some languages may have shortcut concepts that don’t apply elsewhere. What I will do is outline some of the shortcuts that you should know for your IDE and, in brief, how they will benefit you. I work in Java most often day-to-day, so some of these may apply more strictly to Java developers.
Refactor – rename / move: Don’t hesitate to rename variables and files, or move them, if it’s in your best interest to do so.
Generate: Generate entire code files, variables, implementation shells.
Open resource: Get to that code or resource file by name.
Open selection: Open the item your cursor is on.
Find references: Find all code uses of the item your cursor is on.
Show hierarchy: Display the class hierarchy of the selected item.
Indent / Outdent: Keep that code looking beautiful.
Comment / Toggle Comment: Quickly and easily turn code blocks into comments and back.
Cleanup / Format: More code beautification. Can even be used to resolve code problems.
Add Import: Java import of a specific class.
Run / Debug: Quickly relaunch the last launch, or the code that’s open in your editor.
Set/Toggle Breakpoint – Step Into / Step Out / Step Over / Run to Line / Resume: Debugger shortcuts.
Quick Fix / Quick Assist / Suggest: Super powerful! Code problem? Your IDE might know how to fix it! Also, save on pointless typing with assist/suggest features.
Duplicate lines: Need to perform another similar operation to an existing block of code? Line duplication without copy/paste!
Templates: Repetitive typing tasks simplified.
In Conclusion These are just some of my favourites. There are dozens, even hundreds more available. Interested in more information? IDEA Keyboard Shortcuts, Eclipse Keyboard Shortcuts. Have a favourite that I haven’t listed? Leave a comment and let us know!Reference: The Mouse is a Programmer’s Enemy from our JCG partner Craig Flichel at the Carfey Software Blog....

10 things you can do to make your app secure: #1 Parameterize Database Queries

OWASP’s Top 10 Risk list for web applications is a widely recognized tool for understanding, describing and assessing major application security risks. It is used to categorize problems found by security testing tools, to explain appsec issues in secure software development training, and it is burned into compliance frameworks like PCI DSS. The OWASP Top 10 for web apps, and the Top 10 risk list for mobile apps, are written by security specialists for other security specialists, pen testers and compliance auditors. They are useful in understanding what is wrong or what could be wrong with an app, but they don’t help developers understand what they need to do to build secure software. Now OWASP has a Top 10 list written for developers: 10 things that developers can and should do to build secure online apps. This list of “Proactive Controls” covers security issues in requirements, architecture and design, as well as code-level concerns. It provides a checklist to follow when developing a system, pointing to detailed guidance in each area. All available free online. Let’s start with #1 on the list, the simplest, but one of the most important things that you can do to secure your application: Parameterize Database Queries. #1 Parameterize Database Queries One of the most dangerous and most common attacks on online applications is SQL injection: attackers inserting malicious SQL into a dynamic SQL statement. SQL injection vulnerabilities are easy for an attacker to find using free tools like SQL Map or SQL Ninja or one of the many other hacking tools, or even through simple manual testing: try inserting a value like 1' or '1' = '1 into the user name and password or other text fields and see what happens. Once a SQL injection vulnerability is found, it is easy to exploit. SQL injection is also one of the easiest problems to solve.
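To make the danger concrete, here is a minimal Java sketch (table and column names are hypothetical). Concatenating user input lets the input rewrite the command itself, while a PreparedStatement keeps the statement text fixed and binds the input strictly as data; the safe variant is shown for shape only, since it needs a live database Connection:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginQueries {

    // UNSAFE: user input is spliced into the SQL text, so input
    // containing quotes can change the structure of the command.
    public static String unsafeQuery(String userName) {
        return "SELECT * FROM users WHERE name = '" + userName + "'";
    }

    // SAFE sketch: the statement text is fixed up front; the JDBC
    // driver binds the input as a value, never as SQL.
    public static ResultSet safeQuery(Connection conn, String userName) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, userName);
        return stmt.executeQuery();
    }
}
```

Feeding the classic injection value into the unsafe version shows the problem: the resulting SQL ends with an OR '1' = '1' tautology that matches every row, whereas the parameterized version would simply look for a user literally named x' OR '1' = '1.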
You do this by making it clear to the SQL interpreter what parts of a SQL statement make up the command, and what parts are data, by parameterizing your database statements. OWASP has a cheat sheet that explains how to parameterize queries in Java (using prepared statements or with Hibernate), and in .NET/C#, ASP.NET, Ruby, PHP, Coldfusion and Perl. None of this is hard to understand or hard to do properly. It’s not exciting. But it will stop some of the worst security attacks. SQL injection is only one type of injection attack. Next we’ll look at how to protect against other kinds of injection attacks by Encoding Data – or you can watch Jim Manico explain encoding and the rest of the Top 10 Proactive Controls on YouTube.Reference: 10 things you can do to make your app secure: #1 Parameterize Database Queries from our JCG partner Jim Bird at the Building Real Software blog....

It’s time: Bring Dart to Android

As one of the first users of Google’s Dart language, I always felt like it could do more than “just some web tricks”. Its performance is great and the language itself is elegant and modern. I like it. Even more than Java, which I used for a long, long time. Even more than PHP, which I have used since I started with web programming. Even more than JavaScript, with which I am happy to work. Dart is great, but the browser vendors didn’t ask for it. Nor did the hardcore JavaScript users. It is and will continue to be hard for Google to convince other people to switch from JavaScript. Sure, you can do a lot of great stuff with Dart, like server-side programming. But JavaScript has all that too. So why bother? I always thought there was only one chance for Google to convince programmers of Dart: bring it to Android, and replace the old-fashioned Java. Don’t get me wrong: I still think Java is a great language. But it’s not a great language for all requirements. When I started working on my own products, I used Java. It’s something which I regret today, because as a one-man show, it was simply too time consuming. For newer products I use dynamic languages. I am quicker, and that’s what I need right now. In Android-land, people started to bypass Java and work with Apache Cordova (repackaged as PhoneGap). And just recently, Apple announced Swift. A language which actually looks pretty similar to Dart, as you can see here:

class Shape {
    var numberOfSides = 0
    func simpleDescription() -> String {
        return "A shape with \(numberOfSides) sides."
    }
}

You see the “var” declaration? You can also code more in the functional way:

func lessThanTen(number: Int) -> Bool {
    return number < 10
}

Also interesting: generics:

enum OptionalValue<T> {
    case None
    case Some(T)
}

As with Dart, Swift seems to take modern concepts and paradigms and still looks like a dynamic and lightweight language. Swift will run on iOS and it’s said to be perfect for game programming.
Something which can also be done with Dart. Now, with Apple having a great, nice-looking and easy-to-use language like Swift ready for programmers, I think it’s the perfect time to bring Dart to Android. To be honest: being a long-time Java developer, I will not miss Java that much. Time to move on, Google! Give us Android-Dart!Reference: It’s time: Bring Dart to Android from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

ActiveMQ – Network of Brokers Explained – Part 4

In the previous part 3, we saw how ActiveMQ helps distinguish remote consumers from local consumers, which helps in determining shorter routes from message producers to consumers. In this part 4, we will look into how to load balance concurrent consumers on remote brokers. Let’s consider a bit more advanced configuration to load balance concurrent message consumers on a queue in remote brokers, as shown below. Part 4 – Network of brokers In the above configuration, we have a message producer sending messages into a queue moo.bar on broker-1. Broker-1 establishes network connectors to broker-2 and broker-3. Consumer C1 consumes messages from queue moo.bar on broker-2 while consumers C2 and C3 are concurrent consumers on queue moo.bar on broker-3. Let’s see this in action. Let’s create three broker instances:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../cluster/broker-1
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../cluster/broker-2
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../cluster/broker-3

Fix the broker-2 and broker-3 transport and AMQP connectors and the Jetty HTTP port by modifying the corresponding conf/activemq.xml and conf/jetty.xml as follows:

Broker      OpenWire Port   Jetty HTTP Port   AMQP Port
broker-1    61616           8161              5672
broker-2    61626           9161              5682
broker-3    61636           10161             5692

Fix the network connector on broker-1 such that messages on queues can be forwarded dynamically to consumers on broker-2 and broker-3.
This can be done by adding the following XML snippet into broker-1's conf/activemq.xml:

<networkConnectors>
    <networkConnector name="Q:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
    <networkConnector name="Q:broker1->broker3"
                      uri="static:(tcp://localhost:61636)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
</networkConnectors>

Start broker-2, broker-3 and broker-1. We can start these in any order:

/apache-activemq-5.8.0/cluster/broker-3/bin$ ./broker-3 console
/apache-activemq-5.8.0/cluster/broker-2/bin$ ./broker-2 console
/apache-activemq-5.8.0/cluster/broker-1/bin$ ./broker-1 console

Let's start the consumers, C1 on broker-2 and C2, C3 on broker-3, but on the same queue called "moo.bar":

/apache-activemq-5.8.0/example$ ant consumer -Durl=tcp://localhost:61626 -Dsubject=moo.bar
/apache-activemq-5.8.0/example$ ant consumer -Durl=tcp://localhost:61636 -Dsubject=moo.bar -DparallelThreads=2

The consumer subscriptions are forwarded by broker-2 and broker-3 to their neighboring broker-1, which has a network connector established to both broker-2 and broker-3, by the use of advisory messages. Let's review the broker web consoles to see the queues and corresponding consumers. We find that broker-2's web console shows one queue "moo.bar" having 1 consumer, and broker-3's web console shows one queue "moo.bar" having 2 concurrent consumers. Though there are three consumers (C1 on broker-2 and C2, C3 on broker-3), broker-1 sees only two consumers (representing broker-2 and broker-3): http://localhost:8161/admin/queues.jsp This is because the network connector from broker-1 to broker-2 and to broker-3 by default has a property "conduitSubscriptions" which is true.
Because of this, broker-3's C2 and C3, which consume messages from the same queue "moo.bar", are treated as one consumer in broker-1. Let's produce 30 messages into broker-1's queue moo.bar and see how the messages are divvied among the consumers C1, C2 and C3: [Figure: shows how the messages were propagated from producer to consumers C1, C2, C3] As seen above, even though there were three consumers and 30 messages, they didn't get to process 10 messages each, as the C2 and C3 subscriptions were consolidated into one consumer at broker-1. conduitSubscriptions="true" is a useful setting if we were creating subscribers on topics, as that would prevent duplicate messages. More on this in part 5. So, in order to make the C2 and C3 subscriptions on queue moo.bar propagate to broker-1, let's redo the same steps 6, 7, 8, 9 and 10 after setting conduitSubscriptions="false" in broker-1's network connector configuration in conf/activemq.xml. Here is the new network connector configuration snippet for broker-1:

<networkConnectors>
    <networkConnector name="Q:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      conduitSubscriptions="false"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
    <networkConnector name="Q:broker1->broker3"
                      uri="static:(tcp://localhost:61636)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      conduitSubscriptions="false"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
</networkConnectors>

Upon restarting the brokers and consumers C1, C2 and C3 and producing 30 messages into broker-1's moo.bar queue, we find that all three consumer subscriptions are visible at broker-1. As a result, broker-1 dispatches 10 messages to each of the consumers in a round-robin fashion to load balance.
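The round-robin dispatch described above can be illustrated with a small simulation (plain Java, not ActiveMQ code): when all three subscriptions are visible, 30 messages spread across 3 consumers land 10 apiece.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration (not ActiveMQ code) of round-robin dispatch:
// the broker hands each message to the next consumer subscription in turn.
public class RoundRobinDispatch {

    // Returns one list of message ids per consumer.
    public static List<List<Integer>> dispatch(int messageCount, int consumerCount) {
        List<List<Integer>> consumers = new ArrayList<>();
        for (int i = 0; i < consumerCount; i++) {
            consumers.add(new ArrayList<>());
        }
        for (int msg = 0; msg < messageCount; msg++) {
            // round-robin: message msg goes to consumer (msg mod consumerCount)
            consumers.get(msg % consumerCount).add(msg);
        }
        return consumers;
    }
}
```

With conduitSubscriptions="true", the simulation would instead see only two subscriptions (broker-2 and broker-3), so broker-3's share would be split again between C2 and C3 rather than each consumer receiving an equal 10.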
This is depicted pictorially below. [Figure: shows how the messages were propagated from producer to consumers C1, C2, C3] Broker-1's web console at http://localhost:8161/admin/queueConsumers.jsp?JMSDestination=moo.bar shows that broker-1 now sees 3 consumers and dispatches 10 messages to each consumer. Thus in this part 4 of the blog series, we have seen how we can load balance remote concurrent consumers which are consuming messages from a queue. As always, your comments and feedback are appreciated! In the next part 5, we will explore how the same scenario will play out if we were to use a topic instead of a queue. Stay tuned… Resources: http://fusesource.com/docs/esb/4.3/amq_clustering/Networks-Connectors.html The configuration files (activemq.xml and jetty.xml) of all the brokers used in this blog are available here.Reference: ActiveMQ – Network of Brokers Explained – Part 4 from our JCG partner Ashwini Kuntamukkala at the Ashwini Kuntamukkala – Technology Enthusiast blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
