Developing a top-down Web Service project

This is a sample chapter taken from the Advanced JAX-WS Web Services book, edited by Alessio Soldano. The bottom-up approach for creating a Web Service endpoint was introduced in the first chapter. It allows exposing existing beans as Web Service endpoints very quickly: in most cases, turning the classes into endpoints is a matter of simply adding a few annotations to the code. However, when developing a service against an already defined contract, it is far simpler (and more effective) to use the top-down approach, since a wsdl-to-java tool can generate the annotated code matching the WSDL. This is the preferred solution in several scenarios, such as:

- Creating a service that adheres to an XML Schema and WSDL that have been developed by hand up front;
- Exposing a service that conforms to a contract specified by a third party (e.g. a vendor that calls the service using an already defined set of messages);
- Replacing the implementation of an existing Web Service while keeping compatibility with older clients (the contract must not change).

In the next sections, an example of top-down Web Service endpoint development is provided, as well as some details on constraints the developer has to be aware of when coding, regardless of the chosen approach.

Creating a Web Service using the top-down approach

In order to set up a full project which includes a Web Service endpoint and a JAX-WS client, we will use two Maven projects. The first one will be a standard webapp-javaee7 project, which will contain the Web Service endpoint. The second one will be a quickstart Maven project that will execute a test case against the Web Service.
Let’s start by creating the server project as usual with:

```
mvn -DarchetypeGroupId=org.codehaus.mojo.archetypes \
    -DarchetypeArtifactId=webapp-javaee7 \
    -DarchetypeVersion=0.4-SNAPSHOT \
    -DarchetypeRepository=https://nexus.codehaus.org/content/repositories/snapshots \
    -DgroupId=com.itbuzzpress.chapter2.wsdemo \
    -DartifactId=ws-demo2 \
    -Dversion=1.0 \
    -Dpackage=com.itbuzzpress.chapter2.wsdemo \
    -Darchetype.interactive=false --batch-mode --update-snapshots \
    archetype:generate
```

The next step is creating the Web Service interface and stubs from a WSDL contract. The steps are similar to those for building a client for the same contract. The only difference is that the wsconsume script will output the generated source files into our Maven project:

```
$ wsconsume.bat -k CustomerService.wsdl -o ws-demo-wsdl\src\main\java
```

In addition to the generated classes, which we discussed at the beginning of the chapter, we need to provide a Service Endpoint Implementation that contains the Web Service functionality:

```java
@WebService(endpointInterface = "org.jboss.test.ws.jaxws.samples.webresult.Customer")
public class CustomerImpl implements Customer {

    public CustomerRecord locateCustomer(String firstName, String lastName, USAddress address) {
        CustomerRecord cr = new CustomerRecord();
        cr.setFirstName(firstName);
        cr.setLastName(lastName);
        return cr;
    }
}
```

The endpoint implementation class implements the endpoint interface and references it through the @WebService annotation. Our WebService class does nothing fancy: it just creates a CustomerRecord object using the parameters received as input. In a real-world example, you would retrieve the CustomerRecord through the persistence layer, for instance. Once the implementation class has been included in the project, the project needs to be packaged and deployed to the target container, which will expose the service endpoint with the same contract that was consumed by the tool.
It is also possible to reference a local WSDL file in the wsdlLocation attribute of the @WebService annotation on the service interface and include the file in the deployment. This ensures that the exact document provided is published. If you are deploying the Web Service to the WildFly application server, you can verify from a management tool such as the Admin Console that the endpoint is now available: select the Runtime tab and click on the Web Services link in the Subsystems section on the left.

Requirements of a JAX-WS endpoint

Regardless of the approach chosen for developing a JAX-WS endpoint, the actual implementation needs to satisfy some requirements:

- The implementing class must be annotated with either the javax.jws.WebService or the javax.jws.WebServiceProvider annotation.
- The implementing class may explicitly reference a service endpoint interface through the endpointInterface element of the @WebService annotation, but is not required to do so. If no endpointInterface is specified in @WebService, the service endpoint interface is implicitly defined for the implementing class.
- The business methods of the implementing class must be public and must not be declared static or final.
- The javax.jws.WebMethod annotation is to be used on business methods to be exposed to web service clients; if no method is annotated with @WebMethod, all business methods are exposed.
- Business methods that are exposed to web service clients must have JAXB-compatible parameters and return types.
- The implementing class must not be declared final and must not be abstract.
- The implementing class must have a default public constructor and must not define the finalize method.
- The implementing class may use the javax.annotation.PostConstruct or the javax.annotation.PreDestroy annotations on its methods for lifecycle event callbacks.

Requirements for building and running a JAX-WS client

A JAX-WS client can be part of any Java project and is not explicitly required to be part of a JAR/WAR archive deployed on a Java EE container. For instance, the client might simply be contained in a quickstart Maven project created as follows:

```
mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DgroupId=com.itbuzzpress.chapter2.wsdemo \
    -DartifactId=client-demo-wsdl \
    -Dversion=1.0 \
    -Dpackage=com.itbuzzpress.chapter2.wsdemo \
    -Darchetype.interactive=false --batch-mode
```

As your client needs to reference the endpoint interface and stubs, you need to provide them, either by copying them from the server project or by generating them again using wsconsume:

```
$ wsconsume.bat -k CustomerService.wsdl -o client-demo-wsdl\src\main\java
```

Now include a minimal client application, which is part of a JUnit test case:

```java
public class AppTest extends TestCase {

    public void testApp() {
        CustomerService service = new CustomerService();
        Customer port = service.getCustomerPort();
        CustomerRecord record = port.locateCustomer("John", "Li", new USAddress());
        System.out.println("Customer record is " + record);
        assertNotNull(record);
    }
}
```

Compiling and running the test

In order to successfully run a WS client application, a classloader needs to be properly set up to include the desired JAX-WS implementation libraries (and the required transitive dependencies, if any). Depending on the environment the client is meant to run in, this might imply adding some jars to the classpath, adding some artifact dependencies to the Maven dependency tree, setting up the IDE properly, etc.
Since Maven is used to build the application containing the client, you can configure your pom.xml as follows so that it includes a dependency on JBossWS:

```xml
<dependency>
  <groupId>org.jboss.ws.cxf</groupId>
  <artifactId>jbossws-cxf-client</artifactId>
  <version>4.2.3.Final</version>
  <scope>provided</scope>
</dependency>
```

Now you can execute the test case, which will call the JAX-WS API to serve the client invocation using JBossWS:

```
mvn clean package test
```

Focus on the JAX-WS implementation used by the client

The JAX-WS implementation to be used for running a JAX-WS client is selected at runtime by looking for META-INF/services/javax.xml.ws.spi.Provider resources through the application classloader. Each JAX-WS implementation has a library (jar) including that resource file, which internally references the proper class implementing the JAX-WS SPI Provider. On the WildFly 8.0.0.Final application server, the JAX-WS implementation is selected by the META-INF/services/javax.xml.ws.spi.Provider resource contained in jbossws-cxf-factories-4.2.3.Final.jar:

```
org.jboss.wsf.stack.cxf.client.ProviderImpl
```

Therefore, it is extremely important to control which artifacts or jar libraries are included in the classpath the application classloader is constructed from. If multiple implementations are found, order matters: the first implementation in the classpath will be used. The safest way to avoid any classpath issue (and thus loading another JAX-WS implementation) is to set the java.endorsed.dirs system property to include the jbossws-cxf-factories jar; if you don’t do that, make sure you don’t include other META-INF/services/javax.xml.ws.spi.Provider resources ahead of it in your classpath, as they would trigger another JAX-WS implementation. Finally, if the JAX-WS client is meant to run on WildFly as part of a Java EE application, the JBossWS JAX-WS implementation will be automatically selected for serving the client.
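Because classpath order decides which provider wins, it can be handy to list every javax.xml.ws.spi.Provider service file the application classloader can actually see. The following stand-alone sketch (the class name is mine, not part of JBossWS) does that with plain JDK APIs; on a bare JDK it will typically find none, while on a misconfigured application classpath it may reveal competing implementations:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

public class ProviderScan {

    // Collects every META-INF/services/javax.xml.ws.spi.Provider resource
    // visible to the given classloader, in classpath order (the first wins).
    static List<URL> findProviderResources(ClassLoader cl) throws Exception {
        Enumeration<URL> urls = cl.getResources("META-INF/services/javax.xml.ws.spi.Provider");
        return Collections.list(urls);
    }

    public static void main(String[] args) throws Exception {
        List<URL> resources = findProviderResources(ProviderScan.class.getClassLoader());
        System.out.println("Provider resources on the classpath: " + resources.size());
        for (URL url : resources) {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                // The first line of the service file names the Provider implementation.
                System.out.println(url + " -> " + in.readLine());
            }
        }
    }
}
```

Running this with the same classpath as your client makes the “order matters” problem visible before any JAX-WS call is attempted.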
This excerpt has been taken from the “Advanced JAX-WS Web Services” book, in which you’ll learn the concepts of SOAP-based Web services architecture and get practical advice on building and deploying Web services in the enterprise. Starting from the basics and the best practices for setting up a development environment, this book enters into the inner details of JAX-WS in a clear and concise way. You will also learn about the major toolkits available for creating, compiling and testing SOAP Web services, and how to address common issues such as debugging data and securing its content. What you will learn from this book:

- Taking your first steps with SOAP Web services.
- Installing the tools required for developing and testing applications.
- Developing Web services using the top-down and bottom-up approaches.
- Using Maven archetypes to speed up Web service creation.
- Getting into the details of JAX-WS types: Java-to-XML and XML-to-Java mapping.
- Developing SOAP Web services on WildFly 8 and Tomcat.
- Running native Apache CXF on WildFly.
- Securing Web services.
- Applying authentication policies to your services.
- Encrypting the communication.

Some more unit test tips

In my previous post I showed some tips on unit testing JavaBeans. In this blog entry I will give two more tips on unit testing some fairly common Java code, namely utility classes and Log4J logging statements.

Testing utility classes

If your utility classes follow the same basic design as the ones I tend to write, they consist of a final class with a private constructor and all static methods.

Utility class tester

```java
package it.jdev.example;

import static org.junit.Assert.*;

import java.lang.reflect.*;

import org.junit.Test;

/**
 * Tests that a utility class is final, contains one private constructor, and
 * all methods are static.
 */
public final class UtilityClassTester {

    private UtilityClassTester() {
        super();
    }

    /**
     * Verifies that a utility class is well defined.
     *
     * @param clazz
     * @throws Exception
     */
    @Test
    public static void test(final Class<?> clazz) throws Exception {
        // Utility classes must be final.
        assertTrue("Class must be final.", Modifier.isFinal(clazz.getModifiers()));

        // Only one constructor is allowed and it has to be private.
        assertTrue("Only one constructor is allowed.", clazz.getDeclaredConstructors().length == 1);
        final Constructor<?> constructor = clazz.getDeclaredConstructor();
        assertFalse("Constructor must be private.", constructor.isAccessible());
        assertTrue("Constructor must be private.", Modifier.isPrivate(constructor.getModifiers()));

        // All methods must be static.
        for (final Method method : clazz.getMethods()) {
            if (!Modifier.isStatic(method.getModifiers()) && method.getDeclaringClass().equals(clazz)) {
                fail("Non-static method found: " + method + ".");
            }
        }
    }
}
```

This UtilityClassTester itself also follows the utility class constraints noted above, so what better way to demonstrate its use than by using it to test itself:

Test case for the UtilityClassTester

```java
package it.jdev.example;

import org.junit.Test;

public class UtilityClassTesterTest {

    @Test
    public void test() throws Exception {
        UtilityClassTester.test(UtilityClassTester.class);
    }
}
```

Testing Log4J logging events

When calling a method that declares an exception, you’ll either re-declare that same exception or deal with it within a try-catch block. In the latter case, the very least you will do is log the caught exception. A very simplistic example is the following:

MyService example

```java
package it.jdev.example;

import java.lang.invoke.MethodHandles;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    // MethodHandles.lookup().lookupClass() resolves to the enclosing class.
    private static final Logger LOGGER = Logger.getLogger(MethodHandles.lookup().lookupClass());

    @Autowired
    private MyRepository myRepository;

    public void doSomethingUseful() {
        try {
            myRepository.doSomethingVeryUseful();
        } catch (SomeException e) {
            LOGGER.error("Some very informative error logging.", e);
        }
    }
}
```

Of course, you will want to test that the exception is logged appropriately.
Something along the lines of the following:

Test case for MyService logging event

```java
package it.jdev.example;

import static org.junit.Assert.*;

import org.apache.log4j.spi.LoggingEvent;
import org.junit.*;
import org.mockito.*;

public class MyServiceTest {

    @Mock
    private MyRepository myRepository;

    @InjectMocks
    private MyService myService = new MyService();

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void thatSomeExceptionIsLogged() throws Exception {
        TestAppender testAppender = new TestAppender();

        Mockito.doThrow(SomeException.class).when(myRepository).doSomethingVeryUseful();
        myService.doSomethingUseful();

        assertTrue(testAppender.getEvents().size() == 1);
        final LoggingEvent loggingEvent = testAppender.getEvents().get(0);
        assertEquals("Some very informative error logging.", loggingEvent.getMessage().toString());
    }
}
```

But how can you achieve this? As it turns out, it is very easy to add a new Appender to the Log4J RootLogger.

TestAppender for Log4J

```java
package it.jdev.example;

import java.util.*;

import org.apache.log4j.*;
import org.apache.log4j.spi.*;

/**
 * Utility for testing Log4j logging events.
 * <p>
 * Usage:<br />
 * <code>
 * TestAppender testAppender = new TestAppender();<br />
 * classUnderTest.methodThatWillLog();<br /><br />
 * LoggingEvent loggingEvent = testAppender.getEvents().get(0);<br /><br />
 * assertEquals()...<br /><br />
 * </code>
 */
public class TestAppender extends AppenderSkeleton {

    private final List<LoggingEvent> events = new ArrayList<LoggingEvent>();

    public TestAppender() {
        this(Level.ERROR);
    }

    public TestAppender(final Level level) {
        super();
        Logger.getRootLogger().addAppender(this);
        this.addFilter(new LogLevelFilter(level));
    }

    @Override
    protected void append(final LoggingEvent event) {
        events.add(event);
    }

    @Override
    public void close() {
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }

    public List<LoggingEvent> getEvents() {
        return events;
    }

    /**
     * Filter that decides whether to accept or deny a logging event based on
     * the logging level.
     */
    protected class LogLevelFilter extends Filter {

        private final Level level;

        public LogLevelFilter(final Level level) {
            super();
            this.level = level;
        }

        @Override
        public int decide(final LoggingEvent event) {
            if (event.getLevel().isGreaterOrEqual(level)) {
                return ACCEPT;
            } else {
                return DENY;
            }
        }
    }
}
```

Reference: Some more unit test tips from our JCG partner Wim van Haaren at the JDev blog.
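As an aside, the same capture-and-assert idea works without any external libraries if your code logs through java.util.logging. Here is a minimal JUL sketch (the class name is mine, not from the post) that registers a capturing Handler on a logger and filters on level, analogous to the Log4J TestAppender above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// JUL analog of the Log4J TestAppender: attaches itself to a logger and
// records every published LogRecord at or above the given level.
public class CapturingHandler extends Handler {

    private final List<LogRecord> records = new ArrayList<>();

    public CapturingHandler(Logger logger, Level level) {
        setLevel(level);          // acts as the level filter
        logger.addHandler(this);  // self-registration, like the TestAppender constructor
    }

    @Override
    public void publish(LogRecord record) {
        if (isLoggable(record)) {
            records.add(record);
        }
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
    }

    public List<LogRecord> getRecords() {
        return records;
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        CapturingHandler handler = new CapturingHandler(logger, Level.SEVERE);
        logger.severe("Some very informative error logging.");
        // prints "Some very informative error logging."
        System.out.println(handler.getRecords().get(0).getMessage());
    }
}
```

The assertion pattern in the test then stays exactly the same: log, fetch the captured record, and assert on its message.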

Custom JSR 303 Bean Validation constraints for the JSR 310 New Date/Time API

With JSR 310, Java 8 finally brought us a decent date and time API. For those of you that are still using Java 7 – like I am at my current project – there is an excellent backport available; see www.threeten.org for more details. However, I’m not going to go into any details about using the new API, since there are already a ton of blog posts out there about the topic. What I am going to show you in this post is how you can use the Date/Time API in conjunction with the JSR 303 Bean Validation API by writing your own custom annotations.

If you’re using both Bean Validation and the new Date/Time API, you’ll probably want to use them together. The API and an implementation like Hibernate Validator only provide a handful of constraints, e.g. @NotEmpty or @Pattern. However, as of yet there are no out-of-the-box constraints for JSR 310. Fortunately it is very easy to create your own constraints. As an example, I will demonstrate how you can write your own @Past annotation for validating java.time.LocalDate fields.

For testing purposes we’ll start off with a very simple class that holds a date and a dateTime. These fields are supposed to represent dates in the past; therefore they are annotated with the @Past annotation:

ClassWithPastDates

```java
package it.jdev.example.jsr310.validator;

import java.time.LocalDate;
import java.time.LocalDateTime;

public class ClassWithPastDates {

    @Past
    private LocalDate date;

    @Past
    private LocalDateTime dateTime;

    public LocalDate getDate() {
        return date;
    }

    public void setDate(LocalDate date) {
        this.date = date;
    }

    public LocalDateTime getDateTime() {
        return dateTime;
    }

    public void setDateTime(LocalDateTime dateTime) {
        this.dateTime = dateTime;
    }
}
```

Next, we’ll write a very basic unit test for the @Past constraint that demonstrates our intentions: obviously, besides dates that lie in the past, we’ll also want a null reference to be valid, but dates in the future to be invalid, and even today should count as invalid.

PastTest

```java
package it.jdev.example.jsr310.validator;

import static org.junit.Assert.assertEquals;

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

import org.junit.Before;
import org.junit.Test;

public class PastTest {

    private ClassWithPastDates classUnderTest;

    @Before
    public void setup() {
        classUnderTest = new ClassWithPastDates();
    }

    @Test
    public void thatNullIsValid() {
        Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest);
        assertEquals(violations.size(), 0);
    }

    @Test
    public void thatYesterdayIsValid() throws Exception {
        classUnderTest.setDate(LocalDate.now().minusDays(1));
        classUnderTest.setDateTime(LocalDateTime.now().minusDays(1));
        Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest);
        assertEquals(violations.size(), 0);
    }

    @Test
    public void thatTodayIsInvalid() throws Exception {
        classUnderTest.setDate(LocalDate.now());
        classUnderTest.setDateTime(LocalDateTime.now());
        Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest);
        assertEquals(violations.size(), 2);
    }

    @Test
    public void thatTomorrowIsInvalid() throws Exception {
        classUnderTest.setDate(LocalDate.now().plusDays(1));
        classUnderTest.setDateTime(LocalDateTime.now().plusDays(1));
        Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest);
        assertEquals(violations.size(), 2);
    }

    private Set<ConstraintViolation<ClassWithPastDates>> validateClass(ClassWithPastDates myClass) {
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        Validator validator = factory.getValidator();
        Set<ConstraintViolation<ClassWithPastDates>> violations = validator.validate(myClass);
        return violations;
    }
}
```

Now that we’ve got the basic test set up, we can implement the constraint itself. This consists of two steps: first we’ll have to write the annotation, and then we’ll have to implement a ConstraintValidator. To start with the annotation:

@interface Past

```java
package it.jdev.example.jsr310.validator;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.Payload;

@Target({ ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = PastValidator.class)
@Documented
public @interface Past {

    String message() default "it.jdev.example.jsr310.validator.Past.message";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};
}
```

As you can see, the @Past annotation is not very spectacular. The main thing to notice is the @Constraint annotation, where we specify which class will be used to perform the actual validation.

PastValidator

```java
package it.jdev.example.jsr310.validator;

import java.time.LocalDate;
import java.time.temporal.Temporal;

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class PastValidator implements ConstraintValidator<Past, Temporal> {

    @Override
    public void initialize(Past constraintAnnotation) {
    }

    @Override
    public boolean isValid(Temporal value, ConstraintValidatorContext context) {
        if (value == null) {
            return true;
        }
        LocalDate ld = LocalDate.from(value);
        if (ld.isBefore(LocalDate.now())) {
            return true;
        }
        return false;
    }
}
```

The PastValidator is where all the magic happens. By implementing the ConstraintValidator interface we’re obliged to provide two methods, but for our example only the isValid() method is of use; this is where we’ll perform the actual validation. Note that we’ve used java.time.temporal.Temporal as the type parameter because it is the interface that both the LocalDate and LocalDateTime classes have in common. This allows us to use the same @Past for both LocalDate and LocalDateTime fields. And that really is all there is to it. With this very basic example I’ve shown how easy it is to create your own custom JSR 303 Bean Validation constraint.

Reference: Custom JSR 303 Bean Validation constraints for the JSR 310 New Date/Time API from our JCG partner Wim van Haaren at the JDev blog.
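The validator’s date logic is also easy to exercise in isolation, without bootstrapping a Bean Validation provider. This little sketch (the class and method names are mine) extracts the isValid() rules into a plain static method so the null/yesterday/today/tomorrow cases can be checked directly:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.temporal.Temporal;

public class PastCheck {

    // The validator's core rule, extracted so it can be exercised without a
    // Bean Validation provider: null is valid, today and the future are not.
    static boolean isPast(Temporal value) {
        if (value == null) {
            return true;
        }
        return LocalDate.from(value).isBefore(LocalDate.now());
    }

    public static void main(String[] args) {
        System.out.println(isPast(null));                            // true
        System.out.println(isPast(LocalDate.now().minusDays(1)));    // true
        System.out.println(isPast(LocalDate.now()));                 // false
        System.out.println(isPast(LocalDateTime.now().plusDays(1))); // false
    }
}
```

Because the method takes a Temporal, it accepts both LocalDate and LocalDateTime, just like the constraint itself.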

2 Ways of Passing Properties / Parameters in Java EE 7 Batch

When it comes to the Java EE 7 batch processing facility, there are two ways of passing properties / parameters to the chunks and batchlets. This quick guide shows you both, and you are likely to use them very frequently when developing batch processing the Java EE 7 way.

1. Pre-defined properties / parameters before runtime

Pre-defined properties are properties (name-value pairs) which you define before deploying the application. In other words, they are fixed and static, never dynamic, and the values will always stay the same when you retrieve them. This is done through the job descriptor XML file, which resides in e.g. META-INF/batch-jobs/demo-job.xml. For example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<job id="demoJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <properties>
        <property name="staticParamName1" value="staticParamValue1" />
        <property name="staticParamName2" value="staticParamValue2" />
    </properties>

    <!-- Then, the rest of the steps definition -->
</job>
```

All it takes is to place each pre-defined property within the <properties /> tag. After the app is deployed, these properties will be made available at runtime to the ItemReader, ItemProcessor, ItemWriter and Batchlet objects defined in the XML file. Here’s an example of how to retrieve the pre-defined properties / parameters during runtime:

```java
@Dependent
@Named("DemoReader")
public class DemoReader extends AbstractItemReader {

    @Inject
    private JobContext jobCtx;

    @Override
    public void open(Serializable ckpt) throws Exception {
        // Retrieve the value of staticParamName1 defined in the job descriptor XML
        String staticParamValue1 = jobCtx.getProperties().getProperty("staticParamName1");

        // The rest of the implementation
    }

    // The rest of the overridden methods
}
```

The downside of this is that the properties’ values will always stay the same throughout the run. If you need to pass a dynamic value to the batch step objects, read on…

2. Passing properties / parameters dynamically during runtime

There are situations when dynamic property / parameter values are desired during a batch run. To do this, first the properties / parameters have to be defined and passed to the batch job by the job operator. For example, I have a job operator (a singleton EJB) which will start the batch job through the method runBatchJob(), with two dynamic properties / parameters to be passed to the batch job objects:

```java
@Singleton
public class BatchJobOperator implements Serializable {

    public void runBatchJob() {
        Properties runtimeParameters = new Properties();
        runtimeParameters.setProperty("dynamicPropertyName1", "dynamicPropertyValue1");
        runtimeParameters.setProperty("dynamicPropertyName2", "dynamicPropertyValue2");

        JobOperator jo = BatchRuntime.getJobOperator();

        // Run the batch job with the runtimeParameters passed
        jo.start("name-of-job-xml-file-without-dot-xml", runtimeParameters);
    }
}
```

Once the application server has the job running, the objects involved in the job (ItemReader, ItemProcessor, ItemWriter and Batchlet) can retrieve the properties set in runtimeParameters, but in a different way. Here’s how to do it in an ItemReader (the same goes for the rest of the batch job step objects):

```java
@Dependent
@Named("DemoReader")
public class DemoReader extends AbstractItemReader {

    @Inject
    private JobContext jobCtx;

    @Override
    public void open(Serializable ckpt) throws Exception {
        // Here's how to retrieve dynamic runtime properties / parameters
        Properties runtimeParams = BatchRuntime.getJobOperator().getParameters(jobCtx.getExecutionId());
        String dynamicPropertyValue1 = runtimeParams.getProperty("dynamicPropertyName1");
        String dynamicPropertyValue2 = runtimeParams.getProperty("dynamicPropertyName2");

        // The rest of the implementation
    }

    // The rest of the overridden methods
}
```

Notice the difference: instead of getting the properties from the JobContext, the dynamically defined runtime properties have to be obtained from the BatchRuntime’s JobOperator, by passing the job context’s execution ID. Hope this is useful.

Reference: 2 Ways of Passing Properties / Parameters in Java EE 7 Batch from our JCG partner Max Lam at the A Developer’s Scrappad blog.
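Since a step may want a runtime parameter when one was passed and the static job-descriptor value otherwise, the two lookups are often combined behind a small helper. The sketch below is hypothetical (the resolve() helper is mine, and plain java.util.Properties stand in for JobContext.getProperties() and JobOperator.getParameters()), but it shows the precedence rule in isolation:

```java
import java.util.Properties;

public class BatchParams {

    // Hypothetical helper: prefer a dynamically passed runtime parameter,
    // fall back to the static job-descriptor property when it is absent.
    static String resolve(String name, Properties runtime, Properties jobXml) {
        String value = runtime.getProperty(name);
        return value != null ? value : jobXml.getProperty(name);
    }

    public static void main(String[] args) {
        Properties jobXml = new Properties();
        jobXml.setProperty("batchSize", "10");   // as if from META-INF/batch-jobs/demo-job.xml

        Properties runtime = new Properties();
        runtime.setProperty("batchSize", "50");  // as if passed to JobOperator.start()

        System.out.println(resolve("batchSize", runtime, jobXml)); // 50
        System.out.println(resolve("missing", runtime, jobXml));   // null
    }
}
```

Inside a real step, runtime would come from BatchRuntime.getJobOperator().getParameters(jobCtx.getExecutionId()) and jobXml from jobCtx.getProperties().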

Using rlimit (And Why You Should)

I’ve been going through some old notes and came across a reminder of setrlimit(2). This is a C system call that allows an application to specify resource limitations on a number of important parameters:

- RLIMIT_AS – The maximum size of the process’s virtual memory (address space) in bytes.
- RLIMIT_CORE – Maximum size of a core file.
- RLIMIT_CPU – CPU time limit in seconds.
- RLIMIT_DATA – The maximum size of the process’s data segment (initialized data, uninitialized data, and heap).
- RLIMIT_FSIZE – The maximum size of files that the process may create.
- RLIMIT_MEMLOCK – The maximum number of bytes of memory that may be locked into RAM.
- RLIMIT_MSGQUEUE – Specifies the limit on the number of bytes that can be allocated for POSIX message queues for the real user ID of the calling process.
- RLIMIT_NICE – Specifies a ceiling to which the process’s nice value can be raised using setpriority(2) or nice(2).
- RLIMIT_NOFILE – Specifies a value one greater than the maximum file descriptor number that can be opened by this process.
- RLIMIT_NPROC – The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process.
- RLIMIT_RSS – Specifies the limit (in pages) of the process’s resident set (the number of virtual pages resident in RAM).
- RLIMIT_RTPRIO – Specifies a ceiling on the real-time priority that may be set for this process using sched_setscheduler(2) and sched_setparam(2).
- RLIMIT_RTTIME – Specifies a limit (in microseconds) on the amount of CPU time that a process scheduled under a real-time scheduling policy may consume without making a blocking system call.
- RLIMIT_SIGPENDING – Specifies the limit on the number of signals that may be queued for the real user ID of the calling process.
- RLIMIT_STACK – The maximum size of the process stack, in bytes.

The limits for all programs are specified in configuration files (/etc/security/limits.conf and /etc/security/limits.d), or can be set in an individual shell and its processes via the ‘ulimit’ shell builtin. Under Linux the current resource limits for a process are visible at /proc/[pid]/limits. The limits can also be set programmatically, via setrlimit(2). Any process can give itself more restrictive limits. Any privileged process (running as root or with the correct capability) can give itself more permissive limits. I believe most systems default to unlimited or very high limits, and it is the responsibility of the application to specify tighter limits. Better-secured systems will do the reverse – they’ll have much tighter restrictions and use a privileged loader to grant more resources to specific programs.

Why do we care? Security in depth. First, people make mistakes. Setting reasonable limits keeps a runaway process from taking down the system. Second, attackers will take advantage of any opportunity they can find. A buffer overflow isn’t an abstract concern – they are real and often allow an attacker to execute arbitrary code. Reasonable limits may be enough to sharply curtail the damage caused by an exploit. Here are some concrete examples:

First, setting RLIMIT_NPROC to zero means that the process cannot fork/exec a new process – an attacker cannot execute arbitrary code as the current user. (Note: the man page suggests this may limit the total number of processes for the user, not just this process and its children. This should be double-checked.) It also prevents a more subtle attack where a process is repeatedly forked until a desired PID is acquired. PIDs should be unique, but apparently some kernels now support a larger PID space than the traditional pid_t. That means legacy system calls may be ambiguous.
Second, setting RLIMIT_AS, RLIMIT_DATA, and RLIMIT_MEMLOCK to reasonable values prevents a process from forcing the system to thrash by limiting available memory.

Third, setting RLIMIT_CORE to a reasonable value (or disabling core dumps entirely) has historically been used to prevent denial-of-service attacks that fill the disk with core dumps. Today core dumps are often disabled to ensure sensitive information such as encryption keys is not inadvertently written to disk where an attacker can later retrieve it. Sensitive information should also be protected with mlock(2) to prevent it from being written to the swap disk.

What about Java? Does this impact Java? Yes. The standard classloader maintains an open file handle for every loaded class. This can be thousands of open file handles for application servers. I’ve seen real-world failures that were ultimately tracked down to hitting the RLIMIT_NOFILE limit. There are three solutions. The first is to increase the number of permitted open files for everyone via the limits.conf file. This is undesirable – we want applications and users to have enough resources to do their job, but not much more. The second is to increase the number of permitted open files for just the developers and application servers. This is better than the first option but can still let a rogue process cause a lot of damage. The third is to write a simple launcher app that sets a higher limit before doing an exec() to launch the application server or developer’s IDE. This ensures that only the authorized applications get the additional resources. (Java’s SecurityManager can also be used to limit resource usage, but that’s beyond the scope of this discussion.)

Sample code

Finally, some sample code from the prlimit man page. The setrlimit version is similar.
```c
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> <new-hard-limit>]\n",
                argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set CPU time limit of target process; retrieve and display
       previous limit */
    if (prlimit(pid, RLIMIT_CPU, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Retrieve and display new CPU time limit */
    if (prlimit(pid, RLIMIT_CPU, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}
```

Usage in practice

It should not be hard to write a function that sets limitations as part of the program startup, perhaps as the final step in program initialization but before reading anything provided by the user. In many cases we can just take the existing resource usage and add just enough to cover what we’ll need to support the user’s request, e.g. perhaps two additional file handles, one for input and one for output. In other cases it’s harder to identify good limits, but there are three approaches. The first is to focus on what’s critical. E.g., many applications know that they should never launch a subprocess, so RLIMIT_NPROC can be set to zero. (Again, after verifying that this is the limit of processes under the current process, not all processes for the user.)
They know that they should never need to open more than a handful of additional files, so RLIMIT_NOFILE can be set to allow a few more open files but no more. Even these modest restrictions can go a long way towards limiting damage. The second is to simply pick some large value that you are sure will be adequate for limits on memory or processor usage. Maybe 100 MB is an order of magnitude too large – but it's an order of magnitude smaller than it was before. This approach can be especially useful for subprocesses in a boss/worker architecture, where the amount of resources required by any individual worker can be well estimated. The final approach requires more work but will give you the best numbers. During development you'll add a little bit of additional scaffolding:

- Run the program as setuid root but immediately change the effective user to an unprivileged user.
- Set a high hard limit and a low soft limit.
- Check whether the soft limit is hit on every system call. (You should already be checking for errors.)
- On soft limit hits, change the effective user to root, bump the soft limit, restore the original effective user, and retry the operation.
- Log it every time you must bump the soft limit.
- Variant – have an external process poll the /proc/[pid]/limits file.

With good functional and acceptance tests you should have a solid idea about the resources required by the program. You'll still want to be generous with the final resource limits, but this should give you a good 'order of magnitude' estimate for what you need, e.g., 10 MB vs 2 GB. On a final note: disk quotas We've been discussing resource limitations on an individual process, but sometimes the problem is resource exhaustion over time – specifically disk usage. An application could inadvertently cause a denial of service attack by filling the disk. There's an easy solution to this – enabling disk quotas.
We normally think of disk quotas as being used to make sure users on a multi-user system play well together but they can also be used as a security measure to constrain compromised servers.Reference: Using rlimit (And Why You Should) from our JCG partner Bear Giles at the Invariant Properties blog....

Customizing HttpMessageConverters with Spring Boot and Spring MVC

Exposing a REST based endpoint for a Spring Boot application, or for that matter a straight Spring MVC application, is straightforward. The following is a controller exposing an endpoint to create an entity based on the content POST'ed to it: @RestController @RequestMapping("/rest/hotels") public class RestHotelController { .... @RequestMapping(method=RequestMethod.POST) public Hotel create(@RequestBody @Valid Hotel hotel) { return this.hotelRepository.save(hotel); } } Internally Spring MVC uses a component called an HttpMessageConverter to convert the Http request to an object representation and back. A set of default converters is automatically registered, which supports a whole range of different resource representation formats – json and xml, for instance. Now, if there is a need to customize the message converters in some way, Spring Boot makes it simple. As an example, consider if the POST method in the sample above needs to be a little more flexible and should ignore properties which are not present in the Hotel entity. Typically this can be done by configuring the Jackson ObjectMapper; all that needs to be done with Spring Boot is to create a new HttpMessageConverter bean, and that would end up overriding all the default message converters, this way: @Bean public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() { MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter(); ObjectMapper objectMapper = new ObjectMapper(); objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); jsonConverter.setObjectMapper(objectMapper); return jsonConverter; } This works well for a Spring Boot application; however, for straight Spring MVC applications which do not make use of Spring Boot, configuring a custom converter is a little more complicated – the default converters are not registered by default, and an end user has to be explicit about registering the defaults – the following is
the relevant code for Spring 4 based applications: @Configuration public class WebConfig extends WebMvcConfigurationSupport {@Bean public MappingJackson2HttpMessageConverter customJackson2HttpMessageConverter() { MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter(); ObjectMapper objectMapper = new ObjectMapper(); objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); jsonConverter.setObjectMapper(objectMapper); return jsonConverter; } @Override public void configureMessageConverters(List<HttpMessageConverter<?>> converters) { converters.add(customJackson2HttpMessageConverter()); super.addDefaultHttpMessageConverters(converters); } } Here WebMvcConfigurationSupport provides a way to more finely tune the MVC tier configuration of a Spring based application. In the configureMessageConverters method, the custom converter is registered, and then an explicit call is made to ensure that the defaults are registered also. A little more work than for a Spring Boot based application.Reference: Customizing HttpMessageConverters with Spring Boot and Spring MVC from our JCG partner Biju Kunjummen at the all and sundry blog....

Using Infinispan as a persistency solution

Cross-posted from https://vaadin.com/blog/-/blogs/using-infinispan-as-a-persistency-solution. Thanks Fredrik and Matti for your permission! Various RDBMSs are the de-facto standard for persistency. Using them is such a safe bet by architects that I dare say they are used in too many places nowadays. To fight against this, I have recently been exploring alternative persistency options, like graph databases. This time I played with Infinispan. In case you are not familiar with Infinispan, or distributed key/value data stores in general, you could think of it as a HashMap on steroids. Most essentially, the map is shared among all your cluster nodes. With clustering you can gain huge capacity, blazing fast access and redundancy, depending on how you configure it. There are several products that compete with Infinispan, like Ehcache and Hazelcast from the open-source world and Oracle Coherence from the commercial side. Actually, Infinispan is a technology that you might have used without noticing it at all. For example, the high availability features of Wildfly rely heavily on Infinispan caches. It is also often used as a second-level cache for ORM libraries. But it can also be used directly as a persistency library as such. Why would you consider it as your persistency solution?

- It is a lightning-fast in-memory data storage
- The stored value can be any serializable object, no complex mapping libraries needed
- It is built from the ground up for a clustered environment – your data is safer and faster to access
- It is very easy to scale horizontally
- It has multiple optional cache store alternatives, for writing the state to e.g. disk for cluster-wide reboots
- Not all data needs to be stored forever; Infinispan has sophisticated built-in evict rules
- Possibility to use transactional access for ACID changes

Sounds pretty amazing, doesn't it? And it sure is for certain use cases, but all technologies have their weaknesses, and so do key/value data stores.
When comparing to RDBMSs, the largest drawback is with relations to other entities. You'll have to come up with a strategy for how to store references to other entities, and searching based on related features must also be tackled. If you end up pondering these questions, be sure to check whether Hibernate OGM could help you. Also, doing some analysis on the data can be considered simpler, or at least more familiar, with traditional SQL queries. Especially if you end up having a lot of data, distributed on multiple nodes, you'll have to learn the basics of the MapReduce programming model to do any non-trivial queries.

Using Infinispan in a web application Although Infinispan is not tied to Wildfly, I decided to base my experiments on Wildfly. Its built-in version is available to web applications, if you explicitly request it. The easiest method to do this is to add the following MANIFEST.MF entry to your war file. If you don't want to spoil your project with obsolete files, just add it using a small war plugin config.

Dependencies: org.infinispan export

Naturally you'll still want to add an Infinispan dependency to your application, but you can leave its scope as provided. Be sure to use the same version provided by your server; in Wildfly 8, the Infinispan version is 6.0.2. In a Maven project, add this kind of dependency declaration:

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>6.0.2.Final</version>
    <!-- Provided as we use the Infinispan provided by Wildfly -->
    <scope>provided</scope>
</dependency>

Before accessing Infinispan "caches", you need to configure them. There are both programmatic and xml configurations available. With Wildfly, it is most natural to configure the Infinispan data store right in the server config. The "right" config file depends on how you are launching your Wildfly server.
If you are testing clustering locally, you probably want to add something like this into your domain.xml, under the <subsystem xmlns="urn:jboss:domain:infinispan:2.0"> section:

<cache-container name="myCache" default-cache="cachedb">
    <transport lock-timeout="60000"/>
    <replicated-cache name="cachedb" batching="true" mode="SYNC"/>
</cache-container>

Note that with this config, the data is only stored within the memory of the cluster nodes. To learn how to tweak cache settings or to set up disk "backup", refer to the extensive Infinispan documentation. To remove all Infinispan references from the UI code, I created an EJB that does all the data access. There I inject the CacheContainer provided by Wildfly and fetch the default cache in an init method:

@Resource(lookup = "java:jboss/infinispan/container/myCache")
CacheContainer cc;

Map<String, MyEntity> cache;

@PostConstruct
void init() {
    this.cache = cc.getCache();
}

I guess you are already wondering: yes, the Map is the very familiar java.util.Map interface, and the rest of the implementation is trivial to any Java developer. Infinispan caches extend the basic Map interface, but in case you need some more advanced features, you can also use the Cache or AdvancedCache types. The MyEntity in the previous code snippet is just a very simple POJO I created for the example. With Vaadin CDI usage, I can then inject the EJB into my UI class and do pretty much anything with it. The actual Vaadin code has no special tricks, just normal CDI-spiced Vaadin code. Based on this exercise, would I use Infinispan directly for persistency in my next project? Probably not, but for certain apps, without hesitation. I can also imagine certain hybrid models where some of the data is only in an Infinispan cache and some in a traditional RDBMS, naturally behind an ORM, taking the best of both worlds. We'll also be using Infinispan in our upcoming joint webinar with Arun Gupta from RedHat on September 8th, 2014.
There we’ll show you a simple Vaadin application and how easy it can be to cluster it using Wildfly.Reference: Using Infinispan as a persistency solution from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
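Because the injected cache is exposed through the plain java.util.Map interface, the data-access code needs nothing Infinispan-specific. A minimal stand-alone sketch of the same idea – a HashMap stands in for the container-provided cache, and this trimmed-down MyEntity is my own, since the real ones require a running Wildfly:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the simple POJO the article stores in the cache.
class MyEntity {
    final String name;
    MyEntity(String name) { this.name = name; }
}

public class CacheDemo {
    // In the real EJB this Map comes from cc.getCache() on the injected
    // CacheContainer; a HashMap has the same Map contract for put/get.
    static Map<String, MyEntity> cache = new HashMap<>();

    public static void main(String[] args) {
        cache.put("42", new MyEntity("Fredrik"));
        MyEntity e = cache.get("42");
        System.out.println(e.name); // prints Fredrik
    }
}
```

Swapping the HashMap for the injected Infinispan cache changes none of this calling code – that is the appeal of the Map-based API.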

Using Gradle to Build & Apply AST Transformations

Recently, I wanted to both build and apply local ast transformations in a Gradle project. While I could find several examples of how to write transformations, I couldn't find a complete example showing the full build process. A transformation has to be compiled separately and then put on the classpath, so its source can't simply sit in the rest of the Groovy source tree. This is the detail that tripped me up for a while. I initially set up a separate GroovyCompile task to process the annotation before the rest of the source (stemming from a helpful suggestion from Peter Niederwieser on the Gradle forums). While this worked, a much simpler solution for getting transformations to apply is to set up a multi-project build. The main project depends on a sub-project with the ast transformation source files. Here's a minimal example's directory structure:

ast/build.gradle – ast build file
ast/src/main/groovy/com/cholick/ast/Marker.groovy – marker interface
ast/src/main/groovy/com/cholick/ast/Transform.groovy – ast transformation
build.gradle – main build file
settings.gradle – project hierarchy configuration
src/main/groovy/com/cholick/main/Main.groovy – source to transform

For the full working source (with simple tests and no * imports), clone https://github.com/cholick/gradle_ast_example The root build.gradle file contains a dependency on the ast project:

dependencies {
    ...
    compile(project(':ast'))
}

The root settings.gradle defines the ast sub-project:

include 'ast'

The base project also has src/main/groovy/com/cholick/main/Main.groovy, with the source file to transform. In this example, the ast transformation I've written puts a method named 'added' onto the class.
package com.cholick.main

import com.cholick.ast.Marker

@Marker
class Main {
    static void main(String[] args) {
        new Main().run()
    }

    def run() {
        println 'Running main'
        assert this.class.declaredMethods.find { it.name == 'added' }
        added()
    }
}

In the ast sub-project, ast/src/main/groovy/com/cholick/ast/Marker.groovy defines an interface to mark classes for the ast transformation:

package com.cholick.ast

import org.codehaus.groovy.transform.GroovyASTTransformationClass

import java.lang.annotation.*

@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.TYPE])
@GroovyASTTransformationClass(['com.cholick.ast.Transform'])
public @interface Marker {}

Finally, the ast transformation class processes source classes and adds a method:

package com.cholick.ast

import org.codehaus.groovy.ast.*
import org.codehaus.groovy.ast.builder.AstBuilder
import org.codehaus.groovy.control.*
import org.codehaus.groovy.transform.*

@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class Transform implements ASTTransformation {
    void visit(ASTNode[] astNodes, SourceUnit sourceUnit) {
        if (!astNodes) return
        if (!astNodes[0]) return
        if (!astNodes[1]) return
        if (!(astNodes[0] instanceof AnnotationNode)) return
        if (astNodes[0].classNode?.name != Marker.class.name) return

        ClassNode annotatedClass = (ClassNode) astNodes[1]
        MethodNode newMethod = makeMethod(annotatedClass)
        annotatedClass.addMethod(newMethod)
    }

    MethodNode makeMethod(ClassNode source) {
        def ast = new AstBuilder().buildFromString(CompilePhase.INSTRUCTION_SELECTION, false,
            "def added() { println 'Added' }"
        )
        return (MethodNode) ast[1].methods.find { it.name == 'added' }
    }
}

Thanks Hamlet D'Arcy for a great ast transformation example and Peter Niederwieser for answering my question on the forums.Reference: Using Gradle to Build & Apply AST Transformations from our JCG partner Matt Cholick at the Cholick.com blog....
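One file the post lists but never shows is the ast sub-project's own build file. Under the layout above it can be very small; this is a sketch of what I'd expect it to contain, not code from the post (the exact plugin and dependency notation assumes the Gradle 1.x/2.x conventions of the era):

```groovy
// ast/build.gradle – compile the transformation so the root project
// can put it on the compile classpath via compile(project(':ast'))
apply plugin: 'groovy'

dependencies {
    // the Groovy distribution bundled with Gradle is enough to
    // compile an ASTTransformation against org.codehaus.groovy
    compile localGroovy()
}
```

Because the root project declares `compile(project(':ast'))`, Gradle builds this sub-project first, which is exactly the "compiled separately, then on the classpath" requirement described above.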

An Inconvenient Latency

Overview Vendors typically publish numbers they are happy with, and avoid telling you about a product's weaknesses. However, behind the numbers is a dirty secret, if you know where to look.

Why don't we use GPUs for everything? Finding problems which naturally scale to thousands of data points/tasks is easy in some domains and very hard in others. GPUs are designed for computing large vector operations. However, what is "large" and why does it matter? Say you have a GPU with 1024 cores. This means it can process a vector of length 1024 all at once, and one of length 2048 in double the time. But what happens if we only have a vector of 100, or 10, or 1? The inconvenient answer is that it takes the same amount of time, because you can't make use of all of your cores at once. You get only 10%, 1% or just 0.1% of the peak performance. If you want the best efficiency, you want a problem which has many thousands of values which can be processed concurrently. If you don't have a problem which is so concurrent, you are not going to get maximum performance. Unlike a screen full of pixels, a lot of business logic deals with a small number of values at once, so you want a small number of fast cores which can perform independent tasks at once.

Why web servers favour horizontal scalability Web services scale well up to one task per active user. To use multiple cores, you need a problem which naturally breaks into many independent tasks. When you provide a web service, you have many users and the work for each user is largely independent. If you have ten concurrent users, you can expect close to ten times the throughput of having one user at a time. If you have one hundred users, you can get up to ten times the throughput of ten users, etc. The limit of your concurrency is around the number of active/concurrent users you have.

Throughput, latency and concurrency When a vendor benchmarks their product, a common benchmark is throughput.
This is the total number of operations per second over a significant period of time – ideally many minutes. Vendors are increasingly publishing average latency benchmarks. Average latency is a fairly optimistic number, as it is good at hiding small numbers of particularly bad latencies. For example, if you want to hide long GC pauses, use average latency. The problem for vendors is that these two measures can be used in combination to determine the minimum concurrency required to achieve the claimed throughput. A given problem has a "natural concurrency" it can easily be broken into. If you have 100 concurrent users, you may have a natural concurrency of about 100. There is also a "minimum concurrency" implied by a benchmark:

minimum-concurrency = throughput * latency

To achieve the throughput a benchmark suggests, your natural concurrency should be greater than the minimum concurrency in the benchmark. If you have less natural concurrency, you can expect only pro-rata throughput. Consider these benchmarks for three key-value stores:

key-store    throughput     latency
Product 1    1,000,000/s    46 ms (0.046 s)
Product 2      600,000/s    10 ms (0.01 s)
Product 3    2,000,000/s    0.002 ms (0.000002 s)

You might look at this table and say they all look good: they all have high throughputs and low enough latencies. The problem is that there is an implied natural-concurrency requirement to achieve the throughput stated at the latency measured. Let's see the minimum concurrency needed to achieve that throughput:

key-store    throughput     latency                  minimum concurrency
Product 1    1,000,000/s    46 ms (0.046 s)          46,000
Product 2      600,000/s    10 ms (0.01 s)            6,000
Product 3    2,000,000/s    0.002 ms (0.000002 s)         4

Many problems have around 4 independent tasks, but 46K is pretty high. So what?
What if you only have 10 concurrent tasks/users?

key-store    concurrency    latency                  throughput achieved
Product 1    10             46 ms (0.046 s)                220/s
Product 2    10             10 ms (0.01 s)               1,000/s
Product 3    10             0.002 ms (0.000002 s)    2,000,000/s

Note: having more concurrency than you need doesn't help throughput much, but a lack of natural concurrency in your problem will hurt your throughput (and horizontal scalability).

Conclusion Next time you read a benchmark which includes throughput and average latency, multiply them together to see what level of concurrency would be required to achieve that throughput, and compare this with the natural concurrency of the problem you are trying to solve to see if the solution fits your problem. If you have more natural concurrency, you have more solutions you can consider; if you have a low natural concurrency, you need a solution with a low latency.Reference: An Inconvenient Latency from our JCG partner Peter Lawrey at the Vanilla Java blog....
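The minimum-concurrency arithmetic behind the tables above is just throughput times latency; a few lines are enough to check the numbers for yourself (the class and method names here are mine, not from the article):

```java
public class ConcurrencyCheck {
    // minimum concurrency implied by a benchmark = throughput (ops/s) * latency (s),
    // rounded since the inputs are round benchmark figures
    static long minConcurrency(double throughputPerSec, double latencySec) {
        return Math.round(throughputPerSec * latencySec);
    }

    public static void main(String[] args) {
        System.out.println(minConcurrency(1_000_000, 0.046));    // Product 1: 46000
        System.out.println(minConcurrency(600_000, 0.01));       // Product 2: 6000
        System.out.println(minConcurrency(2_000_000, 0.000002)); // Product 3: 4
    }
}
```

The same formula rearranged gives the third table: with only 10 concurrent tasks, achievable throughput is concurrency / latency, e.g. 10 / 0.046 s ≈ 220/s for Product 1.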

R: Calculating rolling or moving averages

I've been playing around with some time series data in R, and since there's a bit of variation between consecutive points I wanted to smooth the data out by calculating the moving average. I struggled to find a built-in function to do this, but came across Didier Ruedin's blog post which describes the following function to do the job: mav <- function(x,n=5){filter(x,rep(1/n,n), sides=2)} I tried plugging in some numbers to understand how it works: > mav(c(4,5,4,6), 3) Time Series: Start = 1 End = 4 Frequency = 1 [1] NA 4.333333 5.000000 NA Here I was trying to do a rolling average which took into account the last 3 numbers, so I expected to get just two numbers back – 4.333333 and 5 – and if there were going to be NA values I thought they'd be at the beginning of the sequence. In fact, it turns out this is what the 'sides' parameter controls: sides for convolution filters only. If sides = 1 the filter coefficients are for past values only; if sides = 2 they are centred around lag 0. In this case the length of the filter should be odd, but if it is even, more of the filter is forward in time than backward. So in our 'mav' function the rolling average looks at both sides of the current value rather than just at past values.
We can tweak that to get the behaviour we want:

mav <- function(x,n=5){filter(x,rep(1/n,n), sides=1)}

> mav(c(4,5,4,6), 3)
Time Series:
Start = 1
End = 4
Frequency = 1
[1]       NA       NA 4.333333 5.000000

The NA values are annoying for any plotting we want to do, so let's get rid of them:

> na.omit(mav(c(4,5,4,6), 3))
Time Series:
Start = 3
End = 4
Frequency = 1
[1] 4.333333 5.000000

Having got to this point, I noticed that Didier had referenced the zoo package in the comments, and it has a built-in function to take care of all this:

> library(zoo)
> rollmean(c(4,5,4,6), 3)
[1] 4.333333 5.000000

I also realised I can list all the functions in a package with the 'ls' function, so I'll be scanning zoo's list of functions next time I need to do something time-series related – there'll probably already be a function for it! > ls("package:zoo") [1] "as.Date" "as.Date.numeric" "as.Date.ts" [4] "as.Date.yearmon" "as.Date.yearqtr" "as.yearmon" [7] "as.yearmon.default" "as.yearqtr" "as.yearqtr.default" [10] "as.zoo" "as.zoo.default" "as.zooreg" [13] "as.zooreg.default" "autoplot.zoo" "cbind.zoo" [16] "coredata" "coredata.default" "coredata<-" [19] "facet_free" "format.yearqtr" "fortify.zoo" [22] "frequency<-" "ifelse.zoo" "index" [25] "index<-" "index2char" "is.regular" [28] "is.zoo" "make.par.list" "MATCH" [31] "MATCH.default" "MATCH.times" "median.zoo" [34] "merge.zoo" "na.aggregate" "na.aggregate.default" [37] "na.approx" "na.approx.default" "na.fill" [40] "na.fill.default" "na.locf" "na.locf.default" [43] "na.spline" "na.spline.default" "na.StructTS" [46] "na.trim" "na.trim.default" "na.trim.ts" [49] "ORDER" "ORDER.default" "panel.lines.its" [52] "panel.lines.tis" "panel.lines.ts" "panel.lines.zoo" [55] "panel.plot.custom" "panel.plot.default" "panel.points.its" [58] "panel.points.tis" "panel.points.ts" "panel.points.zoo" [61] "panel.polygon.its" "panel.polygon.tis" "panel.polygon.ts" [64] "panel.polygon.zoo" "panel.rect.its" "panel.rect.tis" [67] "panel.rect.ts" "panel.rect.zoo"
"panel.segments.its" [70] "panel.segments.tis" "panel.segments.ts" "panel.segments.zoo" [73] "panel.text.its" "panel.text.tis" "panel.text.ts" [76] "panel.text.zoo" "plot.zoo" "quantile.zoo" [79] "rbind.zoo" "read.zoo" "rev.zoo" [82] "rollapply" "rollapplyr" "rollmax" [85] "rollmax.default" "rollmaxr" "rollmean" [88] "rollmean.default" "rollmeanr" "rollmedian" [91] "rollmedian.default" "rollmedianr" "rollsum" [94] "rollsum.default" "rollsumr" "scale_x_yearmon" [97] "scale_x_yearqtr" "scale_y_yearmon" "scale_y_yearqtr" [100] "Sys.yearmon" "Sys.yearqtr" "time<-" [103] "write.zoo" "xblocks" "xblocks.default" [106] "xtfrm.zoo" "yearmon" "yearmon_trans" [109] "yearqtr" "yearqtr_trans" "zoo" [112] "zooreg" Be Sociable, Share!Reference: R: Calculating rolling or moving averages from our JCG partner Mark Needham at the Mark Needham Blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.