
Statement driven development

Today most methodologies use a model-oriented approach. Whether domain-driven or reverse-engineering, the common point is that they start from a static definition of a model. For domain-driven design, it is the domain expressed in the development language itself. Reverse-engineering starts from a static model structure definition such as an XSD, a WSDL or a database schema.

Statement driven development (SDD) focuses on the actions the developer can run against some models, not on the models themselves. Example: a select statement fetching all the group members that a user with a group-administrator role can manage. Here the statement can be materialized into a search query implemented in SQL. What matters to the developer is the query and its I/O, not really the underlying model and not the decorative technology.

Why Statement-Driven-Development

A couple of articles and limitations make me think that a statement-driven approach can be a flexible and fast alternative for developers:

Second step. Usually, after defining an abstract model such as an ORM mapping, the user still has to write his use-case statements. This means that there is still a second step to perform.
Overkill. The model is sometimes/often overkill. As a developer, you should not have to apprehend the entire model complexity before becoming productive.
Limitations of standards. As in the jOOQ article about functions and stored procedures, extracting the meta-information can reach its limits when entering vendor-specific particularities.
Native DSL. There is a trend that states that SQL is the only DSL you need. Why wrap it in another technology abstraction that limits its power?
Bye-bye domain objects, welcome DTOs. I/O are by essence DTOs (why should they be the exact reflection of a persistence layer?). Domain objects mirroring the persistence layer are just one specific case, albeit one widely used by many apps/frameworks (strong reusability, but with limitations).

Remark: this article is not here to cover the everlasting DO vs. DTO debate. SDD just brings a new approach that does not exclude but complements the DDD/reverse-engineering one.

Concretely, MinuteProject release 0.8.1+ (mid-May 2012) will offer Statement-Driven-Development facilities. The user focuses on one SQL statement; the output is deduced by executing it, and the input is easily configurable.

Example

Here is a simple configuration. statement-model is the new node:

<model>
 ...
 <statement-model>
  <queries>
   <query name="get address street">
    <query-body><value><![CDATA[select * from address where addressid > ?]]></value></query-body>
    <query-params>
     <query-param name="identifier_Address" is-mandatory="false" type="INT" sample="1"></query-param>
    </query-params>
   </query>
  </queries>
 </statement-model>
</model>

This configuration should be enough to get:

an input bean
an output bean
an output list bean
all the technology decoration, e.g. for REST with CXF: a resource bean with REST path and Spring config
a native DAO

A demonstration will be shipped in the next MinuteProject release (0.8.1). Integrating with REST is pretty much statement-driven: basically you just need to know the URL plus its I/O.

Conclusion

With SDD you focus on the statement and the I/O. MinuteProject simplifies the rest (the technology wrapper).

Reference: Statement driven development from our JCG partner Florian Adler at the minuteproject blog....
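To make the idea above more concrete, here is a rough sketch of the kind of input bean, output bean and native DAO such a statement could be turned into. The class and method names are hypothetical illustrations of the approach, not MinuteProject's actual generated code.

// Hypothetical illustration of the artifacts a statement could be turned into.
// Names and structure are assumptions, not MinuteProject's generated output.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Input bean: one field per query parameter
class GetAddressStreetInput {
    private Integer identifierAddress;          // maps to the single "?" placeholder
    public Integer getIdentifierAddress() { return identifierAddress; }
    public void setIdentifierAddress(Integer id) { this.identifierAddress = id; }
}

// Output bean: one field per selected column (only two shown here)
class GetAddressStreetOutput {
    private int addressId;
    private String street;
    public int getAddressId() { return addressId; }
    public void setAddressId(int addressId) { this.addressId = addressId; }
    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }
}

// Native DAO: runs the statement as-is and maps the ResultSet to output beans
class GetAddressStreetDao {
    private final DataSource dataSource;
    GetAddressStreetDao(DataSource dataSource) { this.dataSource = dataSource; }

    List<GetAddressStreetOutput> execute(GetAddressStreetInput input) throws Exception {
        String sql = "select * from address where addressid > ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, input.getIdentifierAddress());
            try (ResultSet rs = ps.executeQuery()) {
                List<GetAddressStreetOutput> result = new ArrayList<>();
                while (rs.next()) {
                    GetAddressStreetOutput row = new GetAddressStreetOutput();
                    row.setAddressId(rs.getInt("addressid"));
                    row.setStreet(rs.getString("street"));
                    result.add(row);
                }
                return result;
            }
        }
    }
}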

Building security into a development team

Getting application developers to understand and take responsibility for software security is difficult. Bootstrapping an Appsec program requires that you get the team up to speed quickly on security risks: what problems they need to look for, how to find and fix and prevent these problems, what tools to use; and you have to convince them that they need to take security seriously. One way to do this is to train everyone on the development team on software security.

But at RSA 2011, Caleb Sima's presentation "Don't Teach Developers Security" challenged the idea that training application developers on software security will make a meaningful difference. He points out (rightly) that you can't teach most developers anything useful about secure software development in a few hours (which is as much Appsec training as most developers will get anyway). At best, training like this is a long-term investment that will only pay off with reinforcement and experience – the first step on a long road.

Most developers (he suggests as many as 90 out of 100) won't take a strong interest in software security regardless. They are there to build stuff; that's what they get paid for, that's what they care about and that's what they do best. Customers love them and managers (like me) love them too because they deliver, and that's what we want them spending their time doing. We don't want or need them to become Appsec experts. Only a few senior, experienced developers will "get" software security and understand or care about all of the details, and in most cases this is enough. The rest of the team can focus on writing good defensive code and using the right frameworks and libraries properly.

Caleb Sima recommends starting an Appsec program by working with QA. Get an application security assessment: a pen test or a scan to identify security vulnerabilities in the app. Identify the top two security issues found. Then train the test team on these issues: what they look like, how to test for them, what tools to use. It's not practical to expect a software tester to become a pen testing expert, but they can definitely learn how to effectively test for specific security issues. When they find security problems they enter them as bugs like any other bug, and then it's up to development to fix the bugs.

Get some wins this way first. Then extend security into the development team. Assign one person as a security controller for each application: a senior developer who understands the code and who has the technical skills and experience to take on security problems. Give them extra Appsec training and the chance to play a leadership role. It's their job to assess technical risks for security issues. They decide on what tools the team will use to test for security problems, recommend libraries and frameworks for the team to use, and help the rest of the team write secure code.

What worked for us

Looking back on what worked for our Appsec program, we learned similar lessons and took some of the same steps. While we were still in startup, I asked one of our senior developers to run an internal security assessment and make sure that our app was built in a secure way. I gave him extra time to learn about secure development and Appsec, and gave him a chance to take on a leadership role for the team.
When we brought expert consultants in to do additional assessments (a secure design review, code review and pen testing) he took the lead on working with them and made sure that he understood what they were doing, what they found and what we needed to do about it. He selected a static analysis tool and got people to use it. He ensured that our framework code was secure and used properly, and he reviewed the rest of the team's code for security and reliability problems. Security wasn't his entire job, but it was an important part of what he did. When he eventually left the team, another senior developer took on this role.

Most development teams have at least one developer whom the rest of the team respects and looks to for help on how to use the language and platform correctly. Someone who cares about how to write good code and who is willing to help others with tough coding problems and troubleshooting. Who handles the heavy lifting on frameworks or performance engineering work. This is the developer that you need to take on your core security work: someone who likes to learn about technical stuff and who picks new things up quickly, who understands and likes hard technical problems (like crypto and session management), and who makes sure that things get done right.

Without knowing it we ended up following a model similar to Adobe's "security ninja" program, although on a micro-scale. Most developers on the team are white belts or yellow belts with some training in secure software development and defensive programming. Our security lead is the black belt, with deeper technical experience and extra training and responsibility for leading software security for the application. Although we depended on external consultants for the initial assessments and to help us lay out a secure development roadmap, we have been able to take responsibility for secure development into the development team. Security is a part of what they do and how they design and build software today.

This model works and it scales. If as a manager you look at security as an important and fundamental technical problem that needs to be solved (rather than a pain-in-the-ass that needs to be gotten over), then you will find that your senior technical people will take it seriously. And if your best technical people take security seriously, then the rest of the team will too.

Reference: Building security into a development team from our JCG partner Jim Bird at the Building Real Software blog....

MapReduce for dummies

Continuing our coverage of Hadoop components, we will go through the MapReduce component. MapReduce is a concept that has its roots in the programming model of LISP. But before we jump into MapReduce, let's start with an example to understand how MapReduce works.

Given a couple of sentences, write a program that counts the number of words. The traditional approach to this problem is: read a word and check whether it is one of the stop words; if not, add the word to a HashMap with the word as key and the number of occurrences as value. If the word is not found in the HashMap, add it and set the value to 1. If the word is found, increment the value and update the entry in the HashMap. In this scenario, the program processes the sentences in a serial fashion.

Now, imagine that instead of a sentence, we need to count the number of words in an encyclopedia. Serial processing of this amount of data is time consuming. So the question is: is there another algorithm we can use to speed up the processing?

Let's take the same problem and divide it into two steps. In the first step, we take each sentence and map the number of words in that sentence. Once the words have been mapped, we move to the next step, in which we combine (reduce) the maps from the two sentences into a single map. That's it: individual sentences can be mapped independently, and once mapped, the results can be reduced to a single resulting map.

The advantages of the MapReduce approach are:

The whole process gets distributed into small tasks, which helps in faster completion of the job.
Both steps can be broken down into tasks. First, run multiple map tasks; once the mapping is done, run multiple reduce tasks to combine the results and finally aggregate them.

Now, imagine this MapReduce paradigm working on HDFS. HDFS has data nodes that split and store files in blocks. If we map the tasks onto each of the data nodes, we can easily leverage the compute power of those data node machines. Each data node can run tasks (map or reduce), which are the essence of MapReduce. As each data node stores data for multiple files, multiple tasks might be running at the same time for different data blocks.

To control the MapReduce tasks, there are two processes that need to be understood:

JobTracker – The JobTracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that have the data, or at least are in the same rack.
TaskTracker – The TaskTracker is a process that starts and tracks MapReduce tasks in a cluster. It contacts the JobTracker for task assignments and reports results.

These trackers are part of Hadoop itself and can be monitored easily via:

http://<host-name>:50030/ – web UI for the MapReduce job tracker(s)
http://<host-name>:50060/ – web UI for the task tracker(s)

Reference: MapReduce for dummies from our JCG partner Munish K Gupta at the Tech Spot blog....
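To make the word-count walk-through above concrete, here is roughly what the map and reduce steps look like with the classic org.apache.hadoop.mapreduce API. Stop-word filtering and the driver (job setup) are omitted for brevity; this is a minimal sketch, not code from the original post.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map step: emit (word, 1) for every word in the input line
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce step: sum the counts emitted for each word
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}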

ADF: Backing bean scope in task flow finalizer

Introduction

It is a very common and recommended practice to use task flow finalizers when we need to do some final work (clean up resources, close connections, etc.) before the task flow is gone. As usual, we work with managed beans declared inside the task flow. The managed beans can have different scopes – request, pageFlow, view, backingBean, etc. The scope depends on what the bean is actually used for. There is a small problem when we access a backingBean-scoped managed bean in the finalizer. Let's have a look at the example below.

We have a bounded task flow with page fragments, and we have managed beans inside the task flow with three different scopes – pageFlow, view and backingBean:

<managed-bean id="__3">
 <managed-bean-name id="__5">FlowBean</managed-bean-name>
 <managed-bean-class id="__4">view.BackBean</managed-bean-class>
 <managed-bean-scope id="__2">pageFlow</managed-bean-scope>
</managed-bean>
<managed-bean id="__9">
 <managed-bean-name id="__6">ViewBean</managed-bean-name>
 <managed-bean-class id="__7">view.BackBean</managed-bean-class>
 <managed-bean-scope id="__8">view</managed-bean-scope>
</managed-bean>
<managed-bean id="__10">
 <managed-bean-name id="__11">BackBean</managed-bean-name>
 <managed-bean-class id="__12">view.BackBean</managed-bean-class>
 <managed-bean-scope id="__13">backingBean</managed-bean-scope>
</managed-bean>

On the page we have three buttons bound to managed beans of each scope:

<af:commandButton text="commandButton 1" id="cb1" action="go"
                  binding="#{backingBeanScope.BackBean.button}">
</af:commandButton>
<af:commandButton text="commandButton 1" id="cb2" binding="#{viewScope.ViewBean.button}"/>
<af:commandButton text="commandButton 1" id="cb3" binding="#{pageFlowScope.FlowBean.button}"/>

The bean class has a button attribute and a testString attribute that signals whether the button is assigned:

private RichCommandButton button;

public void setButton(RichCommandButton button) {
    this.button = button;
}

public RichCommandButton getButton() {
    return button;
}

public String getTestString() {
    if (this.button == null)
        return "The button is not assigned";
    else
        return "The button is assigned";
}

When we press cb1 we go to the return activity and the finalizer gets executed:

public static String resolveExpression(String expression) {
    FacesContext fc = FacesContext.getCurrentInstance();
    return (String) fc.getApplication().evaluateExpressionGet(fc, expression, String.class);
}

public void theFinalizer() {
    // Just to have test access to the managed beans
    // and to be sure we work with the same instances
    System.out.println(resolveExpression("#{pageFlowScope.FlowBean.testString}") + " " +
        resolveExpression("#{pageFlowScope.FlowBean.button}"));
    System.out.println(resolveExpression("#{viewScope.ViewBean.testString}") + " " +
        resolveExpression("#{viewScope.ViewBean.button}"));
    System.out.println(resolveExpression("#{backingBeanScope.BackBean.testString}") + " " +
        resolveExpression("#{backingBeanScope.BackBean.button}"));
}

Run the application, press the cb1 button and see the following in the system log:

The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb3]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb2]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb1]

Everything seems to be OK. The task flow is finished and in the finalizer we work with the correct managed bean instances. In this test the task flow is finished correctly using a Return activity. Now let's abandon our task flow – just navigate away from the page the task flow is placed on.
The finalizer is executed as well; have a look at the system output:

The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb3]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb2]
The button is not assigned

This means that we are working with a different instance of backingBeanScope.BackBean! In case of an abandoned task flow, the controller doesn't see the correct backingBean scope in the finalizer; it is empty, and the controller creates a new instance of BackBean. At the same time, pageFlowScope and viewScope work perfectly. So, be careful when you use backingBean-scoped managed beans within task flows, especially when you access them in finalizers. In any case, you can use the same trick described in the previous post.

That's it!

Reference: Backing bean scope in ADF task flow finalizer from our JCG partner Eugene Fedorenko at the ADF Practice blog....
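As a small defensive follow-up to the advice above, a finalizer can be written so that it tolerates the backingBean-scoped bean arriving as a fresh, empty instance. This is only a sketch with hypothetical helper and method placement (it is meant to sit in the same managed bean class as the methods shown earlier), keeping the state that really matters for clean-up in pageFlow scope:

public void theFinalizer() {
    // Safe: pageFlow-scoped beans keep their state until the task flow is really gone
    Object flowBean = resolveObject("#{pageFlowScope.FlowBean}");

    // Not safe: when the flow is abandoned, this may be a freshly created, empty instance
    BackBean backBean = (BackBean) resolveObject("#{backingBeanScope.BackBean}");
    if (backBean != null && backBean.getButton() != null) {
        // only touch component bindings when they were really assigned
    }
}

// Hypothetical helper, analogous to resolveExpression() above but without the String cast
private Object resolveObject(String expression) {
    FacesContext fc = FacesContext.getCurrentInstance();
    return fc.getApplication().evaluateExpressionGet(fc, expression, Object.class);
}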

Troubleshooting Play Framework 2 apps on Openshift

Troubleshooting Openshift

With the do-it-yourself application type you really get a lot of freedom to support almost any framework or server that can be built and run on a Linux box. But you do have to do your homework and some research. So in this article I'll show you a couple of tips I learnt playing around with Openshift and Play Framework. Comments are more than welcome, so I hope you can also provide me with more tips to help us all get our apps running on the cloud.

Providing native support for Play Framework applications

Right now, the solution we found for deploying Play 2.0 apps on Openshift is quite handy, but we could make it a little better. The problem is that we have to compile the app locally (issuing play stage) and then push 30 MB of libraries to Openshift. The ideal thing, and that's what we did with the Play 1.x quickstart and with the latest version of the Openshift module for Play Framework 1.x, would be to just upload our sources and then let Openshift download and install Play, compile our app, and start it. Unfortunately we've run into some memory constraints (it seems compiling Play 2 apps is a bit memory demanding) that eventually raised some issues. We are trying to work them out, but perhaps, with these tips, you could help us troubleshoot them. With the open-sourcing of Openshift and the new Origin live CD we have more tools available to further investigate what's going on; I just didn't have time yet to start playing with it. So, enough chatter, let's get our hands dirty.

Houston, we have a problem

All right, you've just read this guide or followed our steps in the Play Framework webinar using this Play 2.0 quickstart (in fact, some of these tips will help troubleshoot any app running on Openshift) and something went wrong. First of all, have a look at the logs. Just issue:

rhc app tail -a myapp -l mylogin@openshift.com -p mysecretpass

Leave that window open, it will become quite handy later. Then we'll ssh into our remote machine. Just issue:

rhc app show -a myapp -l mylogin@openshift.com -p mysecretpass

and you'll get something like:

Application Info
================
contacts
 Framework: diy-0.1
 Creation: 2012-04-19T14:20:16-04:00
 UUID: 0b542570e41b42e5ac2a255c316871bc
 Git URL: ssh://0b542570e41b42e5ac2a255c316871bc@myapp-mylogin.rhcloud.com/~/git/myapp.git/
 Public URL: http://myapp-mylogin.rhcloud.com/
Embedded: None

Take the part after the ssh of the Git URL, and log into your Openshift machine:

ssh 96e487d1d4a042f8833efc696604f1e7@myapp-mylogin.rhcloud.com

(If you are lazy like me, go on and vote for an easier way to ssh into Openshift.) It's also a good idea to open another command window, ssh into Openshift, and run something like "top" or "watch -n 2 free -m" to keep an eye on memory usage.

Troubleshooting Play

You know the old motto, "write once, run everywhere"… well, it just "should" work, but just in case you could try compiling your app with the same JDK version as the one running on Openshift. Just run:

java -version
java version "1.6.0_22"
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-i386)
OpenJDK Server VM (build 20.0-b11, mixed mode)

Then install the same JDK version on your box, compile your app and redeploy (you can use the convenience script openshift_deploy). If that doesn't work, try to do the whole process manually on Openshift.
You should do something like this:

# download play
cd ${OPENSHIFT_DATA_DIR}
curl -o play-2.0.1.zip http://download.playframework.org/releases/play-2.0.1.zip
unzip play-2.0.1.zip
cd ${OPENSHIFT_REPO_DIR}

# stop app
.openshift/action_hooks/stop

# clean everything - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play clean

If you get something like:

/var/lib/stickshift/0b542570e41b42e5ac2a255c316871bc/myapp/data/play-2.0.1/framework/build: line 11: 27439 Killed

it means it failed miserably (that's the memory problem I told you about). And it's such a bad-tempered error that you'll also lose your command prompt. Just blindly type "reset" and hit enter, you'll get your prompt back. And then just try again… You might also get this message:

This project uses Play 2.0!
Update the Play sbt-plugin version to 2.0.1 (usually in project/plugins.sbt)

That means you created the app with Play 2.0 and you are now trying to compile it with a different version. Just update the project/plugins.sbt file or download the appropriate version. Now compile and stage your app:

# compile everything - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play compile

# stage - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play stage

Then run it (don't be shy and have a look at the action hook scripts in the quickstart repo):

target/start -Dhttp.port=8080 -Dhttp.address=${OPENSHIFT_INTERNAL_IP} -Dconfig.resource=openshift.conf

Go check it at https://myapp-mylogin.rhcloud.com. If everything works OK, just stop it with ctrl-c, and then run:

.openshift/action_hooks/start

You should see your app starting in the console with the log files. Now you can log out from the ssh session with ctrl-d, and issue:

rhc app restart -a myapp -l mylogin@openshift.com -p mysecretpass

and you should see something like:

Stopping play application
Trying to kill proccess, attempt number 1
kill -SIGTERM 19128
/var/lib/stickshift/0b542570e41b42e5ac2a255c316871bc/openbafici/repo/target/start "-DapplyEvolutions.default=true" -Dhttp.port=8080 -Dhttp.address=127.11.189.129 -Dconfig.resource=openshift.conf
Play server process ID is 21226
[info] play - Application started (Prod)
[info] play - Listening for HTTP on port 8080...

I hope these tips will prove useful. As I said, I'm looking forward to start playing with the Openshift Origin live CD, and then I'll tell you about it. In the meantime I'll leave you in the company of the good old Openshift Rocket Bear; I know you miss him too, so why not get him back?

Reference: Troubleshooting Play Framework 2 apps on Openshift from our JCG partner Sebastian Scarano at the Having fun with Play framework! blog....

Maven Build Dependencies

Maven and Gradle users who are familiar with release and snapshot dependencies may not know about TeamCity snapshot dependencies, or may assume they're somehow related to Maven (which isn't true). TeamCity users who are familiar with artifact and snapshot dependencies may not know that adding an Artifactory plugin allows them to use artifact and build dependencies as well, on top of those provided by TeamCity. Some of the names mentioned above seem not to be established enough, while others may require a discussion about their usage patterns. With this in mind I've decided to explore each solution in its own blog post, with the goal of providing enough information so that people can choose what works best. This first post explores Maven snapshot and release dependencies. The second post covers artifact and snapshot dependencies provided by TeamCity, and the third and final part will cover artifact and build dependencies provided by the TeamCity Artifactory plugin.

Internal and External Dependencies

Build processes may run in total isolation by checking out the entire code base and building an application from scratch. This is the case for projects where relevant binary dependencies (if there are any) are kept in VCS together with the project sources. However, in many other cases build scripts rely on internal or external dependencies of some sort.

Internal dependencies are satisfied by our own code, where we have full control over the project, which can be split into multiple modules or sub-projects. External dependencies are satisfied by someone else's code (over which we have no control) and we consume it or use it as clients. This can be a third-party library such as Spring or a component developed by another team.

This distinction is important since internal and external dependencies are usually accompanied by different release and upgrade cycles: internal dependencies may be modified, rebuilt and updated on an hourly basis, while external dependencies' release cycle is significantly slower, with users applying the updates even less frequently, if at all. This is largely driven by the fact that internal dependencies are under our own control and have a narrow-scoped impact, limited to a specific project or module, while external dependencies can only be used as-is, their impact is potentially company- or world-wide, they are not scoped by any project and can be used anywhere. Naturally, this requires significantly higher standards of release stability, compatibility and maturity, hence slower release and update cycles.

Another aspect of the "internal vs. external" dependency characteristics is expressed in how their versions are specified in a build script. Internal dependencies are usually defined using snapshot versions while external dependencies use release versions. The definition of "snapshot" and "release" versions was coined by Maven, which pioneered the idea of managing dependencies by a build tool. If you're familiar with automatic dependency management, feel free to skip the following section, which provides a quick overview of how it works.

Automatic Dependencies Management

In Maven, dependencies are specified declaratively in a build script, an approach later followed by newer build tools such as Gradle, Buildr and sbt.
Maven:

<dependency>
 <groupId>org.codehaus.groovy</groupId>
 <artifactId>groovy-all</artifactId>
 <version>1.8.6</version>
 <scope>compile</scope>
</dependency>

Gradle:

compile "org.codehaus.groovy:groovy-all:1.8.6"

Buildr:

compile.with "org.apache.axis2:axis2:jar:1.6.1"

sbt:

libraryDependencies += "org.twitter4j" % "twitter4j-core" % "2.2.5"

Every dependency is identified by its coordinates and scope. Coordinates unambiguously specify the library and version used, while scope defines its visibility and availability in build tasks such as compilation or test invocation. For instance, "compile org.codehaus.groovy:groovy-all:1.8.6" would designate a Groovy "org.codehaus.groovy:groovy-all" distribution for version "1.8.6", used for source compilation and test invocation. Switching the scope to "test" or "runtime" would then narrow down the library visibility to tests-only or runtime-only, respectively.

When a build starts, dependencies are either located in a local artifacts repository managed by the build tool (similar to a browser cache) or downloaded from remote repositories, either public or private, such as Maven Central, Artifactory or Nexus. The build tool then adds the resolved artifacts to the corresponding classpaths according to their scopes. When assembling build artifacts, such as "*.war" or "*.ear" archives, all required dependencies are correctly handled and packaged as well.

Though dependency management seems to be an essential part of almost any build, not all build tools provide built-in support for it: Ant and MSBuild lack this capability, a gap later addressed by Ivy and NuGet to some extent. However, Ivy's adoption was slower compared to Maven, while NuGet is a .NET-only tool. Over time, Maven artifact repositories and Maven Central have become a de facto mechanism for distributing and sharing Java artifacts. Being able to resolve and deploy artifacts using Maven repositories has become a "must have" ability for all newer Java build tools.

Release and Snapshot Dependencies

As I mentioned previously, internal dependencies are normally defined using snapshot versions while external dependencies use release versions. Let's look into release versions first, as they are easier to reason about.

Release dependencies are those which have a fixed version number, such as the "1.8.6" version of the Groovy distribution. Whatever artifact repository is used by the build and whenever it attempts to locate this dependency, it is always expected to resolve the exact same artifact. This is the main principle of release dependencies: "Same version = same artifact". Due to this fact, build tools do not check for a release dependency update once it is found and will only re-download the artifact if the local cache was emptied. And this all makes sense, of course, since we never expect to find divergent artifacts of the same library carrying an identical version number!

Snapshot dependencies are different and, as a result, way trickier to deal with. Snapshot dependency versions end with a special "-SNAPSHOT" keyword, like "3.2.0-SNAPSHOT". This keyword signals the build tools to periodically check such an artifact with a remote repository for updates; by default, Maven performs this check on a daily basis. The function of snapshot dependencies, then, is to depend on someone else's work-in-progress (think "nightly builds"): when product development moves from version "X" to version "X+1", its modules are versioned "X+1-SNAPSHOT".
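For reference, the snapshot update check frequency mentioned above is controlled per repository in the POM (or in settings.xml profiles). A minimal sketch, with placeholder repository id and URL:

<repositories>
  <repository>
    <id>company-snapshots</id>                     <!-- placeholder id -->
    <url>https://repo.example.com/snapshots</url>  <!-- placeholder URL -->
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
      <!-- how often Maven checks for newer snapshots:
           always | daily (the default) | interval:N (minutes) | never -->
      <updatePolicy>daily</updatePolicy>
    </snapshots>
  </repository>
</repositories>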
Snapshot Dependencies Uncertainty

If the main principle of release dependencies was "Same version = same artifact" (after version 'X' of a library is released, its artifacts are identical all around the world, forever), the snapshot dependencies' principle is "Same version = ever-updating artifact". The benefit of this approach is that it enables retrieving frequent updates without the need to produce daily releases, which would be highly impractical. The downside of it, however, is uncertainty – using snapshot dependencies in a build script makes it harder to know which version was used in a specific build execution. My "maven-about-plugin" stores a textual "about" file in every snapshot artifact in order to better identify its origins, such as VCS revision and build number; this can be helpful but it only solves half of the problem.

Being a moving target by definition, snapshot dependencies do not allow us to pin down the versions on which we depend, and therefore build reproducibility becomes harder to achieve. Also, in a series or pipeline of builds (when a finished build triggers an invocation of subsequent ones), an artifact produced by the initial pipeline steps is not necessarily consumed by the closing ones, as it may have long been overridden by other build processes running at the same time.

One possible approach in this situation is to lock down a dependency version in a build script using a timestamp, so it becomes "3.2.0-20120119.134529-1" rather than "3.2.0-SNAPSHOT". This effectively makes snapshot dependencies identical to release dependencies and disables the automatic update mechanism, making it impossible to use an up-to-date version even when one is available, unless the timestamp is updated. As you see, snapshot dependencies can be used where it makes sense, but it should be done with caution and in small doses. If possible, it is best to manage a separate release lifecycle for every reusable component and let its clients use periodically updated release dependencies.

Summary

This article has provided an overview of automatic dependency management by Java build tools together with an introduction to Maven release and snapshot dependencies. It also explained how snapshot dependencies' advantages become debatable in the context of build reproducibility and build pipelines. The following blog posts will explore the TeamCity build chains and Artifactory build isolation which allow you to use consistent, reproducible and up-to-date snapshot versions throughout a chain of builds without locking down their timestamps in a build script. More to come!

Reference: Maven Build Dependencies from our JCG partner Evgeny Goldin at the Goldin++ blog....

The Eclipse Common Build Infrastructure

Creating a Common Build Infrastructure (CBI) at Eclipse has been one of our holy grail pursuits over the years. My main goal in this effort has been to get Eclipse committers out of the build business so that more time and energy can be focused on actually writing software (which is what most software developers would rather spend their time doing). When we talk about "build", we mean the process of taking source code and turning it into a form that adopters and users can download and consume. Building software in general – and building Eclipse plug-ins/OSGi bundles in particular – is a relatively hard problem.

The original CBI at Eclipse used a combination of PDE Build, black magic, cron, and manual intervention in the form of pulled hairs. Our first attempt at trying to make this easier for Eclipse projects resulted in the Athena Common Build. Athena added a layer on top of PDE Build that made it quite a lot easier to build Eclipse plug-ins, features, and update sites. The community owes a debt of gratitude to Nick Boldt for all the hard work he did to implement Athena, and his tireless efforts to help projects adopt it. Around the same time that Athena came into being, we brought Hudson into the picture to provide proper build orchestration.

Builds at Eclipse continued to evolve. The Buckminster project, which focuses on assembling artifacts, waded into the build space. The B3 project, a third-generation build based on Modeling technology, was created. At one point, a new project at Eclipse had lots of different choices for build. Then the push to Maven started. For years many vocal members of the community complained that Maven couldn't be used to build Eclipse bundles. It was Eclipse's fault. It was Maven's fault. There were many false starts. But everything changed with Tycho.

Tycho makes Maven understand how to build Eclipse plug-ins. Tycho facilitates a couple of useful things: first, it allows you to do "manifest-first" builds in which your Maven POM leverages the OSGi dependencies specified by an Eclipse bundle; second, it enables Maven to resolve dependencies found in p2 repositories. It does more than this, but these are the big ones. Unfortunately, we haven't found a good way to track the rate of migration. But in my estimation, Eclipse projects and many others in the adopter community are flocking to it.

The combination of Hudson, Maven, and Tycho seems to be delivering the holy grail. I managed to get up and running on Maven/Tycho in a matter of minutes and haven't thought about the build since. For projects delivering a handful of features and bundles, it's dirt-easy to get started and maintain the build. There are still a few rather large corner cases that need to be addressed. For example, we have a team working on moving the Eclipse project's build over to the CBI. The Eclipse project's build is about as hard as it gets.

The CBI has evolved and will continue to evolve. Our noble Webmaster recently added a new interface for signing build artifacts with the Eclipse certificate. We have ongoing work to develop a standard "parent pom" for Eclipse projects, and even a Maven repository where Eclipse projects can push their build results for dissemination to the general public. So the CBI at Eclipse seems to be stabilising around these technologies. But I have no doubt that it will continue to evolve, especially as more and more projects start to consider implementing continuous build strategies combining Gerrit and Hudson.
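For a sense of how little is needed to get a small project onto Maven/Tycho, here is a rough sketch of a Tycho-enabled parent POM. The group id, Tycho version and p2 repository URL are illustrative only (this is not the Eclipse-wide parent pom mentioned above); individual bundle modules then simply use the eclipse-plugin packaging and inherit from it.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example.builds</groupId>
  <artifactId>example-parent</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <properties>
    <tycho-version>0.14.1</tycho-version>
  </properties>

  <!-- p2 repository used to resolve the OSGi dependencies declared in MANIFEST.MF -->
  <repositories>
    <repository>
      <id>eclipse-indigo</id>
      <layout>p2</layout>
      <url>http://download.eclipse.org/releases/indigo</url>
    </repository>
  </repositories>

  <build>
    <plugins>
      <!-- enables the "manifest-first" Tycho packaging types (eclipse-plugin, eclipse-feature, ...) -->
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-maven-plugin</artifactId>
        <version>${tycho-version}</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>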
Reference: The Eclipse Common Build Infrastructure from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog....

ORM Haters Don’t Get It

I’ve seen tons of articles and comments (especially comments) that tell us how bad, crappy and wrong is the concept of ORM (object-relational mapping). Here are the usual claims, and my comments to them:“they are slow” – there is some overhead in mapping, but it is nothing serious. Chances are you will have much slower pieces of code. “they generate bad queries which hurts performance” – first, it generates better queries than the regular developer would write, and second – it generates bad queries if you use bad mappings “they deprive you of control” – you are free to execute native queries “you don’t need them, plain SQL and XDBC is fine” – no, but I’ll discuss this in the next paragraph “they force you to have getters and setters which is bad” – your entities are simple value objects, and having setters/getters is fine there. More on this below database upgrade is hard – there are a lot of tools around the ORMs that make schema transition easy. Many ORMs have these tools built-inBut why do you need an ORM in the first place? Assume you decided not to use one. You write your query and get the result back, in the form of a ResultSet (or whatever it looks like in the language you use). There you can access each column by its name. The result is a type unsafe map-like structure. But the rest of your system requires objects – your front-end components take objects, your service methods need objects as parameters, etc. These objects are simple value-objects, and exposing their state via getters is nothing wrong. They don’t have any logic that operates on their state, they are just used to transfer that state. If you are using a statically-typed language, you are most likely using objects rather than type-unsafe structures around your code, not to mention that these structures are database-access interfaces, and you wouldn’t have them in your front-end code. So then a brilliant idea comes to your mind – “I will create a value object and transfer everything from the result set to it. Now I have the data in an object, and I don’t need database-access specific interfaces to pass around in my code”. That’s a great step. But soon you realize that this is a repetitive task – you are creating a new object and manually, field by field, transferring the result from your SQL query to that object. And you devise some clever reflection utility that reads the object fields, assumes you have the same column names in the DB, reads the result set and populates the object. Well, guess what – ORMs have been doing the same thing for years and years now. I bet theirs are better and work in many scenarios that you don’t suspect you’ll need. (And I will just scratch the surface of how odd is the process of maintaining native queries – some put them in one huge text file (ugly), others put them inline (how can the DBAs optimize them now?))To summarize the previous paragraph – you will create some sort of ORM in your project, but yours will suck more than anything out there, and you won’t admit it’s ORM.This is a good place to mention an utility called commons-dbutils (Java). It is a simple tool to map database results to objects that covers the basic cases. It is not an ORM, but it does what an ORM does – maps the database to your objects. But there’s something missing in the basic column-to-field mapper, and that’s foreign keys and joins. With an ORM you can get the User’s address in an Address field even though a JOIN would be required to fetch it. That’s both a strength and a major weakness of ORMs. 
The *ToOne mappings are generally safe. But *ToMany collections can be very tricky, and they are very often misused. This is partly the fault of ORMs, as they don't warn you in any way about the consequences of mapping a collection of, say, all orders belonging to a company. You will never and must never need to access that collection, but you can map it. This is an argument I've never heard from ORM haters, because they didn't get to this point.

So, are ORMs basically dbutils plus the evil and risky collection mapping? No, they give you many extras that you need. Dialects – you write your code in a database-agnostic way, and although you are probably not going to change your initially selected database vendor, it is much easier to use any database without every developer learning the quirks of its syntax. I've worked with MSSQL and Oracle, and I barely felt the pain of working with them.

Another very, very important thing is caching. Would you execute the same query twice? I guess not, but if it happens to be in two separate methods invoked by a third method, it might be hard to catch, or hard to avoid. Here comes the session cache, and it saves you all the duplicated queries to get some row (object) from the database. There is one more criticism of ORMs here – the session management is too complicated. I have mainly used JPA, so I can't speak for others, but it is really tricky to get the session management right. It is all for very good reasons (the aforementioned cache, transaction management, lazy mappings, etc.), but it is still too complicated. You would need at least one person on the team that has a lot of experience with a particular ORM to set it up right.

But there's also the 2nd-level cache, which is significantly more important. This sort of thing is what allows services like Facebook and Twitter to exist – you stuff your rarely-changing data into (distributed) memory and, instead of querying the database every time, you get the object from memory, which is many times faster. Why is this related to ORMs? Because the caching solution can usually be plugged into the ORM and you can store the very same objects that the ORM generated in memory. This way caching becomes completely transparent to your database-access code, which keeps it simple and yet performant.

So, to summarize – ORMs do what you would need to do anyway, but it is almost certain that a framework that's been around for 10 years is better than your homegrown mapper, and they provide a lot of necessary and important extras on top of their core functionality. They also have two weak points (both of which practically say "you need to know what you are doing"):

they are easy to misuse, which can lead to fetching huge, unnecessary results from the database. You can very easily create a crappy mapping which can slow down your application. Of course, it is your responsibility to have a good mapping, but ORMs don't really give you a hand there.
their session management is complicated, and although it is for very good reasons, it may require a very experienced person on the team to set things up properly.

I've never seen these two being used as arguments against ORMs, whereas the wrong ones at the beginning of this article are used a lot, which leads me to believe that people raging against ORMs rarely know what they are talking about.

Reference: ORM Haters Don't Get It from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog....
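As a footnote to the caching discussion above: with JPA 2, marking an entity as eligible for the second-level cache is roughly this simple. The Country entity is a made-up example, and the cache provider itself still has to be configured separately (e.g. in persistence.xml, typically with shared-cache-mode set to ENABLE_SELECTIVE).

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Rarely-changing reference data: a good candidate for the 2nd-level cache
@Entity
@Cacheable // JPA 2.0: instances of this entity may be cached across persistence contexts
public class Country {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    protected Country() {
        // no-arg constructor required by JPA
    }

    public Country(String name) {
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}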

Latency, Throughput and Degree of Concurrency

chrisapotek asked: How do you define throughput and latency for your test? This is not a simple question, so I have replied with a post.

Sustained Throughput

I consider throughput to be the number of actions a process can perform over a sustained period of time, between 10 seconds and a day (assuming you have a quiet period overnight to catch up). I measure this as the number of actions per second or megabytes (MB) per second, but I feel the test needs to run for more than a second to be robust. Shorter tests can still report a throughput of X/s, but this can be unrealistic because systems are designed to handle bursts of activity with caches and buffers. If you test one behaviour alone you get a figure which assumes nothing else is running on the system and that the limits of these buffers are not important. When you run a real application on a real machine doing other things, it will not have full use of the caches, buffers, memory and bandwidth, and you may not get within 2-3x of the sustained throughput, let alone the more optimistic burst throughput. A SATA HDD can report a burst throughput of 500 MB/second, but it might only achieve a sustained 40 MB/s. When running a real program you might expect to get 15-25 MB/sec.

Latency

There are two ways to report latency: one-way latency and round-trip latency (or Round Trip Time). Often the first is reported because it is smaller, but it is difficult to measure accurately as you need a synchronised clock at both ends. For this reason you often measure the round-trip latency (as you can use just one accurate clock) and possibly halve it to infer the one-way latency. I tend to be interested in what you can expect from a real application, and the higher round-trip latency is usually a better indication.

A common measure of latency is to take the inverse of the throughput. While this is easier to calculate, it is only comparable to other tests measured this way because it only gives you the most optimistic view of the latency. E.g. if you send messages asynchronously over TCP on loopback you may be able to send two million messages per second, and you might infer that the latency is the inverse, i.e. 500 ns each. If you place a timestamp in each message, you may find the typical time between sending and receiving is actually closer to 20 micro-seconds. What can you infer from this discrepancy? That there are around 40 (20 us / 500 ns) messages in flight at any time.

Typical, Average and percentile latency

Typical latency can be calculated by taking the individual latencies, sorting them and taking the middle value. This can be a fairly optimistic value, but because it is the lowest, it may be the value you would like to report. The average latency is the sum of latencies divided by the count. This is often reported because it is the simplest to calculate and to understand. Because it takes into account all values it can be more realistic than the typical latency. A more conservative view is to report a percentile of latency like the 90%, 99%, 99.9% or even 99.99% latency. This is calculated by sorting the individual latencies and taking the value that bounds the highest 10%, 1%, 0.1% or 0.01%. As this represents the latency you will get most of the time, it is a better figure to work with. The typical latency is actually the 50th percentile. It can be useful to compare the typical and average latencies to see how "flat" the distribution is. If the typical and average latencies are within 10%, I consider this to be fairly flat.
Much higher than this indicates opportunities to optimise your performance. In a well-performing system I look for about a factor of 2x in latency between the 90%, 99% and 99.9% percentiles.

The distribution of latencies often has what are called "fat tails". Every so often you will have values which are much larger than all the other values. These can be 10 – 1000x higher. This is what makes looking at the average or percentile latencies more important, as these are the ones which will cause you trouble. The typical latency is more useful for determining whether the system can be optimised.

A test which reports these latencies and throughput

The test "How much difference can thread affinity make?" is what I call an echo or ping test. One thread or process sends a short message which contains a timestamp. The service picks up the message and sends it back. The original sender reads the message and compares the timestamp in the message with another timestamp it takes when the message is read. The difference is the latency, measured in nano-seconds (or micro-seconds in some tests I do).

Wouldn't less latency lead to more throughput? Can you explain that concept in mere mortal terms?

There are many techniques which improve both latency and throughput, e.g. using faster hardware or optimising the code to make it faster. However, some techniques improve only throughput OR latency. E.g. using buffering, batching or asynchronous communication (as in NIO2) improves throughput, but at the cost of latency. Conversely, making the code as simple as possible and reducing the number of hops tends to reduce latency but may not give as high a throughput, e.g. sending one byte at a time instead of using a buffered stream: each byte can be received with lower latency, but throughput suffers.

Can you explain that concept in mere mortal terms? In the simplest terms, latency is the time per action and throughput is the number of actions per time. The other concept I use is the quantity "in flight", or "degree of concurrency", which is: Concurrency = Throughput * Latency.

Degree of Concurrency examples

If a task takes 1 milli-second and the throughput is 1,000 per second, the degree of concurrency is 1 (1/1000 * 1000). In other words the task is single threaded.
If a task takes 20 micro-seconds and the throughput is 2 million messages per second, the number "in flight" is 40 (2e6 * 20e-6).
If a HDD has a latency of 8 ms but can write 40 MB/s, the amount of data written per seek is about 320 KB (40e6 B/s * 8e-3 s = 3.2e5 B).

Reference: What is latency, throughput and degree of concurrency? from our JCG partner Peter Lawrey at the Vanilla Java blog....
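To tie the terms from the article above together, here is a small stand-alone sketch (my own illustration, not code from the thread-affinity test) that computes throughput, typical/average/percentile latency and the degree of concurrency from a set of recorded latencies:

import java.util.Arrays;

public class LatencyStats {

    // latencies are in nanoseconds, collected over testSeconds of sustained load
    public static void report(long[] latencies, double testSeconds) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);

        double throughput = latencies.length / testSeconds;  // actions per second

        long sum = 0;
        for (long latency : sorted) {
            sum += latency;
        }
        double average = (double) sum / sorted.length;        // sum / count
        long typical = percentile(sorted, 0.50);              // typical = 50th percentile
        long p99 = percentile(sorted, 0.99);
        long p999 = percentile(sorted, 0.999);

        // degree of concurrency = throughput * latency (how many actions are "in flight")
        double inFlight = throughput * average / 1e9;

        System.out.printf("throughput: %.0f/s, typical: %d ns, average: %.0f ns, " +
                "99%%: %d ns, 99.9%%: %d ns, in flight: %.1f%n",
                throughput, typical, average, p99, p999, inFlight);
    }

    // value below which the given fraction of all latencies fall
    static long percentile(long[] sorted, double fraction) {
        int index = (int) Math.ceil(fraction * sorted.length) - 1;
        return sorted[Math.min(Math.max(index, 0), sorted.length - 1)];
    }
}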

Netty: Using SPDY and HTTP transparently

Most people have already heard about SPDY, the protocol from Google proposed as a replacement for the aging HTTP protocol. Web servers and browsers are slowly implementing this protocol and support is growing. In a recent article I already wrote about how SPDY works and how you can enable SPDY support in Jetty. Since a couple of months ago, Netty (originally from JBoss) also has support for SPDY.

Since Netty is often used for high-performance protocol servers, SPDY is a logical fit. In this article I'll show you how you can create a basic Netty based server that does protocol negotiation between SPDY and HTTP. It uses the example HTTPRequestHandler from the Netty snoop example to consume and produce some HTTP content. To get everything working we'll need to do the following things:

Enable NPN in Java to determine the protocol to use.
Determine, based on the negotiated protocol, whether to use HTTP or SPDY.
Make sure the correct SPDY headers are sent back with HTTP.

SPDY uses a TLS extension to determine the protocol to use in communication. This is called NPN. I wrote a more complete explanation and showed the messages involved in the article on how to use SPDY on Jetty, so for more info look at that article. Basically, what this extension does is that during the TLS exchange the server and client also exchange the transport-level protocols they support. In the case of SPDY a server could support both the SPDY protocol and the HTTP protocol. A client implementation can then determine which protocol to use. Since this isn't something which is available in the standard Java implementation, we need to extend the Java TLS functionality with NPN.

Enable NPN support in Java

So far I have found two options that can be used to add NPN support in Java. One is from https://github.com/benmmurphy/ssl_npn, who also has a basic SPDY/Netty example in his repo where he uses his own implementation. The other option, and the one I'll be using, is the NPN support provided by Jetty. Jetty provides an easy-to-use API that you can use to add NPN support to your Java SSL contexts. Once again, in the article on Jetty you can find more info on this. To set up NPN for Netty, we need to do the following:

Add the NPN lib to the boot classpath
Connect the SSL context to the NPN API

Add the NPN lib to the boot classpath

First things first. Download the NPN boot jar from http://repo2.maven.org/maven2/org/mortbay/jetty/npn/npn-boot/8.1.2.v2012… and make sure that when you run the server you start it like this:

java -Xbootclasspath/p:<path_to_npn_boot_jar>

With this, Java SSL has support for NPN. We still, however, need access to the results of this negotiation. We need to know whether we're using HTTP or SPDY, since that determines how we process the received data. For this Jetty provides an API. For this API and for the required Netty libraries, we add the following dependencies (since I'm using Maven) to the pom:

<dependency>
 <groupId>io.netty</groupId>
 <artifactId>netty</artifactId>
 <version>3.4.1.Final</version>
</dependency>
<dependency>
 <groupId>org.eclipse.jetty.npn</groupId>
 <artifactId>npn-api</artifactId>
 <version>8.1.2.v20120308</version>
</dependency>

Connect the SSL context to the NPN API

Now that we've got NPN enabled and the correct API added to the project, we can configure the Netty SSL handler. Configuring handlers in Netty is done in a PipelineFactory.
For our server I created the following PipelineFactory:

package smartjava.netty.spdy;

import static org.jboss.netty.channel.Channels.pipeline;

import java.io.FileInputStream;
import java.security.KeyStore;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

import org.eclipse.jetty.npn.NextProtoNego;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.ssl.SslHandler;

public class SPDYPipelineFactory implements ChannelPipelineFactory {

    private SSLContext context;

    public SPDYPipelineFactory() {
        try {
            KeyStore keystore = KeyStore.getInstance("JKS");
            keystore.load(new FileInputStream("src/main/resources/server.jks"), "secret".toCharArray());
            KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
            kmf.init(keystore, "secret".toCharArray());
            context = SSLContext.getInstance("TLS");
            context.init(kmf.getKeyManagers(), null, null);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public ChannelPipeline getPipeline() throws Exception {
        // Create a default pipeline implementation.
        ChannelPipeline pipeline = pipeline();

        // Set up the SSL engine and register it with the NPN API
        SSLEngine engine = context.createSSLEngine();
        engine.setUseClientMode(false);
        NextProtoNego.put(engine, new SimpleServerProvider());
        NextProtoNego.debug = true;

        pipeline.addLast("ssl", new SslHandler(engine));
        pipeline.addLast("pipeLineSelector", new HttpOrSpdyHandler());
        return pipeline;
    }
}

In the constructor of this class we set up a basic SSL context. The keystore and key we use were created with the Java keytool; this is normal SSL configuration. When we receive a request, the getPipeline operation is called to determine how to handle the request. Here we use the NextProtoNego class, provided by the Jetty NPN API, to connect our SSL connection to the NPN implementation. In this operation we pass a provider that is used as callback and configuration for our server. We also set NextProtoNego.debug to true. This prints out some debugging information that makes, well, debugging easier. The code for the SimpleServerProvider is very simple:

public class SimpleServerProvider implements ServerProvider {

    private String selectedProtocol = null;

    public void unsupported() {
        // if unsupported, default to http/1.1
        selectedProtocol = "http/1.1";
    }

    public List<String> protocols() {
        return Arrays.asList("spdy/2", "http/1.1");
    }

    public void protocolSelected(String protocol) {
        selectedProtocol = protocol;
    }

    public String getSelectedProtocol() {
        return selectedProtocol;
    }
}

This code is pretty much self-explanatory:

The unsupported operation is called when the client doesn't support NPN. In that case we default to HTTP.
The protocols() operation returns the protocols the server supports.
The protocolSelected operation is called when a protocol has been negotiated by the server and the client.
The getSelectedProtocol method is what we will use to get the selected protocol from a different handler in the Netty pipeline.

Determine, based on the negotiated protocol, whether to use HTTP or SPDY

Now we need to configure Netty in such a way that it runs a specific pipeline for HTTPS requests and a pipeline for SPDY requests. For this, let's look back at a small part of the pipeline factory:

pipeline.addLast("ssl", new SslHandler(engine));
pipeline.addLast("pipeLineSelector", new HttpOrSpdyHandler());

The first part of this pipeline is the SslHandler that is configured with NPN support.
The next handler that will be called is the HttpOrSpdyHandler. This handler determines, based on the protocol, which pipeline to use. The code for this handler is listed next:

public class HttpOrSpdyHandler implements ChannelUpstreamHandler {

    public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
        // determine protocol type
        SslHandler handler = ctx.getPipeline().get(SslHandler.class);
        SimpleServerProvider provider = (SimpleServerProvider) NextProtoNego.get(handler.getEngine());

        if ("spdy/2".equals(provider.getSelectedProtocol())) {
            ChannelPipeline pipeline = ctx.getPipeline();
            pipeline.addLast("decoder", new SpdyFrameDecoder());
            pipeline.addLast("spdy_encoder", new SpdyFrameEncoder());
            pipeline.addLast("spdy_session_handler", new SpdySessionHandler(true));
            pipeline.addLast("spdy_http_encoder", new SpdyHttpEncoder());
            // Max size of SPDY messages set to 1MB
            pipeline.addLast("spdy_http_decoder", new SpdyHttpDecoder(1024 * 1024));
            pipeline.addLast("handler", new HttpRequestHandler());
            // remove this handler, and process the requests as spdy
            pipeline.remove(this);
            ctx.sendUpstream(e);
        } else if ("http/1.1".equals(provider.getSelectedProtocol())) {
            ChannelPipeline pipeline = ctx.getPipeline();
            pipeline.addLast("decoder", new HttpRequestDecoder());
            pipeline.addLast("http_encoder", new HttpResponseEncoder());
            pipeline.addLast("handler", new HttpRequestHandler());
            // remove this handler, and process the requests as http
            pipeline.remove(this);
            ctx.sendUpstream(e);
        } else {
            // we're still in protocol negotiation, no need for any handlers
            // at this point.
        }
    }
}

Using the NPN API and our current SSL context, we retrieve the SimpleServerProvider we added earlier. We check whether the selectedProtocol has been set, and if so, we set up a chain for processing. We handle three options in this class:

There is no protocol: it's possible that no protocol has been negotiated yet. In that case we don't do anything special, and just process the event normally.
The protocol is http: we set up a handler chain to handle HTTP requests.
The protocol is spdy: we set up a handler chain to handle SPDY requests.

With this chain, all the messages eventually received by the HttpRequestHandler are HTTP requests. We can process such an HTTP request normally, and return an HTTP response. The various pipeline configurations will handle all this correctly.

Make sure the correct SPDY headers are sent back with HTTP

The final step is to test this. We'll test with the latest version of Chrome to check whether SPDY is working, and we'll use wget to test normal HTTP requests. I mentioned that the HttpRequestHandler, the last handler in the chain, does our HTTP processing. I've used the http://netty.io/docs/stable/xref/org/jboss/netty/example/http/snoop/Http… as the HTTPRequestHandler, since that one nicely returns information about the HTTP request without me having to do anything. If you run this without alteration, though, you do run into an issue. To correlate the HTTP response to the correct SPDY session, we need to copy a header from the incoming request to the response: the "X-SPDY-Stream-ID" header.
I’ve added the following to the HttpSnoopServerHandler to make sure these headers are copied (should really have done this in a seperate handler).private final static String SPDY_STREAM_ID = = "X-SPDY-Stream-ID"; private final static String SPDY_STREAM_PRIO = "X-SPDY-Stream-Priority"; // in the writeResponse method add if (request.containsHeader(SPDY_STREAM_ID)) { response.addHeader(SPDY_STREAM_ID,request.getHeader(SPDY_STREAM_ID)); // optional header for prio response.addHeader(SPDY_STREAM_PRIO,0); }Now all that is left is a server with a main to start everything, and we can test our SPDY implementation.public class SPDYServer { public static void main(String[] args) { // bootstrap is used to configure and setup the server ServerBootstrap bootstrap = new ServerBootstrap( new NioServerSocketChannelFactory( Executors.newCachedThreadPool(), Executors.newCachedThreadPool())); bootstrap.setPipelineFactory(new SPDYPipelineFactory()); bootstrap.bind(new InetSocketAddress(8443)); } }Start up the server, fire up Chrome and let’s see whether everything is working. Open the https://localhost:8443/thisIsATest url and you should get a result that looks something like this:In the output of the server, you can see some NPN debug logging:[S] NPN received for 68ce4f39[SSLEngine[hostname=null port=-1] SSL_NULL_WITH_NULL_NULL] [S] NPN protocols [spdy/2, http/1.1] sent to client for 68ce4f39[SSLEngine[hostname=null port=-1] SSL_NULL_WITH_NULL_NULL] [S] NPN received for 4b24e48f[SSLEngine[hostname=null port=-1] SSL_NULL_WITH_NULL_NULL] [S] NPN protocols [spdy/2, http/1.1] sent to client for 4b24e48f[SSLEngine[hostname=null port=-1] SSL_NULL_WITH_NULL_NULL] [S] NPN selected 'spdy/2' for 4b24e48f[SSLEngine[hostname=null port=-1] SSL_NULL_WITH_NULL_NULL]An extra check is looking at the open SPDY sessions in chrome browser by using the following url: chrome://net-internals/#spdyNow lets check whether plain old HTTP is still working. From a command line do the following:jos@Joss-MacBook-Pro.local:~$ wget --no-check-certificate https://localhost:8443/thisIsATest --2012-04-27 16:29:09-- https://localhost:8443/thisIsATest Resolving localhost... ::1, 127.0.0.1, fe80::1 Connecting to localhost|::1|:8443... connected. WARNING: cannot verify localhost's certificate, issued by `/C=NL/ST=NB/L=Waalwijk/O=smartjava/OU=smartjava/CN=localhost': Self-signed certificate encountered. HTTP request sent, awaiting response... 200 OK Length: 285 Saving to: `thisIsATest' 100%[==================================================================================>] 285 --.-K/s in 0s 2012-04-27 16:29:09 (136 MB/s) - `thisIsATest' saved [285/285] jos@Joss-MacBook-Pro.local:~$ cat thisIsATest WELCOME TO THE WILD WILD WEB SERVER =================================== VERSION: HTTP/1.1 HOSTNAME: localhost:8443 REQUEST_URI: /thisIsATest HEADER: User-Agent = Wget/1.13.4 (darwin11.2.0) HEADER: Accept = */* HEADER: Host = localhost:8443 HEADER: Connection = Keep-Alive jos@Joss-MacBook-Pro.local:~$And it works! Wget uses standard HTTPS, and we get a result, and chrome uses SPDY and presents the result from the same handler. In the net couple of days, I’ll also post on article on how you can enable SPDY for the Play Framework 2.0, since their webserver is also based on Netty.Reference: Using SPDY and HTTP transparently using Netty from our JCG partner Jos Dirksen at the Smart Java blog....