


Externalizing session state for a Spring-boot application using spring-session

Spring-session is a very cool new project that aims to provide a simpler way of managing sessions in Java-based web applications. One of the features I explored with spring-session recently was the way it supports externalizing session state without needing to fiddle with the internals of specific web containers like Tomcat or Jetty. To test spring-session I have used a shopping-cart-type application (available here) which makes heavy use of the session by keeping the items added to the cart as a session attribute, as can be seen from these screenshots.

Consider first a scenario without Spring session. This is how I have exposed my application: I am using nginx to load balance across two instances of this application. This set-up is very easy to run using Spring Boot; I brought up two instances of the app using two different server ports, this way:

mvn spring-boot:run -Dserver.port=8080
mvn spring-boot:run -Dserver.port=8082

and this is my nginx.conf to load balance across these two instances:

events {
    worker_connections 1024;
}

http {
    upstream sessionApp {
        server localhost:8080;
        server localhost:8082;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://sessionApp;
        }
    }
}

I display the port number of the application in the footer just to show which instance is handling the request. If I were to do nothing to move the state of the session out of the application, the behavior of the application would be erratic, as a session established on one instance of the application would not be recognized by the other instance – specifically, if Tomcat receives a session id it does not recognize, its behavior is to create a new session.

Introducing Spring session into the application

There are container-specific ways to introduce an external session store – one example is here, where Redis is configured as a store for Tomcat. Pivotal GemFire provides a module to externalize Tomcat's session state.
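Before wiring in any shared store, the failure mode is worth pinning down. The following is an illustrative plain-Java sketch of my own (the LocalSessionApp class is invented for this example, not part of any container API): each "instance" holds a private session map, and an id unknown to an instance simply yields a fresh, empty session, just as Tomcat does.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: each "instance" keeps its own private session map,
// mimicking two containers with no shared session store.
class LocalSessionApp {
    private final Map<String, Map<String, Object>> sessions = new HashMap<>();

    // Tomcat-like behavior: an unknown session id just gets a fresh, empty session.
    Map<String, Object> session(String sessionId) {
        return sessions.computeIfAbsent(sessionId, id -> new HashMap<>());
    }
}

public class NoSharedStore {
    public static void main(String[] args) {
        LocalSessionApp app8080 = new LocalSessionApp();
        LocalSessionApp app8082 = new LocalSessionApp();

        // The cart is filled on the instance behind port 8080...
        app8080.session("session-abc").put("cart", "1 book");

        // ...but when nginx routes the next request to 8082, the id is
        // unknown there, so the user silently gets a brand-new empty session.
        System.out.println(app8080.session("session-abc").get("cart")); // prints 1 book
        System.out.println(app8082.session("session-abc").get("cart")); // prints null
    }
}
```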
The advantage of using Spring-session is that there is no dependence on the container at all – maintaining session state becomes an application concern. The instructions on configuring an application to use Spring session are detailed very well at the Spring-session site. Just to quickly summarize how I have configured my Spring Boot application, these are first the dependencies that I have pulled in:

<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session</artifactId>
    <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-redis</artifactId>
    <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>1.4.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.4.1</version>
</dependency>

and my configuration to use Spring-session for session support. Note the Spring Boot specific FilterRegistrationBean, which is used to register the session repository filter:

import org.springframework.boot.context.embedded.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;
import org.springframework.session.web.http.SessionRepositoryFilter;
import org.springframework.web.filter.DelegatingFilterProxy;

import java.util.Arrays;

@Configuration
@EnableRedisHttpSession
public class SessionRepositoryConfig {

    @Bean
    @Order(value = 0)
    public FilterRegistrationBean sessionRepositoryFilterRegistration(SessionRepositoryFilter springSessionRepositoryFilter) {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(new DelegatingFilterProxy(springSessionRepositoryFilter));
        filterRegistrationBean.setUrlPatterns(Arrays.asList("/*"));
        return filterRegistrationBean;
    }

    @Bean
    public JedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory();
    }
}

And that is it! Magically now all session state is handled by Spring-session and neatly externalized to Redis. If I were to retry my previous configuration of using nginx to load balance two different Spring-Boot applications, now using the common Redis store, the application just works irrespective of the instance handling the request. I look forward to further enhancements to this excellent new project.

The sample application which makes use of Spring-session is available here: https://github.com/bijukunjummen/shopping-cart-cf-app.git

Reference: Externalizing session state for a Spring-boot application using spring-session from our JCG partner Biju Kunjummen at the all and sundry blog....
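The effect of externalizing the session can also be sketched in plain Java: both application instances read and write attributes through one store keyed by session id, so whichever instance nginx picks sees the same state. This is only a conceptual sketch of my own – SharedSessionStore and AppInstance are invented names, and a concurrent map stands in for Redis; it is not the spring-session API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a store shared by all instances, keyed by session id,
// standing in for Redis.
class SharedSessionStore {
    private final Map<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();

    // Fetch (or lazily create) the attribute map for a session id.
    Map<String, Object> session(String sessionId) {
        return sessions.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>());
    }
}

// Two "app instances" sharing the store behave like the nginx-balanced setup.
class AppInstance {
    private final SharedSessionStore store;

    AppInstance(SharedSessionStore store) {
        this.store = store;
    }

    @SuppressWarnings("unchecked")
    void addToCart(String sessionId, String item) {
        ((List<String>) store.session(sessionId)
                .computeIfAbsent("cart", k -> new ArrayList<String>())).add(item);
    }

    @SuppressWarnings("unchecked")
    List<String> cart(String sessionId) {
        return (List<String>) store.session(sessionId)
                .computeIfAbsent("cart", k -> new ArrayList<String>());
    }
}

public class SharedStoreSketch {
    public static void main(String[] args) {
        SharedSessionStore redisLike = new SharedSessionStore();
        AppInstance app8080 = new AppInstance(redisLike);
        AppInstance app8082 = new AppInstance(redisLike);

        // An item added via the instance on port 8080...
        app8080.addToCart("session-abc", "book");

        // ...is visible to the instance on port 8082, whichever one handles
        // the next request.
        System.out.println(app8082.cart("session-abc")); // prints [book]
    }
}
```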

A common CXF Request Interceptor for all OSGi Bundles

I have been working with Apache CXF, Karaf and Felix for the past few months, and I find all these bundled technologies very interesting to work with. While working on some use cases, I got into a situation where I needed one interceptor that would be executed on each HTTP request sent to any of the bundles deployed under the application in Karaf. Basically I want to authorize every request, change some headers and do some security checks on whatever request is sent to the system – and, most importantly, I want to do it in a single class. I found many ways to add an interceptor in every bundle, but I wanted to do it at some centralized location/bundle so that all the requests could be handled from that bundle. It can simply reject any request after doing some authorization, or pass it on to the relevant bundle (CXF does that internally). While doing this I came to know that CXF always creates a separate bus for every REST server that is initialized in a bundle's blueprint. To achieve my goal, we have to register all the bundles on the same bus and apply the interceptor to that bus. With that we can control all the requests flowing on the bus.

Common Interceptor

public class CommonInterceptor extends AbstractPhaseInterceptor<Message> {

    public CommonInterceptor() {
        super(Phase.PRE_PROTOCOL);
    }

    public void handleMessage(Message message) throws Fault {
        /**
         * Here write whatever logic you want to implement on each HTTP call sent to your project.
         *
         * This interceptor will be called on every request that is received by the container and then will be sent
         * to the relevant bundle/class for handling.
         */
        String url = (String) message.get(URL_KEY_);
        String method = (String) message.get(Message.HTTP_REQUEST_METHOD);

        LOGGER.debug("################### Authentication Interceptor Validating Request : " + url + "####################");

        Map<String, List<String>> headers = Headers.getSetProtocolHeaders(message);
        if (headers.containsKey(X_AUTH_TOKEN)) {
            return;
        } else {
            message.getInterceptorChain().abort();
        }
    }
}

Above is the common interceptor code, where you can do anything with the request that is sent to your server. In the constructor I assign the phase to which the interceptor will be hooked up. There are several phases in CXF; you can get information about them at this link: Phases in CXF.

Extending AbstractFeature:

public class InterceptorManager extends AbstractFeature {

    private static final String COMMON_BUS_NAME = "javapitshop_bus";
    private static final Logger LOGGER = LoggerFactory.getLogger(InterceptorManager.class);
    private static final Interceptor<Message> COMMON_INTERCEPTOR = new CommonInterceptor();

    protected void initializeProvider(InterceptorProvider provider, Bus bus) {
        if (COMMON_BUS_NAME.equals(bus.getId())) {
            LOGGER.debug(" ############## Registering Common Interceptor on BUS ##############");
            bus.getInInterceptors().add(COMMON_INTERCEPTOR);
        } else {
            LOGGER.error(" ############## Bus Id: '" + bus.getId() + "' doesn't match the system bus id ##############");
        }
    }
}

In the above code I extend the AbstractFeature class and hook up the initializeProvider method. Then I have given a name to our common bus. Basically, whenever any OSGi bundle gets installed, it registers itself with the bus. At that point we check whether the bundle has the desired bus id. That bus id is unique system-wide: all the bundles having this bus id will be registered to the same bus, and every request related to those bundles will be sent to the CommonInterceptor first.
Bus Registration In Bundles:

<cxf:bus id="javapitshop_bus">
    <cxf:features>
        <cxf:logging />
    </cxf:features>
</cxf:bus>

To register the bundles with the same bus, you have to give an id to that bus and register it in the bundle's blueprint.xml file. Do this in all relevant bundles: all those bundles will be assigned the same bus, and the CommonInterceptor will automatically apply to all of them.

You can download the complete source code from my Github.

Reference: A common CXF Request Interceptor for all OSGi Bundles from our JCG partner Ch Shan Arshad at the Java My G.Friend blog....
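The allow/abort decision inside CommonInterceptor boils down to a single header check. Here is a container-free sketch of just that logic – the AuthCheck class, the allow method and the "X-Auth-Token" header name are my own illustrative assumptions, not CXF API:

```java
import java.util.List;
import java.util.Map;

// Illustrative stand-in for the interceptor's decision: continue down the
// chain only when the auth-token protocol header is present.
public class AuthCheck {
    static final String X_AUTH_TOKEN = "X-Auth-Token";

    // Returns true when the request should continue; false means the
    // interceptor would call message.getInterceptorChain().abort().
    static boolean allow(Map<String, List<String>> headers) {
        return headers.containsKey(X_AUTH_TOKEN);
    }

    public static void main(String[] args) {
        Map<String, List<String>> withToken = Map.of(X_AUTH_TOKEN, List.of("secret"));
        Map<String, List<String>> withoutToken = Map.of();

        System.out.println(allow(withToken));    // prints true  -> chain continues
        System.out.println(allow(withoutToken)); // prints false -> chain aborted
    }
}
```

In the real interceptor the same check runs once per request, on whichever bundle the bus routes the message to.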

Making Side Projects With New Technologies

(Captain Obvious mantle on) You are a software engineer and maybe you have a side project – something that you do at home in your spare time. If you don't, go ahead and have one – no life outside is better than a few more hours of programming. Unwitty jokes aside, having a side project is indeed a very useful practice (read on). A side-project is sometimes thought of as "the thing that would make you rich and you won't have to program ever again". It very rarely is, so we'd better view it as "the thing that would sound cool when I speak about it". But apart from the motivational/coolness aspect, side-projects have a very important practical consequence – they make you a better programmer. Of course, every extra hour of doing something makes you better at it, but a side-project is even better, because you are the one that makes all the decisions – what to do, how to do it, when to do it, what technologies to use. I'll focus a bit more on the last point. Not only can you choose the technologies to use, you can choose technologies that you don't know yet (imagine going to your manager at the beginning of a project and asking him to build it with a language or framework that nobody on the team has ever used). And that's what I'm doing – for most of my side-projects I choose technologies that I haven't used before. I get to learn new frameworks, tools and languages (a.k.a. "technology"), and get relatively good with them. That's the way I learned JSF, Android, Scala, AWS and more. Learning a technology by itself is not the most motivating endeavor, but learning it as part of a project, as part of building something meaningful, is a different thing – it comes naturally. The obvious practical bonus of all this is that you become more "hireable".
Having a technology in your skillset makes you more eligible for certain positions than other people – knowing a bit of Scala and AWS makes you way more qualified for a "Scala full-stack engineer" role than someone with just Java and Linux knowledge. Another scenario is when a new project starts and you get to pick the technologies: you can now say "I have experience with JSF, let's build the front-end with that" (and that's exactly what has happened to me). Now, a clarification is due about the "new" word in the title. I don't intend it to mean "untested, overhyped crap"; I mostly mean "new to you", something that you haven't used. It might be an already stable technology, or something that is gaining traction but your conservative company is never going to try. Of course, trying something "fresh" is also good, as being an early adopter is sometimes rewarding. Should you make side-projects with technologies you are familiar with? Of course, and I've done so as well – if the subject of the project is way more interesting than the technologies themselves (e.g. an algorithmic composer). But it is way better to use at least one new thing. By the way, that's not relevant only for "youngsters". The "big, fat architect" needs a bit of the side-project experience too, otherwise he risks becoming irrelevant pretty soon. In a way, I think side projects are the way for developers to enrich their skillset and stay up to date. Learning only the technologies you need at work can make you forget how to learn, forget what programmers' curiosity is – and that's just bad. And constantly exploring the programming world not only gives you particular skills with a given technology, but also broadens your general engineering mindset.

Reference: Making Side Projects With New Technologies from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

A Tech Lead Paradox: Delivering vs Learning

Agile Manifesto signatory Jim Highsmith talks about riding paradoxes in his approach to Adaptive Leadership. A leader will find themselves choosing between two solutions or two situations that compete against each other. A leader successfully "rides the paradox" when they adopt an "AND" mindset instead of an "OR" mindset. Instead of choosing one solution over another, they find a way to satisfy both situations, even though they contradict one another. A common Tech Lead paradox is the case of Delivering versus Learning.

The case for delivering

In the commercial world of software development, there will always be pressure to deliver software that satisfies user needs. Without paying customers, companies cannot pay their employees. The more software meets user needs, the more a company earns, and the more the company can invest in itself. Business people will always be asking for more software changes, as there is no way of knowing if certain features really do meet user needs. Business people do not understand (and cannot be expected to fully understand) what technical infrastructure is needed to deliver features faster or more effectively. As such, they will always put pressure on to deliver software faster. From a purely money-making point of view, it is easy to interpret delivering software as the way of generating more earnings.

The case for learning

Software is inherently complex. Technology constantly changes. The problem domain shifts as competitors release new offerings and customer needs change in response and evolve through constant usage. People who have certain skills leave a company, and new people, who have different skills, join. Finding the right balance of skills to match the current set of problems is a constant challenge. From a technologist's point of view, learning about different technologies can help solve problems better. Learning about completely different technologies opens up new opportunities that may lead to new product offerings.
But learning takes time.

The conflict

For developers to do their job most effectively, they need time to learn new technologies and to improve their own skills. At the same time, if they spend too much time learning, they cannot deliver enough to help the company reach its goals, and the company may not earn enough money to compensate its employees and, in turn, its developers. Encouraging learning at the cost of delivering also potentially leads to technology for technology's sake – where developers use technology to deliver something, but what they deliver may not solve user needs, and the whole company suffers as a result.

What does a Tech Lead do?

A Tech Lead needs to keep a constant balance between finding time to learn and delivering the right thing effectively. It will often be easier for a Tech Lead to succumb to the pressure of delivering over learning. Below is advice for how you can keep a better balance between the two.

Champion for some time to learn

Google made famous their 20% time for developers. Although not consistently implemented across the entire organisation, the idea has been adopted by several other companies to give developers some creative freedom. 20% is not the only way. Hack days, like Atlassian's ShipIt days (renamed from FedEx days), also set aside some explicit, focused time to allow developers to learn and play.

Champion learning that addresses user needs

Internally run hack days encourage developers to unleash their own ideas on user needs, where they get to apply their own creativity and often learn something in the process. They often get to play with technologies and tools they do not use during their normal week, but the outcome is often focused on a "user need" basis, with more business investment (i.e. time) going towards a solution that makes business sense – and not just technology for the sake of technology.
Capture lessons learned

In large development teams, the same lesson could be learned by different people at different times. This often means duplicated effort that could have been spent learning different or new things. A Tech Lead can encourage team members to share what they have learned with other team members to spread the lessons. Some possibilities I have experienced include:

Running regular learning "show-and-tell" sessions – team members run a series of lightning talks or code walkthroughs around problems recently encountered and how they went about solving them.
Updating a FAQ page on a wiki – allows team members to share "how to do common tasks" that are applicable in their own environment.
Sharing bookmark lists – teams create a list of links to interesting reads based on problems they have encountered.

Encourage co-teaching and co-learning

A Tech Lead can demonstrate their support for a learning environment by encouraging everyone to be a student and a teacher at the same time. Most team members will have different interests and strengths, and a Tech Lead can encourage members to share what they have. Encouraging team members to run brown bag sessions on topics that enthuse them fosters an atmosphere of sharing.

Weekly reading list

I know of a few Tech Leads who send a weekly email with interesting reading links on a wide variety of technology-related topics. Although they do not expect everyone to read every link, each one is hopeful that one of those links will be read by someone on their team.

Reference: A Tech Lead Paradox: Delivering vs Learning from our JCG partner Patrick Kua at the THEKUA.COM@WORK blog....

Make Stories Small When You Have “Wicked” Problems

If you read my Three Alternatives to Making Smaller Stories, you noticed one thing. In each of those examples, the problem was in the teams' ability to show progress and create interim steps. But what about when you have a "wicked" problem, when you don't know if you can create the answer? If you are a project manager, you might be familiar with the idea of "wicked" problems from the book Wicked Problems, Righteous Solutions: A Catalog of Modern Engineering Paradigms. If you are a designer/architect/developer, you might be familiar with the term from Rebecca Wirfs-Brock's book, Object Design: Roles, Responsibilities, and Collaborations. You see problems like this in new product development, in research, and in design engineering. You see it when you have to do exploratory design, where no one has ever done something like this before. Your problem requires innovation. Maybe your problem requires discussion with your customer or your fellow designers. You need consensus on what is a proper design. When I taught agile to a group of analog chip designers, they created landing zones, where they kept making tradeoffs to fit the timebox they had for the entire project, to make sure they made the best possible design in the time they had available. If you have a wicked problem, you have plenty of risks. What do you do with a risky project?

Staff the project with the best people you can find. In the past, I have used a particular kind of "generalizing specialist," the kind where the testers wrote code. The kind of developers who were also architects. These are not people you pick off the street. These are people who are—excuse me—awesome at their jobs. They are not interchangeable with other people. They have significant domain expertise in how to solve the problem. That means they understand how to write code and test.

Help those generalizing specialists learn how to ask questions at frequent points in the project.
In my inch-pebble article, I said that with a research project, you use questions to discover what you need to know. The key is to make those questions small enough that you can show progress every few days, or at least once a week. Everyone in the project needs to build trust. You build trust by delivering. The project team builds trust by delivering answers, even if they don't deliver working code. You always plan to replan. The question is how often? I like replanning often. If you subscribed to my Reflections newsletter (before the Pragmatic Manager), back in 1999 I wrote an article about wicked projects and how to manage the risks.

Help the managers stop micromanaging. The job of a project manager is to remove obstacles for the team. The job of a manager is to support the team. Either of those manager-types might help the team by helping them generate new questions to ask each week. Neither has the job of asking "when will you be done with this?" See Adam Yuret's article The Self-Abuse of Sprint Commitment.

Now, in return, the team solving this wicked problem owes the organization an update every week or, at the most, every two weeks about what they are doing. That update needs to be a demo. If it's not a demo, they need to show something. If they can't in an agile project, I would want to know why. Sometimes, they can't show a demo. Why? Because they encountered a Big Hairy Problem. Here's an example. I suffer from vertigo due to loss of (at least) one semi-circular canal in my inner ear. My otoneurologist is one of the top guys in the world. He's working on an implantable gyroscope. When I started seeing him four years ago, he said the device would be available in "five more years." Every year he said that. Finally, I couldn't take it anymore. Two years ago, I said, "I'm a project manager. If you really want to make progress, start asking questions each week, not each year.
You won’t like the fact that it will make your project look like it’s taking longer, but you’ll make more progress.” He admitted last year that he took my advice. He thinks they are down to four years and they are making more rapid progress. I understand if a team learns that they don’t receive the answers they expect during a given week. What I want to see from a given week is some form of a deliverable: a demo, answers to a question or set of questions, or the fact that we learned something and we have generated more questions. If I, as a project manager/program manager, don’t see one of those three outcomes, I wonder if the team is running open loop. I’m fine with any one of those three outcomes. They provide me value. We can decide what to do with any of those three outcomes. The team still has my trust. I can provide information to management, because we are still either delivering or learning. Either of those outcomes provides value. (Do you see how a demo, answers or more questions provides those outcomes? Sometimes, you even get production-quality code.) Why do questions work? The questions work like tests. They help you see where you need to go. Because you, my readers, work in software, you can use code and tests to explore much more rapidly than my otoneurologist can. He has to develop a prototype, test in the lab and then work with animals, which makes everything take longer. Even if you have hardware or mechanical devices or firmware, I bet you simulate first. You can ask the questions you need answers to each week. Then, you answer those questions. Here are some projects I’ve worked on in the past like this:Coding the inner loop of an FFT in microcode. I knew how to write the inner loop. I didn’t know if the other instructions I was also writing would make the inner loop faster or slower. (This was in 1979 or so.) Lighting a printed circuit board for a machine vision inspection application. 
We had no idea how long it would take to find the right lighting. We had no idea what algorithm we would need. The lighting and algorithm were interdependent. (This was in 1984.) With clients, I’ve coached teams working on firmware for a variety of applications. We knew the footprint the teams had to achieve and the dates that the organizations wanted to release. The teams had no idea if they were trying to push past the laws of physics. I helped the team generate questions each week to direct their efforts and see if they were stuck or making progress. I used the same approach when I coached an enterprise architect for a very large IT organization. He represented a multi-thousand IT organization who wanted to revamp their entire architecture. I certainly don’t know architecture. I know how to make projects succeed and that’s what he needed. He used the questions to drive the projects.The questions are like your tests. You take a scientific approach, asking yourself, “What questions do I need to answer this week?” You have a big question. You break that question down into smaller questions, one or two that you can answer (you hope) this week. You explore like crazy, using the people who can help you explore. Exploratory design is tricky. You can make it agile, also. Don’t assume that the rest of your project can wait for your big breakthrough.  Use questions like your tests. Make progress every day. I thank Rebecca Wirfs-Brock for her review of this post. Any remaining mistakes are mine.Reference: Make Stories Small When You Have “Wicked” Problems from our JCG partner Johanna Rothman at the Managing Product Development blog....

Java performance tuning survey results (part II)

This is the second post in a series where we analyze the results of the performance tuning survey conducted in October 2014. If you have not read the first part yet, we recommend starting here. This second part focuses on monitoring Java applications for performance issues. In particular, we try to answer the following questions:

How do people find out about performance issues?
What are the symptoms of such issues?
How often do such issues affect end users?
What tools are used to monitor the applications?

Finding out about the performance problem

Before investigating any performance incident, one needs to be aware that it exists. We asked the respondents to describe the channels through which they discovered the presence of the problem. 286 people responded by listing 406 channels. Considering that most of our respondents were from the engineering side, we were truly surprised that more than 58% of the respondents listed monitoring software as the source for awareness. At the same time, just 38% had load/stress tests to alert them. This data confirms what we see in our daily job – most companies do not have the possibility to run load tests; creating and maintaining such tests takes time and is often skipped. The eleven respondents categorized as "Other" were mostly referring to procedural activities, such as external performance audits.

Symptoms of the performance problem

With this question we wished to understand the symptoms of the problem. 286 respondents listed 462 symptoms. By far the most common symptom triggering further research is excessive resource usage (such as CPU, memory, IO, etc.). 205, or 72%, of the respondents listed this as one of the symptoms. Apparently monitoring end-user transactions is less widespread – given its more complex setup, the majority of systems are still monitored from the resource side, without end-user transactions in mind.
On the other hand, the severity of performance-related issues is well illustrated by the fact that 17% of the respondents learned about the issue only after a complete service outage.

Impact to end users?

Next, we wanted to understand whether the issue at hand was affecting end users. 284 responses gave us the following insight: the 82% of respondents answering "Yes" verified our gut feeling – performance gets attention only when the related issues start impacting end users. The business side tends to focus on adding new or improving existing functionality, leaving non-functional requirements such as performance without the attention they might deserve. Only when the impact on performance is so significant that end users start complaining do some resources get allocated to overcome the issue at hand.

Monitoring solutions used

One of the potentially most intriguing insights from the survey was the current monitoring landscape – we asked the respondents to identify the monitoring solutions they are using in production. 284 respondents listed 365 tools in use, as some respondents were using up to five tools to monitor their deployments. The places on the podium are somewhat surprising:

The most common answer to the question was "None", meaning that 21% of the respondents used no tools whatsoever to monitor the production site.
The most commonly used tool is still the 15-year-old Nagios. 51 people (or 18% of the respondents) listed Nagios as one of the tools they use for monitoring.
Third place, listed as "Other", consisted of 38 different tools which each got 1-2 mentions. So we can say that the number of players in the market is large, and only some of the tools have managed to gather any meaningful market share.

Next in this list: NewRelic, Zabbix, AppDynamics and Oracle Enterprise Manager were mentioned in between 7 and 13% of the cases.
NewRelic and AppDynamics were kind of expected to have a widespread deployment base, but the frequency of Zabbix and Oracle Enterprise Manager deployments is definitely unexpected. What is also worth mentioning is the number of self-built solutions and the amount of JVM tooling. The self-built solution option was not even among our list of answers, so having 6% of the respondents building their own monitoring solutions is somewhat surprising. The tail of the results contains tools mentioned four or more times. It is rather odd to see the large APM vendors (CA, Compuware and BMC) being beaten by the simplest tool possible – namely Pingdom. As the survey was listed on our site, we do admit that Plumbr's position in this list is most likely biased, so take our place in this list with a healthy grain of salt.

Reference: Java performance tuning survey results (part II) from our JCG partner Ivo Mägi at the Plumbr blog....

A Virtual Mesos Cluster with Vagrant and Chef

This is a republished guest blog post by Ed Ropple. Ed is a platform engineer with Localytics. His ambition is to enable other software developers to be more productive and less error-prone. You can find his original article here. Since starting at Localytics in Februmarch or so, I’ve found myself thrown into a bunch of new-ish stuff. My prior ops experience was much more “developer moonlighting as a sysadmin”, rather than buzzword-compliant DevOps Ninja Powers. At Localytics I’ve been leveling those up pretty quick, though, and there’s some fun stuff I’d like to talk about a little. But we need to figure out what we want to make public first, and it’ll probably end up on the company blog before it ends up here, so I’m going to natter on a bit about something I’m doing in my spare time: setting up a Mesos cluster and populating it with apps for a side project or two. What’s Mesos? Glad you asked, figment of my imagination! Mesos is a compute management framework that was born in the AMP Lab at UC Berkeley. The idea behind Mesos and other platform-as-a-service (PaaS) projects is to take your entire data center, the whole kit and caboodle of it, and treat it as a single heterogeneous compute environment. Apps don’t run on servers (well, they do, but only after a fashion); instead they run on the whole damn data center. There are a few other tools that act vaguely similar to Mesos, among them Deis and Flynn. All of them are deeply in hock to the Google Omega paper, which you should go read because Google does this stuff at scale. The differences between the various clusterization options are largely in scope–Mesos is a fair bit more ambitious than Deis and Flynn—and the tooling each project’s selected for their stuff. You’ll also see references to Marathon along the way, too. It’s a scheduler for Mesos that acts sort of like an init system—it provides a lot of the brains of the system. I’ll be adding it as I go along. 
I found this video on Mesos, Marathon, and Docker to be really helpful for thinking about this stuff; I suggest you watch it. I’m funnier than they are, but they actually know what they’re doing. (How dare they.)

Hardware

Unfortunately, I don’t have a data center in my house. I mean, I’ve considered a rack, but I pay enough for air conditioning during the summer as it is. So my available hardware is a little constrained. My former VM server, now a sort-of-Docker playground (an Ivy Bridge i5 with 32GB of RAM and a few terabytes of disk), is the best candidate for the job, but it’s also running my home VPN and I don’t want to hose the machine beyond my meager powers of resuscitation. (Yet.) So for the first bits of this series, I’m going to be building a three-node Mesos cluster on my MacBook Pro.

Getting Started

First off: I’m going to be uploading my stuff as I go to mesos-experiments on Github. I’ll tag the state as of each post; this post will contain blog-1 and blog-2. Despite Mesos being an Apache project, most of the useful documentation is instead on Mesosphere‘s site. Mesosphere is the commercial arm for Mesos. Their “learn” page is a little intimidating. Like, I know what all the words mean, but I’ll be damned if I know what they mean all strung together. But not knowing the first thing about something has never stopped me before, so on we go. Since I’m using Vagrant on OS X, I was tempted to give vagrant-mesos a spin, but that has a real problem for me in that it comes pre-built and I won’t understand what I’m doing with it. So, soup-to-nuts it is. Mesos provides instructions for Ubuntu 14.04, so I went ahead and grabbed a box from the depths of the Internet.

vagrant box add --name ubuntu-14.04-chef https://oss-binaries.phusionpassenger.com/vagrant/boxes/latest/ubuntu-14.04-amd64-vbox.box

(Thanks for the box, Phusion! You’re the best company I know absolutely nothing about.)
Anyway, this Vagrant box has Chef and Puppet installed; I’m not a partisan, unless the parti against which I’m sanning is Chef Server, ’cause I have had enough of that for one lifetime. So Chef Solo it is. Let’s init us some Vagrant:

vagrant init

The default Vagrantfile is filled with comments that at this point in my Vagrant life I don’t need or want, so after deleting approximately one Pete worth of comments and cargo-culting me some chef-solo, here’s what I’ve got:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu-14.04-chef"
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.synced_folder './scripts', '/opt/scripts'

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = [ "./chef/cookbooks", "./chef/librarian-cookbooks" ]
    chef.add_recipe "edcanhack_mesos"

    chef.json = {}
  end
end

The astute among us will notice that this refers to a nonexistent Chef cookbook. And the astute among us can shaddap, thankyouverymuch, I’m getting to that. To avoid spamming your eyeballs out, I’ll omit the connective tissue of the system setup I’m working with (it’s in the repo) and just include the interesting bits of my cookbooks1. For now, the recipe is pretty straightforward:

include_recipe "ubuntu"
include_recipe "python"

MESOS_DEB_URL = "http://downloads.mesosphere.io/master/ubuntu/14.04/mesos_0.19.0~ubuntu14.04%2B1_amd64.deb"
MESOS_EGG_URL = "http://downloads.mesosphere.io/master/ubuntu/14.04/mesos-0.19.0_rc2-py2.7-linux-x86_64.egg"

case node[:platform]
when "ubuntu"
  %w"zookeeperd default-jre python-setuptools python-protobuf curl".each { |p| package p }
else
  raise "ubuntu only!"
end

remote_file "Mesos .deb" do
  source MESOS_DEB_URL
  path "/tmp/mesos.deb"
end

dpkg_package "Installing Mesos .deb" do
  source "/tmp/mesos.deb"
end

remote_file "Mesos .egg" do
  source MESOS_EGG_URL
  path "/tmp/mesos.egg"
end

bash "Installing Mesos .egg" do
  code "easy_install /tmp/mesos.egg"
end

These are the underlying prerequisites, as per Mesosphere’s prerequisites page. I haven’t yet extended this for multi-server, obviously, so we will be running what they call a singleton cluster for the time being. Unfortunately, while you can (inadvisably) reboot a machine within Chef, Vagrant will lose all its marbles when you try–guess how I found that one out–so we’ll do that ourselves too.

vagrant up && vagrant halt && vagrant up # AND UP AND DOWN AND UP AND DOWN
vagrant ssh -c "curl -i localhost:5050/master/health"

And… now I have a Mesos-ready box. (This is tag blog-1.)

HTTP/1.1 200 OK
Date: Fri, 11 Jul 2014 06:32:28 GMT
Content-Length: 0

App app app

So I came to Localytics as a Scala Person, and I remain so, but I’ve ended up becoming acquainted with Ruby in no small part through doing an uncomfortable amount of Chef wizardry. Still no clue as to Rails, though. Good news for me right now, though, is that Mesosphere has a tutorial for putting a Play app onto a Mesos cluster. So lemme get to it. If you’ve clicked on the barrage of links thus far, you probably know that Mesos uses a two-level scheduler. The top-level scheduler is the Mesos master. Its job is to fairly (or unfairly, if that’s your bag) allocate resources across the apps running in its frameworks. Frameworks generate jobs for the Mesos slaves to run, passing them back up through the master and down to the slave nodes to run them. Mesos comes with support for Hadoop and Spark and a bunch of other fashionable belles at the distributed computing ball. Mesosphere has also developed a framework of their own, called Marathon, which is designed to execute long-running tasks.
Mesosphere differs from Flynn, Deis, and Heroku in that it doesn’t use a git push model; instead you just point it at a zip file and it does its thing off of that. Mesosphere has a PlayHello project that we’ll start with. More pressingly, though, Step 1 of their tutorial says “go download Marathon!” and I can’t be bothered to do that manually, so let’s go drop that into our Chef cookbook. (And, while we’re at it, gem install marathon_client so we have that for later…)

user "marathon"

remote_file "Marathon .tar.gz" do
  source MARATHON_URL
  path "/tmp/marathon.tar.gz"
end

bash "Installing Marathon" do
  code <<-ENDCODE
    tar xzf /tmp/marathon.tar.gz
    mv /tmp/marathon /opt/marathon
    chown -R marathon /opt/marathon
  ENDCODE
end

cookbook_file "Adding Marathon upstart script" do
  source "marathon_upstart.conf"
  path "/etc/init/marathon.conf"
end

Once again, the eagle-eyed will see a cookbook_file directive in there. The blog-2 tag includes a “marathon_upstart.conf” file. It originates from a gist by jalaziz that handles things pretty much as I would have–though it has some neat idiomatic tricks I’d never seen, like how it handles file-existence tests (which are in retrospect pretty obvious, but new to me). I updated it to use Marathon’s start script rather than calling it directly and chopped out some extraneous bits. The directories in that upstart script are somewhat magical, in that Mesos dumps some stuff to the file system in places that Marathon expects them. When we get to a cluster that isn’t a singleton we’ll need to provide some configuration in /etc/default/marathon. Anyway, reprovision with vagrant provision, bounce the servers again, and you should have Marathon running on localhost:8080. (I’ve also added a port forward to/from port 8080 and upped the VM RAM to 2GB in the Vagrantfile.)
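Marathon can also be driven without the UI, through its REST API. Here is a hedged sketch in Ruby of what posting an app definition looks like (the /v2/apps endpoint and field names are assumptions based on Marathon's API of that era; check them against your version). The request is only built here, not sent:

```ruby
require "json"
require "net/http"
require "uri"

# An app definition like the one the tutorial uses (values illustrative).
app = {
  "id"        => "hello-test",
  "cmd"       => "./Hello-*/bin/hello -Dhttp.port=$PORT",
  "mem"       => 512,
  "cpus"      => 0.5,
  "instances" => 1,
  "uris"      => ["http://downloads.mesosphere.io/tutorials/PlayHello.zip"]
}

# Build (but don't send) the POST Marathon would receive on its API port.
req = Net::HTTP::Post.new(URI("http://localhost:8080/v2/apps"))
req["Content-Type"] = "application/json"
req.body = JSON.generate(app)
```

Actually sending it would be a Net::HTTP.start call against the VM, but for a singleton cluster on a laptop the UI is just as quick.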
Click the friendly ‘new app’ button.

ID: hello-test
Command: ./Hello-*/bin/hello -Dhttp.port=$PORT
Memory: 512
CPUs: 0.5
Instances: 1
URIs: http://downloads.mesosphere.io/tutorials/PlayHello.zip

You should see an instance of the app come up along with a randomized port for it to play on. The first time I did this, I did not get the wonderful deployed app I was hoping for. Instead–nothing. Zero instances came up in my app info. Scaled up, no good; scaled down, ditto. Figured out I’d misconfigured Vagrant with too little RAM for the instance, so went and bumped that up. New failure: the app would start and immediately die2. Ordinarily, Mesos installs its logs to $MESOS_HOME, but (distressingly) installing it via the .deb package puts things in…places. Eventually I tracked down the output of my workers to a directory six levels deep within /tmp and for a moment doubted my fortitude, but continued on nevertheless. Look what I found:

root@ubuntu-14:/tmp/mesos/slaves/20140712-044358-16842879-5050-1169-0/frameworks/20140711-071659-16842879-5050-1167-0000/executors/hello-test_0-1405141392233/runs/latest# cat stderr

SONOFA:

WARNING: Logging before InitGoogleLogging() is written to STDERR
I0712 05:03:12.295583 3228 fetcher.cpp:73] Fetching URI 'http://downloads.mesosphere.io/tutorials/PlayHello.zip'
I0712 05:03:12.296902 3228 fetcher.cpp:123] Downloading 'http://downloads.mesosphere.io/tutorials/PlayHello.zip' to '/tmp/mesos/slaves/20140712-044358-16842879-5050-1169-0/frameworks/20140711-071659-16842879-5050-1167-0000/executors/hello-test_0-1405141392233/runs/4750cdfa-fcc2-44fb-a078-819edc3fdad7/PlayHello.zip'
sh: 1: unzip: not found

But! That mystery solved, we now have… the stock Play hello-world app. This is the blog-2 tag in mesos-experiments, and that’ll do it for now.
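That missing unzip is the kind of thing that belongs back in the Chef recipe rather than in a one-off apt-get. A sketch of the obvious fix, adding it to the package list from the recipe earlier (package names as in that recipe; this is plain Ruby standing in for the Chef DSL):

```ruby
# The slave shells out to unzip when unpacking fetched app bundles, so add
# it to the packages the recipe installs (the original list, plus unzip):
MESOS_PACKAGES = %w"zookeeperd default-jre python-setuptools python-protobuf curl unzip"

# In the recipe body this becomes: MESOS_PACKAGES.each { |p| package p }
MESOS_PACKAGES.include?("unzip")  # => true
```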
For my next trick (and next post), I’ll build a separate secondary slave and add some Vagrant magic to let it scale to an arbitrary number of slave nodes.

Reference: A Virtual Mesos Cluster with Vagrant and Chef from our JCG partner Manuel Weiss at the Codeship blog.

An introduction to REST

REST, or Representational State Transfer, is an architectural style, or more simply, a set of constraints. We will look at the constraints REST imposes for web apps, but some highlights are:

Uniform interfaces: all resources are identified by URIs (think: links)
It relies on a stateless, client-server, cacheable communications protocol (think: HTTP)
Interaction with resources is via a set of standard methods (think: HTTP verbs)

REST can be viewed as a lightweight alternative to mechanisms like RPC (Remote Procedure Calls) and Web Services protocols (SOAP, WSDL, etc.), but it is much more than that too! It is not an exaggeration to say that REST has been used to guide the design and development of the architecture for the modern Web. The term REST was defined in 2000 by Roy Fielding in his doctoral dissertation at UC Irvine.

Background
What is REST?
HTTP
HATEOAS
Summary
Terminology
Sources, references, bibliography

Background

A brief history of WWW

Back in 1989, Tim Berners-Lee first proposed the “WorldWideWeb” project. Berners-Lee was a software engineer working at CERN, the large particle physics laboratory in Switzerland. Many scientists worked at CERN for periods of time, then returned to their own labs around the world, and so there was a need for them to be able to share and link their research documents. To facilitate this, Berners-Lee proposed three technologies that would become the foundation of the Web:

HTTP: Hypertext Transfer Protocol. HTTP is a protocol, or a formal set of rules, for exchanging information over the web. It allows for the retrieval of linked resources from across the Web.
HTML: HyperText Markup Language. The publishing format for the Web, including the ability to format documents and link to other documents and resources.
URI: Uniform Resource Identifier.
A kind of “address” that is unique to each resource on the Web.

(We are not going to delve into HTML here; instead the focus is on HTTP and a little on URIs.)

HTTP 1.0

The first documented version of HTTP was HTTP V0.9 (1991), which had only one method, namely GET, which would make a request to a server and the server would respond with an HTML page. It was a good start, but it needed many enhancements to support the exploding popularity of the Web. So, Berners-Lee teamed up with researcher Roy Fielding, and others, to develop HTTP 1.0. HTTP 1.0 transformed HTTP from a trivial request/response application to a true messaging protocol. It described a complete message format for HTTP, explained how it should be used for client requests and server responses, and supported multiple media types. Unfortunately, some of the limitations of HTTP 1.0 were increasingly causing problems as web usage grew. For example, a separate connection to the server is made for every resource request. There was also a lack of support for caching and proxying.

HTTP 1.1

Jump forward to 1994. The web was growing really fast. It was an exciting time. The WWW was becoming a buzzword and getting a huge amount of press. Sites like Hotmail, Yahoo and AltaVista were taking off. Google didn’t even exist yet. But the architecture and technologies on which the web was built were beginning to creak at the seams. So, TBL and Fielding, who were researchers at MIT and UCI respectively, and a number of other leading technologists, including folks from Compaq, Xerox and Microsoft, got together to specify and improve the WWW infrastructure through the IETF working groups on URI, HTTP, and HTML. Through this work, HTTP 1.1 was born. Some of the big improvements introduced in HTTP 1.1 were:

Multiple Host Name Support: Allows one Web server to handle requests for many different virtual hosts.
Persistent Connections: Allows a client to send multiple requests for documents in a single TCP session.
Partial Resource Selection: A client can ask for only part of a resource rather than the entire document, reducing load and required bandwidth.
Better Caching and Proxying Support.
Content Negotiation: Allows the client and server to exchange information to help select the best resource when multiple are available.
Better Security: Defines authentication methods and is generally more “security aware”.

Work began on HTTP 1.1 in 1994, and it was officially released in 1997. And what version of HTTP is in use today? Still 1.1, over 15 years later! Considering how quickly technology changes, that is an incredible achievement. How many projects have you worked on that have stood the test of time so well?

Lessons learned

Fielding had been involved in the web from its infancy and experienced first-hand its rapid growth, both as a user and as an architect. He understood better than most the reasons for its success, and so after the release of HTTP 1.1, Fielding began to write about what he had learned working on HTTP and the other web technologies (Fielding has also been involved in the development of HTML and URIs, and was a co-founder of the Apache HTTP Server project). He took the knowledge of the web’s architectural principles and presented them as a framework of constraints, or as he called them, an architectural style. Specifically, Fielding wrote a PhD thesis focused on the rationale behind, and key architectural principles of, the design of the modern Web architecture. Fielding’s thesis was published in 2000, and was called Architectural Styles and the Design of Network-based Software Architectures. I have to admit that I have not read many PhD theses, but his must be among the most readable of them. It even contains Monty Python quotes! In it, Fielding discusses Network-based Application Architectures and Architectural Styles, before introducing and defining the term REST.
Although introduced in Fielding’s paper, Fielding noted that “REST has been used to guide the design and development of the architecture for the modern Web”. So, while the term REST didn’t come about until afterwards, it is the design style behind HTTP. Fielding didn’t ‘invent’ REST in his paper; instead he developed it in collaboration with his colleagues while working on HTTP and URIs, but it was in his paper that the term was coined and defined. Fielding tried to answer the question of why the Web has been such a successful platform by explaining its guiding principles, and how they can be correctly applied when building distributed systems. So, want to build a distributed web app? Not sure what architecture to use? Why not base it on the Web’s architecture! Before diving in to what REST is, feel free to read the terminology section at the end.

What is REST?

REST is an architectural style, or a set of constraints, for distributed hypermedia systems.

Constraints

Imagine you were designing a freeway. You might impose rules such as cars only (no trucks, pedestrians or bicycles), all traffic must travel between 40 and 70 mph, and no traffic lights (only on and off ramps). Although these rules constrain the system, they make it work better overall; in this case they allow more traffic to flow more freely and faster. REST imposes constraints on web apps, or distributed hypermedia systems, in order to enable those apps to scale and perform as desired. What were the constraints that Fielding suggested?

1) Client Server

By separating the user interface concerns from the data storage concerns, we improve the portability of the user interface across multiple platforms and improve scalability by simplifying the server components. Separation also allows the components to evolve independently.

2) Stateless

Communication must be stateless. Each request from client to server must contain all of the information necessary to understand the request.
Session state is kept entirely on the client. Reliability is improved because it eases the task of recovering from partial failures. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and simplifies implementation.

3) Cache

Cache constraints require that the data within a response to a request be labeled as cacheable or non-cacheable. If a response is cacheable, then a client cache is given the right to reuse that response data for later, equivalent requests.

4) Uniform Interface

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components. Implementations are decoupled from the services they provide, which encourages independent evolvability.

5) Layered System

The layered system style allows an architecture to be composed of layers by constraining component behavior such that each component cannot “see” beyond the immediate layer with which it is interacting.

6) Code-On-Demand

The final addition to our constraint set for REST comes from the code-on-demand style. REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing features to be downloaded after deployment improves system extensibility. However, it also reduces visibility, and thus is only an optional constraint within REST. Those are the constraints that make up REST. Next, HTTP.

HTTP

HTTP has a very special role in web architecture, and with REST in particular. Note however that REST doesn’t have to use HTTP.
There are other application-level protocols that could, possibly, be candidates for use with REST: the Gopher protocol was widely used in the early days of the web, although it was overtaken by HTTP; Fielding himself has been working on a new HTTP-like protocol called waka; there is also a Google-developed protocol called SPDY that has goals of reducing web page load latency and improving web security. However, in practice REST and HTTP are closely related. Fielding not only introduced REST, he was also one of the principal authors of the HTTP specification, so it is not too surprising that the two are closely linked. We will dive in to HTTP and look at some example requests & responses and the HTTP methods and response codes that are commonly used.

Example Request

An example of an HTTP request:

GET /index.html HTTP/1.1
Host: www.example.com

This is made up of the following components:

Method: GET
URI: /index.html
Version: HTTP/1.1
Headers: Host: www.example.com
Body: empty in this case

Example Response

An example response, made up of a status line (version, status code, reason phrase), then headers, a blank line, and the body:

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Server: Apache/ (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
ETag: "3f80f-1b6-3e1cb03b"
Content-Type: text/html; charset=UTF-8
Content-Length: 131
Accept-Ranges: bytes
Connection: close

<html>
<head>
  <title>An Example Page</title>
</head>
<body>
  Hello World
</body>
</html>

In the above request example, the verb is GET. HTTP verbs are also known as methods, and there are 8 supported in HTTP 1.1 (RFC 2616). First we will look at the 4 most commonly used verbs: GET, PUT, DELETE, POST. Then we will look at the lesser used ones: HEAD, OPTIONS, TRACE and CONNECT. However, before we dive in to the methods, let’s take a look at some characteristics, or groupings, of the messages. Specifically, the concept of safe methods and idempotency.
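To make the message format concrete, the example request above can be assembled by hand. A small Ruby sketch (purely illustrative; nothing is sent over the wire):

```ruby
require "uri"

uri = URI("http://www.example.com/index.html")

# Request line, one header, a blank line ending the headers, empty body.
request_line = "GET #{uri.path} HTTP/1.1"
host_header  = "Host: #{uri.host}"
message      = [request_line, host_header, "", ""].join("\r\n")

puts message
```

That blank line is load-bearing: it is what separates the headers from the (here empty) body.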
HTTP Methods

Safe Methods

Safe methods are methods that do not modify resources; they are used only for retrieval. (Strictly speaking, some things may change, e.g. logs, caches etc., but the representation of the resource in question must not.) Safe methods are: HEAD, GET, OPTIONS and TRACE. By contrast, non-safe methods such as POST, PUT, DELETE and PATCH are intended to cause side effects on the server.

Idempotent Methods

Idempotent methods can be called many times without different outcomes. Call one once, or a thousand times, and the result will be the same. For example, multiplying by 1 is an idempotent operation. So is the assignment ‘a=4;’. More formally, “Methods can also have the property of idempotence in that (aside from error or expiration issues) the side-effects of N>0 identical requests is the same as for a single request.” [7] The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.

Common Methods

And now, a look at the 4 most commonly used verbs: GET, PUT, DELETE, POST.

GET

Retrieve the resource identified by the URI. The simplest and most common method! The one you use every time you access a web page.

PUT

Store the supplied entity under the supplied URI. If it already exists, update it (and return either 200 OK or 204 No Content). If not, create it with that URI (and return a 201 Created response).

POST

Request that the server accept the entity as a new subordinate of the resource identified by the URI. For example: submit data from a form to a data-handling process; post a message to a mailing list or blog. In plain English, create a resource.

DELETE

Requests that the server delete the resource identified by the URI.

PUT vs POST

OK, before we go on to the other lesser-used HTTP verbs, let’s take a look at the 2 commonly used verbs above that are most often confused: PUT and POST.
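The idempotence distinction between PUT and POST is easy to simulate with a toy in-memory "server" (plain Ruby, purely illustrative: PUT-style writes go to a client-chosen URI, POST-style writes get a server-assigned one):

```ruby
store = {}

# PUT: the client names the URI, so repeats overwrite the same resource.
put = ->(uri, value) { store[uri] = value }

# POST: the server mints a fresh URI each time, so repeats accumulate.
counter = 0
post = ->(value) { counter += 1; store["/items/#{counter}"] = value }

3.times { put.call("/users/sabram", "Shaun") }  # still just one resource
3.times { post.call("Shaun") }                  # three new resources

store.keys.sort
# => ["/items/1", "/items/2", "/items/3", "/users/sabram"]
```

Three identical PUTs leave the store exactly as one would; three identical POSTs create three resources. That is the whole idempotence argument in four lines of state.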
The official HTTP 1.1 doc (RFC 2616) states: “The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request — the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource.” That however is a bit of a mouthful! PUT and POST can both be used to create or update a resource, but here are some (sometimes contradictory!) rules of thumb:

PUT is for update; POST is for create.
PUT is idempotent; POST is not.
Who creates the URL of the resource? PUT is for creating when you know the URL of the thing you will create; POST is for creating when the server decides the URL for you (you just know the URL of the “factory” or manager that does the creation).
There is also a recent argument (from ThoughtWorks, for example) that says don’t use PUT, always POST (and POST events instead).

Short answer? There is no short answer! Use your best judgement. See some useful discussions at this stackoverflow posting.

Less Common Methods

The other 4 lesser-used HTTP verbs are: HEAD, OPTIONS, TRACE and CONNECT.

OPTIONS

Request for information about the capabilities of a server, e.g. request a list of HTTP methods that may be used on this resource. It would look something like this:

200 OK
Allow: HEAD,GET,PUT,DELETE,OPTIONS

A somewhat obscure part of the HTTP standard. Potentially useful, but few web services actually seem to make it available.

HEAD

Identical to GET except that the server MUST NOT return a message-body in the response. Used for obtaining meta-information about the entity implied by the request without transferring the entity-body itself. Why use it? Useful for testing links, e.g.
for validity and accessibility.

TRACE

Used to invoke a remote, application-layer loopback of the request message. Plain English: echoes back the received request so that a client can see what (if any) changes or additions have been made by intermediate servers. TRACE is often disabled since it can represent a security risk.

CONNECT

CONNECT is for use with a proxy that can dynamically switch to being a tunnel. Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

HTTP Response codes

See http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

1xx Informational: indicates a provisional response, e.g. 100. In plain English (from the user’s perspective): FYI, OK so far, and the client should continue with the request.
2xx Successful. In plain English: all good.
3xx Redirection. In plain English: something moved.
4xx Client Error. In plain English: you messed up.
5xx Server Error. In plain English: we messed up.

Why REST and HTTP? Because HTTP provides all the characteristics required by REST.

Client Server

HTTP is a “protocol in the client-server computing model”, so it meets the first requirement of REST. With HTTP, often the client is a web browser and the server is a piece of software serving content such as Apache, IIS or Nginx. With the “Internet of Things”, however, things are becoming less conventional. The client could be your toaster!

Stateless

HTTP is a stateless protocol. HTTP servers are not required to keep any information or state between requests. This can be circumvented by using things like cookies and sessions, but Fielding makes it clear in his dissertation that he strongly disagrees with cookies.

Cache

HTTP supports caching via three basic mechanisms: freshness, validation, and invalidation.

Uniform Interface

Using interfaces to decouple a client/caller from the implementation is a common concept in software.

Identification of resources

HTTP supports hyperlinks.
Anything of interest can be a resource, and those resources can be identified uniquely by a URI. How do you identify a book? example.com/books/1234. How do you identify a user? example.com/users/sabram. All resources are identified by a uniform interface – the URI.

Manipulation of resources through these representations

URIs, in conjunction with the HTTP methods, can be used to manipulate resources.

Self-descriptive messages

In HTTP, messages can describe themselves using media (MIME) types, status codes, and headers to, for example, indicate their cacheability.

Hypermedia as the engine of application state (A.K.A. HATEOAS)

More later! See below.

Uniform Interface, in plain English

OK, that covers what Fielding had to say in his dissertation about Uniform Interfaces, but what does it all mean in plain English? I mentioned earlier that using interfaces to decouple a client/caller from the implementation is a common concept in software. Similarly, when designing GUIs, you ideally have a very simple user interface, but one that still allows the user to carry out complex tasks. Generally, a simple interface that provides the client/user all the capabilities they need while hiding the underlying complexities of the implementations is the ideal goal, but tough to achieve. But that is exactly what Fielding achieved with REST. The interface is simply a link (or more specifically, a URI)! Which is about the simplest interface you can think of. Combined with the other HTTP capabilities such as methods and media types, and suddenly you have an incredibly powerful but deceptively simple, and widely understood, method of communicating intentions.

Layered System

The idea behind a layered system is that a client doesn’t know (or care) whether it is connected to the end server, or to an intermediary one. This feature can improve scalability via load-balancing and caches etc. Layers may also enforce security policies. HTTP supports layering via proxy servers and caching.
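The URI-as-uniform-interface idea is easy to see by pulling apart one of the example identifiers from earlier with Ruby's stdlib (a sketch; the book URI is the hypothetical one from the text):

```ruby
require "uri"

# The hypothetical book resource from the identification examples.
uri = URI("http://example.com/books/1234")

uri.scheme  # => "http"
uri.host    # => "example.com"
uri.path    # => "/books/1234"
```

Swap the method (GET, PUT, DELETE) and the very same identifier supports reading, updating and deleting the resource, which is the uniform interface in miniature.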
Code-On-Demand

This is actually an optional constraint in REST. For example, you may request a resource, and get that resource with some JavaScript.

HATEOAS

Clients know a few simple fixed entry points to the application but have no knowledge beyond that. Instead, they transition (states) by using those links, and the links they lead to. In other words, state transitions are driven by the client based on options the server presents. If you think of hypermedia as simply links, then “Hypermedia as the engine of application state” is simply using the links you discover to navigate (or transition state) through the application. And remember that it doesn’t need to be a user clicking on links; it can just as easily be another software component that is initiating the state transitions. To quote from Fielding himself: “Representational State Transfer is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through an application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use.”

Summary

What is REST? Pretty URLs? An alternative to SOAP or RPC? Really it is an architectural style, or a set of constraints, that captures the fundamental principles that underlie the Web. The emphasis of REST is on simplicity, and utilizing the power of existing web technologies and standards such as HTTP and URIs.

Uniform interfaces: All resources are identified by URIs.
HTTP Methods: All resources can be created/accessed/updated/deleted by standard HTTP methods.
Stateless: There is no state on the server.

Terminology

Let’s define some useful terminology that is relevant in any discussion of REST.
Architecture

Wikipedia: Software architecture refers to the high level structures of a software system, the discipline of creating such structures, and the documentation of these structures. The architecture of a software system is a metaphor, analogous to the architecture of a building.
Fielding: A software architecture is an abstraction of the run-time elements of a software system during some phase of its operation. [1]
Fowler: Architecture is a shared understanding of the system design, including how the system is divided into components and how the components interact through interfaces. [3]

Architectural style

Fielding: An architectural style is a named, coordinated set of architectural constraints that restricts the roles and features of architectural elements. [1]
An architectural style is a named collection of architectural design decisions that (1) are applicable in a given development context, (2) constrain architectural design decisions that are specific to a particular system within that context, and (3) elicit beneficial qualities in each resulting system. [4]

REST or RESTful?

What is the difference between the terms REST and RESTful? From what I have read, there is not a lot of difference. We know that REST is an architectural style for distributed software. Services conforming to that architectural style, i.e. to the REST constraints, are referred to as being ‘RESTful’. Or to put it another way: REST is a noun, RESTful is an adjective.

Hypertext

In plain English: Hypertext is text with links.
Wikipedia: Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text which the reader can immediately access, or where text can be revealed progressively at multiple levels of detail.
Roy Fielding: The simultaneous presentation of information and controls such that the information becomes the affordance through which the user obtains choices and selects actions [slide #50]

Hypermedia
In plain English: Interactive multimedia. If you see a booth at a mall with video, sound etc., that is multimedia. If you can interact with it – click links, or control the content using buttons or the like – it is hypermedia.
Wikipedia: Hypermedia, an extension of the term hypertext, is a nonlinear medium of information which includes graphics, audio, video, plain text and hyperlinks.
Roy Fielding: Hypermedia is defined by the presence of application control information embedded within, or as a layer above, the presentation of information. [1]

Resource
In plain English: A resource can be anything real, but typical examples would be files, web pages, customers, accounts etc.
Wikipedia: any physical or virtual component of limited availability within a computer system.
Roy Fielding: Any information that can be named can be a resource: a document or image, a collection of other resources, a non-virtual object (e.g. a person). In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. [1]
REST in Practice: A resource is anything we expose to the Web, from a document or video clip to a business process or device. From a consumer's point of view, a resource is anything with which that consumer interacts while progressing toward some goal. [6]

URI – Uniform Resource Identifier
Wikipedia: a string of characters used to identify a name of a resource
W3C: Uniform Resource Identifiers (URIs, aka URLs) are short strings that identify resources in the web: documents, images, downloadable files, services, electronic mailboxes, and other resources.
What is the difference between a URI and a URL? The difference is subtle, and I don't think terribly important.
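The URI/URL/URN distinction can be illustrated with the JDK's own java.net.URI class, which parses both locator-style and name-style identifiers. The example identifiers below are made up for illustration:

```java
import java.net.URI;

public class UriVsUrl {
    public static void main(String[] args) {
        // A URL: identifies a resource by location and retrieval mechanism.
        URI url = URI.create("https://example.com/orders/42");
        System.out.println(url.getScheme()); // https
        System.out.println(url.getHost());   // example.com
        System.out.println(url.getPath());   // /orders/42

        // A URN: names a resource (here, a book by ISBN) without
        // saying where it lives or how to fetch it.
        URI urn = URI.create("urn:isbn:0451450523");
        System.out.println(urn.getScheme());             // urn
        System.out.println(urn.getSchemeSpecificPart()); // isbn:0451450523

        // Both are valid URIs; only the first tells you how to retrieve
        // the resource, which is what makes it a URL as well.
    }
}
```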
A URI identifies a resource by location and/or a name. A URI does not have to specify the location of a specific representation; if it does, it is also a URL. A Uniform Resource Locator (URL) is a subset of the Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it. So all URLs are URIs, but not all URIs are URLs. A URI can also be a URN (Uniform Resource Name). Or: URLs and URNs are special forms of URIs. For the most part, I think you can treat URIs and URLs as being the same thing. I may be flamed for saying that, but it keeps things simpler!

Sources, references, bibliography
1. Architectural Styles and the Design of Network-based Software Architectures (Fielding, 2000)
2. A little REST and relaxation (Fielding)
3. Who Needs An Architect? (Fowler)
4. Software Architecture: Foundations, Theory and Practice; R. N. Taylor, N. Medvidović and E. M. Dashofy; Wiley, 2009
5. Representational state transfer (Wikipedia)
6. REST in Practice (Webber, Parastatidis, Robinson)
7. HTTP 1.1 (RFC 2616)

Reference: An introduction to REST from our JCG partner Shaun Abram at the Shaun Abram blog.

PrimeFaces 5.0 DataTable Column Toggler

I have had an opportunity to work a bit with the PrimeFaces 5.0 DataTable, and the enhancements are great. Today, I wanted to show just one of the new features: the DataTable column toggler. This feature enables one to choose which columns are displayed via a list of checkboxes. To use a column toggler, simply add a commandButton that displays a picklist of column choices to the header of the table, as follows:

    <p:commandButton icon="ui-icon-calculator" id="toggler" style="float: right;" type="button" value="Columns"/>

Next, add a columnToggler component to the table header, and specify the DataTable ID as the data source. In this case, the DataTable ID is "datalist":

    <p:columnToggler datasource="datalist" trigger="toggler"/>

That's it! In the end, a button is added to the header of the table, which allows the user to specify which columns are displayed (Figure 1). The full source listing for the DataTable in this example is as follows:

    <p:dataTable id="datalist"
                 paginator="true"
                 rowKey="#{item.id}"
                 rows="10"
                 rowsPerPageTemplate="10,20,30,40,50"
                 selection="#{poolController.selected}"
                 selectionMode="single"
                 value="#{poolController.items}"
                 var="item"
                 widgetVar="poolTable">

        <p:ajax event="rowSelect" update="createButton viewButton editButton deleteButton"/>
        <p:ajax event="rowUnselect" update="createButton viewButton editButton deleteButton"/>

        <f:facet name="header">
            <p:commandButton icon="ui-icon-calculator" id="toggler"
                             style="float: right;" type="button" value="Columns"/>
            <p:columnToggler datasource="datalist" trigger="toggler"/>
            <div style="clear:both"/>
        </f:facet>

        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_id}"/>
            </f:facet>
            <h:outputText value="#{item.id}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_style}"/>
            </f:facet>
            <h:outputText value="#{item.style}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_shape}"/>
            </f:facet>
            <h:outputText value="#{item.shape}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_length}"/>
            </f:facet>
            <h:outputText value="#{item.length}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_width}"/>
            </f:facet>
            <h:outputText value="#{item.width}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_radius}"/>
            </f:facet>
            <h:outputText value="#{item.radius}"/>
        </p:column>
        <p:column>
            <f:facet name="header">
                <h:outputText value="#{bundle.ListPoolTitle_gallons}"/>
            </f:facet>
            <h:outputText value="#{item.gallons}"/>
        </p:column>

        <f:facet name="footer">
            <p:commandButton id="createButton" icon="ui-icon-plus" value="#{bundle.Create}"
                             actionListener="#{poolController.prepareCreate}"
                             update=":PoolCreateForm" oncomplete="PF('PoolCreateDialog').show()"/>
            <p:commandButton id="viewButton" icon="ui-icon-search" value="#{bundle.View}"
                             update=":PoolViewForm" oncomplete="PF('PoolViewDialog').show()"
                             disabled="#{empty poolController.selected}"/>
            <p:commandButton id="editButton" icon="ui-icon-pencil" value="#{bundle.Edit}"
                             update=":PoolEditForm" oncomplete="PF('PoolEditDialog').show()"
                             disabled="#{empty poolController.selected}"/>
            <p:commandButton id="deleteButton" icon="ui-icon-trash" value="#{bundle.Delete}"
                             actionListener="#{poolController.destroy}"
                             update=":growl,datalist"
                             disabled="#{empty poolController.selected}"/>
        </f:facet>
    </p:dataTable>

Happy coding with PrimeFaces 5.0! This example was generated using PrimeFaces 5.0 RC 2. The final release should be out soon!

Reference: PrimeFaces 5.0 DataTable Column Toggler from our JCG partner Josh Juneau at the Josh's Dev Blog – Java, Java EE, Jython, Oracle, and More… blog.
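The page assumes a poolController backing bean exposing items, selected, prepareCreate and destroy. The listing below is a minimal, hypothetical sketch of such a bean, reconstructed only from the EL expressions in the page; the Pool entity, its fields and the bean's internals are assumptions, and the CDI annotations are omitted:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical entity backing each table row; fields mirror the page's columns.
class Pool {
    private final int id;
    private final String style;

    Pool(int id, String style) { this.id = id; this.style = style; }
    public int getId() { return id; }
    public String getStyle() { return style; }
    // shape, length, width, radius and gallons would follow the same pattern
}

// Sketch of the bean the page references as #{poolController}.
// In a real application this would be annotated @Named and @ViewScoped.
public class PoolController {
    private final List<Pool> items = new ArrayList<>();
    private Pool selected;

    public List<Pool> getItems() { return items; }
    public Pool getSelected() { return selected; }
    public void setSelected(Pool selected) { this.selected = selected; }

    // Invoked by the create button's actionListener to stage a new row.
    public void prepareCreate() { selected = new Pool(items.size() + 1, "In-ground"); }

    // Invoked by the delete button's actionListener to remove the selected row.
    public void destroy() { items.remove(selected); selected = null; }
}
```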

The Definition of a Tech Lead

There are many names for leadership roles in software development, such as Senior Developer, Architect, Technical Lead, Team Lead, and Engineering Manager – and these are just a few. To me, the Technical Leader (Tech Lead) plays a unique and essential role that the others cannot.

The Definition
The Short: A Tech Lead is a developer who is responsible for leading a development team.
The Long: Leading a development team is no easy task. An effective Tech Lead establishes a technical vision with the development team and works with developers to turn it into reality. Along the way, a Tech Lead takes on traits that other roles may have, such as a Team Lead, Architect or Software Engineering Manager, but they remain hands-on with code. To make the most effective choices and to maintain trust and empathy with developers, a Tech Lead must code. In "The Geek's Guide to Leading Teams" presentation, I talked about an ideal minimum of about 30% of their time.

Not just a Team Lead
Early in my career, I worked on a team that had both a Tech Lead and a Team Lead. The Team Lead didn't have much of a technical background and had a strong focus on the people side and tracking of tasks. They would have 1-to-1s with people on the team, and co-ordinate with outside stakeholders to schedule meetings that didn't interrupt development time where possible.
While the Team Lead focused on general team issues, the Tech Lead focused on technical matters that affected more than just one developer. They stepped in on heated technical debates, and worked with outside stakeholders to define technical options and agree on solutions for future streams of work. They wrote code with the other developers and sometimes called for development "huddles" to agree on a direction.

More hands-on than an Engineering Manager
You manage things, you lead people – Grace Hopper
Any reasonably-sized IT organisation has an Engineering Manager.
They are responsible for more than one development team, and have tasks that include:
- Maintaining a productive working environment for development teams
- Acquiring an appropriate budget for development to support business goals
- Representing the technology perspective at a management or board level
- Establishing and/or co-ordinating programmes of work (delivered through development)
- Being responsible for overall IT headcount
Depending on the size of an organisation, an Engineering Manager may also be called a Chief Technical Officer (CTO), Chief Information Officer (CIO) or Head of Software Development. Although an Engineering Manager represents technology, they are often far removed from a development team and rarely code. In contrast, a Tech Lead sits with developers, very much focused on moving them towards their goal. They work to resolve technical disputes, and are watchful of technical decisions that have long-term consequences. A Tech Lead works closely with the Engineering Manager to build an ideal work environment.

A good Architect looks like a Tech Lead
The Architect role ensures the overall application architecture suitably fits the business problem, for now and for the future. In some organisations, Architects work with the team to establish and validate their understanding of the architecture. A suitable amount of standardisation helps productivity; too much standardisation kills innovation. Some organisations have the "Ivory Tower Architect" who swoops in to consult, standardise and document. They float from team to team, start new software projects, and rarely follow up to see the result of their initial architectural vision. An effective Architect looks like a good Tech Lead. They establish a common understanding of what the team is aiming for, and make adjustments as the team learns more about the problem and the technology chosen to solve it.

What is a Tech Lead again?
A successful Tech Lead takes on responsibilities that sit with roles such as the Team Lead, the Architect and the Engineering Manager. They bring a unique blend of leadership and management skills applied in a technical context with a team of developers. The Tech Lead steers a team towards a common technical vision, writing code at least 30% of the time.

Reference: The Definition of a Tech Lead from our JCG partner Patrick Kua at the THEKUA.COM@WORK blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact