

Java EE Pitfalls #1: Ignore the default lock of a @Singleton

EJB Singleton beans were introduced by the EJB 3.1 specification and are often used to store cached data; in other words, we try to improve the performance of our application by using a Singleton. In general, this works quite well, especially if there are not too many calls in parallel. But that changes if we ignore the default lock and the number of parallel calls increases.

Sensible defaults

Let's start with some Java code and see how the sensible default of the lock works out. The following snippet shows a simple EJB Singleton with a counter and two methods: method1 writes the current value of the counter to the log, and method2 counts from 0 to 100.

```java
@Singleton
@Remote(SingletonRemote.class)
public class DefaultLock implements SingletonRemote {

    Logger logger = Logger.getLogger(DefaultLock.class.getName());

    private int counter = 0;

    @Override
    public void method1() {
        this.logger.info("method1: " + counter);
    }

    @Override
    public void method2() throws Exception {
        this.logger.info("start method2");
        for (int i = 0; i < 100; i++) {
            counter++;
            logger.info("" + counter);
        }
        this.logger.info("end method2");
    }
}
```

As you can see, there is no lock defined. What do you expect to see in the log file if we call both methods in parallel?

```
2014-06-24 21:18:51,948 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 5) method1: 0
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) start method2
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 1
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 2
2014-06-24 21:18:51,950 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 3
...
2014-06-24 21:18:51,977 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 99
2014-06-24 21:18:51,977 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 100
2014-06-24 21:18:51,978 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) end method2
2014-06-24 21:18:51,978 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 6) method1: 100
2014-06-24 21:18:51,981 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 7) method1: 100
2014-06-24 21:18:51,985 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 8) method1: 100
2014-06-24 21:18:51,988 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 9) method1: 100
```

OK, that might be a little unexpected: the default is a container-managed write lock on the entire Singleton. This is a good default to avoid concurrent modifications of the attributes, but a bad default if we want to perform read-only operations. In this case, the serialization of the method calls results in lower scalability and lower performance under high load.

How to avoid it?

The answer to that question is obvious: we need to take care of the concurrency management. As usual in Java EE, there are two ways to handle it. We can do it ourselves, or we can ask the container to do it.

Bean Managed Concurrency

I do not want to go into too much detail regarding Bean Managed Concurrency. It is the most flexible way to manage concurrent access: the container allows concurrent access to all methods of the Singleton, and you have to guard its state as necessary. This can be done by using synchronized and volatile. But be careful, quite often this is not as easy as it seems.

Container Managed Concurrency

Container Managed Concurrency is much easier to use, but not as flexible as the bean-managed approach. In my experience, however, it is good enough for common use cases.
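With bean-managed concurrency, the container accepts concurrent calls and the bean has to guard its own state. A minimal plain-Java sketch of what that guarding looks like (the EJB annotations @Singleton and @ConcurrencyManagement(ConcurrencyManagementType.BEAN) are omitted so the snippet stays self-contained, and the class and method names are made up for illustration):

```java
// Sketch of the state guarding a bean-managed singleton has to do itself.
public class GuardedCounter {

    private int counter = 0;

    // counter++ is a read-modify-write operation, so volatile alone is not
    // enough; synchronized makes the whole increment atomic.
    public synchronized void increment() {
        counter++;
    }

    public synchronized int current() {
        return counter;
    }
}
```

The important point is that a volatile field would only guarantee visibility of the latest value; the increment itself still needs mutual exclusion.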
As we saw in the log, container-managed concurrency is the default for an EJB Singleton: the container sets a write lock for the entire Singleton and serializes all method calls. We can change this behavior and define read and write locks at method and/or class level. This is done by annotating the Singleton class or its methods with @javax.ejb.Lock(javax.ejb.LockType). The LockType enum provides the values WRITE and READ to define an exclusive write lock or a shared read lock. The following snippet shows how to set the lock of method1 and method2 to LockType.READ.

```java
@Singleton
@Remote(SingletonRemote.class)
public class ReadLock implements SingletonRemote {

    Logger logger = Logger.getLogger(ReadLock.class.getName());

    private int counter = 0;

    @Override
    @Lock(LockType.READ)
    public void method1() {
        this.logger.info("method1: " + counter);
    }

    @Override
    @Lock(LockType.READ)
    public void method2() throws Exception {
        this.logger.info("start method2");
        for (int i = 0; i < 100; i++) {
            counter++;
            logger.info("" + counter);
        }
        this.logger.info("end method2");
    }
}
```

As already mentioned, we could achieve the same by annotating the class with @Lock(LockType.READ) instead of annotating both methods. OK, if everything works as expected, both methods should be accessed in parallel. So let's have a look at the log file.

```
2014-06-24 21:47:13,290 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 10) method1: 0
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) start method2
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 1
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 2
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 3
...
2014-06-24 21:47:13,306 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 68
2014-06-24 21:47:13,307 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 69
2014-06-24 21:47:13,308 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 3) method1: 69
2014-06-24 21:47:13,310 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 70
2014-06-24 21:47:13,310 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 71
...
2014-06-24 21:47:13,311 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 76
2014-06-24 21:47:13,311 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 77
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 2) method1: 77
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 78
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 79
...
2014-06-24 21:47:13,313 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 83
2014-06-24 21:47:13,313 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 84
2014-06-24 21:47:13,314 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 5) method1: 84
2014-06-24 21:47:13,316 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 85
2014-06-24 21:47:13,316 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 86
2014-06-24 21:47:13,317 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 87
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 88
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 6) method1: 89
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 89
2014-06-24 21:47:13,319 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 90
...
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 99
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 100
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) end method2
```

Conclusion

At the beginning of this article, we found out that Java EE uses a container-managed write lock by default. This results in serialized processing of all method calls and lowers the scalability and performance of the application. This is something we need to keep in mind when implementing an EJB Singleton. We had a look at the two existing options to control the concurrency management: Bean Managed Concurrency and Container Managed Concurrency. We used the container-managed approach to define a read lock for both methods of our singleton. This is not as flexible as the bean-managed approach, but it is much easier to use and sufficient in most cases.
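Conceptually, the container-managed READ/WRITE locks behave much like a java.util.concurrent.locks.ReentrantReadWriteLock: any number of readers may proceed in parallel, while a writer gets exclusive access. Here is a small plain-Java sketch of that semantics (an analogy only, not what the container does internally; the class name is made up):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockTypeAnalogy {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int counter = 0;

    // roughly what @Lock(LockType.READ) grants: shared access
    public int read() {
        lock.readLock().lock();
        try {
            return counter;
        } finally {
            lock.readLock().unlock();
        }
    }

    // roughly what @Lock(LockType.WRITE) grants: exclusive access
    public void write() {
        lock.writeLock().lock();
        try {
            counter++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```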
We just need to provide an annotation and the container will handle the rest.

Reference: Java EE Pitfalls #1: Ignore the default lock of a @Singleton from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog.

Java EE 8 – Deliver More Apps to More Devices

If there’s one thing I dislike about summer, it is the fact that there isn’t much news to share or talk about. Whoever decided to put Java Day Tokyo into this boring time of the year did a pretty good job and gave me an opportunity to write a blog post about the new and upcoming Java EE 8 specification, enriched with some more thoughts and pointers. As announced on the Java EE 7 EG mailing list at the beginning of June, the new EE 8 JSR is going to be filed shortly (before JavaOne).

Contents of EE 8

Unlike the first version of EE 7, which was totally dominated by the word “cloud” and later re-aligned with the hard facts, this new Java EE version will basically stick to three different areas of improvement:

- HTML 5 / Web Tier Enhancements
- CDI Alignment / Ease-of-Development
- Cloud Enablement

All three can be seen as a continued evolution of what EE 7 already delivered, and there is no real surprise in it at all. Head over to The Aquarium to read more about the details.

Cameron Purdy about EE 8 at Java Day Tokyo 2014

Hidden Gems – What might come up at JavaOne

The Java Day Tokyo was held recently, and with Cameron Purdy as a keynote speaker about Java EE and its general direction (mp4 download, 363MB), this probably was one of the first chances to see what the overall story for JavaOne will be with regards to the platform. As Oracle should have learned, the Java community isn’t interested in big and unpleasant surprises; strategic directions are communicated and prepared a bit more carefully. We have all seen and heard about the IoT hype and the efforts everybody puts into it. This obviously also seems to have some outreach into Java EE. Besides the general topics and contents of EE 8, the Purdy keynote also contained a slide titled “Powering Java Standard in the Cloud – Deliver More Apps to More Devices with Confidence”.

Java Standards in the Cloud.

And yes, you are correct in thinking that this is EE 7 coverage. It actually is.
But at least for me, it is the first time that individual features have been isolated from individual technical specifications and put into a complete, strategic picture outlining use-cases in the enterprise. It will be interesting to see if there is something more like this at JavaOne, and how much IoT we will see in EE 8 when it finally hits the road.

Reference: Java EE 8 – Deliver More Apps to More Devices from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

How to Handle Incompetence?

We’ve all had incompetent colleagues: people who tend to write bad code, make bad decisions or just can’t understand some of the concepts in the project(s). And it’s never trivial to handle this scenario. Obviously, the easiest solution is to ignore it. And if you are not a team lead (or something similar), you can probably pretend that the problem doesn’t exist (and occasionally curse and refactor some crappy code). There are two types of incompetent people: those who know they are not that good, and those who are clueless about their incompetence. The former are usually junior and mid-level developers, and they are expected to be less experienced. With enough coaching and kindly pointing out their mistakes, they will learn. This is what all of us have gone through. The latter are the harder breed. They are the “senior” developers who have become senior only due to the number of years they’ve spent in the industry, regardless of their actual skills or contribution. They tend to produce crappy code and misunderstand assignments, but on the other hand reject (kindly or more aggressively) any attempt to be educated. Because they’re “senior”, and who are you to argue with them? In extreme cases this may be accompanied by an inferiority complex, which in turn may result in clumsy attempts to prove they are actually worthy. In other cases it may involve pointless discussions on topics they do not want to admit they are wrong about, just because admitting that would mean they are inferior. They will often use truisms and general statements instead of real arguments, in order to show they actually understand the matter and that it’s you who is wrong, e.g. “we must do things the right way”, “we must follow best practices”, “we must do more research before making this decision”, and so on. In a way, it’s not exactly their incompetence that is the problem; it’s their attitude and their skewed self-image. But enough layman psychology. What can be done in such cases?
A solution (depending on the labour laws) is to just lay them off. But with a tight market, approaching deadlines, and company hierarchy and rules, that’s probably not easy. And such people can still be useful; it’s just that “utilizing” them is tricky. The key is minimizing the damage they do without wasting the time of other team members. Note that “incompetent” doesn’t mean “can’t do anything at all”; it’s just not up to the desired quality. Here’s an incomplete list of suggestions:

- code reviews – you should absolutely have these, even if you don’t have incompetent people. If a piece of code is crappy, you can say that in a review.
- code style rules – you should have something like a Checkstyle or PMD rule set (or whatever is relevant to your language). And it won’t be offensive when you point out warnings from style checks.
- pair programming – often simple code-style checks can’t detect bad code, and especially a bad approach to a problem. And it may be “too late” to indicate that in a code review (there is never a “too late” time for fixing technical debt, of course). So do pair programming. If the incompetent person is not the one writing the code, his pair of eyes may be useful to spot mistakes. If he is writing the code, then the other team member might catch a wrong approach early and discuss it.
- don’t let them take important decisions or work on important tasks alone; in fact, this should be true even for the best developer out there – having more people involved in a discussion is often productive.

Did I just make some obvious engineering process suggestions? Yes. And they would work in most cases, resolving the problem smoothly. Just don’t make a drama out of it and don’t point fingers… …unless it’s too blatant. If the guy is both incompetent and has an intolerable attitude, and the team agrees on that, inform management. You have a people problem then, and you can’t solve it with a good process. Note that the team should agree.
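To make the code-style suggestion above concrete, a minimal Checkstyle configuration might look like the following (the specific checks are illustrative, not a recommendation; pick rules your team agrees on):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- flag overly long methods and overly complex logic -->
    <module name="MethodLength"/>
    <module name="CyclomaticComplexity"/>
    <!-- basic hygiene -->
    <module name="UnusedImports"/>
  </module>
</module>
```

The point is that a build-time warning from a shared rule set is impersonal, so nobody has to be the one "pointing fingers".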
But what should you do if you are alone in a team of incompetent people, or if the competent people are too unmotivated to take care of the incompetent ones? Leave. That’s not a place for you. I probably didn’t say anything useful, but the “moral” is: don’t point fingers; enforce good engineering practices instead.

Reference: How to Handle Incompetence? from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog.

More #NoEstimates

Quite an interesting conversation and reaction to the #NoEstimates post. Good questions too, and frankly, to some I don’t have answers. I’ll try anyway. Let’s start with classic project management. It tells us that in order to plan, we need to estimate cost and duration. Estimation techniques have been around for a while.

“@gil_zilberfeld all estimates probabilistic based on underlying statistics of work processes. what's alternative 4 knowing cost/sched/tech?” — Glen B. Alleman (@galleman) June 19, 2014

There’s a problem with the assumption that we can “know” stuff. We can’t know stuff about the future. Guessing, or estimating as we call it, is the current alternative. To improve, we can at most try to forecast. And we want a forecast we can trust enough to make further plans on. If confidence is important, then:

“@gil_zilberfeld So, get good enough at estimates to feel that confidence, make decisions accordingly. Shouldn't be a controversy. @galleman” — Peter Kretzman (@PeterKretzman) June 19, 2014

Sounds easy enough… Estimating is a skill. It takes knowledge of process and the ability to deduce from experience. As with other skills, you can improve your estimations. This works well if the work we’re doing is similar to what we did before. However, if history is different from the future, we’re in trouble, and in my experience, it usually is. Variations galore. In the projects I was involved in, there were plenty of unknowns: technology, algorithms, knowledge level, team capacity and availability, even mood. All of those can impact delivery dates, and therefore the “correctness” of estimations. With so many “unknown unknowns” out there, what’s the chance of a plausible estimation? We can definitely estimate the “knowns” and try to improve on the “known unknowns”, but it’s impractical to improve on estimating the rest. Yet the question remains:

“@adubism how do you determine cost to reach that schedule with needed capabilities? @PeterKretzman @gil_zilberfeld” — Glen B.
Alleman (@galleman) June 19, 2014

OK, wise guy: if estimating can yield lousy results, what’s the alternative? Agile methodologies take into account that reality is complex, and therefore build the feedback loop into short iterations. The product owner can decide to shut down the project or continue it every cycle. I think we should be moving in that direction at the organizational level. Instead of trying to predict everything, set short-term goals and checkpoints. Spend a small amount of money, see the result, then decide. Use the time you would have spent on estimating to do some work. Improving estimates is a great example of local optimization. After all, the customer would rather have a prototype in the hand than a plan on the tree. And if he wants estimates? Then we will give a rough estimate that doesn’t cost much. I know project managers won’t like this answer. I know a younger me wouldn’t either. But I refer you to the wise words of the Agile Manifesto, which apply to estimating, among other things: “We are uncovering better ways of developing software by doing it and helping others do it.” There are better ways. We’ll find them.

Reference: More #NoEstimates from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Using Git- Part -I : Basics

Introduction

Git is a popular distributed version control system created by Linus Torvalds, the creator of the Linux OS. So, as you might have guessed, it was first used for version-controlling the Linux kernel code. It is widely used in both open source and closed source software development, thanks to GitHub's popularity and Git's own feature set. Many open source foundations, such as the Eclipse Foundation, have recently moved their projects' SVN and CVS repositories to Git; you can read more about that here and here. This is a basic tutorial targeted at fellow beginners to the Git version control system. It shows a very basic Git workflow to get you started.

Installation

For Windows and Mac OS: go to the git-scm downloads site and download the installer specific to your operating system, then run the downloaded installer to install Git on your machine.

For Debian-based OS (Ubuntu/Mint): execute the following commands in a terminal window; they install Git using a PPA (personal package archive):

```shell
sudo add-apt-repository ppa:git-core/ppa
sudo apt-get update
sudo apt-get install git
```

Now that you have installed Git, you can check for a proper installation using git --version.

Note: For this tutorial, I will be using the terminal to demonstrate all Git commands. So, if you are on Windows, use the bash command prompt that ships with the Git installation; on Mac or Linux, use a normal terminal. The git executable has the following basic format:

```shell
git <command> <switch-options/command-options> <sub-command> <more-sub-command-options>
```

Doing Git global user configuration

Command to be used: git config

The first step after installation is to do the user configuration, that is, to set up the name and email id so that Git can identify you when you make a commit; this basically adds ownership to each commit.
Execute the following commands:

```shell
git config --global user.name <your-name>
git config --global user.email <your-email>
```

Creating a Git repository

Command to be used: git init

To track files using Git, you first need to create a Git repository. Let's create a hello-git directory and initialize a Git repository in it:

```shell
mkdir hello-git
cd hello-git
git init
```

After initializing, you will see the message that the Git repository was initialized, as shown in the above screenshot, and if you check the hello-git directory you will notice a .git directory was created. This is the directory where Git operates.

Creating files and adding them to Git

Commands to be used: git add, git commit, git status

Now, let's add a few files to our newly created Git project. I have added the hello.js and README.md files to the project directory. Let's see what Git tells us about the status of our project directory: git status lets you see the status of the files in a Git project directory.

Git file states

As we have been discussing from the start, Git tracks changes in a Git-initialized directory. When working with Git, files go through a few states, explained as follows:

- Untracked files: files which are newly added to the directory and not yet tracked by Git for version control.
- Tracked files: files which are already committed to Git but not staged or added to the index area.
- Staged / index files: files which will be added to the next commit.
- Modified / unstaged files: files which are modified but not staged.

As you can see, Git says our two files are untracked, meaning that we haven't told Git to track them. To tell it, we use the git add command. git add adds untracked or unstaged files to the staging/index area of Git.

Adding specific files:

```shell
git add <file-1> <file-2>
```

Adding all files in the directory and its sub-directories:

```shell
git add .
```
or:

```shell
git add --all
```

In our case, I am going to add all files. As you can see, the files are added to the staging area. Now we need to commit our files, that is, add them to the Git repository; for this we use git commit. In simple terms, git commit does two things: it adds the staged files to the Git repository and it records a commit log entry based on the commit message you provide:

```shell
git commit -m "<your message>"
```

For our case:

```shell
git commit -m "Initial commit for project, added hello.js and README.md"
```

After doing this, you will see that your changes are committed to the Git repository. Now, if you run git status, you will see there are no changes to commit, which means our working directory changes have been recorded by Git.

git log lets you see your older commits; for each one you can see the commit hash, the name of the author, the date on which the commit was made, and the commit message. To see more compact commit messages, use the --oneline switch on the log command, as shown above.

Doing more changes to files

Commands to be used: git diff, git add and git commit

Let's make more changes to the files in the repository and learn more Git commands. Open hello.js and README.md in your favorite editor and add a few lines to each; I have added a few lines, as you can see in the screenshots. Now, if you run git status, you can see the files are tracked but not staged for commit, as opposed to the previously untracked files. In the middle of changing files, you might want to see what has been changed since the last staging. git diff shows you the code diffs between your working directory and your last commit, for each file in the directory:

```shell
git diff <options> <file-name>
```

If you just run git diff without specifying a file name, you get the diff of all files, while if you specify a file name, it shows the diff of only that particular file. You can see the lines in light green, which start with +, indicating that these lines have been added since the last commit.
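The commands covered so far can be replayed end-to-end in a throwaway repository; here is a sketch of the whole workflow (directory and file contents are just for illustration):

```shell
set -e
repo=$(mktemp -d)                # scratch directory so we don't touch real projects
cd "$repo"
git init -q
git config user.name "Demo User"         # local config, so the commit works anywhere
git config user.email "demo@example.com"
echo "console.log('hello');" > hello.js
echo "# hello-git" > README.md
git add .                                 # stage both untracked files
git commit -q -m "Initial commit for project, added hello.js and README.md"
git log --oneline                         # compact view: one line per commit
git status                                # nothing left to commit
```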
Let's add the files to the staging area using git add:

```shell
git add .
```

Now, if you diff the repository, you will not see anything, because by default the diff command shows the diffs of unstaged files only. To see the diffs of files that have been staged, you have to run:

```shell
git diff --cached
```

Now, let's commit the files:

```shell
git commit -m "added the more contents"
```

And if you now run git log, you will see our two commits, as shown in the images. To get more information, use --stat, which also shows which files were changed; look at the following image. I hope this tutorial helped you to understand at least the basics of Git. Git has a lot of cool features you might need later as you get more advanced, so be sure to check out this blog again for more tutorials on the same.

Reference: Using Git- Part -I : Basics from our JCG partner Abhijeet Sutar at ajduke's blog.

A closer look at the Java Identity API

Before I jump into the nitty-gritty, let's take a look at some quick facts about the Java Identity API, JSR 351. This is still a work in progress.

When was the JSR initiated?
This JSR passed the Approval Ballot in October 2011, which was followed by Expert Group formation in November 2011.

Who is leading this specification?
The Java Identity API is being led by Ron Monzillo.

Expert Group?
The EG consists of representatives from Oracle, IBM, RedHat, SAP and Goldman Sachs, as well as individuals.

Specification document?
This is still in a draft phase and is available at: https://jcp.org/aboutJava/communityprocess/edr/jsr351/index.html

Reference Implementation?
Nobis is the RI for the Java Identity API and can be accessed at: https://java.net/projects/nobis/pages/Home

Introduction

If I had to explain the motive behind the Java Identity API in a single line, it would be: a Java standard for identity management. On a high level, the primary goals of this standard are to:

- Define a representation for an identity in Java.
- Facilitate secure usage (creation, exchange, governance) of these identities by defining a standard API and interaction models between identity consumers and providers.
- Provide a uniform, high-level programming model for applications to interact with identity/attribute repositories with heterogeneous domain models.

Present Challenges

Currently, the Java platform does not provide standard interfaces for managing identities. With the increasing use of internet services in day-to-day applications, and the adoption of SSO and federation, there is a need to protect network identity. Existing Java objects, such as X509Certificate and KerberosTicket, provide some means of encapsulating identity attributes, but only to a limited extent.
Instead of having disparate and non-standard models, there is a need for a set of standards to evolve which can be leveraged by application or identity framework developers to provide rock-solid support for propagation and consumption of network identity.

A Simple Analogy

I like to think of it as an API similar to JDBC or JNDI (if not the same). Both of these APIs help developers communicate with underlying data sources or naming services in a loosely coupled fashion through standard interfaces. They allow us to adopt a pluggable architecture wherein different vendor implementations can be leveraged to connect to disparate databases (be it Oracle, MySQL, Sybase, DB2; we really do not care, apart from having the vendor JARs on our class path) or LDAP servers (AD, OID, Sun Java, Apache, etc.).

How can the Java Identity API help?

This API will:

- Allow applications to interact with heterogeneous underlying identity repositories in a portable and standard fashion.
- Allow vendors to develop implementations using the Attribute Service framework to seamlessly interact with attributes in one or more repositories, e.g. Facebook, Twitter, LinkedIn, via supported protocols/APIs such as OAuth, OpenID, Facebook Connect, etc.
- Enable applications to also act as providers of attributes (this is also a part of the Attribute Service framework).
- Allow end developers to build applications on top of these implementations.
- Prevent dependency upon non-standard, proprietary implementations of identity services within applications.

Salient Features

Some of the key features/highlights of this API are listed below:

- Compatibility with Java SE 6 and Java EE 6.
- Integration with the Java security model: the existing objects within the Java security model, like Principal, Subject, Policy, etc., will be integrated within the API.
- Support for programmatic as well as annotation-driven programming models.
- Leveraging Contexts and Dependency Injection (CDI): CDI will render services such as resource injection, life cycle callbacks and, of course, dependency injection of identity attributes and references within applications via qualifiers and producers.

Key Terminologies

A brand new specification can often introduce terms or expressions which might sound vague or abstract at first. Here is a list of keywords and concepts which are intimately attached to the Java Identity API; having a basic understanding of these terminologies is important.

- Entity: nothing but a collection of attributes, e.g. a person can have attributes such as first name, last name, SSN, email, etc.
- Attribute: has a name (username, email), a value (johndoe, jdoe@test.com) and associated metadata (issuer, expiry).
- Entity Reference: a secure handle for an entity.
- Attribute Reference: a secure, value-independent handle to the attribute itself. (Note: both entity and attribute references facilitate exchange without actually exposing the associated value.)
- Attribute Repository: represents a set of contracts to be implemented in order to integrate with an identity source.
It contains the business logic to interact with the end identity repository.

- Repository Agent: bound to a specific attribute repository; it can be queried to provide a handle to the attribute repository it is attached to.
- Repository Descriptor: describes the relationship between a repository agent and the attribute repository which is bound to that agent.
- Attribute Provider: interacts with the repository agent and acts on its behalf to perform operations requested by the consumer.
- Attribute Service: a service component which is exposed directly to the client application. It provides access to high-level interfaces for interacting with and managing identities.

Core API

The Java Identity API is fairly lightweight and compact. The packages which form the core programming interface are highlighted below:

- javax.security.identity: contains the identity attribute and reference types.
- javax.security.identity.annotations: contains annotations which help provide a portable identity programming model.
- javax.security.identity.auth: contains identity attribute and reference types for use in a Java Subject or AccessControlContext.
- javax.security.identity.client: provides high-level programming interfaces to the identity attribute services.
- javax.security.identity.client.expression: contains provider-independent expressions that are used to compose attribute queries.
- javax.security.identity.client.qualifiers: defines annotations to be used as qualifiers in CDI injection of identity attributes.
- javax.security.identity.permission: consists of the permission and action values used to protect the interfaces of the attribute service.
- javax.security.identity.provider: contains interfaces that are to be implemented by attribute providers and repository agents.
Some of the important annotations, interfaces and classes of the Java Identity API are highlighted below.

Annotations:
- Identity: javax.security.identity.annotations.IDEntity
- Attribute: javax.security.identity.annotations.IdentityAttribute
- Entity Reference: javax.security.identity.annotations.EntityReference

Interfaces and classes:
- Attribute: javax.security.identity.IDAttribute
- Entity Reference: javax.security.identity.IDEntityReference
- Attribute Reference: javax.security.identity.IDAttributeReference
- Attribute Repository: javax.security.identity.provider.AttributeRepository
- Attribute Provider: javax.security.identity.provider.AttributeProvider
- Repository Agent: javax.security.identity.provider.RepositoryAgent
- Repository Descriptor: javax.security.identity.client.RepositoryDescriptor

High-Level Overview of API Usage

Applications need access to underlying repositories in order to interact with them and perform operations. The sequence below outlines the steps through which an application can leverage the API to obtain handles to the underlying identities and attributes:

- A concrete implementation of the javax.security.identity.client.LookupService interface is obtained. This encapsulates the services of javax.security.identity.client.ProviderLookupService and javax.security.identity.provider.AttributeLookupService.
- An instance of javax.security.identity.client.ProviderLookupContext is obtained as a result of binding the LookupService with an implementation of javax.security.identity.provider.RepositoryAgent.
- The ProviderLookupContext is used to get a reference to a javax.security.identity.provider.AttributeProvider that is bound to the range of entities contained in the repository identified by the ProviderLookupContext.
The AttributeProvider implementation is the gateway to the underlying identity repository and exposes CRUD-like functionality via the javax.security.identity.provider.RepositoryLookupService and javax.security.identity.provider.RepositoryUpdateService.

Reference Implementation

As with most Java standards, JSR 351 has a reference implementation, known as Nobis. It provides implementations for:

- javax.security.identity.client.LookupService, i.e. the ProviderLookupService and AttributeLookupService, to enable search/lookup of identity attributes from the repository
- javax.security.identity.provider.AttributeProvider
- javax.security.identity.provider.AttributeRepository
- javax.security.identity.client.IDPredicate, which serves as a filtration/search criterion

As a part of the implementation, the Nobis RI also provides:

- post-construct interceptors corresponding to @javax.security.identity.annotations.IDEntityProvider and @javax.security.identity.annotations.IDEntity, which are nothing but Interceptor Bindings
- a factory-like API equivalent for the above-mentioned interceptors
- a sample implementation of Facebook as an Attribute Provider, along with JPA-based and in-memory providers

Some things to look forward to:

- How is the API going to evolve and attain its final shape?
- How will it be adopted by the community?
- How will it be implemented and leveraged by products and real-world applications?

Cheers . . . . ! ! !

Reference: A closer look at the Java Identity API from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

Getting Started with Gradle: Our First Java Project

This blog post describes how we can compile and package a simple Java project by using Gradle. Our Java project has only one requirement: our build script must create an executable jar file. In other words, we must be able to run our program by using the command: java -jar jarfile.jar. Let's find out how we can fulfil this requirement.

Creating a Java Project

We can create a Java project by applying the Java plugin. We can do this by adding the following line to our build.gradle file:

apply plugin: 'java'

That is it. We have now created a Java project. The Java plugin adds new conventions (e.g. the default project layout), new tasks, and new properties to our build. Let's move on and take a quick look at the default project layout.

The Project Layout of a Java Project

The default project layout of a Java project is the following:

- The src/main/java directory contains the source code of our project.
- The src/main/resources directory contains the resources (such as properties files) of our project.
- The src/test/java directory contains the test classes.
- The src/test/resources directory contains the test resources.

All output files of our build are created under the build directory. This directory contains the following subdirectories which are relevant to this blog post (there are other subdirectories too, but we will talk about them in the future):

- The classes directory contains the compiled .class files.
- The libs directory contains the jar or war files created by the build.

Let's move on and add a simple main class to our project.

Adding a Main Class to Our Build

Let's create a simple main class which prints the words "Hello World!" to System.out. The source code of the HelloWorld class looks as follows:

package net.petrikainulainen.gradle;

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

The HelloWorld class was added to the src/main/java/net/petrikainulainen/gradle directory. That is nice.
However, we still have to compile and package our project. Let's move on and take a look at the tasks of a Java project.

The Tasks of a Java Project

The Java plugin adds many tasks to our build, but the tasks which are relevant for this blog post are:

- The assemble task compiles the source code of our application and packages it to a jar file. This task doesn't run the unit tests.
- The build task performs a full build of the project.
- The clean task deletes the build directory.
- The compileJava task compiles the source code of our application.

We can also get the full list of runnable tasks and their descriptions by running the command gradle tasks at the command prompt. This is a good way to get a brief overview of our project without reading the build script. If we run this command in the root directory of our example project, we see the following output:

> gradle tasks
:tasks

------------------------------------------------------------
All tasks runnable from root project
------------------------------------------------------------

Build tasks
-----------
assemble - Assembles the outputs of this project.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
classes - Assembles classes 'main'.
clean - Deletes the build directory.
jar - Assembles a jar archive containing the main classes.
testClasses - Assembles classes 'test'.

Build Setup tasks
-----------------
init - Initializes a new Gradle build. [incubating]
wrapper - Generates Gradle wrapper files. [incubating]

Documentation tasks
-------------------
javadoc - Generates Javadoc API documentation for the main source code.

Help tasks
----------
dependencies - Displays all dependencies declared in root project 'first-java-project'.
dependencyInsight - Displays the insight into a specific dependency in root project 'first-java-project'.
help - Displays a help message.
projects - Displays the sub-projects of root project 'first-java-project'.
properties - Displays the properties of root project 'first-java-project'.
tasks - Displays the tasks runnable from root project 'first-java-project'.

Verification tasks
------------------
check - Runs all checks.
test - Runs the unit tests.

Rules
-----
Pattern: build<ConfigurationName>: Assembles the artifacts of a configuration.
Pattern: upload<ConfigurationName>: Assembles and uploads the artifacts belonging to a configuration.
Pattern: clean<TaskName>: Cleans the output files of a task.

To see all tasks and more detail, run with --all.

BUILD SUCCESSFUL

Total time: 2.792 secs

Let's move on and find out how we can package our Java project.

Packaging Our Java Project

We can package our application by using two different tasks. If we run the command gradle assemble at the command prompt, we see the following output:

> gradle assemble
:compileJava
:processResources
:classes
:jar
:assemble

BUILD SUCCESSFUL

Total time: 3.163 secs

If we run the command gradle build at the command prompt, we see the following output:

> gradle build
:compileJava
:processResources
:classes
:jar
:assemble
:compileTestJava
:processTestResources
:testClasses
:test
:check
:build

BUILD SUCCESSFUL

Total time: 3.01 secs

The outputs of these commands demonstrate the difference between these tasks:

- The assemble task runs only the tasks which are required to package our application.
- The build task runs the tasks which are required to package our application AND runs automated tests.

Both of these commands create the first-java-project.jar file in the build/libs directory. The name of the created jar file follows the template [project name].jar, and the default name of the project is the same as the name of the directory in which it is created. Because the name of our project directory is first-java-project, the name of the created jar is first-java-project.jar.
We can now try to run our application by using the following command: java -jar first-java-project.jar. When we do this, we see the following output:

> java -jar first-java-project.jar
no main manifest attribute, in first-java-project.jar

The problem is that we haven't configured the main class of the jar file in the manifest file. Let's find out how we can fix this problem.

Configuring the Main Class of a Jar File

The Java plugin adds a jar task to our project, and every jar object has a manifest property which is an instance of Manifest. We can configure the main class of the created jar file by using the attributes() method of the Manifest interface. In other words, we can specify the attributes added to the manifest file by using a map which contains key-value pairs. We can set the entry point of our application by setting the value of the Main-Class attribute. After we have made the required changes to the build.gradle file, its source code looks as follows (the relevant part is highlighted):

apply plugin: 'java'

jar {
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

The Java SE tutorial provides more information about the manifest file. After we have created a new jar file by running either the gradle assemble or gradle build command, we can run the jar file by using the following command: java -jar first-java-project.jar. When we run our application, the following text is printed to System.out:

> java -jar first-java-project.jar
Hello World!

That is all for today. Let's find out what we learned from this blog post.

Summary

We have now created a simple Java project by using Gradle. This blog post has taught us four things:

- We know that we can create a Java project by applying the Gradle Java plugin.
- We learned that the default directory layout of a Java project is the same as the default directory layout of a Maven project.
- We learned that all output files produced by our build can be found in the build directory.
- We learned how we can customize the attributes added to the manifest file.

P.S. The example project of this blog post is available on GitHub.

Reference: Getting Started with Gradle: Our First Java Project from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

Thymeleaf – fragments and angularjs router partial views

One more of the many cool features of Thymeleaf is its ability to render fragments of templates – I have found this to be an especially useful feature to use with AngularJS. The AngularJS $routeProvider or AngularUI router can be configured to return partial views for different "paths", and using Thymeleaf to return these partial views works really well. Consider a simple CRUD flow, with the AngularUI router views defined this way:

app.config(function ($stateProvider, $urlRouterProvider) {
    $urlRouterProvider.otherwise("list");

    $stateProvider
        .state('list', {
            url: '/list',
            templateUrl: URLS.partialsList,
            controller: 'HotelCtrl'
        })
        .state('edit', {
            url: '/edit/:hotelId',
            templateUrl: URLS.partialsEdit,
            controller: 'HotelEditCtrl'
        })
        .state('create', {
            url: '/create',
            templateUrl: URLS.partialsCreate,
            controller: 'HotelCtrl'
        });
});

The templateUrl above is the partial view rendered when the appropriate state is activated. Here these are defined using JavaScript variables and set using Thymeleaf templates this way (to cleanly resolve the context path of the deployed application as the root path):

<script th:inline="javascript">
/*<![CDATA[*/
var URLS = {};
URLS.partialsList = /*[[@{/hotels/partialsList}]]*/ '/hotels/partialsList';
URLS.partialsEdit = /*[[@{/hotels/partialsEdit}]]*/ '/hotels/partialsEdit';
URLS.partialsCreate = /*[[@{/hotels/partialsCreate}]]*/ '/hotels/partialsCreate';
/*]]>*/
</script>

Now, consider one of the fragment definitions, say the one handling the list:

file: templates/hotels/partialList.html

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org" layout:decorator="layout/sitelayout">
<head>
    <title th:text="#{app.name}">List of Hotels</title>
    <link rel="stylesheet" th:href="@{/webjars/bootstrap/3.1.1/css/bootstrap.min.css}"
          href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css"/>
    <link rel="stylesheet" th:href="@{/webjars/bootstrap/3.1.1/css/bootstrap-theme.css}"
          href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap-theme.css"/>
    <link rel="stylesheet" th:href="@{/css/application.css}"
          href="../../static/css/application.css"/>
</head>
<body>
<div class="container">
    <div class="row">
        <div class="col-xs-12">
            <h1 class="well well-small">Hotels</h1>
        </div>
    </div>
    <div th:fragment="content">
        <div class="row">
            <div class="col-xs-12">
                <table class="table table-bordered table-striped">
                    <thead>
                    <tr>
                        <th>ID</th>
                        <th>Name</th>
                        <th>Address</th>
                        <th>Zip</th>
                        <th>Action</th>
                    </tr>
                    </thead>
                    <tbody>
                    <tr ng-repeat="hotel in hotels">
                        <td>{{hotel.id}}</td>
                        <td>{{hotel.name}}</td>
                        <td>{{hotel.address}}</td>
                        <td>{{hotel.zip}}</td>
                        <td><a ui-sref="edit({ hotelId: hotel.id })">Edit</a> | <a ng-click="deleteHotel(hotel)">Delete</a></td>
                    </tr>
                    </tbody>
                </table>
            </div>
        </div>
        <div class="row">
            <div class="col-xs-12">
                <a ui-sref="create" class="btn btn-default">New Hotel</a>
            </div>
        </div>
    </div>
</div>
</body>
</html>

The great thing about Thymeleaf here is that this view can be opened up in a browser and previewed. To return just a part of the view, which in this case is the section that starts with th:fragment="content", all I have to do is to return the name of the view as "hotels/partialList::content"! The same approach can be followed for the update and the create views.
One part which I have left open is how the URI in the UI, which is "/hotels/partialsList", maps to "hotels/partialList::content". With Spring MVC this can be done easily through a view controller, which is essentially a way to return a view name without needing to go through a controller, and can be configured this way:

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/hotels/partialsList").setViewName("hotels/partialsList::content");
        registry.addViewController("/hotels/partialsCreate").setViewName("hotels/partialsCreate::content");
        registry.addViewController("/hotels/partialsEdit").setViewName("hotels/partialsEdit::content");
    }
}

So to summarize, you create a full HTML view using Thymeleaf templates which can be previewed – any rendering issues can be fixed by opening the view in a browser during development time – and then return a fragment of the view at runtime purely by referring to the relevant section of the HTML page.

A sample which follows this pattern is available at this GitHub location – https://github.com/bijukunjummen/spring-boot-mvc-test

Reference: Thymeleaf – fragments and angularjs router partial views from our JCG partner Biju Kunjummen at the all and sundry blog.

Apache CXF 3.0: CDI 1.1 support as alternative to Spring

With Apache CXF 3.0 having been released just a couple of weeks ago, the project makes yet another important step towards fulfilling the JAX-RS 2.0 specification requirements: integration with CDI 1.1. In this blog post we are going to look at a couple of examples of how Apache CXF 3.0 and CDI 1.1 work together. Starting from version 3.0, Apache CXF includes a new module, named cxf-integration-cdi, which can easily be added to your Apache Maven POM file:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-integration-cdi</artifactId>
    <version>3.0.0</version>
</dependency>

This new module brings just two components (in fact a bit more, but those are the key ones):

- CXFCdiServlet: the servlet to bootstrap the Apache CXF application, serving the same purpose as CXFServlet and CXFNonSpringJaxrsServlet
- JAXRSCdiResourceExtension: a portable CDI 1.1 extension where all the magic happens

When run in a CDI 1.1-enabled environment, the portable extensions are discovered by the CDI 1.1 container and initialized using life-cycle events. And that is literally all that you need! Let us see the real application in action. We are going to build a very simple JAX-RS 2.0 application to manage people using Apache CXF 3.0 and JBoss Weld 2.1, the CDI 1.1 reference implementation. The Person class we are going to use for a person representation is just a simple Java bean:

package com.example.model;

public class Person {
    private String email;
    private String firstName;
    private String lastName;

    public Person() {
    }

    public Person(final String email, final String firstName, final String lastName) {
        this.email = email;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // Getters and setters are omitted
    // ...
}

As is quite common now, we are going to run our application inside an embedded Jetty 9.1 container, and our Starter class does exactly that:

package com.example;

import org.apache.cxf.cdi.CXFCdiServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.jboss.weld.environment.servlet.BeanManagerResourceBindingListener;
import org.jboss.weld.environment.servlet.Listener;

public class Starter {
    public static void main(final String[] args) throws Exception {
        final Server server = new Server(8080);

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder(new CXFCdiServlet());
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addEventListener(new Listener());
        context.addEventListener(new BeanManagerResourceBindingListener());
        context.addServlet(servletHolder, "/rest/*");

        server.setHandler(context);
        server.start();
        server.join();
    }
}

Please notice the presence of CXFCdiServlet and the two mandatory listeners which were added to the context:

- org.jboss.weld.environment.servlet.Listener is responsible for CDI injections
- org.jboss.weld.environment.servlet.BeanManagerResourceBindingListener binds the reference to the BeanManager to the JNDI location java:comp/env/BeanManager to make it accessible anywhere from the application

With that, the full power of CDI 1.1 is at your disposal. Let us introduce the PeopleService class, annotated with the @Named annotation and with an initialization method annotated with @PostConstruct, just to create one person:

@Named
public class PeopleService {
    private final ConcurrentMap<String, Person> persons = new ConcurrentHashMap<String, Person>();

    @PostConstruct
    public void init() {
        persons.put("a@b.com", new Person("a@b.com", "Tom", "Bombadilt"));
    }

    // Additional methods
    // ...
}

Up to now we have said nothing about configuring JAX-RS 2.0 applications and resources in a CDI 1.1 environment. The reason for that is very simple: depending on the application, you may go with a zero-effort configuration or a fully customizable one. Let us go through both approaches. With zero-effort configuration, you may define an empty JAX-RS 2.0 application and any number of JAX-RS 2.0 resources: Apache CXF 3.0 will implicitly wire them together by associating each resource class with this application. Here is an example of a JAX-RS 2.0 application:

package com.example.rs;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("api")
public class JaxRsApiApplication extends Application {
}

And here is a JAX-RS 2.0 resource, PeopleRestService, which injects the PeopleService managed bean:

package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path("/people")
public class PeopleRestService {
    @Inject
    private PeopleService peopleService;

    @Produces({ MediaType.APPLICATION_JSON })
    @GET
    public Collection<Person> getPeople(@QueryParam("page") @DefaultValue("1") final int page) {
        // ...
    }

    @Produces({ MediaType.APPLICATION_JSON })
    @Path("/{email}")
    @GET
    public Person getPerson(@PathParam("email") final String email) {
        // ...
    }

    @Produces({ MediaType.APPLICATION_JSON })
    @POST
    public Response addPerson(@Context final UriInfo uriInfo,
            @FormParam("email") final String email,
            @FormParam("firstName") final String firstName,
            @FormParam("lastName") final String lastName) {
        // ...
    }

    // More HTTP methods here
    // ...
}

Nothing else is required: an Apache CXF 3.0 application can be run like that and be fully functional. The complete source code of the sample project is available on GitHub. Please keep in mind that if you are following this style, only a single empty JAX-RS 2.0 application should be declared. With the customizable approach more options are available, but a bit more work has to be done. Each JAX-RS 2.0 application should provide a non-empty getClasses() and/or getSingletons() collection implementation. The JAX-RS 2.0 resource classes, however, stay unchanged. Here is an example (which basically leads to the same application configuration we have seen before):

package com.example.rs;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@ApplicationPath("api")
public class JaxRsApiApplication extends Application {
    @Inject
    private PeopleRestService peopleRestService;

    @Produces
    private JacksonJsonProvider jacksonJsonProvider = new JacksonJsonProvider();

    @Override
    public Set<Object> getSingletons() {
        return new HashSet<>(Arrays.asList(peopleRestService, jacksonJsonProvider));
    }
}

Please notice that the JAXRSCdiResourceExtension portable CDI 1.1 extension automatically creates managed beans for each JAX-RS 2.0 application (the ones extending Application) and resource (annotated with @Path). As such, those are immediately available for injection (as, for example, PeopleRestService in the snippet above).
The JacksonJsonProvider class is annotated with the @Provider annotation and as such will be treated as a JAX-RS 2.0 provider. There is no limit on the number of JAX-RS 2.0 applications which can be defined in this way. The complete source code of the sample project using this approach is available on GitHub. No matter which approach you have chosen, our sample application is going to work the same. Let us build it and run:

> mvn clean package
> java -jar target/jax-rs-2.0-cdi-0.0.1-SNAPSHOT.jar

Calling a couple of the implemented REST APIs confirms that the application is functioning and configured properly. Let us issue a GET command to ensure that the method of PeopleService annotated with @PostConstruct has been called upon managed bean creation.

> curl -i http://localhost:8080/rest/api/people

HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 29 May 2014 22:39:35 GMT
Transfer-Encoding: chunked
Server: Jetty(9.1.z-SNAPSHOT)

[{"email":"a@b.com","firstName":"Tom","lastName":"Bombadilt"}]

And here is an example of a POST command:

> curl -i http://localhost:8080/rest/api/people -X POST -d "email=a@c.com&firstName=Tom&lastName=Knocker"

HTTP/1.1 201 Created
Content-Type: application/json
Date: Thu, 29 May 2014 22:40:08 GMT
Location: http://localhost:8080/rest/api/people/a@c.com
Transfer-Encoding: chunked
Server: Jetty(9.1.z-SNAPSHOT)

{"email":"a@c.com","firstName":"Tom","lastName":"Knocker"}

In this blog post we have just scratched the surface of what is now possible with the Apache CXF and CDI 1.1 integration. Just to mention: embedded Apache Tomcat 7.x / 8.x as well as WAR-based deployments of Apache CXF with CDI 1.1 are possible on most JEE application servers and servlet containers. Please take a look at the official documentation and give it a try!

The complete source code is available on GitHub.

Reference: Apache CXF 3.0: CDI 1.1 support as alternative to Spring from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog.

10 things you can do as a developer to make your app secure: #6 Protect Data and Privacy

This is part 6 of a series of posts on the OWASP Top 10 Proactive Development Controls. Regulations – and good business practices – demand that you protect private and confidential customer and employee information such as PII and financial data, as well as critical information about the system itself: system configuration data and especially secrets. Exposing sensitive information is a serious and common problem, holding 6th place on the OWASP Top 10 risk list. Protecting data and privacy is mostly about encryption: encrypting data in transit, at rest, and during processing.

Encrypting Data in Transit – Using TLS Properly

For web apps and mobile apps, encrypting data in transit means using SSL/TLS. Using SSL isn't hard. Making sure that it is set up and used correctly takes more work. OWASP's Transport Layer Protection Cheat Sheet explains how SSL and TLS work and the rules that you should follow when using them. Ivan Ristic at Qualys SSL Labs provides a lot of useful information and free tools to explain how to set up SSL correctly and to test the strength of your website's SSL setup. And OWASP has another Cheat Sheet on Certificate Pinning that focuses on how you can prevent man-in-the-middle attacks when using SSL/TLS.

Encrypting Data at Rest

The first rule of crypto is: never try to write your own encryption algorithm. The other rules and guidelines for encrypting data correctly are explained in another Cheat Sheet from OWASP, which covers the different crypto algorithms that you should use and when, and the steps that you need to follow to use them. Even if you use a standard crypto algorithm, properly setting up and managing keys and other steps can still be hard to do right. Libraries like Google KeyCzar or Jasypt will take care of these details for you. Extra care needs to be taken with safely storing (salting and hashing) passwords – something that I already touched on in an earlier post in this series on Authentication.
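Both of these points – salted password hashing and encrypting data at rest – can be illustrated with nothing but the JCA classes that ship with the JDK. The sketch below is mine, not from the article (the class and method names are made up), and it is a minimal illustration under simplified assumptions rather than a production recipe:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;

public class DataProtectionSketch {

    // Salted, deliberately slow password hashing with PBKDF2 (part of the JDK).
    static byte[] hashPassword(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    // Authenticated encryption with AES-GCM; the random IV is prepended to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);          // never reuse an IV with the same key
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, ivAndCiphertext, 0, 12));
        return cipher.doFinal(ivAndCiphertext, 12, ivAndCiphertext.length - 12);
    }

    public static void main(String[] args) throws Exception {
        // Password storage: same password + same salt => same hash, so logins can be verified
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);        // a fresh random salt per user
        byte[] stored = hashPassword("s3cret".toCharArray(), salt);
        byte[] attempt = hashPassword("s3cret".toCharArray(), salt);
        System.out.println("password verified: " + Arrays.equals(stored, attempt));

        // Data at rest: round-trip a piece of sensitive data through AES-GCM
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        byte[] encrypted = encrypt(key, "4111-1111-1111-1111".getBytes(StandardCharsets.UTF_8));
        String roundTrip = new String(decrypt(key, encrypted), StandardCharsets.UTF_8);
        System.out.println("round trip ok: " + roundTrip.equals("4111-1111-1111-1111"));
    }
}
```

Two of the choices here match the advice above: PBKDF2 is slow by design (the iteration count is the work factor against brute-forcing), and AES-GCM authenticates the ciphertext, so tampering is detected at decryption time. A library like Jasypt wraps both of these patterns behind a simpler API.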
The OWASP Password Storage Cheat Sheet walks you through how to do this.

Implement Protection in Process

The last problem to look out for is exposing sensitive data during processing. Be careful not to store this data unencrypted in temporary files, and don't include it in logs. You may even have to watch out when storing it in memory. In the next post we'll go through security and logging.

Reference: 10 things you can do as a developer to make your app secure: #6 Protect Data and Privacy from our JCG partner Jim Bird at the Building Real Software blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy