

Do You Encourage People to Bring You Problems?

One of the familiar tensions in management is how you encourage or discourage people from bringing you problems. One of my clients had a favorite saying: "Don't bring me problems. Bring me solutions." I could see the trouble that saying caused in the organization. He prevented people from bringing him problems until the problems were enormous. He didn't realize that his belief that he was helping people solve their own problems was the cause of these huge problems. How could I help?

I'd only been a consultant for a couple of years. I'd been a manager for several years, and a program manager and project manager for several years before that. I could see the system. This senior manager wasn't really my client; I was consulting to a project manager who reported to him. His belief system was the root cause of many of the problems. What could I do?

I tried coaching my project manager on what to say to his boss. That had some effect, but it didn't work well. My client, the project manager, was so dejected going into the conversation that the conversation was dead before it started. I needed to talk to the manager myself.

I thought about this first. I figured I would only get one shot before I was out on my ear. I wasn't worried about finding more consulting, but I really wanted to help this client. Everyone was suffering.

I asked for a one-on-one with the senior manager. I explained that I wanted to discuss the project, and that the project manager was fine with this meeting. I had 30 minutes. I knew that Charlie, this senior manager, cared about two things: how fast we could release so we could move to the next project, and what the customers would see (customer perception). He thought those two things would affect sales and customer retention. Charlie had put tremendous pressure on the project to cut corners to release faster. But cutting corners would change what customers saw and how they would use the product.
I wanted to change his mind and offer him other options.

"Hey Charlie, this time still good?"

"Yup, come on in. You're our whiz-bang consultant, right?"

"Yes, you could call me that. My job is to help people think things through and see alternatives. That way they can solve problems on the next project without me."

"Well, I like that. You're kind of expensive."

"Yes, I am. But I'm very good. That's why you pay me. So, let's talk about how I'm helping people solve problems."

"I help people solve problems. I always tell them, 'Don't bring me problems. Bring me solutions.' It works every time." He actually laughed when he said this. I waited until he was done laughing. I didn't smile.

"You're not smiling." He started to look puzzled.

"Well, in my experience, when you say things like that, people don't bring you small problems. They wait until they have no hope of solving the problem at all. Then they have such a big problem that no one can solve it. Have you seen that?" He narrowed his eyes. "Let's talk about what you want for this project. You want a great release in the next eight weeks, right? You want customers who will be reference accounts, right? I can help you with that."

Now he looked really suspicious. "Okay, how are you going to pull off this miracle? John, the project manager, was in here the other day, crying about how this project was a disaster."

"Well, the project is in trouble. John and I have been talking about this. We have some plans. We do need more people. We need you to make some decisions. We have some specific actions only you can take. John has specific actions only he can take.

"Charlie, John needs your support. You need to say things like, 'I agree that cross-functional teams work. I agree that people need to work on just one thing at a time until they are complete. I agree that support work is separate from project work, and that we won't ask the teams to do support work until they are done with this project.' Can you do that?"
"Those are specific things that John needs from you. But even those won't get the project done in time."

"Well, what will get the project done in time?" He practically growled at me.

"We need to consider alternatives to the way the project has been working. I've suggested alternatives to the teams. They're afraid of you right now, because they don't know which solution you will accept."

"AFRAID? THEY'RE AFRAID OF ME?" He was screaming by this time.

"Charlie, do you realize you're yelling at me?" I did not tell him to calm down. I knew better than that. I gave him the data.

"Oh, sorry. No. Maybe that's why people are afraid of me."

I grinned at him. "You're not afraid of me."

"Not a chance. You and I are too much alike." I kept smiling. "Would you like to hear some options? I like to use the Rule of Three to generate alternatives. Is it time to bring John in?"

We discussed the options with John. Remember, this was before agile. We discussed timeboxing, short milestones with criteria, inch-pebbles, and yellow-sticky scheduling, and decided to go with what is now a design-to-schedule lifecycle for the rest of the project. We also decided to move some people over from support to help with testing for a few weeks.

We didn't release in eight weeks; it took closer to twelve. But the project was a lot better after that conversation. And after I helped the project, I gained Charlie as a coaching client, which was tons of fun.

Many managers have rules about their problem solving and how to help or not help their staff. "Don't bring me a problem. Bring me a solution" is not helpful. That is the topic of this month's management myth, Myth 31: I Don't Have to Make the Difficult Choices. When you say, "Don't bring me a problem. Bring me a solution," you say, "I'm not going to make the hard choices. You are." But you're the manager. You get paid to make the difficult choices. Telling people the answer isn't always right. You might have to coach people.
But not making decisions isn't right either. Exploring options might be the right thing. You have to do what is right for your situation. Go read Myth 31: I Don't Have to Make the Difficult Choices.

Reference: Do You Encourage People to Bring You Problems? from our JCG partner Johanna Rothman at the Managing Product Development blog.

Locking and Logging

Plumbr has been known as a tool for tackling memory leaks. As little as two months ago we released GC optimization features. But we have not been sitting idle since then: for months we have been working on lock contention detection. From test runs we have discovered many awkward concurrency issues in hundreds of different applications. Many of those issues are unique to the application at hand, but one particular type of issue stands out.

What we found is that almost every Java application out there uses either Log4j or Logback. As a matter of fact, from the data we had available, it appears that more than 90% of applications use one of those frameworks for logging. But this is not the interesting part. What is interesting is that about a third of those applications face rather significant lock wait times during logging calls. As it stands, more than 10% of Java applications seem to halt for more than 5,000 milliseconds every once in a while during an innocent-looking log.debug() call.

Why so? The default choice of appender in any server environment is some sort of file appender, such as RollingFileAppender. What is important is that these appenders are synchronized. This is an easy way to guarantee that the sequence of log entries from different threads is preserved.

To demonstrate the side effects of this approach, we set up a simple JMH test (MyBenchmark) which does nothing besides calling log.debug(). The benchmark was run on a quad-core MacBook Pro with 1, 2, and 50 threads. Fifty threads were chosen to simulate a typical setup for a servlet application with 50 HTTP worker threads.
@State(Scope.Benchmark)
public class LogBenchmark {

    static final Logger log = LoggerFactory.getLogger(LogBenchmark.class);

    AtomicLong counter;

    @Benchmark
    public void testMethod() {
        log.debug(String.valueOf(counter.incrementAndGet()));
    }

    @Setup
    public void setup() {
        counter = new AtomicLong(0);
    }

    @TearDown
    public void printState() {
        System.out.println("Expected number of logging lines in debug.log: " + counter.get());
    }
}

From the test results we see a dramatic decrease in throughput: 278,898 ops/s -> 84,630 ops/s -> 73,789 ops/s. Going from 1 to 2 threads alone, the throughput of the system decreases 3.3x.

So how can you avoid this kind of locking issue? The solution is simple: for more than a decade the logging frameworks have shipped an appender called AsyncAppender. The idea behind this appender is to store the log message in a queue and return the flow back to the application. The framework can then store the log message asynchronously in a separate thread.

Let's see how AsyncAppender copes with a multithreaded application. We set up a similar simple benchmark but configured the logger for that class to use AsyncAppender. When we run the benchmark with the same 1, 2, and 50 threads we get stunning results: 4,941,874 ops/s -> 6,608,732 ops/s -> 5,517,848 ops/s. The improvement in throughput is so good that it raises suspicion that something fishy is going on.

Let's look at the documentation of the AsyncAppender. It says AsyncAppender is by default a lossy logger: when the queue gets full, the appender starts dropping trace, debug, and info level messages, so that warnings and errors are sure to be written. This behavior is configured using two parameters: discardingThreshold and queueSize. The first specifies how full the queue must be before messages start being dropped; the second specifies how big the queue is.
The default queue size is 256. The discardingThreshold can, for example, be set to 0, disabling discarding altogether, so that the appender blocks when the queue gets full.

To better understand the results, let's count the expected number of messages in the log file (the number of benchmark invocations made by JMH is non-deterministic) and then compare how many were actually written, to see how many messages are discarded to get such brilliant throughput. We ran the benchmark with 50 threads, varied the queue size, and turned discarding on and off. The results are as follows:

Queue size | Ops/s (discard) | Expected msg  | Actual msg | Lost msg | Ops/s (no discard)
256        | 4,180,312       | 184,248,790   | 1,925,829  | 98.95%   | 118,340
4,096      | 4,104,997       | 182,404,138   | 694,902    | 99.62%   | 113,534
65,536     | 3,558,543       | 157,385,651   | 1,762,404  | 98.88%   | 137,583
1,048,576  | 3,213,489       | 141,409,403   | 1,560,612  | 98.90%   | 117,820
2,000,000  | 3,306,476       | 141,454,871   | 1,527,133  | 98.92%   | 108,603

What can we conclude from this? There is no free lunch and no magic. Either we discard about 99% of the log messages to get such a massive throughput gain, or, when the queue fills, we start blocking and fall back to performance comparable to the synchronous appender. Interestingly, the queue size doesn't matter much. In case you can sacrifice the debug logs, using AsyncAppender does make sense.

Reference: Locking and Logging from our JCG partner Vladimir Sor at the Plumbr Blog.
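The queue-based handoff described above can be sketched with plain JDK classes. This is a minimal illustration of the AsyncAppender idea, not the Log4j or Logback API: producers enqueue without blocking, a single worker thread drains the queue, and a full queue causes messages to be dropped. All class and method names here are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a lossy asynchronous appender: a bounded queue decouples the
// logging caller from the (slow, serialized) write.
public class AsyncLogSketch {

    private final BlockingQueue<String> queue;
    private final AtomicLong dropped = new AtomicLong();
    private final AtomicLong written = new AtomicLong();
    private volatile boolean running = true;
    private final Thread worker;

    public AsyncLogSketch(int queueSize) {
        this.queue = new ArrayBlockingQueue<>(queueSize);
        this.worker = new Thread(() -> {
            // Drain until close() is called AND the queue is empty.
            while (running || !queue.isEmpty()) {
                String msg = queue.poll();
                if (msg != null) {
                    written.incrementAndGet(); // a real appender would write to the file here
                }
            }
        });
        worker.start();
    }

    public void debug(String msg) {
        // offer() returns false instead of blocking when the queue is full,
        // mirroring the discarding behavior described in the documentation.
        if (!queue.offer(msg)) {
            dropped.incrementAndGet();
        }
    }

    public long writtenCount() { return written.get(); }

    public long droppedCount() { return dropped.get(); }

    public void close() {
        running = false;
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Every message is accounted for: it is either written by the worker or counted as dropped, which is exactly the trade-off the table above quantifies.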

Java EE7 and Maven project for newbies – part 7

Resuming from the previous parts: Part #1, Part #2, Part #3, Part #4, Part #5, Part #6.

In the previous post (part 6) we discovered how we can unit test our JPA2 domain model using Arquillian and WildFly 8.1. In that post we made a simple configuration decision: we used the internal H2 database that is bundled with WildFly 8.1 and the already configured datasource (called ExampleDS). But what about a real DBMS? In this post we are going to extend the previous work a bit, use the same principles, and:

- test against a running PostgreSQL on our localhost
- use some of the really nice features the ShrinkWrap API of Arquillian offers

Prerequisites

You need to install a PostgreSQL RDBMS locally; my example is based on a server running on localhost with a database named papodb.

Adding some more dependencies

We will need to add some more dependencies to our sample-parent (pom). Some of them are related to Arquillian, specifically the ShrinkWrap Resolvers features (more on this later). So we need to add the following to the parent pom.xml:

<shrinkwrap.bom-version>2.1.1</shrinkwrap.bom-version>
<!-- jdbc drivers -->
<postgreslq.version>9.1-901-1.jdbc4</postgreslq.version>
...
<!-- shrinkwrap BOM -->
<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-bom</artifactId>
    <version>${shrinkwrap.bom-version}</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
<!-- shrinkwrap dependency chain -->
<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-depchain</artifactId>
    <version>${shrinkwrap.bom-version}</version>
    <type>pom</type>
</dependency>
<!-- arquillian itself -->
<dependency>
    <groupId>org.jboss.arquillian</groupId>
    <artifactId>arquillian-bom</artifactId>
    <version>${arquillian-version}</version>
    <scope>import</scope>
    <type>pom</type>
</dependency>
<!-- the JDBC driver for postgresql -->
<dependency>
    <groupId>postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>${postgreslq.version}</version>
</dependency>

A note on the above: in order to avoid any potential conflicts between dependencies, make sure to define the ShrinkWrap BOM above the Arquillian BOM.

Now, in the sample-services pom.xml, the project that hosts our simple tests, we need to reference some of these dependencies:

<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-depchain</artifactId>
    <scope>test</scope>
    <type>pom</type>
</dependency>
<dependency>
    <groupId>postgresql</groupId>
    <artifactId>postgresql</artifactId>
</dependency>

Restructuring our test code

In the previous example our test was simple, and we only used a single test configuration. That resulted in a single test-persistence.xml file and no web.xml file, since we were packaging our test application as a jar. Now we will upgrade our test archive to a war. War packaging in Java EE 7 has become a first-class citizen when it comes to bundling and deploying an enterprise application.
The main difference from the previous example is that we would like to keep both setups: the previous one, testing with the internal H2 on WildFly, and the new one, testing against a real RDBMS server. So we need to maintain two sets of configuration files and, making use of the Maven profiles feature, package them accordingly depending on our mode. If you are new to Maven, make sure to read up on the concept of profiles.

Adding separate configurations per profile

Our test resources (note: these are under src/test/resources) are now as illustrated below. There are differences in both cases: the test-persistence.xml for H2 points to the ExampleDS datasource, while the one for PostgreSQL points to a new datasource that we have defined in web.xml! Please have a look at the actual code, from the git link down below. This is how we define a datasource in web.xml.

Notes on the above:
- Note the standard JNDI naming: java:jboss/datasources/datasourceName.
- The application server, once it reads the contents of the web.xml file, will automatically deploy and configure a new datasource.

This is our persistence.xml. Notes on the above:
- Make sure the two JNDI entries are the same in both the datasource definition and the persistence.xml.
- Of course the Hibernate dialect used for PostgreSQL is different.
- The highlighted line is a special setting required for WildFly 8.1 in case you want to deploy the datasource, the JDBC driver, and the code in one go. It hints to the application server to initialize and configure the datasource first and only then initialize the EntityManager. In case you have already deployed/configured the datasource, this setting is not needed.

Define the profiles in our pom

In the sample-services pom.xml we add the following section. This is our profile definition.
<profiles>
    <profile>
        <id>h2</id>
        <build>
            <testResources>
                <testResource>
                    <directory>/resources-h2</directory>
                    <includes>
                        <include>**/*</include>
                    </includes>
                </testResource>
            </testResources>
        </build>
    </profile>
    <profile>
        <id>postgre</id>
        <build>
            <testResources>
                <testResource>
                    <directory>/resources-postgre</directory>
                    <includes>
                        <include>**/*</include>
                    </includes>
                </testResource>
            </testResources>
        </build>
    </profile>
</profiles>

Depending on the profile activated, we instruct Maven to include and work with the XML files under a specific subfolder. So if we issue the following command:

mvn clean test -Ph2

Maven will include the persistence.xml and web.xml under the resources-h2 folder, and our tests will make use of the internal H2 DB. If we issue:

mvn clean test -Ppostgre

our test web archive will be packaged with a datasource definition specific to our local PostgreSQL server.

Writing a simple test

Eventually our new JUnit test is not very different from the previous one. Here is a screenshot indicating some key points. Some notes on the code above:

- The JUnit test and basic annotations are the same as in the previous post.
- The init() method is again the same; we just create and persist a new SimpleUser entity.
- The first major difference is the use of the ShrinkWrap API, which makes use of the test dependencies in our pom so that we can locate the JDBC driver as a jar. Once it is located, ShrinkWrap packages it along with the rest of the resources and code in our test.war.
- Packaging only the JDBC driver is NOT enough, though. For this to work, a datasource must be present (configured) in the server. We would like this to be automatic, meaning we don't want to preconfigure anything on our test WildFly server. We make use of the feature to define a datasource in web.xml (open it up in the code). The application server, once it scans the web.xml, will pick up the entry and configure a datasource under the java:jboss/datasources/testpostgre name.
So we have bundled the driver and the datasource definition, and we have a persistence.xml pointing to the correct datasource. We are ready to test. Our test method is similar to the previous one.

We have modified the resources for the H2 profile a bit so that we package the same war structure every time. That means that if we run the test using the -Ph2 profile, the web.xml included is empty, because we don't actually need to define a datasource there: the datasource is already deployed by WildFly. The persistence.xml is different, though, because in one case the dialect defined is specific to H2 and in the other it is specific to PostgreSQL. You can follow the same principle and add a new resource subfolder, configure a datasource for another RDBMS, e.g. MySQL, and add the appropriate code to fetch the driver and package it along.

You can get the code for this post on this bitbucket repo-tag.

Resources:
- ShrinkWrap resolver API page (lots of nice examples for this powerful API)
- Defining Datasources for Wildfly 8.1

Reference: Java EE7 and Maven project for newbies – part 7 from our JCG partner Paris Apostolopoulos at the Papo's log blog.

Behavior-Driven RESTful APIs

In the RESTBucks example, the authors present a useful state diagram that describes the actions a client can perform against the service. Where does such an application state diagram come from? Well, it's derived from the requirements, of course. Since I like to specify requirements using examples, let's see how we can derive an application state diagram from BDD-style requirements.

Example: RESTBucks state diagram

Here are the three scenarios for the Order a Drink story:

Scenario: Order a drink
Given the RESTBucks service
When I create an order for a large, semi milk latte for takeaway
Then the order is created
When I pay the order using credit card xxx1234
Then I receive a receipt
And the order is paid
When I wait until the order is ready
And I take the order
Then the order is completed

Scenario: Change an order
Given the RESTBucks service
When I create an order for a large, semi milk latte for takeaway
Then the order is created
And the size is large
When I change the order to a small size
Then the order is created
And the size is small

Scenario: Cancel an order
Given the RESTBucks service
When I create an order for a large, semi milk latte for takeaway
Then the order is created
When I cancel the order
Then the order is canceled

Let's look at this in more detail, starting with the happy path scenario.

Given the RESTBucks service
When I create an order for a large, semi milk latte for takeaway

The first line tells me there is a REST service, at some given billboard URL. The second line tells me I can use the POST method on that URI to create an Order resource with the given properties.

Then the order is created

This tells me the POST returns 201 with the location of the created Order resource.

When I pay the order using credit card xxx1234

This tells me there is a pay action (link relation).

Then I receive a receipt

This tells me the response of the pay action contains the representation of a Receipt resource.
And the order is paid

This tells me there is a link from the Receipt resource back to the Order resource. It also tells me the Order is now in paid status.

When I wait until the order is ready

This tells me that I can refresh the Order using GET until some other process changes its state to ready.

And I take the order

This tells me there is a take action (link relation).

Then the order is completed

This tells me that the Order is now in completed state.

Analyzing the other two scenarios in similar fashion gives us a state diagram that is very similar to the original in the RESTBucks example. The only difference is that the diagram here contains an additional action to navigate from the Receipt to the Order. This navigation is also described in the book, but not shown in the book's diagram.

Using BDD techniques for developing RESTful APIs

Using BDD scenarios it's quite easy to discover the application state diagram. This shouldn't come as a surprise, since the Given/When/Then syntax of BDD scenarios is just another way of describing states and state transitions. From the application state diagram it's only a small step to the complete resource model. When the resource model is implemented, you can re-use the BDD scenarios to automatically verify that the implementation matches the requirements. So all in all, BDD techniques can help us a lot when developing RESTful APIs.

Reference: Behavior-Driven RESTful APIs from our JCG partner Remon Sinnema at the Secure Software Development blog.
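The application state diagram derived from the scenarios can also be captured directly in code. The following is an illustrative sketch, not code from the RESTBucks book: the state and action names come from the scenarios above, and the PREPARE action stands for the external process that makes the order ready.

```java
// A simple state machine for the Order resource, derived from the
// Given/When/Then scenarios: each action is only legal in certain states.
public class OrderStateMachine {

    public enum State { CREATED, PAID, READY, COMPLETED, CANCELED }

    public enum Action { CHANGE, CANCEL, PAY, PREPARE, TAKE }

    private State state = State.CREATED;

    public State state() {
        return state;
    }

    // Applies an action, enforcing the transitions the scenarios describe.
    public void apply(Action action) {
        switch (action) {
            case CHANGE:
                require(action, State.CREATED); // changing keeps the order in CREATED
                break;
            case CANCEL:
                require(action, State.CREATED);
                state = State.CANCELED;
                break;
            case PAY:
                require(action, State.CREATED);
                state = State.PAID;
                break;
            case PREPARE: // the barista process that makes the order ready
                require(action, State.PAID);
                state = State.READY;
                break;
            case TAKE:
                require(action, State.READY);
                state = State.COMPLETED;
                break;
        }
    }

    private void require(Action action, State expected) {
        if (state != expected) {
            throw new IllegalStateException(action + " is not allowed in state " + state);
        }
    }
}
```

Running the happy path scenario through this machine (PAY, PREPARE, TAKE) ends in COMPLETED, while cancelling a completed order is rejected, exactly as the state diagram dictates.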

Hibernate and UUID identifiers

Introduction

In my previous post I talked about UUID surrogate keys and the use cases where they are more appropriate than the more common auto-incrementing identifiers.

A UUID database type

There are several ways to represent a 128-bit UUID, and whenever in doubt I like to resort to Stack Exchange for expert advice. Because table identifiers are usually indexed, the more compact the database type, the less space the index will require. From the most efficient to the least, here are our options:

- Some databases (PostgreSQL, SQL Server) offer a dedicated UUID storage type.
- Otherwise we can store the bits as a byte array (e.g. RAW(16) in Oracle or the standard BINARY(16) type).
- Alternatively we can use two bigint (64-bit) columns, but a composite identifier is less efficient than a single-column one.
- We can store the hex value in a CHAR(36) column (i.e. 32 hex digits and 4 dashes), but this takes the most space, hence it's the least efficient alternative.

Hibernate offers many identifier strategies to choose from, and for UUID identifiers we have three options:

- the assigned generator, accompanied by application-logic UUID generation
- the hexadecimal "uuid" string generator
- the more flexible "uuid2" generator, allowing us to use a java.lang.UUID, a 16-byte array, or a hexadecimal String value

The assigned generator

The assigned generator allows the application logic to control the entity identifier generation process. By simply omitting the identifier generator definition, Hibernate will use the assigned identifier. This example uses a BINARY(16) column type, since the target database is HSQLDB.
@Entity(name = "assignedIdentifier")
public static class AssignedIdentifier {

    @Id
    @Column(columnDefinition = "BINARY(16)")
    private UUID uuid;

    public AssignedIdentifier() {
    }

    public AssignedIdentifier(UUID uuid) {
        this.uuid = uuid;
    }
}

Persisting an entity:

session.persist(new AssignedIdentifier(UUID.randomUUID()));
session.flush();

generates exactly one INSERT statement:

Query:{[insert into assignedIdentifier (uuid) values (?)][[B@76b0f8c3]}

Let's see what happens when issuing a merge instead:

session.merge(new AssignedIdentifier(UUID.randomUUID()));
session.flush();

This time we get both a SELECT and an INSERT:

Query:{[select assignedid0_.uuid as uuid1_0_0_ from assignedIdentifier assignedid0_ where assignedid0_.uuid=?][[B@23e9436c]}
Query:{[insert into assignedIdentifier (uuid) values (?)][[B@2b37d486]}

The persist method takes a transient entity and attaches it to the current Hibernate session. If there is an already attached entity, or if the current entity is detached, we'll get an exception. The merge operation copies the current object state into the existing persisted entity (if any). This operation works for both transient and detached entities, but for transient entities persist is much more efficient than merge. For assigned identifiers, a merge will always require a SELECT, since Hibernate cannot know whether there is already a persisted entity with the same identifier. For other identifier generators, Hibernate looks for a null identifier to figure out if the entity is in the transient state.
That's why the Spring Data SimpleJpaRepository#save(S entity) method is not the best choice for entities using an assigned identifier:

@Transactional
public <S extends T> S save(S entity) {
    if (entityInformation.isNew(entity)) {
        em.persist(entity);
        return entity;
    } else {
        return em.merge(entity);
    }
}

For assigned identifiers, this method will always pick merge instead of persist, hence you will get both a SELECT and an INSERT for every newly inserted entity.

The UUID generators

This time we won't assign the identifier ourselves but let Hibernate generate it on our behalf. When a null identifier is encountered, Hibernate assumes a transient entity, for which it generates a new identifier value. This time, the merge operation won't require a SELECT query prior to inserting a transient entity.

The UUIDHexGenerator

The UUID hex generator is the oldest UUID identifier generator and is registered under the "uuid" type. It can generate a 32-digit hexadecimal UUID string value (it can also use a separator) with the following pattern: 8{sep}8{sep}4{sep}8{sep}4. This generator is not IETF RFC 4122 compliant, which uses the 8-4-4-4-12 digit representation.

@Entity(name = "uuidIdentifier")
public static class UUIDIdentifier {

    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid")
    @Column(columnDefinition = "CHAR(32)")
    @Id
    private String uuidHex;
}

Persisting or merging a transient entity:

session.persist(new UUIDIdentifier());
session.flush();
session.merge(new UUIDIdentifier());
session.flush();

generates one INSERT statement per operation:

Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfa0000]}
Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfc0001]}

You can check out the String parameter value sent to the SQL INSERT queries.

The UUIDGenerator

The newer UUID generator is IETF RFC 4122 compliant (variant 2) and offers pluggable generation strategies.
It's registered under the "uuid2" type and offers a broader type range to choose from:

- java.lang.UUID
- a 16-byte array
- a hexadecimal String value

@Entity(name = "uuid2Identifier")
public static class UUID2Identifier {

    @GeneratedValue(generator = "uuid2")
    @GenericGenerator(name = "uuid2", strategy = "uuid2")
    @Column(columnDefinition = "BINARY(16)")
    @Id
    private UUID uuid;
}

Persisting or merging a transient entity:

session.persist(new UUID2Identifier());
session.flush();
session.merge(new UUID2Identifier());
session.flush();

generates one INSERT statement per operation:

Query:{[insert into uuid2Identifier (uuid) values (?)][[B@68240bb]}
Query:{[insert into uuid2Identifier (uuid) values (?)][[B@577c3bfa]}

These SQL INSERT queries use a byte array, as we configured in the @Id column definition.

Code available on GitHub.

Reference: Hibernate and UUID identifiers from our JCG partner Vlad Mihalcea at Vlad Mihalcea's Blog.
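The BINARY(16) and CHAR(32)/CHAR(36) representations discussed in this post can be reproduced with plain JDK classes. This is a sketch of the representations only (the helper class and method names are illustrative, not Hibernate's API):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Illustrates the storage representations discussed above: the 16-byte form
// behind BINARY(16), and the 32-digit hex form behind CHAR(32).
public class UuidRepresentations {

    // Packs the 128-bit UUID into the 16 bytes a BINARY(16) column stores.
    public static byte[] toBytes(UUID uuid) {
        ByteBuffer buffer = ByteBuffer.allocate(16);
        buffer.putLong(uuid.getMostSignificantBits());
        buffer.putLong(uuid.getLeastSignificantBits());
        return buffer.array();
    }

    // Restores the UUID from its 16-byte form.
    public static UUID fromBytes(byte[] bytes) {
        ByteBuffer buffer = ByteBuffer.wrap(bytes);
        return new UUID(buffer.getLong(), buffer.getLong());
    }

    // 32 hex digits without separators, the digit count a CHAR(32) column holds;
    // the canonical uuid.toString() form is the 36-character 8-4-4-4-12 layout.
    public static String toHex32(UUID uuid) {
        return uuid.toString().replace("-", "");
    }
}
```

A round trip through toBytes/fromBytes yields the original UUID, which is why the compact binary form loses nothing compared to the 36-character canonical text form.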

Graduate School: "Do… Or Do Not. There Is No Try" – Yoda

I recently completed my Master of Science in Computer Science. There were both good and bad experiences in graduate school while working full time, and I wanted to share them to help those who are considering taking that leap. Before I do, I wanted to provide a little history on why I chose to pursue a graduate degree in Computer Science. My undergraduate and first graduate background is in Biomedical Engineering. As my emphasis was in signal processing, I was exposed to a curriculum that focused on logic and coding. I enjoyed these classes the most. Therefore, I chose to get a more formal education in Computer Science when I moved to Kansas City in 2010.

Typically I would start off a blog post with the good news first, but I don't want to end with the bad and scare off folks who are on the fence about whether or not to further their education. My intention is to provide some honest opinion and feedback that may save you from the bad aspects of a graduate program, rather than turning you off from taking a very beneficial and self-rewarding step.

The Not-So-Good

My bad experiences are more complaints than anything else. Not all graduate students are full-time students and, in my experience, many professors don't understand this. I remember countless "busy work" assignments that never helped me gain knowledge of the subject covered in class. For example, students were assigned to find and read published papers and in turn write mandatory one-page reports. This was a weekly activity throughout the semester. These assignments would eat up my free time at home, and, to this day, I can't quantify the value of any of the critiques that I wrote. I mentioned free time in that last sentence. I will be the first to say that graduate school is a commitment and it requires sacrifices. I remember countless nights and weekends when I would rather have been doing anything other than homework.
As a tip to those considering this move, I would recommend researching the course structure and teaching methodology to make sure the course curriculum is synced up with your goals. Needless to say, having a supportive work environment and spouse is a must. I was fortunate to have both, so thank you Keyhole, and a shout out to my husband.

I was under the assumption that creativity and innovation are welcome in a higher education setting, but found that wasn't always the case. For example, I once solved a problem on an exam by applying an algorithm that was different from the one the professor used to solve the same problem. Points were deducted, and when I sat down with the professor to understand why my approach was wrong, I was told "that is not the way I would solve the problem." I could be wrong, but as best I could tell, I was punished for solving the problem in a creative, efficient way. The professor didn't appreciate that it was a different way of getting to the answer. I swallowed my pride. I told myself to keep looking for ways to be creative and to focus on learning rather than worrying about my grades.

The Good

Now for the good experiences of graduate school. Grad school helped me obtain the basic theoretical Computer Science background that I was lacking, as I came from a Biomedical Engineering program. Furthermore, I really enjoyed the small-scale projects that helped me understand the lifecycle of a project holistically. Things like IT project management, business and system requirements, design, coding, and testing were all part of the projects. The small-scale projects helped me understand why and how things are structured in a real work setting. At the same time, some topics, like architecture, cannot be fully understood in small projects; the importance of architecture and good coding practice only really shows in a large project.
I experienced this first-hand, as I was able to carry these qualities into my full-time role as a Keyhole consultant. On the flip side, I was able to take things I was learning on work projects and apply them to school projects. I would say there was a good balance in applying the skills I learned between work and school projects, which furthered my learning.

Final Thoughts

In conclusion, I found graduate school very beneficial for me. As a suggestion for those considering this leap: research the curriculum and make sure it is in line with your learning objectives and expectations. I would also highly recommend having real-world experience before starting graduate school, as I found that work experience helps you succeed in your graduate studies. Graduate school is a big commitment, and you need to be mentally prepared for the time it requires. Keep learning, innovate, and never become complacent; a mind is a terrible thing to waste.

Reference: Graduate School: "Do... Or Do Not. There Is No Try" (Yoda) from our JCG partner Jinal Patel at the Keyhole Software blog.

Software Defined Everything

The other day, taxis in London were on strike because Uber was setting up shop in London. Do you know many people that still send paper letters? Book holiday flights via a travel agent? Buy books in book stores? Rent DVD movies? Five smart programmers can bring down a whole multi-billion industry and change people's habits. It has long been known that any company that changes people's habits becomes a multi-billion company. Cereals for breakfast, brown-coloured sweet water, throw-away shaving equipment, the online bookstore, online search & ads, etc. You probably figured out the names of the brands already.

Software Defined Everything is Accelerating

The Cloud, crowdfunding, open source, open hardware, 3D printing, Big Data, machine learning, the Internet of Things, mobile, wearables, nanotechnology, social networks, etc. all seem like individual technology innovations. However, things are changing. Your Fitbit will send your vital signs via your mobile to the cloud, where deep belief networks analyse them and find out that you are stressed. Your smart hub detects you are approaching your garage, and your Arduino controller, linked to your IP camera encased in a 3D-printed housing, detects that you brought a visitor. A LinkedIn and Facebook image scan finds that your visitor is your boss's boss. Your Fitbit and Google Calendar have given away over the last months that whenever you have a meeting with your boss's boss, you get stressed. Your boss's boss's music preferences are guessed from public information available on social networks. Your smart watch gets a push notification with the personal profile data that could be gathered about your boss's boss: he has two boys and a girl, got recently divorced, the girl recently won a chess award, a Facebook-tagged picture shows him in a golf tournament three weeks ago, an Amazon book review indicates that he likes Shakespeare but only the early work, etc. All of a sudden your house shows pictures of that one time you played golf.
Music plays according to what 96.5% of Shakespeare lovers like, from a crowdfunded Bluetooth in-house speaker system... It might be a bit far-fetched, but what used to be disjoint technologies and innovations are fast coming together. Those companies that can both understand the latest cutting-edge innovations and apply them to improve their customers' lives or solve business problems will have a big competitive edge. Software is fast defining more and more industries. Media, logistics, telecom, banking, retail, industrial, even agriculture will see major changes due to software (and hardware) innovations. What should you do?

If you are technology savvy: look for customers that want faster horses and draw them a picture of a car. Make a slide deck. Get feedback and adjust. Build a prototype. Get feedback and adjust. Create a minimum viable product. Get feedback and adjust... Change the world.

If you have a business problem and money but are not technology savvy: organise a competition in which you ask people to solve your problem and give prizes for the best solutions. You will be amazed by what can come out of these.

If you work in a traditional industry and think software is not going to redefine what you do: call your investment manager and ask whether you have enough money in the bank to retire in case you got fired next year and couldn't find a job any more. If the answer is no, then start reading from the top of this blog post again...

Reference: Software Defined Everything from our JCG partner Maarten Ectors at the Telruptive blog.

Agile VS Real Life

The Agile Manifesto tells us: "We have come to value individuals and interactions over processes and tools." Reality tells us otherwise. Want to do unit testing? Pick up a test framework and you're good to go. Want your organization to be agile? Scrum is very simple, and SAFe is scaled simplicity. We know there are no magic bullets, yet we're still attracted to pre-wrapped solutions. Why? Good question. We're not stupid, most of us anyway. Yet we find it very easy to make up a story about how easy it's going to be. Here are a couple of theories.

We're concentrating on short-term gains. Whether it's the start-up pushing for a lucrative exit by beating the market, or the enterprise looking at the next investor call, companies are pushing their people to maximize value in the short term. With that in mind, people look for a "proven" tool or process that minimizes long-term investments. In fact, systems punish people if they do otherwise.

We don't understand complexity. Think about how many systems we're part of, how they impact each other, and then consider the things we haven't thought about. That's overwhelming. Our wee brain just got out of fight-or-flight mode; you want it to do full planning and execution with all those question marks? People are hard. Better get back to dry land, where tools and processes actually work.

We're biased in so many ways. One of our biases is called anchoring. Simply put, the first thing we hear about something becomes the baseline we compare everything to. It becomes our anchor. Now, when you're researching a new area, do you start with the whole methodology? Nope. We look for examples similar to our experiences. What comes up first when we search? The simple stuff: tools and processes. Once we start there, there's no way back.

We don't live well with uncertainty. Short term is fine, because we have the illusion of control over it. Because of complexity, the long term is so out of our reach that we give up and try to concentrate on short-term wins.
We don't like to read the small print. Small print hurts the future-perfect view. We avoid the context issues; we tell ourselves that the fine print applies to a minority of cases, which obviously we don't belong to. Give us the short-short version, and we'll take it from there.

We like to be part of the group. Groups are comfy. Belonging to one removes anxiety. Many companies choose Scrum and it works for them, so why won't it work for me? The only people who publish big methodology papers are from academia, and that's one group we don't want to be part of, heaven forbid.

That's why we like processes and tools. Fighting that is not only hard, but may carry a penalty. So what's the solution? Looking for simplicity again? So soon? Well, the good news is that it is possible, with discipline. If we have enough breathing room, if we don't get pushback from the rest of our company, if we acknowledge that we need to invest in learning, and if we understand that processes and tools are just the beginning, then there's hope for us yet. Lots of ifs. But if you don't want to bother, just go with this magical framework.

Reference: Agile VS Real Life from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Goodbye Sense – Welcome Alternatives?

I only recently noticed that Sense, the Chrome plugin for Elasticsearch, has been pulled from the app store by its creator. There are quite strong opinions in this thread, and I would like to have Sense as a Chrome plugin as well. But I am also totally fine with Elasticsearch as a company trying to monetize some of its products, so that is maybe something we just have to accept. What is interesting is that it isn't even possible to fork the project and keep developing it, as there is no explicit license in the repo. I guess there is a lesson buried somewhere in here. In this post I would like to look at some of the alternatives for interacting with Elasticsearch. Though the good thing about Sense was that it was independent of the Elasticsearch installation, we are looking at plugins here. It might be possible to use some of them without installing them in Elasticsearch, but I didn't really try. The plugins generally do more things, but I am looking at the REST capabilities only.

Marvel

Marvel is the commercial plugin by Elasticsearch (free for development purposes). Though it does lots of additional things, it contains the new version of Sense. Marvel tracks a lot of the state of, and interaction with, Elasticsearch in a separate index, so be aware that it might store quite some data. Also, of course, you need to respect the license: when using it on a production system you need to pay. The main Marvel dashboard, which is Kibana, is available at http://localhost:9200/_plugin/marvel. Sense can be accessed directly using http://localhost:9200/_plugin/marvel/sense/index.html. The Sense version of Marvel behaves exactly like the one you are used to from the Chrome plugin. It has highlighting, autocompletion (even for new features), the history and the formatting.

elasticsearch-head

elasticsearch-head seems to be one of the oldest plugins available for Elasticsearch and is recommended a lot.
The main dashboard is available at http://localhost:9200/_plugin/head/, which contains the cluster overview. There is an interface for building queries on the Structured Query tab. It lets you execute queries by selecting values from dropdown boxes, and it can even detect the fields that are available for the index and type. Results are displayed in a table. Unfortunately, the values that can be selected are rather outdated. Instead of the match query it still contains the text query, which has been deprecated since Elasticsearch 0.19.9 and is not available anymore in newer versions of Elasticsearch. Another interface, on the Any Request tab, lets you execute custom requests. The text box that accepts the body has no highlighting and it is not possible to use tabs, but errors will be displayed, the response is formatted, links are set, and you have the option to display responses as a table or as JSON. The history lets you execute older queries. There are other options, like the Result Transformer, that sound interesting, but I have never tried those.

elasticsearch-kopf

elasticsearch-kopf is a clone of elasticsearch-head that also provides an interface to send arbitrary requests to Elasticsearch. You can enter queries and have them executed for you. There is a request history, you have highlighting and you can format the request document, but unfortunately the interface is missing autocompletion. If you'd like to learn more about elasticsearch-kopf, I have recently published a tour through its features.

Inquisitor

Inquisitor is a tool to help you understand Elasticsearch queries. Among other options, it allows you to execute search queries. Index and type can be chosen from the ones available in the cluster. There is no formatting in the query field, and you can't even use tabs for indentation, but errors in your query are displayed in a panel on top of the results while typing. The response is displayed in a table, with matching fields automatically highlighted.
Because of the limited possibilities when entering text, the plugin seems to be more useful for the analyzing part or for pasting existing queries.

Elastic-Hammer

Andrew Cholakian, the author of Exploring Elasticsearch, has published another query tool, Elastic-Hammer. It can either be installed locally or used as an online version directly. It is a quite useful query tool that will display syntactic errors in your query and format images and links in a pretty response. It even offers autocompletion, though not as elaborate as the one Sense and Marvel provide: it will display any allowed term, no matter the context. So you can't really see which terms are currently allowed, only that the term is allowed at all. Nevertheless, this can be useful. Searches can also be saved in local storage and executed again.

Conclusion

Currently, none of the free and open source plugins seems to provide an interface that is as good as the one contained in Sense and Marvel. As Marvel is free for development, you can still use it, but you need to install it in your instances again. Sense was more convenient and easier to start with, but I guess one can get along with Marvel in much the same way. Finally, I wouldn't be surprised if someone from the very active Elasticsearch community comes up with another tool that can take the place of Sense again.

Reference: Goodbye Sense – Welcome Alternatives? from our JCG partner Florian Hopf at the Dev Time blog.
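Incidentally, since all of these tools ultimately just talk to Elasticsearch's REST API, you can always fall back to plain HTTP when no plugin is available. Below is a minimal sketch in Java; the host, port, and index name ("articles") are assumptions for illustration, not part of any of the tools above.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Plugin-free interaction with Elasticsearch: send a search request over plain HTTP.
public class SimpleSearch {

    // Build a simple match-query body; kept separate so it can be reused.
    static String matchQuery(String field, String value) {
        return "{\"query\":{\"match\":{\"" + field + "\":\"" + value + "\"}}}";
    }

    // POST the body to the given URL and return the raw JSON response.
    static String search(String url, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line);
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) {
        try {
            String body = matchQuery("title", "elasticsearch");
            System.out.println(search("http://localhost:9200/articles/_search", body));
        } catch (IOException e) {
            // No cluster running locally; nothing to search.
            System.out.println("Elasticsearch not reachable: " + e.getMessage());
        }
    }
}
```

No highlighting or autocompletion here, of course, which is exactly the convenience the plugins above add on top of this raw API.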

Java SE 8 new features tour: Functional programming with Lambda Expression

This article of the "Java SE 8 new features tour" series takes a deep dive into Lambda expressions. I will show you a few different uses of Lambda expressions; they all have in common the implementation of functional interfaces. I will explain how the compiler infers information from the code, such as the specific types of variables, and what is really happening in the background. In the previous article, "Java SE 8 new features tour: The Big change, in Java Development world", I talked about what we are going to explore during this series. I started with an introduction to the main features of Java SE 8, followed by the installation process of JDK 8 on both Microsoft Windows and Apple Mac OS X platforms, with important advice and notes to take care of. Finally, we went through the development of a console application powered by a Lambda expression to make sure that we had installed Java SE 8 properly. The source code is hosted on my GitHub account: clone it from HERE.

What is a Lambda expression?

Perhaps the best-known new feature of Java SE 8 is Project Lambda, an effort to bring Java into the world of functional programming. In computer science terminology, a Lambda is an anonymous function, that is, a function without a name. In Java, all functions are members of classes and are referred to as methods. To create a method, you need to define the class of which it's a member. A lambda expression in Java SE 8 lets you define a class and a single method with very concise syntax, implementing an interface that has a single abstract method. Let's figure out the idea. Lambda expressions let developers simplify and shorten their code, making it more readable and maintainable, and removing the more verbose class declarations. Let's take a look at a few code snippets.

Implementing an interface

Prior to Java SE 8, if you wanted to create a thread, you'd first define a class that implements the Runnable interface.
This is an interface that has a single abstract method named run that accepts no arguments. You might define the class in its own code file, a file named MyRunnable.java. And you might name the class MyRunnable, as I've done here, and then implement the single abstract method.

public class MyRunnable implements Runnable {

    @Override
    public void run() {
        System.out.println("I am running");
    }

    public static void main(String[] args) {
        MyRunnable r1 = new MyRunnable();
        new Thread(r1).start();
    }
}

In this example, my implementation outputs a literal string to the console. You would then take that object and pass it to an instance of the Thread class. I'm instantiating my runnable as an object named r1, passing it to the Thread's constructor and calling the Thread's start method. My code will now run in its own thread and its own memory space.

Implementing an inner class

You could improve on this code a bit. Instead of declaring your class in a separate file, you might declare it as a single-use class, known as an inner class, local to the method in which it's used.

public static void main(String[] args) {
    Runnable r1 = new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    };
    new Thread(r1).start();
}

So now, I'm once again creating an object named r1, but I'm calling the interface's constructor directly and, once again, implementing its single abstract method. Then I'm passing the object to the Thread's constructor.

Implementing an anonymous class

And you can make it even more concise by declaring the class as an anonymous class, so named because it's never given a name. I'm instantiating the Runnable interface and immediately passing it to the Thread constructor. I'm still implementing the run method and I'm still calling the Thread's start method.
public static void main(String[] args) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    }).start();
}

Using a lambda expression

In Java SE 8 you can refactor this code to significantly reduce it and make it a lot more readable. The lambda version might look like this:

public static void main(String[] args) {
    Runnable r1 = () -> System.out.println("I am running");
    new Thread(r1).start();
}

I'm declaring an object with a type of Runnable, but now I'm using a single line of code to declare the single abstract method implementation, and then once again I'm passing the object to the Thread's constructor. You are still implementing the Runnable interface and calling its run method, but you're doing it with a lot less code. In addition, it could be reduced further, as follows:

public static void main(String[] args) {
    new Thread(() -> System.out.println("I am running")).start();
}

Here is an important quote from an early specs document about Project Lambda: "Lambda expressions can only appear in places where they will be assigned to a variable whose type is a functional interface." (Brian Goetz)

Let's break this down to understand what's happening.

What are functional interfaces?

A functional interface is an interface that has only a single custom abstract method, that is, one that is not inherited from the Object class. Java has many of these interfaces, such as Runnable, Comparator, and Callable, among many others. Prior to Java 8, they were known as Single Abstract Method, or SAM, interfaces. In Java 8 we now call them functional interfaces.

Lambda expression syntax

This lambda expression is returning an implementation of the Runnable interface. It has two parts, separated by a new bit of syntax called the arrow token, or the Lambda operator. The first part of the lambda expression, before the arrow token, is the signature of the method you're implementing.
In this example, it's a no-arguments method, so it's represented just by parentheses. But if I'm implementing a method that accepts arguments, I would simply give the arguments names; I don't have to declare their types. Because the interface has only a single abstract method, the data types are already known, and one of the goals of a lambda expression is to eliminate unnecessary syntax. The second part of the expression, after the arrow token, is the implementation of the single method's body. If it's just a single line of code, as with this example, you don't need anything else. To implement a method body with multiple statements, wrap them in braces:

Runnable r = () -> {
    System.out.println("Hello!");
    System.out.println("Lambda!");
};

Lambda goals

Lambda expressions can reduce the amount of code you need to write and the number of custom classes you have to create and maintain. If you're implementing an interface for one-time use, it doesn't always make sense to create yet another code file or yet another named class. A lambda expression can define an anonymous implementation for one-time use and significantly streamline your code.

Defining and instantiating a functional interface

To get started learning about lambda expressions, I'll create a brand new functional interface, an interface with a single abstract method, and then I'll implement that interface with a lambda expression. You can use my source code project "JavaSE8-Features", hosted on GitHub, to navigate the project code.

Method without arguments, lambda implementation

In my source code, I'll put the interface into its own sub-package ending with lambda.interfaces, and I'll name the interface HelloInterface. In order to implement an interface with a lambda expression, it must have a single abstract method. I will declare a public method that returns void, and I'll name it doGreeting.
It won't accept any arguments. That is all you need to do to make an interface that's usable with lambda expressions. If you want, you can use a new annotation added in Java SE 8, named @FunctionalInterface:

/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface HelloInterface {
    void doGreeting();
}

Now I am ready to create a new class, UseHelloInterface, under the lambda.impl package, which will instantiate my functional interface (HelloInterface) as follows (out is a static import of System.out):

/**
 * @author mohamed_taman
 */
public class UseHelloInterface {

    public static void main(String[] args) {
        HelloInterface hello = () -> out.println("Hello from Lambda expression");
        hello.doGreeting();
    }
}

Run the file and check the result; it should output the following:

--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
Hello from Lambda expression

So that's what the code can look like when you're working with a single abstract method that doesn't accept any arguments. Let's take a look at what it looks like with arguments.

Method with arguments, lambda implementation

Under lambda.interfaces I'll create a new interface and name it CalculatorInterface. Then I will declare a public method that returns void, and I will name it doCalculate; it will receive two integer arguments, value1 and value2.
/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface CalculatorInterface {
    public void doCalculate(int value1, int value2);
}

Now I am ready to create a new class, UseCalculatorInterface, under the lambda.impl package, which will instantiate my functional interface (CalculatorInterface) as follows:

public static void main(String[] args) {
    CalculatorInterface calc = (v1, v2) -> {
        int result = v1 * v2;
        out.println("The calculation result is: " + result);
    };
    calc.doCalculate(10, 5);
}

Note the doCalculate() arguments: they were named value1 and value2 in the interface, but you can name them anything here. I'll name them v1 and v2. I don't need to put int before the argument names; that information is already known, because the compiler can infer it from the functional interface's method signature. Run the file and check the result; it should output the following:

--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
The calculation result is: 50
BUILD SUCCESS

Always bear in mind the following rule: the interface can only have one abstract method. Then that interface and its single abstract method can be implemented with a lambda expression.

Using built-in functional interfaces with lambdas

I've previously described how to use a lambda expression to implement an interface that you've created yourself. Now, I'll show lambda expressions with built-in interfaces, interfaces that are part of the Java runtime. I'll use two examples. I'm working in a package called lambda.builtin that's part of the exercise files, and I'll start with this class: UseThreading. In this class, I'm implementing the Runnable interface.
This interface is part of the multithreaded architecture of Java. My focus here is on how you code, not on how it operates. I'm going to show how to use lambda expressions to replace these inner classes. I'll comment out the code that's declaring the two objects, then re-declare them and do the implementation with lambdas. So let's start.

public static void main(String[] args) {
    // Old version:
    // Runnable thrd1 = new Runnable() {
    //     @Override
    //     public void run() {
    //         out.println("Hello Thread 1.");
    //     }
    // };

    // Using a lambda expression instead of the inner class:
    Runnable thrd1 = () -> out.println("Hello Thread 1.");
    new Thread(thrd1).start();

    // Old version:
    /*
    new Thread(new Runnable() {
        @Override
        public void run() {
            out.println("Hello Thread 2.");
        }
    }).start();
    */

    // Using a lambda expression instead of the anonymous class:
    new Thread(() -> out.println("Hello Thread 2.")).start();
}

Let's look at another example. I will use a Comparator. The Comparator is another functional interface in Java, with a single abstract method: the compare method. Open the UseComparator class and check the commented-out bit of code, which is the actual code before refactoring it to a lambda expression.
public static void main(String[] args) {
    List<String> values = new ArrayList<>();
    values.add("AAA");
    values.add("bbb");
    values.add("CCC");
    values.add("ddd");
    values.add("EEE");

    // Case-sensitive sort operation
    sort(values);
    out.println("Simple sort:");
    print(values);

    // Case-insensitive sort operation with an anonymous class:
    /*
    Collections.sort(values, new Comparator<String>() {
        @Override
        public int compare(String o1, String o2) {
            return o1.compareToIgnoreCase(o2);
        }
    });
    */

    // Case-insensitive sort operation with a lambda:
    sort(values, (o1, o2) -> o1.compareToIgnoreCase(o2));
    out.println("Sort with Comparator");
    print(values);
}

As before, it doesn't provide you any performance benefit; the underlying functionality is exactly the same. Whether you declare your own classes, use inner or anonymous inner classes, or use lambda expressions is completely up to you.

In the next article of this series, we will explore and code how to traverse collections using lambda expressions, filter collections with Predicate interfaces, traverse collections with method references, implement default methods in interfaces, and finally implement static methods in interfaces.

Resources:
- The Java Tutorials, Lambda Expressions
- JSR 310: Date and Time API
- JSR 337: Java SE 8 Release Contents
- OpenJDK website
- Java Platform, Standard Edition 8, API Specification

Reference: Java SE 8 new features tour: Functional programming with Lambda Expression from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog.
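One detail worth spelling out before closing: all of the lambdas above implement void methods, but a lambda body can also produce a value, as the compare method of Comparator already hinted. Here is a small illustrative sketch; StringMapper is a hypothetical functional interface of my own, not part of the article's project.

```java
// A hypothetical functional interface whose single abstract method returns a value.
@FunctionalInterface
interface StringMapper {
    String map(String input);
}

public class ReturningLambdas {
    public static void main(String[] args) {
        // A single-expression body needs no "return": the expression's value is returned.
        StringMapper upper = s -> s.toUpperCase();

        // A multi-statement body in braces needs an explicit return statement.
        StringMapper shout = s -> {
            String result = s.toUpperCase();
            return result + "!";
        };

        System.out.println(upper.map("hello")); // prints HELLO
        System.out.println(shout.map("hello")); // prints HELLO!
    }
}
```

The same inference rules apply as before: the parameter type of s and the return type are both taken from the interface's single abstract method.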
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.