
What's New Here?


Flyway and jOOQ for Unbeatable SQL Development Productivity

When performing database migrations, we at Data Geekery recommend using jOOQ with Flyway – Database Migrations Made Easy. In this post, we’re going to look into a simple way to get started with the two frameworks.

Philosophy

There are a variety of ways in which jOOQ and Flyway could interact with each other in various development setups. In this tutorial we’re going to show just one variant of such framework team play – a variant that we find particularly compelling for most use cases. The general philosophy and workflow behind the following approach can be summarised as:

1. Database increment
2. Database migration
3. Code re-generation
4. Development

The four steps above can be repeated time and again, every time you need to modify something in your database. More concretely, let’s consider:

1. Database increment – You need a new column in your database, so you write the necessary DDL in a Flyway script
2. Database migration – This Flyway script is now part of your deliverable, which you can share with all developers, who can migrate their databases with it the next time they check out your change
3. Code re-generation – Once the database is migrated, you regenerate all jOOQ artefacts (see code generation), locally
4. Development – You continue developing your business logic, writing code against the updated, generated database schema

0.1. Maven Project Configuration – Properties

The following properties are defined in our pom.xml, to be able to reuse them between plugin configurations:

<properties>
  <db.url>jdbc:h2:~/flyway-test</db.url>
  <db.username>sa</db.username>
</properties>

0.2. Maven Project Configuration – Dependencies

While jOOQ and Flyway could be used in standalone migration scripts, in this tutorial we’ll be using Maven for the standard project setup. You will also find the source code of this tutorial on GitHub, and the full pom.xml file here.
These are the dependencies that we’re using in our Maven configuration:

<!-- We'll add the latest version of jOOQ and our JDBC driver - in this case H2 -->
<dependency>
  <groupId>org.jooq</groupId>
  <artifactId>jooq</artifactId>
  <version>3.4.0</version>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <version>1.4.177</version>
</dependency>

<!-- For improved logging, we'll be using log4j via slf4j to see what's going on during migration and code generation -->
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.16</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.5</version>
</dependency>

<!-- To ensure our code is working, we're using JUnit -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.11</version>
  <scope>test</scope>
</dependency>

0.3. Maven Project Configuration – Plugins

After the dependencies, let’s simply add the Flyway and jOOQ Maven plugins like so. The Flyway plugin:

<plugin>
  <groupId>org.flywaydb</groupId>
  <artifactId>flyway-maven-plugin</artifactId>
  <version>3.0</version>

  <!-- Note that we're executing the Flyway plugin in the "generate-sources" phase -->
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>migrate</goal>
      </goals>
    </execution>
  </executions>

  <!-- Note that we need to prefix the db/migration path with filesystem: to prevent Flyway from looking for our migration scripts only on the classpath -->
  <configuration>
    <url>${db.url}</url>
    <user>${db.username}</user>
    <locations>
      <location>filesystem:src/main/resources/db/migration</location>
    </locations>
  </configuration>
</plugin>

The above Flyway Maven plugin configuration will read and execute all database migration scripts from src/main/resources/db/migration prior to compiling Java source code.
While the official Flyway documentation suggests that migrations be done in the compile phase, the jOOQ code generator relies on such migrations having been done prior to code generation. After the Flyway plugin, we’ll add the jOOQ Maven plugin. For more details, please refer to the manual’s section about the code generation configuration.

<plugin>
  <groupId>org.jooq</groupId>
  <artifactId>jooq-codegen-maven</artifactId>
  <version>${org.jooq.version}</version>

  <!-- The jOOQ code generation plugin is also executed in the generate-sources phase, prior to compilation -->
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>

  <!-- This is a minimal working configuration. See the manual's section about the code generator for more details -->
  <configuration>
    <jdbc>
      <url>${db.url}</url>
      <user>${db.username}</user>
    </jdbc>
    <generator>
      <database>
        <includes>.*</includes>
        <inputSchema>FLYWAY_TEST</inputSchema>
      </database>
      <target>
        <packageName>org.jooq.example.flyway.db.h2</packageName>
        <directory>target/generated-sources/jooq-h2</directory>
      </target>
    </generator>
  </configuration>
</plugin>

This configuration will now read the FLYWAY_TEST schema and reverse-engineer it into the target/generated-sources/jooq-h2 directory, and within that, into the org.jooq.example.flyway.db.h2 package.

1. Database increments

Now we can start developing our database. For that, we’ll create database increment scripts, which we put into the src/main/resources/db/migration directory, as previously configured for the Flyway plugin. We’ll add these files:

V1__initialise_database.sql
V2__create_author_table.sql
V3__create_book_table_and_records.sql

These three scripts model our schema versions 1-3 (note the capital V!).
Here are the scripts’ contents:

-- V1__initialise_database.sql
DROP SCHEMA flyway_test IF EXISTS;
CREATE SCHEMA flyway_test;

-- V2__create_author_table.sql
CREATE SEQUENCE flyway_test.s_author_id START WITH 1;

CREATE TABLE flyway_test.author (
  id INT NOT NULL,
  first_name VARCHAR(50),
  last_name VARCHAR(50) NOT NULL,
  date_of_birth DATE,
  year_of_birth INT,
  address VARCHAR(50),

  CONSTRAINT pk_t_author PRIMARY KEY (ID)
);

-- V3__create_book_table_and_records.sql
CREATE TABLE flyway_test.book (
  id INT NOT NULL,
  author_id INT NOT NULL,
  title VARCHAR(400) NOT NULL,

  CONSTRAINT pk_t_book PRIMARY KEY (id),
  CONSTRAINT fk_t_book_author_id FOREIGN KEY (author_id) REFERENCES flyway_test.author(id)
);

INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'George', 'Orwell', '1903-06-25', 1903, null);
INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'Paulo', 'Coelho', '1947-08-24', 1947, null);

INSERT INTO flyway_test.book VALUES (1, 1, '1984');
INSERT INTO flyway_test.book VALUES (2, 1, 'Animal Farm');
INSERT INTO flyway_test.book VALUES (3, 2, 'O Alquimista');
INSERT INTO flyway_test.book VALUES (4, 2, 'Brida');

2. Database migration and 3. Code regeneration

The above three scripts are picked up by Flyway and executed in the order of their versions. This can be seen very simply by executing:

mvn clean install

And then observing the log output from Flyway…

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 3 migrations (execution time 00:00.004s)
[INFO] Creating Metadata table: "PUBLIC"."schema_version"
[INFO] Current version of schema "PUBLIC": <>
[INFO] Migrating schema "PUBLIC" to version 1
[INFO] Migrating schema "PUBLIC" to version 2
[INFO] Migrating schema "PUBLIC" to version 3
[INFO] Successfully applied 3 migrations to schema "PUBLIC" (execution time 00:00.073s).
… and from jOOQ on the console:

[INFO] --- jooq-codegen-maven:3.5.0-SNAPSHOT:generate (default) @ jooq-flyway-example ---
[INFO] Using this configuration:
...
[INFO] Generating schemata : Total: 1
[INFO] Generating schema : FlywayTest.java
[INFO] ----------------------------------------------------------
[....]
[INFO] GENERATION FINISHED! : Total: 337.576ms, +4.299ms

4. Development

Note that all of the previous steps are executed automatically, every time someone adds new migration scripts to the Maven module. For instance, a team member might have committed a new migration script; you check it out, rebuild, and get the latest jOOQ-generated sources for your own development or integration-test database. Now that these steps are done, you can proceed with writing your database queries. Imagine the following test case:

import org.jooq.Result;
import org.jooq.impl.DSL;
import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;

import static java.util.Arrays.asList;
import static org.jooq.example.flyway.db.h2.Tables.*;
import static org.junit.Assert.assertEquals;

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

If you run mvn clean install again, the above integration test will now compile and pass!

Reiterate

The power of this approach becomes clear once you start performing database modifications this way.
Let’s assume that the French guy on our team prefers to have things his way (no offense intended):

-- V4__le_french.sql
ALTER TABLE flyway_test.book ALTER COLUMN title RENAME TO le_titre;

They check it in, you check out the new database migration script, run:

mvn clean install

And then observe the log output:

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 4 migrations (execution time 00:00.005s)
[INFO] Current version of schema "PUBLIC": 3
[INFO] Migrating schema "PUBLIC" to version 4
[INFO] Successfully applied 1 migration to schema "PUBLIC" (execution time 00:00.016s).

So far so good, but later on:

[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] C:\...\AfterMigrationTest.java:[24,19] error: cannot find symbol
[INFO] 1 error

When we go back to our Java integration test, we can immediately see that the TITLE column is still being referenced, but it no longer exists:

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
                   // ^^^^^ This column no longer exists.
                   //       We'll have to rename it to LE_TITRE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

Conclusion

This tutorial shows how easily you can build a rock-solid development process using Flyway and jOOQ to prevent SQL-related errors very early in your development lifecycle – immediately at compile time, rather than in production! Reference: Flyway and jOOQ for Unbeatable SQL Development Productivity from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Fibonacci and Lucas Sequences

This post touches on three of my favorite topics – math, transferring knowledge through experience (tutorial unit tests) and the importance of research. Most developers are aware of the Fibonacci sequence, mostly through job interviews. To briefly recap, the series is defined as:

F(n) = F(n-1) + F(n-2), n > 2
F(1) = F(2) = 1

There’s a variant definition:

F(n) = F(n-1) + F(n-2), n > 1
F(1) = 1
F(0) = 0

There are four well-known solutions to the white-board question “write code to calculate F(n)”.

Recursion – you need to mention this to show that you’re comfortable with recursion, but you must also mention that it’s a Really Bad Idea, since it requires O(2^n) time (you double the work for each n) and O(n) stack space.

Recursion with memoization – this can be a good approach if you point out that it’s a good generalization. Basically it’s recursion, but you maintain a cache (the memoization) so you only need to make each recursive call once – subsequent recursive calls just look up the cached value. This is a flexible technique since it can be used for any pure recursive function. (That is, a recursive function that depends solely on its inputs and has no side effects.) The first call requires O(n) time, stack and heap space. I don’t recall if it matters whether you do the recursive call on the smaller or larger value first. If you have a persistent cache, subsequent calls require O(1) time and stack space and O(n) heap space.

Iteration – if you can’t cache the values (or just want to efficiently initialize a cache) you can use an iterative approach. It requires O(n) time but only O(1) stack and heap space.

Direct approximation – finally, there is a well-known approximation using φ, or a variant using sqrt(5). It is O(1) for time, stack space, and heap space. It’s a good approach if you 1) use a lookup table for the smallest values and 2) make sure n is not too big. The last point is often overlooked.
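For reference, the iterative approach from the list above can be sketched in a few lines of Java. This is my own minimal sketch, not code from the article, and it uses the variant definition with F(0) = 0; BigInteger keeps large indices exact:

```java
import java.math.BigInteger;

public class FibIterative {
    // Iterative Fibonacci: O(n) time, O(1) space beyond the two accumulators.
    static BigInteger fib(int n) {
        BigInteger a = BigInteger.ZERO; // F(0)
        BigInteger b = BigInteger.ONE;  // F(1)
        for (int i = 0; i < n; i++) {
            BigInteger t = a.add(b);
            a = b;
            b = t;
        }
        return a; // after n steps, a holds F(n)
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // F(10) = 55
    }
}
```

The two accumulators slide along the sequence, which is also exactly how you would pre-populate a cache for the memoized variant.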
The approximation only works as long as you don’t exceed the precision of your floating point numbers. F(100,000) should be good. F(1,000,000,000,000) may not be. The iterative approach isn’t practical with numbers this large.

Research

Did you know there are two other solutions with performance O(lg(n)) (per Wikipedia) in time and space? (I’m not convinced it’s O(lg(n)) since it’s not a divide-and-conquer algorithm – the two recursive calls do not split the initial work between them – but with memoization it’s definitely less than O(n). I suspect, but can’t quickly prove, that it’s O(lg^2(n)).) Per Wikipedia we know:

F(2n-1) = F(n)^2 + F(n-1)^2
F(2n) = F(n)(F(n) + 2F(n-1))

It is straightforward to rewrite this as a recursive method for F(n). There is another property that considers three cases – F(3n-2), F(3n-1) and F(3n). See the code for details. These sites provide many additional properties of the Fibonacci and related Lucas sequences. Few developers will ever need to know these properties, but in those rare cases an hour of research can save days of work.

Implementation

We can now use our research to implement suitable methods for the Fibonacci and Lucas sequences.

Fibonacci calculation

(This code does not show an optimization using direct approximation for uncached values for sufficiently small n.)

/**
 * Get specified Fibonacci number.
 * @param n
 * @return
 */
@Override
public BigInteger get(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("index must be non-negative");
    }

    BigInteger value = null;

    synchronized (cache) {
        value = cache.get(n);

        if (value == null) {
            int m = n / 3;

            switch (n % 3) {
            case 0:
                value = TWO.multiply(get(m).pow(3))
                    .add(THREE.multiply(get(m + 1)).multiply(get(m))
                    .multiply(get(m - 1)));
                break;

            case 1:
                value = get(m + 1).pow(3)
                    .add(THREE.multiply(get(m + 1)
                    .multiply(get(m).pow(2))))
                    .subtract(get(m).pow(3));
                break;

            case 2:
                value = get(m + 1).pow(3)
                    .add(THREE.multiply(get(m + 1).pow(2)
                    .multiply(get(m))))
                    .add(get(m).pow(3));
                break;
            }

            cache.put(n, value);
        }
    }

    return value;
}

Fibonacci Iterator

/**
 * ListIterator class.
 * @author bgiles
 */
private static final class FibonacciIterator extends ListIterator {
    private BigInteger x = BigInteger.ZERO;
    private BigInteger y = BigInteger.ONE;

    public FibonacciIterator() {
    }

    public FibonacciIterator(int startIndex, FibonacciNumber fibonacci) {
        this.idx = startIndex;
        this.x = fibonacci.get(idx);
        this.y = fibonacci.get(idx + 1);
    }

    protected BigInteger getNext() {
        BigInteger t = x;
        x = y;
        y = t.add(x);

        return t;
    }

    protected BigInteger getPrevious() {
        BigInteger t = y;
        y = x;
        x = t.subtract(x);

        return x;
    }
}

Lucas calculation

/**
 * Get specified Lucas number.
 * @param n
 * @return
 */
public BigInteger get(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("index must be non-negative");
    }

    BigInteger value = null;

    synchronized (cache) {
        value = cache.get(n);

        if (value == null) {
            value = Sequences.FIBONACCI.get(n + 1)
                .add(Sequences.FIBONACCI.get(n - 1));
            cache.put(n, value);
        }
    }

    return value;
}

Lucas iterator

/**
 * ListIterator class.
 * @author bgiles
 */
private static final class LucasIterator extends ListIterator {
    private BigInteger x = TWO;
    private BigInteger y = BigInteger.ONE;

    public LucasIterator() {
    }

    public LucasIterator(int startIndex, LucasNumber lucas) {
        idx = startIndex;
        this.x = lucas.get(idx);
        this.y = lucas.get(idx + 1);
    }

    protected BigInteger getNext() {
        BigInteger t = x;
        x = y;
        y = t.add(x);

        return t;
    }

    protected BigInteger getPrevious() {
        BigInteger t = y;
        y = x;
        x = t.subtract(x);

        return x;
    }
}

Education

What is the best way to educate other developers about the existence of these unexpected relationships? Code, of course! What is the best way to educate other developers about the existence of code that demonstrates these relationships? Unit tests, of course! It is straightforward to write unit tests that simultaneously verify our implementation and inform other developers about tricks they can use to improve their code. The key is to provide a link to additional information.

Fibonacci Sequence

public class FibonacciNumberTest extends AbstractRecurrenceSequenceTest {
    private static final BigInteger MINUS_ONE = BigInteger.valueOf(-1);

    /**
     * Constructor
     */
    public FibonacciNumberTest() throws NoSuchMethodException {
        super(FibonacciNumber.class);
    }

    /**
     * Get number of tests to run.
     */
    @Override
    public int getMaxTests() {
        return 300;
    }

    /**
     * Verify the definition is properly implemented.
     *
     * @return
     */
    @Test
    @Override
    public void verifyDefinition() {
        for (int n = 2; n < getMaxTests(); n++) {
            BigInteger u = seq.get(n);
            BigInteger v = seq.get(n - 1);
            BigInteger w = seq.get(n - 2);
            Assert.assertEquals(u, v.add(w));
        }
    }

    /**
     * Verify initial terms.
     */
    @Test
    @Override
    public void verifyInitialTerms() {
        verifyInitialTerms(Arrays.asList(ZERO, ONE, ONE, TWO, THREE, FIVE, EIGHT));
    }

    /**
     * Verify that every third term is even and the other two terms are odd.
     * This is a subset of the general divisibility property.
     *
     * @return
     */
    @Test
    public void verifyEvenDivisibility() {
        for (int n = 0; n < getMaxTests(); n += 3) {
            Assert.assertEquals(ZERO, seq.get(n).mod(TWO));
            Assert.assertEquals(ONE, seq.get(n + 1).mod(TWO));
            Assert.assertEquals(ONE, seq.get(n + 2).mod(TWO));
        }
    }

    /**
     * Verify general divisibility property.
     *
     * @return
     */
    @Test
    public void verifyDivisibility() {
        for (int d = 3; d < getMaxTests(); d++) {
            BigInteger divisor = seq.get(d);

            for (int n = 0; n < getMaxTests(); n += d) {
                Assert.assertEquals(ZERO, seq.get(n).mod(divisor));

                for (int i = 1; (i < d) && ((n + i) < getMaxTests()); i++) {
                    Assert.assertFalse(ZERO.equals(seq.get(n + i).mod(divisor)));
                }
            }
        }
    }

    /**
     * Verify the property that gcd(F(m), F(n)) = F(gcd(m,n)). This is a
     * stronger statement than the divisibility property.
     */
    @Test
    public void verifyGcd() {
        for (int m = 3; m < getMaxTests(); m++) {
            for (int n = m + 1; n < getMaxTests(); n++) {
                BigInteger gcd1 = seq.get(m).gcd(seq.get(n));
                int gcd2 = BigInteger.valueOf(m).gcd(BigInteger.valueOf(n))
                    .intValue();
                Assert.assertEquals(gcd1, seq.get(gcd2));
            }
        }
    }

    /**
     * Verify second identity (per Wikipedia): sum(F(i)) = F(n+2)-1
     */
    @Test
    public void verifySecondIdentity() {
        BigInteger sum = ZERO;

        for (int n = 0; n < getMaxTests(); n++) {
            sum = sum.add(seq.get(n));
            Assert.assertEquals(sum, seq.get(n + 2).subtract(ONE));
        }
    }

    /**
     * Verify third identity (per Wikipedia): sum(F(2i)) = F(2n+1)-1 and
     * sum(F(2i+1)) = F(2n)
     */
    @Test
    public void verifyThirdIdentity() {
        BigInteger sum = ZERO;

        for (int n = 0; n < getMaxTests(); n += 2) {
            sum = sum.add(seq.get(n));
            Assert.assertEquals(sum, seq.get(n + 1).subtract(ONE));
        }

        sum = ZERO;

        for (int n = 1; n < getMaxTests(); n += 2) {
            sum = sum.add(seq.get(n));
            Assert.assertEquals(sum, seq.get(n + 1));
        }
    }

    /**
     * Verify fourth identity (per Wikipedia): sum(iF(i)) = nF(n+2) - F(n+3) + 2
     */
    @Test
    public void verifyFourthIdentity() {
        BigInteger sum = ZERO;

        for (int n = 0; n < getMaxTests(); n++) {
            sum =
                sum.add(BigInteger.valueOf(n).multiply(seq.get(n)));

            BigInteger x = BigInteger.valueOf(n).multiply(seq.get(n + 2))
                .subtract(seq.get(n + 3)).add(TWO);
            Assert.assertEquals(sum, x);
        }
    }

    /**
     * Verify fifth identity (per Wikipedia): sum(F(i)^2) = F(n)F(n+1)
     */
    public void verifyFifthIdentity() {
        BigInteger sum = ZERO;

        for (int n = 0; n < getMaxTests(); n += 2) {
            BigInteger u = seq.get(n);
            BigInteger v = seq.get(n + 1);
            sum = sum.add(u.pow(2));
            Assert.assertEquals(sum, u.multiply(v));
        }
    }

    /**
     * Verify Cassini's Identity - F(n-1)F(n+1) - F(n)^2 = -1^n
     */
    @Test
    public void verifyCassiniIdentity() {
        for (int n = 2; n < getMaxTests(); n += 2) {
            BigInteger u = seq.get(n - 1);
            BigInteger v = seq.get(n);
            BigInteger w = seq.get(n + 1);

            BigInteger x = w.multiply(u).subtract(v.pow(2));
            Assert.assertEquals(ONE, x);
        }

        for (int n = 1; n < getMaxTests(); n += 2) {
            BigInteger u = seq.get(n - 1);
            BigInteger v = seq.get(n);
            BigInteger w = seq.get(n + 1);

            BigInteger x = w.multiply(u).subtract(v.pow(2));
            Assert.assertEquals(MINUS_ONE, x);
        }
    }

    /**
     * Verify doubling: F(2n-1) = F(n)^2 + F(n-1)^2 and F(2n) =
     * F(n)(F(n-1)+F(n+1)) = F(n)(2*F(n-1)+F(n)).
     */
    @Test
    public void verifyDoubling() {
        for (int n = 1; n < getMaxTests(); n++) {
            BigInteger u = seq.get(n - 1);
            BigInteger v = seq.get(n);
            BigInteger w = seq.get(n + 1);

            BigInteger x = v.multiply(v).add(u.pow(2));
            Assert.assertEquals(seq.get((2 * n) - 1), x);

            x = v.multiply(u.add(w));
            Assert.assertEquals(seq.get(2 * n), x);

            x = v.multiply(v.add(TWO.multiply(u)));
            Assert.assertEquals(seq.get(2 * n), x);
        }
    }

    /**
     * Verify tripling.
     */
    @Test
    public void verifyTripling() {
        for (int n = 1; n < getMaxTests(); n++) {
            BigInteger u = seq.get(n - 1);
            BigInteger v = seq.get(n);
            BigInteger w = seq.get(n + 1);

            BigInteger x = TWO.multiply(v.pow(3))
                .add(THREE.multiply(v).multiply(u).multiply(w));
            Assert.assertEquals(seq.get(3 * n), x);

            x = w.pow(3).add(THREE.multiply(w).multiply(v.pow(2)))
                .subtract(v.pow(3));
            Assert.assertEquals(seq.get((3 * n) + 1), x);

            x = w.pow(3).add(THREE.multiply(w.pow(2)).multiply(v)).add(v.pow(3));
            Assert.assertEquals(seq.get((3 * n) + 2), x);
        }
    }
}

Lucas Sequence

public class LucasNumberTest extends AbstractRecurrenceSequenceTest {
    private static final FibonacciNumber fibonacci = new FibonacciNumber();

    /**
     * Constructor
     */
    public LucasNumberTest() throws NoSuchMethodException {
        super(LucasNumber.class);
    }

    /**
     * Get number of tests to run.
     */
    @Override
    public int getMaxTests() {
        return 300;
    }

    /**
     * Verify the definition is properly implemented.
     *
     * @return
     */
    @Test
    @Override
    public void verifyDefinition() {
        for (int n = 2; n < getMaxTests(); n++) {
            BigInteger u = seq.get(n);
            BigInteger v = seq.get(n - 1);
            BigInteger w = seq.get(n - 2);
            Assert.assertEquals(u, v.add(w));
        }
    }

    /**
     * Verify initial terms.
     */
    @Test
    @Override
    public void verifyInitialTerms() {
        verifyInitialTerms(Arrays.asList(TWO, ONE, THREE, FOUR, SEVEN, ELEVEN,
            BigInteger.valueOf(18), BigInteger.valueOf(29)));
    }

    /**
     * Verify Lucas properties.
     */
    @Test
    public void verifyLucas() {
        // L(n) = F(n-1) + F(n+1)
        for (int n = 2; n < getMaxTests(); n++) {
            Assert.assertEquals(seq.get(n),
                fibonacci.get(n - 1).add(fibonacci.get(n + 1)));
        }
    }

    /**
     * F(2n) = L(n)F(n)
     */
    @Test
    public void verifyLucas2() {
        for (int n = 2; n < getMaxTests(); n++) {
            Assert.assertEquals(fibonacci.get(2 * n),
                seq.get(n).multiply(fibonacci.get(n)));
        }
    }

    /**
     * F(n) = (L(n-1) + L(n+1))/5
     */
    @Test
    public void verifyLucas3() {
        for (int n = 2; n < getMaxTests(); n++) {
            Assert.assertEquals(FIVE.multiply(fibonacci.get(n)),
                seq.get(n - 1).add(seq.get(n + 1)));
        }
    }

    /**
     * L(n)^2 = 5 F(n)^2 + 4(-1)^n
     */
    @Test
    public void verifyLucas4() {
        for (int n = 2; n < getMaxTests(); n += 2) {
            Assert.assertEquals(seq.get(n).pow(2),
                FIVE.multiply(fibonacci.get(n).pow(2)).add(FOUR));
        }

        for (int n = 1; n < getMaxTests(); n += 2) {
            Assert.assertEquals(seq.get(n).pow(2),
                FIVE.multiply(fibonacci.get(n).pow(2)).subtract(FOUR));
        }
    }
}

Conclusion

Obviously developers rarely need to compute Fibonacci numbers unless they’re working on Project Euler problems or at a job interview, so this code isn’t going to have direct utility. At the same time, it’s a powerful demonstration of the value of investing an hour or two in research, even if you’re sure you already know everything you need to know. You probably don’t need a BigInteger implementation, but some people might consider the O(lg(n)) approach preferable to the estimate using powers of φ, or could make good use of the relationships discussed on the MathWorld and Wikipedia pages.

Source Code

The good news is that I have published the source code for this… and the bad news is that it’s part of ongoing doodling when I’m doing Project Euler problems. (There are no solutions here – it’s entirely explorations of ideas inspired by the problems.)
So the code is a little rough and should not be used to decide whether or not to bring me in for an interview (unless you’re impressed): http://github.com/beargiles/projecteuler. Reference: Fibonacci and Lucas Sequences from our JCG partner Bear Giles at the Invariant Properties blog....
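The doubling identities quoted in the Research section above also translate directly into the classic "fast doubling" method. This sketch is mine, not Bear Giles' code; it uses the equivalent shifted form of the same identities, F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, computing the pair {F(n), F(n+1)} in O(lg(n)) recursive steps:

```java
import java.math.BigInteger;

public class FastDoubling {
    // Returns {F(n), F(n+1)} using the doubling identities:
    //   F(2k)   = F(k) * (2*F(k+1) - F(k))
    //   F(2k+1) = F(k)^2 + F(k+1)^2
    static BigInteger[] fibPair(long n) {
        if (n == 0) {
            return new BigInteger[] { BigInteger.ZERO, BigInteger.ONE };
        }
        BigInteger[] p = fibPair(n / 2);
        BigInteger a = p[0]; // F(k)
        BigInteger b = p[1]; // F(k+1)
        BigInteger c = a.multiply(b.shiftLeft(1).subtract(a)); // F(2k)
        BigInteger d = a.multiply(a).add(b.multiply(b));       // F(2k+1)
        if (n % 2 == 0) {
            return new BigInteger[] { c, d };         // n = 2k
        }
        return new BigInteger[] { d, c.add(d) };      // n = 2k+1
    }

    static BigInteger fib(long n) {
        return fibPair(n)[0];
    }

    public static void main(String[] args) {
        System.out.println(fib(100)); // stays exact with BigInteger
    }
}
```

Because the recursion halves n at each step instead of caching every index, there is no heap-resident memoization table at all, only O(lg(n)) stack frames.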

Type safe dependency injection using Java 8.0

So I sometimes really miss old-school Dependency Injection. Back when Spring was still “lightweight” we happily configured all our beans in an application.xml file with the “learn-in-a-day” Spring bean xml configuration. The downside to this was of course a loss of type safety. I can think of quite a few test cases whose sole purpose was to bootstrap the Spring configuration file and just see if the ApplicationContext starts up without going belly-up due to mis-wiring and incorrect resolution of included bean xml configuration files. I may be in the minority, but I never liked the Spring Schema configuration. To me it feels a bit like configuration for configuration.

Annotations came along and improved things, with the caveat that you have to import libraries for all those annotations. I like annotations, but there is a good case for having all your DI information in a central place so you can actually see how your app hangs together. Finally, you sometimes need to create managed objects you can’t annotate. Java-based Spring configuration makes things better with compile-time safety, but I had to rethink the way I did a lot of my wiring, as I lost some of the lazy evaluation that you get in a Spring context: your Java code evaluates immediately when the ApplicationContext starts up. So, Java-based DI is nice, but how can we use Java 8.0 to improve it?

Apply that Lambda Hammer

Right, so this is the part of the post that starts applying the new hammer in Java 8.0: Lambdas. Firstly, Lambdas give us a type safe way of deferring execution until it’s needed. So, let’s first create a wrapper object called “ObjectDefinition” whose job it is to define how an object should be created and wired with various values. It works by instantiating the class we want to create an object from (in this case we have a class called “MyObject”). We also give it a list of java.util.function.BiConsumer interfaces, each mapped to a specific value.
This list will be used to perform the actual task of setting values on the object. ObjectDefinition then instantiates the object using normal reflection and runs through this list of BiConsumer interfaces, passing the instance of the concrete object and the mapped value. Assuming we give our ObjectDefinition a fluent DSL, we can do the following to define the object, using a set() method which takes a BiConsumer and the value to set and populates the BiConsumer list:

MyObject result = new ObjectDefinition()
    .type(MyObject.class)
    .set((myObject, value) -> myObject.setA(value), "hello world")
    .set((myObject, value) -> myObject.setB(value), "hallo welt")
    .create();

The create() method simply instantiates a MyObject instance, then runs through the list of BiConsumers and invokes them, passing the mapped value.

Method pointers??!! in Java??!! (Well, kinda)

Now, another interesting feature in Java 8.0 is method references, a feature where the compiler wraps a method in a functional interface, provided that the method can map to the signature of that functional interface. Method references can map to an arbitrary instance method, provided that the first parameter of the functional interface is the instance value and the subsequent parameters match the method’s parameter list. This allows us to map a BiConsumer to a setter, where the first parameter is the target instance and the second parameter is the value passed to the setter:

MyObject result = new ObjectDefinition()
    .type(MyObject.class)
    .set(MyObject::setA, "hello world")
    .set(MyObject::setB, "hallo welt")
    .create();

Method references provide a way of passing a reference to a method in a completely type safe manner: all the examples require the correct types and values to be set, and the setter method needs to correspond to that type.
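To make the mechanics concrete, here is a minimal, hypothetical sketch of such an ObjectDefinition. It is not the actual class from this post's project (which differs in its details), and MyObject is a stand-in bean defined inline:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

// Illustrative sketch only -- the real ObjectDefinition described in the post differs.
public class ObjectDefinition<T> {
    private Class<T> type;
    // Each entry applies one configured value to a freshly created instance.
    private final List<Consumer<T>> setters = new ArrayList<>();

    public ObjectDefinition<T> type(Class<T> type) {
        this.type = type;
        return this;
    }

    // Pair a setter (lambda or method reference) with the value it should receive.
    public <V> ObjectDefinition<T> set(BiConsumer<T, V> setter, V value) {
        setters.add(instance -> setter.accept(instance, value));
        return this;
    }

    public T create() {
        try {
            T instance = type.getDeclaredConstructor().newInstance();
            setters.forEach(s -> s.accept(instance)); // replay the configured setters
            return instance;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("could not instantiate " + type, e);
        }
    }

    // Hypothetical bean used only to demonstrate the DSL.
    public static class MyObject {
        private String a;
        private String b;
        public void setA(String a) { this.a = a; }
        public void setB(String b) { this.b = b; }
        public String getA() { return a; }
        public String getB() { return b; }
    }

    public static void main(String[] args) {
        MyObject result = new ObjectDefinition<MyObject>()
            .type(MyObject.class)
            .set(MyObject::setA, "hello world")
            .set(MyObject::setB, "hallo welt")
            .create();
        System.out.println(result.getA() + " / " + result.getB());
    }
}
```

Note how set() erases each (setter, value) pair into a single Consumer<T>, so create() only has to instantiate the type reflectively and replay the list.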
It’s Container Time

So now we have a nice little DSL for building objects, but what about sticking it into a container and allowing our ObjectDefinition to inject references to other values? Well, assume we have this container, which conveniently provides a build() method as a hook to add new ObjectDefinitions. We now have a container we can use to inject different objects:

Container container = create((builder) -> {
    builder
        .define(MyObject.class)
        .set(MyObject::setA, "hello world");
});
MyObject myObject = container.get(MyObject.class);

Our Container object has a define() method which creates an instance of an ObjectDefinition, which is then used to define how the object is created.

But what about dependencies?

Dependency Injection is no fun without being able to inject dependencies, but since we have a container we can now reference other objects in it. To this end we add the inject() method to our ObjectDefinition type; this can then be used to reference another object in the container by its type:

Container container = create((builder) -> {
    builder.define(String.class)
        .args("hello world");

    builder.define(MyObject.class)
        .inject(MyObject::setA, String.class);
});
MyObject myObject = container.get(MyObject.class);

In this example we map an additional object of type String (the args() method here maps values to the constructor of an object). We then inject this String by calling the inject() method.

Cycle of Life

We can use the same approach of Lambdas and Method References to manage the life cycle of an object in the container. Assuming we want to run an initialisation method after all the values have been set, we simply add a new functional interface which is invoked after all the values are set. Here we use a java.util.function.Consumer interface, where the parameter is the instance we want to call the initialisation code on.
Container container = create((builder) -> {
    builder.define(MyObject.class)
        .set(MyObject::setA, "hello world")
        .initWith(MyObject::start);
});
MyObject myObject = container.get(MyObject.class);

In this example we added a start() method to our MyObject class. This is then passed to the ObjectDefinition as a Consumer via the initWith() method.

Yet Another Dependency Injection Container

All these techniques (and more) are included in the YADI Container, which stands for Yet Another Dependency Injection Container. The code is available on GitHub at https://github.com/jexenberger/yadi, and is licensed under an Apache License. Reference: Type safe dependency injection using Java 8.0 from our JCG partner Julian Exenberger at the Dot Neverland blog....

Codename One & Java Code Geeks are giving away free JavaOne Tickets (worth $3,300)!

Would you like to go to JavaOne? Besides being a cool conference, it’s loads of fun, with shows and events throughout the city.   Now you have a chance: Codename One and Java Code Geeks have teamed up to give away TWO full passes (worth $1,650 each), and to win all you need to do is:   – Retweet this tweet: https://twitter.com/javacodegeeks/statuses/483499905978490880 and/or – Share this Facebook post: https://www.facebook.com/javacodegeeks/posts/690259861009205   Finally, fill out the form below (note: only fill it out once, and don’t cheat or you will be disqualified!).   We will hold a random draw based on the rules listed in the form, then publish the results on July 15th 2014. Tickets will be handed out by the end of July.   Good Luck!     Update – 7/15/2014 – The Lucky Winners! The lucky winners of this giveaway are: Joey Kendall, from USA and Larry Melvin Lemons, from USA. A Codename One representative will contact the winners by email to provide their JavaOne tickets.   We would like to thank you all for participating in this giveaway. Till next time,   Keep up the good work! ...

How do I make testing faster?

Earlier this week I was the guest of a large bank in the City, OK Canary Wharf actually. They had their own little internal Agile conference. As well as myself, some of the usual suspects were on parade, as well as some internal speakers. It was very enjoyable and, as usual, the C-list speakers were some of the most interesting. Later in the day I found myself in conversation with two people concerned about software testing. They posed the question: “How do we make testing faster?” Specifically, how do we make SIT (“System Integration Testing”) and UAT (“User Acceptance Testing”) faster? None of these solutions are unique to this particular bank. Indeed, after listening to the question and situation I recalled several discussions I’d had, problems I’d seen and solutions I’d suggested before. (OK, bad consultant me, I should have enquired into the problem, understood the context fully, coached her to find her own solutions etc. etc. But we only had 30 minutes and I wasn’t being paid for a full consulting session – nor am I charging you for reading this! – so let’s not reinvent the wheel; let’s fill in with assumptions and give some general answers.) The solutions we came up with were:More testing environments: it is shocking how often I hear testers say they do not have enough test environments. They have a limited number of environments and they have to time-share or co-habit. There are two answers: either increase the number of test environments or reduce the number of tests and testers. I’m sorry I can’t give you a magic sentence that persuades the money people to buy you more environments. Now that we can virtualise machines there is very little excuse for not creating more test environments. The usual one is cost. But even here the costs are not that high. (Unless, that is, you need to buy a new mainframe.) Testers integrated & co-located with developers: if you want to go faster I’ve yet to find anything that beats face-to-face direct verbal communication.
If you want/need people distributed then you should accept you will not be as fast as you could be. I’ve said it before and I’ll keep saying it: Testers are first class citizens, treat them as you would developers. Automate SIT: simple really, automate the hell out of testing. Well, easy to say, harder to do, but far from impossible. Since most UAT/Beta testing is exploratory in nature it is largely SIT that needs automation. And please please please, don’t spend money on test tools. Make UAT just UAT: Recognise that some of what passes as UAT is actually SIT and automate it. In large corporates an awful lot of what is called “User Acceptance Testing” isn’t “User Testing” – although it may well be “Acceptance Testing.” Look carefully at what happens in this phase: that which is genuine Acceptance Testing can – indeed should – be specified in advance (of the coding), can be moved into the SIT phase and automated. The true User bit is going to be much more exploratory in nature. Improve development quality with TDD and do it test first: one reason for long SIT and UAT phases is that they find bugs. One way to significantly reduce the number of bugs is to have developers practice Test Driven Development, i.e. build in quality earlier. This may mean coding takes longer in the first place but it will speed up the wider process. TDD is still quite new to many developers, so you can speed up the process by giving them some help, specifically TDD coaching from someone who has done lots of this before. Until developers make the jump from “post-coding automated unit tests” to “pre-coding test first development” you will not maximise the benefits of TDD; you will have more code in your system than you need and you will have a poorer overall design, and both of those make for more tests later on and potentially more problems. When tests are written before code, TDD becomes a design activity. Reduce batch size: i.e. reduce the size of the release, and probably do more releases.
Do less, do a better job, push it out more often, reduce risk. Think dis-economies of scale. Many large corporations use a large batch size simply because any release is so painful. This is completely the wrong approach; rather than running scared of releases, take them head on. Done means done and tested in the iteration – at the end of the iteration it is releasable: when you reach the end of an iteration and declare work “Done”, that means “Done, ready for release (to customers).” Well, that should be your aim anyway, and you should be working towards that point. If “Done” currently means “Done development, ready for SIT” then you should be working towards bringing SIT inside the iteration. If that looks hard right now then look again at items 1 to 6 in this list. And when you’ve moved SIT inside the iteration, start on UAT. Until both those phases (and any other phases in the tail) are inside the iteration they can throw up problems (bugs) which will disrupt a future iteration, which implies the work you said was Done is not Done. Now the catch. Money. Some of these things require money to be spent. In a rational world that would not be a problem, because there is plenty of evidence that increased software quality (by which I mean specifically fewer bugs and higher maintainability) leads to shorter schedules and therefore reduced costs. And as a plus you will get more predictability. What’s not to like? So rationally it’s a case of spend money to save money. Quality is free, as Philip Crosby used to say. But unfortunately rational engineers often find it difficult to see rationality inside big corporations operating annual budgeting cycles. One of the recurring themes this week – although not one I think was given official time – was the constraints of the budgeting cycle. The rational world of the engineer and the rational world of the accountant and budgeteer seem a long way apart. There is a solution but it is a big solution which will scare many people.
It is Beyond Budgeting, and that is why I was so keen to invite Bjarte Bogsnes – the author of Implementing Beyond Budgeting – to talk about Beyond Budgeting at this year’s Agile on the Beach conference. And it’s probably why Bjarte recently tweeted “Invitations to speak about Beyond Budgeting at Agile and HR conferences are now as frequent as Finance conference invitations. Wonderful!” Reference: How do I make testing faster? from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....
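The test-first style recommended in the article can be illustrated with a minimal sketch. The DiscountCalculator class and its rule are hypothetical, and plain assertions stand in for a test framework so the example stays self-contained:

```java
// A test-first sketch: the assertions below were written before
// DiscountCalculator existed, and drove out both its API and its behaviour.
public class DiscountCalculatorTest {

    // Hypothetical production class, grown only to satisfy the tests.
    static class DiscountCalculator {
        // Orders of 100.0 or more get a 10% discount; smaller orders pay full price.
        double priceAfterDiscount(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
        }
    }

    public static void main(String[] args) {
        DiscountCalculator calc = new DiscountCalculator();
        // These expectations existed first; the implementation came second.
        assertEquals(90.0, calc.priceAfterDiscount(100.0));
        assertEquals(50.0, calc.priceAfterDiscount(50.0));
        System.out.println("All tests passed");
    }

    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}
```

Because the test defines the required behaviour before any production code exists, the class ends up no larger than the tests demand – which is exactly the "TDD as a design activity" point made above.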

So You Want to Use a Recruiter Part III – Warnings

This is the final installment in a three-part series to inform job seekers about working with a recruiter. Part I was “Recruit Your Recruiter” and Part II was “Establishing Boundaries”. In Part II, I alluded to systemic conditions inherent to contingency recruiting that can incentivize bad behavior. Before proceeding with warnings about recruiters, let’s provide some context as to why some recruiters behave the way they do. Agency recruiters (AKA “headhunters”) that conduct contingency searches account for most of the recruiting market and are subsequently the favorite target of recruiter criticism. These are recruiters that represent multiple hiring firms that pay the recruiter a fee ranging anywhere from 15-30% of the new employee’s salary. This seems like a great deal for the recruiter, but the downside of contingency recruiting is that the recruiter may spend substantial time on a search yet earn no money if they do not make the placement. Contingency recruiters absorb 100% of the “risk” for their searches by default, unlike retained recruiters who take no risk. Hiring companies can establish relationships with ten or twenty contingency firms to perform a search, with each agency helping expand the company’s name and employer brand, yet only one (and sometimes none) is compensated. When we combine large fees with a highly competitive, time-sensitive, demand-driven market, the actors in that market are incentivized to take shortcuts. Please don’t confuse these revelations as excuses for bad behavior. Recruiters who either do not understand or choose to ignore industry ethics make it much more difficult for those who do follow the rules. I provide these warnings to expose problems in a secretive industry, with hopes that sunlight will serve as disinfectant. Not all recruiters will act this way, but many will. Keep these things in mind when interacting with your recruiter.
Your recruiter may send your résumé places without your knowledge – To maximize the chances of getting a fee or to utilize your desirable background as bait to sign a prospective client, recruiters may shop you around without your consent. This only tends to cause issues when the recruiter sends your résumé somewhere that you are already interviewing. REMEDY: Insist that your recruiter only submits you with prior consent (in writing if that makes you feel more comfortable). Your recruiter may attempt to get you as many interviews as possible, with little consideration for fit – This sounds like a positive until you have burned all your vacation days and realize that over 50% of your interviews were a complete waste of time. This is the “throw it against the wall and see what sticks” mentality loathed by both candidates and employers. REMEDY: Perform due diligence and vet jobs before agreeing to interviews. If you reject a job offer, the recruiter may take questionable actions to get you to reconsider – No one can fault a recruiter for wanting to promote their client when a candidate is on the fence. That is part of the recruiter’s job. Recruiters cross the line when they knowingly provide false details about a job to allay a candidate’s fears. A recruiter may call a candidate’s home when the recruiter knows the candidate isn’t there in an attempt to speak to and get support from a spouse or significant other. The higher the potential fee, the more likely you are to see these tactics. REMEDY: If you have questions about an offer that don’t have simple answers, such as inquiries about career path or bonus expectation, get answers directly from the company representatives. When your decision is final, make that fact clear to your recruiter. If you accept a counteroffer, the recruiter will attempt to scare you - Counteroffers are the bane of the recruiter’s existence. 
Just as the recruiter starts counting their money, it’s swiped at the last possible moment – and just because the candidate changed their mind. Recruiting is a unique sales job, in that the hire can refuse the deal after all involved parties (employer, new employee, broker) have agreed to terms. Sales jobs in other industries don’t have that issue. When a counteroffer is accepted, expect some form of “recruiter terrorism“. In my opinion, this is perhaps the most shameful recruiter behavior. Recruiters have been known to tell candidates that their career is over, they will be out of a job in a few months, and that the decision will haunt them for many years to come. All of those things may be true in some isolated instances, but plenty of people have accepted counteroffers without ill effects. I’ve written about this before, as it’s important to understand the difference between the actual dangers of counteroffer acceptance and the recruiter’s biased perspective. REMEDY: Consider any counteroffer situation carefully and do your own research on the realities of counteroffer, while keeping in mind the source of any content you read. You will be asked for referrals, perhaps in creative ways – Recruiters are trained to ask everyone for referrals. This was much more important before the advent of LinkedIn and social media, when names were much more difficult to come by. Candidates may expect that recruiters will ask “Who is the best Python developer you know?”, but they may feel less threatened by a recruiter asking “Who is the worst Python developer you know?”.  Again, we shouldn’t blame recruiters for trying to expand their network, but if the recruiter continues to ask for names without providing any value it’s clearly not a balanced relationship. REMEDY: Give referrals (if any) that you are comfortable providing, and tell the recruiter that you’ll keep them in mind if any of your associates are looking for work in the future. Whether you act on that is up to you. 
If you list references they will be called (and perhaps recruited) – When a candidate lists references on a résumé, it’s an open invitation to recruit those people as well. If your references discover that you leaked their contact information indiscriminately to a slew of recruiters and that act resulted in their full inbox, don’t expect them to volunteer to serve as references in the future. REMEDY: Never list references on your résumé. Only provide references when necessary, and ask the references what contact information they would like presented to the recruiter. You will receive continuous recruiter contact for years to come, usually more often than you’d like - Once your information is out there, you can’t erase it. Don’t provide permanent contact details unless you are willing to field inquiries for the rest of your career. REMEDY: Use throwaway email addresses and set guidelines on future contact. Recruiters get paid when you take a job through them, regardless of whether it’s the best job choice for you – This is a simple fact that most candidates probably aren’t conscious of during the job search. There are three potential outcomes – you accept a job through the recruiter, you accept a job without using the recruiter, or you stay put. Only the first outcome results in a fee, so the recruiter has financial incentive first to convince you to leave and then to only consider their jobs. What type of behavior does this lead to? Recruiters may ask you where you are interviewing, where you have applied, and what other recruiters you are using. Some may refuse to work with you if you fail to provide this information. They may provide some explanation as to why this information is vital for them to know, but the reason is only the desire to know who they are competing against and to have some amount of control. The more detail you provide, the more ammunition the recruiter has to make a case for their client. 
REMEDY: Always consider a recruiter’s advice, but also consider their incentives. Provide information to recruiters on a need to know basis and only provide what will help them get you a job. Specifics about any other job search activity are private unless you choose to make it known. Recruiters have almost no incentive to provide feedback – Many job seekers wonder why agency recruiters often don’t provide feedback after a failed interview. Of my 60+ articles on Job Tips For Geeks the most popular (based on traffic coming from search engines) is “Why The Recruiter Didn’t Call You Back“, so it’s clear to me that this is a bothersome trend. Once it becomes clear that you will not result in a fee, your value to the recruiter is primarily limited to the possibility of a future placement or a source for referrals. Interview feedback is valuable to candidates, and job seekers that commit to interviews deserve some explanation as to why they were not selected for hire. REMEDY: Set the expectation with the recruiter that you will be interested in client feedback, and ask for specific feedback after interviews are complete.Reference: So You Want to Use a Recruiter Part III – Warnings from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

How to use Salesforce REST API with your JavaServer Pages

Abstract: This tutorial gives an example of a JSP and how to integrate it with the Salesforce REST API. We will walk through the step-by-step process of creating an external client to manage your data with Force.com, while using HTTP(S) and JSON. In this example, I am using Mac OS X 10.9.2 with Apache Tomcat 7 server and Java 1.7. Eclipse Java EE edition is the IDE used for development and testing. The instructions given in this tutorial should work with minor modifications for other platforms as well. If you want the entire sample code from this tutorial, you can access it here: github.com/seethaa/force_rest_example All code is updated to work with the httpclient 4.3 libraries. What Is REST? REST stands for Representational State Transfer, and is a stateless client-server communications protocol over HTTP. Why and When To Use A REST API in Java for Your JSP A REST API is well suited for browser applications which require a lot of interaction, and uses synchronous communication to transfer data. The Salesforce REST API provides a programming interface for simple web services to interact with Force.com, and supports both XML and JSON formats. The Salesforce REST API works well for mobile applications or dynamic websites to retrieve or update records quickly on your web server. While bulk record retrieval should be reserved for the Bulk API, this lightweight REST API can be used for common server pages which involve quick updates and frequent user interactions, for example updating a single user record. Setting Up Your Development Account and Prerequisites You will need the following:Go to https://developer.salesforce.com/signup and register for your free DE account. For the purposes of this example, I recommend signing up for a Developer Edition even if you already have an account. This ensures you get a clean environment with the latest features enabled. Java application server. I created mine using Apache Tomcat 7 on Mac OS X and Eclipse as the IDE.
There is also a free Eclipse plugin at http://developer.salesforce.com/page/Force.com_IDE but the original Eclipse setup was used in this tutorial. Configure SSL on your Tomcat server using http://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html. If you are developing in Eclipse, make sure to add the Connector piece in the server.xml file in your Eclipse environment, e.g.: <Connector SSLEnabled="true" clientAuth="false" keystoreFile="/Users/seetha/.keystore" keystorePass="password" maxThreads="200" port="8443" protocol="HTTP/1.1" scheme="https" secure="true" sslProtocol="TLS"/>Add the required jar files to WebContent/WEB-INF/lib. You will need commons-codec-1.6.jar, httpclient-4.3.3.jar, httpcore-4.3.2.jar, commons-logging-1.1.3.jar, and java-json.jar. For Eclipse, I also had to make sure that all jars were added to the build path (Right click Project → Build Path → Configure build path → Select Libraries tab → Click Add Jars → Select the Jar files from the WEB-INF/lib folder). Create a Connected App Back in your Force.com DE, create a new Connected App through the console. Click on Setup → Build → Create → Apps. Scroll down to the Connected Apps section and click on the New button.Ensure that the callback URL is http://localhost:8080/<your_app_context_path>/oauth/_callback (You can find the app context path by going back to Eclipse: Right clicking on Project → Properties → Web Project Settings → Context root) Check the “Enable OAuth Settings” checkbox The required OAuth scopes for this tutorial (see Figure 1) are “Access and manage your data (api)” and “Provide access to your data via the Web (web)”, but these scopes should be changed as per your requirements. Save. Copy the Client ID and Client Secret (see Figure 2), because both of these will be used in the next step.  
Authentication There are three files that need to be imported into your JSP project, given below: index.html <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>REST/OAuth Example</title> </head> <body> <script type="text/javascript" language="javascript"> if (location.protocol != "https:") { document.write("OAuth will not work correctly from plain http. "+ "Please use an https URL."); } else { document.write("<a href=\"oauth\">Run Connected App demo via REST/OAuth.</a>"); } </script> </body> </html> OAuthConnectedApp.java import java.io.IOException; import java.io.InputStream; import java.io.UnsupportedEncodingException; import java.net.URLEncoder; import java.util.ArrayList; import java.util.List;import javax.servlet.ServletException; import javax.servlet.annotation.WebInitParam; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse;import org.apache.http.Consts; import org.apache.http.HttpEntity; import org.apache.http.NameValuePair; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; import org.apache.http.message.BasicNameValuePair;import org.json.JSONException; import org.json.JSONObject; import org.json.JSONTokener;@WebServlet(name = "oauth", urlPatterns = { "/oauth/*", "/oauth" }, initParams = { // clientId is 'Consumer Key' in the Remote Access UI //**Update with your own Client ID @WebInitParam(name = "clientId", value = "3MVG9JZ_r.QzrS7jzujCYrebr8kajDEcjXQLXnV9nGU6PaxOjuOi_n8EcUf0Ix9qqk1lYCa4_Jaq7mpqxi2YT"), // clientSecret is 'Consumer Secret' in the Remote Access UI //**Update 
with your own Client Secret @WebInitParam(name = "clientSecret", value = "2307033558641049067"), // This must be identical to 'Callback URL' in the Remote Access UI //**Update with your own URI @WebInitParam(name = "redirectUri", value = "http://localhost:8080/force_rest_example/oauth/_callback"), @WebInitParam(name = "environment", value = "https://login.salesforce.com"), })/** * Servlet parameters * @author seetha * */ public class OAuthConnectedApp extends HttpServlet {private static final long serialVersionUID = 1L;private static final String ACCESS_TOKEN = "ACCESS_TOKEN"; private static final String INSTANCE_URL = "INSTANCE_URL";private String clientId = null; private String clientSecret = null; private String redirectUri = null; private String environment = null; private String authUrl = null; private String tokenUrl = null; public void init() throws ServletException { clientId = this.getInitParameter("clientId"); clientSecret = this.getInitParameter("clientSecret"); redirectUri = this.getInitParameter("redirectUri"); environment = this.getInitParameter("environment");try {authUrl = environment + "/services/oauth2/authorize?response_type=code&client_id=" + clientId + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8"); } catch (UnsupportedEncodingException e) { throw new ServletException(e); }tokenUrl = environment + "/services/oauth2/token"; }protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String accessToken = (String) request.getSession().getAttribute(ACCESS_TOKEN);//System.out.println("calling doget"); if (accessToken == null) { String instanceUrl = null;if (request.getRequestURI().endsWith("oauth")) { // we need to send the user to authorize response.sendRedirect(authUrl); return; } else { System.out.println("Auth successful - got callback"); String code = request.getParameter("code");// Create an instance of HttpClient. 
CloseableHttpClient httpclient = HttpClients.createDefault();try{ // Create an instance of HttpPost. HttpPost httpost = new HttpPost(tokenUrl);// Adding all form parameters in a List of type NameValuePair List<NameValuePair> nvps = new ArrayList<NameValuePair>(); nvps.add(new BasicNameValuePair("code", code)); nvps.add(new BasicNameValuePair("grant_type","authorization_code")); nvps.add(new BasicNameValuePair("client_id", clientId)); nvps.add(new BasicNameValuePair("client_secret", clientSecret)); nvps.add(new BasicNameValuePair("redirect_uri", redirectUri));httpost.setEntity(new UrlEncodedFormEntity(nvps, Consts.UTF_8)); // Execute the request. CloseableHttpResponse closeableresponse=httpclient.execute(httpost); System.out.println("Response Statusline:"+closeableresponse.getStatusLine());try { // Do the needful with entity. HttpEntity entity = closeableresponse.getEntity(); InputStream rstream = entity.getContent(); JSONObject authResponse = new JSONObject(new JSONTokener(rstream));accessToken = authResponse.getString("access_token"); instanceUrl = authResponse.getString("instance_url");} catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } finally { // Closing the response closeableresponse.close(); } } finally { httpclient.close(); }}// Set a session attribute so that other servlets can get the access token request.getSession().setAttribute(ACCESS_TOKEN, accessToken);// We also get the instance URL from the OAuth response, so set it in the session too request.getSession().setAttribute(INSTANCE_URL, instanceUrl); }response.sendRedirect(request.getContextPath() + "/ConnectedAppREST"); } }ConnectedAppREST.java import java.io.IOException; import java.io.InputStream; import java.io.PrintWriter; import java.net.URISyntaxException; import java.util.Iterator;import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import 
javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse;import org.apache.http.HttpEntity; import org.apache.http.HttpStatus; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpDelete; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.utils.URIBuilder; import org.apache.http.entity.ContentType; import org.apache.http.entity.StringEntity; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients;import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import org.json.JSONTokener;@WebServlet(urlPatterns = { "/ConnectedAppREST" }) /** * Demo for Connect App/REST API * @author seetha * */ public class ConnectedAppREST extends HttpServlet {private static final long serialVersionUID = 1L; private static final String ACCESS_TOKEN = "ACCESS_TOKEN"; private static final String INSTANCE_URL = "INSTANCE_URL"; private void showAccounts(String instanceUrl, String accessToken, PrintWriter writer) throws ServletException, IOException { CloseableHttpClient httpclient = HttpClients.createDefault();HttpGet httpGet = new HttpGet();//add key and value httpGet.addHeader("Authorization", "OAuth " + accessToken);try { URIBuilder builder = new URIBuilder(instanceUrl+ "/services/data/v30.0/query"); builder.setParameter("q", "SELECT Name, Id from Account LIMIT 100");httpGet.setURI(builder.build());CloseableHttpResponse closeableresponse = httpclient.execute(httpGet); System.out.println("Response Status line :" + closeableresponse.getStatusLine()); if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK) { // Now lets use the standard java json classes to work with the results try { // Do the needful with entity. 
HttpEntity entity = closeableresponse.getEntity(); InputStream rstream = entity.getContent(); JSONObject authResponse = new JSONObject(new JSONTokener(rstream));System.out.println("Query response: " + authResponse.toString(2));writer.write(authResponse.getInt("totalSize") + " record(s) returned\n\n");JSONArray results = authResponse.getJSONArray("records"); for (int i = 0; i < results.length(); i++) { writer.write(results.getJSONObject(i).getString("Id") + ", " + results.getJSONObject(i).getString("Name") + "\n"); } writer.write("\n"); } catch (JSONException e) { e.printStackTrace(); throw new ServletException(e); } } } catch (URISyntaxException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } finally { httpclient.close(); } } private String createAccount(String name, String instanceUrl, String accessToken, PrintWriter writer) throws ServletException, IOException { String accountId = null; CloseableHttpClient httpclient = HttpClients.createDefault(); JSONObject account = new JSONObject(); try { account.put("Name", name); } catch (JSONException e) { e.printStackTrace(); throw new ServletException(e); } HttpPost httpost = new HttpPost(instanceUrl+ "/services/data/v30.0/sobjects/Account/");httpost.addHeader("Authorization", "OAuth " + accessToken);StringEntity messageEntity = new StringEntity( account.toString(), ContentType.create("application/json"));httpost.setEntity(messageEntity);// Execute the request. CloseableHttpResponse closeableresponse = httpclient.execute(httpost); System.out.println("Response Status line :" + closeableresponse.getStatusLine()); try { writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode() + " creating account\n\n");if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_CREATED) { try { // Do the needful with entity. 
HttpEntity entity = closeableresponse.getEntity(); InputStream rstream = entity.getContent(); JSONObject authResponse = new JSONObject(new JSONTokener(rstream)); System.out.println("Create response: " + authResponse.toString(2));if (authResponse.getBoolean("success")) { accountId = authResponse.getString("id"); writer.write("New record id " + accountId + "\n\n"); } } catch (JSONException e) { e.printStackTrace(); // throw new ServletException(e); } } } finally { httpclient.close(); }return accountId; } private void showAccount(String accountId, String instanceUrl, String accessToken, PrintWriter writer) throws ServletException, IOException {CloseableHttpClient httpclient = HttpClients.createDefault(); HttpGet httpGet = new HttpGet(); //add key and value httpGet.addHeader("Authorization", "OAuth " + accessToken); try { URIBuilder builder = new URIBuilder(instanceUrl + "/services/data/v30.0/sobjects/Account/" + accountId);httpGet.setURI(builder.build());//httpclient.execute(httpGet);CloseableHttpResponse closeableresponse = httpclient.execute(httpGet); System.out.println("Response Status line :" + closeableresponse.getStatusLine()); if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {try { // Do the needful with entity. HttpEntity entity = closeableresponse.getEntity(); InputStream rstream = entity.getContent(); JSONObject authResponse = new JSONObject(new JSONTokener(rstream)); System.out.println("Query response: " + authResponse.toString(2)); writer.write("Account content\n\n"); Iterator iterator = authResponse.keys();while (iterator.hasNext()) { String key = (String) iterator.next();Object obj = authResponse.get(key); String value = null; if (obj instanceof String) { value = (String) obj; }writer.write(key + ":" + (value != null ? 
value : "") + "\n"); }writer.write("\n"); } catch (JSONException e) { e.printStackTrace(); throw new ServletException(e); } } } catch (URISyntaxException e1) { // TODO Auto­generated catch block e1.printStackTrace(); } finally { httpclient.close(); } } private void updateAccount(String accountId, String newName, String city, String instanceUrl, String accessToken, PrintWriter writer) throws ServletException, IOException { CloseableHttpClient httpclient = HttpClients.createDefault();JSONObject update = new JSONObject(); try { update.put("Name", newName); update.put("BillingCity", city); } catch (JSONException e) { e.printStackTrace(); throw new ServletException(e); } HttpPost httpost = new HttpPost(instanceUrl + "/services/data/v30.0/sobjects/Account/" +accountId+"?_HttpMethod=PATCH"); httpost.addHeader("Authorization", "OAuth " + accessToken); StringEntity messageEntity = new StringEntity( update.toString(), ContentType.create("application/json"));httpost.setEntity(messageEntity); // Execute the request. CloseableHttpResponse closeableresponse = httpclient.execute(httpost); System.out.println("Response Status line :" + closeableresponse.getStatusLine()); try { writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode() + " updating account " + accountId + "\n\n"); } finally { httpclient.close(); } } private void deleteAccount(String accountId, String instanceUrl, String accessToken, PrintWriter writer) throws IOException {CloseableHttpClient httpclient = HttpClients.createDefault();HttpDelete delete = new HttpDelete(instanceUrl + "/services/data/v30.0/sobjects/Account/" + accountId);delete.setHeader("Authorization", "OAuth " + accessToken);// Execute the request. 
    CloseableHttpResponse closeableresponse = httpclient.execute(delete);
    System.out.println("Response Status line :" + closeableresponse.getStatusLine());
    try {
        writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode()
                + " deleting account " + accountId + "\n\n");
    } finally {
        delete.releaseConnection();
    }
}

/**
 * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
 */
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    PrintWriter writer = response.getWriter();
    String accessToken = (String) request.getSession().getAttribute(ACCESS_TOKEN);
    String instanceUrl = (String) request.getSession().getAttribute(INSTANCE_URL);
    if (accessToken == null) {
        writer.write("Error - no access token");
        return;
    }
    writer.write("We have an access token: " + accessToken + "\n"
            + "Using instance " + instanceUrl + "\n\n");
    showAccounts(instanceUrl, accessToken, writer);
    String accountId = createAccount("My New Org", instanceUrl, accessToken, writer);
    if (accountId == null) {
        System.out.println("Account ID null");
    }
    showAccount(accountId, instanceUrl, accessToken, writer);
    showAccounts(instanceUrl, accessToken, writer);
    updateAccount(accountId, "My New Org, Inc", "San Francisco", instanceUrl, accessToken, writer);
    showAccount(accountId, instanceUrl, accessToken, writer);
    deleteAccount(accountId, instanceUrl, accessToken, writer);
    showAccounts(instanceUrl, accessToken, writer);
}
}

Change OAuthConnectedApp.java to replace the Client ID, Client Secret, and Callback URI fields based on the Connected App configuration. Start the Tomcat server in Eclipse (see Figure 3) or externally, and navigate to https://localhost:8443/<your_app_context_path>/ Clicking on the link (see Figure 4) will not work unless the request goes over HTTPS, so an SSL endpoint must be configured for Tomcat. If all configurations were done properly, you should see a salesforce.com login screen (see Figure 5).
Go ahead and log in with your salesforce.com credentials to authorize your web application to access resources. Logging in will allow the ConnectedAppREST demo to execute methods to create, show, update, and delete records (see Figure 6).

Tips & Warnings

Make sure you have a Developer Edition (DE) account, because there are slight differences between the Professional, Enterprise, and Developer editions. The Developer Edition is free and does not expire (unless unused for a year).
The callback URL in OAuthConnectedApp.java must be the same as the URL added to the connected app.
If you get an HTTP 403 error, the resource you are requesting is "Forbidden". Check that the username/account you are accessing with has the appropriate permissions.
Make sure index.html is directly under the WebContent directory.

Resources

For a comprehensive set of resources, check out: http://developer.salesforce.com/en/mobile/resources

References

Force.com REST API Developer's Guide (PDF)
Using the Force.com REST API...

Benchmarking SQS

SQS, Amazon's Simple Queue Service, is a message-queue-as-a-service offering from Amazon Web Services. It supports only a handful of messaging operations, far from the complexity of e.g. AMQP, but thanks to its easy-to-understand interfaces and its as-a-service nature, it is very useful in a number of situations. But how fast is SQS? How does it scale? Is it useful only for low-volume messaging, or can it be used for high-load applications as well?

If you know how SQS works and want to skip the details of the testing methodology, you can jump straight to the test results.

SQS semantics

SQS exposes an HTTP-based interface. To access it, you need AWS credentials to sign the requests, but that's usually done by a client library (there are libraries for most popular languages; we'll use the official Java SDK). The basic message-related operations are:

- send a message, up to 256 KB in size, encoded as a string. Messages can be sent in bulks of up to 10 (but the total size is capped at 256 KB)
- receive a message. Up to 10 messages can be received in bulk, if available in the queue
- long-poll for messages. The request will wait up to 20 seconds for messages, if none are available initially
- delete a message

There are also some other operations, concerning security, delaying message delivery, and changing a message's visibility timeout, but we won't use them in the tests.

SQS offers an at-least-once delivery guarantee. If a message is received, it is blocked for a period called the "visibility timeout". Unless the message is deleted within that period, it will become available for delivery again. Hence if a node processing a message crashes, the message will be delivered again. However, we also run the risk of processing a message twice (if e.g. the network connection dies while deleting the message, or if an SQS server dies), which we have to manage on the application side.
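The at-least-once semantics described above can be sketched as a toy model. This is not the SQS API — the class and method names below are purely illustrative — but it captures how the visibility timeout turns a crash (no delete) into a redelivery:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model of SQS at-least-once delivery: a received message is hidden
// for a visibility timeout; unless it is deleted in time, it becomes
// receivable again. Time is passed in explicitly to keep the model testable.
class ToyQueue {
    private final Deque<String> available = new ArrayDeque<>();
    private final Map<String, Long> inFlight = new HashMap<>(); // body -> deadline
    private final long visibilityTimeoutMs;

    ToyQueue(long visibilityTimeoutMs) {
        this.visibilityTimeoutMs = visibilityTimeoutMs;
    }

    void send(String body) {
        available.addLast(body);
    }

    // Receiving hides the message until its visibility deadline.
    String receive(long nowMs) {
        // Expired in-flight messages go back to the queue (redelivery).
        inFlight.entrySet().removeIf(e -> {
            if (nowMs >= e.getValue()) {
                available.addLast(e.getKey());
                return true;
            }
            return false;
        });
        String msg = available.pollFirst();
        if (msg != null) {
            inFlight.put(msg, nowMs + visibilityTimeoutMs);
        }
        return msg; // null if nothing is currently visible
    }

    // Deleting within the timeout prevents redelivery.
    boolean delete(String body) {
        return inFlight.remove(body) != null;
    }
}
```

A consumer that crashes before calling `delete` simply lets the deadline pass, and the next `receive` sees the message again — which is why duplicate handling belongs in the application.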
SQS is a replicated message queue, so you can be sure that once a message is sent, it is safe and will be delivered; quoting from the website:

Amazon SQS runs within Amazon's high-availability data centers, so queues will be available whenever applications need them. To prevent messages from being lost or becoming unavailable, all messages are stored redundantly across multiple servers and data centers.

Testing methodology

To test how fast SQS is and how it scales, we will be running various numbers of nodes, each running various numbers of threads either sending or receiving simple, 100-byte messages.

Each sending node is parametrised with the number of messages to send, and it tries to do so as fast as possible. Messages are sent in bulk, with bulk sizes chosen randomly between 1 and 10. Message sends are synchronous, that is, we want to be sure that the request completed successfully before sending the next bulk. At the end the node reports the average number of messages per second that were sent.

The receiving node receives messages in maximum bulks of 10. The AmazonSQSBufferedAsyncClient is used, which pre-fetches messages to speed up delivery. After receiving, each message is asynchronously deleted. The node assumes that testing is complete once it didn't receive any messages within a minute, and reports the average number of messages per second that it received.

Each test sends from 10 000 to 50 000 messages per thread, so the tests are relatively short, 2-5 minutes. There are also longer tests, which last about 15 minutes.

The full (but still short) code is here: Sender, Receiver, SqsMq. One set of nodes runs the MqSender code, the other runs the MqReceiver code.
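The bulk-size constraints the sender has to respect (at most 10 messages per batch, at most 256 KB total) can be sketched as a small helper. This is an illustration of the constraint, not the benchmark code linked above — the class name and limits are taken from the article's description, not from a specific SDK call:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Splits a list of message bodies into SQS-legal send batches:
// at most MAX_BATCH messages per batch, at most MAX_BYTES in total.
class SqsBatcher {
    static final int MAX_BATCH = 10;
    static final int MAX_BYTES = 256 * 1024;

    static List<List<String>> batches(List<String> messages) {
        List<List<String>> out = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int bytes = 0;
        for (String m : messages) {
            int size = m.getBytes(StandardCharsets.UTF_8).length;
            // Flush the current batch if adding this message would
            // exceed either the count or the size limit.
            if (!current.isEmpty()
                    && (current.size() == MAX_BATCH || bytes + size > MAX_BYTES)) {
                out.add(current);
                current = new ArrayList<>();
                bytes = 0;
            }
            current.add(m);
            bytes += size;
        }
        if (!current.isEmpty()) {
            out.add(current);
        }
        return out;
    }
}
```

With the 100-byte messages used in the tests, the size cap never bites, so batches fill up to 10 messages each — which is why the benchmark can pick bulk sizes of 1-10 freely.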
The sending and receiving nodes are m3.large EC2 servers in the eu-west region, hence with the following parameters:

- 2 cores (Intel Xeon E5-2670 v2)
- 7.5 GB RAM

The queue is of course also created in the eu-west region.

Minimal setup

The minimal setup consists of 1 sending node and 1 receiving node, both running a single thread. The results, in messages/second:

           average   min   max
sender     429       365   466
receiver   427       363   463

Scaling threads

How do these results scale when we add more threads (still using one sender and one receiver node)? The tests were run with 1, 5, 25, 50 and 75 threads. The numbers are an average msg/second throughput.

number of threads:     1        5         25        50         75
sender per thread      429,33   407,35    354,15    289,88     193,71
sender total           429,33   2 036,76  8 853,75  14 493,83  14 528,25
receiver per thread    427,86   381,55    166,38    83,92      47,46
receiver total         427,86   1 907,76  4 159,50  4 196,17   3 559,50

As you can see, on the sender side we get near-to-linear scalability as the number of threads increases, peaking at 14k msgs/second sent (on a single node!) with 50 threads. Going any further doesn't seem to make a difference.

The receiving side is slower, and that is to be expected, as receiving a single message is in fact two operations: receive + delete, while sending is a single operation. The scalability is worse, but we can still get as much as 4k msgs/second received.

Scaling nodes

Another (more promising) method of scaling is adding nodes, which is quite easy as we are "in the cloud".
The test results when running multiple nodes, each running a single thread:

number of nodes:    1        2        4         8
sender per node     429,33   370,36   350,30    337,84
sender total        429,33   740,71   1 401,19  2 702,75
receiver per node   427,86   360,60   329,54    306,40
receiver total      427,86   721,19   1 318,15  2 451,23

In this case, both on the sending and the receiving side, we get near-linear scalability, reaching 2.5k messages sent and received per second with 8 nodes.

Scaling nodes and threads

The natural next step is, of course, to scale up both the nodes and the threads! Here are the results when using 25 threads on each node:

number of nodes:           1         2          4          8
sender per node&thread     354,15    338,52     305,03     317,33
sender total               8 853,75  16 925,83  30 503,33  63 466,00
receiver per node&thread   166,38    159,13     170,09     174,26
receiver total             4 159,50  7 956,33   17 008,67  34 851,33

Again, we get great scalability results, with the number of receive operations about half the number of send operations per second. 34k msgs/second processed is a very nice number!

To the extreme

The highest results I managed to get are:

- 108k msgs/second sent when using 50 threads and 8 nodes
- 35k msgs/second received when using 25 threads and 8 nodes

I also tried running longer "stress" tests with 200k messages/thread, 8 nodes and 25 threads, and the results were the same as with the shorter tests.

Running the tests – technically

To run the tests, I built Docker images containing the Sender/Receiver binaries, pushed them to Docker Hub, and downloaded them onto the nodes with Chef. To provision the servers, I used Amazon OpsWorks. This enabled me to quickly spin up and provision a lot of nodes for testing (up to 16 in the above tests). For details on how this works, see my "Cluster-wide Java/Scala application deployments with Docker, Chef and Amazon OpsWorks" blog post.

The Sender/Receiver daemons monitored (by checking the last-modification date each second) a file on S3.
If a modification was detected, the file – which contained the test parameters – was downloaded, and the test started.

Summing up

SQS has good performance and really great scalability characteristics. I wasn't able to reach the peak of its capabilities – which would probably require more than 16 nodes in total. But once your requirements get above 35k messages per second, chances are you need a custom solution anyway; not to mention that while SQS is cheap, it may become expensive at such loads.

From the results above, I think it is clear that SQS can be safely used for high-volume messaging applications and scaled on demand. Together with its reliability guarantees, it is a great fit for both small and large applications which do any kind of asynchronous processing, especially if your service already resides in the Amazon cloud.

As benchmarking isn't easy, any remarks on the testing methodology or ideas on how to improve the testing code are welcome!

Reference: Benchmarking SQS from our JCG partner Adam Warski at the Blog of Adam Warski blog....

Open Source Projects – Between accepting and rejecting pull requests

Lately I have done a lot of work on the sbt-native-packager project. Being a committer comes with a lot of responsibilities. You are responsible for the code quality, supporting your community, encouraging people to contribute to your project, and of course providing an awesome open source product.

Most open source committers probably start out as contributors, providing pull requests that fix bugs or add new features. From this side it looks rather simple: the project maintainer probably knows his/her domain and the code well enough to make a good judgement. Right?

This is not always the case. The bigger a project gets, the smaller the chance that one contributor alone can merge your pull requests. However, there's a lot you can do to make things easier! I'm really glad a lot of contributors already do many of these things, but I wanted to write down my experience.

Provide tests

This is obvious, right? However, tests are so much more than just proof that something works or is fixed. Tests are like documentation for the maintainers. They can see how the new feature works or what caused the bug. Furthermore, tests give the maintainer confidence to work on the feature or bug fix himself, as there's already a test which checks his work.

Provide documentation

If you add a new feature, add minimal documentation as well. A few sentences on what it does, how to use it, and why to use it are enough. It makes life a lot easier for maintainers judging your pull request, because they can try it out very easily themselves without going through all of your code first.

Be ready for changes

Maintaining a healthy code base with a lot of contributors is a challenge. So if you decide to contribute to an open source project, try to stick to the style which is already applied in the repository. This applies from the high abstraction levels down to the low-level code.
And if you don't, be prepared to change your code, as the maintainers have to make sure the code can be easily understood by everybody else. Sometimes it's hard not to take this personally, and we try to be very polite. However, sometimes corrections are necessary. There's an easy way to avoid all of this…

Small commits, early pull requests

Start small and ask early. Write comments in your code, and use the awesome tooling most code hosting sites provide, like discussions or in-code comments. Providing a base for discussion is, IMHO, the best way to get things done. You can discuss what's good and bad, and whether the approach is correct or not. You avoid a lot of work which might not be useful or is out of scope, and the maintainers don't have to feel bad about rejecting a lot of work.

Tell us more!

A lot of open source projects were created for a specific need, but the nature of an open source project sometimes leads to an extension of that specific need, and you add more features. Tell us what you do with the project! The maintainers (hopefully) love their project and are amazed by the things you can do with it. Write blog posts, tweets, or Stack Overflow discussions to show your case.

Reference: Open Source Projects – Between accepting and rejecting pull requests from our JCG partner Nepomuk Seiler at the mukis.de blog....

So You Want to Use a Recruiter Part II – Establishing Boundaries

This is the second in a three-part series to inform job seekers about working with a recruiter. Part I was "Recruit Your Recruiter" and Part III is "Warnings".

Once you have identified the recruiter(s) you are going to use in your job search, it is ideal to immediately gather information from the recruiter (and provide some instructions to the recruiter) so expectations and boundaries are properly set. All recruiters are not alike, with significant variation in protocol, style, and even the recruiter's incentives. The stakes are high for job seekers who entrust someone to assist with their career, but it's important to keep in mind that a recruiter stands to earn a sizable amount when making a placement. For contingency agency recruiters, who make up the majority of the market, the combination of large fees and competition can incentivize bad behavior. More on this in Part III.

As a recruiter, I find that transparency helps gain trust and is necessary to establish an effective professional relationship. Candidates should realize that I have a business and profit motive, but I also want my candidates to understand my specific incentives so they can consider those incentives during our interactions. The negative reputation of agency recruiters makes this transparency necessary, and honest recruiters should have nothing to hide. Some recruiters will be more open than others, and the recruiter's willingness to share information can and should be used as a potential indicator of the recruiter's interests. A recruiter must be able to articulate their own incentives, and be willing to justify situations where full transparency is not provided.

To establish boundaries and set expectations, there are several topics that need to be addressed.

What you need to know

The recruiter's experience – Hopefully you vetted your recruiter before contact, but now is the time to verify anything that you may have read. Confirm any claimed specialties.
How the recruiter is paid for any given client – Whether or not recruiters should reveal their fee percentages is debatable, but job seekers certainly have the right to know how fees are calculated. Why is this important? Some fees may be based on base salary only, while other agreements may stipulate that a fee includes bonuses or stock grants. If the recruiter is providing advice in negotiation, it's helpful to know which parts of the compensation package impact the recruiter's potential fee. Keep in mind that recruiters often have customized agreements with their clients. When a recruiter is representing you to multiple opportunities, it's absolutely necessary for you to be made aware of each client's fee structure. If you sense that your recruiter is pushing you towards accepting an offer from Company A and discouraging you from a higher offer with Company B, knowing who pays the recruiter more helps temper the advice.

The recruiter's relationship with any given client – Did the recruiter just sign this client last week, or do they have a ten-year history of working together? Has the recruiter worked with certain employees of the client in the past? This information is primarily useful when considering a recruiter's advice on hiring process and negotiation, as the recruiter's familiarity (or lack thereof) could be a contributing factor to getting an offer and closing the deal. The recruiter should also be willing to share whether the client is a contingency search or retained (some fee paid in advance). This information has little impact on incentives, but clients do have a vested interest in hiring from a recruiter on retainer, as they already have some skin in the game.

As much detail as possible on any given job being pitched – Some candidates are satisfied with only knowing a job title, while others want to know whether a company has a tendency to hire executives from outside or within.
Recruiters will have some specific details, but candidates should expect to perform a bit of due diligence as well. If there are certain deal breakers regarding your job search (maybe tuition reimbursement is a requirement for you), it's the candidate's responsibility to convey those conditions and the recruiter's responsibility to clear them up before starting the process.

What you need to express

How and when to contact – If you share all your contact information with a recruiter without instruction, many recruiters will assume they have full access. Recruiters want to establish a solid relationship and may feel the best way to do that is through extensive live contact. An inordinate number of calls to your mobile phone during office hours could tip off managers to your search, which may even benefit the recruiter's efforts to place you. Set guidelines on both the method and the times acceptable for contact.

No changes to the résumé without consent – I hear this complaint often, and the solution for many is a PDF. The most common change made is the addition of the recruiter's contact info and maybe a logo. This is harmless, and designed to ensure that the recruiter gets their fee if the résumé is found three months later and the candidate is hired. There are many anecdotes about recruiters adding or subtracting details from a résumé, which is a different story. It's entirely unethical for a recruiter to insert skills or buzzwords without consent.

No résumés submitted without permission – To prevent a host of potential issues, be explicit about this. A recruiter who is not given this directive may feel they have carte blanche and might submit your résumé to a company you are already interviewing with, a former boss you didn't like, or any number of places you don't want your résumé going.

Need to provide client names before submittal – See above.
There are somewhat unique scenarios where companies request anonymity before they establish interest in a candidate, but these are extremely rare cases. It is not only important to know that your résumé is being sent out, but also where it is going.

Only want to be pitched jobs that meet your criteria – This is more about saving time than anything else, but contingency recruiters playing the numbers game may try to maximize their chances of making a fee on you by submitting you to every client in their portfolio. The result is wasteful interviews for jobs that you are unqualified for or that you would never have accepted in the first place. Recruiters aren't mind readers, so you'll need to be specific. If you are limiting your search to specific locations and types of jobs, establish those parameters early and ask to be informed only about jobs that fit.

Expectation of feedback, preferably actionable – One of the biggest complaints about recruiters is that they suddenly disappear after telling you about a job or sending you on an interview. There are multiple reasons for this, some understandable and others less so. Asking the recruiter when you should expect to hear feedback, and sending prompt emails after interviews, should help you gather valuable information about what you are doing well and where you could use some work. Recruiters don't want to hurt a candidate's feelings and may filter their feedback, but the raw information is more useful and often actionable. Ask for a low level of filtering.

Reference: So You Want to Use a Recruiter Part II – Establishing Boundaries from our JCG partner Dave Fecak at the Job Tips For Geeks blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.