So You Want to Use a Recruiter Part III – Warnings

This is the final installment in a three-part series to inform job seekers about working with a recruiter. Part I was “Recruit Your Recruiter” and Part II was “Establishing Boundaries”. In Part II, I alluded to systemic conditions inherent to contingency recruiting that can incentivize bad behavior. Before proceeding with warnings about recruiters, let’s provide some context as to why some recruiters behave the way they do. Agency recruiters (AKA “headhunters”) who conduct contingency searches account for most of the recruiting market and are subsequently the favorite target of recruiter criticism. These are recruiters who represent multiple hiring firms, each of which pays the recruiter a fee ranging anywhere from 15-30% of the new employee’s salary. This seems like a great deal for the recruiter, but the downside of contingency recruiting is that the recruiter may spend substantial time on a search yet earn no money if they do not make the placement. Contingency recruiters absorb 100% of the “risk” for their searches by default, unlike retained recruiters, who are paid regardless of outcome. Hiring companies can establish relationships with ten or twenty contingency firms to perform a search, with each agency helping expand the company’s name and employer brand, yet only one (and sometimes none) is compensated. When we combine large fees with a highly competitive, time-sensitive, demand-driven market, the actors in that market are incentivized to take shortcuts. Please don’t mistake these revelations for excuses for bad behavior. Recruiters who either do not understand or choose to ignore industry ethics make it much more difficult for those who do follow the rules. I provide these warnings to expose problems in a secretive industry, with hopes that sunlight will serve as disinfectant. Not all recruiters will act this way; many will. Keep these things in mind when interacting with your recruiter.
Your recruiter may send your résumé places without your knowledge – To maximize the chances of earning a fee, or to use your desirable background as bait to sign a prospective client, recruiters may shop you around without your consent. This tends to cause issues only when the recruiter sends your résumé somewhere you are already interviewing. REMEDY: Insist that your recruiter submit you only with your prior consent (in writing, if that makes you more comfortable).

Your recruiter may attempt to get you as many interviews as possible, with little consideration for fit – This sounds like a positive until you have burned all your vacation days and realize that over half of your interviews were a complete waste of time. This is the “throw it against the wall and see what sticks” mentality loathed by both candidates and employers. REMEDY: Perform due diligence and vet jobs before agreeing to interviews.

If you reject a job offer, the recruiter may take questionable actions to get you to reconsider – No one can fault a recruiter for wanting to promote their client when a candidate is on the fence; that is part of the recruiter’s job. Recruiters cross the line when they knowingly provide false details about a job to allay a candidate’s fears. A recruiter may call a candidate’s home when the recruiter knows the candidate isn’t there, in an attempt to speak to (and win the support of) a spouse or significant other. The higher the potential fee, the more likely you are to see these tactics. REMEDY: If you have questions about an offer that don’t have simple answers, such as inquiries about career path or bonus expectations, get answers directly from the company representatives. When your decision is final, make that fact clear to your recruiter.

If you accept a counteroffer, the recruiter will attempt to scare you – Counteroffers are the bane of the recruiter’s existence.
Just as the recruiter starts counting their money, it’s swiped away at the last possible moment – and just because the candidate changed their mind. Recruiting is a unique sales job, in that the hire can refuse the deal after all involved parties (employer, new employee, broker) have agreed to terms; sales jobs in other industries don’t have that issue. When a counteroffer is accepted, expect some form of “recruiter terrorism”. In my opinion, this is perhaps the most shameful recruiter behavior. Recruiters have been known to tell candidates that their career is over, that they will be out of a job in a few months, and that the decision will haunt them for years to come. All of those things may be true in some isolated instances, but plenty of people have accepted counteroffers without ill effects. I’ve written about this before, as it’s important to understand the difference between the actual dangers of counteroffer acceptance and the recruiter’s biased perspective. REMEDY: Consider any counteroffer situation carefully and do your own research on the realities of counteroffers, keeping in mind the source of any content you read.

You will be asked for referrals, perhaps in creative ways – Recruiters are trained to ask everyone for referrals. This was much more important before the advent of LinkedIn and social media, when names were much more difficult to come by. Candidates may expect recruiters to ask “Who is the best Python developer you know?”, but they may feel less threatened by a recruiter asking “Who is the worst Python developer you know?”. Again, we shouldn’t blame recruiters for trying to expand their network, but if the recruiter continues to ask for names without providing any value, it’s clearly not a balanced relationship. REMEDY: Give referrals (if any) that you are comfortable providing, and tell the recruiter that you’ll keep them in mind if any of your associates are looking for work in the future. Whether you act on that is up to you.
If you list references, they will be called (and perhaps recruited) – When a candidate lists references on a résumé, it’s an open invitation to recruit those people as well. If your references discover that you leaked their contact information indiscriminately to a slew of recruiters, and that act resulted in a full inbox, don’t expect them to volunteer as references in the future. REMEDY: Never list references on your résumé. Provide references only when necessary, and ask the references what contact information they would like presented to the recruiter.

You will receive continuous recruiter contact for years to come, usually more often than you’d like – Once your information is out there, you can’t erase it. Don’t provide permanent contact details unless you are willing to field inquiries for the rest of your career. REMEDY: Use throwaway email addresses and set guidelines on future contact.

Recruiters get paid when you take a job through them, regardless of whether it’s the best job choice for you – This is a simple fact that most candidates probably aren’t conscious of during the job search. There are three potential outcomes: you accept a job through the recruiter, you accept a job without using the recruiter, or you stay put. Only the first outcome results in a fee, so the recruiter has a financial incentive first to convince you to leave, and then to have you consider only their jobs. What type of behavior does this lead to? Recruiters may ask you where you are interviewing, where you have applied, and what other recruiters you are using. Some may refuse to work with you if you fail to provide this information. They may offer some explanation as to why this information is vital for them to know, but the real reason is the desire to know whom they are competing against and to have some measure of control. The more detail you provide, the more ammunition the recruiter has to make a case for their client.
REMEDY: Always consider a recruiter’s advice, but also consider their incentives. Provide information to recruiters on a need-to-know basis, and only provide what will help them get you a job. Specifics about any other job search activity are private unless you choose to make them known.

Recruiters have almost no incentive to provide feedback – Many job seekers wonder why agency recruiters often don’t provide feedback after a failed interview. Of my 60+ articles on Job Tips For Geeks, the most popular (based on traffic coming from search engines) is “Why The Recruiter Didn’t Call You Back”, so it’s clear to me that this is a bothersome trend. Once it becomes clear that you will not produce a fee, your value to the recruiter is primarily limited to the possibility of a future placement or a source of referrals. Interview feedback is valuable to candidates, and job seekers who commit to interviews deserve some explanation as to why they were not selected for hire. REMEDY: Set the expectation with the recruiter that you will be interested in client feedback, and ask for specific feedback after interviews are complete.

Reference: So You Want to Use a Recruiter Part III – Warnings from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

How to use Salesforce REST API with your JavaServer Pages

Abstract: This tutorial gives an example of a JSP and how to integrate it with the Salesforce REST API. We will walk through the step-by-step process of creating an external client to manage your data with Force.com, using HTTP(S) and JSON.

In this example, I am using Mac OS X 10.9.2 with an Apache Tomcat 7 server and Java 1.7. Eclipse Java EE edition is the IDE used for development and testing. The instructions given in this tutorial should work, with minor modifications, on other platforms as well. If you want to access the entire sample code from this tutorial, you can find it here: github.com/seethaa/force_rest_example All code is updated to work with the httpclient 4.3 libraries.

What Is REST?

REST stands for Representational State Transfer, and is a stateless client-server communications protocol over HTTP.

Why and When To Use A REST API in Java for Your JSP

A REST API is well suited for browser applications which require a lot of interaction, and uses synchronous communication to transfer data. The Salesforce REST API provides a programming interface for simple web services to interact with Force.com, and supports both XML and JSON formats. The Salesforce REST API works well for mobile applications or dynamic websites that retrieve or update records quickly on your web server. While bulk record retrieval should be reserved for the Bulk API, this lightweight REST API can be used for common server pages which involve quick updates and frequent user interactions, for example updating a single user record.

Setting Up Your Development Account and Prerequisites

You will need the following:

- A free Developer Edition (DE) account. Go to https://developer.salesforce.com/signup and register. For the purposes of this example, I recommend signing up for a Developer Edition even if you already have an account; this ensures you get a clean environment with the latest features enabled.
- A Java application server. I created mine using Apache Tomcat 7 on Mac OS X, with Eclipse as the IDE.
There is also a free Eclipse plugin at http://developer.salesforce.com/page/Force.com_IDE, but the original Eclipse setup was used in this tutorial.

Configure SSL on your Tomcat server using http://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html. If you are developing in Eclipse, make sure to add the Connector element to the server.xml file in your Eclipse environment, e.g.:

<Connector SSLEnabled="true" clientAuth="false" keystoreFile="/Users/seetha/.keystore" keystorePass="password" maxThreads="200" port="8443" protocol="HTTP/1.1" scheme="https" secure="true" sslProtocol="TLS"/>

Add the required jar files to WebContent/WEB-INF/lib. You will need commons-codec-1.6.jar, httpclient-4.3.3.jar, httpcore-4.3.2.jar, commons-logging-1.1.3.jar, and java-json.jar. For Eclipse, I also had to make sure that all jars were added to the build path (Right click Project → Build Path → Configure build path → Select Libraries tab → Click Add Jars → Select the jar files from the WEB-INF/lib folder).

Create a Connected App

Back in your Force.com DE, create a new Connected App through the console:

- Click on Setup → Build → Create → Apps. Scroll down to the Connected Apps section and click on the New button.
- Ensure that the callback URL is http://localhost:8080/<your_app_context_path>/oauth/_callback (you can find the app context path by going back to Eclipse: Right click on Project → Properties → Web Project Settings → Context root).
- Check the “Enable OAuth Settings” checkbox.
- The required OAuth scopes for this tutorial (see Figure 1) are “Access and manage your data (api)” and “Provide access to your data via the Web (web)”, but these scopes should be changed per your requirements.
- Save.

Copy the Client ID and Client Secret (see Figure 2), because both of these will be used in the next step.
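Before wiring the Client ID and callback into the servlet, it may help to see how the OAuth authorize URL is assembled from those values. The sketch below mirrors the URL-building step used later in OAuthConnectedApp.java; the class name and placeholder values here are mine, not part of the tutorial:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class AuthUrlSketch {

    // response_type=code starts the OAuth web server (authorization code) flow;
    // the redirect URI must be percent-encoded because it is a query-string value.
    static String buildAuthUrl(String environment, String clientId, String redirectUri)
            throws UnsupportedEncodingException {
        return environment + "/services/oauth2/authorize?response_type=code"
                + "&client_id=" + clientId
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // MY_CLIENT_ID is a placeholder; substitute your own Connected App values.
        String url = buildAuthUrl("https://login.salesforce.com", "MY_CLIENT_ID",
                "http://localhost:8080/force_rest_example/oauth/_callback");
        System.out.println(url);
    }
}
```

Salesforce redirects the browser back to the (decoded) callback URL with a `code` parameter, which the servlet then exchanges for an access token at /services/oauth2/token.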
Authentication

There are three files that need to be added to your JSP project, given below:

index.html

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>REST/OAuth Example</title>
</head>
<body>
<script type="text/javascript" language="javascript">
if (location.protocol != "https:") {
    document.write("OAuth will not work correctly from plain http. " +
        "Please use an https URL.");
} else {
    document.write("<a href=\"oauth\">Run Connected App demo via REST/OAuth.</a>");
}
</script>
</body>
</html>

OAuthConnectedApp.java

import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebInitParam;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.http.Consts;
import org.apache.http.HttpEntity;
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;

import org.json.JSONException;
import org.json.JSONObject;
import org.json.JSONTokener;

@WebServlet(name = "oauth", urlPatterns = { "/oauth/*", "/oauth" }, initParams = {
        // clientId is 'Consumer Key' in the Remote Access UI
        // **Update with your own Client ID
        @WebInitParam(name = "clientId", value = "3MVG9JZ_r.QzrS7jzujCYrebr8kajDEcjXQLXnV9nGU6PaxOjuOi_n8EcUf0Ix9qqk1lYCa4_Jaq7mpqxi2YT"),
        // clientSecret is 'Consumer Secret' in the Remote Access UI
        // **Update with your own Client Secret
        @WebInitParam(name = "clientSecret", value = "2307033558641049067"),
        // This must be identical to 'Callback URL' in the Remote Access UI
        // **Update with your own URI
        @WebInitParam(name = "redirectUri", value = "http://localhost:8080/force_rest_example/oauth/_callback"),
        @WebInitParam(name = "environment", value = "https://login.salesforce.com"), })
/**
 * Servlet parameters
 * @author seetha
 */
public class OAuthConnectedApp extends HttpServlet {

    private static final long serialVersionUID = 1L;

    private static final String ACCESS_TOKEN = "ACCESS_TOKEN";
    private static final String INSTANCE_URL = "INSTANCE_URL";

    private String clientId = null;
    private String clientSecret = null;
    private String redirectUri = null;
    private String environment = null;
    private String authUrl = null;
    private String tokenUrl = null;

    public void init() throws ServletException {
        clientId = this.getInitParameter("clientId");
        clientSecret = this.getInitParameter("clientSecret");
        redirectUri = this.getInitParameter("redirectUri");
        environment = this.getInitParameter("environment");

        try {
            authUrl = environment
                    + "/services/oauth2/authorize?response_type=code&client_id="
                    + clientId + "&redirect_uri="
                    + URLEncoder.encode(redirectUri, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new ServletException(e);
        }

        tokenUrl = environment + "/services/oauth2/token";
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String accessToken = (String) request.getSession().getAttribute(ACCESS_TOKEN);

        if (accessToken == null) {
            String instanceUrl = null;

            if (request.getRequestURI().endsWith("oauth")) {
                // we need to send the user to authorize
                response.sendRedirect(authUrl);
                return;
            } else {
                System.out.println("Auth successful - got callback");
                String code = request.getParameter("code");

                // Create an instance of HttpClient.
                CloseableHttpClient httpclient = HttpClients.createDefault();
                try {
                    // Create an instance of HttpPost.
                    HttpPost httpost = new HttpPost(tokenUrl);

                    // Add all form parameters in a List of type NameValuePair
                    List<NameValuePair> nvps = new ArrayList<NameValuePair>();
                    nvps.add(new BasicNameValuePair("code", code));
                    nvps.add(new BasicNameValuePair("grant_type", "authorization_code"));
                    nvps.add(new BasicNameValuePair("client_id", clientId));
                    nvps.add(new BasicNameValuePair("client_secret", clientSecret));
                    nvps.add(new BasicNameValuePair("redirect_uri", redirectUri));

                    httpost.setEntity(new UrlEncodedFormEntity(nvps, Consts.UTF_8));

                    // Execute the request.
                    CloseableHttpResponse closeableresponse = httpclient.execute(httpost);
                    System.out.println("Response Statusline:" + closeableresponse.getStatusLine());

                    try {
                        // Parse the token response entity.
                        HttpEntity entity = closeableresponse.getEntity();
                        InputStream rstream = entity.getContent();
                        JSONObject authResponse = new JSONObject(new JSONTokener(rstream));

                        accessToken = authResponse.getString("access_token");
                        instanceUrl = authResponse.getString("instance_url");
                    } catch (JSONException e) {
                        e.printStackTrace();
                    } finally {
                        // Closing the response
                        closeableresponse.close();
                    }
                } finally {
                    httpclient.close();
                }
            }

            // Set a session attribute so that other servlets can get the access token
            request.getSession().setAttribute(ACCESS_TOKEN, accessToken);

            // We also get the instance URL from the OAuth response, so set it in the session too
            request.getSession().setAttribute(INSTANCE_URL, instanceUrl);
        }

        response.sendRedirect(request.getContextPath() + "/ConnectedAppREST");
    }
}

ConnectedAppREST.java

import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.URISyntaxException;
import java.util.Iterator;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.http.HttpEntity;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import org.json.JSONTokener;

@WebServlet(urlPatterns = { "/ConnectedAppREST" })
/**
 * Demo for Connected App/REST API
 * @author seetha
 */
public class ConnectedAppREST extends HttpServlet {

    private static final long serialVersionUID = 1L;
    private static final String ACCESS_TOKEN = "ACCESS_TOKEN";
    private static final String INSTANCE_URL = "INSTANCE_URL";

    private void showAccounts(String instanceUrl, String accessToken,
            PrintWriter writer) throws ServletException, IOException {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet();

        // add key and value
        httpGet.addHeader("Authorization", "OAuth " + accessToken);

        try {
            URIBuilder builder = new URIBuilder(instanceUrl + "/services/data/v30.0/query");
            builder.setParameter("q", "SELECT Name, Id from Account LIMIT 100");
            httpGet.setURI(builder.build());

            CloseableHttpResponse closeableresponse = httpclient.execute(httpGet);
            System.out.println("Response Status line :" + closeableresponse.getStatusLine());
            if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                // Now let's use the standard java json classes to work with the results
                try {
                    HttpEntity entity = closeableresponse.getEntity();
                    InputStream rstream = entity.getContent();
                    JSONObject authResponse = new JSONObject(new JSONTokener(rstream));
                    System.out.println("Query response: " + authResponse.toString(2));

                    writer.write(authResponse.getInt("totalSize") + " record(s) returned\n\n");

                    JSONArray results = authResponse.getJSONArray("records");
                    for (int i = 0; i < results.length(); i++) {
                        writer.write(results.getJSONObject(i).getString("Id") + ", "
                                + results.getJSONObject(i).getString("Name") + "\n");
                    }
                    writer.write("\n");
                } catch (JSONException e) {
                    e.printStackTrace();
                    throw new ServletException(e);
                }
            }
        } catch (URISyntaxException e1) {
            e1.printStackTrace();
        } finally {
            httpclient.close();
        }
    }

    private String createAccount(String name, String instanceUrl, String accessToken,
            PrintWriter writer) throws ServletException, IOException {
        String accountId = null;
        CloseableHttpClient httpclient = HttpClients.createDefault();
        JSONObject account = new JSONObject();
        try {
            account.put("Name", name);
        } catch (JSONException e) {
            e.printStackTrace();
            throw new ServletException(e);
        }
        HttpPost httpost = new HttpPost(instanceUrl + "/services/data/v30.0/sobjects/Account/");
        httpost.addHeader("Authorization", "OAuth " + accessToken);
        StringEntity messageEntity = new StringEntity(account.toString(),
                ContentType.create("application/json"));
        httpost.setEntity(messageEntity);

        // Execute the request.
        CloseableHttpResponse closeableresponse = httpclient.execute(httpost);
        System.out.println("Response Status line :" + closeableresponse.getStatusLine());
        try {
            writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode()
                    + " creating account\n\n");
            if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_CREATED) {
                try {
                    HttpEntity entity = closeableresponse.getEntity();
                    InputStream rstream = entity.getContent();
                    JSONObject authResponse = new JSONObject(new JSONTokener(rstream));
                    System.out.println("Create response: " + authResponse.toString(2));
                    if (authResponse.getBoolean("success")) {
                        accountId = authResponse.getString("id");
                        writer.write("New record id " + accountId + "\n\n");
                    }
                } catch (JSONException e) {
                    e.printStackTrace();
                }
            }
        } finally {
            httpclient.close();
        }
        return accountId;
    }

    private void showAccount(String accountId, String instanceUrl, String accessToken,
            PrintWriter writer) throws ServletException, IOException {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet();

        // add key and value
        httpGet.addHeader("Authorization", "OAuth " + accessToken);
        try {
            URIBuilder builder = new URIBuilder(instanceUrl
                    + "/services/data/v30.0/sobjects/Account/" + accountId);
            httpGet.setURI(builder.build());

            CloseableHttpResponse closeableresponse = httpclient.execute(httpGet);
            System.out.println("Response Status line :" + closeableresponse.getStatusLine());
            if (closeableresponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                try {
                    HttpEntity entity = closeableresponse.getEntity();
                    InputStream rstream = entity.getContent();
                    JSONObject authResponse = new JSONObject(new JSONTokener(rstream));
                    System.out.println("Query response: " + authResponse.toString(2));
                    writer.write("Account content\n\n");
                    Iterator iterator = authResponse.keys();
                    while (iterator.hasNext()) {
                        String key = (String) iterator.next();
                        Object obj = authResponse.get(key);
                        String value = null;
                        if (obj instanceof String) {
                            value = (String) obj;
                        }
                        writer.write(key + ":" + (value != null ? value : "") + "\n");
                    }
                    writer.write("\n");
                } catch (JSONException e) {
                    e.printStackTrace();
                    throw new ServletException(e);
                }
            }
        } catch (URISyntaxException e1) {
            e1.printStackTrace();
        } finally {
            httpclient.close();
        }
    }

    private void updateAccount(String accountId, String newName, String city,
            String instanceUrl, String accessToken, PrintWriter writer)
            throws ServletException, IOException {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        JSONObject update = new JSONObject();
        try {
            update.put("Name", newName);
            update.put("BillingCity", city);
        } catch (JSONException e) {
            e.printStackTrace();
            throw new ServletException(e);
        }
        HttpPost httpost = new HttpPost(instanceUrl + "/services/data/v30.0/sobjects/Account/"
                + accountId + "?_HttpMethod=PATCH");
        httpost.addHeader("Authorization", "OAuth " + accessToken);
        StringEntity messageEntity = new StringEntity(update.toString(),
                ContentType.create("application/json"));
        httpost.setEntity(messageEntity);

        // Execute the request.
        CloseableHttpResponse closeableresponse = httpclient.execute(httpost);
        System.out.println("Response Status line :" + closeableresponse.getStatusLine());
        try {
            writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode()
                    + " updating account " + accountId + "\n\n");
        } finally {
            httpclient.close();
        }
    }

    private void deleteAccount(String accountId, String instanceUrl, String accessToken,
            PrintWriter writer) throws IOException {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        HttpDelete delete = new HttpDelete(instanceUrl
                + "/services/data/v30.0/sobjects/Account/" + accountId);
        delete.setHeader("Authorization", "OAuth " + accessToken);

        // Execute the request.
        CloseableHttpResponse closeableresponse = httpclient.execute(delete);
        System.out.println("Response Status line :" + closeableresponse.getStatusLine());
        try {
            writer.write("HTTP status " + closeableresponse.getStatusLine().getStatusCode()
                    + " deleting account " + accountId + "\n\n");
        } finally {
            delete.releaseConnection();
        }
    }

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter writer = response.getWriter();
        String accessToken = (String) request.getSession().getAttribute(ACCESS_TOKEN);
        String instanceUrl = (String) request.getSession().getAttribute(INSTANCE_URL);

        if (accessToken == null) {
            writer.write("Error - no access token");
            return;
        }
        writer.write("We have an access token: " + accessToken + "\n"
                + "Using instance " + instanceUrl + "\n\n");

        showAccounts(instanceUrl, accessToken, writer);
        String accountId = createAccount("My New Org", instanceUrl, accessToken, writer);
        if (accountId == null) {
            System.out.println("Account ID null");
        }
        showAccount(accountId, instanceUrl, accessToken, writer);
        showAccounts(instanceUrl, accessToken, writer);
        updateAccount(accountId, "My New Org, Inc", "San Francisco",
                instanceUrl, accessToken, writer);
        showAccount(accountId, instanceUrl, accessToken, writer);
        deleteAccount(accountId, instanceUrl, accessToken, writer);
        showAccounts(instanceUrl, accessToken, writer);
    }
}

Change OAuthConnectedApp.java to replace the Client ID, Client Secret, and Callback URI fields based on your Connected App configuration. Start the Tomcat server in Eclipse (see Figure 3) or externally, and navigate to https://localhost:8443/<your_app_context_path>/ The link (see Figure 4) will not work unless it is accessed through HTTPS, and SSL must be configured as an endpoint for Tomcat. If all configurations were done properly, you should see a salesforce.com login screen (see Figure 5).
Go ahead and log in with your salesforce.com credentials to authorize your web application to access resources. Logging in will allow the ConnectedAppREST demo to execute methods to create, show, update, and delete records (see Figure 6).

Tips & Warnings

- Make sure you have a Developer Edition (DE) account, because there are slight differences between the Professional, Enterprise, and Developer editions. The Developer Edition is free and does not expire (unless unused for a year).
- The callback URL in OAuthConnectedApp.java must be the same as the URL added to the Connected App.
- If you get an HTTP 403 error, the resource you are requesting is “Forbidden”. Check that the username/account you are accessing with has the appropriate permissions.
- Make sure index.html is directly under the WebContent directory.

Resources

For a comprehensive set of resources, check out: http://developer.salesforce.com/en/mobile/resources

References

Force.com REST API Developer’s Guide (PDF)
Using the Force.com REST API

Benchmarking SQS

SQS, Amazon’s Simple Queue Service, is a message-queue-as-a-service offering from Amazon Web Services. It supports only a handful of messaging operations, far from the complexity of e.g. AMQP, but thanks to its easy-to-understand interface and its as-a-service nature, it is very useful in a number of situations. But how fast is SQS? How does it scale? Is it useful only for low-volume messaging, or can it be used for high-load applications as well?

If you know how SQS works and want to skip the details on the testing methodology, you can jump straight to the test results.

SQS semantics

SQS exposes an HTTP-based interface. To access it, you need AWS credentials to sign the requests, but that’s usually handled by a client library (there are libraries for most popular languages; we’ll use the official Java SDK). The basic message-related operations are:

- send a message, up to 256 KB in size, encoded as a string. Messages can be sent in bulks of up to 10 (but the total size is capped at 256 KB)
- receive a message. Up to 10 messages can be received in bulk, if available in the queue
- long-poll for messages. The request will wait up to 20 seconds for messages if none are available initially
- delete a message

There are also some other operations, concerning security, delaying message delivery, and changing a message’s visibility timeout, but we won’t use them in the tests.

SQS offers an at-least-once delivery guarantee. If a message is received, it is blocked for a period called the “visibility timeout”. Unless the message is deleted within that period, it will become available for delivery again. Hence if a node processing a message crashes, the message will be delivered again. However, we also run the risk of processing a message twice (if e.g. the network connection dies while deleting the message, or if an SQS server dies), which we have to manage on the application side.
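The receive/delete contract can be illustrated with a toy in-memory model. This is a sketch of the semantics only (a logical clock instead of wall time, and no reflection of how SQS is actually implemented): a received message is hidden for a visibility timeout, and unless it is deleted within that window, it becomes receivable again.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy model of SQS at-least-once semantics. A received message goes "in
// flight" for `visibilityTimeout` clock ticks; if it is not deleted before
// the timeout expires, it returns to the visible queue and is redelivered.
public class ToyQueue {
    private final Deque<String> available = new ArrayDeque<>();
    private final Map<String, Long> inFlight = new HashMap<>(); // msg -> time it reappears
    private final long visibilityTimeout;
    private long now = 0;

    public ToyQueue(long visibilityTimeout) { this.visibilityTimeout = visibilityTimeout; }

    public void send(String msg) { available.addLast(msg); }

    // Advance the logical clock by one tick; expired in-flight messages
    // become visible again.
    public void tick() {
        now++;
        Iterator<Map.Entry<String, Long>> it = inFlight.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() <= now) {
                available.addLast(e.getKey());
                it.remove();
            }
        }
    }

    // Returns null if nothing is visible right now.
    public String receive() {
        String msg = available.pollFirst();
        if (msg != null) inFlight.put(msg, now + visibilityTimeout);
        return msg;
    }

    public void delete(String msg) { inFlight.remove(msg); }

    public static void main(String[] args) {
        ToyQueue q = new ToyQueue(2);
        q.send("m1");
        System.out.println(q.receive()); // m1 is delivered and hidden
        System.out.println(q.receive()); // null: m1 is in flight, not deleted
        q.tick(); q.tick();              // visibility timeout expires
        System.out.println(q.receive()); // m1 again: at-least-once in action
    }
}
```

This is exactly why the application side must either deduplicate or make message processing idempotent: a slow consumer that deletes after the timeout has the same effect as a crashed one.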
SQS is a replicated message queue, so you can be sure that once a message is sent, it is safe and will be delivered; quoting from the website:

Amazon SQS runs within Amazon’s high-availability data centers, so queues will be available whenever applications need them. To prevent messages from being lost or becoming unavailable, all messages are stored redundantly across multiple servers and data centers.

Testing methodology

To test how fast SQS is and how it scales, we will run various numbers of nodes, each running various numbers of threads, either sending or receiving simple 100-byte messages.

Each sending node is parametrised with the number of messages to send, and it tries to do so as fast as possible. Messages are sent in bulk, with bulk sizes chosen randomly between 1 and 10. Message sends are synchronous, that is, we want to be sure that a request completed successfully before sending the next bulk. At the end, the node reports the average number of messages per second that were sent.

The receiving node receives messages in maximum bulks of 10. The AmazonSQSBufferedAsyncClient is used, which pre-fetches messages to speed up delivery. After being received, each message is asynchronously deleted. The node assumes that testing is complete once it hasn’t received any messages for a minute, and reports the average number of messages per second that it received.

Each test sends from 10 000 to 50 000 messages per thread, so the tests are relatively short, 2-5 minutes. There are also longer tests, which last about 15 minutes. The full (but still short) code is here: Sender, Receiver, SqsMq. One set of nodes runs the MqSender code, the other runs the MqReceiver code.
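The bulk-send constraint mentioned above (at most 10 messages per request, with total payload capped at 256 KB) can be sketched as a small batching helper. The class and method names are mine, not from the benchmark code; only the two limits come from the SQS documentation:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Splits a list of message bodies into SQS-legal send batches:
// at most 10 entries and at most 256 KB of payload per batch.
public class SqsBatcher {
    static final int MAX_BATCH = 10;
    static final int MAX_BYTES = 256 * 1024;

    public static List<List<String>> batches(List<String> messages) {
        List<List<String>> out = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int bytes = 0;
        for (String m : messages) {
            int size = m.getBytes(StandardCharsets.UTF_8).length;
            // Close the current batch if adding this message would break a limit.
            if (!current.isEmpty() && (current.size() == MAX_BATCH || bytes + size > MAX_BYTES)) {
                out.add(current);
                current = new ArrayList<>();
                bytes = 0;
            }
            current.add(m);
            bytes += size;
        }
        if (!current.isEmpty()) out.add(current);
        return out;
    }

    public static void main(String[] args) {
        List<String> msgs = new ArrayList<>();
        for (int i = 0; i < 25; i++) msgs.add("x".repeat(100)); // 25 100-byte messages
        List<List<String>> b = batches(msgs);
        System.out.println(b.size());        // 3 batches: 10 + 10 + 5
        System.out.println(b.get(2).size()); // 5
    }
}
```

With the benchmark’s 100-byte messages, the 10-entry limit always triggers long before the 256 KB cap, which is why the tests can simply pick random bulk sizes between 1 and 10.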
The sending and receiving nodes are m3.large EC2 servers in the eu-west region, hence with the following parameters: 2 cores (Intel Xeon E5-2670 v2) and 7.5 GB RAM. The queue is of course also created in the eu-west region.

Minimal setup

The minimal setup consists of 1 sending node and 1 receiving node, both running a single thread. The results are, in messages/second:

            average   min   max
sender        429     365   466
receiver      427     363   463

Scaling threads

How do these results scale when we add more threads (still using one sender and one receiver node)? The tests were run with 1, 5, 25, 50 and 75 threads. The numbers are average msg/second throughput:

number of threads:         1         5        25        50        75
sender per thread       429.33    407.35    354.15    289.88    193.71
sender total            429.33   2036.76   8853.75  14493.83  14528.25
receiver per thread     427.86    381.55    166.38     83.92     47.46
receiver total          427.86   1907.76   4159.50   4196.17   3559.50

As you can see, on the sender side we get near-to-linear scalability as the number of threads increases, peaking at 14k msgs/second sent (on a single node!) with 50 threads. Going any further doesn't seem to make a difference. The receiving side is slower, and that is kind of expected, as receiving a single message is in fact two operations: receive + delete, while sending is a single operation. The scalability is worse, but we can still get as much as 4k msgs/second received.

Scaling nodes

Another (more promising) method of scaling is adding nodes, which is quite easy as we are "in the cloud".
The test results when running multiple nodes, each running a single thread, are:

number of nodes:          1        2        4        8
sender per node        429.33   370.36   350.30   337.84
sender total           429.33   740.71  1401.19  2702.75
receiver per node      427.86   360.60   329.54   306.40
receiver total         427.86   721.19  1318.15  2451.23

In this case, on both the sending and receiving sides, we get near-linear scalability, reaching 2.5k messages sent and received per second with 8 nodes.

Scaling nodes and threads

The natural next step is, of course, to scale up both the nodes and the threads! Here are the results when using 25 threads on each node:

number of nodes:               1         2         4         8
sender per node&thread      354.15    338.52    305.03    317.33
sender total               8853.75  16925.83  30503.33  63466.00
receiver per node&thread    166.38    159.13    170.09    174.26
receiver total             4159.50   7956.33  17008.67  34851.33

Again, we get great scalability results, with the number of receive operations about half the number of send operations per second. 34k msgs/second processed is a very nice number!

To the extreme

The highest results I managed to get are:

- 108k msgs/second sent when using 50 threads and 8 nodes
- 35k msgs/second received when using 25 threads and 8 nodes

I also tried running longer "stress" tests with 200k messages/thread, 8 nodes and 25 threads, and the results were the same as with the shorter tests.

Running the tests – technically

To run the tests, I built Docker images containing the Sender/Receiver binaries, pushed them to Docker Hub, and had them downloaded onto the nodes by Chef. To provision the servers, I used Amazon OpsWorks. This enabled me to quickly spin up and provision a lot of nodes for testing (up to 16 in the above tests). For details on how this works, see my "Cluster-wide Java/Scala application deployments with Docker, Chef and Amazon OpsWorks" blog post. The Sender/Receiver daemons monitored a file on S3 (by checking its last-modification date each second).
If a modification was detected, the file was downloaded – it contained the test parameters – and the test started.

Summing up

SQS has good performance and really great scalability characteristics. I wasn't able to reach the peak of its possibilities – that would probably require more than the 16 nodes used here in total. But once your requirements get above 35k messages per second, chances are you need a custom solution anyway; not to mention that while SQS is cheap, it may become expensive with such loads. From the results above, I think it is clear that SQS can be safely used for high-volume messaging applications and scaled on demand. Together with its reliability guarantees, it is a great fit for both small and large applications which do any kind of asynchronous processing, especially if your service already resides in the Amazon cloud. As benchmarking isn't easy, any remarks on the testing methodology and ideas on how to improve the testing code are welcome!

Reference: Benchmarking SQS from our JCG partner Adam Warski at the Blog of Adam Warski blog....

Open Source Projects – Between accepting and rejecting pull request

Lately I have done a lot of work for the sbt-native-packager project. Being a committer comes with a lot of responsibilities. You are responsible for the code quality, supporting your community, encouraging people to contribute to your project and, of course, providing an awesome open source product. Most open source committers probably start out as contributors, providing pull requests that fix bugs or add new features. From this side it looks rather simple: the project maintainer probably knows his/her domain and the code well enough to make a good judgement. Right? This is not always the case. The bigger a project gets, the smaller the chance that a single maintainer alone can judge and merge your pull request. However, there's a lot you can do to make things easier! I'm really glad a lot of contributors already do many of these things, but I wanted to write down my experience.

Provide tests

This is obvious, right? However, tests are so much more than just proof that something works or is fixed. Tests are like documentation for the maintainers. They can see how the new feature works or what caused the bug. Furthermore, a test gives the maintainer confidence to work on that feature or bug fix himself, as there's already a test which checks his work.

Provide documentation

If you add a new feature, add minimal documentation as well. A few sentences on what it does, how to use it, and why to use it are enough. It makes life a lot easier for maintainers judging your pull request, because they can try it out very easily themselves without going through all of your code first.

Be ready for changes

Maintaining a healthy code base with a lot of contributors is a challenge. So if you decide to contribute to an open source project, try to stick to the style which is already applied in the repository. This applies from the high abstraction levels down to the low-level code.
And if you don't, be prepared to change your code, as the maintainers have to make sure the code can be easily understood by everybody else. Sometimes it's hard not to take this personally, and we try to be very polite; however, sometimes corrections are necessary. There's an easy way to avoid all of this…

Small commits, early pull requests

Start small and ask early. Write comments in your code, and use the awesome tooling most code hosting sites provide, like discussions or in-code comments. Providing a base for discussion is IMHO the best way to get things done. You can discuss what's good and bad, and whether the approach is correct or not. You avoid a lot of work which might not be useful or is out of scope, and the maintainers don't have to feel bad about rejecting a lot of work.

Tell us more!

A lot of open source projects were created for a specific need, but the nature of an open source project sometimes leads to an extension of that specific need, and you add more features. Tell us what you do with it! The maintainers (hopefully) love their project and are amazed by the things you can do with it. Write blog posts, tweets or Stack Overflow discussions to show your case.

Reference: Open Source Projects – Between accepting and rejecting pull request from our JCG partner Nepomuk Seiler at the mukis.de blog....

So You Want to Use a Recruiter Part II – Establishing Boundaries

This is the second in a three-part series to inform job seekers about working with a recruiter. Part I was “Recruit Your Recruiter” and Part III is “Warnings”. Once you have identified the recruiter(s) you are going to use in your job search, it is ideal to immediately gather information from the recruiter (and provide some instructions to the recruiter) so expectations and boundaries are properly set. Not all recruiters are alike, with significant variation in protocol, style, and even the recruiter’s incentives. The stakes are high for job seekers who entrust someone to assist with their career, but it’s important to keep in mind that a recruiter stands to earn a sizable amount when making a placement. For contingency agency recruiters, who make up the majority of the market, the combination of large fees and competition can incentivize bad behavior. More on this in Part III. As a recruiter, I find that transparency helps gain trust and is necessary to establish an effective professional relationship. Candidates should realize that I have a business and profit motive, but I also want my candidates to understand my specific incentives so they can consider those incentives during our interactions. The negative reputation of agency recruiters makes this transparency necessary, and honest recruiters should have nothing to hide. Some recruiters will be more open than others, and the recruiter’s willingness to share information can and should be used as a potential indicator of the recruiter’s interests. A recruiter must be able to articulate their own incentives, and be willing to justify situations where full transparency is not provided. To establish boundaries and set expectations, there are several topics that need to be addressed. What you need to know The recruiter’s experience – Hopefully you vetted your recruiter before contact, but now is the time to verify anything that you may have read. Confirm any claimed specialties.
How the recruiter is paid for any given client – Whether or not recruiters should reveal their fee percentages is debatable, but job seekers certainly have the right to know how fees are calculated. Why is this important? Some fees may be based on base salary only while other agreements may stipulate that a fee includes bonuses or stock grants. If the recruiter is providing advice in negotiation, it’s helpful to know what parts of the compensation package impact the recruiter’s potential fee. Keep in mind that recruiters often have customized agreements with their clients. When a recruiter is representing you to multiple opportunities, it’s absolutely necessary for you to be made aware of each client’s fee structure. If you sense that your recruiter is pushing you towards accepting an offer from Company A and discouraging you from a higher offer with Company B, knowing who pays the recruiter more helps temper the advice. The recruiter’s relationship with any given client – Did the recruiter just sign this client last week or do they have a ten year history of working together? Has the recruiter worked with certain employees of the client in the past? This information is primarily useful when considering a recruiter’s advice on hiring process and negotiation, as the recruiter’s familiarity (or lack thereof) could be a contributing factor to getting an offer and closing the deal. The recruiter should also be willing to share if the client is a contingency search or retained (some fee paid in advance). This information has little impact on incentives, but clients do have a vested interest in hiring from a recruiter on retainer as they already have some skin in the game. As much detail as possible on any given job being pitched – Some candidates are satisfied with only knowing a job title while others want to know whether a company has a tendency to hire executives from outside or within. 
Recruiters will have some specific details, but candidates should expect to perform a bit of due diligence as well. If there are certain deal breakers regarding your job search (maybe tuition reimbursement is a requirement for you), it’s the candidate’s responsibility to convey those conditions and the recruiter’s responsibility to clear those up before starting the process. What you need to express How and when to contact – If you share all your contact information with a recruiter without instruction, many recruiters will assume they have full access. Recruiters want to establish a solid relationship and may feel the best way to do that is through extensive live contact. An inordinate number of calls to your mobile phone during office hours could tip off managers to your search, which may even benefit the recruiter’s efforts to place you. Set guidelines on both method and time acceptable for contact. No changes to the résumé without consent – I hear this complaint often, and the solution for many is a PDF. The most common change made is the addition of the recruiter’s contact info and maybe a logo. This is harmless, and designed to ensure that the recruiter gets their fee if the résumé is found three months later and the candidate is hired. There are many anecdotes about recruiters adding or subtracting details from a résumé, which is a different story. It’s entirely unethical for a recruiter to insert skills or buzzwords without consent. No résumés submitted without permission - To prevent a host of potential issues, be explicit about this. A recruiter who is not given this directive may feel they have carte blanche and might submit your résumé to a company you are already interviewing with, a former boss you didn’t like, or any number of places you don’t want your résumé going. Need to provide client names before submittal – See above. 
There are somewhat unique scenarios where companies request anonymity before they establish interest in a candidate, but these are extremely rare cases. It is not only important to know that your résumé is being sent out, but also where it is going. Only want to be pitched jobs that meet your criteria – This is more about saving time than anything else, but contingency recruiters playing the numbers game may try to maximize their chances of making a fee on you by submitting you to every client in their portfolio. The result is wasteful interviews for jobs that you are unqualified for or that you would never have accepted in the first place. Recruiters aren’t mind readers, so you’ll need to be specific. If you are limiting your search to specific locations and types of jobs, establish those parameters early and ask to be informed only about jobs that fit. Expectation of feedback, preferably actionable – One of the biggest complaints about recruiters is that they suddenly disappear after telling you about a job or sending you on an interview. There are multiple reasons for this, some understandable and others less so. Asking the recruiter when you should expect to hear feedback and sending prompt emails after interviews should help you gather valuable information about what you are doing well and where you could use some work. Recruiters don’t want to hurt a candidate’s feelings and may filter their feedback, but the raw information is more useful and often actionable. Ask for a low level of filtering.Reference: So You Want to Use a Recruiter Part II – Establishing Boundaries from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Java EE Pitfalls #1: Ignore the default lock of a @Singleton

EJB Singleton Beans were introduced by the EJB 3.1 specification and are often used to store cached data. In other words, we try to improve the performance of our application by using a Singleton. In general, this works quite well, especially if there are not too many calls in parallel. But things change if we ignore the default lock and the number of parallel calls increases.

Sensible defaults

Let's start with some Java code and see how the sensible default of the lock works out. The following snippet shows a simple EJB Singleton with a counter and two methods: method1 writes the current value of the counter to the log, and method2 counts from 0 to 100.

@Singleton
@Remote(SingletonRemote.class)
public class DefaultLock implements SingletonRemote {

    Logger logger = Logger.getLogger(DefaultLock.class.getName());

    private int counter = 0;

    @Override
    public void method1() {
        this.logger.info("method1: " + counter);
    }

    @Override
    public void method2() throws Exception {
        this.logger.info("start method2");
        for (int i = 0; i < 100; i++) {
            counter++;
            logger.info("" + counter);
        }
        this.logger.info("end method2");
    }
}

As you can see, there is no lock defined. What do you expect to see in the log file if we call both methods in parallel?

2014-06-24 21:18:51,948 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 5) method1: 0
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) start method2
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 1
2014-06-24 21:18:51,949 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 2
2014-06-24 21:18:51,950 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 3
...
2014-06-24 21:18:51,977 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 99
2014-06-24 21:18:51,977 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) 100
2014-06-24 21:18:51,978 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 4) end method2
2014-06-24 21:18:51,978 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 6) method1: 100
2014-06-24 21:18:51,981 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 7) method1: 100
2014-06-24 21:18:51,985 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 8) method1: 100
2014-06-24 21:18:51,988 INFO [blog.thoughts.on.java.singleton.lock.DefaultLock] (EJB default - 9) method1: 100

OK, that might be a little unexpected: the default is a container-managed write lock on the entire Singleton. This is a good default to avoid concurrent modifications of the attributes, but it is a bad default if we want to perform read-only operations. In that case, the serialization of the method calls results in lower scalability and lower performance under high load.

How to avoid it?

The answer to that question is obvious: we need to take care of the concurrency management. As usual in Java EE, there are two ways to handle it. We can do it ourselves, or we can ask the container to do it.

Bean Managed Concurrency

I do not want to go into too much detail regarding Bean Managed Concurrency. It is the most flexible way to manage concurrent access. The container allows concurrent access to all methods of the Singleton, and you have to guard its state as necessary. This can be done by using synchronized and volatile. But be careful: quite often this is not as easy as it seems.

Container Managed Concurrency

Container Managed Concurrency is much easier to use, but not as flexible as the bean-managed approach. In my experience, however, it is good enough for common use cases.
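For the bean-managed approach, guarding the state yourself might look like the following plain-Java sketch. This is illustrative only, not EJB code, and the names are my own; a ReentrantReadWriteLock mirrors the semantics of the container's READ/WRITE locks, in that many readers may proceed concurrently while a writer gets exclusive access:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Plain-Java illustration of bean-managed concurrency: the container lets all
// calls through in parallel, and the bean guards its own state.
class SelfGuardedCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int counter = 0;

    int read() {                 // analogous to @Lock(LockType.READ)
        lock.readLock().lock();
        try {
            return counter;
        } finally {
            lock.readLock().unlock();
        }
    }

    void increment() {           // analogous to @Lock(LockType.WRITE)
        lock.writeLock().lock();
        try {
            counter++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Note, incidentally, that method2 in the examples mutates the counter, so under a pure READ lock its increments could race; in a real bean a mutating method would normally take the WRITE lock (or guard the field itself, as above).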
As we saw in the log, container managed concurrency is the default for an EJB Singleton. The container sets a write lock for the entire Singleton and serializes all method calls. We can change this behavior and define read and write locks at method and/or class level. This is done by annotating the Singleton class or its methods with @javax.ejb.Lock(javax.ejb.LockType). The LockType enum provides the values WRITE and READ to define an exclusive write lock or a read lock. The following snippet shows how to set the lock of method1 and method2 to LockType.READ:

@Singleton
@Remote(SingletonRemote.class)
public class ReadLock implements SingletonRemote {

    Logger logger = Logger.getLogger(ReadLock.class.getName());

    private int counter = 0;

    @Override
    @Lock(LockType.READ)
    public void method1() {
        this.logger.info("method1: " + counter);
    }

    @Override
    @Lock(LockType.READ)
    public void method2() throws Exception {
        this.logger.info("start method2");
        for (int i = 0; i < 100; i++) {
            counter++;
            logger.info("" + counter);
        }
        this.logger.info("end method2");
    }
}

As already mentioned, we could achieve the same by annotating the class with @Lock(LockType.READ) instead of annotating both methods. OK, if everything works as expected, both methods should be accessed in parallel. So let's have a look at the log file:

2014-06-24 21:47:13,290 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 10) method1: 0
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) start method2
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 1
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 2
2014-06-24 21:47:13,291 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 3
...
2014-06-24 21:47:13,306 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 68
2014-06-24 21:47:13,307 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 69
2014-06-24 21:47:13,308 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 3) method1: 69
2014-06-24 21:47:13,310 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 70
2014-06-24 21:47:13,310 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 71
...
2014-06-24 21:47:13,311 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 76
2014-06-24 21:47:13,311 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 77
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 2) method1: 77
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 78
2014-06-24 21:47:13,312 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 79
...
2014-06-24 21:47:13,313 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 83
2014-06-24 21:47:13,313 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 84
2014-06-24 21:47:13,314 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 5) method1: 84
2014-06-24 21:47:13,316 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 85
2014-06-24 21:47:13,316 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 86
2014-06-24 21:47:13,317 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 87
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 88
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 6) method1: 89
2014-06-24 21:47:13,318 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 89
2014-06-24 21:47:13,319 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 90
...
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 99
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) 100
2014-06-24 21:47:13,321 INFO [blog.thoughts.on.java.singleton.lock.ReadLock] (EJB default - 1) end method2

Conclusion

At the beginning of this article, we found out that Java EE uses a container-managed write lock as the default. This results in serialized processing of all method calls and lowers the scalability and performance of the application. This is something we need to keep in mind when implementing an EJB Singleton. We had a look at the two existing options for controlling the concurrency management: Bean Managed Concurrency and Container Managed Concurrency. We used the container-managed approach to define a read lock for both methods of our singleton. This is not as flexible as the bean-managed approach, but it is much easier to use and sufficient in most cases.
We just need to provide an annotation and the container will handle the rest.

Reference: Java EE Pitfalls #1: Ignore the default lock of a @Singleton from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog....

Java EE 8 – Deliver More Apps to More Devices

If there's one thing I dislike about summer, it is the fact that there isn't much news to share or talk about. Whoever decided to put Java Day Tokyo into this boring time of the year did a pretty good job and gave me an opportunity to write a blog post about the new and upcoming Java EE 8 specification, enriched with some more thoughts and pointers. As announced on the Java EE 7 EG mailing list at the beginning of June, the new EE 8 JSR is going to be filed shortly (before JavaOne).

Contents of EE 8

Unlike the first version of EE 7, which was totally dominated by the word "cloud" and later re-aligned with the hard facts, this new Java EE version will basically stick to three different areas of improvement:

- HTML 5 / Web Tier Enhancements
- CDI Alignment / Ease-of-Development
- Cloud Enablement

All three can be seen as a continued evolution of what EE 7 already delivered, and there is no real surprise in it at all. Head over to The Aquarium to read more about the details.

Cameron Purdy about EE 8 at Java Day Tokyo 2014

Hidden Gems – What might come up at JavaOne

Java Day Tokyo was held recently, and with Cameron Purdy as a keynote speaker about Java EE and its general direction (mp4 download, 363MB), this probably was one of the first chances to see what the overall story for JavaOne will be with regard to the platform. As Oracle should have learned, the Java community isn't interested in big and unpleasant surprises. Strategic directions are communicated and prepared a bit more carefully. We all have seen and heard about the IoT hype and the efforts everybody puts into it. This obviously also seems to have some outreach into Java EE. Besides the general topics and contents of EE 8, the Purdy keynote also contained a slide titled "Powering Java Standards in the Cloud – Deliver More Apps to More Devices with Confidence". Java Standards in the Cloud. And yes, you are correct in thinking that this is EE 7 coverage. It actually is.
But at least for me, it is the first time that individual features have been isolated from individual technical specifications and put into a complete, strategic picture outlining use-cases in the enterprise. It will be interesting to see if there is something more like this to be shown at JavaOne, and how much IoT we will see in EE 8 when it finally hits the road.

Reference: Java EE 8 – Deliver More Apps to More Devices from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

How to Handle Incompetence?

We've all had incompetent colleagues. People that tend to write bad code, make bad decisions or just can't understand some of the concepts in the project(s). And it's never trivial to handle this scenario. Obviously, the easiest solution is to ignore it. And if you are not a team lead (or something similar), you can probably pretend that the problem doesn't exist (and occasionally curse and refactor some crappy code). There are two types of incompetent people: those who know they are not that good, and those who are clueless about their incompetence. The former are usually junior and mid-level developers, and they are expected to be less experienced. With enough coaching and kindly pointing out their mistakes, they will learn. This is where all of us have gone through. The latter is the harder breed. They are the "senior" developers that have become senior only due to the amount of years they've spent in the industry, regardless of their actual skills or contribution. They tend to produce crappy code and misunderstand assignments, but on the other hand reject (kindly or more aggressively) any attempt to be educated. Because they're "senior", and who are you to argue with them? In extreme cases this may be accompanied by an inferiority complex, which in turn may result in clumsy attempts to prove they are actually worthy. In other cases it may involve pointless discussions on topics they do not want to admit they are wrong about, just because admitting that would mean they are inferior. They will often use truisms and general statements instead of real arguments, in order to show they actually understand the matter and it's you that's wrong. E.g. "we must do things the right way", "we must follow best practices", "we must do more research before making this decision", and so on. In a way, it's not exactly their incompetence that is the problem, it's their attitude and their skewed self-image. But enough layman psychology. What can be done in such cases?
A solution (depending on the labour laws) is to just lay them off. But with a tight market, approaching deadlines, and company hierarchy and rules, that's probably not easy. And such people can still be useful; it's just that "utilizing" them is tricky. The key is minimizing the damage they do without wasting the time of other team members. Note that "incompetent" doesn't mean "can't do anything at all"; it's just not up to the desired quality. Here's an incomplete list of suggestions:

- code reviews – you should absolutely have these, even if you don't have incompetent people. If a piece of code is crappy, you can say that in a review.
- code style rules – you should have something like a checkstyle or PMD rule set (or whatever is relevant to your language). And it won't be offensive when you point out warnings from style checks.
- pair programming – often simple code-style checks can't detect bad code, and especially a bad approach to a problem. And it may be "too late" to indicate that in a code review (there is never a "too late" time for fixing technical debt, of course). So do pair programming. If the incompetent person is not the one writing the code, his pair of eyes may be useful to spot mistakes. If he is writing the code, the other team member might catch a wrong approach early and discuss it.
- don't let them take important decisions or work on important tasks alone; in fact, this should be true even for the best developer out there – having more people involved in a discussion is often productive.

Did I just make some obvious engineering process suggestions? Yes. And they would work in most cases, resolving the problem smoothly. Just don't make a drama out of it and don't point fingers… …unless it's too blatant. If the guy is both incompetent and has an intolerable attitude, and the team agrees on that, inform management. You have a people problem then, and you can't solve it with a good process. Note that the team should agree.
But what to do if you are alone in a team of incompetent people, or the competent people are too unmotivated to take care of the incompetent ones? Leave. That's not a place for you. I probably didn't say anything useful. But the "moral" is: don't point fingers; enforce good engineering practices instead.

Reference: How to Handle Incompetence? from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

More #NoEstimates

Quite an interesting conversation and reaction to the #NoEstimates post. Good questions too, and frankly, to some I don't have answers. I'll try, anyway. Let's start with classic project management. It tells us that in order to plan, we need to estimate cost and duration. Estimation techniques have been around for a while.

@gil_zilberfeld all estimates probabilistic based on underlying statistics of work processes. what's alternative 4 knowing cost/sched/tech? — Glen B. Alleman (@galleman) June 19, 2014

There's a problem with the assumption that we can "know" stuff. We can't know stuff about the future. Guessing, or estimating, as we call it, is the current alternative. To improve, we can at most try to forecast. And we want a forecast we can trust enough to make further plans on. If confidence is important then:

@gil_zilberfeld So, get good enough at estimates to feel that confidence, make decisions accordingly. Shouldn't be a controversy. @galleman — Peter Kretzman (@PeterKretzman) June 19, 2014

Sounds easy enough… Estimating is a skill. It takes knowledge of process and the ability to deduce from experience. As with other skills, you can improve your estimations. It works well if the work we're doing is similar to what we did before. However, if history is different from the future, we're in trouble. In my experience, it usually is. Variations galore. In the projects I was involved in, there were plenty of unknowns: technology, algorithms, knowledge level, team capacity and availability, even mood. All of those can impact delivery dates, and therefore the "correctness" of estimations. With so many "unknown unknowns" out there, what's the chance of a plausible estimation? We can definitely estimate the "knowns" and try to improve on the "known unknowns", but it's impractical to improve on estimating the rest. Yet the question remains:

@adubism how do you determine cost to reach that schedule with needed capabilities? @PeterKretzman @gil_zilberfeld — Glen B. Alleman (@galleman) June 19, 2014

Ok, wise guy: if estimating can yield lousy results, what's the alternative? Agile methodologies take into account that reality is complex, and therefore build the feedback loop into short iterations. The product owner can decide to shut down the project or continue it every cycle. I think we should be moving in that direction at the organizational level. Instead of trying to predict everything, set short-term goals and checkpoints. Spend a small amount of money, see the result, then decide. Use the time you would have spent on estimating to do some work. Improving estimates is a great example of local optimization. After all, the customer would rather have a prototype in the hand than a plan on the tree. And if he wants estimates? Then we will give a rough estimate, which doesn't cost much. I know project managers won't like this answer. I know a younger me wouldn't either. But I refer you to the wise words of the Agile Manifesto, which apply to estimating, among other things: We are uncovering better ways of developing software by doing it and helping others do it. There are better ways. We'll find them.

Reference: More #NoEstimates from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Using Git - Part I: Basics

Introduction

Git is a popular distributed version control system created by Linus Torvalds, the creator of the Linux kernel. As you might have guessed, it was first used to version control the Linux kernel code. It is now widely used in both open source and closed source software development, thanks to GitHub's popularity and Git's own feature set. Many open source foundations, such as the Eclipse Foundation, have recently moved their projects' SVN and CVS repositories to Git; you can read more about that here and here. This is a basic tutorial targeted at fellow beginners to the Git version control system. It walks through a very basic Git workflow to get you started.

Installation

For Windows and Mac OS: go to the git-scm downloads site and download the installer for your operating system, then run it to install Git on your machine.

For Debian-based distributions (Ubuntu/Mint): execute the following commands in a terminal window to install Git from its PPA (personal package archive):

sudo add-apt-repository ppa:git-core/ppa
sudo apt-get update
sudo apt-get install git

Once Git is installed, you can verify the installation with git --version.

Note: for this tutorial I will use the terminal to demonstrate all Git commands. If you are on Windows, use the Git Bash prompt that ships with the Git installation; on Mac or Linux, use a normal terminal. The git executable has the following basic format:

git <command> <switch-options/command-options> <sub-command> <more-sub-command-options>

Doing the Git global user configuration

Command to be used: git config

The first step after installation is the user configuration: set the name and email address that Git records with every commit you make, which establishes the ownership of each commit.
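Putting the installation check and the one-time identity setup together, a minimal sketch looks like this. The name and email are placeholder values (substitute your own), and the throwaway HOME is only there so the demo does not touch your real ~/.gitconfig:

```shell
# Sandbox: point HOME at a temp dir so the demo does not modify
# your real ~/.gitconfig. In real use, skip this line.
export HOME="$(mktemp -d)"

# Confirm git is on the PATH (prints something like "git version 2.x.y")
git --version

# One-time identity setup -- placeholder values, use your own
git config --global user.name "Jane Doe"
git config --global user.email "jane.doe@example.com"

# Read the values back to confirm they were stored
git config --global user.name
git config --global user.email
```

In real use you would run only the three `git config --global` style lines once per machine; every repository on that machine then picks up the same identity.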
Execute the following commands:

git config --global user.name <your-name>
git config --global user.email <your-email>

Creating a Git repository

Command to be used: git init

To track files with Git, you first need to create a Git repository. Let's create a hello-git directory and initialize a Git repository in it:

mkdir hello-git
cd hello-git
git init

After initializing, you will see the message "Initialized empty Git repository" as shown in the screenshot above, and if you check the hello-git directory you will notice that a .git directory has been created. This is the directory where Git operates.

Creating files and adding them to Git

Commands to be used: git add, git commit, git status

Now, let's add a few files to our newly created project. I have added hello.js and README.md to the project directory. Let's see what Git tells us about the state of the directory: git status lets you see the status of the files in a Git project directory.

Git file states

As discussed from the start, Git tracks changes within the initialized directory. When working with Git, a file moves through a few states:

Untracked files: files newly added to the directory that Git is not yet tracking for version control.
Tracked files: files that have already been committed to Git but are not staged (added to the index).
Staged/index files: files that will be included in the next commit.
Modified/unstaged files: files that have been modified but not yet staged.

As you can see, git status reports our two files as untracked, meaning we haven't told Git to track them. To do that, we use the git add command. git add adds untracked or unstaged files to Git's staging area (the index).

Adding specific files: git add <file-1> <file-2>
Adding all files in the directory and its sub-directories: git add .
or git add --all.

For our case, I am going to add all files. As you can see, the files are added to the staging area. Now we need to commit the files, that is, record them in the Git repository; for this we use git commit.

git commit: in simple terms, this command does two things: it adds the staged files to the Git repository and records a commit log entry with the message you provide.

git commit -m "<your message>"

For our case:

git commit -m "Initial commit for project, added hello.js and README.md"

After doing this, your changes are committed to the repository. If you now run git status, you will see there are no changes to commit, which means Git has recorded the state of the working directory.

git log: this lets you see your earlier commits; for each one you can see the commit hash, the author's name, the date the commit was made, and the commit message. To see more compact commit messages, use the --oneline switch on the log command.

Doing more changes to files

Commands to be used: git diff, git add and git commit

Let's make more changes to the files in the repository and learn a few more Git commands. Add a few lines to hello.js and README.md: open the files in your favorite editor and edit them; I have added a few lines, as shown in the screenshots. If you now run git status, you can see the files are tracked but not staged for commit, unlike the previously untracked files. While changing files, you might want to see what has changed since the last commit.

git diff: shows the differences between your working directory and the last commit, for each file in the directory.

git diff <options> <file-name>

If you run git diff without specifying a file name, you get the diff of all files; if you specify a file name, it shows the diff of only that particular file. The lines in light green that start with + indicate lines added since the last commit.
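The whole walkthrough so far, from git init through git diff, can be sketched end to end in a scratch directory. The file contents and the identity values are placeholders, and the repo-local git config lines are only there so the commit succeeds in a fresh environment:

```shell
# Work in a throwaway directory so nothing on disk is disturbed
cd "$(mktemp -d)"
mkdir hello-git && cd hello-git

git init                         # creates the hidden .git directory

# Repo-local identity (placeholders) so `git commit` works even
# when no global config is present
git config user.name "Jane Doe"
git config user.email "jane.doe@example.com"

# Create the two files from the article
echo 'console.log("hello git");' > hello.js
echo '# hello-git' > README.md

git status --short               # both listed as '??' -- untracked
git add .                        # stage everything
git status --short               # now listed as 'A ' -- staged

git commit -m "Initial commit for project, added hello.js and README.md"
git log --oneline                # one commit so far

# Modify a tracked file and inspect the unstaged change
echo 'console.log("more");' >> hello.js
git status --short               # ' M hello.js' -- modified, unstaged
git diff hello.js                # diff shows the added console.log line
```

The --short flag on git status is just a compact alternative to the full output shown in the article's screenshots; the file states it reports are the same.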
Let's add the files to the staging area with git add:

git add .

Now, if you diff the repository you will not see anything, because by default git diff shows the differences of unstaged files only. To see the diff of files that have been staged, run:

git diff --cached

Now, let's commit the files:

git commit -m "added the more contents"

If you run git log, you will see our two commits. To get more information, use --stat, which shows which files were changed in each commit.

I hope this tutorial helped you understand at least the basics of Git. Git has a lot of powerful features that you may need later as you advance; be sure to check this blog again for more tutorials on the topic.

Reference: Using Git - Part I: Basics from our JCG partner Abhijeet Sutar at the ajduke's blog blog....
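The staged-versus-unstaged distinction is easy to demonstrate in a scratch repository (the identity values and file contents below are placeholders):

```shell
# Throwaway repo with one commit to diff against
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.name "Jane Doe"            # placeholder identity,
git config user.email "jane@example.com"   # needed for commits

echo "first line" > README.md
git add . && git commit -q -m "Initial commit"

echo "second line" >> README.md
git add README.md                 # stage the change

git diff                          # prints nothing: the change is staged
git diff --cached                 # shows the '+second line' hunk

git commit -q -m "added the more contents"
git log --oneline --stat          # two commits, each with file change stats
```

Note that git diff compares the working directory against the index, which is why a fully staged change is invisible to it, while git diff --cached compares the index against the last commit.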
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.