Friday, June 19, 2015

Enterprise Web Development

[Excerpts from my review published in ACM's Computing Reviews.]

Enterprise web development : building HTML5 applications from desktop to mobile

Fain Y., Rasputnis V., Tartakovsky A., Gamov V., O’Reilly Media, Inc., Sebastopol, CA, 2014. 642 pp. Type: Book (978-1-449356-81-1)

Date Reviewed: Jun 18 2015



At more than 600 pages, this book is ambitious in scope and covers a whole lot of ground.

The book defines what it means for an app to be an enterprise app--that it is integrated with one or more company-specific business processes and helps an organization run its business online--and then goes about developing one.

The authors have earned their stripes, having built enterprise web apps (and written books about them) using Java, then Adobe Flex, and now Hypertext Markup Language 5 (HTML5).

HTML5, of course, isn’t just HTML. It’s a replacement term for what used to be called Web 2.0, and is short for the entire tech stack that comprises avant-garde HTML5 development today: HTML, cascading style sheets (CSS), JavaScript and “dozens” of JavaScript frameworks, libraries, and tools.

For those new to JavaScript, a 100-page “Advanced Introduction to JavaScript” is available (online only); it is well worth reading. The book also aims to recommend a broad tech stack for enterprise web app development. As the book’s introduction reveals, it is aimed at server-side developers who wish to learn HTML5 front-end development. Understandably, this is a rather large audience that needs to re-skin a well-designed back end to provide modern-day user experiences.

The book takes a “continuous refactoring” (my words) approach to teaching web app development, wherein the application is continuously rewritten to embrace a new framework and a different or better way of accomplishing the application’s goals (all of the code examples from the book are available online). In that sense, the book serves as a fairly complete tutorial on web app development. The tutorials follow a show-and-tell style, whereby the reader is first shown a new application behavior and then told how to make it happen via code.

Broadly, the example enterprise web app is developed in the following stages:

(1) Pure HTML/CSS/JavaScript;

(2) jQuery rewrite;

(3) ExtJS rewrite (ExtJS introduces concepts similar to classes and classical inheritance);

(4) Integration with HTML5’s geolocation application program interface (API) and Google Maps;

(5) Embedding videos and charts (using Canvas and SVG);

(6) Modularization and RequireJS;

(7) WebSocket for server-side push;

(8) Responsive design and mobile enablement using jQuery Mobile, Sencha Touch, and PhoneGap;

(9) Productivity tools: npm, Grunt, Bower, Jasmine, Yeoman, CDB;

(10) Security considerations;

(11) Hybrid development, using the HTML5 stack, along with native platform-specific APIs; and

(12) Appendices on key HTML5 APIs and leading integrated development environments (IDEs).

The book’s declared goal isn’t to convince readers to adopt HTML5 development, but to know when HTML5 is the right approach (as opposed to native development using a platform-specific development stack, for example, Android or iOS).

Developers coming to JavaScript from a Java background might find this book particularly helpful, for it routinely draws analogies between well-known Java constructs and the corresponding analogues in JavaScript.

This is the first book I’ve read that’s not dedicated to WebSocket and yet provides a serious chapter-length discussion on why WebSocket can serve as a replacement for hypertext transfer protocol (HTTP) in performance-conscious web applications.

The grammar and quality of the prose are not quite up to the standards one has come to expect from an O’Reilly publication. Technology books, particularly those dealing with the always-moving target of front-end development, need to be published quickly so that they remain relevant for at least six to 12 months, which makes it a significant challenge to provide editorial input in a short time frame. That challenge is multiplied when the book to be edited is as big as this one. The authors have done a good job of putting together a fairly elaborate treatise on enterprise web development. However, the book clearly didn’t receive the editorial rigor it required, which can certainly cause a book to lose its audience, especially when that audience is already being asked to struggle with a complicated language and grammar of its own: the HTML5 development stack.

That said, considering the scarcity of good consolidated information on bleeding-edge web app development, I would highly encourage readers to look past the editorial gaps and dive into this treasure trove of guidance. It will vastly improve your understanding of developing enterprise-class web apps using current best practices.

If a beginner asked me for one book to get started with web app development, I would recommend Purewal’s book, which I reviewed earlier this year [1]. After completing Purewal’s book, this book would be a good next step to expand on your client-side development skills (note that it doesn’t cover server-side topics such as Node or MongoDB).

More reviews about this item: Amazon, Goodreads

Reviewer: Puneet Singh Lamba Review #: CR143538

1) Purewal, S. Learning web app development. O'Reilly Media, Sebastopol, CA, 2014. See CR Rev. No. 143127 (1505-0332).

Saturday, April 4, 2015

Single Page Web Applications

[Excerpts from my book review published in ACM ComputingReviews.com on March 31, 2015.]

Single page web applications : JavaScript end-to-end

Mikowski M., Powell J., Manning Publications Co., Greenwich, CT, 2014. 432 pp.  Type: Book (978-1-617290-75-6)

Date Reviewed: Mar 31 2015

In the late 1990s, browsers introduced a capability now known as Ajax, which became prevalent in web applications as a tool for fetching content from a web application server asynchronously rather than having to fetch an entire new Hypertext Markup Language (HTML) page whenever a page update was required. The payload format for data exchange between client and server was predominantly Extensible Markup Language (XML) (AJAX was originally an acronym for Asynchronous JavaScript and XML). Typical Ajax use included fetching portions of a page or data updates from a server, the most famous example being Google Maps, which updated its web page asynchronously in response to the user zooming in or out or dragging to a different portion of the map. Even with the advent of Ajax, for most web applications, significant state changes were represented by distinct pages that had to be loaded from the server. These state changes were typically orchestrated and managed using server-side code and the model-view-controller (MVC) pattern. Herein, the server-side controller determines which model (data) and view (page) to return to the client in response to specific user actions.

In the early 2000s, JavaScript Object Notation (JSON) supplanted XML as a lightweight option for structuring the data to be exchanged between clients and servers. Around the same time, people began developing web applications using what came to be called the single page application (SPA) paradigm. In this approach, the only page that ever loads is the first one when a user loads or reloads the application (via a browser refresh). All other state changes, minor and major, are handled asynchronously and generally managed on the client side using a pattern known by various names but most simply described as “client-side MVC.” Herein, client-side code is responsible for evaluating user actions, fetching the appropriate data from a server (using Ajax/JSON), and mapping the received data to an appropriate view (a responsibility commonly referred to as data binding). In a sense, an SPA behaves more like a desktop application than a traditional web application.
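As a toy illustration (my own, not the book's code), the client-side MVC flow boils down to this: a model holds the data, a view function maps the model to markup, and a controller interprets user actions and updates the model. In a real SPA the data would arrive via Ajax/JSON rather than being pushed in directly.

```javascript
// Minimal client-side MVC sketch (illustrative only, not the book's code).
// The model holds application state.
const model = { todos: [] };

// The view maps model state to markup (a plain string here, for simplicity).
function render(m) {
  return m.todos.map(t => `<li>${t}</li>`).join('');
}

// The controller interprets a user action, updates the model, and re-renders;
// in a real SPA the payload would come from the server via Ajax/JSON.
function controller(action, payload) {
  if (action === 'add') model.todos.push(payload);
  return render(model);
}

controller('add', 'buy milk');
const html = controller('add', 'write review');
// html is now '<li>buy milk</li><li>write review</li>'
```

The point is that all of this bookkeeping lives in the browser; the server only ever supplies data, never pages.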

MVC can be implemented by hand (roll your own), but as your application grows (with perhaps tens of states to manage) it quickly becomes unwieldy to continue to do so. Therefore, just as there are various frameworks/libraries for facilitating server-side MVC, many JavaScript frameworks/libraries have emerged for enabling client-side MVC, including Knockout, Backbone, Ember, and Angular. However, what sets this book apart is that the authors argue against using a client-side MVC framework. Having used several client-side MVC frameworks over the past few years, I can appreciate the stance taken by the authors.

Just a couple of years ago, Knockout and Backbone were considered de facto standards for client-side MVC. Then, almost out of nowhere, came Angular, supported by Google’s seemingly infinite programming and marketing resources. But Angular is new and is still undergoing radical changes from release to release. As a result, documentation is often lagging and there are multiple ways to do the same thing: legacy approaches often co-exist with newer approaches, as if to see what sticks. Furthermore, each of these “automatic two-way data binding” frameworks requires the programmer to accept some rigidity in exchange for convenience.

In case you’re wondering, it’s clear that the authors aren’t “roll your own” advocates. It’s just that they don’t want to invest in an immature or rigid client-side MVC framework. Although Angular is gaining traction, as of now there are no client-side MVC frameworks that can reasonably be termed mature. As evidence of the authors’ level-headedness, consider their testing approach, detailed in Appendix B. Here, the authors write, “Node.js has many test frameworks that have years of use and refinement. Let’s be wise and use one instead of hacking our own.” The key here is the framework’s maturity. Since there are several reasonably mature JavaScript testing frameworks, the authors do not shy away from using them. After briefly describing jasmine-jquery, mocha, nodeunit, patr, vows, and zombie, the authors decide to go with nodeunit and describe how to set up a JavaScript testing framework for their SPA. Similarly, the authors use jQuery, which is another highly mature JavaScript framework for document object model (DOM) manipulation. Additionally, the authors use a pure JavaScript stack, with Node.js and MongoDB on the server side.

One upside of not using a client-side MVC framework is that you’re left with little choice but to become an expert at JavaScript. This book certainly helps with that as it goes step-by-step, building an SPA end to end. One of the book’s highlights is its overview of JavaScript in chapter 2. The authors do a tremendous job of explaining concepts like variable scoping and variable and function hoisting and closure, and provide new insights into the inner workings of JavaScript, such as the execution context object. As a complement to chapter 2, the authors present a useful set of coding standards in Appendix A.

In closing, here’s a sampling of some of the interesting approaches used in this book:


  • The HTML file is merely a shell with no content at all. All of the HTML is converted to JavaScript strings (using regular expressions) and embedded within the JavaScript code.
  • The use of callbacks is reduced via the use of jQuery global custom events.
  • The book recommends feeling a lot less guilty about using pixels since browsers have started implementing pixels as relative units and pixels are often more reliable than traditional relative units like em or percent.
  • The book recommends testing views and controllers manually (although user interface (UI) testing automation frameworks have matured and I’ve had considerable success with the combination of Protractor, Jasmine, and AngularJS).
  • The authors discourage the use of embedded template libraries like Underscore.js, but encourage the use of toolkit-style template systems such as those provided by Handlebars, Mustache, and Dust.
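The "jQuery global custom events" idea in the second bullet is essentially publish/subscribe. Stripped of jQuery (whose `.trigger()`/`.on()` custom events provide the same decoupling), the mechanism looks like this framework-free sketch of my own:

```javascript
// Minimal publish/subscribe sketch (jQuery custom events provide the same
// decoupling; this dependency-free version just shows the mechanism).
const handlers = {};

function subscribe(event, fn) {
  (handlers[event] = handlers[event] || []).push(fn);
}

function publish(event, data) {
  (handlers[event] || []).forEach(fn => fn(data));
}

// Modules register interest in model changes instead of passing callbacks
// down through every call chain -- hence "fewer callbacks."
let lastSeen = null;
subscribe('model-updated', data => { lastSeen = data.count; });
publish('model-updated', { count: 3 });
// lastSeen is now 3
```

The win is that the publisher never needs a reference to its subscribers, which is exactly what deeply nested callbacks would otherwise require.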

Wednesday, February 11, 2015

Towards a Pixel to Metal Perspective

[This is an excerpt of my review published in ACM's ComputingReviews.com on January 29, 2015.]

Becoming proficient at web application development involves a very steep learning curve and is often a never-ending, career-long endeavor. And yet a newbie has to start somewhere. When getting started, it’s easy to become overwhelmed by the vast array of concepts, technologies, and tools one has to master.

Often, even proficient web application developers have blinders on when it comes to having a broad perspective on web application development. In general, most developers are adequately familiar with only one or two of the following aspects of web application development: client-side programming, middleware programming, server-side programming, text editors/integrated development environments (IDEs), source/version control systems, data stores, operating systems, network programming, and hardware configurations. Few developers have a broad enough perspective--or what I call a 360-degree “pixel to metal (P2M)” worldview--of web application development.

That’s why Semmy Purewal’s new book is a highly welcome and valuable contribution. Along similar lines, at the New England Java Users Group (NEJUG, Boston), in August 2013, I presented a technology stack and sample application with source code for end-to-end web application development. After my presentation, I wished to write a book similar to this one. Alas, I had neither the time nor the energy to see it through. However, the stack I presented was a polyglot stack, including JavaScript for client-side development, Java for server-side development, and a relational database. Purewal’s book somewhat simplifies the learning curve, to the extent that’s possible, by choosing an entirely JavaScript-based stack.

At less than 300 pages, Purewal’s book may seem short, but it manages to introduce the reader to a surprisingly wide array of concepts, technologies, and tools. And in doing so, the author does a great job of spending just the right amount of time on each topic, gradually building on previously explained topics to construct increasingly sophisticated web application snippets in each successive chapter.

Here’s a listing of the concepts, technologies, and tools this book introduces: Sublime, Git, GitHub, Chrome, Linux, Hypertext Markup Language (HTML), cascading style sheets (CSS), JavaScript, jQuery, JSON, Ajax, VirtualBox, Vagrant, Node.js, Express, PuTTY, hypertext transfer protocol (HTTP), Redis, MongoDB, Mongoose, and Cloud Foundry. As a result, it is not only ideal for novices, but also helpful for experienced developers looking to plug gaps in their P2M perspective of web application development.

This is a very thoughtfully put together book. There are very few typographical errors or inconsistencies, which is difficult to achieve in a book on programming. Exercises at the end of each chapter stimulate the reader to take on additional challenges, and some of the exercise results are leveraged in future chapters (for example, finding the number of occurrences of a string in an array of strings). There are useful pointers to further reading at the end of each chapter. Most of the book’s code is available on GitHub, organized by chapter.

The author uses two major themes for most of the exercises and code examples. The first is a to-do list that the user can build, tag, and categorize by tags. The second is a poker hand evaluator that is used to demonstrate how best to leverage some of JavaScript’s built-in functions. Other smaller examples involve consuming JSON feeds from Yahoo, Flickr, and Twitter. I got some of the harder exercises working on my machine, and found the code examples to be accurate and helpful. The author follows a certain discipline for each project, including checking code into GitHub at regular intervals, which is good for developers to emulate.

Along the way, the author manages to impart useful information on programming best practices, especially with respect to JavaScript, which is central to the programming environment that this book is sculpted around.

A few words of caution. If you’re using the Internet Explorer browser (for example, because your employer only allows IE), you won’t get the full experience. IE’s JavaScript console is much inferior to that of Chrome. On IE 9, for example, I was not able to drill down into JavaScript objects in the way that Chrome permits. Also, many of the longstanding Node.js modules, for example, Express, have been broken up into their component modules, which have to be installed and linked separately. So, the code in the book isn’t going to work as is, but it will point you in the right direction.

Unfortunately, past chapter 6 (on Node.js) things start to get a bit light on details. On page 229, we connect to the Amazerrific data store in Mongo, but we never created the data store! Also, on GitHub, the code only goes up to chapter 6. At this crucial juncture, then, the reader is left to his or her own devices. We have a pretty decent to-dos app, but it has no ability to persist the to-dos, so the next time you launch the app, all your to-dos are gone! (I took it upon myself to address the gaps--Google my GitHub to see a working demo with source code.)

Whereas the author champions RESTful application programming interfaces (APIs), working through a project as I did will surely cause you to realize the shortcomings of an approach that attempts to use HTTP methods to imply which CRUD operation is taking place. A fundamental gap is that the four HTTP methods don’t have uniform behavior. For example, an HTTP POST (typically used to create or update resources) embeds parameters in the request body, whereas an HTTP GET (used for gets or selects) places parameters on the uniform resource locator (URL). Apart from the security and aesthetic concerns with placing parameters openly on the URL, it is unpleasant to not have uniform programming paradigms for these four HTTP methods either on the client side (jQuery doesn’t have a $.put() method) or on the server side (Express).
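The non-uniformity is easy to see: the very same name/value pairs ride on the URL for a GET but travel in the request body for a POST or PUT. A quick illustration using the standard URLSearchParams API (the endpoint URL is made up):

```javascript
// The same parameters, serialized two different ways depending on the method.
const params = new URLSearchParams({ id: '42', done: 'true' });

// GET: parameters ride openly on the URL.
const getUrl = 'https://example.com/todos?' + params.toString();
// getUrl is 'https://example.com/todos?id=42&done=true'

// POST/PUT: the same pairs travel in the request body instead, so both
// client code and server-side parsing differ by method.
const postBody = params.toString();
// postBody is 'id=42&done=true'
```

Because the transport differs, client helpers and server middleware end up method-specific, which is the asymmetry the paragraph above complains about.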

One area the author seems to omit completely is security. I don’t expect a book to be able to cover all topics, but the author does a great job of alluding to important concepts even when he isn’t planning to cover them in depth (for example, JavaScript Promise APIs). I believe readers ought to be given as complete a list as possible of the to-dos (pun intended!) required to make even a simple application production-ready. Furthermore, authentication and authorization are definitely must-haves.

Sunday, January 18, 2015

AngularJS And HTML5 ContentEditable

Getting Angular to Play Well With HTML5 ContentEditable


HTML5 introduced a cool new attribute that can be attached to any element to make it editable (contenteditable="true"). However, that's just the theory. To make it work well, and to capture the edits in your application's model, you have to use a data binding library (e.g. AngularJS, BackboneJS) or build your own using JavaScript.

I recently created a todos application using pure JavaScript (with jQuery) with a Node/Mongo backend, i.e. without using any data binding framework. This is a good exercise to ensure that you understand JavaScript well enough before you start leaning on hefty frameworks that obscure a lot of what goes on behind the scenes. If you understand JavaScript, you'll be better able to troubleshoot and fix issues that come up when you're using frameworks.
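Working without a framework means writing the model bookkeeping yourself. The core of such a todos app is quite small; here's a sketch of my own (illustrative, not the deployed app's actual code):

```javascript
// Bare-bones todo model: the bookkeeping a data-binding framework would
// otherwise hide from you (illustrative sketch).
const todos = [];

function addTodo(description) {
  const todo = { id: todos.length + 1, description, done: false };
  todos.push(todo);
  return todo;
}

function toggleTodo(id) {
  const todo = todos.find(t => t.id === id);
  if (todo) todo.done = !todo.done;
  return todo;
}

addTodo('learn JavaScript properly');
addTodo('then pick a framework');
toggleTodo(1);
// todos[0].done is now true; todos[1].done is still false
```

In the real app, each of these mutations would also fire a jQuery DOM update and an Ajax call to the Node/Mongo backend; writing that glue by hand is exactly the exercise I'm recommending.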

My pure JavaScript app is deployed on Heroku here. The code is on GitHub here.

Once I was happy enough with the pure JavaScript application, my next goal was to redo it using AngularJS (I had previously done some prototyping with Mustache, Handlebars, Backbone, etc.). I also decided to use Bootstrap in order to explore that in the process.

AngularJS Context


If you've played around a little with AngularJS, you know that one of its coolest features is the ng-repeat built-in directive. This directive can be applied to almost any HTML element to cause the element to be repeated for each member of your model. So, it's great for displaying something like a todo list. For example, a single ng-repeat="todo in todos" nested in a paragraph element would cause several paragraph elements to be added to the DOM, one for each todo in your model.

AngularJS uses the model to provide two-way data binding between your database and your displayed data. The idea is that AngularJS keeps track of your edits via the model and makes it easy to save those edits to a database at appropriate junctures.
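Two-way binding sounds magical, but the underlying idea is interceptable property access. Here's a DOM-free conceptual sketch of the model-to-view direction; note that AngularJS 1.x actually uses dirty checking on a scope rather than setters, so treat this purely as an illustration of the idea.

```javascript
// Conceptual sketch of model-to-view binding via an intercepted setter.
// (AngularJS 1.x really uses dirty checking; this just shows how writes
// to a model property can drive view updates automatically.)
let viewText = '';                 // stand-in for a DOM element's text
const todo = {};

Object.defineProperty(todo, 'description', {
  get() { return this._description; },
  set(value) {
    this._description = value;     // update the model...
    viewText = value;              // ...and the bound "view" in one step
  },
});

todo.description = 'ship the review';
// viewText is now 'ship the review'
```

The view-to-model direction works the same way in reverse: a DOM event handler writes the element's content back into the model property.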

Also, AngularJS provides a templating engine that allows you to easily control data formatting. So, I could do something like {{ todo.created_date | date }} in order to format the date using Angular's default date format. I could also specify a custom format.

First, Here Are the Requirements for What I Wanted to Achieve


I wanted to be able to edit my todos inline or in place. So I would be able to click to edit my todo description and have the enter key result in saving the edited todo. Sweet and simple. No forms or popups. Definitely no new pages! (Pressing escape or clicking outside the element being edited would cause it to lose focus, but not lose the edits until the user either returns to complete the edits and presses enter to save or performs some other action that causes the changes to be discarded.)

There's a bunch of directives people have written to get this to work, but none of them performed satisfactorily, despite plenty of tinkering on my part.

Along the way, I discovered HTML5's super cool contenteditable feature. Upon this discovery I was hell bent on using it rather than any other fancy approach. As it happens, there's no good documentation on how to get AngularJS (I'm using version 1.4 beta) to work with contenteditable.

I'll come back to this blog post shortly and provide more detail, but after struggling with this for a while, here's a summary of what I found out.

The Issue and the Resolution


The foundation for my effort was provided by Dmitri Akatov's library.

The odd behavior I saw was that if I used a template on a contenteditable element, as soon as I typed a single key, AngularJS would move the cursor to the beginning of the text. Very frustrating, of course. After trying a number of directives, libraries, etc., the solution that worked for me is rather simple.

If I am going to make an element editable, then I don't use a template. Instead, I provide a model for it, e.g. ng-model="todo.description". If I'm not making an element editable, I provide a template for it, e.g. {{ todo.created_date | date }}.
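In markup, the rule looks like this (ng-model and the date filter are real AngularJS constructs; the element structure is my own illustration, and binding ng-model to a contenteditable element assumes a contenteditable-aware directive such as the Akatov library provides):

```html
<!-- Editable: bind with ng-model; no template expression inside. -->
<p contenteditable="true" ng-model="todo.description"></p>

<!-- Read-only: a template expression is fine here. -->
<span>{{ todo.created_date | date }}</span>
```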

Careful observers will note that both the model and the template provide granular information about how the data maps between the model and the view. So, you really should only need to provide one or the other. I learned this the hard way, because there are no good documents or examples on the web showing how to do exactly what I was attempting.

Caveats


Although contenteditable is cool to explore, be aware that support for it is still a bit shaky and will vary across browsers. There's a good discussion on some of the pros and cons here.

Next Steps


Over the next few days, I will test this solution out more thoroughly to look for any gaps. Then I will deploy the application to Heroku and add the code to GitHub. I'll return to this blog post and add links so that you can follow along more easily.

You can help to spread the good word by liking and sharing!

Thanks and cheers.


Update


Folks, as promised, here are the links.

App

Code

Saturday, January 3, 2015

Adriano, the Arduino Robot | Step 3 (L293D Controlled)


[Everything in this robot is built from scratch, no kits used (except for the chassis).]

Click here to view the previous steps in this series.

Step 1 (Basic Operation)

Step 2 (Remote Controlled)

After adding the remote control feature, I added the L293D H bridge chip to allow the Arduino to reverse motor direction. This also allows for the removal of the 2N2222 transistors that were previously managing the power supply to the motors based on signals received from the Arduino.


As you can see from the video and the schematic, using the L293D chip significantly reduces the complexity and bulk of the circuit. On the other hand, you now need to respond to one more remote key press (down/reverse) and issue two more pin signals (reverse action, one for each motor).

Friday, January 2, 2015

Adriano, the Arduino 2WD Robot | Step 2 (Remote Controlled)


[Everything in this robot is built from scratch, no kits used (except for the chassis).]

I finally found the time to add remote control capability to the robot I had created around this time last year. Check out my earlier blog entry for details on how I created the core robot.

After building the core robot (see link above) and getting it working, I decided that the next best step would be to add remote control capability to it so that it could be moved around as desired. I decided to try the Vishay TSOP4838 infrared remote receiver/sensor. I first created a proof of concept for the remote and the receiver.


Even though I have several remotes at home, I decided to buy a cheap and simple remote just for the robot so that I could do all of my testing without interrupting those at home who wanted to watch TV. As it turns out, any infrared remote can be made to work with the robot. Just get your program to spit out (to the serial console) the raw codes received from the remote for each key. Then you can program the actions you want the robot to take in response to each key. I ended up programming the arrow keys (forward, backward, left, right) and the OK key (stop). The Comcast remote turned out to be the most usable of the lot due to the size and placement of its arrow keys.

Everything works reasonably well. Occasionally the robot seems to ignore a key press even though the key press is received. I added an LED to indicate when a key press is received, so I can tell whether a key press wasn't received or whether it was received but ignored. In the cases where the key press is ignored, I'm guessing that either the Arduino is too busy to process it or the code got mangled by interference, so the code received was not one that the program recognized. This is a good topic for further research/investigation.

Anyhow, here's how I integrated the remote capability into the core robot I had built earlier.


Click here to view the other steps in this series.

Step 1 (Basic Operation)

Step 3 (L293D Controlled)

Wednesday, December 10, 2014

DIY: Atari Punk Console, Sound Synthesizer, Noise Maker


Here's a video I've posted on YouTube with detailed steps on how to build an Atari Punk Console (APC). I've also covered some of the key concepts required to understand the APC circuit, including the workings of the 555 integrated circuit and of the 555's astable and monostable oscillator configurations, which collaborate to produce the wonderful variety of tones we hear from the APC. I've also included several demos of the finished product, both solo and plugged into my Line 6 Spider IV 75W amplifier, which adds some amazing effects on top of the APC!
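For reference, the standard textbook timing relationships for the two 555 configurations covered in the video are (with resistance in ohms and capacitance in farads):

```latex
% Astable (free-running) oscillation frequency:
f = \frac{1.44}{(R_1 + 2R_2)\,C}

% Monostable output pulse width:
t = 1.1\,R\,C
```

In the APC, the astable stage sets the base pitch and the monostable stage stretches each pulse, which is why turning either potentiometer changes the character of the tone.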

To help you get the most out of this tutorial video, I've posted the slides and the video narration transcript at the links below.

APC Tutorial Slides

APC Video Narration Transcript

Please don't forget to like the video and subscribe to my YouTube channel!

Friday, December 5, 2014

A Short History of Iterative Software Development

[This post has also been published on DZone.com, where it has gathered well over 5,000 views.]

It is useful to know about the various iterative software development project methodologies and how they originated, not so that you can follow them religiously but so that you can add them to your bag of tricks.

I recommend learning about the following.

  • Kanban (1953), whose origin lies within the Toyota manufacturing process innovations
  • Scrum (1986), and its storied origin in the seminal 1986 Harvard Business Review article The New New Product Development Game, which examined six major Japanese and American companies and concluded that successful teams had replaced sequential approaches with iterative ones
  • Rational Unified Process (1996), proposed by IBM in response to some of the growing market forces looking for alternatives to the Waterfall family of methodologies
  • Lean (1998), and its origin in Toyota's manufacturing system
  • Agile (2001), especially the contents and motivation behind the Manifesto for Agile Software Development, written by many of the software development thought leaders of our time

A good starting point might be to read the content at the two links I've provided in this post.

I believe it is more important to understand the driving forces behind these movements than it is to understand the detailed practices that are in vogue today.

On most projects, I end up adopting or recommending not a pure version of any one of these methodologies, but a hybrid methodology employing those practices and techniques that seem best suited for the team and project at hand.

A final word. Whereas the methodologies listed above mostly concern themselves only with the softer aspects of project management, it is important to understand what I call technical Agile practices, i.e. the many software development tools and practices (e.g. unit testing, continuous integration) that must be mastered in order to extend the theories expounded in these methodologies into practical reality.

Monday, July 28, 2014

DIY: Rebuilding My Front Porch

My new DIY project. Rebuild the front porch. Lessons learned (actually confirmed) so far. Nothing is as easy as it seems. Always allocate time for preparation, evaluation and planning (don't jump straight to project execution). Pick a small representative (and, ideally, non-critical) section of the project to execute first. Apply the lessons learned to the remainder of the project (if something goes wrong, the impact is somewhat limited).

Watch this space as I report on the progress of this project.

Sunday, March 16, 2014

Learning to Fly: School Bus Adventures in Delhi


I spent most of my younger years in New Delhi, India. I lived with my family on the campus at IIT Delhi and attended Modern School, Vasant Vihar. The campus where we lived was like a walled city within a city. School buses weren't allowed inside the campus. So, in order to catch their respective school buses, kids had to walk from their homes to the campus gates, of which there were three or four. This could easily be up to a half mile walk. On occasion, I missed the school bus and had to take the public transport bus. Delhi's public transport system is known as the Delhi Transport Corporation or DTC.

As anyone who has lived in Delhi would know, taking the DTC bus is an adventure and a test of fitness. During rush hour (i.e. when I missed my school bus in the mornings), DTC buses don't quite stop at designated stops. Rather, they merely slow down, because while they must let off passengers, they don't quite have room to let on more passengers. Therefore, those who need to get off must do so with great skill and determination, running for a few steps upon getting off to keep from falling due to inertia. And they must do so while taking care not to wipe out by stepping into a pothole or onto a banana peel.


And while the bus slows down to let off passengers, those who have sufficient confidence in their abilities (or a desperate urgency) try to run and hop onto the moving bus. And getting on the bus isn't as simple as it might sound. Recall that the bus is full. Passengers are bulging out of both the front exit and the back entrance, neither of which has doors (see picture above). Therefore, a passenger wishing to get onto the bus must first use one hand to secure a grip on something -- a handlebar, a window, or even another sturdy passenger. Second, the passenger must quickly secure a foothold, ideally on the exit or entrance steps but occasionally on another passenger's unsuspecting foot.

These rush hour rides were great exercise and an excellent opportunity to travel ticket-less because the destination would typically arrive before one could wrestle one's way to the bus conductor comfortably seated in the back of the bus.

Anyway, I digress. Back to the school bus. I remember that sometime in the middle grades we had a Sikh bus driver. I, too, was born into a Sikh family, so the driver might have been partial to me. In any case, he used to let me sit next to him on the engine cover, which used to get pretty hot (so one needed motivation to sit on it). And, as you will discover, I had a certain motivation. At some point I mustered up enough courage to ask him if he would let me shift the gear for him -- just once. To my surprise, the driver turned out to be quite a sport and played along. Slowly I graduated to shifting gears for him all the way to school (several miles). He would, of course, press his foot down on the clutch and I would try my best to time the actual gear shift with his clutch work. The engine cover used to be hot as hell, which was tolerable during winters (our school uniform required shorts until grade 10) but sucked during summers. The stick shift was this large apparatus (about as big as a baseball bat) that I could barely maneuver. Every once in a while I'd screw up and cause the transmission to let out embarrassingly loud grunts and roars. The driver would smile and bail me out. I can only hope that I didn't do any great harm to the transmission during these adventures. But I had fun, and the driver was an absolute darling for humoring me and providing me with my first ever driving lessons, albeit rather unconventional ones.

Saturday, December 7, 2013

My Experiments with the PICAXE 08M2+

PICAXE 08M2+
I generally prefer to do things the hard way. For example, you can either microwave a frozen dinner or cook from scratch. The microwave option is a good backup plan, but cooking from scratch has far too many advantages, as I outlined in a recent blog post on the do-it-yourself way of life.

In the electronics and robotics world, the analog of cooking from scratch is building circuits using a bare-bones micro-controller chip (e.g. the PICAXE) rather than a fancy board (e.g. the Arduino). Therefore, once I get a circuit working with the packaged Arduino approach (e.g. this robot I built recently), I usually try to replicate the circuit using more basic components like the PICAXE.

My decision to consider the PICAXE was influenced by Charles Platt's coverage of it in his awesome book Make: Electronics. But essentially, I am a minimalist and I want to see how much I can get done with a bare-bones chip rather than a bulky board-based micro-controller like the Arduino Uno. My current favorite board is the Arduino Nano (quite small, just $7, breadboard friendly, and just as easy to work with as the Uno). My next experiment will be the ATtiny85V-10PU chip because I believe it's still fairly easy to work with (it can be programmed via a regular Arduino and has many of the same capabilities -- many programs written for the Arduino can easily be modified to work on the ATtiny), costs just $2.50, and is of course breadboard friendly. (I don't mind soldering, but for most of my experimentation breadboards are the way to go.)

And the actual PICAXE I chose was the latest model of the smallest chip available, i.e. the 08M2 ($7), for which (as a gentle soul on one of the forums clarified to me) the label on the chip actually reads 08M2+.

If you're planning to use the PICAXE, step one is to figure out how to program it. PICAXE programs are written on and uploaded to the PICAXE chip using an integrated development environment (IDE) known as the PICAXE Programming Editor, which is available as a free download. Henceforth I will refer to this software as simply the IDE. As it turns out, this IDE is fairly handy and includes a simulator to show you how your program will play out on the PICAXE chip once you upload it. Note that PICAXE programs are written in BASIC (yeah, not the most popular language out there), as opposed to Arduino programs, which are written in C, and Raspberry Pi programs, which are typically written in Python.

Uploading your program to the PICAXE is the first major hurdle you're likely to encounter. As I have discovered, everything involving the PICAXE is harder to do than it is with the Arduino. That's because the PICAXE is just a bare bones chip, whereas the Arduino is a micro-controller chip on a board along with a fairly large supporting cast of components and connectors (e.g. USB).

The standard way to program the PICAXE is by using the AXE027 (USB to 3.5mm jack plug male audio cable) or the USB010 (USB to RS232 9-pin male cable). In either case, additional components are needed to complete the setup.

The AXE027 cable uses an FTDI chip to perform the USB to RS-232 serial conversion required to program most micro-controller chips. However, I was inspired by this video by Kevin Darrah and other postings on the web to find an alternative to buying this pricey $30 cable.

Kevin Darrah's approach uses an Arduino Duemilanove, which I don't have. So, I tried doing this with the Arduino Uno. The Uno didn't work because it uses the ATmega16U2 chip to perform the USB to RS-232 serial conversion and, according to my testing with the PICAXE IDE's "Test Port" feature, the Uno's serial converter does not seem to support the RS-232 "break" command. The "Test Port" feature works by toggling the signal on and off; a multimeter hooked up to measure the voltage between the PICAXE's serial input pin and 0V (or ground) should detect a change in voltage as the test LED is switched on and off.

This port test did succeed for me when I tried it with the Arduino Nano, probably because the Nano uses an FTDI chip to perform the USB to RS-232 serial conversion, which clearly does support the RS-232 "break" command.

Note: the exact chip I used was the PICAXE 08M2+, which is a significant upgrade on the 08 and 08M chips. I paid about $7 for it (compare with $35 or so for the Arduino, $45 or so for the Raspberry Pi, and $2.50 for the ATtiny85).

Note that the FTDI chip and most other USB-to-serial converters stop short of inverting the signals, as is needed for programming the PICAXE. The FTDI chip can be configured to invert the signals using FTDI's FT_PROG tool. However, I decided to heed the warning on the FTDI site and use the circuit I've described in this blog to invert the signals, rather than mess around with the FTDI chip on my Nano and risk rendering my Nano useless.

As I mentioned previously, the reason this circuit is required at all is that signals are inverted in the RS-232 serial protocol. Using 0V to represent logic 0 (a bit value of 0) and 5V to represent logic 1 (a bit value of 1) is the more common approach and is known as the transistor-transistor logic (TTL) level. The RS-232 serial logic levels are not only higher (+/- 12V, so you wouldn't want to connect them directly to your chip, since most chips operate on 5V or 3.3V) but also inverted. Therefore, a TTL logic 1 signal needs to be converted to a logic 0 signal, and a TTL logic 0 signal needs to be converted to a logic 1 signal. The circuit described here accomplishes that inversion of signals.

To program the PICAXE, the IDE needs to send data to the PICAXE and receive data back from it. Think of the RX (receive) pin on the Arduino as carrying data arriving from the PICAXE IDE via the USB-to-serial FTDI converter. When the IDE is sending data to the PICAXE, a logic 1 on RX is applied to the transistor's base, which completes the circuit from the transistor's collector to its emitter and applies a logic 0 to the PICAXE's serial-in pin (the logic 1 signal from the Arduino's RX pin gets inverted to a logic 0 signal before reaching the PICAXE's serial-in pin).

Conversely, when the PICAXE sends data back to the IDE, a logic 0 signal from the PICAXE's serial out pin causes the transistor to remain off and the 5V (logic 1) signal is applied to the Arduino's TX pin. And a logic 1 signal from the PICAXE's serial out pin, when applied to the transistor's base, turns the transistor on, thereby causing a logic 0 (0V) signal to be applied to the Arduino's TX pin. Hence, the inversion of signals is accomplished via the use of these two NPN 2N2222 transistors and four 10K resistors.
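The two inversion stages just described can be modeled in a few lines of code. The following Java sketch is illustrative only: the pin references in the comments come from the discussion above, and the levels assume 5V TTL logic.

```java
// Models the logic inversion performed by each 2N2222 stage described above.
// Illustrative sketch only; pin names in comments follow the blog discussion.
public class SerialInverter {
    // A logic 1 on the transistor's base pulls the collector low (the
    // collector-emitter path conducts to ground); a logic 0 on the base
    // leaves the collector pulled up to 5V via its 10K resistor.
    static int invert(int level) {
        return (level == 1) ? 0 : 1;
    }

    public static void main(String[] args) {
        // IDE -> PICAXE: a logic 1 from the Arduino's RX side arrives at the
        // PICAXE's serial-in pin as a logic 0.
        System.out.println(invert(1)); // prints 0
        // PICAXE -> IDE: a logic 0 from the PICAXE's serial-out pin leaves
        // the transistor off, so the Arduino's TX side sees a logic 1.
        System.out.println(invert(0)); // prints 1
        // Two cascaded stages would restore the original level, which is why
        // one inverter stage is needed per direction, not two.
        System.out.println(invert(invert(1))); // prints 1
    }
}
```

The takeaway is that each transistor stage is a simple NOT gate; the circuit uses one per direction of the serial link.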

So, finally, here's the circuit that I've been describing in the paragraphs above.

Schematic From Kevin Darrah's Video
And here's the breadboard version of the diagram I created, to help you implement this for your own use. I recommend purchasing a bunch of smaller breadboards for this kind of thing so that, for example, once you've set up a breadboard for programming the PICAXE 08M2+, you can set that breadboard aside and leave it dedicated to that purpose. In fact, what I do is simply move the jumpers off the PICAXE's serial-in and serial-out pins to an unused part of the board, wire the serial-in and serial-out pins to ground (assuming they will not be used), and then use the same breadboard for operating the PICAXE circuit. Bottom line: you probably want to dedicate a board to each micro-controller so that the circuit required to program it stays intact for the next time you wish to use it, and you can possibly use the same board to operate the micro-controller project you're developing.

Breadboard Circuit for Programming the PICAXE with an Arduino
Also, here's the actual simple program that utilizes this circuit.

LED Blink Program for the PICAXE
But my challenges did not end once I had figured out how to program the PICAXE. After I first wired up the LED, it wouldn't blink at all. After some soul searching, I realized that the pin assignments on the PICAXE 08M2 follow a port.pin address space in order to provide for a greater number of options. (Note that the 08M2+ chip is vastly improved and expanded relative to the 08 or 08M.) I have to say, PICAXE has done a poor job of explaining how port.pin addressing works. Also, the fan base seems to be declining, since there aren't enough YouTube videos or web postings on these important topics. Accordingly, my next stop is likely to be the ATtiny85, which is roughly comparable to the PICAXE 08XX series but might enjoy a somewhat better community due to the traction it gets via its ties to the Arduino ecosystem.

However, to be fair, I should acknowledge that Atmel also uses different physical/hardware and software/code pin numbering schemes for its chips, e.g. the ATtiny85. Regardless, I'd appreciate it if anyone could point me to a good tutorial on how to use the four ports (A, B, C, D) on the PICAXE 08M2+.

The ATtiny85 by Atmel
Anyhow, after some trial and error, I realized that, based on the port.pin addressing scheme, the "traditional" pin 3 is what mapped to B.4 (the port I had programmed to turn on and off every second, using the port.pin notation). I now had my LED blinking, but very erratically. For a brief moment I considered abandoning the PICAXE as a lost cause. But that was a very fleeting moment indeed. As any engineer knows, perseverance bears sweet fruit. So, I summoned all of the resources on the web, along with my past learning about the ill effects of "floating" pins (i.e. pins that are neither clearly low nor high). I added a rectifier diode (1N4001) and a ceramic capacitor (0.1 microfarads) between 5V and the PICAXE's voltage-in pin. Additionally, I tied the unused pins to ground via a 560 ohm resistor (the only one I had handy). And lo and behold, the LED started blinking with the clockwork regularity you expect from a swinging pendulum.

PICAXE 08M2+ Pin Assignments
This was a learning experience for me about how working with chips is different (and a bit more challenging) than working with boards, such as the Arduino Uno, that come with supporting circuitry to ensure that unused pins are not left "floating" etc.

Here's the circuit I put in place to get a reliably blinking LED. Note that this is the configuration you'll need in case you wish to have the serial-in/out pins do double duty, e.g. as digital I/O. However, my favorite configuration is one in which I leave the serial-in/out pins dedicated to programming and use the other four pins for I/O. That way, I can program on the fly. Also, note that here the Arduino is merely supplying power (5V); I have not shown the USB cable from my laptop providing power to the Nano.

Breadboard Circuit for PICAXE 08M2+ Blinking LED
Something I noticed about the PICAXE Programming Editor (the IDE) may be useful to point out. On Windows 8, the IDE seems to get orphaned (i.e. the process continues to run in the background even after you've closed the UI) and chews up a fair bit of CPU (50% in my case). Something to watch for.

In closing, I'd like to share one final observation. So far, I have been lucky to have fried or damaged very few components. The first was a set of three LEDs that came with my Raspberry Pi starter kit from Adafruit -- that was before I realized that I simply must add a 1K resistor in series with LEDs at all times. (I also realized that once you know exactly what you want, which requires a bit of learning and research, it's a lot cheaper to buy components via eBay.) The second was the Raspberry Pi's SD card slot, which broke upon the slightest application of vertical stress. (The BeagleBone's MicroSD card slot is a much better idea, since it's smaller and less susceptible to vertical stress.) And finally, the first PICAXE I ordered either arrived damaged or was damaged during my experiments with trying to program it. I suspect it might have been the former (a ding against eBay), because when it arrived the painted branding information (PICAXE 08M2+ and serial number) had already worn off and I had to strain to read the engravings (a shout-out to the iPhone magnifying app "Mag Light" for making this a lot easier). The second PICAXE I ordered (again via eBay), which is the one I successfully programmed, seems pretty resilient. I have even accidentally applied reverse voltage, and all it did was pop the software fuse on my laptop's USB port (which requires a laptop reboot to reset). To be fair, I have ordered many, many components from eBay at extremely reasonable prices and have had very few negative experiences (although the best deals do take forever to ship and will test your patience and stamina for electronics). Note that I always make sure that the seller has at least a few hundred reviews and a 98% or higher positive feedback rating.

Update: Good news! The first PICAXE, which I had set aside as "damaged", has turned out to be perfectly fine and healthy. I swapped it into my programming/test circuit (described above in this blog post) and I am able to program and operate it. So, it was human error after all. When I'm at my wits' end, I often fall back to swapping components, like chips, that cannot easily be tested with a multimeter. And it is certainly good to have plenty of spare breadboards and jumpers so that once, for example, you've established a working circuit for programming a micro-controller chip, you can set it aside and not have to tear it down and rebuild it each time you need to program the chip. Now on to programming my 2D string array of remote control codes for my Arduino robot project (see link above), which I eventually want to try controlling with the smallest bare-bones micro-controller that'll do the job, e.g. a PICAXE or an ATtiny. Also, I am told that faded etching on top of the engraved chip markings is quite normal and is not necessarily a sign of wear.

I know I had a bunch of trouble getting my PICAXE 08M2+ to do my bidding. So, I decided to document my experiences in order to help others and perhaps even myself, if several days from now I can't recall how I actually got it working. As I mentioned, working with the PICAXE (or any other bare bones micro-controller chip like the ATtiny85) is sure to be more challenging than working with a board like the Arduino boards. Also, relative to the PICAXE, there is a lot more help available for the Arduino in the form of videos, web posts, and books. But if you're up for the challenge, then the PICAXE can give you more control at a smaller cost. All said and done, I feel very positive about the PICAXE and its prospects on future projects of mine.

Thursday, November 28, 2013

Adriano, the Arduino Robot | Step 1 (Basic Operation)

Arduino Robot | Step 1 (Basic Operation)
I started this series of projects to teach my kids (Ria and Ronak) about programming, electronics, mechanics and robotics. Although this design has been informed by numerous books, videos and articles (see references below), the final design is my own (not kit-based, except for the chassis, and not copied verbatim from anywhere), and I will have to take responsibility for any flaws and imperfections. However, as you can see from this video I uploaded to YouTube, it does work reasonably well for its intended goal.

Component List

  • Arduino Uno. I plan to switch to the smaller Arduino Nano to see whether the Uno is overkill for this project. Subsequently, I also plan to see how far I can get with the ATtiny85V, PICAXE 08M2 and any other smaller micro-controllers worth trying.
  • Breadboard. I know how to solder (and you should, too), but it's not much fun at all and risks burning out components. Breadboards are great for prototyping, making modifications on the fly, and building projects incrementally, in phases (not unlike Agile software development). Later on, you can solder the robot into a permanent gadget, if you really want to (and have no need to reuse the components). This is also why I want to experiment with smaller/cheaper micro-controller units (MCUs), although the cheap prices for the MCUs are often offset by the complicated/expensive setup required to program them.
  • 9V Battery. To power the Arduino. It's a good idea to keep the Arduino circuit separate from the circuit driving the motors for several reasons. One, the two circuits might require different voltages to drive them. Two, doing so protects the Arduino from the noise produced by induction motors. Three, the Arduino might not directly be able to provide the current required to drive the motors (I tried and failed to run the motors directly off the Arduino digital pin output).
  • Custom Arduino Software/Program. The actual program that causes the robot to move in the pattern you see in the video.
  • Jumper Cables. Invaluable for connecting components using a breadboard. For example, jumper cables are used to carry the digital output signals from the Arduino to the breadboard.
  • Test Leads with Alligator Clips. These are also invaluable when you don't have the ideal connectors handy. I used these to connect the 9V battery to the Arduino, using jumper cables.
  • 2WD Chassis. This is the base for the robot.
  • DC Motors. These are attached to the chassis.
  • Wheels. Attached to the motors.
  • Caster Wheel. Attached to the back of the chassis. This provides a simpler option than a 4WD robot, which is harder to make and manage, in part because you then have four motors to manage instead of just two.
  • 6V Battery Case. This is what powers the circuit for the motors.
  • 2N2222 Transistors. These happen to be one of the most popular transistors of all time. I used the TO-92 package to bridge the Arduino circuit to the motor circuit. The digital output from the Arduino is wired to the transistor's base (via a 560 ohm resistor in series). When the base receives a signal from the Arduino, it allows current to flow from the collector to the emitter, thereby completing the circuit for the motor and causing the motor to run.
  • 560 Ohm Resistors. As described above.
  • Double-Sided Tape and Velcro. Invaluable for attaching things (e.g. the breadboard, the Arduino and the 9V battery) to the robot chassis so that they don't fall off while the robot is operating and they can be easily removed when you need to use them for other prototypes.
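The component list notes that each 2N2222's base is driven through a 560 ohm series resistor. A quick back-of-the-envelope check, sketched in Java, shows why that value is reasonable. The 5V digital HIGH and the ~0.7V base-emitter drop are typical figures I'm assuming, not values stated above.

```java
// Rough check that a 560-ohm base resistor gives the 2N2222 enough base
// current to switch a small DC motor. Assumed (typical) values: the Uno's
// digital HIGH is 5 V and the 2N2222's base-emitter drop is about 0.7 V.
public class BaseCurrent {
    public static void main(String[] args) {
        double vHigh = 5.0;   // Arduino digital output, volts (assumed)
        double vBE = 0.7;     // typical 2N2222 base-emitter drop, volts
        double rBase = 560.0; // series base resistor, ohms

        // Ohm's law across the base resistor.
        double iBaseMilliamps = (vHigh - vBE) / rBase * 1000.0;
        System.out.printf("Base current: %.1f mA%n", iBaseMilliamps); // ~7.7 mA

        // With a conservative current gain (hFE) of 100, this base current
        // can switch several hundred milliamps of collector current, which
        // is in the right range for a small hobby DC motor.
        double minGain = 100.0;
        System.out.printf("Supportable collector current: up to ~%.0f mA%n",
                iBaseMilliamps * minGain);
    }
}
```

In practice the motor, not the transistor's gain, sets the actual collector current; the calculation just confirms the transistor is driven hard enough to act as a switch.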
Next Steps

This step is just the beginning on what I expect to be a long road with the following enhancements and/or modifications.

  • Add a distance sensor (SR04) for obstacle/precipice avoidance.
  • Add light sensors (LDRs) for light following behavior.
  • Add infrared (IR) remote control capabilities. At least for me, the VS1838 module (a little board with a built-in LED) was a miserable failure. So, I switched to the raw TSOP4838 (not a module), and it works like a charm (with the Keyes remote control that shipped along with the VS1838 module).
  • Add an H-bridge using the L293D chip (I pulled it off my Arduino Motor Shield, which I found to be overkill) to allow the Arduino to reverse motor direction without rewiring.
  • Experiment with smaller micro-controllers, e.g. the Arduino Nano, the ATtiny85V, and the PICAXE.
  • Experiment with alternate wireless remote control technologies, including Wi-Fi and Bluetooth wireless.
  • Store robot configuration (e.g. MRU motor speeds) in EEPROM using I2C serial communication from the Arduino.
  • Add LEDs to indicate status, e.g. remote control signal received (blink), light following behavior on/off, turning indicators/blinkers.
  • Use Wi-Fi/wireless communication (e.g. using the NRF24L01+ board) to send remote signals to a server, a form of client to server logging. This approach might help to troubleshoot why the robot seems to occasionally ignore remote key presses even though they are received by the Arduino (as demonstrated by the LED flash response). Is it because the Arduino is too busy to process the instruction or because the remote signal code got mangled during transmission? The log, collected via wireless communication, will tell us.
References

  • Make: Electronics by Charles Platt. A delightful book with detailed pictures, drawings and explanations. Easily the best modern book on the subject.
  • Arduino Cookbook (Second Edition) by Michael Margolis. Encyclopedic treatment of all that you can do with an Arduino micro-controller.
  • Arduino in Action by Martin Evans. There are a lot of Arduino books on the market, but I found this one to be among the most useful.
  • YouTube Videos. Especially the ones by Jaidyn Edwards and Jeremy Blum.
Click here to view the subsequent steps in this series.

Step 2 (Remote Controlled)

Step 3 (L293D Controlled)

Wednesday, October 30, 2013

Learning to Fly: Peanut Butter, Sardines, and Turbans

Haagen Dazs Chocolate Peanut Butter Ice Cream

During my most recent visit to New Delhi my cousin's wife re-introduced me to an old classmate, of whom I had only vague recollections. She was able to convince me that we had been classmates. She recalled that I used to bring peanut butter and honey sandwiches to school for recess. Of course, I had purged that little detail of my school days, along with all of the other embarrassments that caused me not to blend in. Nobody else brought peanut butter sandwiches to school for lunch. This was New Delhi, not Canada (the land of Winnie the Pooh), where I was born and raised for the first three years of my life and where I undoubtedly developed a taste for peanut butter sandwiches.

The school I attended was Modern School (initially the Humayun Road branch and later the Vasant Vihar branch, aka MSVV). This is a rather Westernized and hip private school. And yet no one except me brought peanut butter sandwiches to school. To add to my inability to blend in, unlike Guru Harkrishan Public School (GHPS) up the street on Poorvi Marg, not many kids at MSVV looked like me. So, in more than one way, I did not blend in. I had unshorn hair, worn in a top bun, as is done by observant Sikhs, and covered up using a patka or turban. Sikhs are like India's Jews, a mere two percent of the population, but responsible for disproportionately large contributions in the military, agriculture, transportation, sporting, and many other endeavors. Memories of me wearing a turban are some of my proudest moments, in large part because it made my family happy. Paradoxically, wearing a turban caused me to stand out, which is the direct opposite of blending in, but in a good way. Most kids, as well as adults, want to blend in, that is if they're not going to stand out in good way. Observant Sikhs automatically get to choose the latter.

My turban was welcome protection during the winter months. But for the most part it was a source of ridicule. Among the countless forms of teasing observant Sikhs have to endure in India's schools, friends openly speculate on whether their turbaned mates stand any chance with the most sought after girls. Gone were the days, it would appear, when a turban signified respect, honor and prestige in Indian society.

My former classmate's recollection about my fondness for peanut butter sandwiches had rung true because I still like them and they are, to this day, a favorite option for breakfast. The turban, on the other hand, is no more. I had never liked wearing it although I was quite adept at tying it neatly and looked smart in it, or so I was told. The protection a turban offered me in the winter months was massively offset by the discomfort of unshorn hair, especially during New Delhi's sweltering heat and never-ending hot spells.

To make things worse, my curly hair was particularly unsuited for keeping long. I remember countless hours of working through knots that would form in my hair after washing it and letting it dry for a few hours (see my Afro pictures from college days upon returning to Canada). As a kid I had help from my mom, but later in life I had to fend for myself. And it was quite an ordeal. I don't know what the statistics are globally, but at least in my circles curly hair is a fairly rare feature, and so not many people (i.e., men) have experience with the torture involved in keeping it long or unshorn. My daughter has inherited my curly hair, and it didn't take her long to discover hair straighteners, which vastly simplify her life. The wonders of technology.

My parents used to tell me that I liked sardines. Of that I have no memory. I don't like them presently and no classmate has stepped forward to own the rekindling of that memory. Perhaps that is in store for my next trip to India. And although I remain opposed to sardines, I am very passionately ambivalent to turbans, which I find any excuse to don, including religious and family events. The extent of my indecision is so extreme that I sometimes wish I could wear it to bed and never ever take it off, as if it were stuck to my head like peanut butter.

Monday, September 23, 2013

Sikhs as a Firewall for Hate Crime

By the time this reaches you, you would likely already have heard about the heinous hate crime attack on a Sikh professor from Columbia University in New York. All too predictably, the attackers referred to the victim as "Osama" and "terrorist".

In building construction, a firewall is built as a barrier to prevent a fire in one part of the building from spreading through the rest of the building.

Building on the analogy (no pun intended), in Web or Internet technology, which is where I earn my living, a firewall is used as a first line of defense to block unauthorized access from sources that wish to perpetrate attacks of various kinds on a Web site.

I have borrowed the term "firewall" to describe the role Sikhs have played, from their origin leading up to current times.

The Sikh religion was formed, in some part, due to the dire need to protect India's predominant Hindus against unrelenting attacks from Muslim invaders from Mongolia, Persia, and beyond.

Fast forward to today. And we find that, in America and in other Western nations, Sikhs have become a perpetual "mistaken identity" for Muslims and have faced countless "mistaken identity" attacks (starting in Mesa, Arizona). In doing so, unwitting Sikhs have served as a canary in a coal mine, warning the Muslim community of what awaits them once the firewall melts and is no longer able to stop the spread of the fire.

Tuesday, September 17, 2013

Learning to Fly: Learning to Swim

The IIT Delhi Swimming Pool
(This is the first of a series of posts I intend to write in order to document my life. The series is called "Learning to Fly". This post is called "Learning to Swim". The posts in this series will be chronologically random, a stream of consciousness, if you will -- typically, thoughts triggered by an event.)

I can't recall the last time I had to work on a Saturday. But taking the kids to swim is not work. It's a joy to see kids learn, and grow. I remember learning to swim at the swimming pool at IIT Delhi (India). My father, who was a professor at IIT, used to stress that it was an "Olympics size" pool. I have no reason to doubt that it was. IITD had awesome facilities. And I am lucky to have grown up on campus.

I remember my father doing length-wise laps in the pool. He was a good swimmer. (My mother's strokes were a bit more labored. She could only manage breadth-wise laps.) My father used to tell us that a good swimmer causes little or no splashing in the water. The strokes must be smooth, like a knife through butter. He didn't say that last part. I added that because it describes what he had tried to convey. We (my younger brother and I) got the message, loud and clear. But I never became as accomplished a swimmer as my dad. In fact, I doubt I'm more accomplished than him at much at all. Perhaps squash? Perhaps parenting? Taking the kids for swimming lessons is good parenting. But my dad taught me himself. There were no swimming lessons to be bought.

We were instructed to hold on to the lip around the pool and splash our legs. Of course, we had inflated tire tubes around our waists holding us up. We weren't going to sink. And we weren't going to swim. The most I ever managed was to push myself off one side of the pool, splash wildly, and land on the pool floor, often just short of making it from one side of the shallow end to the other. But not quite knowing how to swim didn't stop me from joining white-water expeditions during college in Windsor, Ontario (Canada). Of course, I had no life vest and had not contemplated the possibility of the raft tipping over! And when it did, I had to recall everything I had learned about making a few lunges from one side of the pool's shallow end to the other, and, somehow, I managed to grab hold of a boulder sticking out of the water. Once the group found out about my swimming prowess, I was banned from joining them on the wilder expeditions that followed.

And, yes, somewhere between the raft tipping over and the group managing to get it back under their command, we lost the beer we had ingeniously tied to the outside of the raft (so that the water would keep it oh so cold). I sincerely hope that my kids will turn out to be better swimmers than me and will show better judgment than I did when asked to partake in a crazy adventure. But then, they might not have anything to write about.


Thursday, May 23, 2013

Are JSP tag libraries still relevant?


Figure 1
Figure 2
Figure 3

Figure 4



Dump JSP tag libraries and switch to JSON.

I often see development teams using JSP tag libraries when they shouldn't be. I wrote this post to explain why it's best to view JSP tag libraries as relics of the past.

Most non-trivial web applications store data in a database on the server side. These applications need mechanisms that allow the clients (web browsers) and servers (e.g. Java application servers) to exchange data. Typically, either a) data needs to be displayed for the user (so the client sends the lookup criteria to the server and the server responds with the relevant data), or b) the user changes data in the browser and the client needs to submit the data modification to the server for processing and/or permanent storage.

Until recently, most Java web applications have used JSP tag libraries as the mechanism for extracting data out of Java objects (JavaBeans) and rendering it in the HTML pages served to clients (web browsers), as part of the JSP/servlet paradigm offered by Java. (Note: JSPs are HTML files that get converted into Java servlets so that they can contain Java code for manipulating server-side Java objects.) In each case, the server responds with a new page (with embedded data), also known as a full page refresh.

Then, in 2005, AJAX came along and changed the full page refresh paradigm described above. AJAX allows for partial page refreshes and data exchanges between the browser and the app server without having to do full page refreshes. Since then, AJAX has continuously gained momentum, with support built into popular frameworks like Spring (for Java) in v3.0/2010 and jQuery (for JavaScript) in v1.5/2011.

The data exchange format that works best with AJAX is JSON, since JSP tag libraries cannot be invoked unless a full page refresh is involved. There are several options for mapping between the server-side Java model objects (JavaBeans) and JSON, which can easily be consumed by JavaScript running in the browser. (Note: Since JSON is the literal representation of a JavaScript object, the conversion from JSON to JavaScript object is trivial.) The option I recommend and have been using is Spring MVC's @RequestBody and @ResponseBody annotations as part of the controller method definitions (which leverage the Jackson library for JSON processing) to automatically map JavaBeans to JSON and back (see figures 3 and 4). (The alternative is to use a proprietary framework like Direct Web Remoting or DWR, which I do not recommend for obvious reasons.)

As a result, I recommend to most teams I consult with that they abandon JSP tag libraries entirely in favor of a pure AJAX/JSON-based approach.

Here's a summary of the reasoning behind my recommendation to use AJAX/JSON exclusively (even for full page refreshes).


  1. Unless you have a very simple application, you will likely need to support partial page refreshes using AJAX (rather than do a full page refresh each time that some data needs to change on the page). To do so, you need to map between Java objects (JavaBeans) and JavaScript objects (JSON) in order to exchange data between the browser/client and the application server. Therefore, it probably doesn't make much sense to support two channels for data exchange (JSP tag libraries for full page refreshes and AJAX/JSON for partial page refreshes). And if you have to pick one, it has to be AJAX/JSON, since JSP tag libraries don't work for partial page refreshes. Hence, my recommendation to go head first with AJAX/JSON and abandon JSP tag libraries. But if you need more incentive, please read on.

  2. I have worked with teams that have analyzed the size of the data being shuttled back and forth across the network and found that JSON consumes a lot less network bandwidth than the JavaBeans/JSP tag library approach or even XML payloads. Their analysis makes sense to me since JSON is a bare-bones, pure-text format without the syntactic overhead of XML or the rich object overhead of JavaBeans.

  3. Relative to the acrobatics required for manipulating JavaBeans using JSP tag libraries (see figure 1), the JavaBeans-to-JSON mapping is completely seamless with Spring MVC and requires no coding whatsoever (see figure 2). Whether or not you're using JSP tag libraries, chances are that you need to populate JavaScript objects with the data in order for it to be consumed by jQuery widgets. In other words, the JavaScript object(s) are required regardless of whether you use JSP tag libraries or not. Abandoning JSP tag libraries allows you to skip step 2 (see figures) and go straight to JSON and the corresponding JavaScript object(s) without having to muddle through the manipulation of JavaBean objects using JSP tag libraries.


Thanks for reading. I hope I've made my case adequately. However, I'd like to have your feedback, especially if you believe I've overlooked something.

Saturday, May 4, 2013

Ode To The Handyman




I recently posted a list of DIY projects on Facebook and a friend responded to suggest that I hire a handyman. So, I wrote this response to explain why I prefer the DIY route.

Understand. I like to understand how things work. Fixing things is a great way of achieving an appreciation for how things work, what causes them to stop working, and how to build them better and use them the right way so that they last longer. (While growing up, one of my father's books, The Way Things Work, was among my favorites. Hard to believe, but this 1967 classic is apparently still in print!)

Delegate With Competence. I like to know how something is done before I delegate it. That way I can provide competent supervision and am less likely to be taken for a ride.

Reduce Waste. Once you develop a handyman mentality, you tend to fix things rather than throw them away. We have become a throwaway society that creates far too much trash. So, I am always looking for ways to reduce my garbage footprint.

Save Time. Rather than schedule an appointment with a handyman, likely take a day off work, wait for his arrival, and hang around while he works, I can do the job during off hours, at my own convenience.

Stay Active. All the fixing helps me maintain an active lifestyle. And that's a major plus in today's sedentary society wherein we spend most of the day either sitting or lying down.

Save Money. It's cheaper. Not only do I not have to pay exorbitant hourly rates, I also don't lose a vacation day at work. And I can use the money I save to fund our next vacation trip.

(Of course, none of the above really applies if you're not handy. And in that case you have no choice but to either hire or befriend a handyman.)

Saturday, January 5, 2013

How to Select and Install Shelves

The kitchen in my house has a closeted area for laundry, where the washer and dryer take up the floor space. But there's 50 cubic feet (5' wide x 4' high x 2.5' deep) of space above the washer and dryer that was going unused. So, I started researching shelving systems. I began online at Home Depot, Lowe's, etc. But this isn't something you can do online unless you've worked with that exact system previously and know exactly what to order. After speaking with someone at the local Home Depot, I ended up going with a ClosetMaid ShelfTrack system that has several parts that all need to be coordinated carefully to get a working system. That is the system I will describe in detail in this post. But I'll also allude to other options.

First, note that this is a system especially suited for situations like mine where you're working exclusively within the top half of the space between the ceiling and the floor. Here is the component list for the ClosetMaid ShelfTrack system, also referred to on ClosetMaid's web site as Adjustable Mount Hardware.

Shelf End Caps
Hang Track
Standard
Bracket
Superslide Shelf

  • Hang Tracks. Installed horizontally near the top of the shelving area. They provide one-step leveling and eliminate the need to level each Standard separately. The Hang Track should be as long as possible without exceeding the width of the space in which you plan to install the shelving system.
  • Standards. Installed vertically. The notches at the top of the Standards mesh with the Hang Tracks so that the Standards lock into and hang from the Hang Tracks. The Standards should be as tall as possible without exceeding the height of the space in which you plan to install the shelving system.
  • Wire Shelving Brackets. Attach to the Standard at the desired height and spacing such that Brackets sit parallel to the floor and support the shelves. The Bracket size (e.g. 16'') should match the depth of the Shelves you intend to install.
  • Superslide Shelf. There are several kinds of ClosetMaid Wire Shelves that can sit on top of the Brackets. Superslide Shelf is the one that seemed most appropriate for my purpose. Shelves come in fixed lengths. So, I had to buy a 72'' Shelf and have it cut down to 60''. And when you cut a shelf, you end up with sharp edges at the cut end. So, it's best to cover up those sharp edges with Shelf End Caps. (I did consider wood shelves but somehow could not find the right size. Also, the ClosetMaid Wire Shelves snap onto the back of the Brackets so that they don't move around once they are placed on the Brackets.)
This system cost me around $150.


Wall Clips
Wall Brackets

Support Bracket


At this point I should mention a major alternative, suitable for situations where you're only planning on one or two shelves and aren't worried about being able to adjust the height. Fittingly, this is the system listed under ClosetMaid Fixed Mount Hardware. Broadly, it consists of Wire Shelves and the following.


  • Wall Clips. These are used for attaching the back of the Wire Shelves to the wall.
  • Wall Brackets. These are used for attaching the front corners of the Wire Shelves to the side walls.
  • Support Brackets. The top end attaches to the front lip of the Wire Shelves and the bottom end is screwed into the wall. 

Although picking a shelving system and buying the right shelving components that fit well together is hard enough, the part that stumps most people is the correct methodology for screwing these pieces into the wall. In most Western houses, walls are erected by screwing sheets of dry wall to wood or steel frames set up along the perimeter of each room. The frame is typically made up of 1'' x 2'' or 2'' x 4'' lengths of wood, also known as studs. Whenever possible, you want your screws to go into a stud so that they will be more secure and will support more weight.

However, locating studs isn't easy: you either knock on the dry wall and listen for the change in sound that indicates a stud, or you use a stud detector ($10 to $50, depending on the level of sophistication). And when you're installing something horizontally (as is the case for our Hang Track), you will be lucky if you're able to line up one or two of the 6 screws with a stud. The remaining screws will go into hollow dry wall. And this is where most people get stumped. If you use regular wood screws to screw your Hang Track into hollow dry wall, the Hang Track will not have much support and will come down like a house of cards as soon as you put some weight on your shelves.

Dry Wall Installation

So, an entire cottage industry has evolved for coming up with creative ideas on how to more successfully drive screws into dry wall (and have them stay there). The primary choices are as follows.

Dry Wall Screw With Anchor
Dry Wall Screw With Self-Drilling Anchor
Dry Wall Toggle Bolt
Dry Wall Screws (Hybrid or Triple)
Dry Wall Screw (Anchor-less)
  • Dry Wall Screws With Anchors. You first drill a pilot hole and tap the anchor into the dry wall with a hammer. Then, as you rotate the screw into the anchor, the anchor typically expands on the other side of the dry wall, thereby locking the screw in. These are cheaper and are a good choice for low load situations, e.g. if all you're planning to put on the shelves is pillows.
  • Dry Wall Screws With Self-Drilling Anchors. This is a bear claw style anchor (as opposed to the expanding style) and has the advantage of not requiring a pilot hole to be drilled first. You just drill in the anchor and then put in the screw. These are more expensive and are a good choice for high load situations, e.g. if you're planning on loading the shelves with cans of soup or books.
  • Dry Wall Toggle Bolt. This is actually a bolt with spring-loaded wings that expand once they get past the dry wall. As you continue to screw in the bolt the wings will eventually be flush with the other end of the dry wall and will lock the bolt in place.
  • Dry Wall Screws (Anchor-less). This is an all-in-one option that is typically best for situations where the leading half of the screw is going into a stud. However, it may not work in many situations because the hole in the part you're trying to screw into the wall isn't wide enough for this type of screw.
  • Dry Wall Screws Hybrids. Sometimes also known as triples, these are futuristic looking anchors (generally plastic) that implement multiple locking strategies as described above.
Here's how my ClosetMaid ShelfTrack system looks fully assembled (except for the rod at the front, which is handy in case you wish to hang clothes -- but I didn't need one).



I hope you found this useful. Happy shelving!