Laramie/Consulting/Schwab/Proposal

From ThinkPrank Wiki

Overview

I am working with the Cashiering team on SchwabInstitutional.com, on a contract that began in October 2006 and has been extended to February 2007. I have enjoyed working with the management, analysts, and developers, and would like to further assist the team by providing some observations on the infrastructure and methodologies used in delivering our product. I would also like to offer my services to implement several of the recommendations.

Observations on Existing Infrastructure and Methodologies

Recommendations

Reliable Test Data

The system should provide reliable test data that can be refreshed at any time to a known, good initial state.

Currently, developers rely on accounts and profiles that are set up by hand and are easily corrupted.

Best practice would be to automate the setup of these accounts, so that the baseline is pristine every time. When the data become corrupt, the accounts can simply be deleted and re-created. Many organizations refresh their test data on a schedule, for example nightly.

  • This can be done by a series of scripts:
    • SQL scripts that populate the database, or
    • VB scripts that exercise the create-account functionality of the VB Client Central client
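The SQL-script approach might look like the following sketch. The table and column names here are illustrative only, not the actual Schwab schema:

```sql
-- Illustrative refresh script; table and column names are hypothetical.
-- Run nightly (or on demand) to restore the baseline.
DELETE FROM test_accounts WHERE account_id LIKE 'TEST-%';

INSERT INTO test_accounts (account_id, profile_name, balance)
VALUES ('TEST-001', 'baseline-individual', 10000.00);

INSERT INTO test_accounts (account_id, profile_name, balance)
VALUES ('TEST-002', 'baseline-corporate', 250000.00);
```

Because the script both deletes and re-creates the test accounts, it can be run at any time without manual cleanup.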

Disconnected Development Environment

Each layer in the system should be developable and testable as a unit.

If the back end had a proxy that simply returned XML responses, the front end could be tested without a live back end.

Disconnected pieces can also be unit tested in isolation.

Currently, developers spend time waiting for responses from the back end and recovering from back-end availability failures. In an ideal system, the back end would be fast and reliable. However, the development region is allocated far less memory than the production region, so developers cannot rely on the back end.

Proposal: develop a proxy that sits just above the layer that exchanges XML requests and responses with the back end. When the system is in proxy mode, the proxy returns the requested response directly. In most cases, the developer knows which back-end response is desired and could code it directly into the test request. This approach would also be very fast.
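A minimal sketch of the idea, assuming a simple gateway interface; the interface, class, and XML names below are illustrative, not the actual system's:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical interface for the layer that talks to the back end. */
interface BackEndGateway {
    String send(String xmlRequest);
}

/** In proxy mode, canned XML responses are returned instead of making a
 *  live back-end call, so the front end can run fully disconnected. */
public class ProxyGateway implements BackEndGateway {
    private final Map<String, String> cannedResponses = new HashMap<>();

    /** The developer registers the back-end response they want for a request. */
    public void stub(String xmlRequest, String xmlResponse) {
        cannedResponses.put(xmlRequest, xmlResponse);
    }

    @Override
    public String send(String xmlRequest) {
        // No network round trip: the canned response comes back immediately.
        String response = cannedResponses.get(xmlRequest);
        if (response == null) {
            throw new IllegalStateException("No canned response for: " + xmlRequest);
        }
        return response;
    }

    public static void main(String[] args) {
        ProxyGateway proxy = new ProxyGateway();
        proxy.stub("<getBalance account='TEST-001'/>",
                   "<balance account='TEST-001'>10000.00</balance>");
        System.out.println(proxy.send("<getBalance account='TEST-001'/>"));
    }
}
```

Because the proxy implements the same interface as the real gateway, switching between proxy mode and live mode is a configuration choice rather than a code change.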

The savings in development time will easily pay for the time spent developing and installing the proxy back end. To change even simple UI elements, developers must often use the full system, which can mean 20 round trips to the back end to get an element right. In a local environment, each of these steps should take a few seconds to save the file and reload in the browser, or 20*3 = 60 seconds for a minor change. In the current environment, however, developers often spend 20*60 seconds waiting for the back end to reload, and it sometimes times out. There is, of course, still time needed to analyze the code and make the change, but once the change has been made, the comparison is between refreshing a proxied back end and waiting for a real one. With a proxied back end, this hypothetical change takes one minute; without it, 20 minutes of development time. This estimate is in keeping with my experience that development at Schwab generally takes 10 to 20 times longer than at other organizations I have worked with that use more rapid development methodologies.

Repetition of Steps

The Use Case methodology is well established and well adopted by the Schwab team. However, it could be enhanced at Schwab to pay much higher dividends in productivity and quality.

  • Open the documents to more collaboration. Many, many emails go back and forth to clarify changes to Use Case documents; these changes could simply be made in the document itself using the collaboration and review tools already in place at Schwab.
  • Include in the documents more grids of the fields expected on screen (see Example)
  • Make the documents readable for implementors and testers. I have seen the documents re-written and passed around in email in various forms (implementation notes, test scripts, etc.) that should really just be the use case itself.
  • Standardize the use of section and subsection numbering, and refer to these numbers when testing and implementing
  • Standardize on strikethrough
  • Use revision control to keep documents clean

Unit Tests for each Layer

The current system has almost no unit tests and no automated integration or regression tests. These are essential for confidence that new changes and bug fixes do not break previous releases.

Proposal:

  • Unit Tests
    • set up JUnit code and a test harness
    • set up JUnit test cases for parts of the system, as examples
    • train developers on how to use and add test cases
  • Integration Tests
    • set up automated tests that exercise the front end and terminate in the new proxy stubs for the back end; these may be run in disconnected mode
    • set up automated tests that feed test cases of front-end data to exercise the back end through XML requests. There is a tool in place for these tests, but I am not sure whether it emulates the WSDL request completely.
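The shape of such a unit test can be sketched without any dependencies. JUnit would supply the harness, runner, and reporting; the validator class and its rule below are hypothetical stand-ins for real cashiering logic:

```java
/** Sketch of a unit test for one layer in isolation.
 *  AccountValidator is a hypothetical class; JUnit would replace the
 *  hand-rolled check() harness used here for illustration. */
public class AccountValidatorTest {

    /** Hypothetical piece of cashiering logic under test. */
    static class AccountValidator {
        // Illustrative rule: account IDs are exactly 8 digits.
        static boolean isValidAccountId(String id) {
            return id != null && id.matches("\\d{8}");
        }
    }

    public static void main(String[] args) {
        check(AccountValidator.isValidAccountId("12345678"), "accepts 8 digits");
        check(!AccountValidator.isValidAccountId("1234"), "rejects short IDs");
        check(!AccountValidator.isValidAccountId("12E45678"), "rejects letters");
        check(!AccountValidator.isValidAccountId(null), "rejects null");
        System.out.println("All cases passed.");
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("Failed: " + name);
    }
}
```

Once a few such cases exist as examples, adding a new case when a bug is fixed keeps that bug from silently returning in a later release.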

Configurable Logs

Log4j provides configurable logging levels and is easy to install in place of the existing logging manager. Different log configuration files can then be applied to development, QA, and production without re-instrumenting the code. The Log4j configuration mechanism is much more powerful than the current system and allows developers to see just the errors that pertain to them.
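As an illustration, a per-region configuration file might look like this; the package name is a hypothetical example, not the actual code base:

```properties
# log4j.properties for the development region.
# QA and production ship their own copy of this file; no code changes needed.
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

# A developer can turn on DEBUG for just the package they are working in
# (package name below is illustrative):
log4j.logger.com.schwab.cashiering=DEBUG
```

Because loggers form a hierarchy by package name, one line like the last one raises the detail for a single team's code while the rest of the system stays quiet.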
