Measured for Success

Keeping with the format from last week, every day will have a specific theme, and I will carry forward the same ideas. So without further ado, here is this week's plan:

Monday – Measured for Success; Planning ideas and thoughts
Tuesday – You Did What?; focusing on coding
Wednesday – I am Not All Alone In the World; ideas and code segments for people like me, social lepers
Thursday – Everywhere and Nowhere; a look at location services, social integration, and strategy
Friday – Riddle Me This; fun stuff to finish up the week

[Image: Birds Fighting Over Food, courtesy of Ducklips Photography]

And today's topic is figuring out what the measure of success is. If we look at last week and my posts, I wanted to have a post every day, and I got through Thursday. I missed Friday, and that should have been the easiest one to write. So based on my own goals, was last week a success? I would have to say no. I set a goal of 5 posts in 5 days and did 4 posts in 5 days, so I fell short.

And that is what is important to understand when trying to plan a new application: what is the measurement for success, and can it actually be measured? When planning an application, it is important to understand what will define its success. It is much like a goal: it should be conceivable, achievable, measurable, and desirable (other criteria exist, but these are the important ones when you plan on gathering statistics). Another important part of this measurement is the fault tolerance allowance: the level of error, or the degree of missing the set goal, that you are willing to accept. With that in mind, let's examine two different scenarios.

[Image: Alone in the tree, courtesy of Ducklips Photography]

For the first example, a B2B company provided an online catalog. In this e-commerce application, a customer could create a new profile, set up an account, buy on credit, add items to a cart, and check out. The first iteration of this application was good. Executive management wanted to create a new version to provide more services and bring the application up to date. That is a good thing, and something management should be eyeing. However, the measurement of success they gave was: "make it best in class." When prodded for further clarification, they said to make sure it had everything it needed to be the best online experience. Any further attempt to draw out detail produced the same answer. Occasionally, a very specific requirement would come out, along with very specific direction on how to measure that enhancement.

For a year, the teams worked on this application, doing what they thought was right. When it came time to determine whether this initiative had succeeded, what was there to measure? The executive team decided the past year had been a failure and a waste. But how could they even come to that conclusion? What did they measure the application on? Was it best in class? Which class, and what exactly is best? Sales via the application had increased, but so had the number of "tickets" from customers asking how to use the site. More bugs were found because more enhancements were added, and not all of them really added value to the checkout process. Sure, it was nice to look at and could do some tasks, but in the end it was a failure. The goal could not be measured, and without a measurable, desirable definition it was never achievable.

The second example is a little different. It did not involve e-commerce; it involved an application that provided centralized data for teachers. The data was used to help build lesson plans, drill down into topic details, and provide teaching aids such as images and videos. The old system was bulky, piggybacking on a forum-style setup. The new application had to be searchable, with results returned within 1.2 seconds of the query. The returned data had to include sections for video, images, and textual data. To use the application, an account had to be set up by the individual and verified via email before it became active. Passwords had to be at least 6 characters and include letters, numbers, and special characters. Passwords had to be stored encrypted, and if a user forgot theirs, a new one was issued via email. That temporary password was valid for only 2 hours; if the user had not requested it, they could either click a specific link to revoke the temporary password or let it expire. Any data added had to pass a peer review by selected accounts for each subject before being queued for search, and a new item had 12 hours to be approved.
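
One nice thing about requirements at this level of detail is that they are testable. Here is a minimal sketch, in Python, of how a few of them could be checked: the password policy, the 2-hour temporary-password expiry, and the 1.2-second search budget. All of the function and constant names here are my own hypothetical illustrations, not anything from the actual application.

```python
import re
import time
from datetime import datetime, timedelta
from typing import Callable, Optional

# Stated policy: at least 6 characters, with letters, numbers,
# and special characters all present.
def password_meets_policy(password: str) -> bool:
    return (
        len(password) >= 6
        and re.search(r"[A-Za-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

# Stated policy: a temporary password is valid for 2 hours from issue.
TEMP_PASSWORD_LIFETIME = timedelta(hours=2)

def temp_password_expired(issued_at: datetime,
                          now: Optional[datetime] = None) -> bool:
    now = now or datetime.utcnow()
    return now - issued_at > TEMP_PASSWORD_LIFETIME

# Stated policy: search results must come back within 1.2 seconds.
SEARCH_BUDGET_SECONDS = 1.2

def within_search_budget(search_fn: Callable[[str], object],
                         query: str) -> bool:
    start = time.perf_counter()
    search_fn(query)
    return time.perf_counter() - start <= SEARCH_BUDGET_SECONDS

if __name__ == "__main__":
    print(password_meets_policy("abc123!"))  # True: letter, number, special
    print(password_meets_policy("abcdef"))   # False: no number or special
    print(temp_password_expired(datetime.utcnow() - timedelta(hours=3)))  # True
```

The point is not the code itself; it is that each of these requirements can be turned into a pass/fail check, which is exactly what "make it best in class" from the first example never could be.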

[Image: Starting to get it, courtesy of Ducklips Photography]

Sounds a little detailed, correct? But where is the measure of success for this application? We have very detailed plans for what the application should do and how it should act (and I should add, these were not all of the requirements; it was a very detailed document). However, outside of the search-result timing, there is little to gauge the success of this application. Sure, we can use the other requirements to verify that it works as specified. But how about this scenario: all these items are in place, and the total membership grows to 5 people who use the application. Is that considered a success? Who was the target audience, and how was it marketed? Did it actually help those 5 people, or just add more noise to an already loud teaching environment?

When planning an application, getting the details right is good, and it provides a solid measure of certain successes. However, that is not all that needs to be done. It could be the best application in the world, but if no one uses it, is it still a success?
