Today we finished release cycle 2 for Project OSL. The team spent much of Tuesday night and all of Wednesday morning and afternoon putting the release together. It was a particularly difficult release with a lot of dead ends and roadblocks. In the end, though, we made some incredible progress toward our goal in only 2 short weeks.
We planned to implement the following features in RC2:
- Localizer access to ranked list of localizations
- Add support for .strings files
- Create user feedback system
We also fixed a few bugs:
- Modal welcome window not centered
- Design issues in resource bundle viewer
- Files not filtered in dialog box
- Edits to localizable resource not saved
- Uploaded resources not saved
These bugs proved to be a bit more challenging than I had originally thought. Most of the difficulty came from fixing them while bringing the application up to date with the new backend architecture.
REST API

Now, I get to talk about the cool stuff. We created a REST API. It's sweet. It's also ugly on the server's side. It almost looks like a hack when I read the PHP (and, knowing my PHP skills, it probably is). However, it's slick.
Our API is very simple:
GET    api/:model/_all_docs --> List of 'em
GET    api/:model/:id       --> Give me that object!
POST   api/:model/:id       --> Update me some data
PUT    api/:model           --> Shiny new object for me to play with
DELETE api/:model/:id       --> No want!
and our only :model right now is "resourcebundle." So, yes, you can actually do a GET on api/resourcebundle/_all_docs and get some cool JSON back.
Okay, so that probably isn't as cool to you as it is to me, but I'm really happy about that. This was my first RESTful API and it seems to have worked out quite well so far.
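On the server side, the whole thing boils down to mapping an (HTTP verb, path) pair to a handler. Our actual implementation is PHP, but here's a minimal sketch of the dispatch idea in Python; the route patterns mirror the table above, while the handler names are hypothetical:

```python
import re

# Hypothetical route table mirroring the API above; handler names are made up.
ROUTES = [
    ("GET",    r"^api/(?P<model>\w+)/_all_docs$",     "list_all"),
    ("GET",    r"^api/(?P<model>\w+)/(?P<id>\d+)$",   "get_one"),
    ("POST",   r"^api/(?P<model>\w+)/(?P<id>\d+)$",   "update"),
    ("PUT",    r"^api/(?P<model>\w+)$",               "create"),
    ("DELETE", r"^api/(?P<model>\w+)/(?P<id>\d+)$",   "delete"),
]

def dispatch(method, path):
    """Return (handler name, captured params) for a request, or None."""
    for verb, pattern, action in ROUTES:
        m = re.match(pattern, path)
        if verb == method and m:
            return action, m.groupdict()
    return None

print(dispatch("GET", "api/resourcebundle/_all_docs"))
# ('list_all', {'model': 'resourcebundle'})
```

The PHP version is messier (see above re: my PHP skills), but the shape is the same.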
Anyway, in our application, Chandler abstracted all those sexy GET, PUT, POST and DELETE requests into some ActiveRecord-style save, list, get, delete methods. Boo! But also very cool.
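The wrapper idea is simple: each record object knows its model's URL and maps CRUD calls onto the verbs above. Chandler's real code is Objective-J; this is a rough Python sketch with every name hypothetical, using an injected transport so the verb mapping is visible:

```python
class RESTRecord:
    """Minimal ActiveRecord-style wrapper: CRUD methods map to HTTP verbs."""
    model = "resourcebundle"  # the only model the API exposes right now

    def __init__(self, transport):
        self.transport = transport  # callable(verb, path, body) -> response

    def list(self):
        return self.transport("GET", f"api/{self.model}/_all_docs", None)

    def get(self, id):
        return self.transport("GET", f"api/{self.model}/{id}", None)

    def save(self, id, data):
        # New objects go through PUT; updates to an existing id go through POST.
        if id is None:
            return self.transport("PUT", f"api/{self.model}", data)
        return self.transport("POST", f"api/{self.model}/{id}", data)

    def delete(self, id):
        return self.transport("DELETE", f"api/{self.model}/{id}", None)

# A fake transport that just records each request, for demonstration.
calls = []
record = RESTRecord(lambda verb, path, body: calls.append((verb, path)))
record.save(None, {"name": "welcome.strings"})
record.get(42)
print(calls)  # [('PUT', 'api/resourcebundle'), ('GET', 'api/resourcebundle/42')]
```

The payoff is that the rest of the application never touches HTTP directly; it just calls save, list, get, and delete.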
User Feedback

Another addition to our application is the User Feedback submission form. You see, we want everyone to give us feedback. Lots of feedback. And we intended to develop this feature as early, as quickly, and as visibly as possible from the get-go. Why?
User feedback is the only way to actually know if your application is usable. Follow as many guidelines, track as many metrics, have as many usability gurus as you want: you will Never(TM) have a good, usable product until you put it in the field and let users hack away at it for a few months. So, this means early betas with quick turnaround times on feature and usability bugs. And not only are we doing it early, we're doing it in Week 4 of development for a project with about 20 weeks of development. That's early.
Inline Editing of CPTableView

If you're very familiar with Cappuccino, you know that inline table view editing isn't supported. That is, until Chandler and Kyle rocked it. Go check it out, Cappuccinoians!
Lessons Learned

Caleb, Chandler and I had a good discussion about what we need to improve on. In particular, we all agreed that we need to spend more time defining use cases. When I handed out tasks, the team spent a lot of time spinning its wheels trying to understand what exactly each task or feature was supposed to accomplish. In the last couple of days we found our traction and made some really good progress, but we had almost a week of little to no traction before that.
In part, it can be blamed on our demanding schedule at Rose. In part, it can be blamed on the psychological effect of a deadline. In part, it can be blamed on this being a very difficult feature set. But, in reality, our team overcomes those with ease. Instead, the fault was with our process. We defined our use cases very loosely at the beginning of the project because we wanted to get our hands dirty in code. Instead, we need to spend more time defining what a user would expect out of a feature so that all of the developers are on the same page.
Once again, our decision to tackle the riskiest features first seems to be going great. It was an experiment that I wanted to try. Our team is clearly several steps ahead of a traditional senior project schedule. We will have 3 full releases before the end of the Fall term. There is only one other team that is really anywhere close to where we are (tip of the hat to the HiSchool team! Good job guys!) and the rest still seem stuck on wireframing, prototyping, or even requirements! Some of that is skill level, some of that is work ethic, and some of that is process. Our team would be hindered by a process that requires more documentation than we are comfortable with.
Metrics

And, as always, here are some metrics:
Number of Methods: 112
Number of Tests  : 26
Tests per Method : .232
Number of Classes: 37
Methods per Class: 3.027
So, the two metrics that we care about are Tests per Method and Methods per Class. Our Tests per Method is abysmal. We need to fix that. We will. It is going to be a top priority for RC3. Our Methods per Class, however, is as close to perfect as I can imagine. This means that we are keeping a relatively good design as our project progresses. If we track the percentages from Release 1 we see:
Number of Methods: +78%
Number of Tests  : 0%
Tests per Method : -43%
Number of Classes: +54%
Methods per Class: +15%
where we increased our codebase by somewhere around 60% but only increased our methods per class by 15%. Of course, I want that number to stay somewhere around 3, so I'll be looking for ~0% change in Methods per Class in the future!
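For the curious, those deltas imply Release 1 counts of roughly 63 methods, 26 tests, and 24 classes. That's my back-of-the-envelope reconstruction from the percentages, not numbers from the Release 1 post, but the arithmetic checks out:

```python
# Release 1 counts inferred from the percentage deltas (assumptions!).
r1_methods, r1_tests, r1_classes = 63, 26, 24
# Release 2 counts, straight from the metrics table above.
r2_methods, r2_tests, r2_classes = 112, 26, 37

def pct(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

print(pct(r1_methods, r2_methods))                             # 78
print(pct(r1_tests, r2_tests))                                 # 0
print(pct(r1_tests / r1_methods, r2_tests / r2_methods))       # -44 (reported as -43)
print(pct(r1_classes, r2_classes))                             # 54
print(pct(r1_methods / r1_classes, r2_methods / r2_classes))   # 15
```

Tests per Method comes out at exactly -43.75%, so the reported -43% was presumably truncated rather than rounded; everything else matches.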