Having worked as a pure Business Analyst (BA), a pure Project Manager (PM), and a few hybrid roles, I've been giving the rise-and-rise of the 'hybrid' role some thought lately; particularly whether it has been born of necessity in these austere times, or whether it has been artificially constructed.
Then today I came across this article and it gave me the nudge to clarify my own thoughts on the subject.
I think the writer of the article has missed an opportunity by not putting the roles in historical context. It seems strange to use the word 'historical' when referring to two relatively new, modern occupations; however, time is relative. I like the definition of a BA, but I can't help feeling the definition of a PM fails to capture the value of their domain experience. Yes, identifying and managing risks is critical in managing a project to delivery, but how can somebody do that without understanding a) the specialist roles within their team (e.g. BA, Change Manager, Systems Analyst, Developer, Tester, Test Manager, Subject Matter Expert, etc.) and b) how to do whatever it is they've been hired to do?
Take me for example; I can comfortably PM an IT development or an implementation of a vendor solution, or even a business change such as realigning to a new target operating model. But I wouldn't be able to build a skyscraper...at least, not as efficiently as a PM who has domain knowledge of building skyscrapers. I'd have to delegate more to architects and 'Master Builders' as, although I know to start by digging a hole for the foundations, I wouldn't know how to order the thousands of other tasks in the most efficient way to avoid rework.
So, although 'Project Management' is a career in its own right and there are university courses in it (I know, as I did one), I think it's exactly this professionalisation of the role that is ultimately leading to its undoing. The world is now full of PMs who haven't actually done any of the jobs they are supposed to be managing; people coming straight out of university and working as a PM.
Being a good PM requires more than being the 'Keeper of Lists & Plans', because asking people what they need to worry about and what tasks they need to do doesn't add much value. Nor does it add much value to then ask the same people each week whether they've resolved the things they told you they worry about, or finished the tasks they told you need doing. Now, I don't mean to devalue the good work done by Project Management courses and qualifications; however, the governance skills and tools they provide need to be ADDED to existing domain experience. That is where the value is added, and this is something I think we have lost.
Back in the day (remember my use of 'historical' earlier?), you started work in IT as an operator or in support. Then the gifted moved up to be programmers, working with a systems analyst (I'm thinking back to mainframe days here; proper history). Good programmers then became systems analysts and worked with BAs. Good systems analysts then became BAs and worked with PMs. Eventually, after building up a career of experience, good BAs became PMs. PMs were respected, knowledgeable people who understood everything and every role about their projects; but these days, those are sadly a dying breed.
So far, then, it's looking more accidentally created than artificially constructed.
The article also suggested BA was too broad a term and "you may end up with someone who is little more than a scribe..."; well, I think that can apply equally to PMs, as I've seen many who are just scribes. A PM needs to understand what a particular risk means, not just write it down. They also need to understand what purpose each task fulfils so that they can make decisions about sequencing and time boxing. For example, if your developers are busy at the moment, why not take more time on the functional specifications and take those to a lower level to save time later?
A programmer receives specifications to work from, so they know what needs to be in them. A systems analyst receives requirements documents, so they know what needs to be in them. A BA receives scope and steering, so they know what needs to be in them. Experience isn't everything of course, but wouldn't you rather have somebody with the skills AND the experience ?
Reverting to the question of hybrid roles, I agree with the article that there is a degree of taking advantage of people by making them do two roles for the price of one. I've found that by far the most common balance, due to time constraints, is 80% BA and 20% PM (at best!). Possibly a vote in favour of 'necessity for these austere times' then?...on balance, I think not: whilst 80/20 can work for small projects, once you get to teams of 6-10 people for more than a few weeks it neglects the role of the PM. RAID reviews and plan updates become perfunctory at best; usually a rushed 30 minutes before the weekly project reports are due, and that isn't good enough for projects above a certain size.
Can BAs add PM skills?...of course they can; it is (was?) the natural progression.
Can PMs add BA skills?...of course they can, although they will lack the foundations of a good BA as they're coming at it from the wrong direction.
Should we more clearly define the roles?...I'd say yes for large projects, but for smaller projects a hybrid role offers many benefits and works hand-in-hand with the reduced governance demands.
In summary, if you are recruiting for a hybrid PM/BA, please be careful that the work really does suit a hybrid role, and then be careful who you hire, as not all hybrids are created equal.
Michael Baycroft
Thursday, 2 October 2014
Tuesday, 12 October 2010
What happened to Unit Testing?
As teams move towards more Agile approaches to development, there is an almost total reliance on automated unit testing, as this is widely supported by well-established tools such as JUnit, NUnit and even DBUnit, alongside other products for Continuous Integration (CI). Whilst thoroughly written automated unit tests can be fine for server-side code, I am noticing that more and more time is needed in Independent Testing/System Testing where there is a significant element of User Interface (UI) development, as too many defects are being discovered.
There are so many events and complications when building a UI that the lack of hands-on, post-build developer testing causes far too many defects and too much rework. In an Agile environment, this causes a high volume of Change Stories; in a traditional environment, this causes an extended System Testing period. In both cases this causes delays in delivering value to the business.
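The gap can be illustrated with a minimal sketch (all names here are hypothetical, and I'm using Python's built-in unittest in place of JUnit/NUnit for brevity): the server-side business rule is easy to cover with an automated unit test, but nothing in the test suite exercises the UI wiring around it.

```python
import unittest

def apply_discount(price, rate):
    """Server-side business rule: straightforward to unit test automatically."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    # The business rule itself is well covered...
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 0.2), 80.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

# ...but nothing here exercises the UI: the click handler that reads the
# rate from a text box, the validation message shown to the user, the
# screen refresh after the call. Those event paths are exactly where the
# defects described above surface, and they only get checked by hands-on,
# post-build developer testing (or dedicated UI-level test automation).

if __name__ == "__main__":
    unittest.main(exit=False)
```

Green unit tests like these can give a misleading sense of completeness when the bulk of the new code is UI event handling that the suite never touches.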
The closer to the coal face that issues are found, the quicker and cheaper it is to resolve them, so it is imperative that this gap in Unit Testing is resolved.
I should point out that not all Agile testing falls foul of this; however, the problem appears to be more prevalent in teams caught in a halfway house between Agile and more traditional software development lifecycles (SDLCs). This is probably a result of trying to work to Agile delivery cycles without the close end-user involvement and other supporting practices.
Failure to adhere to any one methodology appears more prevalent where the development is performed by globally dispersed teams, however, I have also seen this problem arise due to development teams trying to respond to business demands of faster turnaround times, particularly where multiple development streams are used to try to meet demanding deadlines.
These pressures are leading teams to move towards Agile practices even when the overarching project setup within their organisation is more traditionalist and this is causing more problems than it solves. Although there are benefits to be had from adopting some Agile practices, there is a real danger of becoming stuck in the middle.
I have worked successfully in both environments, so I would argue that it doesn't matter which SDLC you follow; however, once you've picked one, make sure you stick with it. It is possible to mix and match across SDLCs, but this must be done with great care and requires an in-depth understanding of the likely issues. It is rare that a single practice can be surgically implanted into a foreign SDLC.