Backlogs have long been used to organize work. I could go back as far as Benjamin Franklin's writings on how he practiced a list of mental virtues in an iterative, incremental fashion.
However, I would like to point out two more recent examples: the first one in this post, followed by a separate post on the second.
Not many are aware of how Steven Spielberg operated when developing his first blockbuster, Duel. It was shot at a relentless pace, in just sixteen days, at a total cost of $425,000 – after all, it was originally a TV movie. Its quality, though, attracted a large enough audience that it was released theatrically in Europe and Latin America, and two years later it came to movie theaters in the United States as well. Brode's description of Spielberg's approach is fascinating, as anyone who practices Agile today would recognize what he was doing (my inline comments in [brackets]):
[Spielberg] became involved during the time period when Matheson [the original story author] was writing the script, though Spielberg did not consult with him during the writing process. Methodically he blocked out the entire film on IBM cards [the backlog], the first time he tried what would become his regular approach. Each card contained the gist of the scene, the angle he would take on it, and how many camera setups he needed. While filming in Lancaster, he assembled those cards on a huge bulletin board in his motel room [the task board]. Rather than opening the script to the day's page, he would instead take down several cards. They constituted the day's work [daily iterations – after all, it was a sixteen-day project], and when each scene was finished, he would tear the card up and throw it away, knowing every night, by glancing at the bulletin board, how much was left to complete.
In addition to cards, he had his art director sketch the entire film on one long mural that arced around the motel room, an aerial view portraying every moment in the movie [the user story map], including chases. Never a great reader, Spielberg liked to avoid referring to his script and memorizing blocks of words, preferring to study this visual panorama, locking himself into it before filming any one day.
The films of Steven Spielberg, by Douglas Brode
I brought up this extract to show that using backlogs has always been a way of working not only in the minds of software developers under delivery pressure, but also among the general public. Well, at least for smaller projects – the DoD distortion field that swept the 1970s and led to Waterfall stems from the need for extra planning on big projects under extreme risk pressure.
Hopefully this example shows how people create new ideas while working, ideas that are eventually formalized as best practices (I will bring that up in the next post related to backlogs), and finally automated in a tool such as Team Foundation Server 2013.
Where did Spielberg get the idea of using IBM punch cards? Maybe by interacting with friends at a university Computer Science department? If you ever have the chance, please ask Spielberg for me.
I will be presenting on how to access a TFS Git repository programmatically on April 11th, from 11:30 AM to 1 PM.
If you are in Austin and would like to attend, please register at: http://tfsaustinaprilmeeting2014.eventbrite.com
Now that TFS 2013 supports Git, there is a need to replicate on top of this new storage infrastructure the same kind of automation that has been well known to TFS users for the past eight years – for instance, build scripts and other administrative operations – so that you can access the repos transparently, the same way you can with TFS Version Control. This talk explores some of the existing options, plus how-tos with code samples, and gotchas.
We intend to record the presentation and make it available online afterwards, so even if you can’t attend in person, check this post a few weeks from now.
[for my comments on the previous version of this report, and links to others, see here]
Not sure if you have been able to read the latest Gartner report on ALM, so here goes the link: http://www.gartner.com/technology/reprints.do?id=1-1N99LF3&ct=131120&st=sb
This current version is very precise in positioning both the strengths and perceived weaknesses of the Microsoft ALM platform, especially the “gap of relevance” in the mobile world.
However, to me, although precisely stated, this view is just perception. With the integration of Git into the toolset, in addition to the existing Eclipse integration through Team Explorer Everywhere, developers across multiple platforms can now use TFS (including developers of mobile apps for Apple products, for instance). Obviously the main benefit comes from the integrated reporting this allows, and this should be made very clear to upper management when deciding whether to adopt yet another SCM/ALM tool.
As for the other perceived weaknesses:
- “The vendor lacks a stand-alone requirements management approach; instead, it takes an enhance-and-integrate Office approach.”
- This statement reflects a common confusion between requirements elicitation and requirements management (I am using the SEI CMMI definitions). Microsoft has supported requirements management out of the box (even if bare-bones) since TFS 2005, has constantly improved on it, and it has been good enough for Agile teams since the inception of TFS. Requirements elicitation, however, is not properly there yet (although some could argue a basic version exists). In my consulting experience the best way to handle this gap is to use a tool such as Blueprint with the business users. I would say that this category either has to be refined into the subcategories I mentioned, or that this statement should be fixed in the next report to acknowledge the actual state of affairs.
- “Microsoft lacks the agile depth of pure-play vendors around project portfolio analysis, and management or support of Scaled Agile Framework (SAFe).”
- It seems that the evaluation was not done using the latest version (TFS 2013), as it does have Agile portfolio features, with more in the backlog for v.next.
- As for the second part of the statement: if you examine the list of SAFe tooling partners up to October of 2013, only three of the report's Leaders are mentioned as supporting it. I read that page at the time the report was released, on November 19th, and it still listed the same three: Rally, VersionOne, and AgileCraft. It is interesting that Gartner chose to single out Microsoft for not supporting it out of the box, instead of also mentioning that when writing about the other Leaders at the time (IBM, Atlassian and CollabNet).
- The fact that most of the Leaders did not yet support SAFe, plus the lack of any Big-5 consulting companies in the Partner directory (it competes with their own offerings), tells me that Gartner elevated a niche factor to a major must-have feature with no market justification yet. However, this niche factor is getting a lot of buzz, so I would expect it to become more important for the market in general by the time of the next report iteration.
- The list of SAFe partners is growing, with the additions of Hansoft, Sellegi and Usecase S.A., and the notable addition of a Leader, IBM (to be more precise, IBM DevOps – the Agile practice has not mentioned it much other than DAD).
So it is quite clear that, after just a few years in the market, TFS not only sets the pace as the best ALM system available, but is also the one to catch, having pulled off a maneuver worthy of Viktor Ahn, the Russian/Korean short-track speed skating gold medalist with the impeccable career: a careful tour de force that has propelled it to market leadership and builds on its own momentum.
I want to thank Richard Hundhausen and Brian Blackman for candid feedback on this post.
When trying to use the Microsoft.TeamFoundation.Build.Activities.Git library, I got the following error:
The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
The detailed error message said it was looking for version 0.13.0.0 of libGit2Sharp.dll, while I noticed I had installed 0.14.0.0 through NuGet. I uninstalled it and reinstalled the expected version using NuGet. However, the error persisted.
Given that this should work under TFS, since the build system uses it, I looked for and found the version installed by TFS (at C:\Program Files\Microsoft Team Foundation Server 12.0\Tools). Using ILSpy I discovered that although it carries the same version number (0.13.0.0), it is in fact different: it has a newer dependency on the file git2-msvstfs.dll.
Apparently Microsoft shipped a "buddy build" of libGit2Sharp.dll with TFS 2013 under the same version number, instead of using the official release from https://github.com/libgit2/libgit2sharp. I later found out what the differences are, and will report on that in another post.
When doing operations against TFS repos, you should use the version of libGit2Sharp.dll shipped with TFS (installed under C:\Program Files\Microsoft Team Foundation Server 12.0\Tools), as Microsoft made private changes but did not change the version number.
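One way to make sure your project binds to the TFS-shipped copy is to reference it directly by path instead of through the NuGet package. A minimal .csproj sketch, assuming a default TFS 2013 install location (adjust the path for your machine, and remove the NuGet reference first so the official 0.13.0.0/0.14.0.0 builds do not shadow the TFS copy):

```xml
<!-- Reference the libGit2Sharp build that ships with TFS 2013 -->
<ItemGroup>
  <Reference Include="LibGit2Sharp">
    <HintPath>C:\Program Files\Microsoft Team Foundation Server 12.0\Tools\LibGit2Sharp.dll</HintPath>
    <!-- Copy the assembly to the build output so it is the one loaded at runtime -->
    <Private>True</Private>
  </Reference>
</ItemGroup>
```

Note that the native git2-msvstfs.dll it depends on must also be resolvable at runtime, for example by running from a context where the TFS Tools folder is on the search path.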
I will be presenting at the TFS Austin Users Group this month, on the 27th, about lessons learned from one of my recent projects. You can register for it at http://tfsaustinseptembermeeting2013-es2.eventbrite.com/
The presentation is on “Migrating a Mid-sized Team to TFS”:
While working at a major game/hardware/casino management system provider, the ALM team migrated a team of about 60 people from a set of disparate ALM tools – SVN, CCNet, ScrumWorks, Serena TeamTrack, Word documents, and Excel spreadsheets – to using TFS for source control, build, work item tracking, and requirements management. In this presentation we will go over the existing problems, the issues that were resolved by the change, the gotchas we found along the way, and the added benefits we got from this migration.
See you there.
Over the last six months I have been asked which one is better: TFS, based on the latest Gartner report, or Team Concert and Rally, based on the latest Forrester report (which you can get here and here)? The answer is quite simple if you actually read both. Here is the short version:
- Forrester’s Wave report comes out every two years, and it compared the latest versions of Rally and Team Concert with Visual Studio/TFS 2010, so naturally TFS landed in the second tier, as its Agile backlog and planning tools were not considered;
- Gartner’s Magic Quadrant report compares Rally and Team Concert with Visual Studio/TFS 2012, so all three are in the leadership tier. In fact Microsoft is the leader at this moment, a remarkable achievement considering that it was a late entrant in the race. Team Concert and Rally continue almost neck and neck, and it will be interesting to see what the next report will tell us.
Both reports are useful because they also take into account other ALM tool providers. Also the combination of the two outlooks tells us about the complexity of the ALM industry, which is not captured by any single report.
That said, in the specific case of TFS you should pay closer attention to the Gartner report, as it is based on the latest and greatest. Forrester also worked with Microsoft to create a new ROI study specific to TFS 2012: the Forrester “Total Economic Impact Of Microsoft Application Lifecycle Management”, which shows an impressive ROI over a three-year period.
I also recommend reading the excellent Ovum Technology Research Report: Software Lifecycle Management 2011/2012. It provides yet another profile of the leadership tier (“Shortlist” in this case) which includes TFS as well, even though the report was based on TFS 2010.
What I like about this report is its Ovum SLM Solution model, which highlights new trends such as DevOps and the growing overlap of ALM and PLM (Product Lifecycle Management). It also includes a list of vendor profiles such as Seapine and Tasktop, and their current ALM products. This is very useful if you are trying to understand industry trends as opposed to tracking a single company.
Notice how the same company/ALM suite can be in a completely different ranking depending on the report focus. That’s why I recommend paying close attention to the methodology used by each report vendor. The best way to understand the complexity of the ALM industry is to create your own composite view based on many sources. At a minimum you will need these four reports.
Just reading the marketing perspective from each company, which will obviously leverage those reports to present a biased view of its own products, will leave you in the position of an uninformed bystander in the ALM world.
If you have any comments, please send me an email and I will add them to the post.
The current issue of the ALM Magazine is out. Besides contributing as an editor, I also had the opportunity to publish one of my articles:
Enacting Scrum and Agile with Visual Studio 2012
Abstract. Find out how Visual Studio has become the tool of choice to manage your Scrum projects, and how it stays out of the way, allowing you to do Agile on your own terms instead of forcing you to adapt your development process to a tool. We will take a tour of how you can enact Scrum best practices and cycles, allowing the team to always have a clear picture of what Done should look like at the end of a sprint, using Team Foundation Server as a team communication hub.
Follow the link to get the full article: “Enacting Scrum and Agile with Visual Studio 2012”. I will be publishing it as blog posts as well, so it will be easier to reference.
As some of you might know, I have been contributing to the ALM Magazine for the past couple of months as one of its editors. The nice thing about it is that it forces me to keep abreast of what our fellow ALM experts are doing in the industry (nothing compared to Keith Denham though, as he reads all the articles made available to him, whether or not they end up being published :-)).
Tarun Arora just posted on his experience of contributing to the ALM Magazine. It is very easy as long as you have already been contributing to the community with quality blog articles, and original articles are very welcome as well.
This effort has the potential to help resolve one of the issues of the current IT environment: information overload. The editorial team has been focusing on the articles that provide the best and most current information on ALM, out of the thousands out there.
ALM Summit 2013 finished just a week ago – but I still feel as if I were going from session to session. It reminds me of playing Mass Effect 2: after you finish the missions you can still stick around and find a little treasure here and there, as you get ready for the next installment. This was a conference rich in content, and I am still exploring each of the “planetary systems” defined by each session.
Attending this conference has given me so much valuable information that I will be digesting and revisiting it for the next couple of months. I intend to watch every session that I could not attend, after they are released to the general public. The literature references alone have already added another 10+ books to my reading list. And the business contacts have been invaluable. There will be some more follow-up posts :-)
Jim starts by talking about the two computing eras of the last 20 years, and then branches into the 2010s era:
- 1990’s - Store and compute
- 2000’s - Search and browse
- 2010’s - Know and do
He then contrasted the “old magic” with the “new magic”. I don’t need to go into the old magic, because we have all lived it – but he says cool things about the new magic: it’s going to be based on radical new ways of interacting with your digital world.
Know and do
This era will be based on three things:
- Data – not the web, but your own index to the web
- Experiences – “takes the data and weaves into cool things”
- Ecosystem – Cloud and devices
Whittaker talks about the paradigm shift from generic search to specific “experiences” tied to local knowledge of data – that is, not the web, but the personal indexes to the web based on how we interact with it: location, timing, history. Underlying it, of course, is the idea that “data is currency”. This all leads to “experiences” in the sense that the data needs to be harvested by app developers together with data owners, on a local basis, to design those experiences.
Data is tied to experiences, which are tied to data gathering/harvesting, which is tied to the data ecosystem: clouds and devices. Most experiences are drawn on “canvases”: space and time. Most experiences can be mapped to both a spatial and a temporal relationship; these are the canvases on which the experiences unfold. An example of an experience:
“I need a vacation”. Where would such an experience start? Maybe in the calendar, and it does not need to leave there. Whittaker then went into a lot of interesting scenarios, such as “Decline all my meetings and show me some flights”. After this, your calendar would show flights right in the calendar, and as you choose one, the screen changes with suggestions for hotels and discounts – all within the calendar. It also shows some recommended activities that you can just pick and have neatly fit into the calendar. It ends with a nice one-week calendar holding a vacation assembled from an auction system for the best experiences you might want to live.
As a new era starts, Whittaker explained, it becomes difficult for the incumbents. We have just started, so the winner is not yet known, and he invited us to join in developing new experiences in this new paradigm.