When trying to use the Microsoft.TeamFoundation.Build.Activities.Git library I got the error:
The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
The detailed error message said it was looking for version 0.13.0.0 of libGit2Sharp.dll. I noticed that through NuGet I had installed 0.14.0.0, so I uninstalled it and reinstalled the expected version via NuGet. However, the error persisted.
Given that this assembly is used by the TFS build system, and therefore should work under TFS, I looked for and found the version installed by TFS (at C:\Program Files\Microsoft Team Foundation Server 12.0\Tools). Using ILSpy I discovered that although it has the same version number (0.13.0.0), it is in fact a different build: it has a newer dependency on the file git2-msvstfs.dll.
Apparently Microsoft added a “buddy build” of libGit2Sharp.dll to TFS 2013 with the same version number, instead of using the official release from https://github.com/libgit2/libgit2sharp. I later found out what the differences are, and will report on that in another post.
When doing operations against TFS repositories you should use the version of libGit2Sharp.dll shipped with TFS (installed under C:\Program Files\Microsoft Team Foundation Server 12.0\Tools), as Microsoft made private changes without changing the version number.
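For context, HRESULT 0x80131040 is the standard .NET strong-name mismatch between a referenced assembly and the one found on disk. When the mismatch is only a version-number difference (not the case here), the usual fix is an assembly binding redirect in the consuming application’s .config file. A minimal sketch follows; the publicKeyToken value is a placeholder, not the real LibGit2Sharp token, so check your copy’s actual token with ILSpy or sn.exe:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect requests for older versions to the version actually deployed.
             publicKeyToken below is a placeholder: look up the real one. -->
        <assemblyIdentity name="LibGit2Sharp"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-0.13.0.0"
                         newVersion="0.14.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Note that a redirect would not have helped in this particular case: the TFS “buddy build” and the official 0.13.0.0 are different binaries carrying the same identity, which is exactly why referencing the TFS-shipped copy is the right fix.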
I will be presenting at the TFS Austin Users Group this month, on the 27th, about some lessons learned from one of my recent projects. You can register for it at http://tfsaustinseptembermeeting2013-es2.eventbrite.com/
The presentation is on “Migrating a Mid-sized Team to TFS”:
While working at a major game/hardware/casino management system provider, the ALM team migrated a team of about 60 people from a set of disparate ALM tools (SVN, CCNET, ScrumWorks, Serena TeamTrack, Word documents, and Excel spreadsheets) to TFS for source control, build, work item tracking, and requirements management. In this presentation we will go over the existing problems and issues that were resolved by the change, the gotchas we found along the way, and the added benefits we got from this migration.
See you there.
Over the last six months I have been asked which one is better: TFS, based on the latest Gartner report, or Team Concert and Rally, based on the latest Forrester report (which you can get here and here)? The answer is quite simple if you actually read both. Here is the short version:
- Forrester’s Wave report is published every two years, and it compared the latest versions of Rally and Team Concert with Visual Studio/TFS 2010, so naturally TFS landed in the second tier, as its Agile backlog and planning tools were not considered;
- Gartner’s Magic Quadrant report compares Rally and Team Concert with Visual Studio/TFS 2012, so all three are in the leadership tier. In fact Microsoft is the leader at this moment, a remarkable achievement considering that it was a late entrant in the race. Team Concert and Rally continue almost neck and neck, and it will be interesting to see what the next report tells us.
Both reports are useful because they also take into account other ALM tool providers. Also the combination of the two outlooks tells us about the complexity of the ALM industry, which is not captured by any single report.
That said, in the specific case of TFS you will want to pay close attention to the Gartner report, as it is based on the latest and greatest. Also, Forrester worked with Microsoft to create a new ROI report specific to TFS 2012: the Forrester “Total Economic Impact Of Microsoft Application Lifecycle Management”, which shows impressive ROI over a 3-year period.
I also recommend reading the excellent Ovum Technology Research Report: Software Lifecycle Management 2011/2012. It provides yet another profile of the leadership tier (“Shortlist” in this case) which includes TFS as well, even though the report was based on TFS 2010.
What I like about this report is its Ovum SLM Solution model, which highlights new trends such as DevOps and the growing overlap of ALM and PLM (Product Lifecycle Management). It also includes a list of vendor profiles such as Seapine and Tasktop, and their current ALM products. This is very useful if you are trying to understand industry trends as opposed to tracking a single company.
Notice how the same company/ALM suite can be in a completely different ranking depending on the report focus. That’s why I recommend paying close attention to the methodology used by each report vendor. The best way to understand the complexity of the ALM industry is to create your own composite view based on many sources. At a minimum you will need these four reports.
Reading only the marketing perspective from each company, which will obviously leverage those reports to present a biased view of its own products, will leave you an uninformed bystander in the ALM world.
If you have any comments, please send me an email and I will add them to the post.
The current issue of the ALM Magazine is out. Besides contributing as an editor, I also had the opportunity of publishing one of my articles:
Enacting Scrum and Agile with Visual Studio 2012
Abstract. Find out how Visual Studio has become the tool of choice to manage your Scrum projects, and how it stays out of the way, allowing you to do Agile on your own terms instead of forcing you to adapt your development process to a tool. We will take a tour of how you can enact Scrum best practices and cycles, allowing the team to always have a clear picture of what Done should look like at the end of a sprint, by using Team Foundation Server as a team communication hub.
Follow the link to get the full article: “Enacting Scrum and Agile with Visual Studio 2012”. I will be publishing it as blog posts as well, so it will be easier to reference.
As some of you might know, I have been contributing to the ALM Magazine for the past couple of months as one of its editors. The nice thing about it is that it forces me to keep abreast of what our fellow ALM experts are doing in the industry (nothing compared to Keith Denham though, as he reads all the articles made available to him, whether they end up published or not :-)).
Tarun Arora just posted about his experience of contributing to the ALM Magazine. It is very easy as long as you have already been contributing quality blog articles to the community, and original articles are very welcome as well.
This effort has the potential to help resolve one of the issues of the current IT environment: information overload. The editorial team has been focusing on the articles that provide the best and most current information on ALM, out of the thousands out there.
ALM Summit 2013 finished just a week ago, but I still feel as if I were going from session to session. It reminds me of playing Mass Effect 2: after you finish the missions you can still stick around and find a little treasure here and there as you get ready for the next installment. This was a conference rich in content, and I am still exploring each of the “planetary systems” defined by each session.
Attending this conference has given me so much valuable information that I will be digesting and revisiting it for the next couple of months. I intend to watch every session that I could not attend, after they are released to the general public. The literature references alone have already added another 10+ books to my reading list. And the business contacts have been invaluable. There will be some more follow-up posts :-)
Jim starts by talking about the two computing eras in the last 20 years, and then branches into the 2010’s era:
- 1990’s - Store and compute
- 2000’s - Search and browse
- 2010’s - Know and do
He then contrasted the “Old magic” with the “New magic”. I don’t need to go into the old magic because we have all lived it, but he says cool things about the new magic: it is going to be based on radical new ways of interacting with your digital world.
Know and do
This era will be based on three things:
- Data – not the web, but your own index to the web
- Experiences – “takes the data and weaves into cool things”
- Ecosystem – Cloud and devices
Whittaker talks about the paradigm shift from generic search to specific “experiences” tied to local knowledge of data, that is, not the web but the personal indexes to the web based on how we interact with it: location, timing, history. Underlying it all, of course, is that “Data is currency”. This leads to “experiences” in the sense that the data needs to be harvested by app developers together with data owners, on a local basis, to design those experiences.
Data is tied to experiences, which are tied to data gathering/harvesting, which in turn is tied to the data ecosystem: clouds and devices. Most experiences are drawn on “canvasses”: space and time. That is, most experiences can be mapped to both a spatial and a temporal relationship, canvasses for the experiences to unfold on. An example of an experience:
“I need a vacation”. Where would such an experience start? Maybe in the calendar, and it does not need to leave there. Whittaker then went into a lot of interesting scenarios, such as “Decline all my meetings and show me some flights”. After this your calendar would show flights, and as you choose one, the screen changes to show suggestions for hotels and discounts, all in the calendar. It also shows some recommended activities that you can just pick and have neatly fit into the calendar. It ends with a nice one-week calendar with a vacation assembled from an auction system for the best experiences you might want to live.
As a new era starts, Whittaker explained, things become difficult for the incumbents. We have just started, so the winner is still not known, and he invited us to join in developing new experiences in this new paradigm.
Peter’s presentation was to me a direct segue from my observations on Leffingwell’s presentation. It was about making programming fun again: motivating people so they get to that next level where you look forward to Monday, where you delight your colleagues with the best you can do, where colleagues are friends, where you work hard and play hard. “Have fun!”
His argument is that we need to focus on the long game but not forget the short game.
“Just in time planning, definition, estimation and design are good but people need to understand the roadmap”
So when teams don’t see the big picture, they will make decisions to account for this uncertainty. That reminded me of the “shared vision” that Jim McCarthy talked about: with it you can make short-term decisions and still align yourself with the long term, have a north for your compass.
Add a personal perspective that gets you excited about “changing the world”. That brings me to a reflection on being part of a team, an organization, that provides value to society. Are we helping people be happier, better themselves?
“Tools are important, but not the most important.”
I talked at length about this in my talk at the Scrum.org Face to Face, and it seems to be a recurring theme at this ALM Summit. We concurred that “people and interactions” are the substance of bringing motivation back to the team. “Tools should be supportive to the process”. If the tool gets in the way, if it brings you pain, if it takes the fun out of what you are doing, then something is wrong (for those on the Pulse team: Peter is a program manager on Visual Studio/TFS; feel free to send me a summary of your pain, and I will send it to him).
Peter then moved to talking about “Simplicity doesn’t mean stupid”:
“Just Barely Good Enough==the most effective possible
Apply this rule to everything in your project, team, tools & artifacts”
This point by Scott Ambler is better understood by looking at the following diagram:
Listening to Peter talk about this, the value of this principle becomes very convincing. However, I also respect Rebecca Wirfs-Brock as one of the pioneers of Agile, from before Agile was even called that, and she argues the opposite: http://wirfs-brock.com/blog/2006/09/29/barely-good-enough-doesnt-cut-it/. I will need to think more about this one.
“Beware Unintended Consequences – your team will react to anything that hurts or scares them”
As an example he mentions the expression “don’t break the build!”. Developers will react to that (“no one wants to have a toilet seat or crocodile doll in their office”) by imposing restrictions on themselves: checking in less frequently, checking in bigger chunks, which leads to big merges. It should instead have been stated as “You broke the build? OK, but fix it immediately”. Why the short phrase? Because the better statement was just too long; “Don’t break the build” is easier to remember. And to scare with. Another classic example of this is metrics: use them wisely and for your own understanding, because the moment you try to drive motivation based on metrics, the team will contort their behavior to satisfy them instead of focusing on the pleasure of shipping something new.
My take on his idea is to think about the unintended consequences of rules. If they bring the team down, then recognize them, think about them, and change them; don’t let them spoil the team’s motivation because of tradition and legacy imposed on you.
Then Peter talked about Dan Pink’s ideas about motivation, which he summarized as:
· Autonomy – Empower and trust people to make decisions
· Mastery – Give your team members the opportunity to improve and grow their craft
· Purpose – Show them the big picture and their place in it
Funnily enough, Eric Weber had talked about the same ideas in his presentation (“Scrum and Drive”) during the Scrum Face to Face meeting. It seems to be a running theme across the minds of Agile thinkers.
Finally he talked about how “Outcomes are more important than processes”. What I understood from this is that, at the end of the day, we should think of ourselves not as “cost centers” (especially if you are in IT) but as “business differentiators, as contributors”. It is not nice to be considered a burden from the outset; motivation only goes down from there. Why stick with this tradition dictated by process when what matters is motivating people to the next level of excellence?
Sean Laberee (lead program manager on SharePoint and Office development tools) started by describing the impression most people have of the current state of the art of SharePoint development, asking: what do these have in common?
- Dentist visit
- Public speaking
- SharePoint development
So the purpose of his talk was to show that SharePoint development has become Agile, and that most of the pain/panic associated with it can go away.
The main topics were the following:
- A nice way to find relatively cheap business apps for common needs
- Nice sandbox app isolation – apps from store can’t take over SharePoint data without explicit permission
Better App Management
- Nice interface to manage app licenses
- Both a catalogue and license manager
- Configuring a developer box used to take no less than 5 hours; now it takes 15 minutes
- Microsoft NAPA is a SharePoint app for development that does not require a local SharePoint installation: you can start in Office 365 and develop for Office 365
New Cloud App model
- Provides a simpler, neat model for app development (Office and SharePoint)
- SharePoint app development now looks seamlessly like normal web development
- See more comments below on options for hosting
Continuous Integration Improvements
SharePoint app hosting options:
- SharePoint-hosted:
  - Supported on premises and on Office 365
  - Most flexible if you don’t need server code
- Autohosted:
  - A new Azure website and SQL Azure DB per app instance
  - Best suited for store apps for Office 365
- Provider-hosted:
  - Use any provider to host your web server
  - Supported on premises and on Office 365
  - One server supports all instances of the app
He finished by mentioning that this talk was about making SharePoint more like normal web development; that’s why not much of SharePoint itself was shown in the presentation. I agree: it looked a lot more Agile in the sense that it flowed naturally, with none of the snags/impediments that normally come from infrastructure. I will finally go back to the dentist, err, SharePoint development in the near future.
Leffingwell’s presentation was on the Scaled Agile Framework (SAFe) and an explanation of Lean applied to software development. He showed how Scrum fits neatly with Lean principles:
“Scrum is founded on Lean
- Cross functional teams
- Time boxes to limit WIP
- Sprints provide fast feedback
- Empowered, local decision making
- Colocation reduces batch size and communication overhead
XP is quintessentially Lean [etc]”
Also, while talking about applying WIP constraints, he mentioned the following:
“Timeboxes prevent uncontrolled expansion of work making wait times predictable.”
This to me resolves one of those artificial conflicts between Scrum and Lean/Kanban: the claim that timeboxes are not useful anymore. Leffingwell provides a very fresh view on a subject that tends to get polarized every once in a while.
However, I am still pondering whether ideas that originate in manufacturing are fully applicable to an intellectual endeavor such as software development. To me the bottleneck is managing creative people so as to raise in them the motivation to do their best work; excellence follows naturally. Lean seems to me (at this point in my research) to be more about clearing away the hurdles caused by administrators who imagine they can manage developers the way they manage robots on a factory floor. An understanding of queuing theory helps alleviate the burden caused by managers who want to over-optimize external measures of efficiency, such as hours, backlog size, etc., that is, physical coordination.
Don’t get me wrong, this is useful to put some restraints on the “Office Space” kind of manager. However, the effect on motivation is just to free up the developer’s mind so they can have the ideal proposed by Kent Beck as the “40-hour work week”; the effect, to me, is essentially clearing the road. The road trip has yet to be taken.
I was listening again to Jim McCarthy’s presentation on how he achieved a “shared mindset” with the Visual Studio team. That was real motivation, the kind that can turn a developer who is just doing the minimum to get by until he finds something better into someone ten times more productive, only because now he is motivated to be. This is the kind of productivity management the software industry needs today, and will always need.