Distributed version control systems (DVCS) have made huge gains in adoption over the past few years, and GitHub in particular has brought DVCS to the geeky masses in a way that suggests the only centralized VCSs left will soon be legacy systems. I know there are still a few reasons some folks prefer the simplicity of something like Subversion (SVN), and I don’t think there should be any rush to migrate large legacy code bases from more “modern” centralized tools like SVN or Perforce.
On the other hand, Git et al. are really not that complicated, and the ease of branching and merging in a DVCS compared to SVN enables a great workflow for just about everything Agile, from TDD to Continuous Integration. If you are so unfortunate as to be running an older tool like ClearCase, compare it to the raw speed and branching/merging experience of Git or Mercurial, and then compare the price … well, let’s just say that IBM’s got a good thing going.
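For anyone who hasn’t tried it, that branch-and-merge cycle in Git really is just a handful of commands. Here is a minimal sketch run in a throwaway repository; the branch name feature-x and the file notes.txt are placeholders:

```shell
# A minimal sketch of Git's branch-and-merge cycle, in a throwaway repo.
# The branch "feature-x" and file "notes.txt" are placeholder names.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > notes.txt
git add notes.txt && git commit -q -m "initial commit"
mainline=$(git symbolic-ref --short HEAD)   # "master" or "main"
git checkout -q -b feature-x                # create and switch to a topic branch
echo change >> notes.txt
git commit -q -am "Make a change on feature-x"
git checkout -q "$mainline"                 # back to the mainline
git merge -q feature-x                      # merge the topic branch back
git branch -d feature-x                     # delete the merged branch
```

Compare that to the ceremony of branching in SVN or ClearCase and the appeal is obvious.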
Sure, DVCS is not perfect; storing large binary files, for example, is a pain (why are you doing that again?). Also, like any tool, it takes folks some time to wrap their heads around new concepts before they are comfortable, and by the time you scale to development organizations that number in the hundreds (or even thousands), there are going to be some organizational and engineering challenges. Nevertheless, in this same environment, Agile is everywhere and centralized VCS faces some pretty serious challenges for large, geographically distributed organizations (last I checked, globalization isn’t going anywhere). From my perspective, the writing is on the wall; it’s only a matter of time …
That said, “only a matter of time” can be an eternity in the enterprise world, and even though the productivity benefits of a DVCS are pretty clear, new technology always faces an uphill battle in large organizations, especially when existing tools are seen as “good enough”. Here at OpenMake, we know all too well how the enterprise can ignore hidden time sinks with negative impacts on the ALM process (see manually scripted builds). So I’m curious: what is the biggest thing holding back DVCS from getting more traction in the enterprise?
The best answer I can come up with is that, like almost everything in enterprise IT, it’s often more a matter of people and the ecosystem around a technology rather than any one thing specific to the technology itself. We see this all the time with our ALM automation solutions. Once enough people are familiar with the ideas and there are enough resources for those who aren’t, the switch flips and it becomes a no-brainer for the organization.
So, if all it takes is a critical mass of knowledgeable folks, what will it take for the enterprise IT market to get there on DVCS? Communities like GitHub and the broader open source community certainly are driving a lot of interest in DVCS, especially for the younger generation. What’s more, I’ve found many SCM administrators in the enterprise who say that their developers are already using DVCS, whether it’s supported or not. On the side of visibility and mindshare, I think we are almost there.
While it’s true you have to have the internal knowledge and expertise, most businesses will also balk at any tool that lacks rich tooling and vendor support. With Git, at least, I get the sense that the IDE integrations and GUI tools are at (or close to) the level of maturity needed to make it a good choice for enterprise development teams, and of course everyone sells Git support these days. But when you are talking DevOps (see SCM and ALM), the developer story is only part of the solution. What about what the rest of the organization needs out of VCS?
I think tools like CollabNet’s TeamForge and Microsoft’s Team Foundation Server are on the right track by adding DVCS (in this case, Git) support to existing enterprise-class ALM tools. Access control, issue tracking, and the capability to integrate with other important pieces of the ALM workflow (say, Meister and Deploy+) more or less completes the tooling puzzle, but even still, Microsoft and CollabNet will both tell you the move to DVCS is still not for everybody.
Even if you consider DVCS superior in every way (I’m not interested in that holy war), the cost of a large VCS migration project is not something to be taken lightly. The value proposition and migration path have to be crystal clear. You do not want to be THAT person who championed a major infrastructure change only to discover that your existing processes are so closely tied to the old systems (e.g. our old friend, static build and deployment scripts) that the project runs seriously over budget, and oh, by the way, audit compliance means you have to maintain both systems for years to come. There’s a fine line between running older systems because they’re well understood and battle-tested and being stuck with obsolete technology that represents its own form of technical debt.
All things considered, I’m not sure that 2013 is THE year for DVCS in the enterprise, but I think all the components are there and I would not bet against it.
Sessions to Attend:
MC110SN – It’s Just Ops – Understanding the DevOps Challenge
Presented by Tracy Ragan, COO OpenMake Software
Tuesday April 23, 2013 11:15 – 12:15
This presentation will cover the DevOps challenge for the distributed platform and review how the mainframe platform met this challenge close to 30 years ago with CA Endevor Software Change Manager. Attendees will learn how to begin analyzing their distributed DevOps challenge with five easy steps. The use of OpenMake Software’s Dynamic DevOps with CA Harvest Software Change Manager, CA Server Automation and CA Client Automation will also be reviewed.
Visit OpenMake Software on the Exhibit Floor:
Sunday, April 21st, 5:00 p.m. to 7:00 p.m. – Welcome Reception and Exhibition Center Grand Opening
Monday, April 22nd, 12:00 noon to 4:45 p.m. – Lunch Served
Tuesday, April 23rd, 12:00 noon to 5:00 p.m. – Lunch Served
Wednesday, April 24th, 12:00 noon to 4:45 p.m. – Lunch Served
Get informed about DevOps, including its origin and history. This webinar, Mastering DevOps Challenges, covers the basics of DevOps and gives you tips on analyzing your own process to determine what you will need to do to move from ALM to DevOps.
The latest rollup patch for 7.4 can be found at: http://www.openmakesoftware.com/support/updates-01242013.zip. It is for AIX, HP-UX, Linux, Solaris and Windows.
This patch should be applied to all tiers, including the KB Server, Clients and Remote Agents.
I’ve recently been published in Better Software Magazine on the topic of DevOps. Go to:
The article is on Page 22. Enjoy!
Chicago, IL – December 4, 2012 – OpenMake Software announced today the acquisition of Trinem, a DevOps solution provider based in Edinburgh, Scotland, that specializes in software deployment solutions and application lifecycle management consulting.
This strategic acquisition will bring advanced deployment technology to the OpenMake Software DevOps suite as well as provide improved services to customers in the European market and allow OpenMake Software to expand its current European outreach.
“The Trinem acquisition is a tremendous complement for OpenMake Software. It expands both our product offering in the Software Deployment space and our ability to better service the European DevOps Market,” explains Stephen King, CEO of OpenMake Software.
Like OpenMake Software, Trinem has partnered with CA Technologies in providing add-in benefits to the CA Service Management product line. While OpenMake Software has specialized in Build Automation technology, Trinem has focused on the Deployment challenge. Integrating these solutions creates a streamlined build to deploy solution that will support a large variety of development platforms, versioning tools and production release environments.
James Wilson, CEO of Trinem stated, “We are very excited about joining the OpenMake Software team as our joint technologies provide a complete end-to-end solution, from Build through Deploy for customers looking to solve the DevOps challenge. Trinem’s platform agnostic, agentless products will set OpenMake Software apart from any other vendor in the DevOps and Release Automation space.”
I just watched a webinar by UC4. The point was clearly made that deploy scripts are, in general, a bad idea. Now, I’ve seen and written both deploy and build scripts. Build scripts can be literally pages long, with lots of little pieces and parts. Deploy scripts are, quite honestly, not so difficult. So if deploy scripts are so bad, why do companies like UC4 think that build scripts are OK? Scripts are scripts. The same problems found with deploy scripts are found with build scripts. Automating the generation of both from standardized templates is key to creating a consistent build-to-deploy environment.
Build, Package, Deploy – your DevOps solution needs to do it all – automatically and without build or deploy scripts.
Is there a cost to Open Source? Here is a great article on this topic from an open source expert.
We often get questions about the use of reusable workflows. OpenMake Mojo supports Reusable Workflows for both Builds and Deploys. A Reusable Workflow lets you define a set of activities that can be re-used by other Workflows, which is particularly handy in the deployment process. Reusable Workflows allow for granular organization of functionality and a high level of re-usability. A Reusable Workflow is defined in the same way as a Nested Workflow; the difference is in how they are used. A Reusable Workflow is a set of Workflow Activities that may be used in the same way by multiple Project Teams. Instead of re-defining those Activities for every Workflow, a Workflow can simply call the Reusable Workflow to perform them. If you make a change in the Reusable Workflow, any Workflow that uses it will pick up the change at execution time. When running the Workflow Monitor, you will see each Workflow called in the correct order, and you will see each step inside the Reusable Workflow execute.
In order to use Reusable Workflows, the environment variable OMSUBMIT_MAX_USER_PROC must be set to a value of 3. It must be set in the shell that launches the omsubmit executable. For example: OMSUBMIT_MAX_USER_PROC=3
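As a concrete sketch on a Unix-like shell (your actual omsubmit arguments will depend on your installation, so the launch line below is only a comment):

```shell
# Enable Reusable Workflows: OMSUBMIT_MAX_USER_PROC must be set to 3
# in the same shell that launches the omsubmit executable.
export OMSUBMIT_MAX_USER_PROC=3
echo "OMSUBMIT_MAX_USER_PROC=$OMSUBMIT_MAX_USER_PROC"
# ...then launch omsubmit from this shell, with your usual arguments:
# omsubmit ...
```

On Windows, the equivalent in a cmd.exe session is `set OMSUBMIT_MAX_USER_PROC=3` before running omsubmit.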
OK, all you DevOps experts, a bit of a history lesson. Did you know that the name of the mainframe tool Endevor comes from “ENvironment for DEVelopment and OpeRations”? That is why it has no “a” in the name. The first DevOps tool on the market, and it is 30 years old and still standing! The distributed side could learn from this!