Are your builds really faster?

The October 2013 issue of SD Times has an article called “The reconstruction of deployment” by Alex Handy.  He starts the article by talking about builds: “In days past, there was one sure-fire, always-working plan for building software: start the build, then go get some coffee.  Sometimes building even meant it was time to go home.”   The article talks about continuous build and deploy and gives CI credit for improving the speed of the build and deploy process.

However, you must ask yourself, “Are my builds really faster?”  The build itself, the process of compiling and linking code, has not changed at all.  Yes, we have Ant and Maven and not just Make, but in essence CI does not change the build scripts themselves.  In Alex’s article he alludes to a time in our past when builds would take hours to run.  Guess what: they still take hours to run when a script is driving them.  The same build script that used to be executed manually and took hours to run is now simply executed via Jenkins, and it still takes hours to run.  A build script executed by Jenkins runs no faster than a build script executed any other way.

In the article, Brad Hurt, VP of Product Management at AccuRev, confirms this.  He explains that you need control over the different levels of code maturity so that, in the case of an 8-hour build, you don’t have “random developer” code checked in that pollutes the build. In this reference to “build,” Brad is talking about the compile and link process.  Some people refer to the build as the set of steps executed before the compile, the compile itself, and the steps after the compile, but Brad’s usage is more accurate: he is talking about a compile process that can take hours to run, which is not unusual for large projects.

The goal, really, is to never have an 8-hour build.   As we mature in DevOps, we are moving away from one-off scripts, particularly around deploy.  OpenMake Meister moves away from scripts in both build and deploy. This puts intelligence into the build itself, so it can run incrementally, with acceleration and parallelization cutting build times substantially. With incremental builds, an 8-hour build can become a 10-minute build. That incremental processing is passed on to the deploy, so even deploys are incremental rather than monolithic.
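
To make “incremental with parallelization” concrete, here is a minimal sketch of the idea. It is not Meister’s engine, and the source layout and compile command are hypothetical: only sources whose object files are missing or out of date are rebuilt, and the stale ones are compiled concurrently.

```python
# Minimal sketch of an incremental, parallel compile step -- the idea behind
# build avoidance, not OpenMake Meister's actual engine.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC_DIR = Path("src")              # hypothetical source tree
OBJ_DIR = Path("build/obj")

def obj_for(src: Path) -> Path:
    return OBJ_DIR / src.relative_to(SRC_DIR).with_suffix(".o")

def is_stale(src: Path) -> bool:
    """A target is stale if its object file is missing or older than the source."""
    obj = obj_for(src)
    return (not obj.exists()) or obj.stat().st_mtime < src.stat().st_mtime

def compile_one(src: Path) -> None:
    obj = obj_for(src)
    obj.parent.mkdir(parents=True, exist_ok=True)
    # Hypothetical compile command; a real build system also tracks header
    # dependencies, compiler flags and generated files.
    subprocess.run(["cc", "-c", str(src), "-o", str(obj)], check=True)

sources = sorted(SRC_DIR.glob("**/*.c"))
stale = [s for s in sources if is_stale(s)]
print(f"{len(stale)} of {len(sources)} sources need rebuilding")

with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    list(pool.map(compile_one, stale))
```

The point is the dependency knowledge, not the script: when the build system knows exactly what changed, an 8-hour full rebuild collapses into the few minutes needed for the affected targets.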

So let’s stop kidding ourselves.  A Jenkins build is no faster than the script it is calling. And if that script cannot support incremental changes (an agile practice) or parallelization to speed up monolithic compiles, then you have a really cool CI process with a very slow back end.


OpenMake Software today announced the 7.5 release of its market-leading Meister build automation, Mojo workflow management and CloudBuilder provisioning products, together with its new Deploy+ offering, which together comprise its Dynamic DevOps Suite. Delivering a consolidated tool chain for process automation, continuous build and continuous deploy, the Dynamic DevOps Suite offers a model-driven framework for simplifying the hand-off of the software build and delivery process from development teams to production control.

Tracy Ragan, COO of OpenMake Software, explains: “We have expanded our model driven framework for managing Builds into the Deployment realm. Our Dynamic DevOps suite substantially reduces the use of one-off build and deploy scripts, delivering a more reliable and transparent method of delivering binaries.”

The Dynamic DevOps Suite includes both Build Services and Deploy Services that create a standard process for building and deploying applications. It includes standard models for delivering to WebSphere, Tomcat, Microsoft IIS and other common server environments. Standard process workflows can be defined and reused from development through production environments, with dynamic changes addressing the uniqueness of each environment.

Why Dynamic? “Defining a reusable, standardized process across the lifecycle from build through deploy is the ultimate goal in achieving DevOps. Changes between environments for builds, test execution and deployments should be addressed dynamically, without human intervention or one-off scripts. We uniquely achieve this level of automation for build and now deployments with our 7.5 release,” explains Steve Taylor, CTO of OpenMake Software.

The Dynamic DevOps Suite is now available for download from http://www.openmakesoftware.com/download/

  • Filed under: OpenMake Software News
  • Watch this Google video from its education series.

    They have written a homegrown process that is extremely similar to OpenMake Meister.  Build rules, the elimination of scripts, incremental processing, management of libraries, parallelization and distribution of workload are all shown.  The good news is you do not need to write this process on your own; you can use Meister instead.  Meister solves all the problems and provides all the features covered here.   So yes, your builds can be intelligent too.

  • Filed under: Build and Deploy
  • SVN Importer

    On and off over the last few years I have been working with the open-source SVN Importer tool from Polarion to help customers migrate to Subversion (SVN) from legacy version control systems (VCSs) like CVS, Serena PVCS, Borland StarTeam, IBM ClearCase and MKS Integrity.

    In this and future posts, I’m going to share my experience with this tool.  While the tool has its limitations, on the whole its strengths make it worth your time if you find yourself searching for a so-called “full history” migration from one of these other VCSs to SVN (more on the dreaded “full history” term a little later).  Furthermore, you could also use this tool as a stepping stone to a legacy VCS to Git migration, since the well-regarded svn2git tool provides high-quality conversion between SVN and Git.

    At any rate, for right now I’m simply going to go over the framework and model that SVN Importer uses, and then touch on the high-level features and benefits, as well as some of the limitations and gotchas.  Expositions on migrations from specific VCSs will (hopefully) follow in later posts.  I’m also working on getting some of the updates I’ve made to the SVN Importer code base published to GitHub, so stay tuned for news on that as well.

    How it works

    To start, it helps to understand the challenges we face in trying to convert from these legacy VCSs to SVN.  All of the older systems mentioned above handle source revisions at the file level, as opposed to SVN, which records revisions on the repository as a whole.  SVN’s style allows multiple file updates to be consolidated into a single repository revision, sometimes referred to as a changeset.  The older style of defining independent file revisions for each file changed in a commit goes all the way back to RCS, and over time that model has shown its shortcomings.   For one, commits in these systems are generally not atomic or transaction-based, so the systems have potential integrity issues.  In the usability department, I think most users agree that having the notion of changesets in your VCS is preferable to tracking individual file revisions (though some of the above systems do have some changeset capabilities).

    Beyond functionality, this fundamental difference in models presents some challenges for translating file revisions from these systems into SVN repository revisions.  For some legacy repositories (e.g. CVS and PVCS), there is no other signaling metadata that can help the tool group file revisions into SVN repository revisions, so each file revision is brought over as a unique repository revision.  Other systems have various types of signaling metadata, such as change requests, that can be used to group otherwise disparate file revisions into single repository revisions.  In general, every source VCS provider has its own model in the software, which is then transformed into the SVN model in order to generate an SVN-compatible dump file from requests to the source VCS.
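
    As a rough illustration of that transformation (this is not SVN Importer’s provider code; the record fields and time window are assumptions), a converter with no change-request metadata to lean on might fold file-level revisions into repository revisions by matching author and log message within a small time window:

    ```python
    # Illustration only: fold per-file revisions (RCS/CVS/PVCS style) into
    # changeset-style repository revisions when no explicit change-request
    # metadata is available to group them.
    from dataclasses import dataclass
    from itertools import groupby

    @dataclass
    class FileRevision:              # assumed shape of the source VCS metadata
        path: str
        author: str
        message: str
        timestamp: float             # seconds since the epoch

    WINDOW = 300                     # same author/message within 5 minutes

    def group_into_changesets(file_revs):
        changesets = []
        revs = sorted(file_revs, key=lambda r: (r.author, r.message, r.timestamp))
        for _, group in groupby(revs, key=lambda r: (r.author, r.message)):
            current = []
            for rev in group:
                if current and rev.timestamp - current[-1].timestamp > WINDOW:
                    changesets.append(current)
                    current = []
                current.append(rev)
            if current:
                changesets.append(current)
        # order the resulting repository revisions chronologically
        return sorted(changesets, key=lambda cs: cs[0].timestamp)
    ```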

    The Good Stuff

    While each of SVN Importer’s individual VCS providers has multiple features and configuration options, globally the tool has a pretty limited set of options, though I must say this is not necessarily a bad thing.  At any rate, the one feature that truly stands out as both appealing and actually usable is the capability to do incremental conversions.  This allows a conversion to run once to convert all current history to an SVN dump file and then pick up the process at a later time to convert only what has changed since.  This gives you some flexibility if you need to allow developers to continue working while the conversion process runs.  You can test against the converted dump for a while to be sure it passes muster (tags are accurate, etc.), then update the conversion incrementally and make your production transition at a moment’s notice.  This way even the biggest source repository can be converted to a single SVN repository with all of its history and a minimum of disruption to development activities.
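
    The bookkeeping behind an incremental run presumably looks something like the sketch below (an assumed approach, not SVN Importer’s actual implementation): persist a high-water mark after each conversion and, on the next pass, only translate the source revisions that are newer.

    ```python
    # Sketch of the incremental-conversion idea; the state file is hypothetical
    # and the changeset shape reuses the grouping sketch above.
    import json
    from pathlib import Path

    STATE_FILE = Path("conversion_state.json")    # hypothetical state file

    def load_mark() -> float:
        if STATE_FILE.exists():
            return json.loads(STATE_FILE.read_text())["last_timestamp"]
        return 0.0                                # first run: convert everything

    def save_mark(ts: float) -> None:
        STATE_FILE.write_text(json.dumps({"last_timestamp": ts}))

    def incremental_convert(changesets, to_dump_record):
        """changesets: chronologically ordered changesets (lists of FileRevision).
        to_dump_record: turns one changeset into an SVN dump revision record."""
        mark = load_mark()
        new = [cs for cs in changesets if cs[0].timestamp > mark]
        records = [to_dump_record(cs) for cs in new]
        if new:
            save_mark(new[-1][0].timestamp)
        return records
    ```

    The full dump and each incremental dump are then loaded into the target repository in order (for example with svnadmin load), which is what lets the converted repository keep pace with ongoing development until cut-over.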

    As a general feature, the other standout with this tool is the overall quantity and quality of the metadata that is brought over.   In most cases, the tool is explicitly programmed to read the various source VCS metadata such as commit date and time, commit user, tags, branches, Change Request (CR) numbers, etc.  Generally this is all the data you could ask for from the source VCS.  In some cases I have had to extend the tool to bring over additional data, and since it is Apache-licensed software and fairly well designed, it is easy enough to make these adjustments if you know Java and the tools that the source VCS provides for gathering metadata.  Usually the metadata is read via the source tool’s command-line interface (CLI), but some tools also provide API support.

    But I digress … the real beauty of this tool is the quality of how the source VCS metadata is translated into SVN.  Because the software knows the syntax for writing an SVN dump file, the converted SVN repositories are truly remarkable.  All commit dates and times are accurate and associated with the correct users, and other metadata is stored as part of the SVN commit comments or SVN properties.  For all intents and purposes, the converted repository has an accurate and complete history of the source repository, only in SVN format.
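
    For a flavor of where that fidelity comes from, the sketch below (an illustration, not the tool’s serializer) shows how a source commit’s author, date and log message can be encoded as revision properties in an SVN dump file:

    ```python
    # Illustrative encoder for the revision-property block of an SVN dump file
    # (dump format version 2): this is where author, date and log message land.
    def encode_props(props: dict) -> bytes:
        out = b""
        for key, value in props.items():
            k, v = key.encode("utf-8"), value.encode("utf-8")
            out += b"K %d\n%s\nV %d\n%s\n" % (len(k), k, len(v), v)
        return out + b"PROPS-END\n"

    def revision_record(number: int, author: str, date_iso: str, log: str) -> bytes:
        props = encode_props({
            "svn:author": author,
            "svn:date": date_iso,          # e.g. "2012-06-01T14:03:00.000000Z"
            "svn:log": log,
        })
        header = (f"Revision-number: {number}\n"
                  f"Prop-content-length: {len(props)}\n"
                  f"Content-length: {len(props)}\n\n").encode("utf-8")
        return header + props

    # Example: one converted revision carrying the source VCS's commit metadata.
    print(revision_record(1, "alice", "2012-06-01T14:03:00.000000Z",
                          "Imported from the legacy VCS (CR-1234)").decode())
    ```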

    No Silver Bullet

    The downsides to this tool all have to do with its performance.  My biggest gripe is that some of the source provider models are inefficient and bloated (though in one case I encountered, I consider this mostly the fault of the legacy VCS for having weak command-line tools and a bloated API).  Regardless of which source provider you use, since the process of transforming the source model to the SVN model happens in memory, if you have a repository with hundreds of thousands (or millions) of files and/or file revisions, the memory usage of the process can balloon rather quickly, especially when there are also many tags and branches.  In one extreme case I have seen the memory requirement climb past 24GB, though this was for a project with close to 10 million file revisions, hundreds of branches, and thousands of tags.

    Besides the memory footprint, the processing time for very large projects can also become prohibitive.  As previously mentioned, that’s one situation where the incremental import feature can help immensely. Nevertheless, at this point it must be said: you should probably stay away from this tool if you want to convert repositories with millions of source revisions and cannot dedicate the necessary hardware (16GB+ of RAM) and time (roughly 2-3 days of processing per million revisions) to the problem.

    Then again, given the relatively small number of tools out there for this job, your options for these conversions are rather limited.  I have written simple VCS conversion scripts from scratch before.  If your needs are simple, the DIY approach is certainly doable for these legacy tools.  On the other hand, if you have to write a from-scratch full history conversion program, migrating from a file revision VCS to SVN, supporting robust metadata migrations, tags and branches … you’re gonna have a bad time.

    Of the other free tools out there, CVS users may of course be better off with cvs2svn.  While I’ve heard that tool is not without its wrinkles, I’ve had good success with its cvs2git sub-module on smaller CVS repositories.  For these old-style enterprisey VCSs, the only other tool out there for SVN conversions that I’m aware of is the cc2svn tool.  As I understand it, this tool is only capable of converting the history of a single ClearCase view at a time, but given its lightweight Python implementation, it may be a nice alternative for ClearCase users with very large repositories that cannot use SVN Importer.

    As a final note, I want to make clear that I believe the term “full history” conversion can be quite misleading.  Whenever you perform a data migration on the scale of migrating VCSs, all source code data should be preserved, but you cannot avoid some changes in, at the very least, the form of the metadata.  If your organization has strict requirements that your VCS data and metadata be maintained for a certain number of years for audit purposes, a migration of your data using a tool like SVN Importer may or may not get you out of that obligation.

    Whew … OK, that’s all for now.  Stay tuned for further posts and please let me know what you think in the comments.

  • Filed under: DevOps
  • DVCS in the Enterprise: Are we there yet?

    Distributed version control systems (DVCS) have made huge gains in adoption over the past few years and GitHub in particular has really brought DVCS to the geeky masses in a way that indicates to me that soon enough the only centralized VCSs left will be legacy systems.  I know there are still a few reasons some folks prefer the simplicity of something like Subversion (SVN) and I don’t think there should be any rush to migrate large legacy code bases from more “modern” centralized tools like SVN or Perforce.

    On the other hand, Git et al. are really not that complicated, and the ease of branching and merging in a DVCS compared to SVN enables a great workflow for just about everything Agile, from TDD to Continuous Integration.  If you are unfortunate enough to be running an older tool like ClearCase, compare it to the raw speed and branching/merging experience of Git or Mercurial, and then compare the price … well, let’s just say IBM’s got a good thing going.

    Sure, DVCS is not perfect; storing large binary files, for example, is a pain (why are you doing that again?).  Also, like any tool, folks take some time to wrap their heads around new concepts before they are comfortable, and by the time you scale to development organizations that number in the hundreds (or even thousands), there are going to be some organizational and engineering challenges.  Nevertheless, in this same environment Agile is everywhere, and centralized VCS faces some pretty serious challenges for large, geographically distributed organizations (last I checked, globalization isn’t going anywhere).  From my perspective, the writing is on the wall; it’s only a matter of time …

    The enterprise is a fickle beast …

    That said, “only a matter of time” can be an eternity in the enterprise world, and even though the productivity benefits of DVCS are pretty clear, new technology always faces an uphill battle in large organizations, especially when existing tools are seen as “good enough”.  Here at OpenMake, we know all too well how the enterprise can ignore hidden time sinks with negative impacts on the ALM process (see manually scripted builds). So with that said, I’m curious: what is the biggest thing holding back DVCS from getting more traction in the enterprise?

    The best answer I can come up with is that, like almost everything in enterprise IT, it’s often more a matter of people and the ecosystem around a technology rather than any one thing specific to the technology itself. We see this all the time with our ALM automation solutions. Once enough people are familiar with the ideas and there are enough resources for those who aren’t, the switch flips and it becomes a no-brainer for the organization.

    So, if all it takes is a critical mass of knowledgeable folks, what will it take for the enterprise IT market to get there on DVCS?  Communities like GitHub and the broader open source community certainly are driving a lot of interest in DVCS, especially for the younger generation.  What’s more, I’ve found many SCM administrators in the enterprise who say that their developers are already using DVCS, whether it’s supported or not.  On the side of visibility and mindshare, I think we are almost there.

    While it’s true you have to have the internal knowledge and expertise, most businesses will also balk at any tool that lacks rich tooling and support.  With Git at least, I get the sense that the IDE integrations and GUI tools are at (or close enough to) the level of maturity to make it a good choice for enterprise development teams, and of course everyone sells Git support these days.  But when you are talking DevOps (see SCM and ALM), the developer story is only part of the solution. What about what the rest of the organization needs out of a VCS?

    Can DVCS deliver?

    I think tools like CollabNet’s TeamForge and Microsoft’s Team Foundation Server are on the right track by adding DVCS (in this case, Git) support to existing enterprise-class ALM tools. Access control, issue tracking, and the capability to integrate with other important pieces of the ALM workflow (say, Meister and Deploy+) more or less complete the tooling puzzle, but even still, Microsoft and CollabNet will both tell you that the move to DVCS is still not for everybody.

    Even if you consider DVCS superior in every way (I’m not interested in that holy war), the cost of a large VCS migration project is not something to be taken lightly.  The value proposition and migration path have to be crystal clear. You do not want to be THAT person who championed a major infrastructure change only to discover that your existing processes are so closely tied to the old systems (e.g. our old friend, static build and deployment scripts) that the project runs seriously over budget and, oh by the way, audit compliance means you have to maintain both systems for years to come.  There’s a fine line between running older systems because they’re well understood and battle-tested and being stuck with obsolete technology that represents its own form of technical debt.

    All things considered, I’m not sure that 2013 is THE year for DVCS in the enterprise, but I think all the components are there and I would not bet against it.

  • Filed under: DevOps
  • CA World 2013 Just around the corner

    Upcoming Events

    Sessions to Attend:

    MC110SN – It’s Just Ops – Understanding the DevOps Challenge
    Presented by Tracy Ragan, COO OpenMake Software 

    Tuesday April 23, 2013 11:15 – 12:15
    This presentation will cover the DevOps challenge for the distributed platform and review how the mainframe platform met this challenge close to 30 years ago with CA Endevor Software Change Manager. Attendees will learn how to begin analyzing their distributed DevOps challenge with 5 easy steps. The use of OpenMake Software’s Dynamic DevOps with CA Harvest Software Change Manager, CA Server Automation and CA Client Automation will also be reviewed.


    Visit OpenMake Software on the Exhibit Floor:
    Sunday, April 21st, 5:00 p.m. to 7:00 p.m.: Welcome Reception and Exhibition Center Grand Opening
    Monday, April 22nd, 12:00 noon to 4:45 p.m.: Lunch served
    Tuesday, April 23rd, 12:00 noon to 5:00 p.m.: Lunch served
    Wednesday, April 24th, 12:00 noon to 4:45 p.m.: Lunch served

    Register here for CA World 2013.

  • Filed under: OpenMake Software News
  • Get informed about DevOps, including its origin and history. This webinar, Mastering DevOps Challenges, covers the basics of DevOps and gives you tips on analyzing your own process to determine what you will need to do to move from ALM to DevOps.

  • Filed under: DevOps
  • The latest rollup patch for 7.4 can be found at: http://www.openmakesoftware.com/support/updates-01242013.zip. It covers AIX, HP-UX, Linux, Solaris and Windows.

    This patch should be applied to all components, including the KB Server, Client and Remote Agents.

  • Filed under: OpenMake Software News
  • I’ve recently been published in Better Software Magazine on the topic of DevOps.  Go to:
    www.stickyminds.com/bettersoftware/downloads/V15I1.pdf

    The article is on page 22.  Enjoy!

  • Filed under: Uncategorized
  • Chicago, IL – December 4, 2012 – OpenMake Software announced today the acquisition of Trinem, a DevOps solution provider based in Edinburgh, Scotland, that specializes in software deployment solutions and application lifecycle management consulting.

    This strategic acquisition will bring advanced deployment technology to the OpenMake Software DevOps suite, provide improved services to customers in the European market, and allow OpenMake Software to expand its current European outreach.

    “The Trinem acquisition is a tremendous complement for OpenMake Software. It expands both our product offering in the Software Deployment space and our ability to better service the European DevOps Market,” explains Stephen King, CEO of OpenMake Software.

    Like OpenMake Software, Trinem has partnered with CA Technologies in providing add-in benefits to the CA Service Management product line. While OpenMake Software has specialized in Build Automation technology, Trinem has focused on the Deployment challenge. Integrating these solutions creates a streamlined build to deploy solution that will support a large variety of development platforms, versioning tools and production release environments.

    James Wilson, CEO of Trinem stated, “We are very excited about joining the OpenMake Software team as our joint technologies provide a complete end-to-end solution, from Build through Deploy for customers looking to solve the DevOps challenge. Trinem’s platform agnostic, agentless products will set OpenMake Software apart from any other vendor in the DevOps and Release Automation space.”

  • Filed under: OpenMake Software News