Release Engineer to go GA on July 15th, 2014

OpenMake Software drives ARA to the next level with the general availability of Release Engineer, a new and powerful ARA solution designed for the enterprise.

Delivers a scalable ARA solution that facilitates the reuse and sharing of release objects across teams, provides agentless distribution, and supports multiple platforms and environments.

Chicago, IL – June 25, 2014 – OpenMake Software today announced the July 15, 2014 GA date of Release Engineer, the newest addition to the OpenMake Software Dynamic DevOps Suite. Release Engineer is an enterprise-scale application release automation (ARA) solution designed for complex multi-platform environments.

Release Engineer, formerly Deploy+, centralizes the management, configuration, and reuse of all Release and Deploy elements for the enterprise. Its flexible design allows Operations to define release standards that can be inherited and customized for each project team. It supports multi-tiered platforms with no reliance on agent technology. Unlike its competitors, Release Engineer leverages a domain structure that facilitates the sharing of release components and dependencies for reuse across teams and delivers unique roll-forward logic for incremental release processing. Release Engineer eliminates delivery errors through reuse, planning, and shared control of releases, with full audit transparency.

“An enterprise release automation solution requires a tool that is highly reusable, supports multiple platforms including both WebSphere and MS IIS, and allows central teams to support more releases with less staff,” explains Stephen King, CEO, OpenMake Software. “Release Engineer solves these problems by delivering a domain-driven framework for sharing Components and Release Modules, coupled with an agentless technology that reduces the overhead associated with deploy solutions that require the management of hundreds, if not thousands, of end-point deploy agents.”

Steve Taylor, CTO of OpenMake Software, emphasized, “Keeping in step with our company philosophy of providing model-driven DevOps, Release Engineer centralizes the definition and sharing of component models and reusable actions across all teams company-wide. Our competitors manage these attributes at the application level, creating silos of information that cannot be shared. We minimize the work required for automating releases by defining objects once and reusing them for all teams with similar requirements.”

Release Engineer was designed specifically with the multi-platform enterprise in mind. Enterprise release requirements can be defined at the highest-level Domain and shared across all Sub-domains, creating a high level of transparency and control available to all teams within the organization, from central release teams to each individual development team.

SVN Importer – converting from Borland StarTeam

Intro

I’ve previously posted about the SVN Importer tool here and hoped at some point to follow up on my experiences converting from specific version control tools. Well, after a StarTeam conversion project last year that was easily an order of magnitude larger than any other conversion project I’ve ever done, I think I’m fairly well qualified to write on the topic. I had previously done some small conversions using StarTeam 2005 (aka version 11), but for this project the customer was using StarTeam 2009 (aka version 12.5). Oh, and when I say this effort was big, I mean REALLY big: the largest project had almost 20 million file revisions and the whole system had around 50 million file revisions.

Groundwork

The first thing I noticed in doing earlier, smaller conversions is that StarTeam's command line interface (CLI) lacks certain functions that are critical for these sorts of conversions. Because of this, the SVN Importer developers, out of necessity I believe, chose to use the StarTeam API to perform the conversion to SVN. This requires that you have the StarTeam SDK installed on your conversion machine. Also, if you are converting very large projects (greater than 1 million file revisions) as I was, it means you’ll need a 64-bit version of the SDK. While I was able to track this down for StarTeam 2009, I don’t believe it exists for earlier versions. You’ll also need to make sure that the correct version of the StarTeam API jar file is in the classpath of the importer and that the Lib directory of the StarTeam SDK is included in your PATH environment variable.

Once I actually got my conversions running with SVN Importer, things went well when converting the trunk of projects, but I encountered the following error any time I tried to convert branches (aka derived views in StarTeam):

INFO historyLogger:84 - EXCEPTION CAUGHT: org.polarion.svnimporter.svnprovider.SvnException: Unknown branch:

Since I was familiar with the inner workings of SVN Importer and the source was freely available, I worked to debug this issue and was able to find a simple coding error that was easily corrected. As I recall, it was because the code in question was using the wrong method, with the wrong return type, to get the branch name.

Later on, I encountered another problem where the same file would be added twice in the same SVN revision in the output dump files. When attempting to load these dumps into an SVN repository, I would see the error message ‘Invalid change ordering: new node revision ID without delete.’ After some detective work, I determined that the same file was being added to a revision multiple times when there were multiple StarTeam labels (equivalent to SVN tags) for the same set of changes. I made a small adjustment to the model for StarTeam to check whether a file already exists in a revision before trying to add it, and this resolved the issue.
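For illustration, the guard was conceptually along the lines of the following sketch. The class and method names here are hypothetical stand-ins, not the actual SVN Importer source, and the sketch assumes a simple map-backed revision model:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch of the duplicate-add guard added to the StarTeam model.
    // Class and method names are illustrative, not the real SVN Importer API.
    class Revision {
        // Files already recorded in this revision, keyed by repository path.
        private final Map<String, String> files = new LinkedHashMap<>();

        boolean contains(String path) {
            return files.containsKey(path);
        }

        void addFile(String path, String contentRef) {
            // Multiple StarTeam labels can map to the same set of changes,
            // so only add the file if it is not already part of this revision.
            if (!contains(path)) {
                files.put(path, contentRef);
            }
        }
    }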

Besides these more significant problems, there were a few things I wanted to improve about how the conversion process worked. To start, the converter was performing duplicate checkouts for each file revision, which added a good deal of extra time to the conversion process. In addition, because the conversions I was doing were on very large repositories, over the course of a longer conversion certain StarTeam operations could fail for various reasons (for example, network and/or server flakiness), and the converter was written in such a way that a failure on any StarTeam operation would cause the whole conversion to fail. To mitigate this, I wrapped each call to StarTeam in some logic to retry the operation if there was an error. Once all these changes were made, I was ready to tear through these projects … or perhaps crawl is a better way to describe it!
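The retry logic itself was nothing exotic. Here is a minimal, generic sketch of the idea; the helper below is my own illustration rather than the importer's actual code, and the wrapped StarTeam call is represented by a Callable:

    import java.util.concurrent.Callable;

    // Generic retry helper: re-attempts a flaky operation (for example a
    // StarTeam checkout) a few times before giving up, so one network or
    // server hiccup does not abort a multi-day conversion.
    final class Retry {
        static <T> T withRetries(Callable<T> operation, int maxAttempts, long waitMillis) throws Exception {
            Exception lastError = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return operation.call();
                } catch (Exception e) {
                    lastError = e;
                    System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                    Thread.sleep(waitMillis);
                }
            }
            throw lastError; // every attempt failed; surface the last error
        }
    }

    // Illustrative usage, wrapping a hypothetical checkout call:
    // byte[] content = Retry.withRetries(() -> checkOutFileRevision(file, rev), 5, 30_000);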

Make it go

If you have ever done a version control history migration, you know that these migrations can take a long time to run as the process checks out every version of every file and constructs the new repository.  When we ran smaller tests we found the performance to be a bit slow, but nothing prepared us for the projects with millions of file revisions.

As we moved to larger and larger projects, not only did the time requirements swell, but so did the hardware requirements. While projects with tens (or even hundreds) of thousands of revisions were achievable with 8 GB of RAM, we found that this was not enough for projects with millions of file revisions. This could be very frustrating because the conversions could sometimes run for over a day before erroring out, and when they did there was no way to recover the conversion; you had to start all over from the beginning. When even 16 GB was not enough for the very largest project (consisting of roughly 18 million file revisions), I had doubts that increasing our RAM to 32 GB would be sufficient. Fortunately, once at 32 GB we never had to worry about RAM again.

In all, the conversion process for this largest project took almost 2 weeks (!) to complete, and almost as long to validate. The validation portion of a conversion is probably the most often overlooked; it is mostly simple to do, but still necessary. The process of loading very large SVN repositories takes nearly as long as the conversion process itself. One issue that we encountered on this project was actually hitting the inode limit of the ext3 filesystem. While this was simple enough to handle, I’m glad we did the validation load to test everything before moving on to the load of the production SVN system.

All in all, this StarTeam to SVN conversion effort took roughly 3 months and was not without its share of challenges, but it was ultimately worth the effort for the customer. There really is no substitute for this sort of migration. In most cases, without a migration like this, companies that need this data available will keep an older VCS running for years, with all the associated costs, in order to stay in compliance with their internal policies or external regulations.

If you’d like to know more about the code changes made to SVN Importer, here’s the situation: I have made all of these updates available to Polarion, but as of now I don’t know when these changes will be made publicly available through their SVN repository. If you have questions about StarTeam conversions or the code changes I made, respond in the comments and I can give more detail and possibly find another way to share my changes.

DevOps for the Large Organization

DevQops

dev·qops     \ˈdev-ˌkäps\
: to undertake “DevOps” with particular emphasis on Quality Assurance (QA), ideal for the larger organisation with rigorous QA processes and sophisticated, multi-tiered IT systems.
Example: “I use DevQops since DevOps didn’t quite address my development, quality assurance and IT operational needs.”

DevQops v. DevOps

Wikipedia describes DevOps as:
DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals.
Without trying to be overly simplistic, the emphasis here is on the interaction between developers and operations. Since DevOps is still relatively immature in the IT industry, the general consensus is to focus on the literal interpretation: communication, collaboration and integration between developers and operations.
In my opinion, DevOps is the latest ‘branding exercise’ to encompass Continuous Integration (developers) and Continuous Delivery (operations). In theory this makes absolute sense: developers submit small changes frequently (without breaking the application) and those changes are implemented in Production frequently.
DevOps encompasses both Continuous Integration and Continuous Delivery
However, I believe this approach is only suitable for a subset of all organisations, those that develop relatively simple systems with a short route to Live. For organisations that have large, complex, multi-tiered systems – such as Financial Services, Telecommunications, Utilities, Retail and Gaming organisations – it is likely that rigorous QA processes are in place to address various methods of testing, such as:
  • Systems Integration Testing,
  • Regression Testing,
  • User Acceptance Testing,
  • Penetration Testing,
  • Load Testing,
  • Functional Testing,
  • Smoke Testing.
Once these levels of QA are introduced, the ability to frequently make changes and deliver changes is somewhat restricted.
System architectural example
If we consider the example above, this could be a typical financial system topology which might represent a single QA stage (a single test-rig) within the overall software development life cycle. As a minimum – and for the purposes of this example – let’s assume this particular organisation undertakes at least four levels of testing:
  • Systems Integration Testing,
  • Regression Testing,
  • User Acceptance Testing, and
  • Load Testing.
In terms of complexity, change management and environment management, it is my belief that the QA managers have a far more difficult job than even the Operations team – the Operations team, while important, typically only has to manage a single version of any given system within a single environment: Production.
The QA team has the more complex and demanding role of having to:
  • Manage multiple versions of a single application,
  • Manage multiple applications simultaneously,
  • Manage multiple test-rigs simultaneously, and
  • Co-ordinate all of the above to ensure testing is completed within the agreed test duration.
If the QA element cannot be managed efficiently (streamlined and automated) then it effectively renders Continuous Integration and Continuous Delivery nigh on useless – as per Kanban, your process is only as effective as your greatest bottleneck.
So, to get to DevOps nirvana (DevQops) one has to assess the processes surrounding QA. In my experience the most common issues are around:
  • Managing “change-sets”; a collection of small changes from development scheduled for delivery,
  • Infrastructure Management; provisioning required infrastructure to support altered application(s), and
  • Application Deployment; implementing changes from development – or other test stages – into desired test-rig.
These common issues are not difficult to manage in themselves, but when applied to a large organisation with hundreds, if not thousands, of servers, test-rigs, environments and applications, the task becomes brittle, cumbersome and resource-intensive [costly] to manage.
I will dedicate a future blog post to “How to Implement Fully Automated DevQops”.

Strategic Tooling & Release Automation

Strategic Tools Prevent Enterprise Automation

Many organisations have “strategic” tools intended for large groups of users, designed to standardize and centralize processes, management and enforcement. Software Configuration Management and Application Life Cycle Management (ALM) are just two disciplines where “strategic tooling” is commonplace.
While the concept of a strategic tool is logical, it is impractical to assume that ‘one size’ can ‘fit all’. In fact, I can almost guarantee that whoever you are, you probably work for an organisation that has implemented a “strategic tool” (or a “tool of choice”) that was intended to satisfy your organisation’s business or operational requirements.
If we focus on Software Configuration Management (SCM) and software engineering, then we could use IBM’s Rational products or CA Technologies’ Software Change Manager (Harvest) product as examples. These products are typically implemented with the view of enforcing and managing SCM-based processes (source code control, life cycle management, etc) across the IT organisation. Whilst these tools are more than capable of addressing SCM and ALM requirements, there are factors that alienate users, such as:
  • Product Administration; requesting product configuration changes takes time and delays users, in areas such as:
    • User Administration
    • Project Configuration
    • Repository Configuration
    • Build Configuration
    • Administration & Housekeeping
    • Process Configuration
  • Process Automation; recognising repetitive processes that can be automated.
  • Tool Integration; integrating the product with other enterprise systems (Change Management, Service Management, etc).
  • Product Education; understanding how to use the product and what the product is capable of.
  • Product Usability; whether end users actually feel comfortable using the product.
  • Product Familiarity; simply, end users prefer something that is familiar to them.
The above example illustrates Git as the ‘strategic tool’ and how the output from Git – the source code – is propagated through the life cycle, to be deployed to the various QA and Production environments. The illustration also highlights an Environment Manager and a Release Manager utilising Microsoft Excel to manually manage the application changes and deployments.
This example might be reasonably typical for smaller organisations, but what inevitably happens in larger companies is that additional tools are adopted over time. The following example illustrates how individual teams and projects might ‘break away’ from the “IT Standard” and adopt their own tooling, which finds a place within the organisation and contributes to the overall tooling complexity.
The number of tools expands and the role of Environment & Release Managers becomes even more complex as they attempt to co-ordinate change from numerous tools, locations and processes. This is where the majority of large organisations have difficulty in implementing automation – automation is achieved by assigning computational resource to undertake manual, repetitive tasks. If the processes are fluid, the tooling non-standard and the scale vast, then achieving full Release Automation is impossible.
To achieve full Release Automation in this scenario, one would have to have a solution that could dynamically link to all the various tools and their underlying repositories (where the system changes are made by the developers). What is also important to note is that any given Application is not simply a collection of compiled binaries; an Application consists of configuration settings (application and environment), technology frameworks, services, packaged solutions, etc.
Each one of these Components has to exist somewhere, whether the Release process is manual or automated. For Release Automation, however, the tool responsible for managing the end-to-end deployments has to be capable of dynamically linking to various distributed sources.
Essentially, by modelling the Application, its Components and the location [tool] of those Components, full Release Automation becomes attainable.
The above example illustrates how the logical definition of Applications, Components and their sources enables large organisations to achieve Release Automation. Notice that a Component’s source could be a source code control tool, a file system location or a binary repository – it’s not uncommon to have elements of a Release that do not adhere to the “Strategic Tooling” initiative.
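As a rough illustration of that modelling idea, here is a small sketch. The types and fields are my own and are not Release Engineer's object model; the point is simply that each Component records what kind of place its artefacts come from and where that place is:

    import java.util.ArrayList;
    import java.util.List;

    // Where a Component's artefacts live.
    enum SourceType { SOURCE_CONTROL, FILE_SYSTEM, BINARY_REPOSITORY }

    // A deployable part of an application: binaries, configuration, SQL, services, etc.
    class Component {
        final String name;
        final SourceType sourceType;
        final String sourceLocation; // e.g. a Git URL, a UNC path, a repository coordinate

        Component(String name, SourceType sourceType, String sourceLocation) {
            this.name = name;
            this.sourceType = sourceType;
            this.sourceLocation = sourceLocation;
        }
    }

    // The logical Application is the collection of its Components; a release
    // automation tool resolves each Component from its own declared source.
    class Application {
        final String name;
        final List<Component> components = new ArrayList<>();

        Application(String name) { this.name = name; }

        void add(Component component) { components.add(component); }
    }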
By implementing a tool – such as Release Engineer from OpenMake Software – it is possible to achieve full Release Automation with very little cultural impact or operational risk, and without ripping out existing tools and processes.

Agent-based or Agent-less Release Automation Solution?

Release Automation: Agent v. Agentless

Many enterprise software systems can be categorized as either “Agent-based” or “Agent-less”. This blog is going to discuss why an organisation would choose one method over the other, specifically around Release Automation and Software Deployments.

The first question one should pose – regardless of whether the potential solution is agent-less or not – is this: “What tasks am I looking to conduct as part of my Software Deployment solution?” At this point I also want to make it clear that when I am referring to “Software Deployments”, I am addressing the deployment of software across the end-to-end software development life cycle, not just production systems.

Without creating an exhaustive list that anticipates every granular task required by every organisation for every software deployment scenario, I shall attempt to summarise the most common and typical steps and tasks:

  • Send artefacts to a remote server.
  • Manage and manipulate artefacts on a remote server.
  • Execute remote ‘jobs’ on a remote server – this can be categorised further as:
    • Execute a remote script that already resides on a remote server,
    • Send a script to the remote server and execute, and
    • Execute a system command on the remote server.
  • Compile source code on remote server.
  • Start / Stop Services on remote server.
  • Capture the results and output of a remote script or system command.

All things considered, the mechanisms employed in deploying software to remote servers are finite. Now that we understand what we need to manage a software deployment, we can assess the merits of whether or not to use an agent-based system.

To be blatantly clear, I would like to state now that my preference is firmly on the side of agent-less. This opinion is formed through many years of working with enterprise software – not just in the Release Automation space – and witnessing real-world limitations and challenges.

Disadvantages of an Agent-based Release Automation Solution

Agents are a great way of building robust connectivity between the deployment server and its end points – the remote servers to which one would like to deploy software and systems. However, an agent-less system can be just as robust if it is based on SSH/SSL-secured connections.
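To make the comparison concrete, here is a minimal sketch of agentless remote execution over SSH using the open-source JSch library. It is purely an illustration of the general approach (not how Release Engineer is implemented), and the host, user, key path and command are placeholders:

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;
    import java.io.InputStream;

    // Minimal agentless deployment step: run a command on a remote end point
    // over SSH and capture its output. Nothing has to be installed remotely.
    public class AgentlessExec {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            jsch.addIdentity("/home/deploy/.ssh/id_rsa");                     // placeholder key path
            Session session = jsch.getSession("deploy", "app-server-01", 22); // placeholder user/host
            session.setConfig("StrictHostKeyChecking", "no");                 // illustration only
            session.connect();

            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("/opt/app/bin/restart.sh");                    // example deployment task
            InputStream out = channel.getInputStream();
            channel.connect();

            String output = new String(out.readAllBytes());                   // capture remote output
            channel.disconnect();
            session.disconnect();

            System.out.println("Remote output: " + output);
            System.out.println("Exit status: " + channel.getExitStatus());
        }
    }

Sending artefacts works in much the same way over an SFTP channel on the same connection, which is why an SSH-based approach can cover the typical deployment tasks listed earlier without installing anything on the end point.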

In my opinion, the overheads associated with an agent-based solution far outweigh those of an agent-less solution.

Agent Installation

Obvious, but crucial: an agent-based solution requires the customer to install agent software on each and every end point to which they intend to deploy software and systems. In small companies this may not be a significant problem, but when dealing with large enterprises that might have hundreds, if not thousands, of end points, the resource requirements increase substantially.

Agent Configuration

Each and every agent will require its configuration settings to be amended to ensure it can connect to the solution server. Each agent may also require settings to be altered based on the role it performs.

Agent Maintenance

Software vendors continually update and improve their solutions; an agent-based solution will, therefore, potentially require software updates. Again, not such a huge problem for small companies but large organisations will have to allocate resources for these upgrades, possibly initiating dedicated project teams to complete the upgrade effort.

Firewall & Relay Configuration

Agent-based systems will undoubtedly require firewall configuration changes to allow the agents and solution server to communicate and relay instructions and data between numerous domains within large corporate networks.

Agent Availability

Installing an agent will typically involve installing the agent software as a service running on the remote server. As with any service, it is possible that this service may ‘fail’, require configuration changes, need recycling, not be compatible with other services or, as a worst case scenario, require a complete re-installation.

Platform Support

An agent is a piece of software built for a specific platform; if you want to deploy to Windows then you will need a Windows agent, UNIX will require its own specific agent, as will Linux, and so on. Since it requires development resource for a vendor to build each specific agent, it is most cost-effective for vendors to target the common distributed platforms – Windows, UNIX and Linux.

However, large organisations make use of various platforms designed to address specific needs. Financial Services companies will make use of fault-tolerant, high-transaction-processing platforms such as iSeries, Stratus, OpenVMS and Tandem. Retail organisations, as another example, are likely to make use of the IBM4690 platform. It is highly likely that these platforms are not supported by agent-based systems, which therefore prevents organisations from achieving full Release Automation.

Advantages of an Agent-less Release Automation Solution

Release Engineer (from OpenMake Software) is an agent-less solution. This means that all of the above agent-based disadvantages are addressed by removing the need for agents.

  • Agent Installation – Not Required
  • Agent Configuration – Not Applicable
  • Agent Maintenance – Not Applicable
  • Firewall & Relay Configuration – Most organisations will likely already use SSH/SSL to connect to and administer their server infrastructure, and since Release Engineer utilises the same protocols, no firewall changes should be necessary. Even if firewall changes are required, it is no more effort than for an agent-based solution, and the connectivity configuration will be robust and secure.
  • Agent Availability – There is no reliance on the availability of any agents.
  • Platform Support – Since Release Engineer utilises standard protocols, the range of platforms to which Release Engineer can deploy is diverse: Windows, UNIX, Linux, iSeries, OpenVMS, Tandem, Stratus, IBM4690, Tru64…

Third Party Development with CA Harvest

The Problem

I’ve had a number of clients ask my advice around how to manage development when a third party is involved in the development process. Defining, managing and enforcing a development process with a Software Configuration Management (SCM) tool is reasonably straightforward, but large organisations may depend upon third parties to provide software and system changes.
The following is a fairly common problem that highlights the complexity and potential points of failure when a customer has to interact with a third party responsible for any part of the software engineering process.
Third Party Development Complexity
The process can be summarised as follows:
  1. Change Initiator requests a system change by the supplier.
  2. Client Liaison gathers the requirements and initiates the development.
  3. The developer creates the necessary changes which, in this example, are delivered as any combination of the following:
    1. Step by Step Instructions to apply the changes,
    2. SQL to be applied, embedded within the email,
    3. SQL Packages to be applied to customer’s system, or
    4. A set of binaries to be copied to relevant system.
  4. Customer’s Test Manager co-ordinates the change (manually) and applies it to the in-house test rig.
  5. Changes are tested until accepted and promoted through the life cycle – using CA Harvest.
  6. Release Manager co-ordinates the deployment of the changes to the Production systems.
The major obstacle with this process is that whether the development best practice is linear development, agile, iterative or heavily prescribed, there are two parties with two sets of processes and tooling.
The process therefore becomes fragmented, error-prone, cumbersome and costly.

The Solution

The ideal solution for any organisation in a similar circumstance would be to extend their internal development processes and tooling to their supplier. This is easier said than done as it’s not very likely that both the customer and the supplier will be making use of the same tooling.
If we assume – for the sake of this Post – that the supplier is keen to please their customer then they might be willing to adopt the same tool as their customer – in this case, CA Harvest. If this is indeed the case then a quick and easy solution would be to provide the supplier’s developers with Harvest Clients and configure them to connect – via SSH – to the customer’s Harvest Server, where the source and binary code is managed.
Third Party Development: CA Harvest
Should the supplier find it impossible to make use of the same tooling, then the only other [reasonable] alternative would be to automate the supplier’s updates to the Harvest repository by way of an automated event – after a check-in to the supplier’s Perforce repository, for example, an event would fire to create the Harvest Package and check the relevant artifacts into said Package.
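One hedged sketch of that kind of bridge: assume a Perforce change-commit trigger is configured to run a small program that receives the changelist number and hands the work to the Harvest command-line tools. The hcp/hci invocations below are placeholders only; the real arguments depend entirely on the customer's broker, project, state and credential setup:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative Perforce-to-Harvest bridge, intended to be run by a Perforce
    // change-commit trigger with the changelist number as its first argument.
    // The Harvest CLI calls are placeholders; real flags depend on your
    // broker, project, state and credential configuration.
    public class P4ToHarvestBridge {
        public static void main(String[] args) throws IOException, InterruptedException {
            String changelist = args.length > 0 ? args[0] : "unknown";
            String packageName = "SUPPLIER_CL_" + changelist;

            // 1. Create a Harvest package for this supplier change (placeholder arguments).
            run(List.of("hcp", packageName /* plus broker/project/state/credential flags */));

            // 2. Check the changed files into the new package (placeholder arguments).
            run(List.of("hci", "-p", packageName /* plus the file list and connection flags */));
        }

        private static void run(List<String> command) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(new ArrayList<>(command));
            pb.inheritIO(); // surface the CLI output in the trigger log
            int exit = pb.start().waitFor();
            if (exit != 0) {
                throw new IOException("Command failed (" + exit + "): " + command);
            }
        }
    }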

Are your builds really faster?

The October 2013 issue of SD Times has an article called “The reconstruction of deployment” by Alex Handy. He starts the article by talking about builds: “In days past, there was one sure fire, always working plan for building software: start the build, then go get some coffee. Sometimes building even meant it was time to go home.” The article talks about continuous build and deploy and gives credit to CI for improving the speed of the build and deploy process. However, you must ask yourself, “Are my builds really faster?”

Builds, the process of compiling and linking code, have not changed at all. Yes, we have Ant and Maven and not just Make, but in essence CI does not change the build scripts themselves. In Alex’s article, he alludes to a time in our past when builds would take hours to run. Guess what: they still take hours to run when a script is driving them. The same build script that was executed manually and took hours to run is now just executed via Jenkins, and takes hours to run. A build script executed by Jenkins runs no faster than a build script executed in any other way.

In the article, Brad Hurt, VP of Product Management at AccuRev, confirms this. He explains that you need to have control over the different levels of code maturity so that, in the case of an 8-hour build, you don’t have “random developer” code checked in that pollutes the build. In this reference to “build”, Brad is talking about the compile and link process. Some people refer to the build as a set of steps that are executed before the compile, the compile itself, and after the compile, but Brad’s reference is more accurate. He is talking about a compile process that can take hours to run, and for large projects this is not unusual. The goal is actually to never have an 8-hour build.

As we mature in DevOps, we are moving away from one-off scripts, particularly around Deploy. OpenMake Meister moves away from scripts in both build and deploy. This allows intelligence in the build for building incrementally, with acceleration and parallelization decreasing build times substantially. For an incremental build, an 8-hour build can become a 10-minute build. This incremental processing is passed to the Deploy, so even deploys are incremental and not monolithic.

So let’s stop kidding ourselves. A Jenkins build is no faster than the script it is calling. And if the script cannot support incremental changes (agile practice) or support parallelization for speeding up monolithic compiles, then you have a really cool CI process with a very slow back end.
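To make the incremental point concrete, the heart of any dependency-driven build is a very old idea: only rebuild a target when one of its inputs is newer than the existing output. The sketch below is a deliberately naive illustration of that check, not Meister's implementation, and the file paths are placeholders:

    import java.io.File;

    // Naive up-to-date check: a target is rebuilt only when one of its
    // sources is newer than the existing output. This is the idea behind
    // dependency-driven, incremental builds.
    public class UpToDateCheck {
        static boolean needsRebuild(File target, File... sources) {
            if (!target.exists()) return true;
            for (File source : sources) {
                if (source.lastModified() > target.lastModified()) {
                    return true; // only this target needs recompiling
                }
            }
            return false; // skip the work entirely; this is where the hours go
        }

        public static void main(String[] args) {
            File target = new File("build/app.jar"); // placeholder paths
            File source = new File("src/Main.java");
            System.out.println(needsRebuild(target, source) ? "rebuild" : "up to date");
        }
    }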

OpenMake Dynamic DevOps Suite 7.5 now available

OpenMake Software today announced the 7.5 release of its market-leading Meister build automation, Mojo workflow management and CloudBuilder provisioning products, together with the new Deploy+ offering, which together comprise its Dynamic DevOps Suite. Delivering a consolidated tool chain for process automation, continuous build and continuous deploy, the Dynamic DevOps Suite offers a model-driven framework for simplifying the hand-off of the software build and delivery process from development teams to production control.

Tracy Ragan, COO, OpenMake Software explains, “We have expanded our model driven framework for managing Builds into the Deployment realm. Our Dynamic DevOps suite substantially reduces the use of one-off build and deploy scripts, delivering a more reliable and transparent method of delivering binaries.”

The Dynamic DevOps Suite includes both Build Services and Deploy Services that create standard processes for building and deploying applications. It includes standard models for delivering to WebSphere, Tomcat, Microsoft IIS and other common server environments. Standard process workflows can be defined and reused across development and production environments, with dynamic changes addressing the uniqueness of each environment.

Why Dynamic? “Defining a reusable, standardized process across the lifecycle from build through deploy is the ultimate goal in achieving DevOps. Changes between environments for builds, test execution and deployments should be addressed dynamically, without human intervention or one-off scripts. We uniquely achieve this level of automation for builds and now deployments with our 7.5 release,” explains Steve Taylor, CTO, OpenMake Software.

The Dynamic DevOps Suite is now available for download from http://www.openmakesoftware.com/download/

Manage your Builds like the Google Rockstars

Watch this Google video from their education series.

They have written a homegrown process that is extremely similar to OpenMake Meister. Build rules, the elimination of scripts, incremental processing, management of libraries, parallelization and distribution of workload are all shown. The good news is that you do not need to write this process on your own; you can use Meister instead. Meister solves all the problems and provides all the features covered here. So yes, your builds can be intelligent too.