Free CA Harvest User Conference

In case you are not following the Harvest Community – I wanted to make you aware that registration is now OPEN for our CA AppDev Technical Conference.

This is a FREE Technical Seminar that will feature many of the CA AppDev products, including CA Harvest SCM with Meister set up as a CI Server. Seating IS limited, so please be sure to submit your registration form ASAP.

Registration information (including session agendas and recommended Hotels) can be found at the community:

Hope to see you in May!

Forrester Research Shares Agile Requirements and Testing Best Practice for Regulated Industries

Forrester’s Principal Analyst and Vice President, Diego Lo Giudice, recently collaborated with one of our key technology partners, Polarion, on an exceptionally well-received webinar titled “Agile Requirements And Testing For Continuous Software Delivery”.

In his presentation, Diego shared valuable information on modern application delivery trends and how they can be applied to the complex development challenges in regulated industries. He set the stage by stipulating that compliance and complexity represent a hurdle, but not a barrier, to successful agile development practices, and that such practices are becoming mission critical to enable innovation at ever accelerating speed across the board.

In fact, 2014 Forrester survey results shared by Diego substantiate that Agile methodology adoption is growing across engineering organizations worldwide, with 41.7% of 637 global software developers indicating that it is the “methodology that most closely reflects the development processes currently in use.”

Furthermore, according to the findings, adoption of DevOps (Agile all the way through) is growing, resulting in the following top three benefits:
• One lifecycle and streamlined process, where everyone is involved in releasing business value
• Shared goals, where operations and development are connected on business goals
• Tooling that integrates, representing huge opportunities for process automation.
You can learn all about Diego’s recommendations on the Polarion site.


Continuous Delivery Vs. ARA

Continuous Delivery (CD) is a process, not a solution. Continuous Delivery is an extension of Continuous Integration. When a software update is saved to the version repository, the Continuous Integration workflow is triggered to execute steps that may include calling a script to compile code into binaries (Continuous Build), followed by a script to deliver the binaries to a list of servers (Continuous Delivery). In some cases where the production environment is made up of only a small set of servers, the Continuous Integration process may support production deployments, but in most organizations CI is used mainly by development and testing teams. When someone states they are doing Continuous Delivery, they are saying that they use their CI process to execute a deployment script.
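As a rough sketch of that flow (all step, file and server names here are hypothetical, not tied to any particular CI server), a commit hook triggers a Continuous Build step and then a Continuous Delivery step:

```python
# Illustrative sketch of a CI workflow triggered on commit: a "Continuous
# Build" step compiles sources into a binary, then a "Continuous Delivery"
# step pushes that binary to a list of servers.

def continuous_build(sources):
    """Pretend-compile the sources into a single binary artifact name."""
    return "app.bin built from %d files" % len(sources)

def continuous_delivery(artifact, servers):
    """Pretend-deliver the artifact to each server, returning a report."""
    return {server: "delivered " + artifact for server in servers}

def on_commit(sources, servers):
    """Entry point a version-repository hook would call on each check-in."""
    artifact = continuous_build(sources)
    return continuous_delivery(artifact, servers)

report = on_commit(["main.c", "util.c"], ["qa01", "qa02"])
```

In a real pipeline each function would shell out to a compiler and a transfer tool; the point is only that CD is the delivery step bolted onto the CI trigger.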

Application Release Automation (ARA) is designed to fully orchestrate the delivery of software including infrastructure and database updates, server configuration management, calendaring, roll-forward, rollback, security access and component packaging. A Continuous Delivery process may call an ARA solution to perform the orchestration of the deployment, replacing the one-off deployment scripts written by developers.

Learn more – download the whitepaper 


RTI 2.1 release announcement

OpenMake Software today announced the release of its latest version of RTI, which is compatible with CA Harvest 12.5 and 12.6.

RTI is an interface that extends CA Harvest to allow it to seamlessly control code on many mid-range platforms (including IBM AS/400, OpenVMS and Tandem). End users work with CA Harvest in their usual way, with RTI taking care of the communication to the extended mid-range server base. This latest release supports the 64-bit versions of Harvest.

Concurrent with this release, OpenMake Software is offering all purchasers of RTI 2.1 an early-mover discount of 15% off list price for firm orders placed with the company by March 30th, 2015.

Contact us for more information.

Release Engineer to go GA on July 15th, 2014

OpenMake Software drives ARA to the next level with the general availability of Release Engineer, a new and powerful ARA solution designed for the enterprise.

Release Engineer delivers a scalable ARA solution that facilitates the reuse and sharing of release objects across teams, provides agentless distribution, and supports multiple platforms and environments.

Chicago, IL – June 25, 2014 – OpenMake Software today announced the July 15th, 2014 GA date of Release Engineer, the newest addition to the OpenMake Software Dynamic DevOps Suite. Release Engineer is an enterprise-scale application release automation (ARA) solution designed for complex multi-platform environments.

Release Engineer, formerly Deploy+, centralizes the management, configuration, and reuse of all Release and Deploy elements for the enterprise. Its flexible design allows Operations to define release standards that can be inherited and customized specifically for each project team. It supports multi-tiered platforms with no reliance on any agent technology. Unlike its competitors, Release Engineer leverages a domain structure that facilitates the sharing of release components and dependencies for reuse across teams and delivers a unique roll forward logic for incremental release processing. Eliminate delivery errors through reuse, planning and shared control of your release with full audit transparency.

“An enterprise release automation solution requires a tool that is highly reusable, can support multiple platforms including both WebSphere and MS IIS, and allow the central teams to support more releases with less staff”, explains Stephen King, CEO, OpenMake Software. “Release Engineer solves these problems delivering a domain-driven framework for sharing Components and Release Modules coupled with an Agentless technology that reduces the overhead associated with deploy solutions that require the management of hundreds, if not thousands, of end-point deploy agents.”

Steve Taylor, CTO of OpenMake Software, emphasized “Keeping in step with our company philosophy of providing model-driven DevOps, Release Engineer centralizes the definition and sharing of component models and reusable actions across all teams company-wide. Our competitors manage these attributes at the application level creating silos of information that cannot be shared. We minimize the work required for automating releases by defining objects once and reusing them for all teams with similar requirements.”

Release Engineer was designed specifically with the multi-platform enterprise in mind: enterprise release requirements can be defined at the highest-level Domain and shared across all Sub-domains, creating a high level of transparency and control available to all teams within the organization, from central release teams to each unique development team.

SVN Importer – converting from Borland StarTeam


I’ve previously posted about the SVN Importer tool here and hoped at some point to follow up on my experiences converting from specific version control tools.  Well, after a StarTeam conversion project last year that was easily an order of magnitude larger than any other conversion project I’ve ever done, I think I’m fairly well qualified to write on the topic.  I had previously done some small conversions using StarTeam 2005 (aka version 11), but for this project the customer was using StarTeam 2009 (aka version 12.5).  Oh, and when I say this effort was big, I mean REALLY big: the largest project had almost 20 million file revisions and the whole system had around 50 million file revisions.


The first thing I noticed in doing other smaller conversions is that StarTeam lacks certain critical functions in its command line interface (CLI) that these sorts of conversions require.  Because of this, the SVN Importer developers, out of necessity I believe, chose to use the StarTeam API to perform the conversion to SVN.  This requires that you have the StarTeam SDK installed on your conversion machine.  Also, if you are converting very large projects (greater than 1 million file revisions) as I was, you’ll need a 64-bit version of the SDK.  While I was able to track this down for StarTeam 2009, I don’t believe it exists for earlier versions.  You’ll also need to make sure that the correct version of the StarTeam API jar file is in the classpath of the importer and that the Lib directory of the StarTeam SDK is included in your PATH environment variable.
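As an illustration of that setup, a launcher script might prepare the importer’s environment like this (the SDK path and jar file name below are assumptions for the sketch; check your own SDK install for the correct names):

```python
import os

# Hypothetical install location of the 64-bit StarTeam SDK.
STARTEAM_SDK = r"C:\Program Files\Borland\StarTeam SDK 2009"

def importer_environment(base_env):
    """Return a copy of the environment with the StarTeam API jar on the
    classpath and the SDK Lib directory on PATH, as SVN Importer requires."""
    env = dict(base_env)
    lib_dir = os.path.join(STARTEAM_SDK, "Lib")
    jar = os.path.join(lib_dir, "starteam110.jar")  # jar name is illustrative
    env["CLASSPATH"] = jar + os.pathsep + env.get("CLASSPATH", "")
    env["PATH"] = lib_dir + os.pathsep + env.get("PATH", "")
    return env
```

You would pass the returned mapping as the environment when launching the importer’s JVM process.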

Once I actually got my conversions running with SVN Importer, things went well converting the trunk of projects, but I encountered the following error any time I tried to convert branches, aka derived views in StarTeam:

INFO historyLogger:84 - EXCEPTION CAUGHT: org.polarion.svnimporter.svnprovider.SvnException: Unknown branch:

Since I was familiar with the inner workings of SVN Importer and the source was freely available, I worked to debug this issue and was able to find a simple coding error that was easily corrected.  As I recall it was because the code in question was using the wrong method, with the wrong return type, to get the branch name.

Later on, I encountered another problem where the same file would be added twice in the same SVN revision in the output dump files.  When attempting to load these dumps into an SVN repository, I would see the error message ‘Invalid change ordering: new node revision ID without delete.’  After some detective work, I determined that the same file was being added to revisions multiple times when there were multiple StarTeam labels (equivalent to SVN tags) for the same set of changes.  I made a small adjustment to the StarTeam model to check whether a file already exists in a revision before trying to add it, and this resolved the issue.
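The fix amounts to a membership check before the add; a minimal sketch (class and method names are illustrative, not the actual SVN Importer internals):

```python
# Sketch of the duplicate-add fix: before adding a file to a revision,
# check whether it is already present, so a second StarTeam label covering
# the same change set does not add the file twice.

class Revision:
    def __init__(self, number):
        self.number = number
        self.files = []

    def add_file(self, path):
        """Add path only once per revision, mirroring the applied fix."""
        if path not in self.files:
            self.files.append(path)

rev = Revision(42)
rev.add_file("src/Main.java")
rev.add_file("src/Main.java")   # second label for the same change set: ignored
```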

Besides these more significant problems, there were a few things I wanted to improve about how the conversion process worked.  To start, the converter was performing duplicate checkouts for each file revision, which added a good deal of extra time to the conversion process.  In addition, because the conversions I was doing were on very large repositories, over the course of a longer conversion certain StarTeam operations could fail for various reasons (for example, network and/or server flakiness), and the converter was written in such a way that a failure on any StarTeam operation would cause the whole conversion to fail.  To mitigate this issue, I wrapped each call to StarTeam in some logic to retry the operation if there was an error.  Once all these changes were made, I was ready to tear through these projects … or perhaps crawl is a better way to describe it!
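The retry logic can be sketched as a generic wrapper (this is a sketch of the idea, not the actual patch):

```python
import time

# Each StarTeam call is retried a few times before the whole conversion
# is allowed to fail, to ride out transient network/server flakiness.
def with_retry(operation, attempts=3, delay=0.0):
    """Run operation(); on failure, retry up to `attempts` times total,
    sleeping `delay` seconds between tries, then re-raise the last error."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as error:
            last_error = error
            time.sleep(delay)
    raise last_error
```

Each API call site is then wrapped, e.g. `with_retry(lambda: view.checkout(item))`, so a single flaky operation no longer kills a multi-day conversion.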

Make it go

If you have ever done a version control history migration, you know that these migrations can take a long time to run as the process checks out every version of every file and constructs the new repository.  When we ran smaller tests we found the performance to be a bit slow, but nothing prepared us for the projects with millions of file revisions.

As we moved to larger and larger projects, not only did the time requirements swell, but so did the hardware requirements.  While projects with tens (or even hundreds) of thousands of revisions were achievable with 8 GB RAM, we found that this was not enough for projects with millions of file revisions.  This could be very frustrating because the conversions could sometimes run for over a day before erroring out, and when they did, there was no way to recover the conversion; you had to start all over from the beginning.  When even 16 GB was not enough for the very largest project (consisting of roughly 18 million file revisions), I had doubts that increasing our RAM to 32 GB would be sufficient.  Fortunately, once at 32 GB of RAM we never had to worry about RAM again.

In all, the conversion process for this largest project took almost 2 weeks (!) to complete its processing, and almost as long to validate.  The validation portion of a conversion is probably the most often overlooked step; it is mostly simple to do, but still necessary.  The process of loading very large SVN repositories takes nearly as long as the conversion process itself.  One issue that we encountered on this project was a limit on filesystem inodes for ext3.  While this was simple enough to handle, I’m glad we did the validation load to test everything before moving on to the load of the production SVN system.

All in all, this StarTeam to SVN conversion effort took roughly 3 months and was not without its share of challenges but was ultimately worth the effort for the customer.  There really is no substitute for this sort of migration.  In most cases, without a migration like this, companies that need this data available will keep an older VCS running for years, with all the associated costs, in order to stay in compliance with their internal policies or external regulations.

If you’d like to know more about the code changes made to SVN Importer, here’s the situation.  I have made all of these updates available to Polarion, but as of now I don’t have an idea when these changes will be made publicly available through their SVN repository.  If you have questions about StarTeam conversions or the code changes I made, respond in the comments and I can give more detail and possibly find another way to share my changes.

DevOps for the Large Organization


dev·qops     \ˈdev-ˌkwäps\
: to undertake “DevOps” with particular impetus on Quality Assurance (QA), ideal for the larger organisation with rigorous QA processes and sophisticated, multi-tiered IT systems.
Example: “I use DevQops since DevOps didn’t quite address my development, quality assurance and IT operational needs.”

DevQops v. DevOps

Wikipedia describes DevOps as:
DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals.
Without trying to be overly simplistic, the emphasis here is the interaction between developers and operations. Since DevOps is still relatively immature in the IT industry, the general consensus is to focus on the literal interpretation: communication, collaboration and integration between developers and operations.
In my opinion, DevOps is the latest ‘branding exercise’ to encompass Continuous Integration (developers) and Continuous Delivery (operations). In theory this makes absolute sense; submit small changes frequently by the developers (without breaking the application) and implement in Production frequently.
DevOps encompasses both Continuous Integration and Continuous Delivery
However, I believe this approach is only suitable for a subset of all organisations, those that develop relatively simple systems with a short route to Live. For organisations that have large, complex, multi-tiered systems – such as Financial Services, Telecommunications, Utilities, Retail and Gaming organisations – it is likely that rigorous QA processes are in place to address various methods of testing, such as:
  • Systems Integration Testing,
  • Regression Testing,
  • User Acceptance Testing,
  • Penetration Testing,
  • Load Testing,
  • Functional Testing,
  • Smoke Testing.
Once these levels of QA are introduced, the ability to frequently make changes and deliver changes is somewhat restricted.
System architectural example
If we consider the example above, this could be a typical financial system topology which might represent a single QA stage (a single test-rig) within the overall software development life cycle. As a minimum – and for the purposes of this example – let’s assume this particular organisation undertakes at least four levels of testing:
  • Systems Integration Testing,
  • Regression Testing,
  • User Acceptance Testing, and
  • Load Testing.
In terms of complexity, change management and environment management, it is my belief that the QA managers have a far more difficult job than even the Operations team – the Operations team, while important, typically only have to manage a single version of any given system within a single environment, Production.
The QA team has the more complex and demanding role of having to:
  • Manage multiple versions of a single application,
  • Manage multiple applications simultaneously,
  • Manage multiple test-rigs simultaneously, and
  • Co-ordinate all of the above to ensure testing is completed within the agreed test duration.
If the QA element cannot be managed efficiently (streamlined and automated) then it effectively renders Continuous Integration and Continuous Delivery nigh on useless – as per Kanban, your process is only as effective as your greatest bottleneck.
So, to get to DevOps nirvana (DevQops) one has to assess the processes surrounding QA. In my experience the most common issues are around:
  • Managing “change-sets”; a collection of small changes from development scheduled for delivery,
  • Infrastructure Management; provisioning required infrastructure to support altered application(s), and
  • Application Deployment; implementing changes from development – or other test stages – into desired test-rig.
These common issues are not difficult to manage in themselves, but when applied to a large organisation with hundreds, if not thousands, of servers, test-rigs, environments and applications, the task becomes brittle, cumbersome and resource-intensive [costly] to manage.
I will dedicate a future blog post to “How to Implement Fully Automated DevQops”.

Strategic Tooling & Release Automation

Strategic Tools Prevent Enterprise Automation

Many organisations have “strategic” tools intended for large groups of users, designed to standardize and centralize processes, management and enforcement. Software Configuration Management and Application Life Cycle Management (ALM) are just two disciplines where “strategic tooling” is commonplace.
While the concept of a strategic tool is logical, it is impractical to assume that ‘one size’ can ‘fit all’. In fact, I can almost guarantee that whoever you are, you probably work for an organisation that has implemented a “strategic tool” (or a “tool of choice”) that was intended to satisfy your organisation’s business or operational requirements.
If we focus on Software Configuration Management (SCM) and software engineering, then we could use IBM’s Rational products or CA Technologies’ Software Change Manager (Harvest) product as examples. These products are typically implemented with the view of enforcing and managing SCM-based processes (source code control, life cycle management, etc) across the IT organisation. Whilst these tools are more than capable of addressing SCM and ALM requirements, there are factors that alienate users, such as:
  • Product Administration; requesting product configuration changes takes time and delays users, such as:
    • User Administration
    • Project Configuration
    • Repository Configuration
    • Build Configuration
    • Administration & Housekeeping
    • Process Configuration
  • Process Automation; recognising repetitive processes that can be automated.
  • Tool Integration; integrating the product with other enterprise systems (Change Management, Service Management, etc).
  • Product Education; understanding how to use the product and what the product is capable of.
  • Product Usability; whether the end users actually feel comfortable using the product.
  • Product Familiarity; simply, end users prefer something that is familiar to them.
The above example illustrates Git as the ‘strategic tool’ and how the output from Git – the source code – is propagated through the life cycle, to be deployed to the various QA and Production environments. The illustration also highlights the manual processes of an Environment Manager and Release Manager utilising Microsoft Excel to [manually] manage the application changes and deployments.
This example might be reasonably typical for smaller organisations, but what inevitably happens in larger companies is that additional tools are adopted over time. The following example illustrates how individual teams and projects might ‘break away’ from the “IT Standard” and adopt their own tooling, which finds a place within the organisation and contributes to the overall tooling complexity.
The number of tools expands and the role of Environment & Release Managers becomes even more complex as they attempt to co-ordinate change from numerous tools, locations and processes. This is where the majority of large organisations have difficulty in implementing automation – automation is achieved by assigning computational resource to undertake manual, repetitive tasks. If the processes are fluid, the tooling non-standard and the scale vast, then achieving full Release Automation is impossible.
To achieve full Release Automation in this scenario, one would have to have a solution that could dynamically link to all the various tools and their underlying repositories (where the system changes are made by the developers). What is also important to note is that any given Application is not simply a collection of compiled binaries; an Application consists of configuration settings (application and environment), technology frameworks, services, packaged solutions, etc.
Each one of these Components has to exist somewhere, whether the Release process is manual or automated. For Release Automation, however, the tool responsible for managing the end-to-end deployments has to be capable of dynamically linking to various distributed sources.
Essentially, by modelling the Application, its Components and the location [tool] of those Components, full Release Automation becomes attainable.
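That modelling idea can be sketched as a small data model in which each Component records the kind of source it lives in, so a release tool can resolve it dynamically (the field names and example values are assumptions for illustration, not Release Engineer’s actual schema):

```python
# Illustrative model: an Application is a set of Components, and each
# Component knows where its artifacts live - a source code control tool,
# a file system location or a binary repository.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    source_kind: str    # "scm" | "filesystem" | "binary_repo"
    location: str       # repository URL or path

@dataclass
class Application:
    name: str
    components: list = field(default_factory=list)

app = Application("payments", [
    Component("web-ui", "scm", "git://example/payments-ui"),
    Component("db-scripts", "filesystem", "/releases/payments/db"),
    Component("core-jar", "binary_repo", "https://repo.example/payments-core"),
])
```

Note that the three components deliberately come from three different source kinds, reflecting the point that not every element of a release adheres to the “Strategic Tooling” initiative.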
The above example illustrates how the logical definition of Applications, Components and their sources enables large organisations to achieve Release Automation. Notice that a Component’s source could be a source code control tool, a file system location or a binary repository – it’s not uncommon to have elements of a Release that do not adhere to the “Strategic Tooling” initiative.
By implementing a tool – such as Release Engineer from OpenMake Software – it is possible to achieve full Release Automation with very little cultural impact, operational risk and without ripping out existing tools and processes.

Agent-based or Agent-less Release Automation Solution?

Release Automation: Agent v. Agentless

Many enterprise software systems can be categorized as either “Agent-based” or “Agent-less”. This blog is going to discuss why any organisation would choose to select one method over the other, specifically around Release Automation and Software Deployments.

The first question one should pose – regardless of whether the potential solution is agent-less or not – is this: “What tasks am I looking to conduct as part of my Software Deployment solution?” At this point I also want to make it clear that when I am referring to “Software Deployments”, I am addressing the deployment of software across the end-to-end software development life cycle, not just production systems.

Without creating an exhaustive list that anticipates every granular task, required by every organisation, for every software deployment scenario, I shall attempt to summarize the most common and typical steps and tasks:

  • Send artefacts to a remote server.
  • Manage and manipulate artefacts on a remote server.
  • Execute remote ‘jobs’ on a remote server – this can be categorised further as:
    • Execute a remote script that already resides on a remote server,
    • Send a script to the remote server and execute, and
    • Execute a system command on the remote server.
  • Compile source code on remote server.
  • Start / Stop Services on remote server.
  • Capture the results and output of a remote script or system command.

All things considered, the mechanisms employed to deploy software to remote servers are finite. Now that we understand what we need to manage a software deployment, we need to assess the merits of an agent-based system.
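To make the agent-less idea concrete, a couple of the tasks listed above can be expressed as ordinary ssh/scp command lines (host names and paths are hypothetical; the commands are only constructed here, not executed):

```python
# Sketch of an agent-less approach: build standard ssh/scp command lines
# rather than talking to a resident agent on the remote server.

def scp_command(artifact, host, remote_path):
    """Send an artifact to a remote server (task: deliver artefacts)."""
    return ["scp", artifact, "%s:%s" % (host, remote_path)]

def ssh_command(host, remote_cmd):
    """Execute a system command on a remote server; running this via
    subprocess would also capture the command's output."""
    return ["ssh", host, remote_cmd]

deploy = scp_command("app.war", "deploy@qa01", "/opt/apps/")
restart = ssh_command("deploy@qa01", "systemctl restart app")
```

A real tool would hand these lists to `subprocess.run` and collect exit codes and output; the point is that every task on the list reduces to standard, firewall-friendly SSH traffic.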

To be blatantly clear, I would like to state now that my preference is firmly on the side of agent-less. This opinion is formed through many years of working with enterprise software – not just in the Release Automation space – and witnessing real-world limitations and challenges.


Disadvantages of an Agent-based Release Automation Solution

Agents are a great way of building robust connectivity between the deployment server and its end points – the remote servers to which one would like to deploy software and systems. However, an agent-less system can be deemed just as robust if it is based on SSH/SSL-secured connections.

In my opinion, the overheads associated with an agent-based solution far outweigh those of an agent-less solution.


Agent Installation

Obvious, but crucial, an agent-based solution will require the customer to install agent software on each and every end point to which the customer is looking to deploy software and systems. In small companies this may not be a significant problem, but when dealing with large enterprises that might have hundreds, if not thousands, of end points then the resource requirements increase substantially.


Agent Configuration

Each and every agent will require configuration settings amended to ensure it can connect to the solution server. Each agent may also require configuration settings altered based on the role of the agent.



Agent Maintenance

Software vendors continually update and improve their solutions; an agent-based solution will, therefore, potentially require software updates. Again, not such a huge problem for small companies but large organisations will have to allocate resources for these upgrades, possibly initiating dedicated project teams to complete the upgrade effort.



Firewall & Relay Configuration

Agent-based systems will undoubtedly require firewall configuration changes to allow the agents and solution server to communicate and relay instructions and data between numerous domains within large corporate networks.



Agent Availability

Installing an agent will typically involve installing the agent software as a service running on the remote server. As with any service, it is possible that this service may ‘fail’, require configuration changes, need recycling, not be compatible with other services or, as a worst case scenario, require a complete re-installation.



Platform Support

Agents are a piece of software built for a specific platform; if you want to deploy to Windows then you will need a Windows agent, UNIX will require its specific agent, so too will Linux, and so on. Since it requires development resource for any vendor to build a specific agent, it is most cost effective for vendors to target the distributed platforms – Windows, UNIX and Linux.


However, large organisations make use of various platforms designed to address specific needs. Financial Services companies will make use of fault-tolerant, high-transaction-processing platforms such as iSeries, Stratus, OpenVMS and Tandem. Retail organisations, for example, are likely to make use of the IBM4690 platform. It is highly likely that these platforms are not supported by agent-based systems, therefore preventing organisations from achieving full Release Automation.




Advantages of an Agent-less Release Automation Solution


Release Engineer (from OpenMake Software) is an agent-less solution. This means that all of the above agent-based disadvantages are addressed by removing the need for agents.


  • Agent Installation – Not Required
  • Agent Configuration – Not Applicable
  • Agent Maintenance – Not Applicable
  • Firewall & Relay Configuration – Most organisations will likely use SSH/SSL to connect and administer their server infrastructure, and since Release Engineer utilises the same protocol, no firewall changes will be necessary. Even if firewall changes are required, it’s no more effort than with an agent-based solution, and the connectivity configuration will be robust and secure.
  • Agent Availability – There is no reliance on the availability of any agents.
  • Platform Support – Since Release Engineer utilises standard protocols, the range of platforms to which it can deploy is diverse: Windows, UNIX, Linux, iSeries, OpenVMS, Tandem, Stratus, IBM4690, Tru64…