This post came through and I felt it was too good to not forward on. Read about one person’s introduction to what DevOps is. From Cisco Communities.
OpenMake Software drives ARA to the next level with the general availability of Release Engineer, a new and powerful ARA solution designed for the enterprise.
Delivers a scalable ARA solution that facilitates the reuse and sharing of release objects across teams, provides agentless distribution, and supports multi-platforms and environments.
Chicago, IL – June 25, 2014 – OpenMake Software today announced the July 15, 2014 GA date of Release Engineer, the newest addition to the OpenMake Software Dynamic DevOps Suite. Release Engineer is an enterprise-scale application release automation (ARA) solution designed for complex multi-platform environments.
Release Engineer, formerly Deploy+, centralizes the management, configuration, and reuse of all Release and Deploy elements for the enterprise. Its flexible design allows Operations to define release standards that can be inherited and customized specifically for each project team. It supports multi-tiered platforms with no reliance on any agent technology. Unlike its competitors, Release Engineer leverages a domain structure that facilitates the sharing of release components and dependencies for reuse across teams and delivers a unique roll-forward logic for incremental release processing. It eliminates delivery errors through reuse, planning, and shared control of your release, with full audit transparency.
“An enterprise release automation solution requires a tool that is highly reusable, can support multiple platforms including both WebSphere and MS IIS, and allows the central teams to support more releases with less staff,” explains Stephen King, CEO, OpenMake Software. “Release Engineer solves these problems by delivering a domain-driven framework for sharing Components and Release Modules, coupled with an Agentless technology that reduces the overhead associated with deploy solutions that require the management of hundreds, if not thousands, of end-point deploy agents.”
Steve Taylor, CTO of OpenMake Software, emphasized, “Keeping in step with our company philosophy of providing model-driven DevOps, Release Engineer centralizes the definition and sharing of component models and reusable actions across all teams company-wide. Our competitors manage these attributes at the application level, creating silos of information that cannot be shared. We minimize the work required for automating releases by defining objects once and reusing them for all teams with similar requirements.”
Release Engineer was designed specifically with the multi-platform enterprise in mind. Its design allows enterprise release requirements to be defined at the highest-level Domain and shared across all Sub-domains, creating a high level of transparency and control available to all teams within the organization, from central release teams to each individual development team.
I’ve previously posted about the SVN Importer tool here and hoped at some point to follow up on my experiences converting from specific version control tools. Well, after a StarTeam conversion project last year that was easily an order of magnitude larger than any other conversion project I’ve ever done, I think I’m fairly well qualified to write on the topic. I had previously done some small conversions using StarTeam 2005 (aka version 11), but for this project the customer was using StarTeam 2009 (aka version 12.5). Oh, and when I say this effort was big, I mean REALLY big: the largest project had almost 20 million file revisions and the whole system had around 50 million file revisions.
The first thing I noticed in doing other, smaller conversions is that StarTeam lacks certain critical functions in its command line interface (CLI) that these sorts of conversions need. Because of this, the SVN Importer developers, out of necessity I believe, chose to use the StarTeam API to perform the conversion to SVN. This requires that you have the StarTeam SDK installed on your conversion machine. Also, if you are converting very large projects (greater than 1 million file revisions) as I was, it means you’ll need a 64-bit version of the SDK. While I was able to track this down for StarTeam 2009, I don’t believe it exists for earlier versions. You’ll also need to make sure that the correct version of the StarTeam API jar file is in the classpath of the importer and that the Lib directory of the StarTeam SDK is included in your PATH environment variable.
Once I actually got my conversions running with SVN Importer, things went well when converting the trunk of projects, but I encountered the following error any time I tried to convert branches (aka derived views in StarTeam):
INFO historyLogger:84 - EXCEPTION CAUGHT: org.polarion.svnimporter.svnprovider.SvnException: Unknown branch:
Since I was familiar with the inner workings of SVN Importer and the source was freely available, I worked to debug this issue and was able to find a simple coding error that was easily corrected. As I recall it was because the code in question was using the wrong method, with the wrong return type, to get the branch name.
Later on, I encountered another problem where the same file would be added twice in the same SVN revision in the output dump files. When attempting to load these dumps into an SVN repository, I would see the error message ‘Invalid change ordering: new node revision ID without delete.’ After some detective work I determined that the same file was being added to a revision multiple times when there were multiple StarTeam labels (equivalent to SVN tags) for the same set of changes. I made a small adjustment to the StarTeam model to check whether a file already exists in a revision before trying to add it, and this resolved the issue.
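To give a sense of the shape of that fix, here is a minimal, self-contained sketch of the deduplication idea, assuming a set of already-added paths per revision; the class and method names are illustrative only, not the actual SVN Importer code.

// Illustrative only: track the paths already added to a revision and skip
// repeat additions caused by multiple StarTeam labels on the same change set.
import java.util.HashSet;
import java.util.Set;

public class RevisionPaths {
    private final Set<String> added = new HashSet<>();

    // returns true only the first time a given path is added to this revision
    public boolean addFile(String path) {
        return added.add(path);
    }

    public static void main(String[] args) {
        RevisionPaths rev = new RevisionPaths();
        System.out.println(rev.addFile("src/Main.java")); // true: first add
        System.out.println(rev.addFile("src/Main.java")); // false: duplicate ignored
    }
}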
Besides these more significant problems, there were a few things I wanted to improve about how the conversion process worked. To start, the converter was performing duplicate checkouts for each file revision, which added a good deal of extra time to the conversion process. In addition, because the conversions I was doing were on very large repositories, over the course of a longer conversion certain StarTeam operations could fail for various reasons (for example, network and/or server flakiness), and the converter was written in such a way that a failure on any StarTeam operation would cause the whole conversion to fail. To mitigate this issue, I wrapped each call to StarTeam in some logic to retry the operation if there was an error. Once all these changes were made, I was ready to tear through these projects … or perhaps crawl is a better way to describe it!
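For illustration, here is a minimal sketch of that retry idea, assuming a small functional wrapper around each StarTeam call; the names (withRetry, StarTeamCall, checkoutFileRevision) are hypothetical and not the actual code changes.

// Illustrative only: retry a StarTeam call a few times before giving up, so a
// transient network or server error does not abort a multi-day conversion.
public class RetryingCall {

    @FunctionalInterface
    interface StarTeamCall<T> {
        T run() throws Exception;
    }

    static <T> T withRetry(StarTeamCall<T> call, int maxAttempts, long waitMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.run();        // success: return immediately
            } catch (Exception e) {
                last = e;                 // remember the failure
                Thread.sleep(waitMillis); // back off before the next attempt
            }
        }
        throw last;                       // every attempt failed
    }

    // Hypothetical usage, where checkoutFileRevision() stands in for whatever
    // StarTeam API call is being protected:
    // byte[] data = withRetry(() -> checkoutFileRevision(file, rev), 3, 5000);
}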
If you have ever done a version control history migration, you know that these migrations can take a long time to run as the process checks out every version of every file and constructs the new repository. When we ran smaller tests we found the performance to be a bit slow, but nothing prepared us for the projects with millions of file revisions.
As we moved to larger and larger projects, not only did the time requirements swell, but so did the hardware requirements. While projects with tens (or even hundreds) of thousands of revisions were achievable with 8 GB of RAM, we found that this was not enough for projects with millions of file revisions. This could be very frustrating because the conversions could sometimes run for over a day before erroring out, and when they did, there was no way to recover the conversion; you had to start all over from the beginning. When even 16 GB was not enough for the very largest project (consisting of roughly 18 million file revisions), I had doubts that increasing our RAM to 32 GB would be sufficient. Fortunately, once at 32 GB of RAM we never had to worry about RAM again.
In all, the conversion process for this largest project took almost two weeks (!) to complete, and nearly as long to validate. The validation portion of a conversion is probably the most often overlooked; it is mostly simple to do, but still necessary. The process of loading very large SVN repositories takes nearly as long as the conversion process itself. One issue that we encountered on this project was actually hitting the inode limit of the ext3 filesystem. While this was simple enough to handle, I’m glad we did the validation load to test everything before moving on to the load of the production SVN system.
All in all, this StarTeam to SVN conversion effort took roughly three months and was not without its share of challenges, but it was ultimately worth the effort for the customer. There really is no substitute for this sort of migration. In most cases, without a migration like this, companies that need this data available will keep an older VCS running for years, with all the associated costs, in order to stay in compliance with their internal policies or external regulations.
If you’d like to know more about the code changes made to SVN Importer, here’s the situation. I have made all of these updates available to Polarion, but as of now I don’t know when these changes will be made publicly available through their SVN repository. If you have questions about StarTeam conversions or the code changes I made, respond in the comments and I can give more detail and possibly find another way to share my changes.
DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals.
[Image: DevOps encompasses both Continuous Integration and Continuous Delivery]
[Image: System architectural example]
Many enterprise software systems can be categorized as either “Agent-based” or “Agent-less”. This blog is going to discuss why an organisation would choose one method over the other, specifically around Release Automation and Software Deployments.
The first question one should pose – regardless of whether the potential solution is agent-less or not – is this, “What tasks am I looking to conduct as part of my Software Deployment solution?” At this point I also want to make it clear that when I am referring to “Software Deployments”, I am addressing the deployment of software across the end-to-end software development life cycle, not just production systems.
Without creating an exhaustive list that anticipates every granular task required by every organisation for every software deployment scenario, I shall attempt to summarize the most common steps and tasks:
All things considered, the mechanisms employed to deploy software to remote servers are finite. Now that we understand what managing a software deployment requires, we need to assess the merits of whether or not to use an agent-based system.
To be blatantly clear, I would like to state now that my preference is firmly on the side of agent-less. This opinion is formed through many years of working with enterprise software – not just in the Release Automation space – and witnessing real-world limitations and challenges.
Agents are a great way of building robust connectivity between the deployment server and its end points – the remote servers to which one would like to deploy software and systems. However, an agent-less system can be deemed just as robust if it is based on SSH/SSL secured connections.
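To illustrate that point, here is a minimal sketch of an agentless remote deployment step using the open-source JSch SSH library; the host, credentials, and command are placeholders, and this is not how any particular vendor’s product is implemented.

// Illustrative only: run a deployment command on a remote server over SSH.
// Nothing is installed on the target beyond the SSH daemon it already runs.
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class AgentlessDeploy {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("deploy", "app-server-01", 22);   // placeholder user/host
        session.setPassword(System.getenv("DEPLOY_PASSWORD"));              // or use key-based auth
        session.setConfig("StrictHostKeyChecking", "no");                   // demo only; verify host keys in practice
        session.connect();

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("/opt/releases/install.sh 1.2.3");               // placeholder deploy step
        channel.connect();

        // wait for the remote command to finish, then report its exit status
        while (!channel.isClosed()) {
            Thread.sleep(500);
        }
        System.out.println("exit status: " + channel.getExitStatus());
        channel.disconnect();
        session.disconnect();
    }
}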
In my opinion, the overheads associated with an agent-based solution far outweigh those of an agent-less solution.
Obvious, but crucial, an agent-based solution will require the customer to install agent software on each and every end point to which the customer is looking to deploy software and systems. In small companies this may not be a significant problem, but when dealing with large enterprises that might have hundreds, if not thousands, of end points then the resource requirements increase substantially.
Each and every agent will require its configuration settings to be amended to ensure it can connect to the solution server. Each agent may also require settings altered based on its role.
Software vendors continually update and improve their solutions; an agent-based solution will, therefore, potentially require software updates. Again, not such a huge problem for small companies but large organisations will have to allocate resources for these upgrades, possibly initiating dedicated project teams to complete the upgrade effort.
Agent-based systems will undoubtedly require firewall configuration changes to allow the agents and solution server to communicate and relay instructions and data between numerous domains within large corporate networks.
Installing an agent will typically involve installing the agent software as a service running on the remote server. As with any service, it is possible that this service may ‘fail’, require configuration changes, need recycling, not be compatible with other services or, as a worst case scenario, require a complete re-installation.
Agents are pieces of software built for a specific platform; if you want to deploy to Windows then you will need a Windows agent, UNIX will require its specific agent, and so too will Linux, and so on. Since it requires development resources for any vendor to build a specific agent, it is most cost-effective for vendors to target the distributed platforms – Windows, UNIX and Linux.
However, large organisations make use of various platforms designed to address specific needs. Financial Services companies will make use of fault-tolerant, high-transaction-processing platforms such as iSeries, Stratus, OpenVMS and Tandem. Retail organisations, for example, are likely to make use of the IBM4690 platform. It is highly likely that these platforms are not supported by agent-based systems, which therefore prevents these organisations from achieving full Release Automation.
Third Party Development Complexity
The October 2013 issue of SD Times has an article called “The reconstruction of deployment” by Alex Handy. He starts the article by talking about builds: “In days past, there was one sure fire, always working plan for building software: start the build, then go get some coffee. Sometimes building even meant it was time to go home.” The article talks about continuous build and deploy and gives credit to CI for improving the speed of the build and deploy process. However, you must ask yourself, “are my builds really faster?”

Builds, the process of compiling and linking code, have not changed at all. Yes, we have Ant and Maven and not just Make, but in essence CI does not change the build scripts themselves. In Alex’s article, he alludes to a time in our past when builds would take hours to run. Guess what: they still take hours to run when a script is driving them. The same build script that was executed manually and took hours to run is now just executed via Jenkins, and it still takes hours to run. A build script executed by Jenkins runs no faster than a build script executed in any other way.

In the article, Brad Hurt, VP of Product Management at AccuRev, confirms this. He explains that you need to have control over the different levels of code maturity so that, in the case of an 8-hour build, you don’t have “random developer” code checked in that pollutes the build. In this reference to “build”, Brad is talking about the compile and link process. Some people refer to the build as a set of steps executed before the compile, the compile itself, and the steps after the compile, but Brad’s reference is more accurate. He is talking about a compile process that can take hours to run, and for large projects this is not unusual.

The goal is actually to never have an 8-hour build. As we mature in DevOps, we are moving away from one-off scripts, particularly around deploy. OpenMake Meister moves away from scripts in both build and deploy. This allows intelligence in the build for building incrementally, with acceleration and parallelization decreasing build times substantially. With incremental processing, an 8-hour build can become a 10-minute build. This incremental processing is passed on to the deploy, so even deploys are incremental and not monolithic.

So let’s stop kidding ourselves. A Jenkins build is no faster than the script it is calling. And if the script cannot support incremental changes (agile practice) or support parallelization for speeding up monolithic compiles, then you have a really cool CI process with a very slow back end.
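To make the incremental point concrete, here is a minimal sketch of the timestamp check at the heart of any incremental build; it is illustrative only and not how Meister is implemented.

// Illustrative only: rebuild a target only when its source is newer than the
// existing output. This basic decision is what lets an incremental build skip
// most of an 8-hour monolithic compile when only a few files have changed.
import java.io.File;

public class IncrementalCheck {
    static boolean needsRebuild(File source, File target) {
        return !target.exists() || source.lastModified() > target.lastModified();
    }

    public static void main(String[] args) {
        File src = new File("src/Main.java");      // hypothetical paths
        File out = new File("build/Main.class");
        System.out.println(needsRebuild(src, out) ? "compile" : "up to date");
    }
}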
OpenMake Software today announced the 7.5 release of its market-leading Meister build automation, Mojo workflow management and CloudBuilder provisioning products, together with its new Deploy+ offering, which together comprise its Dynamic DevOps Suite. Delivering a consolidated tool chain for process automation, continuous build and continuous deploy, the Dynamic DevOps Suite offers a model-driven framework for simplifying the hand-off of the software build and delivery process from development teams to production control.
Tracy Ragan, COO, OpenMake Software explains, “We have expanded our model driven framework for managing Builds into the Deployment realm. Our Dynamic DevOps suite substantially reduces the use of one-off build and deploy scripts, delivering a more reliable and transparent method of delivering binaries.”
The Dynamic DevOps Suite includes both Build Services and Deploy Services that create standard processes for building and deploying applications. It includes standard models for delivering to WebSphere, Tomcat, Microsoft IIS and other common server environments. Standard process workflows can be defined and reused from development through production environments, with dynamic changes addressing the uniqueness of each environment.
Why Dynamic? “Defining a reusable, standardized process across the lifecycle from build through deploy is the ultimate goal in achieving DevOps. Changes between environments for builds, test execution and deployments should be addressed dynamically, without human intervention or one-off scripts. We uniquely achieve this level of automation for builds, and now deployments, with our 7.5 release,” explains Steve Taylor, CTO, OpenMake Software.
The Dynamic DevOps Suite is now available for download from http://www.openmakesoftware.com/download/
Watch this Google video from their education series.
They have written a homegrown process that is extremely similar to OpenMake Meister. Build Rules, the elimination of scripts, incremental processing, management of libraries, and parallelization and distribution of workload are all shown. The good news is you do not need to write this process on your own. You can use Meister instead. Meister solves all the problems and provides all the features covered here. So yes, your builds can be intelligent too.