November 14, 2018 | DIEBOLD NIXDORF
Modern IT projects are executed in a world of growing uncertainties. This leads to the need to rapidly change or modify software implementations as our clients’ requirements shift.
In the past, software products were typically developed using the waterfall approach, marked by a lengthy up-front analysis of customer requirements, along with detailed documentation and specifications, all of which take place before the actual code development begins.
“Not my problem”
This waterfall approach fosters the creation of monolithic applications that are usually built, packaged and released as a single, coherent entity every 6 to 12 months. This approach works best in an environment with stable and well-known requirements, but it eventually leads to a huge amount of code changes within every release, placing a heavy burden on software integration testing with every new version.
As the development and operation teams are clearly separated, this also supports the "not my problem" attitude, in which the development teams' responsibilities typically end once they successfully build and package the product, and the operations team is left to figure out and handle deployment and operational issues on their own.
Figure 1: Impact of the Waterfall model on software development
Drawbacks of the waterfall approach
Needless to say, the sequential waterfall approach has many flaws for users of the software in terms of time-to-market, quality, predictability and adaptability. The internal structure of the application is built upon tightly coupled components, which are wired together via their public APIs. As a result, the monolithic application architecture makes parallel development and deployment of parts of the application nearly impossible: the application has to be built and released as a single entity every time.
The long timeframe needed to develop software leads to a huge set of changes to various parts of the application's source code, which makes it nearly impossible to predict the impact on the behavior of the final software application at the end of a release cycle. In addition, the clear separation between development and operational teams can often result in misunderstandings about deployment constraints, gaps in non-functional requirements and a lack of feedback from the operations team to the software engineers.
To solve these issues, we take a different approach to the software development process, one that is more flexible and agile and allows us to develop, package and deploy isolated and self-contained parts of the overall software application individually.
We propose a more loosely coupled application architecture typically based on various microservices, each of which offers a domain-driven public API, and communicates and integrates via standard Web protocols with its surroundings. Any such microservice can be seen as a standalone, self-contained, coherent functional entity that can be designed, developed, built and released independently.
This makes it easier to adapt parts of the application to changing business requirements as well as develop parts of the application in parallel. In conjunction with Continuous Integration & Delivery processes that support the automatic building and testing of parts of the application as soon as a developer commits code changes, this will ultimately lead to higher quality and better predictability of the application's behavior.
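As a minimal sketch of such a standalone, self-contained service, the following Python snippet exposes a hypothetical Item Lookup endpoint over plain HTTP (one of the "standard Web protocols" mentioned above) and then queries it the way a client would. The barcode, item data and URL layout are invented purely for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical in-memory item master data standing in for a real database.
ITEMS = {"4006381333931": {"description": "Ballpoint pen", "price": 1.99}}

class ItemLookupHandler(BaseHTTPRequestHandler):
    """Serves GET /items/<barcode> as JSON -- the service's entire public API."""

    def do_GET(self):
        barcode = self.path.rsplit("/", 1)[-1]
        item = ITEMS.get(barcode)
        if item is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(item).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example output quiet

# Port 0 lets the OS pick a free port; the service runs in a background thread.
server = HTTPServer(("127.0.0.1", 0), ItemLookupHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (e.g. the POS Transaction service) integrates purely via HTTP.
with urlopen(f"http://127.0.0.1:{server.server_port}/items/4006381333931") as resp:
    item = json.loads(resp.read())
server.shutdown()
```

Because the client only depends on the HTTP contract, the service behind it can be rebuilt, redeployed or even rewritten independently, which is exactly the decoupling the microservice approach is after.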
Figure 2: Impact of Agile methodologies on software development
A Retail example
To explain the benefits of microservices in a real-world scenario, let’s look at the core functionality of a Point-of-Sale application. The application has to look up item master data for barcodes scanned or keyed in by an operator. Based on the item data, it has to calculate the final item price, including quantity, promotion, tax, etc. and add the information to the current basket. In addition to the single item details, the total amount, tax and discount have to be reevaluated for the whole basket each time an item is added, changed or removed. At the end of the transaction, a payment has to be made by the consumer to settle the outstanding amount with the retailer.
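The basket logic described above can be sketched in a few lines of Python. The flat tax rate, the per-line absolute discount and the rounding rules are simplifying assumptions for illustration, not a real pricing engine:

```python
from dataclasses import dataclass, field

TAX_RATE = 0.19  # assumed flat tax rate; real POS systems use per-item tax rules

@dataclass
class Line:
    """One scanned item in the basket."""
    barcode: str
    unit_price: float
    quantity: int
    discount: float = 0.0  # assumed absolute discount per line

    @property
    def amount(self) -> float:
        return self.unit_price * self.quantity - self.discount

@dataclass
class Basket:
    lines: list = field(default_factory=list)

    def add(self, line: Line) -> dict:
        self.lines.append(line)
        return self.totals()  # totals are reevaluated on every change

    def totals(self) -> dict:
        net = sum(line.amount for line in self.lines)
        tax = round(net * TAX_RATE, 2)
        discount = sum(line.discount for line in self.lines)
        return {"total": round(net + tax, 2), "tax": tax, "discount": discount}

# Operator scans three pens with a 0.50 promotion on the line.
basket = Basket()
totals = basket.add(Line("4006381333931", unit_price=1.99, quantity=3, discount=0.50))
```

In the microservice version of the application, the pricing, tax and discount steps above would each sit behind their own service API rather than inside one class, but the flow of the calculation stays the same.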
In the Waterfall model, we would start to gather all the requirements of the application in detail, and document them in a Product Requirements document. After this document has been reviewed and accepted we would start specifying the application architecture, including the required components (in this case, Item Lookup, Discount, Tax, POS Transaction and Payment), their public APIs and interaction between them.
Only after this Implementation Specification is finalized, reviewed and accepted would the development teams start writing the code that will end up in the final application. After the implementation of the application as a whole is done (which also includes building and packaging the result at the end) we would start the next phase of verifying the implemented functionality against the specification and requirements from the beginning. As the application is always built and packaged as a monolithic block of code, we always end up testing the whole application with every subsequent release to make sure there are no regressions of already-existing functionality. If the verification is successful, we can hand over the application to the operations team to deploy or ship it to our clients.
Alternatively, in the microservice world, we would break down the whole application into its functional building blocks and start describing and specifying each of them individually. Once the functional and non-functional requirements for a single microservice (in this case, Item Lookup, Discount, Tax, POS Transaction and Payment) are settled, the development team can immediately begin implementation—we don’t have to wait until the whole application is described and defined in detail, but can, in parallel, start with implementing the parts that are specified already.
As some parts of the application are quite well-known and stable, we would start with them to show some progress (in this example, Item Lookup and POS Transaction). Other parts, which might not be as well-known or may have unclear business requirements, can take a little longer to specify in detail (i.e. Tax, Discount and Payment). So for those, we can start with the Minimum Viable Product specs and rapidly iterate on those based on further market insights or experience gained in subsequent release cycles.
From a development perspective, the integration of the functional building blocks no longer takes place at the component API level during build time; instead, it’s gradually moved to the service API level at runtime. As each microservice encapsulates a certain part of the application's business logic and is designed, developed and released independently from the others, there has to be a clear specification and versioning of the microservices’ public API.
Figure 3: Differences in deployment of monolithic vs. microservice-based applications
We usually end up with a couple of microservices that have to be tested, deployed and managed individually. It might even be necessary for some parts of the application to be available in several versions at runtime simultaneously, which further increases the complexity of the operational aspect of the solution.
I’ll explore how to handle these difficulties and the resulting complexity more efficiently in my next post. Stay tuned!
Is your software application environment future ready? Let’s discuss strategies to minimize the time and effort spent on developing, updating and managing your software stack.