The contents of this page are copied directly from IBM blog sites to make them Kindle friendly. Some styles and sections from these pages are removed so the content renders properly in the 'Article Mode' of the Kindle e-Reader browser. All the content on this page is the property of IBM.


IBM Cloud Code Engine: Three Tips for Migrating from Cloud Foundry

2 min read


Kazuki Nobutani, Staff Software Engineer - IBM Cloud Support

Recently, IBM announced the deprecation of IBM Cloud Foundry. This post looks at migration options using IBM Cloud Code Engine. 

IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions and batch jobs. In this post, I am going to share what I've explored so far and tips for migrating from Cloud Foundry to Code Engine.


The goal here is to manually migrate a Node.js app on Cloud Foundry to Code Engine. In this example, I used a blank Cloud Foundry Node.js template and enabled a toolchain so that the source code exists in a private repository.

Later on, for a production migration, we should automate deployment using a new toolchain, but the tips in this post should help with your first step in migrating your microservices:

Tip 1. Write a Dockerfile and test

Make sure you create an appropriate Dockerfile for your app. Cloud Foundry apps on IBM Cloud used buildpacks, so a Dockerfile was not mandatory as it is in Code Engine. However, this is a perfect time to move to a Dockerfile to gain more control and better performance. Because Docker containers run anywhere, I recommend checking that your app starts in your local environment, if possible. Debugging a container build on IBM Cloud would be inefficient.
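For a typical Cloud Foundry Node.js app, a minimal Dockerfile might look like the following sketch (the base image, file layout and start script are assumptions; adjust them to match your app):

```dockerfile
# Minimal sketch for a Node.js app; base image and start script are assumptions
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source
COPY . .

# Code Engine injects PORT at runtime; 8080 is its default
ENV PORT=8080
EXPOSE 8080

CMD ["npm", "start"]
```

You can verify the image locally with "docker build -t myapp ." followed by "docker run -p 8080:8080 myapp" before pushing it to a registry.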

Tip 2. Configure repository access for a private repository

If you are directly pulling source code from a private repository, you must give Code Engine sufficient permission to access the repository. If it is private, make sure you create a code repo access in Code Engine. Please refer to the Code Engine documentation for more details.

Tip 3. Check your service binding

Many existing Cloud Foundry apps use service binding to access other services, such as databases and cognitive services. Service binding is a quick way to create service credentials for an IBM Cloud service. If your Cloud Foundry app has been using service bindings, you need to create new bindings for your Code Engine app and update your source code.

To be specific, service binding information for your Cloud Foundry apps is stored in the VCAP_SERVICES environment variable. However, for your Code Engine apps, it will be stored in CE_SERVICES.

For example, I used Node.js and added console logs to show the service bindings. After migrating to Code Engine, CE_SERVICES shows the service binding for Cloud Object Storage, while VCAP_SERVICES is undefined:

This is the console log I used:

const express = require('express');
const cfenv = require('cfenv'); // provides appEnv for CF-style apps
const app = express();
const appEnv = cfenv.getAppEnv();

app.listen(process.env.PORT, function() {
  console.log("server starting on " + appEnv.url);
  // CF environment variable
  console.log("Bound service : process.env.VCAP_SERVICES = " + process.env.VCAP_SERVICES);
  // CE environment variable
  console.log("Bound service : process.env.CE_SERVICES = " + process.env.CE_SERVICES);
  console.log("PORT : " + process.env.PORT);
  // Test for custom environment variable "test"
  console.log("CUSTOM ENV VAR : " + process.env.test);
});
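Because both variables contain the same JSON shape (an object keyed by service name, where each entry is an array of bindings carrying a credentials object), a small helper can make the app tolerant of either platform during the migration. This is only a sketch; adjust the service name and layout to your actual bindings:

```javascript
// Sketch: read service credentials from Code Engine (CE_SERVICES) or
// Cloud Foundry (VCAP_SERVICES), whichever is defined. Assumes the common
// layout of both variables: service name -> array of bindings, each with
// a "credentials" object.
function getCredentials(serviceName, env = process.env) {
  const raw = env.CE_SERVICES || env.VCAP_SERVICES; // Code Engine first, then CF
  if (!raw) {
    return null; // no service bindings present
  }
  const services = JSON.parse(raw);
  const bindings = services[serviceName];
  if (!Array.isArray(bindings) || bindings.length === 0) {
    return null; // service not bound
  }
  return bindings[0].credentials || null;
}
```

During migration the rest of the source code can then call, for example, getCredentials('cloud-object-storage') without caring which platform it is running on.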
What's next?

Nowadays, it is common to have an automated build and deploy pipeline for many reasons. I recommend you try the 'Develop a Code Engine application' toolchain template to understand the entire flow:

Please feel free to contact IBM client support for any questions or concerns. 

Kazuki Nobutani

Staff Software Engineer - IBM Cloud Support


Circular Economy: How AI Is Helping to Reduce Landfill Waste

4 min read


Stefano Innocenti Sedili, Senior Account Technical Leader

A look at how one organization is bringing innovation to waste management using intelligent automation.

In the opening sequence of Pixar’s WALL-E, nearly every inch of Planet Earth is covered in garbage. The world is no longer habitable by humans, and only the robot WALL-E remains, rolling among the heaps of trash, sorting each piece even though it’s clearly too much for one little robot.

It’s estimated that the global population generates nearly 5.5 million tons of municipal waste — roughly the weight of the Great Pyramid of Giza — every day. At this rate, we are quite literally burying ourselves in trash. In an ideal world, the obvious solution might be to simply stop producing so much to begin with. This is easier said than done. But circular economy models provide smart examples and accessible starting points for how we can avoid a trashy future, so to speak.

A circular economy is essentially a model of production and consumption that benefits businesses, people and the environment by getting the most use and least waste out of the stuff we use. For an example of a perfect circular economy, look to nature. A plant grows, nearby wildlife feeds on it, and both eventually die and nourish the flora and fauna that once nourished them.

Humans tend to engage in a more linear “make-take-waste” process. We make things, buy and use them, then trash them. If only things like plastic bags, diapers and old computers degraded as fast as autumn leaves and nourished the soil in the process.

Let a circular economy mindset drive innovation

Circular economy activities can yield significant advantages for business and government, such as improving the security of the raw materials supply, stimulating innovation, boosting economic growth and creating jobs.

In a recent study by the IBM Institute for Business Value, chief supply chain officers (CSCOs) identify several specific actions they plan to take over the next three years in pursuit of their circular economy goals:

  • 47% are initiating full lifecycle design of their materials and products with the intent to expand reuse of materials and components to reduce waste in the product lifecycle.
  • 44% plan to improve the energy-efficiency of their products and services.
  • 35% plan to develop new products and services based on renewable energy componentry.
  • 30% expect to engineer new zero-waste products and services. Packaging goals include reducing first-use (virgin) plastic usage (32%) and increasing the use of recyclable or biodegradable materials and packaging (30%).

To move toward circularity, organizations are automating workflows with environmental impact and innovation in mind. Waste management companies, in particular, have the opportunity to minimize landfill waste using artificial intelligence (AI) and automation technologies.

How to bring innovation to waste management — and potentially influence an entire industry

As a provider of electricity, water cycle management and heating services, and as Italy’s largest waste management and recycling company, Hera S.p.A. is on the front lines of the urgent effort to reduce waste and minimize environmental damage.

Where traditional recycling practices may be one arc in the cycle of reuse, Hera offers integrated solutions that help complete the circle. With plastics, for example, it not only recovers waste but also incorporates it into the production of high-quality new products that are themselves recyclable.

“Today, in our territories, most of the waste is recovered... only a small portion ends up burnt, but this is burnt in waste-to-energy plants, producing new energy.” — Andrea Bonetti, Hera’s manager of IT architecture

But the recovery process depends on quickly finding and separating reusable material from tons of refuse. It was with this process that Bonetti and her colleague, Milena Zappoli, Innovation Manager of the Environmental Services of the Hera Group, decided to explore how intelligent automation could improve efficiency and help channel more material to new use.

Evaluating the potential of AI for waste sorting

Hera personnel analyze waste manually. As trucks unload at the entrance to the plants and the trash is pushed toward conveyors, spotters watch for recoverable materials — including plastics, glass, aluminum and organic material — and help direct downstream sorting.

It’s an onerous job, especially at scale: 1,400 spotters work at 89 plants, where 6.3 million tons of waste are treated every year. Spotters also faced a lot of inefficiency: when a sorting anomaly occurred during the collection phase of the waste management process, the whole plant had to be stopped.

Hera envisioned capturing video of incoming trash and using AI to recognize characteristics of items and materials that would qualify them for recovery and reuse. “This could have a decisive impact on the costs of recovery and disposal activities, which is the focus of the circular economy,” explains Bonetti.

The Hera and IBM Garage teams quickly recognized that the plants were not the right place to capture video. There was too much material going by too quickly. Instead, they identified a better vantage point upstream. By mounting cameras on trash trucks, they could video the smaller amounts of material falling out of bins. “It’s still an extremely rapid passage of images,” says Bonetti. “But the study of these images has allowed us to identify significant patterns for the qualitative evaluation of the waste during the collection process, not inside the plant, which could improve the time and cost of the transformation process.”

The Hera team also hopes to correlate waste-quality data with collection locations, helping the company develop targeted information campaigns to help people better differentiate between waste items.

"The experience with IBM Garage has allowed us to activate a particularly innovative solution in the field of waste collection, selection and recovery: the project is positioned along the entire operational supply chain and can be a valid support to increase efficiency, but above all it can affect the improvement of the quality of separate collection and, therefore, the maximization of recyclable waste, making full use of the efforts made by the Hera Group in the circular economy.”  — Milena Zappoli, Innovation Manager of the Environmental Services of the Hera Group

Learn more

Bring innovation to environmental stewardship and read the Hera S.p.A case study.

Experience an IBM Automation Innovation Workshop at no charge. Find new ways to improve business and IT operations using intelligent automation.

Stefano Innocenti Sedili

Senior Account Technical Leader


Solutions for Migrating Unstructured Data from On-Premises to AWS

6 min read


Rakesh Rao, Migration Engineer - Microsoft Windows
Santhosh Kumar Ramabadran, Cloud Solution Architect
Chandrashekhar Ghaitade, Advisory Project Manager, PMP®

Three specific use cases around unstructured data migration to AWS.

During cloud migrations, we come across scenarios where there is a need to migrate or transfer files (typically unstructured data) from on-premises storage (SAN/NAS) to a specific storage service in AWS (e.g., EBS/EFS/S3/FSx). These can be files generated by the application, user uploads, integration files created by one application and consumed by others (B2B), etc. In most cases, this unstructured data varies in total size from a few MBs to 1 TB, and most importantly, the underlying application is not expected to undergo a lot of remediation to utilize the target AWS service.

In this blog post, we share our experience with three specific use cases around unstructured data migration to AWS:

  • In the first scenario, where the requirement is to share data among multiple VMs/applications, we describe how unstructured data from a Network Attached Storage (NAS) was migrated to AWS.
  • In the second scenario, we talk about how we migrated B2B data to AWS Storage.
  • In the third scenario, where the unstructured data exists in a native file system (NTFS, XFS or ext4) and is not exposed to the network as a file share, we discuss how the data in Windows/Linux instances is migrated to AWS.
1. From network attached storage (NAS) to AWS using AWS DataSync

Problem/scenario

Application A picks up incoming files from Application X, processes them and generates data files that are 50–300 GB. These then become the input for another application, Y, to consume. The data is shared by means of NFS storage accessible to all three applications.

Application A is being migrated to AWS, while Applications X and Y continue to remain on-premises. We used AWS Elastic File System (EFS) to replace NFS on AWS. However, that makes it difficult for the applications to read/write from a common storage solution, and network latency slows down Application X and Application Y.


In this case, we used AWS DataSync Service to perform the initial migration of nearly 1 TB of data from the on-premises NFS storage to AWS EFS.

AWS DataSync can transfer data between any two network or object storage systems. These can be Network File System (NFS) shares, Server Message Block (SMB) file servers, Hadoop Distributed File System (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems and Amazon FSx for OpenZFS file systems.

To solve the need for the applications to read/write from a common storage solution, and to address the network latency involved during read/write operations across the Direct Connect, we scheduled a regular synchronization of the specific input and output folders between the NFS and EFS using the AWS DataSync service. This means that all three applications see the same set of files after the sync is complete.
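As a rough sketch of that setup with the AWS CLI (all ARNs, hostnames and paths below are placeholders; verify the options against the AWS CLI reference for your region, and note that a DataSync agent must already be activated on-premises):

```shell
# Source location: the on-premises NFS export (placeholder values)
aws datasync create-location-nfs \
  --server-hostname nfs.onprem.example.com \
  --subdirectory /exports/shared \
  --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111111111111:agent/agent-0abc

# Destination location: the EFS file system in AWS
aws datasync create-location-efs \
  --efs-filesystem-arn arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0abc \
  --ec2-config SubnetArn=arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc,SecurityGroupArns=arn:aws:ec2:us-east-1:111111111111:security-group/sg-0abc

# Task that syncs the two locations on an hourly schedule
aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-src \
  --destination-location-arn arn:aws:datasync:us-east-1:111111111111:location/loc-dst \
  --schedule ScheduleExpression="rate(1 hour)"
```

The scheduling and queuing behavior described below applies to tasks created this way.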

  • Syncs can be scheduled at one-hour intervals at minimum. This soft limit can be lowered to 15-minute intervals; however, that leads to performance issues, with subsequent sync schedules queuing up in a loop.
  • Bidirectional syncs were configured to run in a queued fashion; that is, only a one-way sync can be executed at a time, and applications have to read the files after the sync interval is completed. In our case, files are generated only once per day, so this challenge was mitigated by scheduling the read/writes accordingly.
Cost implications
  • No fixed/upfront cost and only $0.0125 per gigabyte (GB) for data transferred.
  • AWS DataSync Agent (virtual appliance) must be installed on a dedicated VM on-premises.
2. Data/files in FTP locations to AWS via AWS Transfer Family

Problem/scenario

Another application, B, had to process a lot of unstructured data from an FTP location. These files were transferred to the application server through SFTP by dependent applications. Since this application moved to AWS, the dependent applications must also transfer these files to a storage location in AWS.


AWS Transfer Family provides options to transfer your unstructured data to an S3 bucket or EFS storage using the SFTP, FTPS or FTP protocols. It easily integrates with any standard FTP client (GUI- or CLI-based) and thus allows you to transfer your data from on-premises to AWS. As a managed service backed by built-in autoscaling features, it can be deployed across up to three Availability Zones to achieve high availability and resiliency.

Private VPC endpoints are available to securely transfer data within the internal network.

AWS Transfer Family can also be used for a one-time data migration for B2B Managed File Transfer.

We used an EFS mount on the application server and directed the other dependent applications to use the AWS Transfer Family SFTP private endpoint to send the files securely. The authentication was handled via SSH Key Pair so that there is no hardcoded username/password in either location. This way, we do not expose the application server over SSH port 22, which was a client-mandated security control.


It was very easy to set up and get going because our application was running on Linux.

However, FSx is not a supported target storage option, as AWS Transfer Family best suits use cases where the target application is hosted on a Linux platform. Some additional programming is needed to access an S3 bucket if a Windows-based application must consume these managed services.

Cost implications

There is a $0.30 per hour fixed charge while the service is enabled and $0.04 per gigabyte (GB) data upload/download charges are applicable.

3. From Windows/Linux local storage to AWS using rsync/robocopy

Problem/scenario

Application C read and wrote a lot of data on a native file system, and this data was needed in AWS when the application was migrated. The data on native file systems could not be migrated as-is to EBS volumes or EFS storage because the AWS native file/data transfer solutions require both source and target to be network file storage.

While we could have presented the native file system as an NFS share and used AWS DataSync as in the first scenario, this would have required additional installation and configuration on the source servers, which is usually not desired in migrations.


We used traditional tools like rsync/robocopy to copy data to AWS Storage like EFS (mounted on EC2) or EBS volumes.

We used a shell script based on rsync to pull data from the on-premises server to the EC2 instance, keeping in mind the security mandate not to expose EC2 instances on SSH port 22. Thanks to rsync's features and the good bandwidth available over Direct Connect, the data migration was seamless.


While rsync/robocopy is a good fit for the above problem, it may not be suitable if the following characteristics are exhibited by the application and the environment:

  1. If both the on-prem and target storage are network file systems, the preferred option would be AWS DataSync due to its advanced features for scheduling, etc.
  2. If the size of the data exceeds 1–2 TB, as larger transfers can lead to bandwidth throttling.
  3. If the data must be migrated and then regularly synchronized between on-prem and AWS.
  4. If security rules in the organization prevent inbound security group rules from allowing direct access to EC2 instances on port 22. In such cases, the 'pull' from on-prem storage can be initiated from AWS, and then only the outbound security group rule in AWS needs to allow traffic over port 22, which organizations would typically allow.
Cost implications

There are no ingress charges, and it is $0.08 to $0.12 per gigabyte (GB) for egress to Internet/on-premises.


In this post, we discussed very common use cases in data migrations to AWS Cloud and how native and traditional tools are used to tackle some unique situations. To summarize our experience, a quick comparison of these tools is depicted below:

We did not discuss the option of using AWS Snow Family due to feasibility issues in the scenarios. It requires physical access to a data center and is only appropriate for transferring very large data (in many TBs) — our data was not very large for any of the above use cases.

Similarly, AWS Storage Gateway was not considered as it is ideal for on-prem backup/archival/DR scenarios and none of the use cases had that requirement.

There are managed services available on AWS for data migrations, and each of them caters to a very specific set of use cases.

We will continue to share our experience as we encounter new scenarios for transferring or storing unstructured data in AWS.

Rakesh Rao

Migration Engineer - Microsoft Windows

Santhosh Kumar Ramabadran

Cloud Solution Architect

Chandrashekhar Ghaitade

Advisory Project Manager, PMP®


Domain-Driven Modernization of Enterprises to a Composable IT Ecosystem: Part 3

8 min read


Balakrishnan Sreenivasan, Distinguished Engineer

Facing challenges and preparing for organizational readiness.

In the previous two blog posts of this series (see Part 1 and Part 2), we saw various aspects of establishing an enterprise-level framework for domain-driven design (DDD)-based modernization into a composable IT ecosystem and a systematic way to modernize the applications and services. However, enterprises must acknowledge significant challenges they will face at different levels and establish an executable step-by-step plan to address them. Part 3 of this series talks about the various execution challenges that enterprises will encounter and potential approaches to address them, based on learnings from engagements.

The previous blog posts establish a “happy path” where everything falls in line per strategy and every squad is composed of people with the right skills, with various domain-aligned product squads operating independently while still knowing how to collaborate. Unfortunately, that’s not the case in most enterprises. There are several challenges an organization is likely to face during this process; let’s examine each of them:

A big bang driven by excitement: Root for failure

Given the excitement around the “shiny new object” (the composable IT ecosystem) and the immense value it brings to the table, we have seen significant interest across the board in enterprises. In theory, end-to-end composability happens only when all parts of the enterprise move into that model, and this is where reality must prevail. There will simply be too many takers for this model, and peer pressure adds to this. IT teams embrace it readily, considering the skill transformation they will go through, the shininess of this model and the market relevance they will gain.

This potentially leads to a big-bang approach of starting too many initiatives across the enterprise to embrace the composable IT model. Transformation to a composable/domain-driven IT ecosystem needs to start in a calibrated way, through step-by-step demonstration of value. Enterprises can achieve this by focusing on one value lever, with business functions (and associated IT capabilities) moving into that model and demonstrating value alongside embracing the new operating model. The most difficult part here is choosing the right MVP candidate (and the next set of candidates) and managing the excitement in a sustainable way.

Evolving domain model, org transformation and change

As one can imagine, domain models are central to the entire program and changes to that could have much larger impact to such transformation programs. It is essential to customize an industry-standard domain model to the needs of the enterprise with deeper business involvement.

It is not a bad idea to establish organizational structures around value streams that are higher-order elements above domains and products. While too many sub-organizations based on domain will result in a very complex, challenging executional ecosystem with too many leaders as stakeholders, too few domains will end up losing the purpose and tend to drive monolith thinking.

IT leaders will have to move to a capability- or service-based measurement model — moving away from application-oriented measurements. As the notion of applications is not going away either, it is important to accommodate the shift from an application-oriented model to a composable capabilities and services model. Every product leader should be able to demonstrate their progress and performance through the capabilities and services they have built and deployed (for consumption by applications within or outside domains), including consumption-related performance metrics. There needs to be a funding linkage to this metrics model to balance the needs of each product team based on what each team needs to deliver.

Also, from an organizational-change perspective, it is important to focus on enablement of each layer of the organization — from value-stream leaders to product owners to architects, developers, etc. A systematic enablement program that validates the learning and ensures hands-on, side-by-side learning along with experts is critical to the success of the program. Tracking the learning progress (coverage and depth) is important to ensure individuals are really “ready.”

Change becomes impossible if the IT metrics for each of the leaders are not transformed to reflect the composable IT ecosystem model (e.g., shifting focus from applications to capabilities/capability components (services) owned, deployed and operated at desired SLAs, etc.) and the funding model is not aligned along these lines.

Focus more on value and less on transforming the entire IT ecosystem

First and foremost, modernization initiatives typically consider “value”-driven levers to identify candidate applications and services for modernization. Based on experience, it is important to focus on outcomes and resist the urge to eliminate technical debt completely from the enterprise at one go.

It is important to establish a value vs. effort view of the various applications and services being modernized and look for value streams that benefit from the modernization. It is best to choose value streams that deliver the maximum impact and establish the modernization scope along the various intersecting application capabilities and services. Successful engagements have always focused on the modernization of a set of capabilities impacting one or more important business levers (revenue, customer service, etc.). For example, modernizing 20+ user journeys (and the associated applications and services) at a large UK bank, or modernizing certain IT capabilities impacting crew functions at a European airline.

Control on modernization scope and efforts

Modernization programs should “identify and refine” their processes in alignment with the modernization scope and identify in-scope applications and services. It is easy to get into an “iceberg” kind of situation where modernizing one service will drive the need to modernize the entire dependency tree underneath and it is important to manage the scope of modernization, with a clear focus on value.

It is also important to align the processes to respective domains/products. This is quite a challenging effort because of many technical and non-technical reasons. Every domain/product team would like to own as many capabilities and services as possible. Also, data becomes a significant element of ownership discussion. It’s important to consider data owned and managed by respective processes and ensure alignment across product teams.

The biggest challenge presents itself when it comes to aggregates, where everyone wants to copy and own data simply because their processing needs are very different from those of the data-owning products. There also must be a recognition of the needs of the business when making these decisions, as there are situations where data is needed to perform necessary analysis and that is not necessarily an indication of data ownership. These issues take much longer to resolve, and this is where a reference domain model, including guidance and a decision matrix, becomes important.

Decomposing applications to capabilities for multiple domains and building them needs a significant level of coordination across domains

As we saw, a well-institutionalized domain-driven design (DDD) model (e.g., practices, core team, DDD facilitation skills) and cloud-native services work in tandem to help modernize monolith applications into a composable set of capabilities/capability components (microservices) as owned by appropriate product teams. While it is easy to design such a decomposed view, building the same will need several product owners to align timelines and resolve several design conflicts.

These product teams are expected to be independent, and this is where a significant amount of collaboration and roadmap alignment needs to happen before the application can completely be decomposed/modernized. Since enterprises’ priorities change over time, it becomes challenging for each of the teams to undertake constant reprioritization of activities. One would notice that the speed of modernization is much higher in applications and capabilities (including services) that are well contained within a domain:

When it comes to applications decomposed to capabilities owned by many product teams, execution complexity creeps in (e.g., conflicting priorities, resource challenges, roadmap alignment issues, design challenges to accommodate multiple consumers of capabilities, etc.). One of the ways to address this is by ensuring the development of capabilities end-to-end; squads also build services and other dependent capabilities together (performed by one team with squads represented by domains with regards to SMEs, developers, etc.).

It is also far more pragmatic for different product teams to come together and build several application capabilities and services together and subsequently move them to the appropriate day-2 model for subsequent iterations. This approach introduces minimal disruption across the enterprise and helps address various organizational-readiness challenges (e.g., funding, people/skills, getting the roadmap right, etc.). The biggest impact is the business risk introduced by the day-2 operating model, which requires teams to be significantly reskilled and readied to operate in a composable IT ecosystem.

The figure above provides a way to build capabilities and associated capability components (microservices) in a much more integrated/one-team model and subsequently have them moved to the end-state operating model.

It is extremely important to have a good handle on the backlog of open items/technical debt items coming out of design and implementation activities that could be worked on in subsequent roadmap iterations (and more so for various compromises being made to make progress).

Day-2 support model challenges

In continuation with the challenges imposed by multi-domain applications — where capabilities are deployed and managed by respective product teams — we look at the challenges imposed by the need for a different day-2 operating model. Traditionally, application teams are in full control of the code base and data of the application; in the composable IT ecosystem, this becomes decentralized (with distributed ownership). Teams on the forefront (the frontend of the application) need to understand this model and operate in it accordingly. This is also the case with the various product teams building, deploying and operating capabilities. Now the incident management/ITSM processes need to accommodate distributed squads supporting different capabilities (piece parts) of a given application.

It takes a certain degree of operations-process, tooling and squad maturity and skill levels to operate in that model with the right routing and segregation of incidents. Moving applications into a composable IT ecosystem model without fully readying the support teams, processes, tooling and skill levels could result in a significant risk to the business in terms of supportability of the capabilities. It is best to perform a staggered move to the composable IT ecosystem model, with specific capabilities moved to separate product teams or squads with an adequate maturity period.

Measuring success at intermediate points is key

While the larger success of modernization to a composable IT ecosystem is in the business seeing the results (e.g., rapid innovation, improved customer service, etc.), it’s important to also measure early progress indicators. This could include things like the number of products implementing the model and the number of squads (or product teams) self-sufficient in terms of skill needs (e.g., ability to perform DDD, DevOps readiness, foundation platform readiness, etc.). It can also include incremental capabilities deployed, and at what velocity. One should also keep the backlog of technical debt and design decisions (compromises) at a manageable size, with an inclusive design authority that governs the decisions.


The evolution of cloud has opened a plethora of possibilities for enterprises to exploit, and this makes the composable IT ecosystem a reality. The emergence of various proven practices — such as domain-driven design, DevOps and site reliability engineering — has made full-stack squads a reality, which enables the realization of independent product teams that can build end-to-end capabilities and services without layers of IT getting involved (as we have seen in traditional IT ecosystems).

Enterprises embarking on modernization initiatives to transform their IT ecosystem into a composable model need to recognize the quantum of change and operating model transformation across the enterprise and think through this pragmatically. It is important to establish a roadmap and scope of modernization that is defined by the business levers impacted.

Enterprises need to recognize the fact that clarity on domains and processes will evolve with time, and there needs to be room for changes. While value streams and the lowest unit of such an organization — like products and product teams — are not likely to change that often, intermediate organizational constructs do change significantly.

Initial steps should focus on identifying a smaller subset of products (or domains) to pilot and demonstrate success. Learnings should be fed back to refine the roadmap, plans and operating model. Moving to a composable IT ecosystem is a long journey and measuring success at every intermediate change is key. Too much framework or too little framework could pose significant challenges, ranging from analysis paralysis to chaos. Therefore, a first-pass framework needs to be in place quickly, while focused pilot/MVP initiatives should be run to test and refine the framework. The framework should and will evolve with time and only based on real execution experiences (e.g., process overlaps learned from decomposing applications, domain model refinements based on gaps, etc.).

Check out the following links to learn more:

Balakrishnan Sreenivasan

Distinguished Engineer


How One Busy IBMer Quickly Completed Four Certifications at the IBM Center for Cloud Training

3 min read


Jani Byrne Saliga, Ph.D, IBM Center for Cloud Training

Yair Shaked’s learning journey and how he achieved four cloud certifications.

To plan or not to plan, that is the (common) question. But for IBMer Yair Shaked, combining the two approaches yielded uncommon results — four professional certifications from the IBM Center for Cloud Training (ICCT) in only two months.

“When I look at it, in retrospect,” he says, “it reminds me of a quote from the movie Forrest Gump: ‘I just felt like running.’ In my case, I just felt like learning.”

That part he didn’t plan. But once he dived into certification, the busy Yair took a methodical approach to the process.  

The certification journey begins

It all started when Yair received an email with a subject line that challenged him: “So You Think You Know Cloud?”

Yair did know cloud. He is a Cloud Lead Architect at IBM in Israel, where he helps organizations move their workload to the IBM public cloud for application modernization. This means using the most advanced technologies, tools and skill sets to match his customers’ needs. The email’s challenge looked like an unmissable opportunity for building expertise and opportunities with new clients.

So, he turned to the IBM Center for Cloud Training — the free, role-based cloud training and certification program from IBM.  The center provides learning paths for cloud professionals to quickly and smoothly fill the gap between business needs and individuals’ capabilities.

A plan opens opportunities

With his experience, Yair knew he could pass the introductory-level IBM Center for Cloud Training (ICCT) program. “But,” he says, “then I thought that this was a real opportunity for me to become certified in one of the new and exciting cloud technologies.”

The technology that caught his eye was certification in the IBM Cloud Satellite Specialty.

In Yair’s plan, the first steps were to review training materials from ICCT and to take a sample test to evaluate his knowledge level. Then, he followed the ICCT training sessions on the IBM Cloud Satellite Specialty learning path. Importantly, his plan also included keeping a notebook, which eventually became his go-to journal for all of the learning modules he later would encounter.

When Yair felt he was ready, he took the assessment exam, but his grade was not as high as he wanted — a sign he might struggle on the real exam. So, he reviewed his answers, paying particular attention to those that had given him the most trouble.

A few days later, he took the certification exam and passed with a high score.

His learning never ends

This might have been the end of Yair’s learning journey, but a few days later, he says, “I saw LinkedIn posts from two of my colleagues that announced that they had achieved their IBM Cloud Architect certifications. Their achievements spurred me on to get my IBM Cloud Architect certificate as well.”

Once more, Yair went to the ICCT website, where he started down the learning path for his IBM Cloud Professional Architect certification. He used the same methodology as he used for his Cloud Satellite certification — a combination of pretesting, learning and keeping detailed notes in his notebook.

It didn’t take long for Yair to pass the IBM Cloud Professional Architect exam with a high score and share his success on LinkedIn, where as he puts it: “The training cycle never ends.”

Next up were certifications for IBM Cloud Security Engineer and IBM Cloud Advanced Architect.

Following his proven training methodology, Yair soon passed the Advanced Architect exam and let the world know on LinkedIn. There he restated his commitment to “…expanding my knowledge and expertise with IBM Cloud technology and my commitment to my clients to become their technical cloud advocate.”

And just like that, in only two months, Yair had completed four certifications.

But he still feels like learning.

Now that he has completed four Cloud certifications, Yair has a new goal: completing his IBM Cloud Professional Developer certification — because, he says, “The way that I see it, as a cloud architect, you must be familiar with a developer’s perspective and challenges in order to better support these developers, who are among the most important customers in the cloud environment.”

Get started on your learning path

Visit the IBM Center for Cloud Training to learn more about how you can achieve your next cloud certification. And be sure to check out the new Cloud Prep Web App for flashcards, detailed study guides and practice quizzes to help you on your cloud certification journey.  

Jani Byrne Saliga, Ph.D

IBM Center for Cloud Training


Kubernetes vs. Docker: Why Not Both?

5 min read


IBM Cloud Team, IBM Cloud

Is Kubernetes or Docker the better choice (or is it really even a choice at all)?

When it comes to container technologies, two names emerge as open-source leaders: Kubernetes and Docker. And while they are fundamentally different technologies that assist users with container management, they are complementary to one another and can be powerful when combined. In this regard, choosing to use Kubernetes or Docker isn’t a matter of deciding which option is better; in reality, they’re not in competition with one another and can actually be used in tandem. So, to the question of whether Kubernetes or Docker is the better choice, the answer is neither.

The fact that Kubernetes and Docker are complementary container technologies clears up another frequent question: Is Kubernetes replacing Docker?

In short, no. Since Kubernetes isn’t a competing technology, this question likely derives from the news that broke in 2021 that Kubernetes would no longer be supporting Docker as a container runtime option (i.e., a container component that communicates with the operating system (OS) kernel throughout the containerization process). However, Kubernetes and Docker are still compatible and provide clear benefits when used together, as we’ll explore in greater detail later in this post. First, it’s important to start with the foundational technology that ties Kubernetes and Docker together — containers.

What is a container?

A container is an executable unit of software that packages application code with its dependencies, enabling it to run on any IT infrastructure. A container stands alone; it is abstracted away from the host OS — usually Linux — which makes it portable across IT environments.

One way to understand the concept of a container is to compare it to a virtual machine (VM). Both are based on virtualization technologies, but while a container virtualizes an OS, a VM leverages a hypervisor — a lightweight software layer between the VM and a computer’s hardware — to virtualize physical hardware. 

With traditional virtualization, each VM contains a full copy of a guest operating system (OS), a virtual copy of the hardware needed to run the OS and an application (and its associated libraries and dependencies). A container, on the other hand, includes only an application and its libraries and dependencies. The absence of a guest OS significantly reduces the size of a container, making it lightweight, fast and portable. Additionally, a container automatically uses the DNS settings of the host.

For a full rundown on the differences between containers and VMs, see "Containers vs. VMs: What’s the difference?"

Engineers can use containers to quickly develop applications that run consistently across a large number of distributed systems and cross-platform environments. The portability of containers eliminates many of the conflicts that come from differences in tools and software between functional teams. 

This makes them particularly well-suited for DevOps workflows, easing the way for developers and IT operations to work together across environments. Small and lightweight, containers are also ideal for microservices architectures, in which applications are made up of loosely coupled, smaller services. And containerization is often the first step in modernizing on-premises applications and integrating them with cloud services:

What is Docker?

Docker is an open-source containerization platform. Basically, it’s a toolkit that makes it easier, safer and faster for developers to build, deploy and manage containers. Under the hood, Docker delegates the low-level work of running containers to a container runtime called containerd.

Although it began as an open-source project, Docker today also refers to Docker, Inc., the company that produces the commercial Docker product. Currently, it is the most popular tool for creating containers, whether developers use Windows, Linux or MacOS.

In fact, container technologies were available for decades prior to Docker’s release in 2013. In the early days, Linux Containers (or LXC) were the most prevalent of these. Docker was built on LXC, but Docker’s customized technology quickly overtook LXC to become the most popular containerization platform. 

Among Docker’s key attributes is its portability. Docker containers can run across any desktop, data center or cloud environment. Each container is typically designed to run a single process, so one part of an application can be updated or repaired while the rest continues running.

Some of the tools and terminology commonly used with Docker include the following:

  • Docker Engine: The runtime environment that allows developers to build and run containers.
  • Dockerfile: A simple text file that defines everything needed to build a Docker container image, such as OS network specifications and file locations. It’s essentially a list of commands that Docker Engine will run to assemble the image.
  • Docker Compose: A tool for defining and running multi-container applications. It creates a YAML file to specify which services are included in the application and can deploy and run containers with a single command via the Docker CLI.
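To make the Dockerfile concept concrete, here is a minimal, hypothetical example for a Node.js web app — the base image, port and file names are illustrative, not prescriptive:

```dockerfile
# Start from an official Node.js base image (illustrative version tag)
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests first so Docker can cache this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and document the listening port
COPY . .
EXPOSE 8080

# Command Docker Engine runs when the container starts
CMD ["node", "server.js"]
```

Each instruction produces an image layer, which is why dependency installation is placed before copying the rest of the source: unchanged layers are reused on subsequent builds.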

Now let’s revisit why Kubernetes stopped supporting Docker as a container runtime. As noted at the top of this section, Docker is a containerization platform, not a container runtime itself. It sits on top of an underlying container runtime and provides users with features and tools via a user interface. To support Docker as a runtime, Kubernetes had to build and maintain a compatibility layer known as dockershim, which essentially sat between the two technologies and helped them communicate.

This was done during a time when there weren’t many container runtimes available. However, now that there are — CRI-O being one example — Kubernetes can provide users plenty of container runtime options, many of which use the standard Container Runtime Interface (CRI), a way for Kubernetes and the container runtime to communicate reliably without a middle layer acting as the go-between.

However, even though Kubernetes no longer provides special support for Docker as a runtime, it can still run and manage containers built in the Open Container Initiative (OCI) image format, a standard derived from Docker’s own image format, so you can continue to use Dockerfiles and build Docker images. In other words, Docker still has a lot to offer in the Kubernetes ecosystem.

What are the advantages of Docker?

The Docker containerization platform delivers all of the previously mentioned benefits of containers, including the following:

  • Lightweight portability: Containerized applications can move from one environment to another (wherever Docker is operating), and they will operate regardless of the OS.
  • Agile application development: Containerization makes it easier to adopt CI/CD processes and take advantage of agile methodologies, such as DevOps. For example, containerized apps can be tested in one environment and deployed to another in response to fast-changing business demands.
  • Scalability: Docker containers can be created quickly and multiple containers can be managed efficiently and simultaneously.

Other Docker API features include the ability to automatically track and roll back container images, use existing containers as base images for building new containers and build containers based on application source code. Docker is backed by a vibrant developer community that shares thousands of containers across the internet via the Docker Hub.

But while Docker does well with smaller applications, large enterprise applications can involve a huge number of containers — sometimes hundreds or even thousands — which becomes overwhelming for IT teams tasked with managing them. That’s where container orchestration comes in. Docker has its own orchestration tool, Docker Swarm, but by far the most popular and robust option is Kubernetes.

See "Docker Swarm vs. Kubernetes: A Comparison" for a closer look at the Kubernetes vs. Docker Swarm debate.

Docker has several commands used in the creation and running of containers:

  • docker build: This command builds a new Docker image from the source code (i.e., from a Dockerfile and the necessary files).
  • docker create: This command creates a new container from an image without starting it, which involves creating a writeable container layer over the image and preparing it to run.
  • docker run: This command works like the docker create command, except that it also starts the container after creating it.
  • docker exec: This command is used to execute a new command inside a container that is already running.

What is Kubernetes?

Kubernetes is an open-source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications. Containers operate in a multi-container architecture called a “cluster.” A Kubernetes cluster includes a node designated as the control plane, which schedules workloads for the rest of the nodes — the worker nodes — in the cluster.

The control plane determines where to host applications (or Docker containers), decides how to put them together and manages their orchestration. By grouping containers that make up an application into clusters, Kubernetes facilitates service discovery and enables management of high volumes of containers throughout their lifecycles. 

Google introduced Kubernetes as an open source project in 2014. Now, it’s managed by an open source software foundation called the Cloud Native Computing Foundation (CNCF). Designed for container orchestration in production environments, Kubernetes is popular due in part to its robust functionality, an active open-source community with thousands of contributors and support and portability across leading public cloud providers (e.g., IBM Cloud, Google, Azure and AWS).

What are the advantages of Kubernetes?

  • Automated deployment: Kubernetes schedules and automates container deployment across multiple compute nodes, which can be VMs or bare-metal servers. 
  • Service discovery and load balancing: It exposes a container on the internet and employs load balancing when traffic spikes occur to maintain stability.
  • Auto-scaling features: Automatically starts up new containers to handle heavy loads, whether based on CPU usage, memory thresholds or custom metrics.
  • Self-healing capabilities: Kubernetes restarts, replaces or reschedules containers when they fail or when nodes die, and it kills containers that don’t respond to user-defined health checks.
  • Automated rollouts and rollbacks: It rolls out application changes and monitors application health for any issues, rolling back changes if something goes wrong.
  • Storage orchestration: Automatically mounts a persistent local or cloud storage system of choice as needed to reduce latency — and improve user experience.
  • Dynamic volume provisioning: Allows cluster administrators to create storage volumes without having to manually make calls to their storage providers or create objects.
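Several of these features can be seen in a minimal Kubernetes Deployment manifest — a hedged sketch only, with illustrative names, image and values rather than any particular production setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical application name
spec:
  replicas: 3                  # Kubernetes keeps three copies running (self-healing)
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate        # automated rollouts; a failed change can be rolled back
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: registry.example.com/web-app:1.0   # illustrative image reference
        ports:
        - containerPort: 8080
        livenessProbe:         # failed health checks cause Kubernetes to restart the container
          httpGet:
            path: /healthz
            port: 8080
```

Declaring the desired state (three replicas, a health check, a rollout strategy) is the key design point: Kubernetes continuously reconciles the cluster toward that state rather than executing one-off commands.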

For more information, see our video “Kubernetes Explained”: 

Kubernetes and Docker: Finding your best container solution

Although Kubernetes and Docker are distinct technologies, they are highly complementary and make a powerful combination. Docker provides the containerization piece, enabling developers to easily package applications into small, isolated containers via the command line. Developers can then run those applications across their IT environment, without having to worry about compatibility issues. If an application runs on a single node during testing, it will run anywhere.

When demand surges, Kubernetes provides orchestration of Docker containers, scheduling and automatically deploying them across IT environments to ensure high availability. In addition to running containers, Kubernetes provides the benefits of load balancing, self-healing and automated rollouts and rollbacks. Plus, it has a graphical user interface for ease of use.

For companies that anticipate scaling their infrastructure in the future, it might make sense to use Kubernetes from the very start. And for those already using Docker, Kubernetes makes use of existing containers and workloads while taking on the complex issues involved in moving to scale. For more information, watch “Kubernetes vs. Docker: It’s Not an Either/Or Question”: 

Integration to better automate and manage applications

Later versions of Docker have built-in integration with Kubernetes. This feature enables development teams to more effectively automate and manage all the containerized applications that Docker helped them build.

In the end, it’s a question of what combination of tools your team needs to accomplish its business goals. Check out how to get started with these Kubernetes tutorials and explore the IBM Cloud Kubernetes Service to learn more.

Earn a badge through free browser-based Kubernetes tutorials with IBM CloudLabs.

IBM Cloud Team

IBM Cloud


SQL vs. NoSQL Databases: What's the Difference?

7 min read


Benjamin Anderson, STSM, IBM Cloud Databases
Brad Nicholson, Senior Database Engineer, IBM Cloud Databases

Explore key differences between SQL and NoSQL databases and learn which type of database is best for various use cases.

SQL is a decades-old method for accessing relational databases, and most who work with databases are familiar with it. As unstructured data, storage and processing capacity, and types of analytics have changed over the years, however, we’ve seen different database technologies become available that are a better fit for newer types of use cases. These databases are commonly called NoSQL.

SQL and NoSQL differ in whether they are relational (SQL) or non-relational (NoSQL), whether their schemas are predefined or dynamic, how they scale, the type of data they include and whether they are more fit for multi-row transactions or unstructured data.

What is a SQL database?

SQL, which stands for “Structured Query Language,” is the programming language that’s been widely used in managing data in relational database management systems (RDBMS) since the 1970s. In the early years, when storage was expensive, SQL databases focused on reducing data duplication.

Fast-forward to today, and SQL is still widely used for querying relational databases, where data is stored in rows and tables that are linked in various ways. One table record may link to one other or to many others, or many table records may be related to many records in another table. These relational databases, which offer fast data storage and recovery, can handle great amounts of data and complex SQL queries.

What is a NoSQL database?

NoSQL is a non-relational database, meaning it allows different structures than a SQL database (not rows and columns) and more flexibility to use a format that best fits the data. The term “NoSQL” was not coined until the early 2000s. It doesn’t mean the systems don’t use SQL, as NoSQL databases do sometimes support some SQL commands. More accurately, “NoSQL” is sometimes defined as “not only SQL.”

To lay the groundwork, see the following video from Jamil Spain:

How SQL works

SQL databases are valuable in handling structured data, or data that has relationships between its variables and entities.


In general, SQL databases can scale vertically, meaning you can increase the load on a server by migrating to a larger server that adds more CPU, RAM or SSD capability. While vertical scalability is used most frequently, SQL databases can also scale horizontally through sharding or partitioning logic, although that’s not well-supported.


A SQL database schema organizes data in relational, tabular ways, using tables with columns (or attributes) and rows of records. Because SQL works with such a strictly predefined schema, it requires you to organize and structure data before loading it into the database.
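As an illustrative sketch — using Python's built-in sqlite3 module rather than any particular production RDBMS, with made-up table and column names — a predefined schema and a join across related tables look like this:

```python
import sqlite3

# In-memory database for demonstration; a server-based RDBMS works the same way
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The schema must be defined up front: tables, columns and their types
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (
                 id INTEGER PRIMARY KEY,
                 customer_id INTEGER REFERENCES customers(id),
                 total REAL)""")

cur.execute("INSERT INTO customers VALUES (1, 'Ada')")
cur.execute("INSERT INTO orders VALUES (10, 1, 25.50), (11, 1, 14.00)")

# Relationships between tables are expressed with joins
cur.execute("""SELECT c.name, SUM(o.total)
               FROM customers c JOIN orders o ON o.customer_id = c.id
               GROUP BY c.name""")
print(cur.fetchall())  # [('Ada', 39.5)]
```

The one-customer-to-many-orders link lives in the `customer_id` column, so each fact is stored once and recombined at query time.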


RDBMS, which use SQL, must exhibit four properties, known by the acronym ACID. These ensure that transactions are processed successfully and that the SQL database has a high level of reliability:

  • Atomicity: All transactions must succeed or fail completely and cannot be left partially complete, even in the case of system failure.
  • Consistency: The database must follow rules that validate and prevent corruption at every step.
  • Isolation: Concurrent transactions cannot affect each other.
  • Durability: Transactions are final, and even system failure cannot “roll back” a complete transaction.
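Atomicity in particular can be sketched with Python's built-in sqlite3 module (the table, constraint and balances are illustrative): if any statement in a transaction fails, the whole transaction rolls back and no partial change survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")
conn.commit()

# Transfer more than Alice has: the second UPDATE violates the CHECK
# constraint, so the transaction rolls back and neither balance changes.
try:
    with conn:  # the context manager commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100.0, 'bob': 50.0}
```

Even though Bob's credit succeeded on its own, it is undone along with the failed debit — the transfer happens completely or not at all.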

Because SQL databases have a long history, they have huge communities and many examples of stable codebases online. There are many experts available to support SQL and programming relational data.

Examples of SQL databases

How NoSQL works

Unlike SQL, NoSQL systems allow you to work with different data structures within a database. Because they allow a dynamic schema for unstructured data, there’s less need to pre-plan and pre-organize data, and it’s easier to make modifications. NoSQL databases allow you to add new attributes and fields, as well as use varied syntax across databases.


NoSQL databases scale better horizontally, which means one can add additional servers or nodes as needed to increase load.
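The core idea behind horizontal scaling — routing each key to one of several nodes by hashing — can be sketched in a few lines of Python. The node names and hash scheme are purely illustrative; real systems use consistent hashing plus replication so that adding or removing nodes moves as little data as possible:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical servers

def shard_for(key: str) -> str:
    """Route a key to a node; adding nodes spreads the load further."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each record lands on exactly one node, and the same key always routes
# to the same node, so reads find what writes stored.
placement = {user: shard_for(user) for user in ["u1", "u2", "u3", "u4"]}
print(placement)
```

Because the routing is deterministic, no central lookup table is needed — any client can compute where a key lives.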


NoSQL databases are not relational, so they don’t solely store data in rows and tables. Instead, they generally fall into one of four types of structures:

  • Column-oriented, where data is stored in cells grouped in a virtually unlimited number of columns rather than rows.
  • Key-value stores, which use an associative array (also known as a dictionary or map) as their data model. This model represents data as a collection of key-value pairs.
  • Document stores, which use documents to hold and encode data in standard formats, including XML, YAML, JSON (JavaScript Object Notation) and BSON. A benefit is that documents within a single database can have different data types.
  • Graph databases, which represent data on a graph that shows how different sets of data relate to each other. Neo4j, RedisGraph (a graph module built into Redis) and OrientDB are examples of graph databases.
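The flexibility of the document model can be sketched in plain Python standing in for a document store — no particular database's API is implied, and the documents are invented: records in the same collection can carry different fields.

```python
import json

# Two documents in the same "collection" with different shapes --
# no predefined schema forces them to share columns.
products = [
    {"_id": 1, "name": "keyboard", "price": 49.0, "switches": "brown"},
    {"_id": 2, "name": "ebook", "price": 9.0, "formats": ["epub", "pdf"]},
]

# Documents encode naturally to standard formats such as JSON
encoded = [json.dumps(doc) for doc in products]

# Queries simply inspect whatever fields a document happens to have
cheap = [d["name"] for d in products if d["price"] < 20]
print(cheap)  # ['ebook']
```

Adding a new attribute means adding it to one document; no other document, and no schema, has to change.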

While SQL calls for ACID properties, NoSQL follows the CAP theorem (although some NoSQL databases — such as MongoDB, AWS’s DynamoDB and Apache’s CouchDB — can also support ACID transactions).

The CAP theorem says that distributed data systems allow a trade-off that can guarantee only two of the following three properties (which form the acronym CAP) at any one time:

  • Consistency: Every request receives either the most recent result or an error. MongoDB is an example of a strongly consistent system, whereas others such as Cassandra offer eventual consistency.
  • Availability: Every request has a non-error result.
  • Partition tolerance: Any delays or losses between nodes do not interrupt the system operation.

While NoSQL has been adopted quickly, it has smaller user communities and, therefore, less support. NoSQL users do benefit from open-source systems, as opposed to the many SQL databases that are proprietary.

Examples of NoSQL databases

When to use SQL vs. NoSQL

When to use SQL

SQL is a good choice when working with related data. Relational databases are efficient, flexible and easily accessed by any application. A benefit of a relational database is that when one user updates a specific record, every instance of the database automatically refreshes, and that information is provided in real-time.

SQL and a relational database make it easy to handle a great deal of information, scale as necessary and allow flexible access to data — only needing to update data once instead of changing multiple files, for instance. It’s also best for ensuring data integrity. Since each piece of information is stored in a single place, there’s no problem with former versions confusing the picture.

Most of the big tech companies use SQL, including Uber, Netflix and Airbnb. Even major companies like Google, Facebook and Amazon, which build their own database systems, use SQL to query and analyze data.

When to use NoSQL

While SQL is valued for ensuring data validity, NoSQL is a good choice when fast access to big data matters more. It’s also a good choice when a company will need to scale because of changing requirements. NoSQL is easy to use, flexible and offers high performance.

NoSQL is also a good choice when there are large amounts of (or ever-changing) data sets or when working with flexible data models or needs that don't fit into a relational model. When working with large amounts of unstructured data, document databases (e.g., CouchDB, MongoDB, and Amazon DocumentDB) are a good fit. For quick access to a key-value store without strong integrity guarantees, Redis may be the best choice. When a complex or flexible search across a lot of data is needed, Elasticsearch is a good choice.

Scalability is a significant benefit of NoSQL databases. Unlike SQL databases, they are built for sharding and high availability, which allows horizontal scaling. Furthermore, NoSQL databases like Cassandra, originally developed at Facebook, handle massive amounts of data spread across many servers, with no single point of failure and maximum availability.

Other big companies that use NoSQL systems because they are dependent on large volumes of data not suited to a relational database include Amazon, Google and Netflix. In general, the more extensive the dataset, the more likely that NoSQL is a better choice.


Selecting or suggesting a database is a key responsibility for most database experts, and “SQL vs. NoSQL” is a helpful rubric for informed decision-making. When considering either database, it is also important to consider critical data needs and acceptable tradeoffs conducive to meeting performance and uptime goals.

IBM Cloud supports cloud-hosted versions of several SQL and NoSQL databases with its cloud-native databases. For more guidance on selecting the best option for you, check out "A Brief Overview of the Database Landscape" and "How to Choose a Database on IBM Cloud."

Interested in going more in-depth with individual databases? Check out our “Database Deep Dives” series of blog posts.

Benjamin Anderson

STSM, IBM Cloud Databases

Brad Nicholson

Senior Database Engineer, IBM Cloud Databases


Distributed Cloud: Empowerment at the Edge

6 min read


Ashok Iyengar, Executive Cloud Architect
Ashoka Rao, Cloud Solution Leader, STSM

What is distributed cloud, and how does it facilitate edge computing?

Earlier this year, we added distributed cloud as another cloud deployment option on our cloud architecture heatmap slide. When it all started, we had two cloud deployment models: public and private. Then there were three: public, private and hybrid. Now we have five cloud deployment models: public, private, hybrid, multicloud and distributed (see Figure 1 below):

Figure 1. Cloud deployment models.

Given that it is a new cloud deployment model, what exactly is distributed cloud? Is it yet another combination of public and private? What makes this cloud deployment model so different? And how does it relate to edge computing?

This blog post will address these and other related questions by looking at specific use cases.

Please make sure to check out all the installments in this series of blog posts on edge computing:

What is distributed cloud?

In a distributed cloud, public cloud services are made available to consumers at different physical locations outside of the cloud provider’s facilities, also known as satellite locations. The public cloud provider is responsible for the operation, governance and updates of the services. Distributed cloud computing extends the range of cloud use cases all the way to the edge. Thus, an enterprise using distributed cloud computing can store and process its data in different data centers that may be physically located in different remote locations.

To paraphrase from an IBM paper on distributed cloud, it is public cloud computing that lets you run public cloud infrastructure in multiple different locations — not only on your cloud provider's infrastructure, but on-premises, in other cloud providers’ data centers and in third-party data centers or colocation centers — and manage everything with a single control plane.

The promise of managing everything from a single control plane is what makes distributed cloud compelling. Remote locations, a single control plane and a secure network tunnel are the key components in a distributed cloud offering.

Distributed cloud empowering the edge

In this blog series, we have discussed nuances of edge computing. In a previous blog called “Cloud Services at the Edge,” we alluded to distributed cloud or a “cloud-out” notion wherein the computing resources of the data center and cloud services are now available at the edge. That allows for processing and analysis of data at the source where the data is generated. Distributed cloud lends credence to that “cloud-out” thinking by bringing those remote locations into focus.

Distributed cloud empowers edge computing by bringing the power to process large amounts of data close to the physical data sources, while maintaining data security and compliance. Edge computing can be — and has been — implemented without a distributed cloud architecture, but distributed cloud makes edge application deployment and management a lot easier, especially when we deal with the telco edge or the enterprise edge.

Distributed cloud use cases

It is easy to envision many use cases, but we highlight two in this section that we think are rather unique given the complementary nature of distributed cloud and edge computing. Figure 2 highlights the areas that befit this paradigm.

We will use the IBM Cloud Satellite product to showcase these solutions. The remote locations we alluded to — Satellite locations — can be managed from a common control plane. IBM Cloud Satellite does this by attaching remote infrastructure in the Satellite location to the IBM Cloud Satellite control plane. The Satellite location can then be used by IBM Cloud services as deployment targets. Remember, a Satellite location can even be an environment hosted on another hyperscaler like AWS, Azure or Google Cloud:

Figure 2. Distributed cloud usage patterns.

Meeting industry or local data residency requirements

Many industries and countries have data residency (also known as data sovereignty) regulations specifying that a user's personal information (PI) cannot leave the user's region or country. Distributed cloud infrastructure makes it easier for an organization to process PI in the user's country of residence, which is especially useful in regulated industries like healthcare and in many European countries.

Figure 3 depicts one such distributed cloud architecture involving a hypothetical healthcare provider. The healthcare provider has multiple clinics, hospitals and diagnostic centers. Patient data confidentiality and data residency regulations dictate that the data cannot be moved out of the provider’s individual locations. The data is accessed by medical staff (doctors, nurses, research teams and diagnostic teams) and by the patients themselves.

Patients can visit any facility operated by the healthcare provider. This introduces the need for a fast and secure communication channel for the doctors and research partners to allow them to provide timely medical care and up-to-date information to facility staff and patients.

The location staff expect a secure solution that keeps control of the data within their private cloud while providing cloud-native capabilities locally. The solution must comply with data security and local data privacy requirements, and it needs to be operated by the provider’s staff within their own data centers.

The distributed cloud instances use a Red Hat OpenShift-based Kubernetes platform, and a secure cloud-native messaging application based on event-driven architecture provides the low-latency messaging platform:

Figure 3. IBM Cloud Satellite deployment in a healthcare scenario.

Solution highlights

  • The health provider designates a hospital data center as the on-premises Satellite location.
  • Telemetry data from various devices monitoring the patient is fed to the Internet of Things (IoT)/edge application running at the Satellite location.
  • Data is analyzed at the Satellite location in close proximity to patient and medical staff.
  • Hospital medical and patient records are stored on-site and cannot be moved out.
  • Hospital staff leverage a fully managed OpenShift platform’s capabilities, including end-to-end encryption.
  • The health provider has plans to add another hospital as the next Satellite location and will use the same public cloud services.
Multi-access edge computing (MEC)

Telecoms or communications service providers (CSPs) have been looking at ways to monetize 5G technology by providing better multi-access edge solutions. A distributed cloud topology enables this offering by allowing CSPs to offer their customers single tenancy in the Satellite locations and bring computing power and MEC services to those premises. This, in turn, helps address data security concerns while delivering the edge applications that demand low latency. In summary, the telco operator uses its 5G network and a distributed cloud to provide MEC services.

Consider the scenario where a CSP is offering its services to an auto manufacturer that has multiple plants in a 100-mile radius. Parts and other assembly-line data is shared between those plants. The CSP can provide a distributed cloud solution where MEC services are provided at each plant, acting as a Satellite location and providing a secure and consistent view of all the relevant manufacturing data to the apps running in those plants. These could be visual inspection apps that inspect painting or welding of parts or analytical apps that provide analysis in real time or near-real time with the help of 5G technology.

Another use case along those lines is the CSP servicing two different auto manufacturers in an industrial city. A distributed cloud solution using IBM Cloud Satellite, shown in Figure 4, allows the CSP to offer its customers single tenancy in remote locations. Each customer can run workloads where they want with complete observability, and all their data is totally secure and isolated:

Figure 4. MEC solution using IBM Cloud Satellite.

Solution highlights

  • Deploy edge applications at multiple Satellite locations.
  • CSPs can monetize existing networks at edge locations and offer new services based on 5G.
  • Leverage fully managed platform capabilities, easing the burden on telecom providers.
  • Customers get identity and key management services.
  • With a single pane of glass, customers also get observability that includes central logging and monitoring for apps and the platform.
Wrap up

Distributed cloud enables public cloud providers to offer an entire set of services wherever a customer might need it — on-premises in the customer's own data center, in a private cloud or off-premises in one or more public cloud data centers that may or may not belong to the cloud provider. 

The use cases we described show how distributed cloud and edge computing complement each other. Among other benefits, solutions using these combined technologies provide low-latency access to on-premises systems, local data processing and even local data storage, which is especially useful for running AI workloads at the edge.

Do let us know what you think. Special thanks to Joe Pearson and Gerald Coon for reviewing the article.

Please make sure to check out all the installments in this series of blog posts on edge computing.

Ashok Iyengar

Executive Cloud Architect

Ashoka Rao

Cloud Solution Leader, STSM


Migrating from VMware NSX-V to NSX-T

3 min read


Bryan Buckland, Senior Technical Staff Member
Neil Taylor, Senior Cloud Solution Architect
Sami Kuronen, Senior Cloud Solution Architect

With the end of life of VMware vSphere 6.5/6.7 and NSX-V fast approaching, customers need to prepare and act on their migrations.

Enterprises need to plan a seamless upgrade to VMware vSphere 7.x and VMware NSX-T and take advantage of the latest security, automation, network scalability and more.

IBM Cloud can help businesses upgrade to the latest vSphere and NSX-T. Before you start migrating your workloads, here are a few things to consider:

  1. Understand your workloads: Your existing NSX-V-based instance hosts your current workloads and NSX-V network configurations. For a smooth transition, it is key to understand the environment, the workloads deployed on it and the NSX-V and underlay network configuration. 
  2. Analyse your capacity needs: Before you deploy a new NSX-T-based vCenter Server target, thoroughly estimate your capacity needs. Optimize and size your new hosts and clusters by using the latest hardware options available in IBM Cloud. If you’re not sure, you can check the latest options in the IBM Cloud for VMware Solutions console or contact your local sales representatives.
  3. Don’t forget your networking needs: You can deploy the new vCenter Server with NSX-T instance in new VLANs and a new pod, or you can use existing VLANs. 25GE NICs might not be available on the same pod. Also, you cannot move the subnets between VLANs if, for example, you need to reuse the public IP addresses. Analyse your networking needs and then decide.
  4. Network configurations from NSX-V to NSX-T: As the NSX-T architecture is different, you have the opportunity to design and implement the overlay network and its configurations based on NSX-T best practices. NSX-T offers scripting capabilities, or you can use Terraform to define your topology. Alternatively, you can use the NSX-T Migration Coordinator tool or third-party tools to migrate your existing network configurations, firewall and load-balancing rules.
  5. Layer 2 (L2) Network Extension with Bridging or HCX: L2 extension is typically used between NSX-V logical switches and NSX-T overlay segments. You can use either NSX-T Bridging or HCX Network Extension here. HCX provides tools for bulk and vMotion migrations, too.
  6. Choose your best option to migrate workloads: After your network configurations are migrated or prepared for migration, you can start to migrate your workloads between the environments. Here, you have a choice of various methods. With HCX, you have a single tool to do both L2 extension and migration. You can also use Advanced vMotion and storage vMotion between the environments. In addition, you can use services and tools from Zerto or Veeam.
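As a sketch of the scripted approach mentioned in point 4, an NSX-T overlay segment could be defined in Terraform roughly as follows. This assumes the community vmware/nsxt provider; the manager address, transport zone name and CIDR are placeholders, so adjust them to your environment:

```hcl
terraform {
  required_providers {
    nsxt = {
      source = "vmware/nsxt"
    }
  }
}

provider "nsxt" {
  host     = "nsxt-manager.example.com" # hypothetical NSX-T Manager address
  username = var.nsx_username
  password = var.nsx_password
}

# Look up an existing overlay transport zone by name
data "nsxt_policy_transport_zone" "overlay" {
  display_name = "overlay-tz"
}

# An overlay segment for migrated workloads
resource "nsxt_policy_segment" "app_segment" {
  display_name        = "app-segment"
  transport_zone_path = data.nsxt_policy_transport_zone.overlay.path

  subnet {
    cidr = "172.16.10.1/24"
  }
}
```

Defining the topology as code this way makes it repeatable across migration waves and easy to review before cutover.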
The way forward

On IBM Cloud, the VMware NSX-V to VMware NSX-T migration is done by following the VMware® lift-and-shift migration model. In this approach, the IBM Cloud automation is used to deploy a new vCenter Server instance on the same or different VLANs. With this action, you can perform both NSX-V to NSX-T migration and workload migration:

This diagram demonstrates the lift-and-shift architecture utilizing the VMware Solutions automated deployment. The lift-and-shift migration approach enables you to do the following:

  • Plan the migration flexibly based on your workload requirements.
  • Adopt a modular migration approach; for example, partial subnet evacuation.
  • Configure a new network topology in the NSX-T environment.
  • Fail back a migration wave, as the existing NSX-V environment is still running.
  • Logically extend networks between both environments for smooth network migration.
  • Migrate the workloads gradually from NSX-V to NSX-T.
Learn more

IBM is committed to supporting its customers and making this migration as seamless as possible.

Check out the NSX V2T Migration and FAQ documentation.

As an alternative to the automated deployment, you can utilize the VPC bare metal capabilities for the migration target by following the VPC roll-your-own solution tutorial.


Bryan Buckland

Senior Technical Staff Member

Neil Taylor

Senior Cloud Solution Architect

Sami Kuronen

Senior Cloud Solution Architect


Updated Tutorial: VPC/VPN Gateway

1 min read


Powell Quiring, Offering Manager

How to use a VPC/VPN gateway for secure and private on-premises access to cloud resources.

Site-to-site VPN is a communication link used to extend your on-premises network to the cloud. The updated Use a VPC/VPN gateway for secure and private on-premises access to cloud resources solution tutorial captures the steps to create a simulated environment.

The tutorial has been enhanced to use IBM Cloud DNS Services and updated to leverage the IBM Cloud Schematics service for provisioning. DNS is required to access resources like PostgreSQL and IBM Cloud Object Storage through the VPN and virtual private endpoint gateways.

The architecture is captured as Terraform files. A few clicks in IBM Cloud Schematics triggers the creation of the resources. After your testing is complete, click to destroy. Use the simulated on-premises environment to explore VPN connection parameters, verify a workload or test connectivity to cloud services. Later, you can connect your actual on-premises environment to the IBM Cloud.

Get started

Are you ready to get started? The resources in the diagram above can be deployed in a few minutes and then deleted when you are done. Here are the relevant resources.

If you have feedback, suggestions, or questions about this post, please reach out to me.

Powell Quiring

Offering Manager


Docker Swarm vs. Kubernetes: A Comparison

5 min read


Chris Rosen, Program Director, Offering Management

Docker Swarm vs. Kubernetes: Which of these container orchestration tools is right for you?

Workload orchestration is vital in our modern world, where automating the management of application microservices is more important than ever. But there's strong debate on whether Docker Swarm or Kubernetes is a better choice for this orchestration. Let’s take a moment to explore the similarities and differences between Docker Swarm and Kubernetes and see how to choose the right fit for your environment.

What are containers?

In a nutshell, containers are a standard way to package apps and all their dependencies so that you can seamlessly move the apps between runtime environments. By packaging an app’s code, dependencies and configurations into one easy-to-use building block, containers let you take important steps toward shortening deployment time and improving application reliability.

In enterprise applications, the number of containers can quickly grow to an unmanageable number. To use your containers most effectively, you'll need to orchestrate your containerized applications, which is where Kubernetes and Docker Swarm come in.

What is Kubernetes?

Kubernetes is a portable, open-source platform for managing containers, their complex production workloads and scalability. With Kubernetes, developers and DevOps teams can schedule, deploy, manage and discover highly available apps by using the flexibility of clusters. A Kubernetes cluster is made up of compute hosts called worker nodes. These worker nodes are managed by a Kubernetes master that controls and monitors all resources in the cluster. A node can be a virtual machine (VM) or a physical, bare metal machine.

In the early days of Kubernetes, the community contributors leveraged their knowledge of creating and running internal tools, such as Borg and Omega, two cluster management systems. With the advent of the Cloud Native Computing Foundation (CNCF) in partnership with the Linux Foundation, the community adopted Open Governance for Kubernetes, a set of rules for Kubernetes clusters that help teams operate at scale. IBM, as a founding member of CNCF, actively contributes to CNCF’s cloud-native projects, along with other companies like Google, Red Hat, Microsoft and Amazon.

Advantages of Kubernetes
  • Kubernetes offers a wide range of key functionalities, including service discovery, ingress and load balancing, self-healing, storage orchestration, horizontal scalability, automated rollouts/rollbacks and batch execution.
  • It has a unified set of APIs and strong guarantees about the cluster state.
  • It’s an open-source community that’s very active in developing the code base.
  • Fast-growing KubeCon conferences throughout the year offer user insights.
  • Kubernetes has the largest adoption in the market.
  • It’s battle-tested by big players like Google and our own IBM workloads, and it runs on most operating systems.
  • It’s available on the public cloud or for on-premises use, and it has managed or non-managed offerings from all the big cloud providers (e.g., IBM Cloud, AWS, Microsoft Azure, Google Cloud Platform, etc.).
  • There’s broad Kubernetes support from an ecosystem of cloud tool vendors, such as Sysdig, LogDNA, and Portworx (among many others).
Kubernetes challenges
  • It has a steep learning curve and management of the Kubernetes master takes specialized knowledge.
  • Updates from the open-source community happen frequently and require careful patching to avoid disrupting workloads.
  • It’s too heavyweight for individual developers to set up for simple apps and infrequent deployments.
  • Teams often need additional tools (e.g., the kubectl CLI), services, continuous integration/continuous deployment (CI/CD) workflows and other DevOps practices to fully manage access, identity, governance and security.

See the following video for a deeper dive into Kubernetes:

What is Docker Swarm?

Docker Swarm is another open-source container orchestration platform that has been around for a while. Swarm — or more accurately, swarm mode — is Docker’s native support for orchestrating clusters of Docker engines. A Swarm cluster consists of Docker Engine-deployed Swarm manager nodes (which orchestrate and manage the cluster) and worker nodes (which are directed to execute tasks by the manager nodes).

Advantages of Docker Swarm
  • Docker is a common container platform used for building and deploying containerized applications. Swarm is built for use with the Docker Engine and is already part of a platform that’s familiar to most teams.
  • It’s easy to install and set up for a Docker environment.
  • Tools, services and software that run with Docker containers will also work well with Swarm.
  • It has its own Swarm API.
  • It smoothly integrates with Docker tools like Docker Compose and Docker CLI since it uses the same command line interface (CLI) as Docker Engine.
  • It uses a filtering and scheduling system to provide intelligent node selection, allowing you to pick the optimal nodes in a cluster for container deployment.
Docker Swarm challenges
  • Docker Swarm offers limited customizations and extensions.
  • It’s less functionality-rich and has fewer automation capabilities than those offered by Kubernetes.
  • There’s no easy way to separate Dev-Test-Prod workloads in a DevOps pipeline.

Not to confuse matters too much, but Docker Enterprise Edition now supports Kubernetes, too.

Kubernetes vs. Docker Swarm: A simple head-to-head comparison

Now that we’ve covered the advantages and challenges, let’s break down the similarities and differences between Kubernetes and Docker Swarm. Both platforms allow you to manage containers and scale application deployment. Their differences are a matter of complexity. Kubernetes offers an efficient means for container management that’s great for high-demand applications with complex configuration, while Docker Swarm is designed for ease of use, making it a good choice for simple applications that are quick to deploy and easy to manage. Here are some detailed differences between Docker Swarm and Kubernetes:

Installation and setup

Because of the complexity of Kubernetes, Docker Swarm is easier to install and configure.

  • Kubernetes: Manual installation can differ for each operating system. No installation is required for managed offerings from cloud providers.
  • Swarm: Installation is simple with Docker, and instances are typically consistent across operating systems.

Scaling

Kubernetes offers all-in-one scaling based on traffic, while Docker Swarm emphasizes scaling quickly.

  • Kubernetes: Horizontal autoscaling is built in.
  • Swarm: Service replicas can be scaled on demand; there is no built-in autoscaler.
Load balancing

Docker Swarm has automatic load balancing, while Kubernetes does not. However, an external load balancer can easily be integrated via third-party tools in Kubernetes.

  • Kubernetes: Discovery of services is enabled through a single DNS name. Kubernetes has access to container applications through an IP address or HTTP route.
  • Swarm: Comes with internal load balancers.
High availability

Both tools provide a high level of availability.

  • Kubernetes: By diverting traffic away from unhealthy pods, Kubernetes is self-healing. It offers intelligent scheduling and high availability of services through replication.
  • Swarm: Swarm Managers offer availability controls, and microservices can be easily duplicated.
Which container orchestration tool is right for you?

Like most platform decisions, the right tool depends on your organization's needs.

Kubernetes has widespread adoption and a large community on its side. It is supported by every major cloud provider and do-it-yourself offerings like Docker Enterprise Edition. It is more powerful, customizable and flexible, which comes at the cost of a steeper initial learning curve. It requires a team that’s experienced and capable of running it; however, companies are also opting to use a managed service provider to simplify open-source management responsibilities and allow them to focus on building applications.

Docker Swarm’s advantage comes with familiarity and its emphasis on ease of use. It is deployed with the Docker Engine and is readily available in your environment. As a result, Swarm is easier to start with and may be a better fit for smaller workloads.

Now that you’ve covered the differences between Kubernetes and Docker Swarm, take a deeper dive in the IBM Cloud Kubernetes Service and learn how to build a scalable web application on Kubernetes.

Learn more about Kubernetes and containers

Want to get some free, hands-on experience with Kubernetes? Take advantage of IBM CloudLabs, a new interactive platform that offers Kubernetes tutorials with a certification—no cost or configuration needed.

Chris Rosen

Program Director, Offering Management


Are Your Data Centers Keeping You From Sustainability?

4 min read


Chris Zaloumis, Principal Product Manager, IT Automation

Automating application resource management should be your first step in the sustainability journey.

Imagine driving your car to work, parking it in the parking lot and then leaving it running all day long just because you might step out for lunch at some point. If you manage a data center and leave applications running that you’re using periodically, you’re in essence doing the same thing — wasting money and energy.

Data centers account for 1% of the world’s electricity use and are one of the fastest-growing global consumers of electricity. On top of this, almost every data center in the world is dramatically overprovisioned. The average rate of server utilization is only 12-18% of capacity. The smart way to address the issue is to automate application resource management, which assures application performance and increases your data center’s utilization. This materially reduces cost and energy use.

Let application performance drive sustainability and green data centers

For the modern business, its applications are its business. Maintaining performance is key to achieving growth. Application resource management software is designed to continuously analyze every layer of an application’s resource utilization to ensure applications get what they need to perform, when they need it. This not only ensures performance quality and reliability, it also saves money and energy. Electricity accounts for as much as 70% of total data center operating costs, according to the Barclays Equity Research report Green Data Centers: Beyond Net Zero.

The Natural Resources Defense Council suggests that increasing server utilization is one of the industry’s biggest energy-saving opportunities. When you increase server utilization, you naturally decrease the number of servers you have to power and cool, which saves electricity.

Cutting data center electricity consumption by 40% would save 46 billion kWh of electricity annually. That’s enough to power the U.S. state of Michigan for a year — no small feat when you consider it’s the home of Ford, Chrysler and General Motors, as well as the University of Michigan, Michigan State and a sprawling Google campus. If you increase utilization by 40% in a 100,000-square-foot facility, it’s like getting an extra 40,000 square feet for free.

Why every data center should be automating application resource management

Application resource management tools use software to manage your app stack automatically, optimizing performance, compliance and cost in real-time — all while managing business constraints.

Every business and IT leader can set the pace towards sustainability and green computing, starting with their data centers. By automating application resource management, organizations can ensure every application is resourced to perform at its optimal level without being overutilized. But what should you look for in a software solution? These three capabilities are key:

  • Optimization must be actionable and continuous.
  • Automation needs to be trusted and applied for prevention of performance risk and environmental impact.
  • Solutions must be able to manage across hybrid cloud and multicloud environments.

Hear how one data center manager is leading the way in modern, green data centers without compromising performance.

You don’t need to be a large enterprise to take advantage of application resource management software

In early 2016, the Bermuda Police Service (BPS) thought it might need to purchase new hardware for its data center in order to address poor application performance. Budget constraints forced them to consider alternatives.

BPS solution provider Gateway Systems Limited believed BPS could operate their virtualized environment far more effectively and efficiently with the help of IBM Turbonomic’s Operations Manager, software that creates a virtual marketplace for data center resources and assigns them based on application priority. It automates hundreds of daily workload placements, as well as sizing and capacity decisions. This keeps the data center in a healthy state and ensures that only required host servers are active at any given point in time.

In addition to addressing the problem of poor application performance, the software made far more effective use of BPS’s existing host servers. IBM Turbonomic freed up enough server resources to run an additional 412 virtual machines, and it ultimately allowed BPS to decommission and remove 16 Cisco UCS blade servers with 32 total CPU sockets — 67% of its hosts. Doing so eliminated all associated hardware maintenance costs and software licensing costs, including an estimated $11,600 per year for VMware licenses. As a welcome side effect, UPS standby time increased from 12 to 26 minutes.

Cisco’s UCS Power Calculator estimates the direct energy savings from removing 16 host servers (32 sockets) at 4,550 watts, assuming average server utilization of 60%. At BPS’s electricity rate of $0.40 per kWh, that comes to nearly $16,000 in annual savings.

Why every data center should be running IBM Turbonomic Application Resource Management

IBM Turbonomic can support your business through assured application performance while reducing cost and carbon footprint. Data center optimization is a great place to start. For example, IBM Turbonomic can increase virtual machine density to achieve the same level of performance with fewer resources, allowing customers to suspend or repurpose hosts. If you’re further along in your cloud journey, it lets you migrate on-premises workloads safely to the cloud and continuously optimize cloud consumption, further reducing your carbon footprint.

In a Total Economic Impact study of IBM Turbonomic Application Resource Management, Forrester found that clients see a 75% improvement in infrastructure utilization and avoid 70% of planned infrastructure growth spend. Forrester also noted that optimizing application resource consumption in the data center or the public cloud improves an organization’s long-term energy consumption profile, contributing to environmental sustainability. IBM Turbonomic clients see a payback in under six months and a 471% ROI.

In summary

You don’t have to compromise between carbon neutrality and application performance. When applications consume only what they need to perform, you can trim costs and materially reduce your carbon footprint immediately and continuously.

Chris Zaloumis

Principal Product Manager, IT Automation


LAMP vs. MEAN: What’s the Difference?

6 min read


IBM Cloud Education, IBM Cloud Education

Learn the differences between the LAMP and MEAN stacks, their benefits and their advantages for web app development.

LAMP and MEAN are popular open-source web stacks used for developing high-performance, enterprise-grade web and mobile apps. Like other web stacks, they combine technologies (operating systems, programming languages, databases, libraries and application frameworks) that developers can use to create, deploy and manage a fully functional web app efficiently and reliably via stack development.

LAMP and MEAN are different in that they provide developers with different layers — or “stacks” — of technologies that a web app needs to function across frontend interface, network and backend server activity. For example, a web-based banking application might rely on either the LAMP stack or MEAN stack to interpret a user’s request to see banking activity, retrieve the necessary data and display it in a user interface.

What is LAMP stack?

LAMP stands for the following stacked technologies:

  • L: Linux (operating system)
  • A: Apache (web server)
  • M: MySQL (a relational database management system, or RDBMS, that uses SQL)
  • P: PHP (programming/scripting language)

The Linux OS enables the entire web app to function correctly on a given piece of hardware. The Apache web server translates a user’s request and then retrieves and “serves” information back to the user via HTTP (Hypertext Transfer Protocol). The MySQL database (a relational database management system) stores the data (e.g., bank statement archives, financial activity, image files, CSS stylesheets) that the web server can retrieve and provide based on the user’s specific request. The PHP programming language works with Apache to retrieve dynamic content from the MySQL database and present it back to the user. While HTML can display static content (e.g., a headline that remains on the interface regardless of data), dynamic content that changes based on user interaction relies on PHP.

The programming languages Perl and Python can also be used in the LAMP stack. Writer Michael Kunze was the first to use the acronym LAMP in an article for a German computer magazine published in 1998.

Figure 1 shows a high-level example of how a web app responds across its LAMP stack when a user requests information. This request can include user actions like opening the application, logging in and performing a search function within the application:

Figure 1: How a user request is processed across the LAMP stack.

What is MEAN stack?

MEAN stands for the following stacked technologies:

  • M: MongoDB (non-RDBMS NoSQL database)
  • E: Express.js (backend web framework)
  • A: AngularJS (frontend framework that builds user interfaces)
  • N: Node.js (open-source backend runtime environment)

The AngularJS framework processes an incoming user request. Node.js then parses the request and translates it into inputs the web app can understand. Express.js uses these translated inputs to determine what calls to make to MongoDB, a non-relational NoSQL database. Once MongoDB provides the necessary information, Express.js then sends the data back to Node.js, which in turn sends it to the AngularJS framework so it can display the requested information in the user interface.

While the AngularJS frontend framework can be replaced with others like React.js, the Node.js environment is critical to the MEAN stack and cannot be replaced. This is because Node.js enables full-stack JavaScript development, a key benefit that makes developing and managing applications with the MEAN stack highly efficient. When AngularJS is replaced with React.js, the stack is referred to as MERN. The acronym MEAN stack was first used in 2013 by MongoDB developer Valeri Karpov.

Figure 2 shows a high-level example of how a web app responds across its MEAN stack to fulfill a user’s request for information:

Figure 2: How a web app responds across the MEAN stack to fulfill a request.

What are the advantages and disadvantages of LAMP stack development?

Advantages of LAMP

The following are some benefits of using LAMP to create, deploy and manage web applications:

  • Widespread support and trust: Because the technologies of LAMP have existed since the 1990s and have been used in various kinds of software development, it is universally trusted and supported by the open-source community. For example, many hosting providers support PHP and MySQL.
  • Open-source technology: The LAMP technologies are open source, meaning they are readily available and free for developers to use. LAMP is also highly flexible, freeing developers to use the components that make the most sense for a given web app. For example, PHP can run on different engines, such as the Zend Engine, and pair with frameworks such as Laravel. LAMP can also use any number of open-source databases, such as PostgreSQL.
  • Apache: The Apache web server is regarded as reliable, fast and secure. It is also modular, making it highly customizable.
  • Security: The LAMP stack features enterprise-grade security architecture and encryption.
  • Efficiency: Using the LAMP stack can reduce app development time due to its ease of customization. For example, programmers can start with an Apache module and change the code as needed versus developing code entirely from scratch.
  • Scalability: Web apps built, deployed and managed using the LAMP stack are highly scalable and fast to develop.
  • Low maintenance: The LAMP stack ecosystem is stable and requires little maintenance.
  • Comprehension: Because PHP and MySQL are relatively easy to understand, LAMP stack development is a good option for beginners.
Disadvantages of LAMP

The disadvantages of using LAMP to create, deploy and manage web applications include the following:

  • Multiple languages: LAMP is not considered “full stack” because it requires multiple languages in its development. While PHP is used for server-side programming, client-side programming is done in JavaScript. This means that either a full-stack developer or multiple developers are needed.
  • Limited OS support: LAMP only supports the Linux operating system and its variants, such as Oracle Linux.
  • Monolithic architecture: LAMP is more monolithic than cloud-based architectures, which tend to be more scalable and affordable and to return data faster via APIs (though LAMP is arguably more secure).
What are the advantages and disadvantages of MEAN stack development?

Advantages of MEAN

The benefits of using MEAN to create, deploy and manage web applications include the following:

  • The use of a single language: MEAN is considered “full stack” because it uses JavaScript as its only language. This makes switching between client-side and server-side programming convenient and efficient. For example, a single JavaScript developer could ostensibly build an entire web app.
  • Real-time updates and demonstrations: The technologies in the MEAN stack make it possible to push real-time updates to deployed web apps. Developers can also quickly demonstrate the functionality of web apps in development.
  • Cloud compatibility: The technologies in the MEAN stack can work with the cloud-based functions found in modern web services (such as calling on an API for data retrieval).
  • JSON files: MEAN allows users to save documents as JSON files, which are designed for fast data exchange across networks.
  • Efficiency: Developers can use resources from public repositories and libraries to reduce web application development time. This makes MEAN stack development a cost-effective option that startups may find appealing.
  • A fast runtime environment and ease of maintenance: The Node.js runtime is fast and highly responsive while the Angular.js framework is easy to maintain and testable.
  • Cross-platform support: MEAN is a cross-platform stack, meaning its web applications can function on multiple operating systems.
Disadvantages of MEAN

These are some disadvantages of using MEAN to create, deploy and manage web applications:

  • Potential data loss: Large-scale applications may experience data loss due to MongoDB requiring excessive memory for data storage. Additionally, MongoDB does not support transactional functions.
  • Load times and incompatibility: JavaScript may load websites or applications slowly on some devices, particularly older or low-end devices. Web apps may even be rendered inoperable if JavaScript is disabled on a device. Additionally, MEAN can be hard to implement in existing architectures since older applications are unlikely to use JavaScript.
  • High maintenance: The technologies in the MEAN stack are updated often, which means frequent maintenance on web apps is required.
MEAN vs LAMP: Which is better?

Neither stack is better than the other, per se. However, LAMP stack or MEAN stack may be better suited for a particular web development use case.

LAMP stack is generally the better option for web applications or sites with the following characteristics:

  • Are large in scope, static (i.e., not needing real-time updates) and will experience heavy workflows with spikes in traffic
  • Have a short lifespan
  • Are server-side in nature
  • Use a CMS such as WordPress

Conversely, MEAN stack is the better choice for web applications or sites like these:

  • Take advantage of modern cloud technologies like APIs and microservices
  • Have a long lifespan
  • Are smaller in scope with consistently predictable traffic (decreasing the likelihood of data loss)
  • Require a lot of logic on the client side
LAMP stack, MEAN stack and IBM

To get back to basics, LAMP stack takes you a little closer to the technical serving of web pages and how that is done. You have your database, your scripting language, and a way to serve it to clients — that’s LAMP.

If you want to see how easy it is to develop and deploy an application to the cloud using a LAMP or MEAN stack, IBM offers the following tutorials:

Sign up and create your IBM Cloud account.

IBM Cloud Education

IBM Cloud Education


How to Limit Access to Specific IBM Cloud Accounts

6 min read


Martin Smolny, IBM Cloud Identity and Access Management
Michael Beck, IBM Cloud Identity and Access Management

To prevent data theft, enterprise customers with a strong demand for security want to make sure that their users can only access approved accounts.

In the past, this could be managed by only allowing their employees to connect to certain allow-listed domains and/or IP addresses. This approach works as long as you can distinguish tenant-specific traffic using domains or IP addresses. Using cloud services, this approach often fails, as tenant- or account-specific access cannot be secured on a domain or IP address level. Access to specific IBM Cloud Accounts must be limited using a different, application-level approach. IBM Cloud Identity and Access Management (IAM) is introducing a feature to limit access for tenants based on their numeric enterprise ids and account ids. 

How does this feature work?

To limit which accounts a user might select, the customer must add a comma-separated list of enterprise ids and/or account ids as an HTTP header called IBM-Cloud-Tenant to requests to IBM Cloud. Whenever an IBM Cloud user wants to switch to an initial or follow-on account during a login session, IAM checks whether that tenant header exists and, if so, whether the target account is either allow-listed directly as an account id or allow-listed indirectly because the account belongs to an allow-listed enterprise id.

Which requests need the HTTP header?

If an IBM Cloud customer wants to invoke an IBM Cloud API, the customer must first create an IAM Token. The typical call using an API key is as follows:

curl --data-urlencode 'grant_type=urn:ibm:params:oauth:grant-type:apikey' \
     --data-urlencode 'apikey=<...>' \

Each API key is connected to an IBM Cloud account; therefore, this call is considered as an account-selecting call.

If the user runs a client that directly calls IBM Cloud IAM from the enterprise network like the IBM Cloud CLI, or a self-written script or an application directly interacts with IBM Cloud IAM, you can add an appropriate tenant header to those requests like in the following enhanced API key token call with the mentioned HTTP header:

curl -H 'IBM-Cloud-Tenant: <account1>,<account2>,<enterprise1>' \
     --data-urlencode 'grant_type=urn:ibm:params:oauth:grant-type:apikey' \
     --data-urlencode 'apikey=<...>' \

Executing this call will only be successful if the account that is connected to the API key is included in the IBM-Cloud-Tenant header — otherwise it will fail with an error (see later).

As a second example, if an IBM Cloud service wants to call IAM to switch the account (e.g., when working in the IBM Cloud Console), the service must execute an OAuth refresh-token grant request. During that request, the IBM Cloud service provides the new account id in the parameters (e.g., an account switch without the mentioned HTTP header):

curl -u '<clientid>:<clientsecret>' \
     --data-urlencode 'grant_type=refresh_token' \
     --data-urlencode 'refresh_token=<...>' \
     --data-urlencode 'account=<new account>' \

To enforce the right account, if the user runs an application that directly calls IBM Cloud IAM from the enterprise network (like the IBM Cloud CLI) or a self-written script or an application directly interacts with IBM Cloud IAM, those requests need the appropriate tenant header (e.g., like this account switch with the mentioned HTTP header):

curl -u '<clientid>:<clientsecret>' \
     -H 'IBM-Cloud-Tenant: <account1>,<account2>,<enterprise1>' \
     --data-urlencode 'grant_type=refresh_token' \
     --data-urlencode 'refresh_token=<...>' \
     --data-urlencode 'account=<new account>' \

If <new account> is not known in the list <account1>,<account2> and is also not part of <enterprise1>, the request will be refused; otherwise, it succeeds.
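The allow-list check described above can be pictured with a small shell sketch. This is purely illustrative logic written for this article, not IBM's actual implementation; enterprise-id resolution (allowing any account that belongs to an allow-listed enterprise) is omitted, and the function name and account ids are invented:

```shell
#!/bin/sh
# Hypothetical sketch of the server-side check: succeed only if the
# requested account id appears in the comma-separated IBM-Cloud-Tenant list.
tenant_allows() {
  tenant_header="$1"   # e.g. "account1,account2"
  target_account="$2"  # account the user tries to select
  case ",$tenant_header," in
    *",$target_account,"*) return 0 ;;  # allow-listed: request succeeds
    *) return 1 ;;                      # not listed: refused (BXNIM0523E)
  esac
}

if tenant_allows "acc-111,acc-222" "acc-222"; then
  echo "allowed"    # prints "allowed"
else
  echo "refused"
fi
```

Wrapping both the header value and the target in commas avoids partial matches (e.g., "acc-22" matching "acc-222").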

How do you add the HTTP header to the requests?

Typically, the enterprise wants to enforce this tenant header in each request to IBM Cloud IAM, so relying on all clients to add the tenant header to their requests is dangerous. Individual applications might forget to add the header and unintentionally open security holes. Others might intentionally avoid adding the header and gain access to IBM Cloud accounts that do not belong to the tenant.

Therefore, it is much more common to place a transparent HTTPS proxy into the enterprise network that intercepts HTTPS calls and adds the required header to each call to IBM Cloud. This enforces the right behavior for all applications that connect directly to IBM Cloud IAM.

What about web-based user interfaces?

As indicated above, the HTTP header that contains the tenant restrictions must reach IBM Cloud IAM. Requests that originate from your local environment can easily be intercepted, and the HTTP header can be added. But what about other applications — like the web-based IBM Cloud Console — that indirectly talk to IAM?

To support tenant restrictions, the IBM Cloud Console would need to add the required header to its own IAM requests. To solve this scenario, IBM Cloud Console also has added support for the tenant HTTP header (i.e., whenever a request hits IBM Cloud Console, any existing HTTP header named IBM-Cloud-Tenant will be added to any outgoing call). This means that if the enterprise ensures (e.g., via a transparent proxy as described above) that the tenant HTTP header is added to all requests to IBM Cloud, IBM Cloud Console will use that information and also enforce this limitation.

Which URLs should the proxy intercept?

We suggest adding the tenant restriction HTTP header to all requests to IBM Cloud domains and their subdomains, including all region-specific subdomains. This also covers the IBM Cloud Console and all its dashboards, catalogs and functionalities.

System behavior in case of a mismatch

This section discusses how an end user will be notified if an action is blocked because of a provided IBM-Cloud-Tenant header.


When you log in to the IBM Cloud CLI, you always have to select an account for the subsequent commands. If you specify an API key, for example, the account is already part of the information represented by the API key. Doing a CLI login using username/password or the --sso option will present you with an account selection dialog.

In both cases, if your network is adding the IBM-Cloud-Tenant header, and the account that you selected is not allow-listed in that header, the following error message will occur as part of the authentication or account selection step:

Remote server error. Status code: 403, error code: BXNIM0523E, message: Account id or enterprise id not found in matching ibm-cloud-tenant allow list.


If you log in to the IBM Cloud Console and your network is sending the IBM-Cloud-Tenant header, some of the accounts that you are a member of might be blocked. In that case, you will observe an error dialog like this one:

Please select another account that is not blocked or click on Log out to exit from the IBM Cloud Console.


When you do API calls, you might get the following error response during token creation:

403 Forbidden
{
  "errorCode": "BXNIM0523E",
  "errorMessage": "Account id or enterprise id not found in matching ibm-cloud-tenant allow list.",
  "context": { ... }
}

In all of these cases, the error means that your enterprise network is running a transparent HTTP(S) proxy that injects an IBM-Cloud-Tenant header in which your required account id or enterprise id is missing. If you need to access that account, please contact your network administrator to update the IBM-Cloud-Tenant header to contain the account id that you are required to work with.


This section discusses how you can test the feature without installing a transparent HTTP(S) proxy in your network.


To simulate the behavior of a transparent HTTP(S) proxy running on your network, you can install a proxy server on your local system that injects the IBM-Cloud-Tenant header into outgoing HTTPS requests. There are several products available. 

For our tests, we chose the Burp Suite Community Edition and configured the proxy server to inject an IBM-Cloud-Tenant header with an appropriate account id into outgoing requests.

In addition to running this proxy server locally, you also have to indicate to your IBM Cloud CLI to use the proxy server for outgoing traffic. The traffic can be redirected by setting the following environment variables:
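The concrete variable values did not survive the page conversion; assuming the local proxy listens on Burp Suite's default address of 127.0.0.1:8080 (adjust to your setup), a typical configuration uses the standard proxy environment variables:

```shell
# Route CLI traffic through the local intercepting proxy.
# 127.0.0.1:8080 is Burp Suite's default listen address; change as needed.
export HTTP_PROXY=http://127.0.0.1:8080
export HTTPS_PROXY=http://127.0.0.1:8080
```

Unset both variables again after the test to restore direct connectivity.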


The IBM Cloud CLI respects these values and redirects traffic over those proxy endpoints. 

To intercept HTTPS traffic, the Burp Suite Community Edition has to create its own certificate, which is not accepted by the IBM Cloud CLI. You can either download the Root CA from the Burp Suite Community Edition and import it into your operating system's trust store, or simply pass the --skip-ssl-validation option on each invocation of the IBM Cloud CLI. For example:

ibmcloud login --sso --skip-ssl-validation

If you have installed a local HTTP(S) proxy for CLI testing, you can reuse that installation for your browser test, too. The only thing you need to do is point your browser's proxy endpoint at the local proxy server. Potentially, you will have to accept invalid certificates for this test.

Alternatively, each browser that can be extended by plugins will likely have a browser extension that allows you to add a custom HTTP header to your web browser requests. In our tests, we decided to use the ModHeader Chrome Plugin to add arbitrary HTTP headers — we were using the IBM-Cloud-Tenant header with appropriate account ids — to outgoing calls.

In case you have already installed a proxy server like the Burp Suite Community Edition as described above, you can simply reconfigure your browser to use the local proxy server on port 8080 to execute the requests. Again, you have to either import the Root CA from the Burp Suite Community Edition or accept insecure traffic for your test.


On the API, specify a new HTTP header IBM-Cloud-Tenant with the list of permitted account ids. If you are using the command line tool curl, this would be done with the following parameter: 

-H "IBM-Cloud-Tenant: <comma separated list of accountids>"

For example, if using this parameter with a valid API key in one of the specified accounts, the call execution will work as expected:

curl --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
     --data-urlencode "apikey=${APIKEY}" \
     -H "IBM-Cloud-Tenant: ${ACCOUNTID}" \

200 OK
{
  "access_token": "...",
  "refresh_token": "not_supported",
  "ims_user_id": ...,
  "token_type": "Bearer",
  "expires_in": 3600,
  "expiration": 1651825368,
  "scope": "ibm openid"
}

If the API key is created in an account that is not in the IBM-Cloud-Tenant list, you will get the following error message:

curl --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
     --data-urlencode "apikey=${APIKEY}" \
     -H "IBM-Cloud-Tenant: ${ACCOUNTID}" \

403 Forbidden
{
  "errorCode": "BXNIM0523E",
  "errorMessage": "Account id or enterprise id not found in matching ibm-cloud-tenant allow list.",
  "context": { ... }
}
Martin Smolny

IBM Cloud Identity and Access Management

Michael Beck

IBM Cloud Identity and Access Management


Digital Workers vs. Chatbots vs. Bots: What’s the Difference?

6 min read


IBM Cloud Education, IBM Cloud Education

Explore the differences between these three types of automation and learn about when to use them in your organization.

Businesses are seeing more use of digital (robotic) workers, chatbots and bots as digital transformation continues to revolutionize the workplace. These automation technologies — which use artificial intelligence (AI) and its subsets, machine learning (ML) and natural language processing (NLP) — are making customer service available around the clock, removing tasks from employees’ workflows, speeding up processes, removing errors and otherwise reducing costs and increasing competitiveness.                                   

While digital workers, chatbots and bots might sound interchangeable, they operate differently and meet distinct needs. Here’s a closer look.

What is a digital worker?

A digital worker, sometimes defined as a category of software robots, is a non-human team member that’s trained to use intelligent automation technologies to automate multiple tasks in a set of sequences and meet a complete business need from beginning to end. An example might be processing invoices through an organization’s system by moving them from sales to finance to procurement for execution and delivery.

Also referred to as a virtual or digital employee, a digital worker uses artificial intelligence and machine learning to perform one or more routine and repetitive business processes — not just a single task as a bot does, but an entire process. A digital worker is intelligent enough to ask questions if it needs more information, and it can improve the employee experience by taking monotonous work off the table. It can also be trained to deal with exceptions to the rule and learn by doing. More advanced digital worker software has the ability to remember past interactions, so that when you switch it off, it doesn’t forget you or what you worked on before.

Forrester describes a type of digital worker automation as combining AI (such as conversational AI and robotic process automation (RPA)) to work alongside employees and “understand human intent, respond to questions and take action on the human's behalf, leaving humans with control, authority and an enhanced experience.”

In the following video, Leslie Chau goes deeper on digital workers:

Benefits of digital workers

As with chatbots and bots, digital workers can improve employee and customer experience and productivity, and they bring unique benefits in these areas:

  • They not only save human employees time, but they can also assist them in doing more creative, strategic and high-value work by providing the right information and recommendations at the right time.
  • They can perform actions within and across multiple processes and systems, breaking down silos.
  • They can handle more dynamic conversational flows.
  • More advanced digital workers can remember past business interactions to make workflows more effective.
What is a chatbot?

A chatbot is an automated software program that uses artificial intelligence and natural language processing to simulate a chat — generally through a website, email, SMS or other messaging app — first by understanding a user’s questions and then providing the correct answers. By processing and simulating human conversation, either written or spoken, the conversational AI delivers an experience that can seem like two people communicating. Chatbots are used for both internal and external customers.

Many companies have AI chatbots that pop up in the lower corner of their websites to ask how they can help visitors.

Simple (or rule-based) chatbots respond to pre-written keywords or questions programmed into the system. Advanced or AI chatbots use natural language processing and machine learning. They understand basic language and communication, can understand the different ways a customer may ask the same question and can help with much more complex tasks. They can understand different ways of asking for things, respond with multiple suggestions and offer a back-and-forth conversation that feels, to a customer, as though they are chatting with a human employee in real-time.
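The keyword matching that simple, rule-based chatbots perform can be sketched in a few lines of shell. This is a toy illustration written for this article; the keywords and canned replies are invented, and no real chatbot product is shown:

```shell
#!/bin/sh
# Toy rule-based chatbot: answer from pre-written keyword rules,
# with a fallback reply when no rule matches.
reply() {
  case "$1" in
    *hours*)  echo "We are open 9am-5pm, Monday to Friday." ;;
    *refund*) echo "Refunds are processed within 5 business days." ;;
    *)        echo "Sorry, I did not understand. Try 'hours' or 'refund'." ;;
  esac
}

reply "What are your hours?"
```

An AI chatbot differs precisely in that it is not limited to such literal keyword rules: it can recognize the many different ways a customer might phrase the same question.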

Consider a chatbot when your customers want questions answered anytime they are online. A chatbot means your customers are not limited to getting information and answers only when your call center is open.

The Gartner Technology Roadmap survey found that customer service and support leaders will invest heavily in chatbots over the next several years. While only one in four service organizations fully deploys chatbots and AI today, 37% are running pilots or planning to deploy chatbots by 2023.

Gartner pointed out the growth of chatbots corresponds to the millennials’ increase in the workplace. “Because chatbots cater to millennials’ demand for instant, digital connections that keep them up to date at all times, millennials will likely have a large impact on how well and how quickly organizations adopt the technology.”

Benefits of chatbots

Chatbots provide these unique benefits:

  • They personalize service for many customers at once.
  • They allow end-users to have a self-service experience.
  • They are available for customer interactions and customer service at any time.
  • They can be programmed to communicate with customers who speak different languages.

Chatbots can also be used successfully for lead generation. They let you ask for customer information 24/7 and can add that information to a lead generation form that you then integrate into your sales workflow.

Chatbots can help customers make reservations on the spot, send promotional messages and even identify the right time to engage with customers for sales and business development.

What is a bot?

Unlike digital workers, which perform complete business functions from start to finish, and chatbots, which focus on communication, a bot (short for robot, and sometimes called an internet bot) is a software application that operates over a network and is programmed to do a specific, repetitive, predefined work task that a human would typically do. Bots operate without specific instructions from a person. They are valuable because they execute work much faster than a person (and without errors).

Benefits of bots

Bots are a way to easily automate individual, relatively simple tasks that would otherwise be handled manually.

Basic bots are relied on for the following benefits:

  • They speed simple tasks that can be precisely documented and have a defined sequence of steps.
  • They eliminate human error and provide total accuracy.
The value in digital workers, chatbots and bots

These three types of automation operate differently and meet different goals. Digital workers are trained to complete an entire business function from start to finish. Chatbots are a kind of bot that simulates human conversation, and they focus on a relatively narrow range of issues compared to what digital workers can do. Bots are simpler still: they are programmed to complete a single task.

Everything a chatbot and a bot can do, a digital worker can do, but a digital worker can also perform actions within and across processes and systems, handle more dynamic conversational flows and remember past business interactions.

“Conversational AI and RPA are useful and valuable,” says Jon Lester, IBM’s Director of HR Service Delivery & Transformation, “But there are things they can’t do that a digital worker can. Our Ask HR chatbot does its tasks really well — and has saved IBM employees and managers lots of time — but it can only do tasks one at a time. It can’t link transactions across multiple processes or systems. And a chatbot lacks long-term memory. The moment you switch it off, it forgets that you exist. It has no memory of what you did before.”

When to use a digital worker vs. a chatbot vs. a bot

A digital worker is appropriate when the goal is to automate a business function from start to finish, so it can follow sets of sequences and perform multiple tasks. An example of a digital worker’s role might be handling the complete process of preparing a quarterly revenue report and designing a presentation around it for the executive team. Another example would be performing human resources (HR) tasks like creating job descriptions, onboarding new employees, setting up user accounts and handling healthcare referrals.

When your need is around communications, consider a chatbot. Chatbots, which operate around the clock and can respond to questions in various languages, correspond with a customer over messaging to answer FAQs quickly and take pressure off your customer service reps. They can turn potential customers into qualified leads and book meetings or appointments. They also provide a business with information that is valuable for analytics.

As for simpler bots, use them when you need a specific automation task done repeatedly, without requiring supervision or, in fact, any human intervention beyond an initial trigger.

Digital workers, chatbots, bots and IBM

IBM offers award-winning digital worker, bot and chatbot solutions that enable you to do the following:

  • Return significant time-savings to your teams with their own digital employee. IBM Watson® Orchestrate, a 2022 CES Innovation Award Honoree, helps human employees perform both routine and mission-critical work faster. Intelligent digital employees work across existing business apps to take on time-consuming tasks, like gathering data from multiple systems, enabling end-to-end automation of processes in a way that robots or chatbots cannot.
  • Deliver exceptional customer experiences anywhere. IBM Watson Assistant uses artificial intelligence that understands customers in context to provide fast, consistent, and accurate answers across any application, device, or channel. Remove the frustration of long wait times, tedious searches, and unhelpful chatbots with a leader in trustworthy AI.
  • Start your automation journey with AI-driven RPA. Robotic process automation can help you automate more business and IT tasks at scale. IBM Robotic Process Automation can be used to implement attended and unattended bots and chatbot solutions. 
IBM Cloud Education

IBM Cloud Education


Veeam

5 min read


Jordan Shamir, Offering Manager

Veeam for IBM Cloud: A backup and disaster recovery solution

Backup and disaster recovery is no longer a luxury, but an absolute necessity. Whether in your own data center, cloud, or a multicloud environment, you need a disaster recovery and backup plan.

Veeam on IBM Cloud is one option for backup and disaster recovery solutions in IBM Cloud. IBM Cloud provides full bare metal and hypervisor access, and this is important because it significantly reduces the learning curve of using the cloud.

This enables organizations to maintain the same security, flexibility and control that they’re currently running in their own data center today.  Veeam on IBM Cloud is a non-IBM Product offered under terms and conditions from Veeam.

In this lightboarding video, I'm going to go over the benefits you will enjoy by using Veeam on IBM Cloud for your disaster recovery and backup needs.

Video Transcript

What is Veeam?

Hi, my name is Jordan Shamir with IBM Cloud, and today I'm going to be talking about Veeam.

Veeam is an intelligent data-management backup and disaster recovery solution that is focused on providing hyper-availability and uptime for their customers.

Backup and disaster recovery

Backup and disaster recovery is no longer a luxury, but an absolute necessity. Our consumers expect us to stay online, they want to be able to access their products when they want it. 

However, this is becoming much more challenging with the rise of disasters, such as natural disasters, the rise of ransomware and cyberattacks, and, lastly, human errors. Maintaining this uptime and availability is becoming much, much more challenging. 

Backup and disaster recovery environments in the cloud

Many businesses have their disaster recovery and backup environment within their own data center. This is risky because if your data center goes down, you are down.

So, as companies begin to modernize, they look at their cloud to provide a disaster recovery and backup environment.

Some of the benefits are the geographic spread. We're giving you that two-hundred, four-hundred, five-hundred mile difference to ensure availability by a different region. 

Also, beyond that, we provide a variety of different economic benefits because you don't need a disaster recovery environment until you're going through a test or an outage. So, within the cloud, you can use the flexibility to have a low initial footprint and scale up as needed to accommodate an outage.

Backup and disaster recovery vs. availability

Backup and disaster recovery is completely different than availability. Cloud service providers guarantee you a certain level of availability of your workloads, but it doesn't mean that your workloads are backed up and resilient.

Veeam on IBM Cloud

Whether in your own data center, cloud, or a multicloud environment, you need disaster recovery and backup: Veeam on IBM Cloud is one option.

Within IBM Cloud, we give you full bare metal and hypervisor access, and this is important because it significantly reduces the learning curve of using the cloud. This enables you to maintain the same security, flexibility, and control that you're running in your own data center today.

Specifically with Veeam, you get access to all the advanced functionality which really makes Veeam, Veeam—such as Instant VM recovery, SureReplica, SureBackup, and a variety of other features.

IBM Cloud and deployment flexibility

It's critical that businesses have deployment flexibility, as their requirements vary by specific applications and workloads. In IBM Cloud, we have over 13,000 different deployments of Veeam. You can deploy on our Virtual Server Instances or our non-virtualized bare metal, as well as with our VMware and SAP workloads. These deployment options are available in 60+ data centers around the globe.

Replicate between data centers for free

One of the really cool things is that you can actually replicate between these data centers for free and securely. So, if I think about backing up my own workloads, I have my production data center in A, and then my backup data center in B. So, replicating between the two is completely free and secure within the IBM Cloud. And this really saves me a significant amount of networking cost to have a backup and disaster recovery environment in the cloud.

Veeam on IBM Cloud in a multicloud strategy

In order to meet your consumer expectations and availability requirements, you need a multicloud strategy.

Veeam on IBM Cloud provides you the control, flexibility and availability to meet your business objectives.

Jordan Shamir

Offering Manager


Updated Tutorial: Build a Database-Driven Chatbot

3 min read


Henrik Loeser, Technical Offering Manager / Developer Advocate

Use IBM Watson Assistant to build a Slackbot backed by a database app on IBM Cloud Code Engine, with data stored in Db2 on Cloud.

One of my favorite services on IBM Cloud is Watson Assistant, which allows you to easily build highly sophisticated chatbots. Even better, Watson Assistant has integrations with Slack, Facebook Messenger, WhatsApp, phone systems and more. Chatbots can become dynamic by leveraging data from backend systems, such as databases, customer relationship management (CRM) or enterprise resource planning (ERP) systems. The same way our (human) acceptance and interaction with chatbots have evolved, so have the capabilities and features of Watson Assistant. Therefore, it was time to update our popular tutorial on how to build a database-driven Slackbot. It shows how to build a chatbot to retrieve data about events and how to gather data for and then create a new database record.

The tutorial now makes use of action skills instead of dialog skills. And it shows how to reach out to a backend system with the new custom extensions in Watson Assistant. The updated tutorial demonstrates how such an extension is deployed to IBM Cloud Code Engine, the fully managed, serverless platform for containerized workloads. The backend app features a REST API to interact with data stored in an IBM Db2 on Cloud database. The following diagram shows the new overall architecture:

Solution architecture.

Watson Assistant and custom extensions

To deploy your chatbot to the (communication) channels your customers use, Watson Assistant offers a set of integrations. By adding them to your assistant, you can easily make the chatbot available to users on Slack or WhatsApp. The following screenshot shows the chatbot from the tutorial interacting on Slack:

A user interacting in Slack with the new chatbot.

To obtain data for a conversation from a database, web service, CRM or ERP system, Watson Assistant supports webhooks. The old version of the tutorial utilized webhooks to reach out to Cloud Functions to retrieve data from Db2 on Cloud. Custom extensions are a new feature in Watson Assistant. An extension is defined by importing an OpenAPI specification for a REST API. The specification defines the API functions with their input and output schemas, so Watson Assistant knows which functions are available, what data to pass to them and what result data to expect. This simplifies the conversation design because you only need to map variables and user input to API parameters, and you can reference result fields in assistant responses (see screenshot below).

In the updated tutorial, we use a REST API to manage data about events (conferences) stored in a Db2 on Cloud database. First, you import the API specification into Watson Assistant. Then, you configure server details for your deployed backend app (REST API) and tailor security settings:
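To make that import step concrete, here is a minimal, hypothetical OpenAPI 3.0 fragment for such an events API, expressed as a Python dictionary. The path, operationIds and schema fields are illustrative placeholders, not the tutorial's actual specification; the point is that Watson Assistant reads exactly this kind of structure to learn which operations exist and what data they accept and return:

```python
# Hypothetical, minimal OpenAPI 3.0 fragment for an events REST API.
# Names and fields are illustrative, not the tutorial's real specification.
openapi_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Events API", "version": "1.0.0"},
    "servers": [{"url": "https://<your-app>.<region>.codeengine.appdomain.cloud"}],
    "paths": {
        "/events": {
            "get": {
                # operationId is what Watson Assistant surfaces as a callable function
                "operationId": "listEvents",
                "summary": "Retrieve event records",
                "responses": {
                    "200": {
                        "description": "A list of events",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "name": {"type": "string"},
                                            "location": {"type": "string"},
                                            "begindate": {"type": "string", "format": "date"},
                                        },
                                    },
                                }
                            }
                        },
                    }
                },
            },
            "post": {
                "operationId": "createEvent",
                "summary": "Create a new event record",
                "responses": {"201": {"description": "Event created"}},
            },
        }
    },
}

# The operations Watson Assistant would discover from this specification:
ops = [op["operationId"]
       for path in openapi_spec["paths"].values()
       for op in path.values()]
print(ops)  # → ['listEvents', 'createEvent']
```

In the tutorial flow, you would export such a specification as JSON or YAML and upload it when creating the custom extension.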

REST API call: Watson Assistant references fields from result.

The custom extension for the chatbot is a REST API. It is a Python database app that exposes operations on the Db2 on Cloud database as API functions. The source code for the app is available in a GitHub repository. For the tutorial, you can either use a pre-built container image with that app or build a container image on your own (with your modifications included). The image is deployed to IBM Cloud Code Engine as an app simply by first creating a Code Engine project, then creating the app.
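Those two CLI steps can be sketched as follows. The project, app and image names are placeholders, and the commands assume the IBM Cloud CLI with the code-engine plugin is installed and you are logged in:

```shell
# Placeholders -- substitute your own names and image reference.
PROJECT=chatbot-project
APP=events-api
IMAGE=icr.io/mynamespace/events-api:latest

if command -v ibmcloud >/dev/null 2>&1; then
  # First create a Code Engine project, then deploy the container image as an app.
  ibmcloud ce project create --name "$PROJECT"
  ibmcloud ce app create --name "$APP" --image "$IMAGE" --port 8080
else
  echo "ibmcloud CLI not found; commands shown for reference only"
fi
```

Once the app is created, Code Engine reports the public URL you then enter as the server in the extension's configuration.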

Get started

Chatbots are an established tool for customer care. In the updated tutorial you learn how to work with a custom extension in Watson Assistant to create a database-driven chatbot integrated with Slack. 

Are you ready to get started? Here are the relevant resources:

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Henrik Loeser

Technical Offering Manager / Developer Advocate


Moving into the Cloud: The Metamorphosis of Work and Malware Security

3 min read


John Dwyer, Head of Research, X-Force

Taking proactive measures to address security concerns when moving work to the cloud.

The very idea of what constitutes “work” has undergone a metamorphosis over the past two years. Companies and their employees have proven incredibly adaptable, and our ability to thrive collaborating online, rather than in a shared physical working space, has ushered in a work-from-anywhere era.

It's been an exciting time of accelerated digital transformation, but has the hasty shift into the cloud environment left organizations more vulnerable?

Overlooked gaps in cloud security

The reality is, for many organizations working in a cloud environment, security hasn’t been a primary concern. As people are working with tools and applications that weren’t designed to securely function in the cloud — resulting in overlooked gaps in cloud security — opportunities to exploit security vulnerabilities abound.

“Bad guys are always going to follow the money. They’re watching organizations moving into the cloud, and of course they’re going to follow that money,” says Charles DeBeck, Senior Cyber Threat Intelligence Analyst for IBM Security. “What we’re seeing across the board is threat actors investing heavily in cloud-focused malware.”

So perhaps it’s no surprise that malware, like work, is undergoing its own metamorphosis, with a growing emphasis on Linux malware innovation. Linux — the open-source code that supports cloud infrastructure and data storage — is believed to power around 90% of cloud workloads. As you can imagine, Linux malware presents an incredibly alluring and lucrative area of focus for threat actors.

Malware trends are on the rise

Although Linux malware has been increasing steadily since 2018, largely driven by the opportunities that crypto-mining presents, there has been a sharper rise in recent years. Between 2019 and 2020, there was a 40% increase in Linux malware families, according to the latest data from the IBM Security X-Force Threat Intelligence Index (TII). In fact, Linux malware saw 500% growth from 2010 to 2020.

“Threat actors are realizing how valuable Linux malware is, so that is where they're spending more time, ingenuity and resources,” says Camille Singleton, Manager, IBM X-Force Cyber Range Tech Team.

Linux ransomware with new code saw a 146% increase, according to the TII, and unique code increased in four out of five categories over the previous year. The banking industry experienced the greatest innovation increase — over tenfold — due to trojans. While Windows malware still makes up the vast majority of malware, the sheer volume of unique Linux code suggests an ongoing trend.

Evasive, fileless malware lurking in memory can elude standard detection tools by exploiting legitimate scripting languages and sidestepping the use of signatures. Often used in Windows-based attacks, fileless malware is entering into the cloud with Ezuri, an open source crypter and memory loader written in Golang.

New malware suite focuses on Linux

IBM Security X-Force research in the Threat Intelligence Index (TII) highlighted the development of a new malware suite dubbed Vermilion Strike, which provides attackers with remote-access capabilities. Based on the popular penetration testing tool Cobalt Strike, Vermilion Strike is designed to run on Linux systems.

The creation of Vermilion Strike shows that attackers are planning to expand human-operated attacks executed through Cobalt Strike to Linux systems, which may help them evade detection within enterprises. This development highlights the continued migration to malware targeting Linux and indicates that ongoing operations outside of Windows environments will continue into the future.

“Where Vermilion Strike is interesting for Linux is that it shows that there is an intent to increase the use of Linux systems during human-operated attacks,” says John Dwyer, Head of Research, X-Force. “For the past few years, Linux attacks have been mostly focused on delivering a cryptominer, ransomware or web shell, often through automated mechanisms. But Vermilion Strike offers attackers the opportunity to easily incorporate Linux systems into larger enterprise attacks for things like lateral movement and persistence by incorporating those systems within the Cobalt Strike C2 framework."

To limit breaches, shift your mindset to a zero-trust philosophy

Cloud migration was an urgent answer to an urgent need. It’s understandable that security was an afterthought as organizations quickly mobilized a work-from-home model. Now, in this work-from-anywhere era, security officers should concentrate on implementing more robust cybersecurity tools and strategies, such as Identity Access Management (IAM).

In the work-from-anywhere world, the perimeter is a person, not a place; organizations need to shift their security mindset. Implementing a zero-trust philosophy can connect the right users to the right data at the right time under the right conditions, while also protecting your organization from cyberthreats.

Whether in a cubicle or in the cloud, on a Microsoft or Linux platform, taking proactive measures to limit access is one of the most effective ways to limit a security breach.

Check out the IBM Security X-Force Threat Intelligence Index (TII) for a deeper dive.

Learn more about IBM's cloud security solutions.

John Dwyer

Head of Research, X-Force


Maximize the Value and Accelerate the Use of Your New IBM z16 System Features

4 min read


Roger Bales, Global IBM Z & LinuxONE Services Practice Leader

Our IBM Systems Lab Services experts offer infrastructure services to help you leverage the unique capabilities of the z16.

Is your enterprise mainframe environment quantum-safe? Are you currently maximizing your AI capabilities for highly time-sensitive transactions? Is your enterprise modern, or is it in need of a refresh?

Meet the IBM z16, which was built with these features in mind to help your enterprise advance.

The newly released z16 can help you predict, secure and modernize your enterprise, so you can do the following:

  • Predict and automate with accelerated AI: Apply insights at speed and scale to create new value in every customer interaction. This can increase productivity and lower operational costs with automation and AIOps. AI integration is also imperative: AI on z16 is designed to provide the best possible experience in highly time-sensitive transactions that require a rapid response. The AI computation, known as inference, takes place wholly within the core Z system, because the time restrictions do not allow the inference to be performed outside of the system.
  • Secure with a cyber-resilient system: Provide protection for data and systems that spans multiple and varied compute paradigms. Address ever-increasing regulations with automation for compliance, which can be difficult to manage. Plan and mitigate the risk of potential future disasters through quantum-safe encryption protection for post-quantum cryptographic vulnerabilities. Running more on a mainframe can put you in a better position to leverage cryptography and quantum-safe capabilities.
  • Modernize with a hybrid cloud: Empower your developers with the agility to accelerate the modernization of existing workloads. Enable the integration of z16 workloads with new digital services across the hybrid cloud. Support the application modernization, migration and subsequent compliance with global regulations.

These exciting new features can elevate your operational capabilities and prepare you for the future. If you are already on the z15 system and are looking to build upon it, or you are new to the z16 system, our TSS Technical Services (formerly known as IBM Systems Lab Services) experts are here to help. We specialize in supporting clients in accelerating their understanding and adoption of the technology.

It can be easier to update your operating system with the help of an expert who can guide you in your implementation or upgrade. This way, you can maximize the return on your investment by ensuring you have taken advantage of all the new features of the z16, while also connecting all of your systems.

How IBM Systems Lab Services can help with your IBM z16 system

Our IBM Systems Lab Services experts offer infrastructure services to help you leverage the unique capabilities of the z16 to build the foundation for today’s hybrid cloud and enterprise IT data centers. With z16, IBM Systems Lab Services helps you deploy the building blocks of a next-generation IT infrastructure that empowers your business.

Our experts can help you do the following:

  • Predict: Through the integration of AI, you can keep data close to extreme processing capabilities. z16 offers the perfect solution and services needed to help you understand AI capabilities. You can envision how to leverage the strengths of z16, while also identifying and prioritizing use cases and establishing the right foundation in the overall infrastructure. We also provide new services to help clients get up and running using AI on Z. For example, we work with the IBM Client Engineering team to help you build a plan and use case for AI, and we also work with IBM Consulting for those who need more help with AI model and application development.
  • Secure: Security has never been more important than it is today. z16 can construct extremely secure defenses, and clients need to ensure that they proactively take advantage of those highly secure capabilities. Our services can help you understand potential security exposures and how to strengthen the system in response, which is even more important with quantum-safe capabilities. The IBM Z Security and Compliance Center (zSCC) offers services to help assemble the foundation to position your systems to support a business model that is compliant. In addition, we have the Quantum Safe Assessment for Z Systems. For this assessment, we will ensure that you are using the current best practices for your system and check that you are fully maximizing the Z Systems technologies to help protect your mainframe environment from quantum security risks.
  • Modernize: Application modernization, migration and compliance with global regulations can be difficult, so many clients can benefit from obtaining guidance in getting up to speed on their software and helping with their migration. Complying with the many different regulations globally is very important, both continuously and in a predictable way. For example, we provide a new CICS Modernization Health Check, which can help analyze and enhance your CICS application modernization capabilities by leveraging modernization capabilities delivered via Red Hat OpenShift and z/OS Connect.

Learn more

Start maximizing your z16 system today! Contact us to get connected with an expert at

IBM Systems Lab Services offers infrastructure services to help you build hybrid cloud and enterprise IT. Lab Services consultants collaborate with clients on-site, offering deep technical expertise, valuable tools and successful methodologies - and help clients solve business challenges, gain new skills and apply best practices.

Roger Bales

Global IBM Z & LinuxONE Services Practice Leader


Is Your IBM FlashSystem Up to Date?

2 min read


Aubrey Applewhaite, Consulting IT Specialist - Storage

IBM Systems Lab Services’ FlashSystem Health Check helps you monitor the health and configuration of your IBM FlashSystem.

Customers often install an IBM FlashSystem, leave it running and don’t think any more about it until the next update cycle or until a problem arises. You wouldn’t do this to the car that you rely on every day, so why do it to your FlashSystem storage?

What is the FlashSystem Health Check?

The FlashSystem Health Check is an IBM Systems Lab Services offering that provides you with a report of the current state of your FlashSystem. We check for “good practice” configuration, anomalies and any outstanding errors. Once we check your FlashSystem, we produce a report with the findings, things to consider and suggestions or recommendations. This puts you in the position to make any adjustments, corrections or fixes to improve the operation of your FlashSystem and storage environment as a whole.

Each individual FlashSystem Health Check is different because every customer has their own unique needs and requirements, and each FlashSystem is unique once it is configured. The FlashSystem Health Check process is done in the same manner every time, allowing you to easily compare FlashSystems if you have multiple devices.

A FlashSystem has built-in monitoring that will report items like hardware errors, faulty or missing cables and some configuration errors. However, it is possible to simply have a poorly configured FlashSystem and be unaware that this is the case. In our health check, we look for anything that is misconfigured, not configured as would normally be expected, in an error state or anything else that could cause a potential issue in the immediate or short-term future.

The following are some examples of things that we evaluate:

  • Encryption keys: How many USB drives are there? Where are they located? Are they in a safe place?
  • Host-to-volume mapping: Are all clustered hosts mapped to the same volumes using the same SCSI ID?
  • FlashCopy: Do FlashCopy volumes have the same preferred node as their source volumes?
  • Volumes: Are volumes spread evenly between nodes and IO groups?

The FlashSystem Health Check will look at all these configuration parameters (and other aspects), but if you identify a specific problem that needs to be investigated, we can create a health check that is tailored to focus on whatever you need.
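As an illustration of the kind of rule such a check applies (a hypothetical sketch, not the actual Lab Services tooling), a script could flag volumes that are unevenly spread across I/O groups:

```python
from collections import Counter

def check_volume_spread(volumes, tolerance=0.1):
    """Flag I/O groups holding more than their even share of volumes.

    volumes: list of dicts like {"name": ..., "io_group": ...}
    tolerance: allowed fraction above a perfectly even share.
    The rule and field names are illustrative assumptions only.
    """
    counts = Counter(v["io_group"] for v in volumes)
    even_share = len(volumes) / len(counts)
    findings = []
    for group, n in sorted(counts.items()):
        if n > even_share * (1 + tolerance):
            findings.append(
                f"io_group {group} holds {n} volumes (even share ~{even_share:.0f})"
            )
    return findings

# Example: nine volumes on one I/O group, one on the other -- clearly skewed.
vols = [{"name": f"vol{i}", "io_group": 0} for i in range(9)] + \
       [{"name": "vol9", "io_group": 1}]
print(check_volume_spread(vols))  # → ['io_group 0 holds 9 volumes (even share ~5)']
```

A real health check compares many such rules against the system's configuration dump and collects the findings into the report described above.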

As part of our FlashSystem Health Check, we do not evaluate the FlashSystem’s performance as this is somewhat subjective and requires a deep understanding of the requirements of the FlashSystem and how it integrates into your whole environment. Our health check essentially focuses on the FlashSystem’s configuration.

Learn more

To learn more about how we can help you with your FlashSystem Health Check, contact us at


Aubrey Applewhaite

Consulting IT Specialist - Storage


Announcing Citrix Virtual Apps and Desktops (CVAD) on IBM Cloud Virtual Private Cloud (VPC)

2 min read


Akil Bacchus, Solutions Tech Lead
Prem D'Cruz, Product Manager

Citrix Virtual Apps and Desktops for IBM Cloud is now available on IBM Cloud Virtual Private Cloud (VPC) powered by Intel Xeon servers.

This enhancement unlocks opportunities for part-time workloads in the cloud, provides fast machine provisioning in minutes and dynamically deploys applications and desktop machines with Autoscale. Existing Citrix customers hosting virtual desktop infrastructure (VDI) on-premises can migrate or burst to VPC and begin delivering Desktop-as-a-Service (DaaS). Administrators can choose any VPC instance profile in a machine catalog to provide persistent and non-persistent desktop and application experiences.

Greenfield opportunities are another excellent cloud entry point. They require Digital Workplace leaders to build a DaaS business case that clearly articulates the return on investment (ROI) based on three key considerations:

  • Cost savings by centralizing applications and data for risk reduction and compliance controls
  • Improved operational efficiency by enabling quick desktop replacement, better desktop performance and business continuity
  • Increased revenue with time-to-market gains by enabling device choice, working anywhere and onboarding new users in minutes

What is CVAD for IBM Cloud?

Citrix Virtual Apps and Desktops (CVAD) for IBM Cloud is a self-managed DaaS cloud pattern with built-in automation for sizing, ordering and provisioning. Citrix and IBM worked together to give customers the best user experience and ease of deployment. We will continue to support CVAD on Classic infrastructure with Bare Metal Servers and VMware Solutions. The graphic below shows the user interface tile from the IBM Cloud catalog with the two infrastructure options, VPC being the new release:

Benefits of DaaS

With the recent increase in remote and hybrid work, DaaS plays a pivotal role by providing virtualized desktops and applications to workers entirely from a remotely hosted experience, such as a public cloud. It eliminates the need for businesses to purchase the physical infrastructure — instead, functioning through subscription and usage-based payment structures. DaaS drives risk reduction, enhances flexibility and improves productivity by providing the following benefits:

  • Enable employees to work at multiple locations — home, office and on the road — and support Bring Your Own Device (BYOD) policies.
  • Streamline compliance and security audits for PCI, HIPAA, GDPR and SOC.
  • Quickly onboard new users like developers and testers, contractors, seasonal workers and merger and acquisition (M&A) employees.
  • Ensure business continuity and resolve performance and support issues with your current VDI implementation.
  • Rapidly scale to meet additional workloads and allow employees to be functional within minutes.

It is difficult and time-consuming to stand up VDI infrastructure in the cloud to implement DaaS. IBM has partnered with Citrix to automate and accelerate the deployment of market-leading DaaS solutions like CVAD while using IBM’s enterprise-grade cloud infrastructure.

Learn more

Akil Bacchus

Solutions Tech Lead

Prem D'Cruz

Product Manager


Low-Code vs. No-Code: What’s the Difference?

7 min read


IBM Cloud Education, IBM Cloud Education

Low-code and no-code are two new software development solutions — how do they compare?

The demand for hyperautomation and IT modernization has grown, but enterprises have been struggling to align with these trends because of the current limited availability of developer talent. Many IT projects get relegated to the “pending” file due to a shortage of resources with specialized technical skills. As a result, operational inefficiencies continue to exist and time-to-market — a crucial factor for businesses to remain competitive — is compromised.

To address these challenges, low-code and no-code software development solutions have emerged as viable and convenient alternatives to the traditional development process.

What is low-code?

Low-code is a rapid application development (RAD) approach that enables automated code generation through visual building blocks like drag-and-drop and pull-down menu interfaces. This automation allows low-code users to focus on the differentiator rather than the common denominator of programming. Low-code is a balanced middle ground between manual coding and no-code, as its users can still add custom code on top of the auto-generated code.

Examples of applications that lend themselves to low-code development include business process management platforms, website and mobile app development, cross-department tools like appraisal management software, integration with external plugins and cloud-based next-gen technologies, such as machine-learning libraries, robotic process automation and legacy app modernization.

Jamil Spain has a few great videos on low-code and no-code that we'll include for a deeper dive on the subject.

What is no-code?

No-code is also a RAD approach and is often treated as a subset of the modular plug-and-play, low-code development approach. While in low-code there is some handholding done by developers in the form of scripting or manual coding, no-code has a completely hands-off approach, with 100% dependence on visual tools.

Examples of applications suitable for no-code development include self-service apps for business users, dashboards, mobile and web apps, content management platforms and data pipeline builders. No-code is ideal for quick-to-build standalone apps, straightforward UIs and simple automations, and it is used in calendar planning tools, facility management tools and BI reporting apps with configurable columns and filters.

Low-code and no-code automation

A low-code application platform (LCAP) — also called a low-code development platform (LCDP) — contains an integrated development environment (IDE) with built-in features like APIs, code templates, reusable plug-in modules and graphical connectors to automate a significant percentage of the application development process. LCAPs are typically available as cloud-based Platform-as-a-Service (PaaS) solutions.

A low-code platform works on the principle of lowering complexity by using visual tools and techniques like process modeling, where users employ visual tools to define workflows, business rules, user interfaces and the like. Behind the scenes, the complete workflow is automatically converted into code. LCAPs are used predominantly by professional developers to automate the generic aspects of coding to redirect effort on the last mile of development.
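As a toy sketch of that behind-the-scenes conversion (illustrative only, not any particular LCAP's format), a declarative workflow definition can be interpreted step by step as code:

```python
# The "platform" maps visual step names to code behind the scenes.
# Step names and operations here are invented for illustration.
OPERATIONS = {
    "strip": str.strip,
    "uppercase": str.upper,
    "greet": lambda s: f"Hello, {s}!",
}

def run_workflow(workflow, value):
    """Interpret a visually defined workflow as a sequence of code steps."""
    for step in workflow:
        value = OPERATIONS[step](value)
    return value

# A workflow a user might assemble by drag-and-drop:
workflow = ["strip", "uppercase", "greet"]
print(run_workflow(workflow, "  ada "))  # → Hello, ADA!
```

Real platforms generate or execute far richer artifacts (UIs, data models, integrations), but the principle is the same: the user arranges named building blocks, and the platform supplies the code each block stands for.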

Examples of such automation platforms include low-code application platforms, intelligent business process management suites, citizen development platforms and other such RAD tools.

In a no-code development platform (NCDP) — also sometimes called a citizen automation and development platform (CADP) — all code is generated through drag-and-drop or point-and-click interfaces. NCDPs are used by both professional developers and citizen developers (non-technical users or non-developers with limited or no coding skills).

Low-code and no-code: Similarities and benefits

Both low-code and no-code are similar in that they aim to abstract the complex aspects of coding by using visual interfaces and pre-configured templates. Both development platforms are available as PaaS solutions and adopt a workflow-based design to define the logical progression of data. They share many benefits due to the common approach:
  • Democratization of technology: Both low-code and no-code solutions are built with the objective of empowering different kinds of users. This reduces dependency on hard-to-hire, expensive specialists and technologists.
  • Productivity enablers: Low-code/no-code increases the velocity of development, clearing IT backlogs, reducing project timelines from months to days and facilitating faster product rollouts.
  • Quick customer feedback at less risk: Prior to investing significant resources in a project, low-code/no-code allows developers to get feedback from customers by showcasing easy-to-build prototypes. This shifts the go/no-go decision earlier in the project schedule, minimizing risk and cost.
  • More build than buy: While commercial-off-the-shelf (COTS) products can be expensive and have a one-size-fits-all approach, low-code and no-code incentivize in-house customization, shifting the needle towards “build” in the buy vs. build dilemma.
  • Architectural consistency: For crosscutting modules like logging and audit, a centralized low-code/no-code platform ensures design and code consistency. This uniformity is beneficial while debugging applications, too, as developers can spend their time troubleshooting issues rather than understanding frameworks.
  • Cost-effectiveness: Low-code/no-code is more cost-effective than from-scratch manual development due to smaller teams, fewer resources, lower infrastructure costs and lower maintenance costs. It also results in better ROI with faster agile releases.
  • Collaboration between business and IT: Business and development teams have traditionally shared a push-pull relationship. However, with more business users participating in development through the low-code/no-code movement, there is better balance and understanding between the two seemingly different worlds.

How is low-code different from no-code?

There is much overlap between the two approaches (exacerbated by low-code and no-code platform vendors’ confusing positioning) despite subtle feature differences between their solutions. However, there are important differences to consider:

Target users

Low-code is aimed at professional developers to avoid replicating basic code and to create space for the more complex aspects of development that lead to innovation and richness in feature sets. By automating the standard aspects of coding and adopting a syntax-agnostic approach, it enables developer reskilling and talent pool expansion.

No-code, on the other hand, is aimed at business users who have vast domain knowledge and may also be slightly tech-savvy but lack the ability to write code manually. It’s also good for hybrid teams with business users and software developers or small business owners and non-IT teams, such as HR, finance and legal.

Use cases

No-code lends itself well to front-end apps that can be quickly designed by drag-and-drop interfaces. Good candidates are UI apps that pull data from sources and report, analyze, import and export data.

Also, no-code is ideal for replacing monotonous administrative tasks like Excel-based reports used by business teams. Such projects don’t get prioritized easily by IT but could be a lifesaver for business teams. It’s also well-suited for internal apps that do not carry the burden of extensive functionalities and for small-scale business apps with less development budget.

Low-code, with an exhaustive component library, can be extended to applications with heavyweight business logic and scaled to an enterprise level. Also, to integrate with other apps and external APIs, connect to multiple data sources and build systems with security guardrails that need the IT lens, low-code is a better alternative than no-code.


Speed

Low-code requires more training and time to onboard, develop and deploy, as it offers more opportunities for customization. But it’s still considerably faster than traditional development.

No-code, being highly configurable and all plug-and-play, takes less time to build in comparison to low-code. Testing time is also reduced because there is minimal risk of potential errors normally introduced by manual coding. Here, it’s all about ensuring the configurations and data flow are set up correctly.

Open vs. closed systems

Low-code is an open system that allows its users to extend functionality through code. This means more flexibility and reusability. For instance, users can create custom plugins and data source connectors to fit their use cases and reuse them later. But it’s worth noting that newer upgrades and patches of the LCAP need to be tested with the manually introduced code.

No-code is a more closed system that can only be extended through templated feature sets. This means restricted use cases and access to boilerplate plugins and integrations, but it’s easier to ensure backward compatibility, as there is no manually written code that could break future versions of the NCDP.

Shadow IT risk

While this has been a concern for both low-code and no-code platforms, the risk of shadow IT is higher with no-code, which requires little or almost no intervention from IT teams. This could result in a parallel infrastructure that’s not closely monitored, leading to security vulnerabilities and technical debt.

However, because low-code still falls under the purview of IT teams, it can help ensure better governance and control.

Architectural range

Low-code scores over no-code in its support for scalability and cross-platform compatibility. Adding custom plugins and custom code opens up the possibility of a wider range of implementations and working with multiple platforms.

No-code has less extensibility and limited potential in connecting to legacy systems or integrating with other platforms. Therefore, it addresses a narrow set of use cases and has a reduced ability to scale.

When to use low-code vs. when to use no-code

Both low-code and no-code have their individual strengths. The similarities between the two don’t make this an easy decision either. The best way forward is to assess the current requirements and make a choice accordingly.

Here are a few questions to determine user needs:

  • What are the goals of using the low-code or no-code software?
  • Who are the users? What’s their programming expertise?
  • What is the scope and scale of the problem to be solved?
  • Does the build require custom integrations with external and internal applications?
  • What is the turnaround time needed?
  • How much control do users want to retain over code?
  • Does the application need to deal with confidential data or factor in security considerations?

The two key questions here are: What is the application for, and who is going to build it? While both these are important questions, it’s better to use a goal-centric approach than a user-centric approach — that is, the what is more important than the who.

If the use cases are complex, require integrations with other on-premises or cloud apps, have customer-facing or business-critical requirements or need to be deployed across the enterprise, low-code is the preferred option. In this case, even if users do not have the requisite expertise in programming languages, partnerships with IT teams or training programs can resolve the challenges.

Low-code and no-code with IBM

Working with IBM, you’ll have access to low-code and no-code intelligent automation capabilities that allow subject matter experts to automate processes without depending on IT.  

No-code solutions:

Low-code solutions:

  • Empower your business users to build their own bots using IBM Robotic Process Automation, as Lojacorr Network did, boosting process execution efficiency by 80% without hiring staff and without prior programming language experience.
  • Integrate applications using simple, web-based tooling with IBM AppConnect.
  • Set up file processing pipelines that interconnect business units and external partners with Aspera Orchestrator.
IBM Cloud Education


IBM Tech Now: May 23, 2022

1 min read


Ian Smalley, Content Director and Editor

X-Force Red Cloud Testing Services, IBM API Connect-aaS on AWS and the TrustRadius Awards.

Welcome to IBM Tech Now, our video web series featuring the latest and greatest news and announcements in the world of technology. Make sure you subscribe to our YouTube channel to be notified every time a new IBM Tech Now video is published.

IBM Tech Now: Episode 54

This week, we're focusing on Think Broadcast 2022.

Stay plugged in

You can check out the IBM Cloud Blog for a full rundown of all cloud news, announcements, releases, and updates, and hit the subscribe button on our YouTube channel to see all the new IBM Tech Now episodes as they're published.

Missed the previous episodes? Check out the full playlist.

Have feedback, comments, suggestions, or ideas? We'd love to hear from you, so leave us a comment on the video.

Ian Smalley

Content Director and Editor


5 Intelligent Automation Strategies for the IT Talent Shortage

7 min read


Mandy Long, VP, IBM Automation

Five strategic recommendations to help top IT talent thrive and become more efficient and scalable.

IT is facing a serious talent shortage. Over 70% of IT executive respondents surveyed by IDC in Q1 2022 report that the talent shortage is an urgent concern that is currently slowing progress toward technology modernization and transformation goals. That shortage isn’t being addressed any time soon. According to the January 2022 Gartner IT Spend Forecast, 50% of tech vacancies have been open for six months, and this trend is expected to continue.

IT staffing shortages combined with high employee turnover rates result in institutional knowledge loss, manual process breakdowns and unplanned downtime. That delays new products from getting to market and leads to less satisfied customers. Worse yet, the burden of daily operations falls on remaining IT employees, increasing burnout and further perpetuating the core problem.

Intelligent automation can prevent and offset staffing issues by improving the employee experience. After all, IT excellence isn’t just about finding top talent. It’s about creating a culture and employee experience that keeps your top talent happy. Your best employees entered the field to innovate, not to be mired in reactive responsibilities and consumed by operational overhead. Give them the tools they deserve with IT solutions that automate menial tasks, break down silos and support collaboration.

These five strategic recommendations and associated technologies can help create an environment where top tech talent thrives, allowing IT talent to be more efficient and scalable.

1. Minimize manual allocation and guesswork with automated application resource management

Application resource management software is designed to continuously analyze every layer of applications’ resource utilization to ensure applications get what they need to perform, when they need it.

How it improves IT employee experience: The traditional process of application resource allocations is full of manual, time-consuming number crunching, and it typically involves guesswork and over-provisioning. By automating many of the often-reactive tasks burdening your team (such as container-sizing and resource decisions), IT teams can reclaim time from planning sessions for building revenue-generating functionality.

How it benefits the business: Depending on the strength of the solution, application resource management software can safely reduce cloud cost by up to 33% while optimizing application performance. By stitching together your entire IT supply chain from application to infrastructure, you can break down silos and move away from manual allocations and guesswork and toward real-time, dynamic and trustworthy resourcing across multicloud environments.

Real-world ROI: Apparel company Carhartt began using IBM Turbonomic Application Resource Management to help its hybrid cloud infrastructure handle dramatic spikes in demand. Using the software, the IT team clarified the resource relationships between its hardware, virtualization and application performance management (APM) solution, stitching together the company’s complete application stack. The software also helped identify opportunities for improvement, enabling Carhartt IT to prevent performance issues during the holiday season and beyond, driving record sales.

Carhartt has fully automated virtual machine (VM) placement, helping to improve overall performance while reducing resource consumption by 15%. Carhartt IT has also been using IBM Turbonomic to optimize cloud deployments, finding they could improve the efficiency of their Microsoft Azure cloud environment by 45%, while assuring workload performance.

“Turbonomic’s automated actions not only improve performance, but they free up the team’s resources. The team now has more time to innovate rather than focusing on keeping the lights on.” — Gary Prindle, Senior Systems Engineer, Carhartt

2. Quickly understand the impact of code changes with observability and immediate, granular context

APM and observability software is designed to provide deep visibility into modern distributed applications for faster, automated problem identification and resolution. The more observable a system, the more quickly and accurately you can navigate from an identified performance problem to its root cause, without additional testing or coding.

How it improves IT employee experience: Performance testing is critical to successful application development, but it often requires significant manual effort from employees, such as testing the load time for a specific endpoint. Having a tool that can monitor the application ecosystem with continuous real-time discovery empowers employees to identify and fix issues faster and accelerate products to market. It also helps enhance collaboration. All teams can gain transparency into the direct source of a problem with data-driven context, thereby reducing time spent on debugging and root cause analysis.

How it benefits the business: Ultimately, the right observability solution helps the business bring better products to market faster. And if you’re bringing more services to market faster than ever — and in the process, deploying new application components — traditional APM’s once-a-minute data sampling can’t keep pace. With an enterprise observability solution, you can better manage the complexity of modern applications that span hybrid cloud landscapes — especially as demand for better customer experiences and more applications impacts business and IT operations.

Real-world ROI: Vivy is an intermediary between patients and their healthcare providers, so it’s vital that their patient-facing application is always available. As the application gained popularity, receiving more than 200 million requests per second, Vivy’s developers realized that some services were running slowly. IBM Observability with Instana raises a single incident in response to slow service or problematic requests — including all corresponding events — and identifies the most probable root cause. Armed with this actionable data, Vivy’s engineers can quickly assess the situation and resolve issues. With this software, Vivy reduced mean time to repair (MTTR) by 66%, from up to three days to one day or less.

“Instana was fast and easy to deploy, and with zero configuration, it was able to discover all of our services and their corresponding dependencies.” — Kirill Merkushev, Head of Backend, Vivy

3. Eliminate fire drills using proactive incident management

Incident management is the critical process that IT teams follow to respond to service disruption or unplanned events. With proactive incident management tools, organizations can reliably prioritize and resolve incidents faster, offering better service to users.

How it improves IT employee experience: IT needs to quickly analyze, correlate and learn from its operational and unanticipated events to better prepare for disruptions. But time spent validating false alerts and managing huge volumes of data leads to employee fatigue. Proactive incident management software can help eliminate up to 80% of employee time wasted on false positives, allowing teams to reclaim time for proactively solving the real issues.

How it benefits the business: You can proactively improve service quality with data, minimize potential costly downtime and improve customer experience by minimizing event noise and correlating a vast amount of unstructured and structured data in real-time.

Real-world ROI: Electrolux needed to efficiently cut through the network noise and identify the tasks that maintain successful operations. “In one year,” says Joska Lot, Electrolux’s Global Solution Service Architect, “we fix the same type of issue 1,000 times. And we’ve had people spending one hour [per instance] managing these activities manually.” Using IBM Cloud Pak for Watson AIOps, resolution times are now one hour, not three weeks. By automating a menial task that consumes 1,000 hours a year, operators’ expertise can be applied to more valuable, higher-level tasks, such as identifying new correlation criteria to feed to the AIOps solution or refining rules and actions based on local conditions.

“We see about 100,000 events per day. It’s so important in this huge ocean to identify exactly the drop of venom that you have to remove to save your life.” — Joska Lot, Global Solution Service Architect: Monitoring and Events Management, Electrolux AB

4. Optimize uptime and spend safely in real-time using application resource management

Application resource management software helps ensure applications get what they need to perform. Beyond saving teams from manual provisioning, it can also help organizations save on cloud and infrastructure costs.

How it improves IT employee experience: With applications running on autopilot, IT teams can shift their energy to innovation and reclaim time to drive better customer experiences.

How it benefits the business: By giving applications what they need, when they need it (and nothing more), application resource management tools can assure performance and save cloud and infrastructure costs.

Real-world ROI: Providence, an organization that provides healthcare for the poor and vulnerable, faced a serious budget issue. The COVID-19 pandemic intensified the need to reduce waste and maintain cost efficiency. Using IBM Turbonomic, Providence achieved more than USD 2 million in savings through optimization actions while assuring application performance, even during peak demand.

“Instead of it being a two- or three-year journey for people to start to conceptualize that cloud is elastic, we showed how we could use the cloud to better manage costs and performance.” — Bryan de Boer, Executive Director, Providence

5. Automate license compliance (and more) with license and resource management tools

Software licensing management tools help you track and evaluate how software is being handled across the company to ensure licenses are being appropriately managed. This helps optimize your licensing system and boost revenue.

How it improves IT employee experience: The complexity of hybrid IT environments often requires software, hardware and cloud solutions from a variety of vendors. Managing these solutions burdens scarce IT resources with the responsibility of correlating insights across platforms, controlling license costs and optimizing resource investments — not to mention maintaining license compliance to avoid penalties and reduce security exposures. License and resource management solutions automate the manual tasks of software license and resource optimization, freeing IT talent to proactively right-size the software portfolio.

How it benefits the business: With license and resource management tools, you can stop over-allocating resources to support license workloads, avoid end-of-service outages, reduce security vulnerabilities with improved version management and help mitigate the risk of penalties from software non-compliance to minimize surprise billings.

Real-world ROI: Solutions like Flexera One with IBM Observability can help reduce time spent researching and validating IT asset data by 60%, decrease the effort spent managing external audits by 80% and reduce software license spend through rationalization by 2%, according to a study conducted by Hobson & Company.

“The value of Flexera IT Asset Management is immediate and demonstrable.” — Hobson & Company

Get started with intelligent automation solutions

Your technology should work for your people, not the other way around. Streamline your IT operations with intelligent automation so you can free your top talent to focus on innovating and engaging in meaningful work. Less time spent on operational overhead improves overall employee experience and simplifies tasks like debugging, application resource allocation and incident resolution.

Learn more about how IBM’s intelligent automation solutions can help you address the IT talent shortage:

Mandy Long

VP, IBM Automation


IBM Cloud App Configuration: A Deep Dive on Use Cases

3 min read


Josephine Justin, Architect
Srikanth Murali, Architect

Exploring some of the most common use cases for IBM Cloud App Configuration.

IBM Cloud App Configuration is a centralized feature-management and configuration service on IBM Cloud. App Configuration helps developers progressively deliver features in order to ship code faster and reduce the risk of potential failures. It also offers SDKs for Go, Node.js, Java, Python and Kotlin that can be integrated with your applications.

Examples of IBM Cloud App Configuration use cases

IBM Cloud App Configuration supports various use cases for feature management and configuration management. This blog post provides a list of supported use cases.

Progressive delivery of features

Progressive delivery rolls out new features gradually to limit the potential negative impact. It combines software development and release practices to deliver with control. App Configuration helps roll out features in a regulated way by toggling features on and off to control application behavior. See here for details.
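
Conceptually, the toggle works like this minimal sketch. The in-memory flag store and names are illustrative, not the App Configuration SDK's actual API:

```python
# Minimal in-memory feature-flag store; a real SDK would fetch flag state
# from the service, but the gating pattern is the same.
flags = {"new-checkout": False}

def checkout(cart_total: float) -> str:
    if flags.get("new-checkout", False):
        return f"new flow: total={cart_total:.2f}"   # freshly shipped code path
    return f"legacy flow: total={cart_total:.2f}"    # existing, proven path

before = checkout(42.0)
flags["new-checkout"] = True    # flip the toggle centrally; no redeploy needed
after = checkout(42.0)
```

The new code path ships dark and is activated (or deactivated) purely by configuration.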

Dark launch and beta testing

Dark launch releases features to a subset of users. Deploy the code to production and dark launch your features using Segmentation in App Configuration.

Test in your production systems by allowing the feature to be available only to the quality engineers. Once feedback on usability and functionality is satisfactory, release it to all your customers. See here for details.
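
A dark-launch segment rule boils down to an attribute check. The segment schema below is a hypothetical simplification of what a segmentation engine evaluates, not App Configuration's actual data model:

```python
# Illustrative segment rule: the feature is visible only to quality engineers.
SEGMENTS = {"beta-dashboard": {"attribute": "role", "values": {"qa"}}}

def is_enabled(feature: str, user: dict) -> bool:
    rule = SEGMENTS.get(feature)
    if rule is None:
        return False          # unknown feature: stay dark
    return user.get(rule["attribute"]) in rule["values"]

qa_user = {"id": "u1", "role": "qa"}
customer = {"id": "u2", "role": "customer"}
```

Widening the segment (for example, adding "beta-customer" to the allowed values) releases the feature to more users without a code change.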

Kill switches

A kill switch is a mechanism to stop something in the event of a failure to avoid wider negative impact. In the context of feature flags, a kill switch is used to disable a feature because of critical bugs raised by customers (or any other type of feedback received). Instead of having to roll back the code deployment, just disable the feature using the toggle functionality in App Configuration. See here for details.
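
A kill switch is the same toggle used defensively. This sketch (with invented names) shows a buggy feature being disabled at runtime while a safe fallback takes over, with no rollback or redeploy:

```python
# Set of features an operator has killed; in practice this state would live
# in the feature-flag service, not in process memory.
kill_switched = set()

def recommendations(user_id: str) -> list:
    if "ml-recs" in kill_switched:
        return ["bestsellers"]          # safe fallback while the bug is fixed
    return [f"personalized-for-{user_id}"]

ok = recommendations("u42")
kill_switched.add("ml-recs")            # operator flips the kill switch
degraded = recommendations("u42")
```

Users keep getting a working (if degraded) experience while the team fixes the feature behind the flag.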

Canary/ring deployments

Canary or ring deployments are a strategy to release features incrementally to a subset of users. App Configuration supports phased rollout to enable incremental release of features to a subset of users or devices. See here for details.
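
Phased rollouts are commonly implemented by deterministically bucketing each user into a percentage band. This illustrative sketch shows the general technique, not App Configuration's internal algorithm:

```python
import hashlib

def in_rollout(feature: str, entity_id: str, percentage: int) -> bool:
    """Deterministically bucket an entity into 0..99 and compare to the rollout %."""
    digest = hashlib.sha256(f"{feature}:{entity_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

# Roughly 10% of users land in the canary ring; the same user always gets
# the same answer, so their experience is stable across requests.
enabled = [u for u in (f"user-{i}" for i in range(1000)) if in_rollout("dark-theme", u, 10)]
```

Raising the percentage from 10 to 50 to 100 widens the ring without changing which users were already enabled.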

Offline support

Enabling offline mode lets you evaluate feature flags or properties when the application is running in an air-gapped environment. The App Configuration SDK supports a bootstrap file for use in highly secure environments like FedRAMP-compliant systems. See here for details.
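
The idea of a bootstrap file can be sketched as follows; the JSON schema here is invented for illustration and is not the SDK's actual file format:

```python
import json
import os
import tempfile

# A bootstrap file snapshots flag state so the app can evaluate flags
# with no network access at all.
bootstrap = {"features": {"dark-theme": {"enabled": True}, "new-checkout": {"enabled": False}}}

path = os.path.join(tempfile.mkdtemp(), "bootstrap.json")
with open(path, "w") as f:
    json.dump(bootstrap, f)

def load_flags(bootstrap_path: str) -> dict:
    # In an air-gapped deployment this file ships alongside the application.
    with open(bootstrap_path) as f:
        data = json.load(f)
    return {name: spec["enabled"] for name, spec in data["features"].items()}

flags = load_flags(path)
```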

Centralized configuration control

Configuration management represents a single source for configuration items and needs an effective way of managing changes, access control and auditability. Create and manage properties in App Configuration to use them in your infrastructure or in your code, and access them using the Razee plugin or the CLI. See here for details.

Configuration as Code

Configuration as Code (CaC) separates the configuration from the code and maintains the configuration in files in a repository. App Configuration helps you export the configuration and store it in configuration files. See here for details.

Faster incident management

Feature flags help prevent issues using kill switches and also help to reduce the Mean Time to Respond (MTTR). Dynamically enable diagnostic traces across your applications or microservices using a feature flag to quickly debug any customer incident. See here for details.
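
Gating diagnostic verbosity behind a flag can be sketched like this; the flag plumbing is omitted and the function name is hypothetical:

```python
import logging

logger = logging.getLogger("orders")
logging.basicConfig(level=logging.INFO)

def apply_trace_flag(trace_enabled: bool) -> None:
    # Flip verbosity at runtime instead of redeploying with a new log config.
    logger.setLevel(logging.DEBUG if trace_enabled else logging.INFO)

apply_trace_flag(False)
normal_level = logger.level      # 20 == logging.INFO
apply_trace_flag(True)           # incident declared: turn on deep diagnostics
incident_level = logger.level    # 10 == logging.DEBUG
```

Once the incident is resolved, flipping the flag back restores normal verbosity, keeping steady-state log volume (and cost) down.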

Toolchain integration

Integrate App Configuration into your pipelines to apply specific feature flags or properties to the environment or to trigger properties of the pipeline. See here for details.

Release features across clusters in multicloud deployments

Keeping features and properties up to date across all clusters in real-time is key to multicloud management. App Configuration supports the Razee plugin, which helps templatize and control deployments across Kubernetes clusters. See here for details.

Automate feature flag deployments

Terraform is an open-source project that lets you specify your cloud infrastructure resources and services by using the high-level HashiCorp Configuration Language (HCL). App Configuration helps automate your feature flags in a multicloud deployment. See here for details.

Infrastructure as Code

Infrastructure as Code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files. IaC tools have conditional logic to turn parts of the infrastructure on and off. Using feature flags in IaC allows you to configure and build infrastructure dynamically based on environments. See here for details.

Josephine Justin


Srikanth Murali



Change3 Builds and Scales its World-Changing Suite of Marketing Technologies on IBM Cloud

4 min read


Susan Martens, Global Managing Director, Ingram Micro

Explore Change3’s journey to find the right fit with IBM Cloud.

In building a growth initiative to double the company’s revenue every year for the next 10 years, Kneko Burney — CEO & Chief Creative of Change3 — set out on an ambitious plan for her business.

With aspirations to scale her business globally and exponentially increase the number of users accessing their marketing applications, Kneko chalked out an aggressive plan for her company. To expand the proprietary marketing technologies of her core business, however, she needed a more secure and reliable cloud computing solution.

Who is Change3?

Change3 is a full-service marketing and lead generation technology company headquartered in Scottsdale, AZ, providing exceptional results to its technology clients with the help of seasoned marketing professionals in 12 countries. They offer creative, design, development and outbound services with a focus on technology underpinned by a mission to help grow their customers’ businesses through the delivery of exceptional services.

Their core business has been building and hosting custom marketing applications and websites serving small and large organizations around the world. The range of digital and marketing solutions Change3 offers its clients are custom data services, website services, application development services, lead generation, content creation and management, market research and surveys.

As a technology company with a diverse global set of clients and offerings, they view the cloud platform as a critical piece in their overall technology stack and a core way of delivering and scaling services to their clients.

Change3 needed a powerful, scalable and secure cloud solution

Change3 manages a variety of business-critical and customized marketing applications for their clients and has used various cloud service providers over the years, but felt none of them truly fit their needs or provided the collaborative support they needed. Despite their high prices, some cloud service providers (CSPs) lacked advisory support and had persistent challenges around downtime and latency, which impacted Change3’s business applications.

As a result, Change3 sought a better alternative. They needed a powerful cloud solution that could scale seamlessly and provide the needed security and support — all while keeping costs low.

IBM Business Partners help clients succeed

Change3 started exploring public cloud options with Converge Technology Solutions, a trusted IBM Business Partner with whom they had already conducted lead-generation activities for IBM Planning Analytics. Change3 and Converge explored several public clouds but found that they did not offer the desired flexibility.

Converge walked Change3 through the IBM Public Cloud Bare Metal offering despite Change3’s concern that this offering would be out of their price range. The bare metal server option offered Change3 the opportunity to have dedicated hardware with no “noisy neighbors,” and Converge worked with IBM to arrange a 60-day proof of concept that helped Change3 realize all the benefits of the IBM Cloud solution.

Converge Technology Solutions Corp. is a software-enabled IT and Cloud Solutions provider focused on delivering industry-leading solutions and services. Converge's regional sales and services organizations deliver advanced analytics, cloud, and cybersecurity offerings to clients across various industries.

To learn more about this IBM Business Partner, visit Converge Technology Solutions, Corp.

IBM Cloud demonstrates great processing power and development efficiency

Change3 is delighted to have partnered with IBM because the benefits they have realized have exceeded expectations. Change3 has had a pleasant experience with the IBM support team and billing infrastructure and has confidently moved all their workloads to the IBM Cloud.

Change3 also launched two mission-critical applications in IBM Cloud that support their global growth initiative:

  • BeTechly: The lead-generation platform on which Change3 has already conducted more than 30,000 surveys for lead generation (and Change3 has another major lead-gen launch in their queue).
  • BeContently: An account-based marketing (ABM) platform.

“Not only has our team seen great processing power and development efficiency using the IBM Bare Metal option, but the cost has been surprisingly low.” — Kneko Burney – CEO & Chief Creative, Change3

With IBM Cloud, Change3 agrees they no longer need to worry about security, downtime or unexpected system maintenance. Relying on IBM Cloud for scale, security and performance, Change3 believes they have the right expertise to build a world-changing suite of marketing technology for tech companies.

How are IBM Cloud Bare Metal Servers different?

IBM Cloud® Bare Metal Servers are fully dedicated servers providing maximum performance and secure, single tenancy. No hypervisor means direct root access to 100% of your server resources. Customize it all with over 11 million configurations to choose from. Easily move workloads across generations of servers with customized images. IBM also lowered its bare metal server prices, on average, by 17% across the board — and included 20 TB of bandwidth, absolutely free of cost.

The following are some of the benefits that can be derived from using IBM Cloud Bare Metal Servers:

  • 100% dedicated to you: Get direct access to compute resources and hardware-level performance for the control you want and security you need.
  • Always the latest technology: Meet your performance and budget requirements with the latest generation NVIDIA GPUs and the latest x86 microarchitecture from Intel® Xeon® and AMD EPYC™.
  • Modern networking, global data centers: Get up and running quickly across 60 IBM Cloud data centers and points of presence in 9 regions and 18 availability zones. Deploy to one region or across multiple regions.
  • Pay-as-you-go: Choose the billing cycle that works for you, with on-demand hourly, monthly, and reserved options.

Check out the various services offered by Change3.

Learn more about IBM Cloud Bare Metal Servers.

Susan Martens

Global Managing Director, Ingram Micro


Domain-Driven Modernization of Enterprises to a Composable IT Ecosystem: Part 2

8 min read


Balakrishnan Sreenivasan, Distinguished Engineer

Modernizing applications and services to a composable IT ecosystem.

In Part 1 of this blog series, we saw various aspects of establishing an enterprise-level framework for domain-driven design (DDD)-based modernization into a composable IT ecosystem. Once the base framework is established, teams can focus on modernizing their applications in alignment with the framework.

Typically, teams undertake a two-pronged approach to modernize applications and services. The first step is to enumerate and scope the processes (mentioned in Part 1) and use this to conduct DDD-based event-storming sessions to identify various capability components (i.e., microservices). The second step is to decompose the applications and align them with appropriate products (within domains) and map each of the capabilities to respective capability components that help realize the capability. An iterative execution roadmap is built based on dependencies (capabilities and services), and execution typically starts with a minimum viable product (MVP). This blog post — Part 2 of the series — details the above-mentioned approach for modernizing applications to the composable model. Part 3 of the series will look at common challenges and how to prepare for organizational readiness.

Decomposing applications and services to capabilities: Overview

So far, the discussion has been about laying the groundwork to composable IT capabilities, which essentially includes organization structure alignment, process scoping by domains and a broad set of products. Next is to focus on modernizing current applications and data into composable IT ecosystem capabilities.  

The following diagram illustrates how different layers of the system are supported by different IT teams in the traditional model versus how the capabilities are built and managed in a composable model. Essentially, a monolith application is decomposed into a set of capabilities and appropriately built and managed by squads based on domain alignment:

While this model is challenging to implement, the value achieved outweighs the challenges:

  • The model provides the best alignment with business domains and the most flexible and agile IT model, driving a high degree of time-to-market improvement.
  • The product-centric model drives clarity of ownership and independence for squads, helping drive an engineering culture across the IT organization.
  • Domain alignment promotes a high degree of reuse of capabilities, reducing enterprise-wide duplication of capabilities (both application and data).
  • This model helps build deeper domain and functional skills in squads and promotes end-to-end ownership and a continuous improvement culture, which, in turn, accelerates adoption of SRE practices.

The following diagram depicts the two major areas addressed in this blog: a domain-driven design (DDD)-based application and services decomposition and the building and deployment of capabilities details:

Domain-driven decomposition of applications and services

Most enterprises have several applications and a set of application services (legacy SOAP services/APIs, integration/messaging services, etc.). These applications have evolved over time into monoliths, while the services have evolved in their own way to meet the demands of consumers. The problem most enterprises are trying to solve is containing the scope of transformation to the capabilities offered via existing applications and services, rather than driving a blue-sky approach.

The following are key steps involved in decomposing applications and services in each domain.

Step 0: Map applications and services to domains per the context and usage scenario

When there are existing applications and services in enterprises, it is important to ensure there are owners for the applications and services. Based on my experience, it is a good idea to distinguish between the personas using an application and the primary domain expected to offer that application or service to consumers. Once the primary domain associated with an application or service is identified, end-to-end consumer ownership lies with the organization that owns that domain.

Step 1: Bottom-up analysis of applications to identify and map capabilities to business process level 3 or deeper (as appropriate)

Once applications and services are mapped to primary (owning) domains, they are then decomposed into capabilities. In general, capabilities are expressed in business terms, and they mostly map to level 3 or slightly deeper.

Applications offer a set of capabilities; and in turn, capabilities could be from different domains based on the business process and bounded context alignment. As applications are decomposed to capabilities, they are also mapped to respective domains to identify who builds and manages (owns) them. While mapping capabilities to domains, it is also important to understand which of the existing services and data that the capability maps to. This is going to be critical to establish input scope (process and boundaries) guidance for event-storming workshops and identification of services.

Step 2: Event-storming and identification of capabilities (services) from business process

There are excellent articles and technique papers for domain-driven design and event storming (e.g., “Event-driven solution implementation methodology” and “DDD approach” by Jerome Boyer and team), and I suggest going through them to get a good understanding of how event-storming is done, the taxonomy followed and so on. The idea here is to ensure that the processes and capabilities that are enabled via the applications in Step 1 drive the scope for event storming. The following are key activities performed in this step:

  • Establish a set of domain events and the actors (persona or system) triggering them and identify relationships between the events (as appropriate) into flows.
  • Review events against the input scope of the event storming (from the application and service decomposition step) and align or discard them as appropriate.
  • Elaborate the event(s) as a combination of policy, data (business entity, value objects, etc.), command, business rules, actor (or persona), external systems, etc. into one or more flows.
  • Establish aggregate boundaries by analyzing the entities and values in terms of how they establish their context and identify potential services. While the aggregates are typically microservices, the data associated with them forms the bounded context.
  • One could establish user stories based on interactions between elements of each of the flows (it is important to identify user stories to completely implement the flows).
  • Iterate through the above to elaborate/refine each of the flows to such an extent that one can identify the initial set of services to build and likely capabilities to realize.
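The event-storming taxonomy described above (domain events with actors, commands and entities, grouped into aggregates) can be sketched with a few data classes. The grouping rule here, taking the first listed entity as the aggregate root, is a deliberately naive assumption for illustration; real aggregate boundaries come out of the workshop analysis, not a one-liner:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DomainEvent:
    name: str            # past-tense business fact, e.g. "InvoiceCreated"
    actor: str           # persona or system that triggers the event
    command: str         # command that caused the event
    entities: List[str]  # business entities / value objects involved

@dataclass
class Aggregate:
    name: str
    events: List[DomainEvent] = field(default_factory=list)

def group_into_aggregates(events):
    """Naive grouping: events sharing a root entity fall into one aggregate
    boundary, a candidate microservice with its bounded-context data."""
    aggregates = {}
    for ev in events:
        root = ev.entities[0]  # assumption: first entity is the aggregate root
        aggregates.setdefault(root, Aggregate(root)).events.append(ev)
    return list(aggregates.values())
```

Each resulting aggregate is a candidate microservice, and the entities inside its boundary form its bounded context, as noted in the list above.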

Step 3: Map application capabilities to services

On one side, we have a set of capabilities identified by decomposing the applications; on the other, we have a set of microservices elaborated via event storming. It is important to ensure each capability is mapped to its respective services (or aggregates) so that the capabilities (or requirements) can be realized. The detailed operations, including the data needed (e.g., Swagger definitions), are defined after this mapping based on the consumption needs of each service.

Step 4: Iteration planning

It is also important to establish capability dependencies (with regard to the data and services needed to realize them) so that the build-out of capabilities can be sequenced, each building on top of the others. In most cases the dependencies are much more complex, but this helps design the necessary coexistence solution to build and deploy capabilities.
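Sequencing capabilities on top of one another is, at its core, a topological sort of the dependency graph. A minimal sketch using Python's standard-library `graphlib` (the capability names and dependencies below are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies: capability -> capabilities it builds on.
deps = {
    "Customer Lookup": set(),
    "Create Invoice": {"Customer Lookup"},
    "Send Invoice": {"Create Invoice"},
    "Apply Payment": {"Create Invoice"},
}

# static_order() emits prerequisites before the capabilities that need them,
# giving one valid build-out sequence for iteration planning.
build_order = list(TopologicalSorter(deps).static_order())
```

The resulting order is only a starting point; in practice the sequence is bucketed into iterations and continuously refined, as described below.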

The sequenced capabilities are bucketed into a set of iterations and are continuously refined with every iteration. While establishing an iteration plan for the entire system results in waterfall thinking and heavy analysis effort upfront, a high-level roadmap is always prepared based on a group of capabilities, high-level dependencies and approximate t-shirt sizes. As iterations progress, the number of squads and capabilities developed is realigned (or accelerated) based on velocity achieved versus desired velocity.

When building an iterative incremental roadmap, one must think through coexistence because it is a foundational ingredient for success. It implies the ability for legacy and modernized capabilities to coexist, with the goal of strangling legacy capabilities over time while ensuring that the consumer ecosystem(s) of those legacy capabilities are not disrupted immediately and are given enough time to move towards the modernized domain capabilities. A well-crafted coexistence model allows uni- or bi-directional data synchronization and/or cross-leverage of not-yet-modernized functionality through wrapper APIs; achieving this needs careful architectural consideration of both functional and non-functional aspects.
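The coexistence idea can be sketched as a routing facade: calls for capabilities that have been modernized go to the new service, while everything else falls through to a wrapper around the legacy monolith, letting migration proceed capability by capability. This is an illustrative sketch of the pattern, not the article's prescribed implementation; the class and handler names are assumptions.

```python
class CoexistenceRouter:
    """Route a capability call to its modernized handler if one has been
    registered; otherwise fall through to the legacy wrapper."""

    def __init__(self, legacy_handler):
        self.legacy = legacy_handler  # wrapper API around the monolith
        self.modernized = {}          # capability name -> modern handler

    def migrate(self, capability, handler):
        """Register a modernized handler, strangling the legacy path."""
        self.modernized[capability] = handler

    def call(self, capability, payload):
        handler = self.modernized.get(capability, self.legacy)
        return handler(capability, payload)
```

As each capability is modernized, `migrate` flips its route without disrupting consumers still served by the legacy path; data synchronization between the two sides would sit behind these handlers.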

Build and deploy capabilities, services and day-2 operations

Modernizing applications and services into a product-aligned, capability-based model is about building capabilities by respective squads per the product alignment. The capabilities built by multiple product squads are composed at the experience layer (originally applications) to ensure consistency for the users.

Squads follow a typical cloud-native development model (based on DevOps and SRE practices) to build and deploy capabilities for consumption. As capabilities and services are developed, their consumption needs are validated and improved continuously (mostly with iterations).

While domain-driven design (DDD) helps identify capabilities that are common across apps, building coexistence code while incrementally modernizing capabilities results in stickiness of the modernized capability to the legacy application and data until the entire capability (including services and data) is modernized. Therefore, the premise of reusability of capabilities and capability components (microservices) needs to be calibrated and governed until the desired level of reusability is achieved. This also contributes to the complexity of day-2 operations, where one has the legacy monolith application, services and data on one side and distributed, product-led squads supporting modernized capabilities on the other.

It is important to understand that the day-2 operations model shifts considerably from the traditional monolith-based support model. Product teams collaborate to align the build-out of capabilities that need to be integrated to compose the target application, which means their iteration plans must be continuously aligned. A day-2 support model for composable applications is different because each capability is supported by its respective squad. Incident management and ITSM processes must be restructured to suit a product-and-services squad model.

Also, the tendency of teams to monitor and manage dependent capabilities (as they did in the old monolith model) must be managed through clearly articulated boundaries for capabilities and capability components. Teams must also be skilled so that they embrace cloud-native models. This is a fundamental change in the day-2 support model, and it takes a significant amount of organizational readiness to move to it.

Program management of such programs needs a multi-pronged approach to minimize cross-domain chatter, prioritization challenges and complex dependency challenges. SAFe (Scaled Agile Framework) is probably one of the best models for executing such programs. While one aspect of program management is to keep a razor-sharp focus on the applications and services being modernized and to measure "how much is modernized" on a continuous basis, another is to identify complex reusable capabilities and build them via vertically integrated teams (across products) to accelerate progress. Keeping a critical mass of squads building out the capabilities in an application-centric way (even if the capabilities are owned by other domains) is critical: it ensures knowledge of the current application and data is leveraged to the fullest, and that what is modernized has functional parity with what exists today while meeting desired SLA levels.


While domain-driven design (DDD) helps establish a disciplined approach to decomposing applications and services and to establishing an overall design well aligned with business domains and processes, it is also important to ensure purity does not get in the way of progress. "Iterate-iterate-iterate" is the mantra, and success depends on how quickly teams can build, learn and refine on a continuous basis. Success also depends on business and SME participation in the above design exercises, without which there will be a tendency to reproduce existing capabilities with minimal transformation.

If you haven’t already, make sure you read Part 1 of this blog series: “Domain-Driven Modernization of Enterprises to a Composable IT Ecosystem: Part 1 - Establishing a framework for a composable IT ecosystem.”


Balakrishnan Sreenivasan

Distinguished Engineer

