The contents of this page are copied directly from IBM blog sites to make them Kindle friendly. Some styles and sections from these pages have been removed so the page renders properly in the 'Article Mode' of the Kindle e-Reader browser. All contents of this page are the property of IBM.


Developer Tricks: Simulate Cloud Security for Local App Development

4 min read


Henrik Loeser, Technical Offering Manager / Developer Advocate

Everything you need to do to simulate the environment of a trusted profile compute resource.

Does this sound scary? Trusted profile, compute resource, access token, token conversion? I recently had to deal with these for an app deployed to Kubernetes. In this blog post, I discuss how I managed to develop and test the app on my local machine.

As a developer, I prefer to develop (including testing) and code locally. Sometimes, the code needs to interact with functionality that is only available in a certain environment. So how do we deal with this situation? Is there a workaround that allows for local tests?

Recently, I worked with trusted profiles and compute resources on IBM Cloud. Through this work, a special access token is made available in a designated compute environment. That token is then exchanged for an IBM Cloud IAM (Identity and Access Management) access token that allows you to work as a trusted profile (a special IAM identity).

In this blog post, I share the lessons learned and tricks that I applied in order to develop the app code locally. What is needed to simulate an environment of a trusted profile compute resource? 

An expired access token for a compute resource is easy to fix.

Cloud-based security vs. local development

One of the security features of IBM Cloud is the concept of a trusted profile. There are different options available on how to assume the identity of a trusted profile. Without going into details, one of them is by operating on an identified (configured) compute resource, obtaining a special access token and then turning it into an IAM (Identity and Access Management) access token. One of the supported scenarios for compute resources is a service within a namespace of a Kubernetes cluster. For more details, read my recent blog post: “Turn Your Container Into a Trusted Cloud Identity.”

Because the above compute resource is a feature of IBM Cloud security, it is only available there. So, how can I develop an app locally that uses that feature? An option that I looked into (but did not follow further) is to use extensions for my code editor, such as the Kubernetes and Remote Development extensions. However, I wanted something that really works on my machine. After looking into the entire authentication and authorization flow, I decided to simulate the compute resource and inject a real access token—a JSON web token (JWT)—into my local test environment.

Decoded and pretty-printed compute resource token.

Simulate the compute resource

The approach to simulate a compute resource and to provide a valid access token is independent of the coding style. When working with any of the IBM Cloud SDKs (software development kits), the Container Authenticator is used with Kubernetes compute resources. The documentation on Authentication and the Container Authentication (e.g., for the Node.js and Python SDKs) details how the token is obtained by the Container Authenticator and what needs to be configured: the name of the file from which a valid compute resource token can be read. On Kubernetes, the file is made available in the pod to the running app.

When working solely with the IBM Cloud API functions, your code will turn a valid compute resource token into an IAM access token using the available API function. But, first, you need to read that compute resource token from a file. So it is similar to the SDK case—a local token file needs to be read instead of the one made available in the Kubernetes pod. In Python, it may be something like this to read the file name from an environment variable TEST_TOKEN_FNAME and, when not set, use the default location:
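A minimal sketch of that file read, using the TEST_TOKEN_FNAME environment variable and the in-pod token path from this post (the helper name read_cr_token is my own, not from an IBM SDK):

```python
import os

# Default path where Kubernetes projects the compute resource token
# into the pod (the same file read later with cat).
DEFAULT_TOKEN_PATH = "/var/run/secrets/tokens/sa-token"

def read_cr_token() -> str:
    """Read the compute resource token from a file.

    The TEST_TOKEN_FNAME environment variable points to a local copy
    of the token for development; when unset, the in-pod default
    location is used, so the same code runs locally and on Kubernetes.
    """
    fname = os.environ.get("TEST_TOKEN_FNAME", DEFAULT_TOKEN_PATH)
    with open(fname, "r", encoding="utf-8") as f:
        return f.read().strip()
```

Locally, you would export TEST_TOKEN_FNAME to point at the copied token file; in the pod, the default path applies.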


IBM Cloud SDK with Container Authenticator in action.

You may wonder how a valid compute resource token can be easily retrieved and kept current. I resorted to deploying a skeleton app to the Kubernetes environment and setting up the trusted profile with my Kubernetes environment as the compute resource. Next, I connected to the Kubernetes pod and copied the token over to my local file. On the command line, with access to the Kubernetes cluster set up, you can connect to a pod named “tp-demo” like this:

kubectl exec -it tp-demo -- /bin/bash

Then, in the shell in the running container, show the token (and manually copy it over to your local file):

cat /var/run/secrets/tokens/sa-token

Another option is to use kubectl to copy over the token file from the pod to your machine:

kubectl exec tp-demo -- tar cf - /var/run/secrets/tokens | tar xf - --strip-components=4

The above uses the tar command on the pod to package up the directory with the compute resource token. Thereafter, that archive is extracted on my local machine, stripping away the first four levels in the directory hierarchy (“/var/run/secrets/tokens”). The token file (which is actually two symbolic links and a file with content) ends up in the current directory.

When developing your app, you have to repeat the copy process whenever the token expires. Typically, the token is valid for 60 minutes and is refreshed after some 45 minutes. If you are curious, check the token itself to see when it expires.

Decode and pretty print the compute resource token—a regular JSON web token (JWT)—that is stored in the file “sa-token”:

cat sa-token | tr "." "\n" | for run in {1..2} ; do read line ; echo $line | base64 -i -d | jq ; done

The above breaks up (tr) the JWT components, decodes (base64) and pretty prints them (jq)—see the screenshot above. The attribute exp holds the expiration timestamp as an integer. Simply decode such an integer like this:

date -d @1683550911

The command prints “Mon May 8 15:01:51 CEST 2023” on my machine. Another refresh is coming up pretty soon.
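The same expiry check can be scripted. Here is a small Python sketch (the helper name jwt_expiry is my own; it does no signature verification, so use it for local inspection only):

```python
import base64
import json
from datetime import datetime, timezone

def jwt_expiry(token: str) -> datetime:
    """Return the exp claim of a JWT as a UTC datetime.

    Only decodes the payload; the signature is NOT verified.
    """
    payload_b64 = token.split(".")[1]
    # base64url payloads are unpadded; restore the padding first
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return datetime.fromtimestamp(claims["exp"], tz=timezone.utc)
```

Calling it with the content of the sa-token file tells you whether a fresh copy is needed.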


It is great to have more enhanced options for cloud security. As a developer, I prefer local over remote development, so it is ideal if I can continue to develop and test apps locally. This is also the case for apps utilizing trusted profiles with Kubernetes-based compute resources. It does not require much, only some technical insight, to simulate cloud security for local development: copy over the file with the access token and make the app use it.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik), Mastodon, or LinkedIn.

Henrik Loeser

Technical Offering Manager / Developer Advocate


Using Collaboration (not Migration) to Modernize Your Mainframe Applications with the Cloud

4 min read


Jason Bloomberg, Managing Partner at Intellyx

A look at the various mainframe/cloud collaboration scenarios to see how best to provide value, save money and push digital transformation.

This is part two in a five-part series on mainframe modernization.

When you hear the phrase “modernize your mainframe applications with the cloud,” the first thing to come to mind is likely going to be migration. Not so fast. When you cut through the misconceptions, migrating off the mainframe is rarely the best approach.

Today’s mainframes are blisteringly fast, remarkably scalable and so unbelievably reliable that many mainframes in operation today have been running for decades with absolutely no downtime whatsoever.

That’s right. Not five 9’s. Not six 9’s. All the 9’s—we’re talking 100% uptime.

It’s no wonder two-thirds of the Fortune 100 rely upon big iron for their mission-critical transactions. Banks, airlines, insurance companies, retailers—the list goes on. The global economy depends on mainframes, every minute of every day. But (and you knew there was a but), these same enterprises are in the midst of digital transformations. New economic and customer pressures are forcing them to innovate and rethink how they leverage technology to deliver customer value. Such transformations invariably include the cloud, and when they do, modernizing the mainframe soon enters the conversation.

While AWS, Microsoft Azure and other major cloud providers would love for all the enterprise workloads to run on their clouds, they realize that most mainframe customers would be better served by a mainframe/cloud collaboration strategy. IBM—the sole remaining mainframe vendor and a cloud provider in its own right—also champions bringing mainframes and clouds together to meet the modern digital needs of enterprises that have been depending on the mainframe platform for so many years.

Still think mainframe application modernization necessarily means migration? Let’s take a look at the various mainframe/cloud collaboration scenarios to see how best to provide value, save money, and most of all, achieve the customer-centric goals of digital transformation.

Placing mainframe modernization into context

For some enterprises with mainframes, modernization may not be a priority. Following the adage “if it ain’t broke, don’t fix it,” some mainframe applications continue to chug away, providing as much value today as they did when they were new.

Leaving such apps alone may be the best business decision, but might also still slow down the organization’s ability to innovate to meet changing customer needs. It’s important to weigh the pros and cons of making any change vs. maintaining the status quo.

In other situations, mainframe application modernization initiatives involve updating or replacing older applications on the mainframe with new or reworked mainframe applications. More likely than not, however, mainframe modernization requires some combination of mainframe and cloud-based capabilities working together.

Refactoring mainframe applications to run in the cloud is a modernization strategy that is sometimes considered. For example, COBOL can be converted to Java, either to run as-is in the cloud or as part of a cloud-native re-architecture initiative.

Reducing mainframe MIPS costs is often a business motivation for shifting some or all applications to the cloud. “Customers want to do more with less,” explains Steven Steuart, AWS WW GTM Mainframe. “Our customers can transform with or to AWS (for example, to reduce MIPS consumption), process on mainframe, and consume on AWS.”

Separating application and data concerns

Enterprises use mainframes to both run applications and store data. The modernization considerations for each purpose are often different.

Approaching the question of mainframe vs. cloud as a question of which tool is right for which job is central to cost-effective modernization decisions. Leaving core transaction processing on the mainframe, for example, is a straightforward, low-risk decision, while running workloads like analytics and customer experience apps on the cloud will reduce costs and deliver the full power of cloud-based services.

In fact, in many situations, the question is less about where applications are running and more about the data. Supporting mobile apps with mainframe data, for example, can be both expensive and slow due to mainframe processing costs, data transport costs and network latency issues.

Selectively replicating mainframe data to the cloud to support read-only access can solve such problems but is only appropriate where real-time access to the data is less important.

Real-time inferencing closest to where the data resides is a competitive advantage that the modern IBM z16 mainframe provides. In other situations, the goal is to expand access to mainframe data to cloud-based applications and services. Organizations are thus able to leverage the value of data on the mainframe across their IT landscape.

Mainframe development in the cloud

As the boomer generation of mainframe developers retires, mainframe-based organizations must energize a new workforce. Such professionals, however, don’t want to sit in front of green-screen terminals. They want to work with modern development tools in a modern, cloud-based environment.

Enter IBM Wazi Developer for Workspaces. Wazi is a development environment with a browser-based IDE that developers and testers can use to build, test and run mainframe applications from anywhere. AWS, for one, is all-in with Wazi. “Wazi from IBM is available today on the AWS Marketplace as part of IBM’s Z and Cloud Modernization Stack offering,” AWS’s Steuart points out.

Wazi addresses the generational mainframe skills problem by delivering a modern experience for developers working with z/OS mainframe apps in the cloud.

In addition, IBM has rolled out IBM Wazi as a Service on the IBM Cloud, bringing z/OS development and test as-a-service to the cloud for the first time. “Developers now have easy access to modern development tools and innovative cloud services,” said Andy Bradfield, vice president of IBM Z Hybrid Cloud. “And with hybrid clouds, they can keep their applications wherever they need to be—in the cloud, on-premises and at the edge.”

The Intellyx take

In addition to IBM and the major cloud providers, there’s an entire ecosystem of both young and mature vendors providing tools and platforms to help enterprises modernize their mainframe applications by working with the cloud.

Some vendors focus on DevOps tools. Others on mainframe-to-cloud integration. DataOps and other modern data tooling are also available from many ISVs.

As a result, the number of mainframe-based enterprises who seek to migrate off of the venerable platform is actually going down as it becomes increasingly clear that mainframes are an integral and essential part of the modern cloud-based world.

To learn more, see the other posts in this series.

Learn more about mainframe modernization by checking out the IBM Z and Cloud Modernization Center.


Copyright © Intellyx LLC. IBM is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used in the production of this article.

Jason Bloomberg

Managing Partner at Intellyx


Keep Your Users Logged In: Enable Session Renewal for IBM Cloud ALB OAuth Proxy Add-On

8 min read


Marcell Pünkösd, IBM Cloud Kubernetes Service Engineer
Attila Fábián, Software Engineer, IBM Cloud Kubernetes Service

ALB OAuth Proxy now supports the use of refresh tokens for session lifetime extension.

If you have used the ALB OAuth Proxy Add-On in IBM Cloud Kubernetes Service before, you may be familiar with the limitation that sessions have a static lifetime, requiring your users to reauthenticate periodically. With the recent introduction of support for refresh tokens in the add-on, you can enable automatic session renewal for up to 90 days without compromising security.

As of 13 April 2023, a new configuration option has been introduced for ALB OAuth Proxy that enables the usage of refresh tokens for session lifetime extension.

If you’re just hearing about ALB OAuth Proxy Add-On, you should check out this blog post on the subject. 

What are refresh tokens?

In order to understand refresh tokens, we should get familiar with access tokens. Access tokens are required to access the backend resources themselves. These are issued by an authentication server and stored on the client side. The client then uses this token when they need to reach protected resources. Access tokens have an expiration time, and after they expire, the token is considered invalid, and access will be refused to the bearer of the token.

In contrast, refresh tokens cannot be used to access protected resources on backend services; instead, they can be used to issue new access tokens without requiring the user to sign in interactively.

When enabled, users receive a refresh token after successful logins, and access tokens will be automatically renewed for them using their refresh token whenever they need it. Refresh tokens are automatically saved in cookies along with the access tokens, allowing users to stay signed in as long as the refresh token is valid.

IBM Cloud App ID can issue access and refresh tokens, but OAuth2-Proxy does not rely on these by default. If the authentication succeeds, OAuth2-Proxy creates a session cookie that is valid for a fixed period of time. As long as the cookie is valid, the user can access protected resources. However, once the cookie expires, the user will need to sign in again.

With the recent changes introduced in ALB OAuth Proxy, it is now possible to configure OAuth2-Proxy to rely on the access and refresh tokens. Users with valid access tokens will be able to access protected resources, and OAuth2-Proxy will be able to renew access tokens on demand as long as the refresh token is valid. Users will need to reauthenticate only when their refresh token expires (or the cookie containing the refresh token is removed from their browser).

Using refresh tokens (with short-lived access tokens) is generally more secure. Unlike long-lived access tokens, which are validated by the backend application directly, refresh tokens are controlled and validated by App ID. This allows a refresh token to be revoked when a user logs out, is deleted or has their permissions revoked in some other way.
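Under the hood, renewing an access token is the standard OAuth 2.0 refresh-token grant from RFC 6749, Section 6: a form-encoded POST to the provider's token endpoint. A generic sketch of building that request follows (the helper name and all parameter values are illustrative; OAuth2-Proxy and App ID perform this exchange for you):

```python
from urllib.parse import urlencode

def build_refresh_request(token_url: str, client_id: str,
                          client_secret: str, refresh_token: str):
    """Build an RFC 6749 Section 6 refresh-token grant request.

    Returns (url, headers, body) for a form-encoded POST.
    """
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, headers, body
```

The returned triple can be sent with any HTTP client; a successful response carries a new access token (and often a rotated refresh token).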

Configuring token refresh for App ID integration with IBM Cloud Kubernetes Service

Users can enable the feature in the following three steps. This guide assumes that you already have a working ALB OAuth Proxy Add-On set up for your IBM Cloud Kubernetes Service cluster. If you do not, check out this blog post or our official guide on how to set it up.

Step 1: Enable refresh tokens in App ID

In order to use refresh tokens, you have to enable them in App ID first. By default, this feature is turned off:

  1. Navigate to the management console of your App ID instance.
  2. In the IBM Cloud App ID management console, go to Manage Authentication.
  3. Select the Authentication Settings tab.
  4. Under Sign-in Expiration, enable the refresh tokens by clicking on the switch on the right side.
  5. The expiration setting for refresh tokens should be available now. You can change it to your desired value.
  6. Click Save.
Step 2: Update your Ingress resources

When using refresh tokens for session renewal, the cookie maintained by OAuth2-Proxy can be larger than 4 kB. Most browsers do not handle cookies above this size, so OAuth2-Proxy splits the session cookie into two pieces. In order to let OAuth2-Proxy access both cookies properly, the following configuration snippet must be added to the Ingress resources that use authentication:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $_oauth2_<App_ID_service_instance_name>_upstream_1 $upstream_cookie__oauth2_<App_ID_service_instance_name>_1;
      access_by_lua_block {
        if ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "_oauth2_<App_ID_service_instance_name>_1=" .. ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
      }

Note: Kubernetes Ingress Controllers (ALBs) on clusters created on or after 31 January 2022 do not process Ingress resources that have snippet annotations by default, as all new clusters are deployed with the allow-snippet-annotations: "false" setting in the ALB’s ConfigMap. This is a safe default introduced as a mitigation of CVE-2021-25742. You can find more information about this in our security bulletin. Because this configuration requires you to use configuration snippets, you need to edit the ALB’s ConfigMap (kube-system/ibm-k8s-controller-config) and change allow-snippet-annotations: "false" to allow-snippet-annotations: "true".

If you were previously using the annotation in your Ingress configuration, you may have to merge the previous snippet with your config. See our official documentation for more examples.
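As background, the cookie splitting mentioned above works roughly like this sketch (the chunk size, naming scheme and helper are illustrative, not OAuth2-Proxy's exact implementation):

```python
def split_cookie(name: str, value: str, limit: int = 4000):
    """Split an oversized cookie value into numbered chunks under limit bytes.

    Browsers reject cookies near the 4 kB mark, so a large session is
    stored as several cookies (name_0, name_1, ...) and reassembled later.
    """
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
    return {f"{name}_{i}": chunk for i, chunk in enumerate(chunks)}
```

Reassembly is simply concatenating the numbered cookies in order, which is why the Ingress snippet above has to forward both cookie parts.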

Step 3: Configure cookie_refresh for OAuth2-Proxy deployed by ALB OAuth Proxy Add-On

The final step is to enable using the refresh token on the OAuth2-Proxy side. To achieve this, the cookie_refresh value must be configured to a valid duration in the Kubernetes ConfigMap that describes custom OAuth2-Proxy settings.

By default, this ConfigMap does not exist—if you have not customized the default behavior of OAuth2-Proxy before, you may have to create it. The following is an example for creating such a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth2-<App_ID_service_instance_name>
  namespace: <ingress_resource_namespace>
data:
  cookie_refresh: "58m"

The value of cookie_refresh determines the age of the access token, after which OAuth2-Proxy requests a new token from App ID. It is recommended to set this value a few minutes shorter than the expiration of your access token defined in App ID.

More information

For more information, check out our official documentation about configuring App ID authentication or learn more about App ID in general on this YouTube channel.

To learn more about refresh and access tokens in general, you may be interested in RFC6749.

Contact us

If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.


Marcell Pünkösd

IBM Cloud Kubernetes Service Engineer

Attila Fábián

Software Engineer, IBM Cloud Kubernetes Service


Drive Digital Disruption with Middle- and Back-Office Workloads

5 min read


Ravesh Lala, Vice President, Hybrid Cloud Solutions
John De Marco, Distinguished Engineer, CTO

Learn about four key challenges to modernizing the enterprise core workloads and four ways to get started now.

Digital transformation has been a decade-long pursuit by organizations, accelerated by the pandemic, demands of digitally native consumers and the need to maximize existing investments. We see organizations drive stronger innovations by unlocking the enterprise core and realizing tangible benefits.

Digitize front office for quick wins, and modernize the core to unlock innovation 

Modern technology is key to digital transformation, but as of 2022, just 7% of core banking workloads globally have shifted to the cloud. It has been observed that the emphasis so far has been on the front-office applications (or digital front ends), but middle- and back-office systems also need modernizing.

We believe the real value of digital transformation lies in modernizing these middle- and back-office systems. These systems are often the backbone of a financial organization's operations, and modernizing them can help improve long-term competitiveness, cost-efficiency, resilience, security and compliance protocols in a constantly changing regulatory landscape.

For example, one of the leading financial trading organizations boosted performance by up to 3x by moving their trading platform to the cloud, helping clients seize fleeting opportunities for profit in a security-rich cloud platform and protecting client investments.

Modernizing middle- and back-office workloads can be a complex process, especially for established organizations and those in regulated industries. It requires a deep understanding of the existing operations and where the business wants to go, in addition to the regulatory and security issues inherent to these types of industries and the complexity of moving to the cloud. We believe the benefits of modernizing middle- and back-office system outweigh the risks.

Four key areas that present challenges

Enterprises usually encounter challenges in four key areas:

  1. Platform: We have seen clients struggle with the cost of a secure, resilient and regulatory-compliant multicloud platform. A secure and resilient platform should leverage a confidential computing environment, spanning compute, containers, databases, encryption and key management. It should also leverage an industry-specific common-controls framework that supports an enterprise’s regulatory compliance requirements; however, the cost of maintaining a controls framework can be a challenge. Additionally, we believe relying on one cloud provider adds to cloud-concentration risk, impacting application performance and resiliency.
  2. Application: The complexity of middle- and back-office applications can be a significant challenge that organizations must tackle. High-speed and high-volume transaction processing, data consistency, batch processing and complex business rules contribute to this complexity. These applications have been around for years, and resources that understand the business rules and technology that supports them are few and far between.
  3. Data: Data is another critical area of concern. Data sprawl, consistency, latency and encryption are essential factors to consider. Latency between cloud front ends and on-premises middle- and back-office applications can impact user experience. Data sovereignty requirements can affect workload placement decisions and potentially stifle innovation.
  4. Operating model: Operating models can pose a significant challenge in the modernization journey as organizations shift their processes, technology and culture to embrace new technologies and ways of working. People and skills are crucial to success, with new roles like Site Reliability Engineer (SRE), full-stack developers and cloud engineers necessary for transforming applications. To address the need for cloud-native development, organizations should consider training and upskilling their workforce to have the talent needed to thrive in a rapidly changing digital landscape.

So, what techniques can be applied to help overcome these challenges and fully realize the benefits of digital transformation?

Four anchors of an enterprise modernization strategy across front, middle and back office

A strategy and vision that encompasses optimal workload placement, a business process-centric approach to modernization, security and compliance considerations, and a 360-degree operating model are crucial to a successful digital transformation journey across the front, middle and back office.

  1. Determine optimal workload placement: Reassess your current application and data portfolio in light of a hybrid multicloud approach to determine the best modernization pattern and “fit-for-purpose” landing zone to maximize business value. Consider the attributes of resiliency, performance, security, compliance and total cost of ownership in determining the optimal workload placement. This orientation is essential to drive business optimization and gain efficiencies while modernizing applications. The fact is that different workloads have different needs to operate efficiently.
  2. Take a business process-centric approach: Evaluate critical end-to-end business processes to determine how to optimally modernize front-, middle-, and back-office applications and data. To deliver efficient and seamless experiences, companies should also assess their underlying digital supply chain, including workflows, process automation and business rules. The strategy for middle- and back-office applications should be developed in parallel with front-office modernization, using prototypes to validate the approach. Organizations need to define new workflows, support configurable business rules and modernize the integration layer with APIs, messaging, eventing and other technologies to transform middle-office applications. For back-office applications, core banking functionality and high-throughput transaction processing can often run on the mainframe, with modern technologies like Linux and Java used to modernize in-place.
  3. Build security and compliance capabilities into modernization design: Security and compliance challenges in hybrid multicloud environments include complexity, lack of end-to-end observability, API vulnerabilities and changing regulatory requirements. We have noticed that “roll-your-own” security and compliance frameworks have shown to be costly to sustain over the years. Compliance capabilities aligned with industry frameworks can simplify operations and accelerate cloud deployment. We’ll discuss more about security in our upcoming blog.
  4. Establish a 360-degree operating model: Instantiating an operating model that addresses platform governance, DevSecOps and FinOps helps improve agility, establish a unified automated delivery pipeline with continuous monitoring and feedback, and enable collaboration and prioritization of data-driven investments.  Focusing on culture and cloud-native development skills while adopting structures and systems that encourage sharing of information and ideas is crucial to IT transformation.    
Start small and gradually scale your connected modernization efforts

It is crucial to prioritize digital transformation initiatives based on business value and total cost of ownership in the current economic climate, where budget, talent and time can be limited.

Drive discrete projects with a focus on cost efficiency. Organizations can start by modernizing either the front or back/middle office and connect their way up or down, respectively. Several clients have successfully executed such a strategy:

  • A major bank in Brazil is modernizing its Credit Direct to Consumer and Credit Card processes by modernizing the front-office applications to a cloud-native architecture on private/public cloud while leaving systems of record on the mainframe.  
  • A major airline is modernizing its IT application landscape using hybrid cloud and AI capabilities, resulting in a streamlined digital platform to improve agility, reduce costs, strengthen cybersecurity and enhance customer experience by leveraging existing IT investments while adopting new technologies.
  • A large finance mortgage loan company began by moving their most mission-critical workloads to a hybrid infrastructure to improve customer experience and create a sustainable technology platform for the future. Then, they gradually worked their way up. IBM's solution provided scalability, reduced total cost of ownership, and decreased time to market.

For regulated industries, it is essential to have a partner who can provide a cloud platform that is secure, adaptable to changing needs and supportive of compliance requirements, regardless of where the cloud journey starts.

Why should you choose IBM?

IBM has been dedicated to delivering on top parameters—resiliency, performance, security, compliance protocols and total cost of ownership—with IBM Cloud, particularly for clients in highly regulated industries, so they can make decisions on their critical workloads in the middle and back office with confidence.

IBM has expertise in designing mission-critical and resilient systems for regulated industries and enterprise clients. We offer tailored solutions to meet your needs, leveraging deep industry expertise. We offer x86, Power and zSystems for running high-performance, mission-critical workloads in our cloud and on-premises. Our team has the expertise to deliver new solutions or improve existing ones.

Ready to begin your digital transformation journey? Speak with an IBM Tech Expert today by visiting our website and clicking on Chat with an IBM expert.

Avoid surprises on your cloud bills by estimating them even before migrating your workloads to the cloud. Take the no-cost cloud modelling assessment today.

Check out our blog on IBM Cloud driving innovation in regulated industries with security at the forefront.

Ravesh Lala

Vice President, Hybrid Cloud Solutions

John De Marco

Distinguished Engineer, CTO


Turn Your Terraform Templates into Deployable Architectures

7 min read


Frederic Lavigne, Product Manager

Your first step in succeeding with your platform engineering journey.

Recently, IBM Cloud introduced projects and deployable architectures. Projects are a named collection of configurations that are used to manage related resources and Infrastructure as Code (IaC) deployments. A deployable architecture is a cloud automation for deploying a common architectural pattern that combines one or more cloud resources that are designed for easy deployment, scalability and modularity.

Projects and deployable architectures enable teams to capture best practices into reusable patterns, get started with new environments in a few clicks and ensure that the environments remain compliant, secure and up-to-date over time. IBM Cloud provides a set of ready-to-use deployable architectures that can be deployed as-is or extended to meet your needs.

In this blog post, I will walk you through the steps of turning a simple Terraform template into a deployable architecture in a private IBM Cloud catalog. Eventually, you could build your own deployable architectures (or extend an existing one) to capture recommended security configurations and build pipelines and architectures for new projects in your organizations.

Note that you will need a paid IBM Cloud account if you plan to go through all the steps in your own account.

Capture best practices in a template

If you are already using Terraform to deploy your infrastructure, you’ve already been creating a deployable architecture. The only difference is that you might have had to come up with your own approach to distribute the templates across your company and manage updates.

With the native support for deployable architectures provided by IBM Cloud, you can turn your template into a tile in a private IBM Cloud catalog. The tile will allow self-service infrastructure to be deployed by your teams while ensuring that the provisioned resources adhere to your company standards and controls.

Let’s consider the following architecture that could be the starting point for new application development:

Simple architecture including a resource group, a virtual private cloud and access groups.

This architecture includes the following:

  • A resource group to isolate all resources required by the application
  • A virtual private cloud (VPC) to deploy virtual server instances or Red Hat OpenShift clusters
  • Access groups for administrators, operators and developers to implement separation of duties between team members

The main Terraform template for this architecture would look something like this:

Terraform template to create a simple architecture.

The full set of resources required for this architecture can be found in the companion Git repository.
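
The template itself appears only as an image above. As a rough sketch (resource names and arguments are illustrative guesses of mine, not the repository's actual code, which remains the authoritative reference), the three building blocks map onto IBM Cloud Terraform provider resources along these lines:

```hcl
# Resource group to isolate all resources required by the application
resource "ibm_resource_group" "app" {
  name = "${var.prefix}-group"
}

# Virtual private cloud to host virtual server instances or OpenShift clusters
resource "ibm_is_vpc" "app" {
  name           = "${var.prefix}-vpc"
  resource_group = ibm_resource_group.app.id
}

# One access group per role to separate duties between team members
resource "ibm_iam_access_group" "admins" {
  name = "${var.prefix}-admins"
}
```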

Turn your template into a deployable architecture

At this stage, we have a set of Terraform files. A deployable architecture, as defined by IBM Cloud, is slightly more than just the Terraform files: it turns the template into a product. By product, I mean a package including the following:

  • A version number so that you can manage product lifecycle
  • The Terraform template and its inputs and outputs
  • IAM permissions that document which permissions are required to deploy your architecture (very useful!)
  • Architecture diagrams so that the users looking at your product can understand what it is about
  • Prerequisites in case your product is not self-contained and must be deployed on top of another deployable architecture (yes, this is supported)
  • Flavors so you can bundle different variations of your deployable architecture (think small, medium, large configurations, for example, or proof-of-concept vs. production)
  • End-user license agreements so that you can define your own usage rules
  • A readme file (it is always good to provide some documentation)
  • Cost information to provide a rough estimate about the cost of deploying the architecture
  • Compliance information to capture all the controls implemented by your architecture

As you onboard your deployable architecture into IBM Cloud, you will be able to specify all of the above in the IBM Cloud console. Alternatively, the catalog manifest (a JSON file) provides a nice way to capture all this information as part of the Git repository hosting your Terraform templates. A template for this manifest is available here.

For our example, the manifest is provided at the root of the repository. It must be named ibm_catalog.json. If you look at the file, you will recognize most of the content listed above (version information, descriptions, architecture diagrams, inputs and outputs).
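
To give a feel for its shape, here is a heavily abbreviated, illustrative sketch of such a manifest (field names quoted from memory and not guaranteed exact; the template linked above is the authoritative reference):

```json
{
  "products": [
    {
      "name": "simple-da",
      "label": "Simple deployable architecture",
      "product_kind": "solution",
      "flavors": [
        {
          "name": "standard",
          "label": "Standard",
          "configuration": [
            { "key": "prefix", "type": "string", "required": true }
          ]
        }
      ]
    }
  ]
}
```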

The deployable architecture is now almost ready to be included in your private IBM Cloud catalog. It just needs to be packaged into a single archive. In GitHub, this is as easy as creating a new release. The resulting archive (a .tar.gz) is exactly what private IBM Cloud catalogs need.

Create a catalog to distribute your deployable architecture

I’ve been using the word “catalog” without a proper introduction. As an IBM Cloud user, you are likely familiar with the IBM Cloud catalog—it’s where you find all the products you can provision in IBM Cloud. Did you know that you can create your own private catalogs with your own products or a remix of IBM Cloud products?

A private catalog includes products, and each product can have multiple versions. You control which products and versions you make available to others. When the time comes, you can deprecate and retire versions and products.

  1. In the IBM Cloud console, go to Catalog management.
  2. Create a catalog:
    • Select Product as the catalog type.
    • Give the catalog a name: “my-catalog.”
    • Start with no products.
    • Click Create.

A catalog to distribute your deployable architectures.

Distribute the deployable architecture as an accelerator for new projects

Our private catalog is ready to welcome our deployable architecture.

To add the deployable architecture to the private catalog:

  1. Click Add.
  2. Select Deployable architecture as the product type.
  3. Select Terraform as the delivery method.
  4. Keep Public Repository as the repository type. Catalogs give the option to host the deployable architecture in private repositories if needed.
  5. Find the link to the archive file of the most recent version of the deployable architecture for this blog in the companion repository's releases, and enter that URL in the Source URL field.
  6. Select Standard as the variation. The software version and category get populated automatically.
  7. Click Add product.

The product is now added and the first version is ready to be validated. You will notice that a lot of the information has been pre-filled for you. This is because we provided the catalog manifest ibm_catalog.json in the archive. This makes it easy to reduce the number of manual steps and to manage your metadata as code:

Imported my first deployable architecture.

The next step is to use the action menu on the imported version and to go through the validation phase. The validation gives you an opportunity to review all provided information and to perform a test deployment of the architecture:

Ready to validate the deployable architecture.

The deployable architecture is successfully validated.

After the deployable architecture is successfully validated, you can review the cost estimate and compliance. Then, the product is ready to share with others:

The deployable architecture is ready to share.

This deployable architecture is now a product in this private catalog. It can have multiple versions, just like any software you install on your computer. When a new version is available, users of the deployable architecture will be able to update their existing deployment to benefit from the latest and greatest.

Start your next project quickly

The deployable architecture should now be visible in your private catalog. Go back to the IBM Cloud catalog. Notice a new dropdown to select the active catalog. Use it to switch to the my-catalog catalog that you created. There, you’ll see the deployable architecture that we imported:

Simple deployable architecture available in a private catalog.

The next step is to create an instance of this deployable architecture. Anyone with access to the private catalog can do this. The architecture will be added to a project, and you can start managing its resources from there and evolve the deployed configuration as new versions of the architecture are made available in the catalog:

  1. Select the deployable architecture.
  2. Review deployment options.
  3. Select Add to project.
  4. Set the project name to “simple-da.”
  5. Create the project.
  6. Edit the configuration to your needs:
    • Enter an API key to use for deployment.
    • Optionally, set a prefix for the resources to be created.
  7. Save and Validate the configuration.

Deploying the architecture with only a few clicks.

After the validation completes, you can deploy the architecture. As part of the deployment, it will create a resource group, a virtual private cloud and access groups. As you release new versions of the product, the project will notice and offer a seamless upgrade path for all configurations by using the product:

Project configuration deployed. Resource group, virtual private cloud and access groups have been created.

Get started

Even though the example in this post is rather simple, in a few steps you went from a set of Terraform files to a tile in a private catalog where you can configure the visibility and lifecycle of the product and its versions.

In addition, this project gives you visibility on the cost of the configuration, ensures that the deployed resources are secure and compliant with the controls selected by your organization, and provides a clear approach to deploying new versions and managing upgrades.

You should now have a good understanding of the basics of deployable architectures and projects. The next step is to understand how they can be used within your organization. I would recommend reading the article on how to run secure enterprise workloads. It goes through the concepts in great detail. In addition, start browsing the growing list of deployable architectures available in the IBM Cloud catalog and learn how you can customize and extend them to your needs. And as you embark on creating your own deployable architectures, keep in mind that automation through the CLI or Terraform can also be used to build and make your deployable architecture available in private catalogs.

To learn more about projects and deployable architectures, see the following articles:

Feedback, questions and suggestions

Go ahead and try the sample on your own from the GitHub source. If you have feedback, suggestions or questions about this post, please reach out to me on LinkedIn.

Frederic Lavigne

Product Manager


Mainframe Application Modernization Beyond Banking Cloud Compute

4 min read


Jason English, Principal Analyst & CMO at Intellyx LLC

Looking at mainframe modernization in industries like insurance, automotive and retail.

When you think of the world’s biggest modernization challenges, you immediately think of banking, and for good reason. Banks were among the first to roll out advanced mobile apps some 15 years ago, and they had already started offering online services in the mid-1990s.

Well before that, banks were interacting through massive electronic payment gateways and operating mainframe services, many of which remain core to their business to this day even though the platforms themselves have evolved significantly since then.

Even if transactional systems that connect to a bank make up much of the mainframe landscape, they are part of a larger wave of transformation that isn’t only about banks. Mainframes are still an essential part of the digital backbone of many other industries, and there are still a lot more digital transformation stories left to be told that aren’t exclusively financially motivated.

Let’s explore some highlights of mainframe modernization in other industries, including insurance, automotive and retail.

It all starts with the customer

If there is one common pattern for mainframe modernization shared across all industries, it’s that companies are trying to improve their digital experience for customers without the risk of interrupting critical core systems.

The customer can be an end user on an app or website or an employee/partner in an office or in the field. Customers are evaluating the company based on how well the company’s business logic and data serve their experience. Customers don’t care if the back-end is a mainframe talking to a SaaS provider that offers a mobile app UI—they just want the whole system to meet their business needs efficiently and accurately.

Even in a machine learning scenario, where mainframe data may be informing an artificial intelligence (AI) model rather than a person, there’s still a customer who will want the resulting AI model to support a business process.

Insurance: DevOps at State Farm

There is no reason why DevOps practices should be reserved only for distributed applications and the cloud. The transformational story of State Farm, one of the world’s largest mutual insurers, offers a great leadoff example of agility.

For a mature industry, there’s still a lot of future uncertainty for insurers. New startups appear every day, driving customer demands for features like online price quotes and mobile claims processing apps. State Farm’s dev team was in the midst of its own DevOps transformation, having established a combination of its own homegrown automation and test tools with Jenkins CI/CD and Git for delivery and deployment of new apps. These leverage data from workhorse IBM Z mainframe servers—some of which have been in continuous operation for as long as 50 years.

Once changes to customer-facing apps and API-enabled services started rolling in with greater speed, a bottleneck became apparent: back-end services could not be modified with enough agility to keep up, and there was a shortage of experienced mainframe developers to make the changes.

Using the development and debugging environment of IBM® Developer for z/OS, which integrates directly with Git for version control and check-ins, even newer additions to the development team were able to gain leverage and update mainframe applications on IBM Z with an intuitive, familiar workflow.

“Our IBM Z systems offer a robust, secure and reliable foundation for growth. We wanted to support Z developers in achieving greater efficiency and speed but also help newer recruits feel comfortable on the platform, so that we can all work together across platforms to deliver rapid innovation," said Krupal Swami, Technology and Architecture Director for State Farm.

Automotive: Modernizing software delivery

A leading automotive firm depends upon core systems to populate new in-car applications and services with current data. Their existing internal source code management system was starting to affect the stability of mission-critical mainframe applications, and newer developers were having difficulty gaining visibility into the dependencies each change might affect.

The firm switched their mainframe teams to development on IBM IDz (IBM Developer for z/OS) with IBM DBB (Dependency Based Build), which improves the visibility of dependency-based builds, and then used IBM UrbanCode Deploy to deliver agile updates to the highly distributed target systems.

Retail: Planning to avoid rework

A major retail enterprise will naturally accumulate many mainframe applications on the way to creating new customer-ready functionality. After a few years, rationalization of this extended application estate becomes a full-time job for too many skilled people, as interdependencies between apps and mainframes are hard to uncover.

Further complicating matters, newer development employees get very little transparency into the architectural and software decisions made by previous generations of developers and IT leaders, creating even more unproductive work.

Using IBM ADDI (Application Discovery and Delivery Intelligence), both senior and junior developers can analyze all mainframe applications alongside newer apps and services. Quick discovery and documentation of interdependencies now helps their combined ITOps and software delivery teams understand the impact of any introduced changes.

The Intellyx take

Interestingly, while these transformational stories happened outside of the financial industry, almost every one of them relies upon one or more critical transaction processes that likely happen on mainframes, bringing the modernization story back to banks, in a way.

New hybrid cloud use cases are appearing, offering higher performance and better data protection through selectively co-locating some inference, data processing and security workloads on mainframes, allowing teams to have the elasticity of cloud with the always-on power of the mainframe.

In all these cases, unlocking the next frontier in productivity will require modernizing the human developer’s experience of working with the mainframe, so both experienced business developers and new talent can join the modernization effort.

Learn more about mainframe modernization by checking out the IBM Z and Cloud Modernization Center.

©2023 Intellyx LLC. Intellyx is editorially responsible for this document. No AI bots were used to generate any part of this content. At the time of writing, IBM is an Intellyx customer. 

Jason English

Principal Analyst & CMO at Intellyx LLC


Kubernetes-Native Security Now Available for IBM Systems Cloud

3 min read


Ajmal Kohgadai, Principal Product Marketing Manager at Red Hat

Red Hat Advanced Cluster Security provides a comprehensive and automated Kubernetes-native security solution for Red Hat OpenShift running on IBM Power, IBM zSystems and IBM LinuxONE.

One of the core aims at IBM is to help make security accessible to all customers and provide security-focused solutions that help businesses innovate with confidence. Today we are proud to inform customers that the latest release of Red Hat Advanced Cluster Security for Kubernetes extends protections for Red Hat OpenShift clusters running on IBM Power, IBM zSystems and IBM LinuxONE to address advanced security use cases.

IBM Power, IBM zSystems and IBM LinuxONE are enterprise-grade platforms that offer high performance, reliability and security for mission-critical workloads. These platforms are used by organizations in industries like banking, healthcare and government, where security and compliance are paramount.

Red Hat OpenShift is an enterprise application development platform for building, deploying and running cloud-native applications at scale. Red Hat OpenShift and IBM provide a flexible, open, hybrid and multicloud enterprise platform with security features supporting mission-critical workloads.

What is Red Hat Advanced Cluster Security?

Red Hat Advanced Cluster Security is a platform that provides Kubernetes-native security features for containerized applications and infrastructure across the full application lifecycle: build, deploy and runtime. It is designed to help organizations detect and remediate security risks throughout many stages of the container lifecycle by integrating with the Kubernetes API server, container registries and deployment pipelines. Red Hat Advanced Cluster Security is designed to be flexible and scalable, helping organizations to secure Kubernetes clusters across multiple clouds and on-premises environments.

With this new support, IBM customers can now benefit from a comprehensive and automated Kubernetes-native security solution for Red Hat OpenShift running on the aforementioned IBM platforms. Customers on IBM Power, IBM zSystems and IBM LinuxONE can address advanced Red Hat OpenShift security use cases like shift-left security, full lifecycle vulnerability management, network segmentation and runtime detection and response with Red Hat Advanced Cluster Security for Kubernetes.

Benefits of Red Hat Advanced Cluster Security

Red Hat Advanced Cluster Security for IBM systems positions customers to do the following:

  • Gain visibility into containerized environments: Red Hat Advanced Cluster Security provides a comprehensive view of the security posture of Red Hat OpenShift running on IBM Power, IBM LinuxONE and IBM zSystems, helping customers gain insights such as the number and severity of vulnerabilities and misconfigurations in their clusters and whether they adhere to compliance standards.
  • Shift security left with DevSecOps: Red Hat Advanced Cluster Security integrates with existing CI/CD pipelines and provides developer-friendly guardrails to help organizations identify and address security issues (such as a fixable image vulnerability) earlier in the container lifecycle, before they impact production environments at runtime.
  • Identify and remediate security risks in their runtime environments: Red Hat Advanced Cluster Security provides runtime monitoring for threat detection and remediation in Red Hat OpenShift clusters running on IBM Power, IBM zSystems and IBM LinuxONE. This includes scanning running container images for vulnerabilities, detecting anomalous behavior and monitoring network traffic for malicious activity.
  • Support compliance policies: Red Hat Advanced Cluster Security helps organizations address compliance policies for Red Hat OpenShift. This includes detection protocols for potentially non-compliant workloads across several industry standards—such as PCI-DSS, HIPAA, CIS benchmarks and NIST—and providing audit support for compliance reporting.
  • Protect applications across hybrid cloud environments: Red Hat Advanced Cluster Security supports multiple cloud providers and on-premises environments, helping customers to secure their Red Hat OpenShift clusters across hybrid cloud environments. This includes providing centralized management and policy enforcement across multiple Red Hat OpenShift clusters running on different platforms.
Get started

With Red Hat Advanced Cluster Security, IBM customers can now benefit from a comprehensive and automated security solution to help protect their Red Hat OpenShift environments while leveraging the benefits of enterprise-grade platforms like IBM Power, IBM zSystems and IBM LinuxONE.

Learn more about Red Hat Advanced Cluster Security.

Ajmal Kohgadai

Principal Product Marketing Manager at Red Hat


Deploying a Simple HTTP Server to IBM Cloud Code Engine From Source Code Using Python, Node and Go Cloud

6 min read


Finn Fassnacht, Corporate Student/Intern
Enrico Regge, Senior Software Developer

In this blog post, we will explore how to deploy a simple HTTP server to Code Engine using three popular programming languages: Python, Node.js and Go.

IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions or batch jobs. In this article, we’re focusing on Code Engine applications that are designed to serve HTTP requests. We will demonstrate how to use the web UI and the CLI to deploy our code. By the end of this post, you will have a clear understanding of how to deploy your own code to Code Engine using your preferred programming language.

Setting up Code Engine

Before we can deploy our simple HTTP server, we need to set up Code Engine. If you don't have an IBM Cloud account yet, you can create one for free here. Be aware that Code Engine requires an account with a valid credit card on file. However, Code Engine provides a generous free tier with ample resources to kickstart your project.

After logging into IBM Cloud, you have two options to interact with IBM Cloud and Code Engine—using the command line interface (CLI) or the web UI.

Using the CLI

Here are the steps:

  1. Install the IBM Cloud CLI:
    • On Linux:
      curl -fsSL | sh
    • On MacOS:
      curl -fsSL | sh
    • On Windows:
      iex (New-Object Net.WebClient).DownloadString('')
  2. Log in to IBM Cloud:
    ibmcloud login
  3. Install the Code Engine plugin:
    ibmcloud plugin install code-engine
Using the web UI

To use the web UI, follow these steps:

  1. Go to Code Engine.
  2. Log in.

You can now proceed to deploy your simple HTTP server. In the next section, we’ll look at the sample code for your HTTP server in Python, Node.js and Go.

Sample code

The following sample code will start a server on port 8080 and set up a simple GET route that serves a "Hello World" message. The beauty of using Code Engine is that you don't need any specific modules or configurations. If the code runs successfully on your localhost at port 8080, it will run on Code Engine without any modifications.

Node.js app with Express.js
// require expressjs
const express = require("express")
const app = express()
// define port 8080
const PORT = 8080
// use router to bundle all routes to /
const router = express.Router()
app.use("/", router)
// get on root route
router.get("/", (req, res) => {
  res.send("hello world!!!")
})

// start server
app.listen(PORT, () => {
  console.log("Server is up and running!!")
})

Find the Github repository here.
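
One detail worth noting: like the Python sample below, a Node.js app can read its listen port from the PORT environment variable that the platform may provide, falling back to 8080 for local runs. This is a small addition of mine, not part of the original sample:

```javascript
// Read the port from the environment, defaulting to 8080 for local runs.
const PORT = parseInt(process.env.PORT || "8080", 10);
console.log(`Listening on port ${PORT}`);
```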

Go app
package main

import (
	"fmt"
	"log"
	"net/http"
)

// create function for route
func helloworld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	// use helloworld on root route
	http.HandleFunc("/", helloworld)
	// use port 8080
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Find the Github repository here.

Python app with Flask
from flask import Flask
import os

app = Flask(__name__)

# set up root route
@app.route("/")
def hello_world():
    return "Hello World"

# Get the PORT from environment
port = os.getenv('PORT', '8080')
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port))

It is important to note that Python will require a Procfile:

web: python

Find the Github repository here.

Deploy an application

Now that we've looked at the sample code, let's move on to deploying it. The first step is to create a new project. Your application will be contained within this project.

To create a new Project in the web UI
  1. Click on Projects.
  2. Click on Create.
  3. Select a desired Location (e.g., us-east).
  4. Pick a name for your project.
  5. Set the resource group to Default.
  6. Click Create in the bottom right corner to submit the project creation.
To create a new Project using the CLI
ibmcloud ce project create --name projectname

Note that the resource group Default and the region us-east are the defaults. If you want to target a specific region, use the following:

ibmcloud target -r us-east

If you want to target a specific group, use the following:

ibmcloud target -d Default

Now that we've created a project, let's move on to deploying our code. We'll be deploying our Node.js code from a source that's hosted in a GitHub repository along with the package.json and package-lock.json files. In this repository, you'll find a sample server written in Node.js (see the sample code above).

In the web UI
  1. Click on Projects in the Code Engine dashboard.
  2. Select your project.
  3. Select Applications from the sidebar.
  4. Click Create to create a new application.
  5. Select Source code and enter the repository link.
  6. Click the Specify build details button.
  7. If your repository is not private, you can proceed to the next step.
  8. Select Cloud Native Buildpack and click Next.
  9. Unless you have reason to change something, click Next again.
  10. Once you're done configuring the build options, click Create to deploy your application.

Code Engine will now do its job and build and deploy the source code for you. You can click on Test application to obtain the application URL to see your application live.

In the CLI
ibmcloud ce app create --name appname --src repolink-here --str buildpacks 

Define the name of your app:

--name appname

Define the source of the code:

--src repolink or /path/to/folder

Note that the source can be a local file on your computer, as well.

When your code is not located in the root directory of your repo or directory, it is important to specify the exact location of your code by using the option --build-context-dir path/to/folder. To run the example code in the CodeEngine repository (as I am doing), execute the following command:

ibmcloud ce app create --name appname --src --str buildpacks --build-context-dir /helloworld-samples/app-nodejs/
Update an application

Let's briefly discuss how to update your application. For instance, you might want to add a new POST route to your Node.js code that returns data sent to the server (note that reading req.body requires the express.json() middleware, enabled with app.use(express.json())):

app.post("/echo", (req, res) => {
  // echo the message back to the user
  res.json({ message: (req && req.body && req.body.message) || "nothing to echo back" });
})
Using the CLI
ibmcloud ce app update --name appname --src repolink-here --str buildpacks
Using the web UI
  1. Navigate to your application.
  2. Select Configuration.
  3. Click Edit and create new revision.
  4. Click Rerun build to open the build details.
  5. Trigger a build by clicking Save and Create.
How it works

That's all you need to do to deploy your app from source. Let's talk about the magic that happens after you hit Create.

First, Code Engine loads your source code from a repository or from your local computer. It then builds a container image and runs it on IBM Cloud. Specifically, your container is managed by two open-source technologies—Kubernetes and Knative—that automatically scale your application up and down based on the traffic it receives. This ensures that your application always has enough resources to handle incoming requests, while saving money during low-traffic periods when your application is scaled down again. If it receives no traffic, it will even scale to zero and stop incurring charges. Once your application receives traffic, it will automatically "wake up" and scale back up to one instance.

Better performance

Congratulations, you now know how to deploy a simple web app quickly and easily. But what if I told you there's a way to make your application more efficient? Enter containers.

When you deploy your app from source, Code Engine creates a container image for you. The platform detects the language you're using and selects a general purpose pre-built image for it. But, of course, it doesn't know exactly what your app needs or doesn't need. As a result, the image may be larger than necessary, leading to slower deployment performance and longer start-up times. By defining your own container image, you can optimize your app by ensuring that only the necessary components are included.

Making an image

Since Code Engine will build the container image for you, you don't need to worry about installing Docker or getting its daemon to run properly.

To build your own custom image, you'll need to create a Dockerfile. In this example, we'll be creating one for the Node.js application. To get started, create a new file called "Dockerfile" without any file extension:

FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json index.js ./
RUN npm install
EXPOSE 8080
CMD ["node", "index.js"]

Use a pre-built image (alpine is extremely lightweight):

FROM node:alpine

Specify a working directory:

WORKDIR /usr/src/app

Copy package.json and package-lock.json:

COPY package*.json index.js ./

Install the required packages:

RUN npm install

Expose the port 8080 to the outside:

EXPOSE 8080

Finally, run the server:

CMD ["node", "index.js"]

Once you have configured your Dockerfile (and added it to your repo or folder) you can easily deploy the image.

Using the web UI

Deploying your code with a Dockerfile in the web UI works the same as deploying your code without one. However, in step 8, you will need to select Dockerfile so that Code Engine uses the instructions in your Dockerfile to build the image.

Using the CLI
ibmcloud ce app update --name appname --src repolink-here --str dockerfile

Note that it's important that your Dockerfile is located in the root directory of your project.

By specifying how to build the image, the resulting image size can be dramatically reduced. In my case, I was able to reduce the image size by about 90%. This means that everything will run much faster. If you are interested in learning more about build optimizations, you’ll find useful information in the following article: “Writing a Dockerfile for Code Engine.”

Get started with IBM Cloud Code Engine

Deploying a simple HTTP server to Code Engine is a straightforward process that can be done using different programming languages, including Python, Node.js, Go and more. With Code Engine, you can deploy your code without worrying about the underlying infrastructure, making it easy to focus on writing your code.
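
As a concrete starting point, here is a minimal sketch of such an HTTP server in Python, using only the standard library. Code Engine supplies the listening port through the PORT environment variable (8080 by default); the greeting text and handler name are illustrative assumptions, not code from this post.

```python
# Minimal HTTP server of the kind deployed in this post (a sketch). Code
# Engine supplies the listening port via the PORT environment variable
# (8080 by default); everything else here is just an illustration.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Code Engine!\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    port = int(os.environ.get("PORT", "8080"))
    server = HTTPServer(("0.0.0.0", port), HelloHandler)
    print(f"Listening on port {port}")
    server.serve_forever()

# Call main() to start serving.
```

Deployed from source, a file like this needs nothing beyond the repo itself; Code Engine's buildpack (or a Dockerfile like the one shown earlier, adapted for Python) packages it into a container.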

In this blog post, we explored how to set up Code Engine using the web UI and the CLI and then provided sample code for the three programming languages. We also demonstrated how to create a new project and deploy your code. With the steps outlined in this post, you should have a clear understanding of how to deploy your own code to Code Engine using your preferred programming language and method.

Try it out for yourself and see how easy it is to deploy your code to IBM Cloud Code Engine.

Finn Fassnacht

Corporate Student/Intern

Enrico Regge

Senior Software Developer


IBM CIO Organization’s Application Modernization Journey: Tools to Simplify and Accelerate Cloud

3 min read


Sachin Avasthi, STSM, Lead Architect for Application Modernization
Jay Talekar, STSM, App and Data Modernization

How we used the IBM Cloud Transformation Advisor’s Data Collector to accelerate our containerization journey.

In our last blog post, we talked about how the IBM CIO organization approached the modernization of its legacy applications to hybrid cloud and microservices, incrementally unlocking the value from this transformation. In this post, we will focus on modernization tools for Enterprise Java applications and how the tools simplified and accelerated our process.

Since the early 2000s, IBM CIO teams have been using Java™ EE technologies, whether it's frameworks like JSF and EJBs, Service-Oriented Architecture (SOA) or aspect-oriented programming frameworks like Spring Boot. Most of these applications were developed before cloud, containers and Kubernetes existed, but they still support business-critical functions.

Our three-fold modernization objectives are as follows:

  1. Runtime modernization: Moving to a container-optimized WebSphere Liberty or Open Liberty application server from traditional WebSphere.

  2. Operational modernization: Deploying on OpenShift Container Platform.

  3. Architectural modernization: Refactoring monoliths into individually deployable and scalable microservices.

When you have hundreds of applications to modernize, it takes a long time to refactor, rewrite and decouple services. So, initially, our focus was runtime and operational modernization to take advantage of all of the native capabilities of our CIO hybrid cloud platform.

The first step for successful modernization is to assess and analyze the application, explore the integrations, identify dependencies and create a pragmatic plan to have minimal disruption for the applications. We quickly realized that we needed an automated tool to discover the dependencies, perform application inventory (Software Bill of Materials) and, most importantly, determine what code/configurations need to be changed for the modernization. That’s where IBM Cloud Transformation Advisor (TA) came to our rescue.

What is IBM Cloud Transformation Advisor?

IBM Cloud Transformation Advisor (TA) is a discovery tool that simplifies the containerization of Java applications from traditional app servers like WebSphere, WebLogic or Tomcat to WebSphere/Open Liberty. You can sign up for a trial of TA and find installation instructions here.

How does IBM Cloud Transformation Advisor work?

Transformation Advisor (TA) uses its Data Collector—a tool that gathers information about middleware deployments in your environment—to provide you with a migration analysis of Java™ EE applications running on IBM WebSphere Application Server, Apache Tomcat or Oracle WebLogic application servers. The tool generates one .zip archive per profile/domain and places the analysis results inside it. Results from the scan are uploaded to Transformation Advisor, where a detailed analysis is provided.

Alternatively, you can easily use the wsadmin tool’s migration commands to generate the reports. See the documentation for more information.

How did IBM CIO scale the Transformation Advisor deployment?

Our approach was as follows:

  • Automate the TA Data Collector script using Ansible.
  • Run the script using Ansible on all WebSphere servers.
  • Store the data collection .zip files in a Git repo.
  • Use Local TA UI instances to analyze the scan results.
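
The fan-out above can be pictured with a small sketch. The team used Ansible in practice; this Python stand-in only illustrates the shape of the automation, and the host names, collector path and output directory are hypothetical placeholders, not the real TA collector interface.

```python
# Hypothetical fan-out over WebSphere hosts: one Data Collector run per host,
# then a step to store the resulting .zip files in a Git repo. All paths and
# command shapes are placeholders for illustration.
def collector_commands(hosts,
                       collector="/opt/ta/dataCollector.sh",
                       out_dir="/tmp/ta-scans"):
    """Build one remote collector invocation per host, plus a Git commit step."""
    cmds = [f"ssh {host} {collector} --output {out_dir}/{host}.zip"
            for host in hosts]
    cmds.append(f"git add {out_dir} && git commit -m 'TA scan results'")
    return cmds
```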

We were able to instrument 174 WebSphere instances over one weekend. Transformation Advisor helped our containerization journey at these critical stages:

  • Discovery (75% effort reduction compared to DIY approach):
    • Inventories applications’ content and structure.
    • Provides a migration complexity rating.
    • Identifies potential problems for moving to the cloud.
  • Strategic planning (ready backlog for modernization):
    • Estimates effort for resolving migration issues.
    • Provides assessments for different modernization options.
  • Execution (avg. 80-100 developer hours saved per application):
    • Produces out-of-the-box code/containerization artifacts to migrate to Open Liberty/WebSphere Liberty.
    • Includes the capability to check in the files to your Git repos.

Key takeaways

  • Automation is key to accelerating a large number of application modernizations. Transformation Advisor provided a simpler and faster path from assessment through execution.
  • Transformation Advisor provided comprehensive issues lists and actionable resolutions, improving developer productivity and confidence.
  • Demonstrating early success with a few pilot applications benefits large-scale transformation projects.
  • Last, but not least, building a community of developers and sharing insights helps others in your organization to accelerate the journey.

In the next blog post, we will talk about architectural modernization of Java™ EE applications using IBM Mono2Micro—transforming monolithic applications to microservice-based architecture.

Learn more about WebSphere Hybrid Edition. Discover how to increase WebSphere ROI.

Sachin Avasthi

STSM, Lead Architect for Application Modernization

Jay Talekar

STSM, App and Data Modernization


Exploring IBM’s New Optical Character Recognition Technology Artificial intelligence Automation

4 min read


Udi Barzelay, STSM, Manager, Vision & Learning Technologies
Tal Drory, Sr. Manager, AI - Multimedia
Andrew Cabral, Product Manager, IBM Watson
Calin Furau, Product Manager - Watson Discovery & NLU

How IBM’s latest research is leading the optical character recognition (OCR) revolution and pushing the boundaries of capabilities.

Documents have always been (and continue to be) a significant data source for any business or corporation. It’s crucial to be able to scan and digitize physical documents to extract their information and represent them in a way that allows for further analysis (e.g., for a mortgage or loan process for a bank) no matter how the data is captured. Even for documents created digitally (e.g., PDF documents) the process of extracting information can be a challenge.

At IBM, we are treating this as a multi-disciplinary challenge spanning computer vision, natural language understanding, information representation and model optimization. With this approach, we are advancing the state of the art in document understanding, which allows our models to analyze the layout and reading order of complex documents and to understand visuals such as plots, charts and diagrams, representing them in a multimodal manner.

This work led to the new enhanced optical character recognition (OCR) IBM has created to digitize important, valuable business documents more easily and accurately for the enterprise to extract information for analysis.

Cleaner and more accurate extraction creates multiple benefits, including the following:

  • Accelerated workflows
  • Automated document routing and content processing
  • Reduced costs
  • Superior data security
  • Disaster recovery

Also, a variety of use cases that utilize optical character recognition technology will benefit from the enhancements IBM is making. From data extraction to automating big data processing workflows, OCR powers many systems and services used every day.

Document understanding

Document understanding is the ability to read these business documents—either programmatically or by OCR—and interpret their content so that the document can take part in an automated business process. An example of such a process utilizing OCR would be automated insurance claims processing, where data is extracted from ID cards, claim forms and claim descriptions, among other documents.

To perform the digitization of documents, optical character recognition (OCR) is utilized. OCR is composed of two stages:

  • Detection: Localize the various words in the document.
  • Recognition: Identify the comprising characters in the detected words.

This means that with OCR, we know where the words are on the document and what those words are. However, when using OCR, challenges arise when documents are captured under any number of non-ideal conditions. This can include incorrect scanner settings, insufficient resolution, bad lighting (e.g., mobile capture), loss of focus, unaligned pages and added artifacts from badly printed documents.
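
The two-stage pipeline can be sketched structurally as follows. The detector and recognizer below are hard-coded stubs standing in for trained models; only the detection-then-recognition shape of the flow reflects the description above.

```python
# Structural sketch of the two OCR stages described above. Stage 1 localizes
# words; stage 2 reads the characters inside each localized box. The stub
# outputs are invented for illustration.
from dataclasses import dataclass

@dataclass
class WordBox:
    x: int  # left edge of the word on the page
    y: int  # top edge
    w: int  # width
    h: int  # height

def detect_words(page_image):
    """Stage 1 (detection): localize the words in the document image."""
    # Stub: pretend the detector found two word regions.
    return [WordBox(10, 10, 40, 12), WordBox(60, 10, 55, 12)]

def recognize_word(page_image, box):
    """Stage 2 (recognition): identify the characters inside one detected box."""
    # Stub: a real recognizer would run a character-recognition model here.
    return "INVOICE" if box.x < 50 else "TOTAL"

def ocr(page_image):
    """Full pipeline: where each word is on the page, and what it says."""
    return [(box, recognize_word(page_image, box))
            for box in detect_words(page_image)]
```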

Our team focused on these two challenging areas to address how the next generation of OCR technology can detect and extract data from low-quality and natural-scene image documents.

Better training and accuracy

Imagine for a moment that you are going to build a computer vision system for reading text in documents or extracting structure and visual elements. To train this system, you will undoubtedly need a lot of data that has to be correctly labeled and sanitized of human errors. Furthermore, you might realize that you require a different granularity of classes to train a better model, but acquiring new labeled data is costly. The cost will likely force you to make some compromises or use a narrower set of training regimens, which may affect accuracy.

But what if you could quickly synthesize all of the data you need? How would that affect the way you approach the problem?

Synthetic data is at the core of our work in document understanding and our high-accuracy technology. As we developed our OCR model, we required significant amounts of data—data that is hard to acquire and annotate. As a result, we created new methods to synthesize data and applied optimization techniques to increase our architecture's accuracy, given that the synthetic data can be altered at will.

Now we are synthesizing data for object segmentation, text recognition, NLP-based grammatical correction models, entity grouping, semantic classification and entity linkage.

Another advantage of synthetic data generation is the ability to control the granularity and format of the labels, including different colors, font, font sizes, background noise, etc. This enables us to design architectures that can recognize punctuation, layout, handwritten characters and form elements.
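
To illustrate the point about label control, here is a hedged sketch of a synthetic sample generator. The fonts, size ranges and noise levels are invented placeholders, not IBM's actual generator; the idea is that whatever the generator chooses comes back as a perfectly accurate label for free.

```python
# Invented illustration of label control in synthetic data generation: the
# generator's own rendering parameters double as perfectly accurate labels.
import random

FONTS = ["Courier", "Helvetica", "Times"]

def synthesize_sample(text, rng):
    """Produce one training sample; every chosen parameter is also a label."""
    # A real generator would render an image from these parameters; here the
    # dict itself stands in for the (image, labels) pair.
    return {
        "text": text,                              # ground-truth transcription
        "font": rng.choice(FONTS),                 # label granularity we control
        "font_size": rng.randint(8, 24),
        "noise": round(rng.uniform(0.0, 0.3), 2),  # simulated scan artifacts
    }

rng = random.Random(42)
dataset = [synthesize_sample(word, rng) for word in ["invoice", "total", "date"]]
```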

We're excited to announce that leveraging synthetic data to train the models mentioned previously has resulted in a major update to our core OCR model, providing a significant boost in accuracy and a reduction in processing time.

Higher-level document understanding

Not all documents within an enterprise are of equal value. For example, business documents are central to the operation of a business and are at the heart of digital transformation. Such documents include contracts, loan applications, invoices, purchase orders, financial statements and many more. The information in these business documents is presented in natural language and is unstructured. Understanding these documents poses a challenge due to their complex layouts and poor-quality scans.

Now with IBM’s latest OCR technology, these critical documents can be read and the key information contained within can be extracted.


As data continues to provide the key insights enterprises need to analyze their business, understand their customers and automate workflows, document-understanding technology like optical character recognition (OCR) is more important than ever.

IBM’s latest research is leading the OCR revolution by pushing the boundaries of OCR capabilities and raising the standard for OCR in the development community. We’re committed to improving our product and providing our customers with the highest level of performance and accuracy possible.

This new OCR technology is being rolled out across all IBM products utilizing OCR and will allow users to digitize important, valuable business documents more easily and accurately for the enterprise to extract information for analysis.

To learn more, check out the documentation and release notes.

The new OCR technology is already available in IBM Watson Discovery—try it out and get started today.

Udi Barzelay

STSM, Manager, Vision & Learning Technologies

Tal Drory

Sr. Manager, AI - Multimedia

Andrew Cabral

Product Manager, IBM Watson

Calin Furau

Product Manager - Watson Discovery & NLU


Think Like a Chief Automation Officer to Deliver a Reliable Consumer Experience in Unreliable Times Artificial intelligence Automation

3 min read


IBM Cloud Team, IBM Cloud

The Weather Company shares how they're using intelligent automation to make sure consumers have the right information at their fingertips.

How often do you check the weather each day? According to one estimate by the US Department of Commerce, the majority of Americans check the weather forecast 3.8 times per day, equating to 301 billion forecasts consumed per year. And that's just Americans, not everyone worldwide.

Millions of decisions are made every day based on the weather forecast because it affects so much of our lives. For individuals, it can be deciding what to wear, where to shelter or when to travel. For businesses, it can be deciding where to route trucks or how to keep employees safe. 

When it comes to delivering a reliable consumer experience in unreliable times, The Weather Company, an IBM Business, is a good model. As the world’s most accurate weather forecaster overall, it strives to consistently deliver industry-leading forecasting—without any disruption or delays—which is challenging during times of peak demand. For example, during extreme weather events like hurricanes or winter storms, The Weather Company’s applications (like The Weather Channel app) typically see a 50-75% increase in users looking for weather information. 

What can we learn from The Weather Company when it comes to ensuring a reliable consumer experience? 

In the first session of the IBM and Bloomberg series “Intelligent Automation: Transformation in a Time of Uncertainty,” Travis Smith, IBM Distinguished Engineer and Head of Data and AI for The Weather Company, discusses the need to bring enterprise observability to their systems so they can see, in real time, where traffic is occurring across their applications, where they need to put more emphasis and when events could cause downtime.

“Our weather forecasting computes every 15 minutes. We constantly change our forecast in order to make sure you have the right information at your fingertips to help you figure out what you need to do with your life and also keep you safe,” says Smith.

At The Weather Company, thinking like a Chief Automation Officer (CAO) means moving from traditional application monitoring to observability in order to increase resiliency, reduce downtime and maintain the highest quality consumer experience. While monitoring can tell when something’s wrong, observability can tell what’s happening, why it’s happening and how to fix it. Imagine being able to know how everything is performing, everywhere, all at once.

Smith’s advice to leaders looking to ensure a reliable consumer experience is two-fold:

  1. Adopt automated observability for deep, 360-degree visibility into your distributed systems: This allows for faster problem identification and resolution through automation. Instead of measuring your resiliency in terms of mean time to recovery, use data observability and AI to avoid incidents in the first place. This is critical for IT resilience and for meeting The Weather Company's goal of providing consumers with reliable access to weather information wherever they are, whenever they need it, despite the increasing volatility of the weather.
  2. Cultivate a team that’s open to new and emerging technology: In addition to observability, The Weather Company team is investigating different uses of generative AI, such as writing code to increase developer productivity and accelerate the onboarding of talent. They're also looking at using generative AI to provide smarter messages to consumers. For example, Smith explains how most people may only need a “snow is coming” alert the night before while a single parent may need the same alert earlier so they have more time to prepare. 

By adopting the mindset of a CAO, Smith and his team are making the best decisions about where to apply intelligent automation to make IT systems more proactive, their DevOps and QA teams more productive and, ultimately, their consumers more aware and prepared. 

Learn more

To learn more about how enterprise observability can lead to faster, automated problem identification and resolution, get the guide.

Watch the full 12-minute IBM and Bloomberg session with Travis Smith from The Weather Company.

IBM Cloud Team

IBM Cloud


IBM Cloud Storage for FedRAMP: A Secure and Compliant Solution Storage

3 min read


Norton Samuel Stanley, Lead Software Engineer - IBM Cloud Storage
Josephine Justin, Architect

How IBM Cloud storage with FedRAMP can provide significant benefits.

In today's digital age, data is an essential asset for businesses, organizations and governments alike. As the amount of data generated and stored continues to grow, so does the need for secure and compliant storage solutions. This is where IBM Cloud comes in, offering a range of storage options that are both secure and compliant with the Federal Risk and Authorization Management Program (FedRAMP).

IBM Cloud storage solutions that are compliant with FedRAMP can be used in various use cases by government agencies and other organizations that require secure and compliant storage options. Let's explore some use cases where IBM Cloud storage with FedRAMP can provide significant benefits.

FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization and continuous monitoring for cloud products and services. It is designed to ensure that government agencies and other organizations can confidently use cloud services that meet the highest security and compliance standards.

IBM Cloud offers several FedRAMP-compliant storage solutions, including IBM Cloud Object Storage, IBM Cloud File Storage and IBM Cloud Block Storage. Let's take a closer look at each of these solutions and their benefits.

IBM Cloud Object Storage

IBM Cloud Object Storage is a highly scalable and durable storage solution that allows you to store and access large amounts of unstructured data. It is designed to support a variety of workloads, including backup and recovery, archive and content management. With built-in encryption and access controls, you can be confident that your data is secure and compliant.

IBM Cloud File Storage

IBM Cloud File Storage is a fully managed, highly available and scalable file storage solution that allows you to share files across multiple systems and applications. It is designed to support high-performance workloads, such as media and entertainment, genomics, and financial services. With automatic data backups and snapshots, you can ensure that your data is always available and protected.

IBM Cloud Block Storage

IBM Cloud Block Storage is a high-performance, low-latency storage solution that is designed for I/O-intensive workloads, such as databases, analytics and virtual machines. It offers high availability, data protection and the flexibility to configure storage performance to meet specific workload requirements. With encryption, access controls, and logging, you can be confident that your data is secure and compliant.

To learn more about the distinctions between file, object and block storage, as well as which type is most suitable for your requirements, read here.

Benefits of IBM Cloud Storage and FedRAMP

By choosing IBM Cloud storage solutions that are FedRAMP compliant, you can enjoy the following benefits:

  • Confidence that your data is secure and compliant with the highest standards
  • Flexibility to choose the right storage solution for your specific workload requirements
  • Scalability to handle large amounts of data and growth over time
  • High availability to ensure that your data is always accessible
  • Automatic backups and snapshots for data protection
  • Cost-effectiveness through pay-as-you-go pricing models

IBM Cloud Storage FedRAMP use cases

Government agencies

Government agencies at all levels require secure and compliant storage solutions to store sensitive data like citizen data, military data and classified information. IBM Cloud storage solutions with FedRAMP compliance can provide government agencies with the necessary security and compliance to meet the strict regulatory requirements.

Healthcare organizations

Healthcare organizations, including hospitals, clinics and research centers, generate and store a vast amount of sensitive patient data. IBM Cloud storage solutions with FedRAMP compliance can provide secure and compliant storage for electronic medical records (EMR), imaging data, genomics data and other sensitive healthcare data.

Financial institutions

Financial institutions, such as banks and insurance companies, generate and store large amounts of sensitive financial data. IBM Cloud storage solutions with FedRAMP compliance can provide secure and compliant storage for financial data like transaction data, customer information and financial reports.

Media and entertainment companies

Media and entertainment companies, including studios, broadcasters and streaming services, generate and store large amounts of media content, such as videos, music and images. IBM Cloud storage solutions with FedRAMP compliance can provide secure and compliant storage for media content, enabling companies to store, manage and distribute their media assets.  

Research institutions

Research institutions, such as universities and research centers, generate and store a lot of research data, including experimental data, simulations and scientific data. IBM Cloud storage solutions with FedRAMP compliance can provide secure and compliant storage for research data, enabling researchers to store, manage and share their research data securely and efficiently.


In conclusion, IBM Cloud Storage solutions with FedRAMP compliance can provide secure and compliant storage options for various use cases and ways of managing your data. Whether you are a government agency, a healthcare organization, a financial institution, a media and entertainment company or a research institution, IBM Cloud storage solutions can provide you with the necessary security, compliance, scalability and cost-effectiveness to meet your specific storage requirements.

Norton Samuel Stanley

Lead Software Engineer - IBM Cloud Storage

Josephine Justin

Architect


Mitigate Phishing and Business Email Compromise with IBM Security® Guardium® Insights Security

4 min read


Katie Schwarzwalder, Product Marketing Manager, IBM Security Guardium

The Threat Intelligence Index helps you understand common attack types. IBM Security Guardium Insights can help protect your data from those attacks.

As data grows and shifts rapidly to the cloud, threat actors are on the prowl now more than ever. The IBM Security X-Force Threat Intelligence Index 2023 reported that for the second year in a row, phishing was the leading infection vector, with 41% of attacks using this method. Additionally, the report found that 6% of attacks involved business email compromise.

A modern data security platform needs to be designed to help companies address their data security and compliance needs. The IBM Security Guardium Insights risk-based user experience provides context to help you build a clearer story around your data. The solution feeds risk insights into advanced analytics and provides actionable intelligence to help users respond quickly and efficiently to events as they occur. Read on to see how Guardium Insights can improve your data security and compliance strategy.

What is business email compromise?

Business email compromise (BEC)—also known as email account compromise (EAC)—is one of the most financially damaging online crimes. It exploits the fact that so many of us rely on email to conduct business, both personal and professional. Attackers know that organizations of all sizes prioritize the security of their email and, unfortunately, sometimes things get through.

Guardium Insights features

To be prepared, there are several ways to protect yourself. Within Guardium® Insights, we provide a risk-scoring engine. The risk-based dashboard highlights risk events based on database, database user and operating system users. This dashboard gives you an at-a-glance view of what's happening with your organization's data security and compliance risk, and it can alert the security team when an anomaly occurs that may be the result of BEC. When you dig deeper into risk events, a banner at the top helps you understand what the tool can do. If you'd like to reduce noise and apply exclusions, such as excluding test databases, you can do that in the risk-scoring engine as well.

You can also create response rules to automate the handover to your security operations center. If BEC is suspected, the risk level is high and the event involves a database user who is an admin, you might want to create a ticket in ServiceNow® for the security team to pursue.

Addressing risk events

Now, let's dive into the manual side of risk events to see how you can use the Guardium Insights risk engine further. One thing you might want to do is create a preset that gives you a filtered view of your data points. For example, you may want to create a preset that shows the data leaks that are critical. Once you save a preset, you can then shift back and forth between the various preset views of the data.

Phishing is a cybercrime in which targets are contacted by attackers posing as a trusted party to lure them into providing sensitive data, such as personally identifiable information, banking and credit card details and passwords. The information is then used to access important accounts and can result in identity theft and financial loss. If you were investigating a critical risk like a phishing attack, you could explore the details in the Risk events view. You can see additional details about what's happening to your sensitive data within the report.

You could learn more about the phishing incident from the Risk events view. The findings table shows a list of datapoints sorted by time range. You can see the policy violations and outliers.​ You can also click any item to see more information about the specific outlier, policy violation, or anomaly. 

You may also wish to dive into the classification records to see what types of data exist within the data sources. Looking at the data table, you would be able to tell whether birth certificates and street addresses are present (which is private information). Given such classification records, you should treat this potential incident carefully.

If you have investigated and determined that there is something to be concerned about, you may need to go ahead and respond. The Respond | Tune button helps you respond tactically to a risk event. ​You could manually create a ticket based on the tools you have already integrated with Guardium Insights, such as ServiceNow or CP4S SOAR.​ Or if you've done your investigation and think it’s a false positive, you might want to close the risk event and exclude that event from future profiling. ​Reducing these false positives is essential to finding the signal in the noise and prioritizing your team’s resources.  

Guardium Insights and its powerful risk engine can help you connect the dots of different data points to gain a new level of understanding to assist your business in doing the following:

  • Reduce business silos​
  • Create actionable intelligence​
  • Simplify response​
  • Quickly respond to data risk​

Watch IBM Security Guardium Insights in action.

Check out the 2023 Threat Intelligence Index

With cyberattacks becoming more sophisticated and frequent, it is critical for organizations to understand the tactics employed by threat actors. The IBM Security X-Force Threat Intelligence Index 2023 provides actionable insights to help CISOs, security teams and business leaders proactively protect their organizations. In this landscape, IBM Security Guardium Insights offers a solution to gain visibility, ensure compliance and provide robust data protection throughout the data security lifecycle.

Get started with IBM Security Guardium Insights

To learn more about how your organization can benefit from Guardium Insights, we invite you to check out the following:

Read the full IBM Security X-Force Threat Intelligence Index 2023.

Katie Schwarzwalder

Product Marketing Manager, IBM Security Guardium


Managing AWS EC2 Pricing and Usage with IBM Turbonomic Automation

4 min read


Dina Henderson, Product Marketing Manager

How IBM Turbonomic can help you manage AWS EC2 pricing and usage while assuring your applications performance.

Amazon Elastic Compute Cloud (EC2) is the most widely used service on AWS. It provides compute capacity in the cloud and has a wide range of virtual machines (VMs) known as EC2 instances. These instances run an operating system on top of resources like CPU, memory and hard disk. With hundreds of different instance types at various price points, managing AWS EC2 pricing and usage can be a challenging task, but IBM Turbonomic can help identify which virtual machine instances are best suited for particular workloads.

Selecting the right VM instances

When it comes to managing EC2 pricing and usage, choosing the right VM instance is crucial. Each instance type is optimized for a specific use case, such as compute-intensive, memory-intensive or storage-intensive workloads. The instance type determines the resource capacity of the virtual machine and its hourly price. You can launch instances manually from the AWS Management Console, and you are only charged for the instances while they are running. When you're done with your instances, you can spin them down and stop paying for them.

AWS lists the EC2 instance families and their corresponding specifications on its website.

With Turbonomic, users can easily identify which virtual machine instances are best suited for their workloads based on their performance requirements and budget. Our software continuously analyzes EC2 usage patterns in real time and recommends the most cost-effective virtual machine instance that meets performance needs. Users can thus ensure they are using the right virtual machine instance for their workload and not overspending on unnecessary compute resources.

Optimizing EC2 pricing

Managing EC2 pricing can be a challenging task, especially when dealing with a large number of instances. EC2 pricing depends on several factors, such as instance type, region, operating system and usage time, making it difficult to keep track of costs and optimize efficiently.

Turbonomic simplifies this process by providing real-time visibility into usage and costs, analyzing usage patterns, and generating cost-saving actions without sacrificing performance. Additionally, our software helps right-size instances by identifying underutilized instances and recommending more cost-effective instance types.

After deciding on the EC2 instance that best fits your use case, you’ll need to choose a pricing option. AWS offers several EC2 instance pricing options, such as On-Demand pricing, Savings Plans and Reserved Instances. In this post, we’ll focus on On-Demand pricing and Reserved Instances.

On-Demand pricing allows you to pay for the compute capacity you need without any long-term commitments in a pay-as-you-go model. Reserved Instances (RIs) allow you to reserve EC2 capacity for a period of one to three years at a significant discount compared to On-Demand pricing. Turbonomic's actions are RI-aware, and our software scales virtual machines to maximize RI utilization.

AWS EC2 pricing: On-Demand

On-Demand EC2 pricing provides you the convenience and flexibility of choosing any instance type and size and paying only for what you use, with no upfront payments or long-term commitments. Billing is done per hour or per second, and prices vary based on the instance type and size, the operating system, and the region.

On-Demand pricing for EC2 instances is available for all operating systems, regions, and availability zones, and it is the default pricing option. However, its cost varies depending on the same parameters. Although On-Demand EC2 pricing offers convenience and flexibility, it's also the most expensive option. It's best suited for unpredictable workloads or applications.

AWS EC2 pricing: Reserved Instances

Opting for Reserved Instance EC2 pricing can result in significant cost savings of up to 72% compared to On-Demand EC2 pricing. In addition, you'll have the ability to reserve capacity in a specific availability zone, simplifying the process of launching new instances on an as-needed basis. This option requires a commitment to consistent usage for a duration of one to three years.

The amount of discount you'll receive through Reserved Instance EC2 pricing is determined by your upfront payment. Paying the full amount upfront results in the largest discount and greatest savings. Partial upfront payment is another option, with a lower discount but lower upfront costs. Finally, if you prefer not to pay anything upfront, you can still benefit from a smaller discount and allocate the saved funds to other projects.
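As a rough back-of-the-envelope illustration of how these discount tiers play out, consider one instance running around the clock for a year. The hourly rate and the 40% all-upfront discount below are hypothetical figures chosen for the arithmetic, not published AWS prices:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(hourly_rate, discount=0.0):
    """Annual cost of one always-on instance at a fractional discount."""
    return hourly_rate * HOURS_PER_YEAR * (1 - discount)

on_demand = annual_cost(0.192)          # hypothetical On-Demand hourly rate
reserved = annual_cost(0.192, 0.40)     # hypothetical 40% all-upfront RI discount
print(round(on_demand, 2))              # 1681.92
print(round(reserved, 2))               # 1009.15
print(round(on_demand - reserved, 2))   # 672.77
```

The same function makes it easy to compare partial-upfront and no-upfront tiers by plugging in their smaller discount fractions.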

If you want to learn more about AWS EC2 pricing, check out the Amazon EC2 pricing page.

Get started

Managing EC2 pricing and usage can be a complex and time-consuming task, especially as workloads grow. With IBM Turbonomic, you can simplify the process and optimize your cloud for maximum efficiency and cost savings. By selecting the right VM instances and optimizing your costs, you can ensure that you're getting the most out of your EC2 instances while keeping your cloud costs under control.

IBM Turbonomic can help you manage AWS EC2 pricing and usage while assuring your application's performance. Get started by trying out the IBM Turbonomic Sandbox or request your IBM Turbonomic demo today.

Dina Henderson

Product Marketing Manager


The Next Era of Serverless

3 min read


Jason McGee, IBM Fellow, VP, and CTO

How IBM is enabling Serverless 2.0.

For years, a topic of conversation at KubeCon has been the developers’ complicated relationship with Kubernetes. To understand their concerns, think about driving a car. You want to be able to drive the car without worrying about the engine under the hood, and many developers felt the same way about Kubernetes. They were spending too much time worrying about the underlying infrastructure. What makes the problem worse is that developers want to deploy different types of workloads on Kubernetes, including containerized applications, functions for event-driven workloads or batch jobs. At IBM, we realized that all these scenarios should be addressed by a single serverless platform, based on open-source technologies.

Where we started: Serverless 1.0

In 2021, IBM announced IBM Cloud Code Engine to help developers in any industry build, deploy and scale applications in seconds, while only paying when code is running. Code Engine was made available as a fully managed, serverless offering on IBM Cloud, the industry’s most secure and open cloud for business. This was the industry’s introduction to Serverless 1.0, which was focused on enabling the implementation of endpoints (REST APIs, web apps, etc.).

While Serverless 1.0 offered many benefits to developers, only a small percentage of applications could run as serverless functions. In particular, it didn’t account for heavy-duty computational applications that process large workloads or analyze data.

Where we are going: Making serverless the default, rather than the exception 

IBM Cloud Code Engine has been focused on enabling the next era of serverless: Serverless 2.0. We saw this first emerge with containers, which are now the de facto standard for packaging applications. Developers want an infrastructure where cloud users can run containers without worrying about ownership and management of the computing infrastructure or Kubernetes cluster on which they run. With IBM Cloud Code Engine serverless computing, IBM deploys, manages and autoscales our clients’ clusters. The serverless option enhances overall productivity and decreases time to deployment—a win/win for developers.

More recently, IBM has been taking the serverless approach to more complex offerings, such as high-performance computing (HPC). While the industry has long understood the benefits of HPC for running massive simulations (which is especially critical for sectors like financial services that need to continuously assess risk), enterprises were spending considerable amounts on hardware to support their computing needs. With a serverless architecture, IBM clients can cut out the hardware costs and work on an execution-based pricing model where they only pay for the services they need.

Why IBM Cloud Code Engine? 

IBM Cloud Code Engine is a fully managed serverless platform. It allows our clients to deploy and run almost any workload, whether it’s source code, containers, batch jobs or event-driven functions.

Learn more about IBM Cloud Code Engine.

Meet us at KubeCon

The IBM booth at KubeCon (located in the middle of Solutions Showcase Hall between the two food areas) will be the best place to meet and talk to IBMers. You can also view and register for all the IBM sessions at KubeCon.

Jason McGee

IBM Fellow, VP, and CTO


The Evolution of Zero Trust and the Frameworks that Guide It

5 min read


David Heath, Americas Sales Leader, IBM Sustainability Software

What is zero trust, and what frameworks and standards can help implement zero trust security principles into your cybersecurity strategies?

Many IBM clients want to know what exactly zero trust security is and whether it applies to them. Understanding the zero trust concept and how it has evolved will help you implement it to protect your company’s most valuable assets.

What is zero trust?

Zero trust is a framework that assumes every connection and endpoint is a potential threat, whether it originates outside or inside a company’s network. It enables companies to build a thorough IT strategy to address the security needs of a hybrid cloud environment. Zero trust implements adaptive and continuous protection, and it provides the ability to proactively manage threats.

In other words, this approach never trusts users, devices or connections for any transactions and will verify all of these for every single transaction. This allows companies to gain security and visibility across their entire business and enforce consistent security policies, resulting in faster detection and response to threats.

The introduction of zero trust

Zero trust began in the "BeyondCorp" initiative developed by Google in 2010. The initiative’s goal was to secure access to resources based on identity and context, moving away from the traditional perimeter-based security model. This strategy allowed Google to provide employees with secure access to corporate applications and data from anywhere, using any device, without the need for a VPN.

In 2010, Forrester Research analyst John Kindervag coined the term "zero trust" to describe this new security paradigm in a report titled “The Zero Trust Model of Information Security.” He proposed a security model that assumes no one—whether inside or outside the organization's network—can be trusted without verification. The report outlined a zero trust model based on two primary principles: “never trust, always verify” and least privilege.

All users, devices and applications are assumed to be untrusted and must be verified before they are granted access to resources. The principle of least privilege means that every user or device is granted the minimum level of access required to perform their job, and access is only granted on a need-to-know basis.

Since then, the concept of zero trust has continued to gain momentum, with many organizations adopting its architectures to better protect their digital assets from cyber threats. It encompasses various security principles and technologies that are deployed to strengthen security and reduce the risk of security breaches.

Types of zero trust security models
  • Identity-based zero trust: This model is based on the principle of strict identity verification, where every user or device is authenticated and authorized before accessing any resources. It relies on multi-factor authentication, access controls and least-privilege principles.
  • Network-based zero trust: This focuses on securing the network perimeter by segmenting the network into smaller segments. It aims to reduce the attack surface by limiting access to specific resources to authorized users only. This model uses technologies like firewalls, VPNs and intrusion detection and prevention systems.
  • Data-based zero trust: This model aims to protect sensitive data by encrypting it and limiting access to authorized users. It employs data classification and labeling, data loss prevention and encryption technologies to protect data at rest, in transit and in use.
  • Application-based zero trust: This focuses on securing applications and their associated data. It assumes that all applications are untrusted and must be verified before accessing sensitive data. It uses application-level controls—such as runtime protection and containerization—to protect against attacks like code injection and malware.
  • Device-based zero trust: This model secures the devices themselves (e.g., smartphones, laptops and IoT devices). It assumes that devices can be compromised and must be verified before accessing sensitive data. It employs device-level security controls, such as endpoint protection, device encryption and remote wipe capabilities.

These models are designed to work together to create a comprehensive zero trust architecture that can help organizations to reduce their attack surface, improve their security posture and minimize the risk of security breaches. However, it's important to note that the specific types of zero trust security models and their implementation may vary depending on the organization's size, industry and specific security needs.

Zero trust has become a popular approach to modern cybersecurity. It has been embraced by many organizations to address the growing threat of cyberattacks and data breaches in today's complex and interconnected world. As a result, many technology vendors have developed products and services that are specifically designed to support zero trust architectures.

What is the National Institute of Standards and Technology (NIST)?

There are also many frameworks and standards that organizations can use to implement zero trust security principles in their cybersecurity strategies with the guidance of the National Institute of Standards and Technology (NIST).

NIST is a non-regulatory government agency within the U.S. Department of Commerce, aimed at helping companies better understand, manage and reduce cybersecurity risks to protect networks and data. It has published two highly recommended, comprehensive guides on zero trust:

NIST SP 800-207, Zero Trust Architecture

NIST SP 800-207, Zero Trust Architecture was the first publication to establish the groundwork for zero trust architecture. It provides the definition of zero trust as a set of guiding principles (instead of specific technologies and implementations) and includes examples of zero trust architectures.

NIST SP 800-207 emphasizes the importance of continuous monitoring and adaptive, risk-based decision-making. It recommends implementing a zero trust architecture around the Seven Pillars of Zero Trust (traditionally known as the Seven Tenets of Zero Trust).

Seven Pillars of Zero Trust
  1. All data sources and computing services are considered resources.
  2. All communication is secured regardless of network location.
  3. Access to individual enterprise resources is granted on a per-session basis.
  4. Access to resources is determined by dynamic policy—including the observable state of client identity, application/service and the requesting asset—and may include other behavioral and environmental attributes.
  5. The enterprise monitors and measures the integrity and security posture of all owned and associated assets.
  6. All resource authentication and authorization are dynamic and strictly enforced before access is allowed.
  7. The enterprise collects as much information as possible about the current state of assets, network infrastructure and communications and uses it to improve its security posture.
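As a toy illustration of pillars 3 through 6, a per-session access decision driven by dynamic signals might be sketched as follows. The signal names and the risk threshold are illustrative inventions, not part of NIST SP 800-207:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool      # identity/MFA check passed (pillar 6)
    device_posture_ok: bool  # measured device integrity (pillar 5)
    risk_score: float        # behavioral/environmental signal, 0.0-1.0 (pillar 4)

def grant_session(req, max_risk=0.5):
    """Per-session, dynamic decision: deny unless every signal passes."""
    return req.user_verified and req.device_posture_ok and req.risk_score <= max_risk

print(grant_session(AccessRequest(True, True, 0.2)))   # True: all signals pass
print(grant_session(AccessRequest(True, False, 0.2)))  # False: bad device posture
```

The key point the pillars make is that this evaluation happens for every session, not once at the network perimeter.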

Overall, NIST SP 800-207 promotes an overall approach to zero trust that is based on the principles of least privilege, micro-segmentation and continuous monitoring, encouraging organizations to implement a layered security approach that incorporates multiple technologies and controls to protect against threats.

NIST SP 1800-35B, Implementing a Zero Trust Architecture

NIST SP 1800-35B, Implementing a Zero Trust Architecture is the other highly recommended publication from NIST and covers two main topics:

  1. IT security challenges for private and public sectors.
  2. “How-to” guidance to implement a zero trust architecture in enterprise environments and workflows with standard-based approaches, using commercially available technology.

The publication correlates IT security challenges (applicable to private and public sectors) to the principles and components of a zero trust architecture so that organizations can first properly self-diagnose their needs. They can then adopt the principles and components of a zero trust architecture to meet the needs of their organization. Therefore, NIST SP 1800-35B does not identify specific types of zero trust models.

Maintaining continuity between architecture(s) and framework(s) as zero trust evolves

NIST leverages iterative development for the four zero trust architectures it has implemented, giving it the flexibility to make incremental improvements and maintain continuity with the zero trust framework as it evolves over time.

The four zero trust architectures implemented by NIST are as follows:
  1. Device agent/gateway-based deployment.
  2. Enclave-based deployment.
  3. Resource portal-based deployment.
  4. Device application sandboxing.

NIST has strategic partnerships with many technology organizations (like IBM) that collaborate to stay ahead of these changes and emerging threats.

This collaboration allows IBM to prioritize development so that its technology solutions align with the seven tenets and principles of zero trust, securing and protecting IBM clients’ systems and data.

Learn more

Learn more about the importance of zero trust in IBM’s 2022 Cost of a Data Breach Report or directly connect with one of IBM’s zero trust experts.

David Heath

Americas Sales Leader, IBM Sustainability Software


Ransomware Protection with Object Lock

5 min read


Mark Seaborn, Senior Security Architect

Looking at Object Lock, versioning and immutability.

In my last post, I wrote about how to protect data from ransomware attacks using IBM Cloud Object Storage, its versioning system and separation of duty to store data in the cloud. In this post, I will expand on the fundamentals of using versioning as a basic concept for ransomware protection to include Object Lock.

I will outline the difference between Object Lock and versioning and discuss some additional threat vectors that will be addressed when using Object Lock as a defense against ransomware. As was true with versioning, the Object Lock technology does not introduce additional hardware or software components to the solution, nor is there an additional fee for the feature itself. Users just pay for the amount of data that is used.

Object Lock, versioning and immutability

The two features—Object Lock and versioning—are closely related in that Object Lock is, in effect, a versioning system with immutable objects. Though versioning and immutability might seem diametrically opposed, the two concepts combine to create a strong defense against ransomware attacks. I will back up this claim of a "strong defense" later in this post, where I compare Object Lock and versioning.

First, I’ll describe details about the Object Lock system that position it to address additional threat vectors. With Object Lock, end users can control retention policies for each object in an Object Lock-enabled bucket. Unlike classical immutable objects, data owners that have placed objects in buckets with Object Lock enabled can add new versions of objects to the bucket. Writes to the bucket using the same object names do not replace the existing objects but instead create new versions of them. The key difference between Object Lock and plain versioning is that the version history is immutable. This means that though new versions of the objects may be created, the version history cannot be modified outside the retention policy.

Object Lock and threat vectors

Using Object Lock as a data protection strategy will defend against ransomware attacks where the goal of the attack is to replace a victim’s data with encrypted data. In this scenario, only the adversary attacking the system is in control of the key material to decrypt the data. This ransomware defense will not address situations where the goal of the attacker is to use ransomware to exfiltrate data, then demand ransom to prevent leaking captured data.

There are several ways to prevent attackers from exfiltrating data. For instance, in addition to least privileged access controls, one can layer on context-based restrictions to prevent unauthorized access of the data. One could also apply strong encryption to data both in flight and at rest to defend against adversaries that manage to exfiltrate data. Encrypting the data on the client side before storing it in the cloud will help protect your data from attacks that might be mounted inside the cloud itself. The previous examples are just a few controls that could be applied to improve the security of your data.

As was alluded to earlier, the goal of using a system that includes versioning as a strategy to defend against ransomware is to prevent the overwrites that make the data unusable in the first place. The data is, of course, unusable because any software or human attempting to use the data cannot read the data after the adversary has encrypted it. The inability to access data will negatively impact the day-to-day business operations.

Adversaries attempt to use this data unavailability as the leverage against the victim. They demand payment to hand over encryption keys and instructions to decrypt the data and reenable business operations. Businesses that rely on their data for critical functions that generate revenue often see paying the adversary as the lesser of two evils. They must decide which is worse—the loss of revenue and/or customers that cannot use the services or paying the adversary to retrieve the keys and instructions to unlock their data. I will point out that paying the adversary is fraught with danger. What assurances does the victim have that the adversary will even turn over the keys and instructions to restore the data? The adversary may simply vanish after payment has been made.

This is where versioning is extremely helpful. By creating an environment where the only option to update the data is to create a new version, no adversary can permanently disable an enterprise. Attempts to encrypt the data and store it back to a bucket only generate a new version of the data, not replace it. This still means that the victim could see some disruption to daily operations, but they are able to help themselves. They are not reliant on the adversary’s good will.

A victim, armed with environment-specific information, can sort through the version history and find a useable version of the data. The cyber-recovery plan for the enterprise will need to consider automation that will allow version history inspection to find a useable version of the data. Manual attempts to restore large volumes of data will likely not reenable the enterprise's business functions in a timely manner, resulting in unacceptable losses of revenue. This automation will need to consider situations where the ransomware may have written the data more than once, creating multiple versions of the encrypted data atop the useable data.
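A minimal sketch of that inspection logic, with the environment-specific "is this version usable?" check abstracted into a predicate—the sample history and the marker-byte test are purely illustrative:

```python
def first_usable_version(versions, is_usable):
    """Walk the version history newest-first and return the first version
    that passes the environment-specific usability check, skipping any
    encrypted versions the ransomware may have written on top."""
    for v in versions:  # assumed ordered newest -> oldest
        if is_usable(v):
            return v
    return None

history = [
    {"version_id": "v4", "body": b"ENCRYPTED"},      # ransomware wrote twice
    {"version_id": "v3", "body": b"ENCRYPTED"},
    {"version_id": "v2", "body": b"PK\x03\x04..."},  # last good copy
    {"version_id": "v1", "body": b"PK\x03\x04..."},
]
good = first_usable_version(history, lambda v: not v["body"].startswith(b"ENCRYPTED"))
print(good["version_id"])  # v2
```

In practice the predicate might validate magic bytes, a checksum manifest or a successful parse, and the walk would be driven by a list-object-versions call against the bucket.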

Both Object Lock and versioning with separation of duty accomplish the goal of preventing the adversary from overwriting useable data with encrypted data, making it inaccessible to the victim. However, Object Lock offers protection against additional threats, human error and the insider attack. When using versioning and separation of duty alone as the defense, there is nothing stopping the attack from coming from inside the organization by the victim’s own administrators. If an administrator were to install the ransomware with elevated privileges, any "version-aware" ransomware could theoretically encrypt all versions of the data. A different exploit could be to simply remove all versions of the data and leave only a single encrypted version in the data store.

Another cold hard truth is that administrators are as human as the rest of us and make mistakes. It could be that the administrative credentials were simply mishandled (such as being left in a history file and extracted through reconnaissance by the attacker). Credentials could also be obtained from an unwitting employee through a phishing attack. However they are obtained, accidental leakage of credentials would allow the attacker to adversely manipulate the version history.

Object Lock's version history tree and any current version of the data are immutable. This means that not even administrators can modify a version of the data outside the retention policy. When using Object Lock, even if the administrative credentials were leaked, the ransomware could not destroy useable data. Clearly Object Lock can remove two critical attack vectors against the versioning approach to protecting data from ransomware: insider attack and human error.


In this post I introduced using Object Lock to address additional threat vectors when defending data against ransomware. Versioning with good separation of duty and Object Lock are both strategies that can successfully prevent ransomware from replacing valuable data with encrypted data.

However, Object Lock has additional benefits. When using Object Lock as a defense strategy, the version tree is protected against accidental or intentional modification. Because the version history is immutable, adversaries that manage to gain access to elevated privileges to the object storage account credentials cannot encrypt all previous versions of the data or, worse, remove all versions of the data. This is true of adversaries that are external to the victim’s organization or internal to the organization. Disgruntled administrators (or anyone, for that matter) do not have permissions to remove or modify data outside the retention policy once it has been written. Object Lock has the benefit of protecting the version tree from insider threats, whereas versioning without Object Lock will only prevent adversaries without proper credentials from attacking the version tree.

Start your defense today

Take a tour of IBM Cloud Object Storage’s Object Lock feature to see how it can be employed in your own ransomware defense strategy.

Mark Seaborn

Senior Security Architect


How Went Serverless with IBM Cloud Code Engine

2 min read


Uwe Fassnacht, Product Director for IBM Cloud Code Engine

In today's digital age, virtual events have become more popular than ever.

That is why set out to transform the way organizations plan, run and evaluate in-person, virtual and hybrid events of any scale. makes it effortless and efficient to manage all aspects of a successful event—from pre-event communication and invitation to on-site check-in and attendee guidance to post-event evaluation.

Going serverless to manage infrastructure more efficiently

However, as the company grew and their customer base expanded, they realized that they needed to find a more efficient way to manage their infrastructure and keep up with the increasing demands of their clients. That's when turned to IBM Cloud Code Engine to go serverless and streamline their operations.

As part of their cloud journey, started adopting their own Kubernetes cluster. While this solution worked well initially, they soon realized that they needed to shift to a more serverless architecture to have their costs correlate with actual customer usage. With the goal of scaling reliably and only paying for what they actually use, they decided to deploy steady-state workloads on IBM Cloud Kubernetes Service on virtual private cloud, while scaling workloads on Code Engine.

Why went with IBM Cloud Code Engine

IBM Cloud Code Engine is a fully managed serverless platform that allows developers to build and deploy containerized applications quickly and easily. With Code Engine, developers can focus on writing code and delivering new features and functions to their customers, instead of worrying about infrastructure. By moving to Code Engine, was able to increase its development velocity and time-to-market for business value, shipping faster and keeping up with the increasing demands of its clients.

In addition to the benefits of serverless computing, also benefited from the high degree of automation that Code Engine provided. By using IBM Cloud DevOps toolchains and deploying via Terraform templates, was able to save on IT Operations costs and minimize user errors.

The integrated observability platform for monitoring and logging further enhanced the platform's functionality, ensuring that everything ran smoothly and efficiently. was also able to augment its Code Engine deployment with other IBM Cloud services to store and distribute data, such as Cloud Object Storage, PostgreSQL and Event Streams. All of these services are managed platforms, which means that can focus on its core business and leave the management of infrastructure to IBM Cloud.

The journey to serverless computing with IBM Cloud Code Engine has allowed them to scale reliably and pay only for what they use, while increasing their development velocity and time-to-market for business value. With the added benefits of automation and observability, the company is now better positioned than ever to meet the demands of its clients and continue to grow and innovate in the virtual event space.

According to Sven Frauen, CIO & Co-Founder of, “IBM Code Engine empowers us at to handle peak demands (for example, for our email infrastructure with campaigns for large events). The auto-scaling capabilities allow us to focus on delivering value without having to worry about infrastructure management.”

Get started

Learn more about IBM Cloud Code Engine and (and their innovative virtual event platform).

Uwe Fassnacht

Product Director for IBM Cloud Code Engine


How to Migrate Buckets from One Cloud Object Storage Instance to Another

3 min read


Daroush Renoit, Solution Developer

This blog post shows how to migrate all buckets from one IBM Cloud Object Storage (COS) instance to another in the US region.

This is done using a Python script. The source COS instance is the one from which you are migrating, and the target/destination COS instance is the one to which you are migrating. The script uses the ibm-cos-sdk and ibm-platform-services SDKs.

Prerequisites

  • Make sure you have at least two COS instances on the same IBM Cloud account
  • Install Python
  • Make sure you have the necessary permissions to do the following:
    • Create buckets
    • Modify buckets
    • Create IAM policy for COS instances
  • Install libraries for Python
    • ibm-cos-sdk for Python: pip3 install ibm-cos-sdk
    • ibm-platform-services: pip3 install ibm-platform-services
Set environment variables for the script

The following are the environment variables that the scripts use:

  • IBMCLOUD_API_KEY=<ibmcloud_api_key>
  • SERVICE_INSTANCE_ID=<source_cos_instance_guid>
  • DEST_SERVICE_INSTANCE_ID=<target_cos_instance_guid>
  • US_GEO=<us_cos_endpoint>
  • IAM_POLICY_MANAGEMENT_APIKEY=<ibmcloud_api_key>
  • IAM_ACCOUNT_ID=<iam_account_id>
  • SUFFIX=<target_instance_suffix>

You can create and download your IBM Cloud API key in the IBM Cloud console at Manage > Access (IAM) > API keys.

You can find the GUID for the source and target instances in the cloud console resource list. Type in the name of each COS instance and click on the white part of the instance's row to retrieve the GUID.

To find your US COS endpoint, click on your source COS instance from the Resource List in the navigation menu. Then, click on Endpoints and make sure the Select Location dropdown says us-geo. Select the region that your buckets are in and make sure to prepend https:// to the endpoint in the environment variable.


The iam_account_id is the same value as your ibmcloud_api_key.

The suffix is appended to the name of each newly created bucket, since bucket names must be globally unique.
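The renaming step amounts to a simple concatenation; the sample names and suffix below are made up for illustration:

```python
def target_bucket_name(source_name, suffix):
    """Append the configured suffix so the migrated bucket's name stays globally unique."""
    return source_name + suffix

print(target_bucket_name("my-app-logs", "-migrated"))  # my-app-logs-migrated
```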

Run the script

After the environment variables have been set, you may run the script. You can find the code for the script below.

import os

import ibm_boto3
from ibm_botocore.client import Config
from ibm_platform_services import IamPolicyManagementV1  # used for the IAM policy steps later in the script

# configuration is read from the environment variables described above
api_key = os.environ["IBMCLOUD_API_KEY"]
source_instance_id = os.environ["SERVICE_INSTANCE_ID"]
dest_instance_id = os.environ["DEST_SERVICE_INSTANCE_ID"]
endpoint_url = os.environ["US_GEO"]

# this is the suffix used for the new naming convention of buckets
suffix = os.environ.get("SUFFIX", "")

# function to get region of a bucket
def getBucketRegion(locationConstraint):
    if locationConstraint in ("us-smart", "us-standard", "us-vault", "us-cold"):
        return "us-geo"
    if locationConstraint in ("us-east-smart", "us-east-standard", "us-east-vault", "us-east-cold"):
        return "us-east"
    if locationConstraint in ("us-south-smart", "us-south-standard", "us-south-vault", "us-south-cold"):
        return "us-south"
    return ""

# function to get region of the URL endpoint
# NOTE: the endpoint URL literals were lost when this page was converted for
# Kindle; each branch originally compared endpoint_url against the endpoint
# variants for the region it returns. Fill in the endpoints from the
# Endpoints page of your COS instance.
def getUrlRegion():
    if endpoint_url in ("", "", ""):
        return "us-geo"
    if endpoint_url in ("", "", ""):
        return "dallas"
    if endpoint_url in ("", "", ""):
        return "washington"
    if endpoint_url in ("", "", ""):
        return "san jose"
    if endpoint_url in ("", "", ""):
        return "us-east"
    if endpoint_url in ("", "", ""):
        return "us-south"
    return ""

# function to list buckets (partially reconstructed: the bucket listing and
# append statements were lost in conversion)
def get_buckets1(type, cos):
    bucketNames = []
    try:
        buckets = cos.list_buckets()["Buckets"]
    except Exception as e:
        print("Error: Unable to get COS Buckets.", e)
        return bucketNames
    for bucket in buckets:
        try:
            request = cos.get_bucket_location(Bucket=bucket["Name"])
            bucketLocation = request["LocationConstraint"]
        # this except accounts for when the bucket is not in the targeted region
        except Exception:
            continue
        if type == "target" and getUrlRegion() == getBucketRegion(bucketLocation):
            bucketNames.append(bucket["Name"])
        elif getUrlRegion() == getBucketRegion(bucketLocation):
            bucketNames.append(bucket["Name"])
    return bucketNames

# function to create buckets
def create_buckets(targetBucketNames):
    # Destination cos client connection
    destCos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=api_key,
        ibm_service_instance_id=dest_instance_id,
        config=Config(signature_version="oauth"),
        endpoint_url=endpoint_url,
    )
    location = getUrlRegion() + "-smart"
    for bucketName in targetBucketNames:
        try:
            destCos.create_bucket(Bucket=bucketName + suffix, CreateBucketConfiguration={
                'LocationConstraint': location
            })
            print("Created bucket:", bucketName + suffix)
        except Exception as e:
            print("ERROR: Unable to create bucket.", e)

def migrateBuckets():
    # Create client connection to the source instance
    cos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=api_key,
        ibm_service_instance_id=source_instance_id,
        config=Config(signature_version="oauth"),
        endpoint_url=endpoint_url,
    )
    # Getting all source buckets
    sourceBucketNames = get_buckets1("source", cos)
    print("All buckets from source instance from " + getUrlRegion() + " region:", sourceBucketNames)
    # Destination cos client connection
    destCos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=api_key,
        ibm_service_instance_id=dest_instance_id,
        config=Config(signature_version="oauth"),
        endpoint_url=endpoint_url,
    )
    # Getting all target buckets to avoid duplicates
    targetBucketNames = get_buckets1("target", destCos)
    print("All buckets from target instance from " + getUrlRegion() + " region:", targetBucketNames)
    # excluding buckets that already exist on the target
    targetBucketNames = [x for x in sourceBucketNames if x not in targetBucketNames]
    print("All buckets from target instance without duplicates:", targetBucketNames)
    # creating buckets on target cos instance
    create_buckets(targetBucketNames)

# function to get region of a bucket
def getBucketRegion(locationConstraint):
if locationConstraint == "us-smart" or locationConstraint == "us-standard" or locationConstraint == "us-vault" or locationConstraint == "us-cold":
return "us-geo"
if locationConstraint == "us-east-smart" or locationConstraint == "us-east-standard" or locationConstraint == "us-east-vault" or locationConstraint == "us-east-cold":
return "us-east"
if locationConstraint == "us-south-smart" or locationConstraint == "us-south-standard" or locationConstraint == "us-south-vault" or locationConstraint == "us-south-cold":
return "us-south"
return ""

# function to get region of the URL endpoint
def getUrlRegion():
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-geo"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "dallas"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "washington"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "san jose"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-east"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-south"
return ""

# function to list buckets
def get_buckets2(type,cos):
except Exception as e:
print("Error: Unable to get COS Buckets.",e)
for bucket in buckets:
request =cos.get_bucket_location(Bucket=bucket["Name"])
#this except accounts for when the bucket is not in the targeted region
if getUrlRegion()==getBucketRegion(bucketLocation):

return bucketNames

#function to add replication rules to buckets
def addReplicationRules(buckets,targetID,cos):
if os.environ['DISABLE_RULES']=="true":
# this is the suffix used for the new naming convention of buckets
for bucket in buckets:
cos.put_bucket_replication(Bucket=bucket,    ReplicationConfiguration={
'Rules': [

'Priority': 0,
'Status': status,
'Filter': {},
'Destination': {
'Bucket': 'crn:v1:bluemix:public:cloud-object-storage:global:a/'+iamAccountID+':'+targetID+':bucket:'+bucket+suffix,
},  'DeleteMarkerReplication': {
'Status': 'Enabled'
if os.environ['DISABLE_RULES']!="true":
print("added replication rule to bucket",bucket)
print("disabled replication rule to bucket",bucket)
except Exception as e:
print("Error: Unable to add replication rule to bucket",bucket,e)

# function to enable versioning on buckets
def enableVersioning(buckets,cos):
for bucket in buckets:

'Status': 'Enabled'
print("versioning enable to bucket",bucket)
except Exception as e:
print("Error: Unable to enable versioning to bucket",bucket,e)

#function to create iam policy to for the source cos instance to write data to the target instance
def addAuthorization(sourceID,targetID):
#Create IAM client
service_client = IamPolicyManagementV1.new_instance()
service_client.create_policy(type="authorization",subjects=[{"attributes":[{"name": "accountId","value":iamAccountID},{"name": "serviceName", "value": "cloud-object-storage"},{"name":"serviceInstance", "value":sourceID}]}],roles=[{"role_id": "crn:v1:bluemix:public:iam::::serviceRole:Writer"}],resources=[{"attributes":[{"name": "accountId","value":iamAccountID},{"name": "serviceName","value": "cloud-object-storage"},{"name":"serviceInstance", "value":targetID}]}])
print("created authorization policy")
except Exception as e:
print("Warning: Unable to create policy. Please ignore if policy already exists",e)

def addReplicationRulesToMigratedBuckets():
# Create client connection
cos = ibm_boto3.client("s3",

# Getting all source buckets 

#enable versioning for both cos instances
print("enable versioning for source instances")

# Destination cos client connection
destCos = ibm_boto3.client("s3",
targetBucketNames = get_buckets2("target",destCos)
print("enable versioning for target instances")

#add authorization from source cos instance to target cos instance

#add replication rules to buckets

# function to get region of a bucket
def getBucketRegion(locationConstraint):
if locationConstraint == "us-smart" or locationConstraint == "us-standard" or locationConstraint == "us-vault" or locationConstraint == "us-cold":
return "us-geo"
if locationConstraint == "us-east-smart" or locationConstraint == "us-east-standard" or locationConstraint == "us-east-vault" or locationConstraint == "us-east-cold":
return "us-east"
if locationConstraint == "us-south-smart" or locationConstraint == "us-south-standard" or locationConstraint == "us-south-vault" or locationConstraint == "us-south-cold":
return "us-south"
return ""

# function to get region of the URL endpoint
def getUrlRegion():
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-geo"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "dallas"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "washington"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "san jose"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-east"
if endpoint_url=="" or endpoint_url=="" or endpoint_url=="":
return "us-south"
return ""

# function to list buckets
def get_buckets3(type,cos):

except Exception as e:
print("Error: Unable to get COS Buckets.",e)
for bucket in buckets:
request =cos.get_bucket_location(Bucket=bucket["Name"])
#this except accounts for when the bucket is not in the targeted region
if getUrlRegion()==getBucketRegion(bucketLocation):

return bucketNames

def copy_in_place(bucket):
# Create client connection
cos = ibm_boto3.client("s3",
if "Contents" not in cosObjects:
print("source bucket is empty")

print("Priming existing objects in " + bucket + " for replication...")

paginator = cos.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=bucket)

for page in pages:
for obj in page['Contents']:
key = obj['Key']
print("  * Copying " + key + " in place...")
headers = cos.head_object(

md = headers["Metadata"]

'Bucket': bucket,
'Key': key
print("    Success!")
except Exception as e:
print("    Unable to copy object: {0}".format(e))
print("Existing objects in " + bucket + " are now subject to replication rules.")

def replicateExistingFiles():

# Create client connection
cos = ibm_boto3.client("s3",

# Getting all source buckets 
print("All source buckets to replicate",sourceBucketNames)

# Copy data from source to target bucket
for bucket in sourceBucketNames:

# main
if os.environ['DISABLE_RULES']!="true":
COS instance migration script

This script was designed to help users migrate one COS instance to another instance on the same account for a US region. The function calls in the main function are executed in the following order.

  • migrateBuckets function: This function gathers all buckets from one source COS instance and creates them in the target COS instance. The newly created target bucket will have a suffix attached to it.
  • addReplicationRulesToMigratedBuckets function: This function adds replication rules to the source buckets so that data added to or removed from them after the rule is applied is written to the target buckets. To support this, the function enables versioning on both source and target buckets; versioning (a history of all files in a bucket) is required for replication. The script also creates an IAM policy spanning the source and destination instances to allow source buckets to write to their respective target buckets. Make sure DISABLE_RULES is set to false.
  • replicateExistingFiles function: As mentioned above, replication only applies to files added or deleted after the rule has been set. If you want to transfer files that existed before the rule was applied, make sure DISABLE_RULES is set to false to activate this function.
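For reference, the destination of each replication rule is the target bucket's CRN. A minimal sketch of how such a rule payload is assembled (the function name and defaults here are illustrative, not from the original script):

```python
def build_replication_config(account_id, target_instance_id, bucket, suffix, enabled=True):
    """Assemble a ReplicationConfiguration pointing at the suffixed target bucket's CRN."""
    crn = ("crn:v1:bluemix:public:cloud-object-storage:global:a/"
           + account_id + ":" + target_instance_id + ":bucket:" + bucket + suffix)
    return {"Rules": [{
        "Priority": 0,
        "Status": "Enabled" if enabled else "Disabled",
        "Filter": {},
        "Destination": {"Bucket": crn},
        "DeleteMarkerReplication": {"Status": "Enabled"},
    }]}
```

The resulting dictionary is what the script passes to put_bucket_replication for each source bucket.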
Disable replication rules

If you want to disable the replication rules for the buckets, set DISABLE_RULES to true and run the script again.


By following these steps, you will successfully migrate buckets, per region, from one US IBM Cloud Object Storage (COS) instance to another.

If you have any questions, you can reach out to me on LinkedIn.

Daroush Renoit

Solution Developer


Enhanced Ingress Domain Functionality for Kubernetes Service, OpenShift and Satellite Clusters Cloud

5 min read


Jared Hayes, Software Engineer
Lucas Copi, Software Engineer, IBM Cloud Kubernetes Service
Theodora Cheng, Software Developer - Armada Ingress
Dennis Warne, IBM Cloud Kubernetes Service Ingress Dev

On 6 April 2023, the IBM Cloud Kubernetes Service enhanced the Ingress domain management functionality for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud clusters.

The enhanced Ingress Domain functionality now supports the ability to expose your application with a custom domain, integrate with IBM Cloud Internet Services or leverage third-party DNS providers Akamai and Cloudflare to utilize existing domains.

Where can I find the new Ingress domain functionality?

You can use the new ibmcloud ks ingress domain commands to manage the domains and associated resources for your cluster. The command is grouped under the ingress namespace to enable better discoverability and to co-locate it alongside sibling commands in the Ingress feature family:

➜  ~ibmcloud ks ingress domain -h 
ibmcloud ks ingress domain - [Beta] Manage a cluster's Ingress domains. 
ibmcloud ks ingress domain command [arguments...] [command options] 

create       [Beta] Create an Ingress domain for a cluster. 
credential   [Beta] Manage a cluster's external domain provider credentials. 
default      [Beta] Manage a cluster's default Ingress domain. 
get          [Beta] View the details of an Ingress domain. 
ls           [Beta] List all Ingress domains for a cluster. 
rm           [Beta] Remove an Ingress domain from a cluster. 
secret       [Beta] Manage the secrets for an Ingress domain. 
update       [Beta] Update an Ingress domain for a cluster. The records passed in will fully replace the current records associated with the domain. Passing in no records will unregister the current records from a domain.


We have standardized the command operations on a CRUD model and created a cluster-infrastructure-agnostic command structure in order to provide a more consistent and understandable user experience.

The CLI command ibmcloud ks ingress domain create now supports custom domains, IBM Cloud Internet Services domains and third-party provider domains from Akamai and Cloudflare. If you do not specify a provider on the create domain command, the domain will be managed by IBM using the default domain provider:

➜  ~ibmcloud ks ingress domain create -h  
create - [Beta] Create an Ingress domain for a cluster.  

ibmcloud ks ingress domain create --cluster CLUSTER [--crn CRN] [--domain DOMAIN] [--domain-provider PROVIDER] [--domain-zone ZONE] [--hostname HOSTNAME] [--ip IP] [--is-default] [--output OUTPUT] [-q] [--secret-namespace NAMESPACE]  

--cluster value, -c value   Specify the cluster name or ID.  
--domain value              The Ingress domain. To see existing domains, run 'ibmcloud ks ingress domain ls'.  
--domain-provider           The external DNS provider type. The default is 'akamai'. Available options: akamai, akamai-ext, cis-ext, cloudflare-ext  
--ip value                  The IP addresses to register for the domain.  
--is-default                Include this option to set the relevant domain as the default domain for cluster.  
--crn value                 The CRN for the IBM CIS instance.  
--domain-zone value         The ZoneID for CIS.  
--hostname value            For VPC clusters. The hostname to register for the domain.  
--secret-namespace value  The namespace that the TLS secret is created in.  
--output                    Prints the command output in the provided format. Available options: json  
-q                          Do not show the message of the day or update reminders.

The ibmcloud ks ingress domain get and ibmcloud ks ingress domain ls  CLI commands have been updated to display more relevant data in the table output and condense the content to improve domain detail visibility:

➜  ~ibmcloud ks ingress domain get -h 
get - [Beta] View the details of an Ingress domain. 

ibmcloud ks ingress domain get --cluster CLUSTER --domain DOMAIN [--output OUTPUT] [-q] 

--cluster value, -c value  Specify the cluster name or ID. 
--domain value              The Ingress domain. To see existing domains, run 'ibmcloud ks ingress domain ls'. 
--output                    Prints the command output in the provided format. Available options: json
➜  ~ibmcloud ks ingress domain ls -h 
ls - [Beta] List all Ingress domains for a cluster. 

ibmcloud ks ingress domain ls --cluster CLUSTER [--output OUTPUT] [-q] 

--cluster value, -c value  Specify the cluster name or ID. 
--output                    Prints the command output in the provided format. Available options: json

The CLI Command ibmcloud ks ingress domain update follows a PUT model to align more closely with the backend operations and reduce ambiguity in record updates:

➜  ~ ibmcloud ks ingress domain update -h 
update - [Beta] Update an Ingress domain for a cluster. The records passed in will fully replace the current records associated with the domain. Passing in no records will unregister the current records from a domain. 

ibmcloud ks ingress domain update --cluster CLUSTER --domain DOMAIN [--hostname HOSTNAME] [--ip IP] [-q] 

--cluster value, -c value  Specify the cluster name or ID. 
--domain value                   The Ingress domain. To see existing domains, run 'ibmcloud ks ingress domain ls'. 
--ip value                         The IP addresses to register for the domain. 
--hostname value                For VPC clusters. The hostname to register for the domain. 
-q                                     Do not show the message of the day or update reminders.

The CLI command ibmcloud ks ingress domain rm supports deleting a domain and all associated resources from your cluster:

➜  ~ ibmcloud ks ingress domain rm -h 
rm - [Beta] Remove an Ingress domain from a cluster. 

ibmcloud ks ingress domain rm --cluster CLUSTER --domain DOMAIN [-f] [-q] 

--cluster value, -c value  Specify the cluster name or ID. 
--domain value                  The Ingress domain. To see existing domains, run 'ibmcloud ks ingress domain ls'. 
-f                          Force the command to run without user prompts. 
-q                          Do not show the message of the day or update reminders.
How do I use the Ingress domain management functionality?

You can use the ibmcloud ks ingress domain create functionality to create and register a custom domain, IBM Cloud Internet Services domain or a third-party DNS provider domain with any load balancer service in your cluster. We will fully manage the DNS registration and certificate lifecycle of this new domain on your behalf in the same way the existing domains are currently managed.

Creating a custom domain managed by IBM Cloud Kubernetes Service

Previously, all domains managed by IBM Cloud Kubernetes Service, including the default domain for the cluster, were created with the format <cluster_name>-<account_hash>-<counter>.<region>.<dns_zone>. The new Ingress domain functionality supports creating a managed domain with a custom subdomain.
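The shape of that format can be sketched as follows (the cluster name, hash, counter and zone values in the example are hypothetical; only the overall format comes from the text above):

```python
def managed_domain(cluster_name, account_hash, counter, region, dns_zone):
    """Assemble a managed Ingress domain: <cluster_name>-<account_hash>-<counter>.<region>.<dns_zone>."""
    return "{0}-{1}-{2}.{3}.{4}".format(cluster_name, account_hash, counter, region, dns_zone)

# managed_domain("mycluster", "a1b2c3", "0000", "us-south", "containers.appdomain.cloud")
#   returns "mycluster-a1b2c3-0000.us-south.containers.appdomain.cloud"
```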

To create a custom domain, specify the desired subdomain using the --domain flag on the create command. Note that the DNS zone for custom domains is still managed by IBM, so a provided custom subdomain of test-custom-domain will result in a full domain under the IBM-managed zone. Custom domains are validated for uniqueness to ensure there are no noisy-neighbor conflicts:

➜  ~ibmcloud ks ingress domain create -c cgl90um10k5cc5n2msfg --domain test-custom-domain --ip

Protecting your applications with IBM Cloud Internet Services

The enhanced Ingress domain functionality supports the ability to create a domain for your cluster from an IBM Cloud Internet Services domain. This allows you to enable Web Application Firewalls, DDoS protection and global load balancing for your applications.

To create a domain from an existing IBM Cloud Internet Services domain, ensure that you have the appropriate service-to-service authorization policy in place. More details on creating this policy can be found here.

Once the service-to-service authorization is in place, you can use the ibmcloud ks ingress domain create command with the --domain-zone and --crn flags to create a domain from an IBM Cloud Internet Services domain. More details on the benefits of using IBM Cloud Internet Services and how to create an instance can be found here:

➜  ~ibmcloud ks ingress domain create -c test-cluster --domain --domain-provider cis-ext --ip --domain-zone 88ea2a737fbd5b149aa62c03d0adf343 --crn crn:v1:staging:public:internet-svcs:global:a/e3f386b3b6d14874a5437701b88371ca:f96ddbe5-6512-42ce-864e-d4dcabcc7057::
ibmcloud ks ingress domain ls -c test-cluster
Domain   Target(s)   Default   Provider   Secret Status   Status
                     no        cis-ext    pending         pending
Integrating with an existing third-party DNS providers

You can now integrate an existing third-party Akamai or Cloudflare domain with your cluster for global load balancing support. To create a domain from a third-party provider, set the appropriate credentials for your cluster and use the domain create command with the --domain-provider flag. Note that you can only choose one active third-party provider for a cluster.

Adding the credentials to your cluster

To begin, ensure that you have created credentials with the required permissions:

  • Akamai: Read-write permissions for the /config-dns endpoint
  • Cloudflare: dns:read-write, zone:read-write and api-tokens:read

To set the credentials for your cluster, use the ibmcloud ks ingress domain credential set command for the appropriate third-party provider:

➜  ~ibmcloud ks ingress domain credential set -h 
ibmcloud ks ingress domain credential set - [Beta] Add an external domain provider credential for the cluster. 
ibmcloud ks ingress domain credential set command [arguments...] [command options] 

akamai        [Beta] Set credentials for Akamai. 
cloudflare   [Beta] Set credentials for Cloudflare.
➜  ~ ibmcloud ks ingress domain credential set akamai -h 
akamai - [Beta] Set credentials for Akamai. 

ibmcloud ks ingress domain credential set akamai --cluster CLUSTER [--access-token TOKEN] [--client-secret SECRET] [--client-token TOKEN] [--domain-zone ZONE] [-f] [--host HOST] [-q] 

--cluster value, -c value  Specify the cluster name or ID. 
--host value                The host for the Akamai API Client Credentials. 
--client-token value        The client_token for the Akamai API Client Credentials. 
--client-secret value       The client_secret for the Akamai API Client Credentials. 
--access-token value      The access_token for the Akamai API Client Credentials. 
--domain-zone value      The zone to operate in. 
-f                          Force the command to run without user prompts.
➜  ~ ibmcloud ks ingress domain credential set cloudflare -h 
cloudflare - [Beta] Set credentials for Cloudflare. 

ibmcloud ks ingress domain credential set cloudflare --cluster CLUSTER [--domain-zone ZONE] [-f] [-q] [--token TOKEN] 

--cluster value, -c value  Specify the cluster name or ID. 
--token value               The API token. 
--domain-zone value      The zone to operate in. 
-f                          Force the command to run without user prompts.

You can use the additional ibmcloud ks ingress domain credential commands to manage the lifecycle of your credential. You can remove the credential from your cluster at any point by using the ibmcloud ks ingress domain credential rm command. If there are active domains for the provider still associated with your cluster, those domains will no longer receive record updates and will be marked with an error code in the Ingress status report. You can rotate the credential by re-running the ibmcloud ks ingress domain credential set command and specifying a new credential.

The ibmcloud ks ingress domain credential get command will supply credential metadata to help you keep track of which credential is in use for your cluster. Please note that once the credential is set, there is no way to view the actual credential:

➜  ~iks ingress domain credential get -c cgmog4k10hlptpsevhk0
Credential:     12345
Provider:       akamai-ext
Expires At:     2024-04-26T17:03:58.000Z
Last Updated:   11 hours ago
Adding a domain to your cluster

Once you have set the third-party provider credential for your cluster you can use the --domain-provider flag on the ibmcloud ks ingress domain create command to create a domain for that provider. You can choose to create a brand-new domain based on the existing DNS zone or use a pre-existing domain for global load balancing (GLB).

To create a new custom domain based on an existing DNS zone in your third-party domain provider, supply the fully qualified domain with the --domain flag on the create command. For example, if you have a DNS zone in your provider and you want to create a new domain for your cluster, you would include --domain on the create command.

To use an existing third-party domain with your cluster, create a cluster-associated domain with the ibmcloud ks ingress domain create command and provide the existing domain. The IPs will be appended to the existing registration, which allows multiple clusters to use the same domain:

➜  ~ibmcloud ks ingress domain create -c test-cluster --domain --domain-provider cloudflare-ext --ip
ibmcloud ks ingress domain ls -c test-cluster
Domain   Target(s)   Default   Provider         Secret Status   Status
                     no        cloudflare-ext   pending         OK
How to change the default domain for your cluster (and what it means)

A cluster’s default domain is the domain reserved for registering the ALBs or OpenShift Ingress Controllers that come by default with your cluster. In Red Hat OpenShift on IBM Cloud clusters, this domain is the domain that exposes the OpenShift console (as well as the other default routes in the cluster).

The current default domain can be found in the Ingress Subdomain section of your cluster details or by listing the domains for your cluster using the ibmcloud ks ingress domain ls command:

➜  ~ibmcloud ks cluster get -c cgmrhv620eqpknudf6rg
Retrieving cluster cgmrhv620eqpknudf6rg...
Name: pvg-vpc-gen2-atpclujpb41ex83adi
ID: cgmrhv620eqpknudf6rg
State: normal
Status: All Workers Normal
Created: 2023-04-05 14:07:54 -0400 (7 hours ago)
Resource Group ID: 164fc63e5b694d4ca62ae09a8cae87de
Resource Group Name: Default
Pod Subnet:
Service Subnet:
Workers: 2
Worker Zones: us-south-3
Ingress Subdomain:
➜  ~ibmcloud ks ingress domain ls -c cgmrhv620eqpknudf6rg
Domain   Target(s)   Default   Provider         Secret Status   Status
         -           no        akamai           created         OK
                     yes       cloudflare-ext   created         OK

You can update the default domain for your cluster by using the ibmcloud ks ingress domain default replace command or by specifying the --is-default flag on the ibmcloud ks ingress domain create command. To set a custom domain as the default domain during cluster creation, run the ibmcloud ks ingress domain create command immediately after the cluster create command, using the new cluster ID.

More information

For more information, check out our official documentation.

Learn more about IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud.

Contact us

If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.

Jared Hayes

Software Engineer

Lucas Copi

Software Engineer, IBM Cloud Kubernetes Service

Theodora Cheng

Software Developer - Armada Ingress

Dennis Warne

IBM Cloud Kubernetes Service Ingress Dev


IBM Security Randori: Harnessing the Attacker's Perspective to Reduce Attack Surface Exposures Security

4 min read


Sanara Marsh, Director, Product Marketing

How to assess which target assets to investigate.

As we shared in our last blog, “Prevent App Exploitation and Ransomware by Minimizing Your Attack Surface,” the rapid adoption of hybrid cloud models and the permanent support of a remote workforce has made it virtually impossible to maintain a perfect inventory of external assets that are all properly patched. The world simply moves and changes too fast.

Defenders have always operated in a reactive fashion; for example, antivirus software was first developed in response to the creation of malware. The gap between adversaries and defenders continues to widen. According to the IBM Security X-Force Threat Intelligence Index 2023, deployment of backdoors was the most common action on objective, occurring in 21% of all reported incidents. This was followed by ransomware at 17% and business email compromise (BEC) at 6%.

To drive program efficiencies, organizations are flipping their perspective by narrowing their focus to elements of their attack surface that are most tempting to an adversary. This shift in perspective dramatically improves the efficiency of your team, while reducing the highest overall risk first.

The benefits of an attack surface management solution with an attacker’s design

Security teams need an attack surface management (ASM) solution that can quickly evaluate and rank each discoverable instance of software through the use of multiple factors, including enumerability, weakness, criticality, applicability, post-exploitation potential and research potential. Unable to do it all, a leading ASM solution must also offer bi-directional integrations that can work seamlessly with your vulnerability management solution and many other important security tools.

Using an ASM solution that operates like an attacker, vulnerability managers can take the necessary steps to reduce visibility gaps, improve prioritization and increase the ROI of their programs. While assessing your attack surface from an adversarial perspective is a critical first step, it’s only half the equation and must be viewed as only one part in a broader assessment of risk.

Report on external risk, not vulnerabilities

Risk is defined most basically as likelihood multiplied by impact. A powerful ASM solution like IBM Security Randori, with its patent-pending Target Temptation modeling technology, can provide an adversarial assessment of how likely an asset is to be attacked, along with context on what the impact would be if that asset were attacked. While many in security would like to think that every attack is a problem that needs to be addressed, the reality, as with shoplifting, is often somewhere in between: someone exploiting your VPN is likely an unacceptable business risk, but a crypto miner on an isolated AWS node left over from an engineering experiment last year may be acceptable.
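As a toy illustration of that definition (the assets and scores below are invented for the example), ranking assets by likelihood times impact surfaces the VPN ahead of the forgotten node:

```python
# Toy example: rank assets by risk = likelihood x impact
assets = [
    {"name": "vpn-gateway", "likelihood": 0.9, "impact": 9.0},
    {"name": "stale-aws-node", "likelihood": 0.6, "impact": 2.0},
]
for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["impact"]

ranked = sorted(assets, key=lambda a: a["risk"], reverse=True)
print([(a["name"], round(a["risk"], 2)) for a in ranked])
```

Real ASM scoring uses many more factors (enumerability, weakness, criticality, applicability and so on), but the shape of the calculation is the same.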

The latest X-Force Threat Intelligence Index found that just 26% of all reported vulnerabilities tracked in 2022 had a known and viable exploit, so reporting the raw number of vulnerabilities is of little practical value. You should be far more interested in the number of assets with either vulnerabilities or misconfigurations that truly pose a risk to your business, and in how those numbers increase or decrease over time. Tracking this in both absolute and relative terms is key as the number of external-facing assets in an organization's attack surface continues to grow.

By changing the conversation, vulnerability management teams can position themselves to have more strategic conversations with business stakeholders around what is and is not acceptable and better demonstrate the value of their work. Shifting the conversation can often have the added benefit of reenergizing teams with a new sense of optimism, as they no longer feel they must react to every new vulnerability and can proactively assess and hunt down risk.

Key external risk metrics worth reporting include the following:

  1. Number of high-risk external assets (top targets).
  2. Percent of attack surface categorized as high risk.
  3. Average time to remediation for high-risk assets.
  4. Number of new unknown external assets discovered per week.

When done on an ongoing basis, tracking and reporting on external risk can become a critical KPI that vulnerability management teams can use to demonstrate both immediate and long-term value over time. By following these steps using an ASM with bi-directional integrations that can prioritize exposures based on likelihood of targeting, teams can begin to deprioritize high-severity vulnerabilities that are of little adversarial value and prioritize those that present an adversary a lower friction path to initial access.

Investigating high-priority target assets

If we look beyond common vulnerabilities and exposures, we may notice that a target seems highly tempting for attackers to access. Naturally, we want to understand what’s driving this severity.

This assessment is based on Randori Recon's patent-pending Target Temptation model. Considering exploitability (a.k.a. weakness), applicability and enumerability, the model is designed to calculate how tempting a target will be to an adversary. This prioritization algorithm helps level up your security program.

Based on the target identified, the IBM Security Randori platform also provides categorical guidance that goes beyond vulnerabilities to enable organizations to assess their cyber resiliency and design a more secure program. This categorical guidance details the appropriate steps your organization can implement to help improve its resiliency.

Get started with the IBM Security Randori platform

As a unified offensive security platform, IBM Security Randori is designed to drive resiliency through high-fidelity discovery and actionable context in a low-friction manner.

If you would like to learn more about how your organization can benefit from the IBM Security Randori platform, please sign up for a free Attack Surface Review or visit our web page.

Read the full IBM Security X-Force Threat Intelligence Index 2023 and check out Security Intelligence's piece, "Backdoor Deployment and Ransomware: Top Threats Identified in X-Force Threat Intelligence Index 2023."

Sanara Marsh

Director, Product Marketing


Modernizing Software Architecture with MicroProfile and Open Liberty

5 min read


Emily Jiang, STSM, Liberty Cloud Native Architect
Igor Berchtold, Product Owner, Shared Applications at Suva

How the combination of MicroProfile and Open Liberty provided Suva with the tools needed to develop a modern, cloud-native application architecture.

Suva has been insuring employees against accidents for 100 years and is a leading provider of health care coverage in Switzerland. They currently insure over 130,000 companies and over 2 million full-time employees.

“Overall, the combination of MicroProfile and Open Liberty provided us with the tools we needed to develop a modern, cloud-native application architecture in Java that is highly scalable, resilient, and easy to manage.” — Igor Berchtold, Product Owner, Shared Applications at Suva.

Challenges of the past

Looking back six years, Suva's development and runtime environment was very different and relied heavily on a central Java application server. Although Suva used version control and modern development tools, teams had limited responsibility for the applications they designed, implemented and brought to production. Day-to-day operation was handled by a separate team, and communicating with that team when bugs surfaced in the application was a struggle.

Change management, acting as guardians of the holy grail, tried to support Suva as best they could, but ultimately, they decided when the application would be activated in the pre-production environment and when it would go to production. It was not an easy or enjoyable job, and customers sometimes had to wait days to see fixes.

Two years later, Suva decided to modernize and adopt leading edge technology and bring the agile world to life within the organization. This brought about some significant changes in the way they worked. The centerpiece of this new application development platform is an on-premises Kubernetes-based product (OpenShift) that allows teams to deploy their applications autonomously with full responsibility. However, teams had to be knowledgeable about the 12 factors for building cloud-ready applications. With the help of an agile framework that scaled up team productivity, work became enjoyable once again.

A significant transformation to the present

Over the past four years, Suva’s approach to software development has undergone a significant transformation. They migrated from a central, hosted application environment to a continuous integration and continuous deployment (CI/CD) pipeline, enabling teams to build, test and deploy their applications independently.

Each team now has a Jenkins build environment, deployed on OpenShift, with a set of configuration files and base images. The pipeline covers build, testing, image building and deployment preparation, including end-to-end integration testing using the Open Liberty cloud-native Java runtime. Once all tests are green, the final Docker image is produced, tagged with a version number and put into a Harbor registry.

The product owner is now responsible for planning deployments, enabling greater autonomy for each team. Using Argo CD, each team can deploy their applications through different stages, including development, system testing, pre-production and production. Change management still plays a vital role in the audit process, but the relevant information is automatically collected through events that Argo CD emits and is sent to a change management gateway that handles the audit process.

When Suva began the software architecture migration process, the first step was to replace their outdated framework. Suva wanted to use a microservices architecture for its benefits:

  • Scalability: Microservices are designed to be highly scalable, with each service being developed and deployed independently. This allows individual services to be scaled up or down as needed to meet changing demands.
  • Agility: Because each microservice is developed and deployed independently, changes can be made quickly and easily without affecting other services. This makes it easier to respond to changing requirements or customer needs.
  • Resilience: With microservices, if one service fails, it doesn't bring down the entire system. Each service is designed to be self-contained and fault-tolerant, so failures are isolated and can be quickly resolved.
  • Technology diversity: With microservices, different services can be built using different technologies if they all conform to the same set of standards for communication and integration. This allows developers to choose the best tool for the job, rather than being limited to a single technology stack.
  • Organizational structure: Microservices can be a good fit for organizations that are structured around small, cross-functional teams. Each team can be responsible for one or more microservices, allowing for faster development and deployment cycles.

After careful consideration, Suva decided to adopt MicroProfile, a lightweight framework that was gaining popularity as a natural progression from the Java EE world.

“The next step was to identify a suitable runtime for MicroProfile-based services. After conducting a thorough market evaluation, we selected Open Liberty for deeper evaluation. We quickly discovered that Open Liberty was the right cloud-native Java runtime for our needs. It was modular, fast, and designed to run in a containerized environment, making it ideal for deployment on Kubernetes. In addition, Open Liberty was fully committed to supporting all the MicroProfile standards,” says Igor Berchtold from Suva.

Suva created some initial prototypes to test Open Liberty’s capabilities, and their confidence in this cloud-native Java runtime grew even more. “The modularity of Open Liberty allowed us to build and deploy only the features we needed, which made our applications more lightweight and agile,” says Berchtold.

With Open Liberty, the teams at Suva were able to take advantage of MicroProfile's features—such as fault tolerance, health checks and metrics—and develop a highly scalable and resilient application architecture.
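To give a feel for what a feature like MicroProfile Fault Tolerance provides declaratively (for example via its `@Retry` annotation), here is a plain-Java sketch of the underlying retry pattern. This is illustrative only, not the specification's implementation; in a real MicroProfile application the runtime applies this behavior for you:

```java
import java.util.function.Supplier;

// Plain-Java sketch of the retry behavior that MicroProfile Fault
// Tolerance expresses declaratively with @Retry.
public class RetrySketch {
    static <T> T withRetry(Supplier<T> action, int maxRetries, long delayMs) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();           // success: return immediately
            } catch (RuntimeException e) {
                last = e;                      // remember the failure
                if (attempt == maxRetries) break;
                try {
                    Thread.sleep(delayMs);     // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
        throw last;                            // retries exhausted
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds -- withRetry absorbs the failures.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

With MicroProfile, the equivalent is a single annotation on a business method, which is exactly the kind of boilerplate reduction the Suva teams benefited from.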

Overall, the combination of MicroProfile and Open Liberty provided Suva with the tools needed to develop a modern, cloud-native application architecture that is highly scalable, resilient and easy to manage.

Utilizing MicroProfile features provided by Open Liberty

Suva migrated several applications from an older centralized platform to Open Liberty, which has proven to be an effective choice. One such application involves retrieving currency data from Bloomberg. This application not only provides REST services for currency queries, but also integrates with SAP backends using SOAP services. The REST services have been designed using OpenAPI, while the currency data is stored in a Postgres database using JPA.

Another application provides REST services to health insurance companies, allowing them to send 5% of their claim cases for statistical analysis. These claim cases are received in the form of PDF documents, which are processed by another microservice running on Open Liberty. With the help of this microservice, the PDF documents are assembled into a single file that includes relevant table content. The resulting documents are then securely stored in a Postgres database using advanced encryption technologies.
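Suva's exact encryption scheme is not described in the article. Purely as a sketch of encrypting a document before it is written to a database, the JDK's standard AES-GCM cipher could be used along these lines; the key handling here is simplified for illustration (a real deployment would use a key management service):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Illustrative encrypt-at-rest sketch using JDK-standard AES-GCM.
public class EncryptAtRest {
    static final int IV_LEN = 12, TAG_BITS = 128;
    static final SecureRandom RNG = new SecureRandom();

    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        RNG.nextBytes(iv);                       // fresh IV per document
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        // Prepend the IV so decrypt() can recover it from the stored blob.
        return ByteBuffer.allocate(iv.length + ct.length).put(iv).put(ct).array();
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(TAG_BITS, blob, 0, IV_LEN));
        return c.doFinal(blob, IV_LEN, blob.length - IV_LEN);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] doc = "claim-case PDF bytes".getBytes(StandardCharsets.UTF_8);
        byte[] stored = encrypt(key, doc);       // what would go into the database
        byte[] restored = decrypt(key, stored);
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

AES-GCM provides both confidentiality and integrity, so a tampered blob fails to decrypt rather than yielding corrupted plaintext.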

“To develop this application in a standardized way, we utilized various MicroProfile features provided by Open Liberty, including MicroProfile Config, MicroProfile Health, MicroProfile Metrics and MicroProfile REST Client. These features helped us to streamline our development process and ensure that the application is scalable, reliable, and easy to maintain,” says Berchtold.

Reason to use Open Liberty  

The following are just a few of the benefits that Open Liberty provides:

  • Open source: Open Liberty is an open-source, cloud-native Java runtime, which means the source code is available for everyone to use, modify and contribute to.
  • Lightweight: Open Liberty is designed to be lightweight and fast, with a small memory footprint and fast startup times. This makes it a good choice for microservices and cloud-native applications that require high performance and scalability.
  • Standards-based: Open Liberty is built on open standards (such as Jakarta EE and MicroProfile), which ensures that your applications are portable and can run on any compliant server.
  • Flexible: Open Liberty is highly configurable and can be used with a variety of deployment models, including traditional, cloud-native and hybrid environments.
  • Support: Open Liberty is backed by IBM, which provides enterprise-level support and services to ensure the stability and reliability of the server.
  • Updates: Over the past four years, Suva’s experience with updating Open Liberty has been very positive. Compared to their previous centralized application server, keeping applications on the latest releases of Open Liberty has been significantly easier. Open Liberty’s zero-migration architecture allows the runtime to be updated to the latest release without impacting existing applications and configurations.

Learn more and get started

Try out Open Liberty and Liberty as part of WebSphere Hybrid Edition to see if it's the right fit for you.  

Emily Jiang

STSM, Liberty Cloud Native Architect

Igor Berchtold

Product Owner, Shared Applications at Suva


IBM Continues Top Ranking in G2 Quarterly Reports

1 min read


Shannon Cardwell, Peer Review Program Manager

IBM offerings were featured in more than 1270 unique G2 reports, earning over 330 leadership awards across various categories.

The 2023 Spring Reports from G2 help buyers discover the right solution for their real-world business problems. G2 reports identify use cases for software purchasing and provide insight into technology trends. See how IBM stacks up against other vendors in these comprehensive reports, specifically looking at the following:

  • Ease of implementation (ease of setup, implementation time and more)
  • Ease of use (ease of administration, usability and more)
  • Relationship ratings (ease of doing business, quality of support and more)

“G2’s peer review content and recognition provide the validation that prospects and clients seek when researching our offerings. IBM leverages this valuable door opener on our webpages, in our nurture streams and within our seller content.” — Jennifer Turner, Marketing Leader, Progression and Expansion

Get started with IBM

Explore how IBM can support your business with these limited-time offers and discounts or read about G2’s methodology.

Shannon Cardwell

Peer Review Program Manager


Antler and IBM Are Helping Fintechs Evolve in the Next Chapter of Digital Transformation

4 min read


Prakash Pattni, MD, Financial Services Digital Transformation

How Antler and IBM are collaborating to enable fintechs to innovate at a rapid pace while addressing security and compliance.

From unknown disruptor to enabling partner, fintechs are reshaping the way traditional financial institutions around the world operate. Building accessible apps to ensure you can complete transactions and access funds instantly? Check. Democratizing finance to bring more of the world's population into the financial system? Check. Establishing digital currencies that have revolutionized the very way we think of money? Check.

We are seeing fintechs increasingly become a powerful arm in helping firms maneuver through a business climate of rapid disruption and growth.

With all eyes on them, it is essential that fintechs establish best practices and not introduce systemic risk into the financial system. How will fintechs that are bred on breaking new ground comply with complex regulatory and compliance standards without becoming so overwhelmed by risk requirements that innovation suffers? How do they protect their business identity and reputation, which is deeply rooted in innovation, speed and agility, while learning to navigate an evolving digital world? And lastly, how do they accomplish all of the above amid a skills shortage in the financial services industry? This is where fintechs can leverage the power of ecosystems.

Now is the time for fintechs to look for the right partners and tools that can help them be successful in both realms, without sacrificing their identity. Antler (a global investment firm focused on venture and innovation) and IBM are working together to help fintechs do just that—innovate at a rapid pace while addressing security and compliance.  

We believe Antler represents a new asset class: “day zero investing.” Antler backs a founder’s journey from the very beginning and continues the commitment to invest in them as they launch and scale startups. Through its residencies in 23 cities across 6 continents, Antler backs company founders from the beginning with co-founder matching, deep business model validation, initial capital, expansion support and follow-on funding up to Series C. As part of its commitment to help founders from the pre-idea stage scale faster and more efficiently, Antler is collaborating with IBM to give its fintech founders access to technology experts and cloud platforms, including IBM Cloud.  

Leveraging the power of ecosystems to balance innovation with security and compliance needs

As fintechs strive to drive innovation quickly, they should harness the power of ecosystems to collaborate with technology partners and successfully accelerate digital transformation. However, it is important for them to remember that third- and fourth-party dependencies can open the door to additional levels of risk that must be managed. To help both fintechs and financial institutions overcome this, industry cloud platforms can help them mitigate risk and address their compliance requirements—all while driving innovation. 

IBM has long been on a mission to mitigate risk in the industry—helping financial services organizations keep critical data secured with resiliency, performance, security, compliance capabilities and total cost of ownership at the forefront. With IBM Cloud for Financial Services, IBM is positioned to help fintechs become compliance-prepared from the onset. With security and controls built into the cloud platform and designed by the industry, we aim to help financial services institutions mitigate risk, address evolving regulations and accelerate cloud adoption.

Antler and IBM collaborate to deliver key technology and tools to fintechs

Our collaboration with Antler allows fintechs to gain the vital tools and technology skills needed to create rapid innovation for their customers, while helping them adhere to industry requirements.

Antler enables thousands of founders every year to launch and scale companies that are solving some of the most pressing issues of our time, including Yayzy, a UK-based fintech whose mission is to redefine sustainability innovation. The fintech developed its Carbon Footprint Calculation technology for banks and other fintechs to integrate within their mobile apps, allowing customers to track, reduce and offset the carbon footprint of their purchases in real-time.

Harnessing the power of collaboration with Antler and IBM allowed Yayzy to further accelerate its digital transformation by moving to IBM Cloud, which delivers the high levels of security that financial institutions need to help comply with stringent regulatory and compliance standards. The collaboration with Antler and IBM also allows Yayzy to scale globally in line with demand and leverage other advanced software capabilities, from artificial intelligence (AI) and machine learning (ML) to cybersecurity solutions.

Additionally, company founders that are working with Antler can accelerate secured innovation with Intel and IBM Cloud. Building on its mission to help reduce risk for highly regulated industries, IBM Cloud is one of the first cloud providers to deliver 4th Gen Intel® Xeon® Scalable processors. [1] With these advancements, fintech founders can take advantage of high performance, enhanced security and fast memory—all of which are especially important for the financial sector and other high-performance computing (HPC) workloads.

IBM Cloud and Intel continue to work together to deliver innovative cloud solutions designed to offer performance and security benefits that can help fintechs address their industry regulatory requirements, mitigate risk against cybersecurity threats and create faster, seamless customer experiences. Together, IBM Cloud and Intel technologies aim to help clients deliver better business results through innovative hardware and software solutions.

As the financial services industry continues to evolve, fintechs must continue to maintain their edge as they become recognized as a critical part of the global financial system, while keeping up with evolving regulatory requirements. With a strong ecosystem of partners, fintechs can better drive innovation to meet the demands of today’s customers while addressing the needs of the industry.

Learn more about how IBM and Antler are working together.

Get started with IBM Cloud for Financial Services.


[1] Available on 1/10/2023 on IBM Classic with expansion to MZRs February 2023. Expanded product availability expected throughout 2023. Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

Prakash Pattni

MD, Financial Services Digital Transformation
