The contents of this page are copied directly from IBM blog sites to make them Kindle friendly. Some styles and sections from these pages have been removed so that they render properly in the 'Article Mode' of the Kindle e-Reader browser. All content on this page is the property of IBM.
Provision Bare Metal Servers on IBM Cloud Classic Infrastructure by Using the CLI
4 min read
By:
Anupama Vijayan, Solution Developer, IBM Cloud Solution Engineering
An extension to the IBM documentation on how to provision bare metal servers on classic infrastructure using the IBM Cloud CLI.
Bare metal servers are single-tenant servers that are dedicated to the user on IBM Cloud and provisioned without a hypervisor. They are high-performance cloud servers that can be deployed in one or more data centers and are configurable with hourly or monthly options.
Bare metal options on IBM Cloud
IBM Cloud Bare Metal Servers provides various categories and options you can customize when provisioning. Some of these options include AMD, Intel, NVIDIA GPU, network redundancy, storage options, operating systems, monitoring, etc.
The following are the steps required to place an order using the IBM Cloud CLI. We will explain each step in further detail below:
- Get the bare metal server keyname you would like to provision based on the desired processor family, number of cores, drives etc.
- Choose the available locations for the server using the keyname from the first step.
- Using the same keyname from above, choose all the required and the optional values for various categories (OS, RAM, etc.) of the bare metal server.
- Select optional arguments (e.g., VLAN ID, subnets, SSH keys, etc.).
- Provision the bare metal server using the ibmcloud sl order place command with the values obtained from the above steps.
As a prerequisite, make sure to install and configure the IBM Cloud CLI locally.
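If the CLI is not set up yet, a minimal setup might look like the sketch below (the Linux one-line installer from the IBM Cloud docs is shown; the classic infrastructure sl commands are included in the CLI):
# Hedged sketch: install the IBM Cloud CLI, log in, and confirm classic (sl) access works.
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
ibmcloud login                                      # add --sso for federated accounts
ibmcloud sl call-api SoftLayer_Account getObject    # should return your classic account details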
In the following example, we are looking at the steps to provision a bare metal server of Dual Intel Xeon Processor Cascade Lake Scalable Family (4 Drives) in San Jose with the OS flavor Windows Server 2022 Standard Edition (64 bit).
1. Get the bare metal server keyname
Resources on classic infrastructure are organized into different kinds of packages. To create any classic infrastructure resource, we need values for two parameters: package type and package keyname. When creating a bare metal resource, the package type we use is BARE_METAL_CPU.
Using this package type, we can use the ibmcloud sl command to list all the unique package keynames available for BARE_METAL_CPU:
ibmcloud sl order package-list --package-type BARE_METAL_CPU
In Figure 1, for Dual Intel Xeon Processor Cascade Lake Scalable Family (4 Drives), the package keyname is DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES.
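Because the full package list is long, it can help to pipe the output through a filter; for example:
# Narrow the package listing to the Cascade Lake four-drive servers.
ibmcloud sl order package-list --package-type BARE_METAL_CPU | grep -i CASCADE_LAKE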
2. Choose an available location
For a particular package keyname, you can list all the available locations for provisioning this bare metal server using the following command:
ibmcloud sl order package-locations DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES
In Figure 2, the highlighted location keyname is SANJOSE04.
3. Choose values for the required and optional categories
To list all the categories/options that are available while provisioning a DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES bare metal server, use the following command:
ibmcloud sl order category-list DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES
In Figure 3, the Name column has all the available categories for the DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES bare metal server, and the Category Code column has the respective category codes. The Is Required column denotes if the category is mandatory or not. For the mandatory categories, respective values must be passed as an argument when placing the bare metal order.
To list all the available options and their descriptions for all the categories (e.g., server, OS, bandwidth, etc.) for the package name DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES, use the following command:
ibmcloud sl order item-list DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES
Further, to list all available options for a particular category code, use the --category filter in the above command.
For example, the below command lists all available OS flavors for the package name DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES:
ibmcloud sl order item-list DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES --category os
In Figure 5, the keyname to use for the OS flavor Windows Server 2022 Standard Edition (64 bit) would be OS_WINDOWS_2022_FULL_STD_64_BIT.
4. Select optional arguments
You can use other optional arguments while provisioning a bare metal server, such as assigning it to a VLAN or subnet or adding an SSH key.
The following are some commands to accomplish these tasks:
- Retrieve the VLAN ID:
ibmcloud sl vlan list | grep <unique name or number of your VLAN>
- Retrieve the SUBNET ID:
ibmcloud sl subnet list
- Retrieve the SSH Key IDs:
ibmcloud sl security sshkey-list
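If you plan to script the order, it can be convenient to capture these IDs in shell variables first; a rough sketch in which the grep patterns and the ID column position are illustrative and depend on your account's output:
# Hedged sketch: capture the optional-argument IDs for use when placing the order.
VLAN_ID=$(ibmcloud sl vlan list | grep my-vlan-name | awk '{print $1}')
SUBNET_ID=$(ibmcloud sl subnet list | grep 10.123.45 | awk '{print $1}')
SSHKEY_ID=$(ibmcloud sl security sshkey-list | grep my-key | awk '{print $1}')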
Step 5 shows how to add the above example arguments.
5. Place an order
To verify a bare metal server order before you initiate it, use the --verify argument:
ibmcloud sl order place PACKAGE_KEYNAME LOCATION ORDER_ITEM1,ORDER_ITEM2,ORDER_ITEM3,ORDER_ITEM4... [OPTIONS] --verify
To place a bare metal server order, use the following command:
ibmcloud sl order place PACKAGE_KEYNAME LOCATION ORDER_ITEM1,ORDER_ITEM2,ORDER_ITEM3,ORDER_ITEM4... [OPTIONS]
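Tying the walkthrough together, a dry run of the San Jose/Windows configuration chosen above might look like the sketch below; the CPU, RAM, disk and other item keynames are reused from the Dallas example that follows and are assumed to also be offered in SANJOSE04, so confirm them with ibmcloud sl order item-list, and add the --extras JSON (hostname, domain, VLAN/subnet/SSH key IDs) before placing the real order:
# Hedged sketch: verify (but do not place) a monthly order for the San Jose/Windows configuration.
ibmcloud sl order place DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES SANJOSE04 INTEL_XEON_4210_2_20,RAM_32_GB_DDR4_2133_ECC_NON_REG,OS_WINDOWS_2022_FULL_STD_64_BIT,1_GBPS_REDUNDANT_PUBLIC_PRIVATE_NETWORK_UPLINKS,DISK_CONTROLLER_RAID,HARD_DRIVE_2_00_TB_SATA_2,HARD_DRIVE_2_00_TB_SATA_2,BANDWIDTH_1000_GB,REBOOT_KVM_OVER_IP,1_IP_ADDRESS,UNLIMITED_SSL_VPN_USERS_1_PPTP_VPN_USER_PER_ACCOUNT,NOTIFICATION_EMAIL_AND_TICKET,MONITORING_HOST_PING,AUTOMATED_NOTIFICATION --complex-type SoftLayer_Container_Product_Order_Hardware_Server --billing monthly --verify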
The following is an example command to order a monthly DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES bare metal server in Dallas with values for different categories:
ibmcloud sl order place DUAL_INTEL_XEON_PROCESSOR_CASCADE_LAKE_SCALABLE_FAMILY_4_DRIVES DALLAS13 INTEL_XEON_4210_2_20,RAM_32_GB_DDR4_2133_ECC_NON_REG,OS_DEBIAN_10_X_BUSTER_MINIMAL_64_BIT,1_GBPS_REDUNDANT_PUBLIC_PRIVATE_NETWORK_UPLINKS,DISK_CONTROLLER_RAID,HARD_DRIVE_2_00_TB_SATA_2,HARD_DRIVE_2_00_TB_SATA_2,BANDWIDTH_1000_GB,REBOOT_KVM_OVER_IP,1_IP_ADDRESS,UNLIMITED_SSL_VPN_USERS_1_PPTP_VPN_USER_PER_ACCOUNT,NOTIFICATION_EMAIL_AND_TICKET,MONITORING_HOST_PING,AUTOMATED_NOTIFICATION --complex-type SoftLayer_Container_Product_Order_Hardware_Server --extras '{"hardware":[{"hostname":"c-test","domain":"ctest.com","primaryBackendNetworkComponent": {"networkVlan": {"primarySubnet":{"id": 123456}}},"primaryNetworkComponent": {"networkVlan": {"id": 23456}}}],"storageGroups":[{"arrayTypeId": 2,"arraySize": 2000,"hardDrives": [0,1],"partitionTemplateId": 1}],"sshKeys": [{"sshKeyIds":[123,456]}],"provisionScripts": ["https://pastebin.com/raw/SCp607Tm"]}' --billing monthly
This is an example command to order an hourly bare metal server in Dallas with values for different categories:
ibmcloud sl order place BARE_METAL_SERVER DALLAS12 BANDWIDTH_0_GB_2,100_MBPS_PRIVATE_NETWORK_UPLINK,REDUNDANT_POWER_SUPPLY,OS_UBUNTU_20_04_LTS_FOCAL_FOSSA_64_BIT,UNLIMITED_SSL_VPN_USERS_1_PPTP_VPN_USER_PER_ACCOUNT,1_IP_ADDRESS,REBOOT_KVM_OVER_IP --complex-type SoftLayer_Container_Product_Order_Hardware_Server --preset 1U_2174S_64GB_2X4TB_RAID_1 --billing hourly
After provisioning the bare metal server successfully, you can add or remove users who can be notified on ping failure. When a monitoring service on that hardware instance fails and the monitor is set to notify users, any users linked to that hardware instance using this service will be notified of the failure.
To create multiple user hardware notification entries:
ibmcloud sl call-api SoftLayer_User_Customer_Notification_Hardware createObjects --parameters '[[{"userId":1234,"hardwareId":11234},{"userId":1235,"hardwareId":1123}]]'
To delete multiple users:
ibmcloud sl call-api SoftLayer_User_Customer_Notification_Hardware deleteObjects --parameters '[[{"userId":1234,"hardwareId":11234,"id":23456}]]'
hardwareId is the ID of the hardware object that is to be monitored (Type: int). The id is the unique identifier for this object (Type: int). The userId is the ID of the SoftLayer_User_Customer object that represents the user to be notified on monitoring failure (Type: int).
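To find the hardwareId and userId values for these calls, you can list the hardware and users on the account; a small sketch (the --mask fields are an assumption and can be omitted):
# Hedged sketch: look up hardware IDs and user IDs for the notification objects.
ibmcloud sl hardware list
ibmcloud sl call-api SoftLayer_Account getUsers --mask "id,username"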
When provisioning an IBM Cloud Bare Metal Server on Classic infrastructure, all the configurations that are available for the user in the IBM Cloud UI are also available via the IBM Cloud CLI. One advantage of using the CLI is that you can easily leverage the commands in automations/scripts to provision multiple bare metal servers with the same configuration.
Learn more about IBM Cloud Bare Metal Servers.
Anupama Vijayan, Solution Developer, IBM Cloud Solution Engineering
=======================
Migrate Your Cloud Foundry Liberty-for-Java Applications to the Paketo Buildpack for Liberty
1 min read
By:
Raymond Xu, Software Engineer
Introducing a migration guide to help you move your application from Cloud Foundry to the Paketo Buildpack for Liberty.
The deprecation of the IBM Cloud Foundry (CF) liberty-for-java buildpack has been announced, and customers require a migration solution. The strategic alternative is the cloud-native Paketo Buildpack for Liberty. The main advantage of the Paketo Buildpack is the ability to transform your application source code into reproducible container images. The container images can then be used almost anywhere, providing flexibility and allowing them to be easily updated.
Other key advantages of using the Paketo Buildpack for Liberty include the ability to build your application image without creating a Dockerfile, fast rebuilds through built-in caching, easy customization and rebasing.
What’s in the migration guide?
To help with the migration process, we are providing a migration guide. The guide is divided into two main sections: building your Liberty application with Paketo Buildpack for Liberty and advanced features for Liberty applications. There is a feature-to-feature comparison in each of the sections to compare the Cloud Foundry and Paketo Buildpack commands. All of these sections are designed to help you migrate your application from Cloud Foundry to Paketo Buildpack for Liberty.
The section on building your Liberty application with the Paketo Buildpack contains the following procedures:
- Building a container image from application source code
- Building an application with a simple war file
- Building an application from a Liberty server
- Building an application from a Liberty packaged server
- Building an application by using UBI images
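As a taste of the first procedure, building a container image straight from application source code with the pack CLI might look like the sketch below; the image name my-liberty-app is a placeholder, and the builder and buildpack IDs are assumptions based on the Paketo project's published names, so check the migration guide for the exact values:
# Hedged sketch: build a Liberty application image from source without a Dockerfile.
pack build my-liberty-app \
  --builder paketobuildpacks/builder:base \
  --buildpack paketo-buildpacks/liberty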
The advanced features for Liberty applications that use Paketo Buildpack for Liberty includes the following sections:
- Providing server configuration at build time
- Using Liberty profiles to build applications
- Installing custom features
- Installing interim fixes
- Check out the migration guide
- Deprecation of IBM Cloud Foundry
- Introducing the Paketo Liberty Buildpack
- Cloud Native Buildpacks
- Paketo Cloud Native Buildpacks implementation
- Paketo Buildpack for Liberty Github Repository
Raymond Xu, Software Engineer
=======================
Why Green Coding is a Powerful Catalyst for Sustainability Initiatives
6 min read
By:
IBM Cloud Education, IBM Cloud Education
How environmentally friendly organizations can use green coding to drive long-term success.
Twenty years ago, coding had boundaries. Bandwidth restrictions and limited processing power forced developers to always be mindful of the length and complexity of their code. But as technology enabled greater innovation, programmers were no longer constrained by size.
For example, greater computing power allowed faster processing of large files and applications. Open-source libraries and frameworks allowed software engineers to reuse pieces of code in their projects, creating greater possibilities. This also led to programs with more lines of code—and more processing power required to parse it. The unintended consequence was greater energy usage and a higher global electricity demand.
As companies look to transform business and implement more sustainable practices, they’re digging deep into established processes to find new efficiencies. This includes evaluating the building blocks of their business operations, from storing data more efficiently to examining how code is written.
In this post, we’ll explore how green coding helps organizations find innovative ways to prioritize sustainability and reach their energy reduction goals.
What is green coding?
Green coding is an environmentally sustainable computing practice that seeks to minimize the energy involved in processing lines of code and, in turn, help organizations reduce overall energy consumption. Many organizations have set greenhouse emission reduction goals to respond to the climate change crisis and global regulations; green coding is one way to support these sustainability goals.
Green coding is a segment of green computing, a practice that seeks to limit technology’s environmental impact, including reducing the carbon footprint in high-intensity operations, such as in manufacturing lines, data centers and even the day-to-day operations of business teams. This larger green computing umbrella also includes green software—applications that have been built using green coding practices.
Advances in technology—from big data to data mining—have contributed to a massive increase in energy consumption in the information and communications technology sector. According to the Association for Computing Machinery, annual energy consumption at data centers has doubled over the past decade. Today, computing and IT are responsible for between 1.8% and 3.9% of global greenhouse gas emissions.
The high energy consumption of computing
To fully understand how green coding can reduce energy consumption and greenhouse gas emissions, it helps to dive into the energy consumption of software:
- Infrastructure: The physical hardware, networks and other elements of an IT infrastructure all require energy to run. Within any organization, there are likely areas where the computing infrastructure is overly complicated or overprovisioned, which results in inefficient energy use.
- Processing: Software consumes energy as it runs. The more complicated the software or the larger the file, the more processing time it takes and the more energy it consumes.
- DevOps: In the typical coding process, developers write lines of code, which are parsed and processed through a device. The device requires energy, which, unless powered by 100% renewable energy, creates carbon emissions. The more code there is to process, the more energy the device consumes and the higher the level of emissions.
Recent research into the speed and energy use of different programming languages found that C was the most efficient in speed, reducing energy and memory usage and providing another potential opportunity for energy savings. However, there is still some debate in terms of how this is realized and which metrics should be used to evaluate energy savings.
Writing more sustainable software
Green coding begins with the same principles that are used in traditional coding. To reduce the amount of energy needed to process code, developers can adopt less energy-intensive coding principles into their DevOps lifecycle.
The “lean coding” approach focuses on using the minimal amount of processing needed to deliver a final application. For example, website developers can prioritize reducing file size (e.g., replacing high-quality media with smaller files). This not only accelerates website load times, but also improves the user experience.
Lean coding also aims to reduce code bloat, a term used to refer to unnecessarily long or slow code that is wasteful of resources. Open-source code can be a contributing factor to this software bloat. Because open-source code is designed to serve a wide range of applications, it contains a significant amount of code that goes unutilized for the specific software. For example, a developer may pull an entire library into an image, yet only need a fraction of the functionality. This redundant code uses additional processing power and leads to excess carbon emissions.
By adopting lean coding practices, developers are more likely to design code that uses the minimal amount of processing, while still delivering desired results.
Implementing green coding
The principles of green coding are typically designed to complement existing IT sustainability standards and practices used throughout the organization. Much like implementing sustainability initiatives in other areas of the organization, green coding requires both structural and cultural changes.
Structural changes
- Improving energy use at the core: Multi-core processor-based applications can be coded to increase energy efficiency. For example, code can directly instruct processors to shut down and restart within microseconds instead of using default energy saving settings that might not be as efficient.
- Efficiency in IT: Sometimes referred to as green IT or green computing, this methodology aims for resource optimization and workload consolidation to reduce energy use. By optimizing IT infrastructure through use of modern tools like virtual machines (VMs) and containers, organizations can reduce the number of physical servers needed for operations, which in turn, reduces energy consumption and carbon intensity.
- Microservices: Microservices are an increasingly popular approach to building applications that break down complicated software into smaller elements, called services. These smaller services are called upon only when needed, instead of running a large monolithic program as a whole. The result is that applications run more efficiently.
- Cloud-based DevOps: Applications running on distributed cloud infrastructure cut the amount of data transported over the network and the network’s overall energy use.
Cultural changes
- Empower management and employees: Change is only effective when employees and management are on board. Encouraging adoption with consistent messaging to the entire DevOps team helps support the sustainability agenda and makes people feel like they are part of the solution.
- Encourage innovation: DevOps teams are often driven by the desire to innovate and create solutions to big problems. Encourage teams to look for new ways to use data insights, collaborate with partners and take advantage of other energy-saving opportunities.
- Stay focused on outcomes: Problems will arise when implementing new initiatives like green coding. By anticipating challenges, companies can deal with problems that arise more easily.
Beyond the energy-saving benefits, companies may also find there are additional advantages to green coding practices, including the following:
- Reduced energy costs: It’s the simple principle of use less, spend less. With the increasingly volatile price of energy, organizations want to reduce the amount they spend on power not just for environmental sustainability, but also to maintain the sustainability of the business.
- Accelerated progress toward sustainability goals: Most organizations today have net zero emission goals or strategic initiatives to reduce emissions to increase sustainability. Green coding moves organizations closer to reaching this goal.
- Higher earnings: CEOs that implement sustainability and digital transformation initiatives, such as green coding, report a higher average operating margin than their peers, according to the IBM 2022 CEO Study.
- Better development discipline: Using green coding empowers programmers to simplify elaborate infrastructures and can ultimately save time, reducing the amount of code software engineers write.
To find out more about IBM and green coding, start with the white paper from the Institute for Business Value: IT sustainability beyond the data center.
This white paper investigates how software developers can play a pivotal role in promoting responsible computing and green IT, discusses four major sources of emissions from IT infrastructure, and looks at how to fulfill the promise of green IT with hybrid cloud.
Infrastructure optimization is an important way to reduce your carbon footprint through better resource utilization. One of the fastest ways to make an impact on energy efficiency is to configure resources automatically to reduce energy waste and carbon emissions. IBM Turbonomic Application Resource Management is an IBM software platform that can automate critical actions that proactively deliver the most efficient use of compute, storage and network resources to your apps at every layer of the stack continuously—in real-time—without risking application performance.
When applications consume only what they need to perform, you can increase utilization, reduce energy costs and carbon emissions, and achieve continuously efficient operations. Customers today are seeing up to 70% reduction in growth spend avoidance by leveraging IBM Turbonomic to better understand application demand. Read the latest Forrester TEI study and learn how IT can impact your organization’s commitment to a sustainable IT operation while assuring application performance in the data center and in the cloud.
A final critical way to promote green computing is to choose energy-efficient IT infrastructure for on-prem and cloud data centers. For example, IBM LinuxONE Emperor 4 servers can reduce energy consumption by 75% and space by 50% for the same workloads compared to x86 servers. Containerization, interpreter/compiler optimization and hardware accelerators can then reduce energy needs further through green coding.
Learn more about LinuxONE and sustainability.
IBM Cloud Education
=======================
Bamboo Rose and IBM Increase Process Efficiency for Retailers
2 min read
By:
Lori Brown, Product Marketing, IBM Business Automation
While Black Friday and December may be when a large number of retail transactions occur, mid-January is when innovation is top of mind for retailers.
The National Retail Federation (NRF) conference will be held January 15-17, 2023, in New York City, showcasing the latest solutions to help retailers overcome challenges in the industry, including inflation, talent shortages and supply chain issues.
A large part of this discussion will be focused on intelligent automation. IBM and our partners have helped a wide variety of retail and consumer solutions clients increase agility, reduce costs, increase productivity, accelerate business processes while reducing errors and drive revenue growth.
Bamboo Rose and IBM Operational Decision Manager
Bamboo Rose, an IBM Partner, helps retailers increase process efficiency in product development, sourcing and supply-chain operations by digitizing collaboration and transactions between retail brands and their supplier community. More than 50,000 companies and 250,000 users are engaged on the platform today, with goods valued at over USD 1 trillion flowing through the platform annually.
Their industry-leading, multi-enterprise platform helps retailers discover, develop and deliver great products to market as efficiently and cost effectively as possible. Its offerings include a B2B marketplace and business process applications, including product lifecycle management, global sourcing, purchase order management and global trade management. Bamboo Rose has helped American Eagle streamline its complex global sourcing processes, optimized vendor management for the Loblaw grocery store chain and helped Kaufland improve supplier discovery and reduce PO processing costs, among others.
Bamboo Rose’s offerings are powered by IBM Automation decision management technology, which they obtain via an embedded service agreement.
“IBM Operational Decision Manager (ODM) is the engine that enables our customers to automate their business processes across their entire supply chain—from product development, sourcing and production to delivery,” says Kamal Anand, CTO, Bamboo Rose.
A few highlights of how Bamboo Rose uses ODM:
- Determining project timelines: As new product development cycles are initiated, Bamboo Rose—with the embedded IBM ODM engine—intelligently assigns the right project timelines and milestones based on various product attributes.
- Identifying and assigning tasks: At any given instance, the Bamboo Rose platform is monitoring thousands of different events and workflow tasks across the entire community. As these events occur, Bamboo Rose automates the subsequent task creation, task assignment and further actions required to progress the projects.
- Matching invoices with payments: Trade management solutions use IBM ODM to automatically match supplier-submitted invoices for payments. Bamboo Rose is the only platform that enables a five-way match of invoices using purchase orders, packing lists, ASNs, BOLs and warehouse receipts.
Both IBM (Booth 5139) and Bamboo Rose (Booth 3966) will be sharing our latest innovations at the National Retail Federation (NRF) 2023 conference, January 15-17 in New York City. If you are attending the event, please make sure to visit us at both booths.
- Learn more about IBM Decision Management.
- Learn more about IBM’s Partnership with Bamboo Rose.
- Learn more about embedding IBM technology into your solution as IBM Partner.
- Register for NRF 2023.
Lori Brown, Product Marketing, IBM Business Automation
=======================
IBM Cloud Solution Tutorials: 2022 in Review
6 min read
By:
Frederic Lavigne, Product Manager
Henrik Loeser, Technical Offering Manager / Developer Advocate
Dimitri Prosper, Product Manager
Powell Quiring, Offering Manager
A look back at the year 2022 by the team creating the IBM Cloud Solution Tutorials.
Similar to 2021’s review post, it’s once again that time of year to look back at the work done, new experiences gained and interesting things seen. Without further ado, let’s get into it: four short views, written by members of the IBM Cloud Solution Tutorials team whom you know from previous blog posts.
Henrik
When was the last time you…? That’s a question I heard often over the past 12 months. The pandemic caused many changes — in the ways we live, we work and everything in between (think “home office”). When was the last time you were in the office, met a co-worker, a customer or partner? When was the last time you attended a conference in person? When was the last time you heard a similar question?
Fortunately, those questions came often this year. And I was happy to hear them because it was at some in-person events. Being in the home office or traveling implies accessing work-related systems remotely. So, let me ask this question: When was the last time you had to use a VPN (Virtual Private Network) connection to access corporate resources?
Depending on your company and the kind of work you do, the need for VPN access (get within the perimeter) got reduced or eliminated. Many organizations have started to move toward a zero trust architecture. Instead of assuming that everything within the perimeter is secure, a zero trust approach assumes a breach. Hence the motto is “never trust, always validate.” The goal is to enforce accurate, least-privilege-per-request access decisions:
In my cloud account, I have enabled MFA (multi-factor authentication) for all users to tighten security. I also made use of custom roles for more fine-grained access management. Custom roles are useful to implement the principle of least privilege. To quickly and securely onboard/offboard teams and always assign the right set of privileges, I started to use Terraform code to roll out new IAM (Identity and Access Management) access groups combined with other security features.
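Henrik's rollout is done with Terraform; purely as a CLI-level illustration of the same building blocks, creating an access group and enabling account-wide MFA might look like this sketch (the group name, service and MFA type are example values to adapt to your account):
# Hedged sketch: an access group with a least-privilege policy, plus account-wide MFA.
ibmcloud iam access-group-create ops-readers -d "Read-only operators"
ibmcloud iam access-group-policy-create ops-readers --roles Viewer --service-name cloud-object-storage
ibmcloud account update --mfa TOTP4ALL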
Moreover, I am actively tracking down inactive identities to reduce risks. Finally, I am adding context-based restrictions to my account to limit which resources and endpoints are exposed. And to prove my security skills, I got certified for the IBM Cloud Security Engineer Specialty.
So, let me ask this question: When was the last time you got certified?
A good way to build up skills for certifications is by going through the provided training material and by hands-on experience. Personally, I learn and grow by going through the IBM Cloud solution tutorials (or creating new ones).
In my latest tutorial, I not only share my experience, but also insights into how to “Share Resources Across Your IBM Cloud Accounts.” So, let me ask this question: When was the last time you read (and tried) one of our IBM Cloud solution tutorials?
Powell
I love the cloud. Creating a scalable and highly available architecture in my on-premises data center would be challenging, but making this happen in the IBM Cloud is straightforward. IBM has a collection of multizone regions (MZRs) around the world. Each regional zone has isolated power, network, cooling, etc. Workloads can be balanced across multiple servers in multiple zones. In the event of a server failure or the unlikely event of a zone failure, a workload can remain accessible:
There is no single point of failure since even the global load balancer is a highly available system provided by IBM’s partner Cloudflare. Using Infrastructure as Code in IBM Cloud Schematics allows the infrastructure to be developed and tested in my account before delivering to production through a DevSecOps environment.
A variation of this architecture for on-premises private access to cloud workloads across zones is also possible by layering on Direct Link and using the global load balancer in the IBM Cloud DNS Service:
Frederic
Automate: Last year, I wrote about a lot of work done around automation. This is my new normal. I have a hard time remembering when I last provisioned resources manually. Most of the time it was when playing with a new service or feature, but for any serious work, I go through some form of automation — Terraform being the standard when it comes to cloud. Even my personal projects, my own domains, my Git repos and my laptop configuration are all captured as-code! There’s no doubt this will continue this year — more automation, more as-code for everything.
Secure: In the face of increasingly sophisticated cyber threats, security is also a critical concern for organizations. One way to improve security is to adopt a zero trust approach. This means that no user or device is automatically trusted, and all access to resources must be authenticated and authorized.
To allow users to connect to cloud resources, a company may deploy services like a bastion, establish a site-to-site VPN connection or deploy a more traditional client-to-site VPN. I had the opportunity to look at our client-to-site VPN and how it can be fully configured with Terraform:
Once you are authenticated and authorized, you want to make sure the system you are accessing is using the latest security fixes. For virtual servers, one approach is to build hardened custom images and to consider them immutable. As new fixes are released, new custom images are built and deployed. And guess what, that is another place for automation, because a tool like Packer can be integrated into a CI/CD pipeline to build custom images:
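For example, a CI stage that bakes such an image from a Packer template might reduce to a few commands like the sketch below (the template file name vsi-image.pkr.hcl is hypothetical):
# Hedged sketch: validate and build a hardened custom image from a Packer template.
packer init vsi-image.pkr.hcl        # install the plugins the template requires
packer validate vsi-image.pkr.hcl    # fail fast on template errors
packer build vsi-image.pkr.hcl       # build and publish the custom image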
AI: In the last weeks of 2022, artificial intelligence (AI) dominated the headlines again. The trend is to make it more and more accessible to everyone with use-cases that we can all relate to (e.g., generate your social media avatar, craft nice emails, write a full essay from a bullet list, summarize a long article or a book). This is a trend we will likely see continue in 2023, in many fields:
Disclaimer: This section may or may not have been partially written by an artificial intelligence.
Dimitri
Last year, I described work in progress on a process I was using to manage SSH keys on virtual servers running in a Virtual Private Cloud. I published the “Using Instance Metadata and Trusted Profiles for Managing SSH Keys” post a few weeks later with the steps and source code I am using. I was glad to see a few uses of it as is, as well as some cloning and repurposing of the code for similar requirements (for example, to configure ephemeral storage on a compute resource after a restart).
For parts of 2022, I worked with several technical individuals that interact directly with our clients to help identify and start addressing gaps in our documentation and tooling that would help first-time users of IBM Cloud. As part of that effort, I developed and released a tool to help identify potential conflicts between IP ranges in on-premises environment(s) and IP ranges used in our IBM Cloud classic infrastructure. It is a common requirement, and you can perform a quick search using the IP Ranges Calculator tool. The tool allows you to also download the IP ranges in JSON format.
We also published in our cloud documentation a checklist for getting started on IBM Cloud. This checklist is based on experiences from our field teams on tasks they found are required for most users onboarding to IBM Cloud. It is meant as a one-stop shop, with some links to our existing documentation.
When I write tools like the IP ranges calculator tool, I use the IBM Cloud Code Engine service as my compute environment. With the source code usually available on GitHub, I needed a way to manage updates and validation without too much effort. I wrote a set of small GitHub actions — Set up the IBM Cloud CLI and Create, Update and Delete to IBM Cloud Code Engine — and I now use these to deploy all my apps. I hope you find these as useful as I do.
Engage with us
If you have feedback, suggestions, or questions about this post, please reach out to us on Twitter (@data_henrik, @l2fprod, @powellquiring) or LinkedIn (Dimitri, Frederic, Henrik, Powell).
Use the feedback button on individual tutorials to provide suggestions. Moreover, you can open GitHub issues on our code samples for clarifications. We would love to hear from you.
Frederic Lavigne, Product Manager
Henrik Loeser, Technical Offering Manager / Developer Advocate
Dimitri Prosper, Product Manager
Powell Quiring, Offering Manager
=======================
IBM Cloud Backup for VPC: Automated Snapshot Management of Block Storage Volumes
1 min read
By:
Karthik Baskaran, Architect - IBM Cloud Storage
Smita Raut, Architect - IBM Cloud Storage
What are block storage VPC snapshots?
A block storage VPC snapshot is an on-demand, optimized copy where only the initial snapshot copies the entire content of the volume — the subsequent snapshots of the same volume capture only the recent changes. The snapshots are regional resources and can be accessed from any availability zone in that region — this is possible because the snapshots are uploaded to IBM Cloud Object Storage for a persistent backup copy. A snapshot can be restored by simply creating a new volume and providing the snapshot.
A snapshot is a basic building block for any backup and data protection solution, and it plays a major role in addressing business continuity and disaster recovery. To build an effective solution, some level of snapshot management is required, including which volumes should have a snapshot taken, how frequently, and how long they should be retained. Until last year, the way to do it was by automating the steps (as described in this blog post).
IBM Cloud Backup for VPC
In 2022, IBM introduced a new cloud service called IBM Cloud Backup for VPC that helps manage the automation and retention of snapshot creation. IBM Cloud Backup for VPC provides a policy-driven approach to snapshot lifecycle management. It lets you create backup policies for VPC block storage volumes and supports up to four plans to automate your backups on a daily, weekly, monthly or yearly basis. You can also configure the retention of backups based on their age or total count. The backup plan contains the schedule and frequency of the backup.
The block storage volumes you want to back up are identified by tags configured in the backup policy, which are matched against the user-provided tags on the target block storage volumes. To learn about how to apply tags to block storage volumes, refer to the procedure for applying tags.
To make use of the VPC Backup service, you must set up service-to-service authorizations and specify user roles. This authorization enables the IBM Cloud Backup for VPC service to interact with the volume and snapshot services on behalf of the customer and create restorable backups in the account. For details on how to set up service-to-service authorizations, please refer to VPC Backup service-to-service authorizations.
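With the service-to-service authorization in place, a tag-driven policy can also be created from the CLI. The following is only a sketch; the backup-policy commands live in the VPC (is) plugin, and flag names such as --match-user-tags and --cron-spec are assumptions that should be checked against the current CLI reference:
# Hedged sketch: create a policy that matches volumes tagged env:prod, then add a daily plan.
ibmcloud is backup-policy-create --name daily-volume-backups --match-user-tags env:prod
ibmcloud is backup-policy-plan-create daily-volume-backups --name daily-0200 --cron-spec "0 2 * * *"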
For more details on IBM Cloud Backup for VPC, please refer to the cloud documentation.
Karthik Baskaran, Architect - IBM Cloud Storage
Smita Raut, Architect - IBM Cloud Storage
=======================
Is Your Open Source Software Secure?
2 min read
By:
Hanna Linder, Global Open Source Services Technical Lead
A cybersecurity attack can be devastating to any company, but improving your software supply chain can significantly minimize your risk of being compromised.
With the rapid increase in the adoption and use of Open Source Software (OSS) in modern application development, it is important to perform additional due diligence. All of the components of your software supply chain need a thorough inspection. IBM is collaborating in the OSS ecosystem with other industry leaders and key OSS communities to address a broad range of security issues that have developed over time. These issues have appeared gradually, and it will take time to create new security best practices to address these challenges.
One of the many ways that IBM is collaborating with these OSS communities is by being an active member of the Open Source Security Foundation (OpenSSF), an initiative developed and launched by the Linux Foundation.
The OpenSSF describes itself as “… a cross-industry organization that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.” IBM is one of the founding members of the organization, and Jamie Thomas, General Manager, Systems Strategy and Development at IBM, serves as a Board Chair at the OpenSSF.
IBM Technology Services
Clients can improve their security posture by leveraging several valuable products and services from IBM. IBM Technology Services provide clients with several options, including open source security vulnerability assessments and risk mitigation assistance.
The IBM Technology Services Open Source Software team has expertise across many popular community and commercial products that support business digital transformation and application modernization. We offer a range of services to help clients design, deploy and optimize open source technologies.
Security is not a linear or finite journey; rather, it is a continuous, evolving process. Clients need to take a proactive role in addressing their security today, as waiting for a threat to materialize can sometimes be too late.
How do we help our clients address their overall security?
We integrate security into all of our client engagements. When providing a client with OSS services, we make sure that the OSS download site is valid, and we verify the Software Bill of Materials (SBOM) when it is available, meaning that any download links we provide are verified to be valid and safe. In addition, we can also perform vulnerability assessments and mitigations for or with a client as a separate service.
By using IBM Technology Lifecycle Services for Open Source Technology Services, clients can be confident that they are benefiting from a worldwide community that is constantly evaluating for the presence of bugs and critical vulnerabilities. Other important features include focusing on integrating DevSecOps as a key element in software development.
To learn more about how we can help you, contact technologyservices@ibm.com.
Hanna Linder, Global Open Source Services Technical Lead
=======================
Build a Public, Highly Available, Scalable Workload
5 min read
By:
Powell Quiring, Offering Manager
Ahmed Osman, Global Offering Manager
Arda Gumusalan, Computer Scientist
Implement a scalable architecture that is resilient to node and availability zone failures.
IBM Cloud has a global network of multizone regions (MZRs) distributed around the world. Each zone has isolated power, cooling and network infrastructures.
This blog post presents an example architecture that utilizes a network load balancer (NLB) and is resilient to a zonal failure:
IBM Cloud Internet Services (CIS) provides security, reliability and performance to external web content. A global load balancer (GLB), as seen in the diagram above, can be configured to provide high availability by spreading load across zones.
IBM Cloud VPC load balancers
IBM Cloud Virtual Private Cloud (VPC) supports two types of load balancers: an application load balancer (ALB) and a network load balancer (NLB).
The right side of the diagram shows a VPC in an MZR with three zones. Health checks will allow the NLB to distribute connections to the healthy servers. In this example, the servers are in the same zone as the NLB, but it is possible to accept members across all zones using multi-zone support.
Why use a network load balancer instead of an application load balancer?
A network load balancer (NLB) works at Layer 4 and is the best fit for workloads that require high throughput and low latency.
You may be asking why a separate network load balancer is needed if the application load balancer supports Layer 4 traffic. Often, a client will submit a request that is fairly small in size, with little performance impact on the load balancer; however, the information returned from the backend targets (virtual servers or container workloads) can be significant — perhaps several times larger than the client request.
With Direct Server Return (DSR), the information processed by the backend targets is sent directly back to the client, thus minimizing latency and optimizing throughput performance.
Additionally, network load balancers have the following unique characteristics when compared to an application load balancer (for more information, see the Load Balancer Comparison Chart):
- Source IP preservation: Network load balancers don’t NAT the client IP address. Instead, it is forwarded to the target server.
- Fixed IP address: Network load balancers have a fixed IP address.
Global load balancer (GLB) health checks allow for the distribution of requests across healthy NLB/servers:
Each red ‘X’ in the diagram above shows an unhealthy scenario (i.e., an unhealthy server detected by an NLB health check and an NLB or zonal failure that is detected by the CIS GLB health check).
This next diagram shows more concretely how the CIS GLB performs load balancing via DNS name resolution:
- The client requests the DNS name cogs.ibmom.com.
- The client computer has a DNS resolver that contacts a web of DNS resolvers to determine the corresponding IP addresses. The diagram shows the client’s DNS resolver contacting an on-premises DNS resolver that will reach Cloudflare as the authoritative DNS Server for the IBM Cloud Internet Services and, therefore, the GLB cogs.ibmom.com.
- A list of the NLB load balancers is returned, and one of those is used by the client. The order and weight of the origin pool members can be adjusted by configuring a global load balancer.
- The client uses the IP address to establish a TCP connection directly to a server through the NLB.
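You can observe this resolution from any client with standard DNS tooling once the domain is delegated to CIS; for example:
# Hedged sketch: resolve the GLB hostname and see the NLB addresses it returns.
dig +short cogs.ibmom.com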
The first step is to use the IBM Cloud Console to create a Cloud Internet Services (CIS) instance if one is not available. A number of pricing plans are available, including a free trial. The provisioning process of a new CIS will explain how to configure your existing DNS registrar (probably outside of IBM) to use the CIS-provided domain name servers. The post uses ibmom.com for the DNS name.
Follow the instructions in the companion GitHub repository to provision the VPC, VSI, NLB and CIS Configuration on your desktop or in IBM Cloud Schematics. After provisioning is complete, the Terraform output will show test curl commands that can be executed to verify load is being balanced across zones via the GLB and across servers via the NLB.
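The generated commands are specific to your deployment, but the idea is simply to hit the GLB hostname repeatedly and watch responses come back from different zones and servers; a sketch, assuming the example domain above and an app that reports its identity over plain HTTP:
# Hedged sketch: repeated requests through the GLB should be spread across zones and servers.
for i in $(seq 1 10); do curl -s http://cogs.ibmom.com/; echo; done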
Visit the IBM Cloud Console Resource list. Find and click on the name of the Internet Services product to open the CIS instance and navigate to the Reliability section of the CIS instance. Check out the Load balancer, origin pools and health checks. Navigate to the VPC Infrastructure and find the VPC, subnets, NLBs, etc. Verify that the CIS GLB is connected to the IP addresses of the VPC NLBs.
Kubernetes and OpenShift
The same architecture can be used for Red Hat OpenShift on IBM Cloud or IBM Cloud Kubernetes Service. The IBM Cloud Kubernetes Service worker nodes replace the servers in the original diagram:
Follow the instructions in the companion GitHub repository to provision the IKS, NLB, CIS Configuration on your desktop or in IBM Cloud Schematics. While the Kubernetes Service cluster is being provisioned, read on to understand the Kubernetes resources configured.
Kubernetes deployments, by default, will spread pods evenly across worker nodes (and zones). The example configuration uses a nodeSelector to place the pods on zone-specific worker nodes (like those in us-south-1) using an IBM Cloud node label, shown in the cut-down manifest below:
kind: Deployment
metadata:
  labels:
    app: cogs
  name: cogs-0
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cogs
  template:
    metadata:
      labels:
        app: cogs
    spec:
      nodeSelector:
        ibm-cloud.kubernetes.io/zone: us-south-1
      containers:
        ... pod code …
A Kubernetes service is configured to expose applications using load balancers for VPC. Each service is configured with a VPC NLB that can be accessed publicly. A service is created for each zone.
The service ingress is configured to keep the load in the worker node that receives the network request using externalTrafficPolicy: Local. The Kubernetes default policy will balance the load across all selected pods in all workers in all zones. The default may be preferred for your workload:
apiVersion: v1
kind: Service
metadata:
  name: myloadbalancer1
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "nlb"
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "public"
    service.kubernetes.io/ibm-load-balancer-cloud-provider-zone: "us-south-1"
spec:
  type: LoadBalancer
  selector:
    app: cogs
  ports:
  - name: http
    protocol: TCP
    port: 80
  externalTrafficPolicy: Local
After the Terraform provision is complete, visit the IBM Cloud Console Resource list. Find and click on the name of the Internet Services product to open the CIS instance and navigate to the Reliability section of the CIS instance. Check out the Load balancer, origin pools and health checks. Note that the origins contain IP addresses of the Kubernetes Service VPC NLBs.
This can be verified using the CLI. The Terraform output has a test_kubectl output that can be used to initialize the Kubernetes kubectl command-line tool. After initialization, get the services to see output like this:
$ kubectl get services
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
kubernetes                 ClusterIP      172.21.0.1       <none>           443/TCP                      7d
load-balancer-us-south-1   LoadBalancer   172.21.60.163    150.240.66.122   80:30656/TCP,443:31659/TCP   121m
load-balancer-us-south-2   LoadBalancer   172.21.185.234   52.116.196.75    80:32152/TCP,443:32683/TCP   121m
The IBM Cloud Internet Services GLB is probing for health checks through the NLB to the server computers. This health check is a path very similar to a client accessing the servers. Under the extremely unlikely event of a zone failure, this architecture will continue to balance load across the remaining zones/workers. Each NLB has a static public IP address that remains fixed for the lifetime of the NLB, so the GLB will not need to be updated.
The TCP traffic in the example is not TLS encrypted. The TLS will need to be managed by the worker applications. IBM Cloud Secrets Manager can be used to automate the distribution of TLS certificates.
If you have feedback, suggestions or questions about this post, please email me or reach out to me on Twitter (@powellquiring).
Powell Quiring, Offering Manager
Ahmed Osman, Global Offering Manager
Arda Gumusalan, Computer Scientist
=======================
New Checklist: Getting Started on IBM Cloud
3 min read
By:
Victoria Scott, Product Manager
Michelle Kaufman, Content Lead and Strategist, IBM Cloud Platform
Dimitri Prosper, Product Manager
Your complete guide to setting up your IBM Cloud account and resources.
A new checklist is now available in the IBM Cloud Docs for Getting started on IBM Cloud. This checklist outlines the tasks for you to complete that will accelerate your journey to cloud and guide you through your account setup and organization of resources.
With links to documentation and best practices that you can follow along the way, this is a one-stop shop to help you prepare to onboard your workloads to IBM Cloud.
Goals for this checklist
- Get up and running quickly on IBM Cloud
- Teach you how to be successful at the following tasks:
- Set up your account
- Secure your cloud resources
- Track costs and billing
- Set up cloud connectivity
- Enable logging and monitoring
- Find additional self-help resources
The self-guided setup checklist is organized into eight sections:
- Explore the platform: Get an overview of IBM Cloud and learn about the options available to manage your resources. If you have little-to-no experience with IBM Cloud, this section is for you.
- Set up accounts and enterprises: Learn about the differences between a stand-alone account and an enterprise. You can also learn how to decide on federating with your Identity Provider. This section is for administrators responsible for creating and configuring an account structure in IBM Cloud.
- Secure your account and resources: After you set up your IBM Cloud account, you’re ready to start planning how you want to organize resources and assign access to identities in your account. These best practices provide you with the basic building blocks to enable successful and secure app development in IBM Cloud. This section walks you through setting up multifactor authentication (MFA) for your account and members. It also discusses configuring activity tracking and your data encryption options. It then guides you through setting up the services that best fit your needs.
- Manage billing and usage: Learn about the steps that you can take to manage and track billing and usage in your account.
- Connect your network to IBM Cloud: The need to create a private connection between a remote network environment and servers on the IBM Cloud private network is a common requirement. Review your options for connecting from your on-premises environments to IBM Cloud and how to create the related services.
- Enable logging and monitoring: Set up logging and monitoring to services running in your cloud account or forward to your security information and event management (SIEM).
- Streamline access management with identities, groups, and policies: Your IBM Cloud account includes many interacting components and systems for resource, user and access management. This section walks you through creating access groups, resource groups and assigning access to resources.
- Get support and other resources: If you experience problems with IBM Cloud, you have several options to get help determining the cause of the problem and finding a solution. In this section, you learn about available support options and supplementary resources.
Get started working through your checklist to get your account set up on IBM Cloud by following our best practices in the documentation.
Questions and feedback
We encourage you to provide your feedback as you follow the tasks from this checklist by opening a doc issue or submitting an edit to the topic by using a pull request. You can find the options for opening an issue or submitting a pull request at the end of the page.
Victoria Scott, Product Manager
Michelle Kaufman, Content Lead and Strategist, IBM Cloud Platform
Dimitri Prosper, Product Manager
=======================
“Automate This” for Smarter Procurement
4 min read
By:
IBM Cloud Team, IBM Cloud
How automation and process mining can improve your procurement process.
How does it feel to work in “the most complex business function”? Many organizations describe procurement that way, calling it a “fundamental enabler” for operational excellence and cost savings (Ernst & Young, 2020).
According to a recent IBM IBV Study — Smart Procurement Made Smarter — there are five areas where smarter workflows can make the procurement function more resilient and better at creating new value. We’re going to focus on one — “real-time, automated, frictionless processes” — because taking the friction out of end-to-end experiences, whether customer- or employee-facing, can produce substantial results (and perhaps take some of the complexity out of the job).
“. . . adding automation and AI into source-to-pay workflows offers a high degree of integration and visibility that, historically, most procurement processes have lacked.” — IBM IBV Study, Smart Procurement Made Smarter, 2022
Automate like Max Mara for smarter procurement
It’s one thing to say "let’s create a frictionless procurement process," but where’s the best place to start to ensure a healthy return on automation investments? We don’t have to look further than the global fashion company, Max Mara, to see the value of first identifying where the friction exists before automating any workflows or processes.
Max Mara targeted its Order-to-Cash (OTC) process because it was affecting the buying experience. “If you imagine a ‘heat map’ of potential process improvements, our reddest zone would be the Order-to-Cash cycle, from order processing to fulfillment, payment and customer service,” explains Max Mara’s Head of Digital Operations. “And during the seasonal spikes in sales we experience (typically in July and December), those red zones get even redder.”
To improve the process, the digital ops team needed to quickly find the problems and then determine the highest-ROI fixes. They needed to be able to pinpoint suboptimal processes at a granular level — say, staffing patterns in a particular warehouse. They also wanted to be able to confidently project how specific process changes, such as fixing a process flow or automating it, would impact key operational metrics.
The team considered traditional process redesign approaches, such as relying on business intelligence (BI) systems and insights from business analysts, process owners and other stakeholders. But they concluded those methods were a partial solution. “BI systems are valuable to point out symptoms of process problems,” explains the Head of Digital Operations, “but they’re not as capable at diagnosing their root causes, which is critical to solving them.”
The digital ops team settled on process mining as a key approach to improving its OTC process and, consequently, the buying experience across its 10 brand-specific websites and more than 2,300 brick-and-mortar stores around the world.
In one noteworthy case, the digital ops team wanted to understand how proposed changes in the processing of customer post-sales support inquiries would affect bottlenecks during so-called “high load,” when volume was reaching seasonal peaks. Using an advanced process mining solution with built-in simulation capabilities, they were able to identify the parts of the process prime for automation. And by simulating the changes — including the automation of key workflows — they were able to demonstrate up to a 90% decrease in customer service resolution times, along with a 46% reduction in the average cost per resolution.
Consider another case where their order lead time in a particular geography was known to be higher, and the suspected root cause was the warehouse pick-and-pack flow. By running relevant data through process mining models, the team was not only able to support their hypothesis, but they were also able to pinpoint unexpected process impacts that were making the problem worse. “In some cases, we knew there was a bottleneck due to process deviations,” says the digital ops lead. “But we were surprised at just how complex the flow was and how few orders in the warehouse followed the ‘happy path’ process flow. That data-driven insight enabled us to design a more appropriate and effective fix for the problem.”
“Making strategic investments in process automation will be critical to delivering the high-quality digital experience customers have come to expect. With [process mining], we’ve gained a powerful tool to identify where automation will have the highest payoff, both for our customers and for our business going forward.” – Head of Digital Operations, Max Mara Fashion Group
Read the full case study for more implementation and ROI details.
Process mining and smarter procurement: What to look forFor Chief Procurement Officers (CPOs) looking to increase operational efficiency, process mining technology can help deliver maximum value at the least possible cost.
To choose the right process mining solution, look for these two capabilities:
- Digital twin of an organization (DTO) is like having a dynamic copy of your processes that can be used to simulate changes and predict impacts. DTO capabilities should include the following:
- Automatic discovery and analysis of the end-to-end procurement process from transaction logs of any IT system.
- Constant monitoring of procurement process performance and compliance by analyzing variants, bottlenecks and deviations with root cause analyses.
- Continuous procurement process optimization through simulations of what-if scenarios with the expected ROI.
- Multilevel process mining enables you to analyze complex processes, like the ones in procurement. You should be able to map several processes — such as procure-to-pay’s different subprocesses (purchasing, ordering, invoicing and payment) — within a single comprehensive model, solving the limitation faced by traditional methodologies.
Procurement professionals have the same objective — find new ways to create cost savings and increase business value. Procurement leaders that take full advantage of process mining will be able to make more informed, cost-effective decisions by knowing where the opportunities are and testing ideas before deployment.
With process mining, CPOs can almost guarantee expected savings. Consider one automotive company’s procure-to-pay value chain, where process mining identified total expected savings of EUR 672,000 against five major challenges in procurement.
Read this paper to learn more about how process mining can help overcome five of the most critical challenges for procurement leaders: maverick buying, deviations, rework, automation enablement and cash discount losses.
IBM Cloud Team, IBM Cloud
=======================
Vision + Plan = Certification
3 min read
By:
Natalie Brooks Powell, GTM Leader, IBM Center for Cloud Training
IBM Cloud certification is for anyone, working anywhere. Just ask Amr Elmestekawi, a freelance computer engineer in Cairo, Egypt.
“Cloud technology will be important for every designer or engineer who works in a technical field,” he says, “because cloud computing means everything is connected to the internet. And once you are connected into the internet, you have a lot of options via the cloud.”
Amr’s story of training and studying on his path to becoming skilled in cloud technologies will be familiar to a lot of people.
Taking the first step to certification
Amr had earned his master's degree in computer engineering and completed training courses in cloud technologies, even winning a scholarship for graduate students to learn about cloud. But he had not earned any cloud certifications until a posting for a job in cloud caught his eye.
“I started my journey in cloud computing the day I found a project asking for certification,” he recalls.
That journey was eye-opening because while Amr had been introduced to cloud computing in his master's program, it was only a very basic introduction.
“Before starting the journey toward certification,” he explains, “I was thinking about cloud technologies as a big data center. But after reading about cloud technologies from different resources and on the IBM website, I began to be familiar with technologies related to cloud computing. I found the field more and more interesting.”
Looking ahead at opportunitiesAmr also learned about opportunities in the cloud. “If I could become a certified cloud technician or cloud engineer,” he says, “it would be a good thing so that I could work in a multinational organization and elevate my career.”
He planned to begin his certification journey as an IBM Cloud Technical Advocate. Amr explains, “It's the first certification on the track of cloud technologies and cloud computing. It's introductory and contains different technologies and useful knowledge for the beginning of your career in cloud.”
But first, there was training and the certification exam.
Studying and preparing for the examAmr studied hard. He read and prepared on his own. He participated in educational programs about cloud, though none led to certification. Then, he discovered IBM’s cloud training program. “It was my first time working with the IBM Center for Cloud Training,” he notes, “and I found it very helpful. I could get the required knowledge for the certification I needed.”
Preparation was a lot of work. But Amr had a plan: “The first day, I began to read about different technologies for cloud computing. The second day, I began to do in-depth reading about what IBM platforms can contain for cloud computing. After that, I began to watch more practical videos about this kind of certification, about this kind of technology, about deployment and about implementing new services. I began to review different questions from other people who completed this certification before so that I could be familiar with the type of questions and not to be surprised during the exam.”
And Amr had a vision. “Whenever I start something new, I think it's like cotton candy — whenever you start, you think that it's huge, but once you bite it, you find it shrinks. It's the same thing that happened to me the first time I dealt with IBM Cloud technology.”
Achieving successOn exam day, Amr was both nervous and curious about what would be on the test. “So, I began to calm down,” he recalls, “I told myself that the test is just one step to become certified. It was a situation like cotton candy again. I had to bite it to make it shrink. The most challenging part of the exam was when there were questions that I didn't expect to see. The most enjoyable part was the moment when I found that I had passed the exam. I was so happy. It was a big victory. I knew that all the time and effort that I spent on studying and searching for information on cloud technology allowed me to take the next step in my career.”
Moving ahead with cloud technologyToday, Amr is still learning and doing. “I spend approximately 50% of my day gaining new knowledge because technology is booming rapidly and at a fast pace, all over the world. And having this cloud training and knowing these emerging technologies enables me to be up to date.”
He has new confidence to keep going and says, “I’ve done it. Now, it’s your turn.”
Learn more about the IBM Center for Cloud Training.
Natalie Brooks Powell, GTM Leader, IBM Center for Cloud Training
=======================
CFOs: “Automate This” for Sustainability and Cost Savings Automation
4 min read
By:
IBM Cloud Team, IBM Cloud
When it comes to sustainability, chief financial officers (CFOs) are in a tough, but influential, position.
According to IBM’s Institute for Business Value 2022 CEO Study on sustainability — “Own Your Impact” — more than 80% of CEOs say sustainability investments will drive better business results in the next five years. Yet more than half of the 3,000 CEOs surveyed ranked “unclear economic benefits” as the biggest blocker to achieving their objectives.
For CFOs, bridging this “intention-action gap” between vision and real-world initiatives is a worthy and winnable challenge, despite how divided C-suite leaders can be on priorities. “An organization can say it cares about air quality and oceans, but the hard step is operationalizing the idea of sustainability at scale across an enterprise,” says Karl Haller, a retail industry expert for IBM Consulting.
With pressure to demonstrate progress from board members, investors, regulators and customers alike, CFOs are uniquely positioned to clear a path for the C-suite to prioritize investments and allocate resources in ways that align with business goals. In doing so, CFOs can become trailblazers for their peers, delivering on sustainability metrics and demonstrating ROI. “When you combine sustainability performance with financial outcome and operational improvement . . . that’s when you switch the mindset,” says Jane Cheung, an IBM global research leader specializing in consumer industries.
But where do you start? Do either of these scenarios sound familiar?
- Your company manages a data center and leaves applications running that are only being used periodically, wasting energy and money. As described in the blog post — “Are Your Data Centers Keeping You from Sustainability?” — imagine driving your car to work, parking it in the parking lot and then leaving it running all day long just because you might step out for lunch at some point. You wouldn’t do that. And yet we do the equivalent of that in our data centers. Data centers account for 1% of the world’s electricity use and are one of the fastest-growing global consumers of electricity. And almost every data center in the world is dramatically overprovisioned.
- Your company manages a public cloud environment or hybrid cloud estate and can’t match application demand with supply in real-time. As described in the blog post – "Mastering Cloud Cost Optimization: The Principles" – it’s the seventh of the month and you, the CFO, received another large cloud bill — higher than the last one and higher than budgeted for. “The truth is that while the promise of cloud is that you only pay for what you use, the reality is that you pay for what you provision,” writes Asena Hertz, VP, Product Marketing at Turbonomic.
There’s a practical first step CFOs can take on the road to greater sustainability and cost savings: get IT to consume less. Reducing (if not eliminating) resource waste in your company’s IT environment – without giving up performance – is the aim. In a world where our applications are our business, maintaining performance is key to maintaining a competitive advantage. Now, it’s also a path to carbon neutrality and green computing. In other words, let application performance drive cloud cost optimization, sustainability and green data centers.
Estimate how much carbon your cloud and data centers produce.
But optimizing cloud and data center resources can be hard, especially as applications and the environments they run on become more complex and distributed. This is where application resource management (ARM) becomes so critical. Companies need to be able to continuously analyze every layer of resource utilization to ensure applications get what they need to perform, when they need it — debunking any misconceptions that overprovisioning assures application performance.
We need to look no further than BBC Studios, J.B. Hunt and the City of Denver for examples of application performance driving operational efficiency. All three organizations used an application resource management solution to reduce resource waste without sacrificing performance.
Beyond efficiency: Sustainability as a profit driverProfit — that’s the goal of sustainability investments that work to reshape critical business functions, from how a company develops products to the types of services it provides. For CFOs, this isn’t hyperbole. The “Own Your Impact” study shows that CEOs who implement “transformational” sustainability strategies achieve higher profit margins than those who don’t — up to 8% more. IBV research even shows purpose-driven consumers are willing to pay a premium on sustainable products — up to 70% more.
But the digital transformation that underlies these opportunities can be a multi-year journey, which often competes with focus on short-term returns. The key, then, is incremental, thoughtful transformation backed by technologies that exponentially increase capabilities.
For CFOs at organizations whose Environmental, Social and Governance (ESG) performance is already compliant with industry and regulatory standards, the goal is to focus on operational improvements, which inherently demonstrate ROI. Technologies like application resource management make organizations more efficient through the orchestration and automation of apps, workloads, resources and infrastructure across platforms.
Learn moreIBM Turbonomic Application Resource Management: Cut infrastructure spend by 33%, reduce data center refresh costs by 75% and get back 30% of your engineering time with smarter resource management.
To learn about more practical approaches to transformational sustainability, read the 2022 CEO Study: "Own Your Impact"
IBM Cloud Team, IBM Cloud
=======================
How AI and the Latest Technologies Can Accelerate Progress on Sustainability Goals Artificial intelligence Automation
3 min read
By:
Jamie Thomas, General Manager, IBM Strategy and Development
Turn sustainability ambition into action.
It’s common knowledge that recycling, reducing waste and buying sustainably sourced products are all ways to promote sustainability. But it’s less well-known that some of today’s latest technologies are being used for the same purpose, at scale.
According to a 2022 UN report, progress in energy efficiency needs to speed up to achieve global climate goals. Energy consumption not only contributes to climate change when the energy is not procured from renewable sources, it is also becoming increasingly expensive to source and consume. Technology companies — which have a hand in designing the infrastructure that powers our global economy — can help.
Hybrid cloud and AI technology can help organizations improve energy efficiencyEnergy efficiency across industries contributes to Sustainable Development Goal 12, which is a UN goal to ensure sustainable consumption and production patterns. Many companies know that now is the time to act to implement sustainable practices, but progress is hindered by a lack of expertise or not knowing where to start. Their challenge is to make it a true business driver while delivering ROI.
Data centers and IT infrastructure can account for a large portion of an organization’s energy consumption. As businesses increasingly run modern applications that draw more power, including emerging technologies like artificial intelligence (AI), reducing energy consumption across the data center becomes even more important.
At IBM, we believe that hybrid cloud and AI technology are critical enablers to meet Environmental, Social and Governance (ESG) targets, both internally and externally. To help clients align sustainability goals to business objectives while complying with increasing regulatory demands, IBM offers a comprehensive portfolio of consulting and technology capabilities, including IBM LinuxONE.
Reducing IT infrastructure energy consumptionIBM LinuxONE can help optimize the data center for energy efficiency. LinuxONE is an IT infrastructure platform that is designed to scale up and scale out, enabling clients to run hundreds of workloads on a single system, reducing the physical space needed in the data center. For example, consolidating Linux workloads on five IBM LinuxONE Emperor 4 systems instead of running them on compared x86 servers under similar conditions can reduce energy consumption by 75%, space by 50% and the CO2e footprint by over 850 metric tons annually. [1]
Further enhancing its energy-saving capabilities, LinuxONE includes the IBM Telum processor, an integrated on-chip AI accelerator that is designed to offer clients the ability to run AI inferencing at scale. This allows for more efficiency, enabling businesses to leverage AI applications while still supporting their sustainability goals.
In support of IBM LinuxONE, IBM Instana Observability provides observability dashboards of key metrics (including environmental data and power consumption data) to understand power consumption, temperature, humidity and heat loads of LinuxONE hardware. Instana software brings the power of data to help clients as they work to meet sustainability targets.
Take a look at this AI for Good webcast for more insight into these solutions.
Learn moreMeeting one of the biggest challenges of our lifetime requires utilizing the latest technologies at scale, some of which I’ve described here. Whether it’s those or others, it takes a village to make it work. Together, let’s turn sustainability ambition into action. Learn more about the innovative technology and consulting solutions designed to help clients reach their sustainability goals with IBM here.
Register for the upcoming session, "Harnessing AI to manage climate risk," to learn more about IBM Environmental Intelligence Suite (EIS), a modernized foundation toolkit that leverages advances in AI and data sciences to help organizations manage risk of the climate crisis.
Watch the replay of the AI for Good Webinar: "Creating sustainable business growth with a smaller footprint"
[1] Compared 5 IBM Machine Type 3931 Max 125 model consisting of three CPC drawers containing 125 configurable cores (CPs, zIIPs, or IFLs) and two I/O drawers to support both network and external storage versus 192 x86 systems with a total of 10364 cores. IBM Machine Type 3931 power consumption was based on inputs to the IBM Machine Type 3931 IBM Power Estimation Tool for a memo configuration. x86 power consumption was based on March 2022 IDC QPI power values for 7 Cascade Lake and 5 Ice Lake server models, with 32 to 112 cores per server. All compared x86 servers were 2 or 4 socket servers. IBM Z and x86 are running 24x7x365 with production and non-production workloads. Savings assumes a Power Usage Effectiveness (PUE) ratio of 1.57 to calculate additional power for data center cooling. PUE is based on Uptime Institute 2021 Global Data Center Survey. CO2e and other equivalencies that are based on the EPA GHG calculator use U.S. National weighted averages. Results may vary based on client-specific usage and location.
Jamie Thomas, General Manager, IBM Strategy and Development
=======================
Meet Business Demands Faster with IBM Intelligent Automation Solutions on AWS Automation Cloud
5 min read
By:
Bill Lobig, VP, Product Management, IBM Automation
Driving innovation and digital transformation with IBM intelligent automation solutions on AWS.
Technology has enabled the world GDP to grow 14x faster than population growth in the last two centuries and has accelerated per capita growth over the last 250 years.
It continues to be a driving force even in today’s economy and digital age, but businesses are faced with multiple challenges, including IT complexities and outages, a skills shortage and declining customer satisfaction. People are working harder and clocking longer hours, but delivering less impact. So, how do we make people more productive and make money at the same time? The answer is automation.
According to IDC, automation provides several benefits in terms of day-to-day efficiencies, where it can boost IT staff efficiency by 34%, bolster developer productivity by 30% and increase overall efficiencies in business process management by 24%. In business operations, automation can result in a 10% revenue growth per organization.
Cloud solutions are also gaining a lot of momentum as more enterprises realize the inherent benefits. Over 93% of enterprises are turning to public cloud platforms like Amazon Web Services (AWS) to drive agility and cloud cost savings. The cloud is ideally suited for scaling infrastructure up and down to meet the variability in demand that is prevalent in today’s increasingly complex business environment.
However, without the right tools in place, you can easily overprovision (with high financial and environmental costs) or under-provision (leaving your customers with a bad experience). To ensure the most cost-effective performance, health and reliability of software delivered on the cloud, you need a modern observability platform that gives you full-stack visibility and understands the underlying cloud-native architecture of modern SaaS applications.
Leverage the power of IBM with the speed and scale of AWSSome of the most critical business processes of the Fortune 500 and top global banks are powered by IBM software. Today, IBM is at the forefront of the automation revolution. IBM offers a variety of intelligent automation solutions that assure these benefits.
And AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers are using AWS to lower costs, become more agile and innovate faster.
That is why IBM is working with AWS to deliver IBM intelligent automation solutions on AWS. Through this strategic collaboration, you’re now empowered to quickly deploy reliable enterprise-grade cloud solutions with built-in AI to drive the greatest outcomes for millions of customers every day.
Keeping business strong and costs down through unprecedented changeDaxko, for example, provides a SaaS platform that allows gyms and fitness studios to manage their centers’ daily operations.
Daxko solutions run in a hybrid cloud environment comprising four data centers and 27 Amazon Web Services (AWS) accounts across production and non-production environments. The company completed nine separate acquisitions to expand its product portfolio and technology to better align with customers’ new needs. With new functionality up and running and a dramatically expanded, heterogeneous IT landscape, Daxko’s biggest challenge was tracking the health and performance of its products.
The Daxko team was used to taking days to configure monitoring tools, but with IBM Instana Observability, they were able to get full visibility in minutes. Although the team already had a monitoring tool, its pricing was complicated, charging Daxko based on the number of end-user visits and the amount of data to be captured. In a cost-sensitive economy, cost savings are paramount.
Having IBM Instana as its observability partner helped Daxko get up and running in 15–20 minutes (instead of days), release features quicker, gain thorough visibility into issues to minimize downtime and continue scaling to meet variability in demand without incurring unexpected costs.
Bringing observability to a containerized environmentDealerware, another software company, delivers a single platform to help dealers manage their rental fleets. They leverage automation through IBM Instana to deal with usage spikes and reduce delivery latency.
Dealerware planned a set of growth initiatives intended to drive up rental and loaner contract volume and quintuple the number of vehicles under fleet management. Since its founding in 2016, Dealerware has been running on the Amazon Web Services (AWS) cloud platform, building its app on Amazon Elastic Compute Cloud (EC2) instances with a monolithic stack. To prepare for the anticipated growth and even greater spikes during peak demand, the engineering team migrated Dealerware’s platform from monolithic applications to a more scalable container-based architecture with Amazon Elastic Kubernetes Service (EKS) clusters.
After moving to a better-performing environment, Dealerware needed to control latency with scalable observability and monitoring. Generally, observability tools collect and display data from the system that teams want to monitor. But data requires meaningful and actionable analysis. The better your analysis capabilities are, the more valuable your investments in observability and monitoring become. That is where Instana shines.
With IBM Instana as its observability partner, Dealerware was able to automatically detect its full AWS stack, with comprehensive monitoring of EKS clusters. The Instana agent does all the heavy lifting without additional configuration, including auto-injection into containers at runtime, rich visualization of application dependencies and performance metrics, and comprehensive mapping of all application dependencies. From a single control pane, Dealerware now has visibility on where issues occur, understands the causes and initiates fixes, allowing it to reduce delivery latency by 98%, from 10 minutes to 10–12 seconds.
Power your digital transformation by using IBM intelligent automation solutions on AWSIBM offers the following software products as Software-as-a-Service (SaaS) on AWS marketplace.
IBM Instana ObservabilityGet fast, precision observability into your AWS technology stack to drive performance and reliability. IBM Instana provides high-precision, one-second metrics with complete, unsampled end-to-end transaction traces for a vast range of AWS software services and applications.
IBM Turbonomic Application Resource ManagementCloud optimization you can continuously automate to prevent performance risk and cost overruns. IBM Turbonomic is a next generation IT infrastructure management software that makes complex resourcing decisions to ensure your applications get the resources they need while minimizing cloud spend and your carbon footprint.
IBM API ConnectConnect applications and data, wherever they reside. IBM API Connect provides a powerful toolkit for integration specialists and a no-code interface with AI-powered assistance to simplify and accelerate integration.
IBM AsperaMove data of any size across any distance. IBM Aspera is a hosted service to quickly and reliably send and share your files and data sets of any size and type across a hybrid cloud environment — up to hundreds of times faster than FTP and HTTP.
IBM Content Services (new offering)Accelerate content management and governance processes while delivering superior customer experiences. Securely access content where and when you need it with IBM Content Services.
IBM App Connect Enterprise SaaS (new offering)Instantly connect applications and data, wherever they reside. With a catalog of pre-built connectors and customizable templates, IBM App Connect Enterprise SaaS allows organizations to rapidly connect applications and build integration flows.
Learn more- How IBM and AWS Are Driving Joint Innovation for Partners and Clients
- IBM intelligent automation solutions on AWS
- Explore solutions on AWS Marketplace
Bill Lobig, VP, Product Management, IBM Automation
=======================
ITOps Teams: “Automate This” to Make Sure Your Applications Don’t Cost You Automation
4 min read
By:
IBM Cloud Team, IBM Cloud
Tim Cronin, Sr. Product Marketing Manager, IBM AIOps
If applications aren’t running effectively, what will it cost you? Savings, customers, reputation?
Pretty much everything is run by an application today. Fully 80% of organizations estimate they have up to 1,000 applications in their portfolio. IT Operations (ITOps) teams are on the front lines of helping to maintain application performance, thereby helping the business avoid unnecessary costs.
Even a one-second delay in page load time (i.e., application latency) results in a 7% loss in conversions and a 16% decrease in customer satisfaction. So, imagine a small website that generates $100,000 a day. That business stands to lose $2.5 million annually because of a one-second delay in loading speed.
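That annual figure follows directly from the 7% conversion-loss statistic. As a quick back-of-envelope check, using plain shell arithmetic and the numbers above:

# 7% of $100,000 in daily revenue, lost every day for a year
echo $(( 100000 * 7 * 365 / 100 ))   # 2555000, roughly $2.5 million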
According to Information Technology Intelligence Consulting, 40% of enterprises said a single hour of downtime could cost between USD 1–5 million — exclusive of any legal fees, fines, or penalties if the downtime was caused by a bad actor or lackluster cyber security efforts. In a Gartner survey, 98% of companies stated the cost of IT downtime ranged from USD 100,000 to USD 540,000 an hour.
If applications aren’t running effectively, it will cost you. But here are three ways automation can help ITOps teams make sure critical business applications run effectively — always.
1. Find and resolve application problems as fast as they happenCompanies are dealing with a lot of new IT complexity as they modernize their approach to platforms, applications, microservices and containers. This complexity makes it harder to see and manage their application environments.
Automate thisTo ensure application performance, ITOps teams should look to automate full-stack observability to get one-second traces across the application environment and use that to collect data in context. Not only can you get heartbeat data about the health of your applications, but you can also see how key interdependencies interact. For the SRE or IT operations engineer, being able to remediate incidents faster and make intelligent changes more quickly after a release can be extraordinarily impactful.
Real resultsRebendo, a developer of performance management solutions, delivers real-time visibility for clients at a one-second granularity.
“Some of our competitors,” notes Michael Kling, CEO, IBM Business Partner Rebendo AB, “offer insight only once every 60 seconds. But that’s not what our users want. When we were building Rebendo Insight, we asked our customers about that level of granularity, and they told us, ‘Why would I use a product that doesn’t see 59 seconds of a minute? That’s a worthless product for me.’”
IBM Instana Observability: Get the context you need to fix problems faster with full-stack observability.
2. Stop over- or under-provisioning your application’s resourcesDoes this sound familiar: “You can flexibly control your costs and how much you’re consuming of the cloud.” That’s something businesses were pitched in the early 2000s. And, for a while, that promise seemed true. But now, decades into the cloud journey, this promise has yet to be fully realized. A lot of organizations don’t know how many resources they need or which resources would guarantee perfect performance, so they overprovision as a sort of insurance policy or under-provision to manage costs. They’re unable to walk that fine line between optimal performance and the appropriate amount of resourcing.
Automate thisTo ensure application performance, ITOps teams should look to automate actions that proactively deliver the most efficient compute, storage and network resources to their applications at every layer of the stack — and do it continuously, in real time, and without human intervention.
Real resultsSulAmerica, a leading insurance company headquartered in Brazil, saw a 70% reduction in tickets, 11% improvement in node density and 24×7 performance assurance using software to continuously make resource decisions.
“Our team’s macro goal is to deliver an application- and SLO-driven hybrid cloud,” says Rafael Noval, Technology Leader of NOC & Web, SulAmérica. “Applications will run wherever it best suits the business, and they will continuously perform and delight our customers.”
Read more about how SulAmerica maintains low application response times as demand fluctuates.
IBM Turbonomic Application Resource Management: Cut cloud spend by 33%, reduce data center refresh costs by 75% and get back 30% of your engineering time with smarter resource management.
3. Proactively prevent incidentsITOps teams do their best to prioritize resources during IT outages and enable quick response to alerts raised across disparate systems. In the traditional incident paradigm, getting things back up and running can be an arbitrary and clunky process involving more team members than necessary. To stay ahead of customer impact and IT risks, ITOps teams know the criticality of being able to recognize potentially anomalous behavior and proactively flag it for engineers.
Automate thisTo ensure application performance, ITOps teams should look to use artificial intelligence (AI) to more proactively identify potential risks or outage warning signs. By pulling out the wisdom trapped in prior resolutions to similar incidents and presenting those to engineers, you can more effectively and quickly resolve new ones.
Real resultsElectrolux, one of the world’s largest appliance manufacturers, reduced its time to resolve IT issues from three weeks to one hour.
“We see about 100,000 events per day. It is so important in this huge ocean to identify exactly the drop of venom that you have to remove to save your life,” said Joska Lot, Global Solution Service Architect: Monitoring and Events Management, Electrolux AB
IBM Cloud Pak for Watson AIOps: Achieve proactive IT operations using an AIOps platform.
In sumAvoid the costs of not effectively running your applications — whether it’s loss of savings, customer revenue or reputation. Deep observability metrics and data analytics tied to a resource optimization tool will provide enough insights to manage and resolve IT problems as they arise, and make sure your apps remain available to clients and customers alike.
IBM Cloud Team, IBM Cloud
Tim Cronin, Sr. Product Marketing Manager, IBM AIOps
=======================
How to Privately Connect to IBM Public Cloud with Maximum Network Traffic Control Networking
4 min read
By:
Sebastian Böhm, IBM Expert Labs - Technical Specialist
IBM public cloud offers several ways to securely connect customer data centers and on-premises infrastructure to IBM public cloud resources.
Some of the most popular offerings include the following:
- VPN connections to IBM Cloud Classic Infrastructure using a virtual or physical network appliance (e.g., Juniper vSRX or Virtual Router Appliance (Vyatta))
- VPN Gateway for IBM Cloud Virtual Private Cloud (VPC)
- Direct, private connection with IBM Cloud Direct Link
While the virtual or physical network appliances reside in IBM Cloud Classic Infrastructure and give full control to the customer regarding their network management, the other two offerings are available as a service and are being managed by IBM with dedicated configuration capabilities. Compared to Classic Infrastructure, Virtual Private Clouds (VPCs) offer next-gen features and get constant hardware updates; therefore, customers tend to use VPC over Classic Infrastructure.
At the same time, they highly value the capabilities of a Classic Infrastructure network appliance. Unfortunately, those network appliances cannot manage a Virtual Private Cloud’s traffic by default. IBM Cloud Direct Link, however, can connect to both VPCs and Classic Infrastructure devices.
Besides the various offerings to connect customer on-site infrastructure to IBM Public Cloud, there is also an additional service called IBM Cloud Transit Gateway that enables customers to interconnect IBM Cloud resources, including VPCs, Classic Infrastructure and even cross-account resources.
With a combination of the three offerings mentioned above, it is possible to create a highly secure IP connection to IBM Cloud VPC and Classic Infrastructure, while still having the maximum traffic and network control. It creates a single point of entry for all workload-related traffic (in a high availability scenario, there are, of course, two points of entry). To set this up, three steps are necessary, which will be explained on a high level below.
Architecture overviewThe following diagram shows the overall configuration, combining Direct Link with Classic Infrastructure, a Transit Gateway and a VPC:
Step 1: Setting up IBM Cloud Direct LinkThe Direct Link builds the underlay network of the overall solution and enables the customer to privately and directly connect to IBM Cloud infrastructure without having to route packets over the public network. As soon as the Direct Link connection has been established and IBM Classic Infrastructure has been attached to Direct Link, the customer can access the IBM Classic Infrastructure’s private network.
IBM Cloud Direct Link automatically announces all attached routes to the counterpart, which is usually an appliance controlled by the customer. For the scenario described in this article, the customer should apply a filter on the counterpart device to only allow the private IPs attached to the network appliance residing in Classic Infrastructure (as shown in the architecture overview).
Step 2: Establishing private connectivity to network appliancesAfter finishing the Direct Link setup, the customer can reach the private endpoints of the network appliances residing in Classic Infrastructure. Those endpoints can then be used to set up a private (not routed over the public network) GRE (Generic Routing Encapsulation) tunnel in combination with BGP (Border Gateway Protocol) to create the overlay network of the solution. BGP is responsible for exchanging the overlay routes between the devices.
Since the network appliance is the gateway being used by Classic Infrastructure devices, the customer is now already able to connect to the Classic Infrastructure devices attached to the network appliance. One more step is necessary to finalize the setup.
Step 3: Connecting an IBM Cloud Transit Gateway with network appliancesAs a final step, the network appliance needs to be connected to a Transit Gateway, which manages the connection to one or more VPCs. First, a prefix filter should be applied to restrict the Classic Infrastructure connection so that only the prefix of the gateway appliance is permitted.
After that, the Transit Gateway feature to connect IBM Cloud Classic Infrastructure devices via GRE tunnel is used. This feature requires manual configuration, both using the Transit Gateway UI and on the virtual gateway appliance. The configuration includes tunnel IPs, gateway IPs and BGP autonomous system numbers. Detailed configuration steps for setting up a Transit Gateway GRE tunnel can be found in the IBM Cloud Docs.
The configuration of the appliance depends on the type of appliance involved. As soon as the connection has been established, the VPC routes attached to the Transit Gateway are automatically advertised to the network appliance. Similarly, the network appliance can advertise its attached routes to the Transit Gateway. The routes announced depend on the configuration of the appliance. With this last step, all configured routes are exchanged between the involved network nodes.
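As a rough illustration of what the appliance side of such a GRE tunnel and BGP session can look like, here is a minimal sketch in Vyatta/VyOS-style configuration syntax. Every IP address, interface name and AS number below is a placeholder, and the exact commands vary by appliance type and software release, so treat the Transit Gateway GRE steps in the IBM Cloud Docs as the authoritative reference:

# GRE tunnel toward the Transit Gateway (all values are placeholders)
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 local-ip 10.100.0.10      # private IP of the Classic gateway appliance
set interfaces tunnel tun0 remote-ip 10.200.0.10     # Transit Gateway gateway IP
set interfaces tunnel tun0 address 172.16.0.2/30     # tunnel (overlay) IP on the appliance side
set interfaces tunnel tun0 mtu 1476                  # leave room for the GRE header

# BGP session across the tunnel to exchange overlay routes
set protocols bgp 64999 neighbor 172.16.0.1 remote-as 64512
commit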
Now it is possible for customers to route all access to public cloud resources via the gateway appliance and control them there.
High availabilityFor production scenarios, it is also possible to deploy this design in a high-availability configuration, as shown in the following figure:
ConclusionThe IBM Cloud Transit Gateway’s GRE feature opens up new possibilities for network design within the IBM Cloud for all customers with strict security requirements. Until now, it was only possible to manage and control network connections between VPCs and on-premises infrastructures to a limited extent. Thanks to the connection between a Transit Gateway and a Classic Infrastructure gateway appliance, customers now have the ability to make fine-grained network configurations and to control and manage any network flows.
For further information, please see the IBM Cloud documentation for IBM Cloud Direct Link, IBM Cloud Transit Gateway and Classic Infrastructure or directly contact an expert from IBM Expert Labs.
Sebastian Böhm, IBM Expert Labs - Technical Specialist
=======================
Enhancing Cyber Resiliency with Safeguarded Copy and Cyber Vault Storage
3 min read
By:
Kurt Messingschlager, Senior Storage Consultant
Exploring IBM’s cyber resiliency solutions for critical data: IBM Safeguarded Copy and IBM Cyber Vault.
IBM has cyber resiliency solutions for both critical and non-critical applications and the corresponding application data.
Understanding cyber resiliency and disaster recoveryCyber resiliency and disaster recovery have many factors in common because cyber resiliency has adopted and incorporated practices from disaster recovery. One practice cyber resiliency shares with disaster recovery is the approach of prioritizing the protection and recovery of applications and application data. The classification or categorization of applications is a well-established best practice that pertains to organizing applications into classes (e.g., Gold, Silver, Bronze) and assigning service levels to those application classes. In this context, we will focus on the “recovery” service levels of those applications.
If a fire or flood rendered an organization’s main data center inoperable and the go-ahead for failover and recovery in the secondary disaster recovery site was given, we know that priority will be given to critical applications. Because of that priority, those applications will re-emerge and be brought back online more quickly than non-critical applications. In this scenario, critical application data had been protected via synchronous or asynchronous replication to the secondary disaster recovery data center. The data resides “online” making the recovery quicker; non-critical application data, however, is protected with a lower-cost solution of backup to tape (or slow disk).
The same is true with cyber resiliency — critical applications are given priority and are protected differently than non-critical apps. Critical apps are protected via snapshots (i.e., point-in-time (PIT) copies) to high-performance media/disk. Non-critical apps are protected via backup to tape or slow disk. Critical apps are given priority in recovery and their data resides online. Consequently, critical apps will re-emerge faster than the non-critical apps.
What is IBM Safeguarded Copy?The IBM Safeguarded Copy solution is an online snapshot-to-Flash-storage solution. As such, it is well-suited for critical applications.
IBM Safeguarded Copy provides the following:
- Added cyber resiliency protection layers: Immutability and the logical isolation of point-in-time (PIT) copies.
- Automation: Scheduled and automated PIT copy creation and expiration.
- Additional protection measures: Stringent role-based access controls (RBAC) that prevent non-privileged users from compromising protected data and restrict what privileged users can do with it.
IBM Cyber Vault builds upon the IBM Safeguarded Copy solution and adds value by further reducing the “time to recovery” via integration and automation. Cyber Vault basically equals Safeguarded Copy plus application integration and recovery automation:
- Application integration: Safeguarded Copy offers crash-consistent copies by default. Cyber Vault offers the option of application-consistent copies by integrating Safeguarded Copy with applications (i.e., it provides the option of quiescing databases before taking a safeguarded copy). Quiescing a database flushes any write data stored in the database server’s cache to disk. Application-consistent copies offer time savings in that they do not require utilizing a database’s redo logs to bring the database to a transaction-consistent state.
- Recovery automation: Cyber Vault adds automation to the data validation and recovery processes. Manual processes are automated via custom scripts. For example, custom scripting can automate locating the most current copy, mounting the copy to a host in a test environment and testing the copy to ensure it is a valid copy (free of errors). If the copy is valid, the script can then restore the copy data back to the production host (see the sketch after this list).
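As a rough illustration of the kind of scripting involved, the following sketch strings those recovery-validation steps together. Every command shown (list_safeguarded_copies, mount_copy, run_validation, restore_copy) is a hypothetical placeholder for platform- and application-specific tooling, not an actual IBM utility; the point is only to show the shape of the automation:

#!/bin/sh
# Hypothetical recovery-validation flow for a Cyber Vault environment.
# All commands below are placeholders for storage- and application-specific tools.

copy_id=$(list_safeguarded_copies --latest)            # 1. Locate the most current safeguarded copy
mount_copy "$copy_id" --host validation-host           # 2. Mount it to a host in the test environment

if run_validation --host validation-host; then         # 3. Check that the copy is free of errors
    restore_copy "$copy_id" --target production-host   # 4. If valid, restore the data to production
else
    echo "Copy $copy_id failed validation; trying an earlier copy" >&2
fi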
IBM offers Cyber Vault for a variety of host-to-storage environments, including Mainframe, IBM i, Open Systems (Windows, UNIX, VMware), FlashSystem and DS8000 storage platforms.
For more information regarding the IBM Safeguarded Copy solution or IBM Cyber Vault service offerings, contact IBM Technology Services.
Kurt Messingschlager, Senior Storage Consultant
=======================
What is the Procure-to-Pay Workflow? Automation
6 min read
By:
IBM Cloud Education, IBM Cloud Education
Discover how procure-to-pay process automation and other next-gen technologies are giving organizations a competitive advantage.
Procurement plays a major role in the success of a business. Managing the flow of materials and services, forecasting costs and negotiating the best prices impacts nearly every aspect of an organization, from manufacturing to customer satisfaction. The procure-to-pay process creates a way for organizations to document this complex workflow, from the point of need through the final payment. Its aim is to integrate purchasing, supply chain management and accounts-payable functions seamlessly.
Because the procure-to-pay cycle is so complex and touches so many parts of the business, it is prone to backlogs and inefficiencies. Businesses need procure-to-pay software that can identify bottlenecks and more efficient workflows, as well as check all the procurement boxes. That includes using real-time analytics and flexible decision-making to optimize the procure-to-pay process, making businesses more agile in an era of unstable supply chains.
In this post, we’ll take a closer look at the procure-to-pay system and how cloud-based procure-to-pay solutions improve the bottom line and increase the speed of business.
What is the procure-to-pay process flow?Procure-to-pay — also known as purchase-to-pay or P2P — is the cycle of procuring and accounting for the goods or services needed to run a business in a timely manner and for a reasonable price.
A P2P solution seeks to document the procurement process and provide a system for accountability so that organizations can improve purchasing decisions and realize cost savings, such as early payment discounts.
Stages of the P2P processDifferent organizations take different approaches to the procurement process, based on cost, availability, sustainability and a wide range of other factors. Each organization will design its own procurement route based on its business strategy. Here is an example of a purchasing process commonly found within an enterprise:
Source-to-pay processOften confused with procure-to-pay, the source-to-pay process focuses on the initial stages of identifying a need and selecting the right goods or services to meet that need. These steps include the following:
- Identifying a need: Stakeholders review business needs and determine what goods or services are required, if they must be fulfilled by an outside vendor and which departments will benefit.
- Selecting goods and services: A formal purchase requisition is made that identifies a product, service and vendor the manager or department would like to use. This kicks off the finance team’s vetting process for placing the correct order, selecting the best vendor and investigating potential cost discounts.
- Purchase order: Once the details of the purchase request are reviewed and a contract with the vendor is finalized, the finance team issues a purchase order approval and an order is placed with the vendor for the goods and services.
- Receiving: Goods and services are received, reviewed to ensure the right order was sent and integrated into the requesting department’s workflow, such as onboarding software or sending components to the correct manufacturing department. A goods receipt, which certifies the order was received as ordered, is sent to the accounts payable team.
- Invoice processing and payment: The invoice is checked against the order received; then, payment is made to the vendor. This can be done with invoice matching, such as two-way matching, which checks the vendor’s invoice against the details of the purchase order, or three-way matching, which compares the details of the purchase order, the invoice and the delivery receipt to ensure they all match before the vendor payment is made.
A procurement solution that integrates the procurement department, accounting system and organization workflows offers many advantages:
- Better supplier relationships: Business is built on a foundation of good relationships. Being a great customer helps ensure products are high quality and delivered on time, and it increases the chance a supplier will go above and beyond during times of need. Having a system that not only ensures invoices are paid on time, but also offers visibility of the invoice’s status, helps keep a supplier happy.
- Visibility: Internal control and visibility over the end-to-end P2P cycle gives organizations full insight into cash-flow and financial commitments. When you’re capturing records of all transactions, they are easier to track. Plus, the data can reveal optimization opportunities.
- Fraud prevention: While having great business relationships is helpful, it can also open the door to favoritism and fraud. A P2P system that includes strict invoice matching and multiple points of review protects against fraud, such as granting a contract to an unqualified vendor personally connected to a buyer or making purchases without adhering to the agreed-upon price.
- Efficiency: Human error occurs when a system is overly complicated or is siloed within separate departments. Centralization of the procurement, supply chain and accounts payable processes helps organizations identify ways to improve the workflow and pay invoices faster.
- Cost savings: The P2P process helps organizations identify and foster good relationships with preferred suppliers and ultimately negotiate the best prices. Adding P2P automation and software solutions saves time and improves spend management. This enables better forecasting to prevent a spot buy — that is, making an immediate purchase to fill a need — when production demand increases.
- Speed: E-procurement solutions and automation help companies move faster and more quickly respond to disruptions in the supply chain. Streamlining the procurement process also saves time, frees up resources and rapidly enables new supplier approvals.
- Predictive modeling: Data analytics, process mining and other digital tools can identify areas of potential concern and opportunities for further process optimization. Emerging platforms are modeling changes before they are made to ensure no unforeseen consequences occur.
IBM has identified five main challenges to the procurement process:
- Maverick buying: Spot buying, poor contract management and unauthorized purchases make goods and services more expensive. Prices are higher when purchases are made at a lower volume or during peak demand.
- Deviations: Deviations are an expected component of business processes, such as unanticipated fluctuations in economic markets, changes in technology and spikes or lulls in customer demand. While deviations are a challenge that can increase costs, reviewing data on these deviations can also reveal where the P2P cycle may need to change.
- Reworks: Manual and repetitive tasks within the P2P process are prone to errors and ultimately slow down a workflow. Reworks are time-consuming, pulling the procurement team away from more important tasks.
- Enabling automation: Organizations may be missing an opportunity for efficiency and cost savings with legacy or semi-manual P2P systems that do not enable automation.
- Cash discount losses: Early payment programs offer negotiated cash discounts with suppliers that also support the organization’s supply chain. If payments are not managed in a timely manner and deadlines are not met, organizations miss out on important cash discounts and risk losing the supplier’s confidence.
Despite the shift toward cloud solutions, there are organizations today that still use a manual or semi-manual process for procurement and accounts payable. Many others use procurement software integrated into an enterprise resource planning (ERP) system or accounts payable system. This may get the job done, but an ERP system can lack deeper insights into key performance indicators (KPIs).
A cloud-based P2P software solution with analytics and process mining capabilities can improve compliance and control, providing deeper insights into global spend and process inefficiencies. The following are a few ways it can improve the process:
- Automation: Automatically send purchase orders, validate payments and trigger payments for faster invoice turnaround.
- Track KPIs: KPIs — such as lead time, average invoice approval time and average cost to process an invoice — offer insights into fine-tuning operations and keeping teams accountable.
- Deeper insights: Using process mining and data analytics, a P2P system can help organizations make informed decisions at a low cost, such as determining which supplier discounts deliver real value or identifying common bottlenecks in the approval process.
- Root cause analysis: When a problem consistently arises, process mining within P2P systems can dig deeper into the root causes and reveal where inefficiencies lie.
- Process optimization: Making process changes takes time and presents some uncertainty. Organizations may be hesitant to make changes because it may cause unforeseen problems elsewhere. An advanced software solution can conduct process simulations to determine the best path forward and safeguard against the unexpected consequences of process changes.
All procurement professionals have the same objective: find new ways to create cost savings and increase business value. Procurement leaders that take full advantage of intelligent automation solutions like process mining will be able to make more informed, cost-effective decisions by knowing where the opportunities are and testing ideas before implementation (and pretty much guarantee expected savings).
Read this paper to learn more about how process mining can help overcome five of the most critical challenges for procurement leaders: maverick buying, deviations, rework, automation enablement and cash discount losses.
IBM Cloud Education, IBM Cloud Education
=======================
Migrating from IBM Cloud Certificate Manager to IBM Cloud Secrets Manager for ALB Users Cloud
3 min read
By:
Kanako Harada, IBM Cloud Technical Account Manager
Step-by-step instructions for ALB users who need to migrate from Certificate Manager to Secrets Manager.
IBM Cloud Certificate Manager reached End of Marketing on September 30, 2022, and will reach End of Support on December 31, 2022. This means that clients who currently use Certificate Manager need to migrate to IBM Cloud Secrets Manager.
Certificates can be migrated manually or through a script, and once migrated, your certificates in Secrets Manager remain accessible from services in IBM Cloud. This blog post covers the simple use case of an ALB user who currently uses Certificate Manager.
StepsWe will cover the following:
- How to manually migrate the certificates for ALB from Certificate Manager to Secrets Manager
- How to assign access rights to Secrets Manager from ALB
- How to import certificates from Secrets Manager to ALB
First, you’ll need to download the certificates from Certificate Manager.
Required action within Certificate Manager- Navigate to the hamburger icon at the top left to display the Resource list.
- Open Security and search for the existing Certificate Manager that you want to migrate. Click on it to open.
- Open Your certificates and right-click on the three-dots icon at the right end of the certificate that you want to migrate.
- Select Download Certificate from the displayed drop-down list.
- A file will be downloaded in ZIP format, including the certificate, secret key and ICA (Intermediate Certificates).
- Extract it to a folder that you can find easily.
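Before importing, it can be worth sanity-checking the extracted files with openssl. This is an optional step not in the original instructions, and the file names below are assumptions about what the extracted ZIP contains, so adjust them to match your download:

# Show the certificate's subject and validity window
openssl x509 -in certificate.pem -noout -subject -dates

# Confirm the certificate and the secret (private) key belong together; for RSA keys, the two digests must match
openssl x509 -in certificate.pem -noout -modulus | openssl md5
openssl rsa -in private_key.pem -noout -modulus | openssl md5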
Import the certificate to Secrets Manager- Navigate to the hamburger icon at the top left to display the Resource list.
- Open Security and search for the existing Secrets Manager that you want to migrate. Click on it.
- Click on Secrets in the left pane and then click the Add button at the top right of the list of secrets.
- Select TLS certificate on the first page of Add Certificate.
- Select Import certificate on the next page.
- Scroll down the page and enter a name for the certificate that you want to import.
- Add a .pem file to the Certificate section. You can get the .pem file by extracting the ZIP file downloaded in the previous section.
- Add a secret key file to the Secret key (optional) section from the extracted folder.
- Add an ICA file to the ICA (optional) section from the extracted folder.
- Click Create at the bottom right of the page.
- Now you have finished importing the certificate to Secrets Manager.
The next step is to give sufficient access rights to ALB so that it can use Secrets Manager. This is done through IAM.
Preparation for giving access rights- Navigate to Manage > Access (IAM) on the portal to open the IAM page.
- Select Authorizations from the left pane to open the Manage Authorizations page.
- Click Create at the top right.
- Click Manage Authorizations to create a new Authorization.
- Select This Account for Source account.
- Select VPC Infrastructure Service for Source Service.
- Select Resources based on selected attributes for How do you want to scope the access?
- Check Resource type.
- Select Load Balancer for VPC for Resource type.
- Select Secrets Manager for Target service.
- Select All resources for How do you want to scope the access?
- Select Writer for Service access.
- Click Authorize.
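For teams that prefer the command line, the same service-to-service authorization can usually be created with the IAM CLI. The command below is a sketch only: the source service name (is), the resource type (load-balancer) and the flag names are assumptions based on IAM CLI conventions, so confirm them with ibmcloud iam authorization-policy-create --help before relying on it.

# Authorize VPC load balancers (source) to use Secrets Manager (target) with the Writer role
ibmcloud iam authorization-policy-create is secrets-manager Writer --source-resource-type load-balancer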
The next step is to import certificates from Secrets Manager to ALB.
Note: If you encounter an issue in this step, you can revert to the configuration that existed before importing the certificate from Certificate Manager. To restore the original status, select Certificate Manager as the certificate source and import the certificate again.
Import the certificate for ALB- Click the icon shaped like four vertical lines in the top left of the portal.
- Select VPC Infrastructure in the left pane and select Load Balancers.
- Select your load balancer from the list in Load balancers for VPC and select the correct value for Region in the pop-up of the list.
- Click the Front-end listeners tab on the detailed page of load balancers.
- Select Edit from the drop-down list.
- Select Secrets Manager for Certificate source.
- Select your Secrets Manager instance for Secrets Manager.
- Select your imported certificate for SSL Certificate.
- Click Save to import your selected certificate.
This blog post has provided an overview of manual migration from IBM Cloud Certificate Manager to IBM Cloud Secrets Manager and the steps for letting ALB use certificates stored in Secrets Manager.
If you would like to complete the migration by using a script, please check out the links listed below:
- Migrating certificates from Certificate Manager
- How to Migrate Certificates from IBM Certificate Manager to IBM Cloud Secrets Manager
Kanako Harada
IBM Cloud Technical Account Manager
=======================
IBM Db2 + Amazon Web Services: Better Together Database
5 min read
By:
Ashley Bassman, Sr Product Marketing Manager
Miran Badzak, Program Director, Databases
The database built to run the world’s mission-critical workloads is on AWS.
In May 2022, IBM officially announced its strategic partnership with Amazon Web Services to deliver IBM SaaS products in the AWS marketplace. IBM's SaaS products on AWS are designed to provide customers with the availability, elastic scaling, governance and security required for their mission-critical workloads, fully managed in the cloud. The IBM and AWS partnership gives customers the opportunity to quickly get started with IBM SaaS products, integrated into the AWS services and experience.
“The IBM and AWS partnership allows our joint customers to accelerate their data modernization strategy in the cloud by combining the mission-critical reliability and performance of our databases and AWS' cloud infrastructure,” said Edward Calvesbert, Executive Director - Product Management, IBM Data Management. “Through our multi-year agreement, IBM’s entire databases portfolio will be available to run as software or SaaS on AWS. For existing IBM Db2 and Netezza data warehouse customers, migrating to a fully managed SaaS deployment on AWS has never been easier, with risk-free, frictionless upgrades.”
Why Db2 on AWS?Designed by the world’s leading database experts, IBM Db2 empowers developers, data engineers, DBAs and enterprise architects to run low-latency transactions and real-time analytics for the most demanding workloads. IBM Db2 accelerates time-to-value through end-to-end management of transactional, operational and analytical data across any cloud. Whether you need faster customer insights powered by in-memory processing or the ability to run cloud-native apps, Db2 is built to enable faster, data-driven decisions and drive innovation within your organization.
With over 30 years of expertise and innovation, IBM has consistently evolved Db2 to support ever-changing workload demands. Through our partnership with AWS, we will continue to deliver the tested, resilient and scalable cloud-first Db2 database for transactional and analytical workloads, providing the extreme availability, built-in refined security, effortless scalability and intelligent automation for systems that run the world.
What Db2 offerings are available on AWS? IBM Db2 Database for low-latency, mission-critical transactional workloadsIBM Db2 Database — our cloud-native relational database — is built on IBM Db2’s decades of innovation in bringing data governance, data security, low-latency transactions and continuous availability to your mission-critical applications, allowing operations to run faster and more efficiently. It provides a single place for DBAs, enterprise architects and developers to keep current applications running, store and query anything, and simplify the development of new applications. No matter the volume or complexity of transactions, make your applications secure, highly performant and resilient with Db2 on AWS.
Leverage your existing investment in Db2 by bringing your license directly to AWS with these deployment offerings:
- Db2 on AWS IaaS: For customers who prefer to have full control over the deployment of their high-performance transactional database, Db2 is offered on AWS IaaS.
- Choose between a single-node deployment (ideal for dev/test or light workloads that do not require failure redundancy) or HA/DR deployments that provide enterprise-class resiliency for your mission-critical workloads.
- Configure and customize your own high-availability (HA), disaster-recovery (DR), backup and restore plan and posture.
- Learn more about Disaster Recovery and High Availability approaches for Db2 on AWS IaaS.
- Db2 pureScale on AWS Marketplace: Mission-critical workloads require continuous availability at scale. With best-in-class failure detection and recovery, eliminate unplanned downtime while achieving high performance with IBM® Db2® pureScale® on AWS. pureScale® leverages the IBM® Db2® parallel sysplex architecture, providing continuous availability that runs anywhere, whenever you need it:
- Build and deploy faster — you can shorten cluster readiness from week(s)/month(s) to minutes with a one-click automated deployment in the cloud.
- Always-on to ensure continuity, security and performance to keep applications and daily operations running smoothly.
- Reduce costs by achieving optimal resource utilization at all times, which helps to keep application response times low while reducing the risk and cost of application changes. Application transparency allows a seamless transition from clusters in traditional data centers.
- Resiliency allows you to reliably process rapidly changing, diverse and unpredictable workloads in the cloud.
- Get started with IBM® Db2® pureScale® on AWS Marketplace.
- Db2 Database on AWS ROSA/EKS: For customers who prefer to have full control over the deployment of their high-performance transactional database, Db2 Database is also available on AWS EKS/ROSA:
- Use our detailed reference architecture to deploy the Db2u container for warehousing workloads on either a fully managed OpenShift service (AWS ROSA) or fully managed Kubernetes service (AWS EKS).
- Easily manage, monitor and maintain a fleet of database instances — all uniformly managed with the Db2u container operator.
- Follow the reference architecture as provided or customize your deployment by choosing an appropriate set of EC2 nodes for your specific workload or performance requirements.
- Learn more about how to get started with Db2 Database on AWS ROSA and AWS EKS.
With the power of IBM Db2 Warehouse and AWS, we are making analytics secure, collaborative and real-time in the cloud. Our cloud-native data warehouse is built on Db2’s decades of innovation in data governance and security, responsible data sharing, advanced in-memory processing and massively parallel scale. Regardless of whether your data is unstructured or resides in your lake, you can only power faster decision-making and innovation across your organization when analytical data is unified, accessible and scalable. Db2 Warehouse deployments on AWS include the following:
- Db2 Warehouse on AWS ROSA/EKS: For customers who prefer to have full control over the deployment of their data warehouse:
- Use our detailed reference architecture to deploy the Db2u container for warehousing workloads on either fully managed OpenShift service (AWS ROSA) or a fully managed Kubernetes service (AWS EKS).
- Follow the reference architecture as provided or customize your deployment by choosing an appropriate set of EC2 nodes for your specific workload or performance requirements.
- As a self-managed offering, customers are responsible for instantiation, scaling, backup/restore and ongoing management.
- Learn more about how to get started with Db2 Warehouse on AWS ROSA and AWS EKS.
- Db2 Warehouse Fully Managed: Simplify database administration and maintenance with Db2 Warehouse deployed as a fully managed service:
- Automation of daily tasks, including monitoring, uptime checks and backups.
- Manage costs at scale and enable price-performance and cost predictability of your database with elastic scale and a cloud-native architecture based on object storage.
- Independent scaling of storage and compute — scale (burst) on compute (up to 576 cores or 1,152 vCPUs) and increase storage (up to 144TB, not accounting for compression).
- Self-service backup and restore allows for up to seven snapshot backups (included), with an option for unlimited backups to Amazon S3. Restore with the click of a button.
- For existing Db2 Warehouse customers, migrating to a fully managed SaaS deployment on AWS has never been easier, with risk-free, frictionless upgrades.
- Learn more about Db2 Warehouse or try out Db2 Warehouse on Cloud for free today.
This is only the beginning for IBM Db2 offerings on AWS. We look forward to bringing the entire IBM Db2 and IBM database portfolio onto AWS through our partnership. Whether you decide to deploy on AWS, hybrid cloud or on-premises, Db2 is essential to your hybrid cloud, AI and data fabric strategy.
Join us Wednesday, December 7, 2022 for a peek into the latest cloud-first innovations from IBM Db2 and the AWS partnership. Register for the webcast today!
Ashley BassmanSr Product Marketing Manager
Miran BadzakProgram Director, Databases
=======================
“Automate This” to Compete with Techfins Automation
4 min read
By:
IBM Cloud Team, IBM Cloud
Fintechs like PayPal and Robinhood have already disrupted the banking industry — now, add “techfins” to the cocktail of disruption.
Techfins are tech companies that have extended their business into ancillary financial services that target the less regulated, more profitable segments of the financial services market. For example, companies like WhatsApp and Shopify are into payments, Uber is offering auto loans and Singapore’s Grab — which started as a ride-hailing app — is now the country’s biggest mobile wallet.
For techfins, the experience is the product. They’re enmeshed in customers’ lives, responding to needs and desires that are bigger — and more exciting — than financial services. “People don’t wake up in the morning and say, ‘I want a mortgage,’” says Shanker Ramamurthy, IBM’s Global Managing Partner for banking. “They say, ‘I want an apartment.’” Banks like Singapore’s DBS are taking cues from techfins by integrating brokers and property listings into their websites and apps, so that “by the time the consumer gets to buying a home, the bank has become the default place for a mortgage.”
An inflection point for banksPaolo Sironi, IBM IBV Global Research Leader for Banking and Financial Markets, says the current challenges have created an inflection point for banks, calling for novel responses. They need to figure out how the good DNA of traditional banking — with its advantageous core practices and capabilities, such as governance and security — can “recombine and adapt to the new conditions.” “It’s not enough to digitize the existing business models,” he says. “Banks need to adjust the business model to the digitalization of business services.”
Simply, banks need to think like techfins. That starts with finding new ways to use technology to engage with customers. “We absolutely need to invert the pyramid,” Ramamurthy says, by using automation, hybrid cloud technologies and machine learning to digitally transform the back office, allowing banks to shift more focus and resources toward engaging their customers.
The following are two examples of financial services companies that are using IT automation to do just that — create a more competitive experience for their customers.
Automate like Rabobank to assure application performanceRabobank is a cooperative bank headquartered in the Netherlands, offering private and commercial customers a variety of financial products. With a mission to create a future-proof society, Rabobank’s highest goals – creating wealth in the Netherlands and helping resolve food insecurity worldwide – depend on delivering an exceptional end-user experience.
But despite having a high-performing team, Rabobank IT couldn’t maintain target application response times manually. They also relied on different monitoring tools and couldn’t be certain of the impact of a resourcing decision before it was implemented. Effective resource allocation was beyond human capacity given the complexity of its environment.
To address these problems, they implemented a full-stack visibility and automation solution, enabling them to do the following:
- Proactively prevent application delays and ensure any resourcing changes don’t simply move the bottleneck to another layer.
- Consolidate workloads onto fewer machines without adversely affecting performance — hardware cost avoidance alone was in excess of EUR 4 million.
- Break down silos between application owners and the infrastructure operations team to enable DevOps.
This more performant environment improved application response times and freed up the team to focus on innovation.
“ … full-stack visibility has not only helped us achieve a 15% – 23% hardware reduction, it has also allowed us to enhance our customer experience by reducing our time to market and improving application response time.” — Colin Chatelier, Manager of Storage and Compute, Rabobank
Read the full Rabobank case study for more implementation and ROI detail.
Automate like Enento to build and deploy software fasterEnento, based in Helsinki, is a leading provider of digital business and consumer information services in the Nordics. “Our solutions are designed to enable decisions that move money. Many banks are dependent on our services for making credit decisions. If our service is down, consumers may not receive their credit decisions, which have a real-life impact. So, maintaining service quality is highly critical for us,” explains Eero Arvonen, Strategic Architect at Enento.
To ensure service quality, Enento needed a tool that could monitor all its applications in one place — one that could enable fast identification of bugs, help lower existing latency and provide real-time visibility into every service request (with no sampling).
To support their goal of maintaining over 99.99% availability, Enento implemented an observability solution that helps it meet and exceed SLAs and deliver a reliable customer experience. Teams now have the visibility to make quick code changes to meet the evolving needs of their customers and the rapidly evolving fintech (and techfin) industry.
“For our customers, this means better service. We are now able to produce new services at a faster schedule,” — Jenni Huovinen, System Architect at Enento
Read the full Enento case study for more implementation and ROI detail.
Automation drives digital transformationDigital transformation is key to enabling banks to create the kind of customer experiences that techfins provide. The digital transformation that underlies these opportunities is a “multi-year journey,” Ramamurthy says. That can be difficult when the focus is on short-horizon returns. The key, he says, is incremental, thoughtful transformation backed by technologies that exponentially increase capabilities, like application resource management and observability. With this approach, he says banks can “minimize the risk, defray the cost and get the ecosystem to move forward.”
Learn more about IBM Turbonomic Application Resource Management and IBM Instana Observability.
Moving forward – sustainability at the centerBanks need to innovate, and they need to do it in a way that drives value for consumers. The techfin Shopify recently launched the Shopify Sustainability Fund, which invests in companies that remove carbon from the environment. It’s a strategic move from a branding perspective: according to the IBM Institute for Business Value’s 2022 CEO Study, “Own Your Impact,” consumer willingness to support purposeful brands (and pay a premium for it) has deepened. But the scale of Shopify’s fund — $5 million annually — doesn’t compare to what traditional banks can do with their balance sheets. BBVA made headlines by being named Europe’s most sustainable bank by S&P, largely thanks to its commitment to sustainable financing, which is now at €300 billion.
Ramamurthy points out that banks have the opportunity to refocus many core capabilities — capabilities that techfins don’t have, such as regulatory reporting — on sustainability. These can be turned into new fee-for-service offerings that help clients reach their own sustainability goals and create new top-line revenue streams.
To learn more about practical approaches to transformational sustainability, read the 2022 CEO Study "Own Your Impact."
IBM Cloud TeamIBM Cloud
=======================
Enhanced Ingress Status for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud Clusters Cloud Compute
5 min read
By:
Attila Szűcs, Software Engineer, IBM Cloud Kubernetes Service
Balázs Szekeres, Software Engineer, IBM Cloud Kubernetes Service
On 15 November 2022, IBM introduced the enhanced Ingress Status for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud clusters.
The enhanced Ingress Status provides granular status reports for your Ingress components, including ALBs, subdomains and secrets.
Additionally, a set of new step-by-step troubleshooting guides is available in our documentation that will help narrow down and resolve issues reported by the Ingress Status. To make Ingress Status more flexible and suitable for all of your use cases, we also introduced new configuration options.
Where can I see the status report?You can use the ibmcloud ks ingress status-report get
CLI command to see the Ingress Status of your cluster. We grouped the reports to provide a clear and readable view of your resources:
➜ ~ ibmcloud ks ingress status-report get -c example
OK
Ingress Status:   warning
Message:          Some Ingress components are in warning state check `ibmcloud ks ingress status-report get` command.

Cluster                        Status
ingress-controller-configmap   healthy
alb-healthcheck-ingress        The ALB health ingress resource is not found on the cluster (ERRAHINF). See: https://ibm.biz/ts-ingress-errahinf
                               The ALB health service is not found on the cluster (ERRAHSNF). See: https://ibm.biz/ts-ingress-errahsnf

Subdomain                                                   Status
example-subdomain-0000.region.containers.appdomain.cloud   healthy
example-subdomain-0001.region.containers.appdomain.cloud   The subdomain has DNS resolution issues (ERRDRISS). See: https://ibm.biz/ts-ingress-errdriss
                                                            The subdomain has TLS secret issues (ERRDSISS). See: https://ibm.biz/ts-ingress-errdsiss

ALB                     Status
public-crexample-alb1   The ALB is unable to respond to health requests (ERRAHCF). See: https://ibm.biz/ts-ingress-errahcf
public-crexample-alb2   One or more ALB pod is not in running state (ERRADRUH). See: https://ibm.biz/ts-ingress-erradruh
                        The ALB is unable to respond to health requests (ERRAHCF). See: https://ibm.biz/ts-ingress-errahcf

Secret             Namespace        Status
example-secret-1   ibm-cert-store   healthy
example-secret-1   kube-system      healthy
The first group at the top is the overall state of the Ingress Status. We added a few new states compared to the previous Ingress states. Now the possible states are the following:
- healthy: All Ingress components are looking good, or you chose to ignore the reported problems.
- warning: Some Ingress components have issues.
- critical: Some Ingress components are not functioning at all.
- unsupported: The cluster runs a version that is not supported for Ingress Status.
- disabled: Ingress Status reporting is disabled for the corresponding cluster.
We provide reports on more Ingress-related general components than before. This includes the Ingress operator in the case of Red Hat OpenShift clusters and, for Kubernetes clusters, the ALB health checker and configuration resources.
Now we have reports on the subdomains that are registered for the cluster. These might include warnings about DNS registration issues, IP or hostname mismatch problems and many more.
We improved our Ingress report by breaking it down to each managed Ingress Controller (ALB or router) found on the cluster. The reports now can contain warnings like the following:
- The Ingress Controller is not running or has high availability issues.
- The Ingress Controller has an invalid configuration.
- The Ingress Controller does not respond to HTTPS requests.
We also added the managed secrets-related reports to this output. The following are a few of the warnings you can see:
- The managed secrets will expire soon or have already expired.
- The cluster has communication problems with the IBM Cloud Secrets Manager instances.
- The cluster has secret synchronization issues.
Each warning that Ingress Status might report has a step-by-step troubleshooting guide. The Ingress Status printout gives a short description of the problem that was found, and next to the description is a URL that takes you to the appropriate troubleshooting guide to solve the issue.
Each troubleshooting page has a detailed description of What’s happening, Why it’s happening and How to fix it. The Why it’s happening section gives more information about the reported issue or misconfiguration. By following the steps of How to fix it, you will be able to resolve the problem or misconfiguration of your Ingress components and make your Ingress Status healthy
again.
Once you have executed the troubleshooting steps, you might need to wait 10-15 minutes for the Ingress Status to be updated and to reflect the current state of your cluster.
The troubleshooting guides are also available from the Checking the status of Ingress components documentation page.
Report configurationThe Ingress Status report gives a detailed list of all Ingress-related components and their health state. Although it is always a good idea to fix the reported errors for the best experience with our managed solution, there can be scenarios where you would like to ignore specific warnings. The warnings might not be relevant to your business needs, so we made it possible to configure the Ingress Status.
Ignoring Ingress Status errorsTo configure your status report and choose which warnings you want to ignore for a cluster, use the ibmcloud ks ingress status-report ignored-errors
CLI commands. You can find all warning codes in our documentation:
➜ ~ ibmcloud ks ingress status-report ignored-errors --help
NAME:
  ibmcloud ks ingress status-report ignored-errors - View and configure ignored warnings for a cluster.
USAGE:
  ibmcloud ks ingress status-report ignored-errors command [arguments...] [command options]
COMMANDS:
  add   Add warnings to be ignored by Ingress status for a cluster.
  ls    List warnings that are currently ignored by Ingress status for a cluster.
  rm    Remove warnings that are currently ignored by Ingress status for a cluster.
To see the list of ignored errors, run ibmcloud ks ingress status-report ignored-errors ls. To add errors to the list to be ignored, run ibmcloud ks ingress status-report ignored-errors add. To remove an error from the list so that it is no longer ignored, run ibmcloud ks ingress status-report ignored-errors rm. You can specify multiple errors by using the --code flag multiple times with different codes.
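For example, a hypothetical sequence for a cluster named example (reusing the -c cluster flag from the earlier status example; the warning codes are placeholders taken from the sample output above) might look like this:
ibmcloud ks ingress status-report ignored-errors add -c example --code ERRAHCF --code ERRDRISS
ibmcloud ks ingress status-report ignored-errors ls -c example
ibmcloud ks ingress status-report ignored-errors rm -c example --code ERRDRISS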
Ignoring specific Ingress-related errors can be useful if Ingress Status is reporting issues on components that you don’t use. Ignored warnings don’t affect the cluster’s overall Ingress Status. Cluster Ingress Status state shows as healthy
if all the reported errors are configured to be ignored.
Now you can harden your cluster even further. With new in-cluster HTTP health checks, you no longer need to add IBM Cloud control plane IP addresses to your allowlists. The HTTPS health checks will be performed by components that run on your cluster. For clusters running on VPC infrastructure, you might need to adjust the VPC security group configuration to allow incoming requests to the VPC load balancer from the VPC Public Gateway.
In the case of Red Hat OpenShift clusters, this mechanism is already present and is called canary checks. This component will periodically send HTTPS requests to the Ingress subdomain of your router.
In the case of Kubernetes clusters, we developed a new component for health checking. Similarly to canary checks, this component will periodically send HTTPS requests to the public address or addresses of your ALBs. It will be deployed to your cluster if you are using the IBM-managed ALB solution. This feature runs in your cluster by default, but you can configure it to opt-out if you wish. You can find the in-cluster health checker commands under the ibmcloud ks ingress alb health-checker
CLI command.
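The exact subcommands are not enumerated here; to explore them, you can print the help text for the command group:
ibmcloud ks ingress alb health-checker --help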
If you are not interested in viewing your cluster's Ingress Status, you can disable it with the ibmcloud ks ingress status-report disable
command. If you later want to re-enable Ingress Status reporting, you can use the ibmcloud ks ingress status-report enable
command.
By disabling the Ingress Status, you no longer receive detailed health state reports of your Ingress-related components. Disabling Ingress Status reporting can be useful if you don't use any of the IBM-managed Ingress components, such as cluster subdomains, default routers, ALBs or managed secrets.
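As a minimal sketch, assuming the same -c cluster flag used in the earlier examples and a cluster named example, toggling the report would look like this:
ibmcloud ks ingress status-report disable -c example
ibmcloud ks ingress status-report enable -c example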
More informationFor more information, check out our official documentation.
Learn more about IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud.
Contact usIf you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.
KudosWe (the blog authors) would like to thank our coworkers who helped develop the enhanced Ingress Status:
- Attila Fábián
- Jared Hayes
- Lucas Copi
- Marcell Pünkösd
- Sándor Szombat
Attila SzűcsSoftware Engineer, IBM Cloud Kubernetes Service
Balázs SzekeresSoftware Engineer, IBM Cloud Kubernetes Service
=======================
Policy Tuning on IBM Cloud Code Engine Demonstrated with the Knative Quarkus Bench Cloud
4 min read
By:
Scott Trent, Research Staff Member
This post describes the major policy tuning options available for applications running on IBM Cloud Code Engine, using the Knative Serverless Benchmark as an example.
Our team has provided an IBM Cloud port of the Knative Serverless Benchmark that can be used to do performance experiments with serverless computing on IBM Cloud. As shown in our previous blog post, deploying a serverless application like this on IBM Cloud Code Engine can be as simple as running a command like the following:
ibmcloud code-engine application create --name graph-pagerank --image ghcr.io/ibm/knative-quarkus-bench/graph-pagerank:jvm
The next step after learning to deploy a workload would be to learn about policy tuning to improve performance, efficiency, cost control, etc. The following are two major categories of policies that can be requested for applications on Code Engine:
- Pod resources, such as CPU and memory.
- Concurrency regarding the number of requests processed per pod.
Let’s look at each in more detail.
Pod resource allocationThe number of CPUs and the amount of memory desired for a Code Engine application pod can be specified initially when the ibmcloud code-engine application create
command is run and can be modified after creation with the ibmcloud code-engine application update
command.
The number of virtual CPUs desired can be specified with the --cpu <# of vCPUs>
option to either the create
or the update
command. The default vCPU value is 1 and valid values range from 0.125 to 12.
The amount of memory desired can be specified with the --memory
option to either the create
or the update
command. The default memory value is 4 GB and valid values range from 0.25 GB to 48 GB. Since only specific combinations of CPU and memory are supported, it is best to consult this chart when requesting these resources.
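Putting the two options together, here is a hedged sketch that reuses the graph-pagerank application from the earlier deployment example; the 2 vCPU / 8 GB and 4 vCPU / 16 GB pairings are assumptions that should be checked against the supported CPU and memory combinations chart:
ibmcloud code-engine application create --name graph-pagerank --image ghcr.io/ibm/knative-quarkus-bench/graph-pagerank:jvm --cpu 2 --memory 8G
ibmcloud code-engine application update --name graph-pagerank --cpu 4 --memory 16G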
One of the strengths of the serverless computing paradigm is that pods will be automatically created and deleted in response to the number of ongoing requests. It is not surprising that there are several options to influence this behavior. The easiest two are --max-scale
and --min-scale
, which are used to specify the maximum and minimum number of pods that can be running at the same time.
These options can be specified at either application creation time with the create
command or at application modification time with the update
command. The default minimum is 0 and the default maximum is 10. Current information on the maximum number of pods that can be specified is documented here.
Increasing the max-scale
value can allow for greater throughput. Increasing the min-scale
value from 0 to 1 could reduce latency caused by having to wait for a pod to be deployed after a period of low use.
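For instance, a hedged example (again using graph-pagerank as a placeholder application) that keeps one pod warm and allows bursting to 20 pods would be:
ibmcloud code-engine application update --name graph-pagerank --min-scale 1 --max-scale 20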
Slightly more interesting (yet more complicated) are the options that control how many requests can be processed per pod. The --concurrency
option specifies the maximum number of requests that can be processed concurrently per pod. The default value is 100. The --concurrency-target
option is the threshold of concurrent requests per instance at which additional pods are created. This can be used to scale up instances based on concurrent number of requests. If --concurrency-target
is not specified, this option defaults to the value of the --concurrency
option. The default value is 0. These options can be specified at either application creation time with the create
command or at application modification time with the update
command.
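As an illustration, the following hedged command (placeholder application name) lowers the per-pod request limit so that load spreads across more pods, similar to the concurrency-target experiment described next:
ibmcloud code-engine application update --name graph-pagerank --concurrency 25 --concurrency-target 25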
Theoretically, setting the --concurrency
option to a low value would result in more pods being created under load, allowing each request to have access to more pod resources. This can be demonstrated by the following chart where we used the bload
command to send 50,000 requests to each of four benchmark tests in knative-quarkus-bench. The key point is that when the concurrency target is set to 25, all benchmarks create more pods, and as the concurrency target is increased fewer pods are created:
The following chart demonstrates the effect that changing the concurrency target has on the same four benchmarks. In general, higher throughput (in terms of requests per second) can be seen with lower concurrency targets since more pods are created and fewer requests are running simultaneously on the same pod. The exact impact on throughput, however, depends on the workload and the resources that are required. For example, the throughput for the sleep
benchmark is nearly flat. This benchmark simply calls the sleep
function for one second for each request. Thus, there is very little competition for pod resources and modifying the concurrency target has little effect in this case. Other benchmarks like dynamic-html
and graph-pagerank
require both memory and CPU to run, and therefore see a more significant impact to changing the concurrency target than sleep
(which uses nearly no pod resources) and uploader
(which mostly waits on relatively slow remote data transfer):
Specifying resource policy options with an IBM Cloud Code Engine application can have a clear impact on both resources consumed and performance in terms of throughput and latency.
We encourage you to review your IBM Cloud Code Engine application requirements and experiment to see if your workload would benefit from modifying pod CPU, pod memory, pod scale and concurrency.
Scott TrentResearch Staff Member
=======================
Maximize ransomware protection with Veeam and Cyber Recovery on IBM Cloud Cloud Security
5 min read
By:
Dave Mitchell, Senior Product Manager, VMware Solutions
Bryan Buckland, Senior Technical Staff Member
This blog looks at the three distinct types of data protection that exist today and introduces IBM Cloud Cyber Recovery concepts.
We will see, using IBM Cloud Cyber Recovery, how organizations can now protect valuable data from modern threats through backup, disaster recovery and cyber recovery.
Data increasingly drives our business and personal lives, and today's global organizations rely on the constant flow of data across the interconnected network world. At the same time, increasing the quantity and value of data also significantly increases business risk. With data continuing to increase in value, organizations need to recover critical data quickly if it becomes compromised.
Additionally, business transformation, advanced cyber-attacks and insider errors are constant threats. As the information footprint expands, organizations continue to look to the cloud for easy, cost-effective modernization methods, adding yet another opportunity for compromising an organization's critical data. Ongoing threats are continuing, and while standard backup and disaster recovery are crucial, they are not enough.
The ongoing threats to data are not diminishingThe 2022 release of the IBM Cost of a Data Breach report reveals a startling increase in breaches, with 83% of organizations studied having had more than one data breach and 45% of the breaches being cloud-based. Transforming infrastructure to the cloud does not automatically equate to better data protection.
The report further shares that 60% of organizations' breaches led to increased costs, which were passed on to customers.
Ongoing threats are continuing, and while standard backup and disaster recovery are crucial, they are not enough. What tools can be applied to mitigate this risk of going out of business, and what resources are critical? Can the cloud be a tool to help close this risk? According to Veeam, the answer is yes.
Cyber resiliency—valid data recovery from an attack—is the only way to plan.
Data resiliency, the cloud and the business of ransomwareAdvanced threats are only part of organizations' risks in moving to the cloud. The issue is being able to recover the data and ensure business continuity. Data resiliency through using standard backups has dramatically evolved over the years as technology has advanced. Now, the need to recover data, files and file structure, virtual machines and infrastructure has made disaster recovery (DR) data centers a necessity.
IBM has recognized the value of DR and backup by leveraging these use cases and providing increased client value through the IBM Cloud business model for disaster recovery. Using IBM Cloud as secure disaster recovery, infrastructure for cloud or on-premises disaster protection brings the following benefits:
- Geographic redundancy: Enable higher resiliency and availability.
- Affordability: Consume resources when (and as) needed.
- Scalability: Dynamically grow or shrink your cloud resource requirements.
- Immutable object storage: Protect and preserve records and maintain data integrity in a WORM (Write-Once-Read-Many), non-erasable and non-rewritable manner.
- COS replication
By combining the value of the cloud with the VMware virtualization model, organizations can now lower costs and complexity while maintaining the level of compliance and data resiliency needed.
The ransomware business, however, continues to expand and evolve. This evolution drives the ransomware's technology and quality and pushes the attacks into a new business model.
This means that organizations now face Ransomware-as-a-Service (RaaS). RaaS is now making it easier than ever for a threat actor to attack an organization. The attack motives can be any number of reasons—including financially motivated, political or destructive—with no restrictions on the operator having any technical knowledge. In addition, multiple ransomware operators now offer a wide array of tools and services to make ransomware attacks more effortless. RaaS is now a complete business model that includes marketing and technical support operations. This now puts backup and disaster recovery solutions to the test.
Business continuity: Protecting the data with Veeam and IBM CloudWith business-driven applications moving to the cloud, standard backups coupled with DR have been working and evolving to help protect data. However, with the value of business data increasing and the agile nature of application development, advanced threats like ransomware are following along into the cloud model. Using backup and DR solutions cannot always keep up. Now, by adding cyber resiliency capabilities to IBM Cloud, a third level or layer of protection is emerging. Just as there are differences in capabilities between backup and DR, cyber recovery brings its own specific nature to protecting critical data.
Until today, business continuity meant combining the best of standard data backup with disaster recovery technology. By providing cost-effective DR in the IBM Cloud, IBM has reduced the high cost overhead of replicating to a DR data center. Now, by providing an isolated hardened infrastructure in the cloud, IBM has taken the next step clients need for cyber resiliency.
First, it is essential to understand how disaster recovery and cyber recovery work together. Disaster recovery focuses on protecting the business from geological or regional incidents. It’s usually comprehensive in its data volume, and in the case of an incident, the point of recovery and fallback is quick—sometimes instantaneous. Adding immutable storage to DR solutions does add protection, but it does not provide the data validation and environment to ensure the data is not still infected in some way. It also does not guarantee application integrity to run a production environment separate from the environment where the infection originated.
Cyber recovery differs slightly and protects the entire business from a targeted selective attack. The key to recovery is the reliability of the data. Since the data is critical, the impact on the company is global—a situation different from the regional, more contained incident when a DR infrastructure provides protection. Admin access to the isolated recovery environment is highly protected, and data is typically scanned to ensure reliable recovery.
IBM Cloud Cyber Recovery with VeeamWith IT transformation to the cloud, IBM continues to follow the momentum by engineering and providing an automated, isolated, cyber-resilient infrastructure. Working with key partners like Veeam, a completely engineered cyber recovery infrastructure provides the foundation for clients and partners to create a hardened and virtual "air-gapped" environment for protecting critical data copies and backups. Along with the automated cloud solution, a Solutions Guide is available to provide details and help understand the components.
Read the cyber recovery solution guide.
The cyber recovery solution guide describes the automated tasks that create two cyber-resilient solution architectures: an immutable storage environment and an isolated recovery environment. Clients use immutable, unalterable, write-once-read-many storage technology for configuration data. Isolated recovery storage works to prevent corruption and ensures that recovered data is intact. In addition, network isolation separates the production environment from the isolated environment, providing a virtual "Air-Gap" for added protection from production network malware or attack. The solution guide also discusses the use cases for creating backups and making these backups available only to security administrators.
For a detailed understanding of the two solution architectures, see the overview of cyber recovery with Veeam architecture in the guide.
IBM awarded the Veeam Cloud and Service Provider Growth Partner of the YearOrganizations are seeing the value of IBM Cloud Cyber Recovery, and there has been tremendous growth in implementing this advanced, cost-effective solution. The value and momentum of the combined solution are why IBM was recently awarded the Veeam Growth Partner of the Year at VeeamON 2023. Veeam recognizes partners who have demonstrated outstanding performance and expertise in delivering data protection and ransomware recovery solutions and services.
SummaryModern malicious actors continue to evolve and adjust their tactics, and ransom demands continue rising. Veeam backup and data protection solutions work to prevent ransomware damage. IBM is meeting the challenges of protecting an organization's valuable data to survive and provide confidence for continued innovation. IBM Cloud Cyber Recovery with Veeam brings an easy-to-deploy automated solution complete with a virtual network air gap, immutable storage and a protected recovery environment. Check out the Solutions Guide today as a first step.
Now, with the cost-effective IBM Cloud, organizations can prepare a solid data resiliency strategy to include backup, disaster recovery and cyber recovery protection—all working in concert to keep your organizations protected from ransomware.
Check out the solutions guide today.
Dave MitchellSenior Product Manager, VMware Solutions
Bryan BucklandSenior Technical Staff Member
=======================
“Automate This” to Build a Better Black Friday Experience Automation
4 min read
By:
IBM Cloud Team, IBM Cloud
How IT automation can help you survive Black Friday.
A little history: “Black Friday” was coined in 1950s Philadelphia. It described the chaos that erupted when the city was flooded in advance of the big Army-Navy football game held on the Saturday after Thanksgiving. These days, it’s a global commerce event—in the U.S. alone, consumers spent $8.9 billion online—as retailers look to holiday sales to finish the year profitably (i.e., in the black).
Let’s look at how IT automation—specifically observability and application resource management—can deliver a better Black Friday experience.
Don’t freeze—observeMany retailers opt for the “Black Friday Freeze” of their website and infrastructure. For weeks before the big day (and sometimes weeks after) a company will halt any change to its production system to keep the website as stable as possible. The idea is if you don’t change the system once it’s reached a known acceptable state, you can eliminate the risk of that state being disrupted.
That approach may have worked in the past, but it’s a concept that now feels both woefully outdated and the opposite of what a competitive operation should be. Today’s modern e-commerce sites have too many components from too many vendors for this idea to make sense. Even if you were to freeze every system under your control, you can’t stop everything:
- What if your credit card processor changes its system?
- Can you prevent your hosting provider (all of them?) from performing potentially risky maintenance to their data center?
- Might your analytics vendor push some updated JavaScript, which ends up breaking the checkout form?
And that’s before you begin to consider the other business process IT systems that you don’t control at all, such as your integrated connections with third-party shipping vendors. Finally, if you do have to fix a problem, how can you know whether your change will actually fix it or make things worse?
It’s a fait accompli that you can’t simply “freeze away” the risk. For all but the biggest companies, e-commerce is inherently filled with this kind of technical risk. Now, however, there’s an alternative to the Black Friday Freeze. Observability is the way to understand how to quantify, handle and react to risks. With a modern observability platform, you can gain insight into the state of your running system and all its components from the underlying infrastructure to the specific browser and/or app interactions on customers’ devices.
This understanding lets you handle the risks of operating your site without halting important changes for weeks or months. With a full-stack observability platform, you have access to the following:
- Application performance monitoring (APM) for full request and response tracing between the end user, the application and any of its external services.
- Monitoring of the end-user device, including asset timings and JavaScript errors.
- CI/CD pipeline deployments to understand the impact of code-level changes within seconds of release.
- Distributed tracing to trace every request across every service for application troubleshooting and performance optimization.
- Automated root-cause analysis for immediate identification of every service impact.
- Business-relevant SLOs and dashboards generated based on application health and metrics.
Learn more about observability. Then see how solutions like IBM Instana Enterprise Observability can help you enhance application performance monitoring and resolve issues before they become problems.
Handle seasonal spikes in strideCarhartt, in business since 1889, has been the dependable apparel choice for workers in automobile production, construction and many other industries. More recently, younger consumers have embraced this iconic brand, growing Carhartt’s revenue from under USD 100 million in the 90s to nearly USD 1 billion today.
Such rapid growth creates application performance challenges, especially during Black Friday’s dramatic new spikes in demand. Yet the Carhartt IT team has managed it all with a combination of old-fashioned hard work and smart technology deployment.
The company was an early adopter of application performance monitoring (APM) to spot code issues and manage performance. But when there was a spike in demand, it created issues between the company’s front-end site and multiple back-end systems, including inventory and loyalty systems. The code was fine, but they could not pinpoint the root cause of the performance problem. They needed greater visibility into application resources and greater clarity into any performance issues.
Using IBM Turbonomic Application Resource Management, the team clarified the resource relationships between Carhartt’s hardware, its virtualization and its APM solution—stitching together the company’s complete application stack. The software then identified opportunities for improvement, including adjusting Java Heap sizes, powering off low-use systems, adjusting VM hardware for best performance and consolidating VMs for performance and efficiency. Following IBM Turbonomic’s prescriptive actions, the Carhartt team prevented performance issues from occurring during the holiday season (and beyond), driving record sales.
Read more about Carhartt’s Black Friday success.
A podcast for retailersFor technology you can use year-round, listen to the podcast Automation in Retail, part of the Art of Automation series hosted by IBM Fellow Jerry Cuomo. He’s joined by NCR Corporation CTO Tim Vanderham to discuss how progressive retailers use automation to transform the customer's shopping experience.
IBM Cloud TeamIBM Cloud
=======================