Think 2021 Keynote Address by Arvind Krishna

Our thoughts on Arvind Krishna’s Keynote from Think 2021

by Rob Davis

| 5 minute read

I always enjoy listening to Arvind speak, and his keynote from Think 2021 is no exception. IBM technology has been at the forefront of underpinning some of the world’s most important systems during the pandemic, but it’s not just about the ‘plumbing’, which we often take for granted.

However, as good as IBM is at the ‘plumbing’, as businesses and governments plan to emerge and grow in a post-pandemic world, I was impressed with the strong, strategic announcements IBM made today to enable our customers to emerge and become stronger as the future unfolds.

Some of the thoughts and insights Arvind shared that I think were valuable:

✅ The increasing emergence of hybrid multi-cloud, combined with Red Hat, changes (for the better) how data is managed and secured across an organisation. This is not the standard ‘cloud-first’ message the market has become accustomed to over the last 10 years. It is something to pay attention to, and something we often refer to as Cloud 2.0.

✅ The role that AI plays is clearly an ever-evolving topic as we progress into the ‘Digital Century’. AI is not going away; it is becoming infused into nearly every aspect of our lives. This is a hot topic, one in which IBM is well versed, and with a slew of initiatives, with Project CodeNet and Watson Orchestrate being instrumental, IBM will continue to be a major player.

✅ With a focus on enhancing customer experience, innovating with resilience and addressing the $7.4 trillion digital marketplace, as clients consider what to do with the remaining 75% of workloads not yet in the cloud, it’s an exciting time to be an IBM Business Partner. What is clear, though, is that the fundamentals of availability and security are not going away; in truth, they will only become more important as the hybrid cloud extends from the core of an organisation’s IT to the Edge.

And some topics, notable for their absence:

❌ No mention of the spin-off (aka “Kyndryl”) and what this means for clients – after all, what is an ‘independent’ IBM company?

❌ I was hoping that he would have addressed some of the concerns clients may have as IBM embraces its ecosystem approach and moves away from a direct engagement model (in some sectors). Personally, I think this presents an opportunity for us as an IBM Business Partner to enhance the service we provide, and I only see it as a positive, but I fear that clients are a little in the dark as to what it means for them. We are here to support our clients (old and new), as we always have.

All in all, very excited for the potential of 2021/2022 to add new value, reduce costs, drive benefit from IT expenditure and help our clients and partners to grow their businesses.

Those are my thoughts, what did you take away from the keynote from THINK?

To view the full address on-demand, click here



4 Options CIOs Are Considering for Centera End of Life


| 7 minute read


If you’re facing a lengthy and undesirable Centera migration decision,
then this guide is for you.


As of 31 March 2018, the Centera product line was discontinued and reached end of life, with no further product development. IT leaders now have a fixed-content headache with applications such as FileNet, CMOD, Enterprise Vault, NICE, and EMR/EHR/PACS.

Why Centera Customers are Considering a Change

Many Centera customers purchased the storage product for a specific use case several years ago. Since then, several other storage platforms and cloud-based storage offerings have been introduced and matured in the market. Enterprises are now deploying hybrid cloud storage solutions to balance cost containment, data performance and data security.

For Centera, there is a perceived risk in migrating to modernise your storage environment. You could be facing a lengthy and costly migration that non-TES customers report can take many months, or even years, to complete. Slow migrations increase the risk of downtime and data loss, plus the additional cost of running two systems whilst your migration takes place.

This large (in some cases petabyte-scale) repository of data is not deriving any business value, because the proprietary Centera interface precludes any meaningful data analysis. Wouldn’t it be great to use the trends in this data as part of an AI training model for a competitive edge or a better quality of service?

4 Options to Address EMC Centera End of Life for Enterprise Storage

When dealing with end-of-life announcements, some organisations act quickly and move/migrate to another solution. Others seek to get maximum value from their current platform and manage the risk until such time as it becomes too expensive or risky to maintain.

Do nothing is still an option, just not a sustainable one. The status quo comes with additional risk since the proprietary hardware parts are no longer available and support is more difficult to procure.

However, ‘Do nothing’ is the exception in 2021. Over the last few years, concerned Centera customers have been opting not to stick with the status quo and are considering a change due to a) a short-term need to expand enterprise storage capacity, b) the need to reduce current support costs and overall TCO, and/or c) a longer-term enterprise storage solution that is more flexible, scalable and secure.

Since ‘Do nothing’ is the least favourable option, the following are the three (3) appropriate migration paths for Centera:

Migrate to Another On-Prem Platform

This might be the most straightforward path: moving from one on-prem platform to another, like IBM Cloud Object Storage.

Migrate to Cloud-only

This option is common with enterprises that are striving for a pure cloud-only architecture. Often years in the making to achieve, the migration to the cloud is often time-consuming, lengthy and costly upfront in order to receive the expected long-term cost savings (hint: early adopters are learning, especially during COVID, that the opposite may occur).

Hybrid Cloud Storage

The third option that is gaining momentum in 2020 is the concept of Hybrid Enterprise Cloud Storage, blending the best of cloud and on-prem storage. A hybrid cloud storage infrastructure allows you to match each workload to the most appropriate approach, public cloud or on-prem hardware, so that your data and content are stored in the most appropriate location based on expected cost, accessibility, frequency, security policies, and use case.

How Much Time Should You Dedicate to Your Centera Migration Project?

Estimating this can be challenging because of the manner in which Centera stores data: repositories can reach hundreds of terabytes or even petabytes in scale, with a lack of application integration or multi-threading.

We have heard of horror stories of migration efforts taking years, often involving 3rd party consultants. This lengthy process can be unnecessarily expensive and risky. As each day passes the probability of the EoL Centera system experiencing an unrecoverable hardware failure increases.

TES has a modern approach that migrates up to 14TB/day and 50 million objects/day from Centera. This speed reduces the migration project from years to weeks and costs significantly less. Schedule a 15-minute consultation to determine the expected timeframe of your migration project using our modern, unique process. If you need a speedy and less costly migration solution for Centera, contact an Enterprise Storage specialist at TES.
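The quoted throughput figures can be turned into a back-of-the-envelope duration estimate. This is only a sketch: the 14TB/day and 50 million objects/day rates come from the paragraph above, but real-world rates vary with object sizes, threading and network conditions, and the example repository sizes below are hypothetical.

```python
# Rough estimate of Centera migration duration, assuming the throughput
# figures quoted above (14 TB/day and 50 million objects/day). Actual
# rates depend on object sizes, threading and network conditions.

def estimated_migration_days(total_tb: float, total_objects: int,
                             tb_per_day: float = 14.0,
                             objects_per_day: int = 50_000_000) -> float:
    """The slower of the two limits (capacity vs. object count) governs."""
    return max(total_tb / tb_per_day, total_objects / objects_per_day)

# Hypothetical example: a 280 TB archive holding 400 million objects.
days = estimated_migration_days(280, 400_000_000)
print(round(days, 1))  # capacity is the bottleneck here: 20.0 days
```

Either limit can dominate: an archive of many tiny objects will be bounded by the objects/day rate rather than raw capacity.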

The Typical Risks Associated with a Centera Migration Project

As with any migration project, risk mitigation is best achieved with the proper planning, project management and allocation of resources. In the case of re-platforming from an EoL product like Centera, the speed of the migration effort (or lack of it) is a risk factor considering hardware failure may be unrecoverable.

In addition, migrating data from Centera tends to be trickier versus other storage platforms because of the proprietary API that allows applications to write data to and extract data from the repository. Extraction and migration tends to be slower and drags out the data migration process.

Finally, for many enterprises, any migration needs to include a process that ensures meeting the relevant regulations. As such, a cross-platform migration requires a process that ensures chain of custody.

How Can I Access Free Support to Build My Business Case?

With the proliferation of use cases, the simple storage decision is not straightforward anymore.

The Enterprise Storage Technical Specialists at TES will guide you to the ideal decision using a unique blend of analysis tools and assessment process. We can help you evaluate appropriate storage options within your own IT environment – often at no charge to you.

See if you qualify for the free business case development assessment by one of our Enterprise Storage Technical Specialists. Contact us today here.

Why Enterprises Running SAP HANA Seek IBM Power


Software platforms like SAP S/4 HANA have the power to digitally transform your Enterprise, streamlining business processes and generating real-time insights from your data. To take full advantage of the in-memory database capability of SAP HANA requires a review of your IT infrastructure. For Enterprises that require scalability and availability at a lower TCO, there is a choice that makes the shortlist almost all the time: IBM Power Systems.

IBM Power Delivers for SAP S/4 HANA

Since 2015, thousands of organisations across various sectors have chosen IBM Power Systems to run their SAP S/4 HANA platform. And for good reason, namely a £1.4M NPV positive impact over 3 years, a 7 month payback period, and no downtime over an 18-month period — all verified by a study completed by Forrester Consulting.

Forrester interviewed SAP HANA customers using IBM Power System about their experiences, then quantified the results a typical organisation could realise. Download your free copy of the report today.

The Key Criteria Enterprises Use to Select IBM Power Systems For HANA

In the digital-first world, Enterprises whose downtime costs exceed £20,000/hour are starting to prioritise performance, scalability and security over cost with IBM Power. When unplanned infrastructure downtime is near zero over an 18-month period, revenues are maximised. Reliability is not the only reason Enterprises select IBM Power for their SAP HANA environment; IBM Power also delivers several other benefits:

  • Reduction in system/server administration costs: Enterprises report 43% less time to update the solution stack and a 30% cut in time spent on server administration with IBM Power
  • Infrastructure consolidation: Up to an 86% cut in the number of SAP HANA servers
  • Reduction in licence costs: Consolidate up to 16 x86 servers onto one IBM Power system, meaning fewer licences to maintain and lower server administration costs
  • Virtualisation of up to 24TB in scale-up configurations: SAP has certified this environment for IBM Power
  • Precise cost control, with capacity allocation down to as little as 0.01 cores and 1GB, enabling you to avoid overpaying for capacity
  • Shared processor capacity: Efficiently utilise processing capacity across SAP HANA instances to further reduce Total Cost of Ownership (TCO)
  • Predictive failure alerts: IBM Power Systems uses heuristics, running in the background of ongoing SAP HANA workloads, to pre-emptively warn DBAs when a failure is likely to occur, enabling you to prevent a costly outage

Download the Forrester TEI Report

If you are considering deploying SAP S/4 HANA either as an update or a new deployment, then download the Forrester report here. Read how one of the most important, yet underappreciated decisions could make or break your deployment: The IT infrastructure.

4 Overlooked Considerations That Can Cause Your AI Strategy to Fail


| 7 minute read


If you’re feeling some pain in scaling your AI strategy,
then this guide is for you.


AI is considered a driving force powering the next age of human progress and computing platforms. Early experiences suggest that achieving success with AI/Machine Learning/Deep Learning is harder than expected. Unlocking the transformative effects of AI is not as simple as turning on a light switch.

AI is the second-most important initiative to enterprise leaders today, second only to using data-driven insights to improve products and services, according to Forrester Consulting.

The No. 1 goal for AI-based projects is increasing revenue growth (43%), followed closely by improving employee productivity, improving CX, and increasing profitability. Not surprisingly, top use cases mirror these key goals with over 70% of firms currently using or expanding their use of AI technology to support customer service interactions, operational efficiency, and business intelligence application scenarios.

Organisations actively advancing their AI strategy and capabilities are not doing so in isolation; they are dealing with the direct dependencies and the impact AI places on their people, data, processes and technology.

AI Success is Driving the Next Generation of Market Leaders

With the rapidly evolving and transformative effects of the fourth platform, failure to participate is no longer a viable business option. Companies that wish to digitally transform must understand that embracing the status quo will leave them struggling to keep up with competitors that recognised the opportunity before them.

AI has the ability to create incredible value by decreasing costs, increasing productivity, and improving customer experiences.

Up until 2019/2020, enterprises focused on experimenting with AI within specific areas or functions. According to Forrester, those enterprises that have achieved success with AI are seven times more likely than firms that have not scaled AI to be the fastest-growing organisations in their industries. Conversely, those that have not scaled AI are 1.4x more likely simply to be average in terms of revenue growth rate compared to competitors.

Why Organisations are Failing at AI

Data Quality: 90% of firms are severely challenged at scaling AI across their enterprise, with data the driving force behind this difficulty.

Lack of AI Understanding: One of the most perplexing findings of the same Forrester study is that 52% of respondents simply don’t know what their AI data needs are. If enterprises don’t know what they need, they may blindly jump into AI initiatives that have little chance of success or worse, may never try in the first place.

AI Skills Shortage: Without the right skills in place, teams will struggle with solutions and fail to successfully carry out use cases. The skills shortage is real, and many enterprises are underestimating the time needed to ramp up to become proficient (hint: it’s more than 12 months).

Not Thinking Beyond Compute: Simply put, there is no AI without information architecture (IA). Many organisations start by focusing on the compute side of AI, investing in GPUs. While GPUs are critical to AI success, this singular focus can, and sometimes does, lead to the disruption or complete failure of AI projects.

The IA that handles an AI pilot project may not function well when scaled across the enterprise. Organisations must review their entire information architecture for potential breakpoints (performance, cost, security) across computing processing, data storage and interconnectivity when they start to scale AI across the enterprise.

Data Quality is the Top Success Factor for AI. What are the others?

Without properly prepared and curated data, AI initiatives fail. While data quality and data standardisation are the top AI success factors, they are not the only ones:

Data Integration – The ability to connect AI platforms with analytics/business intelligence platforms, along with connecting multiple data sources.

People – Access to abundant data science and AI/ML engineering skills is critical. As noted above, there is a shortage of these skills in 2020, as demand surges across many industries.

Tech Infrastructure – GPUs are a must, no arguments there. But as with GPUs for computing power, not all storage is created equal for AI and data workloads. Many general-purpose platforms were not designed with AI in mind. Purpose-built platforms like IBM Spectrum Scale and IBM Cloud Object Storage have been designed specifically to handle Data and AI workloads.

In addition, the next generation of Information Architecture (“IA”) is being designed to scale up and out with minimal to no disruption to your production operations. The current thinking behind multi-cloud and hybrid cloud architectures is to ensure this next-generation IA scales not only from a performance standpoint, but with cost and security considerations as well.

Data Management Processes – Managing data manually can be a challenge, especially when training AI. Organisations that are successful at scaling AI think ahead in this regard and use automation to manage data in an efficient manner.

Key IA Considerations within Each Stage of the AI Journey

Each AI journey or initiative contains four stages: a) collect the data, b) organise the data, c) analyse the data, and d) infuse insights into the organisation. AI is driven by data, and how your data is stored can significantly determine success. The specialists at IBM outline the impact of Storage across the four stages:

Data Collection. The raw data for AI workloads can come from a variety of structured and unstructured data sources, and you need a very reliable place to store the data. The storage medium could be a high-capacity data lake or a fast tier, like flash storage, especially for real-time analytics.

Data Organisation. Once stored, the data must be prepared since it is in a “raw” format. The data needs to be processed and formatted for consumption by the remaining phases. File I/O performance is a very important consideration at this stage since you now have a mix of random reads and writes. Take the time to figure out what the performance needs are for your AI pipeline. Once the data is formatted, it will be fed into the neural networks for training.

Data Analysis and Infusion. These stages are very compute intensive and generally require streaming data into the training models. The training and analysis stage is an iterative process, requiring setting and resetting, which is used to create the models. Inferencing can be thought of as the sum of the data and training. The GPUs in the servers and your storage infrastructure become very important here because of the need for low latency, high throughput and quick response times. Your storage networks need to be designed to handle these requirements, as well as the data ingestion and preparation. At scale, this stresses many storage systems, especially ones not prepared for AI workloads, so it’s important to specifically consider whether your storage platform can handle the workload needs in line with your business objectives.
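The four stages can be sketched as a minimal pipeline. The function bodies below are illustrative placeholders (an assumption for this sketch, not a real AI implementation); the point is the shape of the flow from collection through infusion.

```python
# A minimal, illustrative sketch of the four AI journey stages:
# collect -> organise -> analyse -> infuse. Placeholder logic only.

def collect(sources):
    """Gather raw structured/unstructured records into one reliable store."""
    return [record for source in sources for record in source]

def organise(raw):
    """Prepare raw data for training: drop empties, normalise formatting."""
    return [r.strip().lower() for r in raw if r.strip()]

def analyse(prepared):
    """Stand-in for iterative model training on the prepared data."""
    return {"model": "trained", "examples": len(prepared)}

def infuse(model):
    """Put the model's insights to work in the organisation."""
    return f"deployed model trained on {model['examples']} examples"

data = collect([["Sensor A ", ""], ["sensor b"]])
print(infuse(analyse(organise(data))))  # deployed model trained on 2 examples
```

Each stage's storage demands differ, which is why the IA review discussed above should cover the whole pipeline rather than any single step.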

Moving Forward at Scale

To compete beyond 2020, organisations will need to progress on developing and scaling their AI capabilities in order to remain or become the leader in their space. The next paradigm is well under way. The next step is up to you.

TES offers a free IA and Storage Assessment, providing a free assessment report outlining where your IA and Storage is viable (and deficient) for scaling AI. Many of our clients use this report as a fresh set of eyes to validate their strategy and/or find the breakpoints that could emerge once the scale effort starts. Request the free assessment here.

The 4 Steps to Prioritise Your Cost Containment Initiatives

The 4 step process to identify, prioritise and execute your IT Cost Containment plan.


| 7 minute read


You may not know where to start, just that you must act swiftly.
If so, then this guide is for you.


It is famously said that “You can’t shrink your way to greatness”, but in a financial and health crisis you do need to make cuts to survive.

In the early days of COVID, you raced to complete transformative initiatives to keep the organisation operating as expected. Many digital transformation initiatives became your top priority, helping transform the organisation in a matter of weeks rather than months or years.

However, a new reality has emerged for many: IT spend is too high and unsustainable within our unpredictable lockdown economy. Like many financial and IT leaders, you may be under pressure to get the operating environment ‘back to normal’ and must find a way to continue to deliver without the same level of resources.

Unfortunately, many now-exhausted IT leaders don’t want to dismantle the progress made over 2020, but realise that they must deal with the overspending issue, even as one initiative depends on another.

The Cost Reduction and Containment Initiatives for Enterprises in 2021

IT cost containment and reduction initiatives range from small fixes to efforts involving the entire organisation. Easy, near pain-free reductions include cancelling unused or underused monthly SaaS subscriptions, while more complex, time-consuming initiatives involve code evaluation and shifting workloads to more cost-efficient platforms.

The cost containment and reduction initiatives many leaders are undertaking in 2021 include:

  • Evaluating existing projects for Pause, Stop and Continue
  • Terminating unused SaaS or cloud services
  • Ensuring application code minimizes computing resources
  • Shifting workloads or data storage to the most cost-efficient platforms
  • Negotiating down contracts
  • Consolidating databases and/or enterprise software onto less costly platforms, leading to a significant reduction in software licensing costs and operating costs
  • Increasing productivity of assets through increased use or consolidation
  • Driving automation across the IT department
  • Evaluating the impact of CapEx and OpEx on the IT budget. Leveraging SaaS-like consumption models for capital projects
  • Reducing internal service levels

Step 1: Begin with a View to Strategic Cost Containment

Research shows that organisations that invest strategically during tough times are more likely to emerge as market leaders in the future. Tough times require difficult actions.

All organisations have some easy, tactical opportunities to save budget, but this is often not enough. Cost-cutting alone, without context of the organisational impact, is a recipe for disaster (and career-limiting!). Frozen or suspended costs may provide immediate expense relief, but may resurface at the wrong time. Getting it right means making strategic cost decisions.

Cost containment and optimisation initiatives should be sustainable over the short term and long term. Therefore, leaders should ensure decisions are made with a full understanding of the business impact and avoid cuts that simply shift spend — spend that is likely to return in another place or time without any overall gain or benefit to the organisation.


Step 2: Evaluate Cost Decisions with a Cost Optimisation Framework

To assess which cost containment and optimisation initiatives and programmes to undertake in 2021, TES recommends the use of a cost optimisation framework. The framework balances the cost impact and potential benefits against the impact on the business, time-to-value, risk (business and technical) and any required investment.

Not all cost containment initiatives will result in the same benefits. By assessing cost containment initiatives within a framework, a prioritised and optimised list of cost containment initiatives will emerge that will:

a) Meet cost-cutting targets, and
b) Ensure the organisation is well-positioned for better days ahead.

The recommended framework consists of six (6) key areas to analyse and determine your prioritised cost-cutting initiatives:

Potential Financial Benefit
Estimate the financial impact each cost initiative can have on the bottom line.

Ask: How much can be cut from my budget if the initiative is implemented? Is there an effect on cash flow in the short and long term?

Business Impact
The optimal cost reductions are those that occur within the same fiscal period. While long term cost savings can drive the organisation forward, these may not produce much-needed immediate cost savings. Determine what impact an initiative will have on the operations of a specific business unit or function and on your people.

Ask: Will there be an adverse impact on day-to-day activities and operations, such as decreased productivity or product time to market? If the organisation fails to grasp these effects, initiatives may fail.

Time to Value
Whether cost containment and optimisation initiatives are approached with Waterfall or Agile thinking, the time it will take to realise the cost savings and improve business value needs to be considered. If the cost savings will not be realised until the next fiscal period, the initiative may not be as valuable as one whose value is delivered immediately, no matter the ‘size of the prize’.

Ask: Can the cost savings be captured and realised within the desired time frame (weeks/months/fiscal year)? What is the best method to measure soft savings with an initiative?

Degree of Organisational Risk
The effectiveness of the cost containment initiative may depend on whether your organisation and people can change and adapt to new processes or structures.

Ask: Can our people ensure the changes are made? Does the organisation possess the capability of adapting and learning to change?

Degree of Technical Risk
This risk resides within the domain of IT leaders. IT leaders must work across the organisation to ensure IT changes can be integrated within the current operations. Delays caused by or attributed to the initiative could result in a loss of service delivery or productivity.

Ask: Can the change undermine the ability of our systems to deliver services?

Required Investment
Cost optimisation sometimes isn’t about cost reduction; in some cases, it is about sustained improvements in business processes, productivity and time to market. Some initiatives will require an initial investment that leadership (and/or the executive board) must agree to fund. Present a business case showing the potential business benefits vs. the status quo and the level of investment required.

Ask: Does the initiative require a large, upfront investment before savings can be realised? Can our organisation make an investment at all?


Step 3: Determine Your Optimised Cost Containment List

Each organisation is different in terms of risk appetite, policies and investment considerations in challenging times, just to name a few. Your decision framework should account for these factors.

Start by weighting each of the six (6) assessment areas above, with Potential Financial Benefit and Business Impact weighted as one group and the other four together as a second. Score the proposed initiative across each of the six areas. Once you determine the scores, calculate a weighted assessment score for the initiative and map it to a 3×3 grid. Repeat this for all your candidate initiatives. When all the initiatives have been mapped, you can prioritise your list, actioning the high-impact, low-risk, fast time-to-value initiatives first.
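To make the weighting step concrete, here is a minimal sketch in Python. The six assessment areas come from the framework above, but the 1-to-5 scoring scale, the specific weights (60% on the first group, 40% spread across the other four) and the band thresholds are illustrative assumptions, not prescribed values.

```python
# A minimal sketch of the weighted-scoring step. Weights, the 1-5 scale
# and band thresholds are illustrative assumptions, not TES's own values.

AREAS = {
    # group 1: financial benefit and business impact (weighted together)
    "financial_benefit": 0.30,
    "business_impact": 0.30,
    # group 2: the remaining four assessment areas
    "time_to_value": 0.10,
    "organisational_risk": 0.10,
    "technical_risk": 0.10,
    "required_investment": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-area scores (1 = poor, 5 = strong) into one value."""
    return sum(AREAS[area] * scores[area] for area in AREAS)

def grid_cell(impact_score: float, risk_score: float,
              low: float = 2.0, high: float = 3.5) -> tuple:
    """Map a pair of scores onto a 3x3 grid of (impact, risk) bands."""
    band = lambda s: "low" if s < low else "high" if s >= high else "medium"
    return band(impact_score), band(risk_score)

initiative = {"financial_benefit": 5, "business_impact": 4,
              "time_to_value": 4, "organisational_risk": 2,
              "technical_risk": 3, "required_investment": 4}
print(weighted_score(initiative))  # 4.0
```

Running every candidate initiative through the same scoring produces the prioritised list: the highest-scoring, lowest-risk cells of the grid are actioned first.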

Need Help with the Assessment? Download our free IT cost containment framework tool to complete your assessment


Step 4: Action the Cost Containment Initiatives and Reduce your IT Spend

With your assessment complete and the cost containment initiatives prioritised, the work to contain, reduce, and change begins.

The strategic assessment outlines those initiatives appropriate for your organisation, balancing potential cost reductions against the net benefits and potential risks.

Putting these initiatives into action to realise the cost savings is when the real work begins. The Enterprise Specialists at TES are well experienced in database consolidations, data centre migrations, staff augmentation, hybrid cloud AI designs, contract renegotiations and code audits for performance productivity – the cost containment projects many enterprises are executing in 2021.

Our specialists ensure you capture the anticipated cost reductions. Book a free assessment session today with an Enterprise Specialist to help you develop your executional roadmap to cost containment and reductions.

How to Design your Enterprise Hybrid Multi-Cloud Storage Strategy

A step by step guide to formulating your ideal infrastructure strategy.


| 7 minute read


Hybrid Cloud storage is an approach to managing cloud storage that uses both local and off-site resources. The hybrid cloud storage infrastructure is often used to supplement internal data storage with public cloud storage. Hybrid cloud storage is a critical component of an overall hybrid cloud strategy, as one drives the other. 94% of enterprises are pursuing a Hybrid Cloud Strategy in 2020. Cloud technologies, and to some extent on-premises technologies, have matured to make the Cloud value proposition less an ‘either/or’ and more an ‘and’ proposition.

Hybrid Cloud isn’t “to cloud or not to cloud”, as Deepa Krishnan, IBM offering management director, wrote in her blog, but rather “What is the best way to optimise my IT environment to drive my business forward?” The maturity of these technologies makes almost any IT vision possible – all within your specific cost, regulatory compliance, and security configuration framework.

What Is Hybrid Cloud Storage?

When implemented successfully, no one in your organisation should notice, as the hybrid environment will act as a single storage system. Before we dive deeper into hybrid storage for enterprises, let’s first define the terms associated with hybrid cloud storage:

On-premise: This is the IT infrastructure you own, located inside a data centre or colocation facility. You bought the enterprise servers, storage environment, switches, and so on, and you are responsible for the management and administration of the overall IT environment.

Public Cloud: This is the IT infrastructure you don’t own and pay for access through a cloud services provider such as IBM Cloud. The public cloud vendor provides access to a set of standardized resources and services and is available on a pay-per-use consumption model.

Private Cloud: Provides a cloud-like solution within a defined hardware footprint. Also known as a corporate/internal cloud.

Hybrid Cloud: Combines resources from private, public, and on-premises environments to take advantage of the cost-effectiveness each platform can deliver.

Benefits of Hybrid Cloud Storage

Enterprises adopting a hybrid cloud strategy view this as the optimal approach to address the constant explosive growth of data and content and to derive value from such data. Enterprises deploying a Hybrid Cloud Storage strategy are realising several benefits that may only be possible through a hybrid approach. These benefits include:

  • Extending the life of on-premise storage and maximising such investments
  • More predictable storage usage and scalability based on changing storage needs
  • Better control over data costs
  • Reclaiming on-site storage capacity
  • Optimising the balance between storage costs and data value
  • Improving disaster recovery and business continuity (DR/BC) strategies
  • Simplifying operations and saving time for IT personnel

Hybrid Storage Use Cases

You can use hybrid storage for a variety of purposes. The most common use cases include:

Sharing application data: You often need to access application data both on-premise and in the cloud, and applications in both environments may share the same data. This requires applications to be able to reach the data no matter where they are hosted. Hybrid storage enables you to share this data smoothly.

Cloud backup and archive: You can use hybrid storage to optimise backups and archives across multiple sites. For example, simple solutions help you quickly and securely move backups to cloud locations, while advanced solutions can combine backups from multiple sites into a centralised location for improved RTO and RPO.

Multi-site data: Hybrid storage can help you share data across sites while keeping data consistent. You can use hybrid storage solutions to synchronise data, ensuring that all storage resources contain reliable copies.

Extending on-premise data to the cloud: Hybrid storage systems are used to supplement local data storage with cloud storage resources. These systems use policy engines to maintain active data on-site and move infrequently used data to cloud storage.
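
The policy-engine behaviour described above can be sketched in a few lines. This is an illustrative sketch only – the 90-day threshold and the function name are assumptions for the example, not any vendor’s API:

```python
from datetime import datetime, timedelta

# Hypothetical policy: keep data accessed within the last 90 days on-premise,
# and tier everything older out to cloud object storage.
HOT_WINDOW = timedelta(days=90)

def placement(last_accessed: datetime, now: datetime) -> str:
    """Return the storage tier a policy engine might choose for a dataset."""
    return "on-premise" if now - last_accessed <= HOT_WINDOW else "cloud"

now = datetime(2021, 6, 1)
print(placement(datetime(2021, 5, 20), now))  # recently used -> on-premise
print(placement(datetime(2020, 11, 1), now))  # cold data -> cloud
```

Real policy engines layer in further rules for file size, data classification and compliance constraints, but the principle is the same.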

Big data applications: Hybrid storage can help you process and analyse big data more efficiently. Using hybrid storage, you can easily transfer datasets from the cloud for in-house computations or vice versa. You can also more easily isolate sensitive or regulated data.

The 4 Areas to Assess when Designing a Hybrid Cloud Storage Strategy

A successful hybrid cloud strategy depends on a successful hybrid cloud storage environment. A Hybrid Cloud Storage strategy starts not with the technology but with understanding the complete picture of your data:

The Relationship between Your Data and Applications (Data Gravity)

  • What is the scope and size of your datasets?
  • Where does most of your data live? Is this ideal?
  • What applications need to access this data?
  • Can the data be easily moved? If the data needs to move, are there additional changes that need to occur to facilitate the change?

Data Governance and Security



  • Access: Who should have access to your data? Perhaps more importantly, who should NOT have access to your data?
  • Monitoring: How are you keeping an eye on the above two groups of people?
  • Lifecycle: How long is your data valid/relevant/useful?
  • Retention: What backups and disaster recovery options do you need for your data?
  • Compliance: What, if any, governance law dictates where and how long your data can live?

Data Performance Requirements



  • Latency: Do any of your applications have latency requirements? What is the impact if they are not met?
  • Frequency of Access: How often does your data need to be accessed? This is important, as it can impact operating costs
  • Growth: How will data growth impact overall performance?
  • Data Types: How easy is it to sift through your data (structured vs. unstructured)?


Other factors

  • Cost: Considerations include price, performance, tiers, and data transfer rates
  • High availability requirements & network connectivity: While related to latency, this consideration goes further when moving data to the cloud; internal networking equipment may need to be updated to ensure reliable connections, without which you cannot consistently access data and services
  • Alignment of your data strategy to your overarching business strategy
  • Data integration: Data needs to be synced across your infrastructures. Managing this synchronisation can be challenging without an automated process. Products like IBM Cloud Paks enable this data integration
  • Unified management: Smooth operations require unified, centralised visibility and management. Platforms like Red Hat OpenShift can drive this optimisation and automation

Technical Considerations

Many storage vendors offer hybrid storage solutions that are proprietary – another form of lock-in.

Many enterprises prefer to consolidate their hybrid cloud storage without being locked into a single on-premise storage manufacturer. We can show you how to avoid a closed path whilst retaining consistent management of your data across storage vendors and across public cloud providers.

How TES can Help You Pursue the Right Strategy

The Enterprise Storage Technical Specialists at TES can guide you to the ideal computing environment, whether on-premise or hybrid cloud. Using a unique blend of assessment processes and analysis tools, they can show you how to select the ideal strategy for your operating environment – often at no charge to you.

See if you qualify for a free personalised storage assessment with one of our Enterprise Cloud Storage Technical Specialists. Request the storage assessment here.

5+1 Considerations for Selecting the Ideal Platform for Your SAP HANA Environment

SAP HANA is one of the first data management platforms to handle both transactions and analytics in memory on a single data copy.

This changes the game when selecting the ideal platform for SAP HANA.

5+1 Considerations to Select the Ideal Platform for Your SAP HANA Environment

| 7 minute read

SAP HANA is one of the first data management platforms to handle both transactions and analytics in memory on a single data copy. It converges a database with advanced analytical processing, application development capabilities, data integration and data quality.

The journey to SAP HANA can be a major transition. How you choose to implement S/4 HANA, and the underlying platform you choose, may impact your organisation for many years to come. Our view is that a mission-critical application and database like SAP HANA demands the same care for the underlying infrastructure platform as for the software itself.

Database Size (and projected growth)

According to SAP, memory is the leading driver for SAP HANA sizing. This seems obvious, since SAP HANA is an in-memory database.

Sizing your memory is therefore critical to success. Size incorrectly and HANA could underperform. With the natural growth of structured data at 30% per year, accurately estimating your memory needs means success now while avoiding the unnecessary cost of replacing an undersized platform later. It is essential to plan for the future while optimising total costs. The larger the database, the better suited an on-premise platform becomes.

System sizing is now supported by various sizing calculator tools from SAP and IBM, which help you build the ideal configuration according to the size of your database, planned use cases, and projected data growth.
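
As a rough illustration of why projected growth matters, the sketch below compounds a data footprint at the 30% annual growth rate cited above and applies a commonly quoted rule of thumb that HANA memory should be roughly twice the compressed data footprint. This is a simplification for illustration only – real sizing should use SAP’s and IBM’s calculator tools:

```python
# Illustrative sizing sketch. The 2x RAM factor is a commonly quoted rule of
# thumb (data plus working memory), not an official sizing result.

def projected_memory_tb(data_tb: float, years: int,
                        growth: float = 0.30, ram_factor: float = 2.0) -> float:
    """Estimated RAM (TB) after `years` of compound data growth."""
    return data_tb * (1 + growth) ** years * ram_factor

# A hypothetical 4 TB compressed footprint today:
for year in range(4):
    print(f"Year {year}: ~{projected_memory_tb(4.0, year):.1f} TB RAM")
```

Even over a three-year horizon, compound growth more than doubles the memory requirement – which is why sizing only for today’s footprint risks an undersized platform.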

Cost (and Risk) of Downtime

SAP Applications typically run the mission-critical processes inside an organisation. In this scenario, if SAP goes down, portions of the company may stop too. As systems (and data) continue to become integrated and connected inside the organisation, the importance of uptime increases dramatically.

The Enterprise Specialists at TES hear from customers that unplanned downtime costs are increasing. We attribute this to higher lost revenues from digital operations. For many enterprises, the cost of unplanned downtime runs into millions an hour in lost revenue and remediation.

Poorly chosen or configured infrastructure can cause unplanned downtime several times a year (click here for a Forrester report on the potential cost of downtime), meaning that a ‘cheap’ upfront investment could lead to costly 10x challenges down the road.
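
A back-of-envelope way to see how these costs compound (the figures below are hypothetical, not taken from the Forrester report):

```python
# Annual downtime cost = incidents per year x hours per incident x cost per hour.
# All inputs here are illustrative assumptions.

def annual_downtime_cost(incidents: int, hours_each: float,
                         cost_per_hour: float) -> float:
    return incidents * hours_each * cost_per_hour

# e.g. three outages a year, four hours each, at $1m/hour in lost revenue:
print(f"${annual_downtime_cost(3, 4, 1_000_000):,.0f} per year")
```

Set against a figure like that, the premium for resilient infrastructure is usually small.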

Sensitivity of Data

With pervasive encryption now common for data at rest and in transit, cybercriminals have shifted their attacks to target data in use (in memory). Data in use has been slower to adopt pervasive encryption and thus remains accessible.

If your SAP instance contains significant volumes of personally identifiable information (“PII”), financial or health data, then platforms with integrated confidential computing capabilities are best suited for you.

Infrastructure Consolidation

One often overlooked consideration when selecting the ideal platform for your SAP HANA environment is the state of your current infrastructure. In some cases, it can be economically feasible to invest in a modern platform that is not only the ideal choice for your SAP HANA environment but can also enable you to consolidate existing workloads onto the same platform.

When this occurs, the total cost of ownership (TCO) savings receive a multiplying effect. This type of consolidation can lead to significant software licence savings (in some instances, 33%), reduced administration time and resources, and reduced operating costs.

Analytics and AI Workloads

AI is a very resource-intensive workload, and your SAP HANA environment should provide significant data for AI to ‘consume’. Selecting a platform purpose-built for AI (instead of general purpose) strengthens your chances of AI strategy and execution success, which in turn increases the value you receive from your SAP HANA environment.

No Longer a Consideration: OPEX Spend

The need to shift IT spend from CAPEX to OPEX was a major consideration five years ago, as many organisations sought to change their IT consumption model by leveraging cloud. The perceived core benefit of this shift is an efficient IT environment in which you only pay for what you use, whilst retaining the capacity to expand quickly.

With advances in on-premise pricing models, organisations can now enjoy the same pay-as-you-go pricing popularised by cloud vendors in their on-premise environment.

Many Enterprises Depend on IBM Power for Enterprise SAP HANA Environments

IBM Power is designed for the demanding, high-volume, mission-critical transactional and analytical workloads produced by an SAP S/4 HANA environment.

IBM Power is purpose-built for Big Data and AI workloads, and is industry-leading in avoiding disruptions and disasters. (Read how one customer recorded 18 straight months of uptime here).

IBM Power Systems is certified by SAP to run both non-SAP and SAP on the same server (i.e. Oracle database and SAP production applications on the same physical server).

IBM Power Systems is certified by SAP to run both HANA production and HANA non-production on the same physical server.

Power Systems can run the largest SAP HANA virtual machines with almost zero overhead. SAP has certified 24TB in scale-up configuration for both OLTP (S/4H) and OLAP (BWH) environments. This scalability also allows customers to run large SAP-certified, scale-out SAP HANA configurations. Power Systems also delivers 2X faster core performance compared to x86 platforms. The higher throughput helps reduce the number of cores needed, further reducing cost.
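
To see how per-core throughput translates into core counts, here is a hypothetical illustration. The workload units and per-core figures below are invented for the example, not benchmark results:

```python
import math

# If a platform delivers roughly 2x the per-core throughput, the number of
# cores (and any per-core software licensing) needed for the same workload
# roughly halves. All figures are illustrative assumptions.

def cores_needed(workload_units: float, units_per_core: float) -> int:
    return math.ceil(workload_units / units_per_core)

baseline_cores = cores_needed(1200, 10)   # baseline per-core throughput
doubled_cores = cores_needed(1200, 20)    # ~2x per-core throughput
print(baseline_cores, doubled_cores)      # prints 120 60
```

The saving compounds when software is licensed per core, which is one reason core performance matters beyond raw speed.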

Additionally, IBM Power Systems offers a predictive failure alert capability. Using heuristics running in the background, IBM Power can pre-emptively warn DBAs when a failure is likely to occur.

The Easier Way to Assess the Ideal Sizing for You

With so much choice, the wrong decision can prove costly. The Enterprise Specialists at TES can provide a no-charge assessment of the ideal platform for you. Contact a specialist today and start your SAP HANA deployment with the right step forward.

1 Unexpected Benefit of an Enterprise Storage Health Assessment

Such practices have been applied by IT leaders as well to avoid hardware failure. This is the expected benefit of an IT health assessment. But is there more to an Enterprise Storage health assessment beyond avoiding failure? In short: yes.

1 Unexpected Benefit of an Enterprise Storage Health Assessment

| 4 minute read


Regular health checks have been used for decades in personal healthcare as a preventive way to catch health concerns early and avoid premature ageing, deterioration and, in some cases, death.

Such practices have been applied by IT leaders as well to avoid hardware failure. This is the expected benefit of an IT health assessment. But is there more to an Enterprise Storage health assessment beyond avoiding failure? In short: yes.

Why your Enterprise Storage Needs Regular Health Checks

Just like your own health, the health and performance of your enterprise storage environment may change over time. While ageing equipment is an obvious point of failure, other factors such as workload changes and evolving use cases and requirements can leave seemingly ‘healthy’ equipment performing sub-optimally.

Raising the stakes is AI/machine learning. An optimally performing Information Architecture that includes a healthy enterprise storage environment can be the difference between success and failure with AI. In many sectors, failing with AI-based initiatives can put your organisation in ‘follower’ status for years to come.

Whether your data storage is on-premise or in the cloud, it’s critical that you keep your storage infrastructure in good health – after all, data may just be your organisation’s most valuable resource. When undertaking an enterprise storage health check, you should be able to get answers to these questions:

  • “How healthy is my storage?”
  • “Can it scale to handle an influx of data?”
  • “Does my enterprise storage environment strengthen my cybersecurity resilience?”
  • “Is the cost structure still optimal? Should updates be made to reduce cost and maintain performance?”
  • “Does my enterprise storage environment protect my data from breaches and cyber attacks?”



Sub-Optimal Performance Uncovered in 2020

Many Enterprises moved to a cloud-first strategy over the last decade.

This initial transformation period has brought many benefits and advantages to organisations. However, many organisations are discovering that cloud, like many past IT paradigm shifts, is not a perfect ‘silver bullet’ solution, and 12% are moving a portion of their workloads back on-premise.

For these early adopters, Cloud 2.0 has begun with a Hybrid Cloud approach that takes a best-of-breed approach to IT infrastructure, computing, storage and data to create the optimal IT strategy and environment.


7 Causes of Sub-Optimal Performance Uncovered by Health Checks

Some of the areas that can cause sub-optimal performance in your enterprise storage environment include:

Use cases: Data needs are changing. For example, some organisations are discovering that storing data locally improves AI performance.

Workload changes: How prepared is your storage infrastructure to handle a flood of data? If you don’t know the answer, you’re putting your organisation at risk.

Data Security: Has your enterprise storage environment prioritised data security? This is becoming more important as data breaches increase and ransomware attacks become more expensive.

Ageing hardware: Eventually, all assets need to be retired. Infrastructure is no different.

Inefficient computing resources: Too few cores and memory constraints could be signs of future failure.

Misaligned storage media: Each type of storage media performs differently and has different failure rates. Health checks can ensure you have the right balance.

Networking configuration issues: An optimally performing environment is highly dependent on a healthy, well-functioning network.


Let the Enterprise Storage Specialists Guide You Forward

Over the last six months, many of our clients have taken a renewed look at their Information Architecture due to the rising IT costs caused by COVID. Many are retaining the strengths of the cloud while reducing the cost of their IT operations.

That’s where we can help guide you forward.

One such tool is the Client Storage Assessment. This free-to-you engagement helps you understand the operating performance of your enterprise storage environment, now and into the short-term future. The output provides you with a roadmap to optimal performance, whether you operate in an OPEX or CAPEX environment. Request your free engagement here.

Zero Trust Framework: Mitigate the Growing Insider Threat

The Zero Trust Framework is a way to mitigate the growing data security threat associated with internal sources – whether innocent or malicious.

Zero Trust Framework – Mitigating Your Growing Insider Threat with Data Security

by Paul Knight | 5 min Read

Zero Trust Computing flips the script on insider threats

The 2020 IBM Data Breach report highlights the existential risk posed by insider threats for enterprises handling sensitive data. Zero Trust counters insider threats by rethinking the data security model to secure all data and application assets in every state, at all times. This post examines the insider threat, how Zero Trust flips the script, and its growing importance to organisations entrusted with sensitive data.

Trust no-one… except the cybercriminal

IBM reports that the average enterprise cost of a cyber breach, at $3.86m, is slightly lower than in 2019, thanks to investment in automation and perimeter protection. Yet the impact of internal breaches is growing: data breaches from stolen or compromised credentials cost businesses almost a million dollars more than the $3.86m average. Perimeter protection and automation are effective because cybercriminals can largely be trusted to behave in particular ways. It’s insider risks that present the greater danger, with negligence, credential theft and cloud misconfigurations creating the headline-grabbing losses.

Any organisation can have an Edward Snowden

Cybersecurity tooling and training can’t avert the threat from a mistake or an external influence – from a recent attempt to bribe a Tesla employee into sharing confidential data to the case of Edward Snowden. Snowden demonstrated the power of the insider threat by walking out of his job at a US NSA defence contractor carrying four laptops and the ability to make millions of secret records public. He could do this because, as in many organisations, there was only operational protection against anyone working, as Snowden did, within the NSA perimeter. He bypassed the external technical elements of protection and detection, so the breach was only discovered when he went public.

Conspiracy or carelessness, the damage is the same

It doesn’t take an Edward Snowden-style conspiracy, just a moment of inattention such as succumbing to a spear-phishing attack, leaving a laptop unlocked or misconfiguring a cloud interface. The IBM report found that 63% of insider threats are caused by negligence. We trust our employees and partners to do the right thing, but we can’t legislate for individual mistakes.

Size (and scope) matters

IBM found that breaches of more than 50 million records cost $392m on average, 100 times the mean. Data sensitivity is a key factor in breach costs, with larger enterprises handling sensitive financial transactions, healthcare information, PII and digital IP particularly exposed. Another IBM report shows that the average insider breach recovery cost to a smaller company is $7m, a little more than half what it costs for larger companies in the finance, services or IT sector.

The Covid effect

Covid-19 has created its own security challenges, with one report citing a 400% increase in complaints to the FBI’s cyber division. 70% of organisations report that remote working increases the cost of a data breach; 76% say it increases the time to identify and contain a breach.

Zero Trust and Confidential Computing

Traditional environments rely on operational assurance that employees, partners and applications will not access data without specific authorisation and need. Growing insider breach costs and post-Covid challenges indicate this model is no longer adequate for larger enterprises handling sensitive data.

Zero trust changes the insider threat landscape by flipping traditional security models. It replaces operational assurance with technical assurance that actors cannot physically access data and applications – at rest, in transit or in use – without a specific, justified need. A confidential computing environment delivers zero trust by providing assurance of data integrity, data confidentiality and code integrity, giving increased security guarantees for the execution of code and the protection of data. Unauthorised entities encompass other applications on the host, the host operating system and hypervisor, system administrators, service providers, the infrastructure owner and anyone with physical access to the hardware.

IBM z/series users have access to a confidential computing environment through IBM Hyper Protect Services, a flexible, Linux-based platform offering seamless hybrid cloud capability. Implementing a zero trust framework has the potential to reduce your current IT spend while enhancing protection and security, reducing your exposure to the potential cost of a breach.

What next?

Larger enterprises handling sensitive financial transactions, healthcare information, PII and digital IP face existential risks from internal data breaches. Zero trust secures sensitive data with a model where internal threat protection is built in. For some organisations the level of exposure does not warrant change; for others, it may be anything from a simple evolution on their natural upgrade path to a badly needed wholesale change. If you believe your level of exposure warrants a closer look at zero trust, and you’d like to see where you are on the evolutionary path, book a free assessment with us today.