SAP systems: why memory costs do not rise linearly

26-07-2024 | 3 min read

To start with, let’s share some historical values that seasoned IT professionals may remember.

  • Back in R/2, on the mainframe, before the Virtual Storage Access Method (VSAM), the hard limit on database size was 2 GB. Yes, 2 GB, not 2 TB, and yet major corporations ran their SAP systems on it…
  • At that time, 20 GB of disk storage occupied 35 cubic feet (about 1 cubic meter) and was expensive: around $200,000 for 20 GB in 1990.
  • With R/3, 20 years ago, running a 500 GB system was an achievement, even for automotive manufacturers.
  • 15 years ago, running a 1 TB SAP system was expected, but you were approaching database limits if you were running on Microsoft SQL Server.
  • Today, running a 2 TB system is no longer considered an achievement, and many corporations run SAP systems on HANA databases larger than 10 TB (even though HANA is a compressed, column-based database).

In the 1960s, the main challenge was producing, not selling. Manufacturers would produce, load trucks, and send them off to resellers; the truck was effectively the unit of measure. Today, a retail company traces every product individually. That is far more data volume for the same products being sold. And we can anticipate an array of changes coming up:

  • Some corporations want to trace every product sold back to the customer who purchased it: who bought the ice cream, when, at what price, and so on. Unilever demonstrated this with SAP CDP (Customer Data Platform): https://www.youtube.com/watch?v=RzzENGY4TZg
  • Similarly, some want to track the exact time and location at which each pill or drug was taken, with the patient identified through their smartphone or Apple Watch.
  • Likewise, tobacco manufacturers track individual cigarette packs so that governments can trace illegal cigarette traffic.

As you can see, from counting trucks to tracking individual items, and from tracking products to tracking consumption, it all boils down to a significant increase in data volume in ERP systems.

Let’s imagine for a minute that database growth is stable, say 1 TB a year, and that part of the payment is made annually based on volume, as is the case for the SAP HANA database and S/4HANA sizing.

  • In year 1, you’ll reach 1 TB
  • In year 2, you’ll reach 2 TB, so a total of 3 TB over 2 years
  • In year 3, you’ll reach 3 TB, so a total of 6 TB over 3 years

*See annex for the full calculations

Therefore, the formula goes like this: in year n, you reach n TB, so over n years you will have reached a total of n(n+1)/2 TB.

If we assume that most ERP systems are typically around 15 years old, then in year 15, you’ll reach 15 TB, which accounts for a total of 120 TB over 15 years.

This means that, over a 15-year period, you’ll end up needing an annual average of 120/15 = 8 TB.
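
For readers who like to check the arithmetic, here is a minimal Python sketch of the simplified model above (assuming a constant 1 TB of growth per year and an annual charge based on the volume held that year; the variable names are illustrative only):

```python
# Minimal sketch of the simplified growth model described above:
# the database grows by a constant 1 TB per year, and each year
# you pay for the volume you hold that year.

GROWTH_TB_PER_YEAR = 1  # assumption: stable growth of 1 TB/year
YEARS = 15              # assumption: a typical 15-year-old ERP system

cumulative_tb = 0
for year in range(1, YEARS + 1):
    size_tb = year * GROWTH_TB_PER_YEAR   # size reached in this year
    cumulative_tb += size_tb              # volume paid for so far

print(f"Size in year {YEARS}: {YEARS * GROWTH_TB_PER_YEAR} TB")
print(f"Total volume paid for over {YEARS} years: {cumulative_tb} TB")  # 120 TB
print(f"Closed form n(n+1)/2: {YEARS * (YEARS + 1) // 2} TB")           # 120 TB
print(f"Average per year: {cumulative_tb / YEARS} TB")                  # 8.0 TB
```

Running it prints 15 TB for year 15, a 120 TB total and an 8 TB annual average, matching the figures above.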

Fortunately, SAP archiving with TJC Group software and consulting provides a great ROI opportunity: ensure ongoing volume and memory usage reduction with Automated SAP Data Archiving. You can find out more in this blog post: 6 reasons to embrace regular data archiving in SAP systems

For those who enjoy maths… let’s prove the formula above:

We use the pairing method. Consider the sum:

S = 1 + 2 + 3 + … + n

We can also write the sum in reverse order:

S = n + (n − 1) + (n − 2) + … + 1

Now add the two equations together, term by term, noticing that each of the n pairs sums to n + 1:

2S = (n + 1) + (n + 1) + (n + 1) + … + (n + 1)

So

2S = n(n + 1)

and we reach our result:

S = n(n + 1)/2
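
If you would rather check the identity numerically than algebraically, here is a quick brute-force comparison in Python; it is only a sanity check for small values of n, not a proof:

```python
# Brute-force check of the identity 1 + 2 + ... + n = n(n+1)/2
# for n = 1..100. A sanity check only, not a proof.
for n in range(1, 101):
    brute_force = sum(range(1, n + 1))   # add the integers one by one
    closed_form = n * (n + 1) // 2       # closed-form expression
    assert brute_force == closed_form, f"Mismatch at n={n}"
print("Identity holds for n = 1..100")
```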

You may find other ways to get to the same result, for example a step-by-step proof by mathematical induction. Have fun!