Harald Reiter did the honors and interviewed Udo Patzelt of AOK Systems for us. Thanks Harald! It’s a great story about how a healthcare IT provider massively accelerated analytical workloads in an SAP data warehouse environment. Here is the transcript courtesy of Michael Koch:

HR: Hello, this is Harald Reiter for JD-OD. Today it’s my honour to interview Udo Patzelt. First of all, thank you, Udo, for taking the time to talk to us today. So, to start: who do you work for and what is your role?

UP: Hello, my name is Udo Patzelt and I work for AOK Systems, where I am responsible for Product Management and Architecture of our solution called OSCARE, an industry solution for statutory health insurers in Germany. AOK Systems is a separate IT service provider which originates from the AOK, Germany’s largest statutory health insurer. In Germany, AOK has about 24 million policyholders and holds a market share of about 35%. Over the course of the last 10 years, AOK Systems and SAP have collaborated on this industry solution for healthcare. Today, OSCARE is used by AOK Systems and other statutory health insurers in Germany.

HR: Super! We’re sitting here to talk about SAP HANA, so the first question is: Why HANA?

UP: Well, maybe I should start by explaining how I got to know HANA. The first time I became aware of HANA was at last year’s annual DSAG congress in Germany. Up to that point there had only been speculation about SAP’s plans for in-memory, which I had more or less filed under "ideas". During the congress, however, I found out that SAP actually had more serious plans for HANA, and I therefore decided to dig deeper into the matter. The simple reason was that I could think of many HANA use cases for OSCARE, which is easy to imagine if you take into account that we’re holding data for 24 million policyholders. I could think of many problems HANA would be able to help us with. We then dived deeper into the subject and came up with questions such as "What happens when we have a power cut?" and so on. We always received decent answers and then decided to start working on a roadmap, taking into account the perspectives HANA offered us from the outset and trying to describe what HANA could mean for AOK. So we worked towards a clear goal, i.e. "what is the perspective we see for HANA?". We then presented the roadmap and our findings to the various AOK boards and met with a positive response, as HANA offers solutions for many of our problems. The next question was: all of this sounds great in theory, but what about the real world?

HR: So you took the initiative, rather than SAP approaching you? You had the vision of what HANA can do for you and you then suggested it internally?

UP: Yes, however SAP of course also became active and supported us very well during the exchange of information. We then initiated an evaluation project, because we wanted to see for ourselves how well HANA would work in a real-world environment.

HR: And when did you start this?

UP: We installed the HANA appliance in June 2011.

HR: Still as part of the ramp-up?

UP: Yes, still as part of the ramp-up, and with version SP2. Our aim was not to get too caught up in abstract discussions of HANA’s performance and strengths. Instead, prior to the installation we identified specific, real-world, productive applications which are already in use today without HANA, with a defined dataset and clear response times. We then wanted to see what the same applications would look like running on HANA.

HR: Excellent. So when the boxes arrived, did you get any help from SAP, in terms of the data model, for example, or the actual technical realisation?

UP: Of course SAP supported us with this. We had one consultant on board, sometimes more. This is necessary, as it is a new technology after all. Our main aim was not to write some sort of new HANA application, but rather to transfer existing application use cases onto HANA. From this perspective, our work was less creative and more of a constructive nature. We copied the data sets over and loaded them onto the box. In the next step we mimicked the data selections and then compared the results.
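
A minimal sketch of what such a comparison might look like, purely for illustration: the same selection is executed against the existing database and against the HANA appliance, and elapsed times and result sets are compared. The connections and the query itself are placeholders, not anything from the actual AOK project.

```python
import time

def run_query(conn, sql):
    """Execute a query and return (elapsed_seconds, result_rows)."""
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(sql)
    rows = cur.fetchall()
    return time.perf_counter() - start, rows

# legacy_conn and hana_conn would be DB-API connections to the existing
# database and to the HANA appliance (connection setup not shown here).
# sql = "SELECT ..."  # one of the existing, productive selections
#
# t_legacy, rows_legacy = run_query(legacy_conn, sql)
# t_hana, rows_hana = run_query(hana_conn, sql)
# assert sorted(rows_legacy) == sorted(rows_hana)  # same results expected
# print(f"Acceleration factor: {t_legacy / t_hana:.1f}x")
```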

HR: Could you describe which use cases you transferred over to HANA?

UP: It varied: partly complex SELECTs, as we have them at AOK, such as overviews of transport costs on the admission day of new hospital patients. Basically real-world selections which can be demanding and usually have long run times. But also data mining and analytics for procurement scenarios. For example, there is one case for diabetic diseases where all data is accumulated and later sliced and diced in different ways.
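
To make that first kind of selection a little more concrete, here is a purely hypothetical sketch. The table and column names are invented and do not reflect the OSCARE data model, and the use of SAP’s hdbcli Python client for the connection is an assumption.

```python
# Hypothetical example of a demanding, real-world selection: total transport
# costs incurred on the admission day of newly admitted hospital patients.
# Table and column names are invented; they are not the OSCARE data model.
from hdbcli import dbapi  # SAP's Python DB-API client for HANA (an assumption here)

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="...")
cur = conn.cursor()

cur.execute("""
    SELECT a.ADMISSION_DATE,
           COUNT(DISTINCT a.PATIENT_ID) AS PATIENTS,
           SUM(t.COST)                  AS TRANSPORT_COSTS
      FROM HOSPITAL_ADMISSIONS a
      JOIN TRANSPORT_CASES t
        ON t.PATIENT_ID     = a.PATIENT_ID
       AND t.TRANSPORT_DATE = a.ADMISSION_DATE
     GROUP BY a.ADMISSION_DATE
     ORDER BY a.ADMISSION_DATE
""")
for row in cur.fetchall():
    print(row)
```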

HR: Does the data come from non-SAP systems, or SAP systems or a mix?

UP: It’s a mix. Nowadays the majority of the data is held in the SAP system, but there is also relevant data for these specific cases which is held in non-SAP systems. The result sets are then all amalgamated in HANA, similar to how they are stored in databases today, where you deal with BW extracts, for example, and then analysed using SQL and other analytics tools.

HR: What kind of data volumes are we talking about?

UP: We chose a medium-sized AOK, simply because the HANA appliance we used had limitations and did not allow for unlimited memory. The state of the art back in June was a model with 1 TB of main memory, a large HANA, so to speak. We loaded hundreds of GBs of data from various datamarts and datamart applications.

HR: What surprised you the most?

UP: The biggest surprise was the results, which scared us a little.

HR: I hope they were correct?

UP: Well, had they been bad then we would have had a different problem altogether! But it was an experience I’ve never had before. Usually, when you do some performance tuning, you have to face the fact that the results are never as good as you expected them to be. With HANA, the exact opposite was the case. The results were quite astonishing. Someone who knows and understands IT simply cannot believe the results.

HR: Surprised in a positive way?

UP: Yes. For example, in our application scenarios we never had a test with an acceleration factor of under 25; 25 was usually the smallest factor. On average, we had acceleration factors of 30-40. Our colleagues then pointed out that this was achieved without any tuning. As of today, some applications have acceleration factors of 200. The data mining application normally runs for about 150 hours; with HANA it took 20 minutes. Unbelievable results. There were some things we couldn’t test, as the HANA functionality wasn’t there yet, but there will obviously be further SPs. However, we are 100% convinced that HANA works and provides remarkable results.
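
For a rough sense of scale, the figures quoted for that data mining run work out to roughly a 450-fold acceleration; this is just arithmetic on the numbers above, not an additional measurement.

```python
# Back-of-the-envelope check on the data mining figures quoted above.
baseline_minutes = 150 * 60   # ~150 hours on the conventional setup
hana_minutes = 20             # ~20 minutes on HANA

print(f"Acceleration factor: {baseline_minutes / hana_minutes:.0f}x")  # -> 450x
```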

HR: So what does HANA’s future at AOK look like?

UP: I already mentioned the roadmap. Our perspective is that the entire operational data processing will be done using HANA. Our ambition is to create the OLTP/OLAP convergence. At the moment we have three core SAP systems at every AOK: an ERP system, a CRM system and a custom system for other developments. In the future, these will share a common HANA data basis, which will also be used for analytics. This means no more distributed datasets, no more replication, and analytics based on real-time data. Quite unbelievable, really.
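
The following sketch is a hypothetical illustration of what that OLTP/OLAP convergence means in practice: the operational write and the analytical read hit the same in-memory table, with no replication or extract in between. Table and column names are invented, and the hdbcli client is an assumption.

```python
# Illustrative sketch of OLTP/OLAP convergence on a shared in-memory data basis.
# Table and column names are hypothetical.
from hdbcli import dbapi  # SAP's Python DB-API client for HANA (an assumption here)

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="...")
cur = conn.cursor()

# OLTP side: the operational system records a new claim...
cur.execute(
    "INSERT INTO CLAIMS (MEMBER_ID, CLAIM_DATE, AMOUNT) VALUES (?, ?, ?)",
    (4711, "2011-06-15", 128.50),
)
conn.commit()

# OLAP side: ...and analytics aggregates directly over the same table.
cur.execute(
    "SELECT CLAIM_DATE, SUM(AMOUNT) FROM CLAIMS GROUP BY CLAIM_DATE ORDER BY CLAIM_DATE"
)
for claim_date, total in cur.fetchall():
    print(claim_date, total)
```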

HR: What were your challenges during the technical realisation? I can imagine that a large amount of complexity is involved here, different to your standard SAP and ABAP development. It involves new skills.

UP: First of all, this was an evaluation project. We wanted to find out whether HANA really lives up to its promises. The conversion itself wasn’t overly difficult. The main challenges were the new toolsets, such as the HANA Studio and the analytical and calculation views. New terms and new skills, basically. The biggest challenges are ahead of us. SP3 has arrived, and this is the starting point of the next step of our project. At AOK we have a short-term and a long-term goal. Due to the advantages of the short-term goal, we have now decided to license HANA. Our short-term goal is to use BW on HANA: we will remove all relational databases, which should already give us some in-memory performance advantages. This should be possible without touching our BI applications, which represent an investment of about 60 man-years. It would be a big problem if we had to redesign these. However, our expertise tells us that this is not necessary and that we will simply swap databases and load our remaining non-SAP datasets onto HANA. We will then arrive at a consolidated data warehouse, which enables us to pick a specific application area to redesign in the future. This means there won’t be an investment barrier; we have a clean platform.
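
As a final illustration of the "load our remaining non-SAP datasets onto HANA" step, here is a minimal, hypothetical sketch of bulk-loading a flat file into a HANA staging table. The file, table and column names are invented for illustration, and the hdbcli client is again an assumption, not AOK’s actual tooling.

```python
# Minimal sketch of loading a non-SAP flat file into a HANA staging table.
# File, table and column names are invented for illustration.
import csv
from hdbcli import dbapi  # SAP's Python DB-API client for HANA (an assumption here)

conn = dbapi.connect(address="hana-host", port=30015, user="DEMO", password="...")
cur = conn.cursor()

cur.execute("""
    CREATE COLUMN TABLE STAGING_EXTERNAL_CASES (
        CASE_ID   INTEGER,
        MEMBER_ID INTEGER,
        CASE_DATE DATE,
        COST      DECIMAL(12, 2)
    )
""")

with open("external_cases.csv", newline="") as f:
    rows = [(int(r["case_id"]), int(r["member_id"]), r["case_date"], r["cost"])
            for r in csv.DictReader(f)]

# executemany inserts the rows in batches, which is usually sufficient for staging loads
cur.executemany("INSERT INTO STAGING_EXTERNAL_CASES VALUES (?, ?, ?, ?)", rows)
conn.commit()
```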

HR: And finally: in terms of HANA, if you were granted one wish by SAP, what would it be?

UP: My wish would be for the roadmap to include more definition with regard to OLTP/OLAP convergence, and for it to specify the exact dates and interim steps.

HR: Let’s hope that SAP watches this and listens. I know they do, of course. All the best for the future and your work with HANA.

UP: Thank You.