
Yet another ABAP JSON parser – and some other stuff

We have had native support for processing JSON in ABAP for some time now (refer to this blog post for an introduction). The problem is that it does not cover every use case without some effort, so there is still scope for writing a custom JSON parser in ABAP. I recently wrote one again.

I have written a JSON parser in ABAP before (prior to the native support in ABAP), but it was not a very good or reliable parser. It did work for what we were doing at the time, but I had always wanted to write a better one. If you google “ABAP JSON”, you will find no shortage of JSON parsers in ABAP, but it is more fun to do it yourself and, as it turns out, it’s not that hard.

The main problem I was trying to overcome with the built-in ABAP parsing functionality this time is that the mapping of JSON to ABAP data is case-sensitive: internally, in memory, ABAP field names are always upper case, while the JSON you get from applications out there is mostly lower or mixed case.
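To make that concrete, consider a trivial payload (names illustrative):

DATA: BEGIN OF ls_user,
        username TYPE string, " the field name is held internally as USERNAME
      END OF ls_user.

DATA: lv_json TYPE string.
lv_json = '{ "userName": "bob" }'.

" A mapper that compares names exactly will compare "userName" against
" USERNAME, find no match, and leave the field initial. A case-insensitive
" mapper fills LS_USER-USERNAME as you would expect.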

The other problem I had is that, while some data is predictable, some is not. Consider the following extract from an Elasticsearch (now just called Elastic, apparently) search response:
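Something along these lines (values abbreviated and illustrative; what matters is the shape):

{
  "took": 5,
  "timed_out": false,
  "hits": {
    "total": 1,
    "max_score": 1.0,
    "hits": [
      {
        "_index": "documents",
        "_type": "document",
        "_id": "1",
        "_score": 1.0,
        "_source": {
          "title": "Some free-form document",
          "anything": "could appear here"
        }
      }
    ]
  }
}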

While the response has a uniform and predictable structure, the actual source documents (contained in the “_source” element) do not, as they are free-form. So if you were to write an Elasticsearch client in ABAP, you would have the problem that you cannot parse the whole document, because you would not know how to handle the free-form elements, other than perhaps creating the data dynamically on the fly, which then becomes a little difficult to deal with.

What I did with the mapper part of my solution is to create an in-memory representation of the JSON data. That lets you define your receiving structure in ABAP in such a way that any unknown elements are declared as (suitable) classes (a reference to a generic object would do); the mapper then simply stores a reference to that part of the document, which you can handle later with a separate structure (supposing, for example, you were the consumer of the Elasticsearch client).

The intermediate representation of the document might consume more memory, but it also means you could go on to write a path-querying mechanism to pull data out of the JSON dynamically as required, with graceful degradation. The structure to handle the Elasticsearch response could then look as follows:
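For instance, something like this, assuming json_value is the library’s generic value class and that the mapper matches JSON names such as “_source” to component names case-insensitively:

TYPES: BEGIN OF ty_hit,
         _index  TYPE string,
         _type   TYPE string,
         _id     TYPE string,
         _score  TYPE f,
         _source TYPE REF TO json_value, " free-form part: keep a generic reference
       END OF ty_hit,
       ty_hit_tab TYPE STANDARD TABLE OF ty_hit WITH DEFAULT KEY,
       BEGIN OF ty_hits,
         total     TYPE i,
         max_score TYPE f,
         hits      TYPE ty_hit_tab,
       END OF ty_hits,
       BEGIN OF ty_response,
         took      TYPE i,
         timed_out TYPE abap_bool,
         hits      TYPE ty_hits,
       END OF ty_response.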

You will see that where it expects the sub-part containing the “_source” element, it references an object called “json_value”, which is the class representing a generic JSON value in the library. Using a piece of code like the following, the library then lets you map that generic value onto a structure defined to represent the source payload; that code could presumably sit outside the Elasticsearch client, or in a helper method that deals with the dynamic piece of data.
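In outline, it might look like this (json_mapper=>map is my placeholder for the library’s mapping call, and ty_document is a hypothetical structure for a known document shape; check the Gist for the actual API):

TYPES: BEGIN OF ty_document, " your own structure for the known payload shape
         title TYPE string,
       END OF ty_document.

DATA: ls_hit TYPE ty_hit,
      ls_doc TYPE ty_document.

" ls_hit-_source holds the generic json_value reference the mapper stored;
" once you know what shape to expect, map it onto the typed structure.
json_mapper=>map( EXPORTING json = ls_hit-_source
                  CHANGING  data = ls_doc ).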

So what am I planning to do with this parser/mapper? Well, an Elasticsearch (I must stop calling it that; it’s “Elastic”) client would be one such application.

What I am really keen to do, however, is write a client for GitHub Gists; and on top of that, I would like to build a tool (not unlike SAPLink, but with a different approach) that allows you to store source objects as Gists on GitHub and share them with people, who can in turn, using your Gist URL, import them into their own SAP systems.

You can find the source of the JSON parser/mapper here: https://gist.github.com/mydoghasworms/4888a832e28491c3fe47.

It does not include a JSON generator/emitter. That is fairly trivial to do, though I guess it would be nice to complete the picture. Well, maybe another time.

Here is a quick indication of how to use it:
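Something along these lines, using the types sketched above (again, json_parser and json_mapper stand in for whatever the Gist actually names its classes and methods):

DATA: lv_json  TYPE string,
      lr_value TYPE REF TO json_value,
      ls_resp  TYPE ty_response.

lv_json = '{ "took": 5, "timed_out": false, "hits": { "total": 0, "hits": [ ] } }'.

" Parse the JSON string into the generic in-memory representation
lr_value = json_parser=>parse( lv_json ).

" Map the generic representation onto the typed structure defined above;
" name matching is case-insensitive, which was the point of the exercise
json_mapper=>map( EXPORTING json = lr_value
                  CHANGING  data = ls_resp ).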


 
