ABAP Dynamic Report Extraction and Cross-Database CRUD with ADBC: A Practical Guide to SALV, RTTS, and Batch Sync

This solution uses ABAP as the core to connect dynamic report extraction with external database CRUD, solving two common problems: poor reusability of SAP reports and the high rewrite cost of synchronizing with heterogeneous databases. Its core capabilities include SALV data interception, RTTS-based dynamic modeling, and ADBC batch writes. Keywords: ABAP, ADBC, RTTS.

The technical specification snapshot outlines the implementation profile

Primary language: ABAP
Core protocols/interfaces: Native SQL, ADBC, SALV runtime info
Code structure: Report + Include + FORM
Execution modes: background batch push, foreground manual selection
Core dependencies: cl_salv_bs_runtime_info, cl_abap_structdescr, cl_abap_tabledescr, cl_sql_connection
Configuration tables: Zdemoat1, Zdemoat2, Zdemoat3
Data operations: dynamic query, batch delete, batch insert

This is an ABAP middleware solution for heterogeneous data synchronization

At its core, the original implementation is not a conventional report. It is a lightweight data integration middleware layer. It reuses existing ALV reports without rewriting query logic, captures their output at runtime, and pushes the result directly into an external database.

It addresses two major pain points. First, existing SAP reporting logic is difficult to reuse. Second, external data persistence often depends on custom interfaces that are expensive to maintain. This implementation uses configuration tables to drive the entire flow dynamically, including extraction, trimming, deletion, and insertion.

The main program entry point only switches between foreground and background execution modes

START-OF-SELECTION.
  IF p_back = 'X'.
    PERFORM frm_getdata_push. " Background mode: automatically push in batch
  ELSEIF p_fore = 'X'.
    PERFORM frm_fore_select_config. " Foreground mode: manually select configuration
  ENDIF.

This code decouples the execution entry point from the business logic, allowing the same synchronization capability to serve different operational scenarios.
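For completeness, the entry switch implies a matching selection screen. A minimal sketch, assuming the parameter names from the snippet above (the radiobutton group name is illustrative):

```abap
* Hedged sketch: selection screen driving the mode switch above.
* p_back/p_fore come from the snippet; group name MOD is an assumption.
PARAMETERS:
  p_back RADIOBUTTON GROUP mod DEFAULT 'X', " background batch push
  p_fore RADIOBUTTON GROUP mod.             " foreground manual selection
```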

SALV runtime interception turns legacy reports into data sources

The most important technique in this solution is cl_salv_bs_runtime_info. The program first disables display and metadata output while keeping data return enabled. It then runs the target report with SUBMIT and reads the ALV data reference.

The value of this approach is that it maximizes reuse of existing reports. As long as a report ultimately outputs ALV, the program can bring it into a unified collection scope. This avoids reimplementing complex business selection logic.

cl_salv_bs_runtime_info=>set(
  EXPORTING
    display  = abap_false
    metadata = abap_false
    data     = abap_true ).

SUBMIT (lv_submit_prog) USING SELECTION-SET lv_submit_vari AND RETURN.

DATA lr_data TYPE REF TO data.
FIELD-SYMBOLS <lt_alv_data> TYPE STANDARD TABLE.

TRY.
    cl_salv_bs_runtime_info=>get_data_ref( IMPORTING r_data = lr_data ).
    ASSIGN lr_data->* TO <lt_alv_data>. " Get the ALV result set reference
  CATCH cx_salv_bs_sc_runtime_info.
    WRITE: / 'Warning: Program did not return ALV data'.
ENDTRY.

This code converts any submit-capable ALV report into a unified dynamic data input source.
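One detail worth noting: the runtime-info switch is session-wide, so it should be reset once the captured data has been copied; otherwise later ALV output in the same session stays suppressed. The class provides a method for exactly this:

```abap
* Restore normal ALV behaviour after the captured data has been read.
* clear_all( ) is part of cl_salv_bs_runtime_info's public API.
cl_salv_bs_runtime_info=>clear_all( ).
```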

RTTS-based dynamic reconstruction generates target table structures from configuration

After obtaining the ALV data, the code uses RTTS to inspect the row type components, then inserts ID, YEAR, and MONTH fields as needed, and removes unnecessary fields based on the mapping table. This step determines the final schema used for insertion.

Compared with hardcoded target structures, this pattern is better suited to scenarios where multiple reports, variants, and target tables coexist. It is especially effective when field mappings must vary by program and variant. In that case, a configuration-driven approach is more stable than hardcoding.

DATA: lo_table_descr  TYPE REF TO cl_abap_tabledescr,
      lo_struct_descr TYPE REF TO cl_abap_structdescr,
      lt_components   TYPE cl_abap_structdescr=>component_table.

lo_table_descr ?= cl_abap_tabledescr=>describe_by_data( <lt_alv_data> ).
lo_struct_descr ?= lo_table_descr->get_table_line_type( ).
lt_components = lo_struct_descr->get_components( ).

INSERT VALUE #( name = 'ID' type = cl_abap_elemdescr=>get_c( 32 ) ) INTO lt_components INDEX 1.
" Dynamically add YEAR/MONTH based on configuration and trim redundant fields

This code dynamically generates the target structure from the actual ALV output instead of relying on predefined DDIC types.
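The adjusted component list still has to become a concrete internal table before any data can be mapped into it. A minimal sketch of that step, with illustrative variable names:

```abap
* Hedged sketch: materialize the adjusted component list as a new
* dynamic internal table for the external write.
DATA lr_db_data TYPE REF TO data.
FIELD-SYMBOLS <lt_db_data> TYPE STANDARD TABLE.

DATA(lo_new_struct) = cl_abap_structdescr=>create( lt_components ).
DATA(lo_new_table)  = cl_abap_tabledescr=>create( lo_new_struct ).

CREATE DATA lr_db_data TYPE HANDLE lo_new_table.
ASSIGN lr_db_data->* TO <lt_db_data>.
```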

ADBC query and deletion logic emphasizes safety boundaries

The external database access layer uses cl_sql_connection. Based on configuration and variant conditions, the program builds a WHERE clause and then executes query, delete, and insert operations. The most important design choice here is not simply that deletion is possible, but that accidental full-table deletion is explicitly prevented.

The original implementation requires both the table name and the WHERE condition to be non-empty before it allows DELETE to run. This is one of the most valuable defensive design decisions in the entire solution, especially for background batch jobs.

The WHERE condition is generated jointly from variant content and mapping rules

CALL FUNCTION 'RS_VARIANT_CONTENTS'
  EXPORTING
    report  = ps_Zdemoat1-progname
    variant = ps_Zdemoat1-variant
  TABLES
    valutab = lt_val_tab.

lv_where = concat_lines_of( table = lt_where_clauses sep = ' AND ' ).

This code converts SAP variant conditions into a dynamic filter expression that can be used against an external database.
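The step between reading the variant and concatenating the clauses is the mapping of each select-option row into a SQL predicate. A hedged sketch of its shape; the SELNAME-to-column mapping would in practice come from the configuration table, and only EQ and BT are shown:

```abap
* Illustrative sketch: map variant select-options (RSPARAMS rows) into
* SQL predicates. The 1:1 column mapping is an assumption.
DATA lt_where_clauses TYPE TABLE OF string.

LOOP AT lt_val_tab INTO DATA(ls_val) WHERE low IS NOT INITIAL.
  DATA(lv_column) = to_upper( ls_val-selname ). " assumed 1:1 mapping
  CASE ls_val-option.
    WHEN 'EQ'.
      APPEND |{ lv_column } = '{ ls_val-low }'| TO lt_where_clauses.
    WHEN 'BT'.
      APPEND |{ lv_column } BETWEEN '{ ls_val-low }' AND '{ ls_val-high }'|
        TO lt_where_clauses.
  ENDCASE.
ENDLOOP.

DATA(lv_where) = concat_lines_of( table = lt_where_clauses sep = ' AND ' ).
```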

IF lv_target_tbname IS NOT INITIAL AND lv_where IS NOT INITIAL.
  DATA(l_stmt_bulk_del) = |DELETE FROM { lv_target_tbname } WHERE { lv_where }|.
  DATA(l_rows_deleted) = l_stmt_ref_del->execute_update( l_stmt_bulk_del ).
  go_db->commit( ). " Explicitly commit the transaction after deletion
ELSE.
  WRITE: / 'Warning: WHERE is empty. Deletion was stopped to protect data'.
ENDIF.

This code establishes a hard safety gate before deleting data from the external database and prevents catastrophic operator error.
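The safety gate pairs naturally with exception handling on the ADBC call itself, so that a failed statement never leaves a half-committed state. A hedged sketch, assuming go_db is the cl_sql_connection instance used throughout:

```abap
* Sketch: roll back on SQL failure so a partial delete never commits.
TRY.
    DATA(l_rows_deleted) = l_stmt_ref_del->execute_update( l_stmt_bulk_del ).
    go_db->commit( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    go_db->rollback( ).
    WRITE: / 'Error during external DELETE:', lx_sql->get_text( ).
ENDTRY.
```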

Batch insertion reduces round-trip cost through dynamic internal table binding

During the write phase, the program first creates a dynamic internal table based on the new structure. It then maps the source ALV data into the target structure row by row, fills in UUID and year/month fields, and finally builds a placeholder-style INSERT statement for batch execution.

The key value of this implementation is array binding. set_param_table binds the entire dynamic internal table at once, which significantly reduces network round trips compared with row-by-row insertion. This makes it well suited to large-scale synchronization jobs.

DATA(lv_insert_sql) = |INSERT INTO { lv_target_tbname } ( { lv_fields_str } ) VALUES ( { lv_vals_str } )|.

lo_stmt_insert->set_param_table( lr_db_data ). " Bind the entire table of data
DATA(lv_inserted_cnt) = lo_stmt_insert->execute_update( lv_insert_sql ).
go_db->commit( ). " Commit after batch insert

This code uses ADBC batch parameter binding to perform efficient bulk inserts.
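The row-mapping phase that precedes the bind can be sketched as follows, assuming <lt_alv_data> and <lt_db_data> have been assigned as shown earlier; field and variable names are illustrative:

```abap
* Hedged sketch: copy matching fields per row, then fill the technical
* ID column with a UUID before the table is bound via set_param_table.
LOOP AT <lt_alv_data> ASSIGNING FIELD-SYMBOL(<ls_src>).
  APPEND INITIAL LINE TO <lt_db_data> ASSIGNING FIELD-SYMBOL(<ls_dst>).
  MOVE-CORRESPONDING <ls_src> TO <ls_dst>.

  ASSIGN COMPONENT 'ID' OF STRUCTURE <ls_dst> TO FIELD-SYMBOL(<lv_id>).
  IF sy-subrc = 0.
    TRY.
        <lv_id> = cl_system_uuid=>create_uuid_c32_static( ).
      CATCH cx_uuid_error.
        " leave ID initial if no UUID could be generated
    ENDTRY.
  ENDIF.
ENDLOOP.
```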

The foreground interaction mode improves manual control

In addition to background-based automatic execution for all configurations, the program also provides a foreground popup selection mode. It first reads unfrozen configurations, then displays them to the user in a popup through cl_salv_table, and supports multi-select before writing the result back to the global driver table.

This design balances automation with manual intervention. It is especially valuable for debugging, staged validation, and temporary remediation, because it avoids relying on background jobs or direct configuration table changes every time.
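The popup step described above can be sketched with the standard cl_salv_table API, assuming the unfrozen configurations have already been read into lt_config:

```abap
* Hedged sketch of the foreground popup with multi-select.
DATA lo_salv TYPE REF TO cl_salv_table.

TRY.
    cl_salv_table=>factory(
      IMPORTING r_salv_table = lo_salv
      CHANGING  t_table      = lt_config ).

    " Enable multi-row selection
    lo_salv->get_selections( )->set_selection_mode(
      if_salv_c_selection_mode=>row_column ).

    " Render as a popup instead of a full screen
    lo_salv->set_screen_popup(
      start_column = 10 end_column = 120
      start_line   = 2  end_line   = 20 ).

    lo_salv->display( ).

    " Read the user's selection after the popup closes
    DATA(lt_rows) = lo_salv->get_selections( )->get_selected_rows( ).
  CATCH cx_salv_msg.
    WRITE: / 'Warning: could not display configuration popup'.
ENDTRY.
```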

This implementation is mature, but three areas still deserve further optimization

First, the overall style still relies on FORM. A better long-term direction is to refactor it into OO ABAP and split report extraction, structure reconstruction, and database synchronization into independent classes.

Second, the field mapping stage uses nested loops with ASSIGN COMPONENT heavily, which can create performance pressure on large data volumes. If most fields share the same name, MOVE-CORRESPONDING is worth trying first.

Third, the external connection name is still hardcoded. It should be moved into the configuration table and managed together with the program, variant, and target table so the solution can evolve into a complete dynamic routing capability.


FAQ structured answers

Why does this solution use ADBC instead of Open SQL?

Because the target scenario includes external database connections, Native SQL assembly, and batch array binding. Open SQL is better suited to SAP-managed objects, but it offers less flexibility for cross-database operations.

What is the most critical safety measure when dynamically deleting external data?

You must enforce validation that both the target table name and the WHERE condition are non-empty before executing deletion. You should also roll back the transaction on exceptions to avoid partially successful states.

Which business scenarios is this solution best suited for?

It is best suited to lightweight interface middleware and data synchronization scenarios where existing SAP reports must be reused, the target is a heterogeneous database, and field mappings change dynamically by program or variant.

Core summary

This article reconstructs an ABAP-based dynamic data integration solution: it reuses existing report output through SALV runtime interception, combines RTTS-based dynamic schema generation with field trimming, and uses ADBC to perform external database queries, batch deletes, and batch inserts. It is well suited to SAP-to-heterogeneous-database data synchronization scenarios.