# DBAPI User Guide
# Product Introduction
DBAPI is a low-code tool for data warehouse developers. By writing SQL and configuring parameters on a web page, you can automatically generate RESTful web services. It helps programmers quickly build backend data services, especially for BI reports and dashboards.
DBAPI serves as an enterprise data service management center and a platform for publishing data services externally. It supports dynamic creation and publishing of APIs, unified API management, and client management: it can monitor client calls to APIs and control which APIs each client is permitted to access.
Demo Address: https://support.51dbapi.com/#/demo
# Core Features
- ✅ Ready to use: No programming required, does not depend on other software (standalone mode only requires Java runtime environment)
- ✅ Lightweight deployment: Extremely low resource consumption, can run stably on a 2-core 4GB server
- ✅ Multi-platform support: Supports standalone and cluster modes, supports Windows, Linux, macOS
- ✅ Dynamic configuration: Supports dynamic creation and modification of APIs and data sources, hot deployment without perception
- ✅ Access control: Provides API-level access permission management, supports IP whitelist/blacklist
- ✅ Wide compatibility: Supports all JDBC protocol relational databases (MySQL, SQL Server, PostgreSQL, Oracle, Hive, Dameng, Kingbase, Doris, OceanBase, GaussDB, etc.)
- ✅ Dynamic SQL: Supports MyBatis-style dynamic SQL, provides integrated SQL editing, execution, and debugging functions
- ✅ Full support: Supports Select/Insert/Update/Delete statements and stored procedure calls
- ✅ Transaction management: Supports multi-SQL execution, can flexibly control transaction switches
- ✅ Plugin extension: Provides rich plugin mechanisms for caching, data transformation, failure alerts, parameter processing, etc.
- ✅ Easy migration: Supports API configuration import/export, facilitating migration from test environment to production environment
- ✅ Parameter processing: Supports interface parameter passing and complex nested JSON parameter passing, supports parameter validation
- ✅ Monitoring statistics: Provides interface call record query and access information statistics functions
- ✅ Traffic control: Supports API rate limiting mechanism
- ✅ Process orchestration: Supports complex API orchestration functions
- ✅ Open integration: Provides OpenAPI interfaces for easy integration with other systems
# Basic Concepts
# Executor
Executor is an abstraction of API business logic execution, currently supporting multiple types:
| Executor Type | Function Description |
|---|---|
| SQL Executor | Executes SQL statements in the database and returns results |
| Elasticsearch Executor | Executes Elasticsearch DSL queries and returns results |
| HTTP Executor | Acts as a gateway proxy to forward HTTP requests and return results |
The personal edition currently supports only the SQL Executor.
# API Group
Used for classifying and managing APIs, grouping business-related APIs together for easy maintenance and searching.
# Client
Refers to applications that call platform APIs, such as Python or Java programs. Each client has a unique identifier (`clientId`) and a `secret`. Clients are created by the system administrator, who assigns them API access permissions.
# Quick Start
# System Login
Log in to the system with the default account/password admin/admin.

# Data Source Configuration
Go to the data source management page and click "Create Data Source"

Fill in database connection information and save

After saving, return to the data source page where you can edit or delete the newly created data source

Important Notes:
- Theoretically, all databases with a JDBC driver are supported
- When using Oracle or other database types, you need to manually fill in the JDBC driver class name and place the corresponding JDBC driver JAR package in the `extlib` or `lib` directory, then restart the service (in cluster mode, every node must do this)
- The `lib` directory already contains built-in drivers for MySQL/SQL Server/PostgreSQL/ClickHouse. Please replace them manually if the versions don't match
- It is recommended to put your own driver JAR packages in the `extlib` directory for unified management
# API Creation Process
# 1. Create API Group
Go to the API management page and click "Create API Group" on the left

Fill in the group name in the pop-up window and save

After saving, you'll find a new group on the left. Click the more button on the group to edit or delete the group

# 2. Create API
Click the "Create API" button in the target group to enter the configuration page

Click Basic Information and fill in the API basic information

Access Permissions: Public interfaces can be accessed directly; private interfaces require clients to apply for a token first (we choose a public interface here to make later testing convenient)
Content-Type: For `application/x-www-form-urlencoded` APIs, parameters need to be configured; for `application/json` APIs, a JSON parameter example needs to be filled in.

Click Executor and fill in the executor information

SQL Writing: Uses MyBatis dynamic SQL syntax (without the outer `<select>` `<insert>` `<delete>` `<update>` tags). Parameters are written as `#{}` or `${}`. Refer to the Dynamic SQL Syntax Documentation for dynamic SQL syntax.
Transaction Control: Transactions are disabled by default. For insert/update/delete statements, it's recommended to enable transactions. When enabled, if a SQL execution fails, the transaction is rolled back. If an API contains multiple SQL statements, enabling transactions executes all of them within one transaction.
Important Tip: For databases like HIVE that don't support transactions, do not enable transactions, otherwise errors will occur
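As a sketch of the MyBatis-style dynamic SQL described above (the table and parameter names here are invented for illustration), an executor's SQL body might look like:

```sql
-- No outer <select> tag: write the SQL content directly.
SELECT id, name, phone
FROM t_user
WHERE 1 = 1
<if test="name != null">
  -- #{} binds the parameter safely; ${} would splice raw text instead
  AND name LIKE CONCAT('%', #{name}, '%')
</if>
<foreach collection="idList" item="it" open="AND id IN (" separator="," close=")">
  #{it}
</foreach>
```

The `<if>` and `<foreach>` blocks are evaluated against the request parameters at execution time, so optional filters simply drop out of the SQL when their parameters are absent.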
Data Transformation: If data transformation is needed, fill in the Java class name of the data transformation plugin. Leave blank for no transformation. If the plugin requires parameters, fill in the parameters.
Multi-SQL Support: Click the add button to add SQL. An API can execute multiple SQL statements, with multiple results packaged together and returned. For example, pagination queries where both details and total count need to be queried. Only one SQL can be written per SQL editing window.

Important Note: If an executor contains multiple SQL statements, each SQL will correspond to a data transformation plugin configuration. Data transformation plugins always transform the result of a single SQL query.
Click the window maximize button to enter the SQL debugging interface

Click Run SQL to execute SQL. If SQL contains parameters, parameters need to be set

Click Global Plugins and fill in global plugin information

Caching: If data caching is needed, fill in the Java class name of the caching plugin. Leave blank to disable caching. If the plugin requires parameters, fill in the parameters.
Alerting: If you need alerts when API execution fails, fill in the Java class name of the alerting plugin. Leave blank to disable failure alerts. If the plugin requires parameters, fill in the parameters.
Global Data Transformation: If data transformation is needed, fill in the Java class name of the data transformation plugin. Leave blank for no transformation. If the plugin requires parameters, fill in the parameters.
Parameter Processing: If parameter processing is needed, fill in the Java class name of the parameter processing plugin. Leave blank for no processing. If the plugin requires parameters, fill in the parameters.
Click Save to create the API, then click the API menu to return to the API list page
# 3. API Publishing
Click the more button on the API to reveal the online button, then click it to publish the API

Click the more button on the API to reveal the request test button, then click it to enter the request test page

Click the send request button to initiate a request. If there are parameters, they need to be filled in

# Client Management
Click the client menu and click the create client button

Fill in client information and save

Creating a client generates clientId and secret. The system administrator needs to inform the client (API caller) of the clientId and secret.
Clients use their own clientId and secret to access the http://192.168.xx.xx:8520/token/generate?clientId=xxx&secret=xxx interface to dynamically obtain a token. Clients can only access private APIs using this token (provided the system administrator has authorized the client to access this private API).
When creating a client, a token expiration time must be set. Every token the client subsequently applies for carries this expiration time: within the validity period, a previously obtained token remains valid for API access; after it expires, a new token must be applied for.
If you want tokens to be permanently valid, please set the expiration time to a large value, such as 100 years
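The token flow above can be sketched in Python. The host, port, `clientId`, and `secret` below are placeholders; `/token/generate` is the token-application interface from the text:

```python
from urllib.parse import urlencode

def build_token_url(base_url, client_id, secret):
    """Build the token-application URL described above."""
    query = urlencode({"clientId": client_id, "secret": secret})
    return f"{base_url}/token/generate?{query}"

def auth_header(token):
    """Private APIs expect the token in the Authorization header."""
    return {"Authorization": token}

url = build_token_url("http://192.168.1.10:8520", "my-client", "my-secret")
print(url)
```

In a real client you would fetch `url` (for example with `requests.get`), read the token from the response, and pass it to `auth_header` when calling private APIs.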
Click the authorization button to authorize APIs for the client

Select the groups to authorize and save

# Token Usage Instructions
Token Application Interface:
`http://192.168.xx.xx:8520/token/generate?clientId=xxx&secret=xxx`

When requesting private interfaces, the token value must be placed in the `Authorization` field of the header. (For public interfaces, no header setting is required.)

Python Call Example:

```python
import requests

headers = {"Authorization": "5ad0dcb4eb03d3b0b7e4b82ae0ba433f"}
re = requests.post("http://127.0.0.1:8520/API/userById", {"idList": [1, 2]}, headers=headers)
print(re.text)
```
Modify the previous API to a private interface, then click the request test button

At this point, clicking the send request button shows that the API is inaccessible

Fill in `clientId` and `secret`, then click the button to access the interface and obtain a `token`.

Fill the `token` into the `Authorization` field of the header, click the send request button, and the API access succeeds.
# IP Firewall Settings

Enabling the firewall lets you intercept requests by IP, according to the configured IP whitelist/blacklist.
# Monitoring
DBAPI can be used with only the metadata database (mysql/sqlite), but if you also want to use the monitoring functions on the page (monitoring API call records, traffic, success rate, etc.), you must rely on another log database (which users need to set up themselves) to store API access logs. ClickHouse is recommended, but you can also use other relational databases.
Currently, initialization scripts for ClickHouse/MySQL log databases are provided in the `sql` directory.
# Log Collection Methods
File Collection: By default, DBAPI writes API access logs to the disk file `logs/dbapi-access.log`. Users can use tools like datax/flume to collect the logs into the log database.

Direct Database Connection: If `access.log.writer=db` is configured in the `conf/application.properties` file, DBAPI writes API access logs directly to the log database. This method is suitable for scenarios where the log volume is not too large.

Kafka Buffer: If `access.log.writer=kafka` is configured in the `conf/application.properties` file, DBAPI writes API access logs to Kafka, and users collect the logs from Kafka into the log database themselves. This method is suitable for scenarios with large log volumes, where Kafka serves as a data buffer. Note that with this method, you also need to fill in the Kafka address in the `conf/application.properties` file.

DBAPI also comes with a tool that consumes Kafka logs and writes them to the log database; use the `bin/dbapi-log-kafka2db.sh` script.

If you don't need the monitoring functions, you don't need to set up a log database; just configure `access.log.writer=null` (this is the default configuration).
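A minimal sketch of the relevant `conf/application.properties` settings. Only the `access.log.writer` key is documented here; the exact key name for the Kafka address depends on your DBAPI version, so it is not shown:

```properties
# Default: no access-log writing, no log database needed
access.log.writer=null

# Alternative: write access logs directly to the log database
# access.log.writer=db

# Alternative: write access logs to Kafka
# (also configure the Kafka address in this file)
# access.log.writer=kafka
```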
# Monitoring Summary
Click the monitoring menu to view API call record monitoring

# View Interface Call Records
Click the details tab to search API call records

# Other Functions
# Export Interface Documentation
Click the tools button in the API menu, then click the export interface documentation button

# Plugin System
DBAPI provides five types of plugin mechanisms. You can download plugins from the Plugin Market.
# Cache Plugin
- Caches executor results. For example, with the SQL executor, the results of query-type SQL can be cached to avoid hitting the database on every call and reduce database pressure.
- Cache logic is written by users, who can cache to ehcache/redis/mongodb, etc.
- When data cannot be found in the cache, the executor is executed, and the results are cached.
SQL Executor Multi-SQL Scenario: For SQL executors, if an executor contains multiple SQL statements, the cache plugin caches the encapsulated overall result of multiple SQL executions (if a single SQL is configured with a transformation plugin, the result will be transformed first) as a whole.
# Alert Plugin
- When an API internal error occurs, the alert plugin can send alert notifications, such as emails, SMS, DingTalk, Feishu, WeCom, etc.
- Alert logic is written by users.
# Data Transformation Plugin
- Sometimes SQL alone cannot produce the desired data format. When it is more convenient to process and transform the data in code, use the data transformation plugin; users write the transformation logic themselves.
- For example, transforming user phone numbers and bank card numbers in SQL query results for desensitization.
SQL Executor Multi-SQL Scenario: For SQL executors, if an executor contains multiple SQL statements, each SQL corresponds to a data transformation plugin configuration. Data transformation plugins always transform the result of a single SQL query.
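DBAPI transformation plugins are implemented as Java classes, but the transformation logic itself is straightforward. Here is a Python sketch of the desensitization idea from the example above (the `phone` field name and masking rule are invented for illustration):

```python
def mask_phone(phone):
    """Keep the first 3 and last 4 digits, mask the middle."""
    if not phone or len(phone) < 8:
        return phone
    return phone[:3] + "*" * (len(phone) - 7) + phone[-4:]

def transform_rows(rows):
    """Apply masking to the 'phone' column of a SQL result set."""
    return [{**row, "phone": mask_phone(row.get("phone"))} for row in rows]

rows = [{"id": 1, "phone": "13812345678"}]
print(transform_rows(rows))
```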
# Global Data Transformation Plugin
- The default API return data format is `{success: true, msg: xxx, data: xxx}`
- In some cases, the response data format needs to be transformed. For example, the low-code framework AMIS requires the interface return data to carry a `status` field. In this case, the global data transformation plugin can be used to transform the entire API return data format.
Plugin Difference Explanation: Note the difference between data transformation plugins and global data transformation plugins. Data transformation plugins transform the executor execution results (such as transforming the results of SQL executor executing query SQL), while global data transformation plugins transform the entire API execution results.
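As with the previous plugin, the real implementation is a Java class; this Python sketch only illustrates the kind of whole-response reshaping a global transformation performs (the AMIS convention of `status: 0` for success is an assumption to verify against the AMIS documentation):

```python
def to_amis(resp):
    """Reshape DBAPI's {success, msg, data} envelope into an AMIS-style one."""
    return {
        "status": 0 if resp.get("success") else 1,  # assumed AMIS convention
        "msg": resp.get("msg", ""),
        "data": resp.get("data"),
    }

print(to_amis({"success": True, "msg": "ok", "data": [1, 2]}))
```

Note that this transforms the entire API response, not the result of a single SQL statement; that is exactly the difference described above.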
# Parameter Processing Plugin
- Processes request parameters with user-defined logic
- For example, converting all request parameter values to uppercase
- For example, APIs receiving encrypted parameters, users define logic to decrypt parameter values
Version Support Note: Parameter processing plugins are supported from personal edition `4.0.16` and enterprise edition `4.1.10`.
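The uppercase example from the list above, sketched in Python (real plugins are Java classes; the parameter names here are illustrative):

```python
def uppercase_params(params):
    """Uppercase every string-valued request parameter, leave others as-is."""
    return {k: v.upper() if isinstance(v, str) else v for k, v in params.items()}

print(uppercase_params({"name": "alice", "idList": [1, 2]}))
```

A decryption plugin would follow the same shape: take the raw parameter dict, return the processed dict that the executor's SQL then sees.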
# Notes
# Data Source Support
- If you want to use Oracle or other types of data sources, manually place the corresponding JDBC driver package in the `extlib` or `lib` directory after deploying DBAPI (for cluster deployment, each node needs the JAR package placed manually)
- It is recommended to place drivers in the `extlib` directory for unified management
# SQL Writing Standards
- Same as MyBatis dynamic SQL syntax, supporting the `#{}` and `${}` parameter forms. Refer to the Dynamic SQL Syntax Documentation. There is no need to write the outer `<select>` `<update>` tags; just write the SQL content directly
- Note that, as in MyBatis, the less-than sign in SQL must not be written as a literal `<`; write it as `&lt;` instead
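For example (hypothetical table and parameter names), a range condition must escape the less-than sign:

```sql
SELECT id, name
FROM t_order
-- write &lt; instead of a literal < so the dynamic-SQL parser
-- does not mistake it for the start of an XML tag
WHERE create_time &lt; #{endTime}
```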
# Permission Verification Process
