Introduction
Enterprises upgrading legacy databases often face challenges in migrating complex schemas and efficiently transferring large volumes of data. Transitioning from SAP ASE (Sybase ASE) to Azure SQL Database is a common strategy for taking advantage of enhanced features, improved scalability, and seamless integration with Microsoft services. As the business grows, the limitations of the legacy system become apparent: performance bottlenecks, high maintenance costs, and difficulty integrating with modern cloud solutions.
SQL Server Migration Assistant for SAP Adaptive Server Enterprise (SSMA) automates migration from SAP ASE to SQL Server, Azure SQL Database, and Azure SQL Managed Instance. While SSMA provides a complete end-to-end migration solution, the custom BCP script (ASEtoSQLdataloadusingbcp.sh) enhances this process by enabling parallel data transfers, making it especially effective for migrating large databases with minimal downtime.
Script Workflow
One of the most common challenges we hear from customers migrating from Sybase ASE to SQL Server is: “How can we speed up data transfer for large tables without overwhelming the system?” When you are dealing with hundreds of tables or millions of rows, serial data loads can quickly become a bottleneck.
To tackle this, we created a script called ASEtoSQLdataloadusingbcp.sh that automates and accelerates the data migration process using parallelism. It starts by reading configuration settings from external files and retrieves a list of tables, either from the source database or from a user-provided file. For each table, the script checks if it meets criteria for chunking based on available indexes. If it does, the table is split into multiple views, and each view is processed in parallel using BCP, significantly reducing the overall transfer time. If chunking is not possible, the script performs a standard full-table transfer.
Throughout the entire process, detailed logging ensures everything is traceable and easy to monitor. This approach gives users both speed and control, helping migrations finish faster without sacrificing reliability.
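To make the idea concrete, here is a minimal bash sketch of a chunked, parallel BCP transfer. It is not the actual ASEtoSQLdataloadusingbcp.sh script: the table name, key column, chunk width, connection variables, and the mssql-tools path are hypothetical placeholders, and the real script additionally derives chunk boundaries automatically, drops the temporary views, logs progress, and handles errors.

```bash
#!/bin/bash
# Illustrative sketch only -- not the actual ASEtoSQLdataloadusingbcp.sh script.
# Assumes the Sybase ASE client tools (isql, bcp) and the SQL Server bcp utility
# are installed, and that the ASE_*/SQL_* connection variables are already set.

TABLE="orders"          # hypothetical source table
KEY="order_id"          # hypothetical numeric key column used for chunking
CHUNKS=4                # number of parallel slices
KEYS_PER_CHUNK=500000   # hypothetical key-range width per slice
BATCH_SIZE=10000        # rows per commit on import (bcp -b)

pids=()
for ((i=0; i<CHUNKS; i++)); do
  lo=$(( i * KEYS_PER_CHUNK ))
  hi=$(( (i + 1) * KEYS_PER_CHUNK ))
  view="v_${TABLE}_chunk_${i}"

  # Create a view on the ASE source that exposes one slice of the table.
  isql -U "$ASE_USER" -P "$ASE_PASS" -S "$ASE_SERVER" <<EOF
use $ASE_DB
go
create view $view as
  select * from $TABLE where $KEY >= $lo and $KEY < $hi
go
EOF

  # Export the slice from ASE and import it into Azure SQL,
  # one background job per chunk.
  (
    bcp "$ASE_DB..$view" out "${view}.dat" -c \
        -U "$ASE_USER" -P "$ASE_PASS" -S "$ASE_SERVER" &&
    /opt/mssql-tools/bin/bcp "$SQL_DB.dbo.$TABLE" in "${view}.dat" -c \
        -U "$SQL_USER" -P "$SQL_PASS" -S "$SQL_SERVER" -b "$BATCH_SIZE"
  ) &
  pids+=($!)
done

# Wait for all chunk transfers to finish.
for pid in "${pids[@]}"; do wait "$pid"; done
```

The full path to the SQL Server bcp binary is used here only to disambiguate it from the ASE bcp utility of the same name; adjust it to wherever mssql-tools is installed on your host.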
Prerequisites
Before running the script, ensure the following prerequisites are met:
- Database schema is converted and deployed using SQL Server Migration Assistant (SSMA).
- Both the source (SAP ASE) and target (Azure SQL DB) databases are accessible from the host system running the script.
- The source ASE database should be hosted on Unix or Linux.
- The target SQL Server can be hosted on Windows or Linux, or the target can be an Azure SQL Database.
Configuration Files
The configuration aspect of the solution is designed for clarity and reuse. All operational parameters are defined in external files; the script uses the following external configuration files during execution:
bcp_config.env
The primary configuration file, bcp_config.env, contains the connection settings and control flags that drive the transfer.
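As a rough illustration, such a file typically holds connection details for both endpoints plus a few switches controlling the transfer. The variable names below are hypothetical placeholders, not the script's actual parameter names; refer to the file shipped with the script for the real format.

```bash
# bcp_config.env -- illustrative sketch only; variable names are hypothetical.
ASE_SERVER="asehost:5000"
ASE_DB="legacy_db"
ASE_USER="sa"
ASE_PASS="********"

SQL_SERVER="myserver.database.windows.net"
SQL_DB="target_db"
SQL_USER="sqladmin"
SQL_PASS="********"

BATCH_SIZE=10000        # rows per commit on import
MAX_PARALLEL_JOBS=8     # upper bound on concurrent BCP processes
USE_TABLE_LIST=1        # 1 = read tables from table_list.txt, 0 = query the source
```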
chunking_config.txt
The chunking_config.txt file defines the tables to be partitioned, identifies the primary key column for chunking, and specifies the number of chunks into which the data should be divided.
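For illustration, one plausible layout is a delimited line per table: the table name, the key column used to split the data, and the chunk count. The delimiter and column order shown here are assumptions; check the sample file provided with the script for the authoritative format.

```bash
# chunking_config.txt -- hypothetical format: table|key_column|number_of_chunks
orders|order_id|10
line_items|line_item_id|20
customers|customer_id|4
```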
table_list.txt
Use table_list.txt as the input if you want to migrate only a specific list of tables.
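The file is simply a plain list of table names, typically one per line, for example:

```bash
# table_list.txt -- one table per line (names here are examples)
orders
line_items
customers
```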
Steps to run the script
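In broad strokes, running the script looks like the following. The invocation and log file name below are illustrative assumptions; the instructions delivered with the script describe the exact steps.

```bash
# Place the script and its configuration files in one working directory,
# make the script executable, and run it (illustrative invocation).
chmod +x ASEtoSQLdataloadusingbcp.sh
./ASEtoSQLdataloadusingbcp.sh      # reads bcp_config.env and the other config files

# Follow progress in the generated log file (file name is an assumption).
tail -f bcp_migration.log
```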
Script Execution Log
The script log records tables copied, timestamps, and process stages.
Performance Baseline
A test was run on a 32-core system hosting both ASE and SQL, with a 10 GB table (2,621,440 rows). Migration using SSMA took about 3 minutes.
Using the BCP script with 10 chunks, the entire export and import finished in 1 minute 7 seconds. This demonstrates how parallelism and chunk-based processing greatly boost efficiency for large datasets.
Disclaimer: These results are for illustration purposes only. Actual performance will vary depending on system hardware (CPU cores, memory, disk I/O), database configurations, network latency, and table structures. We recommend validating performance in dev/test to establish a baseline.
General Recommendation
- Larger batch sizes (e.g., 10K–50K rows per commit) can boost throughput if disk IOPS and memory are sufficient, as they lower commit overhead.
- More chunks increase parallelism and throughput when spare CPU is available; when CPU usage is already high, additional chunks may instead cause contention.
Monitor the system’s CPU and IOPS (a simple monitoring sketch follows this list):
- When the system has high idle CPU and low I/O wait, increasing both the number of chunks and the batch size is appropriate.
- If CPU load or I/O wait is high, reduce batch size or chunk count to avoid exhausting resources.
- This method aligns BCP operations with your system’s existing capacity and performance characteristics.
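One simple way to watch these metrics on a Linux host while a transfer is running is with standard command-line tools (mpstat and iostat come from the sysstat package; vmstat is typically available by default). Any equivalent monitoring tool works just as well.

```bash
# CPU utilization reported every 5 seconds: watch %idle and %iowait.
mpstat 5

# Extended per-device I/O statistics: watch r/s, w/s, and %util for the data disks.
iostat -x 5

# One-line summary of CPU, memory, and I/O activity every 5 seconds.
vmstat 5
```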
Steps to Download the script
Please send an email to the alias: datasqlninja@microsoft.com and we will send you the download link with instructions.
What’s Next: Upcoming Enhancements to the Script
- Smart Chunking for Tables Without Unique Clustered Indexes
  - Enable chunk-based export using any unique key column, even if the table lacks a unique clustered index.
  - This will extend chunking capabilities to a broader range of tables, ensuring better parallelization.
- Multi-Table Parallel BCP with Intelligent Chunking
  - Introduce full parallel execution across multiple tables.
  - If a table qualifies for chunking, its export/import will also run in parallel internally, delivering two-tier parallelism: across and within tables.
- LOB Column Handling (TEXT, IMAGE, BINARY)
  - Add robust support for large object data types.
  - Include optimized handling strategies for exporting and importing tables with TEXT, IMAGE, or BINARY columns, ensuring data fidelity and avoiding performance bottlenecks.
Feedback and Suggestions
If you have feedback or suggestions for improving this asset, please contact the Data SQL Ninja Team (datasqlninja@microsoft.com).
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.
Thank you for your support!