Informix to MySQL: 2 Easy Ways

Luke Smith
Enterprise Solutions Architect
August 25, 2023
Matt Tanner
Developer Relations Lead
August 25, 2023

In the evolving world of databases, migration between different platforms has become an inevitable aspect of maintaining data infrastructure. Whether the migration is driven by better performance, scalability, or compatibility, the decision to migrate usually brings up a fair amount of concerns, uncertainties, and technical challenges. This is particularly true when transitioning from a traditional system like IBM Informix to a widely embraced, open-source platform like MySQL.

This blog will look at the in-depth details of migrating from an Informix database to a MySQL database. This comprehensive guide delivers clear, step-by-step instructions to ensure a successful migration using two methods. One method uses Arcion to automate the migration process, and the other uses a more traditional custom-code option. Let’s start by diving into a quick overview of both Informix and MySQL.


Introduction to Informix

IBM Informix is a versatile database server that can manage various data types, including traditional relational, object-relational, and dimensional databases. Excelling in handling Online Transaction Processing (OLTP) and Internet of Things (IoT) data, it's also suitable for Online Analytical Processing (OLAP) and general-purpose data applications.

Informix integrates diverse data types such as SQL, NoSQL, JSON, time-series, and spatial data. With smart triggers and high availability data replication, it enables automated data management. Informix is known for its minimal footprint and flexible deployment options since it can run on UNIX, Linux, Mac OS X, and Windows. All Informix editions also include the IBM Informix Client Software Development Kit (Client SDK) and Informix DataBlade Developers Kit (DBDK) so users can expand the functionality of the database.

Introduction to MySQL

MySQL is a highly popular open-source Relational Database Management System (RDBMS) maintained by Oracle Corporation. Its high performance, reliability, and ease of use, particularly as a part of the widely-used LAMP (Linux/Apache/MySQL/PHP) stack for web development, have made it a favorite among developers.

Compatible with a broad range of computing platforms and languages, MySQL supports popular open-source applications like Drupal, Joomla, Magento, and WordPress. Because of this, MySQL has become a trusted and reliable solution for data-intensive web applications.

MySQL variations cater to specific needs. MySQL HeatWave, for instance, is an in-memory query accelerator for MySQL Database Service in Oracle Cloud. MySQL Enterprise Edition offers advanced features, management tools, and technical support. Other variants include MySQL for OEM/ISV and MySQL Cluster CGE for web, cloud, and communication services.

How to Migrate Data from Informix to MySQL

Depending on the complexity of the setup and data, migrating data between Informix and MySQL can go from straightforward to extremely complex. As mentioned, this write-up will show two methods that can be used to migrate data from one platform to the other. The options include using Arcion and manually creating an external table to enable data transfer. 

Method 1: Informix to MySQL using Arcion 

The first method of connecting Informix and MySQL to migrate data discussed uses Arcion. Arcion is an efficient tool for migrating and replicating data across various enterprise databases and data warehouses. Known for its real-time and in-memory Change Data Capture (CDC) solution, Arcion offers users massive scalability and data consistency. 

This section covers connecting IBM Informix as a data source in Arcion. Later, in subsequent steps, you will be shown how to connect MySQL to Arcion as a destination database. Arcion’s CLI tool, Replicant, will create the connection and run the migration via CLI.

 Before we start, here are a few prerequisites to note:

  • Ensure that the specified user on MySQL possesses the CREATE TABLE and CREATE TEMPORARY TABLE privileges on the catalogs and schemas into which tables will be replicated.
  • Grant CREATE DATABASE or CREATE SCHEMA privileges to the user if you want Replicant to create catalogs or schemas for the target MySQL system.
  • You can create a database manually with the name io_blitzz and grant all privileges to the user if the user does not have CREATE DATABASE privilege. Replicant will use the database to maintain internal checkpoints and metadata.
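
As a hedged sketch of these prerequisites, the MySQL statements might look like the following. Here `repl_user` is a hypothetical account name; substitute the user Replicant will connect as, and note that MySQL's actual privilege names are CREATE and CREATE TEMPORARY TABLES:

```sql
-- repl_user is a hypothetical account; replace it with the user Replicant connects as
CREATE USER IF NOT EXISTS 'repl_user'@'%' IDENTIFIED BY 'secret';
GRANT CREATE, CREATE TEMPORARY TABLES ON *.* TO 'repl_user'@'%';

-- If the user lacks the CREATE DATABASE privilege, create io_blitzz manually
-- and grant the user full access to it:
CREATE DATABASE IF NOT EXISTS io_blitzz;
GRANT ALL PRIVILEGES ON io_blitzz.* TO 'repl_user'@'%';
```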

With these things sorted out, let's look at the steps in creating the connections to enable data migration across the platforms.

Step 1: Download And Install Arcion Self-hosted

The first thing we will need to do is configure Arcion. For this, we must download and install the self-hosted version of Arcion. To gain access, follow the steps outlined on the Arcion Self-hosted webpage. Once you have downloaded Arcion's CLI tool, Replicant, you must create a home directory for it. By default, the home directory is where Replicant was downloaded; from now on, it will be referred to as $REPLICANT_HOME in the examples below. The last step is adding your credentials to your Arcion instance; refer to the quick start guide for details on how to do this.

Step 2: Setup CDC Replication

Follow the instructions in Enabling CDC Replication for Informix to enable CDC-based replication from the source Informix server.

Step 3: Set up Logical Configuration

To set up the logical log configuration, follow the instructions in the Logical Log Configuration Guidelines.

Step 4: Create a Heartbeat Table

A heartbeat table must be created on the source database using the DDL SQL statements below for CDC replication.

CREATE TABLE tpch:tpch.replicate_io_cdc_heartbeat(
  timestamp INT8 NOT NULL,
  PRIMARY KEY(timestamp) CONSTRAINT cdc_heartbeat_id_repl1_repl1
) LOCK MODE ROW

Remember to grant INSERT, UPDATE, and DELETE privileges for the table to the user to whom Replicant is provided.
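
For instance, if Replicant connects as a hypothetical Informix user named `repl_user`, the grants might look like this (substitute your actual user name):

```sql
-- repl_user is a hypothetical user name; substitute the user Replicant runs as
GRANT INSERT, UPDATE, DELETE ON replicate_io_cdc_heartbeat TO repl_user;
```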

Step 5: Set up Connection Configuration for IBM Informix 

Go to the IBM Informix sample connection configuration file found in $REPLICANT_HOME.

vi conf/conn/informix.yaml


In the connection configuration file, we must add our values so Arcion can connect to Informix. If your connection credentials are stored on AWS Secrets Manager, you can retrieve them using Replicant; if not, you can input them into the configuration file, as seen in the example below.

type: INFORMIX

host: localhost # Replace localhost with your Informix server's hostname
port: 9088 # In case of SSL connection use SSL port

server: 'informix'
database: 'tpch' # Name of the catalog from which the tables are to be replicated

username: 'informix' # Replace informix with the user that connects to your Informix server
password: 'in4mix' # Replace in4mix with your user's password
informix-user-password: 'in4mix' # Password for the "informix" user, required for performing CDC. Not required in snapshot replication.

max-connections: 15
max-retries: # Number of times any operation on the source system will be re-attempted on failures

#ssl:
#  trust-store:
#    enable: true
#    path: "/home/informix/ssl/truststore.jks" # Path to the JKS truststore containing the trust certificate of the Informix server
#    password: "in4mix" # The truststore password

From the sample configuration above, note that to use Change Data Capture, you must connect to the syscdcv1 catalog on the server as the informix user. Also note that you can connect to the Informix server over SSL by uncommenting and filling in the configuration parameters under the ssl section.

Step 6: Set up Extractor Configuration

Now that our connector configuration file is set, we need to set our extractor configuration. To set this, from the $REPLICANT_HOME directory, open the Extractor configuration file as shown in the sample.

vi conf/src/informix.yaml

The extractor configuration file comprises two parts: parameters related to snapshot mode and parameters related to realtime mode.

Parameters Related to Snapshot Mode

The sample below shows an example of how to set up parameters related to snapshot mode.

snapshot:
  threads: 16
  fetch-size-rows: 5_000

#  lock:
#    enable: true
#    scope: TABLE   # DATABASE, TABLE
#    force: false
#    timeout-sec: 5

#  min-job-size-rows: 1_000_000
#  max-jobs-per-chunk: 32

  split-method: MODULO  # Allowed values are RANGE, MODULO

  per-table-config:
  - catalog: "tpch"
    schema: "tpch"
    tables:
      lineitem:
        split-key: "l_orderkey"
#       row-identifier-key: ["l_linenumber", "l_orderkey"]
#       split-method: MODULO  # Table specific overridable config: allowed values are RANGE, MODULO
#       num-jobs: 1
      orders:
        split-key: "o_orderkey"
#       num-jobs: 1

Parameters Related to Realtime Mode

The realtime section specifies the configuration parameters for realtime mode.

Below is a sample extractor using a realtime mode configuration for IBM Informix.

realtime:
  threads: 4
  fetch-size-rows: 256
  _buffer-size: 1000
  _read-timeout: 3

  db-charset: UTF-8

  start-position:
    # create-checkpoint: false
    # sequence-number: 51553788137
    # timestamp-ms: 1647378865000

  heartbeat:
    enable: true
    catalog: tpch
    schema: tpch
    interval-ms: 10000

 From the sample above, the following parameters are specific to realtime mode on Informix:

db-charset [v22.02.12.23]: JDK equivalent of Informix code set. For example, UTF-8, ISO-8859-1, etc.

start-position: This allows users to specify sequence number or timestamp to start reading CDC operations.

sequence-number: This specifies the start position for the CDC.

timestamp-ms: This optional parameter is used as a filter to ignore any transaction with a commit timestamp older than the specified timestamp.

create-checkpoint: This can be set to true or false. If true, Replicant obtains the current sequence-number from the Informix server, logs it in trace.log, and sets up a checkpoint from which replication can be resumed at any point in time by appending the --resume argument to the replication command used when creating the checkpoint.

Step 7: Set up Connection Configuration for MySQL

Next,  we will use the sample connection configuration file found in $REPLICANT_HOME to specify your MySQL connection details.

vi conf/conn/mysql.yaml

Choose between using the basic username and password or using SSL for the authentication connection.

Using Basic Username and Password for the Connection

For basic auth, you can either retrieve connection credentials from AWS Secrets Manager using Replicant if they are stored there or simply specify your credentials in plain text by inputting them into the configuration file. An example of plain text credentials can be seen in the example below. 

type: MYSQL

host: HOSTNAME_OR_IP
port: PORT_NUMBER

username: "USERNAME"
password: "PASSWORD"

slave-server-ids: [1]
max-connections: 30

max-retries: 10
retry-wait-duration-ms: 1000

From the example above, you will need to supply values for the following fields:

HOSTNAME_OR_IP: This refers to the hostname or IP address of MySQL.

PORT_NUMBER: This is the port number of the MySQL host.

USERNAME: This refers to the user's username that connects to your MySQL server.

PASSWORD: This is the password associated with the USERNAME. 

Using SSL for the Connection

To connect to MySQL using SSL, we must create the TrustStore and KeyStore on the host running Replicant. After this is done, you’ll need to specify the SSL configuration in the connection configuration file.

Create the TrustStore and KeyStore on the host running Replicant
  • Import the Certificate Authority (CA) certificate PEM file as shown in the sample below.
keytool -importcert -alias MySQLCACert -file /path/to/ca.pem \
  -keystore TRUSTSTORE_LOCATION \
  -storepass TRUSTSTORE_PASSWORD -noprompt

TRUSTSTORE_LOCATION: The TrustStore location corresponds to the ssl.trust-store.path parameter in the SSL Configuration.

TRUSTSTORE_PASSWORD: The TrustStore password corresponds to the ssl.trust-store.password parameter in the SSL Configuration.

The ca.pem file corresponds to the ssl.root-cert field in the SSL Configuration.

  • Once the client private key and certificate files are obtained, import them into a Java KeyStore. First, convert the client key and certificate files to a PKCS #12 archive as shown below:

openssl pkcs12 -export -in /path/to/client-cert.pem -inkey /path/to/client-key.pem \
  -name "NAME" -passout pass:PASSWORD \
  -out client-keystore_src.p12

From the code above, replace the following:

PASSWORD: This is the password source for output files.

NAME: This is the name of the certificate and key.

The client-key.pem and client-cert.pem files correspond to the ssl.ssl-key and ssl.ssl-cert parameters, respectively, in the SSL configuration. After the client key and certificate files are converted to a PKCS #12 archive, import them into a Java KeyStore, as seen below.

keytool -importkeystore -srckeystore client-keystore_src.p12 -srcstoretype pkcs12 -srcstorepass SRC_KEYSTORE_PASSWORD \
  -destkeystore NAME_OF_THE_DST_KEYSTORE_FILE -deststoretype JKS \
  -deststorepass DST_KEYSTORE_PASSWORD

From the code above, replace the following:

SRC_KEYSTORE_PASSWORD: This is the source keystore password.

NAME_OF_THE_DST_KEYSTORE_FILE: This is the name of the destination keystore file and corresponds to the ssl.key-store.path parameter in the SSL Configuration.

DST_KEYSTORE_PASSWORD: This is the destination keystore password, corresponding to the ssl.key-store.password parameter in the SSL Configuration.

Specify SSL Configuration in the Connection Configuration File

The SSL configuration differs depending on whether you are using realtime and full-mode replication or snapshot replication.

  • Realtime and full-mode replication

For CDC-based replication using realtime and full mode, you must specify the SSL configuration under the SSL field in the connection configuration file, as shown below.

ssl:
  enable: true
  root-cert: PATH_TO_CA_PEM_FILE
  ssl-cert: PATH_TO_CLIENT_CERT_PEM_FILE
  ssl-key: PATH_TO_CLIENT_KEY_PEM_FILE

  hostname-verification: {true|false}

  trust-store:
    path: PATH_TO_TRUSTSTORE
    password: "TRUSTSTORE_PASSWORD"
  key-store:
    path: PATH_TO_KEYSTORE
    password: "KEYSTORE_PASSWORD"

From the above, we have the following:

PATH_TO_CA_PEM_FILE: This is the path to the Certificate Authority (CA) certificate PEM file.

PATH_TO_CLIENT_CERT_PEM_FILE: This is the path to the client SSL public key certificate file in PEM format.

PATH_TO_CLIENT_KEY_PEM_FILE: This is the path to the client's private key.

PATH_TO_TRUSTSTORE: This is the path to the TrustStore.

TRUSTSTORE_PASSWORD: This is the TrustStore password.

PATH_TO_KEYSTORE: This is the path to the Java KeyStore.

KEYSTORE_PASSWORD: This is the KeyStore password.

hostname-verification: With its default set to true, this enables hostname verification against the server identity according to the specification in the server's certificate.

  • Snapshot replication

For snapshot mode replication, you must specify the SSL configuration under the ssl field in the connection configuration file, as shown below.

ssl:
  enable: true
  hostname-verification: {true|false}

  trust-store:
    path: PATH_TO_TRUSTSTORE
    password: "TRUSTSTORE_PASSWORD"
  key-store:
    path: PATH_TO_KEYSTORE
    password: "KEYSTORE_PASSWORD"

From the above, the following values will need to be added in:

hostname-verification: With its default set to true, this enables hostname verification against the server identity according to the specification in the server's certificate.

trust-store.path: This is the path to the TrustStore.

trust-store.password: This is the TrustStore password.

key-store.path: This is the path to the Java keystore.

key-store.password: This is the KeyStore password.

Step 8: Set up Applier Configuration

To set up the Applier configuration, go to the sample MySQL Applier configuration file in $REPLICANT_HOME.

vi conf/dst/mysql.yaml

In the Applier configuration file, make the necessary changes as shown below.

snapshot:
  threads: 16 # Specify the maximum number of threads Replicant should use for writing to the target

  # If bulk-load is used, Replicant will use the native bulk-loading capabilities of the target database
  bulk-load:
    enable: true|false # Set to true if you want to enable bulk loading
    type: FILE|PIPE # Specify the type of bulk loading between FILE and PIPE
    serialize: true|false # Set to true if you want the generated files to be applied in serial/parallel fashion

    # For versions 20.09.14.3 and beyond
    native-load-configs: # Specify the user-provided LOAD configuration string which will be appended to the s3 specific LOAD SQL command

Step 9: Running Replicant

From $REPLICANT_HOME, run the following command to execute Replicant in realtime mode. The pipeline is activated immediately after the command is executed.

./bin/replicant realtime conf/conn/informix.yaml \
  conf/conn/mysql.yaml \
  --extractor conf/src/informix.yaml \
  --applier conf/dst/mysql.yaml \
  --id repl2 --replace --overwrite

After executing the command and the pipeline is started in realtime mode, Replicant does the following:

  • Replicant first creates the destination schemas if they are not already present. If the destination schemas are present, Replicant appends to the existing tables.
  • Replicant starts replicating real-time operations performed within the Informix instance, obtained via log-based CDC. By default, real-time mode starts replicating from the latest log position, but a custom start position can be specified by the user in the real-time section of the extractor configuration file.

Advantages of using Arcion

  • Arcion allows smooth schema management of your data, ensuring data integrity, reliability, and consistency. It reads directly from the transaction logs and not from the database itself.
  • Arcion offers sub-second latency due to its distributed & highly scalable architecture. It supports auto-scaling vertically and horizontally so users can get ultra-low latency and maximum throughput even as their data volume grows.
  • Arcion's built-in data validation works automatically to ensure data integrity is consistently maintained, so zero data loss occurs during scalable data migration and replication.
  • Arcion offers effortless setup and maintenance with its no-code platform that removes DevOps dependencies, so you don't need to incorporate Kafka, Spark Streaming, Kinesis, or other streaming tools.
  • Arcion is SOC 2 Type 1 & Type 2, HIPAA, and PCI compliant. These enterprise-grade security and compliance standards ensure data governance.

Method 2: Informix to MySQL by custom code 

Next, we'll delve into a method that involves using Informix's UNLOAD and LOAD statements to export data, or an entire database, into text files. Although many different methods can be used, we will use UNLOAD and LOAD statements for simplicity. We'll also generate a schema file containing the necessary schema information for subsequent import into a MySQL database using the mysqlimport tool.

Now, let's delve into the practical steps in creating a connection between Informix and MySQL, focusing on two key aspects: schema migration and data migration.

Step 1: Schema Migration

Before migrating the actual data, the schema definition must first be transferred. If you have an existing schema definition used with Informix to create a table, that schema can be used to create an equivalent table in MySQL with appropriate modifications, such as data type and index adjustments. The Informix dbschema utility can be employed for this purpose. Below is an example of how to export an exampleTable from exampleDB using dbschema in a terminal (that has access to the dbschema command):

dbschema -d exampleDB -t exampleTable

This would generate an output like the following:

CREATE TABLE table1 (
  id char(11) NOT NULL PRIMARY KEY,
  lname char(40) NOT NULL,
  fname char(20) NOT NULL,
  phone char(12) NOT NULL,
  address char(40) NULL,
  city char(20) NULL,
  state char(2) NULL,
  zip char(5) NULL,
  contract bit NOT NULL
);

Remember to account for differences in data types, index types, and other aspects between Informix and MySQL when copying the CREATE TABLE statement into the MySQL client program to create the table on the MySQL server. Once the schema is exported and in the format MySQL requires, run the SQL script to create the table.
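
As a hedged illustration, the Informix definition above might translate to MySQL roughly as follows. Mapping the bit-style column to TINYINT(1) and choosing the InnoDB engine are illustrative decisions for this sketch, not requirements:

```sql
CREATE TABLE table1 (
  id       CHAR(11) NOT NULL PRIMARY KEY,
  lname    CHAR(40) NOT NULL,
  fname    CHAR(20) NOT NULL,
  phone    CHAR(12) NOT NULL,
  address  CHAR(40) NULL,
  city     CHAR(20) NULL,
  state    CHAR(2)  NULL,
  zip      CHAR(5)  NULL,
  contract TINYINT(1) NOT NULL  -- bit-style column mapped to a MySQL boolean-style type
) ENGINE=InnoDB;
```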

Step 2: Data Migration

When the table is created on MySQL, the data can be exported in Informix using the unload command from DB-Access and imported on MySQL using the mysqlimport tool. Both tools accept text file format data and have tab-delimited columns by default, with each row of data on a separate line. 

The sample below shows how you can use the unload command from DB-Access on Informix to export a table known as exampleTable from the exampleDB schema discussed earlier.

UNLOAD TO 'exampleTable' DELIMITER '\t'
SELECT * FROM exampleTable

The exported text file is then imported into MySQL using the mysqlimport command below.

mysqlimport --local --delete --user=root test exampleTable

test.exampleTable: Records: 23  Deleted: 0  Skipped: 0  Warnings: 0

 

The results show that the table was successfully imported into the test schema. The --local flag allowed the import file to be read from the local file system, and with the --delete option enabled, the import program deletes all existing rows in exampleTable in the test database before loading the new data. This makes the command repeatable with consistent results. To check the table status, you can run the SQL statement below. From the output, you can confirm the import took place as intended.

SHOW TABLE STATUS LIKE 'exampleTable' \G
*************************** 1. row ***************************
           Name: exampleTable
         Engine: MyISAM
        Version: 9
     Row_format: Fixed
           Rows: 23
 Avg_row_length: 152
    Data_length: 3496
Max_data_length: 652835028991
   Index_length: 2048
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2004-12-29 18:55:34
    Update_time: 2004-12-29 19:16:04
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)

Limitations and Challenges of Using Custom Code Method

Even though custom code methods for data migration offer a granular approach to database migration, there are some significant downsides. It's important to be aware of the potential challenges of using custom code methods. Some of the key factors to consider include:

  • Performance Issues: You may encounter slower performance when using specific Informix tools for migrating data to MySQL.
  • Database Schema and Data Management: If inappropriate tools are used, you may find it difficult to modify the database schema or have to move the entire database repeatedly.
  • Preparation Time: Using tools like the HPL (High-performance Loader) can demand significant preparation time.
  • Testing and Data Integrity: Extensive testing is required to ensure data integrity and accuracy. There's always a potential risk for data loss or corruption.
  • Complexity in Management: Managing database connections and transactions can add to the complexity of this migration method.
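
As one concrete example of such testing, a quick sanity check after each load is to compare row counts between source and target, plus a coarse content check on the MySQL side. The table name below follows the earlier example:

```sql
-- Run on both Informix (source) and MySQL (target); the counts should match
SELECT COUNT(*) FROM exampleTable;

-- MySQL only: a coarse content check for comparing repeated imports
CHECKSUM TABLE exampleTable;
```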

While the custom code method provides a viable pathway for database migration from Informix to MySQL, thorough planning, careful execution, and a robust testing strategy are key to a successful transition. Of course, other methods, such as Arcion, can bypass many of these factors.

Conclusion 

In this article, we have explored how to migrate Informix to MySQL by looking at two viable methods. First, we looked at using Arcion's data migration platform, and second, we covered how to do a database migration using custom code.

As organizations aim for a smooth and efficient migration, Arcion emerges as a reliable platform to move data from IBM Informix to MySQL. To get started with Arcion today, contact our team of Informix to MySQL migration experts.

Matt is a developer at heart with a passion for data, software architecture, and writing technical content. In the past, Matt worked at some of the largest finance and insurance companies in Canada before pivoting to working for fast-growing startups.
Luke has two decades of experience working with database technologies and has worked for companies like Oracle, AWS, and MariaDB. He is experienced in C++, Python, and JavaScript. He now works at Arcion as an Enterprise Solutions Architect to help companies simplify their data replication process.