Database Migration

These instructions describe how to migrate an ATSD instance running on HBase-0.94 to a version running on the updated HBase-1.2.5.

The instructions apply only to single-node ATSD installations running in pseudo-distributed mode.

ATSD upgrades on Docker containers and Hadoop clusters are covered in their respective documents.


Code  ATSD Revision Number  Java Version  HBase Version  HDFS Version
Old   16999 and earlier     1.7           0.94.29        1.0.3
New   17000 and later       1.8           1.2.5          2.6.4


Installation Type

  • Single-node ATSD installation in pseudo-distributed mode (with HDFS).
  • ATSD revision 16000 and greater. Older ATSD revisions must be upgraded to revision 16000+.
  • Java 8 installation requires root privileges.

Disk Space

The migration procedure requires up to 30% of the reported /opt/atsd size to store migrated records before old data can be deleted.

Determine the size of the ATSD installation directory.

du -hs /opt/atsd
24G /opt/atsd

Check that free disk space is available on the file system containing the /opt/atsd directory.

df -h /opt/atsd
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2       1008G  262G  736G  27% /

If the backup is stored on the same file system, add it to the estimated disk space usage.

Calculate disk space requirements.

Data               Size
Original Data      24G
Backup             24G
Migrated Data      7G (30% of 24G)
Backup + Migrated  31G
Available          736G

In the example above, 736G is sufficient to store 31G of backup and migrated data on the same file system.

Allocate additional disk space, if necessary.
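The estimate above can be scripted. A minimal sketch, using the hypothetical 24G figure from the example:

```shell
# Estimate disk space needed when the backup shares the file system
# with /opt/atsd: one backup copy plus ~30% for migrated records.
data_gb=24                                # reported size of /opt/atsd
migrated_gb=$(( data_gb * 30 / 100 ))     # 30% of the original data
required_gb=$(( data_gb + migrated_gb ))  # backup + migrated records
echo "Required: ${required_gb}G (backup ${data_gb}G + migrated ${migrated_gb}G)"
```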

Check Record Count for Testing

Log in to the ATSD web interface.

Open the SQL > SQL Console page.

Count rows for the previously selected metric and compare the results.


The number of records must match the results after the migration.
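A minimal example of such a count query, assuming a hypothetical metric named cpu_busy was selected for verification:

```sql
-- "cpu_busy" is a hypothetical metric name; substitute the metric
-- selected for verification.
SELECT COUNT(*) FROM "cpu_busy"
```

Record the returned count so it can be compared with the result of the same query after the migration.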

Install Java 8

Install Java 8 on the ATSD server as described.

Switch back to the axibase user.

su axibase

Execute the remaining steps as the axibase user.

Prepare ATSD For Upgrade

Change to ATSD installation directory.

cd /opt/atsd

Stop ATSD.

/opt/atsd/bin/ stop

Execute the jps command. Verify that the Server process is not present in the jps output.

If the Server process continues running, follow the safe ATSD shutdown procedure.
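The shutdown check can be scripted. A sketch, assuming the JDK jps utility is on the PATH:

```shell
# Report whether the ATSD Server process is still present in jps output.
if jps 2>/dev/null | grep -qw Server; then
  status="running"
  echo "ATSD Server is still running: follow the safe shutdown procedure" >&2
else
  status="stopped"
  echo "ATSD Server is not running"
fi
```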

Remove deprecated settings.

sed -i '/^' /opt/atsd/atsd/conf/

Check HBase Status

Check HBase for consistency.

/opt/atsd/hbase/bin/hbase hbck

The expected message is:

0 inconsistencies detected.
Status: OK

Follow recovery procedures if inconsistencies are reported.

Stop HBase.

/opt/atsd/bin/ stop

Execute the jps command and verify that the HMaster, HRegionServer, and HQuorumPeer processes are not present in the jps command output.

1200 DataNode
1308 SecondaryNameNode
5324 Jps
1092 NameNode

If one of the above processes continues running, follow the safe HBase shutdown procedure.

Check HDFS Status

Check HDFS for consistency.

/opt/atsd/hadoop/bin/hadoop fsck /hbase/

The expected message is:

The filesystem under path '/hbase/' is HEALTHY.

If corrupted files are reported, follow the recovery procedure.

Stop HDFS.

/opt/atsd/bin/ stop

Execute the jps command and verify that the NameNode, SecondaryNameNode, and DataNode processes are not present in the jps command output.


Copy the ATSD installation directory to a backup directory:

cp -R /opt/atsd /home/axibase/atsd-backup

Upgrade Hadoop

Delete the old Hadoop directory.

rm -rf /opt/atsd/hadoop

Download a pre-configured Hadoop-2.6.4 archive and unpack it in the ATSD installation directory.

curl -o /opt/atsd/hadoop.tar.gz
tar -xf /opt/atsd/hadoop.tar.gz -C /opt/atsd/

Verify the path to the Java 8 home.

dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"

Update the JAVA_HOME variable to Java 8 in the /opt/atsd/hadoop/etc/hadoop/ file.

jp=`dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"`; sed -i "s,^export JAVA_HOME=.*,export JAVA_HOME=$jp,g" /opt/atsd/hadoop/etc/hadoop/ ; echo $jp

Upgrade Hadoop.

/opt/atsd/hadoop/sbin/ start namenode -upgradeOnly

Review the log file.

tail /opt/atsd/hadoop/logs/hadoop-axibase-namenode-*.log

The expected output:

2017-07-26 16:16:16,974 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-07-26 16:16:16,959 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-07-26 16:16:16,962 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2017-07-26 16:16:16,986 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-07-26 16:16:16,986 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 1511498 milliseconds
2017-07-26 16:16:16,995 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 9 millisecond(s).

Start HDFS.


Check that HDFS daemons are running.

/opt/atsd/hadoop/bin/hdfs dfsadmin -report

The command returns information about HDFS usage and available data nodes.

Finalize HDFS upgrade.

/opt/atsd/hadoop/bin/hdfs dfsadmin -finalizeUpgrade

The command displays Finalize upgrade successful.

Run the jps command to check that the NameNode, SecondaryNameNode, and DataNode processes are running.

Upgrade HBase

Delete the old HBase directory.

rm -rf /opt/atsd/hbase

Download a pre-configured version of HBase-1.2.5 and unpack it into the ATSD installation directory:

curl -o /opt/atsd/hbase.tar.gz
tar -xf /opt/atsd/hbase.tar.gz -C /opt/atsd/

Update the JAVA_HOME to Java 8 in the /opt/atsd/hbase/conf/ file.

jp=`dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"`; sed -i "s,^export JAVA_HOME=.*,export JAVA_HOME=$jp,g" /opt/atsd/hbase/conf/ ; echo $jp

Check available physical memory on the server.

grep "MemTotal" /proc/meminfo
MemTotal:        1922136 kB

If the server has more than 4 gigabytes of physical memory, increase the HBase JVM heap size to 50% of physical memory in the file.

export HBASE_HEAPSIZE=4096
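The 50% figure can be computed from /proc/meminfo. A sketch, assuming a Linux server (the 8 GB fallback is a hypothetical default for systems where /proc/meminfo cannot be read):

```shell
# Compute 50% of physical memory, in megabytes, for HBASE_HEAPSIZE.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null)
# Fall back to an assumed 8 GB (8388608 kB) if MemTotal cannot be read.
heap_mb=$(( ${mem_kb:-8388608} / 2 / 1024 ))
echo "export HBASE_HEAPSIZE=${heap_mb}"
```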

Run the HBase upgrade check.

/opt/atsd/hbase/bin/hbase upgrade -check

Review the hbase.log file:

tail /opt/atsd/hbase/logs/hbase.log
INFO  [main] util.HFileV1Detector: Count of HFileV1: 0
INFO  [main] util.HFileV1Detector: Count of corrupted files: 0
INFO  [main] util.HFileV1Detector: Count of Regions with HFileV1: 0
INFO  [main] migration.UpgradeTo96: No HFileV1 found.

Start Zookeeper and execute the HBase upgrade.

/opt/atsd/hbase/bin/ start zookeeper
/opt/atsd/hbase/bin/hbase upgrade -execute

Review the hbase.log file:

tail -n 20 /opt/atsd/hbase/logs/hbase.log
2017-08-01 09:32:44,047 INFO  migration.UpgradeTo96 - Successfully completed Namespace upgrade
2017-08-01 09:32:44,049 INFO  migration.UpgradeTo96 - Starting Znode upgrade
2017-08-01 09:32:44,083 INFO  migration.UpgradeTo96 - Successfully completed Znode upgrade

Stop Zookeeper.

/opt/atsd/hbase/bin/ stop zookeeper

Start all HBase services.


Verify that the jps command output contains HMaster, HRegionServer, and HQuorumPeer processes.

Check that ATSD tables are available in HBase:

echo "list" | /opt/atsd/hbase/bin/hbase shell 2>/dev/null | grep -v "\["

Execute a sample scan in HBase.

echo "scan 'atsd_d', LIMIT => 1" | /opt/atsd/hbase/bin/hbase shell 2>/dev/null
ROW                  COLUMN+CELL
1 row(s) in 0.0560 seconds

Customize Map-Reduce Settings

If the server has more than 8 gigabytes of available memory, customize the Map-Reduce settings.

Start Map-Reduce Services

Start Yarn servers:


Start Job History server:

/opt/atsd/hadoop/sbin/ --config /opt/atsd/hadoop/etc/hadoop/ start historyserver

Run the jps command to check that the following processes are running:

9849 ResourceManager  # M/R
25902 NameNode # HDFS
6857 HQuorumPeer # HBase
26050 DataNode # HDFS
26262 SecondaryNameNode
10381 JobHistoryServer  # M/R
10144 NodeManager # M/R
6940 HMaster # HBase
7057 HRegionServer # HBase

Configure Migration Job

Download the migration.jar file to the /opt/atsd directory.

curl -o /opt/atsd/migration.jar

Check that the current Java version is 8.

java -version

Add migration.jar and the HBase classes to the classpath.

export CLASSPATH=$CLASSPATH:$(/opt/atsd/hbase/bin/hbase classpath):/opt/atsd/migration.jar

Set HADOOP_CLASSPATH for the Map-Reduce job.

export HADOOP_CLASSPATH=$(/opt/atsd/hbase/bin/hbase classpath):/opt/atsd/migration.jar
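Before launching the jobs, it can help to confirm that the jar is in place and on the classpath. A sketch; these checks are illustrative and not part of the original procedure:

```shell
# Verify that migration.jar exists and is included in HADOOP_CLASSPATH.
jar=/opt/atsd/migration.jar
if [ -f "$jar" ]; then
  echo "migration.jar found"
else
  echo "migration.jar missing: download it to /opt/atsd first" >&2
fi
case ":${HADOOP_CLASSPATH:-}:" in
  *migration.jar*) echo "HADOOP_CLASSPATH includes migration.jar" ;;
  *) echo "HADOOP_CLASSPATH does not include migration.jar" >&2 ;;
esac
```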

Run Migration Map-Reduce Job

Create Backup Tables

Launch the table backup task and confirm its execution.

java com.axibase.migration.admin.TableCloner -d

The task creates backups by appending a _backup suffix to the following tables:

  • atsd_d_backup
  • atsd_li_backup
  • atsd_metric_backup
  • atsd_forecast_backup
  • atsd_delete_task_backup

Sample output:

Table 'atsd_li' successfully deleted.
Snapshot 'atsd_metric_snapshot_1501582066133' of the table 'atsd_metric' created.
Table 'atsd_metric_backup' is cloned from snapshot 'atsd_metric_snapshot_1501582066133'. The original data are available in this table.
Snapshot 'atsd_metric_snapshot_1501582066133' deleted.
Table 'atsd_metric' successfully disabled.
Table 'atsd_metric' successfully deleted.

Map-Reduce Settings

When running the Map-Reduce jobs described in the next section, the system can encounter a virtual memory error.

17/08/01 10:19:50 INFO mapreduce.Job: Task Id : attempt_1501581371115_0003_m_000000_0, Status : FAILED
Container [...2] is running beyond virtual memory limits... Killing container.

If this error occurs, adjust the Map-Reduce settings and retry the job with the -r flag appended, for example: /opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.DeleteTaskMigration -m 2 -r.
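One common remedy in Hadoop 2.x is to relax the NodeManager virtual-memory check. A sketch of the setting, assuming it is added to /opt/atsd/hadoop/etc/hadoop/yarn-site.xml; the file location is an assumption, and this tweak is an alternative to the original procedure's settings adjustment:

```xml
<!-- Disables the NodeManager virtual-memory check that kills
     containers running beyond vmem limits. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```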

In case of other errors, review job logs for the application ID displayed above:

/opt/atsd/hadoop/bin/yarn logs -applicationId application_1501581371115_0001 | less

Migrate Records from Backup Tables

Step 1. Migrate data from the atsd_delete_task_backup table by launching the task and confirming its execution.

/opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.DeleteTaskMigration -m 2
17/08/01 10:14:27 INFO mapreduce.Job: Job job_1501581371115_0001 completed successfully
17/08/01 10:14:27 INFO mapreduce.Job: Counters: 62
    File System Counters
        FILE: Number of bytes read=6

Step 2. Migrate data from the atsd_forecast table.

/opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.ForecastMigration -m 2

Step 3. Migrate data from the atsd_li table.

/opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.LastInsertMigration -m 2

This migration task writes intermediate results into a temporary directory for diagnostics.

WARN mapreduce.LastInsertMigration: Deleting outputFolder hdfs://localhost:8020/user/axibase/copytable/1609980393918240854 failed!
WARN mapreduce.LastInsertMigration: Data from outputFolder hdfs://localhost:8020/user/axibase/copytable/1609980393918240854 not needed any more. Delete this outputFolder via hdfs cli.
INFO mapreduce.LastInsertMigration: Last Insert table migration job took 37 seconds.

Delete the diagnostics folder manually:

/opt/atsd/hadoop/bin/hdfs dfs -rm -r /user/axibase/copytable

Step 4. Migrate data to the atsd_metric table.

/opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.MetricMigration -m 2

Step 5. Migrate data to the atsd_d table.

/opt/atsd/hadoop/bin/yarn com.axibase.migration.mapreduce.DataMigrator -m 2
17/08/01 10:44:31 INFO mapreduce.DataMigrator: HFiles loaded, data table migration job completed, elapsed time: 15 minutes.

The DataMigrator job takes a long time to complete. You can monitor the job progress in the Yarn web interface at http://atsd_hostname:8050/.

The Yarn interface is automatically stopped once the DataMigrator is finished.

Step 6. Migration is now complete.

Step 7. Stop Map-Reduce servers.

/opt/atsd/hadoop/sbin/ --config /opt/atsd/hadoop/etc/hadoop/ stop historyserver

Start the New Version of ATSD

Remove old ATSD application files.

rm -rf /opt/atsd/atsd/bin/atsd*.jar

Download ATSD application files.

curl -o /opt/atsd/atsd/bin/atsd.17370.jar
curl -o /opt/atsd/scripts.tar.gz

Replace old script files.

tar -xf /opt/atsd/scripts.tar.gz -C /opt/atsd/

Set JAVA_HOME in /opt/atsd/atsd/bin/ file:

jp=`dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"`; sed -i "s,^export JAVA_HOME=.*,export JAVA_HOME=$jp,g" /opt/atsd/atsd/bin/

Start ATSD.

/opt/atsd/bin/ start

Check Migration Results

Log in to the ATSD web interface.

Open the SQL tab.

Execute the query and compare the row count.


The number of records must match the results prior to migration.

Delete Backups

Delete the backup tables in HBase.

/opt/atsd/hbase/bin/hbase shell
disable_all '.*_backup'
drop_all '.*_backup'

Delete the backup directory.

rm -rf /home/axibase/atsd-backup

Remove the downloaded archives.

rm /opt/atsd/hadoop.tar.gz
rm /opt/atsd/hbase.tar.gz
rm /opt/atsd/scripts.tar.gz