Upgrade Path from pre-4.8 to Versions 4.8+

Modified on Sat, 11 Jun 2022 at 06:36 PM

Applies to

Any Airlock Server running a build earlier than v4.8 must be upgraded to v4.8.1 as part of the mandatory upgrade path.

This only applies to on-premises deployments; for Cloud Hosted instances this process is performed by an engineer on the backend.


Upgrading to Airlock v4.8.1 performs a required database re-index due to database changes introduced in Airlock v4.8 and later.


Prior to any upgrade, take a snapshot of the server as a safety measure in case any issues occur during or after the upgrade of the Airlock server.

The next step is to check the execution history count for the database, which can be used to estimate how long the re-indexing will take. While the upgrade itself takes only 20-30 minutes, the re-indexing can take anywhere from 30 minutes to a few hours depending on this count.

To get this count, run one of the following commands on your Airlock server. There are two commands to cover both podman and docker: Docker is used on CentOS/RHEL 8.0 and below, podman on CentOS/RHEL 8.1 and above:

podman exec -it airlock_server mongo airlock --eval 'db.exechistories.count()'
docker exec -it airlock_server mongo airlock --eval 'db.exechistories.count()'

This will print the number of execution history entries in your database. As an approximate scale, ~10 million entries will take around 1 hour to re-index.
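As a rough sketch of the guidance above (assuming the ~1 hour per 10 million entries rate scales roughly linearly), the estimate can be computed from the count; the example value here is illustrative:

```shell
# Estimate re-index time from the execution history count.
# Assumption from the guidance above: ~10 million entries take ~1 hour.
COUNT=25000000   # substitute the value returned by db.exechistories.count()
HOURS=$((COUNT / 10000000))
echo "Estimated re-index time: at least ${HOURS} hour(s)"
```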

If you have upwards of 50 million entries, it is recommended that you reconfigure your Database Execution History Retention; this is also considered best practice regardless of your execution history count. The setting can be found on the Web Console in Settings --> Database --> Database Execution History Retention, where a drop-down menu allows you to select the timescale for which the Airlock Server retains Execution History.

It is important to note that this will not remove any repository data for files or their metadata; it only removes execution history older than the timescale set. Once you have set this, you will need to wait at least 24 hours for the change to take effect.

After this has been done, your exechistories count should drop according to the timescale you set, which in turn reduces the time your re-indexing will take.


You should now be able to run the v4.8.1 installer. After starting the installer, you can also tail the database log to monitor the progress of the re-indexing process.

tail -f /opt/airlock_data/log/mongodb.log

The logs displayed by the above command will show the progress of the index builds as they run.

Please note, a de-duplication pass may occur first, in which case it may take a while (up to 30 minutes, sometimes longer) for the re-index logs to appear.

Once the re-indexing has completed, the log output will reduce significantly. At that point you can verify that the re-indexing completed successfully by doing the following:

#Login to the container (use podman or docker as appropriate)
podman exec -it airlock_server /bin/bash --login
docker exec -it airlock_server /bin/bash --login

#Open Mongo
mongo airlock

#Output number of indexes per collection, plus a total
var sum = 0;
db.getCollectionNames().forEach(function(collection) {
        if (!(collection.includes('temp') || collection.includes('tmp'))) {
                print(collection + ": " + db[collection].getIndexes().length);
                sum += db[collection].getIndexes().length;
        }
});
print("Total index count: " + sum);

This will give you a count of all the indexes within the database, with a total at the bottom of the results. The total should be above 130; if it is below, the re-indexing may not have completed successfully.
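As a minimal sketch, the threshold check can be scripted; the variable name here is illustrative, and you would substitute the total printed by the Mongo shell script above:

```shell
# Hypothetical helper: compare the reported total against the 130 threshold.
INDEX_COUNT=142   # substitute the "Total index count" value printed above
if [ "$INDEX_COUNT" -gt 130 ]; then
    echo "Re-indexing looks healthy (${INDEX_COUNT} indexes)"
else
    echo "Index count low (${INDEX_COUNT}); re-run the re-indexing process"
fi
```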

If the index count is below 130, you can run the re-indexing process again; you may have to run it 2-5 times before it reports a healthy index count. If you have run it 5 times and are still not seeing an index count above 130, contact the Airlock Support Team, either by sending an email to support@airlockdigital.com or by opening a ticket directly on the Airlock Freshdesk Portal.

If your index count is above 130, you are ready to upgrade to either the latest version of the Airlock Server or the next version in the mandatory upgrade path.


If you plan to run this upgrade overnight, you may need to perform the following steps to prevent the server from restarting at midnight, as a restart will stop the re-indexing process and force it to fail.


crontab -e

This opens the crontab in your default editor (typically vim). You will need to comment out the following two lines:

*       *       *       *       *       /usr/sbin/ald_servertask
0       0       *       *       *       [[ ! -f "/opt/airlock_data/sem/reindexdb_flag" ]] && /usr/sbin/logrotate -f /etc/logrotate.d/airlock-server

To do this, add a '#' in front of each line so they look like this:

#*       *       *       *       *       /usr/sbin/ald_servertask
#0       0       *       *       *       [[ ! -f "/opt/airlock_data/sem/reindexdb_flag" ]] && /usr/sbin/logrotate -f /etc/logrotate.d/airlock-server

You can then confirm that the change was saved by listing the crontab:

crontab -l

After the upgrade and re-indexing are complete, remove the '#' symbols to reinstate these lines.
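If you prefer to script the edit rather than comment the lines by hand, the commenting step can be sketched with sed. This example operates on a sample file rather than the live crontab, and the match pattern (lines beginning with '*' or '0') is an assumption based on the two entries shown above:

```shell
# Write a sample file containing the two cron entries shown above.
cat > /tmp/cron_sample <<'EOF'
*       *       *       *       *       /usr/sbin/ald_servertask
0       0       *       *       *       [[ ! -f "/opt/airlock_data/sem/reindexdb_flag" ]] && /usr/sbin/logrotate -f /etc/logrotate.d/airlock-server
EOF

# Prefix each entry with '#' to disable it (sed's '&' re-inserts the matched text).
sed -i 's|^[*0]|#&|' /tmp/cron_sample
cat /tmp/cron_sample
```

To apply this to the real crontab you would pipe `crontab -l` through the same sed expression and reinstall the result with `crontab`, keeping a backup copy first.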
