Cluster Deployment

1. Before you begin (please install the required basic software yourself)

  • PostgreSQL (8.2.15+) or MySQL (5.7): choose one; JDBC Driver 5.1.47+ is required if MySQL is used
  • JDK (1.8+): required. Double-check that the JAVA_HOME and PATH environment variables are configured in /etc/profile
  • ZooKeeper (3.4.6+): required
  • pstree or psmisc: "pstree" is required for macOS and "psmisc" is required for Fedora/Red Hat/CentOS/Ubuntu/Debian
  • Hadoop (2.6+) or MinIO: optional. If you need the resource upload function, on a single machine you can choose a local file directory as the upload folder (this does not require deploying Hadoop); of course, you can also choose to upload to Hadoop or MinIO.
 Tip: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
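As a quick sketch of the JDK prerequisite (assuming an install under /opt/soft/java, which you should replace with your actual path), the variables can be set in /etc/profile like this:

```shell
# Hedged sketch: JDK environment variables for /etc/profile.
# /opt/soft/java is an assumed install location; replace it with yours.
export JAVA_HOME=/opt/soft/java
export PATH=$JAVA_HOME/bin:$PATH
```

After editing, reload with `source /etc/profile` and verify with `java -version`.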

2. Download the binary package

  • Download the latest version of the default installation package to the server deployment directory; for example, use /opt/dolphinscheduler as the installation and deployment directory. Download the package from the download page, move it to the installation and deployment directory, and then uncompress it.
# Create the deployment directory. Do not choose a high-privilege directory such as /root or /home as the deployment directory.
mkdir -p /opt/dolphinscheduler;
cd /opt/dolphinscheduler;
# uncompress
tar -zxvf apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;

mv apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin  dolphinscheduler-bin

3. Create the deployment user and configure hosts mapping

  • Create a deployment user on **all** deployment machines, and be sure to configure passwordless sudo for it. If we plan to deploy DolphinScheduler on 4 machines: ds1, ds2, ds3, and ds4, we first need to create a deployment user on each machine.
# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
useradd dolphinscheduler;

# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
echo "dolphinscheduler123" | passwd --stdin dolphinscheduler

# Configure sudo passwordless
echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers

 - Because the task execution service uses 'sudo -u {linux-user}' to switch between different Linux users to run jobs in multi-tenant mode, the deployment user needs to have passwordless sudo permission. First-time learners can ignore this if they don't understand it.
 - If you find a "Defaults requiretty" line in the "/etc/sudoers" file, comment it out as well.
 - If you need to use resource upload, grant the deployment user permission to operate on the local file system, HDFS, or MinIO.
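Since a malformed /etc/sudoers line can break sudo entirely, it can help to build the entry and sanity-check its shape before appending it. A minimal sketch (the dolphinscheduler user name is the example from above):

```shell
# Hedged sketch: build the sudoers entry and check its shape before
# appending it to /etc/sudoers (a syntax error there can break sudo).
deploy_user="dolphinscheduler"   # the deployment user from the step above
entry="${deploy_user}  ALL=(ALL)  NOPASSWD: ALL"

# rough shape check: user ALL=(ALL) NOPASSWD: ALL
if printf '%s\n' "$entry" | grep -Eq '^[a-z_][a-z0-9_-]*[[:space:]]+ALL=\(ALL\)[[:space:]]+NOPASSWD: ALL$'; then
    echo "sudoers entry looks OK"
fi
```

Where available, `visudo -cf` on a temporary copy of the file is the authoritative syntax check.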

4. Configure hosts mapping, SSH access, and directory permissions

  • Use the first machine (hostname ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and log in as root on ds1.

    vi /etc/hosts
    # add ip-hostname mappings, for example:
    # 192.168.xx.xx ds1
    # 192.168.xx.xx ds2
    # 192.168.xx.xx ds3
    # 192.168.xx.xx ds4

    Note: please delete or comment out the 127.0.0.1 loopback mapping line

  • Sync /etc/hosts on ds1 to all deployment machines

    for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostnames of the machines you want to deploy
    do
        sudo scp -r /etc/hosts  $ip:/etc/          # You need to enter the root password during this operation
    done

    Note: you can use sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/ to avoid typing the password.

    Install sshpass on CentOS:

    1. Install epel

      yum install -y epel-release

      yum repolist

    2. After installing epel, you can install sshpass

      yum install -y sshpass

  • On ds1, switch to the deployment user and configure ssh passwordless login

    su dolphinscheduler;
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

  Note: if the configuration succeeded, the dolphinscheduler user does not need to enter a password when executing the command ssh localhost

  • On ds1, configure the deployment user dolphinscheduler to ssh to the other machines to be deployed.

    su dolphinscheduler;
    for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostnames of the machines you want to deploy
    do
        ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during this operation
    done
    # You can use `sshpass -p xxx ssh-copy-id $ip` to avoid typing the password.
  • On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory.

    sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin

5. Database initialization

  • Log in to the database. The default database is PostgreSQL. If you choose MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
mysql -h192.168.xx.xx -P3306 -uroot -p
  • After entering the database command line window, execute the database initialization command and set the user and password. Note: {user} and {password} need to be replaced with a specific database username and password
   mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
   mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
   mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
   mysql> flush privileges;
  • Create tables and import basic data

    • Modify the following configuration in datasource.properties under the conf directory
      vi conf/datasource.properties
    • If you choose MySQL, please comment out the relevant PostgreSQL configuration (and vice versa). You also need to manually add the mysql-connector-java driver jar package to the lib directory, and then configure the database connection information correctly.
      # mysql
      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # Replace the correct IP address
      spring.datasource.username=xxx						# replace the correct {user} value
      spring.datasource.password=xxx						# replace the correct {password} value
    • After modifying and saving, execute the table-creation and data-import script in the script directory.
    sh script/create-dolphinscheduler.sh

Note: if executing the above script reports a "/bin/java: No such file or directory" error, please configure the JAVA_HOME and PATH variables in /etc/profile
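Before running the initialization script, it can save a round-trip to confirm that the host and port in the JDBC URL are what you intended. A hedged helper (the URL below is the placeholder pattern from the configuration above):

```shell
# Hedged sketch: extract host:port from the configured JDBC URL so you can
# sanity-check reachability (e.g. with `nc -z host port`) before initializing.
url="jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"
hostport=${url#jdbc:mysql://}   # strip the scheme prefix
hostport=${hostport%%/*}        # drop everything from the database name on
echo "database endpoint: $hostport"
```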

6. Modify runtime parameters

  • Modify the environment variables in dolphinscheduler_env.sh in the 'conf/env' directory (taking the relevant software installed under '/opt/soft' as an example)

        export HADOOP_HOME=/opt/soft/hadoop
        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
        #export SPARK_HOME1=/opt/soft/spark1
        export SPARK_HOME2=/opt/soft/spark2
        export PYTHON_HOME=/opt/soft/python
        export JAVA_HOME=/opt/soft/java
        export HIVE_HOME=/opt/soft/hive
        export FLINK_HOME=/opt/soft/flink
        export DATAX_HOME=/opt/soft/datax/bin/
     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Variables that are not used can be ignored or commented out.`
  • Create a soft link for the jdk to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example)

    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
  • Modify the parameters in the one-click deployment config file conf/config/install_config.conf, paying special attention to the configuration of the following parameters.

    # choose mysql or postgresql
    # Database connection address and port
    # database name
    # database username
    # database password
    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
    #Zookeeper cluster
    # Note: the target installation path for dolphinscheduler, please do not configure it the same as the current path (pwd)
    # deployment user
    # Note: the deployment user needs to have sudo privileges and permission to operate hdfs. If hdfs is enabled, the resource root directory needs to be created manually
    # alert config,take QQ email for example
    # mail protocol
    # mail server host
    # mail server port
    # note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
    # mail sender
    # mail user
    # mail sender password
    # note: The mail.passwd is email service authorization code, not the email login password.
    # Whether the TLS mail protocol is supported, true means supported and false means not supported
    # Whether the SSL mail protocol is supported, true means supported and false means not supported
    # note: only one of TLS and SSL can be set to true
    # note: sslTrust is the same as mailServerHost
    # resource storage type: HDFS, S3, NONE
    # If resourceStorageType = HDFS and your Hadoop cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory (in this example, under /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or hostname.
    # if S3, write the S3 address, for example: s3a://dolphinscheduler
    # Note: for S3, be sure to create the root directory /dolphinscheduler
    # if the hadoop resourcemanager is not used, please keep the default value; if resourcemanager HA is enabled, please type the HA ips; if resourcemanager is single, make this value empty
    # if resourcemanager HA is enabled or resourcemanager is not used, please skip this value; if resourcemanager is single, you only need to replace yarnIp1 with the actual resourcemanager hostname
    # resource storage path on HDFS/S3; resource files will be stored under this path. Please make sure the directory exists on hdfs and has read/write permissions. /dolphinscheduler is recommended
    # the user who has permission to create directories under the HDFS/S3 root path
    # Note: if kerberos is enabled, please config hdfsRootUser=
    # install hosts
    # Note: hostname list of the machines to install DolphinScheduler on. For pseudo-distributed deployment, just write one pseudo-distributed hostname
    # ssh port, default 22
    # Note: if ssh port is not default, modify here
    # run master machine
    # Note: list of hosts hostname for deploying master
    # run worker machine
    # note: you need to write the worker group name for each worker; the default value is "default"
    # run alert machine
    # note: list of machine hostnames for deploying alert server
    # run api machine
    # note: list of machine hostnames for deploying api server


    • If you need to upload resources to the Hadoop cluster and the NameNode of the Hadoop cluster is configured with HA, you need to enable the HDFS resource upload and copy core-site.xml and hdfs-site.xml from the Hadoop cluster to /opt/dolphinscheduler/conf. If the NameNode is not HA, skip this step.
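The parameters commented on above correspond to key=value entries in install_config.conf. As a hedged sketch only (key names follow the 1.3.x release; all hostnames, addresses, and passwords are placeholders to replace, and every key should be verified against the file shipped in your package):

```shell
# Sketch of install_config.conf values (1.3.x key names; placeholders throughout)
dbtype="mysql"                              # choose mysql or postgresql
dbhost="192.168.xx.xx:3306"                 # database connection address and port
dbname="dolphinscheduler"                   # database name
username="xxx"                              # database username
password="xxx"                              # database password
zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"   # Zookeeper cluster
installPath="/opt/soft/dolphinscheduler"    # target installation path
deployUser="dolphinscheduler"               # deployment user
mailServerHost="smtp.qq.com"                # mail server host (QQ email as example)
mailServerPort="25"                         # mail server port
mailSender="xxx@qq.com"                     # mail sender
mailUser="xxx@qq.com"                       # mail user
mailPassword="xxx"                          # email service authorization code
starttlsEnable="true"                       # TLS support
sslEnable="false"                           # SSL support (only one of TLS/SSL true)
sslTrust="smtp.qq.com"                      # same as mailServerHost
resourceStorageType="HDFS"                  # HDFS, S3, or NONE
defaultFS="hdfs://mycluster:8020"           # namenode cluster name if HA, else ip:port
yarnHaIps="192.168.xx.xx,192.168.xx.xx"     # resourcemanager HA ips, empty if single
singleYarnIp="yarnIp1"                      # single resourcemanager hostname
resourceUploadPath="/dolphinscheduler"      # resource path on HDFS/S3
hdfsRootUser="hdfs"                         # user allowed to create dirs under the root path
ips="ds1,ds2,ds3,ds4"                       # install hosts
sshPort="22"                                # ssh port, default 22
masters="ds1,ds2"                           # hosts running master
workers="ds3:default,ds4:default"           # hosts running worker, with worker group name
alertServer="ds2"                           # host running alert server
apiServers="ds1"                            # host running api server
```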

7. Automated Deployment

  • Switch to the deployment user and execute the one-click deployment script

    sh install.sh

    For the first deployment, the following message appears in step 3 (`3, stop server`) during operation and can be ignored:
    sh: bin/dolphinscheduler-daemon.sh: No such file or directory
  • After the script completes, the following 5 services will be started. Use the jps command to check whether the services are started (jps comes with the Java JDK)

    MasterServer         ----- master service
    WorkerServer         ----- worker service
    LoggerServer         ----- logger service
    ApiApplicationServer ----- api service
    AlertServer          ----- alert service

If the above services are started normally, the automatic deployment is successful.
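As a hedged convenience (not part of the distribution), a small helper can scan a jps listing for the five expected services:

```shell
# Hedged helper (not shipped with DolphinScheduler): check a `jps` listing
# for the five expected services and report the first one that is missing.
check_services() {
    listing="$1"
    for s in MasterServer WorkerServer LoggerServer ApiApplicationServer AlertServer; do
        printf '%s\n' "$listing" | grep -q "$s" || { echo "missing: $s"; return 1; }
    done
    echo "all services running"
}

# usage: check_services "$(jps)"
```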

After the deployment is successful, you can view the logs, which are stored in the logs folder.

    ├── dolphinscheduler-alert-server.log
    ├── dolphinscheduler-master-server.log
    ├── dolphinscheduler-worker-server.log
    ├── dolphinscheduler-api-server.log
    └── dolphinscheduler-logger-server.log


8. Start and stop services

  • Stop all services

    sh ./bin/stop-all.sh

  • Start all services

    sh ./bin/start-all.sh

  • Start and stop the master service

    sh ./bin/dolphinscheduler-daemon.sh start master-server
    sh ./bin/dolphinscheduler-daemon.sh stop master-server

  • Start and stop the worker service

    sh ./bin/dolphinscheduler-daemon.sh start worker-server
    sh ./bin/dolphinscheduler-daemon.sh stop worker-server

  • Start and stop the api service

    sh ./bin/dolphinscheduler-daemon.sh start api-server
    sh ./bin/dolphinscheduler-daemon.sh stop api-server

  • Start and stop the logger service

    sh ./bin/dolphinscheduler-daemon.sh start logger-server
    sh ./bin/dolphinscheduler-daemon.sh stop logger-server

  • Start and stop the alert service

    sh ./bin/dolphinscheduler-daemon.sh start alert-server
    sh ./bin/dolphinscheduler-daemon.sh stop alert-server

Note: Please refer to the "Architecture Design" section for service usage