This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.
Attention: A physical machine cannot run more than one master service process or more than one worker service process.
If the physical machine hosting the new master or worker node already has the scheduling service installed, skip to [1.4 Modify configuration]: edit the configuration file `conf/config/install_config.conf` on **all** nodes, add the new nodes to the masters or workers parameter, and restart the scheduling cluster.
Attention: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only calls their clients to submit the corresponding tasks.
# create the installation directory; do not create it under /root, /home, or other high-privilege directories
mkdir -p /opt
cd /opt
# decompress
tar -zxvf apache-dolphinscheduler-1.3.9-bin.tar.gz -C /opt
mv apache-dolphinscheduler-1.3.9-bin dolphinscheduler
Attention: The installation package can be copied directly from an existing environment to an expanded physical machine for use.
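For example, a minimal sketch of copying the package over the network (the hostname `ds1` and the paths are placeholders; adjust them to your environment):
# run on an existing node; copies the package to the new node ds1
scp /opt/apache-dolphinscheduler-1.3.9-bin.tar.gz dolphinscheduler@ds1:/opt/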
# create the deployment user; you need to log in as root to do this. Change the user name as needed; dolphinscheduler is used as the example below
useradd dolphinscheduler
# set the user password; change it as needed; dolphinscheduler123 is used as the example below
echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
# configure passwordless sudo
echo 'dolphinscheduler ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
Attention:
- Because `sudo -u {linux-user}` is used to switch between Linux users when running multi-tenant jobs, the deployment user needs passwordless sudo privileges.
- If the line `Defaults requiretty` appears in the /etc/sudoers file, also comment it out.
- If resource uploads are used, the deployment user also needs read and write permissions on `HDFS or MinIO` (see the sketch below).
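For HDFS, a minimal sketch of granting those permissions (the resource root `/dolphinscheduler` is an assumption; match it to `resource.upload.path` in common.properties):
# run as a user with HDFS admin rights; the path below is a placeholder
hdfs dfs -mkdir -p /dolphinscheduler
hdfs dfs -chown -R dolphinscheduler:dolphinscheduler /dolphinscheduler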
Copy the conf directory from an existing Master/Worker node to replace the conf directory on the new node, then check that the configuration items are correct.
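A minimal sketch, assuming DolphinScheduler is installed under /opt/dolphinscheduler on both machines (the hostname `existing-node` is a placeholder):
# run on the new node; pulls the conf directory from an existing node
scp -r dolphinscheduler@existing-node:/opt/dolphinscheduler/conf /opt/dolphinscheduler/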
Highlights (a quick sanity check is sketched after this list):
- datasource.properties: database connection information
- zookeeper.properties: ZooKeeper connection information
- common.properties: resource storage configuration (if Hadoop is set up, check that the core-site.xml and hdfs-site.xml configuration files exist)
- env/dolphinscheduler_env.sh: environment variables
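To spot-check the copied configuration, something like the following helps (the property names assume the 1.3.x defaults; adjust if your version differs):
# run from the dolphinscheduler install directory; print the key connection settings
grep 'spring.datasource.url' conf/datasource.properties
grep 'zookeeper.quorum' conf/zookeeper.properties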
Modify the environment variables in conf/env/dolphinscheduler_env.sh according to the machine configuration (the following example assumes the software is installed under /opt/soft):
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
# export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/soft/datax/bin/datax.py
export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME:$PATH
Attention: This step is very important. For example, JAVA_HOME and PATH must be configured; variables for components that are not used can be ignored or commented out.
Softlink the JDK to /usr/bin/java (still taking JAVA_HOME=/opt/soft/java as an example):
sudo ln -s /opt/soft/java/bin/java /usr/bin/java
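A quick check that the link works and matches JAVA_HOME:
# both commands should report the same JDK version
java -version
$JAVA_HOME/bin/java -version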
Modify the configuration file conf/config/install_config.conf on all nodes, keeping the following configuration in sync.
# which machines to deploy DS services on; separate multiple physical machines with commas
ips="ds1,ds2,ds3,ds4"
# ssh port, default 22
sshPort="22"
# which machines the master service is deployed on
masters="existing master01,existing master02,ds1,ds2"
# which machines the worker service is deployed on, and which worker group each worker belongs to; "default" below is the group name
workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual, section 5.7 Worker grouping.
On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory
sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
# stop command:
sh bin/stop-all.sh # stop all services
sh bin/dolphinscheduler-daemon.sh stop master-server # stop master service
sh bin/dolphinscheduler-daemon.sh stop worker-server # stop worker service
sh bin/dolphinscheduler-daemon.sh stop logger-server # stop logger service
sh bin/dolphinscheduler-daemon.sh stop api-server # stop api service
sh bin/dolphinscheduler-daemon.sh stop alert-server # stop alert service
# start command:
sh bin/start-all.sh # start all services
sh bin/dolphinscheduler-daemon.sh start master-server # start master service
sh bin/dolphinscheduler-daemon.sh start worker-server # start worker service
sh bin/dolphinscheduler-daemon.sh start logger-server # start logger service
sh bin/dolphinscheduler-daemon.sh start api-server # start api service
sh bin/dolphinscheduler-daemon.sh start alert-server # start alert service
Attention: When using start-all.sh or stop-all.sh, if the machine executing the command is not configured with passwordless SSH to all other machines, it will prompt for passwords.
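A minimal sketch of setting up passwordless SSH from the deployment node to a new node (the hostname `ds4` is a placeholder):
# run as the deployment user on the machine that executes start-all.sh / stop-all.sh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # generate a key pair first if one does not exist
ssh-copy-id dolphinscheduler@ds4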
Use the `jps` command to check whether the services on each node have started (`jps` comes with the Java JDK):
MasterServer ----- master service
WorkerServer ----- worker service
LoggerServer ----- logger service
ApiApplicationServer ----- api service
AlertServer ----- alert service
After a successful startup, you can view the logs, which are stored in the logs folder:
logs/
├── dolphinscheduler-alert-server.log
├── dolphinscheduler-master-server.log
├── dolphinscheduler-worker-server.log
├── dolphinscheduler-api-server.log
└── dolphinscheduler-logger-server.log
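For example, to follow the worker log while verifying a new worker node:
# watch the worker log on the new node
tail -f logs/dolphinscheduler-worker-server.log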
If the above services start normally and the scheduling system page works, check whether the new Master or Worker service appears under [Monitor] in the web UI. If it does, the expansion is complete.
Reduction means removing master or worker services from an existing DolphinScheduler cluster. Shrinking takes two steps; after performing both, the reduction is complete.
# stop command:
sh bin/stop-all.sh # stop all services
sh bin/dolphinscheduler-daemon.sh stop master-server # stop master service
sh bin/dolphinscheduler-daemon.sh stop worker-server # stop worker service
sh bin/dolphinscheduler-daemon.sh stop logger-server # stop logger service
sh bin/dolphinscheduler-daemon.sh stop api-server # stop api service
sh bin/dolphinscheduler-daemon.sh stop alert-server # stop alert service
# start command:
sh bin/start-all.sh # start all services
sh bin/dolphinscheduler-daemon.sh start master-server # start master service
sh bin/dolphinscheduler-daemon.sh start worker-server # start worker service
sh bin/dolphinscheduler-daemon.sh start logger-server # start logger service
sh bin/dolphinscheduler-daemon.sh start api-server # start api service
sh bin/dolphinscheduler-daemon.sh start alert-server # start alert service
Attention: When using start-all.sh or stop-all.sh, if the machine executing the command is not configured with passwordless SSH to all other machines, it will prompt for passwords.
Use the `jps` command to check whether the services on each node have been shut down (`jps` comes with the Java JDK):
MasterServer ----- master service
WorkerServer ----- worker service
LoggerServer ----- logger service
ApiApplicationServer ----- api service
AlertServer ----- alert service
If the corresponding master or worker service no longer appears, the master/worker service has been shut down successfully.
Modify the configuration file conf/config/install_config.conf on all nodes, keeping the following configuration in sync; update the ips, masters, and workers parameters so they no longer include the removed nodes (a hypothetical example follows the block below).
# which machines to deploy DS services on; separate multiple physical machines with commas
ips="ds1,ds2,ds3,ds4"
# ssh port, default 22
sshPort="22"
# which machines the master service is deployed on
masters="existing master01,existing master02,ds1,ds2"
# which machines the worker service is deployed on, and which worker group each worker belongs to; "default" below is the group name
workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"