Flink 1.12.x: session cluster mode deployment

The steps are the same as for Flink 1.12.x per-job cluster mode (yarn-cluster) deployment.

Prerequisites

A Hadoop cluster must already be deployed and running.
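To confirm the cluster is up, you can list the Java daemons on hadoop1 (the expected process names assume HDFS and YARN are both deployed on this node):

# expect HDFS/YARN daemons such as NameNode and ResourceManager
jps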

Log in to hadoop1

Log in to hadoop1 and perform all of the following steps there.

Upload

Upload flink-1.12.5-bin-scala_2.12.tgz to the /program directory on hadoop1.
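If the archive is on your local machine, it can be copied over with scp, for example (the user and source path here are illustrative):

scp flink-1.12.5-bin-scala_2.12.tgz root@hadoop1:/program/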

Extract

As the root user, run the following command in the /program directory:

tar zxvf flink-1.12.5-bin-scala_2.12.tgz --no-same-owner
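The --no-same-owner flag makes the extracted files owned by root (the extracting user) instead of the owner recorded in the archive. Afterwards the Flink home directory should be in place:

ls /program/flink-1.12.5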

Configure environment variables

Edit bigdata_env.sh

vim /etc/profile.d/bigdata_env.sh

Add the Hadoop configuration directory

Add the following line:

export HADOOP_CONF_DIR=/program/hadoop-3.0.3/etc/hadoop

The following environment variable is also required:

export HADOOP_CLASSPATH=`hadoop classpath`

Otherwise Flink cannot find the Hadoop/YARN classes and starting the YARN session typically fails with a class-not-found error. Note that the backquoted hadoop classpath command is evaluated when the profile is sourced, so hadoop must already be on the PATH at that point.

# Configure FLINK_HOME
export FLINK_HOME=/program/flink-1.12.5
export PATH=${FLINK_HOME}/bin:$PATH
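After these edits, /etc/profile.d/bigdata_env.sh should contain at least the following (paths match this guide's layout; adjust for yours):

export HADOOP_CONF_DIR=/program/hadoop-3.0.3/etc/hadoop
export HADOOP_CLASSPATH=`hadoop classpath`
# Configure FLINK_HOME
export FLINK_HOME=/program/flink-1.12.5
export PATH=${FLINK_HOME}/bin:$PATH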

Apply the changes immediately

source /etc/profile
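As a quick sanity check, print the variables and make sure the Flink CLI resolves (exact output varies by environment):

echo $HADOOP_CONF_DIR
echo $FLINK_HOME
# should report Version: 1.12.5
flink --version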

Edit conf/flink-conf.yaml

Note: this is a YAML file, so there must be a space after the colon in each key: value pair.

vim conf/flink-conf.yaml

taskmanager.numberOfTaskSlots

Find the following setting and change it as follows:

taskmanager.numberOfTaskSlots: 2

This effectively gives each TaskManager 2 task slots, i.e. it can execute up to 2 tasks (roughly, 2 worker threads) concurrently.
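For example, with 3 TaskManagers at 2 slots each (the TaskManager count here is illustrative), the session cluster offers 6 slots in total, so the jobs running in it can use a combined parallelism of up to 6.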

Optional change

In the Flink configuration file flink-conf.yaml, set:

classloader.check-leaked-classloader: false

Without this setting, you may see the following error:

Exception in thread "Thread-7" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
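With the configuration in place, the session cluster can be started on YARN. A minimal sketch (memory sizes and slot count are illustrative; -d runs the session detached):

# start a detached Flink session on YARN
# -s: slots per TaskManager, -jm/-tm: JobManager/TaskManager memory
$FLINK_HOME/bin/yarn-session.sh -d -s 2 -jm 1024m -tm 2048m

Jobs can then be submitted to the session with flink run, and the session is stopped by killing its YARN application (yarn application -kill <applicationId>).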

Original source: http://malaoshi.top/show_1IX2SbheZ7fI.html