flink 1.12.x: Could not complete the operation. Number of retries has been exhausted.

Author: 马育民 • 2021-12-23 19:04

# Error description

Running in per-job cluster mode (yarn-cluster), as follows:

```
flink run -m yarn-cluster /program/flink-1.12.5/examples/batch/WordCount.jar --input hdfs://hadoop1:8020/test2/a.txt --output hdfs://hadoop1:8020/flink_result11
```

After a few minutes, the job fails with the error shown in the screenshot below:

[![](https://www.malaoshi.top/upload/pic/flink/Snipaste_2021-12-23_19-07-34.png)](https://www.malaoshi.top/upload/pic/flink/Snipaste_2021-12-23_19-07-34.png)

The most likely cause is that the output path `hdfs://hadoop1:8020/flink_result11` already exists, so it cannot be created. That is the real failure, but the console only surfaces the generic "Number of retries has been exhausted" error shown above.

Original source: http://malaoshi.top/show_1IX2SbY9cFK0.html
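If the existing output path is indeed the culprit, a quick way to confirm and clear it before resubmitting is to check HDFS directly. This is a sketch using the standard `hdfs dfs` commands and the same path as above; adjust the path (or simply choose a fresh output path) for your cluster:

```shell
# Check whether the output path already exists on HDFS
# (-test -e exits 0 if the path exists)
hdfs dfs -test -e hdfs://hadoop1:8020/flink_result11 && echo "output path already exists"

# If it exists, remove it (recursively) before rerunning the job,
# or pass a new --output path to flink run instead
hdfs dfs -rm -r hdfs://hadoop1:8020/flink_result11
```

Deleting the stale directory (or picking an unused output path) lets the WordCount job create its output normally; the misleading retry error should then disappear.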