Hadoop 3.x HA: Uploading a File via the HDFS API

Author: 马育民 • 2021-11-19 15:38

# Approach 1

Set the HA-related properties directly on the `Configuration` object:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class T2上传文件高可用 {
    public static void main(String[] args) throws Exception {
        // Set the user, otherwise the upload fails with: Permission denied
        System.setProperty("HADOOP_USER_NAME", "root");

        // HA configuration: the nameservice, its two NameNodes, and the failover proxy provider
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://mycluster");
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "hadoop1:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "hadoop2:8020");
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem hdfsFS = FileSystem.get(conf);

        // Upload a local file to the HDFS root directory
        hdfsFS.copyFromLocalFile(new Path("C:\\Users\\mym\\Desktop\\1.txt"), new Path("/1.txt"));

        // Close the file system
        hdfsFS.close();
    }
}
```

# Approach 2

### Download the XML files

Copy `core-site.xml` and `hdfs-site.xml` from the server into the project's `resources` directory.

### Java code

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class T2上传文件高可用_xml {
    public static void main(String[] args) throws Exception {
        // Set the user, otherwise the upload fails with: Permission denied
        System.setProperty("HADOOP_USER_NAME", "root");

        /*
         * The configuration files downloaded from the server must be in the
         * resources directory; they are loaded automatically when the
         * Configuration object is created.
         */
        Configuration conf = new Configuration();

        FileSystem hdfsFS = FileSystem.get(conf);

        hdfsFS.copyFromLocalFile(new Path("C:\\Users\\mym\\Desktop\\core-site.xml2"),
                new Path("/core-site.xml2"));

        // Close the file system
        hdfsFS.close();
    }
}
```

Original source: http://malaoshi.top/show_1IX2FvQGaovb.html
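For reference, the HA properties that Approach 1 sets in code are the same ones Approach 2 reads from the downloaded files: `fs.defaultFS` lives in `core-site.xml`, while the `dfs.*` entries live in `hdfs-site.xml`. A minimal sketch of the relevant `hdfs-site.xml` entries, using the nameservice and hostnames from the examples above (your cluster's values may differ):

```xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop2:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```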
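Both examples also assume the Hadoop client libraries are on the classpath. With Maven, that is typically a single dependency; the version below is an assumption for illustration, and should match your cluster's 3.x release:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <!-- assumed version; use the release your cluster runs -->
  <version>3.3.1</version>
</dependency>
```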