The official documentation is already quite detailed; this post records the steps I followed for a manual install of version 2.0.6.0.
Official documentation: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1.html
The current release is 2.2, which requires at least JDK 7; its documentation is at:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Man_Install_v22/index.html#Item1.1
1. Preparation:
Configure NTP clients. Execute the following command on all the nodes in your cluster:
yum install ntp
Enable the service. Execute the following command on all the nodes in your cluster:
chkconfig ntpd on
Start the NTP. Execute the following command on all the nodes in your cluster:
/etc/init.d/ntpd start
You can use an existing NTP server in your environment. Configure the firewall on the local NTP server to allow inbound UDP traffic on port 123, replacing 192.168.1.0/24 with the IP address range of your cluster. See the following sample rule:
# iptables -A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 123 -j ACCEPT
Restart iptables. Execute the following command on all the nodes in your cluster:
# service iptables restart
Configure clients to use the local NTP server. Edit the /etc/ntp.conf
and add the following line:
server $LOCAL_SERVER_IP OR HOSTNAME
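For example, assuming the local NTP server sits at 192.168.1.10 (a placeholder address), each client's /etc/ntp.conf would gain a line like the one below, and ntpq can confirm the client is actually syncing:
# /etc/ntp.conf on each client; 192.168.1.10 stands in for your NTP server
server 192.168.1.10
# verify that the client sees the server and is synchronizing
ntpq -p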
Create the users (the packages create them by default, so you only need to set their passwords):
groupadd hadoop
useradd -g hadoop hdfs
useradd -g hadoop yarn
passwd hdfs
passwd yarn
passwd hive
passwd mapred
Download the companion configuration files:
wget http://public-repo-1.hortonworks.com/HDP/tools/2.0.6.0/hdp_manual_install_rpm_helper_files-2.0.6.76.tar.gz
Edit the TODO directories in the directories.sh file and point them at your own directories (follow the hints inside the file).
Copy usersAndGroups.sh and directories.sh to /opt, make them readable by the other users, and add the following two lines to each user's ~/.bash_profile:
#!/bin/bash
. /opt/usersAndGroups.sh
. /opt/directories.sh
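As a rough sketch of what the TODO variables in directories.sh might look like once filled in (the /www and /var/run paths below are example choices for this write-up, not required values):
# example values in /opt/directories.sh -- point these at your own disks
DFS_NAME_DIR="/www/hadoop/hdfs/nn";
DFS_DATA_DIR="/www/hadoop/hdfs/dn";
FS_CHECKPOINT_DIR="/www/hadoop/hdfs/snn";
YARN_LOCAL_DIR="/www/hadoop/yarn/local";
YARN_LOCAL_LOG_DIR="/www/hadoop/yarn/logs";
HDFS_LOG_DIR="/www/logs/hadoop/hdfs";
YARN_LOG_DIR="/www/logs/hadoop/yarn";
MAPRED_LOG_DIR="/www/logs/hadoop/mapred";
HIVE_LOG_DIR="/www/logs/hive";
HDFS_PID_DIR="/var/run/hadoop/hdfs";
YARN_PID_DIR="/var/run/hadoop/yarn";
MAPRED_PID_DIR="/var/run/hadoop/mapred";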
Make the settings take effect (source the profile you just edited):
source ~/.bash_profile
Then run the following commands:
echo "Create the NameNode Directories"
mkdir -p $DFS_NAME_DIR; chown -R $HDFS_USER:$HADOOP_GROUP $DFS_NAME_DIR; chmod -R 755 $DFS_NAME_DIR;
echo "Create the SecondaryNameNode Directories"
mkdir -p $FS_CHECKPOINT_DIR; chown -R $HDFS_USER:$HADOOP_GROUP $FS_CHECKPOINT_DIR; chmod -R 755 $FS_CHECKPOINT_DIR;
echo "Create datanode local dir"
echo $DFS_DATA_DIR
mkdir -p $DFS_DATA_DIR;
chown -R $HDFS_USER:$HADOOP_GROUP $DFS_DATA_DIR;
chmod -R 750 $DFS_DATA_DIR;
echo "Create yarn local dir"
mkdir -p $YARN_LOCAL_DIR;
chown -R $YARN_USER:$HADOOP_GROUP $YARN_LOCAL_DIR;
chmod -R 755 $YARN_LOCAL_DIR;
echo "Create yarn local log dir"
mkdir -p $YARN_LOCAL_LOG_DIR;
chown -R $YARN_USER:$HADOOP_GROUP $YARN_LOCAL_LOG_DIR;
chmod -R 755 $YARN_LOCAL_LOG_DIR;
echo "Create the Log and PID Directories"
mkdir -p $HDFS_LOG_DIR; chown -R $HDFS_USER:$HADOOP_GROUP $HDFS_LOG_DIR; chmod -R 755 $HDFS_LOG_DIR;
mkdir -p $YARN_LOG_DIR; chown -R $YARN_USER:$HADOOP_GROUP $YARN_LOG_DIR; chmod -R 755 $YARN_LOG_DIR;
mkdir -p $HDFS_PID_DIR; chown -R $HDFS_USER:$HADOOP_GROUP $HDFS_PID_DIR; chmod -R 755 $HDFS_PID_DIR;
mkdir -p $YARN_PID_DIR; chown -R $YARN_USER:$HADOOP_GROUP $YARN_PID_DIR; chmod -R 755 $YARN_PID_DIR;
mkdir -p $MAPRED_LOG_DIR; chown -R $MAPRED_USER:$HADOOP_GROUP $MAPRED_LOG_DIR; chmod -R 755 $MAPRED_LOG_DIR;
mkdir -p $MAPRED_PID_DIR; chown -R $MAPRED_USER:$HADOOP_GROUP $MAPRED_PID_DIR; chmod -R 755 $MAPRED_PID_DIR;
# default would be /usr/lib/hadoop/logs; use a custom HADOOP_LOG_DIR instead
HADOOP_LOG_DIR=/www/logs/hadoop
mkdir -p $HADOOP_LOG_DIR; chown -R $HDFS_USER:$HADOOP_GROUP $HADOOP_LOG_DIR; chmod -R 755 $HADOOP_LOG_DIR
# default would be /usr/lib/hadoop-yarn/logs; use a custom YARN_LOG_DIR instead
YARN_LOG_DIR=/www/logs/hadoop-yarn
mkdir -p $YARN_LOG_DIR; chown -R $YARN_USER:$HADOOP_GROUP $YARN_LOG_DIR; chmod -R 755 $YARN_LOG_DIR
MAPRED_LOG_DIR=/www/logs/hadoop-mapred
mkdir -p $MAPRED_LOG_DIR; chown -R $MAPRED_USER:$HADOOP_GROUP $MAPRED_LOG_DIR; chmod -R 755 $MAPRED_LOG_DIR
Hive (optional):
mkdir -p $HIVE_LOG_DIR;
chown -R $HIVE_USER:$HADOOP_GROUP $HIVE_LOG_DIR; chmod -R 755 $HIVE_LOG_DIR;
2. Install the software as root:
yum install hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl
Complete the following instructions on all the nodes in your cluster:
Install Snappy.
yum install snappy snappy-devel
Make the Snappy libraries available to Hadoop:
ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/.
Execute the following command on all the nodes in your cluster. From a terminal window, type:
yum install lzo lzo-devel hadoop-lzo hadoop-lzo-native
3. Modify the Hadoop configuration
You can start from the configuration files in the companion package downloaded above; only the variables beginning with $ need to be replaced. Change $namenode.full.hostname to the machine's IP address (and do the same for the other similar placeholders), and add an entry like '$ip namenode' to /etc/hosts.
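For instance, with placeholder addresses (replace them with your real cluster machines), /etc/hosts on every node could contain:
# /etc/hosts -- example entries only
192.168.1.11   namenode
192.168.1.12   datanode1
192.168.1.13   datanode2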
This section describes how to set up and edit the deployment configuration files for HDFS and MapReduce.
Use the following instructions to set up Hadoop configuration files:
We strongly suggest that you edit and source the files you downloaded in Download Companion Files. Alternatively, you can copy their contents into your ~/.bash_profile to set up these environment variables in your environment.
From the downloaded scripts.zip file, extract the files from the configuration_files/core_hadoop directory to a temporary directory.
Modify the configuration files.
In the temporary directory, locate the following files and modify the properties based on your environment.
Search for TODO
in the files for the properties to replace. See Define Environment Parameters for more information.
Edit the /etc/hadoop/conf/hadoop-env.sh file. Change the value of the -XX:MaxNewSize parameter to 1/8th the value of the maximum heap size (-Xmx) parameter.
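As an illustration only (the heap sizes here are assumptions, not recommendations): with a 1 GB NameNode heap, MaxNewSize would be 128 MB, so the relevant fragment of hadoop-env.sh might look like:
# example sizing: -Xmx1024m with the new generation at 1/8 of the heap
export HADOOP_NAMENODE_OPTS="-Xmx1024m -XX:NewSize=128m -XX:MaxNewSize=128m ${HADOOP_NAMENODE_OPTS}"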
Edit the core-site.xml
and modify the following properties:
<property> <name>fs.defaultFS</name> <value>hdfs://$namenode.full.hostname:8020</value> <description>Enter your NameNode hostname</description> </property>
Edit the hdfs-site.xml
and modify the following properties:
<property> <name>dfs.namenode.name.dir</name> <value>/grid/hadoop/hdfs/nn,/grid1/hadoop/hdfs/nn</value> <description>Comma separated list of paths. Use the list of directories from $DFS_NAME_DIR. For example, /grid/hadoop/hdfs/nn,/grid1/hadoop/hdfs/nn.</description> </property>
<property> <name>dfs.datanode.data.dir</name> <value>file:///grid/hadoop/hdfs/dn, file:///grid1/hadoop/hdfs/dn</value> <description>Comma separated list of paths. Use the list of directories from $DFS_DATA_DIR. For example, file:///grid/hadoop/hdfs/dn, file:///grid1/hadoop/hdfs/dn.</description> </property>
<property> <name>dfs.namenode.http-address</name> <value>$namenode.full.hostname:50070</value> <description>Enter your NameNode hostname for http access.</description> </property>
<property> <name>dfs.namenode.secondary.http-address</name> <value>$secondary.namenode.full.hostname:50090</value> <description>Enter your Secondary NameNode hostname.</description> </property>
<property> <name>dfs.namenode.checkpoint.dir</name> <value>/grid/hadoop/hdfs/snn,/grid1/hadoop/hdfs/snn,/grid2/hadoop/hdfs/snn</value> <description>A comma separated list of paths. Use the list of directories from $FS_CHECKPOINT_DIR. For example, /grid/hadoop/hdfs/snn,/grid1/hadoop/hdfs/snn,/grid2/hadoop/hdfs/snn</description> </property>
Note: The value of the NameNode new generation size should be 1/8 of the maximum heap size (-Xmx). To change the default value, edit the -XX:MaxNewSize parameter in /etc/hadoop/conf/hadoop-env.sh as described above.
Edit the yarn-site.xml
and modify the following properties:
<property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>$resourcemanager.full.hostname:8025</value> <description>Enter your ResourceManager hostname.</description> </property>
<property> <name>yarn.resourcemanager.scheduler.address</name> <value>$resourcemanager.full.hostname:8030</value> <description>Enter your ResourceManager hostname.</description> </property>
<property> <name>yarn.resourcemanager.address</name> <value>$resourcemanager.full.hostname:8050</value> <description>Enter your ResourceManager hostname.</description> </property>
<property> <name>yarn.resourcemanager.admin.address</name> <value>$resourcemanager.full.hostname:8141</value> <description>Enter your ResourceManager hostname.</description> </property>
<property> <name>yarn.nodemanager.local-dirs</name> <value>/grid/hadoop/hdfs/yarn/local,/grid1/hadoop/hdfs/yarn/local</value> <description>Comma separated list of paths. Use the list of directories from $YARN_LOCAL_DIR. For example, /grid/hadoop/hdfs/yarn/local,/grid1/hadoop/hdfs/yarn/local.</description> </property>
<property> <name>yarn.nodemanager.log-dirs</name> <value>/grid/hadoop/hdfs/yarn/logs</value> <description>Comma separated list of paths. Use the list of directories from $YARN_LOCAL_LOG_DIR. For example, /grid/hadoop/yarn/logs,/grid1/hadoop/yarn/logs,/grid2/hadoop/yarn/logs</description> </property>
<property> <name>yarn.log.server.url</name> <value>http://$jobhistoryserver.full.hostname:19888/jobhistory/logs/</value> <description>URL for job history server</description> </property>
<property> <name>yarn.resourcemanager.webapp.address</name> <value>$resourcemanager.full.hostname:8088</value> <description>Web UI address of the ResourceManager.</description> </property>
Edit the mapred-site.xml
and modify the following properties:
<property> <name>mapreduce.jobhistory.address</name> <value>$jobhistoryserver.full.hostname:10020</value> <description>Enter your JobHistoryServer hostname.</description> </property>
<property> <name>mapreduce.jobhistory.webapp.address</name> <value>$jobhistoryserver.full.hostname:19888</value> <description>Enter your JobHistoryServer hostname.</description> </property>
Optional: Configure MapReduce to use Snappy Compression
In order to enable Snappy compression for MapReduce jobs, edit core-site.xml and mapred-site.xml.
Add the following properties to mapred-site.xml:
<property> <name>mapreduce.admin.map.child.java.opts</name> <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value> <final>true</final> </property>
<property> <name>mapreduce.admin.reduce.child.java.opts</name> <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value> <final>true</final> </property>
Add the SnappyCodec to the codecs list in core-site.xml:
<property> <name>io.compression.codecs</name> <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value> </property>
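A quick sanity check (not an official validation step) that the native Snappy library is where Hadoop expects it after the symlink created earlier:
# the symlink from /usr/lib64/libsnappy.so should now be visible under the Hadoop native dir
ls -l /usr/lib/hadoop/lib/native/libsnappy.so /usr/lib64/libsnappy.so*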
Copy the configuration files.
$HDFS_USER is the user owning the HDFS services, for example hdfs.
$HADOOP_GROUP is a common group shared by services, for example hadoop.
On all hosts in your cluster, create the Hadoop configuration directory:
rm -r $HADOOP_CONF_DIR
mkdir -p $HADOOP_CONF_DIR
where $HADOOP_CONF_DIR is the directory for storing the Hadoop configuration files, for example /etc/hadoop/conf.
Copy all the configuration files to $HADOOP_CONF_DIR.
Set appropriate permissions:
chown -R $HDFS_USER:$HADOOP_GROUP $HADOOP_CONF_DIR/../
chmod -R 755 $HADOOP_CONF_DIR/../
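Putting the copy step together, assuming the edited companion files sit in a temporary directory such as /tmp/core_hadoop (a placeholder) and $HADOOP_CONF_DIR is /etc/hadoop/conf, the sequence on each node might look like:
# /tmp/core_hadoop is wherever you extracted and edited the configuration files
HADOOP_CONF_DIR=/etc/hadoop/conf
rm -r $HADOOP_CONF_DIR
mkdir -p $HADOOP_CONF_DIR
cp /tmp/core_hadoop/* $HADOOP_CONF_DIR/
chown -R $HDFS_USER:$HADOOP_GROUP $HADOOP_CONF_DIR/../
chmod -R 755 $HADOOP_CONF_DIR/../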
Edit hadoop-env.sh:
export JAVA_HOME=/opt/jdk1.6.0_33
Add the following line (point it at a directory that $HDFS_USER has permission to write logs to):
HADOOP_LOG_DIR=/www/logs/hadoop
Edit yarn-env.sh:
export JAVA_HOME=/opt/jdk1.6.0_33
export YARN_LOG_DIR=/www/logs/hadoop-yarn
Remove every occurrence of $USER.
After making these changes, copy all the files to /etc/hadoop/conf/.
Edit /etc/hadoop/conf/slaves and change localhost to the DataNode hostname(s) (here namenode); otherwise the NameNode web UI will show no live nodes.
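For example, a minimal slaves file for this layout (the hostnames are the placeholders from /etc/hosts above; list one DataNode hostname per line):
# /etc/hadoop/conf/slaves
namenode
# on a multi-node cluster list each DataNode instead, e.g.:
# datanode1
# datanode2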
4. Run and test HDFS as the hdfs user:
Execute these commands on the NameNode host machine:
su $HDFS_USER
/usr/lib/hadoop/bin/hadoop namenode -format
/usr/lib/hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
Execute these commands on the SecondaryNameNode:
su $HDFS_USER
/usr/lib/hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode
Execute these commands on all DataNodes:
su $HDFS_USER
/usr/lib/hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
Use jps to check whether the daemons started; if not, check the corresponding log files under $HADOOP_LOG_DIR.
Add the cluster hostnames to /etc/hosts on whichever machine you browse from.
See if you can reach the NameNode server with your browser:
http://namenode:50070
Create hdfs user directory in HDFS:
su $HDFS_USER
hadoop fs -mkdir -p /user/hdfs
Try copying a file into HDFS and listing that file:
su $HDFS_USER
hadoop fs -copyFromLocal /etc/passwd passwd
hadoop fs -ls
Test browsing HDFS through the DataNode web UI, e.g.:
http://172.16.27.29:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=172.16.27.29:8020
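Another quick check that the DataNodes have registered (run as $HDFS_USER; the report includes live/dead node counts):
su $HDFS_USER
hdfs dfsadmin -report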
5. Start YARN as the yarn user
Edit .bash_profile and add:
PATH=$PATH:/usr/lib/hadoop-yarn/sbin
export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
(This is already set in yarn-env.sh, so it can be skipped.)
Execute these commands from the ResourceManager server:
<login as $YARN_USER>
/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
Execute these commands from all NodeManager nodes:
<login as $YARN_USER>
/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
where:
$YARN_USER is the user owning the YARN services, for example yarn.
$HADOOP_CONF_DIR is the directory for storing the Hadoop configuration files, for example /etc/hadoop/conf.
Access the cluster through the ResourceManager web UI (yarn.resourcemanager.webapp.address configured above, port 8088).
6. Start MapReduce
Change permissions on the container-executor file.
chown -R root:hadoop /usr/lib/hadoop-yarn/bin/container-executor
chmod -R 6050 /usr/lib/hadoop-yarn/bin/container-executor
Note: If these permissions are not set, the health check script will return an error stating that the DataNode is UNHEALTHY.
Execute these commands from the JobHistory server to set up directories on HDFS :
su $HDFS_USER
hadoop fs -mkdir -p /mr-history/tmp
hadoop fs -chmod -R 1777 /mr-history/tmp
hadoop fs -mkdir -p /mr-history/done
hadoop fs -chmod -R 1777 /mr-history/done
hadoop fs -chown -R $MAPRED_USER:$HDFS_USER /mr-history
hadoop fs -mkdir -p /app-logs
hadoop fs -chmod -R 1777 /app-logs
hadoop fs -chown yarn /app-logs
Execute these commands from the JobHistory server:
<login as $MAPRED_USER>
export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec/
/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
where:
$HDFS_USER is the user owning the HDFS services, for example hdfs.
$MAPRED_USER is the user owning the MapReduce services, for example mapred.
$HADOOP_CONF_DIR is the directory for storing the Hadoop configuration files, for example /etc/hadoop/conf.
7. Install Hive
On all client/gateway nodes (on which Hive programs will be executed), Hive Metastore Server, and HiveServer2 machine, install the Hive RPMs.
yum install hive hcatalog
Optional - Download and add the database connector JAR.
By default, Hive uses an embedded Derby database for its metastore. However, you can optionally use a remote database (MySQL) for the Hive metastore instead.
Execute the following command on the Hive metastore machine.
yum install mysql-connector-java*
After the yum install, the MySQL connector JAR is placed in /usr/share/java/. Copy that JAR file to the /usr/lib/hive/lib/
directory on your Hive host machine.
Ensure that the JAR file has appropriate permissions.
Create the Hive user in MySQL:
CREATE USER 'hive'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
#CREATE USER 'hive'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
FLUSH PRIVILEGES;
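To verify the account before wiring it into hive-site.xml (the 123456 password matches the example above; adjust the host as needed):
# should connect and list the databases visible to the hive user
mysql -u hive -p123456 -e "SHOW DATABASES;"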
Use the following instructions to set up the Hive/HCatalog configuration files:
Extract the Hive/HCatalog configuration files.
From the downloaded scripts.zip file, extract the files in the configuration_files/hive directory to a temporary directory.
Modify the configuration files.
In the temporary directory, locate the following file and modify the properties based on your environment. Search for TODO
in the files for the properties to replace.
Edit hive-site.xml
and modify the following properties:
<property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://$mysql.full.hostname:3306/$database.name?createDatabaseIfNotExist=true</value> <description>Enter your JDBC connection string. </description> </property>
<property> <name>javax.jdo.option.ConnectionUserName</name> <value>$dbusername</value> <description>Enter your MySQL credentials. </description> </property>
<property> <name>javax.jdo.option.ConnectionPassword</name> <value>$dbuserpassword</value> <description>Enter your MySQL credentials. </description> </property>
Enter your MySQL credentials from Install MySQL (Optional).
<property> <name>hive.metastore.uris</name> <value>thrift://$metastore.server.full.hostname:9083</value> <description>URI for client to contact metastore server. To enable HiveServer2, leave the property value empty. </description> </property>
Copy the configuration files.
On all Hive hosts create the Hive configuration directory.
rm -r $HIVE_CONF_DIR ; mkdir -p $HIVE_CONF_DIR ;
Copy all the configuration files to the $HIVE_CONF_DIR directory.
Set appropriate permissions:
chown -R $HIVE_USER:$HADOOP_GROUP $HIVE_CONF_DIR/../ ; chmod -R 755 $HIVE_CONF_DIR/../ ;
HIVE_LIB=/usr/lib/hive
chown -R $HIVE_USER:$HADOOP_GROUP $HIVE_LIB/../ ; chmod -R 755 $HIVE_LIB/../ ;
Create Hive user home directory on HDFS.
Login as $HDFS_USER
hadoop fs -mkdir -p /user/$HIVE_USER
hadoop fs -chown $HIVE_USER:$HDFS_USER /user/$HIVE_USER
Create warehouse directory on HDFS.
Login as $HDFS_USER
hadoop fs -mkdir -p /apps/hive/external
hadoop fs -mkdir -p /apps/hive/warehouse
hadoop fs -chown -R $HIVE_USER:$HDFS_USER /apps/hive
hadoop fs -chmod -R 775 /apps/hive
where:
$HDFS_USER is the user owning the HDFS services, for example hdfs.
$HIVE_USER is the user owning the Hive services, for example hive.
Create hive scratch directory on HDFS.
Login as $HDFS_USER
hadoop fs -mkdir -p /tmp/scratch
hadoop fs -chown -R $HIVE_USER:$HDFS_USER /tmp/scratch
hadoop fs -chmod -R 777 /tmp/scratch
Use the following steps to validate your installation:
Start Hive Metastore service.
Login as $HIVE_USER
nohup hive --service metastore >$HIVE_LOG_DIR/hive.out 2>$HIVE_LOG_DIR/hive.log &
Smoke Test Hive.
Open Hive command line shell.
hive
Run sample commands.
show databases; create table test(col1 int, col2 string); show tables;
Start HiveServer2.
/usr/lib/hive/bin/hiveserver2 >$HIVE_LOG_DIR/hiveserver2.out 2> $HIVE_LOG_DIR/hiveserver2.log &
Smoke Test HiveServer2.
Open Beeline command line shell to interact with HiveServer2.
/usr/lib/hive/bin/beeline
Establish connection to server.
!connect jdbc:hive2://$hive.server.full.hostname:10000 $HIVE_USER password org.apache.hive.jdbc.HiveDriver
Run sample commands.
show databases; create table test2(a int, b string); show tables;
8. Install Sqoop
To be continued.
9. Start and stop all services
Use start to start each daemon and stop to stop it.
--config $HADOOP_CONF_DIR is optional; the default path is /etc/hadoop/conf.
su $HDFS_USER
hadoop-daemon.sh start namenode
hadoop-daemon.sh start secondarynamenode
su $YARN_USER
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager
su $MAPRED_USER
export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec/
/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh start historyserver
su $HIVE_USER
HIVE_LOG_DIR=/www/logs/hive
Start Hive Metastore service:
nohup hive --service metastore>$HIVE_LOG_DIR/hive.out 2>$HIVE_LOG_DIR/hive.log &
Start HiveServer2:
/usr/lib/hive/bin/hiveserver2 >$HIVE_LOG_DIR/hiveserver2.out 2> $HIVE_LOG_DIR/hiveserver2.log &
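To stop everything, the same daemon scripts accept stop; a sketch of a reasonable shutdown order (roughly the reverse of startup), under the same user assumptions as above:
# as $MAPRED_USER
/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh stop historyserver
# as $YARN_USER, on the NodeManager and ResourceManager hosts
yarn-daemon.sh stop nodemanager
yarn-daemon.sh stop resourcemanager
# as $HDFS_USER, on the DataNode, SecondaryNameNode and NameNode hosts
hadoop-daemon.sh stop datanode
hadoop-daemon.sh stop secondarynamenode
hadoop-daemon.sh stop namenode
# the Hive metastore and HiveServer2 were started with nohup; stop them by killing those processes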
If the NameNode IP address changes, you must reformat the NameNode and recreate the Hadoop directories.
If you deploy Hive on a separate machine, configure Hive, its log directory, and the permissions of the related directories after installing it; if errors occur, switch the log level to DEBUG to see the detailed error logs.
10. Install Oozie:
Modify the configuration:
OOZIE_DATA_DIR=/www/data/oozie
mkdir -p $OOZIE_DATA_DIR;
chown -R $OOZIE_USER:$HADOOP_GROUP $OOZIE_DATA_DIR;
chmod -R 755 $OOZIE_DATA_DIR;
Grant the oozie user permissions:
chown -R $OOZIE_USER:$HADOOP_GROUP /etc/oozie;
chmod -R 755 /etc/oozie;
chown -R $OOZIE_USER:$HADOOP_GROUP /usr/lib/oozie;
chmod -R 755 /usr/lib/oozie;
If you are installing Hive and HCatalog services, you need a MySQL database instance to store metadata information. You can either use an existing MySQL instance or install a new instance of MySQL manually. To install a new instance:
Connect to the host machine you plan to use for Hive and HCatalog.
Install MySQL server. From a terminal window, type:
For RHEL/CentOS:
yum install mysql-server
Start the instance.
/etc/init.d/mysqld start
Set the root
user password.
mysqladmin -u root -p'{password}' password $mysqlpassword
Remove unnecessary information from log and STDOUT.
mysqladmin -u root 2>&1 >/dev/null
As root,
use mysql (or other client tool) to create the “dbuser” and grant it adequate privileges. This user provides access to the Hive metastore.
CREATE USER '$dbusername'@'localhost' IDENTIFIED BY '$dbuserpassword';
GRANT ALL PRIVILEGES ON *.* TO '$dbusername'@'localhost';
CREATE USER '$dbusername'@'%' IDENTIFIED BY '$dbuserpassword';
GRANT ALL PRIVILEGES ON *.* TO '$dbusername'@'%';
FLUSH PRIVILEGES;
See if you can connect to the database as that user; when prompted, enter the $dbuserpassword set above.
mysql -u dbuser -p
Install the MySQL connector JAR file:
yum install mysql-connector-java*