Installing GeoMesa
Installing GeoMesa involves setting up five components: Hadoop, ZooKeeper, Accumulo, GeoServer, and GeoMesa itself (Java 8 is installed first as a prerequisite):
- root@HDMachine:~$ sudo add-apt-repository ppa:webupd8team/java
- root@HDMachine:~$ sudo apt-get update
root@HDMachine:~$ sudo apt-get install oracle-java8-installer
- root@HDMachine:~$ java -version
- java version "1.8.0_121"
- Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
- Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
root@HDMachine:~$ sudo addgroup hadoop
Adding group `hadoop' (GID 1002) ...
Done.
root@HDMachine:~$ sudo adduser --ingroup hadoop hduser
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
    Full Name []:
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
Is the information correct? [Y/n] Y
- hduser@HDMachine:~$ su root
- Password:
-
- root@HDMachine:/home/hduser$ sudo adduser hduser sudo
- [sudo] password for root:
- Adding user `hduser' to group `sudo' ...
- Adding user hduser to group sudo
- Done.
root@HDMachine:~$ sudo apt-get install ssh
- root@HDMachine:~$ which ssh
- /usr/bin/ssh
-
- root@HDMachine:~$ which sshd
- /usr/sbin/sshd
root@HDMachine:~$ su hduser
Password:
hduser@HDMachine:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
5c:9f:d5:64:8c:fa:2a:a0:a5:48:ff:5b:ed:9d:e0:85 hduser@HDMachine
The key's randomart image is:
+--[ RSA 2048]----+
| oo|
| .+.|
| . .. .|
| . . ..o |
| S o. |
| . o . .. |
| . o + .. E.. |
| . + ..o.+ . |
| .o. .o o |
+-----------------+
hduser@HDMachine:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
- hduser@HDMachine:~$ ssh localhost
- The authenticity of host 'localhost (127.0.0.1)' can't be established.
- ECDSA key fingerprint is e1:8b:a0:a5:75:ef:f4:b4:5e:a9:ed:be:64:be:5c:2f.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
- Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-40-generic x86_64)
- hduser@HDMachine:~$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
- hduser@HDMachine:~$ tar xvzf hadoop-2.8.0.tar.gz
- hduser@HDMachine:~/hadoop-2.8.0$ sudo mv * /usr/local/hadoop
- [sudo] password for hduser:
- hduser@HDMachine:~$ update-alternatives --config java
- There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-oracle/jre/bin/java
- Nothing to configure.
hduser@HDMachine:~$ vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

hduser@HDMachine:~$ source ~/.bashrc
- hduser@HDMachine:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
-
- export JAVA_HOME=/usr/lib/jvm/java-8-oracle
- hduser@HDMachine:~$ sudo mkdir -p /app/hadoop/tmp
- hduser@HDMachine:~$ sudo chown hduser:hadoop /app/hadoop/tmp
hduser@HDMachine:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme
    and authority determine the FileSystem implementation. The uri's scheme
    determines the config property (fs.SCHEME.impl) naming the FileSystem
    implementation class. The uri's authority is used to determine the host,
    port, etc. for a filesystem.</description>
  </property>
</configuration>
hduser@HDMachine:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
- hduser@HDMachine:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
- hduser@HDMachine:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
- hduser@HDMachine:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
hduser@HDMachine:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of replications
    can be specified when the file is created. The default is used if
    replication is not specified in create time.</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
hduser@HDMachine:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/05/03 11:12:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hduser
STARTUP_MSG:   host = HDMachine/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.0
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 91f2b7a13d1e97be65db92ddabc627cc29ac0009; compiled by 'jdu' on 2017-03-17T04:12Z
STARTUP_MSG:   java = 1.8.0_121
************************************************************/
17/05/03 11:12:45 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/05/03 11:12:45 INFO namenode.NameNode: createNameNode [-format]
17/05/03 11:12:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-ae42624b-2814-4b43-b473-a181d3075c2e
17/05/03 11:12:49 INFO namenode.FSEditLog: Edit logging is async:false
17/05/03 11:12:49 INFO namenode.FSNamesystem: KeyProvider: null
17/05/03 11:12:49 INFO namenode.FSNamesystem: fsLock is fair: true
17/05/03 11:12:49 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/05/03 11:12:49 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/05/03 11:12:49 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/05/03 11:12:49 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/05/03 11:12:49 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 03 11:12:49
17/05/03 11:12:49 INFO util.GSet: Computing capacity for map BlocksMap
17/05/03 11:12:49 INFO util.GSet: VM type = 64-bit
17/05/03 11:12:49 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/03 11:12:49 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/05/03 11:12:49 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/05/03 11:12:49 INFO blockmanagement.BlockManager: defaultReplication = 1
17/05/03 11:12:49 INFO blockmanagement.BlockManager: maxReplication = 512
17/05/03 11:12:49 INFO blockmanagement.BlockManager: minReplication = 1
17/05/03 11:12:49 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/05/03 11:12:49 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/05/03 11:12:49 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/05/03 11:12:49 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/05/03 11:12:49 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
17/05/03 11:12:49 INFO namenode.FSNamesystem: supergroup = supergroup
17/05/03 11:12:49 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/03 11:12:49 INFO namenode.FSNamesystem: HA Enabled: false
17/05/03 11:12:49 INFO namenode.FSNamesystem: Append Enabled: true
17/05/03 11:12:50 INFO util.GSet: Computing capacity for map INodeMap
17/05/03 11:12:50 INFO util.GSet: VM type = 64-bit
17/05/03 11:12:50 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/05/03 11:12:50 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/05/03 11:12:50 INFO namenode.FSDirectory: ACLs enabled? false
17/05/03 11:12:50 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/03 11:12:50 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/05/03 11:12:51 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/03 11:12:51 INFO util.GSet: VM type = 64-bit
17/05/03 11:12:51 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/03 11:12:51 INFO util.GSet: capacity = 2^18 = 262144 entries
17/05/03 11:12:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/03 11:12:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/03 11:12:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/05/03 11:12:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/03 11:12:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/03 11:12:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/03 11:12:51 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/03 11:12:51 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/03 11:12:51 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/03 11:12:51 INFO util.GSet: VM type = 64-bit
17/05/03 11:12:51 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/05/03 11:12:51 INFO util.GSet: capacity = 2^15 = 32768 entries
17/05/03 11:12:51 INFO namenode.NNConf: ACLs enabled? false
17/05/03 11:12:51 INFO namenode.NNConf: XAttrs enabled? true
17/05/03 11:12:51 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/05/03 11:12:52 INFO namenode.FSImage: Allocated new BlockPoolId: BP-130729900-192.168.1.1-1429393391595
17/05/03 11:12:52 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
17/05/03 11:12:52 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/03 11:12:52 INFO util.ExitUtil: Exiting with status 0
17/05/03 11:12:52 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at HDMachine/192.168.1.1
************************************************************/
hduser@HDMachine:~$ cd /usr/local/hadoop/sbin/
hduser@HDMachine:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh  hdfs-config.sh           refresh-namenodes.sh  start-balancer.sh    start-yarn.cmd  stop-balancer.sh    stop-yarn.cmd
hadoop-daemon.sh       httpfs.sh                slaves.sh             start-dfs.cmd        start-yarn.sh   stop-dfs.cmd        stop-yarn.sh
hadoop-daemons.sh      kms.sh                   start-all.cmd         start-dfs.sh         stop-all.cmd    stop-dfs.sh         yarn-daemon.sh
hdfs-config.cmd        mr-jobhistory-daemon.sh  start-all.sh          start-secure-dns.sh  stop-all.sh     stop-secure-dns.sh  yarn-daemons.sh
hduser@HDMachine:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/05/03 14:07:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hduser@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-HDMachine.out
hduser@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-HDMachine.out
Starting secondary namenodes [0.0.0.0]
hduser@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-HDMachine.out
17/05/03 14:07:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-HDMachine.out
hduser@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-HDMachine.out
hduser@HDMachine:/usr/local/hadoop/sbin$
- hduser@HDMachine:/usr/local/hadoop/sbin$ jps
- 51633 Jps
- 50756 DataNode
- 50981 SecondaryNameNode
- 51318 NodeManager
- 50570 NameNode
- 51149 ResourceManager
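After start-all.sh, all five Hadoop daemons should appear in the jps listing as above. A minimal sketch of a check for this (the helper name check_hadoop_daemons is ours, not part of Hadoop; the daemon names are exactly those shown in the transcript):

```shell
#!/bin/sh
# Read a `jps` listing on stdin and report any of the five expected
# Hadoop daemons that are missing; returns non-zero if one is absent.
check_hadoop_daemons() {
    listing=$(cat)
    missing=0
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        if ! printf '%s\n' "$listing" | grep -qw "$d"; then
            echo "missing: $d"
            missing=1
        fi
    done
    return $missing
}

# Usage: jps | check_hadoop_daemons
```

Note that grep -w keeps "NameNode" from matching inside "SecondaryNameNode", so each daemon is checked independently.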
hduser@HDMachine:/usr/local/hadoop/sbin$ netstat -plten | grep java
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp   0  0 0.0.0.0:50070     0.0.0.0:*  LISTEN  1003  119588  50570/java
tcp   0  0 127.0.0.1:59447   0.0.0.0:*  LISTEN  1003  127666  50756/java
tcp   0  0 0.0.0.0:50010     0.0.0.0:*  LISTEN  1003  127653  50756/java
tcp   0  0 0.0.0.0:50075     0.0.0.0:*  LISTEN  1003  119763  50756/java
tcp   0  0 0.0.0.0:50020     0.0.0.0:*  LISTEN  1003  128653  50756/java
tcp   0  0 127.0.0.1:54310   0.0.0.0:*  LISTEN  1003  128405  50570/java
tcp   0  0 0.0.0.0:50090     0.0.0.0:*  LISTEN  1003  130314  50981/java
tcp6  0  0 :::8088           :::*       LISTEN  1003  129481  51149/java
tcp6  0  0 :::8030           :::*       LISTEN  1003  131806  51149/java
tcp6  0  0 :::8031           :::*       LISTEN  1003  131788  51149/java
tcp6  0  0 :::8032           :::*       LISTEN  1003  131810  51149/java
tcp6  0  0 :::8033           :::*       LISTEN  1003  137455  51149/java
tcp6  0  0 :::60261          :::*       LISTEN  1003  131852  51318/java
tcp6  0  0 :::8040           :::*       LISTEN  1003  131858  51318/java
tcp6  0  0 :::8042           :::*       LISTEN  1003  134564  51318/java
hduser@HDMachine:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
17/05/03 14:25:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
hduser@localhost's password:
localhost: stopping namenode
hduser@localhost's password:
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
hduser@0.0.0.0's password:
0.0.0.0: stopping secondarynamenode
17/05/03 14:25:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
hduser@localhost's password:
localhost: stopping nodemanager
no proxyserver to stop
- hduser@HDMachine:~/zookeeper-3.4.10$ sudo mv * /usr/local/zookeeper
- [sudo] password for hduser:
hduser@HDMachine:~$ cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg
hduser@HDMachine:/usr/local/zookeeper$ bin/zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
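Once ZooKeeper reports STARTED, `bin/zkServer.sh status` should show it serving; in this single-node setup a healthy server reports "Mode: standalone". A small sketch of parsing that output (the zk_mode helper is ours, not part of ZooKeeper):

```shell
#!/bin/sh
# Extract the "Mode:" line from `zkServer.sh status` output passed on stdin.
# A healthy single-node ZooKeeper prints "Mode: standalone".
zk_mode() {
    sed -n 's/^Mode: //p'
}

# Usage:
# /usr/local/zookeeper/bin/zkServer.sh status | zk_mode
```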
- hduser@HDMachine:~$ wget https://www.apache.org/dyn/closer.lua/accumulo/1.7.3/accumulo-1.7.3-bin.tar.gz
- hduser@HDMachine:~$ tar xvzf accumulo-1.7.3-bin.tar.gz
- hduser@HDMachine:~/accumulo-1.7.3-bin$ sudo mv * /usr/local/accumulo
- [sudo] password for hduser:
hduser@HDMachine:~$ cp /usr/local/accumulo/conf/examples/512MB/standalone/* /usr/local/accumulo/conf/
- hduser@HDMachine:~$ sudo vi ~/.bashrc
-
- export HADOOP_HOME=/usr/local/hadoop/
- export ZOOKEEPER_HOME=/usr/local/zookeeper/
- hduser@HDMachine:~$ sudo vi /usr/local/accumulo/conf/accumulo-env.sh
-
- export ACCUMULO_MONITOR_BIND_ALL="true"
- hduser@HDMachine:~$ sudo vi /usr/local/accumulo/conf/accumulo-site.xml
-
- <property>
-   <name>instance.secret</name>
-   <value>PASS1234</value>
-   <description>A secret unique to a given instance that all servers must know
-   in order to communicate with one another. Change it before initialization.
-   To change it later use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd],
-   and then update this file.
-   </description>
- </property>
- <property>
-   <name>instance.volumes</name>
-   <value>hdfs://localhost:54310/accumulo</value>
- </property>
- <property>
-   <name>trace.token.property.password</name>
-   <value>mypassw</value>
- </property>
- hduser@HDMachine:/usr/local/accumulo$ bin/accumulo init
- 2017-05-03 16:38:11,332 [conf.ConfigSanityCheck] WARN : Use of instance.dfs.uri and instance.dfs.dir are deprecated. Consider using instance.volumes instead.
- 2017-05-03 16:38:12,800 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
- 2017-05-03 16:38:12,802 [init.Initialize] INFO : Hadoop Filesystem is hdfs://localhost:54310
- 2017-05-03 16:38:12,803 [init.Initialize] INFO : Accumulo data dirs are [hdfs://localhost:54310/accumulo]
- 2017-05-03 16:38:12,803 [init.Initialize] INFO : Zookeeper server is localhost:2181
- 2017-05-03 16:38:12,803 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
- Instance name : geomesa
- Enter initial password for root (this may not be applicable for your security setup): ******
- Confirm initial password for root: ******
- 2017-05-03 16:38:28,350 [Configuration.deprecation] INFO : dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min
- 2017-05-03 16:38:33,501 [Configuration.deprecation] INFO : dfs.block.size is deprecated. Instead, use dfs.blocksize
- 2017-05-03 16:38:35,553 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthorizor
- 2017-05-03 16:38:35,568 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthenticator
- 2017-05-03 16:38:35,574 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKPermHandler
hduser@HDMachine:/usr/local/accumulo$ ./bin/start-all.sh
Starting monitor on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tablet servers .... done
2017-05-03 16:44:46,682 [conf.ConfigSanityCheck] WARN : Use of instance.dfs.uri and instance.dfs.dir are deprecated. Consider using instance.volumes instead.
2017-05-03 16:44:48,422 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
2017-05-03 16:44:48,426 [server.Accumulo] INFO : Attempting to talk to zookeeper
2017-05-03 16:44:48,578 [server.Accumulo] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2017-05-03 16:44:48,720 [server.Accumulo] INFO : Connected to HDFS
Starting tablet server on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting master on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting garbage collector on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tracer on localhost
WARN : Max open files on localhost is 1024, recommend 32768
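Each Accumulo process above warns that the open-file limit (ulimit -n) is 1024 while 32768 is recommended. A minimal sketch of checking and fixing this (the nofile_ok helper is ours; raising the limit via /etc/security/limits.conf assumes PAM reads that file on this Ubuntu setup, which is the usual default):

```shell
#!/bin/sh
# Compare an open-file limit against Accumulo's recommended minimum.
nofile_ok() {
    [ "$1" -ge 32768 ]
}

cur=$(ulimit -n)
case "$cur" in
    unlimited) ;;  # no limit configured; nothing to do
    *)
        if ! nofile_ok "$cur"; then
            echo "open-file limit $cur is too low; add to /etc/security/limits.conf:"
            echo "hduser soft nofile 32768"
            echo "hduser hard nofile 32768"
        fi
        ;;
esac
```

After editing limits.conf, log out and back in as hduser so the new limit takes effect before restarting Accumulo.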
- hduser@HDMachine:~$ wget https://sourceforge.net/projects/geoserver/files/GeoServer/2.9.1/geoserver-2.9.1-bin.zip
- hduser@HDMachine:~$ unzip geoserver-2.9.1-bin.zip
- hduser@HDMachine:~/geoserver-2.9.1-bin$ sudo mv * /usr/local/geoserver
- [sudo] password for hduser:
- hduser@HDMachine:~$ vi ~/.bashrc
-
- export GEOSERVER_HOME=/usr/local/geoserver
hduser@HDMachine:~$ source ~/.bashrc
hduser@HDMachine:~$ sudo chown -R hduser /usr/local/geoserver/
- hduser@HDMachine:~$ cd /usr/local/geoserver/bin
- hduser@HDMachine:/usr/local/geoserver/bin$ ./startup.sh
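With all five components started, each one exposes a network port you can spot-check from the shell. A minimal sketch of the defaults (assuming the stock configs used in this guide for Hadoop 2.8, ZooKeeper 3.4, Accumulo 1.7, and GeoServer 2.9; the default_port helper is ours, not part of any of these tools):

```shell
#!/bin/sh
# Default web/status ports for each component in this single-node setup.
default_port() {
    case "$1" in
        hdfs-namenode)    echo 50070 ;;  # HDFS NameNode web UI (Hadoop 2.x)
        yarn)             echo 8088  ;;  # YARN ResourceManager web UI
        zookeeper)        echo 2181  ;;  # ZooKeeper client port
        accumulo-monitor) echo 9995  ;;  # Accumulo monitor (1.7.x default)
        geoserver)        echo 8080  ;;  # GeoServer's embedded Jetty
        *)                return 1   ;;
    esac
}

# Example: check the NameNode web UI once HDFS is up
# curl -s "http://localhost:$(default_port hdfs-namenode)/" >/dev/null && echo up
```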
- $ wget http://repo.locationtech.org/content/repositories/geomesa-releases/org/locationtech/geomesa/geomesa-accumulo-dist_2.11/$VERSION/geomesa-accumulo-dist_2.11-$VERSION-bin.tar.gz
- $ tar xvf geomesa-accumulo-dist_2.11-$VERSION-bin.tar.gz
- $ cd geomesa-accumulo-dist_2.11-$VERSION
- $ ls
- bin/ conf/ dist/ docs/ emr4/ examples/ lib/ LICENSE.txt logs/
- $ git clone https://github.com/locationtech/geomesa.git
- $ cd geomesa
$ git checkout tags/geomesa-$VERSION -b geomesa-$VERSION
$ mvn clean install
# or, to build without running the test suite:
$ mvn clean install -DskipTests=true
# or build with the Maven distribution bundled in the GeoMesa source tree:
$ build/mvn clean install
- # something like this for each tablet server
- $ scp dist/accumulo/geomesa-accumulo-distributed-runtime_2.11-$VERSION.jar \
- tserver1:$ACCUMULO_HOME/lib/ext
- # or for raster support
- $ scp dist/accumulo/geomesa-accumulo-distributed-runtime-raster_2.11-$VERSION.jar \
- tserver1:$ACCUMULO_HOME/lib/ext
./setup-namespace.sh -u myUser -n myNamespace
- $ accumulo shell -u root
- > createnamespace myNamespace
- > grant Namespace.CREATE_TABLE -ns myNamespace -u myUser
- > config -s general.vfs.context.classpath.myNamespace=hdfs://NAME_NODE_FDQN:54310/accumulo/classpath/myNamespace/[^.].*.jar
- > config -ns myNamespace -s table.classpath.context=myNamespace
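The general.vfs.context.classpath setting above points at an HDFS directory, and the GeoMesa distributed-runtime jar must actually be copied there for tablet servers to load it. A sketch of the commands involved (the classpath_dir helper is ours; adjust NAME_NODE_FDQN and $VERSION to your setup as in the config line above):

```shell
#!/bin/sh
# Build the HDFS directory path that the namespace classpath config points at.
classpath_dir() {
    # $1 = namespace name
    echo "/accumulo/classpath/$1"
}

NS=myNamespace
# Print the hadoop fs commands to run (echoed here rather than executed,
# since this sketch does not assume a running HDFS):
echo "hadoop fs -mkdir -p $(classpath_dir "$NS")"
echo "hadoop fs -put dist/accumulo/geomesa-accumulo-distributed-runtime_2.11-\$VERSION.jar $(classpath_dir "$NS")/"
```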
- ### in geomesa-accumulo_2.11-$VERSION/:
- $ bin/geomesa configure
- Warning: GEOMESA_ACCUMULO_HOME is not set, using /path/to/geomesa-accumulo_2.11-$VERSION
- Using GEOMESA_ACCUMULO_HOME as set: /path/to/geomesa-accumulo_2.11-$VERSION
- Is this intentional? Y\n y
- Warning: GEOMESA_LIB already set, probably by a prior configuration.
- Current value is /path/to/geomesa-accumulo_2.11-$VERSION/lib.
-
- Is this intentional? Y\n y
-
- To persist the configuration please update your bashrc file to include:
- export GEOMESA_ACCUMULO_HOME=/path/to/geomesa-accumulo_2.11-$VERSION
- export PATH=${GEOMESA_ACCUMULO_HOME}/bin:$PATH
$ source ~/.bashrc
- $ bin/install-jai.sh
- $ bin/install-jline.sh
- $ geomesa
- Using GEOMESA_ACCUMULO_HOME = /path/to/geomesa-accumulo-dist_2.11-$VERSION
- Usage: geomesa [command] [command options]
- Commands:
- ...
$ bin/manage-geoserver-plugins.sh --lib-dir /path/to/geoserver/WEB-INF/lib/ --install
Collecting Installed Jars
Collecting geomesa-gs-plugin Jars
Please choose which modules to install
Multiple may be specified, eg: 1 4 10
Type 'a' to specify all
--------------------------------------
0 | geomesa-accumulo-gs-plugin_2.11-$VERSION
1 | geomesa-blobstore-gs-plugin_2.11-$VERSION
2 | geomesa-process_2.11-$VERSION
3 | geomesa-stream-gs-plugin_2.11-$VERSION
Module(s) to install: 0 1
0 | Installing geomesa-accumulo-gs-plugin_2.11-$VERSION-install.tar.gz
1 | Installing geomesa-blobstore-gs-plugin_2.11-$VERSION-install.tar.gz
Done
- $ tar -xzvf \
- geomesa-accumulo_2.11-$VERSION/dist/geoserver/geomesa-accumulo-gs-plugin_2.11-$VERSION-install.tar.gz \
- -C /path/to/tomcat/webapps/geoserver/WEB-INF/lib/
- If you are using GeoServer's built-in Jetty, the commands are as follows:
- $ $GEOMESA_ACCUMULO_HOME/bin/install-hadoop-accumulo.sh /path/to/tomcat/webapps/geoserver/WEB-INF/lib/
- Install accumulo and hadoop dependencies to /path/to/tomcat/webapps/geoserver/WEB-INF/lib/?
- Confirm? [Y/n]y
- fetching https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-core/1.6.5/accumulo-core-1.6.5.jar
- --2015-09-29 15:06:48-- https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-core/1.6.5/accumulo-core-1.6.5.jar
- Resolving search.maven.org (search.maven.org)... 207.223.241.72
- Connecting to search.maven.org (search.maven.org)|207.223.241.72|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 4646545 (4.4M) [application/java-archive]
- Saving to: ‘/path/to/tomcat/webapps/geoserver/WEB-INF/lib/accumulo-core-1.6.5.jar’
- ...