Blazegraph HA Build & Installation Manual
2017-11-30 11:10:53
leome

Build & Installation Manual

-by yinlei on 2017-5-22


Note: parts marked in red (in the original document) are the key points.




Blazegraph is a graph database that supports SPARQL as well as HA and parallel queries. This document describes how to build it from source and install it. It currently targets Blazegraph 1.5.3.

 

  1 Build
    1.1 Repository

http://192.168.0.155:10080/octopus/blazegragh_1_5_3-ha-balence.git



    1.2 Dependencies

yum -y install man # man page support.

yum -y install mlocate # optional (used to locate procmail's lockfile, which is at /usr/bin/lockfile).

yum -y install emacs-nox # optional.

yum -y install screen # optional job control utility.

yum -y install telnet # optional (useful for testing services and firewall settings)

yum -y install rpcbind # optional (used by NFS).

yum -y install nfs-utils  # optional (used iff you will use NFS for the shared volume).

yum -y install sysstat # used to collect performance counters from the OS and services. pidstat is used to check the processes at startup.

yum -y install ntp # optional, but highly recommended.

yum -y install subversion # used to checkout bigdata from its SVN repository (only necessary for the main server).

yum -y install ant # used to build bigdata from the source code  (only necessary for the main server).



Of course, the basic packages and environment variables (JAVA_HOME, ANT, MAVEN) must be set up in advance.
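
A minimal sketch of that prerequisite setup, assuming the JDK path used later in build.properties and placeholder install locations for Ant and Maven (adjust to wherever they actually live on your build host):

export JAVA_HOME=/opt/jdk1.8.0_101              # must match the JAVA_HOME in build.properties
export ANT_HOME=/opt/apache-ant-1.9.7           # placeholder path
export M2_HOME=/opt/apache-maven-3.3.9          # placeholder path
export PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$M2_HOME/bin:$PATH
java -version && ant -version && mvn -version   # sanity check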


    1.3 Build
      1.3.1 Single-node packaging

Command:

ant deploy-artifact

After running this command, two directories are generated: $BG_HOME/dist and $BG_HOME/ant-build. The dist directory is the single-node startup directory; a single node can be deployed directly from that package.
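
A minimal sketch of the single-node build, assuming $BG_HOME points at the root of the blazegragh_1_5_3-ha-balence checkout:

cd $BG_HOME
ant deploy-artifact                  # generates $BG_HOME/dist and $BG_HOME/ant-build
ls $BG_HOME/dist $BG_HOME/ant-build  # dist is the single-node deployment package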





      1.3.2 Cluster packaging

Steps:

      1. Build: ant deploy-artifact
      2. Modify the configuration (what you change here directly determines how much configuration is left to adjust in the installed directory; see section 1.3.3 and the installation chapter for details).
      3. Build the installation package: ant -Dlog4j.configuration=src/resources/config/log4j.properties clean install
      4. When the build finishes, look at the directories pointed to by the three variables FED, NAS and LAS in $BG_HOME/build.properties to find the installation package files; a consolidated sketch of these steps follows this list.
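
A consolidated sketch of the four steps above, assuming $BG_HOME is the checkout root and that build.properties has already been edited as described in section 1.3.3:

cd $BG_HOME
ant deploy-artifact
# edit build.properties here (FED / NAS / LAS / JAVA_HOME / bigdata.config, see 1.3.3)
ant -Dlog4j.configuration=src/resources/config/log4j.properties clean install
ls /nas1/bigdata/benchmark   # ${NAS} with FED=benchmark; the packaged install tree lands here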


Snippet of the build.properties configuration file:

FED=benchmark


# Bigdata-specific directory on a shared volume accessible by all hosts in the

# cluster.

#

# Note: You can create the appropriate permissions by creating the directory

# ahead of time and doing chown to set the user and group and then chmod to give

# the group read/write permissions. 

NAS=/nas1/bigdata/${FED}


# Bigdata-specific directory on a local volume.  Each host in the cluster will

# place the persistent state for the bigdata services running on that host within

# this directory.  The user which will execute bigdata MUST be able to read/write

# files on this path on each host.  Therefore, if you are not installing as root

# this will need to be a file within the user's home directory or some directory

# which exists on each host and is writable by that user.

LAS=/data1/bigdata/${FED}
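
The comment above asks for the shared NAS directory to be created ahead of time with the right owner and group permissions. A minimal sketch using the FED/NAS values from this snippet; the bigdata user and group are placeholders for whatever account will run the services:

mkdir -p /nas1/bigdata/benchmark                 # ${NAS} with FED=benchmark
chown bigdata:bigdata /nas1/bigdata/benchmark    # placeholder user:group
chmod -R g+rw /nas1/bigdata/benchmark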


A preview of the directory tree after the build:


[root@NDlinux1 benchmark]# ll

total 28

drwxr-xr-x 2 root root 4096 5  22 16:25 bin

drwxr-xr-x 5 root root 4096 5  22 16:25 config

drwxr-xr-x 2 root root 4096 5  22 16:25 dist

drwxr-xr-x 7 root root 4096 5  22 16:25 doc

drwxr-xr-x 4 root root 4096 5  22 16:25 lib

drwxr-xr-x 2 root root 4096 5  22 16:25 log

-rw-rw---- 1 root root    6 5  22 16:25 state

[root@NDlinux1 benchmark]# pwd

/nas1/bigdata/benchmark

[root@NDlinux1 benchmark]# tree

.
├── bin
│   ├── archiveRun.sh
│   ├── bigdata
│   ├── bigdatadown
│   ├── bigdataenv
│   ├── bigdataprecond
│   ├── bigdatasetup
│   ├── bigdataup
│   ├── broadcast_sighup
│   ├── counters.sh
│   ├── crontab.system
│   ├── crontab.user
│   ├── dumpFed.sh
│   ├── dumpZoo.sh
│   ├── extractCounters.sh
│   ├── jiniStart.sh
│   ├── listServices.sh
│   ├── nanoSparqlServer.sh
│   ├── POST-INSTALL
│   ├── RDFDataLoadMaster.sh
│   ├── README
│   ├── runLog4jServer.sh
│   ├── setProperties.sh
│   ├── testStatisticsCollector.sh
│   └── throughputMaster.sh
├── config
│   ├── bigdataCluster16.config
│   ├── bigdataCluster.config
│   ├── bigdataStandalone.config
│   ├── browser-logging.properties
│   ├── jini
│   │   ├── browser.config
│   │   ├── fiddler.config
│   │   ├── mahalo.config
│   │   ├── mercury.config
│   │   ├── norm.config
│   │   ├── outrigger.config
│   │   ├── README
│   │   ├── reggie.config
│   │   └── startAll.config
│   ├── log4j.properties
│   ├── log4jServer.properties
│   ├── logging.properties
│   ├── miniCluster.config
│   ├── ntpd
│   │   ├── ntp-client.conf
│   │   └── ntp.conf
│   ├── policy.all
│   ├── README
│   ├── reggie-logging.properties
│   ├── service.policy
│   ├── standalone
│   │   ├── bigdataStandalone.config
│   │   ├── log4j.properties
│   │   └── README
│   └── zookeeper-logging.properties
├── dist
├── doc
│   ├── bigdata
│   │   └── LEGAL
│   │       ├── apache-license-2_0.txt
│   │       ├── colt-legal.txt
│   │       ├── cweb-license.txt
│   │       ├── dsiutils-license.txt
│   │       ├── fastutil-license.txt
│   │       ├── flot-license.txt
│   │       ├── hamcrest-license.txt
│   │       ├── icu-license.txt
│   │       ├── jetty-license.txt
│   │       ├── jquery-license.txt
│   │       ├── jsap-license.txt
│   │       ├── junit-license.html
│   │       ├── log4j-license.txt
│   │       └── NanoHTTPD-license.txt
│   ├── bigdata-blueprints
│   │   └── LEGAL
│   │       ├── apache-commons.txt
│   │       ├── blueprints-license.txt
│   │       ├── jettison-license.txt
│   │       └── rexster-license.txt
│   ├── bigdata-jini
│   │   └── LEGAL
│   │       ├── apache-river-license.txt
│   │       └── apache-zookeeper-license.txt
│   ├── bigdata-rdf
│   │   └── LEGAL
│   │       ├── nxparser-license.txt
│   │       ├── sesame2.x-license.txt
│   │       └── slf4j-license.txt
│   ├── bigdata-sails
│   │   └── LEGAL
│   │       ├── apache-commons.txt
│   │       ├── httpclient-cache.txt
│   │       ├── httpclient.txt
│   │       ├── httpcore.txt
│   │       ├── httpmime.txt
│   │       ├── jackson-license.txt
│   │       └── sesame2.x-license.txt
│   └── LICENSE.txt
├── lib
│   ├── apache
│   │   └── zookeeper-3.4.5.jar
│   ├── bigdata-1.5.3.jar
│   ├── bigdata-ganglia-1.0.4.jar
│   ├── blueprints-core-2.5.0.jar
│   ├── blueprints-test-2.5.0.jar
│   ├── bnd-0.0.384.jar
│   ├── colt-1.2.0.jar
│   ├── commons-codec-1.4.jar
│   ├── commons-configuration-1.10.jar
│   ├── commons-fileupload-1.3.1.jar
│   ├── commons-io-2.1.jar
│   ├── commons-logging-1.1.1.jar
│   ├── dsi-utils-1.0.6-020610.jar
│   ├── fastutil-6.5.16.jar
│   ├── hamcrest-core-1.3.jar
│   ├── high-scale-lib-v1.1.2.jar
│   ├── httpclient-4.1.3.jar
│   ├── httpclient-cache-4.1.3.jar
│   ├── httpcore-4.1.4.jar
│   ├── httpmime-4.1.3.jar
│   ├── icu4j-4.8.jar
│   ├── icu4j-charset-4.8.jar
│   ├── icu4j-localespi-4.8.jar
│   ├── jackson-annotations-2.3.1.jar
│   ├── jackson-core-2.3.1.jar
│   ├── jackson-databind-2.3.1.jar
│   ├── jettison-1.3.3.jar
│   ├── jetty-client-9.2.3.v20140905.jar
│   ├── jetty-continuation-9.2.3.v20140905.jar
│   ├── jetty-http-9.2.3.v20140905.jar
│   ├── jetty-io-9.2.3.v20140905.jar
│   ├── jetty-jmx-9.2.3.v20140905.jar
│   ├── jetty-jndi-9.2.3.v20140905.jar
│   ├── jetty-proxy-9.2.3.v20140905.jar
│   ├── jetty-rewrite-9.2.3.v20140905.jar
│   ├── jetty-security-9.2.3.v20140905.jar
│   ├── jetty-server-9.2.3.v20140905.jar
│   ├── jetty-servlet-9.2.3.v20140905.jar
│   ├── jetty-util-9.2.3.v20140905.jar
│   ├── jetty-webapp-9.2.3.v20140905.jar
│   ├── jetty-xml-9.2.3.v20140905.jar
│   ├── jini
│   │   ├── lib
│   │   │   ├── asm-3.2.jar
│   │   │   ├── asm-commons-3.2.jar
│   │   │   ├── browser.jar
│   │   │   ├── checkconfigurationfile.jar
│   │   │   ├── checkser.jar
│   │   │   ├── classdep.jar
│   │   │   ├── classserver.jar
│   │   │   ├── computedigest.jar
│   │   │   ├── computehttpmdcodebase.jar
│   │   │   ├── destroy.jar
│   │   │   ├── envcheck.jar
│   │   │   ├── extra.jar
│   │   │   ├── fiddler.jar
│   │   │   ├── group.jar
│   │   │   ├── jarwrapper.jar
│   │   │   ├── jini-core.jar
│   │   │   ├── jini-ext.jar
│   │   │   ├── jsk-debug-policy.jar
│   │   │   ├── jsk-lib.jar
│   │   │   ├── jsk-platform.jar
│   │   │   ├── jsk-resources.jar
│   │   │   ├── mahalo.jar
│   │   │   ├── mercury.jar
│   │   │   ├── norm.jar
│   │   │   ├── outrigger.jar
│   │   │   ├── outrigger-snaplogstore.jar
│   │   │   ├── phoenix-group.jar
│   │   │   ├── phoenix-init.jar
│   │   │   ├── phoenix.jar
│   │   │   ├── preferredlistgen.jar
│   │   │   ├── reggie.jar
│   │   │   ├── serviceui.jar
│   │   │   ├── sharedvm.jar
│   │   │   ├── start.jar
│   │   │   ├── sun-util.jar
│   │   │   └── tools.jar
│   │   ├── lib-dl
│   │   │   ├── browser-dl.jar
│   │   │   ├── fiddler-dl.jar
│   │   │   ├── group-dl.jar
│   │   │   ├── jsk-dl.jar
│   │   │   ├── mahalo-dl.jar
│   │   │   ├── mercury-dl.jar
│   │   │   ├── norm-dl.jar
│   │   │   ├── outrigger-dl.jar
│   │   │   ├── phoenix-dl.jar
│   │   │   ├── reggie-dl.jar
│   │   │   └── sdm-dl.jar
│   │   └── lib-ext
│   │       └── jsk-policy.jar
│   ├── junit-4.11.jar
│   ├── junit-ext-1.1-b3-dev.jar
│   ├── lgpl-utils-1.0.7-270114.jar
│   ├── log4j-1.2.17.jar
│   ├── lucene-analyzers-3.0.0.jar
│   ├── lucene-core-3.0.0.jar
│   ├── opencsv-2.0.jar
│   ├── openrdf-sesame-2.7.13-onejar.jar
│   ├── rexster-core-2.5.0.jar
│   ├── servlet-api-3.1.0.jar
│   ├── sesame-rio-testsuite-2.7.13.jar
│   ├── sesame-sparql-testsuite-2.7.13.jar
│   ├── sesame-store-testsuite-2.7.13.jar
│   ├── slf4j-api-1.6.1.jar
│   └── slf4j-log4j12-1.6.1.jar
├── log
│   └── state.log
└── state

24 directories, 188 files


      5. This completes the build. You can now create the tarball by hand:


[root@NDlinux1 bigdata]# ll

total 4

drwxrwxr-x 8 root root 4096 5  22 16:25 benchmark

[root@NDlinux1 bigdata]# pwd

/nas1/bigdata

[root@NDlinux1 bigdata]# tar -cvf benchmark.tar benchmark/

[root@NDlinux1 bigdata]# gzip benchmark.tar 

[root@NDlinux1 bigdata]# ll -h

total 53M

drwxrwxr-x 8 root root 4.0K 5  22 16:25 benchmark

-rw-r--r-- 1 root root  53M 5  22 16:36 benchmark.tar.gz


      6. You can use this package to deploy the cluster; some configuration still needs to be changed, see the installation section for details.
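
A hedged sketch of shipping the package to another node; NDlinux2 is only an example hostname taken from the /etc/hosts table in section 2.4.1, and it assumes the same /nas1/bigdata layout exists there (if /nas1 is a shared NFS volume, extracting it once on the shared mount is enough):

scp /nas1/bigdata/benchmark.tar.gz root@NDlinux2:/nas1/bigdata/
ssh root@NDlinux2 'cd /nas1/bigdata && tar -xzvf benchmark.tar.gz'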


      1.3.3 Changes to make before building the installation package

build.properties



bigdata.dir=.

build.dir=ant-build

javac.debug=on

javac.debuglevel=lines,vars,source

javac.verbose=off

javac.source=1.7        // a newer source level produces warnings

javac.target=1.7

javac.encoding=Cp1252 

release.dir=ant-release

build.ver=1.5.3   // version number

build.ver.osgi=1.0

snapshot=false

package.release=1                    // settings used inside the rpm package

package.prefix=/usr

package.conf.dir=/etc/bigdata

package.fedname=BigdataFed

package.pid.dir=/var/run/bigdata

package.var.dir=/var/lib/bigdata

package.share.dir=/usr/share/bigdata

ssh.scp=/usr/bin/scp

standalone.fed=testFed

FED=benchmark            // name of the final packaged install directory

NAS=/nas1/bigdata/${FED}   // root path under which the package is assembled

LAS=/data1/bigdata/${FED}   // directory where the data is placed

JAVA_HOME=/opt/jdk1.8.0_101 // JAVA_HOME is important: this value replaces the JDK variable in bigdataCluster.config

JINI_CLASS_SERVER_PORT=8081      // Jini class server port (usually no change needed)

LOAD_BALANCER_PORT=9999        // load balancer data transfer port (usually no change needed)

REPORT_ALL=false

SYSSTAT_HOME=/usr/local/bin

USE_NIO=true

install.bin.dir=${NAS}/bin

install.doc.dir=${NAS}/doc

install.lib.dir=${NAS}/lib

install.config.dir=${NAS}/config

install.log.dir=${NAS}/log

install.dist.dir=${NAS}/dist

install.user=${user.name}

install.group=${user.name}

LOCK_CMD=lockfile -r 1 -1

LOCK_FILE=${LAS}/lockFile

bigdata.config=${install.config.dir}/bigdataStandalone.config   // replace this with bigdataCluster.config or bigdataCluster16.config from the same directory; the variables inside are substituted at install time, or you can edit them by hand. The zookeeper settings and node roles in that file must be adapted to your environment.

jini.config=${install.config.dir}/jini/startAll.config   // Jini combined startup configuration (no change needed)

policyFile=${install.config.dir}/policy.all

LOG4J_SOCKET_LOGGER_HOST = mytestlog4jserver.com   // address of the log4j server, preferably a hostname from /etc/hosts. The cluster startup script checks whether the local server.hostname equals this value; if it does, it starts the log4j server and the corresponding logs can then be read under the log directory.

LOG4J_SOCKET_LOGGER_PORT = 4445  // log4j server port

LOG4J_DATE_PATTERN='.'yyyy-MM-dd'.log'

log4j.config=file:${install.config.dir}/log4j.properties // the LOG4J_SOCKET_LOGGER_HOST above replaces the server address inside this file

log4jServer.config=${install.config.dir}/log4jServer.properties // log4j server configuration (usually no change needed)

logging.config=${install.config.dir}/logging.properties

errorLog=${install.log.dir}/error.log

detailLog=${install.log.dir}/detail.log

eventLog=${install.log.dir}/event.log

ruleLog=${install.log.dir}/rule.log

stateLog=${install.log.dir}/state.log

stateFile=${NAS}/state

forceKillAll=true

NTP_MASTER=          // if you need ntpd, start the service (service ntpd start) and then configure these entries

NTP_NETWORK=192.168.6.0

NTP_NETMASK=255.255.255.0

analysis.dir=E:/DPP/dpaether123/async-write-runs-june-09/run5/nas/runs/run5

analysis.counters.dir=${analysis.dir}/counters

analysis.queries=src/resources/analysis/queries

analysis.out.dir=${analysis.dir}/output

install.lubm.dir=${NAS}/lubm

install.lubm.lib.dir=${install.lubm.dir}/lib

install.lubm.config.dir=${install.lubm.dir}/config

LUBM_CLASS_SERVER_PORT = 8082

LUBM_CLASS_SERVER_HOSTNAME =mytest.com 

LUBM_RMI_CODEBASE_URL = http://${LUBM_CLASS_SERVER_HOSTNAME}:${LUBM_CLASS_SERVER_PORT}/

LUBM_ONTOLOGY_DIR=$NAS/lubm

sesame.server.dir = D:/apache-tomcat-6.0.26/webapps/openrdf-sesame

workbench.server.dir = D:/apache-tomcat-6.0.26/webapps/openrdf-workbench

aduna.data.dir = C:/Documents and Settings/Bryan Thompson/Application Data/Aduna/OpenRDF Sesame console

sesame.dir = D:/openrdf-sesame-2.3.1

perf.data.dir=/usr/bigdata/data

perf.run.dir=/usr/bigdata/runs

local.test.zookeeper.installDir=/usr/zookeeper-current   // you can point this at the zookeeper home so that cluster startup starts it for you; so far no change has been needed

test.zookeeper.tickTime=2000

test.zookeeper.clientPort=2081
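
After running the install build, a quick way to confirm that these values were really substituted is to look at the generated bigdataenv script (its full content is shown in section 2.3); a minimal sketch using the NAS path from this file:

grep -E 'FED=|NAS=|LAS=|JAVA_HOME=|BIGDATA_CONFIG=' /nas1/bigdata/benchmark/bin/bigdataenv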




  2 Installation
    2.1 Dependencies

yum -y install sysstat   #pidstat
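
pidstat from the sysstat package is what the note in section 1.2 refers to for watching the bigdata processes at startup; a minimal usage sketch, where <PID> is a placeholder for the process you want to monitor:

pidstat -u -r -p <PID> 5   # CPU and memory usage of one process, sampled every 5 seconds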

    2.2 Install and start Zookeeper first (skip if you already have one)

A simple standalone installation and startup:


This depends on the zookeeper command-line tools:

tar -xvf zookeeper-3.4.6.tar.gz

cd zookeeper-3.4.6/conf && mv zoo_sample.cfg zoo.cfg && cd ../bin

./zkServer.sh start
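
A quick, hedged check that the standalone instance is up; zoo_sample.cfg defaults to clientPort 2181, and nc is only used here for the four-letter-word probe (an extra dependency):

./zkServer.sh status
echo ruok | nc localhost 2181   # a healthy server answers imok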


    2.3 Configuration changes

The configuration has to be modified before building; the changes directly affect the build output and depend on the environment you are deploying to. JAVA_HOME, for example, is compiled into the main cluster configuration. Alternatively, you can edit the generated configuration files such as /nas/bigdata/benchmark/config/bigdataCluster.config afterwards.

If the build and packaging were done properly, the two most important configuration files to modify are:

    • /config/bigdataCluster.config
    • bin/bigdataenv



bigdataCluster.config


    static private fedname = "benchmark";  // around line 75


    /**

     * Where to put all the persistent state.

     */

    static private serviceDir = new File("/data/bigdata/benchmark");


/**

* Which JDK to use.

*/

static private javaHome = new File("/usr/java/jdk1.7.0_55");

...

...

// around line 135: this is the key part, replace these hosts to match your actual environment

static private h0 = "192.168.0.41"; // 4@3ghz/1kb x 4GB x 263G

static private h1 = "192.168.0.42"; // 4@3ghz/2kb x 4GB x 263G 

static private h2 = "192.168.0.41"; // 4@3ghz/1kb x 4GB x 64G

/* Note: this configuration puts things that are not disk intensive

* on the host with the least disk space and zookeeper.

*/

    static private lbs = h2; // specify as @LOAD_BALANCER_HOST@ ?

    static private txs = h2;

    static private mds = h2;


    // 1+ jini servers

    static private jini1 = h2;

    //static private jini2 = h1;

    static private jini = new String[]{ jini1 }; //,jini2};


    // Either 1 or 3 zookeeper machines (one instance per).

    // See the QuorumPeerMain and ZooKeeper configurations below.

    static private zoo1 = h2;

    //static private zoo2 = h1;

    //static private zoo3 = h2;

    static private zoo = new String[] { zoo1 }; // ,zoo2,zoo3};


    // 1+ client service machines (1+ instance per host).

    static private cs0  = h2;


    // 1+ data service machines (1+ instance per host).

    static private ds0  = h0;

    static private ds1  = h1;


    // client servers

    static private cs = new String[] {

    cs0 //, ...

    };


    // The target #of client servers.

    static private clientServiceCount = 1; // was 2

    static private maxClientServicePerHost = 1; // was 2


    // data servers

    static private ds = new String[]{

  ds0, ds1 //, ...

    };


    // The target #of data services.

    static private dataServiceCount = 2;// was 4


    // Maximum #of data services per host.

    static private maxDataServicesPerHost = 1; // was 2


    // @todo also specify k (replicationCount)


    // Sets the initial and maximum journal extents.

    static private journalExtent = ConfigMath.multiply(200, Bytes.megabyte);


    /**

     * A String[] whose values are the group(s) to be used for discovery

     * (no default). Note that multicast discovery is always used if

     * LookupDiscovery.ALL_GROUPS (a <code>null</code>) is specified.

     */


    // one federation, multicast discovery.

    //static private groups = LookupDiscovery.ALL_GROUPS;


    // unicast discovery or multiple federations, MUST specify groups.

    static private groups = new String[]{bigdata.fedname};


    /**

     * One or more unicast URIs of the form <code>jini://host/</code>

     * or <code>jini://host:port/</code> (no default).

     *

     * This MAY be an empty array if you want to use multicast

     * discovery <strong>and</strong> you have specified the groups as

     * LookupDiscovery.ALL_GROUPS (a <code>null</code>).

     */

    static private locators = new LookupLocator[] {


// runs jini on the localhost using unicast locators.

//new LookupLocator("jini://localhost/")

 

// runs jini on two hosts using unicast locators.

new LookupLocator("jini://"+jini1),

//new LookupLocator("jini://"+jini2),


    };



bigdataenv

As you can see, everything below is produced by substituting the values defined in build.properties.


export FED=benchmark

export NAS=/nas/bigdata/benchmark

export LAS=/data/bigdata/benchmark

export SYSSTAT_HOME=/usr/local/bin         // dependency (sysstat)

export JAVA_HOME=/usr/java/jdk1.7.0_55     // substituted value; change it here if it was not set correctly

export INSTALL_GROUP=root

export BIGDATA_CONFIG="/nas/bigdata/benchmark/config/bigdataStandalone.config"  // if this was not substituted, change it to the cluster config

export BIGDATA_CONFIG_OVERRIDES=

export JINI_CONFIG="/nas/bigdata/benchmark/config/jini/startAll.config"

export BIGDATA_POLICY=/nas/bigdata/benchmark/config/policy.all

export PATH=${JAVA_HOME}/bin:/nas/bigdata/benchmark/bin:${PATH}

export CLASSPATH="... (omitted)"  // classpath

export JAVA_OPTS="-server -ea \           // JVM settings

    -showversion \

    -Dcom.sun.jini.jeri.tcp.useNIO=true \

    -Djava.security.policy=${BIGDATA_POLICY} \

    -Dlog4j.configuration=${BIGDATA_LOG4J_CONFIG} \

    -Djava.util.logging.config.file=${BIGDATA_LOGGING_CONFIG} \

    -Dcom.bigdata.counters.linux.sysstat.path=${SYSSTAT_HOME} \

"

export lockCmd="lockfile -r 1 -1"

export lockFile=/data/bigdata/benchmark/lockFile

export pidFile=${LAS}/pid

export stateFile=/nas/bigdata/benchmark/state

export binDir=/nas/bigdata/benchmark/bin

export libDir=/nas/bigdata/benchmark/lib

export configDir=/nas/bigdata/benchmark/config

export BIGDATA_LOG4J_CONFIG="file:/nas/bigdata/benchmark/config/log4j.properties"

export BIGDATA_LOGGING_CONFIG=/nas/bigdata/benchmark/config/logging.properties

export logDir=/nas/bigdata/benchmark/log

export ruleLog=/nas/bigdata/benchmark/log/rule.log

export eventLog=/nas/bigdata/benchmark/log/event.log

export errorLog=/nas/bigdata/benchmark/log/error.log

export detailLog=/nas/bigdata/benchmark/log/detail.log

export stateLog=/nas/bigdata/benchmark/log/state.log

export FORCE_KILL_ALL=true




    2.4 Changes
      2.4.1 /etc/hosts


192.168.0.41 NDlinux1

192.168.0.42 NDlinux2

192.168.0.43 NDlinux3

    2.5 Startup sequence


After the changes are done, start the services.

      2.5.1 Start the service on every node

/nas/bigdata/benchmark/bin/bigdata start 

Then check that everything is OK: run listServices.sh or ./dumpZoo.sh to see how many hosts have come up and whether the services started.

[root@NDlinux1 bin]# ./listServices.sh

Zookeeper is running.

Discovered 1 jini service registrars.

   192.168.0.41

Discovered 9 services

Discovered 0 stale bigdata services.

Discovered 8 live bigdata services.

Discovered 1 other services.

Bigdata services by serviceIface:

  There are 2 instances of com.bigdata.jini.start.IServicesManagerService on 2 hosts

  There are 1 instances of com.bigdata.journal.ITransactionService on 1 hosts

  There are 1 instances of com.bigdata.service.IClientService on 1 hosts

  There are 2 instances of com.bigdata.service.IDataService on 2 hosts

  There are 1 instances of com.bigdata.service.ILoadBalancerService on 1 hosts

  There are 1 instances of com.bigdata.service.IMetadataService on 1 hosts

Bigdata services by hostname:

  There are 6 live bigdata services on NDlinux1

    There are 1 com.bigdata.jini.start.IServicesManagerService services

    There are 1 com.bigdata.journal.ITransactionService services

    There are 1 com.bigdata.service.IClientService services

    There are 1 com.bigdata.service.IDataService services

    There are 1 com.bigdata.service.ILoadBalancerService services

    There are 1 com.bigdata.service.IMetadataService services

  There are 2 live bigdata services on NDlinux2

    There are 1 com.bigdata.jini.start.IServicesManagerService services

There are 1 com.bigdata.service.IDataService services



[root@NDlinux1 bin]# ./dumpZoo.sh



------------------ -server -ea -showversion -DZooKeeper.servers=192.168.0.158:2181,192.168.0.156:2181,192.168.0.157:2181 -Dcom.sun.jini.jeri.tcp.useNIO=true -Djava.security.policy=/nas/bigdata/benchmark/config/policy.all -Dlog4j.configuration= -Djava.util.logging.config.file= -Dcom.bigdata.counters.linux.sysstat.path=/usr/local/bin

================== /nas/bigdata/benchmark/config/bigdataCluster.config

java version "1.8.0_101"

Java(TM) SE Runtime Environment (build 1.8.0_101-b13)

Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)


log4j:WARN No appenders could be found for logger (com.bigdata.jini.start.config.ZookeeperClientConfig).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

ZookeeperClientConfig{ zroot=/benchmark, sessionTimeout=600000, servers=192.168.0.41:2181, acl=[31,s{'world,'anyone}

]}

Negotiated sessionTimeout=4000ms

benchmark(2 children) 

  config(5 children) 

    com.bigdata.service.jini.ClientServer(1 children) com.bigdata.jini.start.config.ClientServerConfiguration{ className=com.bigdata.service.jini.ClientServer, args=[-Xmx1600m, -XX:+UseParallelOldGC], options=[], serviceDir=/data/bigdata/benchmark, timeout=20000, serviceCount=1, replicationCount=1, constraints=[com.bigdata.jini.start.config.JiniRunningConstraint@2f7a2457, com.bigdata.jini.start.config.ZookeeperRunningConstraint@566776ad, class com.bigdata.jini.start.config.HostAllowConstraint{hosts=[192.168.0.41]}, com.bigdata.jini.start.config.MaxClientServicesPerHostConstraint{ ma

,……….

………..

…………




      2.5.2 Start the nanoSparqlServer.sh service

The startup command is:  /nas/bigdata/benchmark/bin/nanoSparqlServer.sh 9999 qq rw

      • 9999: the RESTful service port
      • qq: the default namespace assigned to the server
      • rw: read/write mode (overrideWebXml, copied under the web directory)


      2.5.3 Web request test


http://192.168.0.42:9999/bigdata/namespace/qq/sparql

Accept: text/tab-separated-values

Content-Type: application/x-www-form-urlencoded
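
A hedged example of such a request using curl; the query itself is only an illustration, and the query form parameter follows the standard SPARQL 1.1 protocol that the NanoSparqlServer REST endpoint accepts:

curl -X POST http://192.168.0.42:9999/bigdata/namespace/qq/sparql \
     -H 'Accept: text/tab-separated-values' \
     -H 'Content-Type: application/x-www-form-urlencoded' \
     --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'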







