What the dollar sign before a string does in bash

Problem

The bash built-in variable IFS is the internal field separator; its default value is <space><tab><newline>. I wanted to set it to a newline only, so I wrote:

OLD_IFS=$IFS
IFS='\n'
# do some work here
IFS=$OLD_IFS

But the result was that IFS treated each individual character as a separator, i.e. the separators were set to the backslash character and the letter n.

Why?

Solution

A Google search showed that \n has to be written using ANSI-C quoting, which means putting the string inside $'string'. So the assignment should be:

IFS=$'\n'

While at it, I also looked up the uses of the $ character. An answer on Unix & Linux explains the two forms of prefixing a string with $, one with single quotes and one with double quotes:

There are two different things going on here, both documented in the bash manual

$'

Dollar-sign single quote is a special form of quoting: ANSI-C Quoting. Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard.

$"

Dollar-sign double-quote is for localization: Locale translation. A double-quoted string preceded by a dollar sign ('$') will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted.

So the single-quote form expands ANSI-C escape sequences, while the double-quote form localizes the string.
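The difference is easy to verify directly in bash by comparing the lengths of the two assignments; a minimal sketch:

```shell
a='\n'      # single quotes: two literal characters, backslash and n
b=$'\n'     # ANSI-C quoting: a single newline character
echo "${#a}"   # 2
echo "${#b}"   # 1
```

This is exactly why IFS='\n' splits on backslash and n, while IFS=$'\n' splits on newlines.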

Here is a real example: ping every host in /etc/hosts whose hostname starts with video-, to check the network status.

#!/bin/bash
trap "echo 'interrupted!'; exit 1" SIGHUP SIGINT SIGTERM
OLD_IFS=$IFS
IFS=$'\n'
for i in $(awk '$0!~/^$/ && $0!~/^#/ && $2~/^video/ {print $1,$2}' /etc/hosts)
do
    ADDR=$(echo "$i" | cut -d' ' -f 1)
    DOMAIN=$(echo "$i" | cut -d' ' -f 2)
    if ping -c 2 "$ADDR" &>/dev/null
    then
        echo "$DOMAIN ok!"
    else
        echo "$DOMAIN dead!"
    fi
done
IFS=$OLD_IFS
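The awk expression in the loop keeps only non-empty, non-comment lines whose second field starts with video. A self-contained sketch, run against hypothetical sample data instead of the real /etc/hosts:

```shell
# Sample hosts entries fed via a here-document (made-up addresses/names):
awk '$0!~/^$/ && $0!~/^#/ && $2~/^video/ {print $1,$2}' <<'EOF'
# static hosts
127.0.0.1 localhost

10.0.0.5 video-node1
10.0.0.6 db-node1
EOF
# prints: 10.0.0.5 video-node1
```

The comment line, the blank line, and the non-video host are all filtered out.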


Please credit http://krystism.is-programmer.com/ when reposting. If there are any errors, corrections are welcome, thanks!

Installing the Nvidia driver on Ubuntu


Step 1: Find out your graphics card model

Use the lspci command to find the graphics card model:

lspci -vnn | grep -i VGA

Output:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 630 OEM] [10de:0fc2] (rev a1) (prog-if 00 [VGA controller])
    Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:275c]
    Flags: bus master, fast devsel, latency 0, IRQ 46
    Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at f0000000 (64-bit, prefetchable) [size=32M]
    I/O ports at e000 [size=128]
    Expansion ROM at f7000000 [disabled] [size=512K]
    Capabilities: <access denied>
    Kernel driver in use: nouveau

As shown, the card model is GeForce GT 630 OEM.
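If you want just the model string itself, you can extract the bracketed GeForce token from the lspci line; a sketch using the sample output above:

```shell
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 630 OEM] [10de:0fc2] (rev a1)'
# sed captures the bracketed token that begins with "GeForce":
echo "$line" | sed -n 's/.*\[\(GeForce[^]]*\)\].*/\1/p'   # GeForce GT 630 OEM
```

The pattern assumes the model is bracketed and starts with "GeForce", as in this output; other cards may format the line differently.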

Step 2: Find out the right driver version for your graphics card

Go to the official Nvidia driver download page, enter your card type, and click the Search button; it will show the driver version you need to install:

Version:    340.58
Release Date:   2014.11.5
Operating System:   Linux 64-bit
Language:   English (US)
File Size:  69.00 MB

Step 3: Set up the xorg-edgers PPA

Run the following commands to add the PPA and update the package sources:

sudo add-apt-repository ppa:xorg-edgers/ppa -y
sudo apt-get update

Step 4: Install the driver

Run the following command to install the driver:

sudo apt-get install nvidia-340

Step 5: Verify the installation

Run the following command:

lspci -vnn | grep -i VGA

Output:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 630 OEM] [10de:0fc2] (rev a1) (prog-if 00 [VGA controller])
    Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:275c]
    Flags: bus master, fast devsel, latency 0, IRQ 46
    Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at f0000000 (64-bit, prefetchable) [size=32M]
    I/O ports at e000 [size=128]
    Expansion ROM at f7000000 [disabled] [size=512K]
    Capabilities: <access denied>
    Kernel driver in use: nvidia

Note that the "Kernel driver in use" line now shows the nvidia kernel driver.

Step 6: Nvidia settings tool

Use the nvidia-settings command to configure the driver.

Removing the drivers

If installing the driver leaves the system unable to boot, uninstall it by running:

sudo apt-get purge 'nvidia*'

Additional Notes

Many tutorials say that after installing the nvidia driver you must blacklist nouveau. This is not actually necessary, because the nvidia driver packages blacklist it automatically. Run the following command:

grep 'nouveau' /etc/modprobe.d/* | grep nvidia

Output:

/etc/modprobe.d/nvidia-340_hybrid.conf:blacklist nouveau
/etc/modprobe.d/nvidia-340_hybrid.conf:blacklist lbm-nouveau
/etc/modprobe.d/nvidia-340_hybrid.conf:alias nouveau off
/etc/modprobe.d/nvidia-340_hybrid.conf:alias lbm-nouveau off
/etc/modprobe.d/nvidia-graphics-drivers.conf:blacklist nouveau
/etc/modprobe.d/nvidia-graphics-drivers.conf:blacklist lbm-nouveau
/etc/modprobe.d/nvidia-graphics-drivers.conf:alias nouveau off
/etc/modprobe.d/nvidia-graphics-drivers.conf:alias lbm-nouveau off

This shows nouveau is already blacklisted, i.e. the system will not load these modules at boot.

References

Based on an English-language blog post.


Building Spark 1.1.1 against Hadoop 2.5.2

Prebuilt Spark binaries are currently only available for Hadoop 2.4 and CDH4; if your cluster runs Hadoop 2.5.x, you have to build Spark from source yourself.

Download the source

Download the source from the official site: under "Choose a package type", select Source Code, then extract it after downloading:

tar xvf spark-1.1.1.tgz

Build

Step 1: Set the Maven memory limits

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

Step 2: Add a hadoop-2.5 profile

Note that each Hadoop 2.x.x version needs a matching profile. The top-level pom.xml only defines a profile for 2.4.0, so to build against Hadoop 2.5.x you have to add a hadoop-2.5 profile to pom.xml by hand:

<profile>
  <id>hadoop-2.5</id>
  <properties>
    <hadoop.version>2.5.2</hadoop.version>
    <protobuf.version>2.5.0</protobuf.version>
    <jets3t.version>0.9.0</jets3t.version>
  </properties>
</profile>

Otherwise the build picks up the wrong protobuf version and cannot read files on HDFS, throwing: java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CreateSnapshotRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;

Step 3: Build

Run:

mvn -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.2 -Phive -DskipTests clean package


Installing Hive on a single node

Requirements

  1. Java 1.7+
  2. Hadoop 2.x
  3. MySQL 5.5+ (optional, but recommended for storing the metastore)

Environment before installation

  1. JAVA_HOME: the Java installation directory
  2. HADOOP_HOME: the Hadoop installation directory
  3. CLASSPATH: besides the required Hadoop and Hive jars, it must also include the MySQL JDBC driver; here mysql-connector-java-5.1.25.jar is used, placed under lib

Installation

Step 1: Download the tarball

Download the latest tarball from the Hive official site; the latest at the time of writing is apache-hive-0.14.0-bin.tar.gz.

Step 2: Extract the tarball

Assuming the install path is /opt/hive:

sudo mv apache-hive-0.14.0-bin.tar.gz /opt
sudo tar xvf /opt/apache-hive-0.14.0-bin.tar.gz -C /opt
sudo ln -s /opt/apache-hive-0.14.0-bin /opt/hive
sudo mv mysql-connector-java-5.1.25.jar /opt/hive/lib

Step 3: Configure

  • Create the configuration files from the templates (run inside /opt/hive/conf):
sudo rename 's/\.template//' *
sudo touch hive-site.xml
  • Edit hive-env.sh and set HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}
  • Edit hive-site.xml and add the following:
<configuration>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
  <description>the URL of the MySQL database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>HIVE_DBPASS</value>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.fixedDatastore</name>
  <value>false</value>
</property>

<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.autoCreateColumns</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.autoStartMechanism</name> 
  <value>SchemaTable</value>
</property> 

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>

<!--
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/hive/lib/zookeeper-3.4.5.jar,file:///opt/hive/lib/hive-hbase-handler-0.14.0.jar,file:///opt/hive/lib/guava-11.0.2.jar</value>
</property>
-->

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>

<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
  <description>Enable Hive's Table Lock Manager Service</description>
</property>

</configuration>
  • MySQL setup: edit /etc/mysql/my.cnf, set bind-address to 0.0.0.0, and restart the MySQL service. Also make sure the hive MySQL user referenced above exists and has privileges on the metastore database.
  • Create the necessary directories in HDFS:
$HADOOP_HOME/bin/hdfs dfs -mkdir /tmp
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hive
$HADOOP_HOME/bin/hdfs dfs -chown hive /user/hive
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -chmod g+w /tmp
$HADOOP_HOME/bin/hdfs dfs -chmod 777 /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -chmod a+t /user/hive/warehouse
  • Run $HIVE_HOME/bin/hive. Done!
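The HADOOP_HOME line in hive-env.sh relies on bash's ${VAR:-default} parameter expansion, which substitutes the default only when the variable is unset or empty; a minimal sketch (the /srv/hadoop path is just a made-up example):

```shell
unset HADOOP_HOME
echo "${HADOOP_HOME:-/opt/hadoop}"   # /opt/hadoop (fallback used)
HADOOP_HOME=/srv/hadoop
echo "${HADOOP_HOME:-/opt/hadoop}"   # /srv/hadoop (existing value wins)
```

This lets the same hive-env.sh work both on machines where HADOOP_HOME is already exported and on ones where it is not.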