The bash builtin variable IFS is the shell's internal field separator; its default value is <space><tab><newline>. I wanted to set it to just \n, so I wrote:
OLD_IFS=$IFS
IFS='\n'
# do some work here
IFS=$OLD_IFS
But the result was that IFS treated each individual character as a separator: the delimiters became the backslash character and the letter n.
Why?
A Google search revealed that \n has to be written with ANSI-C quoting, which means putting the string inside $'string'. The assignment should therefore be:
IFS=$'\n'
While at it, I looked up the uses of the $ character. An answer on Unix & Linux Stack Exchange explains the two forms of prefixing a quoted string with $, one with single quotes and one with double quotes:
There are two different things going on here, both documented in the bash manual.
$'
Dollar-sign single quote is a special form of quoting (ANSI-C Quoting): Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard.
$"
Dollar-sign double-quote is for localization (Locale Translation): A double-quoted string preceded by a dollar sign ('$') will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted.
So the $'...' form translates ANSI-C escape sequences, while the $"..." form localizes the string.
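A minimal demonstration of the difference, with made-up sample data:

s=$'one\ntwo\nthree'   # three words separated by real newlines

IFS='\n'               # wrong: IFS is now the two characters \ and n
for w in $s; do printf '[%s]\n' "$w"; done
# splits "one" on its letter n, yielding the fields "o" and "e<newline>two<newline>three"

IFS=$'\n'              # right: IFS is a single newline
for w in $s; do printf '[%s]\n' "$w"; done
# prints [one] [two] [three]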
Here is a practical example: ping every host in /etc/hosts whose hostname starts with video-, to check the network status.
#!/bin/bash
trap "echo 'interrupted!';exit 1" SIGHUP SIGINT SIGTERM
OLD_IFS=$IFS
IFS=$'\n'
for i in `awk '$0!~/^$/ && $0!~/^#/ && $2~/^video/ {print $1,$2}' /etc/hosts`
do
    ADDR=$(echo "$i" | cut -d' ' -f 1)
    DOMAIN=$(echo "$i" | cut -d' ' -f 2)
    if ping -c 2 "$ADDR" &>/dev/null
    then
        echo "$DOMAIN ok!"
    else
        echo "$DOMAIN dead!"
    fi
done
IFS=$OLD_IFS
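As a side note, the same job can be done without touching IFS at all by piping awk into a while read loop, which splits each line into the two fields on the default whitespace IFS. A minimal equivalent sketch:

#!/bin/bash
# Same host selection as above; read -r splits each awk output line
# into ADDR and DOMAIN without changing the global IFS.
awk '$0!~/^$/ && $0!~/^#/ && $2~/^video/ {print $1,$2}' /etc/hosts |
while read -r ADDR DOMAIN
do
    if ping -c 2 "$ADDR" &>/dev/null
    then
        echo "$DOMAIN ok!"
    else
        echo "$DOMAIN dead!"
    fi
done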
Use the lspci command to check the graphics card model:
lspci -vnn | grep -i VGA -A 12
Output:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 630 OEM] [10de:0fc2] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:275c]
Flags: bus master, fast devsel, latency 0, IRQ 46
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [size=128]
Expansion ROM at f7000000 [disabled] [size=512K]
Capabilities: <access denied>
Kernel driver in use: nouveau
The output shows the card is a GeForce GT 630 OEM.
Go to the NVIDIA website, enter the card model, and click the Search button; it displays the driver version to install:
Version: 340.58
Release Date: 2014.11.5
Operating System: Linux 64-bit
Language: English (US)
File Size: 69.00 MB
Run the following commands to add the PPA and update the package index:
sudo add-apt-repository ppa:xorg-edgers/ppa -y
sudo apt-get update
Run the following command to install the driver:
sudo apt-get install nvidia-340
Run the following command again:
lspci -vnn | grep -i VGA -A 12
Output:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 630 OEM] [10de:0fc2] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:275c]
Flags: bus master, fast devsel, latency 0, IRQ 46
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [size=128]
Expansion ROM at f7000000 [disabled] [size=512K]
Capabilities: <access denied>
Kernel driver in use: nvidia
The "Kernel driver in use" line now shows that the kernel driver in use is nvidia.
Configure the driver with the nvidia-settings command.
If the installed driver leaves the system unable to boot, uninstall it by running:
sudo apt-get purge 'nvidia*'
Many tutorials say that after installing the nvidia driver you must blacklist nouveau. This is actually unnecessary, because the nvidia driver packages blacklist it automatically. Run the following command:
grep 'nouveau' /etc/modprobe.d/* | grep nvidia
Output:
/etc/modprobe.d/nvidia-340_hybrid.conf:blacklist nouveau
/etc/modprobe.d/nvidia-340_hybrid.conf:blacklist lbm-nouveau
/etc/modprobe.d/nvidia-340_hybrid.conf:alias nouveau off
/etc/modprobe.d/nvidia-340_hybrid.conf:alias lbm-nouveau off
/etc/modprobe.d/nvidia-graphics-drivers.conf:blacklist nouveau
/etc/modprobe.d/nvidia-graphics-drivers.conf:blacklist lbm-nouveau
/etc/modprobe.d/nvidia-graphics-drivers.conf:alias nouveau off
/etc/modprobe.d/nvidia-graphics-drivers.conf:alias lbm-nouveau off
This shows nouveau is already blacklisted, i.e. these modules will not be loaded automatically at boot.
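After a reboot you can double-check with lsmod which of the two modules actually got loaded; on a working setup only nvidia should show up:

lsmod | grep -E 'nouveau|nvidia'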
Reference: an English blog post.
The prebuilt Spark binary packages currently cover only Hadoop 2.4 and CDH4; if your cluster runs Hadoop 2.5.x, you need to build Spark from source yourself.
Download the source from the official site (under "Choose a package type" select "Source Code"), then extract it:
tar xvf spark-1.1.1.tgz
Give Maven enough memory for the build:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
Note that building for Hadoop 2.x.x requires the matching profile. The top-level pom.xml only ships a profile for 2.4.0, so for hadoop-2.5.x you need to add a hadoop-2.5 profile to pom.xml manually:
<profile>
  <id>hadoop-2.5</id>
  <properties>
    <hadoop.version>2.5.2</hadoop.version>
    <protobuf.version>2.5.0</protobuf.version>
    <jets3t.version>0.9.0</jets3t.version>
  </properties>
</profile>
Otherwise the build links against the wrong protobuf version and cannot read files on HDFS, throwing: java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CreateSnapshotRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
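If you want to confirm which protobuf version a profile resolves before starting the (long) build, Maven's dependency:tree goal can filter on the protobuf artifact. A quick check, assuming the hadoop-2.5 profile above has been added:

mvn -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.2 dependency:tree -Dincludes=com.google.protobuf
# should report protobuf-java:2.5.0 in the tree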
Then run the build:
mvn -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.2 -Phive -DskipTests clean package
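If you need a deployable tarball rather than just the compiled jars, the Spark source tree also ships a make-distribution.sh script that forwards the same profiles to Maven. A sketch, with flags as I recall them for the 1.1.x line (check the script header if they differ):

./make-distribution.sh --tgz -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.2 -Phive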
Download the latest Hive tarball from the official site; the latest at the time of writing is apache-hive-0.14.0-bin.tar.gz.
Assuming the install path is /opt/hive:
sudo mv apache-hive-0.14.0-bin.tar.gz /opt
cd /opt
sudo tar xvf apache-hive-0.14.0-bin.tar.gz
sudo ln -s apache-hive-0.14.0-bin hive
Copy the MySQL JDBC connector (downloaded separately) into Hive's lib directory, then rename the template config files:
sudo mv mysql-connector-java-5.1.25.jar /opt/hive/lib
cd /opt/hive/conf
sudo rename 's/\.template//' *
sudo touch hive-site.xml
Add the following configuration to hive-site.xml:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
<description>the URL of the MySQL database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>HIVE_DBPASS</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<!--
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>file:///opt/hive/lib/zookeeper-3.4.5.jar,file:///opt/hive/lib/hive-hbase-handler-0.14.0.jar,file:///opt/hive/lib/guava-11.0.2.jar</value>
</property>
-->
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hive.support.concurrency</name>
<description>Enable Hive's Table Lock Manager Service</description>
<value>true</value>
</property>
</configuration>
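The JDBC settings above assume a MySQL database named metastore accessible to a user hive. If they do not exist yet, here is a minimal one-time setup sketch, assuming MySQL runs locally and you have root access (HIVE_DBPASS is the placeholder password from hive-site.xml):

mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS metastore;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'HIVE_DBPASS';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
SQL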
Create the directories Hive uses on HDFS and set their permissions:
$HADOOP_HOME/bin/hdfs dfs -mkdir /tmp
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hive
$HADOOP_HOME/bin/hdfs dfs -chown hive /user/hive
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -chmod g+w /tmp
$HADOOP_HOME/bin/hdfs dfs -chmod 777 /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -chmod a+t /user/hive/warehouse
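Finally, a quick smoke test, assuming the /opt/hive layout above; on the first run the metastore schema is created automatically because datanucleus.autoCreateSchema is true:

export HIVE_HOME=/opt/hive
export PATH=$PATH:$HIVE_HOME/bin
# should print at least the built-in "default" database
hive -e 'show databases;'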