Blog Archive

Thursday, January 27, 2011

Perl Hash and Array Notes

1. Two-dimensional arrays
@a=();
@b=("1","2","3");
@c =("4","5","6");
$a[0] = \@b;
$a[1] =\@c;
1>
print  $a[0]->[0],"\n";
print  $a[1]->[0],"\n";
print  $a[0][0],"\n";
print  $a[1][0],"\n";
2>
foreach (@a) {
 print "$$_[2]\n"; # will print the values of $b[2] and $c[2]
 print "@$_\n";    # will print all values
}

Perl motto: "There's More Than One Way To Do It"

2. Arrays as subroutine parameters
eg1.
#!/usr/bin/perl

  @a = (9, 2, 3, 4);
  @b = ("a", "b");

  func(\@a, \@b);

  sub func {
      my $c = shift;
      my $d = shift;
      print $#$c, $#$d;   # last indices of the two arrays: 3 and 1
  }
eg2.
#!/usr/bin/perl
@participants=("mark","terry","jason");
@participants2=('33','yts');
my @arr;
$arr[0]=[@participants];
$arr[1]=\@participants2;
print scalar @arr ."\n";
for(my $i=0;$i<scalar @arr;$i++){
        @temp = @{ $arr[$i] };   # dereference; "@temp = $arr[$i]" would store only the ref
        print scalar @temp, " elements in temp\n";
        #print "\t [ @{$arr[$i]} ],,\n";
        for(my $j=0;$j<scalar @{$arr[$i]};$j++){
                print $arr[$i][$j],"\n";
        }
}
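Both examples pass references because a plain call like func(@a, @b) flattens everything into one list in @_, losing the boundary between the two arrays. A minimal sketch of the difference (subroutine names are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @x = (1, 2, 3);
my @y = (4, 5);

# Without references, @_ receives one flattened 5-element list.
sub flat_count { return scalar @_; }

# With references, each array keeps its identity.
sub ref_counts {
    my ($rx, $ry) = @_;
    return (scalar @$rx, scalar @$ry);
}

print flat_count(@x, @y), "\n";       # 5
my ($nx, $ny) = ref_counts(\@x, \@y);
print "$nx $ny\n";                    # 3 2
```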

3. Arrays as hash values
my (@difs1, @difs2);
my %record;
@difs1 = qw /xx.dif x2.dif y3.dif/;
@difs2 = qw /xx1.dif x22.dif/;
$record{"xg3"} = \@difs1;    # reference to the original array
$record{"xg5"} = [@difs2];   # anonymous copy
print "test value:",$record{"xg5"}[1],"\n";
print "-----------------------------2\n";
while(($key,$value)=each %record){
  print "$key:";
  for(my $i=0;$i<scalar @$value;$i++){
     print @$value[$i]," ";
  }
  print "\n";
}
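The index loop above can also be written more idiomatically with foreach and array interpolation; a minimal sketch (hash contents repeated from the example above, keys sorted for deterministic output):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %record = (
    xg3 => [qw(xx.dif x2.dif y3.dif)],
    xg5 => [qw(xx1.dif x22.dif)],
);

# Each value is an array ref, so @{ $record{$key} } interpolates
# the whole list at once.
for my $key (sort keys %record) {
    print "$key: @{ $record{$key} }\n";
}
```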
4. Arrays as values of hash A, and hash A as a value of another hash B
print "-----------------------------1\n";
#hash as a value in another hash
my @difs;
my (%records,%record1,%record2);
@difs1= qw /x11.txt x22.dif x3.csv/;
@difs2= qw /ya.dif xyz.dif/;
$record1{"xg3"}=\@difs1; 
$record1{"xg5"}=[@difs2];
print "test value:",$record1{"xg5"}[1],"\n";
print "-----------------------------2\n";
while(($key,$value)=each %record1){
  print "$key:";
  for(my $i=0;$i<scalar @$value;$i++){
     print @$value[$i]," ";
  }
  print "\n";
}
print "-----------------------------3\n";
$record2{"xg7"}=\@difs2; 
$record2{"xg8"}=[@difs1];
$records{"RDBMS_MAIN_LINUX_070827"}={%record1};# a hash needs { } here, not [ ]
$records{"RDBMS_MAIN_LINUX_070828"}=\%record2;
while(($key1,$value1)=each %records){
  print "$key1:";
  while(($key2,$value2)=each %$value1){
   print "$key2:";
   for(my $i=0;$i<scalar @$value2;$i++){
      print @$value2[$i]," ";
   }   
  } 
  print "\n";
}

output:
-----------------------------1
test value:xyz.dif
-----------------------------2
xg5:ya.dif xyz.dif 
xg3:x11.txt x22.dif x3.csv 
-----------------------------3
RDBMS_MAIN_LINUX_070828:xg7:ya.dif xyz.dif xg8:x11.txt x22.dif x3.csv 
RDBMS_MAIN_LINUX_070827:xg5:ya.dif xyz.dif xg3:x11.txt x22.dif x3.csv


Random re-arrange an array in perl

sub randomArray {
    my $ra_array = shift;
    # Fisher-Yates: swap each position with a random index at or below it,
    # so every permutation is equally likely.
    for ( my $i = @$ra_array - 1 ; $i > 0 ; $i-- ) {
        my $index = int( rand( $i + 1 ) );
        @$ra_array[ $i, $index ] = @$ra_array[ $index, $i ];
    }
}
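Instead of hand-rolling the swap loop, the core List::Util module ships a well-tested shuffle; a minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(shuffle);

my @deck     = (1 .. 10);
my @shuffled = shuffle(@deck);   # returns a randomly permuted copy

print "@shuffled\n";
```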

The same shuffle in C:

for (int i = LENGTH - 1; i > 0; i--) {
    int index = rand() % (i + 1);
    int tmp = array[i];
    array[i] = array[index];
    array[index] = tmp;
}

Wednesday, January 26, 2011

Linux - Wildcards and Special Symbols - LINUX - CLEANER

* - wildcard, matches any characters (zero or more)
? - wildcard, matches exactly one character
# - comment
\ - escape character, turns a special character or wildcard back into an ordinary one
| - separates two piped commands
; - separates consecutive commands
~ - the user's home directory
$ - prefix needed to read a variable's value
! - logical NOT
/ - path separator
>, >> - output redirection: "overwrite" and "append" respectively
' - single quotes, no variable substitution
" - double quotes, with variable substitution
` - backquotes; the command between a pair of `` is executed first
() - marks the start and end of a subshell
[] - a character set
{} - a command block
Ctrl+C - terminate the current command
Ctrl+D - end of input (EOF), e.g. when finishing a mail message
Ctrl+M - same as Enter
Ctrl+S - pause screen output
Ctrl+Q - resume screen output
Ctrl+U - delete the entire line at the prompt
Ctrl+Z - suspend the current command
&& - run the next command only if the previous one succeeded
|| - run the next command only if the previous one failed


Friday, January 21, 2011

I SAW YOU WALKING IN THE RAIN LYRICS

I saw you walking in the rain
You were holding hands
And I'll never be the same

Where should I begin? Let's see
You said you needed space
To clear up all the haze
If you've really opened up to me
How can you explain
All this disarray

'Cause I saw you walking in the rain
You were holding hands
And I'll never be the same

Oh no, I'll never be the same
Oh no, never be the same
Oh no, I'll never be the same
Oh no, I'll never be the same
Oh no

The truth is always best, you see
I walk away with dignity
While you swim in regret
All that keeps sparkling in my head
Are some words you said to me in bed
But the love you gave is the love I'll forget

Cause I saw you walking in the rain
You were holding hands
And I'll never be the same

Oh no, I saw you and her walking in the rain
You were holding hands
You were holding hands
You were holding hands
And I'll never be the same ...

Oh no, I'll never be the same (repeat)

Speech Processing Notes



12.SVM for ASR

1. The difference between SVM and GMM
   The focus of the SVM training process is to model the decision boundary, as opposed to a traditional GMM-UBM, which models the probability distributions of the two classes.

2. Sequence kernels. A sequence of speech feature vectors {Xi} is fed to the SVM directly, i.e. the kernel takes the form K({Xi},{Yi}), where K( , ) is the kernel function. We call this a sequence kernel method.

3. The GLDS kernel
  •     supervector: the average of the expansions of every frame's features. From a speech segment's N frames, extract feature vectors, expand each into b(Xi), then average the N expansions; the result is the segment's supervector.
  •     kernel: the generalized linear discriminant sequence (GLDS) kernel.

4. Key parts of using SVMs in ASR
  • The supervector representation. Each speaker yields several speech segments. Under a GMM, the feature vectors computed from every frame of a segment enter the computation directly, whereas an SVM works on the segment's supervector. This representation is the crux of SVM-ASR system design.
  • Kernel design. Considerations:
  1. The kernel provides a similarity measure between two utterances. As in Fig. 3, a model trained on utterance 1 scores utterance 2, and the score serves as the similarity measure.
  2. Low computational cost. A suitable approximation not only simplifies the computation but can also reduce error when data are scarce.
  3. Symmetry (to satisfy the Mercer condition). The simplification of R in the GLDS kernel both reduces the computational cost and preserves the kernel's symmetry.
Notes on "SVM using GMM supervectors" are in Google Docs, covering GLDS, the Bhattacharyya distance, the MLLR kernel, and the feature transformation kernel.
11. Some possible research directions

1. Prosodic features and high-level features
    tag: Prosodic

2. Model fusion
  Fusion of VQ, GMM, and SVM.
  tag: fusion

3. SVM: supervectors and kernels
   tag: SVM, supervector, kernel
10. Reading notes on the 2009 survey <An Overview of Text-Independent Speaker Recognition: from Features to Supervectors>

1. Introduction
        (1) "In fact, the focus of speaker recognition research over the years has been tending towards such telephone-based applications."
        (2) Speaker diarization, also known as "who spoke when", attempts to extract the speaking turns of the different participants from a spoken document, and is an extension of the "classical" speaker recognition techniques applied to recordings with multiple speakers.
        (3) Features fall into five classes: (1) short-term spectral features, (2) voice source features, (3) spectro-temporal features, (4) prosodic features, and (5) high-level features.

2. Feature extraction
   To be continued......
3. Speaker models
   (1) VQ: it also provides competitive accuracy when combined with background model adaptation. <Maximum a posteriori adaptation of the centroid model for speaker verification>, <Comparative evaluation of maximum a posteriori vector quantization and Gaussian mixture models in speaker verification>
   (2) GMM: "The GMM can be considered as an extension of the VQ model." (the relationship among k-means, EM, and GMM)
          Points worth studying:
          <1> GMM-UBM: starting from the UBM, adapt it with the target speaker's training data via MAP to obtain the speaker model. Besides MAP, maximum likelihood linear regression (MLLR) can also be used.
          <2> A small number or even no EM iterations are needed, according to <Comparative evaluation of maximum a posteriori vector quantization and Gaussian mixture models in speaker verification>.
          <3> Phonetic GMM (PGMM): <An overview...> 2009, p. 12.
   (3) SVM
       Its combination with GMM; see part 4.
   (4) Fusion
      Fusing dependent (correlated) classifiers can enhance the robustness of the score due to variance reduction.
      An implementation of the method is available in the Fusion and Calibration (FoCal) toolkit. This method, being simple and robust at the same time, is usually the first choice in our own research. See papers tagged: Fusion

4. Session compensation:
  (1) Feature normalization
      CMS:
      RASTA: to be continued....

      Feature Mapping (FM): to be continued....

      The order in which these methods are combined: a typical robust front-end consists of extracting MFCCs, followed by RASTA filtering, delta feature computation, voice activity detection, feature mapping, and global mean/variance normalization, in that order. See <The 2004 MIT Lincoln Laboratory speaker recognition system>.
  (2) Score normalization
      Z-norm:
      T-norm:
  (3) Model compensation:
     <1> GMM-UBM: JFA
     <2> SVM: supervectors and kernels
           [1] generalized linear discriminant sequence (GLDS) kernel SVM
           [2] Gaussian supervector SVM
           [3] MLLR supervector SVM
           [4] high-level supervector SVM
      <3> Supervector normalization
            [1] Nuisance attribute projection (NAP): it is not specific to one kernel, but can be applied to any kind of SVM supervector. The NAP transformation removes the directions of undesired session variability from the supervectors before SVM training.
            [2] Within-class covariance normalization (WCCN)
   5. Software packages for speaker recognition
       <1> the ALIZE toolkit and SVMTorch
       <2> For score fusion of multiple sub-systems, we recommend the FoCal toolkit. For evaluation purposes, such as plotting DET curves, we recommend the DETware toolbox (for Matlab) by NIST. A similar tool with more features is SRETools.
9. Speaker-recognition trends at ICASSP09

The ICASSP09 papers on speaker recognition fall roughly into these groups:
(1) Discussions of the NIST SRE, mainly in the Speaker Recognition I session.
(2) Applications of kernels (including SVMs) to speaker recognition, a rather large fraction of the papers.
(3) Work on JFA (factor analysis), mainly in the Speaker Verification session.
(4) Feature selection: many papers add lattice, phase, or pitch information to the features to improve recognition rates, mainly in the Speaker Recognition II session.
Also noteworthy is a Singapore research institute, the Institute for Infocomm Research, which published many papers at this conference.
Tags: ICASSP, research trends
8. Applications of the wavelet transform in speech processing

7. Notes on wavelet analysis

6. Some topics in signal analysis

1. Frame theory
   Briefly, a frame is a family of vectors in a Hilbert space satisfying the frame-bound condition; it can be viewed as a generalization of a basis. Write the lower and upper frame bounds as A and B. When A = B the frame is tight, and when A = B = 1 it is an orthonormal basis; it also follows that only an orthonormal basis satisfies Parseval's theorem. If the vectors are linearly independent, the frame is a Riesz basis, and a Riesz basis and its dual basis can be shown to form a biorthogonal pair. When a tight frame (A = B) is used to decompose and reconstruct a signal, the frame and its dual differ only by the constant factor 1/A, so computation is simple and the reconstruction formula is an exact identity. When A != B, computing the dual frame is difficult, but when A and B are close there is an approximate formula, and reconstruction can use the approximate dual frame; the further apart A and B are, the larger the reconstruction error.
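The frame-bound condition sketched above has a standard form: a family {f_k} in a Hilbert space H is a frame with bounds 0 < A <= B < infinity if

```latex
A\,\lVert x\rVert^{2} \;\le\; \sum_{k} \bigl|\langle x,\, f_{k}\rangle\bigr|^{2} \;\le\; B\,\lVert x\rVert^{2}
\qquad \text{for all } x \in H .
```

A = B gives a tight frame, and A = B = 1 with unit-norm vectors gives an orthonormal basis, matching the cases described above.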
2. Signal decomposition and reconstruction
   Projecting a signal onto a family of vectors (a basis, or an ordinary frame) yields a set of coefficients; that is the decomposition transform. The inverse operation is reconstruction.
3. Biorthogonality
  Two families of vectors {x(i)} and {y(i)} are biorthogonal if <x(i), y(j)> = δ(i-j); the vectors within X and Y themselves need not be orthogonal.
   Given a set of basis vectors that is not orthogonal, the first step in decomposing a signal is to find its dual (biorthogonal) basis. If the basis is orthogonal, its dual basis is itself. For how to compute the dual basis, see p. 32 of Hu Guangshu, 《现代信号处理教程》 (Modern Signal Processing).
4. Why are orthogonal bases well suited to hardware implementation?
  Decomposing a signal over a basis yields a set of coefficients; that is the forward transform, α = ΦX. Reconstructing the signal from those coefficients is the inverse transform, X = Φ^(-1)α = Φ^T α. When the basis is orthogonal the transform is called an orthogonal transform, and the forward and inverse transform matrices are related by a simple transpose, because the transform matrix is orthogonal.
5. Criteria for choosing an orthogonal basis
  <1> As simple as possible, to reduce the cost of the forward and inverse transforms.
  <2> Good time-frequency localization.
  <3> Good decorrelation and energy compaction, where the DCT performs well. Decorrelation means the resulting coefficient vector has little correlation; energy compaction means the first few transform coefficients are large while the rest are small.
6. Instantaneous frequency
  If a signal can be written in amplitude-and-phase form, the instantaneous frequency is the derivative of the phase. The instantaneous frequency is a function of time, unlike the Fourier frequency; in theory it is the signal's frequency at a given instant. See p. 25.
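In symbols, for a signal in amplitude-phase form the instantaneous frequency is the scaled derivative of the phase:

```latex
x(t) = a(t)\, e^{\,j\varphi(t)}, \qquad
f_{i}(t) = \frac{1}{2\pi}\,\frac{d\varphi(t)}{dt} .
```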
7. Signal duration and bandwidth
   These are main-lobe notions, both defined via a "variance". The time-domain density is the square of |x(t)|, and the frequency-domain density is the squared magnitude of the Fourier transform of x(t). From these, the time center (mean), frequency center (mean), and the main-lobe widths can all be computed. The bandwidth can be written as the weighted average of the instantaneous frequency over the whole time axis, with weight x(t) squared. See p. 18. The time-bandwidth product is fixed, and the ratio of a wavelet mother function's bandwidth to its center frequency is constant (the constant-Q property). The constant-Q property is the foundation of wavelet theory: it shows that as the scale factor a changes, the frequency resolution and the center frequency change with it. When a decreases (i.e. frequency increases), the frequency resolution decreases while the time resolution increases, and the center of the main lobe moves up (matching the higher signal frequency); this is the adaptive character of wavelet analysis. p. 242.
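The fixed time-bandwidth product mentioned above is the uncertainty principle: with duration Δt and bandwidth Δω defined as second moments of the densities above,

```latex
\Delta t \,\Delta\omega \;\ge\; \tfrac{1}{2},
```

with equality reached only by signals with a Gaussian envelope.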
7*. The difference between the wavelet transform and the short-time Fourier transform
  The STFT has neither the constant-Q property nor the ability to change its bandwidth over time; to change the time-frequency resolution one must change the window function. The STFT(t, W) involves only a shift of the window, with no scaling in time. Changing W only moves the center frequency under the original spectral envelope; it does not change the main-lobe bandwidth, so neither the frequency resolution nor the time resolution changes.
8. Decimation and interpolation
   The sampling theorem must be satisfied first, otherwise spectral aliasing occurs. Before decimating a signal by a factor of M, one usually passes it through a low-pass filter with 1/M of the original bandwidth. Interpolation is the reverse: interpolate first, then apply a low-pass filter, because interpolation introduces M-1 extra images of the original spectrum.
  Decimation and interpolation change the sampling rate by a factor of M (down and up, respectively).
9. Subband decomposition
  A bank of filters yields a series of band-pass outputs (including one low-pass and one high-pass), each corresponding to a time signal; these are the subband signals of the original signal. The bands may be of equal or unequal width. The dyadic wavelets of wavelet analysis are a case of dyadic subband decomposition. The more subbands, the finer the decomposition of the spectrum and the easier it is to observe the signal's frequency-domain features.
  Downsampling: suppose the original sampling rate is f and the signal is split evenly into two subbands; each subband then no longer needs rate f, since f/2 suffices to recover it. This is where the downsampling by 2 in dyadic wavelet decomposition comes from. Note that the sampling theorem must still be satisfied. Downsampling can be used to compress signals.
    Subband decomposition is widely used in signal compression. The principle: a signal's energy is rarely distributed uniformly across the spectrum; different bands carry different amounts, so using longer word lengths for high-energy bands and shorter word lengths for low-energy bands achieves compression.
10. The wavelet reproducing kernel
   The reproducing kernel K(a, b; a0, b0) is the inner product of two wavelets, so K reflects the correlation between them. The reproducing-kernel equation says that the value of the two-dimensional wavelet transform at any point can be expressed through its values at other points. Values of the wavelet transform at different points are therefore correlated, and the reconstruction formula of the continuous wavelet transform contains redundant information. As a result, the original signal can be reconstructed from wavelet-transform values on a discrete lattice in the plane, which leads to the discrete wavelet transform.
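For reference, the continuous wavelet transform and the reproducing kernel discussed above take the standard form (ψ_{a,b} denotes the scaled and shifted mother wavelet):

```latex
W_{x}(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\,
\psi^{*}\!\left(\frac{t-b}{a}\right) dt ,
\qquad
K(a_{0}, b_{0}; a, b) = \bigl\langle \psi_{a,b},\, \psi_{a_{0},b_{0}} \bigr\rangle .
```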
5. Wavelets in speech recognition

 Using wavelets in speech recognition comes down to one property: the wavelet transform can change its window size with the frequency of the speech signal, and can therefore extract local time-frequency information.
(1) For noisy speech, if parameters are extracted with the DCT (whose basis vectors cover all frequency bands, e.g. the DCT used in MFCC), noise in one band can affect the entire speech representation and hence every extracted feature parameter. A wavelet does the opposite: it extracts information only over a certain time span (varying with frequency), so noise in one band affects at most a few feature parameters (which we can discard) while most remain unaffected.
(2) Wavelets also help in another case. A single frame may contain two adjacent phonemes, one voiced and one unvoiced, the first occupying the low frequencies and the second the high frequencies. MFCC would treat the frame as one phoneme, whereas we can split it into several subbands (see method 1* below).

There are several approaches:
1. speech ---> preprocessing (pre-emphasis, windowing and framing) ---> Mel filter banks ---> log ---> DWT decomposition, giving MFDWC
1*. speech ---> preprocessing ---> DWT multiresolution analysis splits each frame into several subbands ---> process each subband to obtain MFDWC ---> combine with GMMs. See "A Robust Wavelet-based Text-Independent Speaker Identification" and "APPLICATION OF WAVELET TRANSFORM AND WAVELET THRESHOLDING IN ROBUST SUB-BAND SPEECH RECOGNITION". Both papers combine the subbands after processing each one, weighting the individual components; the second proposes a new way to compute the weighting factors. Thresholding is used here to remove noise: experiments show that the MFDWC of speech concentrates in a small subset of the extracted coefficients, and those are large, while the coefficients produced by noise are small, so zeroing the small coefficients below a threshold denoises the signal.
2. Use the subband energies as feature parameters.
3. Use the actual wavelet coefficients, not the subband energies: apply the DWT to the signal without Mel filter banks.
See "Robust speech recognition using wavelet coefficient features".
4. Bark-wavelet transform

A few more points:
1. Fletcher and his colleagues [1] suggested that in human auditory perception, the linguistic message gets decoded independently in different frequency sub-bands and the final decoding decision is based on merging the decisions from the sub-bands. This gives the human hearing property (*): perceive per band, then combine; it is the basis of method 1*. Experiments show, however, that on clean (noise-free) speech the subband approach is not actually better, probably because it discards the correlation between subbands; in noisy conditions it does achieve better results.
2. Why do we prefer GMM to HMM?
   Although probabilistic HMM modeling is suitable for speech recognition and text-dependent speaker recognition, for text-independent speaker recognition the sequencing of sounds found in the training data does not necessarily reflect the sound sequences found in the testing data and contains little speaker-dependent information. This is supported by experimental results in [6], which found that text-independent performance was unaffected by discarding transition probabilities in HMM speaker models. In other words, an HMM with a single state, i.e. a GMM, is all we need.

The discussion above, reorganized:
In both speech recognition and speaker recognition, the wavelet transform enters mainly at the feature extraction stage, and in both tasks the most common feature parameters are MFCCs.
MFCC is computed as follows:
      use the FFT to turn each frame x(n) into X[k] ---> convolve the magnitude spectrum |X[k]| with the Mel-scale filter bank ---> take the logarithm ---> apply the DCT.
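The final DCT step above has a standard form: with M Mel filter-bank log-energies log(S_m), the cepstral coefficients are

```latex
c_{n} = \sum_{m=1}^{M} \log(S_{m})\,
\cos\!\left[\frac{\pi n}{M}\left(m - \tfrac{1}{2}\right)\right],
\qquad n = 1, \dots, N .
```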
 1 The advantage of MFCC is that it matches human auditory perception (the perceived pitch of a sound is not linearly proportional to its frequency).
 2 The drawbacks of MFCC are:
        2.1 The DCT basis vectors cover all frequency bands, so if the speech signal is corrupted by noise in some band, all MFCC parameters are contaminated after the DCT.
        2.2 The DCT basis vectors have an essentially fixed time-frequency resolution, which is ill-suited to a time-varying signal like speech.
        2.3 A frame of speech may contain two adjacent phonemes. If one is voiced and the other unvoiced, the voiced phoneme usually carries most of the energy and occupies the low frequencies, while the unvoiced one has little energy and occupies the high frequencies, so MFCC cannot effectively distinguish the two phonemes.
The improvements address these points:
1. Keep the Mel scale's perceptual advantage: the speech still passes through Mel-scale filters [1,2] or an approximation of the Mel scale [3].
2. Replace the DCT with the DWT [1,2] or the DWPT [3,7]; [7] applies the DWPT to each frame directly, with no other processing. The advantages of the DWT are:
    2.1 Better and adaptive time-frequency resolution, hence better extraction of local information (especially impulsive information, to which human hearing is sensitive).
    2.2 Whether the noise corrupts a time span or a frequency band, it affects only a few of the extracted parameters, which we can attenuate or remove.
3. Subband methods split each preprocessed frame into several subbands via multiresolution analysis [2,3,4,5,6]. The split can follow an approximate Mel scale [3], or be dyadic with the subbands then passed through Mel-scale filters [2]. After the split, parameters are extracted per subband and then combined into the final feature set. [2,4,5] propose a good combination method, LCGMM, and [5] presents a new, effective way to compute the weighting factors. The per-subband extraction also varies; one family of methods uses the subband energies [6].
 The subband methods are motivated by (*) and by drawback 2.3.

4. On the validity of the Mel scale, some authors [8,9,10,11] take a different view: although the Mel scale suits speech recognition and text-dependent speaker recognition, it is not necessarily right for text-independent speaker recognition, and they propose their own methods. All of them evaluate each subband's influence on some error criterion of the result (e.g. the EER [8]); the studies show that the useful energy of the speech signal concentrates not only in the low bands but also in certain high bands. Unlike [8], which considers the subbands separately, [9] gives a more complex algorithm that evaluates combinations of subbands. [10], via its proposed Vector Ranking method, concludes that the information in the 0-1000 Hz and 3000-4500 Hz bands is more discriminative for speaker recognition, and [11] reaches a similar result by studying the energies in each band.

【1】Gowdy, J.N., Tufekci, Z., "Mel-scaled discrete wavelet coefficients for speech recognition", ICASSP 2000
【2】Phung Trung Nghia, "A Robust Wavelet-based Text-Independent Speaker Identification", International Conference on Computational Intelligence and Multimedia Applications, 2007
【3】Ruhi Sarikaya, Bryan L. Pellom, John H. L. Hansen, "Wavelet packet transform features with application to speaker identification", 1998
【4】Wan-Chen Chen, "Multiband Approach to Robust Text-Independent Speaker Identification", 2004
【5】Babak Nasersharif, Ahmad Akbari, "Application of Wavelet Transform and Wavelet Thresholding in robust sub-band speech recognition", 2004
【6】Kidae Kim, "Evaluation of wavelet filters for speech recognition", IEEE International Conference on Systems, Man, and Cybernetics, 2000
【7】Gupta, M., "Robust speech recognition using wavelet coefficient features", IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU '01), 2001
【8】Mihalis Siafarikas, "Objective Wavelet Packet Features for Speaker Verification", ICSLP 2004
【9】Siafarikas, M., "Wavelet Packet Bases for Speaker Recognition", 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007)
【10】Orman, O.D., Arslan, L.M., "Frequency Analysis of Speaker Identification", Proc. of The Speaker Recognition Workshop, 2001
【11】Wu, J.-D., Lin, B.-F., "Speaker identification using discrete wavelet packet transform technique with irregular decomposition", Expert Systems with Applications (2008), doi:10.1016/j.eswa.2008.01.038
4. An overview of speech recognition

Speech recognition has two important basic modules, both derived from maximum a posteriori probability: the acoustic model and the language model.
1. Fundamentals of speech signals

The basic analysis of speech signal processing is largely finished. It covers signal processing fundamentals (signal sampling, LTI systems, convolution, the Fourier transform, the Z transform, the DCT) and speech analysis fundamentals (acoustic properties of speech, characteristics of Mandarin speech, speech models, signal preprocessing, time-domain analysis, frequency-domain analysis (the short-time Fourier transform), cepstral analysis, linear prediction analysis, and so on).
2. Vector quantization (VQ)

   The basic idea of vector quantization (VQ) is to partition a K-dimensional space R into J cells and choose a representative vector for each. A new vector is then assigned to a cell and replaced by that cell's representative for storage or transmission. Finding the representative vectors is called codebook design; deciding which cell a new vector belongs to is vector quantization. Codebook design and quantization are the two key techniques of VQ.
    Speech feature vectors are usually transform-domain parameters, most commonly MFCC, LPC, LSP, and the like. There is still much to learn about feature extraction.
    Both codebook design and quantization require computing the "distance", or distortion, between two vectors. Many distance measures exist; the most basic is the Euclidean distance, which has many variants built on the same idea. The Euclidean distance is easy to compute and to understand, but it does not reflect how the human ear perceives speech, so it is generally applied to feature vectors that already represent auditory perception well, such as LPC, LSP, MFCC, and LPCMFCC. When LPC coefficients are used, we can adopt the Itakura-Saito (I-S) linear-prediction distortion measure, with variants such as the log-likelihood-ratio distortion and the model distortion. See p. 86.
    The codebook is usually learned from a large amount of observed speech data, most often with an iterative procedure, the LBG algorithm, which is in fact a concrete application of K-means. The iteration raises the question of choosing the initial codebook; methods include random selection, splitting, and chain mapping. Ordinary vector quantization searches the codebook for the codeword "nearest" to a given vector, which usually takes a lot of computation. This is really a search problem, and artificial intelligence already offers many methods, such as tree search and neighborhood-partition search.
    Traditional vector quantization has many shortcomings; improvements include VQ with memory (exploiting the correlation between speech frames), fuzzy VQ, and codebooks optimized with genetic algorithms.
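The LBG codebook design mentioned above is essentially iterated K-means. A minimal one-dimensional K-means sketch (toy data, fixed iteration count, Euclidean distance; all values are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy 1-D training data with two clear clusters.
my @data      = (1.0, 1.2, 0.8, 9.8, 10.1, 10.4);
my @centroids = (0.0, 5.0);    # initial codebook (illustrative)

for my $iter (1 .. 10) {
    my @sum   = (0) x @centroids;
    my @count = (0) x @centroids;
    for my $x (@data) {
        # Assign each vector to its nearest codeword.
        my $best = 0;
        for my $j (1 .. $#centroids) {
            $best = $j
                if abs($x - $centroids[$j]) < abs($x - $centroids[$best]);
        }
        $sum[$best]   += $x;
        $count[$best] += 1;
    }
    # Update each codeword to the mean of its cell.
    for my $j (0 .. $#centroids) {
        $centroids[$j] = $sum[$j] / $count[$j] if $count[$j];
    }
}

printf "%.2f %.2f\n", @centroids;   # about 1.00 and 10.10
```

The full LBG algorithm adds the splitting step for codebook initialization; the assign/update loop itself is exactly this K-means iteration.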
3. HMM

An HMM is a doubly stochastic process. One process is a Markov chain representing state transitions, assumed to be a simple Markov process, meaning the state at time t+1 depends only on the state at time t. The other is the random process of observations associated with each state of the chain, i.e. the symbol sequence emitted as the states change; adjacent symbols are assumed to be uncorrelated. An HMM has three basic problems. The evaluation problem (recognition): given the model parameters and an output sequence, compute the probability of the sequence under the model. The decoding problem: given the model and an output sequence, find the most likely state sequence. The learning problem: given training output sequences, find the best model parameters. Each of these problems has a corresponding basic algorithm. An HMM can be written M = {A, B, Pi}; A and B, the two most important parameters of the model, are the state transition matrix and the set of observation probability distributions. By A, HMMs divide into ergodic HMMs (all elements of A positive) and left-to-right HMMs (A upper triangular). By B, HMMs divide into discrete, continuous, and semi-continuous HMMs, plus continuous mixture-density HMMs. Continuous HMMs are generally believed to recognize better, but they have more parameters and therefore need more training data and computation, especially mixture-density HMMs.
The HMM itself embeds many assumptions that do not hold in practice, so it has inherent flaws, and many improved algorithms have been proposed for different situations. This is where HMM work really costs time: improving the existing HMM to better match the realities of speech. There are also many practical issues to watch, such as underflow, parameter initialization, and better modeling of the dynamics of speech. In addition, the stable segments of a speech signal correspond to HMM states; exploiting state duration, a property of speech, can markedly improve recognition. Such models remove the self-loop arcs of the standard HMM and attach a duration t to each state to represent how long the state persists, instead of representing even a steady stretch of speech by repeated state transitions as the standard HMM does.
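The evaluation problem above is solved efficiently by the forward algorithm. With alpha_t(i) the probability of the partial observation sequence o_1..o_t ending in state i, the recursion and termination are:

```latex
\alpha_{t+1}(j) = \left[\sum_{i=1}^{N} \alpha_{t}(i)\, a_{ij}\right] b_{j}(o_{t+1}),
\qquad
P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_{T}(i) .
```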

Monday, January 17, 2011

weka knowledge flow file run through Kettle


You can execute the FlowRunner from the command line like so:

java weka.gui.beans.FlowRunner

Friday, January 14, 2011

Repost: some issues with passing arrays and references as function parameters in Perl - Baidu Space

SORT in perl

#!/usr/bin/perl
 
$scoreOfMods{'a'}=-0.1;
$scoreOfMods{'b'}=-0.3;
$scoreOfMods{'c'}= 10;
 
foreach $mod_full_name( sort { $scoreOfMods{$a} <=> $scoreOfMods{$b} } keys %scoreOfMods){
                print  "$mod_full_name==>$scoreOfMods{$mod_full_name}\n";
#b==>-0.3
#a==>-0.1
#c==>10
}
 
foreach $mod_full_name(reverse( sort { $scoreOfMods{$a} <=> $scoreOfMods{$b} } keys %scoreOfMods)){
                print  "$mod_full_name==>$scoreOfMods{$mod_full_name}\n";
#c==>10
#a==>-0.1
#b==>-0.3
 
}
 
foreach $mod_full_name( sort { $scoreOfMods{$b} <=> $scoreOfMods{$a} } keys %scoreOfMods){
                print  "$mod_full_name==>$scoreOfMods{$mod_full_name}\n";
#c==>10
#a==>-0.1
#b==>-0.3
}

Sunday, January 9, 2011

How to: Compile a Native C++ Program from the Command Line


ALGLIB: numerical analysis and data processing library

http://www.alglib.net/

About ALGLIB

ALGLIB is a cross-platform numerical analysis and data processing library. It supports several programming languages (C++, C#, Pascal, VBA) and several operating systems (Windows, Linux, Solaris). ALGLIB features include:
  • Linear algebra (direct algorithms, EVD/SVD)
  • Solvers (linear and nonlinear)
  • Interpolation
  • Optimization
  • Fast Fourier transforms
  • Numerical integration
  • Linear and nonlinear least-squares fitting
  • Ordinary differential equations
  • Special functions
  • Statistics (descriptive statistics, hypothesis testing)
  • Data analysis (classification/regression, including neural networks)
  • Multiple precision versions of linear algebra, interpolation, optimization and other algorithms (using MPFR for floating point computations)
Why choose ALGLIB? Because it is:
  • portable. It can be compiled almost anywhere using almost any compiler (see compatibility matrix for more info).
  • easy to use. It supports many programming languages. If you use one language, you don't need to study another (FORTRAN, for example) to compile and link an external library.
  • open source. It can be used for free under GPL 2+.
  • suited for commercial users too. Those who want to use ALGLIB in commercial applications can buy commercial license without copyleft requirement.

Saturday, January 8, 2011

好酒: Dallas Chinese Restaurants - diary.wenxuecity.com

Having eaten Chinese food in New York, San Francisco, Chicago, Houston, Los Angeles, Boston, and Washington DC, I can say Dallas's Chinese food is, by comparison, not the worst; it is at least better than DC's. Over all these years, the best place was the now-closed 川霸王, fairly authentic Sichuan cooking. I ate there from the day it opened until the day it closed; its boiled fish, stir-fried kidney, and pickled-pepper fish were the real thing. Running a Chinese restaurant is not easy, and those of us who love the food should support them. A new Chinese restaurant usually opens to booming business, but after a while, good or bad, the crowd thins out. One reason is that there are few Chinese in town; another is that Chinese diners are too frugal and eat out too rarely. If you discover a good restaurant, go often; don't wait until it closes and we can no longer get authentic Chinese food. With that said, here are some leads.
Sichuan: 老四川 (Coit & Parker), 老熊餐廳 (75 & Legacy, in the 僑冠 supermarket)

Jiangsu/Zhejiang: 鴻運來 (Coit & Parker)

Shandong and northern cuisine: N/A

Shanxi and Shaanxi cuisine: N/A
Vegetarian: 佛光山 (Arapaho & Greenville)

Taiwanese: 梅子 and 漢料理 (僑冠 supermarket)

Japanese buffet: Tokyo One (Addison)

Dim sum and Cantonese: 麒麟閣 (75 & Coit), 新瑞華 (Legacy & Coit)

Vietnamese pho: 家鄉河粉 (old 僑冠 supermarket)

Korean: 大長今 (old 僑冠 supermarket)

Thai: Banana Leaf

Beef noodles: 金湯牛肉面 (Park & Greenville)

Cheap stir-fry: 家園 (old 僑冠 supermarket), 小林冰霸 (15th & Park), 一品鍋 (Parker & Independence)

BBQ: 第一燒臘 (no credit cards, Greenville & Main), 369 (Legacy & Coit)

Chinese buffet: nothing to recommend; they have all become cafeterias for Mexican diners.

Friday, January 7, 2011

Hard disk partitioning steps under Windows 7 (illustrated) - Guangzhou Jiuzhou IT Network

Unable to run job: failed receiving gdi request.

ERROR:

Unable to run job: failed receiving gdi request.
Exiting.


Analysis:

...The "failed receiving gdi request" message always means the client did wait for a reply on a request that was accepted by qmaster, but the reply wasn't delivered within a timeout of 10 minutes or so.
... the full message is worth a read.
 
 
 
Link:
http://gridengine.info/2006/08/31/failed-receiving-gdi-request

Tuesday, January 4, 2011

The 7 Habits of Highly Effective People

http://en.wikipedia.org/wiki/The_Seven_Habits_of_Highly_Effective_People


The 7 Habits

Each chapter is dedicated to one of the habits, which are represented by the following imperatives:
The First Three Habits surround moving from dependence to independence (i.e. self mastery)
  • Habit 1: Be Proactive
Synopsis: Take initiative in life by realizing your decisions (and how they align with life's principles) are the primary determining factor for effectiveness in your life. Taking responsibility for your choices and the subsequent consequences that follow.
  • Habit 2: Begin with the End in Mind
Synopsis: Self-discover and clarify your deeply important character values and life goals. Envision the ideal characteristics for each of your various roles and relationships in life.
  • Habit 3: Put First Things First
Synopsis: Planning, prioritizing, and executing your week's tasks based on importance rather than urgency. Evaluating if your efforts exemplify your desired character values, propel you towards goals, and enrich the roles and relationships elaborated in Habit 2.
The Next Three are to do with Interdependence (i.e. working with others)
  • Habit 4: Think Win-Win
Synopsis: Genuinely striving for mutually beneficial solutions or agreements in your relationships. Valuing and respecting people by understanding a "win" for all is ultimately a better long-term resolution than if only one person in the situation had gotten his way.
  • Habit 5: Seek First to Understand, then to be understood
Synopsis: Using empathetic listening to be genuinely influenced by a person, which compels them to reciprocate the listening, take an open mind to being influenced by you, which creates an atmosphere of caring, respect, and positive problem solving.
  • Habit 6: Synergize
Synopsis: Combining the strengths of people through positive teamwork, so as to achieve goals no one person could have done alone. How to yield the most prolific performance out of a group of people through encouraging meaningful contribution, and modeling inspirational and supportive leadership.
The Last habit relates to self-rejuvenation;
  • Habit 7: Sharpen the Saw
Synopsis: The balancing and renewal of your resources, energy, and health to create a sustainable long-term effective lifestyle.

Abundance mentality

Covey coined the term abundance mentality or abundance mindset, a concept in which a person believes there are enough resources and success to share with others. It is commonly contrasted with the scarcity mindset (i.e. destructive and unnecessary competition), which is founded on the idea that, if someone else wins or is successful in a situation, that means you lose; not considering the possibility of all parties winning (in some way or another) in a given situation. Individuals with an abundance mentality are able to celebrate the success of others rather than be threatened by it.[2]
A number of books appearing in business press since then have discussed the idea.[3] The abundance mentality is believed to arrive from having a high self worth and security (see Habits 1, 2, and 3), and leads to the sharing of profits, recognition and responsibility.[4] Organizations may also apply an abundance mentality while doing business.[5]

The Upward Spiral

Covey explains the "Upward Spiral" model in the sharpening the saw section. Through our conscience, along with meaningful and consistent progress, the spiral will result in growth, change, and constant improvement. In essence, one is always attempting to integrate and master the principles outlined in The 7 Habits at progressively higher levels at each iteration. Subsequent development on any habit will render a different experience and you will learn the principles with a deeper understanding. The Upward Spiral model consists of three parts: learn, commit, do. According to Covey, one must be increasingly educating the conscience in order to grow and develop on the upward spiral. The idea of renewal by education will propel one along the path of personal freedom, security, wisdom, and power. [6]

Sequels

The book was enormously popular, and catapulted Covey into public-speaking appearances and workshops. He has also written a number of follow-up books:
  • First Things First
  • Principle Centered Leadership
  • The Power Of The 7 Habits: Applications And Insights
  • Seven Habits of Highly Effective Families
  • Beyond the Seven Habits
  • Living the Seven Habits, a collection of stories from people who have applied the seven habits in their lives
  • The 8th Habit: From Effectiveness to Greatness, a sequel to The Seven Habits published in 2004
  • The Leader in Me, a book on using the seven habits for young children, especially in schools, published in 2008.
Sean Covey (Stephen's son) has written a version of the book for teens, The 7 Habits of Highly Effective Teens. This version simplifies the 7 Habits for younger readers so they can better understand them. In September 2006, Sean Covey also published The 6 Most Important Decisions You Will Ever Make: A Guide for Teens. This guide highlights key times in the life of a teen and gives advice on how to deal with them.
Stephen Covey's eldest son, Stephen M. R. Covey, has written a book titled The Speed of Trust.