Experiment Report on Hand-Rolled, Pan-Baked, and Regular Liangpi Noodle Sheets (擀面皮 & 烙面皮 & 普通面皮) - MITBBS (mitbbs.com):
'via Blog this'
Wednesday, December 31, 2014
Monday, December 29, 2014
Sunday, December 28, 2014
Saturday, December 20, 2014
Baidu 2014 Campus Recruiting: Written Exam Questions for Deep Learning Algorithm R&D Engineers
Baidu 2014 Campus Recruiting: Written Exam Questions for Deep Learning Algorithm R&D Engineers:
http://www.360doc.com/content/14/0325/18/13256259_363658929.shtml
'via Blog this'
How to Work Through the CtCI (cc150) Algorithm Problems
Cracking the Coding Interview -- questions and answers:
http://www.hawstein.com/posts/ctci-solutions-contents.html
High-frequency problems:
Solution approaches and answers:
'via Blog this'
From CS Novice to a Google Offer in 8 Months, Plus a Referral - 靖空间 - Blog Channel - CSDN.NET
From CS Novice to a Google Offer in 8 Months, Plus a Referral - 靖空间 - Blog Channel - CSDN.NET: "The more you lose, the harder you fight.
Keep sharpening your interview skills.
Spot gaps in your knowledge and patch them promptly.
"
'via Blog this'
[Repost] Facebook, Google, Microsoft Offers and Interview Experiences - 2013.03 | 阿蘑多
[Repost] Facebook, Google, Microsoft Offers and Interview Experiences - 2013.03 | 阿蘑多: "The Google phone screen opened with a warm-up: the first problem was turning an array into a BST; the second was a skip list problem, which had come up before.
"
'via Blog this'
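The warm-up problem quoted above, turning an array into a BST, is a classic. Assuming the input array is sorted (the usual phrasing of the problem), a minimal sketch of the middle-element recursion; names like `TreeNode` and `sorted_array_to_bst` are illustrative, not from the post:

```python
# Build a height-balanced BST from a sorted array by recursively
# taking the middle element as the root of each subtree.
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def sorted_array_to_bst(arr):
    if not arr:
        return None
    mid = len(arr) // 2
    return TreeNode(arr[mid],
                    left=sorted_array_to_bst(arr[:mid]),
                    right=sorted_array_to_bst(arr[mid + 1:]))

def inorder(node):
    """In-order traversal recovers the sorted array, confirming BST order."""
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

root = sorted_array_to_bst([1, 2, 3, 4, 5, 6, 7])
print(root.val)        # 4: the middle element becomes the root
print(inorder(root))   # [1, 2, 3, 4, 5, 6, 7]
```

Picking the middle element at every level keeps the left and right halves within one element of each other in size, so the resulting tree is height-balanced.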
Thursday, December 18, 2014
Monday, December 15, 2014
Machine Learning Algorithm Cheat Sheet - Laura Diane Hamilton
Machine Learning Algorithm Cheat Sheet - Laura Diane Hamilton: "Algorithm / Pros / Cons / Good at

Linear regression
- Pros: very fast (runs in constant time); easy to understand the model; less prone to overfitting
- Cons: unable to model complex relationships; unable to capture nonlinear relationships without first transforming the inputs
- Good at: a first look at a dataset; numerical data with lots of features

Decision trees
- Pros: fast; robust to noise and missing values; accurate
- Cons: complex trees are hard to interpret; duplication within the same sub-tree is possible
- Good at: star classification; medical diagnosis; credit risk analysis

Neural networks
- Pros: extremely powerful; can model even very complex relationships; no need to understand the underlying data; almost works by “magic”
- Cons: prone to overfitting; long training time; requires significant computing power for large datasets; model is essentially unreadable
- Good at: images; video; “human-intelligence” type tasks like driving or flying; robotics

Support Vector Machines
- Pros: can model complex, nonlinear relationships; robust to noise (because they maximize margins)
- Cons: need to select a good kernel function; model parameters are difficult to interpret; sometimes numerical stability problems; requires significant memory and processing power
- Good at: classifying proteins; text classification; image classification; handwriting recognition

K-Nearest Neighbors
- Pros: simple; powerful; no training involved (“lazy”); naturally handles multiclass classification and regression
- Cons: expensive and slow to predict new instances; must define a meaningful distance function; performs poorly on high-dimensionality datasets
- Good at: low-dimensional datasets; computer security (intrusion detection); fault detection in semiconductor manufacturing; video content retrieval; gene expression; protein-protein interaction"
'via Blog this'
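The tradeoffs in the cheat sheet can be seen directly by fitting its four supervised learners on one small dataset. A minimal sketch using scikit-learn, assuming it is installed; the dataset and hyperparameters are illustrative choices, not from the original post. Since digits is a classification task, logistic regression stands in for the cheat sheet's linear regression:

```python
# Fit four classifier families from the cheat sheet on the same toy
# dataset and compare held-out accuracy. Model choices and settings
# here are illustrative assumptions, not from the cheat sheet itself.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "linear model (logistic regression)": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

On a low-dimensional, well-behaved dataset like digits all four do reasonably well; the cheat sheet's distinctions (interpretability, training cost, sensitivity to dimensionality) matter more as data grows.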
Friday, December 12, 2014
Baidu’s Andrew Ng on Deep Learning and Innovation in Silicon Valley - Digits - WSJ
Baidu’s Andrew Ng on Deep Learning and Innovation in Silicon Valley - Digits - WSJ: "generous interpretation"
'via Blog this'
Monday, December 8, 2014
What are some fundamental deep learning papers for which code and data is available to reproduce the result and on the way grasp deep learning? - Quora
What are some fundamental deep learning papers for which code and data is available to reproduce the result and on the way grasp deep learning? - Quora:
'via Blog this'
Sunday, December 7, 2014
UFLDL Exercise: Convolution and Pooling - YSM'wall - Blog Channel - CSDN.NET
UFLDL Exercise: Convolution and Pooling - YSM'wall - Blog Channel - CSDN.NET:
http://m.blog.csdn.net/blog/linger2012liu/
'via Blog this'
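The two operations named in the UFLDL exercise above can be sketched in a few lines of NumPy: a 2-D "valid" convolution followed by non-overlapping max pooling. This is a minimal illustration of the idea, not the exercise's own MATLAB code; the shapes and function names are assumptions:

```python
# Minimal 2-D valid convolution and non-overlapping max pooling,
# the two building blocks the UFLDL exercise combines.
import numpy as np

def conv2d_valid(image, kernel):
    """Slide kernel over image; output shape is (H-kH+1, W-kW+1)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size):
    """Take the max over non-overlapping size x size blocks."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    trimmed = feature_map[:h2 * size, :w2 * size]
    return trimmed.reshape(h2, size, w2, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0            # simple averaging filter
features = conv2d_valid(image, kernel)    # shape (4, 4)
pooled = max_pool(features, 2)            # shape (2, 2)
```

Pooling shrinks the feature map and adds a small amount of translation invariance, which is why the exercise applies it after convolution.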