Predicting Boston House Prices with Feed Forward Neural Network
Implemented a feed-forward neural network to solve a regression task, achieving a Mean Squared Error (MSE) of 0.00457.
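A minimal sketch of such a regressor in Keras (layer sizes, scaling choices, and training settings here are assumptions, not the project's exact configuration):

```python
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

# Boston housing data bundled with Keras
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data()

# Scaling choices here are assumptions
scaler_x, scaler_y = StandardScaler(), StandardScaler()
x_train = scaler_x.fit_transform(x_train)
x_test = scaler_x.transform(x_test)
y_train = scaler_y.fit_transform(y_train.reshape(-1, 1))
y_test = scaler_y.transform(y_test.reshape(-1, 1))

# Simple feed-forward regressor with a linear output unit
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(x_train.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=100, batch_size=16, validation_split=0.2, verbose=0)
print("Test MSE:", model.evaluate(x_test, y_test, verbose=0))
```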
Semantic Segmentation with U-Net for Cityscapes Dataset
The Cityscapes Dataset focuses on semantic understanding of urban street scenes. It contains images captured from a front-facing car camera together with labeled ground-truth masks. Results: Test Accuracy: 0.8854, Test Mean Intersection over Union (IoU): 0.4042.
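A compact U-Net-style encoder-decoder sketch in Keras (depth, filter counts, input size, and the number of classes are illustrative assumptions, not the exact network used):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the original U-Net blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 512, 3), num_classes=20):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: downsample while doubling filters, keeping skip connections
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)     # bottleneck
    # Decoder: upsample and concatenate the matching skip connection
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u1, c1]), 32)
    # Per-pixel class scores
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c5)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
# Mean IoU is typically computed on argmax predictions at evaluation time
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```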
Predicting Wine Quality with Neural Network
Applied techniques to prevent overfitting: Early Stopping, Ridge Regression (L2 regularization), Cross-Validation (Hold-Out and K-Fold), and Dropout. Final model test MSE: 0.0127.
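A minimal sketch showing how these regularization pieces fit together in Keras (the data below is a random placeholder for the scaled wine-quality features, and the hyperparameters are assumptions):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Placeholder data standing in for the scaled wine-quality features and target
x = np.random.rand(1000, 11).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3),  # ridge (L2) penalty
                 input_shape=(11,)),
    layers.Dropout(0.3),                                    # dropout regularization
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Early stopping monitored on a hold-out validation split
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=200, batch_size=32,
          callbacks=[early_stop], verbose=0)
```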
Human Activity Recognition with Timeseries Classification
Implemented LSTM, Bidirectional LSTM (BiLSTM), and 1D CNN models to predict human activities from multivariate time series (x_axis, y_axis, z_axis channels). Resulting F1 scores: LSTM: 0.4008, BiLSTM: 0.4385, 1D CNN: 0.6301.
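A rough sketch of the BiLSTM variant (window length, layer sizes, and the number of activity classes are assumptions; the arrays below are placeholders for the real windows):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.metrics import f1_score

# Placeholder windows: (samples, timesteps, 3 channels: x, y, z)
x = np.random.rand(500, 128, 3).astype("float32")
y = np.random.randint(0, 6, size=(500,))          # 6 activity classes assumed
x_tr, x_te, y_tr, y_te = x[:400], x[400:], y[:400], y[400:]

model = tf.keras.Sequential([
    layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(128, 3)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(64, activation="relu"),
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

# F1 computed on held-out windows
preds = model.predict(x_te).argmax(axis=1)
print("Macro F1:", f1_score(y_te, preds, average="macro"))
```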
Multivariate Timeseries Forecasting for Air Quality
Implemented Bidirectional LSTM (BiLSTM) and 1D CNN models to forecast air quality from multivariate time series. Achieved a Mean Squared Error (MSE) of 0.00450.
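A sketch of the 1D CNN forecasting variant (window length, feature count, and layer sizes are assumptions; random arrays stand in for the scaled air-quality sequences):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder sequences: (samples, timesteps, features)
x = np.random.rand(500, 48, 8).astype("float32")
y = np.random.rand(500, 1).astype("float32")      # next-step target

model = tf.keras.Sequential([
    layers.Conv1D(64, kernel_size=3, activation="relu", input_shape=(48, 8)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                               # linear output for the forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```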
Classification task for South African Heart Disease Dataset
Deployed several classification methods and compared their accuracy: Linear Discriminant Analysis (LDA): 0.7527, LDA with polynomial features: 0.7101, Quadratic Discriminant Analysis (QDA): 0.6989, K-NN with Standard Scaler: 0.7312, K-NN with MinMax Scaler: 0.7527.
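The comparison can be sketched with scikit-learn pipelines (synthetic data stands in for the South African Heart Disease table, and the polynomial degree is an assumption):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler, MinMaxScaler, PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for the heart-disease features/labels
X, y = make_classification(n_samples=462, n_features=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "LDA polynomial": make_pipeline(PolynomialFeatures(degree=2), LinearDiscriminantAnalysis()),
    "QDA": QuadraticDiscriminantAnalysis(),
    "KNN + StandardScaler": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "KNN + MinMaxScaler": make_pipeline(MinMaxScaler(), KNeighborsClassifier()),
}
for name, model in models.items():
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))  # test accuracy
```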
"DILLEMA" perform Image Classification and Semantic Segmantation
A framework that generates new test cases to uncover faulty behavior of image-based deep learning applications (e.g., autonomous driving). DILLEMA detects 24.8% pixel-based misclassification for the semantic segmentation task on the autonomous-driving SHIFT dataset. It also finds 53.3% misclassified images in state-of-the-art neural networks for the classification task on the ImageNet1K dataset.
Statistical Learning: Linear Regression
Deployed generalized linear regression for prediction and inference on the Advertising dataset. Performed hypothesis testing as a feature-selection step. Implemented Ordinary Least Squares (OLS) with an R-squared score of 0.897.
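A sketch of the OLS-plus-hypothesis-testing workflow with statsmodels (the data frame below is a synthetic stand-in for the Advertising dataset; coefficients are arbitrary):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder frame standing in for the Advertising data (TV, radio, newspaper -> sales)
rng = np.random.default_rng(0)
df = pd.DataFrame({"TV": rng.uniform(0, 300, 200),
                   "radio": rng.uniform(0, 50, 200),
                   "newspaper": rng.uniform(0, 100, 200)})
df["sales"] = 3 + 0.045 * df["TV"] + 0.19 * df["radio"] + rng.normal(0, 1, 200)

# OLS fit; the summary reports per-coefficient t-tests and p-values
# that drive the hypothesis-testing / feature-selection step
X = sm.add_constant(df[["TV", "radio", "newspaper"]])
model = sm.OLS(df["sales"], X).fit()
print(model.summary())
print("R^2:", model.rsquared)
```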
Classification Task Iris dataset with SVM
Deployed a Support Vector Machine (SVM), achieving an F1-score of 0.7999. Includes a comparison of LDA, QDA, KNN, and SVM results.
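A minimal SVM sketch on Iris (kernel, C, and the train/test split are assumptions, not the project's exact settings):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM; swap in LDA/QDA/KNN estimators for the comparison
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("Macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```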
Clustering Task MNIST dataset with K-Means and DBSCAN
Used Principal Component Analysis (PCA) for dimensionality reduction before clustering. Results: K-Means purity: 0.7896, DBSCAN purity: 0.5025.
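A sketch of the PCA-then-cluster pipeline with a purity metric (the small digits dataset stands in for MNIST to keep it self-contained; the DBSCAN eps value is an assumption that needs tuning per dataset):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

def purity(labels_true, labels_pred):
    # Fraction of points whose cluster's majority class matches their true class
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()
    return total / len(labels_true)

# Digits used as a stand-in for MNIST
X, y = load_digits(return_X_y=True)
X_pca = PCA(n_components=0.95, random_state=0).fit_transform(X)  # keep 95% variance

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_pca)
print("K-Means purity:", purity(y, km.labels_))

db = DBSCAN(eps=20, min_samples=5).fit(X_pca)   # eps must be tuned per dataset
print("DBSCAN purity:", purity(y, db.labels_))
```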
Leaf Image Classification
Built a model to classify 15 leaf classes using state-of-the-art architectures. Compared the results; Inception-ResNet achieved a PRC score of 0.8867.
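A sketch of an Inception-ResNet classifier head tracking the PR-curve AUC (input size, head layers, and freezing strategy are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 15

# Pretrained Inception-ResNet backbone used as a feature extractor
base = tf.keras.applications.InceptionResNetV2(include_top=False, weights="imagenet",
                                               input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="PR", name="prc")])  # PR-curve AUC
model.summary()
```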
Multivariate Time-series Forecasting
A multivariate dataset consisting of 7 ordered features; the task is to provide a prediction for each time step in the test prediction window. A two-layer Bidirectional LSTM with window=600, stride=1, and telescope=864 achieved a Root Mean Squared Error (RMSE) of 4.0347.
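The window/stride/telescope slicing can be sketched as a helper function (the series below is a random placeholder with 7 features; `build_sequences` is a hypothetical name, not the project's code):

```python
import numpy as np

def build_sequences(series, window=600, stride=1, telescope=864):
    """Slice a (timesteps, features) array into input windows of length `window`
    and multi-step targets of length `telescope`."""
    X, y = [], []
    for start in range(0, len(series) - window - telescope + 1, stride):
        X.append(series[start:start + window])
        y.append(series[start + window:start + window + telescope])
    return np.array(X), np.array(y)

# Placeholder series with 7 ordered features
series = np.random.rand(5000, 7).astype("float32")
X, y = build_sequences(series)
print(X.shape, y.shape)   # (samples, 600, 7), (samples, 864, 7)
```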
CNN (LeNet, VGG-like) for image classification
Solved the classification problem for the MNIST and CIFAR-10 datasets. The MNIST model achieved an F1 score of 0.9899; a simple VGG-like architecture achieved an F1 score of 0.7866 on CIFAR-10.
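A sketch of a simple VGG-like network for CIFAR-10 (block count, filter sizes, and epochs are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def vgg_block(filters):
    # Two 3x3 convolutions followed by 2x2 max pooling, as in VGG
    return [layers.Conv2D(filters, 3, padding="same", activation="relu"),
            layers.Conv2D(filters, 3, padding="same", activation="relu"),
            layers.MaxPooling2D()]

model = tf.keras.Sequential(
    [layers.Input(shape=(32, 32, 3))]
    + vgg_block(32) + vgg_block(64) + vgg_block(128)
    + [layers.Flatten(),
       layers.Dense(128, activation="relu"),
       layers.Dense(10, activation="softmax")]
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, batch_size=64, validation_split=0.1, verbose=2)
```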
Improve CNN performance with Transfer Learning and Fine Tuning
Compared re-training from scratch versus fine-tuning the MobileNetV2 architecture to cope with limited dataset availability.
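The two-stage transfer-learning-then-fine-tuning recipe can be sketched as below (input size, class count, learning rates, and the number of unfrozen layers are assumptions; `train_ds`/`val_ds` are hypothetical dataset objects):

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(160, 160, 3), pooling="avg")

# Stage 1: transfer learning -- freeze the backbone, train only the new head
base.trainable = False
model = tf.keras.Sequential([base, layers.Dropout(0.2),
                             layers.Dense(5, activation="softmax")])  # 5 classes assumed
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # hypothetical datasets

# Stage 2: fine-tuning -- unfreeze the top of the backbone with a small learning rate
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```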
Statistical Learning: Bias and Variance Trade-off
Explored bias and variance theoretically, then demonstrated the trade-off as a function of model complexity in a regression setting.
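One way to illustrate the trade-off is a polynomial-degree sweep on noisy data (the data and degrees below are illustrative, not the project's experiment):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Noisy sine data: low degrees underfit (high bias), high degrees overfit (high variance)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 80).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.3, 80)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)

for degree in [1, 3, 9, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x_tr, y_tr)
    print(f"degree={degree:2d}"
          f"  train MSE={mean_squared_error(y_tr, model.predict(x_tr)):.3f}"
          f"  test MSE={mean_squared_error(y_te, model.predict(x_te)):.3f}")
```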
Improve CNN performance with Image Augmentation
Addressed the lack of data using image augmentation (rotation, flip, translation, brightness, contrast).
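A sketch of such an augmentation pipeline using Keras preprocessing layers (factor values and image size are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline applied on-the-fly during training
augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),          # rotation
    layers.RandomFlip("horizontal"),     # flip
    layers.RandomTranslation(0.1, 0.1),  # translation
    layers.RandomBrightness(0.2),        # brightness
    layers.RandomContrast(0.2),          # contrast
])

# Example: augment a batch of placeholder images
images = tf.random.uniform((8, 224, 224, 3))
augmented = augment(images, training=True)
print(augmented.shape)
```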
Feature Selection Process of MPG (Miles per Gallon) dataset
Implemented feature selection with forward and backward selection. Performed hyperparameter selection using cross-validation (Hold-Out and K-Fold). Final R-squared: 0.8462.
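A sketch of forward and backward selection with K-fold cross-validation using scikit-learn (the diabetes dataset stands in for the MPG table; the number of selected features and folds are assumptions):

```python
from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Stand-in data to keep the sketch self-contained
X, y = load_diabetes(return_X_y=True)
lr = LinearRegression()

for direction in ("forward", "backward"):
    # K-fold CV (cv=5) scores each candidate feature subset
    sfs = SequentialFeatureSelector(lr, n_features_to_select=5,
                                    direction=direction, cv=5).fit(X, y)
    r2 = cross_val_score(lr, sfs.transform(X), y, cv=5, scoring="r2").mean()
    print(direction, "selected:", sfs.get_support(indices=True), "CV R^2:", round(r2, 3))
```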
Generative Adversarial Networks with MNIST
Generated MNIST images from sampled latent noise. Built the Generator and Discriminator models from scratch and created custom loss functions and metrics for the GAN.
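A condensed sketch of the setup (architecture sizes, optimizers, and the adversarial losses below are assumptions, not the project's exact code): a generator maps latent noise to 28x28 images, a discriminator scores real versus fake, and a custom train step applies the two losses.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100

# Generator: latent noise -> 28x28 image
generator = tf.keras.Sequential([
    layers.Dense(7 * 7 * 128, activation="relu", input_shape=(LATENT_DIM,)),
    layers.Reshape((7, 7, 128)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])

# Discriminator: image -> real/fake probability
discriminator = tf.keras.Sequential([
    layers.Conv2D(64, 4, strides=2, padding="same", input_shape=(28, 28, 1)),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal((tf.shape(real_images)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Adversarial losses: discriminator separates real from fake, generator fools it
        d_loss = bce(tf.ones_like(real_out), real_out) + bce(tf.zeros_like(fake_out), fake_out)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```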
Building heatmaps with a Fully Convolutional Network
Given the high spatial dimension of the sea lion population images, the model classifies sea lion types by producing heatmaps of the results. Instead of producing a categorical output through a fully connected layer, the model uses a fully convolutional network.
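A sketch of the idea (layer sizes, class count, and input resolution are assumptions): a 1x1 convolution replaces the dense output layer, so the network emits a per-location class heatmap instead of a single label vector.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5   # sea-lion categories assumed

# Fully convolutional classifier: no Flatten/Dense, so spatial structure is preserved
model = tf.keras.Sequential([
    layers.Input(shape=(None, None, 3)),                   # arbitrary spatial size
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.Conv2D(NUM_CLASSES, 1, activation="softmax"),   # 1x1 conv -> class heatmap
])

# A large image yields a downsampled spatial map of class scores (one heatmap per class)
image = tf.random.uniform((1, 512, 512, 3))
heatmaps = model(image)
print(heatmaps.shape)   # (1, 128, 128, NUM_CLASSES)
```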