- Developed an iOS app, DeepColors, that colorizes sketches and grayscale images using the power of deep learning.
- More than 3 years of machine learning experience, including 6 months as an ML engineering intern.
- Looking for a full-time job as a Machine Learning Engineer in the Vancouver area.
- Already hold a valid one-year work permit for Canada.
Singular Software is a company that developed HeardThat, a phone-based hearing-assistive app which debuted at CES 2020 and helps people hear speech in noisy situations using the power of deep learning. The company was a top-10 finalist out of about 200 in New Ventures BC 2019 and was the winner of the 2020 What's Next Innovation Challenge. I worked as a machine learning developer intern and was involved in building the HeardThat app in its early development stage.
- One of the things I developed for evaluation purposes was an internal visualization tool built with Flask, where team members could interactively inspect the spectrogram and waveform of any audio file. The GIF below is something I found on Twitter; it is quite similar to what I built, except that my tool could also perform arithmetic operations between audio files and plot the resulting spectrogram and waveform.
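At the core of a tool like that is computing a spectrogram from a waveform. A minimal sketch in plain NumPy (the Flask routing and plotting around it are omitted, and the frame and hop sizes here are illustrative, not the ones the tool used):

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    # Slice the signal into overlapping windowed frames
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    # One row of frequency magnitudes per frame
    return np.abs(np.fft.rfft(frames, axis=1))
```

Each row of the result is one time step; plotting it with the waveform side by side gives the interactive view described above.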
- Another thing I developed was an external platform for Mechanical Turk, where we collected scores and transcriptions from people. From these we calculated metrics such as Mean Opinion Score (MOS) and Word Error Rate (WER), which were among the metrics we used to evaluate deep learning models.
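WER is just word-level edit distance divided by the reference length. A small sketch of the computation (not the exact code we used):

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)
```

Averaging this over many Turker transcriptions against the reference text gives a model's WER.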
Worked with team members in an agile way, adapting things as we needed, and learned how Scrum works. Since the internship was remote, I also gained necessary skills such as working over SSH on Ubuntu, communicating concisely in both spoken and written English, and writing easy-to-understand documentation.
1. Convert a frame to grayscale.
2. Apply an adaptive threshold to the array.
3. Blur the image using a median filter.
4. Detect the puzzle.
5. Create a mask.
6. Capture the grid.
7. Detect the vertical lines.
8. Detect the horizontal lines.
9. Calculate the points where the vertical and horizontal lines cross.
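Steps 2 and 3 above can be sketched in plain NumPy (in an OpenCV pipeline they correspond to `cv2.adaptiveThreshold` and `cv2.medianBlur`; the block size and constant `c` here are illustrative):

```python
import numpy as np

def adaptive_threshold(gray, block=11, c=2):
    """A pixel becomes white when brighter than its local mean minus c."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    # All block x block neighbourhoods as a strided view, then their means
    windows = np.lib.stride_tricks.sliding_window_view(padded, (block, block))
    local_mean = windows.mean(axis=(2, 3))
    return np.where(gray > local_mean - c, 255, 0).astype(np.uint8)

def median_blur(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(2, 3)).astype(img.dtype)
```

The adaptive threshold keeps the grid lines crisp under uneven lighting, and the median blur removes the salt-and-pepper noise the thresholding leaves behind.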
Then we create a 2D NumPy array representing the sudoku puzzle the user showed, built from the predictions of a CNN model, trained on Chars74K, for each cell in the grid.
The last thing the user has to do is fix any numbers misclassified by the CNN model by entering the correct digit in the corresponding text box.
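A minimal sketch of how the rectified grid image can be split into the 81 cells that are fed to the CNN (the function name is hypothetical; the actual code may slice differently):

```python
import numpy as np

def split_cells(grid_img):
    """Split a square, rectified grid image into 81 equal cells, row-major."""
    h, w = grid_img.shape[:2]
    ch, cw = h // 9, w // 9
    return [grid_img[r * ch : (r + 1) * ch, c * cw : (c + 1) * cw]
            for r in range(9) for c in range(9)]
```

Running the CNN on each cell and taking the argmax of its output, then reshaping the 81 predictions to 9x9, yields the NumPy array described above.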
The coefficients $a_i$ and $b_{i,j}$ are constants we choose to define our problem, as is the constant $c$.
The binary variables $v_i$ are the values that we are looking for to solve our problem.
The best solution is the assignment of each $v_i$ that produces the smallest value for the overall expression.
Searching for the variables that minimize an expression is called an “argmin” in mathematics.
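Putting this together, the objective takes the standard binary-quadratic form (reconstructed from the description above; the symbols $a_i$, $b_{i,j}$, $c$, and $v_i$ are the constants and binary variables just introduced):

```latex
v^{*} \;=\; \operatorname*{argmin}_{v}\left(\sum_{i} a_i\, v_i \;+\; \sum_{i<j} b_{i,j}\, v_i\, v_j \;+\; c\right),
\qquad v_i \in \{0, 1\}
```

The solver searches over all binary assignments for the one that minimizes this expression.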
This is an automated forex trading strategy powered by reinforcement learning, a type of machine learning. The goal is to optimize a forex trading strategy and make a profit with it on the real financial market while I am sleeping. This project can be separated into two sections: the MVP and version 2.0.
I loosely followed this paper, Deep Reinforcement Learning for Foreign Exchange Trading. The authors tried to optimize the Sure-Fire strategy, a variant of the Martingale, by using a ConvNet as the reinforcement learning agent to find patterns in heatmap images encoded from time-series data by a Gramian Angular Field (GAF), which I will talk about later.
The Sure-Fire strategy
First, as illustrated in Fig. 2, we purchase one unit at any price and set a stop-gain price of +k and a stop-loss price of −2k. At the same time, we select a price with a difference of −k to the buy price and +k to the stop-loss price and set a backhand limit order for three units. Backhand refers to engaging in the opposite behavior. The backhand of buying is selling and the backhand of selling is buying. A limit order refers to the automatic acquisition of corresponding units.
As illustrated in Fig. 3, when a limit order is triggered, and three units are successfully sold backhand, we place an additional backhand limit order, where the buy price is +k to the sell price and −k to the stop-loss price. We set the stop-gain point as the difference of +k and the stop-loss point as the difference of −2k, after which an additional six units are bought.
As illustrated in Fig. 4, the limit order is triggered in the third transaction. The final price exceeded the stop-gain price of the first transaction, the stop-loss price of the second transaction, and the stop-gain price of the third transaction. In this instance, the transaction is complete. The calculation in the right block shows that the profit is +1k.
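The +1k outcome in Fig. 4 can be checked with simple arithmetic over the three closed positions. A toy sketch (the prices and k below are hypothetical, chosen only to mirror the figure):

```python
def surefire_pnl(positions):
    """Net P&L of a completed Sure-Fire sequence.

    positions: list of (units, direction, entry_price, exit_price),
    where direction is +1 for a buy and -1 for a sell.
    """
    return sum(units * d * (exit_p - entry_p)
               for units, d, entry_p, exit_p in positions)

k, p = 10, 1000  # hypothetical step size and starting price
positions = [
    (1, +1, p,     p + k),  # 1st: buy 1, closed at its stop-gain (+k)
    (3, -1, p - k, p + k),  # 2nd: sell 3, closed at its stop-loss (-2k move)
    (6, +1, p,     p + k),  # 3rd: buy 6, closed at its stop-gain (+k)
]
```

Summing the three legs gives +1k (+1k on the first, −6k on the second, +6k on the third), matching the calculation in the figure's right block.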
The left side shows price movement on a 5-minute timeframe with a window size of 12. The right image, encoded by GAF, represents the same price movement; it is a sample of the images that were fed into the ConvNet and that were defined as the states in reinforcement learning. Each image had 4 channels corresponding to the Open, High, Low, and Close of a timeframe.
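The GAF encoding itself is short: rescale the window to [−1, 1], take each value's polar angle, and form the matrix of pairwise angle sums. A minimal sketch of the summation variant (window handling and channel stacking omitted):

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D series as a Gramian Angular Summation Field image."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))   # angle in polar coordinates
    # G[i, j] = cos(phi_i + phi_j): pairwise temporal correlations
    return np.cos(phi[:, None] + phi[None, :])
```

Applying this to the Open, High, Low, and Close series of one window and stacking the four matrices gives the 4-channel state image described above.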
|                  | 2020/May/18 - 2020/May/23 | 2020/May/25 - 2020/May/30 |
|------------------|---------------------------|---------------------------|
| Number of trades | 117                       | 105                       |
| Number of wins   | 86                        | 79                        |
| Number of losses | 31                        | 26                        |
This is the final team project in the Machine Learning Bootcamp at 7 Gate Academy: an application that predicts the likelihood of getting a parking ticket in the Vancouver area based on the user's geolocation and the time. When a user taps the location where they plan to park, or where they are currently parked, that triggers a call to AWS Lambda, where our machine learning model runs to predict the likelihood.
Here is how Paul and I created this application within a month.
We found a dataset in the Vancouver Open Data catalog. The original dataset contained information on issued parking tickets, such as date and time, address (including block), infraction, and status. However, it obviously did not contain the target variable we needed: the likelihood, or probability, of getting a parking ticket. I explain how we solved this in the Obstacles section below, but the short answer is that we created one using traffic counts on each street.
We estimated the probability for each street and thresholded it into three categories, Low, Medium, and High, which were the likelihoods we predicted. In other words, we treated this as a classification problem, because categories are more user-friendly than raw probabilities.
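A toy version of that final mapping step (the cut-off values below are hypothetical, not the ones we tuned on the data):

```python
def likelihood_category(p, low=0.2, high=0.5):
    """Map an estimated ticket probability to a user-facing category.

    The cut-offs are illustrative placeholders; in practice they were
    chosen by thresholding the per-street probability estimates.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability in [0, 1]")
    if p < low:
        return "Low"
    return "Medium" if p < high else "High"
```

The Lambda function returns one of these three labels to the app instead of the raw probability.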
Passed coding examinations to be admitted to this course. I am one of the roughly 400 students who successfully finished it, out of the roughly 900 who started.
Curriculum