Twitter M&A Data Analytics
I created automated scripts that downloaded and preprocessed tweets posted around M&A deal announcements for 600+ companies. I performed data analysis and topic modeling on the tweets, and developed an ML model that predicts whether a deal is withdrawn or completed based on features such as tweet sentiment and the companies' financial attributes (e.g., transaction value, hedge fund involvement).
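The prediction step can be sketched as a simple binary classifier over deal-level features. This is a minimal illustration, not the original model: the feature rows, labels, and thresholds below are made up for demonstration.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical feature rows: [mean tweet sentiment, transaction value ($bn), hedge fund involved]
X = np.array([
    [0.8, 1.2, 0],
    [0.6, 0.5, 0],
    [-0.4, 3.0, 1],
    [-0.7, 2.5, 1],
    [0.5, 0.9, 0],
    [-0.5, 4.1, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = completed, 0 = withdrawn (toy labels)

model = LogisticRegression().fit(X, y)

# Probability that a new deal with positive sentiment completes
proba = model.predict_proba([[0.7, 1.0, 0]])[0, 1]
```

In the real pipeline the sentiment feature would be aggregated from per-tweet sentiment scores before fitting.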
I developed an automation bot to scrape data from Pulsepoint web logs. I used third-party tools, libraries, and APIs to prepare the data and implemented a cron job to automate the scraping process. I built the ETL pipeline with Selenium, GCP, and related Python libraries, improved the project's scalability, and optimized the scraping process. Later, I developed a dashboard using Django and Bootstrap to monitor the data in real time.
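The scheduling step can be sketched as a crontab entry; the paths, interval, and script name here are illustrative assumptions, not the project's actual configuration.

```shell
# Run the scraping ETL script every 6 hours and append output to a log
0 */6 * * * /usr/bin/python3 /opt/pulsepoint/run_etl.py >> /var/log/pulsepoint_etl.log 2>&1
```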
Facebook Marketplace Scraper
I developed several automation bots using Selenium to crawl data from Facebook Marketplace and stored it in AWS RDS (MySQL and PostgreSQL databases) as required. I applied NLP techniques to extract data in specific formats from the listing text.
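One common extraction step is pulling structured fields out of free-form listing text with regular expressions. A minimal sketch, assuming a made-up listing string and illustrative field names:

```python
import re

# Hypothetical Marketplace listing text (not real data)
listing = "2015 Toyota Corolla - $7,500 - Brooklyn, NY - 82,000 miles"

# Extract price and mileage from the free-form text
price = re.search(r"\$([\d,]+)", listing)
mileage = re.search(r"([\d,]+)\s*miles", listing)

record = {
    "price_usd": int(price.group(1).replace(",", "")) if price else None,
    "mileage": int(mileage.group(1).replace(",", "")) if mileage else None,
}
# record == {"price_usd": 7500, "mileage": 82000}
```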
Cloud Server Automation
I developed several automation scripts using Selenium and the Vultr API to create, manage, and update servers and Plesk apps on Vultr, based on custom configurations such as the number of domains/subdomains per Plesk app or the number of Plesk apps per server.
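Server creation through the API can be sketched as building a request body for the Vultr v2 instance-creation endpoint. The endpoint and field names follow Vultr's public v2 docs, but treat the specific plan ID, OS ID, and label below as assumptions:

```python
import json

# Vultr v2 endpoint for creating instances (per public docs)
VULTR_API = "https://api.vultr.com/v2/instances"

def build_instance_payload(region, plan, os_id, label):
    """Build the JSON body for creating a server via the Vultr v2 API."""
    return {"region": region, "plan": plan, "os_id": os_id, "label": label}

# Illustrative values: "ewr" region, a 1 vCPU / 1 GB plan, hypothetical OS id
payload = build_instance_payload("ewr", "vc2-1c-1gb", 387, "plesk-node-01")
body = json.dumps(payload)

# The actual call would POST `body` with an
# "Authorization: Bearer <API_KEY>" header, e.g. via the requests library.
```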
Indeed Job Scraper
I developed a scraping pipeline that collects Indeed job postings from 30k+ locations across 6 countries using Scrapy, pandas, and a few Python utility libraries. I also set up a proxy middleware for IP rotation and integrated the pipeline with the AWS Batch job scheduler.
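The IP-rotation step can be sketched as a Scrapy downloader middleware that assigns a random proxy to each outgoing request via `request.meta["proxy"]`, which Scrapy's built-in `HttpProxyMiddleware` honors. The proxy list and settings path are illustrative:

```python
import random

# Hypothetical proxy pool; in a real pipeline this would come from a provider.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

class RotatingProxyMiddleware:
    """Scrapy downloader middleware: attach a random proxy to each request.

    Enabled via DOWNLOADER_MIDDLEWARES in settings.py (path is illustrative):
        DOWNLOADER_MIDDLEWARES = {
            "myproject.middlewares.RotatingProxyMiddleware": 350,
        }
    """

    def process_request(self, request, spider):
        # Scrapy's HttpProxyMiddleware picks up request.meta["proxy"]
        request.meta["proxy"] = random.choice(PROXIES)
```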
I collaborated with product management, design, DevOps, and QA teams to create a fully responsive e-commerce Odoo theme. I used the Odoo tech stack (Python, JS, QWeb, PostgreSQL) along with some third-party libraries, and released multiple versions of the theme to ensure compatibility with different Odoo versions.
It is a Bangla news aggregator mobile application, similar to Feedly or UpWire News. It gathers news from the most popular newspapers and YouTube news channels in Bangladesh. As of August 2021, it has been downloaded over 12,000 times and has more than 1,000 daily visitors. A cron job automates the data scraping process, and Firebase Cloud Messaging handles push notifications. The backend API uses a Scrapy-Django integration, and the Android application is developed in Java. I implemented several algorithms to optimize the data flow.
- Aggregates news from the most popular newspapers and YouTube channels of Bangladesh
- Filter newspapers and YouTube channels
- Dark mode and light mode
- Browse, filter, and search highlights and categorical news
- Image zoom-in
- Save and offline browsing (cache)
- Easy pull-to-refresh
Tech Stack : Python, Android (Java), Django, Django REST Framework, Scrapy, MySQL, MongoDB, Firebase
To build an end-to-end e-commerce mobile platform, I developed two native Android apps (B2B and B2C) from scratch using Android Jetpack libraries, along with the REST APIs and database system. I also developed a web scraping ETL pipeline for data automation using Python libraries and MySQL, and built a web dashboard with PHP and Bootstrap to monitor the system. For high-quality image compression I used Facebook's Spectrum library.
The objective of this project is to match celebrity users with their respective tweets using Semantic Textual Similarity, built with Transformers, Streamlit, and FastAPI. The app finds similar Twitter users based on their tweets and works in two ways: 1. Get a list of the most similar celebrity Twitter accounts based on a predefined celebrity list. 2. Find the similarity between two Twitter users based on their tweets.
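The ranking step boils down to comparing embedding vectors by cosine similarity. In the real app the vectors come from a transformer sentence encoder; the sketch below uses tiny made-up embeddings and account names to show only the similarity/ranking logic:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up 3-dim "embeddings"; real ones would be transformer outputs
user_vec = [0.2, 0.7, 0.1]
celebrity_vecs = {
    "celeb_a": [0.25, 0.65, 0.05],
    "celeb_b": [0.9, 0.1, 0.4],
}

# Rank celebrity accounts by similarity to the user's tweet embedding
ranked = sorted(
    celebrity_vecs,
    key=lambda c: cosine_similarity(user_vec, celebrity_vecs[c]),
    reverse=True,
)
# ranked[0] is the most similar account
```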
I scraped medicine data using Scrapy and integrated it with DRF and PostgreSQL, designing the scraper to preserve the relations between the data models. I also customized the Django admin panels, adding features such as autocomplete lookups for relational fields, custom filtering (alphabetical, model property), and bulk actions (e.g., export to CSV). I customized the Scrapy command so spiders can be run from the Django command line, and added proxy configuration to Scrapy.
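The core of a CSV-export bulk action is serializing the selected rows. A minimal sketch, with plain dicts standing in for model instances and illustrative field names:

```python
import csv
import io

def rows_to_csv(rows, fields):
    """Serialize a list of dicts (stand-ins for model instances) to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for row in rows:
        writer.writerow({f: row.get(f, "") for f in fields})
    return buf.getvalue()

csv_text = rows_to_csv(
    [{"name": "Paracetamol", "strength": "500mg"}],
    ["name", "strength"],
)
```

In a Django admin bulk action, this text would be returned as an `HttpResponse` with a `text/csv` content type.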
The scraped dataset and my analytics notebook (cleaning, wrangling, visualization) are available on Kaggle -
A Scrapy-MongoDB-Django integrated API that scrapes 10+ popular e-commerce sites from Bangladesh. I used Djongo to integrate MongoDB with DRF and added extra functionality to the API, e.g., custom price and product-tag filters.
I did an extensive exploratory data analysis of the imbalanced IBM Telco Customer Churn dataset and ran various statistical hypothesis tests. I also built a single-level stacking ensemble of three gradient boosting algorithms (XGBoost, CatBoost, LightGBM) and tuned the hyperparameters of all models using Optuna.
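The stacking setup can be sketched with scikit-learn's `StackingClassifier`. This is an illustrative stand-in: it uses three differently configured `GradientBoostingClassifier`s in place of XGBoost/CatBoost/LightGBM, a synthetic imbalanced dataset in place of the Telco data, and fixed hyperparameters in place of Optuna tuning:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic imbalanced binary dataset (stand-in for Telco churn)
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Three boosted base learners, stacked under a logistic-regression meta-model
stack = StackingClassifier(
    estimators=[
        ("gb1", GradientBoostingClassifier(random_state=0)),
        ("gb2", GradientBoostingClassifier(learning_rate=0.05, random_state=1)),
        ("gb3", GradientBoostingClassifier(max_depth=2, random_state=2)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
score = stack.score(X_te, y_te)
```

Swapping in `XGBClassifier`, `CatBoostClassifier`, and `LGBMClassifier` as the `estimators` recovers the triple-boosting setup described above.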
I built a text classification model and tuned its hyperparameters on the 515K Hotel Reviews Data in Europe dataset, using TF-IDF, CountVectorizer, and the TF-Hub Universal Sentence Encoder word-embedding models along with an LSTM, and built a REST API using FastAPI.
Performed multivariate time-series analysis to predict bike share demand using a deep learning approach (stacked bi-LSTM) with 10 timesteps. With a 90:10 dataset split, the final model achieved ~0.027 evaluation loss and a 0.93 R² score on the test set.
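Framing a series for a 10-timestep sequence model comes down to sliding-window construction. A minimal sketch on a made-up series (the real project would window the multivariate demand features the same way):

```python
import numpy as np

def make_windows(series, timesteps=10):
    """Turn a 1-D series into (window, next-value) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - timesteps):
        X.append(series[i : i + timesteps])  # 10 past values as input
        y.append(series[i + timesteps])      # the value right after the window
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)  # toy stand-in for demand data
X, y = make_windows(series, timesteps=10)
# X.shape == (90, 10); y.shape == (90,)
```

For an LSTM, `X` would then be reshaped to `(samples, timesteps, features)`.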
Performed univariate analysis applying a stacked bi-LSTM. Data was collected using the yfinance API.
Tech Stack : Keras, TensorFlow, scikit-learn
A web application where users can enter the monthly earnings of each administrative region in Bangladesh via Excel sheets; the data is parsed and displayed on a map of Bangladesh, with each region colored according to its earnings range.
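The color-coding step can be sketched as mapping an earnings amount to a bucketed map color; the thresholds, colors, and region figures below are illustrative assumptions:

```python
def earning_color(amount):
    """Map a region's monthly earnings to a map color (illustrative thresholds)."""
    if amount < 10_000:
        return "red"
    if amount < 50_000:
        return "yellow"
    return "green"

# Made-up monthly earnings per region
regions = {"Dhaka": 120_000, "Sylhet": 30_000, "Barisal": 8_000}
colors = {name: earning_color(amount) for name, amount in regions.items()}
```

In the app, these colors would then be applied to the corresponding region shapes on the rendered map.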
An e-commerce web project specialized in meta-searching, where users can see a comparison list of prices for branded products from various e-commerce websites. Users can add products to a wish list to track price changes.
Tech Stack : BeautifulSoup4, Django, Django REST framework, react.js, Material-UI
Project doc : Seradam Doc Pdf
It serves as a social media platform for AIUB students. User accounts are created via web scraping using the AIUB portal ID and password. Apart from basic social media activities, users can do text messaging (ChatCamp) and video chat (Vidyo.io).
Tech Stack : Node.js, Laravel, MySQL, Bootstrap, ChatCamp, Vidyo.io
Project link : Previous Version (Node.js, Socket.IO)
This project comprises a Human Resource Management System database that combines several processes to ensure easy management of human resources, business processes, and data. The database was created from scratch using ER and schema diagrams and the normalization process. Several advanced database features were implemented using PL/SQL, such as packages, triggers, cursors, procedures, functions, and records.
Tech Stack : Bootstrap4, PHP, Oracle Database (10g XE)
Project doc : Project Doc Pdf
An automobile showroom management system where sales personnel (e.g., a manager) can perform basic inventory CRUD operations on automobiles (cars) and orders, with extensible features such as a billing form with proper authentication.
Tech Stack : Java, MySQL
Project doc : Project Doc MS Word
A multiplayer FPS game powered by Unity's UNet service. Users can connect and play with anyone from around the world.
Tech Stack : C#, Unity 3D
Gameplay link : Youtube Clip
A collection of test cases and automation scripts made with Java and Selenium to test the AIUB website.
Tech Stack : Java, Selenium
A basic pot player controller powered by an ESP32 (Arduino).
Tech Stack : python, Arduino
A collection of scenes representing a sea beach, marine drive, and resort.
Tech Stack : C++, OpenGL, GLUT
Transition Clip : Transition Clip Gif