
Multithreaded Programming / Python freelancer required: experienced Python developer needed for web scraping with multi-threading/multiprocessing

Posted at - Aug 1, 2018



I need a Python developer who is highly experienced with:
- multi-threading/multiprocessing
- rotating IPs (I will provide proxies)
- working with virtual machines (Google Compute Engine)
- scraping with Selenium (requests or urllib are very welcome if you can handle the JavaScript)
- working with a database (expecting more than 200 GB by the end, so speed is important)

The project is about scraping one website for page sources and inserting them into a database. Since there are a few million queries, I want to run it on a virtual machine, or even on several of them. If you can handle the JavaScript with requests or urllib, then I can run it locally instead. Everything comes down to total time and getting at least 90% of the results from the website. The website uses a third-party anti-web-scraping service, Distil Networks (https://www.distilnetworks.com/). You must be able to handle it!
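For orientation, here is a minimal sketch of the concurrency and proxy-rotation side of the job, using a thread pool with requests; the proxy endpoints, target URLs and worker count are invented placeholders, and the real job would drive Selenium for the JavaScript-heavy pages.

# Minimal sketch: fetch many pages concurrently through rotating proxies.
# Proxy addresses, URLs and worker count below are placeholders.
import random
from concurrent.futures import ThreadPoolExecutor

import requests

PROXIES = [
    "http://user:pass@203.0.113.10:8000",  # placeholder proxy endpoints
    "http://user:pass@203.0.113.11:8000",
]

def fetch(url):
    """Fetch one page through a randomly chosen proxy from the pool."""
    proxy = random.choice(PROXIES)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)
    resp.raise_for_status()
    return url, resp.text

urls = ["https://example.com/page/%d" % i for i in range(100)]  # placeholder
with ThreadPoolExecutor(max_workers=8) as pool:
    for url, html in pool.map(fetch, urls):
        print(url, len(html))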

I already have code from a previous developer, but it does not work as I expected. You can modify this code or create your own.
Write 'apple' in the first line when you apply so I know you read the full post. Replies without this word will be ignored.
I need all buildings and their units from this page: https://streeteasy.com/buildings/nyc There are four tabs for units: 'active listings', 'past sales', 'past rentals' and 'all units'. The 'all units' tab does not actually contain all units, so you must check every tab (each tab requires a click() with Selenium, otherwise its data will not be in the page source).
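To illustrate the tab clicking, here is a minimal Selenium sketch that opens one building page and captures the page source of each tab; the tab labels, the example building URL and the wait conditions are assumptions that must be checked against the live site in the browser's dev tools.

# Minimal sketch: capture the page source of each unit tab on a building page.
# Tab labels, example URL and wait markers are assumptions, not confirmed selectors.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

TAB_LABELS = ["Active listings", "Past sales", "Past rentals", "All units"]  # assumed

driver = webdriver.Chrome()
driver.get("https://streeteasy.com/building/some-building")  # placeholder URL

sources = {}
for label in TAB_LABELS:
    tab = WebDriverWait(driver, 15).until(
        EC.element_to_be_clickable((By.LINK_TEXT, label)))
    tab.click()
    # Give the tab's content time to load before reading the page source.
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.TAG_NAME, "table")))  # assumed marker
    sources[label] = driver.page_source

driver.quit()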

I need four scripts:
1) collect_building_urls
- the first script should collect all building URLs from the website and insert them into the database, without duplicates (a minimal sketch of this script is shown after the list)
2) collect_building_page_sources
- the second script should collect the page source for each building that is not already done and insert it into the database
3) parse_unit_urls
- the third script should parse each building page source that is not already done, find all unit links, and insert them into the database, without duplicates
4) collect_unit_page_sources
- the fourth script should collect the page source for each unit that is not already done and insert it into the database
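As a rough sketch of the first script's deduplicated inserts (and of how the second script could find buildings that are not already done), the snippet below uses SQLite only as a stand-in, since the posting does not name a specific database.

# Minimal sketch of script 1 (collect_building_urls): store building URLs
# without duplicates. SQLite is a stand-in for whatever database is chosen.
import sqlite3

conn = sqlite3.connect("buildings.db")
with conn:
    conn.execute("""CREATE TABLE IF NOT EXISTS buildings (
                        url TEXT PRIMARY KEY,
                        page_source TEXT)""")

def insert_building_urls(urls):
    """Insert URLs; PRIMARY KEY plus INSERT OR IGNORE keeps them duplicate-free."""
    with conn:
        conn.executemany("INSERT OR IGNORE INTO buildings (url) VALUES (?)",
                         [(u,) for u in urls])

def pending_buildings():
    """Return building URLs whose page source has not been collected yet."""
    return [row[0] for row in
            conn.execute("SELECT url FROM buildings WHERE page_source IS NULL")]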

If you are interested in this project, you will need to create the four scripts, set them up on a virtual machine, test them, and make sure they can finish the job.
Finding page sources and URLs is very easy, just a few lines of code, but a lot of checking is needed to make sure the script is on the right page and all content is loaded.
There must be a log for each thread/process (a minimal logging sketch is shown below).
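A minimal sketch of per-thread logging, assuming the standard logging module and including the thread name in every line so each worker's progress can be followed separately; the worker function and URLs are placeholders.

# Minimal sketch: one shared log file with the thread name in every record.
import logging
import threading

logging.basicConfig(
    filename="scraper.log",
    level=logging.INFO,
    format="%(asctime)s [%(threadName)s] %(levelname)s %(message)s")

def worker(url):
    logging.info("started %s", url)
    # ... fetch and store the page source here ...
    logging.info("finished %s", url)

threads = [threading.Thread(target=worker, args=(u,), name="worker-%d" % i)
           for i, u in enumerate(["https://example.com/a", "https://example.com/b"])]
for t in threads:
    t.start()
for t in threads:
    t.join()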

I need this finished in the next few days.
I have already wasted a lot of my time, so please apply only if you are able to finish this on time, and let me know your price for this project.
Conversation via Skype will be required, and screen sharing if needed.

About the recruiter
Member since Aug 25, 2017
Musarrat
from West Bengal, India

Skills & Expertise Required

Multithreaded Programming Python 

Candidate shortlisted and hired
Hiring open till - Aug 1, 2018

Work from Anywhere
40 hrs / week
Fixed Type
Remote Job
$104.35
Cost


