Remote Data Mining And Management Job In Data Science And Analytics

Crawling food data in different postal codes and creating a site comparison website


Hi there,

Looking for someone who can help crawl online food menus for certain postal-code areas in the UK and pull all the menus available for those areas, then create a comparison site from the data collected.
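As a rough sketch of the task, the crawl-then-compare step could look like the following. The page markup (an `li` with class `menu-item` holding "Name - £price") and the postcode/restaurant grouping are assumptions for illustration, not the structure of any real target site; a production version would fetch pages with Scrapy as listed in the required skills.

```python
# Hypothetical sketch: parse restaurant menu pages per UK postcode and
# build a price-comparison table. CSS classes and text layout are assumed.
from html.parser import HTMLParser
from collections import defaultdict

class MenuParser(HTMLParser):
    """Collect (item, price) pairs from <li class="menu-item">Name - £4.50</li>."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "menu-item") in attrs:
            self.in_item = True

    def handle_data(self, data):
        if self.in_item and "£" in data:
            name, _, price = data.partition("£")
            self.items.append((name.strip(" -"), float(price)))
            self.in_item = False

def compare(pages):
    """pages: {postcode: {restaurant: html}} ->
    {postcode: {item: [(restaurant, price), ...] sorted cheapest first}}."""
    out = {}
    for postcode, restaurants in pages.items():
        table = defaultdict(list)
        for restaurant, html in restaurants.items():
            parser = MenuParser()
            parser.feed(html)
            for item, price in parser.items:
                table[item].append((restaurant, price))
        out[postcode] = {i: sorted(v, key=lambda t: t[1]) for i, v in table.items()}
    return out
```

For example, feeding one scraped page for postcode SW1A yields a per-item price list that the comparison site could render directly.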
About the recruiter
Member since Mar 14, 2020
Mr. Bhavesh Pur
from Chobe, Botswana

Skills & Expertise Required

Scrapy, Web Scraping, Web Crawling, Python, Camelot

Candidate shortlisted and hired
Hiring open till - Sep 28, 2023

Work from Anywhere

40 hrs / week

Hourly Type

Remote Job

Cost: $34.52


Similar Projects

Machine Learning - Comparison of Brands with Description

Hello Applicants,

I am looking to build an algorithm that compares the Description of each item with its respective brand for a comparative study.

I will share some files with shortlisted candidates; there are TWO worksheets. First shee...read more
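A minimal sketch of the description-to-brand comparison, assuming single-word brand names and free-text descriptions (the actual worksheet columns are not specified in the brief). It scores each brand against the words of a description with `difflib`; multi-word brands would need token n-grams instead.

```python
# Hedged sketch: match an item description to its most likely brand.
# Input layout is an assumption; the real data lives in two worksheets.
import difflib

def best_brand(description, brands):
    """Return (brand, score) for the brand name that best matches
    any single word of the description (score in [0, 1])."""
    words = description.lower().split()
    best, best_score = None, 0.0
    for brand in brands:
        score = max(
            difflib.SequenceMatcher(None, brand.lower(), w).ratio()
            for w in words
        )
        if score > best_score:
            best, best_score = brand, score
    return best, best_score
```

A fuzzy score (rather than exact string equality) tolerates typos in the description column, which is usually the point of such a comparative study.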

Looking for Python programmer to write downloader for Google Open Images based on image type

Hello,

We are looking to hire a Python programmer for a small project-based commission of writing a downloader for Google Open Images.

Details are available in private, thank you.
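Since the details are private, only a skeleton can be sketched here. The column names below loosely mirror the Open Images annotation CSVs but should be checked against the actual release; treat them as assumptions. Filtering is separated from downloading so the selection logic can be tested without network access.

```python
# Sketch of a label-filtered downloader skeleton for Google Open Images.
# CSV column names ("ImageID", "Label", "OriginalURL") are assumptions.
import csv
import io
import urllib.request

def select_urls(annotation_csv, wanted_label):
    """Yield (image_id, url) for rows whose label matches wanted_label."""
    for row in csv.DictReader(io.StringIO(annotation_csv)):
        if row["Label"] == wanted_label:
            yield row["ImageID"], row["OriginalURL"]

def download(pairs, dest_dir="."):
    """Fetch each selected image to dest_dir, named by its image ID."""
    for image_id, url in pairs:
        urllib.request.urlretrieve(url, f"{dest_dir}/{image_id}.jpg")
```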

European Fake Licence Plates Generator in Python

Hey,

I need a generator to create a dataset of European licence plates.

If you are not from Europe, please check the link below for examples of European licence plates:

https://en.wikipedia.org/wiki/Vehicle_registration_plates_of_Europe

Web Scraping/Data Extraction Needed - Fourth World

We are looking for a web scraper who can gather data for our front-end designer of our website. This will be the first iteration of the website, and we need an individual to get us out to market as we develop out the rest of the website. We have the...read more

scraper part 2

1 - Take the CSV called count_fast (the output file from the previous run) and use it as the ingest file.
2 - Replace the column named videocount with the timecode at which the scraper is launched (so on each launch the header is the new date).
3 - Also...read more
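Steps 1 and 2 can be sketched as follows. The timestamp format is an assumption, since the brief only says "the timecode that the scraper is launched"; the function works on parsed CSV rows so it can be wired to whatever reads count_fast.csv.

```python
# Sketch of steps 1-2: ingest the previous run's count_fast rows and
# restamp the "videocount" header with the launch time.
import csv
from datetime import datetime

def restamp_header(rows, now=None):
    """rows: list of CSV rows (header first); returns a copy whose
    'videocount' header cell is replaced by a launch timestamp."""
    stamp = (now or datetime.now()).strftime("%Y-%m-%d_%H%M%S")
    header = [stamp if col == "videocount" else col for col in rows[0]]
    return [header] + rows[1:]

def restamp_file(path="count_fast.csv"):
    """Read the ingest CSV, restamp it, and write it back in place."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(restamp_header(rows))
```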