Remote Web Development Job In IT And Programming

Backend developer needed to build a custom data scraper & parser service & API


The goal of this project is to ingest, sanitize, and structure data from LinkedIn so that we receive a continuous stream of updated profiles meeting specific criteria. The objective is both to obtain an initial dataset in a machine-readable format (CSV, XML, etc.) and to provide updated versions whenever a change is found.

Preliminary research into data access has shown that some information is available via:
The LinkedIn API
Query parameters in the URL that can be reverse-engineered to map to different values
Structured data accessible via a profile page's markup (see the sketch after this list)
Unstructured information
Different search products offered by LinkedIn (the free product, Recruiter, etc.).
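
One illustration of the markup route: the sketch below pulls JSON-LD blocks out of an already-fetched profile page. It assumes the page embeds application/ld+json script tags, which is not confirmed for every LinkedIn page type; treat it as a starting point only.

    # Minimal sketch: extract structured (JSON-LD) records from page markup.
    # Assumption: the profile page embeds <script type="application/ld+json">
    # blocks; whether it does may depend on the page type and login state.
    import json
    from bs4 import BeautifulSoup

    def extract_structured_data(html: str) -> list[dict]:
        soup = BeautifulSoup(html, "html.parser")
        records = []
        for block in soup.find_all("script", type="application/ld+json"):
            try:
                records.append(json.loads(block.string))
            except (TypeError, json.JSONDecodeError):
                continue  # skip empty or malformed blocks
        return records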

We're looking to run searches for specific values (no current employment, number of years in the most recent position, etc.), keywords (practice areas selected from a specified list, etc.), and compound queries (attended a Top 50 ranked law school or a Tier 1 law school, etc.).

The types of queries we're looking to track for a change are similar to:
When an individual from a specified list of companies leaves their current employer and changes to a new employment status that meets specific criteria.
When someone we currently track in our CRM switches jobs.
Etc.

We will be developing a separate rules engine that will provide the parameters, bounds, and frequency of updates for any given ongoing search. While the structure of these requests is not yet defined, the tool should be built so that a query can be carried out based on a set of parameters (JSON, etc.).
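
Because the request structure is not yet defined, the snippet below shows only one possible shape for such a parameter set; every field name in it is an assumption, not a spec.

    # Hypothetical parameter set a rules engine might hand to the tool.
    # All field names are illustrative; the real schema is still undefined.
    import json

    query = {
        "filters": {
            "employment_status": "none",           # e.g. no current employment
            "min_years_in_last_position": 3,
        },
        "keywords": ["mergers & acquisitions"],    # practice area from a fixed list
        "compound": {"law_school_rank": {"max": 50}},  # e.g. Top 50 law schools
        "update_frequency": "weekly",
    }
    print(json.dumps(query, indent=2))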

The tool should be able to search on behalf of specific employees who have granted it access to their accounts, and to aggregate the results across those accounts.

Results and updates should be delivered every week. The check could be done by polling only for deltas, by comparing two entire data sets, etc. Each result should carry a unique identifier and a flag signaling whether it is new or an update.

Each batch of results should also have relevant timestamps to reflect both the time of change and the time the change was crawled.
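
One way to implement the weekly check is a fingerprint comparison between two crawls, as sketched below. The profile_id key, the fingerprinted fields, and the flag values are assumptions; the time of change would have to come from the source data where the page exposes it.

    # Sketch of a hash-based delta check between two crawls.
    import hashlib
    from datetime import datetime, timezone

    FIELDS = ("employer", "title", "location")  # assumed fields to watch

    def fingerprint(record: dict) -> str:
        # Stable hash over the watched fields, used to detect changes.
        key = "|".join(str(record.get(f, "")) for f in FIELDS)
        return hashlib.sha256(key.encode()).hexdigest()

    def diff(previous: dict, batch: list) -> list:
        # previous maps profile_id -> fingerprint from the last crawl.
        crawled_at = datetime.now(timezone.utc).isoformat()
        results = []
        for record in batch:
            pid = record["profile_id"]  # unique identifier per result
            fp = fingerprint(record)
            if pid not in previous:
                flag = "new"
            elif previous[pid] != fp:
                flag = "update"
            else:
                continue  # unchanged, not reported
            results.append({**record,
                            "flag": flag,
                            "crawled_at": crawled_at,
                            # time of change, if the source exposes it:
                            "changed_at": record.get("changed_at")})
        return results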

As an output, we will need all of the data in a single file in a machine-readable format (CSV, XML, etc.).
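
Assuming CSV is the chosen format, writing a batch to a single file could be as simple as the sketch below; the column set is illustrative.

    # Write one batch to a single CSV file (columns are assumptions).
    import csv

    def write_batch(rows: list, path: str) -> None:
        fields = ["profile_id", "flag", "changed_at", "crawled_at",
                  "employer", "title", "location"]
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
            writer.writeheader()
            writer.writerows(rows)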

Additionally, providing the data via a documented API that delivers the output in XML/JSON could be part of the initial scope or another phase.
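
If the API phase goes ahead, a thin read-only endpoint over the latest batch could look like the Flask sketch below; the route name and JSON payload shape are assumptions, not a spec.

    # Hypothetical read-only API serving the latest weekly batch as JSON.
    from flask import Flask, jsonify

    app = Flask(__name__)
    LATEST_BATCH = []  # would be populated by the weekly job

    @app.route("/results", methods=["GET"])
    def results():
        return jsonify(LATEST_BATCH)

    if __name__ == "__main__":
        app.run(port=8000)
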
About the recruiter
Aditya Maulana
from Central Serbia, Serbia
Member since May 20, 2018

Skills & Expertise Required

Data Scraping, Web Scraping

Candidate shortlisted and hired
Hiring open till: Mar 13, 2020

Work from Anywhere

40 hrs / week

Hourly type

Remote Job

Cost: $12.51 / hr


Similar Projects

Data Mining

I am looking for a software developer to create a robot that will scrape online and county records for recent bankruptcies and evictions within the last 30 days.

scraper part 2

1- take the CSV called count_fast, which is the output file, and use it as the ingest file
2- replace the column named videocount with the timecode at which the scraper is launched, so on each launch the header is the new date (see the sketch below)
3- also...read more
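
A possible reading of steps 1 and 2, sketched with pandas; the file name count_fast.csv and the in-place rewrite are assumptions, since the posting is truncated.

    # Sketch of steps 1-2: reuse the output CSV as input, then rename the
    # videocount column to the launch timestamp. File name is an assumption.
    from datetime import datetime
    import pandas as pd

    df = pd.read_csv("count_fast.csv")
    launch_stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    df = df.rename(columns={"videocount": launch_stamp})
    df.to_csv("count_fast.csv", index=False)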

Experienced Data Analyst wanted

We are looking for an experienced Data Analyst to analyze, extract, transform, and visualize data sets, and to join our team for future client projects.

Web Scraping for Apartment Prices

Hi! I'm hoping to get help with a simple web scraping project. I'm trying to track the prices of apartments at 3 buildings over a few months so I can get a sense of when there are good deals. I'd like to have a script (in Python, VBA, or similar)...read more

Urgent Data Entry

I have an urgent assignment for a data entry project.

I have a list with company name, first_name, and last_name fields, 134 records in total.

First, I require that you find the title and email for each listed contact.

Secondly, find websit...read more