
Backend developer needed to build a custom data scraper, parser service & API (Data Scraping / Web Scraping freelancer required)

Posted on May 18, 2019


The goal of this project is to ingest, sanitize, and structure data from LinkedIn so that we receive a continuous stream of updated profiles that meet specific criteria. The objective is twofold: obtain an initial dataset in a machine-readable format (CSV, XML, etc.), and then provide updated versions whenever a change is found.

Preliminary research into data access has shown that some information is available via:
- The LinkedIn API
- Query parameters in the URL that can be reverse-engineered to map to different values
- Structured data accessible via a profile page's markup (see the sketch after this list)
- Unstructured information
- The different search products offered by LinkedIn (the free product, Recruiter, etc.)
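
As an illustration of the structured-markup path, here is a minimal sketch of pulling structured data out of a saved profile page. It assumes the page embeds JSON-LD in script tags, which is an assumption about the markup rather than a documented guarantee, and the file name profile.html is hypothetical:

```python
import json
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical input: a locally saved copy of a profile page.
with open("profile.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

# Collect any JSON-LD blocks embedded in the page's markup.
records = []
for tag in soup.find_all("script", type="application/ld+json"):
    try:
        records.append(json.loads(tag.string or ""))
    except json.JSONDecodeError:
        continue  # skip malformed blocks rather than failing the crawl

print(f"Found {len(records)} structured-data block(s)")
```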

We're looking to run searches on specific values (no current employment, number of years tenured in the most recent position, etc.), keywords (practice area(s) selected from a specified list, etc.), and compound queries (attendance at a Top 50-ranked law school, "Tier 1" law schools, etc.).
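
As a hedged sketch, these three query shapes could be expressed as structured criteria along the following lines. Every field name below (no_current_employment, min_years_in_last_position, practice_areas, law_school_rank) is an illustrative placeholder, not an agreed schema:

```python
import json

# Illustrative only: field names are placeholders, not an agreed schema.
search_criteria = {
    "values": {
        "no_current_employment": True,
        "min_years_in_last_position": 3,   # hypothetical threshold
    },
    "keywords": {
        "practice_areas": ["securities", "antitrust"],  # from a specified list
    },
    "compound": {
        "law_school_rank": {"max": 50},    # e.g. Top 50 / "Tier 1" schools
    },
}

print(json.dumps(search_criteria, indent=2))
```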

The types of queries we're looking to track for a change are similar to:
- When an individual from a specified list of companies leaves their current employer and changes to a new employment status that meets specific criteria.
- When someone we currently track in our CRM switches jobs.
- Etc.

We will be developing a separate rules engine that will provide the parameters, bounds, and frequency of updates for any given ongoing search. While the structure of these requests is not yet defined, the tool should be built so that a query can be carried out based on a set of parameters (JSON, etc.).
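
Since the rules engine is not yet defined, the following is only a sketch, assuming a JSON request envelope that wraps a set of search parameters together with bounds and an update frequency; every field name here is a placeholder rather than an agreed interface:

```python
import json

# Hypothetical request envelope from the (not-yet-built) rules engine.
search_request = {
    "search_id": "law-top50-2019-05",      # placeholder identifier
    "parameters": {
        "no_current_employment": True,
        "law_school_rank": {"max": 50},
    },
    "bounds": {"max_results": 500},        # illustrative bound
    "update_frequency": "weekly",          # matches the weekly delivery below
}

print(json.dumps(search_request, indent=2))
```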

The tool should be able to search, in aggregate, on behalf of specific employees who have granted it access to their accounts.

Results and updates should be delivered every week. The check could be done by polling only for deltas, by comparing two entire data sets, etc. Each record should carry a unique identifier, plus a flag signaling whether the result is new or an update.

Each batch of results should also have relevant timestamps to reflect both the time of change and the time the change was crawled.
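
A minimal sketch of the full-dataset comparison option, assuming each profile record carries a stable unique identifier; the record shape and the profile_id field are assumptions for illustration:

```python
from datetime import datetime, timezone

def diff_snapshots(previous: dict, current: dict) -> list[dict]:
    """Compare two full snapshots keyed by a unique profile identifier
    and flag each delivered record as 'new' or 'update'."""
    crawled_at = datetime.now(timezone.utc).isoformat()
    results = []
    for profile_id, record in current.items():
        old = previous.get(profile_id)
        if old == record:
            continue  # unchanged: excluded from the weekly delivery
        entry = dict(record)                 # copy the raw profile fields
        entry["profile_id"] = profile_id     # unique identifier
        entry["change_flag"] = "new" if old is None else "update"
        entry["crawled_at"] = crawled_at     # time the change was crawled
        # The time of change ('changed_at') is assumed to arrive inside the
        # record itself whenever the source exposes it.
        results.append(entry)
    return results
```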

As an output, we will need all of the data in a single file in a machine-readable format (CSV, XML, etc.).
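
For the single-file output, a short sketch using CSV as the machine-readable format; the column set is illustrative, not an agreed export schema:

```python
import csv

def write_batch(results: list[dict], path: str = "weekly_results.csv") -> None:
    """Write one weekly batch of flagged results to a single CSV file."""
    columns = ["profile_id", "change_flag", "changed_at", "crawled_at"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=columns, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(results)
```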

Additionally, providing the data via a documented API that delivers the output in XML/JSON could be part of the initial scope or another phase.
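
If the API phase goes ahead, a minimal sketch of a JSON endpoint follows; Flask is chosen here purely for illustration, and the route and payload shape are assumptions:

```python
from flask import Flask, jsonify  # third-party: pip install flask

app = Flask(__name__)

# Hypothetical endpoint serving the latest weekly batch as JSON.
@app.route("/results/latest")
def latest_results():
    batch = [
        {"profile_id": "abc123", "change_flag": "update"},  # placeholder data
    ]
    return jsonify(batch=batch, format="json")

if __name__ == "__main__":
    app.run()  # serves on http://127.0.0.1:5000 by default
```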

About the recruiter: Aditya Maulana, from Central Serbia, Serbia. Member since May 20, 2018.

Skills & Expertise Required

Data Scraping, Web Scraping

Candidate shortlisted and hired. Hiring open till Mar 13, 2020.

Work from Anywhere
40 hrs / week
Type: Hourly
Remote Job
Cost: $12.52


