Remote Web Development Job In IT And Programming

Backend developer needed to build a custom data scraper & parser service & API


The goal of this project is to ingest, sanitize, and structure data from LinkedIn in order to receive a continuous stream of updated profiles that meet specific criteria. The objective is both to obtain an initial dataset in a machine-readable format (CSV, XML, etc.) and to provide updated versions whenever a change is found.

Preliminary research into data access has shown that some information is available via:
The LinkedIn API
Query parameters in the URL that can be reverse-engineered to map to different values
Structured data accessible via a profile page's markup (see the sketch after this list)
Unstructured information
Different search products offered by LinkedIn (the free product, Recruiter, etc.).
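
Where profile markup exposes structured data, it is often schema.org JSON-LD. A minimal extraction sketch, assuming the crawled pages actually embed such blocks; BeautifulSoup is our choice of parser here, not a requirement:

```python
import json

from bs4 import BeautifulSoup

def extract_json_ld(html: str) -> list:
    """Collect any schema.org JSON-LD blobs embedded in a page's markup."""
    soup = BeautifulSoup(html, "html.parser")
    blobs = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blobs.append(json.loads(tag.string or ""))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the whole crawl
    return blobs
```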

We're looking to run searches for specific values (no current employment, number of years tenured in the most recent position, etc.), keywords (practice area(s) selected from a specified list, etc.), and compound queries (attendance at a Top 50-ranked law school, Tier 1 law schools, etc.).

The types of queries we're looking to track for changes are similar to:
When an individual from a specified list of companies leaves their current employer and changes to a new employment status that meets specific criteria.
When someone we currently track in our CRM switches jobs.
Etc.

We will be developing a separate rules engine that will provide the parameters, bounds, and frequency of updates for any given ongoing search. While the structure of these requests is not yet defined, the tool should be built so that a query can be carried out based on a set of parameters (JSON, etc.).
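
As a sketch of what such a parameter set might look like once the rules engine defines it, here is a hypothetical JSON payload mapped onto a Python dataclass; every field name below is illustrative, not the final schema:

```python
from dataclasses import dataclass, field

@dataclass
class SearchQuery:
    # All field names are illustrative; the real schema will come from the rules engine.
    employment_status: str | None = None   # e.g. "no current employment"
    min_years_in_role: int | None = None   # tenure in most recent position
    practice_areas: list[str] = field(default_factory=list)
    school_tiers: list[str] = field(default_factory=list)
    refresh: str = "weekly"                # update frequency set by the rules engine

def parse_query(payload: dict) -> SearchQuery:
    return SearchQuery(**payload)

# Example payload as it might arrive from the rules engine:
query = parse_query({
    "employment_status": "no current employment",
    "practice_areas": ["M&A", "Securities"],
})
```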

The tool should be able to search, in aggregate, on behalf of specific employees who have granted it access to their accounts.
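
One hedged way to model this is a per-employee token store, with each search issued under the granting employee's OAuth credentials; the token names and the endpoint below are placeholders, not real LinkedIn API details:

```python
import requests

# Hypothetical per-employee OAuth access tokens, stored after each employee
# authorizes the tool; the endpoint below is a placeholder, not a real LinkedIn URL.
EMPLOYEE_TOKENS = {"employee_a": "token-a", "employee_b": "token-b"}

def search_as(employee: str, params: dict) -> dict:
    response = requests.get(
        "https://api.example.com/search",  # placeholder endpoint
        params=params,
        headers={"Authorization": f"Bearer {EMPLOYEE_TOKENS[employee]}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```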

Results and updates should be delivered every week. This check could be done by polling only for deltas, by comparing two entire data sets, etc. Each result should carry a unique identifier and a flag signaling whether it is new or an update.

Each batch of results should also have relevant timestamps to reflect both the time of change and the time the change was crawled.
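
A sketch of how the weekly delta, flags, unique identifiers, and both timestamps could fit together, assuming profiles are keyed by a stable id and changes are detected by content hashing (one option among those listed above):

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(profile: dict) -> str:
    """Stable content hash used to detect changes between weekly crawls."""
    return hashlib.sha256(json.dumps(profile, sort_keys=True).encode()).hexdigest()

def weekly_delta(previous: dict, current: dict) -> list:
    """previous: id -> fingerprint from last run; current: id -> crawled profile."""
    crawled_at = datetime.now(timezone.utc).isoformat()
    delta = []
    for pid, profile in current.items():
        fp = fingerprint(profile)
        if pid not in previous:
            flag = "new"
        elif previous[pid] != fp:
            flag = "update"
        else:
            continue  # unchanged records are excluded from the weekly batch
        delta.append({
            "id": pid,                                   # unique identifier
            "flag": flag,                                # new vs. update
            "changed_at": profile.get("last_modified"),  # if the source exposes it
            "crawled_at": crawled_at,                    # when we observed the change
        })
    return delta
```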

As an output, we will need all of the data in a single file in a machine-readable format (CSV, XML, etc.).
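
Continuing the sketch above, flattening a weekly batch into one CSV file could look like this, using Python's standard csv module; the column set mirrors the delta records:

```python
import csv

def write_batch(delta: list, path: str = "weekly_results.csv") -> None:
    """Write one weekly batch to a single machine-readable file."""
    fields = ["id", "flag", "changed_at", "crawled_at"]
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(delta)
```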

Additionally, providing the data via a documented API that delivers the output in XML/JSON could be part of the initial scope or another phase.
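
If the API phase goes ahead, a minimal JSON endpoint, sketched here with Flask purely as an illustration, could serve the same batch:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder in-memory store; a real service would read the most recent
# weekly batch from a database or the generated file.
LATEST_BATCH = []

@app.get("/results/latest")
def latest_results():
    """Serve the latest weekly delta as JSON."""
    return jsonify(LATEST_BATCH)

if __name__ == "__main__":
    app.run()
```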
About the recruiter
Member since May 20, 2018
Aditya Maulana
from Central Serbia, Serbia

Skills & Expertise Required

Data Scraping, Web Scraping

Candidate shortlisted and hired. Hiring open till Jan 1, 2021

Work from Anywhere

40 hrs / week

Type: Hourly

Remote Job

Cost: $12.51

