I recently spoke with a resource-limited organization that is investigating government corruption and wants to access various public datasets to monitor politicians and law firms. They don’t have developers in-house, but feel pretty comfortable analyzing datasets in CSV form. While many public data sources are available in structured form, some sources are hidden in what we data folks call the deep web. Amazon is a nice example of a deep website: you have to enter text into a search box, click on a few buttons to narrow down your results, and finally access relatively structured data (prices, model numbers, etc.) embedded in HTML. Amazon has a structured database of its products somewhere, but all you get to see is a bunch of webpages trapped behind some forms.
A developer usually isn’t hindered by the deep web. If we want the data on a webpage, we can automate form submissions and key presses, and we can parse some ugly HTML before emitting reasonably structured CSVs or JSON. But what can one accomplish without writing code?
This turns out to be a hard problem. Lots of companies have tried, with varying degrees of success, to build a programmer-free interface for structured web data extraction. I had the pleasure of working on one such project, Needlebase, at ITA before Google acquired it and shut it down. David Huynh, my wonderful colleague from grad school, prototyped a tool called Sifter that did most of what one would need, but like all good research from 2006, the lasting impact is his paper rather than his software artifact.
Below, I’ve compiled a list of some available tools. The list comes from memory, the advice of some friends that have done this before, and, most productively, a question on Twitter that Hilary Mason was nice enough to retweet.
The bad news is that none of the tools I tested would work out of the box for the specific use case I was testing. To understand why, I’ll break down the steps required for a working web scraper, and then use those steps to explain where various solutions broke down.
The anatomy of a web scraper
There are three steps to a structured extraction pipeline:
Authenticate yourself. This might require logging in to a website or filling out a CAPTCHA to prove you’re not…a web scraper. Because the source I wanted to scrape required filling out a CAPTCHA, all of the automated tools I’ll review below failed step 1. This suggests that, as a low bar, good scrapers should facilitate a human in the loop: automate the things machines are good at automating, and fall back to a human to perform authentication tasks the machines can’t do on their own.
Navigate to the pages with the data. This might require entering some text into a search box (e.g., searching for a product on Amazon), or it might require clicking “next” through all of the pages that results are split over (often called pagination). Some of the tools I looked at allowed entering text into search boxes, but none of them correctly handled pagination across multiple pages of results.
Extract the data. On any page you’d like to extract content from, the scraper has to help you identify the data you’d like to extract. The cleanest example of this that I’ve seen is captured in a video for one of the tools below: the interface lets you click on some text you want to pluck out of a website, asks you to label it, and then lets you correct its mistakes as it learns how to extract the other examples on the page.
As you’ll see in a moment, the steps at the top of this list are hardest to automate.
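For readers curious what the extraction step looks like when a developer does write code, here is a toy sketch using only Python’s standard library. The HTML snippet and field names are hypothetical stand-ins for whatever a deep-web page actually serves:

```python
import csv
import io
from html.parser import HTMLParser

# Hypothetical product-listing HTML of the kind a deep-web page might serve.
PAGE = """
<div class="product"><span class="name">Widget</span><span class="price">$9.99</span></div>
<div class="product"><span class="name">Gadget</span><span class="price">$24.50</span></div>
"""

class ProductParser(HTMLParser):
    """Collects the text inside span.name and span.price elements."""
    def __init__(self):
        super().__init__()
        self.rows, self.current, self.field = [], {}, None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self.field = cls  # remember which field the next text belongs to

    def handle_data(self, data):
        if self.field:
            self.current[self.field] = data.strip()
            self.field = None
            if len(self.current) == 2:  # both fields seen: emit a row
                self.rows.append(self.current)
                self.current = {}

parser = ProductParser()
parser.feed(PAGE)

# Emit CSV, the format the organization is comfortable analyzing.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "price"])
writer.writeheader()
writer.writerows(parser.rows)
print(out.getvalue())
```

A real scraper would also fetch the pages and walk the pagination, which is exactly the part the no-code tools below struggle with.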
What are the tools?
Here are some of the tools that came highly recommended, and my experience with them. None of them passed the CAPTCHA test, so I’ll focus on their handling of navigation and extraction.
Web Scraper is a Chrome plugin that allows you to build navigable site maps and extract elements from those site maps. It would have done everything necessary in this scenario, except the source I was trying to scrape captured click events on links (I KNOW!), which tripped things up. You should give it a shot if you’d like to scrape a simpler site, and the YouTube video that comes with it helps get around the slightly confusing user interface.
Kimono was one of the most popular suggestions I got, and is quite polished. After installing the Kimono bookmarklet in your browser, you can select elements of the page you wish to extract, and provide some positive/negative examples to train the extractor. This means that unlike import.io, you don’t have to wait to get access to the extracted data. After labeling the data, you can quickly export it as CSV/JSON/a web endpoint. The tool worked seamlessly to extract a feed from the Hackernews front page, but I’d imagine that failures in the automated approach would make me wish I had access to import.io’s crowd workers. The tool would be high on my list except that navigation/pagination is coming soon, and will ultimately cost money.
Dapper, which is now owned by Yahoo!, provides about the same level of scraping capabilities as Kimono. You can extract content, but like Kimono it’s unclear how to navigate/paginate.
Google Docs was an unexpected contender. If the data you’re extracting is in an HTML table/RSS Feed/CSV file/XML document on a single webpage with no navigation/authentication, you can use one of the Import* functions in Google Docs. The IMPORTHTML function worked as advertised in a quick test.
iMacros is a tool that I imagine could solve all of the tasks I wanted, but it costs more than I was willing to pay to write this blog post. Interestingly, the free version handles the step that the other tools on this list don’t do as well: navigation. Through your browser, iMacros lets you automate filling out forms, clicking on “next” links, etc. To perform extraction, you have to pay at least $495.
A friend has used Screen-scraper in the past with good outcomes. It handles navigation as well as extraction, but costs money and requires a small amount of programming/tokenization skills.
Winautomation seems cool, but it’s only available for Windows, which was a dead end for me.
So that’s it? Nothing works?
All hope is not lost, however. Where pure automation fails, a human can step in. Several responses suggested paying people on oDesk, Mechanical Turk, or CrowdFlower to extract the content with a human touch. This would certainly get us past the CAPTCHA and hard-to-automate navigation. It might get pretty expensive to have humans copy/paste the data for extraction, however. Given that the tools above are good at extracting content from any single page, I suspect there’s room for a human-in-the-loop scraping tool to steal the show: humans can navigate and train the extraction step, and the machine can perform the extraction. I suspect that’s what import.io is up to, and I’m hopeful they keep the tool available to folks like the ones I initially tried to help.
While we’re on the topic of human-powered solutions, it might make sense to hire a developer on oDesk to just implement the scraper for the site this organization was looking at. While a lot of the developer-free tools I mentioned above look promising, there are clearly cases where paying someone for a few hours of script-building just makes sense.
- I’m personally excited about the acquisition. We’ve been working with the folks from GoDaddy for several months now, and the team is sharp and energized about helping hundreds of millions of local merchants find their home on the web.
- Locu remains Locu as a team, a set of offices, a product, and a mission. For the most part, Locu will be bringing new technology and design to the table, and GoDaddy will be bringing a level of scale that would take years to build up on our own. Locu offers a healthy dose of data structuring and crowdsourcing technology alongside the design chops to make previously complicated things simple. GoDaddy is the largest privately held company in the world that focuses on helping small businesses with their web presence, and brings years of sales and marketing experience to Locu’s products. GoDaddy also has a deep understanding of scale both in terms of the tens of millions of people they work with, and the billions of dollars of revenue they bring in.
- Aside from the business side of things, we’re still very excited to be releasing open source projects and publishing more about our approach to structured data extraction and crowd work. The open source and research communities have been so fundamental to what we do, and I’m excited we can continue to repay that debt.
- As a human being, I care a lot about the values of the company I work for. It would be wrong to ignore the fact that previous incarnations of GoDaddy have been responsible for sexist Super Bowl commercials, and have supported web-endangering efforts like SOPA. We’ve been assured that the people who were behind these efforts are no longer working at GoDaddy. In fact, an entirely new leadership team (including CEO, COO, CTO, Chief Architect, etc.) has been put in place since these controversies, and I count myself as one of the folks that expects a lot of them in the coming years.
A little delayed, but I put together a guide of advice I’ve given other students in grad school. Send feedback, or write your own!
Life update: I’ve defended my thesis and I’m now the Director of Data at Locu. This doesn’t change much on the blog, as I’ll still periodically update it with random thoughts. I’m also doing a bit of blogging on the Locu blog on topics like our technology workflow, designing for crowds, and the human side of crowdsourcing.
It’s an exciting and very different next step for me. I’m still very excited about introducing new students to data and computer science, and will keep that up as well.
We recently put together a short course on data literacy basics targeted at computer science undergraduates. Our initial motivation was selfish: as databases researchers, we didn’t have a lot of experience with an end-to-end raw-data-to-data-product pipeline. After a few trial runs of our own, we realized certain data processing patterns kept showing up, and saw that we had a small course’s worth of content on our hands. The important thing here is that even with undergraduate- and graduate-level machine learning, statistics, and database courses under our belts, we still had a lot to learn about working with honest-to-goodness dirty data.
Each module of our course could have had an entire semester dedicated to it, and so we favored basic skills with lots of hands-on experience over intellectual depth and rigor. We kept lectures to 20-30 minutes, giving students the remaining 2.5 hours to go through the labs we set up while we walked around answering questions. Lectures allowed students to know what they were in for at a high level, and the lab portion allowed them to cement those concepts with real datasets, code, and diagrams. All of the course content is available on GitHub, and as an example, here is a direct link to day 1’s lab.
The syllabus we covered was:
- Day 1: an end-to-end experience in downloading campaign contribution data from the federal election commission, cleaning it up, and programmatically displaying it using basic charts.
- Day 2: visualization/charting skills using election and county health data.
- Day 3: statistics to take the hunches they got on day 2 and quantify them, learning about t-tests and linear regression along the way.
- Day 4: text processing/summarization using the Enron email corpus.
- Day 5: MapReduce to scale up Day 4’s analysis using Elastic MapReduce on Amazon Web Services. This felt a bit forced, but the students were clamoring for distributed data processing experience.
- Day 6: the students teach us something they learned on their own datasets using techniques we’ve taught them.
While we set out to give computer science students with familiarity in Python programming a dive into data, we ended up with folks from the physical sciences, doctors, and a few social scientists who had their own datasets to answer questions about. The last day allowed them to experiment with their new skills on their own data. Attendance on this day was lower than the previous days: the majority of the folks in attendance on day 6 were on the more experienced end, and I suspect that the undergrads, who were not yet exposed to data problems of their own, didn’t find it as engaging. It would be interesting to see how to develop course content that allows self-directed data science for students who still need a bit more inspiration.
I should also say that our attempt is not the first one to bring data to the classroom. Jeff Hammerbacher and Mike Franklin at Berkeley have a wonderful semester-length course on data science. The high-level outline of the course seems similar, but they get farther into data product design, and jump into each topic in more depth. Their resources page has a nice set of links to other educational efforts worth checking out.
I consume content through many aggregators, but The New York Times (The Gray Lady) is the single source of content I go to directly at least daily to know what’s happening in the world. While it’s good for news, what sets The Times apart from other content sources is its depth of reporting. There’s one problem, though: by default, longer NYT articles do not appear in Single Page mode. This has caused me problems in the past, ranging in severity from slightly annoying (having to click Next Page) to pretty frustrating (loading articles for offline reading only to realize I only had the first page).
So I created One Gray Lady, a Greasemonkey plugin that loads all NYT content in single page mode.
To install it in Google Chrome or Firefox with the Greasemonkey plugin, click here.
I have only tested the code in Chrome, and while I did a bit of testing on various URLs, I’m sure I missed something. Feel free to send updates or suggestions!
I recently sat in on a lecture for Professor Peter Szolovits’s Biomedical Computing course. The lecture was open to a greater audience, given the prominence of the speaker. As a non-expert, I found it to be a useful look into the current state of healthcare IT and the coming legislative and technical challenges facing the industry. My notes are below.
John Glaser, Ph.D.
Formerly CIO of Partners/Brigham And Women’s Hospital
Currently CEO of Siemens Health Services
Free advice: get a healthcare proxy and power of attorney set up. Easier to do now than have someone else guess later how you want to live/die.
Why does Health IT suck?
- Not for lack of money put into the system
- Not for lack of smart people working on the problem
- Insurance companies/patients pay per volume (per birth, per surgery, etc.) almost regardless of quality
- Boards of directors are very conservative. Don’t want to be the board that made an IT decision that made a huge hospital fail.
U.S. Numbers to give context
- 60% of hospitals have <= 100 beds
- Of 500K physicians, the majority work in 2-3-doctor practices (not IT-savvy, or modestly interested in IT at best)
- 2/3 of medical decisions are heuristic/not scientific, and many have a difficult-to-verify outcome
- volatile knowledge domain: 700k academic articles have come out in the last (decade?)
- 20% of doctors are a decade away from retirement, so perhaps newer doctors will bring IT mentality with them?
- PricewaterhouseCoopers survey: 58% of (independent?) doctors considering quitting, selling practice, or joining a larger practice
- various societies are discussing requirements: to become board (re-)certified (oncology, etc.), you have to show facility in technology.
Health IT Services
- huge fragmentation: the 3rd largest health IT services company has 7% of market. if they win every open engagement from now until (?), they will have 11% of the market.
- lots of players: 300 electronic health record providers in US, 25% exit and 25% enter per year
- engagements are long: bringing up a new hospital IT system takes 2-4 years. from the moment you decide to change IT systems, you will continue to use your old one for the next 4-5 years as you transition.
Affordable Care Act (ACA)
- costs are projected to go up 26% in the next decade. ACA stipulates that govt. will compensate 12% more in the next decade: providers have to make up the difference.
- to incentivize quality care, govt. will hold on to 10% of payments until you prove treatment was effective (hard to define).
- currently, for a single procedure (e.g., total hip replacement) you might get 12 different bills (e.g., surgeon, materials, anesthesia). new system: govt. pays a single provider one bill, with a fixed amount. incentivizes a holistic view.
- risk: hospitals go out of business. potential future doctors don’t enter medicine. doctors “fire” bad patients to make their numbers look good.
- doctors in small practices joining larger networks to avoid managing the ACA requirements.
- single payment requirement will cause groups of doctors to more tightly collaborate (contractually).
- ACA is rolling out over the course of a decade.
- need to be careful, since some patients will be handled by old rules, and some by new rules. so do you not apply decision support-based treatment to patients on old rules, or just do fee-for-service? lots of mental overhead for doctors.
Fixed fee challenges
- paying a fixed amount per treatment doesn’t work for everything. Diabetes is sort of predictable, but a trauma might range from a broken toe to severe burns on 90% of body.
- (Adam’s note) perhaps large pools of insured patients will smooth over the individual spikes in cost of care.
Information Technology needs
- systems must span inpatient, outpatient, emergency care, rehab
- need revenue cycle + contract management system that handles continuum of care. this is complex: medicare + blue cross might pay diff amounts for “good” diabetes treatment, and “good” might be defined differently.
- systems should manage individuals and populations: how did all 100 people w/ respiratory problems do last month? which patients strayed from predicted path? what should have happened? why/why not?
- sophisticated business intelligence + analysis: predict who will get worse, etc.
- interoperability w/ different providers
- rules+workflow engines to ensure followups/next steps/help primary care doctors coordinate care, manage exceptions, follow up properly. also allow this in collaborative care environment w/ lots of specialists checking in and out.
- high availability + low total cost of ownership
- engage patients
New challenges for primary care physicians (PCPs)
- At the moment, PCP moves from one patient to the next every 15 minutes, sees 100s of lab results per day
- Only 25% of data from specialists comes back to a PCP within a month
- In future, PCPs will be responsible for closing the loop on specialists, tests, etc., with more accountability, but still be given just as much or more information, with similar delays. Workflow management systems are key here!
Interesting technical challenges
- filtering patient care notes: 10s of pages of patient care history. No doctor can read them all before seeing patient. how to help doctors find relevant notes across different doctors, annotations, etc.
- supporting collaboration between multiple providers
- parsing notes to remind providers. e.g., “Ask about patient’s daughter next time.”
- cleaning up conflicting medical record data: was it type 1 or type 2 diabetes? was it a heart attack, or just a test for one?
(Cross-posted on the Crowd Research Blog)
There has been a lot of excitement in the database community about crowdsourced databases. At first blush, it sounds like databases are yet another application area for crowdsourcing: if you have data in a database, a crowd can help you process it in ways that machines cannot. This view of crowd-powered databases misses the point. The real benefit of thinking of human computation as a databases problem is that it helps you manage complex crowdsourced workflows.
Many crowd-powered tasks require complicated workflows in order to be effective, as we see in algorithms like Soylent’s Find-Fix-Verify. These custom workflows require thousands of lines of code to shuttle data between services like MTurk and business logic in several languages (1000-2000 in the case of Find-Fix-Verify!). If we provide workflow developers with a set of common operators, like filters and sorts, and a declarative interface to combine those operators, such as SQL or PigLatin, we can reduce the painful crowdsourced plumbing code while focusing on a set of operators to improve as a community.
This is not an academic argument: Find-Fix-Verify can be implemented with a FOREACH-FOREACH-SORT in PigLatin, or a SELECT-SELECT-ORDERBY in SQL, resulting in several tens of lines of code. All told, we can get a two order-of-magnitude reduction in workflow code. The task at hand is thus to make the best-of-breed reusable operators for crowd-powered workflows. In our VLDB 2012 paper, we look at two such operators: Sorts and Joins.
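To make the operator argument concrete, here is a toy sketch of Find-Fix-Verify as three composable operators. The crowd_* functions stand in for real crowd tasks (posting HITs, payment, retries); here they are deterministic stubs I made up so the pipeline shape is visible:

```python
# Toy sketch of Find-Fix-Verify as composable operators. The crowd_*
# functions are deterministic stand-ins for real crowd tasks.

def crowd_find(paragraphs):
    """FIND: crowd flags paragraphs needing edits (stub: long ones)."""
    return [p for p in paragraphs if len(p.split()) > 8]

def crowd_fix(paragraph):
    """FIX: a worker proposes a shortened rewrite (stub: first 8 words)."""
    return " ".join(paragraph.split()[:8])

def crowd_verify(candidates):
    """VERIFY: the crowd votes on the best rewrite (stub: shortest)."""
    return min(candidates, key=len)

def find_fix_verify(paragraphs, n_fixes=3):
    fixed = []
    for p in crowd_find(paragraphs):                # FOREACH flagged item
        candidates = [crowd_fix(p) for _ in range(n_fixes)]
        fixed.append(crowd_verify(candidates))      # keep the verified winner
    return fixed

print(find_fix_verify(["too short",
                       "this is a rather long paragraph with many many extra words"]))
```

The workflow itself is a dozen lines precisely because all the plumbing would live inside the operators, which is the reduction a declarative system targets.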
Human-powered sorts are everywhere. When you submit a product review with a 5-star rating, you’re implicitly contributing a datapoint to a large product ranking algorithm. In addition to rating-based sorts, there are also comparison-based ones, where a user is asked to compare two or more items along some axis. For a particularly cute example of comparison-based sorting, see The Cutest, a site that identifies the cutest animals in the world by getting pairwise comparisons from heartwarmed visitors.
The two sort-input methods can be found in the image below. On the left, users compare five squares by size. On the right, users rate each square on a scale from one to seven by size after seeing 10 random examples.
In our paper, we show that comparisons provide accurate rankings, but are expensive: they require a number of comparisons quadratic in the number of items being compared. Rating is quite accurate, and cheaper than comparisons: it’s linear in the number of items rated. We also propose a hybrid of the two that balances cost and accuracy, where we first rate all items, and then compare items with similar ratings.
These techniques can reduce the cost of sorting a list of items by 2-10x. Human-powered sorts are valuable for a variety of tasks. Want to know which animals are most dangerous? From least to most dangerous, a crowd of Turkers said:
flower, ant, grasshopper, rock, bee, turkey, dolphin, parrot, baboon, rat, Tasmanian devil, lemur, camel, octopus, dog, eagle, elephant seal, skunk, hippo, hyena, great white shark, moose, komodo dragon, wolf, tiger, whale, panther
The different sort implementations highlight another benefit of declaratively defined workflows. A system like Qurk can take user constraints into account (linear costs? quadratic costs? something in between?) and identify a comparison-, rating-, or hybrid-based sort implementation to meet their needs.
Human-powered Joins are equally pervasive. The area of Entity Resolution has captured the attention of researchers and practitioners for decades. In the space of finance, is IBM the same as International Business Machines? Intelligence analysis runs into a combinatorial explosion in the number of ways to say Muammar Muhammad Abu Minyar al-Gaddafi's name. And most importantly, how can I tell if Justin Timberlake is the person in the image I'm looking at?
We explored three interfaces for solving the celebrity matching problem (and more broadly, the human-powered entity resolution problem). The first is a simple join interface, asking users if the same celebrity is displayed in two images. The second employs batching, asking Turkers to match several pairs of celebrity images. The third interface employs more complex batching by asking Turkers to match celebrities arrayed in two columns.
As we batch more pairs to match per task, cost goes down, but so does Turker accuracy. Still, we found that we can achieve around a 10x cost reduction without significantly losing in result quality. We can achieve even more savings by having workers identify features of the celebrities, so that we don’t, for example, try to match up males with females.
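The feature-based savings work like classic blocking in entity resolution; here is a toy sketch (with made-up photo IDs and stubbed crowd answers), where a cheap feature question per item prunes the pairs sent to the expensive “same person?” task:

```python
from itertools import product

def blocked_join(left, right, feature, match):
    """Entity-resolution join sketch: ask a cheap feature question per
    item, then pose the expensive 'same entity?' question only for
    pairs that agree on the feature."""
    candidates = [(a, b) for a, b in product(left, right)
                  if feature(a) == feature(b)]   # blocking step
    matches = [(a, b) for a, b in candidates if match(a, b)]
    return matches, len(candidates)

# Hypothetical celebrity photos tagged (photo_id, crowd-labeled gender).
left = [("tom_1", "m"), ("ann_1", "f")]
right = [("tom_2", "m"), ("ann_2", "f"), ("bob_2", "m")]

matches, asked = blocked_join(
    left, right,
    feature=lambda item: item[1],                                 # cheap task
    match=lambda a, b: a[0].split("_")[0] == b[0].split("_")[0],  # stub answer
)
print(matches, asked)  # only 3 match questions asked instead of 6
```

The blocking question is asked once per item (linear), while the match question is asked per surviving pair, so the savings grow with the table sizes.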
We’re Not Done Yet
We now have insight into how to effectively design two important human-powered operators, sorts and joins. There are two directions to go from here: bring in learning models, and design more reusable operators.
Our paper shows how to achieve more than order-of-magnitude cost reductions in join and sort costs, but this is often not enough. To further reduce costs while maintaining accuracy, we’re looking at training machine learning classifiers to perform simple join and sort tasks, like determining that Cambridge Brewing Co. is likely the same as Cambridge Brewing Company. We’ll still need humans to handle the really tricky work, like figuring out which of the phone numbers for the brewing company is the right one.
Sorts and joins aren’t the only reusable operators we can implement. Next up: human-powered aggregates. In groups, humans are surprisingly accurate at estimating quantities (jelly beans in a jar, anyone?). We’re building an operator that takes advantage of this ability to count with a crowd.
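A counting operator’s aggregation step can be sketched in a couple of lines; using the median rather than the mean is one standard way to blunt the occasional wild guess (the numbers below are invented):

```python
from statistics import median

def crowd_count(estimates):
    """Aggregate independent crowd guesses of a quantity. The median is
    robust to the occasional wild guess, a common choice for
    wisdom-of-the-crowd estimation."""
    return median(estimates)

# Hypothetical guesses for jelly beans in a jar.
guesses = [450, 520, 480, 3000, 470, 510]
print(crowd_count(guesses))  # the 3000 outlier barely moves the answer
```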
Over the past month, a petition has been circulating asking the Obama administration to bring graduate student stipends back to their pre-1986 tax-exempt status. I urge you to not sign this petition, as it is misguided and damaging to our image. If you believe graduate student researchers are more valuable than their compensation, then demand more compensation, not a tax loophole.
First, the caveat: I can only speak for the STEM fields. In these fields, a combination of government, corporate, and university grants support research-track students in the lab and classroom. This compensation usually comes in the form of full tuition coverage and a stipend in the range of $1500-$2500 per month, and sometimes includes health coverage.
Our stipends put our yearly income at $18,000-$30,000/year. Compare this to a poverty threshold of $18,530 for a family of three, or $29,990 for a family of six. In computer science, you can double your income with a summer internship, placing you above the median 2009 household income. At first glance, it seems like we are reasonably compensated before we take into account the education, advising, networking, and travel opportunities our life decision has earned us.
Of course, the argument in the petition is more nuanced than one of unreasonable taxation. The petition speaks to the value of our “innovative, cutting-edge thinking” relative to “bankers, lobbyists, or hedge-fund managers.” The comparison is certainly timely, but sweeps under the rug other valuable fields, like Nursing or Carpentry. Both of these fields earn more than the median graduate student in STEM, but optimistically, we are in a position of higher upward mobility once we graduate.
Perhaps a better comparison is what we could earn if we had not chosen graduate studies. With a B.S. in Computer Science, my undergraduate colleagues at large technology firms and startups are earning 3-5x what I earn through my stipend. Am I more valuable as a researcher than I would be in their shoes? This seems like a good conversation to have.
This is a discussion of relative value. In the absolute sense, graduate students in STEM are not poor, and should pay taxes in whatever tax bracket we fall. Perhaps we’re not compensated enough for what we provide to society. I would like to believe that STEM’s contribution to social and economic development is significant. If we’re seeing a dearth of STEM researchers and our value to society is high, the market failure should be supplemented by the government. Not in the form of yet another tax break, but as an increase in the number of stipends or the amount of compensation distributed per researcher.
STEM is under attack. We should elevate its image by discussing how valuable our work is, not by asking for pity. Demand what you are worth, but remember how lucky you are.
There is little I like more than a fine cheese and fresh-baked bread. Still, to fill the rest of my day without expanding my waistline, I go for a mix of databases and human-computer interaction. That’s why I was excited to see several database-oriented papers presented at CHI. While many papers contained some amount of data, I’ll stick to the three that are unquestionably of interest to the databases community.
The first paper was for the social scientist in all of us. Amy Voida, Ellie Harmon, and Ban Al-Ani presented Homebrew Databases: Complexities of Everyday Information Management in Nonprofit Organizations. Nonprofits are arguably some of the most difficult database users to design for. They have minimal resources, rarely employ full-time technical staff, and solve non-core problems as they show up. This practice leads to homebrew, just-functional-enough solutions to many data management problems. The authors provide an interesting qualitative study of how nonprofits manage volunteer demographic and contact information. They provide descriptions of the homebrewed, often fractured collections of data stored in several locations. Reading this paper, I couldn’t help but think of how perfectly these homebrewed databases resembled Franklin, Halevy, and Maier’s dataspaces.
Sean Kandel presented Wrangler, a project he’s been working on with Andreas Paepcke, Joe Hellerstein, and Jeff Heer. Wrangler lets users specify transformations on datasets by example. Each time a user shows Wrangler how to modify a record (or line of unstructured text), Wrangler updates its rank-ordered list of potential transformations that could have led to this modification. Wrangler borrows concepts such as interactive transformation languages from Vijayshankar Raman and Joe Hellerstein’s Potter’s Wheel. Its interface has a taste of David Huynh and Stefano Mazzocchi’s Refine as well as Huynh’s Potluck. Wrangler’s novelty comes in combining the interfaces and transformation languages with an inference and ranking engine. Since Wrangler is hosted, it is also capable of learning which transformations users prefer and improving its rankings over time!
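The inference idea behind transformation-by-example can be sketched very simply; this toy version (candidate transforms and names are mine, not Wrangler’s) keeps only the candidate rules consistent with the user’s one demonstrated edit:

```python
# Toy sketch of transform-by-example inference: given one demonstrated
# edit, keep the candidate transformations that reproduce it. The
# candidate list (ordered roughly simplest-first) is made up for
# illustration; a real system generates candidates from a language
# of transforms and re-ranks them as more examples arrive.

CANDIDATES = [
    ("strip whitespace", lambda s: s.strip()),
    ("uppercase", lambda s: s.upper()),
    ("drop trailing comma", lambda s: s.rstrip(",")),
    ("strip + drop comma", lambda s: s.strip().rstrip(",")),
]

def consistent_transforms(before, after):
    """Return the names of candidate transforms that turn the user's
    example `before` into `after`, simplest first."""
    return [name for name, fn in CANDIDATES if fn(before) == after]

# The user demonstrates one edit; the system infers which rules explain it.
print(consistent_transforms("  Boston,  ", "Boston"))
```

Each additional labeled example further prunes and re-ranks the surviving candidates, which is where the interactive loop earns its keep.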
The last slot goes to our own Eirik Bakke, who presented Related Worksheets along with David Karger and Rob Miller. Related Worksheets makes foreign key references a first-class citizen in the world of spreadsheets. Just as spreadsheets secretly made every office worker capable of maintaining a single-user, single-table relational database, Eirik has secretly enabled those workers to make references between spreadsheets without having to program. While adding foreign key references to a spreadsheet requires only a simple user interface modification, its implications for how to display multi-valued cells in the spreadsheet are significant. Read the paper to see Eirik’s hierarchical solution to this problem!
Keep it up, data nerds! Soon we’ll be able to start a data community at CHI!