My Review of Hadoop Summit 2011 #hadoopsummit

I woke up early and cheery Wednesday morning to attend the 2011 Hadoop Summit in Santa Clara, after a long drive from Los Angeles and the Big Data Camp that lasted until 10pm the night before. Having been to Hadoop Summit 2010, I was interested to see how much of the content in the conference had changed.

This year, there were approximately 1,600 participants and the summit moved a few feet away, to the Convention Center rather than the Hyatt. Even so, space and seating were pretty cramped. That just goes to show how much the Hadoop field has grown in just one year.

Keynotes

We first heard a series of keynote speeches, which I will summarize. The first keynote was from Jay Rossiter, SVP of the Cloud Platform Group at Yahoo. He introduced how Hadoop is used at Yahoo, which is fitting since they organized the event. The content of his presentation was very similar to last year’s. One interesting application of Hadoop at Yahoo was “retiling” the map of the United States; I imagine this refers to the change in aerial imagery over time. When performed by hand, retiling took 6 weeks; with Hadoop, it took 5 days. Yahoo also uses Hadoop for fraud detection, spam detection, search assist, geotagging data/local indexing, ad targeting, predicting supply and demand, and the aggregation and categorization of news stories. Jay also mentioned that Dapper runs models on data with Hadoop for ad personalization, and that Big Data conferences all over the country are selling out.

Eric Baldeschwieler, the CEO of Hortonworks, was next. Hortonworks is a new company spun off from Yahoo. Its goal is to provide commercial support and a full Apache Hadoop platform for users. Yes, they are very similar to Cloudera, and yes, they are competition. (Hortonworks and MapR both did a good job of not stepping on anyone’s toes in how they presented themselves.) Cloudera provides its own distribution of Hadoop, which is of course similar to the Apache version. Hortonworks’ goal is to provide similar services, but with more transparency, by using the Apache Hadoop distribution rather than wrapping its own. Paraphrasing Eric, Hortonworks is open source from the ground up. A bit later, Sanjay Radia, also of Hortonworks, discussed Hadoop for the enterprise. Hortonworks has contributed, or is working on, security (preventing users from deleting others’ data), service-level agreements (SLAs), predictability, and a fair-share scheduler.

Anant Jhingran, CTO of IBM, discussed how Hadoop was used in IBM Watson. It seemed pretty obvious that Hadoop or some form of map-reduce was used in the system, but it did not seem to be highly publicized. Watson learned from 200 million pages of data, about 2-5 TB, and required between 3000 and 4000 watts. Anant went quickly through a cool user interface representing a Jeopardy board and stated that the user interface to an artificial intelligence application is just as important as the application itself. He also prefers the term IA (intelligence augmentation) over AI, and apparently this is a common distinction. To me, AI vs. IA maps to artificial intelligence vs. knowledge discovery (data mining).

Karthik Ranganathan from Facebook discussed Facebook’s messaging system, which was built on HBase, HDFS and MapReduce. Facebook sees 15 billion messages per month, excluding SMS and email, approximately 14 TB of data! There are also 120 billion chat messages (25 TB), for a grand total of almost 300 TB per month. (I may have missed something, as these numbers do not add up.) Facebook uses HBase for the bodies of small messages, metadata, and the search index. Facebook uses HBase because of its high write throughput and easy horizontal scalability. Facebook uses another system called Haystack for photos, bodies of large messages and attachments. Of course, HDFS is used for fault tolerance, scalability, checksums for data integrity and its MapReduce abilities. Profiles and services are partitioned by user. Each machine has 16 cores, twelve 1 TB hard disks, and 48 GB of RAM (24 GB used for HBase). Some things Facebook would like to contribute and improve: NameNode high availability and a second NameNode, better performance overall, and using flash memory to improve performance. Facebook often adds several spare columns to a table up front so that operations does not need to take servers offline later to add new columns.

Big Data conferences all over the country are selling out.

Breakout Sessions

There were so many great sessions and I can only summarize the ones I attended. Check out the event agenda for abstracts on all sessions.

First I attended Web Crawl Cache – Using HBase to Manage a Copy of the Web. In this talk, we learned about Yahoo’s Web Crawl Cache (WCC), which collects and organizes crawl data from Microsoft as a result of the search deal. These snapshots of the web are not only useful for search, but also for drilling into other avenues such as local assets, influence and language corpora. WCC uses HBase for several reasons: bulk loading, efficient MapReduce jobs, random-access reads, a usable consistency model, and the ease of dynamically adding columns (this seems to contradict Karthik’s claim).
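As a toy illustration of what “dynamically adding a column” means in HBase (and of the column vs. column-family distinction raised in the comments below), here is a minimal sketch using the HBase client API of that era; the table name, column family, row key and values are all hypothetical:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HTable, Put}
import org.apache.hadoop.hbase.util.Bytes

object DynamicColumnSketch {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    // Hypothetical table "wcc" with an existing column family "page".
    // Column families are fixed in the table schema; individual columns are not.
    val table = new HTable(conf, "wcc")

    val put = new Put(Bytes.toBytes("com.example/index.html"))
    // Writing to a never-before-seen column qualifier simply creates it for
    // this row: no ALTER TABLE, no downtime.
    put.add(Bytes.toBytes("page"), Bytes.toBytes("language"), Bytes.toBytes("en"))
    table.put(put)
    table.close()
  }
}
```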

It was very difficult to pick a session for the 1:45 to 2:15 time slot. Options included Next Generation Hadoop, Scaling out Realtime Data (Facebook) and Building Kafka (LinkedIn). I admire the work and clout that LinkedIn has built over the past year or two, so I attended Jay Kreps’s session. LinkedIn’s data pipeline includes a lot of tracking, logging, metrics, messages and queuing. LinkedIn tried existing messaging systems such as JMS implementations and RabbitMQ. Streaming data is prevalent at LinkedIn: search trends, click trends, invitation social networks, etc. Kafka is LinkedIn’s solution for a distributed message queue; rather than polling for data, consumers subscribe to a data stream and data sources publish to it. Kafka is 7,000 lines of Scala, a functional and object-oriented language on top of the Java Virtual Machine (JVM). Kafka can produce about 250,000 messages per second (50 MB/s) and consume 550,000 messages per second (110 MB/s).
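The publish/subscribe model is easiest to see in code. Here is a minimal producer sketch using today’s Kafka Java client (which post-dates the 2011 Scala API described in the talk); the broker address, topic name and message contents are made up:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092") // hypothetical broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // A data source publishes to a named stream (topic); consumers subscribe
    // to the topic and pull messages at their own pace instead of being polled.
    producer.send(new ProducerRecord[String, String]("page-views", "member-123", "viewed /jobs"))
    producer.close()
  }
}
```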

Next I attended another talk by Hortonworks, this time on HCatalog. HCatalog changes the way we think about data in HDFS. No longer do we need to worry about files and directories. Instead, HCatalog seems to add a layer of abstraction on top of HDFS that treats data as a set of tables. Tools such as Pig and Hive use this layer of abstraction, and currently Hive is tightly integrated with HCatalog. Hortonworks intends to add support for HBase and Streaming later this year.

I waited all day to see Matei Zaharia’s talk on Spark. Zaharia is a graduate student at UC Berkeley, and it was a nice change of pace to see a student present some work. Spark is a data processing platform that sits on top of the Mesos cluster management project (also produced at Berkeley). Mesos can handle tens of thousands of nodes and hundreds of concurrent jobs, and can isolate them in Linux containers (e.g. OpenVZ). Spark aims to extend MapReduce for iterative algorithms and interactive, low-latency data mining. One major difference between MapReduce and Spark is that MapReduce is acyclic: data flows in from a stable source, is processed, and flows out to a stable filesystem. Spark allows iterative computation on the same data, which would form a cycle if jobs were visualized. The Resilient Distributed Dataset (RDD) serves as an abstraction over raw data, and some data is kept in memory and cached for later use. This last point is very important; Spark allows data to be kept in RAM for an approximate 20x speedup over disk-based MapReduce. RDDs are immutable and created through parallel transformations such as map, filter, groupBy and reduce. RDD immutability is similar to immutable types in functional programming languages: it does not mean that the dataset cannot change, but that a change produces a new copy of the dataset. The user can also perform actions on RDDs such as count, collect, etc. Some applications using Spark are traffic prediction (Berkeley), spam classification (Twitter), k-means, alternating least squares matrix factorization, and network simulation.
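To make the RDD idea concrete, here is a minimal sketch in Spark’s Scala API (using today’s package names, which differ from the 2011 research release; the input path is hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RDDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-sketch").setMaster("local[*]"))

    // Transformations (map, filter, ...) build new immutable RDDs lazily.
    val lines  = sc.textFile("hdfs:///logs/clicks.log")     // hypothetical input
    val errors = lines.filter(_.contains("ERROR")).cache()  // ask Spark to keep this RDD in RAM

    // Actions (count, collect, ...) actually run the computation.
    println(errors.count())                                 // first pass reads from disk and caches
    println(errors.filter(_.contains("timeout")).count())   // iterative reuse hits the in-memory copy
    sc.stop()
  }
}
```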

The main takeaway from Hadoop Summit 2010 was Cascalog. I predict the main takeaway from Hadoop Summit 2011 is Spark.

One time at work I had a bizarre issue with corrupted data in HDFS, and after that I began blaming everything on HDFS. So the next session, Data Integrity and Availability of HDFS, was enlightening. HDFS takes good care of Yahoo’s data, and Yahoo knows what it is talking about: if HDFS breaks, Yahoo begins losing money. Yahoo’s goal is to have 60 PB online all the time. The key to HDFS reliability is replication. A replication factor of 3 (three copies of every block, and thus of every file) is appropriate. A replication factor of 2 is also quite robust, but should only be used when there is a backup of the data, because the probability of data loss is much higher. Yahoo has had issues with losing blocks (blocks are pieces of data, so lost blocks = data loss). There are a variety of reasons, and most of them had nothing to do with HDFS. One cause of lost blocks is a bug in a Hadoop component like Pig, particularly a new version. In one incident, a new version of Pig opened a lot of files without closing them, and created a lot of abandoned files. In the speaker’s anecdotal case study, none of the incidents of data loss were caused by HDFS proper. Other causes of data loss encountered were exhausting disk space, users hammering HDFS, and “other.” The speaker noted that NameNode high availability (a hot topic) would have helped in only 8 of the 36 incidents studied. Some ways of preventing data loss include resource allocation, selecting good tenants of a cluster, and fixing hardware errors quickly.
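For reference, the replication factor is a per-file setting in HDFS. Here is a minimal sketch with the Hadoop FileSystem API, assuming a hypothetical file path and a cluster configuration on the classpath:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ReplicationSketch {
  def main(args: Array[String]): Unit = {
    val fs   = FileSystem.get(new Configuration())       // picks up core-site.xml / hdfs-site.xml
    val path = new Path("/data/events/2011-06-29.log")   // hypothetical file

    fs.setReplication(path, 3.toShort)                   // keep 3 copies of every block of this file
    println(s"replication = ${fs.getFileStatus(path).getReplication}")
  }
}
```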

If your job isn’t running, it’s not likely caused by HDFS.

Bill Graham of CBS Interactive gave an interesting talk about using Hadoop to build a graph of users and content. Surprisingly, CBSi has quite a large arsenal of MapReduce-enabled technologies: Chukwa, Pig, Hive, HBase, Cascading, Sqoop and Oozie. CBSi uses only 100 nodes with 500 TB of disk space for processing data associated with 235 million uniques (roughly, individuals). Mapping users to content should be easy, right? Well, some users have multiple identities, including anonymous identities. The goal is to create a holistic graph that “matches” all of the identities efficiently, for uses such as ad targeting. CBSi’s needs in a Hadoop platform: rapid experimentation and data mining, and powering new site features and ad optimization. The main vehicle for representing the data is RDF triples processed with Pig, which allows for a kind of graph-based join, so to speak. CBSi hopes to add Oozie, Azkaban, HCatalog and Hama (graph processing) to its arsenal.
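As a toy illustration of the identity-matching idea (plain Scala on made-up data, not CBSi’s actual Pig/RDF pipeline), anonymous cookie IDs can be joined to registered users so that their activity folds into a single profile:

```scala
object IdentityJoinSketch {
  def main(args: Array[String]): Unit = {
    // (cookieId, url) click events from anonymous sessions (made up)
    val clicks = List(("cookie-42", "/news/tech"), ("cookie-42", "/reviews/tv"), ("cookie-7", "/sports"))
    // cookieId -> registeredUserId edges discovered when someone logs in (made up)
    val matches = Map("cookie-42" -> "user-1001")

    // Attribute each click to the registered identity when a match exists,
    // otherwise keep it under the anonymous identity.
    val unified = clicks
      .groupBy { case (cookie, _) => matches.getOrElse(cookie, cookie) }
      .map { case (id, events) => id -> events.map(_._2) }

    unified.foreach { case (id, urls) => println(s"$id -> ${urls.mkString(", ")}") }
  }
}
```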

MapR was a very prominent sponsor of Hadoop Summit. M. C. Srivas presented a technical discussion of MapR’s capabilities and how it differs from Apache Hadoop. MapR is a full distribution of Hadoop and is 100% compatible with the Apache distribution and projects such as Pig and Oozie. MapR is fast and boasts high availability by rethinking the NameNode. The NameNode is a bottleneck since 60% of file operations are metadata operations, and its limitations cap the size of a cluster. To resolve some of these problems, MapR turns every server into a metadata server. Since metadata is seldom retrieved, it is paged to disk so more RAM can be used for MapReduce proper. MapR distributes NameNode functionality and provides full random read and write semantics as well as export to NFS. With the distributed NameNode, runaway tasks no longer take down the NameNode. MapR has some lofty performance goals: while HDFS can handle 10-50 PB, MapR claims to handle 1-10 EB (exabytes), and while HDFS can handle 2,000 nodes in a cluster, MapR can handle 10,000 or more. It was mentioned at Big Data Camp that MapR does not rely on HDFS at all.

The final session I attended was Avery Ching’s talk on Giraph: Large-scale Graph Processing on Hadoop. Unfortunately, Avery jumped right into the technical details of Giraph without giving a high-level overview of the problem Giraph solves. Also, his slides were in 10-point font and I could not read them. Combine that with an exhausted brain, and I was ready to head to the bar. Vanilla Hadoop incurs too much overhead for graph data processing. Yahoo used MPI in the past for graph data, but it had no fault tolerance and was too generic. Giraph is a library for iterative graph processing that is fault-tolerant and dynamic. Giraph takes a vertex-centric approach to graph data, which I found interesting because most of my work is edge-centric. Overall, Giraph is similar in goal to Pregel, but available to non-Googlers and with no single point of failure (except those inherited from Hadoop).
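To give a flavor of the vertex-centric (“think like a vertex”) model that Giraph and Pregel share, here is a toy single-machine sketch of single-source shortest paths run as supersteps. This is plain Scala on a made-up graph, not the Giraph API:

```scala
object VertexCentricSketch {
  type VertexId = Int
  // adjacency list: vertex -> (neighbor, edge weight); graph is made up
  val edges: Map[VertexId, List[(VertexId, Int)]] = Map(
    1 -> List((2, 4), (3, 1)),
    2 -> List((4, 1)),
    3 -> List((2, 1), (4, 5)),
    4 -> Nil
  )

  def main(args: Array[String]): Unit = {
    val source = 1
    var dist: Map[VertexId, Int] = edges.keys.map(_ -> Int.MaxValue).toMap
    var messages: Map[VertexId, List[Int]] = Map(source -> List(0))

    // Each superstep: a vertex looks only at its own value and incoming messages,
    // updates itself, and sends messages along its out-edges. When no messages
    // are in flight, every vertex has "voted to halt" and the computation ends.
    while (messages.nonEmpty) {
      val outbox = scala.collection.mutable.Map[VertexId, List[Int]]()
      for ((v, incoming) <- messages) {
        val candidate = incoming.min
        if (candidate < dist(v)) {
          dist += v -> candidate
          for ((nbr, w) <- edges(v))
            outbox(nbr) = (candidate + w) :: outbox.getOrElse(nbr, Nil)
        }
      }
      messages = outbox.toMap
    }
    println(dist) // distances from vertex 1: 1 -> 0, 2 -> 2, 3 -> 1, 4 -> 3
  }
}
```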

Now I have to catch my breath with some wine, beer and cheese at the nice happy hour reception afterwards. It was a long and great day at Hadoop Summit 2011, and I will of course be back next year. I have no clue what is in store. Will the NameNode be removed as the single point of failure? Will other open-source software start integrating with Hadoop? We shall see…

And now it is time to head back to Los Angeles.

Have a happy and safe Fourth of July!

Comments

  • phillip young

    Actually, Yahoo admitted (in one talk) that a handful of HDFS failures were due to 2 bugs in HDFS which “were fixed quickly”.

    HCatalog is Yahoo’s recognition that schemas and tables are good. Hive (created by Facebook) and Pig (created by Yahoo) are both ways to generate map-reduce jobs by hand. These communities used to argue, but it’s great to see HCatalog bringing these things together. HCatalog uses the Hive metastore.
    To me this is another example of Yahoo integrating with “the community”. Good for them (and us).

    MapR is a plug-in replacement for Hadoop-core (name-nodes/data-nodes, job-tracker/task-trackers). You can run native Hive/Pig/MapReduce etc. by submitting your standard JARs to their cluster. They offer a free version (M3) which does not include their enterprise features, such as snapshotting and replication, which are standard enterprise features (in SANs, so why not HDFS?).

    • Interesting. If it was the talk I was in, I must have missed it. Makes sense that HDFS bugs would cause issues. They seem to harp more on “new versions of Pig etc.”

      Thanks for the clarification on HCatalog. I will need to make some changes to reflect correct information. I haven’t gotten much into Hive (or Pig for that matter) so the details were a bit lost on me. I love Yahoo, and wish they could somehow restructure because they have a great research team and are very community oriented. Unfortunately, I think their days are numbered but I hope I am wrong.

      Since I am not DevOps and do not make decisions about purchasing systems, I kind of skated by the details. Thanks for the clarification.

  • Leif Wickland

    Thanks for the write up, Ryan.

    I wonder if the confusion about the ease of adding columns to HBase was due to the distinction between columns and column families. (A piece of data in HBase, a cell, is addressed by the full combination of rowId+columnFamily+column+timestamp.) You don’t have to “add” columns; simply by writing data at a column name, it is “created.” Adding a column family is a little trickier. You can do it easily enough in HBase’s shell or with the HBase API, but to do it “right,” you manually configure the splits.
