Search results for: distributed-graph-analytics

Distributed Graph Analytics

Author : Unnikrishnan Cheramangalath
File Size : 40.8 MB
Format : PDF, ePub
Download : 959
Read : 380
Download »
This book brings together two important trends: graph algorithms and high-performance computing. Efficient and scalable execution of graph processing applications in data or network analysis requires innovations at multiple levels: algorithms, associated data structures, their implementation, and tuning to particular hardware. Further, programming languages and the associated compilers play a crucial role when it comes to automating efficient code generation for various architectures. This book discusses the essentials of all these aspects. The book is divided into three parts: programming, languages, and their compilation. The first part examines the manual parallelization of graph algorithms, revealing various parallelization patterns encountered, especially when dealing with graphs. The second part uses these patterns to provide language constructs that allow a graph algorithm to be specified. Programmers can work with these language constructs without worrying about their implementation, which is the focus of the third part. Implementation is handled by a compiler, which can specialize code generation for a backend device. The book also includes suggestive results on different platforms, which illustrate and justify the theory and practice covered. Together, the three parts provide the essential ingredients for creating a high-performance graph application. The book ends with a section on future directions, which offers several pointers to promising topics for future research. This book is intended for new researchers as well as graduate and advanced undergraduate students. Most of the chapters can be read independently by those familiar with the basics of parallel programming and graph algorithms. However, to make the material more accessible, the book includes a brief background on elementary graph algorithms, parallel computing and GPUs. Moreover, it presents a case study using Falcon, a domain-specific language for graph algorithms, to illustrate the concepts.
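
To make the specify-once, compile-anywhere idea concrete, here is a minimal illustrative sketch (plain Python, not Falcon syntax, and not code from the book): the algorithm is written as a per-edge relaxation kernel, while a separate "backend" routine decides how the edges are iterated, which is the role a compiler plays when targeting CPUs or GPUs.

```python
# Illustrative only: separating the algorithm (a per-edge kernel) from the
# backend that drives it, in the spirit of a graph DSL. A real compiler would
# emit, e.g., an OpenMP or CUDA version of the same driver loop.
def sssp_kernel(dist, u, v, w):
    """Relax one edge (u -> v, weight w); return True if dist[v] improved."""
    if dist[u] + w < dist[v]:
        dist[v] = dist[u] + w
        return True
    return False

def run_until_fixpoint(edges, dist, kernel):
    """A toy sequential 'backend': apply the kernel to every edge until stable."""
    changed = True
    while changed:
        changed = any([kernel(dist, *e) for e in edges])  # list forces a full sweep
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 5)]
dist = {0: 0, 1: float("inf"), 2: float("inf"), 3: float("inf")}
print(run_until_fixpoint(edges, dist, sssp_kernel))  # {0: 0, 1: 3, 2: 1, 3: 8}
```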

Distributed Graph Partitioning for Large scale Graph Analytics

Author : Lukas Rieger
File Size : 21.12 MB
Format : PDF, ePub, Docs
Download : 438
Read : 541
Download »

Compiler and System for Resilient Distributed Heterogeneous Graph Analytics

Author : Gurbinder Singh Gill
File Size : 38.26 MB
Format : PDF, Docs
Download : 826
Read : 205
Download »
Graph analytics systems are used in a wide variety of applications including health care, electronic circuit design, machine learning, and cybersecurity. Graph analytics systems must handle very large graphs such as the Facebook friends graph, which has more than a billion nodes and 200 billion edges. Since machines have limited main memory, distributed-memory clusters with sufficient memory and computation power are required for processing these graphs. In distributed graph analytics, the graph is partitioned among the machines in a cluster, and communication between partitions is implemented using a substrate like MPI. However, programming distributed-memory systems is not easy, and the recent trend towards processor heterogeneity has added to this complexity. To simplify the programming of graph applications on such platforms, this dissertation first presents a compiler called Abelian that translates shared-memory descriptions of graph algorithms written in the Galois programming model into efficient code for distributed-memory platforms with heterogeneous processors. An important runtime parameter to the compiler-generated distributed code is the partitioning policy. We present an experimental study of partitioning strategies for distributed work-efficient graph analytics applications on different CPU architecture clusters at large scale (up to 256 machines). Based on the study, we present a simple rule of thumb to select among the myriad policies. Another challenge of distributed graph analytics that we address in this dissertation is dealing with machine fail-stop failures, which is an important concern especially for long-running graph analytics applications on large clusters. We present a novel communication and synchronization substrate called Phoenix that leverages the algorithmic properties of graph analytics applications to recover from faults with zero overhead during fault-free execution, and show that Phoenix is 24x faster than previous state-of-the-art systems. In this dissertation, we also look at the new opportunities for graph analytics on massive datasets brought by a new kind of byte-addressable memory technology with higher density and lower cost than DRAM, such as Intel Optane DC Persistent Memory. This enables the design of affordable systems that support up to 6TB of randomly accessible memory. In this dissertation, we present key runtime and algorithmic principles to consider when performing graph analytics on massive datasets on Optane DC Persistent Memory, as well as highlight ideas that apply to graph analytics on all large-memory platforms. Finally, we show that our distributed graph analytics infrastructure can be used for a new domain of applications, in particular embedding algorithms such as Word2Vec. Word2Vec trains the vector representations of words (also known as word embeddings) on a large text corpus, and the resulting vector embeddings have been shown to capture semantic and syntactic relationships among words. Other examples include Node2Vec, Code2Vec, Sequence2Vec, etc. (collectively known as Any2Vec) with a wide variety of uses. We formulate the training of such applications as a graph problem and present GraphAny2Vec, a distributed Any2Vec training framework that leverages the state-of-the-art distributed heterogeneous graph analytics infrastructure developed in this dissertation to scale Any2Vec training to large distributed clusters. GraphAny2Vec also demonstrates a novel way of combining model gradients during training, which allows it to scale without losing accuracy.
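
As a rough illustration of the pattern described above (a graph partitioned across machines, with communication stitching the partitions together each round), the following toy sketch simulates two "hosts" computing connected-component labels on their own partitions and then merging boundary updates. It is not code from the dissertation's Abelian or Phoenix systems, only the shape of the computation.

```python
# Toy, single-process simulation of distributed graph analytics: each "host"
# owns a partition of the vertices, computes locally on the edges it can see,
# and a synchronization step reduces the updates (here with min, which yields
# connected-component labels). Illustrative only.
edges = [(0, 1), (1, 2), (2, 3), (4, 5)]            # undirected toy graph
owner = {v: (0 if v < 3 else 1) for v in range(6)}  # a 2-way vertex partition
labels = {v: v for v in range(6)}                   # CC label = min reachable id

def local_compute(host):
    """One round on one host: relax edges touching its partition."""
    updates = {}
    for u, v in edges:
        if owner[u] == host or owner[v] == host:
            lo = min(labels[u], labels[v])
            for x in (u, v):
                if lo < updates.get(x, labels[x]):
                    updates[x] = lo
    return updates

for _ in range(6):                                   # fixed number of BSP-style rounds
    round_updates = [local_compute(h) for h in (0, 1)]   # "parallel" compute
    for upd in round_updates:                        # "communication": reduce with min
        for v, lab in upd.items():
            labels[v] = min(labels[v], lab)

print(labels)   # {0: 0, 1: 0, 2: 0, 3: 0, 4: 4, 5: 4}
```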

Practical Graph Analytics with Apache Giraph

Author : Roman Shaposhnik
File Size : 71.54 MB
Format : PDF, Mobi
Download : 843
Read : 908
Download »
Practical Graph Analytics with Apache Giraph helps you build data mining and machine learning applications using the Apache Foundation’s Giraph framework for graph processing. This is the same framework used by Facebook, Google, and other social media analytics operations to derive business value from vast amounts of interconnected data points. Graphs arise in a wealth of data scenarios and describe the connections that are naturally formed in both digital and real worlds. Examples of such connections abound in online social networks such as Facebook and Twitter, among users who rate movies from services like Netflix and Amazon Prime, and are useful even in the context of biological networks for scientific research. Whether in the context of business or science, viewing data as connected adds value by increasing the amount of information available to be drawn from that data and put to use in generating new revenue or scientific opportunities. Apache Giraph offers a simple yet flexible programming model targeted to graph algorithms and designed to scale easily to accommodate massive amounts of data. Originally developed at Yahoo!, Giraph is now a top-level project at the Apache Foundation, and it enlists contributors from companies such as Facebook, LinkedIn, and Twitter. Practical Graph Analytics with Apache Giraph brings the power of Apache Giraph to you, showing how to harness the power of graph processing for your own data by building sophisticated graph analytics applications using the very same framework that is relied upon by some of the largest players in the industry today.

Scaling Analytics Via Approximate and Distributed Computing

Author : Aniket Chakrabarti
File Size : 35.37 MB
Format : PDF, ePub, Docs
Download : 502
Read : 709
Download »
Prior works have mostly designed and developed approximation techniques and distributed computing techniques to scale up analytics in isolation. We next show how insights from the approximate computing space are crucial in improving the execution of distributed analytics. We show that distributed analytics are extremely sensitive to the characteristics of the data in the partitions. Key analytics tasks such as frequent pattern mining (FPM) degrade significantly in execution if the data partitions look statistically different (not representative of the true underlying distribution). We show the efficacy of LSH in creating a small sketch to understand the global characteristics of the data and hence effectively partition the data for downstream analytics. Furthermore, we show that data approximation can also control the algorithmic complexity of graph analytics tasks. This is of particular interest as distributed graph analytics follow a different execution model (vertex-centric gather-update-scatter) than traditional tasks such as FPM, which follow the MapReduce model. We show that by approximating the input graph meaningfully we are able to significantly scale Loopy Belief Propagation (LBP) inference to very large graphs. Finally, in the distributed computing space, we also developed a probabilistic method to manage latencies of distributed key-value stores (the usual storage medium for large data). This is crucial for interactive analytics.
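
The LSH-based sketching idea above can be illustrated with a minimal MinHash example (one common LSH family): each data partition is summarized by a small signature, and comparing signatures estimates how statistically similar two partitions are. This is a toy sketch under those assumptions, not the dissertation's actual pipeline.

```python
# Minimal MinHash sketch used to check whether two data partitions look
# statistically similar. Illustrative only.
import random

random.seed(7)
NUM_HASHES = 64
SALTS = [random.getrandbits(32) for _ in range(NUM_HASHES)]  # one salt per "hash function"

def minhash(items):
    """Return a 64-value MinHash signature of a set of items."""
    return [min(hash((salt, x)) for x in items) for salt in SALTS]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature positions approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

partition_1 = {f"item{i}" for i in range(0, 800)}
partition_2 = {f"item{i}" for i in range(200, 1000)}   # 600 shared items of 1000 total

sig1, sig2 = minhash(partition_1), minhash(partition_2)
print(round(estimated_jaccard(sig1, sig2), 2))   # roughly 0.6 (true Jaccard is 600/1000)
```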

Systems for Big Graph Analytics

Author : Da Yan
File Size : 37.52 MB
Format : PDF, Mobi
Download : 730
Read : 870
Download »
There has been a surging interest in developing systems for analyzing big graphs generated by real applications, such as online social networks and knowledge graphs. This book aims to help readers get familiar with the computation models of various graph processing systems with minimal time investment. This book is organized into three parts, addressing three popular computation models for big graph analytics: think-like-a-vertex, think-like-a-graph, and think-like-a-matrix. While vertex-centric systems have gained great popularity, the latter two models are currently being actively studied to solve graph problems that cannot be efficiently solved in the vertex-centric model, and are the promising next-generation models for big graph analytics. For each part, the authors introduce the state-of-the-art systems, emphasizing both their technical novelties and hands-on experiences of using them. The systems introduced include Giraph, Pregel+, Blogel, GraphLab, GraphChi, X-Stream, Quegel, SystemML, etc. Readers will learn how to design graph algorithms in various graph analytics systems, and how to choose the most appropriate system for a particular application at hand. The target audience for this book includes beginners who are interested in using a big graph analytics system, and students, researchers and practitioners who would like to build their own graph analytics systems with new features.
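
To give a flavour of the first and third models named above, the following sketch writes one PageRank iteration twice: once in vertex-centric style (scatter messages along edges, gather per vertex) and once as the equivalent sparse matrix-vector product. It is illustrative Python, not the API of any of the systems listed.

```python
# The same PageRank step in two computation models: "think like a vertex"
# (each vertex sums messages from in-neighbours) versus "think like a matrix"
# (one sparse matrix-vector product). Illustrative only.
adj = {0: [1, 2], 1: [2], 2: [0]}        # out-edges
n, d = 3, 0.85
ranks = {v: 1.0 / n for v in adj}

def vertex_centric_step(ranks):
    """Scatter rank/outdeg along edges, then gather and combine per vertex."""
    msgs = {v: [] for v in adj}
    for u, outs in adj.items():
        for v in outs:
            msgs[v].append(ranks[u] / len(outs))
    return {v: (1 - d) / n + d * sum(m) for v, m in msgs.items()}

def matrix_step(ranks):
    """ranks' = (1-d)/n + d * (M @ ranks), with M column-stochastic."""
    new = {v: (1 - d) / n for v in adj}
    for u, outs in adj.items():          # column u of M has 1/outdeg in rows `outs`
        for v in outs:
            new[v] += d * ranks[u] / len(outs)
    return new

print(vertex_centric_step(ranks))
print(matrix_step(ranks))                # same ranks from both formulations
```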

Big Graph Analytics on Just A Single PC

Author : Kai Wang
File Size : 87.88 MB
Format : PDF, Mobi
Download : 614
Read : 1250
Download »
As graph data becomes ubiquitous in modern computing, developing systems to efficiently process large graphs has gained increasing popularity. There are two major types of analytical problems over large graphs: graph computation and graph mining. Graph computation includes a set of problems that can be represented through linear algebra over an adjacency-matrix-based representation of the graph. Graph mining aims to discover complex structural patterns of a graph, for example, finding relationship patterns in a social media network or detecting link spam in web data. Due to their importance in machine learning, web applications and social media, graph analytical problems have been extensively studied in the past decade. Practical solutions have been implemented in a wide variety of graph analytical systems. However, most of the existing systems for graph analytics are distributed frameworks, which suffer from one or more of the following drawbacks: (1) many of the (current and future) users performing graph analytics will be domain experts with limited computer science background. They are faced with the challenge of managing a cluster, which involves tasks such as data partitioning and fault tolerance they are not familiar with; (2) not all users have access to an enterprise cluster in their daily development tasks; (3) distributed graph systems commonly suffer from large startup and communication overhead; and (4) load balancing in a distributed system is another major challenge. Some graph algorithms have dynamic working sets, and it is thus hard to distribute the workload appropriately before execution. In this dissertation, we identify three categories of graph workloads for which single-machine systems are more suitable than distributed systems: (1) analytical queries that do not need exact answers; (2) program analysis tasks that are widely used to find bugs in real-world software; and (3) graph mining algorithms that are important for many information-retrieval tasks. Based on these observations, we have developed a set of single-machine graph systems to deliver efficiency and scalability specifically for these workloads. In particular, this dissertation makes the following contributions. The first contribution is the design and implementation of a single-machine graph query system named GraphQ, which divides a large graph into partitions and merges them with the guidance from an abstraction graph. By using multiple levels of abstraction, it can quickly rule out infeasible solutions and identify mergeable partitions. GraphQ uses the memory capacity as a budget and tries its best to find solutions before exhausting the memory, making it possible to answer analytical queries over very large graphs with resources affordable to a single PC. The second contribution is the design and implementation of Graspan, a single-machine, disk-based graph processing system tailored for interprocedural static analyses. Given a program graph and a grammar specification of an analysis, Graspan uses an edge-pair centric computation model to compute dynamic transitive closures on very large program graphs. With the help of novel graph processing techniques, we turn sophisticated code analyses into scalable Big Graph analytics. The third contribution of this dissertation is a single-machine, out-of-core graph mining system, called RStream, which leverages disk support to enable efficient edge streaming for mining very large graphs.
RStream employs a rich programming model that exposes relational algebra for developers to express a wide variety of mining tasks and implements a runtime engine that delivers efficiency with tuple streaming. In conclusion, this dissertation attempts to explore the opportunities of building single-machine graph systems for scenarios where distributed systems do not work well. Our experimental results demonstrate that the techniques proposed in this dissertation can efficiently solve big graph analytical problems on a single consumer PC. We hope that these promising results will encourage future work to continue building affordable single-machine systems for a rich set of datasets and analytical tasks.
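
The Graspan paragraph above describes a grammar-guided transitive closure computed by joining adjacent edge pairs; the sketch below is a tiny in-memory version of that computation model, with a made-up two-rule "points-to"-style grammar. It illustrates the idea only and is not the Graspan system itself.

```python
# Tiny in-memory version of grammar-guided dynamic transitive closure: given
# labelled edges and productions LHS ::= A B, repeatedly join adjacent edge
# pairs (s -A-> m -B-> t) into new edges (s -LHS-> t) until a fixpoint.
def grammar_closure(edges, rules):
    """edges: set of (src, label, dst); rules: {(A, B): LHS}."""
    edges = set(edges)
    while True:
        by_src = {}
        for s, lab, t in edges:
            by_src.setdefault(s, []).append((lab, t))
        new_edges = set()
        for s, a, mid in edges:                      # edge-pair join at `mid`
            for b, t in by_src.get(mid, []):
                lhs = rules.get((a, b))
                if lhs and (s, lhs, t) not in edges:
                    new_edges.add((s, lhs, t))
        if not new_edges:
            return edges
        edges |= new_edges

# Hypothetical grammar: object o points to x if allocated there (PT ::= alloc)
# or if it flows there through an assignment (PT ::= PT assign).
rules = {("PT", "assign"): "PT"}
edges = {("o", "alloc", "x"), ("x", "assign", "y"), ("y", "assign", "z")}
edges |= {(s, "PT", t) for (s, lab, t) in edges if lab == "alloc"}   # unary rule as a seed
result = grammar_closure(edges, rules)
print(sorted(e for e in result if e[1] == "PT"))   # o points to x, y and z
```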

Irregular Graph Algorithms on Modern Multicore Manycore and Distributed Processing Systems

Author : George Slota
File Size : 70.45 MB
Format : PDF
Download : 550
Read : 226
Download »
Graph analysis is the study of real-world interaction data, be it through biological or chemical interaction networks, human social or communication networks, or other graph-representable datasets pervasive throughout the social and physical sciences. Due to increasing data sizes and complexities, it is important to develop efficient and scalable approaches for the algorithms, tools, and techniques used to study such data. Efficient utilization of the increasing heterogeneity and complexity of modern high performance computing systems is another major consideration for these efforts. The primary contributions of this thesis are as follows: First, parallel and scalable solutions to several basic graph analytics are presented. An implementation of the color-coding algorithm for subgraph isomorphism is introduced as FASCIA (Fast Approximate Subgraph Counting and Enumeration). Using several optimizations for work avoidance, memory usage reduction, and cache/data movement efficiency, FASCIA demonstrates up to a five orders-of-magnitude per-core speedup relative to prior art. FASCIA is able to calculate the counts of subgraphs up to 10 vertices on multi-billion edge graphs in minutes on a modest 16-node cluster and use these counts for a variety of analytics. Using FASCIA's baseline approach, FastPath is also introduced to find minimum weight paths in weighted networks. The Multistep method is next introduced as an approach for graph connectivity, weak connectivity, and strong connectivity, with a generalization of Multistep also presented for graph biconnectivity. The Multistep approaches are shown to demonstrate a 2-7x mean speedup relative to the prior state-of-the-art. A graph partitioner called PuLP (Partitioning using Label Propagation) is also introduced along with a general distributed graph layout strategy, DGL. PuLP was specifically designed to partition small-world graphs having skewed degree distributions, such as social interaction networks and web graphs. PuLP is able to partition such graphs an order of magnitude faster and with a fraction of the memory of other comparable partitioners (ParMETIS, PT-Scotch) while giving comparable partitions in terms of cut quality and balance. Additionally, this thesis presents how, using techniques derived from these efforts, a suite of distributed graph analytics could be implemented and applied to the largest publicly available web crawl of 3.5 billion pages and 130 billion links. End-to-end execution of analysis using these implementations completes in 20 minutes on only 256 nodes of the Blue Waters supercomputing system. Throughout this thesis, analyses of the algorithms and subroutines that comprise the Multistep, FASCIA/FastPath, and PuLP/DGL implementations are undertaken. Common optimizations are then identified (e.g., multiple levels of queues to match the memory hierarchy, techniques for non-blocking and asynchronous updates to shared data, efficient distributed communication patterns, among others) and their effects on performance are quantified. It is demonstrated how the optimization techniques can be utilized when processing under the higher degree of parallelism available in modern manycores (GPUs, Intel Xeon Phis) as well as how the techniques can be extended for more general-purpose graph processing in both the shared- and distributed-memory spaces.
Also under consideration is the state of current hardware trends, with the goal of identifying how to modify and extend these general optimizations for forthcoming high performance computing architectures. Additionally, new optimizations and potential further research areas are introduced which might also be applicable for accelerating graph processing on these future systems.
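
The label-propagation idea behind PuLP can be sketched in a few lines: each vertex repeatedly adopts the partition label most common among its neighbours, while a capacity bound keeps the parts balanced. The code below only illustrates that idea (the slack parameter is a hypothetical knob); the real PuLP juggles multiple objectives and constraints.

```python
# A small sketch of partitioning by label propagation with a balance bound.
# Illustrative only, not the PuLP implementation.
from collections import Counter

def label_prop_partition(adj, num_parts=2, max_iters=10, slack=1.4):
    verts = list(adj)
    cap = slack * len(verts) / num_parts                     # per-part capacity bound
    part = {v: i % num_parts for i, v in enumerate(verts)}   # initial round-robin assignment
    sizes = Counter(part.values())
    for _ in range(max_iters):
        moved = False
        for v in verts:
            counts = Counter(part[u] for u in adj[v])        # neighbour labels
            best = max(counts, key=counts.get)
            if best != part[v] and sizes[best] + 1 <= cap:   # move only if capacity allows
                sizes[part[v]] -= 1
                sizes[best] += 1
                part[v] = best
                moved = True
        if not moved:
            break
    return part

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_prop_partition(adj))   # the two triangles land in different parts
```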

Large Scale Graph Processing Using Apache Giraph

Author : Sherif Sakr
File Size : 29.99 MB
Format : PDF, ePub, Mobi
Download : 892
Read : 1285
Download »
This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms. The book is organized as follows: Chapter 1 starts by providing a general background of the big data phenomenon and a general introduction to the Apache Giraph system, its abstraction, programming model and design architecture. Next, Chapter 2 focuses on Giraph as a platform and how to use it. Based on a sample job, even more advanced topics like monitoring the Giraph application lifecycle and different methods for monitoring Giraph jobs are explained. Chapter 3 then provides an introduction to Giraph programming, introduces the basic Giraph graph model and explains how to write Giraph programs. In turn, Chapter 4 discusses in detail the implementation of some popular graph algorithms including PageRank, connected components, shortest paths and triangle closing. Chapter 5 focuses on advanced Giraph programming, discussing common Giraph algorithmic optimizations, tunable Giraph configurations that determine the system’s utilization of the underlying resources, and how to write a custom graph input and output format. Lastly, Chapter 6 highlights two systems that have been introduced to tackle the challenge of large scale graph processing, GraphX and GraphLab, and explains the main commonalities and differences between these systems and Apache Giraph. This book serves as an essential reference guide for students, researchers and practitioners in the domain of large scale graph processing. It offers step-by-step guidance, with several code examples and the complete source code available in the related GitHub repository. Students will find a comprehensive introduction to and hands-on practice with tackling large scale graph processing problems using the Apache Giraph system, while researchers will discover thorough coverage of the emerging and ongoing advancements in big graph processing systems.
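
Giraph programs themselves are written in Java against Giraph's vertex computation API; the sketch below only mimics, in Python, the superstep/compute/message flow the chapters describe, using single-source shortest paths (one of the Chapter 4 algorithms) as the example.

```python
# Pregel-style supersteps in miniature: each active vertex processes its
# incoming messages, updates its value if it improved, and messages its
# neighbours; the loop ends when no messages are in flight. Illustrative only.
import math

graph = {"A": [("B", 3), ("C", 1)], "B": [("D", 2)], "C": [("B", 1)], "D": []}

def pregel_sssp(graph, source):
    value = {v: math.inf for v in graph}
    inbox = {v: ([0] if v == source else []) for v in graph}   # superstep-0 message
    while any(inbox.values()):
        outbox = {v: [] for v in graph}
        for v, msgs in inbox.items():            # one compute() call per active vertex
            if not msgs:
                continue                         # vertices without messages stay halted
            best = min(msgs)
            if best < value[v]:                  # improved: update and notify neighbours
                value[v] = best
                for nbr, w in graph[v]:
                    outbox[nbr].append(best + w)
        inbox = outbox                           # barrier: next superstep's messages
    return value

print(pregel_sssp(graph, "A"))   # {'A': 0, 'B': 2, 'C': 1, 'D': 4}
```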

Large scale Graph Visual Analytics

Author : Fangyan Zhang
File Size : 59.10 MB
Format : PDF
Download : 306
Read : 1241
Download »
Large-scale graph analysis and visualization is becoming a more challenging task, due to the increasing amount of graph data. This dissertation focuses on methods to ease the task of exploring large-scale graphs through graph sampling and visualization. Graph sampling aims to reduce the complexity of graph drawing while preserving properties of the original graph, allowing analysis of the smaller sample, which yields characteristics similar to those of the original graph. Graph visualization is an effective and intuitive approach to observing structures within graph data. For large-scale graphs, graph sampling and visualization are straightforward strategies to gain insights into common issues that are often encountered. This dissertation evaluates commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates based on random graphs, small-world graphs, scale-free graphs, and real-world graphs. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task. In addition, this thesis proposes three types of distributed sampling algorithms and develops a sampling package on Spark. Compared with traditional/non-distributed graph sampling approaches, the scalable distributed sampling approaches are as reliable as the traditional/non-distributed techniques, and they bring much needed improvement to sampling efficiency, especially with regard to topology-based sampling. The benchmark study of traditional/non-distributed graph sampling is also applicable to distributed graph sampling. A contribution to the area of graph visualization is also made through the presentation of a scalable graph visualization system, BGS (Big Graph Surfer), that creates a hierarchical structure from an original graph and provides interactive navigation along the hierarchy by expanding or collapsing clusters when visualizing large-scale graphs. A distributed computing framework, Spark, provides the backend for BGS clustering and visualization. This architecture makes it capable of visualizing a graph of up to 1 billion nodes or edges in real time. In addition, BGS provides a series of hierarchy and graph exploration methods, such as hierarchy view, hierarchy navigation, hierarchy search, graph view, graph navigation, graph search, and other useful interactions. These functionalities facilitate the exploration of very large-scale graphs. Evaluation of BGS is performed through application to several representative large-scale graph datasets and comparison with other existing graph visualization tools in scalability, usability, and flexibility. The dissertation concludes with a summarization of the contributions and their improvements to large-scale graph analysis and visualization, and a discussion of possible future work in this research field.
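
As a concrete illustration of two sampling families such a benchmark typically compares, the sketch below implements plain random node sampling and a topology-based random-walk sample on a toy graph. It is illustrative Python only; the dissertation's package distributes this kind of work over Spark partitions.

```python
# Two common graph sampling strategies: random node sampling (keep each vertex
# with some probability, take the induced subgraph) and a random-walk sample
# (topology-based). Illustrative only.
import random

def random_node_sample(adj, rate, seed=1):
    rng = random.Random(seed)
    kept = {v for v in adj if rng.random() < rate}
    return {v: [u for u in adj[v] if u in kept] for v in kept}   # induced subgraph

def random_walk_sample(adj, num_steps, seed=1):
    rng = random.Random(seed)
    cur = rng.choice(list(adj))
    visited = {cur}
    for _ in range(num_steps):
        nbrs = adj[cur]
        cur = rng.choice(nbrs) if nbrs else rng.choice(list(adj))  # restart at dead ends
        visited.add(cur)
    return {v: [u for u in adj[v] if u in visited] for v in visited}

adj = {i: [(i + 1) % 10, (i + 3) % 10] for i in range(10)}   # small circulant toy graph
print(len(random_node_sample(adj, 0.5)), len(random_walk_sample(adj, 8)))
```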

On Software Infrastructure for Scalable Graph Analytics

Author : Yingyi Bu
File Size : 82.83 MB
Format : PDF
Download : 727
Read : 1004
Download »
Recently, there has been a growing need for distributed graph processing systems that are capable of gracefully scaling to very large datasets. At the same time, in real-world applications, it is highly desirable to reduce the tedious, inefficient ETL (extract, transform, load) gap between tabular data processing systems and graph processing systems. Unfortunately, those challenges have not been easily met, due to the intense memory pressure imposed by the process-centric, message passing designs that many graph processing systems follow, as well as the separation of tabular data processing runtimes and graph processing runtimes. In this thesis, we explore the application of programming techniques and algorithms from the database systems world to the problem of scalable graph analysis. We first propose a bloat-aware design paradigm for the development of efficient and scalable Big Data applications in object-oriented, GC-enabled languages and demonstrate that programming under this paradigm does not incur a significant programming burden but obtains remarkable performance gains (e.g., 2.5X). Based on the design paradigm, we then build Pregelix, an open source distributed graph processing system which is based on an iterative dataflow design that is better tuned to handle both in-memory and out-of-core workloads. As such, Pregelix offers improved performance characteristics and scaling properties over current open source systems (e.g., we have seen up to 15X speedup compared to Apache Giraph and up to 35X speedup compared to distributed GraphLab). Finally, we integrate Pregelix with the open source Big Data management system AsterixDB to offer users a mix of a vertex-oriented programming model and a declarative query language for richer forms of Big Graph analytics with reduced ETL pains.
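
The bloat-aware idea, roughly, is to avoid allocating one heap object per graph element and to store data in flat buffers instead. The sketch below contrasts the two layouts using Python's array module as a stand-in; the thesis works in the JVM setting, so treat this purely as an illustration of the memory argument, not its actual design.

```python
# Object-per-edge layout versus flat columnar arrays: the columnar layout
# avoids per-element object headers and GC pressure. Illustrative only.
from array import array
import sys

class EdgeObject:                       # one heap object per edge
    __slots__ = ("src", "dst", "weight")
    def __init__(self, src, dst, weight):
        self.src, self.dst, self.weight = src, dst, weight

edges_as_objects = [EdgeObject(i, i + 1, 1.0) for i in range(100_000)]

# Columnar layout: three primitive arrays hold the same edge list.
src = array("l", (i for i in range(100_000)))
dst = array("l", (i + 1 for i in range(100_000)))
weight = array("d", (1.0 for _ in range(100_000)))

# The object count below ignores the boxed ints/floats inside each object,
# so the real gap is even larger than what is printed.
object_bytes = sum(sys.getsizeof(e) for e in edges_as_objects)
columnar_bytes = sum(map(sys.getsizeof, (src, dst, weight)))
print(object_bytes, ">", columnar_bytes)
```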

Improving Distributed Graph Processing by Load Balancing and Redundancy Reduction

Author : Shuang Song (Ph. D.)
File Size : 56.66 MB
Format : PDF, ePub, Docs
Download : 273
Read : 709
Download »
The amount of data generated every day is growing exponentially in the big data era. A significant portion of this data is stored as graphs in various domains, such as online retail and social networks. Analyzing large-scale graphs provides important insights that are highly utilized in many areas, such as recommendation systems, banking systems, and medical diagnosis. To accommodate analysis on large-scale graphs, developers from industry and academia design distributed graph processing systems. However, processing graphs in a distributed manner suffers from performance inefficiencies caused by workload imbalance and redundant computations. For instance, while data centers are trending towards large numbers of heterogeneous processing machines, graph partitioners still operate under the assumption of uniform computing resources. This leads to load imbalance that degrades the overall performance. Even with a balanced data distribution, the irregularity of graph applications can result in different amounts of dynamic operations on each machine in the cluster. Such imbalanced work distribution slows down the execution. Besides these, redundancy also impacts the performance of distributed graph analysis. To utilize the available parallelism of computing clusters, distributed graph systems deploy a repeated-relaxing computation model (e.g., Bellman-Ford algorithm variants) rather than executing in a sequential but work-optimal order. Studies performed in this dissertation show that redundant computations pervasively exist and significantly impact the performance efficiency of distributed graph processing. This dissertation explores novel techniques to reduce the workload imbalance and redundant computations of analyzing large-scale graphs in a distributed setup. It evaluates the proposed techniques on both pre-processing and execution modules to enable fair data distribution, lightweight workload balancing, and redundancy optimization for future distributed graph processing systems. The first contribution of this dissertation is Heterogeneity-aware Partitioning (HAP), which aims to balance the load distribution of distributed graph processing in heterogeneous clusters. HAP proposes a number of methodologies to estimate various machines' computational power for graph analytics. It also extends several state-of-the-art partitioning algorithms for heterogeneity-aware data distribution. Utilizing the estimation of machines' graph processing capability to guide the extended partitioning algorithms can reduce load imbalance when processing a large-scale graph in heterogeneous clusters. This results in significant performance improvement. Another contribution of the dissertation is Hula, which optimizes the workload balance of distributed graph analytics on the fly. Hula offers a hybrid graph partitioning algorithm to split a large-scale graph in a locality-friendly manner and generate metadata for lightweight dynamic workload balancing. To track machines' work intensity, Hula inserts hardware timers to count the time spent on important operations (e.g., computational operations and atomic operations). This information can guide Hula's workload scheduler to arrange work migration. With the support of metadata generated by the hybrid partitioner, Hula's migration scheme only requires a minimal amount of data to transfer work between machines in the cluster. Hula's workload balancing design achieves lightweight imbalance reduction on the fly.
Finally, this dissertation focuses on improving the computational efficiency of distributed graph processing. To do so, it reveals the root cause and the amount of redundant computations in distributed graph processing. SLFE is proposed as a system solution to reduce these redundant operations. SLFE develops a lightweight pre-processing technique to obtain the maximum propagation order of each vertex in a given graph. This information is defined as Redundancy Reduction Guidance (RRG) and is utilized by SLFE's Redundancy Reduction (RR)-aware computing model to prune redundant operations on the fly. Moreover, SLFE provides RR-aware APIs to maintain high programmability. These techniques allow the redundancy optimizations of distributed graph processing to remain transparent to users.
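
A minimal sketch of the heterogeneity-aware partitioning idea described above: estimate each machine's relative graph-processing throughput and size its partition proportionally, instead of splitting the graph evenly. The host names and throughput numbers are hypothetical, and the real HAP layers this onto full partitioning algorithms rather than simple contiguous vertex ranges.

```python
# Size partitions in proportion to estimated per-machine throughput.
# Illustrative only.
def proportional_targets(num_vertices, throughput):
    """throughput: {machine: relative graph-processing capability}."""
    total = sum(throughput.values())
    targets = {m: int(num_vertices * t / total) for m, t in throughput.items()}
    fastest = max(throughput, key=throughput.get)
    targets[fastest] += num_vertices - sum(targets.values())   # rounding remainder
    return targets

def assign_chunks(vertices, targets):
    """Hand out contiguous vertex ranges sized by each machine's target."""
    assignment, start = {}, 0
    for machine, size in targets.items():
        assignment[machine] = vertices[start:start + size]
        start += size
    return assignment

throughput = {"fast_gpu_host": 4.0, "big_cpu_host": 2.0, "small_cpu_host": 1.0}
targets = proportional_targets(7_000, throughput)
print(targets)   # {'fast_gpu_host': 4000, 'big_cpu_host': 2000, 'small_cpu_host': 1000}
chunks = assign_chunks(list(range(7_000)), targets)
print({m: len(c) for m, c in chunks.items()})
```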

Compiler and Runtime Systems for Homomorphic Encryption and Graph Processing on Distributed and Heterogeneous Architectures

Author : Roshan Dathathri
File Size : 44.60 MB
Format : PDF, Docs
Download : 750
Read : 1117
Download »
Distributed and heterogeneous architectures are tedious to program because devices such as CPUs, GPUs, and FPGAs provide different programming abstractions and may have disjoint memories, even if they are on the same machine. In this thesis, I present compiler and runtime systems that make it easier to develop efficient programs for privacy-preserving computation and graph processing applications on such architectures. Fully Homomorphic Encryption (FHE) refers to a set of encryption schemes that allow computations on encrypted data without requiring a secret key. Recent cryptographic advances have pushed FHE into the realm of practical applications. However, programming these applications remains a huge challenge, as it requires cryptographic domain expertise to ensure correctness, security, and performance. This thesis introduces a domain-specific compiler for fully-homomorphic deep neural network (DNN) inferencing as well as a general-purpose language and compiler for fully-homomorphic computation: 1. I present CHET, a domain-specific optimizing compiler designed to make the task of programming DNN inference applications using FHE easier. CHET automates many laborious and error-prone programming tasks including encryption parameter selection to guarantee security and accuracy of the computation, determining efficient data layouts, and performing scheme-specific optimizations. Our evaluation of CHET on a collection of popular DNNs shows that CHET-generated programs outperform expert-tuned ones by an order of magnitude. 2. I present a new FHE language called Encrypted Vector Arithmetic (EVA), which includes an optimizing compiler that generates correct and secure FHE programs, while hiding all the complexities of the target FHE scheme. Bolstered by our optimizing compiler, programmers can develop efficient general-purpose FHE applications directly in EVA. EVA is designed to also work as an intermediate representation that can be a target for compiling higher-level domain-specific languages. To demonstrate this, we have re-targeted CHET onto EVA. Due to the novel optimizations in EVA, its programs are on average ~ 5.3x faster than those generated by the unmodified version of CHET. These languages and compilers enable a wider adoption of FHE. Applications in several areas like machine learning, bioinformatics, and security need to process and analyze very large graphs. Distributed clusters are essential in processing such graphs in reasonable time. I present a novel approach to building distributed graph analytics systems that exploits heterogeneity in processor types, partitioning policies, and programming models. The key to this approach is Gluon, a domain-specific communication-optimizing substrate. Programmers write applications in a shared-memory programming system of their choice and interface these applications with Gluon using a lightweight API. Gluon enables these programs to run on heterogeneous clusters in the bulk-synchronous parallel (BSP) model and optimizes communication in a novel way by exploiting structural and temporal invariants of graph partitioning policies. We also extend Gluon to support lock-free, non-blocking, bulk-asynchronous execution by introducing the bulk-asynchronous parallel (BASP) model. Our experiments were done on CPU clusters with up to 256 multi-core, multi-socket hosts and on multi-GPU clusters with up to 64 GPUs. The communication optimizations in Gluon improve end-to-end application execution time by ~ 2.6x on average.
Gluon's BASP-style execution is on average ~ 1.5x faster than its BSP-style execution for graph applications on real-world large-diameter graphs at scale. The D-Galois and D-IrGL systems built using Gluon scale well and are faster than Gemini, the state-of-the-art distributed CPU-only graph analytics system, by factors of ~ 3.9x and ~ 4.9x on average using distributed CPUs and distributed GPUs respectively. The Gluon-based D-IrGL system for distributed GPUs is also on average ~ 12x faster than Lux, the only other distributed GPU-only graph analytics system. The Gluon-based D-IrGL system was one of the first distributed GPU graph analytics systems and is the only asynchronous one.
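
The communication pattern Gluon optimizes can be pictured as a reduce/broadcast step between rounds: hosts ship only the mirror values they changed to the owning master, the master combines them (here with min), and the result is broadcast back to the mirrors. The sketch below is a toy, single-process illustration of that pattern, not Gluon's API; the real substrate additionally exploits structural and temporal invariants of the partitioning policy to cut this traffic further.

```python
# Miniature reduce/broadcast synchronization between partitions. Illustrative only.
def sync_round(masters, host_mirrors, changed):
    """masters: {v: value}; host_mirrors: {host: {v: value}};
    changed: {host: set of vertices that host updated this round}."""
    # Reduce phase: only dirty mirrors are shipped to the owner and combined.
    for host, dirty in changed.items():
        for v in dirty:
            masters[v] = min(masters[v], host_mirrors[host][v])
    # Broadcast phase: owners push the reduced value back to every mirror.
    for host, mirrors in host_mirrors.items():
        for v in mirrors:
            mirrors[v] = masters[v]
    return masters

masters = {"x": 5, "y": 9}
host_mirrors = {"host0": {"x": 3, "y": 9}, "host1": {"x": 5, "y": 7}}
changed = {"host0": {"x"}, "host1": {"y"}}         # only x on host0 and y on host1 changed
print(sync_round(masters, host_mirrors, changed))  # {'x': 3, 'y': 7}
print(host_mirrors)                                # mirrors now agree with the masters
```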

Concurrent Query Analytics on Distributed Graph Systems

Author : Jonas Grunert
File Size : 45.73 MB
Format : PDF
Download : 333
Read : 370
Download »

Distributed Computing and Internet Technology

Author : Dang Van Hung
File Size : 44.9 MB
Format : PDF, Docs
Download : 888
Read : 615
Download »
This book constitutes the proceedings of the 16th International Conference on Distributed Computing and Internet Technology, ICDCIT 2020, held in Bhubaneswar, India, in January 2020. The 20 full and 3 short papers presented in this volume were carefully reviewed and selected from 110 submissions. In addition, the book includes 6 invited papers. The contributions were organized in topical sections named: invited talks; concurrent and distributed systems modelling and verification; cloud and grid computing; social networks, machine learning and mobile networks; data processing and blockchain technology; and short papers.

A Declarative Framework for Big Graph Analytics and Their Provenance

Author : Vasiliki Papavasileiou
File Size : 59.61 MB
Format : PDF, ePub, Docs
Download : 809
Read : 612
Download »
Recent years have witnessed an explosion in the size of graph data and the complexity of graph analytics in fields such as social and mobile networks, science and advertisement. Analyzing and extracting knowledge from Big Graphs (in analogy to Big Data) is hard. The size of Big Graphs necessitates the use of distributed infrastructures and parallel programming. Moreover, implementing performant and correct analytics requires in-depth knowledge of both the algorithm and the input data. Developers of graph analytics face two major challenges: i) There is a myriad of Big Graph processing frameworks, each of which uses a different imperative programming language and implements different low-level optimizations. Developers are burdened with understanding the low-level characteristics of the execution framework that best suits their algorithms and data. ii) Assessing the quality of both data and analytics is a tedious and manual task. Devising new graph analytics is an iterative process, where developers incrementally refine their algorithms and clean their data by analyzing results, correcting for errors and running again until the end results are satisfactory. In this dissertation we offer a declarative framework that addresses the entire life-cycle, from designing to executing, of Big Graph analytics. Our approach uses a single language for both authoring graph analytics and fine-tuning them. Specifically, this dissertation makes the following two main contributions: We design and demonstrate Datalography, the first approach for declarative graph analytics on Vertex-Centric graph processing engines. To accommodate different programming models, we design and implement a compiler that takes general Datalog queries and rewrites them into distribution-aware queries that can be efficiently evaluated on any Vertex-Centric framework. Moreover, our compiler implements optimizations that are automatic and transparent to the user, in the form of logical query rewritings, and thus portable to any Vertex-Centric system. We demonstrate the effectiveness of our approach with an experimental evaluation on real-world graphs that indicates Datalography offers superior performance when compared to native, imperative implementations. Our second contribution is a novel provenance management approach that enables developers to customize provenance capturing and analysis, with twofold benefits: the amount of captured provenance is minimized to include only the necessary information, and analysis is extended beyond traditional tracing queries. We present formal semantics of our provenance query language, based on Datalog, and identify an important class of queries that can be evaluated online, simultaneously with the graph analytic. We showcase our approach with Ariadne, a provenance management system that supports efficient debugging, auditing and fine-tuning of graph analytics.
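
To make the declarative style concrete, here is the classic two-rule Datalog reachability program evaluated semi-naively in a few lines of plain Python. It illustrates the kind of query a compiler like Datalography would rewrite and distribute; it is not output of the system itself.

```python
# Semi-naive evaluation of the Datalog program
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Z) :- reach(X, Y), edge(Y, Z).
def reachability(edge_facts):
    reach = set(edge_facts)             # first rule: every edge is a reach fact
    delta = set(edge_facts)             # semi-naive: only join with newly derived facts
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edge_facts
               if y == y2 and (x, z) not in reach}
        reach |= new
        delta = new
    return reach

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(reachability(edges)))      # all pairs (x, y) with a path from x to y
```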

Software Foundations for Data Interoperability and Large Scale Graph Data Analytics

Author : Lu Qin
File Size : 54.41 MB
Format : PDF, Mobi
Download : 413
Read : 1205
Download »

Handbook of Big Data Technologies

Author : Albert Y. Zomaya
File Size : 83.90 MB
Format : PDF, Mobi
Download : 134
Read : 1261
Download »
This handbook offers comprehensive coverage of recent advancements in Big Data technologies and related paradigms. Chapters are authored by international leading experts in the field, and have been reviewed and revised for maximum reader value. The volume consists of twenty-five chapters organized into four main parts. Part One covers the fundamental concepts of Big Data technologies including data curation mechanisms, data models, storage models, programming models and programming platforms. It also dives into the details of implementing Big SQL query engines and big stream processing systems. Part Two focuses on the semantic aspects of Big Data management including data integration and exploratory ad hoc analysis in addition to structured querying and pattern matching techniques. Part Three presents a comprehensive overview of large scale graph processing. It covers the most recent research in large scale graph processing platforms, introducing several scalable graph querying and mining mechanisms in domains such as social networks. Part Four details novel applications that have been made possible by the rapid emergence of Big Data technologies, such as Internet-of-Things (IoT), Cognitive Computing and SCADA Systems. All parts of the book discuss open research problems, including potential opportunities, that have arisen from the rapid progress of Big Data technologies and the associated increasing requirements of application domains. Designed for researchers, IT professionals and graduate students, this book is a timely contribution to the growing Big Data field. Big Data has been recognized as one of the leading emerging technologies that will have a major impact on various fields of science and aspects of human society over the coming decades. Therefore, the content in this book will be an essential tool to help readers understand the development and future of the field.