RM 83.00

Key Features
An advanced guide combining instructions and practical examples to extend the most up-to-date Spark functionality.
Extend your data processing capabilities to process huge chunks of data in minimum time using advanced concepts in Spark.
Master the art of real-time processing with the help of Apache Spark 2.x.

Book Description
Apache Spark is an in-memory, cluster-based parallel processing system that provides a wide range of functionality such as graph processing, machine learning, stream processing, and SQL. This book aims to take your knowledge of Spark to the next level by teaching you how to expand Spark's functionality and implement your data flows and machine/deep learning programs on top of the platform.

The book commences with an overview of the Spark ecosystem. It introduces you to Project Tungsten and Catalyst, two of the major advancements of Apache Spark 2.x. You will understand how memory management and binary processing, cache-aware computation, and code generation are used to speed things up dramatically. The book then shows how to incorporate H2O, SystemML, and DeepLearning4J for machine learning, and Jupyter Notebooks and Kubernetes/Docker for cloud-based Spark.

During the course of the book, you will learn about the latest enhancements to Apache Spark 2.x, such as interactive querying of live data and the unification of DataFrames and Datasets (a short code sketch of the unified API follows the table of contents below). You will also learn about the updates to the APIs and how DataFrames and Datasets affect SQL, machine learning, graph processing, and streaming. You will learn to use Spark as a big data operating system, understand how to implement advanced analytics on the new APIs, and explore how easy it is to use Spark in day-to-day tasks.

What you will learn
Examine advanced machine learning and deep learning with MLlib, SparkML, SystemML, H2O, and DeepLearning4J
Study highly optimised, unified batch and real-time data processing using Spark SQL and Structured Streaming
Evaluate large-scale graph processing and analysis using GraphX and GraphFrames
Apply Apache Spark in elastic deployments using Jupyter and Zeppelin Notebooks, Docker, Kubernetes, and the IBM Cloud
Understand the internal details of the cost-based optimizers used in Catalyst, SystemML, and GraphFrames
Learn how specific parameter settings affect the overall performance of an Apache Spark cluster
Leverage Scala, R, and Python for your data science projects

About the Author
Romeo Kienzler works as the chief data scientist in the IBM Watson IoT worldwide team, helping clients apply advanced machine learning at scale to their IoT sensor data. He holds a Master's degree in computer science from the Swiss Federal Institute of Technology, Zurich, with a specialization in information systems, bioinformatics, and applied statistics. His current research focus is on scalable machine learning on Apache Spark. He is a contributor to various open source projects and works as an associate professor for artificial intelligence at the Swiss University of Applied Sciences, Berne. He is a member of the IBM Technical Expert Council and the IBM Academy of Technology, IBM's leading brains trust.

Table of Contents
A first taste and what's new in Apache Spark V2
Apache Spark SQL
The Catalyst Optimizer
Project Tungsten
Apache Spark Streaming
Structured Streaming
Apache Spark MLlib
Apache SparkML
Apache SystemML
Deep Learning on Apache Spark with DeepLearning4J, Apache SystemML, H2O
Apache Spark GraphX
Apache Spark GraphFrames
Apache Spark with Jupyter Notebooks on IBM Data Science Experience
Apache Spark on Kubernetes
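
The description above highlights the unification of DataFrames and Datasets in Spark 2.x. Below is a minimal sketch, not taken from the book, of what that unified API looks like in Scala; the application name, the Reading case class, and the column names are illustrative assumptions, and it presumes a local Spark 2.x installation.

// Minimal sketch (not from the book): the unified DataFrame/Dataset API in Spark 2.x.
import org.apache.spark.sql.SparkSession

// A case class gives the otherwise untyped rows a compile-time typed Dataset view.
case class Reading(sensorId: String, temperature: Double)

object DataFrameDatasetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DataFrameDatasetSketch")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // In Spark 2.x a DataFrame is just Dataset[Row]; a typed Dataset is built the same way.
    val readings = Seq(Reading("s1", 21.5), Reading("s1", 22.0), Reading("s2", 19.8)).toDS()

    // The same data can be queried relationally via Spark SQL (planned by Catalyst) ...
    readings.createOrReplaceTempView("readings")
    spark.sql("SELECT sensorId, avg(temperature) AS avgTemp FROM readings GROUP BY sensorId").show()

    // ... or functionally, with compile-time types on each record.
    readings.filter(_.temperature > 20.0).show()

    spark.stop()
  }
}

The point of the sketch is that one in-memory collection serves both styles: relational queries optimized by Catalyst and Tungsten, and strongly typed transformations, which is the unification the book builds on.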